Sample records for ratio lasnr algorithm

  1. DNABIT Compress - Genome compression algorithm.

    PubMed

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-22

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences, particularly for larger genomes. Significantly better compression results show that the "DNABIT Compress" algorithm outperforms the existing compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm also significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (Unique BIT CODE) to fragments of the DNA sequence (Exact Repeats, Reverse Repeats) is also a concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, where the best existing methods could not achieve a ratio below 1.72 bits/base.
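
    The bits/base figures quoted above are easiest to appreciate against the trivial fixed-length baseline: each of the four bases fits in two bits, giving a floor of 2.0 bits/base that any competitive DNA compressor must beat. The Python sketch below shows only this naive 2-bit packing; the repeat-specific unique bit codes that let DNABIT Compress reach 1.58 bits/base are not reproduced here.

    CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

    def pack_2bit(seq: str) -> bytes:
        """Pack a DNA string into 2 bits per base (4 bases per byte)."""
        out = bytearray()
        for i in range(0, len(seq), 4):
            byte = 0
            for base in seq[i:i + 4]:
                byte = (byte << 2) | CODE[base]
            out.append(byte)
        return bytes(out)

    seq = "ACGTACGTAC"
    print(len(pack_2bit(seq)) * 8 / len(seq))  # 2.4 here due to padding; tends to 2.0 bits/base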

  2. DNABIT Compress – Genome compression algorithm

    PubMed Central

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-01

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, “DNABIT Compress”, for DNA sequences based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences, particularly for larger genomes. Significantly better compression results show that the “DNABIT Compress” algorithm outperforms the existing compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm also significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (Unique BIT CODE) to fragments of the DNA sequence (Exact Repeats, Reverse Repeats) is also a concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, where the best existing methods could not achieve a ratio below 1.72 bits/base. PMID:21383923

  3. Quantifying Void Ratio in Granular Materials Using Voronoi Tessellation

    NASA Technical Reports Server (NTRS)

    Alshibli, Khalid A.; El-Saidany, Hany A.; Rose, M. Franklin (Technical Monitor)

    2000-01-01

    A Voronoi technique was used to calculate the local void ratio distribution of granular materials. It was implemented in an application-oriented image processing and analysis algorithm capable of extracting object edges, separating adjacent particles, obtaining the centroid of each particle, generating Voronoi polygons, and calculating the local void ratio. Details of the algorithm's capabilities and features are presented. Verification calculations included manual digitization of synthetic images using Oda's method and the Voronoi polygon system. The developed algorithm yielded very accurate measurements of the local void ratio distribution. Voronoi tessellation has the advantage, compared to Oda's method, of offering a well-defined polygon generation criterion that can be implemented in an algorithm to automatically calculate the local void ratio of particulate materials.
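
    A minimal sketch of the tessellation step, assuming particle centroids and per-particle solid areas have already been extracted by the image-processing stages described above; it uses scipy.spatial.Voronoi and the usual definition of local void ratio as void area over solid area within each bounded cell. The names and the boundary handling are illustrative, not the authors' implementation.

    import numpy as np
    from scipy.spatial import Voronoi

    def polygon_area(pts):
        """Shoelace formula; Voronoi cells are convex, so sorting the
        vertices by angle about the centroid gives a valid ordering."""
        c = pts.mean(axis=0)
        order = np.argsort(np.arctan2(pts[:, 1] - c[1], pts[:, 0] - c[0]))
        x, y = pts[order, 0], pts[order, 1]
        return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

    def local_void_ratios(centroids, particle_areas):
        """e = (cell area - solid area) / solid area, for interior particles
        whose Voronoi cell is bounded; boundary cells are skipped."""
        vor = Voronoi(centroids)
        ratios = {}
        for i, region_idx in enumerate(vor.point_region):
            region = vor.regions[region_idx]
            if len(region) == 0 or -1 in region:   # unbounded cell at the hull
                continue
            cell = polygon_area(vor.vertices[region])
            ratios[i] = (cell - particle_areas[i]) / particle_areas[i]
        return ratios

    rng = np.random.default_rng(0)
    pts = rng.random((200, 2))
    areas = np.full(200, np.pi * 0.02 ** 2)   # e.g., equal disks of radius 0.02
    print(np.mean(list(local_void_ratios(pts, areas).values())))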

  4. Detection of algorithmic trading

    NASA Astrophysics Data System (ADS)

    Bogoev, Dimitar; Karam, Arzé

    2017-10-01

    We develop a new approach to reflect the behavior of algorithmic traders. Specifically, we provide an analytical and tractable way to infer patterns of quote volatility and price momentum consistent with different types of strategies employed by algorithmic traders, and we propose two ratios to quantify these patterns. Quote volatility ratio is based on the rate of oscillation of the best ask and best bid quotes over an extremely short period of time; whereas price momentum ratio is based on identifying patterns of rapid upward or downward movement in prices. The two ratios are evaluated across several asset classes. We further run a two-stage Artificial Neural Network experiment on the quote volatility ratio; the first stage is used to detect the quote volatility patterns resulting from algorithmic activity, while the second is used to validate the quality of signal detection provided by our measure.

  5. Determination of water depth with high-resolution satellite imagery over variable bottom types

    USGS Publications Warehouse

    Stumpf, Richard P.; Holderied, Kristine; Sinclair, Mark

    2003-01-01

    A standard algorithm for determining depth in clear water from passive sensors exists, but it requires tuning of five parameters and does not retrieve depths where the bottom has an extremely low albedo. To address these issues, we developed an empirical solution using a ratio of reflectances that has only two tunable parameters and can be applied to low-albedo features. The two algorithms, the standard linear transform and the new ratio transform, were compared through analysis of IKONOS satellite imagery against lidar bathymetry. The coefficients for the ratio algorithm were tuned manually to a few depths from a nautical chart, yet performed as well as the linear algorithm tuned using multiple linear regression against the lidar. Both algorithms compensate for variable bottom type and albedo (sand, pavement, algae, coral) and retrieve bathymetry in water depths of less than 10-15 m. However, the linear transform does not distinguish depths >15 m and is more subject to variability across the studied atolls. The ratio transform can, in clear water, retrieve depths in >25 m of water and shows greater stability between different areas. It also performs slightly better in scattering turbidity than the linear transform. The ratio algorithm is somewhat noisier and cannot always adequately resolve fine morphology (structures smaller than 4-5 pixels) in water depths >15-20 m. In general, the ratio transform is more robust than the linear transform.
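
    The ratio transform has a compact closed form; the sketch below implements it under the assumption, consistent with Stumpf et al.'s formulation, that depth is linear in the ratio of log-scaled water-leaving reflectances in two bands, with m0 and m1 the two tunable coefficients and n a fixed constant keeping both logarithms positive. The coefficient and reflectance values shown are placeholders, not values from the study.

    import numpy as np

    def ratio_depth(Rw_blue, Rw_green, m0, m1, n=1000.0):
        """Ratio transform: Z = m1 * ln(n * Rw(blue)) / ln(n * Rw(green)) - m0.
        m0, m1 are the two tunable coefficients (e.g., fitted to a few chart
        soundings); n keeps both logs positive over typical reflectances."""
        return m1 * np.log(n * Rw_blue) / np.log(n * Rw_green) - m0

    # Hypothetical reflectances and coefficients, for illustration only.
    print(ratio_depth(np.array([0.012, 0.008]), np.array([0.010, 0.009]),
                      m0=30.0, m1=35.0))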

  6. Statistical methods to enhance reporting of Aboriginal Australians in routine hospital records using data linkage affect estimates of health disparities.

    PubMed

    Randall, Deborah A; Lujic, Sanja; Leyland, Alastair H; Jorm, Louisa R

    2013-10-01

    To investigate under-recording of Aboriginal people in hospital data from New South Wales (NSW), Australia, define algorithms for enhanced reporting, and examine the impact of these algorithms on estimated disparities in cardiovascular and injury outcomes. NSW Admitted Patient Data were linked with NSW mortality data (2001-2007). Associations with recording of Aboriginal status were investigated using multilevel logistic regression. The number of admissions reported as Aboriginal according to six algorithms was compared with the original (unenhanced) Aboriginal status variable. Age-standardised admission ratios and 30- and 365-day mortality ratios were estimated for cardiovascular disease and injury. Sixty per cent of the variation in recording of Aboriginal status was due to the hospital of admission, with poorer recording in private and major city hospitals. All enhancement algorithms increased the number of admissions reported as Aboriginal, by between 4.1% and 37.8%. Admission and mortality ratios varied markedly between algorithms, with less strict algorithms resulting in higher admission rate ratios but generally lower mortality rate ratios, particularly for cardiovascular disease. The choice of enhancement algorithm has an impact on the number of people reported as Aboriginal and on estimated outcome ratios. The influence of the hospital on recording of Aboriginal status highlights the importance of continued efforts to improve data collection. Estimates of Aboriginal health disparity can change depending on how Aboriginal status is reported. Sensitivity analyses using a number of algorithms are recommended. © 2013 The Authors. ANZJPH © 2013 Public Health Association of Australia.

  7. Compressive Sensing of Foot Gait Signals and Its Application for the Estimation of Clinically Relevant Time Series.

    PubMed

    Pant, Jeevan K; Krishnan, Sridhar

    2016-07-01

    A new signal reconstruction algorithm for compressive sensing is proposed, based on the minimization of a pseudonorm that promotes block-sparse structure in the first-order difference of the signal. The optimization involved is carried out using a sequential version of Fletcher-Reeves' conjugate-gradient algorithm, with a line search based on Banach's fixed-point theorem. The algorithm is suitable for the reconstruction of foot gait signals, which admit block-sparse structure in the first-order difference. An additional algorithm for the estimation of stride-interval, swing-interval, and stance-interval time series from the reconstructed foot gait signals is also proposed. This algorithm is based on finding zero-crossing indices of the foot gait signal and using the resulting indices for the computation of the time series. Extensive simulation results demonstrate that the proposed signal reconstruction algorithm yields improved signal-to-noise ratio and requires significantly reduced computational effort relative to several competing algorithms over a wide range of compression ratios. For compression ratios in the range from 88% to 94%, the proposed algorithm offers improved accuracy for the estimation of clinically relevant time-series parameters, namely the mean value, variance, and spectral index of the stride-interval, stance-interval, and swing-interval time series, relative to its nearest competitor. The improvement in performance at compression ratios as high as 94% indicates that the proposed algorithms would be useful for designing compressive sensing-based systems for long-term telemonitoring of human gait signals.

  8. [A new peak detection algorithm of Raman spectra].

    PubMed

    Jiang, Cheng-Zhi; Sun, Qiang; Liu, Ying; Liang, Jing-Qiu; An, Yan; Liu, Bing

    2014-01-01

    The authors propose a new Raman peak recognition method named the bi-scale correlation algorithm. The algorithm uses the combination of the correlation coefficient and the local signal-to-noise ratio under two scales to achieve Raman peak identification. We compared the performance of the proposed algorithm with that of the traditional continuous wavelet transform method in MATLAB, and then tested the algorithm on real Raman spectra. The results show that the average time for identifying a Raman spectrum is 0.51 s with the algorithm, while it is 0.71 s with the continuous wavelet transform. When the signal-to-noise ratio of a Raman peak is greater than or equal to 6 (modern Raman spectrometers feature an excellent signal-to-noise ratio), the recognition accuracy of the algorithm is higher than 99%, while it is less than 84% with the continuous wavelet transform method. The mean and the standard deviation of the peak position identification error of the algorithm are both less than those of the continuous wavelet transform method. Simulation analysis and experimental verification show that the new algorithm has the following advantages: no need for human intervention, no need for de-noising or background-removal operations, higher recognition speed, and higher recognition accuracy. The proposed algorithm is well suited to Raman peak identification.

  9. An Improved Scheduling Algorithm for Data Transmission in Ultrasonic Phased Arrays with Multi-Group Ultrasonic Sensors

    PubMed Central

    Tang, Wenming; Liu, Guixiong; Li, Yuzhong; Tan, Daji

    2017-01-01

    High data transmission efficiency is a key requirement for an ultrasonic phased array with multi-group ultrasonic sensors. Here, a novel FIFO scheduling algorithm is proposed that improves data transmission efficiency in hardware. The algorithm uses FIFOs as caches for the ultrasonic scanning data obtained from the sensors, sharing the output bandwidth among them; on this basis an optimal length ratio for all the FIFOs is derived, allowing read operations to switch among the FIFOs without waiting for time slots. The algorithm thereby enhances the utilization of the read bandwidth and achieves higher efficiency than traditional scheduling algorithms. The reliability and validity of the algorithm were substantiated by an implementation in field-programmable gate array (FPGA) technology, and the bandwidth utilization and the real-time performance of the ultrasonic phased array were enhanced. PMID:29035345

  10. An enhanced VIIRS aerosol optical thickness (AOT) retrieval algorithm over land using a global surface reflectance ratio database

    NASA Astrophysics Data System (ADS)

    Zhang, Hai; Kondragunta, Shobha; Laszlo, Istvan; Liu, Hongqing; Remer, Lorraine A.; Huang, Jingfeng; Superczynski, Stephen; Ciren, Pubu

    2016-09-01

    The Visible/Infrared Imager Radiometer Suite (VIIRS) on board the Suomi National Polar-orbiting Partnership (S-NPP) satellite has been retrieving aerosol optical thickness (AOT), operationally and globally, over ocean and land since shortly after the S-NPP launch in 2011. However, the current operational VIIRS AOT retrieval algorithm over land has two limitations in its assumptions for land surfaces: (1) it only retrieves AOT over dark surfaces and (2) it assumes that the global surface reflectance ratios between VIIRS bands are constant. In this work, we develop a surface reflectance ratio database over land with a spatial resolution of 0.1° × 0.1° using 2 years of VIIRS top-of-atmosphere reflectances. We enhance the current operational VIIRS AOT retrieval algorithm by applying the surface reflectance ratio database in the algorithm. The enhanced algorithm is able to retrieve AOT over both dark and bright surfaces. Over bright surfaces, the VIIRS AOT retrievals from the enhanced algorithm have a correlation of 0.79, mean bias of -0.008, and standard deviation (STD) of error of 0.139 when compared against ground-based observations at the global AERONET (Aerosol Robotic Network) sites. Over dark surfaces, the VIIRS AOT retrievals using the surface reflectance ratio database improve the root-mean-square error from 0.150 to 0.123. The use of the surface reflectance ratio database also increases the data coverage by more than 20% over dark surfaces. The AOT retrievals over bright surfaces are comparable to MODIS Deep Blue AOT retrievals.

  11. Data Compression Techniques for Maps

    DTIC Science & Technology

    1989-01-01

    Lempel-Ziv compression is applied to the classified and unclassified images as well as to the output of the compression algorithms. The algorithms ... resulted in a compression of 7:1. The output of the quadtree coding algorithm was then compressed using Lempel-Ziv coding. The compression ratio achieved ... using Lempel-Ziv coding. The unclassified image gave a compression ratio of only 1.4:1. The K-means classified image

  12. A comparison of select image-compression algorithms for an electronic still camera

    NASA Technical Reports Server (NTRS)

    Nerheim, Rosalee

    1989-01-01

    This effort is a study of image-compression algorithms for an electronic still camera. An electronic still camera can record and transmit high-quality images without the use of film, because images are stored digitally in computer memory. However, high-resolution images contain an enormous amount of information and will strain the camera's data-storage system. Image compression will allow more images to be stored in the camera's memory. For the electronic still camera, a compression algorithm that produces a reconstructed image of high fidelity is most important. Efficiency of the algorithm is the second priority. High fidelity and efficiency are more important than a high compression ratio. Several algorithms were chosen for this study and judged on fidelity, efficiency, and compression ratio. The transform method appears to be the best choice. At present, the method compresses images to a ratio of 5.3:1 and produces high-fidelity reconstructed images.

  13. Hybrid algorithms for fuzzy reverse supply chain network design.

    PubMed

    Che, Z H; Chiang, Tzu-An; Kuo, Y C; Cui, Zhihua

    2014-01-01

    In consideration of capacity constraints, fuzzy defect ratio, and fuzzy transport loss ratio, this paper attempted to establish an optimized decision model for production planning and distribution of a multi-phase, multi-product reverse supply chain, which addresses defects returned to original manufacturers, and in addition to develop hybrid algorithms, namely Particle Swarm Optimization-Genetic Algorithm (PSO-GA), Genetic Algorithm-Simulated Annealing (GA-SA), and Particle Swarm Optimization-Simulated Annealing (PSO-SA), for solving the optimized model. Through a case study of a multi-phase, multi-product reverse supply chain network, this paper demonstrates the suitability of the optimized decision model and the applicability of the algorithms. Finally, the hybrid algorithms showed excellent solving capability when compared with the original GA and PSO methods.

  14. Hybrid Algorithms for Fuzzy Reverse Supply Chain Network Design

    PubMed Central

    Che, Z. H.; Chiang, Tzu-An; Kuo, Y. C.

    2014-01-01

    In consideration of capacity constraints, fuzzy defect ratio, and fuzzy transport loss ratio, this paper attempted to establish an optimized decision model for production planning and distribution of a multi-phase, multi-product reverse supply chain, which addresses defects returned to original manufacturers, and in addition to develop hybrid algorithms, namely Particle Swarm Optimization-Genetic Algorithm (PSO-GA), Genetic Algorithm-Simulated Annealing (GA-SA), and Particle Swarm Optimization-Simulated Annealing (PSO-SA), for solving the optimized model. Through a case study of a multi-phase, multi-product reverse supply chain network, this paper demonstrates the suitability of the optimized decision model and the applicability of the algorithms. Finally, the hybrid algorithms showed excellent solving capability when compared with the original GA and PSO methods. PMID:24892057

  15. A real-time ECG data compression and transmission algorithm for an e-health device.

    PubMed

    Lee, SangJoon; Kim, Jungkuk; Lee, Myoungho

    2011-09-01

    This paper introduces a real-time data compression and transmission algorithm between e-health terminals for a periodic ECG signal. The proposed algorithm consists of five compression procedures and four reconstruction procedures. In order to evaluate the performance of the proposed algorithm, the algorithm was applied to all 48 recordings of the MIT-BIH arrhythmia database, and the compression ratio (CR), percent root mean square difference (PRD), percent root mean square difference normalized (PRDN), rms, SNR, and quality score (QS) values were obtained. The result showed that the CR was 27.9:1 and the PRD was 2.93 on average for all 48 data instances with a 15% window size. In addition, the performance of the algorithm was compared to those of similar algorithms introduced recently by others. It was found that the proposed algorithm showed clearly superior performance in all 48 data instances at a compression ratio lower than 15:1, whereas it showed similar or slightly inferior PRD performance for a data compression ratio higher than 20:1. Given that similarity with the original data becomes meaningless when the PRD is higher than 2, the proposed algorithm shows significantly better performance compared to the performance levels of other algorithms. Moreover, because the algorithm can compress and transmit data in real time, it can serve as an optimal biosignal data transmission method for limited-bandwidth communication between e-health devices.
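
    For reference, the figures of merit quoted in this abstract have standard definitions; a short sketch assuming the conventional formulas for CR, PRD, and PRDN (the paper may normalize slightly differently):

    import numpy as np

    def compression_ratio(bits_original, bits_compressed):
        """CR expressed as 'original : compressed', e.g. 27.9 for 27.9:1."""
        return bits_original / bits_compressed

    def prd(x, x_rec):
        """Percent root-mean-square difference between original and
        reconstructed signals."""
        return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

    def prdn(x, x_rec):
        """Normalized PRD: the baseline (mean) is removed in the denominator."""
        return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2)
                               / np.sum((x - x.mean()) ** 2))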

  16. Simulated tempering based on global balance or detailed balance conditions: Suwa-Todo, heat bath, and Metropolis algorithms.

    PubMed

    Mori, Yoshiharu; Okumura, Hisashi

    2015-12-05

    Simulated tempering (ST) is a useful method to enhance sampling of molecular simulations. When ST is used, the Metropolis algorithm, which satisfies the detailed balance condition, is usually applied to calculate the transition probability. Recently, an alternative method that satisfies the global balance condition instead of the detailed balance condition has been proposed by Suwa and Todo. In this study, an ST method with the Suwa-Todo algorithm is proposed. Molecular dynamics simulations with ST are performed with three algorithms (the Metropolis, heat bath, and Suwa-Todo algorithms) to calculate the transition probability. Among the three algorithms, the Suwa-Todo algorithm yields the highest acceptance ratio and the shortest autocorrelation time. These results suggest that sampling by an ST simulation with the Suwa-Todo algorithm is the most efficient. In addition, because the acceptance ratio of the Suwa-Todo algorithm is higher than that of the Metropolis algorithm, the number of temperature states can be reduced by 25% for the Suwa-Todo algorithm when compared with the Metropolis algorithm. © 2015 Wiley Periodicals, Inc.
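
    A minimal sketch of the detailed-balance rule being compared against: for simulated tempering with joint weight exp(-beta_m * E + g_m), a Metropolis temperature move from state m to m' is accepted with probability min(1, exp(-(beta' - beta) * E + (g' - g))). The Suwa-Todo construction replaces this rule and is not reproduced here.

    import numpy as np

    def st_accept_metropolis(E, beta_old, beta_new, g_old, g_new, rng):
        """Metropolis acceptance for a simulated-tempering temperature move,
        with the ST weight factors g_m chosen so that all temperature states
        are visited with comparable frequency."""
        log_a = -(beta_new - beta_old) * E + (g_new - g_old)
        return np.log(rng.random()) < min(0.0, log_a)

    rng = np.random.default_rng(0)
    print(st_accept_metropolis(E=-100.0, beta_old=1.0, beta_new=0.9,
                               g_old=0.0, g_new=-9.5, rng=rng))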

  17. Evaluation of amplitude-based sorting algorithm to reduce lung tumor blurring in PET images using 4D NCAT phantom.

    PubMed

    Wang, Jiali; Byrne, James; Franquiz, Juan; McGoron, Anthony

    2007-08-01

    PURPOSE: To develop and validate a PET sorting algorithm based on the respiratory amplitude to correct for abnormal respiratory cycles. METHODS: Using the 4D NCAT phantom model, 3D PET images were simulated in lung and other structures at different times within a respiratory cycle, and noise was added. To validate the amplitude binning algorithm, the NCAT phantom was used to simulate one case of five different respiratory periods and another case of five respiratory periods along with five respiratory amplitudes. Comparison was performed for gated and un-gated images, and for the new amplitude binning algorithm against the time binning algorithm, by calculating the mean number of counts in the ROI (region of interest). RESULTS: An average improvement of 8.87 ± 5.10% was reported for a total of 16 tumors with different tumor sizes and different T/B (tumor to background) ratios using the new sorting algorithm. As both the T/B ratio and tumor size decrease, image degradation due to respiration increases. The greater benefit for smaller diameter tumors and lower T/B ratios indicates a potential improvement in detecting more problematic tumors.
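
    The core of amplitude-based sorting is replacing time (phase) bins with bins over the respiratory displacement itself; a minimal sketch using equal-width bins as an assumption (the paper's binning details may differ):

    import numpy as np

    def amplitude_bins(resp_amplitude, n_bins):
        """Assign each frame/event to a bin by respiratory amplitude rather
        than by phase, so irregular cycles with the same displacement share
        a bin."""
        edges = np.linspace(resp_amplitude.min(), resp_amplitude.max(),
                            n_bins + 1)
        return np.clip(np.digitize(resp_amplitude, edges), 1, n_bins) - 1

    t = np.linspace(0.0, 20.0, 1000)
    amp = np.sin(2 * np.pi * t / 5.0) + 0.1 * np.sin(2 * np.pi * t / 17.0)
    print(np.bincount(amplitude_bins(amp, 6)))   # events per amplitude bin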

  18. SeaWiFS Technical Report Series. Volume 29; The SeaWiFS CZCS-Type Pigment Algorithm

    NASA Technical Reports Server (NTRS)

    Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); Aiken, James; Moore, Gerald F.; Trees, Charles C.; Clark, Dennis K.

    1995-01-01

    The Sea-viewing Wide Field-of-view Sensor (SeaWiFS) mission will provide operational ocean color data that will be superior to the previous Coastal Zone Color Sensor (CZCS) proof-of-concept mission. An algorithm is needed that exploits the full functionality of SeaWiFS whilst remaining compatible in concept with algorithms used for the CZCS. This document describes the theoretical rationale of radiance band-ratio methods for determining chlorophyll-a and other important biogeochemical parameters, and their implementation for the SeaWiFS mission. Pigment interrelationships are examined to explain the success of the CZCS algorithms. In the context where chlorophyll-a absorbs only weakly at 520 nm, the success of the 520 nm to 550 nm CZCS band ratio needs to be explained. This is done by showing that in pigment data from a range of oceanic provinces chlorophyll-a (absorbing at less than 490 nm), carotenoids (absorbing at greater than 460 nm), and total pigment are highly correlated. Correlations within pigment groups, particularly photoprotectant and photosynthetic carotenoids, are less robust. The sources of variability in optical data are examined using the NIMBUS Experiment Team (NET) bio-optical data set and bio-optical model. In both the model and NET data, the majority of the variance in the optical data is attributed to variability in pigment (chlorophyll-a) and total particulates, with less than 5% of the variability resulting from pigment assemblage. The relationships between band ratios and chlorophyll are examined analytically, and a new formulation based on a dual hyperbolic model is suggested which gives a better calibration curve than the conventional log-log linear regression fit. The new calibration curve shows that the 490:555 ratio is the best single-band ratio and is the recommended CZCS-type pigment algorithm. Using both the model and NET data, a number of multiband algorithms are developed, the best of which is an algorithm based on the 443:555 and 490:555 ratios. From the model data, the forms of potential algorithms for other products, such as total particulates and dissolved organic matter (DOM), are suggested.
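
    For orientation, the conventional log-log linear band-ratio form that the report's dual hyperbolic calibration is proposed to replace looks like the sketch below; a0 and a1 are fitted placeholders, not coefficients from the report.

    import numpy as np

    def band_ratio_chl(Rrs490, Rrs555, a0=0.3, a1=-2.9):
        """Log-log linear band-ratio form: chl = 10**(a0 + a1 * log10(R)),
        with R the 490:555 reflectance ratio. Placeholder coefficients."""
        return 10.0 ** (a0 + a1 * np.log10(Rrs490 / Rrs555))

    print(band_ratio_chl(np.array([0.004, 0.010]), np.array([0.003, 0.004])))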

  19. Fish to meat intake ratio and cooking oils are associated with hepatitis C virus carriers with persistently normal alanine aminotransferase levels.

    PubMed

    Otsuka, Momoka; Uchida, Yuki; Kawaguchi, Takumi; Taniguchi, Eitaro; Kawaguchi, Atsushi; Kitani, Shingo; Itou, Minoru; Oriishi, Tetsuharu; Kakuma, Tatsuyuki; Tanaka, Suiko; Yagi, Minoru; Sata, Michio

    2012-10-01

    AIM: Dietary habits are involved in the development of chronic inflammation; however, the impact of the dietary profiles of hepatitis C virus carriers with persistently normal alanine aminotransferase levels (HCV-PNALT) remains unclear. The decision-tree algorithm is a data-mining statistical technique which uncovers meaningful profiles of factors from a data collection. We aimed to investigate dietary profiles associated with HCV-PNALT using a decision-tree algorithm. METHODS: Twenty-seven HCV-PNALT and 41 patients with chronic hepatitis C were enrolled in this study. Dietary habit was assessed using a validated semiquantitative food frequency questionnaire. A decision-tree algorithm was created from dietary variables and was evaluated by area under the receiver operating characteristic curve analysis (AUROC). RESULTS: In multivariate analysis, the fish to meat ratio, dairy products and cooking oils were identified as independent variables associated with HCV-PNALT. The decision-tree algorithm was created with two variables: the fish to meat ratio and cooking oils/ideal bodyweight. When subjects showed a fish to meat ratio of 1.24 or more, 68.8% of the subjects were HCV-PNALT. On the other hand, 11.5% of the subjects were HCV-PNALT when subjects showed a fish to meat ratio of less than 1.24 and cooking oil/ideal bodyweight of less than 0.23 g/kg. The difference in the proportion of HCV-PNALT between these groups is significant (odds ratio 16.87, 95% CI 3.40-83.67, P = 0.0005). Fivefold cross-validation of the decision-tree algorithm showed an AUROC of 0.6947 (95% CI 0.5656-0.8238, P = 0.0067). CONCLUSION: The decision-tree algorithm disclosed that the fish to meat ratio and cooking oil/ideal bodyweight were associated with HCV-PNALT. © 2012 The Japan Society of Hepatology.

  20. Compression of next-generation sequencing quality scores using memetic algorithm

    PubMed Central

    2014-01-01

    Background: The exponential growth of next-generation sequencing (NGS) derived DNA data poses great challenges to data storage and transmission. Although many compression algorithms have been proposed for the DNA reads in NGS data, few methods are designed specifically to handle the quality scores. Results: In this paper we present a memetic algorithm (MA) based NGS quality score data compressor, namely MMQSC. The algorithm extracts raw quality score sequences from FASTQ formatted files, and designs a compression codebook using MA-based multimodal optimization. The input data is then compressed in a substitutional manner. Experimental results on five representative NGS data sets show that MMQSC obtains a higher compression ratio than other state-of-the-art methods. In particular, MMQSC is a lossless reference-free compression algorithm, yet obtains an average compression ratio of 22.82% on the experimental data sets. Conclusions: The proposed MMQSC compresses NGS quality score data effectively. It can be utilized to improve the overall compression ratio on FASTQ formatted files. PMID:25474747

  21. Numerical Conformal Mapping Using Cross-Ratios and Delaunay Triangulation

    NASA Technical Reports Server (NTRS)

    Driscoll, Tobin A.; Vavasis, Stephen A.

    1996-01-01

    We propose a new algorithm for computing the Riemann mapping of the unit disk to a polygon, also known as the Schwarz-Christoffel transformation. The new algorithm, CRDT, is based on cross-ratios of the prevertices, and also on cross-ratios of quadrilaterals in a Delaunay triangulation of the polygon. The CRDT algorithm produces an accurate representation of the Riemann mapping even in the presence of arbitrarily long, thin regions in the polygon, unlike any previous conformal mapping algorithm. We believe that CRDT can never fail to converge to the correct Riemann mapping, but the correctness and convergence proof depend on conjectures that we have so far not been able to prove. We demonstrate convergence with computational experiments. The Riemann mapping has applications to problems in two-dimensional potential theory and to finite-difference mesh generation. We use CRDT to produce a mapping and solve a boundary value problem on long, thin regions for which no other algorithm can solve these problems.
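
    The cross-ratio underlying CRDT is the standard Möbius-invariant function of four points; one common convention is sketched below (CRDT's exact ordering of prevertices and quadrilateral vertices may differ).

    def cross_ratio(z1, z2, z3, z4):
        """Cross-ratio of four complex points, one common convention.
        It is invariant under Mobius transformations, which is why it gives
        a numerically stable parametrization of the prevertices even for
        long, thin polygons."""
        return ((z1 - z3) * (z2 - z4)) / ((z2 - z3) * (z1 - z4))

    print(cross_ratio(0, 1, 2, 3 + 0j))   # -> (1.333...+0j)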

  22. Parameter estimates in binary black hole collisions using neural networks

    NASA Astrophysics Data System (ADS)

    Carrillo, M.; Gracia-Linares, M.; González, J. A.; Guzmán, F. S.

    2016-10-01

    We present an algorithm based on artificial neural networks (ANNs) that estimates the mass ratio in a binary black hole collision from given gravitational wave (GW) strains. In this analysis, the ANN is trained on a sample of GW signals generated with numerical simulations. The effectiveness of the algorithm is evaluated on GWs, also generated with simulations, for mass ratios unknown to the ANN. We measure the accuracy of the algorithm in the interpolation and extrapolation regimes. We present results for noise-free signals and signals contaminated with Gaussian noise, in order to assess how the method's accuracy depends on the signal-to-noise ratio.

  23. Peak-to-average power ratio reduction in orthogonal frequency division multiplexing-based visible light communication systems using a modified partial transmit sequence technique

    NASA Astrophysics Data System (ADS)

    Liu, Yan; Deng, Honggui; Ren, Shuang; Tang, Chengying; Qian, Xuewen

    2018-01-01

    We propose an efficient partial transmit sequence technique based on a genetic algorithm and a peak-value optimization algorithm (GAPOA) to reduce the high peak-to-average power ratio (PAPR) in visible light communication systems based on orthogonal frequency division multiplexing (VLC-OFDM). Drawing on an analysis of the hill-climbing algorithm's pros and cons, we propose the POA, with its excellent local search ability, to further process signals whose PAPR is still over the threshold after being processed by the genetic algorithm (GA). To verify the effectiveness of the proposed technique and algorithm, we evaluate the PAPR performance and the bit error rate (BER) performance and compare them with the partial transmit sequence (PTS) technique based on GA (GA-PTS), the PTS technique based on genetic and hill-climbing algorithms (GH-PTS), and PTS based on the shuffled frog leaping algorithm and hill-climbing algorithm (SFLAHC-PTS). The results show that our technique and algorithm have not only better PAPR performance but also lower computational complexity and BER than the GA-PTS, GH-PTS, and SFLAHC-PTS techniques.
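
    The quantity being minimized and the PTS search it drives can be stated compactly; a sketch assuming time-domain sub-blocks produced by per-group IFFTs of one OFDM symbol, with a brute-force phase search standing in for the GA/POA optimizers the paper proposes.

    import numpy as np
    from itertools import product

    def papr_db(x):
        """Peak-to-average power ratio of a time-domain block, in dB."""
        p = np.abs(x) ** 2
        return 10.0 * np.log10(p.max() / p.mean())

    def pts_min_papr(subblocks, phases=(1, -1, 1j, -1j)):
        """Exhaustive PTS search over a small phase alphabet; GA/POA replace
        this brute force when the search space is large."""
        best_val, best_ws = np.inf, None
        for ws in product(phases, repeat=len(subblocks)):
            val = papr_db(sum(w * b for w, b in zip(ws, subblocks)))
            if val < best_val:
                best_val, best_ws = val, ws
        return best_val, best_ws

    rng = np.random.default_rng(0)
    X = rng.normal(size=64) + 1j * rng.normal(size=64)   # one OFDM symbol (freq. domain)
    # four sub-blocks: IFFTs of disjoint groups of 16 sub-carriers each
    blocks = [np.fft.ifft(np.where(np.arange(64) // 16 == g, X, 0)) for g in range(4)]
    print(papr_db(sum(blocks)), pts_min_papr(blocks)[0])   # before vs. after PTS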

  24. Implementation and performance evaluation of acoustic denoising algorithms for UAV

    NASA Astrophysics Data System (ADS)

    Chowdhury, Ahmed Sony Kamal

    Unmanned Aerial Vehicles (UAVs) have become a popular alternative for wildlife monitoring and border surveillance applications. Eliminating the UAV's background noise and effectively classifying the target audio signal are still major challenges. The main goal of this thesis is to remove the UAV's background noise by means of acoustic denoising techniques. Existing denoising algorithms, such as Adaptive Least Mean Square (LMS), Wavelet Denoising, Time-Frequency Block Thresholding, and Wiener Filter, were implemented and their performance evaluated. The denoising algorithms were evaluated on average Signal to Noise Ratio (SNR), Segmental SNR (SSNR), Log Likelihood Ratio (LLR), and Log Spectral Distance (LSD) metrics. To evaluate the effectiveness of the denoising algorithms on the classification of target audio, we implemented Support Vector Machine (SVM) and Naive Bayes classification algorithms. Simulation results demonstrate that the LMS and Discrete Wavelet Transform (DWT) denoising algorithms offered superior performance to the other algorithms. Finally, we implemented the LMS and DWT algorithms on a DSP board for hardware evaluation. Experimental results showed that the LMS algorithm's performance is robust compared to DWT's across various noise types in classifying target audio signals.
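
    Of the algorithms compared, adaptive LMS noise cancellation is the simplest to state; a minimal sketch assuming a noise-only reference channel is available (filter order, step size, and the toy signals are illustrative).

    import numpy as np

    def lms_denoise(primary, reference, order=32, mu=5e-3):
        """Adaptive noise cancellation with the LMS update w += mu * e * x.
        `primary` is target audio + rotor noise; `reference` is a noise-only
        channel. The returned error signal is the denoised audio estimate."""
        w = np.zeros(order)
        out = np.zeros(len(primary))
        for n in range(order, len(primary)):
            x = reference[n - order:n][::-1]   # most recent sample first
            e = primary[n] - w @ x             # error = primary - noise estimate
            w += mu * e * x
            out[n] = e
        return out

    rng = np.random.default_rng(0)
    noise = rng.normal(size=4000)
    target = np.sin(2 * np.pi * 440 * np.arange(4000) / 8000.0)
    primary = target + 0.8 * np.convolve(noise, [0.6, 0.3, 0.1])[:4000]
    print(np.std(primary - target),
          np.std(lms_denoise(primary, noise)[1000:] - target[1000:]))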

  25. Non-heuristic automatic techniques for overcoming low signal-to-noise-ratio bias of localization microscopy and multiple signal classification algorithm.

    PubMed

    Agarwal, Krishna; Macháň, Radek; Prasad, Dilip K

    2018-03-21

    Localization microscopy and the multiple signal classification algorithm use a temporal stack of image frames of sparse emissions from fluorophores to provide super-resolution images. Localization microscopy localizes emissions in each image independently and later collates the localizations in all the frames, giving the same weight to each frame irrespective of its signal-to-noise ratio. This results in a bias towards frames with low signal-to-noise ratio and causes a cluttered background in the super-resolved image. User-defined heuristic computational filters are employed to remove a set of localizations in an attempt to overcome this bias. Multiple signal classification performs eigen-decomposition of the entire stack, irrespective of the relative signal-to-noise ratios of the frames, and uses a threshold to classify eigenimages into signal and null subspaces. This results in under-representation of frames with low signal-to-noise ratio in the signal space and over-representation in the null space. Thus, the multiple signal classification algorithm is biased against frames with low signal-to-noise ratio, resulting in the suppression of the corresponding fluorophores. This paper presents techniques to automatically debias localization microscopy and the multiple signal classification algorithm of these biases without compromising their resolution and without employing heuristic, user-defined criteria. The effect of debiasing is demonstrated through five datasets of in vitro and fixed-cell samples.

  26. Automatic intraaortic balloon pump timing using an intrabeat dicrotic notch prediction algorithm.

    PubMed

    Schreuder, Jan J; Castiglioni, Alessandro; Donelli, Andrea; Maisano, Francesco; Jansen, Jos R C; Hanania, Ramzi; Hanlon, Pat; Bovelander, Jan; Alfieri, Ottavio

    2005-03-01

    The efficacy of intraaortic balloon counterpulsation (IABP) during arrhythmic episodes is questionable. A novel algorithm for intrabeat prediction of the dicrotic notch was used for real-time IABP inflation timing control. A windkessel model algorithm was used to calculate real-time aortic flow from aortic pressure. The dicrotic notch was predicted using a percentage of calculated peak flow. Automatic inflation timing was set at the intrabeat predicted dicrotic notch and was combined with automatic IAB deflation. Prophylactic IABP was applied in 27 patients with low ejection fraction (< 35%) undergoing cardiac surgery. Analysis of IABP at a 1:4 ratio revealed that IAB inflation occurred at a mean of 0.6 ± 5 ms from the dicrotic notch. Accurate automatic timing at a 1:1 assist ratio was achieved in all patients. Seventeen patients had episodes of severe arrhythmia; the novel IABP inflation algorithm accurately assisted 318 of 320 arrhythmic beats at a 1:1 ratio. The novel real-time intrabeat IABP inflation timing algorithm performed accurately in all patients during both regular rhythms and severe arrhythmia, allowing fully automatic intrabeat IABP timing.

  27. Solving a class of generalized fractional programming problems using the feasibility of linear programs.

    PubMed

    Shen, Peiping; Zhang, Tongli; Wang, Chunfeng

    2017-01-01

    This article presents a new approximation algorithm for globally solving a class of generalized fractional programming problems (P) whose objective functions are defined as an appropriate composition of ratios of affine functions. To solve this problem, the algorithm solves an equivalent optimization problem (Q) via an exploration of a suitably defined nonuniform grid. The main work of the algorithm involves checking the feasibility of linear programs associated with the interesting grid points. Based on the computational complexity result, it is proved that the proposed algorithm is a fully polynomial time approximation scheme when the number of ratio terms in the objective function of problem (P) is fixed. In contrast to existing results in the literature, the algorithm does not require assumptions of quasi-concavity or low rank on the objective function of problem (P). Numerical results are given to illustrate the feasibility and effectiveness of the proposed algorithm.

  28. A Novel Fast and Secure Approach for Voice Encryption Based on DNA Computing

    NASA Astrophysics Data System (ADS)

    Kakaei Kate, Hamidreza; Razmara, Jafar; Isazadeh, Ayaz

    2018-06-01

    Today, in the world of information communication, voice information is of particular importance. One way to protect voice data from attacks is voice encryption. Encryption algorithms use various techniques such as hashing, chaos, mixing, and many others. In this paper, an algorithm is proposed for voice encryption based on three different schemes to increase the flexibility and strength of the algorithm. The proposed algorithm uses an innovative encoding scheme, the DNA encryption technique, and a permutation function to provide a secure and fast solution for voice encryption. The algorithm is evaluated on various measures including signal-to-noise ratio, peak signal-to-noise ratio, correlation coefficient, signal similarity, and signal frequency content. The results demonstrate the applicability of the proposed method for secure and fast encryption of voice files.

  29. Image quality enhancement in low-light-level ghost imaging using modified compressive sensing method

    NASA Astrophysics Data System (ADS)

    Shi, Xiaohui; Huang, Xianwei; Nan, Suqin; Li, Hengxing; Bai, Yanfeng; Fu, Xiquan

    2018-04-01

    Detector noise has a significantly negative impact on ghost imaging at low light levels, especially for existing recovery algorithms. Based on the characteristics of the additive detector noise, a method named modified compressive sensing ghost imaging is proposed to reduce the background imposed by the randomly distributed detector noise in the signal path. Experimental results show that, with an appropriate choice of threshold value, the modified compressive sensing ghost imaging algorithm can dramatically enhance the contrast-to-noise ratio of the object reconstruction compared with traditional ghost imaging and compressive sensing ghost imaging methods. The relationship between the contrast-to-noise ratio of the reconstructed image and the intensity ratio (namely, the ratio of average signal intensity to average noise intensity) for the three reconstruction algorithms is also discussed. This noise-suppression imaging technique will have great applications in remote sensing and security areas.

  30. Enhancements to the caliop aerosol subtyping and lidar ratio selection algorithms for level II version 4

    NASA Astrophysics Data System (ADS)

    Omar, A.; Tackett, J.; Kim, M.-H.; Vaughan, M.; Kar, J.; Trepte, C.; Winker, D.

    2018-04-01

    Several enhancements have been implemented for the version 4 aerosol subtyping and lidar ratio selection algorithms of the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP). Version 4 eliminates the confusion between smoke and clean marine aerosols seen in version 3 by modifications to the elevated layer flag definitions used to identify smoke aerosols over the ocean. To differentiate between mixtures of dust and smoke, and dust and marine aerosols, a new aerosol type will be added in the version 4 data products. In the marine boundary layer, moderately depolarizing aerosols are no longer modeled as mixtures of dust and smoke (polluted dust) but rather as mixtures of dust and sea salt (dusty marine). Some lidar ratios have been updated in the version 4 algorithms. In particular, the dust lidar ratios have been adjusted to reflect the latest measurements and model studies.

  31. Automated volumetric evaluation of stereoscopic disc photography

    PubMed Central

    Xu, Juan; Ishikawa, Hiroshi; Wollstein, Gadi; Bilonick, Richard A; Kagemann, Larry; Craig, Jamie E; Mackey, David A; Hewitt, Alex W; Schuman, Joel S

    2010-01-01

    PURPOSE: To develop a fully automated algorithm (AP) to perform a volumetric measure of the optic disc using conventional stereoscopic optic nerve head (ONH) photographs, and to compare algorithm-produced parameters with manual photogrammetry (MP), scanning laser ophthalmoscope (SLO) and optical coherence tomography (OCT) measurements. METHODS: One hundred twenty-two stereoscopic optic disc photographs (61 subjects) were analyzed. Disc area, rim area, cup area, cup/disc area ratio, vertical cup/disc ratio, rim volume and cup volume were automatically computed by the algorithm. Latent variable measurement error models were used to assess measurement reproducibility for the four techniques. RESULTS: AP had better reproducibility for disc area and cup volume and worse reproducibility for cup/disc area ratio and vertical cup/disc ratio, when the measurements were compared to the MP, SLO and OCT methods. CONCLUSION: AP provides a useful technique for an objective quantitative assessment of 3D ONH structures. PMID:20588996

  32. Optimisation of the mean boat velocity in rowing.

    PubMed

    Rauter, G; Baumgartner, L; Denoth, J; Riener, R; Wolf, P

    2012-01-01

    In rowing, motor learning may be facilitated by augmented feedback that displays the ratio between actual mean boat velocity and maximal achievable mean boat velocity. To provide this ratio, the aim of this work was to develop and evaluate an algorithm calculating an individual maximal mean boat velocity. The algorithm optimised the horizontal oar movement under constraints such as the individual range of the horizontal oar displacement, individual timing of catch and release and an individual power-angle relation. Immersion and turning of the oar were simplified, and the seat movement of a professional rower was implemented. The feasibility of the algorithm, and of the associated ratio between actual boat velocity and optimised boat velocity, was confirmed by a study on four subjects: as expected, advanced rowing skills resulted in higher ratios, and the maximal mean boat velocity depended on the range of the horizontal oar displacement.

  33. SeaWiFS Technical Report Series. Volume 29: SeaWiFS CZCS-type pigment algorithm

    NASA Technical Reports Server (NTRS)

    Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); Aiken, James; Moore, Gerald F.; Trees, Charles C.; Clark, Dennis K.

    1995-01-01

    The Sea-viewing Wide Field-of-view Sensor (SeaWiFS) mission will provide operational ocean color data that will be superior to the previous Coastal Zone Color Sensor (CZCS) proof-of-concept mission. An algorithm is needed that exploits the full functionality of SeaWiFS whilst remaining compatible in concept with algorithms used for the CZCS. This document describes the theoretical rationale of radiance band-ratio methods for determining chlorophyll-a and other important biogeochemical parameters, and their implementation for the SeaWiFS mission. Pigment interrelationships are examined to explain the success of the CZCS algorithms. In the context where chlorophyll-a absorbs only weakly at 520 nm, the success of the 520 nm to 550 nm CZCS band ratio needs to be explained. This is done by showing that in pigment data from a range of oceanic provinces chlorophyll-a (absorbing at less than 490 nm), carotenoids (absorbing at greater than 460 nm), and total pigment are highly correlated. Correlations within pigment groups, particularly photoprotectant and photosynthetic carotenoids, are less robust. The sources of variability in optical data are examined using the NIMBUS Experiment Team (NET) bio-optical data set and bio-optical model. In both the model and NET data, the majority of the variance in the optical data is attributed to variability in pigment (chlorophyll-a) and total particulates, with less than 5% of the variability resulting from pigment assemblage. The relationships between band ratios and chlorophyll are examined analytically, and a new formulation based on a dual hyperbolic model is suggested which gives a better calibration curve than the conventional log-log linear regression fit. The new calibration curve shows that the 490:555 ratio is the best single-band ratio and is the recommended CZCS-type pigment algorithm. Using both the model and NET data, a number of multiband algorithms are developed, the best of which is an algorithm based on the 443:555 and 490:555 ratios. From the model data, the forms of potential algorithms for other products, such as total particulates and dissolved organic matter (DOM), are suggested.

  34. Algorithm for astronomical, point source, signal to noise ratio calculations

    NASA Technical Reports Server (NTRS)

    Jayroe, R. R.; Schroeder, D. J.

    1984-01-01

    An algorithm was developed to simulate the expected signal-to-noise ratios as a function of observation time in the charge-coupled device detector plane of an optical telescope located outside the Earth's atmosphere for a signal star, and an optional secondary star, embedded in a uniform cosmic background. By choosing the appropriate input values, the expected point-source signal-to-noise ratio can be computed for the Hubble Space Telescope using the Wide Field/Planetary Camera science instrument.
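
    The standard CCD point-source signal-to-noise equation at the heart of such a simulator, in generic notation (the report's parameterization for the Wide Field/Planetary Camera will differ in detail):

    import numpy as np

    def ccd_point_source_snr(S, B, D, R, t, npix):
        """SNR = S*t / sqrt(S*t + npix*(B*t + D*t + R**2)): S is the source
        rate (e-/s), B and D are sky and dark rates per pixel, R is read
        noise per pixel (e- rms), t the exposure time, npix the number of
        pixels in the photometric aperture. Symbols are generic assumptions,
        not the report's notation."""
        return S * t / np.sqrt(S * t + npix * (B * t + D * t + R ** 2))

    print(ccd_point_source_snr(S=50.0, B=2.0, D=0.1, R=5.0,
                               t=np.array([10.0, 100.0, 1000.0]), npix=25))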

  35. A modern robust approach to remotely estimate chlorophyll in coastal and inland zones

    NASA Astrophysics Data System (ADS)

    Shanmugam, Palanisamy; He, Xianqiang; Singh, Rakesh Kumar; Varunan, Theenathayalan

    2018-05-01

    The chlorophyll concentration of a water body is an important proxy for phytoplankton biomass. Its estimation from multi- or hyperspectral remote sensing data in natural waters is generally achieved by (i) waveband ratioing of two or more bands in the blue-green or (ii) using a combination of the radiance peak position and magnitude in the red-near-infrared (NIR) spectrum. The blue-green ratio algorithms have been extensively used with satellite ocean color data to investigate chlorophyll distributions in open-ocean and clear waters, and the application of red-NIR algorithms is often restricted to turbid productive water bodies. These issues present the greatest obstacles to our ability to formulate a modern robust method suitable for quantitative assessments of the chlorophyll concentration in a diverse range of water types. The present study investigates the normalized water-leaving radiance spectra in the visible and NIR region and proposes a robust algorithm (Generalized ABI, or GABI) for chlorophyll concentration retrieval based on the Algal Bloom Index (ABI), which separates phytoplankton signals from other constituents in the water column. The GABI algorithm is validated using independent in-situ data from various regional to global waters and its performance is further evaluated by comparison with the blue-green waveband-ratio and red-NIR algorithms. The results revealed that GABI yields significantly more accurate chlorophyll concentrations (with uncertainties less than 13.5%) and remains more stable across different water types than the blue-green waveband-ratio and red-NIR algorithms. The performance of GABI is further demonstrated using HICO images from nearshore turbid productive waters and MERIS and MODIS-Aqua images from coastal and offshore waters of the Arabian Sea, Bay of Bengal and East China Sea.

  36. A comparison of the fractal and JPEG algorithms

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.; Shahshahani, M.

    1991-01-01

    A proprietary fractal image compression algorithm and the Joint Photographic Experts Group (JPEG) industry-standard algorithm for image compression are compared. In every case, the JPEG algorithm was superior to the fractal method at a given compression ratio according to a root-mean-square criterion and a peak signal-to-noise criterion.

  37. Seasonal and regional differentiation of bio-optical properties within the north polar Atlantic

    NASA Astrophysics Data System (ADS)

    Stramska, Malgorzata; Stramski, Dariusz; Kaczmarek, Sławomir; Allison, David B.; Schwarz, Jill

    2006-08-01

    Using field data from the north polar Atlantic, we examined seasonal variability of the spectral absorption, a(λ), and backscattering, bb(λ), coefficients of surface waters in relation to phytoplankton pigments. For a given chlorophyll a concentration, the concentrations of accessory pigments were lower in spring than in summer. This effect contributed to lower chlorophyll-specific absorption of phytoplankton and total particulate matter in spring. The spring values of the green-to-blue band ratio of a(λ) were higher than the summer ratios. The blue-to-green ratios of bb(λ) were also higher in spring. The higher bb values and lower blue-to-green bb ratios in summer were likely associated with higher concentrations of detrital particles in summer compared to spring. Because the product of these band ratios of a and bb is a proxy for the blue-to-green ratio of remote-sensing reflectance, the performance of ocean color band-ratio algorithms for estimating pigments is significantly affected by seasonal shifts in the relationships between absorption, backscattering, and chlorophyll a. Our results suggest that the algorithm for the spring season would predict chlorophyll a that is higher by as much as a factor of 4-6 compared to that predicted from the summer algorithm. This indicates a need for a seasonal approach in the north polar Atlantic. However, we also found that a fairly good estimate of the particulate beam attenuation coefficient at 660 nm (a proxy for total particulate matter or particulate organic carbon concentration) can be obtained by applying a single blue-to-green band-ratio algorithm regardless of the season.
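
    The band-ratio proxy invoked in this abstract follows from Rrs(λ) being approximately proportional to bb(λ)/a(λ); a one-line statement of that factorization, under the stated approximation (symbols and values are illustrative):

    def rrs_ratio_proxy(a_blue, a_green, bb_blue, bb_green):
        """With Rrs(lam) ~ bb(lam)/a(lam), the blue:green reflectance ratio
        used by band-ratio chlorophyll algorithms factors into the two
        measured ratios discussed in the text:
        Rrs(blue)/Rrs(green) ~ (bb_blue/bb_green) * (a_green/a_blue)."""
        return (bb_blue / bb_green) * (a_green / a_blue)

    print(rrs_ratio_proxy(a_blue=0.05, a_green=0.08, bb_blue=0.004, bb_green=0.003))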

  38. Receiver Diversity Combining Using Evolutionary Algorithms in Rayleigh Fading Channel

    PubMed Central

    Akbari, Mohsen; Manesh, Mohsen Riahi

    2014-01-01

    In diversity combining at the receiver, the output signal-to-noise ratio (SNR) is often maximized by using maximal ratio combining (MRC), provided that the channel is perfectly estimated at the receiver. However, channel estimation is rarely perfect in practice, which results in deteriorated system performance. In this paper, an imperialistic competitive algorithm (ICA) is proposed and compared with two other evolutionary algorithms, namely, particle swarm optimization (PSO) and genetic algorithm (GA), for diversity combining of signals travelling across imperfect channels. The proposed algorithm adjusts the combiner weights of the received signal components in such a way as to maximize the SNR and minimize the bit error rate (BER). The results indicate that the proposed method eliminates the need for channel estimation and can outperform the conventional diversity combining methods. PMID:25045725
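
    The benchmark the evolutionary combiners are measured against is classical MRC; a minimal sketch under perfect channel knowledge, the case the paper relaxes (the toy BPSK experiment is illustrative):

    import numpy as np

    def mrc_combine(r, h, noise_var):
        """Maximal ratio combining: weight branch i by conj(h_i)/noise power;
        with perfect channel estimates the output SNR is the sum of the
        branch SNRs."""
        w = np.conj(h) / noise_var
        return (w * r).sum(axis=0)

    rng = np.random.default_rng(0)
    h = (rng.normal(size=(4, 1)) + 1j * rng.normal(size=(4, 1))) / np.sqrt(2)  # Rayleigh branches
    s = np.sign(rng.normal(size=100))                                          # BPSK symbols
    r = h * s + 0.1 * (rng.normal(size=(4, 100)) + 1j * rng.normal(size=(4, 100)))
    print(np.mean(np.sign(mrc_combine(r, h, noise_var=0.02).real) == s))       # symbol accuracy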

  39. GPU-accelerated phase extraction algorithm for interferograms: a real-time application

    NASA Astrophysics Data System (ADS)

    Zhu, Xiaoqiang; Wu, Yongqian; Liu, Fengwei

    2016-11-01

    Optical testing, being non-destructive and highly sensitive, provides vital guidance for optical manufacturing. But the testing process is often computationally intensive and expensive, usually taking up to a few seconds, which is unacceptable for dynamic testing. In this paper, a GPU-accelerated phase extraction algorithm is proposed, based on the advanced iterative algorithm. The accelerated algorithm can extract the correct phase distribution from thirteen 1024x1024 fringe patterns with arbitrary phase shifts in 233 milliseconds on average using an NVIDIA Quadro 4000 graphics card, a 12.7x speedup over the same algorithm executed on a CPU and a 6.6x speedup over a Matlab implementation on a DWANING W5801 workstation. The performance improvement can fulfill the demands of computational accuracy and real-time application.

  40. Comparison of evolutionary algorithms for LPDA antenna optimization

    NASA Astrophysics Data System (ADS)

    Lazaridis, Pavlos I.; Tziris, Emmanouil N.; Zaharis, Zaharias D.; Xenos, Thomas D.; Cosmas, John P.; Gallion, Philippe B.; Holmes, Violeta; Glover, Ian A.

    2016-08-01

    A novel approach to broadband log-periodic antenna design is presented, where some of the most powerful evolutionary algorithms are applied and compared for the optimal design of wire log-periodic dipole arrays (LPDA) using Numerical Electromagnetics Code. The target is to achieve an optimal antenna design with respect to maximum gain, gain flatness, front-to-rear ratio (F/R) and standing wave ratio. The parameters of the LPDA optimized are the dipole lengths, the spacing between the dipoles, and the dipole wire diameters. The evolutionary algorithms compared are the Differential Evolution (DE), Particle Swarm (PSO), Taguchi, Invasive Weed (IWO), and Adaptive Invasive Weed Optimization (ADIWO). Superior performance is achieved by the IWO (best results) and PSO (fast convergence) algorithms.

  41. A lossless hybrid wavelet-fractal compression for welding radiographic images.

    PubMed

    Mekhalfa, Faiza; Avanaki, Mohammad R N; Berkani, Daoud

    2016-01-01

    In this work a lossless wavelet-fractal image coder is proposed. The process starts by compressing and decompressing the original image using wavelet transformation and a fractal coding algorithm. The decompressed image is subtracted from the original one to obtain a residual image, which is coded using the Huffman algorithm. Simulation results show that with the proposed scheme we achieve an infinite peak signal-to-noise ratio (PSNR) with a higher compression ratio compared to typical lossless methods. Moreover, the use of the wavelet transform speeds up the fractal compression algorithm by reducing the size of the domain pool. The compression results of several welding radiographic images using the proposed scheme are evaluated quantitatively and compared with the results of the Huffman coding algorithm.

  42. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Chao; Pouransari, Hadi; Rajamanickam, Sivasankaran

    We present a parallel hierarchical solver for general sparse linear systems on distributed-memory machines. For large-scale problems, this fully algebraic algorithm is faster and more memory-efficient than sparse direct solvers because it exploits the low-rank structure of fill-in blocks. Depending on the accuracy of low-rank approximations, the hierarchical solver can be used either as a direct solver or as a preconditioner. The parallel algorithm is based on data decomposition and requires only local communication for updating boundary data on every processor. Moreover, the computation-to-communication ratio of the parallel algorithm is approximately the volume-to-surface-area ratio of the subdomain owned by every processor. We also provide various numerical results to demonstrate the versatility and scalability of the parallel algorithm.

  3. Using the Chandra Source-Finding Algorithm to Automatically Identify Solar X-ray Bright Points

    NASA Technical Reports Server (NTRS)

    Adams, Mitzi L.; Tennant, A.; Cirtain, J. M.

    2009-01-01

    This poster details a technique of bright point identification that is used to find sources in Chandra X-ray data. The algorithm, part of a program called LEXTRCT, searches for regions of a given size that are above a minimum signal-to-noise ratio. The algorithm allows selected pixels to be excluded from the source-finding, thus allowing exclusion of saturated pixels (from flares and/or active regions). For Chandra data the noise is determined by photon-counting statistics, whereas solar telescopes typically integrate a flux. Thus the calculated signal-to-noise ratio is incorrect, but we find we can scale the number to get reasonable results. For example, Nakakubo and Hara (1998) find 297 bright points in a September 11, 1996 Yohkoh image; with judicious selection of the signal-to-noise ratio, our algorithm finds 300 sources. To further assess the efficacy of the algorithm, we analyze a SOHO/EIT image (195 Angstroms) and compare results with those published in the literature (McIntosh and Gurman, 2005). Finally, we analyze three sets of data from Hinode, representing different parts of the decline to minimum of the solar cycle.
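
    One plausible reading of the windowed S/N test with saturated-pixel exclusion is sketched below; the window size, background estimate and threshold are illustrative choices, not the LEXTRCT defaults.

        import numpy as np

        def find_bright_points(img, size=5, snr_min=4.0, mask=None):
            # mask=True pixels (e.g. saturated flare pixels) are excluded
            if mask is None:
                mask = np.zeros(img.shape, dtype=bool)
            bg = np.median(img[~mask])          # crude background level
            hits = []
            for r in range(img.shape[0] - size + 1):
                for c in range(img.shape[1] - size + 1):
                    if mask[r:r + size, c:c + size].any():
                        continue
                    signal = img[r:r + size, c:c + size].sum() - size * size * bg
                    noise = np.sqrt(size * size * bg)   # photon-counting noise
                    if noise > 0 and signal / noise >= snr_min:
                        hits.append((r, c))
            return hits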

  4. Improved liver R2* mapping by pixel-wise curve fitting with adaptive neighborhood regularization.

    PubMed

    Wang, Changqing; Zhang, Xinyuan; Liu, Xiaoyun; He, Taigang; Chen, Wufan; Feng, Qianjin; Feng, Yanqiu

    2018-08-01

    To improve liver R2* mapping by incorporating adaptive neighborhood regularization into pixel-wise curve fitting. Magnetic resonance imaging R2* mapping remains challenging because the serial images have a low signal-to-noise ratio. In this study, we proposed to exploit the neighboring pixels as regularization terms and adaptively determine the regularization parameters according to the interpixel signal similarity. The proposed algorithm, called pixel-wise curve fitting with adaptive neighborhood regularization (PCANR), was compared with the conventional nonlinear least squares (NLS) and nonlocal means filter-based NLS algorithms on simulated, phantom, and in vivo data. Visually, the PCANR algorithm generates R2* maps with significantly reduced noise and well-preserved tiny structures. Quantitatively, the PCANR algorithm produces R2* maps with lower root mean square errors at varying R2* values and signal-to-noise-ratio levels compared with the NLS and nonlocal means filter-based NLS algorithms. For high R2* values under low signal-to-noise-ratio levels, the PCANR algorithm outperforms the NLS and nonlocal means filter-based NLS algorithms in accuracy and precision, in terms of the mean and standard deviation of R2* measurements in selected regions of interest, respectively. The PCANR algorithm can reduce the effect of noise on liver R2* mapping, and the improved measurement precision will benefit the assessment of hepatic iron in clinical practice. Magn Reson Med 80:792-801, 2018. © 2018 International Society for Magnetic Resonance in Medicine.
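
    For orientation, the baseline that PCANR improves on is the pixel-wise mono-exponential NLS fit; a minimal SciPy version is below. PCANR augments this per-pixel cost with similarity-weighted neighbor terms, which the sketch omits.

        import numpy as np
        from scipy.optimize import curve_fit

        def r2star_pixel(te, signal, p0=(1000.0, 0.05)):
            # S(TE) = S0 * exp(-R2* x TE); te in ms gives R2* in 1/ms
            model = lambda t, s0, r2s: s0 * np.exp(-r2s * t)
            (s0, r2s), _ = curve_fit(model, te, signal, p0=p0, maxfev=5000)
            return r2s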

  5. Near-lossless multichannel EEG compression based on matrix and tensor decompositions.

    PubMed

    Dauwels, Justin; Srinivasan, K; Reddy, M Ramasubba; Cichocki, Andrzej

    2013-05-01

    A novel near-lossless compression algorithm for multichannel electroencephalogram (MC-EEG) is proposed based on matrix/tensor decomposition models. MC-EEG is represented in suitable multiway (multidimensional) forms to efficiently exploit temporal and spatial correlations simultaneously. Several matrix/tensor decomposition models are analyzed in view of efficient decorrelation of the multiway forms of MC-EEG. A compression algorithm is built based on the principle of “lossy plus residual coding,” consisting of a matrix/tensor decomposition-based coder in the lossy layer followed by arithmetic coding in the residual layer. This approach guarantees a specifiable maximum absolute error between original and reconstructed signals. The compression algorithm is applied to three different scalp EEG datasets and an intracranial EEG dataset, each with different sampling rate and resolution. The proposed algorithm achieves attractive compression ratios compared to compressing individual channels separately. For similar compression ratios, the proposed algorithm achieves nearly fivefold lower average error compared to a similar wavelet-based volumetric MC-EEG compression algorithm.

  6. Application of the EM algorithm to radiographic images.

    PubMed

    Brailean, J C; Little, D; Giger, M L; Chen, C T; Sullivan, B J

    1992-01-01

    The expectation maximization (EM) algorithm has received considerable attention in the area of positron emission tomography (PET) as a restoration and reconstruction technique. In this paper, the restoration capabilities of the EM algorithm when applied to radiographic images are investigated. This application does not involve reconstruction. The performance of the EM algorithm is quantitatively evaluated using a "perceived" signal-to-noise ratio (SNR) as the image quality metric. This perceived SNR is based on statistical decision theory and includes both the observer's visual response function and a noise component internal to the eye-brain system. For a variety of processing parameters, the relative SNR (ratio of the processed SNR to the original SNR) is calculated and used as a metric to compare quantitatively the effects of the EM algorithm with two other image enhancement techniques: global contrast enhancement (windowing) and unsharp mask filtering. The results suggest that the EM algorithm's performance is superior to unsharp mask filtering and global contrast enhancement for radiographic images which contain objects smaller than 4 mm.
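
    In the restoration (no-reconstruction) setting, the EM iteration takes the familiar Richardson-Lucy multiplicative form; a minimal sketch, assuming a known blur PSF, is given below.

        import numpy as np
        from scipy.signal import fftconvolve

        def em_restore(observed, psf, iters=30, eps=1e-12):
            est = np.full(observed.shape, observed.mean(), dtype=float)
            psf_mirror = psf[::-1, ::-1]
            for _ in range(iters):
                blur = fftconvolve(est, psf, mode="same")
                ratio = observed / np.maximum(blur, eps)   # data/model mismatch
                est *= fftconvolve(ratio, psf_mirror, mode="same")
            return est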

  7. Maximum likelihood estimation of signal-to-noise ratio and combiner weight

    NASA Technical Reports Server (NTRS)

    Kalson, S.; Dolinar, S. J.

    1986-01-01

    An algorithm for estimating the signal-to-noise ratio and combiner weight parameters for a discrete time series is presented. The algorithm is based upon the joint maximum likelihood estimate of the signal and noise power. The discrete-time series are the sufficient statistics obtained after matched filtering of a biphase-modulated signal in additive white Gaussian noise, before maximum likelihood decoding is performed.
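
    A simplified moment-style version of the idea (not the exact joint ML solution the report derives) is easy to state: for matched-filter outputs y_k = ±m + n_k, the magnitude of the data estimates the signal level and the residual spread estimates the noise power.

        import numpy as np

        def snr_and_weight(y):
            m = np.mean(np.abs(y))            # signal amplitude estimate (biased at low SNR)
            sigma2 = np.mean(y**2) - m**2     # noise power estimate
            return m**2 / sigma2, m / sigma2  # SNR estimate, combiner weight ~ amplitude/noise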

  8. Nuclear IHC enumeration: A digital phantom to evaluate the performance of automated algorithms in digital pathology.

    PubMed

    Niazi, Muhammad Khalid Khan; Abas, Fazly Salleh; Senaras, Caglar; Pennell, Michael; Sahiner, Berkman; Chen, Weijie; Opfer, John; Hasserjian, Robert; Louissaint, Abner; Shana'ah, Arwa; Lozanski, Gerard; Gurcan, Metin N

    2018-01-01

    Automatic and accurate detection of positive and negative nuclei from images of immunostained tissue biopsies is critical to the success of digital pathology. The evaluation of most nuclei detection algorithms relies on manually generated ground truth prepared by pathologists, which is unfortunately time-consuming and suffers from inter-pathologist variability. In this work, we developed a digital immunohistochemistry (IHC) phantom that can be used for evaluating computer algorithms for enumeration of IHC-positive cells. Our phantom development consists of two main steps: 1) extraction of individual nuclei as well as nuclei clumps of both positive and negative nuclei from real whole-slide images, and 2) systematic placement of the extracted nuclei and clumps on an image canvas. The resulting images are visually similar to the original tissue images. We created a set of 42 images with different concentrations of positive and negative nuclei. These images were evaluated by four board-certified pathologists in the task of estimating the ratio of positive to total number of nuclei. The resulting concordance correlation coefficients (CCC) between the pathologists and the true ratio range from 0.86 to 0.95 (point estimates). The same ratio was also computed by an automated computer algorithm, which yielded a CCC value of 0.99. Reading the phantom data with known ground truth, the human readers show substantial variability and lower average performance than the computer algorithm in terms of CCC. This shows the limitation of using a human reader panel to establish a reference standard for the evaluation of computer algorithms, thereby highlighting the usefulness of the phantom developed in this work. Using our phantom images, we further developed a function that can approximate the true ratio from the areas of the positive and negative nuclei, hence avoiding the need to detect individual nuclei. The predicted ratios of 10 held-out images using the function (trained on 32 images) are within ±2.68% of the true ratio. Moreover, we also report the evaluation of a computerized image analysis method on the synthetic tissue dataset.

  9. High-order hydrodynamic algorithms for exascale computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morgan, Nathaniel Ray

    Hydrodynamic algorithms are at the core of many laboratory missions ranging from simulating ICF implosions to climate modeling. The hydrodynamic algorithms commonly employed at the laboratory and in industry (1) typically lack requisite accuracy for complex multi-material vortical flows and (2) are not well suited for exascale computing due to poor data locality and poor FLOP/memory ratios. Exascale computing requires advances in both computer science and numerical algorithms. We propose to research the second requirement and create a new high-order hydrodynamic algorithm that has superior accuracy, excellent data locality, and excellent FLOP/memory ratios. This proposal will impact a broad range of research areas including numerical theory, discrete mathematics, vorticity evolution, gas dynamics, interface instability evolution, turbulent flows, fluid dynamics and shock driven flows. If successful, the proposed research has the potential to radically transform simulation capabilities and help position the laboratory for computing at the exascale.

  10. Hybrid genetic algorithm in the Hopfield network for maximum 2-satisfiability problem

    NASA Astrophysics Data System (ADS)

    Kasihmuddin, Mohd Shareduwan Mohd; Sathasivam, Saratha; Mansor, Mohd. Asyraf

    2017-08-01

    Heuristic methods are designed to find optimal solutions more quickly than classical methods, which can be too complex to comprehend. In this study, a hybrid approach that utilizes the Hopfield network and a genetic algorithm for solving the maximum 2-satisfiability problem (MAX-2SAT) was proposed. The Hopfield neural network was used to minimize logical inconsistency in interpretations of logic clauses or programs. The genetic algorithm (GA) has pioneered the implementation of methods that exploit the idea of combination to reproduce better solutions. Simulations with and without the genetic algorithm were examined using Microsoft Visual 2013 C++ Express software. The performance of both searching techniques on MAX-2SAT was evaluated based on the global minima ratio, the ratio of satisfied clauses, and computation time. The results obtained from the computer simulation demonstrate the effectiveness and acceleration features of the genetic algorithm for MAX-2SAT in the Hopfield network.

  11. Improvement of the cost-benefit analysis algorithm for high-rise construction projects

    NASA Astrophysics Data System (ADS)

    Gafurov, Andrey; Skotarenko, Oksana; Plotnikov, Vladimir

    2018-03-01

    The specific nature of high-rise investment projects entailing long-term construction, high risks, etc. implies a need to improve the standard algorithm of cost-benefit analysis. An improved algorithm is described in the article. For development of the improved algorithm of cost-benefit analysis for high-rise construction projects, the following methods were used: weighted average cost of capital, dynamic cost-benefit analysis of investment projects, risk mapping, scenario analysis, sensitivity analysis of critical ratios, etc. This comprehensive approach helped to adapt the original algorithm to feasibility objectives in high-rise construction. The authors put together the algorithm of cost-benefit analysis for high-rise construction projects on the basis of risk mapping and sensitivity analysis of critical ratios. The suggested project risk management algorithms greatly expand the standard algorithm of cost-benefit analysis in investment projects, namely: the "Project analysis scenario" flowchart, improving quality and reliability of forecasting reports in investment projects; the main stages of cash flow adjustment based on risk mapping for better cost-benefit project analysis provided the broad range of risks in high-rise construction; analysis of dynamic cost-benefit values considering project sensitivity to crucial variables, improving flexibility in implementation of high-rise projects.

  12. A Space-Saving Approximation Algorithm for Grammar-Based Compression

    NASA Astrophysics Data System (ADS)

    Sakamoto, Hiroshi; Maruyama, Shirou; Kida, Takuya; Shimozono, Shinichi

    A space-efficient approximation algorithm for the grammar-based compression problem, which asks, for a given string, to find a smallest context-free grammar deriving the string, is presented. For input length n and an optimum CFG size g, the algorithm consumes only O(g log g) space and O(n log* n) time to achieve an O((log* n) log n) approximation ratio to the optimum compression, where log* n is the maximum number of iterated logarithms satisfying log log ... log n > 1. This ratio is thus regarded as almost O(log n), which is the currently best approximation ratio. While g depends on the string, it is known that g = Ω(log n) and g = O(n/log_k n) for strings over a k-letter alphabet [12].

  13. Multi-color space threshold segmentation and self-learning k-NN algorithm for surge test EUT status identification

    NASA Astrophysics Data System (ADS)

    Huang, Jian; Liu, Gui-xiong

    2016-09-01

    The identification of targets varies in different surge tests. A multi-color space threshold segmentation and self-learning k-nearest neighbor (k-NN) algorithm for equipment-under-test status identification was proposed, since identifying equipment status by feature matching required training new patterns before every test. First, the color space (L*a*b*, hue saturation lightness (HSL), or hue saturation value (HSV)) used for segmentation was selected according to the ratios of high-luminance and white-luminance points in the image. Second, an unknown-class sample S_r was classified by the k-NN algorithm with training set T_z according to its feature vector, formed from the number of pixels, eccentricity ratio, compactness ratio, and Euler number. Last, when the classification confidence coefficient equaled k, S_r was added as a sample of the pre-training set T_z'. The training set T_z was enlarged to T_{z+1} by merging T_z' once T_z' was saturated. On nine series of illuminant, indicator light, screen, and disturbance samples (21600 frames in total), the algorithm achieved 98.65% identification accuracy and enlarged the training set from T_0 to T_5 by itself using five selected groups of samples.
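
    The classification step, including the confidence test that drives the self-learning, can be sketched directly; the feature vector layout follows the abstract, while the data itself is assumed.

        import numpy as np

        def knn_self_learning(train_X, train_y, sample, k=5):
            # Features: pixel count, eccentricity ratio, compactness ratio, Euler number
            d = np.linalg.norm(train_X - sample, axis=1)
            votes = train_y[np.argsort(d)[:k]]
            labels, counts = np.unique(votes, return_counts=True)
            label = labels[np.argmax(counts)]
            confident = counts.max() == k   # all k neighbors agree
            return label, confident         # confident samples can join the pre-training set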

  14. A social activity and physical contact-based routing algorithm in mobile opportunistic networks for emergency response to sudden disasters

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoming; Lin, Yaguang; Zhang, Shanshan; Cai, Zhipeng

    2017-05-01

    Sudden disasters such as earthquakes, floods and hurricanes necessitate the employment of communication networks to carry out emergency response activities. Routing has a significant impact on the functionality, performance and flexibility of communication networks. In this article, the routing problem is studied considering the delivery ratio of messages, the overhead ratio of messages and the average delay of messages in mobile opportunistic networks (MONs) for enterprise-level emergency response communications in sudden disaster scenarios. Unlike traditional routing methods for MONs, this article presents a new two-stage spreading and forwarding dynamic routing algorithm based on the proposed social activity degree and physical contact factor for mobile customers. A new modelling method for describing the dynamically evolving topology of a MON is first proposed. Then a multi-copy spreading strategy based on the social activity degree of nodes and a single-copy forwarding strategy based on the physical contact factor between nodes are designed. Compared with the most relevant routing algorithms, such as Epidemic, Prophet, Labelled-sim, Dlife-comm and Distribute-sim, the proposed routing algorithm can significantly increase the delivery ratio of messages and decrease the overhead ratio and average delay of messages.

  15. The use of a MODIS band-ratio algorithm versus a new hybrid approach for estimating colored dissolved organic matter (CDOM)

    EPA Science Inventory

    Satellite remote sensing offers synoptic and frequent monitoring of optical water quality parameters, such as chlorophyll-a, turbidity, and colored dissolved organic matter (CDOM). While traditional satellite algorithms were developed for the open ocean, these algorithms often do...

  16. Instrument-induced spatial crosstalk deconvolution algorithm

    NASA Technical Reports Server (NTRS)

    Wright, Valerie G.; Evans, Nathan L., Jr.

    1986-01-01

    An algorithm has been developed which reduces the effects of (deconvolves) instrument-induced spatial crosstalk in satellite image data by several orders of magnitude where highly precise radiometry is required. The algorithm is based upon radiance transfer ratios, which are defined as the fractional bilateral exchange of energy between pixels A and B.

  17. The evaluation of the OSGLR algorithm for restructurable controls

    NASA Technical Reports Server (NTRS)

    Bonnice, W. F.; Wagner, E.; Hall, S. R.; Motyka, P.

    1986-01-01

    The detection and isolation of commercial aircraft control surface and actuator failures using the orthogonal series generalized likelihood ratio (OSGLR) test was evaluated. The OSGLR algorithm was chosen as the most promising algorithm based on a preliminary evaluation of three failure detection and isolation (FDI) algorithms (the detection filter, the generalized likelihood ratio test, and the OSGLR test) and a survey of the literature. One difficulty of analytic FDI techniques, and the OSGLR algorithm in particular, is their sensitivity to modeling errors. Therefore, methods of improving the robustness of the algorithm were examined, with the incorporation of age-weighting into the algorithm being the most effective approach, significantly reducing the sensitivity of the algorithm to modeling errors. The steady-state implementation of the algorithm based on a single cruise linear model was evaluated using a nonlinear simulation of a C-130 aircraft. A number of off-nominal no-failure flight conditions including maneuvers, nonzero flap deflections, different turbulence levels and steady winds were tested. Based on the no-failure decision functions produced by off-nominal flight conditions, the failure detection performance at the nominal flight condition was determined. The extension of the algorithm to a wider flight envelope by scheduling the linear models used by the algorithm on dynamic pressure and flap deflection was also considered. Since simply scheduling the linear models over the entire flight envelope is unlikely to be adequate, scheduling of the steady-state implementation of the algorithm was briefly investigated.

  18. Fast parallel algorithm for slicing STL based on pipeline

    NASA Astrophysics Data System (ADS)

    Ma, Xulong; Lin, Feng; Yao, Bo

    2016-05-01

    In the Additive Manufacturing field, current research on data processing mainly focuses on the slicing of large STL files or complicated CAD models. To improve efficiency and reduce slicing time, a parallel algorithm has great advantages. However, traditional algorithms can't make full use of multi-core CPU hardware resources. In this paper, a fast parallel algorithm is presented to speed up data processing. A pipeline mode is adopted to design the parallel algorithm, and the complexity of the pipeline algorithm is analyzed theoretically. To evaluate the performance of the new algorithm, the effects of thread count and layer count are investigated in a series of experiments. The experimental results show that thread count and layer count are two significant factors for the speedup ratio. The tendency of speedup versus thread count shows a positive relationship that agrees well with Amdahl's law, and the tendency of speedup versus layer count also shows a positive relationship, agreeing with Gustafson's law. The new algorithm uses topological information to compute contours in parallel. Another parallel algorithm, based on data parallelism, is used in the experiments to show that the pipeline parallel mode is more efficient. A final case study shows the strong performance of the new parallel algorithm. Compared with the serial slicing algorithm, the new pipeline parallel algorithm can make full use of multi-core CPU hardware and accelerate the slicing process; compared with the data-parallel slicing algorithm, the pipeline parallel model achieves a much higher speedup ratio and efficiency.
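
    The pipeline organization can be illustrated with two concurrent stages connected by a queue; intersect() and link_contours() below are hypothetical stage functions standing in for the plane-mesh intersection and contour-linking steps of a real slicer.

        import queue
        import threading

        def pipeline_slice(triangles, z_levels):
            q12, out = queue.Queue(maxsize=8), []

            def stage1():                    # plane/mesh intersection stage
                for z in z_levels:
                    q12.put((z, intersect(triangles, z)))      # hypothetical
                q12.put(None)                # end-of-stream marker

            def stage2():                    # contour-linking stage, runs concurrently
                while (item := q12.get()) is not None:
                    z, segments = item
                    out.append((z, link_contours(segments)))   # hypothetical

            t1 = threading.Thread(target=stage1)
            t2 = threading.Thread(target=stage2)
            t1.start(); t2.start(); t1.join(); t2.join()
            return out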

  19. A study of hydrogen diffusion flames using PDF turbulence model

    NASA Technical Reports Server (NTRS)

    Hsu, Andrew T.

    1991-01-01

    The application of probability density function (pdf) turbulence models is addressed. For the purpose of accurate prediction of turbulent combustion, an algorithm that combines a conventional computational fluid dynamic (CFD) flow solver with the Monte Carlo simulation of the pdf evolution equation was developed. The algorithm was validated using experimental data for a heated turbulent plane jet. The study of H2-F2 diffusion flames was carried out using this algorithm. Numerical results compared favorably with experimental data. The computations show that the flame center shifts as the equivalence ratio changes, and that for the same equivalence ratio, similarity solutions for flames exist.

  20. A study of hydrogen diffusion flames using PDF turbulence model

    NASA Technical Reports Server (NTRS)

    Hsu, Andrew T.

    1991-01-01

    The application of probability density function (pdf) turbulence models is addressed in this work. For the purpose of accurate prediction of turbulent combustion, an algorithm that combines a conventional CFD flow solver with the Monte Carlo simulation of the pdf evolution equation has been developed. The algorithm has been validated using experimental data for a heated turbulent plane jet. The study of H2-F2 diffusion flames has been carried out using this algorithm. Numerical results compared favorably with experimental data. The computations show that the flame center shifts as the equivalence ratio changes, and that for the same equivalence ratio, similarity solutions for flames exist.

  1. Optimal damping profile ratios for stabilization of perfectly matched layers in general anisotropic media

    DOE PAGES

    Gao, Kai; Huang, Lianjie

    2017-11-13

    Conventional perfectly matched layers (PML) can be unstable for certain kinds of anisotropic media. Multi-axial PML removes such instability using nonzero damping coefficients in the directions tangential to the PML interface. While using non-zero damping profile ratios can stabilize PML, it is important to obtain the smallest possible damping profile ratios to minimize artificial reflections caused by these non-zero ratios, particularly for 3D general anisotropic media. Using the eigenvectors of the PML system matrix, we develop a straightforward and efficient numerical algorithm to determine the optimal damping profile ratios to stabilize PML in 2D and 3D general anisotropic media. Numerical examples show that our algorithm provides optimal damping profile ratios to ensure the stability of PML and complex-frequency-shifted PML for elastic-wave modeling in 2D and 3D general anisotropic media.

  2. Optimal damping profile ratios for stabilization of perfectly matched layers in general anisotropic media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Kai; Huang, Lianjie

    Conventional perfectly matched layers (PML) can be unstable for certain kinds of anisotropic media. Multi-axial PML removes such instability using nonzero damping coefficients in the directions tangential to the PML interface. While using non-zero damping profile ratios can stabilize PML, it is important to obtain the smallest possible damping profile ratios to minimize artificial reflections caused by these non-zero ratios, particularly for 3D general anisotropic media. Using the eigenvectors of the PML system matrix, we develop a straightforward and efficient numerical algorithm to determine the optimal damping profile ratios to stabilize PML in 2D and 3D general anisotropic media. Numerical examples show that our algorithm provides optimal damping profile ratios to ensure the stability of PML and complex-frequency-shifted PML for elastic-wave modeling in 2D and 3D general anisotropic media.

  3. Adaptive Integration of the Compressed Algorithm of CS and NPC for the ECG Signal Compressed Algorithm in VLSI Implementation

    PubMed Central

    Tseng, Yun-Hua; Lu, Chih-Wen

    2017-01-01

    Compressed sensing (CS) is a promising approach to the compression and reconstruction of electrocardiogram (ECG) signals. It has been shown that following reconstruction, most of the changes between the original and reconstructed signals are distributed in the Q, R, and S waves (QRS) region. Furthermore, any increase in the compression ratio tends to increase the magnitude of the change. This paper presents a novel approach integrating the near-precise compressed (NPC) and CS algorithms. The simulation results presented notable improvements in signal-to-noise ratio (SNR) and compression ratio (CR). The efficacy of this approach was verified by fabricating a highly efficient low-cost chip using the Taiwan Semiconductor Manufacturing Company’s (TSMC) 0.18-μm Complementary Metal-Oxide-Semiconductor (CMOS) technology. The proposed core has an operating frequency of 60 MHz and gate counts of 2.69 K. PMID:28991216

  4. Reflectance model for quantifying chlorophyll a in the presence of productivity degradation products

    NASA Technical Reports Server (NTRS)

    Carder, K. L.; Hawes, S. K.; Steward, R. G.; Baker, K. A.; Smith, R. C.; Mitchell, B. G.

    1991-01-01

    A reflectance model developed to estimate chlorophyll a concentrations in the presence of marine colored dissolved organic matter, pheopigments, detritus, and bacteria is presented. Nomograms and lookup tables are generated to describe the effects of different mixtures of chlorophyll a and these degradation products on the R(412):R(443) and R(443):R(565) remote-sensing reflectance or irradiance reflectance ratios. These are used to simulate the accuracy of potential ocean color satellite algorithms, assuming that atmospheric effects have been removed. For the California Current upwelling and offshore regions, with chlorophyll a not greater than 1.3 mg/cu m, the average error for chlorophyll a retrievals derived from irradiance reflectance data for degradation product-rich areas was reduced from +/-61 percent to +/-23 percent by application of an algorithm using two reflectance ratios rather than the commonly used algorithm applying a single reflectance ratio.

  5. An improved algorithm of mask image dodging for aerial image

    NASA Astrophysics Data System (ADS)

    Zhang, Zuxun; Zou, Songbai; Zuo, Zhiqi

    2011-12-01

    The Mask image dodging technique based on the Fourier transform is a good algorithm for removing uneven luminance within a single image. At present, the difference method and the ratio method are the methods in common use, but both have their own defects. For example, the difference method can keep the brightness of the whole image uniform, but it is deficient in local contrast; meanwhile, the ratio method works better for local contrast, but sometimes makes the dark areas of the original image too bright. In order to remove the defects of the two methods effectively, this paper proposes a balanced solution based on a study of both. Experiments show that the scheme not only combines the advantages of the difference method and the ratio method, but also avoids the deficiencies of the two algorithms.
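
    A minimal sketch of the two corrections and their blend is given below, using a heavy Gaussian low-pass as a stand-in for the Fourier-domain background mask; the blending weight alpha is an illustrative knob, not the paper's tuning.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def dodge(img, sigma=50.0, alpha=0.5):
            img = img.astype(float)
            bg = gaussian_filter(img, sigma)        # estimated uneven luminance
            mean = img.mean()
            diff_corr = img - bg + mean             # uniform brightness, weaker local contrast
            ratio_corr = img / np.maximum(bg, 1e-6) * mean  # better contrast, may over-brighten shadows
            return alpha * diff_corr + (1 - alpha) * ratio_corr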

  6. A pragmatic evidence-based clinical management algorithm for burning mouth syndrome.

    PubMed

    Kim, Yohanan; Yoo, Timothy; Han, Peter; Liu, Yuan; Inman, Jared C

    2018-04-01

    Burning mouth syndrome is a poorly understood disease process with no current standard of treatment. The goal of this article is to provide an evidence-based, practical, clinical algorithm as a guideline for the treatment of burning mouth syndrome. Using available evidence and clinical experience, a multi-step management algorithm was developed. A retrospective cohort study was then performed, following STROBE statement guidelines, comparing outcomes of patients who were managed using the algorithm and those who were managed without. Forty-seven patients were included in the study, with 21 (45%) managed using the algorithm and 26 (55%) managed without. The mean age overall was 60.4 ± 16.5 years, and most patients (39, 83%) were female. Cohorts showed no statistical difference in age, sex, overall follow-up time, dysgeusia, geographic tongue, or psychiatric disorder; xerostomia, however, was significantly different, skewed toward the algorithm group. Significantly more non-algorithm patients did not continue care (69% vs. 29%, p = 0.001). The odds ratio of not continuing care for the non-algorithm group compared to the algorithm group was 5.6 [1.6, 19.8]. Improvement in pain was significantly more likely in the algorithm group (p = 0.001), with an odds ratio of 27.5 [3.1, 242.0]. We present a basic clinical management algorithm for burning mouth syndrome which may increase the likelihood of pain improvement and patient follow-up. Key words: Burning mouth syndrome, burning tongue, glossodynia, oral pain, oral burning, therapy, treatment.

  7. The advanced progress of precoding technology in 5G systems

    NASA Astrophysics Data System (ADS)

    An, Chenyi

    2017-09-01

    With the development of technology, people have placed higher requirements on mobile systems, and the emergence of 5G has changed the trajectory of mobile communication technology. In research on the core technologies of 5G mobile communication, large-scale MIMO and precoding are research hotspots. Current research on precoding in 5G systems analyzes the main linear precoding methods: the maximum ratio transmission (MRT) precoding algorithm, the zero-forcing (ZF) precoding algorithm, the minimum mean square error (MMSE) precoding algorithm, and precoding based on the maximum signal-to-leakage-and-noise ratio (SLNR). These precoding algorithms are analyzed and summarized in detail. We also review nonlinear precoding methods, such as dirty-paper coding and the Tomlinson-Harashima precoding (THP) algorithm. Through this analysis, we can identify the advantages, disadvantages, and development trends of each algorithm, and grasp the state of precoding technology in current 5G systems. The results and data of this paper can therefore serve as a reference for the development of precoding technology in 5G systems.
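
    The textbook forms of the linear precoders surveyed are compact enough to state directly; the sketch below takes H as the K x M downlink channel (K single-antenna users, M base-station antennas) and omits per-user power normalization.

        import numpy as np

        def linear_precoders(H, snr):
            K, M = H.shape
            Hh = H.conj().T
            W_mrt = Hh                                        # maximum ratio transmission
            W_zf = Hh @ np.linalg.inv(H @ Hh)                 # zero forcing
            W_mmse = Hh @ np.linalg.inv(H @ Hh + (K / snr) * np.eye(K))  # MMSE / regularized ZF
            return W_mrt, W_zf, W_mmse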

  8. Comparing Binaural Pre-processing Strategies I: Instrumental Evaluation.

    PubMed

    Baumgärtel, Regina M; Krawczyk-Becker, Martin; Marquardt, Daniel; Völker, Christoph; Hu, Hongmei; Herzke, Tobias; Coleman, Graham; Adiloğlu, Kamil; Ernst, Stephan M A; Gerkmann, Timo; Doclo, Simon; Kollmeier, Birger; Hohmann, Volker; Dietz, Mathias

    2015-12-30

    In a collaborative research project, several monaural and binaural noise reduction algorithms have been comprehensively evaluated. In this article, eight selected noise reduction algorithms were assessed using instrumental measures, with a focus on the instrumental evaluation of speech intelligibility. Four distinct, reverberant scenarios were created to reflect everyday listening situations: a stationary speech-shaped noise, a multitalker babble noise, a single interfering talker, and a realistic cafeteria noise. Three instrumental measures were employed to assess predicted speech intelligibility and predicted sound quality: the intelligibility-weighted signal-to-noise ratio, the short-time objective intelligibility measure, and the perceptual evaluation of speech quality. The results show substantial improvements in predicted speech intelligibility as well as sound quality for the proposed algorithms. The evaluated coherence-based noise reduction algorithm was able to provide improvements in predicted audio signal quality. For the tested single-channel noise reduction algorithm, improvements in intelligibility-weighted signal-to-noise ratio were observed in all but the nonstationary cafeteria ambient noise scenario. Binaural minimum variance distortionless response beamforming algorithms performed particularly well in all noise scenarios. © The Author(s) 2015.

  9. An Automatic Image Processing System for Glaucoma Screening

    PubMed Central

    Alodhayb, Sami; Lakshminarayanan, Vasudevan

    2017-01-01

    Horizontal and vertical cup-to-disc ratios are the most crucial parameters used clinically to detect glaucoma or monitor its progress, and are manually evaluated from retinal fundus images of the optic nerve head. Given the scarcity of glaucoma experts and the growing glaucoma population, automatically calculated horizontal and vertical cup-to-disc ratios (HCDR and VCDR, respectively) can be useful for glaucoma screening. We report on two algorithms to calculate the HCDR and VCDR. In the algorithms, level set and inpainting techniques were developed for segmenting the disc, while thresholding using a Type-II fuzzy approach was developed for segmenting the cup. The results from the algorithms were verified against manual markings of images from a glaucoma dataset (retinal fundus images for glaucoma analysis, the RIGA dataset) by six ophthalmologists. The algorithms' accuracy for HCDR and VCDR combined was 74.2%. Only the accuracy of the manual markings by one ophthalmologist was higher than the algorithms' accuracy. The algorithms' best agreement was with the markings by ophthalmologist number 1, in 230 (41.8%) of the tested images. PMID:28947898
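
    Once cup and disc masks are available, the two ratios follow from bounding-box extents; a minimal sketch over binary masks:

        import numpy as np

        def cup_to_disc_ratios(cup_mask, disc_mask):
            def extents(mask):
                rows, cols = np.nonzero(mask)
                return np.ptp(cols) + 1, np.ptp(rows) + 1   # horizontal, vertical extent
            cup_h, cup_v = extents(cup_mask)
            disc_h, disc_v = extents(disc_mask)
            return cup_h / disc_h, cup_v / disc_v           # HCDR, VCDR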

  10. Comparing Binaural Pre-processing Strategies I

    PubMed Central

    Krawczyk-Becker, Martin; Marquardt, Daniel; Völker, Christoph; Hu, Hongmei; Herzke, Tobias; Coleman, Graham; Adiloğlu, Kamil; Ernst, Stephan M. A.; Gerkmann, Timo; Doclo, Simon; Kollmeier, Birger; Hohmann, Volker; Dietz, Mathias

    2015-01-01

    In a collaborative research project, several monaural and binaural noise reduction algorithms have been comprehensively evaluated. In this article, eight selected noise reduction algorithms were assessed using instrumental measures, with a focus on the instrumental evaluation of speech intelligibility. Four distinct, reverberant scenarios were created to reflect everyday listening situations: a stationary speech-shaped noise, a multitalker babble noise, a single interfering talker, and a realistic cafeteria noise. Three instrumental measures were employed to assess predicted speech intelligibility and predicted sound quality: the intelligibility-weighted signal-to-noise ratio, the short-time objective intelligibility measure, and the perceptual evaluation of speech quality. The results show substantial improvements in predicted speech intelligibility as well as sound quality for the proposed algorithms. The evaluated coherence-based noise reduction algorithm was able to provide improvements in predicted audio signal quality. For the tested single-channel noise reduction algorithm, improvements in intelligibility-weighted signal-to-noise ratio were observed in all but the nonstationary cafeteria ambient noise scenario. Binaural minimum variance distortionless response beamforming algorithms performed particularly well in all noise scenarios. PMID:26721920

  11. Decoding algorithm for vortex communications receiver

    NASA Astrophysics Data System (ADS)

    Kupferman, Judy; Arnon, Shlomi

    2018-01-01

    Vortex light beams can provide a tremendous alphabet for encoding information. We derive a symbol decoding algorithm for a direct detection matrix detector vortex beam receiver using Laguerre Gauss (LG) modes, and develop a mathematical model of symbol error rate (SER) for this receiver. We compare SER as a function of signal to noise ratio (SNR) for our algorithm and for the Pearson correlation algorithm. To our knowledge, this is the first comprehensive treatment of a decoding algorithm of a matrix detector for an LG receiver.

  12. Estimators of wheel slip for electric vehicles using torque and encoder measurements

    NASA Astrophysics Data System (ADS)

    Boisvert, M.; Micheau, P.

    2016-08-01

    For the purpose of regenerative braking control in hybrid and electric vehicles, recent studies have suggested controlling the slip ratio of the electrically powered wheel. A slip-tracking controller requires an accurate slip estimate over the full range of the slip ratio (from 0 to 1), contrary to the conventional slip limiter (ABS), which calls for an accurate slip estimate only in the critical slip area, around 0.15 in several applications. Considering that it is not possible to measure the slip ratio of a wheel directly, the problem is to estimate it from available online data. To estimate the slip of a wheel, both the wheel speed and the vehicle speed must be known. Several studies provide algorithms for obtaining a good estimate of vehicle speed. On the other hand, no algorithm has been proposed for conditioning the wheel speed measurement. Indeed, the noise included in the wheel speed measurement reduces the accuracy of the slip estimate, a disturbance that becomes increasingly significant at low speed and low torque. Herein, two different extended Kalman observers of the slip ratio were developed. The first calculates the slip ratio from data provided by observers of the vehicle speed and the propelled wheel speed. The second observer uses an original nonlinear model of the slip ratio as a function of the electric motor torque. A sinusoid-tracking algorithm is included in both observers in order to reject harmonic disturbances in the wheel speed measurement. Moreover, mass and road uncertainties can be compensated with a coefficient adapted online by a recursive least-squares (RLS) algorithm. The algorithms were implemented and tested on a three-wheel recreational hybrid vehicle. Experimental results show the efficiency of both methods.
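
    The quantity both observers target is the normalized slip itself; a sketch of the definition, with guarding against the 0/0 case at standstill (where the measurement-noise problem is worst), is:

        def slip_ratio(wheel_omega, vehicle_speed, wheel_radius):
            v_wheel = wheel_omega * wheel_radius
            denom = max(abs(v_wheel), abs(vehicle_speed), 1e-3)  # avoid 0/0 at standstill
            return (v_wheel - vehicle_speed) / denom  # in [-1, 1]; 0..1 under traction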

  13. Mean curvature and texture constrained composite weighted random walk algorithm for optic disc segmentation towards glaucoma screening.

    PubMed

    Panda, Rashmi; Puhan, N B; Panda, Ganapati

    2018-02-01

    Accurate optic disc (OD) segmentation is an important step in obtaining cup-to-disc ratio-based glaucoma screening using fundus imaging. It is a challenging task because of the subtle OD boundary, blood vessel occlusion and intensity inhomogeneity. In this Letter, the authors propose an improved version of the random walk algorithm for OD segmentation to tackle such challenges. The algorithm incorporates the mean curvature and Gabor texture energy features to define the new composite weight function to compute the edge weights. Unlike the deformable model-based OD segmentation techniques, the proposed algorithm remains unaffected by curve initialisation and local energy minima problem. The effectiveness of the proposed method is verified with DRIVE, DIARETDB1, DRISHTI-GS and MESSIDOR database images using the performance measures such as mean absolute distance, overlapping ratio, dice coefficient, sensitivity, specificity and precision. The obtained OD segmentation results and quantitative performance measures show robustness and superiority of the proposed algorithm in handling the complex challenges in OD segmentation.

  14. Two-wavelength Lidar inversion algorithm for determining planetary boundary layer height

    NASA Astrophysics Data System (ADS)

    Liu, Boming; Ma, Yingying; Gong, Wei; Jian, Yang; Ming, Zhang

    2018-02-01

    This study proposes a two-wavelength lidar inversion algorithm to determine the boundary layer height (BLH) based on particle clustering. The color ratio and depolarization ratio are used to analyze the particle distribution, based on which the proposed algorithm can overcome the effects of complex aerosol layers when calculating the BLH. The algorithm is used to determine the top of the boundary layer under different mixing states. Experimental results demonstrate that the proposed algorithm can determine the top of the boundary layer even in complex cases, and it deals better with weak convection conditions. Finally, experimental data from June 2015 to December 2015 were used to verify the reliability of the proposed algorithm. The correlation between the results of the proposed algorithm and the manual method is R2 = 0.89 with an RMSE of 131 m and a mean bias of 49 m; the correlation between the results of the ideal profile fitting method and the manual method is R2 = 0.64 with an RMSE of 270 m and a mean bias of 165 m; and the correlation between the results of the wavelet covariance transform method and the manual method is R2 = 0.76, with an RMSE of 196 m and a mean bias of 23 m. These findings indicate that the proposed algorithm has better reliability and stability than traditional algorithms.

  15. Use of sexually transmitted disease risk assessment algorithms for selection of intrauterine device candidates.

    PubMed

    Morrison, C S; Sekadde-Kigondu, C; Miller, W C; Weiner, D H; Sinei, S K

    1999-02-01

    Sexually transmitted diseases (STD) are an important contraindication for intrauterine device (IUD) insertion. Nevertheless, laboratory testing for STD is not possible in many settings. The objective of this study is to evaluate the use of risk assessment algorithms to predict STD and subsequent IUD-related complications among IUD candidates. Among 615 IUD users in Kenya, the following algorithms were evaluated: 1) an STD algorithm based on US Agency for International Development (USAID) Technical Working Group guidelines; 2) a Centers for Disease Control and Prevention (CDC) algorithm for management of chlamydia; and 3) a data-derived algorithm modeled from study data. Algorithms were evaluated for prediction of chlamydial and gonococcal infection at 1 month and complications (pelvic inflammatory disease [PID], IUD removals, and IUD expulsions) over 4 months. Women with STD were more likely to develop complications than women without STD (19% vs 6%; risk ratio = 2.9; 95% CI 1.3-6.5). For STD prediction, the USAID algorithm was 75% sensitive and 48% specific, with a positive likelihood ratio (LR+) of 1.4. The CDC algorithm was 44% sensitive and 72% specific, LR+ = 1.6. The data-derived algorithm was 91% sensitive and 56% specific, with LR+ = 2.0 and LR- = 0.2. Category-specific LRs for this algorithm identified women with very low (< 1%) and very high (29%) infection probabilities. The data-derived algorithm was also the best predictor of IUD-related complications. These results suggest that use of STD algorithms may improve the selection of IUD users. Women at high risk for STD could be counseled to avoid the IUD, whereas women at moderate risk should be monitored closely and counseled to use condoms.
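
    The reported likelihood ratios follow directly from the sensitivities and specificities, as the quick check below shows for the USAID algorithm.

        sens, spec = 0.75, 0.48
        lr_pos = sens / (1 - spec)    # 0.75 / 0.52 = 1.4
        lr_neg = (1 - sens) / spec    # 0.25 / 0.48 = 0.5
        # Data-derived algorithm: 0.91 / (1 - 0.56) = 2.0 and (1 - 0.91) / 0.56 = 0.2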

  16. A pragmatic evidence-based clinical management algorithm for burning mouth syndrome

    PubMed Central

    Yoo, Timothy; Han, Peter; Liu, Yuan; Inman, Jared C.

    2018-01-01

    Background Burning mouth syndrome is a poorly understood disease process with no current standard of treatment. The goal of this article is to provide an evidence-based, practical, clinical algorithm as a guideline for the treatment of burning mouth syndrome. Material and Methods Using available evidence and clinical experience, a multi-step management algorithm was developed. A retrospective cohort study was then performed, following STROBE statement guidelines, comparing outcomes of patients who were managed using the algorithm and those who were managed without. Results Forty-seven patients were included in the study, with 21 (45%) managed using the algorithm and 26 (55%) managed without. The mean age overall was 60.4 ± 16.5 years, and most patients (39, 83%) were female. Cohorts showed no statistical difference in age, sex, overall follow-up time, dysgeusia, geographic tongue, or psychiatric disorder; xerostomia, however, was significantly different, skewed toward the algorithm group. Significantly more non-algorithm patients did not continue care (69% vs. 29%, p = 0.001). The odds ratio of not continuing care for the non-algorithm group compared to the algorithm group was 5.6 [1.6, 19.8]. Improvement in pain was significantly more likely in the algorithm group (p = 0.001), with an odds ratio of 27.5 [3.1, 242.0]. Conclusions We present a basic clinical management algorithm for burning mouth syndrome which may increase the likelihood of pain improvement and patient follow-up. Key words: Burning mouth syndrome, burning tongue, glossodynia, oral pain, oral burning, therapy, treatment. PMID:29750091

  17. Stochastic resonance investigation of object detection in images

    NASA Astrophysics Data System (ADS)

    Repperger, Daniel W.; Pinkus, Alan R.; Skipper, Julie A.; Schrider, Christina D.

    2007-02-01

    Object detection in images was conducted using a nonlinear means of improving the signal-to-noise ratio termed "stochastic resonance" (SR). In a recent United States patent application, it was shown that arbitrarily large signal-to-noise ratio gains could be realized when a signal detection problem is cast within the context of an SR filter. Signal-to-noise ratio measures were investigated. For a binary object recognition task (friendly versus hostile), the method was implemented by perturbing the recognition algorithm and subsequently thresholding via a computer simulation. To test the efficacy of the proposed algorithm fairly, a unique database of images was constructed by modifying two sample library objects, adjusting their brightness, contrast and relative size via commercial software to gradually compromise their saliency to identification. The key to the SR method is to produce a small perturbation in the identification algorithm and then to threshold the results, thus improving the overall system's ability to discern objects. A background discussion of the SR method is presented. A standard test is proposed in which object identification algorithms could be fairly compared against each other with respect to their relative performance.

  18. Dose algorithm for EXTRAD 4100S extremity dosimeter for use at Sandia National Laboratories.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Potter, Charles Augustus

    An updated algorithm for the EXTRAD 4100S extremity dosimeter has been derived. This algorithm optimizes the binning of dosimeter element ratios and uses a quadratic function to determine the response factors for low response ratios. This results in lower systematic bias across all test categories and eliminates the need for the 'red strap' algorithm that was used for high-energy beta/gamma-emitting radionuclides. The Radiation Protection Dosimetry Program (RPDP) at Sandia National Laboratories uses the Thermo Fisher EXTRAD 4100S extremity dosimeter, shown in Fig 1.1, to determine shallow dose to the extremities of potentially exposed individuals. This dosimeter consists of two LiF TLD elements or 'chipstrates', one of TLD-700 (⁷Li) and one of TLD-100 (natural Li), separated by a tin filter. Following readout and background subtraction, the ratio of the responses of the two elements is determined, defining the penetrability of the incident radiation. While this penetrability approximates the incident energy of the radiation, X-rays and beta particles occur in energy distributions that make the determination of dose conversion factors less straightforward.

  19. Detection of Lettuce Discoloration Using Hyperspectral Reflectance Imaging

    PubMed Central

    Mo, Changyeun; Kim, Giyoung; Lim, Jongguk; Kim, Moon S.; Cho, Hyunjeong; Cho, Byoung-Kwan

    2015-01-01

    Rapid visible/near-infrared (VNIR) hyperspectral imaging methods, employing both a single-waveband algorithm and multi-spectral algorithms, were developed to discriminate between sound and discolored lettuce. Reflectance spectra for sound and discolored lettuce surfaces were extracted from hyperspectral reflectance images obtained in the 400–1000 nm wavelength range. The optimal wavebands for discriminating between discolored and sound lettuce surfaces were determined using one-way analysis of variance. Multi-spectral imaging algorithms developed using ratio and subtraction functions resulted in enhanced classification accuracy of above 99.9% for discolored and sound areas on both adaxial and abaxial lettuce surfaces. Ratio imaging (RI) and subtraction imaging (SI) algorithms at wavelengths of 552/701 nm and 557–701 nm, respectively, exhibited better classification performance than results obtained for all possible two-waveband combinations. These results suggest that hyperspectral reflectance imaging techniques can potentially be used to discriminate between discolored and sound fresh-cut lettuce. PMID:26610510
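
    The ratio-imaging rule is simple to express: form the 552/701 nm band ratio per pixel and threshold it. The sketch below assumes a hyperspectral cube of shape (H, W, bands) with a matching wavelength vector; the threshold value and the direction of the inequality would come from training data.

        import numpy as np

        def ratio_image_classify(cube, wavelengths, thresh=1.0):
            b552 = int(np.argmin(np.abs(wavelengths - 552)))
            b701 = int(np.argmin(np.abs(wavelengths - 701)))
            ratio = cube[..., b552] / np.maximum(cube[..., b701], 1e-6)
            return ratio > thresh   # candidate discolored-pixel map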

  20. Detection of Lettuce Discoloration Using Hyperspectral Reflectance Imaging.

    PubMed

    Mo, Changyeun; Kim, Giyoung; Lim, Jongguk; Kim, Moon S; Cho, Hyunjeong; Cho, Byoung-Kwan

    2015-11-20

    Rapid visible/near-infrared (VNIR) hyperspectral imaging methods, employing both a single-waveband algorithm and multi-spectral algorithms, were developed to discriminate between sound and discolored lettuce. Reflectance spectra for sound and discolored lettuce surfaces were extracted from hyperspectral reflectance images obtained in the 400-1000 nm wavelength range. The optimal wavebands for discriminating between discolored and sound lettuce surfaces were determined using one-way analysis of variance. Multi-spectral imaging algorithms developed using ratio and subtraction functions resulted in enhanced classification accuracy of above 99.9% for discolored and sound areas on both adaxial and abaxial lettuce surfaces. Ratio imaging (RI) and subtraction imaging (SI) algorithms at wavelengths of 552/701 nm and 557-701 nm, respectively, exhibited better classification performance than results obtained for all possible two-waveband combinations. These results suggest that hyperspectral reflectance imaging techniques can potentially be used to discriminate between discolored and sound fresh-cut lettuce.

  1. Cost-effective analysis of different algorithms for the diagnosis of hepatitis C virus infection.

    PubMed

    Barreto, A M E C; Takei, K; Sabino, E C; Bellesa, M A O; Salles, N A; Barreto, C C; Nishiya, A S; Chamone, D F

    2008-02-01

    We compared the cost-benefit of two algorithms, recently proposed by the Centers for Disease Control and Prevention, USA, with that of the conventional algorithm, to determine the most appropriate for the diagnosis of hepatitis C virus (HCV) infection in the Brazilian population. Serum samples were obtained from 517 ELISA-positive or -inconclusive blood donors who had returned to Fundação Pró-Sangue/Hemocentro de São Paulo to confirm previous results. Algorithm A was based on the signal-to-cut-off (s/co) ratio of anti-HCV ELISA samples, using the s/co value that shows ≥95% concordance with immunoblot (IB) positivity. For algorithm B, reflex nucleic acid amplification testing by PCR was required for ELISA-positive or -inconclusive samples, and IB for PCR-negative samples. For algorithm C, all positive or inconclusive ELISA samples were submitted to IB. We observed a similar rate of positive results with the three algorithms: 287, 287, and 285 for A, B, and C, respectively, with 283 concordant across all three. Indeterminate results from algorithms A and C were elucidated by PCR (expanded algorithm), which detected two more positive samples. The estimated cost of algorithms A and B was US$21,299.39 and US$32,397.40, respectively, 43.5% and 14.0% more economical than C (US$37,673.79). The cost can vary according to the technique used. We conclude that both algorithms A and B are suitable for diagnosing HCV infection in the Brazilian population. Furthermore, algorithm A is the more practical and economical one, since it requires supplemental tests for only 54% of the samples. Algorithm B provides early information about the presence of viremia.

  2. An algorithm for extraction of periodic signals from sparse, irregularly sampled data

    NASA Technical Reports Server (NTRS)

    Wilcox, J. Z.

    1994-01-01

    Temporal gaps in discrete sampling sequences produce spurious Fourier components at the intermodulation frequencies of an oscillatory signal and the temporal gaps, thus significantly complicating spectral analysis of such sparsely sampled data. A new fast Fourier transform (FFT)-based algorithm has been developed, suitable for spectral analysis of sparsely sampled data with a relatively small number of oscillatory components buried in background noise. The algorithm's principal idea has its origin in the so-called 'clean' algorithm used to sharpen images of scenes corrupted by atmospheric and sensor aperture effects. It identifies as the signal's 'true' frequency that oscillatory component which, when passed through the same sampling sequence as the original data, produces a Fourier image that is the best match to the original Fourier space. The algorithm has generally met with success in trials with simulated data with a low signal-to-noise ratio, including those of a type similar to hourly residuals for Earth orientation parameters extracted from VLBI data. For eight oscillatory components in the diurnal and semidiurnal bands, all components with an amplitude-to-noise ratio greater than 0.2 were successfully extracted for all sequences and duty cycles (greater than 0.1) tested; the amplitude-to-noise ratios of the extracted signals were as low as 0.05 for high duty cycles and long sampling sequences. When, in addition to these high frequencies, strong low-frequency components are present in the data, the low-frequency components are generally eliminated first, by employing a version of the algorithm that searches for non-integer multiples of the discrete FFT minimum frequency.
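
    The 'clean'-style loop is concise: locate the strongest peak of the dirty spectrum of the residual, fit and subtract a sinusoid at that frequency on the actual (gappy) sample times, and repeat. A minimal sketch over a user-supplied frequency grid:

        import numpy as np

        def clean_extract(t, y, freqs, n_components=3):
            resid = y.astype(float)
            found = []
            for _ in range(n_components):
                power = [abs(np.sum(resid * np.exp(-2j * np.pi * f * t))) for f in freqs]
                f0 = freqs[int(np.argmax(power))]            # peak of the dirty spectrum
                A = np.column_stack([np.cos(2 * np.pi * f0 * t),
                                     np.sin(2 * np.pi * f0 * t)])
                coef, *_ = np.linalg.lstsq(A, resid, rcond=None)
                resid = resid - A @ coef                     # subtract fitted sinusoid
                found.append((f0, float(np.hypot(*coef))))   # frequency, amplitude
            return found, resid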

  3. [Lossless ECG compression algorithm with anti-electromagnetic interference].

    PubMed

    Guan, Shu-An

    2005-03-01

    Based on a study of ECG signal features, a new lossless ECG compression algorithm is put forward here. We apply an anti-electromagnetic-interference second-order difference operation to the original ECG signals and then compress the result with an escape-based coding model. In spite of serious 50 Hz interference, the algorithm is still capable of obtaining a high compression ratio.
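
    The reversible differencing stage is a two-liner; the sketch below keeps the two-sample header needed for exact reconstruction and leaves the escape-based entropy coder out.

        import numpy as np

        def encode(sig):
            # sig is a 1-D integer NumPy array of ECG samples
            return sig[:2].copy(), np.diff(sig, n=2)   # header, second differences

        def decode(header, d2):
            f = np.concatenate([[header[1] - header[0]], d2]).cumsum()  # first differences
            return np.concatenate([[header[0]], header[0] + f.cumsum()])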

  4. Weighted graph cuts without eigenvectors a multilevel approach.

    PubMed

    Dhillon, Inderjit S; Guan, Yuqiang; Kulis, Brian

    2007-11-01

    A variety of clustering algorithms have recently been proposed to handle data that is not linearly separable; spectral clustering and kernel k-means are two of the main methods. In this paper, we discuss an equivalence between the objective functions used in these seemingly different methods--in particular, a general weighted kernel k-means objective is mathematically equivalent to a weighted graph clustering objective. We exploit this equivalence to develop a fast, high-quality multilevel algorithm that directly optimizes various weighted graph clustering objectives, such as the popular ratio cut, normalized cut, and ratio association criteria. This eliminates the need for any eigenvector computation for graph clustering problems, which can be prohibitive for very large graphs. Previous multilevel graph partitioning methods, such as Metis, have suffered from the restriction of equal-sized clusters; our multilevel algorithm removes this restriction by using kernel k-means to optimize weighted graph cuts. Experimental results show that our multilevel algorithm outperforms a state-of-the-art spectral clustering algorithm in terms of speed, memory usage, and quality. We demonstrate that our algorithm is applicable to large-scale clustering tasks such as image segmentation, social network analysis and gene network analysis.

  5. A stationary wavelet transform and a time-frequency based spike detection algorithm for extracellular recorded data.

    PubMed

    Lieb, Florian; Stark, Hans-Georg; Thielemann, Christiane

    2017-06-01

    Spike detection from extracellular recordings is a crucial preprocessing step when analyzing neuronal activity. Whether a specific part of the signal is classified as a spike matters for all subsequent processing steps, such as spike sorting or burst detection, so erroneously identified spikes must be kept to a minimum. Many spike detection algorithms have already been suggested, all working reasonably well whenever the signal-to-noise ratio is large enough. When the noise level is high, however, these algorithms perform poorly. In this paper we present two new spike detection algorithms. The first is based on a stationary wavelet energy operator and the second on the time-frequency representation of spikes. Both algorithms are more reliable than the most commonly used methods. Their performance is confirmed using simulated data resembling original data recorded from cortical neurons with multielectrode arrays. To demonstrate that the performance is not restricted to one specific set of data, we also verify it on a simulated, publicly available data set. We show that both proposed algorithms perform best among all tested methods, regardless of the signal-to-noise ratio, in both data sets. This contribution will benefit electrophysiological investigations of human cells; in particular, spatial and temporal analysis of neural network communication is improved by the proposed spike detection algorithms.
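
    A rough sketch of a stationary-wavelet energy detector in the spirit of the first algorithm (assumptions: the PyWavelets package is available, and the wavelet, decomposition level, and median-based threshold are illustrative choices, not the authors' exact operator).

    ```python
    import numpy as np
    import pywt

    def swt_energy_spikes(x, wavelet="sym5", level=4, k=5.0):
        n = len(x) - len(x) % (2 ** level)     # pywt.swt needs length % 2^level == 0
        coeffs = pywt.swt(x[:n], wavelet, level=level)
        # Sum squared detail coefficients across levels: a time-resolved energy trace.
        energy = sum(cd ** 2 for _, cd in coeffs)
        # Robust threshold from the median absolute deviation of the energy trace.
        med = np.median(energy)
        thr = med + k * np.median(np.abs(energy - med))
        return np.flatnonzero(energy > thr)    # candidate spike sample indices
    ```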

  6. Orthogonal series generalized likelihood ratio test for failure detection and isolation. [for aircraft control

    NASA Technical Reports Server (NTRS)

    Hall, Steven R.; Walker, Bruce K.

    1990-01-01

    A new failure detection and isolation algorithm for linear dynamic systems is presented. This algorithm, the Orthogonal Series Generalized Likelihood Ratio (OSGLR) test, is based on the assumption that the failure modes of interest can be represented by truncated series expansions. This assumption leads to a failure detection algorithm with several desirable properties. Computer simulation results are presented for the detection of the failures of actuators and sensors of a C-130 aircraft. The results show that the OSGLR test generally performs as well as the GLR test in terms of time to detect a failure and is more robust to failure mode uncertainty. However, the OSGLR test is also somewhat more sensitive to modeling errors than the GLR test.

  7. OSLG: A new granting scheme in WDM Ethernet passive optical networks

    NASA Astrophysics Data System (ADS)

    Razmkhah, Ali; Rahbar, Akbar Ghaffarpour

    2011-12-01

    Several granting schemes have been proposed to grant transmission windows and dynamic bandwidth allocation (DBA) in passive optical networks (PON). Generally, granting schemes suffer from bandwidth wastage within granted windows. Here, we propose a new granting scheme for WDM Ethernet PONs, called optical network unit (ONU) Side Limited Granting (OSLG), that conserves upstream bandwidth, thus decreasing queuing delay and packet drop ratio. In OSLG, each ONU, rather than the optical line terminal (OLT), determines its transmission window. Two OSLG algorithms are proposed in this paper: the OSLG_GA algorithm, which sizes the transmission window so that the bandwidth wastage problem is relieved, and the OSLG_SC algorithm, which saves unused bandwidth for better bandwidth utilization later on. OSLG can be used as the granting scheme of any DBA to provide better performance in terms of packet drop ratio and queuing delay. Our performance evaluations show the effectiveness of OSLG in reducing packet drop ratio and queuing delay under different DBA techniques.

  8. A comparison between physicians and computer algorithms for form CMS-2728 data reporting.

    PubMed

    Malas, Mohammed Said; Wish, Jay; Moorthi, Ranjani; Grannis, Shaun; Dexter, Paul; Duke, Jon; Moe, Sharon

    2017-01-01

    The CMS-2728 form (Medical Evidence Report) assesses 23 comorbidities chosen to reflect poor outcomes and increased mortality risk. Previous studies have questioned the validity of physician reporting on form CMS-2728. We hypothesize that reporting of comorbidities by computer algorithms identifies more comorbidities than physician completion and is therefore more reflective of underlying disease burden. We collected data from CMS-2728 forms for all 296 patients who had an incident ESRD diagnosis and received chronic dialysis from 2005 through 2014 at Indiana University outpatient dialysis centers. We analyzed patients' data from electronic medical record systems that collated information from multiple health care sources. Previously utilized algorithms or natural language processing were used to extract data on 10 comorbidities for a period of up to 10 years prior to ESRD incidence. These algorithms incorporate billing codes, prescriptions, and other relevant elements. We compared the presence or unchecked status of these comorbidities on the forms to their presence or absence according to the algorithms. Computer algorithms reported more comorbidities than physician completion of the forms. This remained true when decreasing the data span to one year and using only a single health center source. The algorithms' determinations were well accepted by a physician panel. Importantly, use of the algorithms significantly increased the expected deaths and lowered the standardized mortality ratios. Computer algorithms thus showed superior identification of comorbidities for form CMS-2728 and altered standardized mortality ratios. Adapting similar algorithms in available EMR systems may offer a more thorough evaluation of comorbidities and improve quality reporting.

  9. Change detection in synthetic aperture radar images based on image fusion and fuzzy clustering.

    PubMed

    Gong, Maoguo; Zhou, Zhiqiang; Ma, Jingjing

    2012-04-01

    This paper presents an unsupervised distribution-free change detection approach for synthetic aperture radar (SAR) images based on an image fusion strategy and a novel fuzzy clustering algorithm. The image fusion technique is introduced to generate a difference image by using complementary information from a mean-ratio image and a log-ratio image. In order to restrain the background information and enhance the information of changed regions in the fused difference image, wavelet fusion rules based on an average operator and minimum local area energy are chosen to fuse the wavelet coefficients for a low-frequency band and a high-frequency band, respectively. A reformulated fuzzy local-information C-means clustering algorithm is proposed for classifying changed and unchanged regions in the fused difference image. It incorporates the information about spatial context in a novel fuzzy way for the purpose of enhancing the changed information and of reducing the effect of speckle noise. Experiments on real SAR images show that the image fusion strategy integrates the advantages of the log-ratio operator and the mean-ratio operator and gains a better performance. The change detection results obtained by the improved fuzzy clustering algorithm exhibited lower error than those of its predecessors.
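
    The two difference operators being fused can be sketched as follows (a minimal illustration; the local-mean window size is an assumption, and the wavelet fusion and fuzzy clustering stages are not reproduced).

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def ratio_images(x1, x2, eps=1e-6):
        """x1, x2: co-registered SAR intensity images (float) of the same scene."""
        # Local means make the mean-ratio operator less sensitive to speckle.
        m1 = uniform_filter(x1, size=3)
        m2 = uniform_filter(x2, size=3)
        mean_ratio = 1.0 - np.minimum(m1 / (m2 + eps), m2 / (m1 + eps))
        log_ratio = np.abs(np.log((x2 + eps) / (x1 + eps)))
        return mean_ratio, log_ratio
    ```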

  10. Avoidance of speckle noise in laser vibrometry by the use of kurtosis ratio: Application to mechanical fault diagnostics

    NASA Astrophysics Data System (ADS)

    Vass, J.; Šmíd, R.; Randall, R. B.; Sovka, P.; Cristalli, C.; Torcianti, B.

    2008-04-01

    This paper presents a statistical technique to enhance vibration signals measured by laser Doppler vibrometry (LDV). The method has been optimised for LDV signals measured on bearings of universal electric motors and applied to quality control of washing machines. Inherent problems of LDV are addressed, particularly the speckle noise occurring when rough surfaces are measured. The presence of speckle noise is detected using a new scalar indicator, the kurtosis ratio (KR), specifically designed to quantify the amount of random impulses generated by this noise. The KR is the ratio of the standard kurtosis to a robust estimate of kurtosis, thus indicating outliers in the data. Since it is inefficient to reject the signals affected by the speckle noise, an algorithm for selecting an undistorted portion of a signal is proposed. The algorithm operates in the time domain and is thus fast and simple. The algorithm includes band-pass filtering and segmentation of the signal, as well as thresholding of the KR computed for each filtered signal segment. Algorithm parameters are discussed in detail and instructions for optimisation are provided. Experimental results demonstrate that speckle noise is effectively avoided in severely distorted signals, thus improving the signal-to-noise ratio (SNR) significantly. Typical faults are finally detected using squared envelope analysis. It is also shown that the KR of the band-pass filtered signal is related to the spectral kurtosis (SK).
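
    A sketch of a kurtosis-ratio style indicator (the robust denominator here is Moors' octile-based kurtosis, a stand-in chosen for illustration; the authors' robust estimate may differ).

    ```python
    import numpy as np
    from scipy.stats import kurtosis

    def kurtosis_ratio(x):
        """Large values flag impulsive outliers such as speckle-noise spikes."""
        standard = kurtosis(x, fisher=False)        # classical 4th-moment kurtosis
        # Robust surrogate: Moors' kurtosis from the octiles of the data.
        q = np.percentile(x, [12.5, 25, 37.5, 62.5, 75, 87.5])
        robust = ((q[5] - q[3]) + (q[2] - q[0])) / (q[4] - q[1])
        return standard / robust
    ```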

  11. Golden Ratio Genetic Algorithm Based Approach for Modelling and Analysis of the Capacity Expansion of Urban Road Traffic Network

    PubMed Central

    Zhang, Lun; Zhang, Meng; Yang, Wenchen; Dong, Decun

    2015-01-01

    This paper presents the modelling and analysis of the capacity expansion of an urban road traffic network (ICURTN). The bilevel programming model is first employed to model the ICURTN, in which the utility of the entire network is maximized subject to the optimal route choice of travelers. Then, an improved hybrid genetic algorithm integrated with the golden ratio (HGAGR) is developed to enhance the local search of simple genetic algorithms, and the proposed capacity expansion model is solved by the combination of the HGAGR and the Frank-Wolfe algorithm. Taking the traditional one-way network and a bidirectional network as the study case, three numerical calculations are conducted to validate the presented model and algorithm, and the primary influencing factors on the extended capacity model are analyzed. The calculation results indicate that capacity expansion is an effective measure to enlarge the capacity of an urban road network, especially under a limited construction budget; the average computation time of the HGAGR is 122 seconds, which meets the real-time demand in the evaluation of road network capacity. PMID:25802512

  12. An automated cross-correlation based event detection technique and its application to surface passive data set

    USGS Publications Warehouse

    Forghani-Arani, Farnoush; Behura, Jyoti; Haines, Seth S.; Batzle, Mike

    2013-01-01

    In studies of heavy oil, shale reservoirs, tight gas and enhanced geothermal systems, the use of surface passive seismic data to monitor induced microseismicity due to fluid flow in the subsurface is becoming more common. However, in most studies passive seismic records contain days or months of data, and manually analysing the data can be expensive and inaccurate. Moreover, in the presence of noise, detecting the arrival of weak microseismic events becomes challenging. Hence, an automated, accurate and computationally fast technique for event detection in passive seismic data is essential. The conventional automatic event identification algorithm computes a running-window energy ratio of the short-term average to the long-term average of the passive seismic data for each trace. We show that for the common case of a low signal-to-noise ratio in surface passive records, the conventional method is not sufficiently effective at event identification. Here, we extend the conventional algorithm by introducing a technique based on the cross-correlation of the energy ratios computed by the conventional method. With our technique we can measure the similarities among the computed energy ratios at different traces. Our approach succeeds in improving the detectability of events with a low signal-to-noise ratio that are not detectable with the conventional algorithm. Our algorithm also has the advantage of identifying whether an event is common to all stations (a regional event) or to a limited number of stations (a local event). We provide examples of applying our technique to synthetic data and to a field surface passive data set recorded at a geothermal site.
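
    The conventional STA/LTA energy ratio, plus a zero-lag cross-correlation of the ratios across traces in the spirit of the extension described, can be sketched as follows (window lengths and the similarity measure are illustrative assumptions).

    ```python
    import numpy as np

    def sta_lta(trace, nsta=50, nlta=500):
        """Running short-term / long-term average energy ratio for one trace."""
        e = trace ** 2
        csum = np.concatenate(([0.0], np.cumsum(e)))
        sta = (csum[nsta:] - csum[:-nsta]) / nsta
        lta = (csum[nlta:] - csum[:-nlta]) / nlta
        m = min(len(sta), len(lta))          # crude alignment of the window ends
        return sta[-m:] / (lta[-m:] + 1e-12)

    def similarity(ratios):
        """Average zero-lag correlation of energy ratios over all trace pairs;
        events common to many stations push this value up."""
        z = [(r - r.mean()) / (r.std() + 1e-12) for r in ratios]
        n = len(z)
        cc = [np.dot(z[i], z[j]) / len(z[i])
              for i in range(n) for j in range(i + 1, n)]
        return np.mean(cc)
    ```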

  13. Error minimization algorithm for comparative quantitative PCR analysis: Q-Anal.

    PubMed

    OConnor, William; Runquist, Elizabeth A

    2008-07-01

    Current methods for comparative quantitative polymerase chain reaction (qPCR) analysis, the threshold and extrapolation methods, either make assumptions about PCR efficiency that require an arbitrary threshold selection process or extrapolate to estimate relative levels of messenger RNA (mRNA) transcripts. Here we describe an algorithm, Q-Anal, that blends elements from current methods to by-pass assumptions regarding PCR efficiency and improve the threshold selection process to minimize error in comparative qPCR analysis. This algorithm uses iterative linear regression to identify the exponential phase for both target and reference amplicons and then selects, by minimizing linear regression error, a fluorescence threshold where efficiencies for both amplicons have been defined. From this defined fluorescence threshold, cycle time (Ct) and the error for both amplicons are calculated and used to determine the expression ratio. Ratios in complementary DNA (cDNA) dilution assays from qPCR data were analyzed by the Q-Anal method and compared with the threshold method and an extrapolation method. Dilution ratios determined by the Q-Anal and threshold methods were 86 to 118% of the expected cDNA ratios, but relative errors for the Q-Anal method were 4 to 10% in comparison with 4 to 34% for the threshold method. In contrast, ratios determined by an extrapolation method were 32 to 242% of the expected cDNA ratios, with relative errors of 67 to 193%. Q-Anal will be a valuable and quick method for minimizing error in comparative qPCR analysis.

  14. Toward real-time quantification of fluorescence molecular probes using target/background ratio for guiding biopsy and endoscopic therapy of esophageal neoplasia.

    PubMed

    Jiang, Yang; Gong, Yuanzheng; Rubenstein, Joel H; Wang, Thomas D; Seibel, Eric J

    2017-04-01

    Multimodal endoscopy using fluorescence molecular probes is a promising method of surveying the entire esophagus to detect cancer progression. Using the fluorescence ratio of a target compared to the surrounding background, a quantitative value is diagnostic for progression from Barrett's esophagus to high-grade dysplasia (HGD) and esophageal adenocarcinoma (EAC). However, quantification of fluorescence images is currently done only after the endoscopic procedure. We developed a Chan-Vese-based algorithm to segment fluorescence targets, with subsequent morphological operations to generate the background, thus calculating target/background (T/B) ratios, potentially providing real-time guidance for biopsy and endoscopic therapy. With an initial processing speed of 2 fps, and by calculating the T/B ratio for each frame, our method provides quasi-real-time quantification of the molecular probe labeling to the endoscopist. Furthermore, an automatic computer-aided diagnosis algorithm can be applied to the recorded endoscopic video, and the overall T/B ratio is calculated for each patient. The receiver operating characteristic curve was employed to determine the threshold for classification of HGD/EAC using leave-one-out cross-validation. With 92% sensitivity and 75% specificity in classifying HGD/EAC, our automatic algorithm shows promising results for a surveillance procedure to help manage esophageal cancer and other cancers inspected by endoscopy.
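
    A quasi-real-time T/B computation of this kind might look as follows (a sketch assuming scikit-image; the segmentation and dilation parameters are illustrative, not the authors' values).

    ```python
    import numpy as np
    from skimage.segmentation import chan_vese
    from skimage.morphology import binary_dilation, disk

    def target_background_ratio(frame):
        """frame: 2-D fluorescence image, float, scaled to [0, 1]."""
        target = chan_vese(frame, mu=0.25)                   # segment bright targets
        ring = binary_dilation(target, disk(10)) & ~target   # surrounding background
        if not target.any() or not ring.any():
            return np.nan
        return frame[target].mean() / frame[ring].mean()
    ```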

  15. Nutrient Stress Detection in Corn Using Neural Networks and AVIRIS Hyperspectral Imagery

    NASA Technical Reports Server (NTRS)

    Estep, Lee

    2001-01-01

    AVIRIS image cube data have been processed for the detection of nutrient stress in corn, both by known ratio-type algorithms and by trained neural networks. The USDA Shelton, NE, ARS Variable Rate Nitrogen Application (VRAT) experimental farm was the site used in the study. Upon application of ANOVA and Dunnett multiple comparison tests to the outcomes of both the neural network processing and the ratio-type algorithms, it was found that the neural network methodology provides a better overall capability to separate nutrient-stressed crops from in-field controls.

  16. Chlorophyll-a Algorithms for Oligotrophic Oceans: A Novel Approach Based on Three-Band Reflectance Difference

    NASA Technical Reports Server (NTRS)

    Hu, Chuanmin; Lee, Zhongping; Franz, Bryan

    2011-01-01

    A new empirical algorithm is proposed to estimate surface chlorophyll-a concentrations (Chl) in the global ocean for Chl less than or equal to 0.25 milligrams per cubic meter (approximately 77% of the global ocean area). The algorithm is based on a color index (CI), defined as the difference between the remote sensing reflectance (R_rs, sr^-1) in the green and a reference formed linearly between R_rs in the blue and red. For low-Chl waters, in situ data showed a tighter (and therefore better) relationship between CI and Chl than between traditional band ratios and Chl, which was further validated using global data collected concurrently by ship-borne and SeaWiFS satellite instruments. Model simulations showed that for low-Chl waters, compared with the band-ratio algorithm, the CI-based algorithm (CIA) was more tolerant to changes in the chlorophyll-specific backscattering coefficient, and performed similarly for different relative contributions of non-phytoplankton absorption. Simulations using existing atmospheric correction approaches further demonstrated that the CIA was much less sensitive than band-ratio algorithms to various errors induced by instrument noise and imperfect atmospheric correction (including sun glint and whitecap corrections). Image and time-series analyses of SeaWiFS and MODIS/Aqua data also showed improved performance in terms of reduced image noise, more coherent spatial and temporal patterns, and consistency between the two sensors. The reduction in noise and other errors is particularly useful for improving the detection of various ocean features such as eddies. Preliminary tests on MERIS and CZCS data indicate that the new approach should be generally applicable to all existing and future ocean color instruments.
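
    The color index itself is straightforward to compute; the sketch below assumes the SeaWiFS blue/green/red wavelengths of 443, 555, and 670 nm (the abstract does not list the bands, so these are an assumption, as is the omission of the empirical CI-to-Chl conversion).

    ```python
    def color_index(rrs443, rrs555, rrs670):
        """CI = Rrs(green) minus a linear reference between Rrs(blue) and Rrs(red)."""
        reference = rrs443 + (555.0 - 443.0) / (670.0 - 443.0) * (rrs670 - rrs443)
        return rrs555 - reference
    ```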

  17. Parametric binary dissection

    NASA Technical Reports Server (NTRS)

    Bokhari, Shahid H.; Crockett, Thomas W.; Nicol, David M.

    1993-01-01

    Binary dissection is widely used to partition non-uniform domains over parallel computers. This algorithm does not consider the perimeter, surface area, or aspect ratio of the regions being generated and can yield decompositions that have a poor communication-to-computation ratio. Parametric Binary Dissection (PBD) is a new algorithm in which each cut is chosen to minimize load + lambda x (shape). In a 2 (or 3) dimensional problem, load is the amount of computation to be performed in a subregion and shape could refer to the perimeter (respectively, surface area) of that subregion. Shape is a measure of communication overhead, and the parameter lambda permits us to trade off load imbalance against communication overhead. When lambda is zero, the algorithm reduces to plain binary dissection. This algorithm can be used to partition graphs embedded in 2- or 3-d: load is the number of nodes in a subregion, shape the number of edges that leave that subregion, and lambda the ratio of the time to communicate over an edge to the time to compute at a node. An algorithm is presented that finds the depth-d parametric dissection of an embedded graph with n vertices and e edges in O(max(n log n, de)) time, which is an improvement over the O(dn log n) time of plain binary dissection. Parallel versions of this algorithm are also presented; the best of these requires O((n/p) log^3 p) time on a p-processor hypercube, assuming graphs of bounded degree. How PBD is applied to 3-d unstructured meshes and yields partitions that are better than those obtained by plain dissection is described. Its application to the color image quantization problem is also discussed, in which samples in a high-resolution color space are mapped onto a lower-resolution space in a way that minimizes the color error.
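
    A toy single-cut version of the PBD cost conveys the idea (assumptions: planar points, load = point count, and shape approximated by the half-perimeter of each side's bounding box; the real algorithm recurses to depth d and handles embedded graphs and meshes).

    ```python
    import numpy as np

    def side_cost(side, lam):
        """load + lambda * shape for one side of a candidate cut."""
        w = np.ptp(side[:, 0]) if len(side) else 0.0
        h = np.ptp(side[:, 1]) if len(side) else 0.0
        return len(side) + lam * (w + h)

    def best_cut(pts, lam):
        """Scan vertical cuts and keep the one minimizing the worse side's cost."""
        xs = np.sort(pts[:, 0])
        best = (np.inf, None)
        for i in range(1, len(xs)):
            s = 0.5 * (xs[i - 1] + xs[i])
            left, right = pts[pts[:, 0] < s], pts[pts[:, 0] >= s]
            cost = max(side_cost(left, lam), side_cost(right, lam))
            best = min(best, (cost, s))
        return best    # (cost, cut position); lam = 0 recovers plain dissection
    ```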

  18. Spectrum sensing algorithm based on autocorrelation energy in cognitive radio networks

    NASA Astrophysics Data System (ADS)

    Ren, Shengwei; Zhang, Li; Zhang, Shibing

    2016-10-01

    Cognitive radio networks have wide applications in the smart home, personal communications and other wireless communications. Spectrum sensing is the main challenge in cognitive radio. This paper proposes a new spectrum sensing algorithm based on the autocorrelation energy of the received signal. By taking the autocorrelation energy of the received signal as the test statistic for spectrum sensing, the effect of channel noise on detection performance is reduced. Simulation results show that the algorithm is effective and performs well at low signal-to-noise ratios. Compared with the maximum generalized eigenvalue detection (MGED) algorithm, the function of covariance matrix based detection (FMD) algorithm and the autocorrelation-based detection (AD) algorithm, the proposed algorithm has a 2-11 dB advantage.

  19. A stationary wavelet transform and a time-frequency based spike detection algorithm for extracellular recorded data

    NASA Astrophysics Data System (ADS)

    Lieb, Florian; Stark, Hans-Georg; Thielemann, Christiane

    2017-06-01

    Objective. Spike detection from extracellular recordings is a crucial preprocessing step when analyzing neuronal activity. Whether a specific part of the signal is classified as a spike matters for all subsequent processing steps, such as spike sorting or burst detection, so erroneously identified spikes must be kept to a minimum. Many spike detection algorithms have already been suggested, all working reasonably well whenever the signal-to-noise ratio is large enough. When the noise level is high, however, these algorithms perform poorly. Approach. In this paper we present two new spike detection algorithms. The first is based on a stationary wavelet energy operator and the second on the time-frequency representation of spikes. Both algorithms are more reliable than the most commonly used methods. Main results. Their performance is confirmed using simulated data resembling original data recorded from cortical neurons with multielectrode arrays. To demonstrate that the performance is not restricted to one specific set of data, we also verify it on a simulated, publicly available data set. We show that both proposed algorithms perform best among all tested methods, regardless of the signal-to-noise ratio, in both data sets. Significance. This contribution will benefit electrophysiological investigations of human cells; in particular, spatial and temporal analysis of neural network communication is improved by the proposed spike detection algorithms.

  20. Evaluating the effect of online data compression on the disk cache of a mass storage system

    NASA Technical Reports Server (NTRS)

    Pentakalos, Odysseas I.; Yesha, Yelena

    1994-01-01

    A trace-driven simulation of the disk cache of a mass storage system was used to evaluate the effect of an online compression algorithm on various performance measures. Traces from the system at NASA's Center for Computational Sciences were used to run the simulation, and disk cache hit ratios and the numbers of files and bytes migrating to tertiary storage were measured. The measurements were performed for both an LRU and a size-based migration algorithm. In addition to showing the effect of online data compression on disk cache performance measures, the simulation provided insight into the characteristics of the interactive references, suggesting that hint-based prefetching algorithms are the only alternative for any future improvement of the disk cache hit ratio.
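
    A minimal trace-driven LRU cache simulation of the kind described (the trace format and capacity are illustrative; the size-based migration policy is not reproduced):

    ```python
    from collections import OrderedDict

    def lru_hit_ratio(trace, capacity):
        """trace: list of (file_id, size_bytes) references; returns the hit ratio."""
        cache, used, hits = OrderedDict(), 0, 0
        for fid, size in trace:
            if fid in cache:
                hits += 1
                cache.move_to_end(fid)          # mark as most recently used
            else:
                cache[fid] = size
                used += size
                while used > capacity:          # evict least recently used files
                    _, evicted = cache.popitem(last=False)
                    used -= evicted
        return hits / len(trace)
    ```

    Online compression can then be modeled by scaling each file size by its compression ratio before insertion, which effectively enlarges the cache.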

  1. Optimal line drop compensation parameters under multi-operating conditions

    NASA Astrophysics Data System (ADS)

    Wan, Yuan; Li, Hang; Wang, Kai; He, Zhe

    2017-01-01

    Line Drop Compensation (LDC) is a main function of Reactive Current Compensation (RCC), which is developed to improve voltage stability. While LDC benefits voltage, it may deteriorate the small-disturbance rotor angle stability of a power system. In this paper, an intelligent algorithm combining a Genetic Algorithm (GA) and a Backpropagation Neural Network (BPNN) is proposed to optimize the parameters of LDC. The objective function proposed here takes into consideration the voltage deviation and the minimal damping ratio of power system oscillation under multiple operating conditions. A simulation based on the middle area of the Jiangxi province power system is used to demonstrate the intelligent algorithm. The optimization result shows that the coordinately optimized parameters can meet the multi-operating-condition requirements and improve voltage stability as much as possible while guaranteeing a sufficient damping ratio.

  2. SPH investigation of the thermal effects on the fluid mixing in a microchannel with rotating stirrers

    NASA Astrophysics Data System (ADS)

    Shamsoddini, Rahim

    2018-04-01

    An incompressible smoothed particle hydrodynamics algorithm is proposed to model and investigate the thermal effect on the mixing rate of an active micromixer in which rotating stirrers enhance the mixing rate. In liquids, mass diffusion increases with increasing temperature while viscosity decreases, so the local Schmidt number decreases considerably with increasing temperature. The present study investigates the effect of wall temperature on mixing rate with an improved SPH method. The robust SPH method used in the present work is equipped with a shifting algorithm and renormalization tensors. With this algorithm, the mass, momentum, energy, and concentration equations are solved. The results, discussed for different temperature ratios, show that the mixing rate increases significantly with increased temperature ratio.

  3. A new bio-optical algorithm for the remote sensing of algal blooms in complex ocean waters

    NASA Astrophysics Data System (ADS)

    Shanmugam, Palanisamy

    2011-04-01

    A new bio-optical algorithm has been developed to provide accurate assessments of chlorophyll a (Chl a) concentration for detection and mapping of algal blooms from satellite data in optically complex waters, where the presence of suspended sediments and dissolved substances can interfere with phytoplankton signal and thus confound conventional band ratio algorithms. A global data set of concurrent measurements of pigment concentration and radiometric reflectance was compiled and used to develop this algorithm that uses the normalized water-leaving radiance ratios along with an algal bloom index (ABI) between three visible bands to determine Chl a concentrations. The algorithm is derived using Sea-viewing Wide Field-of-view Sensor bands, and it is subsequently tuned to be applicable to Moderate Resolution Imaging Spectroradiometer (MODIS)/Aqua data. When compared with large in situ data sets and satellite matchups in a variety of coastal and ocean waters the present algorithm makes good retrievals of the Chl a concentration and shows statistically significant improvement over current global algorithms (e.g., OC3 and OC4v4). An examination of the performance of these algorithms on several MODIS/Aqua images in complex waters of the Arabian Sea and west Florida shelf shows that the new algorithm provides a better means for detecting and differentiating algal blooms from other turbid features, whereas the OC3 algorithm has significant errors although yielding relatively consistent results in clear waters. These findings imply that, provided that an accurate atmospheric correction scheme is available to deal with complex waters, the current MODIS/Aqua, MERIS and OCM data could be extensively used for quantitative and operational monitoring of algal blooms in various regional and global waters.

  4. Fast ℓ1-regularized space-time adaptive processing using alternating direction method of multipliers

    NASA Astrophysics Data System (ADS)

    Qin, Lilong; Wu, Manqing; Wang, Xuan; Dong, Zhen

    2017-04-01

    Motivated by the sparsity of filter coefficients in full-dimension space-time adaptive processing (STAP) algorithms, this paper proposes a fast ℓ1-regularized STAP algorithm based on the alternating direction method of multipliers to accelerate the convergence and reduce the calculations. The proposed algorithm uses a splitting variable to obtain an equivalent optimization formulation, which is addressed with an augmented Lagrangian method. Using the alternating recursive algorithm, the method can rapidly result in a low minimum mean-square error without a large number of calculations. Through theoretical analysis and experimental verification, we demonstrate that the proposed algorithm provides a better output signal-to-clutter-noise ratio performance than other algorithms.
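
    The splitting described maps onto the generic ADMM recipe for an ℓ1-regularized least-squares problem, sketched below (a stand-in illustration, not the paper's exact STAP formulation; rho, lam, and the iteration count are arbitrary).

    ```python
    import numpy as np

    def soft_threshold(v, t):
        """Complex soft-thresholding: shrink magnitudes by t, keep phases."""
        mag = np.abs(v)
        return np.where(mag > t, (1 - t / np.maximum(mag, 1e-12)) * v, 0)

    def admm_lasso(A, b, lam=0.1, rho=1.0, n_iter=100):
        """min_w 0.5 * ||A w - b||^2 + lam * ||w||_1 via x/z splitting."""
        n = A.shape[1]
        AtA, Atb = A.conj().T @ A, A.conj().T @ b
        L = np.linalg.cholesky(AtA + rho * np.eye(n))    # factor once, reuse
        x = z = u = np.zeros(n, dtype=complex)
        for _ in range(n_iter):
            rhs = Atb + rho * (z - u)
            x = np.linalg.solve(L.conj().T, np.linalg.solve(L, rhs))
            z = soft_threshold(x + u, lam / rho)         # sparsifying step
            u = u + x - z                                # dual update
        return z
    ```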

  5. Atmospheric correction of SeaWiFS imagery for turbid coastal and inland waters.

    PubMed

    Ruddick, K G; Ovidio, F; Rijkeboer, M

    2000-02-20

    The standard SeaWiFS atmospheric correction algorithm, designed for open ocean water, has been extended for use over turbid coastal and inland waters. Failure of the standard algorithm over turbid waters can be attributed to invalid assumptions of zero water-leaving radiance for the near-infrared bands at 765 and 865 nm. In the present study these assumptions are replaced by the assumptions of spatial homogeneity of the 765:865-nm ratios for aerosol reflectance and for water-leaving reflectance. These two ratios are imposed as calibration parameters after inspection of the Rayleigh-corrected reflectance scatterplot. The performance of the new algorithm is demonstrated for imagery of Belgian coastal waters and yields physically realistic water-leaving radiance spectra. A preliminary comparison with in situ radiance spectra for the Dutch Lake Markermeer shows significant improvement over the standard atmospheric correction algorithm. An analysis is made of the sensitivity of results to the choice of calibration parameters, and perspectives for application of the method to other sensors are briefly discussed.

  6. Node Self-Deployment Algorithm Based on Pigeon Swarm Optimization for Underwater Wireless Sensor Networks

    PubMed Central

    Yu, Shanen; Xu, Yiming; Jiang, Peng; Wu, Feng; Xu, Huan

    2017-01-01

    At present, free-to-move node self-deployment algorithms aim at event coverage and cannot improve network coverage under the premise of considering network connectivity, network reliability and network deployment energy consumption. Thus, this study proposes a pigeon-based self-deployment algorithm (PSA) for underwater wireless sensor networks to overcome the limitations of these existing algorithms. In PSA, the sink node first finds its one-hop nodes and maximizes the network coverage in its one-hop region. The one-hop nodes subsequently divide the network into layers and cluster within each layer. Each cluster head node constructs a connected path to the sink node to guarantee network connectivity. Finally, the cluster head node regards the ratio of the movement distance of the node to the change in the coverage redundancy ratio as the target function and employs pigeon swarm optimization to determine the positions of the nodes. Simulation results show that PSA improves both network connectivity and network reliability, decreases network deployment energy consumption, and increases network coverage. PMID:28338615

  7. Traffic sharing algorithms for hybrid mobile networks

    NASA Technical Reports Server (NTRS)

    Arcand, S.; Murthy, K. M. S.; Hafez, R.

    1995-01-01

    In a hybrid (terrestrial + satellite) mobile personal communications network environment, a large satellite footprint (supercell) overlays a large number of smaller, contiguous terrestrial cells. We assume that users have either a terrestrial-only single mode terminal (SMT) or a terrestrial/satellite dual mode terminal (DMT), and the ratio of DMTs to total terminals is defined as gamma. It is assumed that call assignments to, and handovers between, terrestrial cells and satellite supercells take place dynamically when necessary. The objectives of this paper are twofold: (1) to propose and define a class of traffic sharing algorithms to manage terrestrial and satellite network resources efficiently by handling call handovers dynamically, and (2) to analyze and evaluate the algorithms by maximizing the traffic load handling capability (defined in erl/cell) over a wide range of terminal ratios (gamma) given an acceptable range of blocking probabilities. Two of the algorithms (G & S) in the proposed class perform extremely well for a wide range of gamma.

  8. Application of laser-induced autofluorescence spectra detection system in human colorectal cancer in-vivo screening

    NASA Astrophysics Data System (ADS)

    Chia, Teck Chee; Fu, Sheng; Chia, Yee Hong; Kwek, Leong Chuan; Tang, Choong Leong

    2005-09-01

    This study aimed at applying the laser-induced autofluorescence (LIAF) diagnostic method to in-vivo screening of colorectal polyps and cancer. A spectral algorithm based on the ratio of autofluorescence intensities was used to distinguish diseased from normal tissue, as it generally performed better than an algorithm based simply on the intensity of the spectrum. Histopathological biopsy results were compared with the detected AF spectral characteristics for different kinds of polyps. 73 patients were examined with the LIAF spectroscopy detection system during their colonoscopy screening at the Endoscopy Center, Singapore General Hospital. The autofluorescence from the surface of the colorectal tissue under 405 nm laser excitation was detected using our system. Two groups of patients were involved in the experimental investigation. One group was the "abnormal" group: 25 patients in whom polyps or carcinoma were found in the colorectal tract during colonoscopy, with the group classification confirmed by the histopathology reports. In total, 36 polyp AF spectra and 9 carcinoma AF spectra were detected from the 25 patients of the abnormal group during their regular endoscopy examination. The intensity ratios I680/I500 and I630/I500 of the polyp/cancerous AF spectra and the intensity ratios of the corresponding normal colorectal AF spectra were calculated. Critical intensity ratios for separating normal from abnormal colorectal tissue were defined as 0.5 for I680/I500 and 0.6 for I630/I500. Using these critical values, the rectums of 48 "normal" group patients were checked with the LIAF detection system. For 20 patients (41.7%), the AF spectra of the colorectal tract mucosa were classified as abnormal, even though no abnormality had been found under white light via traditional endoscopy. For small diseased areas, such as small flat polyps and carcinoma, identification under white light endoscopy is very difficult. The LIAF spectral technique and the AF intensity ratio algorithm, however, were able to detect these kinds of abnormal areas earlier than traditional endoscopy. Using this algorithm, it is possible to identify the onset of abnormal tissue growth during real-time clinical endoscopic examination.

  9. A modified 3D algorithm for road traffic noise attenuation calculations in large urban areas.

    PubMed

    Wang, Haibo; Cai, Ming; Yao, Yifan

    2017-07-01

    The primary objective of this study is the development and application of a 3D road traffic noise attenuation calculation algorithm. First, the traditional empirical method does not address problems caused by non-direct occlusion by buildings and by differing building heights. In contrast, this study considers the volume ratio of the buildings and the area ratio of the projection of buildings adjacent to the road. The influence of the ground effect is analyzed. The insertion loss due to barriers (both infinite and finite) is also synthesized in the algorithm. Second, the impact of different road segmentations is analyzed. Through the case of Pearl River New Town, 5° is recommended as the most appropriate scanning angle, as the computational time is acceptable and the average error is approximately 3.1 dB. In addition, the algorithm requires only 1/17 of the time that the beam tracking method requires, at the cost of less precise results. Finally, the noise calculation for a large urban area with a high density of buildings shows the feasibility of the 3D noise attenuation calculation algorithm. The algorithm is expected to be applied in projects requiring large-area noise simulations.

  10. A Space Object Detection Algorithm using Fourier Domain Likelihood Ratio Test

    NASA Astrophysics Data System (ADS)

    Becker, D.; Cain, S.

    Space object detection is of great importance in the highly dependent yet competitive and congested space domain. The detection algorithms employed play a crucial role in fulfilling the detection component of the situational awareness mission to detect, track, characterize and catalog unknown space objects. Many current space detection algorithms use a matched filter or a spatial correlator to make a detection decision at a single pixel of a spatial image, based on the assumption that the data follow a Gaussian distribution. This paper explores the potential for detection performance advantages when operating in the Fourier domain of long-exposure images of small and/or dim space objects from ground-based telescopes. A binary hypothesis test is developed based on the joint probability distribution function of the image under the hypothesis that an object is present and under the hypothesis that the image contains only background noise. The detection algorithm tests each pixel of the Fourier-transformed images to determine whether an object is present, based on the threshold criterion from the likelihood ratio test. Using simulated data, the performance of the Fourier domain detection algorithm is compared to the algorithm currently used in space situational awareness applications to assess its value.

  11. Simulation Analysis of Computer-Controlled pressurization for Mixture Ratio Control

    NASA Technical Reports Server (NTRS)

    Alexander, Leslie A.; Bishop-Behel, Karen; Benfield, Michael P. J.; Kelley, Anthony; Woodcock, Gordon R.

    2005-01-01

    A procedural-code (C++) simulation was developed to investigate the potential for mixture ratio control of pressure-fed spacecraft rocket propulsion systems by measuring propellant flows, tank liquid quantities, or both, and using feedback from these measurements to adjust propellant tank pressures to set the correct operating mixture ratio for minimum propellant residuals. The pressurization system eliminated mechanical regulators in favor of a computer-controlled, servo-driven throttling valve. We found that a quasi-steady-state simulation (pressure and flow transients in the pressurization system resulting from changes in flow control valve position are ignored) is adequate for this purpose. Monte Carlo methods are used to obtain simulated statistics on propellant depletion. Mixture ratio control algorithms based on proportional-integral-differential (PID) controller methods were developed. These algorithms actually set target tank pressures; the tank pressures are controlled by another PID controller. Simulation indicates this approach can provide reductions in residual propellants.
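
    A minimal discrete PID loop of the kind such a cascade uses (gains and timestep are illustrative; in the scheme described, an outer loop of this form sets the target tank pressures while an inner loop drives the throttling valve toward them).

    ```python
    class PID:
        """Textbook discrete proportional-integral-derivative controller."""

        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_err = 0.0

        def update(self, setpoint, measured):
            err = setpoint - measured
            self.integral += err * self.dt            # accumulate error
            deriv = (err - self.prev_err) / self.dt   # rate of change of error
            self.prev_err = err
            return self.kp * err + self.ki * self.integral + self.kd * deriv
    ```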

  12. A novel consistent and well-balanced algorithm for simulations of multiphase flows on unstructured grids

    NASA Astrophysics Data System (ADS)

    Patel, Jitendra Kumar; Natarajan, Ganesh

    2017-12-01

    We discuss the development and assessment of a robust numerical algorithm for simulating multiphase flows with complex interfaces and high density ratios on arbitrary polygonal meshes. The algorithm combines the volume-of-fluid method with an incremental projection approach for incompressible multiphase flows in a novel hybrid staggered/non-staggered framework. The key principles that characterise the algorithm are the consistent treatment of discrete mass and momentum transport and the similar discretisation of force terms appearing in the momentum equation. The former is achieved by invoking identical schemes for convective transport of volume fraction and momentum in the respective discrete equations while the latter is realised by representing the gravity and surface tension terms as gradients of suitable scalars which are then discretised in identical fashion resulting in a balanced formulation. The hybrid staggered/non-staggered framework employed herein solves for the scalar normal momentum at the cell faces, while the volume fraction is computed at the cell centroids. This is shown to naturally lead to similar terms for pressure and its correction in the momentum and pressure correction equations respectively, which are again treated discretely in a similar manner. We show that spurious currents that corrupt the solution may arise both from an unbalanced formulation where forces (gravity and surface tension) are discretised in dissimilar manner and from an inconsistent approach where different schemes are used to convect the mass and momentum, with the latter prominent in flows which are convection-dominant with high density ratios. Interestingly, the inconsistent approach is shown to perform as well as the consistent approach even for high density ratio flows in some cases while it exhibits anomalous behaviour for other scenarios, even at low density ratios. Using a plethora of test problems of increasing complexity, we conclusively demonstrate that the consistent transport and balanced force treatment results in a numerically stable solution procedure and physically consistent results. The algorithm proposed in this study qualifies as a robust approach to simulate multiphase flows with high density ratios on unstructured meshes and may be realised in existing flow solvers with relative ease.

  13. A Novel Attitude Estimation Algorithm Based on the Non-Orthogonal Magnetic Sensors

    PubMed Central

    Zhu, Jianliang; Wu, Panlong; Bo, Yuming

    2016-01-01

    Because the existing extremum ratio method for projectile attitude measurement is vulnerable to random disturbance, a novel integral ratio method is proposed to calculate the projectile attitude. First, the non-orthogonal measurement theory of the magnetic sensors is analyzed. It is found that the projectile rotating velocity is constant in one spinning circle and that the attitude error is actually the pitch error. Next, by investigating the model of the extremum ratio method, an integral ratio mathematical model is established to improve the anti-disturbance performance. Finally, by combining the preprocessed magnetic sensor data, based on the least-squares method, with the rotating extremum features in one cycle, the analytical expression of the proposed integral ratio algorithm is derived with respect to the pitch angle. The simulation results show that the proposed integral ratio method gives more accurate attitude calculations than the extremum ratio method, and that the attitude error variance can decrease by more than 90%. Compared to the extremum ratio method (which collects only a single data point in one rotation cycle), the proposed integral ratio method can utilize all of the data collected in the high-spin environment, which is a clearly superior calculation approach and is applicable under actual projectile environment disturbances. PMID:27213389

  14. A novel color image compression algorithm using the human visual contrast sensitivity characteristics

    NASA Astrophysics Data System (ADS)

    Yao, Juncai; Liu, Guizhong

    2017-03-01

    In order to achieve a higher image compression ratio and improve the visual perception of the decompressed image, a novel color image compression scheme based on the contrast sensitivity characteristics of the human visual system (HVS) is proposed. In the proposed scheme, the image is first converted into the YCrCb color space and divided into sub-blocks. Afterwards, the discrete cosine transform is carried out for each sub-block, and three quantization matrices are built to quantize the frequency spectrum coefficients of the images by incorporating the contrast sensitivity characteristics of the HVS. The Huffman algorithm is used to encode the quantized data. The inverse process involves decompression and matching to reconstruct the decompressed color image. Simulations were carried out for two color images. The results show that the average structural similarity index measure (SSIM) and peak signal-to-noise ratio (PSNR) at approximately the same compression ratio could be increased by 2.78% and 5.48%, respectively, compared with Joint Photographic Experts Group (JPEG) compression. The results indicate that the proposed compression algorithm is feasible and effective in achieving a higher compression ratio while ensuring encoding and image quality, and it can fully meet the needs of storage and transmission of color images in daily life.
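
    The DCT-plus-quantization core of such a scheme can be sketched as follows (the CSF-weighted quantization matrix q is left as an input because the paper's three matrices are not given in the abstract).

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    def compress_block(block, q):
        """block: 8x8 luma tile (0-255); q: 8x8 CSF-weighted quantization matrix."""
        coeffs = dctn(block - 128.0, norm="ortho")
        return np.round(coeffs / q).astype(int)     # these integers feed the Huffman coder

    def decompress_block(qcoeffs, q):
        """Invert quantization and the DCT to reconstruct the tile."""
        return idctn(qcoeffs * q, norm="ortho") + 128.0
    ```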

  15. Focusing light through random scattering media by four-element division algorithm

    NASA Astrophysics Data System (ADS)

    Fang, Longjie; Zhang, Xicheng; Zuo, Haoyi; Pang, Lin

    2018-01-01

    The focusing of light through random scattering materials using wavefront shaping is studied in detail. We propose a new approach, the four-element division algorithm, to improve the average convergence rate and signal-to-noise ratio of focusing. Using 4096 independently controlled segments of light, the intensity at the target is enhanced 72-fold over the original intensity at the same position. The four-element division algorithm and existing phase-control focusing algorithms for scattering media are compared in both numerical simulation and experiment. We find that the four-element division algorithm is particularly advantageous for improving the average convergence rate of focusing.

  16. Pilot-based parametric channel estimation algorithm for DCO-OFDM-based visual light communications

    NASA Astrophysics Data System (ADS)

    Qian, Xuewen; Deng, Honggui; He, Hailang

    2017-10-01

    Due to the wide modulation bandwidth in optical communication, multipath channels may be non-sparse and severely degrade communication performance. Traditional compressive sensing-based channel estimation algorithms cannot be employed in such situations. In this paper, we propose a practical parametric channel estimation algorithm for orthogonal frequency division multiplexing (OFDM)-based visible light communication (VLC) systems, based on a modified zero correlation code (ZCC) pair that has an impulse-like correlation property. Simulation results show that the proposed algorithm outperforms the existing least squares (LS)-based algorithm in both bit error ratio (BER) and frequency response estimation.

  17. Spaceborne SAR Imaging Algorithm for Coherence Optimized.

    PubMed

    Qiu, Zhiwei; Yue, Jianping; Wang, Xueqin; Yue, Shun

    2016-01-01

    This paper proposes a SAR imaging algorithm with the largest coherence, based on existing SAR imaging algorithms. The basic idea of SAR imaging is that the output signal attains the maximum signal-to-noise ratio (SNR) when optimal imaging parameters are used. A traditional imaging algorithm can achieve the best focusing effect but introduces decoherence in the subsequent interferometric processing. The algorithm proposed in this paper applies consistent imaging parameters to the SAR echoes during focusing. Although the SNR of the output signal is reduced slightly, coherence is largely preserved, and an interferogram of high quality is finally obtained. In this paper, two scenes of Envisat ASAR data over Zhangbei are employed to test this algorithm. Compared with the interferogram from the traditional algorithm, the results show that this algorithm is more suitable for SAR interferometry (InSAR) research and application.

  18. Spaceborne SAR Imaging Algorithm for Coherence Optimized

    PubMed Central

    Qiu, Zhiwei; Yue, Jianping; Wang, Xueqin; Yue, Shun

    2016-01-01

    This paper proposes a SAR imaging algorithm with the largest coherence, based on existing SAR imaging algorithms. The basic idea of SAR imaging is that the output signal attains the maximum signal-to-noise ratio (SNR) when optimal imaging parameters are used. A traditional imaging algorithm can achieve the best focusing effect but introduces decoherence in the subsequent interferometric processing. The algorithm proposed in this paper applies consistent imaging parameters to the SAR echoes during focusing. Although the SNR of the output signal is reduced slightly, coherence is largely preserved, and an interferogram of high quality is finally obtained. In this paper, two scenes of Envisat ASAR data over Zhangbei are employed to test this algorithm. Compared with the interferogram from the traditional algorithm, the results show that this algorithm is more suitable for SAR interferometry (InSAR) research and application. PMID:26871446

  19. A proposed adaptive step size perturbation and observation maximum power point tracking algorithm based on photovoltaic system modeling

    NASA Astrophysics Data System (ADS)

    Huang, Yu

    Solar energy is one of the major alternative renewable energy options owing to its abundance and accessibility. Because of its intermittent nature, Maximum Power Point Tracking (MPPT) techniques are in high demand when a photovoltaic (PV) system is used to extract energy from sunlight. This thesis proposes an advanced Perturbation and Observation (P&O) algorithm aimed at practical operating circumstances. First, a practical PV system model is studied, including determination of the series and shunt resistances, which are neglected in some research. Moreover, in the proposed algorithm the duty ratio of a boost DC-DC converter is the perturbed variable, using input impedance conversion to adjust the operating voltage. Based on this control strategy, an adaptive duty-ratio step-size P&O algorithm is proposed, with major modifications for sharp insolation changes as well as low-insolation scenarios. Matlab/Simulink simulation of the PV model, the boost converter control strategy, and the various MPPT processes is conducted step by step. The proposed adaptive P&O algorithm is validated by the simulation results and a detailed analysis of sharp insolation changes, low-insolation conditions, and continuous insolation variation.
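
    An adaptive-step P&O update acting on the boost converter duty ratio might look like this (a sketch; the step bounds, scaling factor, and the sign convention relating duty ratio to PV voltage are assumptions, not the thesis's exact tuning).

    ```python
    def p_and_o_step(v, p, v_prev, p_prev, duty,
                     step0=0.005, k=0.05, step_min=0.001, step_max=0.02):
        """One perturb-and-observe update; v, p are the measured PV voltage and power."""
        dp, dv = p - p_prev, v - v_prev
        # Scale the perturbation with |dP/dV|: large far from the MPP, small near it.
        step = min(step_max, max(step_min, k * abs(dp / dv))) if dv else step0
        if dp * dv > 0:        # power rises with voltage: move voltage up
            duty -= step       # for a boost input stage, lower duty raises PV voltage
        else:
            duty += step
        return min(max(duty, 0.0), 0.95)
    ```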

  20. Aircraft control surface failure detection and isolation using the OSGLR test. [orthogonal series generalized likelihood ratio

    NASA Technical Reports Server (NTRS)

    Bonnice, W. F.; Motyka, P.; Wagner, E.; Hall, S. R.

    1986-01-01

    The performance of the orthogonal series generalized likelihood ratio (OSGLR) test in detecting and isolating commercial aircraft control surface and actuator failures is evaluated. A modification to incorporate age-weighting which significantly reduces the sensitivity of the algorithm to modeling errors is presented. The steady-state implementation of the algorithm based on a single linear model valid for a cruise flight condition is tested using a nonlinear aircraft simulation. A number of off-nominal no-failure flight conditions including maneuvers, nonzero flap deflections, different turbulence levels and steady winds were tested. Based on the no-failure decision functions produced by off-nominal flight conditions, the failure detection and isolation performance at the nominal flight condition was determined. The extension of the algorithm to a wider flight envelope by scheduling on dynamic pressure and flap deflection is examined. Based on this testing, the OSGLR algorithm should be capable of detecting control surface failures that would affect the safe operation of a commercial aircraft. Isolation may be difficult if there are several surfaces which produce similar effects on the aircraft. Extending the algorithm over the entire operating envelope of a commercial aircraft appears feasible.

  1. Evaluation of coastal zone color scanner diffuse attenuation coefficient algorithms for application to coastal waters

    NASA Astrophysics Data System (ADS)

    Mueller, James L.; Trees, Charles C.; Arnone, Robert A.

    1990-09-01

    The Coastal Zone Color Scanner (CZCS) and associated atmospheric and in-water algorithms have allowed synoptic analyses of regional and large-scale variability of bio-optical properties [phytoplankton pigments and the diffuse attenuation coefficient K(490)]. Austin and Petzold (1981) developed a robust in-water K(490) algorithm which related the diffuse attenuation coefficient at one optical depth [1/K(490)] to the ratio of the water-leaving radiances at 443 and 550 nm. Their regression analysis included diffuse attenuation coefficients K(490) up to 0.40 m^-1, but excluded data from estuarine areas and other Case II waters, where the optical properties are not predominantly determined by phytoplankton. In these areas, errors are induced in remotely sensed retrievals of K(490) by extremely low water-leaving radiance at 443 nm [Lw(443) as viewed at the sensor may be only 1 or 2 digital counts], and improved accuracy can be realized using algorithms based on wavelengths where Lw(λ) is larger. Using ocean optical profiles acquired by the Visibility Laboratory, algorithms are developed to predict K(490) from ratios of water-leaving radiances at 520 and 670 nm, as well as 443 and 550 nm.

  2. High-speed and high-ratio referential genome compression.

    PubMed

    Liu, Yuansheng; Peng, Hui; Wong, Limsoon; Li, Jinyan

    2017-11-01

    The rapidly increasing number of genomes generated by high-throughput sequencing platforms and assembly algorithms is accompanied by problems in data storage, compression and communication. Traditional compression algorithms are unable to meet the demand for high compression ratios due to the intrinsically challenging features of DNA sequences, such as a small alphabet size and frequent repeats and palindromes. Reference-based lossless compression, in which only the differences between two similar genomes are stored, is a promising approach with a high compression ratio. We present a high-performance referential genome compression algorithm named HiRGC. It is based on a 2-bit encoding scheme and an advanced greedy-matching search on a hash table. We compare the performance of HiRGC with four state-of-the-art compression methods on a benchmark dataset of eight human genomes. HiRGC takes <30 min to compress about 21 gigabytes of each set of the seven target genomes into 96-260 megabytes, achieving compression ratios of 217 to 82 times. This performance is at least 1.9 times better than the best competing algorithm in its best case. Our compression speed is also at least 2.9 times faster. HiRGC is stable and robust in dealing with different reference genomes; in contrast, the competing methods' performance varies widely across reference genomes. Further experiments on 100 human genomes from the 1000 Genomes Project and on genomes of several other species again demonstrate that HiRGC's performance is consistently excellent. The C++ and Java source code of our algorithm is freely available for academic and non-commercial use and can be downloaded from https://github.com/yuansliu/HiRGC.
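
    The 2-bit base packing underlying such schemes is simple to illustrate (a generic sketch; HiRGC's hash-table greedy matching against the reference is not reproduced).

    ```python
    CODE = {"A": 0, "C": 1, "G": 2, "T": 3}
    BASE = "ACGT"

    def pack(seq):
        """Pack an ACGT string into bytes, 4 bases per byte."""
        out = bytearray()
        for i in range(0, len(seq), 4):
            chunk = seq[i:i + 4]
            b = 0
            for ch in chunk:
                b = (b << 2) | CODE[ch]
            out.append(b << 2 * (4 - len(chunk)))    # left-pad the final byte
        return bytes(out), len(seq)

    def unpack(data, n):
        """Recover the first n bases from the packed bytes."""
        bases = []
        for b in data:
            for shift in (6, 4, 2, 0):
                bases.append(BASE[(b >> shift) & 3])
        return "".join(bases[:n])
    ```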

  3. Foundations for statistical-physical precipitation retrieval from passive microwave satellite measurements. I - Brightness-temperature properties of a time-dependent cloud-radiation model

    NASA Technical Reports Server (NTRS)

    Smith, Eric A.; Mugnai, Alberto; Cooper, Harry J.; Tripoli, Gregory J.; Xiang, Xuwu

    1992-01-01

    The relationship between emerging microwave brightness temperatures (T(B)s) and vertically distributed mixtures of liquid and frozen hydrometeors was investigated, using a cloud-radiation model, in order to establish the framework for a hybrid statistical-physical rainfall retrieval algorithm. Although strong relationships were found between the T(B) values and various rain parameters, these correlations are misleading in that the T(B)s are largely controlled by fluctuations in the ice-particle mixing ratios, which in turn are highly correlated to fluctuations in liquid-particle mixing ratios. However, the empirically based T(B)-rain-rate (T(B)-RR) algorithms can still be used as tools for estimating precipitation if the hydrometeor profiles used for T(B)-RR algorithms are not specified in an ad hoc fashion.

  4. A complex symbol signal-to-noise ratio estimator and its performance

    NASA Technical Reports Server (NTRS)

    Feria, Y.

    1994-01-01

    This article presents an algorithm for estimating the signal-to-noise ratio (SNR) of signals that contain data on a downconverted suppressed carrier or the first harmonic of a square-wave subcarrier. This algorithm can be used to determine the performance of the full-spectrum combiner for the Galileo S-band (2.2- to 2.3-GHz) mission by measuring the input and output symbol SNR. A performance analysis of the algorithm shows that the estimator can estimate the complex symbol SNR using 10,000 symbols at a true symbol SNR of -5 dB with a mean of -4.9985 dB and a standard deviation of 0.2454 dB, and these analytical results are checked by simulations of 100 runs with a mean of -5.06 dB and a standard deviation of 0.2506 dB.
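
    The article's own estimator derivation is not reproduced here, but a standard moment-based (M2M4) estimator gives the flavor of estimating complex-symbol SNR directly from received samples. The simulation below only loosely mirrors the quoted test case, and all names are illustrative:

```python
import numpy as np

def m2m4_snr_db(y):
    """Estimate SNR of constant-modulus symbols in complex Gaussian noise.

    For such signals: M2 = S + N and M4 = S^2 + 4*S*N + 2*N^2,
    so S = sqrt(2*M2^2 - M4) and N = M2 - S.
    """
    m2 = np.mean(np.abs(y) ** 2)
    m4 = np.mean(np.abs(y) ** 4)
    s = np.sqrt(max(2 * m2 ** 2 - m4, 0.0))
    n = max(m2 - s, 1e-12)
    return 10 * np.log10(s / n)

rng = np.random.default_rng(0)
true_snr_db = -5.0
n_sym = 10_000
symbols = rng.choice([1.0, -1.0], n_sym)        # unit-power BPSK symbols
noise_power = 10 ** (-true_snr_db / 10)
noise = np.sqrt(noise_power / 2) * (rng.standard_normal(n_sym)
                                    + 1j * rng.standard_normal(n_sym))
print(m2m4_snr_db(symbols + noise))             # close to -5 dB
```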

  5. Implementation and performance of shutterless uncooled micro-bolometer cameras

    NASA Astrophysics Data System (ADS)

    Das, J.; de Gaspari, D.; Cornet, P.; Deroo, P.; Vermeiren, J.; Merken, P.

    2015-06-01

    A shutterless algorithm is implemented in the Xenics LWIR thermal cameras and modules. Based on a calibration set and a global temperature coefficient, the optimal non-uniformity correction is calculated on board the camera. The limited resources in the camera require a compact algorithm, hence the efficiency of the coding is important. The performance of the shutterless algorithm is studied by comparing the residual non-uniformity (RNU) and signal-to-noise ratio (SNR) between the shutterless and shuttered correction algorithms. From this comparison we conclude that the shutterless correction performs only slightly worse than the standard shuttered algorithm, making it very attractive for thermal infrared applications where small weight and size, and continuous operation, are important.

  6. GPS Modeling and Analysis. Summary of Research: GPS Satellite Axial Ratio Predictions

    NASA Technical Reports Server (NTRS)

    Axelrad, Penina; Reeh, Lisa

    2002-01-01

    This report outlines the algorithms developed at the Colorado Center for Astrodynamics Research to model yaw and predict the axial ratio as measured from a ground station. The algorithms are implemented in a collection of Matlab functions and scripts that read certain user input, such as ground station coordinates, the UTC time, and the desired GPS (Global Positioning System) satellites, and compute the above-mentioned parameters. The position information for the GPS satellites is obtained from Yuma almanac files corresponding to the prescribed date. The results are displayed graphically through time histories and azimuth-elevation plots.

  7. Focusing light through random photonic layers by four-element division algorithm

    NASA Astrophysics Data System (ADS)

    Fang, Longjie; Zhang, Xicheng; Zuo, Haoyi; Pang, Lin

    2018-02-01

    The propagation of waves in turbid media is a fundamental problem of optics with vast applications. Optical phase optimization approaches for focusing light through turbid media using phase-control algorithms have been widely studied in recent years, owing to the rapid development of spatial light modulators. Existing approaches include element-based algorithms (the stepwise sequential and continuous sequential algorithms) and whole-element optimization approaches (the partitioning algorithm, the transmission matrix approach and the genetic algorithm). The advantage of element-based approaches is that the phase contribution of each element is very clear; however, because the intensity contribution of each element to the focal point is small, especially when the number of elements is large, determining the optimal phase for a single element is difficult. In other words, the signal-to-noise ratio of the measurement is low, and the optimization may become trapped in local maxima. In whole-element optimization approaches, all elements are employed for the optimization, so the signal-to-noise ratio during the optimization is improved. However, because more randomness is introduced into the process, these optimizations take longer to converge than element-based approaches. Drawing on the advantages of both classes of approach, we propose the four-element division algorithm (FEDA). Comparisons with the existing approaches show that FEDA takes only one third of the measurement time to reach the optimum, which means that FEDA is promising for practical applications such as deep tissue imaging.
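
    As a point of reference for the element-based baseline that FEDA improves on, the stepwise sequential algorithm can be simulated with a random complex transmission vector standing in for the turbid medium. This sketch is not FEDA itself (whose four-element grouping is not detailed here), and all sizes and parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n_elements = 64
t = rng.standard_normal(n_elements) + 1j * rng.standard_normal(n_elements)
phases = np.zeros(n_elements)                    # SLM phase per element
test_phases = np.linspace(0, 2 * np.pi, 8, endpoint=False)

def focus_intensity(ph):
    """Intensity at the focal point: coherent sum through the medium."""
    return np.abs(np.sum(t * np.exp(1j * ph))) ** 2

# Stepwise sequential algorithm: optimize one element at a time,
# keeping all other phases fixed at their current values.
for i in range(n_elements):
    best = max(test_phases,
               key=lambda p: focus_intensity(
                   np.where(np.arange(n_elements) == i, p, phases)))
    phases[i] = best

# Focal intensity before vs. after the optimization.
print(focus_intensity(np.zeros(n_elements)), '->', focus_intensity(phases))
```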

  8. Model-free iterative control of repetitive dynamics for high-speed scanning in atomic force microscopy.

    PubMed

    Li, Yang; Bechhoefer, John

    2009-01-01

    We introduce an algorithm for calculating, offline or in real time and with no explicit system characterization, the feedforward input required for repetitive motions of a system. The algorithm is based on the secant method of numerical analysis and gives accurate motion at frequencies limited only by the signal-to-noise ratio and the actuator power and range. We illustrate the secant-solver algorithm on a stage used for atomic force microscopy.
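
    The samplewise secant update underlying such a scheme can be sketched as follows. The toy plant, function names and iteration counts are assumptions for illustration, not the authors' AFM stage:

```python
import numpy as np

def secant_feedforward(plant, r, u0, u1, n_iter=10, eps=1e-12):
    """Iteratively find the feedforward input u such that plant(u) == r.

    Applies the secant root-finding update independently to every time
    sample of the repetitive waveform; no explicit plant model is needed.
    """
    u_prev, u = np.asarray(u0, float), np.asarray(u1, float)
    e_prev = plant(u_prev) - r
    for _ in range(n_iter):
        e = plant(u) - r
        du = -e * (u - u_prev) / (e - e_prev + eps)
        u_prev, e_prev, u = u, e, u + du
    return u

# Toy "plant": a mild static nonlinearity acting samplewise.
plant = lambda u: u + 0.1 * u ** 3
r = np.sin(np.linspace(0, 2 * np.pi, 256))       # desired repetitive motion
u = secant_feedforward(plant, r, r, 1.1 * r)
print(np.max(np.abs(plant(u) - r)))              # near machine precision
```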

  9. Context-Sensitive Grammar Transform: Compression and Pattern Matching

    NASA Astrophysics Data System (ADS)

    Maruyama, Shirou; Tanaka, Youhei; Sakamoto, Hiroshi; Takeda, Masayuki

    A framework of context-sensitive grammar transform for speeding up compressed pattern matching (CPM) is proposed. A greedy compression algorithm with the transform model is presented, as well as a Knuth-Morris-Pratt (KMP)-type compressed pattern matching algorithm. The compression ratio is comparable to gzip and Re-Pair, and the search speed of our CPM algorithm is almost twice that of the KMP-type CPM algorithm on Byte-Pair-Encoding by Shibata et al. [18]; for short patterns, it is also faster than the Boyer-Moore-Horspool algorithm with the stopper encoding by Rautio et al. [14], which is regarded as one of the best combinations that allows a practically fast search.

  10. Path connectivity based spectral defragmentation in flexible bandwidth networks.

    PubMed

    Wang, Ying; Zhang, Jie; Zhao, Yongli; Zhang, Jiawei; Zhao, Jie; Wang, Xinbo; Gu, Wanyi

    2013-01-28

    Optical networks with flexible bandwidth provisioning have become a very promising networking architecture. They enable efficient resource utilization and support heterogeneous bandwidth demands. In this paper, two novel spectrum defragmentation approaches, i.e. the Maximum Path Connectivity (MPC) algorithm and the Path Connectivity Triggering (PCT) algorithm, are proposed based on the notion of path connectivity, which is defined to represent the maximum variation of node switching ability along the path in flexible bandwidth networks. A cost-performance-ratio based profitability model is given to capture the pros and cons of spectrum defragmentation. We compare these two proposed algorithms with a non-defragmentation algorithm in terms of blocking probability, and then analyze the differences in defragmentation profitability between the MPC and PCT algorithms.

  11. Speckle imaging techniques of the turbulence degraded images

    NASA Astrophysics Data System (ADS)

    Liu, Jin; Huang, Zongfu; Mao, Hongjun; Liang, Yonghui

    2018-03-01

    We propose a speckle imaging algorithm in which an improved form of the spectral ratio is used to obtain the Fried parameter, and a filter is applied to reduce high-frequency noise effects. Our algorithm improves the quality of the reconstructed images. The performance is illustrated by computer simulations.

  12. MAC Protocol for Ad Hoc Networks Using a Genetic Algorithm

    PubMed Central

    Elizarraras, Omar; Panduro, Marco; Méndez, Aldo L.

    2014-01-01

    The problem of obtaining the transmission rate in an ad hoc network consists in adjusting the power of each node so that the signal-to-interference ratio (SIR) requirement and the energy required to transmit from one node to another are satisfied at the same time. An optimal transmission rate for each node in a medium access control (MAC) protocol based on CSMA-CDMA (carrier sense multiple access-code division multiple access) for ad hoc networks can therefore be obtained using evolutionary optimization. This work proposes a genetic algorithm for transmission rate selection assuming perfect power control; our proposal achieves a 10% improvement over the scheme that uses the handshaking phase to adjust the transmission rate. Furthermore, this paper proposes a genetic algorithm that jointly solves the problem of power combining, interference, data rate and energy while ensuring the signal-to-interference ratio in an ad hoc network. The proposed genetic algorithm performs better (by 15%) than the CSMA-CDMA protocol without optimization. We show by simulation the effectiveness of the proposed protocol in terms of throughput. PMID:25140339

  13. Electroencephalogram-based decoding cognitive states using convolutional neural network and likelihood ratio based score fusion.

    PubMed

    Zafar, Raheel; Dass, Sarat C; Malik, Aamir Saeed

    2017-01-01

    Decoding human brain activity from the electroencephalogram (EEG) is challenging, owing to the low spatial resolution of EEG. However, EEG is an important technique, especially for brain-computer interface applications. In this study, a novel algorithm is proposed to decode brain activity associated with different types of images. In this hybrid algorithm, a convolutional neural network is modified for the extraction of features, a t-test is used for the selection of significant features, and likelihood ratio-based score fusion is used for the prediction of brain activity. The proposed algorithm takes input data from multichannel EEG time series, an approach also known as multivariate pattern analysis. Comprehensive analysis was conducted using data from 30 participants, and the results from the proposed method were compared with currently recognized feature extraction and classification/prediction techniques. The wavelet transform-support vector machine method, the most popular feature extraction and prediction method in current use, showed an accuracy of 65.7%, whereas the proposed method predicts novel data with an improved accuracy of 79.9%. In conclusion, the proposed algorithm outperformed the current feature extraction and prediction methods.

  14. Periodic modulation-based stochastic resonance algorithm applied to quantitative analysis for weak liquid chromatography-mass spectrometry signal of granisetron in plasma

    NASA Astrophysics Data System (ADS)

    Xiang, Suyun; Wang, Wei; Xiang, Bingren; Deng, Haishan; Xie, Shaofei

    2007-05-01

    The periodic modulation-based stochastic resonance algorithm (PSRA) was used to amplify and detect the weak liquid chromatography-mass spectrometry (LC-MS) signal of granisetron in plasma. In the algorithm, stochastic resonance (SR) is achieved by introducing an external periodic force into the nonlinear system. The optimization of parameters was carried out in two steps to give attention both to the signal-to-noise ratio (S/N) and to the peak shape of the output signal. By applying PSRA with the optimized parameters, the signal-to-noise ratio of the LC-MS peak was enhanced significantly, and the distorted peak shape that often appears in the traditional stochastic resonance algorithm was corrected by the added periodic force. Using the signals enhanced by PSRA, this method extended the limit of detection (LOD) and limit of quantification (LOQ) of granisetron in plasma from 0.05 and 0.2 ng/mL, respectively, to 0.01 and 0.02 ng/mL, and exhibited good linearity, accuracy and precision, which ensure accurate determination of the target analyte.
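
    The general shape of a bistable stochastic-resonance system driven by a weak signal plus an added periodic force, of the kind PSRA builds on, can be integrated with a simple Euler scheme. All parameters and names below are illustrative assumptions, not the optimized values from the study:

```python
import numpy as np

def bistable_sr(signal, a=1.0, b=1.0, amp=0.3, freq=0.001, dt=0.01):
    """Euler integration of a bistable system dx/dt = a*x - b*x^3 + s(t) + F(t),
    where F(t) is the external periodic force added to the nonlinear system."""
    x, out = 0.0, np.empty(len(signal))
    for k, s in enumerate(signal):
        force = amp * np.cos(2 * np.pi * freq * k)
        x += dt * (a * x - b * x ** 3 + s + force)
        out[k] = x
    return out

rng = np.random.default_rng(2)
n = 20_000
t = np.arange(n)
peak = 0.05 * np.exp(-0.5 * ((t - n / 2) / 400.0) ** 2)   # weak "LC-MS peak"
noisy = peak + 0.5 * rng.standard_normal(n)               # buried in noise
enhanced = bistable_sr(noisy)   # candidate input to downstream peak detection
```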

  15. Directional Histogram Ratio at Random Probes: A Local Thresholding Criterion for Capillary Images

    PubMed Central

    Lu, Na; Silva, Jharon; Gu, Yu; Gerber, Scott; Wu, Hulin; Gelbard, Harris; Dewhurst, Stephen; Miao, Hongyu

    2013-01-01

    With the development of micron-scale imaging techniques, capillaries can be conveniently visualized using methods such as two-photon and whole mount microscopy. However, the presence of background staining, leaky vessels and the diffusion of small fluorescent molecules can lead to significant complexity in image analysis and loss of information necessary to accurately quantify vascular metrics. One solution to this problem is the development of accurate thresholding algorithms that reliably distinguish blood vessels from surrounding tissue. Although various thresholding algorithms have been proposed, our results suggest that without appropriate pre- or post-processing, the existing approaches may fail to obtain satisfactory results for capillary images that include areas of contamination. In this study, we propose a novel local thresholding algorithm, called directional histogram ratio at random probes (DHR-RP). This method explicitly considers the geometric features of tube-like objects in conducting image binarization, and has a reliable performance in distinguishing small vessels from either clean or contaminated background. Experimental and simulation studies suggest that our DHR-RP algorithm is superior over existing thresholding methods. PMID:23525856

  16. Spectral areas and ratios classifier algorithm for pancreatic tissue classification using optical spectroscopy

    NASA Astrophysics Data System (ADS)

    Chandra, Malavika; Scheiman, James; Simeone, Diane; McKenna, Barbara; Purdy, Julianne; Mycek, Mary-Ann

    2010-01-01

    Pancreatic adenocarcinoma is one of the leading causes of cancer death, in part because of the inability of current diagnostic methods to reliably detect early-stage disease. We present the first assessment of the diagnostic accuracy of algorithms developed for pancreatic tissue classification using data from fiber-optic probe-based bimodal optical spectroscopy, a real-time approach that would be compatible with minimally invasive diagnostic procedures for early cancer detection in the pancreas. A total of 96 fluorescence and 96 reflectance spectra are considered from 50 freshly excised tissue sites, including human pancreatic adenocarcinoma, chronic pancreatitis (inflammation) and normal tissues, from nine patients. Classification algorithms using linear discriminant analysis are developed to distinguish among tissues, and leave-one-out cross-validation is employed to assess the classifiers' performance. The spectral areas and ratios classifier (SpARC) algorithm employs a combination of reflectance and fluorescence data and has the best performance, with sensitivity, specificity, negative predictive value, and positive predictive value for correctly identifying adenocarcinoma of 85, 89, 92, and 80%, respectively.
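
    The classification machinery described, linear discriminant analysis with leave-one-out cross-validation, is standard and can be sketched with scikit-learn; the synthetic features below merely stand in for the actual spectral areas and ratios:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(3)
# Stand-in for spectral areas/ratios: 50 tissue sites x 4 features.
X = rng.standard_normal((50, 4))
y = rng.integers(0, 2, 50)        # 1 = adenocarcinoma, 0 = other (toy labels)
X[y == 1] += 1.0                  # give the classes some separation

# One leave-one-out prediction per tissue site.
pred = cross_val_predict(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut())

tp = np.sum((pred == 1) & (y == 1)); tn = np.sum((pred == 0) & (y == 0))
fp = np.sum((pred == 1) & (y == 0)); fn = np.sum((pred == 0) & (y == 1))
print('sensitivity', tp / (tp + fn), 'specificity', tn / (tn + fp))
```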

  17. SnapDock—template-based docking by Geometric Hashing

    PubMed Central

    Estrin, Michael; Wolfson, Haim J.

    2017-01-01

    Motivation: A highly efficient template-based protein–protein docking algorithm, nicknamed SnapDock, is presented. It employs a Geometric Hashing-based structural alignment scheme to align the target proteins to the interfaces of non-redundant protein–protein interface libraries. Docking of a pair of proteins utilizing the 22 600-interface PIFACE library is performed in <2 min on average. A flexible version of the algorithm allowing hinge motion in one of the proteins is presented as well. Results: To evaluate the performance of the algorithm, a blind re-modelling of 3547 PDB complexes uploaded after the PIFACE publication was performed, with a success ratio of about 35%. Interestingly, a similar experiment with the template-free PatchDock docking algorithm yielded a success rate of about 23%, with roughly 1/3 of the solutions different from those of SnapDock. Consequently, the combination of the two methods gave a 42% success ratio. Availability and implementation: A web server of the application is under development. Contact: michaelestrin@gmail.com or wolfson@tau.ac.il PMID:28881968

  18. JPEG2000 still image coding quality.

    PubMed

    Chen, Tzong-Jer; Lin, Sheng-Chieh; Lin, You-Chen; Cheng, Ren-Gui; Lin, Li-Hui; Wu, Wei

    2013-10-01

    This work compares the image quality delivered by two popular JPEG2000 programs. The two medical image compression algorithms are both coded using JPEG2000, but they differ in interface, convenience, speed of computation, and characteristic options influenced by the encoder, quantization, tiling, etc. The differences in image quality and compression ratio are also affected by the modality and the compression algorithm implementation. Do they provide the same quality? The qualities of compressed medical images from two image compression programs named Apollo and JJ2000 were evaluated extensively using objective metrics. These algorithms were applied to three medical image modalities at compression ratios ranging from 10:1 to 100:1. The quality of the reconstructed images was then evaluated using five objective metrics, and the Spearman rank correlation coefficients were measured under every metric for the two programs. We found that JJ2000 and Apollo exhibited indistinguishable image quality for all images evaluated under all five metrics (r > 0.98, p < 0.001). It can be concluded that the image quality of the JJ2000 and Apollo algorithms is statistically equivalent for medical image compression.

  19. K-edge ratio method for identification of multiple nanoparticulate contrast agents by spectral CT imaging

    PubMed Central

    Ghadiri, H; Ay, M R; Shiran, M B; Soltanian-Zadeh, H

    2013-01-01

    Objective: Recently introduced energy-sensitive X-ray CT makes it feasible to discriminate different nanoparticulate contrast materials. The purpose of this work is to present a K-edge ratio method for differentiating multiple simultaneous contrast agents using spectral CT. Methods: The ratio of two images relevant to energy bins straddling the K-edge of the materials is calculated using an analytic CT simulator. In the resulting parametric map, the selected contrast agent regions can be identified using a thresholding algorithm. The K-edge ratio algorithm is applied to spectral images of simulated phantoms to identify and differentiate up to four simultaneous and targeted CT contrast agents. Results: We show that different combinations of simultaneous CT contrast agents can be identified by the proposed K-edge ratio method when energy-sensitive CT is used. In the K-edge parametric maps, the pixel values for biological tissues and contrast agents reach a maximum of 0.95, whereas for the selected contrast agents, the pixel values are larger than 1.10. The number of contrast agents that can be discriminated is limited owing to photon starvation. For reliable material discrimination, minimum photon counts corresponding to 140 kVp, 100 mAs and 5-mm slice thickness must be used. Conclusion: The proposed K-edge ratio method is a straightforward and fast method for identification and discrimination of multiple simultaneous CT contrast agents. Advances in knowledge: A new spectral CT-based algorithm is proposed which provides a new concept of molecular CT imaging by non-iteratively identifying multiple contrast agents when they are simultaneously targeting different organs. PMID:23934964
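
    The core of the method, dividing images from two energy bins straddling a K-edge and thresholding the resulting parametric map, reduces to a few lines of NumPy. The thresholds follow the values quoted above; the image arrays themselves are hypothetical:

```python
import numpy as np

def k_edge_ratio_map(img_above, img_below, threshold=1.10):
    """Parametric map: ratio of the energy-bin images straddling a K-edge.

    Pixels whose ratio exceeds the threshold are flagged as containing the
    selected contrast agent; per the paper, tissue stays at or below ~0.95.
    """
    ratio = img_above / np.maximum(img_below, 1e-9)   # avoid divide-by-zero
    return ratio, ratio > threshold

# Toy example: 2 "agent" pixels among background tissue.
below = np.full((4, 4), 100.0)
above = np.full((4, 4), 92.0)        # tissue: ratio ~0.92
above[1, 1] = above[2, 3] = 120.0    # agent pixels: ratio 1.20
ratio, mask = k_edge_ratio_map(above, below)
print(mask.sum())                    # -> 2
```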

  20. Evaluation of Building Energy Saving Through the Development of Venetian Blinds' Optimal Control Algorithm According to the Orientation and Window-to-Wall Ratio

    NASA Astrophysics Data System (ADS)

    Kwon, Hyuk Ju; Yeon, Sang Hun; Lee, Keum Ho; Lee, Kwang Ho

    2018-02-01

    As various studies focusing on building energy saving have been continuously conducted, studies utilizing renewable energy sources, instead of fossil fuel, are needed. In particular, studies regarding solar energy are being carried out in the field of building science; in order to utilize such solar energy effectively, solar radiation being brought into the indoors should be acquired and blocked properly. Blinds are a typical solar radiation control device that is capable of controlling indoor thermal and light environments. However, slat-type blinds are manually controlled, giving a negative effect on building energy saving. In this regard, studies regarding the automatic control of slat-type blinds have been carried out for the last couple of decades. Therefore, this study aims to provide preliminary data for optimal control research through the controlling of slat angle in slat-type blinds by comprehensively considering various input variables. The window area ratio and orientation were selected as input variables. It was found that an optimal control algorithm was different among each window-to-wall ratio and window orientation. In addition, through comparing and analyzing the building energy saving performance for each condition by applying the developed algorithms to simulations, up to 20.7 % energy saving was shown in the cooling period and up to 12.3 % energy saving was shown in the heating period. In addition, building energy saving effect was greater as the window area ratio increased given the same orientation, and the effects of window-to-wall ratio in the cooling period were higher than those of window-to-wall ratio in the heating period.

  1. Intelligent bandwidth compression

    NASA Astrophysics Data System (ADS)

    Tseng, D. Y.; Bullock, B. L.; Olin, K. E.; Kandt, R. K.; Olsen, J. D.

    1980-02-01

    The feasibility of a 1000:1 bandwidth compression ratio for image transmission has been demonstrated using image-analysis algorithms and a rule-based controller. Such a high compression ratio was achieved by first analyzing scene content using auto-cueing and feature-extraction algorithms, and then transmitting only the pertinent information consistent with mission requirements. A rule-based controller directs the flow of analysis and performs priority allocations on the extracted scene content. The reconstructed bandwidth-compressed image consists of an edge map of the scene background, with primary and secondary target windows embedded in the edge map. The bandwidth-compressed images are updated at a basic rate of 1 frame per second, with the high-priority target window updated at 7.5 frames per second. The scene-analysis algorithms used in this system together with the adaptive priority controller are described. Results of simulated 1000:1 bandwidth-compressed images are presented.

  2. Mitigate the impact of transmitter finite extinction ratio using K-means clustering algorithm for 16QAM signal

    NASA Astrophysics Data System (ADS)

    Yu, Miao; Li, Yan; Shu, Tong; Zhang, Yifan; Hong, Xiaobin; Qiu, Jifang; Zuo, Yong; Guo, Hongxiang; Li, Wei; Wu, Jian

    2018-02-01

    A method of recognizing 16QAM signals based on the k-means clustering algorithm is proposed to mitigate the impact of finite transmitter extinction ratio. Pilot symbols with 0.39% overhead are assigned as the initial centroids of the k-means clustering algorithm. Simulation results in a 10 GBaud 16QAM system show that the proposed method obtains higher identification precision than the traditional decision method under finite ER and IQ mismatch. Specifically, the proposed method improves the required OSNR by 5.5 dB, 4.5 dB, 4 dB and 3 dB at the FEC limit with ER = 12 dB, 16 dB, 20 dB and 24 dB, respectively, and the acceptable bias error and IQ mismatch ranges are widened by 767% and 360%, respectively, at ER = 16 dB.
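
    The pilot-initialized clustering step can be reproduced with scikit-learn by passing the pilot-derived centroids as the initial cluster centers. The constellation distortion below is a simplified stand-in for the simulated link, not the paper's channel model:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
levels = np.array([-3.0, -1.0, 1.0, 3.0])
ideal = np.array([(i, q) for i in levels for q in levels])  # 16QAM grid (I, Q)

# Received symbols: a crude finite-ER-like compression/offset plus noise.
n = 4000
tx = ideal[rng.integers(0, 16, n)]
rx = 0.9 * tx + 0.3 + 0.4 * rng.standard_normal((n, 2))

# Pilot symbols (low overhead in the paper) estimate the distorted centroids.
pilots = 0.9 * ideal + 0.3

km = KMeans(n_clusters=16, init=pilots, n_init=1).fit(rx)
detected = km.labels_   # cluster index identifies the constellation point
```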

  3. On the precision of automated activation time estimation

    NASA Technical Reports Server (NTRS)

    Kaplan, D. T.; Smith, J. M.; Rosenbaum, D. S.; Cohen, R. J.

    1988-01-01

    We examined how the assignment of local activation times in epicardial and endocardial electrograms is affected by sampling rate, ambient signal-to-noise ratio, and sin(x)/x waveform interpolation. Algorithms used for the estimation of fiducial point locations included dV/dt_max and a matched-filter detection algorithm. Test signals included epicardial and endocardial electrograms overlying both normal and infarcted regions of dog myocardium. Signal-to-noise levels were adjusted by combining known data sets with white noise "colored" to match the spectral characteristics of experimentally recorded noise. For typical signal-to-noise ratios and sampling rates, the template-matching algorithm provided the greatest precision in reproducibly estimating fiducial point location, and sin(x)/x interpolation allowed a further significant improvement. With few restrictions, combining these two techniques may allow the use of digitization rates below the Nyquist rate without significant loss of precision.

  4. Subjective evaluation of compressed image quality

    NASA Astrophysics Data System (ADS)

    Lee, Heesub; Rowberg, Alan H.; Frank, Mark S.; Choi, Hyung-Sik; Kim, Yongmin

    1992-05-01

    Lossy data compression generates distortion or error in the reconstructed image, and the distortion becomes visible as the compression ratio increases. Even at the same compression ratio, the distortion appears different depending on the compression method used. Because of the nonlinearity of the human visual system and of lossy data compression methods, we subjectively evaluated the quality of medical images compressed with two different methods, an intraframe and an interframe coding algorithm. The evaluated raw data were analyzed statistically to measure interrater reliability and the reliability of an individual reader. Analysis of variance was also used to identify which compression method is statistically better, and from what compression ratio the quality of a compressed image is judged poorer than that of the original. Nine x-ray CT head images from three patients were used as test cases. Six radiologists participated in reading the 99 images (some were duplicates) compressed at four different levels: original, 5:1, 10:1, and 15:1. The six readers agreed more than by chance alone and their agreement was statistically significant, but there were large variations among readers as well as within a reader. The displacement-estimated interframe coding algorithm is significantly better in quality than the 2-D block DCT at significance level 0.05. Also, images compressed 10:1 with the interframe coding algorithm do not show any significant differences from the original at level 0.05.

  5. Optimal marker placement in hadrontherapy: intelligent optimization strategies with augmented Lagrangian pattern search.

    PubMed

    Altomare, Cristina; Guglielmann, Raffaella; Riboldi, Marco; Bellazzi, Riccardo; Baroni, Guido

    2015-02-01

    In high precision photon radiotherapy and in hadrontherapy, it is crucial to minimize the occurrence of geometrical deviations with respect to the treatment plan in each treatment session. To this end, point-based infrared (IR) optical tracking for patient set-up quality assessment is performed. Such tracking depends on the placement of external fiducial points. The main purpose of our work is to propose a new algorithm based on simulated annealing and augmented Lagrangian pattern search (SAPS), which is able to take prior knowledge, such as spatial constraints, into account during the optimization process. The SAPS algorithm was tested on data from head-and-neck and pelvic cancer patients who were fitted with external surface markers for IR optical tracking applied to preliminary patient set-up correction. The integrated algorithm was tested considering optimality measures obtained with Computed Tomography (CT) images (i.e. the ratio between the so-called target registration error and fiducial registration error, TRE/FRE) and assessing the marker spatial distribution. Comparison was performed with randomly selected marker configurations and with the GETS algorithm (Genetic Evolutionary Taboo Search), also taking into account the presence of organs at risk. The results obtained with SAPS highlight improvements with respect to the other approaches: (i) the TRE/FRE ratio decreases; (ii) the marker distribution satisfies both marker visibility and spatial constraints. We also investigated how the TRE/FRE ratio is influenced by the number of markers, obtaining significant TRE/FRE reduction with respect to the random configurations when a high number of markers is used. The SAPS algorithm is a valuable strategy for fiducial configuration optimization in IR optical tracking applied to patient set-up error detection and correction in radiation therapy, showing that taking prior knowledge into account is valuable in this optimization process. Further work will focus on the computational optimization of the SAPS algorithm toward fast point-of-care applications. Copyright © 2014 Elsevier Inc. All rights reserved.

  6. Application of cluster analysis to geochemical compositional data for identifying ore-related geochemical anomalies

    NASA Astrophysics Data System (ADS)

    Zhou, Shuguang; Zhou, Kefa; Wang, Jinlin; Yang, Genfang; Wang, Shanshan

    2017-12-01

    Cluster analysis is a well-known technique that is used to analyze various types of data. In this study, cluster analysis is applied to geochemical data that describe 1444 stream sediment samples collected in northwestern Xinjiang with a sample spacing of approximately 2 km. Three algorithms (the hierarchical, k-means, and fuzzy c-means algorithms) and six data transformation methods (the z-score standardization, ZST; the logarithmic transformation, LT; the additive log-ratio transformation, ALT; the centered log-ratio transformation, CLT; the isometric log-ratio transformation, ILT; and no transformation, NT) are compared in terms of their effects on the cluster analysis of the geochemical compositional data. The study shows that, on the one hand, the ZST does not affect the results of column- or variable-based (R-type) cluster analysis, whereas the other methods, including the LT, the ALT, and the CLT, have substantial effects on the results. On the other hand, the results of the row- or observation-based (Q-type) cluster analysis obtained from the geochemical data after applying NT and the ZST are relatively poor. However, we derive some improved results from the geochemical data after applying the CLT, the ILT, the LT, and the ALT. Moreover, the k-means and fuzzy c-means clustering algorithms are more reliable than the hierarchical algorithm when they are used to cluster the geochemical data. We apply cluster analysis to the geochemical data to explore for Au deposits within the study area, and we obtain a good correlation between the results retrieved by combining the CLT or the ILT with the k-means or fuzzy c-means algorithms and the potential zones of Au mineralization. Therefore, we suggest that the combination of the CLT or the ILT with the k-means or fuzzy c-means algorithms is an effective tool to identify potential zones of mineralization from geochemical data.
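
    The centered log-ratio transform that performed well here is simple to state: each composition row is log-transformed and centered by its own geometric mean before clustering. A minimal sketch, with synthetic compositions standing in for the stream-sediment data:

```python
import numpy as np
from sklearn.cluster import KMeans

def clr(x):
    """Centered log-ratio transform, row-wise: log(x_i) - mean_j log(x_j)."""
    logx = np.log(x)
    return logx - logx.mean(axis=1, keepdims=True)

rng = np.random.default_rng(5)
raw = rng.gamma(shape=2.0, scale=1.0, size=(1444, 10))
comp = raw / raw.sum(axis=1, keepdims=True)      # closed compositions

clusters = KMeans(n_clusters=4, n_init=10).fit_predict(clr(comp))
```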

  7. An Informative Interpretation of Decision Theory: The Information Theoretic Basis for Signal-to-Noise Ratio and Log Likelihood Ratio

    DOE PAGES

    Polcari, J.

    2013-08-16

    The signal processing concept of signal-to-noise ratio (SNR), in its role as a performance measure, is recast within the more general context of information theory, leading to a series of useful insights. Establishing generalized SNR (GSNR) as a rigorous information theoretic measure inherent in any set of observations significantly strengthens its quantitative performance pedigree while simultaneously providing a specific definition under general conditions. This directly leads to consideration of the log likelihood ratio (LLR): first, as the simplest possible information-preserving transformation (i.e., signal processing algorithm) and subsequently, as an absolute, comparable measure of information for any specific observation exemplar. Furthermore, the information accounting methodology that results permits practical use of both GSNR and LLR as diagnostic scalar performance measurements, directly comparable across alternative system/algorithm designs, applicable at any tap point within any processing string, in a form that is also comparable with the inherent performance bounds due to information conservation.
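
    In the notation of classical detection theory, the LLR discussed above is the log of the ratio of the observation likelihoods under the two hypotheses (the standard definition, not necessarily the paper's exact formulation):

```latex
\Lambda(x) \;=\; \ln \frac{p(x \mid H_1)}{p(x \mid H_0)}
```

    Any invertible function of this statistic preserves all the information the observation carries about which hypothesis holds, which is the sense in which the LLR is the simplest information-preserving transformation.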

  8. A privacy-preserving parallel and homomorphic encryption scheme

    NASA Astrophysics Data System (ADS)

    Min, Zhaoe; Yang, Geng; Shi, Jingqi

    2017-04-01

    In order to protect data privacy while allowing efficient access to data in multi-node cloud environments, a parallel homomorphic encryption (PHE) scheme is proposed based on the additive homomorphism of the Paillier encryption algorithm. In this paper we propose a PHE algorithm in which the plaintext is divided into several blocks and the blocks are encrypted in parallel. Experimental results demonstrate that the encryption algorithm can reach a speed-up ratio of about 7.1 in a MapReduce environment with 16 cores and 4 nodes.
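
    The additive homomorphism being exploited is that multiplying Paillier ciphertexts adds the underlying plaintexts, so blocks encrypted in parallel can later be combined without decryption. A toy-sized, deliberately insecure demonstration (tiny hardcoded primes; requires Python 3.9+):

```python
import math

# Toy Paillier keypair (insecure, demonstration-size primes).
p, q = 293, 433
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)
g = n + 1                           # standard simplification for g

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)

def encrypt(m, r):
    # c = g^m * r^n mod n^2; with g = n+1, g^m = 1 + m*n (mod n^2)
    return ((1 + m * n) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

c1 = encrypt(41, r=17)
c2 = encrypt(59, r=23)
assert decrypt((c1 * c2) % n2) == 41 + 59   # additive homomorphism
```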

  9. Deconvolution of noisy transient signals: a Kalman filtering application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Candy, J.V.; Zicker, J.E.

    The deconvolution of transient signals from noisy measurements is a common problem occurring in various tests at Lawrence Livermore National Laboratory. The transient deconvolution problem places atypical constraints on presently available algorithms. The Schmidt-Kalman filter, a time-varying, tunable predictor, is designed using a piecewise-constant model of the transient input signal. A simulation is developed to test the algorithm for various input signal bandwidths and different signal-to-noise ratios for the input and output sequences. The algorithm performance is reasonable.

  10. Model-based Bayesian signal extraction algorithm for peripheral nerves

    NASA Astrophysics Data System (ADS)

    Eggers, Thomas E.; Dweiri, Yazan M.; McCallum, Grant A.; Durand, Dominique M.

    2017-10-01

    Objective. Multi-channel cuff electrodes have recently been investigated for extracting fascicular-level motor commands from mixed neural recordings. Such signals could provide volitional, intuitive control over a robotic prosthesis for amputee patients. Recent work has demonstrated success in extracting these signals in acute and chronic preparations using spatial filtering techniques. These extracted signals, however, had low signal-to-noise ratios and thus limited their utility to binary classification. In this work a new algorithm is proposed which combines previous source localization approaches to create a model based method which operates in real time. Approach. To validate this algorithm, a saline benchtop setup was created to allow the precise placement of artificial sources within a cuff and interference sources outside the cuff. The artificial source was taken from five seconds of chronic neural activity to replicate realistic recordings. The proposed algorithm, hybrid Bayesian signal extraction (HBSE), is then compared to previous algorithms, beamforming and a Bayesian spatial filtering method, on this test data. An example chronic neural recording is also analyzed with all three algorithms. Main results. The proposed algorithm improved the signal to noise and signal to interference ratio of extracted test signals two to three fold, as well as increased the correlation coefficient between the original and recovered signals by 10-20%. These improvements translated to the chronic recording example and increased the calculated bit rate between the recovered signals and the recorded motor activity. Significance. HBSE significantly outperforms previous algorithms in extracting realistic neural signals, even in the presence of external noise sources. These results demonstrate the feasibility of extracting dynamic motor signals from a multi-fascicled intact nerve trunk, which in turn could extract motor command signals from an amputee for the end goal of controlling a prosthetic limb.

  11. Complex-based OCT angiography algorithm recovers microvascular information better than amplitude- or phase-based algorithms in phase-stable systems

    NASA Astrophysics Data System (ADS)

    Xu, Jingjiang; Song, Shaozhen; Li, Yuandong; Wang, Ruikang K.

    2018-01-01

    Optical coherence tomography angiography (OCTA) is increasingly becoming a popular inspection tool for biomedical imaging applications. By exploring the amplitude, phase and complex information available in OCT signals, numerous algorithms have been proposed that contrast functional vessel networks within microcirculatory tissue beds. However, it is not clear which algorithm delivers optimal imaging performance. Here, we investigate systematically how amplitude and phase information have an impact on the OCTA imaging performance, to establish the relationship of amplitude and phase stability with OCT signal-to-noise ratio (SNR), time interval and particle dynamics. With either repeated A-scan or repeated B-scan imaging protocols, the amplitude noise increases with the increase of OCT SNR; however, the phase noise does the opposite, i.e. it increases with the decrease of OCT SNR. Coupled with experimental measurements, we utilize a simple Monte Carlo (MC) model to simulate the performance of amplitude-, phase- and complex-based algorithms for OCTA imaging, the results of which suggest that complex-based algorithms deliver the best performance when the phase noise is below ~40 mrad. We also conduct a series of in vivo vascular imaging in animal models and human retina to verify the findings from the MC model through assessing the OCTA performance metrics of vessel connectivity, image SNR and contrast-to-noise ratio. We show that for all the metrics assessed, the complex-based algorithm delivers better performance than either the amplitude- or phase-based algorithms for both the repeated A-scan and the B-scan imaging protocols, which agrees well with the conclusion drawn from the MC simulations.

  12. Complex-based OCT angiography algorithm recovers microvascular information better than amplitude- or phase-based algorithms in phase-stable systems.

    PubMed

    Xu, Jingjiang; Song, Shaozhen; Li, Yuandong; Wang, Ruikang K

    2017-12-19

    Optical coherence tomography angiography (OCTA) is increasingly becoming a popular inspection tool for biomedical imaging applications. By exploring the amplitude, phase and complex information available in OCT signals, numerous algorithms have been proposed that contrast functional vessel networks within microcirculatory tissue beds. However, it is not clear which algorithm delivers optimal imaging performance. Here, we investigate systematically how amplitude and phase information have an impact on the OCTA imaging performance, to establish the relationship of amplitude and phase stability with OCT signal-to-noise ratio (SNR), time interval and particle dynamics. With either repeated A-scan or repeated B-scan imaging protocols, the amplitude noise increases with the increase of OCT SNR; however, the phase noise does the opposite, i.e. it increases with the decrease of OCT SNR. Coupled with experimental measurements, we utilize a simple Monte Carlo (MC) model to simulate the performance of amplitude-, phase- and complex-based algorithms for OCTA imaging, the results of which suggest that complex-based algorithms deliver the best performance when the phase noise is below ~40 mrad. We also conduct a series of in vivo vascular imaging in animal models and human retina to verify the findings from the MC model through assessing the OCTA performance metrics of vessel connectivity, image SNR and contrast-to-noise ratio. We show that for all the metrics assessed, the complex-based algorithm delivers better performance than either the amplitude- or phase-based algorithms for both the repeated A-scan and the B-scan imaging protocols, which agrees well with the conclusion drawn from the MC simulations.

  13. An Event-Based Verification Scheme for the Real-Time Flare Detection System at Kanzelhöhe Observatory

    NASA Astrophysics Data System (ADS)

    Pötzi, W.; Veronig, A. M.; Temmer, M.

    2018-06-01

    In the framework of the Space Situational Awareness program of the European Space Agency (ESA/SSA), an automatic flare detection system was developed at Kanzelhöhe Observatory (KSO). The system has been in operation since mid-2013. The event detection algorithm was upgraded in September 2017, and all data back to 2014 were reprocessed using the new algorithm. In order to evaluate both algorithms, we apply verification measures that are commonly used for forecast validation. To overcome the problem of rare events, which biases the verification measures, we introduce a new event-based method: we divide the timeline of the Hα observations into positive events (flaring periods) and negative events (quiet periods), independent of the length of each event. In total, 329 positive and negative events were detected between 2014 and 2016. The hit rate for the new algorithm reached 96% (just five events were missed) with a false-alarm ratio of 17%. This is a significant improvement, as the original system had a hit rate of 85% and a false-alarm ratio of 33%. The true skill score and the Heidke skill score both reach values of 0.8 for the new algorithm; originally, they were at 0.5. The mean flare positions are accurate within ±1 heliographic degree for both algorithms, and the peak times improve from a mean difference of 1.7 ± 2.9 minutes to 1.3 ± 2.3 minutes. The flare start times, which the original algorithm had determined systematically late by about 3 minutes, now match the visual inspection within -0.47 ± 4.10 minutes.
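
    The quoted skill measures all derive from the 2x2 contingency table of detected versus observed events. The sketch below computes them from counts chosen to roughly reproduce the rates quoted above; the counts themselves are illustrative, not taken from the paper:

```python
def verification_scores(hits, misses, false_alarms, correct_negatives):
    """Event-based forecast verification from a 2x2 contingency table."""
    h, m, f, c = hits, misses, false_alarms, correct_negatives
    n = h + m + f + c
    pod = h / (h + m)                       # hit rate (probability of detection)
    far = f / (h + f)                       # false-alarm ratio
    tss = pod - f / (f + c)                 # true skill score
    expected = ((h + m) * (h + f) + (c + m) * (c + f)) / n
    hss = (h + c - expected) / (n - expected)  # Heidke skill score
    return pod, far, tss, hss

print(verification_scores(hits=120, misses=5, false_alarms=25,
                          correct_negatives=179))
# -> pod ~0.96, far ~0.17, tss ~0.84, hss ~0.81
```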

  14. Pharmacogenetics-based warfarin dosing algorithm decreases time to stable anticoagulation and the risk of major hemorrhage: an updated meta-analysis of randomized controlled trials.

    PubMed

    Wang, Zhi-Quan; Zhang, Rui; Zhang, Peng-Pai; Liu, Xiao-Hong; Sun, Jian; Wang, Jun; Feng, Xiang-Fei; Lu, Qiu-Fen; Li, Yi-Gang

    2015-04-01

    Warfarin is yet the most widely used oral anticoagulant for thromboembolic diseases, despite the recently emerged novel anticoagulants. However, difficulty in maintaining stable dose within the therapeutic range and subsequent serious adverse effects markedly limited its use in clinical practice. Pharmacogenetics-based warfarin dosing algorithm is a recently emerged strategy to predict the initial and maintaining dose of warfarin. However, whether this algorithm is superior over conventional clinically guided dosing algorithm remains controversial. We made a comparison of pharmacogenetics-based versus clinically guided dosing algorithm by an updated meta-analysis. We searched OVID MEDLINE, EMBASE, and the Cochrane Library for relevant citations. The primary outcome was the percentage of time in therapeutic range. The secondary outcomes were time to stable therapeutic dose and the risks of adverse events including all-cause mortality, thromboembolic events, total bleedings, and major bleedings. Eleven randomized controlled trials with 2639 participants were included. Our pooled estimates indicated that pharmacogenetics-based dosing algorithm did not improve percentage of time in therapeutic range [weighted mean difference, 4.26; 95% confidence interval (CI), -0.50 to 9.01; P = 0.08], but it significantly shortened the time to stable therapeutic dose (weighted mean difference, -8.67; 95% CI, -11.86 to -5.49; P < 0.00001). Additionally, pharmacogenetics-based algorithm significantly reduced the risk of major bleedings (odds ratio, 0.48; 95% CI, 0.23 to 0.98; P = 0.04), but it did not reduce the risks of all-cause mortality, total bleedings, or thromboembolic events. Our results suggest that pharmacogenetics-based warfarin dosing algorithm significantly improves the efficiency of International Normalized Ratio correction and reduces the risk of major hemorrhage.

  15. AccuTyping: new algorithms for automated analysis of data from high-throughput genotyping with oligonucleotide microarrays

    PubMed Central

    Hu, Guohong; Wang, Hui-Yun; Greenawalt, Danielle M.; Azaro, Marco A.; Luo, Minjie; Tereshchenko, Irina V.; Cui, Xiangfeng; Yang, Qifeng; Gao, Richeng; Shen, Li; Li, Honghua

    2006-01-01

    Microarray-based analysis of single nucleotide polymorphisms (SNPs) has many applications in large-scale genetic studies. To minimize the influence of experimental variation, microarray data usually need to be processed in different aspects including background subtraction, normalization and low-signal filtering before genotype determination. Although many algorithms are sophisticated for these purposes, biases are still present. In the present paper, new algorithms for SNP microarray data analysis and the software, AccuTyping, developed based on these algorithms are described. The algorithms take advantage of a large number of SNPs included in each assay, and the fact that the top and bottom 20% of SNPs can be safely treated as homozygous after sorting based on their ratios between the signal intensities. These SNPs are then used as controls for color channel normalization and background subtraction. Genotype calls are made based on the logarithms of signal intensity ratios using two cutoff values, which were determined after training the program with a dataset of ∼160 000 genotypes and validated by non-microarray methods. AccuTyping was used to determine >300 000 genotypes of DNA and sperm samples. The accuracy was shown to be >99%. AccuTyping can be downloaded from . PMID:16982644

  16. C-band Joint Active/Passive Dual Polarization Sea Ice Detection

    NASA Astrophysics Data System (ADS)

    Keller, M. R.; Gifford, C. M.; Winstead, N. S.; Walton, W. C.; Dietz, J. E.

    2017-12-01

    A technique for synergistically combining high-resolution SAR returns with like-frequency passive microwave emissions to detect thin (<30 cm) ice under the difficult conditions of late melt and freeze-up is presented. As the Arctic sea ice cover thins and shrinks, the algorithm offers an approach to adapting existing sensors that monitor thicker ice to provide continuing coverage. Lower-resolution (10-26 km) ice detections with spaceborne radiometers and scatterometers are challenged by rapidly changing thin ice. Synthetic Aperture Radar (SAR) is high resolution (5-100 m), but because of cross-section ambiguities, automated algorithms have had difficulty separating thin ice types from water. The radiometric emissivity of thin ice versus water at microwave frequencies is generally unambiguous in the early stages of ice growth. The method, developed using RADARSAT-2 and AMSR-E data, uses higher-order statistics. For the SAR, the COV (coefficient of variation, the ratio of standard deviation to mean) has fewer ambiguities between ice and water than cross sections, but breaking waves still produce ice-like signatures for both polarizations. For the radiometer, the PRIC (polarization ratio ice concentration) identifies areas that are unambiguously water. Applying cumulative statistics to co-located COV levels adaptively determines an ice/water threshold. Outcomes from extensive testing with Sentinel and AMSR-2 data are shown in the results. The detection algorithm was applied to the freeze-up in the Beaufort, Chukchi, Barents, and East Siberian Seas in 2015 and 2016, spanning mid-September to early November of both years. At the end of the melt, 6 GHz PRIC values are 5-10% greater than those reported by radiometric algorithms at 19 and 37 GHz. During freeze-up, COV separates grease ice (<5 cm thick) from water. As the ice thickens, the COV is less reliable, but adding a mask based on either the PRIC or the cross-pol/co-pol SAR ratio corrects for COV deficiencies. In general, the dual-sensor detection algorithm reports 10-15% higher total ice concentrations than operational scatterometer or radiometer algorithms, mostly from ice edge and coastal areas. In conclusion, the algorithm presented combines high-resolution SAR returns with passive microwave emissions for automated ice detection at SAR resolutions.
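
    The COV feature itself is straightforward to compute; a windowed std/mean over SAR backscatter might look like the following sketch (the window size, toy speckle statistics and the adaptive threshold are all assumptions):

```python
import numpy as np

def windowed_cov(sar, w=16):
    """Coefficient of variation (std/mean) of SAR backscatter
    over non-overlapping w-by-w windows."""
    h, wd = (sar.shape[0] // w) * w, (sar.shape[1] // w) * w
    blocks = sar[:h, :wd].reshape(h // w, w, wd // w, w).swapaxes(1, 2)
    mean = blocks.mean(axis=(2, 3))
    std = blocks.std(axis=(2, 3))
    return std / np.maximum(mean, 1e-12)

rng = np.random.default_rng(6)
scene = rng.gamma(shape=4.0, scale=0.05, size=(256, 256))  # speckle-like toy data
cov = windowed_cov(scene)
ice_mask = cov < np.quantile(cov, 0.5)   # adaptive threshold (illustrative)
```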

  17. Combinatorial Algorithms for Portfolio Optimization Problems - Case of Risk Moderate Investor

    NASA Astrophysics Data System (ADS)

    Juarna, A.

    2017-03-01

    Portfolio optimization is the problem of finding an optimal combination of n stocks from N ≥ n available stocks that gives maximal aggregate return and minimal aggregate risk. In this paper, N = 43 stocks are taken from the IDX (Indonesia Stock Exchange) group of the 45 most-traded stocks, known as the LQ45, with p = 24 monthly returns for each stock, spanning 2013-2014. This is a combinatorial problem whose algorithm is constructed on two considerations: a risk-moderate type of investor and a maximum allowed correlation coefficient between every two eligible stocks. The main output from implementing the algorithm is a multiple curve of three portfolio attributes, i.e. the size, the ratio of return to risk, and the percentage of negative correlation coefficients for every two chosen stocks, as a function of the maximum allowed correlation coefficient between each two stocks. The output curve shows that the portfolio contains three stocks with a return-to-risk ratio of 14.57 if the maximum allowed correlation coefficient between every two eligible stocks is negative, and contains 19 stocks with a maximum allowed correlation coefficient of 0.17 to reach the maximum return-to-risk ratio of 25.48.
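
    A minimal greedy variant of such a correlation-constrained selection, written against stand-in return data rather than the actual LQ45 series, could look like this (the equal weighting and greedy admission order are simplifying assumptions):

```python
import numpy as np

def greedy_portfolio(returns, max_corr):
    """Greedily admit stocks whose correlation with every already-chosen
    stock is below max_corr; report the aggregate return/risk ratio.

    returns: (p months) x (N stocks) matrix of monthly returns.
    """
    mu = returns.mean(axis=0)
    corr = np.corrcoef(returns, rowvar=False)
    chosen = []
    for j in np.argsort(-mu):                     # best mean return first
        if all(corr[j, k] < max_corr for k in chosen):
            chosen.append(j)
    w = np.full(len(chosen), 1.0 / len(chosen))   # equal weights
    port = returns[:, chosen] @ w
    return chosen, port.mean() / port.std()

rng = np.random.default_rng(7)
monthly = rng.normal(0.01, 0.05, size=(24, 43))   # stand-in for LQ45 data
stocks, ratio = greedy_portfolio(monthly, max_corr=0.17)
print(len(stocks), ratio)
```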

  18. Statistical Quality Control of Moisture Data in GEOS DAS

    NASA Technical Reports Server (NTRS)

    Dee, D. P.; Rukhovets, L.; Todling, R.

    1999-01-01

    A new statistical quality control algorithm was recently implemented in the Goddard Earth Observing System Data Assimilation System (GEOS DAS). The final step in the algorithm consists of an adaptive buddy check that either accepts or rejects outlier observations based on a local statistical analysis of nearby data. A basic assumption in any such test is that the observed field is spatially coherent, in the sense that nearby data can be expected to confirm each other. However, the buddy check resulted in excessive rejection of moisture data, especially during the Northern Hemisphere summer. The analysis moisture variable in GEOS DAS is water vapor mixing ratio. Observational evidence shows that the distribution of mixing ratio errors is far from normal. Furthermore, spatial correlations among mixing ratio errors are highly anisotropic and difficult to identify. Both factors contribute to the poor performance of the statistical quality control algorithm. To alleviate the problem, we applied the buddy check to relative humidity data instead. This variable explicitly depends on temperature and therefore exhibits a much greater spatial coherence. As a result, reject rates of moisture data are much more reasonable and homogeneous in time and space.

  19. A Pulse Coupled Neural Network Segmentation Algorithm for Reflectance Confocal Images of Epithelial Tissue

    PubMed Central

    Malik, Bilal H.; Jabbour, Joey M.; Maitland, Kristen C.

    2015-01-01

    Automatic segmentation of nuclei in reflectance confocal microscopy images is critical for visualization and rapid quantification of nuclear-to-cytoplasmic ratio, a useful indicator of epithelial precancer. Reflectance confocal microscopy can provide three-dimensional imaging of epithelial tissue in vivo with sub-cellular resolution. Changes in nuclear density or nuclear-to-cytoplasmic ratio as a function of depth obtained from confocal images can be used to determine the presence or stage of epithelial cancers. However, low nuclear to background contrast, low resolution at greater imaging depths, and significant variation in reflectance signal of nuclei complicate segmentation required for quantification of nuclear-to-cytoplasmic ratio. Here, we present an automated segmentation method to segment nuclei in reflectance confocal images using a pulse coupled neural network algorithm, specifically a spiking cortical model, and an artificial neural network classifier. The segmentation algorithm was applied to an image model of nuclei with varying nuclear to background contrast. Greater than 90% of simulated nuclei were detected for contrast of 2.0 or greater. Confocal images of porcine and human oral mucosa were used to evaluate application to epithelial tissue. Segmentation accuracy was assessed using manual segmentation of nuclei as the gold standard. PMID:25816131

  20. Determination of the actual evapotranspiration by using remote sensing methods

    NASA Astrophysics Data System (ADS)

    Bora, Eser

    2017-10-01

    Evapotranspiration is crucial for determining the amount of irrigation and for effective water management planning. Moreover, it is vital for agricultural drought management, and determining the actual evapotranspiration in a region is critical for early drought warning systems. The main objective of this study was to assess the accuracy of a remote sensing method (METRIC) by calibrating it against concurrent Bowen ratio observations. The research was carried out in the west of the Marmara Region, Turkey. Landsat 5 images were used as input to the METRIC algorithm. The image used to determine actual evapotranspiration was acquired on June 11, 2010, a date chosen so that the results could be calibrated against the terrestrial Bowen ratio observations available at that time. Landsat images were obtained from earthexplorer.usgs.gov, and the Bowen ratio results were taken from a micrometeorology station. As a result, the energy balance parameters (net radiation, soil heat flux, and latent heat flux) from the METRIC algorithm and from the Bowen ratio at the image time were compared and found to be in close agreement.

  1. Differences in spirometry interpretation algorithms: influence on decision making among primary-care physicians.

    PubMed

    He, Xiao-Ou; D'Urzo, Anthony; Jugovic, Pieter; Jhirad, Reuven; Sehgal, Prateek; Lilly, Evan

    2015-03-12

    Spirometry is recommended for the diagnosis of asthma and chronic obstructive pulmonary disease (COPD) in international guidelines and may be useful for distinguishing asthma from COPD. Numerous spirometry interpretation algorithms (SIAs) are described in the literature, but no studies highlight how different SIAs may influence the interpretation of the same spirometric data. We examined how two different SIAs may influence decision making among primary-care physicians. Data for this initiative were gathered from 113 primary-care physicians attending accredited workshops in Canada between 2011 and 2013. Physicians were asked to interpret nine spirograms presented twice in random sequence using two different SIAs and touch pad technology for anonymous data recording. We observed differences in the interpretation of spirograms using two different SIAs. When the pre-bronchodilator FEV1/FVC (forced expiratory volume in one second/forced vital capacity) ratio was >0.70, algorithm 1 led to a 'normal' interpretation (78% of physicians), whereas algorithm 2 prompted a bronchodilator challenge revealing changes in FEV1 that were consistent with asthma, an interpretation selected by 94% of physicians. When the FEV1/FVC ratio was <0.70 after bronchodilator challenge but FEV1 increased >12% and 200 ml, 76% suspected asthma and 10% suspected COPD using algorithm 1, whereas 74% suspected asthma versus COPD using algorithm 2 across five separate cases. The absence of a post-bronchodilator FEV1/FVC decision node in algorithm 1 did not permit consideration of possible COPD. This study suggests that differences in SIAs may influence decision making and lead clinicians to interpret the same spirometry data differently.
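
    The decisive difference between the two SIAs is whether a post-bronchodilator decision node exists. A toy decision function with such a node, loosely following the thresholds quoted above (illustrative only; not either published algorithm verbatim), is shown below:

```python
def interpret_with_post_bd_node(post_ratio, fev1_pre_l, fev1_post_l):
    """Sketch of a spirometry interpretation path that includes a
    post-bronchodilator FEV1/FVC decision node (illustrative thresholds:
    ratio 0.70, reversibility >12% and >200 mL increase in FEV1)."""
    dv = fev1_post_l - fev1_pre_l
    reversible = dv > 0.200 and dv / fev1_pre_l > 0.12
    if post_ratio < 0.70:
        return 'asthma suspected' if reversible else 'COPD suspected'
    return 'asthma suspected' if reversible else 'normal'

print(interpret_with_post_bd_node(0.68, 2.0, 2.35))  # -> asthma suspected
```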

  2. Reliability and validity of bilateral ankle accelerometer algorithms for activity recognition and walking speed after stroke.

    PubMed

    Dobkin, Bruce H; Xu, Xiaoyu; Batalin, Maxim; Thomas, Seth; Kaiser, William

    2011-08-01

    Outcome measures of mobility for large stroke trials are limited to timed walks for short distances in a laboratory, step counters and ordinal scales of disability and quality of life. Continuous monitoring and outcome measurements of the type and quantity of activity in the community would provide direct data about daily performance, including compliance with exercise and skills practice during routine care and clinical trials. Twelve adults with impaired ambulation from hemiparetic stroke and 6 healthy controls wore triaxial accelerometers on their ankles. Walking speed for repeated outdoor walks was determined by machine-learning algorithms and compared to a stopwatch calculation of speed for distances not known to the algorithm. The reliability of recognizing walking, exercise, and cycling by the algorithms was compared to activity logs. A high correlation was found between stopwatch-measured outdoor walking speed and algorithm-calculated speed (Pearson coefficient, 0.98; P=0.001) and for repeated measures of algorithm-derived walking speed (P=0.01). Bouts of walking >5 steps, variations in walking speed, cycling, stair climbing, and leg exercises were correctly identified during a day in the community. Compared to healthy subjects, those with stroke were, as expected, more sedentary and slower, and their gait revealed high paretic-to-unaffected leg swing ratios. Test-retest reliability and concurrent and construct validity are high for activity pattern-recognition Bayesian algorithms developed from inertial sensors. These ratio-scale data can provide real-world monitoring and outcome measurements of lower extremity activities and walking speed for stroke and rehabilitation studies.

  3. Research on sparse feature matching of improved RANSAC algorithm

    NASA Astrophysics Data System (ADS)

    Kong, Xiangsi; Zhao, Xian

    2018-04-01

    In this paper, a sparse feature matching method based on a modified RANSAC algorithm is proposed to improve precision and speed. Firstly, the feature points of the images are extracted using the SIFT algorithm. Then, the image pair is matched roughly by generating SIFT feature descriptors. Finally, the precision of image matching is optimized by the modified RANSAC algorithm. The RANSAC algorithm is improved in three aspects: instead of the homography matrix, the fundamental matrix generated by the 8-point algorithm is used as the model; the sample is selected by a random block selecting method, which ensures uniform distribution and accuracy; and a sequential probability ratio test (SPRT) is added on top of standard RANSAC, which cuts down the overall running time of the algorithm. The experimental results show that this method not only achieves higher matching accuracy, but also greatly reduces the computation and improves the matching speed.
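
    As a rough illustration of the SPRT step described above, the sketch below applies Wald's sequential probability ratio test to model verification; the inlier probabilities eps and delta, the decision threshold A, and the function name are illustrative assumptions rather than values from the paper.

    ```python
    def sprt_verify(residuals, inlier_thresh, eps=0.5, delta=0.05, A=1e3):
        """Wald's sequential probability ratio test for RANSAC model checks.

        residuals:     per-point residuals against a candidate fundamental matrix.
        inlier_thresh: residual below which a point counts as an inlier.
        eps / delta:   assumed P(inlier | good model) / P(inlier | bad model).
        A:             decision threshold; the model is rejected early once the
                       likelihood ratio of 'bad' vs 'good' exceeds A.
        Returns (model_ok, n_points_evaluated).
        """
        lam = 1.0
        for i, r in enumerate(residuals, start=1):
            if r < inlier_thresh:                   # consistent point
                lam *= delta / eps                  # evidence for a good model
            else:                                   # inconsistent point
                lam *= (1.0 - delta) / (1.0 - eps)  # evidence for a bad model
            if lam > A:
                return False, i                     # reject without a full check
        return True, len(residuals)
    ```

    Rejecting bad hypotheses after only a few points is where the running-time saving comes from; good models are still verified against all points.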

  4. Positive dwell time algorithm with minimum equal extra material removal in deterministic optical surfacing technology.

    PubMed

    Li, Longxiang; Xue, Donglin; Deng, Weijie; Wang, Xu; Bai, Yang; Zhang, Feng; Zhang, Xuejun

    2017-11-10

    In deterministic computer-controlled optical surfacing, accurate dwell time execution by computer numeric control machines is crucial in guaranteeing a high convergence ratio for the optical surface error. It is necessary to consider the machine dynamics limitations in numerical dwell time algorithms. In this paper, these constraints on the dwell time distribution are analyzed, and a model of equal extra material removal is established. A positive dwell time algorithm with minimum equal extra material removal is developed. Results of simulations based on deterministic magnetorheological finishing demonstrate the necessity of considering machine dynamics performance and illustrate the validity of the proposed algorithm. Indeed, the algorithm effectively improves the determinacy of sub-aperture optical surfacing processes.
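
    The positivity constraint at the heart of such algorithms can be illustrated with a plain non-negative least-squares fit. This is only a sketch under simplified assumptions (a random matrix standing in for a real tool influence function); it omits the paper's equal-extra-material-removal model and the machine dynamics constraints.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Sketch: removal[i] = sum_j R[i, j] * t[j], where R[i, j] is the material
    # removed at surface point i per unit dwell time at tool position j, and
    # the dwell time vector t must be non-negative for the machine to execute.
    rng = np.random.default_rng(0)
    R = np.abs(rng.normal(size=(200, 50)))   # stand-in tool influence matrix
    target = np.abs(rng.normal(size=200))    # surface error map to remove

    t, residual = nnls(R, target)            # t >= 0 enforced by construction
    print(f"max dwell {t.max():.3f}, fit residual {residual:.3f}")
    ```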

  5. Detection of spontaneous vesicle release at individual synapses using multiple wavelets in a CWT-based algorithm.

    PubMed

    Sokoll, Stefan; Tönnies, Klaus; Heine, Martin

    2012-01-01

    In this paper we present an algorithm for the detection of spontaneous activity at individual synapses in microscopy images. By employing the optical marker pHluorin, we are able to visualize synaptic vesicle release with a spatial resolution in the nm range in a non-invasive manner. We compute individual synaptic signals from automatically segmented regions of interest and detect peaks that represent synaptic activity using a continuous wavelet transform based algorithm. As opposed to standard peak detection algorithms, we employ multiple wavelets to match all relevant features of the peak. We evaluate our multiple wavelet algorithm (MWA) on real data and assess the performance on synthetic data over a wide range of signal-to-noise ratios.
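
    A minimal sketch of CWT-based peak detection is given below. It approximates the multiple-wavelet idea by combining the responses of a Ricker wavelet at several widths, whereas the paper's MWA combines several distinct wavelets matched to different peak features; all thresholds here are illustrative.

    ```python
    import numpy as np

    def ricker(points, a):
        """Ricker (Mexican-hat) wavelet, one of several wavelets one could use."""
        t = np.arange(points) - (points - 1) / 2.0
        return (1 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2)

    def cwt_peaks(signal, widths=(2, 4, 8), min_score=0.3):
        """Score each sample by its mean normalized CWT response across widths;
        a detected peak must also be a local maximum of the combined score."""
        responses = []
        for a in widths:
            w = ricker(int(10 * a), a)
            r = np.convolve(signal, w, mode="same")
            responses.append(r / (np.abs(r).max() + 1e-12))
        score = np.mean(responses, axis=0)
        rising = np.r_[False, score[1:] > score[:-1]]
        falling = np.r_[score[:-1] > score[1:], False]
        return np.flatnonzero((score > min_score) & rising & falling)
    ```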

  6. Tile-based Fisher ratio analysis of comprehensive two-dimensional gas chromatography time-of-flight mass spectrometry (GC × GC-TOFMS) data using a null distribution approach.

    PubMed

    Parsons, Brendon A; Marney, Luke C; Siegler, W Christopher; Hoggard, Jamin C; Wright, Bob W; Synovec, Robert E

    2015-04-07

    Comprehensive two-dimensional (2D) gas chromatography coupled with time-of-flight mass spectrometry (GC × GC-TOFMS) is a versatile instrumental platform capable of collecting highly informative, yet highly complex, chemical data for a variety of samples. Fisher-ratio (F-ratio) analysis applied to the supervised comparison of sample classes algorithmically reduces complex GC × GC-TOFMS data sets to find class distinguishing chemical features. F-ratio analysis, using a tile-based algorithm, significantly reduces the adverse effects of chromatographic misalignment and spurious covariance of the detected signal, enhancing the discovery of true positives while simultaneously reducing the likelihood of detecting false positives. Herein, we report a study using tile-based F-ratio analysis whereby four non-native analytes were spiked into diesel fuel at several concentrations ranging from 0 to 100 ppm. Spike level comparisons were performed in two regimes: comparing the spiked samples to the nonspiked fuel matrix and to each other at relative concentration factors of two. Redundant hits were algorithmically removed by refocusing the tiled results onto the original high resolution pixel level data. To objectively limit the tile-based F-ratio results to only features which are statistically likely to be true positives, we developed a combinatorial technique using null class comparisons, called null distribution analysis, by which we determined a statistically defensible F-ratio cutoff for the analysis of the hit list. After applying null distribution analysis, spiked analytes were reliably discovered at ∼1 to ∼10 ppm (∼5 to ∼50 pg using a 200:1 split), depending upon the degree of mass spectral selectivity and 2D chromatographic resolution, with minimal occurrence of false positives. To place the relevance of this work among other methods in this field, results are compared to those for pixel and peak table-based approaches.
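
    The core tile-based F-ratio computation can be sketched as a one-way ANOVA F-statistic over tile sums; this simplified version ignores the mass spectral dimension, the null distribution analysis, and the pixel-level refocusing described in the paper.

    ```python
    import numpy as np

    def tile_fisher_ratios(chromatograms, labels, tile=(8, 8)):
        """One-way ANOVA F-ratio per tile of a set of 2D chromatograms.

        chromatograms: (n_samples, n1, n2) array of GC x GC signal intensities.
        labels:        class label per sample.
        The signal is summed within each tile; F is the between-class variance
        of the tile sums divided by their within-class variance.
        """
        labels = np.asarray(labels)
        n, n1, n2 = chromatograms.shape
        t1, t2 = tile
        f = np.zeros((n1 // t1, n2 // t2))
        for i in range(n1 // t1):
            for j in range(n2 // t2):
                sums = chromatograms[:, i*t1:(i+1)*t1, j*t2:(j+1)*t2].sum(axis=(1, 2))
                groups = [sums[labels == c] for c in np.unique(labels)]
                grand = sums.mean()
                between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
                within = sum(((g - g.mean()) ** 2).sum() for g in groups)
                f[i, j] = (between / (len(groups) - 1)) / (within / (n - len(groups)) + 1e-12)
        return f
    ```

    Tiles with the largest F-ratios are the candidate class-distinguishing features; the paper's null distribution analysis then sets the cutoff below which hits are discarded as likely false positives.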

  7. The Basic Principles and Methods of the System Approach to Compression of Telemetry Data

    NASA Astrophysics Data System (ADS)

    Levenets, A. V.

    2018-01-01

    The compression of measurement data remains an urgent task for information-measurement systems. This paper offers basic principles necessary for designing highly effective systems for the compression of telemetric information. The basis of the offered principles is the representation of a telemetric frame as a whole information space in which existing correlations can be found. Methods of data transformation and compression algorithms realizing the offered principles are described. The compression ratio of the offered compression algorithm is about 1.8 times higher than that of a classic algorithm. The results of this research into the methods and algorithms thus show their good prospects.

  8. Reversible Data Hiding Based on DNA Computing

    PubMed Central

    Xie, Yingjie

    2017-01-01

    Biocomputing, especially DNA computing, has undergone great development. It is widely used in information security. In this paper, a novel algorithm for reversible data hiding based on DNA computing is proposed. Inspired by the histogram modification algorithm, a classical algorithm for reversible data hiding, we combine it with DNA computing to realize the algorithm with biological technology. Compared with previous results, our experimental results show a significantly improved embedding rate (ER). Furthermore, the peak signal-to-noise ratios (PSNR) of some test images are also improved. The experimental results show that the algorithm is suitable for protecting the copyright of cover images in DNA-based information security. PMID:28280504
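
    For reference, the classical histogram-modification embedding that the paper builds on can be sketched as follows; the DNA-coding layer of the proposed algorithm is omitted, and the handling of ties and overflow is simplified.

    ```python
    import numpy as np

    def embed_histogram_shift(img, bits):
        """Classical histogram-modification embedding on a grayscale image.

        Pixels between the peak bin p and the zero (least-populated) bin z
        are shifted by one gray level to empty the bin next to p; each pixel
        equal to p then carries one payload bit. Returns the marked image
        and the (p, z) pair needed for extraction and exact recovery.
        """
        img = img.astype(np.int32).copy()
        hist = np.bincount(img.ravel(), minlength=256)
        p = int(hist.argmax())                  # peak gray level
        z = int(hist.argmin())                  # zero (or rarest) gray level
        flat = img.ravel()                      # view into img
        carriers = np.flatnonzero(flat == p)[:len(bits)]
        payload = np.asarray(bits[:len(carriers)], dtype=np.int32)
        if z > p:
            flat[(flat > p) & (flat < z)] += 1  # shift right, freeing bin p+1
            flat[carriers] += payload           # bit 1 -> p+1, bit 0 -> p
        else:
            flat[(flat > z) & (flat < p)] -= 1  # shift left, freeing bin p-1
            flat[carriers] -= payload           # bit 1 -> p-1, bit 0 -> p
        return img.astype(np.uint8), p, z
    ```

    Extraction reads the pixels at p and its freed neighbor to recover the bits, then shifts the intermediate range back, restoring the cover image exactly, which is what makes the scheme reversible.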

  9. Pilot validation of an individualised pharmacokinetic algorithm for protamine dosing after systemic heparinisation for cardiopulmonary bypass.

    PubMed

    Miles, Lachlan F; Marchiori, Paolo; Falter, Florian

    2017-09-01

    This manuscript represents a pilot study assessing the feasibility of a single-compartment, individualised, pharmacokinetic algorithm for protamine dosing after cardiopulmonary bypass. A pilot cohort study was conducted in a specialist NHS cardiothoracic hospital, targeting patients undergoing elective cardiac surgery using cardiopulmonary bypass. Patients received protamine doses according to a pharmacokinetic algorithm (n = 30) or using an empirical, fixed-dose model (n = 30). Categorical differences between the groups were evaluated using the Chi-squared test or Fisher's exact test. Continuous data were analysed using a paired Student's t-test for parametric data and the paired samples Wilcoxon test for non-parametric data. Patients who had protamine dosing according to the algorithm demonstrated a lower protamine requirement post-bypass relative to empirical management as measured by absolute dose (243 ± 49 mg vs. 305 ± 34.7 mg; p<0.001) and the heparin-to-protamine ratio (0.79 ± 0.12 vs. 1.1 ± 0.15; p<0.001). There was no difference in the pre- to post-bypass activated clotting time (ACT) ratio (1.05 ± 0.12 vs. 1.02 ± 0.15; p=0.9). Patients who received protamine according to the algorithm had no significant difference in transfusion requirement (13.3% vs. 30.0%; p=0.21). This study showed that an individualised pharmacokinetic algorithm for the reversal of heparin after cardiopulmonary bypass is feasible in comparison with a fixed dosing strategy and may reduce the protamine requirement following on-pump cardiac surgery.

  10. A versatile pitch tracking algorithm: from human speech to killer whale vocalizations.

    PubMed

    Shapiro, Ari Daniel; Wang, Chao

    2009-07-01

    In this article, a pitch tracking algorithm [named the discrete logarithmic Fourier transformation-pitch detection algorithm (DLFT-PDA)], originally designed for human telephone speech, was modified for killer whale vocalizations. The multiple frequency components of some of these vocalizations demand a spectral (rather than temporal) approach to pitch tracking. The DLFT-PDA algorithm derives reliable estimations of pitch and the temporal change of pitch from the harmonic structure of the vocal signal. Scores from both estimations are combined in a dynamic programming search to find a smooth pitch track. The algorithm is capable of tracking killer whale calls that contain simultaneous low and high frequency components and compares favorably, across most signal-to-noise ratio ranges, to the peak-picking and sidewinder algorithms that have previously been used for tracking killer whale vocalizations.

  11. Testing an earthquake prediction algorithm

    USGS Publications Warehouse

    Kossobokov, V.G.; Healy, J.H.; Dewey, J.W.

    1997-01-01

    A test to evaluate earthquake prediction algorithms is being applied to a Russian algorithm known as M8. The M8 algorithm makes intermediate-term predictions for earthquakes to occur in a large circle, based on integral counts of transient seismicity in the circle. In a retroactive prediction for the period January 1, 1985 to July 1, 1991, the algorithm as configured for the forward test would have predicted eight of ten strong earthquakes in the test area. A null hypothesis, based on random assignment of predictions, predicts eight earthquakes in 2.87% of the trials. The forward test began July 1, 1991 and will run through December 31, 1997. As of July 1, 1995, the algorithm had forward predicted five out of nine earthquakes in the test area, a success ratio that would have been achieved in 53% of random trials under the null hypothesis.

  12. Novel and efficient tag SNPs selection algorithms.

    PubMed

    Chen, Wen-Pei; Hung, Che-Lun; Tsai, Suh-Jen Jane; Lin, Yaw-Ling

    2014-01-01

    SNPs are the most abundant forms of genetic variation amongst species; association studies between complex diseases and SNPs or haplotypes have received great attention. However, these studies are restricted by the cost of genotyping all SNPs; thus, it is necessary to find smaller subsets, or tag SNPs, representing the rest of the SNPs. In fact, the existing tag SNP selection algorithms are notoriously time-consuming. An efficient algorithm for tag SNP selection is presented, which was applied to analyze the HapMap YRI data. The experimental results show that the proposed algorithm achieves better performance than the existing tag SNP selection algorithms; in most cases, it is at least ten times faster than the existing methods. In many cases, when the redundant ratio of the block is high, the proposed algorithm can even be thousands of times faster than previously known methods. Tools and web services for haplotype block analysis, integrated with the Hadoop MapReduce framework, have also been developed using the proposed algorithm as the computation kernel.

  13. Fast frequency acquisition via adaptive least squares algorithm

    NASA Technical Reports Server (NTRS)

    Kumar, R.

    1986-01-01

    A new least squares algorithm is proposed and investigated for fast frequency and phase acquisition of sinusoids in the presence of noise. This algorithm is a special case of more general, adaptive parameter-estimation techniques. The advantages of the algorithm are its conceptual simplicity, flexibility and applicability to general situations. For example, the frequency to be acquired can be time varying, and the noise can be non-Gaussian, nonstationary and colored. As the proposed algorithm can be made recursive in the number of observations, it is not necessary to have a priori knowledge of the received signal-to-noise ratio or to specify the measurement time, as would be required for batch processing techniques such as the fast Fourier transform (FFT). The proposed algorithm improves the frequency estimate on a recursive basis as more and more observations are obtained. When the algorithm is applied in real time, it has the extra advantage that the observations need not be stored. The algorithm also yields a real time confidence measure for the accuracy of the estimator.

  14. Shadow Detection from Very High Resolution Satellite Image Using Grabcut Segmentation and Ratio-Band Algorithms

    NASA Astrophysics Data System (ADS)

    Kadhim, N. M. S. M.; Mourshed, M.; Bray, M. T.

    2015-03-01

    Very-High-Resolution (VHR) satellite imagery is a powerful source of data for detecting and extracting information about urban constructions. Shadow in VHR satellite imagery provides vital information on urban construction forms, illumination direction, and the spatial distribution of objects, which can help further understanding of the built environment. To exploit shadows, however, their automated detection from images must be accurate. This paper reviews current automatic approaches that have been used for shadow detection from VHR satellite images and comprises two main parts. In the first part, shadow concepts are presented in terms of shadow appearance in VHR satellite imagery, current shadow detection methods, and the usefulness of shadow detection in urban environments. In the second part, we adopt two approaches that represent the current state of the art in shadow detection and segmentation, applied to WorldView-3 and Quickbird images. In the first approach, the ratios between the NIR and visible bands were computed on a pixel-by-pixel basis, which allows for disambiguation between shadows and dark objects. To obtain an accurate shadow candidate map, we further refined the shadow map after applying the ratio algorithm to the Quickbird image. The second selected approach is the GrabCut segmentation approach, whose performance in detecting the shadow regions of urban objects was examined using the true colour image from WorldView-3. Further refinement was applied to attain a segmented shadow map. Although the detection of shadow regions is a very difficult task when they are derived from a VHR satellite image comprising only the visible spectrum range (RGB true colour), the results demonstrate that the GrabCut algorithm achieves a reasonable separation of shadow regions from other objects in the WorldView-3 image. In addition, the shadow map derived from the Quickbird image indicates the strong performance of the ratio algorithm. The differences in the characteristics of the two satellite imageries in terms of spatial and spectral resolution can play an important role in the estimation and detection of the shadows of urban objects.
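
    The first approach can be sketched as a per-pixel band-ratio test. The direction of the decision and both thresholds below are illustrative assumptions; the paper further refines the resulting candidate map.

    ```python
    import numpy as np

    def shadow_candidates(nir, red, green, blue, ratio_thresh=1.4, dark_pct=30):
        """Per-pixel ratio-band shadow candidate map (illustrative sketch).

        visible: mean of the RGB bands; the NIR/visible ratio helps separate
        shadows from genuinely dark objects, combined with a darkness cue.
        Both thresholds are placeholders, not values from the paper.
        """
        visible = (red.astype(float) + green + blue) / 3.0
        ratio = (nir.astype(float) + 1.0) / (visible + 1.0)  # +1 avoids division by zero
        dark = visible < np.percentile(visible, dark_pct)
        return (ratio > ratio_thresh) & dark
    ```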

  15. Wavelet-based watermarking and compression for ECG signals with verification evaluation.

    PubMed

    Tseng, Kuo-Kun; He, Xialong; Kung, Woon-Man; Chen, Shuo-Tsung; Liao, Minghong; Huang, Huang-Nan

    2014-02-21

    In the current open society, and with the growth of awareness of human rights, people are increasingly concerned about the privacy of their information and other important data. This study makes use of electrocardiography (ECG) data in order to protect individual information. An ECG signal can not only be used to analyze disease, but can also provide crucial biometric information for identification and authentication. In this study, we propose the new idea of integrating electrocardiogram watermarking and compression, an approach which has not been researched before. ECG watermarking can ensure the confidentiality and reliability of a user's data while reducing the amount of data. In the evaluation, we assess the proposed algorithm using the embedding capacity, bit error rate (BER), signal-to-noise ratio (SNR), compression ratio (CR), and compressed-signal-to-noise ratio (CNR). After comprehensive evaluation, the final results show that our algorithm is robust and feasible.

  16. Structural optimization procedure of a composite wind turbine blade for reducing both material cost and blade weight

    NASA Astrophysics Data System (ADS)

    Hu, Weifei; Park, Dohyun; Choi, DongHoon

    2013-12-01

    A composite blade structure for a 2 MW horizontal axis wind turbine is optimally designed. The design requirements are to simultaneously minimize material cost and blade weight while satisfying constraints on stress ratio, tip deflection, fatigue life and laminate layup requirements. The stress ratio and tip deflection under extreme gust loads and the fatigue life under a stochastic normal wind load are evaluated. A blade element wind load model is proposed to account for the wind pressure difference due to blade height change during rotor rotation. For the fatigue life evaluation, the stress result of an implicit nonlinear dynamic analysis under a time-varying fluctuating wind is converted to histograms of the mean and amplitude of the maximum stress ratio using the rainflow counting algorithm, and Miner's rule is employed to predict the fatigue life. After integrating and automating the whole analysis procedure, an evolutionary algorithm is used to solve the discrete optimization problem.

  17. Fractional Programming for Communication Systems—Part I: Power Control and Beamforming

    NASA Astrophysics Data System (ADS)

    Shen, Kaiming; Yu, Wei

    2018-05-01

    This two-part paper explores the use of fractional programming (FP) in the design and optimization of communication systems. Part I of this paper focuses on FP theory and on solving continuous problems. The main theoretical contribution is a novel quadratic transform technique for tackling the multiple-ratio concave-convex FP problem, in contrast to conventional FP techniques that mostly can only deal with the single-ratio or the max-min-ratio case. Multiple-ratio FP problems are important for the optimization of communication networks, because system-level design often involves multiple signal-to-interference-plus-noise ratio terms. This paper considers the applications of FP to solving continuous problems in communication system design, particularly for power control, beamforming, and energy efficiency maximization. These application cases illustrate that the proposed quadratic transform can greatly facilitate the optimization involving ratios by recasting the original nonconvex problem as a sequence of convex problems. This FP-based problem reformulation gives rise to an efficient iterative optimization algorithm with provable convergence to a stationary point. The paper further demonstrates close connections between the proposed FP approach and other well-known algorithms in the literature, such as fixed-point iteration and weighted minimum mean-square-error beamforming. The optimization of discrete problems is discussed in Part II of this paper.
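
    For a single ratio, the quadratic transform described above can be stated compactly; the multiple-ratio version applies the same substitution to each ratio term.

    ```latex
    % Single-ratio concave-convex FP problem: maximize A(x)/B(x) over x,
    % with A(x) >= 0 concave and B(x) > 0 convex. The quadratic transform
    % introduces an auxiliary variable y and solves
    %
    %   maximize_{x, y}   2 y sqrt(A(x)) - y^2 B(x),
    %
    % whose objective equals A(x)/B(x) at the optimal y. Alternating a convex
    % step in x (with y fixed) with the closed-form y update below yields an
    % iterative algorithm that converges to a stationary point.
    \max_{x,\,y}\;\; 2y\sqrt{A(x)} - y^{2}B(x),
    \qquad
    y^{\star} = \frac{\sqrt{A(x)}}{B(x)}
    ```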

  18. Creation of operation algorithms for combined operation of anti-lock braking system (ABS) and electric machine included in the combined power plant

    NASA Astrophysics Data System (ADS)

    Bakhmutov, S. V.; Ivanov, V. G.; Karpukhin, K. E.; Umnitsyn, A. A.

    2018-02-01

    The paper considers an Anti-lock Braking System (ABS) operation algorithm that enables hybrid braking, i.e. a braking process combining friction brake mechanisms with an e-machine (electric machine) operating in the energy recovery mode. The provided materials focus only on the rectilinear motion of the vehicle. The ABS task consists in maintaining the target wheel slip ratio, which depends on the tyre-road adhesion coefficient. The tyre-road adhesion coefficient was determined from the vehicle deceleration. In the course of the calculated studies, the following hybrid braking operation algorithm was determined. At an adhesion coefficient ≤0.1, driving axle braking occurs only via the e-machine operating in the energy recovery mode. In other cases, depending on the adhesion coefficient, the e-machine provides a brake torque that changes from 35 to 100% of the maximum available brake torque. Virtual tests showed that the values of the wheel slip ratio are close to the required ones. Thus, this algorithm makes it possible to implement hybrid braking by means of the two sources creating the brake torque.
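
    The stated rule can be sketched as a simple torque-allocation function; the linear interpolation between the 35% and 100% endpoints, its direction, and the assumed upper adhesion limit are illustrative choices, since the paper does not specify the exact mapping.

    ```python
    def emachine_torque_fraction(mu, mu_max=0.8):
        """Fraction of braking handled by the e-machine vs. friction brakes.

        Encodes the rule stated in the paper: at a tyre-road adhesion
        coefficient mu <= 0.1 the driving axle brakes by energy recovery
        alone; otherwise the e-machine supplies between 35% and 100% of the
        maximum available brake torque. The linear mapping and mu_max are
        assumptions made for illustration only.
        """
        if mu <= 0.1:
            return 1.0                              # purely regenerative braking
        x = min((mu - 0.1) / (mu_max - 0.1), 1.0)   # normalize to 0..1
        return 0.35 + 0.65 * x
    ```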

  19. On size-constrained minimum s–t cut problems and size-constrained dense subgraph problems

    DOE PAGES

    Chen, Wenbin; Samatova, Nagiza F.; Stallmann, Matthias F.; ...

    2015-10-30

    In some application cases, the solutions of combinatorial optimization problems on graphs should satisfy an additional vertex size constraint. In this paper, we consider size-constrained minimum s–t cut problems and size-constrained dense subgraph problems. We introduce the minimum s–t cut with at-least-k vertices problem, the minimum s–t cut with at-most-k vertices problem, and the minimum s–t cut with exactly k vertices problem. We prove that they are NP-complete; thus, they are not polynomially solvable unless P = NP. On the other hand, we also study the densest at-least-k-subgraph problem (DalkS) and the densest at-most-k-subgraph problem (DamkS) introduced by Andersen and Chellapilla [1]. We present a polynomial time algorithm for DalkS when k is bounded by some constant c. We also present two approximation algorithms for DamkS. The first approximation algorithm for DamkS has an approximation ratio of (n − 1)/(k − 1), where n is the number of vertices in the input graph. The second approximation algorithm for DamkS has an approximation ratio of O(n^δ), for some δ < 1/3.

  20. Electroencephalogram-based decoding cognitive states using convolutional neural network and likelihood ratio based score fusion

    PubMed Central

    2017-01-01

    Electroencephalogram (EEG)-based decoding of human brain activity is challenging, owing to the low spatial resolution of EEG. However, EEG is an important technique, especially for brain–computer interface applications. In this study, a novel algorithm is proposed to decode brain activity associated with different types of images. In this hybrid algorithm, a convolutional neural network is modified for the extraction of features, a t-test is used for the selection of significant features, and likelihood ratio-based score fusion is used for the prediction of brain activity. The proposed algorithm takes input data from multichannel EEG time-series, an approach also known as multivariate pattern analysis. A comprehensive analysis was conducted using data from 30 participants. The results from the proposed method are compared with currently recognized feature extraction and classification/prediction techniques. The wavelet transform-support vector machine method, the most popular feature extraction and prediction method in current use, showed an accuracy of 65.7%, whereas the proposed method predicts novel data with an improved accuracy of 79.9%. In conclusion, the proposed algorithm outperformed the current feature extraction and prediction methods. PMID:28558002

  1. Design of 4D x-ray tomography experiments for reconstruction using regularized iterative algorithms

    NASA Astrophysics Data System (ADS)

    Mohan, K. Aditya

    2017-10-01

    4D X-ray computed tomography (4D-XCT) is widely used to perform non-destructive characterization of time-varying physical processes in various materials. The conventional approach to improving temporal resolution in 4D-XCT involves the development of expensive and complex instrumentation that acquires data faster with reduced noise. It is customary to acquire data with many tomographic views at a high signal-to-noise ratio. Instead, temporal resolution can be improved using regularized iterative algorithms that are less sensitive to noise and limited views. These algorithms benefit from optimization of other parameters, such as the view sampling strategy, while improving temporal resolution by reducing the total number of views or the detector exposure time. This paper presents the design principles of 4D-XCT experiments when using regularized iterative algorithms derived using the framework of model-based reconstruction. A strategy for performing 4D-XCT experiments is presented that allows for improving the temporal resolution by progressively reducing the number of views or the detector exposure time. A theoretical analysis of the effect of the data acquisition parameters on the detector signal-to-noise ratio, spatial reconstruction resolution, and temporal reconstruction resolution is also presented.

  2. Highly Efficient Compression Algorithms for Multichannel EEG.

    PubMed

    Shaw, Laxmi; Rahman, Daleef; Routray, Aurobinda

    2018-05-01

    The difficulty associated with processing and understanding the high dimensionality of electroencephalogram (EEG) data requires developing efficient and robust compression algorithms. In this paper, different lossless compression techniques of single and multichannel EEG data, including Huffman coding, arithmetic coding, Markov predictor, linear predictor, context-based error modeling, multivariate autoregression (MVAR), and a low complexity bivariate model have been examined and their performances have been compared. Furthermore, a high compression algorithm named general MVAR and a modified context-based error modeling for multichannel EEG have been proposed. The resulting compression algorithm produces a higher relative compression ratio of 70.64% on average compared with the existing methods, and in some cases, it goes up to 83.06%. The proposed methods are designed to compress a large amount of multichannel EEG data efficiently so that the data storage and transmission bandwidth can be effectively used. These methods have been validated using several experimental multichannel EEG recordings of different subjects and publicly available standard databases. The satisfactory parametric measures of these methods, namely percent-root-mean square distortion, peak signal-to-noise ratio, root-mean-square error, and cross correlation, show their superiority over the state-of-the-art compression methods.

  3. A robust correspondence matching algorithm of ground images along the optic axis

    NASA Astrophysics Data System (ADS)

    Jia, Fengman; Kang, Zhizhong

    2013-10-01

    Robust correspondence matching for ground images taken along the optic axis faces the challenges of nontraditional geometry, multiple resolutions, and the same features being sensed from different angles. A method combining the SIFT algorithm with a geometric constraint on the ratio of distances between an image point and the image principal point is proposed in this paper. Since SIFT provides robust matching across a substantial range of affine distortion, change in 3D viewpoint, and noise, we use it to tackle the problem of image distortion. By analyzing the nontraditional geometry of ground images along the optic axis, this paper derives that, for one correspondence pair, the ratio of the distances between the image point and the image principal point in the two images should be a value not far from 1. This forms a geometric constraint for the detection of gross (outlier) points. The proposed approach is tested with real image data acquired by Kodak. The results show that, with SIFT and the proposed geometric constraint, the robustness of correspondence matching on ground images along the optic axis can be effectively improved, which proves the validity of the proposed algorithm.

  4. An Optimal Seed Based Compression Algorithm for DNA Sequences

    PubMed Central

    Gopalakrishnan, Gopakumar; Karunakaran, Muralikrishnan

    2016-01-01

    This paper proposes a seed-based lossless compression algorithm for DNA sequences which uses a substitution method similar to the Lempel-Ziv compression scheme. The proposed method exploits the repetition structures inherent in DNA sequences by creating an offline dictionary which contains all such repeats along with the details of mismatches. By ensuring that only promising mismatches are allowed, the method achieves a compression ratio that is on par with or better than existing lossless DNA sequence compression algorithms. PMID:27555868

  5. A novel hybrid algorithm for the design of the phase diffractive optical elements for beam shaping

    NASA Astrophysics Data System (ADS)

    Jiang, Wenbo; Wang, Jun; Dong, Xiucheng

    2013-02-01

    In this paper, a novel hybrid algorithm for the design of phase diffractive optical elements (PDOEs) is proposed. It combines the genetic algorithm (GA) with the transformable scale BFGS (Broyden, Fletcher, Goldfarb, Shanno) algorithm, and a penalty function is used in the cost function definition. The novel hybrid algorithm has the global search merits of the genetic algorithm as well as the local improvement capabilities of the transformable scale BFGS algorithm. We designed PDOEs using both the conventional simulated annealing algorithm and the novel hybrid algorithm. To compare the performance of the two algorithms, three indexes, namely the diffractive efficiency, the uniformity error and the signal-to-noise ratio, are considered in numerical simulation. The results show that the novel hybrid algorithm has good convergence and good stability. As an application example, a PDOE was used for Gaussian beam shaping; high diffractive efficiency, low uniformity error and a high signal-to-noise ratio were obtained. PDOEs can be used for high-quality beam shaping in applications such as inertial confinement fusion (ICF), excimer laser lithography, fiber coupling of laser diode arrays, and laser welding, and thus have wide application value.

  6. Effects of speckle/pixel size ratio on temporal and spatial speckle-contrast analysis of dynamic scattering systems: Implications for measurements of blood-flow dynamics.

    PubMed

    Ramirez-San-Juan, J C; Mendez-Aguilar, E; Salazar-Hermenegildo, N; Fuentes-Garcia, A; Ramos-Garcia, R; Choi, B

    2013-01-01

    Laser Speckle Contrast Imaging (LSCI) is an optical technique used to generate blood flow maps with high spatial and temporal resolution. It is well known that in LSCI the speckle size must exceed the Nyquist criterion to maximize the speckle pattern's contrast. In this work, we experimentally study the effect of the speckle/pixel size ratio not only on dynamic speckle contrast, but also on the calculation of the relative flow speed for temporal and spatial analysis. Our data suggest that the temporal LSCI algorithm is more accurate at assessing relative changes in flow speed than the spatial algorithm.
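
    For reference, the temporal and spatial speckle contrast K = sigma/mean used in such analyses can be computed as follows (a generic sketch, not the authors' code; the relative flow speed is then commonly related to 1/K^2).

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def spatial_contrast(frame, w=7):
        """Spatial speckle contrast K = sigma/mean in a w x w sliding window."""
        f = frame.astype(float)
        m = uniform_filter(f, w)            # local mean
        m2 = uniform_filter(f * f, w)       # local second moment
        var = np.clip(m2 - m * m, 0.0, None)
        return np.sqrt(var) / (m + 1e-12)

    def temporal_contrast(stack):
        """Temporal speckle contrast per pixel over a (t, y, x) frame stack."""
        s = stack.astype(float)
        return s.std(axis=0) / (s.mean(axis=0) + 1e-12)
    ```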

  7. Analysis of Modified SMI Method for Adaptive Array Weight Control. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Dilsavor, Ronald Louis

    1989-01-01

    An adaptive array is used to receive a desired signal in the presence of weak interference signals which need to be suppressed. A modified sample matrix inversion (SMI) algorithm controls the array weights. The modification leads to increased interference suppression by subtracting a fraction of the noise power from the diagonal elements of the covariance matrix. The modified algorithm maximizes an intuitive power ratio criterion. The expected values and variances of the array weights, output powers, and power ratios as functions of the fraction and the number of snapshots are found and compared to computer simulation and real experimental array performance. Reduced-rank covariance approximations and errors in the estimated covariance are also described.
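
    A minimal sketch of the modification is given below; estimating the noise power from the smallest covariance eigenvalue is an assumption made for illustration, not necessarily the estimator used in the thesis.

    ```python
    import numpy as np

    def modified_smi_weights(snapshots, steering, fraction):
        """Modified SMI beamforming weights with diagonal noise subtraction.

        snapshots: (n_snapshots, n_elements) complex array samples.
        steering:  steering vector of the desired signal.
        fraction:  portion of the estimated noise power subtracted from the
                   diagonal of the sample covariance (0 <= fraction < 1).
        """
        X = np.asarray(snapshots)
        R = X.conj().T @ X / X.shape[0]                     # sample covariance
        noise_power = float(np.linalg.eigvalsh(R)[0])       # smallest eigenvalue
        R_mod = R - fraction * noise_power * np.eye(R.shape[0])
        return np.linalg.solve(R_mod, steering)             # w = R_mod^{-1} s
    ```

    Subtracting part of the noise power deepens the nulls placed on weak interferers, which is the increased interference suppression the thesis analyzes as a function of the fraction and the number of snapshots.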

  8. Adaptive Noise Suppression Using Digital Signal Processing

    NASA Technical Reports Server (NTRS)

    Kozel, David; Nelson, Richard

    1996-01-01

    A signal-to-noise-ratio-dependent adaptive spectral subtraction algorithm is developed to eliminate noise from noise-corrupted speech signals. The algorithm determines the signal-to-noise ratio and adjusts the spectral subtraction proportion appropriately. After spectral subtraction, low-amplitude signals are squelched. A single microphone is used to obtain both the noise-corrupted speech and the average noise estimate. This is done by determining whether the frame of data being sampled is a voiced or unvoiced frame. During unvoiced frames, an estimate of the noise is obtained. A running average of the noise is used to approximate the expected value of the noise. Applications include the emergency egress vehicle and the crawler transporter.
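
    The SNR-dependent subtraction can be sketched on STFT frames as follows; the mapping from frame SNR to subtraction proportion and the squelch threshold are illustrative assumptions, not the report's values.

    ```python
    import numpy as np

    def spectral_subtract(stft_frames, noise_mag, max_factor=2.0, squelch=0.05):
        """SNR-dependent spectral subtraction on complex STFT frames.

        stft_frames: (n_frames, n_bins) complex spectra of the noisy speech.
        noise_mag:   running-average noise magnitude spectrum from unvoiced frames.
        The subtraction proportion grows as the frame SNR falls, and residual
        low-amplitude components are squelched.
        """
        out = np.empty_like(stft_frames)
        for i, F in enumerate(stft_frames):
            mag, phase = np.abs(F), np.angle(F)
            snr_db = 10.0 * np.log10((mag ** 2).mean() / ((noise_mag ** 2).mean() + 1e-12))
            alpha = float(np.clip(max_factor - snr_db / 20.0, 1.0, max_factor))
            clean = np.maximum(mag - alpha * noise_mag, 0.0)   # subtract noise floor
            clean[clean < squelch * mag.max()] = 0.0           # squelch weak bins
            out[i] = clean * np.exp(1j * phase)                # keep noisy phase
        return out
    ```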

  9. Non-contact passive temperature measuring system and method of operation using micro-mechanical sensors

    DOEpatents

    Thundat, Thomas G.; Oden, Patrick I.; Datskos, Panagiotis G.

    2000-01-01

    A non-contact infrared thermometer measures target temperatures remotely without requiring knowledge of the ratio of the target size to the distance from the target to the thermometer. A collection means collects and focuses target IR radiation onto an IR detector. The detector measures the thermal energy of the target over a spectrum using micromechanical sensors. A processor means calculates the collected thermal energy in at least two different spectral regions using a first algorithm in program form, and further calculates the ratio of the thermal energy in the at least two different spectral regions to obtain the target temperature independent of the target size, distance to the target and emissivity using a second algorithm in program form.
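
    The two-band ratio principle follows from the Wien approximation of Planck's law, under which the band ratio depends only on temperature for a gray body. The sketch below is illustrative only and ignores band integration and detector response.

    ```python
    import numpy as np

    C2 = 1.4388e-2  # second radiation constant, m*K

    def ratio_temperature(s1, s2, lam1, lam2):
        """Target temperature from the ratio of thermal signals in two bands.

        Under the Wien approximation the band ratio s1/s2 is independent of
        target size, distance and (gray-body) emissivity, which is the
        property the patent exploits. Wavelengths in meters, lam1 < lam2.
        """
        r = np.log(s1 / s2)
        return C2 * (1.0 / lam1 - 1.0 / lam2) / (5.0 * np.log(lam2 / lam1) - r)

    # Round trip: synthesize Wien-law signals at 500 K and recover T.
    lam1, lam2, T = 4e-6, 5e-6, 500.0
    wien = lambda lam: lam ** -5 * np.exp(-C2 / (lam * T))
    print(ratio_temperature(wien(lam1), wien(lam2), lam1, lam2))  # ~500.0
    ```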

  10. A new efficient method for color image compression based on visual attention mechanism

    NASA Astrophysics Data System (ADS)

    Shao, Xiaoguang; Gao, Kun; Lv, Lily; Ni, Guoqiang

    2010-11-01

    One of the key procedures in color image compression is to extract regions of interest (ROIs) and apply different compression ratios to them. A new non-uniform color image compression algorithm with high efficiency is proposed in this paper, using a biology-motivated selective attention model for the effective extraction of ROIs in natural images. Once the ROIs have been extracted and labeled in the image, the subsequent work is to encode the ROIs and the other regions with different compression ratios via the popular JPEG algorithm. Furthermore, the experimental results and the quantitative and qualitative analysis in the paper show excellent performance in comparison with traditional color image compression approaches.

  11. Automatic characterization and segmentation of human skin using three-dimensional optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Hori, Yasuaki; Yasuno, Yoshiaki; Sakai, Shingo; Matsumoto, Masayuki; Sugawara, Tomoko; Madjarova, Violeta; Yamanari, Masahiro; Makita, Shuichi; Yasui, Takeshi; Araki, Tsutomu; Itoh, Masahide; Yatagai, Toyohiko

    2006-03-01

    A set of fully automated algorithms specialized for analyzing a three-dimensional optical coherence tomography (OCT) volume of human skin is reported. The algorithm set first determines the skin surface of the OCT volume, and a depth-oriented algorithm provides the mean epidermal thickness, a distribution map of the epidermis, and a segmented volume of the epidermis. Subsequently, an en face shadowgram is produced by an algorithm to visualize the infundibula in the skin with high contrast. The population and occupation ratio of the infundibula are provided by a histogram-based thresholding algorithm and a distance mapping algorithm. En face OCT slices at constant depths from the sample surface are extracted, and the histogram-based thresholding algorithm is again applied to these slices, yielding a three-dimensional segmented volume of the infundibula. The dermal attenuation coefficient is also calculated from the OCT volume in order to evaluate the skin texture. The algorithm set was applied to swept-source OCT volumes of the skin of several volunteers, and the results show the high stability, portability and reproducibility of the algorithms.

  12. Differences in spirometry interpretation algorithms: influence on decision making among primary-care physicians

    PubMed Central

    He, Xiao-Ou; D’Urzo, Anthony; Jugovic, Pieter; Jhirad, Reuven; Sehgal, Prateek; Lilly, Evan

    2015-01-01

    Background: Spirometry is recommended for the diagnosis of asthma and chronic obstructive pulmonary disease (COPD) in international guidelines and may be useful for distinguishing asthma from COPD. Numerous spirometry interpretation algorithms (SIAs) are described in the literature, but no studies highlight how different SIAs may influence the interpretation of the same spirometric data. Aims: We examined how two different SIAs may influence decision making among primary-care physicians. Methods: Data for this initiative were gathered from 113 primary-care physicians attending accredited workshops in Canada between 2011 and 2013. Physicians were asked to interpret nine spirograms presented twice in random sequence using two different SIAs and touch pad technology for anonymous data recording. Results: We observed differences in the interpretation of spirograms using two different SIAs. When the pre-bronchodilator FEV1/FVC (forced expiratory volume in one second/forced vital capacity) ratio was >0.70, algorithm 1 led to a ‘normal’ interpretation (78% of physicians), whereas algorithm 2 prompted a bronchodilator challenge revealing changes in FEV1 that were consistent with asthma, an interpretation selected by 94% of physicians. When the FEV1/FVC ratio was <0.70 after bronchodilator challenge but FEV1 increased >12% and 200 ml, 76% suspected asthma and 10% suspected COPD using algorithm 1, whereas 74% suspected asthma versus COPD using algorithm 2 across five separate cases. The absence of a post-bronchodilator FEV1/FVC decision node in algorithm 1 did not permit consideration of possible COPD. Conclusions: This study suggests that differences in SIAs may influence decision making and lead clinicians to interpret the same spirometry data differently. PMID:25763716

  13. Blind restoration method of three-dimensional microscope image based on RL algorithm

    NASA Astrophysics Data System (ADS)

    Yao, Jin-li; Tian, Si; Wang, Xiang-rong; Wang, Jing-li

    2013-08-01

    Thin specimens of biological tissue appear three-dimensionally transparent under a microscope. Optical slice images can be captured by moving the focal plane to different locations in the specimen. The captured images have low resolution due to the influence of out-of-focus information from the planes adjacent to the focal plane. Traditional methods can remove the blur in the images to a certain degree, but they require accurate knowledge of the point spread function (PSF) of the imaging system, and the accuracy of the PSF greatly influences the restoration result. In practice, it is difficult to obtain an accurate PSF of the imaging system. In order to restore the original appearance of the specimen when the imaging system parameters are unknown, or when there is noise and spherical aberration in the system, a blind restoration method for three-dimensional microscopy based on the Richardson-Lucy (R-L) algorithm is proposed in this paper. On the basis of an exhaustive study of the two-dimensional R-L algorithm, and drawing on the theory of microscopy imaging and a wavelet transform denoising pretreatment, we extend the R-L algorithm to three-dimensional space. It is a nonlinear restoration method with a maximum entropy constraint. The method does not need to know the PSF of the microscopy imaging system precisely to recover the blurred image: the image and PSF converge to the optimum solutions through many alternating iterations and corrections. MATLAB simulations and experimental results show that the extended algorithm is better in terms of visual indicators, peak signal-to-noise ratio and improved signal-to-noise ratio when compared with the PML algorithm, and that the proposed algorithm can suppress noise, restore more details of the target, and increase image resolution.
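
    A 2D sketch of alternating Richardson-Lucy blind deconvolution is shown below; the paper's 3D extension, wavelet denoising pretreatment and maximum entropy constraint are omitted, and the PSF-update cropping is a simplification.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def blind_rl(blurred, psf_shape=(15, 15), iters=20, inner=5):
        """Alternating Richardson-Lucy blind deconvolution (2D sketch)."""
        est = np.full(blurred.shape, float(blurred.mean()))  # flat initial image
        psf = np.ones(psf_shape) / np.prod(psf_shape)        # flat initial PSF
        crop = tuple(slice((s - p) // 2, (s - p) // 2 + p)   # center crop to PSF size
                     for s, p in zip(blurred.shape, psf_shape))
        for _ in range(iters):
            for _ in range(inner):                           # PSF update, image fixed
                ratio = blurred / (fftconvolve(est, psf, mode="same") + 1e-12)
                psf *= fftconvolve(ratio, est[::-1, ::-1], mode="same")[crop]
                psf = np.clip(psf, 0.0, None)
                psf /= psf.sum()                             # keep PSF normalized
            for _ in range(inner):                           # image update, PSF fixed
                ratio = blurred / (fftconvolve(est, psf, mode="same") + 1e-12)
                est *= fftconvolve(ratio, psf[::-1, ::-1], mode="same")
        return est, psf
    ```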

  14. Evaluation of hybrid SART + OS + TV iterative reconstruction algorithm for optical-CT gel dosimeter imaging

    NASA Astrophysics Data System (ADS)

    Du, Yi; Wang, Xiangang; Xiang, Xincheng; Wei, Zhouping

    2016-12-01

    Optical computed tomography (optical-CT) is a high-resolution, fast, and easily accessible readout modality for gel dosimeters. This paper evaluates a hybrid iterative image reconstruction algorithm for optical-CT gel dosimeter imaging, namely, the simultaneous algebraic reconstruction technique (SART) integrated with ordered subsets (OS) iteration and total variation (TV) minimization regularization. The mathematical theory and implementation workflow of the algorithm are detailed. Experiments on two different optical-CT scanners were performed for cross-platform validation. For algorithm evaluation, the iterative convergence is first shown, and peak-to-noise-ratio (PNR) and contrast-to-noise ratio (CNR) results are given, with the cone-beam filtered backprojection (FDK) algorithm and the FDK results followed by median filtering (mFDK) as references. The effect on spatial gradients and reconstruction artefacts is also investigated. The PNR curve illustrates that the results of SART + OS + TV finally converge to those of FDK but with less noise, which implies that the dose-OD calibration method for FDK is also applicable to the proposed algorithm. The CNR in selected regions-of-interest (ROIs) of the SART + OS + TV results is almost double that of FDK and 50% higher than that of mFDK. The artefacts in the SART + OS + TV results are still visible, but have been much suppressed with little spatial gradient loss. Based on this assessment, we conclude that the hybrid SART + OS + TV algorithm outperforms both FDK and mFDK in denoising, preserving spatial dose gradients and reducing artefacts, and that its effectiveness and efficiency are platform independent.

  15. Increasing BCI Communication Rates with Dynamic Stopping Towards More Practical Use: An ALS Study

    PubMed Central

    Mainsah, B. O.; Collins, L. M.; Colwell, K. A.; Sellers, E. W.; Ryan, D. B.; Caves, K.; Throckmorton, C. S.

    2015-01-01

    Objective The P300 speller is a brain-computer interface (BCI) that can possibly restore communication abilities to individuals with severe neuromuscular disabilities, such as amyotrophic lateral sclerosis (ALS), by exploiting elicited brain signals in electroencephalography data. However, accurate spelling with BCIs is slow due to the need to average data over multiple trials to increase the signal-to-noise ratio of the elicited brain signals. Probabilistic approaches to dynamically control data collection have shown improved performance in non-disabled populations; however, validation of these approaches in a target BCI user population has not occurred. Approach We have developed a data-driven algorithm for the P300 speller based on Bayesian inference that improves spelling time by adaptively selecting the number of trials based on the acute signal-to-noise ratio of a user’s electroencephalography data. We further enhanced the algorithm by incorporating information about the user’s language. In this current study, we test and validate the algorithms online in a target BCI user population, by comparing the performance of the dynamic stopping (or early stopping) algorithms against the current state-of-the-art method, static data collection, where the amount of data collected is fixed prior to online operation. Main Results Results from online testing of the dynamic stopping algorithms in participants with ALS demonstrate a significant increase in communication rate as measured in bits/sec (100-300%), and theoretical bit rate (100-550%), while maintaining selection accuracy. Participants also overwhelmingly preferred the dynamic stopping algorithms. Significance We have developed a viable BCI algorithm that has been tested in a target BCI population which has the potential for translation to improve BCI speller performance towards more practical use for communication. PMID:25588137
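
    The dynamic stopping idea can be sketched as a Bayesian posterior update with a stopping threshold; the Gaussian score likelihoods and their parameters below are illustrative, and the language model incorporated in the actual algorithm is omitted.

    ```python
    import numpy as np

    def dynamic_stop(scores, flashed_masks, n_chars=36, p_stop=0.9):
        """Bayesian dynamic stopping for a P300 speller (sketch).

        scores:        classifier score for each flash.
        flashed_masks: boolean (n_chars,) mask per flash of illuminated characters.
        After each flash the posterior over candidate characters is updated with
        Gaussian target/non-target likelihoods (illustrative N(1,1) vs N(0,1));
        data collection stops once one character's posterior exceeds p_stop.
        """
        log_post = np.full(n_chars, -np.log(n_chars))       # uniform prior
        for t, (score, flashed) in enumerate(zip(scores, flashed_masks), 1):
            ll_target = -0.5 * (score - 1.0) ** 2           # if char was the target
            ll_other = -0.5 * score ** 2                    # if it was not
            log_post += np.where(flashed, ll_target, ll_other)
            log_post -= np.logaddexp.reduce(log_post)       # renormalize
            if np.exp(log_post.max()) > p_stop:
                return int(log_post.argmax()), t            # early stop
        return int(log_post.argmax()), len(scores)
    ```

    Easy selections therefore terminate after few flashes while noisy ones keep collecting data, which is how the communication rate improves without sacrificing accuracy.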

  16. Directional Agglomeration Multigrid Techniques for High Reynolds Number Viscous Flow Solvers

    NASA Technical Reports Server (NTRS)

    1998-01-01

    A preconditioned directional-implicit agglomeration algorithm is developed for solving two- and three-dimensional viscous flows on highly anisotropic unstructured meshes of mixed-element types. The multigrid smoother consists of a pre-conditioned point- or line-implicit solver which operates on lines constructed in the unstructured mesh using a weighted graph algorithm. Directional coarsening or agglomeration is achieved using a similar weighted graph algorithm. A tight coupling of the line construction and directional agglomeration algorithms enables the use of aggressive coarsening ratios in the multigrid algorithm, which in turn reduces the cost of a multigrid cycle. Convergence rates which are independent of the degree of grid stretching are demonstrated in both two and three dimensions. Further improvement of the three-dimensional convergence rates through a GMRES technique is also demonstrated.

  17. Directional Agglomeration Multigrid Techniques for High-Reynolds Number Viscous Flows

    NASA Technical Reports Server (NTRS)

    Mavriplis, Dimitri J.

    1998-01-01

    A preconditioned directional-implicit agglomeration algorithm is developed for solving two- and three-dimensional viscous flows on highly anisotropic unstructured meshes of mixed-element types. The multigrid smoother consists of a pre-conditioned point- or line-implicit solver which operates on lines constructed in the unstructured mesh using a weighted graph algorithm. Directional coarsening or agglomeration is achieved using a similar weighted graph algorithm. A tight coupling of the line construction and directional agglomeration algorithms enables the use of aggressive coarsening ratios in the multigrid algorithm, which in turn reduces the cost of a multigrid cycle. Convergence rates which are independent of the degree of grid stretching are demonstrated in both two and three dimensions. Further improvement of the three-dimensional convergence rates through a GMRES technique is also demonstrated.

  18. The high performance parallel algorithm for Unified Gas-Kinetic Scheme

    NASA Astrophysics Data System (ADS)

    Li, Shiyi; Li, Qibing; Fu, Song; Xu, Jinxiu

    2016-11-01

    A high performance parallel algorithm for UGKS is developed to simulate three-dimensional internal and external flows on arbitrary grid systems. The physical domain and velocity domain are divided into different blocks and distributed according to a two-dimensional Cartesian topology, with intra-communicators in the physical domain for data exchange and other intra-communicators in the velocity domain for the sum reduction to moment integrals. Numerical results for three-dimensional cavity flow and flow past a sphere agree well with the results of existing studies and validate the applicability of the algorithm. The scalability of the algorithm is tested on both small (1-16) and large (729-5832) processor counts. The tested speed-up ratio is nearly linear and the efficiency is thus around 1, which reveals the good scalability of the present algorithm.

  19. Fizeau interferometric cophasing of segmented mirrors: experimental validation.

    PubMed

    Cheetham, Anthony; Cvetojevic, Nick; Norris, Barnaby; Sivaramakrishnan, Anand; Tuthill, Peter

    2014-06-02

    We present an optical testbed demonstration of the Fizeau Interferometric Cophasing of Segmented Mirrors (FICSM) algorithm. FICSM allows a segmented mirror to be phased with a science imaging detector and three filters (selected among the normal science complement). It requires no specialised, dedicated wavefront sensing hardware. Applying random piston and tip/tilt aberrations of more than 5 wavelengths to a small segmented mirror array produced an initial unphased point spread function with an estimated Strehl ratio of 9%, which served as the starting point for our phasing algorithm. After using the FICSM algorithm to cophase the pupil, we estimated a Strehl ratio of 94% based on a comparison between our data and simulated encircled energy metrics. Our final image quality is limited by the accuracy of our segment actuation, which yields a root mean square (RMS) wavefront error of 25 nm. This is the first hardware demonstration of coarse and fine phasing of an 18-segment pupil with the James Webb Space Telescope (JWST) geometry using a single algorithm. FICSM can be implemented on JWST using any of its scientific imaging cameras, making it useful as a fall-back in the event that accepted phasing strategies encounter problems. We present an operational sequence that would cophase such an 18-segment primary in 3 sequential iterations of the FICSM algorithm. Similar sequences can be readily devised for any segmented mirror.

  20. Fault detection and isolation in GPS receiver autonomous integrity monitoring based on chaos particle swarm optimization-particle filter algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Ershen; Jia, Chaoying; Tong, Gang; Qu, Pingping; Lan, Xiaoyu; Pang, Tao

    2018-03-01

    Receiver autonomous integrity monitoring (RAIM) is one of the most important parts of an avionic navigation system. Two problems need to be addressed to improve this system: the degeneracy phenomenon and the lack of samples in the standard particle filter (PF), whereby the available samples cannot adequately express the real distribution of the probability density function (i.e., sample impoverishment). This study presents a GPS RAIM method based on a chaos particle swarm optimization particle filter (CPSO-PF) algorithm with a log likelihood ratio. The chaos sequence generates a set of chaotic variables, which are mapped to the interval of the optimization variables to improve particle quality. This chaos perturbation overcomes the potential for the search to become trapped in a local optimum in the particle swarm optimization (PSO) algorithm. Test statistics are configured based on a likelihood ratio, and satellite fault detection is then conducted by checking the consistency between the state estimate of the main PF and those of the auxiliary PFs. Based on GPS data, the experimental results demonstrate that the proposed algorithm can effectively detect and isolate satellite faults under conditions of non-Gaussian measurement noise. Moreover, the performance of the proposed method is better than that of RAIM based on the PF or PSO-PF algorithm.
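
    The chaos-perturbation ingredient can be sketched with the logistic map, a common choice for generating chaotic variables; the specific map and seed used in the paper may differ.

    ```python
    import numpy as np

    def chaos_init(n_particles, lo, hi, x0=0.37, mu=4.0):
        """Seed swarm particles from a logistic-map chaotic sequence.

        x_{k+1} = mu * x_k * (1 - x_k) is fully chaotic at mu = 4; the values
        in (0, 1) are mapped onto the optimization interval [lo, hi]. Avoid
        seeds in {0, 0.25, 0.5, 0.75, 1}, which hit fixed or periodic points.
        """
        xs = np.empty(n_particles)
        x = x0
        for k in range(n_particles):
            x = mu * x * (1.0 - x)      # logistic map iteration
            xs[k] = x
        return lo + (hi - lo) * xs      # map chaotic variables onto [lo, hi]
    ```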

  1. Wire-positioning algorithm for coreless Hall array sensors in current measurement

    NASA Astrophysics Data System (ADS)

    Chen, Wenli; Zhang, Huaiqing; Chen, Lin; Gu, Shanyun

    2018-05-01

    This paper presents a scheme of circular-arrayed, coreless Hall-effect current transformers. It can satisfy the demands of wide-dynamic-range and wide-bandwidth current measurement in the distribution system, as well as the demand for simultaneous AC and DC measurement. In order to improve the signal-to-noise ratio (SNR) of the sensor, a wire-positioning algorithm is proposed, which improves the measurement accuracy through post-processing of the measurement data. The simulation results demonstrate maximum errors of 70%, 6.1% and 0.95% for Ampère's circuital method, the approximate positioning algorithm and the precise positioning algorithm, respectively. The accuracy of the positioning algorithms is thus significantly improved compared with Ampère's circuital method. In the experiment, the maximum error of the positioning algorithm is smaller still.
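
    The baseline against which the positioning algorithms are compared can be sketched directly from Ampère's circuital law; this estimate is exact only for a wire centered in the array, and the off-center error is what the positioning algorithms correct.

    ```python
    import numpy as np

    MU0 = 4e-7 * np.pi  # vacuum permeability, H/m

    def current_from_ampere(b_tangential, radius):
        """Ampere's-circuital-law baseline for a circular Hall array.

        The line integral of B around the array circle equals mu0 * I, so
        the mean tangential field measured by the sensors, multiplied by
        the circumference, estimates the enclosed current.
        """
        return float(np.mean(b_tangential)) * 2.0 * np.pi * radius / MU0
    ```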

  2. Evolutionary Beamforming Optimization for Radio Frequency Charging in Wireless Rechargeable Sensor Networks.

    PubMed

    Yao, Ke-Han; Jiang, Jehn-Ruey; Tsai, Chung-Hsien; Wu, Zong-Syun

    2017-08-20

    This paper investigates how to efficiently charge sensor nodes in a wireless rechargeable sensor network (WRSN) with radio frequency (RF) chargers to make the network sustainable. An RF charger is assumed to be equipped with a uniform circular array (UCA) of 12 antennas with the radius λ , where λ is the RF wavelength. The UCA can steer most RF energy in a target direction to charge a specific WRSN node by the beamforming technology. Two evolutionary algorithms (EAs) using the evolution strategy (ES), namely the Evolutionary Beamforming Optimization (EBO) algorithm and the Evolutionary Beamforming Optimization Reseeding (EBO-R) algorithm, are proposed to nearly optimize the power ratio of the UCA beamforming peak side lobe (PSL) and the main lobe (ML) aimed at the given target direction. The proposed algorithms are simulated for performance evaluation and are compared with a related algorithm, called Particle Swarm Optimization Gravitational Search Algorithm-Explore (PSOGSA-Explore), to show their superiority.

  3. Convex Optimization over Classes of Multiparticle Entanglement

    NASA Astrophysics Data System (ADS)

    Shang, Jiangwei; Gühne, Otfried

    2018-02-01

    A well-known strategy to characterize multiparticle entanglement utilizes the notion of stochastic local operations and classical communication (SLOCC), but characterizing the resulting entanglement classes is difficult. Given a multiparticle quantum state, we first show that Gilbert's algorithm can be adapted to prove separability or membership in a certain entanglement class. We then present two algorithms for convex optimization over SLOCC classes. The first algorithm uses a simple gradient approach, while the other one employs the accelerated projected-gradient method. For demonstration, the algorithms are applied to the likelihood-ratio test using experimental data on bound entanglement of a noisy four-photon Smolin state [Phys. Rev. Lett. 105, 130501 (2010), 10.1103/PhysRevLett.105.130501].

  4. Stochastic resonance algorithm applied to quantitative analysis for weak chromatographic signals of alkyl halides and alkyl benzenes in water samples.

    PubMed

    Xiang, Suyun; Wang, Wei; Xia, Jia; Xiang, Bingren; Ouyang, Pingkai

    2009-09-01

    The stochastic resonance algorithm is applied to the trace analysis of alkyl halides and alkyl benzenes in water samples. Compared with applying the algorithm to a single signal, optimizing the system parameters for a multicomponent mixture is more complex. In this article, the resolution of adjacent chromatographic peaks is incorporated into the parameter optimization for the first time. With the optimized parameters, the algorithm gave an ideal output with good resolution as well as an enhanced signal-to-noise ratio. Using the enhanced signals, the method improved the limit of detection and exhibited good linearity, which ensures accurate determination of the multiple components.
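
    The core of a stochastic resonance enhancer of this kind is typically an overdamped bistable system driven by the noisy signal, so that noise energy is transferred into the weak deterministic component. A generic sketch follows; the parameters a, b and the step size dt stand in for the system parameters the paper optimizes, and their values here are placeholders.

        import numpy as np

        def stochastic_resonance(signal, a=1.0, b=1.0, dt=0.01):
            """Integrate dx/dt = a*x - b*x**3 + s(t) with forward Euler."""
            x = np.zeros(len(signal))
            for k in range(1, len(signal)):
                x[k] = x[k - 1] + dt * (a * x[k - 1] - b * x[k - 1] ** 3 + signal[k - 1])
            return x

        t = np.linspace(0, 60, 6000)
        noisy = 0.3 * np.exp(-(t - 30) ** 2 / 2) + 0.4 * np.random.randn(t.size)  # weak peak + noise
        enhanced = stochastic_resonance(noisy)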

  5. Hop Optimization and Relay Node Selection in Multi-hop Wireless Ad-Hoc Networks

    NASA Astrophysics Data System (ADS)

    Li, Xiaohua(Edward)

    In this paper we propose an efficient approach to determine the optimal hops for multi-hop ad hoc wireless networks. Based on the assumption that nodes use successive interference cancellation (SIC) and maximal ratio combining (MRC) to deal with mutual interference and to utilize all the received signal energy, we show that the signal-to-interference-plus-noise ratio (SINR) of a node is determined only by the nodes before it, not the nodes after it, along a packet forwarding path. Based on this observation, we propose an iterative procedure to select the relay nodes and to calculate the path SINR as well as the capacity of an arbitrary multi-hop packet forwarding path. The complexity of the algorithm is extremely low and scales well with network size, so it is applicable to arbitrarily large networks. Simulations demonstrate its desirable performance. The algorithm can be helpful in analyzing the performance of multi-hop wireless networks.
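
    One way to realize the iterative evaluation the abstract describes is sketched below: with SIC removing interference and MRC summing the energy received from all earlier transmitters, each node's SINR depends only on its predecessors along the path. The power-law path-loss channel, transmit power, and noise level are illustrative assumptions, not the paper's model.

        import numpy as np

        def path_sinr(positions, p_tx=1.0, noise=1e-6, alpha=3.0):
            """Per-hop SINR along a forwarding path (list of 2-D points).

            With SIC cancelling known interference and MRC combining the
            energy from every earlier node, the SINR at node n uses only
            nodes 0..n-1, so the path can be evaluated iteratively.
            """
            pos = np.asarray(positions, dtype=float)
            sinrs = []
            for n in range(1, len(pos)):
                d = np.linalg.norm(pos[:n] - pos[n], axis=1)  # distances to prior nodes
                sinrs.append(p_tx * np.sum(d ** -alpha) / noise)
            return sinrs

        def path_capacity(positions):
            """Bottleneck capacity (bits/s/Hz) of the forwarding path."""
            return min(np.log2(1.0 + s) for s in path_sinr(positions))

        print(path_capacity([(0, 0), (30, 5), (60, 0), (90, 5)]))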

  6. Intelligent bandwidth compression

    NASA Astrophysics Data System (ADS)

    Tseng, D. Y.; Bullock, B. L.; Olin, K. E.; Kandt, R. K.; Olsen, J. D.

    1980-02-01

    The feasibility of a 1000:1 bandwidth compression ratio for image transmission has been demonstrated using image-analysis algorithms and a rule-based controller. Such a high compression ratio was achieved by first analyzing scene content using auto-cueing and feature-extraction algorithms, and then transmitting only the pertinent information consistent with mission requirements. A rule-based controller directs the flow of analysis and performs priority allocations on the extracted scene content. The reconstructed bandwidth-compressed image consists of an edge map of the scene background, with primary and secondary target windows embedded in the edge map. The bandwidth-compressed images are updated at a basic rate of 1 frame per second, with the high-priority target window updated at 7.5 frames per second. The scene-analysis algorithms used in this system, together with the adaptive priority controller, are described. Results of simulated 1000:1 bandwidth-compressed images are presented. A video tape simulation of the Intelligent Bandwidth Compression system has been produced using a sequence of video input from the database.

  7. An Improved Technique for the Photometry and Astrometry of Faint Companions

    NASA Astrophysics Data System (ADS)

    Burke, Daniel; Gladysz, Szymon; Roberts, Lewis; Devaney, Nicholas; Dainty, Chris

    2009-07-01

    We propose a new approach to differential astrometry and photometry of faint companions in adaptive optics images. It is based on a prewhitening matched filter, also referred to in the literature as the Hotelling observer. We focus on cases where the signal of the companion is located within the bright halo of the parent star. Using real adaptive optics data from the 3 m Shane telescope at the Lick Observatory, we compare the performance of the Hotelling algorithm with other estimation algorithms currently used for the same problem. The real single-star data are used to generate artificial binary objects with a range of magnitude ratios. In most cases, the Hotelling observer gives significantly lower astrometric and photometric errors. In the case of high Strehl ratio (SR) data (SR ≈ 0.5), the differential photometry of a binary star with Δm = 4.5 and a separation of 0.6″ is better than 0.1 mag, a factor of 2 lower than the other algorithms considered.
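
    In its simplest form, the prewhitening matched filter computes the test statistic t(g) = s^T K^{-1} g, where s is the expected companion signal and K is the covariance of the stellar halo estimated from star-only frames. A hedged sketch of that statistic; the regularization and background-subtraction details are our assumptions, not the paper's pipeline.

        import numpy as np

        def hotelling_statistic(star_only_frames, psf_template, test_image):
            """t(g) = s^T K^{-1} (g - mean background); inputs are flattened 1-D arrays."""
            X = np.asarray(star_only_frames, dtype=float)   # (n_frames, n_pixels)
            mean_bg = X.mean(axis=0)
            K = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])  # regularized covariance
            w = np.linalg.solve(K, psf_template)            # prewhitened template
            return w @ (test_image - mean_bg)               # large value -> companion present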

  8. An internal reference model-based PRF temperature mapping method with Cramer-Rao lower bound noise performance analysis.

    PubMed

    Li, Cheng; Pan, Xinyi; Ying, Kui; Zhang, Qiang; An, Jing; Weng, Dehe; Qin, Wen; Li, Kuncheng

    2009-11-01

    The conventional phase difference method for MR thermometry suffers from disturbances caused by the presence of lipid protons, motion-induced error, and field drift. A signal model is presented with a multi-echo gradient echo (GRE) sequence using the fat signal as an internal reference to overcome these problems. The internal reference signal model is fit to the water and fat signals by the extended Prony algorithm and the Levenberg-Marquardt algorithm to estimate the chemical shifts between water and fat, which contain the temperature information. A noise analysis of the signal model was conducted using the Cramer-Rao lower bound to evaluate the noise performance of various algorithms, the effects of imaging parameters, and the influence of the water:fat signal ratio in a sample on the temperature estimate. Comparison of the calculated temperature map and thermocouple temperature measurements shows that the maximum temperature estimation error is 0.614 degrees C, with a standard deviation of 0.06 degrees C, confirming the feasibility of this model-based temperature mapping method. The influence of the sample water:fat signal ratio on the accuracy of the temperature estimate was evaluated in a water-fat mixed phantom experiment, with an optimal ratio of approximately 0.66:1.

  9. JONAH algorithms: C-2 the ratio option

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rego, J.

    1979-02-01

    Information concerning input is given first. Then formulas are given for calculation of atoms/millimeter, fissions, kiloton yield, R-value, atoms/fission, fissions/fission, bomb fraction, fissions/atoms, atoms, atoms/atoms, fissions/atoms, atom ratio, total atoms formed, and thermonuclear bomb fraction. Some of the terminology used is elucidated in an appendix. (RWR)

  10. Prospective Elementary Teachers' Misunderstandings in Solving Ratio and Proportion Problems

    ERIC Educational Resources Information Center

    Monteiro, Cecilia

    2003-01-01

    This study explores difficulties that prospective elementary mathematics teachers have with the concepts of ratio and proportion, mainly when they are engaged in solving problems using algorithm procedures. These difficulties can be traced back to earlier experiences when they were students of junior and high school. The reflection on these…

  11. The energy ratio mapping algorithm: a tool to improve the energy-based detection of odontocete echolocation clicks.

    PubMed

    Klinck, Holger; Mellinger, David K

    2011-04-01

    The energy ratio mapping algorithm (ERMA) was developed to improve the performance of energy-based detection of odontocete echolocation clicks, especially for application in environments with limited computational power and energy such as acoustic gliders. ERMA systematically evaluates many frequency bands for energy ratio-based detection of echolocation clicks produced by a target species in the presence of the species mix in a given geographic area. To evaluate the performance of ERMA, a Teager-Kaiser energy operator was applied to the series of energy ratios as derived by ERMA. A noise-adaptive threshold was then applied to the Teager-Kaiser function to identify clicks in data sets. The method was tested for detecting clicks of Blainville's beaked whales while rejecting echolocation clicks of Risso's dolphins and pilot whales. Results showed that the ERMA-based detector correctly identified 81.6% of the beaked whale clicks in an extended evaluation data set. Average false-positive detection rate was 6.3% (3.4% for Risso's dolphins and 2.9% for pilot whales).
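
    Two components the abstract names, the band-energy ratio and the Teager-Kaiser operator applied to the ratio series, can be sketched as follows. The frame length, hop, and band edges are placeholders, not the ERMA-selected bands.

        import numpy as np

        def band_energy(x, fs, f_lo, f_hi):
            """Spectral energy of x between f_lo and f_hi (Hz)."""
            spec = np.abs(np.fft.rfft(x)) ** 2
            freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
            return spec[(freqs >= f_lo) & (freqs < f_hi)].sum()

        def teager_kaiser(x):
            """Discrete Teager-Kaiser operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
            return x[1:-1] ** 2 - x[:-2] * x[2:]

        def energy_ratio_series(x, fs, frame=256, hop=128,
                                target=(24e3, 48e3), other=(2e3, 24e3)):
            ratios = [band_energy(x[i:i + frame], fs, *target) /
                      (band_energy(x[i:i + frame], fs, *other) + 1e-12)
                      for i in range(0, len(x) - frame, hop)]
            return teager_kaiser(np.asarray(ratios))  # then threshold adaptively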

  12. Selection Algorithm for the CALIPSO Lidar Aerosol Extinction-to-Backscatter Ratio

    NASA Technical Reports Server (NTRS)

    Omar, Ali H.; Winker, David M.; Vaughan, Mark A.

    2006-01-01

    The extinction-to-backscatter ratio (S_a) is an important parameter used in the determination of the aerosol extinction and subsequently the optical depth from lidar backscatter measurements. We outline the algorithm used to determine S_a for the Cloud and Aerosol Lidar and Infrared Pathfinder Spaceborne Observations (CALIPSO) lidar. S_a for the CALIPSO lidar will either be selected from a look-up table or calculated from the lidar measurements, depending on the characteristics of the aerosol layer. Whenever suitable lofted layers are encountered, S_a is computed directly from the integrated backscatter and transmittance. In all other cases, the CALIPSO observables (the depolarization ratio δ, the layer integrated attenuated backscatter β, and the mean layer total attenuated color ratio γ), together with the surface type, are used to aid in aerosol typing. Once the type is identified, a look-up table, developed primarily from worldwide observations, is used to determine the S_a value. The CALIPSO aerosol models include desert dust, biomass burning, background, polluted continental, polluted dust, and marine aerosols.

  13. Digitally Controlled Slot Coupled Patch Array

    NASA Technical Reports Server (NTRS)

    D'Arista, Thomas; Pauly, Jerry

    2010-01-01

    A four-element array conformed to a singly curved conducting surface has been demonstrated to provide a 2 dB axial ratio over a 14 percent bandwidth, while maintaining a VSWR (voltage standing wave ratio) of 2:1 and a gain of 13 dBiC. The array is digitally controlled and can be scanned with the LMS adaptive algorithm using the power spectrum as the objective, as well as the direction of arrival (DoA) of the beam to set the amplitude of the power spectrum. The total height of the array above the conducting surface is 1.5 inches (3.8 cm). A uniquely configured microstrip-coupled aperture over a conducting surface produced supergain characteristics, achieving 12.5 dBiC across the 2-to-2.13-GHz and 2.2-to-2.3-GHz frequency bands. This design is optimized to retain VSWR and axial ratio across the bands as well. The four elements are uniquely configured with respect to one another for performance enhancement, and the appropriate phase excitation of each element for scan can be found either by analytical beam synthesis using the genetic algorithm with the measured or simulated far-field radiation pattern, or by an adaptive algorithm implemented with the digitized signal. The commercially available tuners and field-programmable gate array (FPGA) boards utilized required precise phase-coherent configuration control and, with custom code developed by Nokomis, Inc., were shown to be fully functional in a two-channel configuration controlled by FPGA boards. A four-channel tuner configuration and an oscilloscope configuration were also demonstrated, although algorithm post-processing was required.

  14. Automatic estimation of heart boundaries and cardiothoracic ratio from chest x-ray images

    NASA Astrophysics Data System (ADS)

    Dallal, Ahmed H.; Agarwal, Chirag; Arbabshirani, Mohammad R.; Patel, Aalpen; Moore, Gregory

    2017-03-01

    Cardiothoracic ratio (CTR) is a widely used radiographic index to assess heart size on chest X-rays (CXRs). Recent studies have suggested that the two-dimensional CTR might also contain clinical information about heart function. However, manual measurement of such indices is both subjective and time consuming. This study proposes a fast algorithm to automatically estimate CTR indices from CXRs. The algorithm has three main steps: 1) model-based lung segmentation, 2) estimation of heart boundaries from lung contours, and 3) computation of cardiothoracic indices from the estimated boundaries (as sketched below). We extended a previously employed lung detection algorithm to automatically estimate heart boundaries without using ground truth heart markings. We used two datasets: a publicly available dataset with 247 images as well as a clinical dataset with 167 studies from Geisinger Health System. The models of the lung fields are learned from both datasets. The lung regions in a given test image are estimated by registering the learned models to the patient CXR. Then, the heart region is estimated by applying the Harris operator on the segmented lung fields to detect the corner points corresponding to the heart boundaries. The algorithm calculates three indices: CTR1D, CTR2D, and the cardiothoracic area ratio (CTAR). The method was tested on 103 clinical CXRs and average error rates of 7.9%, 25.5%, and 26.4% (for CTR1D, CTR2D, and CTAR, respectively) were achieved. The proposed method outperforms previous CTR estimation methods without using any heart templates. This method can have important clinical implications, as it provides a fast and accurate estimate of cardiothoracic indices.
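
    A minimal sketch of step 3 from binary masks (1 = organ pixel): CTR1D as a ratio of maximal widths and CTAR as an area ratio. The abstract does not give the exact formulas, so these interpretations are assumptions.

        import numpy as np

        def max_width(mask):
            """Widest horizontal extent (in pixels) of a binary mask."""
            cols = np.where(mask.any(axis=0))[0]
            return cols.max() - cols.min() + 1 if cols.size else 0

        def ctr_indices(heart_mask, thorax_mask):
            ctr1d = max_width(heart_mask) / max_width(thorax_mask)  # width ratio
            ctar = heart_mask.sum() / thorax_mask.sum()             # area ratio
            return ctr1d, ctar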

  15. Adaptive Residual Interpolation for Color and Multispectral Image Demosaicking †

    PubMed Central

    Kiku, Daisuke; Okutomi, Masatoshi

    2017-01-01

    Color image demosaicking for the Bayer color filter array is an essential image processing operation for acquiring high-quality color images. Recently, residual interpolation (RI)-based algorithms have demonstrated superior demosaicking performance over conventional color difference interpolation-based algorithms. In this paper, we propose adaptive residual interpolation (ARI) that improves existing RI-based algorithms by adaptively combining two RI-based algorithms and selecting a suitable iteration number at each pixel. These are performed based on a unified criterion that evaluates the validity of an RI-based algorithm. Experimental comparisons using standard color image datasets demonstrate that ARI can improve existing RI-based algorithms by more than 0.6 dB in the color peak signal-to-noise ratio and can outperform state-of-the-art algorithms based on training images. We further extend ARI for a multispectral filter array, in which more than three spectral bands are arrayed, and demonstrate that ARI can achieve state-of-the-art performance also for the task of multispectral image demosaicking. PMID:29194407

  16. Adaptive Residual Interpolation for Color and Multispectral Image Demosaicking.

    PubMed

    Monno, Yusuke; Kiku, Daisuke; Tanaka, Masayuki; Okutomi, Masatoshi

    2017-12-01

    Color image demosaicking for the Bayer color filter array is an essential image processing operation for acquiring high-quality color images. Recently, residual interpolation (RI)-based algorithms have demonstrated superior demosaicking performance over conventional color difference interpolation-based algorithms. In this paper, we propose adaptive residual interpolation (ARI) that improves existing RI-based algorithms by adaptively combining two RI-based algorithms and selecting a suitable iteration number at each pixel. These are performed based on a unified criterion that evaluates the validity of an RI-based algorithm. Experimental comparisons using standard color image datasets demonstrate that ARI can improve existing RI-based algorithms by more than 0.6 dB in the color peak signal-to-noise ratio and can outperform state-of-the-art algorithms based on training images. We further extend ARI for a multispectral filter array, in which more than three spectral bands are arrayed, and demonstrate that ARI can achieve state-of-the-art performance also for the task of multispectral image demosaicking.

  17. An Implementation Of Elias Delta Code And ElGamal Algorithm In Image Compression And Security

    NASA Astrophysics Data System (ADS)

    Rachmawati, Dian; Andri Budiman, Mohammad; Saffiera, Cut Amalia

    2018-01-01

    In data transmission such as transferring an image, confidentiality, integrity, and efficiency of data storage are highly needed. To maintain the confidentiality and integrity of data, one of the techniques used is ElGamal. The strength of this algorithm lies in the difficulty of calculating discrete logs in a large prime modulus. ElGamal belongs to the class of asymmetric key algorithms and enlarges the file size; therefore, data compression is required. Elias Delta Code is one of the compression algorithms that use a delta code table. The image was first compressed using the Elias Delta Code algorithm, and the result of the compression was then encrypted using the ElGamal algorithm. Primality testing was implemented using the Agrawal-Biswas algorithm. The results showed that the ElGamal method could maintain the confidentiality and integrity of the data, with MSE and PSNR values of 0 and infinity. The Elias Delta Code method generated an average compression ratio of 62.49% and an average space saving of 37.51%.
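
    Elias delta coding itself is compact enough to show in full. This self-contained sketch encodes a positive integer's bit length with an Elias gamma prefix and appends the remaining bits; the ElGamal encryption stage is omitted.

        def elias_delta_encode(n):
            assert n >= 1
            L = n.bit_length()                  # number of bits in n
            # Elias gamma code of L: (bit_length(L) - 1) zeros, then binary(L)
            gamma = "0" * (L.bit_length() - 1) + bin(L)[2:]
            return gamma + bin(n)[3:]           # append n without its leading 1

        def elias_delta_decode(bits):
            i = 0
            while bits[i] == "0":               # count the gamma zeros
                i += 1
            zeros = i
            L = int(bits[i:i + zeros + 1], 2)   # gamma payload: zeros + 1 bits
            i += zeros + 1
            rest = bits[i:i + L - 1]
            return int("1" + rest, 2) if L > 1 else 1

        assert elias_delta_decode(elias_delta_encode(1000)) == 1000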

  18. Visual saliency-based fast intracoding algorithm for high efficiency video coding

    NASA Astrophysics Data System (ADS)

    Zhou, Xin; Shi, Guangming; Zhou, Wei; Duan, Zhemin

    2017-01-01

    Intraprediction has been significantly improved in high efficiency video coding (HEVC) over H.264/AVC, with a quad-tree-based coding unit (CU) structure ranging from 64×64 to 8×8 and more prediction modes. However, these techniques cause a dramatic increase in computational complexity. An intracoding algorithm is proposed that consists of a perceptual fast CU size decision algorithm and a fast intraprediction mode decision algorithm. First, based on visual saliency detection, an adaptive and fast CU size decision method is proposed to alleviate intraencoding complexity. Furthermore, a fast intraprediction mode decision algorithm with a step-halving rough mode decision method and an early mode-pruning algorithm is presented to selectively check the potential modes and effectively reduce the computational complexity. Experimental results show that the proposed fast method reduces the encoding time of the current HM by about 57%, with only a 0.37% increase in BD rate. Meanwhile, the proposed fast algorithm has acceptable peak signal-to-noise ratio losses and nearly the same subjective perceptual quality.

  19. Selection of floating-point or fixed-point for adaptive noise canceller in somatosensory evoked potential measurement.

    PubMed

    Shen, Chongfei; Liu, Hongtao; Xie, Xb; Luk, Keith Dk; Hu, Yong

    2007-01-01

    An adaptive noise canceller (ANC) has been used to improve the signal-to-noise ratio (SNR) of somatosensory evoked potentials (SEP). For efficient application of the ANC in a hardware system, a fixed-point ANC can achieve fast, cost-efficient construction and low power consumption in an FPGA design. However, it is still questionable whether the SNR improvement by the fixed-point algorithm is as good as that by the floating-point algorithm. This study compares the outputs of floating-point and fixed-point ANCs applied to SEP signals. The selection of the step-size parameter (μ) was found to differ between the fixed-point and floating-point algorithms. In this simulation study, the outputs of the fixed-point ANC showed higher distortion from the real SEP signals than those of the floating-point ANC; however, the difference decreased with increasing μ. With an optimal selection of μ, the fixed-point ANC can achieve results as good as the floating-point algorithm.
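
    The comparison can be reproduced in outline with an LMS canceller whose weights are optionally quantized after each update. The value of μ, the tap count, and the quantization depth are illustrative assumptions; the abstract does not state the filter configuration.

        import numpy as np

        def lms_anc(primary, reference, mu, taps=16, qbits=None):
            """Return the cleaned signal; qbits=None runs in floating point."""
            w = np.zeros(taps)
            out = np.zeros(len(primary))
            scale = 2 ** qbits if qbits is not None else None
            for n in range(taps, len(primary)):
                x = reference[n - taps:n][::-1]
                e = primary[n] - w @ x          # error = SEP estimate after cancellation
                w = w + mu * e * x              # LMS weight update
                if scale is not None:           # emulate fixed-point weight storage
                    w = np.round(w * scale) / scale
                out[n] = e
            return out

        # float_out = lms_anc(sep_noisy, noise_ref, mu=0.01)
        # fixed_out = lms_anc(sep_noisy, noise_ref, mu=0.05, qbits=15)  # Q15-style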

  20. A Computer-Aided Diagnosis System for Breast Cancer Combining Mammography and Proteomics

    DTIC Science & Technology

    2007-05-01

    findings in both Data sets C and M. The likelihood ratio is the probability of the features under the malignant case divided by the probability of... likelihood ratio value as a classification decision variable, the probabilities of detection and false alarm are calculated as follows: Pdfusion... lowered the fused classifier's performance to near chance levels. A genetic algorithm searched over the likelihood-ratio threshold values for each

  1. An automatic system to study sperm motility and energetics

    PubMed Central

    Nascimento, Jaclyn M.; Chandsawangbhuwana, Charlie; Botvinick, Elliot L.; Berns, Michael W.

    2012-01-01

    An integrated robotic laser and microscope system has been developed to automatically analyze individual sperm motility and energetics. The custom-designed optical system directs near-infrared laser light into an inverted microscope to create a single-point 3-D gradient laser trap at the focal spot of the microscope objective. A two-level computer structure is described that quantifies the sperm motility (in terms of swimming speed and swimming force) and energetics (measuring mid-piece membrane potential) using real-time tracking (done by the upper-level system) and fluorescent ratio imaging (done by the lower-level system). The communication between these two systems is achieved by a gigabit network. The custom-built image processing algorithm identifies the sperm swimming trajectory in real-time using phase contrast images, and then subsequently traps the sperm by automatically moving the microscope stage to relocate the sperm to the laser trap focal plane. Once the sperm is stably trapped (determined by the algorithm), the algorithm can also gradually reduce the laser power by rotating the polarizer in the laser path to measure the trapping power at which the sperm is capable of escaping the trap. To monitor the membrane potential of the mitochondria located in a sperm’s mid-piece, the sperm is treated with a ratiometrically-encoded fluorescent probe. The proposed algorithm can relocate the sperm to the center of the ratio imaging camera and the average ratio value can be measured in real-time. The three parameters, sperm escape power, sperm swimming speed and ratio values of the mid-piece membrane potential of individual sperm can be compared with respect to time. This two-level automatic system to study individual sperm motility and energetics has not only increased experimental throughput by an order of magnitude but also has allowed us to monitor sperm energetics prior to and after exposure to the laser trap. This system should have application in both the human fertility clinic and in animal husbandry. PMID:18299996

  2. An automatic system to study sperm motility and energetics.

    PubMed

    Shi, Linda Z; Nascimento, Jaclyn M; Chandsawangbhuwana, Charlie; Botvinick, Elliot L; Berns, Michael W

    2008-08-01

    An integrated robotic laser and microscope system has been developed to automatically analyze individual sperm motility and energetics. The custom-designed optical system directs near-infrared laser light into an inverted microscope to create a single-point 3-D gradient laser trap at the focal spot of the microscope objective. A two-level computer structure is described that quantifies the sperm motility (in terms of swimming speed and swimming force) and energetics (measuring mid-piece membrane potential) using real-time tracking (done by the upper-level system) and fluorescent ratio imaging (done by the lower-level system). The communication between these two systems is achieved by a gigabit network. The custom-built image processing algorithm identifies the sperm swimming trajectory in real-time using phase contrast images, and then subsequently traps the sperm by automatically moving the microscope stage to relocate the sperm to the laser trap focal plane. Once the sperm is stably trapped (determined by the algorithm), the algorithm can also gradually reduce the laser power by rotating the polarizer in the laser path to measure the trapping power at which the sperm is capable of escaping the trap. To monitor the membrane potential of the mitochondria located in a sperm's mid-piece, the sperm is treated with a ratiometrically-encoded fluorescent probe. The proposed algorithm can relocate the sperm to the center of the ratio imaging camera and the average ratio value can be measured in real-time. The three parameters, sperm escape power, sperm swimming speed and ratio values of the mid-piece membrane potential of individual sperm can be compared with respect to time. This two-level automatic system to study individual sperm motility and energetics has not only increased experimental throughput by an order of magnitude but also has allowed us to monitor sperm energetics prior to and after exposure to the laser trap. This system should have application in both the human fertility clinic and in animal husbandry.

  3. A blood-based screening tool for Alzheimer's disease that spans serum and plasma: findings from TARC and ADNI.

    PubMed

    O'Bryant, Sid E; Xiao, Guanghua; Barber, Robert; Huebinger, Ryan; Wilhelmsen, Kirk; Edwards, Melissa; Graff-Radford, Neill; Doody, Rachelle; Diaz-Arrastia, Ramon

    2011-01-01

    There is no rapid and cost-effective tool that can be implemented as a front-line screening tool for Alzheimer's disease (AD) at the population level. The aim was to generate and cross-validate a blood-based screener for AD that yields acceptable accuracy across both serum and plasma. Analyses of serum biomarker proteins were conducted on 197 Alzheimer's disease (AD) participants and 199 control participants from the Texas Alzheimer's Research Consortium (TARC), with further analysis conducted on plasma proteins from 112 AD and 52 control participants from the Alzheimer's Disease Neuroimaging Initiative (ADNI). The full algorithm was derived from a biomarker risk score, clinical lab data (glucose, triglycerides, total cholesterol, homocysteine), and demographic data (age, gender, education, APOE*E4 status); the main outcome measure was Alzheimer's disease. Eleven proteins met our criteria and were utilized for the biomarker risk score. The random forest (RF) biomarker risk score from the TARC serum samples (training set) yielded adequate accuracy in the ADNI plasma samples (validation set) (AUC = 0.70, sensitivity (SN) = 0.54 and specificity (SP) = 0.78), which was below that obtained from ADNI cerebrospinal fluid (CSF) analyses (t-tau/Aβ ratio AUC = 0.92). However, the full algorithm yielded excellent accuracy (AUC = 0.88, SN = 0.75, and SP = 0.91). In the ADNI cohort, the likelihood ratio of having AD given a positive test finding was LR+ = 7.03 (SE = 1.17; 95% CI = 4.49-14.47), the likelihood ratio of not having AD based on the algorithm was LR- = 3.55 (SE = 1.15; 95% CI = 2.22-5.71), and the odds ratio of AD was OR = 28.70 (SE = 1.55; 95% CI = 11.86-69.47). It is thus possible to create a blood-based screening algorithm that works across both serum and plasma and provides screening accuracy comparable to that obtained from CSF analyses.

  4. Detection of Heart Sounds in Children with and without Pulmonary Arterial Hypertension―Daubechies Wavelets Approach

    PubMed Central

    Elgendi, Mohamed; Kumar, Shine; Guo, Long; Rutledge, Jennifer; Coe, James Y.; Zemp, Roger; Schuurmans, Dale; Adatia, Ian

    2015-01-01

    Background Automatic detection of the 1st (S1) and 2nd (S2) heart sounds is difficult, and existing algorithms are imprecise. We sought to develop a wavelet-based algorithm for the detection of S1 and S2 in children with and without pulmonary arterial hypertension (PAH). Method Heart sounds were recorded at the second left intercostal space and the cardiac apex with a digital stethoscope simultaneously with pulmonary arterial pressure (PAP). We developed a Daubechies wavelet algorithm for the automatic detection of S1 and S2 using the wavelet coefficient ‘D6’ based on power spectral analysis. We compared our algorithm with four other Daubechies wavelet-based algorithms published by Liang, Kumar, Wang, and Zhong. We annotated S1 and S2 from an audiovisual examination of the phonocardiographic tracing by two trained cardiologists and the observation that in all subjects systole was shorter than diastole. Results We studied 22 subjects (9 males and 13 females, median age 6 years, range 0.25–19). Eleven subjects had a mean PAP < 25 mmHg. Eleven subjects had PAH with a mean PAP ≥ 25 mmHg. All subjects had a pulmonary artery wedge pressure ≤ 15 mmHg. The sensitivity (SE) and positive predictivity (+P) of our algorithm were 70% and 68%, respectively. In comparison, the SE and +P of Liang were 59% and 42%, Kumar 19% and 12%, Wang 50% and 45%, and Zhong 43% and 53%, respectively. Our algorithm demonstrated robustness and outperformed the other methods up to a signal-to-noise ratio (SNR) of 10 dB. For all algorithms, detection errors arose from low-amplitude peaks, fast heart rates, low signal-to-noise ratio, and fixed thresholds. Conclusion Our algorithm for the detection of S1 and S2 improves the performance of existing Daubechies-based algorithms and justifies the use of the wavelet coefficient ‘D6’ through power spectral analysis. Also, the robustness despite ambient noise may improve real-world clinical performance. PMID:26629704

  5. Double-Stage Delay Multiply and Sum Beamforming Algorithm: Application to Linear-Array Photoacoustic Imaging.

    PubMed

    Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Adabi, Saba; Nasiriavanaki, Mohammadreza

    2018-01-01

    Photoacoustic imaging (PAI) is an emerging medical imaging modality capable of providing the high spatial resolution of ultrasound (US) imaging and the high contrast of optical imaging. Delay-and-sum (DAS) is the most common beamforming algorithm in PAI. However, using the DAS beamformer leads to low-resolution images and a considerable contribution of off-axis signals. A new paradigm, delay-multiply-and-sum (DMAS), originally used as a reconstruction algorithm in confocal microwave imaging, was introduced to overcome the challenges in DAS. DMAS was used in PAI systems, and it was shown that this algorithm results in resolution improvement and sidelobe reduction. However, DMAS is still sensitive to high levels of noise, and its resolution improvement is not satisfactory. Here, we propose a novel algorithm based on DAS algebra inside the DMAS formula expansion, double-stage DMAS (DS-DMAS), which improves the image resolution and sidelobe levels, and is much less sensitive to high levels of noise compared with DMAS. The performance of the DS-DMAS algorithm is evaluated numerically and experimentally. The resulting images are evaluated qualitatively and quantitatively using established quality metrics including signal-to-noise ratio (SNR), full-width-half-maximum (FWHM) and contrast ratio (CR). It is shown that DS-DMAS outperforms DAS and DMAS at the expense of higher computational load. DS-DMAS reduces the lateral valley by about 15 dB and improves the SNR and FWHM by more than 13% and 30%, respectively. Moreover, the sidelobe levels are reduced by about 10 dB in comparison with those in DMAS.
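
    The basic DMAS combination (the building block DS-DMAS expands) pairs time-aligned channel samples, multiplies them, and restores dimensionality with a signed square root. This is a hedged sketch of that baseline, not of the paper's DS-DMAS.

        import numpy as np

        def dmas(delayed):
            """delayed: (n_channels, n_samples) array, already time-aligned (delays applied)."""
            m, _ = delayed.shape
            y = np.zeros(delayed.shape[1])
            for i in range(m - 1):
                prod = delayed[i] * delayed[i + 1:]              # all pairs (i, j > i)
                y += (np.sign(prod) * np.sqrt(np.abs(prod))).sum(axis=0)
            return y

        # DAS, for comparison, is simply delayed.sum(axis=0).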

  6. Estimating the ratios of the stationary distribution values for Markov chains modeling evolutionary algorithms.

    PubMed

    Mitavskiy, Boris; Cannings, Chris

    2009-01-01

    The evolutionary algorithm stochastic process is well known to be Markovian. Such chains have been under investigation in much of the theoretical evolutionary computing research. When the mutation rate is positive, the Markov chain modeling an evolutionary algorithm is irreducible and, therefore, has a unique stationary distribution, yet rather little is known about this distribution. In fact, the only quantitative facts established so far tell us that the stationary distributions of Markov chains modeling evolutionary algorithms concentrate on uniform populations (i.e., those populations consisting of repeated copies of the same individual). At the same time, knowing the stationary distribution may provide some information about the expected time it takes for the algorithm to reach a certain solution, allow assessment of the biases due to recombination and selection, and is of importance in population genetics for assessing what is called the "genetic load" (see the introduction for more details). In recent joint works of the first author, some bounds have been established on the rates at which the stationary distribution concentrates on the uniform populations. The primary tool used in these papers is the "quotient construction" method. It turns out that the quotient construction method can be exploited to derive much more informative bounds on ratios of the stationary distribution values of various subsets of the state space. In fact, some of the bounds obtained in the current work are expressed in terms of the parameters involved in all three main stages of an evolutionary algorithm: namely, selection, recombination, and mutation.

  7. Agreement between gamma passing rates using computed tomography in radiotherapy and secondary cancer risk prediction from more advanced dose calculated models

    PubMed Central

    Balosso, Jacques

    2017-01-01

    Background During the past decades in radiotherapy, dose distributions were calculated using density correction methods with the pencil beam as a type 'a' algorithm. The objectives of this study are to assess and evaluate the impact of the dose distribution shift on the predicted secondary cancer risk (SCR) when using modern advanced dose calculation algorithms (point kernel, type 'b'), which consider changes in lateral electron transport. Methods Clinical examples of pediatric cranio-spinal irradiation patients were evaluated. For each case, two radiotherapy treatment plans were generated using the same prescribed dose to the target, resulting in different numbers of monitor units (MUs) per field. The dose distributions were calculated using both algorithm types. A gamma index (γ) analysis was used to compare the dose distributions in the lung. The organ equivalent dose (OED) was calculated with three different models: the linear, the linear-exponential and the plateau dose-response curves. The excess absolute risk ratio (EAR) was also evaluated as EAR = OED(type 'b') / OED(type 'a'). Results The γ analysis results indicated an acceptable dose distribution agreement of 95% with 3%/3 mm. However, the γ-maps displayed dose displacements >1 mm around the healthy lungs. Compared with type 'a', the OED values from type 'b' dose distributions were about 8% to 16% higher, leading to an EAR ratio >1, ranging from 1.08 to 1.13 depending on the SCR model. Conclusions The shift of dose calculation in radiotherapy, according to the algorithm, can significantly influence the SCR prediction and the plan optimization, since OEDs are calculated from the DVH for a specific treatment. The agreement between dose distribution and SCR prediction depends on the dose-response models and epidemiological data. In addition, a γ passing rate of 3%/3 mm does not translate the difference, up to 15%, in the predictions of SCR resulting from alternative algorithms. Considering that modern algorithms are more accurate, showing the dose distributions more precisely, but that the prediction of absolute SCR is still very imprecise, only the EAR ratio could be used to rank radiotherapy plans. PMID:28811995

  8. Do maize models capture the impacts of heat and drought stresses on yield? Using algorithm ensembles to identify successful approaches.

    PubMed

    Jin, Zhenong; Zhuang, Qianlai; Tan, Zeli; Dukes, Jeffrey S; Zheng, Bangyou; Melillo, Jerry M

    2016-09-01

    Stresses from heat and drought are expected to increasingly suppress crop yields, but the degree to which current models can represent these effects is uncertain. Here we evaluate the algorithms that determine the impacts of heat and drought stress on maize in 16 major maize models by incorporating these algorithms into a standard model, the Agricultural Production Systems sIMulator (APSIM), and running an ensemble of simulations. Although both daily mean temperature and daylight temperature are common choices for forcing heat stress algorithms, current parameterizations in most models favor the use of daylight temperature even though the algorithm was designed for daily mean temperature. Different drought algorithms (i.e., a function of soil water content, of the soil water supply-to-demand ratio, or of the actual-to-potential transpiration ratio) simulated considerably different patterns of water shortage over the growing season, but nonetheless predicted similar decreases in annual yield. Using the selected combination of algorithms, our simulations show that maize yield reduction has been more sensitive to drought stress than to heat stress in the US Midwest since the 1980s, and this pattern will continue under future scenarios; the influence of excessive heat will become increasingly prominent by the late 21st century. Our review of algorithms in 16 crop models suggests that the impacts of heat and drought stress on plant yield can be best described by crop models that: (i) incorporate event-based descriptions of heat and drought stress, (ii) consider the effects of nighttime warming, and (iii) coordinate the interactions among multiple stresses. Our study identifies the proficiency with which different model formulations capture the impacts of heat and drought stress on maize biomass and yield production. The framework presented here can be applied to other modeled processes and used to improve yield predictions of other crops with a wide variety of crop models.

  9. Remote sensing image denoising application by generalized morphological component analysis

    NASA Astrophysics Data System (ADS)

    Yu, Chong; Chen, Xiong

    2014-12-01

    In this paper, we introduce a remote sensing image denoising method based on generalized morphological component analysis (GMCA). This novel algorithm extends the morphological component analysis (MCA) algorithm to the blind source separation framework. The iterative thresholding strategy adopted by the GMCA algorithm first works on the most significant features in the image and then progressively incorporates smaller features to finely tune the parameters of the whole model. A mathematical analysis of the computational complexity of the GMCA algorithm is provided. Several comparison experiments with state-of-the-art denoising algorithms are reported. To make a quantitative assessment of the algorithms in these experiments, the peak signal-to-noise ratio (PSNR) index and the structural similarity (SSIM) index are calculated to assess the denoising effect in terms of gray-level fidelity and structure-level fidelity, respectively. Quantitative analysis of the experimental results, consistent with the visual quality of the denoised images, proves that the GMCA algorithm is highly effective for remote sensing image denoising; indeed, the original noiseless image is hard to distinguish visually from the image recovered by GMCA.
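
    Both quality indices are standard and easy to reproduce. In the sketch below, PSNR is computed directly and SSIM is delegated to scikit-image, which is our tooling choice rather than anything the paper specifies.

        import numpy as np
        from skimage.metrics import structural_similarity

        def psnr(ref, test, peak=255.0):
            mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
            return 10.0 * np.log10(peak ** 2 / mse)

        def quality(ref, test):
            """Gray-level fidelity (PSNR, dB) and structure-level fidelity (SSIM)."""
            return psnr(ref, test), structural_similarity(ref, test, data_range=255)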

  10. Supercomputing resources empowering superstack with interactive and integrated systems

    NASA Astrophysics Data System (ADS)

    Rückemann, Claus-Peter

    2012-09-01

    This paper presents the results from the development and implementation of Superstack algorithms to be used dynamically with integrated systems and supercomputing resources. Processing of geophysical data, here named geoprocessing, is an essential part of the analysis of geoscientific data. The theory of Superstack algorithms and their practical application on modern computing architectures were inspired by developments in seismic data processing, introduced on mainframes and leading within the last years to high-end scientific computing applications. Several stacking algorithms are known, but for seismic data with a low signal-to-noise ratio, iterative algorithms like the Superstack can support analysis and interpretation. The new Superstack algorithms are in use with wave theory and optical phenomena on highly performant computing resources, for huge data sets as well as for sophisticated application scenarios in geosciences and archaeology.

  11. An improved target velocity sampling algorithm for free gas elastic scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romano, Paul K.; Walsh, Jonathan A.

    We present an improved algorithm for sampling the target velocity when simulating elastic scattering in a Monte Carlo neutron transport code that correctly accounts for the energy dependence of the scattering cross section. The algorithm samples the relative velocity directly, thereby avoiding a potentially inefficient rejection step based on the ratio of cross sections. Here, we have shown that this algorithm requires only one rejection step, whereas other methods of similar accuracy require two rejection steps. The method was verified against stochastic and deterministic reference results for upscattering percentages in 238U. Simulations of a light water reactor pin cell problem demonstrate that using this algorithm results in a 3% or less penalty in performance when compared with an approximate method that is used in most production Monte Carlo codes.

  12. An improved target velocity sampling algorithm for free gas elastic scattering

    DOE PAGES

    Romano, Paul K.; Walsh, Jonathan A.

    2018-02-03

    We present an improved algorithm for sampling the target velocity when simulating elastic scattering in a Monte Carlo neutron transport code that correctly accounts for the energy dependence of the scattering cross section. The algorithm samples the relative velocity directly, thereby avoiding a potentially inefficient rejection step based on the ratio of cross sections. Here, we have shown that this algorithm requires only one rejection step, whereas other methods of similar accuracy require two rejection steps. The method was verified against stochastic and deterministic reference results for upscattering percentages in 238U. Simulations of a light water reactor pin cell problem demonstrate that using this algorithm results in a 3% or less penalty in performance when compared with an approximate method that is used in most production Monte Carlo codes.

  13. Indexed triangle strips optimization for real-time visualization using genetic algorithm: preliminary study

    NASA Astrophysics Data System (ADS)

    Tanaka, Kiyoshi; Takano, Shuichi; Sugimura, Tatsuo

    2000-10-01

    In this work we focus on indexed triangle strips, an extended representation of triangle strips that improves the efficiency of the geometrical transformation of vertices, and present a method to construct optimal indexed triangle strips using a Genetic Algorithm (GA) for real-time visualization. The main objective of this work is to construct indexed triangle strips optimally by improving the ratio at which data stored in the cache memory are reused while simultaneously reducing the total number of indices with the GA (the fitness component is sketched below). Simulation results verify that the average number of indices and the cache miss ratio per polygon can be kept small, and consequently the total visualization time required for the optimal solution obtained by this scheme can be markedly reduced.
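
    The quantity such a GA would optimize can be sketched as follows: for a candidate vertex-index ordering, simulate a FIFO vertex cache and report misses per index. The cache size and FIFO policy are assumptions; real GPUs and the paper's setup may differ.

        from collections import deque

        def cache_miss_ratio(indices, cache_size=16):
            """Simulate a FIFO vertex cache and return misses per index."""
            cache, misses = deque(maxlen=cache_size), 0
            for v in indices:
                if v not in cache:
                    misses += 1
                    cache.append(v)     # deque maxlen gives FIFO eviction
            return misses / len(indices)

        print(cache_miss_ratio([0, 1, 2, 1, 2, 3, 2, 3, 4]))  # 5 misses / 9 indices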

  14. Spectrum Orbit Utilization Program documentation: SOUP5 version 3.8 user's manual, volume 1, chapters 1 through 5

    NASA Technical Reports Server (NTRS)

    Davidson, J.; Ottey, H. R.; Sawitz, P.; Zusman, F. S.

    1985-01-01

    The underlying engineering and mathematical models as well as the computational methods used by the Spectrum Orbit Utilization Program 5 (SOUP5) analysis programs are described. Included are the algorithms used to calculate the technical parameters, and references to the technical literature. The organization, capabilities, processing sequences, and processing and data options of the SOUP5 system are described. The details of the geometric calculations are given. Also discussed are the various antenna gain algorithms; rain attenuation and depolarization calculations; calculations of transmitter power and received power flux density; channelization options, interference categories, and protection ratio calculation; generation of aggregate interference and margins; equivalent gain calculations; and how to enter a protection ratio template.

  15. Research and Development of Automated Eddy Current Testing for Composite Overwrapped Pressure Vessels

    NASA Technical Reports Server (NTRS)

    Carver, Kyle L.; Saulsberry, Regor L.; Nichols, Charles T.; Spencer, Paul R.; Lucero, Ralph E.

    2012-01-01

    Eddy current testing (ET) was used to scan the bare metallic liners used in the fabrication of composite overwrapped pressure vessels (COPVs) for flaws that could result in premature failure of the vessel. The main goal of the project was to make improvements in the areas of scan signal-to-noise ratio, sensitivity of flaw detection, and estimation of flaw dimensions. Scan settings were optimized, resulting in an increased signal-to-noise ratio. Previously undiscovered flaw indications were observed and investigated. Threshold criteria were determined for the system software's flaw reporting, and estimates of flaw dimensions were brought to an acceptable level of accuracy. Computer algorithms were written to import data for filtering, and a numerical derivative filtering algorithm was evaluated.

  16. A gradient based algorithm to solve inverse plane bimodular problems of identification

    NASA Astrophysics Data System (ADS)

    Ran, Chunjiang; Yang, Haitian; Zhang, Guoqing

    2018-02-01

    This paper presents a gradient-based algorithm to solve inverse plane bimodular problems of identifying constitutive parameters, including tensile/compressive moduli and tensile/compressive Poisson's ratios. For the forward bimodular problem, a FE tangent stiffness matrix is derived, facilitating the implementation of gradient-based algorithms; for the inverse bimodular problem of identification, a two-level sensitivity-analysis-based strategy is proposed. Numerical verification in terms of accuracy and efficiency is provided, and the impacts of the initial guess, the number of measurement points, regional inhomogeneity, and noisy data on the identification are taken into account.

  17. Computer program for fast Karhunen Loeve transform algorithm

    NASA Technical Reports Server (NTRS)

    Jain, A. K.

    1976-01-01

    The fast KL transform algorithm was applied for data compression of a set of four ERTS multispectral images, and its performance was compared with other techniques previously studied on the same image data. The performance criteria used here are mean square error and signal-to-noise ratio. The results obtained show superior performance of the fast KL transform coding algorithm on the data set used with respect to the above-stated performance criteria. A summary of the results is given in Chapter I, and details of the comparisons and a discussion of the conclusions are given in Chapter IV.

  18. A robust statistical estimation (RoSE) algorithm jointly recovers the 3D location and intensity of single molecules accurately and precisely

    NASA Astrophysics Data System (ADS)

    Mazidi, Hesam; Nehorai, Arye; Lew, Matthew D.

    2018-02-01

    In single-molecule (SM) super-resolution microscopy, the complexity of a biological structure, high molecular density, and a low signal-to-background ratio (SBR) may lead to imaging artifacts without a robust localization algorithm. Moreover, engineered point spread functions (PSFs) for 3D imaging pose difficulties due to their intricate features. We develop a Robust Statistical Estimation algorithm, called RoSE, that enables joint estimation of the 3D location and photon counts of SMs accurately and precisely using various PSFs under conditions of high molecular density and low SBR.

  19. Comparison of Event Detection Methods for Centralized Sensor Networks

    NASA Technical Reports Server (NTRS)

    Sauvageon, Julien; Agogino, Alice M.; Farhang, Ali; Tumer, Irem Y.

    2006-01-01

    The development of Integrated Vehicle Health Management (IVHM) for space vehicles has become a great concern. Smart sensor networks are one of the promising technologies attracting much attention. In this paper, we propose a qualitative comparison of several local event (hot spot) detection algorithms in centralized redundant sensor networks. The algorithms are compared regarding their ability to locate and evaluate the event under noise and sensor failures. The purpose of this study is to determine whether the performance-to-computational-power ratio of the Mote Fuzzy Validation and Fusion algorithm is favorable compared with simpler methods.

  20. Optimal Methods for Classification of Digitally Modulated Signals

    DTIC Science & Technology

    2013-03-01

    Instead of using a ratio of likelihood functions, the proposed approach uses the Kullback-Leibler (KL) divergence. KL... List of Acronyms: ALRT (Average LRT); BPSK (Binary Phase Shift Keying); BPSK-SS (BPSK Spread Spectrum, or CDMA); DKL (Kullback-Leibler Information Divergence)... blind demodulation to develop classification algorithms for a wider set of signal types. Two methodologies were used: Likelihood Ratio Test

  1. An Algorithm for Converting Ordinal Scale Measurement Data to Interval/Ratio Scale

    ERIC Educational Resources Information Center

    Granberg-Rademacker, J. Scott

    2010-01-01

    The extensive use of survey instruments in the social sciences has long created debate and concern about validity of outcomes, especially among instruments that gather ordinal-level data. Ordinal-level survey measurement of concepts that could be measured at the interval or ratio level produce errors because respondents are forced to truncate or…

  2. An algorithm on simultaneous optimization of performance and mass parameters of open-cycle liquid-propellant engine of launch vehicles

    NASA Astrophysics Data System (ADS)

    Eskandari, M. A.; Mazraeshahi, H. K.; Ramesh, D.; Montazer, E.; Salami, E.; Romli, F. I.

    2017-12-01

    In this paper, a new method for the determination of the optimum parameters of open-cycle liquid-propellant engines of launch vehicles is introduced. The parameters affecting the objective function, which is the ratio of specific impulse to the gross mass of the launch vehicle, are chosen to achieve maximum specific impulse as well as minimum mass for the structure of the engine, tanks, etc. The proposed algorithm uses a constant integral of thrust with respect to time for a launch vehicle of specific diameter and length to calculate the optimum working condition. The results of this novel algorithm are compared with those obtained using the Genetic Algorithm method and are also validated against the parameters of an existing launch vehicle.

  3. Comparative intelligibility investigation of single-channel noise-reduction algorithms for Chinese, Japanese, and English.

    PubMed

    Li, Junfeng; Yang, Lin; Zhang, Jianping; Yan, Yonghong; Hu, Yi; Akagi, Masato; Loizou, Philipos C

    2011-05-01

    A large number of single-channel noise-reduction algorithms have been proposed, based largely on mathematical principles. Most of these algorithms, however, have been evaluated with English speech. Given the different perceptual cues used by native listeners of different languages, including tonal languages, it is of interest to examine whether there are any language effects when the same noise-reduction algorithm is used to process noisy speech in different languages. This study undertakes a comparative evaluation and investigation of various single-channel noise-reduction algorithms applied to noisy speech in three languages: Chinese, Japanese, and English. Clean speech signals (Chinese words and Japanese words) were first corrupted by three types of noise at two signal-to-noise ratios and then processed by five single-channel noise-reduction algorithms. The processed signals were finally presented to normal-hearing listeners for recognition. The intelligibility evaluation showed that the majority of noise-reduction algorithms did not improve speech intelligibility. Consistent with a previous study with the English language, the Wiener filtering algorithm produced small, but statistically significant, improvements in intelligibility for car and white noise conditions. Significant differences between the performances of the noise-reduction algorithms across the three languages were observed.

  4. Hierarchical heuristic search using a Gaussian mixture model for UAV coverage planning.

    PubMed

    Lin, Lanny; Goodrich, Michael A

    2014-12-01

    During unmanned aerial vehicle (UAV) search missions, efficient use of UAV flight time requires flight paths that maximize the probability of finding the desired subject. The probability of detecting the desired subject based on UAV sensor information can vary in different search areas due to environmental elements like varying vegetation density or lighting conditions, making it likely that the UAV can only partially detect the subject. This adds another dimension of complexity to the already difficult (NP-hard) problem of finding an optimal search path. We present a new class of algorithms that account for partial detection in the form of a task difficulty map and produce paths that approximate the payoff of optimal solutions. The algorithms use the mode goodness ratio heuristic, which uses a Gaussian mixture model to prioritize search subregions. The algorithms search for effective paths through the parameter space at different levels of resolution. We compare the performance of the new algorithms against two published algorithms (Bourgault's algorithm and the LHC-GW-CONV algorithm) in simulated searches with three real search and rescue scenarios, and show that the new algorithms significantly outperform the existing ones and can produce efficient paths with payoffs near the optimal.

  5. Recurrent procedure for constructing nonisotropic matrix elements of the collision integral of the nonlinear Boltzmann equation

    NASA Astrophysics Data System (ADS)

    Ender, I. A.; Bakaleinikov, L. A.; Flegontova, E. Yu.; Gerasimenko, A. B.

    2017-08-01

    We have proposed an algorithm for the sequential construction of nonisotropic matrix elements of the collision integral, which are required to solve the nonlinear Boltzmann equation using the moments method. The starting elements of the matrix are isotropic and assumed to be known. The algorithm can be used for an arbitrary law of interactions for any ratio of the masses of colliding particles.

  6. Searching for discrimination rules in protease proteolytic cleavage activity using genetic programming with a min-max scoring function.

    PubMed

    Yang, Zheng Rong; Thomson, Rebecca; Hodgman, T Charles; Dry, Jonathan; Doyle, Austin K; Narayanan, Ajit; Wu, XiKun

    2003-11-01

    This paper presents an algorithm which is able to extract discriminant rules from oligopeptides for protease proteolytic cleavage activity prediction. The algorithm is developed using genetic programming. Three important components in the algorithm are a min-max scoring function, the reverse Polish notation (RPN) and the use of minimum description length. The min-max scoring function is developed using amino acid similarity matrices for measuring the similarity between an oligopeptide and a rule, which is a complex algebraic equation of amino acids rather than a simple pattern sequence. The Fisher ratio is then calculated on the scoring values using the class label associated with the oligopeptides. The discriminant ability of each rule can therefore be evaluated. The use of RPN makes the evolutionary operations simpler and therefore reduces the computational cost. To prevent overfitting, the concept of minimum description length is used to penalize over-complicated rules. A fitness function is therefore composed of the Fisher ratio and the use of minimum description length for an efficient evolutionary process. In the application to four protease datasets (Trypsin, Factor Xa, Hepatitis C Virus and HIV protease cleavage site prediction), our algorithm is superior to C5, a conventional method for deriving decision trees.
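
    The Fisher ratio used in the fitness is straightforward to compute from a rule's min-max scores on the two classes; the scoring function itself (the evolved RPN rule) is treated here as an opaque callable. A minimal sketch:

        import numpy as np

        def fisher_ratio(scores_pos, scores_neg):
            """Separation of rule scores on cleaved vs. non-cleaved oligopeptides."""
            m1, m2 = np.mean(scores_pos), np.mean(scores_neg)
            v1, v2 = np.var(scores_pos), np.var(scores_neg)
            return (m1 - m2) ** 2 / (v1 + v2 + 1e-12)

        # fitness(rule) would combine fisher_ratio(...) with a description-length penalty.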

  7. Carrier-to-noise power estimation for the Block 5 Receiver

    NASA Technical Reports Server (NTRS)

    Monk, A. M.

    1991-01-01

    Two possible algorithms for carrier-to-noise power (P_c/N_0) estimation in the Block V Receiver are analyzed and their performances compared. The expected value and the variance of each estimator algorithm are derived. The two algorithms examined are known as the I arm estimator, which relies on samples from only the in-phase arm of the digital phase-locked loop, and the IQ arm estimator, which uses both in-phase and quadrature-phase arm signals. The IQ arm algorithm is currently implemented in the Advanced Receiver II (ARX II). Both estimators are biased. The performance degradation due to phase jitter in the carrier tracking loop is taken into account. Curves of the expected value and the signal-to-noise ratio of the P_c/N_0 estimators vs. actual P_c/N_0 are shown. From these, it is clear that the I arm estimator performs better than the IQ arm estimator when the data-to-noise power ratio (P_d/N_0) is high, i.e., at high P_c/N_0 values and a significant modulation index. When P_d/N_0 is low, the two estimators have essentially the same performance.

  8. Inter-laboratory verification of European pharmacopoeia monograph on derivative spectrophotometry method and its application for chitosan hydrochloride.

    PubMed

    Marković, Bojan; Ignjatović, Janko; Vujadinović, Mirjana; Savić, Vedrana; Vladimirov, Sote; Karljiković-Rajić, Katarina

    2015-01-01

    Inter-laboratory verification of the European pharmacopoeia (EP) monograph on the derivative spectrophotometry (DS) method and its application to chitosan hydrochloride was carried out on two generations of instruments (the earlier GBC Cintra 20 and the current-technology TS Evolution 300). The instruments operate with different versions of the Savitzky-Golay algorithm and different modes of generating digital derivative spectra. For the resolution power parameter, defined as the amplitude ratio A/B in the DS method EP monograph, comparable results were obtained only with the algorithm parameters of 7 smoothing points (SP) and a 2nd-degree polynomial; these settings also provided corresponding data with the other two modes on the TS Evolution 300 (Medium digital indirect and Medium digital direct). Using the quoted algorithm parameters, the percentage differences between the amplitude ratio A/B averages were within the accepted criterion (±3%) for drug product assay in method transfer. The deviation of 1.76% for the degree of deacetylation assessment of chitosan hydrochloride determined on the two instruments (amplitude (1)D202; 2nd-degree polynomial and SP 9 in the Savitzky-Golay algorithm) was acceptable, since it was within the allowed criterion (±2%) for assay deviation of a drug substance in method transfer in pharmaceutical analyses. Copyright © 2015 Elsevier B.V. All rights reserved.
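
    For reference, a first-derivative spectrum with the Savitzky-Golay parameters discussed above (7 smoothing points, 2nd-degree polynomial) can be generated with a standard implementation; the input file name below is a placeholder.

      import numpy as np
      from scipy.signal import savgol_filter

      # Absorbance sampled on a uniform wavelength grid (hypothetical data file)
      absorbance = np.loadtxt("chitosan_uv_spectrum.txt")

      # First derivative, Savitzky-Golay: 7-point window, 2nd-degree polynomial
      first_derivative = savgol_filter(absorbance, window_length=7, polyorder=2, deriv=1)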

  9. Curvature correction of retinal OCTs using graph-based geometry detection

    NASA Astrophysics Data System (ADS)

    Kafieh, Raheleh; Rabbani, Hossein; Abramoff, Michael D.; Sonka, Milan

    2013-05-01

    In this paper, we present a new algorithm as an enhancement and preprocessing step for acquired optical coherence tomography (OCT) images of the retina. The proposed method is composed of two steps, the first of which is a denoising algorithm with wavelet diffusion based on a circular symmetric Laplacian model; the second is a graph-based geometry detection and curvature correction according to the hyper-reflective complex layer in the retina. The proposed denoising algorithm improved the contrast-to-noise ratio from 0.89 to 1.49 and increased the signal-to-noise ratio (OCT image SNR) from 18.27 to 30.43 dB. By applying the proposed method to estimate the interpolated curve fully automatically, the mean ± SD unsigned border positioning error was calculated for normal and abnormal cases. Error values of 2.19 ± 1.25 and 8.53 ± 3.76 µm were obtained for 200 randomly selected slices without pathological curvature and 50 randomly selected slices with pathological curvature, respectively. An important aspect of this algorithm is its ability to detect curvature in strongly pathological images, which surpasses previously introduced methods; the method is also fast compared with the relatively low speed of similar methods.

  10. Improvement in thin cirrus retrievals using an emissivity-adjusted CO2 slicing algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Hong; Menzel, W. Paul

    2002-09-01

    CO2 slicing has been generally accepted as a useful algorithm for determining cloud top pressure (CTP) and effective cloud amount (ECA) for tropospheric clouds above 600 hPa. To date, the technique has assumed that the surface emits as a blackbody in the long-wavelength infrared and that the cloud emissivities in spectrally close bands are approximately equal. The modified CO2 slicing algorithm considers adjustments of both the surface emissivity and the cloud emissivity ratio. Surface emissivity is adjusted according to surface type. The ratio of cloud emissivities in spectrally close bands is adjusted away from unity according to radiative transfer calculations. The new CO2 slicing algorithm is examined with Moderate Resolution Imaging Spectroradiometer (MODIS) Airborne Simulator (MAS) CO2 band radiance measurements over thin clouds and validated against Cloud Lidar System (CLS) measurements of the same clouds; it is also applied to Geostationary Operational Environmental Satellite (GOES) Sounder data to study the overall impact on cloud property determinations. For high thin clouds an improved product emerges, while for thick and opaque clouds there is little change. For very thin clouds, the CTP increases by about 10-20 hPa and the root mean square (RMS) difference is approximately 50 hPa; for thin clouds, the CTP bias increase is about 10 hPa and the RMS difference is approximately 30 hPa. The new CO2 slicing algorithm places the clouds lower in the troposphere.

  11. Mitigation of crosstalk based on CSO-ICA in free space orbital angular momentum multiplexing systems

    NASA Astrophysics Data System (ADS)

    Xing, Dengke; Liu, Jianfei; Zeng, Xiangye; Lu, Jia; Yi, Ziyao

    2018-09-01

    Orbital angular momentum (OAM) multiplexing has attracted considerable attention and research in recent years because of its great spectral efficiency, and many OAM systems over free-space channels have been demonstrated. However, due to atmospheric turbulence, the power of OAM beams diffuses to beams with neighboring topological charges, and inter-mode crosstalk emerges in these systems, rendering them unavailable in severe cases. In this paper, we introduce independent component analysis (ICA), a popular method of signal separation, to mitigate inter-mode crosstalk effects; furthermore, to address the fixed iteration speed of the traditional ICA algorithm, we propose a joint algorithm, CSO-ICA, which improves the process of solving the separation matrix by exploiting the fast convergence rate and high convergence precision of chicken swarm optimization (CSO). The optimal separation matrix is obtained by adjusting the step size according to the previous iteration in CSO-ICA. Simulation results indicate that the proposed algorithm performs well in inter-mode crosstalk mitigation: the optical signal-to-noise ratio (OSNR) requirement of the received signals (OAM+2, OAM+4, OAM+6, OAM+8) is reduced by about 3.2 dB at a bit error ratio (BER) of 3.8 × 10^-3. Meanwhile, convergence is much faster than with the traditional ICA algorithm, reducing the number of iterations by about an order of magnitude.

  12. Radiation anomaly detection algorithms for field-acquired gamma energy spectra

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, Sanjoy; Maurer, Richard; Wolff, Ron; Guss, Paul; Mitchell, Stephen

    2015-08-01

    The Remote Sensing Laboratory (RSL) is developing a tactical, networked radiation detection system that will be agile, reconfigurable, and capable of rapid threat assessment with a high degree of fidelity and certainty. Our design is driven by the needs of users such as law enforcement personnel who must make decisions by evaluating threat signatures in urban settings. The most efficient tool available to identify the nature of a threat object is real-time gamma spectroscopic analysis, as it is fast and has a very low probability of producing false positive alarms. Urban radiological searches are inherently challenged by the rapid and large spatial variation of background gamma radiation, the presence of benign naturally occurring radioactive materials (NORM), and shielded and/or masked threat sources. Multiple spectral anomaly detection algorithms have been developed by national laboratories and commercial vendors. For example, the Gamma Detector Response and Analysis Software (GADRAS), a one-dimensional deterministic radiation transport code capable of calculating gamma-ray spectra using physics-based detector response functions, was developed at Sandia National Laboratories. The nuisance-rejection spectral comparison ratio anomaly detection algorithm (NSCRAD), developed at Pacific Northwest National Laboratory, uses spectral comparison ratios to detect deviations from benign medical and NORM radiation sources and can work despite a strong presence of NORM and/or medical sources. RSL has developed its own wavelet-based gamma energy spectral anomaly detection algorithm called WAVRAD. Test results and relative merits of these different algorithms will be discussed and demonstrated.

  13. Influence of radiation dose and iterative reconstruction algorithms for measurement accuracy and reproducibility of pulmonary nodule volumetry: A phantom study.

    PubMed

    Kim, Hyungjin; Park, Chang Min; Song, Yong Sub; Lee, Sang Min; Goo, Jin Mo

    2014-05-01

    To evaluate the influence of radiation dose settings and reconstruction algorithms on the measurement accuracy and reproducibility of semi-automated pulmonary nodule volumetry. CT scans were performed on a chest phantom containing various nodules (10 and 12 mm; +100, -630 and -800 HU) at 120 kVp with tube current-time settings of 10, 20, 50, and 100 mAs. Each CT was reconstructed using filtered back projection (FBP), iDose(4) and iterative model reconstruction (IMR). Semi-automated volumetry was performed by two radiologists using commercial volumetry software on each CT dataset. Noise, contrast-to-noise ratio and signal-to-noise ratio of the CT images were also obtained. The absolute percentage measurement errors and differences were then calculated for volume and mass. The influence of radiation dose and reconstruction algorithm on measurement accuracy, reproducibility and objective image quality metrics was analyzed using generalized estimating equations. Measurement accuracy and reproducibility of nodule volume and mass were not significantly associated with CT radiation dose settings or reconstruction algorithms (p>0.05). Objective image quality metrics were superior for IMR compared with FBP or iDose(4) at all radiation dose settings (p<0.05). Semi-automated nodule volumetry can therefore be applied to low- or ultralow-dose chest CT with a novel iterative reconstruction algorithm without losing measurement accuracy or reproducibility. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  14. Online hyperspectral imaging system for evaluating quality of agricultural products

    NASA Astrophysics Data System (ADS)

    Mo, Changyeun; Kim, Giyoung; Lim, Jongguk

    2017-06-01

    The consumption of fresh-cut agricultural produce in Korea has been growing. The browning of fresh-cut vegetables that occurs during storage and foreign substances such as worms and slugs are some of the main causes of consumer concern with respect to safety and hygiene. The purpose of this study is to develop an online system for evaluating the quality of agricultural products using hyperspectral imaging technology. An online evaluation system with a single visible-near-infrared hyperspectral camera covering 400 nm to 1000 nm was designed to assess the quality of both surfaces of agricultural products such as fresh-cut lettuce. Algorithms to detect browning surfaces were developed for this system. The optimal wavebands for discriminating between browning and sound lettuce, as well as between browning lettuce and the conveyor belt, were investigated using correlation analysis and one-way analysis of variance. Imaging algorithms to discriminate browning lettuce were developed using the optimal wavebands. The ratio image (RI) algorithm of the 533 nm and 697 nm images (RI533/697) for abaxial-surface lettuce, and the ratio image algorithm (RI533/697) and subtraction image (SI) algorithm (SI538-697) for adaxial-surface lettuce, had the highest classification accuracies. The classification accuracy for browning and sound lettuce was 100.0% and above 96.0%, respectively, for both surfaces. The overall results show that the online hyperspectral imaging system could potentially be used to assess the quality of agricultural products.
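
    A band-ratio rule of this kind reduces to elementwise arithmetic on two spectral bands followed by a threshold. The sketch below assumes a hyperspectral cube indexed as (rows, cols, bands) and a hypothetical decision threshold; both are illustrative, not the study's calibrated values.

      import numpy as np

      def browning_mask(cube, band_533, band_697, threshold=1.1):
          """Classify browning pixels with an RI(533/697)-style band ratio.

          cube: hyperspectral image of shape (rows, cols, bands)
          band_533, band_697: indices of the bands nearest 533 nm and 697 nm
          threshold: hypothetical decision value, tuned on training data
          """
          ri = cube[:, :, band_533] / (cube[:, :, band_697] + 1e-9)
          return ri > threshold    # boolean mask of suspected browning pixels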

  15. The elastic ratio: introducing curvature into ratio-based image segmentation.

    PubMed

    Schoenemann, Thomas; Masnou, Simon; Cremers, Daniel

    2011-09-01

    We present the first ratio-based image segmentation method that allows imposing curvature regularity of the region boundary. Our approach is a generalization of the ratio framework pioneered by Jermyn and Ishikawa that allows penalty functions taking into account the local curvature of the curve. The key idea is to cast the segmentation problem as one of finding cyclic paths of minimal ratio in a graph where each graph node represents a line segment. Among ratios whose discrete counterparts can be globally minimized with our approach, we focus in particular on the elastic ratio [Formula: see text] that depends, given an image I, on the oriented boundary C of the segmented region candidate. Minimizing this ratio amounts to finding a curve, neither too small nor too curvy, through which the brightness flux is maximal. We prove the existence of minimizers for this criterion among continuous curves with mild regularity assumptions. We also prove that the discrete minimizers provided by our graph-based algorithm converge, as the resolution increases, to continuous minimizers. In contrast to most existing segmentation methods with computable and meaningful, i.e., nondegenerate, global optima, the proposed approach is fully unsupervised in the sense that it does not require any kind of user input such as seed nodes. Numerical experiments demonstrate that curvature regularity allows substantial improvement of the quality of segmentations. Furthermore, our results allow drawing conclusions about global optima of a parameterization-independent version of the snakes functional: the proposed algorithm allows determining parameter values where the functional has a meaningful solution and simultaneously provides the corresponding global solution.

  16. Robust QRS detection for HRV estimation from compressively sensed ECG measurements for remote health-monitoring systems.

    PubMed

    Pant, Jeevan K; Krishnan, Sridhar

    2018-03-15

    To present a new compressive sensing (CS)-based method for the acquisition of ECG signals and for robust estimation of heart-rate variability (HRV) parameters from compressively sensed measurements with a high compression ratio. CS is used in the biosensor to compress the ECG signal. Estimation of the locations of QRS segments is carried out by applying two algorithms to the compressed measurements. The first algorithm reconstructs the ECG signal by enforcing a block-sparse structure on the first-order difference of the signal, so the transient QRS segments are significantly emphasized in the first-order difference. Multiple block divisions of the signal are carried out with various block lengths, and the multiple reconstructed signals are combined to enhance the robustness of QRS localization. The second algorithm removes errors in the locations of QRS segments by applying low-pass filtering and morphological operations. The proposed CS-based method is found to be effective for the reconstruction of ECG signals by enforcing transient QRS structures on the first-order difference of the signal. It is demonstrated to be robust not only to high compression ratios but also to various artefacts present in ECG signals acquired by on-body wireless sensors. HRV parameters computed using the QRS locations estimated from signals reconstructed with a compression ratio as high as 90% are comparable with those computed using QRS locations estimated by the Pan-Tompkins algorithm. The proposed method is useful for the realization of long-term HRV monitoring systems using CS-based low-power wireless on-body biosensors.

  17. CoGI: Towards Compressing Genomes as an Image.

    PubMed

    Xie, Xiaojing; Zhou, Shuigeng; Guan, Jihong

    2015-01-01

    Genomic science is now facing an explosive increase of data thanks to the fast development of sequencing technology. This situation poses serious challenges to genomic data storage and transfer. It is desirable to compress data to reduce storage and transfer cost, and thus to boost data distribution and utilization efficiency. Up to now, a number of algorithms/tools have been developed for compressing genomic sequences. Unlike the existing algorithms, most of which treat genomes as one-dimensional text strings and compress them based on dictionaries or probability models, this paper proposes a novel approach called CoGI (the abbreviation of Compressing Genomes as an Image) for genome compression, which transforms the genomic sequences into a two-dimensional binary image (or bitmap), then applies a rectangular partition coding algorithm to compress the binary image. CoGI can be used as either a reference-based compressor or a reference-free compressor. For the former, we develop two entropy-based algorithms to select a proper reference genome. Performance evaluation is conducted on various genomes. Experimental results show that the reference-based CoGI significantly outperforms two state-of-the-art reference-based genome compressors, GReEn and RLZ-opt, in both compression ratio and compression efficiency. It also achieves a comparable compression ratio but two orders of magnitude higher compression efficiency in comparison with XM, a state-of-the-art reference-free genome compressor. Furthermore, our approach performs much better than Gzip, a general-purpose and widely used compressor, in both compression speed and compression ratio. So, CoGI can serve as an effective and practical genome compressor. The source code and other related documents of CoGI are available at: http://admis.fudan.edu.cn/projects/cogi.htm.
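
    The core transform, mapping a genomic sequence to a two-dimensional binary image, can be sketched as follows; the 2-bits-per-base encoding and the row width are illustrative choices, not necessarily CoGI's exact layout.

      import numpy as np

      # Illustrative 2-bit encoding of the four bases
      BASE_BITS = {"A": (0, 0), "C": (0, 1), "G": (1, 0), "T": (1, 1)}

      def genome_to_bitmap(sequence, row_width=1024):
          """Pack a DNA string into a 2-D binary image (one bit per pixel)."""
          bits = [b for base in sequence for b in BASE_BITS.get(base, (0, 0))]
          rows = -(-len(bits) // row_width)             # ceiling division
          bits += [0] * (rows * row_width - len(bits))  # pad the last row
          return np.array(bits, dtype=np.uint8).reshape(rows, row_width)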

  18. Ultrasonic data compression via parameter estimation.

    PubMed

    Cardoso, Guilherme; Saniie, Jafar

    2005-02-01

    Ultrasonic imaging in medical and industrial applications often requires a large amount of data collection. Consequently, it is desirable to use data compression techniques to reduce data and to facilitate the analysis and remote access of ultrasonic information. Precise data representation is paramount to the accurate analysis of the shape, size, and orientation of ultrasonic reflectors, as well as to the determination of the properties of the propagation path. In this study, a successive parameter estimation algorithm based on a modified version of the continuous wavelet transform (CWT) to compress and denoise ultrasonic signals is presented. It has been shown analytically that the CWT (i.e., time × frequency representation) yields an exact solution for the time-of-arrival and a biased solution for the center frequency. Consequently, a modified CWT (MCWT) based on the Gabor-Helstrom transform is introduced as a means to exactly estimate both time-of-arrival and center frequency of ultrasonic echoes. Furthermore, the MCWT also has been used to generate a phase × bandwidth representation of the ultrasonic echo. This representation allows the exact estimation of the phase and the bandwidth. The performance of this algorithm for data compression and signal analysis is studied using simulated and experimental ultrasonic signals. The successive parameter estimation algorithm achieves a data compression ratio of (1-5N/J), where J is the number of samples and N is the number of echoes in the signal. For a signal with 10 echoes and 2048 samples, a compression ratio of 96% is achieved with a signal-to-noise ratio (SNR) improvement above 20 dB. Furthermore, this algorithm performs robustly, yields accurate echo estimation, and results in SNR enhancements ranging from 10 to 60 dB for composite signals having SNR as low as -10 dB.

  19. Stereovision-based pose and inertia estimation of unknown and uncooperative space objects

    NASA Astrophysics Data System (ADS)

    Pesce, Vincenzo; Lavagna, Michèle; Bevilacqua, Riccardo

    2017-01-01

    Autonomous close proximity operations are an arduous and attractive problem in space mission design. In particular, the estimation of pose, motion and inertia properties of an uncooperative object is a challenging task because of the lack of available a priori information. This paper develops a novel method to estimate the relative position, velocity, angular velocity, attitude and the ratios of the components of the inertia matrix of an uncooperative space object using only stereo-vision measurements. The classical Extended Kalman Filter (EKF) and an Iterated Extended Kalman Filter (IEKF) are used and compared for the estimation procedure. In addition, in order to compute the inertia properties, the ratios of the inertia components are added to the state and a pseudo-measurement equation is considered in the observation model. The relative simplicity of the proposed algorithm could make it suitable for online implementation in real applications. The developed algorithm is validated by numerical simulations in MATLAB using different initial conditions and uncertainty levels. The goal of the simulations is to verify the accuracy and robustness of the proposed estimation algorithm. The obtained results show satisfactory convergence of the estimation errors for all the considered quantities, and in several simulations they show improvements with respect to similar works in the literature that address the same problem. In addition, a video processing procedure is presented to reconstruct the geometrical properties of a body using cameras. This inertia reconstruction algorithm has been experimentally validated at the ADAMUS (ADvanced Autonomous MUltiple Spacecraft) Lab at the University of Florida. In the future, this method could be integrated with the inertia ratio estimator to form a complete tool for mass property recognition.

  20. Systematic review of dermoscopy and digital dermoscopy/ artificial intelligence for the diagnosis of melanoma.

    PubMed

    Rajpara, S M; Botello, A P; Townend, J; Ormerod, A D

    2009-09-01

    Dermoscopy improves the diagnostic accuracy of the unaided eye for melanoma, and digital dermoscopy with artificial intelligence or computer diagnosis has also been shown to be useful for the diagnosis of melanoma. At present there is no clear evidence regarding the diagnostic accuracy of dermoscopy compared with artificial intelligence. To evaluate the diagnostic accuracy of dermoscopy and digital dermoscopy/artificial intelligence for melanoma diagnosis and to compare the diagnostic accuracy of the different dermoscopic algorithms with each other and with digital dermoscopy/artificial intelligence for the detection of melanoma. A literature search on dermoscopy and digital dermoscopy/artificial intelligence for melanoma diagnosis was performed using several databases. Titles and abstracts of the retrieved articles were screened using a literature evaluation form. A quality assessment form was developed to assess the quality of the included studies. Heterogeneity among the studies was assessed. Pooled data were analysed using meta-analytical methods and comparisons between different algorithms were performed. Of 765 articles retrieved, 30 studies were eligible for meta-analysis. Pooled sensitivity for artificial intelligence was slightly higher than for dermoscopy (91% vs. 88%; P = 0.076). Pooled specificity for dermoscopy was significantly better than for artificial intelligence (86% vs. 79%; P < 0.001). The pooled diagnostic odds ratio was 51.5 for dermoscopy and 57.8 for artificial intelligence, which were not significantly different (P = 0.783). There were no significant differences in diagnostic odds ratio among the different dermoscopic diagnostic algorithms. Dermoscopy and artificial intelligence performed equally well for diagnosis of melanocytic skin lesions. There was no significant difference in the diagnostic performance of various dermoscopy algorithms. The three-point checklist, the seven-point checklist and the Menzies score had better diagnostic odds ratios than the others; however, these results need to be confirmed by a large-scale, high-quality, population-based study.
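
    Under the standard definition, the diagnostic odds ratio follows directly from sensitivity and specificity, as sketched below. Note that pooled DORs in a meta-analysis are aggregated across studies, so these round-number examples will not exactly reproduce the figures reported above.

      def diagnostic_odds_ratio(sensitivity, specificity):
          """DOR = (sens / (1 - sens)) * (spec / (1 - spec))."""
          return (sensitivity / (1 - sensitivity)) * (specificity / (1 - specificity))

      print(diagnostic_odds_ratio(0.88, 0.86))  # dermoscopy-like operating point
      print(diagnostic_odds_ratio(0.91, 0.79))  # artificial-intelligence-like point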

  1. Respiratory-gated segment reconstruction for radiation treatment planning using 256-slice CT-scanner during free breathing

    NASA Astrophysics Data System (ADS)

    Mori, Shinichiro; Endo, Masahiro; Kohno, Ryosuke; Minohara, Shinichi; Kohno, Kazutoshi; Asakura, Hiroshi; Fujiwara, Hideaki; Murase, Kenya

    2005-04-01

    The conventional respiratory-gated CT scan technique suffers from anatomic-motion-induced artifacts due to its low temporal resolution. These artifacts are a significant source of error in radiotherapy treatment planning for the thorax and upper abdomen. Temporal resolution and image quality are important factors in minimizing the planning target volume margin due to respiratory motion. To achieve high temporal resolution and a high signal-to-noise ratio, we developed a respiratory-gated segment reconstruction algorithm (RS-FDK) and adapted it to the Feldkamp-Davis-Kress (FDK) algorithm on a 256-detector-row CT. The 256-detector-row CT can scan approximately 100 mm in the cranio-caudal direction with 0.5 mm slice thickness in one rotation. Data acquisition for the RS-FDK relies on a respiratory sensing system during a cine scan (the table remains stationary). We evaluated the RS-FDK in a phantom study with the 256-detector-row CT, compared it with full-scan (FS-FDK) and half-scan (HS-FDK) results with regard to volume accuracy and image noise, and finally applied the RS-FDK in an animal study. The RS-FDK gave a more accurate volume than the others and had the same signal-to-noise ratio as the FS-FDK. In the animal study, the RS-FDK visualized the clearest edges of the liver and pulmonary vessels of all the algorithms. In conclusion, the RS-FDK algorithm provides high temporal resolution and a high signal-to-noise ratio. It will therefore be useful in combination with new radiotherapy techniques, including image guided radiation therapy (IGRT) and 4D radiation therapy.

  2. Efficient 3D geometric and Zernike moments computation from unstructured surface meshes.

    PubMed

    Pozo, José María; Villa-Uriol, Maria-Cruz; Frangi, Alejandro F

    2011-03-01

    This paper introduces and evaluates a fast exact algorithm and a series of faster approximate algorithms for the computation of 3D geometric moments from an unstructured surface mesh of triangles. Being based on the object surface reduces the computational complexity of these algorithms with respect to volumetric grid-based algorithms. In contrast, it can only be applied to the computation of geometric moments of homogeneous objects. This advantage and restriction are shared with other proposed algorithms based on the object boundary. The proposed exact algorithm reduces the computational complexity for computing geometric moments up to order N from N^9, for previously proposed exact algorithms, to N^6. The approximate series algorithm appears as a power series in the ratio between triangle size and object size, which can be truncated at any desired degree. The higher the number and quality of the triangles, the better the approximation. This approximate algorithm reduces the computational complexity to N^3. In addition, the paper introduces a fast algorithm for the computation of 3D Zernike moments from the computed geometric moments, with a computational complexity of N^4, whereas the previously proposed algorithm is of order N^6. The error introduced by the proposed approximate algorithms is evaluated on different shapes, and the cost-benefit ratio in terms of error and computational time is analyzed for different moment orders.
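
    The boundary-based idea behind the exact algorithm can be illustrated for the lowest-order moments: each triangle, taken together with the origin, defines a signed tetrahedron, and summing the signed contributions gives exact moments of a watertight, consistently oriented mesh. The sketch below covers only the volume and centroid; the paper's algorithm generalizes this to arbitrary order.

      import numpy as np

      def mesh_volume_and_centroid(vertices, faces):
          """Zeroth and first geometric moments of a closed triangle mesh."""
          a = vertices[faces[:, 0]]
          b = vertices[faces[:, 1]]
          c = vertices[faces[:, 2]]
          signed_vol = np.einsum("ij,ij->i", a, np.cross(b, c)) / 6.0
          volume = signed_vol.sum()
          tetra_centroids = (a + b + c) / 4.0   # fourth vertex is the origin
          centroid = (signed_vol[:, None] * tetra_centroids).sum(axis=0) / volume
          return volume, centroid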

  3. The potential of LIRIC to validate the vertical profiles of the aerosol mass concentration estimated by an air quality model

    NASA Astrophysics Data System (ADS)

    Siomos, Nikolaos; Filoglou, Maria; Poupkou, Anastasia; Liora, Natalia; Dimopoulos, Spyros; Melas, Dimitris; Chaikovsky, Anatoli; Balis, Dimitris

    2015-04-01

    Vertical profiles of the aerosol mass concentration derived by a retrieval algorithm that uses combined sunphotometer and LIDAR data (LIRIC) were used to validate the mass concentration profiles estimated by the air quality model CAMx. LIDAR and CIMEL measurements of the Laboratory of Atmospheric Physics of the Aristotle University of Thessaloniki were used for this validation. The aerosol mass concentration profiles of the fine and coarse modes derived by CAMx were compared with the respective profiles derived by the retrieval algorithm. For the coarse mode particles, forecasts of the Saharan dust transport model BSC-DREAM8bV2 were also taken into account. Each of the retrieval algorithm's profiles was matched to the model profile with the best agreement within a time window of four hours before and after the central measurement. OPAC, a software package that provides optical properties of aerosol mixtures, was also employed to calculate the Angstrom exponent and lidar ratio values at 355 nm and 532 nm for each of the model profiles, aiming at a comparison with the Angstrom exponent and lidar ratio values derived by the retrieval algorithm for each measurement. The comparison of the fine mode aerosol concentration profiles showed good agreement between CAMx and the retrieval algorithm, with the vertical mean bias error never exceeding 7 μg/m3. Concerning the coarse mode aerosol concentration profiles, both CAMx and BSC-DREAM8bV2 values are severely underestimated, although in cases of Saharan dust transport events there is agreement between the profiles of the BSC-DREAM8bV2 model and the retrieval algorithm.
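
    The Angstrom exponent mentioned above follows from extinction-related quantities at two wavelengths via the standard two-wavelength formula (a general definition, independent of this paper):

      import numpy as np

      def angstrom_exponent(tau_355, tau_532, lam1=355.0, lam2=532.0):
          """alpha = -ln(tau1 / tau2) / ln(lam1 / lam2)."""
          return -np.log(tau_355 / tau_532) / np.log(lam1 / lam2)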

  4. Automated segmentation of white matter fiber bundles using diffusion tensor imaging data and a new density based clustering algorithm.

    PubMed

    Kamali, Tahereh; Stashuk, Daniel

    2016-10-01

    Robust and accurate segmentation of brain white matter (WM) fiber bundles assists in diagnosing and assessing progression or remission of neuropsychiatric diseases such as schizophrenia, autism and depression. Supervised segmentation methods are infeasible in most applications since generating gold standards is too costly. Hence, there is a growing interest in designing unsupervised methods. However, most conventional unsupervised methods require that the number of clusters be known in advance, which is not possible in most applications. The purpose of this study is to design an unsupervised segmentation algorithm for brain white matter fiber bundles that can automatically segment fiber bundles using intrinsic diffusion tensor imaging data information, without any prior information or assumption about data distributions. Here, a new density-based clustering algorithm called neighborhood distance entropy consistency (NDEC) is proposed, which discovers natural clusters within data by simultaneously utilizing both local and global density information. The performance of NDEC is compared with other state-of-the-art clustering algorithms, including chameleon, spectral clustering, DBSCAN and k-means, using the Johns Hopkins University publicly available diffusion tensor imaging data. The performance of NDEC and the other clustering algorithms was evaluated using the Dice ratio as an external evaluation criterion and the density-based clustering validation (DBCV) index as an internal evaluation metric. Across all employed clustering algorithms, NDEC obtained the highest average Dice ratio (0.94) and DBCV value (0.71). NDEC can find clusters with arbitrary shapes and densities and consequently can be used for WM fiber bundle segmentation where there is no distinct boundary between bundles. NDEC may also be used as an effective tool in other pattern recognition and medical diagnostic systems in which discovering natural clusters within data is a necessity. Copyright © 2016 Elsevier B.V. All rights reserved.
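
    The Dice ratio used as the external criterion is a standard overlap measure between two binary segmentations; a minimal sketch:

      import numpy as np

      def dice_ratio(seg_a, seg_b):
          """Dice overlap between two binary segmentations (1 = perfect)."""
          seg_a = seg_a.astype(bool)
          seg_b = seg_b.astype(bool)
          intersection = np.logical_and(seg_a, seg_b).sum()
          return 2.0 * intersection / (seg_a.sum() + seg_b.sum())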

  5. A Hadoop-Based Algorithm of Generating DEM Grid from Point Cloud Data

    NASA Astrophysics Data System (ADS)

    Jian, X.; Xiao, X.; Chengfang, H.; Zhizhong, Z.; Zhaohui, W.; Dengzhong, Z.

    2015-04-01

    Airborne LiDAR technology has proven to be one of the most powerful tools for obtaining high-density, high-accuracy and significantly detailed surface information on terrain and surface objects within a short time, from which a high-quality Digital Elevation Model (DEM) can be extracted. Point cloud data generated from the pre-processed data should be classified by segmentation algorithms so as to separate terrain points from other points, followed by a procedure that interpolates the selected points to turn them into DEM data. The whole procedure takes a long time and huge computing resources due to the high point density, which has been the focus of a number of studies. Hadoop is a distributed system infrastructure developed by the Apache Foundation, which contains a highly fault-tolerant distributed file system (HDFS) with a high transmission rate and a parallel programming model (Map/Reduce). Such a framework is appropriate for DEM generation algorithms to improve efficiency. Point cloud data of Dongting Lake acquired by a Riegl LMS-Q680i laser scanner were utilized as the original data to generate a DEM by a Hadoop-based algorithm implemented on Linux, followed by a traditional procedure programmed in C++ as the comparative experiment. The algorithms' efficiency, coding complexity, and performance-cost ratio were then discussed for the comparison. The results demonstrate that the algorithm's speed depends on the size of the point set and the density of the DEM grid, and that the non-Hadoop implementation can achieve high performance when memory is big enough, whereas the Hadoop implementation achieves a higher performance-cost ratio when the point set is of vast quantities.
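
    The Map/Reduce decomposition of DEM gridding is straightforward: the map step assigns each point an elevation keyed by its grid cell, and the reduce step aggregates the elevations per cell. The pure-Python sketch below only mimics that structure for illustration; a real deployment would express the same two functions through Hadoop's MapReduce API.

      from collections import defaultdict

      def map_points(points, cell_size):
          """Map step: emit (cell, elevation) pairs for each LiDAR point."""
          for x, y, z in points:
              yield (int(x // cell_size), int(y // cell_size)), z

      def reduce_cells(pairs):
          """Reduce step: average the elevations falling in each grid cell."""
          cells = defaultdict(list)
          for cell, z in pairs:
              cells[cell].append(z)
          return {cell: sum(zs) / len(zs) for cell, zs in cells.items()}

      dem = reduce_cells(map_points([(1.2, 3.4, 30.5), (1.8, 3.1, 31.0)], 2.0))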

  6. A comparative intelligibility study of single-microphone noise reduction algorithms.

    PubMed

    Hu, Yi; Loizou, Philipos C

    2007-09-01

    An evaluation of the intelligibility of noise reduction algorithms is reported. IEEE sentences and consonants were corrupted by four types of noise, including babble, car, street and train, at two signal-to-noise ratio levels (0 and 5 dB), and then processed by eight speech enhancement methods encompassing four classes of algorithms: spectral subtractive, subspace, statistical model based and Wiener-type algorithms. The enhanced speech was presented to normal-hearing listeners for identification. With the exception of a single noise condition, no algorithm produced significant improvements in speech intelligibility. Information transmission analysis of the consonant confusion matrices indicated that no algorithm significantly improved the place feature score, which is critically important for speech recognition. The algorithms found in previous studies to perform best in terms of overall quality were not the same algorithms that performed best in terms of speech intelligibility. The subspace algorithm, for instance, was previously found to perform the worst in terms of overall quality, but performed well in the present study in terms of preserving speech intelligibility. Overall, the analysis of consonant confusion matrices suggests that in order for noise reduction algorithms to improve speech intelligibility, they need to improve the place and manner feature scores.

  7. Seasonal And Regional Differentiation Of Bio-Optical Properties Within The North Polar Atlantic

    NASA Technical Reports Server (NTRS)

    Stramska, Malgorzata; Stramski, Dariusz; Kaczmarek, Slawomir; Allison, David B.; Schwarz, Jill

    2005-01-01

    Using data collected during the spring and summer seasons in the north polar Atlantic, we examined the variability of the spectral absorption, a(lambda), and backscattering, b_b(lambda), coefficients of surface waters and their relation to phytoplankton pigment concentration and composition. For a given chlorophyll a concentration (TChla), the concentrations of photosynthetic carotenoids (PSC), photoprotective carotenoids (PPC), and total accessory pigments (AP) were consistently lower in spring than in summer. The chlorophyll-specific absorption coefficients of phytoplankton and total particulate matter were also lower in spring, which can be partly attributed to lower proportions of PPC, PSC, and AP in spring. The spring values of the green-to-blue band ratio of the absorption coefficient were higher than the summer ratios. The blue-to-green ratios of the backscattering coefficient were also higher in spring. The higher b_b values and lower blue-to-green b_b ratios in summer were likely associated with higher concentrations of detrital particles in summer compared to spring. Because the product of the green-to-blue absorption ratio and the blue-to-green backscattering ratio is a proxy for the blue-to-green ratio of remote-sensing reflectance, we conclude that the performance of ocean color band-ratio algorithms for estimating pigments in the north polar Atlantic is significantly affected by seasonal shifts in the relationships between absorption and TChla as well as between backscattering and TChla. Intriguingly, however, a fairly good estimate of the particulate beam attenuation coefficient at 660 nm (a potential measure of total particulate matter or particulate organic carbon concentration) can be obtained by applying a single blue-to-green band-ratio algorithm for both spring and summer seasons.

  8. Phasor based single-molecule localization microscopy in 3D (pSMLM-3D): An algorithm for MHz localization rates using standard CPUs

    NASA Astrophysics Data System (ADS)

    Martens, Koen J. A.; Bader, Arjen N.; Baas, Sander; Rieger, Bernd; Hohlbein, Johannes

    2018-03-01

    We present a fast and model-free 2D and 3D single-molecule localization algorithm that allows more than 3 × 10^6 localizations per second to be calculated on a standard multi-core central processing unit, with localization accuracies in line with the most accurate algorithms currently available. Our algorithm converts the region of interest around a point spread function to two phase vectors (phasors) by calculating the first Fourier coefficients in both the x- and y-direction. The angles of these phasors are used to localize the center of the single fluorescent emitter, and the ratio of the magnitudes of the two phasors is a measure of astigmatism, which can be used to obtain depth information (z-direction). Our approach can be used both as a stand-alone algorithm for maximizing localization speed and as a first estimator for more time-consuming iterative algorithms.
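
    A minimal sketch of the phasor idea follows: the first Fourier coefficients along x and y are read off a 2-D FFT of the region of interest, their phase angles encode the emitter position, and the magnitude ratio serves as the astigmatism proxy. The published implementation includes refinements beyond this sketch.

      import numpy as np

      def phasor_localize(roi):
          """Localize an emitter in a square ROI; returns (x, y, magnitude ratio)."""
          w = roi.shape[0]
          coeffs = np.fft.fft2(roi)
          fx = coeffs[0, 1]   # first coefficient along x (columns)
          fy = coeffs[1, 0]   # first coefficient along y (rows)
          # The phase angle maps linearly to position within the ROI
          x = (-np.angle(fx) % (2 * np.pi)) * w / (2 * np.pi)
          y = (-np.angle(fy) % (2 * np.pi)) * w / (2 * np.pi)
          return x, y, np.abs(fx) / np.abs(fy)   # ratio: astigmatism (z) proxy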

  9. Hue-preserving and saturation-improved color histogram equalization algorithm.

    PubMed

    Song, Ki Sun; Kang, Hee; Kang, Moon Gi

    2016-06-01

    In this paper, an algorithm is proposed to improve contrast and saturation without color degradation. The local histogram equalization (HE) method offers better performance than the global HE method, but sometimes produces undesirable results due to its block-based processing. The proposed contrast-enhancement (CE) algorithm reflects the characteristics of the global HE method within the local HE method to avoid these artifacts while enhancing both global and local contrast. There are two common ways to apply a CE algorithm to color images: one is to process only the luminance channel, and the other is to process each color channel independently. However, these approaches incur problems of excessive or reduced saturation and color degradation. The proposed algorithm solves these problems by using channel-adaptive equalization and the similarity of ratios between the channels. Experimental results show that the proposed algorithm enhances contrast and saturation while preserving hue, and produces better performance than existing methods in terms of objective evaluation metrics.
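
    A common way to exploit the similarity of channel ratios, consistent with the idea above, is to enhance only the luminance and scale all three channels by the same ratio so that hue is preserved; the sketch below is a generic illustration, not the authors' exact equalization.

      import numpy as np

      def hue_preserving_enhance(rgb, enhance_luma):
          """Scale R, G, B by the ratio of enhanced to original luminance.

          rgb: float image in [0, 1], shape (H, W, 3)
          enhance_luma: callable mapping a luminance image to its enhanced version
          """
          luma = rgb @ np.array([0.299, 0.587, 0.114])   # BT.601 luminance
          ratio = enhance_luma(luma) / (luma + 1e-6)
          return np.clip(rgb * ratio[..., None], 0.0, 1.0)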

  10. Accurate 3D reconstruction by a new PDS-OSEM algorithm for HRRT

    NASA Astrophysics Data System (ADS)

    Chen, Tai-Been; Horng-Shing Lu, Henry; Kim, Hang-Keun; Son, Young-Don; Cho, Zang-Hee

    2014-03-01

    State-of-the-art high resolution research tomography (HRRT) provides high resolution PET images with full 3D human brain scanning. However, short time frames in dynamic studies cause many problems related to the low counts in the acquired data. The PDS-OSEM algorithm was proposed to reconstruct the HRRT image with a high signal-to-noise ratio, providing accurate information for dynamic data. The new algorithm was evaluated on simulated images, empirical phantoms, and real human brain data. Meanwhile, the time-activity curve was adopted to compare the reconstruction performance on dynamic data between the PDS-OSEM and OP-OSEM algorithms. According to the simulated and empirical studies, the PDS-OSEM algorithm reconstructs images with higher quality, higher accuracy, less noise, and a lower average sum of squared errors than OP-OSEM. The presented algorithm is useful for providing quality images under low count rates in dynamic studies with short scan times.

  11. Evolutionary Beamforming Optimization for Radio Frequency Charging in Wireless Rechargeable Sensor Networks

    PubMed Central

    Yao, Ke-Han; Jiang, Jehn-Ruey; Tsai, Chung-Hsien; Wu, Zong-Syun

    2017-01-01

    This paper investigates how to efficiently charge sensor nodes in a wireless rechargeable sensor network (WRSN) with radio frequency (RF) chargers to make the network sustainable. An RF charger is assumed to be equipped with a uniform circular array (UCA) of 12 antennas with radius λ, where λ is the RF wavelength. The UCA can steer most RF energy in a target direction to charge a specific WRSN node using beamforming. Two evolutionary algorithms (EAs) using the evolution strategy (ES), namely the Evolutionary Beamforming Optimization (EBO) algorithm and the Evolutionary Beamforming Optimization Reseeding (EBO-R) algorithm, are proposed to nearly optimize the power ratio of the peak side lobe (PSL) to the main lobe (ML) of the UCA beamforming pattern aimed at the given target direction. The proposed algorithms are simulated for performance evaluation and are compared with a related algorithm, called Particle Swarm Optimization Gravitational Search Algorithm-Explore (PSOGSA-Explore), to show their superiority. PMID:28825648
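
    An evolutionary search of this kind needs a fitness evaluation mapping a candidate phase vector to the PSL-to-ML power ratio. The sketch below evaluates the azimuth-plane array factor of a 12-element UCA; the 0.5-degree scan grid and the 15-degree main-lobe exclusion are illustrative assumptions, not the paper's settings.

      import numpy as np

      def psl_to_ml_ratio(phases, target_deg, radius_wavelengths=1.0, n_ant=12):
          """PSL/ML power ratio of a UCA for a candidate phase vector (radians)."""
          ant_angles = 2 * np.pi * np.arange(n_ant) / n_ant
          phi = np.deg2rad(np.arange(0.0, 360.0, 0.5))   # azimuth scan grid
          # Azimuth-plane array factor of a uniform circular array
          geom = 2 * np.pi * radius_wavelengths * np.cos(phi[:, None] - ant_angles)
          af = np.abs(np.exp(1j * (geom + phases)).sum(axis=1)) ** 2
          diff = np.abs((np.rad2deg(phi) - target_deg + 180.0) % 360.0 - 180.0)
          main = af[np.argmin(diff)]
          side = af[diff > 15.0].max()    # assumed main-lobe half-width of 15 deg
          return side / main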

  12. EMD self-adaptive selecting relevant modes algorithm for FBG spectrum signal

    NASA Astrophysics Data System (ADS)

    Chen, Yong; Wu, Chun-ting; Liu, Huan-lin

    2017-07-01

    Noise may reduce the demodulation accuracy of the fiber Bragg grating (FBG) sensing signal and thus affect the quality of sensing detection, so the recovery of the signal from observed noisy data is necessary. In this paper, a precise self-adaptive algorithm for selecting relevant modes is proposed to remove noise from the signal. Empirical mode decomposition (EMD) is first used to decompose the signal into a set of modes. Pseudo-mode cancellation is introduced to identify and eliminate false modes, and then the mutual information (MI) of partial modes is calculated. The MI is used to estimate the critical point between the high- and low-frequency components. Simulation results show that the proposed algorithm estimates the critical point more accurately than traditional algorithms for the FBG spectral signal. Compared with similar algorithms, the proposed algorithm improves the signal-to-noise ratio by more than 10 dB and increases the correlation coefficient by 0.5, demonstrating a better de-noising effect.

  13. Metal artefact reduction with cone beam CT: an in vitro study

    PubMed Central

    Bechara, BB; Moore, WS; McMahan, CA; Noujeim, M

    2012-01-01

    Background Metal in a patient's mouth has been shown to cause artefacts that can interfere with the diagnostic quality of cone beam CT. Recently, a manufacturer has made available an algorithm and software which reduce metal streak artefact (Picasso Master 3D® machine; Vatech, Hwaseong, Republic of Korea). Objectives The purpose of this investigation was to determine whether the metal artefact reduction algorithm was effective and enhanced the contrast-to-noise ratio. Methods A phantom was constructed incorporating three metallic beads and three epoxy resin-based bone substitutes to simulate bone next to metal. The phantom was placed in the centre of the field of view and at the periphery. 10 data sets were acquired at 50–90 kVp. The images obtained were analysed using the public-domain software ImageJ (NIH Image, Bethesda, MD). Profile lines were used to evaluate grey level changes and area histograms were used to evaluate contrast. The contrast-to-noise ratio was calculated. Results The metal artefact reduction option reduced grey value variation and increased the contrast-to-noise ratio. The grey value varied least when the phantom was in the middle of the volume and the metal artefact reduction was activated. The image quality improved as the peak kilovoltage increased. Conclusion Better images of a phantom were obtained when the metal artefact reduction algorithm was used. PMID:22241878

  14. An algorithm that improves speech intelligibility in noise for normal-hearing listeners.

    PubMed

    Kim, Gibak; Lu, Yang; Hu, Yi; Loizou, Philipos C

    2009-09-01

    Traditional noise-suppression algorithms have been shown to improve speech quality, but not speech intelligibility. Motivated by prior intelligibility studies of speech synthesized using the ideal binary mask, an algorithm is proposed that decomposes the input signal into time-frequency (T-F) units and makes binary decisions, based on a Bayesian classifier, as to whether each T-F unit is dominated by the target or the masker. Speech corrupted at low signal-to-noise ratio (SNR) levels (-5 and 0 dB) using different types of maskers was synthesized by this algorithm and presented to normal-hearing listeners for identification. Results indicated substantial improvements in intelligibility (over 60 percentage points in -5 dB babble) over that attained by human listeners with unprocessed stimuli. The findings from this study suggest that algorithms that can reliably estimate the SNR in each T-F unit can improve speech intelligibility.
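
    The decomposition-and-decision step can be illustrated with a short time-frequency masking sketch: compute an STFT, keep only the T-F units whose local SNR exceeds a threshold, and resynthesize. Here the masker is assumed known (the ideal case); the paper's contribution is the Bayesian classifier that estimates these binary decisions from the noisy signal.

      import numpy as np
      from scipy.signal import stft, istft

      def ideal_binary_mask_synthesis(target, masker, fs=16000, lc_db=-5.0):
          """Resynthesize speech keeping only T-F units with local SNR > lc_db."""
          _, _, T = stft(target, fs=fs)
          _, _, M = stft(masker, fs=fs)
          local_snr = 10 * np.log10((np.abs(T) ** 2 + 1e-12) / (np.abs(M) ** 2 + 1e-12))
          mask = local_snr > lc_db                  # binary decision per T-F unit
          _, _, X = stft(target + masker, fs=fs)    # noisy mixture
          _, out = istft(X * mask, fs=fs)
          return out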

  15. Spectral matching technology for light-emitting diode-based jaundice photodynamic therapy device

    NASA Astrophysics Data System (ADS)

    Gan, Ru-ting; Guo, Zhen-ning; Lin, Jie-ben

    2015-02-01

    The objective of this paper is to obtain the spectrum of a light-emitting diode (LED)-based jaundice photodynamic therapy device (JPTD); the in vivo bilirubin absorption spectrum was regarded as the target spectrum. According to spectral constructing theory, a simple genetic algorithm was first proposed as the spectral matching algorithm in this study. The optimal combination ratios of the LEDs were obtained, and the required number of LEDs was then calculated. Meanwhile, the algorithm was compared with existing spectral matching algorithms. The results show that this algorithm runs faster with higher efficiency (the switching time consumed is 2.06 s), and the fitted spectrum is very similar to the target spectrum, with a 98.15% matching degree. Thus, a blue LED-based JPTD can replace the traditional blue fluorescent tube, and the spectral matching technology put forward here can be applied to light source spectral matching for jaundice photodynamic therapy and other medical phototherapy.
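
    The matching objective itself can also be posed as a non-negative least-squares problem over the individual LED spectra, which makes the genetic algorithm's goal concrete; this generic formulation is an alternative illustration, not the paper's method.

      import numpy as np
      from scipy.optimize import nnls

      def match_spectrum(led_spectra, target):
          """Non-negative weights minimizing ||led_spectra @ w - target||.

          led_spectra: (n_wavelengths, n_led_types) matrix of measured LED spectra
          target: (n_wavelengths,) target absorption spectrum
          """
          weights, residual = nnls(led_spectra, target)
          return weights / weights.sum(), residual   # normalized ratios, fit error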

  16. Stokes space modulation format classification based on non-iterative clustering algorithm for coherent optical receivers.

    PubMed

    Mai, Xiaofeng; Liu, Jie; Wu, Xiong; Zhang, Qun; Guo, Changjian; Yang, Yanfu; Li, Zhaohui

    2017-02-06

    A Stokes-space modulation format classification (MFC) technique is proposed for coherent optical receivers by using a non-iterative clustering algorithm. In the clustering algorithm, two simple parameters are calculated to help find the density peaks of the data points in Stokes space and no iteration is required. Correct MFC can be realized in numerical simulations among PM-QPSK, PM-8QAM, PM-16QAM, PM-32QAM and PM-64QAM signals within practical optical signal-to-noise ratio (OSNR) ranges. The performance of the proposed MFC algorithm is also compared with those of other schemes based on clustering algorithms. The simulation results show that good classification performance can be achieved using the proposed MFC scheme with moderate time complexity. Proof-of-concept experiments are finally implemented to demonstrate MFC among PM-QPSK/16QAM/64QAM signals, which confirm the feasibility of our proposed MFC scheme.
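
    In density-peak clustering of the kind described (the description matches Rodriguez and Laio's formulation), the two per-point parameters are the local density rho and the distance delta to the nearest point of higher density; cluster centers stand out by having both values large. A minimal sketch:

      import numpy as np

      def density_peak_parameters(points, d_c):
          """Compute (rho, delta) for each Stokes-space point."""
          d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
          rho = (d < d_c).sum(axis=1) - 1    # neighbors within cutoff d_c
          delta = np.empty(len(points))
          for i in range(len(points)):
              higher = rho > rho[i]
              delta[i] = d[i, higher].min() if higher.any() else d[i].max()
          return rho, delta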

  17. Motion Estimation Using the Firefly Algorithm in Ultrasonic Image Sequence of Soft Tissue

    PubMed Central

    Chao, Chih-Feng; Horng, Ming-Huwi; Chen, Yu-Chan

    2015-01-01

    Ultrasonic image sequences of soft tissue are widely used in disease diagnosis; however, speckle noise usually degrades the image quality, and these images typically present a low signal-to-noise ratio. This makes traditional motion estimation algorithms unsuitable for measuring the motion vectors. In this paper, a new motion estimation algorithm is developed for assessing the velocity field of soft tissue in a sequence of ultrasonic B-mode images. The proposed iterative firefly algorithm (IFA) searches a few candidate points to obtain the optimal motion vector; it is then compared with the traditional iterative full search algorithm (IFSA) via a series of experiments on in vivo ultrasonic image sequences. The experimental results show that the IFA can estimate the vector with better efficiency and almost equal estimation quality compared to the traditional IFSA method. PMID:25873987

  18. Motion estimation using the firefly algorithm in ultrasonic image sequence of soft tissue.

    PubMed

    Chao, Chih-Feng; Horng, Ming-Huwi; Chen, Yu-Chan

    2015-01-01

    Ultrasonic image sequences of soft tissue are widely used in disease diagnosis; however, speckle noise usually degrades the image quality, and these images typically present a low signal-to-noise ratio. This makes traditional motion estimation algorithms unsuitable for measuring the motion vectors. In this paper, a new motion estimation algorithm is developed for assessing the velocity field of soft tissue in a sequence of ultrasonic B-mode images. The proposed iterative firefly algorithm (IFA) searches a few candidate points to obtain the optimal motion vector; it is then compared with the traditional iterative full search algorithm (IFSA) via a series of experiments on in vivo ultrasonic image sequences. The experimental results show that the IFA can estimate the vector with better efficiency and almost equal estimation quality compared to the traditional IFSA method.

  19. Iterative Code-Aided ML Phase Estimation and Phase Ambiguity Resolution

    NASA Astrophysics Data System (ADS)

    Wymeersch, Henk; Moeneclaey, Marc

    2005-12-01

    As many coded systems operate at very low signal-to-noise ratios, synchronization becomes a very difficult task. In many cases, conventional algorithms will either require long training sequences or result in large BER degradations. By exploiting code properties, these problems can be avoided. In this contribution, we present several iterative maximum-likelihood (ML) algorithms for joint carrier phase estimation and ambiguity resolution. These algorithms operate on coded signals by accepting soft information from the MAP decoder. Issues of convergence and initialization are addressed in detail. Simulation results are presented for turbo codes, and are compared to performance results of conventional algorithms. Performance comparisons are carried out in terms of BER performance and mean square estimation error (MSEE). We show that the proposed algorithm reduces the MSEE and, more importantly, the BER degradation. Additionally, phase ambiguity resolution can be performed without resorting to a pilot sequence, thus improving the spectral efficiency.

  20. Optimisation algorithms for ECG data compression.

    PubMed

    Haugland, D; Heber, J G; Husøy, J H

    1997-07-01

    The use of exact optimisation algorithms for compressing digital electrocardiograms (ECGs) is demonstrated. As opposed to traditional time-domain methods, which use heuristics to select a small subset of representative signal samples, the problem of selecting the subset is formulated in rigorous mathematical terms. This approach makes it possible to derive algorithms guaranteeing the smallest possible reconstruction error when a bounded selection of signal samples is interpolated. The proposed model resembles well-known network models and is solved by a cubic dynamic programming algorithm. When applied to standard test problems, the algorithm produces a compressed representation for which the distortion is about one-half of that obtained by traditional time-domain compression techniques at reasonable compression ratios. This illustrates that, in terms of the accuracy of decoded signals, existing time-domain heuristics for ECG compression may be far from what is theoretically achievable. The paper is an attempt to bridge this gap.

  1. Incrementing data quality of multi-frequency echograms using the Adaptive Wiener Filter (AWF) denoising algorithm

    NASA Astrophysics Data System (ADS)

    Peña, M.

    2016-10-01

    Achieving an acceptable signal-to-noise ratio (SNR) can be difficult when working in sparsely populated waters and/or when species have low scattering, such as fluid-filled animals. The increasing use of higher frequencies and the study of deeper depths in fisheries acoustics, as well as the use of commercial vessels, are raising the need for good denoising algorithms. The use of a lower Sv threshold to remove noise or unwanted targets is not suitable in many cases and increases the relative background noise component in the echogram, demanding more effectiveness from denoising algorithms. The Adaptive Wiener Filter (AWF) denoising algorithm is presented in this study. The technique is based on the AWF commonly used in digital photography and video enhancement. The algorithm first improves the quality of the data with variance-dependent smoothing, before estimating the noise level as the envelope of the Sv minima. The AWF denoising algorithm outperforms existing algorithms in the presence of Gaussian, speckle and salt-and-pepper noise, although impulse noise needs to be removed beforehand. Cleaned echograms present homogeneous echotraces with outlined edges.
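
    The classic adaptive (local) Wiener filter attenuates each sample toward the local mean in proportion to how much the local variance exceeds the noise variance. This standard image-processing formulation, not the paper's exact variance-dependent variant, is sketched below.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def adaptive_wiener(img, noise_var, win=5):
          """Local Wiener filter: mu + max(var - noise, 0) / var * (img - mu)."""
          mu = uniform_filter(img, win)                   # local mean
          var = uniform_filter(img ** 2, win) - mu ** 2   # local variance
          gain = np.maximum(var - noise_var, 0.0) / np.maximum(var, 1e-12)
          return mu + gain * (img - mu)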

  2. Self-recovery fragile watermarking algorithm based on SPIHT

    NASA Astrophysics Data System (ADS)

    Xin, Li Ping

    2015-12-01

    A fragile watermarking algorithm based on SPIHT coding is proposed that can recover the primary image itself. The novelty of the algorithm is that it can localize tampering and perform self-restoration, and the recovery achieves a very good effect. First, utilizing the zero-tree structure, the algorithm compresses and encodes the image itself to obtain self-correlated watermark data, greatly reducing the quantity of embedded watermark data. The watermark data are then encoded with an error-correcting code, and the check bits and watermark bits are scrambled and embedded to enhance the recovery ability. At the same time, by embedding the watermark into the two least significant bit-planes of the gray-level image, the watermarked image gains a better visual effect. The experimental results show that the proposed algorithm can not only detect various kinds of processing, such as noise addition, cropping, and filtering, but can also recover tampered images and realize blind detection. Peak signal-to-noise ratios of the watermarked images were higher than those of similar algorithms, and the attack resistance of the algorithm was enhanced.

  3. Opposition-Based Memetic Algorithm and Hybrid Approach for Sorting Permutations by Reversals.

    PubMed

    Soncco-Álvarez, José Luis; Muñoz, Daniel M; Ayala-Rincón, Mauricio

    2018-02-21

    Sorting unsigned permutations by reversals is a difficult problem; indeed, it was proved to be NP-hard by Caprara (1997). Because of its high complexity, many approximation algorithms to compute the minimal reversal distance were proposed, until reaching the current best-known theoretical ratio of 1.375. In this article, two memetic algorithms to compute the reversal distance are proposed. The first one uses the technique of opposition-based learning, leading to an opposition-based memetic algorithm; the second one improves the previous algorithm by applying the heuristic of two-breakpoint elimination, leading to a hybrid approach. Several experiments were performed with one hundred randomly generated permutations, single benchmark permutations, and biological permutations. The results showed that the proposed OBMA and Hybrid-OBMA algorithms achieve the best results for practical cases, that is, for permutations of length up to 120. Also, Hybrid-OBMA was shown to improve on the results of OBMA for permutations of length greater than or equal to 60. The applicability of the proposed algorithms was verified by processing permutations based on biological data, for which OBMA gave the best average results for all instances.
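
    To make the breakpoint notion concrete, a minimal counter for unsigned permutations is sketched below; breakpoints are the quantity such elimination heuristics try to reduce, since one reversal can remove at most two of them.

```python
def breakpoints(perm):
    """Breakpoints of an unsigned permutation of 1..n, framed by 0 and n+1:
    adjacent values that are not consecutive integers. The identity has 0."""
    ext = [0] + list(perm) + [len(perm) + 1]
    return sum(1 for a, b in zip(ext, ext[1:]) if abs(a - b) != 1)

# Example: breakpoints([3, 1, 2, 4]) == 3
```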

  4. From bicycle chain ring shape to gear ratio: algorithm and examples.

    PubMed

    van Soest, A J

    2014-01-03

    A simple model of the bicycle drive system with a non-circular front chain ring is proposed and an algorithm is devised for calculation of the corresponding Gear Ratio As a Function Of Crank Angle (GRAFOCA). It is shown that the true effective radius of the chain ring is always the perpendicular distance between the crank axis and the line through the chain segment between the chain ring and the cog. It is illustrated that the true effective radius of the chain ring at any crank angle may differ substantially from the maximum vertical distance between the crank axis and the chain ring circumference that is used as a proxy for the effective chain ring radius in several studies; in particular, the crank angle at which the effective chain ring radius is maximal as predicted from the latter approach may deviate by as much as 0.30 rad from the true value. The algorithm proposed may help in designing chain rings that achieve the desired GRAFOCA. © 2013 Published by Elsevier Ltd. All rights reserved.
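
    The geometric claim translates directly into code. The sketch below assumes the two tangent points of the straight chain span are known; the function name and inputs are illustrative, not the paper's notation.

```python
import numpy as np

def effective_radius(p_ring, p_cog, crank_axis=(0.0, 0.0)):
    """True effective chain ring radius at one crank angle: the perpendicular
    distance from the crank axis to the line through the straight chain span,
    given by its tangent points on the chain ring (p_ring) and cog (p_cog)."""
    p1, p2, c = (np.asarray(p, float) for p in (p_ring, p_cog, crank_axis))
    d = p2 - p1
    # |2D cross product| / |segment length| = point-to-line distance
    return abs(d[0] * (c[1] - p1[1]) - d[1] * (c[0] - p1[0])) / np.hypot(*d)

# Gear ratio at that crank angle = effective_radius(...) / cog_radius
```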

  5. Specular reflection treatment for the 3D radiative transfer equation solved with the discrete ordinates method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Le Hardy, D.; Favennec, Y., E-mail: yann.favennec@univ-nantes.fr; Rousseau, B.

    The contribution of this paper lies in the development of numerical algorithms for the mathematical treatment of specular reflection on borders when dealing with the numerical solution of radiative transfer problems. The radiative transfer equation being integro-differential, the discrete ordinates method allows one to write down a set of semi-discrete equations in which weights are to be calculated. The calculation of these weights is well known to be based on either a quadrature or an angular discretization, making the use of such a method straightforward for the state equation. Also, the diffuse contribution of reflection on borders is usually well taken into account. However, the calculation of accurate partition ratio coefficients is much trickier for the specular condition applied on arbitrary geometrical borders. This paper presents algorithms that analytically calculate the partition ratio coefficients needed in numerical treatments. The developed algorithms, combined with a decentered finite element scheme, are validated with the help of comparisons with analytical solutions before being applied to complex geometries.

  6. A Mobile Anchor Assisted Localization Algorithm Based on Regular Hexagon in Wireless Sensor Networks

    PubMed Central

    Rodrigues, Joel J. P. C.

    2014-01-01

    Localization is one of the key technologies in wireless sensor networks (WSNs), since it provides fundamental support for many location-aware protocols and applications. Constraints of cost and power consumption make it infeasible to equip each sensor node in the network with a global positioning system (GPS) unit, especially for large-scale WSNs. A promising method to localize unknown nodes is to use several mobile anchors, equipped with GPS units, that move among the unknown nodes and periodically broadcast their current locations to help nearby unknown nodes localize themselves. This paper proposes a mobile anchor assisted localization algorithm based on a regular hexagon (MAALRH) in two-dimensional WSNs, which can cover the whole monitoring area with a boundary compensation method. Unknown nodes calculate their positions by trilateration, as sketched below. We compare the MAALRH with the HILBERT, CIRCLES, and S-CURVES algorithms in terms of localization ratio, localization accuracy, and path length. Simulations show that the MAALRH can achieve high localization ratio and localization accuracy when the communication range is not smaller than the trajectory resolution. PMID:25133212
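
    A minimal least-squares trilateration, of the kind the unknown nodes would run, follows; the linearization used is a standard textbook device and not necessarily the MAALRH implementation.

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Least-squares position from >= 3 anchor positions and measured ranges.
    Subtracting the first range equation from the others linearizes the
    system to A x = b, solved here with numpy's least squares."""
    anchors = np.asarray(anchors, float)
    d = np.asarray(ranges, float)
    p0, d0 = anchors[0], d[0]
    A = 2.0 * (anchors[1:] - p0)
    b = (d0**2 - d[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(p0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# e.g. trilaterate([(0, 0), (10, 0), (0, 10)], [5.0, 8.06, 6.71]) ~ (3, 4)
```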

  7. Two-Photon Excitation STED Microscopy with Time-Gated Detection

    PubMed Central

    Coto Hernández, Iván; Castello, Marco; Lanzanò, Luca; d’Amora, Marta; Bianchini, Paolo; Diaspro, Alberto; Vicidomini, Giuseppe

    2016-01-01

    We report on a novel two-photon excitation stimulated emission depletion (2PE-STED) microscope based on time-gated detection. Time-gated detection allows for effective silencing of the fluorophores using moderate stimulated emission beam intensity, which opens the possibility of implementing an efficient 2PE-STED microscope with a continuous-wave stimulated emission beam. The continuous-wave stimulated emission beam tempers the laser architecture's complexity and cost, but time-gated detection degrades the signal-to-noise ratio (SNR) and signal-to-background ratio (SBR) of the image. We recover the SNR and SBR through a multi-image deconvolution algorithm. Indeed, the algorithm simultaneously reassigns early photons (normally discarded by the time-gated detection) to their original positions and removes the background induced by the stimulated emission beam. We exemplify the benefits of this implementation by imaging sub-cellular structures. Finally, we discuss the extension of this algorithm to future all-pulsed 2PE-STED implementations based on time-gated detection and a nanosecond laser source. PMID:26757892

  8. Slower speed and stronger coupling: adaptive mechanisms of chaos synchronization.

    PubMed

    Wang, Xiao Fan

    2002-06-01

    We show that two initially weakly coupled chaotic systems can achieve synchronization by adaptively reducing their speed and/or enhancing the coupling strength. Explicit adaptive algorithms for speed reduction and coupling enhancement are provided. We apply these algorithms to the synchronization of two coupled Lorenz systems. It is found that after a long-time adaptive process, the two coupled chaotic systems can achieve synchronization with almost the minimum required coupling-speed ratio.

  9. Dynamic Network Selection for Multicast Services in Wireless Cooperative Networks

    NASA Astrophysics Data System (ADS)

    Chen, Liang; Jin, Le; He, Feng; Cheng, Hanwen; Wu, Lenan

    In next-generation mobile multimedia communications, different wireless access networks are expected to cooperate. However, choosing an optimal transmission path in this scenario is a challenging task. This paper focuses on the problem of selecting the optimal access network for multicast services in cooperative mobile and broadcasting networks. An algorithm is proposed which considers multiple decision factors and multiple optimization objectives. An analytic hierarchy process (AHP) method is applied to schedule the service queue, and an artificial neural network (ANN) is used to improve the flexibility of the algorithm. Simulation results show that by applying the AHP method, a group of weight ratios can be obtained that improves the performance of multiple objectives, and that the ANN method is effective for adaptively adjusting the weight ratios when users' new waiting thresholds are generated.

  10. Dynamic magnetic resonance imaging method based on golden-ratio cartesian sampling and compressed sensing.

    PubMed

    Li, Shuo; Zhu, Yanchun; Xie, Yaoqin; Gao, Song

    2018-01-01

    Dynamic magnetic resonance imaging (DMRI) is used to noninvasively trace the movements of organs and the process of drug delivery. The results can provide quantitative or semiquantitative pathology-related parameters, giving DMRI great potential for clinical applications. However, conventional DMRI techniques suffer from low temporal resolution and long scan times owing to the limitations of the k-space sampling scheme and the image reconstruction algorithm. In this paper, we propose a novel DMRI sampling scheme based on a golden-ratio Cartesian trajectory in combination with a compressed sensing reconstruction algorithm. The results of two simulation experiments, designed according to the two major DMRI techniques, showed that the proposed method can improve the temporal resolution, shorten the scan time, and provide high-quality reconstructed images.
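
    One common way to generate a golden-ratio Cartesian ordering of phase-encode lines is sketched below; the exact trajectory used in the paper may differ in detail.

```python
import numpy as np

GOLDEN_FRAC = (np.sqrt(5) - 1) / 2  # ~0.618, the golden-ratio conjugate

def golden_ratio_pe_order(n_acq, n_pe):
    """Cartesian phase-encode ordering in which each new line jumps by the
    golden fraction of k-space, so any contiguous subset of acquisitions
    samples k-space near-uniformly -- convenient for retrospective binning
    of dynamic data before a compressed-sensing reconstruction."""
    frac = np.mod(np.arange(n_acq) * GOLDEN_FRAC, 1.0)
    return np.round(frac * (n_pe - 1)).astype(int)
```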

  11. Toward an image compression algorithm for the high-resolution electronic still camera

    NASA Technical Reports Server (NTRS)

    Nerheim, Rosalee

    1989-01-01

    Taking pictures with a camera that uses a digital recording medium instead of film has the advantage of recording and transmitting images without the use of a darkroom or a courier. However, high-resolution images contain an enormous amount of information and strain data-storage systems. Image compression will allow multiple images to be stored in the High-Resolution Electronic Still Camera. The camera is under development at Johnson Space Center. Fidelity of the reproduced image and compression speed are of paramount importance. Lossless compression algorithms are fast and faithfully reproduce the image, but their compression ratios will be unacceptably low due to noise in the front end of the camera. Future efforts will include exploring methods that will reduce the noise in the image and increase the compression ratio.

  12. A novel decoding algorithm based on the hierarchical reliable strategy for SCG-LDPC codes in optical communications

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Tong, Qing-zhen; Huang, Sheng; Wang, Yong

    2013-11-01

    An effective hierarchical reliable belief propagation (HRBP) decoding algorithm is proposed according to the structural characteristics of systematically constructed Gallager low-density parity-check (SCG-LDPC) codes. The novel decoding algorithm combines layered iteration with a reliability judgment, which greatly reduces the number of variable nodes involved in the subsequent iteration process and accelerates the convergence rate. Simulation results for the SCG-LDPC(3969,3720) code show that the novel HRBP decoding algorithm can greatly reduce the amount of computation while maintaining performance compared with the traditional belief propagation (BP) algorithm. The bit error rate (BER) of the HRBP algorithm is comparable to that of the BP algorithm at a threshold value of 15, while the number of variable nodes for the HRBP algorithm can be reduced by about 70% at high signal-to-noise ratio (SNR). When the threshold value is increased further, the HRBP algorithm gradually degenerates into the layered-BP algorithm, but at a BER of 10^-7 and a maximum of 30 iterations, the net coding gain (NCG) of the HRBP algorithm is 0.2 dB more than that of the BP algorithm, and the average number of iterations can be reduced by about 40% at high SNR. Therefore, the novel HRBP decoding algorithm is more suitable for optical communication systems.

  13. Mixed raster content (MRC) model for compound image compression

    NASA Astrophysics Data System (ADS)

    de Queiroz, Ricardo L.; Buckley, Robert R.; Xu, Ming

    1998-12-01

    This paper describes the Mixed Raster Content (MRC) method for compressing compound images containing both binary text and continuous-tone images. A single compression algorithm that simultaneously meets the requirements for both text and image compression has been elusive. MRC takes a different approach. Rather than using a single algorithm, MRC uses a multi-layered imaging model for representing the results of multiple compression algorithms, including ones developed specifically for text and for images. As a result, MRC can combine the best of existing or new compression algorithms and offer different quality/compression-ratio tradeoffs. The algorithms used by MRC set the lower bound on its compression performance. Compared to existing algorithms, MRC has some image-processing overhead to manage multiple algorithms and the imaging model. This paper develops the rationale for the MRC approach by describing the multi-layered imaging model in light of a rate-distortion trade-off. Results are presented comparing images compressed using MRC, JPEG, and state-of-the-art wavelet algorithms such as SPIHT. MRC has been approved or proposed as an architectural model for several standards, including ITU Color Fax, IETF Internet Fax, and JPEG 2000.

  14. An Efficient Augmented Lagrangian Method for Statistical X-Ray CT Image Reconstruction.

    PubMed

    Li, Jiaojiao; Niu, Shanzhou; Huang, Jing; Bian, Zhaoying; Feng, Qianjin; Yu, Gaohang; Liang, Zhengrong; Chen, Wufan; Ma, Jianhua

    2015-01-01

    Statistical iterative reconstruction (SIR) for X-ray computed tomography (CT) under the penalized weighted least-squares criterion can yield significant gains over conventional analytical reconstruction from noisy measurements. However, due to the nonlinear expression of the objective function, most existing algorithms related to SIR unavoidably suffer from a heavy computational load and a slow convergence rate, especially when an edge-preserving or sparsity-based penalty or regularization is incorporated. In this work, to address these issues, we propose an adaptive nonmonotone alternating direction algorithm in the framework of the augmented Lagrangian multiplier method, termed "ALM-ANAD". The algorithm effectively combines an alternating direction technique with an adaptive nonmonotone line search to minimize the augmented Lagrangian function at each iteration. To evaluate the ALM-ANAD algorithm, both qualitative and quantitative studies were conducted using digital and physical phantoms. Experimental results show that the ALM-ANAD algorithm can achieve noticeable gains over the classical nonlinear conjugate gradient algorithm and the state-of-the-art split Bregman algorithm in terms of noise reduction, contrast-to-noise ratio, convergence rate, and universal quality index metrics.

  15. SU-E-T-577: Commissioning of a Deterministic Algorithm for External Photon Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, T; Finlay, J; Mesina, C

    Purpose: We report commissioning results for a deterministic algorithm for external photon beam treatment planning. A deterministic algorithm solves the radiation transport equations directly using a finite difference method, thus improving the accuracy of dose calculation, particularly under heterogeneous conditions, with results similar to those of Monte Carlo (MC) simulation. Methods: Commissioning data for photon energies 6-15 MV include the percentage depth dose (PDD) measured at SSD = 90 cm and the output ratio in water (Spc), both normalized to 10 cm depth, for field sizes between 2 and 40 cm and depths between 0 and 40 cm. The off-axis ratio (OAR) for the same set of field sizes was used at 5 depths (dmax, 5, 10, 20, 30 cm). The final model was compared with the commissioning data as well as additional benchmark data. The benchmark data include dose per MU determined for 17 points with SSD between 80 and 110 cm, depth between 5 and 20 cm, and lateral offset of up to 16.5 cm. Relative comparisons were made in a heterogeneous phantom made of cork and solid water. Results: Compared to the commissioning beam data, the agreement is generally better than 2%, with larger errors (up to 13%) observed in the buildup regions of the PDD and the penumbra regions of the OAR profiles. The overall mean standard deviation is 0.04% when all data are taken into account. Compared to the benchmark data, the agreement is generally better than 2%. Relative comparison in the heterogeneous phantom is in general better than 4%. Conclusion: A commercial deterministic algorithm was commissioned for megavoltage photon beams. In a homogeneous medium, the agreement between the algorithm and measurement at the benchmark points is generally better than 2%. The dose accuracy of a deterministic algorithm is better than that of a convolution algorithm in a heterogeneous medium.

  16. An optimized compression algorithm for real-time ECG data transmission in wireless network of medical information systems.

    PubMed

    Cho, Gyoun-Yon; Lee, Seo-Joon; Lee, Tae-Ro

    2015-01-01

    Recent medical information systems are striving towards real-time monitoring models to care for patients anytime and anywhere through ECG signals. However, there are several limitations in wireless communications, such as data distortion and limited bandwidth. In order to overcome such limitations, this research focuses on compression. Few studies have developed a compression algorithm specialized for ECG data transmission in real-time monitoring wireless networks, and recently proposed algorithms are not well suited to ECG signals. This paper therefore presents a further developed algorithm, EDLZW, for efficient ECG data transmission. Results showed that the EDLZW compression ratio was 8.66, a performance 4 times better than that of other compression methods widely used today.
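
    EDLZW itself is not spelled out in the abstract, but the dictionary-coding family it extends is easy to illustrate. Below is a minimal byte-oriented LZW encoder from which a crude compression ratio can be computed; the fixed 16-bit code size is an assumption for the ratio estimate only.

```python
def lzw_encode(data: bytes):
    """Plain byte-oriented LZW: grow a phrase dictionary and emit one code
    per longest already-known phrase."""
    table = {bytes([i]): i for i in range(256)}
    w, codes = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc
        else:
            codes.append(table[w])
            table[wc] = len(table)
            w = bytes([byte])
    if w:
        codes.append(table[w])
    return codes

# Crude compression ratio, assuming fixed 16-bit codes:
#   ratio = len(raw_bytes) / (2 * len(lzw_encode(raw_bytes)))
```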

  17. Multi-limit unsymmetrical MLIBD image restoration algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Yang; Cheng, Yiping; Chen, Zai-wang; Bo, Chen

    2012-11-01

    A novel multi-limit unsymmetrical iterative blind deconvolution (MLIBD) algorithm is presented to enhance the performance of adaptive optics image restoration. The algorithm enhances the reliability of iterative blind deconvolution by introducing a bandwidth limit into the frequency domain of the point spread function (PSF), and adopts dynamic estimation of the PSF support region to improve the convergence speed. The unsymmetrical factor is computed automatically to improve its adaptivity. Image deconvolution experiments comparing Richardson-Lucy IBD and MLIBD were carried out, and the results indicate that the iteration number is reduced by 22.4% and the peak signal-to-noise ratio is improved by 10.18 dB with the MLIBD method. The MLIBD algorithm performs outstandingly in restoring the FK5-857 adaptive optics images and double-star adaptive optics images.

  18. Phase retrieval using regularization method in intensity correlation imaging

    NASA Astrophysics Data System (ADS)

    Li, Xiyu; Gao, Xin; Tang, Jia; Lu, Changming; Wang, Jianli; Wang, Bin

    2014-11-01

    The intensity correlation imaging (ICI) method can obtain high-resolution images with ground-based, low-precision mirrors; in the imaging process, a phase retrieval algorithm must be used to reconstruct the object's image. However, the algorithms commonly used (such as the hybrid input-output algorithm) are sensitive to noise and prone to stagnation, and the signal-to-noise ratio of intensity interferometry is low, especially when imaging astronomical objects. In this paper, we build a mathematical model of phase retrieval and simplify it into a constrained optimization problem over a multi-dimensional function. A new error function is designed from the noise distribution and prior information using a regularization method. The simulation results show that the regularization method can improve the performance of the phase retrieval algorithm and yield better images, especially under low-SNR conditions.

  19. Noise-immune complex correlation for optical coherence angiography based on standard and Jones matrix optical coherence tomography

    PubMed Central

    Makita, Shuichi; Kurokawa, Kazuhiro; Hong, Young-Joo; Miura, Masahiro; Yasuno, Yoshiaki

    2016-01-01

    This paper describes a complex correlation mapping algorithm for optical coherence angiography (cmOCA). The proposed algorithm avoids the signal-to-noise ratio dependence and exhibits low noise in vasculature imaging. The complex correlation coefficient of the signals, rather than that of the measured data, is estimated, and two-step averaging is introduced. Algorithms for motion-artifact removal, based on detection of non-perfused tissue using correlation, are developed. The algorithms are implemented with Jones-matrix OCT. Simultaneous imaging of pigmented tissue and vasculature is also achieved using degree-of-polarization-uniformity imaging with cmOCA. An application of cmOCA to in vivo posterior human eyes is presented to demonstrate that high-contrast images of patients' eyes can be obtained. PMID:27446673

  20. A code-aided carrier synchronization algorithm based on improved nonbinary low-density parity-check codes

    NASA Astrophysics Data System (ADS)

    Bai, Cheng-lin; Cheng, Zhi-hui

    2016-09-01

    In order to further improve the carrier synchronization estimation range and accuracy at low signal-to-noise ratio (SNR), this paper proposes a code-aided carrier synchronization algorithm based on improved nonbinary low-density parity-check (NB-LDPC) codes to study the polarization-division-multiplexing coherent optical orthogonal frequency division multiplexing (PDM-CO-OFDM) system performance in the cases of quadrature phase shift keying (QPSK) and 16 quadrature amplitude modulation (16-QAM) modes. The simulation results indicate that this algorithm can enlarge the frequency and phase offset estimation ranges and greatly enhance the accuracy of the system, and that the bit error rate (BER) performance of the system is improved effectively compared with that of a system employing a traditional NB-LDPC code-aided carrier synchronization algorithm.

  1. Multistage classification of multispectral Earth observational data: The design approach

    NASA Technical Reports Server (NTRS)

    Bauer, M. E. (Principal Investigator); Muasher, M. J.; Landgrebe, D. A.

    1981-01-01

    An algorithm is proposed which predicts the optimal features at every node in a binary tree procedure. The algorithm estimates the probability of error by approximating the area under the likelihood ratio function for two classes and taking into account the number of training samples used in estimating each of these two classes. Some results on feature selection techniques, particularly in the presence of a very limited set of training samples, are presented. Results comparing probabilities of error predicted by the proposed algorithm as a function of dimensionality as compared to experimental observations are shown for aircraft and LANDSAT data. Results are obtained for both real and simulated data. Finally, two binary tree examples which use the algorithm are presented to illustrate the usefulness of the procedure.

  2. Lossless compression of image data products on the FIFE CD-ROM series

    NASA Technical Reports Server (NTRS)

    Newcomer, Jeffrey A.; Strebel, Donald E.

    1993-01-01

    How do you store enough of the key data sets, from a total of 120 gigabytes of data collected for a scientific experiment, on a collection of CD-ROMs small enough to distribute to a broad scientific community? In such an application, where information loss is unacceptable, lossless compression algorithms are the only choice. Although lossy compression algorithms can provide an order-of-magnitude improvement in compression ratios over lossless algorithms, the information that is lost is often part of the key scientific precision of the data. Therefore, lossless compression algorithms are, and will continue to be, extremely important for minimizing archival storage requirements and for distributing large Earth and space science (ESS) data sets while preserving the essential scientific precision of the data.

  3. Matrix multiplication on the Intel Touchstone Delta

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huss-Lederman, S.; Jacobson, E.M.; Tsao, A.

    1993-12-31

    Matrix multiplication is a key primitive in block matrix algorithms such as those found in LAPACK. We present results from our study of matrix multiplication algorithms on the Intel Touchstone Delta, a distributed memory message-passing architecture with a two-dimensional mesh topology. We obtain an implementation that uses communication primitives highly suited to the Delta and exploits the single-node assembly-coded matrix multiplication. Our algorithm is completely general, able to deal with arbitrary mesh aspect ratios and matrix dimensions, and has achieved parallel efficiency of 86% with overall peak performance in excess of 8 Gflops on 256 nodes for an 8800 × 8800 matrix. We describe our algorithm design and implementation, and present performance results that demonstrate scalability and robust behavior over varying mesh topologies.

  4. Parallel grid generation algorithm for distributed memory computers

    NASA Technical Reports Server (NTRS)

    Moitra, Stuti; Moitra, Anutosh

    1994-01-01

    A parallel grid-generation algorithm and its implementation on the Intel iPSC/860 computer are described. The grid-generation scheme is based on an algebraic formulation of homotopic relations. Methods for utilizing the inherent parallelism of the grid-generation scheme are described, and implementation of multiple levels of parallelism on multiple instruction multiple data machines is indicated. The algorithm is capable of providing near orthogonality and spacing control at solid boundaries while requiring minimal interprocessor communications. Results obtained on the Intel hypercube for a blended wing-body configuration are used to demonstrate the effectiveness of the algorithm. Fortran implementations based on the native programming model of the iPSC/860 computer and the Express system of software tools are reported. Computational gains in execution time speed-up ratios are given.

  5. Home Camera-Based Fall Detection System for the Elderly.

    PubMed

    de Miguel, Koldo; Brunete, Alberto; Hernando, Miguel; Gambao, Ernesto

    2017-12-09

    Falls are the leading cause of injury and death in elderly individuals. Unfortunately, fall detectors are typically based on wearable devices, and the elderly often forget to wear them. In addition, fall detectors based on artificial vision are not yet available on the market. In this paper, we present a new low-cost fall detector for smart homes based on artificial vision algorithms. Our detector combines several algorithms (background subtraction, Kalman filtering and optical flow) as input to a machine learning algorithm with high detection accuracy. Tests conducted on over 50 different fall videos have shown a detection ratio of greater than 96%.

  6. A Technique for Measuring Rotorcraft Dynamic Stability in the 40- by 80-Foot Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Gupta, N. K.; Bohn, J. G.

    1977-01-01

    An on-line technique is described for the measurement of tilt rotor aircraft dynamic stability in the Ames 40- by 80-Foot Wind Tunnel. The technique is based on advanced system identification methodology and uses the instrumental variables approach. It is particularly applicable to real-time estimation problems with limited amounts of noise-contaminated data. Several simulations are used to evaluate the algorithm. Estimated natural frequencies and damping ratios are compared with simulation values. The algorithm is also applied to wind tunnel data in an off-line mode. The results are used to develop preliminary guidelines for effective use of the algorithm.

  7. Home Camera-Based Fall Detection System for the Elderly

    PubMed Central

    de Miguel, Koldo

    2017-01-01

    Falls are the leading cause of injury and death in elderly individuals. Unfortunately, fall detectors are typically based on wearable devices, and the elderly often forget to wear them. In addition, fall detectors based on artificial vision are not yet available on the market. In this paper, we present a new low-cost fall detector for smart homes based on artificial vision algorithms. Our detector combines several algorithms (background subtraction, Kalman filtering and optical flow) as input to a machine learning algorithm with high detection accuracy. Tests conducted on over 50 different fall videos have shown a detection ratio of greater than 96%. PMID:29232846

  8. Uncertainty quantification and experimental design based on unsupervised machine learning identification of contaminant sources and groundwater types using hydrogeochemical data

    NASA Astrophysics Data System (ADS)

    Vesselinov, V. V.

    2017-12-01

    Identification of the original groundwater types present in geochemical mixtures observed in an aquifer is a challenging but very important task. Frequently, some of the groundwater types are related to different infiltration and/or contamination sources associated with various geochemical signatures and origins. The characterization of groundwater mixing processes typically requires solving complex inverse models representing groundwater flow and geochemical transport in the aquifer, where the inverse analysis accounts for available site data. Usually, the model is calibrated against the available data characterizing the spatial and temporal distribution of the observed geochemical species. Numerous geochemical constituents and processes may need to be simulated in these models which further complicates the analyses. As a result, these types of model analyses are typically extremely challenging. Here, we demonstrate a new contaminant source identification approach that performs decomposition of the observation mixtures based on Nonnegative Matrix Factorization (NMF) method for Blind Source Separation (BSS), coupled with a custom semi-supervised clustering algorithm. Our methodology, called NMFk, is capable of identifying (a) the number of groundwater types and (b) the original geochemical concentration of the contaminant sources from measured geochemical mixtures with unknown mixing ratios without any additional site information. We also demonstrate how NMFk can be extended to perform uncertainty quantification and experimental design related to real-world site characterization. The NMFk algorithm works with geochemical data represented in the form of concentrations, ratios (of two constituents; for example, isotope ratios), and delta notations (standard normalized stable isotope ratios). The NMFk algorithm has been extensively tested on synthetic datasets; NMFk analyses have been actively performed on real-world data collected at the Los Alamos National Laboratory (LANL) groundwater sites related to Chromium and RDX contamination.
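
    The core factorization step can be sketched with an off-the-shelf NMF; the snippet below is a minimal stand-in and omits the custom clustering over restarts that lets NMFk choose the number of sources automatically.

```python
import numpy as np
from sklearn.decomposition import NMF

def unmix_sources(concentrations, n_sources):
    """Blind source separation of nonnegative geochemical data, X ~ W @ H:
    rows of H play the role of source signatures and W holds per-sample
    mixing weights."""
    X = np.asarray(concentrations, float)
    model = NMF(n_components=n_sources, init="nndsvda", max_iter=2000)
    W = model.fit_transform(X)   # mixing ratios (observations x sources)
    H = model.components_        # source signatures (sources x species)
    return W, H
```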

  9. Observer detection of image degradation caused by irreversible data compression processes

    NASA Astrophysics Data System (ADS)

    Chen, Ji; Flynn, Michael J.; Gross, Barry; Spizarny, David

    1991-05-01

    Irreversible data compression methods have been proposed to reduce the data storage and communication requirements of digital imaging systems. In general, the error produced by compression increases as an algorithm's compression ratio is increased. We have studied the relationship between compression ratios and the detection of induced error using radiologic observers. The nature of the errors was characterized by calculating the power spectrum of the difference image. In contrast with studies designed to test whether detected errors alter diagnostic decisions, this study was designed to test whether observers could detect the induced error, using a paired-film observer study. The study was conducted with chest radiographs selected and ranked for subtle evidence of interstitial disease, pulmonary nodules, or pneumothoraces. Images were digitized at 86 microns (4K × 5K) and 2K × 2K regions were extracted. A full-frame discrete cosine transform method was used to compress images at ratios varying between 6:1 and 60:1. The decompressed images were reprinted next to the original images in a randomized order with a laser film printer. The use of a film digitizer and a film printer that can reproduce all of the contrast and detail in the original radiograph makes the results of this study insensitive to instrument performance and primarily dependent on radiographic image quality. The results of this study define conditions under which errors associated with irreversible compression cannot be detected by radiologic observers. The results indicate that an observer can detect the errors introduced by this compression algorithm at compression ratios of 10:1 (1.2 bits/pixel) or higher.

  10. Determination of target detection limits in hyperspectral data using band selection and dimensionality reduction

    NASA Astrophysics Data System (ADS)

    Gross, W.; Boehler, J.; Twizer, K.; Kedem, B.; Lenz, A.; Kneubuehler, M.; Wellig, P.; Oechslin, R.; Schilling, H.; Rotman, S.; Middelmann, W.

    2016-10-01

    Hyperspectral remote sensing data can be used for civil and military applications to robustly detect and classify target objects. The high spectral resolution of hyperspectral data can compensate for the comparatively low spatial resolution, which allows for detection and classification of small targets, even below image resolution. Hyperspectral data sets are prone to considerable spectral redundancy, affecting and limiting data processing and algorithm performance. As a consequence, data reduction strategies become increasingly important, especially in view of near-real-time data analysis. The goal of this paper is to analyze different strategies for hyperspectral band selection algorithms and their effect on subpixel classification for different target and background materials. Airborne hyperspectral data is used in combination with linear target simulation procedures to create a representative range of target-to-background ratios for evaluation of detection limits. Data from two different airborne hyperspectral sensors, AISA Eagle and Hawk, are used to evaluate the transferability of band selection between sensors. The same target objects were recorded to compare the calculated detection limits. To determine subpixel classification results, pure pixels from the target materials are extracted and used to simulate mixed pixels with selected background materials. Target signatures are linearly combined with different background materials in varying ratios. The commonly used Adaptive Coherence Estimator (ACE) classification algorithm is used to compare the detection limit for the original data with several band selection and data reduction strategies. The classification results are evaluated by assuming a fixed false alarm ratio and calculating the mean target-to-background ratio of correctly detected pixels. The results allow drawing conclusions about specific band combinations for certain target and background combinations. Additionally, generally useful wavelength ranges are determined, and the optimal number of principal components is analyzed.
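
    For reference, the ACE detection statistic has a compact closed form; a minimal version is sketched below, assuming the background mean and covariance have already been estimated from the scene.

```python
import numpy as np

def ace_score(x, s, bg_mean, bg_cov):
    """Adaptive Coherence Estimator: squared, whitened cosine between a
    pixel spectrum x and a target signature s, both demeaned against the
    background statistics. Scores near 1 indicate a likely target."""
    ci = np.linalg.inv(bg_cov)
    xc, sc = x - bg_mean, s - bg_mean
    return (sc @ ci @ xc) ** 2 / ((sc @ ci @ sc) * (xc @ ci @ xc))
```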

  11. Is introducing rapid culture into the diagnostic algorithm of smear-negative tuberculosis cost-effective?

    PubMed

    Yakhelef, N; Audibert, M; Varaine, F; Chakaya, J; Sitienei, J; Huerga, H; Bonnet, M

    2014-05-01

    In 2007, the World Health Organization recommended introducing rapid Mycobacterium tuberculosis culture into the diagnostic algorithm for smear-negative pulmonary tuberculosis (TB). The objective was to assess the cost-effectiveness of introducing a rapid non-commercial culture method (thin-layer agar), together with Löwenstein-Jensen culture, to diagnose smear-negative TB at a district hospital in Kenya. Outcomes (number of true TB cases treated) were obtained from a prospective study evaluating the effectiveness of a clinical and radiological algorithm (conventional) against the alternative algorithm (conventional plus M. tuberculosis culture) in 380 smear-negative TB suspects. The costs of implementing each algorithm were calculated using a 'micro-costing' or 'ingredient-based' method. We then compared the cost and effectiveness of the conventional vs. culture-based algorithms and estimated the incremental cost-effectiveness ratio. The costs of the conventional and culture-based algorithms per smear-negative TB suspect were €39.5 and €144, respectively; the costs per confirmed and treated TB case were €452 and €913, respectively. The culture-based algorithm led to the diagnosis and treatment of 27 more cases at an additional cost of €1477 per case. Despite the increase in the number of patients started on treatment thanks to culture, the relatively high cost of a culture-based algorithm will make it difficult for resource-limited countries to afford.

  12. Virtual Network Embedding via Monte Carlo Tree Search.

    PubMed

    Haeri, Soroush; Trajkovic, Ljiljana

    2018-02-01

    Network virtualization helps overcome shortcomings of the current Internet architecture. The virtualized network architecture enables the coexistence of multiple virtual networks (VNs) on an existing physical infrastructure. The VN embedding (VNE) problem, which deals with the embedding of VN components onto a physical network, is known to be NP-hard. In this paper, we propose two VNE algorithms: MaVEn-M and MaVEn-S. MaVEn-M employs the multicommodity flow algorithm for virtual link mapping, while MaVEn-S uses the shortest-path algorithm. They formalize the virtual node mapping problem using the Markov decision process (MDP) framework and devise action policies (node mappings) for the proposed MDP using the Monte Carlo tree search algorithm. Service providers may adjust the execution time of the MaVEn algorithms based on the traffic load of VN requests. The objective of the algorithms is to maximize the profit of infrastructure providers. We develop a discrete event VNE simulator to implement and evaluate the performance of MaVEn-M, MaVEn-S, and several recently proposed VNE algorithms. We introduce profitability as a new performance metric that captures both the acceptance and revenue-to-cost ratios. Simulation results show that the proposed algorithms find more profitable solutions than the existing algorithms. Given additional computation time, they further improve the embedding solutions.

  13. The influence of image reconstruction algorithms on linear thorax EIT image analysis of ventilation.

    PubMed

    Zhao, Zhanqi; Frerichs, Inéz; Pulletz, Sven; Müller-Lisse, Ullrich; Möller, Knut

    2014-06-01

    Analysis methods for electrical impedance tomography (EIT) images based on different reconstruction algorithms were examined. EIT measurements were performed on eight mechanically ventilated patients with acute respiratory distress syndrome. A maneuver with a step increase of airway pressure was performed. EIT raw data were reconstructed offline with (1) filtered back-projection (BP); (2) the Dräger algorithm based on linearized Newton-Raphson (DR); (3) the GREIT (Graz consensus reconstruction algorithm for EIT) reconstruction algorithm with a circular forward model (GR(C)); and (4) GREIT with individual thorax geometry (GR(T)). Individual thorax contours were automatically determined from routine computed tomography images. Five indices were calculated on the resulting EIT images: (a) the ratio between tidal and deep inflation impedance changes; (b) tidal impedance changes in the right and left lungs; (c) center of gravity; (d) the global inhomogeneity index; and (e) ventilation delay in mid-dorsal regions. No significant differences were found in any of the examined indices among the four reconstruction algorithms (p > 0.2, Kruskal-Wallis test). The examined algorithms used for EIT image reconstruction do not influence the selected indices derived from the EIT image analysis. Indices validated for images from one reconstruction algorithm are therefore also valid for the other reconstruction algorithms.

  14. A random sampling approach for robust estimation of tissue-to-plasma ratio from extremely sparse data.

    PubMed

    Chu, Hui-May; Ette, Ene I

    2005-09-02

    This study was performed to develop a new nonparametric approach for the estimation of a robust tissue-to-plasma ratio from extremely sparsely sampled paired data (i.e., one sample each from plasma and tissue per subject). The tissue-to-plasma ratio was estimated from paired/unpaired experimental data using the independent time points approach, area under the curve (AUC) values calculated with the naïve data averaging approach, and AUC values calculated using sampling-based approaches (e.g., the pseudoprofile-based bootstrap [PpbB] approach and the random sampling approach [our proposed approach]). The random sampling approach involves the use of a 2-phase algorithm. The convergence of the sampling/resampling approaches was investigated, as well as the robustness of the estimates produced by the different approaches. To evaluate the latter, new data sets were generated by introducing outlier(s) into the real data set: one to two concentration values were inflated by 10% to 40% from their original values to produce the outliers. Tissue-to-plasma ratios computed using the independent time points approach varied between 0 and 50 across time points. The ratio obtained from AUC values acquired using the naïve data averaging approach was not associated with any measure of uncertainty or variability, and calculating the ratio without regard to pairing yielded poorer estimates. The random sampling and pseudoprofile-based bootstrap approaches yielded tissue-to-plasma ratios with uncertainty and variability. However, the random sampling approach, because of the 2-phase nature of its algorithm, yielded more robust estimates and required fewer replications. Therefore, a 2-phase random sampling approach is proposed for the robust estimation of tissue-to-plasma ratio from extremely sparsely sampled data.

  15. A general heuristic for genome rearrangement problems.

    PubMed

    Dias, Ulisses; Galvão, Gustavo Rodrigues; Lintzmayer, Carla Négri; Dias, Zanoni

    2014-06-01

    In this paper, we present a general heuristic for several problems in the genome rearrangement field. Our heuristic does not solve any problem directly; rather, it is used to improve the solutions provided by any non-optimal algorithm that solves them. We therefore implemented several algorithms described in the literature and several algorithms developed by ourselves: 23 algorithms in total for 9 well-known problems in the genome rearrangement field. Thirteen of these algorithms address problems that use the notions of prefix and suffix operations. In addition, we worked on 5 algorithms for the classic problem of sorting by transpositions, and we conclude the experiments by presenting results for 3 approximation algorithms for the sorting by reversals and transpositions problem and 2 approximation algorithms for the sorting by reversals problem. An algorithm with a better approximation ratio exists for the last problem, but it is purely theoretical, with no practical implementation. The algorithms we implemented, combined with our heuristic, lead to the best practical results in each case. In particular, we were able to improve results on the sorting by transpositions problem, which is a special case because many efforts have been made to produce algorithms with good practical results, and some of these algorithms match the optimum solutions in many cases. Our source codes and benchmarks are freely available upon request from the authors, so that it will be easier to compare new approaches against our results.

  16. An Improved DINEOF Algorithm for Filling Missing Values in Spatio-Temporal Sea Surface Temperature Data.

    PubMed

    Ping, Bo; Su, Fenzhen; Meng, Yunshan

    2016-01-01

    In this study, an improved Data INterpolating Empirical Orthogonal Functions (DINEOF) algorithm for determining missing values in a spatio-temporal dataset is presented. Compared with the ordinary DINEOF algorithm, the iterative reconstruction procedure until convergence for every fixed EOF to determine the optimal EOF mode is not necessary, and the convergence criterion is reached only once in the improved algorithm. Moreover, in the ordinary DINEOF algorithm, after the optimal EOF mode is determined, the initial matrix with missing data is iteratively reconstructed based on that optimal EOF mode until the reconstruction converges. However, the optimal EOF mode may not be the best EOF for some of the reconstructed matrices generated in the intermediate steps. Hence, instead of using a single EOF to fill in the missing data, in the improved algorithm the optimal EOFs for reconstruction are variable (because the optimal EOFs are variable, the improved algorithm is called the VE-DINEOF algorithm in this study). To validate the accuracy of the VE-DINEOF algorithm, a sea surface temperature (SST) data set was reconstructed using the DINEOF, I-DINEOF (proposed in 2015) and VE-DINEOF algorithms. Four parameters (Pearson correlation coefficient, signal-to-noise ratio, root-mean-square error, and mean absolute difference) are used as measures of reconstruction accuracy. Compared with the DINEOF and I-DINEOF algorithms, the VE-DINEOF algorithm can significantly enhance the accuracy of reconstruction and shorten the computational time.
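
    The ordinary DINEOF skeleton that all three variants build on can be sketched in a few lines; the snippet below uses a single global mean and a fixed number of modes for brevity, whereas VE-DINEOF lets the retained modes vary between iterations.

```python
import numpy as np

def dineof_fill(X, n_modes, n_iter=50):
    """EOF gap filling in the ordinary DINEOF style: initialize gaps with
    the overall mean, then alternate a truncated-SVD reconstruction with
    re-imposing the observed values."""
    X = np.asarray(X, float)
    gaps = np.isnan(X)
    mu = np.nanmean(X)
    A = np.where(gaps, mu, X)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(A - mu, full_matrices=False)
        recon = mu + (U[:, :n_modes] * s[:n_modes]) @ Vt[:n_modes]
        A = np.where(gaps, recon, X)  # update the gaps only
    return A
```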

  17. Performance evaluation of image denoising developed using convolutional denoising autoencoders in chest radiography

    NASA Astrophysics Data System (ADS)

    Lee, Donghoon; Choi, Sunghoon; Kim, Hee-Joung

    2018-03-01

    When processing medical images, image denoising is an important pre-processing step. Various image denoising algorithms have been developed in the past few decades. Recently, image denoising using deep learning has shown excellent performance compared to conventional algorithms. In this study, we introduce an image denoising technique based on a convolutional denoising autoencoder (CDAE) and evaluate its clinical applicability by comparison with existing image denoising algorithms. We train the proposed CDAE model using 3000 chest radiograms as training data. To evaluate the performance of the developed CDAE model, we compare it with conventional denoising algorithms, including the median filter, total variation (TV) minimization, and non-local means (NLM) algorithms. Furthermore, to verify the clinical effectiveness of the developed CDAE denoising model, we investigate the performance of the developed denoising algorithm on chest radiograms acquired from real patients. The results demonstrate that the proposed denoising algorithm developed using the CDAE achieves a superior noise-reduction effect in chest radiograms compared to the TV minimization and NLM algorithms, which are state-of-the-art algorithms for image noise reduction. For example, the peak signal-to-noise ratio and structural similarity index measure of the CDAE were at least 10% higher compared to the conventional denoising algorithms. In conclusion, the image denoising algorithm developed using the CDAE effectively eliminated noise without loss of information on anatomical structures in chest radiograms. It is expected that the proposed denoising algorithm will be effective for medical images with microscopic anatomical structures, such as terminal bronchioles.

  18. Impact of water use efficiency on eddy covariance flux partitioning using correlation structure analysis

    NASA Astrophysics Data System (ADS)

    Anderson, Ray; Skaggs, Todd; Alfieri, Joseph; Kustas, William; Wang, Dong; Ayars, James

    2016-04-01

    Partitioned land surface fluxes (e.g., evaporation, transpiration, photosynthesis, and ecosystem respiration) are needed as input, calibration, and validation data for numerous hydrological and land surface models. However, one of the most commonly used techniques for measuring land surface fluxes, eddy covariance (EC), directly measures only the net, combined water and carbon fluxes (evapotranspiration and net ecosystem exchange/productivity). Analysis of the correlation structure of high-frequency EC time series (hereafter flux partitioning, or FP) has been proposed to directly partition net EC fluxes into their constituent components, using leaf-level water use efficiency (WUE) data to separate stomatal and non-stomatal transport processes. FP has significant logistical and spatial-representativeness advantages over other partitioning approaches (e.g., isotopic fluxes, sap flow, microlysimeters), but the performance of the FP algorithm relies on the accuracy of the intercellular CO2 (ci) concentration used to parameterize WUE for each flux averaging interval. In this study, we tested several parameterizations of ci as a function of atmospheric CO2 (ca), including (1) a constant ci/ca ratio for C3 and C4 photosynthetic pathway plants, (2) species-specific ci/ca-vapor pressure deficit (VPD) relationships (quadratic and linear), and (3) generalized C3 and C4 photosynthetic pathway ci/ca-VPD relationships. We tested these ci parameterizations at three agricultural EC towers operating since 2011 in C4 and C3 crops (sugarcane - Saccharum officinarum L. - and peach - Prunus persica), and validated against sap-flow sensors installed at the peach site. The peach results show that the FP algorithm driven by the species-specific parameterizations converged significantly more often (~20% more frequently) than with the constant ci/ca ratio or the generic C3-VPD relationship. The FP algorithm parameterized with a generic VPD relationship also yielded slightly higher transpiration (a 5 W m-2 difference) than the constant ci/ca ratio, whereas photosynthesis and respiration fluxes over sugarcane were ~15% lower with a VPD-ci/ca relationship than with a constant ci/ca ratio. The results illustrate the importance of combining leaf-level physiological observations with EC to improve the performance of the FP algorithm.

  19. Diagnostic Abilities of Variable and Enhanced Corneal Compensation Algorithms of GDx in Different Severities of Glaucoma.

    PubMed

    Yadav, Ravi K; Begum, Viquar U; Addepalli, Uday K; Senthil, Sirisha; Garudadri, Chandra S; Rao, Harsha L

    2016-02-01

    To compare the abilities of retinal nerve fiber layer (RNFL) parameters of variable corneal compensation (VCC) and enhanced corneal compensation (ECC) algorithms of scanning laser polarimetry (GDx) in detecting various severities of glaucoma. Two hundred and eighty-five eyes of 194 subjects from the Longitudinal Glaucoma Evaluation Study who underwent GDx VCC and ECC imaging were evaluated. Abilities of RNFL parameters of GDx VCC and ECC to diagnose glaucoma were compared using area under receiver operating characteristic curves (AUC), sensitivities at fixed specificities, and likelihood ratios. After excluding 5 eyes that failed to satisfy manufacturer-recommended quality parameters with ECC and 68 with VCC, 56 eyes of 41 normal subjects and 161 eyes of 121 glaucoma patients [36 eyes with preperimetric glaucoma, 52 eyes with early (MD>-6 dB), 34 with moderate (MD between -6 and -12 dB), and 39 with severe glaucoma (MD<-12 dB)] were included for the analysis. Inferior RNFL, average RNFL, and nerve fiber indicator parameters showed the best AUCs and sensitivities both with GDx VCC and ECC in diagnosing all severities of glaucoma. AUCs and sensitivities of all RNFL parameters were comparable between the VCC and ECC algorithms (P>0.20 for all comparisons). Likelihood ratios associated with the diagnostic categorization of RNFL parameters were comparable between the VCC and ECC algorithms. In scans satisfying the manufacturer-recommended quality parameters, which were significantly greater with ECC than VCC algorithm, diagnostic abilities of GDx ECC and VCC in glaucoma were similar.

  20. Simulation of Long Lived Tracers Using an Improved Empirically-Based Two-Dimensional Model Transport Algorithm

    NASA Technical Reports Server (NTRS)

    Fleming, Eric L.; Jackman, Charles H.; Stolarski, Richard S.; Considine, David B.

    1998-01-01

    We have developed a new empirically-based transport algorithm for use in our GSFC two-dimensional transport and chemistry assessment model. The new algorithm contains planetary wave statistics, and parameterizations to account for the effects of gravity waves and equatorial Kelvin waves. We present an overview of the new algorithm and show various model-data comparisons of long-lived tracers as part of the model validation. We also show how the new algorithm gives substantially better agreement with observations compared to our previous model transport. The new model captures much of the qualitative structure and seasonal variability observed in methane, water vapor, and total ozone, including: isolation of the tropics and the winter polar vortex, the well-mixed surf-zone region of the winter sub-tropics and mid-latitudes, and the propagation of seasonal signals in the tropical lower stratosphere. Model simulations of carbon-14 and strontium-90 compare fairly well with observations in reproducing the peak in mixing ratio at 20-25 km and the decrease of mixing ratio with altitude above 25 km. We also ran time-dependent simulations of SF6, from which the model mean age-of-air values were derived. The oldest air (5.5 to 6 years) occurred in the high-latitude upper stratosphere during fall and early winter of both hemispheres, and in the southern hemisphere lower stratosphere during late winter and early spring. The latitudinal gradients of the mean ages also compare well with ER-2 aircraft observations in the lower stratosphere.

  1. Effects of different eddy covariance correction schemes on energy balance closure and comparisons with the modified Bowen ratio system

    Treesearch

    Adam Wolf; Nick Saliendra; Kanat Akshalov; Douglas A. Johnson; Emilio Laca

    2008-01-01

    Eddy covariance (EC) and modified Bowen ratio (MBR) systems have been shown to yield subtly different estimates of sensible heat (H), latent heat (LE), and CO2 fluxes (Fc). Our study analyzed the discrepancies between these two systems by first considering the role of the data processing algorithm used to estimate fluxes using EC and later...

  2. Distributed Immune Systems for Wireless Network Information Assurance

    DTIC Science & Technology

    2010-04-26

    ratio test (SPRT), where the goal is to optimize a hypothesis testing problem given a trade-off between the probability of errors and the ... using cumulative sum (CUSUM) and Girshik-Rubin-Shiryaev (GRSh) statistics. In sequential versions of the problem the sequential probability ratio ... the more complicated problems, in particular those where no clear mean can be established. We developed algorithms based on the sequential probability

  3. Impact of Surface Roughness on AMSR-E Sea Ice Products

    NASA Technical Reports Server (NTRS)

    Stroeve, Julienne C.; Markus, Thorsten; Maslanik, James A.; Cavalieri, Donald J.; Gasiewski, Albin J.; Heinrichs, John F.; Holmgren, Jon; Perovich, Donald K.; Sturm, Matthew

    2006-01-01

    This paper examines the sensitivity of Advanced Microwave Scanning Radiometer (AMSR-E) brightness temperatures (Tbs) to surface roughness by using a radiative transfer model to simulate AMSR-E Tbs as a function of the incidence angle at which the surface is viewed. The simulated Tbs are then used to examine the influence that surface roughness has on two operational sea ice algorithms, namely: 1) the National Aeronautics and Space Administration Team (NT) algorithm and 2) the enhanced NT algorithm, as well as the impact of roughness on the AMSR-E snow depth algorithm. Surface snow and ice data collected during the AMSR-Ice03 field campaign held in March 2003 near Barrow, AK, were used to force the radiative transfer model, and the resulting modeled Tbs are compared with airborne passive microwave observations from the Polarimetric Scanning Radiometer. Results indicate that passive microwave Tbs are very sensitive even to small variations in incidence angle, which can cause either an over- or underestimation of the true amount of sea ice in the pixel area viewed. For example, when the sea ice areas modeled in this paper were assumed to be completely smooth, sea ice concentrations were underestimated by nearly 14% using the NT sea ice algorithm and by 7% using the enhanced NT algorithm. A comparison of polarization ratios (PRs) at 10.7, 18.7, and 37 GHz indicates that each channel responds to different degrees of surface roughness and suggests that the PR at 10.7 GHz can be useful for identifying locations of heavily ridged or rubbled ice. Using the PR at 10.7 GHz to derive an "effective" viewing angle, which is used as a proxy for surface roughness, resulted in more accurate retrievals of sea ice concentration for both algorithms. The AMSR-E snow depth algorithm was found to be extremely sensitive to instrument calibration and sensor viewing angle, and it is concluded that more work is needed to investigate the sensitivity of the gradient ratio at 37 and 18.7 GHz to these factors to improve snow depth retrievals from spaceborne passive microwave sensors.
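
    For reference, the two spectral quantities discussed above have simple closed forms (standard definitions, with brightness temperatures in kelvin):

```python
def polarization_ratio(tb_v, tb_h):
    """PR at one frequency from vertically and horizontally polarized
    brightness temperatures; small over ice, larger over open water."""
    return (tb_v - tb_h) / (tb_v + tb_h)

def gradient_ratio(tb_37v, tb_19v):
    """GR between the 37 and 18.7 GHz vertical channels, the spectral
    quantity on which the AMSR-E snow depth retrieval depends."""
    return (tb_37v - tb_19v) / (tb_37v + tb_19v)
```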

  4. A Dual-Wavelength Radar Technique to Detect Hydrometeor Phases

    NASA Technical Reports Server (NTRS)

    Liao, Liang; Meneghini, Robert

    2016-01-01

    This study is aimed at investigating the feasibility of a Ku- and Ka-band space/airborne dual-wavelength radar algorithm to discriminate various phase states of precipitating hydrometeors. A phase-state classification algorithm has been developed from radar measurements of snow, mixed-phase, and rain obtained from stratiform storms. The algorithm, presented in the form of a look-up table that links the Ku-band radar reflectivities and the dual-frequency ratio (DFR) to the phase states of hydrometeors, is checked by applying it to measurements from the Jet Propulsion Laboratory, California Institute of Technology, Airborne Precipitation Radar Second Generation (APR-2). In creating the statistically based phase look-up table, the attenuation-corrected (or true) radar reflectivity factors are employed, leading to better accuracy in determining the hydrometeor phase. In practice, however, the true radar reflectivities are not always available before the phase states of the hydrometeors are determined. Therefore, it is desirable to make use of the measured radar reflectivities in classifying the phase states. To do this, a phase-identification procedure that uses only measured radar reflectivities is proposed. The procedure is then tested using APR-2 airborne radar data. Analysis of the classification results in stratiform rain indicates that the regions of snow, mixed-phase, and rain derived from the phase-identification algorithm coincide reasonably well with those determined from the measured radar reflectivities and linear depolarization ratio (LDR).
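
    As an illustration of the look-up-table idea, the sketch below classifies phase by nearest-bin lookup on (Z_Ku, DFR). The bin centers and phase entries are placeholders of our own; the paper's actual table is built from the statistics of stratiform-storm data.

    ```python
    import numpy as np

    # Placeholder bins and phases for illustration only.
    z_ku_bins = np.array([10.0, 20.0, 30.0])   # Ku reflectivity bin centers, dBZ
    dfr_bins = np.array([0.0, 5.0, 10.0])      # DFR bin centers, dB
    phase_table = np.array([["rain", "mixed", "snow"],
                            ["rain", "mixed", "snow"],
                            ["mixed", "mixed", "snow"]])

    def classify_phase(z_ku_dbz, z_ka_dbz):
        """Nearest-bin lookup of hydrometeor phase from the Ku-band
        reflectivity and the dual-frequency ratio DFR = dBZ(Ku) - dBZ(Ka)."""
        dfr = z_ku_dbz - z_ka_dbz
        i = int(np.argmin(np.abs(z_ku_bins - z_ku_dbz)))
        j = int(np.argmin(np.abs(dfr_bins - dfr)))
        return phase_table[i, j]
    ```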

  5. Automated classification and quantitative analysis of arterial and venous vessels in fundus images

    NASA Astrophysics Data System (ADS)

    Alam, Minhaj; Son, Taeyoon; Toslak, Devrim; Lim, Jennifer I.; Yao, Xincheng

    2018-02-01

    It is known that retinopathies may affect arteries and veins differently. Therefore, reliable differentiation of arteries and veins is essential for computer-aided analysis of fundus images. The purpose of this study is to validate an automated method for robust classification of arteries and veins (A-V) in digital fundus images. We combine optical density ratio (ODR) analysis and a blood vessel tracking algorithm to classify arteries and veins. A matched filtering method is used to enhance retinal blood vessels. Bottom-hat filtering and global thresholding are used to segment the vessels and skeletonize individual blood vessels. The vessel tracking algorithm is used to locate the optic disk and to identify the source nodes of blood vessels in the optic disk area. Each node can be identified as vein or artery using ODR information. Using the source nodes as starting points, the whole vessel trace is then tracked and classified as vein or artery using vessel curvature and angle information. Fifty color fundus images from diabetic retinopathy patients were used to test the algorithm. Sensitivity, specificity, and accuracy metrics were measured to assess the validity of the proposed classification method compared to ground truths created by two independent observers. The algorithm demonstrated 97.52% accuracy in identifying blood vessels as vein or artery. A quantitative analysis based on the A-V classification showed that the average A-V width ratio for NPDR subjects with hypertension decreased significantly (43.13%).

  6. Clinical algorithms for the diagnosis and prognosis of interstitial lung disease in systemic sclerosis.

    PubMed

    Hax, Vanessa; Bredemeier, Markus; Didonet Moro, Ana Laura; Pavan, Thaís Rohde; Vieira, Marcelo Vasconcellos; Pitrez, Eduardo Hennemann; da Silva Chakr, Rafael Mendonça; Xavier, Ricardo Machado

    2017-10-01

    Interstitial lung disease (ILD) is currently the primary cause of death in systemic sclerosis (SSc). Thoracic high-resolution computed tomography (HRCT) is considered the gold standard for diagnosis. Recent studies have proposed several clinical algorithms to predict the diagnosis and prognosis of SSc-ILD. Our aim was to test clinical algorithms for predicting the presence and prognosis of SSc-ILD and to evaluate the association of the extent of ILD with mortality in a cohort of SSc patients. This was a retrospective cohort study including 177 SSc patients assessed by clinical evaluation, laboratory tests, pulmonary function tests, and HRCT. Three clinical algorithms, combining lung auscultation, chest radiography, and percentage predicted forced vital capacity (FVC), were applied for the diagnosis of different extents of ILD on HRCT. Univariate and multivariate Cox proportional hazards models were used to analyze the association of the algorithms and the extent of ILD on HRCT with the risk of death using hazard ratios (HR). The prevalence of ILD on HRCT was 57.1%, and 79 patients (44.6%) died over a median follow-up of 11.1 years. For identification of ILD with extent ≥10% and ≥20% on HRCT, all algorithms presented high sensitivity (>89%) and a very low negative likelihood ratio (<0.16). For prognosis, survival was decreased for all algorithms, especially algorithm C (HR = 3.47, 95% CI: 1.62-7.42), which identified the presence of ILD based on crackles on lung auscultation, findings on chest X-ray, or FVC <80%. Extensive disease as proposed by Goh et al. (extent of ILD > 20% on HRCT or, in indeterminate cases, FVC < 70%) carried a significantly higher risk of death (HR = 3.42, 95% CI: 2.12-5.52). Survival did not differ between patients with ILD extents of 10% and 20% on HRCT, and analysis of 10-year mortality suggested that a threshold of 10% may also have good predictive value for mortality. However, there is no clear cutoff above which mortality is sharply increased. The clinical algorithms had good diagnostic performance for extents of SSc-ILD on HRCT with clinical and prognostic relevance (≥10% and ≥20%), and were also strongly related to mortality. Non-HRCT-based algorithms could be useful when HRCT is not available. This is the first study to replicate the prognostic algorithm proposed by Goh et al. in a developing country. Copyright © 2017 Elsevier Inc. All rights reserved.
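
    The decision rules quoted above can be stated compactly. The sketch below encodes algorithm C and the Goh et al. staging rule exactly as the abstract describes them (the function names and the use of None for the indeterminate case are ours):

    ```python
    def algorithm_c(crackles, cxr_ild_findings, fvc_pct):
        """Algorithm C: ILD is suspected given crackles on lung
        auscultation, ILD findings on chest X-ray, or FVC < 80% predicted."""
        return crackles or cxr_ild_findings or fvc_pct < 80.0

    def goh_extensive(hrct_extent_pct, fvc_pct):
        """Goh et al. staging: extensive disease if the HRCT extent of ILD
        exceeds 20%, or, when the HRCT extent is indeterminate, FVC < 70%."""
        if hrct_extent_pct is None:  # indeterminate on HRCT
            return fvc_pct < 70.0
        return hrct_extent_pct > 20.0
    ```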

  7. The methodology of the gas turbine efficiency calculation

    NASA Astrophysics Data System (ADS)

    Kotowicz, Janusz; Job, Marcin; Brzęczek, Mateusz; Nawrat, Krzysztof; Mędrych, Janusz

    2016-12-01

    In this paper, a methodology for calculating the isentropic efficiency of the compressor and turbine in a gas turbine installation on the basis of polytropic efficiency characteristics is presented. A gas turbine model is developed in software for power plant simulation. Calculation algorithms based on an iterative model are shown for the isentropic efficiency of the compressor and for the isentropic efficiency of the turbine based on the turbine inlet temperature. The isentropic efficiency characteristics of the compressor and the turbine are developed by means of the above-mentioned algorithms. The development of gas turbines with high pressure ratios was the main driving force for this analysis. The obtained gas turbine electric efficiency characteristics show that an increase of the pressure ratio above 50 is not justified, due to only a slight increase in efficiency accompanied by a significant increase of the turbine inlet (combustor outlet) temperature.
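
    The conversion between polytropic and isentropic efficiency that underlies such characteristics follows standard ideal-gas relations. A sketch, assuming constant specific-heat ratios (the gamma defaults are placeholder values, not from the paper):

    ```python
    def compressor_isentropic_eff(pi, eta_p, gamma=1.4):
        """Isentropic efficiency of a compressor with pressure ratio `pi`
        and polytropic efficiency `eta_p` (ideal gas, constant gamma)."""
        k = (gamma - 1.0) / gamma
        return (pi ** k - 1.0) / (pi ** (k / eta_p) - 1.0)

    def turbine_isentropic_eff(pi, eta_p, gamma=1.33):
        """Isentropic efficiency of a turbine with expansion ratio `pi`."""
        k = (gamma - 1.0) / gamma
        return (1.0 - pi ** (-k * eta_p)) / (1.0 - pi ** (-k))
    ```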

  8. Evaluation of a new motion correction algorithm in PET/CT: combining the entire acquired PET data to create a single three-dimensional motion-corrected PET/CT image.

    PubMed

    Minamimoto, Ryogo; Mitsumoto, Takuya; Miyata, Yoko; Sunaoka, Fumio; Morooka, Miyako; Okasaki, Momoko; Iagaru, Andrei; Kubota, Kazuo

    2016-02-01

    This study evaluated the potential of the Q.Freeze algorithm for reducing motion artifacts, in comparison with ungated imaging (UG) and respiratory-gated imaging (RG). Twenty-nine patients with 53 lesions who had undergone RG (18)F-FDG PET/CT were included in this study. Using PET list-mode data, five series of PET images [UG, RG, and Q.Freeze (QF) images with acquisition durations of 3 min (QF3), 5 min (QF5), and 10 min (QF10)] were reconstructed retrospectively. The image quality was evaluated first. Next, quantitative metrics [maximum standardized uptake value (SUVmax), mean standardized uptake value (SUVmean), SD, metabolic tumor volume, signal-to-noise ratio, and lesion-to-background ratio] were calculated for the liver, background, and each lesion, and the results were compared across the series. QF10 and QF5 showed better image quality than all other images. SUVmax in the liver, background, and lesions was lower with QF10 and QF5 than with the others, but there were no statistically significant differences in SUVmean or the lesion-to-background ratios. The SD with UG and RG was significantly higher than that with QF5 and QF10. The metabolic tumor volume with QF3 and QF5 was significantly lower than that with UG. The Q.Freeze algorithm can improve the quality of PET imaging compared with RG and UG.

  9. Membership-degree preserving discriminant analysis with applications to face recognition.

    PubMed

    Yang, Zhangjing; Liu, Chuancai; Huang, Pu; Qian, Jianjun

    2013-01-01

    In pattern recognition, feature extraction techniques have been widely employed to reduce the dimensionality of high-dimensional data. In this paper, we propose a novel feature extraction algorithm called membership-degree preserving discriminant analysis (MPDA), based on the Fisher criterion and fuzzy set theory, for face recognition. In the proposed algorithm, the membership degree of each sample to particular classes is first calculated by the fuzzy k-nearest neighbor (FKNN) algorithm to characterize the similarity between each sample and the class centers, and the membership degree is then incorporated into the definitions of the between-class scatter and the within-class scatter. The feature extraction criterion of maximizing the ratio of the between-class scatter to the within-class scatter is applied. Experimental results on the ORL, Yale, and FERET face databases demonstrate the effectiveness of the proposed algorithm.
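
    The FKNN membership degrees mentioned above are commonly initialized as in Keller et al.'s fuzzy k-nearest-neighbor scheme. A sketch under that assumption (the abstract does not specify the exact weighting; the 0.51/0.49 split is the standard convention):

    ```python
    import numpy as np

    def fknn_membership(X, y, n_classes, k=5):
        """Membership degree of each training sample to each class:
        neighbors sharing the sample's own label raise its membership."""
        n = X.shape[0]
        U = np.zeros((n, n_classes))
        # Pairwise Euclidean distances; each sample excludes itself.
        d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)
        for i in range(n):
            nbrs = np.argsort(d[i])[:k]
            counts = np.bincount(y[nbrs], minlength=n_classes)
            U[i] = 0.49 * counts / k
            U[i, y[i]] += 0.51
        return U
    ```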

  10. Estimation of Comfort/Discomfort Based on EEG in Massage by Use of Clustering According to Correlation and Incremental Learning Type NN

    NASA Astrophysics Data System (ADS)

    Teramae, Tatsuya; Kushida, Daisuke; Takemori, Fumiaki; Kitamura, Akira

    The authors previously proposed an estimation method combining the k-means algorithm and a neural network (NN) for evaluating massage. However, this estimation method has the problem that the discrimination ratio decreases for new users. There are two causes of this problem. One is that the generalization ability of the NN is poor. The other is that the clusters produced by the k-means algorithm do not have high within-class correlation coefficients. This research therefore proposes a k-means algorithm based on the correlation coefficient, together with incremental learning for the NN. The proposed k-means algorithm includes an evaluation function based on the correlation coefficient. In the incremental learning scheme, the NN is trained on new data with weights initialized from the existing data. The effectiveness of the proposed methods is verified by estimation results using EEG data recorded while subjects were given massage.

  11. Super-resolution algorithm based on sparse representation and wavelet preprocessing for remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Ren, Ruizhi; Gu, Lingjia; Fu, Haoyang; Sun, Chenglin

    2017-04-01

    An effective super-resolution (SR) algorithm is proposed for actual spectral remote sensing images based on sparse representation and wavelet preprocessing. The proposed SR algorithm mainly consists of dictionary training and image reconstruction. Wavelet preprocessing is used to establish four subbands, i.e., low frequency, horizontal, vertical, and diagonal high frequency, for an input image. As compared to the traditional approaches involving the direct training of image patches, the proposed approach focuses on the training of features derived from these four subbands. The proposed algorithm is verified using different spectral remote sensing images, e.g., moderate-resolution imaging spectroradiometer (MODIS) images with different bands, and the latest Chinese Jilin-1 satellite images with high spatial resolution. According to the visual experimental results obtained from the MODIS remote sensing data, the SR images using the proposed SR algorithm are superior to those using a conventional bicubic interpolation algorithm or traditional SR algorithms without preprocessing. Fusion algorithms, e.g., standard intensity-hue-saturation, principal component analysis, wavelet transform, and the proposed SR algorithms are utilized to merge the multispectral and panchromatic images acquired by the Jilin-1 satellite. The effectiveness of the proposed SR algorithm is assessed by parameters such as peak signal-to-noise ratio, structural similarity index, correlation coefficient, root-mean-square error, relative dimensionless global error in synthesis, relative average spectral error, spectral angle mapper, and the quality index Q4, and its performance is better than that of the standard image fusion algorithms.
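
    The four subbands named above correspond to a single-level 2-D discrete wavelet transform. With PyWavelets this is one call; the 'haar' wavelet is our placeholder choice, as the paper's wavelet basis is not stated here:

    ```python
    import numpy as np
    import pywt

    # Single-level 2-D DWT splits an image into the low-frequency
    # approximation and horizontal/vertical/diagonal detail subbands.
    image = np.random.rand(256, 256)          # stand-in for a MODIS band
    cA, (cH, cV, cD) = pywt.dwt2(image, "haar")
    ```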

  12. A Rapid Convergent Low Complexity Interference Alignment Algorithm for Wireless Sensor Networks.

    PubMed

    Jiang, Lihui; Wu, Zhilu; Ren, Guanghui; Wang, Gangyi; Zhao, Nan

    2015-07-29

    Interference alignment (IA) is a novel technique that can effectively eliminate interference and approach the sum capacity of wireless sensor networks (WSNs) when the signal-to-noise ratio (SNR) is high, by casting the desired signal and interference into different signal subspaces. The traditional alternating minimization interference leakage (AMIL) algorithm for IA shows good performance in high-SNR regimes; however, its complexity increases dramatically as the number of users and antennas increases, limiting its application in practical systems. In this paper, a novel IA algorithm, called the directional quartic optimal (DQO) algorithm, is proposed to minimize the interference leakage with rapid convergence and low complexity. The properties of the AMIL algorithm are investigated, and it is discovered that the difference between two consecutive iteration results of the AMIL algorithm approximately points toward the convergence solution when the precoding and decoding matrices obtained from the intermediate iterations are sufficiently close to their convergence values. Based on this important property, the proposed DQO algorithm employs a line search procedure so that it can converge to the destination directly. In addition, the optimal step size can be determined analytically by optimizing a quartic function. Numerical results show that the proposed DQO algorithm suppresses the interference leakage more rapidly than the traditional AMIL algorithm, and achieves the same sum rate as the AMIL algorithm with far fewer iterations and less execution time.

  13. An Efficient Next Hop Selection Algorithm for Multi-Hop Body Area Networks

    PubMed Central

    Ayatollahitafti, Vahid; Ngadi, Md Asri; Mohamad Sharif, Johan bin; Abdullahi, Mohammed

    2016-01-01

    Body Area Networks (BANs) consist of various sensors which gather a patient's vital signs and deliver them to doctors. One of the most significant challenges is the design of an energy-efficient next hop selection algorithm that satisfies Quality of Service (QoS) requirements for different healthcare applications. In this paper, a novel efficient next hop selection algorithm for multi-hop BANs is proposed. The algorithm uses the minimum hop count and a link cost function jointly at each node to choose the best next hop node. The link cost function includes the residual energy, free buffer size, and link reliability of the neighboring nodes, and is used to balance energy consumption and to satisfy QoS requirements in terms of end-to-end delay and reliability. Extensive simulation experiments were performed to evaluate the efficiency of the proposed algorithm using the NS-2 simulator. Simulation results show that our proposed algorithm provides significant improvement in terms of energy consumption, number of packets forwarded, end-to-end delay, and packet delivery ratio compared to the existing routing protocol. PMID:26771586
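
    To illustrate the joint use of hop count and link cost described above, here is a sketch with an assumed functional form; the weights, the reciprocal combination, and the dictionary keys are our own placeholders, not the paper's formula:

    ```python
    def link_cost(residual_energy, free_buffer, link_reliability,
                  w_e=1.0, w_b=1.0, w_r=1.0):
        """Illustrative link cost (lower is better): favors neighbors with
        more residual energy, more free buffer space, higher reliability."""
        return (w_e / max(residual_energy, 1e-9)
                + w_b / max(free_buffer, 1e-9)
                + w_r / max(link_reliability, 1e-9))

    def next_hop(neighbors):
        """Pick the neighbor with minimum hop count, breaking ties by the
        link cost; `neighbors` is a list of dicts with hypothetical keys."""
        return min(neighbors,
                   key=lambda nb: (nb["hop_count"],
                                   link_cost(nb["energy"], nb["buffer"],
                                             nb["reliability"])))
    ```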

  14. Research on adaptive optics image restoration algorithm based on improved joint maximum a posteriori method

    NASA Astrophysics Data System (ADS)

    Zhang, Lijuan; Li, Yang; Wang, Junnan; Liu, Ying

    2018-03-01

    In this paper, we propose a point spread function (PSF) reconstruction method and a joint maximum a posteriori (JMAP) estimation method for adaptive optics image restoration. Using the JMAP method as the basic principle, we establish the joint log-likelihood function of multi-frame adaptive optics (AO) images based on Gaussian noise models for the images. To begin with, combining the observation conditions and AO system characteristics, a predicted PSF model for the wavefront phase effect is developed; then, we build up iterative solution formulas for the AO image based on our proposed algorithm and address the implementation of the multi-frame AO image joint deconvolution method. We conduct a series of experiments on simulated and real degraded AO images to evaluate our proposed algorithm. Compared with the Wiener iterative blind deconvolution (Wiener-IBD) algorithm and the Richardson-Lucy IBD algorithm, our algorithm achieves better restoration effects, including higher peak signal-to-noise ratio (PSNR) and Laplacian sum (LS) values. The research results have practical application value for actual AO image restoration.
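
    PSNR, the first quality metric above, is computed from the mean squared error between a reference and a restored image; for example:

    ```python
    import numpy as np

    def psnr(reference, restored, peak=255.0):
        """Peak signal-to-noise ratio in dB between two images of the
        same shape; `peak` is the maximum possible pixel value."""
        mse = np.mean((reference.astype(float) - restored.astype(float)) ** 2)
        return 10.0 * np.log10(peak ** 2 / mse)
    ```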

  15. Computer controlled synchronous shifting of an automatic transmission

    DOEpatents

    Davis, Roy I.; Patil, Prabhakar B.

    1989-01-01

    A multiple forward speed automatic transmission produces its lowest forward speed ratio when a hydraulic clutch and hydraulic brake are disengaged and a one-way clutch connects a ring gear to the transmission casing. The second forward speed ratio results when the hydraulic clutch is engaged to connect the ring gear to the planetary carrier of a second gear set. Reverse drive and regenerative operation result when a hydraulic brake fixes the planetary carrier and the direction of power flow is reversed. Various sensors produce signals representing the torque at the output of the transmission or drive wheels, the speed of the power source, and the hydraulic pressure applied to the clutch and brake. A control algorithm produces input data representing a commanded upshift, a commanded downshift, a commanded transmission output torque, and a commanded power source speed. A microprocessor processes the inputs and produces a response to them in accordance with the execution of a control algorithm. Output or response signals cause selective engagement and disengagement of the clutch and brake at a rate that satisfies the requirements for a short gear ratio change and smooth torque transfer between the friction elements.

  16. Automatic arrival time detection for earthquakes based on Modified Laplacian of Gaussian filter

    NASA Astrophysics Data System (ADS)

    Saad, Omar M.; Shalaby, Ahmed; Samy, Lotfy; Sayed, Mohammed S.

    2018-04-01

    Precise identification of the onset time of an earthquake is imperative for accurately determining the earthquake's location and other parameters that are utilized for building seismic catalogues. The P-wave arrival of weak events or micro-earthquakes cannot be precisely determined due to background noise. In this paper, we propose a novel approach based on a Modified Laplacian of Gaussian (MLoG) filter to detect the onset time even at very low signal-to-noise ratios (SNRs). The proposed algorithm utilizes a denoising-filter algorithm to smooth the background noise. In the proposed algorithm, we employ the MLoG mask to filter the seismic data. Afterward, we apply a dual-threshold comparator to detect the onset time of the event. The results show that the proposed algorithm can detect the onset time for micro-earthquakes accurately, at an SNR of -12 dB. The proposed algorithm achieves an onset-time picking accuracy of 93% with a standard deviation error of 0.10 s for 407 field seismic waveforms. We also compare the results with the short-term average/long-term average (STA/LTA) algorithm and the Akaike Information Criterion (AIC), and the proposed algorithm outperforms them.
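
    For reference, the baseline STA/LTA characteristic function mentioned above is the ratio of a short-term to a long-term moving average of signal energy; a sketch (window lengths are placeholder values):

    ```python
    import numpy as np

    def sta_lta(trace, fs, sta_win=0.5, lta_win=10.0):
        """STA/LTA ratio of signal energy; a pick is declared where the
        ratio crosses a user-chosen threshold."""
        nsta = int(sta_win * fs)
        nlta = int(lta_win * fs)
        energy = trace.astype(float) ** 2
        csum = np.cumsum(energy)
        sta = (csum[nsta:] - csum[:-nsta]) / nsta
        lta = (csum[nlta:] - csum[:-nlta]) / nlta
        # Align both averages so they end at the same samples.
        n = min(len(sta), len(lta))
        return sta[-n:] / np.maximum(lta[-n:], 1e-12)
    ```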

  17. Biologically inspired binaural hearing aid algorithms: Design principles and effectiveness

    NASA Astrophysics Data System (ADS)

    Feng, Albert

    2002-05-01

    Despite rapid advances in the sophistication of hearing aid technology and microelectronics, listening in noise remains problematic for people with hearing impairment. To solve this problem, two algorithms were designed for use in binaural hearing aid systems. The signal processing strategies are based on principles in auditory physiology and psychophysics: (a) the location/extraction (L/E) binaural computational scheme determines the directions of source locations and cancels noise by applying a simple subtraction method over every frequency band; and (b) the frequency-domain minimum-variance (FMV) scheme extracts a target sound from a known direction amidst multiple interfering sound sources. Both algorithms were evaluated using standard metrics such as signal-to-noise-ratio gain and articulation index. Results were compared with those from conventional adaptive beam-forming algorithms. In free-field tests with multiple interfering sound sources, our algorithms performed better than conventional algorithms. Preliminary intelligibility and speech reception results in multitalker environments showed gains for every listener with normal or impaired hearing when the signals were processed in real time with the FMV binaural hearing aid algorithm. [Work supported by NIH-NIDCD Grant No. R21DC04840 and the Beckman Institute.]
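
    Minimum-variance processing of the kind named in (b) is typically realized with MVDR-style weights per frequency bin. A generic sketch, not necessarily the authors' exact FMV formulation:

    ```python
    import numpy as np

    def mvdr_weights(R, d):
        """Minimum-variance weights for one frequency bin: R is the spatial
        covariance matrix of the received signals and d the steering vector
        of the known target direction; the output is distortionless
        toward d while minimizing total output power."""
        r_inv_d = np.linalg.solve(R, d)
        return r_inv_d / (d.conj() @ r_inv_d)
    ```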

  18. A GPU-Accelerated 3-D Coupled Subsample Estimation Algorithm for Volumetric Breast Strain Elastography.

    PubMed

    Peng, Bo; Wang, Yuqi; Hall, Timothy J; Jiang, Jingfeng

    2017-04-01

    The primary objective of this paper was to extend a previously published 2-D coupled subsample tracking algorithm to 3-D speckle tracking in the framework of ultrasound breast strain elastography. In order to overcome the heavy computational cost, we investigated the use of a graphics processing unit (GPU) to accelerate the 3-D coupled subsample speckle tracking method. The performance of the proposed GPU implementation was tested using a tissue-mimicking phantom and in vivo breast ultrasound data. The performance of this 3-D subsample tracking algorithm was compared with the conventional 3-D quadratic subsample estimation algorithm. On the basis of these evaluations, we concluded that the GPU implementation of this 3-D subsample estimation algorithm can provide high-quality strain data (i.e., high correlation between the predeformation and the motion-compensated postdeformation radio frequency echo data, and high contrast-to-noise ratio strain images), as compared with the conventional 3-D quadratic subsample algorithm. Using the GPU implementation of the 3-D speckle tracking algorithm, volumetric strain data can be obtained relatively fast (approximately 20 s per 2.5 cm × 2.5 cm × 2.5 cm volume).

  19. Hazardous gas detection for FTIR-based hyperspectral imaging system using DNN and CNN

    NASA Astrophysics Data System (ADS)

    Kim, Yong Chan; Yu, Hyeong-Geun; Lee, Jae-Hoon; Park, Dong-Jo; Nam, Hyun-Woo

    2017-10-01

    Recently, the hyperspectral imaging system (HIS) with a Fourier Transform InfraRed (FTIR) spectrometer has been widely used due to its strengths in detecting gaseous fumes. Even though numerous algorithms for detecting gaseous fumes have already been studied, it is still difficult to detect target gases properly because of atmospheric interfering substances and the unclear characteristics of low-concentration gases. In this paper, we propose detection algorithms for classifying hazardous gases using a deep neural network (DNN) and a convolutional neural network (CNN). In both the DNN and CNN, spectral signal preprocessing, e.g., offset, noise, and baseline removal, is carried out. In the DNN algorithm, the preprocessed spectral signals are used as feature maps of a DNN with five layers, which is trained by a stochastic gradient descent (SGD) algorithm (batch size 50) with dropout regularization (ratio 0.7). In the CNN algorithm, the preprocessed spectral signals are trained with 1 × 3 convolution layers and 1 × 2 max-pooling layers. As a result, the proposed algorithms improve the classification accuracy rate by 1.5% over the existing support vector machine (SVM) algorithm for detecting and classifying hazardous gases.
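
    A minimal PyTorch sketch of a 1-D CNN with 1 × 3 convolutions, 1 × 2 max pooling, and 0.7 dropout as described above; the spectrum length (512), channel counts, and class count (4) are placeholder assumptions:

    ```python
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Conv1d(1, 16, kernel_size=3, padding=1),   # 1 x 3 convolutions
        nn.ReLU(),
        nn.MaxPool1d(2),                              # 1 x 2 max pooling
        nn.Conv1d(16, 32, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.MaxPool1d(2),
        nn.Flatten(),
        nn.Dropout(p=0.7),                            # dropout ratio 0.7
        nn.Linear(32 * 128, 4),                       # length 512 -> 256 -> 128
    )
    logits = model(torch.randn(8, 1, 512))            # batch of 8 spectra
    ```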

  20. Image denoising via fundamental anisotropic diffusion and wavelet shrinkage: a comparative study

    NASA Astrophysics Data System (ADS)

    Bayraktar, Bulent; Analoui, Mostafa

    2004-05-01

    Noise removal faces a challenge: keeping the image details. Resolving the dilemma of two purposes (smoothing and keeping image features intact) working against each other was an almost impossible task until anisotropic diffusion (AD) was formally introduced by Perona and Malik (PM). AD favors intra-region smoothing over inter-region smoothing in piecewise smooth images. Many authors have regularized the original PM algorithm to overcome its drawbacks. We compared the performance of denoising using such 'fundamental' AD algorithms and one of the most powerful multiresolution tools available today, namely, wavelet shrinkage. The AD algorithms here are called 'fundamental' in the sense that the regularized versions center around the original PM algorithm with minor changes to the logic. The algorithms are tested with different noise types and levels. In addition to visual inspection, two mathematical metrics are used for performance comparison: signal-to-noise ratio (SNR) and universal image quality index (UIQI). We conclude that some of the regularized versions of the PM algorithm (AD) perform comparably with wavelet shrinkage denoising, which saves a lot of computational power. With this conclusion, we applied the better-performing fundamental AD algorithms to a new imaging modality: Optical Coherence Tomography (OCT).
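
    The original PM diffusion that these 'fundamental' variants center on iterates a conductance-weighted smoothing. A compact sketch (periodic borders via np.roll are a simplification for brevity; parameter values are placeholders):

    ```python
    import numpy as np

    def perona_malik(img, n_iter=20, kappa=15.0, lam=0.2):
        """Perona-Malik anisotropic diffusion with the exponential
        conductance g(d) = exp(-(d/kappa)^2), 4-neighbor scheme;
        lam <= 0.25 keeps the update stable."""
        u = img.astype(float).copy()
        g = lambda d: np.exp(-(d / kappa) ** 2)
        for _ in range(n_iter):
            # Finite differences to the four neighbors.
            dn = np.roll(u, 1, axis=0) - u
            ds = np.roll(u, -1, axis=0) - u
            de = np.roll(u, -1, axis=1) - u
            dw = np.roll(u, 1, axis=1) - u
            # Edge-stopping conductance favors intra-region smoothing.
            u += lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
        return u
    ```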

  1. Evaluation of Dynamic Channel and Power Assignment for Cognitive Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Syed A. Ahmad; Umesh Shukla; Ryan E. Irwin

    2011-03-01

    In this paper, we develop a unifying optimization formulation to describe the Dynamic Channel and Power Assignment (DCPA) problem and an evaluation method for comparing DCPA algorithms. DCPA refers to the allocation of transmit power and frequency channels to links in a cognitive network so as to maximize the total number of feasible links while minimizing the aggregate transmit power. We apply our evaluation method to five algorithms representative of DCPA used in the literature. This comparison illustrates the tradeoffs between control modes (centralized versus distributed) and channel/power assignment techniques. We estimate the complexity of each algorithm. Through simulations, we evaluate the effectiveness of the algorithms in achieving feasible link allocations in the network, as well as their power efficiency. Our results indicate that, when few channels are available, the effectiveness of all algorithms is comparable and thus the one with the smallest complexity should be selected. The Least Interfering Channel and Iterative Power Assignment (LICIPA) algorithm does not require cross-link gain information, and has the overall lowest run time and the highest feasibility ratio of all the distributed algorithms; however, this comes at the cost of higher average power per link.

  2. Efficient L1 regularization-based reconstruction for fluorescent molecular tomography using restarted nonlinear conjugate gradient.

    PubMed

    Shi, Junwei; Zhang, Bin; Liu, Fei; Luo, Jianwen; Bai, Jing

    2013-09-15

    For the ill-posed fluorescent molecular tomography (FMT) inverse problem, the L1 regularization can protect the high-frequency information like edges while effectively reduce the image noise. However, the state-of-the-art L1 regularization-based algorithms for FMT reconstruction are expensive in memory, especially for large-scale problems. An efficient L1 regularization-based reconstruction algorithm based on nonlinear conjugate gradient with restarted strategy is proposed to increase the computational speed with low memory consumption. The reconstruction results from phantom experiments demonstrate that the proposed algorithm can obtain high spatial resolution and high signal-to-noise ratio, as well as high localization accuracy for fluorescence targets.

  3. Research on gait-based human identification

    NASA Astrophysics Data System (ADS)

    Li, Youguo

    Gait recognition refers to the automatic identification of individuals based on their style of walking. This paper proposes a gait recognition method based on a Continuous Hidden Markov Model with a Mixture of Gaussians (G-CHMM). First, we initialize a Gaussian mixture model for the training image sequence with the K-means algorithm, and then train the HMM parameters using the Baum-Welch algorithm. A continuous HMM is trained for every person's gait feature sequence, so that the 7 key frames and the obtained HMM represent each person's gait sequence. Finally, recognition is achieved with the forward algorithm. Experiments on the CASIA gait databases obtain a comparatively high correct identification ratio and comparatively strong robustness to variations in body angle.
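
    The forward-algorithm scoring step generalizes as follows; the sketch uses a discrete-emission HMM for simplicity (the paper uses Gaussian-mixture emissions, but the recursion is the same):

    ```python
    import numpy as np

    def forward_log_likelihood(pi, A, B, obs):
        """Scaled forward algorithm: log-likelihood of an observation
        sequence under an HMM with initial probs `pi`, transition matrix
        `A`, and per-state emission probs `B[state, symbol]`."""
        alpha = pi * B[:, obs[0]]
        c = alpha.sum()                  # scaling avoids underflow
        log_lik = np.log(c)
        alpha /= c
        for o in obs[1:]:
            alpha = (alpha @ A) * B[:, o]
            c = alpha.sum()
            log_lik += np.log(c)
            alpha /= c
        return log_lik
    ```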

  4. Estimation of the Arrival Time and Duration of a Radio Signal with Unknown Amplitude and Initial Phase

    NASA Astrophysics Data System (ADS)

    Trifonov, A. P.; Korchagin, Yu. E.; Korol'kov, S. V.

    2018-05-01

    We synthesize quasi-likelihood, maximum-likelihood, and quasioptimal algorithms for estimating the arrival time and duration of a radio signal with unknown amplitude and initial phase. The discrepancies between the hardware and software realizations of the estimation algorithm are shown. The operating-efficiency characteristics of the synthesized algorithms are obtained. Asymptotic expressions for the biases, variances, and the correlation coefficient of the arrival-time and duration estimates, which hold true for large signal-to-noise ratios, are derived. The accuracy losses of the estimates of the radio-signal arrival time and duration due to a priori ignorance of the amplitude and initial phase are determined.

  5. Model of a Frame of Dynamic Routing and Its Equilibrium

    NASA Astrophysics Data System (ADS)

    Zhang, Shu; Yuan, Yuan; Xu, Jian

    Dynamic routing algorithms based on the shortest-path principle have been criticized for the oscillation induced by such routing schemes. In the present work, we propose a model of the TCP/RED algorithm within a new frame of dynamic routing, based on measuring the occupation ratio of the router buffer for different links; it requires only information on the queue size at the router buffer to stabilize the system. We classify several types of equilibrium and employ numerical methods to study the stability of the steady state. Our numerical results show that careful selection of the parameters characterizing the dynamic routing algorithm can stabilize the system in some cases.

  6. A Pseudo-Temporal Multi-Grid Relaxation Scheme for Solving the Parabolized Navier-Stokes Equations

    NASA Technical Reports Server (NTRS)

    White, J. A.; Morrison, J. H.

    1999-01-01

    A multi-grid, flux-difference-split, finite-volume code, VULCAN, is presented for solving the elliptic and parabolized forms of the equations governing three-dimensional, turbulent, calorically perfect and non-equilibrium chemically reacting flows. The space-marching algorithms developed to improve convergence rate and/or reduce computational cost are emphasized. The algorithms presented are extensions of the class of implicit pseudo-time iterative, upwind space-marching schemes. A full approximation storage, full multi-grid scheme is also described, which is used to accelerate the convergence of a Gauss-Seidel relaxation method. The multi-grid algorithm is shown to significantly improve convergence on high-aspect-ratio grids.

  7. Workflow as a Service in the Cloud: Architecture and Scheduling Algorithms.

    PubMed

    Wang, Jianwu; Korambath, Prakashan; Altintas, Ilkay; Davis, Jim; Crawl, Daniel

    2014-01-01

    With more and more workflow systems adopting the cloud as their execution environment, it becomes increasingly challenging to efficiently manage various workflows, virtual machines (VMs), and workflow execution on VM instances. To make the system scalable and easy to extend, we design a Workflow as a Service (WFaaS) architecture with independent services. A core part of the architecture is how to efficiently respond to continuous workflow requests from users and schedule their executions in the cloud. Based on different targets, we propose four heuristic workflow scheduling algorithms for the WFaaS architecture, and analyze the differences and best usages of the algorithms in terms of performance, cost, and the price/performance ratio via experimental studies.

  8. Evaluation of Demons- and FEM-Based Registration Algorithms for Lung Cancer.

    PubMed

    Yang, Juan; Li, Dengwang; Yin, Yong; Zhao, Fen; Wang, Hongjun

    2016-04-01

    We evaluated and compared the accuracy of 2 deformable image registration algorithms in 4-dimensional computed tomography images for patients with lung cancer. Ten patients with non-small cell lung cancer or small cell lung cancer were enrolled in this institutional review board-approved study. The displacement vector fields relative to a specific reference image were calculated by using the diffeomorphic demons (DD) algorithm and the finite element method (FEM)-based algorithm. The registration accuracy was evaluated by using normalized mutual information (NMI), the sum of squared intensity differences (SSD), modified Hausdorff distance (dH_M), and the ratio of gross tumor volume (rGTV) difference between the reference image and the deformed phase image. We also compared the registration speed of the 2 algorithms. For all patients, the FEM-based algorithm showed a stronger ability to align the 2 images than the DD algorithm. The means (±standard deviation) of NMI were 0.86 (±0.05) and 0.90 (±0.05) using the DD algorithm and the FEM-based algorithm, respectively. The means of SSD were 0.006 (±0.003) and 0.003 (±0.002) using the DD algorithm and the FEM-based algorithm, respectively. The means of dH_M were 0.04 (±0.02) and 0.03 (±0.03) using the DD algorithm and the FEM-based algorithm, respectively. The means of rGTV were 3.9% (±1.01%) and 2.9% (±1.1%) using the DD algorithm and the FEM-based algorithm, respectively. However, the FEM-based algorithm required more time than the DD algorithm, with an average running time of 31.4 minutes compared to 21.9 minutes for all patients. These preliminary results showed that the FEM-based algorithm was more accurate than the DD algorithm, at the expense of registration speed. © The Author(s) 2015.
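
    NMI, the first accuracy metric above, can be estimated from a joint intensity histogram of the two images; a sketch (the bin count is a placeholder):

    ```python
    import numpy as np

    def nmi(a, b, bins=64):
        """Normalized mutual information (H(A) + H(B)) / H(A, B) of two
        equally shaped images, from a joint intensity histogram."""
        joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        p = joint / joint.sum()
        px, py = p.sum(axis=1), p.sum(axis=0)
        nz = p > 0
        h_joint = -np.sum(p[nz] * np.log(p[nz]))
        h_x = -np.sum(px[px > 0] * np.log(px[px > 0]))
        h_y = -np.sum(py[py > 0] * np.log(py[py > 0]))
        return (h_x + h_y) / h_joint
    ```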

  9. Development of algorithms for detecting citrus canker based on hyperspectral reflectance imaging.

    PubMed

    Li, Jiangbo; Rao, Xiuqin; Ying, Yibin

    2012-01-15

    Automated discrimination of fruit with canker from fruit with normal surfaces and different types of peel defects has become a helpful task to enhance the competitiveness and profitability of the citrus industry. Over the last several years, hyperspectral imaging technology has received increasing attention in the agricultural products inspection field. This paper studied the feasibility of classifying citrus canker against other peel conditions, including normal surfaces and nine peel defects, by hyperspectral imaging. A combination algorithm based on principal component analysis and the two-band ratio (Q(687/630)) method was proposed. Since fewer wavelengths are desired in order to develop a rapid multispectral imaging system, the canker classification performance of the two-band ratio (Q(687/630)) method alone was also evaluated. The proposed combination approach yielded overall classification accuracies of 99.5% for training set samples and 84.5% for test set samples, while the two-band ratio method alone yielded 98.2% and 82.9%, respectively. The proposed combination approach was more efficient for classifying canker against various conditions under reflectance hyperspectral imagery. However, the two-band ratio (Q(687/630)) method alone also demonstrated effectiveness in discriminating citrus canker from normal fruit and other peel diseases except for copper burn and anthracnose. Copyright © 2011 Society of Chemical Industry.
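
    The two-band ratio feature is simply a pixel-wise ratio of the reflectance bands nearest 687 and 630 nm; for a hyperspectral cube:

    ```python
    import numpy as np

    def band_ratio(cube, wavelengths, num_nm=687.0, den_nm=630.0):
        """Q(687/630) image from a hyperspectral cube with shape
        (rows, cols, bands): ratio of the band nearest num_nm to the
        band nearest den_nm."""
        w = np.asarray(wavelengths)
        i = int(np.argmin(np.abs(w - num_nm)))
        j = int(np.argmin(np.abs(w - den_nm)))
        return cube[:, :, i] / np.maximum(cube[:, :, j], 1e-9)
    ```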

  10. Phase retrieval from intensity-only data by relative entropy minimization.

    PubMed

    Deming, Ross W

    2007-11-01

    A recursive algorithm, which appears to be new, is presented for estimating the amplitude and phase of a wave field from intensity-only measurements on two or more scan planes at different axial positions. The problem is framed as a nonlinear optimization, in which the angular spectrum of the complex field model is adjusted in order to minimize the relative entropy, or Kullback-Leibler divergence, between the measured and reconstructed intensities. The most common approach to this so-called phase retrieval problem is a variation of the well-known Gerchberg-Saxton algorithm devised by Misell (J. Phys. D6, L6, 1973), which is efficient and extremely simple to implement. The new algorithm has a computational structure that is very similar to Misell's approach, despite the fundamental difference in the optimization criteria used for each. Based upon results from noisy simulated data, the new algorithm appears to be more robust than Misell's approach and to produce better results from low signal-to-noise ratio data. The convergence of the new algorithm is examined.
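
    The optimization criterion described above is the relative entropy between the normalized measured and reconstructed intensities; as a sketch:

    ```python
    import numpy as np

    def relative_entropy(measured, modeled):
        """Kullback-Leibler divergence between measured and reconstructed
        intensity distributions, each normalized to unit sum."""
        p = measured / measured.sum()
        q = modeled / modeled.sum()
        nz = p > 0
        return np.sum(p[nz] * np.log(p[nz] / np.maximum(q[nz], 1e-300)))
    ```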

  11. Thermodynamic properties of solvated peptides from selective integrated tempering sampling with a new weighting factor estimation algorithm

    NASA Astrophysics Data System (ADS)

    Shen, Lin; Xie, Liangxu; Yang, Mingjun

    2017-04-01

    Conformational sampling under a rugged energy landscape is always a challenge in computer simulations. The recently developed integrated tempering sampling, together with its selective variant (SITS), has emerged as a powerful tool for exploring the free energy landscape or functional motions of various systems. The estimation of weighting factors constitutes a critical step in these methods and requires accurate calculation of the partition function ratio between different thermodynamic states. In this work, we propose a new adaptive update algorithm to compute the weighting factors based on the weighted histogram analysis method (WHAM). The adaptive-WHAM algorithm with SITS is then applied to study the thermodynamic properties of several representative peptide systems solvated in an explicit water box. The performance of the new algorithm is validated in simulations of these solvated peptide systems. We anticipate more applications of this coupled optimization and production algorithm to other complicated systems such as biochemical reactions in solution.

  12. Robust transceiver design for reciprocal M × N interference channel based on statistical linearization approximation

    NASA Astrophysics Data System (ADS)

    Mayvan, Ali D.; Aghaeinia, Hassan; Kazemi, Mohammad

    2017-12-01

    This paper focuses on robust transceiver design for throughput enhancement on the interference channel (IC) under imperfect channel state information (CSI). Two algorithms are proposed to improve the throughput of the multi-input multi-output (MIMO) IC. Each transmitter and receiver has, respectively, M and N antennas, and the IC operates in a time-division duplex mode. In the first proposed algorithm, each transceiver adjusts its filter to maximize the expected value of the signal-to-interference-plus-noise ratio (SINR). The second algorithm, on the other hand, tries to minimize the variances of the SINRs to hedge against the variability due to CSI error. A Taylor expansion is exploited to approximate the effect of CSI imperfection on the mean and variance. The proposed robust algorithms utilize the reciprocity of wireless networks to optimize the estimated statistical properties in two different working modes. Monte Carlo simulations are employed to investigate the sum rate performance of the proposed algorithms and the advantage of incorporating variance minimization into the transceiver design.

  13. Magnetic resonance image restoration via dictionary learning under spatially adaptive constraints.

    PubMed

    Wang, Shanshan; Xia, Yong; Dong, Pei; Feng, David Dagan; Luo, Jianhua; Huang, Qiu

    2013-01-01

    This paper proposes a spatially adaptive constrained dictionary learning (SAC-DL) algorithm for Rician noise removal in magnitude magnetic resonance (MR) images. This algorithm explores both the strength of dictionary learning to preserve image structures and the robustness of local variance estimation to remove signal-dependent Rician noise. The magnitude image is first separated into a number of partly overlapping image patches. The statistics of each patch are collected and analyzed to obtain a local noise variance. To better adapt to Rician noise, a correction factor is formulated with the local signal-to-noise ratio (SNR). Finally, the trained dictionary is used to denoise each image patch under spatially adaptive constraints. The proposed algorithm has been compared to the popular nonlocal means (NLM) filtering and unbiased NLM (UNLM) algorithm on simulated T1-weighted, T2-weighted and PD-weighted MR images. Our results suggest that the SAC-DL algorithm preserves more image structures while effectively removing the noise than NLM and it is also superior to UNLM at low noise levels.

  14. An algorithm to improve speech recognition in noise for hearing-impaired listeners

    PubMed Central

    Healy, Eric W.; Yoho, Sarah E.; Wang, Yuxuan; Wang, DeLiang

    2013-01-01

    Despite considerable effort, monaural (single-microphone) algorithms capable of increasing the intelligibility of speech in noise have remained elusive. Successful development of such an algorithm is especially important for hearing-impaired (HI) listeners, given their particular difficulty in noisy backgrounds. In the current study, an algorithm based on binary masking was developed to separate speech from noise. Unlike the ideal binary mask, which requires prior knowledge of the premixed signals, the masks used to segregate speech from noise in the current study were estimated by training the algorithm on speech not used during testing. Sentences were mixed with speech-shaped noise and with babble at various signal-to-noise ratios (SNRs). Testing using normal-hearing and HI listeners indicated that intelligibility increased following processing in all conditions. These increases were larger for HI listeners, for the modulated background, and for the least-favorable SNRs. They were also often substantial, allowing several HI listeners to improve intelligibility from scores near zero to values above 70%. PMID:24116438
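
    The ideal binary mask that the trained estimator approximates keeps the time-frequency units whose local SNR exceeds a criterion; a sketch (the 0 dB local criterion is a placeholder):

    ```python
    import numpy as np

    def ideal_binary_mask(speech_tf, noise_tf, lc_db=0.0):
        """Ideal binary mask over a time-frequency representation: keep
        a T-F unit when the local SNR exceeds the local criterion (LC).
        This requires the premixed speech and noise, which is exactly
        what the trained estimator in the study avoids needing."""
        local_snr_db = 10.0 * np.log10(
            np.abs(speech_tf) ** 2 / np.maximum(np.abs(noise_tf) ** 2, 1e-12))
        return (local_snr_db > lc_db).astype(float)
    ```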

  15. Comparison between variable and fixed dwell-time PN acquisition algorithms. [for synchronization in pseudonoise spread spectrum systems]

    NASA Technical Reports Server (NTRS)

    Braun, W. R.

    1981-01-01

    Pseudonoise (PN) spread spectrum systems require very accurate alignment between the PN code epochs at the transmitter and receiver. This synchronism is typically established through a two-step algorithm comprising a coarse synchronization procedure and a fine synchronization procedure. A standard approach for the coarse synchronization is a sequential search over all code phases. The measurement of the power in the filtered signal is used to either accept or reject the code phase under test as the phase of the received PN code. This acquisition strategy, called a single dwell-time system, has been analyzed by Holmes and Chen (1977). A synopsis of the field of sequential analysis as it applies to the PN acquisition problem is provided. From this, the implementation of the variable dwell-time algorithm as a sequential probability ratio test is developed. The performance of this algorithm is compared to that of the optimum detection algorithm and to the fixed dwell-time system.

  16. The NLO jet vertex in the small-cone approximation for kt and cone algorithms

    NASA Astrophysics Data System (ADS)

    Colferai, D.; Niccoli, A.

    2015-04-01

    We determine the jet vertex for Mueller-Navelet jets and forward jets in the small-cone approximation for two particular choices of jet algorithms: the kt algorithm and the cone algorithm. These choices are motivated by the extensive use of such algorithms in the phenomenology of jets. The differences with the original calculations of the small-cone jet vertex by Ivanov and Papa, which is found to be equivalent to an algorithm formerly proposed by Furman, are shown at both the analytic and numerical level, and turn out to be sizeable. A detailed numerical study of the error introduced by the small-cone approximation is also presented for various observables of phenomenological interest. For values of the jet "radius" R = 0.5, the use of the small-cone approximation amounts to an error of about 5% at the level of the cross section, while it reduces to less than 2% for ratios of distributions such as those involved in the measure of the azimuthal decorrelation of dijets.

  17. Optical image hiding based on computational ghost imaging

    NASA Astrophysics Data System (ADS)

    Wang, Le; Zhao, Shengmei; Cheng, Weiwen; Gong, Longyan; Chen, Hanwu

    2016-05-01

    Image hiding schemes play an important role in the current big data era, providing copyright protection for digital images. In this paper, we propose a novel image hiding scheme based on computational ghost imaging that offers strong robustness and high security. The watermark is encrypted with the configuration of a computational ghost imaging system, and the random speckle patterns compose a secret key. The least significant bit algorithm is adopted to embed the watermark, and both the second-order correlation algorithm and the compressed sensing (CS) algorithm are used to extract the watermark. The experimental and simulation results show that authorized users can recover the watermark with the secret key. The watermark image cannot be retrieved when the eavesdropping ratio is less than 45% with the second-order correlation algorithm, whereas the threshold is less than 20% with the TVAL3 CS reconstruction algorithm. In addition, the proposed scheme is robust against 'salt and pepper' noise and image cropping degradations.
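
    Least-significant-bit embedding, the watermark insertion step above, amounts to overwriting the lowest bit plane of the cover image; a sketch for uint8 images (function names are ours):

    ```python
    import numpy as np

    def embed_lsb(cover, watermark_bits):
        """Embed a flat uint8 array of 0/1 bits into the least significant
        bits of a uint8 cover image; the bit count must not exceed the
        number of pixels."""
        flat = cover.ravel().copy()
        n = watermark_bits.size
        flat[:n] = (flat[:n] & 0xFE) | watermark_bits
        return flat.reshape(cover.shape)

    def extract_lsb(stego, n_bits):
        """Recover the first n_bits least significant bits."""
        return stego.ravel()[:n_bits] & 1
    ```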

  18. LFQC: a lossless compression algorithm for FASTQ files

    PubMed Central

    Nicolae, Marius; Pathak, Sudipta; Rajasekaran, Sanguthevar

    2015-01-01

    Motivation: Next Generation Sequencing (NGS) technologies have revolutionized genomic research by reducing the cost of whole genome sequencing. One of the biggest challenges posed by modern sequencing technology is economic storage of NGS data. Storing raw data is infeasible because of its enormous size and high redundancy. In this article, we address the problem of storage and transmission of large FASTQ files using innovative compression techniques. Results: We introduce a new lossless non-reference based FASTQ compression algorithm named Lossless FASTQ Compressor. We have compared our algorithm with other state of the art big data compression algorithms namely gzip, bzip2, fastqz (Bonfield and Mahoney, 2013), fqzcomp (Bonfield and Mahoney, 2013), Quip (Jones et al., 2012), DSRC2 (Roguski and Deorowicz, 2014). This comparison reveals that our algorithm achieves better compression ratios on LS454 and SOLiD datasets. Availability and implementation: The implementations are freely available for non-commercial purposes. They can be downloaded from http://engr.uconn.edu/rajasek/lfqc-v1.1.zip. Contact: rajasek@engr.uconn.edu PMID:26093148

  19. Application of adaptive filters in denoising magnetocardiogram signals

    NASA Astrophysics Data System (ADS)

    Khan, Pathan Fayaz; Patel, Rajesh; Sengottuvel, S.; Saipriya, S.; Swain, Pragyna Parimita; Gireesan, K.

    2017-05-01

    Magnetocardiography (MCG) is the measurement of the weak magnetic fields of the heart using Superconducting QUantum Interference Devices (SQUIDs). Though the measurements are performed inside magnetically shielded rooms (MSR) to reduce external electromagnetic disturbances, interference caused by sources inside the shielded room cannot be attenuated. The work presented here reports the application of adaptive filters to denoise MCG signals. Two adaptive noise cancellation approaches, namely the least mean squares (LMS) algorithm and the recursive least squares (RLS) algorithm, are applied to denoise MCG signals and the results are compared. It is found that both algorithms effectively remove noisy wiggles from MCG traces, significantly improving the quality of the cardiac features in MCG traces. The calculated signal-to-noise ratio (SNR) for the denoised MCG traces is found to be slightly higher with the LMS algorithm than with the RLS algorithm. The results encourage the use of adaptive techniques to suppress noise due to the power line frequency and its harmonics, which occur frequently in biomedical measurements.
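
    A minimal LMS adaptive noise canceller of the kind compared above (tap count and step size are placeholder values):

    ```python
    import numpy as np

    def lms_denoise(primary, reference, n_taps=32, mu=0.01):
        """LMS adaptive noise cancellation: the reference input (noise
        correlated with the interference) is filtered to predict the noise
        in the primary channel; the error output is the denoised trace."""
        w = np.zeros(n_taps)
        out = np.zeros(len(primary))
        for n in range(n_taps, len(primary)):
            x = reference[n - n_taps:n][::-1]   # most recent sample first
            y = w @ x                           # noise estimate
            e = primary[n] - y                  # denoised sample
            w += 2 * mu * e * x                 # LMS weight update
            out[n] = e
        return out
    ```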

  20. A new Mumford-Shah total variation minimization based model for sparse-view x-ray computed tomography image reconstruction.

    PubMed

    Chen, Bo; Bian, Zhaoying; Zhou, Xiaohui; Chen, Wensheng; Ma, Jianhua; Liang, Zhengrong

    2018-04-12

    Total variation (TV) minimization for sparse-view x-ray computed tomography (CT) reconstruction has been widely explored to reduce radiation dose. However, due to the piecewise-constant assumption of the TV model, the reconstructed images often suffer from over-smoothness at the image edges. To mitigate this drawback of TV minimization, we present a Mumford-Shah total variation (MSTV) minimization algorithm in this paper. The presented MSTV model is derived by integrating TV minimization and Mumford-Shah segmentation. Subsequently, a penalized weighted least-squares (PWLS) scheme with MSTV is developed for sparse-view CT reconstruction. For simplicity, the proposed algorithm is named 'PWLS-MSTV.' To evaluate the performance of the present PWLS-MSTV algorithm, both qualitative and quantitative studies were conducted by using a digital XCAT phantom and a physical phantom. Experimental results show that the present PWLS-MSTV algorithm has noticeable gains over the existing algorithms in terms of noise reduction, the contrast-to-noise ratio measure, and edge preservation.

  1. OCT angiography by absolute intensity difference applied to normal and diseased human retinas

    PubMed Central

    Ruminski, Daniel; Sikorski, Bartosz L.; Bukowska, Danuta; Szkulmowski, Maciej; Krawiec, Krzysztof; Malukiewicz, Grazyna; Bieganowski, Lech; Wojtkowski, Maciej

    2015-01-01

    We compare four optical coherence tomography techniques for noninvasive visualization of the microcapillary network in the human retina and murine cortex. We perform phantom studies to investigate the contrast-to-noise ratio of angiographic images obtained with each of the algorithms. We show that the computationally simplest, absolute intensity difference angiographic OCT algorithm, which relies only on two cross-sectional intensity images, may be successfully used in clinical studies of healthy eyes and eyes with diabetic maculopathy and branch retinal vein occlusion. PMID:26309740
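
    The absolute intensity difference algorithm named above reduces to a single operation on two repeated B-scans; a sketch:

    ```python
    import numpy as np

    def intensity_difference_angio(frame1, frame2):
        """Angiographic contrast from two repeated cross-sectional OCT
        intensity images: static tissue cancels in the difference while
        moving blood leaves a residual."""
        return np.abs(frame1.astype(float) - frame2.astype(float))
    ```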

  2. Synthetic aperture radar signal data compression using block adaptive quantization

    NASA Technical Reports Server (NTRS)

    Kuduvalli, Gopinath; Dutkiewicz, Melanie; Cumming, Ian

    1994-01-01

    This paper describes the design and testing of an on-board SAR signal data compression algorithm for ESA's ENVISAT satellite. The Block Adaptive Quantization (BAQ) algorithm was selected, and optimized for the various operational modes of the ASAR instrument. A flexible BAQ scheme was developed which allows a selection of compression ratio/image quality trade-offs. Test results show the high quality of the SAR images processed from the reconstructed signal data, and the feasibility of on-board implementation using a single ASIC.
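
    Block adaptive quantization scales each block of samples by its own statistics before coarse quantization. A toy sketch of the idea only; real BAQ (including the flight implementation described above) uses optimal Lloyd-Max thresholds rather than plain rounding:

    ```python
    import numpy as np

    def baq_encode(samples, block_len=128, n_bits=4):
        """Toy block adaptive quantizer: each block is normalized by its
        own standard deviation and uniformly quantized to n_bits."""
        half = 2 ** (n_bits - 1)
        codes, scales = [], []
        for start in range(0, len(samples), block_len):
            block = np.asarray(samples[start:start + block_len], dtype=float)
            sigma = block.std() or 1.0
            q = np.clip(np.round(block / sigma), -half, half - 1)
            codes.append(q.astype(np.int8))
            scales.append(sigma)
        return codes, scales

    def baq_decode(codes, scales):
        """Reconstruct the signal from quantized blocks and their scales."""
        return np.concatenate([q * s for q, s in zip(codes, scales)])
    ```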

  3. Performance of the split-symbol moments SNR estimator in the presence of inter-symbol interference

    NASA Technical Reports Server (NTRS)

    Shah, B.; Hinedi, S.

    1989-01-01

    The Split-Symbol Moments Estimator (SSME) is an algorithm that is designed to estimate symbol signal-to-noise ratio (SNR) in the presence of additive white Gaussian noise (AWGN). The performance of the SSME algorithm in band-limited channels is examined. The effects of the resulting inter-symbol interference (ISI) are quantified. All results obtained are in closed form and can be easily evaluated numerically for performance prediction purposes. Furthermore, they are validated through digital simulations.
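
    The split-symbol idea can be sketched as follows: with u and v the integrated first and second halves of each symbol, the product moment estimates signal power (the half-symbol noises are independent) while the half-difference isolates noise. This is our reconstruction of the idea, not necessarily the paper's exact estimator:

    ```python
    import numpy as np

    def ssme_snr(u, v):
        """Split-symbol SNR estimate from arrays of half-symbol sums.
        With u = A/2 + n1 and v = A/2 + n2 (shared data sign, independent
        noises), E[u*v] = A^2/4 and E[(u-v)^2] equals the full-symbol
        noise variance, so the ratio below estimates A^2/sigma^2."""
        signal_power = np.mean(u * v)            # ~ A^2 / 4
        noise_power = np.mean((u - v) ** 2)      # ~ sigma^2 (full symbol)
        return 4.0 * signal_power / noise_power  # ~ A^2 / sigma^2
    ```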

  4. Hybrid phosphorescence and fluorescence native spectroscopy for breast cancer detection.

    PubMed

    Alimova, Alexandra; Katz, A; Sriramoju, Vidyasagar; Budansky, Yuri; Bykov, Alexei A; Zeylikovich, Roman; Alfano, R R

    2007-01-01

    Fluorescence and phosphorescence measurements are performed on normal and malignant ex vivo human breast tissues using UV LED and xenon lamp excitation. Tryptophan (trp) phosphorescence intensity is higher in both normal glandular and adipose tissue than in malignant tissue. An algorithm based on the ratio of trp fluorescence intensity at 345 nm to phosphorescence intensity at 500 nm is successfully used to separate normal from malignant tissue types. Normal specimens consistently exhibited a low I(345)/I(500) ratio (<10), while for malignant specimens the I(345)/I(500) ratio is consistently high (>15). The ratio analysis correlates well with histopathology. Intensity ratio maps with a spatial resolution of 0.5 mm are generated, in which local regions of malignancy can be identified.

  5. Speedup of minimum discontinuity phase unwrapping algorithm with a reference phase distribution

    NASA Astrophysics Data System (ADS)

    Liu, Yihang; Han, Yu; Li, Fengjiao; Zhang, Qican

    2018-06-01

    In three-dimensional (3D) shape measurement based on phase analysis, the phase analysis process usually produces a wrapped phase map ranging from −π to π with 2π discontinuities, and thus a phase unwrapping algorithm is necessary to recover the continuous, natural phase map from which the 3D height distribution can be restored. The minimum discontinuity phase unwrapping algorithm can usually solve many different kinds of phase unwrapping problems, but its main drawback is that it requires a large amount of computation and has low efficiency in searching for the improving loop within the phase's discontinuity area. To overcome this drawback, an improvement that speeds up the minimum discontinuity phase unwrapping algorithm by using the phase distribution on a reference plane is proposed. In this improved algorithm, before the minimum discontinuity phase unwrapping algorithm is carried out, an integer number K is calculated from the ratio of the wrapped phase to the natural phase on a reference plane. The jump counts of the unwrapped phase can then be reduced by adding 2Kπ, so the efficiency of the minimum discontinuity phase unwrapping algorithm is significantly improved. Both simulated and experimental results verify the feasibility of the proposed improved algorithm, and both clearly show that the algorithm works well and has high efficiency.
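
    The speedup described above amounts to pre-aligning the wrapped phase with the reference before running the unwrapper; a sketch:

    ```python
    import numpy as np

    def prealign_with_reference(wrapped, reference):
        """Add the integer number of 2*pi cycles implied by the reference
        phase, so that the minimum-discontinuity unwrapper has far fewer
        residual jumps to resolve."""
        k = np.round((reference - wrapped) / (2 * np.pi))
        return wrapped + 2 * np.pi * k
    ```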

  6. Network Coded Cooperative Communication in a Real-Time Wireless Hospital Sensor Network.

    PubMed

    Prakash, R; Balaji Ganesh, A; Sivabalan, Somu

    2017-05-01

    The paper presents a network coded cooperative communication (NC-CC) enabled wireless hospital sensor network architecture for monitoring the health as well as postural activities of a patient. A wearable device, referred to as a smartband, is interfaced with pulse rate and body temperature sensors and an accelerometer, along with wireless protocol services such as Bluetooth, a radio-frequency transceiver, and Wi-Fi. The energy efficiency of the wearable device is improved by embedding a linear acceleration based transmission duty cycling algorithm (LA-TDC). A real-time demonstration is carried out in a hospital environment to evaluate performance characteristics such as power spectral density, energy consumption, signal-to-noise ratio, packet delivery ratio, and transmission offset. The resource sharing and energy efficiency features of the network coding technique are improved by proposing an algorithm referred to as network coding based dynamic retransmit/rebroadcast decision control (NC-DRDC). From the experimental results, it is observed that the proposed NC-DRDC algorithm reduces network traffic and end-to-end delay by an average of 27.8% and 21.6%, respectively, compared with traditional network coded wireless transmission. The wireless architecture is deployed in a hospital environment and the results are then successfully validated.

  7. Iterative image reconstruction for PROPELLER-MRI using the nonuniform fast fourier transform.

    PubMed

    Tamhane, Ashish A; Anastasio, Mark A; Gui, Minzhi; Arfanakis, Konstantinos

    2010-07-01

    To investigate an iterative image reconstruction algorithm using the nonuniform fast Fourier transform (NUFFT) for PROPELLER (Periodically Rotated Overlapping ParallEL Lines with Enhanced Reconstruction) MRI. Numerical simulations, as well as experiments on a phantom and a healthy human subject, were used to evaluate the performance of the iterative image reconstruction algorithm for PROPELLER and compare it with that of conventional gridding. The trade-off between spatial resolution, signal-to-noise ratio, and image artifacts was investigated for different values of the regularization parameter. The performance of the iterative image reconstruction algorithm in the presence of motion was also evaluated. It was demonstrated that, for a certain range of values of the regularization parameter, iterative reconstruction produced images with significantly increased signal-to-noise ratio and reduced artifacts at similar spatial resolution, compared with gridding. Furthermore, the ability to reduce the effects of motion in PROPELLER-MRI was maintained when using the iterative reconstruction approach. An iterative image reconstruction technique based on the NUFFT was investigated for PROPELLER MRI. For a certain range of values of the regularization parameter, the new reconstruction technique may provide PROPELLER images with improved image quality compared with conventional gridding. (c) 2010 Wiley-Liss, Inc.

  8. Medical Image Compression Based on Vector Quantization with Variable Block Sizes in Wavelet Domain

    PubMed Central

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

    An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnosis-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficient subband. An optimal quadtree method was then employed to partition each subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. Finally, vector quantization coding was implemented in the different types of sub-blocks. To verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as contrast algorithms. Experimental results show that the proposed method can improve compression performance and can achieve a balance between the compression ratio and the visual quality of the image. PMID:23049544
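
    A toy version of the codebook step (plain k-means over fixed-size blocks via scipy; the paper additionally varies the block size with a quadtree and modifies k-means with an energy function):

    ```python
    import numpy as np
    from scipy.cluster.vq import kmeans, vq

    def vq_encode_subband(coeffs, block=4, n_codewords=64):
        """Quantize a high-frequency wavelet subband whose sides are
        multiples of `block`; the subband is then stored as small integer
        indices plus the trained codebook."""
        h, w = coeffs.shape
        vecs = (coeffs.reshape(h // block, block, w // block, block)
                      .swapaxes(1, 2)
                      .reshape(-1, block * block)).astype(float)
        codebook, _ = kmeans(vecs, n_codewords)  # train the codebook
        indices, _ = vq(vecs, codebook)          # nearest-codeword assignment
        return codebook, indices
    ```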

  9. Medical image compression based on vector quantization with variable block sizes in wavelet domain.

    PubMed

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

    An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnosis-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficient subband. An optimal quadtree method was then employed to partition each subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. Finally, vector quantization coding was implemented in the different types of sub-blocks. To verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as contrast algorithms. Experimental results show that the proposed method can improve compression performance and can achieve a balance between the compression ratio and the visual quality of the image.

  10. Simulation of large particle transport near the surface under stable conditions: comparison with the Hanford tracer experiments

    NASA Astrophysics Data System (ADS)

    Kim, Eugene; Larson, Timothy

    A plume model is presented describing the downwind transport of large particles (1-100 μm) under stable conditions. The model includes both vertical variations in wind speed and turbulence intensity as well as an algorithm for particle deposition at the surface. Model predictions compare favorably with the Hanford single and dual tracer experiments of crosswind integrated concentration (for particles: relative bias=-0.02 and 0.16, normalized mean square error=0.61 and 0.14, for the single and dual tracer experiments, respectively), whereas the US EPA's fugitive dust model consistently overestimates the observed concentrations at downwind distances beyond several hundred meters (for particles: relative bias=0.31 and 2.26, mean square error=0.42 and 1.71, respectively). For either plume model, the measured ratio of particle to gas concentration is consistently overestimated when using the deposition velocity algorithm of Sehmel and Hodgson (1978. DOE Report PNL-SA-6721, Pacific Northwest Laboratories, Richland, WA). In contrast, these same ratios are predicted with relatively little bias when using the algorithm of Kim et al. (2000. Atmospheric Environment 34 (15), 2387-2397).

  11. An overview of remote sensing of chlorophyll fluorescence

    NASA Astrophysics Data System (ADS)

    Xing, Xiao-Gang; Zhao, Dong-Zhi; Liu, Yu-Guang; Yang, Jian-Hong; Xiu, Peng; Wang, Lin

    2007-03-01

    Besides empirical algorithms based on the blue-green ratio, algorithms based on fluorescence are also important and valid methods for retrieving chlorophyll-a concentration in ocean waters, especially for Case II waters and seas with algal blooms. This study reviews the history of the initial cognition, investigation, and detailed study of chlorophyll fluorescence, and then introduces the biological mechanism of fluorescence remote sensing and its main spectral characteristics, such as the positive correlation between fluorescence and chlorophyll concentration and the red-shift phenomenon. Meanwhile, many factors complicate fluorescence remote sensing, such as the fluorescence quantum yield, the physiological status of various algae, other substances in the ocean with related optical properties, and atmospheric absorption. Based on this understanding, scientists have found two ways to calculate the amount of fluorescence detected by ocean color sensors: fluorescence line height and reflectance ratio. These two approaches are currently the foundation for retrieval of chlorophyll-a concentration in the ocean. As in-situ measurements and synchronous satellite data continue to accumulate, fluorescence remote sensing of chlorophyll-a concentration in Case II waters should become more thoroughly understood, and new algorithms can be expected.
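
    Of the two retrieval routes, the fluorescence line height has a particularly compact form: the radiance at the fluorescence band minus a linear baseline interpolated between two flanking bands. A sketch with MODIS-like band centres (substitute the wavelengths of the sensor at hand):

    ```python
    def fluorescence_line_height(L665, L678, L746):
        """FLH = L(678) minus the baseline interpolated between 665 and 746 nm;
        inputs may be scalars or numpy arrays of radiances."""
        lam1, lam2, lam3 = 665.0, 678.0, 746.0
        baseline = L665 + (L746 - L665) * (lam2 - lam1) / (lam3 - lam1)
        return L678 - baseline
    ```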

  12. Detection and tracking of a moving target using SAR images with the particle filter-based track-before-detect algorithm.

    PubMed

    Gao, Han; Li, Jingwen

    2014-06-19

    A novel approach to detecting and tracking a moving target using synthetic aperture radar (SAR) images is proposed in this paper. Achieved with the particle filter (PF) based track-before-detect (TBD) algorithm, the approach is capable of detecting and tracking a low signal-to-noise ratio (SNR) moving target with SAR systems, for which the traditional track-after-detect (TAD) approach is inadequate. By incorporating the signal model of the SAR moving target into the algorithm, the ambiguity in target azimuth position and radial velocity is resolved while tracking, which leads directly to the true estimate. With a sub-area substituted for the whole area to calculate the likelihood ratio and a pertinent choice of the number of particles, the computational efficiency is improved with little loss in detection and tracking performance. The feasibility of the approach is validated and the performance is evaluated with Monte Carlo trials. It is demonstrated that the proposed approach is capable of detecting and tracking a moving target with an SNR as low as 7 dB, and outperforms the traditional TAD approach when the SNR is below 14 dB.
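
    The TBD machinery rests on a bootstrap particle filter; a generic predict-update-resample step is sketched below (constant-velocity motion and a simple pixel-amplitude likelihood stand in for the paper's SAR-specific signal model and sub-area likelihood evaluation):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def pf_step(particles, weights, frame, dt=1.0, q=0.5, sigma=1.0):
        """One step for particles of state [x, vx, y, vy] against a 2-D image
        frame; weights are multiplied by an amplitude-based likelihood."""
        N = len(particles)
        # Predict: constant-velocity motion plus Gaussian process noise.
        particles[:, 0] += dt * particles[:, 1] + q * rng.standard_normal(N)
        particles[:, 2] += dt * particles[:, 3] + q * rng.standard_normal(N)
        # Update: weight by pixel amplitude under a Gaussian-noise model.
        ix = np.clip(particles[:, 0].astype(int), 0, frame.shape[1] - 1)
        iy = np.clip(particles[:, 2].astype(int), 0, frame.shape[0] - 1)
        weights = weights * np.exp(frame[iy, ix] / sigma**2)
        weights /= weights.sum()
        # Resample when the effective sample size collapses.
        if 1.0 / np.sum(weights**2) < N / 2:
            idx = rng.choice(N, size=N, p=weights)
            particles, weights = particles[idx], np.full(N, 1.0 / N)
        return particles, weights
    ```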

  13. Detection and Tracking of a Moving Target Using SAR Images with the Particle Filter-Based Track-Before-Detect Algorithm

    PubMed Central

    Gao, Han; Li, Jingwen

    2014-01-01

    A novel approach to detecting and tracking a moving target using synthetic aperture radar (SAR) images is proposed in this paper. Achieved with the particle filter (PF) based track-before-detect (TBD) algorithm, the approach is capable of detecting and tracking a low signal-to-noise ratio (SNR) moving target with SAR systems, for which the traditional track-after-detect (TAD) approach is inadequate. By incorporating the signal model of the SAR moving target into the algorithm, the ambiguity in target azimuth position and radial velocity is resolved while tracking, which leads directly to the true estimate. With a sub-area substituted for the whole area to calculate the likelihood ratio and a pertinent choice of the number of particles, the computational efficiency is improved with little loss in detection and tracking performance. The feasibility of the approach is validated and the performance is evaluated with Monte Carlo trials. It is demonstrated that the proposed approach is capable of detecting and tracking a moving target with an SNR as low as 7 dB, and outperforms the traditional TAD approach when the SNR is below 14 dB. PMID:24949640

  14. Dynamic optical resource allocation for mobile core networks with software defined elastic optical networking.

    PubMed

    Zhao, Yongli; Chen, Zhendong; Zhang, Jie; Wang, Xinbo

    2016-07-25

    Driven by the advent of 5G mobile communications, the all-IP architecture of mobile core networks, i.e., the evolved packet core (EPC) proposed by 3GPP, has been greatly challenged by users' demands for higher data rates and more reliable end-to-end connections, as well as operators' demands for low operational costs. These challenges can potentially be met by software defined optical networking (SDON), which enables dynamic resource allocation according to users' requirements. In this article, a novel network architecture for the mobile core network is proposed based on SDON. A software defined network (SDN) controller is designed to realize coordinated control over different entities in EPC networks. We analyze the requirements of the EPC-lightpath (EPCL) in the data plane and propose an optical switch load balancing (OSLB) algorithm for resource allocation in the optical layer. The procedure for establishment and adjustment of EPCLs is demonstrated on an SDON-based EPC testbed with an extended OpenFlow protocol. We also evaluate the OSLB algorithm through simulation in terms of bandwidth blocking ratio, traffic load distribution, and resource utilization ratio, compared with link-based load balancing (LLB) and MinHops algorithms.

  15. MCMAC-cVT: a novel on-line associative memory based CVT transmission control system.

    PubMed

    Ang, K K; Quek, C; Wahab, A

    2002-03-01

    This paper describes a novel application of an associative memory called the Modified Cerebellar Articulation Controller (MCMAC) (Int. J. Artif. Intell. Engng, 10 (1996) 135) in a continuously variable transmission (CVT) control system. It allows on-line tuning of the associative memory and produces an effective gain schedule for the automatic selection of the CVT gear ratio. Various control algorithms are investigated to control the CVT gear ratio so as to maintain the engine speed within a narrow range of efficient operating speeds independently of the vehicle velocity. Extensive simulation results are presented to evaluate the control performance of a direct digital PID control algorithm with auto-tuning (Trans. ASME, 64 (1942)) and an anti-windup mechanism. In particular, these results are contrasted against the control performance produced using the MCMAC (Int. J. Artif. Intell. Engng, 10 (1996) 135) with momentum, neighborhood learning, and Averaged Trapezoidal Output (MCMAC-ATO) as the neural control algorithm for controlling the CVT. Simulation results show the reduced control fluctuations and improved learning capability of the MCMAC-ATO without incurring greater memory requirements. In particular, MCMAC-ATO is able to learn and control the CVT simultaneously while maintaining acceptable control performance.

  16. Link Scheduling Algorithm with Interference Prediction for Multiple Mobile WBANs

    PubMed Central

    Le, Thien T. T.

    2017-01-01

    As wireless body area networks (WBANs) become a key element in electronic healthcare (e-healthcare) systems, the coexistence of multiple mobile WBANs is becoming an issue. The network performance is negatively affected by the unpredictable movement of the human body. In such an environment, inter-WBAN interference can be caused by the overlapping transmission range of nearby WBANs. We propose a link scheduling algorithm with interference prediction (LSIP) for multiple mobile WBANs, which allows multiple mobile WBANs to transmit at the same time without causing inter-WBAN interference. In the LSIP, a superframe includes the contention access phase using carrier sense multiple access with collision avoidance (CSMA/CA) and the scheduled phase using time division multiple access (TDMA) for non-interfering nodes and interfering nodes, respectively. For interference prediction, we define a parameter called interference duration as the duration during which disparate WBANs interfere with each other. The Bayesian model is used to estimate and classify the interference using a signal to interference plus noise ratio (SINR) and the number of neighboring WBANs. The simulation results show that the proposed LSIP algorithm improves the packet delivery ratio and throughput significantly with acceptable delay. PMID:28956827

  17. Vital sign sensing method based on EMD in terahertz band

    NASA Astrophysics Data System (ADS)

    Xu, Zhengwu; Liu, Tong

    2014-12-01

    Non-contact detection of respiration and heartbeat rates can be used to find survivors trapped in a disaster or for remote monitoring of a patient's respiration and heartbeat. This study presents an improved algorithm that extracts the respiration and heartbeat rates of humans using terahertz radar, further lessening the effects of noise, suppressing the cross-term, and enhancing detection accuracy. A human target echo model for the terahertz radar is first presented. Combining the over-sampling method, a low-pass filter, and Empirical Mode Decomposition improves the signal-to-noise ratio. The smoothed pseudo Wigner-Ville distribution time-frequency technique and the centroid of the spectrogram are used to estimate the instantaneous velocity of the target's cardiopulmonary motion. The down-sampling method is adopted to prevent serious distortion. Finally, a second time-frequency analysis is applied to the centroid curve to extract the respiration and heartbeat rates of the individual. Simulation results show that, compared with the previously presented vital sign sensing method, the improved algorithm remains effective at a signal-to-noise ratio as low as 1 dB, with a detection accuracy of 80%. The improved algorithm is an effective approach for detecting respiration and heartbeat signals in a complicated environment.

  18. Adaptive noise correction of dual-energy computed tomography images.

    PubMed

    Maia, Rafael Simon; Jacob, Christian; Hara, Amy K; Silva, Alvin C; Pavlicek, William; Mitchell, J Ross

    2016-04-01

    Noise reduction in material density images is a necessary preprocessing step for the correct interpretation of dual-energy computed tomography (DECT) images. In this paper we describe a new method based on local adaptive processing to reduce noise in DECT images. An adaptive neighborhood Wiener (ANW) filter was implemented and customized to use local characteristics of material density images. The ANW filter employs a three-level wavelet approach combined with the application of an anisotropic diffusion filter. Material density images and virtual monochromatic images are noise corrected with the two resulting noise maps. The algorithm was applied and quantitatively evaluated on a set of 36 images. From that set of images, three are shown here, and nine more are shown in the online supplementary material. Processed images had higher signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) than the raw material density images. The average improvements in SNR and CNR for the material density images were 56.5% and 54.75%, respectively. We developed a new DECT noise reduction algorithm. We demonstrate through a series of quantitative analyses that the algorithm improves the quality of material density images and virtual monochromatic images.
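
    The Wiener core of such a filter is available directly in scipy (the published method adds the wavelet decomposition and anisotropic diffusion on top of this local-statistics step):

    ```python
    from scipy.signal import wiener

    def denoise_material_density(img, window=5):
        """Adaptive Wiener filtering: scipy estimates the local mean and
        variance in a window x window neighbourhood and attenuates pixels
        where the local variance approaches the noise variance."""
        return wiener(img.astype(float), mysize=window)
    ```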

  19. Real-time fluorescence target/background (T/B) ratio calculation in multimodal endoscopy for detecting GI tract cancer

    NASA Astrophysics Data System (ADS)

    Jiang, Yang; Gong, Yuanzheng; Wang, Thomas D.; Seibel, Eric J.

    2017-02-01

    Multimodal endoscopy, with fluorescence-labeled probes binding to overexpressed molecular targets, is a promising technology for visualizing early-stage cancer. The T/B ratio is the quantitative measure used to correlate fluorescence regions with cancer. Currently, the T/B ratio is calculated in post-processing and does not provide real-time feedback to the endoscopist. To achieve real-time computer-assisted diagnosis (CAD), we establish image processing protocols for calculating the T/B ratio and locating high-risk fluorescence regions to guide biopsy and therapy in Barrett's esophagus (BE) patients. Methods: The Chan-Vese algorithm, an active contour model, is used to segment high-risk regions in fluorescence videos. A semi-implicit gradient descent method was applied to minimize the energy function of this algorithm and evolve the segmentation. The surrounding background was then identified using morphology operations. The average T/B ratio was computed and regions of interest were highlighted based on user-selected thresholding. Evaluation was conducted on 50 fluorescence videos acquired from clinical recordings using a custom multimodal endoscope. Results: With a processing speed of 2 fps on a laptop computer, we obtained accurate segmentation of high-risk regions, as judged by experts. For each case, the clinical user could optimize the target boundary by changing the penalty on the area inside the contour. Conclusion: An automatic, real-time procedure for calculating the T/B ratio and identifying high-risk regions of early esophageal cancer was developed. Future work will increase the processing speed to at least 5 fps, refine the clinical interface, and extend the approach to additional GI cancers and fluorescence peptides.
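
    Once the segmentation is in hand, the T/B computation itself is straightforward; a sketch assuming a boolean target mask, with a dilated ring standing in for the paper's morphological background step (the ring width is a placeholder):

    ```python
    import numpy as np
    from scipy.ndimage import binary_dilation

    def target_background_ratio(frame, target_mask, ring=10):
        """Mean intensity inside the segmented target divided by the mean
        intensity of a surrounding background ring."""
        bg_mask = binary_dilation(target_mask, iterations=ring) & ~target_mask
        return frame[target_mask].mean() / frame[bg_mask].mean()
    ```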

  20. Indications for MARS-MRI in Patients Treated With Metal-on-Metal Hip Resurfacing Arthroplasty.

    PubMed

    Connelly, James W; Galea, Vincent P; Matuszak, Sean J; Madanat, Rami; Muratoglu, Orhun; Malchau, Henrik

    2018-06-01

    Currently, there are no universally accepted guidelines on when to obtain metal artifact reduction sequence magnetic resonance imaging (MARS-MRI) in metal-on-metal (MoM) hip resurfacing arthroplasty (HRA) patients. Our primary aims were to identify which patient and clinical factors are predictive of adverse local tissue reaction (ALTR) and create an algorithm for indicating MARS-MRI in patients with Articular Surface Replacement (ASR) HRA. The secondary aim was to compare our algorithm to existing guidelines on when to perform MARS-MRI in MoM HRA patients. The study cohort consisted of 182 patients with unilateral ASR HRA from a prospective, multicenter study. Subjects received MARS-MRI at a mean of 7.8 years from surgery, regardless of symptoms. We determined which variables were predictive of ALTR and generated cutoffs for each variable. Finally, we created an algorithm to predict ALTR and indicate MARS-MRI in ASR HRA patients using these cutoffs and compared it to existing guidelines. We found high blood cobalt (Co) (odds ratio = 1.070; P = .011) and high blood chromium (Cr) (odds ratio = 1.162; P = .002) to be significant predictors of ALTR presence. Our algorithm using a blood Co cutoff of 1.15 ppb and a Cr cutoff of 1.09 ppb achieved 96.6% sensitivity and 35.3% specificity in predicting ALTR, which outperformed the existing guidelines. Blood Co and Cr levels are predictive of ALTR in ASR HRA patients. Our algorithm considering blood Co and Cr levels predicts ALTR in ASR HRA patients with higher sensitivity than previously established guidelines. Copyright © 2018 Elsevier Inc. All rights reserved.
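
    The resulting indication rule reduces to a blood-metal threshold check; a sketch using the cutoffs quoted above (the abstract does not state how Co and Cr are combined, so the either/or logic here is an assumption):

    ```python
    def indicate_mars_mri(cobalt_ppb, chromium_ppb,
                          co_cutoff=1.15, cr_cutoff=1.09):
        """Flag an ASR HRA patient for MARS-MRI when either blood metal
        level exceeds its cutoff (cutoffs from the abstract; the
        combination rule is our illustrative assumption)."""
        return cobalt_ppb > co_cutoff or chromium_ppb > cr_cutoff
    ```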

  1. Increased prognostic accuracy of TBI when a brain electrical activity biomarker is added to loss of consciousness (LOC).

    PubMed

    Hack, Dallas; Huff, J Stephen; Curley, Kenneth; Naunheim, Roseanne; Ghosh Dastidar, Samanwoy; Prichep, Leslie S

    2017-07-01

    Extremely high accuracy for predicting CT+ traumatic brain injury (TBI) using a quantitative EEG (QEEG) based multivariate classification algorithm was demonstrated in an independent validation trial in Emergency Department (ED) patients, using an easy-to-use handheld device. This study compares the predictive power of that algorithm (which includes LOC and amnesia) with the predictive power of LOC alone or LOC plus traumatic amnesia. ED patients aged 18-85 years presenting within 72 h of closed head injury, with GCS 12-15, were study candidates. 680 patients with known absence or presence of LOC were enrolled (145 CT+ and 535 CT- patients). 5-10 min of eyes-closed EEG was acquired using the Ahead 300 handheld device from frontal and frontotemporal regions. The same classification algorithm methodology was used for both the EEG based and the LOC based algorithms. Predictive power was evaluated using the area under the ROC curve (AUC) and odds ratios. The QEEG based classification algorithm demonstrated significantly improved predictive power compared with LOC alone, both in AUC (83% improvement) and odds ratio (increase from 4.65 to 16.22). Adding RGA and/or PTA to LOC did not improve prediction over LOC alone. Rapid triage of TBI relies on strong initial predictors. Addition of an electrophysiological marker was shown to outperform report of LOC alone or LOC plus amnesia in determining the risk of an intracranial bleed. In addition, the ease of use at point-of-care, non-invasiveness, and rapid results of such technology suggest significant value added to standard clinical prediction. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. Obstacle Detection and Avoidance System Based on Monocular Camera and Size Expansion Algorithm for UAVs

    PubMed Central

    Al-Kaff, Abdulla; García, Fernando; Martín, David; De La Escalera, Arturo; Armingol, José María

    2017-01-01

    One of the most challenging problems in the domain of autonomous aerial vehicles is the design of a robust real-time obstacle detection and avoidance system. The problem is complex, especially for micro and small aerial vehicles, due to Size, Weight and Power (SWaP) constraints. Therefore, a lightweight sensor (i.e., a digital camera) can be the best choice compared with other sensors such as laser or radar. For real-time applications, different works are based on stereo cameras in order to obtain a 3D model of the obstacles or to estimate their depth. Instead, in this paper, a method that mimics the human behavior of detecting the collision state of approaching obstacles using a monocular camera is proposed. The key to the proposed algorithm is to analyze the size changes of the detected feature points, combined with the expansion ratios of the convex hull constructed around the detected feature points in consecutive frames. During the Unmanned Aerial Vehicle (UAV) motion, the detection algorithm estimates the changes in the size of the area of the approaching obstacles. First, the method detects the feature points of the obstacles, then extracts the obstacles that are likely to approach the UAV. Secondly, by comparing the area ratio of the obstacle and the position of the UAV, the method decides whether the detected obstacle may cause a collision. Finally, by estimating the obstacle's 2D position in the image and combining it with the tracked waypoints, the UAV performs the avoidance maneuver. The proposed algorithm was evaluated in real indoor and outdoor flights, and the obtained results show the accuracy of the proposed algorithm compared with other related works. PMID:28481277
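
    The size-expansion cue can be sketched with OpenCV convex hulls over matched feature points from consecutive frames (the collision-decision threshold on the ratio is left out):

    ```python
    import numpy as np
    import cv2

    def expansion_ratio(pts_prev, pts_curr):
        """Ratio of convex-hull areas of matched feature points in two
        consecutive frames; a ratio well above 1 suggests an approaching
        obstacle."""
        hull_prev = cv2.convexHull(np.asarray(pts_prev, dtype=np.float32))
        hull_curr = cv2.convexHull(np.asarray(pts_curr, dtype=np.float32))
        return cv2.contourArea(hull_curr) / max(cv2.contourArea(hull_prev), 1e-6)
    ```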

  3. Waveform Similarity Analysis: A Simple Template Comparing Approach for Detecting and Quantifying Noisy Evoked Compound Action Potentials.

    PubMed

    Potas, Jason Robert; de Castro, Newton Gonçalves; Maddess, Ted; de Souza, Marcio Nogueira

    2015-01-01

    Experimental electrophysiological assessment of evoked responses from regenerating nerves is challenging due to the typically complex response of events dispersed over various latencies and poor signal-to-noise ratio. Our objective was to automate the detection of compound action potential events and derive their latencies and magnitudes using a simple cross-correlation template comparison approach. For this, we developed an algorithm called Waveform Similarity Analysis. To test the algorithm, challenging signals were generated in vivo by stimulating sural and sciatic nerves, whilst recording evoked potentials at the sciatic nerve and tibialis anterior muscle, respectively, in animals recovering from sciatic nerve transection. Our template for the algorithm was generated based on responses evoked from the intact side. We also simulated noisy signals and examined the output of the Waveform Similarity Analysis algorithm with imperfect templates. Signals were detected and quantified using Waveform Similarity Analysis, which was compared to event detection, latency and magnitude measurements of the same signals performed by a trained observer, a process we called Trained Eye Analysis. The Waveform Similarity Analysis algorithm could successfully detect and quantify simple or complex responses from nerve and muscle compound action potentials of intact or regenerated nerves. Even with an incorrectly specified template, Waveform Similarity Analysis outperformed Trained Eye Analysis for predicting signal amplitude, but produced consistent latency errors for the simulated signals examined. Compared to the trained eye, Waveform Similarity Analysis is automatic, objective, does not rely on the observer to identify and/or measure peaks, and can detect small clustered events even when the signal-to-noise ratio is poor. Waveform Similarity Analysis provides a simple, reliable and convenient approach to quantify latencies and magnitudes of complex waveforms and therefore serves as a useful tool for studying evoked compound action potentials in neural regeneration studies.
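
    A minimal stand-in for the template-comparison core is a sliding Pearson correlation against the template (the algorithm's event grouping and magnitude estimation are omitted):

    ```python
    import numpy as np

    def detect_events(signal, template, threshold=0.8):
        """Return the sample lags where the windowed correlation coefficient
        with the template exceeds `threshold`, plus the full score trace."""
        n = len(template)
        t = (template - template.mean()) / (template.std() + 1e-12)
        scores = np.empty(len(signal) - n + 1)
        for k in range(len(scores)):
            w = signal[k:k + n]
            scores[k] = np.dot((w - w.mean()) / (w.std() + 1e-12), t) / n
        return np.flatnonzero(scores > threshold), scores
    ```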

  4. Contrast features of breast cancer in frequency-domain laser scanning mammography

    NASA Astrophysics Data System (ADS)

    Moesta, K. Thomas; Fantini, Sergio; Jess, Helge; Totkas, Susan; Franceschini, Maria-Angela; Kaschke, Michael; Schlag, Peter M.

    1998-04-01

    Frequency-domain optical mammography has been advocated to improve contrast and thus cancer detectability in breast transillumination. To the best of our knowledge, this report provides the first systematic clinical results of a frequency-domain laser scanning mammograph (FLM). The instrument provides monochromatic light at 690 and 810 nm, intensity modulated at 110.0008 MHz. The breast is scanned by stepwise positioning of source and detector, and amplitude and phase at both wavelengths are measured by a photomultiplier tube using heterodyne detection. Images are formed by representing amplitude or phase data on linear gray scales. Furthermore, various algorithms combining more than one signal were tested. Twenty visible cancers out of 25 cancers in the first 59 investigations were analyzed for their quantitative contrast with respect to the whole breast or to defined reference areas. Contrast definitions refer to the signal itself, to the signal noise, or were based on nonparametric comparison. The amplitude signal provides better contrast than the phase signal. Ratio images between red and IR amplitudes gave variable results; in some cases the tumor contrast was canceled. The algorithms for determining μa and μs′ from amplitude and phase data did not significantly improve objective contrast. The N algorithm, which uses the phase signal to flatten the amplitude signal, significantly improved contrast according to contrast definitions 1 and 2, but did not improve nonparametric contrast. Thus, with the current instrumentation, the phase signal is helpful to correct for the complex and variable geometry of the breast; however, an independent informational content for tumor differentiation could not be determined. The flat-field algorithm greatly enhanced optical contrast in comparison with amplitude or amplitude-ratio images. Further evaluation of FLM will have to be based on the N-algorithm images.

  5. Clinical update on optimal prandial insulin dosing using a refined run-to-run control algorithm.

    PubMed

    Zisser, Howard; Palerm, Cesar C; Bevier, Wendy C; Doyle, Francis J; Jovanovic, Lois

    2009-05-01

    This article provides a clinical update using a novel run-to-run algorithm to optimize prandial insulin dosing based on sparse glucose measurements from the previous day's meals. The objective was to use a refined run-to-run algorithm to calculate prandial insulin-to-carbohydrate ratios (I:CHO) for meals of variable carbohydrate content in subjects with type 1 diabetes (T1DM). The open-labeled, nonrandomized study took place over a 6-week period in a nonprofit research center. Nine subjects with T1DM using continuous subcutaneous insulin infusion participated. Basal insulin rates were optimized using continuous glucose monitoring, with a target fasting blood glucose of 90 mg/dl. Subjects monitored blood glucose concentration at the beginning of the meal and at 60 and 120 minutes after the start of the meal. They were instructed to start meals with blood glucose levels between 70 and 130 mg/dl. Subjects were contacted daily to collect data for the previous 24-hour period and to give them the physician-approved, algorithm-derived I:CHO ratios for the next 24 hours. Subjects calculated the amount of the insulin bolus for each meal based on the corresponding I:CHO and their estimate of the meal's carbohydrate content. One- and 2-hour postprandial glucose concentrations served as the main outcome measures. The mean 1-hour postprandial blood glucose level was 104 +/- 19 mg/dl. The 2-hour postprandial levels (96.5 +/- 18 mg/dl) approached the preprandial levels (90.1 +/- 13 mg/dl). Run-to-run algorithms are able to improve postprandial blood glucose levels in subjects with T1DM. 2009 Diabetes Technology Society.
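
    The flavour of a run-to-run correction can be sketched as a proportional update of the I:CHO ratio from the previous day's postprandial readings (the gain, target, and functional form below are illustrative; the study used a refined, physician-approved update law):

    ```python
    def update_icho(icho_prev, bg_1h, bg_2h, target=100.0, gain=0.01):
        """One daily update of an insulin-to-carbohydrate ratio expressed as
        grams of carbohydrate per unit of insulin: postprandial glucose
        above target calls for more insulin per gram, i.e. a smaller ratio."""
        error = 0.5 * (bg_1h + bg_2h) - target  # mg/dl above target
        return max(icho_prev * (1.0 - gain * error / 100.0), 1.0)
    ```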

  6. Semiautomated tremor detection using a combined cross-correlation and neural network approach

    USGS Publications Warehouse

    Horstmann, Tobias; Harrington, Rebecca M.; Cochran, Elizabeth S.

    2013-01-01

    Despite observations of tectonic tremor in many locations around the globe, the emergent phase arrivals, low‒amplitude waveforms, and variable event durations make automatic detection a nontrivial task. In this study, we employ a new method to identify tremor in large data sets using a semiautomated technique. The method first reduces the data volume with an envelope cross‒correlation technique, followed by a Self‒Organizing Map (SOM) algorithm to identify and classify event types. The method detects tremor in an automated fashion after calibrating for a specific data set, hence we refer to it as being “semiautomated”. We apply the semiautomated detection algorithm to a newly acquired data set of waveforms from a temporary deployment of 13 seismometers near Cholame, California, from May 2010 to July 2011. We manually identify tremor events in a 3 week long test data set and compare to the SOM output and find a detection accuracy of 79.5%. Detection accuracy improves with increasing signal‒to‒noise ratios and number of available stations. We find detection completeness of 96% for tremor events with signal‒to‒noise ratios above 3 and optimal results when data from at least 10 stations are available. We compare the SOM algorithm to the envelope correlation method of Wech and Creager and find the SOM performs significantly better, at least for the data set examined here. Using the SOM algorithm, we detect 2606 tremor events with a cumulative signal duration of nearly 55 h during the 13 month deployment. Overall, the SOM algorithm is shown to be a flexible new method that utilizes characteristics of the waveforms to identify tremor from noise or other seismic signals.

  7. Semiautomated tremor detection using a combined cross-correlation and neural network approach

    NASA Astrophysics Data System (ADS)

    Horstmann, T.; Harrington, R. M.; Cochran, E. S.

    2013-09-01

    Despite observations of tectonic tremor in many locations around the globe, the emergent phase arrivals, low-amplitude waveforms, and variable event durations make automatic detection a nontrivial task. In this study, we employ a new method to identify tremor in large data sets using a semiautomated technique. The method first reduces the data volume with an envelope cross-correlation technique, followed by a Self-Organizing Map (SOM) algorithm to identify and classify event types. The method detects tremor in an automated fashion after calibrating for a specific data set, hence we refer to it as being "semiautomated". We apply the semiautomated detection algorithm to a newly acquired data set of waveforms from a temporary deployment of 13 seismometers near Cholame, California, from May 2010 to July 2011. We manually identify tremor events in a 3 week long test data set and compare to the SOM output and find a detection accuracy of 79.5%. Detection accuracy improves with increasing signal-to-noise ratios and number of available stations. We find detection completeness of 96% for tremor events with signal-to-noise ratios above 3 and optimal results when data from at least 10 stations are available. We compare the SOM algorithm to the envelope correlation method of Wech and Creager and find the SOM performs significantly better, at least for the data set examined here. Using the SOM algorithm, we detect 2606 tremor events with a cumulative signal duration of nearly 55 h during the 13 month deployment. Overall, the SOM algorithm is shown to be a flexible new method that utilizes characteristics of the waveforms to identify tremor from noise or other seismic signals.

  8. Algorithm Development and Validation for Satellite-Derived Distributions of DOC and CDOM in the US Middle Atlantic Bight

    NASA Technical Reports Server (NTRS)

    Mannino, Antonio; Russ, Mary E.; Hooker, Stanford B.

    2007-01-01

    In coastal ocean waters, distributions of dissolved organic carbon (DOC) and chromophoric dissolved organic matter (CDOM) vary seasonally and interannually due to multiple source inputs and removal processes. We conducted several oceanographic cruises within the continental margin of the U.S. Middle Atlantic Bight (MAB) to collect field measurements in order to develop algorithms to retrieve CDOM and DOC from NASA's MODIS-Aqua and SeaWiFS satellite sensors. To develop empirical algorithms for CDOM and DOC, we correlated the CDOM absorption coefficient (a_cdom) with in situ radiometry (remote sensing reflectance, Rrs, band ratios) and then correlated DOC to Rrs band ratios through the CDOM-to-DOC relationships. Our validation analyses demonstrate successful retrieval of DOC and CDOM from coastal ocean waters using the MODIS-Aqua and SeaWiFS satellite sensors, with mean absolute percent differences from field measurements of <9% for DOC, 20% for a_cdom(355), 6% for a_cdom(443), and 12% for the CDOM spectral slope. To our knowledge, the algorithms presented here represent the first validated algorithms for satellite retrieval of a_cdom, DOC, and CDOM spectral slope in the coastal ocean. The satellite-derived DOC and a_cdom products demonstrate the seasonal net ecosystem production of DOC and photooxidation of CDOM from spring to fall. With accurate satellite retrievals of CDOM and DOC, we will be able to apply satellite observations to investigate interannual and decadal-scale variability in surface CDOM and DOC within continental margins and to monitor impacts of climate change and anthropogenic activities on coastal ecosystems.
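
    Empirical algorithms of this kind typically take a power-law form in an Rrs band ratio; a sketch with placeholder coefficients (the band pair and the coefficients A and B are hypothetical and must be fit to regional field data, as the authors did for the MAB):

    ```python
    import numpy as np

    def acdom_from_band_ratio(rrs_490, rrs_555, A=0.2, B=-1.5):
        """Empirical CDOM absorption a_cdom = A * (Rrs(490)/Rrs(555))**B;
        DOC then follows from a regional CDOM-to-DOC relationship."""
        return A * (np.asarray(rrs_490) / np.asarray(rrs_555)) ** B
    ```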

  9. Waveform Similarity Analysis: A Simple Template Comparing Approach for Detecting and Quantifying Noisy Evoked Compound Action Potentials

    PubMed Central

    Potas, Jason Robert; de Castro, Newton Gonçalves; Maddess, Ted; de Souza, Marcio Nogueira

    2015-01-01

    Experimental electrophysiological assessment of evoked responses from regenerating nerves is challenging due to the typically complex response of events dispersed over various latencies and poor signal-to-noise ratio. Our objective was to automate the detection of compound action potential events and derive their latencies and magnitudes using a simple cross-correlation template comparison approach. For this, we developed an algorithm called Waveform Similarity Analysis. To test the algorithm, challenging signals were generated in vivo by stimulating sural and sciatic nerves, whilst recording evoked potentials at the sciatic nerve and tibialis anterior muscle, respectively, in animals recovering from sciatic nerve transection. Our template for the algorithm was generated based on responses evoked from the intact side. We also simulated noisy signals and examined the output of the Waveform Similarity Analysis algorithm with imperfect templates. Signals were detected and quantified using Waveform Similarity Analysis, which was compared to event detection, latency and magnitude measurements of the same signals performed by a trained observer, a process we called Trained Eye Analysis. The Waveform Similarity Analysis algorithm could successfully detect and quantify simple or complex responses from nerve and muscle compound action potentials of intact or regenerated nerves. Even with an incorrectly specified template, Waveform Similarity Analysis outperformed Trained Eye Analysis for predicting signal amplitude, but produced consistent latency errors for the simulated signals examined. Compared to the trained eye, Waveform Similarity Analysis is automatic, objective, does not rely on the observer to identify and/or measure peaks, and can detect small clustered events even when the signal-to-noise ratio is poor. Waveform Similarity Analysis provides a simple, reliable and convenient approach to quantify latencies and magnitudes of complex waveforms and therefore serves as a useful tool for studying evoked compound action potentials in neural regeneration studies. PMID:26325291

  10. Integrated identification, modeling and control with applications

    NASA Astrophysics Data System (ADS)

    Shi, Guojun

    This thesis deals with the integration of system design, identification, modeling and control. In particular, six interdisciplinary engineering problems are addressed and investigated. Theoretical results are established and applied to structural vibration reduction and engine control problems. First, the data-based LQG control problem is formulated and solved. It is shown that a state space model is not necessary to solve this problem; rather a finite sequence from the impulse response is the only model data required to synthesize an optimal controller. The new theory avoids unnecessary reliance on a model, required in the conventional design procedure. The infinite horizon model predictive control problem is addressed for multivariable systems. The basic properties of the receding horizon implementation strategy are investigated and the complete framework for solving the problem is established. The new theory allows the accommodation of hard input constraints and time delays. The developed control algorithms guarantee closed loop stability. A closed loop identification and infinite horizon model predictive control design procedure is established for engine speed regulation. The developed algorithms are tested on the Cummins Engine Simulator and the desired results are obtained. A finite signal-to-noise ratio model is considered for noise signals. An information quality index is introduced which measures the essential information precision required for stabilization. The problems of minimum variance control and covariance control are formulated and investigated. Convergent algorithms are developed for solving the problems of interest. The problem of integrated passive and active control design is addressed in order to improve the overall system performance. A design algorithm is developed, which simultaneously finds: (i) the optimal values of the stiffness and damping ratios for the structure, and (ii) an optimal output variance constrained stabilizing controller such that the active control energy is minimized. A weighted q-Markov COVER method is introduced for identification with measurement noise. The result is used to develop an iterative closed loop identification/control design algorithm. The effectiveness of the algorithm is illustrated by experimental results.

  11. Fuzzy hidden Markov chains segmentation for volume determination and quantitation in PET.

    PubMed

    Hatt, M; Lamare, F; Boussion, N; Turzo, A; Collet, C; Salzenstein, F; Roux, C; Jarritt, P; Carson, K; Cheze-Le Rest, C; Visvikis, D

    2007-06-21

    Accurate volume of interest (VOI) estimation in PET is crucial in different oncology applications such as response-to-therapy evaluation and radiotherapy treatment planning. The objective of our study was to compare the performance of the proposed algorithm for automatic lesion volume delineation, namely the fuzzy hidden Markov chains (FHMC), with that of the threshold-based techniques that are the current state of the art in clinical practice. Like the classical hidden Markov chain (HMC) algorithm, FHMC takes into account noise, voxel intensity and spatial correlation in order to classify a voxel as background or functional VOI. However, the novelty of the fuzzy model consists of the inclusion of an estimation of imprecision, which should subsequently lead to better modelling of the 'fuzzy' nature of the object-of-interest boundaries in emission tomography data. The performance of the algorithms was assessed on both simulated and acquired datasets of the IEC phantom, covering a large range of spherical lesion sizes (from 10 to 37 mm), contrast ratios (4:1 and 8:1) and image noise levels. Both lesion activity recovery and VOI determination tasks were assessed in reconstructed images using two different voxel sizes (8 mm3 and 64 mm3). In order to account for both the functional volume location and its size, the concept of % classification errors was introduced in the evaluation of volume segmentation using the simulated datasets. Results reveal that FHMC performs substantially better than the threshold-based methodology for functional volume determination or activity concentration recovery at a contrast ratio of 4:1 and lesion sizes of <28 mm. Furthermore, the differences between the classification and volume estimation errors were smaller for the segmented volumes provided by the FHMC algorithm. Finally, the performance of the automatic algorithms was less susceptible to image noise levels than that of the threshold-based techniques. The analysis of both simulated and acquired datasets led to similar results and conclusions as far as the performance of the segmentation algorithms under evaluation is concerned.

  12. Quantitative Image Quality and Histogram-Based Evaluations of an Iterative Reconstruction Algorithm at Low-to-Ultralow Radiation Dose Levels: A Phantom Study in Chest CT

    PubMed Central

    Lee, Ki Baek

    2018-01-01

    Objective To describe the quantitative image quality and histogram-based evaluation of an iterative reconstruction (IR) algorithm in chest computed tomography (CT) scans at low-to-ultralow CT radiation dose levels. Materials and Methods In an adult anthropomorphic phantom, chest CT scans were performed with 128-section dual-source CT at 70, 80, 100, 120, and 140 kVp, at the reference (3.4 mGy in volume CT Dose Index [CTDIvol]) and 30%-, 60%-, and 90%-reduced radiation dose levels (2.4, 1.4, and 0.3 mGy). The CT images were reconstructed using filtered back projection (FBP) algorithms and the IR algorithm with strengths 1, 3, and 5. Image noise, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) were statistically compared between different dose levels, tube voltages, and reconstruction algorithms. Moreover, histograms of subtraction images before and after standardization in the x- and y-axes were visually compared. Results Compared with FBP images, IR images with strengths 1, 3, and 5 demonstrated image noise reduction up to 49.1%, SNR increase up to 100.7%, and CNR increase up to 67.3%. Noteworthy image quality degradations on IR images, including a 184.9% increase in image noise, a 63.0% decrease in SNR, and a 51.3% decrease in CNR, were shown between the 60%- and 90%-reduced levels of radiation dose (p < 0.0001). Subtraction histograms between FBP and IR images showed progressively increased dispersion with increased IR strength and increased dose reduction. After standardization, the histograms appeared deviated and ragged between FBP images and IR images with strength 3 or 5, but almost normally distributed between FBP images and IR images with strength 1. Conclusion The IR algorithm may be used to save radiation dose without substantial image quality degradation in chest CT scanning of the adult anthropomorphic phantom, down to approximately 1.4 mGy in CTDIvol (a 60% reduced dose). PMID:29354008
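
    The reported SNR and CNR can be reproduced from two phantom ROIs with the usual definitions (the paper's exact noise estimate may differ):

    ```python
    import numpy as np

    def snr_cnr(img, roi_signal, roi_background):
        """SNR = mean(signal ROI) / sd(background ROI);
        CNR = |mean(signal ROI) - mean(background ROI)| / sd(background ROI).
        Both ROIs are boolean masks over the image."""
        s, b = img[roi_signal], img[roi_background]
        noise = b.std()
        return s.mean() / noise, abs(s.mean() - b.mean()) / noise
    ```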

  13. Closed Loop Guidance Trade Study for Space Launch System Block-1B Vehicle

    NASA Technical Reports Server (NTRS)

    Von der Porten, Paul; Ahmad, Naeem; Hawkins, Matt

    2018-01-01

    NASA is currently building the Space Launch System (SLS) Block-1 launch vehicle for the Exploration Mission 1 (EM-1) test flight. The design of the next evolution of SLS, Block-1B, is well underway. The Block-1B vehicle is more capable overall than Block-1; however, the relatively low thrust-to-weight ratio of the Exploration Upper Stage (EUS) presents a challenge to the Powered Explicit Guidance (PEG) algorithm used by Block-1. To handle the long burn durations (on the order of 1000 seconds) of EUS missions, two algorithms were examined. An alternative algorithm, OPGUID, was introduced, while modifications were made to PEG. A trade study was conducted to select the guidance algorithm for future SLS vehicles. The chosen algorithm needs to support a wide variety of mission operations: ascent burns to LEO, apogee raise burns, trans-lunar injection burns, hyperbolic Earth departure burns, and contingency disposal burns using the Reaction Control System (RCS). Additionally, the algorithm must be able to respond to a single engine failure scenario. Each algorithm was scored based on pre-selected criteria, including insertion accuracy, algorithmic complexity and robustness, extensibility for potential future missions, and flight heritage. Monte Carlo analysis was used to select the final algorithm. This paper covers the design criteria, approach, and results of this trade study, showing impacts and considerations when adapting launch vehicle guidance algorithms to a broader breadth of in-space operations.

  14. Diagnostic accuracy of administrative data algorithms in the diagnosis of osteoarthritis: a systematic review.

    PubMed

    Shrestha, Swastina; Dave, Amish J; Losina, Elena; Katz, Jeffrey N

    2016-07-07

    Administrative health care data are frequently used to study disease burden and treatment outcomes in many conditions, including osteoarthritis (OA). OA is a chronic condition with significant disease burden, affecting over 27 million adults in the US. There are few studies examining the performance of administrative data algorithms in diagnosing OA. The purpose of this study is to perform a systematic review of administrative data algorithms for OA diagnosis and to evaluate the diagnostic characteristics of algorithms based on restrictiveness and reference standards. Two reviewers independently screened English-language articles published in the Medline, Embase, PubMed, and Cochrane databases that used administrative data to identify OA cases. Each algorithm was classified as restrictive or less restrictive based on the number and type of administrative codes required to satisfy the case definition. We recorded the sensitivity and specificity of the algorithms and calculated the positive likelihood ratio (LR+) and positive predictive value (PPV) based on assumed OA prevalences of 0.1, 0.25, and 0.50. The search identified 7 studies that used 13 algorithms. Of these 13 algorithms, 5 were classified as restrictive and 8 as less restrictive. Restrictive algorithms had lower median sensitivity and higher median specificity than less restrictive algorithms when the reference standards were self-report and American College of Rheumatology (ACR) criteria. Algorithms compared to a reference standard of physician diagnosis had higher sensitivity and specificity than those compared to self-reported diagnosis or ACR criteria. Restrictive algorithms are more specific for OA diagnosis and can be used to identify cases when false positives have higher costs, e.g., interventional studies. Less restrictive algorithms are more sensitive and suited for studies that attempt to identify all cases, e.g., screening programs.
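
    LR+ and the prevalence-dependent PPV follow directly from sensitivity and specificity; a short sketch (the example figures in the comment are hypothetical, not values from the review):

    ```python
    def ppv_and_lr(sensitivity, specificity, prevalence):
        """LR+ = sens / (1 - spec); PPV via Bayes' rule at a given prevalence."""
        lr_pos = sensitivity / (1.0 - specificity)
        ppv = (sensitivity * prevalence) / (
            sensitivity * prevalence + (1.0 - specificity) * (1.0 - prevalence))
        return lr_pos, ppv

    # e.g. a hypothetical restrictive algorithm with sensitivity 0.60 and
    # specificity 0.95 at prevalence 0.25: ppv_and_lr(0.60, 0.95, 0.25)
    # gives LR+ = 12.0 and PPV = 0.80.
    ```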

  15. Kerr Reservoir LANDSAT experiment analysis for March 1981

    NASA Technical Reports Server (NTRS)

    Lecroy, S. R. (Principal Investigator)

    1982-01-01

    LANDSAT radiance data were used in an experiment conducted on the waters of Kerr Reservoir to determine whether reliable algorithms could be developed that relate water quality parameters to remotely sensed data. A mix of different types of algorithms using the LANDSAT bands was generated to provide a thorough understanding of the relationships among the data involved. Except for Secchi depth, the study demonstrated that, for the ranges measured, the algorithms that satisfactorily represented the data encompass a mix of linear and nonlinear forms using only one LANDSAT band. Ratioing techniques did not improve the results, since the initial design of the experiment minimized the errors against which this procedure is effective. Good correlations were found for total suspended solids, iron, turbidity, and Secchi depth. Marginal correlations were found for nitrate and tannin + lignin. Quantification maps of Kerr Reservoir are presented for many of the water quality parameters using the developed algorithms.

  16. A joint precoding scheme for indoor downlink multi-user MIMO VLC systems

    NASA Astrophysics Data System (ADS)

    Zhao, Qiong; Fan, Yangyu; Kang, Bochao

    2017-11-01

    In this study, we aim to improve the system performance and reduce the implementation complexity of precoding schemes for visible light communication (VLC) systems. By incorporating the power-method algorithm and the block diagonalization (BD) algorithm, we propose a joint precoding scheme for indoor downlink multi-user multi-input-multi-output (MU-MIMO) VLC systems. In this scheme, we first apply the BD algorithm to eliminate the co-channel interference (CCI) among users. Secondly, the power-method algorithm is used to search for the precoding weight of each user based on the criterion of signal to interference plus noise ratio (SINR) maximization. Finally, the optical power restrictions of VLC systems are taken into account to constrain the precoding weight matrix. Comprehensive computer simulations in two scenarios indicate that the proposed scheme always has better bit error rate (BER) performance and lower computational complexity than the traditional scheme.
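
    The power-method building block is ordinary power iteration; a sketch that extracts a per-user weight from an effective channel matrix after BD (the paper's SINR objective and optical power constraints are not modelled here):

    ```python
    import numpy as np

    def power_method(H, iters=50):
        """Dominant right singular vector of H by power iteration on H^H H,
        usable as a transmit weight for the BD-projected user channel."""
        rng = np.random.default_rng(1)
        v = rng.standard_normal(H.shape[1]).astype(complex)
        v /= np.linalg.norm(v)
        for _ in range(iters):
            w = H.conj().T @ (H @ v)  # one application of H^H H
            v = w / np.linalg.norm(w)
        return v
    ```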

  17. Identifying Defects with Guided Algorithms in Bragg Coherent Diffractive Imaging

    DOE PAGES

    Ulvestad, A.; Nashed, Y.; Beutier, G.; ...

    2017-08-30

    Crystallographic defects such as dislocations can significantly alter material properties and functionality. However, imaging these imperfections during operation remains challenging due to the short length scales involved and the reactive environments of interest. Bragg coherent diffractive imaging (BCDI) has emerged as a powerful tool capable of identifying dislocations, twin domains, and other defects in 3D detail with nanometer spatial resolution within nanocrystals and grains in reactive environments. However, BCDI relies on phase retrieval algorithms that can fail to accurately reconstruct the defect network. Here, we use numerical simulations to explore different guided phase retrieval algorithms for imaging defective crystals using BCDI. We explore different defect types, defect densities, Bragg peaks, and guided-algorithm fitness metrics as a function of signal-to-noise ratio. Based on these results, we offer a general prescription for phasing of defective crystals with no a priori knowledge.

  18. Joint source-channel coding for motion-compensated DCT-based SNR scalable video.

    PubMed

    Kondi, Lisimachos P; Ishtiaq, Faisal; Katsaggelos, Aggelos K

    2002-01-01

    In this paper, we develop an approach toward joint source-channel coding for motion-compensated DCT-based scalable video coding and transmission. A framework for the optimal selection of the source and channel coding rates over all scalable layers is presented such that the overall distortion is minimized. The algorithm utilizes universal rate distortion characteristics which are obtained experimentally and show the sensitivity of the source encoder and decoder to channel errors. The proposed algorithm allocates the available bit rate between scalable layers and, within each layer, between source and channel coding. We present the results of this rate allocation algorithm for video transmission over a wireless channel using the H.263 Version 2 signal-to-noise ratio (SNR) scalable codec for source coding and rate-compatible punctured convolutional (RCPC) codes for channel coding. We discuss the performance of the algorithm with respect to the channel conditions, coding methodologies, layer rates, and number of layers.

  19. Single image non-uniformity correction using compressive sensing

    NASA Astrophysics Data System (ADS)

    Jian, Xian-zhong; Lu, Rui-zhi; Guo, Qiang; Wang, Gui-pu

    2016-05-01

    A non-uniformity correction (NUC) method for an infrared focal plane array imaging system is proposed. The algorithm, based on compressive sensing (CS) of a single image, overcomes the disadvantages of "ghost artifacts" and bulky calculation costs in traditional NUC algorithms. A point-sampling matrix was designed to validate the CS measurements in the time domain. The measurements were corrected using the midway infrared equalization algorithm, and the missing pixels were recovered with the regularized orthogonal matching pursuit algorithm. Experimental results show that the proposed method can reconstruct the entire image with only 25% of the pixels. Little difference was found between the correction results using 100% of the pixels and the reconstruction results using 40% of the pixels. Evaluation of the proposed method on the basis of the root-mean-square error, peak signal-to-noise ratio, and roughness index (ρ) proved the method to be robust and highly applicable.

  20. A novel algorithm using an orthotropic material model for topology optimization

    NASA Astrophysics Data System (ADS)

    Tong, Liyong; Luo, Quantian

    2017-09-01

    This article presents a novel algorithm for topology optimization using an orthotropic material model. Based on the virtual work principle, mathematical formulations for effective orthotropic material properties of an element containing two materials are derived. An algorithm is developed for structural topology optimization using four orthotropic material properties, instead of one density or area ratio, in each element as design variables. As an illustrative example, minimum compliance problems for linear and nonlinear structures are solved using the present algorithm in conjunction with the moving iso-surface threshold method. The present numerical results reveal that: (1) chequerboards and single-node connections are not present even without filtering; (2) final topologies do not contain large grey areas even using a unity penalty factor; and (3) the well-known numerical issues caused by low-density material when considering geometric nonlinearity are resolved by eliminating low-density elements in finite element analyses.

  1. Modified Mahalanobis Taguchi System for Imbalance Data Classification

    PubMed Central

    2017-01-01

    The Mahalanobis Taguchi System (MTS) is considered one of the most promising binary classification algorithms for handling imbalanced data. Unfortunately, MTS lacks a method for determining an efficient threshold for the binary classification. In this paper, a nonlinear optimization model, named the Modified Mahalanobis Taguchi System (MMTS), is formulated based on minimizing the distance between the MTS Receiver Operating Characteristic (ROC) curve and the theoretical optimal point. To validate the MMTS classification efficacy, it has been benchmarked against Support Vector Machines (SVMs), Naive Bayes (NB), Probabilistic Mahalanobis Taguchi Systems (PTM), the Synthetic Minority Oversampling Technique (SMOTE), Adaptive Conformal Transformation (ACT), Kernel Boundary Alignment (KBA), Hidden Naive Bayes (HNB), and other improved Naive Bayes algorithms. MMTS outperforms the benchmarked algorithms, especially when the imbalance ratio is greater than 400. A real-life case study in the manufacturing sector is used to demonstrate the applicability of the proposed model and to compare its performance with the Mahalanobis Genetic Algorithm (MGA). PMID:28811820
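
    A minimal sketch of the thresholding idea, under the assumption that MTS produces a Mahalanobis-distance score per sample (the scores below are synthetic stand-ins): pick the cutoff whose ROC point lies closest to the theoretical optimum (FPR = 0, TPR = 1).

        import numpy as np
        from sklearn.metrics import roc_curve

        rng = np.random.default_rng(0)
        # synthetic imbalanced data: 950 normal vs 50 abnormal samples
        scores = np.concatenate([rng.normal(1.0, 0.5, 950), rng.normal(2.5, 0.7, 50)])
        labels = np.concatenate([np.zeros(950), np.ones(50)])

        fpr, tpr, thresholds = roc_curve(labels, scores)
        dist = np.hypot(fpr - 0.0, tpr - 1.0)   # distance to the optimal ROC point
        best = np.argmin(dist)
        print(f"threshold={thresholds[best]:.3f}, FPR={fpr[best]:.3f}, TPR={tpr[best]:.3f}")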

  2. K-means-clustering-based fiber nonlinearity equalization techniques for 64-QAM coherent optical communication system.

    PubMed

    Zhang, Junfeng; Chen, Wei; Gao, Mingyi; Shen, Gangxiang

    2017-10-30

    In this work, we proposed two k-means-clustering-based algorithms to mitigate fiber nonlinearity for 64-quadrature amplitude modulation (64-QAM) signals: the training-sequence-assisted k-means algorithm and the blind k-means algorithm. We experimentally demonstrated the proposed fiber nonlinearity mitigation techniques in a 75-Gb/s 64-QAM coherent optical communication system. The proposed algorithms have reduced clustering complexity and low data redundancy; they quickly find appropriate initial centroids and correctly select the centroids of the clusters to obtain globally optimal solutions for large k. We measured the bit-error-ratio (BER) performance of the 64-QAM signal at different launch powers into the 50-km single-mode fiber; the proposed techniques greatly mitigate the signal impairments caused by amplified spontaneous emission noise and the fiber Kerr nonlinearity, and improve the BER performance.
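
    The clustering step can be sketched as follows; this is an illustrative reading of the blind variant, with the assumption (not stated in the abstract) that the k = 64 centroids are seeded at the ideal square-64-QAM constellation points and refined on the received symbols.

        import numpy as np

        levels = np.arange(-7, 8, 2)                                    # {-7,-5,...,7}
        ideal = np.array([x + 1j * y for x in levels for y in levels])  # 64 seeds

        def kmeans_complex(rx, centroids, iters=10):
            for _ in range(iters):
                # assign each received symbol to its nearest centroid
                idx = np.argmin(np.abs(rx[:, None] - centroids[None, :]), axis=1)
                # move each centroid to the mean of its cluster (skip empty ones)
                for k in range(centroids.size):
                    if np.any(idx == k):
                        centroids[k] = rx[idx == k].mean()
            return centroids, idx

        rng = np.random.default_rng(1)
        tx = ideal[rng.integers(0, 64, 5000)]
        rx = tx * np.exp(1j * 0.03) + 0.3 * (rng.standard_normal(5000)
                                             + 1j * rng.standard_normal(5000))
        centroids, decisions = kmeans_complex(rx, ideal.copy())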

  3. A Novel Speed Compensation Method for ISAR Imaging with Low SNR

    PubMed Central

    Liu, Yongxiang; Zhang, Shuanghui; Zhu, Dekang; Li, Xiang

    2015-01-01

    In this paper, two novel speed compensation algorithms for ISAR imaging under low signal-to-noise ratio (SNR) conditions are proposed, based on the cubic phase function (CPF) and the integrated cubic phase function (ICPF), respectively. Both algorithms estimate the speed of the target directly from the wideband radar echo, removing the dependence on the radar system's own speed measurement. With the utilization of non-coherent accumulation, the ICPF-based speed compensation algorithm is robust to noise and meets the requirements of speed compensation for ISAR imaging under low-SNR conditions. Moreover, a fast search strategy, which consists of a coarse search followed by a precise search, is introduced to decrease the computational burden of speed compensation based on CPF and ICPF. Experimental results based on radar data validate the effectiveness of the proposed algorithms. PMID:26225980

  4. A novel pulse compression algorithm for frequency modulated active thermography using band-pass filter

    NASA Astrophysics Data System (ADS)

    Chatterjee, Krishnendu; Roy, Deboshree; Tuli, Suneet

    2017-05-01

    This paper proposes a novel pulse compression algorithm in the context of frequency modulated thermal wave imaging. The compression filter is derived from a predefined reference pixel in a recorded video, which contains a direct measurement of the excitation signal alongside the thermal image of a test piece. The filter adjusts the phases of all constituent frequencies to nearly zero, so that on reconstruction a pulse is obtained. Further, owing to the band-limited nature of the excitation, the signal-to-noise ratio is improved by suppressing out-of-band noise. The result is similar to that of a pulsed thermography experiment, although the peak power is drastically reduced. The algorithm is successfully demonstrated on mild steel and carbon fibre reference samples. Objective comparisons of the proposed pulse compression algorithm with existing techniques are presented.
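
    A minimal numpy sketch of this idea, under assumed 1-D signals and hypothetical band edges: conjugate the phases of the reference's in-band spectrum (so all in-band phases collapse to roughly zero) and suppress out-of-band bins.

        import numpy as np

        def pulse_compress(thermal_px, reference_px, band):
            # thermal_px, reference_px: 1-D time signals; band: (lo, hi) FFT bins
            R = np.fft.rfft(reference_px)
            H = np.zeros_like(R)
            lo, hi = band
            # phase-conjugate, unit-magnitude filter within the excitation band
            H[lo:hi] = np.conj(R[lo:hi]) / (np.abs(R[lo:hi]) + 1e-12)
            return np.fft.irfft(np.fft.rfft(thermal_px) * H, n=thermal_px.size)

        t = np.linspace(0, 10, 1024)
        chirp = np.sin(2 * np.pi * (0.5 + 0.2 * t) * t)    # FM excitation
        noisy = chirp + 0.1 * np.random.randn(1024)
        compressed = pulse_compress(noisy, chirp, (5, 200))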

  5. Scanning electron microscope fine tuning using four-bar piezoelectric actuated mechanism

    NASA Astrophysics Data System (ADS)

    Hatamleh, Khaled S.; Khasawneh, Qais A.; Al-Ghasem, Adnan; Jaradat, Mohammad A.; Sawaqed, Laith; Al-Shabi, Mohammad

    2018-01-01

    Scanning electron microscopes are extensively used for accurate micro/nano-scale imaging. Several strategies have been proposed to fine-tune such microscopes in the past few years. This work presents a new fine-tuning strategy for a scanning electron microscope sample table using a four-bar piezoelectric actuated mechanism. The paper presents an algorithm to find all possible inverse kinematics solutions of the proposed mechanism, along with a second algorithm that searches for the optimal inverse kinematic solution. Both algorithms are used together in a simulation study to fine-tune the sample table along pre-specified circular and linear paths of motion. Results of the study show that the proposed algorithms reduce the power required to drive the piezoelectric actuated mechanism by 97.5% for all simulated paths of motion when compared to the general non-optimized solution.

  6. A fast hybrid algorithm combining regularized motion tracking and predictive search for reducing the occurrence of large displacement errors.

    PubMed

    Jiang, Jingfeng; Hall, Timothy J

    2011-04-01

    A hybrid approach that inherits both the robustness of the regularized motion tracking approach and the efficiency of the predictive search approach is reported. The basic idea is to use regularized speckle tracking to obtain high-quality seeds in an explorative search that can be used in the subsequent intelligent predictive search. The performance of the hybrid speckle-tracking algorithm was compared with three published speckle-tracking methods using in vivo breast lesion data. We found that the hybrid algorithm provided higher displacement quality metric values, lower root mean squared errors compared with a locally smoothed displacement field, and higher improvement ratios compared with the classic block-matching algorithm. On the basis of these comparisons, we concluded that the hybrid method can further enhance the accuracy of speckle tracking compared with its real-time counterparts, at the expense of slightly higher computational demands.

  7. Target surface finding using 3D SAR data

    NASA Astrophysics Data System (ADS)

    Ruiter, Jason R.; Burns, Joseph W.; Subotic, Nikola S.

    2005-05-01

    Methods of generating more literal, easily interpretable imagery from 3-D SAR data are being studied to provide all-weather, near-visual target identification and/or scene interpretation. One method of approaching this problem is to automatically generate shape-based geometric renderings from the SAR data. In this paper we describe the application of the Marching Tetrahedrons surface-finding algorithm to 3-D SAR data. The Marching Tetrahedrons algorithm finds a surface through the 3-D data cube that provides a recognizable representation of the target surface. This algorithm was applied to the public-release X-patch simulations of a backhoe, which provided densely sampled 3-D SAR data sets. The sensitivity of the algorithm to noise and spatial resolution was explored. Surface renderings were readily recognizable over a range of spatial resolutions, and maintained their fidelity even under relatively low signal-to-noise ratio (SNR) conditions.

  8. Fluid preconditioning for Newton–Krylov-based, fully implicit, electrostatic particle-in-cell simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, G., E-mail: gchen@lanl.gov; Chacón, L.; Leibs, C.A.

    2014-02-01

    A recent proof-of-principle study proposes an energy- and charge-conserving, nonlinearly implicit electrostatic particle-in-cell (PIC) algorithm in one dimension [9]. The algorithm in the reference employs an unpreconditioned Jacobian-free Newton–Krylov method, which ensures nonlinear convergence at every timestep (resolving the dynamical timescale of interest). Kinetic enslavement, which is one key component of the algorithm, not only enables fully implicit PIC as a practical approach, but also allows preconditioning the kinetic solver with a fluid approximation. This study proposes such a preconditioner, in which the linearized moment equations are closed with moments computed from particles. Effective acceleration of the linear GMRES solve is demonstrated, on both uniform and non-uniform meshes. The algorithm performance is largely insensitive to the electron–ion mass ratio. Numerical experiments are performed on a 1D multi-scale ion acoustic wave test problem.

  9. Identifying Defects with Guided Algorithms in Bragg Coherent Diffractive Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ulvestad, A.; Nashed, Y.; Beutier, G.

    Crystallographic defects such as dislocations can significantly alter material properties and functionality. However, imaging these imperfections during operation remains challenging due to the short length scales involved and the reactive environments of interest. Bragg coherent diffractive imaging (BCDI) has emerged as a powerful tool capable of identifying dislocations, twin domains, and other defects in 3D detail with nanometer spatial resolution within nanocrystals and grains in reactive environments. However, BCDI relies on phase retrieval algorithms that can fail to accurately reconstruct the defect network. Here, we use numerical simulations to explore different guided phase retrieval algorithms for imaging defective crystals using BCDI. We explore different defect types, defect densities, Bragg peaks, and guided algorithm fitness metrics as a function of signal-to-noise ratio. Based on these results, we offer a general prescription for phasing of defective crystals with no a priori knowledge.

  10. An open-source framework for stress-testing non-invasive foetal ECG extraction algorithms.

    PubMed

    Andreotti, Fernando; Behar, Joachim; Zaunseder, Sebastian; Oster, Julien; Clifford, Gari D

    2016-05-01

    Over the past decades, many studies have been published on the extraction of non-invasive foetal electrocardiogram (NI-FECG) from abdominal recordings. Most of these contributions claim to obtain excellent results in detecting foetal QRS (FQRS) complexes in terms of location. A small subset of authors have investigated the extraction of morphological features from the NI-FECG. However, due to the shortage of available public databases, the large variety of performance measures employed and the lack of open-source reference algorithms, most contributions cannot be meaningfully assessed. This article attempts to address these issues by presenting a standardised methodology for stress testing NI-FECG algorithms, including absolute data, as well as extraction and evaluation routines. To that end, a large database of realistic artificial signals was created, totaling 145.8 h of multichannel data and over one million FQRS complexes. An important characteristic of this dataset is the inclusion of several non-stationary events (e.g. foetal movements, uterine contractions and heart rate fluctuations) that are critical for evaluating extraction routines. To demonstrate our testing methodology, three classes of NI-FECG extraction algorithms were evaluated: blind source separation (BSS), template subtraction (TS) and adaptive methods (AM). Experiments were conducted to benchmark the performance of eight NI-FECG extraction algorithms on the artificial database focusing on: FQRS detection and morphological analysis (foetal QT and T/QRS ratio). The overall median FQRS detection accuracies (i.e. considering all non-stationary events) for the best performing methods in each group were 99.9% for BSS, 97.9% for AM and 96.0% for TS. Both FQRS detections and morphological parameters were shown to heavily depend on the extraction techniques and signal-to-noise ratio. Particularly, it is shown that their evaluation in the source domain, obtained after using a BSS technique, should be avoided. Data, extraction algorithms and evaluation routines were released as part of the fecgsyn toolbox on Physionet under a GNU GPL open-source license. This contribution provides a standard framework for benchmarking and regulatory testing of NI-FECG extraction algorithms.

  11. SU-F-T-600: Influence of Acuros XB and AAA Dose Calculation Algorithms On Plan Quality Metrics and Normal Lung Doses in Lung SBRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yaparpalvi, R; Mynampati, D; Kuo, H

    Purpose: To study the influence of the superposition beam model (AAA) and the deterministic photon transport solver (Acuros XB) dose calculation algorithms on treatment plan quality metrics and on normal lung dose in lung SBRT. Methods: Treatment plans of 10 lung SBRT patients were randomly selected. Patients were prescribed a total dose of 50-54 Gy in 3-5 fractions (10 Gy × 5 or 18 Gy × 3). Plans were optimized for 6-MV beams using 2 arcs (VMAT). Doses were calculated using the AAA algorithm with heterogeneity correction. For each plan, plan quality metrics in the categories of coverage, homogeneity, conformity and gradient were quantified. Repeat dosimetry for these AAA treatment plans was performed using the AXB algorithm with heterogeneity correction for the same beam and MU parameters. Plan quality metrics were again evaluated and compared with the AAA plan metrics. For normal lung dose, V20 and V5 of (total lung − GTV) were evaluated. Results: The results are summarized in Supplemental Table 1. Mean PTV volume was 11.4 (±3.3) cm³. Comparing against the RTOG 0813 protocol criteria for conformality, AXB plans yielded, on average, a similar PITV ratio (individual PITV ratio differences varied from −9 to +15%), reduced target coverage (−1.6%) and increased R50% (+2.6%). Comparing normal lung doses, the lung V20 (+3.1%) and V5 (+1.5%) were slightly higher for AXB plans than for AAA plans. High-dose spillage ((V105%PD − PTV)/PTV) was slightly lower for AXB plans, but the low-dose spillage (D2cm) was similar between the two calculation algorithms. Conclusion: The AAA algorithm overestimates lung target dose. Routinely adopting AXB for dose calculations in lung SBRT planning may improve dose calculation accuracy, as AXB-based calculations have been shown to be closer to Monte Carlo dose predictions in accuracy, with relatively faster computation time. For clinical practice, revisiting dose fractionation in lung SBRT to correct for dose overestimates attributable to the algorithm may well be warranted.

  12. Effectiveness and safety of procalcitonin-guided antibiotic therapy in lower respiratory tract infections in "real life": an international, multicenter poststudy survey (ProREAL).

    PubMed

    Albrich, Werner C; Dusemund, Frank; Bucher, Birgit; Meyer, Stefan; Thomann, Robert; Kühn, Felix; Bassetti, Stefano; Sprenger, Martin; Bachli, Esther; Sigrist, Thomas; Schwietert, Martin; Amin, Devendra; Hausfater, Pierre; Carre, Eric; Gaillat, Jacques; Schuetz, Philipp; Regez, Katharina; Bossart, Rita; Schild, Ursula; Mueller, Beat

    2012-05-14

    In controlled studies, procalcitonin (PCT) has safely and effectively reduced antibiotic drug use for lower respiratory tract infections (LRTIs). However, controlled trial data may not reflect real life. We performed an observational quality surveillance in 14 centers in Switzerland, France, and the United States. Consecutive adults with LRTI presenting to emergency departments or outpatient offices were enrolled and registered on a website, which provided a previously published PCT algorithm for antibiotic guidance. The primary end point was duration of antibiotic therapy within 30 days. Of 1759 patients, 86.4% had a final diagnosis of LRTI (community-acquired pneumonia, 53.7%; acute exacerbation of chronic obstructive pulmonary disease, 17.1%; and bronchitis, 14.4%). Algorithm compliance overall was 68.2%, with differences between diagnoses (bronchitis, 81.0%; AECOPD, 70.1%; and community-acquired pneumonia, 63.7%; P < .001), outpatients (86.1%) and inpatients (65.9%) (P < .001), algorithm-experienced (82.5%) and algorithm-naive (60.1%) centers (P < .001), and countries (Switzerland, 75.8%; France, 73.5%; and the United States, 33.5%; P < .001). After multivariate adjustment, antibiotic therapy duration was significantly shorter if the PCT algorithm was followed compared with when it was overruled (5.9 vs 7.4 days; difference, -1.51 days; 95% CI, -2.04 to -0.98; P < .001). No increase was noted in the risk of the combined adverse outcome end point within 30 days of follow-up when the PCT algorithm was followed regarding withholding antibiotics on hospital admission (adjusted odds ratio, 0.83; 95% CI, 0.44 to 1.55; P = .56) and regarding early cessation of antibiotics (adjusted odds ratio, 0.61; 95% CI, 0.36 to 1.04; P = .07). This study validates previous results from controlled trials in real-life conditions and demonstrates that following a PCT algorithm effectively reduces antibiotic use without increasing the risk of complications. Preexisting differences in antibiotic prescribing affect compliance with antibiotic stewardship efforts. isrctn.org Identifier: ISRCTN40854211.

  13. Design of artificial neural networks using a genetic algorithm to predict collection efficiency in venturi scrubbers.

    PubMed

    Taheri, Mahboobeh; Mohebbi, Ali

    2008-08-30

    In this study, a new approach for the auto-design of neural networks, based on a genetic algorithm (GA), has been used to predict collection efficiency in venturi scrubbers. The experimental input data, including particle diameter, throat gas velocity, liquid to gas flow rate ratio, throat hydraulic diameter, pressure drop across the venturi scrubber and collection efficiency as an output, have been used to create a GA-artificial neural network (ANN) model. The testing results from the model are in good agreement with the experimental data. Comparison of the results of the GA optimized ANN model with the results from the trial-and-error calibrated ANN model indicates that the GA-ANN model is more efficient. Finally, the effects of operating parameters such as liquid to gas flow rate ratio, throat gas velocity, and particle diameter on collection efficiency were determined.

  14. Directional ratio based on parabolic molecules and its application to the analysis of tubular structures

    NASA Astrophysics Data System (ADS)

    Labate, Demetrio; Negi, Pooran; Ozcan, Burcin; Papadakis, Manos

    2015-09-01

    As advances in imaging technologies make more and more data available for biomedical applications, there is an increasing need to develop efficient quantitative algorithms for the analysis and processing of imaging data. In this paper, we introduce an innovative multiscale approach called the Directional Ratio, which is especially effective at distinguishing isotropic from anisotropic structures. This task is particularly useful in the analysis of images of neurons, the main units of the nervous system, which consist of a main cell body called the soma and many elongated processes called neurites. We analyze the theoretical properties of our method on idealized models of neurons and develop a numerical implementation of this approach for the analysis of fluorescent images of cultured neurons. We show that this algorithm is very effective for the detection of somas and the extraction of neurites in images of small circuits of neurons.

  15. Mortality investigation of workers in an electromagnetic pulse test program.

    PubMed

    Muhm, J M

    1992-03-01

    A standardized mortality ratio (SMR) study of 304 male employees of an electromagnetic pulse (EMP) test program was conducted. Outcomes were ascertained by two methods: the World Health Organization's underlying cause of death algorithm, and the National Center for Health Statistics' algorithm to identify multiple listed causes of death. In the 3362 person-years of follow-up, there was one underlying cause of death due to leukemia compared with 0.2 expected (SMR = 437, 95% confidence interval [CI] = 11-2433), and two multiple listed causes of death due to leukemia compared with 0.3 expected (SMR = 775, 95% CI = 94-2801). Although the study suggested an association between death due to leukemia and employment in the EMP test program, firm conclusions could not be drawn because of limitations of the study. The findings warrant further investigation in an independent cohort.
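
    The SMR arithmetic used above is simple to reproduce. Below is a small sketch with an exact Poisson confidence interval (the standard chi-square formulation for small counts); note that SMRs are conventionally reported ×100, so 1 observed death vs roughly 0.2 expected gives approximately the reported 437-500 depending on rounding of the expected count.

        from scipy.stats import chi2

        def smr_ci(observed, expected, alpha=0.05):
            smr = observed / expected
            lo = (chi2.ppf(alpha / 2, 2 * observed) / 2 / expected
                  if observed > 0 else 0.0)
            hi = chi2.ppf(1 - alpha / 2, 2 * (observed + 1)) / 2 / expected
            return smr, lo, hi

        # one leukemia death observed vs 0.2 expected (values from the abstract)
        print(tuple(round(100 * x) for x in smr_ci(1, 0.2)))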

  16. Measurement of the ratio of inclusive jet cross sections using the anti-kt algorithm with radius parameters R = 0.5 and 0.7 in pp collisions at $\sqrt{s}$ = 7 TeV

    DOE PAGES

    Chatrchyan, Serguei

    2014-10-16

    Measurements of the inclusive jet cross section with the anti-kt clustering algorithm are presented for two radius parameters, R = 0.5 and 0.7. They are based on data from LHC proton-proton collisions at $\sqrt{s}$ = 7 TeV corresponding to an integrated luminosity of 5.0 inverse femtobarns collected with the CMS detector in 2011. The ratio of these two measurements is obtained as a function of the rapidity and transverse momentum of the jets. Significant discrepancies are found comparing the data to leading-order simulations and to fixed-order calculations at next-to-leading order, corrected for nonperturbative effects, whereas simulations with next-to-leading-order matrix elements matched to parton showers describe the data best.

  17. Real part of refractive index measurement approach for absorbing liquid.

    PubMed

    Liu, Hao; Ye, Junwei; Yang, Kecheng; Xia, Min; Guo, Wenping; Li, Wei

    2015-07-01

    An algorithm based on the use of a reflected refractometer to measure the real part of the refractive index (RI) of an absorbing liquid is presented. The absorption of the liquid blurs the division between bright and dark regions on a Fresnel reflectivity curve. However, the reflective ratio at certain incident angles below the critical angle has little sensitivity to absorption. Unlike common methods that extract the RI from reflectivity in the vicinity of the critical angle, the presented method acquires the real RI from the reflective ratio at a subcritical angle. Supported by theoretical analysis and experimental results on a reflected refractometer, we have achieved accuracy better than 3×10⁻⁴ RIU on ink samples with absorption coefficients around 300 cm⁻¹. Additional tests on Alizarin Yellow GG solutions prove that the subcritical algorithm is feasible and of high accuracy.

  18. Site-percolation threshold of carbon nanotube fibers-Fast inspection of percolation with Markov stochastic theory

    NASA Astrophysics Data System (ADS)

    Xu, Fangbo; Xu, Zhiping; Yakobson, Boris I.

    2014-08-01

    We present a site-percolation model based on a modified FCC lattice, as well as an efficient algorithm for inspecting percolation that takes advantage of Markov stochastic theory, in order to study the percolation threshold of carbon nanotube (CNT) fibers. Our Markov-chain-based algorithm carries out the inspection of percolation by performing repeated sparse matrix-vector multiplications, which allows parallelized computation to accelerate the inspection for a given configuration. With this approach, we determine that the site-percolation transition of CNT fibers occurs at pc = 0.1533±0.0013, and analyze the dependence of the effective percolation threshold (corresponding to 0.5 percolation probability) on the length and the aspect ratio of a CNT fiber on a finite-size-scaling basis. We also discuss the aspect-ratio dependence of percolation probability for various values of p (not restricted to pc).

  19. Novel 3D Compression Methods for Geometry, Connectivity and Texture

    NASA Astrophysics Data System (ADS)

    Siddeq, M. M.; Rodrigues, M. A.

    2016-06-01

    A large number of applications in medical visualization, games, engineering design, entertainment, heritage, e-commerce and so on require the transmission of 3D models over the Internet or over local networks. 3D data compression is an important requirement for fast data storage, access and transmission within bandwidth limitations. The Wavefront OBJ (object) file format is commonly used to share models due to its clear, simple design. Normally each OBJ file contains a large amount of data (e.g. vertices and triangulated faces, normals, texture coordinates and other parameters) describing the mesh surface. In this paper we introduce a new method to compress geometry, connectivity and texture coordinates by a novel Geometry Minimization Algorithm (GM-Algorithm) in connection with arithmetic coding. First, each vertex's (x, y, z) coordinates are encoded to a single value by the GM-Algorithm. Second, triangle faces are encoded by computing the differences between two adjacent vertex locations, which are compressed by arithmetic coding together with the texture coordinates. We demonstrate the method on large data sets, achieving compression ratios between 87% and 99% without reduction in the number of reconstructed vertices and triangle faces. The decompression step is based on a Parallel Fast Matching Search Algorithm (Parallel-FMS) to recover the structure of the 3D mesh. A comparative analysis of compression ratios is provided against a number of commonly used 3D file formats such as VRML, OpenCTM and STL, highlighting the performance and effectiveness of the proposed method.
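
    The face-encoding step lends itself to a compact illustration. The sketch below shows only the differencing idea, with zlib as a stand-in for the paper's arithmetic coder (the actual GM-Algorithm vertex encoding is not reproduced): differences between adjacent vertex indices concentrate near zero, which downstream entropy coding exploits.

        import numpy as np
        import zlib  # stand-in for the arithmetic coder used in the paper

        def delta_encode(indices):
            d = np.diff(indices, prepend=indices[:1])
            d[0] = indices[0]          # keep the first index verbatim
            return d

        faces = np.arange(10, 2000, dtype=np.int32).repeat(3)  # toy index stream
        deltas = delta_encode(faces)
        raw = zlib.compress(faces.tobytes())
        packed = zlib.compress(deltas.astype(np.int32).tobytes())
        print(len(raw), len(packed))   # the delta stream typically packs smaller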

  20. Study on a low complexity adaptive modulation algorithm in OFDM-ROF system with sub-carrier grouping technology

    NASA Astrophysics Data System (ADS)

    Liu, Chong-xin; Liu, Bo; Zhang, Li-jia; Xin, Xiang-jun; Tian, Qing-hua; Tian, Feng; Wang, Yong-jun; Rao, Lan; Mao, Yaya; Li, Deng-ao

    2018-01-01

    During the last decade, the orthogonal frequency division multiplexing radio-over-fiber (OFDM-ROF) system with adaptive modulation technology has attracted great interest due to its capability of raising spectral efficiency dramatically, reducing the effects of the fiber link or wireless channel, and improving communication quality. In this study, based on a theoretical analysis of nonlinear distortion and frequency-selective fading of the transmitted signal, a low-complexity adaptive modulation algorithm is proposed in combination with sub-carrier grouping technology. The algorithm achieves optimal system performance by calculating the average combined signal-to-noise ratio of each group and dynamically adjusting the modulation format according to preset thresholds and the user's requirements. At the same time, the algorithm takes the sub-carrier group as the smallest unit in the initial bit allocation and the subsequent bit adjustment, so its complexity is only 1/M (where M is the number of sub-carriers in each group) of that of the Fischer algorithm, much smaller than many classic adaptive modulation algorithms such as the Hughes-Hartogs and Chow algorithms, in line with the development of green, high-speed communication. Simulation results show that the performance of the OFDM-ROF system with the improved algorithm is much better than without adaptive modulation, the BER of the former being 10 to 100 times lower than that of the latter as the SNR grows. We conclude that this low-complexity adaptive modulation algorithm is extremely useful for the OFDM-ROF system.
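
    The per-group adaptation step can be sketched as follows, with hypothetical SNR thresholds (the real thresholds depend on the target BER): average the SNR within each sub-carrier group and map it to a modulation format, so decisions are made once per group rather than per sub-carrier.

        import numpy as np

        THRESHOLDS = [(22.0, "64QAM"), (15.0, "16QAM"), (9.0, "QPSK"), (3.0, "BPSK")]

        def assign_formats(subcarrier_snr_db, group_size=8):
            groups = subcarrier_snr_db.reshape(-1, group_size)
            formats = []
            for snr in groups.mean(axis=1):     # combined SNR per group
                fmt = "off"                     # below every threshold: no loading
                for th, name in THRESHOLDS:
                    if snr >= th:
                        fmt = name
                        break
                formats.append(fmt)
            return formats

        snr = 18 + 6 * np.random.randn(64)      # 64 sub-carriers -> 8 groups
        print(assign_formats(snr))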

  1. Imaging reconstruction based on improved wavelet denoising combined with parallel-beam filtered back-projection algorithm

    NASA Astrophysics Data System (ADS)

    Ren, Zhong; Liu, Guodong; Huang, Zhen

    2012-11-01

    The image reconstruction is a key step in medical imaging (MI), and the performance of the reconstruction algorithm determines the quality and resolution of the reconstructed image. Although other algorithms exist, filtered back-projection (FBP) is still the classical and most commonly used algorithm in clinical MI. In FBP, filtering of the original projection data is a key step in overcoming artifacts in the reconstructed image. Simple use of classical filters such as the Shepp-Logan (SL) and Ram-Lak (RL) filters has drawbacks and limitations in practice, especially for projection data polluted by non-stationary random noise. Therefore, improved wavelet denoising combined with a parallel-beam FBP algorithm is used in this paper to enhance the quality of the reconstructed image. In the experiments, reconstruction results were compared between the improved wavelet denoising method and others (direct FBP, mean-filter FBP and median-filter FBP). To determine the optimum reconstruction, different algorithms, and different wavelet bases combined with three filters, were each tested. Experimental results show that the reconstruction quality of the improved FBP algorithm is better than that of the others. Comparing the results of the different algorithms with two evaluation standards, mean-square error (MSE) and peak signal-to-noise ratio (PSNR), the improved FBP based on db2 and the Hanning filter at decomposition scale 2 performed best: its MSE was lower and its PSNR higher than the others. Therefore, this improved FBP algorithm has potential value in medical imaging.
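
    A sketch of the pipeline using common libraries as stand-ins for the paper's implementation (PyWavelets and scikit-image; the universal-threshold rule below is assumed rather than taken from the paper): denoise the noisy sinogram with a db2 wavelet at level 2, then reconstruct with filtered back-projection and a Hann-windowed filter.

        import numpy as np
        import pywt
        from skimage.data import shepp_logan_phantom
        from skimage.transform import radon, iradon

        image = shepp_logan_phantom()
        theta = np.linspace(0.0, 180.0, 180, endpoint=False)
        sino = radon(image, theta=theta)
        sino += np.random.normal(0.0, 2.0, sino.shape)       # noisy projections

        # soft-threshold the detail coefficients of the sinogram
        coeffs = pywt.wavedec2(sino, "db2", level=2)
        sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745   # noise estimate
        thr = sigma * np.sqrt(2 * np.log(sino.size))         # universal threshold
        coeffs = [coeffs[0]] + [
            tuple(pywt.threshold(d, thr, mode="soft") for d in level)
            for level in coeffs[1:]
        ]
        denoised = pywt.waverec2(coeffs, "db2")[: sino.shape[0], : sino.shape[1]]

        recon = iradon(denoised, theta=theta, filter_name="hann")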

  2. Nonlocal variational model and filter algorithm to remove multiplicative noise

    NASA Astrophysics Data System (ADS)

    Chen, Dai-Qiang; Zhang, Hui; Cheng, Li-Zhi

    2010-07-01

    The nonlocal (NL) means filter proposed by Buades, Coll, and Morel (SIAM Multiscale Model. Simul. 4(2), 490-530, 2005), which makes full use of the redundant information in images, has been shown to be very efficient for denoising images corrupted by additive Gaussian noise. On the basis of the NL method, and aiming to minimize the conditional mean-square error, we design an NL means filter to remove multiplicative noise; combining the NL filter with a regularization method, we propose an NL total variation (TV) model and present a fast iterative algorithm for it. Experiments demonstrate that our algorithm outperforms the TV method: it is superior in preserving small structures and textures and obtains an improvement in peak signal-to-noise ratio.

  3. Workflow as a Service in the Cloud: Architecture and Scheduling Algorithms

    PubMed Central

    Wang, Jianwu; Korambath, Prakashan; Altintas, Ilkay; Davis, Jim; Crawl, Daniel

    2017-01-01

    With more and more workflow systems adopting the cloud as their execution environment, it becomes increasingly challenging to efficiently manage various workflows, virtual machines (VMs) and workflow execution on VM instances. To make the system scalable and easy to extend, we design a Workflow as a Service (WFaaS) architecture with independent services. A core part of the architecture is how to efficiently respond to continuous workflow requests from users and schedule their executions in the cloud. Based on different targets, we propose four heuristic workflow scheduling algorithms for the WFaaS architecture, and analyze the differences and best usages of the algorithms in terms of performance, cost and the price/performance ratio via experimental studies. PMID:29399237

  4. Remote sensing of oligotrophic waters: model divergence at low chlorophyll concentrations.

    PubMed

    Mehrtens, Hela; Martin, Thomas

    2002-11-20

    The performance of the OC2 Sea-viewing Wide Field-of-view Sensor (SeaWiFS) algorithm, based on 490- and 555-nm water-leaving radiances, at low chlorophyll contents is compared with those of semianalytical models and a Monte Carlo radiative transfer model. We introduce our model, which uses two particle phase functions and scattering coefficient parameterizations to achieve a backscattering ratio that varies with chlorophyll concentration. We discuss the various parameterizations and compare them with existing measurements. The SeaWiFS algorithm could be confirmed within an accuracy of 35% over a chlorophyll range from 0.1 to 1 mg m⁻³, whereas for lower chlorophyll concentrations we found a significant overestimation by the OC2 algorithm.
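
    For context, OC2-type algorithms have the band-ratio polynomial form sketched below; the coefficients here are illustrative placeholders, not the operational SeaWiFS values.

        import numpy as np

        def oc2_like(Lw490, Lw555, a=(0.34, -3.0, 2.8, -2.0)):
            # chlorophyll from a cubic polynomial in the log blue/green ratio
            R = np.log10(Lw490 / Lw555)
            log_chl = a[0] + a[1] * R + a[2] * R**2 + a[3] * R**3
            return 10.0 ** log_chl            # mg m^-3

        # a high 490/555 ratio (clear oligotrophic water) -> low chlorophyll
        print(oc2_like(1.8, 0.4), oc2_like(1.0, 0.9))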

  5. Preliminary Design of a Manned Nuclear Electric Propulsion Vehicle Using Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Irwin, Ryan W.; Tinker, Michael L.

    2005-01-01

    Nuclear electric propulsion (NEP) vehicles will be needed for future manned missions to Mars and beyond. Candidate designs must be identified for further detailed design from a large array of possibilities. Genetic algorithms have proven their utility in conceptual design studies by effectively searching a large design space to pinpoint unique optimal designs. This research combined analysis codes for NEP subsystems with a genetic algorithm. The use of penalty functions with scaling ratios was investigated to increase computational efficiency. Also, the selection of design variables for optimization was considered to reduce computation time without losing beneficial design search space. Finally, trend analysis of a reference mission to the asteroids yielded a group of candidate designs for further analysis.

  6. Maximal clique enumeration with data-parallel primitives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lessley, Brenton; Perciano, Talita; Mathai, Manish

    The enumeration of all maximal cliques in an undirected graph is a fundamental problem arising in several research areas. We consider maximal clique enumeration on shared-memory, multi-core architectures and introduce an approach consisting entirely of data-parallel operations, in an effort to achieve efficient and portable performance across different architectures. We study the performance of the algorithm via experiments varying over benchmark graphs and architectures. Overall, we observe that our algorithm achieves up to a 33× speedup over state-of-the-art distributed algorithms and a 9× speedup over serial algorithms for graphs with higher ratios of maximal cliques to total cliques. Further, we attain additional speedups on a GPU architecture, demonstrating the portable performance of our data-parallel design.

  7. A 3/2-Approximation Algorithm for Multiple Depot Multiple Traveling Salesman Problem

    NASA Astrophysics Data System (ADS)

    Xu, Zhou; Rodrigues, Brian

    As an important extension of the classical traveling salesman problem (TSP), the multiple depot multiple traveling salesman problem (MDMTSP) is to minimize the total length of a collection of tours for multiple vehicles to serve all the customers, where each vehicle must start or stay at its distinct depot. Due to the gap between the existing best approximation ratios for the TSP and for the MDMTSP in literature, which are 3/2 and 2, respectively, it is an open question whether or not a 3/2-approximation algorithm exists for the MDMTSP. We have partially addressed this question by developing a 3/2-approximation algorithm, which runs in polynomial time when the number of depots is a constant.

  8. On securing wireless sensor network--novel authentication scheme against DOS attacks.

    PubMed

    Raja, K Nirmal; Beno, M Marsaline

    2014-10-01

    Wireless sensor networks are generally deployed for collecting data from various environments. Several application-specific sensor network cryptography algorithms have been proposed in research. However, WSNs have many constraints, including low computation capability, limited memory and energy resources, and vulnerability to physical capture, which pose unique security challenges and call for substantial improvements. This paper presents a novel security mechanism and algorithm for wireless sensor network security, along with an application of the algorithm. The proposed scheme provides strong authentication against Denial of Service (DoS) attacks. The scheme is simulated using Network Simulator 2 (NS2) and analyzed in terms of packet delivery ratio; the results show improved throughput.

  9. Towards automatic patient selection for chemotherapy in colorectal cancer trials

    NASA Astrophysics Data System (ADS)

    Wright, Alexander; Magee, Derek; Quirke, Philip; Treanor, Darren E.

    2014-03-01

    A key factor in the prognosis of colorectal cancer, and its response to chemoradiotherapy, is the ratio of cancer cells to surrounding tissue (the so-called tumour:stroma ratio). Currently the tumour:stroma ratio is calculated manually, by examining H&E-stained slides and counting the proportion of area of each. Virtual slides facilitate this analysis by allowing pathologists to annotate areas of tumour on a given digital slide image, and in-house-developed stereometry tools mark random, systematic points on the slide, known as spots. These spots are examined and classified by the pathologist. Typical analyses require a pathologist to score at least 300 spots per tumour. This is a time-consuming (10-60 minutes per case) and laborious task for the pathologist, and automating the process is highly desirable. Using an existing dataset of expert-classified spots from one colorectal cancer clinical trial, an automated tumour:stroma detection algorithm has been trained and validated. Each spot is extracted as an image patch and then processed for feature extraction, identifying colour, texture, stain intensity and object characteristics. These features are used as training data for a random forest classification algorithm and validated against unseen image patches. This process was repeated for multiple patch sizes. Over 82,000 such patches have been used, and results show an accuracy of up to 79%, depending on image patch size. A second study examining contextual requirements for pathologist scoring was conducted; it indicates that further analysis of structures within each image patch is required to improve algorithm accuracy.

  10. Feature Extraction from Subband Brain Signals and Its Classification

    NASA Astrophysics Data System (ADS)

    Mukul, Manoj Kumar; Matsuno, Fumitoshi

    This paper considers both non-stationarity and independence/uncorrelatedness criteria, along with the asymmetry ratio, over electroencephalogram (EEG) signals, and proposes a hybrid signal preprocessing approach ahead of feature extraction. A filter bank approach of the discrete wavelet transform (DWT) is used to exploit the non-stationary characteristics of the EEG signals; it decomposes the raw EEG signals into subbands of different center frequencies, called rhythms. Post-processing of the selected subband by the AMUSE algorithm (a second-order-statistics-based ICA/BSS algorithm) provides the separating matrix for each class of movement imagery. In the subband domain, the whitening and separating matrices need not satisfy the orthogonality and orthonormality criteria, respectively. The human brain has an asymmetrical structure, and it has been observed that the ratio between the norms of the left- and right-class separating matrices should differ for better discrimination between the two classes. The alpha/beta band asymmetry ratio between the separating matrices of the left and right classes provides the condition for selecting an appropriate multiplier. We therefore modify the estimated separating matrix by an appropriate multiplier to obtain the required asymmetry, extending the AMUSE algorithm to the subband domain. The desired subband is further subjected to the updated separating matrix to extract subband sub-components from each class. The extracted subband sub-component sources are then subjected to feature extraction (power spectral density) followed by linear discriminant analysis (LDA).

  11. Evaluation of genotype-guided acenocoumarol dosing algorithms in Russian patients.

    PubMed

    Sychev, Dmitriy Alexeyevich; Rozhkov, Aleksandr Vladimirovich; Ananichuk, Anna Viktorovna; Kazakov, Ruslan Evgenyevich

    2017-05-24

    Acenocoumarol dose is normally determined via a step-by-step adjustment process based on International Normalized Ratio (INR) measurements. During this time, the risk of adverse reactions is especially high. Several genotype-based acenocoumarol dosing algorithms have been created to predict ideal doses at the start of anticoagulant therapy. Nine dosing algorithms were selected through a literature search. These were evaluated using a cohort of 63 patients with atrial fibrillation receiving acenocoumarol therapy. None of the existing algorithms could predict the ideal acenocoumarol dose in 50% of Russian patients. The Wolkanin-Bartnik algorithm, based on a European population, performed best, with the highest correlation (r=0.397) and a mean absolute error (MAE) of 0.82 (±0.61) mg/day. EU-PACT also managed to give an estimate within the ideal range in 43% of cases. The two least accurate results were yielded by the Indian population-based algorithms. Among patients receiving amiodarone, the algorithms by Schie and Tong proved the most effective, with MAEs of 0.48±0.42 mg/day and 0.56±0.31 mg/day, respectively. Patient ethnicity and amiodarone intake are factors that must be considered when building future algorithms. Further research is required to find the ideal formula for acenocoumarol maintenance doses in Russian patients.

  12. A GPU-accelerated 3D Coupled Sub-sample Estimation Algorithm for Volumetric Breast Strain Elastography

    PubMed Central

    Peng, Bo; Wang, Yuqi; Hall, Timothy J; Jiang, Jingfeng

    2017-01-01

    The primary objective of this work was to extend a previously published 2D coupled sub-sample tracking algorithm to 3D speckle tracking in the framework of ultrasound breast strain elastography. To overcome the heavy computational cost, we investigated the use of a graphics processing unit (GPU) to accelerate the 3D coupled sub-sample speckle tracking method. The performance of the proposed GPU implementation was tested using a tissue-mimicking (TM) phantom and in vivo breast ultrasound data, and compared with the conventional 3D quadratic sub-sample estimation algorithm. On the basis of these evaluations, we concluded that the GPU implementation of this 3D sub-sample estimation algorithm can provide high-quality strain data (i.e. high correlation between the pre- and the motion-compensated post-deformation RF echo data and high contrast-to-noise-ratio strain images) compared with the conventional 3D quadratic sub-sample algorithm. Using the GPU implementation of the 3D speckle tracking algorithm, volumetric strain data can be obtained relatively quickly (approximately 20 seconds per 2.5 cm × 2.5 cm × 2.5 cm volume). PMID:28166493

  13. Genetic Particle Swarm Optimization-Based Feature Selection for Very-High-Resolution Remotely Sensed Imagery Object Change Detection.

    PubMed

    Chen, Qiang; Chen, Yunhao; Jiang, Weiguo

    2016-07-30

    In the field of multi-feature Object-Based Change Detection (OBCD) for very-high-resolution remotely sensed images, image objects have abundant features, and feature selection affects the precision and efficiency of OBCD. Through object-based image analysis, this paper proposes a Genetic Particle Swarm Optimization (GPSO)-based feature selection algorithm to solve the optimization problem of feature selection in multi-feature OBCD. We select the Ratio of Mean to Variance (RMV) as the fitness function of GPSO and apply the proposed algorithm to an object-based hybrid multivariate alteration detection model. Two experimental cases on Worldview-2/3 images confirm that GPSO can significantly improve the speed of convergence and effectively avoid premature convergence, relative to other feature selection algorithms. According to the accuracy evaluation of OBCD, GPSO is superior to other algorithms in overall accuracy (84.17% and 83.59%) and Kappa coefficient (0.6771 and 0.6314). Moreover, sensitivity analysis shows that the proposed algorithm is not easily influenced by the initial parameters, though the number of features to be selected and the size of the particle swarm do affect it. The comparison experiments reveal that RMV is more suitable than other functions as the fitness function of a GPSO-based feature selection algorithm.

  14. A new unequal-weighted triple-frequency first order ionosphere correction algorithm and its application in COMPASS

    NASA Astrophysics Data System (ADS)

    Liu, WenXiang; Mou, WeiHua; Wang, FeiXue

    2012-03-01

    With the introduction of triple-frequency signals in GNSS, multi-frequency ionosphere correction technology has developed rapidly. References indicate that the triple-frequency second-order ionosphere correction is worse than the dual-frequency first-order correction because of its larger noise amplification factor. On the assumption that the variances of the three frequencies' pseudoranges are equal, other references have presented a triple-frequency first-order ionosphere correction, which proved worse or better than the dual-frequency first-order correction in different situations. In practice, the PN code rate, carrier-to-noise ratio, DLL parameters, and multipath effects of each frequency are not the same, so the three pseudorange variances are unequal. Under this consideration, a new unequal-weighted triple-frequency first-order ionosphere correction algorithm, which minimizes the variance of the ionosphere-free pseudorange combination, is proposed in this paper. It is found that conventional dual-frequency first-order correction algorithms and the equal-weighted triple-frequency first-order correction algorithm are special cases of the new algorithm. A new pseudorange variance estimation method based on the three-carrier combination is also introduced. Theoretical analysis shows that the new algorithm is optimal. An experiment with COMPASS G3 satellite observations demonstrates that the ionosphere-free pseudorange combination variance of the new algorithm is smaller than that of traditional multi-frequency correction algorithms.
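
    The stated optimality criterion reduces to a small equality-constrained quadratic program, sketched below under assumed per-frequency noise levels (the frequencies are nominal COMPASS/BeiDou B1/B3/B2 carriers, used here only for illustration): keep geometry (weights sum to 1), cancel the first-order ionosphere term (which scales as 1/f²), and minimize the combined variance via the KKT system.

        import numpy as np

        def iono_free_weights(freqs_hz, sigmas_m):
            f = np.asarray(freqs_hz, dtype=float)
            s = np.asarray(sigmas_m, dtype=float)
            A = np.vstack([np.ones_like(f), 1.0 / f**2])   # constraints A w = b
            b = np.array([1.0, 0.0])
            Q = 2.0 * np.diag(s**2)                        # minimize w^T diag(s^2) w
            kkt = np.block([[Q, A.T], [A, np.zeros((2, 2))]])
            rhs = np.concatenate([np.zeros(f.size), b])
            return np.linalg.solve(kkt, rhs)[: f.size]

        freqs = [1561.098e6, 1268.52e6, 1207.14e6]   # illustrative tri-band set
        w = iono_free_weights(freqs, sigmas_m=[0.30, 0.25, 0.35])
        print(w, w.sum())                            # weights sum to 1, iono-free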

  15. Biological sequence compression algorithms.

    PubMed

    Matsumoto, T; Sadakane, K; Imai, H

    2000-01-01

    Today, more and more DNA sequences are becoming available. Information about DNA sequences is stored in molecular biology databases, whose size and importance will keep growing, so this information must be stored and communicated efficiently. Furthermore, sequence compression can be used to define similarities between biological sequences. Standard compression algorithms such as gzip or compress cannot compress DNA sequences; they only expand them in size. On the other hand, CTW (the Context Tree Weighting method) can compress DNA sequences to less than two bits per symbol. These algorithms do not use the special structures of biological sequences. Two characteristic structures of DNA sequences are known: palindromes (reverse complements) and approximate repeats. Several algorithms specific to DNA sequences that use these structures can compress them to less than two bits per symbol. In this paper, we improve CTW so that the characteristic structures of DNA sequences are exploited. Before encoding the next symbol, the algorithm searches for an approximate repeat and palindrome using hashing and dynamic programming. If there is a palindrome or an approximate repeat of sufficient length, our algorithm represents it with its length and distance. With this preprocessing, the new program achieves a slightly higher compression ratio than existing DNA-oriented compression algorithms. We also describe a new compression algorithm for protein sequences.

  16. ERGC: an efficient referential genome compression algorithm

    PubMed Central

    Saha, Subrata; Rajasekaran, Sanguthevar

    2015-01-01

    Motivation: Genome sequencing has become faster and more affordable. Consequently, the number of available complete genomic sequences is increasing rapidly. As a result, the cost to store, process, analyze and transmit the data is becoming a bottleneck for research and future medical applications. So, the need for devising efficient data compression and data reduction techniques for biological sequencing data is growing by the day. Although there exists a number of standard data compression algorithms, they are not efficient in compressing biological data. These generic algorithms do not exploit some inherent properties of the sequencing data while compressing. To exploit statistical and information-theoretic properties of genomic sequences, we need specialized compression algorithms. Five different next-generation sequencing data compression problems have been identified and studied in the literature. We propose a novel algorithm for one of these problems known as reference-based genome compression. Results: We have done extensive experiments using five real sequencing datasets. The results on real genomes show that our proposed algorithm is indeed competitive and performs better than the best known algorithms for this problem. It achieves compression ratios that are better than those of the currently best performing algorithms. The time to compress and decompress the whole genome is also very promising. Availability and implementation: The implementations are freely available for non-commercial purposes. They can be downloaded from http://engr.uconn.edu/∼rajasek/ERGC.zip. Contact: rajasek@engr.uconn.edu PMID:26139636

  17. Combined Dust Detection Algorithm by Using MODIS Infrared Channels over East Asia

    NASA Technical Reports Server (NTRS)

    Park, Sang Seo; Kim, Jhoon; Lee, Jaehwa; Lee, Sukjo; Kim, Jeong Soo; Chang, Lim Seok; Ou, Steve

    2014-01-01

    A new dust detection algorithm is developed by combining the results of multiple dust detection methods using IR channels onboard the MODerate resolution Imaging Spectroradiometer (MODIS). The Brightness Temperature Difference (BTD) between two wavelength channels has been widely used in previous dust detection methods. However, BTD methods have limitations in identifying the offset values of the BTD needed to discriminate clear-sky areas. The current algorithm overcomes the disadvantages of previous dust detection methods by considering the Brightness Temperature Ratio (BTR) values of the dual wavelength channels with a 30-day composite, the optical properties of the dust particles, the variability of surface properties, and cloud contamination. The current algorithm therefore shows improvements in detecting dust-loaded regions over land during daytime. Finally, the confidence index of the current dust algorithm is given for 10 × 10 pixel blocks of the MODIS observations. From January to June 2006, the results of the current algorithm agree within 64 to 81% with those found using the fine mode fraction (FMF) and aerosol index (AI) from MODIS and the Ozone Monitoring Instrument (OMI). The agreement between the results of the current algorithm and the OMI AI over non-polluted land also ranges from 60 to 67%, avoiding errors due to anthropogenic aerosol. In addition, the developed algorithm shows statistically significant results at four AErosol RObotic NETwork (AERONET) sites in East Asia.
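
    A toy sketch of the kind of brightness-temperature tests being combined (the thresholds are placeholders; the actual algorithm derives them from 30-day composites and weights several tests into a confidence index): mineral dust tends to drive the 11-12 µm BTD negative, while a cold 11-µm test screens out thick cloud.

        import numpy as np

        def dust_mask(bt11, bt12, btd_thresh=0.0, cloud_bt=270.0):
            btd = bt11 - bt12              # split-window BTD in kelvin
            not_cloud = bt11 > cloud_bt    # crude cold-cloud screening
            return (btd < btd_thresh) & not_cloud

        bt11 = 285.0 + 5.0 * np.random.randn(10, 10)
        # dusty pixels (~30%) get a negative BTD signature, the rest positive
        bt12 = bt11 + np.where(np.random.rand(10, 10) < 0.3, 1.5, -0.5)
        print(dust_mask(bt11, bt12).sum(), "candidate dust pixels")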

  18. Experimental Analysis of Algorithms.

    DTIC Science & Technology

    1987-12-01

    [Fragmentary OCR of the scanned report; only the following is recoverable: certain performance measures (e.g., the solution ratio in the bin-packing study) were suggested by theoretical analysis, and Gnanadesikan and Gustafson [16] note that significantly different sizes... Reference [16]: M. Gnanadesikan and H. W. Gustafson, Properties of Performance Measures, 1985 (summary of a poster presentation).]

  19. UV Reconstruction Algorithm And Diurnal Cycle Variability

    NASA Astrophysics Data System (ADS)

    Curylo, Aleksander; Litynska, Zenobia; Krzyscin, Janusz; Bogdanska, Barbara

    2009-03-01

    UV reconstruction is a method of estimating surface UV with the use of available actinometric and aerological measurements. UV reconstruction is necessary for the study of long-term UV change, since a typical series of UV measurements is no longer than 15 years, which is too short for trend estimation. The essential problem in the reconstruction algorithm is good parameterization of clouds. In our previous algorithm we used an empirical relation between the Cloud Modification Factor (CMF) in global radiation and the CMF in UV. The CMF is defined as the ratio between measured and modelled irradiances; clear-sky irradiance was calculated with a solar radiative transfer model. In the proposed algorithm, the time variability of global radiation during the diurnal cycle is used as an additional source of information. To develop an improved reconstruction algorithm, relevant data from Legionowo (52.4°N, 21.0°E, 96 m a.s.l.), Poland, were collected with the following instruments: a NILU-UV multi-channel radiometer, a Kipp&Zonen pyranometer, and radiosondes providing profiles of ozone, humidity and temperature. The proposed algorithm has been used for the reconstruction of UV at four Polish sites, Mikolajki, Kolobrzeg, Warszawa-Bielany and Zakopane, since the early 1960s. Krzyscin's reconstruction of total ozone has been used in the calculations.
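
    Schematically, the CMF chain in the abstract looks like the sketch below; the linear mapping from global-radiation CMF to UV CMF is a placeholder for the empirical relation (the actual fit, and the diurnal-variability term the new algorithm adds, are not reproduced here).

        def reconstruct_uv(global_measured, global_clear_model, uv_clear_model,
                           a=0.8, b=0.2):
            cmf_global = global_measured / global_clear_model   # cloud effect
            cmf_uv = a * cmf_global + b       # placeholder empirical relation
            return cmf_uv * uv_clear_model

        # e.g. clouds passing 60% of clear-sky global radiation -> ~68% of UV here
        print(reconstruct_uv(480.0, 800.0, 2.5))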

  20. A multimedia retrieval framework based on semi-supervised ranking and relevance feedback.

    PubMed

    Yang, Yi; Nie, Feiping; Xu, Dong; Luo, Jiebo; Zhuang, Yueting; Pan, Yunhe

    2012-04-01

    We present a new framework for multimedia content analysis and retrieval which consists of two independent algorithms. First, we propose a new semi-supervised algorithm called ranking with Local Regression and Global Alignment (LRGA) to learn a robust Laplacian matrix for data ranking. In LRGA, for each data point, a local linear regression model is used to predict the ranking scores of its neighboring points. A unified objective function is then proposed to globally align the local models from all the data points so that an optimal ranking score can be assigned to each data point. Second, we propose a semi-supervised long-term Relevance Feedback (RF) algorithm to refine the multimedia data representation. The proposed long-term RF algorithm utilizes both the multimedia data distribution in multimedia feature space and the history RF information provided by users. A trace ratio optimization problem is then formulated and solved by an efficient algorithm. The algorithms have been applied to several content-based multimedia retrieval applications, including cross-media retrieval, image retrieval, and 3D motion/pose data retrieval. Comprehensive experiments on four data sets have demonstrated its advantages in precision, robustness, scalability, and computational efficiency.

  1. Dynamic virtual optical network embedding in spectral and spatial domains over elastic optical networks with multicore fibers

    NASA Astrophysics Data System (ADS)

    Zhu, Ruijie; Zhao, Yongli; Yang, Hui; Tan, Yuanlong; Chen, Haoran; Zhang, Jie; Jue, Jason P.

    2016-08-01

Network virtualization can eradicate the ossification of the infrastructure and stimulate innovation of new network architectures and applications. Elastic optical networks (EONs) are ideal substrate networks for provisioning flexible virtual optical network (VON) services. However, as network traffic continues to increase exponentially, the capacity of EONs will soon reach its physical limit. To further increase network flexibility and capacity, the concept of EONs is extended into the spatial domain. How to map the VON onto substrate networks while thoroughly using the spectral and spatial resources is extremely important. This process is called VON embedding (VONE). Considering the two kinds of resources at the same time during the embedding process, we propose two VONE algorithms, the adjacent link embedding algorithm (ALEA) and the remote link embedding algorithm (RLEA). First, we introduce a model to solve the VONE problem. Then we design an embedding-ability measurement for network elements. Based on the network elements' embedding ability, the two VONE algorithms are proposed. Simulation results show that the proposed VONE algorithms achieve better performance than the baseline algorithm in terms of blocking probability and revenue-to-cost ratio.

  2. Prediction of cancer proteins by integrating protein interaction, domain frequency, and domain interaction data using machine learning algorithms.

    PubMed

    Huang, Chien-Hung; Peng, Huai-Shun; Ng, Ka-Lok

    2015-01-01

Many proteins are known to be associated with cancer diseases. Quite often, their precise functional role in disease pathogenesis remains unclear. A strategy to gain a better understanding of the function of these proteins is to make use of a combination of different aspects of proteomics data types. In this study, we extended Aragues's method by employing the protein-protein interaction (PPI) data, domain-domain interaction (DDI) data, weighted domain frequency score (DFS), and cancer linker degree (CLD) data to predict cancer proteins. Performances were benchmarked based on three kinds of experiments as follows: (I) using individual algorithms, (II) combining algorithms, and (III) combining algorithms of the same classification type. When compared with Aragues's method, our proposed methods, that is, machine learning algorithms and majority voting, are significantly superior in all seven performance measures. We demonstrated the accuracy of the proposed method on two independent datasets. The best algorithm achieves a hit ratio of 89.4% and 72.8% for the lung cancer dataset and the lung cancer microarray study, respectively. It is anticipated that the current research could help in understanding disease mechanisms and improving diagnosis.

  3. A Space-Time Signal Decomposition Algorithm for Downlink MIMO DS-CDMA Receivers

    NASA Astrophysics Data System (ADS)

    Wang, Yung-Yi; Fang, Wen-Hsien; Chen, Jiunn-Tsair

We propose a dimension reduction algorithm for the receiver of the downlink of direct-sequence code-division multiple access (DS-CDMA) systems in which both the transmitters and the receivers employ antenna arrays of multiple elements. To estimate the high order channel parameters, we develop a layered architecture using dimension-reduced parameter estimation algorithms to estimate the frequency-selective multipath channels. In the proposed architecture, to exploit the space-time geometric characteristics of multipath channels, spatial beamformers and constrained (or unconstrained) temporal filters are adopted for clustered-multipath grouping and path isolation. In conjunction with multiple access interference (MAI) suppression techniques, the proposed architecture jointly estimates the directions of arrival, propagation delays, and fading amplitudes of the downlink fading multipaths. With the outputs of the proposed architecture, the signals of interest can then be naturally detected by using path-wise maximum ratio combining. Compared to traditional techniques, such as the Joint-Angle-and-Delay-Estimation (JADE) algorithm for DOA-delay joint estimation and the space-time minimum mean square error (ST-MMSE) algorithm for signal detection, computer simulations show that the proposed algorithm substantially mitigates the computational complexity at the expense of only slight performance degradation.
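Path-wise maximum ratio combining, used for detection above, reduces to weighting each branch by the conjugate of its channel gain. A minimal single-symbol sketch, with invented two-path numbers:

```python
import numpy as np

def mrc_combine(y, h):
    """Maximum ratio combining across L resolved paths/branches.

    y : (L,) received complex samples
    h : (L,) estimated complex channel gains
    Weighting each branch by conj(h) maximizes output SNR under
    independent, equal-variance branch noise.
    """
    return np.vdot(h, y) / np.sum(np.abs(h) ** 2)

# Hypothetical two-path example with a unit symbol and small noise.
h = np.array([0.9 + 0.2j, 0.4 - 0.1j])
s = 1.0 + 0.0j
y = h * s + 0.05 * (np.random.randn(2) + 1j * np.random.randn(2))
s_hat = mrc_combine(y, h)
```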

  4. A fast non-local means algorithm based on integral image and reconstructed similar kernel

    NASA Astrophysics Data System (ADS)

    Lin, Zheng; Song, Enmin

    2018-03-01

Image denoising is one of the essential methods in digital image processing. The non-local means (NLM) denoising approach is a remarkable denoising technique; however, its computational complexity is high. In this paper, we design a fast NLM algorithm based on an integral image and a reconstructed similarity kernel. First, the integral image is introduced into the traditional NLM algorithm. This removes a great deal of repetitive computation in the parallel processing, which greatly improves the running speed of the algorithm. Second, in order to amend the error introduced by the integral image, we construct a similarity window resembling the Gaussian kernel in a pyramidal stacking pattern. Finally, in order to eliminate the influence produced by replacing the Gaussian-weighted Euclidean distance with the plain Euclidean distance, we propose a scheme to construct a 3 × 3 similarity kernel in a neighborhood window, which reduces the effect of noise on a single pixel. Experimental results demonstrate that the proposed algorithm is about seventeen times faster than the traditional NLM algorithm, yet produces comparable results in terms of Peak Signal-to-Noise Ratio (the PSNR increased by 2.9% on average) and perceptual image quality.
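The integral-image (summed-area table) trick named above lets any rectangular patch sum be evaluated in constant time, which is what removes the repeated work from the NLM distance computations. A generic sketch, not the paper's full pipeline:

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero border: S[i, j] = sum of img[:i, :j]."""
    S = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return np.pad(S, ((1, 0), (1, 0)))

def window_sum(S, top, left, h, w):
    """Sum over img[top:top+h, left:left+w] in O(1) per query."""
    return (S[top + h, left + w] - S[top, left + w]
            - S[top + h, left] + S[top, left])
```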

  5. Prediction of Cancer Proteins by Integrating Protein Interaction, Domain Frequency, and Domain Interaction Data Using Machine Learning Algorithms

    PubMed Central

    2015-01-01

Many proteins are known to be associated with cancer diseases. Quite often, their precise functional role in disease pathogenesis remains unclear. A strategy to gain a better understanding of the function of these proteins is to make use of a combination of different aspects of proteomics data types. In this study, we extended Aragues's method by employing the protein-protein interaction (PPI) data, domain-domain interaction (DDI) data, weighted domain frequency score (DFS), and cancer linker degree (CLD) data to predict cancer proteins. Performances were benchmarked based on three kinds of experiments as follows: (I) using individual algorithms, (II) combining algorithms, and (III) combining algorithms of the same classification type. When compared with Aragues's method, our proposed methods, that is, machine learning algorithms and majority voting, are significantly superior in all seven performance measures. We demonstrated the accuracy of the proposed method on two independent datasets. The best algorithm achieves a hit ratio of 89.4% and 72.8% for the lung cancer dataset and the lung cancer microarray study, respectively. It is anticipated that the current research could help in understanding disease mechanisms and improving diagnosis. PMID:25866773

  6. An Effective Cuckoo Search Algorithm for Node Localization in Wireless Sensor Network.

    PubMed

    Cheng, Jing; Xia, Linyuan

    2016-08-31

Localization is an essential requirement in the increasing prevalence of wireless sensor network (WSN) applications. Reducing the computational complexity and communication overhead of WSN localization is of paramount importance in order to prolong the lifetime of the energy-limited sensor nodes and improve localization performance. This paper proposes an effective Cuckoo Search (CS) algorithm for node localization. Based on a modification of the step size, this approach enables the population to approach the global optimal solution rapidly, and the fitness of each solution is employed to build a mutation probability for avoiding local convergence. Further, the approach restricts the population to a certain range so that it can prevent the energy consumption caused by unproductive searching. Extensive experiments were conducted to study the effects of parameters such as anchor density, node density and communication range on the proposed algorithm with respect to average localization error and localization success ratio. In addition, a comparative study was conducted using the same localization task and the same network deployment. Experimental results prove that the proposed CS algorithm can not only increase the convergence rate but also reduce the average localization error compared with the standard CS algorithm and the Particle Swarm Optimization (PSO) algorithm.

  7. An Effective Cuckoo Search Algorithm for Node Localization in Wireless Sensor Network

    PubMed Central

    Cheng, Jing; Xia, Linyuan

    2016-01-01

Localization is an essential requirement in the increasing prevalence of wireless sensor network (WSN) applications. Reducing the computational complexity and communication overhead of WSN localization is of paramount importance in order to prolong the lifetime of the energy-limited sensor nodes and improve localization performance. This paper proposes an effective Cuckoo Search (CS) algorithm for node localization. Based on a modification of the step size, this approach enables the population to approach the global optimal solution rapidly, and the fitness of each solution is employed to build a mutation probability for avoiding local convergence. Further, the approach restricts the population to a certain range so that it can prevent the energy consumption caused by unproductive searching. Extensive experiments were conducted to study the effects of parameters such as anchor density, node density and communication range on the proposed algorithm with respect to average localization error and localization success ratio. In addition, a comparative study was conducted using the same localization task and the same network deployment. Experimental results prove that the proposed CS algorithm can not only increase the convergence rate but also reduce the average localization error compared with the standard CS algorithm and the Particle Swarm Optimization (PSO) algorithm. PMID:27589756
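For orientation, the sketch below shows the standard Cuckoo Search ingredients: a Lévy-flight step generated with Mantegna's algorithm and a move toward the current best nest. The paper's modified step size and fitness-driven mutation probability are not reproduced, and alpha is an assumed scale.

```python
import math
import numpy as np

def levy_step(dim, beta=1.5):
    """Levy-distributed step via Mantegna's algorithm, the usual step
    generator in Cuckoo Search."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0.0, sigma, dim)
    v = np.random.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def new_nest(x, best, alpha=0.01):
    """Propose a new candidate node-position estimate from nest x,
    biased toward the current best nest."""
    return x + alpha * levy_step(x.size) * (x - best)
```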

  8. Improving chlorophyll-a retrievals and cross-sensor consistency through the OCI algorithm concept

    NASA Astrophysics Data System (ADS)

    Feng, L.; Hu, C.; Lee, Z.; Franz, B. A.

    2016-02-01

The recently developed band-subtraction-based OCI chlorophyll-a algorithm is more tolerant than the band-ratio OCx algorithms to errors from atmospheric correction and other sources in oligotrophic oceans (Chl ≤ 0.25 mg m-3), and it has been implemented by NASA as the default algorithm to produce global Chl data from all ocean color missions. However, two areas still require improvements in its current implementation. First, the originally proposed algorithm switch between oligotrophic and more productive waters has been changed from 0.25-0.3 mg m-3 to 0.15-0.2 mg m-3 to account for the observed discontinuity in data statistics. Second, the algorithm does not account for variable proportions of colored dissolved organic matter (CDOM) in different ocean basins. Here, new step-wise regression equations with fine-tuned regression coefficients are used to raise the algorithm switch zone and to improve data statistics as well as retrieval accuracy. A new CDOM index (CDI) based on three spectral bands (412, 443 and 490 nm) is used as a weighting factor to adjust the algorithm for the optical disparities between different oceans. The updated Chl OCI algorithm is then evaluated for its overall accuracy using field observations from the SeaBASS data archive, and for its cross-sensor consistency using multi-sensor observations over the global oceans.
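For reference, the band-subtraction color index (CI) that underlies OCI can be sketched as below. The baseline form follows Hu et al. (2012); the CI-to-Chl coefficients are the published ones and are shown for illustration only, not the re-tuned step-wise regressions of this paper.

```python
import numpy as np

def color_index(rrs443, rrs555, rrs670):
    """Band-subtraction color index (CI): departure of Rrs(555) from a
    linear baseline between the 443 and 670 nm remote-sensing
    reflectances (form follows Hu et al., 2012)."""
    baseline = rrs443 + (555.0 - 443.0) / (670.0 - 443.0) * (rrs670 - rrs443)
    return rrs555 - baseline

def chl_ci(ci, a0=-0.4909, a1=191.6590):
    """CI-to-chlorophyll relation; the coefficients are the published
    Hu et al. (2012) values, used here as illustrative defaults."""
    return 10.0 ** (a0 + a1 * ci)
```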

  9. Quality Improvement of Liver Ultrasound Images Using Fuzzy Techniques.

    PubMed

    Bayani, Azadeh; Langarizadeh, Mostafa; Radmard, Amir Reza; Nejad, Ahmadreza Farzaneh

    2016-12-01

Liver ultrasound images are widely used to diagnose diffuse liver diseases such as fatty liver. However, the low quality of such images makes it difficult to analyze them and diagnose diseases. The purpose of this study, therefore, is to improve the contrast and quality of liver ultrasound images. In this study, several fuzzy-logic-based contrast enhancement algorithms were applied, using MATLAB 2013b, to liver ultrasound images in which the kidney is observable, since image contrast and quality themselves have a fuzzy definition: contrast improvement using a fuzzy intensification operator, contrast improvement applying fuzzy image histogram hyperbolization, and contrast improvement by fuzzy IF-THEN rules. Measured by the Mean Squared Error and Peak Signal-to-Noise Ratio obtained from different images, the fuzzy methods provided better results; compared with the histogram equalization method, their implementation improved both the contrast and visual quality of the images and the results of liver segmentation algorithms. Comparison of the four algorithms revealed the power of fuzzy logic in improving image contrast compared with traditional image processing algorithms. Moreover, the contrast improvement algorithm based on a fuzzy intensification operator was selected as the strongest algorithm considering the measured indicators. This method can also be used in future studies on other ultrasound images for quality improvement and for other image processing and analysis applications.
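Of the compared methods, the fuzzy intensification (INT) operator is the simplest to sketch: pixel memberships below 0.5 are darkened and those above are brightened, raising contrast. The min-max fuzzifier here is an assumption; the authors' membership function may differ.

```python
import numpy as np

def fuzzy_intensification(img, n_iter=1):
    """Contrast enhancement with the classical fuzzy intensification (INT)
    operator on a grayscale image array."""
    mu = (img - img.min()) / (img.max() - img.min() + 1e-12)  # fuzzify
    for _ in range(n_iter):
        # INT operator: push memberships away from 0.5
        mu = np.where(mu <= 0.5, 2.0 * mu ** 2, 1.0 - 2.0 * (1.0 - mu) ** 2)
    return (mu * 255).astype(np.uint8)                        # defuzzify
```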

  10. Quality Improvement of Liver Ultrasound Images Using Fuzzy Techniques

    PubMed Central

    Bayani, Azadeh; Langarizadeh, Mostafa; Radmard, Amir Reza; Nejad, Ahmadreza Farzaneh

    2016-01-01

Background: Liver ultrasound images are widely used to diagnose diffuse liver diseases such as fatty liver. However, the low quality of such images makes it difficult to analyze them and diagnose diseases. The purpose of this study, therefore, is to improve the contrast and quality of liver ultrasound images. Methods: In this study, several fuzzy-logic-based contrast enhancement algorithms were applied, using MATLAB 2013b, to liver ultrasound images in which the kidney is observable, since image contrast and quality themselves have a fuzzy definition: contrast improvement using a fuzzy intensification operator, contrast improvement applying fuzzy image histogram hyperbolization, and contrast improvement by fuzzy IF-THEN rules. Results: Measured by the Mean Squared Error and Peak Signal-to-Noise Ratio obtained from different images, the fuzzy methods provided better results; compared with the histogram equalization method, their implementation improved both the contrast and visual quality of the images and the results of liver segmentation algorithms. Conclusion: Comparison of the four algorithms revealed the power of fuzzy logic in improving image contrast compared with traditional image processing algorithms. Moreover, the contrast improvement algorithm based on a fuzzy intensification operator was selected as the strongest algorithm considering the measured indicators. This method can also be used in future studies on other ultrasound images for quality improvement and for other image processing and analysis applications. PMID:28077898

  11. Quantitative morphometric analysis of hepatocellular carcinoma: development of a programmed algorithm and preliminary application.

    PubMed

    Yap, Felix Y; Bui, James T; Knuttinen, M Grace; Walzer, Natasha M; Cotler, Scott J; Owens, Charles A; Berkes, Jamie L; Gaba, Ron C

    2013-01-01

The quantitative relationship between tumor morphology and malignant potential has not been explored in liver tumors. We designed a computer algorithm to analyze shape features of hepatocellular carcinoma (HCC) and tested the feasibility of morphologic analysis. Cross-sectional images from 118 patients diagnosed with HCC between 2007 and 2010 were extracted at the widest index tumor diameter. The tumor margins were outlined, and point coordinates were input into a MATLAB (MathWorks Inc., Natick, Massachusetts, USA) algorithm. Twelve shape descriptors were calculated per tumor: the compactness, the mean radial distance (MRD), the RD standard deviation (RDSD), the RD area ratio (RDAR), the zero crossings, the entropy, the mean Feret diameter (MFD), the Feret ratio, the convex hull area (CHA) and perimeter (CHP) ratios, the elliptic compactness (EC), and the elliptic irregularity (EI). The parameters were correlated with levels of alpha-fetoprotein (AFP) as an indicator of tumor aggressiveness. The quantitative morphometric analysis was technically successful in all cases. The mean parameters were as follows: compactness 0.88±0.086, MRD 0.83±0.056, RDSD 0.087±0.037, RDAR 0.045±0.023, zero crossings 6±2.2, entropy 1.43±0.16, MFD 4.40±3.14 cm, Feret ratio 0.78±0.089, CHA 0.98±0.027, CHP 0.98±0.030, EC 0.95±0.043, and EI 0.95±0.023. MFD and RDAR provided the widest value ranges and hence the best shape discrimination. The larger tumors were less compact, more concave, and less ellipsoid than the smaller tumors (P < 0.0001). AFP-producing tumors displayed greater morphologic irregularity based on several parameters, including compactness, MRD, RDSD, RDAR, entropy, and EI (P < 0.05 for all). Computerized HCC image analysis using shape descriptors is technically feasible. Aggressively growing tumors have wider diameters and more irregular margins. Future studies will determine further clinical applications for this morphologic analysis.
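A few of the listed descriptors are straightforward to compute from the outlined margin coordinates. The sketch below implements compactness, MRD, and RDSD under standard definitions, which may differ in detail from the authors' MATLAB code.

```python
import numpy as np

def shape_descriptors(x, y):
    """Compactness, MRD, and RDSD from closed-contour coordinates (x, y)."""
    # Polygon area (shoelace formula) and perimeter; contour assumed closed.
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    perim = np.sum(np.hypot(np.diff(np.append(x, x[0])),
                            np.diff(np.append(y, y[0]))))
    compactness = 4.0 * np.pi * area / perim ** 2   # 1.0 for a perfect circle
    # Radial distances from the centroid, normalized by their maximum.
    r = np.hypot(x - x.mean(), y - y.mean())
    r = r / r.max()
    return compactness, r.mean(), r.std()           # compactness, MRD, RDSD
```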

  12. Increasing critical sensitivity of the Load/Unload Response Ratio before large earthquakes with identified stress accumulation pattern

    NASA Astrophysics Data System (ADS)

    Yu, Huai-zhong; Shen, Zheng-kang; Wan, Yong-ge; Zhu, Qing-yong; Yin, Xiang-chu

    2006-12-01

The Load/Unload Response Ratio (LURR) method is proposed for short-to-intermediate-term earthquake prediction [Yin, X.C., Chen, X.Z., Song, Z.P., Yin, C., 1995. A New Approach to Earthquake Prediction — The Load/Unload Response Ratio (LURR) Theory, Pure Appl. Geophys., 145, 701-715]. This method is based on measuring the ratio between Benioff strains released during the time periods of loading and unloading, corresponding to the Coulomb Failure Stress change induced by Earth tides on optimally oriented faults. According to the method, the LURR time series usually climbs to an anomalously high peak prior to the occurrence of a large earthquake. Previous studies have indicated that the size of the critical seismogenic region selected for LURR measurements has a great influence on the evaluation of LURR. In this study, we replace the circular region usually adopted in LURR practice with an area within which the tectonic stress change would mostly affect the Coulomb stress on a potential seismogenic fault of a future event. The Coulomb stress change before a hypothetical earthquake is calculated based on a simple back-slip dislocation model of the event. This new algorithm, which combines the LURR method with our choice of identified area of increased Coulomb stress, is devised to improve the sensitivity of LURR in measuring the criticality of stress accumulation before a large earthquake. Retrospective tests of this algorithm on four large earthquakes that occurred in California over the last two decades show remarkable enhancement of the LURR precursory anomalies. For some strong events of lesser magnitude that occurred in the same neighborhoods and during the same time periods, significant anomalies are found if circular areas are used, and are not found if increased-Coulomb-stress areas are used for LURR data selection. This unique feature of the algorithm may provide stronger constraints on forecasts of the size and location of future large events.

  13. SU-E-J-91: FFT Based Medical Image Registration Using a Graphics Processing Unit (GPU).

    PubMed

    Luce, J; Hoggarth, M; Lin, J; Block, A; Roeske, J

    2012-06-01

To evaluate the efficiency gains obtained from using a Graphics Processing Unit (GPU) to perform Fourier Transform (FT) based image registration. Fourier-based image registration involves obtaining the FT of the component images and analyzing them in Fourier space to determine the translations and rotations of one image set relative to another. An important property of FT registration is that by enlarging the images (adding additional pixels), one can obtain translations and rotations with sub-pixel resolution. The expense, however, is an increased computational time. GPUs may decrease the computational time associated with FT image registration by taking advantage of their parallel architecture to perform matrix computations much more efficiently than a Central Processing Unit (CPU). In order to evaluate the computational gains produced by a GPU, images with known translational shifts were utilized. A program was written in the Interactive Data Language (IDL; Exelis, Boulder, CO) to perform CPU-based calculations. Subsequently, the program was modified using GPU bindings (Tech-X, Boulder, CO) to perform GPU-based computation on the same system. Multiple image sizes were used, ranging from 256×256 to 2304×2304. The time required to complete the full algorithm on the CPU and GPU was benchmarked, and the speed increase was defined as the ratio of the CPU-to-GPU computational time. The ratio of the CPU-to-GPU time was greater than 1.0 for all images, which indicates that the GPU performs the algorithm faster than the CPU. The smallest improvement, a 1.21 ratio, was found with the smallest image size of 256×256, and the largest speedup, a 4.25 ratio, was observed with the largest image size of 2304×2304. GPU programming resulted in a significant decrease in the computational time associated with an FT image registration algorithm. The inclusion of the GPU may provide near real-time, sub-pixel registration capability. © 2012 American Association of Physicists in Medicine.
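The core of FT-based translation registration is phase correlation: the normalized cross-power spectrum of the two images peaks at their relative shift. A minimal NumPy sketch (integer shifts only; the sub-pixel enlargement step described above is omitted):

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer (dy, dx) translation of image b relative to a."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real  # normalized spectrum
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the midpoint correspond to negative shifts (wrap-around).
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx
```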

  14. Mapping the mineralogy and lithology of Canyonlands, Utah with imaging spectrometer data and the multiple spectral feature mapping algorithm

    NASA Technical Reports Server (NTRS)

    Clark, Roger N.; Swayze, Gregg A.; Gallagher, Andrea

    1992-01-01

The sedimentary sections exposed in the Canyonlands and Arches National Parks region of Utah (generally referred to as 'Canyonlands') consist of sandstones, shales, limestones, and conglomerates. Reflectance spectra of weathered surfaces of rocks from these areas show two components: (1) variations in spectrally detectable mineralogy, and (2) variations in the relative ratios of the absorption bands between minerals. Both types of information can be used together to map each major lithology, and the Clark spectral feature mapping algorithm is applied for this purpose.

  15. Universal data compression

    NASA Astrophysics Data System (ADS)

    Lindsay, R. A.; Cox, B. V.

Universal and adaptive data compression techniques have the capability to compress all types of data without loss of information, but have the disadvantages of complexity and computation speed. Advances in hardware speed and the reduction of computational costs have made universal data compression feasible. Implementations of the Adaptive Huffman and Lempel-Ziv compression algorithms are evaluated for performance. Compression ratios versus run times for different size data files are graphically presented and discussed in the paper. Adjustments required for optimum performance of the algorithms relative to theoretically achievable limits are outlined.
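As a concrete member of the Lempel-Ziv family discussed above, a textbook LZW encoder fits in a few lines; it is a stand-in for the paper's adaptive implementations, not a reproduction of them.

```python
def lzw_compress(data: bytes) -> list[int]:
    """Textbook LZW encoder: the dictionary grows adaptively as the
    input is scanned, so repeated substrings compress to single codes."""
    table = {bytes([i]): i for i in range(256)}
    w, out = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc
        else:
            out.append(table[w])
            table[wc] = len(table)   # register the new phrase
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out

codes = lzw_compress(b"abababababab")
ratio = len(b"abababababab") / len(codes)   # crude compression-ratio proxy
```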

  16. Two-IMU FDI performance of the sequential probability ratio test during shuttle entry

    NASA Technical Reports Server (NTRS)

    Rich, T. M.

    1976-01-01

    Performance data for the sequential probability ratio test (SPRT) during shuttle entry are presented. Current modeling constants and failure thresholds are included for the full mission 3B from entry through landing trajectory. Minimum 100 percent detection/isolation failure levels and a discussion of the effects of failure direction are presented. Finally, a limited comparison of failures introduced at trajectory initiation shows that the SPRT algorithm performs slightly worse than the data tracking test.
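Wald's SPRT, the statistic behind this FDI scheme, accumulates per-sample log-likelihood ratios and compares the running sum with two thresholds. A generic sketch using Wald's threshold approximations; the shuttle-specific modeling constants and failure thresholds are not reproduced.

```python
import math

def sprt(llr_increments, alpha=0.01, beta=0.01):
    """Wald's sequential probability ratio test.

    llr_increments : iterable of per-sample log-likelihood ratios log(p1/p0)
    alpha, beta    : target false-alarm and missed-detection probabilities
    """
    upper = math.log((1 - beta) / alpha)    # cross: accept H1 (failure)
    lower = math.log(beta / (1 - alpha))    # cross: accept H0 (healthy)
    s = 0.0
    for k, inc in enumerate(llr_increments):
        s += inc
        if s >= upper:
            return "failure detected", k
        if s <= lower:
            return "no failure", k
    return "undecided", len(llr_increments) - 1
```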

  17. Contaminant source identification using semi-supervised machine learning

    NASA Astrophysics Data System (ADS)

    Vesselinov, Velimir V.; Alexandrov, Boian S.; O'Malley, Daniel

    2018-05-01

    Identification of the original groundwater types present in geochemical mixtures observed in an aquifer is a challenging but very important task. Frequently, some of the groundwater types are related to different infiltration and/or contamination sources associated with various geochemical signatures and origins. The characterization of groundwater mixing processes typically requires solving complex inverse models representing groundwater flow and geochemical transport in the aquifer, where the inverse analysis accounts for available site data. Usually, the model is calibrated against the available data characterizing the spatial and temporal distribution of the observed geochemical types. Numerous different geochemical constituents and processes may need to be simulated in these models which further complicates the analyses. In this paper, we propose a new contaminant source identification approach that performs decomposition of the observation mixtures based on Non-negative Matrix Factorization (NMF) method for Blind Source Separation (BSS), coupled with a custom semi-supervised clustering algorithm. Our methodology, called NMFk, is capable of identifying (a) the unknown number of groundwater types and (b) the original geochemical concentration of the contaminant sources from measured geochemical mixtures with unknown mixing ratios without any additional site information. NMFk is tested on synthetic and real-world site data. The NMFk algorithm works with geochemical data represented in the form of concentrations, ratios (of two constituents; for example, isotope ratios), and delta notations (standard normalized stable isotope ratios).
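The NMF step of NMFk can be sketched with scikit-learn, shown below on placeholder data. The semi-supervised clustering and the selection of the number of sources, which are the novel parts of NMFk, are not reproduced here.

```python
import numpy as np
from sklearn.decomposition import NMF

# X: observed geochemical mixtures, one row per observation point and one
# column per constituent; NMF requires non-negative entries.
X = np.abs(np.random.rand(20, 8))           # placeholder data

# Plain NMF decomposition X ~ W @ H: H holds candidate source signatures,
# W the mixing ratios. NMFk repeats this over a range of component counts
# and clusters the H factors to pick the number of sources.
model = NMF(n_components=3, init="nndsvda", max_iter=500)
W = model.fit_transform(X)                  # mixing ratios (20 x 3)
H = model.components_                       # source signatures (3 x 8)
```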

  18. Contaminant source identification using semi-supervised machine learning

    DOE PAGES

    Vesselinov, Velimir Valentinov; Alexandrov, Boian S.; O’Malley, Dan

    2017-11-08

Identification of the original groundwater types present in geochemical mixtures observed in an aquifer is a challenging but very important task. Frequently, some of the groundwater types are related to different infiltration and/or contamination sources associated with various geochemical signatures and origins. The characterization of groundwater mixing processes typically requires solving complex inverse models representing groundwater flow and geochemical transport in the aquifer, where the inverse analysis accounts for available site data. Usually, the model is calibrated against the available data characterizing the spatial and temporal distribution of the observed geochemical types. Numerous different geochemical constituents and processes may need to be simulated in these models which further complicates the analyses. In this paper, we propose a new contaminant source identification approach that performs decomposition of the observation mixtures based on Non-negative Matrix Factorization (NMF) method for Blind Source Separation (BSS), coupled with a custom semi-supervised clustering algorithm. Our methodology, called NMFk, is capable of identifying (a) the unknown number of groundwater types and (b) the original geochemical concentration of the contaminant sources from measured geochemical mixtures with unknown mixing ratios without any additional site information. NMFk is tested on synthetic and real-world site data. Finally, the NMFk algorithm works with geochemical data represented in the form of concentrations, ratios (of two constituents; for example, isotope ratios), and delta notations (standard normalized stable isotope ratios).

  19. Contaminant source identification using semi-supervised machine learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vesselinov, Velimir Valentinov; Alexandrov, Boian S.; O’Malley, Dan

Identification of the original groundwater types present in geochemical mixtures observed in an aquifer is a challenging but very important task. Frequently, some of the groundwater types are related to different infiltration and/or contamination sources associated with various geochemical signatures and origins. The characterization of groundwater mixing processes typically requires solving complex inverse models representing groundwater flow and geochemical transport in the aquifer, where the inverse analysis accounts for available site data. Usually, the model is calibrated against the available data characterizing the spatial and temporal distribution of the observed geochemical types. Numerous different geochemical constituents and processes may need to be simulated in these models which further complicates the analyses. In this paper, we propose a new contaminant source identification approach that performs decomposition of the observation mixtures based on Non-negative Matrix Factorization (NMF) method for Blind Source Separation (BSS), coupled with a custom semi-supervised clustering algorithm. Our methodology, called NMFk, is capable of identifying (a) the unknown number of groundwater types and (b) the original geochemical concentration of the contaminant sources from measured geochemical mixtures with unknown mixing ratios without any additional site information. NMFk is tested on synthetic and real-world site data. Finally, the NMFk algorithm works with geochemical data represented in the form of concentrations, ratios (of two constituents; for example, isotope ratios), and delta notations (standard normalized stable isotope ratios).

  20. Combining optimization methods with response spectra curve-fitting toward improved damping ratio estimation

    NASA Astrophysics Data System (ADS)

    Brewick, Patrick T.; Smyth, Andrew W.

    2016-12-01

    The authors have previously shown that many traditional approaches to operational modal analysis (OMA) struggle to properly identify the modal damping ratios for bridges under traffic loading due to the interference caused by the driving frequencies of the traffic loads. This paper presents a novel methodology for modal parameter estimation in OMA that overcomes the problems presented by driving frequencies and significantly improves the damping estimates. This methodology is based on finding the power spectral density (PSD) of a given modal coordinate, and then dividing the modal PSD into separate regions, left- and right-side spectra. The modal coordinates were found using a blind source separation (BSS) algorithm and a curve-fitting technique was developed that uses optimization to find the modal parameters that best fit each side spectra of the PSD. Specifically, a pattern-search optimization method was combined with a clustering analysis algorithm and together they were employed in a series of stages in order to improve the estimates of the modal damping ratios. This method was used to estimate the damping ratios from a simulated bridge model subjected to moving traffic loads. The results of this method were compared to other established OMA methods, such as Frequency Domain Decomposition (FDD) and BSS methods, and they were found to be more accurate and more reliable, even for modes that had their PSDs distorted or altered by driving frequencies.

  1. The effects of the spatial influence function on orthotropic femur remodelling.

    PubMed

    Shang, Y; Bai, J; Peng, L

    2008-07-01

The morphology and internal structure of bone are modulated by mechanical stimulus. Osteocytes can sense stimulus signals from adjacent regions and respond to them through bone growth or bone absorption. This mechanism can be modelled as the spatial influence function (SIF) in a bone adaptation algorithm. In this paper, the remodelling process was simulated in human femurs using an adaptation algorithm with and without the SIF, and the trabecular bone was assumed to be orthotropic. Different influence radii and weighting factors were adopted to study the effects of the SIF on the bone density distribution and trabecular alignment. The results show that the mean density and the L-T ratio (the ratio of longitudinal modulus to transverse modulus) had an excellent linear relationship with the weighting factor when the influence radius was small. The characteristics of the density distribution and L-T ratio accorded with actual observation and measurement when a small weighting factor was used. A large influence radius and weighting factor led to unrealistic results. In contrast, the SIF hardly affected the trabecular alignment, as the mean variation angles of the principal axes were less than 1.0 degree for any influence radius and weighting factor.

  2. Computer controllable synchronous shifting of an automatic transmission

    DOEpatents

    Davis, R.I.; Patil, P.B.

    1989-08-08

A multiple forward speed automatic transmission produces its lowest forward speed ratio when a hydraulic clutch and hydraulic brake are disengaged and a one-way clutch connects a ring gear to the transmission casing. The second forward speed ratio results when the hydraulic clutch is engaged to connect the ring gear to the planetary carrier of a second gear set. Reverse drive and regenerative operation result when a hydraulic brake fixes the planetary carrier and the direction of power flow is reversed. Various sensors produce signals representing the torque at the output of the transmission or drive wheels, the speed of the power source, and the hydraulic pressure applied to a clutch and brake. A control algorithm produces input data representing a commanded upshift, a commanded downshift, a commanded transmission output torque, and a commanded power source speed. A microprocessor processes the inputs and produces a response to them in accordance with the execution of a control algorithm. Output or response signals cause selective engagement and disengagement of the clutch and brake at a rate that satisfies the requirements for a short gear ratio change and smooth torque transfer between the friction elements. 6 figs.

  3. Optimization design of wind turbine drive train based on Matlab genetic algorithm toolbox

    NASA Astrophysics Data System (ADS)

    Li, R. N.; Liu, X.; Liu, S. J.

    2013-12-01

In order to ensure the high efficiency of the whole flexible drive train of a front-end speed-adjusting wind turbine, the working principle of the main parts of the drive train is analyzed. As critical parameters, the rotating speed ratios of three planetary gear trains are selected as the research subject. The mathematical model of the torque converter speed ratio is established based on these three critical variables, and the effect of the key parameters on the efficiency of the hydraulic mechanical transmission is analyzed. Based on the torque balance and the energy balance, and with reference to the characteristics of the hydraulic mechanical transmission, the transmission efficiency expression of the whole drive train is established. The fitness function and constraint functions are established based on the drive train transmission efficiency and the torque converter rotating speed ratio range, respectively. The optimization calculation is then carried out using the MATLAB genetic algorithm toolbox. The optimization method and results provide an optimization program for the exact matching of the wind turbine rotor, gearbox, hydraulic mechanical transmission, hydraulic torque converter and synchronous generator, ensure that the drive train works with high efficiency, and give a reference for the selection of the torque converter and hydraulic mechanical transmission.
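In the same spirit as the MATLAB GA toolbox run described above, the sketch below optimizes three speed ratios with SciPy's evolutionary optimizer. The objective and bounds are invented placeholders standing in for the paper's efficiency expression and constraints.

```python
import numpy as np
from scipy.optimize import differential_evolution

def neg_efficiency(ratios):
    """Placeholder objective standing in for the drive-train efficiency
    expression; i1..i3 are the three planetary speed ratios. The real
    model couples the torque and energy balances of the hydraulic
    mechanical transmission, which is not reproduced here."""
    i1, i2, i3 = ratios
    eta = 0.97 ** 3 * (1.0 - 0.02 * (i1 * i2 * i3 - 30.0) ** 2 / 900.0)
    return -eta                                 # minimize the negative

bounds = [(1.5, 5.0)] * 3                       # assumed admissible ranges
res = differential_evolution(neg_efficiency, bounds, seed=1)
best_ratios, best_eta = res.x, -res.fun
```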

  4. An energy ratio feature extraction method for optical fiber vibration signal

    NASA Astrophysics Data System (ADS)

    Sheng, Zhiyong; Zhang, Xinyan; Wang, Yanping; Hou, Weiming; Yang, Dan

    2018-03-01

The intrusion events in an optical fiber pre-warning system (OFPS) are divided into two types: harmful intrusion events and harmless interference events. At present, the signal feature extraction methods for these two types of events are usually designed from the viewpoint of the time domain. However, the differences in time-domain characteristics between different harmful intrusion events are not obvious and cannot reflect their diversity in detail. We find that the spectral distributions of different intrusion signals show obvious differences. For this reason, the intrusion signal is transformed into the frequency domain, and an energy ratio feature extraction method for harmful intrusion events is proposed. First, the intrusion signals are pre-processed and the power spectral density (PSD) is calculated. Then, the energy ratio of different frequency bands is calculated, and the corresponding feature vector of each type of intrusion event is formed. A linear discriminant analysis (LDA) classifier is used to identify the harmful intrusion events. Experimental results show that the algorithm improves the recognition rate of intrusion signals, further verifying its feasibility and validity.
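The band-energy-ratio feature itself is compact to express: estimate the PSD, then take the fraction of total power in each band. The band layout below is an assumption, since the paper's split is not given here.

```python
import numpy as np
from scipy.signal import welch

def band_energy_ratios(x, fs, bands):
    """Energy ratio features: fraction of total PSD power in each band.

    bands is a list of (lo, hi) tuples in Hz; the split is an assumed
    example, not the paper's layout.
    """
    f, pxx = welch(x, fs=fs, nperseg=1024)
    total = pxx.sum()
    return np.array([pxx[(f >= lo) & (f < hi)].sum() / total
                     for lo, hi in bands])

# Hypothetical usage on one pre-processed intrusion segment x sampled at 4 kHz:
# feats = band_energy_ratios(x, fs=4000, bands=[(0, 250), (250, 500), (500, 1000)])
```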

  5. Epidemiologic research using probabilistic outcome definitions.

    PubMed

    Cai, Bing; Hennessy, Sean; Lo Re, Vincent; Small, Dylan S

    2015-01-01

Epidemiologic studies using electronic healthcare data often define the presence or absence of binary clinical outcomes by using algorithms with imperfect specificity, sensitivity, and positive predictive value. This results in misclassification and bias in study results. We describe and evaluate a new method called probabilistic outcome definition (POD) that uses logistic regression to estimate the probability of a clinical outcome using multiple potential algorithms and then uses multiple imputation to make valid inferences about the risk ratio or other epidemiologic parameters of interest. We conducted a simulation to evaluate the performance of the POD method with two variables that can predict the true outcome and compared the POD method with the conventional method. The simulation results showed that when the true risk ratio is equal to 1.0 (null), the conventional method based on a binary outcome provides unbiased estimates. However, when the risk ratio is not equal to 1.0, the traditional method, whether using one predictive variable or both to define the outcome, is biased when the positive predictive value is <100%, and the bias is very severe when the sensitivity or positive predictive value is poor (less than 0.75 in our simulation). In contrast, the POD method provides unbiased estimates of the risk ratio both when this measure of effect is equal to 1.0 and when it is not. Even when the sensitivity and positive predictive value are low, the POD method continues to provide unbiased estimates of the risk ratio. The POD method provides an improved way to define outcomes in database research. It has a major advantage over the conventional method in that it provides unbiased estimates of risk ratios, and it is easy to use. Copyright © 2014 John Wiley & Sons, Ltd.

  6. Quantitative effects of composting state variables on C/N ratio through GA-aided multivariate analysis.

    PubMed

    Sun, Wei; Huang, Guo H; Zeng, Guangming; Qin, Xiaosheng; Yu, Hui

    2011-03-01

It is widely known that variation of the C/N ratio depends on many state variables during composting processes. This study attempted to develop a genetic algorithm aided stepwise cluster analysis (GASCA) method to describe the nonlinear relationships between the selected state variables and the C/N ratio in food waste composting. The experimental data from six bench-scale composting reactors were used to demonstrate the applicability of GASCA. Within the GASCA framework, GA searched for optimal sets of both the specified state variables and SCA's internal parameters; SCA established statistical nonlinear relationships between the state variables and the C/N ratio; and, to avoid unnecessary and time-consuming calculation, a proxy table was introduced, saving around 70% of the computational effort. The obtained GASCA cluster trees had smaller sizes and higher prediction accuracy than the conventional SCA trees. Based on the optimal GASCA tree, the effects of the GA-selected state variables on the C/N ratio were ranked in descending order as: NH₄+-N concentration > Moisture content > Ash content > Mean temperature > Mesophilic bacteria biomass. This ranking implied that the variation of ammonium nitrogen concentration, the associated temperature and moisture conditions, the total loss of both organic matter and available mineral constituents, and the mesophilic bacteria activity were critical factors affecting the C/N ratio during the investigated food waste composting. This first application of GASCA to composting modelling indicated that more direct search algorithms could be coupled with SCA or other multivariate analysis methods to analyze complicated relationships during composting and many other environmental processes. Copyright © 2010 Elsevier B.V. All rights reserved.

  7. Costs per Diagnosis of Acute HIV Infection in Community-based Screening Strategies: A Comparative Analysis of Four Screening Algorithms

    PubMed Central

    Hoenigl, Martin; Graff-Zivin, Joshua; Little, Susan J.

    2016-01-01

Background. In nonhealthcare settings, widespread screening for acute human immunodeficiency virus (HIV) infection (AHI) is limited by cost and by decision algorithms to better prioritize use of resources. Comparative cost analyses for available strategies are lacking. Methods. To determine the cost-effectiveness of community-based testing strategies, we evaluated the annual costs of 3 algorithms that detect AHI based on HIV nucleic acid amplification testing (EarlyTest algorithm) or on HIV p24 antigen (Ag) detection via Architect (Architect algorithm) or Determine (Determine algorithm), as well as 1 algorithm that relies on HIV antibody testing alone (Antibody algorithm). The cost model used data on men who have sex with men (MSM) undergoing community-based AHI screening in San Diego, California. Incremental cost-effectiveness ratios (ICERs) per diagnosis of AHI were calculated for programs with HIV prevalence rates between 0.1% and 2.9%. Results. Among MSM in San Diego, EarlyTest was cost-saving (ie, ICERs per AHI diagnosis less than $13,000) when compared with the 3 other algorithms. Cost analyses relative to regional HIV prevalence showed that EarlyTest was cost-effective (ie, ICERs less than $69,547) for similar populations of MSM with an HIV prevalence rate >0.4%; Architect was the second-best alternative for HIV prevalence rates >0.6%. Conclusions. Identification of AHI by the dual EarlyTest screening algorithm is likely to be cost-effective not only among at-risk MSM in San Diego but also among similar populations of MSM with HIV prevalence rates >0.4%. PMID:26508512

  8. A Novel Optical/digital Processing System for Pattern Recognition

    NASA Technical Reports Server (NTRS)

    Boone, Bradley G.; Shukla, Oodaye B.

    1993-01-01

This paper describes two processing algorithms that can be implemented optically: the Radon transform and angular correlation. These two algorithms can be combined in one optical processor to extract all the basic geometric and amplitude features from objects embedded in video imagery. We show that the internal amplitude structure of objects is recovered by the Radon transform, which is a well-known result, but, in addition, we show simulation results that calculate angular correlation, a simple but unique algorithm that extracts object boundaries from suitably thresholded images, from which length, width, area, aspect ratio, and orientation can be derived. In addition to circumventing scale and rotation distortions, these simulations indicate that the features derived from the angular correlation algorithm are relatively insensitive to tracking shifts and image noise. Some optical architecture concepts, including one based on micro-optical lenslet arrays, have been developed to implement these algorithms. Simulation tests and evaluation using simple synthetic object data are described, including results of a study that uses object boundaries (derivable from angular correlation) to classify simple objects using a neural network.
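A digital counterpart of the optically implemented Radon transform is available in scikit-image; the toy object below is purely illustrative.

```python
import numpy as np
from skimage.transform import radon

# Each column of the sinogram is the projection of the image at one angle,
# which is what the optical processor would produce in parallel.
image = np.zeros((128, 128))
image[40:90, 50:80] = 1.0                     # toy rectangular "object"
theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(image, theta=theta)          # shape: (detector bins, angles)
```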

  9. Discrimination of human and nonhuman blood using Raman spectroscopy with self-reference algorithm

    NASA Astrophysics Data System (ADS)

    Bian, Haiyi; Wang, Peng; Wang, Jun; Yin, Huancai; Tian, Yubing; Bai, Pengli; Wu, Xiaodong; Wang, Ning; Tang, Yuguo; Gao, Jing

    2017-09-01

We report a self-reference algorithm to discriminate human and nonhuman blood by calculating the ratios of identification Raman peaks to reference Raman peaks and choosing appropriate threshold values. The influence of using different reference peaks and identification peaks was analyzed in detail. The Raman peak at 1003 cm-1 proved to be a stable reference peak that avoids influencing factors such as the incident laser intensity and the amount of sample. The Raman peak at 1341 cm-1 was found to be an efficient identification peak, which indicates that the difference between human and nonhuman blood results from the C-H bend in tryptophan. A comparison between the self-reference algorithm and the partial least squares method was made. The self-reference algorithm not only obtained discrimination results with the same accuracy but also provided information on the difference in chemical composition. In addition, the performance of the self-reference algorithm, whose true positive rate is 100%, is significant for customs inspection to avoid genetic disclosure and for forensic science.

  10. Robust optical flow using adaptive Lorentzian filter for image reconstruction under noisy condition

    NASA Astrophysics Data System (ADS)

    Kesrarat, Darun; Patanavijit, Vorapoj

    2017-02-01

In optical flow for motion allocation, the reliability of the resulting Motion Vectors (MVs) is an important issue, and various noisy conditions can make the output of optical flow algorithms unreliable. We observe that many classical optical flow algorithms perform better under noisy conditions when combined with a modern optimized model. This paper introduces robust optical flow models that apply the adaptive Lorentzian norm influence function to simple spatial-temporal optical flow algorithms. Experiments on the proposed models confirm better noise tolerance in the MVs under noisy conditions when the models are applied over simple spatial-temporal optical flow algorithms as a filtering model in a simple frame-to-frame correlation technique. We illustrate the performance of the models in experiments on several typical sequences with different movement speeds of foreground and background, where the test sequences are contaminated by additive white Gaussian noise (AWGN) at different noise levels in decibels (dB). The results show the high noise tolerance of the proposed models, as indicated by the peak signal-to-noise ratio (PSNR).
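For reference, the Lorentzian norm and its influence function can be written down directly. This is the standard robust-statistics form assumed here, with sigma as the scale parameter that an adaptive scheme like the one above would tune.

```python
import numpy as np

def lorentzian_rho(x, sigma):
    """Lorentzian error norm, a robust alternative to the quadratic norm:
    large residuals (e.g. AWGN outliers) are penalized only logarithmically."""
    return np.log1p(0.5 * (x / sigma) ** 2)

def lorentzian_psi(x, sigma):
    """Influence function (derivative of the norm); it saturates and then
    decays, so gross outliers barely pull the flow estimate."""
    return 2.0 * x / (2.0 * sigma ** 2 + x ** 2)
```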

  11. Passive Fourier-transform infrared spectroscopy of chemical plumes: an algorithm for quantitative interpretation and real-time background removal

    NASA Astrophysics Data System (ADS)

    Polak, Mark L.; Hall, Jeffrey L.; Herr, Kenneth C.

    1995-08-01

We present a ratioing algorithm for quantitative analysis of the passive Fourier-transform infrared spectrum of a chemical plume. We show that the transmission of a near-field plume is given by τ_plume = (L_obsd − L_bb-plume) / (L_bkgd − L_bb-plume), where τ_plume is the frequency-dependent transmission of the plume, L_obsd is the spectral radiance of the scene that contains the plume, L_bkgd is the spectral radiance of the same scene without the plume, and L_bb-plume is the spectral radiance of a blackbody at the plume temperature. The algorithm simultaneously achieves background removal, elimination of the spectrometer internal signature, and quantification of the plume spectral transmission. It has applications to both real-time processing for plume visualization and quantitative measurements of plume column densities. The plume temperature, which determines L_bb-plume and is not always precisely known, can have a profound effect on the quantitative interpretation of the algorithm and is discussed in detail. Finally, we provide an illustrative example of the use of the algorithm on a trichloroethylene and acetone plume.
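Since the abstract states the ratioing formula explicitly, it translates directly into code; the only outside input is the Planck radiance at the assumed plume temperature.

```python
def plume_transmission(L_obsd, L_bkgd, L_bb_plume):
    """Ratioing algorithm from the abstract, evaluated per wavenumber:

        tau_plume = (L_obsd - L_bb_plume) / (L_bkgd - L_bb_plume)

    All three inputs are spectral radiance arrays on a common frequency
    grid; L_bb_plume is the blackbody radiance at the plume temperature.
    """
    return (L_obsd - L_bb_plume) / (L_bkgd - L_bb_plume)
```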

  12. Comparison of algorithms to quantify muscle fatigue in upper limb muscles based on sEMG signals.

    PubMed

    Kahl, Lorenz; Hofmann, Ulrich G

    2016-11-01

This work compared the performance of six different fatigue detection algorithms quantifying muscle fatigue based on electromyographic signals. Surface electromyography (sEMG) was obtained in an experiment from upper arm contractions at three different load levels from twelve volunteers. Six fatigue measures were calculated: mean frequency (MNF), the spectral moments ratio (SMR), the wavelet method WIRM1551, sample entropy (SampEn), fuzzy approximate entropy (fApEn), and recurrence quantification analysis (RQA%DET). The resulting fatigue signals were compared with respect to the disturbances encountered in fatiguing situations and to their ability to differentiate the load levels. Furthermore, we investigated the influence of the electrode locations on the fatigue detection quality and whether an optimized channel set is reasonable. The results of the MNF, SMR, WIRM1551 and fApEn algorithms fell close together. Due to the small number of subjects in this study, significant differences could not be found. In terms of disturbances, the SMR algorithm showed a slight tendency to outperform the others. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
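Two of the compared measures are easy to sketch from a single epoch's PSD. The spectral moment orders used for the SMR below (-1 and 5, after Dimitrov's fatigue index) are an assumption, as the paper's exact SMR definition is not given here.

```python
import numpy as np
from scipy.signal import welch

def fatigue_indices(emg, fs):
    """Mean frequency (MNF) and a spectral moments ratio (SMR) from one
    sEMG epoch; both drift downward/upward as the muscle fatigues."""
    f, p = welch(emg, fs=fs, nperseg=512)
    f, p = f[1:], p[1:]                      # drop DC so order -1 is defined
    mnf = np.sum(f * p) / np.sum(p)
    smr = np.sum(f ** -1 * p) / np.sum(f ** 5 * p)
    return mnf, smr
```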

  13. Video error concealment using block matching and frequency selective extrapolation algorithms

    NASA Astrophysics Data System (ADS)

    P. K., Rajani; Khaparde, Arti

    2017-06-01

Error Concealment (EC) is a technique at the decoder side to hide transmission errors. It is done by analyzing the spatial or temporal information from available video frames. Recovering distorted video is very important because video is used in applications such as video telephony, video conferencing, TV, DVD, internet video streaming, and video games. Retransmission-based and resilience-based methods are also used for error removal, but these methods add delay and redundant data, so error concealment is the best option for error hiding. In this paper, the Block Matching error concealment algorithm is compared with the Frequency Selective Extrapolation algorithm. Both approaches are evaluated on video frames with manually introduced errors as input. The parameters used for objective quality measurement were PSNR (Peak Signal-to-Noise Ratio) and SSIM (Structural Similarity Index). The original video frames were compared with the error-concealed frames for both algorithms. According to the simulation results, Frequency Selective Extrapolation shows better quality measures than the Block Matching algorithm, such as 48% higher PSNR and 94% higher SSIM.

  14. The development of a line-scan imaging algorithm for the detection of fecal contamination on leafy greens

    NASA Astrophysics Data System (ADS)

    Yang, Chun-Chieh; Kim, Moon S.; Chuang, Yung-Kun; Lee, Hoyoung

    2013-05-01

    This paper reports the development of a multispectral algorithm, using the line-scan hyperspectral imaging system, to detect fecal contamination on leafy greens. Fresh bovine feces were applied to the surfaces of washed loose baby spinach leaves. A hyperspectral line-scan imaging system was used to acquire hyperspectral fluorescence images of the contaminated leaves. Hyperspectral image analysis resulted in the selection of the 666 nm and 688 nm wavebands for a multispectral algorithm to rapidly detect feces on leafy greens, by use of the ratio of fluorescence intensities measured at those two wavebands (666 nm over 688 nm). The algorithm successfully distinguished most of the lowly diluted fecal spots (0.05 g feces/ml water and 0.025 g feces/ml water) and some of the highly diluted spots (0.0125 g feces/ml water and 0.00625 g feces/ml water) from the clean spinach leaves. The results showed the potential of the multispectral algorithm with line-scan imaging system for application to automated food processing lines for food safety inspection of leafy green vegetables.

  15. Strain gage selection in loads equations using a genetic algorithm

    NASA Technical Reports Server (NTRS)

    1994-01-01

Traditionally, structural loads are measured using strain gages. A loads calibration test must be done before loads can be accurately measured. In one measurement method, a series of point loads is applied to the structure, and loads equations are derived via the least squares curve fitting algorithm using the strain gage responses to the applied point loads. However, many research structures are highly instrumented with strain gages, and the number and selection of gages used in a loads equation can be problematic. This paper presents an improved technique using a genetic algorithm to choose the strain gages used in the loads equations. Also presented is a comparison of the genetic algorithm's performance with the current T-value technique and a variant known as the Best Step-down technique. Examples are shown using aerospace vehicle wings of high and low aspect ratio. In addition, a significant limitation in the current methods is revealed. The genetic algorithm arrived at a comparable or superior set of gages with significantly less human effort, and could be applied in instances where the current methods could not.

  16. A new algorithm for ECG interference removal from single channel EMG recording.

    PubMed

    Yazdani, Shayan; Azghani, Mahmood Reza; Sedaaghi, Mohammad Hossein

    2017-09-01

This paper presents a new method to remove electrocardiogram (ECG) interference from electromyogram (EMG) recordings. This interference occurs during EMG acquisition from trunk muscles. The proposed algorithm employs the progressive image denoising (PID) algorithm and ensemble empirical mode decomposition (EEMD) to remove this type of interference. PID is a very recent method used for denoising digital images mixed with white Gaussian noise; it detects white Gaussian noise by deterministic annealing. To the best of our knowledge, PID has never been used before for EMG and ECG separation or in other 1D signal denoising applications. We use it based on the fact that the amplitude of the EMG signal can be modeled as white Gaussian noise shaped by a filter with time-variant properties. The proposed algorithm has been compared to other well-known methods such as HPF, EEMD-ICA, Wavelet-ICA, and PID. The results show that the proposed algorithm outperforms the others on the basis of the three evaluation criteria used in this paper: normalized mean square error, signal-to-noise ratio, and Pearson correlation.

  17. Diagnosis of paediatric HIV infection in a primary health care setting with a clinical algorithm.

    PubMed Central

    Horwood, C.; Liebeschuetz, S.; Blaauw, D.; Cassol, S.; Qazi, S.

    2003-01-01

    OBJECTIVE: To determine the validity of an algorithm used by primary care health workers to identify children with symptomatic human immunodeficiency virus (HIV) infection. This HIV algorithm is being implemented in South Africa as part of the Integrated Management of Childhood Illness (IMCI), a strategy that aims to improve childhood morbidity and mortality by improving care at the primary care level. As AIDS is a leading cause of death in children in southern Africa, diagnosis and management of symptomatic HIV infection was added to the existing IMCI algorithm. METHODS: In total, 690 children who attended the outpatients department in a district hospital in South Africa were assessed with the HIV algorithm and by a paediatrician. All children were then tested for HIV viral load. The validity of the algorithm in detecting symptomatic HIV was compared with clinical diagnosis by a paediatrician and the result of an HIV test. Detailed clinical data were used to improve the algorithm. FINDINGS: Overall, 198 (28.7%) enrolled children were infected with HIV. The paediatrician correctly identified 142 (71.7%) children infected with HIV, whereas the IMCI/HIV algorithm identified 111 (56.1%). Odds ratios were calculated to identify predictors of HIV infection and used to develop an improved HIV algorithm that is 67.2% sensitive and 81.5% specific in clinically detecting HIV infection. CONCLUSIONS: Children with symptomatic HIV infection can be identified effectively by primary level health workers through the use of an algorithm. The improved HIV algorithm developed in this study could be used by countries with high prevalences of HIV to enable IMCI practitioners to identify and care for HIV-infected children. PMID:14997238

  18. A false-alarm aware methodology to develop robust and efficient multi-scale infrared small target detection algorithm

    NASA Astrophysics Data System (ADS)

    Moradi, Saed; Moallem, Payman; Sabahi, Mohamad Farzan

    2018-03-01

    False alarm rate and detection rate remain two contradictory metrics for infrared small target detection in an infrared search and track (IRST) system, despite the development of new detection algorithms. In certain circumstances, failing to detect true targets is more tolerable than declaring false items to be true targets. Hence, considering background clutter and detector noise as the sources of false alarms in an IRST system, this paper presents a false-alarm-aware methodology to reduce the false alarm rate while leaving the detection rate undegraded. To this end, the advantages and disadvantages of each detection algorithm are investigated and the sources of its false alarms are determined. Two target detection algorithms with independent false alarm sources are chosen such that the disadvantages of one algorithm are compensated by the advantages of the other. In this work, multi-scale average absolute gray difference (AAGD) and Laplacian of point spread function (LoPSF) are utilized as the cornerstones of the desired algorithm of the proposed methodology. After presenting a conceptual model for the desired algorithm, it is implemented through the most straightforward mechanism. The desired algorithm effectively suppresses background clutter and eliminates detector noise. Also, since the input images are processed through just four different scales, the desired algorithm is well suited to real-time implementation. Simulation results in terms of signal-to-clutter ratio and background suppression factor on real and simulated images demonstrate the effectiveness of the proposed methodology. Since the desired algorithm was developed based on independent false alarm sources, the proposed methodology is extensible to any pair of detection algorithms that have different false alarm sources.
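
    A minimal sketch of the fusion idea, assuming a common formulation of AAGD (inner-window mean versus surrounding-background mean) and using a second AAGD-style map merely as a placeholder for the LoPSF filter, which is not reproduced here:

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def aagd_map(img, inner=3, outer=9):
        """One common formulation of average absolute gray difference: compare
        the inner-window mean against the surrounding-background mean at one scale."""
        img = img.astype(float)
        m_in = uniform_filter(img, inner)
        m_out = uniform_filter(img, outer)
        s_in, s_out = m_in * inner**2, m_out * outer**2
        m_bg = (s_out - s_in) / (outer**2 - inner**2)  # mean of the surrounding ring
        return np.maximum(m_in - m_bg, 0.0) ** 2       # respond to bright, compact spots

    def fused_detection(img, k=4.0):
        """AND-style fusion of two maps with independent false-alarm sources:
        a four-scale AAGD map and a second map standing in for the LoPSF response."""
        multi = np.minimum.reduce([aagd_map(img, 3, s) for s in (7, 9, 11, 13)])
        lopsf_standin = aagd_map(img, 1, 5)   # placeholder, not the actual LoPSF filter
        score = multi * lopsf_standin         # both detectors must respond
        return score > score.mean() + k * score.std()
    ```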

  19. Empirical study of seven data mining algorithms on different characteristics of datasets for biomedical classification applications.

    PubMed

    Zhang, Yiyan; Xin, Yi; Li, Qin; Ma, Jianshe; Li, Shuai; Lv, Xiaodan; Lv, Weiqi

    2017-11-02

    Various kinds of data mining algorithms continue to emerge as the related disciplines develop, and these algorithms differ in applicable scope and performance. Hence, finding a suitable algorithm for a given dataset is becoming an important concern for biomedical researchers who need to solve practical problems promptly. In this paper, seven mature and widely used algorithms, namely C4.5, support vector machine, AdaBoost, k-nearest neighbor, naïve Bayes, random forest, and logistic regression, were selected as the research objects. The seven algorithms were applied to the 12 most-accessed UCI public datasets in classification tasks, and their performances were compared through induction and analysis. The sample size, number of attributes, number of missing values, sample size of each class, correlation coefficients between variables, class entropy of the task variable, and the ratio of the sample size of the largest class to that of the smallest class were calculated to characterize the 12 research datasets. The two ensemble algorithms reach high classification accuracy on most datasets. Moreover, random forest performs better than AdaBoost on unbalanced datasets in multi-class tasks. Simple algorithms, such as naïve Bayes and logistic regression, are suitable for small datasets with high correlation between the task variable and the other, non-task attribute variables. The k-nearest neighbor and C4.5 decision tree algorithms perform well on both binary- and multi-class task datasets. Support vector machine is better suited to balanced small datasets in binary-class tasks. No algorithm maintains the best performance across all datasets. The applicability of the seven data mining algorithms to datasets with different characteristics was summarized to provide a reference for biomedical researchers and beginners in different fields.
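
    A comparison of this kind is easy to reproduce with scikit-learn; the sketch below uses an entropy-criterion CART tree as a stand-in for C4.5 and a bundled UCI-style dataset, so the scores are illustrative only:

    ```python
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_breast_cancer(return_X_y=True)  # a UCI-style binary-class dataset

    models = {
        "C4.5-like tree": DecisionTreeClassifier(criterion="entropy"),
        "SVM": make_pipeline(StandardScaler(), SVC()),
        "AdaBoost": AdaBoostClassifier(),
        "kNN": make_pipeline(StandardScaler(), KNeighborsClassifier()),
        "naive Bayes": GaussianNB(),
        "random forest": RandomForestClassifier(),
        "logistic regression": make_pipeline(StandardScaler(),
                                             LogisticRegression(max_iter=5000)),
    }

    for name, model in models.items():
        scores = cross_val_score(model, X, y, cv=10)
        print(f"{name:20s} accuracy = {scores.mean():.3f} +/- {scores.std():.3f}")
    ```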

  20. An environment-adaptive management algorithm for hearing-support devices incorporating listening situation and noise type classifiers.

    PubMed

    Yook, Sunhyun; Nam, Kyoung Won; Kim, Heepyung; Hong, Sung Hwa; Jang, Dong Pyo; Kim, In Young

    2015-04-01

    In order to provide more consistent sound intelligibility for hearing-impaired people regardless of environment, it is necessary to adjust the settings of the hearing-support (HS) device to accommodate various environmental circumstances. In this study, a fully automatic HS device management algorithm that can adapt to various environmental situations is proposed; it is composed of a listening-situation classifier, a noise-type classifier, an adaptive noise-reduction algorithm, and a management algorithm that can selectively turn on/off one or more of the three basic algorithms-beamforming, noise reduction, and feedback cancellation-and can also adjust internal gains and parameters of the wide-dynamic-range compression (WDRC) and noise-reduction (NR) algorithms in accordance with variations in the environmental situation. Experimental results demonstrated that the implemented algorithms can classify both listening situations and ambient noise types with high accuracy (92.8-96.4% and 90.9-99.4%, respectively), and that the gains and parameters of the WDRC and NR algorithms were successfully adjusted according to variations in the environmental situation. For the adaptive multiband spectral subtraction (MBSS) algorithm, the average signal-to-noise ratio (SNR), frequency-weighted segmental SNR, Perceptual Evaluation of Speech Quality, and mean opinion test scores of 10 normal-hearing volunteers improved by 1.74 dB, 2.11 dB, 0.49, and 0.68, respectively, compared to the conventional fixed-parameter MBSS algorithm. These results indicate that the proposed environment-adaptive management algorithm can be applied to HS devices to improve sound intelligibility for hearing-impaired individuals in various acoustic environments. Copyright © 2014 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
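
    A minimal single-band spectral subtraction sketch, as a simplified stand-in for the adaptive multiband MBSS algorithm described above (frame sizes, the noise-estimation window, and the spectral floor are illustrative assumptions):

    ```python
    import numpy as np

    def spectral_subtraction(x, frame=256, hop=128, noise_frames=10, alpha=2.0):
        """Single-band spectral subtraction: estimate the noise magnitude
        spectrum from the first few frames (assumed speech-free), subtract it,
        and resynthesize by overlap-add."""
        win = np.hanning(frame)
        n_seg = 1 + (len(x) - frame) // hop
        spec = np.array([np.fft.rfft(win * x[i*hop:i*hop+frame]) for i in range(n_seg)])
        noise_mag = np.abs(spec[:noise_frames]).mean(axis=0)
        mag = np.maximum(np.abs(spec) - alpha * noise_mag, 0.05 * np.abs(spec))  # floor
        clean = mag * np.exp(1j * np.angle(spec))
        y = np.zeros(n_seg * hop + frame)
        for i, s in enumerate(clean):     # overlap-add (normalization omitted)
            y[i*hop:i*hop+frame] += win * np.fft.irfft(s, frame)
        return y
    ```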

  1. Image-Guided Rendering with an Evolutionary Algorithm Based on Cloud Model

    PubMed Central

    2018-01-01

    The process of creating nonphotorealistic rendering images and animations can be enjoyable if a useful method is involved. We use an evolutionary algorithm to generate painterly styles of images. Given an input image as the reference target, a cloud model-based evolutionary algorithm rerenders the target image with nonphotorealistic effects. The resulting animations have an interesting characteristic in which the target slowly emerges from a set of strokes. A number of experiments are performed, along with visual comparisons, quantitative comparisons, and user studies. The average scores in normalized feature similarity of standard pixel-wise peak signal-to-noise ratio, mean structural similarity, feature similarity, and gradient-similarity-based metrics are 0.486, 0.628, 0.579, and 0.640, respectively. The average scores in normalized aesthetic measures of Benford's law, fractal dimension, global contrast factor, and Shannon's entropy are 0.630, 0.397, 0.418, and 0.708, respectively. Compared with those of a similar method, the average scores of the proposed method, except for peak signal-to-noise ratio, are higher by approximately 10%. The results suggest that the proposed method can generate appealing images and animations with different styles by choosing different strokes, and it may inspire graphic designers interested in computer-based evolutionary art. PMID:29805440
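
    The pixel-wise quality metrics used above are available off the shelf; a small sketch with scikit-image, using synthetic arrays in place of the reference and rendered images:

    ```python
    import numpy as np
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    rng = np.random.default_rng(2)
    target = rng.random((128, 128))  # stand-in for the reference image
    render = np.clip(target + 0.05 * rng.normal(size=target.shape), 0, 1)  # a "rendering"

    print("PSNR:", peak_signal_noise_ratio(target, render, data_range=1.0))
    print("SSIM:", structural_similarity(target, render, data_range=1.0))
    ```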

  2. Relative risk reduction is useful metric to standardize effect size for public heath interventions for translational research.

    PubMed

    Mirzazadeh, Ali; Malekinejad, Mohsen; Kahn, James G

    2015-03-01

    Heterogeneity of effect measures in intervention studies undermines the use of evidence to inform policy. Our objective was to develop a comprehensive algorithm to convert all types of effect measures to one standard metric, relative risk reduction (RRR). This work was conducted to facilitate synthesis of published intervention effects for our epidemic modeling of the health impact of human immunodeficiency virus (HIV) testing and counseling (HTC). We designed and implemented an algorithm to transform varied effect measures to RRR, representing the proportionate reduction in undesirable outcomes. Our extraction of 55 HTC studies identified 473 effect measures representing unique combinations of intervention, outcome, and population characteristics, using five outcome metrics: pre-post proportion (70.6%), odds ratio (14.0%), mean difference (10.2%), risk ratio (4.4%), and RRR (0.9%). Outcomes were expressed as both desirable (29.5%, e.g., consistent condom use) and undesirable (70.5%, e.g., inconsistent condom use). Using four examples, we demonstrate our algorithm for converting varied effect measures to RRR and provide the conceptual basis for the advantages of RRR over other metrics. Our review of the literature suggests that RRR, an easily understood and useful metric for conveying the risk reduction associated with an intervention, is underused by original and review studies. Copyright © 2015 Elsevier Inc. All rights reserved.
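
    A sketch of two of the conversions such an algorithm must cover, assuming the standard Zhang-Yu approximation for converting an odds ratio to a risk ratio (the paper's full algorithm handles more cases than these):

    ```python
    def odds_ratio_to_rrr(or_value, baseline_risk):
        """Convert an odds ratio for an undesirable outcome to a relative risk
        reduction via the Zhang-Yu approximation RR = OR / (1 - p0 + p0*OR),
        where p0 is the control-group (baseline) risk."""
        rr = or_value / (1 - baseline_risk + baseline_risk * or_value)
        return 1 - rr

    def proportions_to_rrr(p_pre, p_post):
        """RRR from pre/post proportions of an undesirable outcome."""
        return 1 - p_post / p_pre

    # Example: OR = 0.6 for inconsistent condom use at a 40% baseline risk.
    print(odds_ratio_to_rrr(0.6, 0.40))    # ~0.29, i.e. a 29% relative risk reduction
    print(proportions_to_rrr(0.50, 0.35))  # 0.30
    ```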

  3. [A quality controllable algorithm for ECG compression based on wavelet transform and ROI coding].

    PubMed

    Zhao, An; Wu, Baoming

    2006-12-01

    This paper presents an ECG compression algorithm based on wavelet transform and region of interest (ROI) coding. The algorithm realizes near-lossless coding inside the ROI and quality-controllable lossy coding outside it. After mean removal of the original signal, a multi-layer orthogonal discrete wavelet transform is performed. Simultaneously, feature extraction is performed on the original signal to locate the ROI. The coefficients related to the ROI are treated as important coefficients and kept. Otherwise, the energy loss in the transform domain is calculated according to the target PRDBE (Percentage Root-mean-square Difference with Baseline Eliminated), and the threshold for the coefficients outside the ROI is then determined from this energy loss. The important coefficients, which comprise the ROI coefficients and the coefficients outside the ROI that exceed the threshold, are passed to a linear quantizer. The map recording the positions of the important coefficients in the original wavelet coefficient vector is compressed with a run-length encoder, and Huffman coding is applied to improve the compression ratio. ECG signals taken from the MIT/BIH arrhythmia database are tested, and satisfactory results are obtained in terms of clinical information preservation, quality, and compression ratio.
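
    A minimal sketch of the ROI-aware thresholding idea using PyWavelets; the ROI handling, threshold rule, and run-length encoder below are simplified assumptions, and the PRDBE-driven threshold search, quantizer, and Huffman stage are omitted:

    ```python
    import numpy as np
    import pywt

    def compress_ecg(sig, roi_mask, wavelet="db4", level=5, thresh=0.02):
        """Keep every coefficient whose support overlaps the ROI, threshold the
        rest, and build a significance map for the run-length encoder."""
        sig = sig - sig.mean()                      # mean removal
        coeffs = pywt.wavedec(sig, wavelet, level=level)
        kept = []
        for c in coeffs:
            # resample the ROI mask to this subband's length
            m = np.interp(np.linspace(0, 1, len(c)),
                          np.linspace(0, 1, len(roi_mask)),
                          roi_mask.astype(float)) > 0.5
            important = m | (np.abs(c) > thresh * np.max(np.abs(c)))
            kept.append(np.where(important, c, 0.0))
        sig_map = np.concatenate([k != 0 for k in kept])   # map of important positions
        return kept, sig_map

    def run_lengths(bits):
        """Run-length encode the significance map (first run starts at bits[0])."""
        edges = np.flatnonzero(np.diff(bits.astype(int))) + 1
        return np.diff(np.concatenate(([0], edges, [len(bits)])))
    ```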

  4. A Direct Position-Determination Approach for Multiple Sources Based on Neural Network Computation.

    PubMed

    Chen, Xin; Wang, Ding; Yin, Jiexin; Wu, Ying

    2018-06-13

    The most widely used localization technology is the two-step method, which localizes transmitters by measuring one or more specified positioning parameters. Direct position determination (DPD) is a promising technique that localizes transmitters directly from sensor outputs and can offer superior localization performance. However, existing DPD algorithms such as maximum likelihood (ML)-based and multiple signal classification (MUSIC)-based estimators are computationally expensive, making it difficult to satisfy real-time demands. To solve this problem, we propose the use of a modular neural network for multiple-source DPD. In this method, the area of interest is divided into multiple sub-areas. Multilayer perceptron (MLP) neural networks are employed to detect the presence of a source in a sub-area and to filter out sources in other sub-areas, and radial basis function (RBF) neural networks are utilized for position estimation. Simulation results show that a number of appropriately trained neural networks can be successfully used for DPD. The performance of the proposed MLP-MLP-RBF method is comparable to that of the conventional MUSIC-based DPD algorithm for various signal-to-noise ratios and signal power ratios. Furthermore, the MLP-MLP-RBF network is less computationally intensive than the classical DPD algorithm and is therefore an attractive choice for real-time applications.
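
    A toy sketch of the two-stage idea with scikit-learn, using kernel ridge regression with an RBF kernel as a stand-in for the paper's RBF network; the feature model and sub-area split are invented for illustration:

    ```python
    import numpy as np
    from sklearn.kernel_ridge import KernelRidge
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(3)

    # Toy stand-in for sensor-output features: noisy functions of the source position.
    def features(pos):
        return np.column_stack([np.cos(3*pos[:, :1] + pos[:, 1:]), np.sin(2*pos)]) \
               + 0.05 * rng.normal(size=(len(pos), 3))

    pos = rng.uniform(0, 10, size=(2000, 2))   # source positions in a 10x10 area
    X = features(pos)
    in_subarea = (pos[:, 0] < 5).astype(int)   # sub-area membership label

    # Stage 1: an MLP detects/filters sources per sub-area.
    detector = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000).fit(X, in_subarea)
    mask = detector.predict(X).astype(bool)

    # Stage 2: position regression inside the sub-area (RBF-kernel ridge as a
    # stand-in for an RBF network).
    regressor = KernelRidge(kernel="rbf", gamma=0.5).fit(X[mask], pos[mask])
    print("mean position error:", np.abs(regressor.predict(X[mask]) - pos[mask]).mean())
    ```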

  5. Design of a fuzzy differential evolution algorithm to predict non-deposition sediment transport

    NASA Astrophysics Data System (ADS)

    Ebtehaj, Isa; Bonakdari, Hossein

    2017-12-01

    Since the flow entering a sewer contains solid matter, deposition at the bottom of the channel is inevitable. It is difficult to understand the complex, three-dimensional mechanism of sediment transport in sewer pipelines. Therefore, a method to estimate the limiting velocity is necessary for optimal designs. Due to the inability of gradient-based algorithms to train Adaptive Neuro-Fuzzy Inference Systems (ANFIS) for non-deposition sediment transport prediction, a new hybrid ANFIS method based on a differential evolution algorithm (ANFIS-DE) is developed. The training and testing performance of ANFIS-DE is evaluated using a wide range of dimensionless parameters gathered from the literature. The input combination used to estimate the densimetric Froude number (Fr) includes the volumetric sediment concentration (C_V), the ratio of median particle diameter to hydraulic radius (d/R), the ratio of median particle diameter to pipe diameter (d/D) and the overall friction factor of sediment (λ_s). The testing results are compared with the ANFIS model and regression-based equation results. The ANFIS-DE technique predicted sediment transport at the limit of deposition with lower root mean square error (RMSE = 0.323) and mean absolute percentage error (MAPE = 0.065) and higher accuracy (R² = 0.965) than the ANFIS model and the regression-based equations.
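
    The role of differential evolution can be illustrated with SciPy; the sketch below tunes a simple power-law stand-in (not an ANFIS) on synthetic dimensionless inputs:

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution

    rng = np.random.default_rng(4)

    # Synthetic stand-ins for the inputs (C_V, d/R, d/D, lambda_s) and the target Fr.
    Cv, dR, dD, lam = (rng.uniform(0.1, 1.0, 200) for _ in range(4))
    Fr = 4.0 * Cv**0.2 * dR**-0.5 * lam**-0.3 * (1 + 0.05 * rng.normal(size=200))

    def rmse(theta):
        """RMSE of a power-law model whose exponents are tuned by DE, a simple
        stand-in for the consequent parameters that DE tunes in ANFIS-DE."""
        a, b, c, e, f = theta
        pred = a * Cv**b * dR**c * dD**e * lam**f
        return np.sqrt(np.mean((Fr - pred) ** 2))

    bounds = [(0.1, 10), (-2, 2), (-2, 2), (-2, 2), (-2, 2)]
    result = differential_evolution(rmse, bounds, seed=0, tol=1e-8)
    print(result.x, "RMSE:", result.fun)
    ```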

  6. Non-invasive optical detection of esophagus cancer based on urine surface-enhanced Raman spectroscopy

    NASA Astrophysics Data System (ADS)

    Huang, Shaohua; Wang, Lan; Chen, Weiwei; Lin, Duo; Huang, Lingling; Wu, Shanshan; Feng, Shangyuan; Chen, Rong

    2014-09-01

    A surface-enhanced Raman spectroscopy (SERS) approach was utilized for urine biochemical analysis with the aim of developing a label-free and non-invasive optical diagnostic method for esophagus cancer detection. SERS spectra were acquired from 31 normal urine samples and 47 urine samples from patients with malignant esophagus cancer (EC). Tentative assignments of the urine SERS bands demonstrated esophagus cancer-specific changes, including an increase in the relative amount of urea and a decrease in the percentage of uric acid in normal urine compared with EC urine. An empirical algorithm integrated with linear discriminant analysis (LDA) was employed to identify important urine SERS bands for differentiation between healthy subjects and EC urine. The empirical diagnostic approach based on the ratios of the SERS peak intensities at 527 to 1002 cm⁻¹ and at 725 to 1002 cm⁻¹, coupled with LDA, yielded a diagnostic sensitivity of 72.3% and a specificity of 96.8%. The area under the receiver operating characteristic (ROC) curve was 0.954, further confirming the performance of the diagnostic algorithm based on the SERS peak intensity ratios combined with LDA. This work demonstrates that urine SERS spectra combined with an empirical algorithm have potential for noninvasive diagnosis of esophagus cancer.
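
    A minimal sketch of the peak-ratio-plus-LDA classifier with scikit-learn, using synthetic ratio values in place of measured SERS intensities:

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(5)

    # Stand-ins for the two diagnostic ratios I527/I1002 and I725/I1002
    # computed from each urine SERS spectrum (values here are synthetic).
    n_normal, n_ec = 31, 47
    X = np.vstack([rng.normal([0.8, 0.6], 0.1, size=(n_normal, 2)),
                   rng.normal([1.0, 0.8], 0.1, size=(n_ec, 2))])
    y = np.array([0]*n_normal + [1]*n_ec)    # 0 = normal, 1 = esophagus cancer

    pred = cross_val_predict(LinearDiscriminantAnalysis(), X, y, cv=5)
    sens = (pred[y == 1] == 1).mean()
    spec = (pred[y == 0] == 0).mean()
    print(f"sensitivity {sens:.1%}, specificity {spec:.1%}")
    ```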

  7. Measuring coherence of computer-assisted likelihood ratio methods.

    PubMed

    Haraksim, Rudolf; Ramos, Daniel; Meuwly, Didier; Berger, Charles E H

    2015-04-01

    Measuring the performance of forensic evaluation methods that compute likelihood ratios (LRs) is relevant for both the development and the validation of such methods. A framework of performance characteristics categorized as primary and secondary is introduced in this study to help achieve such development and validation. Ground-truth labelled fingerprint data is used to assess the performance of an example likelihood ratio method in terms of those performance characteristics. Discrimination, calibration, and especially the coherence of this LR method are assessed as a function of the quantity and quality of the trace fingerprint specimen. Assessment of the coherence revealed a weakness of the comparison algorithm in the computer-assisted likelihood ratio method used. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
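
    The abstract does not reproduce formulas, but one standard summary of discrimination and calibration for LR-based methods is the log-likelihood-ratio cost (Cllr); a minimal sketch:

    ```python
    import numpy as np

    def cllr(lr_same_source, lr_different_source):
        """Log-likelihood-ratio cost, a standard validation metric for LR
        methods (lower is better; 1.0 corresponds to an uninformative system)."""
        ss = np.log2(1 + 1 / np.asarray(lr_same_source))
        ds = np.log2(1 + np.asarray(lr_different_source))
        return 0.5 * (ss.mean() + ds.mean())

    # Toy example: well-separated LRs give a low cost.
    print(cllr([20, 50, 8, 100], [0.05, 0.1, 0.02, 0.5]))
    ```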

  8. Nonparametric relevance-shifted multiple testing procedures for the analysis of high-dimensional multivariate data with small sample sizes.

    PubMed

    Frömke, Cornelia; Hothorn, Ludwig A; Kropf, Siegfried

    2008-01-27

    In many research areas it is necessary to find differences between treatment groups with several variables. For example, studies of microarray data seek, for each variable, a significant deviation of the difference in location parameters from zero, or from one for ratios thereof. However, in some studies a significant deviation of the difference in locations from zero (or from 1 in terms of the ratio) is biologically meaningless; a relevant difference or ratio is sought in such cases. This article addresses the use of relevance-shifted tests on ratios for a multivariate parallel two-sample group design. Two empirical procedures are proposed which embed the relevance-shifted test on ratios. As both procedures test a hypothesis for each variable, the resulting multiple testing problem has to be considered, so the procedures include a multiplicity correction. Both procedures are extensions of available procedures for point null hypotheses that achieve exact control of the familywise error rate. Whereas the shift of the null hypothesis alone would admit straightforward solutions, the difficulties motivating the empirical considerations discussed here arise from the fact that the shift is considered in both directions, and the whole parameter space between the two limits has to be accepted as the null hypothesis. The first procedure uses a permutation algorithm and is appropriate for designs with a moderately large number of observations. However, many experiments have limited sample sizes; the second procedure may then be more appropriate, with multiplicity corrected according to a concept of data-driven ordering of hypotheses.

  9. Applications of independent component analysis in SAR images

    NASA Astrophysics Data System (ADS)

    Huang, Shiqi; Cai, Xinhua; Hui, Weihua; Xu, Ping

    2009-07-01

    The detection of faint, small and hidden targets in synthetic aperture radar (SAR) images remains a challenge for automatic target recognition (ATR) systems. How to effectively separate these targets from the complex background is the aim of this paper. Independent component analysis (ICA) can enhance SAR image targets and improve the signal-to-clutter ratio (SCR), which aids the detection and recognition of faint targets. Therefore, this paper proposes a new SAR image target detection algorithm based on ICA. In the experiments, the fast ICA (FICA) algorithm is utilized, and real SAR image data are used to test the method. The experimental results verify that the algorithm is feasible: it improves the SCR of SAR images and increases the detection rate for faint small targets.
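
    A toy sketch of ICA-based target enhancement with scikit-learn's FastICA, treating multiple synthetic "looks" of a scene as linear mixtures; the mixing model and data are invented for illustration:

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(6)
    h, w = 64, 64

    # Synthetic stand-ins: a faint point target buried in two clutter patterns.
    target = np.zeros((h, w)); target[30:33, 40:43] = 1.0
    clutter1, clutter2 = rng.gamma(2.0, 1.0, (2, h, w))

    # Three looks of the scene mix the components with different weights.
    looks = np.stack([0.20*target + 1.0*clutter1 + 0.3*clutter2,
                      0.25*target + 0.8*clutter1 + 0.5*clutter2,
                      0.15*target + 0.4*clutter1 + 1.0*clutter2])
    X = looks.reshape(3, -1).T              # samples = pixels, features = looks

    ica = FastICA(n_components=3, random_state=0)
    components = ica.fit_transform(X).T.reshape(3, h, w)

    # The component most correlated with a compact bright spot carries the
    # target with improved signal-to-clutter ratio.
    scores = [np.corrcoef(c.ravel(), target.ravel())[0, 1] for c in components]
    print("per-component correlation with target:", np.round(scores, 2))
    ```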

  10. Automatic segmentation of the optic nerve head for deformation measurements in video rate optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Hidalgo-Aguirre, Maribel; Gitelman, Julian; Lesk, Mark Richard; Costantino, Santiago

    2015-11-01

    Optical coherence tomography (OCT) imaging has become a standard diagnostic tool in ophthalmology, providing essential information associated with various eye diseases. In order to investigate the dynamics of the ocular fundus, we present a simple and accurate automated algorithm to segment the inner limiting membrane in video-rate optic nerve head spectral domain (SD) OCT images. The method is based on morphological operations including a two-step contrast enhancement technique, proving to be very robust when dealing with low signal-to-noise ratio images and pathological eyes. An analysis algorithm was also developed to measure neuroretinal tissue deformation from the segmented retinal profiles. The performance of the algorithm is demonstrated, and deformation results are presented for healthy and glaucomatous eyes.
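
    A rough sketch of a morphology-based segmentation of this kind with SciPy; the specific operations and thresholds below are assumptions, not the authors' two-step technique:

    ```python
    import numpy as np
    from scipy.ndimage import grey_closing, median_filter

    def segment_ilm(bscan, thresh_factor=1.5):
        """Trace the inner limiting membrane: suppress speckle, enhance contrast
        with a grey-level closing, then take the first supra-threshold pixel of
        each A-scan (image column) as the ILM position."""
        img = median_filter(bscan.astype(float), size=3)
        img = grey_closing(img, size=(5, 5))
        img = (img - img.min()) / (np.ptp(img) + 1e-9)
        ilm = np.argmax(img > thresh_factor * img.mean(), axis=0)  # 0 if none qualifies
        return median_filter(ilm, size=9)    # smooth the retinal profile
    ```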

  11. HF band filter bank multi-carrier spread spectrum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laraway, Stephen Andrew; Moradi, Hussein; Farhang-Boroujeny, Behrouz

    This paper describes modifications to the filter bank multicarrier spread spectrum (FB-MC-SS) system presented in [1] and [2] to enable transmission of this waveform in the HF skywave channel. FB-MC-SS is well suited to the HF channel because it performs well in channels with frequency-selective fading and interference. This paper describes new algorithms for packet detection, timing recovery and equalization that are suitable for the HF channel. Also, an algorithm for optimizing the peak-to-average power ratio (PAPR) of the FB-MC-SS waveform is presented; applying this algorithm yields a waveform with low PAPR. Simulation results using a wideband HF channel model demonstrate the robustness of this system over a wide range of delay and Doppler spreads.
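
    PAPR itself is simple to measure; a minimal sketch for a complex baseband waveform, with a random-phase multicarrier example (unrelated to the optimized FB-MC-SS design):

    ```python
    import numpy as np

    def papr_db(x):
        """Peak-to-average power ratio of a complex baseband waveform, in dB."""
        p = np.abs(x) ** 2
        return 10 * np.log10(p.max() / p.mean())

    # Example: a sum of 64 random-phase tones, before any PAPR optimization.
    rng = np.random.default_rng(7)
    n, k = 4096, 64
    t = np.arange(n) / n
    x = sum(np.exp(2j * np.pi * (f * t + rng.random())) for f in range(1, k + 1))
    print(f"PAPR = {papr_db(x):.1f} dB")
    ```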

  12. Photon-efficient super-resolution laser radar

    NASA Astrophysics Data System (ADS)

    Shin, Dongeek; Shapiro, Jeffrey H.; Goyal, Vivek K.

    2017-08-01

    The resolution achieved in photon-efficient active optical range imaging systems can be low due to non-idealities such as propagation through a diffuse scattering medium. We propose a constrained optimization-based framework to address extremes in scarcity of photons and blurring by a forward imaging kernel. We provide two algorithms for the resulting inverse problem: a greedy algorithm, inspired by sparse pursuit algorithms; and a convex optimization heuristic that incorporates image total variation regularization. We demonstrate that our framework outperforms existing deconvolution imaging techniques in terms of peak signal-to-noise ratio. Since our proposed method is able to super-resolve depth features using small numbers of photon counts, it can be useful for observing fine-scale phenomena in remote sensing through a scattering medium and through-the-skin biomedical imaging applications.

  13. Application of multiple signal classification algorithm to frequency estimation in coherent dual-frequency lidar

    NASA Astrophysics Data System (ADS)

    Li, Ruixiao; Li, Kun; Zhao, Changming

    2018-01-01

    Coherent dual-frequency lidar (CDFL) is a new development in lidar that uses a dual-frequency laser to measure range and velocity with high precision while dramatically reducing the influence of atmospheric interference. Based on the nature of CDFL signals, we propose applying the multiple signal classification (MUSIC) algorithm in place of the fast Fourier transform (FFT) to estimate the phase differences in dual-frequency lidar. In the presence of Gaussian white noise, simulation results show that the signal peaks are more evident with the MUSIC algorithm than with the FFT under low signal-to-noise ratio (SNR) conditions, which helps improve the precision of range and velocity detection, especially for long-distance measurement systems.
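
    A compact numpy implementation of the classic MUSIC pseudospectrum for 1-D frequency estimation (snapshot length, grid, and test signal are illustrative choices):

    ```python
    import numpy as np
    from scipy.signal import find_peaks

    def music_spectrum(x, n_sources, m=48, grid=2000):
        """Classic MUSIC: form an m-lag covariance from sliding snapshots,
        split off the noise subspace, and scan the pseudospectrum over a grid."""
        snaps = np.array([x[i:i + m] for i in range(len(x) - m)])
        R = snaps.conj().T @ snaps / len(snaps)
        _, v = np.linalg.eigh(R)                 # eigenvalues ascending
        noise = v[:, :m - n_sources]             # noise-subspace eigenvectors
        freqs = np.linspace(0.0, 0.5, grid)
        k = np.arange(m)
        p = np.array([1.0 / np.linalg.norm(noise.conj().T @ np.exp(2j*np.pi*f*k))**2
                      for f in freqs])
        return freqs, p

    # Two closely spaced tones in complex white noise.
    rng = np.random.default_rng(8)
    n = np.arange(2048)
    x = (np.exp(2j*np.pi*0.12*n) + np.exp(2j*np.pi*0.15*n)
         + (rng.normal(size=2048) + 1j*rng.normal(size=2048)) / np.sqrt(2))
    f, p = music_spectrum(x, n_sources=2)
    peaks, _ = find_peaks(p)
    top = peaks[np.argsort(p[peaks])[-2:]]
    print("estimated frequencies:", np.sort(f[top]))   # ~0.12 and ~0.15
    ```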

  14. Carbon monoxide mixing ratio inference from gas filter radiometer data

    NASA Technical Reports Server (NTRS)

    Wallio, H. A.; Reichle, H. G., Jr.; Casas, J. C.; Saylor, M. S.; Gormsen, B. B.

    1983-01-01

    A new algorithm has been developed which permits, for the first time, real time data reduction of nadir measurements taken with a gas filter correlation radiometer to determine tropospheric carbon monoxide concentrations. The algorithm significantly reduces the complexity of the equations to be solved while providing accuracy comparable to line-by-line calculations. The method is based on a regression analysis technique using a truncated power series representation of the primary instrument output signals to infer directly a weighted average of trace gas concentration. The results produced by a microcomputer-based implementation of this technique are compared with those produced by the more rigorous line-by-line methods. This algorithm has been used in the reduction of Measurement of Air Pollution from Satellites, Shuttle, and aircraft data.
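
    A minimal sketch of the truncated-power-series regression idea, fitting a quadratic expansion of two synthetic instrument signals by linear least squares (the signals and coefficients are invented stand-ins):

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    # Synthetic stand-ins for two radiometer output signals and the CO mixing ratio.
    v1, v2 = rng.normal(size=(2, 500))
    co_true = 80 + 12*v1 - 5*v2 + 3*v1**2 + 2*v1*v2 + rng.normal(size=500)

    # Truncated power series in the instrument signals, fit by linear regression.
    terms = np.column_stack([np.ones(500), v1, v2, v1**2, v2**2, v1*v2])
    coef, *_ = np.linalg.lstsq(terms, co_true, rcond=None)
    residual = co_true - terms @ coef
    print("RMS inference error:", np.sqrt(np.mean(residual**2)))
    ```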

  15. Objective performance assessment of five computed tomography iterative reconstruction algorithms.

    PubMed

    Omotayo, Azeez; Elbakri, Idris

    2016-11-22

    Iterative algorithms are gaining clinical acceptance in CT. We performed objective phantom-based image quality evaluation of five commercial iterative reconstruction algorithms available on four different multi-detector CT (MDCT) scanners at different dose levels as well as the conventional filtered back-projection (FBP) reconstruction. Using the Catphan500 phantom, we evaluated image noise, contrast-to-noise ratio (CNR), modulation transfer function (MTF) and noise-power spectrum (NPS). The algorithms were evaluated over a CTDIvol range of 0.75-18.7 mGy on four major MDCT scanners: GE DiscoveryCT750HD (algorithms: ASIR™ and VEO™); Siemens Somatom Definition AS+ (algorithm: SAFIRE™); Toshiba Aquilion64 (algorithm: AIDR3D™); and Philips Ingenuity iCT256 (algorithm: iDose4™). Images were reconstructed using FBP and the respective iterative algorithms on the four scanners. Use of iterative algorithms decreased image noise and increased CNR, relative to FBP. In the dose range of 1.3-1.5 mGy, noise reduction using iterative algorithms was in the range of 11%-51% on GE DiscoveryCT750HD, 10%-52% on Siemens Somatom Definition AS+, 49%-62% on Toshiba Aquilion64, and 13%-44% on Philips Ingenuity iCT256. The corresponding CNR increase was in the range 11%-105% on GE, 11%-106% on Siemens, 85%-145% on Toshiba and 13%-77% on Philips respectively. Most algorithms did not affect the MTF, except for VEO™ which produced an increase in the limiting resolution of up to 30%. A shift in the peak of the NPS curve towards lower frequencies and a decrease in NPS amplitude were obtained with all iterative algorithms. VEO™ required long reconstruction times, while all other algorithms produced reconstructions in real time. Compared to FBP, iterative algorithms reduced image noise and increased CNR. The iterative algorithms available on different scanners achieved different levels of noise reduction and CNR increase while spatial resolution improvements were obtained only with VEO™. This study is useful in that it provides performance assessment of the iterative algorithms available from several mainstream CT manufacturers.
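
    The noise and CNR figures above come from ROI statistics; a minimal sketch of such measurements, with synthetic ROIs standing in for Catphan500 regions:

    ```python
    import numpy as np

    def roi_noise(roi):
        """Image noise: standard deviation of values inside a uniform ROI."""
        return roi.std()

    def roi_cnr(roi_insert, roi_background):
        """Contrast-to-noise ratio between a contrast insert and the background."""
        return abs(roi_insert.mean() - roi_background.mean()) / roi_background.std()

    # Toy check: halving the noise at equal contrast roughly doubles the CNR,
    # mimicking the FBP-versus-iterative comparison described above.
    rng = np.random.default_rng(10)
    bg_fbp, ins_fbp = rng.normal(0, 20, (50, 50)), rng.normal(60, 20, (50, 50))
    bg_ir,  ins_ir  = rng.normal(0, 10, (50, 50)), rng.normal(60, 10, (50, 50))
    print(f"FBP: noise {roi_noise(bg_fbp):.1f}, CNR {roi_cnr(ins_fbp, bg_fbp):.1f}")
    print(f"IR : noise {roi_noise(bg_ir):.1f}, CNR {roi_cnr(ins_ir, bg_ir):.1f}")
    ```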

  16. Fast and accurate face recognition based on image compression

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Blasch, Erik

    2017-05-01

    Image compression is desired for many image-related applications, especially network-based applications with bandwidth and storage constraints. Reports in the face recognition community typically concentrate on the maximum compression rate that does not decrease recognition accuracy. In general, wavelet-based face recognition methods such as EBGM (elastic bunch graph matching) and FPB (face pattern byte) achieve high performance but run slowly due to their high computational demands, while the PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis) algorithms run fast but perform poorly in face recognition. In this paper, we propose a novel face recognition method based on a standard image compression algorithm, termed compression-based (CPB) face recognition. First, all gallery images are compressed by the selected compression algorithm. Second, a mixed image is formed from the probe and gallery images and then compressed. Third, a composite compression ratio (CCR) is computed from three compression ratios calculated from the probe, gallery and mixed images. Finally, the CCR values are compared, and the largest CCR corresponds to the matched face. The time cost of each face match is approximately that of compressing the mixed face image. We tested the proposed CPB method on the "ASUMSS face database" (visible and thermal images) from 105 subjects. The face recognition accuracy with visible images is 94.76% when using JPEG compression; on the same face dataset, the accuracy of the FPB algorithm was reported as 91.43%. JPEG-compression-based (JPEG-CPB) face recognition is standard and fast, and may be integrated into a real-time imaging device.
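
    A toy sketch of compression-based matching using zlib as the codec; the score below is an NCD-flavored stand-in, since the paper's exact CCR formula is not reproduced in the abstract:

    ```python
    import zlib
    import numpy as np

    def csize(img):
        """Compressed size of an image byte stream (zlib as a stand-in codec)."""
        return len(zlib.compress(img.tobytes(), 9))

    def ccr(probe, gallery):
        """Composite-compression-ratio-style score: how much better the mixed
        image compresses than the probe and gallery do separately.
        Larger values indicate more shared structure."""
        mixed = np.concatenate([probe, gallery], axis=0)
        return (csize(probe) + csize(gallery)) / csize(mixed)

    rng = np.random.default_rng(11)
    gallery = rng.integers(0, 256, (64, 64), dtype=np.uint8)
    probe_same = gallery.copy()
    probe_same[:8] = rng.integers(0, 256, (8, 64), dtype=np.uint8)  # same face, small change
    probe_other = rng.integers(0, 256, (64, 64), dtype=np.uint8)    # different subject

    print("same subject CCR     :", round(ccr(probe_same, gallery), 2))
    print("different subject CCR:", round(ccr(probe_other, gallery), 2))
    ```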

  17. Unsupervised neural spike sorting for high-density microelectrode arrays with convolutive independent component analysis.

    PubMed

    Leibig, Christian; Wachtler, Thomas; Zeck, Günther

    2016-09-15

    Unsupervised identification of action potentials in multi-channel extracellular recordings, in particular from high-density microelectrode arrays with thousands of sensors, is an unresolved problem. While independent component analysis (ICA) achieves rapid unsupervised sorting, it ignores the convolutive structure of extracellular data, thus limiting the unmixing to a subset of neurons. Here we present a spike sorting algorithm based on convolutive ICA (cICA) that retrieves a larger number of accurately sorted neurons than instantaneous ICA while accounting for signal overlaps. Spike sorting was applied to datasets with varying signal-to-noise ratios (SNR: 3-12) and 27% spike overlaps, sampled at either 11.5 or 23 kHz on 4365 electrodes. We demonstrate how the instantaneity assumption in ICA-based algorithms has to be relaxed in order to improve spike sorting performance for high-density microelectrode array recordings: reformulating the convolutive mixture as an instantaneous mixture by modeling several delayed samples jointly is necessary to increase the signal-to-noise ratio. Our results emphasize that different cICA algorithms are not equivalent. Spike sorting performance was assessed with ground-truth data generated from experimentally derived templates. The presented spike sorter was able to extract ≈90% of the true spike trains with an error rate below 2%. It was superior to two alternative (c)ICA methods (≈80% accurately sorted neurons) and comparable to a supervised sorting. Our new algorithm represents a fast solution to overcoming the current bottleneck in spike sorting of large datasets generated by simultaneous recording with thousands of electrodes. Copyright © 2016 Elsevier B.V. All rights reserved.
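
    The key step of recasting a convolutive mixture as an instantaneous one can be sketched by delay-embedding the channels before running instantaneous ICA; the data, channel counts, and dimensions below are toy assumptions:

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    def delay_embed(X, lags=5):
        """Stack L delayed copies of each channel so a convolutive mixture can
        be treated as a (larger) instantaneous one, as in convolutive ICA."""
        out = np.vstack([np.roll(X, l, axis=1) for l in range(lags)])
        return out[:, lags:]      # drop samples contaminated by the roll's wrap-around

    # Toy extracellular-like data: 8 channels, convolutive mixing of 3 spike trains.
    rng = np.random.default_rng(12)
    sources = (rng.random((3, 20000)) < 0.002).astype(float)  # sparse spike trains
    kernels = rng.normal(size=(8, 3, 7))                      # per-channel waveforms
    X = np.zeros((8, 20000))
    for c in range(8):
        for s in range(3):
            X[c] += np.convolve(sources[s], kernels[c, s], mode="same")
    X += 0.05 * rng.normal(size=X.shape)

    ica = FastICA(n_components=10, random_state=0, max_iter=1000)
    components = ica.fit_transform(delay_embed(X, lags=5).T).T
    print(components.shape)   # candidate spike components, to be matched to units
    ```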

  18. Novel pure component contribution, mean centering of ratio spectra and factor based algorithms for simultaneous resolution and quantification of overlapped spectral signals: An application to recently co-formulated tablets of chlorzoxazone, aceclofenac and paracetamol

    NASA Astrophysics Data System (ADS)

    Toubar, Safaa S.; Hegazy, Maha A.; Elshahed, Mona S.; Helmy, Marwa I.

    2016-06-01

    In this work, resolution and quantitation of spectral signals are achieved by several univariate and multivariate techniques. The novel pure component contribution algorithm (PCCA), along with mean centering of ratio spectra (MCR) and the factor-based partial least squares (PLS) algorithm, was developed for simultaneous determination of chlorzoxazone (CXZ), aceclofenac (ACF) and paracetamol (PAR) in their pure form and in recently co-formulated tablets. The PCCA method allows the determination of each drug at its λmax, while the mean-centered values at 230, 302 and 253 nm were used for quantification of CXZ, ACF and PAR, respectively, by the MCR method. The partial least squares (PLS) algorithm was applied as a multivariate calibration method. The three methods were successfully applied for the determination of CXZ, ACF and PAR in pure form and in tablets. Good linear relationships were obtained in the ranges of 2-50, 2-40 and 2-30 μg mL⁻¹ for CXZ, ACF and PAR, respectively, by both PCCA and MCR, while the PLS model was built for the three compounds each in the range of 2-10 μg mL⁻¹. The results obtained from the proposed methods were statistically compared with those of a reported method. The PCCA and MCR methods were validated according to ICH guidelines, while the PLS method was validated by both cross-validation and an independent data set. The methods were found suitable for the determination of the studied drugs in bulk powder and tablets.
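
    A minimal PLS calibration sketch with scikit-learn, using synthetic Gaussian bands as stand-ins for the three drugs' spectra (the PCCA and MCR steps are not reproduced here):

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(13)
    wavelengths = np.linspace(200, 400, 201)

    def band(center, width):
        return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

    # Synthetic pure spectra standing in for CXZ, ACF and PAR.
    pure = np.stack([band(280, 12), band(275, 20), band(243, 10)])

    conc = rng.uniform(2, 10, size=(40, 3))  # calibration set, 2-10 ug/mL each
    spectra = conc @ pure + 0.002 * rng.normal(size=(40, 201))

    pls = PLSRegression(n_components=3).fit(spectra, conc)
    test = np.array([[4.0, 6.0, 8.0]]) @ pure
    print("predicted concentrations:", pls.predict(test).round(2))
    ```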

  19. Simulations of Aperture Synthesis Imaging Radar for the EISCAT_3D Project

    NASA Astrophysics Data System (ADS)

    La Hoz, C.; Belyey, V.

    2012-12-01

    EISCAT_3D is a project to build the next generation of incoherent scatter radars, endowed with multiple three-dimensional capabilities, to replace the current EISCAT radars in Northern Scandinavia. Aperture Synthesis Imaging Radar (ASIR) is one of the technologies adopted by the EISCAT_3D project to provide imaging capabilities in three dimensions, including sub-beam resolution. Complemented by pulse compression, it will provide 3-dimensional images of certain types of incoherent scatter radar targets resolved to about 100 metres at 100 km range, depending on the signal-to-noise ratio. This ability will open new research opportunities to map small structures associated with non-homogeneous, unstable processes such as aurora, summer and winter polar radar echoes (PMSE and PMWE), Naturally Enhanced Ion Acoustic Lines (NEIALs), structures excited by HF ionospheric heating, meteors, space debris, and others. To demonstrate the feasibility of the antenna configurations and the imaging inversion algorithms, a simulation of synthetic incoherent scattering data has been performed. The simulation algorithm incorporates the ability to control the background plasma parameters with non-homogeneous, non-stationary components over an extended 3-dimensional space. Control over the positions of a number of separated receiving antennas, their signal-to-noise ratios and arriving phases allows realistic simulation of a multi-baseline interferometric imaging radar system. The resulting simulated data are fed into various inversion algorithms. This simulation package is a powerful tool for evaluating various antenna configurations and inversion algorithms. Results applied to realistic design alternatives of EISCAT_3D will be described.

  20. Nonlinear Finite Element Analysis of Shells with Large Aspect Ratio

    NASA Technical Reports Server (NTRS)

    Chang, T. Y.; Sawamiphakdi, K.

    1984-01-01

    A higher-order degenerated shell element with nine nodes was selected for large deformation and post-buckling analysis of thick or thin shells. Elastic-plastic material properties are also included, and the post-buckling analysis algorithm is given. Using a square plate, it was demonstrated that the nine-node element does not exhibit shear locking even when its aspect ratio is increased to the order of 10 to the 8th power. Two sample problems are given to illustrate the analysis capability of the shell element.
