Sample records for the Viterbi algorithm

  1. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1998-01-01

    Decoding algorithms based on the trellis representation of a code (block or convolutional) drastically reduce decoding complexity. The best known and most commonly used trellis-based decoding algorithm is the Viterbi algorithm. It is a maximum likelihood decoding algorithm. Convolutional codes with the Viterbi decoding have been widely used for error control in digital communications over the last two decades. This chapter is concerned with the application of the Viterbi decoding algorithm to linear block codes. First, the Viterbi algorithm is presented. Then, optimum sectionalization of a trellis to minimize the computational complexity of a Viterbi decoder is discussed and an algorithm is presented. Some design issues for IC (integrated circuit) implementation of a Viterbi decoder are considered and discussed. Finally, a new decoding algorithm based on the principle of compare-select-add is presented. This new algorithm can be applied to both block and convolutional codes and is more efficient than the conventional Viterbi algorithm based on the add-compare-select principle. This algorithm is particularly efficient for rate 1/n antipodal convolutional codes and their high-rate punctured codes. It reduces computational complexity by one-third compared with the Viterbi algorithm.
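
    To make the add-compare-select recursion mentioned above concrete, here is a minimal, illustrative Python sketch of hard-decision Viterbi decoding for a small rate-1/2, constraint-length-3 convolutional code (generators 7 and 5 octal). The code, metrics, and example message are textbook placeholders, not material from the report.

    ```python
    # Minimal hard-decision Viterbi decoder for the rate-1/2, constraint-length-3
    # convolutional code with generators (7, 5) octal.  Illustrative sketch only.

    def conv_encode(bits, state=0):
        """Encode a bit sequence; each input bit yields two output bits."""
        out = []
        for b in bits:
            s = (b << 2) | state                          # 3-bit register, newest bit on top
            out += [(s ^ (s >> 1) ^ (s >> 2)) & 1,        # generator 111 (7 octal)
                    (s ^ (s >> 2)) & 1]                   # generator 101 (5 octal)
            state = s >> 1                                # shift the register
        return out

    def viterbi_decode(received, n_states=4):
        INF = float("inf")
        metric = [0.0] + [INF] * (n_states - 1)           # encoder starts in state 0
        paths = [[] for _ in range(n_states)]
        for t in range(0, len(received), 2):
            r = received[t:t + 2]
            new_metric = [INF] * n_states
            new_paths = [None] * n_states
            for state in range(n_states):
                for bit in (0, 1):
                    s = (bit << 2) | state
                    expect = [(s ^ (s >> 1) ^ (s >> 2)) & 1, (s ^ (s >> 2)) & 1]
                    # "add": branch metric is the Hamming distance to the expected pair
                    cand = metric[state] + sum(a != b for a, b in zip(expect, r))
                    nxt = s >> 1
                    # "compare" and "select": keep the better of the paths merging here
                    if cand < new_metric[nxt]:
                        new_metric[nxt] = cand
                        new_paths[nxt] = paths[state] + [bit]
            metric, paths = new_metric, new_paths
        best = min(range(n_states), key=lambda st: metric[st])
        return paths[best]

    if __name__ == "__main__":
        msg = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0]              # last two bits are a zero tail
        coded = conv_encode(msg)
        coded[3] ^= 1                                      # flip one channel bit
        print(viterbi_decode(coded) == msg)                # the single error is corrected
    ```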

  2. Cache-Oblivious parallel SIMD Viterbi decoding for sequence search in HMMER.

    PubMed

    Ferreira, Miguel; Roma, Nuno; Russo, Luis M S

    2014-05-30

    HMMER is a commonly used bioinformatics tool based on Hidden Markov Models (HMMs) to analyze and process biological sequences. One of its main homology engines is based on the Viterbi decoding algorithm, which was already highly parallelized and optimized using Farrar's striped processing pattern with the Intel SSE2 instruction set extension. A new SIMD vectorization of the Viterbi decoding algorithm is proposed, based on an SSE2 inter-task parallelization approach similar to the DNA alignment algorithm proposed by Rognes. Besides this alternative vectorization scheme, the proposed implementation also introduces a new partitioning of the Markov model that allows a significantly more efficient exploitation of the cache locality. Such optimization, together with an improved loading of the emission scores, allows the achievement of a constant processing throughput, regardless of the innermost-cache size and of the dimension of the considered model. The proposed optimized vectorization of the Viterbi decoding algorithm was extensively evaluated and compared with the HMMER3 decoder to process DNA and protein datasets, proving to be a rather competitive alternative implementation. Being always faster than the already highly optimized ViterbiFilter implementation of HMMER3, the proposed Cache-Oblivious Parallel SIMD Viterbi (COPS) implementation provides a constant throughput and offers a processing speedup of up to two times, depending on the model's size.
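
    The inter-task idea, scoring many sequences against the model in lock-step rather than striping a single sequence, can be mimicked in NumPy by letting an array axis stand in for the SSE vector lanes. The sketch below is only a schematic analogue of that approach on a plain discrete HMM, not HMMER's profile-HMM pipeline; all model parameters are invented.

    ```python
    import numpy as np

    def viterbi_scores_batch(obs, log_A, log_B, log_pi):
        """Viterbi (max-product) log-scores for a batch of equal-length sequences.

        obs    : (batch, T) integer observation symbols
        log_A  : (S, S) log transition matrix, log_A[i, j] = log P(j | i)
        log_B  : (S, V) log emission matrix
        log_pi : (S,)   log initial distribution
        returns: (batch,) best-path log-probability of every sequence
        """
        batch, T = obs.shape
        # delta[b, s] = best log-score of a state path ending in state s at time t
        delta = log_pi[None, :] + log_B[:, obs[:, 0]].T               # (batch, S)
        for t in range(1, T):
            # max over previous states, for all sequences in the batch at once
            delta = (delta[:, :, None] + log_A[None, :, :]).max(axis=1)
            delta += log_B[:, obs[:, t]].T
        return delta.max(axis=1)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        S, V, batch, T = 3, 4, 8, 16
        A = rng.dirichlet(np.ones(S), size=S)
        B = rng.dirichlet(np.ones(V), size=S)
        pi = rng.dirichlet(np.ones(S))
        obs = rng.integers(0, V, size=(batch, T))
        print(viterbi_scores_batch(obs, np.log(A), np.log(B), np.log(pi)))
    ```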

  3. Real-time minimal-bit-error probability decoding of convolutional codes

    NASA Technical Reports Server (NTRS)

    Lee, L.-N.

    1974-01-01

    A recursive procedure is derived for decoding of rate R = 1/n binary convolutional codes which minimizes the probability of error of the individual decoding decisions for each information bit, subject to the constraint that the decoding delay be limited to Delta branches. This new decoding algorithm is similar to, but somewhat more complex than, the Viterbi decoding algorithm. A real-time, i.e., fixed decoding delay, version of the Viterbi algorithm is also developed and used for comparison to the new algorithm on simulated channels. It is shown that the new algorithm offers advantages over Viterbi decoding in soft-decision applications, such as in the inner coding system for concatenated coding.

  4. Real-time minimal bit error probability decoding of convolutional codes

    NASA Technical Reports Server (NTRS)

    Lee, L. N.

    1973-01-01

    A recursive procedure is derived for decoding of rate R=1/n binary convolutional codes which minimizes the probability of error of the individual decoding decisions for each information bit, subject to the constraint that the decoding delay be limited to Delta branches. This new decoding algorithm is similar to, but somewhat more complex than, the Viterbi decoding algorithm. A real-time, i.e. fixed decoding delay, version of the Viterbi algorithm is also developed and used for comparison to the new algorithm on simulated channels. It is shown that the new algorithm offers advantages over Viterbi decoding in soft-decision applications such as in the inner coding system for concatenated coding.

  5. Cache-Oblivious parallel SIMD Viterbi decoding for sequence search in HMMER

    PubMed Central

    2014-01-01

    Background: HMMER is a commonly used bioinformatics tool based on Hidden Markov Models (HMMs) to analyze and process biological sequences. One of its main homology engines is based on the Viterbi decoding algorithm, which was already highly parallelized and optimized using Farrar’s striped processing pattern with the Intel SSE2 instruction set extension. Results: A new SIMD vectorization of the Viterbi decoding algorithm is proposed, based on an SSE2 inter-task parallelization approach similar to the DNA alignment algorithm proposed by Rognes. Besides this alternative vectorization scheme, the proposed implementation also introduces a new partitioning of the Markov model that allows a significantly more efficient exploitation of the cache locality. Such optimization, together with an improved loading of the emission scores, allows the achievement of a constant processing throughput, regardless of the innermost-cache size and of the dimension of the considered model. Conclusions: The proposed optimized vectorization of the Viterbi decoding algorithm was extensively evaluated and compared with the HMMER3 decoder to process DNA and protein datasets, proving to be a rather competitive alternative implementation. Being always faster than the already highly optimized ViterbiFilter implementation of HMMER3, the proposed Cache-Oblivious Parallel SIMD Viterbi (COPS) implementation provides a constant throughput and offers a processing speedup of up to two times, depending on the model’s size. PMID:24884826

  6. A study of digital holographic filters generation. Phase 2: Digital data communication system, volume 1

    NASA Technical Reports Server (NTRS)

    Ingels, F. M.; Mo, C. D.

    1978-01-01

    An empirical study of the performance of Viterbi decoders on bursty channels was carried out, and an improved algebraic decoder for nonsystematic codes was developed. The hybrid algorithm was simulated for the (2,1), K = 7 code on a computer using 20 channels having various error statistics, ranging from purely random errors to purely bursty channels. The hybrid system outperformed both the algebraic and the Viterbi decoders in every case except the 1% random-error channel, where the Viterbi decoder made one fewer bit error.

  7. High-speed architecture for the decoding of trellis-coded modulation

    NASA Technical Reports Server (NTRS)

    Osborne, William P.

    1992-01-01

    Since 1971, when the Viterbi Algorithm was introduced as the optimal method of decoding convolutional codes, improvements in circuit technology, especially VLSI, have steadily increased its speed and practicality. Trellis-Coded Modulation (TCM) combines convolutional coding with higher level modulation (non-binary source alphabet) to provide forward error correction and spectral efficiency. For binary codes, the current state-of-the-art is a 64-state Viterbi decoder on a single CMOS chip, operating at a data rate of 25 Mbps. Recently, there has been an interest in increasing the speed of the Viterbi Algorithm by improving the decoder architecture, or by simplifying the algorithm itself. Designs employing new architectural techniques are now in existence; however, these techniques are currently applied to simpler binary codes, not to TCM. The purpose of this report is to discuss TCM architectural considerations in general, and to present the design, at the logic gate level, of a specific TCM decoder which applies these considerations to achieve high-speed decoding.

  8. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1998-01-01

    A code trellis is a graphical representation of a code, block or convolutional, in which every path represents a codeword (or a code sequence for a convolutional code). This representation makes it possible to implement Maximum Likelihood Decoding (MLD) of a code with reduced decoding complexity. The most well known trellis-based MLD algorithm is the Viterbi algorithm. The trellis representation was first introduced and used for convolutional codes [23]. This representation, together with the Viterbi decoding algorithm, has resulted in a wide range of applications of convolutional codes for error control in digital communications over the last two decades. Trellis research for linear block codes, by contrast, remained inactive for a long time. There are two major reasons for this inactive period of research in this area. First, most coding theorists at that time believed that block codes did not have simple trellis structure like convolutional codes and that maximum likelihood decoding of linear block codes using the Viterbi algorithm was practically impossible, except for very short block codes. Second, since almost all of the linear block codes are constructed algebraically or based on finite geometries, it was the belief of many coding theorists that algebraic decoding was the only way to decode these codes. These two reasons seriously hindered the development of efficient soft-decision decoding methods for linear block codes and their applications to error control in digital communications. This led to a general belief that block codes are inferior to convolutional codes and hence that they were not useful. Chapter 2 gives a brief review of linear block codes. The goal is to provide the essential background material for the development of trellis structure and trellis-based decoding algorithms for linear block codes in the later chapters. Chapters 3 through 6 present the fundamental concepts, finite-state machine model, state space formulation, basic structural properties, state labeling, construction procedures, complexity, minimality, and sectionalization of trellises. Chapter 7 discusses trellis decomposition and subtrellises for low-weight codewords. Chapter 8 first presents well known methods for constructing long powerful codes from short component codes or component codes of smaller dimensions, and then provides methods for constructing their trellises, which include Shannon and Cartesian product techniques. Chapter 9 deals with convolutional codes, puncturing, zero-tail termination, and tail-biting. Chapters 10 through 13 present various trellis-based decoding algorithms, old and new. Chapter 10 first discusses the application of the well known Viterbi decoding algorithm to linear block codes, optimum sectionalization of a code trellis to minimize computational complexity, and design issues for IC (integrated circuit) implementation of a Viterbi decoder. Then it presents a new decoding algorithm for convolutional codes, named the Differential Trellis Decoding (DTD) algorithm. Chapter 12 presents a suboptimum reliability-based iterative decoding algorithm with a low-weight trellis search for the most likely codeword. This decoding algorithm provides a good trade-off between error performance and decoding complexity. All the decoding algorithms presented in Chapters 10 through 12 are devised to minimize word error probability. Chapter 13 presents decoding algorithms that minimize bit error probability and provide the corresponding soft (reliability) information at the output of the decoder. The decoding algorithms presented are the MAP (maximum a posteriori probability) decoding algorithm and the Soft-Output Viterbi Algorithm (SOVA). Finally, the minimization of bit error probability in trellis-based MLD is discussed.

  9. Following the Viterbi Path to Deduce Flagellar Actin-Interacting Proteins of Leishmania spp.: Report on Cofilins and Twinfilins

    NASA Astrophysics Data System (ADS)

    Pacheco, Ana Carolina L.; Araújo, Fabiana F.; Kamimura, Michel T.; Medeiros, Sarah R.; Viana, Daniel A.; Oliveira, Fátima de Cássia E.; Filho, Raimundo Araújo; Costa, Marcília P.; Oliveira, Diana M.

    2007-11-01

    For performing vital cellular processes, such as motility, eukaryotic cells rely on the actin cytoskeleton, whose structure and dynamics are tightly controlled by a large number of actin-interacting (AIP) or actin-related/regulating (ARP) proteins. Trypanosomatid protozoa, such as Leishmania, rely on their flagellum for motility and sensory reception, which are believed to allow parasite migration, adhesion, invasion and even persistence on mammalian host tissues to cause disease. Actin can determine cell stiffness and transmit force during mechanotransduction, cytokinesis, cell motility and other cellular shape changes, while the identification and analysis of AIPs can help to improve understanding of their mechanical properties in physiological architectures such as the Leishmania flagellar apparatus considered here. This work applies bioinformatics tools and refined pattern recognition techniques (such as hidden Markov models (HMMs) decoded via the Viterbi algorithm/path) to improve the recognition of actin-binding/interacting activity through identification of AIPs in genomes, transcriptomes and proteomes of Leishmania species. We report here cofilins and twinfilins as putative components of the flagellar apparatus, a direct bioinformatics contribution to the secondary annotation of Leishmania and trypanosomatid genomes.

  10. Space vehicle Viterbi decoder [data converters, algorithms]

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The design and fabrication of an extremely low-power, constraint-length 7, rate 1/3 Viterbi decoder brassboard capable of operating at information rates of up to 100 kb/s is presented. The brassboard is partitioned to facilitate a later transition to an LSI version requiring even less power. The effect of soft-decision thresholds, path memory lengths, and output selection algorithms on the bit error rate is evaluated. A branch synchronization algorithm is compared with a more conventional approach. The implementation of the decoder and its test set (including all-digital noise source) are described along with the results of various system tests and evaluations. Results and recommendations are presented.

  11. Convolutional coding at 50 Mbps for the Shuttle Ku-band return link

    NASA Technical Reports Server (NTRS)

    Batson, B. H.; Huth, G. K.

    1976-01-01

    Error correcting coding is required for 50 Mbps data link from the Shuttle Orbiter through the Tracking and Data Relay Satellite System (TDRSS) to the ground because of severe power limitations. Convolutional coding has been chosen because the decoding algorithms (sequential and Viterbi) provide significant coding gains at the required bit error probability of one in 10 to the sixth power and can be implemented at 50 Mbps with moderate hardware. While a 50 Mbps sequential decoder has been built, the highest data rate achieved for a Viterbi decoder is 10 Mbps. Thus, five multiplexed 10 Mbps Viterbi decoders must be used to provide a 50 Mbps data rate. This paper discusses the tradeoffs which were considered when selecting the multiplexed Viterbi decoder approach for this application.

  12. Differential carrier phase recovery for QPSK optical coherent systems with integrated tunable lasers.

    PubMed

    Fatadin, Irshaad; Ives, David; Savory, Seb J

    2013-04-22

    The performance of a differential carrier phase recovery algorithm is investigated for the quadrature phase shift keying (QPSK) modulation format with an integrated tunable laser. The phase noise of the widely-tunable laser, measured using a digital coherent receiver, is shown to exhibit significant drift compared to a standard distributed feedback (DFB) laser due to an enhanced low-frequency noise component. The simulated performance of the differential algorithm is compared to the Viterbi-Viterbi phase estimation at different baud rates using the measured phase noise for the integrated tunable laser.
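
    The Viterbi-Viterbi estimator referred to above is the classical feedforward fourth-power phase estimator for QPSK. The NumPy sketch below shows a block-averaged version with simple unwrapping between blocks; the block length, noise levels and phase-noise model are illustrative choices, not values from the paper.

    ```python
    import numpy as np

    def viterbi_viterbi_phase(symbols, block_len=64):
        """Feedforward 4th-power (Viterbi & Viterbi) phase estimate for QPSK.

        Assumes a constellation on the axes (0, pi/2, pi, 3pi/2); a pi/4-rotated
        constellation simply adds a fixed pi/4 offset to the estimate.
        """
        n_blocks = len(symbols) // block_len
        est = np.zeros(n_blocks)
        for k in range(n_blocks):
            blk = symbols[k * block_len:(k + 1) * block_len]
            # raising to the 4th power strips the QPSK data modulation
            est[k] = np.angle(np.sum(blk ** 4)) / 4.0
        # remove pi/2 jumps between consecutive block estimates
        est = np.unwrap(4.0 * est) / 4.0
        return np.repeat(est, block_len)                  # hold each estimate over its block

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        data = rng.integers(0, 4, 4096)
        tx = np.exp(1j * (np.pi / 2) * data)                          # QPSK symbols
        phase = np.cumsum(rng.normal(0, 0.01, tx.size))               # laser phase-noise walk
        noise = 0.05 * (rng.normal(size=tx.size) + 1j * rng.normal(size=tx.size))
        rx = tx * np.exp(1j * phase) + noise
        est = viterbi_viterbi_phase(rx, block_len=64)
        err = np.angle(np.exp(1j * 4 * (est - phase))) / 4            # error modulo pi/2
        print(np.sqrt(np.mean(err ** 2)))                             # small residual (radians)
    ```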

  13. Frequency Management for Electromagnetic Continuous Wave Conductivity Meters

    PubMed Central

    Mazurek, Przemyslaw; Putynkowski, Grzegorz

    2016-01-01

    Ground conductivity meters use electromagnetic fields for the mapping of geological variations, like the determination of water content in different ground layers, which is important for the state analysis of embankments. The VLF band is contaminated by numerous natural and artificial electromagnetic interference signals. Fixing the meter’s working frequency prior to the measurement of ground conductivity is therefore not possible, due to the variable frequency of the interferences. Frequency management based on the analysis of the selected band using track-before-detect (TBD) algorithms, which allows dynamic frequency changes of the conductivity meter’s transmitting part, is proposed in the paper. Naive maximum value search, spatio-temporal TBD (ST-TBD), Viterbi TBD and a new algorithm that uses combined ST-TBD and Viterbi TBD are compared. Monte Carlo tests are provided for the numerical analysis of their properties for a single interference signal in the considered band, and the new approach based on combined ST-TBD and Viterbi algorithms shows the best performance. The considered algorithms process spectrogram data for the selected band, so the DFT (Discrete Fourier Transform) can be applied for the computation of the spectrogram. Real-time properties, related to latency, are also discussed, and it is shown that TBD algorithms are feasible for real applications. PMID:27070608
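
    The Viterbi TBD stage can be pictured as a dynamic program over the spectrogram: each time frame contributes the magnitude of the selected frequency bin, and bin changes between frames are penalized. The sketch below is a generic version of that idea rather than the authors' implementation; the jump limit and penalty are illustrative parameters.

    ```python
    import numpy as np

    def viterbi_track(spectrogram, max_jump=2, jump_penalty=0.5):
        """Track the strongest slowly varying spectral line in a spectrogram.

        spectrogram : (T, F) magnitudes (time frames x frequency bins)
        max_jump    : largest allowed bin change between consecutive frames
        jump_penalty: score penalty per bin of frequency change
        returns     : length-T array of selected bin indices
        """
        T, F = spectrogram.shape
        score = spectrogram[0].astype(float)
        back = np.zeros((T, F), dtype=int)
        for t in range(1, T):
            new_score = np.full(F, -np.inf)
            for f in range(F):
                lo, hi = max(0, f - max_jump), min(F, f + max_jump + 1)
                cand = score[lo:hi] - jump_penalty * np.abs(np.arange(lo, hi) - f)
                best = int(np.argmax(cand))
                back[t, f] = lo + best
                new_score[f] = cand[best] + spectrogram[t, f]
            score = new_score
        path = np.zeros(T, dtype=int)                     # trace the best path back in time
        path[-1] = int(np.argmax(score))
        for t in range(T - 1, 0, -1):
            path[t - 1] = back[t, path[t]]
        return path

    if __name__ == "__main__":
        rng = np.random.default_rng(2)
        T, F = 200, 64
        truth = (32 + 8 * np.sin(np.linspace(0, 4, T))).astype(int)   # drifting line
        spec = rng.rayleigh(0.5, size=(T, F))                         # noise floor
        spec[np.arange(T), truth] += 2.0                              # weak signal on top
        est = viterbi_track(spec)
        print(np.mean(np.abs(est - truth) <= 1))                      # fraction within one bin
    ```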

  14. Frequency Management for Electromagnetic Continuous Wave Conductivity Meters.

    PubMed

    Mazurek, Przemyslaw; Putynkowski, Grzegorz

    2016-04-07

    Ground conductivity meters use electromagnetic fields for the mapping of geological variations, like the determination of water content in different ground layers, which is important for the state analysis of embankments. The VLF band is contaminated by numerous natural and artificial electromagnetic interference signals. Fixing the meter's working frequency prior to the measurement of ground conductivity is therefore not possible, due to the variable frequency of the interferences. Frequency management based on the analysis of the selected band using track-before-detect (TBD) algorithms, which allows dynamic frequency changes of the conductivity meter's transmitting part, is proposed in the paper. Naive maximum value search, spatio-temporal TBD (ST-TBD), Viterbi TBD and a new algorithm that uses combined ST-TBD and Viterbi TBD are compared. Monte Carlo tests are provided for the numerical analysis of their properties for a single interference signal in the considered band, and the new approach based on combined ST-TBD and Viterbi algorithms shows the best performance. The considered algorithms process spectrogram data for the selected band, so the DFT (Discrete Fourier Transform) can be applied for the computation of the spectrogram. Real-time properties, related to latency, are also discussed, and it is shown that TBD algorithms are feasible for real applications.

  15. Influence of time and length size feature selections for human activity sequences recognition.

    PubMed

    Fang, Hongqing; Chen, Long; Srinivasan, Raghavendiran

    2014-01-01

    In this paper, the Viterbi algorithm based on a hidden Markov model is applied to recognize activity sequences from observed sensor events. Alternative selections of the time feature values of sensor events and of the activity length feature values are tested, and the resulting activity sequence recognition performances of the Viterbi algorithm are evaluated. The results show that selecting larger time feature values of sensor events and/or smaller activity length feature values generates relatively better activity sequence recognition performance. © 2013 ISA. Published by ISA. All rights reserved.
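
    For reference, the recursion applied in such studies is the standard log-domain Viterbi decoder for a discrete HMM, sketched below with backpointer traceback; the toy two-activity model and its parameters are invented for illustration.

    ```python
    import numpy as np

    def viterbi_path(obs, log_A, log_B, log_pi):
        """Most probable hidden-state sequence of a discrete HMM (log domain)."""
        T, S = len(obs), len(log_pi)
        delta = log_pi + log_B[:, obs[0]]                 # best score ending in each state
        back = np.zeros((T, S), dtype=int)
        for t in range(1, T):
            trans = delta[:, None] + log_A                # (previous state, next state)
            back[t] = trans.argmax(axis=0)
            delta = trans.max(axis=0) + log_B[:, obs[t]]
        path = np.zeros(T, dtype=int)
        path[-1] = int(delta.argmax())
        for t in range(T - 1, 0, -1):                     # backpointer traceback
            path[t - 1] = back[t, path[t]]
        return path

    if __name__ == "__main__":
        # Toy two-activity model: states {0: "sleep", 1: "cook"}, symbols = sensor ids
        A = np.array([[0.9, 0.1], [0.2, 0.8]])
        B = np.array([[0.7, 0.2, 0.1],                    # bedroom / kitchen / door sensors
                      [0.1, 0.8, 0.1]])
        pi = np.array([0.5, 0.5])
        obs = [0, 0, 1, 1, 1, 2, 0]
        print(viterbi_path(obs, np.log(A), np.log(B), np.log(pi)))
    ```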

  16. The trellis complexity of convolutional codes

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Lin, W.

    1995-01-01

    It has long been known that convolutional codes have a natural, regular trellis structure that facilitates the implementation of Viterbi's algorithm. It has gradually become apparent that linear block codes also have a natural, though not in general a regular, 'minimal' trellis structure, which allows them to be decoded with a Viterbi-like algorithm. In both cases, the complexity of the Viterbi decoding algorithm can be accurately estimated by the number of trellis edges per encoded bit. It would, therefore, appear that we are in a good position to make a fair comparison of the Viterbi decoding complexity of block and convolutional codes. Unfortunately, however, this comparison is somewhat muddled by the fact that some convolutional codes, the punctured convolutional codes, are known to have trellis representations that are significantly less complex than the conventional trellis. In other words, the conventional trellis representation for a convolutional code may not be the minimal trellis representation. Thus, ironically, at present we seem to know more about the minimal trellis representation for block than for convolutional codes. In this article, we provide a remedy, by developing a theory of minimal trellises for convolutional codes. (A similar theory has recently been given by Sidorenko and Zyablov). This allows us to make a direct performance-complexity comparison for block and convolutional codes. A by-product of our work is an algorithm for choosing, from among all generator matrices for a given convolutional code, what we call a trellis-minimal generator matrix, from which the minimal trellis for the code can be directly constructed. Another by-product is that, in the new theory, punctured convolutional codes no longer appear as a special class, but simply as high-rate convolutional codes whose trellis complexity is unexpectedly small.

  17. Ballistic missile precession frequency extraction based on the Viterbi & Kalman algorithm

    NASA Astrophysics Data System (ADS)

    Wu, Longlong; Xie, Yongjie; Xu, Daping; Ren, Li

    2015-12-01

    Radar micro-Doppler signatures are of great potential for target detection, classification and recognition. In the mid-course phase, warheads flying outside the atmosphere are usually accompanied by precession. Precession may induce additional frequency modulations on the returned radar signal, which can be regarded as a unique signature and provide additional information that is complementary to existing target recognition methods. The main purpose of this paper is to establish a more realistic precession model of a conical ballistic missile warhead and to extract the precession parameters by utilizing the Viterbi & Kalman algorithm, which evidently improves the precession frequency estimation accuracy, especially at low SNR.
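
    One common way to combine the two algorithms named in the title is to let a Viterbi-style ridge tracker produce a raw frequency sequence (as in the spectrogram sketch given earlier in this collection) and then smooth it with a Kalman filter. Below is a minimal constant-velocity scalar Kalman filter with illustrative noise settings; it is not the paper's model.

    ```python
    import numpy as np

    def kalman_smooth_freq(meas, q=0.01, r=1.0):
        """Causal Kalman filtering of a noisy scalar frequency track.

        State x = [frequency, frequency rate], constant-velocity model.
        meas : per-frame frequency estimates (e.g. from a Viterbi ridge tracker)
        q, r : process and measurement noise variances (illustrative values)
        """
        F = np.array([[1.0, 1.0], [0.0, 1.0]])            # state transition
        H = np.array([[1.0, 0.0]])                        # only the frequency is observed
        Q = q * np.array([[0.25, 0.5], [0.5, 1.0]])       # process noise (unit frame time)
        R = np.array([[r]])
        x = np.array([meas[0], 0.0])
        P = np.eye(2)
        out = np.zeros(len(meas))
        for k, z in enumerate(meas):
            x = F @ x                                     # predict
            P = F @ P @ F.T + Q
            S = H @ P @ H.T + R                           # update
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (np.array([z]) - H @ x)
            P = (np.eye(2) - K @ H) @ P
            out[k] = x[0]
        return out

    if __name__ == "__main__":
        rng = np.random.default_rng(3)
        t = np.arange(300)
        true_f = 50 + 5 * np.sin(2 * np.pi * t / 150)     # slowly varying precession frequency
        noisy = true_f + rng.normal(0, 2.0, t.size)
        smooth = kalman_smooth_freq(noisy, q=0.01, r=4.0)
        print(np.std(noisy - true_f), np.std(smooth - true_f))   # error std before / after
    ```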

  18. Global Linking of Cell Tracks Using the Viterbi Algorithm

    PubMed Central

    Jaldén, Joakim; Gilbert, Penney M.; Blau, Helen M.

    2016-01-01

    Automated tracking of living cells in microscopy image sequences is an important and challenging problem. With this application in mind, we propose a global track linking algorithm, which links cell outlines generated by a segmentation algorithm into tracks. The algorithm adds tracks to the image sequence one at a time, in a way which uses information from the complete image sequence in every linking decision. This is achieved by finding the tracks which give the largest possible increases to a probabilistically motivated scoring function, using the Viterbi algorithm. We also present a novel way to alter previously created tracks when new tracks are created, thus mitigating the effects of error propagation. The algorithm can handle mitosis, apoptosis, and migration in and out of the imaged area, and can also deal with false positives, missed detections, and clusters of jointly segmented cells. The algorithm performance is demonstrated on two challenging datasets acquired using bright-field microscopy, but in principle, the algorithm can be used with any cell type and any imaging technique, presuming there is a suitable segmentation algorithm. PMID:25415983
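
    The core linking step, finding the single highest-scoring track through per-frame detections, can be written as a Viterbi-style dynamic program. The sketch below covers only that step, with a simple distance-based transition cost; the published algorithm additionally scores track births, deaths, mitosis and detection errors, none of which are modeled here.

    ```python
    import numpy as np

    def best_track(detections, det_scores, move_cost=0.1):
        """Link one track through per-frame detections by dynamic programming.

        detections : list over frames; entry t is an (N_t, 2) array of centroids
        det_scores : list over frames; entry t is an (N_t,) array of detection scores
        move_cost  : penalty per unit of centroid displacement between frames
        returns    : list of chosen detection indices, one per frame
        """
        T = len(detections)
        score = det_scores[0].astype(float)
        back = [None]
        for t in range(1, T):
            # pairwise distances between previous and current detections
            d = np.linalg.norm(detections[t - 1][:, None, :] - detections[t][None, :, :], axis=2)
            trans = score[:, None] - move_cost * d
            back.append(trans.argmax(axis=0))
            score = trans.max(axis=0) + det_scores[t]
        track = [int(score.argmax())]
        for t in range(T - 1, 0, -1):
            track.append(int(back[t][track[-1]]))
        return track[::-1]

    if __name__ == "__main__":
        rng = np.random.default_rng(4)
        frames, clutter = 10, 5
        true_pos = np.cumsum(rng.normal(0, 1.0, size=(frames, 2)), axis=0) + 50
        detections, det_scores = [], []
        for t in range(frames):
            pts = np.vstack([true_pos[t], rng.uniform(0, 100, size=(clutter, 2))])
            detections.append(pts)
            det_scores.append(np.r_[2.0, rng.uniform(0, 1, clutter)])   # cell outscores clutter
        print(best_track(detections, det_scores))          # index 0 (the cell) in most frames
    ```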

  19. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3; A Recursive Maximum Likelihood Decoding

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Fossorier, Marc

    1998-01-01

    The Viterbi algorithm is indeed a very simple and efficient method of implementing maximum likelihood decoding. However, if we take advantage of the structural properties of a trellis section, other efficient trellis-based decoding algorithms can be devised. Recently, an efficient trellis-based recursive maximum likelihood decoding (RMLD) algorithm for linear block codes has been proposed. This algorithm is more efficient than the conventional Viterbi algorithm in both computation and hardware requirements. Most importantly, the implementation of this algorithm does not require the construction of the entire code trellis; only some special one-section trellises of relatively small state and branch complexities are needed for constructing path (or branch) metric tables recursively. At the end, there is only one table, which contains the most likely codeword and its metric for a given received sequence r = (r_1, r_2, ..., r_n). This algorithm basically uses the divide-and-conquer strategy. Furthermore, it allows parallel/pipeline processing of received sequences to speed up decoding.

  20. Computer algorithm for coding gain

    NASA Technical Reports Server (NTRS)

    Dodd, E. E.

    1974-01-01

    Development of a computer algorithm for coding gain for use in an automated communications link design system. Using an empirical formula which defines coding gain as used in space communications engineering, an algorithm is constructed on the basis of available performance data for nonsystematic convolutional encoding with soft-decision (eight-level) Viterbi decoding.

  21. High data rate coding for the space station telemetry links.

    NASA Technical Reports Server (NTRS)

    Lumb, D. R.; Viterbi, A. J.

    1971-01-01

    Coding systems for high data rates were examined from the standpoint of potential application in space-station telemetry links. Approaches considered included convolutional codes with sequential, Viterbi, and cascaded-Viterbi decoding. It was concluded that a high-speed (40 Mbps) sequential decoding system best satisfies the requirements for the assumed growth potential and specified constraints. Trade-off studies leading to this conclusion are reviewed, and some sequential (Fano) algorithm improvements are discussed, together with real-time simulation results.

  22. A Comparative Study of Co-Channel Interference Suppression Techniques

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon; Satorius, Ed; Paparisto, Gent; Polydoros, Andreas

    1997-01-01

    We describe three methods of combating co-channel interference (CCI): a cross-coupled phase-locked loop (CCPLL), a phase-tracking circuit (PTC), and joint Viterbi estimation based on the maximum likelihood principle. In the case of co-channel FM-modulated voice signals, the CCPLL and PTC methods typically outperform the maximum likelihood estimators when the modulation parameters are dissimilar. However, as the modulation parameters become identical, joint Viterbi estimation provides a more robust estimate of the co-channel signals and does not suffer as much from "signal switching", which especially plagues the CCPLL approach. Good performance for the PTC requires both dissimilar modulation parameters and a priori knowledge of the co-channel signal amplitudes. The CCPLL and joint Viterbi estimators, on the other hand, incorporate accurate amplitude estimates. In addition, application of the joint Viterbi algorithm to demodulating co-channel digital (BPSK) signals in a multipath environment is also discussed. It is shown in this case that if the interference is sufficiently small, a single trellis model is most effective in demodulating the co-channel signals.

  23. Node synchronization schemes for the Big Viterbi Decoder

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.; Swanson, L.; Arnold, S.

    1992-01-01

    The Big Viterbi Decoder (BVD), currently under development for the DSN, includes three separate algorithms to acquire and maintain node and frame synchronization. The first measures the number of decoded bits between two consecutive renormalization operations (renorm rate), the second detects the presence of the frame marker in the decoded bit stream (bit correlation), while the third searches for an encoded version of the frame marker in the encoded input stream (symbol correlation). A detailed account of the operation of the three methods is given, as well as a performance comparison.

  24. DiversePathsJ: diverse shortest paths for bioimage analysis.

    PubMed

    Uhlmann, Virginie; Haubold, Carsten; Hamprecht, Fred A; Unser, Michael

    2018-02-01

    We introduce a formulation for the general task of finding diverse shortest paths between two end-points. Our approach is not linked to a specific biological problem and can be applied to a large variety of images thanks to its generic implementation as a user-friendly ImageJ/Fiji plugin. It relies on the introduction of additional layers in a Viterbi path graph, which requires slight modifications to the standard Viterbi algorithm rules. This layered graph construction allows for the specification of various constraints imposing diversity between solutions. The software allows obtaining a collection of diverse shortest paths under some user-defined constraints through a convenient and user-friendly interface. It can be used alone or be integrated into larger image analysis pipelines. Availability: http://bigwww.epfl.ch/algorithms/diversepathsj. Contact: michael.unser@epfl.ch or fred.hamprecht@iwr.uni-heidelberg.de. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.

  25. Initialization and Restart in Stochastic Local Search: Computing a Most Probable Explanation in Bayesian Networks

    NASA Technical Reports Server (NTRS)

    Mengshoel, Ole J.; Wilkins, David C.; Roth, Dan

    2010-01-01

    For hard computational problems, stochastic local search has proven to be a competitive approach to finding optimal or approximately optimal problem solutions. Two key research questions for stochastic local search algorithms are: Which algorithms are effective for initialization? When should the search process be restarted? In the present work we investigate these research questions in the context of approximate computation of most probable explanations (MPEs) in Bayesian networks (BNs). We introduce a novel approach, based on the Viterbi algorithm, to explanation initialization in BNs. While the Viterbi algorithm works on sequences and trees, our approach works on BNs with arbitrary topologies. We also give a novel formalization of stochastic local search, with focus on initialization and restart, using probability theory and mixture models. Experimentally, we apply our methods to the problem of MPE computation, using a stochastic local search algorithm known as Stochastic Greedy Search. By carefully optimizing both initialization and restart, we reduce the MPE search time for application BNs by several orders of magnitude compared to using uniform at random initialization without restart. On several BNs from applications, the performance of Stochastic Greedy Search is competitive with clique tree clustering, a state-of-the-art exact algorithm used for MPE computation in BNs.

  26. Viterbi equalization for long-distance, high-speed underwater laser communication

    NASA Astrophysics Data System (ADS)

    Hu, Siqi; Mi, Le; Zhou, Tianhua; Chen, Weibiao

    2017-07-01

    In long-distance, high-speed underwater laser communication, because of the strong absorption and scattering processes, the laser pulse is stretched with the increase in communication distance and the decrease in water clarity. The maximum communication bandwidth is limited by laser-pulse stretching. Improving the communication rate increases the intersymbol interference (ISI). To reduce the effect of ISI, the Viterbi equalization (VE) algorithm is used to estimate the maximum-likelihood receiving sequence. The Monte Carlo method is used to simulate the stretching of the received laser pulse and the maximum communication rate at a wavelength of 532 nm in Jerlov IB and Jerlov II water channels with communication distances of 80, 100, and 130 m, respectively. The high-data rate communication performance for the VE and hard-decision algorithms is compared. The simulation results show that the VE algorithm can be used to reduce the ISI by selecting the minimum error path. The trade-off between the high-data rate communication performance and minor bit-error rate performance loss makes VE a promising option for applications in long-distance, high-speed underwater laser communication systems.
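
    Viterbi equalization treats the channel memory introduced by pulse stretching as trellis states and selects the symbol sequence closest, in squared error, to the received waveform. The following on-off-keying sketch assumes a known two-tap channel; the tap values and noise level are illustrative, not taken from the paper.

    ```python
    import numpy as np

    def viterbi_equalize(rx, h):
        """MLSE (Viterbi) equalization of on-off keyed bits through a known 2-tap
        ISI channel h = [h0, h1]:  rx[k] ~ h0*b[k] + h1*b[k-1] + noise."""
        metric = np.array([0.0, np.inf])                  # state = previous bit, start empty
        paths = [[], None]
        for sample in rx:
            new_metric = np.full(2, np.inf)
            new_paths = [None, None]
            for prev in (0, 1):
                if not np.isfinite(metric[prev]):
                    continue
                for b in (0, 1):
                    pred = h[0] * b + h[1] * prev
                    m = metric[prev] + (sample - pred) ** 2
                    if m < new_metric[b]:                 # keep the survivor into state b
                        new_metric[b] = m
                        new_paths[b] = paths[prev] + [b]
            metric, paths = new_metric, new_paths
        return paths[int(np.argmin(metric))]

    if __name__ == "__main__":
        rng = np.random.default_rng(5)
        bits = rng.integers(0, 2, 200)
        h = np.array([1.0, 0.6])                          # pulse stretching into the next slot
        rx = h[0] * bits + h[1] * np.r_[0, bits[:-1]] + rng.normal(0, 0.2, bits.size)
        hard = (rx > h.sum() / 2).astype(int)             # naive hard decision ignoring ISI
        mlse = np.array(viterbi_equalize(rx, h))
        print((hard != bits).sum(), (mlse != bits).sum()) # MLSE typically makes fewer errors
    ```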

  27. Error-trellis Syndrome Decoding Techniques for Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1984-01-01

    An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.

  28. Error-trellis syndrome decoding techniques for convolutional codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1985-01-01

    An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.

  29. New syndrome decoder for (n, 1) convolutional codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1983-01-01

    The letter presents a new syndrome decoding algorithm for the (n, 1) convolutional codes (CC) that is different and simpler than the previous syndrome decoding algorithm of Schalkwijk and Vinck. The new technique uses the general solution of the polynomial linear Diophantine equation for the error polynomial vector E(D). A recursive, Viterbi-like, algorithm is developed to find the minimum weight error vector E(D). An example is given for the binary nonsystematic (2, 1) CC.

  30. High rate concatenated coding systems using bandwidth efficient trellis inner codes

    NASA Technical Reports Server (NTRS)

    Deng, Robert H.; Costello, Daniel J., Jr.

    1989-01-01

    High-rate concatenated coding systems with bandwidth-efficient trellis inner codes and Reed-Solomon (RS) outer codes are investigated for application in high-speed satellite communication systems. Two concatenated coding schemes are proposed. In one the inner code is decoded with soft-decision Viterbi decoding, and the outer RS code performs error-correction-only decoding (decoding without side information). In the other, the inner code is decoded with a modified Viterbi algorithm, which produces reliability information along with the decoded output. In this algorithm, path metrics are used to estimate the entire information sequence, whereas branch metrics are used to provide reliability information on the decoded sequence. This information is used to erase unreliable bits in the decoded output. An errors-and-erasures RS decoder is then used for the outer code. The two schemes have been proposed for high-speed data communication on NASA satellite channels. The rates considered are at least double those used in current NASA systems, and the results indicate that high system reliability can still be achieved.

  31. New Syndrome Decoding Techniques for the (n, K) Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1983-01-01

    This paper presents a new syndrome decoding algorithm for the (n,k) convolutional codes (CC) which differs completely from an earlier syndrome decoding algorithm of Schalkwijk and Vinck. The new algorithm is based on the general solution of the syndrome equation, a linear Diophantine equation for the error polynomial vector E(D). The set of Diophantine solutions is a coset of the CC. In this error coset a recursive, Viterbi-like algorithm is developed to find the minimum weight error vector Ê(D). An example, illustrating the new decoding algorithm, is given for the binary nonsystematic (3,1) CC.

  32. Simplified Syndrome Decoding of (n, 1) Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1983-01-01

    A new syndrome decoding algorithm for the (n, 1) convolutional codes (CC) that is different and simpler than the previous syndrome decoding algorithm of Schalkwijk and Vinck is presented. The new algorithm uses the general solution of the polynomial linear Diophantine equation for the error polynomial vector E(D). This set of Diophantine solutions is a coset of the CC space. A recursive or Viterbi-like algorithm is developed to find the minimum weight error vector Ê(D) in this error coset. An example illustrating the new decoding algorithm is given for the binary nonsystematic (2,1) CC.

  33. Adaptive decoding of convolutional codes

    NASA Astrophysics Data System (ADS)

    Hueske, K.; Geldmacher, J.; Götze, J.

    2007-06-01

    Convolutional codes, which are frequently used as error correction codes in digital transmission systems, are generally decoded using the Viterbi Decoder. On the one hand the Viterbi Decoder is an optimum maximum likelihood decoder, i.e. the most probable transmitted code sequence is obtained. On the other hand the mathematical complexity of the algorithm only depends on the used code, not on the number of transmission errors. To reduce the complexity of the decoding process for good transmission conditions, an alternative syndrome based decoder is presented. The reduction of complexity is realized by two different approaches, the syndrome zero sequence deactivation and the path metric equalization. The two approaches enable an easy adaptation of the decoding complexity for different transmission conditions, which results in a trade-off between decoding complexity and error correction performance.

  34. New syndrome decoding techniques for the (n, k) convolutional codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1984-01-01

    This paper presents a new syndrome decoding algorithm for the (n, k) convolutional codes (CC) which differs completely from an earlier syndrome decoding algorithm of Schalkwijk and Vinck. The new algorithm is based on the general solution of the syndrome equation, a linear Diophantine equation for the error polynomial vector E(D). The set of Diophantine solutions is a coset of the CC. In this error coset a recursive, Viterbi-like algorithm is developed to find the minimum weight error vector Ê(D). An example, illustrating the new decoding algorithm, is given for the binary nonsystematic (3, 1) CC. Previously announced in STAR as N83-34964.

  35. Tracking Subpixel Targets with Critically Sampled Optical Sensors

    DTIC Science & Technology

    2012-09-01

    The Viterbi algorithm is a dynamic programming method for calculating the MAP in O(tn^2) time. The most common use of this algorithm is in the... method to detect subpixel point targets using the sensor's PSF as an identifying characteristic. Using matched filtering theory, a measure is defined to... ocean surface beneath the cloud will have a different distribution. While the basic methods will adapt to changes in cloud cover over time, it is also...

  36. Optical character recognition of handwritten Arabic using hidden Markov models

    NASA Astrophysics Data System (ADS)

    Aulama, Mohannad M.; Natsheh, Asem M.; Abandah, Gheith A.; Olama, Mohammed M.

    2011-04-01

    The problem of optical character recognition (OCR) of handwritten Arabic has not received a satisfactory solution yet. In this paper, an Arabic OCR algorithm is developed based on Hidden Markov Models (HMMs) combined with the Viterbi algorithm, which results in an improved and more robust recognition of characters at the sub-word level. Integrating the HMMs represents another step in the overall OCR trends currently being researched in the literature. The proposed approach exploits the structure of characters in the Arabic language in addition to their extracted features to achieve improved recognition rates. Useful statistical information of the Arabic language is initially extracted and then used to estimate the probabilistic parameters of the mathematical HMM. A new custom implementation of the HMM is developed in this study, where the transition matrix is built based on the collected large corpus, and the emission matrix is built based on the results obtained via the extracted character features. The recognition process is triggered using the Viterbi algorithm, which finds the most probable sequence of sub-words. The model was implemented to recognize the sub-word unit of Arabic text, so that the recognition rate is no longer tied to the worst recognition rate of any individual character but instead reflects the overall structure of the Arabic language. Numerical results show that there is a potentially large recognition improvement by using the proposed algorithms.

  37. Optical character recognition of handwritten Arabic using hidden Markov models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aulama, Mohannad M.; Natsheh, Asem M.; Abandah, Gheith A.

    2011-01-01

    The problem of optical character recognition (OCR) of handwritten Arabic has not received a satisfactory solution yet. In this paper, an Arabic OCR algorithm is developed based on Hidden Markov Models (HMMs) combined with the Viterbi algorithm, which results in an improved and more robust recognition of characters at the sub-word level. Integrating the HMMs represents another step in the overall OCR trends currently being researched in the literature. The proposed approach exploits the structure of characters in the Arabic language in addition to their extracted features to achieve improved recognition rates. Useful statistical information of the Arabic language is initially extracted and then used to estimate the probabilistic parameters of the mathematical HMM. A new custom implementation of the HMM is developed in this study, where the transition matrix is built based on the collected large corpus, and the emission matrix is built based on the results obtained via the extracted character features. The recognition process is triggered using the Viterbi algorithm, which finds the most probable sequence of sub-words. The model was implemented to recognize the sub-word unit of Arabic text, so that the recognition rate is no longer tied to the worst recognition rate of any individual character but instead reflects the overall structure of the Arabic language. Numerical results show that there is a potentially large recognition improvement by using the proposed algorithms.

  38. A MAP-based image interpolation method via Viterbi decoding of Markov chains of interpolation functions.

    PubMed

    Vedadi, Farhang; Shirani, Shahram

    2014-01-01

    A new method of image resolution up-conversion (image interpolation) based on maximum a posteriori sequence estimation is proposed. Instead of making a hard decision about the value of each missing pixel, we estimate the missing pixels in groups. At each missing pixel of the high resolution (HR) image, we consider an ensemble of candidate interpolation methods (interpolation functions). The interpolation functions are interpreted as states of a Markov model. In other words, the proposed method undergoes state transitions from one missing pixel position to the next. Accordingly, the interpolation problem is translated to the problem of estimating the optimal sequence of interpolation functions corresponding to the sequence of missing HR pixel positions. We derive a parameter-free probabilistic model for this to-be-estimated sequence of interpolation functions. Then, we solve the estimation problem using a trellis representation and the Viterbi algorithm. Using directional interpolation functions and sequence estimation techniques, we classify the new algorithm as an adaptive directional interpolation using soft-decision estimation techniques. Experimental results show that the proposed algorithm yields images with higher or comparable peak signal-to-noise ratios compared with some benchmark interpolation methods in the literature while being efficient in terms of implementation and complexity considerations.

  39. Error control techniques for satellite and space communications

    NASA Technical Reports Server (NTRS)

    Costello, D. J., Jr.

    1986-01-01

    High rate concatenated coding systems with trellis inner codes and Reed-Solomon (RS) outer codes for application in satellite communication systems are considered. Two types of inner codes are studied: high rate punctured binary convolutional codes which result in overall effective information rates between 1/2 and 1 bit per channel use; and bandwidth efficient signal space trellis codes which can achieve overall effective information rates greater than 1 bit per channel use. Channel capacity calculations with and without side information were performed for the concatenated coding system. Concatenated coding schemes are investigated. In Scheme 1, the inner code is decoded with the Viterbi algorithm and the outer RS code performs error-correction only (decoding without side information). In Scheme 2, the inner code is decoded with a modified Viterbi algorithm which produces reliability information along with the decoded output. In this algorithm, path metrics are used to estimate the entire information sequence, while branch metrics are used to provide the reliability information on the decoded sequence. This information is used to erase unreliable bits in the decoded output. An errors-and-erasures RS decoder is then used for the outer code. These two schemes are proposed for use on NASA satellite channels. Results indicate that high system reliability can be achieved with little or no bandwidth expansion.

  40. CUDAMPF: a multi-tiered parallel framework for accelerating protein sequence search in HMMER on CUDA-enabled GPU.

    PubMed

    Jiang, Hanyu; Ganesan, Narayan

    2016-02-27

    The HMMER software suite is widely used for analysis of homologous protein and nucleotide sequences with high sensitivity. The latest version of hmmsearch in HMMER 3.x utilizes a heuristic pipeline which consists of the MSV/SSV (Multiple/Single ungapped Segment Viterbi) stage, the P7Viterbi stage and the Forward scoring stage to accelerate homology detection. Since the latest version is highly optimized for performance on modern multi-core CPUs with SSE capabilities, only a few acceleration attempts report speedup. However, the most compute intensive tasks within the pipeline (viz., the MSV/SSV and P7Viterbi stages) still stand to benefit from the computational capabilities of massively parallel processors. A Multi-Tiered Parallel Framework (CUDAMPF) implemented on CUDA-enabled GPUs, presented here, offers a finer-grained parallelism for the MSV/SSV and Viterbi algorithms. We couple the SIMT (Single Instruction Multiple Threads) mechanism with SIMD (Single Instructions Multiple Data) video instructions and warp-synchronism to achieve high-throughput processing and eliminate thread idling. We also propose a hardware-aware optimal allocation scheme of scarce resources like on-chip memory and caches in order to boost performance and scalability of CUDAMPF. In addition, runtime compilation via NVRTC, available with CUDA 7.0, is incorporated into the presented framework; it not only helps unroll the innermost loop to yield up to a 2- to 3-fold speedup over static compilation but also enables dynamic loading and switching of kernels depending on the query model size, in order to achieve optimal performance. CUDAMPF is designed as a hardware-aware parallel framework for accelerating computational hotspots within the hmmsearch pipeline as well as other sequence alignment applications. It achieves significant speedup by exploiting hierarchical parallelism on a single GPU and takes full advantage of limited resources based on their own performance features. In addition to exceeding the performance of other acceleration attempts, comprehensive evaluations against high-end CPUs (Intel i5, i7 and Xeon) show that CUDAMPF yields up to 440 GCUPS for SSV, 277 GCUPS for MSV and 14.3 GCUPS for P7Viterbi, all with 100% accuracy, which translates to a maximum speedup of 37.5, 23.1 and 11.6-fold for MSV, SSV and P7Viterbi respectively. The source code is available at https://github.com/Super-Hippo/CUDAMPF.

  41. System Framework for a Multi-Band, Multi-Mode Software Defined Radio

    DTIC Science & Technology

    2014-06-01

    detection, while the VITA Radio Transport (VRT) protocol over Gigabit Ethernet (GIGE) is implemented for the data interface. In addition to the SoC... [The remainder of the indexed snippet is block-diagram text listing SoC components, including the C2 GPP core software, ARM0/ARM1 processors, memory map, generic interrupt controller, and the Viterbi algorithm & VRT interface.]

  42. Enhanced decoding for the Galileo S-band mission

    NASA Technical Reports Server (NTRS)

    Dolinar, S.; Belongie, M.

    1993-01-01

    A coding system under consideration for the Galileo S-band low-gain antenna mission is a concatenated system using a variable redundancy Reed-Solomon outer code and a (14,1/4) convolutional inner code. The 8-bit Reed-Solomon symbols are interleaved to depth 8, and the eight 255-symbol codewords in each interleaved block have redundancies 64, 20, 20, 20, 64, 20, 20, and 20, respectively (or equivalently, the codewords have 191, 235, 235, 235, 191, 235, 235, and 235 8-bit information symbols, respectively). This concatenated code is to be decoded by an enhanced decoder that utilizes a maximum likelihood (Viterbi) convolutional decoder; a Reed-Solomon decoder capable of processing erasures; an algorithm for declaring erasures in undecoded codewords based on known erroneous symbols in neighboring decodable words; a second Viterbi decoding operation (redecoding) constrained to follow only paths consistent with the known symbols from previously decodable Reed-Solomon codewords; and a second Reed-Solomon decoding operation using the output from the Viterbi redecoder and additional erasure declarations to the extent possible. It is estimated that this code and decoder can achieve a decoded bit error rate of 1 x 10(exp -7) at a concatenated code signal-to-noise ratio of 0.76 dB. By comparison, a threshold of 1.17 dB is required for a baseline coding system consisting of the same (14,1/4) convolutional code, a (255,223) Reed-Solomon code with constant redundancy 32 also interleaved to depth 8, a one-pass Viterbi decoder, and a Reed-Solomon decoder incapable of declaring or utilizing erasures. The relative gain of the enhanced system is thus 0.41 dB. It is predicted from analysis based on an assumption of infinite interleaving that the coding gain could be further improved by approximately 0.2 dB if four stages of Viterbi decoding and four levels of Reed-Solomon redundancy are permitted. Confirmation of this effect and specification of the optimum four-level redundancy profile for depth-8 interleaving are currently being carried out.

  43. Viterbi decoder node synchronization losses in the Reed-Solomon/Viterbi concatenated channel

    NASA Technical Reports Server (NTRS)

    Deutsch, L. J.; Miller, R. L.

    1982-01-01

    The Viterbi decoders currently used by the Deep Space Network (DSN) employ an algorithm for maintaining node synchronization that significantly degrades at bit signal-to-noise ratios (SNRs) of below 2.0 dB. In a recent report by the authors, it was shown that the telemetry receiving system, which uses a convolutionally encoded downlink, will suffer losses of 0.85 dB and 1.25 dB respectively at Voyager 2 Uranus and Neptune encounters. This report extends the results of that study to a concatenated (255,223) Reed-Solomon/(7, 1/2) convolutionally coded channel, by developing a new radio loss model for the concatenated channel. It is shown here that losses due to improper node synchronization of 0.57 dB at Uranus and 1.0 dB at Neptune can be expected if concatenated coding is used along with an array of one 64-meter and three 34-meter antennas.

  44. Soft-output decoding algorithms in iterative decoding of turbo codes

    NASA Technical Reports Server (NTRS)

    Benedetto, S.; Montorsi, G.; Divsalar, D.; Pollara, F.

    1996-01-01

    In this article, we present two versions of a simplified maximum a posteriori decoding algorithm. The algorithms work in a sliding window form, like the Viterbi algorithm, and can thus be used to decode continuously transmitted sequences obtained by parallel concatenated codes, without requiring code trellis termination. A heuristic explanation is also given of how to embed the maximum a posteriori algorithms into the iterative decoding of parallel concatenated codes (turbo codes). The performances of the two algorithms are compared on the basis of a powerful rate 1/3 parallel concatenated code. Basic circuits to implement the simplified a posteriori decoding algorithm using lookup tables, and two further approximations (linear and threshold), with a very small penalty, to eliminate the need for lookup tables are proposed.
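
    The soft outputs such MAP-style algorithms produce can be illustrated, in a much simpler setting than turbo decoding, by the forward-backward recursion on a discrete HMM computed in the log domain. The sketch below returns per-symbol a posteriori state probabilities; the model is a toy example, and the sliding-window and table-lookup simplifications described in the article are not reproduced.

    ```python
    import numpy as np
    from scipy.special import logsumexp

    def posterior_states(obs, log_A, log_B, log_pi):
        """Per-symbol a posteriori state probabilities of a discrete HMM, computed
        with the forward-backward (BCJR-style) recursion entirely in the log domain."""
        T, S = len(obs), len(log_pi)
        alpha = np.zeros((T, S))
        beta = np.zeros((T, S))
        alpha[0] = log_pi + log_B[:, obs[0]]
        for t in range(1, T):                             # forward pass
            alpha[t] = logsumexp(alpha[t - 1][:, None] + log_A, axis=0) + log_B[:, obs[t]]
        for t in range(T - 2, -1, -1):                    # backward pass
            beta[t] = logsumexp(log_A + log_B[:, obs[t + 1]] + beta[t + 1], axis=1)
        log_post = alpha + beta
        log_post -= logsumexp(log_post, axis=1, keepdims=True)
        return np.exp(log_post)

    if __name__ == "__main__":
        A = np.array([[0.95, 0.05], [0.10, 0.90]])
        B = np.array([[0.8, 0.2], [0.3, 0.7]])
        pi = np.array([0.5, 0.5])
        obs = [0, 0, 1, 1, 1, 0, 1]
        print(posterior_states(obs, np.log(A), np.log(B), np.log(pi)).round(2))
    ```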

  45. A comparison of frame synchronization methods [Deep Space Network]

    NASA Technical Reports Server (NTRS)

    Swanson, L.

    1982-01-01

    Different methods are considered for frame synchronization of a concatenated block code/Viterbi link. Synchronization after Viterbi decoding and synchronization before Viterbi decoding based on hard-quantized channel symbols are compared. For each scheme, the probability, under certain conditions, of true detection of sync within four 10,000-bit frames is tabulated.

  46. A proposed technique for the Venus balloon telemetry and Doppler frequency recovery

    NASA Technical Reports Server (NTRS)

    Jurgens, R. F.; Divsalar, D.

    1985-01-01

    A technique is proposed to accurately estimate the Doppler frequency and demodulate the digitally encoded telemetry signal that contains the measurements from balloon instruments. Since the data are prerecorded, one can take advantage of noncausal estimators that are both simpler and more computationally efficient than the usual closed-loop or real-time estimators for signal detection and carrier tracking. Algorithms for carrier frequency estimation, subcarrier demodulation, and bit and frame synchronization are described. A Viterbi decoder algorithm using a branch indexing technique has been devised to decode the constraint length 6, rate 1/2 convolutional code that is being used by the balloon transmitter. These algorithms are memory efficient and can be implemented on microcomputer systems.

  7. A Very Efficient Transfer Function Bounding Technique on Bit Error Rate for Viterbi Decoded, Rate 1/N Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Lee, P. J.

    1984-01-01

    For rate 1/N convolutional codes, a recursive algorithm for finding the transfer function bound on bit error rate (BER) at the output of a Viterbi decoder is described. This technique is very fast and requires very little storage since all the unnecessary operations are eliminated. Using this technique, we find and plot bounds on the BER performance of known codes of rate 1/2 with K ≤ 18 and rate 1/3 with K ≤ 14. When more than one reported code with the same parameters is known, we select the code that minimizes the required signal-to-noise ratio for a desired bit error rate of 10^-6. This criterion for determining the goodness of a code had previously been found to be more useful than the maximum free distance criterion and was used in the code search procedures for very short constraint length codes. This very efficient technique can also be used for searches of longer constraint length codes.

  8. Maximum-likelihood soft-decision decoding of block codes using the A* algorithm

    NASA Technical Reports Server (NTRS)

    Ekroot, L.; Dolinar, S.

    1994-01-01

    The A* algorithm finds the path in a finite depth binary tree that optimizes a function. Here, it is applied to maximum-likelihood soft-decision decoding of block codes where the function optimized over the codewords is the likelihood function of the received sequence given each codeword. The algorithm considers codewords one bit at a time, making use of the most reliable received symbols first and pursuing only the partially expanded codewords that might be maximally likely. A version of the A* algorithm for maximum-likelihood decoding of block codes has been implemented for block codes up to 64 bits in length. The efficiency of this algorithm makes simulations of codes up to length 64 feasible. This article details the implementation currently in use, compares the decoding complexity with that of exhaustive search and Viterbi decoding algorithms, and presents performance curves obtained with this implementation of the A* algorithm for several codes.
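
    A minimal sketch of the priority-first search idea follows, using a small (7,4) Hamming code and a zero heuristic, so the search degenerates to uniform-cost search over information-bit prefixes. The published decoder additionally reorders positions by reliability and uses a tighter admissible heuristic, neither of which is shown; the generator matrix and LLR sign convention are assumptions made for the example.

```python
import heapq
import numpy as np

# Systematic generator matrix of the (7,4) Hamming code, used here as a small
# stand-in for the longer block codes discussed in the article.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 0, 1, 1],
              [0, 0, 1, 0, 1, 1, 1],
              [0, 0, 0, 1, 1, 0, 1]], dtype=int)

def ml_decode(llr, G):
    """Best-first (A* with a zero heuristic) maximum-likelihood decoding.

    llr[i] > 0 means position i favours bit 0. The cost of a codeword is the total
    reliability |llr| of the positions where it disagrees with the hard decisions,
    so the cheapest codeword is the maximum-likelihood one.
    """
    k, n = G.shape
    llr = np.asarray(llr, dtype=float)
    hard = (llr < 0).astype(int)
    rel = np.abs(llr)
    heap = [(0.0, False, ())]                 # (cost so far, completed?, info bits so far)
    while heap:
        cost, complete, bits = heapq.heappop(heap)
        if complete:
            return np.array(bits) @ G % 2     # first completed entry popped is ML
        if len(bits) == k:                    # all info bits chosen: add the parity cost
            cw = np.array(bits) @ G % 2
            parity_cost = float(sum(rel[i] for i in range(k, n) if cw[i] != hard[i]))
            heapq.heappush(heap, (cost + parity_cost, True, bits))
            continue
        i = len(bits)                         # extend by the next information bit
        for b in (0, 1):
            step = rel[i] if b != hard[i] else 0.0
            heapq.heappush(heap, (cost + step, False, bits + (b,)))
    return None
```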

  9. Optimizing the Performance of Radionuclide Identification Software in the Hunt for Nuclear Security Threats

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fotion, Katherine A.

    2016-08-18

    The Radionuclide Analysis Kit (RNAK), my team’s most recent nuclide identification software, is entering the testing phase. A question arises: will removing rare nuclides from the software’s library improve its overall performance? An affirmative response indicates fundamental errors in the software’s framework, while a negative response confirms the effectiveness of the software’s key machine learning algorithms. After thorough testing, I found that the performance of RNAK cannot be improved with the library choice effect, thus verifying the effectiveness of RNAK’s algorithms—multiple linear regression, Bayesian network using the Viterbi algorithm, and branch and bound search.

  10. Under-reported data analysis with INAR-hidden Markov chains.

    PubMed

    Fernández-Fontelo, Amanda; Cabaña, Alejandra; Puig, Pedro; Moriña, David

    2016-11-20

    In this work, we deal with correlated under-reported data through INAR(1)-hidden Markov chain models. These models are very flexible and can be identified through their autocorrelation function, which has a very simple form. A naïve method of parameter estimation is proposed, jointly with the maximum likelihood method based on a revised version of the forward algorithm. The most probable unobserved time series is reconstructed by means of the Viterbi algorithm. Several examples of application in the field of public health are discussed, illustrating the utility of the models. Copyright © 2016 John Wiley & Sons, Ltd.

  11. Soft-Decision Decoding of Binary Linear Block Codes Based on an Iterative Search Algorithm

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Kasami, Tadao; Moorthy, H. T.

    1997-01-01

    This correspondence presents a suboptimum soft-decision decoding scheme for binary linear block codes based on an iterative search algorithm. The scheme uses an algebraic decoder to iteratively generate a sequence of candidate codewords one at a time using a set of test error patterns that are constructed based on the reliability information of the received symbols. When a candidate codeword is generated, it is tested based on an optimality condition. If it satisfies the optimality condition, then it is the most likely (ML) codeword and the decoding stops. If it fails the optimality test, a search for the ML codeword is conducted in a region which contains the ML codeword. The search region is determined by the current candidate codeword and the reliability of the received symbols. The search is conducted through a purged trellis diagram for the given code using the Viterbi algorithm. If the search fails to find the ML codeword, a new candidate is generated using a new test error pattern, and the optimality test and search are renewed. The process of testing and search continues until either the ML codeword is found or all the test error patterns are exhausted and the decoding process is terminated. Numerical results show that the proposed decoding scheme achieves either practically optimal performance or a performance only a fraction of a decibel away from the optimal maximum-likelihood decoding with a significant reduction in decoding complexity compared with the Viterbi decoding based on the full trellis diagram of the codes.

  12. A Time Diversity Coding Experiment for a UHF/VHF Satellite Channel with Scintillation: Equipment Description

    DTIC Science & Technology

    1977-09-01

    Fragmented excerpt from a scanned report: the convolutional encoder moves from state to state as successive input bits are brought into it, and its progress can be followed on an equivalent lattice (trellis) diagram (Fig. 12, "Convolutional Encoder, State Diagram and Lattice"). The Viterbi algorithm can be simply described with the aid of this lattice, whose nodes represent the encoder states.

  13. Investigation of the Use of Erasures in a Concatenated Coding Scheme

    NASA Technical Reports Server (NTRS)

    Kwatra, S. C.; Marriott, Philip J.

    1997-01-01

    A new method for declaring erasures in a concatenated coding scheme is investigated. This method is used with the rate 1/2, K = 7 convolutional code and the (255, 223) Reed-Solomon code. Errors-and-erasures Reed-Solomon decoding is used. The proposed erasure method uses a soft-output Viterbi algorithm and information provided by decoded Reed-Solomon codewords in a deinterleaving frame. The results show that a gain of 0.3 dB is possible using a minimal number of decoding trials.

  14. Evaluation of three coding schemes designed for improved data communication

    NASA Technical Reports Server (NTRS)

    Snelsire, R. W.

    1974-01-01

    Three coding schemes designed for improved data communication are evaluated. Four block codes are evaluated relative to a quality function, which depends on both the amount of data rejected and the error rate. The Viterbi maximum-likelihood decoding algorithm is reviewed as a decoding procedure. The evaluation is obtained by simulating the system on a digital computer. Short constraint-length rate 1/2 quick-look codes are studied, and their performance is compared to that of general nonsystematic codes.

  15. Viterbi decoding for satellite and space communication.

    NASA Technical Reports Server (NTRS)

    Heller, J. A.; Jacobs, I. M.

    1971-01-01

    Convolutional coding and Viterbi decoding, along with binary phase-shift-keyed modulation, are presented as an efficient system for reliable communication on power-limited satellite and space channels. Performance results, obtained theoretically and through computer simulation, are given for optimum short constraint length codes for a range of code constraint lengths and code rates. System efficiency is compared for hard receiver quantization and 4- and 8-level soft quantization. The effects on performance of varying certain parameters relevant to decoder complexity and cost are examined. Quantitative performance degradation due to imperfect carrier phase coherence is evaluated and compared to that of an uncoded system. As an example of decoder performance versus complexity, a recently implemented 2-Mbit/sec, constraint-length 7 Viterbi decoder is discussed. Finally, a comparison is made between Viterbi and sequential decoding in terms of suitability to various system requirements.

  16. PySeqLab: an open source Python package for sequence labeling and segmentation.

    PubMed

    Allam, Ahmed; Krauthammer, Michael

    2017-11-01

    Text and genomic data are composed of sequential tokens, such as words and nucleotides, that give rise to higher-order syntactic constructs. In this work, we aim at providing a comprehensive Python library implementing conditional random fields (CRFs), a class of probabilistic graphical models, for robust prediction of these constructs from sequential data. Python Sequence Labeling (PySeqLab) is an open source package for performing supervised learning in structured prediction tasks. It implements CRF models, that is, discriminative models ranging (i) from first-order to higher-order linear-chain CRFs, and (ii) from first-order to higher-order semi-Markov CRFs (semi-CRFs). Moreover, it provides multiple learning algorithms for estimating model parameters, such as (i) stochastic gradient descent (SGD) and its multiple variations, (ii) structured perceptron with multiple averaging schemes supporting exact and inexact search using a 'violation-fixing' framework, (iii) the search-based probabilistic online learning algorithm (SAPO), and (iv) an interface for Broyden-Fletcher-Goldfarb-Shanno (BFGS) and limited-memory BFGS algorithms. Viterbi and Viterbi A* are used for inference and decoding of sequences. Using PySeqLab, we built models (classifiers) and evaluated their performance in three different domains: (i) biomedical natural language processing (NLP), (ii) predictive DNA sequence analysis, and (iii) human activity recognition (HAR). State-of-the-art performance comparable to machine-learning-based systems was achieved in the three domains without feature engineering or the use of knowledge sources. PySeqLab is available through https://bitbucket.org/A_2/pyseqlab with tutorials and documentation. ahmed.allam@yale.edu or michael.krauthammer@yale.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved.

  17. Good Trellises for IC Implementation of Viterbi Decoders for Linear Block Codes

    NASA Technical Reports Server (NTRS)

    Moorthy, Hari T.; Lin, Shu; Uehara, Gregory T.

    1997-01-01

    This paper investigates trellis structures of linear block codes for the integrated circuit (IC) implementation of Viterbi decoders capable of achieving high decoding speed while satisfying a constraint on the structural complexity of the trellis in terms of the maximum number of states at any particular depth. Only uniform sectionalizations of the code trellis diagram are considered. An upper bound on the number of parallel and structurally identical (or isomorphic) subtrellises in a proper trellis for a code, without exceeding the maximum state complexity of the minimal trellis of the code, is first derived. Parallel structures of trellises with various section lengths for binary BCH and Reed-Muller (RM) codes of lengths 32 and 64 are analyzed. Next, the complexity of IC implementation of a Viterbi decoder based on an L-section trellis diagram for a code is investigated. A structural property of a Viterbi decoder called add-compare-select (ACS) connectivity, which is related to state connectivity, is introduced. This parameter affects the complexity of wire routing (interconnections within the IC). The effect of five parameters, namely (1) effective computational complexity, (2) complexity of the ACS circuit, (3) traceback complexity, (4) ACS connectivity, and (5) branch complexity of a trellis diagram, on the very large scale integration (VLSI) complexity of a Viterbi decoder is investigated. It is shown that an IC implementation of a Viterbi decoder based on a nonminimal trellis requires less area and is capable of operation at higher speed than one based on the minimal trellis when the commonly used ACS-array architecture is considered.
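
    The add-compare-select (ACS) operation that dominates these complexity measures is, per state and per trellis section, a single recurrence. The sketch below shows one generic radix-2 ACS step with the survivor decision recorded for later traceback; the state-to-predecessor wiring is a placeholder, not the sectionalized block-code trellises analyzed in the paper.

```python
def acs_section(prev_metrics, branch_metrics, predecessors):
    """One add-compare-select step over a trellis section.

    prev_metrics: path metric per state at the previous section boundary.
    branch_metrics[(p, s)]: metric of the branch from predecessor p into state s.
    predecessors[s]: states with a branch into s (two of them in a radix-2 trellis).
    Returns the updated metrics and, for each state, the surviving predecessor.
    """
    new_metrics, decisions = {}, {}
    for s, preds in predecessors.items():
        candidates = [(prev_metrics[p] + branch_metrics[(p, s)], p) for p in preds]  # add
        new_metrics[s], decisions[s] = min(candidates)                               # compare, select
    return new_metrics, decisions
```

    The decisions dictionary is, loosely, what an ACS-array implementation writes into traceback memory, and how scattered the predecessors[s] sets are across the array is one way to picture the wire-routing burden that the paper's ACS-connectivity parameter captures.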

  18. Good trellises for IC implementation of viterbi decoders for linear block codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Moorthy, Hari T.; Uehara, Gregory T.

    1996-01-01

    This paper investigates trellis structures of linear block codes for the IC (integrated circuit) implementation of Viterbi decoders capable of achieving high decoding speed while satisfying a constraint on the structural complexity of the trellis in terms of the maximum number of states at any particular depth. Only uniform sectionalizations of the code trellis diagram are considered. An upper bound on the number of parallel and structurally identical (or isomorphic) subtrellises in a proper trellis for a code without exceeding the maximum state complexity of the minimal trellis of the code is first derived. Parallel structures of trellises with various section lengths for binary BCH and Reed-Muller (RM) codes of lengths 32 and 64 are analyzed. Next, the complexity of IC implementation of a Viterbi decoder based on an L-section trellis diagram for a code is investigated. A structural property of a Viterbi decoder called ACS-connectivity which is related to state connectivity is introduced. This parameter affects the complexity of wire-routing (interconnections within the IC). The effect of five parameters namely: (1) effective computational complexity; (2) complexity of the ACS-circuit; (3) traceback complexity; (4) ACS-connectivity; and (5) branch complexity of a trellis diagram on the VLSI complexity of a Viterbi decoder is investigated. It is shown that an IC implementation of a Viterbi decoder based on a non-minimal trellis requires less area and is capable of operation at higher speed than one based on the minimal trellis when the commonly used ACS-array architecture is considered.

  19. Hidden Markov model analysis of force/torque information in telemanipulation

    NASA Technical Reports Server (NTRS)

    Hannaford, Blake; Lee, Paul

    1991-01-01

    A model for the prediction and analysis of sensor information recorded during robotic performance of telemanipulation tasks is presented. The model uses the hidden Markov model to describe the task structure, the operator's or intelligent controller's goal structure, and the sensor signals. A methodology for constructing the model parameters based on engineering knowledge of the task is described. It is concluded that the model and its optimal state estimation algorithm, the Viterbi algorithm, are very successful at the task of segmenting the data record into phases corresponding to subgoals of the task. The model provides a rich modeling structure within a statistical framework, which enables it to represent complex systems and be robust to real-world sensory signals.

  20. DNA Base-Calling from a Nanopore Using a Viterbi Algorithm

    PubMed Central

    Timp, Winston; Comer, Jeffrey; Aksimentiev, Aleksei

    2012-01-01

    Nanopore-based DNA sequencing is the most promising third-generation sequencing method. It has superior read length, speed, and sample requirements compared with state-of-the-art second-generation methods. However, base-calling still presents substantial difficulty because the resolution of the technique is limited compared with the measured signal/noise ratio. Here we demonstrate a method to decode 3-bp-resolution nanopore electrical measurements into a DNA sequence using a Hidden Markov model. This method shows tremendous potential for accuracy (∼98%), even with a poor signal/noise ratio. PMID:22677395
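
    The base-calling step rests on the standard Viterbi recursion for hidden Markov models. A generic log-domain sketch is given below; in the nanopore setting the hidden states would be k-mers and the emissions current-level likelihoods, but the states, transition structure, and emission models here are placeholders rather than the authors' trained model.

```python
def viterbi(observations, states, log_start, log_trans, log_emit):
    """Most probable hidden-state path of an HMM, computed in the log domain.

    log_start[s] and log_trans[p][s] are log-probabilities; log_emit[s](obs) returns
    the log-likelihood of an observation under state s. All of these are placeholders
    to be supplied by the caller.
    """
    V = [{s: log_start[s] + log_emit[s](observations[0]) for s in states}]
    back = []
    for obs in observations[1:]:
        col, ptr = {}, {}
        for s in states:
            score, best_prev = max((V[-1][p] + log_trans[p][s], p) for p in states)
            col[s] = score + log_emit[s](obs)
            ptr[s] = best_prev
        V.append(col)
        back.append(ptr)
    last = max(V[-1], key=V[-1].get)          # best final state
    path = [last]
    for ptr in reversed(back):                # trace the back-pointers
        path.append(ptr[path[-1]])
    return list(reversed(path))
```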

  1. Statistical Inference in Hidden Markov Models Using k-Segment Constraints

    PubMed Central

    Titsias, Michalis K.; Holmes, Christopher C.; Yau, Christopher

    2016-01-01

    Hidden Markov models (HMMs) are one of the most widely used statistical methods for analyzing sequence data. However, the reporting of output from HMMs has largely been restricted to the presentation of the most-probable (MAP) hidden state sequence, found via the Viterbi algorithm, or the sequence of most probable marginals using the forward–backward algorithm. In this article, we expand the amount of information we could obtain from the posterior distribution of an HMM by introducing linear-time dynamic programming recursions that, conditional on a user-specified constraint in the number of segments, allow us to (i) find MAP sequences, (ii) compute posterior probabilities, and (iii) simulate sample paths. We collectively call these recursions k-segment algorithms and illustrate their utility using simulated and real examples. We also highlight the prospective and retrospective use of k-segment constraints for fitting HMMs or exploring existing model fits. Supplementary materials for this article are available online. PMID:27226674

  2. Iterative channel decoding of FEC-based multiple-description codes.

    PubMed

    Chang, Seok-Ho; Cosman, Pamela C; Milstein, Laurence B

    2012-03-01

    Multiple description coding has been receiving attention as a robust transmission framework for multimedia services. This paper studies the iterative decoding of FEC-based multiple description codes. The proposed decoding algorithms take advantage of the error detection capability of Reed-Solomon (RS) erasure codes. The information of correctly decoded RS codewords is exploited to enhance the error correction capability of the Viterbi algorithm at the next iteration of decoding. In the proposed algorithm, an intradescription interleaver is synergistically combined with the iterative decoder. The interleaver does not affect the performance of noniterative decoding but greatly enhances the performance when the system is iteratively decoded. We also address the optimal allocation of RS parity symbols for unequal error protection. For the optimal allocation in iterative decoding, we derive mathematical equations from which the probability distributions of description erasures can be generated in a simple way. The performance of the algorithm is evaluated over an orthogonal frequency-division multiplexing system. The results show that the performance of the multiple description codes is significantly enhanced.

  3. Space communication system for compressed data with a concatenated Reed-Solomon-Viterbi coding channel

    NASA Technical Reports Server (NTRS)

    Rice, R. F.; Hilbert, E. E. (Inventor)

    1976-01-01

    A space communication system incorporating a concatenated Reed Solomon Viterbi coding channel is discussed for transmitting compressed and uncompressed data from a spacecraft to a data processing center on Earth. Imaging (and other) data are first compressed into source blocks which are then coded by a Reed Solomon coder and interleaver, followed by a convolutional encoder. The received data is first decoded by a Viterbi decoder, followed by a Reed Solomon decoder and deinterleaver. The output of the latter is then decompressed, based on the compression criteria used in compressing the data in the spacecraft. The decompressed data is processed to reconstruct an approximation of the original data-producing condition or images.

  4. Nonlinear BCJR equalizer for suppression of intrachannel nonlinearities in 40 Gb/s optical communications systems.

    PubMed

    Djordjevic, Ivan B; Vasic, Bane

    2006-05-29

    A maximum a posteriori probability (MAP) symbol decoder supplemented with iterative decoding is proposed as an effective means of suppressing intrachannel nonlinearities. The MAP detector, based on the Bahl-Cocke-Jelinek-Raviv algorithm, operates on the channel trellis, a dynamical model of intersymbol interference, and provides soft-decision outputs that are processed further in an iterative decoder. A dramatic performance improvement is demonstrated. The main reason is that the conventional maximum-likelihood sequence detector based on the Viterbi algorithm provides hard-decision outputs only, hence preventing soft iterative decoding. The proposed scheme operates very well in the presence of strong intrachannel intersymbol interference, when other advanced forward error correction schemes fail, and it is also suitable for a 40 Gb/s upgrade over existing 10 Gb/s infrastructure.

  5. Equalization of nonlinear transmission impairments by maximum-likelihood-sequence estimation in digital coherent receivers.

    PubMed

    Khairuzzaman, Md; Zhang, Chao; Igarashi, Koji; Katoh, Kazuhiro; Kikuchi, Kazuro

    2010-03-01

    We describe a successful introduction of maximum-likelihood-sequence estimation (MLSE) into digital coherent receivers together with finite-impulse response (FIR) filters in order to equalize both linear and nonlinear fiber impairments. The MLSE equalizer based on the Viterbi algorithm is implemented in the offline digital signal processing (DSP) core. We transmit 20-Gbit/s quadrature phase-shift keying (QPSK) signals through a 200-km-long standard single-mode fiber. The bit-error rate performance shows that the MLSE equalizer outperforms the conventional adaptive FIR filter, especially when nonlinear impairments are predominant.

  6. DNA base-calling from a nanopore using a Viterbi algorithm.

    PubMed

    Timp, Winston; Comer, Jeffrey; Aksimentiev, Aleksei

    2012-05-16

    Nanopore-based DNA sequencing is the most promising third-generation sequencing method. It has superior read length, speed, and sample requirements compared with state-of-the-art second-generation methods. However, base-calling still presents substantial difficulty because the resolution of the technique is limited compared with the measured signal/noise ratio. Here we demonstrate a method to decode 3-bp-resolution nanopore electrical measurements into a DNA sequence using a Hidden Markov model. This method shows tremendous potential for accuracy (~98%), even with a poor signal/noise ratio. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  7. Envelope detection using temporal magnetization dynamics of resonantly interacting spin-torque oscillator

    NASA Astrophysics Data System (ADS)

    Nakamura, Y.; Nishikawa, M.; Osawa, H.; Okamoto, Y.; Kanao, T.; Sato, R.

    2018-05-01

    In this article, we propose a method for detecting the recorded data pattern from the envelope of the temporal magnetization dynamics of a resonantly interacting spin-torque oscillator in microwave-assisted magnetic recording for three-dimensional magnetic recording. We simulate the envelope of the waveform from recorded dots with a staggered magnetization configuration, calculated using a micromagnetic simulation. We study data detection methods for the envelope and propose a soft-output Viterbi algorithm (SOVA) for a partial-response (PR) system as the signal processing scheme for three-dimensional magnetic recording.

  8. An Indoor Pedestrian Positioning Method Using HMM with a Fuzzy Pattern Recognition Algorithm in a WLAN Fingerprint System

    PubMed Central

    Ni, Yepeng; Liu, Jianbo; Liu, Shan; Bai, Yaxin

    2016-01-01

    With the rapid development of smartphones and wireless networks, indoor location-based services have become more and more prevalent. Due to the sophisticated propagation of radio signals, the Received Signal Strength Indicator (RSSI) shows a significant variation during pedestrian walking, which introduces critical errors in deterministic indoor positioning. To solve this problem, we present a novel method to improve the indoor pedestrian positioning accuracy by embedding a fuzzy pattern recognition algorithm into a Hidden Markov Model. The fuzzy pattern recognition algorithm follows the rule that the RSSI fading has a positive correlation to the distance between the measuring point and the AP location even during a dynamic positioning measurement. Through this algorithm, we use the RSSI variation trend to replace the specific RSSI value to achieve a fuzzy positioning. The transition probability of the Hidden Markov Model is trained by the fuzzy pattern recognition algorithm with pedestrian trajectories. Using the Viterbi algorithm with the trained model, we can obtain a set of hidden location states. In our experiments, we demonstrate that, compared with the deterministic pattern matching algorithm, our method can greatly improve the positioning accuracy and shows robust environmental adaptability. PMID:27618053

  9. Hybrid and concatenated coding applications.

    NASA Technical Reports Server (NTRS)

    Hofman, L. B.; Odenwalder, J. P.

    1972-01-01

    Results are presented of a study evaluating the performance and implementation complexity of a concatenated and a hybrid coding system for moderate-speed deep-space applications. It is shown that with a total complexity of less than three times that of the basic Viterbi decoder, concatenated coding improves a constraint-length 8, rate 1/3 Viterbi decoding system by 1.1 and 2.6 dB at bit error probabilities of 10^-4 and 10^-8, respectively. With a somewhat greater total complexity, the hybrid coding system is shown to obtain a 0.9-dB computational performance improvement over the basic rate 1/3 sequential decoding system. Although substantial, these complexities are much less than those required to achieve the same performance with more complex Viterbi or sequential decoder systems.

  10. Frequency Hopping, Multiple Frequency-Shift Keying, Coding, and Optimal Partial-Band Jamming.

    DTIC Science & Technology

    1982-08-01

    Fragmented excerpt from a scanned report: receivers appropriate for the two strategies are described; each receiver is noncoherent (a coherent receiver is generally impractical) and implements hard decisions. The fragment repeatedly cites "Advances in Coding and Modulation for Noncoherent Channels Affected by Fading, Partial Band, and Multiple-Access Interference," in A. J. Viterbi, ed., Advances in Communication.

  11. Design and Implementation of Viterbi Decoder Using VHDL

    NASA Astrophysics Data System (ADS)

    Thakur, Akash; Chattopadhyay, Manju K.

    2018-03-01

    A digital design of a Viterbi decoder for a rate 1/2 convolutional encoder with constraint length k = 3 is presented in this paper. The design is coded in VHDL, simulated, and synthesized using XILINX ISE 14.7. Synthesis results show that the maximum operating frequency of the design is 100.725 MHz. The memory requirement is lower than that of the conventional method.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stoiber, Marcus H.; Brown, James B.

    This software implements the first base caller for nanopore data that calls bases directly from raw data. The basecRAWller algorithm has two major advantages over current nanopore base calling software: (1) streaming base calling and (2) base calling from information-rich raw signal. The ability to perform truly streaming base calling as signal is received from the sequencer can be very powerful, as this is one of the major advantages of this technology compared to other sequencing technologies. As such, enabling as much streaming potential as possible will be incredibly important as this technology continues to become more widely applied in biosciences. All other base callers currently employ the Viterbi algorithm, which requires the whole sequence before the complete base calling procedure can be applied and thus precludes a natural streaming base calling procedure. The other major advantage of the basecRAWller algorithm is the prediction of bases from raw signal, which contains much richer information than the segmented chunks that current algorithms employ. This leads to the potential for much more accurate base calls, which would make this technology much more valuable to the growing user base for this technology.

  13. Combinatorial Optimization Algorithms for Dynamic Multiple Fault Diagnosis in Automotive and Aerospace Applications

    NASA Astrophysics Data System (ADS)

    Kodali, Anuradha

    In this thesis, we develop dynamic multiple fault diagnosis (DMFD) algorithms to diagnose faults that are sporadic and coupled. Firstly, we formulate a coupled factorial hidden Markov model-based (CFHMM) framework to diagnose dependent faults occurring over time (dynamic case). Here, we implement a mixed memory Markov coupling model to determine the most likely sequence of (dependent) fault states, the one that best explains the observed test outcomes over time. An iterative Gauss-Seidel coordinate ascent optimization method is proposed for solving the problem. A soft Viterbi algorithm is also implemented within the framework for decoding dependent fault states over time. We demonstrate the algorithm on simulated and real-world systems with coupled faults; the results show that this approach improves the correct isolation rate as compared to the formulation where independent fault states are assumed. Secondly, we formulate a generalization of set-covering, termed dynamic set-covering (DSC), which involves a series of coupled set-covering problems over time. The objective of the DSC problem is to infer the most probable time sequence of a parsimonious set of failure sources that explains the observed test outcomes over time. The DSC problem is NP-hard and intractable due to the fault-test dependency matrix that couples the failed tests and faults via the constraint matrix, and the temporal dependence of failure sources over time. Here, the DSC problem is motivated from the viewpoint of a dynamic multiple fault diagnosis problem, but it has wide applications in operations research, for e.g., facility location problem. Thus, we also formulated the DSC problem in the context of a dynamically evolving facility location problem. Here, a facility can be opened, closed, or can be temporarily unavailable at any time for a given requirement of demand points. These activities are associated with costs or penalties, viz., phase-in or phase-out for the opening or closing of a facility, respectively. The set-covering matrix encapsulates the relationship among the rows (tests or demand points) and columns (faults or locations) of the system at each time. By relaxing the coupling constraints using Lagrange multipliers, the DSC problem can be decoupled into independent subproblems, one for each column. Each subproblem is solved using the Viterbi decoding algorithm, and a primal feasible solution is constructed by modifying the Viterbi solutions via a heuristic. The proposed Viterbi-Lagrangian relaxation algorithm (VLRA) provides a measure of suboptimality via an approximate duality gap. As a major practical extension of the above problem, we also consider the problem of diagnosing faults with delayed test outcomes, termed delay-dynamic set-covering (DDSC), and experiment with real-world problems that exhibit masking faults. Also, we present simulation results on OR-library datasets (set-covering formulations are predominantly validated on these matrices in the literature), posed as facility location problems. Finally, we implement these algorithms to solve problems in aerospace and automotive applications. Firstly, we address the diagnostic ambiguity problem in aerospace and automotive applications by developing a dynamic fusion framework that includes dynamic multiple fault diagnosis algorithms. 
This improves the correct fault isolation rate, while minimizing the false alarm rates, by considering multiple faults instead of the traditional data-driven techniques based on single fault (class)-single epoch (static) assumption. The dynamic fusion problem is formulated as a maximum a posteriori decision problem of inferring the fault sequence based on uncertain outcomes of multiple binary classifiers over time. The fusion process involves three steps: the first step transforms the multi-class problem into dichotomies using error correcting output codes (ECOC), thereby solving the concomitant binary classification problems; the second step fuses the outcomes of multiple binary classifiers over time using a sliding window or block dynamic fusion method that exploits temporal data correlations over time. We solve this NP-hard optimization problem via a Lagrangian relaxation (variational) technique. The third step optimizes the classifier parameters, viz., probabilities of detection and false alarm, using a genetic algorithm. The proposed algorithm is demonstrated by computing the diagnostic performance metrics on a twin-spool commercial jet engine, an automotive engine, and UCI datasets (problems with high classification error are specifically chosen for experimentation). We show that the primal-dual optimization framework performed consistently better than any traditional fusion technique, even when it is forced to give a single fault decision across a range of classification problems. Secondly, we implement the inference algorithms to diagnose faults in vehicle systems that are controlled by a network of electronic control units (ECUs). The faults, originating from various interactions and especially between hardware and software, are particularly challenging to address. Our basic strategy is to divide the fault universe of such cyber-physical systems in a hierarchical manner, and monitor the critical variables/signals that have impact at different levels of interactions. The proposed diagnostic strategy is validated on an electrical power generation and storage system (EPGS) controlled by two ECUs in an environment with CANoe/MATLAB co-simulation. Eleven faults are injected with the failures originating in actuator hardware, sensor, controller hardware and software components. Diagnostic matrix is established to represent the relationship between the faults and the test outcomes (also known as fault signatures) via simulations. The results show that the proposed diagnostic strategy is effective in addressing the interaction-caused faults.

  14. Large constraint length high speed viterbi decoder based on a modular hierarchial decomposition of the deBruijn graph

    NASA Technical Reports Server (NTRS)

    Collins, Oliver (Inventor); Dolinar, Jr., Samuel J. (Inventor); Hsu, In-Shek (Inventor); Bozzola, Fabrizio P. (Inventor); Olson, Erlend M. (Inventor); Statman, Joseph I. (Inventor); Zimmerman, George A. (Inventor)

    1991-01-01

    A method is presented for formulating and packaging decision-making elements into a long-constraint-length Viterbi decoder, in which the decision-making processors are formulated as individual Viterbi butterfly processors interconnected in a deBruijn graph configuration. A fully distributed architecture, which achieves high decoding speeds, is made feasible by novel wiring and partitioning of the state diagram. This partitioning defines universal modules, which can be used to build any size decoder, such that a large number of wires is contained inside each module and a small number of wires is needed to connect modules. The total system is modular and hierarchical; it implements a large proportion of the required wiring internally within modules and may include some external wiring to fully complete the deBruijn graph.

  15. Channel coding for underwater acoustic single-carrier CDMA communication system

    NASA Astrophysics Data System (ADS)

    Liu, Lanjun; Zhang, Yonglei; Zhang, Pengcheng; Zhou, Lin; Niu, Jiong

    2017-01-01

    CDMA is an effective multiple-access protocol for underwater acoustic networks, and channel coding can effectively reduce the bit error rate (BER) of an underwater acoustic communication system. To meet the requirements of underwater acoustic mobile networks based on CDMA, an underwater acoustic single-carrier CDMA communication system (UWA/SCCDMA) based on the direct-sequence spread spectrum is proposed, and its channel coding scheme is studied based on convolutional, RA, Turbo, and LDPC coding, respectively. The implementation steps of the Viterbi algorithm for convolutional coding, the BP and minimum-sum algorithms for RA coding, the Log-MAP and SOVA algorithms for Turbo coding, and the sum-product algorithm for LDPC coding are given. A UWA/SCCDMA simulation system based on Matlab is designed. Simulation results show that the UWA/SCCDMA systems based on RA, Turbo, and LDPC coding perform well, with a communication BER below 10^-6 in an underwater acoustic channel with a low signal-to-noise ratio (SNR) from -12 dB to -10 dB, which is about two orders of magnitude lower than that of convolutional coding. The system based on Turbo coding with the Log-MAP algorithm has the best performance.

  16. Hidden Markov model tracking of continuous gravitational waves from young supernova remnants

    NASA Astrophysics Data System (ADS)

    Sun, L.; Melatos, A.; Suvorova, S.; Moran, W.; Evans, R. J.

    2018-02-01

    Searches for persistent gravitational radiation from nonpulsating neutron stars in young supernova remnants are computationally challenging because of rapid stellar braking. We describe a practical, efficient, semicoherent search based on a hidden Markov model tracking scheme, solved by the Viterbi algorithm, combined with a maximum likelihood matched filter, the F statistic. The scheme is well suited to analyzing data from advanced detectors like the Advanced Laser Interferometer Gravitational Wave Observatory (Advanced LIGO). It can track rapid phase evolution from secular stellar braking and stochastic timing noise torques simultaneously without searching second- and higher-order derivatives of the signal frequency, providing an economical alternative to stack-slide-based semicoherent algorithms. One implementation tracks the signal frequency alone. A second implementation tracks the signal frequency and its first time derivative. It improves the sensitivity by a factor of a few upon the first implementation, but the cost increases by 2 to 3 orders of magnitude.

  17. Optimal decoding in fading channels - A combined envelope, multiple differential and coherent detection approach

    NASA Astrophysics Data System (ADS)

    Makrakis, Dimitrios; Mathiopoulos, P. Takis

    A maximum likelihood sequential decoder for the reception of digitally modulated signals with single or multiamplitude constellations transmitted over a multiplicative, nonselective fading channel is derived. It is shown that its structure consists of a combination of envelope, multiple differential, and coherent detectors. The outputs of each of these detectors are jointly processed by means of an algorithm. This algorithm is presented in a recursive form. The derivation of the new receiver is general enough to accommodate uncoded as well as coded (e.g., trellis-coded) schemes. Performance evaluation results for a reduced-complexity trellis-coded QPSK system have demonstrated that the proposed receiver dramatically reduces the error floors caused by fading. At Eb/N0 = 20 dB the new receiver structure results in bit-error-rate reductions of more than three orders of magnitude compared to a conventional Viterbi receiver, while being reasonably simple to implement.

  18. Building a biomedical tokenizer using the token lattice design pattern and the adapted Viterbi algorithm

    PubMed Central

    2011-01-01

    Background Tokenization is an important component of language processing yet there is no widely accepted tokenization method for English texts, including biomedical texts. Other than rule based techniques, tokenization in the biomedical domain has been regarded as a classification task. Biomedical classifier-based tokenizers either split or join textual objects through classification to form tokens. The idiosyncratic nature of each biomedical tokenizer’s output complicates adoption and reuse. Furthermore, biomedical tokenizers generally lack guidance on how to apply an existing tokenizer to a new domain (subdomain). We identify and complete a novel tokenizer design pattern and suggest a systematic approach to tokenizer creation. We implement a tokenizer based on our design pattern that combines regular expressions and machine learning. Our machine learning approach differs from the previous split-join classification approaches. We evaluate our approach against three other tokenizers on the task of tokenizing biomedical text. Results Medpost and our adapted Viterbi tokenizer performed best with a 92.9% and 92.4% accuracy respectively. Conclusions Our evaluation of our design pattern and guidelines supports our claim that the design pattern and guidelines are a viable approach to tokenizer construction (producing tokenizers matching leading custom-built tokenizers in a particular domain). Our evaluation also demonstrates that ambiguous tokenizations can be disambiguated through POS tagging. In doing so, POS tag sequences and training data have a significant impact on proper text tokenization. PMID:21658288
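
    The token-lattice design reduces tokenization to a best-path search over candidate tokens. The sketch below is a plain Viterbi-style dynamic program over such a lattice with a caller-supplied scoring function; the scoring function, the 20-character cap on token length, and the exhaustive left-to-right lattice construction are simplifications standing in for the classifier- and POS-tag-based machinery described in the record.

```python
def best_segmentation(text, score):
    """Viterbi-style search for the best-scoring tokenization of `text`.

    score(token) returns a (finite) log-score for treating `token` as a single unit;
    it is a hypothetical stand-in for the learned scores used by the tokenizer.
    """
    n = len(text)
    NEG = float("-inf")
    best = [NEG] * (n + 1)
    best[0] = 0.0
    back = [None] * (n + 1)
    for end in range(1, n + 1):
        for start in range(max(0, end - 20), end):   # consider tokens up to 20 characters
            cand = best[start] + score(text[start:end])
            if cand > best[end]:
                best[end], back[end] = cand, start
    tokens, pos = [], n                               # trace back the chosen boundaries
    while pos > 0:
        tokens.append(text[back[pos]:pos])
        pos = back[pos]
    return list(reversed(tokens))
```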

  19. Enhanced decoding for the Galileo low-gain antenna mission: Viterbi redecoding with four decoding stages

    NASA Technical Reports Server (NTRS)

    Dolinar, S.; Belongie, M.

    1995-01-01

    The Galileo low-gain antenna mission will be supported by a coding system that uses a (14,1/4) inner convolutional code concatenated with Reed-Solomon codes of four different redundancies. Decoding for this code is designed to proceed in four distinct stages of Viterbi decoding followed by Reed-Solomon decoding. In each successive stage, the Reed-Solomon decoder only tries to decode the highest redundancy codewords not yet decoded in previous stages, and the Viterbi decoder redecodes its data utilizing the known symbols from previously decoded Reed-Solomon codewords. A previous article analyzed a two-stage decoding option that was not selected by Galileo. The present article analyzes the four-stage decoding scheme and derives the near-optimum set of redundancies selected for use by Galileo. The performance improvements relative to one- and two-stage decoding systems are evaluated.

  20. A VLSI decomposition of the deBruijn graph

    NASA Technical Reports Server (NTRS)

    Collins, O.; Dolinar, S.; Mceliece, R.; Pollara, F.

    1990-01-01

    A new Viterbi decoder for convolutional codes with constraint lengths up to 15, called the Big Viterbi Decoder, is under development for the Deep Space Network. It will be demonstrated by decoding data from the Galileo spacecraft, which has a rate 1/4, constraint-length 15 convolutional encoder on board. Here, the mathematical theory underlying the design of the very-large-scale-integrated (VLSI) chips that are being used to build this decoder is explained. The deBruijn graph B sub n describes the topology of a fully parallel, rate 1/v, constraint length n+2 Viterbi decoder, and it is shown that B sub n can be built by appropriately wiring together (i.e., connecting together with extra edges) many isomorphic copies of a fixed graph called a B sub n building block. The efficiency of such a building block is defined as the fraction of the edges in B sub n that are present in the copies of the building block. It is shown, among other things, that for any alpha less than 1, there exists a graph G which is a B sub n building block of efficiency greater than alpha for all sufficiently large n. These results are illustrated by describing a special hierarchical family of deBruijn building blocks, which has led to the design of the gate-array chips being used in the Big Viterbi Decoder.

  1. Viterbi sparse spike detection and a compositional origin to ultralow-velocity zones

    NASA Astrophysics Data System (ADS)

    Brown, Samuel Paul

    Accurate interpretation of seismic travel times and amplitudes at both the exploration and global scales is complicated by the band-limited nature of seismic data. We present a stochastic method, Viterbi sparse spike detection (VSSD), to reduce a seismic waveform into a most probable constituent spike train. Model waveforms are constructed from a set of candidate spike trains convolved with a source wavelet estimate. For each model waveform, a profile hidden Markov model (HMM) is constructed to represent the waveform as a stochastic generative model with a linear topology corresponding to a sequence of samples. The Viterbi algorithm is employed to simultaneously find the optimal nonlinear alignment between a model waveform and the seismic data, and to assign a score to each candidate spike train. The most probable travel times and amplitudes are inferred from the alignments of the highest scoring models. Our analyses show that the method can resolve closely spaced arrivals below traditional resolution limits and that travel time estimates are robust in the presence of random noise and source wavelet errors. We applied the VSSD method to constrain the elastic properties of an ultralow-velocity zone (ULVZ) at the core-mantle boundary beneath the Coral Sea. We analyzed vertical-component, short-period ScP waveforms for 16 earthquakes occurring in the Tonga-Fiji trench recorded at the Alice Springs Array (ASAR) in central Australia. These waveforms show strong pre- and postcursory seismic arrivals consistent with ULVZ layering. We used the VSSD method to measure differential travel times and amplitudes of the postcursor arrival ScSP and the precursor arrival SPcP relative to ScP. We compare our measurements to a database of approximately 340,000 synthetic seismograms, finding that these data are best fit by a ULVZ model with an S-wave velocity reduction of 24%, a P-wave velocity reduction of 23%, a thickness of 8.5 km, and a density increase of 6%. We simultaneously constrain both P- and S-wave velocity reductions as a 1:1 ratio inside this ULVZ. This 1:1 ratio is not consistent with a partial melt origin for ULVZs. Rather, we demonstrate that a compositional origin is more likely.

  2. Web of Objects Based Ambient Assisted Living Framework for Emergency Psychiatric State Prediction

    PubMed Central

    Alam, Md Golam Rabiul; Abedin, Sarder Fakhrul; Al Ameen, Moshaddique; Hong, Choong Seon

    2016-01-01

    Ambient assisted living can facilitate optimum health and wellness by aiding physical, mental and social well-being. In this paper, patients’ psychiatric symptoms are collected through lightweight biosensors and web-based psychiatric screening scales in a smart home environment and then analyzed through machine learning algorithms to provide ambient intelligence in a psychiatric emergency. The psychiatric states are modeled through a Hidden Markov Model (HMM), and the model parameters are estimated using a Viterbi path counting and scalable Stochastic Variational Inference (SVI)-based training algorithm. The most likely psychiatric state sequence for the corresponding observation sequence is determined, and an emergency psychiatric state is predicted through the proposed algorithm. Moreover, to enable personalized psychiatric emergency care, a web-of-objects-based service framework is proposed for a smart-home environment. In this framework, the biosensor observations and the psychiatric rating scales are objectified and virtualized in the web space. Then, the web of objects of sensor observations and psychiatric rating scores is used to assess the dweller’s mental health status and to predict an emergency psychiatric state. The proposed psychiatric state prediction algorithm reported 83.03 percent prediction accuracy in an empirical performance study.

  3. Modeling Driver Behavior near Intersections in Hidden Markov Model

    PubMed Central

    Li, Juan; He, Qinglian; Zhou, Hang; Guan, Yunlin; Dai, Wei

    2016-01-01

    Intersections are one of the major locations where safety is a big concern to drivers. Inappropriate driver behaviors in response to frequent changes when approaching intersections often lead to intersection-related crashes or collisions. Thus to better understand driver behaviors at intersections, especially in the dilemma zone, a Hidden Markov Model (HMM) is utilized in this study. With the discrete data processing, the observed dynamic data of vehicles are used for the inference of the Hidden Markov Model. The Baum-Welch (B-W) estimation algorithm is applied to calculate the vehicle state transition probability matrix and the observation probability matrix. When combined with the Forward algorithm, the most likely state of the driver can be obtained. Thus the model can be used to measure the stability and risk of driver behavior. It is found that drivers’ behaviors in the dilemma zone are of lower stability and higher risk compared with those in other regions around intersections. In addition to the B-W estimation algorithm, the Viterbi Algorithm is utilized to predict the potential dangers of vehicles. The results can be applied to driving assistance systems to warn drivers to avoid possible accidents. PMID:28009838

  4. On the Latent Variable Interpretation in Sum-Product Networks.

    PubMed

    Peharz, Robert; Gens, Robert; Pernkopf, Franz; Domingos, Pedro

    2017-10-01

    One of the central themes in sum-product networks (SPNs) is the interpretation of sum nodes as marginalized latent variables (LVs). This interpretation yields increased syntactic and semantic structure, allows the application of the EM algorithm, and enables efficient MPE inference. In the literature, the LV interpretation was justified by explicitly introducing the indicator variables corresponding to the LVs' states. However, as pointed out in this paper, this approach is in conflict with the completeness condition in SPNs and does not fully specify the probabilistic model. We propose a remedy for this problem by modifying the original approach for introducing the LVs, which we call SPN augmentation. We discuss conditional independencies in augmented SPNs, formally establish the probabilistic interpretation of the sum-weights, and give an interpretation of augmented SPNs as Bayesian networks. Based on these results, we find a sound derivation of the EM algorithm for SPNs. Furthermore, the Viterbi-style algorithm for MPE proposed in the literature was never proven to be correct. We show that this is indeed a correct algorithm when applied to selective SPNs, and in particular when applied to augmented SPNs. Our theoretical results are confirmed in experiments on synthetic data and 103 real-world datasets.

  5. FEAST: sensitive local alignment with multiple rates of evolution.

    PubMed

    Hudek, Alexander K; Brown, Daniel G

    2011-01-01

    We present a pairwise local aligner, FEAST, which uses two new techniques: a sensitive extension algorithm for identifying homologous subsequences, and a descriptive probabilistic alignment model. We also present a new procedure for training alignment parameters and apply it to the human and mouse genomes, producing a better parameter set for these sequences. Our extension algorithm identifies homologous subsequences by considering all evolutionary histories. It has higher maximum sensitivity than Viterbi extensions, and better balances specificity. We model alignments with several submodels, each with unique statistical properties, describing strongly similar and weakly similar regions of homologous DNA. Training parameters using two submodels produces superior alignments, even when we align with only the parameters from the weaker submodel. Our extension algorithm combined with our new parameter set achieves sensitivity 0.59 on synthetic tests. In contrast, LASTZ with default settings achieves sensitivity 0.35 with the same false positive rate. Using the weak submodel as parameters for LASTZ increases its sensitivity to 0.59 with high error. FEAST is available at http://monod.uwaterloo.ca/feast/.

  6. On the error statistics of Viterbi decoding and the performance of concatenated codes

    NASA Technical Reports Server (NTRS)

    Miller, R. L.; Deutsch, L. J.; Butman, S. A.

    1981-01-01

    Computer simulation results are presented on the performance of convolutional codes of constraint lengths 7 and 10 concatenated with the (255, 223) Reed-Solomon code (a proposed NASA standard). These results indicate that as much as 0.8 dB can be gained by concatenating this Reed-Solomon code with a (10, 1/3) convolutional code, instead of the (7, 1/2) code currently used by the DSN. A mathematical model of Viterbi decoder burst-error statistics is developed and is validated through additional computer simulations.

  7. Large-Constraint-Length, Fast Viterbi Decoder

    NASA Technical Reports Server (NTRS)

    Collins, O.; Dolinar, S.; Hsu, In-Shek; Pollara, F.; Olson, E.; Statman, J.; Zimmerman, G.

    1990-01-01

    Scheme for efficient interconnection makes VLSI design feasible. Concept for fast Viterbi decoder provides for processing of convolutional codes of constraint length K up to 15 and rates of 1/2 to 1/6. Fully parallel (but bit-serial) architecture developed for decoder of K = 7 implemented in single dedicated VLSI circuit chip. Contains six major functional blocks. VLSI circuits perform branch metric computations, add-compare-select operations, and then store decisions in traceback memory. Traceback processor reads appropriate memory locations and puts out decoded bits. Used as building block for decoders of larger K.
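
    The traceback step mentioned above can be sketched in software as reading the stored survivor decisions backwards from the best final state. The register convention used here (the state holds the newest K-1 input bits, and the stored decision is the bit that fell out of the encoder register on the surviving branch) is an assumption made for this sketch, not a description of the actual chip.

```python
def traceback(decisions, final_state, constraint_length=7):
    """Recover decoded bits from per-step, per-state survivor decisions.

    decisions[t][state] is the decision bit written by the ACS unit for the path
    entering `state` at step t; `final_state` is the state with the best metric
    after the last step.
    """
    k = constraint_length
    mask = (1 << (k - 1)) - 1
    bits, state = [], final_state
    for t in range(len(decisions) - 1, -1, -1):
        bits.append((state >> (k - 2)) & 1)                   # newest input bit held in the state
        state = ((state << 1) | decisions[t][state]) & mask   # step back along the survivor
    return bits[::-1]
```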

  8. Convolutional code performance in planetary entry channels

    NASA Technical Reports Server (NTRS)

    Modestino, J. W.

    1974-01-01

    The planetary entry channel is modeled for communication purposes representing turbulent atmospheric scattering effects. The performance of short and long constraint length convolutional codes is investigated in conjunction with coherent BPSK modulation and Viterbi maximum likelihood decoding. Algorithms for sequential decoding are studied in terms of computation and/or storage requirements as a function of the fading channel parameters. The performance of the coded coherent BPSK system is compared with the coded incoherent MFSK system. Results indicate that: some degree of interleaving is required to combat time correlated fading of channel; only modest amounts of interleaving are required to approach performance of memoryless channel; additional propagational results are required on the phase perturbation process; and the incoherent MFSK system is superior when phase tracking errors are considered.

  9. A complex valued radial basis function network for equalization of fast time varying channels.

    PubMed

    Gan, Q; Saratchandran, P; Sundararajan, N; Subramanian, K R

    1999-01-01

    This paper presents a complex valued radial basis function (RBF) network for equalization of fast time varying channels. A new method for calculating the centers of the RBF network is given. The method allows fixing the number of RBF centers even as the equalizer order is increased so that a good performance is obtained by a high-order RBF equalizer with small number of centers. Simulations are performed on time varying channels using a Rayleigh fading channel model to compare the performance of our RBF with an adaptive maximum-likelihood sequence estimator (MLSE) consisting of a channel estimator and a MLSE implemented by the Viterbi algorithm. The results show that the RBF equalizer produces superior performance with less computational complexity.

  10. Validity of the two-level model for Viterbi decoder gap-cycle performance

    NASA Technical Reports Server (NTRS)

    Dolinar, S.; Arnold, S.

    1990-01-01

    A two-level model has previously been proposed for approximating the performance of a Viterbi decoder which encounters data received with periodically varying signal-to-noise ratio. Such cyclically gapped data is obtained from the Very Large Array (VLA), either operating as a stand-alone system or arrayed with Goldstone. This approximate model predicts that the decoder error rate will vary periodically between two discrete levels with the same period as the gap cycle. It further predicts that the length of the gapped portion of the decoder error cycle for a constraint length K decoder will be about K-1 bits shorter than the actual duration of the gap. The two-level model for Viterbi decoder performance with gapped data is subjected to detailed validation tests. Curves showing the cyclical behavior of the decoder error burst statistics are compared with the simple square-wave cycles predicted by the model. The validity of the model depends on a parameter often considered irrelevant in the analysis of Viterbi decoder performance, the overall scaling of the received signal or the decoder's branch-metrics. Three scaling alternatives are examined: optimum branch-metric scaling and constant branch-metric scaling combined with either constant noise-level scaling or constant signal-level scaling. The simulated decoder error cycle curves roughly verify the accuracy of the two-level model for both the case of optimum branch-metric scaling and the case of constant branch-metric scaling combined with constant noise-level scaling. However, the model is not accurate for the case of constant branch-metric scaling combined with constant signal-level scaling.

  11. Accurate Heart Rate Monitoring During Physical Exercises Using PPG.

    PubMed

    Temko, Andriy

    2017-09-01

    The challenging task of heart rate (HR) estimation from the photoplethysmographic (PPG) signal, during intensive physical exercises, is tackled in this paper. The study presents a detailed analysis of a novel algorithm (WFPV) that exploits a Wiener filter to attenuate the motion artifacts, a phase vocoder to refine the HR estimate, and user-adaptive post-processing to track the subject's physiology. Additionally, an offline version of the HR estimation algorithm that uses Viterbi decoding is designed for scenarios that do not require online HR monitoring (WFPV+VD). The performance of the HR estimation systems is rigorously compared with existing algorithms on the publicly available database of 23 PPG recordings. On the whole dataset of 23 PPG recordings, the algorithms result in average absolute errors of 1.97 and 1.37 BPM in the online and offline modes, respectively. On the test dataset of 10 PPG recordings which were most corrupted with motion artifacts, WFPV has an error of 2.95 BPM on its own and 2.32 BPM in an ensemble with two existing algorithms. The error rate is significantly reduced when compared with the state-of-the-art PPG-based HR estimation methods. The proposed system is shown to be accurate in the presence of strong motion artifacts and, in contrast to existing alternatives, has very few free parameters to tune. The algorithm has a low computational cost and can be used for fitness tracking and health monitoring in wearable devices. The MATLAB implementation of the algorithm is provided online.

  12. Testing interconnected VLSI circuits in the Big Viterbi Decoder

    NASA Technical Reports Server (NTRS)

    Onyszchuk, I. M.

    1991-01-01

    The Big Viterbi Decoder (BVD) is a powerful error-correcting hardware device for the Deep Space Network (DSN), in support of the Galileo and Comet Rendezvous Asteroid Flyby (CRAF)/Cassini Missions. Recently, a prototype was completed and run successfully at 400,000 or more decoded bits per second. This prototype is a complex digital system whose core arithmetic unit consists of 256 identical very large scale integration (VLSI) gate-array chips, 16 on each of 16 identical boards which are connected through a 28-layer, printed-circuit backplane using 4416 wires. Special techniques were developed for debugging, testing, and locating faults inside individual chips, on boards, and within the entire decoder. The methods are based upon hierarchical structure in the decoder, and require that chips or boards be wired themselves as Viterbi decoders. The basic procedure consists of sending a small set of known, very noisy channel symbols through a decoder, and matching observables against values computed by a software simulation. Also, tests were devised for finding open and short-circuited wires which connect VLSI chips on the boards and through the backplane.

  13. Performance of concatenated Reed-Solomon/Viterbi channel coding

    NASA Technical Reports Server (NTRS)

    Divsalar, D.; Yuen, J. H.

    1982-01-01

    The concatenated Reed-Solomon (RS)/Viterbi coding system is reviewed. The performance of the system is analyzed and results are derived with a new simple approach. A functional model for the input RS symbol error probability is presented. Based on this new functional model, we compute the performance of a concatenated system in terms of RS word error probability, output RS symbol error probability, bit error probability due to decoding failure, and bit error probability due to decoding error. Finally we analyze the effects of the noisy carrier reference and the slow fading on the system performance.

  14. Minimizing embedding impact in steganography using trellis-coded quantization

    NASA Astrophysics Data System (ADS)

    Filler, Tomáš; Judas, Jan; Fridrich, Jessica

    2010-01-01

    In this paper, we propose a practical approach to minimizing embedding impact in steganography based on syndrome coding and trellis-coded quantization and contrast its performance with appropriate rate-distortion bounds. We assume that each cover element can be assigned a positive scalar expressing the impact of making an embedding change at that element (single-letter distortion). The problem is to embed a given payload with minimal possible average embedding impact. This task, which can be viewed as a generalization of matrix embedding or writing on wet paper, has been approached using heuristic and suboptimal tools in the past. Here, we propose a fast and very versatile solution to this problem that can theoretically achieve performance arbitrarily close to the bound. It is based on syndrome coding using linear convolutional codes with the optimal binary quantizer implemented using the Viterbi algorithm run in the dual domain. The complexity and memory requirements of the embedding algorithm are linear w.r.t. the number of cover elements. For practitioners, we include detailed algorithms for finding good codes and their implementation. Finally, we report extensive experimental results for a large set of relative payloads and for different distortion profiles, including the wet paper channel.

  15. Video event classification and image segmentation based on noncausal multidimensional hidden Markov models.

    PubMed

    Ma, Xiang; Schonfeld, Dan; Khokhar, Ashfaq A

    2009-06-01

    In this paper, we propose a novel solution to an arbitrary noncausal, multidimensional hidden Markov model (HMM) for image and video classification. First, we show that the noncausal model can be solved by splitting it into multiple causal HMMs and simultaneously solving each causal HMM using a fully synchronous distributed computing framework, therefore referred to as distributed HMMs. Next we present an approximate solution to the multiple causal HMMs that is based on an alternating updating scheme and assumes a realistic sequential computing framework. The parameters of the distributed causal HMMs are estimated by extending the classical 1-D training and classification algorithms to multiple dimensions. The proposed extension to arbitrary causal, multidimensional HMMs allows state transitions that are dependent on all causal neighbors. We, thus, extend three fundamental algorithms to multidimensional causal systems, i.e., 1) expectation-maximization (EM), 2) general forward-backward (GFB), and 3) Viterbi algorithms. In the simulations, we choose to limit ourselves to a noncausal 2-D model whose noncausality is along a single dimension, in order to significantly reduce the computational complexity. Simulation results demonstrate the superior performance, higher accuracy rate, and applicability of the proposed noncausal HMM framework to image and video classification.
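
    As background for the 1-D building block that this and several other records extend, a minimal log-domain Viterbi decoder for a discrete-observation HMM can be sketched as follows (the two-state toy parameters are illustrative, not taken from the paper):

      import numpy as np

      def viterbi(log_pi, log_A, log_B, obs):
          # Most likely state path of a discrete-observation HMM, in the log domain.
          # log_pi: (S,) initial, log_A: (S, S) transitions, log_B: (S, V) emissions.
          S, T = log_pi.shape[0], len(obs)
          delta = np.empty((T, S))               # best log score ending in each state
          back = np.zeros((T, S), dtype=int)     # argmax backpointers
          delta[0] = log_pi + log_B[:, obs[0]]
          for t in range(1, T):
              scores = delta[t - 1][:, None] + log_A          # scores[i, j]: i -> j
              back[t] = np.argmax(scores, axis=0)
              delta[t] = scores[back[t], np.arange(S)] + log_B[:, obs[t]]
          path = np.empty(T, dtype=int)
          path[-1] = int(np.argmax(delta[-1]))
          for t in range(T - 2, -1, -1):
              path[t] = back[t + 1, path[t + 1]]
          return path

      # Toy two-state, two-symbol model (illustrative numbers only).
      pi = np.log([0.6, 0.4])
      A = np.log([[0.9, 0.1], [0.2, 0.8]])
      B = np.log([[0.7, 0.3], [0.1, 0.9]])
      print(viterbi(pi, A, B, np.array([0, 0, 1, 1, 1])))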

  16. Least reliable bits coding (LRBC) for high data rate satellite communications

    NASA Technical Reports Server (NTRS)

    Vanderaar, Mark; Budinger, James; Wagner, Paul

    1992-01-01

    LRBC, a bandwidth efficient multilevel/multistage block-coded modulation technique, is analyzed. LRBC uses simple multilevel component codes that provide increased error protection on increasingly unreliable modulated bits in order to maintain an overall high code rate that increases spectral efficiency. Soft-decision multistage decoding is used to make decisions on unprotected bits through corrections made on more protected bits. Analytical expressions and tight performance bounds are used to show that LRBC can achieve increased spectral efficiency and maintain equivalent or better power efficiency compared to that of BPSK. The relative simplicity of Galois field algebra versus the Viterbi algorithm and the availability of high-speed commercial VLSI for block codes indicate that LRBC using block codes is a desirable method for high data rate implementations.

  17. A long constraint length VLSI Viterbi decoder for the DSN

    NASA Technical Reports Server (NTRS)

    Statman, J. I.; Zimmerman, G.; Pollara, F.; Collins, O.

    1988-01-01

    A Viterbi decoder, capable of decoding convolutional codes with constraint lengths up to 15, is under development for the Deep Space Network (DSN). The objective is to complete a prototype of this decoder by late 1990, and demonstrate its performance using the (15, 1/4) encoder in Galileo. The decoder is expected to provide 1 to 2 dB improvement in bit SNR, compared to the present (7, 1/2) code and existing Maximum Likelihood Convolutional Decoder (MCD). The decoder will be fully programmable for any code up to constraint length 15, and code rate 1/2 to 1/6. The decoder architecture and top-level design are described.

  18. Performance of convolutional codes on fading channels typical of planetary entry missions

    NASA Technical Reports Server (NTRS)

    Modestino, J. W.; Mui, S. Y.; Reale, T. J.

    1974-01-01

    The performance of convolutional codes in fading channels typical of the planetary entry channel is examined in detail. The signal fading is due primarily to turbulent atmospheric scattering of the RF signal transmitted from an entry probe through a planetary atmosphere. Short constraint length convolutional codes are considered in conjunction with binary phase-shift keyed modulation and Viterbi maximum likelihood decoding, and for longer constraint length codes, sequential decoding utilizing both the Fano and Zigangirov-Jelinek (ZJ) algorithms is considered. Careful consideration is given to the modeling of the channel in terms of a few meaningful parameters which can be correlated closely with theoretical propagation studies. For short constraint length codes, the bit error probability performance was investigated as a function of E sub b/N sub o, parameterized by the fading channel parameters. For longer constraint length codes, the effect of the fading channel parameters on the computational requirements of both the Fano and ZJ algorithms was examined. The effects of simple block interleaving in combatting the memory of the channel are explored, using either an analytic approach or digital computer simulation.
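
    For readers who want to see the decoding step itself, the following minimal hard-decision Viterbi decoder for the textbook rate-1/2, constraint length 3 code with generators (7, 5) octal illustrates the add-compare-select recursion; it is a generic sketch, not the specific codes or fading-channel simulation studied in the paper:

      # Hard-decision Viterbi decoding of the rate-1/2, K = 3 code with
      # generator polynomials (7, 5) octal; parameters are illustrative.
      G = [0b111, 0b101]
      K = 3
      N_STATES = 1 << (K - 1)

      def encode(bits):
          state, out = 0, []
          for b in bits:
              reg = (b << (K - 1)) | state                  # newest bit in the MSB
              out += [bin(reg & g).count("1") & 1 for g in G]
              state = reg >> 1                              # advance the shift register
          return out

      def viterbi_decode(received, n_bits):
          INF = float("inf")
          path_metric = [0.0] + [INF] * (N_STATES - 1)      # encoder starts in state 0
          decisions = []
          for t in range(n_bits):
              r = received[2 * t: 2 * t + 2]
              new_metric = [INF] * N_STATES
              new_prev = [None] * N_STATES
              for s in range(N_STATES):
                  if path_metric[s] == INF:
                      continue
                  for b in (0, 1):
                      reg = (b << (K - 1)) | s
                      expected = [bin(reg & g).count("1") & 1 for g in G]
                      ns = reg >> 1
                      m = path_metric[s] + sum(x != y for x, y in zip(r, expected))
                      if m < new_metric[ns]:                # add-compare-select
                          new_metric[ns], new_prev[ns] = m, (s, b)
              path_metric = new_metric
              decisions.append(new_prev)
          state = min(range(N_STATES), key=lambda s: path_metric[s])
          bits = []
          for prev in reversed(decisions):                  # trace back the survivor path
              s, b = prev[state]
              bits.append(b)
              state = s
          return bits[::-1]

      msg = [1, 0, 1, 1, 0, 0, 1, 0]
      coded = encode(msg)
      coded[3] ^= 1                                         # inject a single channel error
      print(viterbi_decode(coded, len(msg)) == msg)         # expected: True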

  19. A Synchronization Algorithm and Implementation for High-Speed Block Codes Applications. Part 4

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Zhang, Yu; Nakamura, Eric B.; Uehara, Gregory T.

    1998-01-01

    Block codes have trellis structures and decoders amenable to high speed CMOS VLSI implementation. For a given CMOS technology, these structures enable operating speeds higher than those achievable using convolutional codes for only modest reductions in coding gain. As a result, block codes have tremendous potential for satellite trunk and other future high-speed communication applications. This paper describes a new approach for implementation of the synchronization function for block codes. The approach utilizes the output of the Viterbi decoder and therefore employs the strength of the decoder. Its operation requires no knowledge of the signal-to-noise ratio of the received signal, has a simple implementation, adds no overhead to the transmitted data, and has been shown to be effective in simulation for received SNR greater than 2 dB.

  20. The proposed coding standard at GSFC

    NASA Technical Reports Server (NTRS)

    Morakis, J. C.; Helgert, H. J.

    1977-01-01

    As part of the continuing effort to introduce standardization of spacecraft and ground equipment in satellite systems, NASA's Goddard Space Flight Center and other NASA facilities have supported the development of a set of standards for the use of error control coding in telemetry subsystems. These standards are intended to ensure compatibility between spacecraft and ground encoding equipment, while allowing sufficient flexibility to meet all anticipated mission requirements. The standards which have been developed to date cover the application of block codes in error detection and error correction modes, as well as short and long constraint length convolutional codes decoded via the Viterbi and sequential decoding algorithms, respectively. Included are detailed specifications of the codes, and their implementation. Current effort is directed toward the development of standards covering channels with burst noise characteristics, channels with feedback, and code concatenation.

  1. A digital communications system for manned spaceflight applications.

    NASA Technical Reports Server (NTRS)

    Batson, B. H.; Moorehead, R. W.

    1973-01-01

    A highly efficient, all-digital communications signal design employing convolutional coding and PN spectrum spreading is described for two-way transmission of voice and data between a manned spacecraft and ground. Variable-slope delta modulation is selected for analog/digital conversion of the voice signal, and a convolutional decoder utilizing the Viterbi decoding algorithm is selected for use at each receiving terminal. A PN spread spectrum technique is implemented to protect against multipath effects and to reduce the energy density (per unit bandwidth) impinging on the earth's surface to a value within the guidelines adopted by international agreement. Performance predictions are presented for transmission via a TDRS (tracking and data relay satellite) system and for direct transmission between the spacecraft and earth. Hardware estimates are provided for a flight-qualified communications system employing the coded digital signal design.

  2. A hybrid smartphone indoor positioning solution for mobile LBS.

    PubMed

    Liu, Jingbin; Chen, Ruizhi; Pei, Ling; Guinness, Robert; Kuusniemi, Heidi

    2012-12-12

    Smartphone positioning is an enabling technology used to create new business in the navigation and mobile location-based services (LBS) industries. This paper presents a smartphone indoor positioning engine named HIPE that can be easily integrated with mobile LBS. HIPE is a hybrid solution that fuses measurements of smartphone sensors with wireless signals. The smartphone sensors are used to measure the user's motion dynamics information (MDI), which represent the spatial correlation of various locations. Two algorithms based on hidden Markov model (HMM) problems, the grid-based filter and the Viterbi algorithm, are used in this paper as the central processor for data fusion to resolve the position estimates, and these algorithms are applicable for different applications, e.g., real-time navigation and location tracking, respectively. HIPE is more widely applicable for various motion scenarios than solutions proposed in previous studies because it uses no deterministic motion models, which have been commonly used in previous works. The experimental results showed that HIPE can provide adequate positioning accuracy and robustness for different scenarios of MDI combinations. HIPE is a cost-efficient solution, and it can work flexibly with different smartphone platforms, which may have different types of sensors available for the measurement of MDI data. The reliability of the positioning solution was found to increase with increasing precision of the MDI data.
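
    The grid-based filter mentioned above is essentially the forward (filtering) recursion of an HMM over discrete location cells. A minimal sketch under assumed toy quantities (a 5-cell corridor, an invented motion model, and made-up measurement likelihoods, none of them HIPE's actual models) is:

      import numpy as np

      def grid_filter_step(belief, trans, likelihood):
          # One recursive Bayes update over discrete location cells: predict with
          # the motion model, then reweight by the measurement likelihood.
          predicted = belief @ trans
          posterior = predicted * likelihood
          return posterior / posterior.sum()

      # Toy 1-D corridor of 5 cells: mostly stay put, sometimes step right (invented model).
      N = 5
      trans = 0.6 * np.eye(N) + 0.4 * np.eye(N, k=1)
      trans[-1, -1] = 1.0
      trans /= trans.sum(axis=1, keepdims=True)

      belief = np.full(N, 1.0 / N)
      for z in [np.array([0.8, 0.10, 0.05, 0.03, 0.02]),    # made-up measurement likelihoods
                np.array([0.2, 0.60, 0.10, 0.05, 0.05])]:
          belief = grid_filter_step(belief, trans, z)
      print(np.round(belief, 3))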

  3. Global navigation satellite system receiver for weak signals under all dynamic conditions

    NASA Astrophysics Data System (ADS)

    Ziedan, Nesreen Ibrahim

    The ability of the Global Navigation Satellite System (GNSS) receiver to work under weak signal and various dynamic conditions is required in some applications. Examples include providing a positioning capability in wireless devices and orbit determination of geostationary and high Earth orbit satellites. This dissertation develops Global Positioning System (GPS) receiver algorithms for such applications. Fifteen algorithms are developed for the GPS C/A signal. They cover all the receiver main functions, which include acquisition, fine acquisition, bit synchronization, code and carrier tracking, and navigation message decoding. They are integrated together, and they can be used in any software GPS receiver. They can also be modified to fit any other GPS or GNSS signals. The algorithms have new capabilities. The processing and memory requirements are considered in the design to allow the algorithms to fit the limited resources of some applications; they do not require any assisting information. Weak signals can be acquired in the presence of strong interfering signals and under high dynamic conditions. The fine acquisition, bit synchronization, and tracking algorithms are based on the Viterbi algorithm and Extended Kalman filter approaches. The tracking algorithms' capabilities extend the time to loss of lock. They have the ability to adaptively change the integration length and the code delay separation. More than one code delay separation can be used at the same time. Large tracking errors can be detected and then corrected by re-initialization and acquisition-like algorithms. Detecting the navigation message is needed to increase the coherent integration; decoding it is needed to calculate the navigation solution. The decoding algorithm utilizes the message structure to enable its decoding for signals with high Bit Error Rate. The algorithms are demonstrated using simulated GPS C/A code signals and TCXO clocks. The results have shown the algorithms' ability to work reliably with 15 dB-Hz signals and acceleration over 6 g.

  4. Automated Cough Assessment on a Mobile Platform

    PubMed Central

    2014-01-01

    The development of an Automated System for Asthma Monitoring (ADAM) is described. This consists of a consumer electronics mobile platform running a custom application. The application acquires an audio signal from an external user-worn microphone connected to the device analog-to-digital converter (microphone input). This signal is processed to determine the presence or absence of cough sounds. Symptom tallies and raw audio waveforms are recorded and made easily accessible for later review by a healthcare provider. The symptom detection algorithm is based upon standard speech recognition and machine learning paradigms and consists of an audio feature extraction step followed by a Hidden Markov Model based Viterbi decoder that has been trained on a large database of audio examples from a variety of subjects. Multiple Hidden Markov Model topologies and orders are studied. Performance of the recognizer is presented in terms of the sensitivity and the rate of false alarm as determined in a cross-validation test. PMID:25506590

  5. Two-dimensional hidden semantic information model for target saliency detection and eyetracking identification

    NASA Astrophysics Data System (ADS)

    Wan, Weibing; Yuan, Lingfeng; Zhao, Qunfei; Fang, Tao

    2018-01-01

    Saliency detection has been applied to the target acquisition case. This paper proposes a two-dimensional hidden Markov model (2D-HMM) that exploits the hidden semantic information of an image to detect its salient regions. A spatial pyramid histogram of oriented gradient descriptors is used to extract features. After encoding the image by a learned dictionary, the 2D-Viterbi algorithm is applied to infer the saliency map. This model can predict fixation of the targets and further create robust and effective depictions of the targets' change in posture and viewpoint. To validate the model with a human visual search mechanism, two eye-tracking experiments are employed to train our model directly from eye movement data. The results show that our model achieves better performance than visual attention. Moreover, it indicates the plausibility of utilizing visual track data to identify targets.

  6. Advanced modulation technology development for earth station demodulator applications. Coded modulation system development

    NASA Technical Reports Server (NTRS)

    Miller, Susan P.; Kappes, J. Mark; Layer, David H.; Johnson, Peter N.

    1990-01-01

    A jointly optimized coded modulation system is described which was designed, built, and tested by COMSAT Laboratories for NASA LeRC and which provides a bandwidth efficiency of 2 bits/s/Hz at an information rate of 160 Mbit/s. A high-speed rate 8/9 encoder with a Viterbi decoder and an octal PSK modem are used to achieve this. The BER performance is approximately 1 dB from the theoretically calculated value for this system at a BER of 5 E-7 under nominal conditions. The system operates in burst mode for downlink applications, and tests have demonstrated very little degradation in performance with frequency and level offset. Unique word miss rate measurements were conducted which demonstrate reliable acquisition at low values of Eb/No. Codec self-tests have verified the performance of this subsystem in a stand-alone mode. The codec is capable of operation at a 200 Mbit/s information rate as demonstrated using a codec test set which introduces noise digitally. The measured performance is within 0.2 dB of the computer-simulated predictions. A gate array implementation of the most time critical element of the high-speed Viterbi decoder was completed. This gate array add-compare-select chip significantly reduces the power consumption and improves the manufacturability of the decoder. This chip has general application in the implementation of high-speed Viterbi decoders.

  7. Time series segmentation: a new approach based on Genetic Algorithm and Hidden Markov Model

    NASA Astrophysics Data System (ADS)

    Toreti, A.; Kuglitsch, F. G.; Xoplaki, E.; Luterbacher, J.

    2009-04-01

    The subdivision of a time series into homogeneous segments has been performed using various methods applied to different disciplines. In climatology, for example, it is accompanied by the well-known homogenization problem and the detection of artificial change points. In this context, we present a new method (GAMM) based on a Hidden Markov Model (HMM) and a Genetic Algorithm (GA), applicable to series of independent observations (and easily adaptable to autoregressive processes). A left-to-right hidden Markov model was applied, estimating the parameters and the best-state sequence with the Baum-Welch and Viterbi algorithms, respectively. In order to avoid the well-known dependence of the Baum-Welch algorithm on the initial condition, a Genetic Algorithm was developed. This algorithm is characterized by mutation, elitism and a crossover procedure implemented with some restrictive rules. Moreover, the function to be minimized was derived following the approach of Kehagias (2004), i.e., the so-called complete log-likelihood. The number of states was determined applying a two-fold cross-validation procedure (Celeux and Durand, 2008). Since this last issue is complex and influences the whole analysis, a Multi-Response Permutation Procedure (MRPP; Mielke et al., 1981) was inserted. It tests the model with K+1 states (where K is the state number of the best model) if its likelihood is close to that of the K-state model. Finally, an evaluation of GAMM's performance, applied as a break detection method in the field of climate time series homogenization, is shown. References: G. Celeux and J.B. Durand, Comput Stat, 2008; A. Kehagias, Stoch Envir Res, 2004; P.W. Mielke, K.J. Berry, G.W. Brier, Monthly Wea Rev, 1981.

  8. Comparison of soft-input-soft-output detection methods for dual-polarized quadrature duobinary system

    NASA Astrophysics Data System (ADS)

    Chang, Chun; Huang, Benxiong; Xu, Zhengguang; Li, Bin; Zhao, Nan

    2018-02-01

    Three soft-input-soft-output (SISO) detection methods for dual-polarized quadrature duobinary (DP-QDB), including maximum-logarithmic-maximum-a-posteriori-probability-algorithm (Max-log-MAP)-based detection, soft-output-Viterbi-algorithm (SOVA)-based detection, and a proposed SISO detection, which can all be combined with SISO decoding, are presented. The three detection methods are investigated at 128 Gb/s in five-channel wavelength-division-multiplexing uncoded and low-density-parity-check (LDPC) coded DP-QDB systems by simulations. Max-log-MAP-based detection needs the returning-to-initial-states (RTIS) process despite having the best performance. When the LDPC code with a code rate of 0.83 is used, the detecting-and-decoding scheme with the SISO detection does not need RTIS and has better bit error rate (BER) performance than the scheme with SOVA-based detection. The former can reduce the optical signal-to-noise ratio (OSNR) requirement (at BER=10-5) by 2.56 dB relative to the latter. The application of the SISO iterative detection in LDPC-coded DP-QDB systems makes a good trade-off among transmission efficiency, OSNR requirement, and transmission distance, compared with the other two SISO methods.

  9. On testing VLSI chips for the big Viterbi decoder

    NASA Technical Reports Server (NTRS)

    Hsu, I. S.

    1989-01-01

    A general technique that can be used in testing very large scale integrated (VLSI) chips for the Big Viterbi Decoder (BVD) system is described. The test technique is divided into functional testing and fault-coverage testing. The purpose of functional testing is to verify that the design works functionally. Functional test vectors are converted from outputs of software simulations which simulate the BVD functionally. Fault-coverage testing is used to detect and, in some cases, to locate faulty components caused by bad fabrication. This type of testing is useful in screening out bad chips. Finally, design for testability, which is included in the BVD VLSI chip design, is described in considerable detail. Both the observability and controllability of a VLSI chip are greatly enhanced by including the design-for-testability features.

  10. Simpler Alternative to an Optimum FQPSK-B Viterbi Receiver

    NASA Technical Reports Server (NTRS)

    Lee, Dennis; Simon, Marvin; Yan, Tsun-Yee

    2003-01-01

    A reduced-complexity alternative to an optimum FQPSK-B Viterbi receiver has been invented. As described, the reduction in complexity is achieved at the cost of only a small reduction in power performance [performance expressed in terms of a bit-energy-to-noise-energy ratio (Eb/N0) for a given bit-error rate (BER)]. The term "FQPSK-B" denotes a baseband-filtered version of Feher quadrature-phase-shift keying, which is a patented, bandwidth-efficient phase-modulation scheme named after its inventor. Heretofore, commercial FQPSK-B receivers have performed symbol-by-symbol detection, in each case using a detection filter (either the proprietary FQPSK-B filter for better BER performance, or a simple integrate-and-dump filter with degraded performance) and a sample-and-hold circuit.

  11. A Hybrid Smartphone Indoor Positioning Solution for Mobile LBS

    PubMed Central

    Liu, Jingbin; Chen, Ruizhi; Pei, Ling; Guinness, Robert; Kuusniemi, Heidi

    2012-01-01

    Smartphone positioning is an enabling technology used to create new business in the navigation and mobile location-based services (LBS) industries. This paper presents a smartphone indoor positioning engine named HIPE that can be easily integrated with mobile LBS. HIPE is a hybrid solution that fuses measurements of smartphone sensors with wireless signals. The smartphone sensors are used to measure the user’s motion dynamics information (MDI), which represent the spatial correlation of various locations. Two algorithms based on hidden Markov model (HMM) problems, the grid-based filter and the Viterbi algorithm, are used in this paper as the central processor for data fusion to resolve the position estimates, and these algorithms are applicable for different applications, e.g., real-time navigation and location tracking, respectively. HIPE is more widely applicable for various motion scenarios than solutions proposed in previous studies because it uses no deterministic motion models, which have been commonly used in previous works. The experimental results showed that HIPE can provide adequate positioning accuracy and robustness for different scenarios of MDI combinations. HIPE is a cost-efficient solution, and it can work flexibly with different smartphone platforms, which may have different types of sensors available for the measurement of MDI data. The reliability of the positioning solution was found to increase with increasing precision of the MDI data. PMID:23235455

  12. Neural speech recognition: continuous phoneme decoding using spatiotemporal representations of human cortical activity

    NASA Astrophysics Data System (ADS)

    Moses, David A.; Mesgarani, Nima; Leonard, Matthew K.; Chang, Edward F.

    2016-10-01

    Objective. The superior temporal gyrus (STG) and neighboring brain regions play a key role in human language processing. Previous studies have attempted to reconstruct speech information from brain activity in the STG, but few of them incorporate the probabilistic framework and engineering methodology used in modern speech recognition systems. In this work, we describe the initial efforts toward the design of a neural speech recognition (NSR) system that performs continuous phoneme recognition on English stimuli with arbitrary vocabulary sizes using the high gamma band power of local field potentials in the STG and neighboring cortical areas obtained via electrocorticography. Approach. The system implements a Viterbi decoder that incorporates phoneme likelihood estimates from a linear discriminant analysis model and transition probabilities from an n-gram phonemic language model. Grid searches were used in an attempt to determine optimal parameterizations of the feature vectors and Viterbi decoder. Main results. The performance of the system was significantly improved by using spatiotemporal representations of the neural activity (as opposed to purely spatial representations) and by including language modeling and Viterbi decoding in the NSR system. Significance. These results emphasize the importance of modeling the temporal dynamics of neural responses when analyzing their variations with respect to varying stimuli and demonstrate that speech recognition techniques can be successfully leveraged when decoding speech from neural signals. Guided by the results detailed in this work, further development of the NSR system could have applications in the fields of automatic speech recognition and neural prosthetics.

  13. Reduced complexity of multi-track joint 2-D Viterbi detectors for bit-patterned media recording channel

    NASA Astrophysics Data System (ADS)

    Myint, L. M. M.; Warisarn, C.

    2017-05-01

    Two-dimensional (2-D) interference is one of the prominent challenges in ultra-high-density recording systems such as bit-patterned media recording (BPMR). The multi-track joint 2-D detection technique, with the help of array-head reading, can tackle this problem effectively by jointly processing the multiple readback signals from adjacent tracks. Moreover, it can robustly alleviate the impairments due to track mis-registration (TMR) and media noise. However, the computational complexity of such detectors is normally too high to implement in practice, even for a few tracks. Therefore, in this paper, we mainly focus on reducing the complexity of the multi-track joint 2-D Viterbi detector without paying a large performance penalty. We propose a simplified multi-track joint 2-D Viterbi detector with a manageable complexity level for the BPMR multi-track multi-head (MTMH) system. In the proposed method, the complexity of the detector's trellis is reduced with the help of a joint-track equalization method which employs 1-D equalizers and a 2-D generalized partial response (GPR) target. Moreover, we also examine the performance of a full-fledged multi-track joint 2-D detector and the conventional 2-D detection. The results show that the simplified detector performs close to the full-fledged detector, especially when the system faces high media noise, at significantly lower complexity.

  14. Experimental investigation of extended Kalman Filter combined with carrier phase recovery for 16-QAM system

    NASA Astrophysics Data System (ADS)

    Shu, Tong; Li, Yan; Yu, Miao; Zhang, Yifan; Zhou, Honghang; Qiu, Jifang; Guo, Hongxiang; Hong, Xiaobin; Wu, Jian

    2018-02-01

    The performance of an extended Kalman filter combined with Viterbi-Viterbi phase estimation (VVPE-EKF) for joint phase noise mitigation and amplitude noise equalization is experimentally demonstrated. Experimental results show that, for 11.2 GBaud SP-16-QAM, the proposed VVPE-EKF achieves a 0.9 dB required-OSNR reduction at a bit error ratio (BER) of 3.8e-3 compared to VVPE. The corresponding reduction for maximum likelihood combined with VVPE (VVPE-ML) is only 0.3 dB. For the 28 GBaud SP-16-QAM signal, VVPE-EKF achieves a 3 dB required-OSNR reduction at BER=3.8e-3 (the 7% HD-FEC threshold) compared to VVPE, while VVPE-ML reduces the required OSNR by 1.7 dB compared to VVPE. VVPE-EKF outperforms DD-EKF by 3.7 dB and 0.7 dB for the 11.2 GBaud and 28 GBaud systems, respectively.
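
    For context, the classic fourth-power Viterbi-Viterbi estimator that VVPE builds on can be sketched as follows for a QPSK-style constellation (illustrative window length, drift rate, and noise level; this is not the paper's 16-QAM VVPE-EKF scheme):

      import numpy as np

      def vv_phase_estimate(rx, window=64):
          # Classic fourth-power Viterbi-Viterbi estimate: raising the symbols to the
          # fourth power strips a QPSK modulation, leaving roughly exp(j*4*phase).
          fourth = rx ** 4
          avg = np.convolve(fourth, np.ones(window) / window, mode="same")
          return np.unwrap(np.angle(avg)) / 4.0

      # QPSK symbols on {1, j, -1, -j} with a slow carrier-phase drift and mild noise.
      rng = np.random.default_rng(0)
      n = 2000
      sym = np.exp(1j * np.pi / 2 * rng.integers(0, 4, n))
      true_phase = 0.1 + 0.0005 * np.arange(n)
      noise = 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
      rx = sym * np.exp(1j * true_phase) + noise

      est = vv_phase_estimate(rx)
      # Residual error away from the window edges should be small (modulo the pi/2
      # ambiguity inherent to fourth-power estimators).
      print(np.mean(np.abs(est - true_phase)[100:-100]))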

  15. FEC combined burst-modem for business satellite communications use

    NASA Astrophysics Data System (ADS)

    Murakami, K.; Miyake, M.; Fuji, T.; Moritani, Y.; Fujino, T.

    The authors recently developed two types of FEC (forward error correction) combined modems, both applicable to low-data-rate and intermediate-data-rate TDMA international satellite communications. Each FEC combined modem consists of a QPSK (quadrature phase-shift keyed) modem, a convolutional encoder, and a Viterbi decoder. Both modems are designed with consideration for fast carrier and bit-timing acquisition and a low cycle-slipping rate in the low-carrier-to-noise-ratio environment. Attention is paid to designing the Viterbi decoder to be operated in a situation in which successive bursts may have different coding rates according to the punctured coding scheme. The overall scheme of the FEC combined modems is presented, and some of the key technologies applied in developing them are outlined. The hardware implementation and experimentation are also discussed. The measured data are compared with results of theoretical analysis, and relatively good performances are obtained.

  16. Enhanced 2/3 four-ary modulation code using soft-decision Viterbi decoding for four-level holographic data storage systems

    NASA Astrophysics Data System (ADS)

    Kong, Gyuyeol; Choi, Sooyong

    2017-09-01

    An enhanced 2/3 four-ary modulation code using soft-decision Viterbi decoding is proposed for four-level holographic data storage systems. While the previous four-ary modulation codes focus on preventing maximum two-dimensional intersymbol interference patterns, the proposed four-ary modulation code aims at maximizing the coding gain for better bit error rate performance. To achieve significant coding gains from four-ary modulation codes, we design, through extensive simulation, a new 2/3 four-ary modulation code that enlarges the free distance on the trellis. The free distance of the proposed four-ary modulation code is extended from 1.21 to 2.04 compared with that of the conventional four-ary modulation code. The simulation result shows that the proposed four-ary modulation code achieves more than 1 dB of gain compared with the conventional four-ary modulation code.

  17. A Context-Recognition-Aided PDR Localization Method Based on the Hidden Markov Model

    PubMed Central

    Lu, Yi; Wei, Dongyan; Lai, Qifeng; Li, Wen; Yuan, Hong

    2016-01-01

    Indoor positioning has recently become an important field of interest because global navigation satellite systems (GNSS) are usually unavailable in indoor environments. Pedestrian dead reckoning (PDR) is a promising localization technique for indoor environments since it can be implemented on widely used smartphones equipped with low cost inertial sensors. However, the PDR localization severely suffers from the accumulation of positioning errors, and other external calibration sources should be used. In this paper, a context-recognition-aided PDR localization model is proposed to calibrate PDR. The context is detected by employing particular human actions or characteristic objects, and it is matched to the context pre-stored offline in the database to get the pedestrian’s location. The Hidden Markov Model (HMM) and Recursive Viterbi Algorithm are used to perform the matching, which reduces the time complexity and saves storage. In addition, the authors design the turn detection algorithm and take the context of a corner as an example to illustrate and verify the proposed model. The experimental results show that the proposed localization method can fix the pedestrian’s starting point quickly and improves the positioning accuracy of PDR by up to 40.56%, with good stability and robustness at the same time. PMID:27916922

  18. Zero velocity interval detection based on a continuous hidden Markov model in micro inertial pedestrian navigation

    NASA Astrophysics Data System (ADS)

    Sun, Wei; Ding, Wei; Yan, Huifang; Duan, Shunli

    2018-06-01

    Shoe-mounted pedestrian navigation systems based on micro inertial sensors rely on zero velocity updates to correct their positioning errors in time, so determining the zero velocity interval plays a key role during normal walking. However, as walking gaits are complicated and vary from person to person, it is difficult to detect them with a fixed-threshold method. This paper proposes a pedestrian gait classification method based on a hidden Markov model. Pedestrian gait data are collected with a micro inertial measurement unit installed at the instep. Based on an analysis of the characteristics of pedestrian walking, a single-direction angular rate gyro output is used to classify gait features. The angular rate data are modeled with a univariate Gaussian mixture model with three components, and a four-state left–right continuous hidden Markov model (CHMM) is designed to classify the normal walking gait. The model parameters are trained and optimized using the Baum–Welch algorithm, and the sliding-window Viterbi algorithm is then used to decode the gait. Walking data are collected from eight subjects walking along the same route at three different speeds; leave-one-subject-out cross-validation is used to test the model. Experimental results show that the proposed algorithm can accurately detect the zero velocity interval across different walking gaits. The location experiment shows that the precision of CHMM-based pedestrian navigation improved by 40% when compared to the angular rate threshold method.
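
    A rough sketch of this training-and-decoding pipeline, using the third-party hmmlearn package (assumed available), a synthetic single-axis gyro trace in place of real instep data, and a small wrap-around transition added so repeated gait cycles can be decoded, is:

      import numpy as np
      from hmmlearn.hmm import GMMHMM      # third-party package, assumed installed

      rng = np.random.default_rng(1)

      # Synthetic single-axis gyro trace cycling through four gait phases
      # (purely illustrative levels and durations, not real instep data).
      phase_means, phase_len, cycles = [0.0, 2.0, 5.0, -2.0], 25, 40
      signal = np.concatenate(
          [rng.normal(m, 0.5, phase_len) for _ in range(cycles) for m in phase_means]
      ).reshape(-1, 1)

      # Four-state left-to-right CHMM with a 3-component Gaussian mixture per state.
      model = GMMHMM(n_components=4, n_mix=3, covariance_type="diag",
                     n_iter=30, init_params="mcw", random_state=1)
      model.startprob_ = np.array([1.0, 0.0, 0.0, 0.0])
      model.transmat_ = np.array([[0.95, 0.05, 0.00, 0.00],
                                  [0.00, 0.95, 0.05, 0.00],
                                  [0.00, 0.00, 0.95, 0.05],
                                  [0.05, 0.00, 0.00, 0.95]])
      model.fit(signal)                         # Baum-Welch (EM) training
      _, states = model.decode(signal)          # Viterbi decoding of the gait phases
      print(states[:phase_len * 4])             # state labels are arbitrary up to permutation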

  19. Convolutional encoding of self-dual codes

    NASA Technical Reports Server (NTRS)

    Solomon, G.

    1994-01-01

    There exist almost complete convolutional encodings of self-dual codes, i.e., block codes of rate 1/2 with weights w, w = 0 mod 4. The codes are of length 8m with the convolutional portion of length 8m-2 and the nonsystematic information of length 4m-1. The last two bits are parity checks on the two (4m-1) length parity sequences. The final information bit complements one of the extended parity sequences of length 4m. Solomon and van Tilborg have developed algorithms to generate these for the Quadratic Residue (QR) Codes of lengths 48 and beyond. For these codes and reasonable constraint lengths, there are sequential decodings for both hard and soft decisions. There are also possible Viterbi-type decodings that may be simple, as in a convolutional encoding/decoding of the extended Golay Code. In addition, the previously found constraint length K = 9 for the QR (48, 24;12) Code is lowered here to K = 8.

  20. Iterative decoding of SOVA and LDPC product code for bit-patterned media recording

    NASA Astrophysics Data System (ADS)

    Jeong, Seongkwon; Lee, Jaejin

    2018-05-01

    The demand for high-density storage systems has increased due to the exponential growth of data. Bit-patterned media recording (BPMR) is one of the promising technologies to achieve densities of 1 Tbit/in2 and higher. To increase the areal density in BPMR, the spacing between islands needs to be reduced, yet this aggravates inter-symbol interference and inter-track interference and degrades the bit error rate performance. In this paper, we propose a decision feedback scheme using low-density parity check (LDPC) product code for BPMR. This scheme can improve the decoding performance using an iterative approach that exchanges extrinsic information and log-likelihood ratio values between the iterative soft-output Viterbi algorithm and the LDPC product code. Simulation results show that the proposed LDPC product code can offer 1.8 dB and 2.3 dB gains over a single LDPC code at densities of 2.5 and 3 Tb/in2, respectively, at a bit error rate of 10-6.

  1. Advanced imaging communication system

    NASA Technical Reports Server (NTRS)

    Hilbert, E. E.; Rice, R. F.

    1977-01-01

    Key elements of the system are imaging and nonimaging sensors, a data compressor/decompressor, an interleaved Reed-Solomon block coder, a convolutional-encoded/Viterbi-decoded telemetry channel, and Reed-Solomon decoding. Data compression provides efficient representation of sensor data, and channel coding improves reliability of data transmission.

  2. Incorporating sequence information into the scoring function: a hidden Markov model for improved peptide identification.

    PubMed

    Khatun, Jainab; Hamlett, Eric; Giddings, Morgan C

    2008-03-01

    The identification of peptides by tandem mass spectrometry (MS/MS) is a central method of proteomics research, but due to the complexity of MS/MS data and the large databases searched, the accuracy of peptide identification algorithms remains limited. To improve the accuracy of identification we applied a machine-learning approach using a hidden Markov model (HMM) to capture the complex and often subtle links between a peptide sequence and its MS/MS spectrum. Our model, HMM_Score, represents ion types as HMM states and calculates the maximum joint probability for a peptide/spectrum pair using emission probabilities from three factors: the amino acids adjacent to each fragmentation site, the mass dependence of ion types and the intensity dependence of ion types. The Viterbi algorithm is used to calculate the most probable assignment between ion types in a spectrum and a peptide sequence, then a correction factor is added to account for the propensity of the model to favor longer peptides. An expectation value is calculated based on the model score to assess the significance of each peptide/spectrum match. We trained and tested HMM_Score on three data sets generated by two different mass spectrometer types. For a reference data set recently reported in the literature and validated using seven identification algorithms, HMM_Score produced 43% more positive identification results at a 1% false positive rate than the best of two other commonly used algorithms, Mascot and X!Tandem. HMM_Score is a highly accurate platform for peptide identification that works well for a variety of mass spectrometer and biological sample types. The program is freely available on ProteomeCommons via an OpenSource license. See http://bioinfo.unc.edu/downloads/ for the download link.

  3. Coding gains and error rates from the Big Viterbi Decoder

    NASA Technical Reports Server (NTRS)

    Onyszchuk, I. M.

    1991-01-01

    A prototype hardware Big Viterbi Decoder (BVD) was completed for an experiment with the Galileo Spacecraft. Searches for new convolutional codes, studies of Viterbi decoder hardware designs and architectures, mathematical formulations, and decompositions of the deBruijn graph into identical and hierarchical subgraphs, and very large scale integration (VLSI) chip design are just a few examples of tasks completed for this project. The BVD bit error rates (BER), measured from hardware and software simulations, are plotted as a function of bit signal to noise ratio E sub b/N sub 0 on the additive white Gaussian noise channel. Using the constraint length 15, rate 1/4, experimental convolutional code for the Galileo mission, the BVD gains 1.5 dB over the NASA standard (7,1/2) Maximum Likelihood Convolution Decoder (MCD) at a BER of 0.005. At this BER, the same gain results when the (255,233) NASA standard Reed-Solomon decoder is used, which yields a word error rate of 2.1 x 10(exp -8) and a BER of 1.4 x 10(exp -9). The (15, 1/6) code to be used by the Cometary Rendezvous Asteroid Flyby (CRAF)/Cassini Missions yields 1.7 dB of coding gain. These gains are measured with respect to symbols input to the BVD and increase with decreasing BER. Also, 8-bit input symbol quantization makes the BVD resistant to demodulated signal-level variations. Because the longer codes require higher bandwidth than the NASA (7,1/2) code, these gains are offset by about 0.1 dB of expected additional receiver losses. Coding gains of several decibels are possible by compressing all spacecraft data.

  4. Frame synchronization for the Galileo code

    NASA Technical Reports Server (NTRS)

    Arnold, S.; Swanson, L.

    1991-01-01

    Results are reported on the performance of the Deep Space Network's frame synchronizer for the (15,1/4) convolutional code after Viterbi decoding. The threshold is found that optimizes the probability of acquiring true sync within four frames using a strategy that requires next frame verification.

  5. Optimum Code Rates for Noncoherent MFSK with Errors and Erasures Decoding over Rayleigh Fading Channels

    NASA Technical Reports Server (NTRS)

    Matache, Adina; Ritcey, James A.

    1997-01-01

    In this paper, we analyze the performance of a communication system employing M-ary frequency shift keying (FSK) modulation with errors-and-erasures decoding, using the Viterbi ratio threshold technique for erasure insertion, in Rayleigh fading and AWGN channels.

  6. A clinical decision-making mechanism for context-aware and patient-specific remote monitoring systems using the correlations of multiple vital signs.

    PubMed

    Forkan, Abdur Rahim Mohammad; Khalil, Ibrahim

    2017-02-01

    In home-based context-aware monitoring, a patient's real-time data for multiple vital signs (e.g., heart rate, blood pressure) are continuously generated from wearable sensors. The changes in such vital parameters are highly correlated. They are also patient-centric and can be either recurrent or fluctuating. The objective of this study is to develop an intelligent method for personalized monitoring and clinical decision support through early estimation of patient-specific vital sign values, and prediction of anomalies using the interrelation among multiple vital signs. In this paper, multi-label classification algorithms are applied in classifier design to forecast these values and related abnormalities. We propose a new approach to a patient-specific vital sign prediction system that uses the correlations among the vitals. The developed technique can guide healthcare professionals in making accurate clinical decisions. Moreover, our model can support many patients with various clinical conditions concurrently by utilizing the power of cloud computing technology. The developed method also reduces the rate of false predictions in remote monitoring centres. In the experimental settings, the statistical features and correlations of six vital signs are formulated as a multi-label classification problem. Eight multi-label classification algorithms along with three fundamental machine learning algorithms are used and tested on a public dataset of 85 patients. Different multi-label classification evaluation measures such as Hamming score, F1-micro average, and accuracy are used for interpreting the prediction performance of patient-specific situation classifications. We achieved 90-95% Hamming score values across 24 classifier combinations for the 85 patients used in our experiment. The results are compared with those of single-label classifiers and with models that do not consider the correlations among the vitals. The comparisons show that the multi-label method is the best technique for this problem domain. The evaluation results reveal that multi-label classification techniques using the correlations among multiple vitals are effective for early estimation of future values of those vitals. In context-aware remote monitoring, this process can greatly help doctors make quick diagnostic decisions. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
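
    One way to realize such a multi-label formulation is a classifier chain, in which each label's prediction conditions on the previously predicted labels and thereby exploits the correlations among the vitals. The sketch below uses invented synthetic features and labels (not the paper's dataset) and computes the Hamming score as 1 minus the Hamming loss, which may differ from the paper's exact definition:

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import f1_score, hamming_loss
      from sklearn.model_selection import train_test_split
      from sklearn.multioutput import ClassifierChain

      rng = np.random.default_rng(0)

      # Invented stand-in data: 12 statistical features per window and one binary
      # "abnormal" label per vital sign (6 correlated labels).
      n, n_features, n_labels = 600, 12, 6
      X = rng.standard_normal((n, n_features))
      W = rng.standard_normal((n_features, n_labels))
      Y = ((X @ W + 0.5 * rng.standard_normal((n, n_labels))) > 0).astype(int)
      X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.3, random_state=0)

      # Each label's classifier conditions on the labels predicted before it,
      # which is one way to exploit the correlations among the vitals.
      chain = ClassifierChain(LogisticRegression(max_iter=1000), random_state=0)
      chain.fit(X_tr, Y_tr)
      Y_hat = chain.predict(X_te)

      print("Hamming score:", 1.0 - hamming_loss(Y_te, Y_hat))
      print("F1 micro     :", f1_score(Y_te, Y_hat, average="micro"))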

  7. Automatically-computed prehospital severity scores are equivalent to scores based on medic documentation.

    PubMed

    Reisner, Andrew T; Chen, Liangyou; McKenna, Thomas M; Reifman, Jaques

    2008-10-01

    Prehospital severity scores can be used in routine prehospital care, mass casualty care, and military triage. If computers could reliably calculate clinical scores, new clinical and research methodologies would be possible. One obstacle is that vital signs measured automatically can be unreliable. We hypothesized that Signal Quality Indices (SQI's), computer algorithms that differentiate between reliable and unreliable monitored physiologic data, could improve the predictive power of computer-calculated scores. In a retrospective analysis of trauma casualties transported by air ambulance, we computed the Triage Revised Trauma Score (RTS) from archived travel monitor data. We compared the areas-under-the-curve (AUC's) of receiver operating characteristic curves for prediction of mortality and red blood cell transfusion for 187 subjects with comparable quantities of good-quality and poor-quality data. Vital signs deemed reliable by SQI's led to significantly more discriminatory severity scores than vital signs deemed unreliable. We also compared automatically-computed RTS (using the SQI's) versus RTS computed from vital signs documented by medics. For the subjects in whom the SQI algorithms identified 15 consecutive seconds of reliable vital signs data (n = 350), the automatically-computed scores' AUC's were the same as the medic-based scores' AUC's. Using the Prehospital Index in place of RTS led to very similar results, corroborating our findings. SQI algorithms improve automatically-computed severity scores, and automatically-computed scores using SQI's are equivalent to medic-based scores.
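
    As an illustration of the score itself, the Triage RTS sums coded values (0-4 each) for the Glasgow Coma Scale, systolic blood pressure, and respiratory rate. The sketch below uses commonly cited cutoffs and invented casualty data, and evaluates discrimination with an ROC AUC as in the paper's analysis:

      from sklearn.metrics import roc_auc_score

      # Commonly cited coded-value cutoffs (0-4) for the Triage Revised Trauma Score.
      def code_gcs(gcs):
          return 4 if gcs >= 13 else 3 if gcs >= 9 else 2 if gcs >= 6 else 1 if gcs >= 4 else 0

      def code_sbp(sbp):
          return 4 if sbp > 89 else 3 if sbp >= 76 else 2 if sbp >= 50 else 1 if sbp >= 1 else 0

      def code_rr(rr):
          return 4 if 10 <= rr <= 29 else 3 if rr > 29 else 2 if rr >= 6 else 1 if rr >= 1 else 0

      def triage_rts(gcs, sbp, rr):
          # Sum of the three coded values, range 0 (worst) to 12 (best).
          return code_gcs(gcs) + code_sbp(sbp) + code_rr(rr)

      # Invented casualties: (GCS, systolic BP, respiratory rate, died).
      casualties = [(15, 120, 18, 0), (14, 95, 22, 0), (12, 80, 30, 0),
                    (8, 70, 8, 1), (6, 45, 4, 1), (3, 0, 0, 1)]
      scores = [triage_rts(g, s, r) for g, s, r, _ in casualties]
      died = [d for *_, d in casualties]

      # Lower RTS means higher severity, so negate the score for the AUC direction.
      print("AUC:", roc_auc_score(died, [-s for s in scores]))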

  8. Test aspects of the JPL Viterbi decoder

    NASA Technical Reports Server (NTRS)

    Breuer, M. A.

    1989-01-01

    The generation of test vectors and design-for-test aspects of the Jet Propulsion Laboratory (JPL) Very Large Scale Integration (VLSI) Viterbi decoder chip are discussed. Each processor integrated circuit (IC) contains over 20,000 gates. To achieve a high degree of testability, a scan architecture is employed. The logic has been partitioned so that very few test vectors are required to test the entire chip. In addition, since several blocks of logic are replicated numerous times on this chip, test vectors need only be generated for each block, rather than for the entire circuit. These unique blocks of logic have been identified and test sets generated for them. The approach employed for testing was to use pseudo-exhaustive test vectors whenever feasible. That is, each cone of logic is tested exhaustively. Using this approach, no detailed logic design or fault model is required. All faults which modify the function of a block of combinational logic are detected, such as all irredundant single and multiple stuck-at faults.

  9. Scheduling in Sensor Grid Middleware for Telemedicine Using ABC Algorithm

    PubMed Central

    Vigneswari, T.; Mohamed, M. A. Maluk

    2014-01-01

    Advances in microelectromechanical systems (MEMS) and nanotechnology have enabled the design of low-power wireless sensor nodes capable of sensing different vital signs in our body. These nodes can communicate with each other to aggregate data and transmit vital parameters to a base station (BS). The data collected in the base station can be used to monitor health in real time. The patient wearing the sensors may be mobile, leading to aggregation of data from different BSs for processing. Processing real-time data is compute-intensive, and telemedicine facilities may not have appropriate hardware to process it effectively. To overcome this, the sensor grid has been proposed in the literature, wherein sensor data are integrated into the grid for processing. This work proposes a scheduling algorithm to efficiently process telemedicine data in the grid. The proposed algorithm uses the artificial bee colony (ABC) swarm intelligence algorithm for scheduling to overcome the NP-complete problem of grid scheduling. Results compared with other heuristic scheduling algorithms show the effectiveness of the proposed algorithm. PMID:25548557

  10. Cross-Layer Design for Robust and Scalable Video Transmission in Dynamic Wireless Environment

    DTIC Science & Technology

    2011-02-01

    [Only fragmentary text is available for this DTIC record: it concerns prioritized rate-compatible punctured convolutional (RCPC) codes for Viterbi decoding, cites "New rate-compatible punctured convolutional codes for Viterbi decoding," IEEE Trans. Communications, vol. 42, no. 12, pp. 3073-3079, and lists the acronyms QoS (quality of service), RCPC (rate-compatible and punctured convolutional codes), and SNR (signal-to-noise ratio).]

  11. Sensitivity and specificity of administrative mortality data for identifying prescription opioid–related deaths

    PubMed Central

    Gladstone, Emilie; Smolina, Kate; Morgan, Steven G.; Fernandes, Kimberly A.; Martins, Diana; Gomes, Tara

    2016-01-01

    Background: Comprehensive systems for surveilling prescription opioid–related harms provide clear evidence that deaths from prescription opioids have increased dramatically in the United States. However, these harms are not systematically monitored in Canada. In light of a growing public health crisis, accessible, nationwide data sources to examine prescription opioid–related harms in Canada are needed. We sought to examine the performance of 5 algorithms to identify prescription opioid–related deaths from vital statistics data against data abstracted from the Office of the Chief Coroner of Ontario as a gold standard. Methods: We identified all prescription opioid–related deaths from Ontario coroners’ data that occurred between Jan. 31, 2003, and Dec. 31, 2010. We then used 5 different algorithms to identify prescription opioid–related deaths from vital statistics death data in 2010. We selected the algorithm with the highest sensitivity and a positive predictive value of more than 80% as the optimal algorithm for identifying prescription opioid–related deaths. Results: Four of the 5 algorithms had positive predictive values of more than 80%. The algorithm with the highest sensitivity (75%) in 2010 improved slightly in its predictive performance from 2003 to 2010. Interpretation: In the absence of specific systems for monitoring prescription opioid–related deaths in Canada, readily available national vital statistics data can be used to study prescription opioid–related mortality with considerable accuracy. Despite some limitations, these data may facilitate the implementation of national surveillance and monitoring strategies. PMID:26622006

  12. Sensitivity and specificity of administrative mortality data for identifying prescription opioid-related deaths.

    PubMed

    Gladstone, Emilie; Smolina, Kate; Morgan, Steven G; Fernandes, Kimberly A; Martins, Diana; Gomes, Tara

    2016-03-01

    Comprehensive systems for surveilling prescription opioid-related harms provide clear evidence that deaths from prescription opioids have increased dramatically in the United States. However, these harms are not systematically monitored in Canada. In light of a growing public health crisis, accessible, nationwide data sources to examine prescription opioid-related harms in Canada are needed. We sought to examine the performance of 5 algorithms to identify prescription opioid-related deaths from vital statistics data against data abstracted from the Office of the Chief Coroner of Ontario as a gold standard. We identified all prescription opioid-related deaths from Ontario coroners' data that occurred between Jan. 31, 2003, and Dec. 31, 2010. We then used 5 different algorithms to identify prescription opioid-related deaths from vital statistics death data in 2010. We selected the algorithm with the highest sensitivity and a positive predictive value of more than 80% as the optimal algorithm for identifying prescription opioid-related deaths. Four of the 5 algorithms had positive predictive values of more than 80%. The algorithm with the highest sensitivity (75%) in 2010 improved slightly in its predictive performance from 2003 to 2010. In the absence of specific systems for monitoring prescription opioid-related deaths in Canada, readily available national vital statistics data can be used to study prescription opioid-related mortality with considerable accuracy. Despite some limitations, these data may facilitate the implementation of national surveillance and monitoring strategies. © 2016 Canadian Medical Association or its licensors.
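
    The headline performance measures reduce to simple ratios over a confusion matrix against the coroner gold standard. The counts below are invented, chosen only to be of the same order as the reported 75% sensitivity and >80% positive predictive value:

      def screening_performance(tp, fp, fn, tn):
          # Standard case-finding measures against a gold standard.
          sensitivity = tp / (tp + fn)    # share of true opioid-related deaths flagged
          specificity = tn / (tn + fp)    # share of non-cases correctly not flagged
          ppv = tp / (tp + fp)            # share of flagged deaths that are true cases
          return sensitivity, specificity, ppv

      # Invented counts for one hypothetical algorithm.
      sens, spec, ppv = screening_performance(tp=150, fp=30, fn=50, tn=9770)
      print(f"sensitivity={sens:.2f}, specificity={spec:.3f}, PPV={ppv:.2f}")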

  13. A low-complexity and high performance concatenated coding scheme for high-speed satellite communications

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Rhee, Dojun; Rajpal, Sandeep

    1993-01-01

    This report presents a low-complexity and high performance concatenated coding scheme for high-speed satellite communications. In this proposed scheme, the NASA Standard Reed-Solomon (RS) code over GF(2(exp 8)) is used as the outer code and the second-order Reed-Muller (RM) code of Hamming distance 8 is used as the inner code. The RM inner code has a very simple trellis structure and is decoded with the soft-decision Viterbi decoding algorithm. It is shown that the proposed concatenated coding scheme achieves an error performance which is comparable to that of the NASA TDRS concatenated coding scheme in which the NASA Standard rate-1/2 convolutional code of constraint length 7 and d sub free = 10 is used as the inner code. However, the proposed RM inner code has much smaller decoding complexity, less decoding delay, and much higher decoding speed. Consequently, the proposed concatenated coding scheme is suitable for reliable high-speed satellite communications, and it may be considered as an alternate coding scheme for the NASA TDRS system.

  14. Development and Validation of a Machine Learning Algorithm and Hybrid System to Predict the Need for Life-Saving Interventions in Trauma Patients

    DTIC Science & Technology

    2014-01-01

    were stored at a rate of 1 Hz. In addition, ECG waveform data from a single lead and pleth waveform data from a thumb-mounted pulse oximeter to the...blood oxygenation (SpO2). Combinations of these vital signs were also used to derive other measurements including shock index (SI = HR/SBP) and pulse ...combining all vital signs, trends, and pulse characteristics recorded by the monitor, and applying a multivariate sensor fusion algorithm that generates

  15. Vital Sign Monitoring and Mobile Phone Usage Detection Using IR-UWB Radar for Intended Use in Car Crash Prevention.

    PubMed

    Leem, Seong Kyu; Khan, Faheem; Cho, Sung Ho

    2017-05-30

    In order to avoid car crashes, active safety systems are becoming more and more important. Many crashes are caused by driver drowsiness or mobile phone usage. Detecting the drowsiness of the driver is very important for the safety of a car. Monitoring of vital signs such as respiration rate and heart rate is important to determine the occurrence of driver drowsiness. In this paper, robust vital signs monitoring through impulse radio ultra-wideband (IR-UWB) radar is discussed. We propose a new algorithm that can estimate the vital signs even if there is motion caused by the driving activities. We analyzed the whole fast time vital detection region and found the signals at those fast time locations that have useful information related to the vital signals. We segmented those signals into sub-signals and then constructed the desired vital signal using the correlation method. In this way, the vital signs of the driver can be monitored noninvasively, which can be used by researchers to detect the drowsiness of the driver, which is related to the vital signs, i.e., respiration and heart rate. In addition, texting on a mobile phone during driving may cause visual, manual or cognitive distraction of the driver. In order to reduce accidents caused by a distracted driver, we propose an algorithm that can perfectly detect a driver's mobile phone usage even if there are various motions of the driver in the car or changes in background objects. These novel techniques, which monitor vital signs associated with drowsiness and detect phone usage before a driver makes a mistake, may be very helpful in developing techniques for preventing a car crash.
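
    As a rough illustration of the signal-processing problem described above (and not the paper's segmentation-and-correlation method), the following Python sketch estimates respiration and heart rates from a simulated slow-time radar displacement signal by picking spectral peaks in physiologically plausible bands; the sampling rate, signal amplitudes and band limits are assumptions.

```python
import numpy as np

# Simplified sketch: estimate respiration and heart rates from a simulated
# slow-time radar displacement signal by locating spectral peaks in
# physiologically plausible frequency bands.

fs = 20.0                                  # slow-time sampling rate (Hz), assumed
t = np.arange(0, 30, 1 / fs)               # 30 s observation window
resp = 0.5 * np.sin(2 * np.pi * 0.25 * t)  # respiration ~15 breaths/min
heart = 0.05 * np.sin(2 * np.pi * 1.2 * t) # heartbeat ~72 beats/min
signal = resp + heart + 0.02 * np.random.randn(t.size)

spectrum = np.abs(np.fft.rfft(signal * np.hanning(t.size)))
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def peak_in_band(lo, hi):
    """Return the frequency of the largest spectral peak inside [lo, hi] Hz."""
    band = (freqs >= lo) & (freqs <= hi)
    return freqs[band][np.argmax(spectrum[band])]

resp_rate = peak_in_band(0.1, 0.5) * 60    # breaths per minute
heart_rate = peak_in_band(0.8, 2.0) * 60   # beats per minute
print(f"respiration ~ {resp_rate:.1f} breaths/min, heart rate ~ {heart_rate:.1f} bpm")
```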

  16. Vital Sign Monitoring and Mobile Phone Usage Detection Using IR-UWB Radar for Intended Use in Car Crash Prevention

    PubMed Central

    Leem, Seong Kyu; Khan, Faheem; Cho, Sung Ho

    2017-01-01

    In order to avoid car crashes, active safety systems are becoming more and more important. Many crashes are caused due to driver drowsiness or mobile phone usage. Detecting the drowsiness of the driver is very important for the safety of a car. Monitoring of vital signs such as respiration rate and heart rate is important to determine the occurrence of driver drowsiness. In this paper, robust vital signs monitoring through impulse radio ultra-wideband (IR-UWB) radar is discussed. We propose a new algorithm that can estimate the vital signs even if there is motion caused by the driving activities. We analyzed the whole fast time vital detection region and found the signals at those fast time locations that have useful information related to the vital signals. We segmented those signals into sub-signals and then constructed the desired vital signal using the correlation method. In this way, the vital signs of the driver can be monitored noninvasively, which can be used by researchers to detect the drowsiness of the driver which is related to the vital signs i.e., respiration and heart rate. In addition, texting on a mobile phone during driving may cause visual, manual or cognitive distraction of the driver. In order to reduce accidents caused by a distracted driver, we proposed an algorithm that can detect perfectly a driver's mobile phone usage even if there are various motions of the driver in the car or changes in background objects. These novel techniques, which monitor vital signs associated with drowsiness and detect phone usage before a driver makes a mistake, may be very helpful in developing techniques for preventing a car crash. PMID:28556818

  17. Statewide real-time in-flight trauma patient vital signs collection system.

    PubMed

    Hu, Peter F; Mackenzie, Colin; Dutton, Richard; Sen, Ayan; Xiao, Yan; Handley, Christopher; Ho, Danny; Scalea, Thomas

    2008-11-06

    Continuous recorded in-flight vital signs monitoring and life-saving interventions linked to outcomes may provide better understanding of pre-hospital triage, care management and patient responses during the 'golden hour' of trauma care. Evaluation of 157 patients' vital signs data collected from our statewide network has identified episodes of physiological decompensation which holds promise for creation of new triage algorithms and enhanced trauma center preparedness.

  18. Coding performance of the Probe-Orbiter-Earth communication link

    NASA Technical Reports Server (NTRS)

    Divsalar, D.; Dolinar, S.; Pollara, F.

    1993-01-01

    The coding performance of the Probe-Orbiter-Earth communication link is analyzed and compared for several cases. It is assumed that the coding system consists of a convolutional code at the Probe, a quantizer and another convolutional code at the Orbiter, and two cascaded Viterbi decoders or a combined decoder on the ground.

  19. The power of neural nets

    NASA Technical Reports Server (NTRS)

    Ryan, J. P.; Shah, B. H.

    1987-01-01

    Implementation of the Hopfield net which is used in the image processing type of applications where only partial information about the image may be available is discussed. The image classification type of algorithm of Hopfield and other learning algorithms, such as the Boltzmann machine and the back-propagation training algorithm, have many vital applications in space.

  20. Is Respiration-Induced Variation in the Photoplethysmogram Associated with Major Hypovolemia in Patients with Acute Traumatic Injuries?

    DTIC Science & Technology

    2010-11-01

    hypovolemia in the prehospital environment. Photoplethysmogram waveforms and basic vital signs were recorded in trauma patients during prehospital...transport. Retrospectively, we used automated algorithms to select patient records with all five basic vital signs and 45 s or longer continuous, clean PPG... basic vital signs by applying multivariate regression. In 344 patients, RIWV max-min yielded areas under the ROC curves (AUCs) not significantly better

  1. Visualization of Whole-Night Sleep EEG From 2-Channel Mobile Recording Device Reveals Distinct Deep Sleep Stages with Differential Electrodermal Activity.

    PubMed

    Onton, Julie A; Kang, Dae Y; Coleman, Todd P

    2016-01-01

    Brain activity during sleep is a powerful marker of overall health, but sleep lab testing is prohibitively expensive and only indicated for major sleep disorders. This report demonstrates that mobile 2-channel in-home electroencephalogram (EEG) recording devices provided sufficient information to detect and visualize sleep EEG. Displaying whole-night sleep EEG in a spectral display allowed for quick assessment of general sleep stability, cycle lengths, stage lengths, dominant frequencies and other indices of sleep quality. By visualizing spectral data down to 0.1 Hz, a differentiation emerged between slow-wave sleep with dominant frequency between 0.1-1 Hz or 1-3 Hz, but rarely both. Thus, we present here the new designations, Hi and Lo Deep sleep, according to the frequency range with dominant power. Simultaneously recorded electrodermal activity (EDA) was primarily associated with Lo Deep and very rarely with Hi Deep or any other stage. Therefore, Hi and Lo Deep sleep appear to be physiologically distinct states that may serve unique functions during sleep. We developed an algorithm to classify five stages (Awake, Light, Hi Deep, Lo Deep and rapid eye movement (REM)) using a Hidden Markov Model (HMM), model fitting with the expectation-maximization (EM) algorithm, and estimation of the most likely sleep state sequence by the Viterbi algorithm. The resulting automatically generated sleep hypnogram can help clinicians interpret the spectral display and help researchers computationally quantify sleep stages across participants. In conclusion, this study demonstrates the feasibility of in-home sleep EEG collection, a rapid and informative sleep report format, and novel deep sleep designations accounting for spectral and physiological differences.
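
    The staging pipeline described above (expectation-maximization fitting of an HMM followed by Viterbi decoding of the most likely state sequence) can be sketched with the hmmlearn package, whose fit() runs Baum-Welch and whose predict() runs the Viterbi algorithm. The two spectral features, the epoch count and the random data below are placeholders, not the study's EEG features.

```python
import numpy as np
from hmmlearn import hmm

# Generic sketch of the EM + Viterbi pipeline: fit() performs Baum-Welch
# parameter estimation and predict() returns the Viterbi state path.
# The 5 hidden states stand in for Awake, Light, Hi Deep, Lo Deep and REM.

rng = np.random.default_rng(0)
# Fake whole-night feature matrix: one row per 30-s epoch, columns are
# illustrative spectral features (e.g., 0.1-1 Hz and 1-3 Hz log power).
epochs = 960                                # roughly 8 hours of 30-s epochs
features = rng.normal(size=(epochs, 2))

model = hmm.GaussianHMM(n_components=5, covariance_type="diag", n_iter=50,
                        random_state=0)
model.fit(features)                         # Baum-Welch (EM) parameter fitting
state_sequence = model.predict(features)    # most likely path via Viterbi

# A hypnogram is just the decoded state index plotted over time.
print(state_sequence[:20])
```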

  2. Algorithmic tools for interpreting vital signs.

    PubMed

    Rathbun, Melina C; Ruth-Sahd, Lisa A

    2009-07-01

    Today's complex world of nursing practice challenges nurse educators to develop teaching methods that promote critical thinking skills and foster quick problem solving in the novice nurse. Traditional pedagogies previously used in the classroom and clinical setting are no longer adequate to prepare nursing students for entry into practice. In addition, educators have expressed frustration when encouraging students to apply newly learned theoretical content to direct the care of assigned patients in the clinical setting. This article presents algorithms as an innovative teaching strategy to guide novice student nurses in the interpretation and decision making related to vital sign assessment in an acute care setting.

  3. VLSI single-chip (255,223) Reed-Solomon encoder with interleaver

    NASA Technical Reports Server (NTRS)

    Hsu, In-Shek (Inventor); Deutsch, Leslie J. (Inventor); Truong, Trieu-Kie (Inventor); Reed, Irving S. (Inventor)

    1990-01-01

    The invention relates to a concatenated Reed-Solomon/convolutional encoding system consisting of a Reed-Solomon outer code and a convolutional inner code for downlink telemetry in space missions, and more particularly to a Reed-Solomon encoder with programmable interleaving of the information symbols and code correction symbols to combat error bursts in the Viterbi decoder.

  4. Viterbi Tracking of Randomly Phase-Modulated Data (and Related Topics).

    DTIC Science & Technology

    1982-08-10

    Denote the conditional probability mass function of θk, given Ak, by p(θk | Ak). For the (4, 4) diagram of Fig. 2(d), i, j even... The problem of FM demodulation has a long history...

  5. System Design for FEC in Aeronautical Telemetry

    DTIC Science & Technology

    2012-03-12

    rate punctured convolutional codes for soft decision Viterbi...below follows that given in [8]. The final coding rate of exactly 2/3 is achieved by puncturing the rate-1/2 code as follows. We begin with the buffer c1...concatenated convolutional code (SCCC). The contributions of this paper are on the system-design level. One major contribution is to design an SCCC code

  6. Recent advances in coding theory for near error-free communications

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.; Deutsch, L. J.; Dolinar, S. J.; Mceliece, R. J.; Pollara, F.; Shahshahani, M.; Swanson, L.

    1991-01-01

    Channel and source coding theories are discussed. The following subject areas are covered: large constraint length convolutional codes (the Galileo code); decoder design (the big Viterbi decoder); Voyager's and Galileo's data compression scheme; current research in data compression for images; neural networks for soft decoding; neural networks for source decoding; finite-state codes; and fractals for data compression.

  7. Method for Viterbi decoding of large constraint length convolutional codes

    NASA Technical Reports Server (NTRS)

    Hsu, In-Shek (Inventor); Truong, Trieu-Kie (Inventor); Reed, Irving S. (Inventor); Jing, Sun (Inventor)

    1988-01-01

    A new method of Viterbi decoding of convolutional codes lends itself to a pipeline VLSI architecture using a single sequential processor to compute the path metrics in the Viterbi trellis. An array method is used to store the path information for NK intervals, where N is an integer and K is the constraint length. The selected path at the end of each NK interval is then selected from the last entry in the array. A trace-back method is used for returning to the beginning of the selected path, i.e., to the first time unit of the interval NK, to read out the stored branch metrics of the selected path which correspond to the message bits. The decoding decision made in this way is no longer maximum likelihood, but can be almost as good, provided that the constraint length K is not too small. The advantage is that for a long message, it is not necessary to provide a large memory to store the trellis-derived information until the end of the message to select the path that is to be decoded; the selection is made at the end of every NK time units, thus decoding a long message in successive blocks.
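
    For concreteness, here is a minimal hard-decision Viterbi decoder for a toy rate-1/2, constraint-length-3 code. It is not the patented design: it traces back once at the end of the message, whereas the method above traces back after every NK trellis sections so the survivor memory stays bounded; the generator polynomials are illustrative.

```python
import random

# Minimal hard-decision Viterbi decoder for a rate-1/2, constraint-length-3
# convolutional code with (illustrative) generators 7 and 5 octal.

G = [0b111, 0b101]            # generator polynomials
K = 3                         # constraint length
N_STATES = 1 << (K - 1)       # 4 trellis states

def branch_output(state, bit):
    """Next state and the two coded bits for one trellis branch."""
    reg = (bit << (K - 1)) | state
    return reg >> 1, [bin(reg & g).count("1") & 1 for g in G]

def encode(bits):
    state, out = 0, []
    for b in bits:
        state, symbols = branch_output(state, b)
        out.extend(symbols)
    return out

def viterbi_decode(coded):
    INF = float("inf")
    metric = [0.0] + [INF] * (N_STATES - 1)       # start in the all-zero state
    history = []                                  # survivor back-pointers per step
    for t in range(len(coded) // 2):
        received = coded[2 * t:2 * t + 2]
        new_metric, back = [INF] * N_STATES, [None] * N_STATES
        for s in range(N_STATES):
            if metric[s] == INF:
                continue
            for b in (0, 1):                      # add-compare-select
                nxt, expected = branch_output(s, b)
                m = metric[s] + sum(x != y for x, y in zip(expected, received))
                if m < new_metric[nxt]:
                    new_metric[nxt], back[nxt] = m, (s, b)
        metric = new_metric
        history.append(back)
    # Trace back from the best final state.  (The patent instead traces back
    # after every N*K trellis sections so the survivor memory stays bounded.)
    state = min(range(N_STATES), key=lambda s: metric[s])
    bits = []
    for back in reversed(history):
        state, b = back[state]
        bits.append(b)
    return bits[::-1]

if __name__ == "__main__":
    message = [random.randint(0, 1) for _ in range(24)]
    assert viterbi_decode(encode(message)) == message
    print("decoded", len(message), "bits correctly")
```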

  8. Use of a novel electronic maternal surveillance system to generate automated alerts on the labor and delivery unit.

    PubMed

    Klumpner, Thomas T; Kountanis, Joanna A; Langen, Elizabeth S; Smith, Roger D; Tremper, Kevin K

    2018-06-26

    Maternal early warning systems reduce maternal morbidity. We developed an electronic maternal surveillance system capable of visually summarizing the labor and delivery census and identifying changes in clinical status. Automatic page alerts to clinical providers, using an algorithm developed at our institution, were incorporated in an effort to improve early detection of maternal morbidity. We report the frequency of pages generated by the system. To our knowledge, this is the first time such a system has been used in peripartum care. Alert criteria were developed after review of maternal early warning systems, including the Maternal Early Warning Criteria (MEWC). Careful consideration was given to the frequency of pages generated by the surveillance system. MEWC notification criteria were liberalized and a paging algorithm was created that triggered paging alerts to first responders (nurses) and then managing services due to the assumption that paging all clinicians for each vital sign triggering MEWC would generate an inordinate number of pages. For preliminary analysis, to determine the effect of our automated paging algorithm on alerting frequency, the paging frequency of this system was compared to the frequency of vital signs meeting the Maternal Early Warning Criteria (MEWC). This retrospective analysis was limited to a sample of 34 patient rooms uniquely capable of storing every vital sign reported by the bedside monitor. Over a 91-day period, from April 1 to July 1, 2017, surveillance was conducted from 64 monitored beds, and the obstetrics service received one automated page every 2.3 h. The most common triggers for alerts were for hypertension and tachycardia. For the subset of 34 patient rooms uniquely capable of real-time recording, one vital sign met the MEWC every 9.6 to 10.3 min. Anecdotally, the system was well-received. This novel electronic maternal surveillance system is designed to reduce cognitive bias and improve timely clinical recognition of maternal deterioration. The automated paging algorithm developed for this software dramatically reduces paging frequency compared to paging for isolated vital sign abnormalities alone. Long-term, prospective studies will be required to determine its impact on patient outcomes.

  9. Track-Before-Detect Algorithm for Faint Moving Objects based on Random Sampling and Consensus

    NASA Astrophysics Data System (ADS)

    Dao, P.; Rast, R.; Schlaegel, W.; Schmidt, V.; Dentamaro, A.

    2014-09-01

    There are many algorithms developed for tracking and detecting faint moving objects in congested backgrounds. One obvious application is detection of targets in images where each pixel corresponds to the received power in a particular location. In our application, a visible imager operated in stare mode observes geostationary objects as fixed, stars as moving and non-geostationary objects as drifting in the field of view. We would like to achieve high sensitivity detection of the drifters. The ability to improve SNR with track-before-detect (TBD) processing, where target information is collected and collated before the detection decision is made, allows respectable performance against dim moving objects. Generally, a TBD algorithm consists of a pre-processing stage that highlights potential targets and a temporal filtering stage. However, the algorithms that have been successfully demonstrated, e.g., Viterbi-based and Bayesian-based, demand formidable processing power and memory. We propose an algorithm that exploits the quasi-constant velocity of objects, the predictability of the stellar clutter and the intrinsically low false alarm rate of detecting signature candidates in 3-D, based on an iterative method called "RANdom SAmple Consensus", and one that can run in real time on a typical PC. The technique is tailored for searching objects with small telescopes in stare mode. Our RANSAC-MT (Moving Target) algorithm estimates parameters of a mathematical model (e.g., linear motion) from a set of observed data which contains a significant number of outliers while identifying inliers. In the pre-processing phase, candidate blobs were selected based on morphology and an intensity threshold that would normally generate an unacceptable level of false alarms. The RANSAC sampling rejects candidates that conform to the predictable motion of the stars. Data collected with a 17-inch telescope by AFRL/RH and a COTS lens/EM-CCD sensor by the AFRL/RD Satellite Assessment Center is used to assess the performance of the algorithm. In the second application, a visible imager operated in sidereal mode observes geostationary objects as moving, stars as fixed except for field rotation, and non-geostationary objects as drifting. RANSAC-MT is used to detect the drifter. In this set of data, the drifting space object was detected at a distance of 13800 km. The AFRL/RH set of data, collected in the stare mode, contained the signature of two geostationary satellites. The signature of a moving object was simulated and added to the sequence of frames to determine the sensitivity in magnitude. The performance compares well with the more intensive TBD algorithms reported in the literature.
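
    A generic RANSAC fit of a constant-velocity model to cluttered detections, in the spirit of the pre-processing-plus-consensus idea above, can be sketched as follows; the frame times, clutter density and inlier tolerance are illustrative assumptions, and this is not the RANSAC-MT implementation.

```python
import numpy as np

# Generic RANSAC sketch: fit a constant-velocity model x = x0 + v*t to
# candidate detections contaminated by clutter, and report the inlier set.

rng = np.random.default_rng(1)
n_frames = 40
t = np.arange(n_frames, dtype=float)
true_track = 5.0 + 0.8 * t                          # drifting object, 0.8 px/frame
detections = np.column_stack([t, true_track + rng.normal(0, 0.3, n_frames)])
clutter = np.column_stack([rng.uniform(0, n_frames, 120),
                           rng.uniform(0, 60, 120)]) # false alarms / residual stars
points = np.vstack([detections, clutter])

def ransac_line(points, n_iter=500, tol=1.0):
    best_inliers, best_model = None, None
    for _ in range(n_iter):
        i, j = rng.choice(len(points), size=2, replace=False)
        (t1, x1), (t2, x2) = points[i], points[j]
        if t1 == t2:
            continue
        v = (x2 - x1) / (t2 - t1)                   # candidate velocity
        x0 = x1 - v * t1
        residuals = np.abs(points[:, 1] - (x0 + v * points[:, 0]))
        inliers = residuals < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (x0, v)
    return best_model, best_inliers

(x0, v), inliers = ransac_line(points)
print(f"estimated start {x0:.2f}, velocity {v:.2f} px/frame, "
      f"{int(inliers.sum())} inliers of {len(points)} points")
```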

  10. Trellis-coded CPM for satellite-based mobile communications

    NASA Technical Reports Server (NTRS)

    Abrishamkar, Farrokh; Biglieri, Ezio

    1988-01-01

    Digital transmission for satellite-based land mobile communications is discussed. To satisfy the power and bandwidth limitations imposed on such systems, a combination of trellis coding and continuous-phase modulated signals are considered. Some schemes based on this idea are presented, and their performance is analyzed by computer simulation. The results obtained show that a scheme based on directional detection and Viterbi decoding appears promising for practical applications.

  11. Swarming Reconnaissance Using Unmanned Aerial Vehicles in a Parallel Discrete Event Simulation

    DTIC Science & Technology

    2004-03-01

    Data Distribution Management...Breathing Time Warp Algorithm/Rolling Back...events based on the process algorithm. Data proxies/distribution management is the vital portion of the SPEEDES implementation that allows objects

  12. Wearable, multimodal, vitals acquisition unit for intelligent field triage.

    PubMed

    Beck, Christoph; Georgiou, Julius

    2016-09-01

    In this Letter, the authors describe the characterisation design and development of the authors' wearable, multimodal vitals acquisition unit for intelligent field triage. The unit is able to record the standard electrocardiogram, blood oxygen and body temperature parameters and also has the unique capability to record up to eight custom designed acoustic streams for heart and lung sound auscultation. These acquisition channels are highly synchronised to fully maintain the time correlation of the signals. The unit is a key component enabling systematic and intelligent field triage to continuously acquire vital patient information. With the realised unit a novel data-set with highly synchronised vital signs was recorded. The new data-set may be used for algorithm design in vital sign analysis or decision making. The monitoring unit is the only known body worn system that records standard emergency parameters plus eight multi-channel auscultatory streams and stores the recordings and wirelessly transmits them to mobile response teams.

  13. Classification of voting algorithms for N-version software

    NASA Astrophysics Data System (ADS)

    Tsarev, R. Yu; Durmuş, M. S.; Üstoglu, I.; Morozov, V. A.

    2018-05-01

    A voting algorithm in N-version software is a crucial component that evaluates the execution of each of the N versions and determines the correct result. Obviously, the result of the voting algorithm determines the outcome of the N-version software in general. Thus, the choice of voting algorithm is a vital issue. Many voting algorithms have already been developed, and they may be selected for implementation based on the specifics of the input data to be analysed. However, the voting algorithms applied in N-version software have not been classified. This article presents an overview of classic and recent voting algorithms used in N-version software and the authors' classification of these algorithms. Moreover, the steps of the voting algorithms are presented and their distinctive features in N-version software are defined.
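
    The simplest of the classic voters discussed above is exact-match majority voting over the N version outputs; a minimal sketch follows, in which the handling of ties (returning None) is an illustrative design choice rather than a standard.

```python
from collections import Counter

# Minimal sketch of one classic voter: exact-match majority voting over the
# outputs of N software versions.  Returning None when no strict majority
# exists is an illustrative design choice.

def majority_vote(outputs):
    """Return the value produced by a strict majority of versions, else None."""
    counts = Counter(outputs)
    value, votes = counts.most_common(1)[0]
    return value if votes > len(outputs) / 2 else None

print(majority_vote([42, 42, 41]))   # -> 42
print(majority_vote([1, 2, 3]))      # -> None (no majority)
```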

  14. Decoding Problem Gamblers' Signals: A Decision Model for Casino Enterprises.

    PubMed

    Ifrim, Sandra

    2015-12-01

    The aim of the present study is to offer a validated decision model for casino enterprises. The model enables such enterprises to perform early detection of problem gamblers and fulfill their ethical duty of social cost minimization. To this end, the interpretation of casino customers' nonverbal communication is understood as a signal-processing problem. Indicators of problem gambling recommended by Delfabbro et al. (Identifying problem gamblers in gambling venues: final report, 2007) are combined with the Viterbi algorithm into an interdisciplinary model that helps decode signals emitted by casino customers. Model output consists of a historical path of mental states and cumulated social costs associated with a particular client. Groups of problem and non-problem gamblers were simulated to investigate the model's diagnostic capability and its cost minimization ability. Each group consisted of 26 subjects and was subsequently enlarged to 100 subjects. In approximately 95% of the cases, mental states were correctly decoded for problem gamblers. Statistical analysis using planned contrasts revealed that the model is relatively robust to the suppression of signals by casino clientele facing gambling problems, as well as to misjudgments made by staff regarding the clients' mental states. Only if the last-mentioned source of error occurs in a very pronounced manner, i.e., if judgment is extremely faulty, might cumulated social costs be distorted.

  15. NASA Tech Briefs, July 2003

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Topics covered include: Real-Time, High-Frequency QRS Electrocardiograph; Software for Improved Extraction of Data From Tape Storage; Radio System for Locating Emergency Workers; Software for Displaying High-Frequency Test Data; Capacitor-Chain Successive-Approximation ADC; Simpler Alternative to an Optimum FQPSK-B Viterbi Receiver; Multilayer Patch Antenna Surrounded by a Metallic Wall; Software To Secure Distributed Propulsion Simulations; Explicit Pore Pressure Material Model in Carbon-Cloth Phenolic; Meshed-Pumpkin Super-Pressure Balloon Design; Corrosion Inhibitors as Penetrant Dyes for Radiography; Transparent Metal-Salt-Filled Polymeric Radiation Shields; Lightweight Energy Absorbers for Blast Containers; Brush-Wheel Samplers for Planetary Exploration; Dry Process for Making Polyimide/ Carbon-and-Boron-Fiber Tape; Relatively Inexpensive Rapid Prototyping of Small Parts; Magnetic Field Would Reduce Electron Backstreaming in Ion Thrusters; Alternative Electrochemical Systems for Ozonation of Water; Interferometer for Measuring Displacement to Within 20 pm; UV-Enhanced IR Raman System for Identifying Biohazards; Prognostics Methodology for Complex Systems; Algorithms for Haptic Rendering of 3D Objects; Modeling and Control of Aerothermoelastic Effects; Processing Digital Imagery to Enhance Perceptions of Realism; Analysis of Designs of Space Laboratories; Shields for Enhanced Protection Against High-Speed Debris; Study of Dislocation-Ordered In(x)Ga(1-x)As/GaAs Quantum Dots; and Tilt-Sensitivity Analysis for Space Telescopes.

  16. Decoding and modelling of time series count data using Poisson hidden Markov model and Markov ordinal logistic regression models.

    PubMed

    Sebastian, Tunny; Jeyaseelan, Visalakshi; Jeyaseelan, Lakshmanan; Anandan, Shalini; George, Sebastian; Bangdiwala, Shrikant I

    2018-01-01

    Hidden Markov models are stochastic models in which the observations are assumed to follow a mixture distribution, but the parameters of the components are governed by a Markov chain which is unobservable. The issues related to the estimation of Poisson-hidden Markov models, in which the observations come from a mixture of Poisson distributions and the parameters of the component Poisson distributions are governed by an m-state Markov chain with an unknown transition probability matrix, are explained here. These methods were applied to the data on Vibrio cholerae counts reported every month over an 11-year span at Christian Medical College, Vellore, India. Using the Viterbi algorithm, the best estimate of the state sequence was obtained and hence the transition probability matrix. The mean passage time between the states was estimated. The 95% confidence interval for the mean passage time was estimated via Monte Carlo simulation. The three hidden states of the estimated Markov chain are labelled as 'Low', 'Moderate' and 'High', with mean counts of 1.4, 6.6 and 20.2 and estimated average durations of stay of 3, 3 and 4 months, respectively. Environmental risk factors were studied using Markov ordinal logistic regression analysis. No significant association was found between disease severity levels and climate components.
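
    The decoding step described above can be sketched as a log-space Viterbi pass over a three-state Poisson HMM. The state means are taken from the abstract ('Low' 1.4, 'Moderate' 6.6, 'High' 20.2); the transition matrix, initial distribution and monthly counts below are illustrative assumptions, not the study's estimates.

```python
import numpy as np
from scipy.stats import poisson

# Log-space Viterbi decoding sketch for a 3-state Poisson hidden Markov model.
# Transition matrix, initial distribution and counts are toy values.

means = np.array([1.4, 6.6, 20.2])
log_A = np.log(np.array([[0.7, 0.2, 0.1],
                         [0.2, 0.6, 0.2],
                         [0.1, 0.3, 0.6]]))
log_pi = np.log(np.array([0.5, 0.3, 0.2]))
counts = np.array([0, 2, 1, 5, 8, 7, 19, 25, 22, 6, 3, 1])   # toy monthly counts

log_emis = poisson.logpmf(counts[:, None], means[None, :])    # shape (T, 3)

T, S = log_emis.shape
delta = np.full((T, S), -np.inf)
psi = np.zeros((T, S), dtype=int)
delta[0] = log_pi + log_emis[0]
for t in range(1, T):
    scores = delta[t - 1][:, None] + log_A                    # (from-state, to-state)
    psi[t] = np.argmax(scores, axis=0)
    delta[t] = scores[psi[t], np.arange(S)] + log_emis[t]

# Backtrack the most likely state sequence.
states = np.zeros(T, dtype=int)
states[-1] = np.argmax(delta[-1])
for t in range(T - 2, -1, -1):
    states[t] = psi[t + 1, states[t + 1]]

labels = np.array(["Low", "Moderate", "High"])
print(list(labels[states]))
```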

  17. A robust human face detection algorithm

    NASA Astrophysics Data System (ADS)

    Raviteja, Thaluru; Karanam, Srikrishna; Yeduguru, Dinesh Reddy V.

    2012-01-01

    Human face detection plays a vital role in many applications such as video surveillance, managing a face image database and human-computer interfaces, among others. This paper proposes a robust algorithm for face detection in still color images that works well even in a crowded environment. The algorithm uses a conjunction of skin-color histogram analysis, morphological processing and geometrical analysis for detecting human faces. To reinforce the accuracy of face detection, we further identify mouth and eye regions to establish the presence/absence of a face in a particular region of interest.
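
    The first two stages described above (skin-colour thresholding followed by morphological clean-up) can be sketched with OpenCV as follows; the HSV skin range, minimum region area and input file name are illustrative assumptions, and the paper's eye/mouth verification stage is omitted.

```python
import cv2

# Sketch of skin-colour thresholding plus morphological clean-up to produce
# face-candidate regions.  The HSV skin range and minimum area are common
# illustrative values, not the paper's parameters.

def face_candidates(bgr_image, min_area=2000):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    skin = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))        # rough skin mask
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    skin = cv2.morphologyEx(skin, cv2.MORPH_OPEN, kernel)        # remove speckle
    skin = cv2.morphologyEx(skin, cv2.MORPH_CLOSE, kernel)       # fill small holes
    contours, _ = cv2.findContours(skin, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]                   # (x, y, w, h) candidates

if __name__ == "__main__":
    image = cv2.imread("crowd.jpg")                              # hypothetical input file
    if image is not None:
        for (x, y, w, h) in face_candidates(image):
            cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imwrite("candidates.jpg", image)
```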

  18. An adaptive Kalman filter technique for context-aware heart rate monitoring.

    PubMed

    Xu, Min; Goldfain, Albert; Dellostritto, Jim; Iyengar, Satish

    2012-01-01

    Traditional physiological monitoring systems convert a person's vital sign waveforms, such as heart rate, respiration rate and blood pressure, into meaningful information by comparing the instant reading with a preset threshold or a baseline, without considering the person's contextual information. It would be beneficial to incorporate contextual data, such as the person's activity status, into the physiological data in order to obtain a more accurate representation of the person's physiological status. In this paper, we propose an algorithm based on an adaptive Kalman filter that describes the heart rate response with respect to different activity levels. This is a step toward our final goal of intelligently detecting abnormalities in a person's vital signs. Experimental results are provided to demonstrate the feasibility of the algorithm.
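
    A minimal sketch of the idea, assuming a scalar random-walk heart-rate model whose process noise is switched by activity level: when the wearer is active the filter trusts new measurements more, and at rest it smooths more heavily. The noise variances and synthetic data are illustrative, not the paper's tuning.

```python
import numpy as np

# Illustrative 1-D Kalman filter with activity-dependent (adaptive) process
# noise for context-aware heart-rate tracking.

rng = np.random.default_rng(2)
true_hr = np.concatenate([np.full(60, 70.0),                 # resting
                          np.linspace(70, 130, 60),          # exercise onset
                          np.full(60, 130.0)])
activity = np.concatenate([np.zeros(60), np.ones(120)])      # 0 = rest, 1 = active
measurements = true_hr + rng.normal(0, 4.0, true_hr.size)    # noisy sensor readings

R = 16.0                         # measurement noise variance (bpm^2), assumed
x, P = measurements[0], 25.0     # initial state estimate and variance
estimates = []
for z, act in zip(measurements, activity):
    Q = 4.0 if act else 0.05     # adaptive process noise from activity context
    # Predict (random-walk model), then update with the new measurement.
    P = P + Q
    K = P / (P + R)              # Kalman gain
    x = x + K * (z - x)
    P = (1 - K) * P
    estimates.append(x)

print(f"final estimate {estimates[-1]:.1f} bpm vs. true {true_hr[-1]:.0f} bpm")
```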

  19. Real-time test of MOCS algorithm during Superflux 1980. [ocean color algorithm for remotely detecting suspended solids

    NASA Technical Reports Server (NTRS)

    Grew, G. W.

    1981-01-01

    A remote sensing experiment was conducted in which success depended upon the real-time use of an algorithm, generated from MOCS (multichannel ocean color sensor) data onboard the NASA P-3 aircraft, to direct the NOAA ship Kelez to oceanic stations where vitally needed sea truth could be collected. Remote data sets collected on two consecutive days of the mission were consistent with the sea truth for low concentrations of chlorophyll a. Two oceanic regions of special interest were located. The algorithm and the collected data are described.

  20. Full Duplex, Spread Spectrum Radio System

    NASA Technical Reports Server (NTRS)

    Harvey, Bruce A.

    2000-01-01

    The goal of this project was to support the development of a full duplex, spread spectrum voice communications system. The assembly and testing of a prototype system consisting of a Harris PRISM spread spectrum radio, a TMS320C54x signal processing development board and a Zilog Z80180 microprocessor was underway at the start of this project. The efforts under this project were the development of multiple access schemes, analysis of full duplex voice feedback delays, and the development and analysis of forward error correction (FEC) algorithms. The multiple access analysis involved the selection between code division multiple access (CDMA), frequency division multiple access (FDMA) and time division multiple access (TDMA). Full duplex voice feedback analysis involved the analysis of packet size and delays associated with full loop voice feedback for confirmation of radio system performance. FEC analysis included studies of the performance under the expected burst error scenario with the relatively short packet lengths, and analysis of implementation in the TMS320C54x digital signal processor. When the capabilities and the limitations of the components used were considered, the multiple access scheme chosen was a combination TDMA/FDMA scheme that will provide up to eight users on each of three separate frequencies. Packets to and from each user will consist of 16 samples at a rate of 8,000 samples per second for a total of 2 ms of voice information. The resulting voice feedback delay will therefore be 4 - 6 ms. The most practical FEC algorithm for implementation was a convolutional code with a Viterbi decoder. Interleaving of the bits of each packet will be required to offset the effects of burst errors.

  1. Promoting Early Diagnosis of Hemodynamic Instability during Simulated Hemorrhage with the Use of a Real-time Decision-assist Algorithm

    DTIC Science & Technology

    2013-01-01

    vilian trauma systems and in military casualty care rely on standard vital signs (blood pressure, arterial oxygen saturation, heart rate [HR...acting to maintain blood pressure and arterial oxygen saturation (i.e., standard vital signs are not changing) in the presence of reduced...assessments in austere environments. Profiles of changes in mean arterial pressure (MAP), cardiac output, and venous oxygen saturation during LBNP have been

  2. Record review to explore the adequacy of post-operative vital signs monitoring using a local modified early warning score (mews) chart to evaluate outcomes.

    PubMed

    Kyriacos, Una; Jelsma, Jennifer; Jordan, Sue

    2014-01-01

    1) To explore the adequacy of: vital signs' recordings (respiratory and heart rate, oxygen saturation, systolic blood pressure (BP), temperature, level of consciousness and urine output) in the first 8 post-operative hours; responses to clinical deterioration. 2) To identify factors associated with death on the ward between transfer from the theatre recovery suite and the seventh day after operation. Retrospective review of records of 11 patients who died plus four controls for each case. We reviewed clinical records of 55 patients who met inclusion criteria (general anaesthetic, age >13, complete records) from six surgical wards in a teaching hospital between 1 May and 31 July 2009. In the absence of guidelines for routine post-operative vital signs' monitoring, nurses' standard practice graphical plots of recordings were recoded into MEWS formats (0 = normal, 1-3 upper or lower limit) and their responses to clinical deterioration were interpreted using MEWS reporting algorithms. No patients' records contained recordings for all seven parameters displayed on the MEWS. There was no evidence of response to: 22/36 (61.1%) abnormal vital signs for patients who died that would have triggered an escalated MEWS reporting algorithm; 81/87 (93.1%) for controls. Death was associated with age, ≥61 years (OR 14.2, 3.0-68.0); ≥2 pre-existing co-morbidities (OR 75.3, 3.7-1527.4); high/low systolic BP on admission (OR 7.2, 1.5-34.2); tachycardia (≥111-129 bpm) (OR 6.6, 1.4-30.0) and low systolic BP (≤81-100 mmHg), as defined by the MEWS (OR 8.0, 1.9-33.1). Guidelines for post-operative vital signs' monitoring and reporting need to be established. The MEWS provides a useful scoring system for interpreting clinical deterioration and guiding intervention. Exploration of the ability of the Cape Town MEWS chart plus reporting algorithm to expedite recognition of signs of clinical and physiological deterioration and securing more skilled assistance is essential.
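
    Recoding vital-sign recordings into MEWS-style scores and checking an escalation trigger can be sketched as follows. The bands and trigger threshold below are simplified placeholders, not the Cape Town MEWS chart used in the study, which scores seven parameters and defines its own reporting algorithm.

```python
# Illustrative sketch of MEWS-style scoring and escalation.  The bands and
# the escalation threshold are simplified placeholders, not the study's chart.

def score_band(value, bands):
    """Return the score of the first (low, high, score) band containing value."""
    for low, high, score in bands:
        if low <= value <= high:
            return score
    return 3                                      # outside all bands: maximum score

RESP_BANDS = [(9, 14, 0), (15, 20, 1), (21, 29, 2)]          # breaths/min (assumed)
HR_BANDS = [(51, 100, 0), (101, 110, 1), (111, 129, 2)]      # beats/min (assumed)
SBP_BANDS = [(101, 199, 0), (81, 100, 2)]                    # mmHg (assumed)

def mews(resp_rate, heart_rate, systolic_bp):
    return (score_band(resp_rate, RESP_BANDS)
            + score_band(heart_rate, HR_BANDS)
            + score_band(systolic_bp, SBP_BANDS))

total = mews(resp_rate=24, heart_rate=118, systolic_bp=92)
if total >= 4:                                     # assumed escalation trigger
    print(f"MEWS {total}: escalate to the managing clinician")
else:
    print(f"MEWS {total}: continue routine monitoring")
```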

  3. Wearable, multimodal, vitals acquisition unit for intelligent field triage

    PubMed Central

    Georgiou, Julius

    2016-01-01

    In this Letter, the authors describe the characterisation design and development of the authors’ wearable, multimodal vitals acquisition unit for intelligent field triage. The unit is able to record the standard electrocardiogram, blood oxygen and body temperature parameters and also has the unique capability to record up to eight custom designed acoustic streams for heart and lung sound auscultation. These acquisition channels are highly synchronised to fully maintain the time correlation of the signals. The unit is a key component enabling systematic and intelligent field triage to continuously acquire vital patient information. With the realised unit a novel data-set with highly synchronised vital signs was recorded. The new data-set may be used for algorithm design in vital sign analysis or decision making. The monitoring unit is the only known body worn system that records standard emergency parameters plus eight multi-channel auscultatory streams and stores the recordings and wirelessly transmits them to mobile response teams. PMID:27733926

  4. [A diagnostic algorithm and treatment procedure in disordered vital functions in newborns admitted to a resuscitation ward].

    PubMed

    Ostreĭkov, I F; Podkopaev, V N; Moiseev, D B; Karpysheva, E V; Markova, L A; Sizov, S V

    1997-01-01

    Total mortality in the neonatal intensive care wards of the Tushino Pediatric Hospital decreased by a factor of 2.5 in 1996 and is now 7.6%. Such results are due to a complex of measures, one of which was the development and introduction of an algorithm for the diagnosis and treatment of newborns hospitalized in intensive care wards. The algorithm facilitates the work of the staff, helps diagnose disease earlier and, hence, supports timely, scientifically based therapy.

  5. A high performance load balance strategy for real-time multicore systems.

    PubMed

    Cho, Keng-Mao; Tsai, Chun-Wei; Chiu, Yi-Shiuan; Yang, Chu-Sing

    2014-01-01

    Finding ways to distribute workloads to each processor core and efficiently reduce power consumption is of vital importance, especially for real-time systems. In this paper, a novel scheduling algorithm is proposed for real-time multicore systems to balance the computation loads and save power. The developed algorithm simultaneously considers multiple criteria, a novel factor, and task deadlines, and is called power and deadline-aware multicore scheduling (PDAMS). Experimental results show that the proposed algorithm can greatly reduce energy consumption, by up to 54.2%, and also reduce the number of missed deadlines, compared with the other scheduling algorithms outlined in this paper.

  6. A High Performance Load Balance Strategy for Real-Time Multicore Systems

    PubMed Central

    Cho, Keng-Mao; Tsai, Chun-Wei; Chiu, Yi-Shiuan; Yang, Chu-Sing

    2014-01-01

    Finding ways to distribute workloads to each processor core and efficiently reduce power consumption is of vital importance, especially for real-time systems. In this paper, a novel scheduling algorithm is proposed for real-time multicore systems to balance the computation loads and save power. The developed algorithm simultaneously considers multiple criteria, a novel factor, and task deadlines, and is called power and deadline-aware multicore scheduling (PDAMS). Experimental results show that the proposed algorithm can greatly reduce energy consumption, by up to 54.2%, and also reduce the number of missed deadlines, compared with the other scheduling algorithms outlined in this paper. PMID:24955382

  7. Modeling of reverberant room responses for two-dimensional spatial sound field analysis and synthesis.

    PubMed

    Bai, Mingsian R; Li, Yi; Chiang, Yi-Hao

    2017-10-01

    A unified framework is proposed for analysis and synthesis of two-dimensional spatial sound field in reverberant environments. In the sound field analysis (SFA) phase, an unbaffled 24-element circular microphone array is utilized to encode the sound field based on the plane-wave decomposition. Depending on the sparsity of the sound sources, the SFA stage can be implemented in two manners. For sparse-source scenarios, a one-stage algorithm based on compressive sensing algorithm is utilized. Alternatively, a two-stage algorithm can be used, where the minimum power distortionless response beamformer is used to localize the sources and Tikhonov regularization algorithm is used to extract the source amplitudes. In the sound field synthesis (SFS), a 32-element rectangular loudspeaker array is employed to decode the target sound field using pressure matching technique. To establish the room response model, as required in the pressure matching step of the SFS phase, an SFA technique for nonsparse-source scenarios is utilized. Choice of regularization parameters is vital to the reproduced sound field. In the SFS phase, three SFS approaches are compared in terms of localization performance and voice reproduction quality. Experimental results obtained in a reverberant room are presented and reveal that an accurate room response model is vital to immersive rendering of the reproduced sound field.
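
    The pressure-matching step with Tikhonov regularisation amounts to a ridge-regularised least-squares solve, w = (G^H G + lambda*I)^(-1) G^H p. A minimal sketch follows; the random room-response matrix, array sizes and regularisation parameter are illustrative assumptions rather than the paper's measured setup.

```python
import numpy as np

# Sketch of pressure matching with Tikhonov (ridge) regularisation: given a
# room-response matrix G mapping loudspeaker weights to control-point
# pressures, solve for weights w that reproduce a target pressure field p.

rng = np.random.default_rng(3)
n_points, n_speakers = 48, 32
# Hypothetical complex room-response matrix (measured or modelled in practice).
G = rng.normal(size=(n_points, n_speakers)) + 1j * rng.normal(size=(n_points, n_speakers))
p_target = rng.normal(size=n_points) + 1j * rng.normal(size=n_points)

def pressure_matching(G, p, lam=1e-2):
    """Tikhonov-regularised least squares: w = (G^H G + lam*I)^-1 G^H p."""
    n = G.shape[1]
    return np.linalg.solve(G.conj().T @ G + lam * np.eye(n), G.conj().T @ p)

w = pressure_matching(G, p_target)
error = np.linalg.norm(G @ w - p_target) / np.linalg.norm(p_target)
print(f"relative reproduction error: {error:.3f}")
```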

  8. Coded spread spectrum digital transmission system design study

    NASA Technical Reports Server (NTRS)

    Heller, J. A.; Odenwalder, J. P.; Viterbi, A. J.

    1974-01-01

    Results are presented of a comprehensive study of the performance of Viterbi-decoded convolutional codes in the presence of nonideal carrier tracking and bit synchronization. A constraint length 7, rate 1/3 convolutional code and parameters suitable for the space shuttle coded communications links are used. Mathematical models are developed and theoretical and simulation results are obtained to determine the tracking and acquisition performance of the system. Pseudorandom sequence spread spectrum techniques are also considered to minimize potential degradation caused by multipath.

  9. Bio-Inspired Genetic Algorithms with Formalized Crossover Operators for Robotic Applications.

    PubMed

    Zhang, Jie; Kang, Man; Li, Xiaojuan; Liu, Geng-Yang

    2017-01-01

    Genetic algorithms are widely adopted to solve optimization problems in robotic applications. In such safety-critical systems, it is vitally important to formally prove correctness when genetic algorithms are applied. This paper focuses on the formal modeling of crossover operations, which are among the most important operations in genetic algorithms. Specifically, we formalize, for the first time, crossover operations in higher-order logic based on HOL4, which is easy to deploy with its user-friendly programming environment. With correctness-guaranteed formalized crossover operations, we can safely apply them in robotic applications. We implement our technique to solve a path planning problem using a genetic algorithm with our formalized crossover operations, and the results show the effectiveness of our technique.
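
    The paper's formalisation is carried out in HOL4; as a lightweight stand-in, the Python sketch below implements single-point crossover on bit strings together with the basic correctness property that every child gene comes from one of the parents at the same locus. The parents and crossover point are illustrative.

```python
import random

# Single-point crossover on bit strings, plus a simple check of the property
# that each gene in a child comes from one of the parents at the same locus.
# This is an informal stand-in for the paper's HOL4 formalisation.

def single_point_crossover(parent_a, parent_b, point=None):
    assert len(parent_a) == len(parent_b)
    if point is None:
        point = random.randint(1, len(parent_a) - 1)
    child_a = parent_a[:point] + parent_b[point:]
    child_b = parent_b[:point] + parent_a[point:]
    return child_a, child_b

def genes_preserved(child, parent_a, parent_b):
    return all(c in (a, b) for c, a, b in zip(child, parent_a, parent_b))

pa = [1, 1, 1, 1, 0, 0, 0, 0]
pb = [0, 1, 0, 1, 0, 1, 0, 1]
ca, cb = single_point_crossover(pa, pb, point=3)
assert genes_preserved(ca, pa, pb) and genes_preserved(cb, pa, pb)
print(ca, cb)
```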

  10. Multicentre validation of a sepsis prediction algorithm using only vital sign data in the emergency department, general ward and ICU

    PubMed Central

    Mao, Qingqing; Jay, Melissa; Calvert, Jacob; Barton, Christopher; Shimabukuro, David; Shieh, Lisa; Chettipally, Uli; Fletcher, Grant; Kerem, Yaniv; Zhou, Yifan; Das, Ritankar

    2018-01-01

    Objectives We validate a machine learning-based sepsis-prediction algorithm (InSight) for the detection and prediction of three sepsis-related gold standards, using only six vital signs. We evaluate robustness to missing data, customisation to site-specific data using transfer learning and generalisability to new settings. Design A machine-learning algorithm with gradient tree boosting. Features for prediction were created from combinations of six vital sign measurements and their changes over time. Setting A mixed-ward retrospective dataset from the University of California, San Francisco (UCSF) Medical Center (San Francisco, California, USA) as the primary source, an intensive care unit dataset from the Beth Israel Deaconess Medical Center (Boston, Massachusetts, USA) as a transfer-learning source and four additional institutions’ datasets to evaluate generalisability. Participants 684 443 total encounters, with 90 353 encounters from June 2011 to March 2016 at UCSF. Interventions None. Primary and secondary outcome measures Area under the receiver operating characteristic (AUROC) curve for detection and prediction of sepsis, severe sepsis and septic shock. Results For detection of sepsis and severe sepsis, InSight achieves an AUROC curve of 0.92 (95% CI 0.90 to 0.93) and 0.87 (95% CI 0.86 to 0.88), respectively. Four hours before onset, InSight predicts septic shock with an AUROC of 0.96 (95% CI 0.94 to 0.98) and severe sepsis with an AUROC of 0.85 (95% CI 0.79 to 0.91). Conclusions InSight outperforms existing sepsis scoring systems in identifying and predicting sepsis, severe sepsis and septic shock. This is the first sepsis screening system to exceed an AUROC of 0.90 using only vital sign inputs. InSight is robust to missing data, can be customised to novel hospital data using a small fraction of site data and retains strong discrimination across all institutions. PMID:29374661
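
    The modelling recipe described above (gradient tree boosting on combinations of vital-sign values and their changes over time) can be sketched with scikit-learn as follows. The six synthetic vitals, the label-generating rule and the hyperparameters are illustrative assumptions; this is not the InSight system.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Sketch of gradient tree boosting on vital-sign values and their changes
# over time, with an AUROC evaluation on held-out encounters.

rng = np.random.default_rng(4)
n, n_vitals = 5000, 6
current = rng.normal(size=(n, n_vitals))              # latest vital-sign values
previous = current - rng.normal(scale=0.5, size=(n, n_vitals))
deltas = current - previous                            # changes over time
X = np.hstack([current, deltas])
# Toy outcome loosely driven by two vitals and one trend (purely synthetic).
risk = 1.2 * current[:, 0] - 0.8 * current[:, 2] + 1.5 * deltas[:, 4]
y = (risk + rng.normal(scale=1.0, size=n) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)
model = GradientBoostingClassifier(n_estimators=200, max_depth=3,
                                   learning_rate=0.05, random_state=0)
model.fit(X_train, y_train)
probs = model.predict_proba(X_test)[:, 1]
print(f"AUROC on held-out data: {roc_auc_score(y_test, probs):.3f}")
```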

  11. Further Developments in the Communication Link and Error Analysis (CLEAN) Simulator

    NASA Technical Reports Server (NTRS)

    Ebel, William J.; Ingels, Frank M.

    1995-01-01

    During the period 1 July 1993 - 30 June 1994, significant developments to the Communication Link and Error ANalysis (CLEAN) simulator were completed. Many of these were reported in the Semi-Annual report dated December 1993, which has been included in this report in Appendix A. Since December 1993, a number of additional modules have been added involving Unit-Memory Convolutional codes (UMC). These are: (1) Unit-Memory Convolutional Encoder module (UMCEncd); (2) Hard-decision Unit-Memory Convolutional Decoder using the Viterbi decoding algorithm (VitUMC); and (3) a number of utility modules designed to investigate the performance of UMC's, such as UMC column distance function (UMCdc), UMC free distance function (UMCdfree), UMC row distance function (UMCdr), and UMC Transformation (UMCTrans). The study of UMC's was driven, in part, by the desire to investigate high-rate convolutional codes which are better suited as inner codes for a concatenated coding scheme. A number of high-rate UMC's were found which are good candidates for inner codes. Besides the further development of the simulation, a study was performed to construct a table of the best known Unit-Memory Convolutional codes. Finally, a preliminary study of the usefulness of the Periodic Convolutional Interleaver (PCI) was completed and documented in a technical note dated March 17, 1994. This technical note has also been included in this final report.

  12. Least Reliable Bits Coding (LRBC) for high data rate satellite communications

    NASA Technical Reports Server (NTRS)

    Vanderaar, Mark; Wagner, Paul; Budinger, James

    1992-01-01

    An analysis and discussion of a bandwidth-efficient multi-level/multi-stage block coded modulation technique called Least Reliable Bits Coding (LRBC) is presented. LRBC uses simple multi-level component codes that provide increased error protection on increasingly unreliable modulated bits in order to maintain an overall high code rate that increases spectral efficiency. Further, soft-decision multi-stage decoding is used to make decisions on unprotected bits through corrections made on more protected bits. Using analytical expressions and tight performance bounds, it is shown that LRBC can achieve increased spectral efficiency and maintain equivalent or better power efficiency compared to that of Binary Phase Shift Keying (BPSK). Bit error rates (BER) vs. channel bit energy with Additive White Gaussian Noise (AWGN) are given for a set of LRB Reed-Solomon (RS) encoded 8PSK modulation formats with an ensemble rate of 8/9. All formats exhibit a spectral efficiency of 2.67 = (log2(8))(8/9) information bps/Hz. Bit-by-bit coded and uncoded error probabilities with soft-decision information are determined. These are traded with code rate to determine parameters that achieve good performance. The relative simplicity of Galois field algebra vs. the Viterbi algorithm and the availability of high-speed commercial Very Large Scale Integration (VLSI) for block codes indicate that LRBC using block codes is a desirable method for high data rate implementations.

  13. Vital sign sensing method based on EMD in terahertz band

    NASA Astrophysics Data System (ADS)

    Xu, Zhengwu; Liu, Tong

    2014-12-01

    Non-contact respiration and heartbeat rates detection could be applied to find survivors trapped in the disaster or the remote monitoring of the respiration and heartbeat of a patient. This study presents an improved algorithm that extracts the respiration and heartbeat rates of humans by utilizing the terahertz radar, which further lessens the effects of noise, suppresses the cross-term, and enhances the detection accuracy. A human target echo model for the terahertz radar is first presented. Combining the over-sampling method, low-pass filter, and Empirical Mode Decomposition improves the signal-to-noise ratio. The smoothed pseudo Wigner-Ville distribution time-frequency technique and the centroid of the spectrogram are used to estimate the instantaneous velocity of the target's cardiopulmonary motion. The down-sampling method is adopted to prevent serious distortion. Finally, a second time-frequency analysis is applied to the centroid curve to extract the respiration and heartbeat rates of the individual. Simulation results show that compared with the previously presented vital sign sensing method, the improved algorithm enhances the signal-to-noise ratio to 1 dB with a detection accuracy of 80%. The improved algorithm is an effective approach for the detection of respiration and heartbeat signal in a complicated environment.

  14. A Platform for Real-time Acquisition and Analysis of Physiological Data in Hospital Emergency Departments

    DTIC Science & Technology

    2014-08-01

    programming interface (API). Algorithms are used to determine the reliability of waveform (e.g., electrocardiogram) and vital-sign data (e.g., heart rate...and comparing of real-time decision-support algorithms in mobile environments," Conf Proc IEEE Eng Med Biol Soc, vol. 2009, pp. 3417-20, 2009.

  15. An investigation of error correcting techniques for OMV data

    NASA Technical Reports Server (NTRS)

    Ingels, Frank; Fryer, John

    1992-01-01

    Papers on the following topics are presented: considerations of testing the Orbital Maneuvering Vehicle (OMV) system with CLASS; OMV CLASS test results (first go around); equivalent system gain available from R-S encoding versus a desire to lower the power amplifier from 25 watts to 20 watts for OMV; command word acceptance/rejection rates for OMV; a memo concerning energy-to-noise ratio for the Viterbi-BSC Channel and the impact of Manchester coding loss; and an investigation of error correcting techniques for OMV and Advanced X-ray Astrophysics Facility (AXAF).

  16. A small terminal for satellite communication systems

    NASA Technical Reports Server (NTRS)

    Xiong, Fuqin; Wu, Dong; Jin, Min

    1994-01-01

    A small portable, low-cost satellite communications terminal system incorporating a modulator/demodulator and convolutional-Viterbi coder/decoder is described. Advances in signal processing and error-correction techniques in combination with higher power and higher frequencies aboard satellites allow for more efficient use of the space segment. This makes it possible to design small economical earth stations. The Advanced Communications Technology Satellite (ACTS) was chosen to test the system. ACTS, operating at the Ka band incorporates higher power, higher frequency, frequency and spatial reuse using spot beams and polarization.

  17. Transfer Function Bounds for Partial-unit-memory Convolutional Codes Based on Reduced State Diagram

    NASA Technical Reports Server (NTRS)

    Lee, P. J.

    1984-01-01

    The performance of a coding system consisting of a convolutional encoder and a Viterbi decoder is analytically found by the well-known transfer function bounding technique. For the partial-unit-memory byte-oriented convolutional encoder with m sub 0 binary memory cells and k sub 0 (k sub 0 greater than m sub 0) inputs, a state diagram of 2(exp k sub 0) states was required for the transfer function bound. A reduced state diagram of (2(exp m sub 0) + 1) states is used for easy evaluation of transfer function bounds for partial-unit-memory codes.

  18. Coding for spread spectrum packet radios

    NASA Technical Reports Server (NTRS)

    Omura, J. K.

    1980-01-01

    Packet radios are often expected to operate in a radio communication network environment where there tend to be man-made interference signals. To combat such interference, spread spectrum waveforms are being considered for some applications. The use of convolutional coding with Viterbi decoding to further improve the performance of spread spectrum packet radios is examined. At a bit error rate of 0.00001, improvements in performance of 4 dB to 5 dB can easily be achieved with such coding without any change in data rate or spread spectrum bandwidth. This coding gain is more dramatic in an interference environment.

  19. GPU-accelerated phase extraction algorithm for interferograms: a real-time application

    NASA Astrophysics Data System (ADS)

    Zhu, Xiaoqiang; Wu, Yongqian; Liu, Fengwei

    2016-11-01

    Optical testing, having the merits of being non-destructive and highly sensitive, provides a vital guideline for optical manufacturing. But the testing process is often computationally intensive and expensive, usually taking up to a few seconds, which is too slow for dynamic testing. In this paper, a GPU-accelerated phase extraction algorithm is proposed, which is based on the advanced iterative algorithm. The accelerated algorithm can extract the correct phase distribution from thirteen 1024x1024 fringe patterns with arbitrary phase shifts in 233 milliseconds on average using an NVIDIA Quadro 4000 graphics card, a 12.7x speedup over the same algorithm executed on a CPU and a 6.6x speedup over the Matlab implementation on a DWANING W5801 workstation. The performance improvement can fulfill the demands of computational accuracy and real-time application.
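
    To illustrate the kind of per-pixel arithmetic that maps well onto a GPU, here is a NumPy sketch of conventional four-step phase-shifting extraction, phi = atan2(I4 - I2, I1 - I3). It is not the paper's advanced iterative algorithm, which additionally estimates arbitrary phase shifts across the thirteen frames; the synthetic fringe patterns are assumptions.

```python
import numpy as np

# Conventional four-step phase-shifting extraction on synthetic fringe
# patterns: phi = atan2(I4 - I2, I1 - I3) recovers the wrapped phase when the
# shifts are exactly 0, pi/2, pi and 3*pi/2.

h, w = 1024, 1024
y, x = np.mgrid[0:h, 0:w]
true_phase = 2 * np.pi * (x + y) / 512.0                      # synthetic wavefront
shifts = [0, np.pi / 2, np.pi, 3 * np.pi / 2]
frames = [100 + 50 * np.cos(true_phase + s) for s in shifts]  # fringe patterns

I1, I2, I3, I4 = frames
phase = np.arctan2(I4 - I2, I1 - I3)                          # wrapped phase map
residual = np.angle(np.exp(1j * (phase - true_phase)))
print(f"max wrapped-phase error: {np.abs(residual).max():.2e} rad")
```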

  20. Can we improve the clinical utility of respiratory rate as a monitored vital sign?

    PubMed

    Chen, Liangyou; Reisner, Andrew T; Gribok, Andrei; McKenna, Thomas M; Reifman, Jaques

    2009-06-01

    Respiratory rate (RR) is a basic vital sign, measured and monitored throughout a wide spectrum of health care settings, although RR is historically difficult to measure in a reliable fashion. We explore an automated method that computes RR only during intervals of clean, regular, and consistent respiration and investigate its diagnostic use in a retrospective analysis of prehospital trauma casualties. At least 5 s of basic vital signs, including heart rate, RR, and systolic, diastolic, and mean arterial blood pressures, were continuously collected from 326 spontaneously breathing trauma casualties during helicopter transport to a level I trauma center. "Reliable" RR data were identified retrospectively using automated algorithms. The diagnostic performances of reliable versus standard RR were evaluated by calculation of the receiver operating characteristic curves using the maximum-likelihood method and comparison of the summary areas under the receiver operating characteristic curves (AUCs). Respiratory rate shows significant data-reliability differences. For identifying prehospital casualties who subsequently receive a respiratory intervention (hospital intubation or tube thoracotomy), standard RR yields an AUC of 0.59 (95% confidence interval, 0.48-0.69), whereas reliable RR yields an AUC of 0.67 (0.57-0.77), P < 0.05. For identifying casualties subsequently diagnosed with a major hemorrhagic injury and requiring blood transfusion, standard RR yields an AUC of 0.60 (0.49-0.70), whereas reliable RR yields 0.77 (0.67-0.85), P < 0.001. Reliable RR, as determined by an automated algorithm, is a useful parameter for the diagnosis of respiratory pathology and major hemorrhage in a trauma population. It may be a useful input to a wide variety of clinical scores and automated decision-support algorithms.

  1. Using Supervised Machine Learning to Classify Real Alerts and Artifact in Online Multisignal Vital Sign Monitoring Data.

    PubMed

    Chen, Lujie; Dubrawski, Artur; Wang, Donghan; Fiterau, Madalina; Guillame-Bert, Mathieu; Bose, Eliezer; Kaynar, Ata M; Wallace, David J; Guttendorf, Jane; Clermont, Gilles; Pinsky, Michael R; Hravnak, Marilyn

    2016-07-01

    The use of machine-learning algorithms to classify alerts as real or artifacts in online noninvasive vital sign data streams to reduce alarm fatigue and missed true instability. Observational cohort study. Twenty-four-bed trauma step-down unit. Two thousand one hundred fifty-three patients. Noninvasive vital sign monitoring data (heart rate, respiratory rate, peripheral oximetry) recorded on all admissions at 1/20 Hz, and noninvasive blood pressure less frequently, and partitioned data into training/validation (294 admissions; 22,980 monitoring hours) and test sets (2,057 admissions; 156,177 monitoring hours). Alerts were vital sign deviations beyond stability thresholds. A four-member expert committee annotated a subset of alerts (576 in training/validation set, 397 in test set) as real or artifact selected by active learning, upon which we trained machine-learning algorithms. The best model was evaluated on test set alerts to enact online alert classification over time. The Random Forest model discriminated between real and artifact as the alerts evolved online in the test set with area under the curve performance of 0.79 (95% CI, 0.67-0.93) for peripheral oximetry at the instant the vital sign first crossed threshold and increased to 0.87 (95% CI, 0.71-0.95) at 3 minutes into the alerting period. Blood pressure area under the curve started at 0.77 (95% CI, 0.64-0.95) and increased to 0.87 (95% CI, 0.71-0.98), whereas respiratory rate area under the curve started at 0.85 (95% CI, 0.77-0.95) and increased to 0.97 (95% CI, 0.94-1.00). Heart rate alerts were too few for model development. Machine-learning models can discern clinically relevant peripheral oximetry, blood pressure, and respiratory rate alerts from artifacts in an online monitoring dataset (area under the curve > 0.87).
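
    The alert-classification step can be sketched with a scikit-learn Random Forest separating real alerts from artifacts using simple per-alert features. The synthetic features and labels below are illustrative only; the study trained on expert-annotated alerts with features that evolve over the first minutes of each alerting period.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Sketch: Random Forest classification of real vital-sign alerts vs. artifacts
# from simple per-alert features, evaluated by AUC on held-out alerts.

rng = np.random.default_rng(5)
n_alerts = 1000
# Hypothetical per-alert features: excursion depth, duration beyond threshold,
# signal variance and a simple waveform-quality index.
X = rng.normal(size=(n_alerts, 4))
y = (0.9 * X[:, 0] + 1.1 * X[:, 1] - 0.7 * X[:, 3]
     + rng.normal(scale=1.0, size=n_alerts) > 0).astype(int)   # 1 = real alert

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X_train, y_train)
probs = clf.predict_proba(X_test)[:, 1]
print(f"AUC, real vs. artifact: {roc_auc_score(y_test, probs):.2f}")
```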

  2. Correcting the Count: Improving Vital Statistics Data Regarding Deaths Related to Obesity.

    PubMed

    McCleskey, Brandi C; Davis, Gregory G; Dye, Daniel W

    2017-11-15

    Obesity can involve any organ system and compromise the overall health of an individual, including premature death. Despite the increased risk of death associated with being obese, obesity itself is infrequently indicated on the death certificate. We performed an audit of our records to identify how often "obesity" was listed on the death certificate to determine how our practices affected national mortality data collection regarding obesity-related mortality. During the span of nearly 25 years, 0.2% of deaths were attributed to or contributed by obesity. Over the course of 5 years, 96% of selected natural deaths were likely underreported as being associated with obesity. We present an algorithm for certifiers to use to determine whether obesity should be listed on the death certificate and guidelines for certifying cases in which this is appropriate. Use of this algorithm will improve vital statistics concerning the role of obesity in causing or contributing to death. © 2017 American Academy of Forensic Sciences.

  3. Multicentre validation of a sepsis prediction algorithm using only vital sign data in the emergency department, general ward and ICU.

    PubMed

    Mao, Qingqing; Jay, Melissa; Hoffman, Jana L; Calvert, Jacob; Barton, Christopher; Shimabukuro, David; Shieh, Lisa; Chettipally, Uli; Fletcher, Grant; Kerem, Yaniv; Zhou, Yifan; Das, Ritankar

    2018-01-26

    We validate a machine learning-based sepsis-prediction algorithm (InSight) for the detection and prediction of three sepsis-related gold standards, using only six vital signs. We evaluate robustness to missing data, customisation to site-specific data using transfer learning and generalisability to new settings. A machine-learning algorithm with gradient tree boosting. Features for prediction were created from combinations of six vital sign measurements and their changes over time. A mixed-ward retrospective dataset from the University of California, San Francisco (UCSF) Medical Center (San Francisco, California, USA) as the primary source, an intensive care unit dataset from the Beth Israel Deaconess Medical Center (Boston, Massachusetts, USA) as a transfer-learning source and four additional institutions' datasets to evaluate generalisability. 684 443 total encounters, with 90 353 encounters from June 2011 to March 2016 at UCSF. None. Area under the receiver operating characteristic (AUROC) curve for detection and prediction of sepsis, severe sepsis and septic shock. For detection of sepsis and severe sepsis, InSight achieves an AUROC of 0.92 (95% CI 0.90 to 0.93) and 0.87 (95% CI 0.86 to 0.88), respectively. Four hours before onset, InSight predicts septic shock with an AUROC of 0.96 (95% CI 0.94 to 0.98) and severe sepsis with an AUROC of 0.85 (95% CI 0.79 to 0.91). InSight outperforms existing sepsis scoring systems in identifying and predicting sepsis, severe sepsis and septic shock. This is the first sepsis screening system to exceed an AUROC of 0.90 using only vital sign inputs. InSight is robust to missing data, can be customised to novel hospital data using a small fraction of site data and retains strong discrimination across all institutions. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
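
    The abstract describes InSight's feature construction only at a high level: combinations of six vital sign measurements and their changes over time, fed to gradient tree boosting. A minimal sketch of that general recipe is given below, with hypothetical column names and hyperparameters; it is not the published InSight pipeline.

    ```python
    import pandas as pd
    from itertools import combinations
    from sklearn.ensemble import GradientBoostingClassifier

    VITALS = ["heart_rate", "resp_rate", "sbp", "dbp", "spo2", "temp"]  # hypothetical column names

    def build_features(df):
        """Per-timestep features: each vital sign, its change since the previous
        measurement, and pairwise products of those changes.
        (In practice, apply per encounter so deltas do not leak across patients.)"""
        feats = df[VITALS].copy()
        deltas = df[VITALS].diff().fillna(0.0).add_suffix("_delta")
        feats = pd.concat([feats, deltas], axis=1)
        for a, b in combinations(VITALS, 2):
            feats[f"{a}_x_{b}_delta"] = deltas[f"{a}_delta"] * deltas[f"{b}_delta"]
        return feats

    clf = GradientBoostingClassifier(n_estimators=200, max_depth=3)  # assumed hyperparameters
    # df: one row per patient-hour with the six vitals; y: sepsis label per row
    # clf.fit(build_features(df), y)
    ```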

  4. Detection of low-volume blood loss: compensatory reserve versus traditional vital signs.

    PubMed

    Stewart, Camille L; Mulligan, Jane; Grudic, Greg Z; Convertino, Victor A; Moulton, Steven L

    2014-12-01

    Humans are able to compensate for low-volume blood loss with minimal change in traditional vital signs. We hypothesized that a novel algorithm, which analyzes photoplethysmogram (PPG) wave forms to continuously estimate compensatory reserve would provide greater sensitivity and specificity to detect low-volume blood loss compared with traditional vital signs. The compensatory reserve index (CRI) is a measure of the reserve remaining to compensate for reduced central blood volume, where a CRI of 1 represents supine normovolemia and 0 represents the circulating blood volume at which hemodynamic decompensation occurs; values between 1 and 0 indicate the proportion of reserve remaining. Subjects underwent voluntary donation of 1 U (approximately 450 mL) of blood. Demographic and continuous noninvasive vital sign wave form data were collected, including PPG, heart rate, systolic blood pressure, cardiac output, and stroke volume. PPG wave forms were later processed by the algorithm to estimate CRI values. Data were collected from 244 healthy subjects (79 males and 165 females), with a mean (SD) age of 40.1 (14.2) years and mean (SD) body mass index of 25.6 (4.7). After blood donation, CRI significantly decreased in 92% (α = 0.05; 95% confidence interval [CI], 88-95%) of the subjects. With the use of a threshold decrease in CRI of 0.05 or greater for the detection of 1 U of blood loss, the receiver operating characteristic area under the curve was 0.90, with a sensitivity of 0.84 and specificity of 0.86. In comparison, systolic blood pressure (52%; 95% CI, 45-59%), heart rate (65%; 95% CI, 58-72%), cardiac output (47%; 95% CI, 40-54%), and stroke volume (74%; 95% CI, 67-80%) changed in fewer subjects, had significantly lower receiver operating characteristic area under the curve values, and significantly lower specificities for detecting the same volume of blood loss. Consistent with our hypothesis, CRI detected low-volume blood loss with significantly greater specificity than other traditional physiologic measures. These findings warrant further evaluation of the CRI algorithm in actual trauma settings. Diagnostic study, level II.
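
    The reported decision rule, flagging low-volume blood loss when CRI falls by 0.05 or more from its baseline value, is simple to state in code. The sketch below shows how such a rule and its sensitivity/specificity might be computed on paired pre/post measurements; the function names and example usage are illustrative, not the authors' implementation.

    ```python
    import numpy as np

    def detect_blood_loss(cri_before, cri_after, threshold=0.05):
        """Flag low-volume blood loss when CRI drops by >= threshold."""
        return (np.asarray(cri_before) - np.asarray(cri_after)) >= threshold

    def sensitivity_specificity(flags, truth):
        """Sensitivity and specificity of boolean detections against boolean truth."""
        flags, truth = np.asarray(flags, bool), np.asarray(truth, bool)
        tp = np.sum(flags & truth)
        tn = np.sum(~flags & ~truth)
        return tp / truth.sum(), tn / (~truth).sum()

    # Illustrative use: truth marks measurement pairs taken across a blood donation.
    # flags = detect_blood_loss(cri_pre, cri_post)
    # sens, spec = sensitivity_specificity(flags, truth)
    ```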

  5. Overnight non-contact continuous vital signs monitoring using an intelligent automatic beam-steering Doppler sensor at 2.4 GHz.

    PubMed

    Batchu, S; Narasimhachar, H; Mayeda, J C; Hall, T; Lopez, J; Nguyen, T; Banister, R E; Lie, D Y C

    2017-07-01

    Doppler-based non-contact vital signs (NCVS) sensors can monitor heart rates, respiration rates, and motions of patients without physically touching them. We have developed a novel single-board Doppler-based phased-array antenna NCVS biosensor system that can perform robust overnight continuous NCVS monitoring with intelligent automatic subject tracking and optimal beam steering algorithms. Our NCVS sensor achieved overnight continuous vital signs monitoring with an impressive heart-rate monitoring accuracy of over 94% (i.e., within ±5 beats per minute vs. a reference sensor), analyzed from over 400,000 data points collected during each overnight monitoring period of ~6 hours at a distance of 1.75 meters. The data suggest our intelligent phased-array NCVS sensor can be very attractive for continuous monitoring of low-acuity patients.

  6. The Communication Link and Error ANalysis (CLEAN) simulator

    NASA Technical Reports Server (NTRS)

    Ebel, William J.; Ingels, Frank M.; Crowe, Shane

    1993-01-01

    During the period July 1, 1993 through December 30, 1993, significant developments to the Communication Link and Error ANalysis (CLEAN) simulator were completed and include: (1) Soft decision Viterbi decoding; (2) node synchronization for the Soft decision Viterbi decoder; (3) insertion/deletion error programs; (4) convolutional encoder; (5) programs to investigate new convolutional codes; (6) pseudo-noise sequence generator; (7) soft decision data generator; (8) RICE compression/decompression (integration of RICE code generated by Pen-Shu Yeh at Goddard Space Flight Center); (9) Markov Chain channel modeling; (10) percent complete indicator when a program is executed; (11) header documentation; and (12) help utility. The CLEAN simulation tool is now capable of simulating a very wide variety of satellite communication links including the TDRSS downlink with RFI. The RICE compression/decompression schemes allow studies to be performed on error effects on RICE decompressed data. The Markov Chain modeling programs allow channels with memory to be simulated. Memory results from filtering, forward error correction encoding/decoding, differential encoding/decoding, channel RFI, nonlinear transponders and from many other satellite system processes. Besides the development of the simulation, a study was performed to determine whether the PCI provides a performance improvement for the TDRSS downlink. There exist RFI with several duty cycles for the TDRSS downlink. We conclude that the PCI does not improve performance for any of these interferers except possibly one which occurs for the TDRS East. Therefore, the usefulness of the PCI is a function of the time spent transmitting data to the WSGT through the TDRS East transponder.

  7. Least-squares model-based halftoning

    NASA Astrophysics Data System (ADS)

    Pappas, Thrasyvoulos N.; Neuhoff, David L.

    1992-08-01

    A least-squares model-based approach to digital halftoning is proposed. It exploits both a printer model and a model for visual perception. It attempts to produce an 'optimal' halftoned reproduction, by minimizing the squared error between the response of the cascade of the printer and visual models to the binary image and the response of the visual model to the original gray-scale image. Conventional methods, such as clustered ordered dither, use the properties of the eye only implicitly, and resist printer distortions at the expense of spatial and gray-scale resolution. In previous work we showed that our printer model can be used to modify error diffusion to account for printer distortions. The modified error diffusion algorithm has better spatial and gray-scale resolution than conventional techniques, but produces some well known artifacts and asymmetries because it does not make use of an explicit eye model. Least-squares model-based halftoning uses explicit eye models and relies on printer models that predict distortions and exploit them to increase, rather than decrease, both spatial and gray-scale resolution. We have shown that the one-dimensional least-squares problem, in which each row or column of the image is halftoned independently, can be implemented with the Viterbi algorithm. Unfortunately, no closed form solution can be found in two dimensions. The two-dimensional least-squares solution is obtained by iterative techniques. Experiments show that least-squares model-based halftoning produces more gray levels and better spatial resolution than conventional techniques. We also show that the least-squares approach eliminates the problems associated with error diffusion. Model-based halftoning can be especially useful in transmission of high quality documents using high fidelity gray-scale image encoders. As we have shown, in such cases halftoning can be performed at the receiver, just before printing. Apart from coding efficiency, this approach permits the halftoner to be tuned to the individual printer, whose characteristics may vary considerably from those of other printers, for example, write-black vs. write-white laser printers.
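
    The one-dimensional least-squares problem mentioned above is solvable exactly with a Viterbi-style dynamic program, because the squared error decomposes along a row once the trellis state carries the last few binary decisions. The sketch below illustrates the idea with a short causal FIR 'eye' filter and no printer model, so it is a simplified toy version of the approach rather than the authors' formulation.

    ```python
    import numpy as np
    from itertools import product

    def halftone_row_viterbi(gray, eye=(0.5, 0.3, 0.2)):
        """Binary halftone of one row (gray values in [0, 1]) minimizing the
        squared error between the eye-filtered output and eye-filtered input.

        Trellis state = tuple of the last len(eye)-1 output pixels, so each
        squared-error term is fully determined at its step."""
        w = np.asarray(eye, float)
        L = len(w)
        target = np.convolve(gray, w)[: len(gray)]          # causal filtering of the input
        states = list(product((0, 1), repeat=L - 1))
        cost = {s: (0.0 if s == (0,) * (L - 1) else np.inf) for s in states}
        back = []
        for i in range(len(gray)):
            new_cost, choice = {}, {}
            for s in states:
                if cost[s] == np.inf:
                    continue
                for b in (0, 1):
                    window = (b,) + s                        # b_i, b_{i-1}, ..., b_{i-L+1}
                    c = cost[s] + (float(np.dot(w, window)) - target[i]) ** 2
                    ns = (b,) + s[:-1]                       # shift-register state update
                    if c < new_cost.get(ns, np.inf):
                        new_cost[ns], choice[ns] = c, (s, b)
            back.append(choice)
            cost = {s: new_cost.get(s, np.inf) for s in states}
        # backtrack from the cheapest final state
        s = min(cost, key=cost.get)
        bits = []
        for choice in reversed(back):
            s, b = choice[s]
            bits.append(b)
        return bits[::-1]
    ```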

  8. Crisis management during anaesthesia: hypotension.

    PubMed

    Morris, R W; Watterson, L M; Westhorpe, R N; Webb, R K

    2005-06-01

    Hypotension is commonly encountered in association with anaesthesia and surgery. Uncorrected and sustained it puts the brain, heart, kidneys, and the fetus in pregnancy at risk of permanent or even fatal damage. Its recognition and correction is time critical, especially in patients with pre-existing disease that compromises organ perfusion. To examine the role of a previously described core algorithm "COVER ABCD-A SWIFT CHECK", supplemented by a specific sub-algorithm for hypotension, in the management of hypotension when it occurs in association with anaesthesia. Reports of hypotension during anaesthesia were extracted and studied from the first 4000 incidents reported to the Australian Incident Monitoring Study (AIMS). The potential performance of the COVER ABCD algorithm and the sub-algorithm for hypotension was compared with the actual management as reported by the anaesthetist involved. There were 438 reports that mentioned hypotension, cardiovascular collapse, or cardiac arrest. In 17% of reports more than one cause was attributed and 550 causative events were identified overall. The most common causes identified were drugs (26%), regional anaesthesia (14%), and hypovolaemia (9%). Concomitant changes were reported in heart rate or rhythm in 39% and oxygen saturation or ventilation in 21% of reports. Cardiac arrest was documented in 25% of reports. As hypotension was frequently associated with abnormalities of other vital signs, it could not always be adequately addressed by a single algorithm. The sub-algorithm for hypotension is adequate when hypotension occurs in association with sinus tachycardia. However, when it occurs in association with bradycardia, non-sinus tachycardia, desaturation or signs of anaphylaxis or other problems, the sub-algorithm for hypotension recommends cross referencing to other relevant sub-algorithms. It was considered that, correctly applied, the core algorithm COVER ABCD would have diagnosed 18% of cases and led to resolution in two thirds of these. It was further estimated that completion of this followed by the specific sub-algorithm for hypotension would have led to earlier recognition of the problem and/or better management in 6% of cases compared with actual management reported. Pattern recognition in most cases enables anaesthetists to determine the cause and manage hypotension. However, an algorithm based approach is likely to improve the management of a small proportion of atypical but potentially life threatening cases. While an algorithm based approach will facilitate crisis management, the frequency of co-existing abnormalities in other vital signs means that all cases of hypotension cannot be dealt with using a single algorithm. Diagnosis, in particular, may potentially be assisted by cross referencing to the specific sub-algorithms for these.

  9. Development of a triage engine enabling behavior recognition and lethal arrhythmia detection for remote health care system.

    PubMed

    Sugano, Hiroto; Hara, Shinsuke; Tsujioka, Tetsuo; Inoue, Tadayuki; Nakajima, Shigeyoshi; Kozaki, Takaaki; Namkamura, Hajime; Takeuchi, Kazuhide

    2011-01-01

    For ubiquitous health care systems which continuously monitor a person's vital signs such as electrocardiogram (ECG), body surface temperature and three-dimensional (3D) acceleration by wireless, it is important to accurately detect the occurrence of an abnormal event in the data and immediately inform a medical doctor of its detail. In this paper, we introduce a remote health care system, which is composed of a wireless vital sensor, multiple receivers and a triage engine installed in a desktop personal computer (PC). The middleware installed in the receiver, which was developed in C++, supports reliable data handling of vital data to the ethernet port. On the other hand, the human interface of the triage engine, which was developed in JAVA, shows graphics on his/her ECG data, 3D acceleration data, body surface temperature data and behavior status in the display of the desktop PC and sends an urgent e-mail containing the display data to a pre-registered medical doctor when it detects the occurrence of an abnormal event. In the triage engine, the lethal arrhythmia detection algorithm based on short time Fourier transform (STFT) analysis can achieve 100 % sensitivity and 99.99 % specificity, and the behavior recognition algorithm based on the combination of the nearest neighbor method and the Naive Bayes method can achieve more than 71 % classification accuracy.

  10. Validation of TGLF in C-Mod and DIII-D using machine learning and integrated modeling tools

    NASA Astrophysics Data System (ADS)

    Rodriguez-Fernandez, P.; White, Ae; Cao, Nm; Creely, Aj; Greenwald, Mj; Grierson, Ba; Howard, Nt; Meneghini, O.; Petty, Cc; Rice, Je; Sciortino, F.; Yuan, X.

    2017-10-01

    Predictive models for steady-state and perturbative transport are necessary to support burning plasma operations. A combination of machine learning algorithms and integrated modeling tools is used to validate TGLF in C-Mod and DIII-D. First, a new code suite, VITALS, is used to compare SAT1 and SAT0 models in C-Mod. VITALS exploits machine learning and optimization algorithms for the validation of transport codes. Unlike SAT0, the SAT1 saturation rule contains a model to capture cross-scale turbulence coupling. Results show that SAT1 agrees better with experiments, further confirming that multi-scale effects are needed to model heat transport in C-Mod L-modes. VITALS will next be used to analyze past data from DIII-D: L-mode ``Shortfall'' plasma and ECH swing experiments. A second code suite, PRIMA, allows for integrated modeling of the plasma response to Laser Blow-Off cold pulses. Preliminary results show that SAT1 qualitatively reproduces the propagation of cold pulses after LBO injections and SAT0 does not, indicating that cross-scale coupling effects play a role in the plasma response. PRIMA will be used to ``predict-first'' cold pulse experiments using the new LBO system at DIII-D, and analyze existing ECH heat pulse data. Work supported by DE-FC02-99ER54512, DE-FC02-04ER54698.

  11. Detection of Multiple Stationary Humans Using UWB MIMO Radar.

    PubMed

    Liang, Fulai; Qi, Fugui; An, Qiang; Lv, Hao; Chen, Fuming; Li, Zhao; Wang, Jianqi

    2016-11-16

    Remarkable progress has been achieved in the detection of single stationary human. However, restricted by the mutual interference of multiple humans (e.g., strong sidelobes of the torsos and the shadow effect), detection and localization of the multiple stationary humans remains a huge challenge. In this paper, ultra-wideband (UWB) multiple-input and multiple-output (MIMO) radar is exploited to improve the detection performance of multiple stationary humans for its multiple sight angles and high-resolution two-dimensional imaging capacity. A signal model of the vital sign considering both bi-static angles and attitude angle of the human body is firstly developed, and then a novel detection method is proposed to detect and localize multiple stationary humans. In this method, preprocessing is firstly implemented to improve the signal-to-noise ratio (SNR) of the vital signs, and then a vital-sign-enhanced imaging algorithm is presented to suppress the environmental clutters and mutual affection of multiple humans. Finally, an automatic detection algorithm including constant false alarm rate (CFAR), morphological filtering and clustering is implemented to improve the detection performance of weak human targets affected by heavy clutters and shadow effect. The simulation and experimental results show that the proposed method can get a high-quality image of multiple humans and we can use it to discriminate and localize multiple adjacent human targets behind brick walls.

  12. Detection of Multiple Stationary Humans Using UWB MIMO Radar

    PubMed Central

    Liang, Fulai; Qi, Fugui; An, Qiang; Lv, Hao; Chen, Fuming; Li, Zhao; Wang, Jianqi

    2016-01-01

    Remarkable progress has been achieved in the detection of single stationary human. However, restricted by the mutual interference of multiple humans (e.g., strong sidelobes of the torsos and the shadow effect), detection and localization of the multiple stationary humans remains a huge challenge. In this paper, ultra-wideband (UWB) multiple-input and multiple-output (MIMO) radar is exploited to improve the detection performance of multiple stationary humans for its multiple sight angles and high-resolution two-dimensional imaging capacity. A signal model of the vital sign considering both bi-static angles and attitude angle of the human body is firstly developed, and then a novel detection method is proposed to detect and localize multiple stationary humans. In this method, preprocessing is firstly implemented to improve the signal-to-noise ratio (SNR) of the vital signs, and then a vital-sign-enhanced imaging algorithm is presented to suppress the environmental clutters and mutual affection of multiple humans. Finally, an automatic detection algorithm including constant false alarm rate (CFAR), morphological filtering and clustering is implemented to improve the detection performance of weak human targets affected by heavy clutters and shadow effect. The simulation and experimental results show that the proposed method can get a high-quality image of multiple humans and we can use it to discriminate and localize multiple adjacent human targets behind brick walls. PMID:27854356

  13. A comparison of accuracy and computational feasibility of two record linkage algorithms in retrieving vital status information from HIV/AIDS patients registered in Brazilian public databases.

    PubMed

    de Paula, Adelzon Assis; Pires, Denise Franqueira; Filho, Pedro Alves; de Lemos, Kátia Regina Valente; Barçante, Eduardo; Pacheco, Antonio Guilherme

    2018-06-01

    While cross-referencing information from people living with HIV/AIDS (PLWHA) to the official mortality database is a critical step in monitoring the HIV/AIDS epidemic in Brazil, the accuracy of the linkage routine may compromise the validity of the final database, yielding biased epidemiological estimates. We compared the accuracy and the total runtime of two linkage algorithms applied to retrieve vital status information from PLWHA in Brazilian public databases. Nominally identified records from PLWHA were obtained from three distinct government databases. Linkage routines included an algorithm in Python language (PLA) and Reclink software (RlS), a probabilistic software largely utilized in Brazil. Records from PLWHA known to be alive were added to those from patients reported as deceased. Data were then searched in the mortality system. Scenarios in which 5% and 50% of patients had actually died were simulated, considering both complete cases and 20% missing maternal names. When complete information was available both algorithms had comparable accuracies. In the scenario of 20% missing maternal names, PLA and RlS had sensitivities of 94.5% and 94.6% (p > 0.5), respectively; after manual reviewing, PLA sensitivity increased to 98.4% (96.6-100.0), exceeding that for RlS (p < 0.01). PLA had a higher positive predictive value in the 5% death proportion scenario. Manual reviewing was intrinsically required by RlS in up to 14% of registers for people actually dead, whereas the corresponding proportion ranged from 1.5% to 2% for PLA. The lack of manual inspection did not alter PLA sensitivity when complete information was available. When incomplete data were available PLA sensitivity increased from 94.5% to 98.4%, thus exceeding that presented by RlS (94.6%, p < 0.05). RlS required considerably less processing time than PLA. Both linkage algorithms presented interchangeable accuracies in retrieving vital status data from PLWHA. RlS had a considerably shorter runtime but intrinsically required manual review of a substantial proportion of the matched registries. On the other hand, PLA had a longer runtime but spared manual reviewing at no expense of accuracy. Copyright © 2018 Elsevier B.V. All rights reserved.

  14. Robust Data Detection for the Photon-Counting Free-Space Optical System With Implicit CSI Acquisition and Background Radiation Compensation

    NASA Astrophysics Data System (ADS)

    Song, Tianyu; Kam, Pooi-Yuen

    2016-02-01

    Since atmospheric turbulence and pointing errors cause signal intensity fluctuations and the background radiation surrounding the free-space optical (FSO) receiver contributes an undesired noisy component, the receiver requires accurate channel state information (CSI) and background information to adjust the detection threshold. In most previous studies, for CSI acquisition, pilot symbols were employed, which leads to a reduction of spectral and energy efficiency; and an impractical assumption that the background radiation component is perfectly known was made. In this paper, we develop an efficient and robust sequence receiver, which acquires the CSI and the background information implicitly and requires no knowledge about the channel model information. It is robust since it can automatically estimate the CSI and background component and detect the data sequence accordingly. Its decision metric has a simple form and involves no integrals, and thus can be easily evaluated. A Viterbi-type trellis-search algorithm is adopted to improve the search efficiency, and a selective-store strategy is adopted to overcome a potential error floor problem as well as to increase the memory efficiency. To further simplify the receiver, a decision-feedback symbol-by-symbol receiver is proposed as an approximation of the sequence receiver. By simulations and theoretical analysis, we show that the performance of both the sequence receiver and the symbol-by-symbol receiver, approach that of detection with perfect knowledge of the CSI and background radiation, as the length of the window for forming the decision metric increases.

  15. Hidden Markov model tracking of continuous gravitational waves from a binary neutron star with wandering spin. II. Binary orbital phase tracking

    NASA Astrophysics Data System (ADS)

    Suvorova, S.; Clearwater, P.; Melatos, A.; Sun, L.; Moran, W.; Evans, R. J.

    2017-11-01

    A hidden Markov model (HMM) scheme for tracking continuous-wave gravitational radiation from neutron stars in low-mass x-ray binaries (LMXBs) with wandering spin is extended by introducing a frequency-domain matched filter, called the J-statistic, which sums the signal power in orbital sidebands coherently. The J-statistic is similar but not identical to the binary-modulated F-statistic computed by demodulation or resampling. By injecting synthetic LMXB signals into Gaussian noise characteristic of the Advanced Laser Interferometer Gravitational-wave Observatory (Advanced LIGO), it is shown that the J-statistic HMM tracker detects signals with characteristic wave strain h0 ≥ 2 × 10⁻²⁶ in 370 d of data from two interferometers, divided into 37 coherent blocks of equal length. When applied to data from Stage I of the Scorpius X-1 Mock Data Challenge organized by the LIGO Scientific Collaboration, the tracker detects all 50 closed injections (h0 ≥ 6.84 × 10⁻²⁶), recovering the frequency with a root-mean-square accuracy of ≤ 1.95 × 10⁻⁵ Hz. Of the 50 injections, 43 (with h0 ≥ 1.09 × 10⁻²⁵) are detected in a single, coherent 10 d block of data. The tracker employs an efficient, recursive HMM solver based on the Viterbi algorithm, which requires ~10⁵ CPU-hours for a typical broadband (0.5 kHz) LMXB search.
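
    The recursive HMM solver referred to above is the Viterbi algorithm run over a grid of frequency bins, with transitions that let the spin frequency wander slowly between coherent blocks. Below is a generic log-domain Viterbi tracker over such a grid; the allowed per-block wander and the form of the emission statistic are assumptions for illustration, not the search pipeline used in the paper.

    ```python
    import numpy as np

    def viterbi_track(log_emission, max_step=1):
        """Most probable frequency-bin path through a (n_blocks, n_bins) grid of
        log-likelihoods, allowing the bin to wander by at most `max_step` per
        coherent block (uniform log transition probabilities)."""
        n_blocks, n_bins = log_emission.shape
        score = log_emission[0].copy()
        back = np.zeros((n_blocks, n_bins), dtype=int)
        for t in range(1, n_blocks):
            new_score = np.full(n_bins, -np.inf)
            for j in range(n_bins):
                lo, hi = max(0, j - max_step), min(n_bins, j + max_step + 1)
                k = lo + int(np.argmax(score[lo:hi]))        # best predecessor bin
                new_score[j] = score[k] + log_emission[t, j]
                back[t, j] = k
            score = new_score
        path = np.zeros(n_blocks, dtype=int)
        path[-1] = int(np.argmax(score))
        for t in range(n_blocks - 1, 0, -1):                 # backtrack the survivor
            path[t - 1] = back[t, path[t]]
        return path, float(score.max())
    ```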

  16. Image registration under translation and rotation in two-dimensional planes using Fourier slice theorem.

    PubMed

    Pohit, M; Sharma, J

    2015-05-10

    Image recognition in the presence of both rotation and translation is a longstanding problem in correlation pattern recognition. Use of log polar transform gives a solution to this problem, but at a cost of losing the vital phase information from the image. The main objective of this paper is to develop an algorithm based on Fourier slice theorem for measuring the simultaneous rotation and translation of an object in a 2D plane. The algorithm is applicable for any arbitrary object shift for full 180° rotation.

  17. Communications terminal breadboard

    NASA Technical Reports Server (NTRS)

    1972-01-01

    A baseline design is presented of a digital communications link between an advanced manned spacecraft (AMS) and an earth terminal via an Intelsat 4 type communications satellite used as a geosynchronous orbiting relay station. The fabrication, integration, and testing of terminal elements at each end of the link are discussed. In the baseline link design, the information carrying capacity of the link was estimated for both the forward direction (earth terminal to AMS) and the return direction, based upon orbital geometry, relay satellite characteristics, terminal characteristics, and the improvement that can be achieved by the use of convolutional coding/Viterbi decoding techniques.

  18. Coordinated design of coding and modulation systems

    NASA Technical Reports Server (NTRS)

    Massey, J. L.; Ancheta, T.; Johannesson, R.; Lauer, G.; Lee, L.

    1976-01-01

    The joint optimization of the coding and modulation systems employed in telemetry systems was investigated. Emphasis was placed on formulating inner and outer coding standards used by the Goddard Spaceflight Center. Convolutional codes were found that are nearly optimum for use with Viterbi decoding in the inner coding of concatenated coding systems. A convolutional code, the unit-memory code, was discovered and is ideal for inner system usage because of its byte-oriented structure. Simulations of sequential decoding on the deep-space channel were carried out to compare directly various convolutional codes that are proposed for use in deep-space systems.

  19. Home telemonitoring of vital signs--technical challenges and future directions.

    PubMed

    Celler, Branko G; Sparks, Ross S

    2015-01-01

    The telemonitoring of vital signs from the home is an essential element of telehealth services for the management of patients with chronic conditions, such as congestive heart failure (CHF), chronic obstructive pulmonary disease (COPD), diabetes, or poorly controlled hypertension. Telehealth is now being deployed widely in both rural and urban settings, and in this paper, we discuss the contribution made by biomedical instrumentation, user interfaces, and automated risk stratification algorithms in developing a clinical diagnostic quality longitudinal health record at home. We identify technical challenges in the acquisition of high-quality biometric signals from unsupervised patients at home, identify new technical solutions and user interfaces, and propose new measurement modalities and signal processing techniques for increasing the quality and value of vital signs monitoring at home. We also discuss use of vital signs data for the automated risk stratification of patients, so that clinical resources can be targeted to those most at risk of unscheduled admission to hospital. New research is also proposed to integrate primary care, hospital, personal genomic, and telehealth electronic health records, and apply predictive analytics and data mining for enhancing clinical decision support.

  20. Vital nodes identification in complex networks

    NASA Astrophysics Data System (ADS)

    Lü, Linyuan; Chen, Duanbing; Ren, Xiao-Long; Zhang, Qian-Ming; Zhang, Yi-Cheng; Zhou, Tao

    2016-09-01

    Real networks exhibit heterogeneous nature with nodes playing far different roles in structure and function. To identify vital nodes is thus very significant, allowing us to control the outbreak of epidemics, to conduct advertisements for e-commercial products, to predict popular scientific publications, and so on. The vital nodes identification attracts increasing attention from both the computer science and physics communities, with algorithms ranging from simply counting the immediate neighbors to complicated machine learning and message passing approaches. In this review, we clarify the concepts and metrics, classify the problems and methods, as well as review the important progresses and describe the state of the art. Furthermore, we provide extensive empirical analyses to compare well-known methods on disparate real networks, and highlight the future directions. In spite of the emphasis on physics-rooted approaches, the unification of the language and comparison with cross-domain methods would trigger interdisciplinary solutions in the near future.

  1. A novel angle computation and calibration algorithm of bio-inspired sky-light polarization navigation sensor.

    PubMed

    Xian, Zhiwen; Hu, Xiaoping; Lian, Junxiang; Zhang, Lilian; Cao, Juliang; Wang, Yujie; Ma, Tao

    2014-09-15

    Navigation plays a vital role in our daily life. As traditional and commonly used navigation technologies, Inertial Navigation System (INS) and Global Navigation Satellite System (GNSS) can provide accurate location information, but suffer from the accumulative error of inertial sensors and cannot be used in a satellite denied environment. The remarkable navigation ability of animals shows that the pattern of the polarization sky can be used for navigation. A bio-inspired POLarization Navigation Sensor (POLNS) is constructed to detect the polarization of skylight. Contrary to the previous approach, we utilize all the outputs of POLNS to compute the input polarization angle, based on Least Squares, which provides optimal angle estimation. In addition, a new sensor calibration algorithm is presented, in which the installation angle errors and sensor biases are taken into consideration. Derivation and implementation of our calibration algorithm are discussed in detail. To evaluate the performance of our algorithms, simulation and real data tests are done to compare our algorithms with several existing algorithms. Comparison results indicate that our algorithms are superior to the others and are more feasible and effective in practice.
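
    The abstract does not give the sensor model, so the sketch below uses a generic Malus-type channel response to show how a polarization angle can be recovered from several analyzer outputs by linear least squares; the channel model and analyzer orientations are assumptions for illustration only, not the POLNS design.

    ```python
    import numpy as np

    def estimate_polarization_angle(intensities, analyzer_angles):
        """Least-squares estimate of the incoming polarization angle phi.

        Assumes each channel follows a Malus-type response
            I_k = a + b*cos(2*(phi - theta_k)),
        which is linear in (a, b*cos(2*phi), b*sin(2*phi))."""
        theta = np.asarray(analyzer_angles, float)
        A = np.column_stack([np.ones_like(theta), np.cos(2 * theta), np.sin(2 * theta)])
        (a, x, y), *_ = np.linalg.lstsq(A, np.asarray(intensities, float), rcond=None)
        return 0.5 * np.arctan2(y, x)        # estimated polarization angle, radians

    # Example with three hypothetical analyzer orientations (0, 60, 120 degrees):
    # phi_hat = estimate_polarization_angle([0.9, 0.4, 0.2], np.deg2rad([0, 60, 120]))
    ```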

  2. Comparison of Dijkstra's algorithm and dynamic programming method in finding shortest path for order picker in a warehouse

    NASA Astrophysics Data System (ADS)

    Nordin, Noraimi Azlin Mohd; Omar, Mohd; Sharif, S. Sarifah Radiah

    2017-04-01

    Companies are looking to improve their productivity within their warehouse operations and distribution centres. In a typical warehouse operation, order picking accounts for more than half of the operating costs. Order picking is a benchmark in measuring the performance and productivity improvement of any warehouse management. Solving the order picking problem is crucial in reducing the response time and waiting time of a customer in receiving his demands. To reduce the response time, proper routing for picking orders is vital. Moreover, in a production line, it is vital to always make sure the supplies arrive on time. Hence, a sample routing network will be applied to EP Manufacturing Berhad (EPMB) as a case study. Dijkstra's algorithm and the dynamic programming method are applied to find the shortest distance for an order picker in order picking. The results show that the dynamic programming method is a simple yet competent approach for finding the shortest distance to pick an order, applicable in a warehouse within a short time period.
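
    For reference, a textbook heap-based implementation of Dijkstra's algorithm on a small warehouse-style distance graph is sketched below; the node names and distances are purely illustrative and unrelated to the EPMB case data.

    ```python
    import heapq

    def dijkstra(graph, source):
        """Shortest distances from `source` in a weighted graph given as
        {node: [(neighbor, distance), ...]} with non-negative distances."""
        dist = {source: 0.0}
        heap = [(0.0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue                       # stale heap entry, already improved
            for v, w in graph.get(u, []):
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return dist

    # Illustrative picking-aisle graph (distances in metres):
    # aisles = {"depot": [("A1", 4), ("A2", 7)], "A1": [("A2", 2)], "A2": []}
    # print(dijkstra(aisles, "depot"))
    ```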

  3. Adaptive reconfigurable V-BLAST type equalizer for cognitive MIMO-OFDM radios

    NASA Astrophysics Data System (ADS)

    Ozden, Mehmet Tahir

    2015-12-01

    An adaptive channel shortening equalizer design for multiple input multiple output-orthogonal frequency division multiplexing (MIMO-OFDM) radio receivers is considered in this presentation. The proposed receiver has desirable features for cognitive and software defined radio implementations. It consists of two sections: MIMO decision feedback equalizer (MIMO-DFE) and adaptive multiple Viterbi detection. In the MIMO-DFE section, a complete modified Gram-Schmidt orthogonalization of multichannel input data is accomplished using sequential processing multichannel Givens lattice stages, so that a Vertical Bell Laboratories Layered Space Time (V-BLAST) type MIMO-DFE is realized at the front-end section of the channel shortening equalizer. Matrix operations, a major bottleneck for receiver operations, are accordingly avoided, and only scalar operations are used. A highly modular and regular radio receiver architecture that has a suitable structure for digital signal processing (DSP) chip and field programmable gate array (FPGA) implementations, which are important for software defined radio realizations, is achieved. The MIMO-DFE section of the proposed receiver can also be reconfigured for spectrum sensing and positioning functions, which are important tasks for cognitive radio applications. In connection with the adaptive multiple Viterbi detection section, a systolic array implementation for each channel is performed so that a receiver architecture with high computational concurrency is attained. The total computational complexity is given in terms of equalizer and desired response filter lengths, alphabet size, and number of antennas. The performance of the proposed receiver is presented for the two-channel case by means of mean squared error (MSE) and probability of error evaluations, which are conducted for time-invariant and time-variant channel conditions, orthogonal and nonorthogonal transmissions, and two different modulation schemes.

  4. Multiresolution wavelet analysis for efficient analysis, compression and remote display of long-term physiological signals.

    PubMed

    Khuan, L Y; Bister, M; Blanchfield, P; Salleh, Y M; Ali, R A; Chan, T H

    2006-06-01

    Increased inter-equipment connectivity coupled with advances in Web technology allows ever escalating amounts of physiological data to be produced, far too much to be displayed adequately on a single computer screen. The consequence is that large quantities of insignificant data will be transmitted and reviewed. This carries an increased risk of overlooking vitally important transients. This paper describes a technique to provide an integrated solution based on a single algorithm for the efficient analysis, compression and remote display of long-term physiological signals with infrequent short duration, yet vital events, to effect a reduction in data transmission and display cluttering and to facilitate reliable data interpretation. The algorithm analyses data at the server end and flags significant events. It produces a compressed version of the signal at a lower resolution that can be satisfactorily viewed in a single screen width. This reduced set of data is initially transmitted together with a set of 'flags' indicating where significant events occur. Subsequent transmissions need only involve transmission of flagged data segments of interest at the required resolution. Efficient processing and code protection with decomposition alone is novel. The fixed transmission length method ensures clutter-less display, irrespective of the data length. The flagging of annotated events in arterial oxygen saturation, electroencephalogram and electrocardiogram illustrates the generic property of the algorithm. Data reduction of 87% to 99% and improved displays are demonstrated.

  5. Vital signs and their cross-correlation in sepsis and NEC: a study of 1,065 very-low-birth-weight infants in two NICUs.

    PubMed

    Fairchild, Karen D; Lake, Douglas E; Kattwinkel, John; Moorman, J Randall; Bateman, David A; Grieve, Philip G; Isler, Joseph R; Sahni, Rakesh

    2017-02-01

    Subtle changes in vital signs and their interactions occur in preterm infants prior to overt deterioration from late-onset septicemia (LOS) or necrotizing enterocolitis (NEC). Optimizing predictive algorithms may lead to earlier treatment. For 1,065 very-low-birth-weight (VLBW) infants in two neonatal intensive care units (NICUs), mean, SD, and cross-correlation of respiratory rate, heart rate (HR), and oxygen saturation (SpO2) were analyzed hourly (131 infant-years' data). Cross-correlation (cotrending) between two vital signs was measured allowing a lag of ±30 s. Cases of LOS and NEC were identified retrospectively (n = 186) and vital sign models were evaluated for ability to predict illness diagnosed in the ensuing 24 h. The best single illness predictor within and between institutions was cross-correlation of HR-SpO2. The best combined model (mean SpO2, SD HR, and cross-correlation of HR-SpO2) trained at one site with ROC area 0.695 had external ROC area of 0.754 at the other site, and provided additive value to an established HR characteristics index for illness prediction (Net Reclassification Improvement: 0.205; 95% confidence interval (CI): 0.113, 0.328). Despite minor inter-institutional differences in vital sign patterns of VLBW infants, cross-correlation of HR-SpO2 and a 3-variable vital sign model performed well at both centers for preclinical detection of sepsis or NEC.
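
    The cotrending measure described above is the maximum cross-correlation between two vital signs over lags of up to ±30 s. One minimal way to compute it for a single hourly window is sketched below; the sampling interval and the use of the Pearson coefficient are assumptions for illustration.

    ```python
    import numpy as np

    def max_lagged_correlation(x, y, max_lag_s=30, sample_interval_s=2):
        """Maximum Pearson correlation between two equally sampled vital sign
        series over integer lags within +/- max_lag_s seconds."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        max_lag = int(max_lag_s / sample_interval_s)
        best = -1.0
        for lag in range(-max_lag, max_lag + 1):
            if lag >= 0:
                a, b = x[lag:], y[: len(y) - lag]      # x lags y by `lag` samples
            else:
                a, b = x[: len(x) + lag], y[-lag:]     # y lags x
            if len(a) > 1 and a.std() > 0 and b.std() > 0:
                best = max(best, float(np.corrcoef(a, b)[0, 1]))
        return best

    # e.g. cotrending of heart rate and SpO2 over one hour of 2-second samples:
    # cc = max_lagged_correlation(hr_window, spo2_window)
    ```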

  6. Vital signs and their cross-correlation in sepsis and NEC: A study of 1065 very low birth weight infants in two NICUs

    PubMed Central

    Fairchild, Karen D.; Lake, Douglas E.; Kattwinkel, John; Moorman, J. Randall; Bateman, David A; Grieve, Philip G; Isler, Joseph R; Sahni, Rakesh

    2016-01-01

    Background Subtle changes in vital signs and their interactions occur in preterm infants prior to overt deterioration from late-onset septicemia (LOS) or necrotizing enterocolitis (NEC). Optimizing predictive algorithms may lead to earlier treatment. Methods For 1065 very low birth weight (VLBW) infants in two NICUs, mean, SD, and cross-correlation of respiratory rate, heart rate (HR), and oxygen saturation (SpO2) were analyzed hourly (131 infant-years’ data). Cross-correlation (co-trending) between two vital signs was measured allowing a lag of ±30 seconds. Cases of LOS and NEC were identified retrospectively (n=186) and vital sign models were evaluated for ability to predict illness diagnosed in the ensuing 24h. Results The best single illness predictor within and between institutions was cross-correlation of HR-SpO2. The best combined model (mean SpO2, SD HR, and cross-correlation of HR-SpO2) trained at one site with ROC area 0.695 had external ROC area of 0.754 at the other site, and provided additive value to an established HR characteristics index for illness prediction (Net Reclassification Improvement 0.25, 95% CI 0.113, 0.328). Conclusion Despite minor inter-institutional differences in vital sign patterns of VLBW infants, cross-correlation of HR-SpO2 and a 3-variable vital sign model performed well at both centers for preclinical detection of sepsis or NEC. PMID:28001143

  7. VLSI implementation of flexible architecture for decision tree classification in data mining

    NASA Astrophysics Data System (ADS)

    Sharma, K. Venkatesh; Shewandagn, Behailu; Bhukya, Shankar Nayak

    2017-07-01

    Data mining algorithms have become vital to researchers in science, engineering, medicine, business, search and security domains. In recent years, there has been a tremendous rise in the size of the data being collected and analyzed. Classification is the main difficulty faced in data mining. Among the solutions developed for this problem, the most widely accepted is Decision Tree Classification (DTC), which gives high precision while handling very large amounts of data. This paper presents a VLSI implementation of a flexible architecture for Decision Tree classification in data mining using the C4.5 algorithm.

  8. Selected-node stochastic simulation algorithm

    NASA Astrophysics Data System (ADS)

    Duso, Lorenzo; Zechner, Christoph

    2018-04-01

    Stochastic simulations of biochemical networks are of vital importance for understanding complex dynamics in cells and tissues. However, existing methods to perform such simulations are associated with computational difficulties and addressing those remains a daunting challenge to the present. Here we introduce the selected-node stochastic simulation algorithm (snSSA), which allows us to exclusively simulate an arbitrary, selected subset of molecular species of a possibly large and complex reaction network. The algorithm is based on an analytical elimination of chemical species, thereby avoiding explicit simulation of the associated chemical events. These species are instead described continuously in terms of statistical moments derived from a stochastic filtering equation, resulting in a substantial speedup when compared to Gillespie's stochastic simulation algorithm (SSA). Moreover, we show that statistics obtained via snSSA profit from a variance reduction, which can significantly lower the number of Monte Carlo samples needed to achieve a certain performance. We demonstrate the algorithm using several biological case studies for which the simulation time could be reduced by orders of magnitude.
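
    The snSSA is benchmarked against Gillespie's stochastic simulation algorithm (SSA). For readers unfamiliar with that baseline, a minimal exact SSA for a birth-death process is sketched below; the reaction network and rate constants are illustrative only.

    ```python
    import random

    def gillespie_birth_death(k_birth=10.0, k_death=0.1, x0=0, t_end=100.0, seed=1):
        """Exact SSA trajectory of a birth-death process:
        0 -> X at rate k_birth, and X -> 0 at rate k_death * x."""
        rng = random.Random(seed)
        t, x = 0.0, x0
        times, counts = [t], [x]
        while t < t_end:
            a_birth, a_death = k_birth, k_death * x
            a_total = a_birth + a_death
            if a_total == 0:
                break
            t += rng.expovariate(a_total)                    # waiting time to next reaction
            x += 1 if rng.random() * a_total < a_birth else -1   # pick birth or death
            times.append(t)
            counts.append(x)
        return times, counts

    # times, counts = gillespie_birth_death()
    ```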

  9. An efficient motion-resistant method for wearable pulse oximeter.

    PubMed

    Yan, Yong-Sheng; Zhang, Yuan-Ting

    2008-05-01

    Reduction of motion artifact and power saving are crucial in designing a wearable pulse oximeter for long-term telemedicine application. In this paper, a novel algorithm, minimum correlation discrete saturation transform (MCDST), has been developed for the estimation of arterial oxygen saturation (SaO2), based on an optical model derived from photon diffusion analysis. The simulation shows that the new algorithm MCDST is more robust under low SNRs than the clinically verified motion-resistant algorithm discrete saturation transform (DST). Further, the experiment with different severity of motions demonstrates that MCDST has a slightly better performance than the DST algorithm. Moreover, MCDST is more computationally efficient than DST because the former uses linear algebra instead of the time-consuming adaptive filter used by the latter, which indicates that MCDST can reduce the required power consumption and circuit complexity of the implementation. This is vital for wearable devices, where the physical size and long battery life are crucial.

  10. Toward detecting deception in intelligent systems

    NASA Astrophysics Data System (ADS)

    Santos, Eugene, Jr.; Johnson, Gregory, Jr.

    2004-08-01

    Contemporary decision makers often must choose a course of action using knowledge from several sources. Knowledge may be provided from many diverse sources including electronic sources such as knowledge-based diagnostic or decision support systems or through data mining techniques. As the decision maker becomes more dependent on these electronic information sources, detecting deceptive information from these sources becomes vital to making a correct, or at least more informed, decision. This applies to unintentional disinformation as well as intentional misinformation. Our ongoing research focuses on employing models of deception and deception detection from the fields of psychology and cognitive science to these systems as well as implementing deception detection algorithms for probabilistic intelligent systems. The deception detection algorithms are used to detect, classify and correct attempts at deception. Algorithms for detecting unexpected information rely upon a prediction algorithm from the collaborative filtering domain to predict agent responses in a multi-agent system.

  11. An Efficient Next Hop Selection Algorithm for Multi-Hop Body Area Networks

    PubMed Central

    Ayatollahitafti, Vahid; Ngadi, Md Asri; Mohamad Sharif, Johan bin; Abdullahi, Mohammed

    2016-01-01

    Body Area Networks (BANs) consist of various sensors which gather patient’s vital signs and deliver them to doctors. One of the most significant challenges faced, is the design of an energy-efficient next hop selection algorithm to satisfy Quality of Service (QoS) requirements for different healthcare applications. In this paper, a novel efficient next hop selection algorithm is proposed in multi-hop BANs. This algorithm uses the minimum hop count and a link cost function jointly in each node to choose the best next hop node. The link cost function includes the residual energy, free buffer size, and the link reliability of the neighboring nodes, which is used to balance the energy consumption and to satisfy QoS requirements in terms of end to end delay and reliability. Extensive simulation experiments were performed to evaluate the efficiency of the proposed algorithm using the NS-2 simulator. Simulation results show that our proposed algorithm provides significant improvement in terms of energy consumption, number of packets forwarded, end to end delay and packet delivery ratio compared to the existing routing protocol. PMID:26771586
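
    The abstract specifies the ingredients of the next-hop rule (minimum hop count plus a link cost built from residual energy, free buffer size and link reliability) but not the exact cost function. The sketch below encodes one plausible reading with equal, assumed weights; it is not the authors' protocol.

    ```python
    def select_next_hop(neighbors):
        """Pick the neighbor with the fewest hops to the sink, breaking ties by a
        cost that prefers high residual energy, free buffer space and link
        reliability. `neighbors` is a list of dicts with keys:
        hop_count, residual_energy (0..1), free_buffer (0..1), reliability (0..1)."""
        def cost(n):
            # lower is better; equal weights are an illustrative assumption
            return (1 - n["residual_energy"]) + (1 - n["free_buffer"]) + (1 - n["reliability"])
        return min(neighbors, key=lambda n: (n["hop_count"], cost(n)))

    # Example:
    # best = select_next_hop([
    #     {"id": 3, "hop_count": 2, "residual_energy": 0.8, "free_buffer": 0.6, "reliability": 0.9},
    #     {"id": 7, "hop_count": 2, "residual_energy": 0.5, "free_buffer": 0.9, "reliability": 0.7},
    # ])
    ```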

  12. Design of a digital voice data compression technique for orbiter voice channels

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Candidate techniques were investigated for digital voice compression to a transmission rate of 8 kbps. Good voice quality, speaker recognition, and robustness in the presence of error bursts were considered. The technique of delayed-decision adaptive predictive coding is described and compared with conventional adaptive predictive coding. Results include a set of experimental simulations recorded on analog tape. The two FM broadcast segments produced show the delayed-decision technique to be virtually undegraded or minimally degraded at .001 and .01 Viterbi decoder bit error rates. Preliminary estimates of the hardware complexity of this technique indicate potential for implementation in space shuttle orbiters.

  13. The use of interleaving for reducing radio loss in convolutionally coded systems

    NASA Technical Reports Server (NTRS)

    Divsalar, D.; Simon, M. K.; Yuen, J. H.

    1989-01-01

    The use of interleaving after convolutional coding and deinterleaving before Viterbi decoding is proposed. This effectively reduces radio loss at low-loop Signal to Noise Ratios (SNRs) by several decibels and at high-loop SNRs by a few tenths of a decibel. Performance of the coded system can further be enhanced if the modulation index is optimized for this system. This will correspond to a reduction of bit SNR at a certain bit error rate for the overall system. The introduction of interleaving/deinterleaving into communication systems designed for future deep space missions does not substantially complicate their hardware design or increase their system cost.

  14. A Simulation System for Validating the Analytical Prediction of Performance of the Convolutional Encoded and Symbol Interleaved TDRSS S-band Return Link Service in a Pulsed RFI Environment

    NASA Technical Reports Server (NTRS)

    1981-01-01

    A hardware integrated convolutional coding/symbol interleaving and integrated symbol deinterleaving/Viterbi decoding simulation system is described. Validation on the system of the performance of the TDRSS S-band return link with BPSK modulation, operating in a pulsed RFI environment is included. The system consists of three components, the Fast Linkabit Error Rate Tester (FLERT), the Transition Probability Generator (TPG), and a modified LV7017B which includes rate 1/3 capability as well as a periodic interleaver/deinterleaver. Operating and maintenance manuals for each of these units are included.

  15. Performance of DPSK with convolutional encoding on time-varying fading channels

    NASA Technical Reports Server (NTRS)

    Mui, S. Y.; Modestino, J. W.

    1977-01-01

    The bit error probability performance of a differentially-coherent phase-shift keyed (DPSK) modem with convolutional encoding and Viterbi decoding on time-varying fading channels is examined. Both the Rician and the lognormal channels are considered. Bit error probability upper bounds on fully-interleaved (zero-memory) fading channels are derived and substantiated by computer simulation. It is shown that the resulting coded system performance is a relatively insensitive function of the choice of channel model provided that the channel parameters are related according to the correspondence developed as part of this paper. Finally, a comparison of DPSK with a number of other modulation strategies is provided.

  16. Emotion detection from text

    NASA Astrophysics Data System (ADS)

    Ramalingam, V. V.; Pandian, A.; Jaiswal, Abhijeet; Bhatia, Nikhar

    2018-04-01

    This paper presents a novel method based on concept of Machine Learning for Emotion Detection using various algorithms of Support Vector Machine and major emotions described are linked to the Word-Net for enhanced accuracy. The approach proposed plays a promising role to augment the Artificial Intelligence in the near future and could be vital in optimization of Human-Machine Interface.

  17. The development of a whole-body algorithm

    NASA Technical Reports Server (NTRS)

    Kay, F. J.

    1973-01-01

    The whole-body algorithm is envisioned as a mathematical model that utilizes human physiology to simulate the behavior of vital body systems. The objective of this model is to determine the response of selected body parameters within these systems to various input perturbations, or stresses. Perturbations of interest are exercise, chemical unbalances, gravitational changes and other abnormal environmental conditions. This model provides for a study of man's physiological response in various space applications, underwater applications, normal and abnormal workloads and environments, and the functioning of the system with physical impairments or decay of functioning components. Many methods or approaches to the development of a whole-body algorithm are considered. Of foremost concern is the determination of the subsystems to be included, the detail of the subsystems and the interaction between the subsystems.

  18. Real alerts and artifact classification in archived multi-signal vital sign monitoring data: implications for mining big data.

    PubMed

    Hravnak, Marilyn; Chen, Lujie; Dubrawski, Artur; Bose, Eliezer; Clermont, Gilles; Pinsky, Michael R

    2016-12-01

    Huge hospital information system databases can be mined for knowledge discovery and decision support, but artifact in stored non-invasive vital sign (VS) high-frequency data streams limits its use. We used machine-learning (ML) algorithms trained on expert-labeled VS data streams to automatically classify VS alerts as real or artifact, thereby "cleaning" such data for future modeling. 634 admissions to a step-down unit had recorded continuous noninvasive VS monitoring data [heart rate (HR), respiratory rate (RR), peripheral arterial oxygen saturation (SpO2) at 1/20 Hz, and noninvasive oscillometric blood pressure (BP)]. Periods when data were across stability thresholds defined VS event epochs. Data were divided into Block 1 as the ML training/cross-validation set and Block 2 as the test set. Expert clinicians annotated Block 1 events as perceived real or artifact. After feature extraction, ML algorithms were trained to create and validate models automatically classifying events as real or artifact. The models were then tested on Block 2. Block 1 yielded 812 VS events, with 214 (26 %) judged by experts as artifact (RR 43 %, SpO2 40 %, BP 15 %, HR 2 %). ML algorithms applied to the Block 1 training/cross-validation set (tenfold cross-validation) gave area under the curve (AUC) scores of 0.97 RR, 0.91 BP and 0.76 SpO2. Performance when applied to Block 2 test data was AUC 0.94 RR, 0.84 BP and 0.72 SpO2. ML-defined algorithms applied to archived multi-signal continuous VS monitoring data allowed accurate automated classification of VS alerts as real or artifact, and could support data mining for future model building.

  19. Real Alerts and Artifact Classification in Archived Multi-signal Vital Sign Monitoring Data — Implications for Mining Big Data

    PubMed Central

    Hravnak, Marilyn; Chen, Lujie; Dubrawski, Artur; Bose, Eliezer; Clermont, Gilles; Pinsky, Michael R.

    2015-01-01

    PURPOSE Huge hospital information system databases can be mined for knowledge discovery and decision support, but artifact in stored non-invasive vital sign (VS) high-frequency data streams limits its use. We used machine-learning (ML) algorithms trained on expert-labeled VS data streams to automatically classify VS alerts as real or artifact, thereby “cleaning” such data for future modeling. METHODS 634 admissions to a step-down unit had recorded continuous noninvasive VS monitoring data (heart rate [HR], respiratory rate [RR], peripheral arterial oxygen saturation [SpO2] at 1/20 Hz, and noninvasive oscillometric blood pressure [BP]). Periods when data were across stability thresholds defined VS event epochs. Data were divided into Block 1 as the ML training/cross-validation set and Block 2 as the test set. Expert clinicians annotated Block 1 events as perceived real or artifact. After feature extraction, ML algorithms were trained to create and validate models automatically classifying events as real or artifact. The models were then tested on Block 2. RESULTS Block 1 yielded 812 VS events, with 214 (26%) judged by experts as artifact (RR 43%, SpO2 40%, BP 15%, HR 2%). ML algorithms applied to the Block 1 training/cross-validation set (10-fold cross-validation) gave area under the curve (AUC) scores of 0.97 RR, 0.91 BP and 0.76 SpO2. Performance when applied to Block 2 test data was AUC 0.94 RR, 0.84 BP and 0.72 SpO2. CONCLUSIONS ML-defined algorithms applied to archived multi-signal continuous VS monitoring data allowed accurate automated classification of VS alerts as real or artifact, and could support data mining for future model building. PMID:26438655

  20. A Novel Angle Computation and Calibration Algorithm of Bio-Inspired Sky-Light Polarization Navigation Sensor

    PubMed Central

    Xian, Zhiwen; Hu, Xiaoping; Lian, Junxiang; Zhang, Lilian; Cao, Juliang; Wang, Yujie; Ma, Tao

    2014-01-01

    Navigation plays a vital role in our daily life. As traditional and commonly used navigation technologies, Inertial Navigation System (INS) and Global Navigation Satellite System (GNSS) can provide accurate location information, but suffer from the accumulative error of inertial sensors and cannot be used in a satellite denied environment. The remarkable navigation ability of animals shows that the pattern of the polarization sky can be used for navigation. A bio-inspired POLarization Navigation Sensor (POLNS) is constructed to detect the polarization of skylight. Contrary to the previous approach, we utilize all the outputs of POLNS to compute the input polarization angle, based on Least Squares, which provides optimal angle estimation. In addition, a new sensor calibration algorithm is presented, in which the installation angle errors and sensor biases are taken into consideration. Derivation and implementation of our calibration algorithm are discussed in detail. To evaluate the performance of our algorithms, simulation and real data tests are done to compare our algorithms with several existing algorithms. Comparison results indicate that our algorithms are superior to the others and are more feasible and effective in practice. PMID:25225872
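
    The least-squares angle computation described above can be illustrated with a simple Malus-law channel model: each analyzer channel output is linear in cos(2a) and sin(2a), so the polarization angle follows from a linear fit. The channel model, analyzer angles and degree of polarization below are assumptions for the sketch, not the POLNS hardware model or its calibration algorithm.

```python
# Illustrative least-squares estimate of the skylight polarization angle from
# several polarization-analyzer channel outputs (assumed Malus-law model).
import numpy as np

def polarization_angle(outputs, analyzer_angles):
    """outputs[i] ~ c0 + A*cos(2*a_i) + B*sin(2*a_i); angle = 0.5*atan2(B, A)."""
    a = np.asarray(analyzer_angles)
    H = np.column_stack([np.ones_like(a), np.cos(2 * a), np.sin(2 * a)])
    c0, A, B = np.linalg.lstsq(H, np.asarray(outputs), rcond=None)[0]
    return 0.5 * np.arctan2(B, A)

# synthetic check: 6 channels, true polarization angle 0.3 rad
angles = np.linspace(0, np.pi, 6, endpoint=False)
y = 1.0 + 0.5 * np.cos(2 * (angles - 0.3))
print(polarization_angle(y, angles))       # ~0.3
```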

  1. On complexity of trellis structure of linear block codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1990-01-01

    The trellis structure of linear block codes (LBCs) is discussed. The state and branch complexities of a trellis diagram (TD) for an LBC are investigated. The TD with the minimum number of states is said to be minimal. The branch complexity of a minimal TD for an LBC is expressed in terms of the dimensions of specific subcodes of the given code. Then upper and lower bounds are derived on the number of states of a minimal TD for an LBC, and it is shown that a cyclic (or shortened cyclic) code is the worst in terms of the state complexity among the LBCs of the same length and dimension. Furthermore, it is shown that the structural complexity of a minimal TD for an LBC depends on the order of its bit positions. This fact suggests that an appropriate permutation of the bit positions of a code may result in an equivalent code with a much simpler minimal TD. Boolean polynomial representation of codewords of an LBC is also considered. This representation helps in the study of the trellis structure of the code. Boolean polynomial representation of a code is applied to construct its minimal TD. In particular, the construction of minimal trellises for Reed-Muller codes and the extended and permuted binary primitive BCH codes which contain Reed-Muller codes as subcodes is emphasized. Finally, the structural complexity of minimal trellises for the extended and permuted double-error-correcting BCH codes is analyzed and presented. It is shown that these codes have relatively simple trellis structure and hence can be decoded with the Viterbi decoding algorithm.
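
    Since trellis-based decoding with the Viterbi algorithm is the thread running through this entry, a compact, runnable illustration of the add-compare-select recursion is given below. It operates on the 4-state trellis of a rate-1/2, constraint-length-3 convolutional code with generators (7, 5) octal; it is a generic teaching sketch, not the minimal block-code trellis construction discussed above.

```python
# Minimal hard-decision Viterbi decoder over a 4-state trellis
# (rate-1/2, constraint-length-3 convolutional code, generators (7, 5) octal).

G = [(1, 1, 1), (1, 0, 1)]                 # taps for the two coded output bits

def step(state, bit):
    """Return (next_state, output_pair) for input `bit` from 2-bit `state`."""
    reg = (bit, (state >> 1) & 1, state & 1)
    out = tuple(sum(g * r for g, r in zip(gen, reg)) % 2 for gen in G)
    return (bit << 1) | ((state >> 1) & 1), out

def viterbi_decode(received_pairs):
    INF = 10 ** 9
    metric = [0, INF, INF, INF]            # start in the all-zero state
    paths = [[], [], [], []]
    for r in received_pairs:
        new_metric, new_paths = [INF] * 4, [None] * 4
        for s in range(4):
            if metric[s] >= INF:
                continue
            for u in (0, 1):               # add ...
                nxt, out = step(s, u)
                m = metric[s] + (out[0] != r[0]) + (out[1] != r[1])
                if m < new_metric[nxt]:    # ... compare, select
                    new_metric[nxt] = m
                    new_paths[nxt] = paths[s] + [u]
        metric, paths = new_metric, new_paths
    best = min(range(4), key=lambda s: metric[s])
    return paths[best]

# encode a short message (two tail zeros terminate the trellis), corrupt one bit
msg = [1, 0, 1, 1, 0, 0]
state, coded = 0, []
for u in msg:
    state, out = step(state, u)
    coded.append(out)
coded[2] = (coded[2][0] ^ 1, coded[2][1])  # single channel error
print(viterbi_decode(coded))               # -> [1, 0, 1, 1, 0, 0]
```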

  2. A fundamental conflict of care: Nurses' accounts of balancing patients' sleep with taking vital sign observations at night.

    PubMed

    Hope, Joanna; Recio-Saucedo, Alejandra; Fogg, Carole; Griffiths, Peter; Smith, Gary B; Westwood, Greta; Schmidt, Paul E

    2017-12-21

    To explore why adherence to vital sign observations scheduled by an early warning score protocol reduces at night. Regular vital sign observations can reduce avoidable deterioration in hospital. Early warning score protocols set the frequency of these observations by the severity of a patient's condition. Vital sign observations are taken less frequently at night, even with an early warning score in place, but no literature has explored why. A qualitative interpretative design informed this study. Seventeen semi-structured interviews with nursing staff working on wards with varying levels of adherence to scheduled vital sign observations. A thematic analysis approach was used. At night, nursing teams found it difficult to balance the competing care goals of supporting sleep with taking vital sign observations. The night-time frequency of these observations was determined by clinical judgement, ward-level expectations of observation timing and the risk of disturbing other patients. Patients with COPD or dementia could be under-monitored, while patients nearing the end of life could be over-monitored. In this study, we found an early warning score algorithm focused on deterioration prevention did not account for long-term management or palliative care trajectories. Nurses were therefore less inclined to wake such patients to take vital sign observations at night. However, the perception of widespread exceptions and lack of evidence regarding optimum frequency risks delegitimising the early warning score approach. This may pose a risk to patient safety, particularly for patients with dementia or chronic conditions. Nurses should document exceptions and discuss these with the wider team. Hospitals should monitor why vital sign observations are missed at night, identify which groups are under-monitored and provide guidance on prioritising competing expectations. Early warning score protocols should take account of different care trajectories. © 2017 The Authors. Journal of Clinical Nursing Published by John Wiley & Sons Ltd.

  3. Respiratory rate estimation during triage of children in hospitals.

    PubMed

    Shah, Syed Ahmar; Fleming, Susannah; Thompson, Matthew; Tarassenko, Lionel

    2015-01-01

    Accurate assessment of a child's health is critical for appropriate allocation of medical resources and timely delivery of healthcare in Emergency Departments. The accurate measurement of vital signs is a key step in the determination of the severity of illness, and respiratory rate is currently the most difficult vital sign to measure accurately. Several previous studies have attempted to extract respiratory rate from photoplethysmogram (PPG) recordings. However, the majority have been conducted in controlled settings using PPG recordings from healthy subjects. In many studies, manual selection of clean sections of PPG recordings was undertaken before assessing the accuracy of the signal processing algorithms developed. Such selection procedures are not appropriate in clinical settings. A major limitation of AR modelling, previously applied to respiratory rate estimation, is the appropriate selection of model order. This study developed a novel algorithm that automatically estimates respiratory rate from a median spectrum constructed by applying multiple AR models to processed PPG segments acquired with pulse oximetry using a finger probe. Good-quality sections were identified using a dynamic template-matching technique to assess PPG signal quality. The algorithm was validated on 205 children presenting to the Emergency Department at the John Radcliffe Hospital, Oxford, UK, with reference respiratory rates up to 50 breaths per minute estimated by paediatric nurses. At the time of writing, the authors are not aware of any other study that has validated respiratory rate estimation using data collected from over 200 children in hospitals during routine triage.
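
    The core numerical step described above, pooling the spectra of several AR model orders and reading the respiratory rate off the median spectrum, can be sketched as follows. The sampling rate, the range of model orders and the synthetic test signal are assumptions for illustration; the published algorithm additionally applies signal-quality gating to the PPG before this stage.

```python
# Sketch: fit AR models of several orders by least squares, take the median of
# their power spectra, and pick the spectral peak as the respiratory rate.
import numpy as np

def ar_spectrum(x, order, freqs, fs):
    """Least-squares AR fit and its power spectrum on the given frequency grid."""
    x = x - x.mean()
    X = np.column_stack([x[order - k - 1: len(x) - k - 1] for k in range(order)])
    a, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
    sigma2 = np.var(x[order:] - X @ a)
    k = np.arange(1, order + 1)
    denom = np.abs(1 - np.exp(-2j * np.pi * np.outer(freqs / fs, k)) @ a) ** 2
    return sigma2 / denom

fs = 4.0                                        # Hz, assumed respiratory-waveform rate
t = np.arange(0, 60, 1 / fs)
x = np.sin(2 * np.pi * 0.4 * t) + 0.3 * np.random.default_rng(1).normal(size=t.size)
freqs = np.linspace(0.1, 1.2, 200)              # 6-72 breaths per minute
spectra = [ar_spectrum(x, p, freqs, fs) for p in range(4, 13)]
median_spec = np.median(spectra, axis=0)
print("estimated rate:", 60 * freqs[np.argmax(median_spec)], "breaths/min")  # ~24
```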

  4. Outlining Purposes, Stating the Nature of the Present Research, and Listing Research Questions or Hypotheses in Academic Papers

    ERIC Educational Resources Information Center

    Shehzad, Wasima

    2011-01-01

    Driving research questions from the prevailing issues and interests and developing from them new theories, formulas, algorithms, methods, and designs, and linking them to the interests of the larger audience is a vital component of scientific research papers. The present article discusses outlining purposes or stating the nature of the present…

  5. Stable Algorithm For Estimating Airdata From Flush Surface Pressure Measurements

    NASA Technical Reports Server (NTRS)

    Whitmore, Stephen, A. (Inventor); Cobleigh, Brent R. (Inventor); Haering, Edward A., Jr. (Inventor)

    2001-01-01

    An airdata estimation and evaluation system and method, including a stable algorithm for estimating airdata from nonintrusive surface pressure measurements. The airdata estimation and evaluation system is preferably implemented in a flush airdata sensing (FADS) system. The system and method of the present invention take a flow model equation and transform it into a triples formulation equation. The triples formulation equation eliminates the pressure related states from the flow model equation by strategically taking the differences of three surface pressures, known as triples. This triples formulation equation is then used to accurately estimate and compute vital airdata from nonintrusive surface pressure measurements.

  6. Estimation of Cardiopulmonary Parameters From Ultra Wideband Radar Measurements Using the State Space Method.

    PubMed

    Naishadham, Krishna; Piou, Jean E; Ren, Lingyun; Fathy, Aly E

    2016-12-01

    Ultra wideband (UWB) Doppler radar has many biomedical applications, including remote diagnosis of cardiovascular disease, triage and real-time personnel tracking in rescue missions. It uses narrow pulses to probe the human body and detect tiny cardiopulmonary movements by spectral analysis of the backscattered electromagnetic (EM) field. With the help of super-resolution spectral algorithms, UWB radar is capable of increased accuracy for estimating vital signs such as heart and respiration rates in adverse signal-to-noise conditions. A major challenge for biomedical radar systems is detecting the heartbeat of a subject with high accuracy, because of minute thorax motion (less than 0.5 mm) caused by the heartbeat. The problem becomes compounded by EM clutter and noise in the environment. In this paper, we introduce a new algorithm based on the state space method (SSM) for the extraction of cardiac and respiration rates from UWB radar measurements. SSM produces range-dependent system poles that can be classified parametrically with spectral peaks at the cardiac and respiratory frequencies. It is shown that SSM produces accurate estimates of the vital signs without producing harmonics and inter-modulation products that plague signal resolution in widely used FFT spectrograms.
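
    The pole-based estimation idea can be illustrated with a generic subspace sketch: stack the slow-time radar signal into a Hankel matrix, extract a low-rank signal subspace, estimate a shift-invariance operator, and read frequencies off the angles of its eigenvalues. This is an ESPRIT-style stand-in under assumed sampling and model order, not the authors' SSM implementation.

```python
# Generic subspace (shift-invariance) pole estimation for two spectral lines,
# standing in for respiration and heartbeat frequencies in a slow-time signal.
import numpy as np

def dominant_freqs(x, fs, model_order=4, rows=None):
    N = len(x)
    rows = rows or N // 2
    cols = N - rows + 1
    H = np.array([x[i:i + cols] for i in range(rows)])       # Hankel matrix
    U, s, _ = np.linalg.svd(H, full_matrices=False)
    Us = U[:, :model_order]                                   # signal subspace
    A = np.linalg.pinv(Us[:-1]) @ Us[1:]                      # shift invariance
    poles = np.linalg.eigvals(A)
    return np.sort(np.abs(np.angle(poles)) * fs / (2 * np.pi))

fs = 20.0                                                     # slow-time rate, assumed
t = np.arange(0, 30, 1 / fs)
x = 1.0 * np.sin(2 * np.pi * 0.25 * t) + 0.1 * np.sin(2 * np.pi * 1.2 * t)
x += 0.05 * np.random.default_rng(0).normal(size=t.size)
print(dominant_freqs(x, fs))   # pairs near 0.25 Hz (respiration) and 1.2 Hz (heart)
```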

  7. Channel modeling, signal processing and coding for perpendicular magnetic recording

    NASA Astrophysics Data System (ADS)

    Wu, Zheng

    With the increasing areal density in magnetic recording systems, perpendicular recording has replaced longitudinal recording to overcome the superparamagnetic limit. Studies on perpendicular recording channels including aspects of channel modeling, signal processing and coding techniques are presented in this dissertation. To optimize a high density perpendicular magnetic recording system, one needs to know the tradeoffs between various components of the system including the read/write transducers, the magnetic medium, and the read channel. We extend the work by Chaichanavong on the parameter optimization for systems via design curves. Different signal processing and coding techniques are studied. Information-theoretic tools are utilized to determine the acceptable region for the channel parameters when optimal detection and linear coding techniques are used. Our results show that a considerable gain can be achieved by the optimal detection and coding techniques. The read-write process in perpendicular magnetic recording channels includes a number of nonlinear effects. Nonlinear transition shift (NLTS) is one of them. The signal distortion induced by NLTS can be reduced by write precompensation during data recording. We numerically evaluate the effect of NLTS on the read-back signal and examine the effectiveness of several write precompensation schemes in combating NLTS in a channel characterized by both transition jitter noise and additive white Gaussian electronics noise. We also present an analytical method to estimate the bit-error-rate and use it to help determine the optimal write precompensation values in multi-level precompensation schemes. We propose a mean-adjusted pattern-dependent noise predictive (PDNP) detection algorithm for use on the channel with NLTS. We show that this detector can offer significant improvements in bit-error-rate (BER) compared to conventional Viterbi and PDNP detectors. Moreover, the system performance can be further improved by combining the new detector with a simple write precompensation scheme. Soft-decision decoding for algebraic codes can improve performance for magnetic recording systems. In this dissertation, we propose two soft-decision decoding methods for tensor-product parity codes. We also present a list decoding algorithm for generalized error locating codes.

  8. Detection of core-periphery structure in networks based on 3-tuple motifs

    NASA Astrophysics Data System (ADS)

    Ma, Chuang; Xiang, Bing-Bing; Chen, Han-Shuang; Small, Michael; Zhang, Hai-Feng

    2018-05-01

    Detecting mesoscale structure, such as community structure, is of vital importance for analyzing complex networks. Recently, a new mesoscale structure, core-periphery (CP) structure, has been identified in many real-world systems. In this paper, we propose an effective algorithm for detecting CP structure based on a 3-tuple motif. In this algorithm, we first define a 3-tuple motif in terms of the patterns of edges as well as the property of nodes, and then a motif adjacency matrix is constructed based on the 3-tuple motif. Finally, the problem is converted into finding a cluster that minimizes the motif conductance. Our algorithm works well for different CP structures, including single or multiple CP structures and local or global CP structures. Results on the synthetic and the empirical networks validate the high performance of our method.
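
    As a concrete illustration of a motif adjacency matrix, the snippet below builds one for the plain undirected triangle motif, where entry (i, j) counts the motif instances containing edge (i, j). The paper's 3-tuple motif additionally encodes node properties, so this is a simplified stand-in; the example graph is arbitrary.

```python
# Motif adjacency matrix for the undirected triangle motif:
# W[i, j] = number of triangles that edge (i, j) participates in.
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]])              # small undirected example graph

W = (A @ A) * A                            # common-neighbour counts, masked by edges
print(W)
# A cluster minimising the motif conductance of W can then be sought, e.g. by a
# spectral sweep over the second eigenvector of the motif Laplacian of W.
```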

  9. Deadlock-free genetic scheduling algorithm for automated manufacturing systems based on deadlock control policy.

    PubMed

    Xing, KeYi; Han, LiBin; Zhou, MengChu; Wang, Feng

    2012-06-01

    Deadlock-free control and scheduling are vital for optimizing the performance of automated manufacturing systems (AMSs) with shared resources and route flexibility. Based on the Petri net models of AMSs, this paper embeds the optimal deadlock avoidance policy into the genetic algorithm and develops a novel deadlock-free genetic scheduling algorithm for AMSs. A possible solution of the scheduling problem is coded as a chromosome representation that is a permutation with repetition of parts. By using the one-step look-ahead method in the optimal deadlock control policy, the feasibility of a chromosome is checked, and infeasible chromosomes are amended into feasible ones, which can be easily decoded into a feasible deadlock-free schedule. The chromosome representation and polynomial complexity of checking and amending procedures together support the cooperative aspect of genetic search for scheduling problems strongly.

  10. Network-based recommendation algorithms: A review

    NASA Astrophysics Data System (ADS)

    Yu, Fei; Zeng, An; Gillard, Sébastien; Medo, Matúš

    2016-06-01

    Recommender systems are a vital tool that helps us to overcome the information overload problem. They are being used by most e-commerce web sites and attract the interest of a broad scientific community. A recommender system uses data on users' past preferences to choose new items that might be appreciated by a given individual user. While many approaches to recommendation exist, the approach based on a network representation of the input data has gained considerable attention in the past. We review here a broad range of network-based recommendation algorithms and for the first time compare their performance on three distinct real datasets. We present recommendation topics that go beyond the mere question of which algorithm to use, such as the possible influence of recommendation on the evolution of the systems that use it, and finally discuss open research directions and challenges.
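
    One representative member of the family reviewed above is probabilistic spreading (ProbS), in which a user's collected items spread unit resources to their users and back to items over the bipartite graph. The sketch below implements that two-step diffusion on a toy user-item matrix; the data are invented, and masking already-collected items is a common convention rather than part of any specific paper.

```python
# Probabilistic spreading (ProbS) on a user-item bipartite graph.
import numpy as np

def probs_scores(R):
    """R: users x items binary matrix; returns ProbS recommendation scores."""
    item_deg = np.maximum(R.sum(axis=0), 1.0)       # users per item
    user_deg = np.maximum(R.sum(axis=1), 1.0)       # items per user
    G = R @ (R / item_deg).T                        # step 1: items spread to users
    scores = G @ (R / user_deg[:, None])            # step 2: users spread to items
    return np.where(R > 0, -np.inf, scores)         # rank only uncollected items

R = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1]], dtype=float)
print(probs_scores(R))
```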

  11. Application-oriented offloading in heterogeneous networks for mobile cloud computing

    NASA Astrophysics Data System (ADS)

    Tseng, Fan-Hsun; Cho, Hsin-Hung; Chang, Kai-Di; Li, Jheng-Cong; Shih, Timothy K.

    2018-04-01

    Nowadays, Internet applications have become so complicated that a mobile device needs more computing resources to achieve a shorter execution time, but it is restricted by limited battery capacity. Mobile cloud computing (MCC) has emerged to tackle the finite resource problem of mobile devices. MCC offloads the tasks and jobs of mobile devices to cloud and fog environments by using an offloading scheme. It is vital to MCC to decide which tasks should be offloaded and how to offload them efficiently. In the paper, we formulate the offloading problem between the mobile device and the cloud data center and propose two application-oriented algorithms for minimum execution time, i.e. the Minimum Offloading Time for Mobile device (MOTM) algorithm and the Minimum Execution Time for Cloud data center (METC) algorithm. The MOTM algorithm minimizes offloading time by selecting appropriate offloading links based on application categories. The METC algorithm minimizes execution time in the cloud data center by selecting virtual and physical machines with corresponding resource requirements of applications. Simulation results show that the proposed mechanism not only minimizes total execution time for mobile devices but also decreases their energy consumption.

  12. Classifier ensemble construction with rotation forest to improve medical diagnosis performance of machine learning algorithms.

    PubMed

    Ozcift, Akin; Gulten, Arif

    2011-12-01

    Improving the accuracy of machine learning algorithms is vital in designing high performance computer-aided diagnosis (CADx) systems. Research has shown that base classifier performance might be enhanced by ensemble classification strategies. In this study, we construct rotation forest (RF) ensemble classifiers of 30 machine learning algorithms to evaluate their classification performances using Parkinson's, diabetes and heart disease datasets from the literature. In the experiments, first the feature dimension of the three datasets is reduced using the correlation based feature selection (CFS) algorithm. Second, classification performances of the 30 machine learning algorithms are calculated for the three datasets. Third, 30 classifier ensembles are constructed based on the RF algorithm to assess the performances of the respective classifiers with the same disease data. All the experiments are carried out with a leave-one-out validation strategy and the performances of the 60 algorithms are evaluated using three metrics: classification accuracy (ACC), kappa error (KE) and area under the receiver operating characteristic (ROC) curve (AUC). Base classifiers achieved average accuracies of 72.15%, 77.52% and 84.43% for the diabetes, heart and Parkinson's datasets, respectively. As for the RF classifier ensembles, they produced average accuracies of 74.47%, 80.49% and 87.13% for the respective diseases. RF, a newly proposed classifier ensemble algorithm, might be used to improve the accuracy of miscellaneous machine learning algorithms to design advanced CADx systems. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  13. Battery algorithm verification and development using hardware-in-the-loop testing

    NASA Astrophysics Data System (ADS)

    He, Yongsheng; Liu, Wei; Koch, Brian J.

    Battery algorithms play a vital role in hybrid electric vehicles (HEVs), plug-in hybrid electric vehicles (PHEVs), extended-range electric vehicles (EREVs), and electric vehicles (EVs). The energy management of hybrid and electric propulsion systems needs to rely on accurate information on the state of the battery in order to determine the optimal electric drive without abusing the battery. In this study, a cell-level hardware-in-the-loop (HIL) system is used to verify and develop state of charge (SOC) and power capability predictions of embedded battery algorithms for various vehicle applications. Two different batteries were selected as representative examples to illustrate the battery algorithm verification and development procedure. One is a lithium-ion battery with a conventional metal oxide cathode, which is a power battery for HEV applications. The other is a lithium-ion battery with an iron phosphate (LiFePO 4) cathode, which is an energy battery for applications in PHEVs, EREVs, and EVs. The battery cell HIL testing provided valuable data and critical guidance to evaluate the accuracy of the developed battery algorithms, to accelerate battery algorithm future development and improvement, and to reduce hybrid/electric vehicle system development time and costs.

  14. Increasing Safety of a Robotic System for Inner Ear Surgery Using Probabilistic Error Modeling Near Vital Anatomy

    PubMed Central

    Dillon, Neal P.; Siebold, Michael A.; Mitchell, Jason E.; Blachon, Gregoire S.; Balachandran, Ramya; Fitzpatrick, J. Michael; Webster, Robert J.

    2017-01-01

    Safe and effective planning for robotic surgery that involves cutting or ablation of tissue must consider all potential sources of error when determining how close the tool may come to vital anatomy. A pre-operative plan that does not adequately consider potential deviations from ideal system behavior may lead to patient injury. Conversely, a plan that is overly conservative may result in ineffective or incomplete performance of the task. Thus, enforcing simple, uniform-thickness safety margins around vital anatomy is insufficient in the presence of spatially varying, anisotropic error. Prior work has used registration error to determine a variable-thickness safety margin around vital structures that must be approached during mastoidectomy but ultimately preserved. In this paper, these methods are extended to incorporate image distortion and physical robot errors, including kinematic errors and deflections of the robot. These additional sources of error are discussed and stochastic models for a bone-attached robot for otologic surgery are developed. An algorithm for generating appropriate safety margins based on a desired probability of preserving the underlying anatomical structure is presented. Simulations are performed on a CT scan of a cadaver head and safety margins are calculated around several critical structures for planning of a robotic mastoidectomy. PMID:29200595

  15. Increasing safety of a robotic system for inner ear surgery using probabilistic error modeling near vital anatomy

    NASA Astrophysics Data System (ADS)

    Dillon, Neal P.; Siebold, Michael A.; Mitchell, Jason E.; Blachon, Gregoire S.; Balachandran, Ramya; Fitzpatrick, J. Michael; Webster, Robert J.

    2016-03-01

    Safe and effective planning for robotic surgery that involves cutting or ablation of tissue must consider all potential sources of error when determining how close the tool may come to vital anatomy. A pre-operative plan that does not adequately consider potential deviations from ideal system behavior may lead to patient injury. Conversely, a plan that is overly conservative may result in ineffective or incomplete performance of the task. Thus, enforcing simple, uniform-thickness safety margins around vital anatomy is insufficient in the presence of spatially varying, anisotropic error. Prior work has used registration error to determine a variable-thickness safety margin around vital structures that must be approached during mastoidectomy but ultimately preserved. In this paper, these methods are extended to incorporate image distortion and physical robot errors, including kinematic errors and deflections of the robot. These additional sources of error are discussed and stochastic models for a bone-attached robot for otologic surgery are developed. An algorithm for generating appropriate safety margins based on a desired probability of preserving the underlying anatomical structure is presented. Simulations are performed on a CT scan of a cadaver head and safety margins are calculated around several critical structures for planning of a robotic mastoidectomy.

  16. Inferring nonlinear gene regulatory networks from gene expression data based on distance correlation.

    PubMed

    Guo, Xiaobo; Zhang, Ye; Hu, Wenhao; Tan, Haizhu; Wang, Xueqin

    2014-01-01

    Nonlinear dependence is common in the regulation mechanisms of gene regulatory networks (GRNs). It is vital to properly measure or test nonlinear dependence from real data for reconstructing GRNs and understanding the complex regulatory mechanisms within the cellular system. A recently developed measurement called the distance correlation (DC) has been shown to be powerful and computationally effective for nonlinear dependence in many situations. In this work, we incorporate the DC into inferring GRNs from gene expression data without any underlying distribution assumptions. We propose three DC-based GRN inference algorithms: CLR-DC, MRNET-DC and REL-DC, and then compare them with the mutual information (MI)-based algorithms by analyzing two simulated datasets: benchmark GRNs from the DREAM challenge and GRNs generated by the SynTReN network generator, and an experimentally determined SOS DNA repair network in Escherichia coli. According to both the receiver operator characteristic (ROC) curve and the precision-recall (PR) curve, our proposed algorithms significantly outperform the MI-based algorithms in GRN inference.

  17. Inferring Nonlinear Gene Regulatory Networks from Gene Expression Data Based on Distance Correlation

    PubMed Central

    Guo, Xiaobo; Zhang, Ye; Hu, Wenhao; Tan, Haizhu; Wang, Xueqin

    2014-01-01

    Nonlinear dependence is common in the regulation mechanisms of gene regulatory networks (GRNs). It is vital to properly measure or test nonlinear dependence from real data for reconstructing GRNs and understanding the complex regulatory mechanisms within the cellular system. A recently developed measurement called the distance correlation (DC) has been shown to be powerful and computationally effective for nonlinear dependence in many situations. In this work, we incorporate the DC into inferring GRNs from gene expression data without any underlying distribution assumptions. We propose three DC-based GRN inference algorithms: CLR-DC, MRNET-DC and REL-DC, and then compare them with the mutual information (MI)-based algorithms by analyzing two simulated datasets: benchmark GRNs from the DREAM challenge and GRNs generated by the SynTReN network generator, and an experimentally determined SOS DNA repair network in Escherichia coli. According to both the receiver operator characteristic (ROC) curve and the precision-recall (PR) curve, our proposed algorithms significantly outperform the MI-based algorithms in GRN inference. PMID:24551058

  18. A non-contact method based on multiple signal classification algorithm to reduce the measurement time for accurate heart rate detection

    NASA Astrophysics Data System (ADS)

    Bechet, P.; Mitran, R.; Munteanu, M.

    2013-08-01

    Non-contact methods for the assessment of vital signs are of great interest to specialists due to the benefits obtained in both medical and special applications, such as those for surveillance, monitoring, and search and rescue. This paper investigates the possibility of implementing a digital processing algorithm based on the MUSIC (Multiple Signal Classification) parametric spectral estimation in order to reduce the observation time needed to accurately measure the heart rate. It demonstrates that, by properly dimensioning the signal subspace, the MUSIC algorithm can be optimized in order to accurately assess the heart rate during an 8-28 s time interval. The validation of the processing algorithm performance was achieved by minimizing the mean error of the heart rate after performing simultaneous comparative measurements on several subjects. In order to calculate the error, the reference value of the heart rate was measured using a classic measurement system through direct contact.
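
    A minimal MUSIC pseudospectrum computation in the spirit of the method above is sketched below: estimate an autocorrelation matrix from overlapping snapshots, split signal and noise subspaces by eigendecomposition, and scan a steering vector over the heart-rate band. The sampling rate, matrix size, subspace dimension and synthetic chest-displacement signal are all assumptions.

```python
# MUSIC pseudospectrum sketch for locating a single heart-rate spectral line.
import numpy as np

def music_spectrum(x, m, p, freqs, fs):
    """m: correlation-matrix size, p: signal-subspace dimension."""
    x = x - x.mean()
    snaps = np.array([x[i:i + m] for i in range(len(x) - m)])   # overlapping snapshots
    Rxx = snaps.T @ snaps / snaps.shape[0]                      # m x m autocorrelation
    w, V = np.linalg.eigh(Rxx)
    En = V[:, : m - p]                          # noise subspace (smallest eigenvalues)
    k = np.arange(m)
    P = []
    for f in freqs:
        a = np.exp(-2j * np.pi * f / fs * k)    # steering vector at frequency f
        P.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(P)

fs = 20.0
t = np.arange(0, 20, 1 / fs)
x = 0.3 * np.sin(2 * np.pi * 1.3 * t) + 0.05 * np.random.default_rng(2).normal(size=t.size)
freqs = np.linspace(0.8, 3.0, 400)              # 48-180 beats per minute
P = music_spectrum(x, m=20, p=2, freqs=freqs, fs=fs)
print("heart rate:", 60 * freqs[np.argmax(P)], "bpm")   # ~78
```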

  19. Spectrum-averaged Harmonic Path (SHAPA) algorithm for non-contact vital sign monitoring with ultra-wideband (UWB) radar.

    PubMed

    Van Nguyen; Javaid, Abdul Q; Weitnauer, Mary Ann

    2014-01-01

    We introduce the Spectrum-averaged Harmonic Path (SHAPA) algorithm for estimation of heart rate (HR) and respiration rate (RR) with Impulse Radio Ultrawideband (IR-UWB) radar. Periodic movement of the human torso caused by respiration and heartbeat induces fundamental frequencies and their harmonics at the respiration and heart rates. IR-UWB enables capture of these spectral components, and frequency domain processing enables a low-cost implementation. Most existing methods that identify the fundamental component in either the frequency or time domain to estimate the HR and/or RR lead to significant error if the fundamental is distorted or cancelled by interference. The SHAPA algorithm (1) takes advantage of the HR harmonics, where there is less interference, and (2) exploits the information in previous spectra to achieve more reliable and robust estimation of the fundamental frequency in the spectrum under consideration. Example experimental results for HR estimation demonstrate how our algorithm eliminates errors caused by interference and produces 16% to 60% more valid estimates.

  20. A simple, remote, video based breathing monitor.

    PubMed

    Regev, Nir; Wulich, Dov

    2017-07-01

    Breathing monitors have become the all-important cornerstone of a wide variety of commercial and personal safety applications, ranging from elderly care to baby monitoring. Many such monitors exist in the market, some with vital signs monitoring capabilities, but none remote. This paper presents a simple, yet efficient, real-time method of extracting the subject's breathing sinus rhythm. Points of interest are detected on the subject's body, and the corresponding optical flow is estimated and tracked using the well-known Lucas-Kanade algorithm on a frame-by-frame basis. A generalized likelihood ratio test is then utilized on each of the many interest points to detect which are moving in a harmonic fashion. Finally, a spectral estimation algorithm based on Pisarenko harmonic decomposition tracks the harmonic frequency in real time, and a fusion maximum likelihood algorithm optimally estimates the breathing rate using all points considered. The results show a maximal error of 1 BPM between the true breathing rate and the algorithm's calculated rate, based on experiments on two babies and three adults.
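
    The Pisarenko step referenced above is small enough to show in full: for a single dominant sinusoid, the eigenvector belonging to the smallest eigenvalue of a 3 x 3 autocorrelation matrix defines a quadratic whose roots lie on the unit circle at the breathing frequency. The frame rate and the synthetic displacement signal below are assumptions.

```python
# Pisarenko harmonic decomposition for a single breathing sinusoid.
import numpy as np

def pisarenko_freq(x, fs):
    x = x - x.mean()
    # biased autocorrelation estimates r(0), r(1), r(2)
    r = [np.dot(x[:len(x) - k], x[k:]) / len(x) for k in range(3)]
    R = np.array([[r[0], r[1], r[2]],
                  [r[1], r[0], r[1]],
                  [r[2], r[1], r[0]]])
    _, V = np.linalg.eigh(R)
    v = V[:, 0]                              # eigenvector of the smallest eigenvalue
    roots = np.roots(v)                      # v[0] z^2 + v[1] z + v[2] = 0
    return np.abs(np.angle(roots[0])) * fs / (2 * np.pi)

fs = 30.0                                    # assumed camera frame rate
t = np.arange(0, 30, 1 / fs)
x = np.sin(2 * np.pi * 0.33 * t) + 0.1 * np.random.default_rng(3).normal(size=t.size)
print("breathing rate:", 60 * pisarenko_freq(x, fs), "BPM")   # ~20
```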

  1. Unsupervised Indoor Localization Based on Smartphone Sensors, iBeacon and Wi-Fi.

    PubMed

    Chen, Jing; Zhang, Yi; Xue, Wei

    2018-04-28

    In this paper, we propose UILoc, an unsupervised indoor localization scheme that uses a combination of smartphone sensors, iBeacons and Wi-Fi fingerprints for reliable and accurate indoor localization with zero labor cost. Firstly, compared with the fingerprint-based method, the UILoc system can build a fingerprint database automatically without any site survey and the database will be applied in the fingerprint localization algorithm. Secondly, since the initial position is vital to the system, UILoc will provide the basic location estimation through the pedestrian dead reckoning (PDR) method. To provide accurate initial localization, this paper proposes an initial localization module, a weighted fusion algorithm combined with a k-nearest neighbors (KNN) algorithm and a least squares algorithm. In UILoc, we have also designed a reliable model to reduce the landmark correction error. Experimental results show that the UILoc can provide accurate positioning, the average localization error is about 1.1 m in the steady state, and the maximum error is 2.77 m.
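
    The fingerprint side of the initial localization module can be illustrated with a distance-weighted k-nearest-neighbours estimate over an RSS database, as sketched below. The fingerprint values, reference positions and choice of k are toy assumptions; the UILoc system additionally fuses such an estimate with a least-squares solution and PDR.

```python
# Weighted KNN position estimate from a Wi-Fi/iBeacon RSS fingerprint database.
import numpy as np

def wknn_locate(fingerprints, positions, rss, k=3):
    """fingerprints: (n_points, n_aps) RSS database, positions: (n_points, 2)."""
    d = np.linalg.norm(fingerprints - rss, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-6)                 # closer fingerprints weigh more
    return (w[:, None] * positions[idx]).sum(axis=0) / w.sum()

fingerprints = np.array([[-60, -72, -80],
                         [-64, -68, -78],
                         [-70, -65, -74],
                         [-75, -60, -70]], dtype=float)
positions = np.array([[0, 0], [0, 2], [2, 2], [2, 4]], dtype=float)
print(wknn_locate(fingerprints, positions, rss=np.array([-66.0, -67.0, -77.0])))
```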

  2. Protograph-Based Raptor-Like Codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Chen, Tsung-Yi; Wang, Jiadong; Wesel, Richard D.

    2014-01-01

    Theoretical analysis has long indicated that feedback improves the error exponent but not the capacity of point-to-point memoryless channels. The analytic and empirical results indicate that in the short-blocklength regime, practical rate-compatible punctured convolutional (RCPC) codes achieve low latency with the use of noiseless feedback. In 3GPP, standard rate-compatible turbo codes (RCPT) did not outperform the convolutional codes in the short-blocklength regime. The reason is that convolutional codes with a low number of states can be decoded optimally using a Viterbi decoder. Despite the excellent performance of convolutional codes at very short blocklengths, the strength of convolutional codes does not scale with the blocklength for a fixed number of states in their trellises.

  3. Combined coding and delay-throughput analysis for fading channels of mobile satellite communications

    NASA Technical Reports Server (NTRS)

    Wang, C. C.; Yan, Tsun-Yee

    1986-01-01

    This paper presents the analysis of using the punctured convolutional code with Viterbi decoding to improve communications reliability. The punctured code rate is optimized so that the average delay is minimized. The coding gain in terms of the message delay is also defined. Since using punctured convolutional code with interleaving is still inadequate to combat the severe fading for short packets, the use of multiple copies of assignment and acknowledgment packets is suggested. The performance on the average end-to-end delay of this protocol is analyzed. It is shown that a replication of three copies for both assignment packets and acknowledgment packets is optimum for the cases considered.

  4. Examining applying high performance genetic data feature selection and classification algorithms for colon cancer diagnosis.

    PubMed

    Al-Rajab, Murad; Lu, Joan; Xu, Qiang

    2017-07-01

    This paper examines the accuracy and efficiency (time complexity) of high performance genetic data feature selection and classification algorithms for colon cancer diagnosis. The need for this research derives from the urgent and increasing need for accurate and efficient algorithms. Colon cancer is a leading cause of death worldwide, hence it is vitally important for the cancer tissues to be expertly identified and classified in a rapid and timely manner, to assure both a fast detection of the disease and to expedite the drug discovery process. In this research, a three-phase approach was proposed and implemented: Phases One and Two examined the feature selection algorithms and classification algorithms employed separately, and Phase Three examined the performance of the combination of these. It was found from Phase One that the Particle Swarm Optimization (PSO) algorithm performed best with the colon dataset as a feature selection method (29 genes selected) and from Phase Two that the Support Vector Machine (SVM) algorithm outperformed the other classifiers, with an accuracy of almost 86%. It was also found from Phase Three that the combined use of PSO and SVM surpassed other algorithms in accuracy and performance, and was faster in terms of time analysis (94%). It is concluded that applying feature selection algorithms prior to classification algorithms results in better accuracy than when the latter are applied alone. This conclusion is important and significant to industry and society. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Modified Bat Algorithm for Feature Selection with the Wisconsin Diagnosis Breast Cancer (WDBC) Dataset

    PubMed

    Jeyasingh, Suganthi; Veluchamy, Malathi

    2017-05-01

    Early diagnosis of breast cancer is essential to save the lives of patients. Usually, medical datasets include a large variety of data that can lead to confusion during diagnosis. The Knowledge Discovery on Database (KDD) process helps to improve efficiency. It requires elimination of inappropriate and repeated data from the dataset before final diagnosis. This can be done using any of the feature selection algorithms available in data mining. Feature selection is considered a vital step to increase the classification accuracy. This paper proposes a Modified Bat Algorithm (MBA) for feature selection to eliminate irrelevant features from an original dataset. The Bat algorithm was modified using simple random sampling to select random instances from the dataset. Ranking was performed with the global best features to recognize the predominant features available in the dataset. The selected features are used to train a Random Forest (RF) classification algorithm. The MBA feature selection algorithm enhanced the classification accuracy of RF in identifying the occurrence of breast cancer. The Wisconsin Diagnosis Breast Cancer Dataset (WDBC) was used for estimating the performance of the proposed MBA feature selection algorithm. The proposed algorithm achieved better performance in terms of Kappa statistic, Matthews Correlation Coefficient, Precision, F-measure, Recall, Mean Absolute Error (MAE), Root Mean Square Error (RMSE), Relative Absolute Error (RAE) and Root Relative Squared Error (RRSE).

  6. Quantum Image Steganography and Steganalysis Based On LSQu-Blocks Image Information Concealing Algorithm

    NASA Astrophysics Data System (ADS)

    A. AL-Salhi, Yahya E.; Lu, Songfeng

    2016-08-01

    Quantum steganography can solve some problems that are considered inefficient in image information concealing. Research on quantum image information concealing has been widely exploited in recent years. Quantum image information concealing can be categorized into quantum image digital blocking, quantum image steganography, anonymity and other branches. Least significant bit (LSB) information concealing plays vital roles in the classical world because many image information concealing algorithms are designed based on it. Firstly, based on the novel enhanced quantum representation (NEQR), an information concealing algorithm for quantum image steganography is presented that clusters uniform image blocks around the least significant Qu-block (LSQB). Secondly, a clustering algorithm is proposed to optimize the concealment of important data. Thirdly, the Con-Steg algorithm is used to conceal the clustered image blocks. Information concealing located in the Fourier domain of an image can achieve the security of image information, thus we further discuss the Fourier domain LSQu-block information concealing algorithm for quantum images based on Quantum Fourier Transforms. In our algorithms, the corresponding unitary transformations are designed to realize the aim of concealing the secret information in the least significant Qu-block representing the color of the quantum cover image. Finally, the procedures for extracting the secret information are illustrated. The quantum image LSQu-block information concealing algorithm can be applied in many fields according to different needs.

  7. Measurement data preprocessing in a radar-based system for monitoring of human movements

    NASA Astrophysics Data System (ADS)

    Morawski, Roman Z.; Miȩkina, Andrzej; Bajurko, Paweł R.

    2015-02-01

    The importance of research on new technologies that could be employed in care services for elderly people is highlighted. The need to examine the applicability of various sensor systems for non-invasive monitoring of the movements and vital bodily functions, such as heart beat or breathing rhythm, of elderly persons in their home environment is justified. An extensive overview of the literature concerning existing monitoring techniques is provided. A technological potential behind radar sensors is indicated. A new class of algorithms for preprocessing of measurement data from impulse radar sensors, when applied for elderly people monitoring, is proposed. Preliminary results of numerical experiments performed on those algorithms are demonstrated.

  8. Fashion sketch design by interactive genetic algorithms

    NASA Astrophysics Data System (ADS)

    Mok, P. Y.; Wang, X. X.; Xu, J.; Kwok, Y. L.

    2012-11-01

    Computer-aided design is vitally important for modern industry, particularly for the creative industries. The fashion industry faces intense challenges to shorten its product development process. In this paper, a methodology is proposed for sketch design based on interactive genetic algorithms. The sketch design system consists of a sketch design model, a database and a multi-stage sketch design engine. First, a sketch design model is developed based on the knowledge of fashion design to describe fashion product characteristics by using parameters. Second, a database is built based on the proposed sketch design model to define general style elements. Third, a multi-stage sketch design engine is used to construct the design. Moreover, an interactive genetic algorithm (IGA) is used to accelerate the sketch design process. The experimental results demonstrate that the proposed method is effective in helping laypersons achieve satisfying fashion design sketches.

  9. Comparison of photo-matching algorithms commonly used for photographic capture-recapture studies.

    PubMed

    Matthé, Maximilian; Sannolo, Marco; Winiarski, Kristopher; Spitzen-van der Sluijs, Annemarieke; Goedbloed, Daniel; Steinfartz, Sebastian; Stachow, Ulrich

    2017-08-01

    Photographic capture-recapture is a valuable tool for obtaining demographic information on wildlife populations due to its noninvasive nature and cost-effectiveness. Recently, several computer-aided photo-matching algorithms have been developed to more efficiently match images of unique individuals in databases with thousands of images. However, the identification accuracy of these algorithms can severely bias estimates of vital rates and population size. Therefore, it is important to understand the performance and limitations of state-of-the-art photo-matching algorithms prior to implementation in capture-recapture studies involving possibly thousands of images. Here, we compared the performance of four photo-matching algorithms; Wild-ID, I3S Pattern+, APHIS, and AmphIdent using multiple amphibian databases of varying image quality. We measured the performance of each algorithm and evaluated the performance in relation to database size and the number of matching images in the database. We found that algorithm performance differed greatly by algorithm and image database, with recognition rates ranging from 100% to 22.6% when limiting the review to the 10 highest ranking images. We found that recognition rate degraded marginally with increased database size and could be improved considerably with a higher number of matching images in the database. In our study, the pixel-based algorithm of AmphIdent exhibited superior recognition rates compared to the other approaches. We recommend carefully evaluating algorithm performance prior to using it to match a complete database. By choosing a suitable matching algorithm, databases of sizes that are unfeasible to match "by eye" can be easily translated to accurate individual capture histories necessary for robust demographic estimates.

  10. Trans-algorithmic nature of learning in biological systems.

    PubMed

    Shimansky, Yury P

    2018-05-02

    Learning ability is a vitally important, distinctive property of biological systems, which provides dynamic stability in non-stationary environments. Although several different types of learning have been successfully modeled using a universal computer, in general, learning cannot be described by an algorithm. In other words, an algorithmic approach to describing the functioning of biological systems is not sufficient for adequate grasping of what is life. Since biosystems are parts of the physical world, one might hope that adding some physical mechanisms and principles to the concept of algorithm could provide extra possibilities for describing learning in its full generality. However, a straightforward approach to that through the so-called physical hypercomputation so far has not been successful. Here an alternative approach is proposed. Biosystems are described as achieving enumeration of possible physical compositions through random incremental modifications inflicted on them by active operating resources (AORs) in the environment. Biosystems learn through algorithmic regulation of the intensity of the above modifications according to a specific optimality criterion. From the perspective of external observers, biosystems move in the space of different algorithms driven by random modifications imposed by the environmental AORs. A particular algorithm is only a snapshot of that motion, while the motion itself is essentially trans-algorithmic. In this conceptual framework, death of unfit members of a population, for example, is viewed as a trans-algorithmic modification made in the population as a biosystem by environmental AORs. Numerous examples of AOR utilization in biosystems of different complexity, from viruses to multicellular organisms, are provided.

  11. Fast, Distributed Algorithms in Deep Networks

    DTIC Science & Technology

    2016-05-11


  12. An adaptive distributed data aggregation based on RCPC for wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Hua, Guogang; Chen, Chang Wen

    2006-05-01

    One of the most important design issues in wireless sensor networks is energy efficiency. Data aggregation has a significant impact on the energy efficiency of wireless sensor networks. With massive deployment of sensor nodes and limited energy supply, data aggregation has been considered as an essential paradigm for data collection in sensor networks. Recently, distributed source coding has been demonstrated to possess several advantages in data aggregation for wireless sensor networks. Distributed source coding is able to encode sensor data with a lower bit rate without direct communication among sensor nodes. To ensure reliable and high throughput transmission of the aggregated data, we proposed in this research progressive transmission and decoding of Rate-Compatible Punctured Convolutional (RCPC) coded data aggregation with distributed source coding. Our proposed rate-1/2 RSC codes with the Viterbi algorithm for distributed source coding are able to guarantee that, even without any correlation between the data, the decoder can always decode the data correctly without wasting energy. The proposed approach achieves two aspects of adaptive data aggregation for wireless sensor networks. First, the RCPC coding facilitates adaptive compression corresponding to the correlation of the sensor data. When the data correlation is high, a higher compression ratio can be achieved; otherwise, a lower compression ratio will be achieved. Second, the data aggregation is adaptively accumulated. There is no waste of energy in the transmission; even if there is no correlation among the data, the energy consumed is at the same level as raw data collection. Experimental results have shown that the proposed distributed data aggregation based on RCPC is able to achieve high throughput and low energy consumption data collection for wireless sensor networks.
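
    Rate adaptation in RCPC coding comes from puncturing a low-rate mother code, and the following sketch shows the mechanics: a rate-1/2 convolutional encoder whose output is thinned by a fixed puncturing pattern to rate 2/3. The mother code, pattern and message are assumptions for illustration; at the receiver, erasures would be re-inserted at the punctured positions before Viterbi decoding.

```python
# Illustrative puncturing of a rate-1/2 convolutional code to rate 2/3.
def conv_encode(bits, state=0):
    """Rate-1/2, constraint-length-3 encoder with generators (7, 5) octal."""
    out = []
    for b in bits:
        reg = (b, (state >> 1) & 1, state & 1)
        out += [(reg[0] + reg[1] + reg[2]) % 2, (reg[0] + reg[2]) % 2]
        state = (b << 1) | ((state >> 1) & 1)
    return out

# puncturing pattern over two input bits: keep both outputs of the first bit,
# keep only the first output of the second bit -> 3 coded bits per 2 info bits
PATTERN = [1, 1, 1, 0]

def puncture(coded):
    return [c for i, c in enumerate(coded) if PATTERN[i % len(PATTERN)]]

msg = [1, 0, 1, 1, 0, 0]
print(len(conv_encode(msg)), len(puncture(conv_encode(msg))))   # 12 -> 9
```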

  13. Automatic extraction of numeric strings in unconstrained handwritten document images

    NASA Astrophysics Data System (ADS)

    Haji, M. Mehdi; Bui, Tien D.; Suen, Ching Y.

    2011-01-01

    Numeric strings such as identification numbers carry vital pieces of information in documents. In this paper, we present a novel algorithm for automatic extraction of numeric strings in unconstrained handwritten document images. The algorithm has two main phases: pruning and verification. In the pruning phase, the algorithm first performs a new segment-merge procedure on each text line, and then using a new regularity measure, it prunes all sequences of characters that are unlikely to be numeric strings. The segment-merge procedure is composed of two modules: a new explicit character segmentation algorithm which is based on analysis of skeletal graphs and a merging algorithm which is based on graph partitioning. All the candidate sequences that pass the pruning phase are sent to a recognition-based verification phase for the final decision. The recognition is based on a coarse-to-fine approach using probabilistic RBF networks. We developed our algorithm for the processing of real-world documents where letters and digits may be connected or broken in a document. The effectiveness of the proposed approach is shown by extensive experiments done on a real-world database of 607 documents which contains handwritten, machine-printed and mixed documents with different types of layouts and levels of noise.

  14. LiteNet: Lightweight Neural Network for Detecting Arrhythmias at Resource-Constrained Mobile Devices.

    PubMed

    He, Ziyang; Zhang, Xiaoqing; Cao, Yangjie; Liu, Zhi; Zhang, Bo; Wang, Xiaoyan

    2018-04-17

    By running applications and services closer to the user, edge processing provides many advantages, such as short response time and reduced network traffic. Deep-learning based algorithms provide significantly better performance than traditional algorithms in many fields but demand more resources, such as higher computational power and more memory. Hence, designing deep learning algorithms that are more suitable for resource-constrained mobile devices is vital. In this paper, we build a lightweight neural network, termed LiteNet, which uses a deep learning algorithm design to diagnose arrhythmias, as an example to show how we design deep learning schemes for resource-constrained mobile devices. Compared to other deep learning models with equivalent accuracy, LiteNet has several advantages. It requires less memory, incurs lower computational cost, and is more feasible for deployment on resource-constrained mobile devices. It can be trained faster than other neural network algorithms and requires less communication across different processing units during distributed training. It uses filters of heterogeneous size in a convolutional layer, which contributes to the generation of various feature maps. The algorithm was tested using the MIT-BIH electrocardiogram (ECG) arrhythmia database; the results showed that LiteNet outperforms comparable schemes in diagnosing arrhythmias, and in its feasibility for use on mobile devices.
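
    The one concrete architectural detail given above, filters of heterogeneous size within a single convolutional layer, can be sketched as parallel 1-D convolutions whose feature maps are concatenated. The channel counts, kernel sizes and input length below are assumptions, not the published LiteNet configuration.

```python
# Parallel 1-D convolutions with different kernel sizes, concatenated by channel.
import torch
import torch.nn as nn

class HeteroConv1d(nn.Module):
    def __init__(self, in_ch, out_ch_per_branch=8, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv1d(in_ch, out_ch_per_branch, k, padding=k // 2)
             for k in kernel_sizes]
        )

    def forward(self, x):
        # each branch sees the same input; outputs are stacked along channels
        return torch.cat([b(x) for b in self.branches], dim=1)

x = torch.randn(4, 1, 360)             # batch of 4 one-lead ECG segments (assumed length)
block = HeteroConv1d(in_ch=1)
print(block(x).shape)                   # torch.Size([4, 24, 360])
```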

  15. A Depth-Adjustment Deployment Algorithm Based on Two-Dimensional Convex Hull and Spanning Tree for Underwater Wireless Sensor Networks.

    PubMed

    Jiang, Peng; Liu, Shuai; Liu, Jun; Wu, Feng; Zhang, Le

    2016-07-14

    Most of the existing node depth-adjustment deployment algorithms for underwater wireless sensor networks (UWSNs) just consider how to optimize network coverage and connectivity rate. However, these works do not discuss full network connectivity, while optimization of network energy efficiency and network reliability are vital topics for UWSN deployment. Therefore, in this study, a depth-adjustment deployment algorithm based on two-dimensional (2D) convex hull and spanning tree (NDACS) for UWSNs is proposed. First, the proposed algorithm uses the geometric characteristics of a 2D convex hull and empty circle to find the optimal location of a sleep node and activate it, minimizes the network coverage overlaps of the 2D plane, and then increases the coverage rate until the first layer coverage threshold is reached. Second, the sink node acts as a root node of all active nodes on the 2D convex hull and then forms a small spanning tree gradually. Finally, the depth-adjustment strategy based on time marker is used to achieve the three-dimensional overall network deployment. Compared with existing depth-adjustment deployment algorithms, the simulation results show that the NDACS algorithm can maintain full network connectivity with high network coverage rate, as well as improved network average node degree, thus increasing network reliability.

  16. A Depth-Adjustment Deployment Algorithm Based on Two-Dimensional Convex Hull and Spanning Tree for Underwater Wireless Sensor Networks

    PubMed Central

    Jiang, Peng; Liu, Shuai; Liu, Jun; Wu, Feng; Zhang, Le

    2016-01-01

    Most of the existing node depth-adjustment deployment algorithms for underwater wireless sensor networks (UWSNs) just consider how to optimize network coverage and connectivity rate. However, these works do not discuss full network connectivity, while optimization of network energy efficiency and network reliability are vital topics for UWSN deployment. Therefore, in this study, a depth-adjustment deployment algorithm based on two-dimensional (2D) convex hull and spanning tree (NDACS) for UWSNs is proposed. First, the proposed algorithm uses the geometric characteristics of a 2D convex hull and empty circle to find the optimal location of a sleep node and activate it, minimizes the network coverage overlaps of the 2D plane, and then increases the coverage rate until the first layer coverage threshold is reached. Second, the sink node acts as a root node of all active nodes on the 2D convex hull and then forms a small spanning tree gradually. Finally, the depth-adjustment strategy based on time marker is used to achieve the three-dimensional overall network deployment. Compared with existing depth-adjustment deployment algorithms, the simulation results show that the NDACS algorithm can maintain full network connectivity with high network coverage rate, as well as improved network average node degree, thus increasing network reliability. PMID:27428970

  17. LiteNet: Lightweight Neural Network for Detecting Arrhythmias at Resource-Constrained Mobile Devices

    PubMed Central

    Zhang, Xiaoqing; Cao, Yangjie; Liu, Zhi; Zhang, Bo; Wang, Xiaoyan

    2018-01-01

    By running applications and services closer to the user, edge processing provides many advantages, such as short response time and reduced network traffic. Deep-learning based algorithms provide significantly better performance than traditional algorithms in many fields but demand more resources, such as higher computational power and more memory. Hence, designing deep learning algorithms that are more suitable for resource-constrained mobile devices is vital. In this paper, we build a lightweight neural network, termed LiteNet, which uses a deep learning algorithm design to diagnose arrhythmias, as an example to show how we design deep learning schemes for resource-constrained mobile devices. Compared to other deep learning models with equivalent accuracy, LiteNet has several advantages. It requires less memory, incurs lower computational cost, and is more feasible for deployment on resource-constrained mobile devices. It can be trained faster than other neural network algorithms and requires less communication across different processing units during distributed training. It uses filters of heterogeneous size in a convolutional layer, which contributes to the generation of various feature maps. The algorithm was tested using the MIT-BIH electrocardiogram (ECG) arrhythmia database; the results showed that LiteNet outperforms comparable schemes in diagnosing arrhythmias, and in its feasibility for use on mobile devices. PMID:29673171

  18. Noncontact Sleep Study by Multi-Modal Sensor Fusion.

    PubMed

    Chung, Ku-Young; Song, Kwangsub; Shin, Kangsoo; Sohn, Jinho; Cho, Seok Hyun; Chang, Joon-Hyuk

    2017-07-21

    Polysomnography (PSG) is considered as the gold standard for determining sleep stages, but due to the obtrusiveness of its sensor attachments, sleep stage classification algorithms using noninvasive sensors have been developed throughout the years. However, the previous studies have not yet been proven reliable. In addition, most of the products are designed for healthy customers rather than for patients with sleep disorder. We present a novel approach to classify sleep stages via low cost and noncontact multi-modal sensor fusion, which extracts sleep-related vital signals from radar signals and a sound-based context-awareness technique. This work is uniquely designed based on the PSG data of sleep disorder patients, which were received and certified by professionals at Hanyang University Hospital. The proposed algorithm further incorporates medical/statistical knowledge to determine personal-adjusted thresholds and devise post-processing. The efficiency of the proposed algorithm is highlighted by contrasting sleep stage classification performance between single sensor and sensor-fusion algorithms. To validate the possibility of commercializing this work, the classification results of this algorithm were compared with the commercialized sleep monitoring device, ResMed S+. The proposed algorithm was investigated with random patients following PSG examination, and results show a promising novel approach for determining sleep stages in a low cost and unobtrusive manner.

  19. Noncontact Sleep Study by Multi-Modal Sensor Fusion

    PubMed Central

    Chung, Ku-young; Song, Kwangsub; Shin, Kangsoo; Sohn, Jinho; Cho, Seok Hyun; Chang, Joon-Hyuk

    2017-01-01

    Polysomnography (PSG) is considered the gold standard for determining sleep stages, but due to the obtrusiveness of its sensor attachments, sleep stage classification algorithms using noninvasive sensors have been developed over the years. However, the previous studies have not yet been proven reliable. In addition, most of the products are designed for healthy customers rather than for patients with sleep disorders. We present a novel approach to classify sleep stages via low-cost and noncontact multi-modal sensor fusion, which extracts sleep-related vital signals from radar signals and a sound-based context-awareness technique. This work is uniquely designed based on the PSG data of sleep disorder patients, which were received and certified by professionals at Hanyang University Hospital. The proposed algorithm further incorporates medical/statistical knowledge to determine personally adjusted thresholds and devise post-processing. The efficiency of the proposed algorithm is highlighted by contrasting sleep stage classification performance between single-sensor and sensor-fusion algorithms. To validate the possibility of commercializing this work, the classification results of this algorithm were compared with those of the commercial sleep monitoring device ResMed S+. The proposed algorithm was investigated with random patients following PSG examination, and the results show a promising novel approach for determining sleep stages in a low-cost and unobtrusive manner. PMID:28753994

  20. Hybrid Binary Imperialist Competition Algorithm and Tabu Search Approach for Feature Selection Using Gene Expression Data.

    PubMed

    Wang, Shuaiqun; Aorigele; Kong, Wei; Zeng, Weiming; Hong, Xiaomin

    2016-01-01

    Gene expression data composed of thousands of genes play an important role in classification platforms and disease diagnosis. Hence, it is vital to select a small subset of salient features from the large number of genes. Lately, many researchers have devoted themselves to feature selection using diverse computational intelligence methods. However, in the process of selecting informative genes, many computational methods face difficulties in selecting small subsets for cancer classification because of the huge number of genes (high dimension) compared to the small number of samples, noisy genes, and irrelevant genes. In this paper, we propose a new hybrid algorithm, HICATS, incorporating the imperialist competition algorithm (ICA), which performs global search, and tabu search (TS), which conducts fine-tuned local search. In order to verify the performance of the proposed algorithm HICATS, we have tested it on 10 well-known benchmark gene expression classification datasets with dimensions varying from 2308 to 12600. The performance of our proposed method proved to be superior to other related works, including the conventional version of the binary optimization algorithm, in terms of classification accuracy and the number of selected genes.

  1. Hybrid Binary Imperialist Competition Algorithm and Tabu Search Approach for Feature Selection Using Gene Expression Data

    PubMed Central

    Aorigele; Zeng, Weiming; Hong, Xiaomin

    2016-01-01

    Gene expression data composed of thousands of genes play an important role in classification platforms and disease diagnosis. Hence, it is vital to select a small subset of salient features from the large number of genes. Lately, many researchers have devoted themselves to feature selection using diverse computational intelligence methods. However, in the process of selecting informative genes, many computational methods face difficulties in selecting small subsets for cancer classification because of the huge number of genes (high dimension) compared to the small number of samples, noisy genes, and irrelevant genes. In this paper, we propose a new hybrid algorithm, HICATS, incorporating the imperialist competition algorithm (ICA), which performs global search, and tabu search (TS), which conducts fine-tuned local search. In order to verify the performance of the proposed algorithm HICATS, we have tested it on 10 well-known benchmark gene expression classification datasets with dimensions varying from 2308 to 12600. The performance of our proposed method proved to be superior to other related works, including the conventional version of the binary optimization algorithm, in terms of classification accuracy and the number of selected genes. PMID:27579323

  2. On the Accuracy and Parallelism of GPGPU-Powered Incremental Clustering Algorithms.

    PubMed

    Chen, Chunlei; He, Li; Zhang, Huixiang; Zheng, Hao; Wang, Lei

    2017-01-01

    Incremental clustering algorithms play a vital role in various applications such as massive data analysis and real-time data processing. Typical application scenarios of incremental clustering raise high demand on computing power of the hardware platform. Parallel computing is a common solution to meet this demand. Moreover, General Purpose Graphic Processing Unit (GPGPU) is a promising parallel computing device. Nevertheless, incremental clustering algorithms face a dilemma between clustering accuracy and parallelism when they are powered by GPGPU. We formally analyzed the cause of this dilemma. First, we formalized concepts relevant to incremental clustering like evolving granularity. Second, we formally proved two theorems. The first theorem proves the relation between clustering accuracy and evolving granularity. Additionally, this theorem analyzes the upper and lower bounds of different-to-same mis-affiliation. Fewer occurrences of such mis-affiliation mean higher accuracy. The second theorem reveals the relation between parallelism and evolving granularity. Smaller work-depth means superior parallelism. Through the proofs, we conclude that accuracy of an incremental clustering algorithm is negatively related to evolving granularity while parallelism is positively related to the granularity. Thus the contradictory relations cause the dilemma. Finally, we validated the relations through a demo algorithm. Experiment results verified theoretical conclusions.

  3. A novel method for the detection of R-peaks in ECG based on K-Nearest Neighbors and Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    He, Runnan; Wang, Kuanquan; Li, Qince; Yuan, Yongfeng; Zhao, Na; Liu, Yang; Zhang, Henggui

    2017-12-01

    Cardiovascular diseases are associated with high morbidity and mortality. However, it is still a challenge to diagnose them accurately and efficiently. The electrocardiogram (ECG), a bioelectrical signal of the heart, provides crucial information about the dynamical functions of the heart and plays an important role in cardiac diagnosis. As the QRS complex in the ECG is associated with ventricular depolarization, accurate QRS detection is vital for interpreting ECG features. In this paper, we propose a real-time, accurate, and effective algorithm for QRS detection. In the algorithm, a proposed preprocessor with a band-pass filter was first applied to remove baseline wander and power-line interference from the signal. After denoising, a method combining K-Nearest Neighbors (KNN) and Particle Swarm Optimization (PSO) was used for accurate QRS detection in ECGs with different morphologies. The proposed algorithm was tested and validated using 48 ECG records from the MIT-BIH arrhythmia database (MITDB) and achieved a high averaged detection accuracy, sensitivity, and positive predictivity of 99.43, 99.69, and 99.72%, respectively, indicating a notable improvement over extant algorithms reported in the literature.
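
    The KNN/PSO detection stage is beyond a short sketch, but the preprocessing plus a naive peak search can be illustrated with SciPy. The cut-off frequencies, refractory period, and threshold below are common choices for QRS work, not values taken from the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def bandpass(sig, fs, lo=5.0, hi=15.0, order=3):
    """Zero-phase Butterworth band-pass to suppress baseline wander and power-line noise."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, sig)

def simple_qrs_peaks(sig, fs):
    """Naive R-peak search on the filtered signal (placeholder for the KNN+PSO detector)."""
    filtered = bandpass(sig, fs)
    peaks, _ = find_peaks(filtered,
                          distance=int(0.25 * fs),          # 0.25 s refractory period
                          height=0.5 * np.max(filtered))    # amplitude threshold
    return peaks

fs = 360                                     # MIT-BIH sampling rate
ecg = np.zeros(10 * fs)
ecg[int(0.4 * fs)::int(0.8 * fs)] = 1.0      # synthetic impulse train: one beat every 0.8 s
print(len(simple_qrs_peaks(ecg, fs)))        # -> 12 beats detected in 10 s
```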

  4. TACD: a transportable ant colony discrimination model for corporate bankruptcy prediction

    NASA Astrophysics Data System (ADS)

    Lalbakhsh, Pooia; Chen, Yi-Ping Phoebe

    2017-05-01

    This paper presents a transportable ant colony discrimination strategy (TACD) to predict corporate bankruptcy, a topic of vital importance that is attracting increasing interest in the field of economics. The proposed algorithm uses financial ratios to build a binary prediction model for companies with the two statuses of bankrupt and non-bankrupt. The algorithm takes advantage of an improved version of continuous ant colony optimisation (CACO) at the core, which is used to create an accurate, simple and understandable linear model for discrimination. This also enables the algorithm to work with continuous values, leading to more efficient learning and adaption by avoiding data discretisation. We conduct a comprehensive performance evaluation on three real-world data sets under a stratified cross-validation strategy. In three different scenarios, TACD is compared with 11 other bankruptcy prediction strategies. We also discuss the efficiency of the attribute selection methods used in the experiments. In addition to its simplicity and understandability, statistical significance tests prove the efficiency of TACD against the other prediction algorithms in both measures of AUC and accuracy.

  5. Comparison of Body Weight Trend Algorithms for Prediction of Heart Failure Related Events in Home Care Setting.

    PubMed

    Eggerth, Alphons; Modre-Osprian, Robert; Hayn, Dieter; Kastner, Peter; Pölzl, Gerhard; Schreier, Günter

    2017-01-01

    Automatic event detection is used in telemedicine-based heart failure disease management programs, supporting physicians and nurses in monitoring patients' health data. The objective was to analyse the performance of automatic event detection algorithms for the prediction of HF-related hospitalisations or diuretic dose increases. The Rule-of-Thumb (RoT) and Moving Average Convergence Divergence (MACD) algorithms were applied to body weight data from 106 heart failure patients of the HerzMobil-Tirol disease management program. The evaluation criteria were based on the Youden index and ROC curves. Analysis of data from 1460 monitoring weeks with 54 events showed a maximum Youden index of 0.19 for MACD and RoT with a specificity > 0.90. Comparison of the two algorithms on real-world monitoring data showed similar results regarding total and limited AUC. An improvement in sensitivity might be possible by including additional health data (e.g. vital signs and self-reported well-being), because body weight variations obviously are not the only cause of HF-related hospitalisations or diuretic dose increases.
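
    MACD is borrowed from financial time-series analysis: applied to daily body weight it compares a fast and a slow exponential moving average. The sketch below is a generic MACD alert on a weight series; the span parameters and the alert threshold are illustrative, not the values evaluated in the study.

```python
import numpy as np

def ema(series, span):
    """Exponential moving average with smoothing factor 2 / (span + 1)."""
    alpha = 2.0 / (span + 1)
    out = np.empty(len(series))
    out[0] = series[0]
    for i in range(1, len(series)):
        out[i] = alpha * series[i] + (1 - alpha) * out[i - 1]
    return out

def macd_alerts(weights_kg, fast=3, slow=10, threshold_kg=0.5):
    """Flag days on which the fast-slow EMA difference exceeds a weight-gain threshold."""
    macd = ema(weights_kg, fast) - ema(weights_kg, slow)
    return np.where(macd > threshold_kg)[0]

# Hypothetical daily body weight: stable, then a 2 kg gain over a few days
weights = np.array([80.0] * 20 + [80.5, 81.0, 81.5, 82.0, 82.0, 82.0])
print(macd_alerts(weights))   # -> [22 23 24 25], the days that would raise an alert
```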

  6. Can genetic algorithms help virus writers reshape their creations and avoid detection?

    NASA Astrophysics Data System (ADS)

    Abu Doush, Iyad; Al-Saleh, Mohammed I.

    2017-11-01

    Different attack and defence techniques have evolved over time as actions and reactions between black-hat and white-hat communities. Encryption, polymorphism, metamorphism and obfuscation are among the techniques used by the attackers to bypass security controls. On the other hand, pattern matching, algorithmic scanning, emulation and heuristics are used by the defence team. The Antivirus (AV) is a vital security control that is used against a variety of threats. The AV mainly scans data against its database of virus signatures. Basically, it claims a virus if a match is found. This paper seeks to find the minimal possible changes that can be made to the virus so that it will appear normal when scanned by the AV. A brute-force search through all possible changes can be a computationally expensive task. Alternatively, this paper applies a Genetic Algorithm to solve this problem. Our proposed algorithm is tested on seven different malware instances. The results show that in all the tested malware instances only a small change in each instance was good enough to bypass the AV.

  7. An Efficient Biometric-Based Algorithm Using Heart Rate Variability for Securing Body Sensor Networks

    PubMed Central

    Pirbhulal, Sandeep; Zhang, Heye; Mukhopadhyay, Subhas Chandra; Li, Chunyue; Wang, Yumei; Li, Guanglin; Wu, Wanqing; Zhang, Yuan-Ting

    2015-01-01

    A Body Sensor Network (BSN) is a network of several associated sensor nodes on, inside, or around the human body to monitor vital signals, such as the Electroencephalogram (EEG), Photoplethysmography (PPG), Electrocardiogram (ECG), etc. Each sensor node in a BSN delivers major information; therefore, it is very significant to provide data confidentiality and security. All existing approaches to secure BSNs are based on complex cryptographic key generation procedures, which not only demand high resource utilization and computation time, but also consume large amounts of energy, power, and memory during data transmission. It is therefore indispensable to put forward an energy-efficient and computationally less complex authentication technique for BSNs. In this paper, a novel biometric-based algorithm is proposed, which utilizes Heart Rate Variability (HRV) in a simple key generation process to secure the BSN. Our proposed algorithm is compared with three data authentication techniques, namely Physiological Signal based Key Agreement (PSKA), Data Encryption Standard (DES), and Rivest Shamir Adleman (RSA). Simulation is performed in Matlab and results suggest that the proposed algorithm is quite efficient in terms of transmission time utilization, average remaining energy, and total power consumption. PMID:26131666
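
    The paper's key-agreement protocol is not reproduced here; the sketch below only illustrates the general idea of deriving key material from heart rate variability: inter-pulse intervals (IPIs) are coarsely quantized and hashed into a fixed-length key, so that two sensors on the same body can agree on a key despite small sensing jitter. The quantization step, hash choice, and timestamps are assumptions.

```python
import hashlib
import numpy as np

def hrv_key(peak_times_s, quant_ms=8, key_bytes=16):
    """Derive key material from inter-pulse intervals measured at a body sensor node."""
    ipis_ms = np.diff(np.asarray(peak_times_s)) * 1000.0
    symbols = np.round(ipis_ms / quant_ms).astype(int)   # coarse quantization absorbs jitter
    material = ",".join(map(str, symbols)).encode()
    return hashlib.sha256(material).digest()[:key_bytes]

# Hypothetical R-peak timestamps (seconds) seen by two sensors on the same body
ecg_peaks = [0.000, 0.800, 1.600, 2.440, 3.280, 4.080]
ppg_peaks = [0.000, 0.801, 1.600, 2.441, 3.280, 4.081]   # ~1 ms sensing jitter
print(hrv_key(ecg_peaks) == hrv_key(ppg_peaks))          # -> True: both derive the same key
```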

  8. An Efficient Biometric-Based Algorithm Using Heart Rate Variability for Securing Body Sensor Networks.

    PubMed

    Pirbhulal, Sandeep; Zhang, Heye; Mukhopadhyay, Subhas Chandra; Li, Chunyue; Wang, Yumei; Li, Guanglin; Wu, Wanqing; Zhang, Yuan-Ting

    2015-06-26

    A Body Sensor Network (BSN) is a network of several associated sensor nodes on, inside, or around the human body to monitor vital signals, such as the Electroencephalogram (EEG), Photoplethysmography (PPG), Electrocardiogram (ECG), etc. Each sensor node in a BSN delivers major information; therefore, it is very significant to provide data confidentiality and security. All existing approaches to secure BSNs are based on complex cryptographic key generation procedures, which not only demand high resource utilization and computation time, but also consume large amounts of energy, power, and memory during data transmission. It is therefore indispensable to put forward an energy-efficient and computationally less complex authentication technique for BSNs. In this paper, a novel biometric-based algorithm is proposed, which utilizes Heart Rate Variability (HRV) in a simple key generation process to secure the BSN. Our proposed algorithm is compared with three data authentication techniques, namely Physiological Signal based Key Agreement (PSKA), Data Encryption Standard (DES), and Rivest Shamir Adleman (RSA). Simulation is performed in Matlab and results suggest that the proposed algorithm is quite efficient in terms of transmission time utilization, average remaining energy, and total power consumption.

  9. A Parallel Biological Optimization Algorithm to Solve the Unbalanced Assignment Problem Based on DNA Molecular Computing.

    PubMed

    Wang, Zhaocai; Pu, Jun; Cao, Liling; Tan, Jian

    2015-10-23

    The unbalanced assignment problem (UAP) is to optimally resolve the problem of assigning n jobs to m individuals (m < n), such that the minimum cost or maximum profit is obtained. It is a vitally important Non-deterministic Polynomial (NP) complete problem in operations management and applied mathematics, with numerous real-life applications. In this paper, we present a new parallel DNA algorithm for solving the unbalanced assignment problem using DNA molecular operations. We reasonably design flexible-length DNA strands representing different jobs and individuals, take appropriate steps, and obtain the solutions of the UAP in the proper length range and O(mn) time. We extend the application of DNA molecular operations and simultaneity to simplify the complexity of the computation.

  10. A fast recognition method of warhead target in boost phase using kinematic features

    NASA Astrophysics Data System (ADS)

    Chen, Jian; Xu, Shiyou; Tian, Biao; Wu, Jianhua; Chen, Zengping

    2015-12-01

    The number of radar targets increases from one to several when a ballistic missile separates its lower-stage rocket, casts off covers, or releases other components. It is vital to identify the warhead quickly among these multiple targets for radar tracking. A fast recognition method for the warhead target is proposed to solve this problem using kinematic features, a fuzzy comprehensive evaluation method, and information fusion. In order to weaken the influence of radar measurement noise, an extended Kalman filter with a constant jerk model (CJEKF) is applied to obtain more accurate target motion information. The simulation shows the validity of the algorithm and the effects of radar measurement precision on the algorithm's performance.
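
    The constant-jerk motion model can be made concrete per axis: the state [position, velocity, acceleration, jerk] propagates with the transition matrix below. This sketch covers only the kinematic prediction step, not the full CJEKF with measurement updates; the time step and state values are assumed.

```python
import numpy as np

def constant_jerk_F(dt):
    """State-transition matrix for the state x = [position, velocity, acceleration, jerk]."""
    return np.array([
        [1.0, dt,  dt**2 / 2.0, dt**3 / 6.0],
        [0.0, 1.0, dt,          dt**2 / 2.0],
        [0.0, 0.0, 1.0,         dt],
        [0.0, 0.0, 0.0,         1.0],
    ])

dt = 0.1                                    # radar update interval in seconds (assumed)
x = np.array([1000.0, 300.0, 20.0, 5.0])    # hypothetical range, velocity, accel, jerk
x_pred = constant_jerk_F(dt) @ x            # one-step prediction of the target state
print(x_pred)                               # position advances to about 1030.1 m
```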

  11. Human movement tracking based on Kalman filter

    NASA Astrophysics Data System (ADS)

    Zhang, Yi; Luo, Yuan

    2006-11-01

    While the rehabilitation of post-stroke patients is conducted, their movements need to be localized and learned so that incorrect movements can be instantly modified or tuned. Therefore, tracking these movements becomes vital and necessary for the rehabilitative course. In human movement tracking technologies, the position prediction of human movement is very important. In this paper, we first analyze the configuration of the human movement tracking system and the choice of sensors. Then, the Kalman filter algorithm and its modified version are proposed and used to predict the position of human movement. Finally, on the basis of an analysis of the method's performance, it is clear that the described method can be applied to human movement tracking systems.
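
    The abstract names the Kalman filter as the prediction tool but gives no equations; below is a minimal one-dimensional constant-velocity predict/update cycle of the standard filter. The noise covariances and measurement values are illustrative only.

```python
import numpy as np

def kalman_step(x, P, z, dt=0.1, q=1e-3, r=0.05):
    """One predict/update cycle of a 1D constant-velocity Kalman filter.
    x = [position, velocity], P = state covariance, z = position measurement."""
    F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-velocity motion model
    H = np.array([[1.0, 0.0]])                 # only position is measured
    Q = q * np.eye(2)                          # process noise (assumed)
    R = np.array([[r]])                        # measurement noise (assumed)

    x = F @ x                                  # predict
    P = F @ P @ F.T + Q
    y = z - H @ x                              # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x = x + (K @ y).ravel()                    # update
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.zeros(2), np.eye(2)
for z in [0.11, 0.19, 0.32, 0.41, 0.48]:       # noisy position marks of a moving limb
    x, P = kalman_step(x, P, np.array([z]))
print(x)                                       # filtered position and velocity estimate
```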

  12. A retrieval algorithm of hydrometer profile for submillimeter-wave radiometer

    NASA Astrophysics Data System (ADS)

    Liu, Yuli; Buehler, Stefan; Liu, Heguang

    2017-04-01

    Vertical profiles of particle microphysics are vital for the estimation of climate feedbacks. This paper proposes a new algorithm to retrieve profiles of hydrometeor parameters (i.e., ice, snow, rain, liquid cloud, graupel) based on passive submillimeter-wave measurements. These parameters include water content and particle size. The first part of the algorithm builds the database and retrieves the integrated quantities. The database is built with the Atmospheric Radiative Transfer Simulator (ARTS), which uses atmosphere data to simulate the corresponding brightness temperature. A neural network, trained on the precalculated database, is developed to retrieve the water path for each type of particle. The second part of the algorithm analyses the statistical relationship between water path and the vertical parameter profiles. Based on the strong dependence existing between vertical layers in the profiles, the Principal Component Analysis (PCA) technique is applied. The third part of the algorithm uses the forward model explicitly to retrieve the hydrometeor profiles. A cost function is calculated in each iteration, and a Differential Evolution (DE) algorithm is used to adjust the parameter values during the evolutionary process. The performance of this algorithm is planned to be verified for both the simulation database and measurement data, by retrieving profiles and comparing them with the initial ones. Results show that this algorithm has the ability to retrieve the hydrometeor profiles efficiently. The combination of ARTS and an optimization algorithm obtains much better results than the commonly used database approach. Meanwhile, the concept that ARTS can be used explicitly in the retrieval process shows great potential for providing solutions to other retrieval problems.

  13. Smart Helmet: Wearable Multichannel ECG and EEG

    PubMed Central

    Chanwimalueang, Theerasak; Goverdovsky, Valentin; Looney, David; Sharp, David; Mandic, Danilo P.

    2016-01-01

    Modern wearable technologies have enabled continuous recording of vital signs; however, for activities such as cycling, motor-racing, or military engagement, a helmet with embedded sensors would provide maximum convenience and the opportunity to monitor simultaneously both the vital signs and the electroencephalogram (EEG). To this end, we investigate the feasibility of recording the electrocardiogram (ECG), respiration, and EEG from face-lead locations, by embedding multiple electrodes within a standard helmet. The electrode positions are at the lower jaw, mastoids, and forehead, while for validation purposes a respiration belt around the thorax and a reference ECG from the chest serve as ground truth to assess the performance. The within-helmet EEG is verified by exposing the subjects to periodic visual and auditory stimuli and screening the recordings for the steady-state evoked potentials in response to these stimuli. Cycling and walking are chosen as real-world activities to illustrate how to deal with the so-induced irregular motion artifacts, which contaminate the recordings. We also propose a multivariate R-peak detection algorithm suitable for such noisy environments. Recordings in real-world scenarios support a proof of concept of the feasibility of recording vital signs and EEG from the proposed smart helmet. PMID:27957405

  14. Vital Sign Monitoring Through the Back Using an UWB Impulse Radar With Body Coupled Antennas.

    PubMed

    Schires, Elliott; Georgiou, Pantelis; Lande, Tor Sverre

    2018-04-01

    Radar devices can be used in nonintrusive situations to monitor vital signs, through clothes or behind walls. By detecting and extracting body motion linked to physiological activity, accurate simultaneous estimation of both heart rate (HR) and respiration rate (RR) is possible. However, most research to date has focused on front monitoring of superficial motion of the chest. In this paper, body penetration of the electromagnetic (EM) wave is investigated to perform back monitoring of human subjects. Using body-coupled antennas and an ultra-wideband (UWB) pulsed radar, in-body monitoring of lung and heart motion was achieved. An optimised measurement location on the back of a subject is presented to enhance the signal-to-noise ratio and limit attenuation of reflected radar signals. Phase-based detection techniques are then investigated for back measurements of vital signs, in conjunction with frequency estimation methods that reduce the impact of parasite signals. Finally, an algorithm combining these techniques is presented to allow robust and real-time estimation of both HR and RR. Static and dynamic tests were conducted and demonstrated the possibility of using this sensor in future health monitoring systems, especially in the form of a smart car seat for driver monitoring.
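
    The frequency-estimation stage can be illustrated independently of the radar hardware: once a chest/back displacement (phase) signal has been demodulated, respiration and heart rates appear as spectral peaks in separate bands. The sketch below searches those bands with an FFT; the band limits and the synthetic signal are assumptions, and no parasite-signal suppression is included.

```python
import numpy as np

def band_peak_bpm(signal, fs, f_lo, f_hi):
    """Return the dominant frequency (per minute) within the band [f_lo, f_hi] Hz."""
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal)))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

fs, duration = 20.0, 30.0                        # slow-time sampling of the radar phase
t = np.arange(0, duration, 1 / fs)
# hypothetical displacement: respiration at 0.3 Hz plus a weaker heartbeat at 1.2 Hz
phase = 1.0 * np.sin(2 * np.pi * 0.3 * t) + 0.1 * np.sin(2 * np.pi * 1.2 * t)

print(band_peak_bpm(phase, fs, 0.1, 0.6))        # -> ~18 breaths per minute (RR)
print(band_peak_bpm(phase, fs, 0.8, 2.0))        # -> ~72 beats per minute (HR)
```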

  15. Mining Productive-Associated Periodic-Frequent Patterns in Body Sensor Data for Smart Home Care

    PubMed Central

    Ismail, Walaa N.; Hassan, Mohammad Mehedi

    2017-01-01

    The understanding of various health-oriented vital sign data generated from body sensor networks (BSNs) and discovery of the associations between the generated parameters is an important task that may assist and promote important decision making in healthcare. For example, in a smart home scenario where occupants’ health status is continuously monitored remotely, it is essential to provide the required assistance when an unusual or critical situation is detected in their vital sign data. In this paper, we present an efficient approach for mining the periodic patterns obtained from BSN data. In addition, we employ a correlation test on the generated patterns and introduce productive-associated periodic-frequent patterns as the set of correlated periodic-frequent items. The combination of these measures has the advantage of empowering healthcare providers and patients to raise the quality of diagnosis as well as improve treatment and smart care, especially for elderly people in smart homes. We develop an efficient algorithm named PPFP-growth (Productive Periodic-Frequent Pattern-growth) to discover all productive-associated periodic frequent patterns using these measures. PPFP-growth is efficient and the productiveness measure removes uncorrelated periodic items. An experimental evaluation on synthetic and real datasets shows the efficiency of the proposed PPFP-growth algorithm, which can filter a huge number of periodic patterns to reveal only the correlated ones. PMID:28445441

  16. Mining Productive-Associated Periodic-Frequent Patterns in Body Sensor Data for Smart Home Care.

    PubMed

    Ismail, Walaa N; Hassan, Mohammad Mehedi

    2017-04-26

    The understanding of various health-oriented vital sign data generated from body sensor networks (BSNs) and discovery of the associations between the generated parameters is an important task that may assist and promote important decision making in healthcare. For example, in a smart home scenario where occupants' health status is continuously monitored remotely, it is essential to provide the required assistance when an unusual or critical situation is detected in their vital sign data. In this paper, we present an efficient approach for mining the periodic patterns obtained from BSN data. In addition, we employ a correlation test on the generated patterns and introduce productive-associated periodic-frequent patterns as the set of correlated periodic-frequent items. The combination of these measures has the advantage of empowering healthcare providers and patients to raise the quality of diagnosis as well as improve treatment and smart care, especially for elderly people in smart homes. We develop an efficient algorithm named PPFP-growth (Productive Periodic-Frequent Pattern-growth) to discover all productive-associated periodic frequent patterns using these measures. PPFP-growth is efficient and the productiveness measure removes uncorrelated periodic items. An experimental evaluation on synthetic and real datasets shows the efficiency of the proposed PPFP-growth algorithm, which can filter a huge number of periodic patterns to reveal only the correlated ones.

  17. [A new information technology for system diagnosis of functional activity of human organs].

    PubMed

    Avshalumov, A Sh; Sudakov, K V; Filaretov, G F

    2006-01-01

    The goal of this work was to consider a new diagnostic technology based on analysis of objective information parameters of functional activity and interaction of normal and pathologically changed human organs. The technology is based on the use of very low power millimeter (EHF) radiation emitted by human body and other biological objects in the process of vital activity. The importance of consideration of the information aspect of vital activity from the standpoint of the theory of functional systems suggested by P. K. Anokhin is emphasized. The suggested information technology is theoretically substantiated. The capabilities of the suggested technology for diagnosis, as well as the difficulties of its practical implementation caused by very low power of electromagnetic fields generated by human body, are discussed. It is noted that only use of modern radiophysical equipment together with new software based on specially developed algorithms made it possible to construct a medical EHF diagnostic system for effective implementation of the suggested technology. The system structure, functions of its components, the examination procedure, and the form of representation of diagnostic information are described together with the specific features of applied software based on the principle of maximal objectivity of analysis and interpretation of the results of diagnosis on the basis of artificial intelligence algorithms. The diagnostic capabilities of the system are illustrated by several examples.

  18. Implementation of a Smart Phone for Motion Analysis.

    PubMed

    Yodpijit, Nantakrit; Songwongamarit, Chalida; Tavichaiyuth, Nicha

    2015-01-01

    In today’s information-rich environment, one of the most popular devices is the smartphone. Research has shown significant growth in the use of smartphones and apps all over the world. The accelerometer within a smartphone is a motion sensor that can be used to detect human movements. Compared to other major vital signs, gait characteristics represent general health status, and they can be determined using smartphones. The objective of the current study is to design and develop an alternative technology that can potentially predict health status and reduce healthcare cost. This study uses a smartphone as a wireless accelerometer for quantifying human motion characteristics through four steps of system design and development (data acquisition, feature extraction algorithm, classifier design, and decision-making strategy). Findings indicate that it is possible to extract features from a smartphone’s accelerometer using a peak detection algorithm. Gait characteristics obtained from the peak detection algorithm include stride time, stance time, swing time, and cadence. Applications and limitations of this study are also discussed.
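
    The peak-detection step can be sketched directly: step peaks are located in the magnitude of the 3-axis acceleration, and step times and cadence follow from the peak spacing. The thresholds and the synthetic walking signal below are assumptions, not the study's tuned values.

```python
import numpy as np
from scipy.signal import find_peaks

def gait_features(acc_xyz, fs):
    """Estimate step times and cadence from a 3-axis accelerometer recording."""
    magnitude = np.linalg.norm(acc_xyz, axis=1)          # orientation-independent signal
    magnitude -= np.mean(magnitude)                      # remove the gravity offset
    peaks, _ = find_peaks(magnitude,
                          distance=int(0.4 * fs),        # at least 0.4 s between steps
                          height=0.3 * np.max(magnitude))
    step_times = np.diff(peaks) / fs                     # seconds between consecutive steps
    cadence = 60.0 / np.mean(step_times)                 # steps per minute
    return step_times, cadence

# Hypothetical walking data at 100 Hz: ~1.8 steps per second plus gravity and noise
fs = 100
t = np.arange(0, 10, 1 / fs)
acc = np.column_stack([0.2 * np.random.randn(len(t)),
                       0.2 * np.random.randn(len(t)),
                       9.81 + 2.0 * np.sin(2 * np.pi * 1.8 * t)])
steps, cadence = gait_features(acc, fs)
print(round(cadence))                                    # -> ~108 steps per minute
```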

  19. A Clonal Selection Algorithm for Minimizing Distance Travel and Back Tracking of Automatic Guided Vehicles in Flexible Manufacturing System

    NASA Astrophysics Data System (ADS)

    Chawla, Viveak Kumar; Chanda, Arindam Kumar; Angra, Surjit

    2018-03-01

    A flexible manufacturing system (FMS) consists of several programmable production work centers, material handling systems (MHSs), assembly stations, and automatic storage and retrieval systems. In an FMS, automatic guided vehicles (AGVs) play a vital role in material handling operations and enhance the performance of the FMS in its overall operations. To achieve a low makespan and high throughput yield in FMS operations, it is highly imperative to integrate the production work center schedules with the AGV schedules. The production schedule for work centers is generated by applying the Giffler and Thompson algorithm under four kinds of priority hybrid dispatching rules. Then the clonal selection algorithm (CSA) is applied for simultaneous scheduling to reduce backtracking as well as the distance travelled by AGVs within the FMS facility. The proposed procedure is computationally tested on a benchmark FMS configuration from the literature, and the findings clearly indicate that the CSA yields the best results in comparison with the other methods applied from the literature.

  20. Experimental setup for evaluating an adaptive user interface for teleoperation control

    NASA Astrophysics Data System (ADS)

    Wijayasinghe, Indika B.; Peetha, Srikanth; Abubakar, Shamsudeen; Saadatzi, Mohammad Nasser; Cremer, Sven; Popa, Dan O.

    2017-05-01

    A vital part of human interaction with a machine is the control interface, which single-handedly can define user satisfaction and the efficiency of performing a task. This paper elaborates the implementation of an experimental setup to study an adaptive algorithm that can help the user better tele-operate the robot. The formulation of the adaptive interface and the associated learning algorithms is general enough to apply when the mapping between the user controls and the robot actuators is complex and/or ambiguous. The method uses a genetic algorithm to find the optimal parameters that produce the input-output mapping for teleoperation control. In this paper, we describe the experimental setup and the associated results that were used to validate the adaptive interface on a differential drive robot with two different input devices: a joystick and a Myo gesture control armband. Results show that after the learning phase, the interface converges to an intuitive mapping that can help even inexperienced users drive the system to a goal location.

  1. Phenotyping and Visualizing Infusion-Related Reactions for Breast Cancer Patients.

    PubMed

    Sun, Deyu; Sarda, Gopal; Skube, Steven J; Blaes, Anne H; Khairat, Saif; Melton, Genevieve B; Zhang, Rui

    2017-01-01

    Infusion-related reactions (IRRs) are typical adverse events for breast cancer patients. Detecting IRRs and visualizing their occurrence associated with the drug treatment would potentially assist clinicians to improve patient safety and help researchers model IRRs and analyze their risk factors. We developed and evaluated a phenotyping algorithm to detect IRRs for breast cancer patients. We also designed a visualization prototype to render IRR patients' medications, lab tests and vital signs over time. By comparing with the 42 randomly selected doses that were manually labeled by a domain expert, the sensitivity, positive predictive value, specificity, and negative predictive value of the algorithm are 69%, 60%, 79%, and 85%, respectively. Using the algorithm, incidences of 6.4% of patients and 1.8% of doses for docetaxel, 8.7% and 3.2% for doxorubicin, 10.4% and 1.2% for paclitaxel, and 16.1% and 1.1% for trastuzumab were identified retrospectively. The estimated incidences are consistent with related studies.

  2. Phenotyping and Visualizing Infusion-Related Reactions for Breast Cancer Patients

    PubMed Central

    Sun, Deyu; Sarda, Gopal; Skube, Steven J.; Blaes, Anne H.; Khairat, Saif; Melton, Genevieve B.; Zhang, Rui

    2018-01-01

    Infusion-related reactions (IRRs) are typical adverse events for breast cancer patients. Detecting IRRs and visualizing their occurrence associated with the drug treatment would potentially assist clinicians to improve patient safety and help researchers model IRRs and analyze their risk factors. We developed and evaluated a phenotyping algorithm to detect IRRs for breast cancer patients. We also designed a visualization prototype to render IRR patients’ medications, lab tests and vital signs over time. By comparing with the 42 randomly selected doses that were manually labeled by a domain expert, the sensitivity, positive predictive value, specificity, and negative predictive value of the algorithm are 69%, 60%, 79%, and 85%, respectively. Using the algorithm, incidences of 6.4% of patients and 1.8% of doses for docetaxel, 8.7% and 3.2% for doxorubicin, 10.4% and 1.2% for paclitaxel, and 16.1% and 1.1% for trastuzumab were identified retrospectively. The estimated incidences are consistent with related studies. PMID:29295166

  3. Spot measurement of heart rate based on morphology of PhotoPlethysmoGraphic (PPG) signals.

    PubMed

    Madhan Mohan, P; Nagarajan, V; Vignesh, J C

    2017-02-01

    Due to increasing health consciousness among people, it is imperative to have low-cost health care devices to measure vital parameters like heart rate and arterial oxygen saturation (SpO2). In this paper, an efficient heart rate monitoring algorithm based on the morphology of photoplethysmography (PPG) signals, which measures the spot heart rate (HR), and its real-time implementation are proposed. The algorithm performs pre-processing and detects the onsets and systolic peaks of the PPG signal to estimate the heart rate of the subject. Since the algorithm is based on the morphology of the signal, it works well when the subject is not moving, which is a typical test case; the algorithm is therefore developed mainly to measure the heart rate in on-demand applications. Real-time experimental results indicate a heart rate accuracy of 99.5%, a mean absolute percentage error (MAPE) of 1.65%, a mean absolute error (MAE) of 1.18 BPM, and a reference closeness factor (RCF) of 0.988. The results further show that the average response time of the algorithm to give the spot HR is 6.85 s, so users need not wait long to see their HR. The hardware implementation results show that the algorithm requires only 18 KBytes of total memory and runs at high speed with 0.85 MIPS. This algorithm can therefore be targeted to low-cost embedded platforms.

  4. Maintaining and Enhancing Diversity of Sampled Protein Conformations in Robotics-Inspired Methods.

    PubMed

    Abella, Jayvee R; Moll, Mark; Kavraki, Lydia E

    2018-01-01

    The ability to efficiently sample structurally diverse protein conformations allows one to gain a high-level view of a protein's energy landscape. Algorithms from robot motion planning have been used for conformational sampling, and several of these algorithms promote diversity by keeping track of "coverage" in conformational space based on the local sampling density. However, large proteins present special challenges. In particular, larger systems require running many concurrent instances of these algorithms, but these algorithms can quickly become memory intensive because they typically keep previously sampled conformations in memory to maintain coverage estimates. In addition, robotics-inspired algorithms depend on defining useful perturbation strategies for exploring the conformational space, which is a difficult task for large proteins because such systems are typically more constrained and exhibit complex motions. In this article, we introduce two methodologies for maintaining and enhancing diversity in robotics-inspired conformational sampling. The first method addresses algorithms based on coverage estimates and leverages the use of a low-dimensional projection to define a global coverage grid that maintains coverage across concurrent runs of sampling. The second method is an automatic definition of a perturbation strategy through readily available flexibility information derived from B-factors, secondary structure, and rigidity analysis. Our results show a significant increase in the diversity of the conformations sampled for proteins consisting of up to 500 residues when applied to a specific robotics-inspired algorithm for conformational sampling. The methodologies presented in this article may be vital components for the scalability of robotics-inspired approaches.

  5. A new compound arithmetic crossover-based genetic algorithm for constrained optimisation in enterprise systems

    NASA Astrophysics Data System (ADS)

    Jin, Chenxia; Li, Fachao; Tsang, Eric C. C.; Bulysheva, Larissa; Kataev, Mikhail Yu

    2017-01-01

    In many real industrial applications, the integration of raw data with a methodology can support economically sound decision-making. Furthermore, most of these tasks involve complex optimisation problems, so seeking better solutions is critical. As an intelligent search optimisation algorithm, the genetic algorithm (GA) is an important technique for complex system optimisation, but it has internal drawbacks such as low computational efficiency and premature convergence. Improving the performance of the GA is a vital topic in academic and applied research. In this paper, a new real-coded crossover operator, called the compound arithmetic crossover operator (CAC), is proposed. CAC is used in conjunction with a uniform mutation operator to define a new genetic algorithm, CAC10-GA. This GA is compared with an existing genetic algorithm (AC10-GA) that comprises an arithmetic crossover operator and a uniform mutation operator. To judge the performance of CAC10-GA, two kinds of analysis are performed. First, the convergence of CAC10-GA is analysed using Markov chain theory; second, a pair-wise comparison is carried out between CAC10-GA and AC10-GA on two test problems available in the global optimisation literature. The overall comparative study shows that the CAC performs quite well and that the CAC10-GA outperforms the AC10-GA.
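
    The compound operator itself is the paper's contribution and is not reproduced here; for orientation, the sketch below shows the conventional real-coded arithmetic crossover that the AC10-GA baseline relies on: offspring are convex combinations of the two parents, which keeps them inside any convex feasible region. The chromosomes are hypothetical.

```python
import random

def arithmetic_crossover(parent1, parent2, lam=None):
    """Conventional real-coded arithmetic crossover: each child is a convex combination
    of the parents, so box (and other convex) constraints remain satisfied."""
    lam = random.random() if lam is None else lam
    child1 = [lam * a + (1 - lam) * b for a, b in zip(parent1, parent2)]
    child2 = [(1 - lam) * a + lam * b for a, b in zip(parent1, parent2)]
    return child1, child2

p1, p2 = [1.0, 4.0, -2.0], [3.0, 0.0, 2.0]   # hypothetical real-coded chromosomes
print(arithmetic_crossover(p1, p2, lam=0.25))
# -> ([2.5, 1.0, 1.0], [1.5, 3.0, -1.0])
```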

  6. Convalescing Cluster Configuration Using a Superlative Framework

    PubMed Central

    Sabitha, R.; Karthik, S.

    2015-01-01

    Competent data mining methods are vital to discover knowledge from databases, which are built as a result of the enormous growth of data. Various techniques of data mining are applied to obtain knowledge from these databases. Data clustering is one such descriptive data mining technique which guides in partitioning data objects into disjoint segments. The K-means algorithm is a versatile algorithm among the various approaches used in data clustering. The algorithm and its diverse adaptation methods suffer certain problems in their performance. To overcome these issues, a superlative algorithm has been proposed in this paper to perform data clustering. The specific feature of the proposed algorithm is discretizing the dataset, thereby improving the accuracy of clustering, and also adopting the binary search initialization method to generate cluster centroids. The generated centroids are fed as input to the K-means approach, which iteratively segments the data objects into respective clusters. The clustered results are measured for accuracy and validity. Experiments conducted by testing the approach on datasets from the UC Irvine Machine Learning Repository evidently show that the accuracy and validity measures are higher than those of the other two approaches, namely, simple K-means and the Binary Search method. Thus, the proposed approach proves that the discretization process improves the efficacy of descriptive data mining tasks. PMID:26543895
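
    The discretization step and the binary-search initialization are the paper's contributions; for context, the sketch below is the plain Lloyd iteration of K-means into which any set of initial centroids is fed. The data, the value of K, and the naive random initialization are illustrative.

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Plain Lloyd's K-means: assign points to the nearest centroid, then recompute centroids."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]   # naive initialization
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
                        for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids

# Two well-separated hypothetical clusters in 2D
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)), rng.normal(5.0, 0.3, (50, 2))])
labels, centroids = kmeans(X, k=2)
print(np.round(centroids, 1))     # centroids close to (0, 0) and (5, 5)
```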

  7. On the Accuracy and Parallelism of GPGPU-Powered Incremental Clustering Algorithms

    PubMed Central

    He, Li; Zheng, Hao; Wang, Lei

    2017-01-01

    Incremental clustering algorithms play a vital role in various applications such as massive data analysis and real-time data processing. Typical application scenarios of incremental clustering raise high demand on computing power of the hardware platform. Parallel computing is a common solution to meet this demand. Moreover, General Purpose Graphic Processing Unit (GPGPU) is a promising parallel computing device. Nevertheless, incremental clustering algorithms face a dilemma between clustering accuracy and parallelism when they are powered by GPGPU. We formally analyzed the cause of this dilemma. First, we formalized concepts relevant to incremental clustering like evolving granularity. Second, we formally proved two theorems. The first theorem proves the relation between clustering accuracy and evolving granularity. Additionally, this theorem analyzes the upper and lower bounds of different-to-same mis-affiliation. Fewer occurrences of such mis-affiliation mean higher accuracy. The second theorem reveals the relation between parallelism and evolving granularity. Smaller work-depth means superior parallelism. Through the proofs, we conclude that accuracy of an incremental clustering algorithm is negatively related to evolving granularity while parallelism is positively related to the granularity. Thus the contradictory relations cause the dilemma. Finally, we validated the relations through a demo algorithm. Experiment results verified theoretical conclusions. PMID:29123546

  8. Architecture and implementation considerations of a high-speed Viterbi decoder for a Reed-Muller subcode

    NASA Technical Reports Server (NTRS)

    Lin, Shu (Principal Investigator); Uehara, Gregory T.; Nakamura, Eric; Chu, Cecilia W. P.

    1996-01-01

    The (64, 40, 8) subcode of the third-order Reed-Muller (RM) code for high-speed satellite communications is proposed. The RM subcode can be used either alone or as an inner code of a concatenated coding system with the NASA standard (255, 223, 33) Reed-Solomon (RS) code as the outer code to achieve high performance (or low bit-error rate) with reduced decoding complexity. It can also be used as a component code in a multilevel bandwidth efficient coded modulation system to achieve reliable bandwidth efficient data transmission. The progress made toward achieving the goal of implementing a decoder system based upon this code is summarized. The development of the integrated circuit prototype sub-trellis IC, particularly focusing on the design methodology, is addressed.

  9. Performance of convolutionally encoded noncoherent MFSK modem in fading channels

    NASA Technical Reports Server (NTRS)

    Modestino, J. W.; Mui, S. Y.

    1976-01-01

    The performance of a convolutionally encoded noncoherent multiple-frequency shift-keyed (MFSK) modem utilizing Viterbi maximum-likelihood decoding and operating on a fading channel is described. Both the lognormal and classical Rician fading channels are considered for both slow and time-varying channel conditions. Primary interest is in the resulting bit error rate as a function of the ratio between the energy per transmitted information bit and noise spectral density, parameterized by both the fading channel and code parameters. Fairly general upper bounds on bit error probability are provided and compared with simulation results in the two extremes of zero and infinite channel memory. The efficacy of simple block interleaving in combatting channel memory effects is thoroughly explored. Both quantized and unquantized receiver outputs are considered.

  10. Performance analysis of a dual-tree algorithm for computing spatial distance histograms

    PubMed Central

    Chen, Shaoping; Tu, Yi-Cheng; Xia, Yuni

    2011-01-01

    Many scientific and engineering fields produce large volume of spatiotemporal data. The storage, retrieval, and analysis of such data impose great challenges to database systems design. Analysis of scientific spatiotemporal data often involves computing functions of all point-to-point interactions. One such analytics, the Spatial Distance Histogram (SDH), is of vital importance to scientific discovery. Recently, algorithms for efficient SDH processing in large-scale scientific databases have been proposed. These algorithms adopt a recursive tree-traversing strategy to process point-to-point distances in the visited tree nodes in batches, thus require less time when compared to the brute-force approach where all pairwise distances have to be computed. Despite the promising experimental results, the complexity of such algorithms has not been thoroughly studied. In this paper, we present an analysis of such algorithms based on a geometric modeling approach. The main technique is to transform the analysis of point counts into a problem of quantifying the area of regions where pairwise distances can be processed in batches by the algorithm. From the analysis, we conclude that the number of pairwise distances that are left to be processed decreases exponentially with more levels of the tree visited. This leads to the proof of a time complexity lower than the quadratic time needed for a brute-force algorithm and builds the foundation for a constant-time approximate algorithm. Our model is also general in that it works for a wide range of point spatial distributions, histogram types, and space-partitioning options in building the tree. PMID:21804753

  11. A Parallel Biological Optimization Algorithm to Solve the Unbalanced Assignment Problem Based on DNA Molecular Computing

    PubMed Central

    Wang, Zhaocai; Pu, Jun; Cao, Liling; Tan, Jian

    2015-01-01

    The unbalanced assignment problem (UAP) is to optimally resolve the problem of assigning n jobs to m individuals (m < n), such that the minimum cost or maximum profit is obtained. It is a vitally important Non-deterministic Polynomial (NP) complete problem in operations management and applied mathematics, with numerous real-life applications. In this paper, we present a new parallel DNA algorithm for solving the unbalanced assignment problem using DNA molecular operations. We reasonably design flexible-length DNA strands representing different jobs and individuals, take appropriate steps, and obtain the solutions of the UAP in the proper length range and O(mn) time. We extend the application of DNA molecular operations and simultaneity to simplify the complexity of the computation. PMID:26512650

  12. Development and validation of a simple algorithm for initiation of CPAP in neonates with respiratory distress in Malawi

    PubMed Central

    Hundalani, Shilpa G; Richards-Kortum, Rebecca; Oden, Maria; Kawaza, Kondwani; Gest, Alfred; Molyneux, Elizabeth

    2015-01-01

    Background: Low-cost bubble continuous positive airway pressure (bCPAP) systems have been shown to improve survival in neonates with respiratory distress, in developing countries including Malawi. District hospitals in Malawi implementing CPAP requested simple and reliable guidelines to enable healthcare workers with basic skills and minimal training to determine when treatment with CPAP is necessary. We developed and validated TRY (T: Tone is good, R: Respiratory Distress and Y=Yes) CPAP, a simple algorithm to identify neonates with respiratory distress who would benefit from CPAP. Objective: To validate the TRY CPAP algorithm for neonates with respiratory distress in a low-resource setting. Methods: We constructed an algorithm using a combination of vital signs, tone and birth weight to determine the need for CPAP in neonates with respiratory distress. Neonates admitted to the neonatal ward of Queen Elizabeth Central Hospital, in Blantyre, Malawi, were assessed in a prospective, cross-sectional study. Nurses and paediatricians-in-training assessed neonates to determine whether they required CPAP using the TRY CPAP algorithm. To establish the accuracy of the TRY CPAP algorithm in evaluating the need for CPAP, their assessment was compared with the decision of a neonatologist blinded to the TRY CPAP algorithm findings. Results: 325 neonates were evaluated over a 2-month period; 13% were deemed to require CPAP by the neonatologist. The inter-rater reliability with the algorithm was 0.90 for nurses and 0.97 for paediatricians-in-training using the neonatologist's assessment as the reference standard. Conclusions: The TRY CPAP algorithm has the potential to be a simple and reliable tool to assist nurses and clinicians in identifying neonates who require treatment with CPAP in low-resource settings. PMID:25877290

  13. Validation of accelerometer wear and nonwear time classification algorithm.

    PubMed

    Choi, Leena; Liu, Zhouwen; Matthews, Charles E; Buchowski, Maciej S

    2011-02-01

    The use of movement monitors (accelerometers) for measuring physical activity (PA) in intervention and population-based studies is becoming a standard methodology for the objective measurement of sedentary and active behaviors and for the validation of subjective PA self-reports. A vital step in PA measurement is the classification of daily time into accelerometer wear and nonwear intervals using its recordings (counts) and an accelerometer-specific algorithm. The purpose of this study was to validate and improve a commonly used algorithm for classifying accelerometer wear and nonwear time intervals using objective movement data obtained in the whole-room indirect calorimeter. We conducted a validation study of a wear or nonwear automatic algorithm using data obtained from 49 adults and 76 youth wearing accelerometers during a strictly monitored 24-h stay in a room calorimeter. The accelerometer wear and nonwear time classified by the algorithm was compared with actual wearing time. Potential improvements to the algorithm were examined using the minimum classification error as an optimization target. The recommended elements in the new algorithm are as follows: 1) zero-count threshold during a nonwear time interval, 2) 90-min time window for consecutive zero or nonzero counts, and 3) allowance of 2-min interval of nonzero counts with the upstream or downstream 30-min consecutive zero-count window for detection of artifactual movements. Compared with the true wearing status, improvements to the algorithm decreased nonwear time misclassification during the waking and the 24-h periods (all P values < 0.001). The accelerometer wear or nonwear time algorithm improvements may lead to more accurate estimation of time spent in sedentary and active behaviors.
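
    The three recommended elements are concrete enough to sketch on a minute-level count vector: nonzero spikes of at most 2 minutes flanked by 30 consecutive zero-count minutes on both sides are treated as artifactual, and the remaining zero runs of at least 90 minutes are classified as nonwear. The code below is a simplified reading of those rules, not the validated algorithm.

```python
import numpy as np

def nonwear_mask(counts, window=90, spike_tol=2, flank=30):
    """Minute-level nonwear classification (simplified reading of the recommended rules)."""
    zero = np.asarray(counts) == 0
    n = len(zero)

    # 1) treat short nonzero spikes (<= spike_tol min) as artifactual movement when they
    #    are flanked by `flank` consecutive zero-count minutes on both sides
    i = 0
    while i < n:
        if zero[i]:
            i += 1
            continue
        j = i
        while j < n and not zero[j]:
            j += 1
        left_ok = i >= flank and zero[i - flank:i].all()
        right_ok = j + flank <= n and zero[j:j + flank].all()
        if j - i <= spike_tol and left_ok and right_ok:
            zero[i:j] = True
        i = j

    # 2) mark zero-count runs of at least `window` minutes as nonwear
    mask = np.zeros(n, dtype=bool)
    i = 0
    while i < n:
        if not zero[i]:
            i += 1
            continue
        j = i
        while j < n and zero[j]:
            j += 1
        if j - i >= window:
            mask[i:j] = True
        i = j
    return mask

# Hypothetical day of 1440 one-minute epochs: a ~3 h nonwear block containing
# a 1-minute artifactual spike in the middle
counts = np.r_[np.random.randint(50, 500, 600), np.zeros(100), [120], np.zeros(80),
               np.random.randint(50, 500, 659)]
print(nonwear_mask(counts).sum())   # -> 181 minutes classified as nonwear
```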

  14. Improved Cost-Base Design of Water Distribution Networks using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Moradzadeh Azar, Foad; Abghari, Hirad; Taghi Alami, Mohammad; Weijs, Steven

    2010-05-01

    Population growth and the progressive extension of urbanization in different parts of Iran cause an increasing demand for primary needs. Water, this vital liquid, is the most important natural need for human life. Providing this natural need requires the design and construction of water distribution networks, which incur enormous costs on the country's budget. Any reduction in these costs would enable more people in society to be served at lower cost. Therefore, municipal councils need to maximize the benefits or minimize the expenditures of their investments. To achieve this purpose, the engineering design depends on cost optimization techniques. This paper presents optimization models based on a genetic algorithm (GA) to find the minimum design cost of Mahabad City's (North West Iran) water distribution network. By designing two models and comparing the resulting costs, the abilities of the GA were determined. The GA-based model could find optimum pipe diameters that reduce the design cost of the network. Results show that water distribution network design using a genetic algorithm can lead to a reduction of at least 7% in project costs in comparison with the classic model. Keywords: Genetic Algorithm, Optimum Design of Water Distribution Network, Mahabad City, Iran.

  15. A Stochastic Framework for Evaluating Seizure Prediction Algorithms Using Hidden Markov Models

    PubMed Central

    Wong, Stephen; Gardner, Andrew B.; Krieger, Abba M.; Litt, Brian

    2007-01-01

    Responsive, implantable stimulation devices to treat epilepsy are now in clinical trials. New evidence suggests that these devices may be more effective when they deliver therapy before seizure onset. Despite years of effort, prospective seizure prediction, which could improve device performance, remains elusive. In large part, this is explained by lack of agreement on a statistical framework for modeling seizure generation and a method for validating algorithm performance. We present a novel stochastic framework based on a three-state hidden Markov model (HMM) (representing interictal, preictal, and seizure states) with the feature that periods of increased seizure probability can transition back to the interictal state. This notion reflects clinical experience and may enhance interpretation of published seizure prediction studies. Our model accommodates clipped EEG segments and formalizes intuitive notions regarding statistical validation. We derive equations for type I and type II errors as a function of the number of seizures, duration of interictal data, and prediction horizon length and we demonstrate the model’s utility with a novel seizure detection algorithm that appeared to predict seizure onset. We propose this framework as a vital tool for designing and validating prediction algorithms and for facilitating collaborative research in this area. PMID:17021032
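
    The key modeling choice, a three-state Markov chain in which the preictal state may relapse to the interictal state without producing a seizure, is easy to make concrete. The transition probabilities below are illustrative, not values fitted in the paper.

```python
import numpy as np

STATES = ["interictal", "preictal", "seizure"]
# rows: current state, columns: next state (illustrative per-epoch probabilities)
P = np.array([
    [0.990, 0.010, 0.000],   # interictal: mostly persists, occasionally turns preictal
    [0.050, 0.930, 0.020],   # preictal: may relapse to interictal without a seizure
    [0.400, 0.000, 0.600],   # seizure: eventually terminates back to interictal
])

def simulate(n_epochs, seed=0):
    """Sample a hidden state sequence from the three-state seizure-generation model."""
    rng = np.random.default_rng(seed)
    seq, state = [], 0                      # start in the interictal state
    for _ in range(n_epochs):
        state = rng.choice(3, p=P[state])
        seq.append(STATES[state])
    return seq

seq = simulate(5000)
print(seq.count("seizure"), "seizure epochs out of", len(seq))
```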

  16. The Ensemble Kalman filter: a signal processing perspective

    NASA Astrophysics Data System (ADS)

    Roth, Michael; Hendeby, Gustaf; Fritsche, Carsten; Gustafsson, Fredrik

    2017-12-01

    The ensemble Kalman filter (EnKF) is a Monte Carlo-based implementation of the Kalman filter (KF) for extremely high-dimensional, possibly nonlinear, and non-Gaussian state estimation problems. Its ability to handle state dimensions in the order of millions has made the EnKF a popular algorithm in different geoscientific disciplines. Despite a similarly vital need for scalable algorithms in signal processing, e.g., to make sense of the ever increasing amount of sensor data, the EnKF is hardly discussed in our field. This self-contained review is aimed at signal processing researchers and provides all the knowledge to get started with the EnKF. The algorithm is derived in a KF framework, without the often encountered geoscientific terminology. Algorithmic challenges and required extensions of the EnKF are provided, as well as relations to sigma point KF and particle filters. The relevant EnKF literature is summarized in an extensive survey and unique simulation examples, including popular benchmark problems, complement the theory with practical insights. The signal processing perspective highlights new directions of research and facilitates the exchange of potentially beneficial ideas, both for the EnKF and high-dimensional nonlinear and non-Gaussian filtering in general.
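
    For orientation, the core of the EnKF is the analysis (measurement-update) step, computed from ensemble statistics instead of an explicitly propagated covariance. The sketch below implements the stochastic (perturbed-observation) analysis step for a linear observation operator; the dimensions and noise levels are illustrative.

```python
import numpy as np

def enkf_analysis(X, H, y, R, rng):
    """Stochastic EnKF analysis step.
    X : (n, N) forecast ensemble of N state vectors
    H : (m, n) linear observation operator
    y : (m,)   observation
    R : (m, m) observation-noise covariance"""
    n, N = X.shape
    A = X - X.mean(axis=1, keepdims=True)            # ensemble anomalies
    HX = H @ X
    HA = HX - HX.mean(axis=1, keepdims=True)
    # sample-covariance Kalman gain: K = P H^T (H P H^T + R)^-1
    K = (A @ HA.T) @ np.linalg.inv(HA @ HA.T + (N - 1) * R)
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T  # perturbed obs
    return X + K @ (Y - HX)                          # analysis ensemble

rng = np.random.default_rng(1)
n, N = 4, 50                                         # small state dimension, 50 members
X = rng.normal(0.0, 1.0, size=(n, N))                # assumed forecast ensemble
H = np.eye(1, n)                                     # observe only the first component
y, R = np.array([2.0]), np.array([[0.1]])
Xa = enkf_analysis(X, H, y, R, rng)
print(Xa[0].mean())                                  # ensemble mean pulled toward y
```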

  17. Validation of Automated Prediction of Blood Product Needs Algorithm Processing Continuous Non Invasive Vital Signs Streams (ONPOINT4)

    DTIC Science & Technology

    2018-01-25

    [Abstract not recovered; report documentation fragment.] Performing organization: University of Maryland, Baltimore, R Adams Cowley Shock Trauma Center. The report concerns a bleeding risk index (M2) built on continuous noninvasive vital signs streams, evaluated alongside the revised trauma score, the shock index (heart rate divided by systolic blood pressure), and the assessment of blood consumption, and includes a transfusion prediction model evaluation in special subsets (model stress test).

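    The excerpt above still names the shock index (heart rate divided by systolic blood pressure); the following trivial sketch only illustrates that formula, with hypothetical vital-sign values.

```python
def shock_index(heart_rate_bpm: float, systolic_bp_mmhg: float) -> float:
    """Shock index = heart rate / systolic blood pressure."""
    return heart_rate_bpm / systolic_bp_mmhg

# Hypothetical readings for illustration only.
print(round(shock_index(110, 90), 2))  # 1.22
```
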
  18. Increasing the security at vital infrastructures: automated detection of deviant behaviors

    NASA Astrophysics Data System (ADS)

    Burghouts, Gertjan J.; den Hollander, Richard; Schutte, Klamer; Marck, Jan-Willem; Landsmeer, Sander; den Breejen, Eric

    2011-06-01

    This paper discusses the decomposition of hostile intentions into abnormal behaviors. A list of such behaviors has been compiled for the specific case of public transport. Some of the deviant behaviors are hard for human observers to spot, as they occur in the midst of the crowd. Examples are deviant walking patterns, prohibited actions such as taking photos, and waiting without taking the train. We discuss our visual analytics algorithms and demonstrate them on CCTV footage from the Amsterdam train station.

  19. A New Paradigm of Technology-Enabled ‘Vital Signs’ for Early Detection of Health Change for Older Adults.

    PubMed

    Rantz, Marilyn J; Skubic, Marjorie; Popescu, Mihail; Galambos, Colleen; Koopman, Richelle J; Alexander, Gregory L; Phillips, Lorraine J; Musterman, Katy; Back, Jessica; Miller, Steven J

    2015-01-01

    Environmentally embedded (nonwearable) sensor technology is in continuous use in elder housing to monitor a new set of 'vital signs' that continuously measure the functional status of older adults, detect potential changes in health or functional status, and alert healthcare providers for early recognition and treatment of those changes. Older adult participants' respiration, pulse, and restlessness are monitored as they sleep. Gait speed, stride length, and stride time are calculated daily and used to automatically assess increasing fall risk. Activity levels are summarized and graphically displayed for easy interpretation. Falls are detected when they occur and alerts are sent immediately to healthcare providers, so time to rescue may be reduced. Automated health alerts are sent to healthcare staff, based on continuously running algorithms applied to the sensor data, days and weeks before typical signs or symptoms are detected by the person, family members, or healthcare providers. Discovering these new functional status 'vital signs', developing automated methods for interpreting them, and alerting others when changes occur have the potential to transform chronic illness management and facilitate aging in place through the end of life. Key findings of research in progress at the University of Missouri are discussed in this viewpoint article, as well as obstacles to widespread adoption.

  20. Internet of Health Things: Toward intelligent vital signs monitoring in hospital wards.

    PubMed

    da Costa, Cristiano André; Pasluosta, Cristian F; Eskofier, Björn; da Silva, Denise Bandeira; da Rosa Righi, Rodrigo

    2018-06-02

    Large amounts of patient data are routinely manually collected in hospitals by using standalone medical devices, including vital signs. Such data is sometimes stored in spreadsheets, not forming part of patients' electronic health records, and is therefore difficult for caregivers to combine and analyze. One possible solution to overcome these limitations is the interconnection of medical devices via the Internet using a distributed platform, namely the Internet of Things. This approach allows data from different sources to be combined in order to better diagnose patient health status and identify possible anticipatory actions. This work introduces the concept of the Internet of Health Things (IoHT), focusing on surveying the different approaches that could be applied to gather and combine data on vital signs in hospitals. Common heuristic approaches are considered, such as weighted early warning scoring systems, and the possibility of employing intelligent algorithms is analyzed. As a result, this article proposes possible directions for combining patient data in hospital wards to improve efficiency, allow the optimization of resources, and minimize patient health deterioration. It is concluded that a patient-centered approach is critical, and that the IoHT paradigm will continue to provide more optimal solutions for patient management in hospital wards. Copyright © 2018 Elsevier B.V. All rights reserved.

  1. Differences in spirometry interpretation algorithms: influence on decision making among primary-care physicians.

    PubMed

    He, Xiao-Ou; D'Urzo, Anthony; Jugovic, Pieter; Jhirad, Reuven; Sehgal, Prateek; Lilly, Evan

    2015-03-12

    Spirometry is recommended for the diagnosis of asthma and chronic obstructive pulmonary disease (COPD) in international guidelines and may be useful for distinguishing asthma from COPD. Numerous spirometry interpretation algorithms (SIAs) are described in the literature, but no studies highlight how different SIAs may influence the interpretation of the same spirometric data. We examined how two different SIAs may influence decision making among primary-care physicians. Data for this initiative were gathered from 113 primary-care physicians attending accredited workshops in Canada between 2011 and 2013. Physicians were asked to interpret nine spirograms presented twice in random sequence using two different SIAs and touch pad technology for anonymous data recording. We observed differences in the interpretation of spirograms using two different SIAs. When the pre-bronchodilator FEV1/FVC (forced expiratory volume in one second/forced vital capacity) ratio was >0.70, algorithm 1 led to a 'normal' interpretation (78% of physicians), whereas algorithm 2 prompted a bronchodilator challenge revealing changes in FEV1 that were consistent with asthma, an interpretation selected by 94% of physicians. When the FEV1/FVC ratio was <0.70 after bronchodilator challenge but FEV1 increased >12% and 200 ml, 76% suspected asthma and 10% suspected COPD using algorithm 1, whereas 74% suspected asthma versus COPD using algorithm 2 across five separate cases. The absence of a post-bronchodilator FEV1/FVC decision node in algorithm 1 did not permit consideration of possible COPD. This study suggests that differences in SIAs may influence decision making and lead clinicians to interpret the same spirometry data differently.

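    As an illustration of how the presence or absence of a post-bronchodilator FEV1/FVC decision node can change the interpretation of the same spirogram, here is a hedged sketch using the thresholds quoted above (ratio 0.70; FEV1 increase >12% and >200 ml). The two functions are simplified stand-ins, not the published SIAs, and the example values are hypothetical.

```python
def significant_bd_response(fev1_pre_l, fev1_post_l):
    """Bronchodilator response: FEV1 increase > 12% and > 200 ml."""
    delta = fev1_post_l - fev1_pre_l
    return delta > 0.2 and delta / fev1_pre_l > 0.12

def sia_without_postbd_node(fev1_fvc_pre, fev1_pre_l, fev1_post_l):
    # Algorithm-1-like path: no post-bronchodilator FEV1/FVC decision node.
    if fev1_fvc_pre > 0.70:
        return "normal"
    return "suspect asthma" if significant_bd_response(fev1_pre_l, fev1_post_l) else "suspect COPD"

def sia_with_postbd_node(fev1_fvc_pre, fev1_fvc_post, fev1_pre_l, fev1_post_l):
    # Algorithm-2-like path: bronchodilator challenge even when the pre-BD
    # ratio is normal, plus a post-bronchodilator ratio node.
    if significant_bd_response(fev1_pre_l, fev1_post_l):
        return "suspect asthma, consider COPD overlap" if fev1_fvc_post < 0.70 else "suspect asthma"
    return "suspect COPD" if fev1_fvc_post < 0.70 else "normal"

# Hypothetical spirogram: normal pre-BD ratio but a large FEV1 response.
print(sia_without_postbd_node(0.74, 2.0, 2.4))      # 'normal'
print(sia_with_postbd_node(0.74, 0.72, 2.0, 2.4))   # 'suspect asthma'
```
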
  2. A novel material detection algorithm based on 2D GMM-based power density function and image detail addition scheme in dual energy X-ray images.

    PubMed

    Pourghassem, Hossein

    2012-01-01

    Material detection is a vital need in dual energy X-ray luggage inspection systems at airport security checkpoints and other strategic places. In this paper, a novel material detection algorithm based on statistical trainable models using the 2-dimensional power density function (PDF) of three material categories in dual energy X-ray images is proposed. In this algorithm, the PDF of each material category, as a statistical model, is estimated from the transmission measurement values of low and high energy X-ray images by Gaussian Mixture Models (GMM). The material label of each pixel of an object is determined by how well its low- and high-energy transmission measurement values fit the PDFs of the three material categories (metallic, organic and mixed materials). The performance of the material detection algorithm is improved by a maximum voting scheme in a neighborhood of the image as a post-processing stage. Using background removal and denoising stages, the high and low energy X-ray images are enhanced as a pre-processing procedure. To improve the discrimination capability of the proposed material detection algorithm, the details of the low and high energy X-ray images are added to the constructed color image, which uses three colors (orange, blue and green) to represent the organic, metallic and mixed materials. The proposed algorithm is evaluated on real images captured from a commercial dual energy X-ray luggage inspection system. The obtained results show that the proposed algorithm is effective in detecting metallic, organic and mixed materials with acceptable accuracy.

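    A minimal sketch of the pixel-classification core described above: one 2-D Gaussian mixture is fitted per material category on (low-energy, high-energy) transmission pairs, pixels are labelled by maximum likelihood, and a neighbourhood majority vote is applied as post-processing. The training clusters, component counts and window size are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

classes = ["organic", "metallic", "mixed"]

def train_material_models(training_pixels, n_components=3):
    """Fit one 2-D GMM per material class on (low, high) transmission pairs."""
    return {c: GaussianMixture(n_components, covariance_type="full",
                               random_state=0).fit(training_pixels[c])
            for c in classes}

def classify_pixels(models, pixels):
    """Label each pixel by the class whose GMM gives the highest log-likelihood."""
    loglik = np.stack([models[c].score_samples(pixels) for c in classes], axis=1)
    return loglik.argmax(axis=1)

def majority_vote(labels_2d, k=3):
    """Post-processing: replace each label by the majority in a k x k window."""
    h, w = labels_2d.shape
    out, r = labels_2d.copy(), k // 2
    for i in range(h):
        for j in range(w):
            win = labels_2d[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
            out[i, j] = np.bincount(win.ravel()).argmax()
    return out

# Tiny synthetic demo: three hypothetical clusters in (low, high) transmission space.
rng = np.random.default_rng(7)
train = {c: rng.normal(loc=mu, scale=0.05, size=(200, 2))
         for c, mu in zip(classes, ([0.2, 0.7], [0.1, 0.2], [0.5, 0.5]))}
models = train_material_models(train)
pixels = rng.normal(loc=[0.2, 0.7], scale=0.05, size=(16, 2))
print(majority_vote(classify_pixels(models, pixels).reshape(4, 4)))
```
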
  3. Long-term spatial distributions and trends of the latent heat fluxes over the global cropland ecosystem using multiple satellite-based models

    PubMed Central

    Feng, Fei; Yao, Yunjun; Liu, Meng

    2017-01-01

    Estimating cropland latent heat flux (LE) from continental to global scales is vital to modeling crop production and managing water resources. Over the past several decades, numerous LE models were developed, such as the moderate resolution imaging spectroradiometer LE (MOD16) algorithm, revised remote sensing-based Penman–Monteith LE algorithm (RRS), the Priestley–Taylor LE algorithm of the Jet Propulsion Laboratory (PT-JPL) and the modified satellite-based Priestley-Taylor LE algorithm (MS-PT). However, these LE models have not been directly compared over the global cropland ecosystem using various algorithms. In this study, we evaluated the performances of these four LE models using 34 eddy covariance (EC) sites. The results showed that mean annual LE for cropland varied from 33.49 to 58.97 W/m2 among the four models. The interannual LE slightly increased during 1982–2009 across the global cropland ecosystem. All models had acceptable performances with the coefficient of determination (R2) ranging from 0.4 to 0.7 and a root mean squared error (RMSE) of approximately 35 W/m2. MS-PT had good overall performance across the cropland ecosystem with the highest R2, lowest RMSE and a relatively low bias. The reduced performances of MOD16 and RRS, with R2 ranging from 0.4 to 0.6 and RMSEs from 30 to 39 W/m2, might be attributed to empirical parameters in the structure algorithms and calibrated coefficients. PMID:28837704

  4. Routing and Scheduling Algorithms for WirelessHART Networks: A Survey

    PubMed Central

    Nobre, Marcelo; Silva, Ivanovitch; Guedes, Luiz Affonso

    2015-01-01

    Wireless communication is a trend nowadays for the industrial environment. A number of different technologies have emerged as solutions satisfying strict industrial requirements (e.g., WirelessHART, ISA100.11a, WIA-PA). As the industrial environment presents a vast range of applications, adopting an adequate solution for each case is vital to obtain good performance of the system. In this context, the routing and scheduling schemes associated with these technologies have a direct impact on important features, like latency and energy consumption. This situation has led to the development of a vast number of routing and scheduling schemes. In the present paper, we focus on the WirelessHART technology, emphasizing its most important routing and scheduling aspects in order to guide both end users and the developers of new algorithms. Furthermore, we provide a detailed literature review of the newest routing and scheduling techniques for WirelessHART, discussing each of their features. These routing algorithms have been evaluated in terms of their objectives, metrics, the usage of the WirelessHART structures and validation method. In addition, the scheduling algorithms were also evaluated by metrics, validation, objectives and, in addition, by multiple superframe support, as well as by the redundancy method used. Moreover, this paper briefly presents some insights into the main WirelessHART simulation modules available, in order to provide viable test platforms for the routing and scheduling algorithms. Finally, some open issues in WirelessHART routing and scheduling algorithms are discussed. PMID:25919371

  5. Basis for a neuronal version of Grover's quantum algorithm

    PubMed Central

    Clark, Kevin B.

    2014-01-01

    Grover's quantum (search) algorithm exploits principles of quantum information theory and computation to surpass the strong Church–Turing limit governing classical computers. The algorithm initializes a search field into superposed N (eigen)states to later execute nonclassical “subroutines” involving unitary phase shifts of measured states and to produce root-rate or quadratic gain in the algorithmic time (O(N^(1/2))) needed to find some “target” solution m. Akin to this fast technological search algorithm, single eukaryotic cells, such as differentiated neurons, perform natural quadratic speed-up in the search for appropriate store-operated Ca2+ response regulation of, among other processes, protein and lipid biosynthesis, cell energetics, stress responses, cell fate and death, synaptic plasticity, and immunoprotection. Such speed-up in cellular decision making results from spatiotemporal dynamics of networked intracellular Ca2+-induced Ca2+ release and the search (or signaling) velocity of Ca2+ wave propagation. As chemical processes, such as the duration of Ca2+ mobilization, become rate-limiting over interstore distances, Ca2+ waves quadratically decrease interstore-travel time from slow saltatory to fast continuous gradients proportional to the square-root of the classical Ca2+ diffusion coefficient, D^(1/2), matching the computing efficiency of Grover's quantum algorithm. In this Hypothesis and Theory article, I elaborate on these traits using a fire-diffuse-fire model of store-operated cytosolic Ca2+ signaling valid for glutamatergic neurons. Salient model features corresponding to Grover's quantum algorithm are parameterized to meet requirements for the Oracle Hadamard transform and Grover's iteration. A neuronal version of Grover's quantum algorithm figures to benefit signal coincidence detection and integration, bidirectional synaptic plasticity, and other vital cell functions by rapidly selecting, ordering, and/or counting optional response regulation choices. PMID:24860419

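    For reference, the quadratic gain mentioned above can be illustrated with a purely classical simulation of Grover's iteration (oracle phase flip followed by inversion about the mean), run for roughly (π/4)√N steps; this is a numerical illustration only, not the neuronal model proposed in the article.

```python
import numpy as np

def grover_iterations(n_items, target):
    """Classically simulate Grover's search over n_items basis states."""
    amps = np.full(n_items, 1 / np.sqrt(n_items))   # uniform superposition
    n_iter = int(round(np.pi / 4 * np.sqrt(n_items)))
    for _ in range(n_iter):
        amps[target] *= -1                          # oracle: flip target phase
        amps = 2 * amps.mean() - amps               # inversion about the mean
    return n_iter, amps[target] ** 2                # iterations, success probability

iters, p = grover_iterations(1024, target=3)
print(iters, round(p, 4))   # ~25 iterations, success probability close to 1
```
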
  6. Modelling maximum river flow by using Bayesian Markov Chain Monte Carlo

    NASA Astrophysics Data System (ADS)

    Cheong, R. Y.; Gabda, D.

    2017-09-01

    Analysis of flood trends is vital since flooding threatens human well-being in financial, environmental, and security terms. Data on annual maximum river flows in Sabah were fitted to the generalized extreme value (GEV) distribution. The maximum likelihood estimator (MLE) arises naturally when working with the GEV distribution. However, previous research showed that MLE provides unstable results, especially for small sample sizes. In this study, we used Bayesian Markov chain Monte Carlo (MCMC) based on the Metropolis-Hastings algorithm to estimate the GEV parameters. Bayesian MCMC is a statistical inference approach that estimates parameters from the posterior distribution obtained via Bayes’ theorem. The Metropolis-Hastings algorithm is used to overcome the high-dimensional state space faced by Monte Carlo methods. This approach also accounts for more uncertainty in parameter estimation, which then yields a better prediction of maximum river flow in Sabah.

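    A minimal sketch of the approach described above: a random-walk Metropolis-Hastings sampler over the GEV shape, location and scale parameters, using scipy's genextreme log-density with flat priors. The proposal scales, priors and toy data are assumptions for illustration, not the study's settings.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(1)

def log_posterior(theta, data):
    """Log posterior with flat priors: GEV log-likelihood (scipy's shape c)."""
    c, loc, scale = theta
    if scale <= 0:
        return -np.inf
    return genextreme.logpdf(data, c, loc=loc, scale=scale).sum()

def metropolis_hastings(data, theta0, step, n_samples=5000):
    """Random-walk Metropolis-Hastings over (shape, location, scale)."""
    theta, logp = np.asarray(theta0, float), log_posterior(theta0, data)
    chain = []
    for _ in range(n_samples):
        prop = theta + step * rng.normal(size=3)      # symmetric proposal
        logp_prop = log_posterior(prop, data)
        if np.log(rng.uniform()) < logp_prop - logp:  # accept/reject
            theta, logp = prop, logp_prop
        chain.append(theta.copy())
    return np.array(chain)

# Toy annual-maximum data drawn from a known GEV, for illustration only.
data = genextreme.rvs(-0.1, loc=100, scale=20, size=40, random_state=2)
chain = metropolis_hastings(data, theta0=(0.0, data.mean(), data.std()),
                            step=np.array([0.05, 2.0, 1.0]))
print(chain[2500:].mean(axis=0))  # posterior means after burn-in
```
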
  7. Quantitative segmentation of fluorescence microscopy images of heterogeneous tissue: Approach for tuning algorithm parameters

    NASA Astrophysics Data System (ADS)

    Mueller, Jenna L.; Harmany, Zachary T.; Mito, Jeffrey K.; Kennedy, Stephanie A.; Kim, Yongbaek; Dodd, Leslie; Geradts, Joseph; Kirsch, David G.; Willett, Rebecca M.; Brown, J. Quincy; Ramanujam, Nimmi

    2013-02-01

    The combination of fluorescent contrast agents with microscopy is a powerful technique to obtain real time images of tissue histology without the need for fixing, sectioning, and staining. The potential of this technology lies in the identification of robust methods for image segmentation and quantitation, particularly in heterogeneous tissues. Our solution is to apply sparse decomposition (SD) to monochrome images of fluorescently-stained microanatomy to segment and quantify distinct tissue types. The clinical utility of our approach is demonstrated by imaging excised margins in a cohort of mice after surgical resection of a sarcoma. Representative images of excised margins were used to optimize the formulation of SD and tune parameters associated with the algorithm. Our results demonstrate that SD is a robust solution that can advance vital fluorescence microscopy as a clinically significant technology.

  8. Diffusion-based recommendation with trust relations on tripartite graphs

    NASA Astrophysics Data System (ADS)

    Wang, Ximeng; Liu, Yun; Zhang, Guangquan; Xiong, Fei; Lu, Jie

    2017-08-01

    The diffusion-based recommendation approach is a vital branch in recommender systems, which successfully applies physical dynamics to make recommendations for users on bipartite or tripartite graphs. Trust links indicate users’ social relations and can provide the benefit of reducing data sparsity. However, traditional diffusion-based algorithms only consider rating links when making recommendations. In this paper, the complementarity of users’ implicit and explicit trust is exploited, and a novel resource-allocation strategy is proposed, which integrates these two kinds of trust relations on tripartite graphs. Through empirical studies on three benchmark datasets, our proposed method obtains better performance than most of the benchmark algorithms in terms of accuracy, diversity and novelty. According to the experimental results, our method is an effective and reasonable way to integrate additional features into the diffusion-based recommendation approach.

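    For context, here is a minimal sketch of the classical mass-diffusion (ProbS) resource-allocation step on a user-object bipartite graph, the baseline that trust-aware variants such as the one above extend; the toy rating matrix is hypothetical and the trust integration itself is not reproduced.

```python
import numpy as np

def mass_diffusion_scores(A, user):
    """Classical mass diffusion (ProbS) scores for one target user.

    A    : (n_users, n_objects) binary matrix; A[u, o] = 1 if user u collected o.
    user : index of the target user.
    """
    k_user = np.maximum(A.sum(axis=1), 1.0)   # user degrees
    k_obj = np.maximum(A.sum(axis=0), 1.0)    # object degrees
    f = A[user].astype(float)                 # one unit of resource per collected object
    r = (A / k_obj) @ f                       # objects -> users (split by object degree)
    s = A.T @ (r / k_user)                    # users -> objects (split by user degree)
    s[f > 0] = -np.inf                        # do not re-recommend collected objects
    return s

# Toy rating matrix: 4 users x 5 objects, for illustration only.
A = np.array([[1, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 1, 1, 0],
              [1, 0, 0, 1, 0]], dtype=float)
print(mass_diffusion_scores(A, user=1))  # rank uncollected objects for user 1
```
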
  9. Autonomous exploration and mapping of unknown environments

    NASA Astrophysics Data System (ADS)

    Owens, Jason; Osteen, Phil; Fields, MaryAnne

    2012-06-01

    Autonomous exploration and mapping is a vital capability for future robotic systems expected to function in arbitrarily complex environments. In this paper, we describe an end-to-end robotic solution for remotely mapping buildings. For a typical mapping task, an unmanned system is directed from a distance to enter an unknown building, sense the internal structure, and, barring additional tasks, create a 2-D map of the building while in situ. This map provides a useful and intuitive representation of the environment for the remote operator. We have integrated a robust mapping and exploration system utilizing laser range scanners and RGB-D cameras, and we demonstrate an exploration and metacognition algorithm on a robotic platform. The algorithm allows the robot to safely navigate the building, explore the interior, report significant features to the operator, and generate a consistent map - all while maintaining localization.

  10. A new VLSI architecture for a single-chip-type Reed-Solomon decoder

    NASA Technical Reports Server (NTRS)

    Hsu, I. S.; Truong, T. K.

    1989-01-01

    A new very large scale integration (VLSI) architecture for implementing Reed-Solomon (RS) decoders that can correct both errors and erasures is described. This new architecture implements a Reed-Solomon decoder by using replication of a single VLSI chip. It is anticipated that this single chip type RS decoder approach will save substantial development and production costs. It is estimated that reduction in cost by a factor of four is possible with this new architecture. Furthermore, this Reed-Solomon decoder is programmable between 8 bit and 10 bit symbol sizes. Therefore, both an 8 bit Consultative Committee for Space Data Systems (CCSDS) RS decoder and a 10 bit decoder are obtained at the same time, and when concatenated with a (15,1/6) Viterbi decoder, provide an additional 2.1-dB coding gain.

  11. In Brief: Air pollution app

    NASA Astrophysics Data System (ADS)

    Showstack, Randy

    2010-10-01

    A new smartphone application takes advantage of various technological capabilities and sensors to help users monitor air quality. Tapping into smartphone cameras, Global Positioning System (GPS) sensors, compasses, and accelerometers, computer scientists with the University of Southern California's (USC) Viterbi School of Engineering have developed a new application, provisionally entitled “Visibility.” Currently available for the Android telephone operating system, the application is available for free download at http://robotics.usc.edu/˜mobilesensing/Projects/AirVisibilityMonitoring. An iPhone application may be introduced soon. Smartphone users can take a picture of the sky and then compare it with models of sky luminance to estimate visibility. While conventional air pollution monitors are costly and thinly deployed in some areas, the smartphone application potentially could help fill in some blanks in existing air pollution maps, according to USC computer science professor Gaurav Sukhatme.

  12. Decoder synchronization for deep space missions

    NASA Technical Reports Server (NTRS)

    Statman, J. I.; Cheung, K.-M.; Chauvin, T. H.; Rabkin, J.; Belongie, M. L.

    1994-01-01

    The Consultative Committee for Space Data Standards (CCSDS) recommends that space communication links employ a concatenated, error-correcting, channel-coding system in which the inner code is a convolutional (7,1/2) code and the outer code is a (255,223) Reed-Solomon code. The traditional implementation is to perform the node synchronization for the Viterbi decoder and the frame synchronization for the Reed-Solomon decoder as separate, sequential operations. This article discusses a unified synchronization technique that is required for deep space missions that have data rates and signal-to-noise ratios (SNR's) that are extremely low. This technique combines frame synchronization in the bit and symbol domains and traditional accumulated-metric growth techniques to establish a joint frame and node synchronization. A variation on this technique is used for the Galileo spacecraft on its Jupiter-bound mission.

  13. Effective Association of SAR and AIS Data Using Non-Rigid Point Pattern Matching

    NASA Astrophysics Data System (ADS)

    Zhao, Z.; Ji, K. F.; Xing, X. W.; Zou, H. X.

    2014-03-01

    Ship surveillance using multiple remote sensing sensors is becoming increasingly vital. Among the various sensors, space-borne Synthetic Aperture Radar (SAR) is optimal for its high resolution over wide swaths and all-weather working capabilities. Meanwhile, the Automatic Identification System (AIS) efficiently provides ship navigational information. Given the limited progress achievable with SAR imagery alone, integrating the two sources offers significant benefits, and data association is the fundamental issue. Many algorithms have been developed, including the Nearest-Neighbour (NN) algorithm, the Joint Probabilistic Data Association (JPDA) method, and the Multiple Hypothesis Testing (MHT) approach. Ship positions derived from SAR images can be associated with the ones provided by AIS. The state-of-the-art method (the NN algorithm) has proved feasible, but it faces more challenges under adverse circumstances, such as high-density-shipping conditions. We investigate the non-rigid Point Pattern Matching (PPM) method to solve this problem. To the best of our knowledge, this paper is the first to introduce non-rigid PPM to the data association of SAR and AIS. Following an introduction to the data association problem, the Coherent Point Drift (CPD) algorithm is investigated. Experiments are carried out and the results illustrate that the CPD algorithm achieves higher accuracy and outperforms the state-of-the-art method, especially under high-density-shipping conditions.

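    A minimal sketch of the nearest-neighbour baseline mentioned above: each SAR-derived ship position is associated with the closest AIS report within a gating distance using a KD-tree. The coordinates and gate are hypothetical, and the CPD-based non-rigid matching itself is not reproduced here.

```python
import numpy as np
from scipy.spatial import cKDTree

def nearest_neighbour_association(sar_xy, ais_xy, gate_m=500.0):
    """Associate each SAR detection with the closest AIS report within a gate.

    sar_xy, ais_xy : (n, 2) arrays of positions in a common metric frame (metres).
    Returns a list of (sar_index, ais_index) pairs.
    """
    tree = cKDTree(ais_xy)
    dist, idx = tree.query(sar_xy, k=1)
    return [(i, int(j)) for i, (d, j) in enumerate(zip(dist, idx)) if d <= gate_m]

# Toy positions in a local metric frame, for illustration only.
sar = np.array([[0.0, 0.0], [1200.0, 300.0], [5000.0, 5000.0]])
ais = np.array([[30.0, -20.0], [1150.0, 350.0]])
print(nearest_neighbour_association(sar, ais))  # [(0, 0), (1, 1)]
```
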
  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wylie, Brian Neil; Moreland, Kenneth D.

    Graphs are a vital way of organizing data with complex correlations. A good visualization of a graph can fundamentally change human understanding of the data. Consequently, there is a rich body of work on graph visualization. Although there are many techniques that are effective on small to medium sized graphs (tens of thousands of nodes), there is a void in the research for visualizing massive graphs containing millions of nodes. Sandia is one of the few entities in the world that has the means and motivation to handle data on such a massive scale. For example, homeland security generates graphs from prolific media sources such as television, telephone, and the Internet. The purpose of this project is to provide the groundwork for visualizing such massive graphs. The research provides for two major feature gaps: a parallel, interactive visualization framework and scalable algorithms to make the framework usable to a practical application. Both the frameworks and algorithms are designed to run on distributed parallel computers, which are already available at Sandia. Some features are integrated into the ThreatView™ application and future work will integrate further parallel algorithms.

  15. Design of cryptographically secure AES like S-Box using second-order reversible cellular automata for wireless body area network applications.

    PubMed

    Gangadari, Bhoopal Rao; Rafi Ahamed, Shaik

    2016-09-01

    In biomedical, data security is the most expensive resource for wireless body area network applications. Cryptographic algorithms are used in order to protect the information against unauthorised access. Advanced encryption standard (AES) cryptographic algorithm plays a vital role in telemedicine applications. The authors propose a novel approach for design of substitution bytes (S-Box) using second-order reversible one-dimensional cellular automata (RCA2) as a replacement to the classical look-up-table (LUT) based S-Box used in AES algorithm. The performance of proposed RCA2 based S-Box and conventional LUT based S-Box is evaluated in terms of security using the cryptographic properties such as the nonlinearity, correlation immunity bias, strict avalanche criteria and entropy. Moreover, it is also shown that RCA2 based S-Boxes are dynamic in nature, invertible and provide high level of security. Further, it is also found that the RCA2 based S-Box have comparatively better performance than that of conventional LUT based S-Box.

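    For orientation, the construction behind "second-order reversible cellular automata" is typically next = rule(current) XOR previous, which can be inverted with the same formula. The sketch below illustrates that generic construction with an arbitrary elementary rule; the specific rule used in the paper and the way the S-Box is derived from the CA are not reproduced.

```python
import numpy as np

def rule_step(state, rule=30):
    """One step of an elementary CA rule on a circular 1-D binary array."""
    left, right = np.roll(state, 1), np.roll(state, -1)
    idx = (left << 2) | (state << 1) | right        # neighbourhood code 0..7
    return (rule >> idx) & 1

def second_order_step(prev, curr, rule=30):
    """Second-order reversible update: next = rule(curr) XOR prev."""
    return prev ^ rule_step(curr, rule)

def second_order_inverse(curr, nxt, rule=30):
    """Reverse one step: prev = rule(curr) XOR next (same formula)."""
    return nxt ^ rule_step(curr, rule)

rng = np.random.default_rng(3)
prev = rng.integers(0, 2, 8)
curr = rng.integers(0, 2, 8)
nxt = second_order_step(prev, curr)
assert np.array_equal(second_order_inverse(curr, nxt), prev)  # reversibility holds
```
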
  16. Design of cryptographically secure AES like S-Box using second-order reversible cellular automata for wireless body area network applications

    PubMed Central

    Rafi Ahamed, Shaik

    2016-01-01

    In biomedical, data security is the most expensive resource for wireless body area network applications. Cryptographic algorithms are used in order to protect the information against unauthorised access. Advanced encryption standard (AES) cryptographic algorithm plays a vital role in telemedicine applications. The authors propose a novel approach for design of substitution bytes (S-Box) using second-order reversible one-dimensional cellular automata (RCA2) as a replacement to the classical look-up-table (LUT) based S-Box used in AES algorithm. The performance of proposed RCA2 based S-Box and conventional LUT based S-Box is evaluated in terms of security using the cryptographic properties such as the nonlinearity, correlation immunity bias, strict avalanche criteria and entropy. Moreover, it is also shown that RCA2 based S-Boxes are dynamic in nature, invertible and provide high level of security. Further, it is also found that the RCA2 based S-Box have comparatively better performance than that of conventional LUT based S-Box. PMID:27733924

  17. The Method for Assigning Priority Levels (MAPLe): A new decision-support system for allocating home care resources

    PubMed Central

    Hirdes, John P; Poss, Jeff W; Curtin-Telegdi, Nancy

    2008-01-01

    Background Home care plays a vital role in many health care systems, but there is evidence that appropriate targeting strategies must be used to allocate limited home care resources effectively. The aim of the present study was to develop and validate a methodology for prioritizing access to community and facility-based services for home care clients. Methods Canadian and international data based on the Resident Assessment Instrument – Home Care (RAI-HC) were analyzed to identify predictors for nursing home placement, caregiver distress and for being rated as requiring alternative placement to improve outlook. Results The Method for Assigning Priority Levels (MAPLe) algorithm was a strong predictor of all three outcomes in the derivation sample. The algorithm was validated with additional data from five other countries, three other provinces, and an Ontario sample obtained after the use of the RAI-HC was mandated. Conclusion The MAPLe algorithm provides a psychometrically sound decision-support tool that may be used to inform choices related to allocation of home care resources and prioritization of clients needing community or facility-based services. PMID:18366782

  18. Performance evaluation of various classifiers for color prediction of rice paddy plant leaf

    NASA Astrophysics Data System (ADS)

    Singh, Amandeep; Singh, Maninder Lal

    2016-11-01

    The food industry is one of the industries that use machine vision for nondestructive quality evaluation of produce. These quality-measuring systems and software are built on various image-processing algorithms, which generally use a particular type of classifier. These classifiers play a vital role in making the algorithms intelligent enough to perform such quality evaluations well, translating human perception into machine vision and hence machine learning. The crop of interest is rice, and the color of this crop indicates the health status of the plant. An enormous number of classifiers are available for color prediction, but choosing the best among them is the focus of this paper. The performance of a total of 60 classifiers has been analyzed from an application point of view, and the results are discussed. The motivation comes from the idea of providing a set of classifiers with excellent performance and implementing them in a single algorithm to improve machine vision learning and, hence, associated applications.

  19. Remote sensing of wetland parameters related to carbon cycling

    NASA Technical Reports Server (NTRS)

    Bartlett, David S.; Johnson, Robert W.

    1985-01-01

    Measurement of the rates of important biogeochemical fluxes on regional or global scales is vital to understanding the geochemical and climatic consequences of natural biospheric processes and of human intervention in those processes. Remote data gathering and interpretation techniques were used to examine important cycling processes taking place in wetlands over large geographic expanses. Large area estimation of vegetative biomass and productivity depends upon accurate, consistent measurements of canopy spectral reflectance and upon wide applicability of algorithms relating reflectance to biometric parameters. Results of the use of airborne multispectral scanner data to map above-ground biomass in a Delaware salt marsh are shown. The mapping uses an effective algorithm linking biomass to measured spectral reflectance and a means to correct the scanner data for large variations in the angle of observation of the canopy. The consistency of radiometric biomass algorithms for marsh grass when they are applied over large latitudinal and tidal range gradients were also examined. Results of a 1 year study of methane emissions from tidal wetlands along a salinity gradient show marked effects of temperature, season, and pore-water chemistry in mediating flux to the atmosphere.

  20. A new range-free localisation in wireless sensor networks using support vector machine

    NASA Astrophysics Data System (ADS)

    Wang, Zengfeng; Zhang, Hao; Lu, Tingting; Sun, Yujuan; Liu, Xing

    2018-02-01

    Location information of sensor nodes is of vital importance for most applications in wireless sensor networks (WSNs). This paper proposes a new range-free localisation algorithm using support vector machine (SVM) and polar coordinate system (PCS), LSVM-PCS. In LSVM-PCS, two sets of classes are first constructed based on sensor nodes' polar coordinates. Using the boundaries of the defined classes, the operation region of WSN field is partitioned into a finite number of polar grids. Each sensor node can be localised into one of the polar grids by executing two localisation algorithms that are developed on the basis of SVM classification. The centre of the resident polar grid is then estimated as the location of the sensor node. In addition, a two-hop mass-spring optimisation (THMSO) is also proposed to further improve the localisation accuracy of LSVM-PCS. In THMSO, both neighbourhood information and non-neighbourhood information are used to refine the sensor node location. The results obtained verify that the proposed algorithm provides a significant improvement over existing localisation methods.

  1. Using standard clinical assessments for home care to identify vulnerable populations before, during, and after disasters.

    PubMed

    van Solm, Alexandra I T; Hirdes, John P; Eckel, Leslie A; Heckman, George A; Bigelow, Philip L

    Several studies have shown the increased vulnerability of and disproportionate mortality rate among frail community-dwelling older adults as a result of emergencies and disasters. This article discusses the applicability of the Vulnerable Persons at Risk (VPR) and VPR Plus decision-support algorithms, designed based on the Resident Assessment Instrument-Home Care (RAI-HC), to identify the most vulnerable community-dwelling (older) adults. A sample was taken from the Ontario RAI-HC database by selecting unique home care clients with assessments closest to December 31, 2014 (N = 275,797). Statistical methods used include cross-tabulation, bivariate logistic regression, Kaplan-Meier survival plotting, and Cox proportional hazards ratio calculations. The VPR and VPR Plus algorithms were highly predictive of mortality, long-term care admission and hospitalization in ordinary circumstances. This provides a good indication of the strength of the algorithms in identifying vulnerable persons at times of emergencies. Access to real-time person-level information on persons with functional care needs is a vital enabler for emergency responders in prioritizing and allocating resources during a disaster, and has great utility for emergency planning and recovery efforts. The development of valid and reliable algorithms supports the rapid identification of and response to vulnerable community-dwelling persons in all phases of emergency management.

  2. A portable respiratory rate estimation system with a passive single-lead electrocardiogram acquisition module.

    PubMed

    Nayan, Nazrul Anuar; Risman, Nur Sabrina; Jaafar, Rosmina

    2016-07-27

    Among vital signs of acutely ill hospital patients, respiratory rate (RR) is a highly accurate predictor of health deterioration. This study proposes a system that consists of a passive and non-invasive single-lead electrocardiogram (ECG) acquisition module and an ECG-derived respiratory (EDR) algorithm in the working prototype of a mobile application. Before estimating the RR that produces the EDR rate, ECG signals were evaluated based on the signal quality index (SQI). The SQI algorithm was validated quantitatively using the PhysioNet/Computing in Cardiology Challenge 2011 training data set. The RR extraction algorithm was validated using 40 records from the MIT PhysioNet Multiparameter Intelligent Monitoring in Intensive Care II data set. The estimated RR showed a mean absolute error (MAE) of 1.4 compared with the "gold standard" RR. The proposed system was used to record 20 ECGs of healthy subjects and obtained the estimated RR with an MAE of 0.7 bpm. Results indicate that the proposed hardware and algorithm could replace the manual counting method, uncomfortable nasal airflow sensor, chest band, and impedance pneumotachography often used in hospitals. The system also takes advantage of the prevalence of smartphone usage and increases the monitoring frequency of the current ECG of patients with critical illnesses.

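    A minimal sketch of one common ECG-derived respiration (EDR) approach, not necessarily the authors' algorithm: detect R peaks, treat the beat-to-beat R-peak amplitude as a respiration-modulated series, and take its dominant frequency as the respiratory rate. The synthetic ECG and thresholds are placeholders.

```python
import numpy as np
from scipy.signal import find_peaks

def edr_respiratory_rate(ecg, fs):
    """Estimate respiratory rate (breaths/min) from a single-lead ECG trace."""
    peaks, _ = find_peaks(ecg, distance=int(0.4 * fs),   # >= 0.4 s between beats
                          height=0.5 * np.max(ecg))
    t, amp = peaks / fs, ecg[peaks]
    amp = amp - amp.mean()
    t_uniform = np.arange(t[0], t[-1], 0.25)             # resample amplitudes to 4 Hz
    edr = np.interp(t_uniform, t, amp)
    spectrum = np.abs(np.fft.rfft(edr * np.hanning(len(edr))))
    freqs = np.fft.rfftfreq(len(edr), d=0.25)
    band = (freqs >= 0.1) & (freqs <= 0.7)                # 6-42 breaths/min
    return 60.0 * freqs[band][spectrum[band].argmax()]

# Synthetic test: 1 Hz "heartbeats" amplitude-modulated at 0.25 Hz (15 breaths/min).
fs, dur = 250, 60
ecg = np.zeros(fs * dur)
beat_idx = (np.arange(dur) * fs).astype(int)
ecg[beat_idx] = 1.0 + 0.2 * np.sin(2 * np.pi * 0.25 * np.arange(dur))
print(round(edr_respiratory_rate(ecg, fs), 1))   # close to the true 15 breaths/min
```
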
  3. Evaluation of a methodology to validate National Death Index retrieval results among a cohort of U.S. service members.

    PubMed

    Skopp, Nancy A; Smolenski, Derek J; Schwesinger, Daniel A; Johnson, Christopher J; Metzger-Abamukong, Melinda J; Reger, Mark A

    2017-06-01

    Accurate knowledge of the vital status of individuals is critical to the validity of mortality research. National Death Index (NDI) and NDI-Plus are comprehensive epidemiological resources for mortality ascertainment and cause of death data that require additional user validation. Currently, there is a gap in methods to guide validation of NDI search results rendered for active duty service members. The purpose of this research was to adapt and evaluate the CDC National Program of Cancer Registries (NPCR) algorithm for mortality ascertainment in a large military cohort. We adapted and applied the NPCR algorithm to a cohort of 7088 service members on active duty at the time of death at some point between 2001 and 2009. We evaluated NDI validity and NDI-Plus diagnostic agreement against the Department of Defense's Armed Forces Medical Examiner System (AFMES). The overall sensitivity of the NDI to AFMES records after the application of the NPCR algorithm was 97.1%. Diagnostic estimates of measurement agreement between the NDI-Plus and the AFMES cause of death groups were high. The NDI and NDI-Plus can be successfully used with the NPCR algorithm to identify mortality and cause of death among active duty military cohort members who die in the United States. Published by Elsevier Inc.

  4. Student beats the teacher: deep neural networks for lateral ventricles segmentation in brain MR

    NASA Astrophysics Data System (ADS)

    Ghafoorian, Mohsen; Teuwen, Jonas; Manniesing, Rashindra; Leeuw, Frank-Erik d.; van Ginneken, Bram; Karssemeijer, Nico; Platel, Bram

    2018-03-01

    Ventricular volume and its progression are known to be linked to several brain diseases such as dementia and schizophrenia. Therefore accurate measurement of ventricle volume is vital for longitudinal studies on these disorders, making automated ventricle segmentation algorithms desirable. In the past few years, deep neural networks have been shown to outperform the classical models in many imaging domains. However, the success of deep networks is dependent on manually labeled data sets, which are expensive to acquire, especially for higher dimensional data in the medical domain. In this work, we show that deep neural networks can be trained on much-cheaper-to-acquire pseudo-labels (e.g., generated by other automated, less accurate methods) and still produce more accurate segmentations compared to the quality of the labels. To show this, we use noisy segmentation labels generated by a conventional region growing algorithm to train a deep network for lateral ventricle segmentation. Then on a large manually annotated test set, we show that the network significantly outperforms the conventional region growing algorithm which was used to produce the training labels for the network. Our experiments report a Dice Similarity Coefficient (DSC) of 0.874 for the trained network compared to 0.754 for the conventional region growing algorithm (p < 0.001).

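    For reference, the Dice Similarity Coefficient used in the comparison above can be computed as follows (toy masks for illustration only):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice Similarity Coefficient between two binary segmentation masks."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Toy 1-D "masks" for illustration.
print(dice_coefficient([1, 1, 0, 0, 1], [1, 0, 0, 1, 1]))  # 0.666...
```
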
  5. Classifying Volcanic Activity Using an Empirical Decision Making Algorithm

    NASA Astrophysics Data System (ADS)

    Junek, W. N.; Jones, W. L.; Woods, M. T.

    2012-12-01

    Detection and classification of developing volcanic activity is vital to eruption forecasting. Timely information regarding an impending eruption would aid civil authorities in determining the proper response to a developing crisis. In this presentation, volcanic activity is characterized using an event tree classifier and a suite of empirical statistical models derived through logistic regression. Forecasts are reported in terms of the United States Geological Survey (USGS) volcano alert level system. The algorithm employs multidisciplinary data (e.g., seismic, GPS, InSAR) acquired by various volcano monitoring systems and source modeling information to forecast the likelihood that an eruption, with a volcanic explosivity index (VEI) > 1, will occur within a quantitatively constrained area. Logistic models are constructed from a sparse and geographically diverse dataset assembled from a collection of historic volcanic unrest episodes. Bootstrapping techniques are applied to the training data to allow for the estimation of robust logistic model coefficients. Cross validation produced a series of receiver operating characteristic (ROC) curves with areas ranging between 0.78-0.81, which indicates the algorithm has good predictive capabilities. The ROC curves also allowed for the determination of a false positive rate and optimum detection for each stage of the algorithm. Forecasts for historic volcanic unrest episodes in North America and Iceland were computed and are consistent with the actual outcome of the events.

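    A minimal sketch of the kind of evaluation described above: fit a logistic model to binary eruption labels and report the cross-validated area under the ROC curve. The synthetic features and labels are placeholders, and the event-tree structure and bootstrapping steps are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)

# Synthetic multidisciplinary features (e.g., seismicity rate, deformation rate)
# and binary "unrest escalated to eruption (VEI > 1)" labels, for illustration only.
n = 300
X = rng.normal(size=(n, 2))
logits = 1.5 * X[:, 0] + 0.8 * X[:, 1] - 0.5
y = rng.uniform(size=n) < 1 / (1 + np.exp(-logits))

model = LogisticRegression()
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(auc.mean().round(2))   # cross-validated area under the ROC curve
```
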
  6. Algorithm for heart rate extraction in a novel wearable acoustic sensor

    PubMed Central

    Imtiaz, Syed Anas; Aguilar–Pelaez, Eduardo; Rodriguez–Villegas, Esther

    2015-01-01

    Phonocardiography is a widely used method of listening to the heart sounds and indicating the presence of cardiac abnormalities. Each heart cycle consists of two major sounds – S1 and S2 – that can be used to determine the heart rate. The conventional method of acoustic signal acquisition involves placing the sound sensor at the chest where this sound is most audible. Presented is a novel algorithm for the detection of S1 and S2 heart sounds and their use to extract the heart rate from signals acquired by a small sensor placed at the neck. This algorithm achieves an accuracy of 90.73% and 90.69%, with respect to the heart rate values provided by two commercial devices, evaluated on more than 38 h of data acquired from ten different subjects during sleep in a pilot clinical study. This is the largest dataset for acoustic heart sound classification and heart rate extraction in the literature to date. The algorithm in this study used signals from a sensor designed to monitor breathing. This shows that the same sensor and signal can be used to monitor both breathing and heart rate, making it highly useful for long-term wearable vital signs monitoring. PMID:26609401

  7. Differences in spirometry interpretation algorithms: influence on decision making among primary-care physicians

    PubMed Central

    He, Xiao-Ou; D’Urzo, Anthony; Jugovic, Pieter; Jhirad, Reuven; Sehgal, Prateek; Lilly, Evan

    2015-01-01

    Background: Spirometry is recommended for the diagnosis of asthma and chronic obstructive pulmonary disease (COPD) in international guidelines and may be useful for distinguishing asthma from COPD. Numerous spirometry interpretation algorithms (SIAs) are described in the literature, but no studies highlight how different SIAs may influence the interpretation of the same spirometric data. Aims: We examined how two different SIAs may influence decision making among primary-care physicians. Methods: Data for this initiative were gathered from 113 primary-care physicians attending accredited workshops in Canada between 2011 and 2013. Physicians were asked to interpret nine spirograms presented twice in random sequence using two different SIAs and touch pad technology for anonymous data recording. Results: We observed differences in the interpretation of spirograms using two different SIAs. When the pre-bronchodilator FEV1/FVC (forced expiratory volume in one second/forced vital capacity) ratio was >0.70, algorithm 1 led to a ‘normal’ interpretation (78% of physicians), whereas algorithm 2 prompted a bronchodilator challenge revealing changes in FEV1 that were consistent with asthma, an interpretation selected by 94% of physicians. When the FEV1/FVC ratio was <0.70 after bronchodilator challenge but FEV1 increased >12% and 200 ml, 76% suspected asthma and 10% suspected COPD using algorithm 1, whereas 74% suspected asthma versus COPD using algorithm 2 across five separate cases. The absence of a post-bronchodilator FEV1/FVC decision node in algorithm 1 did not permit consideration of possible COPD. Conclusions: This study suggests that differences in SIAs may influence decision making and lead clinicians to interpret the same spirometry data differently. PMID:25763716

  8. Orthogonal vector algorithm to obtain the solar vector using the single-scattering Rayleigh model.

    PubMed

    Wang, Yinlong; Chu, Jinkui; Zhang, Ran; Shi, Chao

    2018-02-01

    Information obtained from a polarization pattern in the sky provides many animals like insects and birds with vital long-distance navigation cues. The solar vector can be derived from the polarization pattern using the single-scattering Rayleigh model. In this paper, an orthogonal vector algorithm, which utilizes the redundancy of the single-scattering Rayleigh model, is proposed. We use the intersection angles between the polarization vectors as the main criteria in our algorithm. The assumption that all polarization vectors can be considered coplanar is used to simplify the three-dimensional (3D) problem with respect to the polarization vectors in our simulation. The surface-normal vector of the plane, which is determined by the polarization vectors after translation, represents the solar vector. Unfortunately, the two-directionality of the polarization vectors makes the resulting solar vector ambiguous. One important result of this study is, however, that this apparent disadvantage has no effect on the complexity of the algorithm. Furthermore, two other universal least-squares algorithms were investigated and compared. A device was then constructed, which consists of five polarized-light sensors as well as a 3D attitude sensor. Both the simulation and experimental data indicate that the orthogonal vector algorithms, if used with a suitable threshold, perform equally well or better than the other two algorithms. Our experimental data reveal that if the intersection angles between the polarization vectors are close to 90°, the solar-vector angle deviations are small. The data also support the assumption of coplanarity. During the 51 min experiment, the mean of the measured solar-vector angle deviations was about 0.242°, as predicted by our theoretical model.

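    A minimal sketch of the geometric idea described above: under the single-scattering Rayleigh model the measured polarization vectors are (approximately) coplanar and perpendicular to the solar vector, so the plane normal, taken here as the singular vector with the smallest singular value, estimates the sun direction up to sign (the two-directionality ambiguity noted in the abstract). The synthetic e-vectors and noise level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def solar_vector_from_polarization(evectors):
    """Least-squares plane normal of a set of 3-D polarization vectors."""
    V = np.asarray(evectors, float)
    V = V / np.linalg.norm(V, axis=1, keepdims=True)
    _, _, vh = np.linalg.svd(V)        # rows of vh are right singular vectors
    return vh[-1]                      # direction of the smallest singular value

# Synthetic test: a true sun direction and noisy perpendicular e-vectors.
sun = np.array([0.3, 0.4, 0.866])
sun /= np.linalg.norm(sun)
evecs = []
for _ in range(5):
    v = np.cross(sun, rng.normal(size=3))          # perpendicular to the sun vector
    evecs.append(v + 0.01 * rng.normal(size=3))    # small measurement noise
est = solar_vector_from_polarization(evecs)
print(np.degrees(np.arccos(abs(est @ sun))))       # angular deviation in degrees
```
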
  9. Applied Graph-Mining Algorithms to Study Biomolecular Interaction Networks

    PubMed Central

    2014-01-01

    Protein-protein interaction (PPI) networks carry vital information on the organization of molecular interactions in cellular systems. The identification of functionally relevant modules in PPI networks is one of the most important applications of biological network analysis. Computational analysis is becoming an indispensable tool to understand large-scale biomolecular interaction networks. Several types of computational methods have been developed and employed for the analysis of PPI networks. Of these computational methods, graph comparison and module detection are the two most commonly used strategies. This review summarizes current literature on graph kernel and graph alignment methods for graph comparison strategies, as well as module detection approaches including seed-and-extend, hierarchical clustering, optimization-based, probabilistic, and frequent subgraph methods. Herein, we provide a comprehensive review of the major algorithms employed under each theme, including our recently published frequent subgraph method, for detecting functional modules commonly shared across multiple cancer PPI networks. PMID:24800226

  10. Personalized recommendation based on preferential bidirectional mass diffusion

    NASA Astrophysics Data System (ADS)

    Chen, Guilin; Gao, Tianrun; Zhu, Xuzhen; Tian, Hui; Yang, Zhao

    2017-03-01

    Recommendation system provides a promising way to alleviate the dilemma of information overload. In physical dynamics, mass diffusion has been used to design effective recommendation algorithms on bipartite network. However, most of the previous studies focus overwhelmingly on unidirectional mass diffusion from collected objects to uncollected objects, while overlooking the opposite direction, leading to the risk of similarity estimation deviation and performance degradation. In addition, they are biased towards recommending popular objects which will not necessarily promote the accuracy but make the recommendation lack diversity and novelty that indeed contribute to the vitality of the system. To overcome the aforementioned disadvantages, we propose a preferential bidirectional mass diffusion (PBMD) algorithm by penalizing the weight of popular objects in bidirectional diffusion. Experiments are evaluated on three benchmark datasets (Movielens, Netflix and Amazon) by 10-fold cross validation, and results indicate that PBMD remarkably outperforms the mainstream methods in accuracy, diversity and novelty.

  11. Thruster-Specific Force Estimation and Trending of Cassini Hydrazine Thrusters at Saturn

    NASA Technical Reports Server (NTRS)

    Stupik, Joan; Burk, Thomas A.

    2016-01-01

    The Cassini spacecraft has been in orbit around Saturn since 2004 and has since been approved for both a first and second extended mission. As hardware reaches and exceeds its documented life expectancy, it becomes vital to closely monitor hardware performance. The performance of the 1-N hydrazine attitude control thrusters is especially important to study, because the spacecraft is currently operating on the back-up thruster branch. Early identification of hardware degradation allows more time to develop mitigation strategies. There is no direct measure of an individual thruster's thrust magnitude, but these values can be estimated by post-processing spacecraft telemetry. This paper develops an algorithm to calculate the individual thrust magnitudes using Euler's equation. The algorithm correctly shows the known degradation in the first thruster branch, validating the approach. Results for the current thruster branch show nominal performance as of August, 2015.

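    A minimal sketch of the estimation idea described above: compute the body torque from Euler's rigid-body equation, T = I*omega_dot + omega x (I*omega), using rate telemetry, then solve a least-squares system mapping individual thruster magnitudes through their moment arms and directions to that torque. The inertia matrix, thruster geometry and telemetry values below are placeholders, not Cassini parameters.

```python
import numpy as np

def body_torque(I, omega, omega_dot):
    """Euler's rigid-body equation: T = I*omega_dot + omega x (I*omega)."""
    return I @ omega_dot + np.cross(omega, I @ omega)

def estimate_thrusts(I, omega, omega_dot, arms, directions, on_flags):
    """Least-squares thrust magnitudes for the thrusters that were firing.

    arms, directions : (n_thrusters, 3) moment arms and unit thrust directions
    on_flags         : boolean mask of thrusters commanded on over the interval
    """
    T = body_torque(I, omega, omega_dot)
    # One unit of thrust from thruster i produces torque arms[i] x directions[i].
    M = np.cross(arms, directions)[on_flags].T          # (3, n_active)
    f, *_ = np.linalg.lstsq(M, T, rcond=None)
    return f

# Placeholder spacecraft parameters and telemetry, for illustration only.
I = np.diag([4000.0, 4000.0, 6000.0])                   # kg m^2
arms = np.array([[1.5, 1.0, -1.0], [-1.5, 1.0, -1.0], [1.5, -1.0, -1.0]])
dirs = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
omega = np.array([0.001, -0.002, 0.0005])               # rad/s
omega_dot = np.array([2e-5, -1e-5, 5e-6])               # rad/s^2
print(estimate_thrusts(I, omega, omega_dot, arms, dirs, np.array([True, True, False])))
```
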
  12. A Very Low Cost BCH Decoder for High Immunity of On-Chip Memories

    NASA Astrophysics Data System (ADS)

    Seo, Haejun; Han, Sehwan; Heo, Yoonseok; Cho, Taewon

    BCH (Bose-Chaudhuri-Hocquenghem) codes, a class of cyclic block codes, have very strong error-correcting ability, which is vital for error protection in memory systems. BCH codes admit several decoding algorithms; among them, the PGZ (Peterson-Gorenstein-Zierler) algorithm is advantageous because it corrects errors through simple calculations for a given t value. However, it becomes problematic when a division by zero occurs in the case ν ≠ t. In this paper, the circuit is simplified by suggesting a multi-mode hardware architecture that handles ν = 0~3. First, production cost is lower thanks to the smaller number of gates. Second, the reduced power consumption can lengthen the recharging period. The very low cost and simple datapath make our design a good choice as ECC (Error Correction Code/Circuit) in small-footprint SoC (System on Chip) memory systems.

  13. FPFH-based graph matching for 3D point cloud registration

    NASA Astrophysics Data System (ADS)

    Zhao, Jiapeng; Li, Chen; Tian, Lihua; Zhu, Jihua

    2018-04-01

    Correspondence detection is a vital step in point cloud registration and it can help to obtain a reliable initial alignment. In this paper, we put forward an advanced point feature-based graph matching algorithm to solve the initial alignment problem of rigid 3D point cloud registration with partial overlap. Specifically, Fast Point Feature Histograms are first used to determine the initial possible correspondences. Next, a new objective function is provided to make the graph matching more suitable for partially overlapping point clouds. The objective function is optimized by the simulated annealing algorithm to obtain the final group of correct correspondences. Finally, we present a novel set partitioning method which can transform the NP-hard optimization problem into an O(n^3)-solvable one. Experiments on the Stanford and UWA public data sets indicate that our method obtains better results in terms of both accuracy and time cost compared with other point cloud registration methods.

  14. Secured remote health monitoring system

    PubMed Central

    Ganesh Kumar, Pugalendhi

    2017-01-01

    Wireless medical sensor network is used in healthcare applications that have the collections of biosensors connected to a human body or emergency care unit to monitor the patient's physiological vital status. The real-time medical data collected using wearable medical sensors are transmitted to a diagnostic centre. The data generated from the sensors are aggregated at this centre and transmitted further to the doctor's personal digital assistant for diagnosis. The unauthorised access of one's health data may lead to misuse and legal complications while unreliable data transmission or storage may lead to life threatening risk to patients. So, this Letter combines the symmetric algorithm and attribute-based encryption to secure the data transmission and access control system for medical sensor network. In this work, existing systems and their algorithm are compared for identifying the best performance. The work also shows the graphical comparison of encryption time, decryption time and total computation time of the existing and the proposed systems. PMID:29383257

  15. Bayes plus Brass: Estimating Total Fertility for Many Small Areas from Sparse Census Data

    PubMed Central

    Schmertmann, Carl P.; Cavenaghi, Suzana M.; Assunção, Renato M.; Potter, Joseph E.

    2013-01-01

    Small-area fertility estimates are valuable for analysing demographic change, and important for local planning and population projection. In countries lacking complete vital registration, however, small-area estimates are possible only from sparse survey or census data that are potentially unreliable. Such estimation requires new methods for old problems: procedures must be automated if thousands of estimates are required, they must deal with extreme sampling variability in many areas, and they should also incorporate corrections for possible data errors. We present a two-step algorithm for estimating total fertility in such circumstances, and we illustrate by applying the method to 2000 Brazilian Census data for over five thousand municipalities. Our proposed algorithm first smoothes local age-specific rates using Empirical Bayes methods, and then applies a new variant of Brass’s P/F parity correction procedure that is robust under conditions of rapid fertility decline. PMID:24143946

  16. An Adaptive Sensor Mining Framework for Pervasive Computing Applications

    NASA Astrophysics Data System (ADS)

    Rashidi, Parisa; Cook, Diane J.

    Analyzing sensor data in pervasive computing applications brings unique challenges to the KDD community. The challenge is heightened when the underlying data source is dynamic and the patterns change. We introduce a new adaptive mining framework that detects patterns in sensor data, and more importantly, adapts to the changes in the underlying model. In our framework, the frequent and periodic patterns of data are first discovered by the Frequent and Periodic Pattern Miner (FPPM) algorithm; and then any changes in the discovered patterns over the lifetime of the system are discovered by the Pattern Adaptation Miner (PAM) algorithm, in order to adapt to the changing environment. This framework also captures vital context information present in pervasive computing applications, such as the startup triggers and temporal information. In this paper, we present a description of our mining framework and validate the approach using data collected in the CASAS smart home testbed.

  17. Basic management of medical emergencies: recognizing a patient's distress.

    PubMed

    Reed, Kenneth L

    2010-05-01

    Medical emergencies can happen in the dental office, possibly threatening a patient's life and hindering the delivery of dental care. Early recognition of medical emergencies begins at the first sign of symptoms. The basic algorithm for management of all medical emergencies is this: position (P), airway (A), breathing (B), circulation (C) and definitive treatment, differential diagnosis, drugs, defibrillation (D). The dentist places an unconscious patient in a supine position and comfortably positions a conscious patient. The dentist then assesses airway, breathing and circulation and, when necessary, supports the patient's vital functions. Drug therapy always is secondary to basic life support (that is, PABCD). Prompt recognition and efficient management of medical emergencies by a well-prepared dental team can increase the likelihood of a satisfactory outcome. The basic algorithm for managing medical emergencies is designed to ensure that the patient's brain receives a constant supply of blood containing oxygen.

  18. Development and validation of a simple algorithm for initiation of CPAP in neonates with respiratory distress in Malawi.

    PubMed

    Hundalani, Shilpa G; Richards-Kortum, Rebecca; Oden, Maria; Kawaza, Kondwani; Gest, Alfred; Molyneux, Elizabeth

    2015-07-01

    Low-cost bubble continuous positive airway pressure (bCPAP) systems have been shown to improve survival in neonates with respiratory distress, in developing countries including Malawi. District hospitals in Malawi implementing CPAP requested simple and reliable guidelines to enable healthcare workers with basic skills and minimal training to determine when treatment with CPAP is necessary. We developed and validated TRY (T: Tone is good, R: Respiratory Distress and Y=Yes) CPAP, a simple algorithm to identify neonates with respiratory distress who would benefit from CPAP. To validate the TRY CPAP algorithm for neonates with respiratory distress in a low-resource setting. We constructed an algorithm using a combination of vital signs, tone and birth weight to determine the need for CPAP in neonates with respiratory distress. Neonates admitted to the neonatal ward of Queen Elizabeth Central Hospital, in Blantyre, Malawi, were assessed in a prospective, cross-sectional study. Nurses and paediatricians-in-training assessed neonates to determine whether they required CPAP using the TRY CPAP algorithm. To establish the accuracy of the TRY CPAP algorithm in evaluating the need for CPAP, their assessment was compared with the decision of a neonatologist blinded to the TRY CPAP algorithm findings. 325 neonates were evaluated over a 2-month period; 13% were deemed to require CPAP by the neonatologist. The inter-rater reliability with the algorithm was 0.90 for nurses and 0.97 for paediatricians-in-training using the neonatologist's assessment as the reference standard. The TRY CPAP algorithm has the potential to be a simple and reliable tool to assist nurses and clinicians in identifying neonates who require treatment with CPAP in low-resource settings. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  19. Continuous non-contact vital sign monitoring in neonatal intensive care unit

    PubMed Central

    Guazzi, Alessandro; Jorge, João; Davis, Sara; Watkinson, Peter; Green, Gabrielle; Shenvi, Asha; McCormick, Kenny; Tarassenko, Lionel

    2014-01-01

    Current technologies to allow continuous monitoring of vital signs in pre-term infants in the hospital require adhesive electrodes or sensors to be in direct contact with the patient. These can cause stress, pain, and also damage the fragile skin of the infants. It has been established previously that the colour and volume changes in superficial blood vessels during the cardiac cycle can be measured using a digital video camera and ambient light, making it possible to obtain estimates of heart rate or breathing rate. Most of the papers in the literature on non-contact vital sign monitoring report results on adult healthy human volunteers in controlled environments for short periods of time. The authors' current clinical study involves the continuous monitoring of pre-term infants, for at least four consecutive days each, in the high-dependency care area of the Neonatal Intensive Care Unit (NICU) at the John Radcliffe Hospital in Oxford. The authors have further developed their video-based, non-contact monitoring methods to obtain continuous estimates of heart rate, respiratory rate and oxygen saturation for infants nursed in incubators. In this Letter, it is shown that continuous estimates of these three parameters can be computed with an accuracy which is clinically useful. During stable sections with minimal infant motion, the mean absolute error between the camera-derived estimates of heart rate and the reference value derived from the ECG is similar to the mean absolute error between the ECG-derived value and the heart rate value from a pulse oximeter. Continuous non-contact vital sign monitoring in the NICU using ambient light is feasible, and the authors have shown that clinically important events such as a bradycardia accompanied by a major desaturation can be identified with their algorithms for processing the video signal. PMID:26609384

  20. Machine learning and new vital signs monitoring in civilian en route care: A systematic review of the literature and future implications for the military.

    PubMed

    Liu, Nehemiah T; Salinas, Jose

    2016-11-01

    Although air transport medical services are today an integral part of trauma systems in most developed countries, to date, there are no reviews on recent innovations in civilian en route care. The purpose of this systematic review was to identify potential machine learning and new vital signs monitoring technologies in civilian en route care that could help close civilian and military capability gaps in monitoring and the early detection and treatment of various trauma injuries. MEDLINE, the Cochrane Database of Systematic Reviews, and citation review of relevant primary and review articles were searched for studies involving civilian en route care, air medical transport, and technologies from January 2005 to November 2015. Data were abstracted on study design, population, year, sponsors, innovation category, details of technologies, and outcomes. Thirteen observational studies involving civilian medical transport met inclusion criteria. Studies either focused on machine learning and software algorithms (n = 5), new vital signs monitoring (n = 6), or both (n = 2). Innovations involved continuous digital acquisition of physiologic data and parameter extraction. Importantly, all studies (n = 13) demonstrated improved outcomes where applicable and potential use during civilian and military en route care. However, almost all studies required further validation in prospective and/or randomized controlled trials. Potential machine learning technologies and monitoring of novel vital signs such as heart rate variability and complexity in civilian en route care could help enhance en route care for our nation's war fighters. In a complex global environment, they could potentially fill capability gaps such as monitoring and the early detection and treatment of various trauma injuries. However, the impact of these innovations and technologies will require further validation before widespread acceptance and prehospital use. Systematic review, level V.

  1. Continuous non-contact vital sign monitoring in neonatal intensive care unit.

    PubMed

    Villarroel, Mauricio; Guazzi, Alessandro; Jorge, João; Davis, Sara; Watkinson, Peter; Green, Gabrielle; Shenvi, Asha; McCormick, Kenny; Tarassenko, Lionel

    2014-09-01

    Current technologies to allow continuous monitoring of vital signs in pre-term infants in the hospital require adhesive electrodes or sensors to be in direct contact with the patient. These can cause stress, pain, and also damage the fragile skin of the infants. It has been established previously that the colour and volume changes in superficial blood vessels during the cardiac cycle can be measured using a digital video camera and ambient light, making it possible to obtain estimates of heart rate or breathing rate. Most of the papers in the literature on non-contact vital sign monitoring report results on adult healthy human volunteers in controlled environments for short periods of time. The authors' current clinical study involves the continuous monitoring of pre-term infants, for at least four consecutive days each, in the high-dependency care area of the Neonatal Intensive Care Unit (NICU) at the John Radcliffe Hospital in Oxford. The authors have further developed their video-based, non-contact monitoring methods to obtain continuous estimates of heart rate, respiratory rate and oxygen saturation for infants nursed in incubators. In this Letter, it is shown that continuous estimates of these three parameters can be computed with an accuracy which is clinically useful. During stable sections with minimal infant motion, the mean absolute error between the camera-derived estimates of heart rate and the reference value derived from the ECG is similar to the mean absolute error between the ECG-derived value and the heart rate value from a pulse oximeter. Continuous non-contact vital sign monitoring in the NICU using ambient light is feasible, and the authors have shown that clinically important events such as a bradycardia accompanied by a major desaturation can be identified with their algorithms for processing the video signal.

  2. Is respiration-induced variation in the photoplethysmogram associated with major hypovolemia in patients with acute traumatic injuries?

    PubMed

    Chen, Liangyou; Reisner, Andrew T; Gribok, Andrei; Reifman, Jaques

    2010-11-01

    Metrics related to respiration-induced waveform variation (RIWV) of the photoplethysmogram (PPG) have been widely associated with hypovolemia in mechanically ventilated patients and in controlled laboratory environments. In this retrospective study, we investigated whether PPG RIWV metrics have diagnostic value for patients with acute hemorrhagic hypovolemia in the prehospital environment. Photoplethysmogram waveforms and basic vital signs were recorded in trauma patients during prehospital transport. Retrospectively, we used automated algorithms to select patient records with all five basic vital signs and 45 s or longer continuous, clean PPG segments. From these segments, we identified the onset and peak of individual heartbeats and computed waveform variations in the beats' peaks and amplitudes: (1) as the range between the maximum and the minimum (max-min) values and (2) as their interquartile range (IQR). We evaluated their receiver operating characteristic (ROC) curves for major hemorrhage. Separately, we tested whether RIWV metrics carry independent information beyond basic vital signs by applying multivariate regression. In 344 patients, RIWV max-min yielded areas under the ROC curves (AUCs) not significantly better than a random AUC of 0.50. Respiration-induced waveform variation computed as IQR yielded ROC AUCs of 0.65 (95% confidence interval, 0.54-0.76) and of 0.64 (0.51-0.75) for peak and amplitude measures, respectively. The IQR metrics added independent information to basic vital signs (P < 0.05), but only moderately improved the overall AUC. Photoplethysmogram RIWV measured as IQR is preferable to max-min, and using PPG RIWV may enhance physiologic monitoring of spontaneously breathing patients outside strictly controlled laboratory environments.
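
    The two variability metrics described above (the max-min range and the interquartile range of per-beat values) can be sketched in a few lines of Python; the snippet below is a minimal illustration, with the beat detection assumed to have been performed already and the amplitude values purely hypothetical.

        import numpy as np

        def riwv_metrics(beat_values):
            """Return (max-min range, interquartile range) of per-beat values
            such as PPG peak heights or pulse amplitudes."""
            beat_values = np.asarray(beat_values, dtype=float)
            max_min = beat_values.max() - beat_values.min()
            iqr = np.percentile(beat_values, 75) - np.percentile(beat_values, 25)
            return max_min, iqr

        # Hypothetical per-beat pulse amplitudes from a 45 s clean PPG segment
        amplitudes = [1.02, 0.95, 1.10, 0.89, 1.05, 0.93, 1.08]
        print(riwv_metrics(amplitudes))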

  3. A compositional origin to ultralow-velocity zones

    NASA Astrophysics Data System (ADS)

    Brown, Samuel P.; Thorne, Michael S.; Miyagi, Lowell; Rost, Sebastian

    2015-02-01

    We analyzed vertical component short-period ScP waveforms for 26 earthquakes occurring in the Tonga-Fiji trench recorded at the Alice Springs Array in central Australia. These waveforms show strong precursory and postcursory seismic arrivals consistent with ultralow-velocity zone (ULVZ) layering beneath the Coral Sea. We used the Viterbi sparse spike detection method to measure differential travel times and amplitudes of the postcursor arrival ScSP and the precursor arrival SPcP relative to ScP. We compare our measurements to a database of 340,000 synthetic seismograms finding that these data are best fit by a ULVZ model with an S wave velocity reduction of 24%, a P wave velocity reduction of 23%, a thickness of 8.5 km, and a density increase of 6%. This 1:1 VS:VP velocity decrease is commensurate with a ULVZ compositional origin and is most consistent with highly iron enriched ferropericlase.

  4. Mission science value-cost savings from the Advanced Imaging Communication System (AICS)

    NASA Technical Reports Server (NTRS)

    Rice, R. F.

    1984-01-01

    An Advanced Imaging Communication System (AICS) was proposed in the mid-1970s as an alternative to the Voyager data/communication system architecture. The AICS achieved virtually error free communication with little loss in the downlink data rate by concatenating a powerful Reed-Solomon block code with the Voyager convolutionally coded, Viterbi decoded downlink channel. The clean channel allowed AICS sophisticated adaptive data compression techniques. Both Voyager and the Galileo mission have implemented AICS components, and the concatenated channel itself is heading for international standardization. An analysis that assigns a dollar value/cost savings to AICS mission performance gains is presented. A conservative value or savings of $3 million for Voyager, $4.5 million for Galileo, and as much as $7 to 9.5 million per mission for future projects such as the proposed Mariner Mar 2 series is shown.

  5. Performance of Trellis Coded 256 QAM super-multicarrier modem VLSI's for SDH interface outage-free digital microwave radio

    NASA Astrophysics Data System (ADS)

    Aikawa, Satoru; Nakamura, Yasuhisa; Takanashi, Hitoshi

    1994-02-01

    This paper describes the performance of an outage-free SDH (Synchronous Digital Hierarchy) interface 256 QAM modem. An outage-free DMR (Digital Microwave Radio) is achieved by a high-coding-gain trellis coded SPORT QAM and super-multicarrier modem. A new frame format and its associated circuits connect the outage-free modem to the SDH interface. The newly designed VLSIs are the key devices for developing the modem. For overall modem performance, BER (bit error rate) characteristics and equipment signatures are presented. A coding gain of 4.7 dB (at a BER of 10(exp -4)) is obtained using SPORT 256 QAM and Viterbi decoding. This coding gain is realized by trellis coding as well as by an increase in transmission rate. The roll-off factor is decreased to maintain the same frequency occupation and modulation level as an ordinary SDH 256 QAM modem.

  6. Study of a Tracking and Data Acquisition System (TDAS) in the 1990's

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Progress in concept definition studies, operational assessments, and technology demonstrations for the Tracking and Data Acquisition System (TDAS) is reported. The proposed TDAS will be the follow-on to the Tracking and Data Relay Satellite System and will function as a key element of the NASA End-to-End Data System, providing the tracking and data acquisition interface between user accessible data ports on Earth and the user's spaceborne equipment. Technical activities of the "spacecraft data system architecture" task and the "communication mission model" task are emphasized. The objective of the first task is to provide technology forecasts for sensor data handling, navigation and communication systems, and estimate corresponding costs. The second task is concerned with developing a parametric description of the required communication channels. Other tasks with significant activity include the "frequency plan and radio interference model" and the "Viterbi decoder/simulator study".

  7. Error-correction coding for digital communications

    NASA Astrophysics Data System (ADS)

    Clark, G. C., Jr.; Cain, J. B.

    This book is written for the design engineer who must build the coding and decoding equipment and for the communication system engineer who must incorporate this equipment into a system. It is also suitable as a senior-level or first-year graduate text for an introductory one-semester course in coding theory. Fundamental concepts of coding are discussed along with group codes, taking into account basic principles, practical constraints, performance computations, coding bounds, generalized parity check codes, polynomial codes, and important classes of group codes. Other topics explored are related to simple nonalgebraic decoding techniques for group codes, soft decision decoding of block codes, algebraic techniques for multiple error correction, the convolutional code structure and Viterbi decoding, syndrome decoding techniques, and sequential decoding techniques. System applications are also considered, giving attention to concatenated codes, coding for the white Gaussian noise channel, interleaver structures for coded systems, and coding for burst noise channels.

  8. Development and Validation of a Portable Platform for Deploying Decision-Support Algorithms in Prehospital Settings

    PubMed Central

    Reisner, A. T.; Khitrov, M. Y.; Chen, L.; Blood, A.; Wilkins, K.; Doyle, W.; Wilcox, S.; Denison, T.; Reifman, J.

    2013-01-01

    Background: Advanced decision-support capabilities for prehospital trauma care may prove effective at improving patient care. Such functionality would be possible if an analysis platform were connected to a transport vital-signs monitor. In practice, there are technical challenges to implementing such a system. Not only must each individual component be reliable, but, in addition, the connectivity between components must be reliable. Objective: We describe the development, validation, and deployment of the Automated Processing of Physiologic Registry for Assessment of Injury Severity (APPRAISE) platform, intended to serve as a test bed to help evaluate the performance of decision-support algorithms in a prehospital environment. Methods: We describe the hardware selected and the software implemented, and the procedures used for laboratory and field testing. Results: The APPRAISE platform met performance goals in both laboratory testing (using a vital-sign data simulator) and initial field testing. After its field testing, the platform has been in use on Boston MedFlight air ambulances since February of 2010. Conclusion: These experiences may prove informative to other technology developers and to healthcare stakeholders seeking to invest in connected electronic systems for prehospital as well as in-hospital use. Our experiences illustrate two sets of important questions: are the individual components reliable (e.g., physical integrity, power, core functionality, and end-user interaction) and is the connectivity between components reliable (e.g., communication protocols and the metadata necessary for data interpretation)? While all potential operational issues cannot be fully anticipated and eliminated during development, thoughtful design and phased testing steps can reduce, if not eliminate, technical surprises. PMID:24155791

  9. The Development of an Automated Device for Asthma Monitoring for Adolescents: Methodologic Approach and User Acceptability

    PubMed Central

    Miner, Sarah; Sterling, Mark; Halterman, Jill S; Fairbanks, Eileen

    2014-01-01

    Background: Many adolescents suffer serious asthma-related morbidity that can be prevented by adequate self-management of the disease. Accurate symptom monitoring by patients is the most fundamental antecedent to effective asthma management. Nonetheless, the adequacy and effectiveness of current methods of symptom self-monitoring have been challenged due to individuals' fallible symptom perception, poor adherence, and inadequate technique. Recognition of these limitations led to the development of an innovative device that can facilitate continuous and accurate monitoring of asthma symptoms with minimal disruption of daily routines, thus increasing acceptability to adolescents. Objective: The objectives of this study were to: (1) describe the development of a novel symptom monitoring device for teenagers (teens), and (2) assess their perspectives on the usability and acceptability of the device. Methods: Adolescents (13-17 years old) with and without asthma participated in the evolution of an automated device for asthma monitoring (ADAM), which comprised three phases, including development (Phase 1, n=37), validation/user acceptability (Phase 2, n=84), and post hoc validation (Phase 3, n=10). In Phase 1, symptom algorithms were identified based on the acoustic analysis of raw symptom sounds and programmed into a popular mobile system, the iPod. Phase 2 involved a 7-day trial of ADAM in vivo, and the evaluation of user acceptance using an acceptance survey and individual interviews. ADAM was further modified and enhanced in Phase 3. Results: Through ADAM, incoming audio data were digitized and processed in two steps involving the extraction of a sequence of descriptive feature vectors, and the processing of these sequences by a hidden Markov model-based Viterbi decoder to differentiate symptom sounds from background noise. The number and times of detected symptoms were stored and displayed in the device. The sensitivity (true positive rate) of the updated cough algorithm was 70% (21/30), and, on average, 2 coughs per hour were identified as false positives. ADAM also kept track of their activity level throughout the day using the mobile system's built-in accelerometer function. Overall, the device was well received by participants, who perceived it as attractive, convenient, and helpful. The participants recognized the potential benefits of the device in asthma care, and were eager to use it for their asthma management. Conclusions: ADAM can potentially automate daily symptom monitoring with minimal intrusiveness and maximal objectivity. The users' acceptance of the device, based on its recognized convenience, user-friendliness, and usefulness in increasing symptom awareness, underscores ADAM's potential to overcome the issues of symptom monitoring including poor adherence, inadequate technique, and poor symptom perception in adolescents. Further refinement of the algorithm is warranted to improve the accuracy of the device. Future study is also needed to assess the efficacy of the device in promoting self-management and asthma outcomes. PMID:25100184
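
    The hidden Markov model-based Viterbi decoding mentioned above can be illustrated with a minimal log-domain sketch; the two-state model (background versus cough-like sound), the quantised symbol alphabet and every probability below are illustrative assumptions, not parameters of ADAM.

        import numpy as np

        def viterbi(log_pi, log_A, log_B, obs):
            """Most likely state sequence for a discrete-emission HMM.
            log_pi: (S,) initial log-probabilities
            log_A:  (S, S) transition log-probabilities
            log_B:  (S, V) emission log-probabilities
            obs:    sequence of observation symbol indices
            """
            S = len(log_pi)
            T = len(obs)
            delta = np.full((T, S), -np.inf)
            back = np.zeros((T, S), dtype=int)
            delta[0] = log_pi + log_B[:, obs[0]]
            for t in range(1, T):
                for s in range(S):
                    scores = delta[t - 1] + log_A[:, s]
                    back[t, s] = np.argmax(scores)
                    delta[t, s] = scores[back[t, s]] + log_B[s, obs[t]]
            path = [int(np.argmax(delta[-1]))]
            for t in range(T - 1, 0, -1):
                path.append(back[t, path[-1]])
            return path[::-1]

        # Toy two-state model: 0 = background noise, 1 = cough-like sound.
        pi = np.log([0.9, 0.1])
        A = np.log([[0.95, 0.05], [0.3, 0.7]])
        B = np.log([[0.8, 0.2], [0.1, 0.9]])   # two quantised feature symbols
        print(viterbi(pi, A, B, [0, 0, 1, 1, 0]))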

  10. Performance of the 2015 International Task Force Consensus Statement Risk Stratification Algorithm for Implantable Cardioverter-Defibrillator Placement in Arrhythmogenic Right Ventricular Dysplasia/Cardiomyopathy.

    PubMed

    Orgeron, Gabriela M; Te Riele, Anneline; Tichnell, Crystal; Wang, Weijia; Murray, Brittney; Bhonsale, Aditya; Judge, Daniel P; Kamel, Ihab R; Zimmerman, Stephan L; Tandri, Harikrishna; Calkins, Hugh; James, Cynthia A

    2018-02-01

    Ventricular arrhythmias are a feared complication of arrhythmogenic right ventricular dysplasia/cardiomyopathy. In 2015, an International Task Force Consensus Statement proposed a risk stratification algorithm for implantable cardioverter-defibrillator placement in arrhythmogenic right ventricular dysplasia/cardiomyopathy. To evaluate performance of the algorithm, 365 arrhythmogenic right ventricular dysplasia/cardiomyopathy patients were classified as having a Class I, IIa, IIb, or III indication per the algorithm at baseline. Survival free from sustained ventricular arrhythmia (VT/VF) in follow-up was the primary outcome. Incidence of ventricular fibrillation/flutter cycle length <240 ms was also assessed. Two hundred twenty-four (61%) patients had a Class I implantable cardioverter-defibrillator indication; 80 (22%), Class IIa; 54 (15%), Class IIb; and 7 (2%), Class III. During a median 4.2 (interquartile range, 1.7-8.4)-year follow-up, 190 (52%) patients had VT/VF and 60 (16%) had ventricular fibrillation/flutter. Although the algorithm appropriately differentiated risk of VT/VF, incidence of VT/VF was underestimated (observed versus expected: 29.6 [95% confidence interval, 25.2-34.0] versus >10%/year Class I; 15.5 [confidence interval 11.1-21.6] versus 1% to 10%/year Class IIa). In addition, the algorithm did not differentiate survival free from ventricular fibrillation/flutter between Class I and IIa patients ( P =0.97) or for VT/VF in Class I and IIa primary prevention patients ( P =0.22). Adding Holter results (<1000 premature ventricular contractions/24 hours) to International Task Force Consensus classification differentiated risks. While the algorithm differentiates arrhythmic risk well overall, it did not distinguish ventricular fibrillation/flutter risks of patients with Class I and IIa implantable cardioverter-defibrillator indications. Limited differentiation was seen for primary prevention cases. As these are vital uncertainties in clinical decision-making, refinements to the algorithm are suggested prior to implementation. © 2018 American Heart Association, Inc.

  11. Hessian-LoG filtering for enhancement and detection of photoreceptor cells in adaptive optics retinal images.

    PubMed

    Lazareva, Anfisa; Liatsis, Panos; Rauscher, Franziska G

    2016-01-01

    Automated analysis of retinal images plays a vital role in the examination, diagnosis, and prognosis of healthy and pathological retinas. Retinal disorders and the associated visual loss can be interpreted via quantitative correlations, based on measurements of photoreceptor loss. Therefore, it is important to develop reliable tools for identification of photoreceptor cells. In this paper, an automated algorithm is proposed, based on the use of the Hessian-Laplacian of Gaussian filter, which allows enhancement and detection of photoreceptor cells. The performance of the proposed technique is evaluated on both synthetic and high-resolution retinal images, in terms of packing density. The results on the synthetic data were compared against ground truth as well as cone counts obtained by the Li and Roorda algorithm. For the synthetic datasets, our method showed an average detection accuracy of 98.8%, compared to 93.9% for the Li and Roorda approach. The packing density estimates calculated on the retinal datasets were validated against manual counts and the results obtained by a proprietary software from Imagine Eyes and the Li and Roorda algorithm. Among the tested methods, the proposed approach showed the closest agreement with manual counting.
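
    As a rough illustration of the Laplacian-of-Gaussian part of such a pipeline (the Hessian analysis is omitted), the following sketch enhances bright blob-like structures and returns candidate peak locations; the sigma, neighbourhood size and threshold are assumed values, not those of the published method.

        import numpy as np
        from scipy import ndimage

        def log_blob_candidates(image, sigma=2.0, threshold=0.05):
            """Enhance bright blob-like structures with a Laplacian-of-Gaussian
            filter and return candidate (row, col) peak coordinates."""
            img = image.astype(float)
            # Bright blobs give strongly negative LoG responses, so negate.
            response = -ndimage.gaussian_laplace(img, sigma=sigma)
            # Keep local maxima of the response above a threshold.
            local_max = (response == ndimage.maximum_filter(response, size=5))
            peaks = np.argwhere(local_max & (response > threshold))
            return peaks

        # Hypothetical usage on a synthetic image with one bright spot.
        test = np.zeros((64, 64))
        test[30:34, 40:44] = 1.0
        print(log_blob_candidates(test)[:5])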

  12. An Examination of the Spatial Distribution of Carbon Dioxide and Systematic Errors

    NASA Technical Reports Server (NTRS)

    Coffey, Brennan; Gunson, Mike; Frankenberg, Christian; Osterman, Greg

    2011-01-01

    The industrial period and modern age are characterized by combustion of coal, oil, and natural gas for primary energy and transportation, leading to rising levels of atmospheric CO2. This increase, which is being carefully measured, has ramifications throughout the biological world. Through remote sensing, it is possible to measure how many molecules of CO2 lie in a defined column of air. However, other gases and particles, such as aerosols and water, are present in the atmosphere and make such measurements more complicated. Understanding the detailed geometry and path length of the observation is vital to computing the concentration of CO2. By comparing these satellite readings with ground-truth data (TCCON), the systematic errors arising from these sources can be assessed. Once the error is understood, it can be corrected for in the retrieval algorithms to create a dataset that is closer to the TCCON measurements. Using this process, the algorithms are being developed to reduce bias to within 0.1% of the true value worldwide. At this stage, the accuracy is within 1%, but by correcting small errors contained in the algorithms, such as accounting for the scattering of sunlight, the desired accuracy can be achieved.

  13. Blood glucose level prediction based on support vector regression using mobile platforms.

    PubMed

    Reymann, Maximilian P; Dorschky, Eva; Groh, Benjamin H; Martindale, Christine; Blank, Peter; Eskofier, Bjoern M

    2016-08-01

    The correct treatment of diabetes is vital to a patient's health: Staying within defined blood glucose levels prevents dangerous short- and long-term effects on the body. Mobile devices informing patients about their future blood glucose levels could enable them to take counter-measures to prevent hypo or hyper periods. Previous work addressed this challenge by predicting the blood glucose levels using regression models. However, these approaches required a physiological model, representing the human body's response to insulin and glucose intake, or are not directly applicable to mobile platforms (smart phones, tablets). In this paper, we propose an algorithm for mobile platforms to predict blood glucose levels without the need for a physiological model. Using an online software simulator program, we trained a Support Vector Regression (SVR) model and exported the parameter settings to our mobile platform. The prediction accuracy of our mobile platform was evaluated with pre-recorded data of a type 1 diabetes patient. The blood glucose level was predicted with an error of 19 % compared to the true value. Considering the permitted error of commercially used devices of 15 %, our algorithm is the basis for further development of mobile prediction algorithms.
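
    A minimal sketch of the general idea (support vector regression on lagged glucose readings, without a physiological model) is shown below using scikit-learn; the feature construction, kernel, hyperparameters and synthetic trace are assumptions rather than the settings used in the paper.

        import numpy as np
        from sklearn.svm import SVR

        def make_lagged(series, n_lags=6, horizon=6):
            """Build (X, y) where each row holds n_lags past readings and the
            target is the reading `horizon` steps ahead (e.g. 30 min at 5-min sampling)."""
            X, y = [], []
            for i in range(len(series) - n_lags - horizon + 1):
                X.append(series[i:i + n_lags])
                y.append(series[i + n_lags + horizon - 1])
            return np.array(X), np.array(y)

        # Hypothetical CGM trace in mg/dL (real training data would be patient recordings).
        glucose = 120 + 30 * np.sin(np.linspace(0, 12, 300)) + np.random.normal(0, 5, 300)
        X, y = make_lagged(glucose)
        model = SVR(kernel='rbf', C=10.0, epsilon=1.0).fit(X[:-50], y[:-50])
        pred = model.predict(X[-50:])
        print("mean absolute error (mg/dL):", np.abs(pred - y[-50:]).mean())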

  14. A novel algorithm for thermal image encryption.

    PubMed

    Hussain, Iqtadar; Anees, Amir; Algarni, Abdulmohsen

    2018-04-16

    Thermal images play a vital role at nuclear plants, power stations, forensic laboratories, biological research facilities, and in petroleum product extraction, so the security of thermal images is very important. Image data have some unique features, such as intensity, contrast, homogeneity, entropy and correlation among pixels, which make image encryption somewhat trickier than other forms of encryption; with conventional image encryption schemes it is normally hard to handle these features. Cryptographers have therefore paid attention to attractive properties of chaotic maps, such as randomness and sensitivity, to build novel cryptosystems, and recently proposed image encryption techniques increasingly depend on the application of chaotic maps. This paper proposes an image encryption algorithm based on the Chebyshev chaotic map and substitution boxes derived from the S8 symmetric group of permutations. First, the parameters of the chaotic Chebyshev map are chosen as a secret key to confuse the original image. Then, the plaintext image is encrypted by the method generated from the substitution boxes and the Chebyshev map. This process yields a ciphertext image that is thoroughly confused and diffused. The outcomes of standard experiments, key sensitivity tests and statistical analysis confirm that the proposed algorithm offers a safe and efficient approach for real-time image encryption.
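
    Only the Chebyshev-map keystream idea is sketched below (the S8 substitution-box stage is omitted); the map parameters, byte quantisation and XOR diffusion step are illustrative assumptions, not the published scheme.

        import numpy as np

        def chebyshev_keystream(x0, k, n_bytes):
            """Iterate the Chebyshev map x_{n+1} = cos(k * arccos(x_n)) on [-1, 1]
            and quantise each state to one byte of keystream."""
            x = x0
            out = np.empty(n_bytes, dtype=np.uint8)
            for i in range(n_bytes):
                x = np.cos(k * np.arccos(x))
                out[i] = int(((x + 1.0) / 2.0) * 255) & 0xFF
            return out

        def xor_image(image_bytes, x0=0.412, k=4.0):
            """XOR a flattened 8-bit image with the chaotic keystream (diffusion step only)."""
            ks = chebyshev_keystream(x0, k, image_bytes.size)
            return image_bytes ^ ks

        img = np.random.randint(0, 256, size=(8, 8), dtype=np.uint8)
        cipher = xor_image(img.ravel()).reshape(img.shape)
        restored = xor_image(cipher.ravel()).reshape(img.shape)   # same key recovers the image
        assert np.array_equal(img, restored)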

  15. Optimal Filter Estimation for Lucas-Kanade Optical Flow

    PubMed Central

    Sharmin, Nusrat; Brad, Remus

    2012-01-01

    Optical flow algorithms offer a way to estimate motion from a sequence of images. The computation of optical flow plays a key role in several computer vision applications, including motion detection and segmentation, frame interpolation, three-dimensional scene reconstruction, robot navigation and video compression. In gradient-based optical flow implementations, the pre-filtering step plays a vital role, not only for the accurate computation of optical flow, but also for improving performance. Generally, in optical flow computation, filtering is applied first to the original input images and the images are resized afterwards. In this paper, we propose an image filtering approach as a pre-processing step for the pyramidal Lucas-Kanade optical flow algorithm. Based on a study of different types of filtering methods applied to the Iterative Refined Lucas-Kanade, we identified the best filtering practice. As the Gaussian smoothing filter was selected, an empirical approach for estimating the Gaussian variance was introduced. Tested on the Middlebury image sequences, a correlation between the image intensity value and the standard deviation of the Gaussian function was established. Finally, we found that our selection method offers better performance for the Lucas-Kanade optical flow algorithm.
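
    A minimal sketch of the overall pipeline (Gaussian pre-smoothing followed by pyramidal Lucas-Kanade tracking) is given below with OpenCV; the smoothing sigma and tracking parameters are assumed values, not the empirically estimated ones reported in the paper.

        import cv2
        import numpy as np

        def lk_flow_with_prefilter(prev_gray, next_gray, sigma=1.5):
            """Gaussian pre-smoothing followed by pyramidal Lucas-Kanade tracking."""
            prev_s = cv2.GaussianBlur(prev_gray, (0, 0), sigma)
            next_s = cv2.GaussianBlur(next_gray, (0, 0), sigma)
            pts = cv2.goodFeaturesToTrack(prev_s, maxCorners=200,
                                          qualityLevel=0.01, minDistance=7)
            next_pts, status, err = cv2.calcOpticalFlowPyrLK(
                prev_s, next_s, pts, None, winSize=(21, 21), maxLevel=3)
            good = status.ravel() == 1
            return pts[good].reshape(-1, 2), next_pts[good].reshape(-1, 2)

        # Hypothetical usage with two consecutive grayscale frames (uint8 arrays).
        frame0 = np.random.randint(0, 256, (120, 160), dtype=np.uint8)
        frame1 = np.roll(frame0, 2, axis=1)       # simulated 2-pixel horizontal shift
        p0, p1 = lk_flow_with_prefilter(frame0, frame1)
        print("mean displacement:", (p1 - p0).mean(axis=0))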

  16. Determination of the actual evapotranspiration by using remote sensing methods

    NASA Astrophysics Data System (ADS)

    Bora, Eser

    2017-10-01

    Evapotranspiration is crucial for determining irrigation amounts and for effective water management planning. It is also vital for agricultural drought management, and determining the actual evapotranspiration in a region is critical for early drought warning systems. The main objective of this study was to assess the accuracy of a remote sensing method (METRIC) by calibrating it against Bowen ratio observations taken at the same time. The research was carried out in the west of the Marmara Region, Turkey. A Landsat 5 image acquired on 11 June 2010 was used to drive the METRIC algorithm and estimate the actual evapotranspiration, and the same date was used for calibration against the available terrestrial Bowen ratio observations. The Landsat images were obtained from earthexplorer.usgs.gov, and the Bowen ratio results were taken from a micrometeorological station. The energy balance parameters (net radiation, soil heat flux and latent heat flux) computed by the METRIC algorithm were compared with the Bowen ratio measurements at the image acquisition time, and the results were found to be very close to each other.
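
    At the core of METRIC-style methods, latent heat flux is obtained as the residual of the surface energy balance and then converted to an evapotranspiration rate; the sketch below shows only that residual step, with the latent heat of vaporisation taken as a standard approximate constant and the flux values purely illustrative.

        def latent_heat_flux(net_radiation, soil_heat_flux, sensible_heat_flux):
            """Surface energy balance residual, all fluxes in W/m^2: LE = Rn - G - H."""
            return net_radiation - soil_heat_flux - sensible_heat_flux

        def instantaneous_et(le_w_m2, lambda_v=2.45e6):
            """Convert latent heat flux (W/m^2) to an evapotranspiration rate in mm/h,
            assuming a latent heat of vaporisation of ~2.45 MJ/kg."""
            return le_w_m2 * 3600.0 / lambda_v

        # Illustrative midday values for a vegetated pixel (not measured data).
        le = latent_heat_flux(net_radiation=620.0, soil_heat_flux=80.0, sensible_heat_flux=180.0)
        print(le, "W/m^2 ->", round(instantaneous_et(le), 2), "mm/h")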

  17. Optimal Parameter Exploration for Online Change-Point Detection in Activity Monitoring Using Genetic Algorithms

    PubMed Central

    Khan, Naveed; McClean, Sally; Zhang, Shuai; Nugent, Chris

    2016-01-01

    In recent years, smart phones with inbuilt sensors have become popular devices to facilitate activity recognition. The sensors capture a large amount of data, containing meaningful events, in a short period of time. The change points in this data are used to specify transitions to distinct events and can be used in various scenarios such as identifying change in a patient’s vital signs in the medical domain or requesting activity labels for generating real-world labeled activity datasets. Our work focuses on change-point detection to identify a transition from one activity to another. Within this paper, we extend our previous work on multivariate exponentially weighted moving average (MEWMA) algorithm by using a genetic algorithm (GA) to identify the optimal set of parameters for online change-point detection. The proposed technique finds the maximum accuracy and F_measure by optimizing the different parameters of the MEWMA, which subsequently identifies the exact location of the change point from an existing activity to a new one. Optimal parameter selection facilitates an algorithm to detect accurate change points and minimize false alarms. Results have been evaluated based on two real datasets of accelerometer data collected from a set of different activities from two users, with a high degree of accuracy from 99.4% to 99.8% and F_measure of up to 66.7%. PMID:27792177
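
    A minimal sketch of the multivariate EWMA statistic on which such change-point detection rests is given below; the smoothing parameter, control limit and reference data are assumptions (in the paper these parameters are tuned by a genetic algorithm).

        import numpy as np

        def mewma_statistics(X, reference, lam=0.2):
            """Multivariate EWMA T^2 statistic for each row of X, using the mean and
            covariance of a reference (in-control) segment; a change point is flagged
            where the statistic exceeds a control limit."""
            X = np.asarray(X, dtype=float)
            mean = reference.mean(axis=0)
            sigma = np.cov(reference, rowvar=False)
            z = np.zeros(X.shape[1])
            stats = []
            for t, x in enumerate(X, start=1):
                z = lam * (x - mean) + (1 - lam) * z
                # Covariance of the EWMA vector at time t
                sigma_z = (lam / (2 - lam)) * (1 - (1 - lam) ** (2 * t)) * sigma
                stats.append(float(z @ np.linalg.solve(sigma_z, z)))
            return np.array(stats)

        # Accelerometer-like toy data with a mean shift halfway through (illustrative only).
        rng = np.random.default_rng(0)
        reference = rng.normal(0, 1, (200, 3))
        X = np.vstack([rng.normal(0, 1, (100, 3)), rng.normal(1.5, 1, (100, 3))])
        stats = mewma_statistics(X, reference)
        print("first exceedance of limit 12:", int(np.argmax(stats > 12)))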

  18. Convergence of the Ponderomotive Guiding Center approximation in the LWFA

    NASA Astrophysics Data System (ADS)

    Silva, Thales; Vieira, Jorge; Helm, Anton; Fonseca, Ricardo; Silva, Luis

    2017-10-01

    Plasma accelerators have arisen as potential candidates for future accelerator technology in the last few decades because of their predicted compactness and low cost. One of the proposed designs for plasma accelerators is based on Laser Wakefield Acceleration (LWFA). However, simulations of such systems have to resolve the laser wavelength, which is orders of magnitude shorter than the plasma wavelength. In this context, the Ponderomotive Guiding Center (PGC) algorithm for particle-in-cell (PIC) simulations is a potent tool. The laser is approximated by its envelope, which leads to a speed-up of around 100 times because the laser wavelength is not resolved. The plasma response is well understood, and comparison with the full PIC code shows excellent agreement. However, for LWFA, the convergence of the self-injected beam parameters, such as energy and charge, has not been studied before and is of vital importance for using the algorithm to predict beam parameters. Our goal is a thorough investigation of the stability and convergence of the algorithm in situations of experimental relevance for LWFA. To this end, we perform simulations using the PGC algorithm implemented in the PIC code OSIRIS. To verify the PGC predictions, we compare the results with full PIC simulations. This project has received funding from the European Union's Horizon 2020 research and innovation programme under Grant agreement No 653782.

  19. The development of a novel knowledge-based weaning algorithm using pulmonary parameters: a simulation study.

    PubMed

    Guler, Hasan; Kilic, Ugur

    2018-03-01

    Weaning is important for patients and clinicians, who have to determine the correct weaning time so that patients do not become dependent on the ventilator. Several predictors have already been developed, such as the rapid shallow breathing index (RSBI), the pressure time index (PTI), and the Jabour weaning index, but many important dimensions of weaning are sometimes ignored by these predictors. This work is an attempt to develop a knowledge-based weaning process via fuzzy logic that eliminates the disadvantages of the present predictors. Sixteen vital parameters listed in the published literature were used to determine the weaning decisions in the developed system. Because this is too many individual parameters to handle separately, related parameters were grouped together to determine acid-base balance, adequate oxygenation, adequate pulmonary function, hemodynamic stability, and the psychological status of the patients. To test the performance of the developed algorithm, 20 clinical scenarios were generated using Monte Carlo simulations and the Gaussian distribution method. The developed knowledge-based algorithm and the RSBI predictor were applied to the generated scenarios. Finally, a clinician evaluated each clinical scenario independently. The Student's t test was used to assess the statistical differences between the developed weaning algorithm, RSBI, and the clinician's evaluation. According to the results obtained, there were no statistical differences between the proposed methods and the clinician evaluations.
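
    For context, the rapid shallow breathing index mentioned above is commonly computed as respiratory rate divided by tidal volume in litres; the sketch below uses the conventional threshold of about 105 breaths/min/L, which is an assumption here rather than a value taken from this study.

        def rapid_shallow_breathing_index(resp_rate_bpm, tidal_volume_ml):
            """RSBI = respiratory rate (breaths/min) / tidal volume (L)."""
            return resp_rate_bpm / (tidal_volume_ml / 1000.0)

        def rsbi_suggests_weaning(resp_rate_bpm, tidal_volume_ml, threshold=105.0):
            """Values below roughly 105 breaths/min/L are conventionally taken to favour
            weaning readiness; the threshold here is an assumption, not from the paper."""
            return rapid_shallow_breathing_index(resp_rate_bpm, tidal_volume_ml) < threshold

        print(rapid_shallow_breathing_index(24, 400))   # 60.0 -> favours weaning readiness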

  20. An Algorithm to Identify Compounded Non-Sterile Products that Can Be Formulated on a Commercial Scale or Imported to Promote Safer Medication Use in Children

    PubMed Central

    Bhatt-Mehta, Varsha; MacArthur, Robert B.; Löbenberg, Raimar; Cies, Jeffrey J.; Cernak, Ibolja; Parrish, Richard H.

    2015-01-01

    The lack of commercially-available pediatric drug products and dosage forms is well-known. A group of clinicians and scientists with a common interest in pediatric drug development and medicines-use systems developed a practical framework for identifying a list of active pharmaceutical ingredients (APIs) with the greatest market potential for development to use in pediatric patients. Reliable and reproducible evidence-based drug formulations designed for use in pediatric patients are needed vitally, otherwise safe and consistent clinical practices and outcomes assessments will continue to be difficult to ascertain. Identification of a prioritized list of candidate APIs for oral formulation using the described algorithm provides a broader integrated clinical, scientific, regulatory, and market basis to allow for more reliable dosage forms and safer, effective medicines use in children of all ages. Group members derived a list of candidate API molecules by factoring in a number of pharmacotherapeutic, scientific, manufacturing, and regulatory variables into the selection algorithm that were absent in other rubrics. These additions will assist in identifying and categorizing prime API candidates suitable for oral formulation development. Moreover, the developed algorithm aids in prioritizing useful APIs with finished oral liquid dosage forms available from other countries with direct importation opportunities to North America and beyond. PMID:28975916

  1. Automatic identification and location technology of glass insulator self-shattering

    NASA Astrophysics Data System (ADS)

    Huang, Xinbo; Zhang, Huiying; Zhang, Ye

    2017-11-01

    Transmission line insulators are among the most important pieces of infrastructure and are vital for ensuring the safe operation of transmission lines under complex and harsh operating conditions. Glass insulators often self-shatter, but the available identification methods are inefficient and unreliable. An automatic identification and localization technology for self-shattered glass insulators is therefore proposed, consisting of cameras installed on tower video monitoring devices or unmanned aerial vehicles, a 4G/OPGW network, and a monitoring center, where the identification and localization algorithm is embedded in the expert software. First, images of insulators are captured by the cameras and processed to identify the insulator string region using the presented insulator string identification algorithm. Second, according to the characteristics of the insulator string image, a mathematical model of the insulator string is established to estimate the direction and length of the sliding blocks. Third, local binary pattern histograms of the template and of each sliding block are extracted, from which the self-shattered insulator can be recognized and located. Finally, a series of experiments is carried out to verify the effectiveness of the algorithm. For single-insulator images, the Ac, Pr, and Rc of the algorithm are 94.5%, 92.38%, and 96.78%, respectively. For double-insulator images, Ac, Pr, and Rc are 90.00%, 86.36%, and 93.23%, respectively.

  2. Efficient parallel and out of core algorithms for constructing large bi-directed de Bruijn graphs.

    PubMed

    Kundeti, Vamsi K; Rajasekaran, Sanguthevar; Dinh, Hieu; Vaughn, Matthew; Thapar, Vishal

    2010-11-15

    Assembling genomic sequences from a set of overlapping reads is one of the most fundamental problems in computational biology. Algorithms addressing the assembly problem fall into two broad categories, based on the data structures they employ. The first class uses an overlap/string graph and the second type uses a de Bruijn graph. However with the recent advances in short read sequencing technology, de Bruijn graph based algorithms seem to play a vital role in practice. Efficient algorithms for building these massive de Bruijn graphs are very essential in large sequencing projects based on short reads. In an earlier work, an O(n/p) time parallel algorithm has been given for this problem. Here n is the size of the input and p is the number of processors. This algorithm enumerates all possible bi-directed edges which can overlap with a node and ends up generating Θ(nΣ) messages (Σ being the size of the alphabet). In this paper we present a Θ(n/p) time parallel algorithm with a communication complexity that is equal to that of parallel sorting and is not sensitive to Σ. The generality of our algorithm makes it very easy to extend it even to the out-of-core model and in this case it has an optimal I/O complexity of Θ((n log(n/B)) / (B log(M/B))) (M being the main memory size and B being the size of the disk block). We demonstrate the scalability of our parallel algorithm on a SGI/Altix computer. A comparison of our algorithm with the previous approaches reveals that our algorithm is faster--both asymptotically and practically. We demonstrate the scalability of our sequential out-of-core algorithm by comparing it with the algorithm used by VELVET to build the bi-directed de Bruijn graph. Our experiments reveal that our algorithm can build the graph with a constant amount of memory, which clearly outperforms VELVET. We also provide efficient algorithms for the bi-directed chain compaction problem. The bi-directed de Bruijn graph is a fundamental data structure for any sequence assembly program based on the Eulerian approach. Our algorithms for constructing bi-directed de Bruijn graphs are efficient in parallel and out-of-core settings. These algorithms can be used in building large scale bi-directed de Bruijn graphs. Furthermore, our algorithms do not employ any all-to-all communications in a parallel setting and perform better than the prior algorithms. Finally our out-of-core algorithm is extremely memory efficient and can replace the existing graph construction algorithm in VELVET.
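
    A minimal, serial, in-memory sketch of building a bi-directed de Bruijn graph with canonical k-mers is given below; it illustrates the data structure only and none of the parallel or out-of-core machinery described in the paper.

        from collections import defaultdict

        COMP = str.maketrans("ACGT", "TGCA")

        def canonical(kmer):
            """Canonical form of a k-mer: the lexicographic minimum of the k-mer
            and its reverse complement (one node represents both orientations)."""
            rc = kmer.translate(COMP)[::-1]
            return min(kmer, rc)

        def build_bidirected_de_bruijn(reads, k):
            """Adjacency map from canonical (k-1)-mers to canonical (k-1)-mer successors."""
            graph = defaultdict(set)
            for read in reads:
                for i in range(len(read) - k + 1):
                    kmer = read[i:i + k]
                    left, right = kmer[:-1], kmer[1:]
                    graph[canonical(left)].add(canonical(right))
            return graph

        reads = ["ACGTACGT", "CGTACGTT"]
        for node, nbrs in build_bidirected_de_bruijn(reads, 4).items():
            print(node, "->", sorted(nbrs))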

  3. Nonlinear inversion of potential-field data using a hybrid-encoding genetic algorithm

    USGS Publications Warehouse

    Chen, C.; Xia, J.; Liu, J.; Feng, G.

    2006-01-01

    Using a genetic algorithm to solve an inverse problem of complex nonlinear geophysical equations is advantageous because it does not require computer gradients of models or "good" initial models. The multi-point search of a genetic algorithm makes it easier to find the globally optimal solution while avoiding falling into a local extremum. As is the case in other optimization approaches, the search efficiency for a genetic algorithm is vital in finding desired solutions successfully in a multi-dimensional model space. A binary-encoding genetic algorithm is hardly ever used to resolve an optimization problem such as a simple geophysical inversion with only three unknowns. The encoding mechanism, genetic operators, and population size of the genetic algorithm greatly affect search processes in the evolution. It is clear that improved operators and proper population size promote the convergence. Nevertheless, not all genetic operations perform perfectly while searching under either a uniform binary or a decimal encoding system. With the binary encoding mechanism, the crossover scheme may produce more new individuals than with the decimal encoding. On the other hand, the mutation scheme in a decimal encoding system will create new genes larger in scope than those in the binary encoding. This paper discusses approaches of exploiting the search potential of genetic operations in the two encoding systems and presents an approach with a hybrid-encoding mechanism, multi-point crossover, and dynamic population size for geophysical inversion. We present a method that is based on the routine in which the mutation operation is conducted in the decimal code and multi-point crossover operation in the binary code. The mix-encoding algorithm is called the hybrid-encoding genetic algorithm (HEGA). HEGA provides better genes with a higher probability by a mutation operator and improves genetic algorithms in resolving complicated geophysical inverse problems. Another significant result is that final solution is determined by the average model derived from multiple trials instead of one computation due to the randomness in a genetic algorithm procedure. These advantages were demonstrated by synthetic and real-world examples of inversion of potential-field data. ?? 2005 Elsevier Ltd. All rights reserved.

  4. Speckle-reduction algorithm for ultrasound images in complex wavelet domain using genetic algorithm-based mixture model.

    PubMed

    Uddin, Muhammad Shahin; Tahtali, Murat; Lambert, Andrew J; Pickering, Mark R; Marchese, Margaret; Stuart, Iain

    2016-05-20

    Compared with other medical-imaging modalities, ultrasound (US) imaging is a valuable way to examine the body's internal organs, and two-dimensional (2D) imaging is currently the most common technique used in clinical diagnoses. Conventional 2D US imaging systems are highly flexible cost-effective imaging tools that permit operators to observe and record images of a large variety of thin anatomical sections in real time. Recently, 3D US imaging has also been gaining popularity due to its considerable advantages over 2D US imaging. It reduces dependency on the operator and provides better qualitative and quantitative information for an effective diagnosis. Furthermore, it provides a 3D view, which allows the observation of volume information. The major shortcoming of any type of US imaging is the presence of speckle noise. Hence, speckle reduction is vital in providing a better clinical diagnosis. The key objective of any speckle-reduction algorithm is to attain a speckle-free image while preserving the important anatomical features. In this paper we introduce a nonlinear multi-scale complex wavelet-diffusion based algorithm for speckle reduction and sharp-edge preservation of 2D and 3D US images. In the proposed method we use a Rayleigh and Maxwell-mixture model for 2D and 3D US images, respectively, where a genetic algorithm is used in combination with an expectation maximization method to estimate mixture parameters. Experimental results using both 2D and 3D synthetic, physical phantom, and clinical data demonstrate that our proposed algorithm significantly reduces speckle noise while preserving sharp edges without discernible distortions. The proposed approach performs better than the state-of-the-art approaches in both qualitative and quantitative measures.

  5. Assessment of economic status in trauma registries: A new algorithm for generating population-specific clustering-based models of economic status for time-constrained low-resource settings.

    PubMed

    Eyler, Lauren; Hubbard, Alan; Juillard, Catherine

    2016-10-01

    Low and middle-income countries (LMICs) and the world's poor bear a disproportionate share of the global burden of injury. Data regarding disparities in injury are vital to inform injury prevention and trauma systems strengthening interventions targeted towards vulnerable populations, but are limited in LMICs. We aim to facilitate injury disparities research by generating a standardized methodology for assessing economic status in resource-limited country trauma registries where complex metrics such as income, expenditures, and wealth index are infeasible to assess. To address this need, we developed a cluster analysis-based algorithm for generating simple population-specific metrics of economic status using nationally representative Demographic and Health Surveys (DHS) household assets data. For a limited number of variables, g, our algorithm performs weighted k-medoids clustering of the population using all combinations of g asset variables and selects the combination of variables and number of clusters that maximize average silhouette width (ASW). In simulated datasets containing both randomly distributed variables and "true" population clusters defined by correlated categorical variables, the algorithm selected the correct variable combination and appropriate cluster numbers unless variable correlation was very weak. When used with 2011 Cameroonian DHS data, our algorithm identified twenty economic clusters with ASW 0.80, indicating well-defined population clusters. This economic model for assessing health disparities will be used in the new Cameroonian six-hospital centralized trauma registry. By describing our standardized methodology and algorithm for generating economic clustering models, we aim to facilitate measurement of health disparities in other trauma registries in resource-limited countries. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
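
    The subset-search idea can be sketched as follows; k-means on binary asset indicators and scikit-learn's silhouette_score are used here as a stand-in for the weighted k-medoids clustering described in the paper, and the variable count, cluster range and data are assumptions.

        import itertools
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.metrics import silhouette_score

        def best_asset_clustering(assets, g=3, k_range=range(2, 6)):
            """Search all combinations of g asset columns and cluster counts,
            returning the combination that maximises average silhouette width."""
            n_cols = assets.shape[1]
            best = (-1.0, None, None)
            for cols in itertools.combinations(range(n_cols), g):
                X = assets[:, cols]
                for k in k_range:
                    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
                    asw = silhouette_score(X, labels)
                    if asw > best[0]:
                        best = (asw, cols, k)
            return best

        # Hypothetical binary household asset indicators (rows = households).
        rng = np.random.default_rng(1)
        assets = rng.integers(0, 2, size=(300, 6)).astype(float)
        asw, cols, k = best_asset_clustering(assets)
        print("best ASW %.2f with columns %s and %d clusters" % (asw, cols, k))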

  6. Modified Parameters of Harmony Search Algorithm for Better Searching

    NASA Astrophysics Data System (ADS)

    Farraliza Mansor, Nur; Abal Abas, Zuraida; Samad Shibghatullah, Abdul; Rahman, Ahmad Fadzli Nizam Abdul

    2017-08-01

    The scheduling and rostering problems are treated as integrated because they depend on each other: the input to the rostering problem is a scheduling problem. In this research, the integrated bus driver scheduling and rostering problem is defined as maximising the balance of task assignment in terms of the distribution of shifts and routes. Achieving greater fairness among drivers is essential because it can increase driver satisfaction. The latest approaches are still unable to address the fairness problem that has emerged, so this research proposes an amended harmony search algorithm to address the fairness issue and thereby raise the level of fairness. The harmony search algorithm is classified as a meta-heuristic algorithm capable of solving hard combinatorial or discrete optimisation problems. In this respect, the three main operators in HS, namely the Harmony Memory Consideration Rate (HMCR), Pitch Adjustment Rate (PAR) and Bandwidth (BW), play a vital role in balancing local exploitation and global exploration. These parameters influence the overall performance of the HS algorithm, and it is therefore crucial to fine-tune them. The contributions of this research are an HMCR parameter based on a step function, while the fret-spacing concept on guitars, together with its associated mathematical formulae, is applied to the BW parameter. A constant step function model is introduced for the alteration of the HMCR parameter. The experimental results revealed that our proposed approach is superior to the parameter-adaptive harmony search algorithm. In conclusion, the proposed approach managed to generate a fairer roster and was thus capable of maximising the balanced distribution of shifts and routes among drivers, which contributed to lowering illness, incidents, absenteeism and accidents.
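
    For orientation, a generic harmony search loop showing where HMCR, PAR and BW enter is sketched below; it is a plain continuous-optimisation version with assumed parameter values, not the modified step-function and fret-spacing variant proposed in the paper.

        import numpy as np

        def harmony_search(objective, dim, bounds, hms=10, hmcr=0.9, par=0.3,
                           bw=0.05, n_iter=2000, seed=0):
            """Minimise `objective` over a box; HMCR, PAR and BW control how new
            harmonies are composed from memory, pitch-adjusted, or drawn at random."""
            rng = np.random.default_rng(seed)
            low, high = bounds
            memory = rng.uniform(low, high, size=(hms, dim))
            costs = np.array([objective(h) for h in memory])
            for _ in range(n_iter):
                new = np.empty(dim)
                for d in range(dim):
                    if rng.random() < hmcr:                      # memory consideration
                        new[d] = memory[rng.integers(hms), d]
                        if rng.random() < par:                   # pitch adjustment
                            new[d] += bw * (high - low) * rng.uniform(-1, 1)
                    else:                                        # random selection
                        new[d] = rng.uniform(low, high)
                new = np.clip(new, low, high)
                cost = objective(new)
                worst = int(np.argmax(costs))
                if cost < costs[worst]:                          # replace worst harmony
                    memory[worst], costs[worst] = new, cost
            best = int(np.argmin(costs))
            return memory[best], costs[best]

        # Toy usage: minimise a sphere function (not the rostering objective of the paper).
        print(harmony_search(lambda x: float(np.sum(x ** 2)), dim=3, bounds=(-5.0, 5.0)))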

  7. Key features for ATA / ATR database design in missile systems

    NASA Astrophysics Data System (ADS)

    Özertem, Kemal Arda

    2017-05-01

    Automatic target acquisition (ATA) and automatic target recognition (ATR) are two vital tasks for missile systems, and having a robust detection and recognition algorithm is crucial for overall system performance. In order to have a robust target detection and recognition algorithm, an extensive image database is required. Automatic target recognition algorithms use the database of images in training and testing steps of algorithm. This directly affects the recognition performance, since the training accuracy is driven by the quality of the image database. In addition, the performance of an automatic target detection algorithm can be measured effectively by using an image database. There are two main ways for designing an ATA / ATR database. The first and easy way is by using a scene generator. A scene generator can model the objects by considering its material information, the atmospheric conditions, detector type and the territory. Designing image database by using a scene generator is inexpensive and it allows creating many different scenarios quickly and easily. However the major drawback of using a scene generator is its low fidelity, since the images are created virtually. The second and difficult way is designing it using real-world images. Designing image database with real-world images is a lot more costly and time consuming; however it offers high fidelity, which is critical for missile algorithms. In this paper, critical concepts in ATA / ATR database design with real-world images are discussed. Each concept is discussed in the perspective of ATA and ATR separately. For the implementation stage, some possible solutions and trade-offs for creating the database are proposed, and all proposed approaches are compared to each other with regards to their pros and cons.

  8. Computer-assisted diagnosis of melanoma.

    PubMed

    Fuller, Collin; Cellura, A Paul; Hibler, Brian P; Burris, Katy

    2016-03-01

    The computer-assisted diagnosis of melanoma is an exciting area of research where imaging techniques are combined with diagnostic algorithms in an attempt to improve detection and outcomes for patients with skin lesions suspicious for malignancy. Once an image has been acquired, it undergoes a processing pathway which includes preprocessing, enhancement, segmentation, feature extraction, feature selection, change detection, and ultimately classification. Practicality for everyday clinical use remains a vital question. A successful model must obtain results that are on par or outperform experienced dermatologists, keep costs at a minimum, be user-friendly, and be time efficient with high sensitivity and specificity. ©2015 Frontline Medical Communications.

  9. Description of bioremediation of soils using the model of a multistep system of microorganisms

    NASA Astrophysics Data System (ADS)

    Lubysheva, A. I.; Potashev, K. A.; Sofinskaya, O. A.

    2018-01-01

    The paper deals with the development of a mathematical model describing the interaction of a multi-step system of microorganisms in soil polluted with oil products. Each step in this system feeds on the metabolic by-products of the previous step. Six different models of the multi-step system are considered. The coefficients of the models were determined by minimizing the residual between the calculated and experimental data, using an original algorithm based on the Levenberg-Marquardt method combined with a Monte Carlo method for finding the initial approximation.
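
    The coefficient-fitting procedure can be sketched as Levenberg-Marquardt minimisation launched from Monte Carlo initial guesses; the toy decay model, parameter ranges and restart count below are assumptions standing in for the actual kinetic models.

        import numpy as np
        from scipy.optimize import least_squares

        def fit_with_monte_carlo_restarts(residual_fn, n_params, n_restarts=50,
                                          low=0.0, high=2.0, seed=0):
            """Run Levenberg-Marquardt from many random initial guesses and keep
            the solution with the smallest residual norm."""
            rng = np.random.default_rng(seed)
            best = None
            for _ in range(n_restarts):
                x0 = rng.uniform(low, high, size=n_params)
                sol = least_squares(residual_fn, x0, method='lm')
                if best is None or sol.cost < best.cost:
                    best = sol
            return best

        # Toy two-parameter decay model standing in for a microbial kinetics step.
        t = np.linspace(0, 10, 40)
        data = 1.4 * np.exp(-0.3 * t) + np.random.default_rng(2).normal(0, 0.02, t.size)
        residuals = lambda p: p[0] * np.exp(-p[1] * t) - data
        fit = fit_with_monte_carlo_restarts(residuals, n_params=2)
        print("fitted parameters:", fit.x)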

  10. Applied Mathematical Optimization Technique on Menu Scheduling for Boarding School Student Using Delete-Reshuffle-Reoptimize Algorithm

    NASA Astrophysics Data System (ADS)

    Sufahani, Suliadi; Mohamad, Mahathir; Roslan, Rozaini; Ghazali Kamardan, M.; Che-Him, Norziha; Ali, Maselan; Khalid, Kamal; Nazri, E. M.; Ahmad, Asmala

    2018-04-01

    Boarding school students need to eat well-balanced, nutritious food that provides adequate calories, energy and nutrients for proper development, in order to repair and sustain body tissues and to prevent undesired ailments and disease. Serving a healthier menu is a notable step towards accomplishing that goal. However, planning a nutritious and balanced menu manually is complicated, inefficient and time consuming. This study aims to build a mathematical model for diet planning that optimizes and meets the essential nutrient intake for boarding school students aged 13-18 while also respecting the budget. It also gives the cook the flexibility to change any preferred menu even after the optimal plan has been produced; a recalculation procedure is then performed on the basis of the optimal plan. The data were gathered from the Ministry of Education and boarding schools’ authorities. Menu planning is a well-established optimization problem. The model was solved using Binary Programming and the “Delete-Reshuffle-Reoptimize Algorithm (DDRA)”.

  11. Evaluating How Post-Bronchodilator Vital Capacities Affect the Diagnosis of Obstruction in Pulmonary Function Tests.

    PubMed

    Blagev, Denitza P; Sorenson, Dean; Linares-Perdomo, Olinto; Bamberg, Stacy; Hegewald, Matthew; Morris, Alan H

    2016-11-01

    Although the ratio of FEV1 to the vital capacity (VC) is universally accepted as the cornerstone of pulmonary function test (PFT) interpretation, FVC remains in common use. We sought to determine what the differences in PFT interpretation were when the largest measured vital capacity (VCmax) was used instead of the FVC. We included 12,238 consecutive PFTs obtained for routine clinical care. We interpreted all PFTs first using FVC in the interpretation algorithm and then again using the VCmax, obtained either before or after administration of inhaled bronchodilator. Six percent of PFTs had an interpretive change when VCmax was used instead of FVC. The most common changes were: new diagnosis of obstruction and exclusion of restriction (previously suggested by low FVC without total lung capacity measured by body plethysmography). A nonspecific pattern occurred in 3% of all PFT interpretations with FVC. One fifth of these 3% produced a new diagnosis of obstruction with VCmax. The largest factors predicting a change in PFT interpretation with VCmax were a positive bronchodilator response and the administration of a bronchodilator. Larger FVCs decreased the odds of PFT interpretation change. Surprisingly, the increased numbers of PFT tests did not increase odds of PFT interpretation change. Six percent of PFTs have a different interpretation when VCmax is used instead of FVC. Evaluating borderline or ambiguous PFTs using the VCmax may be informative in diagnosing obstruction and excluding restriction. Copyright © 2016 by Daedalus Enterprises.
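
    As a minimal sketch of the interpretation change described above, the snippet below applies a generic fixed-ratio obstruction rule (FEV1/VC < 0.70) first with FVC and then with VCmax; the threshold and the example values are illustrative assumptions, not the study's full interpretation algorithm.

```python
def obstruction(fev1, vc, ratio_threshold=0.70):
    """Generic fixed-ratio rule: obstruction if FEV1/VC falls below the threshold."""
    return fev1 / vc < ratio_threshold

# Illustrative case: a bronchodilator response enlarges the measured vital capacity.
fev1, fvc, vc_max = 2.1, 3.0, 3.2   # litres (hypothetical values)

print("using FVC   :", obstruction(fev1, fvc))     # 2.1/3.0 = 0.70 -> False
print("using VCmax :", obstruction(fev1, vc_max))  # 2.1/3.2 = 0.66 -> True (new diagnosis of obstruction)
```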

  12. Automated choroid segmentation based on gradual intensity distance in HD-OCT images.

    PubMed

    Chen, Qiang; Fan, Wen; Niu, Sijie; Shi, Jiajia; Shen, Honglie; Yuan, Songtao

    2015-04-06

    The choroid is an important structure of the eye and plays a vital role in the pathology of retinal diseases. This paper presents an automated choroid segmentation method for high-definition optical coherence tomography (HD-OCT) images, including Bruch's membrane (BM) segmentation and choroidal-scleral interface (CSI) segmentation. An improved retinal nerve fiber layer (RNFL) complex removal algorithm is presented to segment BM by considering the structure characteristics of retinal layers. By analyzing the characteristics of CSI boundaries, we present a novel algorithm to generate a gradual intensity distance image. Then an improved 2-D graph search method with curve smooth constraints is used to obtain the CSI segmentation. Experimental results with 212 HD-OCT images from 110 eyes in 66 patients demonstrate that the proposed method can achieve high segmentation accuracy. The mean choroid thickness difference and overlap ratio between our proposed method and outlines drawn by experts were 6.72 µm and 85.04%, respectively.

  13. Estimating Traffic Accidents in Turkey Using Differential Evolution Algorithm

    NASA Astrophysics Data System (ADS)

    Akgüngör, Ali Payıdar; Korkmaz, Ersin

    2017-06-01

    Estimating traffic accidents plays a vital role in applying road safety procedures. This study proposes Differential Evolution Algorithm (DEA) models to estimate the number of accidents in Turkey. In the model development, population (P) and the number of vehicles (N) are selected as model parameters. Three model forms, linear, exponential and semi-quadratic, are developed using DEA with data covering the period from 2000 to 2014. The developed models are statistically compared to select the best-fitting model. The results of the DE models show that the linear model form is suitable for estimating the number of accidents. The statistics of this form are better than those of the other forms in terms of the performance criteria, namely the Mean Absolute Percentage Error (MAPE) and the Root Mean Square Error (RMSE). To investigate the performance of the linear DE model for future estimations, a ten-year period from 2015 to 2024 is considered. The results obtained from future estimations reveal the suitability of the DE method for road safety applications.
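
    A generic sketch of the linear model form is given below, fitted with SciPy's differential evolution by minimizing RMSE and reported alongside MAPE; the coefficient bounds and the data series are hypothetical, not the Turkish accident data used in the study.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Hypothetical data: population P (millions), vehicles N (millions), accidents A (thousands).
P = np.array([64, 66, 68, 70, 72, 74], dtype=float)
N = np.array([8.5, 9.0, 10.1, 11.2, 12.4, 13.6])
A = np.array([440, 455, 480, 505, 530, 560], dtype=float)

def rmse(coeffs):
    a0, a1, a2 = coeffs
    pred = a0 + a1 * P + a2 * N          # linear model form: A = a0 + a1*P + a2*N
    return np.sqrt(np.mean((pred - A) ** 2))

result = differential_evolution(rmse, bounds=[(-1000, 1000), (-50, 50), (-50, 50)], seed=1)
pred = result.x[0] + result.x[1] * P + result.x[2] * N
mape = 100 * np.mean(np.abs((pred - A) / A))
print("coefficients:", result.x, "RMSE:", result.fun, "MAPE: %.2f%%" % mape)
```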

  14. Statistical approach for the detection of motion/noise artifacts in Photoplethysmogram.

    PubMed

    Selvaraj, Nandakumar; Mendelson, Yitzhak; Shelley, Kirk H; Silverman, David G; Chon, Ki H

    2011-01-01

    Motion and noise artifacts (MNA) have been a serious obstacle in realizing the potential of Photoplethysmogram (PPG) signals for real-time monitoring of vital signs. We present a statistical approach based on the computation of kurtosis and Shannon Entropy (SE) for the accurate detection of MNA in PPG data. The MNA detection algorithm was verified on multi-site PPG data collected from both laboratory and clinical settings. The accuracy of the fusion of kurtosis and SE metrics for the artifact detection was 99.0%, 94.8% and 93.3% in simultaneously recorded ear, finger and forehead PPGs obtained in a clinical setting, respectively. For laboratory PPG data recorded from a finger with contrived artifacts, the accuracy was 88.8%. It was identified that the measurements from the forehead PPG sensor contained the most artifacts followed by finger and ear. The proposed MNA algorithm can be implemented in real-time as the computation time was 0.14 seconds using Matlab®.
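
    The two statistics used for artifact detection can be sketched as below for a single PPG segment; the bin count and the contrived signals are illustrative placeholders, since the paper fuses the two metrics rather than thresholding each independently.

```python
import numpy as np
from scipy.stats import kurtosis

def segment_features(ppg_segment, n_bins=16):
    """Return (kurtosis, Shannon entropy) of one PPG segment."""
    k = kurtosis(ppg_segment)                         # excess kurtosis of the amplitude distribution
    hist, _ = np.histogram(ppg_segment, bins=n_bins)
    p = hist / hist.sum()
    p = p[p > 0]
    se = -np.sum(p * np.log2(p))                      # Shannon entropy of the binned amplitudes
    return k, se

# Motion/noise artifacts often produce spiky, concentrated amplitude distributions
# (higher kurtosis, lower entropy) than clean pulsatile segments.
t = np.arange(0, 10, 0.01)
clean = np.sin(2 * np.pi * 1.2 * t)                   # idealised 72-bpm pulse waveform
noisy = clean.copy()
noisy[200:250] += 5.0                                  # contrived motion spike
print("clean:", segment_features(clean))
print("noisy:", segment_features(noisy))
```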

  15. Design of electrocardiography measurement system with an algorithm to remove noise

    NASA Astrophysics Data System (ADS)

    Kwon, Hyeokjun; Oh, Sechang; Kumar, Prashanth; Varadan, Vijay K.

    2011-04-01

    Electrocardiography (ECG) is an important diagnostic tool that can provide vital information about diseases that may not be detectable with other biological signals such as SpO2 (oxygen saturation), pulse rate, respiration, and blood pressure. For this reason, ECG measurement is mandatory for accurate diagnosis. Recent developments in information technology have facilitated remote monitoring systems that can check a patient's current status. Moreover, remote monitoring systems can obviate the need for patients to visit hospitals periodically. A representative wireless communication system for this purpose is the ZigBee sensor network, because it provides low power consumption and multi-device connection. When measuring the ECG signal, another important factor to consider is unwanted signals mixed into the ECG signal, which severely distort the original waveform. There are three main types of noise: muscle noise, movement noise, and respiration noise. This paper describes the design of an ECG measurement system with a ZigBee sensor network and proposes an algorithm to remove noise from the measured ECG signal.

  16. Optimal feature selection using a modified differential evolution algorithm and its effectiveness for prediction of heart disease.

    PubMed

    Vivekanandan, T; Sriman Narayana Iyengar, N Ch

    2017-11-01

    Enormous data growth in multiple domains has posed a great challenge for data processing and analysis techniques. In particular, the traditional record maintenance strategy has been replaced in the healthcare system. It is vital to develop a model that is able to handle the huge amount of e-healthcare data efficiently. In this paper, the challenging tasks of selecting critical features from the enormous set of available features and diagnosing heart disease are carried out. Feature selection is one of the most widely used pre-processing steps in classification problems. A modified differential evolution (DE) algorithm is used to perform feature selection for cardiovascular disease and optimization of selected features. Of the 10 available strategies for the traditional DE algorithm, the seventh strategy, which is represented by DE/rand/2/exp, is considered for comparative study. The performance analysis of the developed modified DE strategy is given in this paper. With the selected critical features, prediction of heart disease is carried out using fuzzy AHP and a feed-forward neural network. Various performance measures of integrating the modified differential evolution algorithm with fuzzy AHP and a feed-forward neural network in the prediction of heart disease are evaluated in this paper. The accuracy of the proposed hybrid model is 83%, which is higher than that of some other existing models. In addition, the prediction time of the proposed hybrid model is also evaluated and has shown promising results. Copyright © 2017 Elsevier Ltd. All rights reserved.
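
    For readers unfamiliar with the DE strategy notation, the "rand/2" mutation used as the comparison baseline above can be sketched as follows (the "exp" part refers to exponential crossover, shown in simplified form); the population size and the F and CR values are illustrative, not the paper's tuned settings.

```python
import numpy as np

rng = np.random.default_rng(42)

def de_rand_2_exp(pop, i, F=0.5, CR=0.9):
    """One DE/rand/2/exp trial vector for population member i."""
    n, d = pop.shape
    r1, r2, r3, r4, r5 = rng.choice([j for j in range(n) if j != i], size=5, replace=False)
    # rand/2 mutation: random base vector plus two scaled difference vectors
    mutant = pop[r1] + F * (pop[r2] - pop[r3]) + F * (pop[r4] - pop[r5])
    # exponential crossover: copy a contiguous run of genes from the mutant
    trial = pop[i].copy()
    k = rng.integers(d)
    L = 0
    while True:
        trial[(k + L) % d] = mutant[(k + L) % d]
        L += 1
        if L >= d or rng.random() >= CR:
            break
    return trial

pop = rng.random((10, 5))     # 10 candidate feature-weight vectors of dimension 5
print(de_rand_2_exp(pop, 0))
```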

  17. A scoring algorithm for predicting the presence of adult asthma: a prospective derivation study.

    PubMed

    Tomita, Katsuyuki; Sano, Hiroyuki; Chiba, Yasutaka; Sato, Ryuji; Sano, Akiko; Nishiyama, Osamu; Iwanaga, Takashi; Higashimoto, Yuji; Haraguchi, Ryuta; Tohda, Yuji

    2013-03-01

    To predict the presence of asthma in adult patients with respiratory symptoms, we developed a scoring algorithm using clinical parameters. We prospectively analysed 566 adult outpatients who visited Kinki University Hospital for the first time with complaints of nonspecific respiratory symptoms. Asthma was comprehensively diagnosed by specialists using symptoms, signs, and objective tools including bronchodilator reversibility and/or the assessment of bronchial hyperresponsiveness (BHR). Multiple logistic regression analysis was performed to categorise patients and determine the accuracy of diagnosing asthma. A scoring algorithm using the symptom-sign score was developed, based on diurnal variation of symptoms (1 point), recurrent episodes (2 points), medical history of allergic diseases (1 point), and wheeze sound (2 points). A score of >3 had 35% sensitivity and 97% specificity for discriminating between patients with and without asthma and assigned a high probability of having asthma (accuracy 90%). A score of 1 or 2 points assigned intermediate probability (accuracy 68%). After providing additional data of forced expiratory volume in 1 second/forced vital capacity (FEV1/FVC) ratio <0.7, the post-test probability of having asthma was increased to 93%. A score of 0 points assigned low probability (accuracy 31%). After providing additional data of positive reversibility, the post-test probability of having asthma was increased to 88%. This pragmatic diagnostic algorithm is useful for predicting the presence of adult asthma and for determining the appropriate time for consultation with a pulmonologist.
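
    A minimal sketch of the symptom-sign score described above follows; the point weights come from the abstract, while the probability bands are a simplified reading of it (the handling of a score of exactly 3 is an assumption, since the abstract only distinguishes ">3", "1 or 2", and "0").

```python
def symptom_sign_score(diurnal_variation, recurrent_episodes, allergy_history, wheeze):
    """Point-based pre-test score for adult asthma (weights as given in the abstract)."""
    score = 0
    score += 1 if diurnal_variation else 0   # diurnal variation of symptoms: 1 point
    score += 2 if recurrent_episodes else 0  # recurrent episodes: 2 points
    score += 1 if allergy_history else 0     # history of allergic diseases: 1 point
    score += 2 if wheeze else 0              # wheeze sound: 2 points
    return score

def probability_band(score):
    # A score of exactly 3 is grouped with the intermediate band here as an assumption.
    if score > 3:
        return "high probability of asthma"
    if score >= 1:
        return "intermediate probability (FEV1/FVC < 0.7 raises the post-test probability)"
    return "low probability (positive reversibility raises the post-test probability)"

s = symptom_sign_score(True, True, False, True)   # example patient
print(s, "->", probability_band(s))
```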

  18. A Portable Ground-Based Atmospheric Monitoring System (PGAMS) for the Calibration and Validation of Atmospheric Correction Algorithms Applied to Aircraft and Satellite Images

    NASA Technical Reports Server (NTRS)

    Schiller, Stephen; Luvall, Jeffrey C.; Rickman, Doug L.; Arnold, James E. (Technical Monitor)

    2000-01-01

    Detecting changes in the Earth's environment using satellite images of ocean and land surfaces must take into account atmospheric effects. As a result, major programs are underway to develop algorithms for image retrieval of atmospheric aerosol properties and atmospheric correction. However, because of the temporal and spatial variability of atmospheric transmittance it is very difficult to model atmospheric effects and implement models in an operational mode. For this reason, simultaneous in situ ground measurements of atmospheric optical properties are vital to the development of accurate atmospheric correction techniques. Presented in this paper is a spectroradiometer system that provides an optimized set of surface measurements for the calibration and validation of atmospheric correction algorithms. The Portable Ground-based Atmospheric Monitoring System (PGAMS) obtains a comprehensive series of in situ irradiance, radiance, and reflectance measurements for the calibration of atmospheric correction algorithms applied to multispectral and hyperspectral images. The observations include: total downwelling irradiance, diffuse sky irradiance, direct solar irradiance, path radiance in the direction of the north celestial pole, path radiance in the direction of the overflying satellite, almucantar scans of path radiance, full sky radiance maps, and surface reflectance. Each of these parameters is recorded over a wavelength range from 350 to 1050 nm in 512 channels. The system is fast, with the potential to acquire the complete set of observations in only 8 to 10 minutes depending on the selected spatial resolution of the sky path radiance measurements.

  19. The need to approximate the use-case in clinical machine learning

    PubMed Central

    Saeb, Sohrab; Jayaraman, Arun; Mohr, David C.; Kording, Konrad P.

    2017-01-01

    The availability of smartphone and wearable sensor technology is leading to a rapid accumulation of human subject data, and machine learning is emerging as a technique to map those data into clinical predictions. As machine learning algorithms are increasingly used to support clinical decision making, it is vital to reliably quantify their prediction accuracy. Cross-validation (CV) is the standard approach where the accuracy of such algorithms is evaluated on part of the data the algorithm has not seen during training. However, for this procedure to be meaningful, the relationship between the training and the validation set should mimic the relationship between the training set and the dataset expected for the clinical use. Here we compared two popular CV methods: record-wise and subject-wise. While the subject-wise method mirrors the clinically relevant use-case scenario of diagnosis in newly recruited subjects, the record-wise strategy has no such interpretation. Using both a publicly available dataset and a simulation, we found that record-wise CV often massively overestimates the prediction accuracy of the algorithms. We also conducted a systematic review of the relevant literature, and found that this overly optimistic method was used by almost half of the retrieved studies that used accelerometers, wearable sensors, or smartphones to predict clinical outcomes. As we move towards an era of machine learning-based diagnosis and treatment, using proper methods to evaluate their accuracy is crucial, as inaccurate results can mislead both clinicians and data scientists. PMID:28327985
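
    The difference between the two CV schemes can be made concrete with scikit-learn: KFold splits records irrespective of subject, while GroupKFold keeps each subject's records together in either the training or the validation fold. The synthetic data below is only meant to show the mechanics, not to reproduce the paper's results.

```python
import numpy as np
from sklearn.model_selection import KFold, GroupKFold, cross_val_score
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_subjects, records_per_subject = 20, 30
subjects = np.repeat(np.arange(n_subjects), records_per_subject)

# Synthetic features with a strong subject-specific offset: a record-wise split lets
# the model "recognise" subjects it has already seen, inflating the estimated accuracy.
subject_offset = rng.normal(0, 3, size=n_subjects)[subjects]
X = rng.normal(size=(subjects.size, 5)) + subject_offset[:, None]
y = (subject_offset + rng.normal(size=subjects.size) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=50, random_state=0)
record_wise = cross_val_score(clf, X, y, cv=KFold(n_splits=5, shuffle=True, random_state=0))
subject_wise = cross_val_score(clf, X, y, cv=GroupKFold(n_splits=5), groups=subjects)

print("record-wise CV accuracy :", record_wise.mean())   # typically optimistic
print("subject-wise CV accuracy:", subject_wise.mean())  # mirrors diagnosis in new subjects
```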

  20. The GF-3 SAR Data Processor

    PubMed Central

    Han, Bing; Ding, Chibiao; Zhong, Lihua; Liu, Jiayin; Qiu, Xiaolan; Hu, Yuxin; Lei, Bin

    2018-01-01

    The Gaofen-3 (GF-3) data processor was developed as a workstation-based GF-3 synthetic aperture radar (SAR) data processing system. The processor consists of two vital subsystems of the GF-3 ground segment, referred to as the data ingesting subsystem (DIS) and the product generation subsystem (PGS). The primary purpose of DIS is to record and catalogue GF-3 raw data in a transfer format, while PGS produces slant-range or geocoded imagery from the signal data. This paper presents a brief introduction of the GF-3 data processor, including descriptions of the system architecture, the processing algorithms and its output format. PMID:29534464

  1. An Energy Efficient Simultaneous-Node Repositioning Algorithm for Mobile Sensor Networks

    PubMed Central

    Hasbullah, Halabi; Nazir, Babar; Khan, Imran Ali

    2014-01-01

    Recently, wireless sensor network (WSN) applications have seen an increase in interest. In applications such as search and rescue and battlefield reconnaissance, a set of mobile nodes is deployed so that a survey of the area of interest can be made collectively. Keeping the network nodes connected is vital for WSNs to be effective. Connectivity can be provided at startup and maintained by carefully coordinating the nodes when they move. However, if a node suddenly fails, the network could become partitioned, causing communication problems. Recently, several methods that relocate nodes to restore connectivity have been proposed. However, these methods tend not to consider the potential coverage loss in some locations. This paper addresses the concerns of both connectivity and coverage in an integrated way in order to fill this gap. A novel algorithm for simultaneous-node repositioning is introduced. In this approach, each neighbour of the failed node, one by one, moves in for a certain amount of time to take the place of the failed node, after which it returns to its original location in the network. The effectiveness of this algorithm has been verified by simulation results. PMID:25152924

  2. Conformational B-cell epitopes prediction from sequences using cost-sensitive ensemble classifiers and spatial clustering.

    PubMed

    Zhang, Jian; Zhao, Xiaowei; Sun, Pingping; Gao, Bo; Ma, Zhiqiang

    2014-01-01

    B-cell epitopes are regions of the antigen surface that can be recognized by certain antibodies and elicit the immune response. Identification of epitopes for a given antigen chain finds vital applications in vaccine and drug research. Experimental identification of B-cell epitopes is time-consuming and resource intensive, which motivates computational approaches to identify B-cell epitopes. In this paper, a novel cost-sensitive ensemble algorithm is proposed for predicting the antigenic determinant residues, and a spatial clustering algorithm is then adopted to identify the potential epitopes. Firstly, we explore various discriminative features from primary sequences. Secondly, a cost-sensitive ensemble scheme is introduced to deal with the imbalanced learning problem. Thirdly, we adopt a spatial clustering algorithm to determine which residues may potentially form epitopes. Based on the strategies mentioned above, a new predictor, called CBEP (conformational B-cell epitopes prediction), is proposed in this study. CBEP achieves good prediction performance, with mean AUC scores (AUCs) of 0.721 and 0.703 on two benchmark datasets (bound and unbound) using leave-one-out cross-validation (LOOCV). When compared with previous prediction tools, CBEP produces higher sensitivity and comparable specificity values. A web server named CBEP, which implements the proposed method, is available for academic use.

  3. Development and validation of a machine learning algorithm and hybrid system to predict the need for life-saving interventions in trauma patients.

    PubMed

    Liu, Nehemiah T; Holcomb, John B; Wade, Charles E; Batchinsky, Andriy I; Cancio, Leopoldo C; Darrah, Mark I; Salinas, José

    2014-02-01

    Accurate and effective diagnosis of actual injury severity can be problematic in trauma patients. Inherent physiologic compensatory mechanisms may prevent accurate diagnosis and mask true severity in many circumstances. The objective of this project was the development and validation of a multiparameter machine learning algorithm and system capable of predicting the need for life-saving interventions (LSIs) in trauma patients. Statistics based on means, slopes, and maxima of various vital sign measurements corresponding to 79 trauma patient records generated over 110,000 feature sets, which were used to develop, train, and implement the system. Comparisons among several machine learning models proved that a multilayer perceptron would best implement the algorithm in a hybrid system consisting of a machine learning component and basic detection rules. Additionally, 295,994 feature sets from 82 h of trauma patient data showed that the system can obtain 89.8% accuracy within 5 min of recorded LSIs. Use of machine learning technologies combined with basic detection rules provides a potential approach for accurately assessing the need for LSIs in trauma patients. The performance of this system demonstrates that machine learning technology can be implemented in a real-time fashion and potentially used in a critical care environment.
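
    A simplified sketch of building the kind of feature sets described above (means, slopes, and maxima over a window of vital-sign samples) follows; the window length, the chosen signals, and the downstream classifier are placeholders rather than the authors' configuration.

```python
import numpy as np

def window_features(window):
    """Mean, least-squares slope, and maximum for each vital-sign column in a window."""
    t = np.arange(window.shape[0])
    feats = []
    for col in window.T:
        slope = np.polyfit(t, col, 1)[0]
        feats.extend([col.mean(), slope, col.max()])
    return np.array(feats)

# Hypothetical 5-minute window sampled once per second: heart rate, systolic BP, SpO2.
rng = np.random.default_rng(3)
vitals = np.column_stack([
    110 + rng.normal(0, 2, 300),        # heart rate trending high
    90 - 0.05 * np.arange(300),         # falling systolic blood pressure
    97 + rng.normal(0, 0.5, 300),       # SpO2
])
print(window_features(vitals))          # 9 features; these would feed the MLP / rule hybrid
```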

  4. Energy-Efficient ZigBee-Based Wireless Sensor Network for Track Bicycle Performance Monitoring

    PubMed Central

    Gharghan, Sadik K.; Nordin, Rosdiadee; Ismail, Mahamod

    2014-01-01

    In a wireless sensor network (WSN), saving power is a vital requirement. In this paper, a simple point-to-point bike WSN was considered. The bike parameters, speed and cadence, were monitored and transmitted via a wireless link based on the ZigBee protocol. Since the bike parameters are monitored and transmitted on every wheel rotation, the sensor node does not sleep for long, causing power consumption to rise. Therefore, a newly proposed algorithm, known as the Redundancy and Converged Data (RCD) algorithm, was implemented for this application to put the sensor node into sleep mode while maintaining the performance measurements. This is achieved by minimizing the number of data packets transmitted and by fusing the speed and cadence data, utilizing the correlation between them, so that the network is reduced to a single sensor node; this results in reduced power consumption, cost, and size, in addition to simpler hardware implementation. Execution of the proposed RCD algorithm shows that this approach can reduce the current consumption to 1.69 mA and save 95% of the sensor node energy. Also, comparison with different wireless standard technologies demonstrates minimal current consumption in the sensor node. PMID:25153141

  5. A new fast algorithm for solving the minimum spanning tree problem based on DNA molecules computation.

    PubMed

    Wang, Zhaocai; Huang, Dongmei; Meng, Huajun; Tang, Chengpei

    2013-10-01

    The minimum spanning tree (MST) problem is to find a minimum-weight connected edge subset containing all the vertices of a given undirected graph. It is a vitally important problem in graph theory and applied mathematics, with numerous real-life applications. Moreover, in previous studies DNA molecular operations were usually used to solve NP-complete head-to-tail path search problems, and rarely for problems whose solutions are multi-lateral path structures, such as the minimum spanning tree problem. In this paper, we present a new fast DNA algorithm for solving the MST problem using DNA molecular operations. For an undirected graph with n vertices and m edges, we design flexible-length DNA strands representing the vertices and edges, take appropriate steps and obtain the solutions of the MST problem in a proper length range and O(3m+n) time complexity. We extend the application of DNA molecular operations and simultaneously simplify the complexity of the computation. Results of computer simulation experiments show that the proposed method updates some of the best known values in very short time and provides better solution accuracy than existing algorithms. Copyright © 2013 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.

  6. Energy-efficient ZigBee-based wireless sensor network for track bicycle performance monitoring.

    PubMed

    Gharghan, Sadik K; Nordin, Rosdiadee; Ismail, Mahamod

    2014-08-22

    In a wireless sensor network (WSN), saving power is a vital requirement. In this paper, a simple point-to-point bike WSN was considered. The bike parameters, speed and cadence, were monitored and transmitted via a wireless link based on the ZigBee protocol. Since the bike parameters are monitored and transmitted on every wheel rotation, the sensor node does not sleep for long, causing power consumption to rise. Therefore, a newly proposed algorithm, known as the Redundancy and Converged Data (RCD) algorithm, was implemented for this application to put the sensor node into sleep mode while maintaining the performance measurements. This is achieved by minimizing the number of data packets transmitted and by fusing the speed and cadence data, utilizing the correlation between them, so that the network is reduced to a single sensor node; this results in reduced power consumption, cost, and size, in addition to simpler hardware implementation. Execution of the proposed RCD algorithm shows that this approach can reduce the current consumption to 1.69 mA and save 95% of the sensor node energy. Also, comparison with different wireless standard technologies demonstrates minimal current consumption in the sensor node.

  7. Performance of coded MFSK in a Rician fading channel. [Multiple Frequency Shift Keyed modulation]

    NASA Technical Reports Server (NTRS)

    Modestino, J. W.; Mui, S. Y.

    1975-01-01

    The performance of convolutional codes in conjunction with noncoherent multiple frequency shift-keyed (MFSK) modulation and Viterbi maximum likelihood decoding on a Rician fading channel is examined in detail. While the primary motivation underlying this work has been concerned with system performance on the planetary entry channel, it is expected that the results are of considerably wider interest. Particular attention is given to modeling the channel in terms of a few meaningful parameters which can be correlated closely with the results of theoretical propagation studies. Fairly general upper bounds on bit error probability performance in the presence of fading are derived and compared with simulation results using both unquantized and quantized receiver outputs. The effects of receiver quantization and channel memory are investigated and it is concluded that the coded noncoherent MFSK system offers an attractive alternative to coherent BPSK in providing reliable low data rate communications in fading channels typical of planetary entry missions.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rey, D.; Ryan, W.; Ross, M.

    A method for more efficiently utilizing the frequency bandwidth allocated for data transmission is presented. Current space and range communication systems use modulation and coding schemes that transmit 0.5 to 1.0 bits per second per Hertz of radio frequency bandwidth. The goal in this LDRD project is to increase the bandwidth utilization by employing advanced digital communications techniques. This is done with little or no increase in the transmit power which is usually very limited on airborne systems. Teaming with New Mexico State University, an implementation of trellis coded modulation (TCM), a coding and modulation scheme pioneered by Ungerboeck, was developed for this application and simulated on a computer. TCM provides a means for reliably transmitting data while simultaneously increasing bandwidth efficiency. The penalty is increased receiver complexity. In particular, the trellis decoder requires high-speed, application-specific digital signal processing (DSP) chips. A system solution based on the QualComm Viterbi decoder and the Graychip DSP receiver chips is presented.

  9. Error coding simulations

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1993-01-01

    There are various elements such as radio frequency interference (RFI) which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal-to-noise ratio needed for a given desired bit error rate. The use of concatenated coding, e.g. inner convolutional code and outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
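
    As an illustration of the error-detection piece, a bitwise CRC-16 with the CCITT generator polynomial (x^16 + x^12 + x^5 + 1, i.e. 0x1021) is sketched below; treat the initial value and byte ordering as assumptions to be checked against the relevant CCSDS recommendation rather than a verified implementation of it.

```python
def crc16_ccitt(data: bytes, poly: int = 0x1021, init: int = 0xFFFF) -> int:
    """Bitwise CRC-16 over `data` using the CCITT generator polynomial."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ poly) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

frame = b"telemetry frame payload"
checksum = crc16_ccitt(frame)
print(hex(checksum))
# A receiver recomputes the CRC over the payload and compares it with the transmitted
# checksum; a mismatch flags a residual error for the error-control scheme to handle.
```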

  10. Impact of jammer side information on the performance of anti-jam systems

    NASA Astrophysics Data System (ADS)

    Lim, Samuel

    1992-03-01

    The Chernoff bound parameter, D, provides a performance measure for all coded communication systems. D can be used to determine upper-bounds on bit error probabilities (BEPs) of Viterbi decoded convolutional codes. The impact on BEP bounds of channel measurements that provide additional side information can also be evaluated with D. This memo documents the results of a Chernoff bound parameter evaluation in optimum partial-band noise jamming (OPBNJ) for both BPSK and DPSK modulation schemes. Hard and soft quantized receivers, with and without jammer side information (JSI), were examined. The results of this analysis indicate that JSI does improve decoding performance. However, a knowledge of jammer presence alone achieves a performance level comparable to soft decision decoding with perfect JSI. Furthermore, performance degradation due to the lack of JSI can be compensated for by increasing the number of levels of quantization. Therefore, an anti-jam system without JSI can be made to perform almost as well as a system with JSI.

  11. Capacity, cutoff rate, and coding for a direct-detection optical channel

    NASA Technical Reports Server (NTRS)

    Massey, J. L.

    1980-01-01

    It is shown that Pierce's pulse position modulation scheme with 2(exp L) pulse positions used on a self-noise-limited direct detection optical communication channel results in a 2(exp L)-ary erasure channel that is equivalent to the parallel combination of L completely correlated binary erasure channels. The capacity of the full channel is the sum of the capacities of the component channels, but the cutoff rate of the full channel is shown to be much smaller than the sum of the cutoff rates. An interpretation of the cutoff rate is given that suggests a complexity advantage in coding separately on the component channels. It is shown that if short-constraint-length convolutional codes with Viterbi decoders are used on the component channels, then the performance and complexity compare favorably with the Reed-Solomon coding system proposed by McEliece for the full channel. The reasons for this unexpectedly fine performance by the convolutional code system are explored in detail, as are various facets of the channel structure.

  12. A spread-spectrum modem using constant envelope BPSK for a mobile satellite communications terminal

    NASA Technical Reports Server (NTRS)

    Iizuka, N.; Yamashita, A.; Takenaka, S.; Morikawa, E.; Ikegami, T.

    1990-01-01

    This paper describes a 5-kilobit/s spread spectrum modem with a 1.275 MHz chip rate for mobile satellite communications. We used a Viterbi decoder with a coding gain of 7.8 dB at a BER of 10(exp -5) to decrease the required receiver power. This reduces the cost of communication services. The spread spectrum technique makes the modem immune to terrestrial radio signals and keeps it from causing interference in terrestrial radio systems. A class C power amplifier reduces the modem's power consumption. To avoid nonlinear distortion caused by the amplifier, the envelope of the input signal is kept constant by adding a quadrature channel signal to the BPSK signal. To simulate the worst case, we measured the modem's output spectrum using a limiting amplifier instead of the class C amplifier, and found that 99 percent of the spectral power was confined to the specified 2.55 MHz bandwidth.

  13. A Probabilistic Model of Local Sequence Alignment That Simplifies Statistical Significance Estimation

    PubMed Central

    Eddy, Sean R.

    2008-01-01

    Sequence database searches require accurate estimation of the statistical significance of scores. Optimal local sequence alignment scores follow Gumbel distributions, but determining an important parameter of the distribution (λ) requires time-consuming computational simulation. Moreover, optimal alignment scores are less powerful than probabilistic scores that integrate over alignment uncertainty (“Forward” scores), but the expected distribution of Forward scores remains unknown. Here, I conjecture that both expected score distributions have simple, predictable forms when full probabilistic modeling methods are used. For a probabilistic model of local sequence alignment, optimal alignment bit scores (“Viterbi” scores) are Gumbel-distributed with constant λ = log 2, and the high scoring tail of Forward scores is exponential with the same constant λ. Simulation studies support these conjectures over a wide range of profile/sequence comparisons, using 9,318 profile-hidden Markov models from the Pfam database. This enables efficient and accurate determination of expectation values (E-values) for both Viterbi and Forward scores for probabilistic local alignments. PMID:18516236
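
    The practical payoff of the conjecture above is that E-values can be computed in closed form; a sketch follows, where the location parameters and the database size are hypothetical inputs that would normally be estimated or supplied by the search program.

```python
import math

LAMBDA = math.log(2)   # conjectured constant slope for probabilistic local alignment scores

def viterbi_pvalue(score, mu):
    """P(S > score) under a Gumbel distribution with lambda = log 2 and location mu."""
    return 1.0 - math.exp(-math.exp(-LAMBDA * (score - mu)))

def forward_pvalue(score, tau):
    """High-scoring exponential tail of Forward scores, with the same lambda."""
    return math.exp(-LAMBDA * (score - tau))

n_sequences = 1_000_000          # size of the searched database (hypothetical)
score_bits, mu, tau = 35.0, -3.0, 2.0
print("Viterbi E-value:", n_sequences * viterbi_pvalue(score_bits, mu))
print("Forward E-value:", n_sequences * forward_pvalue(score_bits, tau))
```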

  14. Reconstruction of a digital core containing clay minerals based on a clustering algorithm.

    PubMed

    He, Yanlong; Pu, Chunsheng; Jing, Cheng; Gu, Xiaoyu; Chen, Qingdong; Liu, Hongzhi; Khan, Nasir; Dong, Qiaoling

    2017-10-01

    It is difficult to obtain core samples and information for digital core reconstruction of mature sandstone reservoirs around the world, especially for unconsolidated sandstone reservoirs. Meanwhile, the reconstruction and division of clay minerals play a vital role in digital core reconstruction: two-dimensional data-based reconstruction methods are well suited to simulating the microstructure of sandstone reservoirs, but reconstructing the various clay minerals in a digital core remains a research challenge. In the present work, the clay mineral content was taken into account on the basis of two-dimensional information about the reservoir. The hybrid method was applied and compared with a model reconstructed by the process-based method; its output was a digital core containing clay clusters without labels for the clusters' number, size, and texture. The statistics and geometry of the reconstructed model were similar to those of the reference model. In addition, the Hoshen-Kopelman algorithm was used to label the various connected, unclassified clay clusters in the initial model, and the number and size of the clay clusters were recorded. At the same time, the K-means clustering algorithm was applied to divide the labeled large connected clusters into smaller clusters on the basis of differences in the clusters' characteristics. According to the clay minerals' characteristics, such as types, textures, and distributions, the digital core containing clay minerals was reconstructed by means of the clustering algorithm and the judgment of the clay clusters' structure. The distributions and textures of the clay minerals in the digital core were reasonable. The clustering algorithm improved digital core reconstruction and provides an alternative method for simulating different clay minerals in digital cores.
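
    A compact sketch of the two clustering steps described above follows, using SciPy's connected-component labeling in place of a hand-written Hoshen-Kopelman pass and scikit-learn's K-means to split one large labeled cluster; the binary clay image is synthetic and the feature used for splitting (pixel coordinates) is an illustrative assumption, not the clusters' characteristics used in the paper.

```python
import numpy as np
from scipy.ndimage import label
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
clay = (rng.random((64, 64)) > 0.6).astype(int)       # synthetic binary clay map

# Step 1: label connected clay clusters (Hoshen-Kopelman-style connected components).
labels, n_clusters = label(clay)
sizes = np.bincount(labels.ravel())[1:]
print("clusters:", n_clusters, "largest size:", int(sizes.max()))

# Step 2: split the largest connected cluster into smaller ones with K-means
# on the pixel coordinates (a stand-in for richer cluster features).
largest = 1 + int(np.argmax(sizes))
coords = np.argwhere(labels == largest)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(coords)
for sub in range(3):
    print("sub-cluster", sub, "has", int(np.sum(km.labels_ == sub)), "pixels")
```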

  15. Machine learning in cardiovascular medicine: are we there yet?

    PubMed

    Shameer, Khader; Johnson, Kipp W; Glicksberg, Benjamin S; Dudley, Joel T; Sengupta, Partho P

    2018-01-19

    Artificial intelligence (AI) broadly refers to analytical algorithms that iteratively learn from data, allowing computers to find hidden insights without being explicitly programmed where to look. These include a family of operations encompassing several terms like machine learning, cognitive learning, deep learning and reinforcement learning-based methods that can be used to integrate and interpret complex biomedical and healthcare data in scenarios where traditional statistical methods may not be able to perform. In this review article, we discuss the basics of machine learning algorithms and what potential data sources exist; evaluate the need for machine learning; and examine the potential limitations and challenges of implementing machine learning in the context of cardiovascular medicine. The most promising avenues for AI in medicine are the development of automated risk prediction algorithms which can be used to guide clinical care; use of unsupervised learning techniques to more precisely phenotype complex disease; and the implementation of reinforcement learning algorithms to intelligently augment healthcare providers. The utility of a machine learning-based predictive model will depend on factors including data heterogeneity, data depth, data breadth, nature of modelling task, choice of machine learning and feature selection algorithms, and orthogonal evidence. A critical understanding of the strength and limitations of various methods and tasks amenable to machine learning is vital. By leveraging the growing corpus of big data in medicine, we detail pathways by which machine learning may facilitate optimal development of patient-specific models for improving diagnoses, intervention and outcome in cardiovascular medicine. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  16. Semi-automatic Data Integration using Karma

    NASA Astrophysics Data System (ADS)

    Garijo, D.; Kejriwal, M.; Pierce, S. A.; Houser, P. I. Q.; Peckham, S. D.; Stanko, Z.; Hardesty Lewis, D.; Gil, Y.; Pennington, D. D.; Knoblock, C.

    2017-12-01

    Data integration applications are ubiquitous in scientific disciplines. A state-of-the-art data integration system accepts both a set of data sources and a target ontology as input, and semi-automatically maps the data sources in terms of concepts and relationships in the target ontology. Mappings can be both complex and highly domain-specific. Once such a semantic model, expressing the mapping using community-wide standard, is acquired, the source data can be stored in a single repository or database using the semantics of the target ontology. However, acquiring the mapping is a labor-prone process, and state-of-the-art artificial intelligence systems are unable to fully automate the process using heuristics and algorithms alone. Instead, a more realistic goal is to develop adaptive tools that minimize user feedback (e.g., by offering good mapping recommendations), while at the same time making it intuitive and easy for the user to both correct errors and to define complex mappings. We present Karma, a data integration system that has been developed over multiple years in the information integration group at the Information Sciences Institute, a research institute at the University of Southern California's Viterbi School of Engineering. Karma is a state-of-the-art data integration tool that supports an interactive graphical user interface, and has been featured in multiple domains over the last five years, including geospatial, biological, humanities and bibliographic applications. Karma allows a user to import their own ontology and datasets using widely used formats such as RDF, XML, CSV and JSON, can be set up either locally or on a server, supports a native backend database for prototyping queries, and can even be seamlessly integrated into external computational pipelines, including those ingesting data via streaming data sources, Web APIs and SQL databases. We illustrate a Karma workflow at a conceptual level, along with a live demo, and show use cases of Karma specifically for the geosciences. In particular, we show how Karma can be used intuitively to obtain the mapping model between case study data sources and a publicly available and expressive target ontology that has been designed to capture a broad set of concepts in geoscience with standardized, easily searchable names.

  17. Cough event classification by pretrained deep neural network.

    PubMed

    Liu, Jia-Ming; You, Mingyu; Wang, Zheng; Li, Guo-Zheng; Xu, Xianghuai; Qiu, Zhongmin

    2015-01-01

    Cough is an essential symptom in respiratory diseases. In the measurement of cough severity, an accurate and objective cough monitor is expected by the respiratory disease society. This paper aims to introduce a better-performing algorithm, a pretrained deep neural network (DNN), to the cough classification problem, which is a key step in the cough monitor. The deep neural network models are built in two steps, pretraining and fine-tuning, followed by a Hidden Markov Model (HMM) decoder to capture temporal information of the audio signals. By unsupervised pretraining of a deep belief network, a good initialization for a deep neural network is learned. The fine-tuning step then uses back-propagation to tune the neural network so that it can predict the observation probability associated with each HMM state, where the HMM states are originally obtained by forced alignment with a Gaussian Mixture Model Hidden Markov Model (GMM-HMM) on the training samples. Three cough HMMs and one noncough HMM are employed to model coughs and noncoughs respectively. The final decision is made based on the Viterbi decoding algorithm, which generates the most likely HMM sequence for each sample. A sample is labeled as cough if a cough HMM is found in the sequence. The experiments were conducted on a dataset collected from 22 patients with respiratory diseases. Patient-dependent (PD) and patient-independent (PI) experimental settings were used to evaluate the models. Five criteria, sensitivity, specificity, F1, macro average and micro average, are shown to depict different aspects of the models. On the overall evaluation criteria, the DNN-based methods are superior to the traditional GMM-HMM based method on F1 and micro average, with maximal error reductions of 14% and 11% in PD and 7% and 10% in PI, while keeping similar performance on macro average. They also surpass the GMM-HMM model on specificity, with a maximal 14% error reduction on both PD and PI. In this paper, we applied a pretrained deep neural network to the cough classification problem. Our results showed that, compared with the conventional GMM-HMM framework, the HMM-DNN achieves better overall performance on the cough classification task.
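
    Since the final decision rests on Viterbi decoding over the cough and non-cough HMM states, a generic log-domain Viterbi sketch is included below; the transition matrix, state count, and frame scores are small hypothetical placeholders rather than the trained DNN/HMM parameters.

```python
import numpy as np

def viterbi(log_pi, log_A, log_B):
    """Most likely state sequence given log initial, transition, and per-frame emission scores.

    log_pi: (S,) initial log-probabilities
    log_A:  (S, S) transition log-probabilities
    log_B:  (T, S) per-frame state log-likelihoods (e.g. scaled DNN outputs)
    """
    T, S = log_B.shape
    delta = np.full((T, S), -np.inf)
    psi = np.zeros((T, S), dtype=int)
    delta[0] = log_pi + log_B[0]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A          # (from_state, to_state)
        psi[t] = np.argmax(scores, axis=0)
        delta[t] = scores[psi[t], np.arange(S)] + log_B[t]
    path = np.zeros(T, dtype=int)
    path[-1] = int(np.argmax(delta[-1]))
    for t in range(T - 2, -1, -1):                      # backtrack
        path[t] = psi[t + 1, path[t + 1]]
    return path

# Two toy states: 0 = non-cough, 1 = cough.
log_pi = np.log([0.7, 0.3])
log_A = np.log([[0.9, 0.1], [0.2, 0.8]])
log_B = np.log(np.array([[0.8, 0.2], [0.3, 0.7], [0.2, 0.8], [0.7, 0.3]]))
print(viterbi(log_pi, log_A, log_B))   # a sample is labelled cough if a cough state appears
```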

  18. Systems Biological Approach of Molecular Descriptors Connectivity: Optimal Descriptors for Oral Bioavailability Prediction

    PubMed Central

    Ahmed, Shiek S. S. J.; Ramakrishnan, V.

    2012-01-01

    Background: Poor oral bioavailability is an important parameter accounting for the failure of drug candidates. Approximately 50% of developing drugs fail because of unfavorable oral bioavailability. In silico prediction of oral bioavailability (%F) based on physiochemical properties is highly needed. Although many computational models have been developed to predict oral bioavailability, their accuracy remains low, with a significant number of false positives. In this study, we present an oral bioavailability model based on a systems biological approach, using a machine learning algorithm coupled with an optimal discriminative set of physiochemical properties. Results: The models were developed based on 247 computationally derived physicochemical descriptors from 2279 molecules, among which 969, 605 and 705 molecules corresponded to the oral bioavailability, intestinal absorption (HIA) and caco-2 permeability data sets, respectively. The partial least squares discriminant analysis showed that 49 descriptors of HIA and 50 descriptors of caco-2 are the major contributing descriptors in classifying into groups. Of these descriptors, 47 were common to HIA and caco-2, suggesting that they play a vital role in classifying oral bioavailability. To determine the best machine learning algorithm, 21 classifiers were compared using a bioavailability data set of 969 molecules with 47 descriptors. Each molecule in the data set was represented by a set of 47 physiochemical properties with the functional relevance labeled as (+bioavailability/−bioavailability) to indicate good-bioavailability/poor-bioavailability molecules. The best-performing algorithm was the logistic algorithm. The correlation-based feature selection (CFS) algorithm was implemented, confirming that these 47 descriptors are the fundamental descriptors for oral bioavailability prediction. Conclusion: The logistic algorithm with the 47 selected descriptors correctly predicted oral bioavailability with a predictive accuracy of more than 71%. Overall, the method captures the fundamental molecular descriptors, which can be used as an entity to facilitate prediction of oral bioavailability. PMID:22815781

  19. Systems biological approach of molecular descriptors connectivity: optimal descriptors for oral bioavailability prediction.

    PubMed

    Ahmed, Shiek S S J; Ramakrishnan, V

    2012-01-01

    Poor oral bioavailability is an important parameter accounting for the failure of drug candidates. Approximately 50% of developing drugs fail because of unfavorable oral bioavailability. In silico prediction of oral bioavailability (%F) based on physiochemical properties is highly needed. Although many computational models have been developed to predict oral bioavailability, their accuracy remains low, with a significant number of false positives. In this study, we present an oral bioavailability model based on a systems biological approach, using a machine learning algorithm coupled with an optimal discriminative set of physiochemical properties. The models were developed based on 247 computationally derived physicochemical descriptors from 2279 molecules, among which 969, 605 and 705 molecules corresponded to the oral bioavailability, intestinal absorption (HIA) and caco-2 permeability data sets, respectively. The partial least squares discriminant analysis showed that 49 descriptors of HIA and 50 descriptors of caco-2 are the major contributing descriptors in classifying into groups. Of these descriptors, 47 were common to HIA and caco-2, suggesting that they play a vital role in classifying oral bioavailability. To determine the best machine learning algorithm, 21 classifiers were compared using a bioavailability data set of 969 molecules with 47 descriptors. Each molecule in the data set was represented by a set of 47 physiochemical properties with the functional relevance labeled as (+bioavailability/-bioavailability) to indicate good-bioavailability/poor-bioavailability molecules. The best-performing algorithm was the logistic algorithm. The correlation-based feature selection (CFS) algorithm was implemented, confirming that these 47 descriptors are the fundamental descriptors for oral bioavailability prediction. The logistic algorithm with the 47 selected descriptors correctly predicted oral bioavailability with a predictive accuracy of more than 71%. Overall, the method captures the fundamental molecular descriptors, which can be used as an entity to facilitate prediction of oral bioavailability.

  20. Clinical algorithms for the diagnosis and prognosis of interstitial lung disease in systemic sclerosis.

    PubMed

    Hax, Vanessa; Bredemeier, Markus; Didonet Moro, Ana Laura; Pavan, Thaís Rohde; Vieira, Marcelo Vasconcellos; Pitrez, Eduardo Hennemann; da Silva Chakr, Rafael Mendonça; Xavier, Ricardo Machado

    2017-10-01

    Interstitial lung disease (ILD) is currently the primary cause of death in systemic sclerosis (SSc). Thoracic high-resolution computed tomography (HRCT) is considered the gold standard for diagnosis. Recent studies have proposed several clinical algorithms to predict the diagnosis and prognosis of SSc-ILD. To test the clinical algorithms to predict the presence and prognosis of SSc-ILD and to evaluate the association of extent of ILD with mortality in a cohort of SSc patients. Retrospective cohort study, including 177 SSc patients assessed by clinical evaluation, laboratory tests, pulmonary function tests, and HRCT. Three clinical algorithms, combining lung auscultation, chest radiography, and percentage predicted forced vital capacity (FVC), were applied for the diagnosis of different extents of ILD on HRCT. Univariate and multivariate Cox proportional models were used to analyze the association of algorithms and the extent of ILD on HRCT with the risk of death using hazard ratios (HR). The prevalence of ILD on HRCT was 57.1% and 79 patients died (44.6%) in a median follow-up of 11.1 years. For identification of ILD with extent ≥10% and ≥20% on HRCT, all algorithms presented a high sensitivity (>89%) and a very low negative likelihood ratio (<0.16). For prognosis, survival was decreased for all algorithms, especially the algorithm C (HR = 3.47, 95% CI: 1.62-7.42), which identified the presence of ILD based on crackles on lung auscultation, findings on chest X-ray, or FVC <80%. Extensive disease as proposed by Goh et al. (extent of ILD > 20% on HRCT or, in indeterminate cases, FVC < 70%) had a significantly higher risk of death (HR = 3.42, 95% CI: 2.12-5.52). Survival was not different between patients with extent of 10% or 20% of ILD on HRCT, and analysis of 10-year mortality suggested that a threshold of 10% may also have a good predictive value for mortality. However, there is no clear cutoff above which mortality is sharply increased. Clinical algorithms had a good diagnostic performance for extents of SSc-ILD on HRCT with clinical and prognostic relevance (≥10% and ≥20%), and were also strongly related to mortality. Non-HRCT-based algorithms could be useful when HRCT is not available. This is the first study to replicate the prognostic algorithm proposed by Goh et al. in a developing country. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. The need to approximate the use-case in clinical machine learning.

    PubMed

    Saeb, Sohrab; Lonini, Luca; Jayaraman, Arun; Mohr, David C; Kording, Konrad P

    2017-05-01

    The availability of smartphone and wearable sensor technology is leading to a rapid accumulation of human subject data, and machine learning is emerging as a technique to map those data into clinical predictions. As machine learning algorithms are increasingly used to support clinical decision making, it is vital to reliably quantify their prediction accuracy. Cross-validation (CV) is the standard approach where the accuracy of such algorithms is evaluated on part of the data the algorithm has not seen during training. However, for this procedure to be meaningful, the relationship between the training and the validation set should mimic the relationship between the training set and the dataset expected for the clinical use. Here we compared two popular CV methods: record-wise and subject-wise. While the subject-wise method mirrors the clinically relevant use-case scenario of diagnosis in newly recruited subjects, the record-wise strategy has no such interpretation. Using both a publicly available dataset and a simulation, we found that record-wise CV often massively overestimates the prediction accuracy of the algorithms. We also conducted a systematic review of the relevant literature, and found that this overly optimistic method was used by almost half of the retrieved studies that used accelerometers, wearable sensors, or smartphones to predict clinical outcomes. As we move towards an era of machine learning-based diagnosis and treatment, using proper methods to evaluate their accuracy is crucial, as inaccurate results can mislead both clinicians and data scientists. © The Author 2017. Published by Oxford University Press.

  2. A novel scheme for automatic nonrigid image registration using deformation invariant feature and geometric constraint

    NASA Astrophysics Data System (ADS)

    Deng, Zhipeng; Lei, Lin; Zhou, Shilin

    2015-10-01

    Automatic image registration is a vital yet challenging task, particularly for non-rigid deformation images, which are more complicated and more common in remote sensing, such as distorted UAV (unmanned aerial vehicle) images or scanned images distorted by flutter. Traditional non-rigid image registration methods are based on correctly matched corresponding landmarks, which usually require artificial markers. It is a rather challenging task to locate the accurate position of the points and to obtain accurate homonymy point sets. In this paper, we propose an automatic non-rigid image registration algorithm which mainly consists of three steps: To begin with, we introduce an automatic feature point extraction method based on non-linear scale space and a uniform distribution strategy to extract points which are uniformly distributed along the edges of the image. Next, we propose a hybrid point matching algorithm using the DaLI (Deformation and Light Invariant) descriptor and a local affine-invariant geometric constraint based on a triangulation constructed by the K-nearest neighbor algorithm. Based on the accurate homonymy point sets, the two images are registered using the TPS (Thin Plate Spline) model. Our method is demonstrated by three deliberately designed experiments. The first two experiments are designed to evaluate the distribution of the point set and the correct matching rate on synthetic data and real data, respectively. The last experiment is designed on non-rigid deformed remote sensing images, and the three experimental results demonstrate the accuracy, robustness, and efficiency of the proposed algorithm compared with other traditional methods.

  3. Determining Surface Roughness in Urban Areas Using Lidar Data

    NASA Technical Reports Server (NTRS)

    Holland, Donald

    2009-01-01

    An automated procedure has been developed to derive relevant factors that can increase the ability to produce objective, repeatable methods for determining aerodynamic surface roughness. Aerodynamic surface roughness is used for many applications, such as atmospheric dispersion models and wind-damage models. For this technique, existing lidar data originally collected for terrain analysis was used, and it was demonstrated that surface roughness values can be automatically derived and then subsequently utilized in disaster-management and homeland security models. The developed lidar-processing algorithm effectively distinguishes buildings from trees and characterizes their size, density, orientation, and spacing; all of these variables are parameters required to calculate the estimated surface roughness for a specified area. By using this algorithm, aerodynamic surface roughness values in urban areas can be extracted automatically. The user can also adjust the algorithm for local conditions and lidar characteristics, such as summer/winter vegetation and dense/sparse lidar point spacing. Additionally, the user can survey variations in surface roughness that occur due to wind direction; for example, during a hurricane, when wind direction can change dramatically, this variable can be extremely significant. In its current state, the algorithm calculates an estimated surface roughness for a square-kilometer area; techniques that use the lidar data to calculate the surface roughness for a point, whereby only roughness elements upstream from the point of interest are used and the wind direction is a vital concern, are being investigated. This technological advancement will improve the reliability and accuracy of models that use and incorporate surface roughness.

  4. Registration of 3D spectral OCT volumes combining ICP with a graph-based approach

    NASA Astrophysics Data System (ADS)

    Niemeijer, Meindert; Lee, Kyungmoo; Garvin, Mona K.; Abràmoff, Michael D.; Sonka, Milan

    2012-02-01

    The introduction of spectral Optical Coherence Tomography (OCT) scanners has enabled acquisition of high resolution, 3D cross-sectional volumetric images of the retina. 3D-OCT is used to detect and manage eye diseases such as glaucoma and age-related macular degeneration. To follow up patients over time, image registration is a vital tool to enable more precise, quantitative comparison of disease states. In this work we present a 3D registration method based on a two-step approach. In the first step we register both scans in the XY domain using an Iterative Closest Point (ICP) based algorithm. This algorithm is applied to vessel segmentations obtained from the projection image of each scan. The distance minimized in the ICP algorithm includes measurements of the vessel orientation and vessel width to allow for a more robust match. In the second step, a graph-based method is applied to find the optimal translation along the depth axis of the individual A-scans in the volume to match both scans. The cost image used to construct the graph is based on the mean squared error (MSE) between matching A-scans in both images at different translations. We have applied this method to the registration of Optic Nerve Head (ONH) centered 3D-OCT scans of the same patient. First, 10 3D-OCT scans of 5 eyes with glaucoma imaged in vivo were registered for a qualitative evaluation of the algorithm performance. Then, 17 OCT data set pairs of 17 eyes with known deformation were used for quantitative assessment of the method's robustness.

  5. Experimental application of OMA solutions on the model of industrial structure

    NASA Astrophysics Data System (ADS)

    Mironov, A.; Mironovs, D.

    2017-10-01

    Maintaining the reliability of industrial structures is very important and sometimes even vital. High-quality control during production and structural health monitoring (SHM) in service ensure reliable functioning of large, massive and remote structures, such as wind generators, pipelines, power line posts, etc. This paper introduces a complex of technological and methodical solutions for SHM and diagnostics of industrial structures, including those that are actuated by periodic forces. The solutions were verified on a scaled wind-generator model with an integrated system of piezo-film deformation sensors. Simultaneous and multi-patch Operational Modal Analysis (OMA) approaches were implemented as the methodical means for structural diagnostics and monitoring. Specially designed data processing algorithms provide objective evaluation of structural state modification.

  6. Gene Prioritization of Resistant Rice Gene against Xanthomas oryzae pv. oryzae by Using Text Mining Technologies

    PubMed Central

    Xia, Jingbo; Zhang, Xing; Yuan, Daojun; Chen, Lingling; Webster, Jonathan; Fang, Alex Chengyu

    2013-01-01

    To effectively assess the possibility that an unknown rice protein is resistant to Xanthomonas oryzae pv. oryzae, a hybrid strategy is proposed to enhance gene prioritization by combining text mining technologies with a sequence-based approach. The text mining technique of term frequency-inverse document frequency is used to measure the importance of distinguishing terms that reflect biomedical activity in rice, before candidate genes are screened and vital terms are produced. Afterwards, a built-in classifier based on the chaos game representation algorithm is used to sieve out the best possible candidate gene. Our experimental results show that the combination of these two methods achieves enhanced gene prioritization. PMID:24371834
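
    A small, hedged sketch of the TF-IDF step only (the corpus below is invented for illustration; the authors' actual document collection, screening and chaos-game classifier are not reproduced):

        from sklearn.feature_extraction.text import TfidfVectorizer

        docs = [
            "resistance gene expression in rice leaf blight",
            "kinase signaling during Xanthomonas oryzae infection",
            "photosynthesis pathway unrelated to pathogen response",
        ]
        vec = TfidfVectorizer(stop_words="english")
        tfidf = vec.fit_transform(docs)                   # (n_docs, n_terms) sparse matrix
        weights = tfidf.max(axis=0).toarray().ravel()     # strongest score of each term in the corpus
        terms = vec.get_feature_names_out()
        print(sorted(zip(terms, weights), key=lambda t: -t[1])[:5])   # candidate "vital" terms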

  7. On the development of an interactive resource information management system for analysis and display of spatiotemporal data

    NASA Technical Reports Server (NTRS)

    Schell, J. A.

    1974-01-01

    The recent availability of timely synoptic earth imagery from the Earth Resources Technology Satellites (ERTS) provides a wealth of information for the monitoring and management of vital natural resources. Formal language definitions and syntax interpretation algorithms were adapted to provide a flexible, computer information system for the maintenance of resource interpretation of imagery. These techniques are incorporated, together with image analysis functions, into an Interactive Resource Information Management and Analysis System, IRIMAS, which is implemented on a Texas Instruments 980A minicomputer system augmented with a dynamic color display for image presentation. A demonstration of system usage and recommendations for further system development are also included.

  8. Demonstration of the James Webb Space Telescope commissioning on the JWST testbed telescope

    NASA Astrophysics Data System (ADS)

    Acton, D. Scott; Towell, Timothy; Schwenker, John; Swensen, John; Shields, Duncan; Sabatke, Erin; Klingemann, Lana; Contos, Adam R.; Bauer, Brian; Hansen, Karl; Atcheson, Paul D.; Redding, David; Shi, Fang; Basinger, Scott; Dean, Bruce; Burns, Laura

    2006-06-01

    The one-meter Testbed Telescope (TBT) has been developed at Ball Aerospace to facilitate the design and implementation of the wavefront sensing and control (WFS&C) capabilities of the James Webb Space Telescope (JWST). The TBT is used to develop and verify the WFS&C algorithms, check the communication interfaces, validate the WFS&C optical components and actuators, and provide risk reduction opportunities for test approaches for later full-scale cryogenic vacuum testing of the observatory. In addition, the TBT provides a vital opportunity to demonstrate the entire WFS&C commissioning process. This paper describes recent WFS&C commissioning experiments that have been performed on the TBT.

  9. Research on the position estimation of human movement based on camera projection

    NASA Astrophysics Data System (ADS)

    Yi, Zhang; Yuan, Luo; Hu, Huosheng

    2005-06-01

    During the rehabilitation of post-stroke patients, their movements need to be localized and learned so that incorrect movements can be corrected immediately. Tracking these movements is therefore vital to the rehabilitative course, and within it the estimation of the position of human movement is especially important. In this paper, the characteristics of the human movement system are first analyzed. Next, a camera and an inertial sensor are used to measure the position of human movement, and a Kalman filter algorithm is proposed to fuse the two measurements into an optimal estimate of the position. Finally, the performance of the method is analyzed.
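
    The abstract does not give the filter equations, so the following is only a generic one-dimensional Kalman-fusion sketch: a single position state is predicted and then corrected sequentially with a camera measurement and an inertial-derived measurement, each with its own (assumed) noise variance.

        def kalman_update(x, P, z, R):
            """Scalar measurement update: state x, variance P, measurement z, noise variance R."""
            K = P / (P + R)            # Kalman gain
            return x + K * (z - x), (1.0 - K) * P

        x, P = 0.0, 1.0                # initial position estimate and its variance
        Q = 0.01                       # process noise added at each time step
        R_cam, R_imu = 0.05, 0.20      # assumed camera / inertial noise variances

        for z_cam, z_imu in [(1.02, 0.95), (1.10, 1.20), (1.18, 1.05)]:
            P += Q                                       # predict (position modeled as near-constant)
            x, P = kalman_update(x, P, z_cam, R_cam)     # fuse camera measurement
            x, P = kalman_update(x, P, z_imu, R_imu)     # fuse inertial measurement
            print(round(x, 3), round(P, 4))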

  10. Gene prioritization of resistant rice gene against Xanthomas oryzae pv. oryzae by using text mining technologies.

    PubMed

    Xia, Jingbo; Zhang, Xing; Yuan, Daojun; Chen, Lingling; Webster, Jonathan; Fang, Alex Chengyu

    2013-01-01

    To effectively assess the possibility that an unknown rice protein is resistant to Xanthomonas oryzae pv. oryzae, a hybrid strategy is proposed to enhance gene prioritization by combining text mining technologies with a sequence-based approach. The text mining technique of term frequency-inverse document frequency is used to measure the importance of distinguishing terms that reflect biomedical activity in rice, before candidate genes are screened and vital terms are produced. Afterwards, a built-in classifier based on the chaos game representation algorithm is used to sieve out the best possible candidate gene. Our experimental results show that the combination of these two methods achieves enhanced gene prioritization.

  11. The use of a prescription drug monitoring program to develop algorithms to identify providers with unusual prescribing practices for controlled substances.

    PubMed

    Ringwalt, Christopher; Schiro, Sharon; Shanahan, Meghan; Proescholdbell, Scott; Meder, Harold; Austin, Anna; Sachdeva, Nidhi

    2015-10-01

    The misuse, abuse and diversion of controlled substances have reached epidemic proportions in the United States. Contributing to this problem are providers who over-prescribe these substances. Using one state's prescription drug monitoring program, we describe a series of metrics we developed to identify providers manifesting unusual and uncustomary prescribing practices. We then present the results of a preliminary effort to assess the concurrent validity of these algorithms, using death records from the state's vital records database pertaining to providers who wrote prescriptions to patients who then died of a medication or drug overdose within 30 days. The metrics showing the strongest concurrent validity with providers identified from these records related to those who co-prescribed benzodiazepines (e.g., valium) and high levels of opioid analgesics (e.g., oxycodone), as well as those who wrote temporally overlapping prescriptions. We conclude with a discussion of a variety of uses to which these metrics may be put, as well as problems and opportunities related to their use.

  12. Development and evaluation of a new contoured cushion system with an optimized normalization algorithm.

    PubMed

    Li, Sujiao; Zhang, Zhengxiang; Wang, Jue

    2014-01-01

    Prevention of pressure sores remains a significant problem confronting spinal cord injury patients and the elderly with limited mobility. One vital aspect of this subject concerns the development of cushions to decrease pressure ulcers for seated patients, particularly those bound to wheelchairs. Here, we present a novel cushion system that uses the interface pressure distribution between the cushion and the buttocks to design custom contoured foam cushions. An optimized normalization algorithm was proposed, with which the interface pressure distribution is transformed into the carving depth of the foam cushion according to the biomechanical characteristics of the foam. The shape and pressure-relief performance of the custom contoured foam cushions were investigated. The outcomes showed that the contoured shape of the personalized cushion matched the buttock contour very well. Moreover, the custom contoured cushion alleviated pressure under the buttocks and significantly increased subjective comfort and stability. Furthermore, the fabrication method not only decreased the unit production cost but also simplified the manufacturing procedure. All in all, this prototype seat cushion could be an effective and economical way to prevent pressure ulcers.

  13. Dynamic Self-adaptive Remote Health Monitoring System for Diabetics

    PubMed Central

    Suh, Myung-kyung; Moin, Tannaz; Woodbridge, Jonathan; Lan, Mars; Ghasemzadeh, Hassan; Bui, Alex; Ahmadi, Sheila; Sarrafzadeh, Majid

    2016-01-01

    Diabetes is the seventh leading cause of death in the United States. In 2010, about 1.9 million new cases of diabetes were diagnosed in people aged 20 years or older. Remote health monitoring systems can help diabetics and their healthcare professionals monitor health-related measurements by providing real-time feedback. However, data-driven methods to dynamically prioritize and generate tasks are not well investigated in remote health monitoring. This paper presents a task optimization technique used in WANDA (Weight and Activity with Blood Pressure and Other Vital Signs), a wireless health project that leverages sensor technology and wireless communication to monitor the health status of patients with diabetes. WANDA applies data analytics in real time to improve the quality of care. The developed algorithm minimizes the number of daily tasks required of diabetic patients using association rules that satisfy a minimum support threshold. Each of these tasks maximizes information gain, thereby improving the overall level of care. Experimental results show that the developed algorithm can reduce the number of tasks by up to 28.6% with a minimum support of 0.95 and a minimum confidence of 0.97, and with high efficiency. PMID:23366365
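
    A toy Python sketch of the support computation behind this kind of association-rule pruning (invented record sets and threshold; this is not the WANDA implementation): candidate task sets are kept only if their support in the daily records meets the minimum threshold.

        from itertools import combinations

        records = [
            {"weight", "glucose", "bp"},
            {"weight", "glucose"},
            {"weight", "glucose", "activity"},
            {"glucose", "bp"},
        ]
        min_support = 0.75

        def support(itemset, records):
            return sum(itemset <= r for r in records) / len(records)

        items = sorted(set().union(*records))
        frequent = [set(c) for k in (1, 2) for c in combinations(items, k)
                    if support(set(c), records) >= min_support]
        print(frequent)   # task combinations frequent enough to be scheduled together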

  14. Airline Passenger Profiling Based on Fuzzy Deep Machine Learning.

    PubMed

    Zheng, Yu-Jun; Sheng, Wei-Guo; Sun, Xing-Ming; Chen, Sheng-Yong

    2017-12-01

    Passenger profiling plays a vital part in commercial aviation security, but classical methods become very inefficient in handling the rapidly increasing amounts of electronic records. This paper proposes a deep learning approach to passenger profiling. The center of our approach is a Pythagorean fuzzy deep Boltzmann machine (PFDBM), whose parameters are expressed by Pythagorean fuzzy numbers such that each neuron can learn how a feature affects the production of the correct output from both the positive and negative sides. We propose a hybrid algorithm combining a gradient-based method and an evolutionary algorithm for training the PFDBM. Based on the novel learning model, we develop a deep neural network (DNN) for classifying normal passengers and potential attackers, and further develop an integrated DNN for identifying group attackers whose individual features are insufficient to reveal the abnormality. Experiments on data sets from Air China show that our approach provides much higher learning ability and classification accuracy than existing profilers. It is expected that the fuzzy deep learning approach can be adapted for a variety of complex pattern analysis tasks.

  15. Skeletonization and Partitioning of Digital Images Using Discrete Morse Theory.

    PubMed

    Delgado-Friedrichs, Olaf; Robins, Vanessa; Sheppard, Adrian

    2015-03-01

    We show how discrete Morse theory provides a rigorous and unifying foundation for defining skeletons and partitions of grayscale digital images. We model a grayscale image as a cubical complex with a real-valued function defined on its vertices (the voxel values). This function is extended to a discrete gradient vector field using the algorithm presented in Robins, Wood, Sheppard TPAMI 33:1646 (2011). In the current paper we define basins (the building blocks of a partition) and segments of the skeleton using the stable and unstable sets associated with critical cells. The natural connection between Morse theory and homology allows us to prove the topological validity of these constructions; for example, that the skeleton is homotopic to the initial object. We simplify the basins and skeletons via Morse-theoretic cancellation of critical cells in the discrete gradient vector field using a strategy informed by persistent homology. Simple working Python code for our algorithms for efficient vector field traversal is included. Example data are taken from micro-CT images of porous materials, an application area where accurate topological models of pore connectivity are vital for fluid-flow modelling.

  16. Fast precalculated triangular mesh algorithm for 3D binary computer-generated holograms.

    PubMed

    Yang, Fan; Kaczorowski, Andrzej; Wilkinson, Tim D

    2014-12-10

    A new method for constructing computer-generated holograms using a precalculated triangular mesh is presented. The speed of calculation can be increased dramatically by exploiting both the precalculated base triangle and GPU parallel computing. Unlike algorithms using point-based sources, this method can reconstruct a more vivid 3D object instead of a "hollow image." In addition, there is no need to do a fast Fourier transform for each 3D element every time. A ferroelectric liquid crystal spatial light modulator is used to display the binary hologram within our experiment and the hologram of a base right triangle is produced by utilizing just a one-step Fourier transform in the 2D case, which can be expanded to the 3D case by multiplying by a suitable Fresnel phase plane. All 3D holograms generated in this paper are based on Fresnel propagation; thus, the Fresnel plane is treated as a vital element in producing the hologram. A GeForce GTX 770 graphics card with 2 GB memory is used to achieve parallel computing.

  17. Iris biometric system design using multispectral imaging

    NASA Astrophysics Data System (ADS)

    Widhianto, Benedictus Yohanes Bagus Y. B.; Nasution, Aulia M. T.

    2016-11-01

    An identity recognition system is a vital component of everyday life, and iris biometrics are among the most accurate, reaching 99% accuracy. Iris biometric systems usually use infrared illumination to reduce the discomfort caused by shining light directly into the eye, whereas the eumelanin that forms the iris fluoresces most strongly under visible light. In this research, the iris is imaged at wavelengths of 850 nm, 560 nm, and 590 nm; detection uses the Daugman algorithm with Gabor wavelet feature extraction, and features are matched using the Hamming distance. The results are analyzed to quantify the differences between wavelengths, to improve the accuracy of the multispectral biometric system, and to serve as a check of iris authenticity. The analyses at 850 nm, 560 nm, and 590 nm yield accuracies of 99.35%, 97.5%, and 64.5%, respectively, with matching scores of 0.26, 0.23, and 0.37.
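
    The matching step can be illustrated with a masked Hamming distance between binary iris codes (a generic sketch with synthetic codes, not the authors' implementation):

        import numpy as np

        def hamming_distance(code_a, code_b, mask_a, mask_b):
            """Fraction of disagreeing bits over the bits valid in both masks."""
            valid = mask_a & mask_b
            if valid.sum() == 0:
                return 1.0
            return float(np.count_nonzero((code_a ^ code_b) & valid)) / float(valid.sum())

        rng = np.random.default_rng(0)
        code_a = rng.integers(0, 2, 2048, dtype=np.uint8)
        code_b = code_a ^ (rng.random(2048) < 0.2).astype(np.uint8)   # ~20% of bits flipped
        mask = np.ones(2048, dtype=np.uint8)                          # no occlusion in this toy case
        print(hamming_distance(code_a, code_b, mask, mask))           # ~0.2, i.e. a plausible match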

  18. Enhancement and Validation of an Arab Surname Database

    PubMed Central

    Schwartz, Kendra; Beebani, Ganj; Sedki, Mai; Tahhan, Mamon; Ruterbusch, Julie J.

    2015-01-01

    Objectives Arab Americans constitute a large, heterogeneous, and quickly growing subpopulation in the United States. Health statistics for this group are difficult to find because US governmental offices do not recognize Arab as separate from white. The development and validation of an Arab- and Chaldean-American name database will enhance research efforts in this population subgroup. Methods A previously validated name database was supplemented with newly identified names gathered primarily from vital statistic records and then evaluated using a multistep process. This process included 1) review by 4 Arabic- and Chaldean-speaking reviewers, 2) ethnicity assessment by social media searches, and 3) self-report of ancestry obtained from a telephone survey. Results Our Arab- and Chaldean-American name algorithm has a positive predictive value of 91% and a negative predictive value of 100%. Conclusions This enhanced name database and algorithm can be used to identify Arab Americans in health statistics data, such as cancer and hospital registries, where they are often coded as white, to determine the extent of health disparities in this population. PMID:24625771

  19. Identifying 5-methylcytosine sites in RNA sequence using composite encoding feature into Chou's PseKNC.

    PubMed

    Sabooh, M Fazli; Iqbal, Nadeem; Khan, Mukhtaj; Khan, Muslim; Maqbool, H F

    2018-05-01

    This study examines an accurate and efficient computational method for the identification of 5-methylcytosine sites in RNA modification. The occurrence of 5-methylcytosine (m5C) plays a vital role in a number of biological processes, and for better comprehension of these biological functions and mechanisms it is necessary to recognize m5C sites in RNA precisely. Laboratory techniques are available to identify m5C sites in RNA, but these procedures require a lot of time and resources. This study develops a new computational method for extracting features of an RNA sequence. In this method, the RNA sequence is first encoded as a composite feature vector; then the minimum-redundancy-maximum-relevance algorithm is used to select discriminative features. Second, classification is performed with a support vector machine evaluated by a jackknife cross-validation test. The suggested method efficiently distinguishes m5C sites from non-m5C sites, achieving an accuracy of 93.33% with a sensitivity of 90.0% and a specificity of 96.66% on benchmark datasets. These results show that the proposed algorithm achieves significantly better identification performance than existing computational techniques. This study extends knowledge about the occurrence sites of RNA modification, which paves the way for better comprehension of its biological functions and mechanisms.
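
    The classification stage can be sketched as follows (synthetic features and labels; the composite encoding and mRMR selection of the paper are not reproduced): an RBF-kernel SVM evaluated with a jackknife (leave-one-out) test.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import LeaveOneOut, cross_val_score

        rng = np.random.default_rng(1)
        X = rng.normal(size=(60, 16))                     # 60 sequences, 16 stand-in features
        y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)     # stand-in labels: m5C vs non-m5C

        clf = SVC(kernel="rbf", C=1.0, gamma="scale")
        scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
        print("jackknife accuracy:", scores.mean())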

  20. Region-of-interest image reconstruction with intensity weighting in circular cone-beam CT for image-guided radiation therapy

    PubMed Central

    Cho, Seungryong; Pearson, Erik; Pelizzari, Charles A.; Pan, Xiaochuan

    2009-01-01

    Imaging plays a vital role in radiation therapy and with recent advances in technology considerable emphasis has been placed on cone-beam CT (CBCT). Attaching a kV x-ray source and a flat panel detector directly to the linear accelerator gantry has enabled progress in target localization techniques, which can include daily CBCT setup scans for some treatments. However, with an increasing number of CT scans there is also an increasing concern for patient exposure. An intensity-weighted region-of-interest (IWROI) technique, which has the potential to greatly reduce CBCT dose, in conjunction with the chord-based backprojection-filtration (BPF) reconstruction algorithm, has been developed and its feasibility in clinical use is demonstrated in this article. A nonuniform filter is placed in the x-ray beam to create regions of two different beam intensities. In this manner, regions outside the target area can be given a reduced dose but still visualized with a lower contrast to noise ratio. Image artifacts due to transverse data truncation, which would have occurred in conventional reconstruction algorithms, are avoided and image noise levels of the low- and high-intensity regions are well controlled by use of the chord-based BPF reconstruction algorithm. The proposed IWROI technique can play an important role in image-guided radiation therapy. PMID:19472624

  1. Automated control of robotic camera tacheometers for measurements of industrial large scale objects

    NASA Astrophysics Data System (ADS)

    Heimonen, Teuvo; Leinonen, Jukka; Sipola, Jani

    2013-04-01

    Modern robotic tacheometers equipped with digital cameras (also called imaging total stations) and capable of reflectorless measurement offer new possibilities for gathering 3D data. In this paper an automated approach for the tacheometer measurements needed in the dimensional control of industrial large-scale objects is proposed. The approach makes two new contributions: the automated extraction of the vital points (i.e., the points to be measured) and the automated fine aiming of the tacheometer. The proposed approach proceeds through the following steps. First, the coordinates of the vital points are automatically extracted from the computer-aided design (CAD) data. The extracted design coordinates are then used to aim the tacheometer at the designed location of each point in turn. However, because of deviations between the designed and actual locations of the points, the aiming needs to be adjusted. An automated, dynamic, image-based look-and-move servoing architecture is proposed for this task. After successful fine aiming, the actual coordinates of the point in question can be measured automatically using the measuring functionality of the tacheometer. The approach was validated experimentally and found to be feasible. On average, 97% of the points actually measured in four different shipbuilding measurement cases were indeed proposed as vital points by the automated extraction algorithm. The accuracy of the results obtained with automatic control of the tacheometer was comparable to that obtained with manual control, and the reliability of the image-processing step was found to be high in laboratory experiments.

  2. Integrating community-based verbal autopsy into civil registration and vital statistics (CRVS): system-level considerations

    PubMed Central

    de Savigny, Don; Riley, Ian; Chandramohan, Daniel; Odhiambo, Frank; Nichols, Erin; Notzon, Sam; AbouZahr, Carla; Mitra, Raj; Cobos Muñoz, Daniel; Firth, Sonja; Maire, Nicolas; Sankoh, Osman; Bronson, Gay; Setel, Philip; Byass, Peter; Jakob, Robert; Boerma, Ties; Lopez, Alan D.

    2017-01-01

    ABSTRACT Background: Reliable and representative cause of death (COD) statistics are essential to inform public health policy, respond to emerging health needs, and document progress towards Sustainable Development Goals. However, less than one-third of deaths worldwide are assigned a cause. Civil registration and vital statistics (CRVS) systems in low- and lower-middle-income countries are failing to provide timely, complete and accurate vital statistics, and it will still be some time before they can provide physician-certified COD for every death. Proposals: Verbal autopsy (VA) is a method to ascertain the probable COD and, although imperfect, it is the best alternative in the absence of medical certification. There is extensive experience with VA in research settings but only a few examples of its use on a large scale. Data collection using electronic questionnaires on mobile devices and computer algorithms to analyse responses and estimate probable COD have increased the potential for VA to be routinely applied in CRVS systems. However, a number of CRVS and health system integration issues should be considered in planning, piloting and implementing a system-wide intervention such as VA. These include addressing the multiplicity of stakeholders and sub-systems involved, integration with existing CRVS work processes and information flows, linking VA results to civil registration records, information technology requirements and data quality assurance. Conclusions: Integrating VA within CRVS systems is not simply a technical undertaking. It will have profound system-wide effects that should be carefully considered when planning for an effective implementation. This paper identifies and discusses the major system-level issues and emerging practices, provides a planning checklist of system-level considerations and proposes an overview for how VA can be integrated into routine CRVS systems. PMID:28137194

  3. Classification of nanoparticle diffusion processes in vital cells by a multifeature random forests approach: application to simulated data, darkfield, and confocal laser scanning microscopy

    NASA Astrophysics Data System (ADS)

    Wagner, Thorsten; Kroll, Alexandra; Wiemann, Martin; Lipinski, Hans-Gerd

    2016-04-01

    Darkfield and confocal laser scanning microscopy both allow for a simultaneous observation of live cells and single nanoparticles. Accordingly, a characterization of nanoparticle uptake and intracellular mobility appears possible within living cells. Single particle tracking makes it possible to characterize the particle and the surrounding cell. In case of free diffusion, the mean squared displacement for each trajectory of a nanoparticle can be measured which allows computing the corresponding diffusion coefficient and, if desired, converting it into the hydrodynamic diameter using the Stokes-Einstein equation and the viscosity of the fluid. However, within the more complex system of a cell's cytoplasm unrestrained diffusion is scarce and several other types of movements may occur. Thus, confined or anomalous diffusion (e.g. diffusion in porous media), active transport, and combinations thereof were described by several authors. To distinguish between these types of particle movement we developed an appropriate classification method, and simulated three types of particle motion in a 2D plane using a Monte Carlo approach: (1) normal diffusion, using random direction and step-length, (2) subdiffusion, using confinements like a reflective boundary with defined radius or reflective objects in the closer vicinity, and (3) superdiffusion, using a directed flow added to the normal diffusion. To simulate subdiffusion we devised a new method based on tracks of different length combined with equally probable obstacle interaction. Next we estimated the fractal dimension, elongation and the ratio of long-time / short-time diffusion coefficients. These features were used to train a random forests classification algorithm. The accuracy for simulated trajectories with 180 steps was 97% (95%-CI: 0.9481-0.9884). The balanced accuracy was 94%, 99% and 98% for normal-, sub- and superdiffusion, respectively. Nanoparticle tracking analysis was used with 100 nm polystyrene particles to get trajectories for normal diffusion. As a next step we identified diffusion types of nanoparticles in vital cells and incubated V79 fibroblasts with 50 nm gold nanoparticles, which appeared as intensely bright objects due to their surface plasmon resonance. The movement of particles in both the extracellular and intracellular space was observed by dark field and confocal laser scanning microscopy. After reducing background noise from the video it became possible to identify individual particle spots by a maximum detection algorithm and trace them using the robust single-particle tracking algorithm proposed by Jaqaman, which is able to handle motion heterogeneity and particle disappearance. The particle trajectories inside cells indicated active transport (superdiffusion) as well as subdiffusion. Eventually, the random forest classification algorithm, after being trained by the above simulations, successfully classified the trajectories observed in live cells.
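
    One trajectory feature commonly used to separate these diffusion types is the anomalous exponent alpha from MSD(t) ~ t**alpha; the following Python sketch (a strong simplification of the authors' multi-feature random-forest approach, using simulated walks) estimates it from a log-log fit.

        import numpy as np

        def msd(track):
            """Mean squared displacement of a (T, 2) trajectory for lags 1..T//4."""
            lags = np.arange(1, len(track) // 4)
            return lags, np.array([np.mean(np.sum((track[l:] - track[:-l]) ** 2, axis=1)) for l in lags])

        def alpha(track):
            lags, m = msd(track)
            slope, _ = np.polyfit(np.log(lags), np.log(m), 1)
            return slope

        rng = np.random.default_rng(2)
        brownian = np.cumsum(rng.normal(size=(180, 2)), axis=0)       # normal diffusion
        directed = brownian + 0.5 * np.arange(180)[:, None]           # added directed flow
        print(alpha(brownian))   # close to 1 -> normal diffusion
        print(alpha(directed))   # clearly above 1 -> superdiffusion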

  4. Using Supervised Machine Learning to Classify Real Alerts and Artifact in Online Multi-signal Vital Sign Monitoring Data

    PubMed Central

    Chen, Lujie; Dubrawski, Artur; Wang, Donghan; Fiterau, Madalina; Guillame-Bert, Mathieu; Bose, Eliezer; Kaynar, Ata M.; Wallace, David J.; Guttendorf, Jane; Clermont, Gilles; Pinsky, Michael R.; Hravnak, Marilyn

    2015-01-01

    OBJECTIVE Use machine-learning (ML) algorithms to classify alerts as real or artifacts in online noninvasive vital sign (VS) data streams to reduce alarm fatigue and missed true instability. METHODS Using a 24-bed trauma step-down unit’s non-invasive VS monitoring data (heart rate [HR], respiratory rate [RR], peripheral oximetry [SpO2]) recorded at 1/20Hz, and noninvasive oscillometric blood pressure [BP] less frequently, we partitioned data into training/validation (294 admissions; 22,980 monitoring hours) and test sets (2,057 admissions; 156,177 monitoring hours). Alerts were VS deviations beyond stability thresholds. A four-member expert committee annotated a subset of alerts (576 in training/validation set, 397 in test set) as real or artifact selected by active learning, upon which we trained ML algorithms. The best model was evaluated on alerts in the test set to enact online alert classification as signals evolve over time. MAIN RESULTS The Random Forest model discriminated between real and artifact as the alerts evolved online in the test set with area under the curve (AUC) performance of 0.79 (95% CI 0.67-0.93) for SpO2 at the instant the VS first crossed threshold and increased to 0.87 (95% CI 0.71-0.95) at 3 minutes into the alerting period. BP AUC started at 0.77 (95%CI 0.64-0.95) and increased to 0.87 (95% CI 0.71-0.98), while RR AUC started at 0.85 (95%CI 0.77-0.95) and increased to 0.97 (95% CI 0.94–1.00). HR alerts were too few for model development. CONCLUSIONS ML models can discern clinically relevant SpO2, BP and RR alerts from artifacts in an online monitoring dataset (AUC>0.87). PMID:26992068

  5. The development of a decision support system with an interactive clinical user interface for estimating treatment parameters in radiation therapy in order to reduce radiation dose in head and neck patients

    NASA Astrophysics Data System (ADS)

    Verma, Sneha; Liu, Joseph; Deshpande, Ruchi; DeMarco, John; Liu, Brent J.

    2017-03-01

    The primary goal in radiation therapy is to target the tumor with the maximum possible radiation dose while limiting the radiation exposure of the surrounding healthy tissues. However, in order to achieve an optimized treatment plan, many constraints, such as gender, age, tumor type, and location, need to be considered. The location of the malignant tumor with respect to the vital organs is another important factor for the treatment planning process; it can be quantified as a feature, making it easier to analyze its effect. Incorporating such features into the patient's medical history could provide additional knowledge leading to better treatment outcomes. To show the value of features such as the relative locations of tumors and surrounding organs, the data are first processed to calculate the features and formulate a feature matrix. These features are then matched with retrospective cases with similar features to give the clinician insight into the dose delivered in similar past cases. This process provides a range of doses that can be delivered to the patient while limiting the radiation exposure of surrounding organs, based on similar retrospective cases. As the number of patients increases, the computation needed for feature extraction increases, as does the workload for the physician to find the appropriate dose. To show how such algorithms can be integrated, we designed and developed a system with a streamlined workflow and interface as a prototype for the clinician to test and explore. Integration of the tumor location feature with the clinician's experience and training could play a vital role in designing new treatment algorithms and achieving better outcomes. Last year, we presented how multi-institutional data are incorporated into a decision support system; this year, the presentation focuses on the interface and a demonstration of the working prototype of the informatics system.

  6. Quantitative evaluation of in vivo vital-dye fluorescence endoscopic imaging for the detection of Barrett’s-associated neoplasia

    PubMed Central

    Thekkek, Nadhi; Lee, Michelle H.; Polydorides, Alexandros D.; Rosen, Daniel G.; Anandasabapathy, Sharmila; Richards-Kortum, Rebecca

    2015-01-01

    Current imaging tools are associated with inconsistent sensitivity and specificity for detection of Barrett’s-associated neoplasia. Optical imaging has shown promise in improving the classification of neoplasia in vivo. The goal of this pilot study was to evaluate whether in vivo vital dye fluorescence imaging (VFI) has the potential to improve the accuracy of early-detection of Barrett’s-associated neoplasia. In vivo endoscopic VFI images were collected from 65 sites in 14 patients with confirmed Barrett’s esophagus (BE), dysplasia, or esophageal adenocarcinoma using a modular video endoscope and a high-resolution microendoscope (HRME). Qualitative image features were compared to histology; VFI and HRME images show changes in glandular structure associated with neoplastic progression. Quantitative image features in VFI images were identified for objective image classification of metaplasia and neoplasia, and a diagnostic algorithm was developed using leave-one-out cross validation. Three image features extracted from VFI images were used to classify tissue as neoplastic or not with a sensitivity of 87.8% and a specificity of 77.6% (AUC=0.878). A multimodal approach incorporating VFI and HRME imaging can delineate epithelial changes present in Barrett’s-associated neoplasia. Quantitative analysis of VFI images may provide a means for objective interpretation of BE during surveillance. PMID:25950645

  7. Quantitative evaluation of in vivo vital-dye fluorescence endoscopic imaging for the detection of Barrett's-associated neoplasia.

    PubMed

    Thekkek, Nadhi; Lee, Michelle H; Polydorides, Alexandros D; Rosen, Daniel G; Anandasabapathy, Sharmila; Richards-Kortum, Rebecca

    2015-05-01

    Current imaging tools are associated with inconsistent sensitivity and specificity for detection of Barrett's-associated neoplasia. Optical imaging has shown promise in improving the classification of neoplasia in vivo. The goal of this pilot study was to evaluate whether in vivo vital dye fluorescence imaging (VFI) has the potential to improve the accuracy of early-detection of Barrett's-associated neoplasia. In vivo endoscopic VFI images were collected from 65 sites in 14 patients with confirmed Barrett's esophagus (BE), dysplasia, or esophageal adenocarcinoma using a modular video endoscope and a high-resolution microendoscope (HRME). Qualitative image features were compared to histology; VFI and HRME images show changes in glandular structure associated with neoplastic progression. Quantitative image features in VFI images were identified for objective image classification of metaplasia and neoplasia, and a diagnostic algorithm was developed using leave-one-out cross validation. Three image features extracted from VFI images were used to classify tissue as neoplastic or not with a sensitivity of 87.8% and a specificity of 77.6% (AUC = 0.878). A multimodal approach incorporating VFI and HRME imaging can delineate epithelial changes present in Barrett's-associated neoplasia. Quantitative analysis of VFI images may provide a means for objective interpretation of BE during surveillance.

  8. Virtual file system for PSDS

    NASA Technical Reports Server (NTRS)

    Runnels, Tyson D.

    1993-01-01

    This is a case study. It deals with the use of a 'virtual file system' (VFS) for Boeing's UNIX-based Product Standards Data System (PSDS). One of the objectives of PSDS is to store digital standards documents. The file-storage requirements are that the files must be rapidly accessible, stored for long periods of time - as though they were paper, protected from disaster, and accumulative to about 80 billion characters (80 gigabytes). This volume of data will be approached in the first two years of the project's operation. The approach chosen is to install a hierarchical file migration system using optical disk cartridges. Files are migrated from high-performance media to lower performance optical media based on a least-frequency-used algorithm. The optical media are less expensive per character stored and are removable. Vital statistics about the removable optical disk cartridges are maintained in a database. The assembly of hardware and software acts as a single virtual file system transparent to the PSDS user. The files are copied to 'backup-and-recover' media whose vital statistics are also stored in the database. Seventeen months into operation, PSDS is storing 49 gigabytes. A number of operational and performance problems were overcome. Costs are under control. New and/or alternative uses for the VFS are being considered.

  9. An innovative nonintrusive driver assistance system for vital signal monitoring.

    PubMed

    Sun, Ye; Yu, Xiong Bill

    2014-11-01

    This paper describes an in-vehicle nonintrusive biopotential measurement system for driver health monitoring and fatigue detection. Previous research has found that physiological signals including eye features, electrocardiography (ECG), electroencephalography (EEG) and their secondary parameters such as heart rate and heart rate variability are good indicators of health state as well as driver fatigue. A conventional biopotential measurement system requires the electrodes to be in contact with the human body. This not only interferes with driver operation, but is also not feasible for long-term monitoring. The driver assistance system in this paper can remotely detect biopotential signals with no physical contact with the skin. With careful sensor and electronics design, ECG, EEG, and eye blinking can be measured. Experiments were conducted on a high-fidelity driving simulator to validate the system performance. The system was found to be able to detect ECG/EEG signals through cloth or hair with no contact with the skin. Eye blinking activity can also be detected at a distance of 10 cm. Digital signal processing algorithms were developed to reduce signal noise and extract the physiological features. The extracted features from the vital signals were further analyzed to assess potential criteria for alertness and drowsiness determination.

  10. On decoding of multi-level MPSK modulation codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Gupta, Alok Kumar

    1990-01-01

    The decoding problem of multi-level block modulation codes is investigated. The hardware design of a soft-decision Viterbi decoder for some short-length 8-PSK block modulation codes is presented. An effective way to reduce the hardware complexity of the decoder by reducing the branch and path metrics, using a non-uniform floating-point to integer mapping scheme, is proposed and discussed. Simulation results for the design are presented. The multi-stage decoding (MSD) of multi-level modulation codes is also investigated. The cases of soft-decision and hard-decision MSD are considered and their performance is evaluated for several codes of different lengths and different minimum squared Euclidean distances. It is shown that soft-decision MSD reduces the decoding complexity drastically, although it is suboptimum. Hard-decision MSD further simplifies the decoding while still maintaining a reasonable coding gain over the uncoded system, if the component codes are chosen properly. Finally, some basic 3-level 8-PSK modulation codes using BCH codes as component codes are constructed and their coding gains are found for hard-decision multistage decoding.
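
    For readers unfamiliar with the add-compare-select recursion mentioned above, the following is a generic hard-decision Viterbi sketch for the rate-1/2, constraint-length-3 convolutional code with octal generators (7, 5); it is a schematic illustration of trellis decoding, not the 8-PSK block-code decoder designed in the paper.

        import numpy as np

        N_STATES = 4  # state = (previous input bit, bit before that)

        def encode_step(state, u):
            s1, s0 = state >> 1, state & 1
            out = (u ^ s1 ^ s0, u ^ s0)                    # generator taps 111 and 101
            return ((u << 1) | s1), out

        def viterbi_decode(received):
            """received: list of 2-bit tuples; returns the maximum-likelihood input bits."""
            T = len(received)
            metric = np.full((T + 1, N_STATES), np.inf)
            metric[0, 0] = 0.0                             # trellis starts in the all-zero state
            prev = np.zeros((T + 1, N_STATES, 2), dtype=int)   # stores (previous state, input bit)
            for t, r in enumerate(received):
                for s in range(N_STATES):
                    if not np.isfinite(metric[t, s]):
                        continue
                    for u in (0, 1):
                        ns, out = encode_step(s, u)
                        branch = (out[0] != r[0]) + (out[1] != r[1])   # Hamming branch metric
                        m = metric[t, s] + branch                      # add
                        if m < metric[t + 1, ns]:                      # compare-select
                            metric[t + 1, ns] = m
                            prev[t + 1, ns] = (s, u)
            s = int(np.argmin(metric[T]))                  # trace back from the best final state
            bits = []
            for t in range(T, 0, -1):
                s, u = prev[t, s]
                bits.append(int(u))
            return bits[::-1]

        # round trip: encode a few bits, then decode them back
        msg, state, coded = [1, 0, 1, 1, 0, 0], 0, []
        for u in msg:
            state, out = encode_step(state, u)
            coded.append(out)
        print(viterbi_decode(coded))   # -> [1, 0, 1, 1, 0, 0]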

  11. Shuttle S-band communications technical concepts

    NASA Technical Reports Server (NTRS)

    Seyl, J. W.; Seibert, W. W.; Porter, J. A.; Eggers, D. S.; Novosad, S. W.; Vang, H. A.; Lenett, S. D.; Lewton, W. A.; Pawlowski, J. F.

    1985-01-01

    Using the S-band communications system, the shuttle orbiter can communicate directly with the Earth via the Ground Spaceflight Tracking and Data Network (GSTDN) or via the Tracking and Data Relay Satellite System (TDRSS). The S-band frequencies provide the primary links for direct Earth and TDRSS communications during all launch and entry/landing phases of shuttle missions. On orbit, S-band links are used when TDRSS Ku-band is not available, when conditions require orbiter attitudes unfavorable to Ku-band communications, or when the payload bay doors are closed. The S-band communications functional requirements, the orbiter hardware configuration, and the NASA S-band communications network are described. The requirements and implementation concepts which shaped shuttle S-band hardware development are discussed, including: (1) digital voice delta modulation; (2) convolutional coding/Viterbi decoding; (3) critical modulation index for phase modulation using a Costas loop (phase-shift keying) receiver; (4) optimum digital data modulation parameters for continuous-wave frequency modulation; (5) intermodulation effects of subcarrier ranging and time-division multiplexing data channels; (6) radiofrequency coverage; and (7) despreading techniques under poor signal-to-noise conditions. Channel performance is reviewed.

  12. Effect of normalization methods on the performance of supervised learning algorithms applied to HTSeq-FPKM-UQ data sets: 7SK RNA expression as a predictor of survival in patients with colon adenocarcinoma.

    PubMed

    Shahriyari, Leili

    2017-11-03

    One of the main challenges in machine learning (ML) is choosing an appropriate normalization method. Here, we examine the effect of various normalization methods on the analysis of FPKM upper quartile (FPKM-UQ) RNA sequencing data sets. We collect the HTSeq-FPKM-UQ files of patients with colon adenocarcinoma from the TCGA-COAD project. We compare the three most common normalization methods - scaling, standardizing using the z-score, and vector normalization - by visualizing the normalized data set and evaluating the performance of 12 supervised learning algorithms on it. Additionally, for each of these normalization methods, we use two different normalization strategies: normalizing samples (files) or normalizing features (genes). Regardless of normalization method, a support vector machine (SVM) model with the radial basis function kernel had the maximum accuracy (78%) in predicting the vital status of the patients. However, the fitting time of the SVM depended on the normalization method, and it reached its minimum when files were normalized to unit length. Furthermore, among all 12 learning algorithms and 6 normalization techniques, the Bernoulli naive Bayes model after standardizing files had the best performance in terms of maximizing the accuracy and minimizing the fitting time. We also investigated the effect of dimensionality reduction methods on the performance of the supervised ML algorithms. Reducing the dimension of the data set did not increase the maximum accuracy of 78%; however, it led to the discovery of 7SK RNA expression as a predictor of survival in patients with colon adenocarcinoma with an accuracy of 78%.
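
    A compact, hedged sketch of this kind of comparison on synthetic data (not the TCGA-COAD files): the same RBF-kernel SVM is fitted after min-max scaling, z-scoring, and unit-length normalization. Note that MinMaxScaler and StandardScaler act per feature, while Normalizer acts per sample, mirroring the two normalization strategies discussed above.

        from sklearn.datasets import make_classification
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import MinMaxScaler, Normalizer, StandardScaler
        from sklearn.svm import SVC

        X, y = make_classification(n_samples=300, n_features=50, random_state=0)

        for name, scaler in [("scaling", MinMaxScaler()),
                             ("z-score", StandardScaler()),
                             ("unit length", Normalizer())]:
            pipe = make_pipeline(scaler, SVC(kernel="rbf"))
            acc = cross_val_score(pipe, X, y, cv=5).mean()
            print(f"{name:12s} accuracy = {acc:.3f}")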

  13. Lost in Virtual Reality: Pathfinding Algorithms Detect Rock Fractures and Contacts in Point Clouds

    NASA Astrophysics Data System (ADS)

    Thiele, S.; Grose, L.; Micklethwaite, S.

    2016-12-01

    UAV-based photogrammetric and LiDAR techniques provide high resolution 3D point clouds and ortho-rectified photomontages that can capture surface geology in outstanding detail over wide areas. Automated and semi-automated methods are vital to extract full value from these data in practical time periods, though the nuances of geological structures and materials (natural variability in colour and geometry, soft and hard linkage, shadows and multiscale properties) make this a challenging task. We present a novel method for computer assisted trace detection in dense point clouds, using a lowest cost path solver to "follow" fracture traces and lithological contacts between user defined end points. This is achieved by defining a local neighbourhood network where each point in the cloud is linked to its neighbours, and then using a least-cost path algorithm to search this network and estimate the trace of the fracture or contact. A variety of different algorithms can then be applied to calculate the best fit plane, produce a fracture network, or map properties such as roughness, curvature and fracture intensity. Our prototype of this method (Fig. 1) suggests the technique is feasible and remarkably good at following traces under non-optimal conditions such as variable-shadow, partial occlusion and complex fracturing. Furthermore, if a fracture is initially mapped incorrectly, the user can easily provide further guidance by defining intermediate waypoints. Future development will include optimization of the algorithm to perform well on large point clouds and modifications that permit the detection of features such as step-overs. We also plan on implementing this approach in an interactive graphical user environment.
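
    A hedged sketch of the core idea (the per-point cost below is a random stand-in, not the authors' colour/geometry cost): build a k-nearest-neighbour graph over the cloud, weight each edge by its length times the mean cost of its endpoints, and run Dijkstra between two user-picked end points to follow a trace.

        import numpy as np
        from scipy.sparse import coo_matrix
        from scipy.sparse.csgraph import dijkstra
        from scipy.spatial import cKDTree

        def trace_path(points, cost, start, end, k=8):
            """points: (N, 3) cloud; cost: (N,) per-point cost (low along the trace to follow)."""
            n = len(points)
            dist, idx = cKDTree(points).query(points, k=k + 1)         # each point plus k neighbours
            rows = np.repeat(np.arange(n), k)
            cols = idx[:, 1:].ravel()
            w = dist[:, 1:].ravel() * (cost[rows] + cost[cols]) / 2.0  # edge weight = length * mean cost
            graph = coo_matrix((w, (rows, cols)), shape=(n, n))
            _, pred = dijkstra(graph, directed=False, indices=start, return_predecessors=True)
            path, node = [], end
            while node != start and node >= 0:                          # backtrack predecessor array
                path.append(int(node))
                node = pred[node]
            return [start] + path[::-1]

        pts = np.random.rand(500, 3)
        cost = np.random.rand(500)
        print(trace_path(pts, cost, start=0, end=499))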

  14. Feasibility of a Smartphone-Based Exercise Program for Office Workers With Neck Pain: An Individualized Approach Using a Self-Classification Algorithm.

    PubMed

    Lee, Minyoung; Lee, Sang Heon; Kim, TaeYeong; Yoo, Hyun-Joon; Kim, Sung Hoon; Suh, Dong-Won; Son, Jaebum; Yoon, BumChul

    2017-01-01

    To explore the feasibility of a newly developed smartphone-based exercise program with an embedded self-classification algorithm for office workers with neck pain, by examining its effect on pain intensity, functional disability, quality of life, fear avoidance, and cervical range of motion (ROM). Single-group, repeated-measures design. The laboratory and participants' home and work environments. Office workers with neck pain (N=23; mean age ± SD, 28.13±2.97y; 13 men). Participants were classified as having 1 of 4 types of neck pain through a self-classification algorithm implemented as a smartphone application, and conducted the corresponding exercise programs for 10 to 12 min/d, 3 d/wk, for 8 weeks. The visual analog scale (VAS), Neck Disability Index (NDI), Medical Outcomes Study 36-Item Short-Form Health Survey (SF-36), Fear-Avoidance Beliefs Questionnaire (FABQ), and cervical ROM were measured at baseline and postintervention. The VAS (P<.001) and NDI score (P<.001) indicated significant improvements in pain intensity and functional disability. Quality of life showed significant improvements in the physical functioning (P=.007), bodily pain (P=.018), general health (P=.022), vitality (P=.046), and physical component scores (P=.002) of the SF-36. The FABQ, cervical ROM, and mental component score of the SF-36 showed no significant improvements. The smartphone-based exercise program with an embedded self-classification algorithm improves the pain intensity and perceived physical health of office workers with neck pain, although not enough to affect their mental and emotional states.

  15. Cross-Talk in Superconducting Transmon Quantum Computing Architecture

    NASA Astrophysics Data System (ADS)

    Abraham, David; Chow, Jerry; Corcoles, Antonio; Rothwell, Mary; Keefe, George; Gambetta, Jay; Steffen, Matthias; IBM Quantum Computing Team

    2013-03-01

    Superconducting transmon quantum computing test structures often exhibit significant undesired cross-talk. For experiments with only a handful of qubits this cross-talk can be quantified and understood, and therefore corrected. As quantum computing circuits become more complex, and thereby contain increasing numbers of qubits and resonators, it becomes more vital that the inadvertent coupling between these elements is minimized. The task of accurately controlling each single qubit to the level of precision required throughout the realization of a quantum algorithm is difficult by itself, but coupled with the need of nulling out leakage signals from neighboring qubits or resonators would quickly become impossible. We discuss an approach to solve this critical problem. We acknowledge support from IARPA under contract W911NF-10-1-0324.

  16. Application of the extreme learning machine algorithm for the prediction of monthly Effective Drought Index in eastern Australia

    NASA Astrophysics Data System (ADS)

    Deo, Ravinesh C.; Şahin, Mehmet

    2015-02-01

    The prediction of future drought is an effective mitigation tool for assessing the adverse consequences of drought events on vital water resources, agriculture, ecosystems and hydrology. Data-driven model predictions using machine learning algorithms are promising for these purposes, as they require less development time and minimal inputs, and are relatively less complex than dynamic or physical models. This paper validates a computationally simple, fast and efficient non-linear algorithm known as the extreme learning machine (ELM) for the prediction of the Effective Drought Index (EDI) in eastern Australia, using input data from 1957-2008 for training and predicting the monthly EDI over the period 2009-2011. The predictive variables for the ELM model were rainfall and mean, minimum and maximum air temperatures, supplemented by large-scale climate mode indices of interest as regression covariates, namely the Southern Oscillation Index, Pacific Decadal Oscillation, Southern Annular Mode and the Indian Ocean Dipole moment. To demonstrate the effectiveness of the proposed data-driven model, a performance comparison in terms of prediction capability and learning speed was conducted between the proposed ELM algorithm and a conventional artificial neural network (ANN) algorithm trained with Levenberg-Marquardt back-propagation. The prediction metrics confirmed the excellent performance of the ELM over the ANN model for the overall test sites, yielding Mean Absolute Errors, Root-Mean-Square Errors, Coefficients of Determination and Willmott's Indices of Agreement of 0.277, 0.008, 0.892 and 0.93 (for ELM) and 0.602, 0.172, 0.578 and 0.92 (for ANN). Moreover, the ELM model learned 32 times faster and trained 6.1 times faster than the ANN model. An improvement in the prediction capability for drought duration and severity was also achieved with the ELM model. Based on these results, we conclude that, of the two machine learning algorithms tested, the ELM was the more expeditious tool for prediction of drought and its related properties.
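
    The ELM itself is simple enough to sketch in a few lines of Python (a generic illustration with synthetic predictors, not the authors' EDI model): the hidden-layer weights are random and fixed, and only the output weights are solved in closed form with a pseudo-inverse.

        import numpy as np

        class ELM:
            def __init__(self, n_hidden=50, seed=0):
                self.n_hidden = n_hidden
                self.rng = np.random.default_rng(seed)

            def fit(self, X, y):
                self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))  # fixed random input weights
                self.b = self.rng.normal(size=self.n_hidden)
                H = np.tanh(X @ self.W + self.b)                            # hidden activations
                self.beta = np.linalg.pinv(H) @ y                           # least-squares output weights
                return self

            def predict(self, X):
                return np.tanh(X @ self.W + self.b) @ self.beta

        rng = np.random.default_rng(1)
        X = rng.normal(size=(200, 6))        # stand-ins for rainfall, temperatures, climate indices
        y = X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.normal(size=200)
        model = ELM(n_hidden=40).fit(X[:150], y[:150])
        print(np.mean(np.abs(model.predict(X[150:]) - y[150:])))   # hold-out mean absolute error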

  17. Variables influencing wearable sensor outcome estimates in individuals with stroke and incomplete spinal cord injury: a pilot investigation validating two research grade sensors.

    PubMed

    Jayaraman, Chandrasekaran; Mummidisetty, Chaithanya Krishna; Mannix-Slobig, Alannah; McGee Koch, Lori; Jayaraman, Arun

    2018-03-13

    Monitoring physical activity and leveraging wearable sensor technologies to facilitate active living in individuals with neurological impairment has been shown to yield benefits in terms of health and quality of living. In this context, accurate measurement of physical activity estimates from these sensors is vital. However, wearable sensor manufacturers generally only provide standard proprietary algorithms, based on data from healthy individuals, to estimate physical activity metrics, which may lead to inaccurate estimates in populations with neurological impairment such as stroke and incomplete spinal cord injury (iSCI). The main objective of this cross-sectional investigation was to evaluate the validity of physical activity estimates provided by standard proprietary algorithms for individuals with stroke and iSCI. Two research-grade wearable sensors used in clinical settings were chosen, and the outcome metrics estimated using standard proprietary algorithms were validated against designated gold-standard measures (Cosmed K4B2 for energy expenditure and metabolic equivalent, and manual tallying for step counts). The influence of sensor location, sensor type and activity characteristics was also studied. 28 participants (healthy (n = 10); incomplete SCI (n = 8); stroke (n = 10)) performed a spectrum of activities in a laboratory setting using two wearable sensors (ActiGraph and Metria-IH1) at different body locations. Manufacturer-provided standard proprietary algorithms estimated the step count, energy expenditure (EE) and metabolic equivalent (MET). These estimates were compared with the estimates from gold-standard measures. To verify validity, a series of Kruskal-Wallis ANOVA tests (Games-Howell multiple comparison for post-hoc analyses) was conducted to compare the mean rank and absolute agreement of the outcome metrics estimated by each of the devices with the designated gold-standard measurements. The sensor type, sensor location, activity characteristics and the population-specific condition influence the validity of physical activity metrics estimated with standard proprietary algorithms. Implementing population-specific customized algorithms that account for the influence of sensor location, sensor type and activity characteristics when estimating physical activity metrics in individuals with stroke and iSCI could be beneficial.

  18. Evolutionary algorithms for the optimization of advective control of contaminated aquifer zones

    NASA Astrophysics Data System (ADS)

    Bayer, Peter; Finkel, Michael

    2004-06-01

    Simple genetic algorithms (SGAs) and derandomized evolution strategies (DESs) are employed to adapt well capture zones for the hydraulic optimization of pump-and-treat systems. A hypothetical contaminant site in a heterogeneous aquifer serves as an application template. On the basis of the results from numerical flow modeling, particle tracking is applied to delineate the pathways of the contaminants. The objective is to find the minimum pumping rate of up to eight recharge wells within a downgradient well placement area. Both the well coordinates and the pumping rates are subject to optimization, leading to a mixed discrete-continuous problem. This article discusses the ideal formulation of the objective function, for which the number of particles and the total pumping rate are used as decision criteria. Boundary updating is introduced, which enables the reorganization of the decision-space limits by incorporating experience from previous optimization runs. Throughout the study the algorithms' capabilities are evaluated in terms of the number of model runs needed to identify optimal and suboptimal solutions. Despite the complexity of the problem, both evolutionary algorithm variants prove to be suitable for finding suboptimal solutions. The DES with weighted recombination proves to be the ideal algorithm for finding optimal solutions. Although it works with real-coded decision parameters, it is suitable for adjusting discrete well positions. In principle, the representation of well positions as binary strings in the SGA is ideal. However, even if the SGA takes advantage of bookkeeping, the vital fine discretization of pumping rates results in long binary strings, which inflates the number of model runs needed to find an optimal solution. Since the SGA string lengths increase with the number of wells, the DES gains superiority, particularly for an increasing number of wells. As the DES is a self-adaptive algorithm, it proves to be a more robust optimization method for the selected advective control problem than the SGA variants of this study, exhibiting a less stochastic search, which is reflected in the minor variability of the found solutions.

  19. A New Tool for CME Arrival Time Prediction using Machine Learning Algorithms: CAT-PUMA

    NASA Astrophysics Data System (ADS)

    Liu, Jiajia; Ye, Yudong; Shen, Chenglong; Wang, Yuming; Erdélyi, Robert

    2018-03-01

    Coronal mass ejections (CMEs) are arguably the most violent eruptions in the solar system. CMEs can cause severe disturbances in interplanetary space and can even affect human activities in many aspects, causing damage to infrastructure and loss of revenue. Fast and accurate prediction of CME arrival time is therefore vital to minimize the disruption that CMEs may cause when interacting with geospace. In this paper, we propose a new approach for partial-/full halo CME Arrival Time Prediction Using Machine learning Algorithms (CAT-PUMA). Via detailed analysis of CME features and solar-wind parameters, we build a prediction engine that takes advantage of 182 previously observed geo-effective partial-/full halo CMEs and uses Support Vector Machine algorithms. We demonstrate that CAT-PUMA is accurate and fast. In particular, predictions made after applying CAT-PUMA to a test set unknown to the engine show a mean absolute prediction error of ∼5.9 hr in the CME arrival time, with 54% of the predictions having absolute errors of less than 5.9 hr. Comparisons with other models reveal that CAT-PUMA gives a more accurate prediction for 77% of the events investigated and can be run very quickly, i.e., within minutes of providing the necessary input parameters of a CME. A practical guide containing the CAT-PUMA engine and the source code of two examples is available in the Appendix, allowing the community to perform their own predictions using CAT-PUMA.
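
    As a hedged illustration of the kind of support-vector prediction engine described above, the sketch below trains a support-vector regressor on synthetic data. The feature set, kernel choice and data are all hypothetical; CAT-PUMA's actual inputs are the CME and solar-wind parameters at eruption time.

```python
# Hypothetical support-vector regression of CME transit time (hours) from features.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(1)
n = 182                                   # number of geo-effective CMEs used in the paper
X = rng.normal(size=(n, 6))               # hypothetical features (speed, width, solar wind ...)
y = 60 + 10 * X[:, 0] - 5 * X[:, 1] + rng.normal(0, 4, n)   # hypothetical transit times

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=1.0))
model.fit(X_tr, y_tr)
print("MAE (hr):", mean_absolute_error(y_te, model.predict(X_te)))
```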

  20. Cellular automata segmentation of the boundary between the compacta of vertebral bodies and surrounding structures

    NASA Astrophysics Data System (ADS)

    Egger, Jan; Nimsky, Christopher

    2016-03-01

    Due to the aging population, spinal diseases are becoming more and more common; e.g., the lifetime risk of osteoporotic fracture is 40% for white women and 13% for white men in the United States. The number of surgical spinal procedures is therefore also increasing with the aging population, and precise diagnosis plays a vital role in reducing complications and the recurrence of symptoms. Spinal imaging of the vertebral column is a tedious process subject to interpretation errors. In this contribution, we aim to reduce the time and error of vertebral interpretation by applying and studying the GrowCut algorithm for boundary segmentation between the vertebral body compacta and surrounding structures. GrowCut is a competitive region-growing algorithm using cellular automata. For our study, vertebral T2-weighted Magnetic Resonance Imaging (MRI) scans were first manually outlined by neurosurgeons. Then, the vertebral bodies were segmented in the medical images by a GrowCut-trained physician using the semi-automated GrowCut algorithm. Afterwards, the results of both segmentation processes were compared using the Dice Similarity Coefficient (DSC) and the Hausdorff Distance (HD), which yielded a DSC of 82.99+/-5.03% and an HD of 18.91+/-7.2 voxels, respectively. In addition, the times were measured during the manual and GrowCut segmentations, showing that a GrowCut segmentation - with an average time of less than six minutes (5.77+/-0.73) - is significantly shorter than a pure manual outlining.
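
    The sketch below illustrates the GrowCut cellular-automaton idea in its simplest form (it is not the authors' implementation): labelled seed pixels "attack" their neighbours, and a neighbour is conquered when the attacker's strength, damped by image dissimilarity, exceeds the defender's strength.

```python
# Minimal GrowCut-style cellular automaton on a 2-D image (edge wrap-around ignored).
import numpy as np

def growcut(image, seeds, n_iter=200):
    # image: 2-D float array; seeds: 2-D int array (0 = unlabelled, 1 = object, 2 = background)
    labels = seeds.copy()
    strength = (seeds > 0).astype(float)
    max_diff = np.ptp(image) + 1e-9
    for _ in range(n_iter):
        changed = False
        for dy, dx in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            # shift neighbour values onto each cell
            nb_lab = np.roll(labels, (dy, dx), axis=(0, 1))
            nb_str = np.roll(strength, (dy, dx), axis=(0, 1))
            nb_img = np.roll(image, (dy, dx), axis=(0, 1))
            g = 1.0 - np.abs(image - nb_img) / max_diff      # attack damping by dissimilarity
            attack = g * nb_str
            win = (attack > strength) & (nb_lab > 0)         # neighbour conquers this cell
            labels[win] = nb_lab[win]
            strength[win] = attack[win]
            changed = changed or bool(win.any())
        if not changed:
            break
    return labels
```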

  1. Power iteration ranking via hybrid diffusion for vital nodes identification

    NASA Astrophysics Data System (ADS)

    Wu, Tao; Xian, Xingping; Zhong, Linfeng; Xiong, Xi; Stanley, H. Eugene

    2018-09-01

    One of the most interesting challenges in network science is to understand the relation between network structure and the dynamics on it, and many topological properties, including degree distribution, community strength and clustering coefficient, have been proposed in the last decade. Prominent in this context are centrality measures, which aim at quantifying the relative importance of individual nodes in the overall topology with regard to network organization and function. However, most previous centrality measures have been proposed based on different concepts, and each of them focuses on a specific structural feature of networks. Thus, these straightforward and standard methods may introduce bias into node importance measures. In this paper, we introduce two physical processes with potential complementarity between them. We then propose to integrate them with the classic eigenvector centrality framework to improve the accuracy of node ranking. To test the resulting power iteration ranking (PIRank) algorithm, we apply it to the selection of attack targets in the network optimal attack problem. Extensive experimental results on synthetic networks and real-world networks suggest that the proposed centrality performs better than other well-known measures. Moreover, compared with eigenvector centrality, the PIRank algorithm achieves about a thirty percent performance improvement while keeping similar running time. Our experiment on random networks also shows that the PIRank algorithm can avoid the localization phenomenon of eigenvector centrality, in particular for networks with high-degree hubs.
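
    For reference, the sketch below computes plain eigenvector centrality by power iteration, the classic framework that PIRank builds on; the hybrid-diffusion terms specific to PIRank are omitted, and the small example network is made up.

```python
# Power iteration for eigenvector centrality on an adjacency matrix.
import numpy as np

def eigenvector_centrality(A, tol=1e-8, max_iter=1000):
    n = A.shape[0]
    x = np.ones(n) / n
    for _ in range(max_iter):
        x_new = A @ x                       # one power-iteration step
        x_new /= np.linalg.norm(x_new)      # renormalise
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x_new

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(eigenvector_centrality(A))            # node 2 (the hub) scores highest
```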

  2. Formation of production structural units within a construction company using the systemic integrated method when implementing high-rise development projects

    NASA Astrophysics Data System (ADS)

    Lapidus, Azary; Abramov, Ivan

    2018-03-01

    Development of efficient algorithms for designing future operations is a vital element of the construction business. This paper studies various aspects of a methodology required to determine the integration index for construction crews performing various process-related jobs. The main objective of the study outlined in this paper is to define the notion of integration in respect of a construction crew that performs complete cycles of construction and assembly works, in order to find optimal organizational solutions using the integrated crew algorithm built specifically for that purpose. As seen in the sequence of algorithm elements, it was designed to focus on the key factors affecting the level of integration of a construction crew, depending on the value of each of those elements. A multifactor modelling approach is used to assess the KPIs of integrated construction crews involved in large-scale high-rise construction projects. The purpose of this study is to develop theoretical recommendations and scientific methodological provisions of an organizational and technological nature to ensure the qualitative formation of integrated construction crews and to increase their productivity during the integrated implementation of multi-task construction phases. The key difference of the proposed solution from existing ones is that it requires identification of the degree of impact of each factor, including changes in qualification level, on the integration index of each separate element in the organizational and technological system in construction (the integrated construction crew).

  3. Demosaicking for full motion video 9-band SWIR sensor

    NASA Astrophysics Data System (ADS)

    Kanaev, Andrey V.; Rawhouser, Marjorie; Kutteruf, Mary R.; Yetzbacher, Michael K.; DePrenger, Michael J.; Novak, Kyle M.; Miller, Corey A.; Miller, Christopher W.

    2014-05-01

    Short wave infrared (SWIR) spectral imaging systems are vital for Intelligence, Surveillance, and Reconnaissance (ISR) applications because of their ability to autonomously detect targets and classify materials. Typically, spectral imagers are incapable of providing Full Motion Video (FMV) because of their reliance on line scanning. We enable FMV capability for a SWIR multi-spectral camera by creating a repeating pattern of 3x3 spectral filters on a staring focal plane array (FPA). In this paper we present imagery from an FMV SWIR camera with nine discrete bands and discuss the image processing algorithms necessary for its operation. The main task of image processing in this case is demosaicking of the spectral bands, i.e., reconstructing full spectral images with the original FPA resolution from spatially subsampled and incomplete spectral data acquired with the chosen filter array pattern. To the best of the authors' knowledge, demosaicking algorithms for nine or more equally sampled bands have not been reported before. Moreover, all existing algorithms developed for demosaicking visible color filter arrays with fewer than nine colors assume either certain relationships between the visible colors, which are not valid for SWIR imaging, or the presence of one color band with a higher sampling rate than the rest of the bands, which does not conform to our spectral filter pattern. We discuss and present results for two novel approaches to demosaicking: interpolation using multi-band edge information and application of multi-frame super-resolution to single-frame resolution enhancement of multi-spectral spatially multiplexed images.

  4. Learning oncogenetic networks by reducing to mixed integer linear programming.

    PubMed

    Shahrabi Farahani, Hossein; Lagergren, Jens

    2013-01-01

    Cancer can result from the accumulation of different types of genetic mutations such as copy number aberrations. The data from tumors are cross-sectional and do not contain the temporal order of the genetic events. Finding the order in which the genetic events have occurred, and the progression pathways, is of vital importance in understanding the disease. In order to model cancer progression, we propose Progression Networks, a special case of Bayesian networks tailored to model disease progression. Progression networks have similarities with Conjunctive Bayesian Networks (CBNs) [1], a variation of Bayesian networks also proposed for modeling disease progression. We also describe a learning algorithm for learning Bayesian networks in general and progression networks in particular. We reduce the hard problem of learning the Bayesian and progression networks to Mixed Integer Linear Programming (MILP). MILP is a Non-deterministic Polynomial-time complete (NP-complete) problem for which very good heuristics exist. We tested our algorithm on synthetic and real cytogenetic data from renal cell carcinoma. We also compared our learned progression networks with the networks proposed in earlier publications. The software is available on the website https://bitbucket.org/farahani/diprog.

  5. Continuous energy adjoint transport for photons in PHITS

    NASA Astrophysics Data System (ADS)

    Malins, Alex; Machida, Masahiko; Niita, Koji

    2017-09-01

    Adjoint Monte Carlo can be an efficient algorithm for solving photon transport problems where the size of the tally is relatively small compared to the source. Such problems are typical in environmental radioactivity calculations, where natural or fallout radionuclides spread over a large area contribute to the air dose rate at a particular location. Moreover, photon transport with a continuous energy representation is vital for accurately calculating radiation protection quantities. Here we describe the incorporation of an adjoint Monte Carlo capability for continuous energy photon transport into the Particle and Heavy Ion Transport code System (PHITS). An adjoint cross section library for photon interactions was developed based on the JENDL-4.0 library, by adding cross sections for adjoint incoherent scattering and pair production. PHITS reads in the library and implements the adjoint transport algorithm by Hoogenboom. Adjoint pseudo-photons are spawned within the forward tally volume and transported through space. Currently pseudo-photons can undergo coherent and incoherent scattering within the PHITS adjoint function. Photoelectric absorption is treated implicitly. The calculation result is recovered from the pseudo-photon flux calculated over the true source volume. A new adjoint tally function facilitates this conversion. This paper gives an overview of the new function and discusses potential future developments.

  6. Modelling sea ice dynamics

    NASA Astrophysics Data System (ADS)

    Murawski, Jens; Kleine, Eckhard

    2017-04-01

    Sea ice remains one of the frontiers of ocean modelling and is of vital importance for correct forecasts of the northern oceans. At large scale, it is commonly considered a continuous medium whose dynamics are modelled in terms of continuum mechanics. Its specifics are a matter of constitutive behaviour, which may be characterised as rigid-plastic. The newly developed sea ice dynamics module is based on general principles and follows a systematic approach to the problem. Both the drift field and the stress field are modelled by a variational property. Rigidity is treated by Lagrangian relaxation. This leads to a sensible numerical method. Modelling fast ice remains a challenge. It is understood that ridging and the formation of grounded ice keels play a role in the process. The ice dynamics model includes a parameterisation of the stress associated with grounded ice keels. Shear against the grounded bottom contact might lead to plastic deformation and the loss of integrity. The numerical scheme involves a potentially large system of linear equations, which is solved by preconditioned iteration. The entire algorithm consists of several components which result from decomposing the problem. The algorithm has been implemented and tested in practice.

  7. Genetic algorithm based feature selection combined with dual classification for the automated detection of proliferative diabetic retinopathy.

    PubMed

    Welikala, R A; Fraz, M M; Dehmeshki, J; Hoppe, A; Tah, V; Mann, S; Williamson, T H; Barman, S A

    2015-07-01

    Proliferative diabetic retinopathy (PDR) is a condition that carries a high risk of severe visual impairment. The hallmark of PDR is the growth of abnormal new vessels. In this paper, an automated method for the detection of new vessels from retinal images is presented. This method is based on a dual classification approach. Two vessel segmentation approaches are applied to create two separate binary vessel maps, each of which holds vital information. Local morphology features are measured from each binary vessel map to produce two separate 4-D feature vectors. Independent classification is performed for each feature vector using a support vector machine (SVM) classifier. The system then combines these individual outcomes to produce a final decision. This is followed by the creation of additional features to generate 21-D feature vectors, which feed into a genetic algorithm based feature selection approach with the objective of finding feature subsets that improve the performance of the classification. Sensitivity and specificity results using a dataset of 60 images are 0.9138 and 0.9600, respectively, on a per-patch basis and 1.000 and 0.975, respectively, on a per-image basis. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Simultaneous, noninvasive, in vivo, continuous monitoring of hematocrit, vascular volume, hemoglobin oxygen saturation, pulse rate and breathing rate in humans and other animal models using a single light source

    NASA Astrophysics Data System (ADS)

    Dent, Paul; Tun, Sai Han; Fillioe, Seth; Deng, Bin; Satalin, Josh; Nieman, Gary; Wilcox, Kailyn; Searles, Quinn; Narsipur, Sri; Peterson, Charles M.; Goodisman, Jerry; Mostrom, James; Steinmann, Richard; Chaiken, J.

    2018-02-01

    We previously reported a new algorithm "PV[O]H" for continuous, noninvasive, in vivo monitoring of hematocrit changes in blood and have since shown its utility for monitoring in humans during 1) hemodialysis, 2) orthostatic perturbations and 3) during blood loss and fluid replacement in a rat model. We now show that the algorithm is sensitive to changes in hemoglobin oxygen saturation. We document the phenomenology of the effect and explain the effect using new results obtained from humans and rat models. The oxygen sensitivity derives from the differential absorption of autofluorescence originating in the static tissues by oxy and deoxy hemoglobin. Using this approach we show how to perform simultaneous, noninvasive, in vivo, continuous monitoring of hematocrit, vascular volume, hemoglobin oxygen saturation, pulse rate and breathing rate in mammals using a single light source. We suspect that monitoring of changes in this suite of vital signs can be provided with improved time response, sensitivity and precision compared to existing methodologies. Initial results also offer a more detailed glimpse into the systemic oxygen transport in the circulatory system of humans.

  9. Prediction of Drug-Plasma Protein Binding Using Artificial Intelligence Based Algorithms.

    PubMed

    Kumar, Rajnish; Sharma, Anju; Siddiqui, Mohammed Haris; Tiwari, Rajesh Kumar

    2018-01-01

    Plasma protein binding (PPB) has vital importance in the characterization of drug distribution in the systemic circulation. Unfavorable PPB can have a negative effect on the clinical development of promising drug candidates. The drug distribution properties should be considered at the initial phases of drug design and development. Therefore, PPB prediction models are receiving increased attention. In the current study, we present a systematic approach using Support vector machine, Artificial neural network, k-nearest neighbor, Probabilistic neural network, Partial least squares and Linear discriminant analysis to relate various in vitro and in silico molecular descriptors to a diverse dataset of 736 drugs/drug-like compounds. The overall accuracy of the Support vector machine with a Radial basis function kernel came out to be comparatively better than the rest of the applied algorithms. The training set accuracy, validation set accuracy, precision, sensitivity, specificity and F1 score for the Support vector machine were found to be 89.73%, 89.97%, 92.56%, 87.26%, 91.97% and 0.898, respectively. This model can potentially be useful in screening relevant drug candidates at the preliminary stages of drug design and development. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  10. Relative optical navigation around small bodies via Extreme Learning Machine

    NASA Astrophysics Data System (ADS)

    Law, Andrew M.

    To perform close proximity operations in a low-gravity environment, relative and absolute positions are vital information for the maneuver; hence navigation is inseparably integrated in space travel. The Extreme Learning Machine (ELM) is presented as an optical navigation method around small celestial bodies. Optical navigation uses visual observation instruments such as a camera to acquire useful data and determine spacecraft position. The required input data for operation are merely a single image strip and a nadir image. ELM is a machine-learning Single Layer feed-Forward Network (SLFN), a type of neural network (NN). The algorithm is developed on the premise that input weights and biases can be randomly assigned and that back-propagation is not required. The learned model consists of the output-layer weights, which are used to calculate a prediction. Together, Extreme Learning Machine Optical Navigation (ELM OpNav) utilizes optical images and the ELM algorithm to train the machine to navigate around a target body. In this thesis the asteroid Vesta is the designated celestial body. The trained ELMs estimate the position of the spacecraft during operation with a single data set. The results show the approach is promising and potentially suitable for on-board navigation.
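
    A minimal ELM sketch matching the description above is given below: the input weights and biases are random and never trained, and only the output-layer weights are solved in closed form by least squares. The data are hypothetical, not the Vesta image strips used in the thesis.

```python
# Extreme Learning Machine: random hidden layer, least-squares output weights.
import numpy as np

rng = np.random.default_rng(42)

def elm_train(X, Y, n_hidden=100):
    W = rng.normal(size=(X.shape[1], n_hidden))    # random input weights (never trained)
    b = rng.normal(size=n_hidden)                  # random biases
    H = np.tanh(X @ W + b)                         # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)   # output weights via least squares
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

X = rng.normal(size=(500, 20))                     # hypothetical image features
Y = X[:, :3] @ rng.normal(size=(3, 3))             # hypothetical 3-D position targets
W, b, beta = elm_train(X, Y)
print(np.abs(elm_predict(X, W, b, beta) - Y).mean())
```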

  11. Adaptive threshold shearlet transform for surface microseismic data denoising

    NASA Astrophysics Data System (ADS)

    Tang, Na; Zhao, Xian; Li, Yue; Zhu, Dan

    2018-06-01

    Random noise suppression plays an important role in microseismic data processing. Microseismic data are often corrupted by strong random noise, which directly influences the identification and location of microseismic events. The shearlet transform is a new multiscale transform, which can effectively process low-magnitude microseismic data. In the shearlet domain, due to the different distributions of valid signals and random noise, shearlet coefficients can be shrunk by thresholding. Therefore, the threshold is vital in suppressing random noise. Conventional threshold denoising algorithms usually use the same threshold for all coefficients, which causes inefficient noise suppression or loss of valid signals. In order to solve these problems, we propose the adaptive threshold shearlet transform (ATST) for surface microseismic data denoising. In the new algorithm, we first calculate the fundamental threshold for each direction subband. In each direction subband, an adjustment factor is obtained from each subband coefficient and its neighboring coefficients, in order to adaptively regulate the fundamental threshold for different shearlet coefficients. Finally, we apply the adaptive threshold to the different shearlet coefficients. The experimental denoising results on synthetic records and field data illustrate that the proposed method exhibits better performance in suppressing random noise and preserving valid signal than the conventional shearlet denoising method.
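
    The sketch below is a hedged illustration of the neighbourhood-adaptive soft-thresholding idea described above, applied to one generic subband of coefficients. The shearlet transform itself is omitted (a library such as pyshearlab could supply the coefficients), and the adjustment rule is a plausible stand-in, not the paper's exact formula.

```python
# Adaptive soft thresholding of one subband: coefficients in high-energy
# neighbourhoods (likely signal) receive a smaller effective threshold.
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_soft_threshold(coeffs, k=3.0, neighborhood=3):
    # Fundamental threshold for the subband from a robust noise estimate.
    sigma = np.median(np.abs(coeffs)) / 0.6745
    base = k * sigma
    # Adjustment factor: local energy relative to the subband mean energy.
    local_energy = uniform_filter(coeffs ** 2, size=neighborhood)
    adjust = np.sqrt(local_energy / (local_energy.mean() + 1e-12))
    thresh = base / (adjust + 1e-12)
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - thresh, 0.0)
```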

  12. Vital-dye-enhanced multimodal imaging of neoplastic progression in a mouse model of oral carcinogenesis

    NASA Astrophysics Data System (ADS)

    Hellebust, Anne; Rosbach, Kelsey; Wu, Jessica Keren; Nguyen, Jennifer; Gillenwater, Ann; Vigneswaran, Nadarajah; Richards-Kortum, Rebecca

    2013-12-01

    In this longitudinal study, a mouse model of 4-nitroquinoline 1-oxide chemically induced tongue carcinogenesis was used to assess the ability of optical imaging with exogenous and endogenous contrast to detect neoplastic lesions in a heterogeneous mucosal surface. Widefield autofluorescence and fluorescence images of intact 2-NBDG-stained and proflavine-stained tissues were acquired at multiple time points in the carcinogenesis process. Confocal fluorescence images of transverse fresh tissue slices from the same specimens were acquired to investigate how changes in tissue microarchitecture affect widefield fluorescence images of intact tissue. Widefield images were analyzed to develop and evaluate an algorithm to delineate areas of dysplasia and cancer. A classification algorithm for the presence of neoplasia based on the mean fluorescence intensity of 2-NBDG staining and the standard deviation of the fluorescence intensity of proflavine staining was found to separate moderate dysplasia, severe dysplasia, and cancer from non-neoplastic regions of interest with 91% sensitivity and specificity. Results suggest this combination of noninvasive optical imaging modalities can be used in vivo to discriminate non-neoplastic from neoplastic tissue in this model with the potential to translate this technology to the clinic.

  13. Digital Self-Interference Cancellation for Asynchronous In-Band Full-Duplex Underwater Acoustic Communication.

    PubMed

    Qiao, Gang; Gan, Shuwei; Liu, Songzuo; Ma, Lu; Sun, Zongxin

    2018-05-24

    To improve the throughput of underwater acoustic (UWA) networking, in-band full-duplex (IBFD) communication is one of the most vital research directions. The major drawback of IBFD-UWA communication is Self-Interference (SI). This paper presents a digital SI cancellation algorithm for an asynchronous IBFD-UWA communication system. We focus on two issues: one is asynchronous operation, unlike IBFD radio communication; the other is the nonlinear distortion caused by the power amplifier (PA). First, we discuss the asynchronous IBFD-UWA signal model with the nonlinear distortion of the PA. Then, we design a scheme for asynchronous IBFD-UWA communication that utilizes the non-overlapping region between the SI and the intended signal to estimate the nonlinear SI channel. To cancel the nonlinear distortion caused by the PA, we propose an Over-Parameterization based Recursive Least Squares (OPRLS) algorithm to estimate the nonlinear SI channel. Furthermore, we present the OPRLS with a sparse constraint to estimate the SI channel, which reduces the required length of the non-overlapping region. Finally, we verify our concept through simulation and a pool experiment. Results demonstrate that the proposed digital SI cancellation scheme can cancel SI efficiently.
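
    For orientation, the sketch below shows only the plain recursive-least-squares (RLS) channel-estimation recursion that OPRLS builds on; the over-parameterization that absorbs the power-amplifier nonlinearity and the sparse constraint are omitted, and all signal names are illustrative.

```python
# Plain RLS estimation of a self-interference channel from known transmitted samples.
import numpy as np

def rls_estimate(x, d, order=8, lam=0.99, delta=100.0):
    # x: known transmitted (self-interference) samples; d: received samples
    w = np.zeros(order)                   # channel estimate
    P = np.eye(order) * delta             # inverse correlation matrix
    for n in range(order, len(x)):
        u = x[n - order:n][::-1]          # regressor of the last `order` samples
        k = P @ u / (lam + u @ P @ u)     # gain vector
        e = d[n] - w @ u                  # a-priori error
        w = w + k * e                     # update channel estimate
        P = (P - np.outer(k, u @ P)) / lam
    return w
```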

  14. Overlapping Community Detection based on Network Decomposition

    NASA Astrophysics Data System (ADS)

    Ding, Zhuanlian; Zhang, Xingyi; Sun, Dengdi; Luo, Bin

    2016-04-01

    Community detection in complex networks has become a vital step in understanding the structure and dynamics of networks in various fields. However, traditional node clustering and the relatively new link clustering methods have inherent drawbacks in discovering overlapping communities. Node clustering is inadequate to capture the pervasive overlaps, while link clustering is often criticized for its high computational cost and ambiguous definition of communities. Overlapping community detection therefore remains a formidable challenge. In this work, we propose a new overlapping community detection algorithm based on network decomposition, called NDOCD. Specifically, NDOCD iteratively splits the network by removing all links in derived link communities, which are identified by utilizing a node clustering technique. The network decomposition contributes to reducing the computation time, and the elimination of noise links improves the quality of the obtained communities. Besides, we employ a node clustering technique rather than a link similarity measure to discover link communities, so NDOCD avoids an ambiguous definition of community and becomes less time-consuming. We test our approach on both synthetic and real-world networks. Results demonstrate the superior performance of our approach in both computation time and accuracy compared to state-of-the-art algorithms.

  15. Disaggregating census data for population mapping using random forests with remotely-sensed and ancillary data.

    PubMed

    Stevens, Forrest R; Gaughan, Andrea E; Linard, Catherine; Tatem, Andrew J

    2015-01-01

    High resolution, contemporary data on human population distributions are vital for measuring the impacts of population growth, monitoring human-environment interactions and for planning and policy development. Many methods are used to disaggregate census data and predict population densities for finer-scale, gridded population data sets. We present a new semi-automated dasymetric modeling approach that incorporates detailed census and ancillary data in a flexible "Random Forest" estimation technique. We outline the combination of widely available, remotely-sensed and geospatial data that contribute to the modeled dasymetric weights and then use the Random Forest model to generate a gridded prediction of population density at ~100 m spatial resolution. This prediction layer is then used as the weighting surface to perform dasymetric redistribution of the census counts at a country level. As a case study we compare the new algorithm and its products for three countries (Vietnam, Cambodia, and Kenya) with other common gridded population data production methodologies. We discuss the advantages of the new method and its gains in accuracy and flexibility over those previous approaches. Finally, we outline how this algorithm will be extended to provide freely-available gridded population data sets for Africa, Asia and Latin America.
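
    The sketch below is a hedged, simulated illustration of the dasymetric workflow described above: a random forest learns a relationship between ancillary covariates and census population density, its prediction becomes a gridded weighting layer, and census counts are redistributed onto the grid in proportion to those weights. All variable names and data are invented.

```python
# Random-forest dasymetric weighting and redistribution on simulated data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_units, n_cells_per_unit, n_cov = 200, 50, 5

# Hypothetical covariates (e.g. land cover fractions, night lights, distance to roads)
X_units = rng.random((n_units, n_cov))
log_density = 2.0 * X_units[:, 0] - 1.0 * X_units[:, 1] + rng.normal(0, 0.2, n_units)

rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X_units, log_density)                       # trained at census-unit level

# Predict a weighting surface at grid-cell level, then redistribute each unit's count.
X_cells = rng.random((n_units, n_cells_per_unit, n_cov))
weights = np.exp(rf.predict(X_cells.reshape(-1, n_cov))).reshape(n_units, n_cells_per_unit)
census_counts = rng.integers(1_000, 50_000, n_units)
gridded_pop = census_counts[:, None] * weights / weights.sum(axis=1, keepdims=True)
assert np.allclose(gridded_pop.sum(axis=1), census_counts)   # counts are preserved
```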

  16. Unsupervised Network Analysis of the Plastic Supraoptic Nucleus Transcriptome Predicts Caprin2 Regulatory Interactions.

    PubMed

    Loh, Su-Yi; Jahans-Price, Thomas; Greenwood, Michael P; Greenwood, Mingkwan; Hoe, See-Ziau; Konopacka, Agnieszka; Campbell, Colin; Murphy, David; Hindmarch, Charles C T

    2017-01-01

    The supraoptic nucleus (SON) is a group of neurons in the hypothalamus responsible for the synthesis and secretion of the peptide hormones vasopressin and oxytocin. Following physiological cues, such as dehydration, salt-loading and lactation, the SON undergoes a function-related plasticity that we have previously described in the rat at the transcriptome level. Using the unsupervised graphical lasso (Glasso) algorithm, we reconstructed a putative network from 500 plastic SON genes in which genes are the nodes and the edges are the inferred interactions. The most active nodal gene identified within the network was Caprin2. Caprin2 encodes an RNA-binding protein that we have previously shown to be vital for the functioning of osmoregulatory neuroendocrine neurons in the SON of the rat hypothalamus. To test the validity of the Glasso network, we either overexpressed or knocked down Caprin2 transcripts in differentiated rat pheochromocytoma PC12 cells and showed that these manipulations had significant opposite effects on the levels of putative target mRNAs. These studies suggest that the predictive power of the Glasso algorithm within an in vivo system is accurate, and that it identifies biological targets that may be important to the functional plasticity of the SON.
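
    As a hedged illustration of the graphical lasso approach mentioned above, the sketch below infers a sparse interaction network from simulated expression data; it is not the authors' pipeline, and the data are not the rat SON transcriptome.

```python
# Graphical lasso: sparse precision matrix whose nonzero off-diagonals are inferred edges.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
n_samples, n_genes = 40, 10
expr = rng.normal(size=(n_samples, n_genes))
expr[:, 1] += 0.8 * expr[:, 0]              # two artificially coupled "genes"

model = GraphicalLasso(alpha=0.2).fit(expr)
precision = model.precision_                # sparse inverse covariance
edges = np.argwhere(np.triu(np.abs(precision) > 1e-3, k=1))
print(edges)                                # inferred gene-gene interactions
```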

  17. Real-time implementation of an interactive jazz accompaniment system

    NASA Astrophysics Data System (ADS)

    Deshpande, Nikhil

    Modern computational algorithms and digital signal processing (DSP) are able to combine with human performers without forced or predetermined structure in order to create dynamic and real-time accompaniment systems. With modern computing power and intelligent algorithm layout and design, it is possible to achieve more detailed auditory analysis of live music. Using this information, computer code can follow and predict how a human's musical performance evolves, and use this to react in a musical manner. This project builds a real-time accompaniment system to perform together with live musicians, with a focus on live jazz performance and improvisation. The system utilizes a new polyphonic pitch detector and embeds it in an Ableton Live system - combined with Max for Live - to perform elements of audio analysis, generation, and triggering. The system also relies on tension curves and information rate calculations from the Creative Artificially Intuitive and Reasoning Agent (CAIRA) system to help understand and predict human improvisation. These metrics are vital to the core system and allow for extrapolated audio analysis. The system is able to react dynamically to a human performer, and can successfully accompany the human as an entire rhythm section.

  18. Joint seismic data denoising and interpolation with double-sparsity dictionary learning

    NASA Astrophysics Data System (ADS)

    Zhu, Lingchen; Liu, Entao; McClellan, James H.

    2017-08-01

    Seismic data quality is vital to geophysical applications, so that methods of data recovery, including denoising and interpolation, are common initial steps in the seismic data processing flow. We present a method to perform simultaneous interpolation and denoising, which is based on double-sparsity dictionary learning. This extends previous work that was for denoising only. The original double-sparsity dictionary learning algorithm is modified to track the traces with missing data by defining a masking operator that is integrated into the sparse representation of the dictionary. A weighted low-rank approximation algorithm is adopted to handle the dictionary updating as a sparse recovery optimization problem constrained by the masking operator. Compared to traditional sparse transforms with fixed dictionaries that lack the ability to adapt to complex data structures, the double-sparsity dictionary learning method learns the signal adaptively from selected patches of the corrupted seismic data, while preserving compact forward and inverse transform operators. Numerical experiments on synthetic seismic data indicate that this new method preserves more subtle features in the data set without introducing pseudo-Gibbs artifacts when compared to other directional multi-scale transform methods such as curvelets.

  19. Reducing uncertainty in wind turbine blade health inspection with image processing techniques

    NASA Astrophysics Data System (ADS)

    Zhang, Huiyi

    Structural health inspection has been widely applied in the operation of wind farms to find early cracks in wind turbine blades (WTBs). Increased numbers of turbines and expanded rotor diameters are driving up the workloads and safety risks for site employees. Therefore, it is important to automate the inspection process as well as minimize the uncertainties involved in routine blade health inspection. In addition, crack documentation and trending are vital to assessing rotor blade and turbine reliability over the 20-year design life span. A new crack recognition and classification algorithm is described that can support automated structural health inspection of the surface of large composite WTBs. The first part of the study investigated the feasibility of digital image processing in WTB health inspection and defined the capability of numerically detecting cracks as small as hairline thickness. The second part of the study identified and analyzed the uncertainty of the digital image processing method. A self-learning algorithm was proposed to recognize and classify cracks without comparing a blade image to a library of crack images. The last part of the research quantified the uncertainty in the field conditions and the image processing methods.

  20. Quantitative assessment of the impact of biomedical image acquisition on the results obtained from image analysis and processing.

    PubMed

    Koprowski, Robert

    2014-07-04

    Dedicated, automatic algorithms for image analysis and processing are becoming more and more common in medical diagnosis. When creating dedicated algorithms, many factors must be taken into consideration. They are associated with selecting the appropriate algorithm parameters and taking into account the impact of data acquisition on the results obtained. An important feature of algorithms is the possibility of their use in other medical units by other operators. This problem, namely operator's (acquisition) impact on the results obtained from image analysis and processing, has been shown on a few examples. The analysed images were obtained from a variety of medical devices such as thermal imaging, tomography devices and those working in visible light. The objects of imaging were cellular elements, the anterior segment and fundus of the eye, postural defects and others. In total, almost 200'000 images coming from 8 different medical units were analysed. All image analysis algorithms were implemented in C and Matlab. For various algorithms and methods of medical imaging, the impact of image acquisition on the results obtained is different. There are different levels of algorithm sensitivity to changes in the parameters, for example: (1) for microscope settings and the brightness assessment of cellular elements there is a difference of 8%; (2) for the thyroid ultrasound images there is a difference in marking the thyroid lobe area which results in a brightness assessment difference of 2%. The method of image acquisition in image analysis and processing also affects: (3) the accuracy of determining the temperature in the characteristic areas on the patient's back for the thermal method - error of 31%; (4) the accuracy of finding characteristic points in photogrammetric images when evaluating postural defects - error of 11%; (5) the accuracy of performing ablative and non-ablative treatments in cosmetology - error of 18% for the nose, 10% for the cheeks, and 7% for the forehead. Similarly, when: (7) measuring the anterior eye chamber - there is an error of 20%; (8) measuring the tooth enamel thickness - error of 15%; (9) evaluating the mechanical properties of the cornea during pressure measurement - error of 47%. The paper presents vital, selected issues occurring when assessing the accuracy of designed automatic algorithms for image analysis and processing in bioengineering. The impact of acquisition of images on the problems arising in their analysis has been shown on selected examples. It has also been indicated to which elements of image analysis and processing special attention should be paid in their design.

  1. Cloud retrievals from satellite data using optimal estimation: evaluation and application to ATSR

    NASA Astrophysics Data System (ADS)

    Poulsen, C. A.; Siddans, R.; Thomas, G. E.; Sayer, A. M.; Grainger, R. G.; Campmany, E.; Dean, S. M.; Arnold, C.; Watts, P. D.

    2012-08-01

    Clouds play an important role in balancing the Earth's radiation budget. Hence, it is vital that cloud climatologies are produced that quantify cloud macro- and microphysical parameters and the associated uncertainty. In this paper, we present the ORAC (Oxford-RAL retrieval of Aerosol and Cloud) algorithm, which is based on fitting a physically consistent cloud model to satellite observations simultaneously from the visible to the mid-infrared, thereby ensuring that the resulting cloud properties provide a good representation of both the short-wave and long-wave radiative effects of the observed cloud. The advantages of the optimal estimation method are that it enables rigorous error propagation and the inclusion of all measurements and any a priori information, with associated errors, in a rigorous mathematical framework. The algorithm provides a measure of the consistency between the retrieved representation of the cloud and the satellite radiances. The cloud parameters retrieved are the cloud top pressure, cloud optical depth, cloud effective radius, cloud fraction and cloud phase. The algorithm can be applied to most visible/infrared satellite instruments. In this paper, we demonstrate its applicability to the Along-Track Scanning Radiometers ATSR-2 and AATSR. Examples of applying the algorithm to ATSR-2 flight data are presented and the sensitivity of the retrievals assessed; in particular, the algorithm is evaluated for a number of simulated single-layer and multi-layer conditions. The algorithm was found to perform well for single-layer cloud except when the cloud was very thin, i.e., less than one optical depth. For multi-layer cloud, the algorithm was robust except when the upper ice cloud layer was less than five optical depths. In these cases the retrieved cloud top pressure and cloud effective radius become a weighted average of the two layers. The total optical depth of multi-layer cloud is retrieved well until the cloud becomes thick, greater than 50 optical depths, where the retrieval begins to saturate. The cost proved a good indicator of multi-layer scenarios. Both the retrieval cost and the error need to be considered together in order to evaluate the quality of the retrieval. The algorithm in the configuration described here has been applied to both ATSR-2 and AATSR visible and infrared measurements in the context of the GRAPE (Global Retrieval and cloud Product Evaluation) project to produce a consistent 14-year record for climate research.
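
    As a hedged illustration of an optimal-estimation retrieval step of the kind ORAC performs, the sketch below minimizes a cost that combines the misfit to the measurements and the departure from the prior, using a Gauss-Newton iteration. The forward model here is a toy linear model standing in for a radiative-transfer code; nothing below is ORAC's actual implementation.

```python
# Gauss-Newton optimal estimation with prior and measurement covariances.
import numpy as np

def gauss_newton_oe(y, forward, jacobian, x_a, S_a, S_y, n_iter=10):
    # y: measurements; x_a, S_a: prior state and covariance; S_y: measurement covariance
    x = x_a.copy()
    S_a_inv, S_y_inv = np.linalg.inv(S_a), np.linalg.inv(S_y)
    for _ in range(n_iter):
        K = jacobian(x)                                       # Jacobian of forward model
        lhs = K.T @ S_y_inv @ K + S_a_inv
        rhs = K.T @ S_y_inv @ (y - forward(x)) - S_a_inv @ (x - x_a)
        x = x + np.linalg.solve(lhs, rhs)
    cost = ((y - forward(x)) @ S_y_inv @ (y - forward(x))
            + (x - x_a) @ S_a_inv @ (x - x_a))                # retrieval cost
    return x, cost, np.linalg.inv(lhs)                        # state, cost, posterior cov.

# Toy demo with a linear forward model y = K x.
K_true = np.array([[1.0, 0.5], [0.2, 2.0], [1.5, 0.1]])
fwd = lambda x: K_true @ x
jac = lambda x: K_true
x_true = np.array([2.0, -1.0])
y_obs = fwd(x_true) + 0.01
x_a, S_a, S_y = np.zeros(2), np.eye(2) * 10.0, np.eye(3) * 0.01 ** 2
x_hat, cost, S_post = gauss_newton_oe(y_obs, fwd, jac, x_a, S_a, S_y)
print(x_hat, cost)
```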

  2. Response characteristics of the cat somatosensory cortex following the mechanical stimulation to non-vital and vital canine.

    PubMed

    Tao, Jianxiang; Wang, Duo; Ran, Jie; Jin, Anqi; Yu, Hongbo

    2017-11-05

    Patients sometimes complain that non-vital teeth after root canal treatment (RCT) feel paresthetic compared with vital teeth, and previous psychological studies on the tactile sensibility of non-vital teeth have remained controversial. In the present study, intrinsic signal optical imaging, serving as an objective tool, was employed to compare the cortical response characteristics following forces applied to cat non-vital and vital canines. The response threshold, signal strength, spatial pattern, temporal dynamics and force-direction preference of the evoked cortical responses were not significantly different between vital and non-vital canines. It appears that the tactile sensibility of vital and non-vital teeth is comparable at the level of the cortical response, and that pulpal receptors are not involved in tactile function. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.

  3. Brightness-preserving fuzzy contrast enhancement scheme for the detection and classification of diabetic retinopathy disease.

    PubMed

    Datta, Niladri Sekhar; Dutta, Himadri Sekhar; Majumder, Koushik

    2016-01-01

    The contrast enhancement of retinal images plays a vital role in the detection of microaneurysms (MAs), which are an early sign of diabetic retinopathy. A retinal image contrast enhancement method is presented to improve the MA detection technique. The success rate on low-contrast, noisy retinal image analysis shows the importance of the proposed method. Overall, 587 retinal input images were tested for performance analysis. The average sensitivity and specificity were obtained as 95.94% and 99.21%, respectively. The area under the curve was found to be 0.932 in the receiver operating characteristic analysis. Classification of diabetic retinopathy disease is also performed. The experimental results show that the overall MA detection method performs better than the current state-of-the-art MA detection algorithms.

  4. Rapid Characterization of Magnetic Moment of Cells for Magnetic Separation

    PubMed Central

    Ooi, Chinchun; Earhart, Christopher M.; Wilson, Robert J.; Wang, Shan X.

    2014-01-01

    NCI-H1650 lung cancer cell lines labeled with magnetic nanoparticles via the Epithelial Cell Adhesion Molecule (EpCAM) antigen were previously shown to be captured at high efficiencies by a microfabricated magnetic sifter. If fine control and optimization of the magnetic separation process is to be achieved, it is vital to be able to characterize the labeled cells’ magnetic moment rapidly. We have thus adapted a rapid prototyping method to obtain the saturation magnetic moment of these cells. This method utilizes a cross-correlation algorithm to analyze the cells’ motion in a simple fluidic channel to obtain their magnetophoretic velocity, and is effective even when the magnetic moments of cells are small. This rapid characterization is proven useful in optimizing our microfabricated magnetic sifter procedures for magnetic cell capture. PMID:24771946
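
    The sketch below is a hedged illustration of a cross-correlation velocity estimate of the kind described above: the shift that maximises the correlation between a cell's intensity profiles in two consecutive frames, divided by the frame interval, gives the magnetophoretic velocity. The profiles, pixel size and frame interval are invented for illustration.

```python
# Cross-correlation estimate of magnetophoretic velocity from two intensity profiles.
import numpy as np

def magnetophoretic_velocity(profile_t0, profile_t1, dt, pixel_size_um):
    # profile_t0/t1: 1-D intensity profiles along the channel in consecutive frames
    corr = np.correlate(profile_t1 - profile_t1.mean(),
                        profile_t0 - profile_t0.mean(), mode="full")
    shift_px = corr.argmax() - (len(profile_t0) - 1)    # displacement in pixels
    return shift_px * pixel_size_um / dt                # micrometres per second

profile0 = np.zeros(200); profile0[50:60] = 1.0         # cell near pixel 55
profile1 = np.zeros(200); profile1[62:72] = 1.0         # cell moved ~12 pixels
print(magnetophoretic_velocity(profile0, profile1, dt=0.1, pixel_size_um=0.5))  # ~60 um/s
```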

  5. [Development of transcutaneous jaundice predictor for the neonates].

    PubMed

    Zhu, Pengzhi; Yuan, Hengxin; Tan, Zhifeng; Zhu, Guoping; Yi, Yongju

    2011-06-01

    Neonatal jaundice is a common neonatal disease. Severe jaundice can lead to kernicterus, which affects the intellectual development of infants or even causes death. Timely, early prediction is therefore vital to treatment and prevention. This paper presents a jaundice predictor that uses the C8051F020 as the core of its single-chip microcomputer (SCM) system, with prediction algorithms validated by a large number of clinical trials. The jaundice predictor can reduce the incidence rate of jaundice, alleviate the condition of infants with jaundice and improve the quality of perinatal care, by predicting pathologic neonatal jaundice effectively and calling attention to prophylactic treatment. In addition, compared with existing transcutaneous jaundice meters, the new predictor is smaller, lighter, more user-friendly and easier to operate handheld.

  6. Dynamic biosignal management and transmission during telemedicine incidents handled by Mobile Units over diverse network types.

    PubMed

    Mandellos, George J; Koutelakis, George V; Panagiotakopoulos, Theodor C; Koukias, Andreas M; Koukias, Mixalis N; Lymberopoulos, Dimitrios K

    2008-01-01

    Early and specialized pre-hospital patient treatment improves outcomes in terms of mortality and morbidity in emergency cases. This paper focuses on the design and implementation of a telemedicine system that supports diverse types of endpoints, including moving transports (MT) (ambulances, ships, planes, etc.), handheld devices and fixed units, over diverse communication networks. The target of this telemedicine system is pre-hospital patient treatment. While vital sign transmission takes priority over the other services provided by the telemedicine system (videoconference, remote management, voice calls, etc.), a predefined algorithm controls the provision and quality of those other services. A distributed database system controlled by a central server manages patient attributes, exams and incidents handled by the different Telemedicine Coordination Centers (TCC).

  7. Trusted measurement model based on multitenant behaviors.

    PubMed

    Ning, Zhen-Hu; Shen, Chang-Xiang; Zhao, Yong; Liang, Peng

    2014-01-01

    With the fast growth of pervasive computing, and especially cloud computing, behaviour measurement is at the core and plays a vital role. A new behaviour measurement tailored for multitenants in cloud computing is urgently needed to establish fundamental trust relationships. Based on our previous research, we propose an improved trust relationship scheme that captures the world of cloud computing, where multitenants share the same physical computing platform. Here, we first present related work on multitenant behaviour; secondly, we give the scheme of behaviour measurement in which the decoupling of multitenants is taken into account; thirdly, we explicitly explain our decoupling algorithm for multitenants; fourthly, we introduce a new way of calculating similarity for deviation control, which fits the coupled multitenants under study well; lastly, we design experiments to test our scheme.

  8. Trusted Measurement Model Based on Multitenant Behaviors

    PubMed Central

    Ning, Zhen-Hu; Shen, Chang-Xiang; Zhao, Yong; Liang, Peng

    2014-01-01

    With the fast growth of pervasive computing, and especially cloud computing, behaviour measurement is at the core and plays a vital role. A new behaviour measurement tailored for multitenants in cloud computing is urgently needed to establish fundamental trust relationships. Based on our previous research, we propose an improved trust relationship scheme that captures the world of cloud computing, where multitenants share the same physical computing platform. Here, we first present related work on multitenant behaviour; secondly, we give the scheme of behaviour measurement in which the decoupling of multitenants is taken into account; thirdly, we explicitly explain our decoupling algorithm for multitenants; fourthly, we introduce a new way of calculating similarity for deviation control, which fits the coupled multitenants under study well; lastly, we design experiments to test our scheme. PMID:24987731

  9. Vitality at work and its associations with lifestyle, self-determination, organizational culture, and with employees' performance and sustainable employability.

    PubMed

    van Scheppingen, Arjella R; de Vroome, Ernest M M; Ten Have, Kristin C J M; Zwetsloot, Gerard I J M; Wiezer, Noortje; van Mechelen, Willem

    2015-01-01

    Vitality at work is an important factor for optimal functioning and sustainable employability. To date, knowledge on how to promote vitality at work has been fragmented; this study contributes to that knowledge. Determinants of vitality at work are identified from three scientific fields and used in a comprehensive model. Regression analyses on cross-sectional data from a Dutch dairy company (N = 629) are performed to examine the associations between these factors, vitality at work, and employees' perceived effective personal functioning and sustainable employability. Vitality at work is most strongly associated with the basic psychological needs of self-determination, but also with healthy lifestyle behavior, having a balanced workstyle, and social capital. Vitality at work is also associated with effective personal functioning and with sustainable employability. The study confirms the multifactorial nature of vitality at work. Since organizational culture may support self-determination, and cultural aspects themselves are positively associated with vitality, organizational culture seems particularly important in promoting vitality at work. Additionally, a healthy lifestyle appears important. The associations between vitality at work and effective personal functioning and sustainable employability endorse the combined health-based, business-related and societal importance of vitality at work.

  10. Extracting rate coefficients from single-molecule photon trajectories and FRET efficiency histograms for a fast-folding protein.

    PubMed

    Chung, Hoi Sung; Gopich, Irina V; McHale, Kevin; Cellmer, Troy; Louis, John M; Eaton, William A

    2011-04-28

    Recently developed statistical methods by Gopich and Szabo were used to extract folding and unfolding rate coefficients from single-molecule Förster resonance energy transfer (FRET) data for proteins with kinetics too fast to measure waiting time distributions. Two types of experiments and two different analyses were performed. In one experiment bursts of photons were collected from donor and acceptor fluorophores attached to a 73-residue protein, α(3)D, freely diffusing through the illuminated volume of a confocal microscope system. In the second, the protein was immobilized by linkage to a surface, and photons were collected until one of the fluorophores bleached. Folding and unfolding rate coefficients and mean FRET efficiencies for the folded and unfolded subpopulations were obtained from a photon by photon analysis of the trajectories using a maximum likelihood method. The ability of the method to describe the data in terms of a two-state model was checked by recoloring the photon trajectories with the extracted parameters and comparing the calculated FRET efficiency histograms with the measured histograms. The sum of the rate coefficients for the two-state model agreed to within 30% with the relaxation rate obtained from the decay of the donor-acceptor cross-correlation function, confirming the high accuracy of the method. Interestingly, apparently reliable rate coefficients could be extracted using the maximum likelihood method, even at low (<10%) population of the minor component where the cross-correlation function was too noisy to obtain any useful information. The rate coefficients and mean FRET efficiencies were also obtained in an approximate procedure by simply fitting the FRET efficiency histograms, calculated by binning the donor and acceptor photons, with a sum of three-Gaussian functions. The kinetics are exposed in these histograms by the growth of a FRET efficiency peak at values intermediate between the folded and unfolded peaks as the bin size increases, a phenomenon with similarities to NMR exchange broadening. When comparable populations of folded and unfolded molecules are present, this method yields rate coefficients in very good agreement with those obtained with the maximum likelihood method. As a first step toward characterizing transition paths, the Viterbi algorithm was used to locate the most probable transition points in the photon trajectories.
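
    The sketch below is a minimal illustration of the Viterbi step mentioned at the end of the abstract: locating the most probable sequence of folded/unfolded states along a photon trajectory in a two-state model. Here each observation is simply the colour of a detected photon (0 = donor, 1 = acceptor); the transition and emission probabilities are hypothetical, not the paper's fitted parameters.

```python
# Viterbi decoding of a two-state HMM over a photon-colour trajectory.
import numpy as np

def viterbi(obs, log_trans, log_emit, log_init):
    # obs: array of 0/1 photon colours; states 0 = unfolded, 1 = folded
    n_states = log_trans.shape[0]
    delta = log_init + log_emit[:, obs[0]]
    psi = np.zeros((len(obs), n_states), dtype=int)
    for t in range(1, len(obs)):
        scores = delta[:, None] + log_trans            # scores[i, j]: end in j via i
        psi[t] = scores.argmax(axis=0)                 # best predecessor for each state
        delta = scores.max(axis=0) + log_emit[:, obs[t]]
    path = np.empty(len(obs), dtype=int)
    path[-1] = delta.argmax()
    for t in range(len(obs) - 2, -1, -1):              # backtrack most probable path
        path[t] = psi[t + 1, path[t + 1]]
    return path

log_trans = np.log([[0.99, 0.01], [0.02, 0.98]])       # hypothetical per-photon transitions
log_emit = np.log([[0.6, 0.4], [0.1, 0.9]])            # P(donor/acceptor photon | state)
log_init = np.log([0.5, 0.5])
photons = np.array([0, 0, 1, 0, 1, 1, 1, 0, 1, 1])
print(viterbi(photons, log_trans, log_emit, log_init))  # most probable state sequence
```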

  11. Recording signs of deterioration in acute patients: The documentation of vital signs within electronic health records in patients who suffered in-hospital cardiac arrest.

    PubMed

    Stevenson, Jean E; Israelsson, Johan; Nilsson, Gunilla C; Petersson, Göran I; Bath, Peter A

    2016-03-01

    Vital sign documentation is crucial to detecting patient deterioration. Little is known about the documentation of vital signs in electronic health records. This study aimed to examine documentation of vital signs in electronic health records. We examined the vital signs documented in the electronic health records of patients who had suffered an in-hospital cardiac arrest and on whom cardiopulmonary resuscitation was attempted between 2007 and 2011 (n = 228), in a 372-bed district general hospital. We assessed the completeness of vital sign data compared to VitalPAC™ Early Warning Score and the location of vital signs within the electronic health records. There was a noticeable lack of completeness of vital signs. Vital signs were fragmented through various sections of the electronic health records. The study identified serious shortfalls in the representation of vital signs in the electronic health records, with consequential threats to patient safety. © The Author(s) 2014.

  12. TraPy-MAC: Traffic Priority Aware Medium Access Control Protocol for Wireless Body Area Network.

    PubMed

    Ullah, Fasee; Abdullah, Abdul Hanan; Kaiwartya, Omprakash; Cao, Yue

    2017-06-01

    Recently, Wireless Body Area Networks (WBANs) have witnessed significant attention in research and product development due to the growing number of sensor-based applications in the healthcare domain. The design of efficient and effective Medium Access Control (MAC) protocols is one of the fundamental research themes in WBAN. Static on-demand slot allocation to patient data is the main approach adopted in the design of MAC protocols in the literature, without considering the type of patient data, specifically the level of severity of the patient data. This degrades the performance of MAC protocols in terms of effectiveness and traffic adjustability in realistic medical environments. In this context, this paper proposes a Traffic Priority-Aware MAC (TraPy-MAC) protocol for WBAN. It classifies patient data into emergency and non-emergency categories based on the severity of the patient data. The threshold-value-aided classification considers a number of parameters, including the type of sensor, body placement location, and data transmission time, for allocating dedicated slots to patient data. Emergency data are not required to carry out contention, and slots are allocated by giving due importance to the threshold value of vital sign data. The contention for slots is made efficient in the case of non-emergency data by considering the threshold value in slot allocation. Moreover, the slot allocations for emergency and non-emergency data are performed in parallel, resulting in a performance gain in channel assignment. Two algorithms, namely Detection of Severity on Vital Sign data (DSVS) and ETS Slots allocation based on the Severity on Vital Sign (ETS-SVS), are developed for calculating the threshold value and resolving the conflicts of channel assignment, respectively. Simulations are performed in ns2 and the results are compared with state-of-the-art MAC techniques. Analysis of the results attests to the benefit of TraPy-MAC over the state-of-the-art MAC protocols in channel assignment in realistic medical environments.

  13. Farm Management Support on Cloud Computing Platform: A System for Cropland Monitoring Using Multi-Source Remotely Sensed Data

    NASA Astrophysics Data System (ADS)

    Coburn, C. A.; Qin, Y.; Zhang, J.; Staenz, K.

    2015-12-01

    Food security is one of the most pressing issues facing humankind. Recent estimates predict that over one billion people do not have enough food to meet their basic nutritional needs. The ability of remote sensing tools to monitor and model crop production and predict crop yield is essential for providing governments and farmers with the vital information needed to ensure food security. Google Earth Engine (GEE) is a cloud computing platform that integrates storage and processing algorithms for massive remotely sensed imagery and vector data sets. By providing the capability to store and analyze these data sets, it offers an ideal platform for the development of advanced analytic tools for extracting the key variables used in regional and national food security systems. With the high-performance computing and storage capabilities of GEE, a cloud-computing-based system for near real-time cropland monitoring was developed using multi-source remotely sensed data over large areas. The system is able to process and visualize the MODIS time series NDVI profile in conjunction with Landsat 8 image segmentation for crop monitoring. With multi-temporal Landsat 8 imagery, the crop fields are extracted using the image segmentation algorithm developed by Baatz et al. [1]. The MODIS time series NDVI data are modeled by TIMESAT [2], a software package developed for analyzing time series of satellite data. The seasonality of the MODIS time series data, for example the start date of the growing season, the length of the growing season, and the NDVI peak at the field level, is obtained for evaluating crop-growth conditions. The system fuses MODIS time series NDVI data and Landsat 8 imagery to provide information on near real-time crop-growth conditions through the visualization of MODIS NDVI time series and the comparison of multi-year NDVI profiles. Stakeholders, i.e., farmers and government officers, are able to obtain crop-growth information at the crop-field level online. This unique utilization of GEE in combination with advanced analytic and extraction techniques provides a vital remote sensing tool for decision makers and scientists, with a high degree of flexibility to adapt to different uses.

  14. Can Sophie's Choice Be Adequately Captured by Cold Computation of Minimizing Losses? An fMRI Study of Vital Loss Decisions

    PubMed Central

    Li, Qi; Qin, Shaozheng; Rao, Li-Lin; Zhang, Wencai; Ying, Xiaoping; Guo, Xiuyan; Guo, Chunyan; Ding, Jinghong; Li, Shu; Luo, Jing

    2011-01-01

    The vast majority of decision-making research is performed under the assumption of the value maximizing principle. This principle implies that when making decisions, individuals try to optimize outcomes on the basis of cold mathematical equations. However, decisions are emotion-laden rather than cool and analytic when they tap into life-threatening considerations. Using functional magnetic resonance imaging (fMRI), this study investigated the neural mechanisms underlying vital loss decisions. Participants were asked to make a forced choice between two losses across three conditions: both losses are trivial (trivial-trivial), both losses are vital (vital-vital), or one loss is trivial and the other is vital (vital-trivial). Our results revealed that the amygdala was more active and correlated positively with self-reported negative emotion associated with choice during vital-vital loss decisions, when compared to trivial-trivial loss decisions. The rostral anterior cingulate cortex was also more active and correlated positively with self-reported difficulty of choice during vital-vital loss decisions. Compared to the activity observed during trivial-trivial loss decisions, the orbitofrontal cortex and ventral striatum were more active and correlated positively with self-reported positive emotion of choice during vital-trivial loss decisions. Our findings suggest that vital loss decisions involve emotions and cannot be adequately captured by cold computation of minimizing losses. This research will shed light on how people make vital loss decisions. PMID:21412428

  15. Are prehospital airway management resources compatible with difficult airway algorithms? A nationwide cross-sectional study of helicopter emergency medical services in Japan.

    PubMed

    Ono, Yuko; Shinohara, Kazuaki; Goto, Aya; Yano, Tetsuhiro; Sato, Lubna; Miyazaki, Hiroyuki; Shimada, Jiro; Tase, Choichiro

    2016-04-01

    Immediate access to the equipment required for difficult airway management (DAM) is vital. However, in Japan, data are scarce regarding the availability of DAM resources in prehospital settings. The purpose of this study was to determine whether Japanese helicopter emergency medical services (HEMS) are adequately equipped to comply with the DAM algorithms of Japanese and American professional anesthesiology societies. This nationwide cross-sectional study was conducted in May 2015. Base hospitals of HEMS were mailed a questionnaire about their airway management equipment and back-up personnel. Outcome measures were (1) call for help, (2) supraglottic airway device (SGA) insertion, (3) verification of tube placement using capnometry, and (4) the establishment of surgical airways, all of which have been endorsed in various airway management guidelines. The criteria defining feasibility were the availability of (1) more than one physician, (2) SGA, (3) capnometry, and (4) a surgical airway device in the prehospital setting. Of the 45 HEMS base hospitals questioned, 42 (93.3 %) returned completed questionnaires. A surgical airway was practicable by all HEMS. However, in the prehospital setting, back-up assistance was available in 14.3 %, SGA in 16.7 %, and capnometry in 66.7 %. No HEMS was capable of all four steps. In Japan, compliance with standard airway management algorithms in prehospital settings remains difficult because of the limited availability of alternative ventilation equipment and back-up personnel. Prehospital health care providers need to consider the risks and benefits of performing endotracheal intubation in environments not conducive to the success of this procedure.

  16. Accuracy Assessment of Crown Delineation Methods for the Individual Trees Using LIDAR Data

    NASA Astrophysics Data System (ADS)

    Chang, K. T.; Lin, C.; Lin, Y. C.; Liu, J. K.

    2016-06-01

    Forest canopy density and height are used as variables in a number of environmental applications, including the estimation of biomass, forest extent and condition, and biodiversity. Airborne Light Detection and Ranging (LiDAR) is very useful for estimating forest canopy parameters from the generated canopy height models (CHMs). The purpose of this work is to introduce an algorithm to delineate crown parameters, e.g. tree height and crown radii, based on the generated rasterized CHMs, and to perform an accuracy assessment for the extraction of volumetric parameters of single trees via manual measurement using corresponding aerial photo pairs. A LiDAR dataset of a golf course acquired by a Leica ALS70-HP is used in this study. Two algorithms, i.e. a traditional one based on the subtraction of a digital elevation model (DEM) from a digital surface model (DSM), and a pit-free approach, are first used to generate the CHMs. Then two algorithms, a multilevel morphological active-contour (MMAC) and a variable window filter (VWF), are implemented and used for individual tree delineation. Finally, the experimental results of the two automatic estimation methods for individual trees are evaluated against manually measured stand-level parameters, i.e. tree height and crown diameter. The CHM generated by simple subtraction is full of empty pixels (called "pits") that severely affect the subsequent analysis for individual tree delineation. The experimental results indicate that after the pit-free process, more individual trees can be extracted and tree crown shapes appear more complete in the CHM data.
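
    The sketch below illustrates the CHM-plus-local-maxima idea in a minimal form (it is not the MMAC or VWF implementation from the paper): the DEM is subtracted from the DSM, small pits are suppressed with a median filter, and tree tops are taken as local maxima above a height cut-off. The window size and height threshold are assumptions.

```python
# Minimal sketch: CHM = DSM - DEM, crude pit suppression, and local-maxima
# tree-top detection. Not the paper's delineation algorithms.
import numpy as np
from scipy import ndimage

def tree_tops(dsm, dem, window=5, min_height=2.0):
    chm = np.maximum(dsm - dem, 0.0)                 # clamp negative residuals
    chm_filled = ndimage.median_filter(chm, size=3)  # crude "pit" suppression
    local_max = ndimage.maximum_filter(chm_filled, size=window)
    tops = (chm_filled == local_max) & (chm_filled >= min_height)
    rows, cols = np.nonzero(tops)
    return chm_filled, list(zip(rows.tolist(), cols.tolist(), chm_filled[tops].tolist()))

# toy surfaces: flat terrain with two synthetic crowns
dem = np.zeros((50, 50))
dsm = dem.copy()
yy, xx = np.mgrid[0:50, 0:50]
for (cy, cx, h) in [(15, 15, 12.0), (35, 30, 8.0)]:
    dsm = np.maximum(dsm, h * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / 20.0))
chm, tops = tree_tops(dsm, dem)
print(tops)   # approximate tree-top positions and heights
```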

  17. SAM: String-based sequence search algorithm for mitochondrial DNA database queries

    PubMed Central

    Röck, Alexander; Irwin, Jodi; Dür, Arne; Parsons, Thomas; Parson, Walther

    2011-01-01

    The analysis of the haploid mitochondrial (mt) genome has numerous applications in forensic and population genetics, as well as in disease studies. Although mtDNA haplotypes are usually determined by sequencing, they are rarely reported as a nucleotide string. Traditionally they are presented in a difference-coded, position-based format relative to the corrected version of the first sequenced mtDNA. This convention requires recommendations for standardized sequence alignment, which is known to vary between scientific disciplines and even between laboratories. As a consequence, database searches that are vital for the interpretation of mtDNA data can suffer from biased results when query and database haplotypes are annotated differently. In the forensic context, this would usually lead to underestimation of the absolute and relative frequencies. To address this issue we introduce SAM, a string-based search algorithm that converts query and database sequences to position-free nucleotide strings and thus eliminates the possibility that identical sequences will be missed in a database query. The mere application of a BLAST algorithm would not be a sufficient remedy, as it uses a heuristic approach and does not address properties specific to mtDNA, such as phylogenetically stable but also rapidly evolving insertion and deletion events. The software presented here provides additional flexibility to incorporate phylogenetic data, site-specific mutation rates, and other biologically relevant information that would refine the interpretation of mitochondrial DNA data. The manuscript is accompanied by freeware and example data sets that can be used to evaluate the new software (http://stringvalidation.org). PMID:21056022
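
    The core position-free idea can be illustrated with a toy expansion routine (this is not the published SAM software): a difference-coded haplotype is expanded against a reference into a plain nucleotide string, after which two differently annotated profiles can be compared as strings. The reference fragment and the notation handled here are simplified assumptions.

```python
# Minimal sketch of the position-free idea: expand difference codes against a
# reference into a nucleotide string and compare strings exactly.
REF = "CACAGGTCTATCACCCTATTAACCACTCACGGGAGCTCTCCATGCATTTGGTA"   # toy reference fragment

def expand(reference, differences):
    """differences: list like ['5T', '12d', '20.1C'] meaning substitution,
    deletion, and insertion after a position (1-based), respectively."""
    seq = list(reference)
    inserts = {}
    for d in differences:
        if d.endswith("d"):                      # deletion, e.g. '12d'
            seq[int(d[:-1]) - 1] = ""
        elif "." in d:                           # insertion, e.g. '20.1C'
            pos = int(d.split(".")[0])
            inserts.setdefault(pos, []).append(d[-1])
        else:                                    # substitution, e.g. '5T'
            seq[int(d[:-1]) - 1] = d[-1]
    out = []
    for i, base in enumerate(seq, start=1):
        out.append(base)
        out.extend(inserts.get(i, []))
    return "".join(out)

a = expand(REF, ["5T", "12d"])
b = expand(REF, ["12d", "5T"])          # same variants, different annotation order
print(a == b)                            # True: identical strings match exactly
```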

  18. Rigorous Characterisation of a Novel, Statistically-Based Ocean Colour Algorithm for the PACE Mission

    NASA Astrophysics Data System (ADS)

    Craig, S. E.; Lee, Z.; Du, K.; Lin, J.

    2016-02-01

    An approach based on empirical orthogonal function (EOF) analysis of ocean colour spectra has been shown to accurately derive inherent optical properties (IOPs) and chlorophyll concentration in scenarios, such as optically complex waters, where standard algorithms often perform poorly. The algorithm has been successfully used in a number of regional applications, and has also shown promise in a global implementation based on the NASA NOMAD data set. Additionally, it has demonstrated the unique ability to derive ocean colour products from top of atmosphere (TOA) signals with either no or minimal atmospheric correction applied. Due to its high potential for use over coastal and inland waters, the EOF approach is currently being rigorously characterised as part of a suite of approaches that will be used to support the new NASA ocean colour mission, PACE (Pre-Aerosol, Clouds and ocean Ecosystem). A major component in this model characterisation is the generation of a synthetic TOA data set using a coupled ocean-atmosphere radiative transfer model, which has been run to mimic PACE spectral resolution, and under a wide range of geographical locations, water constituent concentrations, and sea surface and atmospheric conditions. The resulting multidimensional data set will be analysed, and results presented on the sensitivity of the model to various combinations of parameters, and preliminary conclusions made regarding the optimal implementation strategy of this promising approach (e.g. on a global, optical water type or regional basis). This will provide vital guidance for operational implementation of the model for both existing satellite ocean colour sensors and the upcoming PACE mission.
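
    A minimal sketch of the EOF approach, assuming synthetic reflectance spectra: the spectra are decomposed with PCA (equivalent to an EOF analysis here) and log-chlorophyll is regressed on the leading mode scores. It is illustrative only and not the PACE characterisation pipeline.

```python
# Minimal sketch: EOF/PCA decomposition of reflectance spectra plus a linear
# regression of log-chlorophyll on the leading mode amplitudes. Synthetic data,
# number of modes, and regression form are assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_samples, n_bands = 300, 10
log_chl = rng.uniform(-1.0, 1.5, n_samples)                  # log10 chlorophyll
bands = np.linspace(400, 700, n_bands)
shape = np.exp(-((bands - 550) / 80.0) ** 2)                 # chlorophyll-dependent shape
rrs = 0.005 + 0.002 * np.outer(log_chl, shape) \
      + 0.0003 * rng.standard_normal((n_samples, n_bands))   # synthetic Rrs spectra

pca = PCA(n_components=4).fit(rrs)                           # leading EOF modes
scores = pca.transform(rrs)                                  # per-spectrum amplitudes
model = LinearRegression().fit(scores, log_chl)
print("R^2 on training spectra:", round(model.score(scores, log_chl), 3))
```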

  19. Rate-compatible punctured convolutional codes (RCPC codes) and their applications

    NASA Astrophysics Data System (ADS)

    Hagenauer, Joachim

    1988-04-01

    The concept of punctured convolutional codes is extended by puncturing a low-rate 1/N code periodically with period P to obtain a family of codes with rate P/(P + l), where l can be varied between 1 and (N - 1)P. A rate-compatibility restriction on the puncturing tables ensures that all code bits of the high-rate codes are used by the lower-rate codes. This allows transmission of incremental redundancy in ARQ/FEC (automatic repeat request/forward error correction) schemes and continuous rate variation to change from low to high error protection within a data frame. Families of RCPC codes with rates between 8/9 and 1/4 are given for memories M from 3 to 6 (8 to 64 trellis states) together with the relevant distance spectra. These codes are almost as good as the best known general convolutional codes of the respective rates. It is shown that the same Viterbi decoder can be used for all RCPC codes of the same M. The application of RCPC codes to hybrid ARQ/FEC schemes is discussed for Gaussian and Rayleigh fading channels using channel-state information to optimize throughput.
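
    The rate-compatibility idea can be sketched with a toy period-4 puncturing of a rate-1/2 mother code (the tables below are illustrative, not Hagenauer's published ones): every bit transmitted by a higher-rate member is also transmitted by all lower-rate members, so an ARQ/FEC scheme can send the remaining bits as incremental redundancy.

```python
# Minimal sketch of rate-compatible puncturing with period P = 4 on a rate-1/2
# mother code. Tables and the toy encoder output are illustrative assumptions.
P = 4
# puncturing tables per output stream (1 = transmit, 0 = puncture), one row per stream
TABLES = {
    "rate_4/5": [[1, 1, 1, 1], [1, 0, 0, 0]],   # keeps 5 of 8 bits per period
    "rate_2/3": [[1, 1, 1, 1], [1, 0, 1, 0]],   # keeps 6 of 8 bits per period
    "rate_1/2": [[1, 1, 1, 1], [1, 1, 1, 1]],   # mother code, nothing punctured
}

def puncture(coded_streams, table):
    """coded_streams: [stream0, stream1] from the rate-1/2 mother encoder."""
    out = []
    for t, (c0, c1) in enumerate(zip(*coded_streams)):
        if table[0][t % P]:
            out.append(c0)
        if table[1][t % P]:
            out.append(c1)
    return out

streams = [[1, 0, 1, 1, 0, 0, 1, 0], [0, 1, 1, 0, 1, 0, 0, 1]]   # toy encoder output
for name, tab in TABLES.items():
    # the rate-4/5 output is a subset of the rate-2/3 output, which is a subset
    # of the rate-1/2 output: this is the rate-compatibility property
    print(name, puncture(streams, tab))
```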

  20. Extracting duration information in a picture category decoding task using hidden Markov Models

    NASA Astrophysics Data System (ADS)

    Pfeiffer, Tim; Heinze, Nicolai; Frysch, Robert; Deouell, Leon Y.; Schoenfeld, Mircea A.; Knight, Robert T.; Rose, Georg

    2016-04-01

    Objective. Adapting classifiers for the purpose of brain signal decoding is a major challenge in brain-computer-interface (BCI) research. In a previous study we showed in principle that hidden Markov models (HMMs) are a suitable alternative to the well-studied static classifiers. However, since we investigated a rather straightforward task, advantages from modeling of the signal could not be assessed. Approach. Here, we investigate a more complex data set in order to find out to what extent HMMs, as dynamic classifiers, can provide useful additional information. We show for a visual decoding problem that, besides category information, HMMs can simultaneously decode picture duration without additional training. This decoding is based on a strong correlation that we found between picture duration and the behavior of the Viterbi paths. Main results. Decoding accuracies of up to 80% could be obtained for category and duration decoding with a single classifier trained on category information only. Significance. The extraction of multiple types of information using a single classifier enables the processing of more complex problems, while preserving good training results even on small databases. Therefore, it provides a convenient framework for online real-life BCI applications.
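
    Since the duration estimate rests on the behaviour of the Viterbi path, a minimal Viterbi decoder on a toy two-state HMM is sketched below (the model parameters and observations are assumptions, not the study's classifier); the number of samples the path spends in the "stimulus on" state serves as a crude duration read-out.

```python
# Minimal sketch of Viterbi decoding on a toy HMM in log space; the time the
# most likely path spends in state 1 is read out as a duration estimate.
import numpy as np

def viterbi(obs, log_pi, log_A, log_B):
    T, N = len(obs), log_pi.shape[0]
    delta = np.zeros((T, N))
    psi = np.zeros((T, N), dtype=int)
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A          # N x N transition scores
        psi[t] = scores.argmax(axis=0)                  # best predecessor per state
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    path = np.zeros(T, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):                      # backtrack
        path[t] = psi[t + 1, path[t + 1]]
    return path

pi = np.array([0.9, 0.1])                    # states: 0 = baseline, 1 = stimulus on
A  = np.array([[0.9, 0.1], [0.2, 0.8]])      # transition probabilities
B  = np.array([[0.8, 0.2], [0.3, 0.7]])      # observation symbols: 0 = low, 1 = high
obs = np.array([0, 0, 1, 1, 1, 1, 0, 0])
path = viterbi(obs, np.log(pi), np.log(A), np.log(B))
print(path, "stimulus duration (samples):", int((path == 1).sum()))
```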

  1. Rarefied gas flow simulations using high-order gas-kinetic unified algorithms for Boltzmann model equations

    NASA Astrophysics Data System (ADS)

    Li, Zhi-Hui; Peng, Ao-Ping; Zhang, Han-Xin; Yang, Jaw-Yen

    2015-04-01

    This article reviews rarefied gas flow computations based on nonlinear model Boltzmann equations using deterministic high-order gas-kinetic unified algorithms (GKUA) in phase space. The nonlinear Boltzmann model equations considered include the BGK model, the Shakhov model, the Ellipsoidal Statistical model and the Morse model. Several high-order gas-kinetic unified algorithms, which combine the discrete velocity ordinate method in velocity space and the compact high-order finite-difference schemes in physical space, are developed. The parallel strategies implemented with the accompanying algorithms are of equal importance. Accurate computations of rarefied gas flow problems using various kinetic models over wide ranges of Mach numbers 1.2-20 and Knudsen numbers 0.0001-5 are reported. The effects of different high resolution schemes on the flow resolution under the same discrete velocity ordinate method are studied. A conservative discrete velocity ordinate method to ensure the kinetic compatibility condition is also implemented. The present algorithms are tested for the one-dimensional unsteady shock-tube problems with various Knudsen numbers, the steady normal shock wave structures for different Mach numbers, the two-dimensional flows past a circular cylinder and a NACA 0012 airfoil to verify the present methodology and to simulate gas transport phenomena covering various flow regimes. Illustrations of large scale parallel computations of three-dimensional hypersonic rarefied flows over the reusable sphere-cone satellite and the re-entry spacecraft using almost the largest computer systems available in China are also reported. The present computed results are compared with the theoretical prediction from gas dynamics, related DSMC results, slip N-S solutions and experimental data, and good agreement can be found. The numerical experience indicates that although the direct model Boltzmann equation solver in phase space can be computationally expensive, nevertheless, the present GKUAs for kinetic model Boltzmann equations in conjunction with current available high-performance parallel computer power can provide a vital engineering tool for analyzing rarefied gas flows covering the whole range of flow regimes in aerospace engineering applications.

  2. Photoplethysmograph signal reconstruction based on a novel hybrid motion artifact detection-reduction approach. Part I: Motion and noise artifact detection.

    PubMed

    Chong, Jo Woon; Dao, Duy K; Salehizadeh, S M A; McManus, David D; Darling, Chad E; Chon, Ki H; Mendelson, Yitzhak

    2014-11-01

    Motion and noise artifacts (MNA) are a serious obstacle in utilizing photoplethysmogram (PPG) signals for real-time monitoring of vital signs. We present an MNA detection method which can provide a clean vs. corrupted decision on each successive PPG segment. For motion artifact detection, we compute four time-domain parameters: (1) standard deviation of peak-to-peak intervals, (2) standard deviation of peak-to-peak amplitudes, (3) standard deviation of systolic and diastolic interval ratios, and (4) mean standard deviation of pulse shape. We have adopted a support vector machine (SVM) which takes these parameters from clean and corrupted PPG signals and builds a decision boundary to classify them. We apply several distinct features of the PPG data to enhance classification performance. The algorithm we developed was verified on PPG data segments recorded in simulation, laboratory-controlled, and walking/stair-climbing experiments, respectively, and we compared several well-established MNA detection methods to our proposed algorithm. All compared detection algorithms were evaluated in terms of motion artifact detection accuracy, heart rate (HR) error, and oxygen saturation (SpO2) error. For laboratory-controlled finger and forehead recorded PPG data and daily-activity movement data, our proposed algorithm gives 94.4, 93.4, and 93.7% accuracy, respectively. Significant reductions in HR and SpO2 errors (2.3 bpm and 2.7%) were noted when the artifacts identified by SVM-MNA were removed from the original signal, compared with leaving them in (17.3 bpm and 5.4%). The accuracy and error values of our proposed method were significantly higher and lower, respectively, than those of all other detection methods. Another advantage of our method is its ability to provide highly accurate onset and offset detection times of MNAs. This capability is important for an automated approach to signal reconstruction of only those data points that need to be reconstructed, which is the subject of the companion paper to this article. Finally, our MNA detection algorithm is real-time realizable, as the computational speed on a 7-s PPG data segment was found to be only 7 ms with Matlab code.
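
    A rough sketch of the feature-plus-SVM pipeline, assuming synthetic clean and motion-corrupted PPG segments; the pulse-shape feature is replaced by a crude spread proxy and the training set is minimal, so this is illustrative rather than the authors' verified method.

```python
# Minimal sketch: time-domain features per PPG segment and an SVM decision
# boundary separating clean from motion-corrupted segments. Synthetic data and
# the simplified fourth feature are assumptions.
import numpy as np
from scipy.signal import find_peaks
from sklearn.svm import SVC

def segment_features(ppg, fs=100):
    peaks, _ = find_peaks(ppg, distance=fs // 2)        # systolic peaks
    troughs, _ = find_peaks(-ppg, distance=fs // 2)     # diastolic troughs
    pp_int = np.diff(peaks) / fs                        # peak-to-peak intervals (s)
    pp_amp = ppg[peaks]                                 # peak amplitudes
    n = min(len(peaks), len(troughs))
    ratio = (ppg[peaks[:n]] - ppg[troughs[:n]]) / (np.abs(ppg[troughs[:n]]) + 1e-6)
    return np.array([np.std(pp_int), np.std(pp_amp), np.std(ratio),
                     np.std(np.diff(ppg))])             # crude pulse-shape spread

rng = np.random.default_rng(1)
t = np.arange(0, 10, 0.01)
clean = np.sin(2 * np.pi * 1.2 * t)                                  # ~72 bpm pulse
corrupt = clean + 0.8 * rng.standard_normal(t.size)                  # motion-corrupted
X = np.vstack([segment_features(clean), segment_features(corrupt)])
y = np.array([0, 1])                                                 # 0 clean, 1 MNA
clf = SVC(kernel="linear").fit(X, y)
print(clf.predict(X))                                                # sanity check
```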

  3. Shadow Detection from Very High Resolution Satellite Image Using Grabcut Segmentation and Ratio-Band Algorithms

    NASA Astrophysics Data System (ADS)

    Kadhim, N. M. S. M.; Mourshed, M.; Bray, M. T.

    2015-03-01

    Very-High-Resolution (VHR) satellite imagery is a powerful source of data for detecting and extracting information about urban constructions. Shadow in VHR satellite imagery provides vital information on urban construction forms, illumination direction, and the spatial distribution of objects that can help to further understanding of the built environment. However, to extract shadows, the automated detection of shadows from images must be accurate. This paper reviews current automatic approaches that have been used for shadow detection from VHR satellite images and comprises two main parts. In the first part, shadow concepts are presented in terms of shadow appearance in VHR satellite imagery, current shadow detection methods, and the usefulness of shadow detection in urban environments. In the second part, we adopt two approaches that are considered current state-of-the-art shadow detection and segmentation algorithms, using WorldView-3 and Quickbird images. In the first approach, the ratios between the NIR and visible bands are computed on a pixel-by-pixel basis, which allows for disambiguation between shadows and dark objects. To obtain an accurate shadow candidate map, we further refine the shadow map after applying the ratio algorithm to the Quickbird image. The second selected approach is the GrabCut segmentation approach, whose performance in detecting the shadow regions of urban objects is examined using the true colour image from WorldView-3. Further refinement is applied to attain a segmented shadow map. Although the detection of shadow regions is a very difficult task when they are derived from a VHR satellite image that comprises only the visible spectrum range (RGB true colour), the results demonstrate that the GrabCut algorithm achieves a reasonable separation of shadow regions from other objects in the WorldView-3 image. In addition, the shadow map derived from the Quickbird image indicates strong performance of the ratio algorithm. The differences in the characteristics of the two satellite imageries in terms of spatial and spectral resolution can play an important role in the estimation and detection of the shadows of urban objects.
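
    The ratio-band step can be sketched as follows (band ordering, thresholds, and the toy scene are assumptions, not the paper's tuned workflow): shadow candidates are pixels that are dark in the visible bands but not disproportionately bright in the NIR, which separates them from dark vegetation.

```python
# Minimal sketch of a NIR-to-visible ratio rule for shadow candidates.
# Thresholds and the toy reflectances are illustrative assumptions.
import numpy as np

def shadow_mask(nir, red, green, blue, ratio_thresh=0.9, dark_thresh=0.25):
    visible = (red.astype(float) + green + blue) / 3.0
    ratio = (nir.astype(float) + 1e-6) / (visible + 1e-6)
    # shadow candidates: dark overall AND not disproportionately NIR-bright
    return (visible < dark_thresh) & (ratio < ratio_thresh)

# toy 2x2 scene with reflectances in [0, 1]
nir   = np.array([[0.05, 0.60], [0.06, 0.40]])
red   = np.array([[0.08, 0.10], [0.09, 0.35]])
green = np.array([[0.07, 0.12], [0.10, 0.38]])
blue  = np.array([[0.06, 0.09], [0.11, 0.36]])
print(shadow_mask(nir, red, green, blue))
# [[ True False]   top-right is dark in the visible but NIR-bright (vegetation), not shadow
#  [ True False]]  bottom-right is simply a bright surface
```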

  4. Hot and Cold Ethnicities: Modes of Ethnolinguistic Vitality

    ERIC Educational Resources Information Center

    Ehala, Martin

    2011-01-01

    The paper presents the summary of the special issue of "JMMD" "Ethnolinguistic vitality". The volume shows convincingly that ethnolinguistic vitality perceptions as measured by standard methodology such as the Subjective Ethnolinguistic Vitality Questionnaires (SEVQ) are not reliable indicators of actual vitality. Evidence that ethnolinguistic…

  5. Angular and Seasonal Variation of Spectral Surface Reflectance Ratios: Implications for the Remote Sensing of Aerosol over Land

    NASA Technical Reports Server (NTRS)

    Remer, L. A.; Wald, A. E.; Kaufman, Y. J.

    1999-01-01

    We obtain valuable information on the angular and seasonal variability of surface reflectance using a hand-held spectrometer from a light aircraft. The data are used to test a procedure that allows us to estimate visible surface reflectance from the longer wavelength 2.1 micrometer channel (mid-IR). Estimating or avoiding surface reflectance in the visible is a vital first step in most algorithms that retrieve aerosol optical thickness over land targets. The data indicate that specular reflection found when viewing targets from the forward direction can severely corrupt the relationships between the visible and 2.1 micrometer reflectance that were derived from nadir data. There is a month-by-month variation in the ratios between the visible and the mid-IR, weakly correlated to the Normalized Difference Vegetation Index (NDVI). If specular reflection is not avoided, the errors resulting from estimating surface reflectance from the mid-IR exceed the acceptable limit of Δρ ≈ 0.01 in roughly 40% of the cases, using the current algorithm. This is reduced to 25% of the cases if specular reflection is avoided. An alternative method that uses path radiance rather than explicitly estimating visible surface reflectance results in similar errors. The two methods have different strengths and weaknesses that require further study.

  6. Human-like object tracking and gaze estimation with PKD android

    PubMed Central

    Wijayasinghe, Indika B.; Miller, Haylie L.; Das, Sumit K; Bugnariu, Nicoleta L.; Popa, Dan O.

    2018-01-01

    As the use of robots increases for tasks that require human-robot interactions, it is vital that robots exhibit and understand human-like cues for effective communication. In this paper, we describe the implementation of object tracking capability on Philip K. Dick (PKD) android and a gaze tracking algorithm, both of which further robot capabilities with regard to human communication. PKD's ability to track objects with human-like head postures is achieved with visual feedback from a Kinect system and an eye camera. The goal of object tracking with human-like gestures is twofold: to facilitate better human-robot interactions and to enable PKD as a human gaze emulator for future studies. The gaze tracking system employs a mobile eye tracking system (ETG; SensoMotoric Instruments) and a motion capture system (Cortex; Motion Analysis Corp.) for tracking the head orientations. Objects to be tracked are displayed by a virtual reality system, the Computer Assisted Rehabilitation Environment (CAREN; MotekForce Link). The gaze tracking algorithm converts eye tracking data and head orientations to gaze information facilitating two objectives: to evaluate the performance of the object tracking system for PKD and to use the gaze information to predict the intentions of the user, enabling the robot to understand physical cues by humans. PMID:29416193

  7. Unsupervised Network Analysis of the Plastic Supraoptic Nucleus Transcriptome Predicts Caprin2 Regulatory Interactions

    PubMed Central

    Jahans-Price, Thomas; Greenwood, Michael P.; Greenwood, Mingkwan; Hoe, See-Ziau; Konopacka, Agnieszka

    2017-01-01

    The supraoptic nucleus (SON) is a group of neurons in the hypothalamus responsible for the synthesis and secretion of the peptide hormones vasopressin and oxytocin. Following physiological cues, such as dehydration, salt-loading and lactation, the SON undergoes a function-related plasticity that we have previously described in the rat at the transcriptome level. Using the unsupervised graphical lasso (Glasso) algorithm, we reconstructed a putative network from 500 plastic SON genes in which genes are the nodes and the edges are the inferred interactions. The most active nodal gene identified within the network was Caprin2. Caprin2 encodes an RNA-binding protein that we have previously shown to be vital for the functioning of osmoregulatory neuroendocrine neurons in the SON of the rat hypothalamus. To test the validity of the Glasso network, we either overexpressed or knocked down Caprin2 transcripts in differentiated rat pheochromocytoma PC12 cells and showed that these manipulations had significant opposite effects on the levels of putative target mRNAs. These studies suggest that the predictive power of the Glasso algorithm within an in vivo system is accurate, and identifies biological targets that may be important to the functional plasticity of the SON. PMID:29279858
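
    A minimal sketch of the graphical lasso step, assuming a synthetic expression matrix and hypothetical gene labels (it is not the SON analysis pipeline): the sparse precision matrix is estimated and its non-zero off-diagonal entries are read out as putative edges.

```python
# Minimal sketch: sparse inverse covariance (graphical lasso) on a synthetic
# expression matrix; non-zero off-diagonal precision entries define edges.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(2)
genes = ["Caprin2", "Avp", "Oxt", "GeneX", "GeneY"]     # hypothetical labels
n_samples = 60
expr = rng.standard_normal((n_samples, len(genes)))
expr[:, 1] += 0.8 * expr[:, 0]          # make the second gene co-vary with the first
expr[:, 2] += 0.6 * expr[:, 0]          # make the third gene co-vary with the first

model = GraphicalLasso(alpha=0.2).fit(expr)
prec = model.precision_
edges = [(genes[i], genes[j])
         for i in range(len(genes)) for j in range(i + 1, len(genes))
         if abs(prec[i, j]) > 1e-3]
print(edges)   # expected to include the two induced co-variation pairs
```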

  8. Identification of mild cognitive impairment in ACTIVE: algorithmic classification and stability.

    PubMed

    Cook, Sarah E; Marsiske, Michael; Thomas, Kelsey R; Unverzagt, Frederick W; Wadley, Virginia G; Langbaum, Jessica B S; Crowe, Michael

    2013-01-01

    Rates of mild cognitive impairment (MCI) have varied substantially, depending on the criteria used and the samples surveyed. The present investigation used a psychometric algorithm for identifying MCI and its stability to determine whether low cognitive functioning was related to poorer longitudinal outcomes. The Advanced Cognitive Training of Independent and Vital Elders (ACTIVE) study is a multi-site longitudinal investigation of the long-term effects of cognitive training with older adults. ACTIVE exclusion criteria eliminated participants at highest risk for dementia (i.e., Mini-Mental State Examination < 23). Using composite, sample- and training-corrected normative psychometric data, 8.07% of the sample had an amnestic impairment, while 25.09% had a non-amnestic impairment at baseline. Poorer baseline functional scores were observed in those with impairment at the first visit, including a higher rate of attrition, depressive symptoms, and poorer self-reported physical functioning. Participants were then classified based upon the stability of their classification. Those who were stably impaired over the 5-year interval had the worst functional outcomes (e.g., Instrumental Activities of Daily Living performance), and inconsistency in classification over time also appeared to be associated with increased risk. These findings suggest that there is prognostic value in assessing and tracking cognition to assist in identifying the critical baseline features associated with poorer outcomes.

  9. A Hidden Markov Model for Urban-Scale Traffic Estimation Using Floating Car Data.

    PubMed

    Wang, Xiaomeng; Peng, Ling; Chi, Tianhe; Li, Mengzhu; Yao, Xiaojing; Shao, Jing

    2015-01-01

    Urban-scale traffic monitoring plays a vital role in reducing traffic congestion. Owing to its low cost and wide coverage, floating car data (FCD) serves as a novel approach to collecting traffic data. However, sparse probe data represents the vast majority of the data available on arterial roads in most urban environments. In order to overcome the problem of data sparseness, this paper proposes a hidden Markov model (HMM)-based traffic estimation model, in which the traffic condition on a road segment is considered as a hidden state that can be estimated according to the conditions of road segments having similar traffic characteristics. An algorithm based on clustering and pattern mining rather than on adjacency relationships is proposed to find clusters with road segments having similar traffic characteristics. A multi-clustering strategy is adopted to achieve a trade-off between clustering accuracy and coverage. Finally, the proposed model is designed and implemented on the basis of a real-time algorithm. Results of experiments based on real FCD confirm the applicability, accuracy, and efficiency of the model. In addition, the results indicate that the model is practicable for traffic estimation on urban arterials and works well even when more than 70% of the probe data are missing.

  10. Clustering Algorithms: Their Application to Gene Expression Data

    PubMed Central

    Oyelade, Jelili; Isewon, Itunuoluwa; Oladipupo, Funke; Aromolaran, Olufemi; Uwoghiren, Efosa; Ameh, Faridah; Achas, Moses; Adebiyi, Ezekiel

    2016-01-01

    Gene expression data hide vital information required to understand the biological process that takes place in a particular organism in relation to its environment. Deciphering the hidden patterns in gene expression data proffers a prodigious preference to strengthen the understanding of functional genomics. The complexity of biological networks and the volume of genes present increase the challenges of comprehending and interpreting the resulting mass of data, which consists of millions of measurements; these data also exhibit vagueness, imprecision, and noise. Therefore, the use of clustering techniques is a first step toward addressing these challenges, which is essential in the data mining process to reveal natural structures and identify interesting patterns in the underlying data. The clustering of gene expression data has been proven to be useful in making known the natural structure inherent in gene expression data, understanding gene functions, cellular processes, and subtypes of cells, mining useful information from noisy data, and understanding gene regulation. The other benefit of clustering gene expression data is the identification of homology, which is very important in vaccine design. This review examines the various clustering algorithms applicable to gene expression data in order to discover and provide useful knowledge of the appropriate clustering technique that will guarantee stability and a high degree of accuracy in its analysis procedure. PMID:27932867
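
    As a small illustration of one common choice reviewed above, the sketch below applies k-means to a synthetic gene-by-condition expression matrix; the data and cluster count are assumptions, and the review covers many alternative algorithms.

```python
# Minimal sketch: k-means clustering of a gene-by-condition expression matrix.
# The two synthetic co-expression groups are illustrative.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
# 30 genes x 8 conditions: two synthetic co-expression groups plus noise
group_a = np.tile([2, 2, 2, 2, 0, 0, 0, 0], (15, 1)) + 0.3 * rng.standard_normal((15, 8))
group_b = np.tile([0, 0, 0, 0, 2, 2, 2, 2], (15, 1)) + 0.3 * rng.standard_normal((15, 8))
expression = np.vstack([group_a, group_b])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(expression)
print(labels)   # genes from the same synthetic group should share a label
```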

  11. Variant effect prediction tools assessed using independent, functional assay-based datasets: implications for discovery and diagnostics.

    PubMed

    Mahmood, Khalid; Jung, Chol-Hee; Philip, Gayle; Georgeson, Peter; Chung, Jessica; Pope, Bernard J; Park, Daniel J

    2017-05-16

    Genetic variant effect prediction algorithms are used extensively in clinical genomics and research to determine the likely consequences of amino acid substitutions on protein function. It is vital that we better understand their accuracies and limitations because published performance metrics are confounded by serious problems of circularity and error propagation. Here, we derive three independent, functionally determined human mutation datasets, UniFun, BRCA1-DMS and TP53-TA, and employ them, alongside previously described datasets, to assess the pre-eminent variant effect prediction tools. Apparent accuracies of variant effect prediction tools were influenced significantly by the benchmarking dataset. Benchmarking with the assay-determined datasets UniFun and BRCA1-DMS yielded areas under the receiver operating characteristic curves in the modest ranges of 0.52 to 0.63 and 0.54 to 0.75, respectively, considerably lower than observed for other, potentially more conflicted datasets. These results raise concerns about how such algorithms should be employed, particularly in a clinical setting. Contemporary variant effect prediction tools are unlikely to be as accurate at the general prediction of functional impacts on proteins as previously reported. Use of functional assay-based datasets that avoid prior dependencies promises to be valuable for the ongoing development and accurate benchmarking of such tools.

  12. Human-like object tracking and gaze estimation with PKD android

    NASA Astrophysics Data System (ADS)

    Wijayasinghe, Indika B.; Miller, Haylie L.; Das, Sumit K.; Bugnariu, Nicoleta L.; Popa, Dan O.

    2016-05-01

    As the use of robots increases for tasks that require human-robot interactions, it is vital that robots exhibit and understand human-like cues for effective communication. In this paper, we describe the implementation of object tracking capability on Philip K. Dick (PKD) android and a gaze tracking algorithm, both of which further robot capabilities with regard to human communication. PKD's ability to track objects with human-like head postures is achieved with visual feedback from a Kinect system and an eye camera. The goal of object tracking with human-like gestures is twofold: to facilitate better human-robot interactions and to enable PKD as a human gaze emulator for future studies. The gaze tracking system employs a mobile eye tracking system (ETG; SensoMotoric Instruments) and a motion capture system (Cortex; Motion Analysis Corp.) for tracking the head orientations. Objects to be tracked are displayed by a virtual reality system, the Computer Assisted Rehabilitation Environment (CAREN; MotekForce Link). The gaze tracking algorithm converts eye tracking data and head orientations to gaze information facilitating two objectives: to evaluate the performance of the object tracking system for PKD and to use the gaze information to predict the intentions of the user, enabling the robot to understand physical cues by humans.

  13. Echogenicity based approach to detect, segment and track the common carotid artery in 2D ultrasound images.

    PubMed

    Narayan, Nikhil S; Marziliano, Pina

    2015-08-01

    Automatic detection and segmentation of the common carotid artery in transverse ultrasound (US) images of the thyroid gland play a vital role in the success of US guided intervention procedures. We propose in this paper a novel method to accurately detect, segment and track the carotid in 2D and 2D+t US images of the thyroid gland using concepts based on tissue echogenicity and ultrasound image formation. We first segment the hypoechoic anatomical regions of interest using local phase and energy in the input image. We then make use of a Hessian based blob like analysis to detect the carotid within the segmented hypoechoic regions. The carotid artery is segmented by making use of least squares ellipse fit for the edge points around the detected carotid candidate. Experiments performed on a multivendor dataset of 41 images show that the proposed algorithm can segment the carotid artery with high sensitivity (99.6 ± 0.2%) and specificity (92.9 ± 0.1%). Further experiments on a public database containing 971 images of the carotid artery showed that the proposed algorithm can achieve a detection accuracy of 95.2% with a 2% increase in performance when compared to the state-of-the-art method.

  14. Sphinx: merging knowledge-based and ab initio approaches to improve protein loop prediction

    PubMed Central

    Marks, Claire; Nowak, Jaroslaw; Klostermann, Stefan; Georges, Guy; Dunbar, James; Shi, Jiye; Kelm, Sebastian

    2017-01-01

    Motivation: Loops are often vital for protein function, however, their irregular structures make them difficult to model accurately. Current loop modelling algorithms can mostly be divided into two categories: knowledge-based, where databases of fragments are searched to find suitable conformations and ab initio, where conformations are generated computationally. Existing knowledge-based methods only use fragments that are the same length as the target, even though loops of slightly different lengths may adopt similar conformations. Here, we present a novel method, Sphinx, which combines ab initio techniques with the potential extra structural information contained within loops of a different length to improve structure prediction. Results: We show that Sphinx is able to generate high-accuracy predictions and decoy sets enriched with near-native loop conformations, performing better than the ab initio algorithm on which it is based. In addition, it is able to provide predictions for every target, unlike some knowledge-based methods. Sphinx can be used successfully for the difficult problem of antibody H3 prediction, outperforming RosettaAntibody, one of the leading H3-specific ab initio methods, both in accuracy and speed. Availability and Implementation: Sphinx is available at http://opig.stats.ox.ac.uk/webapps/sphinx. Contact: deane@stats.ox.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28453681

  15. Sphinx: merging knowledge-based and ab initio approaches to improve protein loop prediction.

    PubMed

    Marks, Claire; Nowak, Jaroslaw; Klostermann, Stefan; Georges, Guy; Dunbar, James; Shi, Jiye; Kelm, Sebastian; Deane, Charlotte M

    2017-05-01

    Loops are often vital for protein function, however, their irregular structures make them difficult to model accurately. Current loop modelling algorithms can mostly be divided into two categories: knowledge-based, where databases of fragments are searched to find suitable conformations and ab initio, where conformations are generated computationally. Existing knowledge-based methods only use fragments that are the same length as the target, even though loops of slightly different lengths may adopt similar conformations. Here, we present a novel method, Sphinx, which combines ab initio techniques with the potential extra structural information contained within loops of a different length to improve structure prediction. We show that Sphinx is able to generate high-accuracy predictions and decoy sets enriched with near-native loop conformations, performing better than the ab initio algorithm on which it is based. In addition, it is able to provide predictions for every target, unlike some knowledge-based methods. Sphinx can be used successfully for the difficult problem of antibody H3 prediction, outperforming RosettaAntibody, one of the leading H3-specific ab initio methods, both in accuracy and speed. Sphinx is available at http://opig.stats.ox.ac.uk/webapps/sphinx. deane@stats.ox.ac.uk. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press.

  16. A near-infrared fluorescence-based surgical navigation system imaging software for sentinel lymph node detection

    NASA Astrophysics Data System (ADS)

    Ye, Jinzuo; Chi, Chongwei; Zhang, Shuang; Ma, Xibo; Tian, Jie

    2014-02-01

    Sentinel lymph node (SLN) in vivo detection is vital in breast cancer surgery. A new near-infrared fluorescence-based surgical navigation system (SNS) imaging software, developed by our research group, is presented for SLN detection surgery in this paper. The software is based on the fluorescence-based surgical navigation hardware system (SNHS) which has been developed in our lab, and is designed specifically for intraoperative imaging and postoperative data analysis. The surgical navigation imaging software consists of the following software modules: the control module, the image grabbing module, the real-time display module, the data saving module and the image processing module. Several algorithms have been designed to achieve the required performance of the software, for example, an image registration algorithm based on correlation matching. Some of the key features of the software include: setting the control parameters of the SNS; acquiring, displaying and storing the intraoperative imaging data automatically in real time; and analyzing and processing the saved image data. The developed software has been used to successfully detect the SLNs in 21 cases of breast cancer patients. In the near future, we plan to improve the software performance and it will be extensively used for clinical purposes.

  17. Hierarchical auto-configuration addressing in mobile ad hoc networks (HAAM)

    NASA Astrophysics Data System (ADS)

    Ram Srikumar, P.; Sumathy, S.

    2017-11-01

    Addressing plays a vital role in networking to identify devices uniquely. A device must be assigned a unique address in order to participate in data communication in any network. Different protocols defining different types of addressing have been proposed in the literature. Address auto-configuration is a key requirement for self-organizing networks. Existing auto-configuration-based addressing protocols require broadcasting probes to all the nodes in the network before assigning a proper address to a new node. This requires further broadcasts to reflect the status of the acquired address in the network. Such methods incur high communication overheads due to repetitive flooding. To address this overhead, a new partially stateful address allocation scheme, the Hierarchical Auto-configuration Addressing (HAAM) scheme, is extended and proposed. Hierarchical addressing reduces the latency and overhead incurred during address configuration. The partially stateful addressing algorithm assigns addresses without the need for flooding and global state awareness, which in turn reduces the communication overhead and space complexity, respectively. Nodes are assigned addresses hierarchically to maintain the graph of the network as a spanning tree, which helps in effectively avoiding the broadcast storm problem. The proposed HAAM algorithm handles network splits and merges efficiently in large-scale mobile ad hoc networks while incurring low communication overheads.
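
    A toy sketch of hierarchical, prefix-based address assignment (illustrative only, not the HAAM protocol itself): each joining node receives its parent's address extended by a locally unique child index, so uniqueness follows without network-wide flooding.

```python
# Minimal sketch of hierarchical prefix-based address assignment: a parent
# allocates addresses to its children locally, keeping the network a spanning
# tree of unique addresses. Not the HAAM protocol itself.
class Node:
    def __init__(self, address):
        self.address = address          # e.g. (0,), (0, 2), (0, 2, 1), ...
        self.next_child = 0

    def assign_child(self):
        child = Node(self.address + (self.next_child,))
        self.next_child += 1
        return child

root = Node((0,))
a = root.assign_child()        # (0, 0)
b = root.assign_child()        # (0, 1)
c = a.assign_child()           # (0, 0, 0)
print(a.address, b.address, c.address)
```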

  18. Disaggregating Census Data for Population Mapping Using Random Forests with Remotely-Sensed and Ancillary Data

    PubMed Central

    Stevens, Forrest R.; Gaughan, Andrea E.; Linard, Catherine; Tatem, Andrew J.

    2015-01-01

    High resolution, contemporary data on human population distributions are vital for measuring impacts of population growth, monitoring human-environment interactions and for planning and policy development. Many methods are used to disaggregate census data and predict population densities for finer scale, gridded population data sets. We present a new semi-automated dasymetric modeling approach that incorporates detailed census and ancillary data in a flexible, “Random Forest” estimation technique. We outline the combination of widely available, remotely-sensed and geospatial data that contribute to the modeled dasymetric weights and then use the Random Forest model to generate a gridded prediction of population density at ~100 m spatial resolution. This prediction layer is then used as the weighting surface to perform dasymetric redistribution of the census counts at a country level. As a case study we compare the new algorithm and its products for three countries (Vietnam, Cambodia, and Kenya) with other common gridded population data production methodologies. We discuss the advantages of the new method and increases over the accuracy and flexibility of those previous approaches. Finally, we outline how this algorithm will be extended to provide freely-available gridded population data sets for Africa, Asia and Latin America. PMID:25689585
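
    The dasymetric redistribution idea can be sketched as follows, assuming synthetic covariates, unit boundaries, and counts (this is not the published WorldPop pipeline): a Random Forest predicts a per-pixel weight from the covariates, and each census unit's count is redistributed in proportion to those weights.

```python
# Minimal sketch: Random Forest weighting surface plus dasymetric redistribution
# of census counts. Covariates, units, and counts are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)
n_pixels = 200
covariates = rng.random((n_pixels, 3))            # e.g. land cover, lights, slope
true_density = 50 * covariates[:, 0] + 20 * covariates[:, 1]
unit_id = np.repeat(np.arange(4), n_pixels // 4)  # four census units
census_counts = {u: true_density[unit_id == u].sum() for u in range(4)}

# train on coarse, unit-level average densities (the usual dasymetric trick)
unit_mean_density = np.array([census_counts[u] / (unit_id == u).sum() for u in unit_id])
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(covariates, unit_mean_density)
weights = rf.predict(covariates)

# redistribute each unit's census count proportionally to the predicted weights
pixel_pop = np.zeros(n_pixels)
for u in range(4):
    m = unit_id == u
    pixel_pop[m] = census_counts[u] * weights[m] / weights[m].sum()
print(round(pixel_pop.sum(), 1), round(sum(census_counts.values()), 1))  # totals match
```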

  19. Validation of an Arab name algorithm in the determination of Arab ancestry for use in health research.

    PubMed

    El-Sayed, Abdulrahman M; Lauderdale, Diane S; Galea, Sandro

    2010-12-01

    Data about Arab-Americans, a growing ethnic minority, are not routinely collected in vital statistics, registry, or administrative data in the USA. The difficulty in identifying Arab-Americans using publicly available data sources is a barrier to health research about this group. Here, we validate an empirically based probabilistic Arab name algorithm (ANA) for identifying Arab-Americans in health research. We used data from all Michigan birth certificates between 2000 and 2005. Fathers' surnames and mothers' maiden names were coded as Arab or non-Arab according to the ANA. We calculated sensitivity, specificity, and positive (PPV) and negative predictive values (NPV) of Arab ethnicity inferred using the ANA as compared to self-reported Arab ancestry. Statewide, the ANA had a specificity of 98.9%, a sensitivity of 50.3%, a PPV of 57.0%, and an NPV of 98.6%. Both the false-positive and false-negative rates were higher among men than among women. As the concentration of Arab-Americans in a study locality increased, the ANA false-positive rate increased and false-negative rate decreased. The ANA is highly specific but only moderately sensitive as a means of detecting Arab ancestry. Future research should compare health characteristics among Arab-American populations defined by Arab ancestry and those defined by the ANA.
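
    The validation arithmetic is straightforward; the sketch below computes sensitivity, specificity, PPV and NPV from a 2x2 table, using hypothetical counts chosen only to roughly reproduce the reported statewide values.

```python
# Minimal sketch of the validation metrics from a 2x2 table of algorithm-inferred
# vs self-reported ancestry. The counts below are hypothetical.
def diagnostics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv":         tp / (tp + fp),
        "npv":         tn / (tn + fn),
    }

# hypothetical counts chosen to roughly reproduce the reported operating characteristics
print({k: round(v, 3) for k, v in diagnostics(tp=503, fp=380, fn=497, tn=34620).items()})
```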

  20. Modeling flow and transport in fracture networks using graphs

    NASA Astrophysics Data System (ADS)

    Karra, S.; O'Malley, D.; Hyman, J. D.; Viswanathan, H. S.; Srinivasan, G.

    2018-03-01

    Fractures form the main pathways for flow in the subsurface within low-permeability rock. For this reason, accurately predicting flow and transport in fractured systems is vital for improving the performance of subsurface applications. Fracture sizes in these systems can range from millimeters to kilometers. Although modeling flow and transport using the discrete fracture network (DFN) approach is known to be more accurate, due to the incorporation of the detailed fracture network structure, than continuum-based methods, capturing the flow and transport over such a wide range of scales is still computationally intractable. Furthermore, if one has to quantify uncertainty, hundreds of realizations of these DFN models have to be run. To reduce the computational burden, we solve flow and transport on a graph representation of a DFN. We study the accuracy of the graph approach by comparing breakthrough times and tracer particle statistics between the graph-based and the high-fidelity DFN approaches, for fracture networks with varying numbers of fractures and degrees of heterogeneity. Due to our recent developments in capabilities to perform high-fidelity DFN simulations on fracture networks with large numbers of fractures, we are in a unique position to perform such a comparison. We show that the graph approach exhibits a consistent bias, with up to an order of magnitude slower breakthrough when compared to the DFN approach. We show that this is due to the graph algorithm's underprediction of the pressure gradients across intersections on a given fracture, leading to slower tracer particle speeds between intersections and longer travel times. We present a bias correction methodology for the graph algorithm that reduces the discrepancy between the DFN and graph predictions. We show that with this bias correction, the graph algorithm predictions improve significantly and the results are very accurate. The good accuracy and the low computational cost, with run times O(10^4) times lower than the DFN, make the graph algorithm an ideal technique to incorporate in uncertainty quantification methods.
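
    A minimal sketch of the graph-based flow step (illustrative assumptions throughout; this is not the authors' code): fracture intersections become nodes, fracture segments become edges with conductances, and steady-state node pressures are obtained by solving the graph Laplacian with fixed inlet and outlet pressures.

```python
# Minimal sketch: steady-state pressures on a tiny fracture-intersection graph,
# solved from the weighted graph Laplacian with Dirichlet boundary nodes.
import numpy as np

edges = [(0, 1, 2.0), (1, 2, 1.0), (1, 3, 0.5), (2, 4, 1.5), (3, 4, 1.0)]  # (i, j, conductance)
n = 5
L = np.zeros((n, n))
for i, j, c in edges:
    L[i, i] += c; L[j, j] += c; L[i, j] -= c; L[j, i] -= c

inlet, outlet, p_in, p_out = 0, 4, 1.0, 0.0
free = [k for k in range(n) if k not in (inlet, outlet)]
# L_ff p_f = -(L_f,inlet * p_in + L_f,outlet * p_out)
b = -(L[np.ix_(free, [inlet])] * p_in + L[np.ix_(free, [outlet])] * p_out).ravel()
p_free = np.linalg.solve(L[np.ix_(free, free)], b)

pressures = np.zeros(n)
pressures[inlet], pressures[outlet] = p_in, p_out
pressures[free] = p_free
print(np.round(pressures, 3))   # pressures drop from the inlet (1.0) to the outlet (0.0)
```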
