2011-05-01
…rate convolutional codes or the prioritized Rate-Compatible Punctured Convolutional (RCPC) codes. … QoS: quality of service; RCPC: rate-compatible punctured convolutional codes; SNR: signal-to-noise ratio; SSIM: … The RCPC codes achieve UEP by puncturing off different amounts of coded bits of the parent code. The …
Performance Analysis of Hybrid ARQ Protocols in a Slotted Code Division Multiple-Access Network
1989-08-01
…Convolutional Codes," in Proc. Int. Conf. Commun., pp. 21.4.1–21.4.5, 1987. [27] J. Hagenauer, "Rate Compatible Punctured Convolutional Codes," in Proc. Int. Conf. … achieved by using a low-rate (r = 0.5), high-constraint-length (e.g., 32) punctured convolutional code. Code puncturing provides for a variable-rate code … investigated the use of convolutional codes in Type II Hybrid ARQ protocols. The error …
Cross-Layer Design for Robust and Scalable Video Transmission in Dynamic Wireless Environment
2011-02-01
…code rate convolutional codes or prioritized Rate-Compatible Punctured … "New rate-compatible punctured convolutional codes for Viterbi decoding," IEEE Trans. Communications, vol. 42, no. 12, pp. 3073–3079, Dec. … QoS: quality of service; RCPC: rate-compatible punctured convolutional codes; SNR: signal-to-noise …
Scalable Video Transmission Over Multi-Rate Multiple Access Channels
2007-06-01
…"Rate-compatible punctured convolutional codes (RCPC codes) and their applications," IEEE … source encoded using the MPEG-4 video codec. The source-encoded bitstream is then channel encoded with Rate-Compatible Punctured Convolutional (RCPC … Clark, and J. M. Geist, "Punctured convolutional codes of rate (n−1)/n and simplified maximum likelihood decoding," IEEE Transactions on …
Design and System Implications of a Family of Wideband HF Data Waveforms
2010-09-01
…code rates (i.e., 8/9, 9/10) will be used to attain the highest data rates for surface-wave links. Very high puncturing of convolutional codes can … Communication Links," Edition 1, North Atlantic Treaty Organization, 2009. [14] Yasuda, Y., Kashiki, K., Hirata, Y., "High-Rate Punctured Convolutional Codes … length-7 convolutional code that has been used for over two decades in 110A. In addition, repetition coding and puncturing were …
A Video Transmission System for Severely Degraded Channels
2006-07-01
…rate-compatible punctured convolutional codes (RCPC). By separating the SPIHT bitstream … June 2000. [170] J. Hagenauer, "Rate-compatible punctured convolutional codes (RCPC codes) and their applications," IEEE Transactions on … Farvardin [160] used rate-compatible convolutional codes. They noticed that for some transmission rates, one of their EEP schemes, which may …
2007-06-01
…Table 2. Best (maximum free distance) rate r = 2/3 punctured convolutional code … Hamming distance between all pairs of non-zero paths. Table 2 lists the best rate r = 2/3 punctured convolutional code information weight structure … Table 2. Best (maximum free distance) rate r = 2/3 punctured convolutional code information weight structure (from [12]); columns: K, d_free, B_free.
Wireless Visual Sensor Network Resource Allocation using Cross-Layer Optimization
2009-01-01
…Rate-Compatible Punctured Convolutional (RCPC) codes for channel … vol. 44, pp. 2943–2959, November 1998. [22] J. Hagenauer, "Rate-compatible punctured convolutional codes (RCPC codes) and their applications," IEEE … coding rate for H.264/AVC video compression is determined. At the data link layer, the Rate-Compatible Punctured Convolutional (RCPC) channel coding …
2006-12-01
…Convolutional encoder of rate 1/2 (from [10]). Table 3 shows the puncturing patterns used to derive the different code rates. X precedes Y in the order … convolutional code with puncturing configuration (from [10]) … Table 4. Mandatory channel coding per modulation (from [10] … a concatenation of a Reed-Solomon outer code and a rate-adjustable convolutional inner code. At the transmitter, data shall first be encoded with …
System Design for FEC in Aeronautical Telemetry
2012-03-12
…rate punctured convolutional codes for soft-decision Viterbi … below follows that given in [8]. The final coding rate of exactly 2/3 is achieved by puncturing the rate-1/2 code as follows. We begin with the buffer c1 … concatenated convolutional code (SCCC). The contributions of this paper are at the system-design level. One major contribution is the design of an SCCC code …
Spread Spectrum Visual Sensor Network Resource Management Using an End-to-End Cross-Layer Design
2011-02-01
…Coding. In this work, we use rate-compatible punctured convolutional (RCPC) codes for channel coding [11]. Using RCPC codes allows us to utilize Viterbi's … [11] J. Hagenauer, "Rate-compatible punctured convolutional codes (RCPC codes) and their applications," IEEE Trans. Commun., vol. 36, no. 4, pp. 389 … source coding rate, a channel coding rate, and a power level to all nodes in the …
2001-09-01
…"Rate-compatible punctured convolutional codes (RCPC codes) and their applications," IEEE … ABSTRACT: In this dissertation, the bit error rates for serially concatenated convolutional codes (SCCC) for both BPSK and DPSK modulation with … EXECUTIVE SUMMARY: In this dissertation, the bit error rates of serially concatenated convolutional codes …
Combined coding and delay-throughput analysis for fading channels of mobile satellite communications
NASA Technical Reports Server (NTRS)
Wang, C. C.; Yan, Tsun-Yee
1986-01-01
This paper presents an analysis of using punctured convolutional codes with Viterbi decoding to improve communications reliability. The punctured code rate is optimized so that the average delay is minimized. The coding gain in terms of message delay is also defined. Since using a punctured convolutional code with interleaving is still inadequate to combat severe fading for short packets, the use of multiple copies of assignment and acknowledgment packets is suggested. The performance of this protocol in terms of average end-to-end delay is analyzed. It is shown that a replication of three copies for both assignment packets and acknowledgment packets is optimum for the cases considered.
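The replication trade-off in this abstract is easy to make concrete. A minimal sketch, assuming each copy of a short packet is lost independently with probability p_e (a simplified stand-in for the paper's fading-channel analysis):

```python
# Probability that at least one of r replicated packets gets through,
# assuming independent per-copy loss probability p_e (illustrative model).
def success_prob(p_e: float, r: int) -> float:
    return 1.0 - p_e ** r

for p_e in (0.3, 0.5):
    for r in (1, 2, 3, 4):
        print(f"p_e={p_e}  copies={r}  P(success)={success_prob(p_e, r):.4f}")
```

Under such a model the gain from a fourth copy is already marginal while the channel time spent grows linearly, which is the flavor of the three-copy optimum reported above.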
Signal Detection and Frame Synchronization of Multiple Wireless Networking Waveforms
2007-09-01
…punctured to obtain coding rates of 2/3 and 3/4. Convolutional forward error correction coding is used to detect and correct bit … likely to be isolated and be correctable by the convolutional decoder. … [table: data rate (Mbps), modulation, coding rate, coded bits per subcarrier] … binary convolutional code. A shortened Reed-Solomon technique is employed first. The code is shortened depending upon the data …
The trellis complexity of convolutional codes
NASA Technical Reports Server (NTRS)
Mceliece, R. J.; Lin, W.
1995-01-01
It has long been known that convolutional codes have a natural, regular trellis structure that facilitates the implementation of Viterbi's algorithm. It has gradually become apparent that linear block codes also have a natural, though not in general a regular, 'minimal' trellis structure, which allows them to be decoded with a Viterbi-like algorithm. In both cases, the complexity of the Viterbi decoding algorithm can be accurately estimated by the number of trellis edges per encoded bit. It would, therefore, appear that we are in a good position to make a fair comparison of the Viterbi decoding complexity of block and convolutional codes. Unfortunately, however, this comparison is somewhat muddled by the fact that some convolutional codes, the punctured convolutional codes, are known to have trellis representations that are significantly less complex than the conventional trellis. In other words, the conventional trellis representation for a convolutional code may not be the minimal trellis representation. Thus, ironically, at present we seem to know more about the minimal trellis representation for block than for convolutional codes. In this article, we provide a remedy, by developing a theory of minimal trellises for convolutional codes. (A similar theory has recently been given by Sidorenko and Zyablov). This allows us to make a direct performance-complexity comparison for block and convolutional codes. A by-product of our work is an algorithm for choosing, from among all generator matrices for a given convolutional code, what we call a trellis-minimal generator matrix, from which the minimal trellis for the code can be directly constructed. Another by-product is that, in the new theory, punctured convolutional codes no longer appear as a special class, but simply as high-rate convolutional codes whose trellis complexity is unexpectedly small.
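The edges-per-bit complexity measure used in this abstract can be evaluated directly for the two representations it contrasts. A minimal sketch, assuming a rate k/(k+1) code whose conventional trellis has 2^ν states with 2^k branches per state, versus the same code decoded on the trellis of a rate-1/2 parent with memory M (parameter values are illustrative):

```python
# Trellis branches per encoded bit for a rate k/(k+1) convolutional code.

def conventional_edges_per_bit(k: int, nu: int) -> float:
    # One section: 2**nu states x 2**k branches each, emitting k+1 bits.
    return 2 ** (nu + k) / (k + 1)

def punctured_edges_per_bit(k: int, M: int) -> float:
    # k sections of a rate-1/2 parent trellis (2**M states, 2 branches
    # per state); puncturing leaves k+1 transmitted bits per k sections.
    return k * 2 ** (M + 1) / (k + 1)

for k in (2, 3, 7):
    print(k, conventional_edges_per_bit(k, 6), punctured_edges_per_bit(k, 6))
```

With equal memory, the punctured representation's edge count grows only linearly in k while the conventional trellis grows exponentially, which is exactly the "unexpectedly small" complexity the article formalizes.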
2006-06-01
…called packet binary convolutional code (PBCC), was included as an option for performance at rates of either 5.5 or 11 Mbps. The second offshoot … and the code rate is r = k/n. A general convolutional encoder can be implemented with k shift registers and n modulo-2 adders. Higher rates can be … derived from lower-rate codes by employing "puncturing." Puncturing is a procedure for omitting some of the encoded bits in the transmitter (thus …
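Since this excerpt defines puncturing operationally, a toy illustration may help. A minimal sketch, assuming a rate-1/2 encoder output split into streams c1 and c2 and a standard-looking (but here arbitrarily chosen) period-2 pattern that yields rate 2/3:

```python
# Puncture a rate-1/2 coded stream to rate 2/3.
# PATTERN rows are the two output streams, columns are time slots within
# one period: keep c1(t), c2(t), c1(t+1); delete c2(t+1).
PATTERN = [[1, 1], [1, 0]]

def puncture(c1, c2, pattern=PATTERN):
    period = len(pattern[0])
    out = []
    for t in range(len(c1)):
        if pattern[0][t % period]:
            out.append(c1[t])
        if pattern[1][t % period]:
            out.append(c2[t])
    return out

# 4 information bits -> 8 mother-code bits -> 6 transmitted bits (rate 2/3)
print(puncture([1, 0, 1, 1], [0, 1, 1, 0]))
```

The decoder runs the same rate-1/2 trellis and simply skips the metric contribution of the deleted positions, which is why puncturing gives variable rates at almost no extra hardware cost.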
Rate-compatible punctured convolutional codes (RCPC codes) and their applications
NASA Astrophysics Data System (ADS)
Hagenauer, Joachim
1988-04-01
The concept of punctured convolutional codes is extended by puncturing a low-rate 1/N code periodically with period P to obtain a family of codes with rate P/(P + l), where l can be varied between 1 and (N − 1)P. A rate-compatibility restriction on the puncturing tables ensures that all code bits of the high-rate codes are used by the lower-rate codes. This allows transmission of incremental redundancy in ARQ/FEC (automatic repeat request/forward error correction) schemes and continuous rate variation to change from low to high error protection within a data frame. Families of RCPC codes with rates between 8/9 and 1/4 are given for memories M from 3 to 6 (8 to 64 trellis states), together with the relevant distance spectra. These codes are almost as good as the best known general convolutional codes of the respective rates. It is shown that the same Viterbi decoder can be used for all RCPC codes of the same M. The application of RCPC codes to hybrid ARQ/FEC schemes is discussed for Gaussian and Rayleigh fading channels using channel-state information to optimize throughput.
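Hagenauer's rate-compatibility restriction can be checked mechanically: every bit kept by a higher-rate table must also be kept by every lower-rate table, so retransmissions only ever add bits. A minimal sketch with an invented P = 4 family for a rate-1/2 mother code (the tables are illustrative, not Hagenauer's):

```python
# Puncturing tables for a rate-1/2 mother code with period P = 4.
# Keeping 4 + l of the 8 mother-code bits per period gives rate 4/(4 + l).
TABLES = {
    "4/5": [[1, 1, 1, 1], [1, 0, 0, 0]],  # l = 1
    "4/6": [[1, 1, 1, 1], [1, 0, 1, 0]],  # l = 2
    "4/8": [[1, 1, 1, 1], [1, 1, 1, 1]],  # l = 4: the mother code itself
}

def rate_compatible(high, low):
    """True if every bit kept by the high-rate table is kept by the low-rate table."""
    return all(h <= l for hr, lr in zip(high, low) for h, l in zip(hr, lr))

assert rate_compatible(TABLES["4/5"], TABLES["4/6"])
assert rate_compatible(TABLES["4/6"], TABLES["4/8"])
```

The positions where a lower-rate table has a 1 and the higher-rate table has a 0 are precisely the incremental-redundancy bits sent on a retransmission in the hybrid ARQ/FEC schemes discussed.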
Investigation of Near Shannon Limit Coding Schemes
NASA Technical Reports Server (NTRS)
Kwatra, S. C.; Kim, J.; Mo, Fan
1999-01-01
Turbo codes can deliver performance that is very close to the Shannon limit. This report investigates algorithms for convolutional turbo codes and block turbo codes; both coding schemes can achieve performance near the Shannon limit. The performance of the schemes is obtained using computer simulations. There are three sections in this report. The first section is the introduction, which discusses the fundamentals of coding, block coding, and convolutional coding. In the second section, the basic concepts of convolutional turbo codes are introduced and the performance of turbo codes, especially high-rate turbo codes, is provided from the simulation results. After introducing all the parameters that help turbo codes achieve such good performance, it is concluded that the output weight distribution should be the main consideration in designing turbo codes. Based on the output weight distribution, performance bounds for turbo codes are given. Then, the relationships between the output weight distribution and factors such as the generator polynomial, the interleaver, and the puncturing pattern are examined, and a criterion for the best selection of system components is provided. The puncturing pattern algorithm is discussed in detail, and different puncturing patterns are compared for each high rate. For most of the high-rate codes, the puncturing pattern does not show any significant effect on code performance if a pseudo-random interleaver is used in the system. For some special-rate codes with poor performance, an alternative puncturing algorithm is designed which restores their performance close to the Shannon limit. Finally, in section three, for iterative decoding of block codes, the method of building a trellis for block codes, the structure of the iterative decoding system, and the calculation of extrinsic values are discussed.
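As one hedged illustration of the puncturing-pattern question studied in the report's second section: a rate-1/3 parallel turbo code (systematic bit plus two parity streams) is commonly brought to rate 1/2 by alternately deleting parity bits. The pattern below is a textbook-style choice, not one taken from the report:

```python
# Rate-1/3 turbo output per info bit u[t]: (u[t], p1[t], p2[t]).
# Keep every systematic bit and alternate between the two parity
# streams: 2 transmitted bits per info bit, i.e. rate 1/2.
def puncture_turbo(u, p1, p2):
    out = []
    for t in range(len(u)):
        out.append(u[t])
        out.append(p1[t] if t % 2 == 0 else p2[t])
    return out

# 4 info bits -> 8 transmitted bits (rate 1/2)
print(puncture_turbo([1, 0, 1, 1], [0, 0, 1, 0], [1, 1, 0, 1]))
```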
Generalized type II hybrid ARQ scheme using punctured convolutional coding
NASA Astrophysics Data System (ADS)
Kallel, Samir; Haccoun, David
1990-11-01
A method is presented to construct rate-compatible convolutional (RCC) codes from known high-rate punctured convolutional codes obtained from the best rate-1/2 codes. The construction method is simple and straightforward, and still yields good codes. Moreover, low-rate codes can be obtained without any limit on the lowest achievable code rate. Based on the RCC codes, a generalized type-II hybrid ARQ scheme, which combines the benefits of the modified type-II hybrid ARQ strategy of Hagenauer (1988) with the code-combining ARQ strategy of Chase (1985), is proposed and analyzed. With the proposed generalized type-II hybrid ARQ strategy, the throughput increases as the starting coding rate increases; as the channel degrades, it tends to merge with the throughput of rate-1/2 type-II hybrid ARQ schemes with code combining, thus allowing the system to be flexible and adaptive to channel conditions, even under wide noise variations and severe degradations.
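The generalized type-II strategy reads naturally as a loop over an ordered family of puncturing tables: start at the highest rate, and on each retransmission request release only the not-yet-sent coded bits, with the decoder combining everything received so far. A schematic sketch (the decode oracle and data shapes are hypothetical placeholders, not the paper's algorithm):

```python
# Schematic type-II hybrid ARQ with incremental redundancy.
# `tables` lists, from highest rate to lowest, the set of coded-bit
# positions each rate keeps; `try_decode` stands in for Viterbi decoding
# of the combined bits plus an error-detection check.
def hybrid_arq_type2(coded_bits, tables, try_decode, channel):
    sent = set()
    received = {}
    for table in tables:                  # high rate -> low rate
        for i in sorted(table - sent):    # send incremental bits only
            received[i] = channel(coded_bits[i])
            sent.add(i)
        ok, data = try_decode(received)
        if ok:                            # ACK: decoded successfully
            return data
    return None                           # redundancy exhausted
```

Because the tables are rate compatible, each retransmission strictly grows the received set, so the decoder never discards earlier transmissions; that code combining is where the throughput advantage over type-I schemes comes from.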
NASA Technical Reports Server (NTRS)
Feria, Y.; Cheung, K.-M.
1995-01-01
In a time-varying signal-to-noise ratio (SNR) environment, the symbol rate is often changed to maximize data return. However, the symbol-rate change has some undesirable effects, such as changing the transmission bandwidth and perhaps causing the receiver symbol loop to lose lock temporarily, thus losing some data. In this article, we propose an alternate way of varying the data rate without changing the symbol rate and, therefore, the transmission bandwidth. The data-rate change is achieved in a seamless fashion by puncturing the convolutionally encoded symbol stream to adapt to the changing SNR environment. We have also derived an exact expression to enumerate the number of distinct puncturing patterns. To demonstrate this seamless rate-change capability, we searched for good puncturing patterns for the Galileo (14,1/4) convolutional code and changed the data rates by using the punctured codes to match the Galileo SNR profile of November 9, 1997. We show that this scheme reduces the symbol-rate changes from nine to two and provides a comparable data return in a day and a higher symbol SNR during most of the day.
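The abstract's exact enumeration expression is not reproduced here, but for small parameters the count can be checked by brute force. A sketch under our own assumptions (patterns are masks over one period of mother-code bits with a fixed number kept, and cyclic shifts are treated as the same pattern; the article's notion of distinctness may differ):

```python
from itertools import combinations

def distinct_patterns(n_bits: int, keep: int) -> int:
    """Count keep-masks over n_bits positions, up to cyclic shifts."""
    canonical = set()
    for kept in combinations(range(n_bits), keep):
        rotations = (tuple(sorted((i + s) % n_bits for i in kept))
                     for s in range(n_bits))
        canonical.add(min(rotations))
    return len(canonical)

# e.g. 8 mother-code bits per period, keep 5 -> 7 distinct patterns
print(distinct_patterns(8, 5))
```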
Protograph-Based Raptor-Like Codes
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Chen, Tsung-Yi; Wang, Jiadong; Wesel, Richard D.
2014-01-01
Theoretical analysis has long indicated that feedback improves the error exponent but not the capacity of point-to-point memoryless channels. Analytic and empirical results indicate that, in the short-blocklength regime, practical rate-compatible punctured convolutional (RCPC) codes achieve low latency with the use of noiseless feedback. In 3GPP, standard rate-compatible punctured turbo (RCPT) codes did not outperform convolutional codes in the short-blocklength regime. The reason is that convolutional codes with a small number of states can be decoded optimally using the Viterbi decoder. Despite the excellent performance of convolutional codes at very short blocklengths, their strength does not scale with the blocklength for a fixed number of trellis states.
Synchronization Analysis and Simulation of a Standard IEEE 802.11G OFDM Signal
2004-03-01
Figure 26: Convolutional encoder parameters. Figure 27: Puncturing parameters. As per Table 3, the required code rate is r = 3/4, which requires … to achieve the higher data rates required by the 802.11b standard was accomplished by using packet binary convolutional coding (PBCC). Essentially … higher data rates are achieved by using convolutional coding combined with BPSK or QPSK modulation. The data is first encoded with a rate one-half …
2006-08-25
…interleaving schemes defined in the 802.11a standard, although only the 6 Mbps data rate with BPSK, rate-1/2 convolutional coding, and puncturing is used in our … [table fragment: modulation 16-QAM/64-QAM; convolutional code K = 7 (64 states) in both cases; coding rates 1/2, 2/3, 3/4; channel spacing 20 MHz vs. 10 MHz; signal …] … Since 3G systems need to be backward compatible with 2G systems, they are a combination of existing and evolved equipment with data rates up to 2 Mbps …
Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3
NASA Technical Reports Server (NTRS)
Lin, Shu
1998-01-01
Decoding algorithms based on the trellis representation of a code (block or convolutional) drastically reduce decoding complexity. The best known and most commonly used trellis-based decoding algorithm is the Viterbi algorithm. It is a maximum likelihood decoding algorithm. Convolutional codes with the Viterbi decoding have been widely used for error control in digital communications over the last two decades. This chapter is concerned with the application of the Viterbi decoding algorithm to linear block codes. First, the Viterbi algorithm is presented. Then, optimum sectionalization of a trellis to minimize the computational complexity of a Viterbi decoder is discussed and an algorithm is presented. Some design issues for IC (integrated circuit) implementation of a Viterbi decoder are considered and discussed. Finally, a new decoding algorithm based on the principle of compare-select-add is presented. This new algorithm can be applied to both block and convolutional codes and is more efficient than the conventional Viterbi algorithm based on the add-compare-select principle. This algorithm is particularly efficient for rate 1/n antipodal convolutional codes and their high-rate punctured codes. It reduces computational complexity by one-third compared with the Viterbi algorithm.
Progressive video coding for noisy channels
NASA Astrophysics Data System (ADS)
Kim, Beong-Jo; Xiong, Zixiang; Pearlman, William A.
1998-10-01
We extend the work of Sherwood and Zeger to progressive video coding for noisy channels. By utilizing a 3D extension of the set partitioning in hierarchical trees (SPIHT) algorithm, we cascade the resulting 3D SPIHT video coder with a rate-compatible punctured convolutional channel coder for transmission of video over a binary symmetric channel. Progressive coding is achieved by increasing the target rate of the 3D embedded SPIHT video coder as the channel condition improves. The performance of our proposed coding system is acceptable at low transmission rates and under bad channel conditions. Its low complexity makes it suitable for emerging applications such as video over wireless channels.
Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes
NASA Technical Reports Server (NTRS)
Lin, Shu
1998-01-01
A code trellis is a graphical representation of a code, block or convolutional, in which every path represents a codeword (or a code sequence for a convolutional code). This representation makes it possible to implement maximum likelihood decoding (MLD) of a code with reduced decoding complexity. The most well known trellis-based MLD algorithm is the Viterbi algorithm. The trellis representation was first introduced and used for convolutional codes [23]. This representation, together with the Viterbi decoding algorithm, has resulted in a wide range of applications of convolutional codes for error control in digital communications over the last two decades. Research on trellis structure for block codes, by contrast, long remained inactive, for two major reasons. First, most coding theorists at that time believed that block codes did not have a simple trellis structure like convolutional codes, and that maximum likelihood decoding of linear block codes using the Viterbi algorithm was practically impossible, except for very short block codes. Second, since almost all linear block codes are constructed algebraically or based on finite geometries, it was the belief of many coding theorists that algebraic decoding was the only way to decode these codes. These two reasons seriously hindered the development of efficient soft-decision decoding methods for linear block codes and their applications to error control in digital communications, and led to a general belief that block codes were inferior to convolutional codes and hence not useful. Chapter 2 gives a brief review of linear block codes; the goal is to provide the essential background material for the development of trellis structure and trellis-based decoding algorithms for linear block codes in the later chapters. Chapters 3 through 6 present the fundamental concepts, finite-state machine model, state space formulation, basic structural properties, state labeling, construction procedures, complexity, minimality, and sectionalization of trellises. Chapter 7 discusses trellis decomposition and subtrellises for low-weight codewords. Chapter 8 first presents well known methods for constructing long powerful codes from short component codes or component codes of smaller dimensions, and then provides methods for constructing their trellises, including the Shannon and Cartesian product techniques. Chapter 9 deals with convolutional codes, puncturing, zero-tail termination, and tail-biting. Chapters 10 through 13 present various trellis-based decoding algorithms, old and new. Chapter 10 first discusses the application of the well known Viterbi decoding algorithm to linear block codes, optimum sectionalization of a code trellis to minimize computational complexity, and design issues for IC (integrated circuit) implementation of a Viterbi decoder; it then presents a new decoding algorithm for convolutional codes, named the Differential Trellis Decoding (DTD) algorithm. Chapter 12 presents a suboptimum reliability-based iterative decoding algorithm with a low-weight trellis search for the most likely codeword. This decoding algorithm provides a good trade-off between error performance and decoding complexity. All the decoding algorithms presented in Chapters 10 through 12 are devised to minimize word error probability. Chapter 13 presents decoding algorithms that minimize bit error probability and provide the corresponding soft (reliability) information at the output of the decoder; the algorithms presented are the MAP (maximum a posteriori probability) decoding algorithm and the Soft-Output Viterbi Algorithm (SOVA). Finally, the minimization of bit error probability in trellis-based MLD is discussed.
Joint source-channel coding for motion-compensated DCT-based SNR scalable video.
Kondi, Lisimachos P; Ishtiaq, Faisal; Katsaggelos, Aggelos K
2002-01-01
In this paper, we develop an approach toward joint source-channel coding for motion-compensated DCT-based scalable video coding and transmission. A framework for the optimal selection of the source and channel coding rates over all scalable layers is presented such that the overall distortion is minimized. The algorithm utilizes universal rate distortion characteristics which are obtained experimentally and show the sensitivity of the source encoder and decoder to channel errors. The proposed algorithm allocates the available bit rate between scalable layers and, within each layer, between source and channel coding. We present the results of this rate allocation algorithm for video transmission over a wireless channel using the H.263 Version 2 signal-to-noise ratio (SNR) scalable codec for source coding and rate-compatible punctured convolutional (RCPC) codes for channel coding. We discuss the performance of the algorithm with respect to the channel conditions, coding methodologies, layer rates, and number of layers.
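The allocation step described here is, at bottom, a search over a small discrete grid of operating points per layer. A minimal sketch with invented numbers (the real algorithm uses the experimentally measured universal rate-distortion characteristics mentioned in the abstract):

```python
from itertools import product

# Per-layer operating points: (total rate after channel coding in kbps,
# expected end-to-end distortion). Values are invented for illustration.
LAYER_OPTIONS = [
    [(64, 40.0), (96, 28.0), (128, 22.0)],  # base layer
    [(32, 12.0), (64, 7.0)],                # enhancement layer
]
BUDGET = 176  # total transmitted bit-rate budget (kbps)

best = min(
    (choice for choice in product(*LAYER_OPTIONS)
     if sum(rate for rate, _ in choice) <= BUDGET),
    key=lambda choice: sum(dist for _, dist in choice),
)
print(best)  # -> ((128, 22.0), (32, 12.0)) for this toy table
```

Each operating point bundles a source rate with a channel code rate: strengthening the channel code at fixed total rate raises source distortion but lowers the expected channel-induced distortion, and the search picks the best compromise.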
Operational rate-distortion performance for joint source and channel coding of images.
Ruf, M J; Modestino, J W
1999-01-01
This paper describes a methodology for evaluating the operational rate-distortion behavior of combined source and channel coding schemes, with particular application to images. In particular, we demonstrate use of the operational rate-distortion function to obtain the optimum tradeoff between source coding accuracy and channel error protection under the constraint of a fixed transmission bandwidth for the investigated transmission schemes. Furthermore, we develop information-theoretic bounds on performance for specific source and channel coding systems and demonstrate that our combined source-channel coding methodology, applied to different schemes, results in operational rate-distortion performance which closely approaches these theoretical limits. We concentrate specifically on a wavelet-based subband source coding scheme and the use of binary rate-compatible punctured convolutional (RCPC) codes for transmission over the additive white Gaussian noise (AWGN) channel. Explicit results for real-world images demonstrate the efficacy of this approach.
Resource allocation for error resilient video coding over AWGN using optimization approach.
An, Cheolhong; Nguyen, Truong Q
2008-12-01
The number of slices for error resilient video coding is jointly optimized with an 802.11a-like media access control layer and a physical layer featuring automatic repeat request and rate-compatible punctured convolutional codes over an additive white Gaussian noise channel, as well as channel time allocation for time division multiple access. For error resilient video coding, the relation between the number of slices and coding efficiency is analyzed and formulated as a mathematical model. This model is applied to the joint optimization problem, and the problem is solved by a convex optimization method such as the primal-dual decomposition method. We compare the performance of a video communication system which uses the optimal number of slices with one that codes a picture as one slice. Numerical examples show that the end-to-end distortion of the utility functions can be significantly reduced with the optimal number of slices per picture, especially at low signal-to-noise ratio.
NASA Astrophysics Data System (ADS)
Jubran, Mohammad K.; Bansal, Manu; Kondi, Lisimachos P.
2006-01-01
In this paper, we consider the problem of optimal bit allocation for wireless video transmission over fading channels. We use a newly developed hybrid scalable/multiple-description codec that combines the functionality of both scalable and multiple-description codecs. It produces a base layer and multiple-description enhancement layers; any of the enhancement layers can be decoded (in a non-hierarchical manner) with the base layer to improve the reconstructed video quality. Two different channel coding schemes (Rate-Compatible Punctured Convolutional (RCPC)/Cyclic Redundancy Check (CRC) coding, and product-code Reed-Solomon (RS)+RCPC/CRC coding) are used for unequal error protection of the layered bitstream. Optimal allocation of the bitrate between source and channel coding is performed for discrete sets of source coding rates and channel coding rates. Experimental results are presented for a wide range of channel conditions. Comparisons with classical scalable coding also show the effectiveness of using hybrid scalable/multiple-description coding for wireless transmission.
NASA Astrophysics Data System (ADS)
Kurceren, Ragip; Modestino, James W.
1998-12-01
The use of forward error-control (FEC) coding, possibly in conjunction with ARQ techniques, has emerged as a promising approach for video transport over ATM networks for cell-loss recovery and/or bit error correction, such as might be required for wireless links. Although FEC provides cell-loss recovery capabilities, it also introduces transmission overhead which can itself cause additional cell losses. A methodology is described to maximize the number of video sources multiplexed at a given quality of service (QoS), measured in terms of decoded cell loss probability, using interlaced FEC codes. The transport channel is modelled as a block interference channel (BIC) and the multiplexer as a single-server, deterministic-service, finite buffer supporting N users. Based upon an information-theoretic characterization of the BIC and large deviation bounds on the buffer overflow probability, the described methodology provides theoretically achievable upper limits on the number of sources multiplexed. Performance of specific coding techniques using interlaced nonbinary Reed-Solomon (RS) codes and binary rate-compatible punctured convolutional (RCPC) codes is illustrated.
Entanglement-assisted quantum convolutional coding
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilde, Mark M.; Brun, Todd A.
2010-04-15
We show how to protect a stream of quantum information from decoherence induced by a noisy quantum communication channel. We exploit preshared entanglement and a convolutional coding structure to develop a theory of entanglement-assisted quantum convolutional coding. Our construction produces a Calderbank-Shor-Steane (CSS) entanglement-assisted quantum convolutional code from two arbitrary classical binary convolutional codes. The rate and error-correcting properties of the classical convolutional codes directly determine the corresponding properties of the resulting entanglement-assisted quantum convolutional code. We explain how to encode our CSS entanglement-assisted quantum convolutional codes starting from a stream of information qubits, ancilla qubits, and shared entangled bits.
Convolutional coding techniques for data protection
NASA Technical Reports Server (NTRS)
Massey, J. L.
1975-01-01
Results of research on the use of convolutional codes in data communications are presented. Convolutional coding fundamentals are discussed along with modulation and coding interaction. Concatenated coding systems and data compression with convolutional codes are described.
NASA Astrophysics Data System (ADS)
Bezan, Scott; Shirani, Shahram
2006-12-01
To reliably transmit video over error-prone channels, the data should be both source and channel coded. When multiple channels are available for transmission, the problem extends to that of partitioning the data across these channels. The condition of transmission channels, however, varies with time, so the error protection added to the data at one instant may not be optimal at the next. In this paper, we propose a method for adaptively adding error-correction coding in a rate-distortion (RD) optimized manner, using rate-compatible punctured convolutional codes, to a constant-rate MJPEG2000-coded frame of video. We analyze the rate-distortion tradeoff of each of the coding units (tiles and packets) in each frame and adapt the error-correction code assigned to each unit, taking into account the bandwidth and error characteristics of the channels. This method is applied to both single and multiple time-varying channel environments. We compare our method with a basic protection method in which data is either not transmitted, transmitted with no protection, or transmitted with a fixed amount of protection. Simulation results show promising performance for our proposed method.
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, D. J., Jr.
1986-01-01
High-rate concatenated coding systems with trellis inner codes and Reed-Solomon (RS) outer codes for application in satellite communication systems are considered. Two types of inner codes are studied: high-rate punctured binary convolutional codes, which result in overall effective information rates between 1/2 and 1 bit per channel use; and bandwidth-efficient signal space trellis codes, which can achieve overall effective information rates greater than 1 bit per channel use. Channel capacity calculations with and without side information were performed for the concatenated coding system, and concatenated coding schemes were investigated. In Scheme 1, the inner code is decoded with the Viterbi algorithm and the outer RS code performs error correction only (decoding without side information). In Scheme 2, the inner code is decoded with a modified Viterbi algorithm which produces reliability information along with the decoded output. In this algorithm, path metrics are used to estimate the entire information sequence, while branch metrics are used to provide the reliability information on the decoded sequence. This information is used to erase unreliable bits in the decoded output. An errors-and-erasures RS decoder is then used for the outer code. These two schemes are proposed for use on NASA satellite channels. Results indicate that high system reliability can be achieved with little or no bandwidth expansion.
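Scheme 2's errors-and-erasures outer decoding rests on the standard Reed-Solomon condition: a code with minimum distance d corrects e errors and f erasures whenever 2e + f ≤ d − 1, so every erased (unreliable) symbol costs half as much as an undetected error. A small worked check with commonly used (but here merely illustrative) parameters:

```python
# RS(255, 223): minimum distance d = n - k + 1 = 33.
n, k = 255, 223
d = n - k + 1

def correctable(errors: int, erasures: int) -> bool:
    return 2 * errors + erasures <= d - 1

print(correctable(16, 0))   # True: the familiar 16-error limit
print(correctable(10, 12))  # True: erasing unreliable symbols buys margin
print(correctable(16, 1))   # False: one erasure too many at 16 errors
```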
FEC combined burst-modem for business satellite communications use
NASA Astrophysics Data System (ADS)
Murakami, K.; Miyake, M.; Fuji, T.; Moritani, Y.; Fujino, T.
The authors recently developed two types of FEC (forward error correction) combined modems, both applicable to low-data-rate and intermediate-data-rate TDMA international satellite communications. Each FEC combined modem consists of a QPSK (quadrature phase-shift keyed) modem, a convolutional encoder, and a Viterbi decoder. Both modems are designed taking into consideration fast acquisition of the carrier and bit timing and a low cycle-slipping rate in the low-carrier-to-noise-ratio environment. Attention is paid to designing the Viterbi decoder to operate in a situation in which successive bursts may have different coding rates according to the punctured coding scheme. The overall scheme of the FEC combined modems is presented, and some of the key technologies applied in developing them are outlined. The hardware implementation and experimentation are also discussed. The measured data are compared with results of theoretical analysis, and relatively good performance is obtained.
NASA Astrophysics Data System (ADS)
Pei, Yong; Modestino, James W.
2004-12-01
Digital video delivered over wired-to-wireless networks is expected to suffer quality degradation from both packet loss and bit errors in the payload. In this paper, the quality degradation due to packet loss and bit errors in the payload is quantitatively evaluated and its effects are assessed. We propose the use of a concatenated forward error correction (FEC) coding scheme employing Reed-Solomon (RS) codes and rate-compatible punctured convolutional (RCPC) codes to protect the video data from packet loss and bit errors, respectively. Furthermore, the performance of a joint source-channel coding (JSCC) approach employing this concatenated FEC coding scheme for video transmission is studied. Finally, we describe an improved end-to-end architecture using an edge proxy in a mobile support station to implement differential error protection for the corresponding channel impairments expected on the two networks. Results indicate that with an appropriate JSCC approach and the use of an edge proxy, FEC-based error-control techniques together with passive error-recovery techniques can significantly improve the effective video throughput and lead to acceptable video delivery quality over time-varying heterogeneous wired-to-wireless IP networks.
The analysis of convolutional codes via the extended Smith algorithm
NASA Technical Reports Server (NTRS)
Mceliece, R. J.; Onyszchuk, I.
1993-01-01
Convolutional codes have been the central part of most error-control systems in deep-space communication for many years. Almost all such applications, however, have used the restricted class of (n,1), also known as 'rate 1/n,' convolutional codes. The more general class of (n,k) convolutional codes contains many potentially useful codes, but their algebraic theory is difficult and has proved to be a stumbling block in the evolution of convolutional coding systems. In this article, the situation is improved by describing a set of practical algorithms for computing certain basic things about a convolutional code (among them the degree, the Forney indices, a minimal generator matrix, and a parity-check matrix), which are usually needed before a system using the code can be built. The approach is based on the classic Forney theory for convolutional codes, together with the extended Smith algorithm for polynomial matrices, which is introduced in this article.
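One of the basic quantities the article computes, the Forney indices, reduces to row degrees once a canonical generator matrix is available. A minimal sketch, assuming polynomial entries in D stored as coefficient lists and assuming the matrix supplied is already canonical (in general the article's algorithms are needed to reach that form):

```python
# Forney indices of a canonical polynomial generator matrix G(D):
# the row degrees; their sum is the degree (total memory) of the code.
def poly_deg(coeffs):
    # coeffs[i] is the coefficient of D**i
    return max((i for i, c in enumerate(coeffs) if c), default=-1)

def forney_indices(G):
    return [max(poly_deg(entry) for entry in row) for row in G]

# Example: G(D) = [[1 + D, D, 1], [1, 1, 1 + D]] (assumed canonical)
G = [[[1, 1], [0, 1], [1]],
     [[1], [1], [1, 1]]]
idx = forney_indices(G)
print(idx, "code degree =", sum(idx))  # [1, 1] code degree = 2
```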
2008-09-01
…Convolutional encoder block diagram of code rate r = 1/2 and … most commonly used along with block codes. They were introduced in 1955 by Elias [7]. Convolutional codes are characterized by the code rate r = k/n … convolutional code for r = 1/2 and κ = 3, namely [7 5], is used. Figure 2. Convolutional encoder block diagram of code rate r = 1/2 and …
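The rate-1/2, κ = 3 code with octal generators [7 5] named in this excerpt is small enough to encode directly. A minimal feedforward, zero-terminated implementation (our sketch, not taken from the cited report):

```python
# Rate-1/2 convolutional encoder, constraint length 3,
# generators g1 = 7 (binary 111) and g2 = 5 (binary 101).
G1, G2 = 0b111, 0b101

def encode(bits):
    state, out = 0, []
    for b in bits + [0, 0]:          # two tail bits flush the registers
        reg = (b << 2) | state       # newest bit in the high position
        out += [bin(reg & G1).count("1") & 1,   # parity of tapped bits
                bin(reg & G2).count("1") & 1]
        state = reg >> 1
    return out

print(encode([1, 0, 1, 1]))  # 12 coded bits for 4 info bits + 2 tail bits
```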
There is no MacWilliams identity for convolutional codes. [transmission gain comparison
NASA Technical Reports Server (NTRS)
Shearer, J. B.; Mceliece, R. J.
1977-01-01
An example is provided of two convolutional codes that have the same transmission gain but whose dual codes do not. This shows that no analog of the MacWilliams identity for block codes can exist relating the transmission gains of a convolutional code and its dual.
NASA Technical Reports Server (NTRS)
Lee, L.-N.
1977-01-01
Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively modest coding complexity, it is proposed to concatenate a byte-oriented unit-memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real-time minimal-byte-error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.
NASA Technical Reports Server (NTRS)
Lee, L. N.
1976-01-01
Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively small coding complexity, it is proposed to concatenate a byte oriented unit memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real time minimal byte error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.
Scalable video transmission over Rayleigh fading channels using LDPC codes
NASA Astrophysics Data System (ADS)
Bansal, Manu; Kondi, Lisimachos P.
2005-03-01
In this paper, we investigate the important problem of efficiently utilizing the available resources for video transmission over wireless channels while maintaining good decoded video quality and resilience to channel impairments. Our system consists of a video codec based on the 3-D set partitioning in hierarchical trees (3-D SPIHT) algorithm and employs two different schemes using low-density parity-check (LDPC) codes for channel error protection. The first method uses the serial concatenation of a constant-rate LDPC code and rate-compatible punctured convolutional (RCPC) codes; a cyclic redundancy check (CRC) is used to detect transmission errors. In the other scheme, we use a product code structure consisting of a constant-rate LDPC/CRC code across the rows of the 'blocks' of source data and an erasure-correcting systematic Reed-Solomon (RS) code as the column code. In both schemes, we use fixed-length source packets protected with unequal forward error correction coding, ensuring a strictly decreasing protection across the bitstream. A Rayleigh flat-fading channel with additive white Gaussian noise (AWGN) is modeled for the transmission. A rate-distortion optimization algorithm is developed and carried out for the selection of source coding and channel coding rates using Lagrangian optimization. The experimental results demonstrate the effectiveness of this system under different wireless channel conditions; both of the proposed methods (LDPC+RCPC/CRC and RS+LDPC/CRC) outperform more conventional schemes such as those employing RCPC/CRC.
NASA Astrophysics Data System (ADS)
Pei, Yong; Modestino, James W.
2007-12-01
We describe a multilayered video transport scheme for wireless channels capable of adapting to channel conditions in order to maximize end-to-end quality of service (QoS). This scheme combines a scalable H.263+ video source coder with unequal error protection (UEP) across layers. The UEP is achieved by employing different channel codes together with a multiresolution modulation approach to transport the different priority layers. Adaptivity to channel conditions is provided through a joint source-channel coding (JSCC) approach which attempts to jointly optimize the source and channel coding rates together with the modulation parameters to obtain the maximum achievable end-to-end QoS for the prevailing channel conditions. In this work, we model the wireless links as slow-fading Rician channels whose conditions can be described in terms of the channel signal-to-noise ratio (SNR) and the ratio of specular-to-diffuse energy. The multiresolution modulation/coding scheme consists of binary rate-compatible punctured convolutional (RCPC) codes used together with nonuniform phase-shift keyed (PSK) signaling constellations. Results indicate that this adaptive JSCC scheme, employing scalable video encoding together with a multiresolution modulation/coding approach, leads to significant improvements in delivered video quality for specified channel conditions. In particular, the approach results in considerably improved graceful-degradation properties for decreasing channel SNR.
Coset Codes Viewed as Terminated Convolutional Codes
NASA Technical Reports Server (NTRS)
Fossorier, Marc P. C.; Lin, Shu
1996-01-01
In this paper, coset codes are considered as terminated convolutional codes. Based on this approach, three new general results are presented. First, it is shown that the iterative squaring construction can equivalently be defined from a convolutional code whose trellis terminates. This convolutional code determines a simple encoder for the coset code considered, and the state and branch labelings of the associated trellis diagram become straightforward. Also, from the generator matrix of the code in its convolutional code form, much information about the trade-off between the state connectivity and complexity at each section, and the parallel structure of the trellis, is directly available. Based on this generator matrix, it is shown that the parallel branches in the trellis diagram of the convolutional code represent the same coset code C(sub 1), of smaller dimension and shorter length. Utilizing this fact, a two-stage optimum trellis decoding method is devised. The first stage decodes C(sub 1), while the second stage decodes the associated convolutional code, using the branch metrics delivered by stage 1. Finally, a bidirectional decoding of each received block starting at both ends is presented. If about the same number of computations is required, this approach remains very attractive from a practical point of view as it roughly doubles the decoding speed. This fact is particularly interesting whenever the second half of the trellis is the mirror image of the first half, since the same decoder can be implemented for both parts.
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.
1992-01-01
Work performed during the reporting period is summarized. A construction of robustly good trellis codes for use with sequential decoding was developed; these codes provide a much better trade-off between free distance and distance profile. The unequal error protection capabilities of convolutional codes were studied. The problem of finding good large-constraint-length, low-rate convolutional codes for deep-space applications was investigated. A formula for computing the free distance of rate-1/n convolutional codes was discovered. Double memory (DM) codes, codes with two memory units per input bit position, were studied, and a search for optimal DM codes is being conducted. An algorithm for constructing convolutional codes from a given quasi-cyclic code was developed. Papers based on the above work are included in the appendix.
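The free-distance computation mentioned in this summary is, for feedforward codes of modest memory, a shortest-path problem: d_free is the minimum output weight over all trellis paths that leave the all-zero state and first return to it. A brute-force Dijkstra sketch, specialized to rate 1/2 and checked against the (7,5) code (our implementation, not the report's formula):

```python
import heapq

def dfree(g1: int, g2: int, memory: int) -> int:
    """Free distance of a rate-1/2 feedforward convolutional code."""
    def step(state, b):
        reg = (b << memory) | state
        w = (bin(reg & g1).count("1") & 1) + (bin(reg & g2).count("1") & 1)
        return reg >> 1, w

    start, w0 = step(0, 1)           # leave the zero state with an input 1
    dist, heap = {start: w0}, [(w0, start)]
    while heap:
        w, state = heapq.heappop(heap)
        if state == 0:
            return w                 # first return to zero: minimum weight
        if w > dist.get(state, w):
            continue                 # stale queue entry
        for b in (0, 1):
            ns, nw = step(state, b)
            if w + nw < dist.get(ns, 1 << 30):
                dist[ns] = w + nw
                heapq.heappush(heap, (w + nw, ns))

print(dfree(0b111, 0b101, 2))  # -> 5, the known d_free of the (7,5) code
```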
The general theory of convolutional codes
NASA Technical Reports Server (NTRS)
Mceliece, R. J.; Stanley, R. P.
1993-01-01
This article presents a self-contained introduction to the algebraic theory of convolutional codes. This introduction is partly a tutorial, but at the same time contains a number of new results which will prove useful for designers of advanced telecommunication systems. Among the new concepts introduced here are the Hilbert series for a convolutional code and the class of compact codes.
Error-trellis Syndrome Decoding Techniques for Convolutional Codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1984-01-01
An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.
Coordinated design of coding and modulation systems
NASA Technical Reports Server (NTRS)
Massey, J. L.; Ancheta, T.; Johannesson, R.; Lauer, G.; Lee, L.
1976-01-01
The joint optimization of the coding and modulation systems employed in telemetry systems was investigated. Emphasis was placed on formulating inner and outer coding standards used by the Goddard Spaceflight Center. Convolutional codes were found that are nearly optimum for use with Viterbi decoding in the inner coding of concatenated coding systems. A convolutional code, the unit-memory code, was discovered and is ideal for inner system usage because of its byte-oriented structure. Simulations of sequential decoding on the deep-space channel were carried out to compare directly various convolutional codes that are proposed for use in deep-space systems.
Design of convolutional tornado code
NASA Astrophysics Data System (ADS)
Zhou, Hui; Yang, Yao; Gao, Hongmin; Tan, Lu
2017-09-01
As a linear block code, the traditional tornado (tTN) code is inefficient in a burst-erasure environment, and its multi-level structure may lead to high encoding/decoding complexity. This paper presents a convolutional tornado (cTN) code which improves the burst-erasure protection capability by applying the convolution property to the tTN code, and reduces computational complexity by abrogating the multi-level structure. The simulation results show that the cTN code can provide better packet-loss protection with lower computational complexity than the tTN code.
Simulation of ICD-9 to ICD-10-CM Transition for Family Medicine: Simple or Convoluted?
Grief, Samuel N; Patel, Jesal; Kochendorfer, Karl M; Green, Lee A; Lussier, Yves A; Li, Jianrong; Burton, Michael; Boyd, Andrew D
2016-01-01
The objective of this study was to examine the impact of the transition from International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM), to International Classification of Diseases, 10th Revision, Clinical Modification (ICD-10-CM), on family medicine and to identify areas where additional training might be required. Family medicine ICD-9-CM codes were obtained from an Illinois Medicaid data set (113,000 patient visits and $5.5 million in claims). Using the science of networks, we evaluated each ICD-9-CM code used by family medicine physicians to determine whether the transition was simple or convoluted. A simple transition is defined as 1 ICD-9-CM code mapping to 1 ICD-10-CM code, or 1 ICD-9-CM code mapping to multiple ICD-10-CM codes. A convoluted transition is one where the transitions between coding systems are nonreciprocal and complex, with multiple codes whose definitions become intertwined. Three family medicine physicians evaluated the most frequently encountered complex mappings for clinical accuracy. Of the 1635 diagnosis codes used by family medicine physicians, 70% of the codes were categorized as simple, 27% were convoluted, and 3% had no mapping. For the visits, 75%, 24%, and 1% corresponded with simple, convoluted, and no mapping, respectively. Payment for submitted claims was similarly aligned. Of the frequently encountered convoluted codes, 3 diagnosis codes were clinically incorrect, but they represent only <0.1% of the overall diagnosis codes. The transition to ICD-10-CM is simple for 70% or more of diagnosis codes, visits, and reimbursement for a family medicine physician. However, some frequently used codes for disease management are convoluted and incorrect, and additional resources need to be invested in these to ensure a successful transition to ICD-10-CM.
Simulation of ICD-9 to ICD-10-CM transition for family medicine: simple or convoluted?
Grief, Samuel N.; Patel, Jesal; Lussier, Yves A.; Li, Jianrong; Burton, Michael; Boyd, Andrew D.
2017-01-01
Objectives: The objective of this study was to examine the impact of the transition from International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) to International Classification of Diseases, Tenth Revision, Clinical Modification (ICD-10-CM) on family medicine and to identify areas where additional training might be required. Methods: Family medicine ICD-9-CM codes were obtained from an Illinois Medicaid data set (113,000 patient visits and $5.5 million in claims). Using the science of networks, we evaluated each ICD-9-CM code used by family medicine physicians to determine if the transition was simple or convoluted. A simple transition is defined as one ICD-9-CM code mapping to one ICD-10-CM code, or one ICD-9-CM code mapping to multiple ICD-10-CM codes. A convoluted transition is one where the transitions between coding systems are non-reciprocal and complex, with multiple codes whose definitions become intertwined. Three family medicine physicians evaluated the most frequently encountered complex mappings for clinical accuracy. Results: Of the 1635 diagnosis codes used by the family medicine physicians, 70% of the codes were categorized as simple, 27% were convoluted, and 3% were found to have no mapping. For the visits, 75%, 24%, and 1% corresponded with simple, convoluted, and no mapping, respectively. Payment for submitted claims was similarly aligned. Of the frequently encountered convoluted codes, 3 diagnosis codes were clinically incorrect, but they represent only <0.1% of the overall diagnosis codes. Conclusions: The transition to ICD-10-CM is simple for 70% or more of diagnosis codes, visits, and reimbursement for a family medicine physician. However, some frequently used codes for disease management are convoluted and incorrect, and additional resources need to be invested in these to ensure a successful transition to ICD-10-CM.
NASA Technical Reports Server (NTRS)
Benjauthrit, B.; Mulhall, B.; Madsen, B. D.; Alberda, M. E.
1976-01-01
The DSN telemetry system performance with convolutionally coded data using the operational maximum-likelihood convolutional decoder (MCD) being implemented in the Network is described. Data rates from 80 bps to 115.2 kbps and both S- and X-band receivers are reported. The results of both one- and two-way radio losses are included.
Wei, Jianing; Bouman, Charles A; Allebach, Jan P
2014-05-01
Many imaging applications require the implementation of space-varying convolution for accurate restoration and reconstruction of images. Here, we use the term space-varying convolution to refer to linear operators whose impulse response has slow spatial variation. In addition, these space-varying convolution operators are often dense, so direct implementation of the convolution operator is typically computationally impractical. One such example is the problem of stray light reduction in digital cameras, which requires the implementation of a dense space-varying deconvolution operator. However, other inverse problems, such as iterative tomographic reconstruction, can also depend on the implementation of dense space-varying convolution. While space-invariant convolution can be efficiently implemented with the fast Fourier transform, this approach does not work for space-varying operators. So direct convolution is often the only option for implementing space-varying convolution. In this paper, we develop a general approach to the efficient implementation of space-varying convolution, and demonstrate its use in the application of stray light reduction. Our approach, which we call matrix source coding, is based on lossy source coding of the dense space-varying convolution matrix. Importantly, by coding the transformation matrix, we not only reduce the memory required to store it; we also dramatically reduce the computation required to implement matrix-vector products. Our algorithm is able to reduce computation by approximately factoring the dense space-varying convolution operator into a product of sparse transforms. Experimental results show that our method can dramatically reduce the computation required for stray light reduction while maintaining high accuracy.
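For contrast with the fast-transform case, direct space-varying convolution has to apply a different kernel at every pixel, which is what makes the dense operator expensive and motivates the matrix source coding idea. A naive numpy sketch under an invented kernel model (not the paper's stray-light model):

```python
import numpy as np

def space_varying_blur(img, kernel_at):
    """Direct space-varying convolution: each output pixel uses its own
    kernel, so FFT-based space-invariant convolution does not apply."""
    h, w = img.shape
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            k = kernel_at(i, j)          # kernel varies with position
            r = k.shape[0] // 2
            for di in range(-r, r + 1):
                for dj in range(-r, r + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < h and 0 <= jj < w:
                        out[i, j] += k[di + r, dj + r] * img[ii, jj]
    return out

# Invented model: blur strength grows with distance from the image center.
def kernel_at(i, j, size=5, h=64, w=64):
    d = np.hypot(i - h / 2, j - w / 2) / np.hypot(h / 2, w / 2)
    sigma = 0.5 + 2.0 * d
    ax = np.arange(size) - size // 2
    k = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))
    return k / k.sum()

print(space_varying_blur(np.random.rand(64, 64), kernel_at).shape)
```

The triple loop makes the cost (image pixels times kernel support) explicit; matrix source coding attacks exactly this cost by approximately factoring the dense operator into sparse transforms.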
Minimal-memory realization of pearl-necklace encoders of general quantum convolutional codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Houshmand, Monireh; Hosseini-Khayat, Saied
2011-02-15
Quantum convolutional codes, like their classical counterparts, promise to offer higher error correction performance than block codes of equivalent encoding complexity, and are expected to find important applications in reliable quantum communication where a continuous stream of qubits is transmitted. Grassl and Roetteler devised an algorithm to encode a quantum convolutional code with a "pearl-necklace" encoder. Despite their algorithm's theoretical significance as a neat way of representing quantum convolutional codes, it is not well suited to practical realization. In fact, there is no straightforward way to implement any given pearl-necklace structure. This paper closes the gap between theoretical representation and practical implementation. In our previous work, we presented an efficient algorithm to find a minimal-memory realization of a pearl-necklace encoder for Calderbank-Shor-Steane (CSS) convolutional codes. This work is an extension of our previous work and presents an algorithm for turning a pearl-necklace encoder for a general (non-CSS) quantum convolutional code into a realizable quantum convolutional encoder. We show that a minimal-memory realization depends on the commutativity relations between the gate strings in the pearl-necklace encoder. We find a realization by means of a weighted graph which details the noncommutative paths through the pearl necklace. The weight of the longest path in this graph is equal to the minimal amount of memory needed to implement the encoder. The algorithm has a polynomial-time complexity in the number of gate strings in the pearl-necklace encoder.
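A sketch of the core computation the abstract points to: the minimal memory equals the weight of the longest path in a weighted directed acyclic graph built from the noncommuting gate strings. The graph below and its weights are illustrative assumptions, not the authors' construction.

```python
from functools import lru_cache

# Toy DAG: node -> [(successor, edge weight)]; edges join noncommuting
# gate strings, weights are illustrative.
edges = {0: [(1, 2), (2, 3)], 1: [(3, 1)], 2: [(3, 4)], 3: []}

@lru_cache(maxsize=None)
def longest_from(v):
    # Longest weighted path starting at v (well-defined because the graph
    # is acyclic).
    return max((w + longest_from(u) for u, w in edges[v]), default=0)

print(max(longest_from(v) for v in edges))  # 7: minimal memory in this toy DAG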
NASA Astrophysics Data System (ADS)
Pan, Xiaolong; Liu, Bo; Zheng, Jianglong; Tian, Qinghua
2016-08-01
We propose and demonstrate a low-complexity Reed-Solomon-based low-density parity-check (RS-LDPC) code with an adaptive puncturing decoding algorithm for elastic optical transmission systems. Parts of the received codeword and the corresponding columns of the parity-check matrix can be punctured, adapting the parity-check matrix during decoding to reduce computational complexity. The results show that the complexity of the proposed decoding algorithm is reduced by 30% compared with the regular RS-LDPC system. The optimized code rate of the RS-LDPC code is obtained after five iterations.
Achieving unequal error protection with convolutional codes
NASA Technical Reports Server (NTRS)
Mills, D. G.; Costello, D. J., Jr.; Palazzo, R., Jr.
1994-01-01
This paper examines the unequal error protection capabilities of convolutional codes. Both time-invariant and periodically time-varying convolutional encoders are examined. The effective free distance vector is defined and is shown to be useful in determining the unequal error protection (UEP) capabilities of convolutional codes. A modified transfer function is used to determine an upper bound on the bit error probabilities for individual input bit positions in a convolutional encoder. The bound is heavily dependent on the individual effective free distance of the input bit position. A bound relating two individual effective free distances is presented. The bound is a useful tool in determining the maximum possible disparity in individual effective free distances of encoders of specified rate and memory distribution. The unequal error protection capabilities of convolutional encoders of several rates and memory distributions are determined and discussed.
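The effective free distance d_j defined above restricts attention to paths generated by inputs with a '1' in position j. The sketch below computes the ordinary free distance (the k = 1 special case) of a small rate-1/2 encoder by a Dijkstra search on the trellis; the (7, 5) octal generators are a textbook example, not a code from the paper.

```python
import heapq

G, M = (0b111, 0b101), 2  # rate-1/2, constraint length 3, generators (7,5)

def step(state, bit):
    """One trellis branch: returns (next state, Hamming weight of output)."""
    reg = (bit << M) | state
    outw = sum((bin(reg & g).count("1") & 1) for g in G)
    return reg >> 1, outw

def free_distance():
    # Diverge from the all-zero state with an input '1', then Dijkstra-search
    # for the minimum-weight path that remerges with state 0.
    s0, w0 = step(0, 1)
    heap, best = [(w0, s0)], {s0: w0}
    while heap:
        w, s = heapq.heappop(heap)
        if s == 0:
            return w
        if w > best.get(s, float("inf")):
            continue
        for bit in (0, 1):
            ns, ow = step(s, bit)
            if w + ow < best.get(ns, float("inf")):
                best[ns] = w + ow
                heapq.heappush(heap, (w + ow, ns))

print(free_distance())  # 5, the well-known dfree of the (7,5) code
```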
Convolutional encoding of self-dual codes
NASA Technical Reports Server (NTRS)
Solomon, G.
1994-01-01
There exist almost complete convolutional encodings of self-dual codes, i.e., block codes of rate 1/2 with weights w, w = 0 mod 4. The codes are of length 8m with the convolutional portion of length 8m-2 and the nonsystematic information of length 4m-1. The last two bits are parity checks on the two (4m-1) length parity sequences. The final information bit complements one of the extended parity sequences of length 4m. Solomon and van Tilborg have developed algorithms to generate these for the Quadratic Residue (QR) Codes of lengths 48 and beyond. For these codes and reasonable constraint lengths, there are sequential decodings for both hard and soft decisions. There are also possible Viterbi-type decodings that may be simple, as in a convolutional encoding/decoding of the extended Golay Code. In addition, the previously found constraint length K = 9 for the QR (48, 24;12) Code is lowered here to K = 8.
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.
1994-01-01
Brief summaries of research in the following areas are presented: (1) construction of optimum geometrically uniform trellis codes; (2) a statistical approach to constructing convolutional code generators; and (3) calculating the exact performance of a convolutional code.
Coding performance of the Probe-Orbiter-Earth communication link
NASA Technical Reports Server (NTRS)
Divsalar, D.; Dolinar, S.; Pollara, F.
1993-01-01
The coding performance of the Probe-Orbiter-Earth communication link is analyzed and compared for several cases. It is assumed that the coding system consists of a convolutional code at the Probe, a quantizer and another convolutional code at the Orbiter, and two cascaded Viterbi decoders or a combined decoder on the ground.
2012-03-01
advanced antenna systems AMC adaptive modulation and coding AWGN additive white Gaussian noise BPSK binary phase shift keying BS base station BTC ... QAM-16, and QAM-64, and coding types include convolutional coding (CC), convolutional turbo coding (CTC), block turbo coding (BTC), zero-terminating
Further Developments in the Communication Link and Error Analysis (CLEAN) Simulator
NASA Technical Reports Server (NTRS)
Ebel, William J.; Ingels, Frank M.
1995-01-01
During the period 1 July 1993 - 30 June 1994, significant developments to the Communication Link and Error ANalysis (CLEAN) simulator were completed. Many of these were reported in the Semi-Annual report dated December 1993, which has been included in this report in Appendix A. Since December 1993, a number of additional modules have been added involving Unit-Memory Convolutional codes (UMC). These are: (1) Unit-Memory Convolutional Encoder module (UMCEncd); (2) Hard-decision Unit-Memory Convolutional Decoder using the Viterbi decoding algorithm (VitUMC); and (3) a number of utility modules designed to investigate the performance of UMCs, such as UMC column distance function (UMCdc), UMC free distance function (UMCdfree), UMC row distance function (UMCdr), and UMC Transformation (UMCTrans). The study of UMCs was driven, in part, by the desire to investigate high-rate convolutional codes, which are better suited as inner codes for a concatenated coding scheme. A number of high-rate UMCs were found which are good candidates for inner codes. Besides the further development of the simulator, a study was performed to construct a table of the best known Unit-Memory Convolutional codes. Finally, a preliminary study of the usefulness of the Periodic Convolutional Interleaver (PCI) was completed and documented in a technical note dated March 17, 1994. This technical note has also been included in this final report.
Layered video transmission over multirate DS-CDMA wireless systems
NASA Astrophysics Data System (ADS)
Kondi, Lisimachos P.; Srinivasan, Deepika; Pados, Dimitris A.; Batalama, Stella N.
2003-05-01
In this paper, we consider the transmission of video over wireless direct-sequence code-division multiple access (DS-CDMA) channels. A layered (scalable) video source codec is used and each layer is transmitted over a different CDMA channel. Spreading codes with different lengths are allowed for each CDMA channel (multirate CDMA). Thus, a different number of chips per bit can be used for the transmission of each scalable layer. For a given fixed energy value per chip and chip rate, the selection of a spreading code length affects the transmitted energy per bit and bit rate for each scalable layer. An MPEG-4 source encoder is used to provide a two-layer SNR scalable bitstream. Each of the two layers is channel-coded using Rate-Compatible Punctured Convolutional (RCPC) codes. Then, the data are interleaved, spread, carrier-modulated and transmitted over the wireless channel. A multipath Rayleigh fading channel is assumed. At the other end, we assume the presence of an antenna array receiver. After carrier demodulation, multiple-access-interference suppressing despreading is performed using space-time auxiliary vector (AV) filtering. The choice of the AV receiver is dictated by realistic channel fading rates that limit the data record available for receiver adaptation and redesign. Indeed, AV filter short-data-record estimators have been shown to exhibit superior bit-error-rate performance in comparison with LMS, RLS, SMI, or 'multistage nested Wiener' adaptive filter implementations. Our experimental results demonstrate the effectiveness of multirate DS-CDMA systems for wireless video transmission.
Design of ACM system based on non-greedy punctured LDPC codes
NASA Astrophysics Data System (ADS)
Lu, Zijun; Jiang, Zihong; Zhou, Lin; He, Yucheng
2017-08-01
In this paper, an adaptive coded modulation (ACM) scheme based on rate-compatible LDPC (RC-LDPC) codes is designed. The RC-LDPC codes are constructed by a non-greedy puncturing method which shows good performance in the high-code-rate region. Moreover, an incremental redundancy scheme for the LDPC-based ACM system over the AWGN channel is proposed. Under this scheme, code rates vary from 2/3 to 5/6 and the complexity of the ACM system is reduced. Simulations show that the proposed ACM system obtains increasingly significant coding gain together with higher throughput.
Ghosh, Arindam; Lee, Jae-Won; Cho, Ho-Shin
2013-01-01
Due to their efficiency, reliability, and better channel and resource utilization, cooperative transmission technologies have been attractive options in underwater as well as terrestrial sensor networks. Their performance can be further improved if merged with forward error correction (FEC) techniques. In this paper, we propose and analyze a retransmission protocol named Cooperative-Hybrid Automatic Repeat reQuest (C-HARQ) for underwater acoustic sensor networks, which exploits both the reliability of cooperative ARQ (CARQ) and the efficiency of incremental redundancy-hybrid ARQ (IR-HARQ) using rate-compatible punctured convolutional (RCPC) codes. Extensive Monte Carlo simulations are performed to investigate the performance of the protocol, in terms of both throughput and energy efficiency. The results clearly reveal the enhancement in performance achieved by the C-HARQ protocol, which outperforms both CARQ and conventional stop-and-wait ARQ (S&W ARQ). Further, using computer simulations, optimum values of various network parameters are estimated so as to extract the best performance out of the C-HARQ protocol. PMID:24217359
Prioritized packet video transmission over time-varying wireless channel using proactive FEC
NASA Astrophysics Data System (ADS)
Kumwilaisak, Wuttipong; Kim, JongWon; Kuo, C.-C. Jay
2000-12-01
Quality of video transmitted over time-varying wireless channels relies heavily on the coordinated effort to cope with both channel and source variations dynamically. Given the priority of each source packet and the estimated channel condition, an adaptive protection scheme based on joint source-channel criteria is investigated via proactive forward error correction (FEC). With proactive FEC in Reed Solomon (RS)/Rate-compatible punctured convolutional (RCPC) codes, we study a practical algorithm to match the relative priority of source packets and instantaneous channel conditions. The channel condition is estimated to capture the long-term fading effect in terms of the averaged SNR over a preset window. Proactive protection is performed for each packet based on the joint source-channel criteria with special attention to the accuracy, time-scale match, and feedback delay of channel status estimation. The overall gain of the proposed protection mechanism is demonstrated in terms of the end-to-end wireless video performance.
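A minimal sketch of the RCPC mechanics underlying adaptive protection schemes like the one above: starting from a rate-1/2 mother code's two output streams, bits are deleted according to a puncturing matrix to raise the rate, and the lower-rate (stronger) pattern keeps every bit the higher-rate one does, which is what makes the family rate-compatible. The patterns below are illustrative, not taken from the paper.

```python
import numpy as np

P_12 = np.array([[1, 1], [1, 1]])   # keep all 4 of 4 bits -> rate 1/2
P_23 = np.array([[1, 1], [1, 0]])   # keep 3 of 4 bits     -> rate 2/3
# Rate compatibility: every position kept by P_23 is also kept by P_12.

def puncture(coded, P):
    """coded: shape (2, N) array of the mother code's two output streams."""
    mask = np.tile(P, (1, coded.shape[1] // P.shape[1]))
    return coded[mask.astype(bool)]   # transmitted bits only

coded = np.arange(16).reshape(2, 8)  # stand-in for 8 info bits encoded at 1/2
print(puncture(coded, P_12).size)    # 16 coded bits -> rate 8/16 = 1/2
print(puncture(coded, P_23).size)    # 12 coded bits -> rate 8/12 = 2/3
```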
Convolutional coding combined with continuous phase modulation
NASA Technical Reports Server (NTRS)
Pizzi, S. V.; Wilson, S. G.
1985-01-01
Background theory and specific coding designs for combined coding/modulation schemes utilizing convolutional codes and continuous-phase modulation (CPM) are presented. In this paper the case of r = 1/2 coding onto a 4-ary CPM is emphasized, with short-constraint length codes presented for continuous-phase FSK, double-raised-cosine, and triple-raised-cosine modulation. Coding buys several decibels of coding gain over the Gaussian channel, with an attendant increase of bandwidth. Performance comparisons in the power-bandwidth tradeoff with other approaches are made.
NASA Technical Reports Server (NTRS)
Doland, G. D.
1970-01-01
Convolutional coding, used to upgrade digital data transmission under adverse signal conditions, has been improved by a method which ensures data transitions, permitting bit synchronizer operation at lower signal levels. The method also increases decoding ability by removing an ambiguous condition.
Convolutional coding results for the MVM '73 X-band telemetry experiment
NASA Technical Reports Server (NTRS)
Layland, J. W.
1978-01-01
Results of simulation of several short-constraint-length convolutional codes using a noisy symbol stream obtained via the turnaround ranging channels of the MVM'73 spacecraft are presented. First operational use of this coding technique is on the Voyager mission. The relative performance of these codes in this environment is as previously predicted from computer-based simulations.
Transfer Function Bounds for Partial-unit-memory Convolutional Codes Based on Reduced State Diagram
NASA Technical Reports Server (NTRS)
Lee, P. J.
1984-01-01
The performance of a coding system consisting of a convolutional encoder and a Viterbi decoder is analytically found by the well-known transfer function bounding technique. For the partial-unit-memory byte-oriented convolutional encoder with m sub 0 binary memory cells and (k sub 0 > m sub 0) inputs, a state diagram of 2^(k sub 0) states was required for the transfer function bound. A reduced state diagram of (2^(m sub 0) + 1) states is used for easy evaluation of transfer function bounds for partial-unit-memory codes.
Sequential Syndrome Decoding of Convolutional Codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1984-01-01
The algebraic structure of convolutional codes is reviewed, and sequential syndrome decoding is applied to these codes. These concepts are then used to carry out, by example, actual sequential decoding using the stack algorithm. The Fano metric for use in sequential decoding is modified so that it can be utilized to sequentially find the minimum-weight error sequence.
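A minimal stack-algorithm sketch in the spirit of the abstract: sequential decoding of the classic rate-1/2 (7, 5) octal code over a binary symmetric channel, ranking partial paths by the Fano metric. The code, channel crossover probability, and metric bias are illustrative assumptions, not the paper's modified metric.

```python
import heapq, math

G, M = (0b111, 0b101), 2  # rate-1/2 convolutional code, generators (7,5)

def encode_step(state, bit):
    reg = (bit << M) | state
    out = tuple((bin(reg & g).count("1") & 1) for g in G)
    return reg >> 1, out

def encode(bits):
    state, branches = 0, []
    for b in bits:
        state, out = encode_step(state, b)
        branches.append(out)
    return branches

def fano_metric(out, rx, p=0.05, rate=0.5):
    # Per-branch Fano metric for a BSC(p): sum of log2 P(r|c)/P(r) minus the
    # code rate per coded bit, so longer correct paths gain metric.
    m = 0.0
    for c, r in zip(out, rx):
        m += math.log2(((1 - p) if c == r else p) / 0.5) - rate
    return m

def stack_decode(received, max_steps=10_000):
    # Best-metric-first stack, implemented as a heap on the negated metric.
    heap = [(0.0, 0, 0, ())]          # (-metric, depth, state, decoded bits)
    for _ in range(max_steps):
        negm, depth, state, bits = heapq.heappop(heap)
        if depth == len(received):    # first full-length path to surface wins
            return bits
        for b in (0, 1):
            ns, out = encode_step(state, b)
            m = -negm + fano_metric(out, received[depth])
            heapq.heappush(heap, (-m, depth + 1, ns, bits + (b,)))

msg = (1, 0, 1, 1, 0, 0)              # last two bits flush the encoder
rx = encode(msg)
rx[2] = (1 - rx[2][0], rx[2][1])      # inject one channel bit error
print(stack_decode(rx))               # -> (1, 0, 1, 1, 0, 0)
```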
An adaptive distributed data aggregation based on RCPC for wireless sensor networks
NASA Astrophysics Data System (ADS)
Hua, Guogang; Chen, Chang Wen
2006-05-01
One of the most important design issues in wireless sensor networks is energy efficiency. Data aggregation has a significant impact on the energy efficiency of wireless sensor networks. With massive deployment of sensor nodes and a limited energy supply, data aggregation has been considered an essential paradigm for data collection in sensor networks. Recently, distributed source coding has been demonstrated to possess several advantages in data aggregation for wireless sensor networks. Distributed source coding is able to encode sensor data at a lower bit rate without direct communication among sensor nodes. To ensure reliable and high-throughput transmission of the aggregated data, we propose in this research progressive transmission and decoding of Rate-Compatible Punctured Convolutional (RCPC) coded data aggregation with distributed source coding. Our proposed rate-1/2 RSC codes with the Viterbi algorithm for distributed source coding guarantee that, even without any correlation between the data, the decoder can always decode the data correctly without wasting energy. The proposed approach achieves two aspects of adaptive data aggregation for wireless sensor networks. First, the RCPC coding facilitates adaptive compression corresponding to the correlation of the sensor data: when the data correlation is high, a higher compression ratio can be achieved; otherwise, a lower compression ratio is achieved. Second, the data aggregation is adaptively accumulated. There is no wasted energy in the transmission; even if there is no correlation among the data, the energy consumed is at the same level as raw data collection. Experimental results have shown that the proposed distributed data aggregation based on RCPC is able to achieve high-throughput, low-energy data collection for wireless sensor networks.
VLSI single-chip (255,223) Reed-Solomon encoder with interleaver
NASA Technical Reports Server (NTRS)
Hsu, In-Shek (Inventor); Deutsch, Leslie J. (Inventor); Truong, Trieu-Kie (Inventor); Reed, Irving S. (Inventor)
1990-01-01
The invention relates to a concatenated Reed-Solomon/convolutional encoding system consisting of a Reed-Solomon outer code and a convolutional inner code for downlink telemetry in space missions, and more particularly to a Reed-Solomon encoder with programmable interleaving of the information symbols and code correction symbols to combat error bursts in the Viterbi decoder.
NASA Technical Reports Server (NTRS)
Clark, R. T.; Mccallister, R. D.
1982-01-01
The particular coding option identified as providing the best level of coding gain performance in an LSI-efficient implementation was the optimal constraint length five, rate one-half convolutional code. To determine the specific set of design parameters which optimally matches this decoder to the LSI constraints, a breadboard MCD (maximum-likelihood convolutional decoder) was fabricated and used to generate detailed performance trade-off data. The extensive performance testing data gathered during this design tradeoff study are summarized, and the functional and physical MCD chip characteristics are presented.
Efficient convolutional sparse coding
Wohlberg, Brendt
2017-06-20
Computationally efficient algorithms may be applied for fast dictionary learning solving the convolutional sparse coding problem in the Fourier domain. More specifically, efficient convolutional sparse coding may be derived within an alternating direction method of multipliers (ADMM) framework that utilizes fast Fourier transforms (FFT) to solve the main linear system in the frequency domain. Such algorithms may enable a significant reduction in computational cost over conventional approaches by implementing a linear solver for the most critical and computationally expensive component of the conventional iterative algorithm. The theoretical computational cost of the algorithm may be reduced from O(M^3 N) to O(M N log N), where N is the dimensionality of the data and M is the number of elements in the dictionary. This significant improvement in efficiency may greatly increase the range of problems that can practically be addressed via convolutional sparse representations.
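A small sketch of the FFT identity this approach exploits (not the patent's ADMM solver itself): circular convolution with a dictionary filter becomes elementwise multiplication in the frequency domain, turning an O(N^2) operation into O(N log N). Sizes and data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 256, 16
d = rng.normal(size=K)    # dictionary filter
x = rng.normal(size=N)    # coefficient map (dense here, just for the check)

# Frequency domain: zero-pad the filter, multiply spectra, invert.
y_fft = np.real(np.fft.ifft(np.fft.fft(d, N) * np.fft.fft(x)))

# Direct circular convolution for comparison: y[i] = sum_j d[j] x[(i-j) % N].
y_dir = np.array([sum(d[j] * x[(i - j) % N] for j in range(K))
                  for i in range(N)])

print(np.allclose(y_fft, y_dir))  # True: both compute the same convolution
```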
Biometrics encryption combining palmprint with two-layer error correction codes
NASA Astrophysics Data System (ADS)
Li, Hengjian; Qiu, Jian; Dong, Jiwen; Feng, Guang
2017-07-01
To bridge the gap between the fuzziness of biometrics and the exactitude of cryptography, a novel biometric encryption method based on combining palmprints with two-layer error correction codes is proposed. First, the randomly generated original keys are encoded by convolutional and cyclic two-layer coding. The first layer uses a convolutional code to correct burst errors; the second layer uses a cyclic code to correct random errors. Then, palmprint features are extracted from the palmprint images and fused with the encoded keys by an XOR operation. The result is stored in a smart card. Finally, to extract the original keys, the information in the smart card is XORed with the user's palmprint features and then decoded with the convolutional and cyclic two-layer code. The experimental results and security analysis show that the method can recover the original keys completely. The proposed method is more secure than a single password factor and has higher accuracy than a single biometric factor.
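A minimal fuzzy-commitment sketch of the XOR-based binding described above. A 3x repetition code stands in for the paper's convolutional + cyclic two-layer code, and random bits stand in for palmprint features; all sizes and error positions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def rep_encode(bits, r=3):
    return np.repeat(bits, r)

def rep_decode(bits, r=3):
    # Majority vote within each group of r repeats.
    return (bits.reshape(-1, r).sum(axis=1) > r // 2).astype(int)

key = rng.integers(0, 2, 32)              # randomly generated original key
codeword = rep_encode(key)                # ECC layer (stand-in for two layers)
palm = rng.integers(0, 2, codeword.size)  # enrollment feature bits
stored = codeword ^ palm                  # helper data kept on the smart card

noisy_palm = palm.copy()                  # a fresh, slightly different reading
noisy_palm[[0, 5, 10, 40, 80]] ^= 1       # 5 bit errors, one per repeat group

# stored XOR noisy_palm = codeword with the reading errors; ECC removes them.
recovered = rep_decode(stored ^ noisy_palm)
print(bool((recovered == key).all()))     # True: key recovered exactly
```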
On the error statistics of Viterbi decoding and the performance of concatenated codes
NASA Technical Reports Server (NTRS)
Miller, R. L.; Deutsch, L. J.; Butman, S. A.
1981-01-01
Computer simulation results are presented on the performance of convolutional codes of constraint lengths 7 and 10 concatenated with the (255, 223) Reed-Solomon code (a proposed NASA standard). These results indicate that as much as 0.8 dB can be gained by concatenating this Reed-Solomon code with a (10, 1/3) convolutional code, instead of the (7, 1/2) code currently used by the DSN. A mathematical model of Viterbi decoder burst-error statistics is developed and is validated through additional computer simulations.
New coding advances for deep space communications
NASA Technical Reports Server (NTRS)
Yuen, Joseph H.
1987-01-01
Advances made in error-correction coding for deep space communications are described. The code believed to be the best is a (15, 1/6) convolutional code with maximum likelihood decoding; when it is concatenated with a 10-bit Reed-Solomon code, it achieves a bit error rate of 10 to the -6th at a bit SNR of 0.42 dB. This code outperforms the Voyager code by 2.11 dB. The use of source statistics in decoding convolutionally encoded Voyager images from the Uranus encounter is investigated, and it is found that a 2 dB decoding gain can be achieved.
Constructing LDPC Codes from Loop-Free Encoding Modules
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Dolinar, Samuel; Jones, Christopher; Thorpe, Jeremy; Andrews, Kenneth
2009-01-01
A method of constructing certain low-density parity-check (LDPC) codes by use of relatively simple loop-free coding modules has been developed. The subclasses of LDPC codes to which the method applies includes accumulate-repeat-accumulate (ARA) codes, accumulate-repeat-check-accumulate codes, and the codes described in Accumulate-Repeat-Accumulate-Accumulate Codes (NPO-41305), NASA Tech Briefs, Vol. 31, No. 9 (September 2007), page 90. All of the affected codes can be characterized as serial/parallel (hybrid) concatenations of such relatively simple modules as accumulators, repetition codes, differentiators, and punctured single-parity check codes. These are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. These codes can also be characterized as hybrid turbo-like codes that have projected graph or protograph representations (for example, see figure); these characteristics make it possible to design high-speed iterative decoders that utilize belief-propagation algorithms. The present method comprises two related submethods for constructing LDPC codes from simple loop-free modules with circulant permutations. The first submethod is an iterative encoding method based on the erasure-decoding algorithm. The computations required by this method are well organized because they involve a parity-check matrix having a block-circulant structure. The second submethod involves the use of block-circulant generator matrices. The encoders of this method are very similar to those of recursive convolutional codes. Some encoders according to this second submethod have been implemented in a small field-programmable gate array that operates at a speed of 100 megasymbols per second. By use of density evolution (a computational-simulation technique for analyzing performances of LDPC codes), it has been shown through some examples that as the block size goes to infinity, low iterative decoding thresholds close to channel capacity limits can be achieved for the codes of the type in question having low maximum variable node degrees. The decoding thresholds in these examples are lower than those of the best-known unstructured irregular LDPC codes constrained to have the same maximum node degrees. Furthermore, the present method enables the construction of codes of any desired rate with thresholds that stay uniformly close to their respective channel capacity thresholds.
Coded spread spectrum digital transmission system design study
NASA Technical Reports Server (NTRS)
Heller, J. A.; Odenwalder, J. P.; Viterbi, A. J.
1974-01-01
Results are presented of a comprehensive study of the performance of Viterbi-decoded convolutional codes in the presence of nonideal carrier tracking and bit synchronization. A constraint length 7, rate 1/3 convolutional code and parameters suitable for the space shuttle coded communications links are used. Mathematical models are developed and theoretical and simulation results are obtained to determine the tracking and acquisition performance of the system. Pseudorandom sequence spread spectrum techniques are also considered to minimize potential degradation caused by multipath.
Channel coding for underwater acoustic single-carrier CDMA communication system
NASA Astrophysics Data System (ADS)
Liu, Lanjun; Zhang, Yonglei; Zhang, Pengcheng; Zhou, Lin; Niu, Jiong
2017-01-01
CDMA is an effective multiple access protocol for underwater acoustic networks, and channel coding can effectively reduce the bit error rate (BER) of an underwater acoustic communication system. To meet the requirements of underwater acoustic mobile networks based on CDMA, an underwater acoustic single-carrier CDMA communication system (UWA/SCCDMA) based on direct-sequence spread spectrum is proposed, and its channel coding scheme is studied based on convolutional, RA, Turbo, and LDPC coding. The implementation steps of the Viterbi algorithm for convolutional coding, the BP and minimum-sum algorithms for RA coding, the Log-MAP and SOVA algorithms for Turbo coding, and the sum-product algorithm for LDPC coding are given. A UWA/SCCDMA simulation system based on Matlab is designed. Simulation results show that the UWA/SCCDMA systems based on RA, Turbo, and LDPC coding perform well: the communication BER is less than 10^-6 in the underwater acoustic channel at low signal-to-noise ratios (SNR) from -12 dB to -10 dB, about two orders of magnitude lower than that of convolutional coding. The system based on Turbo coding with the Log-MAP algorithm has the best performance.
Some partial-unit-memory convolutional codes
NASA Technical Reports Server (NTRS)
Abdel-Ghaffar, K.; Mceliece, R. J.; Solomon, G.
1991-01-01
The results of a study on a class of error correcting codes called partial unit memory (PUM) codes are presented. This class of codes, though not entirely new, has until now remained relatively unexplored. The possibility of using the well developed theory of block codes to construct a large family of promising PUM codes is shown. The performance of several specific PUM codes are compared with that of the Voyager standard (2, 1, 6) convolutional code. It was found that these codes can outperform the Voyager code with little or no increase in decoder complexity. This suggests that there may very well be PUM codes that can be used for deep space telemetry that offer both increased performance and decreased implementational complexity over current coding systems.
Modulation and coding for satellite and space communications
NASA Technical Reports Server (NTRS)
Yuen, Joseph H.; Simon, Marvin K.; Pollara, Fabrizio; Divsalar, Dariush; Miller, Warner H.; Morakis, James C.; Ryan, Carl R.
1990-01-01
Several modulation and coding advances supported by NASA are summarized. To support long-constraint-length convolutional codes, a VLSI maximum-likelihood decoder utilizing parallel processing techniques is being developed to decode convolutional codes of constraint length 15 and code rates as low as 1/6. A VLSI high-speed 8-bit Reed-Solomon decoder is being developed for advanced tracking and data relay satellite (ATDRS) applications. A 300-Mb/s modem with continuous phase modulation (CPM) and coding is being developed for ATDRS. Trellis-coded modulation (TCM) techniques are discussed for satellite-based mobile communication applications.
NASA Astrophysics Data System (ADS)
Jiang, Xue-Qin; Huang, Peng; Huang, Duan; Lin, Dakai; Zeng, Guihua
2017-02-01
Achieving information-theoretic security with practical complexity is of great interest to continuous-variable quantum key distribution in the postprocessing procedure. In this paper, we propose a reconciliation scheme based on punctured low-density parity-check (LDPC) codes. Compared to the well-known multidimensional reconciliation scheme, the present scheme has lower time complexity. Especially when the chosen punctured LDPC code achieves the Shannon capacity, the proposed reconciliation scheme can remove the information that has been leaked to an eavesdropper in the quantum transmission phase. Therefore, no information is leaked to the eavesdropper after the reconciliation stage. This indicates that the privacy amplification algorithm of the postprocessing procedure is no longer needed after the reconciliation process. These features lead to a higher secret key rate, optimal performance, and availability for the involved quantum key distribution scheme.
Separable concatenated codes with iterative map decoding for Rician fading channels
NASA Technical Reports Server (NTRS)
Lodge, J. H.; Young, R. J.
1993-01-01
Very efficient signalling in radio channels requires the design of very powerful codes having special structure suitable for practical decoding schemes. In this paper, powerful codes are obtained by combining comparatively simple convolutional codes to form multi-tiered 'separable' convolutional codes. The decoding of these codes, using separable symbol-by-symbol maximum a posteriori (MAP) 'filters', is described. It is known that this approach yields impressive results in non-fading additive white Gaussian noise channels. Interleaving is an inherent part of the code construction, and consequently, these codes are well suited for fading channel communications. Here, simulation results for communications over Rician fading channels are presented to support this claim.
Maximum likelihood decoding analysis of Accumulate-Repeat-Accumulate Codes
NASA Technical Reports Server (NTRS)
Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung
2004-01-01
Repeat-Accumulate (RA) codes are the simplest turbo-like codes that achieve good performance. However, they cannot compete with turbo codes or low-density parity-check (LDPC) codes as far as performance is concerned. The Accumulate-Repeat-Accumulate (ARA) codes, a subclass of LDPC codes, are obtained by adding a precoder in front of RA codes with puncturing, where an accumulator is chosen as the precoder. These codes not only are very simple, but also achieve excellent performance with iterative decoding. In this paper, the performance of these codes with maximum-likelihood (ML) decoding is analyzed and compared to random codes by very tight bounds. The weight distribution of some simple ARA codes is obtained, and through the tightest existing bounds we show that the ML SNR threshold of ARA codes approaches the performance of random codes very closely. We show that the use of the precoder improves the SNR threshold, but the interleaving gain remains unchanged with respect to the RA code with puncturing.
Hierarchical Recurrent Neural Hashing for Image Retrieval With Hierarchical Convolutional Features.
Lu, Xiaoqiang; Chen, Yaxiong; Li, Xuelong
Hashing has been an important and effective technology in image retrieval due to its computational efficiency and fast search speed. The traditional hashing methods usually learn hash functions to obtain binary codes by exploiting hand-crafted features, which cannot optimally represent the information of the sample. Recently, deep learning methods can achieve better performance, since deep learning architectures can learn more effective image representation features. However, these methods only use semantic features to generate hash codes by shallow projection but ignore texture details. In this paper, we proposed a novel hashing method, namely hierarchical recurrent neural hashing (HRNH), to exploit hierarchical recurrent neural network to generate effective hash codes. There are three contributions of this paper. First, a deep hashing method is proposed to extensively exploit both spatial details and semantic information, in which, we leverage hierarchical convolutional features to construct image pyramid representation. Second, our proposed deep network can exploit directly convolutional feature maps as input to preserve the spatial structure of convolutional feature maps. Finally, we propose a new loss function that considers the quantization error of binarizing the continuous embeddings into the discrete binary codes, and simultaneously maintains the semantic similarity and balanceable property of hash codes. Experimental results on four widely used data sets demonstrate that the proposed HRNH can achieve superior performance over other state-of-the-art hashing methods.
DSN telemetry system performance with convolutionally coded data
NASA Technical Reports Server (NTRS)
Mulhall, B. D. L.; Benjauthrit, B.; Greenhall, C. A.; Kuma, D. M.; Lam, J. K.; Wong, J. S.; Urech, J.; Vit, L. D.
1975-01-01
The results obtained to date and the plans for future experiments for the DSN telemetry system are presented. The performance of the DSN telemetry system in decoding convolutionally coded data by both sequential and maximum likelihood techniques is being determined by testing at various deep space stations. The evaluation of performance models is also an objective of this activity.
Annunziata, Roberto; Trucco, Emanuele
2016-11-01
Deep learning has shown great potential for curvilinear structure (e.g., retinal blood vessels and neurites) segmentation as demonstrated by a recent auto-context regression architecture based on filter banks learned by convolutional sparse coding. However, learning such filter banks is very time-consuming, thus limiting the amount of filters employed and the adaptation to other data sets (i.e., slow re-training). We address this limitation by proposing a novel acceleration strategy to speed-up convolutional sparse coding filter learning for curvilinear structure segmentation. Our approach is based on a novel initialisation strategy (warm start), and therefore it is different from recent methods improving the optimisation itself. Our warm-start strategy is based on carefully designed hand-crafted filters (SCIRD-TS), modelling appearance properties of curvilinear structures which are then refined by convolutional sparse coding. Experiments on four diverse data sets, including retinal blood vessels and neurites, suggest that the proposed method reduces significantly the time taken to learn convolutional filter banks (i.e., up to -82%) compared to conventional initialisation strategies. Remarkably, this speed-up does not worsen performance; in fact, filters learned with the proposed strategy often achieve a much lower reconstruction error and match or exceed the segmentation performance of random and DCT-based initialisation, when used as input to a random forest classifier.
Performance Bounds on Two Concatenated, Interleaved Codes
NASA Technical Reports Server (NTRS)
Moision, Bruce; Dolinar, Samuel
2010-01-01
A method has been developed of computing bounds on the performance of a code composed of two linear binary codes generated by two encoders serially concatenated through an interleaver. Originally intended for use in evaluating the performances of some codes proposed for deep-space communication links, the method can also be used in evaluating the performances of short-block-length codes in other applications. The method applies, more specifically, to a communication system in which the following processes take place: At the transmitter, the original binary information that one seeks to transmit is first processed by an encoder into an outer code (Co) characterized by, among other things, a pair of numbers (n, k), where n (n > k) is the total number of code bits associated with k information bits and n - k bits are used for correcting or at least detecting errors. Next, the outer code is processed through either a block or a convolutional interleaver. In the block interleaver, the words of the outer code are processed in blocks of I words. In the convolutional interleaver, the interleaving operation is performed bit-wise in N rows with delays that are multiples of B bits. The output of the interleaver is processed through a second encoder to obtain an inner code (Ci) characterized by (ni, ki). The output of the inner code is transmitted over an additive-white-Gaussian-noise channel characterized by a symbol signal-to-noise ratio (SNR) Es/No and a bit SNR Eb/No. At the receiver, an inner decoder generates estimates of bits. Depending on whether a block or a convolutional interleaver is used at the transmitter, the sequence of estimated bits is processed through a block or a convolutional de-interleaver, respectively, to obtain estimates of code words. Then the estimates of the code words are processed through an outer decoder, which generates estimates of the original information along with flags indicating which estimates are presumed to be correct and which are found to be erroneous. From the perspective of the present method, the topic of major interest is the performance of the communication system as quantified in the word-error rate and the undetected-error rate as functions of the SNRs and the total latency of the interleaver and inner code. The method is embodied in equations that describe bounds on these functions. Throughout the derivation of the equations that embody the method, it is assumed that the decoder for the outer code corrects any error pattern of t or fewer errors, detects any error pattern of s or fewer errors, may detect some error patterns of more than s errors, and does not correct any patterns of more than t errors. Because a mathematically complete description of the equations that embody the method and of the derivation of the equations would greatly exceed the space available for this article, it must suffice to summarize by reporting that the derivation includes consideration of several complex issues, including relationships between latency and memory requirements for block and convolutional codes, burst error statistics, enumeration of error-event intersections, and effects of different interleaving depths. In a demonstration, the method was used to calculate bounds on the performances of several communication systems, each based on serial concatenation of a (63,56) expurgated Hamming code with a convolutional inner code through a convolutional interleaver.
The bounds calculated by use of the method were compared with results of numerical simulations of performances of the systems to show the regions where the bounds are tight (see figure).
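A minimal sketch of the block-interleaver structure described above: the outer-code stream is written row-wise into a depth x span array and read out column-wise, so a burst of channel errors is dispersed across many code words. The dimensions are illustrative assumptions.

```python
import numpy as np

def interleave(x, depth, span):
    # Write row-wise (one code word per row), read column-wise.
    return np.asarray(x).reshape(depth, span).T.ravel()

def deinterleave(y, depth, span):
    return np.asarray(y).reshape(span, depth).T.ravel()

data = np.arange(24)                  # 4 code words of 6 symbols each
tx = interleave(data, depth=4, span=6)
assert (deinterleave(tx, 4, 6) == data).all()  # perfect inverse

marks = np.zeros(24, dtype=int)
marks[8:12] = 1                       # burst of 4 consecutive channel errors
spread = deinterleave(marks, 4, 6)
print(np.nonzero(spread)[0])          # [ 2  8 14 20]: one error per code word
```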
The VLSI design of an error-trellis syndrome decoder for certain convolutional codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Jensen, J. M.; Hsu, I.-S.; Truong, T. K.
1986-01-01
A recursive algorithm using the error-trellis decoding technique is developed to decode convolutional codes (CCs). An example, illustrating the very large scale integration (VLSI) architecture of such a decode, is given for a dual-K CC. It is demonstrated that such a decoder can be realized readily on a single chip with metal-nitride-oxide-semiconductor technology.
The VLSI design of error-trellis syndrome decoding for convolutional codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Jensen, J. M.; Truong, T. K.; Hsu, I. S.
1985-01-01
A recursive algorithm using the error-trellis decoding technique is developed to decode convolutional codes (CCs). An example, illustrating the very large scale integration (VLSI) architecture of such a decode, is given for a dual-K CC. It is demonstrated that such a decoder can be realized readily on a single chip with metal-nitride-oxide-semiconductor technology.
Reliable video transmission over fading channels via channel state estimation
NASA Astrophysics Data System (ADS)
Kumwilaisak, Wuttipong; Kim, JongWon; Kuo, C.-C. Jay
2000-04-01
Transmission of continuous media such as video over time-varying wireless communication channels can benefit from the use of adaptation techniques in both source and channel coding. An adaptive feedback-based wireless video transmission scheme is investigated in this research, with special emphasis on feedback-based adaptation. To be more specific, an interactive adaptive transmission scheme is developed by letting the receiver estimate the channel state information and send it back to the transmitter. By utilizing the feedback information, the transmitter is capable of adapting the level of protection by changing the flexible RCPC (rate-compatible punctured convolutional) code rate depending on the instantaneous channel condition. The wireless channel is modeled as a fading channel, where the long-term and short-term fading effects are modeled as log-normal fading and Rayleigh flat fading, respectively. Its state (mainly the long-term fading portion) is then tracked and predicted by using an adaptive LMS (least mean squares) algorithm. By utilizing the delayed feedback on the channel condition, the adaptation performance of the proposed scheme is first evaluated in terms of error probability and throughput. It is then extended to incorporate variable-size packets of ITU-T H.263+ video with the error resilience option. Finally, the end-to-end performance of wireless video transmission is compared against several non-adaptive protection schemes.
Performance of convolutional codes on fading channels typical of planetary entry missions
NASA Technical Reports Server (NTRS)
Modestino, J. W.; Mui, S. Y.; Reale, T. J.
1974-01-01
The performance of convolutional codes in fading channels typical of the planetary entry channel is examined in detail. The signal fading is due primarily to turbulent atmospheric scattering of the RF signal transmitted from an entry probe through a planetary atmosphere. Short constraint length convolutional codes are considered in conjunction with binary phase-shift-keyed modulation and Viterbi maximum likelihood decoding, and for longer constraint length codes sequential decoding utilizing both the Fano and Zigangirov-Jelinek (ZJ) algorithms is considered. Careful consideration is given to the modeling of the channel in terms of a few meaningful parameters which can be correlated closely with theoretical propagation studies. For short constraint length codes, the bit error probability performance is investigated as a function of E sub b/N sub o, parameterized by the fading channel parameters. For longer constraint length codes, the effect of the fading channel parameters on the computational requirements of both the Fano and ZJ algorithms is examined. The effectiveness of simple block interleaving in combatting the memory of the channel is explored, using either an analytic approach or digital computer simulation.
Accumulate repeat accumulate codes
NASA Technical Reports Server (NTRS)
Abbasfar, A.; Divsalar, D.; Yao, K.
2004-01-01
In this paper we propose an innovative channel coding scheme called Accumulate Repeat Accumulate codes. This class of codes can be viewed as turbo-like codes, namely a double serial concatenation of a rate-1 accumulator as an outer code, a regular or irregular repetition as a middle code, and a punctured accumulator as an inner code.
Performance of MIMO-OFDM using convolution codes with QAM modulation
NASA Astrophysics Data System (ADS)
Astawa, I. Gede Puja; Moegiharto, Yoedy; Zainudin, Ahmad; Salim, Imam Dui Agus; Anggraeni, Nur Annisa
2014-04-01
The performance of an Orthogonal Frequency Division Multiplexing (OFDM) system can be improved by adding channel coding (an error correction code) to detect and correct errors that occur during data transmission; one option is the convolutional code. This paper presents the performance of MIMO-OFDM using the Space-Time Block Code (STBC) diversity technique and QAM modulation with code rate 1/2. The evaluation is done by analyzing Bit Error Rate (BER) versus energy per bit to noise power spectral density ratio (Eb/No). The scheme uses 256 subcarriers transmitted over a Rayleigh multipath fading channel. To achieve a BER of 10^-3, the SISO-OFDM scheme requires 10 dB SNR, and the 2x2 MIMO-OFDM scheme likewise requires 10 dB. The 4x4 MIMO-OFDM scheme requires 5 dB, while adding convolutional coding to 4x4 MIMO-OFDM improves performance down to 0 dB for the same BER. This corresponds to a power saving of 3 dB relative to the uncoded 4x4 MIMO-OFDM system, a saving of 7 dB relative to 2x2 MIMO-OFDM, and a significant saving relative to the SISO-OFDM system.
Short, unit-memory, Byte-oriented, binary convolutional codes having maximal free distance
NASA Technical Reports Server (NTRS)
Lee, L. N.
1975-01-01
It is shown that (n sub 0, k sub 0) convolutional codes with unit memory always achieve the largest free distance among all codes of the same rate k sub 0/n sub 0 and the same number 2^(M k sub 0) of encoder states, where M is the encoder memory. A unit-memory code with maximal free distance is given at each place where this free distance exceeds that of the best code with k sub 0 and n sub 0 relatively prime, for all M k sub 0 less than or equal to 6 and for R = 1/2, 1/3, 1/4, 2/3. It is shown that the unit-memory codes are byte-oriented in such a way as to be attractive for use in concatenated coding systems.
Truncation Depth Rule-of-Thumb for Convolutional Codes
NASA Technical Reports Server (NTRS)
Moision, Bruce
2009-01-01
In this innovation, it is shown that a commonly used rule of thumb (that the truncation depth of a convolutional code should be five times the memory length, m, of the code) is accurate only for rate 1/2 codes. In fact, the truncation depth should be 2.5 m/(1 - r), where r is the code rate. The accuracy of this new rule is demonstrated by tabulating the distance properties of a large set of known codes. This new rule was derived by bounding the losses due to truncation as a function of the code rate. With regard to particular codes, a good indicator of the required truncation depth is the path length at which all paths that diverge from a particular path have accumulated the minimum distance of the code. It is shown that the new rule of thumb provides an accurate prediction of this depth for codes of varying rates.
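A quick numerical check of the rule of thumb stated above, L = 2.5 m / (1 - r); for r = 1/2 this reduces to the classical 5 m. The parameter pairs below are illustrative, not codes from the article.

```python
def truncation_depth(m, r):
    """Truncation depth per the rule of thumb: 2.5 * memory / (1 - rate)."""
    return 2.5 * m / (1.0 - r)

for m, r in [(6, 1/2), (6, 1/3), (14, 1/4), (6, 3/4)]:
    print(f"m={m:2d}, rate={r:.2f} -> depth = {truncation_depth(m, r):5.1f}")
# m=6, rate=0.50 gives 30.0, matching the old 5*m rule for rate-1/2 codes;
# higher-rate codes need markedly deeper truncation.
```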
A long constraint length VLSI Viterbi decoder for the DSN
NASA Technical Reports Server (NTRS)
Statman, J. I.; Zimmerman, G.; Pollara, F.; Collins, O.
1988-01-01
A Viterbi decoder, capable of decoding convolutional codes with constraint lengths up to 15, is under development for the Deep Space Network (DSN). The objective is to complete a prototype of this decoder by late 1990, and demonstrate its performance using the (15, 1/4) encoder in Galileo. The decoder is expected to provide 1 to 2 dB improvement in bit SNR, compared to the present (7, 1/2) code and existing Maximum Likelihood Convolutional Decoder (MCD). The decoder will be fully programmable for any code up to constraint length 15, and code rate 1/2 to 1/6. The decoder architecture and top-level design are described.
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.
1994-01-01
The unequal error protection capabilities of convolutional and trellis codes are studied. In certain environments, a discrepancy in the amount of error protection placed on different information bits is desirable. Examples of environments which have data of varying importance are a number of speech coding algorithms, packet switched networks, multi-user systems, embedded coding systems, and high definition television. Encoders which provide more than one level of error protection to information bits are called unequal error protection (UEP) codes. In this work, the effective free distance vector, d, is defined as an alternative to the free distance as a primary performance parameter for UEP convolutional and trellis encoders. For a given (n, k) convolutional encoder, G, the effective free distance vector is defined as the k-dimensional vector d = (d(sub 0), d(sub 1), ..., d(sub k-1)), where d(sub j), the j(exp th) effective free distance, is the lowest Hamming weight among all code sequences that are generated by input sequences with at least one '1' in the j(exp th) position. It is shown that, although the free distance for a code is unique to the code and independent of the encoder realization, the effective free distance vector is dependent on the encoder realization.
Improved Iterative Decoding of Network-Channel Codes for Multiple-Access Relay Channel.
Majumder, Saikat; Verma, Shrish
2015-01-01
Cooperative communication using relay nodes is one of the most effective means of exploiting space diversity for low-cost nodes in a wireless network. In cooperative communication, users, besides communicating their own information, also relay the information of other users. In this paper we investigate a scheme where cooperation is achieved using a common relay node which performs network coding to provide space diversity for two information nodes transmitting to a base station. We propose a scheme which uses a Reed-Solomon error-correcting code for encoding the information bits at the user nodes and a convolutional code as the network code, instead of XOR-based network coding. Based on this encoder, we propose iterative soft decoding of the joint network-channel code by treating it as a concatenated Reed-Solomon convolutional code. Simulation results show significant improvement in performance compared to an existing scheme based on compound codes.
An investigation of error characteristics and coding performance
NASA Technical Reports Server (NTRS)
Ebel, William J.; Ingels, Frank M.
1993-01-01
The first year's effort on NASA Grant NAG5-2006 was an investigation to characterize typical errors resulting from the EOS downlink. The analysis methods developed for this effort were used on test data from a March 1992 White Sands Terminal Test. The effectiveness of a concatenated coding scheme of a Reed-Solomon outer code and a convolutional inner code versus a Reed-Solomon-only code scheme has been investigated, as well as the effectiveness of a Periodic Convolutional Interleaver in dispersing errors of certain types. The work effort consisted of development of software that allows simulation studies with the appropriate coding schemes plus either simulated data with errors or actual data with errors. The software program is entitled Communication Link Error Analysis (CLEAN) and models downlink errors, forward error correcting schemes, and interleavers.
Real-time minimal-bit-error probability decoding of convolutional codes
NASA Technical Reports Server (NTRS)
Lee, L.-N.
1974-01-01
A recursive procedure is derived for decoding of rate R = 1/n binary convolutional codes which minimizes the probability of the individual decoding decisions for each information bit, subject to the constraint that the decoding delay be limited to Delta branches. This new decoding algorithm is similar to, but somewhat more complex than, the Viterbi decoding algorithm. A real-time, i.e., fixed decoding delay, version of the Viterbi algorithm is also developed and used for comparison to the new algorithm on simulated channels. It is shown that the new algorithm offers advantages over Viterbi decoding in soft-decision applications, such as in the inner coding system for concatenated coding.
Real-time minimal bit error probability decoding of convolutional codes
NASA Technical Reports Server (NTRS)
Lee, L. N.
1973-01-01
A recursive procedure is derived for decoding of rate R=1/n binary convolutional codes which minimizes the probability of the individual decoding decisions for each information bit subject to the constraint that the decoding delay be limited to Delta branches. This new decoding algorithm is similar to, but somewhat more complex than, the Viterbi decoding algorithm. A real-time, i.e. fixed decoding delay, version of the Viterbi algorithm is also developed and used for comparison to the new algorithm on simulated channels. It is shown that the new algorithm offers advantages over Viterbi decoding in soft-decision applications such as in the inner coding system for concatenated coding.
Spectral characteristics of convolutionally coded digital signals
NASA Technical Reports Server (NTRS)
Divsalar, D.
1979-01-01
The power spectral density of the output symbol sequence of a convolutional encoder is computed for two different input symbol stream source models, namely, an NRZ signaling format and a first order Markov source. In the former, the two signaling states of the binary waveform are not necessarily assumed to occur with equal probability. The effects of alternate symbol inversion on this spectrum are also considered. The mathematical results are illustrated with many examples corresponding to optimal performance codes.
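A quick empirical illustration of the quantity studied above: estimating the output-symbol spectrum of a convolutional encoder by averaging periodograms of the NRZ-mapped coded stream. The (7, 5) encoder and equiprobable i.i.d. input bits are illustrative assumptions; the paper's interest is in biased NRZ and Markov sources, where the spectrum departs from the flat shape seen here.

```python
import numpy as np

rng = np.random.default_rng(0)
G, M = (0b111, 0b101), 2   # illustrative rate-1/2 encoder, generators (7,5)

def encode(bits):
    state, out = 0, []
    for b in bits:
        reg = (int(b) << M) | state
        out += [bin(reg & g).count("1") & 1 for g in G]
        state = reg >> 1
    return np.array(out)

# Average periodograms of the +/-1 (NRZ) coded symbol stream.
segs = []
for _ in range(200):
    sym = 1.0 - 2.0 * encode(rng.integers(0, 2, 256))
    segs.append(np.abs(np.fft.rfft(sym)) ** 2 / sym.size)
psd = np.mean(segs, axis=0)
print(psd[:5])  # roughly flat: for equiprobable inputs the coded symbols
                # of this encoder are pairwise uncorrelated
```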
New syndrome decoder for (n, 1) convolutional codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1983-01-01
The letter presents a new syndrome decoding algorithm for the (n, 1) convolutional codes (CC) that is different from, and simpler than, the previous syndrome decoding algorithm of Schalkwijk and Vinck. The new technique uses the general solution of the polynomial linear Diophantine equation for the error polynomial vector E(D). A recursive, Viterbi-like algorithm is developed to find the minimum-weight error vector E(D). An example is given for the binary nonsystematic (2, 1) CC.
Content-based multiple bitstream image transmission over noisy channels.
Cao, Lei; Chen, Chang Wen
2002-01-01
In this paper, we propose a novel combined source and channel coding scheme for image transmission over noisy channels. The main feature of the proposed scheme is a systematic decomposition of image sources so that unequal error protection can be applied according to not only bit error sensitivity but also visual content importance. The wavelet transform is adopted to hierarchically decompose the image. The association between the wavelet coefficients and what they represent spatially in the original image is fully exploited so that wavelet blocks are classified based on their corresponding image content. The classification produces wavelet blocks in each class with similar content and statistics, therefore enables high performance source compression using the set partitioning in hierarchical trees (SPIHT) algorithm. To combat the channel noise, an unequal error protection strategy with rate-compatible punctured convolutional/cyclic redundancy check (RCPC/CRC) codes is implemented based on the bit contribution to both peak signal-to-noise ratio (PSNR) and visual quality. At the receiving end, a postprocessing method making use of the SPIHT decoding structure and the classification map is developed to restore the degradation due to the residual error after channel decoding. Experimental results show that the proposed scheme is indeed able to provide protection both for the bits that are more sensitive to errors and for the more important visual content under a noisy transmission environment. In particular, the reconstructed images illustrate consistently better visual quality than using the single-bitstream-based schemes.
Multiple-Symbol Noncoherent Decoding of Uncoded and Convolutionally Coded Continuous Phase Modulation
NASA Technical Reports Server (NTRS)
Divsalar, D.; Raphaeli, D.
2000-01-01
Recently, a method for combined noncoherent detection and decoding of trellis-codes (noncoherent coded modulation) has been proposed, which can practically approach the performance of coherent detection.
Taylor, Jennifer A; Gerwin, Daniel; Morlock, Laura; Miller, Marlene R
2011-12-01
To evaluate the need for triangulating case-finding tools in patient safety surveillance. This study applied four case-finding tools to error-associated patient safety events to identify and characterise the spectrum of events captured by these tools, using puncture or laceration as an example for in-depth analysis. Retrospective hospital discharge data were collected for calendar year 2005 (n=48,418) from a large, urban medical centre in the USA. The study design was cross-sectional and used data linkage to identify the cases captured by each of four case-finding tools. Three case-finding tools (International Classification of Diseases external (E) and nature (N) of injury codes, Patient Safety Indicators (PSI)) were applied to the administrative discharge data to identify potential patient safety events. The fourth tool was Patient Safety Net, a web-based voluntary patient safety event reporting system. The degree of mutual exclusion among detection methods was substantial. For example, when linking puncture or laceration on unique identifiers, out of 447 potential events, 118 were identical between PSI and E-codes, 152 were identical between N-codes and E-codes and 188 were identical between PSI and N-codes. Only 100 events that were identified by PSI, E-codes and N-codes were identical. Triangulation of multiple tools through data linkage captures potential patient safety events most comprehensively. Existing detection tools target patient safety domains differently, and consequently capture different occurrences, necessitating the integration of data from a combination of tools to fully estimate the total burden.
Boyd, Andrew D; Li, Jianrong John; Kenost, Colleen; Joese, Binoy; Yang, Young Min; Kalagidis, Olympia A; Zenku, Ilir; Saner, Donald; Bahroos, Neil; Lussier, Yves A
2015-05-01
In the United States, International Classification of Disease Clinical Modification (ICD-9-CM, the ninth revision) diagnosis codes are commonly used to identify patient cohorts and to conduct financial analyses related to disease. In October 2015, the healthcare system of the United States will transition to ICD-10-CM (the tenth revision) diagnosis codes. One challenge posed to clinical researchers and other analysts is conducting diagnosis-related queries across datasets containing both coding schemes. Further, healthcare administrators will manage growth, trends, and strategic planning with these dually-coded datasets. The majority of the ICD-9-CM to ICD-10-CM translations are complex and nonreciprocal, creating convoluted representations and meanings. Similarly, mapping back from ICD-10-CM to ICD-9-CM is equally complex, yet different from mapping forward, as relationships are likewise nonreciprocal. Indeed, 10 of the 21 top clinical categories are complex, as 78% of their diagnosis codes are labeled as "convoluted" by our analyses. Analysis and research related to external causes of morbidity, injury, and poisoning will face the greatest challenges due to 41,745 (90%) convolutions and a decrease in the number of codes. We created a web portal tool and translation tables to list all ICD-9-CM diagnosis codes related to the specific input of ICD-10-CM diagnosis codes and their level of complexity: "identity" (reciprocal), "class-to-subclass," "subclass-to-class," "convoluted," or "no mapping." These tools provide guidance on ambiguous and complex translations to reveal where reports or analyses may be challenging to impossible. Web portal: http://www.lussierlab.org/transition-to-ICD9CM/ Tables annotated with levels of translation complexity: http://www.lussierlab.org/publications/ICD10to9
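The five complexity labels reduce to a cardinality test on the forward and backward mapping tables, which is easy to sketch in code. The function below and its sample mappings are hypothetical illustrations of that classification idea, not the authors' published algorithm.

    def mapping_topology(code, fwd, bwd):
        """Label an ICD-9-CM code by its ICD-10-CM mapping topology.
        fwd: dict ICD-9 -> set of ICD-10 codes; bwd: dict ICD-10 -> set of ICD-9 codes."""
        targets = fwd.get(code, set())
        if not targets:
            return "no mapping"
        back = set().union(*(bwd.get(t, set()) for t in targets))
        if len(targets) == 1 and back == {code}:
            return "identity"            # reciprocal one-to-one
        if back == {code}:
            return "class-to-subclass"   # one-to-many, all targets map back uniquely
        if len(targets) == 1:
            return "subclass-to-class"   # many-to-one as seen from this side
        return "convoluted"              # entangled many-to-many cluster

    # Hypothetical sample mappings for illustration only
    fwd = {"003.0": {"A02.0"}, "649.51": {"O26.851", "O26.852", "O26.853"}}
    bwd = {"A02.0": {"003.0"}, "O26.851": {"649.51"},
           "O26.852": {"649.51"}, "O26.853": {"649.51"}}
    print(mapping_topology("003.0", fwd, bwd))    # identity
    print(mapping_topology("649.51", fwd, bwd))   # class-to-subclass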
Adaptive decoding of convolutional codes
NASA Astrophysics Data System (ADS)
Hueske, K.; Geldmacher, J.; Götze, J.
2007-06-01
Convolutional codes, which are frequently used as error correction codes in digital transmission systems, are generally decoded using the Viterbi Decoder. On the one hand the Viterbi Decoder is an optimum maximum likelihood decoder, i.e. the most probable transmitted code sequence is obtained. On the other hand the mathematical complexity of the algorithm only depends on the used code, not on the number of transmission errors. To reduce the complexity of the decoding process for good transmission conditions, an alternative syndrome based decoder is presented. The reduction of complexity is realized by two different approaches, the syndrome zero sequence deactivation and the path metric equalization. The two approaches enable an easy adaptation of the decoding complexity for different transmission conditions, which results in a trade-off between decoding complexity and error correction performance.
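The first of these complexity reductions can be sketched in a few lines: when the syndrome of the hard-decision word is all zero, that word is already a codeword, so the Viterbi search can be bypassed and the information recovered by simply un-encoding. A minimal illustration under assumed (7, 5) generators, with the path metric equalization step omitted; the function names are hypothetical.

    import numpy as np

    g1, g2 = np.array([1, 1, 1]), np.array([1, 0, 1])   # assumed octal (7,5) code

    def syndrome(r1, r2):
        """s(D) = r1*g2 + r2*g1 over GF(2); all-zero iff (r1, r2) is a codeword."""
        return (np.convolve(r1, g2) + np.convolve(r2, g1)) % 2

    def decode_adaptive(r1, r2, viterbi):
        if not syndrome(r1, r2).any():
            # Syndrome zero sequence deactivation: the hard decisions already
            # form a codeword, so recover the information by dividing r1 by g1
            # over GF(2) instead of running the full Viterbi search.
            u = np.zeros(len(r1) - len(g1) + 1, dtype=int)
            rem = r1.copy()
            for i in range(len(u)):
                u[i] = rem[i]
                if u[i]:
                    rem[i:i + len(g1)] ^= g1
            return u
        return viterbi(r1, r2)   # good-channel shortcut failed: full decoding

    u = np.array([1, 0, 1, 1])
    r1, r2 = np.convolve(u, g1) % 2, np.convolve(u, g2) % 2
    print(decode_adaptive(r1, r2, viterbi=lambda a, b: None))   # error-free: prints u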
NASA Technical Reports Server (NTRS)
Massey, J. L.
1976-01-01
Virtually all previously-suggested rate 1/2 binary convolutional codes with KE = 24 are compared. Their distance properties are given; and their performance, both in computation and in error probability, with sequential decoding on the deep-space channel is determined by simulation. Recommendations are made both for the choice of a specific KE = 24 code as well as for codes to be included in future coding standards for the deep-space channel. A new result given in this report is a method for determining the statistical significance of error probability data when the error probability is so small that it is not feasible to perform enough decoding simulations to obtain more than a very small number of decoding errors.
The use of interleaving for reducing radio loss in convolutionally coded systems
NASA Technical Reports Server (NTRS)
Divsalar, D.; Simon, M. K.; Yuen, J. H.
1989-01-01
The use of interleaving after convolutional coding and deinterleaving before Viterbi decoding is proposed. This effectively reduces radio loss at low loop signal-to-noise ratios (SNRs) by several decibels and at high loop SNRs by a few tenths of a decibel. Performance of the coded system can be enhanced further if the modulation index is optimized for this system. This corresponds to a reduction of bit SNR at a certain bit error rate for the overall system. The introduction of interleaving/deinterleaving into communication systems designed for future deep space missions does not substantially complicate their hardware design or increase their system cost.
NASA Technical Reports Server (NTRS)
Lee, P. J.
1984-01-01
For rate 1/N convolutional codes, a recursive algorithm for finding the transfer function bound on bit error rate (BER) at the output of a Viterbi decoder is described. This technique is very fast and requires very little storage since all the unnecessary operations are eliminated. Using this technique, we find and plot bounds on the BER performance of known codes of rate 1/2 with K ≤ 18 and rate 1/3 with K ≤ 14. When more than one reported code with the same parameters is known, we select the code that minimizes the required signal-to-noise ratio for a desired bit error rate of 10^-6. This criterion of determining the goodness of a code had previously been found to be more useful than the maximum free distance criterion and was used in the code search procedures for very short constraint length codes. This very efficient technique can also be used for searches of longer constraint length codes.
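For context, the quantity being computed recursively here is the standard transfer-function union bound; in its usual form (quoted from the general literature, not from this report), for a rate R = 1/N code with transfer function T(D, I) used with coherent BPSK and soft-decision Viterbi decoding on the AWGN channel,

    P_b \;\le\; \left.\frac{\partial T(D, I)}{\partial I}\right|_{I = 1,\; D = e^{-R\,E_b/N_0}}, \qquad R = 1/N.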
Convolutional code performance in planetary entry channels
NASA Technical Reports Server (NTRS)
Modestino, J. W.
1974-01-01
The planetary entry channel is modeled for communication purposes, representing turbulent atmospheric scattering effects. The performance of short and long constraint length convolutional codes is investigated in conjunction with coherent BPSK modulation and Viterbi maximum likelihood decoding. Algorithms for sequential decoding are studied in terms of computation and/or storage requirements as a function of the fading channel parameters. The performance of the coded coherent BPSK system is compared with the coded incoherent MFSK system. Results indicate that: some degree of interleaving is required to combat time-correlated fading of the channel; only modest amounts of interleaving are required to approach the performance of the memoryless channel; additional propagation results are required on the phase perturbation process; and the incoherent MFSK system is superior when phase tracking errors are considered.
Turbo Trellis Coded Modulation With Iterative Decoding for Mobile Satellite Communications
NASA Technical Reports Server (NTRS)
Divsalar, D.; Pollara, F.
1997-01-01
In this paper, analytical bounds on the performance of parallel concatenation of two codes, known as turbo codes, and serial concatenation of two codes over fading channels are obtained. Based on this analysis, design criteria for the selection of component trellis codes for MPSK modulation, and a suitable bit-by-bit iterative decoding structure, are proposed. Examples are given for a throughput of 2 bits/sec/Hz with 8PSK modulation. The parallel concatenation example uses two rate 4/5 8-state convolutional codes with two interleavers. The convolutional codes' outputs are then mapped to two 8PSK modulations. The serial concatenated code example uses an 8-state outer code with rate 4/5 and a 4-state inner trellis code with 5 inputs and 2 x 8PSK outputs per trellis branch. Based on the above-mentioned design criteria for fading channels, a method to obtain the structure of the trellis code with maximum diversity is proposed. Simulation results are given for AWGN and an independent Rayleigh fading channel with perfect Channel State Information (CSI).
Convolutional coding at 50 Mbps for the Shuttle Ku-band return link
NASA Technical Reports Server (NTRS)
Batson, B. H.; Huth, G. K.
1976-01-01
Error correcting coding is required for the 50 Mbps data link from the Shuttle Orbiter through the Tracking and Data Relay Satellite System (TDRSS) to the ground because of severe power limitations. Convolutional coding has been chosen because the decoding algorithms (sequential and Viterbi) provide significant coding gains at the required bit error probability of 10^-6 and can be implemented at 50 Mbps with moderate hardware. While a 50 Mbps sequential decoder has been built, the highest data rate achieved for a Viterbi decoder is 10 Mbps. Thus, five multiplexed 10 Mbps Viterbi decoders must be used to provide a 50 Mbps data rate. This paper discusses the tradeoffs which were considered when selecting the multiplexed Viterbi decoder approach for this application.
Accumulate-Repeat-Accumulate-Accumulate Codes
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Dolinar, Sam; Thorpe, Jeremy
2004-01-01
Inspired by recently proposed Accumulate-Repeat-Accumulate (ARA) codes [15], in this paper we propose a channel coding scheme called Accumulate-Repeat-Accumulate-Accumulate (ARAA) codes. These codes can be seen as serial turbo-like codes or as a subclass of Low Density Parity Check (LDPC) codes, and they have a projected graph or protograph representation; this allows for a high-speed iterative decoder implementation using belief propagation. An ARAA code can be viewed as a precoded Repeat-and-Accumulate (RA) code with puncturing in concatenation with another accumulator, where simply an accumulator is chosen as the precoder; thus ARAA codes have a very fast encoder structure. Using density evolution on their associated protographs, we find examples of rate-1/2 ARAA codes with maximum variable node degree 4 for which a minimum bit-SNR as low as 0.21 dB from the channel capacity limit can be achieved as the block size goes to infinity. Such a low threshold cannot be achieved by RA or Irregular RA (IRA) or unstructured irregular LDPC codes with the same constraint on the maximum variable node degree. Furthermore, by puncturing the accumulators we can construct families of higher rate ARAA codes with thresholds that stay uniformly close to their respective channel capacity thresholds. Iterative decoding simulation results show comparable performance with the best-known LDPC codes but with very low error floor even at moderate block sizes.
Factorising the 3D topologically twisted index
NASA Astrophysics Data System (ADS)
Cabo-Bizet, Alejandro
2017-04-01
We explore the path integration, taken along the contour of hermitian (non-auxiliary) field configurations, of topologically twisted N=2 Chern-Simons-matter theory (TTCSM) on S_2 times a segment. In this way we obtain the formula for the 3D topologically twisted index: first as a convolution of TTCSM on S_2 times halves of S_1; second as TTCSM on S_2 × S_1 with a puncture; and third as TTCSM on S_2 × S_1. In contradistinction to the first two cases, in the third case the vector multiplet auxiliary field D is constrained to be anti-hermitian.
Liu, Shuo; Cui, Tie Jun; Zhang, Lei; Xu, Quan; Wang, Qiu; Wan, Xiang; Gu, Jian Qiang; Tang, Wen Xuan; Qing Qi, Mei; Han, Jia Guang; Zhang, Wei Li; Zhou, Xiao Yang; Cheng, Qiang
2016-10-01
The concept of the coding metasurface links physical metamaterial particles with digital codes, making it possible to perform digital signal processing on the coding metasurface to realize unusual physical phenomena. Here, this study performs Fourier operations on coding metasurfaces and proposes a principle, called scattering-pattern shift, based on the convolution theorem, which allows steering of the scattering pattern to an arbitrarily predesigned direction. Owing to the constant reflection amplitude of the coding particles, the required coding pattern can be achieved simply by the modulus addition of two coding matrices. This study demonstrates that the scattering patterns calculated directly from the coding pattern using the Fourier transform are in excellent agreement with numerical simulations based on realistic coding structures, providing an efficient method for optimizing coding patterns to achieve predesigned scattering beams. The most important advantage of this approach over previous schemes for producing anomalous single-beam scattering is its flexible and continuous control of steering to arbitrary directions. This work opens a new route to studying metamaterials from a fully digital perspective, predicting the possibility of combining conventional theorems of digital signal processing with the coding metasurface to realize more powerful manipulations of electromagnetic waves.
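The scattering-pattern shift principle lends itself to a direct numerical experiment: the far-field pattern is, up to an element factor, the 2D Fourier transform of the coding pattern, and adding a gradient code to any coding matrix (modulo the number of phase states) multiplies the aperture field by a linear phase and therefore steers the pattern. A toy sketch with an assumed 1-bit (0/π) metasurface, purely illustrative:

    import numpy as np

    N, STATES = 32, 2                         # 32x32 coding particles, 1-bit coding
    chess = np.indices((N, N)).sum(0) % 2     # checkerboard coding matrix
    grad = np.tile(np.arange(N) % 2, (N, 1))  # stripe gradient code used to steer

    def pattern(code):
        """Far-field magnitude ~ |2D FFT| of the aperture phase exp(j*2*pi*code/STATES)."""
        aperture = np.exp(1j * 2 * np.pi * code / STATES)
        return np.abs(np.fft.fftshift(np.fft.fft2(aperture)))

    p0 = pattern(chess)
    p1 = pattern((chess + grad) % STATES)     # modulus addition of two coding matrices
    # The convolution theorem predicts p1 is p0 convolved with the gradient code's
    # own pattern, i.e. the same lobes shifted in the angular (u, v) plane:
    print(np.unravel_index(p0.argmax(), p0.shape),
          np.unravel_index(p1.argmax(), p1.shape))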
NASA Technical Reports Server (NTRS)
Silva, Walter A.
1993-01-01
A methodology for modeling nonlinear unsteady aerodynamic responses, for subsequent use in aeroservoelastic analysis and design, using the Volterra-Wiener theory of nonlinear systems is presented. The methodology is extended to predict nonlinear unsteady aerodynamic responses of arbitrary frequency. The Volterra-Wiener theory uses multidimensional convolution integrals to predict the response of nonlinear systems to arbitrary inputs. The CAP-TSD (Computational Aeroelasticity Program - Transonic Small Disturbance) code is used to generate linear and nonlinear unit impulse responses that correspond to each of the integrals for a rectangular wing with a NACA 0012 section with pitch and plunge degrees of freedom. The computed kernels then are used to predict linear and nonlinear unsteady aerodynamic responses via convolution and compared to responses obtained using the CAP-TSD code directly. The results indicate that the approach can be used to predict linear unsteady aerodynamic responses exactly for any input amplitude or frequency at a significant cost savings. Convolution of the nonlinear terms results in nonlinear unsteady aerodynamic responses that compare reasonably well with those computed using the CAP-TSD code directly but at significant computational cost savings.
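The first-order (linear) part of such a prediction is just a discrete convolution of the identified unit impulse response with an arbitrary input, and the second-order kernel generalizes this to a double sum. A schematic numpy version, with a made-up impulse response standing in for the CAP-TSD output and a toy separable second-order kernel assumed for brevity:

    import numpy as np

    dt = 0.01
    t = np.arange(0, 2, dt)
    h1 = np.exp(-3 * t) * np.sin(12 * t)      # stand-in first-order (linear) kernel
    u = np.sin(4 * t) * (t > 0.2)             # arbitrary pitch input, for example

    # First-order Volterra term: y1(n) = dt * sum_k h1(k) u(n - k)
    y1 = dt * np.convolve(h1, u)[:len(t)]

    # Second-order term with a (toy) separable kernel h2(k, m) = h1(k) h1(m):
    # y2(n) = dt^2 * sum_k sum_m h1(k) h1(m) u(n-k) u(n-m) = (dt * conv(h1, u))^2
    y2 = (dt * np.convolve(h1, u)[:len(t)])**2
    y = y1 + y2                                # nonlinear response prediction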
Coding/modulation trade-offs for Shuttle wideband data links
NASA Technical Reports Server (NTRS)
Batson, B. H.; Huth, G. K.; Trumpis, B. D.
1974-01-01
This paper describes various modulation and coding schemes which are potentially applicable to the Shuttle wideband data relay communications link. This link will be capable of accommodating up to 50 Mbps of scientific data and will be subject to a power constraint which forces the use of channel coding. Although convolutionally encoded coherent binary PSK is the tentative signal design choice for the wideband data relay link, FM techniques are of interest because of the associated hardware simplicity and because an FM system is already planned to be available for transmission of television via relay satellite to the ground. Binary and M-ary FSK are considered as candidate modulation techniques, and both coherent and noncoherent ground station detection schemes are examined. The potential use of convolutional coding is considered in conjunction with each of the candidate modulation techniques.
High-speed architecture for the decoding of trellis-coded modulation
NASA Technical Reports Server (NTRS)
Osborne, William P.
1992-01-01
Since 1971, when the Viterbi Algorithm was introduced as the optimal method of decoding convolutional codes, improvements in circuit technology, especially VLSI, have steadily increased its speed and practicality. Trellis-Coded Modulation (TCM) combines convolutional coding with higher level modulation (non-binary source alphabet) to provide forward error correction and spectral efficiency. For binary codes, the current state-of-the-art is a 64-state Viterbi decoder on a single CMOS chip, operating at a data rate of 25 Mbps. Recently, there has been interest in increasing the speed of the Viterbi Algorithm by improving the decoder architecture or by reducing the algorithm itself. Designs employing new architectural techniques are now in existence; however, these techniques are currently applied to simpler binary codes, not to TCM. The purpose of this report is to discuss TCM architectural considerations in general and to present the design, at the logic gate level, of a specific TCM decoder which applies these considerations to achieve high-speed decoding.
NASA Technical Reports Server (NTRS)
Noble, Viveca K.
1993-01-01
There are various elements such as radio frequency interference (RFI) which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal-to-noise ratio needed for a certain desired bit error rate. The use of concatenated coding, e.g. inner convolutional code and outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
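For reference, the 16-bit CRC in question is commonly the CCITT polynomial x^16 + x^12 + x^5 + 1, and a bitwise software version fits in a few lines. This sketch assumes the widespread 0xFFFF preset and no final inversion, so the exact variant should be checked against the applicable CCSDS recommendation before reuse.

    def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
        """CRC-16/CCITT, polynomial 0x1021 (x^16 + x^12 + x^5 + 1), MSB first."""
        for byte in data:
            crc ^= byte << 8
            for _ in range(8):
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
        return crc

    print(hex(crc16_ccitt(b"123456789")))   # 0x29b1, the known check value for this preset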
New Syndrome Decoding Techniques for the (n, K) Convolutional Codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1983-01-01
This paper presents a new syndrome decoding algorithm for the (n, k) convolutional codes (CC) which differs completely from an earlier syndrome decoding algorithm of Schalkwijk and Vinck. The new algorithm is based on the general solution of the syndrome equation, a linear Diophantine equation for the error polynomial vector E(D). The set of Diophantine solutions is a coset of the CC. In this error coset a recursive, Viterbi-like algorithm is developed to find the minimum weight error vector Ê(D). An example illustrating the new decoding algorithm is given for the binary nonsystematic (3, 1) CC.
Simplified Syndrome Decoding of (n, 1) Convolutional Codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1983-01-01
A new syndrome decoding algorithm for the (n, 1) convolutional codes (CC) that is different from, and simpler than, the previous syndrome decoding algorithm of Schalkwijk and Vinck is presented. The new algorithm uses the general solution of the polynomial linear Diophantine equation for the error polynomial vector E(D). This set of Diophantine solutions is a coset of the CC space. A recursive, Viterbi-like algorithm is developed to find the minimum weight error vector Ê(D) in this error coset. An example illustrating the new decoding algorithm is given for the binary nonsystematic (2, 1) CC.
Recent advances in coding theory for near error-free communications
NASA Technical Reports Server (NTRS)
Cheung, K.-M.; Deutsch, L. J.; Dolinar, S. J.; Mceliece, R. J.; Pollara, F.; Shahshahani, M.; Swanson, L.
1991-01-01
Channel and source coding theories are discussed. The following subject areas are covered: large constraint length convolutional codes (the Galileo code); decoder design (the big Viterbi decoder); Voyager's and Galileo's data compression scheme; current research in data compression for images; neural networks for soft decoding; neural networks for source decoding; finite-state codes; and fractals for data compression.
Computer algorithm for coding gain
NASA Technical Reports Server (NTRS)
Dodd, E. E.
1974-01-01
Development of a computer algorithm for coding gain for use in an automated communications link design system. Using an empirical formula which defines coding gain as used in space communications engineering, an algorithm is constructed on the basis of available performance data for nonsystematic convolutional encoding with soft-decision (eight-level) Viterbi decoding.
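In such a link-design tool, coding gain at a target BER is simply the horizontal gap between the uncoded and coded BER curves, so with tabulated performance data the computation is one interpolation. A minimal sketch; the sample curve points are invented placeholders, not the report's data.

    import numpy as np

    def coding_gain_db(ebno_unc, ber_unc, ebno_cod, ber_cod, target_ber):
        """Gain = Eb/N0 required uncoded minus Eb/N0 required coded, at target_ber.
        Interpolate Eb/N0 against log10(BER), which is near-linear in practice."""
        req = lambda e, b: np.interp(np.log10(target_ber), np.log10(b)[::-1], e[::-1])
        return req(ebno_unc, ber_unc) - req(ebno_cod, ber_cod)

    # Invented placeholder curves (Eb/N0 in dB, BER), listed with descending BER:
    unc = (np.array([6.0, 8.0, 9.6]), np.array([2e-3, 2e-4, 1e-5]))
    cod = (np.array([3.0, 4.5, 5.5]), np.array([1e-3, 1e-4, 1e-5]))
    print(coding_gain_db(*unc, *cod, target_ber=1e-5))   # about 4.1 dB here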
Advanced coding and modulation schemes for TDRSS
NASA Technical Reports Server (NTRS)
Harrell, Linda; Kaplan, Ted; Berman, Ted; Chang, Susan
1993-01-01
This paper describes the performance of the Ungerboeck and pragmatic 8-Phase Shift Key (PSK) Trellis Code Modulation (TCM) coding techniques with and without a (255,223) Reed-Solomon outer code as they are used for Tracking Data and Relay Satellite System (TDRSS) S-Band and Ku-Band return services. The performance of these codes at high data rates is compared to uncoded Quadrature PSK (QPSK) and rate 1/2 convolutionally coded QPSK in the presence of Radio Frequency Interference (RFI), self-interference, and hardware distortions. This paper shows that the outer Reed-Solomon code is necessary to achieve a 10^-5 Bit Error Rate (BER) with an acceptable level of degradation in the presence of RFI. This paper also shows that the TCM codes with or without the Reed-Solomon outer code do not perform well in the presence of self-interference. In fact, the uncoded QPSK signal performs better than the TCM coded signal in the self-interference situation considered in this analysis. Finally, this paper shows that the E_b/N_0 degradation due to TDRSS hardware distortions is approximately 1.3 dB with a TCM coded signal or a rate 1/2 convolutionally coded QPSK signal and is 3.2 dB with an uncoded QPSK signal.
Suitability of Exoseal Vascular Closure Device for Antegrade Femoral Artery Puncture Site Closure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmelter, Christopher, E-mail: christopher.schmelter@klinikum-ingolstadt.de; Liebl, Andrea; Poullos, Nektarios
Purpose. To assess the efficacy and safety of the Exoseal vascular closure device for antegrade puncture of the femoral artery. Methods. In a prospective study from February 2011 to January 2012, a total of 93 consecutive patients received a total of 100 interventional procedures via an antegrade puncture of the femoral artery. An Exoseal vascular closure device (6F) was used for closure in all cases. Puncture technique, duration of manual compression, and use of compression bandages were documented. All patients were monitored by vascular ultrasound and color-coded duplex sonography of their respective femoral artery puncture site within 12 to 36 h after angiography to check for vascular complications. Results. In 100 antegrade interventional procedures, the Exoseal vascular closure device was applied successfully for closure of the femoral artery puncture site in 96 cases (96 of 100, 96.0 %). The vascular closure device could not be deployed in one case as a result of kinking of the vascular sheath introducer and in three cases because the bioabsorbable plug was not properly delivered to the extravascular space adjacent to the arterial puncture site, but instead fully removed with the delivery system (4.0 %). Twelve to 36 h after the procedure, vascular ultrasound revealed no complications at the femoral artery puncture site in 93 cases (93.0 %). Minor vascular complications were found in seven cases (7.0 %), with four cases (4.0 %) of pseudoaneurysm and three cases (3.0 %) of significant late bleeding, none of which required surgery. Conclusion. The Exoseal vascular closure device was safely used for antegrade puncture of the femoral artery, with a high rate of procedural success (96.0 %), a low rate of minor vascular complications (7.0 %), and no major adverse events.
Capacity, cutoff rate, and coding for a direct-detection optical channel
NASA Technical Reports Server (NTRS)
Massey, J. L.
1980-01-01
It is shown that Pierce's pulse position modulation scheme with 2^L pulse positions, used on a self-noise-limited direct detection optical communication channel, results in a 2^L-ary erasure channel that is equivalent to the parallel combination of L completely correlated binary erasure channels. The capacity of the full channel is the sum of the capacities of the component channels, but the cutoff rate of the full channel is shown to be much smaller than the sum of the cutoff rates. An interpretation of the cutoff rate is given that suggests a complexity advantage in coding separately on the component channels. It is shown that if short-constraint-length convolutional codes with Viterbi decoders are used on the component channels, then the performance and complexity compare favorably with the Reed-Solomon coding system proposed by McEliece for the full channel. The reasons for this unexpectedly fine performance by the convolutional code system are explored in detail, as are various facets of the channel structure.
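The gap the paper exploits is easy to reproduce numerically. For a 2^L-ary erasure channel with erasure probability ε, capacity is (1-ε)L bits per use, exactly L times the capacity of each correlated binary erasure channel, while the full channel's cutoff rate falls well below L times the binary cutoff rate. The formulas below are my restatement of the standard capacity and cutoff-rate expressions for erasure channels, not quotations from the paper.

    import numpy as np

    def rates(L, eps):
        M = 2**L
        C_full  = (1 - eps) * L                            # capacity, bits/use
        R0_full = np.log2(M) - np.log2(1 - eps + M * eps)  # cutoff rate, full channel
        R0_bin  = 1 - np.log2(1 + eps)                     # cutoff rate of one binary
        return C_full, R0_full, L * R0_bin                 # erasure channel, summed

    for L in (2, 4, 8):
        C, R0, R0_sum = rates(L, eps=0.3)
        print(f"L={L}: C={C:.2f}  R0(full)={R0:.2f}  sum of component R0={R0_sum:.2f}")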
Coding gains and error rates from the Big Viterbi Decoder
NASA Technical Reports Server (NTRS)
Onyszchuk, I. M.
1991-01-01
A prototype hardware Big Viterbi Decoder (BVD) was completed for an experiment with the Galileo Spacecraft. Searches for new convolutional codes, studies of Viterbi decoder hardware designs and architectures, mathematical formulations, decompositions of the deBruijn graph into identical and hierarchical subgraphs, and very large scale integration (VLSI) chip design are just a few examples of tasks completed for this project. The BVD bit error rates (BER), measured from hardware and software simulations, are plotted as a function of bit signal-to-noise ratio E_b/N_0 on the additive white Gaussian noise channel. Using the constraint length 15, rate 1/4, experimental convolutional code for the Galileo mission, the BVD gains 1.5 dB over the NASA standard (7,1/2) Maximum Likelihood Convolutional Decoder (MCD) at a BER of 0.005. At this BER, the same gain results when the (255,223) NASA standard Reed-Solomon decoder is used, which yields a word error rate of 2.1 × 10^-8 and a BER of 1.4 × 10^-9. The (15,1/6) code to be used by the Cometary Rendezvous Asteroid Flyby (CRAF)/Cassini missions yields 1.7 dB of coding gain. These gains are measured with respect to symbols input to the BVD and increase with decreasing BER. Also, 8-bit input symbol quantization makes the BVD resistant to demodulated signal-level variations. Because these codes require higher bandwidth than the NASA (7,1/2) code, the gains are offset by about 0.1 dB of expected additional receiver losses. Coding gains of several decibels are possible by compressing all spacecraft data.
DSN telemetry system performance using a maximum likelihood convolutional decoder
NASA Technical Reports Server (NTRS)
Benjauthrit, B.; Kemp, R. P.
1977-01-01
Results are described of telemetry system performance testing using DSN equipment and a Maximum Likelihood Convolutional Decoder (MCD) for code rates 1/2 and 1/3, constraint length 7, and special test software. The test results confirm the superiority of the rate 1/3 code over the rate 1/2 code. The overall system performance losses determined at the output of the Symbol Synchronizer Assembly are less than 0.5 dB for both code rates. Comparison of the performance is also made with existing mathematical models. Error statistics of the decoded data are examined. The MCD operational threshold is found to be about 1.96 dB.
New syndrome decoding techniques for the (n, k) convolutional codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1984-01-01
This paper presents a new syndrome decoding algorithm for the (n, k) convolutional codes (CC) which differs completely from an earlier syndrome decoding algorithm of Schalkwijk and Vinck. The new algorithm is based on the general solution of the syndrome equation, a linear Diophantine equation for the error polynomial vector E(D). The set of Diophantine solutions is a coset of the CC. In this error coset a recursive, Viterbi-like algorithm is developed to find the minimum weight error vector Ê(D). An example illustrating the new decoding algorithm is given for the binary nonsystematic (3, 1) CC. Previously announced in STAR as N83-34964.
A method of estimating GPS instrumental biases with a convolution algorithm
NASA Astrophysics Data System (ADS)
Li, Qi; Ma, Guanyi; Lu, Weijun; Wan, Qingtao; Fan, Jiangtao; Wang, Xiaolan; Li, Jinghua; Li, Changhua
2018-03-01
This paper presents a method of deriving the instrumental differential code biases (DCBs) of GPS satellites and dual frequency receivers. Considering that the total electron content (TEC) varies smoothly over a small area, one ionospheric pierce point (IPP) and four more nearby IPPs were selected to build an equation with a convolution algorithm. In addition, unknown DCB parameters were arranged into a set of equations with GPS observations in a day unit by assuming that DCBs do not vary within a day. Then, the DCBs of satellites and receivers were determined by solving the equation set with the least-squares fitting technique. The performance of this method is examined by applying it to 361 days in 2014 using the observation data from 1311 GPS Earth Observation Network (GEONET) receivers. The result was cross-compared with the DCBs estimated by the mesh method and the IONEX products from the Center for Orbit Determination in Europe (CODE). The DCB values derived by this method agree with those of the mesh method and the CODE products, with biases of 0.091 ns and 0.321 ns, respectively. The convolution method's accuracy and stability were quite good and showed improvements over the mesh method.
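The estimation step reduces to ordinary least squares once the smooth-TEC assumption has been used to difference away the ionosphere: differencing each IPP observation against the mean of its four nearby IPPs (served by other satellite/receiver pairs) leaves only combinations of the unknown DCBs plus noise, and zero-mean constraints then remove the common-mode ambiguities. The synthetic-data sketch below illustrates that idea only; it is not the authors' implementation, and all sizes and noise levels are invented.

    import numpy as np

    rng = np.random.default_rng(1)
    n_sat, n_rec, n_obs = 8, 20, 4000
    dcb_true = rng.normal(0.0, 1.0, n_sat + n_rec)    # nanosecond-scale biases
    dcb_true[:n_sat] -= dcb_true[:n_sat].mean()        # zero-mean satellite set
    dcb_true[n_sat:] -= dcb_true[n_sat:].mean()        # zero-mean receiver set

    # Each row: (center IPP observation) - (mean of its 4 neighbor observations).
    # The smooth TEC cancels in the difference; combinations of DCBs remain.
    A = np.zeros((n_obs, n_sat + n_rec))
    for i in range(n_obs):
        sats = rng.choice(n_sat, 5, replace=False)
        recs = rng.choice(n_rec, 5, replace=False)
        A[i, sats[0]] += 1.0
        A[i, n_sat + recs[0]] += 1.0                   # the center IPP's link
        for s, r in zip(sats[1:], recs[1:]):           # the 4 nearby IPPs
            A[i, s] -= 0.25
            A[i, n_sat + r] -= 0.25
    y = A @ dcb_true + rng.normal(0.0, 0.05, n_obs)    # differenced observations

    # Heavily weighted zero-mean rows remove the two common-mode ambiguities
    C = np.zeros((2, n_sat + n_rec))
    C[0, :n_sat] = 1e3
    C[1, n_sat:] = 1e3
    dcb_hat = np.linalg.lstsq(np.vstack([A, C]), np.r_[y, 0.0, 0.0], rcond=None)[0]
    print(np.abs(dcb_hat - dcb_true).max())            # small: biases recovered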
Throughput Optimization Via Adaptive MIMO Communications
2006-05-30
End-to-end Matlab packet simulation platform. * Low density parity check code (LDPCC). * Field trials with Silvus DSP MIMO testbed. * High mobility...incorporate advanced LDPC (low density parity check) codes. Realizing that the power of LDPC codes comes at the price of decoder complexity, we also...Channel Coding: Binary Convolutional Code or LDPC; Packet Length: 0 to 2^16-1 bytes; Coding Rate: 1/2, 2/3, 3/4, 5/6; MIMO Channel Training Length: 0 to 4 symbols
NASA Technical Reports Server (NTRS)
Divsalar, D.; Pollara, F.
1995-01-01
In this article, we design new turbo codes that can achieve near-Shannon-limit performance. The design criterion for random interleavers is based on maximizing the effective free distance of the turbo code, i.e., the minimum output weight of codewords due to weight-2 input sequences. An upper bound on the effective free distance of a turbo code is derived. This upper bound can be achieved if the feedback connection of the convolutional codes uses primitive polynomials. We review multiple turbo codes (parallel concatenation of q convolutional codes), which increase the so-called 'interleaving gain' as q and the interleaver size increase, and a suitable decoder structure derived from an approximation to the maximum a posteriori probability decision rule. We develop new rate 1/3, 2/3, 3/4, and 4/5 constituent codes to be used in the turbo encoder structure. These codes, with from 2 to 32 states, are designed by using primitive polynomials. The resulting turbo codes have rates b/n (b = 1, 2, 3, 4 and n = 2, 3, 4, 5, 6), and include random interleavers for better asymptotic performance. These codes are suitable for deep-space communications with low throughput and for near-Earth communications where high throughput is desirable. The performance of these codes is within 1 dB of the Shannon limit at a bit-error rate of 10^-6 for throughputs from 1/15 up to 4 bits/s/Hz.
Frame synchronization for the Galileo code
NASA Technical Reports Server (NTRS)
Arnold, S.; Swanson, L.
1991-01-01
Results are reported on the performance of the Deep Space Network's frame synchronizer for the (15,1/4) convolutional code after Viterbi decoding. The threshold is found that optimizes the probability of acquiring true sync within four frames using a strategy that requires next frame verification.
ANNA: A Convolutional Neural Network Code for Spectroscopic Analysis
NASA Astrophysics Data System (ADS)
Lee-Brown, Donald; Anthony-Twarog, Barbara J.; Twarog, Bruce A.
2018-01-01
We present ANNA, a Python-based convolutional neural network code for the automated analysis of stellar spectra. ANNA provides a flexible framework that allows atmospheric parameters such as temperature and metallicity to be determined with accuracies comparable to those of established but less efficient techniques. ANNA performs its parameterization extremely quickly; typically several thousand spectra can be analyzed in less than a second. Additionally, the code incorporates features which greatly speed up the training process necessary for the neural network to measure spectra accurately, resulting in a tool that can easily be run on a single desktop or laptop computer. Thus, ANNA is useful in an era when spectrographs increasingly have the capability to collect dozens to hundreds of spectra each night. This talk will cover the basic features included in ANNA and demonstrate its performance in two use cases: an open cluster abundance analysis involving several hundred spectra, and a metal-rich field star study. Applicability of the code to large survey datasets will also be discussed.
ICD-10 procedure codes produce transition challenges.
Boyd, Andrew D; Li, Jianrong 'John'; Kenost, Colleen; Zaim, Samir Rachid; Krive, Jacob; Mittal, Manish; Satava, Richard A; Burton, Michael; Smith, Jacob; Lussier, Yves A
2018-01-01
The transition of procedure coding from ICD-9-CM-Vol-3 to ICD-10-PCS has generated problems for the medical community at large resulting from the lack of clarity required to integrate two non-congruent coding systems. We hypothesized that quantifying these issues with network topology analyses offers a better understanding of the issues, and therefore we developed solutions (online tools) to empower hospital administrators and researchers to address these challenges. Five topologies were identified: "identity" (I), "class-to-subclass" (C2S), "subclass-to-class" (S2C), "convoluted" (C), and "no mapping" (NM). The procedure codes in the 2010 Illinois Medicaid dataset (3,290 patients, 116 institutions) were categorized as C=55%, C2S=40%, I=3%, NM=2%, and S2C=1%. The majority of the problematic and ambiguous (convoluted) mappings pertained to operations in ophthalmology, cardiology, urology, gyneco-obstetrics, and dermatology. Finally, the algorithms were expanded into a user-friendly tool to identify problematic topologies and specify lists of procedural codes utilized by medical professionals and researchers for mitigating error-prone translations, simplifying research, and improving quality. http://www.lussiergroup.org/transition-to-ICD10PCS
Gallmeier, F. X.; Iverson, E. B.; Lu, W.; ...
2016-01-08
Neutron transport simulation codes are an indispensable tool used for the design and construction of modern neutron scattering facilities and instrumentation. It has become increasingly clear that some neutron instrumentation has started to exploit physics that is not well modelled by the existing codes. In particular, the transport of neutrons through single crystals and across interfaces in MCNP(X), Geant4, and other codes ignores scattering from oriented crystals and refractive effects, and yet these are essential ingredients for the performance of monochromators and ultra-cold neutron transport, respectively (to mention but two examples). In light of these developments, we have extended the MCNPX code to include a single-crystal neutron scattering model and neutron reflection/refraction physics. Furthermore, we have also generated silicon scattering kernels for single crystals of definable orientation with respect to an incoming neutron beam. As a first test of these new tools, we have chosen to model the recently developed convoluted moderator concept, in which a moderating material is interleaved with layers of perfect crystals to provide an exit path for neutrons moderated to energies below the crystal's Bragg cutoff at locations deep within the moderator. Studies of simple cylindrical convoluted moderator systems of 100 mm diameter and composed of polyethylene and single crystal silicon were performed with the upgraded MCNPX code and reproduced the magnitude of effects seen in experiments compared to homogeneous moderator systems. Applying different material properties for refraction and reflection, and by replacing the silicon in the models with voids, we show that the emission enhancements seen in recent experiments are primarily caused by the transparency of the silicon/void layers. Finally, the convoluted moderator experiments described by Iverson et al. were simulated, and we find satisfactory agreement between the measurement and the results of simulations performed using the tools we have developed.
DeepMoon: Convolutional neural network trainer to identify moon craters
NASA Astrophysics Data System (ADS)
Silburt, Ari; Zhu, Chenchong; Ali-Dib, Mohamad; Menou, Kristen; Jackson, Alan
2018-05-01
DeepMoon trains a convolutional neural net, using data derived from a global digital elevation map (DEM) and a catalog of craters, to recognize craters on the Moon. The TensorFlow-based pipeline code is divided into three parts. The first generates a set of images of the Moon randomly cropped from the DEM, with corresponding crater positions and radii. The second trains a convnet using this data, and the third validates the convnet's predictions.
Enhanced decoding for the Galileo S-band mission
NASA Technical Reports Server (NTRS)
Dolinar, S.; Belongie, M.
1993-01-01
A coding system under consideration for the Galileo S-band low-gain antenna mission is a concatenated system using a variable redundancy Reed-Solomon outer code and a (14,1/4) convolutional inner code. The 8-bit Reed-Solomon symbols are interleaved to depth 8, and the eight 255-symbol codewords in each interleaved block have redundancies 64, 20, 20, 20, 64, 20, 20, and 20, respectively (or equivalently, the codewords have 191, 235, 235, 235, 191, 235, 235, and 235 8-bit information symbols, respectively). This concatenated code is to be decoded by an enhanced decoder that utilizes a maximum likelihood (Viterbi) convolutional decoder; a Reed-Solomon decoder capable of processing erasures; an algorithm for declaring erasures in undecoded codewords based on known erroneous symbols in neighboring decodable words; a second Viterbi decoding operation (redecoding) constrained to follow only paths consistent with the known symbols from previously decodable Reed-Solomon codewords; and a second Reed-Solomon decoding operation using the output from the Viterbi redecoder and additional erasure declarations to the extent possible. It is estimated that this code and decoder can achieve a decoded bit error rate of 1 × 10^-7 at a concatenated code signal-to-noise ratio of 0.76 dB. By comparison, a threshold of 1.17 dB is required for a baseline coding system consisting of the same (14,1/4) convolutional code, a (255,223) Reed-Solomon code with constant redundancy 32, also interleaved to depth 8, a one-pass Viterbi decoder, and a Reed-Solomon decoder incapable of declaring or utilizing erasures. The relative gain of the enhanced system is thus 0.41 dB. It is predicted from analysis based on an assumption of infinite interleaving that the coding gain could be further improved by approximately 0.2 dB if four stages of Viterbi decoding and four levels of Reed-Solomon redundancy are permitted. Confirmation of this effect and specification of the optimum four-level redundancy profile for depth-8 interleaving are currently under way.
Patient-specific dosimetry based on quantitative SPECT imaging and 3D-DFT convolution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akabani, G.; Hawkins, W.G.; Eckblade, M.B.
1999-01-01
The objective of this study was to validate the use of a 3-D discrete Fourier transform (3D-DFT) convolution method to carry out the dosimetry for I-131 for soft tissues in radioimmunotherapy procedures. To validate this convolution method, mathematical and physical phantoms were used as a basis of comparison with Monte Carlo transport (MCT) calculations, which were carried out using the EGS4 system code. The mathematical phantom consisted of a sphere containing uniform and nonuniform activity distributions. The physical phantom consisted of a cylinder containing uniform and nonuniform activity distributions. Quantitative SPECT reconstruction was carried out using the Circular Harmonic Transform (CHT) algorithm.
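The convolution step in such dosimetry is the standard FFT identity: with a cumulated-activity map A and a radionuclide dose point kernel K, the absorbed dose is D = A * K, evaluated as an inverse FFT of the product of forward FFTs, zero-padded to avoid circular wrap-around. A generic sketch, not tied to the study's I-131 kernel; the toy 1/r^2 kernel is an assumption for the demonstration.

    import numpy as np

    def dose_by_fft(activity, kernel):
        """3D convolution via FFT: D = IFFT( FFT(A) * FFT(K) ), linearly padded."""
        shape = [a + k - 1 for a, k in zip(activity.shape, kernel.shape)]
        return np.fft.irfftn(np.fft.rfftn(activity, shape) *
                             np.fft.rfftn(kernel, shape), shape)
        # trim or resample the result back to the SPECT grid as needed

    A = np.zeros((32, 32, 32)); A[16, 16, 16] = 1.0   # point-source sanity check
    r = np.linalg.norm(np.indices((9, 9, 9)) - 4, axis=0)
    K = 1.0 / (4 * np.pi * np.maximum(r, 0.5)**2)     # toy 1/r^2 dose point kernel
    D = dose_by_fft(A, K)                              # recovers K centered on the source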
Patient Safety Center Organization
2007-06-01
placement Medicine, Surgery Lumbar puncture* Medicine Thoracentesis* Medicine Shoulder dystocia Obstetrics & Gynecology Mock code-depressed newborn...Airway, 2) Team Training (using SimMan), 3) Endoscopy, 4) Shoulder Dystocia, 5) Episiotomy, and 6) Central Line Placement. The second group is
Introduction to Forward-Error-Correcting Coding
NASA Technical Reports Server (NTRS)
Freeman, Jon C.
1996-01-01
This reference publication introduces forward error correcting (FEC) and stresses definitions and basic calculations for use by engineers. The seven chapters include 41 example problems, worked in detail to illustrate points. A glossary of terms is included, as well as an appendix on the Q function. Block and convolutional codes are covered.
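Since the appendix on the Q function supports most of the worked BER examples in such a text, it is worth recalling that Q(x), the Gaussian tail probability, is available through the complementary error function as Q(x) = (1/2) erfc(x/√2). In code:

    import math

    def Q(x: float) -> float:
        """Gaussian tail probability Q(x) = P(N(0,1) > x) = 0.5 * erfc(x / sqrt(2))."""
        return 0.5 * math.erfc(x / math.sqrt(2.0))

    # Example: uncoded coherent BPSK bit error rate at Eb/N0 = 9.6 dB
    ebno = 10 ** (9.6 / 10)
    print(Q(math.sqrt(2 * ebno)))   # approximately 1e-5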
Modulation and coding for fast fading mobile satellite communication channels
NASA Technical Reports Server (NTRS)
Mclane, P. J.; Wittke, P. H.; Smith, W. S.; Lee, A.; Ho, P. K. M.; Loo, C.
1988-01-01
The performance of Gaussian baseband filtered minimum shift keying (GMSK) using differential detection in fast Rician fading is discussed, with a novel treatment of the inherent intersymbol interference (ISI) leading to an exact solution. Trellis-coded differential phase shift keying (DPSK) with a convolutional interleaver is considered. The channel is the Rician channel with the line-of-sight component subject to a lognormal transformation.
NASA Technical Reports Server (NTRS)
Cartier, D. E.
1973-01-01
A convolutional coding theory is given for the IME and the Heliocentric spacecraft. The amount of coding gain needed by the mission is determined. Recommendations are given for an encoder/decoder system to provide the gain along with an evaluation of the impact of the system on the space network in terms of costs and complexity.
Nada: A new code for studying self-gravitating tori around black holes
NASA Astrophysics Data System (ADS)
Montero, Pedro J.; Font, José A.; Shibata, Masaru
2008-09-01
We present a new two-dimensional numerical code called Nada designed to solve the full Einstein equations coupled to the general relativistic hydrodynamics equations. The code is mainly intended for studies of self-gravitating accretion disks (or tori) around black holes, although it is also suitable for regular spacetimes. Concerning technical aspects, the Einstein equations are formulated and solved in the code using a formulation of the standard 3+1 Arnowitt-Deser-Misner canonical formalism, the so-called Baumgarte-Shapiro-Shibata-Nakamura (BSSN) approach. A key feature of the code is that derivative terms in the spacetime evolution equations are computed using a fourth-order centered finite difference approximation in conjunction with the Cartoon method to impose the axisymmetry condition under Cartesian coordinates (the choice in Nada), and the puncture/moving puncture approach to carry out black hole evolutions. Correspondingly, the general relativistic hydrodynamics equations are written in flux-conservative form and solved with high-resolution, shock-capturing schemes. We perform and discuss a number of tests to assess the accuracy and expected convergence of the code, namely, (single) black hole evolutions, shock tubes, evolutions of both spherical and rotating relativistic stars in equilibrium, and the gravitational collapse of a spherical relativistic star leading to the formation of a black hole. In addition, paving the way for specific applications of the code, we also present results from fully general relativistic numerical simulations of a system formed by a black hole surrounded by a self-gravitating torus in equilibrium.
Investigation of the Use of Erasures in a Concatenated Coding Scheme
NASA Technical Reports Server (NTRS)
Kwatra, S. C.; Marriott, Philip J.
1997-01-01
A new method for declaring erasures in a concatenated coding scheme is investigated. This method is used with the rate 1/2, K = 7 convolutional code and the (255, 223) Reed-Solomon code. Errors-and-erasures Reed-Solomon decoding is used. The proposed erasure method uses a soft output Viterbi algorithm and information provided by decoded Reed-Solomon codewords in a deinterleaving frame. The results show that a gain of 0.3 dB is possible using a minimum number of decoding trials.
NASA Astrophysics Data System (ADS)
Vilardy, Juan M.; Giacometto, F.; Torres, C. O.; Mattos, L.
2011-01-01
The two-dimensional Fast Fourier Transform (FFT 2D) is an essential tool in the analysis and processing of two-dimensional discrete signals, enabling a large number of applications. This article describes the synthesis in VHDL code of the FFT 2D with fixed-point binary representation using the Matlab programming tool Simulink HDL Coder, showing a quick and easy way to handle overflow and underflow and to create registers, adders, and multipliers for complex data in VHDL, as well as the generation of test benches for verification of the generated code in the ModelSim tool. The main objective of the development of the hardware architecture of the FFT 2D is the subsequent implementation of the following operations applied to images: frequency filtering, convolution, and correlation. The description and synthesis of the hardware architecture use the Spartan-3E XC3S1200E FPGA from Xilinx.
Moody, Daniela; Wohlberg, Brendt
2018-01-02
An approach for land cover classification, seasonal and yearly change detection and monitoring, and identification of changes in man-made features may use a clustering of sparse approximations (CoSA) on sparse representations in learned dictionaries. The learned dictionaries may be derived using efficient convolutional sparse coding to build multispectral or hyperspectral, multiresolution dictionaries that are adapted to regional satellite image data. Sparse image representations of images over the learned dictionaries may be used to perform unsupervised k-means clustering into land cover categories. The clustering process behaves as a classifier in detecting real variability. This approach may combine spectral and spatial textural characteristics to detect geologic, vegetative, hydrologic, and man-made features, as well as changes in these features over time.
A digital communications system for manned spaceflight applications.
NASA Technical Reports Server (NTRS)
Batson, B. H.; Moorehead, R. W.
1973-01-01
A highly efficient, all-digital communications signal design employing convolutional coding and PN spectrum spreading is described for two-way transmission of voice and data between a manned spacecraft and ground. Variable-slope delta modulation is selected for analog/digital conversion of the voice signal, and a convolutional decoder utilizing the Viterbi decoding algorithm is selected for use at each receiving terminal. A PN spread spectrum technique is implemented to protect against multipath effects and to reduce the energy density (per unit bandwidth) impinging on the earth's surface to a value within the guidelines adopted by international agreement. Performance predictions are presented for transmission via a TDRS (tracking and data relay satellite) system and for direct transmission between the spacecraft and earth. Hardware estimates are provided for a flight-qualified communications system employing the coded digital signal design.
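Variable-slope (adaptive) delta modulation encodes voice as a one-bit-per-sample stream whose step size grows when successive output bits agree (slope overload) and shrinks when they alternate (granular noise). The sketch below shows one common variant of that idea as an illustration only; the step-size limits and growth factor are assumptions, and this is not the flight design described in the paper.

    import numpy as np

    def vsdm_encode(x, d_min=0.01, d_max=1.0, grow=1.5):
        """Variable-slope delta modulation: 1 bit per sample, adaptive step size."""
        bits, est, step, prev = [], 0.0, d_min, 0
        for sample in x:
            b = 1 if sample >= est else 0
            # Slope adaptation: same bit twice in a row -> enlarge step; else shrink
            step = min(step * grow, d_max) if b == prev else max(step / grow, d_min)
            est += step if b else -step
            bits.append(b); prev = b
        return np.array(bits)

    def vsdm_decode(bits, d_min=0.01, d_max=1.0, grow=1.5):
        out, est, step, prev = [], 0.0, d_min, 0
        for b in bits:
            step = min(step * grow, d_max) if b == prev else max(step / grow, d_min)
            est += step if b else -step
            out.append(est); prev = b
        return np.array(out)   # low-pass filter this to recover the voice waveform

    t = np.linspace(0, 1, 8000)
    x = 0.7 * np.sin(2 * np.pi * 5 * t)
    y = vsdm_decode(vsdm_encode(x))    # tracks x up to quantization noise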
tf_unet: Generic convolutional neural network U-Net implementation in Tensorflow
NASA Astrophysics Data System (ADS)
Akeret, Joel; Chang, Chihway; Lucchi, Aurelien; Refregier, Alexandre
2016-11-01
tf_unet mitigates radio frequency interference (RFI) signals in radio data using a special type of Convolutional Neural Network, the U-Net, that enables the classification of clean signal and RFI signatures in 2D time-ordered data acquired from a radio telescope. The code is not tied to a specific segmentation and can be used, for example, to detect radio frequency interference (RFI) in radio astronomy or galaxies and stars in widefield imaging data. This U-Net implementation can outperform classical RFI mitigation algorithms.
NASA Technical Reports Server (NTRS)
Rajpal, Sandeep; Rhee, Do Jun; Lin, Shu
1997-01-01
The first part of this paper presents a simple and systematic technique for constructing multidimensional M-ary phase shift keying (MPSK) trellis coded modulation (TCM) codes. The construction is based on a multilevel concatenation approach in which binary convolutional codes with good free branch distances are used as the outer codes and block MPSK modulation codes are used as the inner codes (or the signal spaces). Conditions on the phase invariance of these codes are derived and a multistage decoding scheme for these codes is proposed. The proposed technique can be used to construct good codes for both the additive white Gaussian noise (AWGN) and fading channels, as is shown in the second part of this paper.
Coding for spread spectrum packet radios
NASA Technical Reports Server (NTRS)
Omura, J. K.
1980-01-01
Packet radios are often expected to operate in a radio communication network environment where there tend to be man-made interference signals. To combat such interference, spread spectrum waveforms are being considered for some applications. The use of convolutional coding with Viterbi decoding to further improve the performance of spread spectrum packet radios is examined. At a bit error rate of 10^-5, improvements in performance of 4 to 5 dB can easily be achieved with such coding without any change in data rate or spread spectrum bandwidth. This coding gain is more dramatic in an interference environment.
Protograph LDPC Codes for the Erasure Channel
NASA Technical Reports Server (NTRS)
Pollara, Fabrizio; Dolinar, Samuel J.; Divsalar, Dariush
2006-01-01
This viewgraph presentation reviews the use of protograph Low Density Parity Check (LDPC) codes for erasure channels. A protograph is a Tanner graph with a relatively small number of nodes. A "copy-and-permute" operation can be applied to the protograph to obtain larger derived graphs of various sizes. For very high code rates and short block sizes, a low asymptotic threshold criterion is not the best approach to designing LDPC codes; simple protographs with much regularity and low maximum node degrees appear to be the best choices. Quantized-rateless protograph LDPC codes can be built by careful design of the protograph such that multiple puncturing patterns will still permit message-passing decoding to proceed.
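The "copy-and-permute" (lifting) operation mentioned here has a compact matrix form: every 1 in the protograph's base matrix is replaced by a Z x Z permutation matrix and every 0 by the Z x Z zero matrix, producing a derived graph Z times larger with the same local structure. A small illustrative sketch, with an invented base matrix:

    import numpy as np

    def lift(base, Z, rng):
        """Copy-and-permute a 0/1 protograph base matrix into a Z-lifted parity check matrix."""
        m, n = base.shape
        H = np.zeros((m * Z, n * Z), dtype=int)
        for i in range(m):
            for j in range(n):
                if base[i, j]:
                    perm = np.eye(Z, dtype=int)[rng.permutation(Z)]
                    H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] = perm
        return H

    base = np.array([[1, 1, 1, 0],      # tiny illustrative protograph:
                     [1, 1, 0, 1]])     # 2 check nodes, 4 variable nodes
    H = lift(base, Z=4, rng=np.random.default_rng(0))
    print(H.shape, H.sum(axis=0))       # (8, 16); node degrees are preserved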
Chen, Liang-Chieh; Papandreou, George; Kokkinos, Iasonas; Murphy, Kevin; Yuille, Alan L
2018-04-01
In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields of view, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but takes a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed "DeepLab" system sets the new state of the art on the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU on the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.
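The 2D atrous convolution described above has a simple 1D analogue; the following sketch (NumPy, valid mode, illustrative only) shows how spacing the filter taps by a sampling rate enlarges the field of view without adding parameters:

    import numpy as np

    def atrous_conv1d(x, w, rate):
        # taps of w are applied `rate` samples apart; rate=1 is ordinary convolution
        k = len(w)
        span = (k - 1) * rate + 1      # effective receptive field of the filter
        return np.array([sum(w[j] * x[i + j*rate] for j in range(k))
                         for i in range(len(x) - span + 1)])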
SENR/NRPy+: Numerical relativity in singular curvilinear coordinate systems
NASA Astrophysics Data System (ADS)
Ruchlin, Ian; Etienne, Zachariah B.; Baumgarte, Thomas W.
2018-03-01
We report on a new open-source, user-friendly numerical relativity code package called SENR/NRPy+. Our code extends previous implementations of the BSSN reference-metric formulation to a much broader class of curvilinear coordinate systems, making it ideally suited to modeling physical configurations with approximate or exact symmetries. In the context of modeling black hole dynamics, it is orders of magnitude more efficient than other widely used open-source numerical relativity codes. NRPy+ provides a Python-based interface in which equations are written in natural tensorial form and output at arbitrary finite difference order as highly efficient C code, putting complex tensorial equations at the scientist's fingertips without the need for an expensive software license. SENR provides the algorithmic framework that combines the C codes generated by NRPy+ into a functioning numerical relativity code. We validate against two other established, state-of-the-art codes, and achieve excellent agreement. For the first time, in the context of moving puncture black hole evolutions, we demonstrate nearly exponential convergence of constraint violation and gravitational waveform errors to zero as the order of spatial finite difference derivatives is increased, while fixing the numerical grids at moderate resolution in a singular coordinate system. Such behavior outside the horizons is remarkable, as numerical errors do not converge to zero near punctures, and all points along the polar axis are coordinate singularities. The formulation addresses such coordinate singularities via cell-centered grids and a simple change of basis that analytically regularizes tensor components with respect to the coordinates. Future plans include extending this formulation to allow dynamical coordinate grids and bispherical-like distribution of points to efficiently capture orbiting compact binary dynamics.
On the application of under-decimated filter banks
NASA Technical Reports Server (NTRS)
Lin, Y.-P.; Vaidyanathan, P. P.
1994-01-01
Maximally decimated filter banks have been extensively studied in the past. A filter bank is said to be under-decimated if the number of channels is more than the decimation ratio in the subbands. A maximally decimated filter bank is well known for its application in subband coding. Another application of maximally decimated filter banks is in block filtering. Convolution through block filtering has the advantages that parallelism is increased and data are processed at a lower rate. However, the computational complexity is comparable to that of direct convolution. More recently, another type of filter bank convolver has been developed. In this scheme, the convolution is performed in the subbands. Quantization and bit allocation of subband signals are based on signal variance, as in subband coding. Consequently, for a fixed rate, the result of convolution is more accurate than is direct convolution. This type of filter bank convolver also enjoys the advantages of block filtering, parallelism, and a lower working rate. Nevertheless, like block filtering, there is no computational saving. In this article, under-decimated systems are introduced to solve the problem. The new system is decimated only by half the number of channels. Two types of filter banks can be used in the under-decimated system: the discrete Fourier transform (DFT) filter banks and the cosine modulated filter banks. They are well known for their low complexity. In both cases, the system is approximately alias free, and the overall response is equivalent to a tunable multilevel filter. Properties of the DFT filter banks and the cosine modulated filter banks can be exploited to simultaneously achieve parallelism, computational saving, and a lower working rate. Furthermore, for both systems, the implementation cost of the analysis or synthesis bank is comparable to that of one prototype filter plus some low-complexity modulation matrices. The individual analysis and synthesis filters have complex coefficients in the DFT filter banks but have real coefficients in the cosine modulated filter banks.
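As a rough illustration of the filter banks discussed here, the following NumPy sketch builds an M-channel DFT-modulated analysis bank with decimation ratio D; choosing D = M gives the maximally decimated case and D = M // 2 the under-decimated case considered in the article (the prototype filter h is assumed given):

    import numpy as np

    def dft_analysis_bank(x, h, M, D):
        # channel k filters x with the prototype h modulated to frequency 2*pi*k/M,
        # then decimates the subband signal by D
        n = np.arange(len(h))
        return [np.convolve(x, h * np.exp(2j * np.pi * k * n / M))[::D]
                for k in range(M)]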
Encoders for block-circulant LDPC codes
NASA Technical Reports Server (NTRS)
Andrews, Kenneth; Dolinar, Sam; Thorpe, Jeremy
2005-01-01
In this paper, we present two encoding methods for block-circulant LDPC codes. The first is an iterative encoding method based on the erasure decoding algorithm, and the computations required are well organized due to the block-circulant structure of the parity check matrix. The second method uses block-circulant generator matrices, and the encoders are very similar to those for recursive convolutional codes. Some encoders of the second type have been implemented in a small Field Programmable Gate Array (FPGA) and operate at 100 Msymbols/second.
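A toy sketch of the second method's key ingredient: multiplication by a circulant block, whose rows are cyclic shifts of a single generator row, which is exactly the operation a shift-register encoder (as in recursive convolutional codes) implements. The generator row below is illustrative, not from the paper:

    import numpy as np

    def circulant_encode(msg, first_row):
        # build one circulant block of a generator matrix and encode by mod-2 product
        n = len(first_row)
        G = np.array([np.roll(first_row, i) for i in range(n)])
        return msg @ G % 2

    parity = circulant_encode(np.array([1, 0, 1, 1, 0]), np.array([1, 1, 0, 1, 0]))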
1996-01-01
Figure 3-1 ('Test Bench Pseudo Code') shows pseudo-code for a test bench with two application nodes. The outer test bench wrapper consists of three functions (pipeline_init, pipeline..., exit_func); the application wrapper is contained in the pipeline routine and similarly consists of an ...
High data rate coding for the space station telemetry links.
NASA Technical Reports Server (NTRS)
Lumb, D. R.; Viterbi, A. J.
1971-01-01
Coding systems for high data rates were examined from the standpoint of potential application in space-station telemetry links. Approaches considered included convolutional codes with sequential, Viterbi, and cascaded-Viterbi decoding. It was concluded that a high-speed (40 Mbps) sequential decoding system best satisfies the requirements for the assumed growth potential and specified constraints. Trade-off studies leading to this conclusion are reviewed, and some sequential (Fano) algorithm improvements are discussed, together with real-time simulation results.
Viterbi decoder node synchronization losses in the Reed-Solomon/Viterbi concatenated channel
NASA Technical Reports Server (NTRS)
Deutsch, L. J.; Miller, R. L.
1982-01-01
The Viterbi decoders currently used by the Deep Space Network (DSN) employ an algorithm for maintaining node synchronization that significantly degrades at bit signal-to-noise ratios (SNRs) of below 2.0 dB. In a recent report by the authors, it was shown that the telemetry receiving system, which uses a convolutionally encoded downlink, will suffer losses of 0.85 dB and 1.25 dB respectively at Voyager 2 Uranus and Neptune encounters. This report extends the results of that study to a concatenated (255,223) Reed-Solomon/(7, 1/2) convolutionally coded channel, by developing a new radio loss model for the concatenated channel. It is shown here that losses due to improper node synchronization of 0.57 dB at Uranus and 1.0 dB at Neptune can be expected if concatenated coding is used along with an array of one 64-meter and three 34-meter antennas.
FAST-PT: a novel algorithm to calculate convolution integrals in cosmological perturbation theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
McEwen, Joseph E.; Fang, Xiao; Hirata, Christopher M.
2016-09-01
We present a novel algorithm, FAST-PT, for performing convolution or mode-coupling integrals that appear in nonlinear cosmological perturbation theory. The algorithm uses several properties of gravitational structure formation (the locality of the dark matter equations and the scale invariance of the problem) as well as Fast Fourier Transforms to describe the input power spectrum as a superposition of power laws. This yields extremely fast performance, enabling mode-coupling integral computations fast enough to embed in Monte Carlo Markov Chain parameter estimation. We describe the algorithm and demonstrate its application to calculating nonlinear corrections to the matter power spectrum, including one-loop standard perturbation theory and the renormalization group approach. We also describe our public code (in Python) to implement this algorithm. The code, along with a user manual and example implementations, is available at https://github.com/JoeMcEwen/FAST-PT.
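The core numerical trick, evaluating a convolution integral with FFTs on a uniform grid, can be sketched in a few lines; this is only the generic idea, not the log-spaced power-law decomposition that FAST-PT itself implements:

    import numpy as np

    def fft_convolve(f, g, dx):
        # approximate (f*g)(x) = integral of f(y) g(x-y) dy by zero-padded FFT products
        n = len(f) + len(g) - 1
        return np.fft.irfft(np.fft.rfft(f, n) * np.fft.rfft(g, n), n) * dx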
Context-Dependent Piano Music Transcription With Convolutional Sparse Coding
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cogliati, Andrea; Duan, Zhiyao; Wohlberg, Brendt
2016-08-04
This study presents a novel approach to automatic transcription of piano music in a context-dependent setting. This approach employs convolutional sparse coding to approximate the music waveform as the summation of piano note waveforms (dictionary elements) convolved with their temporal activations (onset transcription). The piano note waveforms are pre-recorded for the specific piano to be transcribed in the specific environment. During transcription, the note waveforms are fixed and their temporal activations are estimated and post-processed to obtain the pitch and onset transcription. This approach works in the time domain, models the temporal evolution of piano notes, and estimates pitches and onsets simultaneously in the same framework. Finally, experiments show that it significantly outperforms a state-of-the-art music transcription method trained in the same context-dependent setting, in both transcription accuracy and time precision, in various scenarios including synthetic, anechoic, noisy, and reverberant environments.
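The signal model is easy to state in code: the waveform is approximated as the sum of note templates convolved with their sparse activations. A minimal reconstruction sketch follows (NumPy; estimating the activations, the hard part, is omitted):

    import numpy as np

    def reconstruct(templates, activations, length):
        # signal ~= sum_k d_k * x_k  (note waveform convolved with its onset train)
        y = np.zeros(length)
        for d, x in zip(templates, activations):
            c = np.convolve(d, x)[:length]
            y[:len(c)] += c
        return y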
Error-Trellis Construction for Convolutional Codes Using Shifted Error/Syndrome-Subsequences
NASA Astrophysics Data System (ADS)
Tajima, Masato; Okino, Koji; Miyagoshi, Takashi
In this paper, we extend the conventional error-trellis construction for convolutional codes to the case where a given check matrix H(D) has a factor D^l in some column (row). In the first case, there is a possibility that the size of the state space can be reduced using shifted error-subsequences, whereas in the second case, the size of the state space can be reduced using shifted syndrome-subsequences. The construction presented in this paper is based on the adjoint-obvious realization of the corresponding syndrome former H^T(D). In the case where all the columns and rows of H(D) are delay free, the proposed construction reduces to the conventional one of Schalkwijk et al. We also show that the proposed construction can equally realize the state-space reduction shown by Ariel et al. Moreover, we clarify the difference between their construction and ours using examples.
Low Density Parity Check Codes Based on Finite Geometries: A Rediscovery and More
NASA Technical Reports Server (NTRS)
Kou, Yu; Lin, Shu; Fossorier, Marc
1999-01-01
Low density parity check (LDPC) codes with iterative decoding based on belief propagation achieve astonishing error performance close to the Shannon limit. No algebraic or geometric method for constructing these codes has been reported and they are largely generated by computer search. As a result, encoding of long LDPC codes is in general very complex. This paper presents two classes of high rate LDPC codes whose constructions are based on finite Euclidean and projective geometries, respectively. These classes of codes are cyclic and have good constraint parameters and minimum distances. The cyclic structure allows the use of linear feedback shift registers for encoding. These finite geometry LDPC codes achieve very good error performance with either soft-decision iterative decoding based on belief propagation or Gallager's hard-decision bit flipping algorithm. These codes can be punctured or extended to obtain other good LDPC codes. A generalization of these codes is also presented.
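Gallager's hard-decision bit-flipping algorithm mentioned above is short enough to sketch directly (NumPy; H is the parity-check matrix as a 0/1 integer array, y the received hard decisions):

    import numpy as np

    def bit_flip_decode(H, y, max_iter=50):
        x = y.copy()
        for _ in range(max_iter):
            s = H @ x % 2                    # syndrome: which checks fail
            if not s.any():
                break                        # valid codeword reached
            counts = H.T @ s                 # number of failed checks touching each bit
            x[counts == counts.max()] ^= 1   # flip the most-suspect bits
        return x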
Advanced imaging communication system
NASA Technical Reports Server (NTRS)
Hilbert, E. E.; Rice, R. F.
1977-01-01
Key elements of the system are imaging and nonimaging sensors, a data compressor/decompressor, an interleaved Reed-Solomon block coder, a convolutional-encoded/Viterbi-decoded telemetry channel, and Reed-Solomon decoding. Data compression provides an efficient representation of sensor data, and channel coding improves the reliability of data transmission.
NASA Technical Reports Server (NTRS)
Lin, Shu; Rhee, Dojun
1996-01-01
This paper is concerned with construction of multilevel concatenated block modulation codes using a multi-level concatenation scheme for the frequency non-selective Rayleigh fading channel. In the construction of multilevel concatenated modulation code, block modulation codes are used as the inner codes. Various types of codes (block or convolutional, binary or nonbinary) are being considered as the outer codes. In particular, we focus on the special case for which Reed-Solomon (RS) codes are used as the outer codes. For this special case, a systematic algebraic technique for constructing q-level concatenated block modulation codes is proposed. Codes have been constructed for certain specific values of q and compared with the single-level concatenated block modulation codes using the same inner codes. A multilevel closest coset decoding scheme for these codes is proposed.
Coordinated design of coding and modulation systems
NASA Technical Reports Server (NTRS)
Massey, J. L.
1976-01-01
Work on partial unit memory codes continued; it was shown that for a given virtual state complexity, the maximum free distance over the class of all convolutional codes is achieved within the class of unit memory codes. The effect of phase-lock loop (PLL) tracking error on coding system performance was studied by using the channel cutoff rate as the measure of quality of a modulation system. Optimum modulation signal sets for a non-white Gaussian channel were considered using a heuristic selection rule based on a water-filling argument. The use of error correcting codes to perform data compression by the technique of syndrome source coding was researched, and a weight-and-error-locations scheme was developed that is closely related to LDSC coding.
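Syndrome source coding, as mentioned above, compresses a source block to its syndrome and decompresses by picking the lowest-weight sequence in that coset. A minimal sketch with the Hamming (7,4) code (illustrative only; sources with at most one nonzero bit per block are recovered exactly, at a 7:3 ratio):

    import numpy as np

    # Hamming (7,4) check matrix: column j is the binary expansion of j+1
    H = np.array([[1, 0, 1, 0, 1, 0, 1],
                  [0, 1, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]])

    def compress(x):                   # 7-bit sparse block -> 3-bit syndrome
        return H @ x % 2

    def decompress(s):                 # coset leader: lowest-weight block with syndrome s
        pos = int(s[0] + 2*s[1] + 4*s[2])
        e = np.zeros(7, dtype=int)
        if pos:
            e[pos - 1] = 1
        return e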
The Reed-Solomon encoders: Conventional versus Berlekamp's architecture
NASA Technical Reports Server (NTRS)
Perlman, M.; Lee, J. J.
1982-01-01
Concatenated coding was adopted for interplanetary space missions, employing a convolutional inner code and a Reed-Solomon (RS) outer code for spacecraft telemetry. Conventional RS encoders are compared with those that incorporate two architectural features which approximately halve the number of multiplications of a set of fixed arguments by any RS codeword symbol. The fixed arguments and the RS symbols are taken from a nonbinary finite field. Each set of multiplications is bit-serially performed and completed during one (bit-serial) symbol shift. All firmware employed by conventional RS encoders is eliminated.
The Composite Analytic and Simulation Package or RFI (CASPR) on a coded channel
NASA Technical Reports Server (NTRS)
Freedman, Jeff; Berman, Ted
1993-01-01
CASPR is an analysis package which determines the performance of a coded signal in the presence of Radio Frequency Interference (RFI) and Additive White Gaussian Noise (AWGN). It can analyze a system with convolutional coding, Reed-Solomon (RS) coding, or a concatenation of the two. The signals can either be interleaved or non-interleaved. The model measures the system performance in terms of either the Eb/N0 required to achieve a given Bit Error Rate (BER) or the BER needed for a constant Eb/N0.
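For scale, the uncoded baseline such a package compares against is one line of mathematics; a sketch of the coherent BPSK bit error rate over AWGN as a function of Eb/N0 (the standard textbook formula, not CASPR's RFI model):

    from math import erfc, sqrt

    def bpsk_ber(ebn0_db):
        # uncoded coherent BPSK over AWGN: Pb = 0.5 * erfc(sqrt(Eb/N0))
        ebn0 = 10 ** (ebn0_db / 10)
        return 0.5 * erfc(sqrt(ebn0))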
Error-correction coding for digital communications
NASA Astrophysics Data System (ADS)
Clark, G. C., Jr.; Cain, J. B.
This book is written for the design engineer who must build the coding and decoding equipment and for the communication system engineer who must incorporate this equipment into a system. It is also suitable as a senior-level or first-year graduate text for an introductory one-semester course in coding theory. Fundamental concepts of coding are discussed along with group codes, taking into account basic principles, practical constraints, performance computations, coding bounds, generalized parity check codes, polynomial codes, and important classes of group codes. Other topics explored are related to simple nonalgebraic decoding techniques for group codes, soft decision decoding of block codes, algebraic techniques for multiple error correction, the convolutional code structure and Viterbi decoding, syndrome decoding techniques, and sequential decoding techniques. System applications are also considered, giving attention to concatenated codes, coding for the white Gaussian noise channel, interleaver structures for coded systems, and coding for burst noise channels.
Error Control Coding Techniques for Space and Satellite Communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.; Cabral, Hermano A.; He, Jiali
1997-01-01
Bootstrap Hybrid Decoding (BHD) (Jelinek and Cocke, 1971) is a coding/decoding scheme that adds extra redundancy to a set of convolutionally encoded codewords and uses this redundancy to provide reliability information to a sequential decoder. Theoretical results indicate that bit error probability performance (BER) of BHD is close to that of Turbo-codes, without some of their drawbacks. In this report we study the use of the Multiple Stack Algorithm (MSA) (Chevillat and Costello, Jr., 1977) as the underlying sequential decoding algorithm in BHD, which makes possible an iterative version of BHD.
Chromatin accessibility prediction via a hybrid deep convolutional neural network.
Liu, Qiao; Xia, Fei; Yin, Qijin; Jiang, Rui
2018-03-01
A majority of known genetic variants associated with human-inherited diseases lie in non-coding regions that lack adequate interpretation, making it indispensable to systematically discover functional sites at the whole genome level and precisely decipher their implications in a comprehensive manner. Although computational approaches have been complementing high-throughput biological experiments towards the annotation of the human genome, it still remains a big challenge to accurately annotate regulatory elements in the context of a specific cell type via automatic learning of the DNA sequence code from large-scale sequencing data. Indeed, the development of an accurate and interpretable model to learn the DNA sequence signature and further enable the identification of causative genetic variants has become essential in both genomic and genetic studies. We proposed Deopen, a hybrid framework mainly based on a deep convolutional neural network, to automatically learn the regulatory code of DNA sequences and predict chromatin accessibility. In a series of comparisons with existing methods, we show the superior performance of our model in not only the classification of accessible regions against background sequences sampled at random, but also the regression of DNase-seq signals. Besides, we further visualize the convolutional kernels and show the match of identified sequence signatures and known motifs. We finally demonstrate the sensitivity of our model in finding causative noncoding variants in the analysis of a breast cancer dataset. We expect to see wide applications of Deopen with either public or in-house chromatin accessibility data in the annotation of the human genome and the identification of non-coding variants associated with diseases. Deopen is freely available at https://github.com/kimmo1019/Deopen.
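The intuition behind the first convolutional layer of such models, scanning a learned motif filter along a one-hot-encoded DNA sequence, can be sketched as follows (NumPy; illustrative only, not Deopen's actual architecture):

    import numpy as np

    def one_hot(seq):
        idx = {'A': 0, 'C': 1, 'G': 2, 'T': 3}
        x = np.zeros((4, len(seq)))
        for i, base in enumerate(seq):
            x[idx[base], i] = 1.0
        return x

    def motif_scan(x, w):
        # cross-correlate a 4 x k motif filter along the sequence
        k = w.shape[1]
        return np.array([np.sum(x[:, i:i+k] * w)
                         for i in range(x.shape[1] - k + 1)])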
NASA Technical Reports Server (NTRS)
1981-01-01
A hardware integrated convolutional coding/symbol interleaving and integrated symbol deinterleaving/Viterbi decoding simulation system is described. Validation on this system of the performance of the TDRSS S-band return link with BPSK modulation, operating in a pulsed RFI environment, is included. The system consists of three components: the Fast Linkabit Error Rate Tester (FLERT), the Transition Probability Generator (TPG), and a modified LV7017B which includes rate 1/3 capability as well as a periodic interleaver/deinterleaver. Operating and maintenance manuals for each of these units are included.
Performance of DPSK with convolutional encoding on time-varying fading channels
NASA Technical Reports Server (NTRS)
Mui, S. Y.; Modestino, J. W.
1977-01-01
The bit error probability performance of a differentially-coherent phase-shift keyed (DPSK) modem with convolutional encoding and Viterbi decoding on time-varying fading channels is examined. Both the Rician and the lognormal channels are considered. Bit error probability upper bounds on fully-interleaved (zero-memory) fading channels are derived and substantiated by computer simulation. It is shown that the resulting coded system performance is a relatively insensitive function of the choice of channel model provided that the channel parameters are related according to the correspondence developed as part of this paper. Finally, a comparison of DPSK with a number of other modulation strategies is provided.
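The full interleaving referred to above is classically realized with a block interleaver: symbols are written into a matrix by rows and read out by columns, dispersing fade bursts across codewords. A minimal sketch:

    import numpy as np

    def block_interleave(symbols, rows, cols):
        # write row-wise, read column-wise
        return np.asarray(symbols).reshape(rows, cols).T.flatten()

    def block_deinterleave(symbols, rows, cols):
        # invert: the stream was read column-wise from a rows x cols array
        return np.asarray(symbols).reshape(cols, rows).T.flatten()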
Performance analysis of the word synchronization properties of the outer code in a TDRSS decoder
NASA Technical Reports Server (NTRS)
Costello, D. J., Jr.; Lin, S.
1984-01-01
A self-synchronizing coding scheme for NASA's TDRSS satellite system is a concatenation of a (2,1,7) inner convolutional code with a (255,223) Reed-Solomon outer code. Both symbol and word synchronization are achieved without requiring that any additional symbols be transmitted. An important parameter which determines the performance of the word sync procedure is the ratio of the decoding failure probability to the undetected error probability. Ideally, the former should be as small as possible compared to the latter when the error correcting capability of the code is exceeded. A computer simulation of a (255,223) Reed-Solomon code was carried out. Results for decoding failure probability and for undetected error probability are tabulated and compared.
Constructions for finite-state codes
NASA Technical Reports Server (NTRS)
Pollara, F.; Mceliece, R. J.; Abdel-Ghaffar, K.
1987-01-01
A class of codes called finite-state (FS) codes is defined and investigated. These codes, which generalize both block and convolutional codes, are defined by their encoders, which are finite-state machines with parallel inputs and outputs. A family of upper bounds on the free distance of a given FS code is derived from known upper bounds on the minimum distance of block codes. A general construction for FS codes is then given, based on the idea of partitioning a given linear block code into cosets of one of its subcodes, and it is shown that in many cases the FS codes constructed in this way have a free distance which is as large as possible. These codes are found without the need for lengthy computer searches, and have potential applications for future deep-space coding systems. The issue of catastrophic error propagation (CEP) for FS codes is also investigated.
Chang, Hang; Han, Ju; Zhong, Cheng; Snijders, Antoine M.; Mao, Jian-Hua
2017-01-01
The capabilities of (I) learning transferable knowledge across domains; and (II) fine-tuning the pre-learned base knowledge towards tasks with considerably smaller data scale are extremely important. Many of the existing transfer learning techniques are supervised approaches, among which deep learning has the demonstrated power of learning domain transferable knowledge with large scale networks trained on massive amounts of labeled data. However, in many biomedical tasks, both the data and the corresponding label can be very limited, where the unsupervised transfer learning capability is urgently needed. In this paper, we proposed a novel multi-scale convolutional sparse coding (MSCSC) method, that (I) automatically learns filter banks at different scales in a joint fashion with enforced scale-specificity of learned patterns; and (II) provides an unsupervised solution for learning transferable base knowledge and fine-tuning it towards target tasks. Extensive experimental evaluation of MSCSC demonstrates the effectiveness of the proposed MSCSC in both regular and transfer learning tasks in various biomedical domains. PMID:28129148
Modulation/demodulation techniques for satellite communications. Part 1: Background
NASA Technical Reports Server (NTRS)
Omura, J. K.; Simon, M. K.
1981-01-01
Basic characteristics of digital data transmission systems described include the physical communication links, the notion of bandwidth, FCC regulations, and performance measurements such as bit rates, bit error probabilities, throughputs, and delays. The error probability performance and spectral characteristics of various modulation/demodulation techniques commonly used or proposed for use in radio and satellite communication links are summarized. Forward error correction with block or convolutional codes is also discussed along with the important coding parameter, channel cutoff rate.
Multiple component codes based generalized LDPC codes for high-speed optical transport.
Djordjevic, Ivan B; Wang, Ting
2014-07-14
A class of generalized low-density parity-check (GLDPC) codes suitable for optical communications is proposed, which consists of multiple local codes. It is shown that Hamming, BCH, and Reed-Muller codes can be used as local codes, and that the maximum a posteriori probability (MAP) decoding of these local codes by the Ashikhmin-Lytsin algorithm is feasible in terms of complexity and performance. We demonstrate that record coding gains can be obtained from properly designed GLDPC codes, derived from multiple component codes. We then show that several recently proposed classes of LDPC codes, such as convolutional and spatially-coupled codes, can be described using the concept of GLDPC coding, which indicates that GLDPC coding can be used as a unified platform for advanced FEC enabling ultra-high-speed optical transport. The proposed class of GLDPC codes is also suitable for code-rate adaptation, to adjust the error correction strength depending on the optical channel conditions.
Image statistics decoding for convolutional codes
NASA Technical Reports Server (NTRS)
Pitt, G. H., III; Swanson, L.; Yuen, J. H.
1987-01-01
It is a fact that adjacent pixels in a Voyager image are very similar in grey level. This fact can be used in conjunction with the Maximum-Likelihood Convolutional Decoder (MCD) to decrease the error rate when decoding a picture from Voyager. Implementing this idea would require no changes in the Voyager spacecraft and could be used as a backup to the current system without too much expenditure, so its feasibility and the possible gains for Voyager were investigated. Simulations have shown that the gain could be as much as 2 dB at certain error rates, and experiments with real data inspired new ideas on ways to get the most information possible out of the received symbol stream.
NASA Technical Reports Server (NTRS)
Rao, T. R. N.; Seetharaman, G.; Feng, G. L.
1996-01-01
With the development of new advanced instruments for remote sensing applications, sensor data will be generated at a rate that not only requires increased onboard processing and storage capability, but imposes demands on the space-to-ground communication link and ground data management-communication system. Data compression and error control codes provide viable means to alleviate these demands. Two types of data compression have been studied by many researchers in the area of information theory: a lossless technique that guarantees full reconstruction of the data, and a lossy technique which generally gives a higher data compaction ratio but incurs some distortion in the reconstructed data. To satisfy the many science disciplines which NASA supports, lossless data compression becomes a primary focus for the technology development. While transmitting the data obtained by any lossless data compression, it is very important to use some error-control code. For a long time, convolutional codes have been widely used in satellite telecommunications. To more efficiently transmit the data obtained by the Rice algorithm, it is required to meet the a posteriori probability (APP) for each decoded bit. A relevant algorithm for this purpose has been proposed which minimizes the bit error probability in decoding linear block and convolutional codes and meets the APP for each decoded bit. However, recent results on iterative decoding of 'Turbo codes' turn conventional wisdom on its head and suggest fundamentally new techniques. During the past several months of this research, the following approaches have been developed: (1) a new lossless data compression algorithm, which is much better than the extended Rice algorithm for various types of sensor data; (2) a new approach to determine the generalized Hamming weights of the algebraic-geometric codes defined by a large class of curves in high-dimensional spaces; (3) some efficient improved geometric Goppa codes for disk memory systems and high-speed mass memory systems; and (4) a tree-based approach for data compression using dynamic programming.
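For context, the Rice algorithm referred to above is built on Golomb-Rice codes, which split a nonnegative integer into a unary quotient and a k-bit remainder. A minimal encoder/decoder pair (illustrative, k >= 1; the adaptive parameter selection of the full Rice algorithm is omitted):

    def rice_encode(n, k):
        # unary-coded quotient, then k-bit binary remainder
        q, r = n >> k, n & ((1 << k) - 1)
        return '1' * q + '0' + format(r, '0{}b'.format(k))

    def rice_decode(bits, k):
        q = bits.index('0')            # leading ones give the quotient
        return (q << k) | int(bits[q + 1:q + 1 + k], 2)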
A Synchronization Algorithm and Implementation for High-Speed Block Codes Applications. Part 4
NASA Technical Reports Server (NTRS)
Lin, Shu; Zhang, Yu; Nakamura, Eric B.; Uehara, Gregory T.
1998-01-01
Block codes have trellis structures and decoders amenable to high speed CMOS VLSI implementation. For a given CMOS technology, these structures enable operating speeds higher than those achievable using convolutional codes for only modest reductions in coding gain. As a result, block codes have tremendous potential for satellite trunk and other future high-speed communication applications. This paper describes a new approach for implementation of the synchronization function for block codes. The approach utilizes the output of the Viterbi decoder and therefore employs the strength of the decoder. Its operation requires no knowledge of the signal-to-noise ratio of the received signal, has a simple implementation, adds no overhead to the transmitted data, and has been shown to be effective in simulation for received SNR greater than 2 dB.
The proposed coding standard at GSFC
NASA Technical Reports Server (NTRS)
Morakis, J. C.; Helgert, H. J.
1977-01-01
As part of the continuing effort to introduce standardization of spacecraft and ground equipment in satellite systems, NASA's Goddard Space Flight Center and other NASA facilities have supported the development of a set of standards for the use of error control coding in telemetry subsystems. These standards are intended to ensure compatibility between spacecraft and ground encoding equipment, while allowing sufficient flexibility to meet all anticipated mission requirements. The standards which have been developed to date cover the application of block codes in error detection and error correction modes, as well as short and long constraint length convolutional codes decoded via the Viterbi and sequential decoding algorithms, respectively. Included are detailed specifications of the codes, and their implementation. Current effort is directed toward the development of standards covering channels with burst noise characteristics, channels with feedback, and code concatenation.
Establishing Malware Attribution and Binary Provenance Using Multicompilation Techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramshaw, M. J.
2017-07-28
Malware is a serious problem for computer systems and costs businesses and customers billions of dollars a year in addition to compromising their private information. Detecting malware is particularly difficult because malware source code can be compiled in many different ways and generate many different digital signatures, which causes problems for most anti-malware programs that rely on static signature detection. Our project uses a convolutional neural network to identify malware programs, but these require large amounts of data to be effective. Towards that end, we gather thousands of source code files from publicly available programming contest sites and compile them with several different compilers and flags. Building upon current research, we then transform these binary files into image representations and use them to train a long-term recurrent convolutional neural network that will eventually be used to identify how a malware binary was compiled. This information will include the compiler, version of the compiler and the options used in compilation, information which can be critical in determining where a malware program came from and even who authored it.
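One common form of the binary-to-image transform used in this line of work reshapes the raw bytes of an executable into a grayscale image; a minimal sketch (NumPy; the width is a free choice, not the project's setting):

    import numpy as np

    def binary_to_image(path, width=256):
        # read raw bytes and fold them into a width-pixel-wide grayscale image
        data = np.fromfile(path, dtype=np.uint8)
        rows = len(data) // width
        return data[:rows * width].reshape(rows, width)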
Chen, Shuo; Luo, Chenggao; Wang, Hongqiang; Deng, Bin; Cheng, Yongqiang; Zhuang, Zhaowen
2018-04-26
As a promising radar imaging technique, terahertz coded-aperture imaging (TCAI) can achieve high-resolution, forward-looking, and staring imaging by producing spatiotemporal independent signals with coded apertures. However, there are still two problems in three-dimensional (3D) TCAI. Firstly, the large-scale reference-signal matrix based on meshing the 3D imaging area creates a heavy computational burden, thus leading to unsatisfactory efficiency. Secondly, it is difficult to resolve the target under low signal-to-noise ratio (SNR). In this paper, we propose a 3D imaging method based on matched filtering (MF) and convolutional neural network (CNN), which can reduce the computational burden and achieve high-resolution imaging for low SNR targets. In terms of the frequency-hopping (FH) signal, the original echo is processed with MF. By extracting the processed echo in different spike pulses separately, targets in different imaging planes are reconstructed simultaneously to decompose the global computational complexity, and then are synthesized together to reconstruct the 3D target. Based on the conventional TCAI model, we deduce and build a new TCAI model based on MF. Furthermore, the convolutional neural network (CNN) is designed to teach the MF-TCAI how to reconstruct the low SNR target better. The experimental results demonstrate that the MF-TCAI achieves impressive performance on imaging ability and efficiency under low SNR. Moreover, the MF-TCAI has learned to better resolve the low-SNR 3D target with the help of CNN. In summary, the proposed 3D TCAI can achieve: (1) low-SNR high-resolution imaging by using MF; (2) efficient 3D imaging by downsizing the large-scale reference-signal matrix; and (3) intelligent imaging with CNN. Therefore, the TCAI based on MF and CNN has great potential in applications such as security screening, nondestructive detection, medical diagnosis, etc.
Foissner, Ilse; Sommer, Aniela; Hoeftberger, Margit
2015-07-01
The characean green alga Chara australis forms complex plasma membrane convolutions called charasomes when exposed to light. Charasomes are involved in local acidification of the surrounding medium which facilitates carbon uptake required for photosynthesis. They have hitherto been only described in the internodal cells and in close contact with the stationary chloroplasts. Here, we show that charasomes are not only present in the internodal cells of the main axis, side branches, and branchlets but that the plasma membranes of chloroplast-containing nodal cells, protonemata, and rhizoids are also able to invaginate into complex domains. Removal of chloroplasts by local irradiation with intense light revealed that charasomes can develop at chloroplast-free "windows" and that the resulting pH banding pattern is independent of chloroplast or window position. Charasomes were not detected along cell walls containing functional plasmodesmata. However, charasomes formed next to a smooth wound wall which was deposited onto the plasmodesmata-containing wall when the neighboring cell was damaged. In contrast, charasomes were rarely found at uneven, bulged wound walls which protrude into the streaming endoplasm and which were induced by ligation or puncturing. The results of this study show that charasome formation, although dependent on photosynthesis, does not require intimate contact with chloroplasts. Our data suggest further that the presence of plasmodesmata inhibits charasome formation and/or that exposure to the outer medium is a prerequisite for charasome formation. Finally, we hypothesize that the absence of charasomes at bulged wound walls is due to the disturbance of uniform laminar mass streaming.
NASA Technical Reports Server (NTRS)
Lin, Shu; Rhee, Dojun; Rajpal, Sandeep
1993-01-01
This report presents a low-complexity and high performance concatenated coding scheme for high-speed satellite communications. In this proposed scheme, the NASA Standard Reed-Solomon (RS) code over GF(2^8) is used as the outer code and the second-order Reed-Muller (RM) code of Hamming distance 8 is used as the inner code. The RM inner code has a very simple trellis structure and is decoded with the soft-decision Viterbi decoding algorithm. It is shown that the proposed concatenated coding scheme achieves an error performance which is comparable to that of the NASA TDRS concatenated coding scheme in which the NASA Standard rate-1/2 convolutional code of constraint length 7 and d_free = 10 is used as the inner code. However, the proposed RM inner code has much smaller decoding complexity, less decoding delay, and much higher decoding speed. Consequently, the proposed concatenated coding scheme is suitable for reliable high-speed satellite communications, and it may be considered as an alternate coding scheme for the NASA TDRS system.
NASA Astrophysics Data System (ADS)
Cui, Tie Jun; Wu, Rui Yuan; Wu, Wei; Shi, Chuan Bo; Li, Yun Bo
2017-10-01
We propose fast and accurate designs to large-scale and low-profile transmission-type anisotropic coding metasurfaces with multiple functions in the millimeter-wave frequencies based on the antenna-array method. The numerical simulation of an anisotropic coding metasurface with the size of 30λ × 30λ by the proposed method takes only 20 min, which however cannot be realized by commercial software due to huge memory usage in personal computers. To inspect the performance of coding metasurfaces in the millimeter-wave band, the working frequency is chosen as 60 GHz. Based on the convolution operations and holographic theory, the proposed multifunctional anisotropic coding metasurface exhibits different effects excited by y-polarized and x-polarized incidences. This study extends the frequency range of coding metasurfaces, filling the gap between microwave and terahertz bands, and implying promising applications in millimeter-wave communication and imaging.
Decoder synchronization for deep space missions
NASA Technical Reports Server (NTRS)
Statman, J. I.; Cheung, K.-M.; Chauvin, T. H.; Rabkin, J.; Belongie, M. L.
1994-01-01
The Consultative Committee for Space Data Standards (CCSDS) recommends that space communication links employ a concatenated, error-correcting, channel-coding system in which the inner code is a convolutional (7,1/2) code and the outer code is a (255,223) Reed-Solomon code. The traditional implementation is to perform the node synchronization for the Viterbi decoder and the frame synchronization for the Reed-Solomon decoder as separate, sequential operations. This article discusses a unified synchronization technique that is required for deep space missions that have data rates and signal-to-noise ratios (SNR's) that are extremely low. This technique combines frame synchronization in the bit and symbol domains and traditional accumulated-metric growth techniques to establish a joint frame and node synchronization. A variation on this technique is used for the Galileo spacecraft on its Jupiter-bound mission.
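The classical ingredient underlying frame synchronization is correlation of the received stream against the attached sync marker; a toy sketch follows (it illustrates marker correlation only, not the article's joint node/frame technique based on accumulated-metric growth):

    import numpy as np

    def find_frame_start(bits, marker):
        # map {0,1} -> {-1,+1} and slide the marker; the peak marks the frame start
        b = 2 * np.asarray(bits) - 1
        m = 2 * np.asarray(marker) - 1
        return int(np.argmax(np.correlate(b, m, mode='valid')))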
NASA Technical Reports Server (NTRS)
Rice, R. F.; Hilbert, E. E. (Inventor)
1976-01-01
A space communication system incorporating a concatenated Reed Solomon Viterbi coding channel is discussed for transmitting compressed and uncompressed data from a spacecraft to a data processing center on Earth. Imaging (and other) data are first compressed into source blocks which are then coded by a Reed Solomon coder and interleaver, followed by a convolutional encoder. The received data is first decoded by a Viterbi decoder, followed by a Reed Solomon decoder and deinterleaver. The output of the latter is then decompressed, based on the compression criteria used in compressing the data in the spacecraft. The decompressed data is processed to reconstruct an approximation of the original data-producing condition or images.
A lncRNA Perspective into (Re)Building the Heart.
Frank, Stefan; Aguirre, Aitor; Hescheler, Juergen; Kurian, Leo
2016-01-01
Our conception of the human genome, long focused on the 2% that codes for proteins, has profoundly changed since its first draft assembly in 2001. Since then, an unanticipatedly expansive functionality and convolution has been attributed to the majority of the genome that is transcribed in a cell-type/context-specific manner into transcripts with no apparent protein coding ability. While the majority of these transcripts, currently annotated as long non-coding RNAs (lncRNAs), are functionally uncharacterized, their prominent role in embryonic development and tissue homeostasis, especially in the context of the heart, is emerging. In this review, we summarize and discuss the latest advances in understanding the relevance of lncRNAs in (re)building the heart.
Boyd, Andrew D; Li, Jianrong ‘John’; Burton, Mike D; Jonen, Michael; Gardeux, Vincent; Achour, Ikbel; Luo, Roger Q; Zenku, Ilir; Bahroos, Neil; Brown, Stephen B; Vanden Hoek, Terry; Lussier, Yves A
2013-01-01
Objective: Applying the science of networks to quantify the discriminatory impact of the ICD-9-CM to ICD-10-CM transition between clinical specialties. Materials and Methods: Datasets were the Center for Medicaid and Medicare Services ICD-9-CM to ICD-10-CM mapping files, general equivalence mappings, and statewide Medicaid emergency department billing. Diagnoses were represented as nodes and their mappings as directional relationships. The complex network was synthesized as an aggregate of simpler motifs and tabulation per clinical specialty. Results: We identified five mapping motif categories: identity, class-to-subclass, subclass-to-class, convoluted, and no mapping. Convoluted mappings indicate that multiple ICD-9-CM and ICD-10-CM codes share complex, entangled, and non-reciprocal mappings. The proportions of convoluted diagnosis mappings (36% overall) range from 5% (hematology) to 60% (obstetrics and injuries). In a case study of 24,008 patient visits in 217 emergency departments, 27% of the costs are associated with convoluted diagnoses, with 'abdominal pain' and 'gastroenteritis' accounting for approximately 3.5%. Discussion: Previous qualitative studies report that administrators and clinicians are likely to be challenged in understanding and managing their practice because of the ICD-10-CM transition. We substantiate the complexity of this transition with a thorough quantitative summary per clinical specialty, a case study, and the tools to apply this methodology easily to any clinical practice in the form of a web portal and analytic tables. Conclusions: Post-transition, successful management of frequent diseases with convoluted mapping network patterns is critical. The http://lussierlab.org/transition-to-ICD10CM web portal provides insight in linking onerous diseases to the ICD-10 transition. PMID:23645552
Review of image processing fundamentals
NASA Technical Reports Server (NTRS)
Billingsley, F. C.
1985-01-01
Image processing through convolution, transform coding, spatial frequency alterations, sampling, and interpolation are considered. It is postulated that convolution in one domain (real or frequency) is equivalent to multiplication in the other (frequency or real), and that the relative amplitudes of the Fourier components must be retained to reproduce any waveshape. It is suggested that all digital systems may be considered equivalent, with a frequency content approximately at the Nyquist limit, and with a Gaussian frequency response. An optimized cubic version of the interpolation continuum image is derived as a set of cubic splines. Pixel replication has been employed to enlarge the visible area of digital samples; however, suitable elimination of the extraneous high frequencies involved in the visible edges, by defocusing, is necessary to allow the underlying object represented by the data values to be seen.
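The convolution-multiplication duality postulated above can be verified numerically in a few lines (NumPy; circular convolution, matching the DFT's periodic convention):

    import numpy as np

    n = 64
    x, h = np.random.rand(n), np.random.rand(n)
    via_fft = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))
    direct = np.array([sum(x[k] * h[(m - k) % n] for k in range(n))
                       for m in range(n)])
    assert np.allclose(via_fft, direct)   # multiply in frequency = convolve in time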
Performance of convolutionally encoded noncoherent MFSK modem in fading channels
NASA Technical Reports Server (NTRS)
Modestino, J. W.; Mui, S. Y.
1976-01-01
The performance of a convolutionally encoded noncoherent multiple-frequency shift-keyed (MFSK) modem utilizing Viterbi maximum-likelihood decoding and operating on a fading channel is described. Both the lognormal and classical Rician fading channels are considered for both slow and time-varying channel conditions. Primary interest is in the resulting bit error rate as a function of the ratio between the energy per transmitted information bit and noise spectral density, parameterized by both the fading channel and code parameters. Fairly general upper bounds on bit error probability are provided and compared with simulation results in the two extremes of zero and infinite channel memory. The efficacy of simple block interleaving in combatting channel memory effects is thoroughly explored. Both quantized and unquantized receiver outputs are considered.
RIO: a new computational framework for accurate initial data of binary black holes
NASA Astrophysics Data System (ADS)
Barreto, W.; Clemente, P. C. M.; de Oliveira, H. P.; Rodriguez-Mueller, B.
2018-06-01
We present a computational framework (Rio) in the ADM 3+1 approach for numerical relativity. This work enables us to carry out high resolution calculations for initial data of two arbitrary black holes. We use the transverse conformal treatment, the Bowen-York and the puncture methods. For the numerical solution of the Hamiltonian constraint we use the domain decomposition and the spectral decomposition of Galerkin-Collocation. The nonlinear numerical code solves the set of equations for the spectral modes using the standard Newton-Raphson method, LU decomposition and Gaussian quadratures. We show the convergence of the Rio code. This code allows for easy deployment of large calculations. We show how the spin of one of the black holes is manifest in the conformal factor.
Valeri, Beatriz Oliveira; Gaspardo, Cláudia Maria; Martinez, Francisco Eulógio; Linhares, Maria Beatriz Martins
2018-01-03
Preterm infants (PI) requiring the Neonatal Intensive Care Unit (NICU) are exposed to early repetitive pain/distress. Little is known about how pain relief strategies interact with infants' clinical health status, such as severity of illness, in shaping pain responses. This study aimed to examine the main and interactive effects of routine sucrose intervention and neonatal clinical risk (NCR) on biobehavioral pain reactivity-recovery in PI during painful blood collection procedures. Very-low-birthweight PI (n=104) were assigned to Low and High Clinical Risk Groups, according to the Clinical Risk Index for Babies. The Sucrose Group (SG; n=52) received sucrose solution (25%; 0.5 mL/kg) two minutes before the procedures and the Control Group (CG) received standard care. Biobehavioral pain reactivity-recovery was assessed according to the Neonatal Facial Coding System, sleep-wake state scale, crying time, and heart rate (HR) at five phases (Baseline, Antisepsis, Puncture, Recovery-Dressing and Recovery-Resting). Repeated-measures ANOVA with mixed design was performed considering pain assessment phases, intervention group, and NCR. Independent of NCR, sucrose had a main effect in decreasing neonates' facial activity pain responses and crying time during Puncture and Recovery-Resting. Independent of NCR level or routine sucrose intervention, all neonates displayed an activated state in Puncture and decreased biobehavioral responses in the Recovery-Resting phase. Although no sucrose or NCR effects were observed on physiological reactivity, all neonates exhibited physiological recovery 10 minutes after puncture, reaching the same HR patterns as at Baseline. Independent of NCR level, sucrose intervention for pain relief during acute painful procedures was effective in reducing pain intensity and increasing biobehavioral regulation.
USDA-ARS?s Scientific Manuscript database
KCC3 and KCC1 are potassium chloride transporters with partially overlapping function, and KCC3 knockout mice exhibit hypertension. Two KCC3 isoforms differ by alternate promoters and first coding exons: KCC3a is widely expressed, and KCC3b is highly expressed in kidney proximal convoluted tubule. W...
Method for Viterbi decoding of large constraint length convolutional codes
NASA Technical Reports Server (NTRS)
Hsu, In-Shek (Inventor); Truong, Trieu-Kie (Inventor); Reed, Irving S. (Inventor); Jing, Sun (Inventor)
1988-01-01
A new method of Viterbi decoding of convolutional codes lends itself to a pipeline VLSI architecture using a single sequential processor to compute the path metrics in the Viterbi trellis. An array method is used to store the path information for NK intervals, where N is a number and K is the constraint length. The selected path at the end of each NK interval is then selected from the last entry in the array. A trace-back method is used to return to the beginning of the selected path, i.e., to the first time unit of the interval NK, to read out the stored branch metrics of the selected path which correspond to the message bits. The decoding decision made in this way is no longer maximum likelihood, but can be almost as good, provided that the constraint length K is not too small. The advantage is that for a long message, it is not necessary to provide a large memory to store the trellis derived information until the end of the message to select the path that is to be decoded; the selection is made at the end of every NK time units, thus decoding a long message in successive blocks.
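To make the survivor-path and trace-back ideas concrete, here is a minimal hard-decision Viterbi decoder for a small rate-1/2, K=3 code (generators 7 and 5 octal). It traces back once at the end of the message rather than every NK interval, so it illustrates the trellis mechanics, not the patented blockwise method:

    G = (0b111, 0b101)                       # rate-1/2, K=3 generators (7, 5 octal)

    def parity(v):
        return bin(v).count('1') & 1

    def conv_encode(bits):
        state, out = 0, []
        for b in bits:
            reg = (b << 2) | state           # shift register: [newest, s1, s0]
            out += [parity(reg & g) for g in G]
            state = reg >> 1
        return out

    def viterbi_decode(rx, n_bits):
        INF = 10**9
        metric, hist = [0, INF, INF, INF], []    # start in the all-zero state
        for t in range(n_bits):
            r0, r1 = rx[2*t], rx[2*t + 1]
            new, back = [INF]*4, [None]*4
            for s in range(4):
                if metric[s] >= INF:
                    continue
                for b in (0, 1):
                    reg = (b << 2) | s
                    cost = (parity(reg & G[0]) != r0) + (parity(reg & G[1]) != r1)
                    ns = reg >> 1
                    if metric[s] + cost < new[ns]:
                        new[ns], back[ns] = metric[s] + cost, (s, b)
            metric = new
            hist.append(back)
        s, bits = metric.index(min(metric)), []
        for back in reversed(hist):          # trace back along survivor pointers
            s, b = back[s]
            bits.append(b)
        return bits[::-1]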
Performance of the ICAO standard core service modulation and coding techniques
NASA Technical Reports Server (NTRS)
Lodge, John; Moher, Michael
1988-01-01
Aviation binary phase shift keying (A-BPSK) is described and simulated performance results are given that demonstrate robust performance in the presence of hard-limiting amplifiers. The performance of coherently detected A-BPSK with rate 1/2 convolutional coding is given. The performance loss due to the Rician fading was shown to be less than 1 dB over the simulated range. A partially coherent detection scheme that does not require carrier phase recovery is described. This scheme exhibits similar performance to coherent detection at high bit error rates, while it is superior at lower bit error rates.
Accumulate repeat accumulate codes
NASA Technical Reports Server (NTRS)
Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung
2004-01-01
In this paper we propose an innovative channel coding scheme called 'Accumulate Repeat Accumulate codes' (ARA). This class of codes can be viewed as serial turbo-like codes, or as a subclass of Low Density Parity Check (LDPC) codes, thus belief propagation can be used for iterative decoding of ARA codes on a graph. The structure of the encoder for this class can be viewed as a precoded Repeat Accumulate (RA) code or as a precoded Irregular Repeat Accumulate (IRA) code, where simply an accumulator is chosen as the precoder. Thus ARA codes have a simple and very fast encoder structure when they represent LDPC codes. Based on density evolution for LDPC codes, through some examples for ARA codes we show that for maximum variable node degree 5 a minimum bit SNR as low as 0.08 dB from channel capacity for rate 1/2 can be achieved as the block size goes to infinity. Thus, based on a fixed low maximum variable node degree, its threshold outperforms not only the RA and IRA codes but also the best known LDPC codes with the same maximum node degree. Furthermore, by puncturing the accumulators, any desired high-rate codes close to code rate 1 can be obtained, with thresholds that stay uniformly close to the channel capacity thresholds. Iterative decoding simulation results are provided. The ARA codes also have a projected graph or protograph representation that allows for high-speed decoder implementation.
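The precursor structure, a plain repeat-accumulate encoder, is three steps: repeat, permute, and accumulate (the accumulator is a running XOR). A sketch with an illustrative repetition factor and random interleaver, not the paper's construction:

    import numpy as np

    def ra_encode(bits, q=3, seed=0):
        rep = np.repeat(bits, q)                       # repeat each bit q times
        perm = np.random.default_rng(seed).permutation(len(rep))
        return np.cumsum(rep[perm]) % 2                # accumulator: running XOR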
The digestive system of the stony coral Stylophora pistillata.
Raz-Bahat, M; Douek, J; Moiseeva, E; Peters, E C; Rinkevich, B
2017-05-01
Because hermatypic species use symbiotic algal photosynthesis, most of the literature in this field focuses on this autotrophic mode, and very little research has studied the morphology of the coral's digestive system or the digestion process of particulate food. Using histology and histochemistry, our research reveals that Stylophora pistillata's digestive system is concentrated at the coral's peristome, actinopharynx and mesenterial filaments (MF). We used in-situ hybridization (ISH) of the RNA transcript of the gene that codes for the S. pistillata digestive enzyme, chymotrypsinogen, to shed light on the functionality of the digestive system. Both the histochemistry and the ISH pointed to the MF being specialized digestive organs, equipped with large numbers of acidophilic and basophilic granular gland cells, as well as acidophilic non-granular gland cells, some of which produce chymotrypsinogen. We identified two types of MF: short, trilobed MF and unilobed, long and convoluted MF. Each S. pistillata polyp harbors two long convoluted MF and 10 short MF. While the short MF have neither secreting nor stinging cells, each of the convoluted MF displays gradual cytological changes along its longitudinal axis, alternating between stinging and secreting cells and three distinctive types of secretory cells. These observations indicate the important digestive role of the long convoluted MF. They also indicate the existence of novel feeding compartments in the gastric cavity of the polyp, primarily in the nutritionally active peristome, in the actinopharynx and in three regions of the MF that differ from each other in their cellular components, general morphology and chymotrypsinogen excretion.
Cutting performance orthogonal test of single plane puncture biopsy needle based on puncture force
NASA Astrophysics Data System (ADS)
Xu, Yingqiang; Zhang, Qinhe; Liu, Guowei
2017-04-01
Needle biopsy is a method of extracting cells from the patient's body with a needle for tissue pathological examination. Many factors affect the cutting process of soft tissue, including the geometry of the biopsy needle, the mechanical properties of the soft tissue, the parameters of the puncture process, and the interactions between them. This paper presents an orthogonal experiment on the main cutting parameters of a single plane puncture biopsy needle, and obtains the cutting force curve of the needle by studying the influence of its inclination angle, diameter, and velocity on the puncture force. A stage analysis of the cutting process of the biopsy needle puncture was made to determine the main factors influencing puncture force during cutting, which provides theoretical support for the design of new types of puncture biopsy needles and for the operation of puncture biopsy.
NASA Astrophysics Data System (ADS)
de Schryver, C.; Weithoffer, S.; Wasenmüller, U.; Wehn, N.
2012-09-01
Channel coding is a standard technique in all wireless communication systems. In addition to the typically employed methods like convolutional coding, turbo coding or low density parity check (LDPC) coding, algebraic codes are used in many cases. For example, outer BCH coding is applied in the DVB-S2 standard for satellite TV broadcasting. A key operation for BCH and the related Reed-Solomon codes is multiplication in finite fields (Galois fields), where extension fields of prime fields are used. Many architectures for multiplication in finite fields have been published over the last decades. This paper examines in detail four different multiplier architectures that offer the potential for very high throughput. We investigate the implementation performance of these multipliers on FPGA technology in the context of channel coding. We study the efficiency of the multipliers with respect to area, frequency and throughput, as well as configurability and scalability. The implementation data of the fully verified circuits are provided for a Xilinx Virtex-4 device after place and route.
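For reference, the core operation these architectures accelerate can be sketched in a few lines of software; the bit-serial shift-and-reduce recurrence below is what high-throughput hardware flattens into combinational logic. The primitive polynomial 0x11D is an assumption (the one commonly used for GF(2^8) Reed-Solomon codecs), not necessarily the one studied in the paper.

```python
def gf256_mul(a: int, b: int, poly: int = 0x11D) -> int:
    """Multiply two elements of GF(2^8) = GF(2)[x]/(poly): a carry-less
    shift-and-add loop interleaved with reduction by the field polynomial.
    High-throughput hardware unrolls these 8 iterations into one
    combinational array of AND/XOR gates."""
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a              # add (XOR) the current shifted copy of a
        b >>= 1
        a <<= 1
        if a & 0x100:           # degree reached 8: reduce modulo poly
            a ^= poly
    return r

assert gf256_mul(0x02, 0x80) == 0x1D    # x * x^7 = x^8 = x^4 + x^3 + x^2 + 1
assert gf256_mul(0x57, 0x01) == 0x57    # 1 is the multiplicative identity
```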
Coded aperture detector: an image sensor with sub 20-nm pixel resolution.
Miyakawa, Ryan; Mayer, Rafael; Wojdyla, Antoine; Vannier, Nicolas; Lesser, Ian; Aron-Dine, Shifrah; Naulleau, Patrick
2014-08-11
We describe the coded aperture detector, a novel image sensor based on uniformly redundant arrays (URAs) with customizable pixel size, resolution, and operating photon energy regime. In this sensor, a coded aperture is scanned laterally at the image plane of an optical system, and the transmitted intensity is measured by a photodiode. The image intensity is then digitally reconstructed using a simple convolution. We present results from a proof-of-principle optical prototype, demonstrating high-fidelity image sensing comparable to a CCD. A 20-nm half-pitch URA fabricated by the Center for X-ray Optics (CXRO) nano-fabrication laboratory is presented that is suitable for high-resolution image sensing at EUV and soft X-ray wavelengths.
Galerkin-collocation domain decomposition method for arbitrary binary black holes
NASA Astrophysics Data System (ADS)
Barreto, W.; Clemente, P. C. M.; de Oliveira, H. P.; Rodriguez-Mueller, B.
2018-05-01
We present a new computational framework for the Galerkin-collocation method for a double domain in the context of the ADM 3+1 approach in numerical relativity. This work enables us to perform high-resolution calculations for initial data sets of two arbitrary black holes. We use the Bowen-York method for binary systems and the puncture method to solve the Hamiltonian constraint. The nonlinear numerical code solves the set of equations for the spectral modes using the standard Newton-Raphson method, LU decomposition and Gaussian quadratures. We show convergence of our code for the conformal factor and the ADM mass. We also display features of the conformal factor for different masses, spins and linear momenta.
Design study of a HEAO-C spread spectrum transponder telemetry system for use with the TDRSS subnet
NASA Technical Reports Server (NTRS)
Weathers, G.
1975-01-01
The results of a design study of a spread spectrum transponder for use on the HEAO-C satellite are given. The transponder performs the functions of code turn-around for ground range and range-rate determination, ground command reception, and telemetry data transmission. The spacecraft transponder and associated communication system components will allow the HEAO-C satellite to utilize the Tracking and Data Relay Satellite System (TDRSS) subnet of the post-1978 STDN. The following areas are discussed in the report: TDRSS Subnet Description, TDRSS-HEAO-C System Configuration, Gold Code Generator, Convolutional Encoder Design and Decoder Algorithm, High Speed Sequence Generators, Statistical Evaluation of Candidate Code Sequences using Amplitude and Phase Moments, Code and Carrier Phase Lock Loops, Total Spread Spectrum Transponder System, and Reference Literature Search.
Viterbi decoding for satellite and space communication.
NASA Technical Reports Server (NTRS)
Heller, J. A.; Jacobs, I. M.
1971-01-01
Convolutional coding and Viterbi decoding, along with binary phase-shift-keyed modulation, are presented as an efficient system for reliable communication on power-limited satellite and space channels. Performance results, obtained theoretically and through computer simulation, are given for optimum short constraint length codes for a range of code constraint lengths and code rates. System efficiency is compared for hard receiver quantization and 4- and 8-level soft quantization. The effects on performance of varying certain parameters relevant to decoder complexity and cost are examined. Quantitative performance degradation due to imperfect carrier phase coherence is evaluated and compared to that of an uncoded system. As an example of decoder performance versus complexity, a recently implemented 2-Mbit/sec constraint length 7 Viterbi decoder is discussed. Finally, a comparison is made between Viterbi and sequential decoding in terms of suitability to various system requirements.
Design and Implementation of Viterbi Decoder Using VHDL
NASA Astrophysics Data System (ADS)
Thakur, Akash; Chattopadhyay, Manju K.
2018-03-01
A digital design of a Viterbi decoder for a rate-1/2 convolutional encoder with constraint length k = 3 is presented in this paper. The design is coded in VHDL, simulated, and synthesized using XILINX ISE 14.7. Synthesis results show a maximum operating frequency of 100.725 MHz for the design. The memory requirement is lower than that of the conventional method.
NASA Astrophysics Data System (ADS)
Nakamura, Yusuke; Hoshizawa, Taku
2016-09-01
Two methods for increasing the data capacity of a holographic data storage system (HDSS) were developed. The first method is called "run-length-limited (RLL) high-density recording". An RLL modulation has the same effect as enlarging the pixel pitch; namely, it optically reduces the hologram size. Accordingly, the method doubles the raw-data recording density. The second method is called "RLL turbo signal processing". The RLL turbo code consists of RLL(1,∞) trellis modulation and an optimized convolutional code. The remarkable point of the developed turbo code is that it employs the RLL modulator and demodulator as parts of the error-correction process. The turbo code improves the capability of error correction more than a conventional LDPC code, even though interpixel interference is generated. These two methods will increase the data density 1.78-fold. Moreover, by simulation and experiment, a data density of 2.4 Tbit/in.² is confirmed.
Using a Motion Sensor-Equipped Smartphone to Facilitate CT-Guided Puncture.
Hirata, Masaaki; Watanabe, Ryouhei; Koyano, Yasuhiro; Sugata, Shigenori; Takeda, Yukie; Nakamura, Seiji; Akamune, Akihisa; Tsuda, Takaharu; Mochizuki, Teruhito
2017-04-01
To demonstrate the use of "Smart Puncture," a smartphone application to assist conventional CT-guided puncture without CT fluoroscopy, and to describe the advantages of this application. A puncture guideline is displayed by entering the angle into the application. Regardless of the angle at which the device is being held, the motion sensor ensures that the guideline is displayed at the appropriate angle with respect to gravity. The angle of the smartphone's liquid crystal display (LCD) is also detected, preventing needle deflection from the CT slice image. Physicians can perform the puncture procedure by advancing the needle using the guideline while the smartphone is placed adjacent to the patient. In an experimental puncture test using a sponge as a target, the target was punctured at 30°, 50°, and 70° when the device was tilted to 0°, 15°, 30°, and 45°, respectively. The punctured target was then imaged with a CT scan, and the puncture error was measured. The mean puncture error in the plane parallel to the LCD was less than 2°, irrespective of device tilt. The mean puncture error in the sagittal plane was less than 3° with no device tilt. However, the mean puncture error tended to increase when the tilt was increased. This application can transform a smartphone into a valuable tool that is capable of objectively and accurately assisting CT-guided puncture procedures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bobrek, Miljko; Albright, Austin P
This paper presents an FPGA implementation of a Reed-Solomon decoder for use in IEEE 802.16 WiMAX systems. The decoder is based on the RS(255,239) code, and is additionally shortened and punctured according to the WiMAX specifications. A Simulink model based on the Sysgen library of Xilinx blocks was used for simulation and hardware implementation. Finally, simulation results and hardware implementation performance are presented.
Marts, Donna J.; Barker, Stacey G.; McQueen, Miles A.
1996-01-01
A portable barrier strip having retractable tire-puncture means for puncturing a vehicle tire. The tire-puncture means, such as spikes, have an armed position for puncturing a tire and a retracted position for not puncturing a tire. The strip comprises a plurality of barrier blocks having the tire-puncture means removably disposed in a shaft that is rotatably disposed in each barrier block. The shaft removably and pivotally interconnects the plurality of barrier blocks. Actuation cables cause the shaft to rotate the tire-puncture means to the armed position for puncturing a vehicle tire and to the retracted position for not puncturing the tire. Each tire-puncture means is received in a hollow-bed portion of its respective barrier block when in the retracted position. The barrier strip rests stably in its deployed position and remains substantially motionless as a tire rolls onto and over it. The strip is rolled up for retrieval, portability, and storage purposes, and extended and unrolled in its deployed position for use.
Klein, Jan Thorsten; Rassweiler, Jens; Rassweiler-Seyfried, Marie-Claire Charlotte
2018-03-29
Nephrolithiasis is one of the most common diseases in urology. According to the EAU Guidelines, percutaneous nephrolitholapaxy (PNL) is recommended when treating a kidney stone >2 cm. Nowadays PNL is performed even for smaller stones (<1 cm) using miniaturized instruments. The most challenging part of any PNL is the puncture of the planned site. PNL-novice surgeons need to practice this step in a safe environment with an ideal training model. We developed and evaluated a new, easy-to-produce, in-vitro model for training the freehand puncture of the kidney. Porcine kidneys with ureters were embedded in ballistic gel. Food coloring and a preservative agent were added. We used the standard imaging modalities of X-ray and ultrasound to validate the training model. An additional new technique, the iPad-guided puncture, was evaluated. Five novices and three experts conducted 12 punctures for each imaging technique. Puncture time, radiation dose, and number of attempts to a successful puncture were measured. Mann-Whitney U and Kruskal-Wallis tests were used for statistical analyses. The sonographically guided puncture was slightly but not significantly faster than the fluoroscopically guided puncture and the iPad-assisted puncture. Similarly, the most experienced surgeon's time for a successful puncture was slightly less than that of the residents, and the experienced surgeons needed the fewest attempts to perform a successful puncture. In terms of radiation exposure, the residents had a significant reduction in radiation exposure compared to the experienced surgeons. The newly developed ballistic gel kidney-puncture model is a good training tool for a variety of kidney puncture techniques, with good content, construct, and face validity.
Mini access guide to simplify calyceal access during percutaneous nephrolithotomy: A novel device.
Chowdhury, Puskar Shyam; Nayak, Prasant; David, Deepak; Mallick, Sujata
2017-01-01
A precise puncture of the renal collecting system is the most essential step of percutaneous nephrolithotomy (PCNL). There are many techniques describing this crucial first step in PCNL, including the bull's eye technique, triangulation technique, free-hand technique, and gradual descensus technique. We describe a novel puncture guide to assist accurate percutaneous needle placement during the bull's eye technique. The mini access guide (MAG) stabilizes the initial puncture needle by mounting it on an adjustable multidirectional carrier fixed to the patient's skin, which aids in achieving the "bull's eye" puncture. It also avoids direct fluoroscopic exposure of the urologist's hand during the puncture. Sixty consecutive patients with a solitary renal calculus were randomized to traditional hand versus MAG puncture during the bull's eye technique, and the fluoroscopy time was assessed. The median fluoroscopic screening time for puncture was 55 versus 21 s (P = 0.001) for traditional free-hand versus MAG-guided bull's eye puncture, and the median time to puncture was 80 versus 55 s (P = 0.052), respectively. Novice residents also learned the puncture technique faster with the MAG on a simulator. The MAG is a simple, portable, cheap, and novel assistant for achieving successful PCNL puncture. It would be of great help for novices establishing access during their learning phase of PCNL. It would also be an asset toward significantly decreasing the radiation dose during PCNL access.
A finite element conjugate gradient FFT method for scattering
NASA Technical Reports Server (NTRS)
Collins, Jeffery D.; Ross, Dan; Jin, J.-M.; Chatterjee, A.; Volakis, John L.
1991-01-01
Validated results are presented for the new 3D body-of-revolution finite element boundary integral code. A Fourier series expansion of the vector electric and magnetic fields is employed to reduce the dimensionality of the system, and the exact boundary condition is employed to terminate the finite element mesh. The mesh termination boundary is chosen such that it leads to convolutional boundary operators of low O(n) memory demand. Improvements of this code are discussed along with the proposed formulation for a full 3D implementation of the finite element boundary integral method in conjunction with a conjugate gradient fast Fourier transform (CGFFT) solution.
Lingafelter, Steven W; Nearns, Eugenio H
2013-01-01
We present an overview of the difficulties sometimes encountered when determining whether a published name following a binomen is available, or infrasubspecific and unavailable, following Article 45.6 of the International Code of Zoological Nomenclature (ICZN, 1999). We propose a dichotomous key that facilitates this determination as a preferable method, given the convoluted and subordinate discussion, exceptions, and qualifications laid out in ICZN (1999: 49-50). Examples and citations are provided for each case one can encounter while assessing the availability status of names following the binomen.
Distillation with Sublogarithmic Overhead.
Hastings, Matthew B; Haah, Jeongwan
2018-02-02
It has been conjectured that, for any distillation protocol for magic states for the T gate, the number of noisy input magic states required per output magic state at output error rate ε is Ω(log(1/ε)). We show that this conjecture is false. We find a family of quantum error correcting codes with parameters ⟦∑_{i=w+1}^{m} C(m,i), ∑_{i=0}^{w} C(m,i), ∑_{i=w+1}^{r+1} C(r+1,i)⟧, where C(·,·) denotes the binomial coefficient, for any integers m > 2r, r > w ≥ 0, by puncturing quantum Reed-Muller codes. When m > νr, our code admits a transversal logical gate at the νth level of the Clifford hierarchy. In a distillation protocol for magic states at level ν = 3 (the T gate), the ratio of input to output magic states is O(log^γ(1/ε)), where γ = log(n/k)/log(d) < 0.678 for some m, r, w. The smallest code in our family for which γ < 1 is on ≈2^58 qubits.
NASA Technical Reports Server (NTRS)
Dolinar, S.; Belongie, M.
1995-01-01
The Galileo low-gain antenna mission will be supported by a coding system that uses a (14,1/4) inner convolutional code concatenated with Reed-Solomon codes of four different redundancies. Decoding for this code is designed to proceed in four distinct stages of Viterbi decoding followed by Reed-Solomon decoding. In each successive stage, the Reed-Solomon decoder only tries to decode the highest redundancy codewords not yet decoded in previous stages, and the Viterbi decoder redecodes its data utilizing the known symbols from previously decoded Reed-Solomon codewords. A previous article analyzed a two-stage decoding option that was not selected by Galileo. The present article analyzes the four-stage decoding scheme and derives the near-optimum set of redundancies selected for use by Galileo. The performance improvements relative to one- and two-stage decoding systems are evaluated.
Residual Highway Convolutional Neural Networks for in-loop Filtering in HEVC.
Zhang, Yongbing; Shen, Tao; Ji, Xiangyang; Zhang, Yun; Xiong, Ruiqin; Dai, Qionghai
2018-08-01
The High Efficiency Video Coding (HEVC) standard achieves a halved bit rate at the same quality compared with AVC. However, it still cannot satisfy the demand for higher quality in real applications, especially at low bit rates. To further improve the quality of reconstructed frames while reducing the bit rate, a residual highway convolutional neural network (RHCNN) is proposed in this paper for in-loop filtering in HEVC. The RHCNN is composed of several residual highway units and convolutional layers. In the highway units, there are paths that allow information to pass unimpeded across several layers. Moreover, there is also one identity skip connection (shortcut) from the beginning to the end, followed by one small convolutional layer. Without conflicting with the deblocking filter (DF) and sample adaptive offset (SAO) filter in HEVC, the RHCNN is employed as a high-dimension filter following DF and SAO to enhance the quality of reconstructed frames. To facilitate real application, we apply the proposed method to I frames, P frames, and B frames, respectively. To obtain better performance, the entire quantization parameter (QP) range is divided into several QP bands, and a dedicated RHCNN is trained for each QP band. Furthermore, we adopt a progressive training scheme in which QP bands with lower values are trained first and their weights are used as initial weights for QP bands with higher values. Experimental results demonstrate that the proposed method not only raises the PSNR of reconstructed frames but also prominently reduces the bit rate compared with the HEVC reference software.
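A minimal PyTorch sketch of the residual-highway structure described above follows; the unit count, channel width, and single-channel input are assumptions for illustration, not the paper's configuration.

```python
import torch
import torch.nn as nn

class HighwayUnit(nn.Module):
    """One residual highway unit: a small convolutional body plus an
    identity path, so information can cross the unit unimpeded."""
    def __init__(self, ch: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))

    def forward(self, x):
        return x + self.body(x)

class RHCNN(nn.Module):
    """Stacked highway units with a global shortcut: the network learns a
    residual that is added back to the DF/SAO-filtered decoded frame."""
    def __init__(self, units: int = 4, ch: int = 64):
        super().__init__()
        self.head = nn.Conv2d(1, ch, 3, padding=1)
        self.units = nn.Sequential(*(HighwayUnit(ch) for _ in range(units)))
        self.tail = nn.Conv2d(ch, 1, 3, padding=1)  # small conv after the units

    def forward(self, frame):
        return frame + self.tail(self.units(self.head(frame)))

restored = RHCNN()(torch.randn(1, 1, 64, 64))       # one 64x64 luma block
```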
Min, Xu; Zeng, Wanwen; Chen, Ning; Chen, Ting; Jiang, Rui
2017-07-15
Experimental techniques for measuring chromatin accessibility are expensive and time consuming, appealing for the development of computational approaches to predict open chromatin regions from DNA sequences. Along this direction, existing methods fall into two classes: one based on handcrafted k-mer features and the other based on convolutional neural networks. Although both categories have shown good performance in specific applications thus far, there still lacks a comprehensive framework to integrate useful k-mer co-occurrence information with recent advances in deep learning. We fill this gap by addressing the problem of chromatin accessibility prediction with a convolutional Long Short-Term Memory (LSTM) network with k-mer embedding. We first split DNA sequences into k-mers and pre-train k-mer embedding vectors based on the co-occurrence matrix of k-mers by using an unsupervised representation learning approach. We then construct a supervised deep learning architecture comprised of an embedding layer, three convolutional layers and a Bidirectional LSTM (BLSTM) layer for feature learning and classification. We demonstrate that our method gains high-quality fixed-length features from variable-length sequences and consistently outperforms baseline methods. We show that k-mer embedding can effectively enhance model performance by exploring different embedding strategies. We also prove the efficacy of both the convolution and the BLSTM layers by comparing two variations of the network architecture. We confirm the robustness of our model to hyper-parameters by performing sensitivity analysis. We hope our method can eventually reinforce our understanding of employing deep learning in genomic studies and shed light on research regarding mechanisms of chromatin accessibility. The source code can be downloaded from https://github.com/minxueric/ismb2017_lstm. Contact: tingchen@tsinghua.edu.cn or ruijiang@tsinghua.edu.cn. Supplementary materials are available at Bioinformatics online.
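A compact sketch of this architecture family (k-mer embedding layer, three convolutional layers, then a BLSTM) might look as follows in PyTorch; all layer sizes and the 6-mer vocabulary are illustrative assumptions, not the published hyper-parameters, and the pre-trained embedding vectors would be loaded into the embedding table rather than learned from scratch.

```python
import torch
import torch.nn as nn

K, VOCAB, EMB = 6, 4 ** 6, 100          # 6-mers over {A,C,G,T}; assumed sizes

class KmerConvBLSTM(nn.Module):
    def __init__(self):
        super().__init__()
        # pre-trained k-mer vectors (from the co-occurrence matrix) would be
        # copied into this embedding table in the real pipeline
        self.emb = nn.Embedding(VOCAB, EMB)
        self.conv = nn.Sequential(
            nn.Conv1d(EMB, 128, 5, padding=2), nn.ReLU(),
            nn.Conv1d(128, 128, 5, padding=2), nn.ReLU(),
            nn.Conv1d(128, 128, 5, padding=2), nn.ReLU())
        self.blstm = nn.LSTM(128, 64, bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * 64, 1)

    def forward(self, kmer_ids):                 # (batch, seq_len) k-mer indices
        h = self.emb(kmer_ids).transpose(1, 2)   # -> (batch, EMB, seq_len)
        h = self.conv(h).transpose(1, 2)         # -> (batch, seq_len, 128)
        _, (hn, _) = self.blstm(h)               # final states, both directions
        return torch.sigmoid(self.out(torch.cat([hn[0], hn[1]], dim=1)))

probs = KmerConvBLSTM()(torch.randint(0, VOCAB, (2, 300)))   # 2 toy sequences
```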
Yang, Xiaogang; De Carlo, Francesco; Phatak, Charudatta; Gürsoy, Doğa
2017-03-01
This paper presents an algorithm to calibrate the center of rotation for X-ray tomography by using a machine learning approach, the convolutional neural network (CNN). The algorithm shows excellent accuracy in the evaluation of synthetic data with various noise ratios. It is further validated with experimental data of four different shale samples measured at the Advanced Photon Source and at the Swiss Light Source. The results are as good as those determined by visual inspection and show better robustness than conventional methods. CNNs also have great potential for reducing or removing other artifacts caused by instrument instability, detector non-linearity, etc. An open-source toolbox, which integrates the CNN methods described in this paper, is freely available through GitHub at tomography/xlearn and can be easily integrated into existing computational pipelines available at various synchrotron facilities. Source code, documentation and information on how to contribute are also provided.
An investigation of error correcting techniques for OMV and AXAF
NASA Technical Reports Server (NTRS)
Ingels, Frank; Fryer, John
1991-01-01
The original objectives of this project were to build a test system for the NASA (255,223) Reed-Solomon encoding/decoding chip set and circuit board. This test system was then to be interfaced with a convolutional system at MSFC to examine the performance of the concatenated codes. After considerable work, it was discovered that the convolutional system could not function as needed. This report documents the design, construction, and testing of the test apparatus for the R/S chip set. The approach taken was to verify the error-correcting behavior of the chip set by injecting known error patterns onto data and observing the results. Error sequences were generated using pseudo-random number generator programs, with a Poisson time distribution between errors and Gaussian burst lengths. Sample means, variances, and numbers of uncorrectable errors were calculated for each data set before testing.
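The error-injection scheme the report describes can be sketched as follows, assuming Poisson burst arrivals (exponential inter-arrival gaps) and Gaussian burst lengths; the rate and burst parameters are illustrative, not the project's values.

```python
import numpy as np

rng = np.random.default_rng(1)

def inject_errors(data, rate=1e-3, burst_mean=4.0, burst_sd=1.5):
    """Corrupt a bit stream with error bursts: burst start times follow a
    Poisson process (exponential gaps between bursts) and burst lengths
    are Gaussian, mirroring the report's test method."""
    bits = data.copy()
    t = 0
    while True:
        t += int(rng.exponential(1.0 / rate))            # gap to next burst
        if t >= bits.size:
            break
        length = max(1, int(round(rng.normal(burst_mean, burst_sd))))
        bits[t:t + length] ^= 1                          # flip a burst of bits
        t += length
    return bits

data = rng.integers(0, 2, 100_000, dtype=np.uint8)
corrupted = inject_errors(data)
print("injected bit errors:", int((data ^ corrupted).sum()))
```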
Matrix-vector multiplication using digital partitioning for more accurate optical computing
NASA Technical Reports Server (NTRS)
Gary, C. K.
1992-01-01
Digital partitioning offers a flexible means of increasing the accuracy of an optical matrix-vector processor. This algorithm can be implemented with the same architecture required for a purely analog processor, which gives optical matrix-vector processors the ability to perform high-accuracy calculations at speeds comparable with or greater than electronic computers, as well as the ability to perform analog operations at much greater speed. Digital partitioning is compared with digital multiplication by analog convolution, residue number systems, and redundant number representation in terms of the size and speed required for an equivalent throughput, as well as in terms of the hardware requirements. Digital partitioning and digital multiplication by analog convolution are found to be the most efficient algorithms when coding time and hardware are considered, and the architecture for digital partitioning permits the use of analog computations to provide the greatest throughput for a single processor.
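A small numeric sketch of the digital-partitioning idea: operands are split into base-B digits, each digit-plane matrix-vector product is small enough for a low-accuracy analog multiplier, and the exact answer is recombined digitally by weighting each plane with the matching power of the base. The base and digit count below are assumptions, not values from the paper.

```python
import numpy as np

B, D = 16, 4                      # base-16 digits, 4 digits per word (assumed)

def digits(x):
    """Split non-negative integers into D base-B digits (LSB first)."""
    return np.stack([(x // B ** i) % B for i in range(D)], axis=-1)

def partitioned_matvec(A, v):
    """Each digit-plane product involves only small integers (what the
    analog optical stage would compute); the digital stage shifts and
    sums the plane products to recover the exact full-precision result."""
    Ad, vd = digits(A), digits(v)
    y = np.zeros(A.shape[0], dtype=np.int64)
    for i in range(D):                        # matrix digit plane
        for j in range(D):                    # vector digit plane
            y += (Ad[..., i] @ vd[..., j]) * B ** (i + j)
    return y

rng = np.random.default_rng(2)
A = rng.integers(0, B ** D, (3, 3))
v = rng.integers(0, B ** D, 3)
assert np.array_equal(partitioned_matvec(A, v), A @ v)    # exact result
```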
Multiscale deep features learning for land-use scene recognition
NASA Astrophysics Data System (ADS)
Yuan, Baohua; Li, Shijin; Li, Ning
2018-01-01
The features extracted from deep convolutional neural networks (CNNs) have shown their promise as generic descriptors for land-use scene recognition. However, most of the work directly adopts the deep features for the classification of remote sensing images, and does not encode the deep features for improving their discriminative power, which can affect the performance of deep feature representations. To address this issue, we propose an effective framework, LASC-CNN, obtained by locality-constrained affine subspace coding (LASC) pooling of a CNN filter bank. LASC-CNN obtains more discriminative deep features than directly extracted from CNNs. Furthermore, LASC-CNN builds on the top convolutional layers of CNNs, which can incorporate multiscale information and regions of arbitrary resolution and sizes. Our experiments have been conducted using two widely used remote sensing image databases, and the results show that the proposed method significantly improves the performance when compared to other state-of-the-art methods.
Learning Hierarchical Feature Extractors for Image Recognition
2012-09-01
space as a natural criterion for devising better pools. Finally, we propose ways to make coding faster and more powerful through fast convolutional...parameter is the set of pools over which the summary statistic is computed. We propose locality in feature configuration space as a natural criterion for...pooling (dotted lines) is consistently higher than average pooling (solid lines), but the gap is much less significant with intersection kernel (closed
An introduction to deep learning on biological sequence data: examples and solutions.
Jurtz, Vanessa Isabell; Johansen, Alexander Rosenberg; Nielsen, Morten; Almagro Armenteros, Jose Juan; Nielsen, Henrik; Sønderby, Casper Kaae; Winther, Ole; Sønderby, Søren Kaae
2017-11-15
Deep neural network architectures such as convolutional and long short-term memory networks have become increasingly popular as machine learning tools during recent years. The availability of greater computational resources, more data, new algorithms for training deep models, and easy-to-use libraries for implementation and training of neural networks are the drivers of this development. The use of deep learning has been especially successful in image recognition, and the development of tools, applications and code examples is in most cases centered within this field rather than within biology. Here, we aim to further the development of deep learning methods within biology by providing application examples and ready-to-apply-and-adapt code templates. Given such examples, we illustrate how architectures consisting of convolutional and long short-term memory neural networks can relatively easily be designed and trained to state-of-the-art performance on three biological sequence problems: prediction of subcellular localization, protein secondary structure and the binding of peptides to MHC Class II molecules. All implementations and datasets are available online to the scientific community at https://github.com/vanessajurtz/lasagne4bio. Contact: skaaesonderby@gmail.com. Supplementary data are available at Bioinformatics online.
Dynamic frame resizing with convolutional neural network for efficient video compression
NASA Astrophysics Data System (ADS)
Kim, Jaehwan; Park, Youngo; Choi, Kwang Pyo; Lee, JongSeok; Jeon, Sunyoung; Park, JeongHoon
2017-09-01
In the past, video codecs such as VC-1 and H.263 used a technique of encoding reduced-resolution video and restoring the original resolution at the decoder to improve coding efficiency. The techniques in VC-1 and H.263 Annex Q are called dynamic frame resizing and reduced-resolution update mode, respectively. However, these techniques have not been widely used because their performance improvements are limited and appear only under specific conditions. In this paper, a video frame resizing (reduction/restoration) technique based on machine learning is proposed to improve coding efficiency. The proposed method encodes low-resolution video produced by a convolutional neural network (CNN) and reconstructs the original resolution using a CNN at the decoder. The proposed method shows improved subjective performance over all of the high-resolution videos that dominate recent consumption. To assess the subjective quality of the proposed method, Video Multimethod Assessment Fusion (VMAF), which has shown high reliability among subjective measurement tools, was used as the subjective metric. Moreover, to assess general performance, diverse bitrates were tested. Experimental results showed that the BD-rate based on VMAF improved by about 51% compared with conventional HEVC. VMAF values improved especially markedly at low bitrates. Also, in subjective testing, the method produced better visual quality at similar bit rates.
Tam, Matthew D B S; Lewis, Mark
2012-10-01
Safe femoral arterial access is an important procedural step in many interventional procedures, and variations in the anatomy of the region are well known. The aim of this study was to redefine the anatomy relevant to femoral arterial puncture and to simulate the results of different puncture techniques. A total of 100 consecutive CT angiograms were used, and regions of interest were labelled, giving Cartesian coordinates that allowed determination of the arterial puncture site relative to the skin puncture site, the bifurcation, and the inguinal ligament (ING). The ING was lower than defined by bony landmarks by 16.6 mm. The femoral bifurcation was above the inferior aspect of the femoral head in 51% and entirely medial to the femoral head in 1%. Simulated antegrade and retrograde punctures with a dogmatic technique, using a 45-degree angle, would result in a significant rate of high and low arterial punctures. Simulated 50% soft tissue compression also resulted in a decreased rate of high retrograde punctures but an increased rate of low antegrade punctures. Use of dogmatic access techniques is predicted to result in an unacceptably high rate of dangerous high and low punctures. Puncture angle and geometry can be severely affected by patient obesity. The combination of fluoroscopy to identify the entry point, ultrasound guidance to identify the femoral bifurcation, and soft tissue compression to improve puncture geometry is critical for safe femoral arterial access.
Error Control Techniques for Satellite and Space Communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.
1996-01-01
In this report, we present the results of our recent work on turbo coding in two formats. Appendix A includes the overheads of a talk that has been given at four different locations over the last eight months. This presentation has received much favorable comment from the research community and has resulted in the full-length paper included as Appendix B, 'A Distance Spectrum Interpretation of Turbo Codes'. Turbo codes use a parallel concatenation of rate 1/2 convolutional encoders combined with iterative maximum a posteriori probability (MAP) decoding to achieve a bit error rate (BER) of 10^-5 at a signal-to-noise ratio (SNR) of only 0.7 dB. The channel capacity for a rate 1/2 code with binary phase-shift-keyed modulation on the AWGN (additive white Gaussian noise) channel is 0 dB, and thus the turbo coding scheme comes within 0.7 dB of capacity at a BER of 10^-5.
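The 0 dB figure quoted above corresponds to the Shannon bound for rate R = 1/2 with one information bit per two dimensions on the AWGN channel, Eb/N0 ≥ (2^(2R) − 1)/(2R); a one-line check of that arithmetic:

```python
import math

def shannon_ebn0_db(R: float) -> float:
    """Minimum Eb/N0 (in dB) for reliable communication at code rate R:
    setting the AWGN capacity equal to the rate gives
    Eb/N0 = (2^(2R) - 1) / (2R)."""
    return 10 * math.log10((2 ** (2 * R) - 1) / (2 * R))

print(shannon_ebn0_db(0.5))   # 0.0 dB -- the capacity limit cited above
```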
Percutaneous puncture of renal calyxes guided by a novel device coupled with ultrasound
Chan, Chen Jen; Srougi, Victor; Tanno, Fabio Yoshiaki; Jordão, Ricardo Duarte; Srougi, Miguel
2015-01-01
ABSTRACT Purpose: To evaluate the efficiency of a novel device coupled with ultrasound for renal percutaneous puncture. Materials and Methods: After establishing hydronephrosis, ten pigs had three calyxes of each kidney punctured by the same urology resident, with and without the new device (the "Punctiometer"). Time for procedure completion, number of attempts to reach the calyx, puncture precision, and puncture complications were recorded in both groups and compared. Results: Puncture success on the first attempt was achieved in 25 punctures (83%) with the Punctiometer and in 13 punctures (43%) without the Punctiometer (p=0.011). The mean time required to perform three punctures in each kidney was 14.5 minutes with the Punctiometer and 22.4 minutes without the Punctiometer (p=0.025). The only complications noted were renal hematomas. In the Punctiometer group, all kidneys had small hematomas. In the group without the Punctiometer, 80% had small hematomas, 10% had a medium hematoma, and 10% had a large hematoma. There was no difference in complications between the two groups. Conclusions: The Punctiometer is an effective device for increasing the likelihood of an accurate renal calyx puncture during PCNL, with a shorter time required to perform the procedure. PMID:26689521
Serang, Oliver
2014-01-01
Exact Bayesian inference can sometimes be performed efficiently for special cases where a function has commutative and associative symmetry of its inputs (called "causal independence"). For this reason, it is desirable to exploit such symmetry on big data sets. Here we present a method to exploit a general form of this symmetry on probabilistic adder nodes by transforming those probabilistic adder nodes into a probabilistic convolution tree with which dynamic programming computes exact probabilities. A substantial speedup is demonstrated using an illustration example that can arise when identifying splice forms with bottom-up mass spectrometry-based proteomics. On this example, even state-of-the-art exact inference algorithms require a runtime more than exponential in the number of splice forms considered. By using the probabilistic convolution tree, we reduce the runtime to O(k log²(k)) and the space to O(k log(k)), where k is the number of variables joined by an additive or cardinal operator. This approach, which can also be used with junction tree inference, is applicable to graphs with arbitrary dependency on counting variables or cardinalities and can be used on diverse problems and fields like forward error correcting codes, elemental decomposition, and spectral demixing. The approach also trivially generalizes to multiple dimensions.
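The core trick can be sketched quickly: convolve the per-variable probability mass functions pairwise in a balanced tree, so the distribution of a sum of k variables is obtained without enumerating the 2^k joint assignments. The example below (plain numpy; the splice-form priors are made up for illustration) uses direct convolution, whereas the stated bounds assume FFT-based convolution at each merge.

```python
import numpy as np

def convolution_tree(pmfs):
    """Pairwise-convolve probability mass functions in a balanced tree.
    For k inputs this yields the exact distribution of their sum; with FFT
    convolution at each merge the work is O(k log^2 k) rather than the
    2^k joint enumeration (np.convolve below is direct, for clarity)."""
    layer = [np.asarray(p, dtype=float) for p in pmfs]
    while len(layer) > 1:
        merged = [np.convolve(layer[i], layer[i + 1])
                  for i in range(0, len(layer) - 1, 2)]
        if len(layer) % 2:                 # odd one out rises unchanged
            merged.append(layer[-1])
        layer = merged
    return layer[0]

# toy example: number of "present" splice forms, each with its own prior
priors = [0.9, 0.4, 0.7, 0.2, 0.5]
pmf = convolution_tree([[1.0 - p, p] for p in priors])
print(pmf, pmf.sum())                      # exact P(sum = 0..5); sums to 1
```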
McGee, Monnie; Chen, Zhongxue
2006-01-01
There are many methods of correcting microarray data for non-biological sources of error. Authors routinely supply software or code so that interested analysts can implement their methods. Even with a thorough reading of associated references, it is not always clear how requisite parts of the method are calculated in the software packages. However, it is important to have an understanding of such details, as this understanding is necessary for proper use of the output, or for implementing extensions to the model. In this paper, the calculation of parameter estimates used in Robust Multichip Average (RMA), a popular preprocessing algorithm for Affymetrix GeneChip brand microarrays, is elucidated. The background correction method for RMA assumes that the perfect match (PM) intensities observed result from a convolution of the true signal, assumed to be exponentially distributed, and a background noise component, assumed to have a normal distribution. A conditional expectation is calculated to estimate signal. Estimates of the mean and variance of the normal distribution and the rate parameter of the exponential distribution are needed to calculate this expectation. Simulation studies show that the current estimates are flawed; therefore, new ones are suggested. We examine the performance of preprocessing under the exponential-normal convolution model using several different methods to estimate the parameters.
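For concreteness, the conditional expectation used as the background-adjusted intensity under this exponential-normal convolution model is commonly written as below; the sketch takes the formula as given and uses illustrative parameter values, since how μ, σ, and α should be estimated is exactly what the paper scrutinizes.

```python
import numpy as np
from scipy.stats import norm

def rma_background_adjust(o, mu, sigma, alpha):
    """E[S | O = o] for O = S + N, with S ~ Exponential(alpha) (true signal)
    and N ~ Normal(mu, sigma^2) (background): the value substituted for each
    observed PM intensity o. With a = o - mu - sigma^2 * alpha and b = sigma,
        E[S|o] = a + b * (phi(a/b) - phi((o-a)/b))
                       / (Phi(a/b) + Phi((o-a)/b) - 1),
    where phi/Phi are the standard normal pdf/cdf."""
    a = o - mu - sigma ** 2 * alpha
    b = sigma
    num = norm.pdf(a / b) - norm.pdf((o - a) / b)
    den = norm.cdf(a / b) + norm.cdf((o - a) / b) - 1
    return a + b * num / den

pm = np.array([55.0, 120.0, 300.0])                 # observed PM intensities
print(rma_background_adjust(pm, mu=50.0, sigma=15.0, alpha=0.01))
```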
Wang, Cheng-Wei; He, Hong-Bo; Li, Ning; Wen, Qian; Liu, Zhi-Shun
2010-09-01
To explore a better therapeutic method for functional constipation. Ninety-five cases of functional constipation were randomly divided into a deep puncture at ST 25 group (48 cases), a shallow puncture at ST 25 group (24 cases) and a medication group (23 cases). In the deep puncture group, Tianshu (ST 25) was punctured deeply to the peritoneum, with electric stimulation. In the shallow puncture group, Tianshu (ST 25) was punctured shallowly, 5 mm beneath the skin, with electric stimulation. In the medication group, Duphalac was administered orally. Cases in all three groups were treated continuously for 4 weeks and followed up for 6 months. The outcomes observed were the number of patients who had defecation 4 times a week, the change in weekly defecation frequency, and the change in the Cleveland Clinic Score (CCS). In the deep puncture group, the frequency of weekly defecation and the number of patients who had defecation 4 times a week increased and the CCS decreased, with efficacy similar to that of the shallow puncture group (all P > 0.05). But the efficacy of both ST 25 groups was superior to that of the medication group (both P < 0.05). In comparison, deep puncture at ST 25 acted more quickly than either shallow puncture at ST 25 or medication, and its efficacy lasted much longer. Deep puncture at ST 25 with electric stimulation presents efficacy on functional constipation similar to shallow puncture at ST 25, but it acts more quickly; both are more advantageous than medication, with better long-term efficacy.
The Tension and Puncture Properties of HDPE Geomembrane under the Corrosion of Leachate.
Xue, Qiang; Zhang, Qian; Li, Zhen-Ze; Xiao, Kai
2013-09-17
To investigate the gradual failure of high-density polyethylene (HDPE) geomembrane as a result of long-term corrosion, four dynamic corrosion tests were conducted at different temperatures and durations. By combining tension and puncture tests, we systematically studied the variation of the tension and puncture properties of the HDPE geomembrane under different corrosion conditions. Results showed that tension and puncture failure of the HDPE geomembrane was progressive, and tensile strength in the longitudinal grain direction was evidently better than that in the transverse direction. Punctures appeared shortly after the puncture force reached the puncture strength. The tensile strength of the geomembrane was inversely proportional to corrosion time, and the impact of corrosion was more obvious in the longitudinal direction than in the transverse direction. As corrosion time increased, puncture strength decreased and the corresponding deformation increased. As with corrosion time, an increase in corrosion temperature induced a decrease in geomembrane tensile strength. Tensile and puncture strength were extremely sensitive to temperature. Overall, residual strength had a negative correlation with corrosion time and temperature. Elongation increased initially and then decreased with increasing temperature, but showed no clear trend with corrosion time. The reduction in puncture strength and the increase in puncture deformation had positive correlations with corrosion time and temperature. The geomembrane softened under corrosion conditions. These conclusions may be applicable to the proper design of HDPE geomembranes in landfill barrier systems.
Semi-analytical approach to estimate railroad tank car shell puncture
DOT National Transportation Integrated Search
2011-03-16
This paper describes the development of engineering-based equations to estimate the puncture resistance of railroad tank cars under a generalized shell or side impact scenario. Resistance to puncture is considered in terms of puncture velocity, which...
Transforaminal Lumbar Puncture: An Alternative Technique in Patients with Challenging Access.
Nascene, D R; Ozutemiz, C; Estby, H; McKinney, A M; Rykken, J B
2018-05-01
Interlaminar lumbar puncture and cervical puncture may not be ideal in all circumstances. Recently, we have used a transforaminal approach in selected situations. Between May 2016 and December 2017, twenty-six transforaminal lumbar punctures were performed in 9 patients (25 CT-guided, 1 fluoroscopy-guided). Seven had spinal muscular atrophy and were referred for intrathecal nusinersen administration. In 2, CT myelography was performed via transforaminal lumbar puncture. The lumbar posterior elements were completely fused in 8, and there was an overlying abscess in 1. The L1-2 level was used in 2; the L2-3 level, in 10; the L3-4 level, in 12; and the L4-5 level, in 2 procedures. Post-lumbar puncture headache was observed on 4 occasions, which resolved without blood patching. One patient felt heat and pain at the injection site that resolved spontaneously within hours. One patient had radicular pain that resolved with conservative treatment. Transforaminal lumbar puncture may become an effective alternative to classic interlaminar lumbar puncture or cervical puncture. © 2018 by American Journal of Neuroradiology.
Marts, Donna J.; Barker, Stacey G.; Wowczuk, Andrew; Vellenoweth, Thomas E.
2002-01-01
A portable barrier strip having retractable tire-puncture spikes for puncturing a vehicle tire. The tire-puncture spikes have an armed position for puncturing a tire and a retracted position for not puncturing a tire. The strip comprises a plurality of barrier blocks having the tire-puncture spikes removably disposed in a shaft that is rotatably disposed in each barrier block. The plurality of barrier blocks are hingedly interconnected by complementary hinges integrally formed into the side of each barrier block, which allow the strip to be rolled for easy storage and retrieval but prevent irregular or back bending of the strip. The shafts of adjacent barrier blocks are pivotally interconnected via a double-hinged universal joint to accommodate irregularities in a roadway surface and to transmit torsional motion of the shaft from block to block. A single flexshaft cable is connected to the shaft of an end block to allow a user to selectively cause the shafts of a plurality of adjacently connected barrier blocks to rotate the tire-puncture spikes to the armed position for puncturing a vehicle tire, and to the retracted position for not puncturing the tire. The flexshaft is provided with a resiliently biased retracting mechanism and a release latch for allowing the spikes to be quickly retracted after the intended vehicle tire is punctured.
Design and Development of Basic Physical Layer WiMAX Network Simulation Models
2009-01-01
Wide Web. The third software version was developed during the period of 22 August to 4 November, 2008. The software version developed during the...researched on the Web. The mathematics of some fundamental concepts such as Fourier transforms and convolutional coding techniques were also reviewed...Mathworks Matlab users' website. A simulation model was found, entitled "Estudio y Simulación de la capa física de la norma 802.16 (Sistema WiMAX)" (Study and Simulation of the Physical Layer of the 802.16 Standard (WiMAX System)), developed
Hatfield, Malcolm K; Handrich, Stephen J; Willis, Jeffrey A; Beres, Robert A; Zaleski, George X
2008-06-01
The objective of our study was to compare the incidence of blood patch as the best objective indicator of postdural puncture headache after elective fluoroscopic lumbar puncture with the use of a 22-gauge Whitacre (pencil point) needle versus standard 22- and 20-gauge Quincke (bevel-tip) needles and to determine the best level of puncture. The records of 724 consecutive patients who were referred to St. Mary's Medical Center department of radiology for fluoroscopic lumbar puncture from January 2003 through April 2007 were retrospectively reviewed. Emergency requests (191) were discarded along with those for patients with clinical signs of pseudotumor cerebri (21), normal pressure hydrocephalus (3), and failed attempts (4). The collective total was 505 elective lumbar punctures. The blood patch rate for the 22-gauge Whitacre needle was 4.2%. The result for the 22-gauge Quincke point needle was 15.1% whereas that for the 20-gauge Quincke point needle was 29.6%. In addition, the level of puncture showed a blood patch rate that increased as the level of lumbar puncture lowered. The highest level of lumbar puncture was L1-L2 with the lowest recorded level being L5-S1. The Whitacre needle is associated with a significantly lower incidence of blood patch rate after lumbar puncture. The highest level of puncture (L1-L2) also provides the lowest level of blood patch rate.
Maxwell: A semi-analytic 4D code for earthquake cycle modeling of transform fault systems
NASA Astrophysics Data System (ADS)
Sandwell, David; Smith-Konter, Bridget
2018-05-01
We have developed a semi-analytic approach (and computational code) for rapidly calculating 3D time-dependent deformation and stress caused by screw dislocations embedded within an elastic layer overlying a Maxwell viscoelastic half-space. The Maxwell model is developed in the Fourier domain to exploit the computational advantages of the convolution theorem, hence substantially reducing the computational burden associated with an arbitrarily complex distribution of force couples necessary for fault modeling. The new aspect of this development is the ability to model lateral variations in shear modulus. Ten benchmark examples are provided for testing and verification of the algorithms and code. One final example simulates interseismic deformation along the San Andreas Fault System, where lateral variations in shear modulus are included to simulate lateral variations in lithospheric structure.
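The payoff of working in the Fourier domain is that the response to an arbitrary source distribution costs one forward/inverse transform pair instead of a sum over every force couple. A generic numpy sketch of this convolution-theorem shortcut follows; the Gaussian kernel is a stand-in for illustration, not the code's viscoelastic Green's functions.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 64
sources = rng.normal(size=(n, n))          # e.g. a grid of force couples

# stand-in point-source response on a periodic grid (NOT the viscoelastic
# Green's function; just a smooth kernel to exercise the machinery)
x = np.arange(n)
x = np.minimum(x, n - x)                   # periodic distance along one axis
kernel = np.exp(-(x[:, None] ** 2 + x[None, :] ** 2) / 25.0)

# convolution theorem: one FFT pair instead of a sum over every source
response = np.real(np.fft.ifft2(np.fft.fft2(sources) * np.fft.fft2(kernel)))

# brute-force check of the circular convolution at a single grid point
i, j = 5, 9
direct = sum(sources[a, b] * kernel[(i - a) % n, (j - b) % n]
             for a in range(n) for b in range(n))
assert np.isclose(response[i, j], direct)
```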
Upper bounds on sequential decoding performance parameters
NASA Technical Reports Server (NTRS)
Jelinek, F.
1974-01-01
This paper presents the best obtainable random coding and expurgated upper bounds on the probabilities of undetectable error, of t-order failure (advance to depth t into an incorrect subset), and of likelihood rise in the incorrect subset, applicable to sequential decoding when the metric bias G is arbitrary. Upper bounds on the Pareto exponent are also presented. The G-values optimizing each of the parameters of interest are determined, and are shown to lie in intervals that in general have nonzero widths. The G-optimal expurgated bound on undetectable error is shown to agree with that for maximum likelihood decoding of convolutional codes, and that on failure agrees with the block code expurgated bound. Included are curves evaluating the bounds for interesting choices of G and SNR for a binary-input quantized-output Gaussian additive noise channel.
Rotation invariant deep binary hashing for fast image retrieval
NASA Astrophysics Data System (ADS)
Dai, Lai; Liu, Jianming; Jiang, Aiwen
2017-07-01
In this paper, we study how to compactly represent image characteristics for fast image retrieval. We propose supervised rotation-invariant compact discriminative binary descriptors by combining a convolutional neural network with hashing. In the proposed network, binary codes are learned by employing a hidden layer that represents latent concepts dominating the class labels. A loss function is proposed to minimize the difference between the binary descriptors of a reference image and its rotated copy. Compared with some other supervised methods, the proposed network does not require pair-wise inputs for binary code learning. Experimental results show that our method is effective and achieves state-of-the-art results on the CIFAR-10 and MNIST datasets.
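A rough PyTorch-style sketch of the described design; layer sizes, the squared-error rotation loss, and all names are illustrative rather than taken from the paper:

```python
import torch
import torch.nn as nn

class HashHead(nn.Module):
    """Hypothetical hashing head: a hidden layer squashed to (0,1) serves as a
    relaxed binary code, and a classifier on top makes the latent concepts
    'dominate' the class labels, as the abstract describes."""
    def __init__(self, feat_dim=512, n_bits=48, n_classes=10):
        super().__init__()
        self.latent = nn.Sequential(nn.Linear(feat_dim, n_bits), nn.Sigmoid())
        self.classifier = nn.Linear(n_bits, n_classes)

    def forward(self, feats):           # feats: CNN features of an image
        h = self.latent(feats)          # relaxed binary descriptor in (0,1)
        return h, self.classifier(h)

def rotation_loss(h_ref, h_rot):
    # Penalize differences between descriptors of an image and its rotated copy.
    return torch.mean((h_ref - h_rot) ** 2)

# At retrieval time the descriptor is binarized: code = (h > 0.5)
```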
Fan, Guoxin; Wang, Teng; Hu, Shuo; Guan, Xiaofei; Gu, Xin; He, Shisheng
2017-05-01
Accurate puncture during percutaneous transforaminal endoscopic discectomy at the L5/S1 level is difficult in cases with a high iliac crest and narrow foramen, even though the difficulties of foraminoplasty can be overcome by advanced instruments such as reamers. This report describes an isocentric navigation technique with a definite pathway for difficult puncture cases at the L5/S1 level. Technical note. Difficult punctures were defined as more than 10 needle punctures before obtaining an ideal puncture location by senior surgeons with experience of over 500 percutaneous endoscopic transforaminal discectomy (PETD) cases. A total of 124 punctures were recorded in 11 difficult puncture cases at the L5/S1 level. A definite pathway was created based on an isocentric navigation principle, using a surface locator and an arch-guided device. The surface locator was used to rapidly and accurately identify the puncture target by recognition of the surrounding rods under fluoroscopy. The arch-guided device ensures that the puncture target always remains at the center of a virtual sphere. We recorded the number of punctures, fluoroscopy exposures, radiation exposure time, operative time, visual analog scale (VAS) score, Japanese Orthopaedic Association (JOA) score, and patient satisfaction. The average number of punctures was significantly reduced to 1.27 with the arch-guided device compared with conventional puncture methods (P < 0.05). The average operative time was 90.09 ± 11.00 minutes and the number of fluoroscopy exposures was 53.36 ± 5.85. The radiation exposure time was 50.91 ± 5.20 seconds. VAS scores for leg and back pain, as well as the JOA score, were all significantly improved after surgery (P < 0.05). The rate of excellent or good satisfaction was 90.91%. No major complications, including cerebrospinal fluid leakage, surgical infection, and postoperative nerve root injury, were recorded in this small sample. This was a small-sample study with a short follow-up. The novel isocentric navigation technique with a definite pathway is practical and effective in reducing puncture attempts in difficult puncture cases at the L5/S1 level, which may expand the capacity of PETD at the L5/S1 level.
DOT National Transportation Integrated Search
2001-11-01
This report is the second in a series focusing on methods to determine the puncture velocity of railroad tank car shells. In this context, puncture velocity refers to the impact velocity at which a coupler will completely pierce the shell and punctur...
DOT National Transportation Integrated Search
2001-11-01
This report is the first in a two-part series that focuses on methodologies to determine the puncture velocity of tank car shells. In this context, puncture velocity refers to the impact velocity at which a coupler will puncture the tank. In this rep...
MR imaging findings after ventricular puncture in patients with SAH.
Tominaga, J; Shimoda, M; Oda, S; Kumasaka, A; Yamazaki, K; Tsugane, R
2001-11-01
Using magnetic resonance (MR) imaging, we studied brain injury from ventricular puncture performed during craniotomy in the acute stage of subarachnoid hemorrhage (SAH). In 80 patients who underwent craniotomy for aneurysm obliteration within 48 hr after SAH, ventricular puncture for drainage of cerebrospinal fluid (CSF) was performed to reduce intracranial pressure. MR imaging was performed within 3 days following surgery to measure the size of the lesion, and was repeated on postoperative days 14 and 30. Of the 80 patients with ventricular puncture preceding craniotomy, 65 (81%) showed MR evidence of brain injury from the puncture. Overall, 149 lesions were detected; coronal images showed cortical injuries (54 cases), penetrating injuries to tracts along the ventricular tube (55 cases), caudate injuries (25 cases), and corpus callosum injuries (15 cases). Brain injuries from ventricular puncture did not correlate significantly with patient outcome. While ventricular puncture and drainage of CSF can readily be performed to decrease brain volume at the time of craniotomy in acute-stage SAH, neurosurgeons should be aware of a surprisingly high incidence of brain injury complicating puncture.
Concatenated Coding Using Trellis-Coded Modulation
NASA Technical Reports Server (NTRS)
Thompson, Michael W.
1997-01-01
In the late seventies and early eighties a technique known as Trellis Coded Modulation (TCM) was developed for providing spectrally efficient error correction coding. Instead of adding redundant information in the form of parity bits, redundancy is added at the modulation stage, thereby increasing bandwidth efficiency. A digital communications system can be designed to use bandwidth-efficient multilevel/phase modulation such as Amplitude Shift Keying (ASK), Phase Shift Keying (PSK), Differential Phase Shift Keying (DPSK) or Quadrature Amplitude Modulation (QAM). Performance gain can be achieved by increasing the number of signals over the corresponding uncoded system to compensate for the redundancy introduced by the code. A considerable amount of research and development has been devoted to developing good TCM codes for severely bandlimited applications. More recently, the use of TCM for satellite and deep space communications applications has received increased attention. This report describes the general approach of using a concatenated coding scheme that features TCM and RS coding. Results have indicated that substantial (6-10 dB) performance gains can be achieved with this approach with comparatively little bandwidth expansion. Since all of the bandwidth expansion is due to the RS code, TCM-based concatenated coding results in roughly 10-50% bandwidth expansion, compared with 70-150% expansion for a similar concatenated scheme that uses a convolutional code. We stress that combined coding and modulation optimization is important for achieving performance gains while maintaining spectral efficiency.
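The quoted expansion figures follow from the code rates alone, since TCM hides its redundancy in the enlarged constellation while the RS code costs extra symbols. A back-of-envelope check in Python, with commonly used RS(255,k) rates assumed for illustration:

```python
# Bandwidth expansion with TCM comes from the outer RS code alone:
# expansion = 1/R_RS - 1. Assumed RS(255,k) choices, not the report's.
for k in (223, 239):
    print(f"RS(255,{k}) + TCM: {255 / k - 1:.0%} expansion")
# A rate-1/2 convolutional inner code instead doubles the bandwidth, e.g.
# RS(255,223) + r=1/2 code: (255/223) * 2 - 1 = 129% expansion.
```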
A Non Local Electron Heat Transport Model for Multi-Dimensional Fluid Codes
NASA Astrophysics Data System (ADS)
Schurtz, Guy
2000-10-01
Apparent inhibition of thermal heat flow is one of the oldest problems in computational Inertial Fusion, and flux-limited Spitzer-Harm conduction has been a mainstay in multi-dimensional hydrodynamic codes for more than 25 years. Theoretical investigation of the problem indicates that heat transport in laser-produced plasmas has to be considered a non-local process. Various authors contributed to the non-local theory and proposed convolution formulas designed for practical implementation in one-dimensional fluid codes. Though the theory, confirmed by kinetic calculations, actually predicts a reduced heat flux, it fails to explain the very small limiters required in two-dimensional simulations. Fokker-Planck simulations by Epperlein, Rickard and Bell [PRL 61, 2453 (1988)] demonstrated that non-local effects could lead to a strong reduction of heat flow in two dimensions, even in situations where a one-dimensional analysis suggests that the heat flow is nearly classical. We developed at CEA/DAM a non-local electron heat transport model suitable for implementation in our two-dimensional radiation hydrodynamic code FCI2. This model may be envisioned as the first step of an iterative solution of the Fokker-Planck equations; it takes the mathematical form of multigroup diffusion equations, the solution of which yields both the heat flux and the departure of the electron distribution function from the Maxwellian. Although direct implementation of the model is straightforward, formal solutions of it can be expressed in convolution form, exhibiting a three-dimensional tensor propagator. Reduction to one dimension retrieves the original formula of Luciani, Mora and Virmont [PRL 51, 1664 (1983)]. Intense magnetic fields may be generated by thermal effects in laser targets; these fields, as well as non-local effects, inhibit electron conduction. We present simulations where both effects are taken into account and briefly discuss the coupling strategy between them.
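In one dimension, convolution formulas of the kind cited smear the local Spitzer-Harm flux with a kernel whose width scales with the electron mean free path. A schematic Python sketch with an assumed exponential kernel, not the model's actual propagator:

```python
import numpy as np

def nonlocal_heat_flux(q_sh, lam, dx):
    """Smear the local Spitzer-Harm flux q_SH with a normalized exponential
    delocalization kernel of width `lam` (an effective mean free path).
    Kernel shape and normalization are illustrative only."""
    x = (np.arange(len(q_sh)) - len(q_sh) // 2) * dx
    w = np.exp(-np.abs(x) / lam)
    w /= w.sum()
    return np.convolve(q_sh, w, mode="same")
```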
NASA Astrophysics Data System (ADS)
Rodrigues, Pedro L.; Moreira, António H. J.; Rodrigues, Nuno F.; Pinho, A. C. M.; Fonseca, Jaime C.; Lima, Estevão.; Vilaça, João. L.
2014-03-01
Background: Precise needle puncture of renal calyces is a challenging and essential step for successful percutaneous nephrolithotomy. This work tests and evaluates, through a clinical trial, a real-time navigation system to plan and guide percutaneous kidney puncture. Methods: A novel system, entitled i3DPuncture, was developed to aid surgeons in establishing the desired puncture site and the best virtual puncture trajectory by gathering and processing data from a tracked needle with optical passive markers. In order to navigate and superimpose the needle on a preoperative volume, the patient, 3D image data and tracker system were registered intraoperatively using seven points chosen strategically on rigid bone structures and the nearby kidney area. In addition, relevant anatomical structures for surgical navigation were automatically segmented using a multi-organ segmentation algorithm that clusters volumes based on statistical properties and the minimum description length criterion. For each cluster, a rendering transfer function enhanced the visualization of different organs and surrounding tissues. Results: One puncture attempt was sufficient to achieve a successful kidney puncture. The puncture took 265 seconds, and 32 seconds were necessary to plan the puncture trajectory. The virtual puncture path was followed accurately until the needle tip reached the desired kidney calyx. Conclusions: This new solution provided spatial information regarding the needle inside the body and the possibility to visualize surrounding organs. It may offer a promising and innovative solution for percutaneous punctures.
Skin Punctures in Preterm Infants in the First 2 Weeks of Life.
Finn, Daragh; Butler, Daryl; Sheehan, Orla; Livingstone, Vicki; Dempsey, Eugene M
2018-05-23
The objective of this study was to investigate the frequency of and trends in skin punctures in preterm infants. A prospective audit was performed of preterm infants less than 35 weeks admitted over a 6-month period to a tertiary neonatal intensive care unit. Each skin puncture performed in the first 2 weeks of life was documented in a specifically designed audit sheet. Ninety-nine preterm infants were enrolled. Infants born at <32 weeks' gestation had significantly more skin punctures than infants >32 weeks (median skin punctures 26.5 vs. 17, p < 0.05). The highest frequency of skin punctures occurred during the first week of life for infants >28 weeks' gestation (medians 17.5 at 28-31+6 weeks' gestation and 15 at >32 weeks), and during the second week of life for those born at <28 weeks (median 17.5). Infants with sepsis had more skin punctures (p < 0.001), but this was not significant on multivariate analysis. Median skin punctures in the second week of life were significantly higher in the sepsis group on multivariate analysis (odds ratio: 1.07, 95% confidence interval: 1.00-1.14, p = 0.041). The frequency of skin punctures is influenced by gestational age and postnatal age. Skin punctures were not an independent risk factor for sepsis.
Reina, M A; López, A; Villanueva, M C; De Andrés, J A; Martín, S
2005-05-01
To assess the possibility of puncturing nerve roots in the cauda equina with spinal needles of different point designs and to quantify the number of axons affected, we performed in vitro punctures of human nerve roots taken from 3 fresh cadavers. Twenty punctures were performed with 25-gauge Whitacre needles and 40 with 25-gauge Quincke needles; half the Quincke needle punctures were carried out with the point perpendicular to the root and the other half with the point parallel to it. The samples were studied by optical and scanning electron microscopy. The possibility of finding the needle orifice inserted inside the nerve was assessed. On a photographic montage, we counted the number of axons affected during a hypothetical nerve puncture. The nerve roots used in this study were between 1 and 2.3 mm thick, allowing the needle to penetrate the root in the 52 samples studied. The needle orifice was never fully located inside the nerve in any of the samples. The numbers of myelinated axons affected during nerve punctures 0.2 mm deep were 95, 154, and 81 for Whitacre needles, Quincke needles with the point perpendicular, and Quincke needles with the point parallel, respectively. During punctures 0.5 mm deep, 472, 602, and 279 axons were affected in each puncture group, respectively. The differences in all cases were statistically significant. It is possible to achieve intraneural puncture with 25-gauge needles; however, full intraneural placement of the needle orifice is unlikely. In case of nerve trauma, the damage could be greater if puncture is carried out with a Quincke needle with the point inserted perpendicular to the nerve root.
Chen, Jin-feng; Liu, Yi-nan; Wu, Nan; Feng, Yuan; Wang, Jia; Lü, Chao; Wang, Yu-zhao; Pei, Yu-quan; Yan, Shi; Zheng, Qing-feng; Zhang, Li-jian; Yang, Yue
2012-04-01
To investigate the diagnostic accuracy of needle puncture biopsy with intraoperative frozen-section pathological examination for pulmonary nodules, and whether this diagnostic method can replace examination of the resected tumor. A total of 50 patients (28 males and 22 females; average age 59 years) with a single nodule on imaging examination and no pathological diagnosis, seen from January to October 2010, were enrolled in this study. During open surgery or video-assisted thoracic surgery, a 14-gauge needle was used for puncture biopsy and frozen-section pathological examination. All adverse events during puncture biopsy were recorded. The resected specimens underwent paraffin pathological examination, and the relationship between puncture frozen-section pathology and paraffin pathology was analyzed. Tumor sizes ranged from 1.0 cm × 0.6 cm to 5.6 cm × 9.0 cm. With postoperative paraffin pathological examination as the gold standard, there were 7 cases of benign tumor and 43 cases of malignant tumor. The diagnostic sensitivity of puncture biopsy was 90.7%, the specificity was 100%, the positive predictive value was 100%, and the negative predictive value was 63.6%. Eleven cases were diagnosed as benign by needle puncture biopsy, among which 4 were proved malignant by paraffin pathology, for a false negative rate of 9.3%. The main risk of puncture biopsy was bleeding immediately after puncture, at a rate of 4.0% (2/50). Intraoperative puncture biopsy had a high specificity for malignant lung tumors, with a certain false negative rate for benign tumors. Puncture biopsy with frozen-section pathological examination can, to an extent, replace resection biopsy.
Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks.
Ren, Shaoqing; He, Kaiming; Girshick, Ross; Sun, Jian
2017-06-01
State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [1] and Fast R-CNN [2] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features. Using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model [3], our detection system has a frame rate of 5 fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on the PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In the ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.
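A minimal sketch of an RPN head as described, written in PyTorch; channel counts are illustrative, and where the original predicts 2k softmax scores this sketch uses k sigmoid-style objectness outputs:

```python
import torch
import torch.nn as nn

class RPNHead(nn.Module):
    """A 3x3 conv slides over the shared feature map; at each position it
    emits an objectness score and 4 box offsets for each of k anchors."""
    def __init__(self, in_ch=512, mid_ch=512, k=9):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, mid_ch, kernel_size=3, padding=1)
        self.cls = nn.Conv2d(mid_ch, k, kernel_size=1)      # objectness per anchor
        self.reg = nn.Conv2d(mid_ch, 4 * k, kernel_size=1)  # box deltas per anchor

    def forward(self, feat):
        h = torch.relu(self.conv(feat))
        return self.cls(h), self.reg(h)

# scores, deltas = RPNHead()(backbone_features)
```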
Low inductance diode design of the Proto 2 accelerator for imploding plasma loads
NASA Astrophysics Data System (ADS)
Hsing, W. W.; Coats, R.; McDaniel, D. H.; Spielman, R. B.
A new water transmission line convolute, single-piece insulator, and low-inductance diode were designed for the Proto 2 accelerator to drive imploding plasma loads. The water transmission lines have a 5 cm gap to eliminate any water arcing. A two-dimensional magnetic field code was used to calculate the convolute inductance. An acrylic insulator was used, as well as a single-piece laminated polycarbonate insulator. They have been successfully tested at over 90% of the Shipman criterion for classical insulator breakdown, although the laminations in the polycarbonate insulator failed after a few shots. The anode and cathode each have two pieces and are held together mechanically. The vacuum MITL tapers to a 3 mm minimum gap. The total inductance is 8.4 nH for gas puff loads and 7.8 nH for imploding foil loads. Out of a forward-going energy of 290 kJ, 175 kJ has been delivered past the insulator, and 100 kJ has been successfully delivered to the load.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Xiaogang; De Carlo, Francesco; Phatak, Charudatta
This paper presents an algorithm to calibrate the center-of-rotation for X-ray tomography by using a machine learning approach, the Convolutional Neural Network (CNN). The algorithm shows excellent accuracy in evaluations on synthetic data with various noise ratios. It is further validated with experimental data from four different shale samples measured at the Advanced Photon Source and at the Swiss Light Source. The results are as good as those determined by visual inspection and show better robustness than conventional methods. CNN also has great potential for reducing or removing other artifacts caused by instrument instability, detector non-linearity, etc. An open-source toolbox, which integrates the CNN methods described in this paper, is freely available through GitHub at tomography/xlearn and can be easily integrated into existing computational pipelines available at various synchrotron facilities. Source code, documentation and information on how to contribute are also provided.
Continuous thermographic observation may predict extravasation in chemotherapy-treated patients.
Oya, Maiko; Murayama, Ryoko; Oe, Makoto; Yabunaka, Koichi; Tanabe, Hidenori; Takahashi, Toshiaki; Matsui, Yuko; Otomo, Eiko; Komiyama, Chieko; Sanada, Hiromi
2017-06-01
Extravasation, or leakage of vesicant drugs into subcutaneous tissues, causes serious complications such as induration and necrosis in chemotherapy-treated patients. As macroscopic observation may overlook symptoms during infusion, we focused on skin temperature changes at puncture sites and studied thermographic patterns related to induration or necrosis caused by extravasation. Outpatients undergoing chemotherapy using peripheral intravenous catheters were enrolled in this prospective observational study. We filmed and classified infrared thermography movies of puncture sites during infusion; ultrasonography was also used at puncture sites to observe the subcutaneous condition. Multiple logistic regression analysis was performed to examine the association of thermographic patterns with induration or necrosis observed on the next chemotherapy day. Differences in patient characteristics, puncture sites, and infusions were analyzed by the Mann-Whitney U test and Fisher's exact test according to thermographic pattern. Eight patients developed induration among 74 observations in 62 patients. Among six thermographic patterns, a fan-shaped lower-temperature area gradually spreading from the puncture site ("fan at puncture site") was significantly associated with induration. Ultrasonography revealed that catheters of patients with fan at puncture site remained in the vein at the end of infusion, indicating that the infusion probably leaked from the puncture site. Patients with fan at puncture site had no significant differences in characteristics or infusion conditions compared with those with the other five thermographic patterns. We determined that fan at puncture site was related to induration caused by extravasation. Continuous thermographic observation may enable us to predict adverse events of chemotherapy. Copyright © 2017. Published by Elsevier Ltd.
The training and learning process of transseptal puncture using a modified technique.
Yao, Yan; Ding, Ligang; Chen, Wensheng; Guo, Jun; Bao, Jingru; Shi, Rui; Huang, Wen; Zhang, Shu; Wong, Tom
2013-12-01
As the transseptal (TS) puncture has become an integral part of many types of cardiac interventional procedures, its technique, first reported for measurement of left atrial pressure in the 1950s, continues to evolve. Our laboratory adopted a modified technique that uses only a coronary sinus catheter as the landmark for accomplishing TS punctures under fluoroscopy. The aim of this study was to prospectively evaluate the training and learning process for TS puncture guided by this modified technique. Guided by the training protocol, TS puncture was performed in 120 consecutive patients by three trainees without previous personal experience in TS catheterization, with one experienced trainer as a controller. We analysed the following parameters: first-puncture success rate, total procedure time, fluoroscopic time, and radiation dose. The learning curve was analysed using curve-fitting methodology. The first attempt at TS crossing was successful in 74 (82%), a second attempt was successful in 11 (12%), and in 5 patients puncture of the interatrial septum ultimately failed. The average starting process time was 4.1 ± 0.8 min, and the estimated mean learning plateau was 1.2 ± 0.2 min. The estimated mean learning rate for process time was 25 ± 3 cases. Important aspects of the learning curve can be estimated by fitting inverse curves for TS puncture. The study demonstrated that this technique is a simple, safe, economical, and effective approach for learning TS puncture. Based on the statistical analysis, approximately 29 TS punctures will be needed for a trainee to pass the steepest part of the learning curve.
Ultrasound-guided lumbar puncture in pediatric patients: technical success and safety.
Pierce, David B; Shivaram, Giri; Koo, Kevin S H; Shaw, Dennis W W; Meyer, Kirby F; Monroe, Eric J
2018-06-01
Disadvantages of fluoroscopically guided lumbar puncture include delivery of ionizing radiation and limited resolution of incompletely ossified posterior elements. Ultrasound (US) allows visualization of critical soft tissues and the cerebrospinal fluid (CSF) space without ionizing radiation. To determine the technical success and safety of US-guided lumbar puncture in pediatric patients, a retrospective review identified all patients referred to interventional radiology for lumbar puncture between June 2010 and June 2017. Patients who underwent lumbar puncture with fluoroscopic guidance alone were excluded. For the remaining procedures, technical success and procedural complications were assessed. Two hundred and one image-guided lumbar punctures in 161 patients were included. Eighty patients (43%) had previously failed landmark-based attempts. One hundred ninety-six procedures (97.5%) proceeded to attempted lumbar puncture. Five procedures (2.5%) were not attempted after US assessment, due either to a paucity of CSF or to an unsafe window for needle placement. Technical success was achieved in 187 (95.4%) of lumbar punctures attempted with US guidance. One hundred seventy-seven (90.3%) were technically successful with US alone (age range: 2 days-15 years, weight range: 1.9-53.1 kg) and an additional 10 (5.1%) were successful with US-guided thecal access and subsequent fluoroscopic confirmation. Three (1.5%) cases were unsuccessful with US guidance but were subsequently successful with fluoroscopic guidance. Of the 80 previously failed landmark-based lumbar punctures, 77 (96.3%) were successful with US guidance alone. There were no reported complications. US guidance is safe and effective for lumbar punctures and has specific advantages over fluoroscopy in pediatric patients.
Li, Yan; Deng, Jianxin; Zhou, Jun; Li, Xueen
2016-11-01
Corresponding to pre-puncture and post-puncture insertion, the elastic and viscoelastic mechanical properties of brain tissue along the implanting trajectory of subthalamic nucleus stimulation are investigated, respectively. Elastic properties in pre-puncture insertion are investigated through needle insertion experiments using whole porcine brains. A linear polynomial and a second-order polynomial are fitted to the average insertion force in pre-puncture insertion, and the Young's modulus is calculated from the slope of each fitting. Viscoelastic properties in post-puncture insertion are investigated through indentation stress-relaxation tests for six regions of interest along a planned trajectory. A linear viscoelastic model with a Prony series approximation is fitted to the average load trace of each region using the Boltzmann hereditary integral, and shear relaxation moduli of each region are calculated from the parameters of the Prony series approximation. The results show that, in pre-puncture insertion, the needle force increases almost linearly with needle displacement and both fittings match the average insertion force well; the Young's moduli calculated from their slopes can therefore be trusted to model linear and nonlinear instantaneous elastic responses of brain tissue, respectively. In post-puncture insertion, both region and time significantly affect the viscoelastic behavior. The six tested regions fall into three stiffness categories. Shear relaxation moduli decay dramatically on short time scales, but equilibrium is never truly achieved. The regional and temporal viscoelastic properties in post-puncture insertion are valuable for guiding probe insertion into each region along the implanting trajectory.
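For reference, the Prony-series form fitted here is G(t) = G_inf + sum_i G_i exp(-t/tau_i). A small Python sketch with placeholder coefficients, not the paper's fitted values:

```python
import numpy as np

def shear_relaxation_modulus(t, g_inf, g_i, tau_i):
    """Prony-series shear relaxation modulus:
    G(t) = G_inf + sum_i G_i * exp(-t / tau_i)."""
    t = np.asarray(t, dtype=float)
    return g_inf + sum(g * np.exp(-t / tau) for g, tau in zip(g_i, tau_i))

# Fast decay at short times, with no true equilibrium in the test window:
t = np.logspace(-2, 2, 50)
G = shear_relaxation_modulus(t, g_inf=0.4e3, g_i=[1.2e3, 0.6e3],
                             tau_i=[0.5, 10.0])   # placeholder values (Pa, s)
```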
QR code optical encryption using spatially incoherent illumination
NASA Astrophysics Data System (ADS)
Cheremkhin, P. A.; Krasnov, V. V.; Rodin, V. G.; Starikov, R. S.
2017-02-01
Optical encryption is an actively developing field of science. The majority of encryption techniques use coherent illumination and suffer from speckle noise, which severely limits their applicability. The spatially incoherent encryption technique does not have this drawback, but its effectiveness is dependent on the Fourier spectrum properties of the image to be encrypted. The application of a quick response (QR) code in the capacity of a data container solves this problem, and the embedded error correction code also enables errorless decryption. The optical encryption of digital information in the form of QR codes using spatially incoherent illumination was implemented experimentally. The encryption is based on the optical convolution of the image to be encrypted with the kinoform point spread function, which serves as an encryption key. Two liquid crystal spatial light modulators were used in the experimental setup for the QR code and the kinoform imaging, respectively. The quality of the encryption and decryption was analyzed in relation to the QR code size. Decryption was conducted digitally. The successful decryption of encrypted QR codes of up to 129 × 129 pixels was demonstrated. A comparison with the coherent QR code encryption technique showed that the proposed technique has a signal-to-noise ratio that is at least two times higher.
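Digitally, the scheme amounts to convolving the QR image with the kinoform's point spread function, which serves as the key. A toy Python sketch using circular FFT convolution, with regularized inverse filtering standing in for the paper's unspecified digital decryption step:

```python
import numpy as np

def encrypt(qr_image, psf):
    # Incoherent optics adds intensities: cipher = image (*) PSF,
    # modeled here as a circular convolution via the FFT.
    H = np.fft.fft2(psf, s=qr_image.shape)
    return np.real(np.fft.ifft2(np.fft.fft2(qr_image) * H))

def decrypt(cipher, psf, eps=1e-3):
    # Tikhonov-regularized inverse filter (an assumed decryption method);
    # the QR code's built-in error correction absorbs residual noise.
    H = np.fft.fft2(psf, s=cipher.shape)
    return np.real(np.fft.ifft2(np.fft.fft2(cipher) * np.conj(H)
                                / (np.abs(H) ** 2 + eps)))
```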
LDPC-PPM Coding Scheme for Optical Communication
NASA Technical Reports Server (NTRS)
Barsoum, Maged; Moision, Bruce; Divsalar, Dariush; Fitz, Michael
2009-01-01
In a proposed coding-and-modulation/demodulation-and-decoding scheme for a free-space optical communication system, an error-correcting code of the low-density parity-check (LDPC) type would be concatenated with a modulation code that consists of a mapping of bits to pulse-position-modulation (PPM) symbols. Hence, the scheme is denoted LDPC-PPM. This scheme could be considered a competitor of a related prior scheme in which an outer convolutional error-correcting code is concatenated with an interleaving operation, a bit-accumulation operation, and a PPM inner code. Both the prior and present schemes can be characterized as serially concatenated pulse-position modulation (SCPPM) coding schemes. Figure 1 represents a free-space optical communication system based on either the present LDPC-PPM scheme or the prior SCPPM scheme. At the transmitting terminal, the original data (u) are processed by an encoder into blocks of bits (a), and the encoded data are mapped to PPM of an optical signal (c). For the purpose of design and analysis, the optical channel in which the PPM signal propagates is modeled as a Poisson point process. At the receiving terminal, the arriving optical signal (y) is demodulated to obtain an estimate (â) of the coded data, which is then processed by a decoder to obtain an estimate (û) of the original data.
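The inner modulation code is simply a mapping of bit groups to pulse positions. A small illustrative Python sketch (M = 16 slots is an arbitrary choice, not taken from the article):

```python
import numpy as np

def bits_to_ppm(bits, M=16):
    """Map each group of log2(M) coded bits to one M-slot PPM symbol:
    a single pulse in the slot indexed by the group's binary value."""
    k = int(np.log2(M))
    assert len(bits) % k == 0
    symbols = bits.reshape(-1, k).dot(1 << np.arange(k)[::-1])
    frames = np.zeros((len(symbols), M), dtype=int)
    frames[np.arange(len(symbols)), symbols] = 1   # one pulse per symbol
    return frames

# In LDPC-PPM this mapping sits between the LDPC encoder and the laser.
ppm_frames = bits_to_ppm(np.random.randint(0, 2, 64), M=16)
```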
Kameoka, S; Matsumoto, K; Kai, Y; Yonehara, Y; Arai, Y; Honda, K
2010-01-01
The aim of this report was to establish puncture techniques for the temporomandibular joint (TMJ) cavity in rats. The experimental sample comprised 30 male Sprague-Dawley rats. Under general anaesthesia, the superior joint cavity was punctured laterally (lateral puncture technique, LPT; n = 11), anterosuperiorly (anterosuperior puncture technique, ASPT; n = 13) or anteroinferiorly (anteroinferior puncture technique, AIPT; n = 6) using a 27-gauge needle. After the needle tip was confirmed by micro-CT (R-mCT®, Rigaku, Tokyo, Japan) to be located on the mandibular fossa, 0.05 ml of contrast medium was injected under micro-CT fluoroscopic guidance. After confirming that the joint cavity was filled with contrast medium, micro-CT imaging was carried out. The puncture was accurate in 5 of the 11 LPT animals, in all 13 ASPT animals, and in 3 of the 6 AIPT animals. Furthermore, the ASPT and AIPT demonstrated improved retention of the needle; it was harder to detach the needle, which led to greater stability. These results suggest that ASPT assisted by R-mCT® is useful for basic research, including drug discovery and study of the pathogenesis of TMJ diseases. PMID:20841463
Lumbar puncture opening pressure is not a reliable measure of intracranial pressure in children.
Cartwright, Cathy; Igbaseimokumo, Usiakimi
2015-02-01
There is very little data correlating lumbar puncture pressures to formal intracranial pressure monitoring despite the widespread use of both procedures. The hypothesis was that lumbar puncture is a single-point measurement and hence it may not be a reliable evaluation of intracranial pressure. The study was therefore carried out to compare lumbar puncture opening pressures with the Camino bolt intracranial pressure monitor in children. Twelve children with a mean age of 8.5 years who had both lumbar puncture and intracranial pressure monitoring were analyzed. The mean lumbar puncture opening pressure was 22.4 mm Hg versus a mean Camino bolt intracranial pressure of 7.8 mm Hg (P < .0001). Lumbar puncture therefore significantly overestimates the intracranial pressure in children. There were no complications from the intracranial pressure monitoring, and the procedure changed the treatment of all 12 children avoiding invasive operative procedures in most of the patients. © The Author(s) 2014.
Optimizing the Galileo space communication link
NASA Technical Reports Server (NTRS)
Statman, J. I.
1994-01-01
The Galileo mission was originally designed to investigate Jupiter and its moons utilizing a high-rate, X-band (8415 MHz) communication downlink with a maximum rate of 134.4 kb/sec. However, following the failure of the high-gain antenna (HGA) to fully deploy, a completely new communication link design was established that is based on Galileo's S-band (2295 MHz), low-gain antenna (LGA). The new link relies on data compression, local and intercontinental arraying of antennas, a (14,1/4) convolutional code, a (255,M) variable-redundancy Reed-Solomon code, decoding feedback, and techniques to reprocess recorded data to greatly reduce data losses during signal acquisition. The combination of these techniques will enable return of significant science data from the mission.
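For context, a convolutional encoder of the kind used here is a short shift-register machine. The sketch below implements the familiar rate-1/2, constraint-length-7 code (generators 171 and 133 octal) rather than Galileo's actual (14,1/4) code, whose generators are not given in the abstract:

```python
import numpy as np

def conv_encode(bits, gens=((1, 1, 1, 1, 0, 0, 1), (1, 0, 1, 1, 0, 1, 1))):
    """Illustrative rate-1/2, K=7 convolutional encoder: for each input bit,
    emit one parity bit per generator polynomial over the shift register."""
    K = len(gens[0])
    reg = np.zeros(K, dtype=int)
    out = []
    for b in bits:
        reg = np.roll(reg, 1)
        reg[0] = b                                  # shift the new bit in
        out.extend(int(np.dot(g, reg)) % 2 for g in gens)
    return np.array(out)

coded = conv_encode([1, 0, 1, 1, 0])                # 10 coded bits from 5 input bits
```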
Performance of noncoherent MFSK channels with coding
NASA Technical Reports Server (NTRS)
Butman, S. A.; Lyon, R. F.
1974-01-01
Computer simulation of data transmission over a noncoherent channel with predetection signal-to-noise ratio of 1 shows that convolutional coding can reduce the energy requirement by 4.5 dB at a bit error rate of 0.001. The effects of receiver quantization and choice of number of tones are analyzed; nearly optimum performance is attained with eight quantization levels and sixteen tones at predetection S/N ratio of 1. The effects of changing predetection S/N ratio are also analyzed; for lower predetection S/N ratio, accurate extrapolations can be made from the data, but for higher values, the results are more complicated. These analyses will be useful in designing telemetry systems when coherence is limited by turbulence in the signal propagation medium or oscillator instability.
Congleton, J.L.; LaVoie, W.J.
2001-01-01
Thirteen blood chemistry indices were compared for samples collected by three commonly used methods: caudal transection, heart puncture, and caudal vessel puncture. Apparent biases in blood chemistry values for samples obtained by caudal transection were consistent with dilution with tissue fluids: alanine aminotransferase (ALT), aspartate aminotransferase (AST), lactate dehydrogenase (LDH), creatine kinase (CK), triglyceride, and K+ were increased and Na+ and Cl- were decreased relative to values for samples obtained by caudal vessel puncture. Some enzyme activities (ALT, AST, LDH) and K+ concentrations were also greater in samples taken by heart puncture than in samples taken by caudal vessel puncture. Of the methods tested, caudal vessel puncture had the least effect on blood chemistry values and should be preferred for blood chemistry studies on juvenile salmonids.
1977-09-01
to state as successive input bits are brought into the encoder. We can more easily follow our progress on the equivalent lattice diagram. [Fig. 12: Convolutional Encoder, State Diagram and Lattice; an example input path 1001 is traced through the diagram.] Any input path can in fact be traced through the lattice. The Viterbi algorithm can be simply described with the aid of this lattice. Note that the nodes of the lattice represent
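A minimal hard-decision Viterbi decoder over such a lattice, for an illustrative rate-1/2, K=3 code (generators 7 and 5 octal), might look like the Python sketch below; it illustrates the algorithm the text describes and is not code from the report:

```python
import numpy as np
from itertools import product

def viterbi_decode(received, gens=((1, 1, 1), (1, 0, 1))):
    """Trellis nodes are encoder states; each step keeps, per state, the
    survivor path with the smallest Hamming distance to the received pair."""
    K = len(gens[0])
    n_states = 1 << (K - 1)

    def step(state, bit):               # encoder transition from (state, input)
        reg = [bit] + [(state >> i) & 1 for i in range(K - 1)]
        out = tuple(sum(g * r for g, r in zip(gen, reg)) % 2 for gen in gens)
        return (bit | (state << 1)) & (n_states - 1), out

    INF = 10 ** 9
    metric = [0] + [INF] * (n_states - 1)           # start in the zero state
    paths = [[] for _ in range(n_states)]
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for state, bit in product(range(n_states), (0, 1)):
            if metric[state] >= INF:
                continue
            nxt, out = step(state, bit)
            m = metric[state] + sum(a != b for a, b in zip(out, r))
            if m < new_metric[nxt]:                 # keep the survivor
                new_metric[nxt] = m
                new_paths[nxt] = paths[state] + [bit]
        metric, paths = new_metric, new_paths
    return paths[int(np.argmin(metric))]

# e.g. viterbi_decode([1, 1, 1, 0, 0, 0]) recovers the input bits [1, 0, 1]
```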
Management of pedal puncture wounds.
Belin, Ronald; Carrington, Scott
2012-07-01
Puncture wounds of the foot are a common injury, and infection associated with these injuries may result in considerable morbidity. The pathophysiology and management of a puncture wound is dependent on the material that punctures the foot, the location and depth of the wound, time to presentation, footwear, and underlying health status of the patient. Puncture wounds should not be treated lightly, so accurate diagnosis, assessment, and treatment are paramount. Early incision and drainage, vaccination, and the use of proper antibiotics can lead to positive outcomes and prevent limb-threatening circumstances. Copyright © 2012 Elsevier Inc. All rights reserved.
Minimizing embedding impact in steganography using trellis-coded quantization
NASA Astrophysics Data System (ADS)
Filler, Tomáš; Judas, Jan; Fridrich, Jessica
2010-01-01
In this paper, we propose a practical approach to minimizing embedding impact in steganography based on syndrome coding and trellis-coded quantization and contrast its performance with bounds derived from appropriate rate-distortion bounds. We assume that each cover element can be assigned a positive scalar expressing the impact of making an embedding change at that element (single-letter distortion). The problem is to embed a given payload with minimal possible average embedding impact. This task, which can be viewed as a generalization of matrix embedding or writing on wet paper, has been approached using heuristic and suboptimal tools in the past. Here, we propose a fast and very versatile solution to this problem that can theoretically achieve performance arbitrarily close to the bound. It is based on syndrome coding using linear convolutional codes with the optimal binary quantizer implemented using the Viterbi algorithm run in the dual domain. The complexity and memory requirements of the embedding algorithm are linear w.r.t. the number of cover elements. For practitioners, we include detailed algorithms for finding good codes and their implementation. Finally, we report extensive experimental results for a large set of relative payloads and for different distortion profiles, including the wet paper channel.
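For intuition about the baseline being generalized: matrix embedding with the [7,4] Hamming parity-check matrix hides 3 message bits in 7 cover bits with at most one change. The paper's method replaces this block code with convolutional codes quantized by the Viterbi algorithm, which is not shown here. A toy Python sketch:

```python
import numpy as np

# Parity-check matrix of the [7,4] Hamming code; column j is binary j+1.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def embed(cover, msg):
    """Flip at most one cover bit so the receiver reads msg = H @ stego mod 2."""
    syn = (H @ cover + msg) % 2                # syndrome mismatch to correct
    idx = int("".join(map(str, syn)), 2)       # index of the column equal to syn
    stego = cover.copy()
    if idx:
        stego[idx - 1] ^= 1                    # a single flip fixes the syndrome
    return stego

cover = np.random.randint(0, 2, 7)
msg = np.random.randint(0, 2, 3)
stego = embed(cover, msg)
assert np.array_equal(H @ stego % 2, msg)      # extraction is just H @ x mod 2
```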
UNIPIC code for simulations of high power microwave devices
NASA Astrophysics Data System (ADS)
Wang, Jianguo; Zhang, Dianhui; Liu, Chunliang; Li, Yongdong; Wang, Yue; Wang, Hongguang; Qiao, Hailiang; Li, Xiaoze
2009-03-01
In this paper, UNIPIC code, a new member in the family of fully electromagnetic particle-in-cell (PIC) codes for simulations of high power microwave (HPM) generation, is introduced. In the UNIPIC code, the electromagnetic fields are updated using the second-order, finite-difference time-domain (FDTD) method, and the particles are moved using the relativistic Newton-Lorentz force equation. The convolutional perfectly matched layer method is used to truncate the open boundaries of HPM devices. To model curved surfaces and avoid the time step reduction in the conformal-path FDTD method, the CP weakly conditional-stable FDTD (WCS FDTD) method, which combines the WCS FDTD and CP-FDTD methods, is implemented. UNIPIC is two-and-a-half dimensional, is written in the object-oriented C++ language, and can be run on a variety of platforms including WINDOWS, LINUX, and UNIX. Users can use the graphical user interface to create the geometric structures of the simulated HPM devices, or import previously created structures. Numerical experiments on some typical HPM devices using the UNIPIC code are given. The results are compared with those obtained from some well-known PIC codes, and they agree well with each other.
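At its core, the field update described is the second-order Yee leapfrog. A stripped-down 1-D free-space Python sketch, without the CPML, particles, or conformal boundaries that UNIPIC actually implements:

```python
import numpy as np

def fdtd_1d(n_cells=200, n_steps=400, src=100):
    """Bare-bones 1-D FDTD in normalized units: H and E are staggered in
    space and time and updated in alternation (leapfrog), Courant number 0.5."""
    ez = np.zeros(n_cells)
    hy = np.zeros(n_cells)
    c = 0.5                                    # Courant factor (stable for <= 1)
    for t in range(n_steps):
        hy[:-1] += c * (ez[1:] - ez[:-1])      # advance H a half step
        ez[1:] += c * (hy[1:] - hy[:-1])       # then advance E
        ez[src] += np.exp(-((t - 30) / 10) ** 2)  # soft Gaussian source
    return ez
```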
Kim, Meehyoung; Yoon, Haesang
2011-11-01
Even though the use of a 25 gauge or smaller Quincke needle is recommended for spinal anesthesia to reduce post-dural puncture headache in Korea, lumbar puncture in older patients using a 25 gauge or smaller Quincke needle can be difficult. However, most previous studies of post-dural puncture headache have chosen children, parturients, and young adults as participants. This study compared post-dural puncture headache, post-operative back pain, and the number of lumbar puncture attempts using a 23 or 25 gauge Quincke needle for spinal anesthesia of Korean patients >60 years of age. Randomized, double-blinded controlled trial. The 53 participants who underwent orthopedic surgery under spinal anesthesia were recruited by informed notices from December 2006 through August 2007 at a 200-bed general hospital located in Kyunggido. Inclusion criteria were age >60 years, ASA I-II, and administration of patient-controlled analgesia for the first 48 h post-operatively. The 53 patients were randomly allocated to either the experimental group (23 gauge Quincke needle) or the control group (25 gauge Quincke needle). All patients had 24 h bed rest post-operatively. Post-dural puncture headache was assessed by the Dittmann Scale and post-operative back pain by a visual analogue scale at 24, 48, and 72 h post-operatively. Statistical methods included the Mann-Whitney U-test and Spearman correlation. There were no differences in post-dural puncture headache or post-operative back pain at 24, 48, and 72 h post-operatively, and no difference in the number of lumbar punctures, between the 23 and 25 gauge Quincke needles. Forty-eight-hour post-operative back pain was positively associated with the number of lumbar punctures (p=.036) and age (p=.040). There were no statistically significant associations among post-dural puncture headache, the number of lumbar punctures, and 48 h post-operative back pain. Pre-operative back pain was positively associated with 48 h post-operative back pain (p<.001). The choice of a 23 or 25 gauge Quincke needle for spinal anesthesia has no significant influence on post-dural puncture headache or post-operative back pain in Korean patients older than 60 years. The 23 gauge Quincke needle is an option for lumbar punctures in this patient population. Copyright © 2011 Elsevier Ltd. All rights reserved.
Turan, Burak; Daşlı, Tolga; Erkol, Ayhan; Erden, İsmail
2015-01-01
Sublingual (SL) nitroglycerin administered before radial artery puncture may improve cannulation success and decrease the incidence of radial artery spasm (RAS) compared with intra-arterial (IA) nitroglycerin in transradial procedures. Patients undergoing diagnostic transradial angiography were randomized to IA (200 mcg) or SL (400 mcg) nitroglycerin. Primary endpoints were puncture time and puncture attempts. The secondary endpoint was the incidence of RAS. A total of 101 participants (mean age 60 ± 11 years, 53% male) were randomized (51 in the IA and 50 in the SL group). Puncture time (50 [36-75] vs 50 [35-90] sec), puncture attempts (1.18 ± 0.48 vs 1.20 ± 0.49), multiple punctures (13.7 vs 16.0%) and RAS (19.6 vs 24.0%) were not statistically different between the IA and SL groups, respectively. A composite endpoint of all adverse events related to transradial angiography (multiple punctures, RAS, access site crossover, hypotension/bradycardia associated with nitroglycerin, and radial artery occlusion) was very similar in the IA and SL groups (39 vs 40%, respectively). However, puncture time was significantly longer with SL nitroglycerin in patients <1.65 m in height (47 [36-66] vs 63 [41-110] sec, p=0.042). Multiple punctures appeared more frequent with SL nitroglycerin in patients with diabetes (0 vs 30%, p=0.028) and in patients <1.65 m in height (7.4 vs 25%, p=0.085). Likewise, RAS with SL nitroglycerin appeared more frequent in smokers than with IA nitroglycerin (0 vs 27%, p=0.089). SL nitroglycerin was not different from IA nitroglycerin in terms of efficiency and safety in the overall study population. However, it may be inferior to IA nitroglycerin in certain subgroups (shorter individuals, diabetics, and smokers). Copyright © 2015 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wybranski, Christian, E-mail: Christian.Wybranski@uk-koeln.de; Pech, Maciej; Lux, Anke
Objective: To assess the feasibility of a hybrid approach employing MRI-guided bile duct (BD) puncture for subsequent fluoroscopy-guided biliary interventions in patients with non-dilated (≤3 mm) or dilated BD (≥3 mm) but unfavorable conditions for ultrasonography (US)-guided BD puncture. Methods: A total of 23 hybrid interventions were performed in 21 patients. Visualization of BD and puncture needles (PN) in the interventional MR images was rated on a 5-point Likert scale by two radiologists. Technical success, planning time, BD puncture time and positioning adjustments of the PN, as well as technical success of the biliary intervention and complication rate, were recorded. Results: Visualization even of third-order non-dilated BD and PN was rated excellent by both radiologists, with good to excellent interrater agreement. MRI-guided BD puncture was successful in all cases. Planning and BD puncture times were 1:36 ± 2:13 (0:16-11:07) min and 3:58 ± 2:35 (1:11-9:32) min. Positioning adjustment of the PN was necessary in two patients. Repeated capsular puncture was not necessary in any case. All biliary interventions were completed successfully without major complications. Conclusion: A hybrid approach which employs MRI-guided BD puncture for subsequent fluoroscopy-guided biliary intervention is feasible in clinical routine and yields high technical success in patients with non-dilated BD and/or unfavorable conditions for US-guided puncture. Excellent visualization of BD and PN in near-real-time interventional MRI allows successful cannulation of the BD.
78 FR 22213 - Airworthiness Directives; Eurocopter France Helicopters
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-15
... float assemblies for any cuts, tears, punctures, or abrasion. Replace the cover if the internal... cuts, tears, punctures, or abrasion. If there is a cut, tear, puncture, or any abrasion, repair the...
Li, Xiang; Long, Qingzhi; Chen, Xingfa; He, Dalin; He, Hui
2017-04-01
SonixGPS is a novel real-time ultrasonography navigation technology that has been demonstrated to improve puncture accuracy in surgical operations. The aim of this study was to evaluate its application in guiding the puncture during percutaneous nephrolithotomy (PCNL). We retrospectively reviewed our experience in treating 74 patients with complex kidney stones with PCNL, in whom the punctures in 37 cases were guided by the SonixGPS system and in the other 37 by conventional ultrasound. The effectiveness of the operation was evaluated in terms of stone clearance rate, operation time, time to successful puncture, number of attempts for successful puncture, and hospital stay. Safety was examined by evaluating postoperative complications. Our retrospective review showed that although there were no significant differences in stone clearance rates between the groups, SonixGPS guidance resulted in more accurate puncture, with shorter puncture time and a higher successful puncture rate. With the help of SonixGPS, most patients (92%) had no or only mild complications, compared with 73% in the conventional ultrasound group. The postoperative decrease in hemoglobin in the SonixGPS group was 13.79 (7-33) mg/dl, significantly lower than the 20.97 (8-41) mg/dl in the conventional ultrasound group. Our experience demonstrates that SonixGPS is superior to conventional ultrasound in guiding the puncture in PCNL for the treatment of complex kidney stones.
Platek, S Frank; Keisler, Mark A; Ranieri, Nicola; Reynolds, Todd W; Crowe, John B
2002-09-01
The ability to accurately determine the number of syringe needle penetration holes through the rubber stoppers in pharmaceutical vials and rubber septa in intravenous (i.v.) line and bag ports has been a critical factor in a number of forensic cases involving the thefts of controlled substances or suspected homicide by lethal injection. In the early 1990s, the microscopy and microanalysis group of the U.S. Food and Drug Administration's Forensic Chemistry Center (FCC) developed and implemented a method (unpublished) to locate needle punctures in rubber pharmaceutical vial stoppers. In 1996, as part of a multiple homicide investigation, the Indiana State Police Laboratory (ISPL) contacted the FCC for information on a method to identify and count syringe needle punctures through rubber stoppers in pharmaceutical vials. In a joint project and investigation using the FCC's needle hole location method and applying a method of puncture site mapping developed by the ISPL, a systematic method was developed to locate, identify, count, and map syringe punctures in rubber bottle stoppers or i.v. bag ports using microscopic analysis. The method requires documentation of punctures on both sides of the rubber stoppers and microscopic analysis of each suspect puncture site. The final result of an analysis using the method is a detailed diagram of puncture holes on both sides of a questioned stopper and a record of the minimum number of puncture holes through a stopper.
Puncture mechanics of soft elastomeric membrane with large deformation by rigid cylindrical indenter
NASA Astrophysics Data System (ADS)
Liu, Junjie; Chen, Zhe; Liang, Xueya; Huang, Xiaoqiang; Mao, Guoyong; Hong, Wei; Yu, Honghui; Qu, Shaoxing
2018-03-01
Soft elastomeric membrane structures are widely used and commonly found in engineering and biological applications. Puncture is one of the primary failure modes of soft elastomeric membrane at large deformation when indented by rigid objects. In order to investigate the puncture failure mechanism of soft elastomeric membrane with large deformation, we study the deformation and puncture failure of silicone rubber membrane that results from the continuous axisymmetric indentation by cylindrical steel indenters experimentally and analytically. In the experiment, effects of indenter size and the friction between the indenter and the membrane on the deformation and puncture failure of the membrane are investigated. In the analytical study, a model within the framework of nonlinear field theory is developed to describe the large local deformation around the punctured area, as well as to predict the puncture failure of the membrane. The deformed membrane is divided into three parts and the friction contact between the membrane and indenter is modeled by Coulomb friction law. The first invariant of the right Cauchy-Green deformation tensor I1 is adopted to predict the puncture failure of the membrane. The experimental and analytical results agree well. This work provides a guideline in designing reliable soft devices featured with membrane structures, which are present in a wide variety of applications.
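As a worked illustration of the criterion: for an incompressible membrane under equibiaxial stretch lambda, the thickness stretch is 1/lambda^2 and the first invariant is I1 = 2*lambda^2 + lambda^-4; puncture is flagged when I1 exceeds a material-specific critical value. A short Python sketch with a placeholder threshold, not the paper's measured one:

```python
def first_invariant_equibiaxial(lam):
    """I1 of the right Cauchy-Green tensor for an incompressible membrane
    under equibiaxial in-plane stretch lam (thickness stretch lam**-2)."""
    return 2 * lam**2 + lam**-4

I1_CRIT = 12.0                                 # hypothetical failure threshold
for lam in (1.0, 1.5, 2.0, 2.5):
    i1 = first_invariant_equibiaxial(lam)
    print(f"lambda = {lam}: I1 = {i1:.2f}",
          "-> puncture" if i1 > I1_CRIT else "-> intact")
```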
Beswick, D M; Damrose, E J
2016-07-01
To evaluate the utility of the hybrid tracheoesophageal puncture procedure in stapler-assisted laryngectomy. Patients who underwent total laryngectomy at a single institution from 2009 to 2015 were reviewed. The interventions assessed were surgical creation of a tracheoesophageal puncture and placement of a voice prosthesis. The outcomes measured included voicing ability and valve failure. Thirty-nine patients underwent total laryngectomy or pharyngolaryngectomy. Of these, nine underwent stapler-assisted laryngectomy; seven of the nine patients underwent concurrent stapler-assisted laryngectomy, cricopharyngeal myotomy and a hybrid tracheoesophageal puncture procedure. These seven patients were the focus of this review. Successful voicing and oral alimentation was achieved in all patients. Mean time to phonation was 30 days (range, 7-77 days) and mean time to first valve change was 90 days (range, 35-117 days). Primary tracheoesophageal puncture with concurrent voice prosthesis placement and cricopharyngeal myotomy is easily performed with stapler-assisted laryngectomy. The hybrid tracheoesophageal puncture procedure is a simple method that enables a single operator to achieve primary tracheoesophageal puncture and valve placement; in addition, it facilitates concurrent cricopharyngeal myotomy.
NAS Experiences of Porting CM Fortran Codes to HPF on IBM SP2 and SGI Power Challenge
NASA Technical Reports Server (NTRS)
Saini, Subhash
1995-01-01
Current Connection Machine (CM) Fortran codes developed for the CM-2 and the CM-5 represent an important class of parallel applications. Several users have employed CM Fortran codes in production mode on the CM-2 and the CM-5 for the last five to six years, constituting a heavy investment in terms of cost and time. With Thinking Machines Corporation's decision to withdraw from the hardware business and with the decommissioning of many CM-2 and CM-5 machines, the best way to protect the substantial investment in CM Fortran codes is to port the codes to High Performance Fortran (HPF) on highly parallel systems. HPF is very similar to CM Fortran and thus represents a natural transition. Conversion issues involved in porting CM Fortran codes on the CM-5 to HPF are presented. In particular, the differences between data distribution directives and the CM Fortran Utility Routines Library, as well as the equivalent functionality in the HPF Library, are discussed. Several CM Fortran codes (the Cannon algorithm for matrix-matrix multiplication, a linear solver Ax=b, 1-D convolution for 2-D datasets, a Laplace's equation solver, and a Direct Simulation Monte Carlo (DSMC) code) have been ported to Subset HPF on the IBM SP2 and the SGI Power Challenge. Speedup ratios versus number of processors for the linear solver and the DSMC code are presented.
Engineering analyses for railroad tank car head puncture resistance
DOT National Transportation Integrated Search
2006-11-06
This paper describes engineering analyses to estimate the forces, deformations, and puncture resistance of railroad tank cars. Different approaches to examine puncture of the tank car head are described. One approach is semi-empirical equations...
Etienne, A-L; Audigié, F; Peeters, D; Gabriel, A; Busoni, V
2015-04-01
Cisternal puncture in dogs and cats is commonly carried out. This article describes the percutaneous ultrasound anatomy of the cisternal region in the dog and the cat and an indirect technique for ultrasound-guided cisternal puncture. Ultrasound images obtained ex vivo and in vivo were compared with anatomic sections and used to identify the landmarks for ultrasound-guided cisternal puncture. The ultrasound-guided procedure was established in cadavers and then applied in vivo in seven dogs and two cats. The anatomic landmarks for the ultrasound-guided puncture are the cisterna magna, the spinal cord, the two occipital condyles on transverse images, the external occipital crest and the dorsal arch of the first cervical vertebra on longitudinal images. Using these ultrasound anatomic landmarks, an indirect ultrasound-guided technique for cisternal puncture is applicable in the dog and the cat. © 2014 Blackwell Verlag GmbH.
A technique for ultrasound-guided blood sampling from a dry and gel-free puncture area.
Thorn, Sofie; Gopalasingam, Nigopan; Bendtsen, Thomas Fichtner; Knudsen, Lars; Sloth, Erik
2016-05-07
Vein punctures are performed daily to sample blood. Ultrasound (US) offers an alternative to the blind landmark technique for difficult vascular access. A challenge for this procedure is the presence of US gel in the puncture area. We present a technique for US-guided puncture from extremity veins not palpable or visible to the human eye, while keeping the puncture area dry and gel-free. Ten healthy volunteers underwent two US-guided vein punctures from veins that were neither palpable nor visible. One was drawn from an antebrachial vein and another from a brachial vein. A sterile barrier drape was made from a commercially available dressing and a piece of transparent sterile plastic. The barrier drape consists of an adhesive part placed on the skin designed for sonography and a free transparent flap constituting the barrier between the unsterile sonographic site and the sterile gel-free puncture site. The success rate for vein puncture was 100% in both locations. A total of 22 skin punctures were performed (11 antebrachial and 11 brachial). Gain output was increased 7% (4-12%), and 8% (4-15%), respectively, to compensate for attenuation of the US signal due to the drape. Alignment of the centre of the transducer with the long-axis of the target vein during the procedure was reported as a challenge. US-guided blood sampling from a brachial and antebrachial vein was possible with a 100% success rate, while ensuring a dry and gel-free venipuncture area on one side and the transducer on the other side of a sterile barrier.
Enk, D; Enk, E
1995-11-01
Various in vitro models have been introduced for comparative examinations of post-dural-puncture trauma and measurement of liquor leakage through puncture sites. These models allow simulation of subarachnoid, but not of peridural, pressure. A new two-chamber model enables simulation of both subarachnoid and peridural pressure and allows observation of in vitro punctures with video documentation. Frame grabbing and (computer-aided) image analysis reveal new aspects of spinal puncture effects. Post-dural-puncture trauma and retraction can therefore be objectively visualized by this method, which has not previously been demonstrated. The two-chamber model consists of two short aluminium cylinders. Native human dura patches (8 x 8 mm) from fresh cadavers are placed (correctly oriented) between two special polyamide seals. Mounted between the upper and lower cylinder, these seals stretch the dura patch, which remains flexible and even in all directions. After filling of the lower (subarachnoid) and upper (peridural) chamber with Ringer lactate solution, positive or negative physiological pressure can be adjusted by way of two Ringer-lactate-filled infusion lines in each chamber. Puncturing is performed at an angle of 57 degrees to the dura. The model allows examination with epi-illumination and transmitted (polarized) light. In vitro punctures are observed through an inverted camera lens with a CCD-Hi8 video camera (Canon UC1HI) looking into the peridural chamber and documented by means of an S-VHS video recorder (Panasonic NV-FS200EG). After true-colour frame grabbing by a video digitizer (Fast Screen Machine II), single video frames can be optimized and analysed with a 486-66 MHz computer and conventional software (Corel Draw 3.0, Photostyler 1.1a, DDL Aequitas 1.00b). The punctures demonstrated in this paper were done under simulation of a transdural gradient of 20 cm water, similar to the situation of a recumbent patient (15 cm water in the subarachnoid and -5 cm water in the peridural chamber). The punctures were followed by short-time observation for up to 10 minutes. By making it possible to obtain a picture of the puncture site at 20-ms intervals (because of the PAL norm of 50 half-frames/s), video documentation has become accepted as superior to conventional photography. When the Ringer lactate solution in the subarachnoid chamber is stained with methylene blue, transdural leakage can easily be observed. The results of this documentation technique demonstrate that no dural puncture can be atraumatic, even when a 29-G Quincke needle is used. Calculating the difference between a digitized video frame before and after the puncture clearly illustrates the dural trauma. Owing to their non-cutting tip, as expected, pencil-point needles leave diffuse changes across the dura patch, whereas a more local trauma was observed after puncturing with cutting-tip needles. The same computer calculation between two video frames allows examination of post-puncture dural retraction of the puncture site. In this connection, we found that relevant dural retraction is a phenomenon limited to the first minute after puncture. Thin spinal needles with so-called modern tips (e.g. Whitacre, Atraucan) can minimize the post-dural-puncture trauma, whereas thicker, conventional spinal needles (Quincke) leave considerable dural defects. The two-chamber model presented allows easy simulation of physiological subarachnoid and peridural pressure.
The Ringer lactate solution in the subarachnoid chamber corresponds to the liquor, whereas that in the peridural chamber corresponds to the intercellular (peridural) space. The tension of the dural patch between the polyamide seals is similar to the situation in an anatomical model observed by spinaloscopy (in an earlier study). With the video documentation and computer-aided analysis technique introduced, dural trauma and retraction of the puncture site can be examined and demonstrated.
Distillation with Sublogarithmic Overhead
NASA Astrophysics Data System (ADS)
Hastings, Matthew B.; Haah, Jeongwan
2018-02-01
It has been conjectured that, for any distillation protocol for magic states for the T gate, the number of noisy input magic states required per output magic state at output error rate ε is Ω(log(1/ε)). We show that this conjecture is false. We find a family of quantum error correcting codes of parameters ⟦Σ_{i=w+1}^{m} (…
DNCON2: improved protein contact prediction using two-level deep convolutional neural networks.
Adhikari, Badri; Hou, Jie; Cheng, Jianlin
2018-05-01
Significant improvements in the prediction of protein residue-residue contacts have been observed in recent years. These contacts, predicted using a variety of coevolution-based and machine learning methods, are the key contributors to the recent progress in ab initio protein structure prediction, as demonstrated in the recent CASP experiments. Continuing the development of new methods to reliably predict contact maps is essential to further improve ab initio structure prediction. In this paper we discuss DNCON2, an improved protein contact map predictor based on two-level deep convolutional neural networks. It consists of six convolutional neural networks: the first five predict contacts at 6, 7.5, 8, 8.5 and 10 Å distance thresholds, and the last one uses these five predictions as additional features to predict final contact maps. On the free-modeling datasets in the CASP10, 11 and 12 experiments, DNCON2 achieves mean precisions of 35, 50 and 53.4%, respectively, higher than 30.6% by MetaPSICOV on the CASP10 dataset, 34% by MetaPSICOV on the CASP11 dataset and 46.3% by Raptor-X on the CASP12 dataset, when top L/5 long-range contacts are evaluated. We attribute the improved performance of DNCON2 to the inclusion of short- and medium-range contacts in training, the two-level approach to prediction, the use of state-of-the-art optimization and activation functions, and a novel deep learning architecture that allows each filter in a convolutional layer to access all the input features of a protein of arbitrary length. The web server of DNCON2 is at http://sysbio.rnet.missouri.edu/dncon2/ where training and testing datasets as well as the predictions for the CASP10, 11 and 12 free-modeling datasets can also be downloaded. Its source code is available at https://github.com/multicom-toolbox/DNCON2/. chengji@missouri.edu. Supplementary data are available at Bioinformatics online.
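As a rough illustration of the two-level idea described above, the following PyTorch sketch stacks five first-level contact-map predictions onto the input features of a second-level network. The layer widths, depths, kernel sizes and the 20-feature input are placeholders of ours, not the published DNCON2 architecture; being fully convolutional, it accepts proteins of arbitrary length L, as the abstract requires.

```python
import torch
import torch.nn as nn

class ConvStack(nn.Module):
    """Small fully convolutional block: L x L x C features -> L x L map."""
    def __init__(self, in_ch, hidden=16, depth=3):
        super().__init__()
        layers, ch = [], in_ch
        for _ in range(depth):
            layers += [nn.Conv2d(ch, hidden, 5, padding=2), nn.ReLU()]
            ch = hidden
        layers += [nn.Conv2d(ch, 1, 5, padding=2), nn.Sigmoid()]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

class TwoLevelPredictor(nn.Module):
    def __init__(self, in_ch, n_first=5):
        super().__init__()
        # Five first-level nets (one per distance threshold in the paper).
        self.first = nn.ModuleList(ConvStack(in_ch) for _ in range(n_first))
        # Second level sees the raw features plus the five intermediate maps.
        self.second = ConvStack(in_ch + n_first)

    def forward(self, feats):
        inter = [net(feats) for net in self.first]
        return self.second(torch.cat([feats] + inter, dim=1))

feats = torch.randn(1, 20, 64, 64)         # 20 pairwise features, L = 64
print(TwoLevelPredictor(20)(feats).shape)  # torch.Size([1, 1, 64, 64])
```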
TU-D-209-02: A Backscatter Point Spread Function for Entrance Skin Dose Determination
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vijayan, S; Xiong, Z; Shankar, A
Purpose: To determine the distribution of backscattered radiation to the skin resulting from a non-uniform distribution of primary radiation through convolution with a backscatter point spread function (PSF). Methods: A backscatter PSF is determined using Monte Carlo simulation of a 1 mm primary beam incident on a 30 × 30 cm, 20 cm thick PMMA phantom using EGSnrc software. A primary profile is similarly obtained without the phantom, and the difference from the total provides the backscatter profile. This scatter PSF characterizes the backscatter spread for a "point" primary interaction and can be convolved with the entrance primary dose distribution to obtain the total entrance skin dose. The backscatter PSF was integrated into the skin dose tracking system (DTS), a graphical utility for displaying the color-coded skin dose distribution on a 3D graphic of the patient during interventional fluoroscopic procedures. The backscatter convolution method was validated for the non-uniform beam resulting from the use of an ROI attenuator. The ROI attenuator is a copper sheet with about 20% primary transmission (0.7 mm thick) containing a circular aperture; this attenuator is placed in the beam to reduce dose in the periphery while maintaining full dose in the region of interest. The DTS-calculated primary-plus-backscatter distribution is compared to that measured with GafChromic film and that calculated using EGSnrc Monte Carlo software. Results: The PSF convolution method used in the DTS software was able to account for the spread of backscatter from the ROI region to the region under the attenuator. The skin dose distribution determined using DTS with the ROI attenuator was in good agreement with the distributions measured with GafChromic film and determined by Monte Carlo simulation. Conclusion: The PSF convolution technique provides an accurate alternative for entrance skin dose determination with non-uniform primary x-ray beams. Partial support from NIH Grant R01-EB002873 and Toshiba Medical Systems Corp.
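The convolution step itself is simple to sketch. Below is a minimal NumPy/SciPy illustration of adding the primary dose to the primary convolved with a backscatter PSF for an ROI-attenuator-like beam; the kernel shape and normalization are arbitrary stand-ins of ours, not the EGSnrc-derived PSF from the paper.

```python
import numpy as np
from scipy.signal import fftconvolve

n = 256
y, x = np.mgrid[:n, :n]
r = np.hypot(x - n / 2, y - n / 2)

# Primary entrance dose for an ROI attenuator: full dose inside a circular
# aperture, roughly 20% transmission under the copper elsewhere.
primary = np.where(r < 40, 1.0, 0.2)

# Stand-in backscatter PSF (the paper derives its PSF from EGSnrc runs),
# normalized here so the integrated backscatter is ~40% of the primary.
psf = np.exp(-r / 15.0)
psf *= 0.4 / psf.sum()

# Total entrance skin dose = primary + primary convolved with the PSF.
total = primary + fftconvolve(primary, psf, mode="same")
print(total[n // 2, n // 2], total[10, 10])  # inside aperture vs periphery
```

The convolution naturally spreads backscatter from the fully exposed aperture region into the region under the attenuator, which is the effect the abstract reports validating against film and Monte Carlo.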
Tweaked residual convolutional network for face alignment
NASA Astrophysics Data System (ADS)
Du, Wenchao; Li, Ke; Zhao, Qijun; Zhang, Yi; Chen, Hu
2017-08-01
We propose a novel Tweaked Residual Convolutional Network approach for face alignment with a two-level convolutional network architecture. Specifically, the first-level Tweaked Convolutional Network (TCN) module predicts the landmarks quickly, but accurately enough to serve as a preliminary estimate, by taking a low-resolution version of the detected face holistically as the input. The following Residual Convolutional Network (RCN) module progressively refines each landmark by taking as input the local patch extracted around the predicted landmark; in particular, this allows the Convolutional Neural Network (CNN) to extract local shape-indexed features to fine-tune the landmark position. Extensive evaluations show that the proposed Tweaked Residual Convolutional Network approach outperforms existing methods.
Experimental Evaluation of Adaptive Modulation and Coding in MIMO WiMAX with Limited Feedback
NASA Astrophysics Data System (ADS)
Mehlführer, Christian; Caban, Sebastian; Rupp, Markus
2007-12-01
We evaluate the throughput performance of an OFDM WiMAX (IEEE 802.16-2004, Section 8.3) transmission system with adaptive modulation and coding (AMC) by outdoor measurements. The standard-compliant AMC utilizes a 3-bit feedback for SISO and Alamouti-coded MIMO transmissions. By applying a 6-bit feedback and spatial multiplexing with individual AMC on the two transmit antennas, the data throughput can be increased significantly for large SNR values. Our measurements show that at small SNR values, a single antenna transmission often outperforms an Alamouti transmission. We found that this effect is caused by the asymmetric behavior of the wireless channel and by poor channel knowledge in the two-transmit-antenna case. Our performance evaluation is based on a measurement campaign employing the Vienna MIMO testbed. The measurement scenarios include typical outdoor-to-indoor NLOS, outdoor-to-outdoor NLOS, as well as outdoor-to-indoor LOS connections. We found that in all these scenarios, the measured throughput is far from its achievable maximum; the loss is mainly caused by the overly simple convolutional coding.
Point of impact: the effect of size and speed on puncture mechanics.
Anderson, P S L; LaCosse, J; Pankow, M
2016-06-06
The use of high-speed puncture mechanics for prey capture has been documented across a wide range of organisms, including vertebrates, arthropods, molluscs and cnidarians. These examples span four phyla and seven orders of magnitude difference in size. The commonality of these puncture systems offers an opportunity to explore how organisms at different scales and with different materials, morphologies and kinematics perform the same basic function. However, there is currently no framework for combining kinematic performance with cutting mechanics in biological puncture systems. Our aim here is to establish this framework by examining the effects of size and velocity in a series of controlled ballistic puncture experiments. Arrows of identical shape but varying in mass and speed were shot into cubes of ballistic gelatine. Results from high-speed videography show that projectile velocity can alter how the target gel responds to cutting. Mixed models comparing kinematic variables and puncture patterns indicate that the kinetic energy of a projectile is a better predictor of penetration than either momentum or velocity. These results form a foundation for studying the effects of impact on biological puncture, opening the door for future work to explore the influence of morphology and material organization on high-speed cutting dynamics.
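A small worked example of the abstract's central claim: two projectiles can be matched in kinetic energy (the better predictor of penetration) while differing in momentum. The numbers below are illustrative, not values from the study.

```python
# Two arrows with equal kinetic energy but different momentum; the
# abstract's finding predicts similar penetration depths for these.
m1, v1 = 0.020, 60.0            # 20 g arrow at 60 m/s
m2, v2 = 0.040, v1 / 2 ** 0.5   # 40 g arrow at ~42.4 m/s

kinetic_energy = lambda m, v: 0.5 * m * v ** 2
momentum = lambda m, v: m * v

print(kinetic_energy(m1, v1), kinetic_energy(m2, v2))  # 36.0 J and 36.0 J
print(momentum(m1, v1), momentum(m2, v2))              # 1.2 vs ~1.7 kg m/s
```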
Information Theory, Inference and Learning Algorithms
NASA Astrophysics Data System (ADS)
Mackay, David J. C.
2003-10-01
Information theory and inference, often taught separately, are here united in one entertaining textbook. These topics lie at the heart of many exciting areas of contemporary science and engineering: communication, signal processing, data mining, machine learning, pattern recognition, computational neuroscience, bioinformatics, and cryptography. This textbook introduces theory in tandem with applications. Information theory is taught alongside practical communication systems, such as arithmetic coding for data compression and sparse-graph codes for error-correction. A toolbox of inference techniques, including message-passing algorithms, Monte Carlo methods, and variational approximations, is developed alongside applications of these tools to clustering, convolutional codes, independent component analysis, and neural networks. The final part of the book describes the state of the art in error-correcting codes, including low-density parity-check codes, turbo codes, and digital fountain codes, the twenty-first century standards for satellite communications, disk drives, and data broadcast. Richly illustrated, filled with worked examples and over 400 exercises, some with detailed solutions, David MacKay's groundbreaking book is ideal for self-learning and for undergraduate or graduate courses. Interludes on crosswords, evolution, and sex provide entertainment along the way. In sum, this is a textbook on information, communication, and coding for a new generation of students, and an unparalleled entry point into these subjects for professionals in areas as diverse as computational biology, financial engineering, and machine learning.
Comparison of Sprotte and Quincke needles with respect to post dural puncture headache and backache.
Tarkkila, P J; Heine, H; Tervo, R R
1992-01-01
The objective of this study was to compare 24-gauge Sprotte and 25-gauge Quincke needles with respect to post dural puncture headache and backache. Three hundred ASA Physical Status I or II patients scheduled for minor orthopedic or urologic operations under spinal anesthesia were chosen for this randomized, prospective study at a university hospital and a city hospital. Anesthetic technique, intravenous fluids, and postoperative pain therapy were standardized. Patients were randomly divided into three equal groups. Spinal anesthesia was performed with either a 24-gauge Sprotte needle or a 25-gauge Quincke needle with the cutting bevel parallel or perpendicular to the dural fibers. Anesthesia could not be performed in three cases with the Sprotte needle and in one case with the Quincke needle. The most common complications were post dural puncture backache (18.0%), post dural puncture headache (8.2%), and non-postural headache (6.7%). No major complications occurred. The Quincke needle with bevel perpendicular to the dural fibers caused a 17.9% incidence of post dural puncture headache. The Quincke needle with bevel parallel to the dural fibers and the Sprotte needle caused similar post dural puncture headache rates (4.5% and 2.4%, respectively). Other factors associated with post dural puncture headache were young age, early ambulation, and sedation during spinal anesthesia. There were no significant differences between needles in the incidence of post dural puncture backache. Our data indicate that Quincke needles should not be used with the needle bevel inserted perpendicular to the dural fibers. The Sprotte needle does not solve the problem of post dural puncture headache and backache.
Wada, Keizo; Hamada, Daisuke; Tamaki, Shunsuke; Higashino, Kosaku; Fukui, Yoshihiro; Sairyo, Koichi
2017-01-01
Previous studies suggested that changes in kinematics in total knee arthroplasty (TKA) affected satisfaction level. The aim of this cadaveric study was to evaluate the effect of medial collateral ligament (MCL) release by multiple needle puncture on knee rotational kinematics in posterior-stabilized TKA. Six fresh, frozen cadaveric knees were included in this study. All TKA procedures were performed with an image-free navigation system using a 10-mm polyethylene insert. Tibial internal rotation was assessed to evaluate intraoperative knee kinematics. Multiple needle puncturing was performed 5, 10, and 15 times on the hard portion of the MCL at 90° knee flexion. Kinematic analysis was performed after every 5 punctures. After performing 15 punctures, a 14-mm polyethylene insert was inserted, and kinematic analysis was performed. The tibial internal rotation angle at maximum knee flexion without multiple needle puncturing was significantly larger (9.42°) than that after 15 punctures (3°). A negative correlation (Pearson r = -0.715, P < .001) between the tibial internal rotation angle at maximum knee flexion and the number of punctures was observed. The tibial internal rotation angle with the 14-mm insert was significantly larger (7.25°) compared with the angle after 15 punctures. Tibial internal rotation during knee flexion was reduced by extensive MCL release using multiple needle puncturing and was recovered by increasing medial tightness. From the point of view of knee kinematics, medial tightness should be maintained to preserve the internal rotation angle of the tibia during knee flexion, which might lead to patient satisfaction. Copyright © 2016 Elsevier Inc. All rights reserved.
Wasmer, Kristina; Zellerhoff, Stephan; Köbe, Julia; Mönnig, Gerold; Pott, Christian; Dechering, Dirk G; Lange, Philipp S; Frommeyer, Gerrit; Eckardt, Lars
2017-03-01
Transseptal punctures (TSP) are routinely performed in cardiac interventions requiring access to the left heart. While pericardial effusion/tamponade are well-recognized complications, few data exist on accidental puncture of the aorta and its management and outcome. We therefore analysed our single centre database for this complication. We assessed frequency and outcome of inadvertent aortic puncture during TSP in consecutive patients undergoing ablation procedures between January 2005 and December 2014. During the 10-year period, two inadvertent aortic punctures occurred among 2936 consecutive patients undergoing 4305 TSP (0.07% of patients, 0.05% of TSP) and in one Mustard patient during attempted baffle puncture. The first two patients required left ventricular access for catheter ablation of ventricular tachycardia. In both cases, an 11.5F steerable sheath (inner diameter 8.5F) was accidentally placed in the ascending aorta just above the aortic valve. In the presence of surgical standby, the sheaths were pulled back with a wire left in the aorta. Under careful haemodynamic and echocardiographic observation, this wire was also pulled back 30 min later. None of the patients required a closing device or open heart surgery. None of the patients suffered complications from the accidental aortic puncture and sheath placement. Inadvertent aortic puncture and sheath placement are rare complications in patients undergoing TSP for interventional procedures. Leaving a guidewire in place during the observation period may allow introduction of sheaths or other tools in order to control haemodynamic deterioration. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2016. For permissions please email: journals.permissions@oup.com.
Hamamoto, Shuzo; Unno, Rei; Taguchi, Kazumi; Ando, Ryosuke; Hamakawa, Takashi; Naiki, Taku; Okada, Shinsuke; Inoue, Takaaki; Okada, Atsushi; Kohri, Kenjiro; Yasui, Takahiro
2017-11-01
To evaluate the clinical utility of a new navigation technique for percutaneous renal puncture using real-time virtual sonography (RVS) during endoscopic combined intrarenal surgery. Thirty consecutive patients who underwent endoscopic combined intrarenal surgery for renal calculi, between April 2014 and July 2015, were divided into the RVS-guided puncture (RVS; n = 15) group and the ultrasonography-guided puncture (US; n = 15) group. In the RVS group, renal puncture was repeated until precise piercing of a papilla was achieved under direct endoscopic vision, using the RVS system to synchronize the real-time US image with the preoperative computed tomography image. In the US group, renal puncture was performed under US guidance only. In both groups, 2 urologists worked simultaneously to fragment the renal calculi after inserting the miniature percutaneous tract. The mean sizes of the renal calculi in the RVS and the US group were 33.5 and 30.5 mm, respectively. A lower mean number of puncture attempts until renal access through the calyx was needed for the RVS compared with the US group (1.6 vs 3.4 times, respectively; P = .001). The RVS group had a lower mean postoperative hemoglobin decrease (0.93 vs 1.39 g/dL, respectively; P = .04), but with no between-group differences with regard to operative time, tubeless rate, and stone-free rate. None of the patients in the RVS group experienced postoperative complications of a Clavien score ≥2, with 3 patients experiencing such complications in the US group. RVS-guided renal puncture was effective, with a lower incidence of bleeding-related complications compared with US-guided puncture. Copyright © 2017 Elsevier Inc. All rights reserved.
Convolutional Sparse Coding for RGB+NIR Imaging.
Hu, Xuemei; Heide, Felix; Dai, Qionghai; Wetzstein, Gordon
2018-04-01
Emerging sensor designs increasingly rely on novel color filter arrays (CFAs) to sample the incident spectrum in unconventional ways. In particular, capturing a near-infrared (NIR) channel along with conventional RGB color is an exciting new imaging modality. RGB+NIR sensing has broad applications in computational photography, such as low-light denoising; it has applications in computer vision, such as facial recognition and tracking; and it paves the way toward low-cost single-sensor RGB and depth imaging using structured illumination. However, cost-effective commercial CFAs suffer from severe spectral cross talk. This cross talk represents a major challenge in high-quality RGB+NIR imaging, rendering existing spatially multiplexed sensor designs impractical. In this work, we introduce a new approach to RGB+NIR image reconstruction using learned convolutional sparse priors. We demonstrate high-quality color and NIR imaging for challenging scenes, even including high-frequency structured NIR illumination. The effectiveness of the proposed method is validated on a large data set of experimental captures and on simulated benchmark results, which demonstrate that this work achieves unprecedented reconstruction quality.
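As a rough sketch of the sparse-coding machinery such methods build on, here is a single-channel convolutional sparse coding inference step solved with ISTA; the paper's learned RGB+NIR priors and full reconstruction pipeline are considerably more involved, and every filter and constant below is a placeholder of ours.

```python
import numpy as np
from scipy.signal import fftconvolve

def csc_ista(x, filters, lam=0.05, step=0.05, iters=200):
    """Minimize 0.5 * ||x - sum_k d_k * z_k||^2 + lam * sum_k ||z_k||_1
    over activation maps z_k by iterative shrinkage-thresholding (ISTA)."""
    zs = [np.zeros_like(x) for _ in filters]
    for _ in range(iters):
        recon = sum(fftconvolve(z, d, mode="same") for z, d in zip(zs, filters))
        resid = x - recon
        for k, d in enumerate(filters):
            # Gradient step: correlate the residual with the flipped filter.
            z = zs[k] + step * fftconvolve(resid, d[::-1, ::-1], mode="same")
            # Soft-thresholding enforces sparsity of the activations.
            zs[k] = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return zs

x = np.random.rand(64, 64)
filters = []
for _ in range(4):
    d = np.random.randn(9, 9)        # odd size keeps conv/correlation adjoint
    filters.append(d / np.abs(d).sum())  # l1-normalized keeps the step stable
zs = csc_ista(x, filters)
print(np.mean([np.mean(np.abs(z) > 0) for z in zs]))  # active-coefficient rate
```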
Piano Transcription with Convolutional Sparse Lateral Inhibition
Cogliati, Andrea; Duan, Zhiyao; Wohlberg, Brendt Egon
2017-02-08
This paper extends our prior work on context-dependent piano transcription to estimate the length of the notes in addition to their pitch and onset. The approach employs convolutional sparse coding along with lateral inhibition constraints to approximate a musical signal as the sum of piano note waveforms (dictionary elements) convolved with their temporal activations. The waveforms are pre-recorded for the specific piano to be transcribed in the specific environment. A dictionary containing multiple waveforms per pitch is generated by truncating a long waveform for each pitch to different lengths. During transcription, the dictionary elements are fixed, and their temporal activations are estimated and post-processed to obtain the pitch, onset and note length estimates. A sparsity penalty promotes globally sparse activations of the dictionary elements, and a lateral inhibition term penalizes concurrent activations of different waveforms corresponding to the same pitch within a temporal neighborhood, to achieve note length estimation. Experiments on the MAPS dataset show that the proposed approach significantly outperforms a state-of-the-art music transcription method trained in the same context-dependent setting in transcription accuracy.
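The lateral-inhibition constraint can be imitated by a simple greedy post-processing rule: accepting an activation suppresses all competing activations of the same pitch within a temporal window. The sketch below is our paraphrase of that idea, not the optimization penalty actually used in the paper.

```python
import numpy as np

def decode_with_inhibition(acts, pitch_of, window=50, thresh=0.1):
    """Greedy decoding with lateral inhibition.

    acts: (n_atoms, T) nonnegative activations, several atoms per pitch;
    pitch_of[k] is the pitch of atom k. Accepting the largest activation
    zeroes every activation of the same pitch within +/- window frames,
    so each pitch fires at most once per temporal neighborhood.
    """
    acts = acts.copy()
    events = []
    while True:
        k, t = np.unravel_index(np.argmax(acts), acts.shape)
        if acts[k, t] < thresh:
            break
        events.append((pitch_of[k], t, float(acts[k, t])))
        same = [j for j in range(acts.shape[0]) if pitch_of[j] == pitch_of[k]]
        acts[same, max(0, t - window):t + window + 1] = 0.0
    return sorted(events, key=lambda e: e[1])

acts = np.abs(np.random.randn(6, 200)) * (np.random.rand(6, 200) < 0.01)
print(decode_with_inhibition(acts, pitch_of=[0, 0, 1, 1, 2, 2]))
```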
Moradi, Farhad; Mahdavi, Seyed Rabi; Mostaar, Ahmad; Motamedi, Mohsen
2012-01-01
In this study, the commissioning of a dose calculation algorithm in a currently used treatment planning system was performed, and the calculation accuracy of two methods available in the treatment planning system, i.e., collapsed cone convolution (CCC) and equivalent tissue air ratio (ETAR), was verified in tissue heterogeneities. For this purpose, an inhomogeneous phantom (IMRT thorax phantom) was used, and dose curves obtained by the TPS (treatment planning system) were compared with experimental measurements and Monte Carlo (MCNP code) simulation. Dose measurements were performed using EDR2 radiographic films within the phantom. The dose difference (DD) between the experimental results and the two calculation methods was obtained. Results indicate a maximum difference of 12% in the lung and 3% in the bone tissue of the phantom between the two methods, and the CCC algorithm shows more accurate depth dose curves in tissue heterogeneities. Simulation results show accurate dose estimation by MCNP4C in the soft tissue region of the phantom and also better results than the ETAR method in bone and lung tissues. PMID:22973081
EnzyNet: enzyme classification using 3D convolutional neural networks on spatial representation.
Amidi, Afshine; Amidi, Shervine; Vlachakis, Dimitrios; Megalooikonomou, Vasileios; Paragios, Nikos; Zacharaki, Evangelia I
2018-01-01
During the past decade, with the significant progress of computational power as well as ever-rising data availability, deep learning techniques became increasingly popular due to their excellent performance on computer vision problems. The size of the Protein Data Bank (PDB) has increased more than 15-fold since 1999, which enabled the expansion of models that aim at predicting enzymatic function via their amino acid composition. Amino acid sequence, however, is less conserved in nature than protein structure and therefore considered a less reliable predictor of protein function. This paper presents EnzyNet, a novel 3D convolutional neural networks classifier that predicts the Enzyme Commission number of enzymes based only on their voxel-based spatial structure. The spatial distribution of biochemical properties was also examined as complementary information. The two-layer architecture was investigated on a large dataset of 63,558 enzymes from the PDB and achieved an accuracy of 78.4% by exploiting only the binary representation of the protein shape. Code and datasets are available at https://github.com/shervinea/enzynet.
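As a loose sketch of a voxel-based 3D CNN classifier in the spirit of this abstract, the PyTorch model below maps a binary 32-cubed grid to six top-level Enzyme Commission classes; the layer sizes and grid resolution are our guesses for illustration, not the published EnzyNet configuration.

```python
import torch
import torch.nn as nn

# Toy two-conv-layer 3D CNN over binary voxel grids (32^3 here); the
# six outputs follow the EC top-level classes, everything else is ours.
model = nn.Sequential(
    nn.Conv3d(1, 32, kernel_size=9), nn.LeakyReLU(0.1), nn.MaxPool3d(2),
    nn.Conv3d(32, 64, kernel_size=5), nn.LeakyReLU(0.1), nn.MaxPool3d(2),
    nn.Flatten(),
    nn.Linear(64 * 4 * 4 * 4, 128), nn.ReLU(), nn.Dropout(0.4),
    nn.Linear(128, 6),               # one logit per EC top-level class
)

voxels = (torch.rand(8, 1, 32, 32, 32) < 0.1).float()  # batch of binary shapes
print(model(voxels).shape)  # torch.Size([8, 6])
```

Note how the binary occupancy grid alone drives the prediction, matching the abstract's point that shape, without amino acid identity, already carries most of the signal.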
Estimation of neutron energy distributions from prompt gamma emissions
NASA Astrophysics Data System (ADS)
Panikkath, Priyada; Udupi, Ashwini; Sarkar, P. K.
2017-11-01
A technique for estimating the incident neutron energy distribution from the prompt gamma intensities emitted by a system exposed to neutrons is presented. The emitted prompt gamma intensities, or the measured photo peaks in a gamma detector, are related to the incident neutron energy distribution through a convolution with the response of the system generating the prompt gammas to mono-energetic neutrons. Presently, the system studied is a cylinder of high density polyethylene (HDPE) placed inside another cylinder of borated HDPE (BHDPE) having an outer Pb cover and exposed to neutrons. The five prompt gamma peaks emitted from hydrogen, boron, carbon and lead can be utilized to unfold the incident neutron energy distribution as an under-determined deconvolution problem. Such an under-determined set of equations is solved using the genetic-algorithm-based Monte Carlo deconvolution code GAMCD. Feasibility of the proposed technique is demonstrated theoretically using the Monte Carlo calculated response matrix and intensities of emitted prompt gammas from the Pb-covered BHDPE-HDPE system in the case of several incident neutron spectra spanning different energy ranges.
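The unfolding problem is easy to set up in code. The sketch below builds a toy under-determined system g = R phi and solves it with non-negative least squares in place of the paper's genetic-algorithm code GAMCD, which we do not reproduce; the response matrix and spectrum are synthetic stand-ins.

```python
import numpy as np
from scipy.optimize import nnls

# Unfolding: measured prompt-gamma intensities g = R @ phi, with R the
# (n_gammas x n_energy_bins) response matrix from Monte Carlo. With five
# gamma peaks and many energy bins the system is under-determined.
rng = np.random.default_rng(0)
n_gammas, n_bins = 5, 20
R = rng.random((n_gammas, n_bins))        # stand-in response matrix
phi_true = rng.random(n_bins)             # "true" neutron spectrum
g = R @ phi_true                          # emitted gamma intensities

phi_est, resid = nnls(R, g)               # non-negativity as the only prior
print(resid, np.count_nonzero(phi_est))   # exact fit, but sparse/non-unique
```

The non-uniqueness visible here (an exact fit concentrated in a few bins) is exactly why the paper resorts to a stochastic global search with physically motivated constraints rather than a plain least-squares inversion.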
A Double Dwell High Sensitivity GPS Acquisition Scheme Using Binarized Convolution Neural Network
Wang, Zhen; Zhuang, Yuan; Yang, Jun; Zhang, Hengfeng; Dong, Wei; Wang, Min; Hua, Luchi; Liu, Bo; Shi, Longxing
2018-01-01
Conventional GPS acquisition methods, such as Max selection and threshold crossing (MAX/TC), estimate GPS code/Doppler by the correlation peak. Different from MAX/TC, a multi-layer binarized convolution neural network (BCNN) is proposed in this article to recognize the GPS acquisition correlation envelope. The proposed method is a double dwell acquisition in which a short integration is adopted in the first dwell and a long integration is applied in the second one. To reduce the search space for parameters, BCNN detects the possible envelope that contains the auto-correlation peak in the first dwell, compressing the initial search space to 1/1023. Although there is a long integration in the second dwell, the acquisition computation overhead is still low due to the compressed search space. Overall, the total computation overhead of the proposed method is only 1/5 of conventional ones. Experiments show that the proposed double dwell/correlation envelope identification (DD/CEI) neural network achieves a 2 dB improvement when compared with MAX/TC under the same specification. PMID:29747373
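Stripped of the neural network, the double-dwell logic can be sketched with plain FFT-based correlation: a short first dwell shrinks the code-phase search space to a few candidates, and a longer non-coherent integration scores only those. Everything below (the PN code, noise level, window size) is an illustrative stand-in of ours, not the DD/CEI system.

```python
import numpy as np

rng = np.random.default_rng(1)
code = rng.choice([-1.0, 1.0], size=1023)       # stand-in C/A-like PN code
TRUE_PHASE = 417
rx_ms = lambda: np.roll(code, TRUE_PHASE) + rng.normal(0.0, 8.0, 1023)

def correlate(rx):
    # Circular cross-correlation over all 1023 code phases via FFT.
    return np.abs(np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(code))))

# Dwell 1: short integration (1 ms), keep only a handful of candidate
# phases; this is the search-space compression of the first dwell.
c1 = correlate(rx_ms())
candidates = np.argsort(c1)[-10:]

# Dwell 2: long non-coherent integration (20 ms), scored on candidates only.
c2 = sum(correlate(rx_ms()) for _ in range(20))
best = candidates[np.argmax(c2[candidates])]
print(TRUE_PHASE, best)                          # long dwell pins the phase
```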
Large-Constraint-Length, Fast Viterbi Decoder
NASA Technical Reports Server (NTRS)
Collins, O.; Dolinar, S.; Hsu, In-Shek; Pollara, F.; Olson, E.; Statman, J.; Zimmerman, G.
1990-01-01
Scheme for efficient interconnection makes VLSI design feasible. Concept for fast Viterbi decoder provides for processing of convolutional codes of constraint length K up to 15 and rates of 1/2 to 1/6. Fully parallel (but bit-serial) architecture developed for decoder of K = 7 implemented in single dedicated VLSI circuit chip. Contains six major functional blocks. VLSI circuits perform branch metric computations, add-compare-select operations, and then store decisions in traceback memory. Traceback processor reads appropriate memory locations and puts out decoded bits. Used as building block for decoders of larger K.
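For reference, the three blocks the summary names (branch metrics, add-compare-select, traceback) appear in any software Viterbi decoder as well. Here is a minimal hard-decision Python sketch for the standard K = 7, rate-1/2 code with (171, 133) octal generators; the VLSI design is of course parallel and bit-serial rather than looped like this, and the tail-bit flushing is our simplification.

```python
import random

G = (0o171, 0o133)                # standard K = 7, rate-1/2 generators
K = 7
NSTATES = 1 << (K - 1)
parity = lambda x: bin(x).count("1") & 1

def encode(bits):
    sr, out = 0, []
    for b in bits:
        sr = ((sr << 1) | b) & ((1 << K) - 1)
        out += [parity(sr & g) for g in G]    # two coded bits per input bit
    return out

def viterbi(sym):
    INF = 10 ** 9
    pm = [0] + [INF] * (NSTATES - 1)          # path metrics; start in state 0
    history = []
    for i in range(0, len(sym), 2):
        r = sym[i:i + 2]
        new, dec = [INF] * NSTATES, [None] * NSTATES
        for s in range(NSTATES):
            if pm[s] == INF:
                continue
            for b in (0, 1):                   # add-compare-select
                sr = ((s << 1) | b) & ((1 << K) - 1)
                nxt = sr & (NSTATES - 1)
                bm = sum(parity(sr & g) != rb for g, rb in zip(G, r))
                if pm[s] + bm < new[nxt]:
                    new[nxt], dec[nxt] = pm[s] + bm, (s, b)
        pm = new
        history.append(dec)
    s = min(range(NSTATES), key=pm.__getitem__)   # traceback from best state
    bits = []
    for dec in reversed(history):
        s, b = dec[s]
        bits.append(b)
    return bits[::-1]

msg = [random.randint(0, 1) for _ in range(40)] + [0] * (K - 1)  # tail flush
assert viterbi(encode(msg)) == msg
```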
A pipeline design of a fast prime factor DFT on a finite field
NASA Technical Reports Server (NTRS)
Truong, T. K.; Hsu, In-Shek; Shao, H. M.; Reed, Irving S.; Shyu, Hsuen-Chyun
1988-01-01
A conventional prime factor discrete Fourier transform (DFT) algorithm is used to realize a discrete Fourier-like transform on the finite field GF(q^n). This algorithm is developed to compute cyclic convolutions of complex numbers and to decode Reed-Solomon codes. Such a pipeline fast prime factor DFT algorithm over GF(q^n) is regular, simple, expandable, and naturally suitable for VLSI implementation. An example illustrating the pipeline aspect of a 30-point transform over GF(q^n) is presented.
NASA Technical Reports Server (NTRS)
Kwatra, S. C.
1998-01-01
A large number of papers have been published attempting to give some analytical basis for the performance of Turbo-codes. It has been shown that performance improves with increased interleaver length. Procedures have also been given to pick the best constituent recursive systematic convolutional codes (RSCCs). However, testing by computer simulation is still required to verify these results. This thesis begins by describing the encoding and decoding schemes used. Next, simulation results on several memory-4 RSCCs are shown. It is found that the best BER performance at low Eb/N0 is not given by the RSCCs that were found using the analytic techniques given so far. Next, results are given from simulations using a smaller-memory RSCC for one of the constituent encoders. A significant reduction in decoding complexity is obtained with minimal loss in performance. Simulation results are then given for a rate 1/3 Turbo-code, with the result that this code performed as well as a rate 1/2 Turbo-code as measured by the distance from their respective Shannon limits. Finally, the results of simulations where an inaccurate noise variance measurement was used are given. From this it was observed that Turbo-decoding is fairly stable with regard to noise variance measurement.
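For concreteness, a recursive systematic convolutional encoder of the kind such simulations use can be written in a few lines. The sketch below uses the memory-4 feedback/feedforward pair (37, 21) in octal, a common choice in early turbo-code work; the thesis's own code selections may differ.

```python
def rsc_encode(bits, fb=0o37, ff=0o21, memory=4):
    """Memory-4 recursive systematic convolutional (RSC) encoder.

    fb/ff are the feedback and feedforward polynomials (octal, MSB =
    current bit). Returns (systematic bit, parity bit) pairs.
    """
    parity = lambda x: bin(x).count("1") & 1
    mask = (1 << memory) - 1
    sr, out = 0, []                      # sr holds the last `memory` d-bits
    for b in bits:
        d = b ^ parity(sr & fb & mask)   # recursive feedback into the register
        w = (d << memory) | sr           # current bit plus register contents
        out.append((b, parity(w & ff)))  # systematic output + parity output
        sr = (w >> 1) & mask             # shift the register
    return out

print(rsc_encode([1, 0, 1, 1, 0, 0, 1, 0]))
```

A turbo encoder would run two such encoders, the second on an interleaved copy of the input, and transmit the systematic stream once together with both parity streams.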
Medical reliable network using concatenated channel codes through GSM network.
Ahmed, Emtithal; Kohno, Ryuji
2013-01-01
Although the 4th generation (4G) of the global mobile communication network, i.e., Long Term Evolution (LTE) coexisting with the 3rd generation (3G), has successfully started, the 2nd generation (2G), i.e., the Global System for Mobile communication (GSM), still plays an important role in many developing countries. Without any other reliable network infrastructure, GSM can be applied for tele-monitoring applications, where high mobility and low cost are necessary. A core objective of this paper is to introduce the design of a more reliable and dependable Medical Network Channel Code system (MNCC) over the GSM network. The MNCC design is based on a simple concatenated channel code, a cascade of an inner code (GSM) and an extra outer code (a convolutional code), in order to protect medical data more robustly against channel errors than other data using the existing GSM network. In this paper, the MNCC system provides a bit error rate (BER) suitable for medical tele-monitoring of physiological signals, namely 10^-5 or less. The performance of the MNCC has been investigated and verified using computer simulations under different channel conditions, such as additive white Gaussian noise (AWGN), Rayleigh fading and burst noise. In general, the MNCC system provides better performance than GSM alone.
Hongzhang, Hong; Xiaojuan, Qin; Shengwei, Zhang; Feixiang, Xiang; Yujie, Xu; Haibing, Xiao; Gallina, Kazobinka; Wen, Ju; Fuqing, Zeng; Xiaoping, Zhang; Mingyue, Ding; Huageng, Liang; Xuming, Zhang
2018-05-17
To evaluate the effect of real-time three-dimensional (3D) ultrasonography (US) in guiding percutaneous nephrostomy (PCN). A hydronephrosis model was devised in which the ureters of 16 beagles were obstructed. The beagles were divided equally into groups 1 and 2. In group 1, the PCN was performed using real-time 3D US guidance, while in group 2 the PCN was guided using two-dimensional (2D) US. Visualization of the needle tract, length of puncture time and number of puncture times were recorded for the two groups. In group 1, score for visualization of the needle tract, length of puncture time and number of puncture times were 3, 7.3 ± 3.1 s and one time, respectively. In group 2, the respective results were 1.4 ± 0.5, 21.4 ± 5.8 s and 2.1 ± 0.6 times. The visualization of needle tract in group 1 was superior to that in group 2, and length of puncture time and number of puncture times were both lower in group 1 than in group 2. Real-time 3D US-guided PCN is superior to 2D US-guided PCN in terms of visualization of needle tract and the targeted pelvicalyceal system, leading to quick puncture. Real-time 3D US-guided puncture of the kidney holds great promise for clinical implementation in PCN. © 2018 The Authors BJU International © 2018 BJU International Published by John Wiley & Sons Ltd.
NASA Technical Reports Server (NTRS)
Hastings, E. C., Jr.
1963-01-01
Explorer XVI (1962 Beta Chi l) data that have been analyzed for the period between December 16, 1962 (launch date), and January 13, 1963, indicate that the orbit achieved was close to the predicted orbit. Ten punctures of annealed 0.001-inch-thick beryllium-copper have been used to determine a puncture rate of 0.035 per square foot per day in this material. One puncture of a 0.002-inch-thick sample has also occurred in this period. A tentative evaluation of the puncture rate for the 0.001-inch beryllium-copper in terms of the rate for an equivalent thickness of aluminum has been attempted, and the result has been compared with two different puncture rate estimates. The three micrometeoroid impact detecting systems are operating. Counting rates for the high- and low-sensitivity systems were close to anticipated values near the end of one week. Two of the 0.001-inch-steel-covered grid detectors have been punctured, but none of the 0.003- or 0.006-inch-steel-covered grid detectors have indicated punctures. One of the cadmium sulfide cells indicates three punctures of the 0.00025-inch Mylar cover. None of the 0.002- or 0.003-inch-copper-wire cards have indicated a break in the period covered. Telemetry temperatures were initially higher than expected although they remained well within operating limits. Sensor temperatures have remained within the expected bounds.
Performance of coded MFSK in a Rician fading channel. [Multiple Frequency Shift Keyed modulation]
NASA Technical Reports Server (NTRS)
Modestino, J. W.; Mui, S. Y.
1975-01-01
The performance of convolutional codes in conjunction with noncoherent multiple frequency shift-keyed (MFSK) modulation and Viterbi maximum likelihood decoding on a Rician fading channel is examined in detail. While the primary motivation underlying this work has been concerned with system performance on the planetary entry channel, it is expected that the results are of considerably wider interest. Particular attention is given to modeling the channel in terms of a few meaningful parameters which can be correlated closely with the results of theoretical propagation studies. Fairly general upper bounds on bit error probability performance in the presence of fading are derived and compared with simulation results using both unquantized and quantized receiver outputs. The effects of receiver quantization and channel memory are investigated and it is concluded that the coded noncoherent MFSK system offers an attractive alternative to coherent BPSK in providing reliable low data rate communications in fading channels typical of planetary entry missions.
The NASA Spacecraft Transponding Modem
NASA Technical Reports Server (NTRS)
Berner, Jeff B.; Kayalar, Selahattin; Perret, Jonathan D.
2000-01-01
A new deep space transponder is being developed by the Jet Propulsion Laboratory for NASA. The Spacecraft Transponding Modem (STM) implements the standard transponder functions and the channel service functions that have previously resided in spacecraft Command/Data Subsystems. The STM uses custom ASICs, MMICs, and MCMs to reduce the active device parts count to 70, mass to 1 kg, and volume to 524 cc. The first STMs will be flown on missions launching in the 2003 time frame. The STM tracks an X-band uplink signal and provides both X-band and Ka-band downlinks, either coherent or non-coherent with the uplink. A NASA standard Command Detector Unit is integrated into the STM, along with a codeblock processor and a hardware command decoder. The decoded command codeblocks are output to the spacecraft command/data subsystem. Virtual Channel 0 (VC-0) (hardware) commands are processed and output as critical controller (CRC) commands. Downlink telemetry is received from the spacecraft data subsystem as telemetry frames. The STM provides the following downlink coding options: the standard CCSDS (7,1/2) convolutional coding, Reed-Solomon coding with interleave depths one and five, (15,1/6) convolutional coding, and Turbo coding with rates 1/3 and 1/6. The downlink symbol rates can be linearly ramped to match the G/T curve of the receiving station, providing up to a 1 dB increase in data return. Data rates range from 5 bits per second (bps) to 24 Mbps, with three modulation modes provided: modulated subcarrier (3 different frequencies provided), biphase-L modulated direct on carrier, and Offset QPSK. Also, the capability to generate one of four non-harmonically related telemetry beacon tones is provided, to allow for a simple spacecraft status monitoring scheme for cruise phases of missions. Three ranging modes are provided: standard turn-around ranging, regenerative pseudo-noise (PN) ranging, and Differential One-way Ranging (DOR) tones. The regenerative ranging provides the capability of increasing the ground received ranging SNR by up to 30 dB. Two different avionics interfaces to the command/data subsystem's data bus are provided: a MIL-STD-1553B bus or an industry standard PCI interface. Digital interfaces provide the capability to control antenna selection (e.g., switching between high gain and low gain antennas) and antenna pointing (for future steered Ka-band antennas).
DOT National Transportation Integrated Search
2001-11-01
This report is the second in a series focusing on methods to determine the puncture velocity of railroad tank car shells. In this context, puncture velocity refers to the impact velocity at which a coupler will completely pierce the shell and punct...
Doherty, Carolynne M; Forbes, Raeburn B
2014-01-01
Diagnostic Lumbar Puncture is one of the most commonly performed invasive tests in clinical medicine. Evaluation of an acute headache and investigation of inflammatory or infectious disease of the nervous system are the most common indications. Serious complications are rare, and correct technique will minimise diagnostic error and maximise patient comfort. We review the technique of diagnostic Lumbar Puncture including anatomy, needle selection, needle insertion, measurement of opening pressure, Cerebrospinal Fluid (CSF) specimen handling and after care. We also make some quality improvement suggestions for those designing services incorporating diagnostic Lumbar Puncture. PMID:25075138
Jabbari, Ali; Alijanpour, Ebrahim; Mir, Mehrafza; Bani hashem, Nadia; Rabiea, Seyed Mozaffar; Rupani, Mohammad Ali
2013-01-01
Post spinal puncture headache (PSPH) is a well known complication of spinal anesthesia. It occurs after spinal anesthesia induction due to dural and arachnoid puncture and has a significant effect on the patient's postoperative well-being. This manuscript is based on an observational study conducted at Babol University of Medical Sciences and a review of the literature on current concepts regarding the incidence, risk factors and predisposing factors of post spinal puncture headache. The overall incidence of post-dural puncture headache after intentional dural puncture varies from 0.1% to 36%, while it is about 3.1% with the atraumatic 25G Whitacre spinal needle. The 25G Quincke needle, with a medium cutting bevel, is in widespread use, and the associated incidence of PSPH is about 25%; however, the incidence obtained with the 25G Quincke spinal needle in our observation was 17.3%. Predisposing factors such as female sex, young age, pregnancy, low body mass index, multiple dural punctures, inexperienced operators and a past medical history of chronic headache expose the patient to PSPH. The identification of factors that predict the likelihood of PSPH is important so that measures can be taken to minimize this painful complication resulting from spinal anesthesia. PMID:24009943
Reliability and performance of innovative surgical double-glove hole puncture indication systems.
Edlich, Richard F; Wind, Tyler C; Heather, Cynthia L; Thacker, John G
2003-01-01
During operative procedures, operating room personnel wear sterile surgical gloves designed to protect them and their patients against transmissible infections. The Food and Drug Administration (FDA) has set compliance policy guides for manufacturers of gloves. The FDA allows surgeons' gloves whose leakage defect rates do not exceed 1.5 acceptable quality level (AQL) to be used in operating rooms. The implications of this policy are potentially enormous to operating room personnel and patients. This unacceptable risk to the personnel and patient could be significantly reduced by the use of sterile double surgical gloves. Because double-gloves are also susceptible to needle puncture, a double-glove hole indication system is urgently needed to immediately detect surgical needle glove punctures. This warning would allow surgeons to remove the double-gloves, wash their hands, and then don a sterile set of double-gloves with an indication system. During the last decade, Regent Medical has devised non-latex and latex double-glove hole puncture indication systems. The purpose of this comprehensive study is to detect the accuracy of the non-latex and latex double-glove hole puncture indication systems using five commonly used sterile surgical needles: the taper point surgical needle, tapercut surgical needle, reverse cutting edge surgical needle, taper cardiopoint surgical needle, and spatula surgical needle. After subjecting both the non-latex and latex double-glove hole puncture indication systems to surgical needle puncture in each glove fingertip, these double-glove systems were immersed in a sterile basin of saline, after which the double-gloved hands manipulated surgical instruments. Within two minutes, both the non-latex and latex hole puncture indication systems accurately detected needle punctures in all of the surgical gloves, regardless of the dimensions of the surgical needles. In addition, the size of the color change visualized through the translucent outer glove did not correlate with needle diameter. On the basis of this extensive experimental evaluation, both the non-latex and latex double-glove hole puncture indication systems should be used in all operative procedures by all operating room personnel.
Riga, Celia V; Bicknell, Colin D; Basra, Melvinder; Hamady, Mohamad; Cheshire, Nicholas J W
2013-08-01
To investigate the quality of stent-graft fenestrations created in vitro using different needle puncture and balloon dilation angles in different commercial endografts. Fenestrations were made in a standardized fashion in 3 different endograft types: Talent monofilament twill woven polyester, Zenith multifilament tubular woven polyester, and Endofit thin-walled expanded polytetrafluoroethylene (PTFE). Punctures were made at 30°, 60°, and 90° angles using a 20-G needle and dilated using 6-mm standard and 7-mm cutting balloons; at least 6 fenestrations were made at each angle with standard balloons and at least 6 with cutting balloons. The 137 fenestrations were examined under light microscopy; quantitative and qualitative digital image analysis was performed to determine size, shape, and fenestration quality. PTFE grafts were easier to puncture/dilate, resulting in larger, elliptical fenestrations with overall better quality than the Dacron grafts; however, the puncture/dilation angle made an impact on the shape and quality of fenestrations. A significant number of fabric tears were observed in PTFE fabric at <90° puncture/dilation angles compared to Dacron grafts. In Dacron grafts, fenestration quality was significantly higher with 90° puncture/dilation angles (higher in Talent grafts). Cutting balloon use resulted in significantly more fabric tears and poor quality fenestrations in all graft types. Different endografts behave significantly differently when fenestrations are fashioned. Optimum puncture/dilation is important when considering in vivo fenestration techniques. Improvements in instrumentation, materials, and techniques are required to make this a reliable and reproducible endovascular option.
Membrillo-Romero, Alejandro; Gonzalez-Lanzagorta, Rubén; Rascón-Martínez, Dulce María
Puncture biopsy and fine needle aspiration guided by endoscopic ultrasound has been used as an effective technique and is quickly becoming the procedure of choice for diagnosis and staging in patients suspected of having pancreatic cancer. This procedure has replaced retrograde cholangiopancreatography and brush cytology due to its higher sensitivity for diagnosis and lower risk of complications. To assess the levels of the pancreatic enzymes amylase and lipase after puncture biopsy and fine needle aspiration guided by endoscopic ultrasound in pancreatic lesions, and the frequency of post-puncture acute pancreatitis. A longitudinal and descriptive study of consecutive cases was performed on outpatients undergoing puncture biopsy and fine needle aspiration guided by endoscopic ultrasound in pancreatic lesions. Levels of the pancreatic enzymes amylase and lipase were measured before and after the pancreatic puncture. Finally, we documented post-puncture pancreatitis cases. A total of 100 patients who had been diagnosed with solid and cystic lesions were included in the study. Significant elevation, at twice the reference value, was found for lipase in 5 cases (5%) and for amylase in 2 cases (2%); none had clinical symptoms of acute pancreatitis. Eight (8%) of the patients presented with mild nonspecific pain, with no enzyme elevation compatible with pancreatitis. Pancreatic needle aspiration biopsy guided by endoscopic ultrasound was associated with a low rate of elevated pancreatic enzymes, and there were no cases of post-puncture pancreatitis. Copyright © 2016 Academia Mexicana de Cirugía A.C. Publicado por Masson Doyma México S.A. All rights reserved.
STDP-based spiking deep convolutional neural networks for object recognition.
Kheradpisheh, Saeed Reza; Ganjtabesh, Mohammad; Thorpe, Simon J; Masquelier, Timothée
2018-03-01
Previous studies have shown that spike-timing-dependent plasticity (STDP) can be used in spiking neural networks (SNN) to extract visual features of low or intermediate complexity in an unsupervised manner. These studies, however, used relatively shallow architectures, and only one layer was trainable. Another line of research has demonstrated, using rate-based neural networks trained with back-propagation, that having many layers increases the recognition robustness, an approach known as deep learning. We thus designed a deep SNN, comprising several convolutional (trainable with STDP) and pooling layers. We used a temporal coding scheme where the most strongly activated neurons fire first, and less activated neurons fire later or not at all. The network was exposed to natural images. Thanks to STDP, neurons progressively learned features corresponding to prototypical patterns that were both salient and frequent. Only a few tens of examples per category were required and no label was needed. After learning, the complexity of the extracted features increased along the hierarchy, from edge detectors in the first layer to object prototypes in the last layer. Coding was very sparse, with only a few thousand spikes per image, and in some cases the object category could be reasonably well inferred from the activity of a single higher-order neuron. More generally, the activity of a few hundred such neurons contained robust category information, as demonstrated using a classifier on the Caltech 101, ETH-80, and MNIST databases. We also demonstrate the superiority of STDP over other unsupervised techniques such as random crops (HMAX) or auto-encoders. Taken together, our results suggest that the combination of STDP with latency coding may be a key to understanding the way that the primate visual system learns, its remarkable processing speed and its low energy consumption. These mechanisms are also interesting for artificial vision systems, particularly for hardware solutions. Copyright © 2017 Elsevier Ltd. All rights reserved.
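A toy version of the mechanism described above, pairing intensity-to-latency coding with a simplified order-based STDP rule, can be sketched as follows. The update (potentiate synapses whose pre-spike precedes the post-spike, depress the rest, with a w(1-w) factor keeping weights bounded) follows the general scheme of such models; the threshold, learning rates and single-neuron setup are our simplifications, not the paper's network.

```python
import numpy as np

rng = np.random.default_rng(0)

def latency_code(x):
    """Intensity-to-latency coding: stronger inputs spike earlier."""
    return 1.0 - x                       # spike time in [0, 1] for x in [0, 1]

def stdp_step(w, t_pre, t_post, a_plus=0.004, a_minus=-0.003):
    """Order-based pair STDP: the sign depends only on spike order."""
    causal = t_pre <= t_post
    dw = np.where(causal, a_plus, a_minus) * w * (1.0 - w)
    return np.clip(w + dw, 0.0, 1.0)

# One integrate-and-fire neuron learning over repeated presentations.
n_in = 100
w = rng.uniform(0.3, 0.7, n_in)
pattern = rng.random(n_in)               # the "frequent" input pattern
for _ in range(200):
    t_pre = latency_code(pattern)
    acc, t_post = 0.0, 1.0
    for i in np.argsort(t_pre):          # earliest spikes arrive first
        acc += w[i]
        if acc > 10.0:                   # threshold crossing = post-spike
            t_post = t_pre[i]
            break
    w = stdp_step(w, t_pre, t_post)

# Weights converge toward the earliest-firing (strongest) inputs.
print(np.corrcoef(pattern, w)[0, 1])
```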
16 CFR 1500.18 - Banned toys and other banned articles intended for use by children.
Code of Federal Regulations, 2011 CFR
2011-01-01
..., or loose small objects that have the potential for causing lacerations, puncture wound injury... deliberately removed by a child, which toy has the potential for causing laceration, puncture wound injury... external components that have the potential for causing laceration, puncture wound injury, or other similar...
Technological advances and changing indications for lumbar puncture in neurological disorders.
Costerus, Joost M; Brouwer, Matthijs C; van de Beek, Diederik
2018-03-01
Technological advances have changed the indications for and the way in which lumbar puncture is done. Suspected CNS infection remains the most common indication for lumbar puncture, but new molecular techniques have broadened CSF analysis indications, such as the determination of neuronal autoantibodies in autoimmune encephalitis. New screening techniques have increased sensitivity for pathogen detection and can be used to identify pathogens that were previously unknown to cause CNS infections. Evidence suggests that potential treatments for neurodegenerative diseases, such as Alzheimer's disease, will rely on early detection of the disease with the use of CSF biomarkers. In addition to being used as a diagnostic tool, lumbar puncture can also be used to administer intrathecal treatments as shown by studies of antisense oligonucleotides in patients with spinal muscular atrophy. Lumbar puncture is generally a safe procedure but complications can occur, ranging from minor (eg, back pain) to potentially devastating (eg, cerebral herniation). Evidence that an atraumatic needle tip design reduces complications of lumbar puncture is compelling, and reinforces the need to change clinical practice. Copyright © 2018 Elsevier Ltd. All rights reserved.
The Space Telescope SI C&DH system [Scientific Instrument Control and Data Handling Subsystem]
NASA Technical Reports Server (NTRS)
Gadwal, Govind R.; Barasch, Ronald S.
1990-01-01
The Hubble Space Telescope Scientific Instrument Control and Data Handling Subsystem (SI C&DH) is designed to interface with five scientific instruments of the Space Telescope to provide ground and autonomous control and collect health and status information using the Standard Telemetry and Command Components (STACC) multiplex data bus. It also formats high-throughput science data into packets. The packetized data is interleaved, Reed-Solomon encoded for error correction, and pseudo-random encoded. An inner convolutional code concatenated with the outer Reed-Solomon code provides excellent error correction capability. The subsystem is designed with the capacity for orbital replacement in order to meet a mission life of fifteen years. The spacecraft computer and the SI C&DH computer coordinate the activities of the spacecraft and the scientific instruments to achieve the mission objectives.
Covariance Matrix Evaluations for Independent Mass Fission Yields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Terranova, N., E-mail: nicholas.terranova@unibo.it; Serot, O.; Archier, P.
2015-01-15
Recent needs for more accurate fission product yields include covariance information to allow improved uncertainty estimations of the parameters used by design codes. The aim of this work is to investigate the possibility to generate more reliable and complete uncertainty information on independent mass fission yields. Mass yield covariances are estimated through a convolution between the multi-Gaussian empirical model based on Brosa's fission modes, which describes the pre-neutron mass yields, and the average prompt neutron multiplicity curve. The covariance generation task has been approached using the Bayesian generalized least squares method through the CONRAD code. Preliminary results on the mass yield variance-covariance matrix will be presented and discussed on physical grounds in the case of the ²³⁵U(n_th, f) and ²³⁹Pu(n_th, f) reactions.
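The construction described above - a multi-Gaussian pre-neutron mass yield model folded with the average prompt-neutron multiplicity, with uncertainty propagated into a covariance matrix - can be imitated numerically. The Python sketch below is illustrative only: the two-mode Gaussian parameters, the sawtooth nu-bar, and the plain Monte Carlo sampling (in place of Bayesian generalized least squares in CONRAD) are all our assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.arange(70, 171)                       # fragment mass grid (toy)

def pre_neutron_yield(mu, sigma):
    """Two Brosa-like Gaussian modes, mirrored about symmetric fission."""
    y = np.zeros_like(A, dtype=float)
    for m, s, c in zip(mu, sigma, (0.6, 0.4)):
        y += c * (np.exp(-0.5 * ((A - m) / s) ** 2)
                  + np.exp(-0.5 * ((A - (236 - m)) / s) ** 2))
    return y / y.sum()

def post_neutron_yield(mu, sigma):
    """Fold in a toy sawtooth nu-bar by shifting each mass (a crude stand-in
    for the convolution with the prompt-neutron multiplicity curve)."""
    y_pre = pre_neutron_yield(mu, sigma)
    y_post = np.zeros_like(y_pre)
    for i, a in enumerate(A):
        nu = 1.0 + 0.02 * (a - A.min())
        j = np.clip(int(round(a - nu)) - A.min(), 0, len(A) - 1)
        y_post[j] += y_pre[i]
    return y_post

# Propagate parameter uncertainty by sampling, then take the sample covariance.
samples = [post_neutron_yield(mu=[rng.normal(95, 0.5), rng.normal(100, 0.5)],
                              sigma=[rng.normal(5, 0.2), rng.normal(7, 0.2)])
           for _ in range(1000)]
cov = np.cov(np.array(samples), rowvar=False)   # mass-yield covariance matrix
print(cov.shape)                                 # (101, 101)
```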
A model for the accurate computation of the lateral scattering of protons in water
NASA Astrophysics Data System (ADS)
Bellinzona, E. V.; Ciocca, M.; Embriaco, A.; Ferrari, A.; Fontana, A.; Mairani, A.; Parodi, K.; Rotondi, A.; Sala, P.; Tessonnier, T.
2016-02-01
A pencil beam model for the calculation of the lateral scattering in water of protons for any therapeutic energy and depth is presented. It is based on the full Molière theory, taking into account the energy loss and the effects of mixtures and compounds. Concerning the electromagnetic part, the model has no free parameters and is in very good agreement with the FLUKA Monte Carlo (MC) code. The effects of the nuclear interactions are parametrized with a two-parameter tail function, adjusted on MC data calculated with FLUKA. The model, after the convolution with the beam and the detector response, is in agreement with recent proton data in water from HIT. The model gives results with the same accuracy of the MC codes based on Molière theory, with a much shorter computing time.
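The final step of such a model - convolving the single-proton lateral profile (a multiple-scattering core plus a two-parameter nuclear tail) with the beam response - is easy to illustrate numerically. In the Python sketch below every functional form and parameter value is invented for illustration; the paper's actual Molière core and fitted tail are not reproduced.

```python
import numpy as np

x = np.linspace(-30.0, 30.0, 601)             # lateral position grid (mm)
dx = x[1] - x[0]

def lateral_profile(sigma_ms=3.0, b=0.02, c=8.0):
    """Gaussian multiple-scattering core plus a two-parameter heavy tail
    standing in for nuclear interactions (assumed form and values)."""
    core = np.exp(-0.5 * (x / sigma_ms) ** 2) / (sigma_ms * np.sqrt(2 * np.pi))
    tail = b / (1.0 + (x / c) ** 2)
    f = core + tail
    return f / np.trapz(f, x)

def beam_response(sigma_beam=4.0):
    """Assumed Gaussian beam-spot/detector response."""
    g = np.exp(-0.5 * (x / sigma_beam) ** 2)
    return g / np.trapz(g, x)

# The "measured" profile is the single-proton profile convolved with the beam.
measured = np.convolve(lateral_profile(), beam_response(), mode="same") * dx
print("normalization after convolution:", round(float(np.trapz(measured, x)), 3))
```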
The Communication Link and Error ANalysis (CLEAN) simulator
NASA Technical Reports Server (NTRS)
Ebel, William J.; Ingels, Frank M.; Crowe, Shane
1993-01-01
During the period July 1, 1993 through December 30, 1993, significant developments to the Communication Link and Error ANalysis (CLEAN) simulator were completed and include: (1) Soft decision Viterbi decoding; (2) node synchronization for the Soft decision Viterbi decoder; (3) insertion/deletion error programs; (4) convolutional encoder; (5) programs to investigate new convolutional codes; (6) pseudo-noise sequence generator; (7) soft decision data generator; (8) RICE compression/decompression (integration of RICE code generated by Pen-Shu Yeh at Goddard Space Flight Center); (9) Markov Chain channel modeling; (10) percent complete indicator when a program is executed; (11) header documentation; and (12) help utility. The CLEAN simulation tool is now capable of simulating a very wide variety of satellite communication links including the TDRSS downlink with RFI. The RICE compression/decompression schemes allow studies to be performed on error effects on RICE decompressed data. The Markov Chain modeling programs allow channels with memory to be simulated. Memory results from filtering, forward error correction encoding/decoding, differential encoding/decoding, channel RFI, nonlinear transponders and from many other satellite system processes. Besides the development of the simulation, a study was performed to determine whether the PCI provides a performance improvement for the TDRSS downlink. RFI with several duty cycles exists on the TDRSS downlink. We conclude that the PCI does not improve performance for any of these interferers except possibly one that occurs for TDRS East. Therefore, the usefulness of the PCI is a function of the time spent transmitting data to the WSGT through the TDRS East transponder.
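Module (4) above, a convolutional encoder, is simple enough to sketch. The following Python function implements a generic rate-1/2 feedforward encoder; the (171, 133) octal, constraint-length-7 generators are the common NASA-standard choice and only an assumption here, since the abstract does not specify CLEAN's polynomials.

```python
def conv_encode(bits, g1=0o171, g2=0o133, k=7):
    """Rate-1/2 feedforward convolutional encoder: each input bit is shifted
    into a k-stage register, and two output bits are formed as the parity
    (XOR) of the register taps selected by the generator polynomials."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & ((1 << k) - 1)
        out.append(bin(state & g1).count("1") & 1)   # parity of g1 taps
        out.append(bin(state & g2).count("1") & 1)   # parity of g2 taps
    return out

print(conv_encode([1, 0, 1, 1, 0, 0, 1]))   # 7 input bits -> 14 coded bits
```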
Saindane, A M; Qiu, D; Oshinski, J N; Newman, N J; Biousse, V; Bruce, B B; Holbrook, J F; Dale, B M; Zhong, X
2018-02-01
Intracranial pressure is estimated invasively by using lumbar puncture with CSF opening pressure measurement. This study evaluated displacement encoding with stimulated echoes (DENSE), an MR imaging technique highly sensitive to brain motion, as a noninvasive means of assessing intracranial pressure status. Nine patients with suspected elevated intracranial pressure and 9 healthy control subjects were included in this prospective study. Controls underwent DENSE MR imaging through the midsagittal brain. Patients underwent DENSE MR imaging followed immediately by lumbar puncture with opening pressure measurement, CSF removal, closing pressure measurement, and immediate repeat DENSE MR imaging. Phase-reconstructed images were processed producing displacement maps, and pontine displacement was calculated. Patient data were analyzed to determine the effects of measured pressure on pontine displacement. Patient and control data were analyzed to assess the effects of clinical status (pre-lumbar puncture, post-lumbar puncture, or control) on pontine displacement. Patients demonstrated imaging findings suggesting chronically elevated intracranial pressure, whereas healthy control volunteers demonstrated no imaging abnormalities. All patients had elevated opening pressure (median, 36.0 cm water), decreased by the removal of CSF to a median closing pressure of 17.0 cm water. Patients pre-lumbar puncture had significantly smaller pontine displacement than they did post-lumbar puncture after CSF pressure reduction (P = .001) and compared with controls (P = .01). Post-lumbar puncture patients had statistically similar pontine displacements to controls. Measured CSF pressure in patients pre- and post-lumbar puncture correlated significantly with pontine displacement (r = 0.49; P = .04). This study establishes a relationship between pontine displacement from DENSE MR imaging and measured pressure obtained contemporaneously by lumbar puncture, providing a method to noninvasively assess intracranial pressure status in idiopathic intracranial hypertension. © 2018 by American Journal of Neuroradiology.
Clinical value of a self-designed training model for pinpointing and puncturing trigeminal ganglion.
He, Yu-Quan; He, Shu; Shen, Yun-Xia; Qian, Cheng
2014-04-01
OBJECTIVES. A training model was designed for learners and young physicians to polish their skills in the clinical practice of pinpointing and puncturing the trigeminal ganglion. METHODS. A head model was made from a dried skull specimen and epoxy resin. On both cheeks, the deep soft tissue was replaced by stuffed organosilicone and sponge, while the superficial soft tissue, skin, and trigeminal ganglion were made of organic silicone rubber to give the appearance of a real human. Two physicians who had experience in puncturing the foramen ovale and trigeminal ganglion were selected to test the model, mainly for its appearance, X-ray permeability, handling of the puncture, and closure of the puncture sites. Four inexperienced physicians were then trained using Hartel's anterior facial approach combined with our new method of real-time observation of the foramen ovale. RESULTS. Both the appearance and texture of the model were extremely close to those of a real human. Because the skin, superficial soft tissue, deep muscles of the cheeks, and trigeminal ganglion made of organic silicone rubber all had great elasticity, the puncture sites closed and sealed quickly. The head model made of epoxy resin had X-ray permeability similar to a human skull specimen under fluoroscopy. The soft tissue was made of radiolucent material so that training can be conducted with X-ray guidance. After repeated training, all four young physicians were able to smoothly and successfully accomplish the puncture. CONCLUSION. This self-made model can substitute for cadaver specimens in training learners and young physicians in foramen ovale and trigeminal ganglion puncture. It is very helpful for quickly learning and mastering this interventional operation skill, and puncture accuracy can be improved significantly with our new method of real-time observation of the foramen ovale.
Kaddoum, Roland; Motlani, Faisal; Kaddoum, Romeo N; Srirajakalidindi, Arvi; Gupta, Deepak; Soskin, Vitaly
2014-08-01
One of the controversial management options for accidental dural puncture in pregnant patients is the conversion of labor epidural analgesia to continuous spinal analgesia by threading the epidural catheter intrathecally. No clear consensus exists on how to best prevent severe headache from occurring after accidental dural puncture. To investigate whether the intrathecal placement of an epidural catheter following accidental dural puncture impacts the incidence of postdural puncture headache (PDPH) and the subsequent need for an epidural blood patch in parturients. A retrospective chart review of accidental dural puncture was performed at Hutzel Women's Hospital in Detroit, MI, USA for the years 2002-2010. Documented cases of accidental dural punctures (N = 238) were distributed into two groups based on their management: an intrathecal catheter (ITC) group in which the epidural catheter was inserted intrathecally and a non-intrathecal catheter (non-ITC) group that received the epidural catheter inserted at different levels of lumbar interspaces. The incidence of PDPH as well as the necessity for epidural blood patch was analyzed using two-tailed Fisher's exact test. In the non-ITC group, 99 (54 %) parturients developed PDPH in comparison to 20 (37 %) in the ITC [odds ratio (OR), 1.98; 95 % confidence interval (CI), 1.06-3.69; P = 0.03]. Fifty-seven (31 %) of 182 patients in the non-ITC group required an epidural blood patch (EBP) (data for 2 patients of 184 were missing). In contrast, 7 (13 %) of parturients in the ITC group required an EBP. The incidence of EBP was calculated in parturients who actually developed headache to be 57 of 99 (57 %) in the non-ITC group versus 7 of 20 (35 %) in the ITC group (OR, 2.52; 95 % CI, 0.92-6.68; P = 0.07). The insertion of an intrathecal catheter following accidental dural puncture decreases the incidence of PDPH but not the need for epidural blood patch in parturients.
Ibrahim, Irwani; Yau, Ying Wei; Ong, Lizhen; Chan, Yiong Huak; Kuan, Win Sen
2015-03-01
Arterial punctures are important procedures performed by emergency physicians in the assessment of ill patients. However, arterial punctures are painful and can create anxiety and needle phobia in patients. Pain scores for radial arterial puncture were compared between an insulin needle and a standard 23-gauge hypodermic needle. In a randomized controlled crossover design, healthy volunteers were recruited to undergo bilateral radial arterial punctures. They were assigned to receive either the insulin or the standard needle as the first puncture, using blocked randomization. The primary outcome was the pain score measured on a 100-mm visual analogue scale (VAS) for pain, and secondary outcomes were rate of hemolysis, mean potassium values, and procedural complications immediately and 24 hours postprocedure. Fifty healthy volunteers were included in the study. The mean (±standard deviation) VAS score in punctures with the insulin needle was lower than with the standard needle (23 ± 22 mm vs. 39 ± 24 mm; mean difference = -15 mm; 95% confidence interval = -22 mm to -7 mm; p < 0.001). The rates of hemolysis and mean potassium value were greater in samples obtained using the insulin needle compared to the standard needle (31.3% vs. 11.6%, p = 0.035; and 4.6 ± 0.7 mmol/L vs. 4.2 ± 0.5 mmol/L, p = 0.002). Procedural complications were lower in punctures with the insulin needle both immediately postprocedure (0% vs. 24%; p < 0.001) and at 24 hours postprocedure (5.4% vs. 34.2%; p = 0.007). Arterial punctures using insulin needles cause less pain and fewer procedural complications compared to standard needles. However, due to the higher rate of hemolysis, their use should be limited to conditions that do not require a concurrent potassium value in the same blood sample. © 2015 by the Society for Academic Emergency Medicine.
Diego, Rodrigo; Douet, Cécile; Reigner, Fabrice; Blard, Thierry; Cognié, Juliette; Deleuze, Stefan; Goudet, Ghylène
2016-10-15
Transvaginal ultrasound-guided follicular punctures are widely used in the mare for diagnosis, research, and commercial applications. The objective of our study was to determine their influence on pain, stress, and well-being in the mare by evaluating heart rate, breath rate, facial expression changes, and salivary cortisol before, during, and after puncture. For this experiment, 21 pony mares were used. Transvaginal ultrasound-guided aspirations were performed on 11 mares. After injections for sedation, analgesia, and antispasmodics, the follicles from both ovaries were aspirated with a needle introduced through the vaginal wall into the ovary. In the control group, 10 mares underwent similar treatments and injections, but no follicular aspiration. Throughout the session, heart rate and breath rate were evaluated by a trained veterinarian; ear position, eyelid closure, and contraction of facial muscles were evaluated; and salivary samples were taken for evaluation of cortisol concentration. A significant relaxation was observed after sedative injection in the punctured and control mares, according to ear position, eyelid closure, and contraction of facial muscles, but no difference between punctured and control animals was recorded. No significant modification of salivary cortisol concentration during puncture and no difference between punctured and control mares at any time were observed. No significant modification of the breath rate was observed along the procedure for the punctured and the control mares. Heart rate increased significantly but transiently when the needle was introduced into the ovary and was significantly higher at that time for the punctured mares than for control mares. None of the other investigated parameters were affected at that time, suggesting discomfort is minimal and transient. Improving analgesia, e.g., through a multimodal approach, during that possibly more sensitive step could be recommended. Evaluation of facial expression changes and heart rate provides easy-to-use and accurate tools to evaluate pain and well-being of the mare. Copyright © 2016 Elsevier Inc. All rights reserved.
Percutaneous Direct Puncture Embolization with N-butyl-cyanoacrylate for High-flow Priapism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tokue, Hiroyuki, E-mail: tokue@s2.dion.ne.jp; Shibuya, Kei; Ueno, Hiroyuki
There are many treatment options for high-flow priapism. Those mentioned most often are watchful waiting, Doppler-guided compression, endovascular highly selective embolization, and surgery. We present a case of high-flow priapism in a 57-year-old man treated by percutaneous direct puncture embolization of a post-traumatic left cavernosal arteriovenous fistula using N-butyl-cyanoacrylate. Erectile function was preserved during a 12-month follow-up. No previous cases of percutaneous direct puncture embolization for high-flow priapism have been reported. Percutaneous direct puncture embolization is a potentially useful and safe method for the management of high-flow priapism.
Improving energy efficiency in handheld biometric applications
NASA Astrophysics Data System (ADS)
Hoyle, David C.; Gale, John W.; Schultz, Robert C.; Rakvic, Ryan N.; Ives, Robert W.
2012-06-01
With improved smartphone and tablet technology, it is becoming increasingly feasible to implement powerful biometric recognition algorithms on portable devices. Typical iris recognition algorithms, such as Ridge Energy Direction (RED), utilize two-dimensional convolution in their implementation. This paper explores the energy consumption implications of 12 different methods of implementing two-dimensional convolution on a portable device. Typically, convolution is implemented using floating-point operations. If a given algorithm implemented integer convolution instead of floating-point convolution, it could drastically reduce the energy consumed by the processor. The 12 methods compared span 4 major categories: Integer C, Integer Java, Floating Point C, and Floating Point Java. Each major category is further divided into 3 implementations: variable-size looped convolution, static-size looped convolution, and unrolled looped convolution. All testing was performed using the HTC Thunderbolt with energy measured directly using a Tektronix TDS5104B Digital Phosphor oscilloscope. Results indicate that energy savings as high as 75% are possible by using Integer C versus Floating Point C. Considering the relative proportion of processing time that convolution is responsible for in a typical algorithm, the savings in energy would likely result in significantly greater time between battery charges.
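The core idea being benchmarked - replacing floating-point two-dimensional convolution with integer (fixed-point) convolution - can be shown in a few lines. The Python sketch below uses an assumed 8-bit kernel scale and a direct nested-loop convolution; it illustrates the substitution only, not the RED algorithm or the paper's C/Java test harness.

```python
import numpy as np

def conv2d(image, kernel):
    """Direct nested-loop, valid-mode 2D convolution (correlation form)."""
    kh, kw = kernel.shape
    out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1),
                   dtype=image.dtype)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (image[i:i + kh, j:j + kw] * kernel).sum()
    return out

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(32, 32)).astype(np.float64)
k_float = rng.standard_normal((5, 5))

SCALE = 256                                  # assumed 8-bit fixed-point scale
k_int = np.round(k_float * SCALE).astype(np.int64)

out_float = conv2d(img, k_float)                          # float reference
out_int = conv2d(img.astype(np.int64), k_int) / SCALE     # integer arithmetic

print("max fixed-point error:", np.abs(out_float - out_int).max())
```

All multiply-accumulate work in the integer path happens in int64; only one division rescales the result, which is the source of the energy savings the paper measures in hardware.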
Caskey, Rachel N; Abutahoun, Angelos; Polick, Anne; Barnes, Michelle; Srivastava, Pavan; Boyd, Andrew D
2018-05-04
The US health care system uses diagnostic codes for billing and reimbursement as well as quality assessment and measuring clinical outcomes. The US transitioned to the International Classification of Diseases, 10th Revision, Clinical Modification (ICD-10-CM) in October 2015. Little is known about the impact of ICD-10-CM on internal medicine and medicine subspecialists. We used a state-wide data set from Illinois Medicaid specified for internal medicine providers and subspecialists. A total of 3191 ICD-9-CM codes were used for 51,078 patient encounters, for a total cost of US $26,022,022 for all internal medicine. We categorized all of the ICD-9-CM codes based on the complexity of mapping to ICD-10-CM, as codes with complex mapping could result in billing or administrative errors during the transition. Codes found to have complex mapping and frequently used codes (n = 295) were analyzed for clinical accuracy of mapping to ICD-10-CM. Each subspecialty was analyzed for the complexity of codes used and the proportion of reimbursement associated with complex codes. Twenty-five percent of internal medicine codes have convoluted mapping to ICD-10-CM; these represent 22% of Illinois Medicaid patients and 30% of reimbursements. Rheumatology and endocrinology had the greatest proportion of visits and reimbursement associated with complex codes. We found that 14.5% of the ICD-9-CM codes used by internists resulted in potential clinical inaccuracies when mapped to ICD-10-CM. We also identified that 43% of the diagnostic codes evaluated, accounting for 14% of internal medicine reimbursements, could result in administrative errors.
NASA Astrophysics Data System (ADS)
Li, Yan-Ming; Liang, Zhen-Zhen; Song, Chun-Lei
2016-05-01
To compare the effect of 3 different materials on hemostasis of the puncture site after central venous catheterization. Method: 120 patients receiving chemotherapy through a peripherally inserted central venous catheter at the Affiliated Hospital of our university from January 2014 to April 2015 were randomly divided into 3 groups. Alginate dressing, gelatin sponge, or gauze of the same specification (3.5 cm × 2 cm) was used to compress the puncture point. Local bleeding at the puncture point within 24 h after puncture and catheter maintenance costs within 72 h after catheterization were compared among the 3 groups. Result: (1) Local bleeding at the puncture point within 24 h: the hemostatic effect of the alginate dressing and the gelatin sponge was better than that of compression gauze (P < 0.05); the difference between the gelatin sponge and the alginate dressing was not statistically significant. (2) Catheter maintenance cost: with the gelatin sponge at the puncture point, local catheter maintenance costs within 72 h after insertion were the lowest; the difference compared with the alginate dressing and gauze was significant (P < 0.05). Conclusion: For compression hemostasis of the puncture site after PICC placement, gelatin sponge with gauze dressing is more effective and economical.
NASA Astrophysics Data System (ADS)
QingJie, Wei; WenBin, Wang
2017-06-01
In this paper, image retrieval using a deep convolutional neural network combined with regularization and the PReLU activation function is studied to improve image retrieval accuracy. A deep convolutional neural network can not only simulate the process by which the human brain receives and transmits information, but also contains convolution operations, which makes it very suitable for processing images. Using a deep convolutional neural network is better than direct extraction of visual features for image retrieval. However, the structure of a deep convolutional neural network is complex and prone to over-fitting, which reduces retrieval accuracy. In this paper, we combine L1 regularization and the PReLU activation function to construct a deep convolutional neural network that prevents over-fitting and improves the accuracy of image retrieval.
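A minimal sketch of the two components named above, L1 regularization and the PReLU activation, in a small convolutional network (PyTorch; the architecture, penalty weight, and placeholder loss are our assumptions, not the paper's network):

```python
import torch
import torch.nn as nn

class RetrievalCNN(nn.Module):
    """Small CNN with PReLU activations producing a retrieval embedding."""
    def __init__(self, embed_dim=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.PReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.PReLU(), nn.MaxPool2d(2),
        )
        self.embed = nn.Linear(32 * 8 * 8, embed_dim)   # for 32x32 inputs

    def forward(self, x):
        return self.embed(self.features(x).flatten(1))

def l1_penalty(model, lam=1e-5):
    """L1 regularization term summed over all parameters (illustrative)."""
    return lam * sum(p.abs().sum() for p in model.parameters())

model = RetrievalCNN()
emb = model(torch.randn(4, 3, 32, 32))
loss = emb.pow(2).mean() + l1_penalty(model)   # placeholder task loss + L1
loss.backward()
print(emb.shape)
```

The L1 term pushes weights toward zero, which is one standard way to limit the over-fitting the abstract describes.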
Xie, Anwei; Shan, Yuying; Niu, Mei E; Chen, Yi; Wang, Xiya
2017-11-01
To describe the experiences and nursing needs of school-age Chinese children undergoing lumbar puncture for the treatment of acute lymphoblastic leukaemia. Lumbar puncture is an invasive procedure, causing psychological changes and physical discomfort in patients. A previous study showed that distraction interventions, such as music therapy, relieve pain and anxiety. There is limited evidence regarding the experiences and needs of school-age children during lumbar puncture after being diagnosed with acute lymphoblastic leukaemia. To minimise their anxiety and pain during the procedure, it is important to collect information directly from these children. A descriptive qualitative study. Twenty-one school-age children with acute lymphoblastic leukaemia participated in semi-structured interviews at a Children's Hospital in China. Data were collected by an experienced and trained interviewer. Qualitative content analysis was chosen to describe the experiences of children undergoing lumbar puncture. While undergoing lumbar puncture for the treatment of acute lymphoblastic leukaemia, school-age Chinese children experienced complex psychological feelings (fear, tension, helplessness, sadness and anxiety). They also experienced physical discomfort. They had needs in multiple areas, such as information, communication, respect, self-actualisation, environment and equipment. This study identified important areas that must be closely monitored by healthcare staff performing lumbar puncture on children with acute lymphoblastic leukaemia. Thus, a successful and smooth procedure can be performed on these patients, and their quality of life can be improved. The experiences described in this study contribute to a better understanding of the needs of children with acute lymphoblastic leukaemia undergoing lumbar puncture. They also provide valuable information to the professional medical care staff who develop future nursing assessments. © 2016 John Wiley & Sons Ltd.
April, Michael D; Long, Brit; Koyfman, Alex
2017-09-01
Various sources purport an association between lumbar puncture and brainstem herniation in patients with intracranial mass effect lesions. Several organizations and texts recommend head computed tomography (CT) prior to lumbar puncture in selected patients. To review the evidence regarding the utility of obtaining head CT prior to lumbar puncture in adults with suspected bacterial meningitis. Observational studies report a risk of post-lumbar puncture brainstem herniation in the presence of intracranial mass effect (1.5%) that is significantly lower than that reported among all patients with bacterial meningitis (up to 13.3%). It is unclear from existing literature whether identifying patients with intracranial mass effect decreases herniation risk. Up to 80% of patients with bacterial meningitis experiencing herniation have no CT abnormalities, and approximately half of patients with intracranial mass effect not undergoing lumbar puncture herniate. Decision rules to selectively perform CT on only those individuals most likely to have intracranial mass effect lesions have not undergone validation. Despite recommendations for immediate antimicrobial therapy prior to imaging, data indicate an association between pre-lumbar puncture CT and antibiotic delays. Recent data demonstrate shortened door-to-antibiotic times and lower mortality from bacterial meningitis after implementation of new national guidelines, which restricted generally accepted CT indications by removing impaired mental status as imaging criterion. Data supporting routine head CT prior to lumbar puncture are limited. Physicians should consider selective CT for those patients at risk for intracranial mass effect lesions based on decision rules or clinical gestalt. Patients undergoing head CT must receive immediate antibiotic therapy. Published by Elsevier Inc.
Numerical method for computing Maass cusp forms on triply punctured two-sphere
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chan, K. T.; Kamari, H. M.; Zainuddin, H.
2014-03-05
A quantum mechanical system on a punctured surface modeled on hyperbolic space has always been an important subject of research in mathematics and physics. The corresponding quantum system is governed by the Schrödinger equation, whose solutions are the Maass waveforms. Spectral studies of these Maass waveforms are known to contain both continuous and discrete eigenvalues. The discrete eigenfunctions are usually called Maass Cusp Forms (MCF), whose discrete eigenvalues are not known analytically. We introduce a numerical method based on the Hejhal and Then algorithm, using GridMathematica, for computing MCF on a punctured surface with three cusps, namely the triply punctured two-sphere. We also report on a pullback algorithm for the punctured surface and a point locator algorithm to facilitate the complete pullback, which are essential parts of the main algorithm.
Ghaleb, Ahmed; Khorasani, Arjang; Mangar, Devanand
2012-01-01
Since August Bier reported the first case in 1898, post-dural puncture headache (PDPH) has been a problem for patients following dural puncture. Clinical and laboratory research over the last 30 years has shown that use of smaller-gauge needles, particularly of the pencil-point design, is associated with a lower risk of PDPH than traditional cutting-point needle tips (Quincke-point needle). A careful history can rule out other causes of headache. A postural component of headache is the sine qua non of PDPH. In high-risk patients (age < 50 years, post-partum, large-gauge needle puncture), an epidural blood patch should be performed within 24–48 hours of dural puncture. The optimum volume of blood has been shown to be 12–20 mL for adult patients. Complications caused by autologous epidural blood patching (AEBP) are rare. PMID:22287846
Ghaleb, Ahmed
2010-01-01
Postdural puncture headache (PDPH) has been a problem for patients following dural puncture since August Bier reported the first case in 1898. His paper discussed the pathophysiology of low-pressure headache resulting from leakage of cerebrospinal fluid (CSF) from the subarachnoid to the epidural space. Clinical and laboratory research over the last 30 years has shown that use of small-gauge needles, particularly of the pencil-point design, is associated with a lower risk of PDPH than traditional cutting-point needle tips (Quincke-point needle). A careful history can rule out other causes of headache. A postural component of headache is the sine qua non of PDPH. In high-risk patients (for example, age < 50 years, postpartum, large-gauge needle puncture), an epidural blood patch should be performed within 24–48 h of dural puncture. The optimum volume of blood has been shown to be 12–20 mL for adult patients. Complications of autologous epidural blood patching (AEBP) are rare. PMID:20814596
NASA Technical Reports Server (NTRS)
Noble, Viveca K.
1994-01-01
When data is transmitted through a noisy channel, errors are produced within the data, rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation and error statistics. The simulator utilizes a concatenated coding scheme which includes the CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2⁸) with interleave depth 5 as the outermost code, a (7, 1/2) convolutional code as an inner code, and the CCSDS-recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The received signal-to-noise ratio required for a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst correction capability is increased in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW™ as a tool for determining optimal error control schemes.
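The innermost layer of the concatenation, the (n, n-16) CRC, appends 16 parity bits computed from the information bits. A minimal bitwise Python sketch follows; the CCITT polynomial and 0xFFFF register initialization follow the common CCSDS convention, and the frame payload is invented for illustration.

```python
def crc16_ccitt(data: bytes, poly: int = 0x1021, init: int = 0xFFFF) -> int:
    """Bitwise CRC-16 with the CCITT polynomial x^16 + x^12 + x^5 + 1;
    these are the 16 parity bits of an (n, n-16) cyclic redundancy check."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

frame = b"CCSDS transfer frame payload"   # invented payload for illustration
print(hex(crc16_ccitt(frame)))            # 16 parity bits appended to the frame
```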
A bandwidth efficient coding scheme for the Hubble Space Telescope
NASA Technical Reports Server (NTRS)
Pietrobon, Steven S.; Costello, Daniel J., Jr.
1991-01-01
As a demonstration of the performance capabilities of trellis codes using multidimensional signal sets, a Viterbi decoder was designed. The choice of code was based on two factors. The first factor was its application as a possible replacement for the coding scheme currently used on the Hubble Space Telescope (HST). The HST at present uses a rate 1/3, ν = 6 (2⁶ = 64 states) convolutional code with Binary Phase Shift Keying (BPSK) modulation. With the modulator restricted to 3 Msym/s, this implies a data rate of only 1 Mbit/s, since the bandwidth efficiency is K = 1/3 bit/sym. This is a very bandwidth-inefficient scheme, although the system has the advantage of simplicity and large coding gain. The basic requirement from NASA was for a scheme with as large a K as possible. Since a satellite channel was being used, 8PSK modulation was selected. This allows a K of between 2 and 3 bit/sym. The next influencing factor was INTELSAT's intention of transmitting the SONET 155.52 Mbit/s standard data rate over the 72 MHz transponders on its satellites. This requires a bandwidth efficiency of around 2.5 bit/sym. A Reed-Solomon block code is used as an outer code to give very low bit error rates (BER). A 16-state, rate 5/6, 2.5 bit/sym, 4D-8PSK trellis code was selected. This code has reasonable complexity and a coding gain of 4.8 dB compared to uncoded 8PSK [2]. This trellis code also has the advantage of being 45 deg rotationally invariant. This means that the decoder needs only to synchronize to one of the two naturally mapped 8PSK signals in the signal set.
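The throughput comparison in this abstract follows from a single relation, R = K × Rs. A quick check of the quoted figures in Python (symbol rate and efficiencies taken directly from the abstract):

```python
# Downlink throughput at a fixed symbol rate: R = K * Rs.
Rs = 3e6                                              # HST modulator limit (sym/s)
schemes = {
    "rate-1/3 conv. code + BPSK (current)": 1.0 / 3.0,  # K in bit/sym
    "rate-5/6 4D-8PSK trellis (proposed)":  2.5,
}
for name, K in schemes.items():
    print(f"{name}: {K * Rs / 1e6:.2f} Mbit/s")       # 1.00 vs. 7.50 Mbit/s
```

Within the same 3 Msym/s modulator limit, the proposed code would carry 7.5 times the data of the current scheme.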
Deep multi-scale convolutional neural network for hyperspectral image classification
NASA Astrophysics Data System (ADS)
Zhang, Feng-zhe; Yang, Xia
2018-04-01
In this paper, we propose a multi-scale convolutional neural network for the hyperspectral image classification task. Firstly, compared with conventional convolution, we utilize multi-scale convolutions, which possess larger receptive fields, to extract the spectral features of the hyperspectral image. We design a deep neural network with a multi-scale convolution layer that contains 3 different convolution kernel sizes. Secondly, to avoid overfitting of the deep neural network, dropout is utilized, which randomly sleeps neurons and slightly improves the classification accuracy. In addition, techniques from deep learning such as the ReLU activation are utilized in this paper. We conduct experiments on the University of Pavia and Salinas datasets and obtain better classification accuracy compared with other methods.
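A multi-scale convolution layer of the kind described - several kernel sizes applied in parallel and concatenated - is a few lines in PyTorch. The kernel sizes, channel counts, and single input band below are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class MultiScaleConv(nn.Module):
    """One layer with three parallel kernel sizes, concatenated channel-wise."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(c_in, c_out, k, padding=k // 2) for k in (1, 3, 5)
        )

    def forward(self, x):
        return torch.cat([b(x) for b in self.branches], dim=1)

net = nn.Sequential(
    MultiScaleConv(1, 8),     # 1 input band for simplicity; HSI has many
    nn.ReLU(),
    nn.Dropout(p=0.5),        # randomly "sleeps" neurons, as in the abstract
    nn.Conv2d(24, 16, 3, padding=1),
)
print(net(torch.randn(2, 1, 9, 9)).shape)   # -> torch.Size([2, 16, 9, 9])
```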
Optimizations of a Hardware Decoder for Deep-Space Optical Communications
NASA Technical Reports Server (NTRS)
Cheng, Michael K.; Nakashima, Michael A.; Moision, Bruce E.; Hamkins, Jon
2007-01-01
The National Aeronautics and Space Administration has developed a capacity approaching modulation and coding scheme that comprises a serial concatenation of an inner accumulate pulse-position modulation (PPM) and an outer convolutional code [or serially concatenated PPM (SCPPM)] for deep-space optical communications. Decoding of this code uses the turbo principle. However, due to the nonbinary property of SCPPM, a straightforward application of classical turbo decoding is very inefficient. Here, we present various optimizations applicable in hardware implementation of the SCPPM decoder. More specifically, we feature a Super Gamma computation to efficiently handle parallel trellis edges, a pipeline-friendly 'maxstar top-2' circuit that reduces the max-only approximation penalty, a low-latency cyclic redundancy check circuit for window-based decoders, and a high-speed algorithmic polynomial interleaver that leads to memory savings. Using the featured optimizations, we implement a 6.72 megabits-per-second (Mbps) SCPPM decoder on a single field-programmable gate array (FPGA). Compared to the current data rate of 256 kilobits per second from Mars, the SCPPM coded scheme represents a throughput increase of more than twenty-six fold. Extension to a 50-Mbps decoder on a board with multiple FPGAs follows naturally. We show through hardware simulations that the SCPPM coded system can operate within 1 dB of the Shannon capacity at nominal operating conditions.
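The "maxstar top-2" idea can be stated in scalar arithmetic: max* (the Jacobian logarithm) is applied exactly to the two largest branch metrics only, recovering most of the loss of a max-only decoder. A Python sketch of that arithmetic - our reading of the circuit's function, not its hardware form:

```python
import math

def maxstar(a, b):
    """Exact Jacobian logarithm: log(e^a + e^b) = max(a, b) + log1p(e^-|a-b|)."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def maxstar_top2(values):
    """Apply max* to only the two largest inputs; smaller edges are dropped,
    which recovers most of the max-only approximation penalty."""
    second, largest = sorted(values)[-2:]
    return maxstar(largest, second)

edges = [-1.2, 0.3, 2.7, 2.5, -0.4]
exact = math.log(sum(math.exp(v) for v in edges))   # full log-sum-exp
print(exact, maxstar_top2(edges), max(edges))       # exact vs. top-2 vs. max-only
```

On this example the top-2 value (about 3.30) sits much closer to the exact log-sum-exp (about 3.38) than the max-only value (2.70), which is the penalty reduction the decoder exploits.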
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 12 2010-07-01 2010-07-01 true Emission Limits for Puncture Sealant Application Affected Sources 3 Table 3 to Subpart XXXX of Part 63 Protection of Environment ENVIRONMENTAL... Manufacturing Pt. 63, Subpt. XXXX, Table 3 Table 3 to Subpart XXXX of Part 63—Emission Limits for Puncture...
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 12 2010-07-01 2010-07-01 true Operating Limits for Puncture Sealant Application Control Devices 4 Table 4 to Subpart XXXX of Part 63 Protection of Environment ENVIRONMENTAL... Manufacturing Pt. 63, Subpt. XXXX, Table 4 Table 4 to Subpart XXXX of Part 63—Operating Limits for Puncture...
Gonenc, Berk; Tran, Nhat; Gehlbach, Peter; Taylor, Russell H.; Iordachita, Iulian
2018-01-01
Retinal vein cannulation is a demanding procedure in which therapeutic agents are injected into occluded retinal veins. The feasibility of this treatment is limited due to challenges in identifying the moment of venous puncture, achieving cannulation and maintaining it throughout the drug delivery period. In this study, we integrate a force-sensing microneedle with two distinct robotic systems: the handheld micromanipulator Micron, and the cooperatively controlled Steady-Hand Eye Robot (SHER). The sensed tool-to-tissue interaction forces are used to detect venous puncture and extend the robots' standard control schemes with a new position holding mode (PHM) that assists the operator in holding the needle position fixed, maintaining cannulation for a longer time with less trauma to the vasculature. We evaluate the resulting systems comparatively in a dry phantom (stretched vinyl membranes). Results have shown that modulating the admittance control gain of SHER alone is not a very effective solution for preventing undesired tool motion after puncture. However, with puncture detection and PHM, the deviation from the puncture point is significantly reduced: by 65% with Micron, and by 95% with SHER, representing a potential advantage over freehand operation for both. PMID:28269417
A novel in vivo model of puncture-induced iris neovascularization.
Beaujean, Ophélie; Locri, Filippo; Aronsson, Monica; Kvanta, Anders; André, Helder
2017-01-01
To assess iris neovascularization by uveal puncture of the mouse eye and determine the role of angiogenic factors during iris neovascularization. Uveal punctures were performed on BALB/c mouse eyes to induce iris angiogenesis. VEGF blockade was used as an anti-angiogenic treatment, while normoxia- and hypoxia-conditioned media from retinal pigment epithelium (RPE) cells were used as angiogenic inducers in this model. Iris vasculature was determined in vivo by noninvasive methods. Iris blood vessels were stained for platelet endothelial cell adhesion molecule-1, and vascular sprouts were counted as markers of angiogenesis. Expression of angiogenic and inflammatory factors in the puncture-induced model was determined by qPCR and western blot. Punctures led to increased neovascularization and sprouting of the iris. qPCR and protein analysis showed an increase in angiogenic factors, particularly in the plasminogen-activating receptor and inflammatory systems. VEGF blockade partly reduced iris neovascularization, and treatment with hypoxia-conditioned RPE medium led to a statistically significant increase in iris neovascularization. This study presents the first evidence of a puncture-induced iris angiogenesis model in the mouse. In a broader context, this novel in vivo model of neovascularization has the potential for noninvasive evaluation of angiogenesis-modulating substances.
Communications terminal breadboard
NASA Technical Reports Server (NTRS)
1972-01-01
A baseline design is presented of a digital communications link between an advanced manned spacecraft (AMS) and an earth terminal via an Intelsat 4 type communications satellite used as a geosynchronous orbiting relay station. The fabrication, integration, and testing of terminal elements at each end of the link are discussed. In the baseline link design, the information carrying capacity of the link was estimated for both the forward direction (earth terminal to AMS) and the return direction, based upon orbital geometry, relay satellite characteristics, terminal characteristics, and the improvement that can be achieved by the use of convolutional coding/Viterbi decoding techniques.
Sun, Jiashu; Zhang, Haitao
2014-09-01
This paper analyzed and contrasted the damage rates achieved at different positions of the thoracic dorsal root ganglion (DRG) by different puncture paths in radiofrequency ablation, in order to determine the best radiofrequency targeting approach for the different positional types of thoracic DRG. According to the puncture and ablation approach, 14 segmental spinal specimens were randomly divided into three groups, and percutaneous DRG radiofrequency lesioning was then performed according to the positional type of the DRG. The lesioning effect of each puncture path was analyzed against pathology results as the judgment standard. The experiment showed that the RF damage rates of group A were 72.58 ± 18.88%, 54.16 ± 24.84% and 32.85 ± 28.11%; those of group B were 71.86 ± 15.15% and 72.02 ± 17.86%, 57.14 ± 18.02% and 52.47 ± 20.64%, 68.75 ± 14.63% and 71.78 ± 16.00%; and those of group C were 82.46 ± 14.10%, 81.53 ± 11.81% and 80.83 ± 13.33%. It was concluded that reliance on a single DRG puncture route is one of the important reasons for the poor effect of thoracic DRG radiofrequency (RF) ablation, while a double puncture path adapted to the positional type of the DRG can significantly improve the DRG RF damage rate.
Kyriazis, Iason; Kallidonis, Panagiotis; Vasilas, Marinos; Panagopoulos, Vasilios; Kamal, Wissam; Liatsikos, Evangelos
2017-05-01
To present our experience with a central, non-calyceal puncture protocol for percutaneous nephrolithotripsy (PCNL), in an attempt to challenge the worldwide adopted opinion that calyceal puncture is the less traumatic site of percutaneous entrance into the collecting system. During 2012, a total of 137 consecutive, unselected patients were subjected to PCNL in our department. Non-calyceal punctures were performed in all cases, followed by subsequent tract dilations up to 30 Fr. Perioperative and postoperative data were prospectively collected and analyzed. Mean operative time (from skin puncture to nephrostomy tube placement) was 48 min. Patients with single, multiple and staghorn stones had primary stone-free rates of 89.2, 80.4 and 66.7% after PCNL, respectively. The overall complication rate was 10.2%, while bleeding complications were minimal. Only 4 patients (2.9%) required blood transfusion. Five patients (3.6%) had Clavien Grade IIIa complications requiring an intervention for their management, and none had Grade IV or V. Despite the absence of evidence that non-calyceal percutaneous tracts could be a risk factor for complications, the concept of calyceal puncture has been worldwide adopted by PCNL surgeons as the sole safe percutaneous entrance into the collecting system. Based on our experience, pathways other than the worldwide recognized rule of calyceal puncture are possible and probably not as dangerous as has been previously stated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Slattery, Michael M.; Goh, Gerard S.; Power, Sarah
Purpose: To prospectively compare the procedural time and complication rates of ultrasound-guided and fluoroscopy-assisted antegrade common femoral artery (CFA) puncture techniques. Materials and Methods: One hundred consecutive patients, undergoing a vascular procedure for which an antegrade approach was deemed necessary/desirable, were randomly assigned to undergo either ultrasound-guided or fluoroscopy-assisted CFA puncture. Time taken from administration of local anaesthetic to vascular sheath insertion in the superficial femoral artery (SFA), patients' age, body mass index (BMI), fluoroscopy radiation dose, haemostasis method and immediate complications were recorded. Mean and median values were calculated and statistically analysed with unpaired t tests. Results: Sixty-nine male and 31 female patients underwent antegrade puncture (mean age 66.7 years). The mean BMI was 25.7 for the ultrasound-guided (n = 53) and 25.3 for the fluoroscopy-assisted (n = 47) groups. The mean time taken was 7 min 46 s for the ultrasound-guided puncture and 9 min 41 s for the fluoroscopy-assisted technique (p = 0.021). Mean fluoroscopy dose area product in the fluoroscopy group was 199 cGy cm². Complications included two groin haematomas in the ultrasound-guided group, and two retroperitoneal haematomas and one direct SFA puncture in the fluoroscopy-assisted group. Conclusion: The ultrasound-guided technique is faster and safer for antegrade CFA puncture when compared to the fluoroscopy-assisted technique alone.
Gibson, M A; Carell, E S
1997-11-01
The advent of transvenous right heart catheterization has relegated direct transthoracic right ventricular puncture largely to the role of "interesting historical footnote." However, in the case of a right ventricle that is "protected" by a mechanical tricuspid valve prosthesis, direct right ventricular puncture represents a reasonable alternative for obtaining accurate hemodynamic information.
Development of a new bench for puncturing of irradiated fuel rods in STAR hot laboratory
NASA Astrophysics Data System (ADS)
Petitprez, B.; Silvestre, P.; Valenza, P.; Boulore, A.; David, T.
2018-01-01
A new device for puncturing irradiated fuel rods from commercial power plants has been designed by the Fuel Research Department of CEA Cadarache in order to provide high-precision experimental data on fuel pins of various designs. It will replace the current set-up that has been used since 1998 in hot cell 2 of the STAR facility, with more than 200 rod puncturing experiments. Based on this consistent experimental feedback, the heavy-duty technique of rod perforation by clad punching has been preserved for the new bench. The method of double expansion of rod gases is also retained, since it improves the confidence interval of the volumetric results obtained from rod puncturing. Furthermore, many evolutions have been introduced in the new design in order to improve its reliability, to make maintenance by remote handling easier and to reduce experimental uncertainties. Tightness components have been studied with the Maestral Sealing Laboratory at Pierrelatte so that they can operate under mixed pressure conditions (from vacuum at 10⁻⁵ mbar up to pressures of 50 bar) and have a longer lifetime under permanent gamma irradiation in the hot cell. Bench ergonomics has been optimized to make operation by remote handling easier and to secure the critical phases of a puncturing experiment. A high-pressure gas line equipped with high-precision pressure sensors outside the cell can be connected to the bench in the cell for calibration purposes. Uncertainty analyses using Monte Carlo calculations have been performed in order to optimize the capacity of the different volumes of the apparatus according to the volumetric characteristics of the rod to be punctured. Finally, this device is composed of independent modules which allow puncturing fuel pins of different geometries (PWR, BWR, VVER). After leak tests of the device and remote handling simulation in a mock-up cell, several punctures of calibrated specimens were performed in 2016. The bench will soon be installed in hot cell 2 of the STAR facility for final qualification tests. PWR rod punctures are already planned for 2018.
Cheng, Ka Yan; Chair, Sek Ying; Choi, Kai Chow
2013-10-01
Transradial coronary angiography (CA) and percutaneous coronary intervention (PCI) are gaining worldwide popularity due to the low incidence of major vascular complications and early mobilization of patients after the procedures. Although transradial access site complications are generally considered minor in nature, they are not routinely recorded in clinical settings. To evaluate the incidence of access site complications and the level of puncture site pain experienced by patients undergoing transradial coronary procedures, and to examine factors associated with access site complication occurrence and puncture site pain severity. A cross-sectional correlational study of 85 Chinese-speaking adult patients scheduled for elective transradial CA and/or PCI. Ecchymosis, bleeding, hematoma and radial artery occlusion (RAO) were assessed through observation, palpation and the plethysmographic signal of pulse oximetry after coronary procedures. Puncture site pain was assessed with a 100-mm Visual Analogue Scale. Factors related to access site complications and puncture site pain were obtained from medical records. Ecchymosis was the most commonly reported transradial access site complication in this study. A paired t-test showed that the level of puncture site pain at 24 h was significantly (p<0.001) lower than that at 3 h after the procedure. Stepwise multivariable regression showed that female gender and shorter sheath time were significantly associated with bleeding during gradual deflation of the compression device. Only longer sheath time was significantly associated with RAO. Female gender and a larger volume of compression air were associated with the presence of ecchymosis and puncture site pain at 3 h after the procedure, respectively. The study findings suggest that common access site complications after transradial coronary procedures among the Chinese population are relatively minor in nature. Individual puncture site pain assessment during the period of hemostasis is important. Nurses should pay more attention to factors such as female gender, sheath time and volume of compression that are more likely to be associated with transradial access site complications and puncture site pain. Copyright © 2013 Elsevier Ltd. All rights reserved.
Aeronautical audio broadcasting via satellite
NASA Technical Reports Server (NTRS)
Tzeng, Forrest F.
1993-01-01
A system design for aeronautical audio broadcasting, with C-band uplink and L-band downlink, via Inmarsat space segments is presented. Near-transparent-quality compression of 5-kHz bandwidth audio at 20.5 kbit/s is achieved based on a hybrid technique employing linear predictive modeling and transform-domain residual quantization. Concatenated Reed-Solomon/convolutional codes with quadrature phase shift keying are selected for bandwidth and power efficiency. RF bandwidth at 25 kHz per channel, and a decoded bit error rate at 10⁻⁶ with Eb/N0 at 3.75 dB are obtained. An interleaver, scrambler, modem synchronization, and frame format were designed, and frequency-division multiple access was selected over code-division multiple access. A link budget computation based on a worst-case scenario indicates sufficient system power margins. Transponder occupancy analysis for 72 audio channels demonstrates ample remaining capacity to accommodate emerging aeronautical services.
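The quoted operating point implies a required carrier-to-noise density via the standard link-budget relation C/N0 = Eb/N0 + 10·log10(Rb). A quick check with the abstract's numbers (per channel, margins and implementation losses omitted):

```python
import math

Rb = 20.5e3                  # coded audio bit rate from the abstract (bit/s)
EbNo_dB = 3.75               # decoder threshold for BER = 1e-6 (abstract)
CNo_dB = EbNo_dB + 10 * math.log10(Rb)   # required carrier-to-noise density
print(f"required C/N0 = {CNo_dB:.1f} dB-Hz")   # about 46.9 dB-Hz, before margins
```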
Mission science value-cost savings from the Advanced Imaging Communication System (AICS)
NASA Technical Reports Server (NTRS)
Rice, R. F.
1984-01-01
An Advanced Imaging Communication System (AICS) was proposed in the mid-1970s as an alternative to the Voyager data/communication system architecture. The AICS achieved virtually error-free communication with little loss in downlink data rate by concatenating a powerful Reed-Solomon block code with the Voyager convolutionally coded, Viterbi-decoded downlink channel. The clean channel allowed AICS sophisticated adaptive data compression techniques. Both Voyager and the Galileo mission have implemented AICS components, and the concatenated channel itself is heading for international standardization. An analysis that assigns a dollar value to AICS mission performance gains is presented. A conservative value, or savings, of $3 million for Voyager, $4.5 million for Galileo, and as much as $7 to 9.5 million per mission for future projects such as the proposed Mariner Mark 2 series is shown.
Deep-HiTS: Rotation Invariant Convolutional Neural Network for Transient Detection
NASA Astrophysics Data System (ADS)
Cabrera-Vives, Guillermo; Reyes, Ignacio; Förster, Francisco; Estévez, Pablo A.; Maureira, Juan-Carlos
2017-02-01
We introduce Deep-HiTS, a rotation-invariant convolutional neural network (CNN) model for classifying images of transient candidates into artifacts or real sources for the High cadence Transient Survey (HiTS). CNNs have the advantage of learning the features automatically from the data while achieving high performance. We compare our CNN model against a feature engineering approach using random forests (RFs). We show that our CNN significantly outperforms the RF model, reducing the error by almost half. Furthermore, for a fixed number of approximately 2000 allowed false transient candidates per night, we are able to reduce the misclassified real transients by approximately one-fifth. To the best of our knowledge, this is the first time CNNs have been used to detect astronomical transient events. Our approach will be very useful when processing images from next generation instruments such as the Large Synoptic Survey Telescope. We have made all our code and data available to the community for the sake of allowing further developments and comparisons at https://github.com/guille-c/Deep-HiTS. Deep-HiTS is licensed under the terms of the GNU General Public License v3.0.
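One simple way to give a CNN rotation invariance - averaging its outputs over the four 90° rotations of each candidate stamp - is sketched below in PyTorch. This is an illustrative reading of the approach, not the released Deep-HiTS code (linked above), and the base network is a placeholder.

```python
import torch
import torch.nn as nn

class RotationAveragedCNN(nn.Module):
    """Wrap a base CNN and average its outputs over the four 90-degree
    rotations of the input, making the prediction rotation-invariant."""
    def __init__(self, base):
        super().__init__()
        self.base = base

    def forward(self, x):
        logits = [self.base(torch.rot90(x, k, dims=(2, 3))) for k in range(4)]
        return torch.stack(logits).mean(dim=0)

base = nn.Sequential(                    # placeholder classifier
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 2),                     # artifact vs. real source
)
model = RotationAveragedCNN(base)
print(model(torch.randn(4, 1, 21, 21)).shape)   # -> torch.Size([4, 2])
```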
Xu, W; LeBeau, J M
2018-05-01
We establish a series of deep convolutional neural networks to automatically analyze position averaged convergent beam electron diffraction patterns. The networks first calibrate the zero-order disk size, center position, and rotation without the need for pretreating the data. With the aligned data, additional networks then measure the sample thickness and tilt. The performance of the network is explored as a function of a variety of variables including thickness, tilt, and dose. A methodology to explore the response of the neural network to various pattern features is also presented. Processing patterns at a rate of ∼ 0.1 s/pattern, the network is shown to be orders of magnitude faster than a brute force method while maintaining accuracy. The approach is thus suitable for automatically processing big, 4D STEM data. We also discuss the generality of the method to other materials/orientations as well as a hybrid approach that combines the features of the neural network with least squares fitting for even more robust analysis. The source code is available at https://github.com/subangstrom/DeepDiffraction. Copyright © 2018 Elsevier B.V. All rights reserved.
Image quality of mixed convolution kernel in thoracic computed tomography.
Neubauer, Jakob; Spira, Eva Maria; Strube, Juliane; Langer, Mathias; Voss, Christian; Kotter, Elmar
2016-11-01
The mixed convolution kernel alters its properties regionally according to the depicted organ structure, especially for the lung. Therefore, we compared the image quality of the mixed convolution kernel to standard soft and hard kernel reconstructions for different organ structures in thoracic computed tomography (CT) images. Our Ethics Committee approved this prospective study. In total, 31 patients who underwent contrast-enhanced thoracic CT studies were included after informed consent. Axial reconstructions were performed with hard, soft, and mixed convolution kernels. Three independent and blinded observers rated the image quality according to the European Guidelines for Quality Criteria of Thoracic CT for 13 organ structures. The observers rated the depiction of the structures in all reconstructions on a 5-point Likert scale. Statistical analysis was performed with the Friedman test and post hoc analysis with the Wilcoxon rank-sum test. Compared to the soft convolution kernel, the mixed convolution kernel was rated with a higher image quality for lung parenchyma, segmental bronchi, and the border between the pleura and the thoracic wall (P < 0.03). Compared to the hard convolution kernel, the mixed convolution kernel was rated with a higher image quality for the aorta, anterior mediastinal structures, paratracheal soft tissue, hilar lymph nodes, esophagus, pleuromediastinal border, large and medium-sized pulmonary vessels, and abdomen (P < 0.004), but a lower image quality for the trachea, segmental bronchi, lung parenchyma, and skeleton (P < 0.001). The mixed convolution kernel cannot fully substitute for the standard CT reconstructions. Hard and soft convolution kernel reconstructions still seem to be mandatory for thoracic CT.
Fan, Guoxin; Guan, Xiaofei; Sun, Qi; Hu, Annan; Zhu, Yanjie; Gu, Guangfei; Zhang, Hailong; He, Shisheng
2015-01-01
Percutaneous transforaminal endoscopic discectomy (PTED) usually requires numerous punctures under X-ray fluoroscopy. Repeated puncture leads to more radiation exposure and reduces beginners' confidence. This cadaver study aimed to investigate the efficacy of HE's Lumbar Location (HELLO) system in reducing the punctures required for PTED. Cadaver study. Comparative groups. The HELLO system consists of a self-made surface locator and a puncture locator. One senior surgeon conducted the puncture procedure of PTED on the left side of 20 cadavers at the L4/L5 and L5/S1 levels with the assistance of the HELLO system (Group A). Additionally, the senior surgeon conducted the puncture procedure of PTED on the right side of the cadavers at the L4/L5 and L5/S1 levels with traditional methods (Group B). On the other hand, an inexperienced surgeon conducted the puncture procedure of PTED on the left side of the cadavers at the L4/L5 and L5/S1 levels with the assistance of our HELLO system (Group C). At the L4/L5 level, there was a significant difference in the number of punctures between Group A and Group B (P<0.001), but no significant difference was observed between Group A and Group C (P = 0.811). Similarly, at the L5/S1 level, there was a significant difference in the number of punctures between Group A and Group B (P<0.001), but no significant difference between Group A and Group C (P = 0.981). At the L4/L5 level, there was a significant difference in fluoroscopy time between Group A and Group B (P<0.001), but no significant difference between Group A and Group C (P = 0.290). Similarly, at the L5/S1 level, there was a significant difference in fluoroscopy time between Group A and Group B (P<0.001), but no significant difference between Group A and Group C (P = 0.523). As for radiation exposure, the HELLO system reduced radiation dosage by 39%-45% when comparing Group A with Group B, but there was no significant difference in radiation exposure between Group A and Group C at either the L4/L5 or the L5/S1 level (P>0.05). There was no difference in location time between Group A and Group B or between Group A and Group C at either level (P>0.05). Small-sample preclinical study. The HELLO system was effective in reducing the number of punctures, fluoroscopy time, and radiation exposure, as well as the difficulty of learning PTED. (2015-RES-127).
Serang, Oliver
2015-08-01
Observations depending on sums of random variables are common throughout many fields; however, no efficient solution is currently known for performing max-product inference on these sums of general discrete distributions (max-product inference can be used to obtain maximum a posteriori estimates). The limiting step to max-product inference is the max-convolution problem (sometimes presented in log-transformed form and denoted as "infimal convolution," "min-convolution," or "convolution on the tropical semiring"), for which no O(k log(k)) method is currently known. Presented here is an O(k log(k)) numerical method for estimating the max-convolution of two nonnegative vectors (e.g., two probability mass functions), where k is the length of the larger vector. This numerical max-convolution method is then demonstrated by performing fast max-product inference on a convolution tree, a data structure for performing fast inference given information on the sum of n discrete random variables in O(nk log(nk)log(n)) steps (where each random variable has an arbitrary prior distribution on k contiguous possible states). The numerical max-convolution method can be applied to specialized classes of hidden Markov models to reduce the runtime of computing the Viterbi path from nk² to nk log(k), and has potential application to the all-pairs shortest paths problem.
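A minimal numerical sketch of the method as described, assuming the basic single-p variant (the published method refines the choice of p for numerical stability): the p-norm approximation max(v) ≈ (Σ v^p)^(1/p) turns the max-convolution into an ordinary convolution of x^p and y^p, which FFTs compute in O(k log(k)). The helper name and the brute-force check are illustrative.

```python
import numpy as np
from scipy.signal import fftconvolve

def numerical_max_convolution(x, y, p=64.0):
    """Estimate z[k] = max_m x[m] * y[k-m] for nonnegative x, y.

    Raising both inputs to the power p, convolving via FFT, and
    taking the p-th root approximates the max-convolution (exact in
    the limit p -> infinity)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    scale = max(x.max(), y.max())          # rescale to avoid overflow in v**p
    xp = (x / scale) ** p
    yp = (y / scale) ** p
    zp = np.maximum(fftconvolve(xp, yp), 0.0)  # clip FFT round-off below zero
    return scale ** 2 * zp ** (1.0 / p)

# quick check against the exact O(k^2) definition
x = np.random.rand(100)
y = np.random.rand(80)
exact = np.array([max(x[m] * y[k - m]
                      for m in range(max(0, k - 79), min(100, k + 1)))
                  for k in range(179)])
approx = numerical_max_convolution(x, y)
print(np.max(np.abs(approx - exact)))  # max abs error; shrinks as p grows
```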
Ma, Kai; Huang, Xiao-bo; Xiong, Liu-lin; Xu, Qing-quan; Xu, Tao; Ye, Hai-yun; Yu, Lu-ping; Wang, Xiao-feng
2014-08-18
To evaluate the feasibility and efficacy of percutaneous renal puncture guided by a novel needle-tracking ultrasound system in percutaneous nephrolithotomy. From May to October 2013, 16 cases of percutaneous nephrolithotomy were performed under the guidance of the ultrasound system. The clinical data, including the time to complete percutaneous renal puncture, the color of urine aspirated from the renal calices, and the complications, were analyzed retrospectively. In the 16 patients, 18 percutaneous renal access tracts were established under the guidance of the ultrasound system. All of them were successful on the first attempt, and the average time to complete percutaneous renal puncture was (26.90 ± 11.37) s (15 to 54 s). After the operation, hemoglobin decreased by (9.56 ± 5.27)% (1.41% to 24.06%), and no complications occurred except for postoperative fever in 2 cases. The novel ultrasound system is a safe and effective technique that can reduce the technical difficulty of percutaneous renal puncture in percutaneous nephrolithotomy.
Use of Lumbar Punctures in the Management of Ocular Syphilis.
Reekie, Ian; Reddy, Yaviche
2018-01-01
Ocular syphilis has become rare in the developed world, but is a common presentation to ophthalmology departments in South Africa. We investigated the proportion of patients diagnosed with ocular syphilis who went on to receive lumbar punctures, and determined the fraction of these who had cerebrospinal fluid findings suggestive of neurosyphilis. We aimed to determine whether the use of lumbar punctures in ocular syphilis patients was beneficial in picking up cases of neurosyphilis. Retrospective study of case notes of patients admitted to two district hospitals in Durban, South Africa, with ocular syphilis over a 20-month period. A total of 31 of 68 ocular syphilis patients underwent lumbar puncture, and of these, eight (25.8%) had findings suggestive of neurosyphilis. Lumbar puncture in ocular syphilis patients should continue to be a routine part of the investigation of these patients; a large proportion of ocular syphilis patients show cerebrospinal fluid findings suggestive of neurosyphilis, are at risk of the complications of neurosyphilis, and should be managed accordingly.
Simulation of image formation in x-ray coded aperture microscopy with polycapillary optics.
Korecki, P; Roszczynialski, T P; Sowa, K M
2015-04-06
In x-ray coded aperture microscopy with polycapillary optics (XCAMPO), the microstructure of focusing polycapillary optics is used as a coded aperture and enables depth-resolved x-ray imaging at a resolution better than the focal spot dimensions. Improvements in the resolution and development of 3D encoding procedures require a simulation model that can predict the outcome of XCAMPO experiments. In this work we introduce a model of image formation in XCAMPO which enables calculation of XCAMPO datasets for arbitrary positions of the object relative to the focal plane as well as to incorporate optics imperfections. In the model, the exit surface of the optics is treated as a micro-structured x-ray source that illuminates a periodic object. This makes it possible to express the intensity of XCAMPO images as a convolution series and to perform simulations by means of fast Fourier transforms. For non-periodic objects, the model can be applied by enforcing artificial periodicity and setting the spatial period larger than the field-of-view. Simulations are verified by comparison with experimental data.
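A brief sketch of the periodicity trick described above, assuming a generic 2D object and source distribution (this is not the actual XCAMPO forward model): both arrays are embedded in a period larger than the field of view, so the wrap-around inherent in FFT-based circular convolution lands in the padding.

```python
import numpy as np

def periodic_convolve(obj, source, pad):
    """FFT-based convolution with enforced artificial periodicity:
    embed both arrays in a period larger than the field of view so
    the circular wrap-around falls entirely inside the padding."""
    shape = [s + pad for s in np.maximum(obj.shape, source.shape)]
    F = np.fft.rfft2(obj, shape) * np.fft.rfft2(source, shape)
    return np.fft.irfft2(F, shape)[:obj.shape[0], :obj.shape[1]]

img = periodic_convolve(np.random.rand(128, 128),
                        np.random.rand(32, 32), pad=32)
```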
Learning Midlevel Auditory Codes from Natural Sound Statistics.
Młynarski, Wiktor; McDermott, Josh H
2018-03-01
Interaction with the world requires an organism to transform sensory signals into representations in which behaviorally meaningful properties of the environment are made explicit. These representations are derived through cascades of neuronal processing stages in which neurons at each stage recode the output of preceding stages. Explanations of sensory coding may thus involve understanding how low-level patterns are combined into more complex structures. To gain insight into such midlevel representations for sound, we designed a hierarchical generative model of natural sounds that learns combinations of spectrotemporal features from natural stimulus statistics. In the first layer, the model forms a sparse convolutional code of spectrograms using a dictionary of learned spectrotemporal kernels. To generalize from specific kernel activation patterns, the second layer encodes patterns of time-varying magnitude of multiple first-layer coefficients. When trained on corpora of speech and environmental sounds, some second-layer units learned to group similar spectrotemporal features. Others instantiate opponency between distinct sets of features. Such groupings might be instantiated by neurons in the auditory cortex, providing a hypothesis for midlevel neuronal computation.
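The first-layer computation described above, a sparse convolutional code of a spectrogram over learned spectrotemporal kernels, can be sketched with a single soft-thresholded correlation pass (the paper infers codes by optimizing a generative objective, so this one-shot encoding is only a rough stand-in; kernel shapes and the threshold are illustrative):

```python
import numpy as np
from scipy.signal import correlate2d

def sparse_conv_code(spectrogram, kernels, threshold=0.5):
    """One-pass sparse convolutional encoding: correlate the
    spectrogram with each spectrotemporal kernel and soft-threshold
    the coefficient maps, so only strong, localized matches survive."""
    codes = []
    for k in kernels:
        c = correlate2d(spectrogram, k, mode="valid")
        codes.append(np.sign(c) * np.maximum(np.abs(c) - threshold, 0.0))
    return np.stack(codes)

# e.g., 8 random 12 (freq) x 5 (time) kernels on a toy spectrogram
rng = np.random.default_rng(0)
kernels = [rng.standard_normal((12, 5)) for _ in range(8)]
codes = sparse_conv_code(np.abs(rng.standard_normal((64, 100))), kernels)
```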
Enhanced online convolutional neural networks for object tracking
NASA Astrophysics Data System (ADS)
Zhang, Dengzhuo; Gao, Yun; Zhou, Hao; Li, Tianwen
2018-04-01
In recent years, object tracking based on convolutional neural networks has gained more and more attention. The initialization and update of the convolution filters directly affect the precision of object tracking. In this paper, a novel object tracker based on an enhanced online convolutional neural network without offline training is proposed, which initializes the convolution filters by a k-means++ algorithm and updates the filters by error back-propagation. Comparative experiments with 7 trackers on 15 challenging sequences showed that our tracker performs better than the other trackers in terms of AUC and precision.
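The abstract does not spell out how the filters are drawn, but k-means++ seeding over image patches is the standard construction; a minimal sketch, with patch extraction, shapes, and filter count as illustrative assumptions:

```python
import numpy as np

def kmeans_pp_init(patches, k, rng=np.random.default_rng(0)):
    """k-means++ seeding: pick k centers from image patches so that
    each new center is sampled with probability proportional to its
    squared distance from the nearest center chosen so far.
    `patches` is (n, d): n flattened patches of dimension d; the
    returned rows can be reshaped into k convolution filters."""
    n = patches.shape[0]
    centers = [patches[rng.integers(n)]]
    for _ in range(k - 1):
        d2 = np.min([np.sum((patches - c) ** 2, axis=1) for c in centers],
                    axis=0)
        centers.append(patches[rng.choice(n, p=d2 / d2.sum())])
    return np.stack(centers)

# e.g., initialize 16 filters of size 5x5 from random patches
patches = np.random.rand(1000, 25)
filters = kmeans_pp_init(patches, 16).reshape(16, 5, 5)
```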
Praveen, Alampath; Sreekumar, Karumathil Pullara; Nazar, Puthukudiyil Kader; Moorthy, Srikanth
2012-04-01
Thoracic duct embolization (TDE) is an established radiological interventional procedure for thoracic duct injuries. Traditionally, it is done under fluoroscopic guidance after opacifying the thoracic duct with bipedal lymphangiography. We describe our experience in using a heavily T2-weighted (T2W) sequence for guiding thoracic duct puncture and direct injection of glue through the puncture needle without cannulating the duct. PMID:23162248
Cho, Nathan; Tsiamas, Panagiotis; Velarde, Esteban; Tryggestad, Erik; Jacques, Robert; Berbeco, Ross; McNutt, Todd; Kazanzides, Peter; Wong, John
2018-05-01
The Small Animal Radiation Research Platform (SARRP) has been developed for conformal microirradiation with on-board cone beam CT (CBCT) guidance. The graphics processing unit (GPU)-accelerated Superposition-Convolution (SC) method for dose computation has been integrated into the treatment planning system (TPS) for SARRP. This paper describes the validation of the SC method for the kilovoltage energy range by comparison with EBT2 film measurements and Monte Carlo (MC) simulations. MC data were simulated by the EGSnrc code with 3 × 10⁸ to 1.5 × 10⁹ histories, while 21 photon energy bins were used to model the 220 kVp x-rays in the SC method. Various types of phantoms including plastic water, cork, graphite, and aluminum were used to encompass the range of densities of mouse organs. For the comparison, percentage depth doses (PDD) of SC, MC, and film measurements were analyzed. Cross-beam (x, y) dosimetric profiles of SC and film measurements are also presented. Correction factors (CFz) to convert SC to MC dose-to-medium are derived from the SC and MC simulations in homogeneous phantoms of aluminum and graphite to improve the estimation. The SC method produces dose values that are within 5% of film measurements and MC simulations in the flat regions of the profile. The dose is less accurate at the edges, due to factors such as geometric uncertainties of film placement and differences in dose calculation grids. The GPU-accelerated Superposition-Convolution dose computation method was successfully validated with EBT2 film measurements and MC calculations. The SC method offers much faster computation than MC and provides calculations of both dose-to-water in medium and dose-to-medium in medium. © 2018 American Association of Physicists in Medicine.
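For orientation, the convolution limit of a superposition-convolution dose engine is just the TERMA grid convolved with an energy-deposition kernel; a toy sketch under the simplifying assumption of a spatially invariant kernel in a homogeneous phantom (the actual SC method scales and tilts the kernel per voxel, and the 21-bin polyenergetic spectrum is omitted here):

```python
import numpy as np
from scipy.signal import fftconvolve

def convolution_dose(terma, kernel):
    """Dose = TERMA convolved with an energy-deposition kernel.
    This is the convolution limit of the superposition-convolution
    method: a spatially invariant kernel on a homogeneous phantom."""
    return fftconvolve(terma, kernel, mode="same")

# toy 3D example: a pencil-beam TERMA in water and a Gaussian kernel
z, y, x = np.mgrid[-10:11, -10:11, -10:11].astype(float)
kernel = np.exp(-(x**2 + y**2 + z**2) / 4.0)
kernel /= kernel.sum()                             # deposit all released energy
terma = np.zeros((21, 21, 21))
terma[:, 10, 10] = np.exp(-0.05 * np.arange(21))   # attenuating beam along z
dose = convolution_dose(terma, kernel)
```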
Zhao, Yu; Ge, Fangfei; Liu, Tianming
2018-07-01
fMRI data decomposition techniques have advanced significantly from shallow models, such as Independent Component Analysis (ICA) and Sparse Coding and Dictionary Learning (SCDL), to deep learning models, such as Deep Belief Networks (DBN) and Deep Convolutional Autoencoders (DCAE). However, interpretation of the decomposed networks remains an open question due to the lack of functional brain atlases, the lack of correspondence among decomposed or reconstructed networks across different subjects, and significant individual variability. Recent studies showed that deep learning, especially deep convolutional neural networks (CNN), has an extraordinary ability to accommodate spatial object patterns; e.g., our recent works using 3D CNNs for fMRI-derived network classification achieved high accuracy with a remarkable tolerance for mistakenly labelled training brain networks. However, training data preparation is one of the biggest obstacles in these supervised deep learning models for functional brain network map recognition, since manual labelling requires tedious and time-consuming labour and sometimes even introduces labelling mistakes. Especially for mapping functional networks in large-scale datasets, such as the hundreds of thousands of brain networks used in this paper, manual labelling becomes almost infeasible. In response, in this work, we tackled both the network recognition and training data labelling tasks by proposing a new iteratively optimized deep learning CNN (IO-CNN) framework with automatic weak label initialization, which turns the functional brain network recognition task into a fully automatic large-scale classification procedure. Our extensive experiments based on ABIDE-II 1099 brains' fMRI data showed the great promise of our IO-CNN framework. Copyright © 2018 Elsevier B.V. All rights reserved.
DNA duplication is essential for the repair of gastrointestinal perforation in the insect midgut
Huang, Wuren; Zhang, Jie; Yang, Bing; Beerntsen, Brenda T.; Song, Hongsheng; Ling, Erjun
2016-01-01
Invertebrate animals have the capacity to repair wounds in the skin and gut via different mechanisms. Gastrointestinal perforation, a hole in the human gastrointestinal system, is a serious condition, and surgery is necessary to repair the perforation to prevent an abdominal abscess or sepsis. Here we report the repair of gastrointestinal perforation made by a needle-puncture wound in the silkworm larval midgut. Following insect gut perforation, only a weak immune response was observed: the growth of Escherichia coli was only partially inhibited by plasma collected at 6 h after needle puncture of the larval midgut. However, circulating hemocytes did aggregate over the needle-puncture wound to form a scab. While cell division and apoptosis were not observed at the wound site, the needle puncture significantly enhanced DNA duplication in cells surrounding the wound, which was essential to repair the midgut perforation. Given the repair capacity and the limited immune response caused by needle puncture of the midgut, this approach was successfully used for the injection of small compounds (ethanol in this study) into the insect midgut. Consequently, needle-puncture wounding of the insect gut can be developed for screening compounds for use as gut chemotherapeutics in the future. PMID:26754166
A novel in vivo model of puncture-induced iris neovascularization
Aronsson, Monica; Kvanta, Anders
2017-01-01
Purpose To assess iris neovascularization by uveal puncture of the mouse eye and determine the role of angiogenic factors during iris neovascularization. Methods Uveal punctures were performed on BalbC mouse eyes to induce iris angiogenesis. VEGF-blockage was used as an anti-angiogenic treatment, while normoxia- and hypoxia-conditioned media from retinal pigment epithelium (RPE) cells was used as an angiogenic-inducer in this model. Iris vasculature was determined in vivo by noninvasive methods. Iris blood vessels were stained for platelet endothelial cell adhesion molecule-1 and vascular sprouts were counted as markers of angiogenesis. Expression of angiogenic and inflammatory factors in the puncture-induced model were determined by qPCR and western blot. Results Punctures led to increased neovascularization and sprouting of the iris. qPCR and protein analysis showed an increase of angiogenic factors, particularly in the plasminogen-activating receptor and inflammatory systems. VEGF-blockage partly reduced iris neovascularization, and treatment with hypoxia-conditioned RPE medium led to a statistically significant increase in iris neovascularization. Conclusions This study presents the first evidence of a puncture-induced iris angiogenesis model in the mouse. In a broader context, this novel in vivo model of neovascularization has the potential for noninvasive evaluation of angiogenesis modulating substances. PMID:28658313
Duan, Xu; Ling, Feng; Shen, Yun; Yang, Jun; Xu, Hai-ying; Tong, Xiao-shan
2013-04-01
We investigated the efficacy and safety of nitroglycerin for preventing venous spasm during contrast-guided axillary vein puncture for pacemaker or defibrillator lead implantation. A total of 40 consecutive patients referred for contrast-guided axillary vein puncture for pacemaker or defibrillator implantation were included in the study. Patients were randomly assigned to a control group and a nitroglycerin group. Patients in the nitroglycerin group were given 200 µg (2 mL) of nitroglycerin via an ipsilateral peripheral vein about 3 min before puncture. The degree of venous spasm was evaluated by the reduction in lumen calibre of the axillary vein after puncture. Mild and severe venous spasm were defined as reductions in lumen calibre of 50-90% and ≥90%, respectively. The mean degree of venous spasm of the axillary vein was lower in the nitroglycerin group than in the control group (23.0 ± 22.3 vs. 45.5 ± 33.6%, P = 0.018). The incidence of mild or severe venous spasm was lower in the nitroglycerin group than in the control group (3/20 vs. 11/20, P = 0.019). In the nitroglycerin group, systolic blood pressure decreased significantly after puncture (129.5 ± 23.7 vs. 143.0 ± 24.1 mmHg, P = 0.003). There was no hypotension or other adverse reaction to nitroglycerin in the nitroglycerin group. Intravenous nitroglycerin is effective and safe for preventing venous spasm during contrast-guided axillary vein puncture for pacemaker or defibrillator lead implantation.
Dura-arachnoid lesions produced by 22 gauge Quincke spinal needles during a lumbar puncture
Reina, M; Lopez, A; Badorrey, V; De Andres, J A; Martin, S
2004-01-01
Aims: The dural and arachnoid hole caused by lumbar puncture needles is a determining factor in triggering headaches. The aim of this study is to assess the dimensions and morphological features of the dura mater and arachnoid when they are punctured by a 22 gauge Quincke needle with its bevel in either the parallel or the transverse position. Methods: Fifty punctures were made with 22 gauge Quincke needles in the dural sac of four fresh cadavers using an "in vitro" model especially designed for this purpose. The punctures were performed by needles with bevels parallel or perpendicular to the spinal axis and studied under scanning electron microscopy. Results: Thirty-five of the 50 punctures done by Quincke needles (19 in the external surface and 16 in the internal) were used for evaluation. When the needle was inserted with its bevel parallel to the axis of the dural sac (17 of 35), the size of the dura-arachnoid lesion was 0.032 mm² in the epidural surface and 0.037 mm² in the subarachnoid surface of the dural sac. When the needle's bevel was perpendicular to the axis (18 of 35), the lesion size was 0.042 mm² for the external surface and 0.033 mm² for the internal. There were no statistically significant differences between these results. Conclusions: It is believed that the reported lower frequency of postdural puncture headache when the needle is inserted parallel to the cord axis should be explained by factors other than the size of the dura-arachnoid injury. PMID:15146008
Zhang, Di; Chen, LingXiao; Chen, XingYu; Wang, XiaoBo; Li, YuLin; Ning, GuangZhi; Feng, ShiQing
2016-03-01
The aim of this meta-analysis was to evaluate postdural puncture headache after spinal anesthesia with Whitacre spinal needles compared with Quincke spinal needles. We searched several databases, including PubMed, Embase, ISI Web of Knowledge, and the Cochrane Central Register of Controlled Trials, until October 10th, 2014, for randomized controlled trials that compared spinal anesthesia with Whitacre spinal needles versus Quincke spinal needles for postdural puncture headache. Two reviewers independently screened the literature, assessed the risk of bias, and extracted data. We used RevMan 5.3 software to perform the meta-analysis. Studies were included for the main end points if they addressed the following: frequency of postdural puncture headache, severity of postdural puncture headache as assessed by limitation of activities, and frequency of epidural blood patch. Nine randomized controlled trials were included in the meta-analysis. The meta-analysis showed that spinal anesthesia with Whitacre spinal needles achieved a lower incidence of postdural puncture headache (RR 0.34; 95% CI [0.22, 0.52]; P < .00001); in addition, the severity of postdural puncture headache was lower in the Whitacre spinal needle group (RR 0.32; 95% CI [0.16, 0.66]; P = .002). Furthermore, the frequency of an epidural blood patch in the Whitacre spinal needle group was lower compared with that in the Quincke spinal needle group (RR 0.15; 95% CI [0.04, 0.51]; P = .002). We suggest the Whitacre spinal needle as a superior choice for spinal anesthesia compared with the Quincke spinal needle. © 2016 American Headache Society.
Greenstein, Eugene; Passman, Rod; Lin, Albert C; Knight, Bradley P
2012-04-01
The application of radiofrequency electrocautery to a standard, open-ended transseptal needle has been used to facilitate transseptal puncture (TSP). The purpose of this study was to determine the incidence of cardiac tissue coring when this technique is used. A model using excised swine hearts submerged in a saline-filled basin was developed to simulate TSP with electrocautery and a standard transseptal needle. Punctures were performed without the use of electrocautery and by delivering radiofrequency energy to the transseptal needle using a standard electrocautery pen at 3 target sites (fossa ovalis, non-fossa ovalis septum, and aorta). The tissue of the submerged heart was gently tented, and the needle was advanced on delivery of radiofrequency. The devices were retracted, and the needle was flushed in a collection basin. None of the TSPs without cautery caused tissue coring. For TSPs using electrocautery, the frequency of coring was at least 21% for any puncture permutation used in the study and averaged 37% at septal sites (P<0.001 compared with punctures without cautery). Tissue coring occurred in 33 of 96 (35%) punctures through the fossa ovalis and in 38 of 96 (40%) punctures through non-fossa ovalis septum. The frequency of tissue coring at aortic sites was 62 of 96 (65%), which was significantly higher than at the septal sites (P<0.001). In an animal preparation, TSP at the level of the fossa ovalis using electrocautery and a standard open-ended Brockenbrough needle resulted in coring of the septal tissue in 35% of cases (33 of 96 punctures).
Causes and Solutions of the Trampoline Effect.
Miwa, Masamiki; Ota, Noboru; Ando, Chiyono; Miyazaki, Yukio
2015-01-01
A trampoline effect may occur mainly when a buttonhole tract and the vessel flap fail to form a straight line. Certain findings, however, suggest another cause: a vessel flap that is too small. The frequency of the trampoline effect, for example, is lower when a buttonhole tract is created by multiple punctures of the arteriovenous fistula (AVF) vessel than when it is created by a one-time puncture of the vessel. The lower frequency of the trampoline effect with multiple punctures of the AVF vessel may be due to enlargement of the initial puncture hole on the vessel every time the vessel is punctured with a sharp needle. Even if exactly the same point on the AVF vessel is aimed at every time, the actual puncture point shifts slightly at every puncture, which potentially results in enlargement of the initial hole on the AVF vessel. Moreover, in some patients, continued use of a buttonhole tract for an extended period of time increases the frequency of the trampoline effect. In such cases, the incidence of the trampoline effect can be reduced by single buttonhole cannulation using a new dull needle with sharp side edges that enlarges the vessel flap. Such single buttonhole cannulation may suggest that the increased frequency of the trampoline effect also potentially occurs in association with gradually diminishing flap size. As a final observation, dull needle insertion into a vessel flap in the reverse direction has been achieved more smoothly than insertion into a vessel flap in the conventional direction. A vessel flap in the reverse direction can be adopted clinically. © 2015 S. Karger AG, Basel.
Inexpensive homemade models for ultrasound-guided vein cannulation training.
Di Domenico, Stefano; Santori, Gregorio; Porcile, Elisa; Licausi, Martina; Centanaro, Monica; Valente, Umberto
2007-11-01
To test the hypothesis that low-cost homemade models may be used to acquire the basic skills for ultrasound-guided central vein puncture. Training study. University transplantation department. Training was performed using three different homemade models (A, B, and C). Segments of a common rubber tourniquet (V1) and Silastic tube (V2) were used to simulate vessels within agar-based models. Overall cost for each model was less than 5 euro (US$7). For each test (test I, A-V1; II, A-V2; III, B-V1; IV, C-V2), the number of punctures and attempts needed to locate the needle inside the lumen were recorded. Each test was considered completed when participants punctured the vessels at the first attempt for three consecutive times. In test I, the mean number of punctures and attempts were 3.85 ± 1.26 and 4.95 ± 3.05; in test II, 4.60 ± 1.14 and 6.30 ± 2.51; in test III, 4.80 ± 1.06 and 4.65 ± 2.21; and in test IV, 4.45 ± 1.23 and 6.05 ± 2.92, respectively. For each test, no statistical difference was found by comparison of the number of punctures and attempts for anesthesiologists versus nonanesthesiologists, men versus women, or previous experience versus no experience with central vein cannulation (CVC). Video game users obtained better results than nonusers in test I (punctures, P = 0.033; attempts, P = 0.038), test II (punctures, P = 0.052; attempts, P = 0.011), and test IV (punctures, P = 0.001; attempts, P = 0.003). A posttraining questionnaire showed favorable opinions about the clarity of the instructions, aptness of the models, and adequacy of the training. In our operative unit, the use of ultrasound guidance for CVC increased from 2% to 23% in the first month after training. Low-cost homemade models are useful in acquiring basic coordination skills for ultrasound-guided CVC.
Kwak, Dai-Soon; In, Yong; Kim, Tae Kyun; Cho, Han Suk; Koh, In Jun
2016-01-01
Despite the documented clinical efficacy of the pie-crusting technique for medial collateral ligament (MCL) release in varus total knee arthroplasty, its quantitative effects on medial gaps and its safety remain unclear. This study was undertaken to determine the efficacy (quantitative effect and consistency of the number of punctures) and the safety (frequency of early over-release) of the pie-crusting technique for MCL release. From ten pairs of cadaveric knees, one knee from each pair was randomly assigned to undergo pie crusting in extension (group E) or in flexion (group F). Pie crusting was performed in the superficial MCL using a blade until over-release occurred. After every puncture, the incremental medial gap increase was recorded, and the number of punctures required for 2- or 4-mm gap increases was assessed. In group E, the extension gap increased from 0.8 to 5.0 mm and the flexion gap increased from 0.8 to 3.0 mm. In group F, the extension gap increased from 1.0 to 3.0 mm and the flexion gap increased from 2.6 to 6.0 mm. However, the gap increments after successive blade punctures were inconsistent, and the number of punctures required to increase the gaps by 2 or 4 mm was variable. The number of punctures leading to over-release in group E and group F was 6 ± 1 and 3 ± 1 punctures, respectively. Overall, 70% of over-releases occurred before the average number of punctures leading to over-release was reached. Pie crusting led to unpredictable gap increments and to frequent early over-release. Surgeons should decide carefully before using the pie-crusting technique for MCL release and should remain cautious throughout the procedure, especially when it is performed in a flexed knee. Therapeutic study, Level I.
NASA Astrophysics Data System (ADS)
El-Shafai, W.; El-Bakary, E. M.; El-Rabaie, S.; Zahran, O.; El-Halawany, M.; Abd El-Samie, F. E.
2017-06-01
Three-Dimensional Multi-View Video (3D-MVV) transmission over wireless networks suffers from macro-block losses due to either packet dropping or fading-motivated bit errors. Thus, robust 3D-MVV transmission over wireless channels has become a considerable research issue due to restricted resources and the presence of severe channel errors. The 3D-MVV is composed of multiple video streams shot simultaneously by several cameras around a single object. Therefore, it is an urgent task to achieve high compression ratios to meet future bandwidth constraints. Unfortunately, the highly-compressed 3D-MVV data becomes more sensitive and vulnerable to packet losses, especially in the case of heavy channel faults. Thus, in this paper, we suggest the application of a chaotic Baker interleaving approach with equalization and convolution coding for efficient Singular Value Decomposition (SVD) watermarked 3D-MVV transmission over an Orthogonal Frequency Division Multiplexing wireless system. Rayleigh fading and Additive White Gaussian Noise are considered in the real scenario of 3D-MVV transmission. The SVD watermarked 3D-MVV frames are primarily converted to their luminance and chrominance components, which are then converted to binary data format. After that, chaotic interleaving is applied prior to the modulation process. It reduces the channel effects on the transmitted bit streams and also adds a degree of encryption to the transmitted 3D-MVV frames. To test the performance of the proposed framework, several simulation experiments on different SVD watermarked 3D-MVV frames were executed. The experimental results show that the received SVD watermarked 3D-MVV frames still have high Peak Signal-to-Noise Ratios and watermark extraction is possible in the proposed framework.
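The paper's permutation is not reproduced in the abstract, but the discretized Baker map commonly used for chaotic interleaving (Fridrich's formulation) is easy to state; a sketch on a single square block of bits, with the key chosen arbitrarily for illustration:

```python
import numpy as np

def baker_permutation(N, key):
    """Discretized Baker map on an N x N block (Fridrich's
    formulation). `key` is a tuple (n1, ..., nk) with sum(key) == N
    and each ni dividing N; the map is a bijection of the N*N
    positions, so it can scatter (interleave) a bitstream."""
    assert sum(key) == N
    perm = np.empty((N, N, 2), dtype=int)
    Ni = 0
    for ni in key:
        q = N // ni
        for r in range(Ni, Ni + ni):
            for s in range(N):
                perm[r, s] = (q * (r - Ni) + s % q, (s - s % q) // q + Ni)
        Ni += ni
    return perm

def baker_interleave(bits, key):
    """Interleave a length N*N bit vector with the Baker map;
    inverting the permutation de-interleaves at the receiver."""
    N = int(len(bits) ** 0.5)
    block = bits.reshape(N, N)
    perm = baker_permutation(N, key)
    out = np.empty_like(block)
    out[perm[..., 0], perm[..., 1]] = block
    return out.reshape(-1)

bits = np.random.randint(0, 2, 64)               # one 8 x 8 block
scrambled = baker_interleave(bits, key=(2, 4, 2))
```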
Experimental Investigation of Convoluted Contouring for Aircraft Afterbody Drag Reduction
NASA Technical Reports Server (NTRS)
Deere, Karen A.; Hunter, Craig A.
1999-01-01
An experimental investigation was performed in the NASA Langley 16-Foot Transonic Tunnel to determine the aerodynamic effects of external convolutions, placed on the boattail of a nonaxisymmetric nozzle for drag reduction. Boattail angles of 15° and 22° were tested with convolutions placed at a forward location upstream of the boattail curvature, at a mid location along the curvature, and at a full location that spanned the entire boattail flap. Each of the baseline nozzle afterbodies (no convolutions) had a parabolic, converging contour with a parabolically decreasing corner radius. Data were obtained at several Mach numbers from static conditions to 1.2 for a range of nozzle pressure ratios and angles of attack. An oil paint flow visualization technique was used to qualitatively assess the effect of the convolutions. Results indicate that afterbody drag reduction by convoluted contouring depends on convolution location, Mach number, boattail angle, and NPR. The forward convolution location was the most effective contouring geometry for drag reduction on the 22° afterbody, but was only effective for M < 0.95. At M = 0.8, drag was reduced 20 and 36 percent at NPRs of 5.4 and 7, respectively, but drag was increased 10 percent for M = 0.95 at NPR = 7. Convoluted contouring along the 15° boattail angle afterbody was not effective at reducing drag because the flow was minimally separated from the baseline afterbody, unlike the massive separation along the 22° boattail angle baseline afterbody.
Experimental study of current loss and plasma formation in the Z machine post-hole convolute
NASA Astrophysics Data System (ADS)
Gomez, M. R.; Gilgenbach, R. M.; Cuneo, M. E.; Jennings, C. A.; McBride, R. D.; Waisman, E. M.; Hutsel, B. T.; Stygar, W. A.; Rose, D. V.; Maron, Y.
2017-01-01
The Z pulsed-power generator at Sandia National Laboratories drives high energy density physics experiments with load currents of up to 26 MA. Z utilizes a double post-hole convolute to combine the current from four parallel magnetically insulated transmission lines into a single transmission line just upstream of the load. Current loss is observed in most experiments and is traditionally attributed to inefficient convolute performance. The apparent loss current varies substantially for z-pinch loads with different inductance histories; however, a similar convolute impedance history is observed for all load types. This paper details direct spectroscopic measurements of plasma density, temperature, and apparent and actual plasma closure velocities within the convolute. Spectral measurements indicate a correlation between impedance collapse and plasma formation in the convolute. Absorption features in the spectra show the convolute plasma consists primarily of hydrogen, which likely forms from desorbed electrode contaminant species such as H₂O, H₂, and hydrocarbons. Plasma densities increase from 1 × 10¹⁶ cm⁻³ (level of detectability) just before peak current to over 1 × 10¹⁷ cm⁻³ at stagnation (tens of ns later). The density seems to be highest near the cathode surface, with an apparent cathode-to-anode plasma velocity in the range of 35-50 cm/μs. Similar plasma conditions and convolute impedance histories are observed in experiments with high and low losses, suggesting that losses are driven largely by load dynamics, which determine the voltage on the convolute.
Vascular access: the impact of ultrasonography
de Almeida, Carlos Eduardo Saldanha
2016-01-01
Vascular punctures are often necessary in critically ill patients. They are safe, but not free of complications. Ultrasonography enhances the safety of the procedure by decreasing puncture attempts, complications, and costs. This study reviews important publications and the puncture technique using ultrasound, drawing on the experience of the intensive care unit of the Hospital Israelita Albert Einstein, São Paulo (SP), Brazil, and discussing issues that should be considered in future studies. PMID:28076607
[An atraumatic needle for the puncture of ports and pumps].
Haindl, H; Müller, H
1988-10-17
Huber-point needles have been found to induce substantial coring during puncture of ports or pumps, which may lead to leakage or obturation of these devices. Therefore, different types of cannulas were tested in order to evaluate their applicability for this purpose. Pencil-point needles led to increased pain during puncture and thus seemed unsuitable. A newly developed port cannula is bent inwards within the length of the bevel ("protected bevel") and proved to be noncoring under electron microscopy. The force required to introduce this needle was also reduced by 50% in comparison with the Huber-type needle. In addition, this cannula allowed up to 3000 punctures of one port without leakage and thus markedly increased the durability of the device.
2015-12-15
Keypoint Density-based Region Proposal for Fine-Grained Object Detection and Classification using Regions with Convolutional Neural Network ... Convolutional Neural Networks (CNNs) enable them to outperform conventional techniques on standard object detection and classification tasks, their...detection accuracy and speed on the fine-grained Caltech UCSD bird dataset (Wah et al., 2011). Recently, Convolutional Neural Networks (CNNs), a deep
Witoonchart, Peerajak; Chongstitvatana, Prabhas
2017-08-01
In this study, for the first time, we show how to formulate a structured support vector machine (SSVM) as two layers in a convolutional neural network, where the top layer is a loss-augmented inference layer and the bottom layer is a normal convolutional layer. We show that a deformable part model can be learned with the proposed structured SVM neural network by backpropagating the error of the deformable part model to the convolutional neural network. The forward propagation calculates the loss-augmented inference and the backpropagation calculates the gradient from the loss-augmented inference layer to the convolutional layer. Thus, we obtain a new type of convolutional neural network, called a structured SVM convolutional neural network, which we applied to the human pose estimation problem. This new neural network can be used as the final layers in deep learning. Our method jointly learns the structural model parameters and the appearance model parameters. We implemented our method as a new layer in the existing Caffe library. Copyright © 2017 Elsevier Ltd. All rights reserved.
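A minimal sketch of the loss-augmented inference layer for the simplest structured case, a multiclass hinge (the paper's structure is a deformable part model over joint configurations, which this does not reproduce): the forward pass performs loss-augmented inference, and the backward pass sends a ±1 gradient into the layers below.

```python
import numpy as np

def ssvm_layer_forward(scores, y_true, margin=1.0):
    """Loss-augmented inference layer: structured hinge loss
    L = max_y [ f(y) + margin * (y != y_true) ] - f(y_true).
    `scores` f(y) would come from the convolutional layers below
    (here, one score per candidate structure/class)."""
    aug = scores + margin * (np.arange(len(scores)) != y_true)
    y_hat = int(np.argmax(aug))            # loss-augmented inference
    return aug[y_hat] - scores[y_true], y_hat

def ssvm_layer_backward(scores, y_true, y_hat):
    """Gradient of the hinge w.r.t. the scores: +1 at the
    loss-augmented argmax, -1 at the ground truth (zero vector
    when the margin is already satisfied)."""
    g = np.zeros_like(scores)
    if y_hat != y_true:
        g[y_hat] += 1.0
        g[y_true] -= 1.0
    return g

scores = np.array([1.0, 2.5, 0.3])
loss, y_hat = ssvm_layer_forward(scores, y_true=0)
grad = ssvm_layer_backward(scores, 0, y_hat)   # flows to conv layers below
```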
ERIC Educational Resources Information Center
Umar, A.; Yusau, B.; Ghandi, B. M.
2007-01-01
In this note, we introduce and discuss convolutions of two series. The idea is simple and can be introduced in upper secondary school classes, and it has the potential of providing a good background for the well-known convolution of functions.
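As a concrete instance of the idea, the convolution of the series (1, 2, 3) and (4, 5, 6) is c_n = Σ_k a_k b_(n−k), i.e., the coefficients of the product of their generating polynomials; a one-line check:

```python
import numpy as np

# c_n = sum_k a_k * b_(n-k): the coefficients of the product of the
# generating polynomials (1 + 2x + 3x^2)(4 + 5x + 6x^2)
a = [1, 2, 3]
b = [4, 5, 6]
print(np.convolve(a, b))   # [ 4 13 28 27 18 ]
```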
A fast complex integer convolution using a hybrid transform
NASA Technical Reports Server (NTRS)
Reed, I. S.; K Truong, T.
1978-01-01
It is shown that the Winograd transform can be combined with a complex integer transform over the Galois field GF(q²) to yield a new algorithm for computing the discrete cyclic convolution of complex sequences. By this means a fast method for accurately computing the cyclic convolution of a sequence of complex numbers for long convolution lengths can be obtained. This new hybrid algorithm requires fewer multiplications than previous algorithms.
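To make the ingredients concrete, the sketch below computes an exact integer cyclic convolution with a number-theoretic transform over the prime field GF(257), where 3 is a primitive root. It is a simplified stand-in for the paper's construction: the transform here is the naive O(n²) version over GF(p) rather than the Winograd-factored transform over GF(q²), and exactness requires every output value to stay below the modulus.

```python
# Exact integer cyclic convolution via a transform over GF(257).
P, G = 257, 3          # prime modulus and a primitive root mod P

def ntt(a, invert=False):
    n = len(a)                       # n must divide P - 1 = 256
    w = pow(G, (P - 1) // n, P)      # primitive n-th root of unity
    if invert:
        w = pow(w, P - 2, P)         # w^(-1) by Fermat's little theorem
    out = [sum(a[j] * pow(w, i * j, P) for j in range(n)) % P
           for i in range(n)]
    if invert:
        n_inv = pow(n, P - 2, P)
        out = [v * n_inv % P for v in out]
    return out

def cyclic_convolution(x, y):
    """Transform, multiply pointwise, transform back: exact as long
    as every true output value is below the modulus P."""
    X, Y = ntt(x), ntt(y)
    return ntt([a * b % P for a, b in zip(X, Y)], invert=True)

print(cyclic_convolution([1, 2, 3, 4], [1, 0, 0, 1]))  # [3, 5, 7, 5]
```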
The effect of second-stage pushing and body mass index on postdural puncture headache.
Franz, Amber M; Jia, Shawn Y; Bahnson, Henry T; Goel, Akash; Habib, Ashraf S
2017-02-01
To explore how pushing during labor and body mass index affect the development of postdural puncture headache in parturients who experienced dural puncture with Tuohy needles. Retrospective cohort. Obstetric ward and operating rooms at a university-affiliated hospital. One hundred ninety parturients who had witnessed dural puncture with 17 or 18 gauge Tuohy needles from 1999-2014. Patients were categorized by pushing status and body mass index (kg/m²): nonobese <30, obese 30-39.99, morbidly obese 40-49.99, and super obese ≥50. Headache, number of days of headache, maximum headache score, and epidural blood patch placement. Compared with women who did not push, women who pushed during labor had increased risk of postdural puncture headache (odds ratio [OR], 2.1 [1.1-4.0]; P=.02), more days of headache (P=.02), and increased epidural blood patch placement (P=.02). Super obese patients were less likely to develop headache compared with nonobese (OR, 0.33 [0.13-0.85]; P=.02), obese (OR, 0.37 [0.14-0.98]; P=.045), and morbidly obese patients (OR, 0.20 [0.05-0.68]; P<.01). In a multivariate logistic regression model, lack of pushing (OR, 0.57 [0.29-1.10]; P=.096) and super obesity (OR, 0.41 [0.16-1.02]; P=.056) were no longer significantly associated with reduced risk of postdural puncture headache. Parturients who did not push before delivery and parturients with body mass index ≥50 kg/m² were less likely to develop postdural puncture headache in a univariate analysis. Similar trends were demonstrated in a multivariate model, but were no longer statistically significant. Copyright © 2016 Elsevier Inc. All rights reserved.
Ezhumalai, Babu; Satheesh, Santhosh; Jayaraman, Balachander
2014-01-01
The success of transradial catheterization depends on meticulous access of the radial artery, which in turn depends on palpating a good radial pulse. Our objectives were to analyze the effects of subcutaneously infiltrated nitroglycerin on the diameter of the radial artery, palpability of the radial pulse, ease of puncture, and pre-cannulation spasm of the radial artery during transradial coronary angiography. Patients undergoing transradial coronary angiography were randomized to Group NL or Group SL. In Group NL, 3 ml of a solution containing nitroglycerin and lignocaine was infiltrated subcutaneously at the site intended for puncture of the radial artery. Similarly, saline and lignocaine were infiltrated in Group SL. The diameter of the radial artery was objectively assessed by ultrasonography. Measurements were performed at baseline and repeated 1 min after injecting the solutions. The ease of puncture was evaluated by the number of punctures and the time needed for successful access of the radial artery. Both groups had 100 patients each. The baseline diameter of the radial artery was similar between the two groups. The post-injection diameter of the radial artery increased by 26.3% in Group NL and 11.4% in Group SL. Nitroglycerin significantly improved the palpability of the radial pulse, reduced the number of punctures, and shortened the time needed for successful access of the radial artery. Pre-cannulation spasm of the radial artery occurred in 1% of Group NL and 8% of Group SL. Subcutaneously infiltrated nitroglycerin leads to significant vasodilation of the radial artery. This avoids pre-cannulation spasm, enhances palpability of the radial pulse, and thus makes puncture of the radial artery easier. Copyright © 2014 Cardiological Society of India. Published by Elsevier B.V. All rights reserved. PMID:25634390
Momeni, Ali; Rouhi, Kasra; Rajabalipanah, Hamid; Abdolali, Ali
2018-04-18
Inspired by information theory, a new concept of re-programmable encrypted graphene-based coding metasurfaces was investigated at terahertz frequencies. A channel-coding function was proposed to convolutionally record an arbitrary information message onto unrecognizable but recoverable parity beams generated by a phase-encrypted coding metasurface. A single graphene-based reflective cell with dual-mode biasing voltages was designed to act as "0" and "1" meta-atoms, providing broadband opposite reflection phases. By exploiting graphene tunability, the proposed scheme enabled an unprecedented degree of freedom in the real-time mapping of information messages onto multiple parity beams which could not be damaged, altered, or reverse-engineered. Various encryption types such as mirroring, anomalous reflection, multi-beam generation, and scattering diffusion can be dynamically attained via our multifunctional metasurface. Besides, contrary to conventional time-consuming and optimization-based methods, this paper offers a fast, straightforward, and efficient design of diffusion metasurfaces of arbitrarily large size. Rigorous full-wave simulations corroborated the results, where the phase-encrypted metasurfaces exhibited a polarization-insensitive reflectivity less than -10 dB over a broadband frequency range from 1 THz to 1.7 THz. This work reveals new opportunities for the extension of re-programmable THz-coding metasurfaces and may be of interest for reflection-type security systems, computational imaging, and camouflage technology.
Compressive Sampling based Image Coding for Resource-deficient Visual Communication.
Liu, Xianming; Zhai, Deming; Zhou, Jiantao; Zhang, Xinfeng; Zhao, Debin; Gao, Wen
2016-04-14
In this paper, a new compressive sampling based image coding scheme is developed to achieve competitive coding efficiency at lower encoder computational complexity, while supporting error resilience. This technique is particularly suitable for visual communication with resource-deficient devices. At the encoder, a compact image representation is produced, which is a polyphase down-sampled version of the input image; but the conventional low-pass filter prior to down-sampling is replaced by a local random binary convolution kernel. The pixels of the resulting down-sampled pre-filtered image are local random measurements placed in the original spatial configuration. The advantages of local random measurements are twofold: 1) they preserve high-frequency image features that are otherwise discarded by low-pass filtering; and 2) they remain a conventional image and can therefore be coded by any standardized codec to remove statistical redundancy at larger scales. Moreover, measurements generated by different kernels can be considered as multiple descriptions of the original image, so the proposed scheme also has the advantage of multiple description coding. At the decoder, a unified sparsity-based soft-decoding technique is developed to recover the original image from the received measurements in a compressive sensing framework. Experimental results demonstrate that the proposed scheme is competitive with existing methods, with a unique strength in recovering fine details and sharp edges at low bit-rates.
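A sketch of the encoder front end as described, with kernel size, stride, normalization, and boundary handling as illustrative assumptions (the paper's exact kernel design and sampling phase are not given here):

```python
import numpy as np
from scipy.signal import convolve2d

def cs_encode(image, kernel_size=5, stride=2, seed=0):
    """Replace the low-pass pre-filter with a local random binary
    convolution kernel, then polyphase down-sample. Each output
    pixel is a local random measurement, and the output is still an
    ordinary (smaller) image that any standard codec can compress."""
    rng = np.random.default_rng(seed)
    kernel = rng.integers(0, 2, (kernel_size, kernel_size)).astype(float)
    kernel /= kernel.sum()                     # keep output in input range
    measured = convolve2d(image, kernel, mode="same", boundary="symm")
    return measured[::stride, ::stride]        # polyphase down-sampling

low_res = cs_encode(np.random.rand(64, 64))    # 32 x 32 measurement image
```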
Evolving a Puncture Black Hole with Fixed Mesh Refinement
NASA Technical Reports Server (NTRS)
Imbiriba, Breno; Baker, John; Choi, Dae-Il; Centrella, Joan; Fiske, David R.; Brown, J. David; van Meter, James R.; Olson, Kevin
2004-01-01
We present a detailed study of the effects of mesh refinement boundaries on the convergence and stability of simulations of black hole spacetimes. We find no technical problems. In our applications of this technique to the evolution of puncture initial data, we demonstrate that it is possible to simultaneously maintain second order convergence near the puncture and extend the outer boundary beyond 100M, thereby approaching the asymptotically flat region in which boundary condition problems are less difficult.
Aerosol can puncture device operational test plan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leist, K.J.
1994-05-03
Puncturing of aerosol cans is performed in the Waste Receiving and Processing Facility Module 1 (WRAP 1) process as a requirement of the waste disposal acceptance criteria for both transuranic (TRU) waste and low-level waste (LLW). These cans have contained such things as paints, lubricating oils, paint removers, insecticides, and cleaning supplies which were used in radioactive facilities. Due to Westinghouse Hanford Company (WHC) Fire Protection concerns about the baseline system's fire/explosion proof characteristics, a study was undertaken to compare the baseline system's design to commercially available puncturing devices. While the study found no areas which might indicate a risk of fire or explosion, WHC Fire Protection determined that the puncturing system must have a demonstrated record of safe operation. This could be obtained either by testing the baseline design by an independent laboratory, or by substituting a commercially available device. As a result of these efforts, the commercially available Aerosolv can puncturing device was chosen to replace the baseline design. Two concerns were raised with the system: premature blinding of the coalescing/carbon filter, due to its proximity to the puncture and draining operation; and overpressurization of the collection bottle due to its small volume and to blinding of the filter assembly. As a result of these concerns, testing was deemed necessary. The objective of this report is to outline test procedures for the Aerosolv.
Kokki, H; Hendolin, H
1996-01-01
A comparison of a 25 G with a 29 G Quincke needle was performed in paediatric day case surgery. Sixty healthy children aged 1 year to 13 years were randomly allocated to have spinal anaesthesia with either a 25 G or a 29 G Quincke needle without an introducer needle. There was a failure rate of 10% with the 29 G spinal needle compared with 0% with the 25 G needle. The time needed to perform dural puncture was shorter using the 25 G than the 29 G needle, 22 (±31) (SD) vs 59 (±63) s. The time taken for cerebrospinal fluid to appear at the needle hub was also longer with the 29 G needle, 4 (±3) vs 8 (±5) s. The number of puncture attempts was similar, 1.2 (±0.6) vs 1.4 (±0.8), with the 25 G and 29 G needles. Low back pain, 5 vs 1, and nonpositional headache, 2 vs 4, after 25 G and 29 G needles, respectively, were the most frequent postoperative complaints. Mild postdural puncture headache occurred in one eight-year-old male patient in the 25 G group. In conclusion, lumbar puncture without an introducer needle was possible with both needles. The puncture characteristics favoured the 25 G needle. A shorter needle could partly alleviate the difficulties with the 29 G needle.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kew, Jacqueline; Davies, Roger P.
2004-01-15
A new method is described for guiding hepato-portal venous puncture using a longitudinal side-view intravascular ultrasound (L-IVUS) transducer to assist in the performance of transjugular intrahepatic portosystemic shunt (TIPS) in three Australian swine. Simultaneous L-IVUS with an AcuNav (registered) 5-10 MHz 10 Fr transducer (Acuson Corporation, Mountain View, CA, USA) and fluoroscopy guidance was used to image and monitor the hepatic to portal venous puncture, dilatation of the tract, and deployment of the TIPS stent. Flow through the shunt could be demonstrated with both L-IVUS and angiography. TIPS was successful in all swine. The time for portal vein puncture once the target portal vein was identified was reduced at each attempt. The number of portal vein puncture attempts was 2, 1, and 1. No post-procedural complication was evident. L-IVUS-guided TIPS is practical and has the potential to improve safety by permitting simultaneous ultrasound and fluoroscopic imaging of the needle and target vascular structures. This technique allows for a more streamlined approach to TIPS, decreasing the fluoroscopic time (hence, decreasing the radiation exposure to the staff and patient) and anesthetic time. In addition, there are improved safety benefits obviating the need for wedged portography, facilitating avoidance of bile duct and hepatic arterial puncture, and minimizing hepatic injury by decreasing liver capsular puncture and the attendant risks.
3-Dimensional printing guide template assisted percutaneous vertebroplasty: Technical note.
Li, Jian; Lin, JiSheng; Yang, Yong; Xu, JunChuan; Fei, Qi
2018-06-01
Percutaneous vertebroplasty (PVP) is currently considered an effective treatment for pain caused by acute osteoporotic vertebral compression fracture. Recently, puncture-related complications have been increasingly reported, so it is important to find a precise technique to reduce them. We report a case and discuss the novel surgical technique, with step-by-step operating procedures, to introduce precise PVP assisted by a 3-dimensional printing guide template. Based on preoperative CT scan and infrared scan data, a well-designed individual guide template could be established in 3-dimensional reconstruction software and printed out by a 3-dimensional printer. In the actual operation, by matching the guide template to the patient's back skin, the cement needles' insertion orientation and depth were easily established. Only 14 C-arm fluoroscopy exposures in HDF mode (total exposure dose 4.5 mSv) were required during the procedure. The operation took only 17 min. Cement distribution in the vertebral body was very good, without any puncture-related complications. Pain was significantly relieved after surgery. In conclusion, the novel precise 3-dimensional printing guide template system may allow (1) comprehensive visualization of the fractured vertebral body and individual surgical planning, (2) a perfect fit between skin and guide template to ensure puncture stability and accuracy, and (3) increased puncture precision and decreased puncture-related complications, surgical time, and radiation exposure.
NASA Astrophysics Data System (ADS)
Hamdi, Mazda; Kenari, Masoumeh Nasiri
2013-06-01
We consider a time-hopping based multiple access scheme introduced in [1] for communication over dispersive infrared links and evaluate its performance for correlator and matched filter receivers. In the investigated time-hopping code division multiple access (TH-CDMA) method, the transmitter employs a low-rate convolutional encoder. The bit interval is divided into Nc chips, and the output of the encoder, along with a PN sequence assigned to the user, determines the position of the chip in which the optical pulse is transmitted. We evaluate the multiple access performance of the system for the correlation receiver considering background noise, which is modeled as white Gaussian noise due to its large intensity. For the correlation receiver, the results show that for a fixed processing gain, at high transmit power, where multiple access interference has the dominant effect, the performance improves with the coding gain. At low transmit power, however, increasing the coding gain decreases the chip time and consequently causes more corruption due to channel dispersion, so there exists an optimum value of the coding gain. For the matched filter, the performance always improves with the coding gain. The results show that the matched filter receiver outperforms the correlation receiver in the considered cases. They also show that, for the same bandwidth and bit rate, the proposed system outperforms other multiple access techniques, such as conventional CDMA and time hopping.
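As a rough illustration of the chip-selection step described in this abstract, the sketch below maps a block of coded bits plus a user PN offset to one of Nc chip slots per bit interval. All names and parameter values are assumptions for illustration, not taken from the paper.

```python
import numpy as np

# Hypothetical sketch of the time-hopping mapping: each bit interval is
# split into Nc chips, and the (coded bits, user PN sequence) pair selects
# the chip slot that carries the optical pulse.

rng = np.random.default_rng(0)
Nc = 16                      # chips per bit interval
n_coded_bits = 4             # log2(Nc) coded bits select one of Nc slots

def chip_position(coded_bits, pn_offset):
    """Combine encoder output with the user's PN offset to pick a chip slot."""
    idx = int("".join(map(str, coded_bits)), 2)   # coded bits -> integer
    return (idx + pn_offset) % Nc                 # PN sequence shifts the slot

coded = rng.integers(0, 2, n_coded_bits)          # stand-in for encoder output
pn = int(rng.integers(0, Nc))                     # stand-in for PN chip offset
slot = chip_position(coded, pn)
frame = np.zeros(Nc)
frame[slot] = 1.0                                 # optical pulse in chosen slot
print(slot, frame)
```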
Sajjad, Muhammad; Mehmood, Irfan; Baik, Sung Wook
2017-01-01
Medical image collections contain a wealth of information which can assist radiologists and medical experts in diagnosis and disease detection for making well-informed decisions. However, this objective can only be realized if efficient access is provided to semantically relevant cases from the ever-growing medical image repositories. In this paper, we present an efficient method for representing medical images by incorporating visual saliency and deep features obtained from a fine-tuned convolutional neural network (CNN) pre-trained on natural images. A saliency detector is employed to automatically identify regions of interest, such as tumors, fractures, and calcified spots, in images prior to feature extraction. Neuronal activation features, termed neural codes, from different CNN layers are comprehensively studied to identify the most appropriate features for representing radiographs. This study revealed that neural codes from the last fully connected layer of the fine-tuned CNN are the most suitable for representing medical images. The neural codes extracted from the entire image and from the salient part of the image are fused to obtain the saliency-injected neural codes (SiNC) descriptor, which is used for indexing and retrieval. Finally, locality-sensitive hashing techniques are applied to the SiNC descriptor to acquire short binary codes for efficient retrieval in large-scale image collections. Comprehensive experimental evaluations on the radiology images dataset reveal that the proposed framework achieves high retrieval accuracy and efficiency for scalable image retrieval applications and compares favorably with existing approaches. PMID:28771497
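To make the hashing step concrete, here is a minimal sketch of one common locality-sensitive hashing scheme (random hyperplanes) applied to a fused descriptor. The descriptor length, code length, and all names are illustrative assumptions; the abstract does not specify which LSH variant is used.

```python
import numpy as np

# Random-hyperplane LSH: the sign of the projection onto each random
# hyperplane yields one bit, so nearby descriptors get similar codes.

rng = np.random.default_rng(42)
d, n_bits = 4096, 64                       # assumed descriptor/code lengths
planes = rng.standard_normal((n_bits, d))  # one random hyperplane per bit

def binary_code(descriptor):
    """Sign of projection onto each hyperplane gives one bit."""
    return (planes @ descriptor > 0).astype(np.uint8)

def hamming(a, b):
    return int(np.count_nonzero(a != b))

x = rng.standard_normal(d)                 # stand-in fused descriptor
y = x + 0.1 * rng.standard_normal(d)       # a near-duplicate image's descriptor
z = rng.standard_normal(d)                 # an unrelated image's descriptor
print(hamming(binary_code(x), binary_code(y)))  # small distance
print(hamming(binary_code(x), binary_code(z)))  # ~n_bits/2 on average
```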
Müller, H; Zierski, J
1988-10-03
Huber-point needles, which are thought to be noncoring, are usually recommended for puncture of implanted drug-delivery devices, such as ports and pumps. Nevertheless, we found occlusion by silicone chips deriving from the silicone inlet septum to be a major technical complication. Electron microscopic investigations demonstrated substantial loss of material from the port membrane after repeated puncture with this type of needle. During an in vitro test, multiple punctures with Huber-type cannulas led to pressure-dependent leakage of a port after only 150 to 750 insertions of a needle. In addition, the forces necessary for puncture or for withdrawal of the needle were increased with Huber-point needles, possibly due to a coring effect. Another disadvantage of the available port needles is the formation of a hook at the tip, which may lead to additional lesions of the port or pump membrane. In our opinion, resterilization of Huber needles, recommended by the manufacturers, is not advisable, because it is well known that safe sterilization of small lumina, e.g., the lumen of the needle, is impossible.
Embolization of an Internal Iliac Artery Aneurysm after Image-Guided Direct Puncture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heye, S., E-mail: sam.heye@uzleuven.be; Vaninbroukx, J.; Daenens, K.
2012-08-15
Objective: To evaluate the feasibility, safety, and efficacy of embolization of internal iliac artery aneurysm (IIAA) after percutaneous direct puncture under (cone-beam) computed tomography (CT) guidance. Methods: A retrospective case series of three patients, in whom IIAA was not accessible by way of the transarterial route, was reviewed. CT-guided puncture of the IIAA sac was performed in one patient. Two patients underwent puncture of the IIAA under cone-beam CT guidance. Results: Access to the IIAA sac was successful in all three patients. In two of the three patients, the posterior and/or anterior division was first embolized using platinum microcoils. The aneurysm sac was embolized with thrombin in one patient and with a mixture of glue and Lipiodol in two patients. No complications were seen. On follow-up CT, no opacification of the aneurysm sac was seen. The volume of one IIAA remained stable at follow-up, and the remaining two IIAAs decreased in size. Conclusion: Embolization of IIAA after direct percutaneous puncture under cone-beam CT/CT guidance is feasible and safe and results in good short-term outcome.
MUSIC: MUlti-Scale Initial Conditions
NASA Astrophysics Data System (ADS)
Hahn, Oliver; Abel, Tom
2013-11-01
MUSIC generates multi-scale initial conditions with multiple levels of refinement for cosmological 'zoom-in' simulations. The code uses an adaptive convolution of Gaussian white noise with a real-space transfer function kernel, together with an adaptive multi-grid Poisson solver, to generate displacements and velocities following first-order (1LPT) or second-order (2LPT) Lagrangian perturbation theory. MUSIC achieves rms relative errors of the order of 10^-4 for displacements and velocities in the refinement region and thus improves in terms of errors by about two orders of magnitude over previous approaches. In addition, errors are localized at coarse-fine boundaries and do not suffer from Fourier space-induced interference ringing.
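As background for the white-noise convolution idea, here is a toy sketch of generating a Gaussian random field by filtering white noise with a transfer function in Fourier space. The power-law kernel, grid size, and names are assumptions for illustration; MUSIC's adaptive multi-scale and real-space machinery is not reproduced.

```python
import numpy as np

# Gaussian random field = white noise convolved with a transfer function.
# Here the convolution is done in Fourier space on a small uniform grid.

rng = np.random.default_rng(1)
N = 128
noise = rng.standard_normal((N, N))       # white noise on a uniform grid

k = np.fft.fftfreq(N)
kx, ky = np.meshgrid(k, k, indexing="ij")
kmag = np.hypot(kx, ky)
kmag[0, 0] = 1.0                          # avoid division by zero at k = 0

transfer = kmag ** -1.0                   # assumed power-law kernel T(k)
transfer[0, 0] = 0.0                      # zero the mean mode

field = np.fft.ifft2(np.fft.fft2(noise) * transfer).real
print(field.std())
```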
Classification of galaxy type from images using Microsoft R Server
NASA Astrophysics Data System (ADS)
de Vries, Andrie
2017-06-01
Many astronomers working in the field of AstroInformatics write code as part of their work. Although the programming language of choice is Python, a small number (8%) use R. R has specific strengths in the domain of statistics and is often viewed as limited in the size of data it can handle. However, Microsoft R Server is a product that removes these limitations by being able to process much larger amounts of data. I present some highlights of R Server by illustrating how to fit a convolutional neural network using R. The specific task is to classify galaxies, using only images extracted from the Sloan Digital Sky Survey SkyServer.
A VLSI pipeline design of a fast prime factor DFT on a finite field
NASA Technical Reports Server (NTRS)
Truong, T. K.; Hsu, I. S.; Shao, H. M.; Reed, I. S.; Shyu, H. C.
1986-01-01
A conventional prime factor discrete Fourier transform (DFT) algorithm is used to realize a discrete Fourier-like transform over the finite field GF(q^n). A pipeline structure is used to implement this prime factor DFT over GF(q^n). This algorithm is developed to compute cyclic convolutions of complex numbers and to decode Reed-Solomon codes. Such a pipeline fast prime factor DFT algorithm over GF(q^n) is regular, simple, expandable, and naturally suitable for VLSI implementation. An example illustrating the pipeline aspect of a 30-point transform over GF(q^n) is presented.
Capacity of noncoherent MFSK channels
NASA Technical Reports Server (NTRS)
Bar-David, I.; Butman, S. A.; Klass, M. J.; Levitt, B. K.; Lyon, R. F.
1974-01-01
Performance limits theoretically achievable over noncoherent channels perturbed by additive Gaussian noise in hard decision, optimal, and soft decision receivers are computed as functions of the number of orthogonal signals and the predetection signal-to-noise ratio. Equations are derived for orthogonal signal capacity, the ultimate MFSK capacity, and the convolutional coding and decoding limit. It is shown that performance improves as the signal-to-noise ratio increases, provided the bandwidth can be increased, that the optimum number of signals is not infinite (except for the optimal receiver), and that the optimum number decreases as the signal-to-noise ratio decreases, but is never less than 7 for even the hard decision receiver.
Convolution of large 3D images on GPU and its decomposition
NASA Astrophysics Data System (ADS)
Karas, Pavel; Svoboda, David
2011-12-01
In this article, we propose a method for computing the convolution of large 3D images. The convolution is performed in the frequency domain using the convolution theorem. The algorithm is accelerated on a graphics card by means of the CUDA parallel computing model. The convolution is decomposed in the frequency domain using the decimation-in-frequency algorithm. We pay attention to keeping our approach efficient in terms of both time and memory consumption, and also in terms of memory transfers between CPU and GPU, which have a significant influence on overall computational time. We also study the implementation on multiple GPUs and compare the results between the multi-GPU and multi-CPU implementations.
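For reference, the core operation being accelerated is frequency-domain convolution via the convolution theorem. The sketch below applies it to a small 3D volume in plain NumPy; the GPU/CUDA path and the decimation-in-frequency decomposition are not reproduced, and the sizes are chosen purely for illustration.

```python
import numpy as np

# Convolution theorem: conv(a, b) = IFFT(FFT(a) * FFT(b)).
# This computes a *circular* convolution; real use zero-pads both arrays
# to size(a) + size(b) - 1 per axis to avoid wrap-around effects.

rng = np.random.default_rng(7)
vol = rng.standard_normal((32, 32, 32))   # "large" 3D image stand-in
kernel = np.zeros_like(vol)
kernel[:3, :3, :3] = 1.0 / 27             # small box-blur kernel

out = np.fft.ifftn(np.fft.fftn(vol) * np.fft.fftn(kernel)).real
print(out.shape, out.mean())
```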
... Test is Performed: The test is used to evaluate respiratory diseases and conditions that affect the lungs. ... may include: bleeding at the puncture site; blood flow problems at the puncture site (rare); bruising at the ...
Learning-Based Just-Noticeable-Quantization- Distortion Modeling for Perceptual Video Coding.
Ki, Sehwan; Bae, Sung-Ho; Kim, Munchurl; Ko, Hyunsuk
2018-07-01
Conventional predictive video coding-based approaches are reaching the limit of their potential coding efficiency improvements because of severely increasing computational complexity. As an alternative approach, perceptual video coding (PVC) has attempted to achieve high coding efficiency by eliminating perceptual redundancy, using just-noticeable-distortion (JND) directed PVC. Previous JNDs were modeled by adding white Gaussian noise or specific signal patterns to the original images, which was not appropriate for finding JND thresholds due to distortion with energy reduction. In this paper, we present a novel discrete cosine transform-based energy-reduced JND model, called ERJND, that is more suitable for JND-based PVC schemes. The proposed ERJND model is then extended to two learning-based just-noticeable-quantization-distortion (JNQD) models that can be applied as preprocessing for perceptual video coding. The two JNQD models can automatically adjust JND levels based on given quantization step sizes. One of the two JNQD models, called LR-JNQD, is based on linear regression and determines the model parameter for JNQD from extracted handcrafted features. The other JNQD model, called CNN-JNQD, is based on a convolutional neural network (CNN). To the best of our knowledge, ours is the first approach to automatically adjust JND levels according to quantization step sizes for preprocessing the input to video encoders. In experiments, both the LR-JNQD and CNN-JNQD models were applied to high efficiency video coding (HEVC) and yielded maximum (average) bitrate reductions of 38.51% (10.38%) and 67.88% (24.91%), respectively, with little subjective video quality degradation, compared with the input without preprocessing.
Recent Advances in Thermoplastic Puncture-Healing Polymers
NASA Technical Reports Server (NTRS)
Gordon, K. L.; Working, D. C.; Wise, K. E.; Bogert, P. B.; Britton, S. M.; Topping, C.C.; Smith, J. Y.; Siochi, E. J.
2009-01-01
Self-healing materials provide a route to enhanced damage tolerance in materials for aerospace applications. In particular, puncture-healing upon impact has the potential to mitigate significant damage caused by high velocity micrometeoroid impacts. This type of material also has the potential to improve damage tolerance in load-bearing structures to enhance vehicle health and aircraft durability. The materials being studied are those capable of instantaneous puncture healing, providing a mechanism for mechanical property retention in lightweight structures. These systems have demonstrated healing capability following penetration by fast-moving projectiles, at velocities that range from 9 mm bullets shot from a gun (approx. 330 m/sec) to close to micrometeoroid debris velocities of 4800 m/sec. In this presentation, we report on a suite of polymeric materials possessing this characteristic. Figure 1 illustrates the puncture-healing concept. Puncture healing in these materials depends on how the combination of a polymer's viscoelastic properties responds to the energy input resulting from the puncture event. Projectile penetration increases the temperature in the vicinity of the impact. Self-healing behavior occurs following puncture, whereby energy must be transferred to the material during impact both elastically and inelastically, thus establishing two requirements for puncture healing to occur: (a) the puncture event must produce a local melt state in the polymer material, and (b) the molten material must have sufficient melt elasticity to snap back and close the hole [1,2]. Previous ballistic testing studies revealed that Surlyn materials warmed to approx. 98 C during projectile puncture (3 C higher than the melting temperature) [1,2]. The temperature increase produces a localized flow state and the melt elasticity to snap back, thus sealing the hole. Table 1 lists the commercially available polymers studied here, together with their physical properties. The polymers were selected based on chemical structure, tensile strength, tensile modulus, glass transition temperature, melting temperature, and impact strength. The thermal properties of the polymers were characterized by Differential Scanning Calorimetry (DSC) and Dynamic Mechanical Analysis (DMA). Mechanical properties were assessed with a Sintech 2W Instron according to ASTM D1708 or D638 at crosshead speeds of 5.08 cm/min. Panels of the different materials, 7.6 cm x 7.6 cm, were prepared, and ballistic testing was performed at various temperatures. The panels were shot with a .223 caliber semiautomatic rifle from a distance of 23 meters at various temperatures. Chronographs were used to measure initial and final bullet velocity. Temperatures at the site of impact were measured using a FLIR ThermaCAM S60 thermal camera. A Vision Research model Phantom 9 high-speed video camera was used to capture high-speed video footage of the ballistic testing.
NASA Technical Reports Server (NTRS)
Mcmaster, L. R.; Peterson, S. T.; Hughes, F. M. (Inventor)
1973-01-01
A meteoroid detector is described which uses a cold cathode discharge tube with a gas-pressurized cell in space for recording a meteoroid puncture of the cell and for determining the size of the puncture.
Analytic convergence of harmonic metrics for parabolic Higgs bundles
NASA Astrophysics Data System (ADS)
Kim, Semin; Wilkin, Graeme
2018-04-01
In this paper we investigate the moduli space of parabolic Higgs bundles over a punctured Riemann surface with varying weights at the punctures. We show that the harmonic metric depends analytically on the weights and the stable Higgs bundle. This gives a Higgs bundle generalisation of a theorem of McOwen on the existence of hyperbolic cone metrics on a punctured surface within a given conformal class, and a generalisation of a theorem of Judge on the analytic parametrisation of these metrics.
Postdural puncture headache: a study with 25 G Quincke needle.
Singh, N Ratan; Singh, H Shanti
2010-02-01
The incidence of postdural puncture headache, its severity, time of onset and duration following spinal anaesthesia in female subjects using 25 gauge Quincke needles are discussed in this paper. Postdural puncture headache was seen in only 3% of the cases. The headache appeared mainly on the 1st postoperative day and was associated with nausea and vomiting in one case; and it disappeared by the 2nd to 3rd day following administration of mild analgesics and anti-emetics.
Rouabah, K; Varoquaux, A; Caporossi, J M; Louis, G; Jacquier, A; Bartoli, J M; Moulin, G; Vidal, V
2016-11-01
The purpose of this study was to assess the feasibility and utility of image fusion (Easy-TIPS) obtained from pre-procedure CT angiography and per-procedure real-time fluoroscopy for portal vein puncture during transjugular intrahepatic portosystemic shunt (TIPS) placement. Eighteen patients (15 men, 3 women) with a mean age of 63 years (range: 48-81 years; median age, 65 years) were included in the study. All patients underwent TIPS placement by two groups of radiologists (one group with radiologists of an experience <3 years and one with an experience ≥3 years) using fusion imaging obtained from three-dimensional computed tomography angiography of the portal vein and real-time fluoroscopic images of the portal vein. Image fusion was used to guide the portal vein puncture during TIPS placement. At the end of the procedure, the interventional radiologists evaluated the utility of fusion imaging for portal vein puncture during TIPS placement. Mismatch between the three-dimensional computed tomography angiography and real-time fluoroscopic images of the portal vein on image fusion was quantitatively analyzed. CT post-processing time, number of puncture attempts, total radiation exposure, and radiation from the retrograde portography were also recorded. Image fusion was considered useful for portal vein puncture in 13/18 TIPS procedures (72%). The mean post-processing time to obtain fusion images was 16.4 minutes. The 3D volume-rendered CT angiography images were strictly superimposed on direct portography in 10/18 procedures (56%). The mean mismatch value was 0.69 cm in height and 0.28 cm laterally. A mean number of 4.6 portal vein puncture attempts was made. Eight patients required fewer than three attempts. The mean radiation dose from retrograde portography was 421.2 dGy·cm², corresponding to a mean additional exposure of 19%. Fusion imaging from pre-procedural CT angiography is feasible and safe and makes portal puncture easier during TIPS placement.
Lima, Estevao; Rodrigues, Pedro L; Mota, Paulo; Carvalho, Nuno; Dias, Emanuel; Correia-Pinto, Jorge; Autorino, Riccardo; Vilaça, João L
2017-10-01
Puncture of the renal collecting system represents a challenging step in percutaneous nephrolithotomy (PCNL). Limitations related to the use of standard fluoroscopic-based and ultrasound-based maneuvers have been recognized. To describe the technique and early clinical outcomes of a novel navigation system for percutaneous kidney access. This was a proof-of-concept study (IDEAL phase 1) conducted at a single academic center. Ten PCNL procedures were performed for patients with kidney stones. Flexible ureterorenoscopy was performed to determine the optimal renal calyx for access. An electromagnetic sensor was inserted through the working channel. Then the selected calyx was punctured with a needle with a sensor on the tip, guided by real-time three-dimensional images observed on the monitor. The primary endpoints were the accuracy and clinical applicability of the system in clinical use. Secondary endpoints were the time to successful puncture, the number of attempts for successful puncture, and complications. Ten patients were enrolled in the study. The median age was 47.1 yr (30-63), the median body mass index was 22.85 kg/m² (19-28.3), and the median stone size was 2.13 cm (1.5-2.5 cm). All stones were in the renal pelvis. The Guy's stone score was 1 in nine cases and 2 in one case. All 10 punctures of the collecting system were successfully completed at the first attempt without X-ray exposure. The median time to successful puncture starting from insertion of the needle was 20 s (range 15-35). No complications occurred. We describe the first clinical application of a novel navigation system using real-time electromagnetic sensors for percutaneous kidney access. This new technology overcomes the intrinsic limitations of traditional methods of kidney access, allowing safe, precise, fast, and effective puncture of the renal collecting system. We describe a new technology allowing safe and easy puncture of the kidney without radiation exposure. This could significantly facilitate one of the most challenging steps in percutaneous removal of kidney stones.
Development and application of deep convolutional neural network in target detection
NASA Astrophysics Data System (ADS)
Jiang, Xiaowei; Wang, Chunping; Fu, Qiang
2018-04-01
With the development of big data and improved algorithms, deep convolutional neural networks with many hidden layers have more powerful feature learning and feature expression ability than traditional machine learning methods, allowing artificial intelligence to match or surpass human-level performance on a number of specific tasks. This paper first reviews the development and application of deep convolutional neural networks in the field of object detection in recent years, then briefly summarizes some open problems in current research, and finally offers an outlook on the future development of deep convolutional neural networks.
A spectral nudging method for the ACCESS1.3 atmospheric model
NASA Astrophysics Data System (ADS)
Uhe, P.; Thatcher, M.
2015-06-01
A convolution-based method of spectral nudging of atmospheric fields is developed in the Australian Community Climate and Earth Systems Simulator (ACCESS) version 1.3, which uses the UK Met Office Unified Model version 7.3 as its atmospheric component. The use of convolutions allows for flexibility in application to different atmospheric grids. An approximation using one-dimensional convolutions is applied, improving the time taken by the nudging scheme by 10-30 times compared with a version using a two-dimensional convolution, without measurably degrading its performance. Care needs to be taken in the order of the convolutions and the frequency of nudging to obtain the best outcome. The spectral nudging scheme is benchmarked against a Newtonian relaxation method, nudging winds and air temperature towards ERA-Interim reanalyses. We find that the convolution approach can produce results that are competitive with Newtonian relaxation in both the effectiveness and efficiency of the scheme, while giving the added flexibility of choosing which length scales to nudge.
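The speed-up from replacing a two-dimensional convolution with successive one-dimensional passes can be seen in a toy example. The Gaussian kernel below is an illustrative stand-in for the nudging filter (not the ACCESS kernel), and the two results agree exactly because the kernel is separable.

```python
import numpy as np
from scipy.ndimage import convolve, convolve1d

# A separable 2D kernel k2 = outer(k1, k1) can be applied as two 1D
# convolutions: O(2n) work per grid point instead of O(n^2).

def gauss1d(sigma, radius):
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

field = np.random.default_rng(3).standard_normal((180, 360))  # lat x lon grid
k1 = gauss1d(sigma=3.0, radius=9)

# Full 2D convolution.
k2 = np.outer(k1, k1)
full2d = convolve(field, k2, mode="wrap")

# Two 1D convolutions, one per axis.
sep = convolve1d(convolve1d(field, k1, axis=0, mode="wrap"),
                 k1, axis=1, mode="wrap")

print(np.abs(full2d - sep).max())  # agreement to round-off
```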
A spectral nudging method for the ACCESS1.3 atmospheric model
NASA Astrophysics Data System (ADS)
Uhe, P.; Thatcher, M.
2014-10-01
A convolution-based method of spectral nudging of atmospheric fields is developed in the Australian Community Climate and Earth Systems Simulator (ACCESS) version 1.3, which uses the UK Met Office Unified Model version 7.3 as its atmospheric component. The use of convolutions allows flexibility in application to different atmospheric grids. An approximation using one-dimensional convolutions is applied, improving the time taken by the nudging scheme by 10 to 30 times compared with a version using a two-dimensional convolution, without measurably degrading its performance. Care needs to be taken in the order of the convolutions and the frequency of nudging to obtain the best outcome. The spectral nudging scheme is benchmarked against a Newtonian relaxation method, nudging winds and air temperature towards ERA-Interim reanalyses. We find that the convolution approach can produce results that are competitive with Newtonian relaxation in both the effectiveness and efficiency of the scheme, while giving the added flexibility of choosing which length scales to nudge.
Lumbar Puncture (Spinal Tap) (For Parents)
... specific bacteria growing in the sample, a bacterial culture is sent to the lab and these results ... treatment while waiting for the results of the culture. Risks A lumbar puncture is considered a safe ...
Simple method to set up low eccentricity initial data for moving puncture simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tichy, Wolfgang; Marronetti, Pedro
2011-01-15
We introduce two new eccentricity measures to analyze numerical simulations. Unlike earlier definitions, these eccentricity measures do not involve any free parameters, which makes them easy to use. We show how relatively inexpensive grid setups can be used to estimate the eccentricity during the early inspiral phase. Furthermore, we compare standard puncture data and post-Newtonian data in ADMTT gauge. We find that the two use different coordinates. Thus, low eccentricity initial momentum parameters for a certain separation measured in ADMTT coordinates are hard to use in puncture data, because it is not known how the separation in puncture coordinates is related to the separation in ADMTT coordinates. As a remedy, we provide a simple approach which allows us to iterate the momentum parameters until our numerical simulations result in acceptably low eccentricities.
Cranial nerve VI palsy after dural-arachnoid puncture.
Hofer, Jennifer E; Scavone, Barbara M
2015-03-01
In this article, we provide a literature review of cranial nerve (CN) VI injury after dural-arachnoid puncture. CN VI injury is rare and ranges in severity from diplopia to complete lateral rectus palsy with deviated gaze. The proposed mechanism of injury is cerebrospinal fluid leakage causing intracranial hypotension and downward displacement of the brainstem. This results in traction on CN VI leading to stretch and neural demyelination. Symptoms may present 1 day to 3 weeks after dural-arachnoid puncture and typically are associated with a postdural puncture (spinal) headache. Resolution of symptoms may take weeks to months. Use of small-gauge, noncutting spinal needles may decrease the risk of intracranial hypotension and subsequent CN VI injury. When ocular symptoms are present, early administration of an epidural blood patch may decrease morbidity or prevent progression of ocular symptoms.
Protective materials with real-time puncture detection capability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hermes, R.E.; Stampfer, J.F.; Valdez-Boyle, L.S.
1996-08-01
The protection of workers from chemical, biological, or radiological hazards requires the use of protective materials that can maintain their integrity during use. An accidental puncture in the protective material can result in a significant exposure to the worker. A five-ply material has been developed that incorporates two layers of an electrically conductive polymer sandwiched between three layers of a nonconductive polymer. A normally open circuit that is connected between the conductive layers will be closed by puncturing the material with either a conductive or nonconductive object. This can be used to activate an audible alarm or visual beacon to warn the worker of a breach in the integrity of the material. The worker is not connected to the circuit, and the puncture can be detected in real time, even when caused by a nonconductor.
Spinal needle force monitoring during lumbar puncture using fiber Bragg grating force device.
Ambastha, Shikha; Umesh, Sharath; Dabir, Sundaresh; Asokan, Sundarrajan
2016-11-01
A technique for real-time dynamic monitoring of the force experienced by a spinal needle during lumbar puncture using a fiber Bragg grating (FBG) sensor is presented. The proposed FBG force device (FBGFD) evaluates the compressive force on the spinal needle during lumbar puncture, particularly avoiding the bending effect on the needle. The working principle of the FBGFD is based on transduction of the force experienced by the spinal needle into strain variations monitored by the FBG sensor. The FBGFD facilitates external mounting of a spinal needle for its smooth insertion during lumbar puncture without any intervention. The developed FBGFD assists in the study and analysis of the force required for the spinal needle to penetrate the various tissue layers from the skin to the epidural space; this force is indicative of the varied resistance offered by the different tissue layers to the spinal needle's traversal. Calibration of the FBGFD is performed on a micro-universal testing machine over a 0 to 20 N range, with an obtained resolution of 0.021 N. The experimental trials using spinal needles mounted on the FBGFD are carried out on a human cadaver specimen, with punctures made in the lumbar region from different directions. Distinct forces are recorded when the needle encounters skin, muscle tissue, and bone in its traversing path. Real-time spinal needle force monitoring using the FBGFD may reduce potentially serious complications during lumbar puncture, such as over-puncturing of tissue regions, by impeding spinal needle insertion at the epidural space.
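For background on the transduction principle, the standard FBG sensing relations (general fiber-optics facts, not this device's calibration) connect the Bragg wavelength to the grating period and the wavelength shift to axial strain:

```latex
\lambda_B = 2\, n_{\mathrm{eff}}\, \Lambda
\qquad\text{and}\qquad
\frac{\Delta\lambda_B}{\lambda_B} = (1 - p_e)\,\varepsilon ,
```

where n_eff is the effective refractive index of the fiber core, Λ is the grating period, ε is the axial strain induced by the needle force, and p_e ≈ 0.22 is the photo-elastic coefficient typical of silica fiber; a calibrated stiffness then converts the measured wavelength shift into force.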
Ultrasound-guided, minimally invasive, percutaneous needle puncture treatment for tennis elbow.
Zhu, Jiaan; Hu, Bing; Xing, Chunyan; Li, Jia
2008-10-01
This report evaluates the efficacy of percutaneous needle puncture under sonographic guidance in treating lateral epicondylitis (tennis elbow). Ultrasound-guided percutaneous needle puncture was performed on 76 patients who presented with persistent elbow pain. Under a local anesthetic and sonographic guidance, a needle was advanced into the calcification foci and the calcifications were mechanically fragmented. This was followed by a local injection of 25 mg prednisone acetate and 1% lidocaine. If no calcification was found, then multiple punctures were performed, followed by local injection of 25 mg prednisone acetate and 1% lidocaine. A visual analog scale (VAS) was used to evaluate the degree of pain pre- and posttreatment at 1 week to 24 weeks. Elbow function improvement and degree of self-satisfaction were also evaluated. Of the 76 patients, 55% were rated with an excellent treatment outcome, 32% good, 11% average, and 3% poor. From 3 weeks posttreatment, VAS scores were significantly reduced compared with the pretreatment score (P<0.05) and continued to decline gradually up to 24 weeks posttreatment. Sonography demonstrated that the calcified lesions disappeared completely in 13% of the patients, were reduced in 61%, and did not change in 26%. Color Doppler flow signal used to assess hemodynamic changes showed a significant improvement after treatment in most patients. Ultrasound-guided percutaneous needle puncture is an effective and minimally invasive treatment for tennis elbow. Sonography can be used to accurately identify the puncture location and monitor changes.
[Paresthesia and spinal anesthesia for cesarean section: comparison of patient positioning].
Palacio Abizanda, F J; Reina, M A; Fornet, I; López, A; López López, M A; Morillas Sendín, P
2009-01-01
To determine the incidence of paresthesia during lumbar puncture performed with the patient in different positions. A single-blind prospective study of patients scheduled for elective cesarean section, randomized to 3 groups. In group 1 patients were seated in the direction of the long axis of the table, with heels resting on the table. In group 2 they were seated perpendicular to the long axis of the table, with legs hanging from the table. In group 3 they were in left lateral decubitus position. Lumbar punctures were performed with a 27-gauge Whitacre needle. One hundred sixty-eight patients (56 per group) were enrolled. Paresthesia occurred most often in group 3 (P = .009). We observed no differences in blood pressure after patients moved from decubitus position to the assigned position. Nor did we observe between-group differences in blood pressure according to position taken during puncture. Puncture undertaken with the patient seated, heels on the table and knees slightly bent, is associated with a lower incidence of paresthesia than puncture performed with the patient seated, legs hanging from the table. Placing the patient's heels on the table requires hip flexion and leads to anterior displacement of nerve roots in the dural sac. Such displacement would increase the nerve-free zone on the posterior side of the sac, thereby decreasing the likelihood of paresthesia during lumbar puncture. A left lateral decubitus position would increase the likelihood of paresthesia, possibly because the anesthetist may inadvertently not follow the medial line when inserting the needle.
Cui, Zhenyu; Gao, Yanjun; Yang, Wenzeng; Zhao, Chunli; Ma, Tao; Shi, Xiaoqiang
2018-01-01
To evaluate the therapeutic effects of a visual standard channel combined with F4.8 visual puncture super-mini percutaneous nephrolithotomy (SMP) on multiple renal calculi. The clinical data of 46 patients with multiple renal calculi treated at the Affiliated Hospital of Hebei University from October 2015 to September 2016 were retrospectively analyzed. There were 28 males and 18 females aged from 25 to 65 years old, with an average of 42.6. The stone diameters were 3.0-5.2 cm, (4.3 ± 0.8) cm on average. F4.8 visual puncture-assisted balloon expansion was used to establish a standard channel. After visible stones were removed through nephroscopy combined with ultrasound lithotripsy, the stones in other parts were treated through F4.8 visual puncture SMP with holmium laser. Indices such as the total time of channel establishment, surgical time, decrease in hemoglobin, phase-I stone clearance rate, and surgical complications were summarized. A single standard channel was successfully established in all cases with the assistance of F4.8 visual puncture; 24 cases were combined with a single microchannel, 16 with double microchannels, and six with three microchannels. All patients were placed with a nephrostomy tube, which was not placed in the microchannels. F5 double-J tubes were placed after surgery. The time for establishing a standard channel through F4.8 visual puncture was (6.8 ± 1.8) min, and that for establishing a single F4.8 visual puncture microchannel was (4.5 ± 0.9) min. The surgical time was (92 ± 15) min. The phase-I stone clearance rate was 91.3% (42/46), and the decrease in hemoglobin was (12.21 ± 2.5) g/L. There were 8 cases of postoperative fever, which were relieved after anti-inflammatory treatment. Four cases had 0.5-0.8 cm of stone residue in the lower calyx, and all stones were discharged one month after surgery by extracorporeal shock wave lithotripsy combined with positional therapy, without stone street, delayed bleeding, peripheral organ damage, or urethral injury. Combining a visual standard channel with F4.8 visual puncture SMP for the treatment of multiple renal calculi had the advantages of reducing the number of large channels, a high rate of stone clearance, safety and reliability, and mild complications. The established F4.8 visual puncture channel was safer and more accurate.
Software package for performing experiments about the convolutionally encoded Voyager 1 link
NASA Technical Reports Server (NTRS)
Cheng, U.
1989-01-01
A software package enabling engineers to conduct experiments to determine the actual performance of long constraint-length convolutional codes over the Voyager 1 communication link directly from the Jet Propulsion Laboratory (JPL) has been developed. Using this software, engineers are able to enter test data from the Laboratory in Pasadena, California. The software encodes the data and then sends the encoded data to a personal computer (PC) at the Goldstone Deep Space Complex (GDSC) over telephone lines. The encoded data are sent to the transmitter by the PC at GDSC. The received data, after being echoed back by Voyager 1, are first sent to the PC at GDSC, and then are sent back to the PC at the Laboratory over telephone lines for decoding and further analysis. All of these operations are fully integrated and are completely automatic. Engineers can control the entire software system from the Laboratory. The software encoder and the hardware decoder interface were developed for other applications, and have been modified appropriately for integration into the system so that their existence is transparent to the users. This software provides: (1) data entry facilities, (2) communication protocol for telephone links, (3) data displaying facilities, (4) integration with the software encoder and the hardware decoder, and (5) control functions.
Bilinear Convolutional Neural Networks for Fine-grained Visual Recognition.
Lin, Tsung-Yu; RoyChowdhury, Aruni; Maji, Subhransu
2017-07-04
We present a simple and effective architecture for fine-grained recognition called Bilinear Convolutional Neural Networks (B-CNNs). These networks represent an image as a pooled outer product of features derived from two CNNs and capture localized feature interactions in a translationally invariant manner. B-CNNs are related to orderless texture representations built on deep features but can be trained in an end-to-end manner. Our most accurate model obtains 84.1%, 79.4%, 84.5%, and 91.3% per-image accuracy on the Caltech-UCSD birds [66], NABirds [63], FGVC aircraft [42], and Stanford cars [33] datasets, respectively, and runs at 30 frames per second on an NVIDIA Titan X GPU. We then present a systematic analysis of these networks and show that (1) the bilinear features are highly redundant and can be reduced by an order of magnitude in size without significant loss in accuracy, (2) they are also effective for other image classification tasks such as texture and scene recognition, and (3) they can be trained from scratch on the ImageNet dataset, offering consistent improvements over the baseline architecture. Finally, we present visualizations of these models on various datasets using top activations of neural units and gradient-based inversion techniques. The source code for the complete system is available at http://vis-www.cs.umass.edu/bcnn.
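To illustrate the pooled outer product, here is a minimal NumPy sketch of bilinear pooling with the signed square-root and L2 normalization steps commonly used in B-CNNs. The feature maps are random stand-ins for the two CNN streams, and the sizes are assumptions.

```python
import numpy as np

# Bilinear pooling: outer product of the two streams' feature vectors at
# each spatial location, sum-pooled over locations, then normalized.

rng = np.random.default_rng(5)
H, W, cA, cB = 14, 14, 64, 64             # spatial grid and channel counts
featA = rng.standard_normal((H * W, cA))  # stream A features, one row per location
featB = rng.standard_normal((H * W, cB))  # stream B features

bilinear = featA.T @ featB                # sum of outer products over locations
x = bilinear.flatten()                    # cA*cB-dimensional descriptor
x = np.sign(x) * np.sqrt(np.abs(x))       # signed square-root
x = x / np.linalg.norm(x)                 # L2 normalization
print(x.shape)                            # (4096,)
```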
Semantic Segmentation of Indoor Point Clouds Using Convolutional Neural Network
NASA Astrophysics Data System (ADS)
Babacan, K.; Chen, L.; Sohn, G.
2017-11-01
As Building Information Modelling (BIM) thrives, geometry is no longer sufficient; an ever-increasing variety of semantic information is needed to express an indoor model adequately. On the other hand, for existing buildings, automatically generating semantically enriched BIM from point cloud data is in its infancy. Previous research to enhance the semantic content relies on frameworks built from specific rules and/or features hand-coded by specialists. These methods inherently lack generalization and easily break in different circumstances. On this account, a generalized framework is urgently needed to automatically and accurately generate semantic information. We therefore propose to employ deep learning techniques for the semantic segmentation of point clouds into meaningful parts. More specifically, we build a volumetric data representation in order to efficiently generate the high number of training samples needed to train a convolutional neural network architecture. Feedforward propagation is then used to perform the classification at the voxel level, achieving semantic segmentation. The method is tested both on a mobile laser scanner point cloud and on larger-scale synthetically generated data. We also demonstrate a case study in which our method can be effectively used to leverage the extraction of planar surfaces in challenging, cluttered indoor environments.
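A sketch of the volumetric representation step is shown below: binning an indoor point cloud into a fixed occupancy grid that a 3D CNN can consume. The grid size, bounds, and names are assumptions for illustration, not the paper's parameters.

```python
import numpy as np

# Voxelize a point cloud into a binary occupancy grid.

rng = np.random.default_rng(11)
points = rng.uniform(0.0, 5.0, size=(10_000, 3))   # synthetic room-scale cloud

grid_shape = (32, 32, 32)
lo, hi = points.min(axis=0), points.max(axis=0)

# Map each point to a voxel index, clamping to the grid edges.
idx = ((points - lo) / (hi - lo + 1e-9) * np.array(grid_shape)).astype(int)
idx = np.clip(idx, 0, np.array(grid_shape) - 1)

occupancy = np.zeros(grid_shape, dtype=np.float32)
occupancy[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0   # binary occupancy voxels
print(occupancy.sum(), "occupied voxels of", occupancy.size)
```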
Development of Needle Insertion Manipulator for Central Venous Catheterization
NASA Astrophysics Data System (ADS)
Kobayashi, Yo; Hong, Jaesung; Hamano, Ryutaro; Hashizume, Makoto; Okada, Kaoru; Fujie, Masakatsu G.
Central venous catheterization is a procedure in which a doctor inserts a catheter into the patient's vein for transfusion. Since there are risks of bleeding from arterial puncture or pneumothorax from pleural puncture, physicians are strictly required to make the needle reach into the vein and to stop the needle in the middle of the vein. We proposed a robot system for assisting venous puncture, which can relieve the difficulties of the conventional procedure and the risks of complication. This paper reports the design and experimental results of the needle insertion manipulator. First, we investigated the relationship between insertion force and angle into the vein. The results indicated that judging perforation from the reaction force is possible when the needle angle is between 10 and 20 degrees. An experiment to evaluate the accuracy of the robot also revealed that it has better than 0.5 mm accuracy. We also evaluated the positioning accuracy in ultrasound images; the results show that the accuracy is better than 1.0 mm, which is sufficient for venous puncture. Finally, we carried out a venous puncture experiment on a phantom and confirmed that our manipulator can make the needle reach into the vein.
Wallace, John R; Mangas, Kirstie M; Porter, Jessica L; Marcsisin, Renee; Pidot, Sacha J; Howden, Brian; Omansen, Till F; Zeng, Weiguang; Axford, Jason K; Johnson, Paul D R; Stinear, Timothy P
2017-04-01
Addressing the transmission enigma of the neglected disease Buruli ulcer (BU) is a World Health Organization priority. In Australia, we have observed an association between mosquitoes harboring the causative agent, Mycobacterium ulcerans, and BU. Here we tested a contaminated skin model of BU transmission by dipping the tails of healthy mice in cultures of M. ulcerans. Tails were exposed to mosquito (Aedes notoscriptus and Aedes aegypti) blood feeding or punctured with sterile needles. Two of 12 mice with M. ulcerans-contaminated tails exposed to feeding A. notoscriptus mosquitoes developed BU. No mice exposed to A. aegypti developed BU. Eighty-eight percent of mice (21/24) subjected to contaminated-tail needle puncture developed BU. Mouse tails coated only in bacteria did not develop disease. A median incubation time of 12 weeks, consistent with data from human infections, was noted. We then specifically tested the M. ulcerans 50% infectious dose (ID50) in this contaminated skin surface infection model with needle puncture and observed an ID50 of 2.6 colony-forming units. We have uncovered a biologically plausible mechanical transmission mode of BU via natural or anthropogenic skin punctures.
NASA Astrophysics Data System (ADS)
Bilal, Adel; Gervais, Jean-Loup
A class of punctured constant curvature Riemann surfaces, with boundary conditions similar to those of the Poincaré half plane, is constructed. It is shown to describe the scattering of particle-like objects in two Euclidean dimensions. The associated time delays and classical phase shifts are introduced and connected to the behaviour of the surfaces at their punctures. For each such surface, we conjecture that the time delays are partial derivatives of the phase shift. This type of relationship, already known to be correct in other scattering problems, leads to a general integrability condition concerning the behaviour of the metric in the neighbourhood of the punctures. The time delays are explicitly computed for three punctures, and the conjecture is verified. The result, reexpressed as a product of Riemann zeta-functions, exhibits an intriguing number-theoretic structure: a p-adic product formula holds and one of Ramanujan's identities applies. An ansatz is given for the corresponding exact quantum S-matrix. It is such that the integrability condition is replaced by a finite difference relation only involving the exact spectrum already derived, in the associated Liouville field theory, by Gervais and Neveu.
Single Image Super-Resolution Based on Multi-Scale Competitive Convolutional Neural Network.
Du, Xiaofeng; Qu, Xiaobo; He, Yifan; Guo, Di
2018-03-06
Deep convolutional neural networks (CNNs) are successful in single-image super-resolution. Traditional CNNs are limited to exploit multi-scale contextual information for image reconstruction due to the fixed convolutional kernel in their building modules. To restore various scales of image details, we enhance the multi-scale inference capability of CNNs by introducing competition among multi-scale convolutional filters, and build up a shallow network under limited computational resources. The proposed network has the following two advantages: (1) the multi-scale convolutional kernel provides the multi-context for image super-resolution, and (2) the maximum competitive strategy adaptively chooses the optimal scale of information for image reconstruction. Our experimental results on image super-resolution show that the performance of the proposed network outperforms the state-of-the-art methods.
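A toy version of the multi-scale competition described above can be written with a fixed filter bank; the random kernels below stand in for learned convolutional filters, and the per-pixel maximum implements the competitive selection across scales. In the paper the competition operates inside a trained CNN, so this is only an illustration of the principle.

```python
import numpy as np
from scipy.signal import convolve2d

# Run filters of several kernel sizes over the same input and keep, per
# pixel, the strongest response across scales.

rng = np.random.default_rng(9)
img = rng.standard_normal((64, 64))

responses = []
for k in (3, 5, 7):                         # competing kernel scales
    kern = rng.standard_normal((k, k)) / k  # stand-in for a learned filter
    responses.append(convolve2d(img, kern, mode="same"))

competitive = np.maximum.reduce(responses)  # per-pixel max across scales
print(competitive.shape)
```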
Laparoscopic Removal of a Large Ovarian Mass Utilizing Planned Trocar Puncture
2012-01-01
Background: Large cystic ovarian masses pose technical challenges to the laparoscopic surgeon. Removing large, potentially malignant specimens must be done with care to avoid the leakage of cyst fluid into the abdominal cavity. Case: We present the case of a large ovarian cystic mass treated laparoscopically with intentional trocar puncture of the mass to drain and remove the mass. Discussion: Large cystic ovarian masses can be removed laparoscopically with intentional trocar puncture of the mass to facilitate removal without leakage of cyst fluid. PMID:22906344
Research on Formation of Microsatellite Communication with Genetic Algorithm
Wu, Guoqiang; Bai, Yuguang; Sun, Zhaowei
2013-01-01
For the formation of three microsatellites which fly in the same orbit and perform three-dimensional solid mapping of the Earth, this paper proposes an optimizing design method for space circular formation order based on an improved genetic algorithm and provides an intersatellite direct-sequence spread spectrum communication system. The calculation of LEO formation-flying intersatellite links is guided by the special requirements of formation-flying microsatellite intersatellite links, and the transmitter power is confirmed through simulation. The space circular formation order designed with the improved genetic algorithm can keep the formation order steady for a long time under various disturbing forces. The intersatellite direct-sequence spread spectrum communication system is also provided. It is found that, when the distance is 1 km and the data rate is 1 Mbps, the input waveform matches the output waveform well, and LDPC coding improves the communication performance: the error-correction capability of the (512, 256) LDPC code is distinctly better than that of the (2, 1, 7) convolutional code. The designed system can satisfy the communication requirements of microsatellites, so the presented method provides a significant theoretical foundation for formation flying and intersatellite communication. PMID:24078796
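For context on the (2, 1, 7) convolutional code mentioned above, here is a minimal encoder sketch for that code family: rate 1/2, constraint length 7. The widely deployed 171/133 (octal) generator polynomials are an illustrative assumption, not necessarily the polynomials the authors used.

```python
import numpy as np

# (2, 1, 7) convolutional encoder: 1 input bit in, 2 coded bits out,
# with a 7-bit encoder memory window.

G = (0o171, 0o133)          # assumed generator polynomials, K = 7
K = 7

def conv_encode(bits):
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & ((1 << K) - 1)    # shift in newest bit
        for g in G:                                    # one output per polynomial
            out.append(bin(state & g).count("1") & 1)  # parity of tapped bits
    return out

msg = list(np.random.default_rng(2).integers(0, 2, 8))
print(msg, "->", conv_encode(msg))                     # 2 coded bits per input bit
```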
Large-scale Exploration of Neuronal Morphologies Using Deep Learning and Augmented Reality.
Li, Zhongyu; Butler, Erik; Li, Kang; Lu, Aidong; Ji, Shuiwang; Zhang, Shaoting
2018-02-12
Recently released large-scale neuron morphological data have greatly facilitated research in neuroinformatics. However, the sheer volume and complexity of these data pose significant challenges for efficient and accurate neuron exploration. In this paper, we propose an effective retrieval framework to address these problems, based on state-of-the-art techniques in deep learning and binary coding. For the first time, we develop a deep learning based feature representation method for the neuron morphological data, in which the 3D neurons are first projected into binary images and features are then learned using an unsupervised deep neural network, i.e., stacked convolutional autoencoders (SCAEs). The deep features are subsequently fused with hand-crafted features for a more accurate representation. Considering that exhaustive search is usually very time-consuming in large-scale databases, we employ a novel binary coding method to compress feature vectors into short binary codes. Our framework is validated on a public data set including 58,000 neurons, showing promising retrieval precision and efficiency compared with state-of-the-art methods. In addition, we develop a novel neuron visualization program based on augmented reality (AR) techniques, which can help users explore neuron morphologies in an interactive and immersive manner.
Fan, Guoxin; Guan, Xiaofei; Zhang, Hailong; Wu, Xinbo; Gu, Xin; Gu, Guangfei; Fan, Yunshan; He, Shisheng
2015-12-01
Prospective nonrandomized control study. The study aimed to investigate the implications of the HE's Lumbar LOcation (HELLO) system in improving puncture accuracy and reducing fluoroscopy in percutaneous transforaminal endoscopic discectomy (PTED). Percutaneous transforaminal endoscopic discectomy is one of the most popular minimally invasive spine surgeries and depends heavily on repeated fluoroscopy. Increased fluoroscopy induces higher radiation exposure to surgeons and patients. Accurate puncture in PTED can be achieved by accurate preoperative location and a definite trajectory. The HELLO system mainly consists of a self-made surface locator and a puncture-assisted device. The surface locator was used to identify the exact puncture target, and the puncture-assisted device was used to optimize the puncture trajectory. Patients who had single L4/5 or L5/S1 lumbar intervertebral disc herniation and underwent PTED were included in the study. Patients receiving the HELLO system were assigned to Group A, and those undergoing the conventional method were assigned to Group B. The primary endpoints were puncture times and fluoroscopic times; the secondary endpoints were location time and operation time. A total of 62 patients who received PTED were included in this study. The average age was 45.35 ± 8.70 years in Group A and 46.61 ± 7.84 years in Group B (P = 0.552). There were no significant differences in gender, body mass index, conservative time, and surgical segment between the 2 groups (P > 0.05). The puncture times were 1.19 ± 0.48 in Group A and 6.03 ± 1.87 in Group B (P < 0.001). The fluoroscopic times were 14.03 ± 2.54 in Group A and 25.19 ± 4.28 in Group B (P < 0.001). The preoperative location time was 4.67 ± 1.41 minutes in Group A and 6.98 ± 0.94 minutes in Group B (P < 0.001). The operation time was 79.42 ± 10.15 minutes in Group A and 89.65 ± 14.06 minutes in Group B (P = 0.002). The hospital stay was 2.77 ± 0.95 days in Group A and 2.87 ± 1.02 days in Group B (P = 0.702). There were no significant differences in the complication rate between the 2 groups (P = 0.386). The highlight of the HELLO system is accurate preoperative location and a definite trajectory. This preliminary report indicates that the HELLO system significantly improves the puncture accuracy of PTED and reduces the fluoroscopic times, preoperative location time, and operation time. (ChiCTR-ICR-15006730).
Needle puncture in rabbit functional spinal units alters rotational biomechanics.
Hartman, Robert A; Bell, Kevin M; Quan, Bichun; Nuzhao, Yao; Sowa, Gwendolyn A; Kang, James D
2015-04-01
An in vitro biomechanical study of rabbit lumbar functional spinal units (FSUs) using a robot-based spine testing system. To elucidate the effect of annular puncture with a 16 G needle on mechanical properties in flexion/extension, axial rotation, and lateral bending. Needle puncture of the intervertebral disk has been shown to alter the mechanical properties of the disk in compression, torsion, and bending. The effect of needle puncture in FSUs, where intact spinal ligaments and facet joints may mitigate or amplify these changes in the disk, on spinal motion segment stability under physiological rotations remains unknown. Rabbit FSUs were tested using a robot testing system whose force/moment and position precision were assessed to demonstrate system capability. Flexibility testing methods were developed by load-to-failure testing in flexion/extension, axial rotation, and lateral bending. Subsequent testing methods were used to examine a 16 G needle disk puncture and a No. 11 blade disk stab (positive control for mechanical disruption). Flexibility testing was used to assess segmental range of motion (degrees), neutral zone stiffness (N m/degree) and width (degrees and N m), and elastic zone stiffness before and after annular injury. The robot-based system was capable of performing flexibility testing on FSUs: mean precision of force/moment measurements and robot system movements was <3% and 1%, respectively, of moment-rotation target values. Flexibility moment targets were 0.3 N m for flexion and axial rotation and 0.15 N m for extension and lateral bending. Needle puncture caused significant (P<0.05) changes only in flexion/extension range of motion and neutral zone stiffness and width (N m) compared with preintervention. The No. 11 blade stab significantly increased range of motion in all motions, decreased neutral zone stiffness and width (N m) in flexion/extension, and increased elastic zone stiffness in flexion and lateral bending. These findings suggest that disk puncture and stab can destabilize FSUs in primary rotations.
Hollow mandrin facilitates external ventricular drainage placement.
Heese, O; Regelsberger, J; Kehler, U; Westphal, M
2005-07-01
Placement of ventricular catheters is a routine procedure in neurosurgery. Ventricle puncture is performed using a flexible ventricular catheter stabilised by a solid steel mandrin to improve stability during brain penetration. Correct catheter placement is confirmed, after removing the solid steel mandrin, by observing cerebrospinal fluid (CSF) flow out of the flexible catheter. Incorrect placement makes further punctures necessary. The newly developed device allows observation of CSF flow during the puncture procedure and, in addition, precise intracranial pressure (ICP) measurement. The new mandrin is hollow with a blunt tip. On one side, 4-5 small holes with a diameter of 0.8 mm are drilled to correspond exactly with the holes in the ventricular catheter, allowing CSF to pass into the hollow mandrin as soon as the ventricle is reached. By connecting a small translucent tube to the distal portion of the hollow mandrin, ICP can be measured without loss of CSF. The system has been used in 15 patients with subarachnoid haemorrhage (SAH) or intraventricular haemorrhage (IVH) and subsequent hydrocephalus. The new system improved the external ventricular drainage implantation procedure. In all 15 patients catheter placement was correct. ICP measurement was easy to perform immediately upon ventricle puncture. In 4 patients no spontaneous CSF flow was observed at puncture; by connecting a syringe and gently aspirating CSF, correct placement was confirmed in these cases of unexpectedly low-pressure hydrocephalus. With the conventional technique, further punctures would otherwise have been necessary. Advantages of the new technique are fewer puncture procedures, with a lower risk of damage to neural structures and a reduced risk of intracranial haemorrhage. Insertion of the ventricular catheter too far into the brain can be monitored, and this complication can be avoided. Using the connected pressure-monitoring tube, an exact measurement of the opening intracranial pressure can be obtained without losing CSF.
Analysis of space telescope data collection systems
NASA Technical Reports Server (NTRS)
Ingels, F. M.
1984-01-01
The Multiple Access (MA) communication link of the Space Telescope (ST) is described, and its expected bit-error-rate performance is presented. The historical perspective and rationale behind the ESTL space shuttle end-to-end tests are given. The concatenated coding scheme using a convolutional encoder for the outer code is developed, and the ESTL end-to-end tests on the space shuttle communication link are described. Most important is how a concatenated coding system will perform; this is a go/no-go system with respect to received signal-to-noise ratio. A discussion of the verification requirements and specification document is presented, and those sections that apply to the Space Telescope data and communications system are discussed. The Space Telescope System consists of the Space Telescope Orbiting Observatory (ST), the Space Telescope Science Institute, and the Space Telescope Operations Control Center (STOCC). The MA system consists of the ST, the return link from the ST via the Tracking and Data Relay Satellite System to White Sands, and the link from White Sands via the Domestic Communications Satellite to the STOCC.
Automatic energy calibration algorithm for an RBS setup
DOE Office of Scientific and Technical Information (OSTI.GOV)
Silva, Tiago F.; Moro, Marcos V.; Added, Nemitala
2013-05-06
This work describes a computer algorithm for automatic extraction of the energy calibration parameters from a Rutherford Back-Scattering Spectroscopy (RBS) spectrum. Parameters like the electronic gain, electronic offset and detection resolution (FWHM) of an RBS setup are usually determined using a standard sample. In our case, the standard sample comprises a multi-elemental thin film made of a mixture of Ti-Al-Ta that is analyzed at the beginning of each run at a defined beam energy. A computer program has been developed to extract the calibration parameters automatically from the spectrum of the standard sample. The code evaluates the first derivative of the energy spectrum, locates the trailing edges of the Al, Ti and Ta peaks, and fits a first-order polynomial for the energy-channel relation. The detection resolution is determined by fitting the convolution of a pre-calculated theoretical spectrum. To test the code, two years of data have been analyzed and the results compared with the manual calculations done previously, obtaining good agreement.
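The described procedure (differentiate the spectrum, find the trailing edges of the three peaks, fit a first-order energy-channel polynomial) might look roughly like this; the synthetic spectrum and the reference edge energies are invented for illustration and are not the paper's values.

```python
# Sketch of the calibration steps described above. The reference edge
# energies and the synthetic test spectrum are illustrative assumptions.
import numpy as np

def trailing_edges(counts, n_edges=3, smooth=5):
    """Return one channel per steepest falling edge, most negative first."""
    kernel = np.ones(smooth) / smooth
    deriv = np.gradient(np.convolve(counts, kernel, mode="same"))
    candidates = np.argsort(deriv)[:n_edges * 20]   # steepest falls
    edges = []
    for ch in sorted(candidates):                   # keep one channel per cluster
        if not edges or ch - edges[-1] > 10:
            edges.append(ch)
    return sorted(edges, key=lambda ch: deriv[ch])[:n_edges]

# Hypothetical edge energies (keV) for the Al, Ti and Ta surface signals.
ref_energies = np.array([1250.0, 1600.0, 1900.0])
counts = np.zeros(1024)
for ch in (400, 520, 620):          # synthetic spectrum with three steps
    counts[:ch] += 500
edge_channels = np.array(sorted(trailing_edges(counts)))
gain, offset = np.polyfit(edge_channels, ref_energies, 1)
print(f"gain = {gain:.3f} keV/ch, offset = {offset:.1f} keV")
```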
Impact of jammer side information on the performance of anti-jam systems
NASA Astrophysics Data System (ADS)
Lim, Samuel
1992-03-01
The Chernoff bound parameter, D, provides a performance measure for all coded communication systems. D can be used to determine upper bounds on the bit error probabilities (BEPs) of Viterbi-decoded convolutional codes. The impact on BEP bounds of channel measurements that provide additional side information can also be evaluated with D. This memo documents the results of a Chernoff bound parameter evaluation in optimum partial-band noise jamming (OPBNJ) for both BPSK and DPSK modulation schemes. Hard- and soft-quantized receivers, with and without jammer side information (JSI), were examined. The results of this analysis indicate that JSI does improve decoding performance. However, knowledge of jammer presence alone achieves a performance level comparable to soft-decision decoding with perfect JSI. Furthermore, performance degradation due to the lack of JSI can be compensated for by increasing the number of quantization levels. Therefore, an anti-jam system without JSI can be made to perform almost as well as a system with JSI.
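For context, the textbook union bound that links the Chernoff parameter D to the BEP of a Viterbi-decoded convolutional code (a standard result, not a formula quoted from this memo) is:

```latex
% B_d: total information weight of all incorrect paths at Hamming distance d
% d_free: free distance of the code; k: input bits per trellis stage
P_b \le \frac{1}{k} \sum_{d = d_{\mathrm{free}}}^{\infty} B_d \, D^{d}
```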
Porter, Joseph J; Mehl, Ryan A
2018-01-01
Posttranslational modifications resulting from oxidation of proteins (Ox-PTMs) are present intracellularly under conditions of oxidative stress as well as basal conditions. In the past, these modifications were thought to be generic protein damage, but it has become increasingly clear that Ox-PTMs can have specific physiological effects. It is an arduous task to distinguish between the two cases, as multiple Ox-PTMs occur simultaneously on the same protein, convoluting analysis. Genetic code expansion (GCE) has emerged as a powerful tool to overcome this challenge as it allows for the site-specific incorporation of an Ox-PTM into translated protein. The resulting homogeneously modified protein products can then be rigorously characterized for the effects of individual Ox-PTMs. We outline the strengths and weaknesses of GCE as they relate to the field of oxidative stress and Ox-PTMs. An overview of the Ox-PTMs that have been genetically encoded and applications of GCE to the study of Ox-PTMs, including antibody validation and therapeutic development, is described.
Song, Shao-jun; Fei, Zhou; Zhang, Xiang
2003-09-01
To compare the difference in intracranial pressure (ICP) in patients with hypertensive intracerebral hemorrhage (HICH) treated with two surgical procedures, traditional craniotomy and puncture drainage. One hundred and twelve patients with HICH were randomly divided into two groups: in one group, 60 patients were treated by traditional craniotomy, and in the other, 52 patients were treated by puncture drainage with urokinase. In the meantime, ICP was monitored by placing a catheter in the lateral ventricle on the side contralateral to the hemorrhage. ICP values were recorded immediately after the operation and at 24 hours, 72 hours and 1 week. Although all patients showed increased ICP, patients treated with traditional craniotomy had lower ICP values (P<0.05 or P<0.01). Traditional craniotomy has advantages over puncture drainage for patients with HICH, at least with respect to decreasing ICP.
Lux, Eberhard Albert; Althaus, Astrid
2014-01-01
In this retrospective study, the question was raised and answered whether the rate of postdural puncture headache (PDPH) after continuous spinal anesthesia with a 28G microcatheter varies between a Quincke and a Sprotte needle. The medical records of all patients with allogenic joint replacement of the knee or hip or arthroscopic surgery of the knee joint undergoing continuous spinal anesthesia with a 22G Quincke (n=1,212) or 22G Sprotte needle (n=377) and a 28G microcatheter during the past 6 years were reviewed. We obtained the approval of the ethics committee. The rates of PDPH did not differ significantly between the two groups: 1.5% of patients (women and men) developed PDPH after dural puncture with a Quincke needle and 2.1% with a Sprotte needle.
Complete prevention of blood loss with self-sealing haemostatic needles
NASA Astrophysics Data System (ADS)
Shin, Mikyung; Park, Sung-Gurl; Oh, Byung-Chang; Kim, Keumyeon; Jo, Seongyeon; Lee, Moon Sue; Oh, Seok Song; Hong, Seon-Hui; Shin, Eui-Cheol; Kim, Ki-Suk; Kang, Sun-Woong; Lee, Haeshin
2017-01-01
Bleeding is largely unavoidable following syringe needle puncture of biological tissues and, while inconvenient, this typically causes little or no harm in healthy individuals. However, there are certain circumstances where syringe injections can have more significant side effects, such as uncontrolled bleeding in those with haemophilia, coagulopathy, or the transmission of infectious diseases through contaminated blood. Herein, we present a haemostatic hypodermic needle able to prevent bleeding following tissue puncture. The surface of the needle is coated with partially crosslinked catechol-functionalized chitosan that undergoes a solid-to-gel phase transition in situ to seal punctured tissues. Testing the capabilities of these haemostatic needles, we report complete prevention of blood loss following intravenous and intramuscular injections in animal models, and 100% survival in haemophiliac mice following syringe puncture of the jugular vein. Such self-sealing haemostatic needles and adhesive coatings may therefore help to prevent complications associated with bleeding in more clinical settings.
An evaluation of the Johnson-Cook model to simulate puncture of 7075 aluminum plates.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Corona, Edmundo; Orient, George Edgar
The objective of this project was to evaluate the use of the Johnson-Cook strength and failure models in an adiabatic finite element model to simulate the puncture of 7075-T651 aluminum plates that were studied as part of an ASC L2 milestone by Corona et al. (2012). The Johnson-Cook model parameters were determined from material test data. The results show a marked improvement, in particular in the calculated threshold velocity between no puncture and puncture, over those obtained in 2012. The threshold velocity calculated using a baseline model is just 4% higher than the mean value determined from experiment, in contrast to 60% in the 2012 predictions. Sensitivity studies showed that the threshold velocity predictions were improved by calibrating the relations between the equivalent plastic strain at failure and stress triaxiality, strain rate and temperature, as well as by the inclusion of adiabatic heating.
NASA Astrophysics Data System (ADS)
Staroń, Waldemar; Herbowski, Leszek; Gurgul, Henryk
2007-04-01
The goal of this work was to determine the values of cumulative parameters of the cerebrospinal fluid. These parameters statistically characterise cerebrospinal fluid obtained by puncture from patients examined for suspected normotensive hydrocephalus. Cerebrospinal fluid taken by puncture for routine examinations of patients suspected of normotensive hydrocephalus was analysed. The paper presents results of examinations of several dozen puncture samples of cerebrospinal fluid from various patients. Each sample was examined under the microscope and photographed in 20 randomly chosen places. On the basis of analysis of the pictures, each covering an area of 100 × 100 μm, selected cumulative parameters such as count, numerical density, field area and field perimeter were determined for each sample, and the average value of each parameter was then calculated.
Deep architecture neural network-based real-time image processing for image-guided radiotherapy.
Mori, Shinichiro
2017-08-01
To develop real-time image processing for image-guided radiotherapy, we evaluated several neural network models for use with different imaging modalities, including X-ray fluoroscopic image denoising. Setup images of prostate cancer patients were acquired with two oblique X-ray fluoroscopic units. Two types of residual network were designed: a convolutional autoencoder (rCAE) and a convolutional neural network (rCNN). We varied the convolutional kernel size and the number of convolutional layers for both networks, and the number of pooling and upsampling layers for the rCAE. The ground-truth images were produced by the contrast-limited adaptive histogram equalization (CLAHE) method of image processing. Network models were trained to keep the quality of the output image close to that of the ground-truth image, starting from the input image without image processing. For the image denoising evaluation, noisy input images were used for training. More than 6 convolutional layers with convolutional kernels >5×5 improved image quality, but did not allow real-time imaging. After applying a pair of pooling and upsampling layers to both networks, rCAEs with >3 convolutions each and rCNNs with >12 convolutions with a pair of pooling and upsampling layers achieved real-time processing at 30 frames per second (fps) with acceptable image quality. Use of our suggested network achieved real-time image processing for contrast enhancement and image denoising on a conventional modern personal computer.
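A residual convolutional autoencoder of the kind evaluated above can be sketched as follows; the single pooling/upsampling pair and the training target (noisy frame in, CLAHE-processed frame out) follow the abstract, while the channel width and layer counts are assumptions.

```python
# Minimal sketch of a residual convolutional autoencoder (rCAE): the network
# learns a correction that is added back to the noisy input. Layer counts
# and channel widths are assumptions, not the paper's configuration.
import torch
import torch.nn as nn

class ResidualCAE(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # the pooling layer
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
        )
        self.decode = nn.Sequential(
            nn.Upsample(scale_factor=2),          # the matching upsampling layer
            nn.Conv2d(ch, 1, 3, padding=1),
        )

    def forward(self, x):
        return x + self.decode(self.encode(x))   # residual connection

# Training pairs: noisy fluoroscopic frame -> CLAHE-processed ground truth.
net = ResidualCAE()
noisy = torch.rand(1, 1, 256, 256)
denoised = net(noisy)
```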
Wright, Gavin; Harrold, Natalie; Bownes, Peter
2018-01-01
Aims: To compare the accuracies of the convolution and TMR10 Gamma Knife treatment planning algorithms, and to assess the impact upon clinical practice of implementing convolution-based treatment planning. Methods: Doses calculated by both algorithms were compared against ionisation chamber measurements in homogeneous and heterogeneous phantoms. Relative dose distributions calculated by both algorithms were compared against film-derived 2D isodose plots in a heterogeneous phantom, with distance-to-agreement (DTA) measured at the 80%, 50% and 20% isodose levels. A retrospective planning study compared 19 clinically acceptable metastasis convolution plans against TMR10 plans with matched shot times, allowing novel comparison of true dosimetric parameters rather than total beam-on time. Gamma analysis and dose-difference analysis were performed on each pair of dose distributions. Results: Both algorithms matched point dose measurements within ±1.1% in homogeneous conditions. Convolution provided superior point-dose accuracy in the heterogeneous phantom (-1.1% vs 4.0%), with no discernible differences in relative dose distribution accuracy. In our study, convolution-calculated plans yielded a D99% that was 6.4% (95% CI: 5.5%-7.3%, p<0.001) less than that of shot-matched TMR10 plans. For gamma passing criteria of 1%/1 mm, 16% of targets had passing rates >95%. The range of dose differences in the targets was 0.2-4.6 Gy. Conclusions: Convolution provides superior accuracy versus TMR10 in heterogeneous conditions. Implementing convolution would result in increased target doses; therefore its implementation may require a re-evaluation of prescription doses.
Surgical navigation in urology: European perspective.
Rassweiler, Jens; Rassweiler, Marie-Claire; Müller, Michael; Kenngott, Hannes; Meinzer, Hans-Peter; Teber, Dogu
2014-01-01
Use of virtual reality to navigate open and endoscopic surgery has evolved significantly during the last decade. The current status of the seven most interesting projects within the European Association of Urology Section of Uro-Technology is summarized, with a review of the literature. Marker-based endoscopic tracking during laparoscopic radical prostatectomy using high-definition technology reduces positive margins. Marker-based endoscopic tracking during laparoscopic partial nephrectomy by mechanical overlay of three-dimensionally segmented virtual anatomy is helpful during planning of trocar placement and dissection of the renal hilum. Marker-based, iPad-assisted puncture of the renal collecting system shows most benefit for trainees, with a reduction of radiation exposure. Three-dimensional laser-assisted puncture of the renal collecting system using the Uro-Dyna-CT, realized in an ex-vivo model, enables minimal radiation time. Electromagnetic tracking for puncture of the renal collecting system, using a sensor at the tip of a ureteral catheter, worked in an in-vivo model of porcine ureter and kidney. Attitude tracking for ultrasound-guided puncture of renal tumours by accelerometer reduces the puncture error from 4.7 to 1.8 mm. Feasibility of electromagnetic and optical tracking with the da Vinci telemanipulator was shown in vitro as well as in an in-vivo model of oesophagectomy; the target registration error was 11.2 mm because of soft-tissue deformation. Intraoperative navigation is helpful during percutaneous puncture of the collecting system and biopsy of renal tumours using various tracking techniques. Early clinical studies demonstrate advantages of marker-based navigation during laparoscopic radical prostatectomy and partial nephrectomy. Combination of different tracking techniques may further improve this interesting addition to video-assisted surgery.
Emergency cricothyrotomy-a comparative study of different techniques in human cadavers.
Schober, Patrick; Hegemann, Martina C; Schwarte, Lothar A; Loer, Stephan A; Noetges, Peter
2009-02-01
Emergency cricothyrotomy is the final lifesaving option in "cannot intubate-cannot ventilate" situations. Fast, efficient and safe management is indispensable to re-establish oxygenation, so the quickest, most reliable and safest technique should be used. Several cricothyrotomy techniques exist, which can be grouped into two categories: anatomical-surgical and puncture techniques. We studied the success rate, tracheal tube insertion time and complications of different techniques, including a novel cricothyrotomy scissors technique, in human cadavers. Sixty-three inexperienced health care providers were randomly assigned to apply either an anatomical-surgical technique (standard surgical technique, n=18; novel cricothyrotomy scissors technique, n=14) or a puncture technique (catheter-over-needle technique, n=17; wire-guided technique, n=14). Airway access was almost always successful with the anatomical-surgical techniques (success rate 94% in the standard surgical group, 100% in the scissors group). In contrast, the success rate was lower (p<0.05) with the puncture techniques (catheter-over-needle group 82%, wire-guided technique 71%). Tracheal tube insertion time was shorter overall (p<0.05) with the anatomical-surgical techniques (standard surgical 78 s [54-135], novel cricothyrotomy scissors technique 60 s [42-82]; median [IQR]) than with the puncture techniques (catheter-over-needle technique 74 s [48-145], wire-guided technique 135 s [116-307]). We observed fewer complications with the anatomical-surgical techniques than with the puncture techniques (p<0.001). In inexperienced health care personnel, anatomical-surgical techniques showed a higher success rate, a faster tracheal tube insertion time and a lower complication rate compared with puncture techniques, suggesting that they may be the techniques of choice in emergencies.
Simulation and training of lumbar punctures using haptic volume rendering and a 6DOF haptic device
NASA Astrophysics Data System (ADS)
Färber, Matthias; Heller, Julika; Handels, Heinz
2007-03-01
The lumbar puncture is performed by inserting a needle into the spinal canal of the patient to inject medicaments or to extract cerebrospinal fluid. Training for this procedure is usually done on patients under the guidance of experienced supervisors. A virtual reality lumbar puncture simulator has been developed in order to minimize training costs and the patient's risk. We use a haptic device with six degrees of freedom (6DOF) to feed back forces that resist needle insertion and rotation. An improved haptic volume rendering approach is used to calculate the forces. This approach makes use of label data of relevant structures like skin, bone, muscles or fat, and of original CT data that contributes information about image structures that cannot be segmented. A real-time 3D visualization with optional stereo view shows the punctured region. 2D visualizations of orthogonal slices enable a detailed impression of the anatomical context. The input data, consisting of CT and label data and surface models of relevant structures, is defined in an XML file together with haptic rendering and visualization parameters. In a first evaluation the Visible Human male data set was used to generate a virtual training body. Several users with different levels of medical experience tested the lumbar puncture trainer. The simulator gives a good haptic and visual impression of the needle insertion, and the haptic volume rendering technique enables unsegmented structures to be felt. In particular, the restriction of transversal needle movement together with the rotation constraints enabled by the 6DOF device facilitates a realistic puncture simulation.
Convolution Operation of Optical Information via Quantum Storage
NASA Astrophysics Data System (ADS)
Li, Zhixiang; Liu, Jianji; Fan, Hongming; Zhang, Guoquan
2017-06-01
We propose a novel method to achieve optical convolution of two input images via quantum storage based on the electromagnetically induced transparency (EIT) effect. By placing an EIT medium in the confocal Fourier plane of a 4f-imaging system, the optical convolution of the two input images can be obtained in the image plane.
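The 4f geometry exploits the convolution theorem: a product formed in the Fourier plane corresponds to a convolution in the image plane. A numerical analogue of this optical operation, with illustrative random images:

```python
# Numerical analogue of the 4f convolution: multiply the two images'
# spectra in the Fourier plane, then transform back to the image plane.
import numpy as np

def convolve_4f(img_a, img_b):
    """Circular 2D convolution via the Fourier plane, as in a 4f system."""
    spectrum = np.fft.fft2(img_a) * np.fft.fft2(img_b)  # Fourier-plane product
    return np.real(np.fft.ifft2(spectrum))              # back to image plane

a = np.random.rand(64, 64)
b = np.random.rand(64, 64)
c = convolve_4f(a, b)  # c[m,n] = sum_{i,j} a[i,j] * b[(m-i) % 64, (n-j) % 64]
```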
NASA Astrophysics Data System (ADS)
Tachibana, Hideyuki; Suzuki, Takafumi; Mabuchi, Kunihiko
We address a method for estimating the isometric muscle tension of fingers, as fundamental research towards a neural-signal-based prosthesis for fingers. We utilize needle electromyogram (EMG) signals, which carry approximately the same information as peripheral neural signals. The estimation algorithm comprises two convolution operations. The first convolution, between a normal distribution and a spike array detected from the needle EMG signals, estimates the probability density of spike-invoking time in the muscle. In this convolution, we hypothesize that each motor unit in a muscle fires spikes independently according to the same probability density function. The second convolution is between the result of the first convolution and the isometric twitch, i.e., the impulse response of the motor unit. The result of the calculation is the sum of the estimated tensions of all muscle fibers, i.e., the muscle tension. We confirmed good correlation between the estimated and the actual muscle tension, with correlation coefficients above 0.9 in 59% and above 0.8 in 89% of all trials.
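The two-stage estimator can be sketched numerically as follows; the sampling rate, Gaussian width and twitch shape are assumptions, not the paper's parameters.

```python
# Sketch of the two-stage estimator: smooth the detected spike train with a
# Gaussian (firing-probability density), then convolve with a twitch impulse
# response. All waveform parameters below are illustrative assumptions.
import numpy as np

fs = 1000.0                                    # sampling rate, Hz (assumed)
t = np.arange(0, 0.2, 1 / fs)                  # 200-sample kernel support

spikes = np.zeros(2000)                        # a 2 s recording
spikes[np.random.randint(0, 2000, 40)] = 1.0   # stand-in for detected spikes

gauss = np.exp(-0.5 * ((t - 0.1) / 0.01) ** 2)
gauss /= gauss.sum()                           # firing-time probability kernel

twitch = (t / 0.03) * np.exp(1 - t / 0.03)     # assumed twitch impulse response

rate = np.convolve(spikes, gauss, mode="same")      # convolution 1
tension = np.convolve(rate, twitch, mode="full")    # convolution 2
```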
High Performance Implementation of 3D Convolutional Neural Networks on a GPU.
Lan, Qiang; Wang, Zelong; Wen, Mei; Zhang, Chunyuan; Wang, Yijie
2017-01-01
Convolutional neural networks have proven to be highly successful in applications such as image classification, object tracking, and many other tasks based on 2D inputs. Recently, researchers have started to apply convolutional neural networks to video classification, which constitutes a 3D input and requires far larger amounts of memory and much more computation. FFT based methods can reduce the amount of computation, but this generally comes at the cost of an increased memory requirement. On the other hand, the Winograd Minimal Filtering Algorithm (WMFA) can reduce the number of operations required and thus can speed up the computation, without increasing the required memory. This strategy was shown to be successful for 2D neural networks. We implement the algorithm for 3D convolutional neural networks and apply it to a popular 3D convolutional neural network which is used to classify videos and compare it to cuDNN. For our highly optimized implementation of the algorithm, we observe a twofold speedup for most of the 3D convolution layers of our test network compared to the cuDNN version.
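The 1D building block of the WMFA, usually written F(2,3), produces two outputs of a 3-tap filter with 4 multiplications instead of 6; the 2D and 3D variants used in convolutional layers nest this transform along each axis. A minimal sketch with a correctness check:

```python
# The 1D Winograd minimal filtering core F(2,3): two outputs of a 3-tap
# sliding dot product using 4 multiplications instead of 6.
import numpy as np

def winograd_f23(d, g):
    """d: 4 input samples, g: 3 filter taps -> 2 outputs."""
    m1 = (d[0] - d[2]) * g[0]
    m2 = (d[1] + d[2]) * (g[0] + g[1] + g[2]) / 2
    m3 = (d[2] - d[1]) * (g[0] - g[1] + g[2]) / 2
    m4 = (d[1] - d[3]) * g[2]
    return np.array([m1 + m2 + m3, m2 - m3 - m4])

d = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([0.5, 1.0, -1.0])
direct = np.array([d[0:3] @ g, d[1:4] @ g])   # naive sliding dot product
assert np.allclose(winograd_f23(d, g), direct)
```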
nRC: non-coding RNA Classifier based on structural features.
Fiannaca, Antonino; La Rosa, Massimo; La Paglia, Laura; Rizzo, Riccardo; Urso, Alfonso
2017-01-01
Non-coding RNAs (ncRNAs) are small non-coding sequences involved in gene expression regulation in many biological processes and diseases. The recent discovery of a large set of different ncRNAs with biologically relevant roles has opened the way to developing methods able to discriminate between the different ncRNA classes. Moreover, the lack of knowledge about the complete mechanisms of regulative processes, together with the development of high-throughput technologies, has made bioinformatics tools necessary to provide biologists and clinicians with a deeper comprehension of the functional roles of ncRNAs. In this work, we introduce a new ncRNA classification tool, nRC (non-coding RNA Classifier). Our approach is based on feature extraction from the ncRNA secondary structure together with a supervised classification algorithm implementing a deep learning architecture based on convolutional neural networks. We tested our approach on the classification of 13 different ncRNA classes and obtained classification scores using the most common statistical measures; in particular, we reached an accuracy and sensitivity of about 74%. The proposed method outperforms other similar classification methods based on secondary structure features and machine learning algorithms, including the RNAcon tool that, to date, is the reference classifier. The nRC tool is freely available as a docker image at https://hub.docker.com/r/tblab/nrc/. The source code of the nRC tool is also available at https://github.com/IcarPA-TBlab/nrc.
Lin, Chin; Hsu, Chia-Jung; Lou, Yu-Sheng; Yeh, Shih-Jen; Lee, Chia-Cheng; Su, Sui-Lung; Chen, Hsiang-Cheng
2017-11-06
Automated disease code classification using free-text medical information is important for public health surveillance. However, traditional natural language processing (NLP) pipelines are limited, so we propose a method combining word embedding with a convolutional neural network (CNN). Our objective was to compare the performance of traditional pipelines (NLP plus supervised machine learning models) with that of word embedding combined with a CNN in a classification task identifying International Classification of Diseases, Tenth Revision, Clinical Modification (ICD-10-CM) diagnosis codes in discharge notes. We used 2 classification methods: (1) extracting from discharge notes some features (terms, n-gram phrases, and SNOMED CT categories) that we used to train a set of supervised machine learning models (support vector machine, random forests, and gradient boosting machine), and (2) building a feature matrix, by a pretrained word embedding model, that we used to train a CNN. We used these methods to identify the chapter-level ICD-10-CM diagnosis codes in a set of discharge notes. We conducted the evaluation using 103,390 discharge notes covering patients hospitalized from June 1, 2015 to January 31, 2017 in the Tri-Service General Hospital in Taipei, Taiwan. We used the receiver operating characteristic curve as an evaluation measure, and calculated the area under the curve (AUC) and F-measure as global measures of effectiveness. In 5-fold cross-validation tests, our method had a higher testing accuracy (mean AUC 0.9696; mean F-measure 0.9086) than traditional NLP-based approaches (mean AUC range 0.8183-0.9571; mean F-measure range 0.5050-0.8739). A real-world simulation that split the training sample and the testing sample by date verified this result (mean AUC 0.9645; mean F-measure 0.9003 using the proposed method). Further analysis showed that the convolutional layers of the CNN effectively identified a large number of keywords and automatically extracted enough concepts to predict the diagnosis codes. Word embedding combined with a CNN showed outstanding performance compared with traditional methods, needing very little data preprocessing. This shows that future studies will not be limited by incomplete dictionaries. A large amount of unstructured information from free-text medical writing will be extracted by automated approaches in the future, and we believe that the health care field is about to enter the age of big data.
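The word-embedding-plus-CNN classifier described above can be sketched as follows; the vocabulary size, embedding width, filter sizes and number of output chapters are assumptions, and in practice the embedding layer would be initialized from the pretrained word embedding model rather than at random.

```python
# Minimal sketch of a text CNN over embedded discharge-note tokens:
# 1D convolutions of several widths -> max-pooling -> chapter logits.
# All dimensions here are illustrative assumptions.
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    def __init__(self, vocab=30000, emb=100, n_filters=128, n_chapters=22):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.convs = nn.ModuleList(
            nn.Conv1d(emb, n_filters, k) for k in (3, 4, 5))
        self.out = nn.Linear(3 * n_filters, n_chapters)

    def forward(self, tokens):                    # tokens: (batch, seq_len)
        x = self.embed(tokens).transpose(1, 2)    # (batch, emb, seq_len)
        pooled = [c(x).amax(dim=2) for c in self.convs]  # max over positions
        return self.out(torch.cat(pooled, dim=1))        # one logit per chapter

notes = torch.randint(0, 30000, (8, 400))         # a batch of token ids
logits = TextCNN()(notes)
```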
Information computer program for laser therapy and laser puncture
NASA Astrophysics Data System (ADS)
Badovets, Nadegda N.; Medvedev, Andrei V.
1995-03-01
An information computer program containing laser therapy and laser puncture methods has been developed. It was used successfully with the compact Russian medical laser apparatus HELIOS-O1M in laser treatment and in the education process.
Convoluted nozzle design for the RL10 derivative 2B engine
NASA Technical Reports Server (NTRS)
1985-01-01
The convoluted nozzle is a conventional refractory-metal nozzle extension that is formed with a portion of the nozzle convoluted to stow the extendible nozzle within the length of the rocket engine. The convoluted nozzle (CN) was deployed by a system of four gas-driven actuators. For spacecraft applications, the optimum CN may be self-deployed by internal pressure retained, during deployment, by a jettisonable exit closure. The convoluted nozzle is included in a study of extendible nozzles for the RL10 Engine Derivative 2B for use in an early orbit transfer vehicle (OTV). Four extendible nozzle configurations for the RL10-2B engine were evaluated. Three configurations of the two-position nozzle were studied, including a hydrogen dump-cooled metal nozzle and radiation-cooled nozzles of refractory metal and carbon/carbon composite construction, respectively.
Sim, K S; Teh, V; Tey, Y C; Kho, T K
2016-11-01
This paper introduces a new technique to improve Scanning Electron Microscope (SEM) image quality, which we name sub-blocking multiple peak histogram equalization (SUB-B-MPHE) with a convolution operator. Using this technique, the modified MPHE performs better than the original MPHE. In addition, the sub-blocking method incorporates a convolution operator that removes the blocking effect from SEM images after the new technique is applied. By properly distributing suitable pixel values over the whole image, the convolution operator effectively removes the blocking effect. Overall, SUB-B-MPHE with convolution outperforms the other methods.
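A rough sketch of the sub-blocking idea, with plain per-block histogram equalization standing in for MPHE and a small mean-filter convolution suppressing blocking artefacts at sub-block borders; the block size and kernel are assumptions.

```python
# Per-block histogram equalization followed by a 3x3 mean-filter convolution
# to smooth discontinuities at block borders. MPHE itself is approximated
# here by plain equalization; block size and kernel are assumptions.
import numpy as np

def equalize(block):
    hist, bins = np.histogram(block, bins=256, range=(0, 256))
    cdf = hist.cumsum() / block.size
    return np.interp(block, bins[:-1], cdf * 255)

def sub_block_equalize(img, bs=64):
    out = np.empty_like(img, dtype=float)
    for i in range(0, img.shape[0], bs):
        for j in range(0, img.shape[1], bs):
            out[i:i+bs, j:j+bs] = equalize(img[i:i+bs, j:j+bs])
    k = np.ones((3, 3)) / 9.0                  # mean-filter convolution kernel
    padded = np.pad(out, 1, mode="edge")
    return sum(padded[di:di+out.shape[0], dj:dj+out.shape[1]] * k[di, dj]
               for di in range(3) for dj in range(3))

sem = np.random.randint(0, 256, (256, 256)).astype(float)
enhanced = sub_block_equalize(sem)
```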
Fox, R G; Reiche, W; Kiefer, M; Hagen, T; Huber, G
1996-11-01
Myelography in combination with postmyelography CT is an important presurgical examination because of its excellent visualisation of the disc, the bone and the contrast-filled dura. Side effects after myelography can be observed in up to 50% of patients. The pathophysiological mechanism is thought to be increased cerebrospinal fluid leakage at the puncture site. Since the introduction by Sprotte in 1979 of the pencil-point needle, a modification of Whitacre's needle, fewer complaints after lumbar puncture have been reported. The aim of the study was to examine the influence of two types of needle point and of the temperature (37 degrees C vs 21 degrees C) of the contrast medium (CM; iotrolan, Isovist) on the incidence of side effects of lumbar puncture for myelography. In a prospective randomised trial, the incidence of complaints after lumbar puncture with intrathecal CM application was evaluated for a 21-G pencil-point needle as modified by Sprotte compared with our usual 22-G needle with a Quincke bevel. Some 412 patients (201 female, 211 male; mean age 54.05 +/- 7.4 years) were investigated. Directly after the examination and 1, 3 and 5 days later, the patients were questioned about complaints (headache, neck stiffness, nausea, vomiting, buzzing in the ear and dizziness). The results were tested by the chi-square test. A significantly lower incidence of complaints was seen after lumbar puncture with the pencil-point needle than with the Quincke needle (headache: 6.3% vs 18.9%, P < 0.0001; headache lasting 3 days: 0.5% vs 7.8%, P < 0.0001; headache lasting 5 days: 0% vs 2.4%, P = 0.0305; nausea: 0% vs 4.9%, P = 0.0009; vomiting: 0% vs 3.4%, P = 0.0009; dizziness: 0% vs 3.4%, P = 0.0074; neck stiffness: 0% vs 3.4%, P = 0.0074). The temperature of the CM had no influence on the complaints, and no influence on the quality of the myelogram was seen. No relation to sex or age was found. Complaints after lumbar puncture and myelography are caused by cerebrospinal fluid leakage at the puncture site. The incidence of side effects related to this leakage can be reduced by using a pencil-point needle; the temperature of the CM has no influence on the complaints.
Epidural blood patching for preventing and treating post-dural puncture headache.
Sudlow, C; Warlow, C
2002-01-01
Dural puncture is a common procedure, but leakage of CSF from the resulting dural defect may cause postural headache after the procedure, and this can be disabling. Injecting an epidural blood patch around the site of the defect may stop this leakage, and so may have a role in preventing or treating post dural puncture headache. To assess the possible benefits and harms of epidural blood patching in both the prevention and the treatment of post-dural puncture headache. We searched the Cochrane Controlled Trials Register (Cochrane Library, Issue 4, 2000), MEDLINE (January 1994 to December 1998), and EMBASE (January 1980 to December 1998). We also searched the reference lists of relevant articles identified electronically, and asked both the authors of all included trials and colleagues with an interest in this area to let us know of any other potentially relevant studies not already identified. Date of last search: December 2000. We sought all properly randomised, unconfounded trials that compared epidural blood patch versus no epidural blood patch in the prevention or treatment of post-dural puncture headache among all types of patients undergoing dural puncture for any reason. The primary outcome of effectiveness was postural headache. One reviewer extracted details of trial methodology and outcome data from the reports of all trials considered eligible for inclusion. We invited the authors of all such trials both to check the information extracted and to provide any details that were unavailable in the published reports. Intention-to-treat analyses were performed using the Peto O-E method. Information about adverse effects (post-dural puncture backache, epidural infection and lower limb paraesthesia) was also extracted. Three trials (77 patients) were eligible for inclusion. Methodological details were generally incomplete. Although the results of our analyses suggested that both prophylactic and therapeutic epidural blood patching may be of benefit, the very small numbers of patients and outcome events, as well as uncertainties about trial methodology, precluded reliable assessments of the potential benefits and harms of this intervention. Further, adequately powered, randomised trials (including at least a few hundred patients) are required before reliable conclusions can be drawn about the role of epidural blood patching in the prevention and treatment of post-dural puncture headache.
A proposed technique for the Venus balloon telemetry and Doppler frequency recovery
NASA Technical Reports Server (NTRS)
Jurgens, R. F.; Divsalar, D.
1985-01-01
A technique is proposed to accurately estimate the Doppler frequency and demodulate the digitally encoded telemetry signal that contains the measurements from the balloon instruments. Since the data are prerecorded, one can take advantage of noncausal estimators that are both simpler and more computationally efficient than the usual closed-loop or real-time estimators for signal detection and carrier tracking. Algorithms for carrier frequency estimation, subcarrier demodulation, and bit and frame synchronization are described. A Viterbi decoder algorithm using a branch-indexing technique has been devised to decode the constraint-length 6, rate 1/2 convolutional code used by the balloon transmitter. These algorithms are memory efficient and can be implemented on microcomputer systems.
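A rate-1/2, constraint-length-6 convolutional encoder of the kind this decoder must handle can be sketched as follows; the generator polynomials (octal 53 and 75) are illustrative assumptions, since the abstract does not give the balloon transmitter's actual generators.

```python
# Sketch of a rate-1/2, constraint-length-6 convolutional encoder.
# The generators (octal 53, 75) are assumed for illustration only.
G1, G2 = 0o53, 0o75          # 6-bit tap masks for K = 6

def conv_encode(bits):
    """Return two coded bits per input bit (rate 1/2)."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0x3F          # keep the last K = 6 bits
        out += [bin(state & G1).count("1") & 1,    # parity of tapped bits
                bin(state & G2).count("1") & 1]
    return out

print(conv_encode([1, 0, 1, 1, 0, 0]))   # 12 coded bits for 6 input bits
```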
NASA Astrophysics Data System (ADS)
Lidar, Daniel A.; Brun, Todd A.
2013-09-01
Prologue; Preface; Part I. Background: 1. Introduction to decoherence and noise in open quantum systems Daniel Lidar and Todd Brun; 2. Introduction to quantum error correction Dave Bacon; 3. Introduction to decoherence-free subspaces and noiseless subsystems Daniel Lidar; 4. Introduction to quantum dynamical decoupling Lorenza Viola; 5. Introduction to quantum fault tolerance Panos Aliferis; Part II. Generalized Approaches to Quantum Error Correction: 6. Operator quantum error correction David Kribs and David Poulin; 7. Entanglement-assisted quantum error-correcting codes Todd Brun and Min-Hsiu Hsieh; 8. Continuous-time quantum error correction Ognyan Oreshkov; Part III. Advanced Quantum Codes: 9. Quantum convolutional codes Mark Wilde; 10. Non-additive quantum codes Markus Grassl and Martin Rötteler; 11. Iterative quantum coding systems David Poulin; 12. Algebraic quantum coding theory Andreas Klappenecker; 13. Optimization-based quantum error correction Andrew Fletcher; Part IV. Advanced Dynamical Decoupling: 14. High order dynamical decoupling Zhen-Yu Wang and Ren-Bao Liu; 15. Combinatorial approaches to dynamical decoupling Martin Rötteler and Pawel Wocjan; Part V. Alternative Quantum Computation Approaches: 16. Holonomic quantum computation Paolo Zanardi; 17. Fault tolerance for holonomic quantum computation Ognyan Oreshkov, Todd Brun and Daniel Lidar; 18. Fault tolerant measurement-based quantum computing Debbie Leung; Part VI. Topological Methods: 19. Topological codes Héctor Bombín; 20. Fault tolerant topological cluster state quantum computing Austin Fowler and Kovid Goyal; Part VII. Applications and Implementations: 21. Experimental quantum error correction Dave Bacon; 22. Experimental dynamical decoupling Lorenza Viola; 23. Architectures Jacob Taylor; 24. Error correction in quantum communication Mark Wilde; Part VIII. Critical Evaluation of Fault Tolerance: 25. Hamiltonian methods in QEC and fault tolerance Eduardo Novais, Eduardo Mucciolo and Harold Baranger; 26. Critique of fault-tolerant quantum information processing Robert Alicki; References; Index.
An efficient system for reliably transmitting image and video data over low bit rate noisy channels
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.; Huang, Y. F.; Stevenson, Robert L.
1994-01-01
This research project is intended to develop an efficient system for reliably transmitting image and video data over low bit rate noisy channels. The basic ideas behind the proposed approach are the following: employ statistical-based image modeling to facilitate pre- and post-processing and error detection, use spare redundancy that the source compression did not remove to add robustness, and implement coded modulation to improve bandwidth efficiency and noise rejection. Over the last six months, progress has been made on various aspects of the project. Through our studies of the integrated system, a list-based iterative Trellis decoder has been developed. The decoder accepts feedback from a post-processor which can detect channel errors in the reconstructed image. The error detection is based on the Huber Markov random field image model for the compressed image. The compression scheme used here is that of JPEG (Joint Photographic Experts Group). Experiments were performed and the results are quite encouraging. The principal ideas here are extendable to other compression techniques. In addition, research was also performed on unequal error protection channel coding, subband vector quantization as a means of source coding, and post processing for reducing coding artifacts. Our studies on unequal error protection (UEP) coding for image transmission focused on examining the properties of the UEP capabilities of convolutional codes. The investigation of subband vector quantization employed a wavelet transform with special emphasis on exploiting interband redundancy. The outcome of this investigation included the development of three algorithms for subband vector quantization. The reduction of transform coding artifacts was studied with the aid of a non-Gaussian Markov random field model. This results in improved image decompression. These studies are summarized and the technical papers included in the appendices.
Numerical relativity for D dimensional axially symmetric space-times: Formalism and code tests
NASA Astrophysics Data System (ADS)
Zilhão, Miguel; Witek, Helvi; Sperhake, Ulrich; Cardoso, Vitor; Gualtieri, Leonardo; Herdeiro, Carlos; Nerozzi, Andrea
2010-04-01
The numerical evolution of Einstein’s field equations in a generic background has the potential to answer a variety of important questions in physics: from applications to the gauge-gravity duality, to modeling black hole production in TeV gravity scenarios, to analysis of the stability of exact solutions, and to tests of cosmic censorship. In order to investigate these questions, we extend numerical relativity to more general space-times than those investigated hitherto, by developing a framework to study the numerical evolution of D dimensional vacuum space-times with an SO(D-2) isometry group for D≥5, or SO(D-3) for D≥6. Performing a dimensional reduction on a (D-4) sphere, the D dimensional vacuum Einstein equations are rewritten as a 3+1 dimensional system with source terms, and presented in the Baumgarte, Shapiro, Shibata, and Nakamura formulation. This allows the use of existing 3+1 dimensional numerical codes with small adaptations. Brill-Lindquist initial data are constructed in D dimensions and a procedure to match them to our 3+1 dimensional evolution equations is given. We have implemented our framework by adapting the Lean code and perform a variety of simulations of nonspinning black hole space-times. Specifically, we present a modified moving puncture gauge, which facilitates long-term stable simulations in D=5. We further demonstrate the internal consistency of the code by studying convergence and comparing numerical versus analytic results in the case of geodesic slicing for D=5, 6.
Kim, Ah-Young; Choi, Myoung Su
2015-05-14
Canine fossa puncture (CFP) combined with endoscopic sinus surgery is a simple and effective method for treating antrochoanal polyps, particularly those that originate in the anterior, inferior or medial aspect of the antrum. Several complications can occur following CFP, including facial paraesthesia and dental numbness. However, facial palsy is extremely rare after CFP. We postulated that a possible mechanism of facial palsy is pressure injury to the soft tissues adjacent to the puncture site, which can damage the buccal branch of the facial nerve during CFP.
Huynh, Thien J; Morton, Ryan P; Levitt, Michael R; Ghodke, Basavaraj V; Wink, Onno; Hallam, Danial K
2017-08-18
We report successful transvenous treatment of direct carotid-cavernous fistula in a patient with Ehlers-Danlos syndrome type IV using a novel triple-overlay embolization (TAILOREd) technique without the need for arterial puncture, which is known to be highly risky in this patient group. The TAILOREd technique allowed for successful treatment using preoperative MR angiography as a three-dimensional overlay roadmap combined with cone beam CT and live fluoroscopy, precluding the need for an arterial puncture.
Tarkkila, P; Huhtala, J; Salminen, U
1994-08-01
The effect of different size (25-, 27- and 29-gauge) Quincke-type spinal needles on the incidence of insertion difficulties and failure rates was investigated in a randomised, prospective study with 300 patients. The needle size was randomised but the insertion procedure was standardised. The time to achieve dural puncture was significantly longer with the 29-gauge spinal needle compared with the larger bore needles and was due to the greater flexibility of the thin needle. However, the difference was less than 1 min and cannot be considered clinically significant. There were no significant differences between groups in the number of insertion attempts or failures and the same sensory level of analgesia was reached with all the needle sizes studied. Postoperatively, no postdural puncture headaches occurred in the 29-gauge spinal needle group, whilst in the 25- and 27-gauge needle groups, the postdural puncture headache rates were 7.4% and 2.1% respectively. The incidence of backache was similar in all study groups. We conclude that dural puncture with a 29-gauge spinal needle is clinically as easy as with larger bore needles and its use is indicated in patients who have a high risk of postdural puncture headache.
Effect of system compliance on crack nucleation in soft materials
NASA Astrophysics Data System (ADS)
Rattan, Shruti; Crosby, Alfred
Puncture mechanics in soft materials is critical for the development of new surgical instruments, robot-assisted surgery, and new materials used in personal protective equipment. However, analytical techniques to study this important deformation process are limited. We have previously described a simple experimental method to study the resistive forces and failure of a soft gel being indented with a small-tipped needle. We showed that puncture stresses can reach two orders of magnitude greater than the material modulus and that the force response is insensitive to the geometry of the indenter at large indentation depths. Currently, we are examining the influence of system compliance on crack nucleation (e.g. puncture) in soft gels. It is well known that system compliance influences the peak force in adhesion and traditional fracture experiments; however, its influence on crack nucleation is unresolved. We find that as the system becomes more compliant, lower peak forces are required to puncture a gel of a given stiffness with the same indenter. We are developing scaling relationships to relate the peak puncture force and system compliance. Our findings introduce new questions with regard to the possibility of intrinsic material properties related to the critical stress and energy for crack nucleation in soft materials.
Rose, D. V.; Madrid, E. A.; Welch, D. R.; ...
2015-03-04
Numerical simulations of a vacuum post-hole convolute driven by magnetically insulated vacuum transmission lines (MITLs) are used to study current losses due to charged particle emission from the MITL-convolute-system electrodes. This work builds on the results of a previous study [E. A. Madrid et al., Phys. Rev. ST Accel. Beams 16, 120401 (2013)] and adds realistic power pulses, Ohmic heating of anode surfaces, and a model for the formation and evolution of cathode plasmas. The simulations suggest that modestly larger anode-cathode gaps in the MITLs upstream of the convolute result in significantly less current loss. In addition, longer pulse durations lead to somewhat greater current loss due to cathode-plasma expansion. These results can be applied to the design of future MITL-convolute systems for high-current pulsed-power systems.
Classification of urine sediment based on convolution neural network
NASA Astrophysics Data System (ADS)
Pan, Jingjing; Jiang, Cunbo; Zhu, Tiantian
2018-04-01
By designing a new convolutional neural network framework, this paper overcomes the constraints of the original framework, which requires large training samples of identical size. By shifting and cropping the input images, sub-images of the same size are generated. Dropout is then applied to the generated sub-images, increasing sample diversity and preventing overfitting. Proper subsets of the sub-image set are randomly selected such that all subsets have the same number of elements but no two subsets are identical. These subsets are used as input layers for the convolutional neural network. Through the convolution layers, pooling, the fully connected layer and the output layer, the classification loss rates of the test and training sets are obtained. In an experiment classifying red blood cells, white blood cells and calcium oxalate crystals in urine sediment, the classification accuracy reached 97% or more.
Looe, Hui Khee; Delfs, Björn; Poppinga, Daniela; Harder, Dietrich; Poppe, Björn
2017-06-21
The distortion of detector reading profiles across photon beams in the presence of magnetic fields is a developing subject of clinical photon-beam dosimetry. The underlying modification by the Lorentz force of a detector's lateral dose response function, i.e. the convolution kernel transforming the true cross-beam dose profile in water into the detector reading profile, is studied here for the first time. The three basic convolution kernels, the photon fluence response function, the dose deposition kernel, and the lateral dose response function, of wall-less cylindrical detectors filled with water of low, normal and enhanced density are shown by Monte Carlo simulation to be distorted in the prevailing direction of the Lorentz force. The asymmetric shape changes of these convolution kernels in a water medium and in magnetic fields of up to 1.5 T are confined to the lower millimetre range, and they depend on the photon beam quality, the magnetic flux density and the detector's density. The impact of this distortion on detector reading profiles is demonstrated using a narrow photon beam profile. For clinical applications it appears favourable that the magnetic-flux-density-dependent distortion of the lateral dose response function, as far as secondary electron transport is concerned, vanishes for water-equivalent detectors of normal water density. By means of secondary electron history backtracing, the spatial distribution of the photon interactions giving rise, either directly or via scattered photons further downstream, to the secondary electrons contributing to the detector's signal, and the lateral shift of this distribution due to the Lorentz force, are elucidated. Electron history backtracing also serves to illustrate the correct treatment of the influences of the Lorentz force in the EGSnrc Monte Carlo code applied in this study.
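The convolution relation behind the term "lateral dose response function" (standard detector-response formalism rather than a formula quoted from the paper) can be written as:

```latex
% M(x): detector reading profile; D(x): true cross-beam dose profile in water;
% K(x): the detector's lateral dose response function (convolution kernel).
M(x) = \int_{-\infty}^{\infty} K(x - x')\, D(x')\, \mathrm{d}x' = (K \ast D)(x)
```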