Sample records for compressed blind de-convolution

  1. Hybrid sparse blind deconvolution: an implementation of SOOT algorithm to real data

    NASA Astrophysics Data System (ADS)

    Pakmanesh, Parvaneh; Goudarzi, Alireza; Kourki, Meisam

    2018-06-01

    Extracting information from seismic data depends on deconvolution as an important processing step; it recovers the reflectivity series by compressing the signal. This compression is obtained by removing the wavelet effects from the traces. Recently, blind deconvolution has provided reliable performance for sparse signal recovery. In this study, two deconvolution methods have been applied to seismic data; their combination provides a robust spiking deconvolution approach. This hybrid deconvolution chains the sparse deconvolution (MM algorithm) with the Smoothed One-Over-Two (SOOT) algorithm. The MM algorithm is based on minimizing a cost function defined by the l1 and l2 norms. After applying the two algorithms to the seismic data, the SOOT algorithm provided well-compressed data with a higher resolution than the MM algorithm. For real data, the SOOT algorithm requires initial values, such as the wavelet coefficients and reflectivity series, which can be obtained from the MM algorithm. The computational cost of the hybrid method is high, so it is best suited to post-stack or pre-stack seismic data from regions with complex structure.
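
    As a concrete illustration of the sparse spiking deconvolution idea in this record, the sketch below recovers a sparse reflectivity series with a plain l1-regularized deconvolution solved by ISTA. This is a simplified stand-in, not the MM or SOOT algorithm itself, and the wavelet, regularization weight, and iteration count are illustrative assumptions.

        import numpy as np

        def ista_deconvolve(trace, wavelet, lam=0.1, n_iter=200):
            """Recover a sparse reflectivity series r from trace ~ wavelet * r."""
            n = len(trace)
            # Build the causal convolution operator as a dense matrix (clarity only).
            W = np.zeros((n, n))
            for i in range(n):
                for j in range(n):
                    if 0 <= i - j < len(wavelet):
                        W[i, j] = wavelet[i - j]
            step = 1.0 / np.linalg.norm(W, 2) ** 2      # 1 / Lipschitz constant
            r = np.zeros(n)
            for _ in range(n_iter):
                grad = W.T @ (W @ r - trace)            # gradient of the l2 data term
                z = r - step * grad
                r = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)  # soft threshold
            return r

        # Example: a toy wavelet and two reflectivity spikes.
        wavelet = np.array([0.2, 0.6, 1.0, 0.6, 0.2])
        r_true = np.zeros(64); r_true[10] = 1.0; r_true[40] = -0.7
        trace = np.convolve(r_true, wavelet)[:64]
        r_est = ista_deconvolve(trace, wavelet)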

  2. Convolutional coding techniques for data protection

    NASA Technical Reports Server (NTRS)

    Massey, J. L.

    1975-01-01

    Results of research on the use of convolutional codes in data communications are presented. Convolutional coding fundamentals are discussed along with modulation and coding interaction. Concatenated coding systems and data compression with convolutional codes are described.

  3. Wireless Visual Sensor Network Resource Allocation using Cross-Layer Optimization

    DTIC Science & Technology

    2009-01-01

    Rate Compatible Punctured Convolutional (RCPC) codes for channel...vol. 44, pp. 2943–2959, November 1998. [22] J. Hagenauer, “Rate-compatible punctured convolutional codes (RCPC codes) and their applications,” IEEE... coding rate for H.264/AVC video compression is determined. At the data link layer, the Rate-Compatible Punctured Convolutional (RCPC) channel coding

  4. Compression of deep convolutional neural network for computer-aided diagnosis of masses in digital breast tomosynthesis

    NASA Astrophysics Data System (ADS)

    Samala, Ravi K.; Chan, Heang-Ping; Hadjiiski, Lubomir; Helvie, Mark A.; Richter, Caleb; Cha, Kenny

    2018-02-01

    Deep-learning models are highly parameterized, causing difficulty in inference and transfer learning. We propose a layered pathway evolution method to compress a deep convolutional neural network (DCNN) for classification of masses in DBT while maintaining the classification accuracy. Two-stage transfer learning was used to adapt the ImageNet-trained DCNN to mammography and then to DBT. In the first stage, transfer learning from the ImageNet-trained DCNN was performed using mammography data. In the second stage, the mammography-trained DCNN was trained on the DBT data using feature extraction from the fully connected layer, recursive feature elimination, and random forest classification. The layered pathway evolution encapsulates the stages from feature extraction to classification to compress the DCNN. A genetic algorithm was used in an iterative approach with tournament selection driven by count-preserving crossover and mutation to identify the necessary nodes in each convolution layer while eliminating the redundant nodes. The DCNN was reduced by 99% in the number of parameters and 95% in mathematical operations in the convolutional layers. The lesion-based area under the receiver operating characteristic curve on an independent DBT test set was 0.88 ± 0.05 for the original network and 0.90 ± 0.04 for the compressed network. The difference did not reach statistical significance. We demonstrated a DCNN compression approach without additional fine-tuning or loss of performance for classification of masses in DBT. The approach can be extended to other DCNNs and transfer learning tasks. An ensemble of these smaller and focused DCNNs has the potential to be used in multi-target transfer learning.

  5. Recent advances in coding theory for near error-free communications

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.; Deutsch, L. J.; Dolinar, S. J.; Mceliece, R. J.; Pollara, F.; Shahshahani, M.; Swanson, L.

    1991-01-01

    Channel and source coding theories are discussed. The following subject areas are covered: large constraint length convolutional codes (the Galileo code); decoder design (the big Viterbi decoder); Voyager's and Galileo's data compression scheme; current research in data compression for images; neural networks for soft decoding; neural networks for source decoding; finite-state codes; and fractals for data compression.

  6. Rolling bearing fault feature learning using improved convolutional deep belief network with compressed sensing

    NASA Astrophysics Data System (ADS)

    Shao, Haidong; Jiang, Hongkai; Zhang, Haizhou; Duan, Wenjing; Liang, Tianchen; Wu, Shuaipeng

    2018-02-01

    The vibration signals collected from rolling bearings are usually complex and non-stationary, with heavy background noise. Therefore, it is a great challenge to efficiently learn representative fault features from the collected vibration signals. In this paper, a novel method called improved convolutional deep belief network (CDBN) with compressed sensing (CS) is developed for feature learning and fault diagnosis of rolling bearings. Firstly, CS is adopted to reduce the amount of vibration data and improve analysis efficiency. Secondly, a new CDBN model is constructed with Gaussian visible units to enhance the feature learning ability for the compressed data. Finally, the exponential moving average (EMA) technique is employed to improve the generalization performance of the constructed deep model. The developed method is applied to analyze experimental rolling bearing vibration signals. The results confirm that the developed method is more effective than traditional methods.

  7. An improved robust blind motion de-blurring algorithm for remote sensing images

    NASA Astrophysics Data System (ADS)

    He, Yulong; Liu, Jin; Liang, Yonghui

    2016-10-01

    Shift-invariant motion blur can be modeled as a convolution of the true latent image and the blur kernel with additive noise. Blind motion de-blurring estimates a sharp image from a motion-blurred image without knowledge of the blur kernel. This paper proposes an improved edge-specific motion de-blurring algorithm which proves well suited to processing remote sensing images. We find that an inaccurate blur kernel is the main cause of low-quality restored images. To improve image quality, we make the following contributions. For robust kernel estimation, first, we adapt the multi-scale scheme to make sure that the edge map can be constructed accurately; second, an effective salient edge selection method based on RTV (Relative Total Variation) is used to extract salient structure from texture; third, an alternating iterative method is introduced to perform kernel optimization; in this step, we adopt the l1 and l0 norms as priors to remove noise and ensure the continuity of the blur kernel. For the final latent image reconstruction, an improved adaptive deconvolution algorithm based on the TV-l2 model is used to recover the latent image; we control the regularization weight adaptively in different regions according to local image characteristics in order to preserve tiny details and eliminate noise and ringing artifacts. Some synthetic remote sensing images are used to test the proposed algorithm, and results demonstrate that the proposed algorithm obtains an accurate blur kernel and achieves better de-blurring results.
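
    To make the stated convolution model concrete, here is a minimal sketch of the final non-blind step once a kernel estimate is available. The paper uses an adaptive TV-l2 deconvolution; this simpler Tikhonov-regularized Fourier inversion is only a stand-in illustrating the model B = K * I + n.

        import numpy as np

        def fourier_deconvolve(blurred, kernel, reg=1e-2):
            """Estimate the latent image I from blurred = kernel * I + noise."""
            H = np.fft.fft2(kernel, s=blurred.shape)    # kernel transfer function
            B = np.fft.fft2(blurred)
            # Regularized inverse filter: conj(H) / (|H|^2 + reg).
            I_hat = np.conj(H) * B / (np.abs(H) ** 2 + reg)
            return np.real(np.fft.ifft2(I_hat))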

  8. Bilinear Inverse Problems: Theory, Algorithms, and Applications

    NASA Astrophysics Data System (ADS)

    Ling, Shuyang

    We will discuss how several important real-world signal processing problems, such as self-calibration and blind deconvolution, can be modeled as bilinear inverse problems and solved by convex and nonconvex optimization approaches. In Chapter 2, we bring together three seemingly unrelated concepts: self-calibration, compressive sensing and biconvex optimization. We show how several self-calibration problems can be treated efficiently within the framework of biconvex compressive sensing via a new method called SparseLift. More specifically, we consider a linear system of equations y = DAx, where the diagonal matrix D (which models the calibration error) is unknown and x is an unknown sparse signal. By "lifting" this biconvex inverse problem and exploiting sparsity in this model, we derive explicit theoretical guarantees under which both x and D can be recovered exactly, robustly, and numerically efficiently. In Chapter 3, we study the question of joint blind deconvolution and blind demixing, i.e., extracting a sequence of functions [special characters omitted] from observing only the sum of their convolutions [special characters omitted]. In particular, for the special case s = 1, it becomes the well-known blind deconvolution problem. We present a non-convex algorithm which guarantees exact recovery under conditions that are competitive with convex optimization methods, with the additional advantage of being computationally much more efficient. We discuss several applications of the proposed framework in image processing and wireless communications in connection with the Internet-of-Things. In Chapter 4, we consider three different self-calibration models of practical relevance. We show how their corresponding bilinear inverse problems can be solved by both a simple linear least squares approach and an SVD-based approach. As a consequence, the proposed algorithms are numerically extremely efficient, thus allowing for real-time deployment. Explicit theoretical guarantees and stability theory are derived, and the sampling complexity is nearly optimal (up to a poly-log factor). Applications in imaging sciences and signal processing are discussed and numerical simulations are presented to demonstrate the effectiveness and efficiency of our approach.
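
    A toy numerical check of the lifting idea behind SparseLift, under the simplifying assumption that the calibration vector d is fully unknown (the thesis actually constrains d to a known subspace): the bilinear system y = DAx becomes linear in the rank-one matrix X = d x^T.

        import numpy as np

        m, n = 8, 5
        rng = np.random.default_rng(0)
        A = rng.standard_normal((m, n))
        d = rng.standard_normal(m)           # unknown calibration gains (diagonal of D)
        x = np.zeros(n); x[1] = 2.0          # unknown sparse signal
        y = d * (A @ x)                      # bilinear measurements y = DAx

        X = np.outer(d, x)                   # lifted rank-one variable X = d x^T
        y_lifted = np.sum(A * X, axis=1)     # each y_i = <A[i, :], X[i, :]>
        assert np.allclose(y, y_lifted)      # the lifted model is exactly linear in X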

  9. Radio astronomy Explorer B antenna aspect processor

    NASA Technical Reports Server (NTRS)

    Miller, W. H.; Novello, J.; Reeves, C. C.

    1972-01-01

    The antenna aspect system used on the Radio Astronomy Explorer B spacecraft is described. This system consists of two facsimile cameras, a data encoder, and a data processor. Emphasis is placed on the discussion of the data processor, which contains a data compressor and a source encoder. With this compression scheme a compression ratio of 8 is achieved on a typical line of camera data. These compressed data are then convolutionally encoded.

  10. Coherent diffraction imaging of nanoscale strain evolution in a single crystal under high pressure

    PubMed Central

    Yang, Wenge; Huang, Xiaojing; Harder, Ross; Clark, Jesse N.; Robinson, Ian K.; Mao, Ho-kwang

    2013-01-01

    The evolution of morphology and internal strain under high pressure fundamentally alters the physical property, structural stability, phase transition and deformation mechanism of materials. Until now, only averaged strain distributions have been studied. Bragg coherent X-ray diffraction imaging is highly sensitive to the internal strain distribution of individual crystals but requires coherent illumination, which can be compromised by the complex high-pressure sample environment. Here we report the successful de-convolution of these effects with the recently developed mutual coherent function method to reveal the three-dimensional strain distribution inside a 400 nm gold single crystal during compression within a diamond-anvil cell. The three-dimensional morphology and evolution of the strain under pressures up to 6.4 GPa were obtained with better than 30 nm spatial resolution. In addition to providing a new approach for high-pressure nanotechnology and rheology studies, we draw fundamental conclusions about the origin of the anomalous compressibility of nanocrystals. PMID:23575684

  11. Coherent diffraction imaging of nanoscale strain evolution in a single crystal under high pressure.

    PubMed

    Yang, Wenge; Huang, Xiaojing; Harder, Ross; Clark, Jesse N; Robinson, Ian K; Mao, Ho-kwang

    2013-01-01

    The evolution of morphology and internal strain under high pressure fundamentally alters the physical property, structural stability, phase transition and deformation mechanism of materials. Until now, only averaged strain distributions have been studied. Bragg coherent X-ray diffraction imaging is highly sensitive to the internal strain distribution of individual crystals but requires coherent illumination, which can be compromised by the complex high-pressure sample environment. Here we report the successful de-convolution of these effects with the recently developed mutual coherent function method to reveal the three-dimensional strain distribution inside a 400 nm gold single crystal during compression within a diamond-anvil cell. The three-dimensional morphology and evolution of the strain under pressures up to 6.4 GPa were obtained with better than 30 nm spatial resolution. In addition to providing a new approach for high-pressure nanotechnology and rheology studies, we draw fundamental conclusions about the origin of the anomalous compressibility of nanocrystals.

  12. Blind compressed sensing image reconstruction based on alternating direction method

    NASA Astrophysics Data System (ADS)

    Liu, Qinan; Guo, Shuxu

    2018-04-01

    In order to solve the problem of how to reconstruct the original image when the sparse basis is unknown, this paper proposes an image reconstruction method based on a blind compressed sensing model. In this model, the image signal is regarded as the product of a sparse coefficient matrix and a dictionary matrix. Based on existing blind compressed sensing theory, the optimal solution is found by an alternating minimization method. The proposed method addresses the difficulty of representing the sparse basis in compressed sensing, suppresses noise, and improves the quality of the reconstructed image. This method ensures that the blind compressed sensing problem has a unique solution and can recover the original image signal from a complex environment with stronger self-adaptability. The experimental results show that the proposed image reconstruction algorithm based on blind compressed sensing can recover high-quality image signals under under-sampling conditions.
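
    A minimal sketch of the alternating-minimization idea this abstract describes: jointly estimate a dictionary D and sparse codes S from compressed measurements Y = Phi D S. The update rules here (one ISTA pass for S, a least-squares update for D) are illustrative simplifications, not the authors' exact algorithm.

        import numpy as np

        def blind_cs_altmin(Y, Phi, n_atoms, n_iter=50, lam=0.05):
            rng = np.random.default_rng(0)
            D = rng.standard_normal((Phi.shape[1], n_atoms))
            D /= np.linalg.norm(D, axis=0)
            S = np.zeros((n_atoms, Y.shape[1]))
            for _ in range(n_iter):
                # Sparse-code step: one ISTA pass on S with D fixed.
                M = Phi @ D
                step = 1.0 / np.linalg.norm(M, 2) ** 2
                Z = S - step * M.T @ (M @ S - Y)
                S = np.sign(Z) * np.maximum(np.abs(Z) - lam * step, 0.0)
                # Dictionary step: least squares on D with S fixed, then renormalize.
                D = np.linalg.pinv(Phi) @ Y @ np.linalg.pinv(S)
                D /= np.linalg.norm(D, axis=0) + 1e-12
            return D, S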

  13. A complete passive blind image copy-move forensics scheme based on compound statistics features.

    PubMed

    Peng, Fei; Nie, Yun-ying; Long, Min

    2011-10-10

    Since most sensor pattern noise based image copy-move forensics methods require a known reference sensor pattern noise, they generally result in non-blind passive forensics, which significantly confines the application circumstances. In view of this, a novel passive-blind image copy-move forensics scheme is proposed in this paper. Firstly, a color image is transformed into a grayscale one, and a wavelet transform based de-noising filter is used to extract the sensor pattern noise; then the variance of the pattern noise, the signal-to-noise ratio between the de-noised image and the pattern noise, the information entropy, and the average energy gradient of the original grayscale image are chosen as features, and non-overlapping sliding window operations are applied to divide the images into sub-blocks. Finally, the tampered areas are detected by analyzing the correlation of the features between the sub-blocks and the whole image. Experimental results and analysis show that the proposed scheme is completely passive-blind, has a good detection rate, and is robust against JPEG compression, noise, rotation, scaling and blurring. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  14. Image quality of mixed convolution kernel in thoracic computed tomography.

    PubMed

    Neubauer, Jakob; Spira, Eva Maria; Strube, Juliane; Langer, Mathias; Voss, Christian; Kotter, Elmar

    2016-11-01

    The mixed convolution kernel alters its properties locally according to the depicted organ structure, especially for the lung. Therefore, we compared the image quality of the mixed convolution kernel to standard soft and hard kernel reconstructions for different organ structures in thoracic computed tomography (CT) images. Our Ethics Committee approved this prospective study. In total, 31 patients who underwent contrast-enhanced thoracic CT studies were included after informed consent. Axial reconstructions were performed with hard, soft, and mixed convolution kernels. Three independent and blinded observers rated the image quality according to the European Guidelines for Quality Criteria of Thoracic CT for 13 organ structures. The observers rated the depiction of the structures in all reconstructions on a 5-point Likert scale. Statistical analysis was performed with the Friedman test and post hoc analysis with the Wilcoxon rank-sum test. Compared to the soft convolution kernel, the mixed convolution kernel was rated with a higher image quality for lung parenchyma, segmental bronchi, and the border between the pleura and the thoracic wall (P < 0.03). Compared to the hard convolution kernel, the mixed convolution kernel was rated with a higher image quality for the aorta, anterior mediastinal structures, paratracheal soft tissue, hilar lymph nodes, esophagus, pleuromediastinal border, large and medium sized pulmonary vessels, and abdomen (P < 0.004), but a lower image quality for the trachea, segmental bronchi, lung parenchyma, and skeleton (P < 0.001). The mixed convolution kernel cannot fully substitute the standard CT reconstructions. Hard and soft convolution kernel reconstructions still seem to be mandatory for thoracic CT.

  15. The convolutional differentiator method for numerical modelling of acoustic and elastic wavefields

    NASA Astrophysics Data System (ADS)

    Zhang, Zhong-Jie; Teng, Ji-Wen; Yang, Ding-Hui

    1996-02-01

    Based on forward and inverse Fourier transforms, the authors discuss the design of an ordinary differentiator and apply it to the simulation of acoustic and elastic wavefields in isotropic media. To effectively suppress the Gibbs effects caused by truncation, a Hanning window is introduced. Model computations show that the convolutional differentiator method offers speed, low memory requirements, and high precision, making it a promising method for numerical simulation.
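
    The convolutional differentiator can be sketched as differentiation in the Fourier domain with a Hanning taper that suppresses the Gibbs oscillations caused by truncation; the taper placement and the test grid below are illustrative assumptions.

        import numpy as np

        def spectral_derivative(f, dx):
            n = len(f)
            k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)    # angular wavenumbers
            taper = np.fft.ifftshift(np.hanning(n))      # Hanning window centred on k = 0
            F = np.fft.fft(f)
            return np.real(np.fft.ifft(1j * k * taper * F))

        # Example: the derivative of sin(x) should approximate cos(x).
        x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
        df = spectral_derivative(np.sin(x), x[1] - x[0])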

  16. Non-stationary blind deconvolution of medical ultrasound scans

    NASA Astrophysics Data System (ADS)

    Michailovich, Oleg V.

    2017-03-01

    In linear approximation, the formation of a radio-frequency (RF) ultrasound image can be described based on a standard convolution model in which the image is obtained as a result of convolution of the point spread function (PSF) of the ultrasound scanner in use with a tissue reflectivity function (TRF). Due to the band-limited nature of the PSF, the RF images can only be acquired at a finite spatial resolution, which is often insufficient for proper representation of the diagnostic information contained in the TRF. One particular way to alleviate this problem is by means of image deconvolution, which is usually performed in a "blind" mode, when both PSF and TRF are estimated at the same time. Despite its proven effectiveness, blind deconvolution (BD) still suffers from a number of drawbacks, chief among which stems from its dependence on a stationary convolution model, which is incapable of accounting for the spatial variability of the PSF. As a result, virtually all existing BD algorithms are applied to localized segments of RF images. In this work, we introduce a novel method for non-stationary BD, which is capable of recovering the TRF concurrently with the spatially variable PSF. Particularly, our approach is based on semigroup theory which allows one to describe the effect of such a PSF in terms of the action of a properly defined linear semigroup. The approach leads to a tractable optimization problem, which can be solved using standard numerical methods. The effectiveness of the proposed solution is supported by experiments with in vivo ultrasound data.

  17. Space communication system for compressed data with a concatenated Reed-Solomon-Viterbi coding channel

    NASA Technical Reports Server (NTRS)

    Rice, R. F.; Hilbert, E. E. (Inventor)

    1976-01-01

    A space communication system incorporating a concatenated Reed Solomon Viterbi coding channel is discussed for transmitting compressed and uncompressed data from a spacecraft to a data processing center on Earth. Imaging (and other) data are first compressed into source blocks which are then coded by a Reed Solomon coder and interleaver, followed by a convolutional encoder. The received data is first decoded by a Viterbi decoder, followed by a Reed Solomon decoder and deinterleaver. The output of the latter is then decompressed, based on the compression criteria used in compressing the data in the spacecraft. The decompressed data is processed to reconstruct an approximation of the original data-producing condition or images.
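
    The inner channel code of such a concatenated chain can be sketched as a rate-1/2, constraint-length-7 convolutional encoder (the NASA-standard generators 171/133 octal); the Reed-Solomon outer code, interleaver, and Viterbi decoder are omitted here for brevity.

        G1, G2 = 0o171, 0o133        # generator polynomials, constraint length 7

        def conv_encode(bits):
            state = 0
            out = []
            for b in bits:
                state = ((state << 1) | b) & 0x7F           # 7-bit shift register
                out.append(bin(state & G1).count("1") % 2)  # parity over taps of G1
                out.append(bin(state & G2).count("1") % 2)  # parity over taps of G2
            return out

        print(conv_encode([1, 0, 1, 1]))   # two coded bits per input bit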

  18. Combining morphometric features and convolutional networks fusion for glaucoma diagnosis

    NASA Astrophysics Data System (ADS)

    Perdomo, Oscar; Arevalo, John; González, Fabio A.

    2017-11-01

    Glaucoma is an eye condition that leads to loss of vision and blindness. The ophthalmoscopy exam evaluates the shape, color and proportion between the optic disc and physiologic cup, but the lack of agreement among experts is still the main diagnosis problem. The application of deep convolutional neural networks combined with automatic extraction of features such as the cup-to-disc distance in the four quadrants, the perimeter, area, eccentricity, the major radius and the minor radius of the optic disc and cup, in addition to all the ratios among the previous parameters, may help achieve a better automatic grading of glaucoma. This paper presents a strategy to merge morphological features and deep convolutional neural networks as a novel methodology to support glaucoma diagnosis in eye fundus images.

  19. Blind Compressed Image Watermarking for Noisy Communication Channels

    DTIC Science & Technology

    2015-10-26

    Lenna test image [11] for our simulations, and gradient projection for sparse reconstruction (GPSR) [12] to solve the convex optimization problem...E. Candes, J. Romberg, and T. Tao, “Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information,” IEEE...Images - Requirements and Guidelines,” ITU-T Recommendation T.81, 1992. [6] M. Gkizeli, D. Pados, and M. Medley, “Optimal signature design for

  20. Image inpainting and super-resolution using non-local recursive deep convolutional network with skip connections

    NASA Astrophysics Data System (ADS)

    Liu, Miaofeng

    2017-07-01

    In recent years, deep convolutional neural networks have come into use for image inpainting and super-resolution in many fields. Unlike most former methods, which require the local information for corrupted pixels to be known beforehand, we propose a 20-depth fully convolutional network that learns, from a dataset of damaged/ground-truth subimage pairs, an end-to-end mapping realizing non-local blind inpainting and super-resolution. Because existing approaches perform poorly on images with huge corruptions or on inpainting of low-resolution images, we also share parameters in local areas of layers to achieve spatial recursion and enlarge the receptive field. To avoid the difficulty of training this deep neural network, skip connections between symmetric convolutional layers are designed. Experimental results show that the proposed method outperforms state-of-the-art methods for diverse corruption and low-resolution conditions, and it works excellently when realizing super-resolution and image inpainting simultaneously.
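
    The skip connections between symmetric convolutional layers can be sketched in PyTorch as an encoder/decoder pair whose input is added back to the mirrored decoder output, easing gradient flow; the layer sizes are illustrative, not the paper's 20-layer design.

        import torch
        import torch.nn as nn

        class SkipPair(nn.Module):
            """One symmetric encoder/decoder pair with a skip connection."""
            def __init__(self, ch=64):
                super().__init__()
                self.enc = nn.Conv2d(ch, ch, 3, padding=1)
                self.dec = nn.ConvTranspose2d(ch, ch, 3, padding=1)
                self.act = nn.ReLU()

            def forward(self, x):
                e = self.act(self.enc(x))
                d = self.dec(e)
                return self.act(d + x)     # skip connection from the encoder input

        x = torch.randn(1, 64, 32, 32)
        print(SkipPair()(x).shape)         # torch.Size([1, 64, 32, 32])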

  1. Compression fractures detection on CT

    NASA Astrophysics Data System (ADS)

    Bar, Amir; Wolf, Lior; Bergman Amitai, Orna; Toledano, Eyal; Elnekave, Eldad

    2017-03-01

    The presence of a vertebral compression fracture is highly indicative of osteoporosis and represents the single most robust predictor for development of a second osteoporotic fracture in the spine or elsewhere. Less than one third of vertebral compression fractures are diagnosed clinically. We present an automated method for detecting spine compression fractures in Computed Tomography (CT) scans. The algorithm is composed of three processes. First, the spinal column is segmented and sagittal patches are extracted. The patches are then binary classified using a Convolutional Neural Network (CNN). Finally a Recurrent Neural Network (RNN) is utilized to predict whether a vertebral fracture is present in the series of patches.

  2. Auditory spatial representations of the world are compressed in blind humans.

    PubMed

    Kolarik, Andrew J; Pardhan, Shahina; Cirstea, Silvia; Moore, Brian C J

    2017-02-01

    Compared to sighted listeners, blind listeners often display enhanced auditory spatial abilities such as localization in azimuth. However, less is known about whether blind humans can accurately judge distance in extrapersonal space using auditory cues alone. Using virtualization techniques, we show that auditory spatial representations of the world beyond the peripersonal space of blind listeners are compressed compared to those for normally sighted controls. Blind participants overestimated the distance to nearby sources and underestimated the distance to remote sound sources, in both reverberant and anechoic environments, and for speech, music, and noise signals. Functions relating judged and actual virtual distance were well fitted by compressive power functions, indicating that the absence of visual information regarding the distance of sound sources may prevent accurate calibration of the distance information provided by auditory signals.
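
    A worked sketch of fitting the compressive power function reported here, judged = k * actual**a, by least squares in log-log coordinates; the distance values below are invented purely for illustration (an exponent a < 1 indicates compression).

        import numpy as np

        actual = np.array([1.0, 2.0, 4.0, 8.0, 16.0])   # virtual source distance (m)
        judged = np.array([1.4, 2.3, 3.6, 5.5, 8.1])    # hypothetical judgments (m)

        # log(judged) = log(k) + a * log(actual), so fit a line in log-log space.
        a, log_k = np.polyfit(np.log(actual), np.log(judged), 1)
        k = np.exp(log_k)
        print(f"judged ~ {k:.2f} * actual**{a:.2f}")    # here a ~ 0.6, i.e. compressive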

  3. Advanced imaging communication system

    NASA Technical Reports Server (NTRS)

    Hilbert, E. E.; Rice, R. F.

    1977-01-01

    Key elements of system are imaging and nonimaging sensors, data compressor/decompressor, interleaved Reed-Solomon block coder, convolutional-encoded/Viterbi-decoded telemetry channel, and Reed-Solomon decoding. Data compression provides efficient representation of sensor data, and channel coding improves reliability of data transmission.

  4. Semi-blind sparse image reconstruction with application to MRFM.

    PubMed

    Park, Se Un; Dobigeon, Nicolas; Hero, Alfred O

    2012-09-01

    We propose a solution to the image deconvolution problem where the convolution kernel or point spread function (PSF) is assumed to be only partially known. Small perturbations generated from the model are exploited to produce a few principal components explaining the PSF uncertainty in a high-dimensional space. Unlike recent developments on blind deconvolution of natural images, we assume the image is sparse in the pixel basis, a natural sparsity arising in magnetic resonance force microscopy (MRFM). Our approach adopts a Bayesian Metropolis-within-Gibbs sampling framework. The performance of our Bayesian semi-blind algorithm for sparse images is superior to previously proposed semi-blind algorithms such as the alternating minimization algorithm and blind algorithms developed for natural images. We illustrate our myopic algorithm on real MRFM tobacco virus data.

  5. The effects of kinesio taping on the color intensity of superficial skin hematomas: A pilot study.

    PubMed

    Vercelli, Stefano; Colombo, Claudio; Tolosa, Francesca; Moriondo, Andrea; Bravini, Elisabetta; Ferriero, Giorgio; Francesco, Sartorio

    2017-01-01

    To analyze the effects of kinesio taping (KT), applied with three different strains that did or did not induce the formation of skin creases (called convolutions), on the color intensity of post-surgical superficial hematomas. Single-blind paired study. Rehabilitation clinic. A convenience sample of 13 inpatients with post-surgical superficial hematomas. The tape was applied for 24 consecutive hours. Three tails of KT were randomly applied with different degrees of strain: none (SN); light (SL); and full longitudinal stretch (SF). We expected to obtain correct formation of convolutions with SL, some convolutions with SN, and no convolutions with SF. The change in color intensity of hematomas was measured by means of polar coordinates CIE L*a*b* using a validated and standardized digital imaging system. Applying KT to hematomas did not significantly change the color intensity in the central area under the tape (p > 0.05). There was a significant treatment effect (p < 0.05) under the edges of the tape, independently of the formation of convolutions (p > 0.05). The changes observed along the edges of the tape could be related to the formation of a pressure gradient between the KT and the adjacent area, but were not dependent on the formation of skin convolutions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Fast reversible wavelet image compressor

    NASA Astrophysics Data System (ADS)

    Kim, HyungJun; Li, Ching-Chung

    1996-10-01

    We present a unified image compressor with spline biorthogonal wavelets and dyadic rational filter coefficients which gives high computational speed and excellent compression performance. Convolutions with these filters can be performed by using only arithmetic shifting and addition operations. Wavelet coefficients can be encoded with an arithmetic coder which also uses arithmetic shifting and addition operations. Therefore, from beginning to end, the whole encoding/decoding process can be done within a short period of time. The proposed method naturally extends from lossless compression to the lossy, high-compression range and can be easily adapted to progressive reconstruction.
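
    The shift-and-add idea can be sketched on integer pixel data: with dyadic-rational taps such as (1/2, 1, 1/2), every multiply in the convolution becomes a bit shift. The filter below is a generic illustration, not the paper's spline biorthogonal filters.

        def smooth_shift_add(pixels):
            """Apply taps (1/2, 1, 1/2) using a right shift instead of multiplies."""
            out = []
            for i in range(1, len(pixels) - 1):
                out.append(((pixels[i - 1] + pixels[i + 1]) >> 1) + pixels[i])
            return out

        print(smooth_shift_add([4, 8, 12, 8, 4]))   # [16, 20, 16]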

  7. Coding gains and error rates from the Big Viterbi Decoder

    NASA Technical Reports Server (NTRS)

    Onyszchuk, I. M.

    1991-01-01

    A prototype hardware Big Viterbi Decoder (BVD) was completed for an experiment with the Galileo Spacecraft. Searches for new convolutional codes, studies of Viterbi decoder hardware designs and architectures, mathematical formulations, decompositions of the deBruijn graph into identical and hierarchical subgraphs, and very large scale integration (VLSI) chip design are just a few examples of tasks completed for this project. The BVD bit error rates (BER), measured from hardware and software simulations, are plotted as a function of bit signal-to-noise ratio E_b/N_0 on the additive white Gaussian noise channel. Using the constraint length 15, rate 1/4, experimental convolutional code for the Galileo mission, the BVD gains 1.5 dB over the NASA standard (7,1/2) Maximum Likelihood Convolutional Decoder (MCD) at a BER of 0.005. At this BER, the same gain results when the (255,223) NASA standard Reed-Solomon decoder is used, which yields a word error rate of 2.1 x 10(exp -8) and a BER of 1.4 x 10(exp -9). The (15, 1/6) code to be used by the Cometary Rendezvous Asteroid Flyby (CRAF)/Cassini missions yields 1.7 dB of coding gain. These gains are measured with respect to symbols input to the BVD and increase with decreasing BER. Also, 8-bit input symbol quantization makes the BVD resistant to demodulated signal-level variations. Because the new codes require higher bandwidth than the NASA (7,1/2) code, these gains are offset by about 0.1 dB of expected additional receiver losses. Coding gains of several decibels are possible by compressing all spacecraft data.

  8. Image Quality Assessment of JPEG Compressed Mars Science Laboratory Mastcam Images using Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Kerner, H. R.; Bell, J. F., III; Ben Amor, H.

    2017-12-01

    The Mastcam color imaging system on the Mars Science Laboratory Curiosity rover acquires images within Gale crater for a variety of geologic and atmospheric studies. Images are often JPEG compressed before being downlinked to Earth. While critical for transmitting images on a low-bandwidth connection, this compression can result in image artifacts most noticeable as anomalous brightness or color changes within or near JPEG compression block boundaries. In images with significant high-frequency detail (e.g., in regions showing fine layering or lamination in sedimentary rocks), the image might need to be re-transmitted losslessly to enable accurate scientific interpretation of the data. The process of identifying which images have been adversely affected by compression artifacts is performed manually by the Mastcam science team, costing significant expert human time. To streamline the tedious process of identifying which images might need to be re-transmitted, we present an input-efficient neural network solution for predicting the perceived quality of a compressed Mastcam image. Most neural network solutions require large amounts of hand-labeled training data for the model to learn the target mapping between input (e.g. distorted images) and output (e.g. quality assessment). We propose an automatic labeling method using joint entropy between a compressed and uncompressed image to avoid the need for domain experts to label thousands of training examples by hand. We use automatically labeled data to train a convolutional neural network to estimate the probability that a Mastcam user would find the quality of a given compressed image acceptable for science analysis. We tested our model on a variety of Mastcam images and found that the proposed method correlates well with image quality perception by science team members. When assisted by our proposed method, we estimate that a Mastcam investigator could reduce the time spent reviewing images by a minimum of 70%.
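
    The automatic labeling idea, scoring a compressed image against its uncompressed original through the joint entropy of their co-occurring pixel values, can be sketched as follows; the bin count and the noise stand-in for compression artifacts are assumptions.

        import numpy as np

        def joint_entropy(img_a, img_b, bins=32):
            hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
            p = hist / hist.sum()
            p = p[p > 0]                      # drop empty cells before taking the log
            return -np.sum(p * np.log2(p))    # joint entropy in bits

        rng = np.random.default_rng(0)
        original = rng.integers(0, 256, (64, 64)).astype(float)
        compressed = original + rng.normal(0, 8, original.shape)  # artifact stand-in
        print(joint_entropy(original, compressed))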

  9. Accelerated Cartesian expansion (ACE) based framework for the rapid evaluation of diffusion, lossy wave, and Klein-Gordon potentials

    DOE PAGES

    Baczewski, Andrew David; Vikram, Melapudi; Shanker, Balasubramaniam; ...

    2010-08-27

    Diffusion, lossy wave, and Klein–Gordon equations find numerous applications in practical problems across a range of diverse disciplines. The temporal dependence of all three Green’s functions is characterized by an infinite tail. This implies that the cost complexity of the spatio-temporal convolutions, associated with evaluating the potentials, scales as O(N_s^2 N_t^2), where N_s and N_t are the number of spatial and temporal degrees of freedom, respectively. In this paper, we discuss two new methods to rapidly evaluate these spatio-temporal convolutions by exploiting their block-Toeplitz nature within the framework of accelerated Cartesian expansions (ACE). The first scheme identifies a convolution relation in time amongst ACE harmonics and the fast Fourier transform (FFT) is used for efficient evaluation of these convolutions. The second method exploits the rank deficiency of the ACE translation operators with respect to time and develops a recursive numerical compression scheme for the efficient representation and evaluation of temporal convolutions. It is shown that the cost of both methods scales as O(N_s N_t log^2 N_t). Furthermore, several numerical results are presented for the diffusion equation to validate the accuracy and efficacy of the fast algorithms developed here.
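
    The core speed-up can be illustrated with a causal temporal convolution sum_k g[k] q[t-k], evaluated for all t at once with zero-padded FFTs, which turns an O(N_t^2) sum into O(N_t log N_t); the slowly decaying kernel below mimics a Green's-function tail.

        import numpy as np

        def temporal_convolution_fft(g, q):
            n = len(g) + len(q) - 1
            nfft = 1 << (n - 1).bit_length()             # next power of two >= n
            G = np.fft.rfft(g, nfft)
            Q = np.fft.rfft(q, nfft)
            return np.fft.irfft(G * Q, nfft)[:len(q)]    # causal part only

        g = np.exp(-0.1 * np.arange(100))                # infinite-tail kernel, truncated
        q = np.random.default_rng(0).standard_normal(100)
        assert np.allclose(temporal_convolution_fft(g, q), np.convolve(g, q)[:100])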

  10. Deep Learning at Chest Radiography: Automated Classification of Pulmonary Tuberculosis by Using Convolutional Neural Networks.

    PubMed

    Lakhani, Paras; Sundaram, Baskaran

    2017-08-01

    Purpose To evaluate the efficacy of deep convolutional neural networks (DCNNs) for detecting tuberculosis (TB) on chest radiographs. Materials and Methods Four deidentified HIPAA-compliant datasets, exempted from review by the institutional review board and consisting of 1007 posteroanterior chest radiographs, were used in this study. The datasets were split into training (68.0%), validation (17.1%), and test (14.9%) sets. Two different DCNNs, AlexNet and GoogLeNet, were used to classify the images as having manifestations of pulmonary TB or as healthy. Both untrained networks and networks pretrained on ImageNet were used, along with augmentation by multiple preprocessing techniques. Ensembles were performed on the best-performing algorithms. For cases where the classifiers were in disagreement, an independent board-certified cardiothoracic radiologist blindly interpreted the images to evaluate a potential radiologist-augmented workflow. Receiver operating characteristic curves and areas under the curve (AUCs) were used to assess model performance, with the DeLong method used for statistical comparison of receiver operating characteristic curves. Results The best-performing classifier, an ensemble of the AlexNet and GoogLeNet DCNNs, had an AUC of 0.99. The AUCs of the pretrained models were greater than those of the untrained models (P < .001). Augmenting the dataset further increased accuracy (P values for AlexNet and GoogLeNet were .03 and .02, respectively). The DCNNs disagreed on 13 of the 150 test cases, which were blindly reviewed by a cardiothoracic radiologist, who correctly interpreted all 13 cases (100%). This radiologist-augmented approach resulted in a sensitivity of 97.3% and a specificity of 100%. Conclusion Deep learning with DCNNs can accurately classify TB at chest radiography with an AUC of 0.99. A radiologist-augmented approach for cases where there was disagreement among the classifiers further improved accuracy. © RSNA, 2017.

  11. Effects of Amplitude Compression on Relative Auditory Distance Perception

    DTIC Science & Technology

    2013-10-01

    FFT analyses are shown in Figure 4. The use of convolution of the stimuli with the binaural impulse responses recorded from KEMAR resulted in the...human sound localization (pp. 36-200). Cambridge, MA: The MIT Press. Carmichel, E. L., Harris, F. P., & Story, B. H. (2007). Effects of binaural

  12. On the growth and form of cortical convolutions

    NASA Astrophysics Data System (ADS)

    Tallinen, Tuomas; Chung, Jun Young; Rousseau, François; Girard, Nadine; Lefèvre, Julien; Mahadevan, L.

    2016-06-01

    The rapid growth of the human cortex during development is accompanied by the folding of the brain into a highly convoluted structure. Recent studies have focused on the genetic and cellular regulation of cortical growth, but understanding the formation of the gyral and sulcal convolutions also requires consideration of the geometry and physical shaping of the growing brain. To study this, we use magnetic resonance images to build a 3D-printed layered gel mimic of the developing smooth fetal brain; when immersed in a solvent, the outer layer swells relative to the core, mimicking cortical growth. This relative growth puts the outer layer into mechanical compression and leads to sulci and gyri similar to those in fetal brains. Starting with the same initial geometry, we also build numerical simulations of the brain modelled as a soft tissue with a growing cortex, and show that this also produces the characteristic patterns of convolutions over a realistic developmental course. All together, our results show that although many molecular determinants control the tangential expansion of the cortex, the size, shape, placement and orientation of the folds arise through iterations and variations of an elementary mechanical instability modulated by early fetal brain geometry.

  13. A Double Dwell High Sensitivity GPS Acquisition Scheme Using Binarized Convolution Neural Network

    PubMed Central

    Wang, Zhen; Zhuang, Yuan; Yang, Jun; Zhang, Hengfeng; Dong, Wei; Wang, Min; Hua, Luchi; Liu, Bo; Shi, Longxing

    2018-01-01

    Conventional GPS acquisition methods, such as Max selection and threshold crossing (MAX/TC), estimate GPS code/Doppler by its correlation peak. Different from MAX/TC, a multi-layer binarized convolution neural network (BCNN) is proposed to recognize the GPS acquisition correlation envelope in this article. The proposed method is a double dwell acquisition in which a short integration is adopted in the first dwell and a long integration is applied in the second one. To reduce the search space for parameters, BCNN detects the possible envelope which contains the auto-correlation peak in the first dwell to compress the initial search space to 1/1023. Although there is a long integration in the second dwell, the acquisition computation overhead is still low due to the compressed search space. Comprehensively, the total computation overhead of the proposed method is only 1/5 of conventional ones. Experiments show that the proposed double dwell/correlation envelope identification (DD/CEI) neural network achieves 2 dB improvement when compared with the MAX/TC under the same specification. PMID:29747373

  14. Communications and information research: Improved space link performance via concatenated forward error correction coding

    NASA Technical Reports Server (NTRS)

    Rao, T. R. N.; Seetharaman, G.; Feng, G. L.

    1996-01-01

    With the development of new advanced instruments for remote sensing applications, sensor data will be generated at a rate that not only requires increased onboard processing and storage capability, but also imposes demands on the space-to-ground communication link and the ground data management-communication system. Data compression and error control codes provide viable means to alleviate these demands. Two types of data compression have been studied by many researchers in the area of information theory: a lossless technique that guarantees full reconstruction of the data, and a lossy technique which generally gives a higher data compaction ratio but incurs some distortion in the reconstructed data. To satisfy the many science disciplines which NASA supports, lossless data compression becomes a primary focus for the technology development. While transmitting the data obtained by any lossless data compression, it is very important to use an error-control code. For a long time, convolutional codes have been widely used in satellite telecommunications. To transmit the data obtained by the Rice algorithm more efficiently, it is necessary to compute the a posteriori probability (APP) for each decoded bit. A relevant algorithm for this purpose has been proposed which minimizes the bit error probability in decoding linear block and convolutional codes and provides the APP for each decoded bit. However, recent results on iterative decoding of 'Turbo codes' turn conventional wisdom on its head and suggest fundamentally new techniques. During the past several months of this research, the following approaches have been developed: (1) a new lossless data compression algorithm, which is much better than the extended Rice algorithm for various types of sensor data; (2) a new approach to determine the generalized Hamming weights of the algebraic-geometric codes defined by a large class of curves in high-dimensional spaces; (3) some efficient improved geometric Goppa codes for disk memory systems and high-speed mass memory systems; and (4) a tree-based approach for data compression using dynamic programming.

  15. Ground-based Spectroscopy Of Extrasolar Planets

    NASA Astrophysics Data System (ADS)

    Waldmann, Ingo

    2011-09-01

    In recent years, spectroscopy of exoplanetary atmospheres has proven to be very successful. Where discoveries were made in the past using space-borne observatories such as Hubble and Spitzer, the observational focus continues to shift to ground-based facilities. This is especially true since the end of the Spitzer cold phase, which deprived us of a space-borne eye in the infrared. With projects like the E-ELT and TMT on the horizon, this trend will only intensify. So far, several observational strategies have been employed for ground-based spectroscopy. All of these attempt to solve the problems caused by high systematic and telluric noise, and each has distinct advantages and disadvantages. Using time-resolved spectroscopy, we obtain an individual lightcurve per spectral channel of the instrument. The benefits of such an approach are multifold, since it allows us to utilize a broad spectrum of statistical methods. Using new IRTF data in the K and L-bands, we will illustrate the intricacies of two spectral retrieval approaches: 1) the self-filtering and signal amplification achieved by consecutive convolutions in the frequency domain, and 2) the blind de-convolution of signal from noise using non-parametric machine learning algorithms. These novel techniques allow us to present new results on the hot Jupiter HD189733b, showing strong methane emissions in both the K and L-bands at spectral resolutions of R ~ 170. Using data from the IRTF/SpeX instrument, we will discuss the implications and possible theoretical models of strong methane emissions on this planet.

  16. Nonparametric Representations for Integrated Inference, Control, and Sensing

    DTIC Science & Technology

    2015-10-01

    Learning (ICML), 2013. [20] Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, and Trevor Darrell. DeCAF: A deep ...unlimited. Multi-layer feature learning “SuperVision” Convolutional Neural Network (CNN) ImageNet Classification with Deep Convolutional Neural Networks...to develop a new framework for autonomous operations that will extend the state of the art in distributed learning and modeling from data, and

  17. A systematic FPGA acceleration design for applications based on convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Dong, Hao; Jiang, Li; Li, Tianjian; Liang, Xiaoyao

    2018-04-01

    Most FPGA accelerators for convolutional neural networks are designed to optimize the inner accelerator and neglect the optimization of the data path between the inner accelerator and the outer system. This can lead to poor performance in applications like real-time video object detection. We propose a brand new systematic FPGA acceleration design to solve this problem. This design takes the data path between the inner accelerator and the outer system into consideration and optimizes it using techniques like hardware format transformation and frame compression. It also applies fixed-point arithmetic and a new pipeline technique to optimize the inner accelerator. All this makes the final system's performance very good, reaching about 10 times the performance of the original system.

  18. Evolutionary pruning of transfer learned deep convolutional neural network for breast cancer diagnosis in digital breast tomosynthesis.

    PubMed

    Samala, Ravi K; Chan, Heang-Ping; Hadjiiski, Lubomir M; Helvie, Mark A; Richter, Caleb; Cha, Kenny

    2018-05-01

    Deep learning models are highly parameterized, resulting in difficulty in inference and transfer learning for image recognition tasks. In this work, we propose a layered pathway evolution method to compress a deep convolutional neural network (DCNN) for classification of masses in digital breast tomosynthesis (DBT). The objective is to prune the number of tunable parameters while preserving the classification accuracy. In the first-stage transfer learning, 19 632 augmented regions-of-interest (ROIs) from 2454 mass lesions on mammograms were used to train a DCNN pre-trained on ImageNet. In the second-stage transfer learning, the DCNN was used as a feature extractor followed by feature selection and random forest classification. The pathway evolution was performed using a genetic algorithm in an iterative approach with tournament selection driven by count-preserving crossover and mutation. The second stage was trained with 9120 DBT ROIs from 228 mass lesions using leave-one-case-out cross-validation. The DCNN was reduced by 87% in the number of neurons, 34% in the number of parameters, and 95% in the number of multiply-and-add operations required in the convolutional layers. The test AUC on 89 mass lesions from 94 independent DBT cases was 0.88 before pruning and 0.90 after pruning, and the difference was not statistically significant (p > 0.05). The proposed DCNN compression approach can reduce the number of required operations by 95% while maintaining the classification performance. The approach can be extended to other deep neural networks and imaging tasks where transfer learning is appropriate.
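
    The pruning search can be sketched as a genetic algorithm over binary keep/drop masks with tournament selection and count-preserving crossover and mutation, as the record describes; the fitness function below is a stand-in for the pruned network's classification performance.

        import numpy as np

        rng = np.random.default_rng(0)
        N_NODES, POP, KEEP = 64, 20, 16    # nodes in a layer, population size, nodes kept

        def random_mask():
            mask = np.zeros(N_NODES, dtype=bool)
            mask[rng.choice(N_NODES, KEEP, replace=False)] = True
            return mask

        def fitness(mask):
            # Stand-in score; the paper evaluates the pruned network's accuracy here.
            return -np.sum(np.abs(np.flatnonzero(mask) - N_NODES // 2))

        def tournament(pop, scores, k=3):
            idx = rng.choice(len(pop), k, replace=False)
            return pop[idx[np.argmax(scores[idx])]]

        pop = np.array([random_mask() for _ in range(POP)])
        for generation in range(30):
            scores = np.array([fitness(m) for m in pop])
            children = []
            for _ in range(POP):
                a, b = tournament(pop, scores), tournament(pop, scores)
                union = np.flatnonzero(a | b)             # count-preserving crossover:
                child = np.zeros(N_NODES, dtype=bool)     # sample exactly KEEP nodes
                child[rng.choice(union, KEEP, replace=False)] = True
                if rng.random() < 0.2:                    # count-preserving mutation:
                    on, off = np.flatnonzero(child), np.flatnonzero(~child)
                    child[rng.choice(on)] = False         # swap one kept node
                    child[rng.choice(off)] = True         # for one dropped node
                children.append(child)
            pop = np.array(children)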

  19. Evolutionary pruning of transfer learned deep convolutional neural network for breast cancer diagnosis in digital breast tomosynthesis

    NASA Astrophysics Data System (ADS)

    Samala, Ravi K.; Chan, Heang-Ping; Hadjiiski, Lubomir M.; Helvie, Mark A.; Richter, Caleb; Cha, Kenny

    2018-05-01

    Deep learning models are highly parameterized, resulting in difficulty in inference and transfer learning for image recognition tasks. In this work, we propose a layered pathway evolution method to compress a deep convolutional neural network (DCNN) for classification of masses in digital breast tomosynthesis (DBT). The objective is to prune the number of tunable parameters while preserving the classification accuracy. In the first-stage transfer learning, 19 632 augmented regions-of-interest (ROIs) from 2454 mass lesions on mammograms were used to train a DCNN pre-trained on ImageNet. In the second-stage transfer learning, the DCNN was used as a feature extractor followed by feature selection and random forest classification. The pathway evolution was performed using a genetic algorithm in an iterative approach with tournament selection driven by count-preserving crossover and mutation. The second stage was trained with 9120 DBT ROIs from 228 mass lesions using leave-one-case-out cross-validation. The DCNN was reduced by 87% in the number of neurons, 34% in the number of parameters, and 95% in the number of multiply-and-add operations required in the convolutional layers. The test AUC on 89 mass lesions from 94 independent DBT cases was 0.88 before pruning and 0.90 after pruning, and the difference was not statistically significant (p > 0.05). The proposed DCNN compression approach can reduce the number of required operations by 95% while maintaining the classification performance. The approach can be extended to other deep neural networks and imaging tasks where transfer learning is appropriate.

  20. Convolutional neural network-based classification system design with compressed wireless sensor network images.

    PubMed

    Ahn, Jungmo; Park, JaeYeon; Park, Donghwan; Paek, Jeongyeup; Ko, JeongGil

    2018-01-01

    With the introduction of various advanced deep learning algorithms, initiatives for image classification systems have transitioned from traditional machine learning algorithms (e.g., SVM) to Convolutional Neural Networks (CNNs) using deep learning software tools. A prerequisite in applying CNN to real world applications is a system that collects meaningful and useful data. For such purposes, Wireless Image Sensor Networks (WISNs), that are capable of monitoring natural environment phenomena using tiny and low-power cameras on resource-limited embedded devices, can be considered as an effective means of data collection. However, with limited battery resources, sending high-resolution raw images to the backend server is a burdensome task that has direct impact on network lifetime. To address this problem, we propose an energy-efficient pre- and post-processing mechanism using image resizing and color quantization that can significantly reduce the amount of data transferred while maintaining the classification accuracy in the CNN at the backend server. We show that, if well designed, an image in its highly compressed form can be well-classified with a CNN model trained in advance using adequately compressed data. Our evaluation using a real image dataset shows that an embedded device can reduce the amount of transmitted data by ∼71% while maintaining a classification accuracy of ∼98%. Under the same conditions, this process naturally reduces energy consumption by ∼71% compared to a WISN that sends the original uncompressed images.
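
    The pre-processing described, image resizing plus color quantization before transmission, can be sketched with Pillow as below; the target size and palette size are illustrative assumptions rather than the paper's settings.

        from PIL import Image

        def compress_for_transmission(path, size=(64, 64), n_colors=16):
            img = Image.open(path).convert("RGB")
            img = img.resize(size, Image.LANCZOS)    # image resizing
            img = img.quantize(colors=n_colors)      # color quantization to a palette
            return img                               # far fewer bytes to transmit

        # compress_for_transmission("leaf.jpg").save("leaf_small.png")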

  1. Frame prediction using recurrent convolutional encoder with residual learning

    NASA Astrophysics Data System (ADS)

    Yue, Boxuan; Liang, Jun

    2018-05-01

    Predicting the frames of a video is difficult but urgently needed in autonomous driving. Conventional methods can only predict abstract trends in the region of interest. The boom of deep learning makes frame prediction possible. In this paper, we propose a novel recurrent convolutional encoder and deconvolutional decoder structure to predict frames. We introduce residual learning in the convolutional encoder structure to address gradient issues. Residual learning transforms the gradient back-propagation into an identity mapping, which preserves the whole gradient information and overcomes the gradient issues in Recurrent Neural Networks (RNNs) and Convolutional Neural Networks (CNNs). Besides, compared with the branches in CNNs and the gated structures in RNNs, residual learning can reduce training time significantly. In the experiments, we use the UCF101 dataset to train our networks, and the predictions are compared with some state-of-the-art methods. The results show that our networks can predict frames quickly and efficiently. Furthermore, our networks are applied to driving video to verify their practicality.

  2. A randomised control trial of prompt and feedback devices and their impact on quality of chest compressions--a simulation study.

    PubMed

    Yeung, Joyce; Davies, Robin; Gao, Fang; Perkins, Gavin D

    2014-04-01

    This study aims to compare the effect of three CPR prompt and feedback devices on the quality of chest compressions amongst healthcare providers. A single-blind, randomised controlled trial compared a pressure sensor/metronome device (CPREzy), an accelerometer device (Phillips Q-CPR) and a simple metronome on the quality of chest compressions on a manikin by trained rescuers. The primary outcome was compression depth. Secondary outcomes were compression rate, proportion of chest compressions with inadequate depth, incomplete release and user satisfaction. The pressure sensor device improved compression depth (37.24-43.64 mm, p=0.02), the accelerometer device decreased chest compression depth (37.38-33.19 mm, p=0.04), whilst the metronome had no effect (39.88 mm vs. 40.64 mm, p=0.802). Compression rate fell with all devices (pressure sensor device 114.68-98.84 min(-1), p=0.001; accelerometer 112.04-102.92 min(-1), p=0.072; and metronome 108.24 min(-1) vs. 99.36 min(-1), p=0.009). The pressure sensor feedback device reduced the proportion of compressions with inadequate depth (0.52 vs. 0.24, p=0.013), whilst the accelerometer device and metronome did not have a statistically significant effect. Incomplete release of compressions was common but unaffected by the CPR feedback devices. Users preferred the accelerometer and metronome devices over the pressure sensor device. A post hoc study showed that de-activating the voice prompt on the accelerometer device prevented the deterioration in compression quality seen in the main study. CPR feedback devices vary in their ability to improve performance. In this study the pressure sensor device improved compression depth, whilst the accelerometer device reduced it and the metronome had no effect. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  3. A Geophysical Inversion Model Enhancement Technique Based on the Blind Deconvolution

    NASA Astrophysics Data System (ADS)

    Zuo, B.; Hu, X.; Li, H.

    2011-12-01

    A model-enhancement technique is proposed to enhance the edges and details of geophysical inversion models without introducing any additional information. Firstly, the theoretical correctness of the proposed geophysical inversion model-enhancement technique is discussed. A method approximating the inversion MRM (model resolution matrix) by convolution with the PSF (Point Spread Function) is designed to demonstrate the correctness of the deconvolution enhancement method. Then, a total-variation regularized blind deconvolution algorithm for geophysical inversion model enhancement is proposed. In previous research, Oldenburg et al. demonstrate the connection between the PSF and the geophysical inverse solution. Alumbaugh et al. propose that more information could be provided by the PSF if we return to the idea of it behaving as an averaging or low-pass filter. We consider the PSF as a low-pass filter to enhance the inversion model on the basis of the PSF convolution approximation theory. Both a 1D linear and a 2D magnetotelluric inversion example are used to analyze the validity of the theory and the algorithm. To prove the proposed PSF convolution approximation theory, the 1D linear inversion problem is considered; the convolution approximation error is only 0.15%. A 2D synthetic model enhancement experiment is also presented. After the deconvolution enhancement, the edges of the conductive prism and the resistive host become sharper, and the enhancement result is closer to the actual model than the original inversion model according to the numerical statistical analysis. Moreover, the artifacts in the inversion model are suppressed. The overall precision of the model increases by 75%. All of the experiments show that the structural details and the numerical precision of the inversion model are significantly improved, especially in the anomalous region. The correlation coefficient between the enhanced inversion model and the actual model is shown in Fig. 1, which illustrates that more of the detailed structure of the actual model is recovered by the proposed enhancement algorithm. Using the proposed enhancement method can help us gain clearer insight into inversion results and make better-informed decisions.

  4. Recognition of Rapid Speech by Blind and Sighted Older Adults

    ERIC Educational Resources Information Center

    Gordon-Salant, Sandra; Friedman, Sarah A.

    2011-01-01

    Purpose: To determine whether older blind participants recognize time-compressed speech better than older sighted participants. Method: Three groups of adults with normal hearing participated (n = 10/group): (a) older sighted, (b) older blind, and (c) younger sighted listeners. Low-predictability sentences that were uncompressed (0% time…

  5. Three-dimensional FLASH Laser Radar Range Estimation via Blind Deconvolution

    DTIC Science & Technology

    2009-10-01

    scene can result in errors due to several factors including the optical spatial impulse response, detector blurring, photon noise, timing jitter, and ... estimation error include spatial blur, detector blurring, noise, timing jitter, and inter-sample targets. Unlike previous research, this paper accounts ... for pixel coupling by defining the range image mathematical model as a 2D convolution between the system spatial impulse response and the object (target

  6. ASSESSMENT OF CLINICAL IMAGE QUALITY IN PAEDIATRIC ABDOMINAL CT EXAMINATIONS: DEPENDENCY ON THE LEVEL OF ADAPTIVE STATISTICAL ITERATIVE RECONSTRUCTION (ASiR) AND THE TYPE OF CONVOLUTION KERNEL.

    PubMed

    Larsson, Joel; Båth, Magnus; Ledenius, Kerstin; Caisander, Håkan; Thilander-Klang, Anne

    2016-06-01

    The purpose of this study was to investigate the effect of different combinations of convolution kernel and the level of Adaptive Statistical Iterative Reconstruction (ASiR™) on diagnostic image quality, as well as visualisation of anatomical structures, in paediatric abdominal computed tomography (CT) examinations. Thirty-five paediatric patients with abdominal pain of non-specified pathology undergoing abdominal CT were included in the study. Transaxial stacks of 5-mm-thick images were retrospectively reconstructed at various ASiR levels, in combination with three convolution kernels. Four paediatric radiologists rated the diagnostic image quality and the delineation of six anatomical structures in a blinded randomised visual grading study. Image quality at a given ASiR level was found to be dependent on the kernel, and a more edge-enhancing kernel benefitted from a higher ASiR level. An ASiR level of 70% together with the Soft™ or Standard™ kernel was suggested to be the optimal combination for paediatric abdominal CT examinations.

  7. A method of vehicle license plate recognition based on PCANet and compressive sensing

    NASA Astrophysics Data System (ADS)

    Ye, Xianyi; Min, Feng

    2018-03-01

    The manual feature extraction of traditional methods for vehicle license plates is not robust to diverse variations. Moreover, the high dimension of the features extracted with a Principal Component Analysis Network (PCANet) leads to low classification efficiency. To solve these problems, a method of vehicle license plate recognition based on PCANet and compressive sensing is proposed. First, PCANet is used to extract features from the images of characters. Then, a sparse measurement matrix satisfying the Restricted Isometry Property (RIP) condition of compressed sensing is used to reduce the dimension of the extracted features. Finally, a Support Vector Machine (SVM) is used to train on and recognize the reduced-dimension features. Experimental results demonstrate that the proposed method outperforms a Convolutional Neural Network (CNN) in both recognition accuracy and time. Compared with the variant without compressive sensing, the proposed method has a lower feature dimension, which increases efficiency.
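
    A minimal sketch of the dimensionality-reduction step described above, assuming scikit-learn is available; the feature dimension (4096), projected dimension (256), and 36 character classes are illustrative stand-ins, and random data replaces real PCANet features.

        # Hypothetical sketch: compress high-dimensional PCANet-style features
        # with a sparse random projection (RIP-friendly), then train an SVM.
        import numpy as np
        from sklearn.random_projection import SparseRandomProjection
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 4096))      # stand-in for PCANet features
        y = rng.integers(0, 36, size=500)     # 0-9 plus A-Z character classes

        proj = SparseRandomProjection(n_components=256, random_state=0)
        X_small = proj.fit_transform(X)       # 4096 -> 256 dimensions

        clf = LinearSVC().fit(X_small, y)     # classify the reduced features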

  8. Color image lossy compression based on blind evaluation and prediction of noise characteristics

    NASA Astrophysics Data System (ADS)

    Ponomarenko, Nikolay N.; Lukin, Vladimir V.; Egiazarian, Karen O.; Lepisto, Leena

    2011-03-01

    The paper deals with adaptive JPEG lossy compression of color images formed by digital cameras. Adaptation to the noise characteristics and blur estimated for each given image is carried out. The dominant factor degrading image quality is determined in a blind manner, and the characteristics of this dominant factor are then estimated. Finally, a scaling factor that determines the quantization steps for the default JPEG table is set adaptively. Within this general framework, two possible strategies are considered. The first presumes blind estimation on the image after all operations in the digital image processing chain, just before compressing the given raster image. The second strategy is based on predicting the noise and blur parameters from analysis of the RAW image, under quite general assumptions concerning the parameters of the transformations the image will be subjected to at further processing stages. The advantages of both strategies are discussed. The first strategy provides more accurate estimation and a larger benefit in image compression ratio (CR) compared to the super-high-quality (SHQ) mode, but it is more complicated and requires more resources. The second strategy is simpler but less beneficial. The proposed approaches are tested on a large set of real-life color images acquired by digital cameras and are shown to provide a more than twofold increase of the average CR compared to the SHQ mode, without introducing visible distortions with respect to SHQ-compressed images.
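
    One way to realize the adaptive scaling described above, sketched under loose assumptions: blindly estimate the noise level with a wavelet-based estimator and map it to a JPEG quality factor, so noisier images get coarser quantization. The mapping constants are arbitrary illustrations, not the authors' tuned rule.

        # Hypothetical sketch: pick a JPEG quality factor from a blind noise
        # estimate (scikit-image's estimate_sigma); noisier -> lower quality.
        import numpy as np
        from skimage import data, img_as_float
        from skimage.restoration import estimate_sigma

        def quality_from_noise(image, q_max=95, q_min=50):
            sigma = estimate_sigma(image, channel_axis=-1, average_sigmas=True)
            t = min(max(sigma / 0.1, 0.0), 1.0)   # map sigma in [0, 0.1] to [0, 1]
            return int(round(q_max - t * (q_max - q_min)))

        img = img_as_float(data.astronaut())
        noisy = np.clip(img + np.random.default_rng(0).normal(0, 0.05, img.shape), 0, 1)
        print(quality_from_noise(img), quality_from_noise(noisy))  # clean image gets higher quality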

  9. A Robust Gold Deconvolution Approach for LiDAR Waveform Data Processing to Characterize Vegetation Structure

    NASA Astrophysics Data System (ADS)

    Zhou, T.; Popescu, S. C.; Krause, K.; Sheridan, R.; Ku, N. W.

    2014-12-01

    Increasing attention has been paid in the remote sensing community to next-generation Light Detection and Ranging (lidar) waveform data systems for extracting information on topography and the vertical structure of vegetation. However, processing waveform lidar data raises some challenges compared to analyzing discrete-return data. The overall goal of this study was to present a robust de-convolution algorithm, the Gold algorithm, used to de-convolve waveforms in a lidar dataset acquired within a 60 x 60 m study area located in the Harvard Forest in Massachusetts. The waveform lidar data were collected by the National Ecological Observatory Network (NEON). Specific objectives were to: (1) explore advantages and limitations of various waveform processing techniques to derive topography and canopy height information; (2) develop and implement a novel de-convolution algorithm, the Gold algorithm, to extract elevation and canopy metrics; and (3) compare results and assess accuracy. We modeled lidar waveforms with a mixture of Gaussian functions using a nonlinear least squares (NLS) algorithm implemented in R and derived a Digital Terrain Model (DTM) and canopy height. We compared our waveform-derived topography and canopy height measurements using the Gold de-convolution algorithm to results using the Richardson-Lucy algorithm. Our findings show that the Gold algorithm performed better than the Richardson-Lucy algorithm in terms of recovering hidden echoes and detecting false echoes when generating a DTM, which indicates that the Gold algorithm could potentially be applied to the processing of waveform lidar data to derive information on terrain elevation and canopy characteristics.
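
    For readers unfamiliar with the Gold method, the sketch below shows one common form of its multiplicative ratio iteration on a toy 1-D waveform with a known Gaussian system response; the response width, echo positions, and iteration count are illustrative assumptions rather than the NEON processing settings.

        # Hypothetical sketch: Gold-style iterative deconvolution of a lidar
        # waveform; multiplicative updates keep the solution non-negative.
        import numpy as np

        def gold_deconvolve(y, h, n_iter=200):
            h = h / h.sum()
            x = y.copy()
            for _ in range(n_iter):
                hx = np.convolve(x, h, mode="same")
                x = x * y / np.maximum(hx, 1e-12)   # ratio update
            return x

        h = np.exp(-0.5 * (np.arange(-15, 16) / 3.0) ** 2)   # system response
        truth = np.zeros(200)
        truth[[60, 90, 130]] = [1.0, 0.6, 0.8]               # three echoes
        waveform = np.convolve(truth, h / h.sum(), mode="same")
        echoes = gold_deconvolve(waveform, h)                # peaks near 60/90/130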

  10. Automatic diabetic retinopathy classification

    NASA Astrophysics Data System (ADS)

    Bravo, María. A.; Arbeláez, Pablo A.

    2017-11-01

    Diabetic retinopathy (DR) is a disease in which the retina is damaged due to increased blood pressure in its small vessels, and it is the major cause of blindness among diabetics. It has been shown that early diagnosis can play a major role in the prevention of visual loss and blindness. This work proposes a computer-based approach for the detection of DR in back-of-the-eye images using convolutional neural networks (CNNs). Our CNN uses deep architectures to classify Back-of-the-eye Retinal Photographs (BRP) into 5 stages of DR. Our method combines several preprocessed versions of the BRP to obtain an ACA score of 50.5%. Furthermore, we explore subproblems of our main classification task by training a larger CNN.
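
    As a hedged illustration of the kind of model this record describes (the authors' architecture is not specified here), a minimal 5-class CNN in PyTorch might look like this; the layer sizes are arbitrary.

        # Hypothetical sketch: a tiny 5-class CNN for retinal photographs.
        import torch
        import torch.nn as nn

        class TinyDRNet(nn.Module):
            def __init__(self, n_classes=5):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1),          # global pooling
                )
                self.classifier = nn.Linear(64, n_classes)

            def forward(self, x):
                return self.classifier(self.features(x).flatten(1))

        logits = TinyDRNet()(torch.randn(2, 3, 224, 224))   # shape (2, 5)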

  11. NGMIX: Gaussian mixture models for 2D images

    NASA Astrophysics Data System (ADS)

    Sheldon, Erin

    2015-08-01

    NGMIX implements Gaussian mixture models for 2D images. Both the PSF profile and the galaxy are modeled using mixtures of Gaussians. Convolutions are thus performed analytically, resulting in fast model generation as compared to methods that perform the convolution in Fourier space. For the galaxy model, NGMIX supports exponential disks and de Vaucouleurs and Sérsic profiles; these are implemented approximately as a sum of Gaussians using the fits from Hogg & Lang (2013). Additionally, any number of Gaussians can be fit, either completely free or constrained to be co-centric and co-elliptical.
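
    The analytic convolution mentioned above rests on a simple fact: convolving two Gaussians yields a Gaussian whose mean and covariance are the sums of the inputs' means and covariances. A minimal sketch with toy numbers (not NGMIX's own API):

        # Hypothetical sketch: analytic convolution of two elliptical Gaussians.
        # For mixtures, convolve every (galaxy, PSF) component pair and
        # multiply their amplitudes -- no Fourier transform needed.
        import numpy as np

        def convolve_gaussians(mu1, cov1, mu2, cov2):
            return mu1 + mu2, cov1 + cov2

        gal_mu, gal_cov = np.zeros(2), np.array([[4.0, 1.0], [1.0, 2.0]])
        psf_mu, psf_cov = np.zeros(2), np.array([[1.5, 0.0], [0.0, 1.5]])
        mu, cov = convolve_gaussians(gal_mu, gal_cov, psf_mu, psf_cov)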

  12. Towards Natural Transition in Compressible Boundary Layers

    DTIC Science & Technology

    2016-06-29

    AFRL-AFOSR-CL-TR-2016-0011, Towards natural transition in compressible boundary layers. Final report on grant FA9550-11-1-0354, reporting period to 29-03-2016. Principal Investigator: Marcello Augusto Faraco de Medeiros (with Germán Andrés Gaviria), FUNDACAO PARA O INCREMENTO DA ... [report cover-page residue; distribution unlimited, 109 pages]

  13. A no-reference image and video visual quality metric based on machine learning

    NASA Astrophysics Data System (ADS)

    Frantc, Vladimir; Voronin, Viacheslav; Semenishchev, Evgenii; Minkin, Maxim; Delov, Aliy

    2018-04-01

    The paper presents a novel visual quality metric for lossy-compressed video quality assessment. A high degree of correlation with subjective quality estimates is achieved by using a convolutional neural network trained on a large set of video sequence / subjective quality score pairs. We demonstrate how our predicted no-reference quality metric correlates with qualitative opinion in a human observer study. Results are shown on the EVVQ dataset, with comparison to existing approaches.

  14. Two-dimensional simulations of thermonuclear burn in ignition-scale inertial confinement fusion targets under compressed axial magnetic fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perkins, L. J.; Logan, B. G.; Zimmerman, G. B.

    2013-07-15

    We report for the first time on full 2-D radiation-hydrodynamic implosion simulations that explore the impact of highly compressed imposed magnetic fields on the ignition and burn of perturbed spherical implosions of ignition-scale cryogenic capsules. Using perturbations that highly convolute the cold fuel boundary of the hotspot and prevent ignition without applied fields, we impose initial axial seed fields of 20–100 T (potentially attainable using present experimental methods) that compress to greater than 4 × 10⁴ T (400 MG) under implosion, thereby relaxing hotspot areal densities and pressures required for ignition and propagating burn by ∼50%. The compressed field is high enough to suppress transverse electron heat conduction, and to allow alphas to couple energy into the hotspot even when highly deformed by large low-mode amplitudes. This might permit the recovery of ignition, or at least significant alpha particle heating, in submarginal capsules that would otherwise fail because of adverse hydrodynamic instabilities.

  15. Sharpening spots: correcting for bleedover in cDNA array images.

    PubMed

    Therneau, Terry; Tschumper, Renee C; Jelinek, Diane

    2002-03-01

    For cDNA array methods that depend on imaging of a radiolabel, we show that bleedover of one spot onto another, due to the gap between the array and the imaging media, can be a major problem. The images can be sharpened, however, using a blind deconvolution method based on the EM algorithm. The sharpened images look like a set of donuts, which concurs with our knowledge of the spotting process. Oversharpened images are actually useful as well, in locating the centers of each spot.

  16. Evaluation of chest compression effect on airway management with air-Q, aura-i, i-gel, and Fastrack intubating supraglottic devices by novice physicians: a randomized crossover simulation study.

    PubMed

    Komasawa, Nobuyasu; Ueki, Ryusuke; Kaminoh, Yoshiroh; Nishi, Shin-Ichi

    2014-10-01

    In the 2010 American Heart Association guidelines, supraglottic devices (SGDs) such as the laryngeal mask are proposed as alternatives to tracheal intubation for cardiopulmonary resuscitation. Some SGDs can also serve as a means for tracheal intubation after successful ventilation. The purpose of this study was to evaluate the effect of chest compression on airway management with four intubating SGDs, aura-i (aura-i), air-Q (air-Q), i-gel (i-gel), and Fastrack (Fastrack), during cardiopulmonary resuscitation using a manikin. Twenty novice physicians inserted the four intubating SGDs into a manikin with or without chest compression. Insertion time and successful ventilation rate were measured. For cases of successful ventilation, blind tracheal intubation via the intubating SGD was performed with chest compression and success or failure within 30 s was recorded. Chest compression did not decrease the ventilation success rate of the four intubating SGDs (without chest compression (success/total): air-Q, 19/20; aura-i, 19/20; i-gel, 18/20; Fastrack, 19/20; with chest compression: air-Q, 19/20; aura-i, 19/20; i-gel, 16/20; Fastrack, 18/20). Insertion time was significantly lengthened by chest compression in the i-gel trial (P < 0.05), but not with the other three devices. The blind intubation success rate with chest compression was the highest in the air-Q trial (air-Q, 15/19; aura-i, 14/19; i-gel, 12/16; Fastrack, 10/18). This simulation study revealed the utility of intubating SGDs for airway management during chest compression.

  17. A spherical harmonic approach for the determination of HCP texture from ultrasound: A solution to the inverse problem

    NASA Astrophysics Data System (ADS)

    Lan, Bo; Lowe, Michael J. S.; Dunne, Fionn P. E.

    2015-10-01

    A new spherical convolution approach has been presented which couples the HCP single-crystal wave speed (the kernel function) with the polycrystal c-axis pole distribution function to give the resultant polycrystal wave speed response. The three functions have been expressed as spherical harmonic expansions, enabling a de-convolution technique in which any one of the three can be determined from knowledge of the other two. Hence, the forward problem of determining polycrystal wave speed from knowledge of the single-crystal wave speed response and the polycrystal pole distribution has been solved for a broad range of experimentally representative HCP polycrystal textures. The technique provides a near-perfect representation of the sensitivity of wave speed to polycrystal texture as well as quantitative prediction of polycrystal wave speed. More importantly, a solution to the inverse problem is presented in which texture, as a c-axis distribution function, is determined from knowledge of the kernel function and the polycrystal wave speed response. It is also explained why it has been widely reported in the literature that only texture coefficients up to 4th degree may be obtained from ultrasonic measurements. Finally, the de-convolution approach presented provides the potential for the measurement of polycrystal texture from ultrasonic wave speed measurements.
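
    A hedged sketch of the harmonic machinery implied above: for zonal kernels, spherical convolution multiplies coefficients degree by degree, so the inverse problem is coefficient-wise division, which is only stable where the kernel coefficients are appreciably non-zero (consistent with the 4th-degree limit noted in the record). The coefficient values below are toy numbers.

        # Hypothetical sketch of the spherical convolution theorem:
        # response_l = kernel_l * texture_l for each (even) degree l.
        import numpy as np

        def forward(c_texture, c_kernel):
            return c_texture * c_kernel

        def invert(c_response, c_kernel, eps=1e-8):
            safe = np.where(np.abs(c_kernel) > eps, c_kernel, np.inf)
            return c_response / safe          # degrees with ~zero kernel -> 0

        c_kernel = np.array([1.0, 0.0, 0.35, 0.0, 0.02])   # degrees 0,2,4,6,8
        c_texture = np.array([1.0, 0.0, 0.5, 0.0, 0.2])
        recovered = invert(forward(c_texture, c_kernel), c_kernel)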

  18. THE COMPREHENSION OF RAPID SPEECH BY THE BLIND, PART III.

    ERIC Educational Resources Information Center

    FOULKE, EMERSON

    A REVIEW OF THE RESEARCH ON THE COMPREHENSION OF RAPID SPEECH BY THE BLIND IDENTIFIES FIVE METHODS OF SPEECH COMPRESSION--SPEECH CHANGING, ELECTROMECHANICAL SAMPLING, COMPUTER SAMPLING, SPEECH SYNTHESIS, AND FREQUENCY DIVIDING WITH THE HARMONIC COMPRESSOR. THE SPEECH CHANGING AND ELECTROMECHANICAL SAMPLING METHODS AND THE NECESSARY APPARATUS HAVE…

  19. Convex blind image deconvolution with inverse filtering

    NASA Astrophysics Data System (ADS)

    Lv, Xiao-Guang; Li, Fang; Zeng, Tieyong

    2018-03-01

    Blind image deconvolution is the process of estimating both the original image and the blur kernel from the degraded image with only partial or no information about the degradation and the imaging system. It is a bilinear ill-posed inverse problem corresponding to the direct problem of convolution. Regularization methods are used to handle the ill-posedness of blind deconvolution and obtain meaningful solutions. In this paper, we investigate a convex regularized inverse filtering method for blind deconvolution of images. We assume that the support region of the blur object is known, as has been done in a few existing works. By studying the inverse filters of signal and image restoration problems, we observe that they have an oscillatory structure. Inspired by this observation, we propose to regularize the inverse filter with the star norm, while using the total variation to regularize the image obtained by convolving the inverse filter with the degraded image. The proposed minimization model is shown to be convex, and we employ a first-order primal-dual method for its solution. Numerical examples for blind image restoration show that the proposed method outperforms some existing methods in terms of peak signal-to-noise ratio (PSNR), structural similarity (SSIM), visual quality, and time consumption.
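
    To see the oscillation problem the authors exploit, consider plain Fourier inverse filtering: without damping, division by small spectral values of the blur blows up noise. A Tikhonov-damped sketch follows (not the paper's star-norm/TV model; the box blur and lambda are assumptions):

        # Hypothetical sketch: regularized Fourier inverse filtering.
        import numpy as np

        def regularized_inverse_filter(blurred, psf, lam=1e-2):
            """Tikhonov-damped inverse; lam=0 recovers the raw inverse filter."""
            H = np.fft.rfft2(psf, blurred.shape)
            Y = np.fft.rfft2(blurred)
            X = np.conj(H) * Y / (np.abs(H) ** 2 + lam)
            return np.fft.irfft2(X, blurred.shape)

        rng = np.random.default_rng(0)
        sharp = rng.random((64, 64))
        psf = np.ones((5, 5)) / 25.0                      # box blur
        blurred = np.fft.irfft2(np.fft.rfft2(sharp) * np.fft.rfft2(psf, sharp.shape),
                                sharp.shape)
        restored = regularized_inverse_filter(blurred, psf, lam=1e-3)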

  20. A Compressed Sensing Based Ultra-Wideband Communication System

    DTIC Science & Technology

    2009-06-01

    principle, most of the processing at the receiver can be moved to the transmitter—where energy consumption and computation are sufficient for many advanced ... extended to continuous-time signals. We use ∗ to denote the convolution process in a linear time-invariant (LTI) system. Assume that there is an analog ... [figure residue: receiver block diagram with UWB pulse generator (radio waves, 5 GHz), filter, channel, low-rate A/D (125 MHz), and processing to a sparse bit sequence; sparse recovery via θ̂ = arg min ‖θ‖₁ s.t. y = ΦΨθ]

  1. Application of the SeDeM Diagram and a new mathematical equation in the design of direct compression tablet formulation.

    PubMed

    Suñé-Negre, Josep M; Pérez-Lozano, Pilar; Miñarro, Montserrat; Roig, Manel; Fuster, Roser; Hernández, Carmen; Ruhí, Ramon; García-Montoya, Encarna; Ticó, Josep R

    2008-08-01

    Application of the new SeDeM Method is proposed for the study of the galenic properties of excipients in terms of the applicability of direct-compression (DC) technology. Through experimental studies of the parameters of the SeDeM Method and their subsequent mathematical treatment and graphical expression (SeDeM Diagram), six different DC diluents were analysed to determine whether they were suitable for direct compression. Based on the properties of these diluents, a mathematical equation was established to identify the best DC diluent and the optimum amount to be used when defining a suitable formula for direct compression, depending on the SeDeM properties of the active pharmaceutical ingredient (API) to be used. The results obtained confirm that the SeDeM Method is an appropriate and effective tool for determining a viable formulation for tablets prepared by direct compression, and can thus be used as the basis for the relevant pharmaceutical development.

  2. Variable-pulse-shape pulsed-power accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stoltzfus, Brian S.; Austin, Kevin; Hutsel, Brian Thomas

    A variable-pulse-shape pulsed-power accelerator is driven by a large number of independent LC drive circuits. Each LC circuit drives one or more coaxial transmission lines that deliver the circuit's output power to several water-insulated radial transmission lines that are connected in parallel at small radius by a water-insulated post-hole convolute. The accelerator can be impedance matched throughout. The coaxial transmission lines are sufficiently long to transit-time isolate the LC drive circuits from the water-insulated transmission lines, which allows each LC drive circuit to be operated without being affected by the other circuits. This enables the creation of any power pulse that can be mathematically described as a time-shifted linear combination of the pulses of the individual LC drive circuits. Therefore, the output power of the convolute can provide a variable pulse shape to a load that can be used for magnetically driven, quasi-isentropic compression experiments and other applications.

  3. Strong convective storm nowcasting using a hybrid approach of convolutional neural network and hidden Markov model

    NASA Astrophysics Data System (ADS)

    Zhang, Wei; Jiang, Ling; Han, Lei

    2018-04-01

    Convective storm nowcasting refers to the prediction of convective weather initiation, development, and decay on a very short term (typically 0-2 h). Despite marked progress over the past years, severe convective storm nowcasting remains a challenge. With the boom of machine learning, it has been applied successfully in various fields, especially via convolutional neural networks (CNNs). In this paper, we build a severe convective weather nowcasting system based on a CNN and a hidden Markov model (HMM) using reanalysis meteorological data. The goal of convective storm nowcasting is to predict whether there will be a convective storm in the next 30 min. We use the CNN to compress the VDRAS reanalysis data into low-dimensional observation vectors for the HMM, and then obtain the development trend of strong convective weather in the form of a time series. The results show that our method can extract robust features without any manual selection of features, and can capture the development trend of strong convective storms.
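
    A hedged sketch of the pipeline this record describes, assuming the hmmlearn package for the HMM and random tensors in place of VDRAS reanalysis frames; the encoder shape and the two hidden regimes are illustrative choices, not the paper's configuration.

        # Hypothetical sketch: a CNN compresses each frame to a feature vector;
        # a Gaussian HMM then models the temporal evolution of those vectors.
        import torch
        import torch.nn as nn
        from hmmlearn.hmm import GaussianHMM

        encoder = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),    # -> 16-dim features
        )

        frames = torch.randn(120, 1, 64, 64)          # 120 time steps (toy data)
        with torch.no_grad():
            feats = encoder(frames).numpy()

        hmm = GaussianHMM(n_components=2).fit(feats)  # storm / no-storm regimes
        states = hmm.predict(feats)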

  4. Low-rank and Adaptive Sparse Signal (LASSI) Models for Highly Accelerated Dynamic Imaging

    PubMed Central

    Ravishankar, Saiprasad; Moore, Brian E.; Nadakuditi, Raj Rao; Fessler, Jeffrey A.

    2017-01-01

    Sparsity-based approaches have been popular in many applications in image processing and imaging. Compressed sensing exploits the sparsity of images in a transform domain or dictionary to improve image recovery from undersampled measurements. In the context of inverse problems in dynamic imaging, recent research has demonstrated the promise of sparsity and low-rank techniques. For example, the patches of the underlying data are modeled as sparse in an adaptive dictionary domain, and the resulting image and dictionary estimation from undersampled measurements is called dictionary-blind compressed sensing, or the dynamic image sequence is modeled as a sum of low-rank and sparse (in some transform domain) components (L+S model) that are estimated from limited measurements. In this work, we investigate a data-adaptive extension of the L+S model, dubbed LASSI, where the temporal image sequence is decomposed into a low-rank component and a component whose spatiotemporal (3D) patches are sparse in some adaptive dictionary domain. We investigate various formulations and efficient methods for jointly estimating the underlying dynamic signal components and the spatiotemporal dictionary from limited measurements. We also obtain efficient sparsity penalized dictionary-blind compressed sensing methods as special cases of our LASSI approaches. Our numerical experiments demonstrate the promising performance of LASSI schemes for dynamic magnetic resonance image reconstruction from limited k-t space data compared to recent methods such as k-t SLR and L+S, and compared to the proposed dictionary-blind compressed sensing method. PMID:28092528

  5. Low-Rank and Adaptive Sparse Signal (LASSI) Models for Highly Accelerated Dynamic Imaging.

    PubMed

    Ravishankar, Saiprasad; Moore, Brian E; Nadakuditi, Raj Rao; Fessler, Jeffrey A

    2017-05-01

    Sparsity-based approaches have been popular in many applications in image processing and imaging. Compressed sensing exploits the sparsity of images in a transform domain or dictionary to improve image recovery from undersampled measurements. In the context of inverse problems in dynamic imaging, recent research has demonstrated the promise of sparsity and low-rank techniques. For example, the patches of the underlying data are modeled as sparse in an adaptive dictionary domain, and the resulting image and dictionary estimation from undersampled measurements is called dictionary-blind compressed sensing, or the dynamic image sequence is modeled as a sum of low-rank and sparse (in some transform domain) components (L+S model) that are estimated from limited measurements. In this work, we investigate a data-adaptive extension of the L+S model, dubbed LASSI, where the temporal image sequence is decomposed into a low-rank component and a component whose spatiotemporal (3D) patches are sparse in some adaptive dictionary domain. We investigate various formulations and efficient methods for jointly estimating the underlying dynamic signal components and the spatiotemporal dictionary from limited measurements. We also obtain efficient sparsity penalized dictionary-blind compressed sensing methods as special cases of our LASSI approaches. Our numerical experiments demonstrate the promising performance of LASSI schemes for dynamic magnetic resonance image reconstruction from limited k-t space data compared to recent methods such as k-t SLR and L+S, and compared to the proposed dictionary-blind compressed sensing method.
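
    LASSI couples the low-rank part with an adaptive, dictionary-sparse part; as a rough illustration of just the baseline L+S split it extends (not LASSI itself), the following alternating proximal sketch thresholds singular values for L and entries for S. The thresholds, iteration count, and toy Casorati matrix are assumptions.

        # Hypothetical sketch: decompose a pixels-by-time matrix into low-rank + sparse.
        import numpy as np

        def svt(M, tau):                       # singular-value thresholding
            U, s, Vt = np.linalg.svd(M, full_matrices=False)
            return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

        def soft(M, tau):                      # entrywise soft thresholding
            return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

        def l_plus_s(Y, lam_l=1.0, lam_s=0.1, n_iter=50):
            L = np.zeros_like(Y)
            S = np.zeros_like(Y)
            for _ in range(n_iter):
                L = svt(Y - S, lam_l)          # prox for the nuclear norm
                S = soft(Y - L, lam_s)         # prox for the l1 norm
            return L, S

        rng = np.random.default_rng(0)
        Y = np.outer(rng.random(256), np.ones(32))     # static background (rank 1)
        Y += 0.5 * (rng.random((256, 32)) > 0.98)      # sparse dynamic spikes
        L, S = l_plus_s(Y)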

  6. John Tracy Clinic: Programa de Ensenanza por Correspondencia para Los Padres de Ninos Sordo-Ciegos de Edad Preescolar (John Tracy Clinic Correspondence Learning Program for Parents of Preschool Deaf-Blind Children).

    ERIC Educational Resources Information Center

    Thielman, Virginia B.; And Others

    Written in Spanish, the document contains a correspondence learning program for parents of deaf blind preschoolers. An introductory section gives preliminary instructions, an introduction to sign language, and a list of resources for deaf blind children. Twelve lessons follow with information on: the parent's role in teaching the child, visual…

  7. Quality of cardio-pulmonary resuscitation (CPR) during paediatric resuscitation training: time to stop the blind leading the blind.

    PubMed

    Arshid, Muhammad; Lo, Tsz-Yan Milly; Reynolds, Fiona

    2009-05-01

    Recent evidence suggested that the quality of cardio-pulmonary resuscitation (CPR) during adult advanced life support training was suboptimal. This study aimed to assess the CPR quality of a paediatric resuscitation training programme, and to determine whether it was sufficiently addressed by the trainee team leaders during training. CPR quality of 20 consecutive resuscitation scenario training sessions was audited prospectively using a pre-designed proforma. A consultant intensivist and a senior nurse who were also Advanced Paediatric Life Support (APLS) instructors assessed the CPR quality which included ventilation frequency, chest compression rate and depth, and any unnecessary interruption in chest compressions. Team leaders' response to CPR quality and elective change of compression rescuer during training were also recorded. Airway patency was not assessed in 13 sessions while ventilation rate was too fast in 18 sessions. Target compression rate was not achieved in only 1 session. The median chest compression rate was 115 beats/min. Chest compressions were too shallow in 10 sessions and were interrupted unnecessarily in 13 sessions. More than 50% of training sessions did not have elective change of the compression rescuer. 19 team leaders failed to address CPR quality during training despite all team leaders being certified APLS providers. The quality of CPR performance was suboptimal during paediatric resuscitation training and team leaders-in-training had little awareness of this inadequacy. Detailed CPR quality assessment and feedback should be integrated into paediatric resuscitation training to ensure optimal performance in real life resuscitations.

  8. Development of multiple-unit pellet system tablets by employing the SeDeM expert diagram system I: pellets with different sizes.

    PubMed

    Hamman, Hannlie; Hamman, Josias; Wessels, Anita; Scholtz, Jacques; Steenekamp, Jan Harm

    2017-07-03

    Multiple-unit pellet systems (MUPS) provide several pharmacokinetic and pharmacodynamic advantages over single-unit dosage forms; however, compression of pellets into MUPS tablets presents certain challenges. Although the SeDeM Expert Diagram System (SeDeM EDS) was originally developed to provide information about the most appropriate excipient, and the minimum amount thereof, required for producing direct-compressible tablets, this study investigated the possibility of applying the SeDeM EDS to the production of MUPS tablets. In addition, the effect of pellet size (i.e. 0.5, 1.0, 1.5, 2.0, and 2.5 mm) on SeDeM EDS predictions regarding the MUPS tablet formulations was investigated. The compressibility incidence factor values were below the acceptable value (i.e. 5.00) for all pellet sizes. Kollidon® VA 64 was identified as the most appropriate excipient to improve compressibility. The compression indices, namely the parameter index (IP), parametric profile index (IPP), and good compression index (GCI), indicated that acceptable MUPS tablets could be produced from the final pellet-excipient blends based on predictions from the SeDeM EDS. These MUPS tablets complied with specifications for friability, hardness, and mass variation. The SeDeM EDS is therefore applicable to assist in the formulation of acceptable MUPS tablets.

  9. Heart to Heart: Parents of Blind and Partially Sighted Children Talk about Their Feelings = De Corazon a Corazon: Padres de Ninos Ciegos y Parcialmente Ciegos Hablan acerca de Sus Sentimientos.

    ERIC Educational Resources Information Center

    Blind Childrens Center, Los Angeles, CA.

    English and Spanish versions of this booklet describe typical feelings experienced by parents of blind and partially sighted children. Experiences are cited including first feelings of shock and confusion, days of dramatic ups and downs, need to find a reason for the blindness, self doubts and anxiety, and reactions from strangers. In closing, the…

  10. Design and Development of Basic Physical Layer WiMAX Network Simulation Models

    DTIC Science & Technology

    2009-01-01

    Wide Web. The third software version was developed during the period of 22 August to 4 November, 2008. The software version developed during the ... researched on the Web. The mathematics of some fundamental concepts such as Fourier transforms and convolutional coding techniques were also reviewed ... Mathworks Matlab users' website. A simulation model was found, entitled Estudio y Simulación de la capa física de la norma 802.16 (Sistema WiMAX), developed

  11. Control of Lower Extremity Edema in Patients with Diabetes: Double Blind Randomized Controlled Trial Assessing the Efficacy of Mild Compression Diabetic Socks

    PubMed Central

    Wu, Stephanie C.; Crews, Ryan T.; Skratsky, Melissa; Overstreet, Julia; Yalla, Sai V.; Winder, Michelle; Ortiz, Jacquelyn; Andersen, Charles A.

    2017-01-01

    Aims Persons with diabetes frequently present with lower extremity (LE) edema; however, compression therapy is generally avoided for fear of compromising arterial circulation in a population with a high prevalence of peripheral arterial disease. This double blind randomized controlled trial (RCT) assessed whether diabetic socks with mild compression could reduce LE edema in patients with diabetes without negatively impacting vascularity. Methods Eighty subjects with LE edema and diabetes were randomized to receive either mild-compression knee-high diabetic socks (18–25 mmHg) or non-compression knee-high diabetic socks. Subjects were instructed to wear the socks during all waking hours. Follow-up visits occurred weekly for four consecutive weeks. Edema was quantified through midfoot, ankle, and calf circumferences and cutaneous fluid measurements. Vascular status was tracked via ankle brachial index (ABI), toe brachial index (TBI), and skin perfusion pressure (SPP). Results Seventy-seven subjects (39 controls and 38 mild-compression subjects) successfully completed the study. There were no statistical differences between the two groups in terms of age, body mass index, gender, or ethnicity. Repeated measures analysis of variance and Sidak corrections for multiple comparisons were used for data analyses. Subjects randomized to mild-compression diabetic socks demonstrated significant decreases in calf and ankle circumferences at the end of treatment as compared to baseline. LE circulation did not diminish throughout the study, with no significant decreases in ABI, TBI, or SPP for either group. Conclusions Results of this RCT suggest that mild-compression diabetic socks may be effectively and safely used in patients with diabetes and LE edema. PMID:28315576

  12. Control of lower extremity edema in patients with diabetes: Double blind randomized controlled trial assessing the efficacy of mild compression diabetic socks.

    PubMed

    Wu, Stephanie C; Crews, Ryan T; Skratsky, Melissa; Overstreet, Julia; Yalla, Sai V; Winder, Michelle; Ortiz, Jacquelyn; Andersen, Charles A

    2017-05-01

    Persons with diabetes frequently present with lower extremity (LE) edema; however, compression therapy is generally avoided for fear of compromising arterial circulation in a population with a high prevalence of peripheral arterial disease. This double blind randomized controlled trial (RCT) assessed whether diabetic socks with mild compression could reduce LE edema in patients with diabetes without negatively impacting vascularity. Eighty subjects with LE edema and diabetes were randomized to receive either mild-compression knee-high diabetic socks (18-25 mmHg) or non-compression knee-high diabetic socks. Subjects were instructed to wear the socks during all waking hours. Follow-up visits occurred weekly for four consecutive weeks. Edema was quantified through midfoot, ankle, and calf circumferences and cutaneous fluid measurements. Vascular status was tracked via ankle brachial index (ABI), toe brachial index (TBI), and skin perfusion pressure (SPP). Seventy-seven subjects (39 controls and 38 mild-compression subjects) successfully completed the study. There were no statistical differences between the two groups in terms of age, body mass index, gender, or ethnicity. Repeated measures analysis of variance and Sidak corrections for multiple comparisons were used for data analyses. Subjects randomized to mild-compression diabetic socks demonstrated significant decreases in calf and ankle circumferences at the end of treatment as compared to baseline. LE circulation did not diminish throughout the study, with no significant decreases in ABI, TBI, or SPP for either group. Results of this RCT suggest that mild-compression diabetic socks may be effectively and safely used in patients with diabetes and LE edema.

  13. [Effects of a voice metronome on compression rate and depth in telephone assisted, bystander cardiopulmonary resuscitation: an investigator-blinded, 3-armed, randomized, simulation trial].

    PubMed

    van Tulder, Raphael; Roth, Dominik; Krammel, Mario; Laggner, Roberta; Schriefl, Christoph; Kienbacher, Calvin; Lorenzo Hartmann, Alexander; Novosad, Heinz; Constantin Chwojka, Christof; Havel, Christoph; Schreiber, Wolfgang; Herkner, Harald

    2015-01-01

    We investigated the effect on compression rate and depth of a conventional metronome and a voice metronome in simulated telephone-assisted, protocol-driven bystander cardiopulmonary resuscitation (CPR), compared to standard instruction. Thirty-six lay volunteers performed 10 minutes of compression-only CPR in a prospective, investigator-blinded, 3-arm study on a manikin. Participants were randomized either to standard instruction ("push down firmly, 5 cm"), a regular metronome pacing 110 beats per minute (bpm), or a voice metronome continuously prompting "deep-deep-deep-deeper" at 110 bpm. The primary outcome was deviation from the ideal chest compression target range (50 mm compression depth x 100 compressions per minute x 10 minutes = 50 m). Secondary outcomes were CPR quality measures (compression and leaning depth, rate, no-flow times) and participants' related physiological response (heart rate, blood pressure, nine-hole peg test, and Borg scale scores). We used a linear regression model to calculate effects. The mean (SD) deviation from the ideal target range (50 m) was -11 (9) m in the standard group, -20 (11) m in the conventional metronome group (adjusted difference [95% CI] 9.0 [1.2-17.5] m, P=.03), and -18 (9) m in the voice metronome group (adjusted difference 7.2 [-0.9-15.3] m, P=.08). Secondary outcomes (CPR quality measures and the physiological response of participants to CPR performance) showed no significant differences. Compared to standard instruction, the conventional metronome had a significant negative effect on the chest compression target range. The voice metronome showed a non-significant negative effect and therefore cannot be recommended for regular use in telephone-assisted CPR.

  14. Reduction in Wound Complications After Total Ankle Arthroplasty Using a Compression Wrap Protocol.

    PubMed

    Schipper, Oliver N; Hsu, Andrew R; Haddad, Steven L

    2015-12-01

    The purpose of this study was to evaluate the clinical differences in wound complications after total ankle arthroplasty (TAA) between a cohort of patients that received a compression wrap protocol and a historical control group treated with cast immobilization. Patient charts and postoperative wound pictures were reviewed for 42 patients who underwent a compression wrap protocol and 50 patients who underwent circumferential casting after primary TAA from 2008 to 2013. A blinded reviewer graded each wound using a novel postoperative wound classification system, and recorded whether the wound was completely healed by or after 3 months. A second blinded review was performed to determine intraobserver reliability. Mean patient age was 55 years (range, 24-80) and all patients had at least 6-month follow-up. There were significantly more total wound complications (P = .02) and mild wound complications (P = .02) in the casted group compared to the compression wrap group. There were no significant differences in the number of moderate and severe complications between each group. A significantly higher proportion of TAA incisions took longer than 3 months to heal in the casted group (P = .02). Based on our clinical experience with postoperative wound care after TAA, use of a compression wrap protocol was safe and effective at reducing wound-related complications, and well tolerated by patients. Further prospective, randomized clinical trials are warranted to evaluate the utility and cost-effectiveness of a compression wrap protocol after TAA.

  15. Blind deconvolution of astronomical images with band limitation determined by optical system parameters

    NASA Astrophysics Data System (ADS)

    Luo, L.; Fan, M.; Shen, M. Z.

    2007-07-01

    Atmospheric turbulence greatly limits the spatial resolution of astronomical images acquired by large ground-based telescopes. The recorded image is modeled as the convolution of the object function with the point spread function. The statistical relationship between the measured image data, the estimated object, and the point spread function follows the Bayes conditional probability distribution, from which a maximum-likelihood formulation is derived. A blind deconvolution approach based on maximum-likelihood estimation with a real optical band-limitation constraint is presented for removing the effect of atmospheric turbulence on this class of images, through minimization of the convolution error function using a conjugate-gradient optimization algorithm. As a result, the object function and the point spread function can be estimated simultaneously from a few recorded images by the blind deconvolution algorithm. Following the principles of Fourier optics, the relationship between the telescope optical system parameters and the image band constraint in the frequency domain is formulated for the transformations between the spatial and frequency domains during image processing. Convergence of the algorithm is improved by constraining the estimated object function and point spread function to be non-negative and the point spread function to be band-limited. To avoid losing Fourier components beyond the cutoff frequency during these transformations (when the sampled image data, the spatial domain, and the frequency domain have matching sizes), the detector element (e.g., a pixel of the CCD) should be smaller than a quarter of the diffraction speckle diameter of the telescope when acquiring images on the focal plane. The proposed method can easily be applied to the restoration of wide field-of-view turbulence-degraded images, because no object support constraint is used in the algorithm. The validity of the method is examined by computer simulation and by restoration of real Alpha Psc astronomical image data. The results suggest that blind deconvolution with the real optical band constraint can remove the effect of atmospheric turbulence on the observed images, and the spatial resolution of the object image can reach or exceed the diffraction-limited level.
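
    The band-limitation constraint described above amounts to projecting the PSF estimate onto the set of band-limited functions at every iteration. A minimal sketch of that single step (the cutoff fraction and array sizes are illustrative; the full maximum-likelihood/conjugate-gradient loop is omitted):

        # Hypothetical sketch: zero all Fourier components of the PSF estimate
        # beyond the optical cutoff, then restore non-negativity.
        import numpy as np

        def band_limit(psf, cutoff_frac):
            """cutoff_frac: optical cutoff as a fraction of the Nyquist rate."""
            n = psf.shape[0]
            f = np.fft.fftfreq(n)
            fx, fy = np.meshgrid(f, f)
            mask = np.sqrt(fx**2 + fy**2) <= cutoff_frac * 0.5
            out = np.fft.ifft2(np.fft.fft2(psf) * mask).real
            return np.clip(out, 0.0, None)    # the PSF must also stay non-negative

        psf_estimate = np.random.rand(64, 64)
        psf_estimate = band_limit(psf_estimate, cutoff_frac=0.6)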

  16. A hybrid data compression approach for online backup service

    NASA Astrophysics Data System (ADS)

    Wang, Hua; Zhou, Ke; Qin, MingKang

    2009-08-01

    With the popularity of SaaS (Software as a Service), backup services have become a hot topic in storage applications. Because of the large number of backup users, reducing the massive data load is a key problem for system designers, and data compression provides a good solution. Traditional data compression applications tend to adopt a single method, which has limitations in some respects; for example, data-stream compression can only realize intra-file compression, while de-duplication is used to eliminate inter-file redundant data, so neither alone meets the efficiency needs of backup service software. This paper proposes a novel hybrid compression approach with two levels: global compression and block compression. The former eliminates redundant inter-file copies across different users, while the latter adopts data-stream compression technology to remove intra-file redundancy. Several compression algorithms were evaluated for compression ratio and CPU time, and the suitability of different algorithms in particular situations is also analyzed. The performance analysis shows that a great improvement is achieved through the hybrid compression policy.
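
    A toy sketch of the two-level idea, under simple assumptions (fixed-size chunks, SHA-256 content addressing, zlib for the stream level); real backup software would use content-defined chunking and a persistent store.

        # Hypothetical sketch: global de-duplication across files/users via
        # content hashing, plus zlib compression of each unique chunk.
        import hashlib
        import zlib

        store = {}                                  # chunk hash -> compressed bytes

        def backup(data: bytes, chunk_size=4096):
            refs = []
            for i in range(0, len(data), chunk_size):
                chunk = data[i:i + chunk_size]
                key = hashlib.sha256(chunk).hexdigest()
                if key not in store:                # inter-file / inter-user dedup
                    store[key] = zlib.compress(chunk)   # intra-chunk compression
                refs.append(key)
            return refs

        def restore(refs):
            return b"".join(zlib.decompress(store[k]) for k in refs)

        refs = backup(b"hello world" * 10000)
        assert restore(refs) == b"hello world" * 10000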

  17. The Communication Link and Error ANalysis (CLEAN) simulator

    NASA Technical Reports Server (NTRS)

    Ebel, William J.; Ingels, Frank M.; Crowe, Shane

    1993-01-01

    During the period July 1, 1993 through December 30, 1993, significant developments to the Communication Link and Error ANalysis (CLEAN) simulator were completed, including: (1) soft-decision Viterbi decoding; (2) node synchronization for the soft-decision Viterbi decoder; (3) insertion/deletion error programs; (4) a convolutional encoder; (5) programs to investigate new convolutional codes; (6) a pseudo-noise sequence generator; (7) a soft-decision data generator; (8) RICE compression/decompression (integration of RICE code generated by Pen-Shu Yeh at Goddard Space Flight Center); (9) Markov chain channel modeling; (10) a percent-complete indicator for program execution; (11) header documentation; and (12) a help utility. The CLEAN simulation tool is now capable of simulating a very wide variety of satellite communication links, including the TDRSS downlink with RFI. The RICE compression/decompression schemes allow studies to be performed on the effects of errors on RICE-decompressed data. The Markov chain modeling programs allow channels with memory to be simulated; memory results from filtering, forward error correction encoding/decoding, differential encoding/decoding, channel RFI, nonlinear transponders, and many other satellite system processes. Besides the development of the simulator, a study was performed to determine whether the PCI provides a performance improvement for the TDRSS downlink. RFI with several duty cycles exists on the TDRSS downlink. We conclude that the PCI does not improve performance for any of these interferers, except possibly the one that occurs for the TDRS East. Therefore, the usefulness of the PCI is a function of the time spent transmitting data to the WSGT through the TDRS East transponder.

  18. Ramp compression of a metallic liner driven by a shaped 5 MA current on the SPHINX machine

    NASA Astrophysics Data System (ADS)

    d'Almeida, T.; Lassalle, F.; Morell, A.; Grunenwald, J.; Zucchini, F.; Loyen, A.; Maysonnave, T.; Chuvatin, A.

    2014-05-01

    SPHINX is a 6 MA, 1-µs Linear Transformer Driver operated by CEA Gramat (France) and primarily used for imploding Z-pinch loads for radiation effects studies. A method for performing magnetic ramp compression experiments was developed using a compact Dynamic Load Current Multiplier (DLCM) inserted between the convolute and the load to shape the initial current pulse. We present the overall experimental configuration chosen for these experiments and initial results obtained over a set of experiments on an aluminum cylindrical liner. Current profiles measured at various critical locations across the system are in good agreement with simulated current profiles. The liner inner free-surface velocity measurements agree with the hydrocode results obtained using the measured load current as the input. The potential of the technique, in terms of applications and achievable ramp pressure levels, lies in the prospects for improving the DLCM efficiency.

  19. Alfabetizacion de las personas que son sordas e invidentes. Hoja informativa de DB-LINK (Literacy for Persons Who Are Deaf-Blind. DB-LINK Fact Sheet).

    ERIC Educational Resources Information Center

    Miles, Barbara

    This fact sheet discusses the importance of literacy for individuals who are deaf-blind, the social functions of reading and writing, and conditions necessary for the development of literacy. Strategies for promoting literacy among this population are described and include: (1) invite children and adults who are deaf-blind to observe as you use…

  20. Glaucoma (image)

    MedlinePlus

    Glaucoma is a condition of increased fluid pressure inside the eye. The increased pressure causes compression of ... nerve which can eventually lead to nerve damage. Glaucoma can cause partial vision loss, with blindness as ...

  1. Compression fractures of the back

    MedlinePlus

    ... treatments. Surgery can include balloon kyphoplasty, vertebroplasty, and spinal fusion. Other surgery may be done to remove bone ... Alternative names: vertebral compression fractures; osteoporosis - compression fracture.

  2. Protocol for a pilot randomised controlled clinical trial to compare the effectiveness of a graduated three layer straight tubular bandaging system when compared to a standard short stretch compression bandaging system in the management of people with venous ulceration: 3VSS2008

    PubMed Central

    2010-01-01

    Background The incidence of venous ulceration is rising with the increasing age of the general population. Venous ulceration represents the most prevalent form of difficult-to-heal wounds, and these problematic wounds require a significant amount of health care resources for treatment. Based on current knowledge, the multi-layer high-compression system is described as the gold standard for treating venous ulcers. However, to date, despite advances in venous ulcer therapy, no convincing low-cost compression therapy studies have been conducted, and there are no clear differences in the effectiveness of different types of high compression. Methods/Design The trial is designed as a pilot multicentre open-label parallel-group randomised trial. Male and female participants older than 18 years with a venous ulcer confirmed by clinical assessment will be randomised to either the intervention compression bandage, which consists of graduated lengths of 3 layers of elastic tubular compression bandage, or to the short stretch inelastic compression bandage (control). The primary objective is to assess the percentage wound reduction from baseline to week 12 following randomisation. Randomisation will be allocated via a web-based central independent randomisation service (nQuery v7) and stratified by study centre and wound size ≤ 10 cm2 or >10 cm2. Neither participants nor study staff will be blinded to treatment. Outcome assessments will be undertaken by an assessor who is blinded to the randomisation process. Discussion The aim of this study is to evaluate the efficacy and safety of two compression bandages: graduated three-layer straight tubular bandaging (3L) compared to standard short stretch (SS) compression bandaging in healing venous ulcers in patients with chronic venous ulceration. The trial investigates the differences in clinical outcomes of two currently accepted ways of treating people with venous ulcers. This study will help answer the question of whether the 3L compression system or the SS compression system is associated with better outcomes. Trial Registration ACTRN12608000599370 PMID:20214822

  3. High Critical Current in Metal Organic Derived YBCO Films

    DTIC Science & Technology

    2010-10-31

    process, particularly in reel-to-reel manufacturing equipment. During Phase I, a "three-step" conversion process was developed to de-convolute the ... Task 3. After reaction, the 40-mm web was coated on both sides with a silver layer, then slit into eight 4-mm width tapes which were laminated between

  4. The target-specific transporter and current status of diuretics as antihypertensive.

    PubMed

    Ali, Syed Salman; Sharma, Pramod Kumar; Garg, Vipin Kumar; Singh, Avnesh Kumar; Mondal, Sambhu Charan

    2012-04-01

    The currently available diuretics increase the urinary excretion of sodium chloride by selective inhibition of specific sodium transporters in the loop of Henle and the distal nephron. In recent years, the molecular cloning of the diuretic-sensitive sodium transporters of the distal convoluted tubule has improved our understanding of the cellular mechanisms of action of each class of diuretics. Diuretics are tools of considerable therapeutic importance; first, they effectively reduce blood pressure. Loop and thiazide diuretics are secreted from the proximal tubule via the organic anion transporter-1 and exert their diuretic action by binding to the Na(+)-K(+)-2Cl(-) co-transporter type 2 in the thick ascending limb and the Na(+)-Cl(-) co-transporter in the distal convoluted tubule, respectively. Recent studies in animal models suggest that the abundance of these ion transporters is affected by long-term diuretic administration. The WHO/ISH guidelines point out that diuretics enhance the efficacy of antihypertensive drugs and will most often be a component of combination therapy.

  5. JP3D compressed-domain watermarking of volumetric medical data sets

    NASA Astrophysics Data System (ADS)

    Ouled Zaid, Azza; Makhloufi, Achraf; Olivier, Christian

    2010-01-01

    Increasing transmission of medical data across multi-user systems raises concerns about medical image watermarking. Additionally, the use of volumetric images triggers the need for efficient compression techniques in picture archiving and communication systems (PACS) and telemedicine applications. This paper describes a hybrid data hiding/compression system adapted to volumetric medical imaging. The central contribution is to integrate blind watermarking, based on turbo trellis-coded quantization (TCQ), into the JP3D encoder. Results of our method applied to Magnetic Resonance (MR) and Computed Tomography (CT) medical images have shown that our watermarking scheme is robust to JP3D compression attacks and can provide a relatively high data-embedding rate while keeping distortion relatively low.

  6. Distance Education Technology for the New Millennium Compressed Video Teaching. ZIFF Papiere 101.

    ERIC Educational Resources Information Center

    Keegan, Desmond

    This monograph combines an examination of theoretical issues raised by the introduction of two-way video and similar systems into distance education (DE) with practical advice on using compressed video systems in DE programs. Presented in the first half of the monograph are the following: analysis of the intrinsic links between DE and technology…

  7. Effects of repetitive or intensified instructions in telephone assisted, bystander cardiopulmonary resuscitation: an investigator-blinded, 4-armed, randomized, factorial simulation trial.

    PubMed

    van Tulder, R; Roth, D; Krammel, M; Laggner, R; Heidinger, B; Kienbacher, C; Novosad, H; Chwojka, C; Havel, C; Sterz, F; Schreiber, W; Herkner, H

    2014-01-01

    Compression depth is frequently suboptimal in cardiopulmonary resuscitation (CPR). We investigated the effects of intensified wording and/or repetitive target-depth instructions on compression depth in telephone-assisted, protocol-driven, bystander CPR on a simulation manikin. Thirty-two volunteers performed 10 min of compression-only CPR in a prospective, investigator-blinded, 4-armed, factorial setting. Participants were randomized either to standard wording ("push down firmly 5 cm"), intensified wording ("it is very important to push down 5 cm every time"), or the standard or intensified wording repeated every 20 s. Three dispatchers were randomized to give these instructions. The primary outcome was relative compression depth (absolute compression depth minus leaning depth). Secondary outcomes were absolute distance, hands-off times, as well as BORG-scale and nine-hole peg test (NHPT) scores, pulse rate, and blood pressure to reflect physical exertion. We applied a random-effects linear regression model. Relative compression depth was 35 ± 10 mm (standard) versus 31 ± 11 mm (intensified wording) versus 25 ± 8 mm (repeated standard) and 31 ± 14 mm (repeated intensified wording). Adjusted for design, body mass index, and female sex, intensified wording and repetition led to decreased compression depths of 13 (95% CI -25 to -1) mm (p=0.04) and 9 (95% CI -21 to 3) mm (p=0.13), respectively. Secondary outcomes regarding intensified wording showed significant differences for absolute distance (43 ± 2 versus 20 (95% CI 3-37) mm; p=0.01) and hands-off times (60 ± 40 versus 157 (95% CI 63-251) s; p=0.04). In protocol-driven, telephone-assisted, bystander CPR, intensified wording and/or repetitive target-depth instructions do not improve compression depth compared to the standard instruction.

  8. Efficient 3D Watermarked Video Communication with Chaotic Interleaving, Convolution Coding, and LMMSE Equalization

    NASA Astrophysics Data System (ADS)

    El-Shafai, W.; El-Bakary, E. M.; El-Rabaie, S.; Zahran, O.; El-Halawany, M.; Abd El-Samie, F. E.

    2017-06-01

    Three-Dimensional Multi-View Video (3D-MVV) transmission over wireless networks suffers from macro-block losses due to either packet dropping or fading-motivated bit errors. Robust 3D-MVV transmission over wireless channels has therefore become a considerable research issue, owing to restricted resources and the presence of severe channel errors. A 3D-MVV stream is composed of multiple video streams shot simultaneously by several cameras around a single object. Therefore, it is an urgent task to achieve high compression ratios to meet future bandwidth constraints. Unfortunately, highly compressed 3D-MVV data becomes more sensitive and vulnerable to packet losses, especially in the case of heavy channel faults. Thus, in this paper, we suggest the application of a chaotic Baker interleaving approach with equalization and convolution coding for efficient Singular Value Decomposition (SVD) watermarked 3D-MVV transmission over an Orthogonal Frequency Division Multiplexing wireless system. Rayleigh fading and Additive White Gaussian Noise are considered in the realistic scenario of 3D-MVV transmission. The SVD-watermarked 3D-MVV frames are first converted to their luminance and chrominance components, which are then converted to binary data format. After that, chaotic interleaving is applied prior to the modulation process; it reduces the channel effects on the transmitted bit streams and also adds a degree of encryption to the transmitted 3D-MVV frames. To test the performance of the proposed framework, several simulation experiments on different SVD-watermarked 3D-MVV frames have been executed. The experimental results show that the received SVD-watermarked 3D-MVV frames still have high Peak Signal-to-Noise Ratios, and watermark extraction remains possible in the proposed framework.
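
    The chaotic interleaving step permutes symbol positions with a key-dependent chaotic map so that burst errors are scattered before decoding. As a stand-in illustration (a logistic-map permutation rather than the authors' Baker map), a hedged sketch:

        # Hypothetical sketch: chaotic interleaver -- a logistic map generates a
        # key-dependent permutation; deinterleaving inverts it exactly.
        import numpy as np

        def chaotic_permutation(n, x0=0.37, r=3.99):
            x, vals = x0, np.empty(n)
            for i in range(n):
                x = r * x * (1.0 - x)       # logistic map iteration
                vals[i] = x
            return np.argsort(vals)

        def interleave(bits, perm):
            return bits[perm]

        def deinterleave(bits, perm):
            out = np.empty_like(bits)
            out[perm] = bits
            return out

        bits = np.random.randint(0, 2, 1024)
        perm = chaotic_permutation(bits.size)
        assert np.array_equal(deinterleave(interleave(bits, perm), perm), bits)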

  9. Perfect blind restoration of images blurred by multiple filters: theory and efficient algorithms.

    PubMed

    Harikumar, G; Bresler, Y

    1999-01-01

    We address the problem of restoring an image from its noisy convolutions with two or more unknown finite impulse response (FIR) filters. We develop theoretical results about the existence and uniqueness of solutions, and show that under some generically true assumptions, both the filters and the image can be determined exactly in the absence of noise, and stably estimated in its presence. We present efficient algorithms to estimate the blur functions and their sizes. These algorithms are of two types, subspace-based and likelihood-based, and are extensions of techniques proposed for the solution of the multichannel blind deconvolution problem in one dimension. We present memory- and computation-efficient techniques to handle the very large matrices arising in the two-dimensional (2-D) case. Once the blur functions are determined, they are used in a multichannel deconvolution step to reconstruct the unknown image. The theoretical and practical implications of edge effects and "weakly exciting" images are examined. Finally, the algorithms are demonstrated on synthetic and real data.
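
    As a one-dimensional illustration of the subspace idea this paper extends to 2-D: for two channels y1 = h1 * x and y2 = h2 * x, the cross-relation h2 * y1 = h1 * y2 is linear in the stacked filter coefficients, so the filters can be read off a null vector of a structured matrix. A minimal numpy sketch under those assumptions (toy sizes, noise-free):

    import numpy as np

    rng = np.random.default_rng(0)
    L, K = 200, 4                        # signal length, filter length
    x = rng.standard_normal(L)
    h1, h2 = rng.standard_normal(K), rng.standard_normal(K)
    y1, y2 = np.convolve(x, h1), np.convolve(x, h2)

    def conv_matrix(y, K):
        """Matrix C with C @ h == np.convolve(y, h) for len(h) == K."""
        return np.column_stack([np.convolve(y, e) for e in np.eye(K)])

    # Cross-relation: conv(y1, h2) - conv(y2, h1) == 0, linear in (h2, h1).
    A = np.hstack([conv_matrix(y1, K), -conv_matrix(y2, K)])
    v = np.linalg.svd(A)[2][-1]          # right singular vector of smallest
    h2_est, h1_est = v[:K], v[K:]        # singular value; filters up to scale
    print(np.allclose(np.outer(h1_est, h2), np.outer(h1, h2_est)))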

  10. Multichannel blind iterative image restoration.

    PubMed

    Sroubek, Filip; Flusser, Jan

    2003-01-01

    Blind image deconvolution is required in many applications of microscopy imaging, remote sensing, and astronomical imaging. Unfortunately, in a single-channel framework, serious conceptual and numerical problems are often encountered. Very recently, an eigenvector-based method (EVAM) was proposed for the multichannel framework; it determines the convolution masks perfectly in a noise-free environment if a channel disparity condition, called co-primeness, is satisfied. We propose a novel iterative algorithm based on recent anisotropic denoising techniques of total variation and a Mumford-Shah functional with the EVAM restoration condition included. A linearization scheme of half-quadratic regularization together with a cell-centered finite difference discretization scheme is used in the algorithm and provides a unified approach to the minimization of the total variation or Mumford-Shah functionals. The algorithm performs well even on very noisy images and does not require an exact estimation of mask orders. We demonstrate the capabilities of the algorithm on synthetic data. Finally, the algorithm is applied to defocused images taken with a digital camera and to data from astronomical ground-based observations of the Sun.

  11. Image Algebra Matlab language version 2.3 for image processing and compression research

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.; Ritter, Gerhard X.; Hayden, Eric

    2010-08-01

    Image algebra is a rigorous, concise notation that unifies linear and nonlinear mathematics in the image domain. Image algebra was developed under DARPA and US Air Force sponsorship at the University of Florida for over 15 years beginning in 1984. Image algebra has been implemented in a variety of programming languages designed specifically to support the development of image processing and computer vision algorithms and software. The University of Florida has been associated with the development of image algebra implementations in FORTRAN, Ada, Lisp, and C++. The latter implementation involved a class library, iac++, that supported image algebra programming in C++. Since image processing and computer vision are generally performed with operands that are array-based, the Matlab™ programming language is ideal for implementing the common subset of image algebra. Objects include sets and set operations, images and operations on images, as well as templates and image-template convolution operations. This implementation, called Image Algebra Matlab (IAM), has been found to be useful for research in data, image, and video compression, as described herein. Due to the widespread acceptance of the Matlab programming language in the computing community, IAM offers exciting possibilities for supporting a large group of users. The control over an object's computational resources provided to the algorithm designer by Matlab means that IAM programs can employ versatile representations for the operands and operations of the algebra, which are supported by the underlying libraries written in Matlab. In a previous publication, we showed how the functionality of iac++ could be carried forth into a Matlab implementation, and provided practical details of a prototype implementation called IAM Version 1. In this paper, we further elaborate the purpose and structure of image algebra, then present a maturing implementation of Image Algebra Matlab called IAM Version 2.3, which extends the previous implementation of IAM to include polymorphic operations over different point sets, as well as recursive convolution operations and functional composition. We also show how image algebra and IAM can be employed in image processing and compression research, as well as algorithm development and analysis.

  12. A VLSI decomposition of the deBruijn graph

    NASA Technical Reports Server (NTRS)

    Collins, O.; Dolinar, S.; Mceliece, R.; Pollara, F.

    1990-01-01

    A new Viterbi decoder for convolutional codes with constraint lengths up to 15, called the Big Viterbi Decoder, is under development for the Deep Space Network. It will be demonstrated by decoding data from the Galileo spacecraft, which has a rate 1/4, constraint-length 15 convolutional encoder on board. Here, the mathematical theory underlying the design of the very-large-scale-integrated (VLSI) chips that are being used to build this decoder is explained. The de Bruijn graph B_n describes the topology of a fully parallel, rate 1/v, constraint-length n+2 Viterbi decoder, and it is shown that B_n can be built by appropriately wiring together (i.e., connecting together with extra edges) many isomorphic copies of a fixed graph called a B_n building block. The efficiency of such a building block is defined as the fraction of the edges in B_n that are present in the copies of the building block. It is shown, among other things, that for any alpha less than 1, there exists a graph G which is a B_n building block of efficiency greater than alpha for all sufficiently large n. These results are illustrated by describing a special hierarchical family of de Bruijn building blocks, which has led to the design of the gate-array chips being used in the Big Viterbi Decoder.
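
    To make the graph concrete: B_n has 2^n vertices (the n-bit strings), each with edges to the two strings obtained by shifting in a 0 or a 1. A small illustrative helper (not the paper's VLSI construction) that enumerates those edges:

    def debruijn_edges(n):
        """Edges of the binary de Bruijn graph B_n.

        Vertex v (an n-bit integer) has edges to its two shifts,
        (2*v) mod 2**n and (2*v + 1) mod 2**n -- the state transitions
        of a fully parallel Viterbi decoder with constraint length n + 2.
        """
        N = 1 << n
        return [(v, ((v << 1) | b) % N) for v in range(N) for b in (0, 1)]

    # B_3: 8 vertices and 16 edges, e.g. 0b101 -> 0b010 and 0b101 -> 0b011.
    for v, w in debruijn_edges(3):
        print(f"{v:03b} -> {w:03b}")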

  13. A note on the blind deconvolution of multiple sparse signals from unknown subspaces

    NASA Astrophysics Data System (ADS)

    Cosse, Augustin

    2017-08-01

    This note studies the recovery of multiple sparse signals, x_n ∈ ℝ^L, n = 1, ..., N, from the knowledge of their convolution with an unknown point spread function h ∈ ℝ^L. When the point spread function is known to be nonzero, |h[k]| > 0, this blind deconvolution problem can be relaxed into a linear, ill-posed inverse problem in the vector concatenating the unknown inputs x_n together with the inverse of the filter, d ∈ ℝ^L, where d[k] := 1/h[k]. When prior information is given on the input subspaces, the resulting overdetermined linear system can be solved efficiently via least squares (see Ling et al. 2016). When no information is given on those subspaces, and the inputs are only known to be sparse, it still remains possible to recover these inputs along with the filter by considering an additional l1 penalty. This note certifies exact recovery of both the unknown PSF and the unknown sparse inputs, from the knowledge of their convolutions, as soon as the number of inputs N and the dimension of each input, L, satisfy L ≳ N and N ≳ T_max^2, up to log factors. Here T_max = max_n{T_n}, and T_n, n = 1, ..., N, denote the sizes of the supports of the inputs x_n. Our proof system combines the recent results on blind deconvolution via least squares, to certify invertibility of the linear map encoding the convolutions, with the construction of a dual certificate following the structure first suggested in Candès et al. 2007. Unlike in these papers, however, it is not possible to rely on the norm ‖(A_T^* A_T)^{-1}‖ to certify recovery. We instead use a combination of the Schur complement and Neumann series to compute an expression for the inverse (A_T^* A_T)^{-1}. Given this expression, it is possible to show that the poorly scaled blocks in (A_T^* A_T)^{-1} are multiplied by the better scaled ones, or vanish, in the construction of the certificate. Recovery is certified with high probability on the choice of the supports and the distribution of the signs of each input x_n on its support. The paper follows the line of previous work by Wang et al. 2016, where the authors guarantee recovery for subgaussian × Bernoulli inputs satisfying 𝔼|x_n[k]| ∈ [1/10, 1] as soon as N ≳ L. Examples of applications include seismic imaging with unknown source or marine seismic data deghosting, magnetic resonance autocalibration, and multiple channel estimation in communication. Numerical experiments are provided along with a discussion on the sample complexity tightness.
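
    To make the relaxation concrete, here is one hedged reading of the linearization step, assuming circular convolution so that convolution diagonalizes under the DFT (written with hats):

    \[
      \hat{y}_n \;=\; \hat{h} \odot \hat{x}_n
      \quad\Longleftrightarrow\quad
      \hat{d} \odot \hat{y}_n - \hat{x}_n = 0,
      \qquad \hat{d}[k] := 1/\hat{h}[k],
    \]

    which is linear in the concatenated unknown \((x_1, \dots, x_N, d)\); adding the penalty \(\sum_n \|x_n\|_1\) then encodes the sparsity prior on the inputs.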

  14. Enriched Air Nitrox Breathing Reduces Venous Gas Bubbles after Simulated SCUBA Diving: A Double-Blind Cross-Over Randomized Trial.

    PubMed

    Souday, Vincent; Koning, Nick J; Perez, Bruno; Grelon, Fabien; Mercat, Alain; Boer, Christa; Seegers, Valérie; Radermacher, Peter; Asfar, Pierre

    2016-01-01

    To test the hypothesis that enriched air nitrox (EAN) breathing during simulated diving reduces decompression stress, when compared to compressed air breathing, as assessed by intravascular bubble formation after decompression. Human volunteers underwent a first simulated dive breathing compressed air to identify subjects prone to post-decompression venous gas bubbling. Twelve subjects prone to bubbling underwent a double-blind, randomized, cross-over trial including one simulated dive breathing compressed air and one dive breathing EAN (36% O2) in a hyperbaric chamber, with identical diving profiles (28 msw for 55 minutes). Intravascular bubble formation was assessed after decompression using pulmonary artery pulsed Doppler. Twelve subjects showing high bubble production were included in the cross-over trial, and all completed the experimental protocol. In the randomized protocol, EAN significantly reduced the bubble score at all time points (cumulative bubble scores: 1 [0-3.5] vs. 8 [4.5-10]; P < 0.001). Three decompression incidents, all presenting as cutaneous itching, occurred in the air group versus zero in the EAN group (P = 0.217). Weak correlations were observed between bubble scores and age or body mass index, respectively. EAN breathing markedly reduces venous gas bubble emboli after decompression in volunteers selected for susceptibility to intravascular bubble formation. When using similar diving profiles and avoiding oxygen toxicity limits, EAN increases the safety of diving as compared to compressed air breathing. ISRCTN 31681480.

  15. Analysis of signal-dependent sensor noise on JPEG 2000-compressed Sentinel-2 multi-spectral images

    NASA Astrophysics Data System (ADS)

    Uss, M.; Vozel, B.; Lukin, V.; Chehdi, K.

    2017-10-01

    The processing chain of Sentinel-2 MultiSpectral Instrument (MSI) data involves filtering and compression stages that modify MSI sensor noise. As a result, the noise in Sentinel-2 Level-1C data distributed to users is processed noise. We demonstrate that the processed noise variance model is bivariate: noise variance depends on image intensity (caused by the signal-dependency of photon counting detectors) and on signal-to-noise ratio (SNR; caused by filtering/compression). To provide information on processed noise parameters, which is missing in the Sentinel-2 metadata, we propose to use a blind noise parameter estimation approach. Existing methods are restricted to univariate noise models. Therefore, we propose an extension of the existing vcNI+fBm blind noise parameter estimation method to the multivariate noise model, mvcNI+fBm, and apply it to each band of Sentinel-2A data. The obtained results clearly demonstrate that noise variance is affected by filtering/compression for SNR less than about 15. Processed noise variance is reduced by a factor of 2-5 in homogeneous areas as compared to the noise variance for high SNR values. Estimates of the noise variance model parameters are provided for each Sentinel-2A band. The Sentinel-2A MSI Level-1C noise models obtained in this paper could be useful for end users and researchers working in a variety of remote sensing applications.

  16. Compression of next-generation sequencing reads aided by highly efficient de novo assembly

    PubMed Central

    Jones, Daniel C.; Ruzzo, Walter L.; Peng, Xinxia

    2012-01-01

    We present Quip, a lossless compression algorithm for next-generation sequencing data in the FASTQ and SAM/BAM formats. In addition to implementing reference-based compression, we have developed, to our knowledge, the first assembly-based compressor, using a novel de novo assembly algorithm. A probabilistic data structure is used to dramatically reduce the memory required by traditional de Bruijn graph assemblers, allowing millions of reads to be assembled very efficiently. Read sequences are then stored as positions within the assembled contigs. This is combined with statistical compression of read identifiers, quality scores, alignment information and sequences, effectively collapsing very large data sets to <15% of their original size with no loss of information. Availability: Quip is freely available under the 3-clause BSD license from http://cs.washington.edu/homes/dcjones/quip. PMID:22904078
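
    The probabilistic data structure referred to here can be illustrated in a few lines: storing k-mers in a Bloom filter keeps the de Bruijn graph implicit, at the cost of occasional false-positive edges. A minimal sketch, with hash choices and sizes that are illustrative rather than Quip's:

    import hashlib

    class BloomFilter:
        """Minimal Bloom filter: set membership with false positives only."""
        def __init__(self, size_bits=1 << 20, num_hashes=4):
            self.m, self.k = size_bits, num_hashes
            self.bits = bytearray(self.m // 8)

        def _positions(self, item):
            for i in range(self.k):
                h = hashlib.sha256(f"{i}:{item}".encode()).digest()
                yield int.from_bytes(h[:8], "big") % self.m

        def add(self, item):
            for p in self._positions(item):
                self.bits[p // 8] |= 1 << (p % 8)

        def __contains__(self, item):
            return all(self.bits[p // 8] & (1 << (p % 8))
                       for p in self._positions(item))

    # Insert all k-mers of the reads; graph edges are queried, never stored.
    k, reads = 5, ["ACGTACGTGA", "CGTACGTGAT"]
    bf = BloomFilter()
    for r in reads:
        for i in range(len(r) - k + 1):
            bf.add(r[i:i + k])
    print("CGTAC" in bf)   # True; a tiny fraction of absent k-mers also hit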

  17. Human Brain Organoids on a Chip Reveal the Physics of Folding.

    PubMed

    Karzbrun, Eyal; Kshirsagar, Aditya; Cohen, Sidney R; Hanna, Jacob H; Reiner, Orly

    2018-05-01

    Human brain wrinkling has been implicated in neurodevelopmental disorders and yet its origins remain unknown. Polymer gel models suggest that wrinkling emerges spontaneously due to compression forces arising during differential swelling, but these ideas have not been tested in a living system. Here, we report the appearance of surface wrinkles during the in vitro development and self-organization of human brain organoids in a micro-fabricated compartment that supports in situ imaging over a timescale of weeks. We observe the emergence of convolutions at a critical cell density and maximal nuclear strain, which are indicative of a mechanical instability. We identify two opposing forces contributing to differential growth: cytoskeletal contraction at the organoid core and cell-cycle-dependent nuclear expansion at the organoid perimeter. The wrinkling wavelength exhibits linear scaling with tissue thickness, consistent with balanced bending and stretching energies. Lissencephalic (smooth brain) organoids display reduced convolutions, modified scaling and a reduced elastic modulus. Although the mechanism here does not include the neuronal migration seen in vivo, it models the physics of the folding brain remarkably well. Our on-chip approach offers a means for studying the emergent properties of organoid development, with implications for the embryonic human brain.

  18. Human brain organoids on a chip reveal the physics of folding

    NASA Astrophysics Data System (ADS)

    Karzbrun, Eyal; Kshirsagar, Aditya; Cohen, Sidney R.; Hanna, Jacob H.; Reiner, Orly

    2018-05-01

    Human brain wrinkling has been implicated in neurodevelopmental disorders and yet its origins remain unknown. Polymer gel models suggest that wrinkling emerges spontaneously due to compression forces arising during differential swelling, but these ideas have not been tested in a living system. Here, we report the appearance of surface wrinkles during the in vitro development and self-organization of human brain organoids in a microfabricated compartment that supports in situ imaging over a timescale of weeks. We observe the emergence of convolutions at a critical cell density and maximal nuclear strain, which are indicative of a mechanical instability. We identify two opposing forces contributing to differential growth: cytoskeletal contraction at the organoid core and cell-cycle-dependent nuclear expansion at the organoid perimeter. The wrinkling wavelength exhibits linear scaling with tissue thickness, consistent with balanced bending and stretching energies. Lissencephalic (smooth brain) organoids display reduced convolutions, modified scaling and a reduced elastic modulus. Although the mechanism here does not include the neuronal migration seen in vivo, it models the physics of the folding brain remarkably well. Our on-chip approach offers a means for studying the emergent properties of organoid development, with implications for the embryonic human brain.

  19. Fast Boundary Element Method for acoustics with the Sparse Cardinal Sine Decomposition

    NASA Astrophysics Data System (ADS)

    Alouges, François; Aussal, Matthieu; Parolin, Emile

    2017-07-01

    This paper presents the newly proposed Sparse Cardinal Sine Decomposition method, which allows fast convolution on unstructured grids. We focus on its use when coupled with finite element techniques to solve acoustic problems with the (compressed) Boundary Element Method. In addition, we compare the computational performance of two equivalent Matlab® and Python implementations of the method. We show validation test cases in order to assess the precision of the approach. Finally, the performance of the method is illustrated by the computation of the acoustic target strength of a realistic submarine from the Benchmark Target Strength Simulation international workshop.

  20. Reference-free compression of high throughput sequencing data with a probabilistic de Bruijn graph.

    PubMed

    Benoit, Gaëtan; Lemaitre, Claire; Lavenier, Dominique; Drezen, Erwan; Dayris, Thibault; Uricaru, Raluca; Rizk, Guillaume

    2015-09-14

    Data volumes generated by next-generation sequencing (NGS) technologies are now a major concern for both data storage and transmission. This has triggered the need for more efficient methods than general-purpose compression tools, such as the widely used gzip. We present a novel reference-free method to compress data issued from high throughput sequencing technologies. Our approach, implemented in the software LEON, employs techniques derived from existing assembly principles. The method is based on a reference probabilistic de Bruijn graph, built de novo from the set of reads and stored in a Bloom filter. Each read is encoded as a path in this graph, by memorizing an anchoring k-mer and a list of bifurcations. The same probabilistic de Bruijn graph is used to perform a lossy transformation of the quality scores, which makes it possible to obtain higher compression rates without losing pertinent information for downstream analyses. LEON was run on various real sequencing datasets (whole genome, exome, RNA-seq or metagenomics). In all cases, LEON showed higher overall compression ratios than state-of-the-art compression software. On a C. elegans whole genome sequencing dataset, LEON divided the original file size by more than 20. LEON is open source software, distributed under the GNU Affero GPL license, available for download at http://gatb.inria.fr/software/leon/.
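
    The anchor-and-bifurcations encoding can be pictured as follows: once the graph is built, a read is replaced by one anchoring k-mer plus the nucleotide chosen at each point where the graph branches. A toy sketch of that idea (using an explicit k-mer set where LEON uses a Bloom filter):

    def encode_read(read, k, kmers):
        """Encode a read as (anchor k-mer, branch choices).

        kmers: set of k-mers of the graph. At each step the successors
        of the current k-mer are probed; a nucleotide is recorded only
        where the graph bifurcates, i.e. where >1 successor exists.
        """
        anchor, cur, choices = read[:k], read[:k], []
        for base in read[k:]:
            succ = [b for b in "ACGT" if cur[1:] + b in kmers]
            if len(succ) > 1:          # bifurcation: record the choice
                choices.append(base)
            cur = cur[1:] + base
        return anchor, choices

    kmers = {"ACGTA", "CGTAC", "GTACG", "TACGT", "TACGA"}  # toy graph
    print(encode_read("ACGTACGT", 5, kmers))   # ('ACGTA', ['T'])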

  1. Role of dominant versus non-dominant hand position during uninterrupted chest compression CPR by novice rescuers: a randomized double-blind crossover study.

    PubMed

    Nikandish, Reza; Shahbazi, Sharbanoo; Golabi, Sedigheh; Beygi, Najimeh

    2008-02-01

    Previous research has suggested improved quality of chest compressions when the dominant hand is in contact with the sternum. However, that study was in health care professionals and during conventional chest compression-ventilation CPR. The aim of this study was to test the hypothesis, in null form, that the quality of external chest compressions (ECC) by novice rescuers during 5 min of uninterrupted chest compression CPR (UCC-CPR) is independent of the hand in contact with the sternum. Confirmation of the hypothesis would allow the use of either hand by novice rescuers during UCC-CPR. Fifty-nine first-year public health students participated in this randomised double-blind crossover study. After completion of a standard adult BLS course, they performed single-rescuer adult UCC-CPR for 5 min on a recording Resusci Anne. One week later they changed the hand in contact with the sternum while performing ECC. The quality of ECC was recorded by the skill meter for the dominant and non-dominant hand during the 5 min of ECC. The total number of correct chest compressions in the dominant hand group (DH), mean 183 ± 152, was not statistically different from the non-dominant hand group (NH), mean 152 ± 135 (P = 0.09). The number of ECC with inadequate depth in the DH group, mean 197 ± 174, and the NH group, mean 196 ± 173, were comparable (P = 0.1). The incidence of ECC exceeding the recommended depth in the DH group, mean 51 ± 110, and the NH group, mean 32 ± 75, were comparable (P = 0.1). Although there was a trend towards a higher number of correct chest compressions with the dominant hand in contact with the sternum, it did not reach statistical significance during 5 min of UCC-CPR by novice rescuers.

  2. Tandem mass spectrometry data quality assessment by self-convolution.

    PubMed

    Choo, Keng Wah; Tham, Wai Mun

    2007-09-20

    Many algorithms have been developed for deciphering tandem mass spectrometry (MS) data sets. They can essentially be clustered into two classes. The first performs searches on a theoretical mass spectrum database, while the second bases itself on de novo sequencing from raw mass spectrometry data. It has been noted that the quality of the mass spectra significantly affects the protein identification process in both instances. This prompted the authors to explore ways to measure the quality of MS data sets before subjecting them to the protein identification algorithms, thus allowing for more meaningful searches and an increased confidence level in the proteins identified. The proposed method measures the quality of MS data sets based on the symmetric property of the b- and y-ion peaks present in an MS spectrum. Self-convolution of the MS data with its time-reversed copy was employed. Due to the symmetric nature of the b-ion and y-ion peaks, the self-convolution result of a good spectrum produces its highest intensity peak at the midpoint. To reduce processing time, self-convolution was achieved using the Fast Fourier Transform and its inverse transform, followed by the removal of the "DC" (Direct Current) component and the normalisation of the data set. The quality score was defined as the ratio of the intensity at the midpoint to that of the remaining peaks of the convolution result. The method was validated using both theoretical mass spectra, with various permutations, and several real MS data sets. The results were encouraging, revealing a high percentage of positive prediction rates for spectra with good quality scores. We have demonstrated in this work a method for determining the quality of a tandem MS data set. By pre-determining the quality of tandem MS data before subjecting them to protein identification algorithms, spurious protein predictions due to poor tandem MS data are avoided, giving scientists greater confidence in the predicted results. We conclude that the algorithm performs well and could potentially be used as a pre-processing step for all mass spectrometry based protein identification tools.
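
    The scoring recipe translates almost directly into numpy: convolve the DC-removed, normalised spectrum with its time-reversed copy via the FFT (equivalently, compute its autocorrelation) and compare the midpoint peak with the rest. A minimal sketch under those assumptions:

    import numpy as np

    def spectrum_quality(intensities):
        """Quality score from self-convolution of an MS spectrum.

        Convolving the (DC-removed, normalised) spectrum with its
        time-reversed copy yields its autocorrelation; symmetric
        b-/y-ion pairs pile up at the midpoint, so a clean spectrum
        has a dominant central peak.  Score = midpoint / mean of rest.
        """
        a = np.asarray(intensities, dtype=float)
        a = (a - a.mean()) / np.linalg.norm(a)     # remove "DC", normalise
        n = a.size
        F = np.fft.rfft(a, 2 * n - 1)
        G = np.fft.rfft(a[::-1], 2 * n - 1)
        conv = np.fft.irfft(F * G, 2 * n - 1)      # full linear convolution
        mid = conv[n - 1]                          # autocorrelation at lag 0
        rest = np.abs(np.delete(conv, n - 1)).mean()
        return mid / rest

    print(spectrum_quality([0.1, 3.0, 0.2, 0.2, 3.0, 0.1]))  # symmetric: high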

  3. Symptoms in Response to Controlled Diesel Exhaust More Closely Reflect Exposure Perception Than True Exposure

    PubMed Central

    Carlsten, Chris; Oron, Assaf P.; Curtiss, Heidi; Jarvis, Sara; Daniell, William; Kaufman, Joel D.

    2013-01-01

    Background: Diesel exhaust (DE) exposures are very common, yet exposure-related symptoms have not been rigorously examined. Objective: To describe symptomatic responses to freshly generated and diluted DE and filtered air (FA) in a controlled human exposure setting, and to assess whether such responses are altered by the perception of exposure. Methods: 43 subjects participated in three double-blind crossover experiments with order-randomized DE exposure levels (FA and DE calibrated at 100 and/or 200 micrograms/m3 of particulate matter with diameter less than 2.5 microns), and completed questionnaires regarding symptoms and dose perception. Results: For a given symptom cluster, the majority of those exposed to moderate concentrations of diesel exhaust did not report such symptoms. The most commonly reported symptom cluster was of the nose (29%). Blinding to exposure was generally effective. Perceived exposure, rather than true exposure, was the dominant modifier of symptom reporting. Conclusion: Controlled human exposure to moderate-dose diesel exhaust is associated with a range of mild symptoms, though the majority of individuals will not experience any given symptom. Blinding to DE exposure is generally effective. Perceived DE exposure, rather than true DE exposure, is the dominant modifier of symptom reporting. PMID:24358296

  4. Clinical validation of different echocardiographic motion pictures expert group-4 algorithms and compression levels for telemedicine.

    PubMed

    Barbier, Paolo; Alimento, Marina; Berna, Giovanni; Cavoretto, Dario; Celeste, Fabrizio; Muratori, Manuela; Guazzi, Maurizio D

    2004-01-01

    Tele-echocardiography is not widely used because of lengthy transmission times when using standard Moving Picture Experts Group (MPEG)-2 lossy compression algorithms, unless expensive high-bandwidth lines are used. We sought to validate the newer MPEG-4 algorithms, which allow a further reduction in echocardiographic motion video file size. Four cardiologists expert in echocardiography blindly read 165 randomized uncompressed and compressed 2D and color Doppler normal and pathologic motion images. One Digital Video and 3 MPEG-4 compression algorithms were tested, the latter at 3 decreasing compression quality levels (100%, 65% and 40%). Mean diagnostic and image quality scores were computed for each file and compared across the 3 compression levels using uncompressed files as controls. File sizes decreased from an uncompressed range of 12-83 MB to an MPEG-4 range of 0.03-2.3 MB. All algorithms showed mean scores that were not significantly different from the uncompressed source, except the MPEG-4 DivX algorithm at the highest selected compression (40%, p = 0.002). These data support the use of MPEG-4 compression to reduce echocardiographic motion image size for transmission purposes, allowing cost reduction through the use of low-bandwidth lines.

  5. Advantages and disadvantages of graduated and inverse graduated compression hosiery in patients with chronic venous insufficiency and healthy volunteers: A prospective, mono-centric, blinded, open randomised, controlled and cross-over trial.

    PubMed

    Riebe, Helene; Konschake, Wolfgang; Haase, Hermann; Jünger, Michael

    2018-02-01

    Background: The therapeutic effectiveness of compression therapy depends on the selection of compression hosiery. Objectives: To assess the efficacy and tolerability of graduated elastic compression stockings (GECS) and inverse graduated elastic compression stockings (PECS). Methods: Thirty-two healthy volunteers and thirty-two patients with chronic venous insufficiency were analysed; the wear period was one week for each stocking type (randomised, blinded). Outcomes were volume reduction of the 'lower leg' (Image3D®) and of the 'distal leg and foot' (water plethysmography), as well as clinical symptoms of chronic venous insufficiency assessed by the Venous Clinical Severity Score, side effects, and wear comfort in both groups. Results: Volume of the 'lower leg': significant reduction in healthy volunteers (mean GECS: -37.5 mL, mean PECS: -37.2 mL) and in patients (mean GECS: -55.6 mL, mean PECS: -41.6 mL). Volume of the 'distal lower leg and foot': significant reduction in healthy volunteers (mean GECS: -27 mL, mean PECS: -16.7 mL); significant reduction in patients by GECS (mean: -43.4 mL), but non-significant reduction by PECS (mean: -22.6 mL). Clinical symptoms of chronic venous insufficiency were improved significantly more by GECS than by PECS, p < 0.001. GECS led to more painful constrictions, p = 0.047; PECS slipped down more often, p < 0.001. Conclusion: GECS and PECS reduce the volume of the 'lower leg' segment in patients and healthy volunteers. The volume of patients' 'distal lower leg and foot', however, was diminished significantly only by GECS (p = 0.0001). Patients' complaints were improved by both GECS and PECS, and GECS were superior to PECS.

  6. Entanglement-assisted quantum convolutional coding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilde, Mark M.; Brun, Todd A.

    2010-04-15

    We show how to protect a stream of quantum information from decoherence induced by a noisy quantum communication channel. We exploit preshared entanglement and a convolutional coding structure to develop a theory of entanglement-assisted quantum convolutional coding. Our construction produces a Calderbank-Shor-Steane (CSS) entanglement-assisted quantum convolutional code from two arbitrary classical binary convolutional codes. The rate and error-correcting properties of the classical convolutional codes directly determine the corresponding properties of the resulting entanglement-assisted quantum convolutional code. We explain how to encode our CSS entanglement-assisted quantum convolutional codes starting from a stream of information qubits, ancilla qubits, and shared entangled bits.

  7. Effect of Compression Garments on Physiological Responses After Uphill Running.

    PubMed

    Struhár, Ivan; Kumstát, Michal; Králová, Dagmar Moc

    2018-03-01

    Limited practical recommendations related to wearing compression garments can be drawn from the literature for athletes at the present time. We aimed to identify the effects of compression garments on physiological and perceptual measures of performance and recovery after uphill running, with different pressures and distributions of applied compression. In a randomized, double-blind study, 10 trained male runners undertook three 8 km treadmill runs at a 6% gradient at an intensity of 75% VO2max, wearing low grade compression garments, medium grade compression garments, or high reverse grade compression garments. In all trials, compression garments were worn for 4 hours post run. Creatine kinase, measurements of muscle soreness, ankle strength of plantar/dorsal flexors, and mean performance time were then measured. The best mean performance time was observed in the medium grade compression garments, the time difference being between the medium grade and the high reverse grade compression garments. A positive trend towards increasing peak torque of plantar flexion (60°·s-1, 120°·s-1) was found in the medium grade compression garments: a difference between 24 and 48 hours post run. The largest pain tolerance shift in the gastrocnemius muscle occurred with the medium grade compression garments, 24 hours post run, the shift being +11.37% for the lateral head and +6.63% for the medial head. In conclusion, a beneficial trend in the promotion of running performance and decreased muscle soreness within 24 hours post exercise was apparent with the medium grade compression garments.

  8. Note: The performance of new density functionals for a recent blind test of non-covalent interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mardirossian, Narbe; Head-Gordon, Martin

    Benchmark datasets of non-covalent interactions are essential for assessing the performance of density functionals and other quantum chemistry approaches. In a recent blind test, Taylor et al. benchmarked 14 methods on a new dataset consisting of 10 dimer potential energy curves calculated using coupled cluster with singles, doubles, and perturbative triples (CCSD(T)) at the complete basis set (CBS) limit (80 data points in total). The dataset is particularly interesting because compressed, near-equilibrium, and stretched regions of the potential energy surface are extensively sampled.

  9. Note: The performance of new density functionals for a recent blind test of non-covalent interactions

    DOE PAGES

    Mardirossian, Narbe; Head-Gordon, Martin

    2016-11-09

    Benchmark datasets of non-covalent interactions are essential for assessing the performance of density functionals and other quantum chemistry approaches. In a recent blind test, Taylor et al. benchmarked 14 methods on a new dataset consisting of 10 dimer potential energy curves calculated using coupled cluster with singles, doubles, and perturbative triples (CCSD(T)) at the complete basis set (CBS) limit (80 data points in total). The dataset is particularly interesting because compressed, near-equilibrium, and stretched regions of the potential energy surface are extensively sampled.

  10. Compressed air injection technique to standardize block injection pressures: [La technique d'injection d'air comprimé pour normaliser les pressions d'injection d'un blocage nerveux].

    PubMed

    Tsui, Ban C H; Li, Lisa X Y; Pillay, Jennifer J

    2006-11-01

    Presently, no standardized technique exists to monitor injection pressures during peripheral nerve blocks. Our objective was to determine if a compressed air injection technique, using an in vitro model based on Boyle's law and typical regional anesthesia equipment, could consistently maintain injection pressures below the 1293 mmHg level associated with clinically significant nerve injury. Injection pressures for 20 and 30 mL syringes with various needle sizes (18G, 20G, 21G, 22G, and 24G) were measured in a closed system. A set volume of air was aspirated into a saline-filled syringe and then compressed and maintained at various percentages while pressure was measured. The needle was inserted into the injection port of a pressure sensor, which had attached extension tubing with an injection plug clamped "off". Using linear regression with all data points, the pressure value and 99% confidence interval (CI) at 50% air compression were estimated. The linearity of Boyle's law was demonstrated with a high correlation, r = 0.99, and a slope of 0.984 (99% CI: 0.967-1.001). The net pressure generated at 50% compression was estimated as 744.8 mmHg, with the 99% CI between 729.6 and 760.0 mmHg. The various syringe/needle combinations had similar results. By creating and maintaining syringe air compression at 50% or less, injection pressures will be substantially below the 1293 mmHg threshold considered to be an associated risk factor for clinically significant nerve injury. This technique may allow simple, real-time and objective monitoring during local anesthetic injections while inherently reducing injection speed.
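
    The arithmetic behind the 50% rule follows from Boyle's law: halving the volume of the aspirated air doubles its absolute pressure, and subtracting one atmosphere gives the net (gauge) injection pressure. A worked check, assuming an ambient pressure of 760 mmHg:

    ATM_MMHG = 760.0          # assumed ambient pressure, 1 atm in mmHg

    def gauge_pressure(compression_fraction):
        """Net injection pressure after compressing air to a given fraction.

        Boyle's law (P1*V1 = P2*V2) for isothermal compression of the
        aspirated air bubble: P2 = P1 / fraction; gauge = P2 - P1.
        """
        return ATM_MMHG / compression_fraction - ATM_MMHG

    print(gauge_pressure(0.5))          # 760 mmHg, near the measured 744.8
    print(gauge_pressure(0.5) < 1293)   # True: safely under the threshold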

  11. New Physical Constraints for Multi-Frame Blind Deconvolution

    DTIC Science & Technology

    2014-12-10


  12. Clinical assessment of heart chamber size and valve motion during cardiopulmonary resuscitation by two-dimensional echocardiography.

    PubMed

    Rich, S; Wix, H L; Shapiro, E P

    1981-09-01

    It has been generally accepted that enhanced blood flow during closed-chest CPR is generated from compression of the heart between the sternum and the spine. To visualize the heart during closed-chest massage, we performed two-dimensional echocardiography (2DE) during resuscitation efforts in four patients who had cardiac arrest. 2DE analysis showed that (1) the left ventricular (LV) internal dimensions did not change appreciably with chest compression; (2) the mitral and aortic valves were open simultaneously during the compression phase; (3) blood flow into the right heart, as evidenced by saline bubble contrast, occurred during the relaxation phase; and (4) compression of the right ventricle and left atrium (LA) occurred in varying amounts in all patients. We conclude that stroke volume from the heart during CPR does not result from compression of the LV. Rather, the improved cardiocirculatory dynamics during CPR appear to be principally the result of changes in intrathoracic pressure created by sternal compression.

  13. Multisource Transfer Learning With Convolutional Neural Networks for Lung Pattern Analysis.

    PubMed

    Christodoulidis, Stergios; Anthimopoulos, Marios; Ebner, Lukas; Christe, Andreas; Mougiakakou, Stavroula

    2017-01-01

    Early diagnosis of interstitial lung diseases is crucial for their treatment, but even experienced physicians find it difficult, as the clinical manifestations of these diseases are similar. In order to assist with the diagnosis, computer-aided diagnosis systems have been developed. These commonly rely on a fixed-scale classifier that scans CT images, recognizes textural lung patterns, and generates a map of pathologies. In a previous study, we proposed a method for classifying lung tissue patterns using a deep convolutional neural network (CNN), with an architecture designed for the specific problem. In this study, we present an improved method for training the proposed network by transferring knowledge from the similar domain of general texture classification. Six publicly available texture databases are used to pretrain networks with the proposed architecture, which are then fine-tuned on the lung tissue data. The resulting CNNs are combined in an ensemble and their fused knowledge is compressed back to a network with the original architecture. The proposed approach resulted in an absolute increase of about 2% in the performance of the proposed CNN. The results demonstrate the potential of transfer learning in the field of medical image analysis, indicate the textural nature of the problem, and show that the method used for training a network can be as important as designing its architecture.

  14. Dynamic frame resizing with convolutional neural network for efficient video compression

    NASA Astrophysics Data System (ADS)

    Kim, Jaehwan; Park, Youngo; Choi, Kwang Pyo; Lee, JongSeok; Jeon, Sunyoung; Park, JeongHoon

    2017-09-01

    In the past, video codecs such as VC-1 and H.263 used techniques that encode reduced-resolution video and restore the original resolution at the decoder to improve coding efficiency. The techniques of VC-1 and H.263 Annex Q are called dynamic frame resizing and reduced-resolution update mode, respectively. However, these techniques have not been widely used due to limited performance improvements that materialize only under specific conditions. In this paper, a video frame resizing (reduction/restoration) technique based on machine learning is proposed to improve coding efficiency. The proposed method produces low-resolution video with a convolutional neural network (CNN) at the encoder and reconstructs the original resolution with a CNN at the decoder. The proposed method shows improved subjective performance on the high-resolution videos that are dominantly consumed today. To assess the subjective quality of the proposed method, Video Multi-method Assessment Fusion (VMAF), which has shown high reliability among subjective measurement tools, was used as the subjective metric. Moreover, to assess general performance, diverse bitrates were tested. Experimental results showed that the BD-rate based on VMAF was improved by about 51% compared to conventional HEVC. In particular, VMAF values were significantly improved at low bitrates. Also, in subjective testing the method showed better visual quality at similar bit rates.
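
    A toy PyTorch sketch of the encoder/decoder pairing described in the abstract; the layer sizes and the 2x factor are illustrative assumptions, since the paper's actual networks are not specified here:

    import torch
    import torch.nn as nn

    class Downscaler(nn.Module):
        """Encoder side: learn a 2x reduced-resolution frame."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 3, 3, stride=2, padding=1),  # 2x downscale
            )

        def forward(self, x):
            return self.net(x)

    class Upscaler(nn.Module):
        """Decoder side: restore original resolution from decoded frames."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 3 * 4, 3, padding=1),
                nn.PixelShuffle(2),                         # 2x upscale
            )

        def forward(self, x):
            return self.net(x)

    frame = torch.randn(1, 3, 128, 128)
    low = Downscaler()(frame)       # -> (1, 3, 64, 64), fed to the codec
    restored = Upscaler()(low)      # -> (1, 3, 128, 128) at the decoder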

  15. Seismic body wave separation in volcano-tectonic activity inferred by the Convolutive Independent Component Analysis

    NASA Astrophysics Data System (ADS)

    Capuano, Paolo; De Lauro, Enza; De Martino, Salvatore; Falanga, Mariarosaria; Petrosino, Simona

    2015-04-01

    One of the main challenges in the volcano-seismological literature is to locate and characterize the source of volcano-tectonic seismic activity. This requires identifying at least the onsets of the main phases, i.e., the body waves. Many efforts have been made to solve the problem of a clear separation of P and S phases, both from a theoretical point of view and by developing numerical algorithms suitable for specific cases (see, e.g., Küperkoch et al., 2012). Recently, a robust automatic procedure has been implemented for extracting the prominent seismic waveforms from continuously recorded signals, thus allowing the main phases to be picked. The intuitive notion of maximum non-gaussianity is achieved by adopting techniques which involve higher-order statistics in the frequency domain, i.e., Convolutive Independent Component Analysis (CICA). This technique is successful in the case of blind source separation of convolutive mixtures. In the seismological framework, indeed, seismic signals are thought of as the convolution of a source function with path, site and instrument responses. In addition, time-delayed versions of the same source exist, due to multipath propagation typically caused by reverberations from some obstacle. In this work, we focus on the volcano-tectonic (VT) activity at Campi Flegrei Caldera (Italy) during the 2006 ground uplift (Ciaramella et al., 2011). The activity comprised approximately 300 low-magnitude VT earthquakes (Md < 2; for the definition of duration magnitude, see Petrosino et al. 2008). Most of them were concentrated in distinct seismic sequences with hypocenters mainly clustered beneath the Solfatara-Accademia area, at depths ranging between 1 and 4 km b.s.l. The obtained results show a clear separation of P and S phases: the technique not only allows identification of the S-P time delay, giving the timing of both phases, but also provides the independent waveforms of the P and S phases. This is an enormous advantage for all problems related to source inversion and location. In addition, the VT seismicity was accompanied by hundreds of LP events (characterized by spectral peaks in the 0.5-2 Hz frequency band) that were concentrated in a 7-day interval. The main interest is to establish whether the occurrence of LPs is limited to the swarm that reached a climax on days 26-28 October, as indicated by Saccorotti et al. (2007), or extends over a longer period. The automatically extracted waveforms with improved signal-to-noise ratio via CICA, coupled with automatic phase picking, allowed us to compile a more complete seismic catalog and to better quantify the seismic energy release, including the presence of LP events from the beginning of October until mid-November. Finally, a further check of the volcanic nature of the extracted signals is achieved by looking at the seismological properties and the entropy content of the traces (Falanga and Petrosino 2012; De Lauro et al., 2012). Our results allow us to move towards a full description of the complexity of the source, which can be used for hazard-model development and forecast-model testing, providing an illustrative example of the applicability of the CICA method to regions with low seismicity and high ambient noise.

  16. Analysis of blind identification methods for estimation of kinetic parameters in dynamic medical imaging

    NASA Astrophysics Data System (ADS)

    Riabkov, Dmitri

    Compartment modeling of dynamic medical image data implies that the concentration of the tracer over time in a particular region of the organ of interest is well-modeled as a convolution of the tissue response with the tracer concentration in the blood stream. The tissue response is different for different tissues, while the blood input is assumed to be the same for different tissues. The kinetic parameters characterizing the tissue responses can be estimated by blind identification methods. These algorithms use simultaneous measurements of concentration in separate regions of the organ; if the regions have different responses, measurement of the blood input function may not be required. In this work it is shown that the blind identification problem has a unique solution for the two-compartment model tissue response. For two-compartment model tissue responses under dynamic cardiac MRI imaging conditions with gadolinium-DTPA contrast agent, three blind identification algorithms are analyzed here to assess their utility: the Eigenvector-based Algorithm for Multichannel Blind Deconvolution (EVAM), Cross Relations (CR), and Iterative Quadratic Maximum Likelihood (IQML). Comparisons of accuracy with conventional (not blind) identification techniques, where the blood input is known, are made as well. The statistical accuracies of estimation for the three methods are evaluated and compared for multiple parameter sets. The results show that the IQML method gives more accurate estimates than the other two blind identification methods. A proof is presented here that three-compartment model blind identification is not unique in the case of only two regions. It is shown that it is likely unique for the case of more than two regions, but this has not been proved analytically. For the three-compartment model, the tissue responses under dynamic FDG PET imaging conditions are analyzed with the blind identification algorithms EVAM and Separable variables Least Squares (SLS). A method of identification that assumes that the FDG blood input in the brain can be modeled as a function of time and several parameters (IFM) is analyzed as well. Nonuniform sampling SLS (NSLS) is developed due to the rapid change of the FDG concentration in the blood during the early post-injection stage. Comparisons of the accuracy of the EVAM, SLS, NSLS and IFM identification techniques are made.
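
    For concreteness, the convolution model for a two-compartment (one-tissue) kinetic model is commonly written as below; this standard form is an assumption here, since the abstract does not spell it out:

    \[
      C_T(t) \;=\; \int_0^t R(t-\tau)\, C_b(\tau)\, d\tau,
      \qquad R(t) = K_1\, e^{-k_2 t},
    \]

    so blind identification must estimate the region-wise parameters \((K_1, k_2)\), and implicitly the shared blood input \(C_b\), from the measured tissue curves \(C_T\) alone.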

  17. Photon Counting Computed Tomography With Dedicated Sharp Convolution Kernels: Tapping the Potential of a New Technology for Stent Imaging.

    PubMed

    von Spiczak, Jochen; Mannil, Manoj; Peters, Benjamin; Hickethier, Tilman; Baer, Matthias; Henning, André; Schmidt, Bernhard; Flohr, Thomas; Manka, Robert; Maintz, David; Alkadhi, Hatem

    2018-05-23

    The aims of this study were to assess the value of a dedicated sharp convolution kernel for photon counting detector (PCD) computed tomography (CT) for coronary stent imaging and to evaluate to what extent iterative reconstructions can compensate for potential increases in image noise. For this in vitro study, a phantom simulating coronary artery stenting was prepared. Eighteen different coronary stents were expanded in plastic tubes of 3 mm diameter. The tubes were filled with diluted contrast agent, sealed, and immersed in oil calibrated to an attenuation of -100 HU to simulate epicardial fat. The phantom was scanned in a modified second-generation 128-slice dual-source CT scanner (SOMATOM Definition Flash, Siemens Healthcare, Erlangen, Germany) equipped with both a conventional energy-integrating detector and a PCD. Image data were acquired using the PCD part of the scanner with 48 × 0.25 mm slices, a tube voltage of 100 kVp, and a tube current-time product of 100 mAs. Images were reconstructed using a conventional convolution kernel for stent imaging with filtered back-projection (B46) and with sinogram-affirmed iterative reconstruction (SAFIRE) at level 3 (I463). For comparison, a dedicated sharp convolution kernel with filtered back-projection (D70) and SAFIRE level 3 (Q703) and level 5 (Q705) was used. The D70 and Q70 kernels were specifically designed for coronary stent imaging with PCD CT by optimizing the image modulation transfer function and the separation of contrast edges. Two independent, blinded readers evaluated subjective image quality (Likert scale 0-3, where 3 = excellent), in-stent diameter difference, in-stent attenuation difference, mathematically defined image sharpness, and noise of each reconstruction. Interreader reliability was calculated using Goodman and Kruskal's γ and intraclass correlation coefficients (ICCs). Differences in image quality were evaluated using a Wilcoxon signed-rank test. Differences in in-stent diameter difference, in-stent attenuation difference, image sharpness, and image noise were tested using a paired-sample t test corrected for multiple comparisons. Interreader and intrareader reliability were excellent (γ = 0.953, ICCs = 0.891-0.999, and γ = 0.996, ICCs = 0.918-0.999, respectively). Reconstructions using the dedicated sharp convolution kernel yielded significantly better results regarding image quality (B46: 0.4 ± 0.5 vs D70: 2.9 ± 0.3; P < 0.001), in-stent diameter difference (1.5 ± 0.3 vs 1.0 ± 0.3 mm; P < 0.001), and image sharpness (728 ± 246 vs 2069 ± 411 CT numbers/voxel; P < 0.001). Regarding in-stent attenuation difference, no significant difference was observed between the 2 kernels (151 ± 76 vs 158 ± 92 CT numbers; P = 0.627). Noise was significantly higher in all sharp convolution kernel images but was reduced by 41% and 59% by applying SAFIRE levels 3 and 5, respectively (B46: 16 ± 1, D70: 111 ± 3, Q703: 65 ± 2, Q705: 46 ± 2 CT numbers; P < 0.001 for all comparisons). A dedicated sharp convolution kernel for PCD CT imaging of coronary stents yields superior qualitative and quantitative image characteristics compared with conventional reconstruction kernels. The resulting higher noise levels in sharp-kernel PCD imaging can be partially compensated with iterative image reconstruction techniques.

  18. Compression médullaire d'origine métastatique

    PubMed Central

    Bouhafa, Touria; Elmazghi, Abderrahman; Masbah, Ouafae; Hassouni, Khalid

    2014-01-01

    Metastatic spinal cord compression is a common neurological complication of cancer. It is a diagnostic and therapeutic emergency requiring rapid and effective management. Magnetic resonance imaging (MRI) is the examination of choice for exploring the entire spinal cord. Therapeutic management must be multidisciplinary, including corticosteroid therapy, radiotherapy and surgery. PMID:25829974

  19. Tweaked residual convolutional network for face alignment

    NASA Astrophysics Data System (ADS)

    Du, Wenchao; Li, Ke; Zhao, Qijun; Zhang, Yi; Chen, Hu

    2017-08-01

    We propose a novel Tweaked Residual Convolutional Network approach for face alignment with a two-level convolutional network architecture. Specifically, the first-level Tweaked Convolutional Network (TCN) module predicts the landmarks quickly, yet accurately enough for a preliminary estimate, by taking a low-resolution version of the detected face holistically as the input. The following Residual Convolutional Network (RCN) module progressively refines each landmark by taking as input the local patch extracted around the predicted landmark, which allows the Convolutional Neural Network (CNN) to extract local shape-indexed features to fine-tune the landmark position. Extensive evaluations show that the proposed Tweaked Residual Convolutional Network approach outperforms existing methods.

  20. DEEP MOTIF DASHBOARD: VISUALIZING AND UNDERSTANDING GENOMIC SEQUENCES USING DEEP NEURAL NETWORKS.

    PubMed

    Lanchantin, Jack; Singh, Ritambhara; Wang, Beilun; Qi, Yanjun

    2017-01-01

    Deep neural network (DNN) models have recently obtained state-of-the-art prediction accuracy for the transcription factor binding (TFBS) site classification task. However, it remains unclear how these approaches identify meaningful DNA sequence signals and give insights as to why TFs bind to certain locations. In this paper, we propose a toolkit called the Deep Motif Dashboard (DeMo Dashboard) which provides a suite of visualization strategies to extract motifs, or sequence patterns from deep neural network models for TFBS classification. We demonstrate how to visualize and understand three important DNN models: convolutional, recurrent, and convolutional-recurrent networks. Our first visualization method is finding a test sequence's saliency map which uses first-order derivatives to describe the importance of each nucleotide in making the final prediction. Second, considering recurrent models make predictions in a temporal manner (from one end of a TFBS sequence to the other), we introduce temporal output scores, indicating the prediction score of a model over time for a sequential input. Lastly, a class-specific visualization strategy finds the optimal input sequence for a given TFBS positive class via stochastic gradient optimization. Our experimental results indicate that a convolutional-recurrent architecture performs the best among the three architectures. The visualization techniques indicate that CNN-RNN makes predictions by modeling both motifs as well as dependencies among them.

  1. Deep Motif Dashboard: Visualizing and Understanding Genomic Sequences Using Deep Neural Networks

    PubMed Central

    Lanchantin, Jack; Singh, Ritambhara; Wang, Beilun; Qi, Yanjun

    2018-01-01

    Deep neural network (DNN) models have recently obtained state-of-the-art prediction accuracy for the transcription factor binding (TFBS) site classification task. However, it remains unclear how these approaches identify meaningful DNA sequence signals and give insights as to why TFs bind to certain locations. In this paper, we propose a toolkit called the Deep Motif Dashboard (DeMo Dashboard) which provides a suite of visualization strategies to extract motifs, or sequence patterns from deep neural network models for TFBS classification. We demonstrate how to visualize and understand three important DNN models: convolutional, recurrent, and convolutional-recurrent networks. Our first visualization method is finding a test sequence’s saliency map which uses first-order derivatives to describe the importance of each nucleotide in making the final prediction. Second, considering recurrent models make predictions in a temporal manner (from one end of a TFBS sequence to the other), we introduce temporal output scores, indicating the prediction score of a model over time for a sequential input. Lastly, a class-specific visualization strategy finds the optimal input sequence for a given TFBS positive class via stochastic gradient optimization. Our experimental results indicate that a convolutional-recurrent architecture performs the best among the three architectures. The visualization techniques indicate that CNN-RNN makes predictions by modeling both motifs as well as dependencies among them. PMID:27896980

  2. Compressive Sampling based Image Coding for Resource-deficient Visual Communication.

    PubMed

    Liu, Xianming; Zhai, Deming; Zhou, Jiantao; Zhang, Xinfeng; Zhao, Debin; Gao, Wen

    2016-04-14

    In this paper, a new compressive sampling based image coding scheme is developed to achieve competitive coding efficiency at lower encoder computational complexity, while supporting error resilience. This technique is particularly suitable for visual communication with resource-deficient devices. At the encoder, a compact image representation is produced: a polyphase down-sampled version of the input image, but with the conventional low-pass filter prior to down-sampling replaced by a local random binary convolution kernel. The pixels of the resulting down-sampled pre-filtered image are local random measurements placed in the original spatial configuration. The advantages of local random measurements are twofold: 1) they preserve high-frequency image features that would otherwise be discarded by low-pass filtering; 2) they remain a conventional image and can therefore be coded by any standardized codec to remove statistical redundancy at larger scales. Moreover, measurements generated by different kernels can be considered multiple descriptions of the original image, so the proposed scheme also has the advantage of multiple description coding. At the decoder, a unified sparsity-based soft-decoding technique is developed to recover the original image from the received measurements in a compressive sensing framework. Experimental results demonstrate that the proposed scheme is competitive with existing methods, with a unique strength in recovering fine details and sharp edges at low bit-rates.
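
    The encoder front end described above is easy to prototype. The sketch below replaces the low-pass pre-filter with a local random binary convolution kernel and then polyphase down-samples; the kernel size and sampling factor are illustrative assumptions, not the paper's settings.

        import numpy as np
        from scipy.signal import convolve2d

        def local_random_measurements(image, ksize=4, factor=2, seed=0):
            rng = np.random.default_rng(seed)
            kernel = rng.integers(0, 2, size=(ksize, ksize)).astype(float)
            kernel /= kernel.sum() or 1.0    # normalize; guard against an all-zero draw
            filtered = convolve2d(image, kernel, mode='same', boundary='symm')
            # Every retained pixel is a local random measurement kept in its
            # original spatial position, so the output is still an ordinary
            # image that any standardized codec can compress.
            return filtered[::factor, ::factor]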

  3. Contribution of MRI to the management of non-traumatic slow spinal cord compression

    PubMed Central

    Badji, Nfally; Deme, Hamidou; Akpo, Geraud; Ndong, Boucar; Toure, Mouhamadou Hamine; Diop, Sokhna Ba; Niang, El Hadji

    2016-01-01

    Slow spinal cord compression is caused by the development of an expansive lesion within the spinal canal. It is a very common condition whose diagnosis is essentially clinical. Magnetic resonance imaging plays an indispensable role in localizing the lesion and investigating its etiology. In Europe, tumoral etiologies predominate. The aim of this study was to describe the MRI features of slow spinal cord compression and to determine its etiological profile. This is a retrospective study of 97 cases collected at the radiology department of the CHUN de Fann over a period of 30 months (08/03/10 to 29/09/12). All patients referred for a clinical picture of slow spinal cord compression occurring in a non-traumatic context were included in the study. The mean patient age was 42.6 years, with extremes of 4 months and 85 years. We studied the topography of the lesions (spinal level, canal compartments), their enhancement, and the criteria for etiological orientation. The examination protocol comprised T1-weighted sequences without and with gadolinium injection, as well as T2, STIR, and T2 DRIVE sequences centered on the lesion levels or suspect areas. MRI made it possible to specify the exact site and extent of the lesions. The thoracic spine was involved in 42% of cases, followed by the cervical spine in 32% of cases. Lumbosacral and multilevel involvement accounted for 18% and 8% of cases, respectively. Extradural lesions accounted for 87% of cases, followed by intradural extramedullary lesions in 8% of cases and intramedullary lesions in 5% of cases. The distinctive feature of the etiological profile in our study is the predominance of infectious epiduritis and the relative frequency of metastatic epiduritis compared with Western series. Spinal MRI plays a key role in the positive, topographic, and etiological diagnosis of spinal cord compression. PMID:27800076

  4. Optimizing the Galileo space communication link

    NASA Technical Reports Server (NTRS)

    Statman, J. I.

    1994-01-01

    The Galileo mission was originally designed to investigate Jupiter and its moons utilizing a high-rate, X-band (8415 MHz) communication downlink with a maximum rate of 134.4 kb/sec. However, following the failure of the high-gain antenna (HGA) to fully deploy, a completely new communication link design was established that is based on Galileo's S-band (2295 MHz), low-gain antenna (LGA). The new link relies on data compression, local and intercontinental arraying of antennas, a (14,1/4) convolutional code, a (255,M) variable-redundancy Reed-Solomon code, decoding feedback, and techniques to reprocess recorded data to greatly reduce data losses during signal acquisition. The combination of these techniques will enable return of significant science data from the mission.

  5. Coordinated design of coding and modulation systems

    NASA Technical Reports Server (NTRS)

    Massey, J. L.

    1976-01-01

    Work on partial unit memory codes continued; it was shown that, for a given virtual state complexity, the maximum free distance over the class of all convolutional codes is achieved within the class of unit memory codes. The effect of phase-lock loop (PLL) tracking error on coding system performance was studied by using the channel cut-off rate as the measure of quality of a modulation system. The study of optimum modulation signal sets for a non-white Gaussian channel considered a heuristic selection rule based on a water-filling argument. The use of error correcting codes to perform data compression by the technique of syndrome source coding was researched, and a weight-and-error-locations scheme was developed that is closely related to LDSC coding.

  6. Data-dependent bucketing improves reference-free compression of sequencing reads.

    PubMed

    Patro, Rob; Kingsford, Carl

    2015-09-01

    The storage and transmission of high-throughput sequencing data consumes significant resources. As our capacity to produce such data continues to increase, this burden will only grow. One approach to reducing storage and transmission requirements is to compress the sequencing data. We present a novel technique to boost the compression of sequencing reads that is based on the concept of bucketing similar reads so that they appear nearby in the file. We demonstrate that, by adopting a data-dependent bucketing scheme and employing a number of encoding ideas, we can achieve substantially better compression ratios than existing de novo sequence compression tools, including other bucketing and reordering schemes. Our method, Mince, achieves up to a 45% reduction in file sizes (28% on average) compared with existing state-of-the-art de novo compression schemes. Mince is written in C++11, is open source, and has been made available under the GPLv3 license at http://www.cs.cmu.edu/∼ckingsf/software/mince. Contact: carlk@cs.cmu.edu. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.
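
    As a bare-bones illustration of the bucketing idea (a simplification, not Mince's actual algorithm), grouping reads by their lexicographically minimal k-mer already places similar reads next to each other, which helps a downstream general-purpose compressor find long matches:

        def minimizer(read, k=15):
            # Lexicographically smallest k-mer; assumes len(read) >= k.
            return min(read[i:i + k] for i in range(len(read) - k + 1))

        def bucket_reads(reads, k=15):
            buckets = {}
            for r in reads:
                buckets.setdefault(minimizer(r, k), []).append(r)
            # Concatenating buckets places similar reads nearby, so a
            # general-purpose compressor (e.g., gzip) finds longer matches.
            return [r for key in sorted(buckets) for r in buckets[key]]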

  7. Instructions to "push as hard as you can" improve average chest compression depth in dispatcher-assisted cardiopulmonary resuscitation.

    PubMed

    Mirza, Muzna; Brown, Todd B; Saini, Devashish; Pepper, Tracy L; Nandigam, Hari Krishna; Kaza, Niroop; Cofield, Stacey S

    2008-10-01

    Cardiopulmonary resuscitation (CPR) with adequate chest compression depth appears to improve first shock success in cardiac arrest. We evaluate the effect of simplification of chest compression instructions on compression depth in a dispatcher-assisted CPR protocol. Data from two randomized, double-blinded, controlled trials with identical methodology were combined to obtain 332 records for this analysis. Subjects were randomized to either the modified Medical Priority Dispatch System (MPDS) v11.2 protocol or a new simplified protocol. The main difference between the protocols was the instruction to "push as hard as you can" in the simplified protocol, compared to "push down firmly 2 in. (5 cm)" in MPDS. Data were recorded via a Laerdal ResusciAnne SkillReporter manikin. Primary outcome measures included chest compression depth and the proportion of compressions without error, with adequate depth, and with total release. Instructions to "push as hard as you can", compared to "push down firmly 2 in. (5 cm)", resulted in improved chest compression depth (36.4 mm vs. 29.7 mm, p<0.0001) and an improved median proportion of chest compressions done to the correct depth (32% vs. <1%, p<0.0001). No significant difference was found in the median proportion of compressions with total release (100% for both) or in average compression rate (99.7 vs. 97.5 compressions/min, p = 0.56). Modifying dispatcher-assisted CPR instructions by changing "push down firmly 2 in. (5 cm)" to "push as hard as you can" achieved improvement in chest compression depth at no cost to total release or average chest compression rate.

  8. Fast space-varying convolution using matrix source coding with applications to camera stray light reduction.

    PubMed

    Wei, Jianing; Bouman, Charles A; Allebach, Jan P

    2014-05-01

    Many imaging applications require the implementation of space-varying convolution for accurate restoration and reconstruction of images. Here, we use the term space-varying convolution to refer to linear operators whose impulse response has slow spatial variation. In addition, these space-varying convolution operators are often dense, so direct implementation of the convolution operator is typically computationally impractical. One such example is the problem of stray light reduction in digital cameras, which requires the implementation of a dense space-varying deconvolution operator. However, other inverse problems, such as iterative tomographic reconstruction, can also depend on the implementation of dense space-varying convolution. While space-invariant convolution can be efficiently implemented with the fast Fourier transform, this approach does not work for space-varying operators. So direct convolution is often the only option for implementing space-varying convolution. In this paper, we develop a general approach to the efficient implementation of space-varying convolution, and demonstrate its use in the application of stray light reduction. Our approach, which we call matrix source coding, is based on lossy source coding of the dense space-varying convolution matrix. Importantly, by coding the transformation matrix, we not only reduce the memory required to store it; we also dramatically reduce the computation required to implement matrix-vector products. Our algorithm is able to reduce computation by approximately factoring the dense space-varying convolution operator into a product of sparse transforms. Experimental results show that our method can dramatically reduce the computation required for stray light reduction while maintaining high accuracy.
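
    The essence of matrix source coding, approximating the dense operator by sparse transforms around a thresholded core, can be illustrated with an orthonormal DCT standing in for the paper's wavelet-based coding; this is a simplified stand-in, not the authors' implementation.

        import numpy as np
        from scipy.fft import dct, idct

        def sparsify_operator(A, keep=0.05):
            """Transform dense A into a (mostly) sparse core: core = C A C^T."""
            core = dct(dct(A, axis=0, norm='ortho'), axis=1, norm='ortho')
            thresh = np.quantile(np.abs(core), 1.0 - keep)
            core[np.abs(core) < thresh] = 0.0    # lossy step: drop small coefficients
            return core                          # store as scipy.sparse in practice

        def apply_operator(core, x):
            """y = A x computed as C^T (core (C x)); cost scales with nnz(core)."""
            xt = dct(x, norm='ortho')
            return idct(core @ xt, norm='ortho')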

  9. A novel algorithm to detect glaucoma risk using texton and local configuration pattern features extracted from fundus images.

    PubMed

    Acharya, U Rajendra; Bhat, Shreya; Koh, Joel E W; Bhandary, Sulatha V; Adeli, Hojjat

    2017-09-01

    Glaucoma is an optic neuropathy defined by characteristic damage to the optic nerve and accompanying visual field deficits. Early diagnosis and treatment are critical to prevent irreversible vision loss and ultimate blindness. Current techniques for computer-aided analysis of the optic nerve and retinal nerve fiber layer (RNFL) are expensive and require keen interpretation by trained specialists. Hence, an automated system is highly desirable for cost-effective and accurate glaucoma screening. This paper presents a new methodology and a computerized diagnostic system. Adaptive histogram equalization is used to convert color images to grayscale images, followed by convolution of these images with Leung-Malik (LM), Schmid (S), and maximum response (MR4 and MR8) filter banks. The basic microstructures in typical images are called textons, and the convolution process produces them. Local configuration pattern (LCP) features are extracted from these textons. The significant features are selected using a sequential floating forward search (SFFS) method and ranked using the statistical t-test. Finally, various classifiers are used to classify images into normal and glaucomatous classes. A high classification accuracy of 95.8% is achieved using six features obtained from the LM filter bank and the k-nearest neighbor (kNN) classifier. A glaucoma risk index (GRI) is also formulated to obtain a reliable and effective system. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Histological and three dimensional organizations of lymphoid tubules in normal lymphoid organ of Penaeus monodon.

    PubMed

    Duangsuwan, Pornsawan; Phoungpetchara, Ittipon; Tinikul, Yotsawan; Poljaroen, Jaruwan; Wanichanon, Chaitip; Sobhon, Prasert

    2008-04-01

    The normal lymphoid organ of Penaeus monodon (which tested negative for WSSV and YHV) was composed of two parts: lymphoid tubules and interstitial spaces, which were permeated with haemal sinuses filled with large numbers of haemocytes. There were three permanent types of cells present in the wall of lymphoid tubules: endothelial, stromal and capsular cells. Haemocytes penetrated the endothelium of the lymphoid tubule's wall to reside among the fixed cells. The outermost layer of the lymphoid tubule was covered by a network of fibers embedded in a PAS-positive extracellular matrix, which corresponded to a basket-like network that covered all the lymphoid tubules as visualized by a scanning electron microscope (SEM). Argyrophilic reticular fibers surrounded haemal sinuses and lymphoid tubules. Together they formed the scaffold that supported the lymphoid tubule. Using vascular cast and SEM, the three dimensional structure of the subgastric artery that supplies each lobe of the lymphoid organ was reconstructed. This artery branched into highly convoluted and blind-ending terminal capillaries, each forming the lumen of a lymphoid tubule around which haemocytes and other cells aggregated to form a cuff-like wall. Stromal cells which form part of the tubular scaffold were immunostained for vimentin. Examination of the whole-mounted lymphoid organ, immunostained for vimentin, by confocal microscopy exhibited the highly branching and convoluted lymphoid tubules matching the pattern of the vascular cast observed in SEM.

  11. Comparison of blind intubation through the I-gel and ILMA Fastrach by nurses during cardiopulmonary resuscitation: a manikin study.

    PubMed

    Melissopoulou, Theodora; Stroumpoulis, Konstantinos; Sampanis, Michail A; Vrachnis, Nikolaos; Papadopoulos, Georgios; Chalkias, Athanasios; Xanthos, Theodoros

    2014-01-01

    To investigate whether nursing staff can successfully use the I-gel and the intubating laryngeal mask Fastrach (ILMA) during cardiopulmonary resuscitation. Although tracheal intubation is considered the optimal method for securing the airway during cardiopulmonary resuscitation, laryngoscopy requires a high level of skill. Forty-five nurses inserted the I-gel and the ILMA in a manikin, both with continuous chest compressions and without. Mean intubation times for the ILMA and I-gel without chest compressions were 20.60 ± 3.27 and 18.40 ± 3.26 s, respectively (p < 0.0005). The ILMA proved more successful than the I-gel regardless of compressions. Continuing compressions prolonged intubation times for both the I-gel (p < 0.0005) and the ILMA (p < 0.0005). In this manikin study, nursing staff could successfully intubate using the I-gel and the ILMA as conduits, with comparable success rates, regardless of whether chest compressions were interrupted. Copyright © 2014 Elsevier Inc. All rights reserved.

  12. The trellis complexity of convolutional codes

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Lin, W.

    1995-01-01

    It has long been known that convolutional codes have a natural, regular trellis structure that facilitates the implementation of Viterbi's algorithm. It has gradually become apparent that linear block codes also have a natural, though not in general a regular, 'minimal' trellis structure, which allows them to be decoded with a Viterbi-like algorithm. In both cases, the complexity of the Viterbi decoding algorithm can be accurately estimated by the number of trellis edges per encoded bit. It would, therefore, appear that we are in a good position to make a fair comparison of the Viterbi decoding complexity of block and convolutional codes. Unfortunately, however, this comparison is somewhat muddled by the fact that some convolutional codes, the punctured convolutional codes, are known to have trellis representations that are significantly less complex than the conventional trellis. In other words, the conventional trellis representation for a convolutional code may not be the minimal trellis representation. Thus, ironically, at present we seem to know more about the minimal trellis representation for block than for convolutional codes. In this article, we provide a remedy, by developing a theory of minimal trellises for convolutional codes. (A similar theory has recently been given by Sidorenko and Zyablov). This allows us to make a direct performance-complexity comparison for block and convolutional codes. A by-product of our work is an algorithm for choosing, from among all generator matrices for a given convolutional code, what we call a trellis-minimal generator matrix, from which the minimal trellis for the code can be directly constructed. Another by-product is that, in the new theory, punctured convolutional codes no longer appear as a special class, but simply as high-rate convolutional codes whose trellis complexity is unexpectedly small.
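
    For the conventional trellis, the edges-per-bit measure used above is straightforward to compute: an (n, k) code with total memory nu has 2^nu states, each with 2^k outgoing edges, and each trellis section encodes n bits. A small sanity-check helper (the minimal trellis of the article can only do better):

        def conventional_trellis_edges_per_bit(n, k, nu):
            """Viterbi complexity measure for the conventional trellis of an
            (n, k) convolutional code with total memory nu."""
            return (2 ** nu) * (2 ** k) / n

        # Example: the (2, 1) code with memory nu = 6 (constraint length 7)
        print(conventional_trellis_edges_per_bit(2, 1, 6))   # 64.0 edges per bit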

  13. Blindness associated with nasal/paranasal lymphoma in a stallion.

    PubMed

    Sano, Yuto; Okamoto, Minoru; Ootsuka, Youhei; Matsuda, Kazuya; Yusa, Shigeki; Taniyama, Hiroyuki

    2017-03-23

    A 29-year-old stallion presented with bilateral blindness following chronic purulent nasal drainage. A mass occupied the right caudal nasal cavity and right paranasal sinuses, including the maxillary, palatine, and sphenoidal sinuses; the right-side turbinal and paranasal septal bones and the cribriform plate of the ethmoid bone were destructively replaced by the growing mass. The right optic nerve was invaded and engulfed by the mass, and the left optic nerve and optic chiasm were compressed by the mass, which extended into and invaded the skull base. Histologically, the optic nerves and optic chiasm were degenerated, and the mass was diagnosed as lymphoma, classified morphologically and immunohistochemically as a diffuse large B-cell lymphoma. Based on these findings, the cause of the blindness was concluded to be degeneration of the optic nerves and chiasm associated with lymphoma arising in the nasal and paranasal cavities. To the best of our knowledge, this is the first report of equine blindness with optic nerve degeneration caused by lymphoma.

  14. How can audiovisual pathways enhance the temporal resolution of time-compressed speech in blind subjects?

    PubMed

    Hertrich, Ingo; Dietrich, Susanne; Ackermann, Hermann

    2013-01-01

    In blind people, the visual channel cannot assist face-to-face communication via lipreading or visual prosody. Nevertheless, the visual system may enhance the evaluation of auditory information due to its cross-links to (1) the auditory system, (2) supramodal representations, and (3) frontal action-related areas. Apart from feedback or top-down support of, for example, the processing of spatial or phonological representations, experimental data have shown that the visual system can impact auditory perception at more basic computational stages such as temporal signal resolution. For example, blind subjects, as compared to sighted ones, are more resistant to backward masking, and this ability appears to be associated with activity in visual cortex. Regarding the comprehension of continuous speech, blind subjects can learn to use accelerated text-to-speech systems for "reading" texts at ultra-fast speaking rates (>16 syllables/s), far exceeding the normal range of 6 syllables/s. A functional magnetic resonance imaging study has shown that this ability, among other brain regions, significantly covaries with BOLD responses in bilateral pulvinar, right visual cortex, and left supplementary motor area. Furthermore, magnetoencephalographic measurements revealed a particular component in right occipital cortex phase-locked to the syllable onsets of accelerated speech. In sighted people, the "bottleneck" for understanding time-compressed speech seems related to higher demands for buffering phonological material and is, presumably, linked to frontal brain structures. On the other hand, the neurophysiological correlates of functions overcoming this bottleneck seem to depend upon early visual cortex activity. The present Hypothesis and Theory paper outlines a model that aims to bind these data together, based on early cross-modal pathways that are already known from various audiovisual experiments on cross-modal adjustments during space, time, and object recognition.

  16. High-grade video compression of echocardiographic studies: a multicenter validation study of selected motion pictures expert groups (MPEG)-4 algorithms.

    PubMed

    Barbier, Paolo; Alimento, Marina; Berna, Giovanni; Celeste, Fabrizio; Gentile, Francesco; Mantero, Antonio; Montericcio, Vincenzo; Muratori, Manuela

    2007-05-01

    Large files produced by standard compression algorithms slow down the spread of digital echocardiography and tele-echocardiography. We validated high-grade compression of echocardiographic video with the new Motion Pictures Expert Group (MPEG)-4 algorithms in a multicenter study. Seven expert cardiologists blindly scored (5-point scale) 165 uncompressed and compressed 2-dimensional and color Doppler video clips, based on combined diagnostic content and image quality (uncompressed files served as references). One digital video and 3 MPEG-4 algorithms (WM9, MV2, and DivX) were used, the latter at 3 compression levels (0%, 35%, and 60%). Compressed file sizes decreased from 12-83 MB to 0.03-2.3 MB (reduction ratios of 1:1051 to 1:26). The mean SD of differences was 0.81 for intraobserver variability (uncompressed and digital video files). Compared with uncompressed files, only the DivX mean score at 35% (P = .04) and 60% (P = .001) compression was significantly reduced. At subcategory analysis, these differences remained significant for gray-scale and fundamental imaging but not for color or second-harmonic tissue imaging. Original image quality, session sequence, compression grade, and bitrate were all independent determinants of the mean score. Our study supports the use of MPEG-4 algorithms to greatly reduce echocardiographic file sizes, thus facilitating archiving and transmission. Quality evaluation studies should account for the many independent variables that affect image quality grading.

  17. Powerline noise elimination in biomedical signals via blind source separation and wavelet analysis.

    PubMed

    Akwei-Sekyere, Samuel

    2015-01-01

    The distortion of biomedical signals by powerline noise from recording biomedical devices can degrade data quality and complicate interpretation. Usually, powerline noise in biomedical recordings is suppressed via band-stop filters. However, due to the instability of biomedical signals, the distribution of the filtered-out components may not be centered at 50/60 Hz. As a result, self-correction methods are needed to optimize the performance of these filters. Since powerline noise is additive in nature, it is intuitive to model the powerline noise in a raw recording and subtract it from the raw data to obtain a relatively clean signal. This paper proposes a method that adopts this approach by decomposing the recorded signal and extracting the powerline noise via blind source separation and wavelet analysis. The performance of this algorithm was compared with that of a 4th-order band-stop Butterworth filter, empirical mode decomposition, independent component analysis, and a combination of empirical mode decomposition with independent component analysis. The proposed method was able to remove sinusoidal components within the powerline noise frequency range with higher fidelity than the above techniques, especially at low signal-to-noise ratios.
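
    The paper's pipeline extracts the noise component via blind source separation and wavelet analysis; as a bare-bones illustration of the additive model it rests on (estimate the powerline component, then subtract it), the sketch below fits the amplitude and phase of a mains sinusoid by least squares. This is a deliberate simplification, not the authors' algorithm.

        import numpy as np

        def subtract_mains(signal, fs, f0=50.0):
            """Estimate an f0-Hz sinusoid by least squares and subtract it."""
            t = np.arange(len(signal)) / fs
            basis = np.column_stack([np.cos(2 * np.pi * f0 * t),
                                     np.sin(2 * np.pi * f0 * t)])
            coef, *_ = np.linalg.lstsq(basis, signal, rcond=None)
            # The fitted sinusoid is the additive noise model; removing it
            # leaves the (approximately) clean biomedical signal.
            return signal - basis @ coef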

  18. Blind image fusion for hyperspectral imaging with the directional total variation

    NASA Astrophysics Data System (ADS)

    Bungert, Leon; Coomes, David A.; Ehrhardt, Matthias J.; Rasch, Jennifer; Reisenhofer, Rafael; Schönlieb, Carola-Bibiane

    2018-04-01

    Hyperspectral imaging is a cutting-edge type of remote sensing used for mapping vegetation properties, rock minerals, and other materials. A major drawback of hyperspectral imaging devices is their intrinsically low spatial resolution. In this paper, we propose a method for increasing the spatial resolution of a hyperspectral image by fusing it with an image of higher spatial resolution obtained with a different imaging modality. This is accomplished by solving a variational problem in which the regularization functional is the directional total variation. To accommodate possible mis-registrations between the two images, we consider a non-convex blind super-resolution problem in which both the fused image and the corresponding convolution kernel are estimated. Using this approach, our model can realign the given images if needed. Our experimental results indicate that the non-convexity is negligible in practice and that reliable solutions can be computed using a variety of different optimization algorithms. Numerical results on real remote sensing data from plant sciences and urban monitoring show the potential of the proposed method and suggest that it is robust with respect to the regularization parameters, mis-registration, and the shape of the kernel.
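
    For reference, one common form of the directional total variation used for this kind of fusion is (a general formulation from the literature; the paper's exact weighting may differ):

        \mathrm{dTV}(u) \;=\; \sum_{i} \bigl\| P_{\xi_i} \nabla u_i \bigr\|_2,
        \qquad
        P_{\xi_i} \;=\; I - \gamma\, \xi_i \xi_i^{\top},
        \qquad
        \xi_i \;=\; \frac{\nabla v_i}{\sqrt{\|\nabla v_i\|_2^2 + \eta^2}},

    where v is the high-resolution side image, eta > 0 smooths its normalized gradient field, and gamma in [0, 1] controls how strongly the edges of the fused image u are encouraged to align with those of v.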

  19. Improving energy efficiency in handheld biometric applications

    NASA Astrophysics Data System (ADS)

    Hoyle, David C.; Gale, John W.; Schultz, Robert C.; Rakvic, Ryan N.; Ives, Robert W.

    2012-06-01

    With improved smartphone and tablet technology, it is becoming increasingly feasible to implement powerful biometric recognition algorithms on portable devices. Typical iris recognition algorithms, such as Ridge Energy Direction (RED), rely on two-dimensional convolution. This paper explores the energy-consumption implications of 12 different methods of implementing two-dimensional convolution on a portable device. Typically, convolution is implemented using floating point operations; if a given algorithm implemented integer rather than floating point convolution, it could drastically reduce the energy consumed by the processor. The 12 methods fall into 4 major categories: Integer C, Integer Java, Floating Point C, and Floating Point Java. Each category is further divided into 3 implementations: variable-size looped convolution, static-size looped convolution, and unrolled convolution. All testing was performed on an HTC Thunderbolt, with energy measured directly using a Tektronix TDS5104B Digital Phosphor oscilloscope. Results indicate that energy savings as high as 75% are possible by using Integer C rather than Floating Point C. Considering the relative proportion of processing time for which convolution is responsible in a typical algorithm, these savings would likely translate into significantly longer time between battery charges.
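
    The switch from floating point to integer convolution usually amounts to fixed-point scaling of the kernel. A minimal sketch (the scaling factor is an illustrative choice, not taken from the paper):

        import numpy as np

        def int_convolve2d(image_u8, kernel_f, shift=8):
            """2-D convolution in integer arithmetic via a fixed-point kernel.

            Scales the float kernel by 2**shift, accumulates in int32, then
            shifts back - the trick that avoids the FPU on handheld devices.
            """
            kq = np.round(kernel_f * (1 << shift)).astype(np.int32)
            H, W = image_u8.shape
            kh, kw = kq.shape
            out = np.zeros((H - kh + 1, W - kw + 1), dtype=np.int32)
            img = image_u8.astype(np.int32)
            for i in range(kh):              # loops run over the small kernel only
                for j in range(kw):
                    out += kq[i, j] * img[i:i + out.shape[0], j:j + out.shape[1]]
            return out >> shift              # undo the fixed-point scaling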

  20. Driving steel piles into rock

    NASA Astrophysics Data System (ADS)

    Holeyman, Alain

    2017-12-01

    The problem of driving a tubular steel pile into a rock mass is addressed along two lines: numerical and experimental. The numerical finite element approach, coupling a Lagrangian model of the moving pile with an Eulerian model of the rock mass in place (CEL approach), makes it possible to follow the onset of plastification in the steel tube during penetration, starting at the toe and propagating toward the pile head. A case study under axisymmetric conditions is presented for a rock whose unconfined compressive strength σc is 28 MPa. The experimental laboratory work involved three materials synthesized as monoliths, into which a stainless steel tube was driven. The results of these extensively instrumented tests indicate that driving is easy in a cellular mortar whose unconfined compressive strength σc does not exceed 6 MPa, but practically impossible in a mortar whose unconfined compressive strength approaches 28 MPa. Driving into a mortar with an unconfined compressive strength of 11 MPa proved still feasible with a reasonable drop height. The results obtained with the 3 materials used to date indicate that the unit resistance grows from about 2 σc at the surface to 6-8 σc at a penetration equal to 15-20 times the tube wall thickness. Paper submitted to the Revue française de Géotechnique in support of the 2017 Coulomb Lecture entitled "Axial behaviour of piles under extreme dynamic loading".

  1. A double-blind, randomized, comparative study of the use of a combination of uridine triphosphate trisodium, cytidine monophosphate disodium, and hydroxocobalamin, versus isolated treatment with hydroxocobalamin, in patients presenting with compressive neuralgias.

    PubMed

    Goldberg, Henrique; Mibielli, Marco Antonio; Nunes, Carlos Pereira; Goldberg, Stephanie Wrobel; Buchman, Luiz; Mezitis, Spyros Ge; Rzetelna, Helio; Oliveira, Lisa; Geller, Mauro; Wajnsztajn, Fernanda

    2017-01-01

    This paper reports the results of treating compressive neuralgia with a combination of nucleotides (uridine triphosphate trisodium [UTP] and cytidine monophosphate disodium [CMP]) and vitamin B12. The aims were to assess the safety and efficacy of the combination of nucleotides (UTP and CMP) and vitamin B12 in patients presenting with neuralgia arising from neural compression associated with degenerative orthopedic alterations and trauma, and to compare these effects with isolated administration of vitamin B12. A randomized, double-blind, controlled trial, consisting of a 30-day oral treatment period: Group A (n=200) received nucleotides + vitamin B12, and Group B (n=200) received vitamin B12 alone. The primary study endpoint was the percentage of subjects presenting pain visual analog scale (VAS) scores ≤20 at the end of the study treatment period. Secondary study endpoints included the percentage of subjects presenting an improvement ≥5 points on the patient functionality questionnaire (PFQ); the percentage of subjects presenting pain reduction (reduction in VAS scores at study end relative to pretreatment); and the number of subjects presenting adverse events. The results showed a more pronounced improvement in the efficacy evaluations among subjects treated with the combination of nucleotides + vitamin B12, with statistically significant superiority of the combination in pain reduction (evidenced by VAS scores). There were adverse events in both treatment groups, but these were transitory, and no severe adverse event was recorded during the study period. Safety parameters were maintained throughout the study in both treatment groups. The combination of uridine, cytidine, and vitamin B12 was safe and effective in the treatment of neuralgias arising from neural compression associated with degenerative orthopedic alterations and trauma.

  2. Role of mechanical factors in cortical folding development

    NASA Astrophysics Data System (ADS)

    Razavi, Mir Jalil; Zhang, Tuo; Li, Xiao; Liu, Tianming; Wang, Xianqiao

    2015-09-01

    Deciphering the mysteries of the structure-function relationship in cortical folding has emerged as the cynosure of recent research on the brain. Understanding the mechanism behind convolution patterns can provide useful insight into normal and pathological brain function. However, despite decades of speculation and endeavor, the underlying mechanism of the brain folding process remains poorly understood. This paper focuses on the three-dimensional morphological patterns of a developing brain under different tissue specification assumptions via theoretical analyses, computational modeling, and experimental verification. The living human brain is modeled as a soft structure with an outer cortex and an inner core to investigate brain development. Analytical interpretation of differential growth of the brain model provides preliminary insight into the critical growth ratio for instability and crease formation of the developing brain, followed by computational modeling to offer clues about the brain's post-buckling morphology. In particular, tissue geometry, growth ratio, and material properties of the cortex are explored as the most determinant parameters controlling the morphogenesis of a growing brain model. As the results indicate, compressive residual stresses caused by sufficient growth trigger instability, and the brain forms highly convoluted patterns whose degree of gyrification is set by the cortex thickness. Morphological patterns of the developing brain predicted by the computational model are consistent with our neuroimaging observations, thereby clarifying, in part, the origin of some classical malformations of the developing brain.

  3. Convolution neural-network-based detection of lung structures

    NASA Astrophysics Data System (ADS)

    Hasegawa, Akira; Lo, Shih-Chung B.; Freedman, Matthew T.; Mun, Seong K.

    1994-05-01

    Chest radiography is one of the most fundamental and widely used techniques in diagnostic imaging. Nowadays, with the advent of digital radiology, digital medical image processing techniques for chest radiographs have attracted considerable attention, and several studies on computer-aided diagnosis (CADx) as well as on conventional image processing techniques for chest radiographs have been reported. In the automatic diagnostic process for chest radiographs, it is important to outline the areas of the lungs, the heart, and the diaphragm, because the original chest radiograph is composed of important anatomic structures and, without knowing the exact positions of the organs, automatic diagnosis may produce unexpected detections. The automatic extraction of an anatomical structure from digital chest radiographs can be a useful tool for (1) the evaluation of heart size, (2) automatic detection of interstitial lung diseases, (3) automatic detection of lung nodules, and (4) data compression, etc. Based on the clearly defined boundaries of the heart area, rib spaces, rib positions, and rib cage extracted, this information can be used to facilitate CADx tasks on chest radiographs. In this paper, we present an automatic scheme for the detection of the lung field from chest radiographs using a shift-invariant convolution neural network. A novel algorithm for smoothing lung boundaries is also presented.

  4. Lattice strain measurements on sandstones under load using neutron diffraction

    NASA Astrophysics Data System (ADS)

    Frischbutter, A.; Neov, D.; Scheffzük, Ch.; Vrána, M.; Walther, K.

    2000-11-01

    Neutron diffraction methods (both time-of-flight and angle-dispersive diffraction) are applied to intracrystalline strain measurements on geological samples under increasing uniaxial compressional load. The experiments were carried out on Cretaceous sandstones from the Elbezone (East Germany), consisting of >95% quartz, which are bedded but show no crystallographic preferred orientation of quartz. From the stress-strain relation, the Young's modulus of our quartz sample was determined to be (72.2±2.9) GPa using results of the neutron time-of-flight method. The influence of different kinds of bedding in sandstones (laminated and convolute bedding) could be determined. We observed differences of a factor of 2 (convolute bedding) and 3 (laminated bedding) between the elastic stiffness determined with angle-dispersive neutron diffraction (crystallographic strain) and that determined with strain gauges (mechanical strain). The data indicate which geological conditions may influence the stress-strain behaviour of geological materials. The influence of bedding on the stress-strain behaviour of a laminated bedded sandstone was indicated by direct residual stress measurements using neutron time-of-flight diffraction. The measurements were carried out six days after unloading the sample. Residual strain was measured at three positions from the centre to the periphery and along two radial directions of the cylinder. We observed that residual strain changes from extension to compression in a different manner for two perpendicular directions of the bedding plane.

  5. Research on image retrieval using deep convolutional neural network combining L1 regularization and PRelu activation function

    NASA Astrophysics Data System (ADS)

    QingJie, Wei; WenBin, Wang

    2017-06-01

    In this paper, image retrieval using a deep convolutional neural network combined with L1 regularization and the PReLU activation function is studied, improving image retrieval accuracy. A deep convolutional neural network can not only simulate the process by which the human brain receives and transmits information, but also contains convolution operations, which are very well suited to processing images. Using a deep convolutional neural network is better than direct extraction of visual features for image retrieval. However, the structure of a deep convolutional neural network is complex, and it is prone to over-fitting, which reduces the accuracy of image retrieval. In this paper, we combine L1 regularization and the PReLU activation function to construct a deep convolutional neural network that prevents over-fitting of the network and improves the accuracy of image retrieval.
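
    A minimal sketch of the two ingredients, assuming PyTorch and illustrative layer sizes (PyTorch has no built-in L1 weight penalty, so it is added to the loss by hand):

        import torch
        import torch.nn as nn

        # Toy CNN with PReLU activations; sizes are illustrative only.
        model = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.PReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.PReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10),
        )

        def loss_with_l1(criterion, outputs, targets, lam=1e-5):
            # L1 regularization: penalize the absolute size of all weights,
            # discouraging over-fitting by driving small weights toward zero.
            l1 = sum(p.abs().sum() for p in model.parameters())
            return criterion(outputs, targets) + lam * l1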

  6. Instructions to “push as hard as you can” improve average chest compression depth in dispatcher-assisted Cardiopulmonary Resuscitation

    PubMed Central

    Mirza, Muzna; Brown, Todd B.; Saini, Devashish; Pepper, Tracy L; Nandigam, Hari Krishna; Kaza, Niroop; Cofield, Stacey S.

    2008-01-01

    Background and Objective: Cardiopulmonary resuscitation (CPR) with adequate chest compression depth appears to improve first shock success in cardiac arrest. We evaluate the effect of simplification of chest compression instructions on compression depth in a dispatcher-assisted CPR protocol. Methods: Data from two randomized, double-blinded, controlled trials with identical methodology were combined to obtain 332 records for this analysis. Subjects were randomized to either the modified Medical Priority Dispatch System (MPDS) v11.2 protocol or a new simplified protocol. The main difference between the protocols was the instruction to “push as hard as you can” in the simplified protocol, compared to “push down firmly 2 inches (5 cm)” in MPDS. Data were recorded via a Laerdal® ResusciAnne® SkillReporter™ manikin. Primary outcome measures included chest compression depth and the proportion of compressions without error, with adequate depth, and with total release. Results: Instructions to “push as hard as you can”, compared to “push down firmly 2 inches (5 cm)”, resulted in improved chest compression depth (36.4 vs. 29.7 mm, p<0.0001) and an improved median proportion of chest compressions done to the correct depth (32% vs. <1%, p<0.0001). No significant difference was found in the median proportion of compressions with total release (100% for both) or in average compression rate (99.7 vs. 97.5 per min, p = 0.56). Conclusions: Modifying dispatcher-assisted CPR instructions by changing “push down firmly 2 inches (5 cm)” to “push as hard as you can” achieved improvement in chest compression depth at no cost to total release or average chest compression rate. PMID:18635306

  7. Predictive value of magnetic resonance for identifying neurovascular compressions in trigeminal neuralgia.

    PubMed

    Ruiz-Juretschke, F; Guzmán-de-Villoria, J G; García-Leal, R; Sañudo, J R

    2017-05-23

    Microvascular decompression (MVD) is accepted as the only aetiological surgical treatment for refractory classic trigeminal neuralgia (TN). There is therefore increasing interest in establishing the diagnostic and prognostic value of identifying neurovascular compressions (NVC) using preoperative high-resolution three-dimensional magnetic resonance imaging (MRI) in patients with classic TN who are candidates for surgery. This observational study includes a series of 74 consecutive patients with classic TN treated with MVD. All patients underwent preoperative three-dimensional high-resolution MRI with DRIVE sequences to diagnose the presence of NVC, as well as the degree, cause, and location of compressions. MRI results were analysed by doctors blinded to the surgical findings and subsequently compared with those findings. After a minimum follow-up of six months, we assessed the surgical outcome and graded it on the Barrow Neurological Institute pain intensity score (BNI score). The prognostic value of preoperative MRI was estimated using binary logistic regression. Preoperative DRIVE MRI sequences showed a sensitivity of 95% and a specificity of 87%, with a 98% positive predictive value and a 70% negative predictive value. Moreover, Cohen's kappa (CK) indicated a good level of agreement between radiological and surgical findings regarding the presence of NVC (CK 0.75), the type of compression (CK 0.74), and the site of compression (CK 0.72), with only moderate agreement on the degree of compression (CK 0.48). After a mean follow-up of 29 months (range 6-100 months), 81% of patients reported pain control with or without medication (BNI scores I-III). Patients with an excellent surgical outcome, i.e. without pain and off medication (BNI score I), made up 66% of the total at the end of follow-up. Univariate analysis using binary logistic regression showed that a diagnosis of NVC on the preoperative MRI was a favourable prognostic factor that significantly increased the odds of obtaining an excellent outcome (OR 0.17, 95% CI 0.04-0.72; P=.02) or an acceptable outcome (OR 0.16, 95% CI 0.04-0.68; P=.01) after MVD. DRIVE MRI shows high sensitivity and specificity for diagnosing NVC in patients with refractory classic TN who are candidates for MVD. The finding of NVC on preoperative MRI is a good prognostic factor for long-term pain relief with MVD. Copyright © 2017 Sociedad Española de Neurología. Published by Elsevier España, S.L.U. All rights reserved.

  8. Deep multi-scale convolutional neural network for hyperspectral image classification

    NASA Astrophysics Data System (ADS)

    Zhang, Feng-zhe; Yang, Xia

    2018-04-01

    In this paper, we propose a multi-scale convolutional neural network for the hyperspectral image classification task. Firstly, compared with conventional convolution, we utilize multi-scale convolutions, which possess larger receptive fields, to extract the spectral features of hyperspectral images. We design a deep neural network with a multi-scale convolution layer that contains 3 different convolution kernel sizes. Secondly, to avoid over-fitting of the deep neural network, dropout is utilized, which randomly deactivates neurons, modestly improving classification accuracy. In addition, recent deep learning techniques such as ReLU are also utilized. We conduct experiments on the University of Pavia and Salinas datasets and obtain better classification accuracy than other methods.
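
    A minimal PyTorch sketch of such a multi-scale layer, with three parallel kernel sizes concatenated channel-wise; the exact sizes and channel counts in the paper may differ.

        import torch
        import torch.nn as nn

        class MultiScaleConv(nn.Module):
            """Parallel convolutions with 3 kernel sizes, concatenated channel-wise."""
            def __init__(self, in_ch, out_ch):
                super().__init__()
                self.branches = nn.ModuleList([
                    nn.Conv2d(in_ch, out_ch, k, padding=k // 2) for k in (1, 3, 5)
                ])

            def forward(self, x):
                # Same spatial size on every branch, so outputs can be stacked.
                return torch.cat([b(x) for b in self.branches], dim=1)

        layer = MultiScaleConv(16, 8)            # output has 3 * 8 = 24 channels
        x = torch.randn(2, 16, 32, 32)
        print(layer(x).shape)                    # torch.Size([2, 24, 32, 32])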

  9. The analysis of convolutional codes via the extended Smith algorithm

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Onyszchuk, I.

    1993-01-01

    Convolutional codes have been the central part of most error-control systems in deep-space communication for many years. Almost all such applications, however, have used the restricted class of (n,1), also known as 'rate 1/n,' convolutional codes. The more general class of (n,k) convolutional codes contains many potentially useful codes, but their algebraic theory is difficult and has proved to be a stumbling block in the evolution of convolutional coding systems. In this article, the situation is improved by describing a set of practical algorithms for computing certain basic things about a convolutional code (among them the degree, the Forney indices, a minimal generator matrix, and a parity-check matrix), which are usually needed before a system using the code can be built. The approach is based on the classic Forney theory for convolutional codes, together with the extended Smith algorithm for polynomial matrices, which is introduced in this article.

  10. An efficient system for reliably transmitting image and video data over low bit rate noisy channels

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.; Huang, Y. F.; Stevenson, Robert L.

    1994-01-01

    This research project is intended to develop an efficient system for reliably transmitting image and video data over low bit rate noisy channels. The basic ideas behind the proposed approach are the following: employ statistical-based image modeling to facilitate pre- and post-processing and error detection, use spare redundancy that the source compression did not remove to add robustness, and implement coded modulation to improve bandwidth efficiency and noise rejection. Over the last six months, progress has been made on various aspects of the project. Through our studies of the integrated system, a list-based iterative Trellis decoder has been developed. The decoder accepts feedback from a post-processor which can detect channel errors in the reconstructed image. The error detection is based on the Huber Markov random field image model for the compressed image. The compression scheme used here is that of JPEG (Joint Photographic Experts Group). Experiments were performed and the results are quite encouraging. The principal ideas here are extendable to other compression techniques. In addition, research was also performed on unequal error protection channel coding, subband vector quantization as a means of source coding, and post processing for reducing coding artifacts. Our studies on unequal error protection (UEP) coding for image transmission focused on examining the properties of the UEP capabilities of convolutional codes. The investigation of subband vector quantization employed a wavelet transform with special emphasis on exploiting interband redundancy. The outcome of this investigation included the development of three algorithms for subband vector quantization. The reduction of transform coding artifacts was studied with the aid of a non-Gaussian Markov random field model. This results in improved image decompression. These studies are summarized and the technical papers included in the appendices.

  11. Chemical Shift Encoded Water–Fat Separation Using Parallel Imaging and Compressed Sensing

    PubMed Central

    Sharma, Samir D.; Hu, Houchun H.; Nayak, Krishna S.

    2013-01-01

    Chemical shift encoded techniques have received considerable attention recently because they can reliably separate water and fat in the presence of off-resonance. The insensitivity to off-resonance requires that data be acquired at multiple echo times, which increases the scan time as compared to a single echo acquisition. The increased scan time often requires that a compromise be made between the spatial resolution, the volume coverage, and the tolerance to artifacts from subject motion. This work describes a combined parallel imaging and compressed sensing approach for accelerated water–fat separation. In addition, the use of multiscale cubic B-splines for B0 field map estimation is introduced. The water and fat images and the B0 field map are estimated via an alternating minimization. Coil sensitivity information is derived from a calculated k-space convolution kernel and l1-regularization is imposed on the coil-combined water and fat image estimates. Uniform water–fat separation is demonstrated from retrospectively undersampled data in the liver, brachial plexus, ankle, and knee as well as from a prospectively undersampled acquisition of the knee at 8.6x acceleration. PMID:22505285
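
    The chemical shift encoding that underlies this separation follows the standard multi-echo signal model, shown here for a single fat peak (the paper's implementation details may differ):

        s(t_n) \;=\; \left( \rho_W + \rho_F\, e^{\,i 2\pi f_F t_n} \right) e^{\,i 2\pi \psi t_n},
        \qquad n = 1, \dots, N,

    where rho_W and rho_F are the water and fat signals, f_F is the fat chemical shift frequency, and psi is the B0 field map; acquiring data at several echo times t_n makes the system invertible for rho_W, rho_F, and psi.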

  12. GPU-accelerated non-uniform fast Fourier transform-based compressive sensing spectral domain optical coherence tomography.

    PubMed

    Xu, Daguang; Huang, Yong; Kang, Jin U

    2014-06-16

    We implemented graphics processing unit (GPU) accelerated compressive sensing (CS) non-uniform-in-k-space spectral domain optical coherence tomography (SD OCT). The Kaiser-Bessel (KB) function and the Gaussian function are used independently as the convolution kernel in the gridding-based non-uniform fast Fourier transform (NUFFT) algorithm, with different oversampling ratios and kernel widths. Our implementation is compared with GPU-accelerated modified non-uniform discrete Fourier transform (MNUDFT) matrix-based CS SD OCT and GPU-accelerated fast Fourier transform (FFT)-based CS SD OCT. It was found that our implementation has image quality comparable to the GPU-accelerated MNUDFT-based CS SD OCT while providing more than a 5-fold speed enhancement. Compared to the GPU-accelerated FFT-based CS SD OCT, it shows lower background noise and fewer side lobes while eliminating the need for cumbersome k-space grid filling and the k-linear calibration procedure. Finally, we demonstrated that, using a conventional desktop computer architecture with three GPUs, real-time B-mode imaging can be obtained in excess of 30 fps for GPU-accelerated NUFFT-based CS SD OCT with a frame size of 2048 (axial) × 1000 (lateral).
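
    A 1-D CPU sketch of the gridding step with a Kaiser-Bessel kernel (kernel width, beta, and the periodic wrap are illustrative simplifications; oversampling, density compensation, and deapodization are omitted):

        import numpy as np

        def kb_kernel(d, width, beta):
            """Kaiser-Bessel interpolation weight for offset d, zero outside width/2."""
            arg = 1.0 - (2.0 * d / width) ** 2
            val = np.i0(beta * np.sqrt(np.clip(arg, 0.0, None))) / np.i0(beta)
            return np.where(arg > 0, val, 0.0)

        def gridded_fft(k_pos, samples, n_grid, width=4, beta=8.0):
            """Grid nonuniform spectral samples onto a uniform grid, then FFT.

            k_pos is assumed normalized to [0, n_grid); each sample is spread
            onto nearby grid points with KB weights before the uniform FFT.
            """
            grid = np.zeros(n_grid, dtype=complex)
            for kp, s in zip(k_pos, samples):
                lo = int(np.ceil(kp - width / 2))
                for g in range(lo, lo + width + 1):
                    grid[g % n_grid] += s * kb_kernel(g - kp, width, beta)
            return np.fft.fft(grid)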

  13. A computer program for estimating the power-density spectrum of advanced continuous simulation language generated time histories

    NASA Technical Reports Server (NTRS)

    Dunn, H. J.

    1981-01-01

    A computer program for performing frequency analysis of time history data is presented. The program uses circular convolution and the fast Fourier transform to calculate the power density spectrum (PDS) of time history data. The program interfaces with the Advanced Continuous Simulation Language (ACSL) so that a frequency analysis may be performed on ACSL-generated simulation variables. An example of the calculation of the PDS of a Van der Pol oscillator is presented.
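
    A minimal modern equivalent of this calculation, not a port of the original ACSL-interfaced program, is a periodogram-style PDS estimate applied to a simulated Van der Pol time history:

        import numpy as np
        from scipy.integrate import solve_ivp

        def power_density_spectrum(x, fs):
            """FFT-based PDS estimate of a real-valued time history x sampled at fs."""
            n = len(x)
            X = np.fft.rfft(x - np.mean(x))
            psd = (np.abs(X) ** 2) / (fs * n)
            psd[1:-1] *= 2.0                 # fold negative frequencies
            return np.fft.rfftfreq(n, d=1.0 / fs), psd

        # Van der Pol oscillator: x'' - mu*(1 - x^2)*x' + x = 0
        mu, fs = 1.0, 50.0
        sol = solve_ivp(lambda t, y: [y[1], mu * (1 - y[0] ** 2) * y[1] - y[0]],
                        (0, 200), [2.0, 0.0], t_eval=np.arange(0, 200, 1 / fs))
        freqs, psd = power_density_spectrum(sol.y[0], fs)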

  14. Multi-Object Tracking with Correlation Filter for Autonomous Vehicle.

    PubMed

    Zhao, Dawei; Fu, Hao; Xiao, Liang; Wu, Tao; Dai, Bin

    2018-06-22

    Multi-object tracking is a crucial problem for autonomous vehicles. Most state-of-the-art approaches adopt the tracking-by-detection strategy, a two-step procedure consisting of a detection module and a tracking module. In this paper, we improve both steps. We improve the detection module by incorporating temporal information, which is beneficial for detecting small objects. For the tracking module, we propose a novel Correlation Filter tracker based on compressed deep Convolutional Neural Network (CNN) features. By carefully integrating these two modules, the proposed multi-object tracking approach is capable of re-identification (ReID) once a tracked object gets lost. Extensive experiments were performed on the KITTI and MOT2015 tracking benchmarks. Results indicate that our approach outperforms most state-of-the-art tracking approaches.
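
    The paper builds its tracker on compressed deep CNN features; the correlation-filter core itself can be sketched in its classical single-channel (MOSSE-style) form, where deep features would simply replace the raw patch below:

        import numpy as np

        def train_filter(patch, target_response, lam=1e-2):
            """Closed-form correlation filter in the Fourier domain."""
            F = np.fft.fft2(patch)
            G = np.fft.fft2(target_response)   # e.g., a small centered Gaussian
            # H* = (G .* conj(F)) / (F .* conj(F) + lambda)
            return (G * np.conj(F)) / (F * np.conj(F) + lam)

        def detect(H_conj, patch):
            """Correlate a new patch with the filter; the peak gives the shift."""
            response = np.real(np.fft.ifft2(H_conj * np.fft.fft2(patch)))
            return np.unravel_index(np.argmax(response), response.shape)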

  15. Optimized heat exchange in a CO2 de-sublimation process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baxter, Larry; Terrien, Paul; Tessier, Pascal

    The present invention is a process for removing carbon dioxide from a compressed gas stream, including cooling the compressed gas in a first heat exchanger, introducing the cooled gas into a de-sublimating heat exchanger, thereby producing a first solid carbon dioxide stream and a first carbon-dioxide-poor gas stream, expanding the carbon-dioxide-poor gas stream, thereby producing a second solid carbon dioxide stream and a second carbon-dioxide-poor gas stream, combining the first and second solid carbon dioxide streams, thereby producing a combined solid carbon dioxide stream, and indirectly exchanging heat between the combined solid carbon dioxide stream and the compressed gas in the first heat exchanger.

  16. Quality Scalability Aware Watermarking for Visual Content.

    PubMed

    Bhowmik, Deepayan; Abhayaratne, Charith

    2016-11-01

    Scalable coding-based content adaptation poses serious challenges to traditional watermarking algorithms, which do not consider the scalable coding structure and hence cannot guarantee correct watermark extraction along the media consumption chain. In this paper, we propose a novel concept of scalable blind watermarking that ensures more robust watermark extraction at various compression ratios while not affecting the visual quality of the host media. The proposed algorithm generates a scalable and robust watermarked image code-stream that allows the user to constrain embedding distortion for target content adaptations. The watermarked image code-stream consists of hierarchically nested joint distortion-robustness coding atoms. The code-stream is generated by a new wavelet-domain blind watermarking algorithm guided by a quantization-based binary tree. The code-stream can be truncated at any distortion-robustness atom to generate a watermarked image with the desired distortion-robustness trade-off. A blind extractor is capable of extracting the watermark data from the watermarked images. The algorithm is further extended to incorporate the bit-plane discarding-based quantization model used in scalable coding-based content adaptation, e.g., JPEG2000. This improves robustness against the quality scalability of JPEG2000 compression. The simulation results verify the feasibility of the proposed concept, its applications, and its improved robustness against quality-scalable content adaptation. Our proposed algorithm also outperforms existing methods, showing a 35% improvement. In terms of robustness to quality-scalable video content adaptation using Motion JPEG2000 and wavelet-based scalable video coding, the proposed method shows major improvement for video watermarking.
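
    A toy illustration of quantization-based blind watermarking in the wavelet domain (single-bit quantization-index-modulation on one LL coefficient; the paper's hierarchical code-stream is far richer than this):

        import numpy as np
        import pywt

        def embed_bit(image, bit, delta=16.0):
            """Embed one bit by forcing the parity of a quantized LL coefficient."""
            LL, (LH, HL, HH) = pywt.dwt2(image.astype(float), 'haar')
            q = np.round(LL[0, 0] / delta)
            if int(q) % 2 != bit:            # flip parity of the quantizer bin
                q += 1
            LL[0, 0] = q * delta
            return pywt.idwt2((LL, (LH, HL, HH)), 'haar')

        def extract_bit(image, delta=16.0):
            """Blind extraction: no original image needed, only the step size."""
            LL, _ = pywt.dwt2(image.astype(float), 'haar')
            return int(np.round(LL[0, 0] / delta)) % 2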

  17. Large Hiatal Hernia Compressing the Heart.

    PubMed

    Matar, Andrew; Mroue, Jad; Camporesi, Enrico; Mangar, Devanand; Albrink, Michael

    2016-02-01

    We describe a 41-year-old man with de Morsier's syndrome who presented with exercise intolerance and dyspnea on exertion caused by a giant hiatal hernia compressing the heart, with relief after surgical treatment. Copyright © 2016 Elsevier Inc. All rights reserved.

  18. Cardiopulmonary resuscitation using the cardio vent device in a resuscitation model.

    PubMed

    Suner, Selim; Jay, Gregory D; Kleinman, Gary J; Woolard, Robert H; Jagminas, Liudvikas; Becker, Bruce M

    2002-05-01

    To compare the "Bellows on Sternum Resuscitation" (BSR) device that permits simultaneous compression and ventilation by one rescuer with two person cardiopulmonary resuscitation (CPR) with bag-valve-mask (BVM) ventilation in a single blind crossover study performed in the laboratory setting. Tidal volume and compression depth were recorded continuously during 12-min CPR sessions with the BSR device and two person CPR. Six CPR instructors performed a total of 1,894 ventilations and 10,532 compressions in 3 separate 12-min sessions. Mean tidal volume (MTV) and compression rate (CR) with the BSR device differed significantly from CPR with the BVM group (1242 mL vs. 1065 mL, respectively, p = 0.0018 and 63.2 compressions per minute (cpm) vs. 81.3 cpm, respectively, p = 0.0076). Error in compression depth (ECD) rate of 9.78% was observed with the BSR device compared to 8.49% with BMV CPR (p = 0.1815). Error rate was significantly greater during the second half of CPR sessions for both BSR and BVM groups. It is concluded that one-person CPR with the BSR device is equivalent to two-person CPR with BVM in all measured parameters except for CR. Both groups exhibited greater error rate in CPR performance in the latter half of 12-min CPR sessions.

  19. Suspected ivermectin toxicosis in a miniature mule foal causing blindness.

    PubMed

    Plummer, Caryn E; Kallberg, Maria E; Ollivier, Franck J; Brooks, Dennis E; Gelatt, Kirk N

    2006-01-01

    A 9-week-old miniature mule foal presented to the Veterinary Medical Teaching Hospital for acute blindness, ataxia, and depression following an overdose of an over-the-counter ivermectin-based de-worming medication. Ophthalmic examination and electrodiagnostic evaluation eliminated outer retinal abnormalities as the primary cause of the bilateral blindness, implicating instead a central neurologic effect of the drug. With symptomatic and supportive care, the foal recovered fully and regained its vision.

  20. A Fast Numerical Method for Max-Convolution and the Application to Efficient Max-Product Inference in Bayesian Networks.

    PubMed

    Serang, Oliver

    2015-08-01

    Observations depending on sums of random variables are common throughout many fields; however, no efficient solution is currently known for performing max-product inference on these sums of general discrete distributions (max-product inference can be used to obtain maximum a posteriori estimates). The limiting step to max-product inference is the max-convolution problem (sometimes presented in log-transformed form and denoted as "infimal convolution," "min-convolution," or "convolution on the tropical semiring"), for which no O(k log(k)) method is currently known. Presented here is an O(k log(k)) numerical method for estimating the max-convolution of two nonnegative vectors (e.g., two probability mass functions), where k is the length of the larger vector. This numerical max-convolution method is then demonstrated by performing fast max-product inference on a convolution tree, a data structure for performing fast inference given information on the sum of n discrete random variables in O(nk log(nk)log(n)) steps (where each random variable has an arbitrary prior distribution on k contiguous possible states). The numerical max-convolution method can be applied to specialized classes of hidden Markov models to reduce the runtime of computing the Viterbi path from nk² to nk log(k), and has potential application to the all-pairs shortest paths problem.
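
    The key idea behind the numerical method can be sketched: a max-convolution is approximated by an ordinary FFT convolution of element-wise p-th powers, since a p-norm approaches a maximum as p grows. The sketch below uses a fixed, modest p and a rescaling safeguard; both are assumptions of this illustration (the paper's method manages the stability/accuracy trade-off more carefully).

```python
import numpy as np

def max_convolve_approx(x, y, p=8.0):
    """Estimate u[m] = max_{i+j=m} x[i]*y[j] for nonnegative vectors.

    (sum_{i+j=m} x[i]^p * y[j]^p)^(1/p) -> u[m] as p -> infinity, and the
    inner sum is an ordinary convolution, computed via FFT in O(k log k).
    The estimate overshoots by at most a factor k^(1/p); larger p tightens
    it but risks floating-point underflow, hence the modest default.
    """
    n = len(x) + len(y) - 1
    s = max(x.max(), y.max()) or 1.0            # rescale to tame overflow
    X = np.fft.rfft((x / s) ** p, n)
    Y = np.fft.rfft((y / s) ** p, n)
    conv = np.maximum(np.fft.irfft(X * Y, n), 0.0)
    return s * s * conv ** (1.0 / p)

x = np.array([0.1, 0.9, 0.3])
y = np.array([0.5, 0.2, 0.8])
print(max_convolve_approx(x, y))  # approx. the exact [0.05, 0.45, 0.18, 0.72, 0.24]
```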

  1. Performance of Serially Concatenated Convolutional Codes with Binary Modulation in AWGN and Noise Jamming over Rayleigh Fading Channels

    DTIC Science & Technology

    2001-09-01

    In this dissertation, the bit error rates of serially concatenated convolutional codes (SCCC) with both BPSK and DPSK modulation are analyzed in AWGN and noise jamming over Rayleigh fading channels.

  2. DSN telemetry system performance with convolutionally coded data using operational maximum-likelihood convolutional decoders

    NASA Technical Reports Server (NTRS)

    Benjauthrit, B.; Mulhall, B.; Madsen, B. D.; Alberda, M. E.

    1976-01-01

    The DSN telemetry system performance with convolutionally coded data using the operational maximum-likelihood convolutional decoder (MCD) being implemented in the Network is described. Data rates from 80 bps to 115.2 kbps and both S- and X-band receivers are reported. The results of both one- and two-way radio losses are included.

  3. Two-thumb technique is superior to two-finger technique during lone rescuer infant manikin CPR.

    PubMed

    Udassi, Sharda; Udassi, Jai P; Lamb, Melissa A; Theriaque, Douglas W; Shuster, Jonathan J; Zaritsky, Arno L; Haque, Ikram U

    2010-06-01

    Infant CPR guidelines recommend two-finger chest compression with a lone rescuer and two-thumb with two rescuers. Two-thumb provides better chest compression but is perceived to be associated with increased ventilation hands-off time. We hypothesized that lone rescuer two-thumb CPR is associated with increased ventilation cycle time, decreased ventilation quality and fewer chest compressions compared to two-finger CPR in an infant manikin model. Crossover observational study randomizing 34 healthcare providers to perform 2 min CPR at a compression rate of 100 min⁻¹ using a 30:2 compression:ventilation ratio comparing two-thumb vs. two-finger techniques. A Laerdal Baby ALS Trainer manikin was modified to digitally record compression rate, compression depth, compression pressure and ventilation cycle time (two mouth-to-mouth breaths). Manikin chest rise with breaths was video recorded and later reviewed by two blinded CPR instructors for percent effective breaths. Data (mean ± SD) were analyzed using a two-tailed paired t-test. Significance was defined as p ≤ 0.05. Mean % effective breaths were 90 ± 18.6% in two-thumb and 88.9 ± 21.1% in two-finger, p=0.65. Mean time (s) to deliver two mouth-to-mouth breaths was 7.6 ± 1.6 in two-thumb and 7.0 ± 1.5 in two-finger, p<0.0001. Mean delivered compressions per minute were 87 ± 11 in two-thumb and 92 ± 12 in two-finger, p=0.0005. Two-thumb resulted in significantly higher compression depth and compression pressure compared to the two-finger technique. Healthcare providers required 0.6 s longer to deliver two breaths during two-thumb lone rescuer infant CPR, but there was no significant difference in percent effective breaths delivered between the two techniques. Two-thumb CPR had 4 fewer delivered compressions per minute, which may be offset by far more effective compression depth and compression pressure compared to the two-finger technique. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.

  4. Adaptive recovery of motion blur point spread function from differently exposed images

    NASA Astrophysics Data System (ADS)

    Albu, Felix; Florea, Corneliu; Drîmbarean, Alexandru; Zamfir, Adrian

    2010-01-01

    Motion due to digital camera movement during the image capture process is a major factor that degrades the quality of images, and many methods for camera motion removal have been developed. Central to all techniques is the correct recovery of what is known as the Point Spread Function (PSF). A very popular technique to estimate the PSF relies on using a pair of gyroscopic sensors to measure the hand motion. However, the errors caused either by the loss of the translational component of the movement or by the lack of precision in gyro-sensor measurements impede the achievement of a good-quality restored image. To compensate for this, we propose a method that begins with an estimation of the PSF obtained from 2 gyro sensors and uses an under-exposed image together with the blurred image to adaptively improve it. The luminance of the under-exposed image is equalized with that of the blurred image. An initial estimate of the PSF is generated from the output signal of the 2 gyro sensors. The PSF coefficients are updated using 2D Least Mean Square (LMS) algorithms with a coarse-to-fine approach on a grid of points selected from both images. This refined PSF is used to process the blurred image using known deblurring methods. Our results show that the proposed method leads to superior PSF support and coefficient estimation. The quality of the restored image is also improved compared with the gyro-only approach or with blind image de-convolution results.
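
    As a rough illustration of the adaptive refinement step, the sketch below updates a small PSF by LMS-style gradient descent on the squared difference between the blurred frame and the luminance-equalized under-exposed frame convolved with the PSF. The circular boundary handling, fixed step size, and single-scale loop are simplifying assumptions; the paper's 2D-LMS operates coarse-to-fine on a grid of selected points.

```python
import numpy as np
from scipy.signal import convolve2d

def lms_refine_psf(psf, sharp, blurred, mu=1e-7, iters=100):
    """Gradient-descent (LMS-style) refinement of a PSF estimate.

    sharp:   under-exposed image, luminance-equalized to the blurred one
    blurred: the motion-blurred image; model is blurred ~ sharp (*) psf
    mu:      step size; needs tuning for real data (illustrative only)
    """
    kh, kw = psf.shape
    ch, cw = kh // 2, kw // 2
    for _ in range(iters):
        err = blurred - convolve2d(sharp, psf, mode="same", boundary="wrap")
        grad = np.zeros_like(psf)
        for i in range(kh):                      # d(error)/d(psf[i, j])
            for j in range(kw):
                shifted = np.roll(sharp, (i - ch, j - cw), axis=(0, 1))
                grad[i, j] = np.sum(err * shifted)
        psf = np.clip(psf + mu * grad, 0.0, None)
        psf /= psf.sum() + 1e-12                 # keep the PSF normalized
    return psf
```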

  5. Enhanced online convolutional neural networks for object tracking

    NASA Astrophysics Data System (ADS)

    Zhang, Dengzhuo; Gao, Yun; Zhou, Hao; Li, Tianwen

    2018-04-01

    In recent years, object tracking based on convolutional neural networks has gained more and more attention. The initialization and update of the convolution filters can directly affect the precision of object tracking. In this paper, a novel object tracker based on an enhanced online convolutional neural network without offline training is proposed, which initializes the convolution filters by a k-means++ algorithm and updates the filters by error back-propagation. Comparative experiments of 7 trackers on 15 challenging sequences showed that our tracker performs better than the other trackers in terms of AUC and precision.

  6. Discovery of CLC transport proteins: cloning, structure, function and pathophysiology

    PubMed Central

    Jentsch, Thomas J

    2015-01-01

    Abstract After providing a personal description of the convoluted path leading 25 years ago to the molecular identification of the Torpedo Cl− channel ClC-0 and the discovery of the CLC gene family, I succinctly describe the general structural and functional features of these ion transporters before giving a short overview of mammalian CLCs. These can be categorized into plasma membrane Cl− channels and vesicular Cl−/H+-exchangers. They are involved in the regulation of membrane excitability, transepithelial transport, extracellular ion homeostasis, endocytosis and lysosomal function. Diseases caused by CLC dysfunction include myotonia, neurodegeneration, deafness, blindness, leukodystrophy, male infertility, renal salt loss, kidney stones and osteopetrosis, revealing a surprisingly broad spectrum of biological roles for chloride transport that was unsuspected when I set out to clone the first voltage-gated chloride channel. PMID:25590607

  7. Accurate De Novo Prediction of Protein Contact Map by Ultra-Deep Learning Model.

    PubMed

    Wang, Sheng; Sun, Siqi; Li, Zhen; Zhang, Renyu; Xu, Jinbo

    2017-01-01

    Protein contacts contain key information for the understanding of protein structure and function and thus, contact prediction from sequence is an important problem. Recently exciting progress has been made on this problem, but the predicted contacts for proteins without many sequence homologs are still of low quality and not very useful for de novo structure prediction. This paper presents a new deep learning method that predicts contacts by integrating both evolutionary coupling (EC) and sequence conservation information through an ultra-deep neural network formed by two deep residual neural networks. The first residual network conducts a series of 1-dimensional convolutional transformations of sequential features; the second residual network conducts a series of 2-dimensional convolutional transformations of pairwise information including the output of the first residual network, EC information and pairwise potential. By using very deep residual networks, we can accurately model contact occurrence patterns and the complex sequence-structure relationship and thus obtain higher-quality contact prediction regardless of how many sequence homologs are available for the proteins in question. Our method greatly outperforms existing methods and leads to much more accurate contact-assisted folding. Tested on 105 CASP11 targets, 76 past CAMEO hard targets, and 398 membrane proteins, the average top L long-range prediction accuracy obtained by our method, one representative EC method CCMpred and the CASP11 winner MetaPSICOV is 0.47, 0.21 and 0.30, respectively; the average top L/10 long-range accuracy of our method, CCMpred and MetaPSICOV is 0.77, 0.47 and 0.59, respectively. Ab initio folding using our predicted contacts as restraints but without any force fields can yield correct folds (i.e., TMscore>0.6) for 203 of the 579 test proteins, while that using MetaPSICOV- and CCMpred-predicted contacts can do so for only 79 and 62 of them, respectively. Our contact-assisted models also have much better quality than template-based models, especially for membrane proteins. The 3D models built from our contact prediction have TMscore>0.5 for 208 of the 398 membrane proteins, while those from homology modeling have TMscore>0.5 for only 10 of them. Further, even if trained mostly on soluble proteins, our deep learning method works very well on membrane proteins. In the recent blind CAMEO benchmark, our fully-automated web server implementing this method successfully folded 6 targets with a new fold and only 0.3L-2.3L effective sequence homologs, including one β protein of 182 residues, one α+β protein of 125 residues, one α protein of 140 residues, one α protein of 217 residues, one α/β protein of 260 residues and one α protein of 462 residues. Our method also achieved the highest F1 score on free-modeling targets in the latest CASP (Critical Assessment of Structure Prediction), although it was not fully implemented back then. http://raptorx.uchicago.edu/ContactMap/.

  8. Accurate De Novo Prediction of Protein Contact Map by Ultra-Deep Learning Model

    PubMed Central

    Li, Zhen; Zhang, Renyu

    2017-01-01

    Motivation Protein contacts contain key information for the understanding of protein structure and function and thus, contact prediction from sequence is an important problem. Recently exciting progress has been made on this problem, but the predicted contacts for proteins without many sequence homologs are still of low quality and not very useful for de novo structure prediction. Method This paper presents a new deep learning method that predicts contacts by integrating both evolutionary coupling (EC) and sequence conservation information through an ultra-deep neural network formed by two deep residual neural networks. The first residual network conducts a series of 1-dimensional convolutional transformations of sequential features; the second residual network conducts a series of 2-dimensional convolutional transformations of pairwise information including the output of the first residual network, EC information and pairwise potential. By using very deep residual networks, we can accurately model contact occurrence patterns and the complex sequence-structure relationship and thus obtain higher-quality contact prediction regardless of how many sequence homologs are available for the proteins in question. Results Our method greatly outperforms existing methods and leads to much more accurate contact-assisted folding. Tested on 105 CASP11 targets, 76 past CAMEO hard targets, and 398 membrane proteins, the average top L long-range prediction accuracy obtained by our method, one representative EC method CCMpred and the CASP11 winner MetaPSICOV is 0.47, 0.21 and 0.30, respectively; the average top L/10 long-range accuracy of our method, CCMpred and MetaPSICOV is 0.77, 0.47 and 0.59, respectively. Ab initio folding using our predicted contacts as restraints but without any force fields can yield correct folds (i.e., TMscore>0.6) for 203 of the 579 test proteins, while that using MetaPSICOV- and CCMpred-predicted contacts can do so for only 79 and 62 of them, respectively. Our contact-assisted models also have much better quality than template-based models, especially for membrane proteins. The 3D models built from our contact prediction have TMscore>0.5 for 208 of the 398 membrane proteins, while those from homology modeling have TMscore>0.5 for only 10 of them. Further, even if trained mostly on soluble proteins, our deep learning method works very well on membrane proteins. In the recent blind CAMEO benchmark, our fully-automated web server implementing this method successfully folded 6 targets with a new fold and only 0.3L-2.3L effective sequence homologs, including one β protein of 182 residues, one α+β protein of 125 residues, one α protein of 140 residues, one α protein of 217 residues, one α/β protein of 260 residues and one α protein of 462 residues. Our method also achieved the highest F1 score on free-modeling targets in the latest CASP (Critical Assessment of Structure Prediction), although it was not fully implemented back then. Availability http://raptorx.uchicago.edu/ContactMap/ PMID:28056090

  9. Achieving unequal error protection with convolutional codes

    NASA Technical Reports Server (NTRS)

    Mills, D. G.; Costello, D. J., Jr.; Palazzo, R., Jr.

    1994-01-01

    This paper examines the unequal error protection capabilities of convolutional codes. Both time-invariant and periodically time-varying convolutional encoders are examined. The effective free distance vector is defined and is shown to be useful in determining the unequal error protection (UEP) capabilities of convolutional codes. A modified transfer function is used to determine an upper bound on the bit error probabilities for individual input bit positions in a convolutional encoder. The bound is heavily dependent on the individual effective free distance of the input bit position. A bound relating two individual effective free distances is presented. The bound is a useful tool in determining the maximum possible disparity in individual effective free distances of encoders of specified rate and memory distribution. The unequal error protection capabilities of convolutional encoders of several rates and memory distributions are determined and discussed.

  10. Experimental Investigation of Convoluted Contouring for Aircraft Afterbody Drag Reduction

    NASA Technical Reports Server (NTRS)

    Deere, Karen A.; Hunter, Craig A.

    1999-01-01

    An experimental investigation was performed in the NASA Langley 16-Foot Transonic Tunnel to determine the aerodynamic effects of external convolutions, placed on the boattail of a nonaxisymmetric nozzle, for drag reduction. Boattail angles of 15° and 22° were tested with convolutions placed at a forward location upstream of the boattail curvature, at a mid location along the curvature, and at a full location that spanned the entire boattail flap. Each of the baseline nozzle afterbodies (no convolutions) had a parabolic, converging contour with a parabolically decreasing corner radius. Data were obtained at several Mach numbers from static conditions to 1.2 for a range of nozzle pressure ratios (NPRs) and angles of attack. An oil paint flow visualization technique was used to qualitatively assess the effect of the convolutions. Results indicate that afterbody drag reduction by convoluted contouring depends on convolution location, Mach number, boattail angle, and NPR. The forward convolution location was the most effective contouring geometry for drag reduction on the 22° afterbody, but was only effective for M < 0.95. At M = 0.8, drag was reduced 20 and 36 percent at NPRs of 5.4 and 7, respectively, but drag was increased 10 percent for M = 0.95 at NPR = 7. Convoluted contouring along the 15° boattail angle afterbody was not effective at reducing drag because the flow was only minimally separated from the baseline afterbody, unlike the massive separation along the 22° boattail angle baseline afterbody.

  11. Experimental study of current loss and plasma formation in the Z machine post-hole convolute

    NASA Astrophysics Data System (ADS)

    Gomez, M. R.; Gilgenbach, R. M.; Cuneo, M. E.; Jennings, C. A.; McBride, R. D.; Waisman, E. M.; Hutsel, B. T.; Stygar, W. A.; Rose, D. V.; Maron, Y.

    2017-01-01

    The Z pulsed-power generator at Sandia National Laboratories drives high energy density physics experiments with load currents of up to 26 MA. Z utilizes a double post-hole convolute to combine the current from four parallel magnetically insulated transmission lines into a single transmission line just upstream of the load. Current loss is observed in most experiments and is traditionally attributed to inefficient convolute performance. The apparent loss current varies substantially for z-pinch loads with different inductance histories; however, a similar convolute impedance history is observed for all load types. This paper details direct spectroscopic measurements of plasma density, temperature, and apparent and actual plasma closure velocities within the convolute. Spectral measurements indicate a correlation between impedance collapse and plasma formation in the convolute. Absorption features in the spectra show the convolute plasma consists primarily of hydrogen, which likely forms from desorbed electrode contaminant species such as H2O, H2, and hydrocarbons. Plasma densities increase from 1×10¹⁶ cm⁻³ (level of detectability) just before peak current to over 1×10¹⁷ cm⁻³ at stagnation (tens of ns later). The density seems to be highest near the cathode surface, with an apparent cathode-to-anode plasma velocity in the range of 35-50 cm/μs. Similar plasma conditions and convolute impedance histories are observed in experiments with high and low losses, suggesting that losses are driven largely by load dynamics, which determine the voltage on the convolute.

  12. Estimation of neutron energy distributions from prompt gamma emissions

    NASA Astrophysics Data System (ADS)

    Panikkath, Priyada; Udupi, Ashwini; Sarkar, P. K.

    2017-11-01

    A technique of estimating the incident neutron energy distribution from the prompt gamma intensities emitted by a system exposed to neutrons is presented. The emitted prompt gamma intensities, or the measured photo peaks in a gamma detector, are related to the incident neutron energy distribution through a convolution of the response of the system generating the prompt gammas to mono-energetic neutrons. The system presently studied is a cylinder of high-density polyethylene (HDPE) placed inside another cylinder of borated HDPE (BHDPE) with an outer Pb cover, exposed to neutrons. The five prompt gamma peaks emitted from hydrogen, boron, carbon and lead can be utilized to unfold the incident neutron energy distribution as an under-determined deconvolution problem. Such an under-determined set of equations is solved using the genetic-algorithm-based Monte Carlo de-convolution code GAMCD. The feasibility of the proposed technique is demonstrated theoretically using the Monte Carlo calculated response matrix and intensities of emitted prompt gammas from the Pb-covered BHDPE-HDPE system for several incident neutron spectra spanning different energy ranges.
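
    The unfolding problem has the linear form y = R·φ with fewer gamma peaks than energy bins. As a minimal stand-in for the genetic-algorithm code GAMCD, the sketch below solves the same under-determined, non-negative problem with non-negative least squares; the response matrix and spectrum are synthetic placeholders.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
R = rng.random((5, 20))                  # 5 prompt-gamma peaks, 20 energy bins
phi_true = np.exp(-0.5 * ((np.arange(20) - 8.0) / 3.0) ** 2)
y = R @ phi_true                         # simulated measured peak intensities

phi_est, residual = nnls(R, y)           # non-negative unfolding (GAMCD stand-in)
print(residual)
```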

  13. Evolution des contraintes résiduelles dans des films minces de tungstène en fonction de l'irradiation

    NASA Astrophysics Data System (ADS)

    Durand, N.; Badawi, K. F.; Goudeau, P.; Naudon, A.

    1994-01-01

    The influence of the irradiation dose upon the residual stresses in 1000 Å tungsten thin films has been studied by two different techniques. Results show a relaxation of the strong initial compressive stresses (σ = -4.5 GPa) in virgin samples as the irradiation dose increases. The existence of a relaxation threshold is also clearly evidenced; it indicates a strong correlation between the thin-film microstructure (point defects, grain size) and the relaxation phenomenon, and consequently the residual stresses.

  14. Keypoint Density-Based Region Proposal for Fine-Grained Object Detection and Classification Using Regions with Convolutional Neural Network Features

    DTIC Science & Technology

    2015-12-15

    … Convolutional Neural Networks (CNNs) enable them to outperform conventional techniques on standard object detection and classification tasks … detection accuracy and speed on the fine-grained Caltech UCSD bird dataset (Wah et al., 2011).

  15. Genetics Home Reference: autosomal recessive congenital stationary night blindness

    MedlinePlus


  16. Measurement of Compression Factor and Error Sensitivity Factor of the Modified READ Facsimile Coding Technique.

    DTIC Science & Technology

    1980-08-01

    Compression factor and error sensitivity together with statistical data have also been tabulated. This TIB is a companion document to NCS TIB 79-7…

  17. Comparison of Wavelet Packets With Cosine-Modulated Pseudo-QMF Bank for ECG Compression

    DTIC Science & Technology

    2001-10-25


  18. Shape Perception and Navigation in Blind Adults

    PubMed Central

    Gori, Monica; Cappagli, Giulia; Baud-Bovy, Gabriel; Finocchietti, Sara

    2017-01-01

    Different sensory systems interact to generate a representation of space and to navigate. Vision plays a critical role in the development of spatial representation. During navigation, vision is integrated with auditory and mobility cues. In blind individuals, visual experience is not available and navigation therefore lacks this important sensory signal. In blind individuals, compensatory mechanisms can be adopted to improve spatial and navigation skills. On the other hand, the limitations of these compensatory mechanisms are not completely clear. Both enhanced and impaired reliance on auditory cues in blind individuals have been reported. Here, we develop a new paradigm to test both auditory perception and navigation skills in blind and sighted individuals and to investigate the effect that visual experience has on the ability to reproduce simple and complex paths. During the navigation task, early blind, late blind and sighted individuals were required first to listen to an audio shape and then to recognize and reproduce it by walking. After each audio shape was presented, a static sound was played and the participants were asked to reach it. Movements were recorded with a motion tracking system. Our results show three main impairments specific to early blind individuals. The first is the tendency to compress the shapes reproduced during navigation. The second is difficulty recognizing complex audio stimuli, and the third is difficulty reproducing the desired shape: early blind participants occasionally reported perceiving a square but actually reproduced a circle during the navigation task. We discuss these results in terms of compromised spatial reference frames due to the lack of visual input during the early period of development. PMID:28144226

  19. Histological Changes in the Thyroid Gland in Cases of Infant and Early Childhood Asphyxia-A Preliminary Study.

    PubMed

    Byard, Roger W; Bellis, Maria

    2016-05-01

    A retrospective blinded study of thyroid gland histology was undertaken in 50 infants and young children aged from 1 to 24 months. Deaths were due to (i) suffocation (N = 7), hanging (4), wedging (3), and chest and/or neck compression (4), and (ii) SIDS (20), noncervical trauma (7), organic disease (4), and drug toxicity (1). In the asphyxia group (N = 18), thyroid gland congestion ranged from 0 to 3+, with 39% of cases (7/18) having moderate/marked congestion. In three cases, focal aggregates of red blood cells (blood islands) were observed within the intrafollicular colloid. These deaths involved chest compression, chest and/or neck compression, and crush asphyxia in a vehicle accident, and all had facial petechiae. Only 22% of the 32 control cases (7/32) had moderate/marked congestion, with no blood islands being identified (p < 0.05). Blood islands within the thyroid gland may be caused by congestion associated with crushing or compression and may provide supportive evidence for this diagnosis. © 2016 American Academy of Forensic Sciences.

  20. Application of structured support vector machine backpropagation to a convolutional neural network for human pose estimation.

    PubMed

    Witoonchart, Peerajak; Chongstitvatana, Prabhas

    2017-08-01

    In this study, for the first time, we show how to formulate a structured support vector machine (SSVM) as two layers in a convolutional neural network, where the top layer is a loss-augmented inference layer and the bottom layer is a normal convolutional layer. We show that a deformable part model can be learned with the proposed structured SVM neural network by backpropagating the error of the deformable part model to the convolutional neural network. The forward propagation calculates the loss-augmented inference and the backpropagation calculates the gradient from the loss-augmented inference layer to the convolutional layer. Thus, we obtain a new type of convolutional neural network, called a structured SVM convolutional neural network, which we applied to the human pose estimation problem. This new neural network can be used as the final layers in deep learning. Our method jointly learns the structural model parameters and the appearance model parameters. We implemented our method as a new layer in the existing Caffe library. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Parallel compression of data chunks of a shared data object using a log-structured file system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2016-10-25

    Techniques are provided for parallel compression of data chunks being written to a shared object. A client executing on a compute node or a burst buffer node in a parallel computing system stores a data chunk generated by the parallel computing system to a shared data object on a storage node by compressing the data chunk and providing the compressed data chunk to the storage node that stores the shared object. The client and storage node may employ Log-Structured File techniques. The compressed data chunk can be de-compressed by the client when the data chunk is read. A storage node stores a data chunk as part of a shared object by receiving a compressed version of the data chunk from a compute node and storing the compressed version of the data chunk to the shared data object on the storage node.

  2. Blind Deconvolution of Astronomical Images with a Constraint on Bandwidth Determined by the Parameters of the Optical System

    NASA Astrophysics Data System (ADS)

    Luo, Lin; Fan, Min; Shen, Mang-zuo

    2008-01-01

    Atmospheric turbulence severely restricts the spatial resolution of astronomical images obtained by large ground-based telescopes. In order to reduce this effect effectively, we propose a blind deconvolution method with a bandwidth constraint determined by the parameters of the telescope's optical system, based on the principle of maximum likelihood estimation, in which the convolution error function is minimized using the conjugate gradient algorithm. A relation between the parameters of the telescope's optical system and the image's frequency-domain bandwidth is established, and the speed of convergence of the algorithm is improved by using a positivity constraint on the variables and a limited-bandwidth constraint on the point spread function. To keep the effective Fourier frequencies from exceeding the cut-off frequency, each single image element (e.g., a pixel in CCD imaging) in the sampling focal plane should be smaller than one fourth of the diameter of the diffraction spot. Because no object-centered constraint is used in the algorithm, the proposed method is suitable for image restoration over a whole field of objects. Computer simulations and the restoration of an actually observed image of α Piscium demonstrate the effectiveness of the proposed method.
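
    A minimal sketch of the idea: minimize the convolution error alternately in the object and the PSF, keeping both non-negative and projecting the PSF onto the telescope's frequency band at each step. Plain gradient descent stands in for the paper's conjugate-gradient minimization, and the circular cut-off mask, step size, and flat initial PSF are assumptions of this illustration.

```python
import numpy as np

def blind_deconv(img, cutoff, iters=300, lr=0.1):
    """Bandwidth-constrained blind deconvolution, gradient-descent sketch."""
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    band = (np.hypot(fy, fx) <= cutoff).astype(float)  # optics cut-off mask
    obj = img.copy()
    psf = np.full(img.shape, 1.0 / img.size)
    B = np.fft.fft2(img)
    for _ in range(iters):
        O, P = np.fft.fft2(obj), np.fft.fft2(psf)
        E = O * P - B                                  # convolution error
        # gradients of 0.5*||obj (*) psf - img||^2, via the correlation theorem
        obj = np.maximum(obj - lr * np.real(np.fft.ifft2(np.conj(P) * E)), 0.0)
        psf = np.maximum(psf - lr * np.real(np.fft.ifft2(np.conj(O) * E)), 0.0)
        psf = np.maximum(np.real(np.fft.ifft2(np.fft.fft2(psf) * band)), 0.0)
        psf /= psf.sum() + 1e-12                       # keep the PSF normalized
    return obj, psf                                    # lr needs tuning for real data
```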

  3. Convolution of Two Series

    ERIC Educational Resources Information Center

    Umar, A.; Yusau, B.; Ghandi, B. M.

    2007-01-01

    In this note, we introduce and discuss convolutions of two series. The idea is simple and can be introduced to higher secondary school classes, and has the potential of providing a good background for the well-known convolution of functions.
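
    Concretely, the convolution (Cauchy product) of two series a and b is the series c with c_n = Σ_{k=0}^{n} a_k·b_{n-k}, as in this small sketch:

```python
def convolve_series(a, b):
    """Cauchy product: c[n] = sum_{k=0}^{n} a[k] * b[n-k]."""
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

# coefficients of (1 + x)^2 = 1 + 2x + x^2
print(convolve_series([1, 1], [1, 1]))   # [1, 2, 1]
```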

  4. A fast complex integer convolution using a hybrid transform

    NASA Technical Reports Server (NTRS)

Reed, I. S.; Truong, T. K.

    1978-01-01

    It is shown that the Winograd transform can be combined with a complex integer transform over the Galois field GF(q²) to yield a new algorithm for computing the discrete cyclic convolution of complex number points. By this means a fast method for accurately computing the cyclic convolution of a sequence of complex numbers for long convolution lengths can be obtained. This new hybrid algorithm requires fewer multiplications than previous algorithms.
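
    For reference, the operation being accelerated is the cyclic convolution of complex sequences. The sketch below gives the direct O(n²) definition and the floating-point FFT route; the rounding error of the latter is what exact integer transforms such as the hybrid Winograd/GF(q²) algorithm avoid. The inputs are placeholders.

```python
import numpy as np

def cyclic_convolve(x, y):
    """Direct O(n^2) cyclic convolution: c[m] = sum_i x[i] * y[(m - i) mod n]."""
    n = len(x)
    return np.array([sum(x[i] * y[(m - i) % n] for i in range(n))
                     for m in range(n)])

x = np.array([1 + 2j, 3 - 1j, 0 + 1j, 2 + 0j])
y = np.array([2 + 0j, 1 + 1j, 0 - 1j, 1 + 0j])
fast = np.fft.ifft(np.fft.fft(x) * np.fft.fft(y))   # convolution theorem
assert np.allclose(cyclic_convolve(x, y), fast)
```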

  5. Performance Analysis of Hybrid ARQ Protocols in a Slotted Code Division Multiple-Access Network

    DTIC Science & Technology

    1989-08-01

    … achieved by using a low-rate (r = 0.5), high-constraint-length (e.g., 32) punctured convolutional code. Code puncturing provides for a variable-rate code … investigated the use of convolutional codes in Type II Hybrid ARQ protocols.

  6. Modeling and Simulation of a Non-Coherent Frequency Shift Keying Transceiver Using a Field Programmable Gate Array (FPGA)

    DTIC Science & Technology

    2008-09-01

    Convolutional codes are most commonly used along with block codes; they were introduced in 1955 by Elias [7]. Convolutional codes are characterized by the code rate r = k/n and the constraint length κ. In this work, a convolutional code with r = 1/2 and κ = 3, namely generators [7 5], is used (Figure 2 shows the convolutional encoder block diagram for the r = 1/2 code).
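
    A rate-1/2, κ = 3 encoder with octal generators [7 5] is small enough to sketch directly; the zero-bit tail flush below is a common convention and an assumption of this illustration rather than a detail from the report.

```python
def conv_encode(bits, gens=(0b111, 0b101), K=3):
    """Rate-1/2 feedforward convolutional encoder with generators [7, 5] octal."""
    state = 0
    out = []
    for b in list(bits) + [0] * (K - 1):        # flush with K-1 tail zeros
        state = ((state << 1) | b) & ((1 << K) - 1)
        for g in gens:
            out.append(bin(state & g).count("1") % 2)  # parity of tapped bits
    return out

print(conv_encode([1, 0, 1, 1]))   # two output bits per input bit
```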

  7. Metro Navigation for the Blind

    ERIC Educational Resources Information Center

    Sanchez, Jaime; Saenz, Mauricio

    2010-01-01

    This study evaluates the impact of using the software program AudioMetro, a tool that supports the orientation and mobility of people who are blind in the Metro system of Santiago de Chile. A quasi-experimental study considering experimental and control groups and using the paired Student's t in a two sample test analysis (pretest-posttest) was…

  8. Comparison of chest compressions in the standing position beside a bed at knee level and the kneeling position: a non-randomised, single-blind, cross-over trial.

    PubMed

    Oh, Je Hyeok; Kim, Chan Woong; Kim, Sung Eun; Lee, Sang Jin; Lee, Dong Hoon

    2014-07-01

    When rescuers perform cardiopulmonary resuscitation (CPR) from a standing position, the height at which chest compressions are carried out is raised. To determine whether chest compressions delivered on a bed adjusted to the rescuer's knee height are as effective as those delivered on the floor. A total of 20 fourth-year medical students participated in the study. The students performed chest compressions for 2 min each on a manikin lying on the floor (test 1) and on a manikin lying on a bed (test 2). The average compression rate (ACR) and the average compression depth (ACD) were compared between the two tests. The ACR was not significantly different between tests 1 and 2 (95% CI 120.1-132.9 vs. 115.7-131.2 compressions/min, p=0.324). The ACD was also not significantly different between tests 1 and 2 (95% CI 51.2-56.6 vs. 49.4-55.7 mm, p=0.058). The results suggest that there may be no significant differences in compression rate and depth between CPR performed on manikins placed on the floor and on manikins placed at the rescuer's knee height. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  9. A randomized control hands-on defibrillation study-Barrier use evaluation.

    PubMed

    Wampler, David; Kharod, Chetan; Bolleter, Scotty; Burkett, Alison; Gabehart, Caitlin; Manifold, Craig

    2016-06-01

    Chest compressions and defibrillation are the only therapies proven to increase survival in cardiac arrest. Historically, rescuers must remove their hands to shock, thereby interrupting chest compressions. This hands-off time results in a zero-blood-flow state, and pauses have been associated with poorer neurological recovery. This was a blinded randomized control cadaver study evaluating the detection of defibrillation during manual chest compressions. An active defibrillator was connected to the cadaver in the sternum-apex configuration; the sham defibrillator was not connected to the cadaver. Subjects performed chest compressions using 6 barrier types: bare hand, single and double layer nitrile gloves, firefighter gloves, a neoprene pad, and a manual chest compression/decompression device. Randomized defibrillations (10 per barrier type) were delivered at 30 joules (J) for bare hand and 360 J for all other barriers. After each shock, the subject indicated the degree of sensation on a VAS scale. Ten subjects participated. All subjects detected 30 J shocks during bare-hand compressions, with only 1 undetected real shock. All barriers combined totaled 500 shocks delivered. Five (1%) active shocks were detected: 1 (0.2%) with a single layer of nitrile, 3 (0.6%) with a double layer of nitrile, and 1 (0.2%) with the neoprene barrier. One sham shock was reported with the single-layer nitrile glove. No shocks were detected with fire gloves or the compression/decompression device. All detected shocks were barely perceptible (0.25 ± 0.05 cm on a 10-cm VAS scale). Nitrile gloves and the neoprene pad prevented detection of 99% of defibrillation shocks delivered to a cadaver; fire gloves and the compression/decompression device prevented detection entirely. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  10. Laryngoscopic and spectral analysis of laryngeal and pharyngeal configuration in non-classical singing styles.

    PubMed

    Guzman, Marco; Lanas, Andres; Olavarria, Christian; Azocar, Maria Josefina; Muñoz, Daniel; Madrid, Sofia; Monsalve, Sebastian; Martinez, Francisca; Vargas, Sindy; Cortez, Pedro; Mayerhoff, Ross M

    2015-01-01

    The present study aimed to assess three different singing styles (pop, rock, and jazz) with laryngoscopic, acoustic, and perceptual analysis in healthy singers at different loudness levels. Special emphasis was given to the degree of anterior-posterior (A-P) laryngeal compression, medial laryngeal compression, vertical laryngeal position (VLP), and pharyngeal compression. Prospective study. Twelve female trained singers with at least 5 years of voice training and absence of any voice pathology were included. Flexible and rigid laryngeal endoscopic examinations were performed. Voice recording was also carried out. Four blinded judges were asked to assess laryngoscopic and auditory perceptual variables using a visual analog scale. All laryngoscopic parameters showed significant differences for all singing styles. Rock showed the greatest degree for all of them. Overall A-P laryngeal compression scores demonstrated significantly higher values than overall medial compression and VLP. High loudness level produced the highest degree of A-P compression, medial compression, and pharyngeal compression, and the lowest VLP for all singing styles. Additionally, rock demonstrated the highest values for alpha ratio (less steep spectral slope), L1-L0 ratio (more glottal adduction), and Leq (more vocal intensity). Statistically significant differences between the three loudness levels were also found for these acoustic parameters. Rock singing seems to be the style with the highest degree of both laryngeal and pharyngeal activity in healthy singers. Although supraglottic activity during singing could be labeled as hyperfunctional vocal behavior, it may not necessarily be harmful but rather a strategy to avoid vocal fold damage. Copyright © 2015 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  11. Are Compression Stockings an Effective Treatment for Orthostatic Presyncope?

    PubMed Central

    Protheroe, Clare Louise; Dikareva, Anastasia; Menon, Carlo; Claydon, Victoria Elizabeth

    2011-01-01

    Background Syncope, or fainting, affects approximately 6.2% of the population, and is associated with significant comorbidity. Many syncopal events occur secondary to excessive venous pooling and capillary filtration in the lower limbs when upright. As such, a common approach to the management of syncope is the use of compression stockings. However, research confirming their efficacy is lacking. We aimed to investigate the effect of graded calf compression stockings on orthostatic tolerance. Methodology/Principal Findings We evaluated orthostatic tolerance (OT) and haemodynamic control in 15 healthy volunteers wearing graded calf compression stockings compared to two placebo stockings in a randomized, cross-over, double-blind fashion. OT (time to presyncope, min) was determined using combined head-upright tilting and lower body negative pressure applied until presyncope. Throughout testing we continuously monitored beat-to-beat blood pressures, heart rate, stroke volume and cardiac output (finger plethysmography), cerebral and forearm blood flow velocities (Doppler ultrasound) and breath-by-breath end tidal gases. There were no significant differences in OT between compression stocking (26.0±2.3 min) and calf (29.3±2.4 min) or ankle (27.6±3.1 min) placebo conditions. Cardiovascular, cerebral and respiratory responses were similar in all conditions. The efficacy of compression stockings was related to anthropometric parameters, and could be predicted by a model based on the subject's calf circumference and shoe size (r = 0.780, p = 0.004). Conclusions/Significance These data question the use of calf compression stockings for orthostatic intolerance and highlight the need for individualised therapy accounting for anthropometric variables when considering treatment with compression stockings. PMID:22194814

  12. In-situ neutron diffraction study on the tension-compression fatigue behavior of a twinning induced plasticity steel

    DOE PAGES

    Xie, Qingge; Liang, Jiangtao; Stoica, Alexandru Dan; ...

    2017-05-17

    Grain-orientation-dependent behavior during tension-compression fatigue loading in a TWIP steel was studied using in-situ neutron diffraction. Orientation zones with dominant behavior of (1) twinning-de-twinning, (2) twinning-re-twinning followed by twinning-de-twinning, (3) twinning followed by dislocation slip and (4) dislocation slip were identified. Jumps of the orientation density were evidenced in neutron diffraction peaks, which explains the macroscopic asymmetric behavior. The asymmetric behavior in the early stage of fatigue loading is mainly due to the small volume fraction of twins in comparison with that at a later stage. As a result, easy activation of de-twinning makes the macroscopic unloading behavior nonlinear.

  13. In-situ neutron diffraction study on the tension-compression fatigue behavior of a twinning induced plasticity steel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Qingge; Liang, Jiangtao; Stoica, Alexandru Dan

    Grain-orientation-dependent behavior during tension-compression fatigue loading in a TWIP steel was studied using in-situ neutron diffraction. Orientation zones with dominant behavior of (1) twinning-de-twinning, (2) twinning-re-twinning followed by twinning-de-twinning, (3) twinning followed by dislocation slip and (4) dislocation slip were identified. Jumps of the orientation density were evidenced in neutron diffraction peaks, which explains the macroscopic asymmetric behavior. The asymmetric behavior in the early stage of fatigue loading is mainly due to the small volume fraction of twins in comparison with that at a later stage. As a result, easy activation of de-twinning makes the macroscopic unloading behavior nonlinear.

  14. Protograph-Based Raptor-Like Codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Chen, Tsung-Yi; Wang, Jiadong; Wesel, Richard D.

    2014-01-01

    Theoretical analysis has long indicated that feedback improves the error exponent but not the capacity of point-to-point memoryless channels. Analytic and empirical results indicate that in the short-blocklength regime, practical rate-compatible punctured convolutional (RCPC) codes achieve low latency with the use of noiseless feedback. In 3GPP, standard rate-compatible turbo (RCPT) codes did not outperform convolutional codes in the short-blocklength regime. The reason is that convolutional codes with a small number of states can be decoded optimally using the Viterbi decoder. Despite the excellent performance of convolutional codes at very short blocklengths, their strength does not scale with the blocklength for a fixed number of states in the trellis.

  15. Convolution of large 3D images on GPU and its decomposition

    NASA Astrophysics Data System (ADS)

    Karas, Pavel; Svoboda, David

    2011-12-01

    In this article, we propose a method for computing the convolution of large 3D images. The convolution is performed in the frequency domain using the convolution theorem. The algorithm is accelerated on a graphics card by means of the CUDA parallel computing model. The convolution is decomposed in the frequency domain using the decimation-in-frequency algorithm. We pay attention to keeping our approach efficient in terms of both time and memory consumption, and also in terms of memory transfers between CPU and GPU, which have a significant influence on overall computational time. We also study the implementation on multiple GPUs and compare the results between the multi-GPU and multi-CPU implementations.
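
    The CUDA kernels and the decimation-in-frequency decomposition are beyond a short example, but the convolution-theorem core of the method looks like this on the CPU (zero-padded linear convolution assumed):

```python
import numpy as np

def fft_convolve3d(vol, kernel):
    """Linear 3-D convolution via the convolution theorem."""
    shape = [v + k - 1 for v, k in zip(vol.shape, kernel.shape)]  # zero padding
    V = np.fft.rfftn(vol, shape)
    K = np.fft.rfftn(kernel, shape)
    return np.fft.irfftn(V * K, shape)       # one inverse transform

vol = np.random.rand(64, 64, 64)
psf = np.ones((5, 5, 5)) / 125.0
out = fft_convolve3d(vol, psf)               # shape (68, 68, 68)
```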

  16. Pre-recorded instructional audio vs. dispatchers' conversational assistance in telephone cardiopulmonary resuscitation: A randomized controlled simulation study.

    PubMed

    Birkun, Alexei; Glotov, Maksim; Ndjamen, Herman Franklin; Alaiye, Esther; Adeleke, Temidara; Samarin, Sergey

    2018-01-01

    To assess the effectiveness of telephone chest-compression-only cardiopulmonary resuscitation (CPR) guided by a pre-recorded instructional audio when compared with dispatcher-assisted resuscitation. It was a prospective, blind, randomised controlled study involving 109 medical students without previous CPR training. In a standardized mannequin scenario, after the step of dispatcher-assisted cardiac arrest recognition, the participants performed compression-only resuscitation guided over the telephone by either: (1) the pre-recorded instructional audio (n=57); or (2) verbal dispatcher assistance (n=52). The simulation video records were reviewed to assess the CPR performance using a 13-item checklist. The interval from call reception to the first compression, total number and rate of compressions, and total number and duration of pauses after the first compression were also recorded. There were no significant differences between the recording-assisted and dispatcher-assisted groups based on the overall performance score (5.6±2.2 vs. 5.1±1.9, P>0.05) or individual criteria of the CPR performance checklist. The recording-assisted group demonstrated a significantly shorter time interval from call receipt to the first compression (86.0±14.3 vs. 91.2±14.2 s, P<0.05), a higher compression rate (94.9±26.4 vs. 89.1±32.8 min⁻¹) and a higher number of compressions provided (170.2±48.0 vs. 156.2±60.7). When provided by untrained persons in simulated settings, compression-only resuscitation guided by the pre-recorded instructional audio is no less efficient than dispatcher-assisted CPR. Future studies are warranted to further assess the feasibility of using an instructional audio aid as a potential alternative to dispatcher assistance.

  17. Pre-recorded instructional audio vs. dispatchers’ conversational assistance in telephone cardiopulmonary resuscitation: A randomized controlled simulation study

    PubMed Central

    Birkun, Alexei; Glotov, Maksim; Ndjamen, Herman Franklin; Alaiye, Esther; Adeleke, Temidara; Samarin, Sergey

    2018-01-01

    BACKGROUND: To assess the effectiveness of the telephone chest-compression-only cardiopulmonary resuscitation (CPR) guided by a pre-recorded instructional audio when compared with dispatcher-assisted resuscitation. METHODS: It was a prospective, blind, randomised controlled study involving 109 medical students without previous CPR training. In a standardized mannequin scenario, after the step of dispatcher-assisted cardiac arrest recognition, the participants performed compression-only resuscitation guided over the telephone by either: (1) the pre-recorded instructional audio (n=57); or (2) verbal dispatcher assistance (n=52). The simulation video records were reviewed to assess the CPR performance using a 13-item checklist. The interval from call reception to the first compression, total number and rate of compressions, total number and duration of pauses after the first compression were also recorded. RESULTS: There were no significant differences between the recording-assisted and dispatcher-assisted groups based on the overall performance score (5.6±2.2 vs. 5.1±1.9, P>0.05) or individual criteria of the CPR performance checklist. The recording-assisted group demonstrated significantly shorter time interval from call receipt to the first compression (86.0±14.3 vs. 91.2±14.2 s, P<0.05), higher compression rate (94.9±26.4 vs. 89.1±32.8 min-1) and number of compressions provided (170.2±48.0 vs. 156.2±60.7). CONCLUSION: When provided by untrained persons in the simulated settings, the compression-only resuscitation guided by the pre-recorded instructional audio is no less efficient than dispatcher-assisted CPR. Future studies are warranted to further assess feasibility of using instructional audio aid as a potential alternative to dispatcher assistance.

  18. Detailed investigation of Long-Period activity at Campi Flegrei by Convolutive Independent Component Analysis

    NASA Astrophysics Data System (ADS)

    Capuano, P.; De Lauro, E.; De Martino, S.; Falanga, M.

    2016-04-01

    This work is devoted to the analysis of seismic signals continuously recorded at the Campi Flegrei Caldera (Italy) during the entire year 2006. The radiation pattern associated with the Long-Period (LP) energy release is investigated. We adopt an innovative Independent Component Analysis algorithm for convolutive seismic series, adapted and improved to give automatic procedures for detecting seismic events often buried in high-level ambient noise. The extracted waveforms, characterized by an improved signal-to-noise ratio, allow the recognition of Long-Period precursors, evidencing that the seismic activity accompanying the mini-uplift crisis (in 2006), which climaxed in the three days from 26-28 October, had already started at the beginning of October and lasted until mid-November. Hence, a more complete seismic catalog is provided, which can be used to properly quantify the seismic energy release. To better ground our results, we first check the robustness of the method by comparing it with other blind source separation methods based on higher-order statistics; secondly, we reconstruct the radiation patterns of the extracted Long-Period events in order to link the individuated signals directly to the sources. We take advantage of Convolutive Independent Component Analysis, which provides basic signals along the three directions of motion so that a direct polarization analysis can be performed with no other filtering procedures. We show that the extracted signals are mainly composed of P waves with radial polarization pointing to the seismic source of the main LP swarm, i.e., a small area in the Solfatara, also in the case of the small events that both precede and follow the main activity. From a dynamical point of view, they can be described by two degrees of freedom, indicating a low level of complexity associated with vibrations from a superficial hydrothermal system. Our results allow us to move towards a full description of the complexity of the source, which can be used, by means of the small-intensity precursors, for hazard-model development and forecast-model testing, giving an illustrative example of the applicability of the CICA method to regions with low seismicity and high ambient noise.
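
    Convolutive ICA itself is too involved for a short example, but the blind-source-separation step it generalizes can be illustrated with an instantaneous ICA on multichannel records; FastICA here is a stand-in for the paper's convolutive algorithm, and the synthetic mixtures are placeholders.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 4000)
sources = np.c_[np.sin(7 * t),                 # narrow-band, LP-like tone
                np.sign(np.sin(0.5 * t)),      # slow transient
                rng.standard_normal(t.size)]   # ambient noise
mixing = rng.standard_normal((3, 3))
records = sources @ mixing.T                   # three-component "records"

ica = FastICA(n_components=3, random_state=0)
recovered = ica.fit_transform(records)         # separated components
```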

  19. Dispatcher-assisted compression-only cardiopulmonary resuscitation provides best quality cardiopulmonary resuscitation by laypersons: A randomised controlled single-blinded manikin trial.

    PubMed

    Spelten, Oliver; Warnecke, Tobias; Wetsch, Wolfgang A; Schier, Robert; Böttiger, Bernd W; Hinkelbein, Jochen

    2016-08-01

    High-quality cardiopulmonary resuscitation (CPR) by laypersons is a key determinant of both outcome and survival for out-of-hospital cardiac arrest. Dispatcher-assisted CPR (telephone-CPR, T-CPR) increases the frequency and correctness of bystander CPR but results in prolonged time to first chest compressions. However, it remains unclear whether instructions for rescue ventilation and/or chest compressions should be recommended for dispatcher-assisted CPR. The aim of this study was to evaluate both principles of T-CPR with respect to CPR quality. Randomised controlled single-blinded manikin trial. University Hospital of Cologne, Germany, 1 July 2012 to 30 September 2012. Sixty laypersons between 18 and 65 years. Medically educated individuals, medical professionals and pregnant women were excluded. Participants were asked to resuscitate a manikin and were randomised into three groups: non-dispatcher-assisted (uninstructed) CPR (group 1; U-CPR; n = 20), dispatcher-assisted compression-only CPR (group 2; DACO-CPR; n = 19) and full dispatcher-assisted CPR with rescue ventilation (group 3; DAF-CPR; n = 19). Specific parameters of CPR quality [i.e. no-flow time (NFT) as well as compression and ventilation parameters] were analysed. To compare the groups we used Student's t test, and P < 0.05 was considered significant. Initial NFT was lowest in the DACO-CPR group (mean 21.3 ± 14.4%), followed by dispatcher-assisted full CPR (mean 49.1 ± 8.5%) and unassisted CPR (mean 55.0 ± 12.9%). Initial NFT covering the time of instruction was lower in DACO-CPR (12.1 ± 5.4%) than in dispatcher-assisted full CPR (20.7 ± 8.1%). Compression depth was similar in all three groups: 40.6 ± 13.0 mm (unassisted CPR), 41.0 ± 12.2 mm (DACO-CPR) and 38.8 ± 15.8 mm (dispatcher-assisted full CPR). Average compression frequency was highest in the DACO-CPR group (65.2 ± 22.4 min⁻¹) compared with the unassisted CPR group (35.6 ± 24.2 min⁻¹) and the dispatcher-assisted full CPR group (44.5 ± 10.8 min⁻¹). Correct rescue ventilation was given in 3.1 ± 11.1% (unassisted CPR) and 1.6 ± 16.1% (dispatcher-assisted full CPR) of all ventilation attempts. The best quality of CPR was achieved by DACO-CPR because of superior compression frequencies and reduced NFT. In contrast, full dispatcher-assisted CPR with a longer initial instructing phase (initial NFT) did not result in enhanced CPR quality or an optimised compression depth.

  20. Blind One-Bit Compressive Sampling

    DTIC Science & Technology

    2013-01-17

    [14] Q. Li, C. A. Micchelli, L. Shen, and Y. Xu, A proximity algorithm accelerated by Gauss-Seidel iterations for L1/TV denoising models, Inverse... methods for nonconvex optimization on the unit sphere and has provable convergence guarantees. Binary iterative hard thresholding (BIHT) algorithms were... Convergence analysis of the algorithm is presented. Our approach is to obtain a sequence of optimization problems by successively approximating the ℓ0

  1. High efficient optical remote sensing images acquisition for nano-satellite: reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Li, Feng; Xin, Lei; Fu, Jie; Huang, Puming

    2017-10-01

    Large data volume is one of the most prominent features of satellite-based remote sensing systems, and also a burden for data processing and transmission. The theory of compressive sensing (CS) has been developed for almost a decade, and extensive experiments show that CS performs well in data compression and recovery, so we apply CS theory to remote sensing image acquisition. In CS, the construction of a classical sensing matrix valid for all sparse signals has to satisfy the Restricted Isometry Property (RIP) strictly, which limits the practical application of CS to image compression. For remote sensing images, however, some inherent characteristics such as non-negativity and smoothness are known in advance. Therefore, the goal of this paper is to present a novel measurement matrix that is not bound by the RIP. The new sensing matrix consists of two parts: a standard Nyquist sampling matrix for thumbnails and a conventional CS sampling matrix. Since most sun-synchronous satellites orbit the Earth in about 90 minutes and the revisit cycle is also short, many previously captured remote sensing images of the same place are available in advance. This motivates us to reconstruct remote sensing images through a deep learning approach from the measurements produced by the new framework. We therefore propose a novel deep convolutional neural network (CNN) architecture that takes the undersampled measurements as input and outputs an intermediate reconstructed image. The training procedure for the network is time consuming; fortunately, the training step needs to be performed only once, which makes the approach attractive for a host of sparse recovery problems.
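
    A toy numerical sketch of the two-part measurement described above follows: rows that directly sample a low-resolution thumbnail are stacked with random CS projections. All dimensions, measurement counts and the Gaussian projection choice are illustrative assumptions, not the paper's configuration.

```python
# Sketch of the combined measurement: standard Nyquist samples of a
# low-resolution thumbnail stacked with conventional CS (random) projections.
import numpy as np

n = 64 * 64          # vectorized image size
m_cs = 400           # number of random CS measurements (assumed)
thumb = 16 * 16      # thumbnail pixels, sampled directly

rng = np.random.default_rng(1)

# Thumbnail part: each row averages a 4x4 block of the full image
T = np.zeros((thumb, n))
img_idx = np.arange(n).reshape(64, 64)
for r in range(16):
    for c in range(16):
        block = img_idx[4 * r:4 * r + 4, 4 * c:4 * c + 4].ravel()
        T[r * 16 + c, block] = 1.0 / 16.0

# CS part: dense Gaussian projections (no RIP guarantee is claimed here)
Phi = rng.normal(size=(m_cs, n)) / np.sqrt(m_cs)

A = np.vstack([T, Phi])        # the two-part sensing matrix
x = rng.random(n)              # stand-in for a remote sensing image
y = A @ x                      # measurements fed to the reconstruction CNN
print(A.shape, y.shape)
```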

  2. Development and application of deep convolutional neural network in target detection

    NASA Astrophysics Data System (ADS)

    Jiang, Xiaowei; Wang, Chunping; Fu, Qiang

    2018-04-01

    With the development of big data and algorithms, deep convolutional neural networks with more hidden layers have more powerful feature learning and feature expression abilities than traditional machine learning methods, enabling artificial intelligence to surpass human-level performance in many fields. This paper first reviews the development and application of deep convolutional neural networks in the field of object detection in recent years, then briefly summarizes and reflects on some open problems in current research, and finally offers an outlook on the future development of deep convolutional neural networks.

  3. Cost-effective handling of digital medical images in the telemedicine environment.

    PubMed

    Choong, Miew Keen; Logeswaran, Rajasvaran; Bister, Michel

    2007-09-01

    This paper concentrates on strategies for less costly handling of medical images. Aspects of digitization using conventional digital cameras, lossy compression with good diagnostic quality, and visualization on less costly monitors are discussed. For digitization of film-based media, a subjective evaluation of the suitability of digital cameras as an alternative to the digitizer was undertaken. To save on storage, bandwidth and transmission time, the acceptable degree of compression with diagnostically no loss of important data was studied through randomized double-blind tests of subjective image quality when compression noise was kept lower than the inherent noise. A diagnostic experiment was undertaken to evaluate normal low-cost computer monitors as viable viewing displays for clinicians. The results show that conventional digital camera images of X-ray films were diagnostically similar to those from the expensive digitizer. Lossy compression, when used moderately with the imaging-noise-to-compression-noise ratio (ICR) greater than four, can bring about image improvement with better diagnostic quality than the original image. Statistical analysis shows that there is no diagnostic difference between expensive high-quality monitors and conventional computer monitors. The results presented show good potential in implementing the proposed strategies to promote widespread cost-effective telemedicine and digital medical environments.

  4. Phase transitions and melting on the Hugoniot of Mg2SiO4 forsterite: new diffraction and temperature results

    NASA Astrophysics Data System (ADS)

    Asimow, P. D.; Akin, M. C.; Homel, M.; Crum, R. S.; Pagan, D.; Lind, J.; Bernier, J.; Mosenfelder, J. L.; Dillman, A. M.; Lavina, B.; Lee, S.; Fat'yanov, O. V.; Newman, M. G.

    2017-06-01

    The phase transitions of forsterite under shock were studied by X-ray diffraction and pyrometry. Samples of 2-mm-thick, near-full-density (>98% TMD) polycrystalline forsterite were characterized by EBSD and computed tomography and shock-compressed to 50 and 75 GPa with a two-stage gas gun at the Dynamic Compression Sector, Advanced Photon Source, with diffraction imaged during compression and release. Changes in diffraction confirm a phase transition by 75 GPa. In parallel, single-crystal forsterite shock temperatures were measured from 120 to 210 GPa with improved absolute calibration procedures on the Caltech six-channel pyrometer and two-stage gun, and used to examine the interpretation of superheating and the P-T slope of the liquid Hugoniot. This work performed under the auspices of the U.S. Department of Energy (DOE) by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344, supported in part by LLNL's LDRD program under Grants 15-ERD-012 and 16-ERD-010. The Dynamic Compression Sector (35) is supported by DOE / National Nuclear Security Administration under Award Number DE-NA0002442. This research used resources of the Advanced Photon Source, a U.S. DOE Office of Science User Facility operated for the DOE Office of Science by Argonne National Laboratory under Contract No. DE-AC02-06CH11357. Caltech lab supported by NSF EAR-1426526.

  5. Brain tissue deforms similarly to filled elastomers and follows consolidation theory

    NASA Astrophysics Data System (ADS)

    Franceschini, G.; Bigoni, D.; Regitnig, P.; Holzapfel, G. A.

    2006-12-01

    Slow, large deformations of human brain tissue—accompanying cranial vault deformation induced by positional plagiocephaly, occurring during hydrocephalus, and in convolutional development—have surprisingly received scarce mechanical investigation. Since the effects of these deformations may be important, we performed a systematic series of in vitro experiments on human brain tissue, revealing the following features. (i) Under uniaxial (quasi-static), cyclic loading, brain tissue exhibits a peculiar nonlinear mechanical behaviour, exhibiting hysteresis, Mullins effect and residual strain, qualitatively similar to that observed in filled elastomers. As a consequence, the loading and unloading uniaxial curves have been found to follow the Ogden nonlinear elastic theory of rubber (and its variants that include the Mullins effect and permanent strain). (ii) Loaded up to failure, the "shape" of the stress/strain curve qualitatively changes, evidencing softening related to local failure. (iii) Uniaxial (quasi-static) strain experiments under controlled drainage conditions provide the first direct evidence that the tissue obeys consolidation theory involving fluid migration, with properties similar to fine soils, but with much smaller volumetric compressibility. (iv) Our experimental findings also support the existence of a viscous component of the solid phase deformation. Brain tissue should, therefore, be modelled as a porous, fluid-saturated, nonlinear solid with very small volumetric (drained) compressibility.

  6. A spectral nudging method for the ACCESS1.3 atmospheric model

    NASA Astrophysics Data System (ADS)

    Uhe, P.; Thatcher, M.

    2015-06-01

    A convolution-based method of spectral nudging of atmospheric fields is developed in the Australian Community Climate and Earth Systems Simulator (ACCESS) version 1.3, which uses the UK Met Office Unified Model version 7.3 as its atmospheric component. The use of convolutions allows for flexibility in application to different atmospheric grids. An approximation using one-dimensional convolutions is applied, speeding up the nudging scheme by a factor of 10-30 compared with a version using a two-dimensional convolution, without measurably degrading its performance. Care needs to be taken in the order of the convolutions and the frequency of nudging to obtain the best outcome. The spectral nudging scheme is benchmarked against a Newtonian relaxation method, nudging winds and air temperature towards ERA-Interim reanalyses. We find that the convolution approach can produce results that are competitive with Newtonian relaxation in both the effectiveness and efficiency of the scheme, while giving the added flexibility of choosing which length scales to nudge.
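
    The computational saving comes from separability: a 2-D low-pass filter that isolates the large scales to be nudged can be approximated by two 1-D convolutions applied along each grid direction in turn. The sketch below illustrates the idea with a Gaussian kernel and a relaxation-style increment on the filtered field; the kernel, relaxation time and field sizes are illustrative assumptions, not the ACCESS configuration.

```python
# Sketch of the separable-convolution idea: a 2-D low-pass filter is
# approximated by two 1-D convolutions, one per grid direction, and the
# nudging increment is computed from the filtered (large-scale) fields.
import numpy as np
from scipy.ndimage import convolve1d

def gaussian_kernel(sigma, radius):
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

field = np.random.default_rng(2).normal(size=(96, 144))   # model field
target = np.zeros_like(field)                             # reanalysis field

k = gaussian_kernel(sigma=4.0, radius=12)
# Two 1-D passes cost O(N*K) instead of O(N*K^2) for a full 2-D kernel
low_model = convolve1d(convolve1d(field, k, axis=0), k, axis=1)
low_target = convolve1d(convolve1d(target, k, axis=0), k, axis=1)

tau = 6.0                                   # relaxation time scale (steps)
increment = (low_target - low_model) / tau  # nudge only the large scales
field_nudged = field + increment
```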

  7. A spectral nudging method for the ACCESS1.3 atmospheric model

    NASA Astrophysics Data System (ADS)

    Uhe, P.; Thatcher, M.

    2014-10-01

    A convolution-based method of spectral nudging of atmospheric fields is developed in the Australian Community Climate and Earth Systems Simulator (ACCESS) version 1.3, which uses the UK Met Office Unified Model version 7.3 as its atmospheric component. The use of convolutions allows flexibility in application to different atmospheric grids. An approximation using one-dimensional convolutions is applied, speeding up the nudging scheme by a factor of 10 to 30 compared with a version using a two-dimensional convolution, without measurably degrading its performance. Care needs to be taken in the order of the convolutions and the frequency of nudging to obtain the best outcome. The spectral nudging scheme is benchmarked against a Newtonian relaxation method, nudging winds and air temperature towards ERA-Interim reanalyses. We find that the convolution approach can produce results that are competitive with Newtonian relaxation in both the effectiveness and efficiency of the scheme, while giving the added flexibility of choosing which length scales to nudge.

  8. Automated Detection of Diabetic Retinopathy using Deep Learning.

    PubMed

    Lam, Carson; Yi, Darvin; Guo, Margaret; Lindsey, Tony

    2018-01-01

    Diabetic retinopathy is a leading cause of blindness among working-age adults. Early detection of this condition is critical for a good prognosis. In this paper, we demonstrate the use of convolutional neural networks (CNNs) on color fundus images for the recognition task of diabetic retinopathy staging. Our network models achieved test metric performance comparable to baseline literature results, with a validation sensitivity of 95%. We additionally explored multinomial classification models, and demonstrate that errors primarily occur in the misclassification of mild disease as normal, due to the CNNs' inability to detect subtle disease features. We discovered that preprocessing with contrast limited adaptive histogram equalization and ensuring dataset fidelity by expert verification of class labels improve recognition of subtle features. Transfer learning on pretrained GoogLeNet and AlexNet models from ImageNet improved peak test set accuracies to 74.5%, 68.8%, and 57.2% on 2-ary, 3-ary, and 4-ary classification models, respectively.
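
    The paper does not specify an implementation, but the transfer-learning recipe it describes (reusing ImageNet-pretrained convolutional features and retraining a new classification head) can be sketched in PyTorch with torchvision's AlexNet; the 5-class head, frozen features and learning rate below are illustrative assumptions.

```python
# Minimal sketch of ImageNet transfer learning for DR staging, in the spirit
# of the study above; torchvision's AlexNet (weights API, torchvision >= 0.13)
# is used, and the 5-class output head is an assumption for illustration.
import torch
import torch.nn as nn
from torchvision import models

model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
for p in model.features.parameters():
    p.requires_grad = False               # keep pretrained conv features

model.classifier[6] = nn.Linear(4096, 5)  # replace 1000-way ImageNet head

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
criterion = nn.CrossEntropyLoss()

x = torch.randn(8, 3, 224, 224)           # a stand-in batch of fundus images
labels = torch.randint(0, 5, (8,))
loss = criterion(model(x), labels)
loss.backward()
optimizer.step()
```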

  9. Development of a Deep Learning Algorithm for Automatic Diagnosis of Diabetic Retinopathy.

    PubMed

    Raju, Manoj; Pagidimarri, Venkatesh; Barreto, Ryan; Kadam, Amrit; Kasivajjala, Vamsichandra; Aswath, Arun

    2017-01-01

    This paper mainly focuses on the deep learning application in classifying the stage of diabetic retinopathy and detecting the laterality of the eye using funduscopic images. Diabetic retinopathy is a chronic, progressive, sight-threatening disease of the retinal blood vessels. Ophthalmologists diagnose diabetic retinopathy through early funduscopic screening. Normally, there is a time delay in reporting and intervention, apart from the financial cost and risk of blindness associated with it. Using a convolutional neural network based approach for automatic diagnosis of diabetic retinopathy, we trained the prediction network on the publicly available Kaggle dataset. Approximately 35,000 images were used to train the network, which observed a sensitivity of 80.28% and a specificity of 92.29% on the validation dataset of ~53,000 images. Using 8,810 images, the network was trained for detecting the laterality of the eye and observed an accuracy of 93.28% on the validation set of 8,816 images.

  10. Protection of Health Imagery by Region Based Lossless Reversible Watermarking Scheme

    PubMed Central

    Priya, R. Lakshmi; Sadasivam, V.

    2015-01-01

    Providing authentication and integrity in medical images is a problem, and this work proposes a new blind fragile region-based lossless reversible watermarking technique to improve the trustworthiness of medical images. The proposed technique embeds the watermark using a reversible least-significant-bit embedding scheme. The scheme combines hashing, compression, and digital signature techniques to create a content-dependent watermark, making use of the compressed region of interest (ROI) for recovery of the ROI as reported in the literature. Experiments were carried out to assess the performance of the scheme, revealing that the ROI is extracted intact and that the PSNR values obtained indicate that the presented scheme offers greater protection for health imagery. PMID:26649328

  11. Digital watermarking algorithm research of color images based on quaternion Fourier transform

    NASA Astrophysics Data System (ADS)

    An, Mali; Wang, Weijiang; Zhao, Zhen

    2013-10-01

    A watermarking algorithm for color images based on the quaternion Fourier transform (QFFT) and an improved quantization index modulation (QIM) algorithm is proposed in this paper. The original image is transformed by the QFFT, the watermark image is processed by compression and quantization coding, and the processed watermark image is then embedded into the components of the transformed original image. The algorithm achieves embedding and blind extraction of the watermark image. The experimental results show that the watermarking algorithm based on the improved QIM algorithm with distortion compensation achieves a good tradeoff between invisibility and robustness, and better robustness against Gaussian noise, salt-and-pepper noise, JPEG compression, cropping, filtering and image enhancement attacks than the traditional QIM algorithm.

  12. Effet Bauschinger lors de la plasticité cyclique de l'aluminium pur monocristallin

    NASA Astrophysics Data System (ADS)

    Alhamany, A.; Chicois, J.; Fougères, R.; Hamel, A.

    1992-08-01

    This paper is concerned with the study of microscopic mechanisms which control the cyclic deformation of pure aluminium, and especially with the analysis of the Bauschinger effect which appears in aluminium single crystals deformed by cyclic straining. Fatigue tests are performed on Al single crystals with the crystal axis parallel to [1̄23] at room temperature, at plastic shear strain amplitudes in the range from 10^-4 to 3×10^-3. Mechanical saturation is not obtained at any strain level. Instead, a hardening-softening-secondary hardening sequence is found. The magnitude of the Bauschinger effect, defined as the difference between the yield stresses in tension and in compression, changes all along the fatigue loop and during the fatigue test. The Bauschinger effect disappears at two points of the fatigue loop, one in the tension part, the other in the compression part. At these points, the Bauschinger effect is inverted. Dislocation arrangement evolutions with fatigue conditions can explain the cyclic behaviour of Al single crystals. A heterogeneous dislocation distribution can be observed in the cyclically strained metal: dislocation tangles, long dislocation walls and dislocation cell walls, separated by dislocation-poor channels, appear in the material as a function of the cycle number. The long-range internal stress necessary to ensure the compatibility of deformation between the hard and soft regions controls the observed Bauschinger effect. (French abstract, translated:) This work falls within the study of the microscopic mechanisms involved in the cyclic deformation of pure aluminium and concerns in particular the analysis of the Bauschinger effect appearing during the cyclic loading of single crystals. The study was carried out at room temperature on pure aluminium single crystals oriented for single glide (axis [1̄23]), at plastic strain amplitudes between 10^-4 and a few 10^-3. We did not obtain true mechanical saturation; instead, a hardening-softening-secondary hardening sequence is present. The magnitude of the Bauschinger effect, taken as the difference between the yield stresses in tension and in compression measured by an appropriate procedure, evolves along a fatigue loop and vanishes at two particular points, one in tension and one in compression. On either side of these points, the sign of the Bauschinger effect is inverted. The microstructures of the fatigued states are characterized by a heterogeneous distribution of dislocations consisting of tangles, walls or cell walls, depending on the degree of cyclic deformation, separated by regions of low dislocation density. The internal stresses associated with the deformation incompatibilities resulting from this heterogeneous dislocation distribution are at the origin of the Bauschinger effect observed in the single crystals. These stresses, together with the evolution of the amount of dislocation cells with fatigue, explain the secondary hardening.

  13. Fuel Areal-Density Measurements in Laser-Driven Magnetized Inertial Fusion from Secondary Neutrons

    NASA Astrophysics Data System (ADS)

    Davies, J. R.; Barnak, D. H.; Betti, R.; Glebov, V. Yu.; Knauer, J. P.; Peebles, J. L.

    2017-10-01

    Laser-driven magnetized liner inertial fusion is being developed on the OMEGA laser to provide the first data at a significantly smaller scale than the Z pulsed-power machine in order to test scaling and to provide more shots with better diagnostic access than Z. In OMEGA experiments, a 0.6-mm-outer-diam plastic cylinder filled with 11 atm of D2 is placed in an axial magnetic field of 10 T, the D2 is preheated by a single beam along the axis, and then the cylinder is compressed by 40 beams. Secondary DT neutron yields provide a measurement of the areal density of the compressed D2 because the compressed fuel is much smaller than the mean free path and the Larmor radius of the T produced in D-D fusion. Measured secondary yields confirm theoretical predictions that preheating and magnetization reduce fuel compression. Higher fuel compression is found to consistently lead to lower neutron yields, which is not predicted by simulations. The information, data, or work presented herein was funded in part by the Advanced Research Projects Agency-Energy (ARPA-E), U.S. Department of Energy, under Award Number DE-AR0000568 and the Department of Energy National Nuclear Security Administration under Award Number DE-NA0001944.

  14. Performance Bounds on Two Concatenated, Interleaved Codes

    NASA Technical Reports Server (NTRS)

    Moision, Bruce; Dolinar, Samuel

    2010-01-01

    A method has been developed of computing bounds on the performance of a code comprised of two linear binary codes generated by two encoders serially concatenated through an interleaver. Originally intended for use in evaluating the performances of some codes proposed for deep-space communication links, the method can also be used in evaluating the performances of short-block-length codes in other applications. The method applies, more specifically, to a communication system in which the following processes take place: At the transmitter, the original binary information that one seeks to transmit is first processed by an encoder into an outer code (Co) characterized by, among other things, a pair of numbers (n, k), where n (n > k) is the total number of code bits associated with k information bits and n − k bits are used for correcting or at least detecting errors. Next, the outer code is processed through either a block or a convolutional interleaver. In the block interleaver, the words of the outer code are processed in blocks of I words. In the convolutional interleaver, the interleaving operation is performed bit-wise in N rows with delays that are multiples of B bits. The output of the interleaver is processed through a second encoder to obtain an inner code (Ci) characterized by (ni, ki). The output of the inner code is transmitted over an additive-white-Gaussian-noise channel characterized by a symbol signal-to-noise ratio (SNR) Es/No and a bit SNR Eb/No. At the receiver, an inner decoder generates estimates of bits. Depending on whether a block or a convolutional interleaver is used at the transmitter, the sequence of estimated bits is processed through a block or a convolutional de-interleaver, respectively, to obtain estimates of code words. Then the estimates of the code words are processed through an outer decoder, which generates estimates of the original information along with flags indicating which estimates are presumed to be correct and which are found to be erroneous. From the perspective of the present method, the topic of major interest is the performance of the communication system as quantified in the word-error rate and the undetected-error rate as functions of the SNRs and the total latency of the interleaver and inner code. The method is embodied in equations that describe bounds on these functions. Throughout the derivation of the equations that embody the method, it is assumed that the decoder for the outer code corrects any error pattern of t or fewer errors, detects any error pattern of s or fewer errors, may detect some error patterns of more than s errors, and does not correct any patterns of more than t errors. Because a mathematically complete description of the equations that embody the method and of the derivation of the equations would greatly exceed the space available for this article, it must suffice to summarize by reporting that the derivation includes consideration of several complex issues, including relationships between latency and memory requirements for block and convolutional codes, burst error statistics, enumeration of error-event intersections, and effects of different interleaving depths. In a demonstration, the method was used to calculate bounds on the performances of several communication systems, each based on serial concatenation of a (63,56) expurgated Hamming code with a convolutional inner code through a convolutional interleaver.
The bounds calculated by use of the method were compared with results of numerical simulations of performances of the systems to show the regions where the bounds are tight (see figure).
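
    The convolutional interleaver described above can be sketched as a bank of N row delay lines whose delays grow in multiples of B symbols, served by a commutator. The sketch below is a minimal illustration; N, B and the zero fill symbol are arbitrary choices, not the values used in the analysis.

```python
# Minimal sketch of a convolutional interleaver: row i is a FIFO delay line
# of i*B symbols (row 0 passes through); a commutator cycles through rows.
from collections import deque

def conv_interleave(symbols, N=4, B=1, fill=0):
    rows = [deque([fill] * (i * B)) for i in range(N)]
    out = []
    for k, s in enumerate(symbols):   # commutator: one row per input symbol
        row = rows[k % N]
        row.append(s)
        out.append(row.popleft())     # oldest symbol in that row comes out
    return out

print(conv_interleave(list(range(12)), N=3, B=1))
# -> [0, 0, 0, 3, 1, 0, 6, 4, 2, 9, 7, 5]  (fill symbols flush out first)
```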

  15. Cross-Layer Design for Robust and Scalable Video Transmission in Dynamic Wireless Environment

    DTIC Science & Technology

    2011-02-01

    ...code rate convolutional codes or prioritized Rate-Compatible Punctured... [34] "New rate-compatible punctured convolutional codes for Viterbi decoding," IEEE Trans. Communications, Volume 42, Issue 12, pp. 3073-3079, Dec... Quality of service; RCPC: Rate-compatible and punctured convolutional codes; SNR: Signal to noise

  16. A Video Transmission System for Severely Degraded Channels

    DTIC Science & Technology

    2006-07-01

    ...rate compatible punctured convolutional codes (RCPC). By separating the SPIHT bitstream... June 2000. [170] J. Hagenauer, "Rate-compatible punctured convolutional codes (RCPC codes) and their applications," IEEE Transactions on... Farvardin [160] used rate-compatible convolutional codes. They noticed that for some transmission rates, one of their EEP schemes, which may

  17. There is no MacWilliams identity for convolutional codes. [transmission gain comparison

    NASA Technical Reports Server (NTRS)

    Shearer, J. B.; Mceliece, R. J.

    1977-01-01

    An example is provided of two convolutional codes that have the same transmission gain but whose dual codes do not. This shows that no analog of the MacWilliams identity for block codes can exist relating the transmission gains of a convolutional code and its dual.

  18. Single Image Super-Resolution Based on Multi-Scale Competitive Convolutional Neural Network

    PubMed Central

    Qu, Xiaobo; He, Yifan

    2018-01-01

    Deep convolutional neural networks (CNNs) are successful in single-image super-resolution. Traditional CNNs are limited in their ability to exploit multi-scale contextual information for image reconstruction due to the fixed convolutional kernel in their building modules. To restore various scales of image details, we enhance the multi-scale inference capability of CNNs by introducing competition among multi-scale convolutional filters, and build up a shallow network under limited computational resources. The proposed network has the following two advantages: (1) the multi-scale convolutional kernels provide multiple contexts for image super-resolution, and (2) the maximum competitive strategy adaptively chooses the optimal scale of information for image reconstruction. Our experimental results on image super-resolution show that the proposed network outperforms the state-of-the-art methods. PMID:29509666

  19. Single Image Super-Resolution Based on Multi-Scale Competitive Convolutional Neural Network.

    PubMed

    Du, Xiaofeng; Qu, Xiaobo; He, Yifan; Guo, Di

    2018-03-06

    Deep convolutional neural networks (CNNs) are successful in single-image super-resolution. Traditional CNNs are limited in their ability to exploit multi-scale contextual information for image reconstruction due to the fixed convolutional kernel in their building modules. To restore various scales of image details, we enhance the multi-scale inference capability of CNNs by introducing competition among multi-scale convolutional filters, and build up a shallow network under limited computational resources. The proposed network has the following two advantages: (1) the multi-scale convolutional kernels provide multiple contexts for image super-resolution, and (2) the maximum competitive strategy adaptively chooses the optimal scale of information for image reconstruction. Our experimental results on image super-resolution show that the proposed network outperforms the state-of-the-art methods.
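
    The competition mechanism both abstracts describe can be sketched as parallel convolutions with different kernel sizes whose feature maps compete through an element-wise maximum, so that at each position the most responsive scale wins. A minimal PyTorch sketch follows; kernel sizes and channel counts are illustrative assumptions.

```python
# Sketch of "competition among multi-scale convolutional filters": parallel
# 3x3/5x5/7x7 convolutions whose outputs compete through an element-wise
# maximum (a maxout-style choice of scale at every position).
import torch
import torch.nn as nn

class MultiScaleCompetitive(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, k, padding=k // 2)
            for k in (3, 5, 7)
        )

    def forward(self, x):
        # Stack branch outputs and keep, per pixel and channel, the
        # strongest response across scales
        outs = torch.stack([b(x) for b in self.branches], dim=0)
        return outs.max(dim=0).values

block = MultiScaleCompetitive(1, 16)
y = block(torch.randn(1, 1, 48, 48))   # a low-resolution input patch
print(y.shape)                          # torch.Size([1, 16, 48, 48])
```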

  20. On signals faint and sparse: The ACICA algorithm for blind de-trending of exoplanetary transits with low signal-to-noise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Waldmann, I. P., E-mail: ingo@star.ucl.ac.uk

    2014-01-01

    Independent component analysis (ICA) has recently been shown to be a promising new path in the data analysis and de-trending of exoplanetary time series signals. Such approaches do not require or assume any prior or auxiliary knowledge about the data or instrument in order to de-convolve the astrophysical light curve signal from instrument or stellar systematic noise. These methods are often known as 'blind-source separation' (BSS) algorithms. Unfortunately, all BSS methods suffer from an amplitude and sign ambiguity of their de-convolved components, which severely limits these methods in low signal-to-noise (S/N) observations where their scalings cannot be determined otherwise. Here we present a novel approach to calibrate ICA using sparse wavelet calibrators. The Amplitude Calibrated Independent Component Analysis (ACICA) allows for the direct retrieval of the independent components' scalings and the robust de-trending of low S/N data. Such an approach gives us a unique and unprecedented insight into the underlying morphology of a data set, which makes this method a powerful tool for exoplanetary data de-trending and signal diagnostics.

  1. Automated segmentation of geographic atrophy using deep convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Hu, Zhihong; Wang, Ziyuan; Sadda, SriniVas R.

    2018-02-01

    Geographic atrophy (GA) is an end-stage manifestation of advanced age-related macular degeneration (AMD), the leading cause of blindness and visual impairment in developed nations. Techniques to rapidly and precisely detect and quantify GA would appear to be of critical importance in advancing the understanding of its pathogenesis. In this study, we develop an automated supervised classification system using deep convolutional neural networks (CNNs) for segmenting GA in fundus autofluorescence (FAF) images. More specifically, to enhance the contrast of GA relative to the background, we apply contrast-limited adaptive histogram equalization. Blood vessels may cause GA segmentation errors because their intensity level is similar to that of GA. A tensor-voting technique is applied to identify the blood vessels, and a vessel inpainting technique is applied to suppress the GA segmentation errors due to the blood vessels. To handle the large variation of GA lesion sizes, three deep CNNs with three differently sized input image patches are applied. Fifty randomly chosen FAF images were obtained from fifty subjects with GA. The algorithm-defined GA regions are compared with manual delineation by a certified grader. A two-fold cross-validation is applied to evaluate the algorithm performance. The mean segmentation accuracy, true positive rate (i.e. sensitivity), true negative rate (i.e. specificity), positive predictive value, false discovery rate, and overlap ratio between the algorithm- and manually-defined GA regions are 0.97 +/- 0.02, 0.89 +/- 0.08, 0.98 +/- 0.02, 0.87 +/- 0.12, 0.13 +/- 0.12, and 0.79 +/- 0.12, respectively, demonstrating a high level of agreement.

  2. Effects of image compression and degradation on an automatic diabetic retinopathy screening algorithm

    NASA Astrophysics Data System (ADS)

    Agurto, C.; Barriga, S.; Murray, V.; Pattichis, M.; Soliz, P.

    2010-03-01

    Diabetic retinopathy (DR) is one of the leading causes of blindness among adult Americans. Automatic methods for detection of the disease have been developed in recent years, most of them addressing the segmentation of bright and red lesions. In this paper we present an automatic DR screening system that does not approach the problem through lesion segmentation. The algorithm distinguishes non-diseased retinal images from those with pathology based on textural features obtained using multiscale Amplitude Modulation-Frequency Modulation (AM-FM) decompositions. The decomposition is represented as features that are the inputs to a classifier. The algorithm achieves 0.88 area under the ROC curve (AROC) for a set of 280 images from the MESSIDOR database. The algorithm is then used to analyze the effects of image compression and degradation, which will be present in most actual clinical or screening environments. Results show that the algorithm is insensitive to illumination variations, but high rates of compression and large blurring effects degrade its performance.

  3. Concordance and acceptability of electric stimulation therapy: a randomised controlled trial.

    PubMed

    Miller, C; McGuiness, W; Wilson, S; Cooper, K; Swanson, T; Rooney, D; Piller, N; Woodward, M

    2017-08-02

    A pilot single-blinded randomised controlled trial (RCT) was conducted to examine concordance with, and acceptability of, electric stimulation therapy (EST) in patients with venous leg ulcers (VLUs) who had not tolerated moderate to high compression. Participants were randomised to the intervention group (n=15) or a placebo control group (n=8); EST was used four times daily for 20 minutes per session. Participants were monitored for eight weeks, during which time concordance with the treatment and perceptions of the treatment were assessed. Concordance with the total recommended treatment time was 71.4% for the intervention group and 82.9% for the control group, a difference that was not statistically significant. Participants rated EST as acceptable (84.6% intervention; 83.3% control); only two participants, both from the placebo control group, would not be willing to use EST again. The majority considered EST easier to use than compression (68.4%). EST was a practical and acceptable treatment among people who have been unable to tolerate moderate to high compression therapy.

  4. Brief compression-only cardiopulmonary resuscitation training video and simulation with homemade mannequin improves CPR skills.

    PubMed

    Wanner, Gregory K; Osborne, Arayel; Greene, Charlotte H

    2016-11-29

    Cardiopulmonary resuscitation (CPR) training has traditionally involved classroom-based courses or, more recently, home-based video self-instruction. These methods typically require preparation and a purchase fee, which can dissuade many potential bystanders from receiving training. This study aimed to evaluate the effectiveness of teaching compression-only CPR to previously untrained individuals using our 6-min online CPR training video and skills practice on a homemade mannequin, reproduced by viewers with commonly available items (towel, toilet paper roll, t-shirt). Participants viewed the training video and practiced with the homemade mannequin. This was a parallel-design study with pre- and post-training evaluations of CPR skills (compression rate, depth, hand position, release) and hands-off time (time without compressions). CPR skills were evaluated using a sensor-equipped mannequin, and two blinded CPR experts observed the testing of participants. Twenty-four participants were included: 12 never-trained and 12 currently certified in CPR. Comparing pre- and post-training, the never-trained group had improvements in average compression rate per minute (64.3 to 103.9, p = 0.006), compressions with correct hand position in 1 min (8.3 to 54.3, p = 0.002), and correct compression release in 1 min (21.2 to 76.3, p < 0.001). The CPR-certified group had adequate pre- and post-test compression rates (>100/min), but an improved number of compressions with correct release (53.5 to 94.7, p < 0.001). Both groups had significantly reduced hands-off time after training. Achieving adequate compression depths (>50 mm) remained problematic in both groups. Comparisons made between groups indicated significantly greater improvements in compression depth, hand position, and hands-off time in never-trained compared to CPR-certified participants. Inter-rater agreement values were also calculated between the CPR experts and the sensor-equipped mannequin. A brief internet-based video coupled with skill practice on a homemade mannequin improved compression-only CPR skills, especially in the previously untrained participants. This training method allows for widespread compression-only CPR training with a tactile learning component, without fees or advance preparation.

  5. White Pre-Service Teachers and "De-Privileged Spaces"

    ERIC Educational Resources Information Center

    Adair, Jennifer

    2008-01-01

    In their classic article, "Culture as Disability," McDermott and Varenne (1995) retell the fable of the seeing man who, upon finding himself in the "country of the blind" thought he could easily rule it. His efforts were fruitless because he could not make sense of their world. Daily life was set up for the blind to be successful. The seeing man…

  6. Effects of the Right Carotid Sinus Compression Technique on Blood Pressure and Heart Rate in Medicated Patients with Hypertension.

    PubMed

    Campón-Checkroun, Angélica María; Luceño-Mardones, Agustín; Riquelme, Inmaculada; Oliva-Pascual-Vaca, Jesús; Ricard, François; Oliva-Pascual-Vaca, Ángel

    2018-05-07

    To identify the immediate and middle-term effects of the right carotid sinus compression technique on blood pressure and heart rate in hypertensive patients. Randomized blinded experimental study. Primary health centers of Cáceres (Spain). Sixty-four medicated patients with hypertension were randomly assigned to an intervention group (n = 33) or to a control group (n = 31). In the intervention group a compression of the right carotid sinus was applied for 20 sec. In the control group, a placebo technique of placing hands on the radial styloid processes was performed. Blood pressure and heart rate were measured in both groups before the intervention (preintervention), immediately after the intervention, 5 min after the intervention, and 60 min after the intervention. The intervention group significantly decreased systolic and diastolic blood pressure and heart rate immediately after the intervention, with a large clinical effect; systolic blood pressure remained reduced 5 min after the intervention, and heart rate remained reduced 60 min after the intervention. No significant changes were observed in the control group. Right carotid sinus compression could be clinically useful for regulating acute hypertension.

  7. Efficient Sum of Outer Products Dictionary Learning (SOUP-DIL) and Its Application to Inverse Problems.

    PubMed

    Ravishankar, Saiprasad; Nadakuditi, Raj Rao; Fessler, Jeffrey A

    2017-12-01

    The sparsity of signals in a transform domain or dictionary has been exploited in applications such as compression, denoising and inverse problems. More recently, data-driven adaptation of synthesis dictionaries has shown promise compared to analytical dictionary models. However, dictionary learning problems are typically non-convex and NP-hard, and the usual alternating minimization approaches for these problems are often computationally expensive, with the computations dominated by the NP-hard synthesis sparse coding step. This paper exploits the ideas that drive algorithms such as K-SVD, and investigates in detail efficient methods for aggregate sparsity penalized dictionary learning by first approximating the data with a sum of sparse rank-one matrices (outer products) and then using a block coordinate descent approach to estimate the unknowns. The resulting block coordinate descent algorithms involve efficient closed-form solutions. Furthermore, we consider the problem of dictionary-blind image reconstruction, and propose novel and efficient algorithms for adaptive image reconstruction using block coordinate descent and sum of outer products methodologies. We provide a convergence study of the algorithms for dictionary learning and dictionary-blind image reconstruction. Our numerical experiments show the promising performance and speedups provided by the proposed methods over previous schemes in sparse data representation and compressed sensing-based image reconstruction.
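
    The sum-of-outer-products decomposition at the core of the method can be sketched in a few lines of NumPy: the data matrix is approximated by sum_j d_j c_j^T, and each rank-one block (atom d_j, sparse code row c_j) is updated in turn with closed-form steps. This is a rough illustration of the idea only; the sizes are arbitrary and a simple hard threshold stands in for the paper's exact sparsity penalty and update rules.

```python
# Rough sketch of the sum-of-outer-products idea: approximate Y by
# sum_j d_j c_j^T with sparse rows c_j, updated one rank-one term at a
# time via block coordinate descent. Sizes/thresholds are illustrative.
import numpy as np

rng = np.random.default_rng(3)
Y = rng.normal(size=(64, 200))          # data: 64-dim signals, 200 samples
J, lam = 10, 0.5                         # dictionary size, sparsity threshold

D = rng.normal(size=(64, J))
D /= np.linalg.norm(D, axis=0)
C = np.zeros((J, 200))

for _ in range(20):                      # outer BCD sweeps
    for j in range(J):
        # Residual with the j-th rank-one term removed
        R = Y - D @ C + np.outer(D[:, j], C[j])
        # Sparse code update: hard-threshold the correlations
        c = D[:, j] @ R
        c[np.abs(c) < lam] = 0.0
        # Atom update: closed form, renormalized to unit norm
        d = R @ c
        if np.linalg.norm(d) > 0:
            D[:, j] = d / np.linalg.norm(d)
        C[j] = c

print(np.linalg.norm(Y - D @ C) / np.linalg.norm(Y))  # relative fit error
```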

  8. Efficient Sum of Outer Products Dictionary Learning (SOUP-DIL) and Its Application to Inverse Problems

    PubMed Central

    Ravishankar, Saiprasad; Nadakuditi, Raj Rao; Fessler, Jeffrey A.

    2017-01-01

    The sparsity of signals in a transform domain or dictionary has been exploited in applications such as compression, denoising and inverse problems. More recently, data-driven adaptation of synthesis dictionaries has shown promise compared to analytical dictionary models. However, dictionary learning problems are typically non-convex and NP-hard, and the usual alternating minimization approaches for these problems are often computationally expensive, with the computations dominated by the NP-hard synthesis sparse coding step. This paper exploits the ideas that drive algorithms such as K-SVD, and investigates in detail efficient methods for aggregate sparsity penalized dictionary learning by first approximating the data with a sum of sparse rank-one matrices (outer products) and then using a block coordinate descent approach to estimate the unknowns. The resulting block coordinate descent algorithms involve efficient closed-form solutions. Furthermore, we consider the problem of dictionary-blind image reconstruction, and propose novel and efficient algorithms for adaptive image reconstruction using block coordinate descent and sum of outer products methodologies. We provide a convergence study of the algorithms for dictionary learning and dictionary-blind image reconstruction. Our numerical experiments show the promising performance and speedups provided by the proposed methods over previous schemes in sparse data representation and compressed sensing-based image reconstruction. PMID:29376111

  9. Deep architecture neural network-based real-time image processing for image-guided radiotherapy.

    PubMed

    Mori, Shinichiro

    2017-08-01

    To develop real-time image processing for image-guided radiotherapy, we evaluated several neural network models for use with different imaging modalities, including X-ray fluoroscopic image denoising. Setup images of prostate cancer patients were acquired with two oblique X-ray fluoroscopic units. Two types of residual network were designed: a convolutional autoencoder (rCAE) and a convolutional neural network (rCNN). We varied the convolutional kernel size and the number of convolutional layers for both networks, and the number of pooling and upsampling layers for rCAE. The ground-truth image was generated by applying the contrast-limited adaptive histogram equalization (CLAHE) method to the input image. Network models were trained to keep the quality of the output image close to that of the ground-truth image, starting from the unprocessed input image. For the image denoising evaluation, noisy input images were used for training. More than 6 convolutional layers with convolutional kernels >5×5 improved image quality, but did not allow real-time imaging. After applying a pair of pooling and upsampling layers to both networks, rCAEs with >3 convolutions each and rCNNs with >12 convolutions with a pair of pooling and upsampling layers achieved real-time processing at 30 frames per second (fps) with acceptable image quality. Our suggested networks achieved real-time image processing for contrast enhancement and image denoising on a conventional modern personal computer. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
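
    A minimal PyTorch sketch of the residual convolutional autoencoder (rCAE) idea described above follows: a few convolutions, one pooling/upsampling pair, and a residual skip connection from input to output. Layer counts and channel widths are illustrative assumptions rather than the configurations evaluated in the paper.

```python
# Sketch of a residual convolutional autoencoder (rCAE) denoiser: conv
# layers, one pooling/upsampling pair, and a residual skip so the network
# learns the correction to add to the input frame.
import torch
import torch.nn as nn

class rCAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # pooling layer
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        self.decode = nn.Sequential(
            nn.Upsample(scale_factor=2),           # upsampling layer
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return x + self.decode(self.encode(x))    # residual connection

net = rCAE()
noisy = torch.randn(1, 1, 128, 128)               # stand-in fluoroscopic frame
denoised = net(noisy)
print(denoised.shape)                              # torch.Size([1, 1, 128, 128])
```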

  10. A comparison of the convolution and TMR10 treatment planning algorithms for Gamma Knife® radiosurgery

    PubMed Central

    Wright, Gavin; Harrold, Natalie; Bownes, Peter

    2018-01-01

    Aims To compare the accuracies of the convolution and TMR10 Gamma Knife treatment planning algorithms, and to assess the impact upon clinical practice of implementing convolution-based treatment planning. Methods Doses calculated by both algorithms were compared against ionisation chamber measurements in homogeneous and heterogeneous phantoms. Relative dose distributions calculated by both algorithms were compared against film-derived 2D isodose plots in a heterogeneous phantom, with distance-to-agreement (DTA) measured at the 80%, 50% and 20% isodose levels. A retrospective planning study compared 19 clinically acceptable metastasis convolution plans against TMR10 plans with matched shot times, allowing a novel comparison of true dosimetric parameters rather than total beam-on time. Gamma analysis and dose-difference analysis were performed on each pair of dose distributions. Results Both algorithms matched point-dose measurements within ±1.1% in homogeneous conditions. Convolution provided superior point-dose accuracy in the heterogeneous phantom (-1.1% vs 4.0%), with no discernible differences in relative dose distribution accuracy. In our study, convolution-calculated plans yielded a D99% 6.4% (95% CI: 5.5%-7.3%, p<0.001) lower than shot-matched TMR10 plans. For the gamma passing criterion 1%/1 mm, 16% of targets had passing rates >95%. The range of dose differences in the targets was 0.2-4.6 Gy. Conclusions Convolution provides superior accuracy versus TMR10 in heterogeneous conditions. Implementing convolution would result in increased target doses; its implementation may therefore require a re-evaluation of prescription doses. PMID:29657896

  11. Aeronautical audio broadcasting via satellite

    NASA Technical Reports Server (NTRS)

    Tzeng, Forrest F.

    1993-01-01

    A system design for aeronautical audio broadcasting, with C-band uplink and L-band downlink, via Inmarsat space segments is presented. Near-transparent-quality compression of 5-kHz bandwidth audio at 20.5 kbit/s is achieved based on a hybrid technique employing linear predictive modeling and transform-domain residual quantization. Concatenated Reed-Solomon/convolutional codes with quadrature phase shift keying are selected for bandwidth and power efficiency. RF bandwidth at 25 kHz per channel, and a decoded bit error rate at 10(exp -6) with E(sub b)/N(sub o) at 3.75 dB are obtained. An interleaver, scrambler, modem synchronization, and frame format were designed, and frequency-division multiple access was selected over code-division multiple access. A link budget computation based on a worst-case scenario indicates sufficient system power margins. Transponder occupancy analysis for 72 audio channels demonstrates ample remaining capacity to accommodate emerging aeronautical services.

  12. Mission science value-cost savings from the Advanced Imaging Communication System (AICS)

    NASA Technical Reports Server (NTRS)

    Rice, R. F.

    1984-01-01

    An Advanced Imaging Communication System (AICS) was proposed in the mid-1970s as an alternative to the Voyager data/communication system architecture. The AICS achieved virtually error free communication with little loss in the downlink data rate by concatenating a powerful Reed-Solomon block code with the Voyager convolutionally coded, Viterbi decoded downlink channel. The clean channel allowed AICS sophisticated adaptive data compression techniques. Both Voyager and the Galileo mission have implemented AICS components, and the concatenated channel itself is heading for international standardization. An analysis that assigns a dollar value/cost savings to AICS mission performance gains is presented. A conservative value or savings of $3 million for Voyager, $4.5 million for Galileo, and as much as $7 to 9.5 million per mission for future projects such as the proposed Mariner Mar 2 series is shown.

  13. Design of Intelligent Cross-Layer Routing Protocols for Airborne Wireless Networks Under Dynamic Spectrum Access Paradigm

    DTIC Science & Technology

    2011-05-01

    ...rate convolutional codes or the prioritized Rate-Compatible Punctured... Quality of service; RCPC: Rate-compatible and punctured convolutional codes; SNR: Signal to noise ratio; SSIM... Convolutional (RCPC) codes. The RCPC codes achieve UEP by puncturing off different amounts of coded bits of the parent code. The

  14. Convolution Operation of Optical Information via Quantum Storage

    NASA Astrophysics Data System (ADS)

    Li, Zhixiang; Liu, Jianji; Fan, Hongming; Zhang, Guoquan

    2017-06-01

    We propose a novel method to achieve the optical convolution of two input images via quantum storage based on the electromagnetically induced transparency (EIT) effect. By placing an EIT medium in the confocal Fourier plane of the 4f-imaging system, the optical convolution of the two input images can be achieved in the image plane.
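
    The scheme exploits the convolution theorem underlying 4f optics: a pointwise product formed in the Fourier plane corresponds to a convolution in the image plane. The NumPy sketch below verifies this numerically for two random "images"; it is a purely computational analogue of the optical operation, with arbitrary sizes.

```python
# Numerical analogue of the 4f arrangement: multiplying the 2-D spectra of
# two inputs (the product the EIT medium effectively mediates at the
# confocal Fourier plane) and transforming back yields their (circular)
# convolution in the image plane.
import numpy as np

a = np.random.default_rng(5).random((32, 32))
b = np.random.default_rng(6).random((32, 32))

conv_fft = np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

# Check against direct circular convolution at one output point (3, 7)
direct = sum(
    a[m, n] * b[(3 - m) % 32, (7 - n) % 32]
    for m in range(32) for n in range(32)
)
assert np.isclose(conv_fft[3, 7], direct)
```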

  15. Estimating Isometric Tension of Finger Muscle Using Needle EMG Signals and the Twitch Contraction Model

    NASA Astrophysics Data System (ADS)

    Tachibana, Hideyuki; Suzuki, Takafumi; Mabuchi, Kunihiko

    We address a method for estimating the isometric muscle tension of fingers, as fundamental research towards a neural-signal-based prosthesis for fingers. We utilize needle electromyogram (EMG) signals, which carry approximately the same information as peripheral neural signals. The estimation algorithm comprises two convolution operations. The first convolution is between a normal distribution and a spike array detected from the needle EMG signals; it estimates the probability density of spike-invoking times in the muscle. In this convolution, we hypothesize that each motor unit in a muscle fires spikes independently according to the same probability density function. The second convolution is between the result of the first convolution and the isometric twitch, viz., the impulse response of the motor unit. The result of the calculation is the sum of the estimated tensions of all muscle fibers, i.e., the muscle tension. We confirmed good correlation between the estimated and actual muscle tension, with correlation coefficients >0.9 in 59% and >0.8 in 89% of all trials.
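
    The two-stage estimator described above translates directly into code: convolve the detected spike train with a normal distribution, then convolve the result with a twitch impulse response. The NumPy sketch below uses a common single-exponential twitch shape and synthetic spikes; all constants (sampling rate, kernel widths, contraction time) are illustrative assumptions.

```python
# Two-convolution tension estimate: spike train * Gaussian (spike-time
# probability density), then * twitch impulse response (motor unit model).
import numpy as np

fs = 1000.0                              # sampling rate, Hz (assumed)
t = np.arange(0, 2.0, 1 / fs)
spikes = np.zeros_like(t)
spikes[np.random.default_rng(4).integers(0, t.size, 40)] = 1.0

# First convolution: spike array with a normal distribution
tg = np.arange(-0.05, 0.05, 1 / fs)
gauss = np.exp(-0.5 * (tg / 0.01) ** 2)
gauss /= gauss.sum()
density = np.convolve(spikes, gauss, mode="same")

# Second convolution: density with a twitch (impulse response)
tt = np.arange(0, 0.3, 1 / fs)
T_c = 0.05                               # twitch contraction time, s (assumed)
twitch = (tt / T_c) * np.exp(1 - tt / T_c)
tension = np.convolve(density, twitch, mode="full")[: t.size]
```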

  16. High Performance Implementation of 3D Convolutional Neural Networks on a GPU.

    PubMed

    Lan, Qiang; Wang, Zelong; Wen, Mei; Zhang, Chunyuan; Wang, Yijie

    2017-01-01

    Convolutional neural networks have proven to be highly successful in applications such as image classification, object tracking, and many other tasks based on 2D inputs. Recently, researchers have started to apply convolutional neural networks to video classification, which constitutes a 3D input and requires far larger amounts of memory and much more computation. FFT based methods can reduce the amount of computation, but this generally comes at the cost of an increased memory requirement. On the other hand, the Winograd Minimal Filtering Algorithm (WMFA) can reduce the number of operations required and thus can speed up the computation, without increasing the required memory. This strategy was shown to be successful for 2D neural networks. We implement the algorithm for 3D convolutional neural networks and apply it to a popular 3D convolutional neural network which is used to classify videos and compare it to cuDNN. For our highly optimized implementation of the algorithm, we observe a twofold speedup for most of the 3D convolution layers of our test network compared to the cuDNN version.
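
    The saving the WMFA provides can be seen in its smallest one-dimensional instance, F(2,3), which produces two outputs of a 3-tap filter (correlation, as in CNN "convolutions") with 4 multiplications instead of 6. The sketch below uses the standard F(2,3) transform matrices and checks the result against direct sliding correlation; the 2D and 3D algorithms in the paper nest this construction, which the sketch does not attempt.

```python
# One-dimensional Winograd minimal filtering F(2,3): two outputs of a
# 3-tap correlation from 4 elementwise multiplies instead of 6.
import numpy as np

BT = np.array([[1, 0, -1, 0],
               [0, 1,  1, 0],
               [0, -1, 1, 0],
               [0, 1,  0, -1]], float)   # input transform
G = np.array([[1.0, 0.0, 0.0],
              [0.5, 0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0, 0.0, 1.0]])          # filter transform
AT = np.array([[1, 1, 1, 0],
               [0, 1, -1, -1]], float)   # output transform

d = np.array([1.0, 2.0, 3.0, 4.0])      # 4 input samples
g = np.array([0.5, -1.0, 2.0])          # 3-tap filter

m = (G @ g) * (BT @ d)                  # the 4 elementwise multiplies
y = AT @ m                              # the two outputs

ref = np.array([d[0:3] @ g, d[1:4] @ g])  # direct sliding correlation
assert np.allclose(y, ref)
```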

  17. High Performance Implementation of 3D Convolutional Neural Networks on a GPU

    PubMed Central

    Wang, Zelong; Wen, Mei; Zhang, Chunyuan; Wang, Yijie

    2017-01-01

    Convolutional neural networks have proven to be highly successful in applications such as image classification, object tracking, and many other tasks based on 2D inputs. Recently, researchers have started to apply convolutional neural networks to video classification, which constitutes a 3D input and requires far larger amounts of memory and much more computation. FFT based methods can reduce the amount of computation, but this generally comes at the cost of an increased memory requirement. On the other hand, the Winograd Minimal Filtering Algorithm (WMFA) can reduce the number of operations required and thus can speed up the computation, without increasing the required memory. This strategy was shown to be successful for 2D neural networks. We implement the algorithm for 3D convolutional neural networks and apply it to a popular 3D convolutional neural network which is used to classify videos and compare it to cuDNN. For our highly optimized implementation of the algorithm, we observe a twofold speedup for most of the 3D convolution layers of our test network compared to the cuDNN version. PMID:29250109

  18. Détermination des densités de charge d'espace dans les isolants solides par la méthode de l'onde thermique

    NASA Astrophysics Data System (ADS)

    Toureille, A.; Reboul, J.-P.; Merle, P.

    1991-01-01

    A non-destructive method for the measurement of space charge densities in solid insulating materials is described. This method, called “the thermal step technique”, is concerned with the diffusion of a step of heat applied to one side of the sample and with the resulting non-uniform thermal expansion. From the solution of the heat equation, we have set up the relations between the measured current and the space charge densities. The deconvolution procedure leading to these charge densities is presented. Some results obtained with this method on XLPE and polypropylene slabs are given. (French abstract, translated:) A new non-destructive method for measuring space charge densities in solid insulators is described. This method, known as the “thermal wave” technique, is based on the diffusion of a heat front applied to one face of the specimen and on the non-uniform thermal expansion that results from it. From the solution of the heat equation, we have established the relations between the measured current and the charge densities. We then describe a deconvolution procedure for computing these charge densities. Some results obtained with this method on cross-linked polyethylene and polypropylene slabs are given.

  19. Bone age detection via carpogram analysis using convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Torres, Felipe; Bravo, María. Alejandra; Salinas, Emmanuel; Triana, Gustavo; Arbeláez, Pablo

    2017-11-01

    Bone age assessment is a critical factor for determining delayed development in children, which can be a sign of pathologies such as endocrine diseases, growth abnormalities, and chromosomal, neurological and congenital disorders, among others. In this paper we present BoneNet, a methodology to automatically assess the skeletal maturity state of pediatric patients based on Convolutional Neural Networks. We train and evaluate our algorithm on a database of X-ray images provided by the hospital Fundación Santa Fe de Bogotá, with around 1500 images of patients between the ages of 1 and 18. We compare two different architectures for classifying the given data in order to explore the generality of our method. To accomplish this, we define multiple binary age assessment problems, dividing the data by bone age and differentiating patients by gender. Thus, exploring several parameters, we develop BoneNet. Our approach is holistic, efficient, and modular, since it is possible for specialists to use all the networks combined to determine a patient's skeletal maturity. BoneNet achieves over 90% accuracy for most of the critical age thresholds when classifying images as over or under a given age.

  20. Advanced Signal Processing Techniques Applied to Terahertz Inspections on Aerospace Foams

    NASA Technical Reports Server (NTRS)

    Trinh, Long Buu

    2009-01-01

    The space shuttle's external fuel tank is thermally insulated by the closed cell foams. However, natural voids composed of air and trapped gas are found as by-products when the foams are cured. Detection of foam voids and foam de-bonding is a formidable task owing to the small index of refraction contrast between foam and air (1.04:1). In the presence of a denser binding matrix agent that bonds two different foam materials, time-differentiation of filtered terahertz signals can be employed to magnify information prior to the main substrate reflections. In the absence of a matrix binder, de-convolution of the filtered time differential terahertz signals is performed to reduce the masking effects of antenna ringing. The goal is simply to increase probability of void detection through image enhancement and to determine the depth of the void.

  1. Convoluted nozzle design for the RL10 derivative 2B engine

    NASA Technical Reports Server (NTRS)

    1985-01-01

    The convoluted nozzle is a conventional refractory metal nozzle extension that is formed with a portion of the nozzle convoluted to stow the extendible nozzle within the length of the rocket engine. The convoluted nozzle (CN) was deployed by a system of four gas-driven actuators. For spacecraft applications the optimum CN may be self-deployed by internal pressure retained, during deployment, by a jettisonable exit closure. The convoluted nozzle is included in a study of extendible nozzles for the RL10 Engine Derivative 2B for use in an early orbit transfer vehicle (OTV). Four extendible nozzle configurations for the RL10-2B engine were evaluated. Three configurations of the two-position nozzle were studied, including a hydrogen dump-cooled metal nozzle and radiation-cooled nozzles of refractory metal and carbon/carbon composite construction, respectively.

  2. Local dynamic range compensation for scanning electron microscope imaging system by sub-blocking multiple peak HE with convolution.

    PubMed

    Sim, K S; Teh, V; Tey, Y C; Kho, T K

    2016-11-01

    This paper introduces a new technique to improve Scanning Electron Microscope (SEM) image quality, which we name sub-blocking multiple peak histogram equalization (SUB-B-MPHE) with a convolution operator. The new modified MPHE performs better than the original MPHE. In addition, the sub-blocking method incorporates a convolution operator that removes the blocking effect introduced by processing SEM images in sub-blocks: by properly redistributing pixel values across the whole image, the convolution effectively eliminates the block boundaries. Overall, SUB-B-MPHE with convolution outperforms the other methods. SCANNING 38:492-501, 2016. © 2015 Wiley Periodicals, Inc.
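
    As a rough illustration of the sub-blocking idea (plain histogram equalization stands in for the multi-peak variant, and a simple averaging convolution stands in for the paper's operator; both substitutions are assumptions):

        import numpy as np
        from scipy.ndimage import uniform_filter

        def equalize(tile):
            # Plain histogram equalization on one sub-block (a stand-in
            # for the multi-peak variant described in the abstract).
            hist, _ = np.histogram(tile, bins=256, range=(0, 255))
            cdf = hist.cumsum() / tile.size
            return np.interp(tile.ravel(), np.arange(256),
                             255 * cdf).reshape(tile.shape)

        def sub_block_he(image, block=64, smooth=5):
            out = np.zeros(image.shape, dtype=float)
            for i in range(0, image.shape[0], block):
                for j in range(0, image.shape[1], block):
                    out[i:i + block, j:j + block] = \
                        equalize(image[i:i + block, j:j + block])
            # The convolution redistributes pixel values across tile
            # boundaries, suppressing the blocking effect.
            return uniform_filter(out, size=smooth)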

  3. Irradiation of Materials using Short, Intense Ion Beams

    NASA Astrophysics Data System (ADS)

    Seidl, Peter; Ji, Q.; Persaud, A.; Feinberg, E.; Silverman, M.; Sulyman, A.; Waldron, W. L.; Schenkel, T.; Barnard, J. J.; Friedman, A.; Grote, D. P.; Gilson, E. P.; Kaganovich, I. D.; Stepanov, A.; Zimmer, M.

    2016-10-01

    We present experiments studying material properties created with nanosecond and millimeter-scale ion beam pulses on the Neutralized Drift Compression Experiment-II at Berkeley Lab. The explored scientific topics include the dynamics of ion-induced damage in materials, materials synthesis far from equilibrium, warm dense matter, and intense beam-plasma physics. We describe the improved accelerator performance, diagnostics, and results of beam-induced irradiation of thin samples of, e.g., tin and silicon. Bunches with >3×10^10 ions/pulse, 1-mm radius, and 2-30 ns FWHM duration have been created. To achieve the short pulse durations and mm-scale focal spot radii, the 1.2 MeV He+ ion beam is neutralized in a drift compression section, which removes the space-charge defocusing effect during the final compression and focusing. Quantitative comparison of detailed particle-in-cell simulations with the experiment plays an important role in optimizing the accelerator performance while keeping pace with the accelerator repetition rate of <1/minute. This work was supported by the Office of Science of the US Department of Energy under contracts DE-AC0205CH11231 (LBNL), DE-AC52-07NA27344 (LLNL) and DE-AC02-09CH11466 (PPPL).

  4. Scalable Video Transmission Over Multi-Rate Multiple Access Channels

    DTIC Science & Technology

    2007-06-01

    Rate-compatible punctured convolutional codes (RCPC codes) and their applications," IEEE...source encoded using the MPEG-4 video codec. The source-encoded bitstream is then channel encoded with Rate Compatible Punctured Convolutional (RCPC...Clark, and J. M. Geist, "Punctured convolutional codes of rate (n-1)/n and simplified maximum likelihood decoding," IEEE Transactions on

  5. The general theory of convolutional codes

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Stanley, R. P.

    1993-01-01

    This article presents a self-contained introduction to the algebraic theory of convolutional codes. This introduction is partly a tutorial, but at the same time contains a number of new results which will prove useful for designers of advanced telecommunication systems. Among the new concepts introduced here are the Hilbert series for a convolutional code and the class of compact codes.

  6. Abdominal textiloma: a case report

    PubMed Central

    Erguibi, Driss; Hassan, Robleh; Ajbal, Mohamed; Kadiri, Bouchaib

    2015-01-01

    Textiloma, also called gossypiboma, is a very rare postoperative complication. It consists of a foreign body composed of surgical compress(es) or drape(s) forgotten in the operative site. These are most often asymptomatic and difficult to diagnose. In particular, chronic cases present no clinical or radiological signs specific enough for a differential diagnosis. The patient history is therefore essential for the diagnosis, given that the clinical signs are inconclusive. Plain abdominal radiography contributes little; ultrasound is reliable. Computed tomography allows a precise topographic diagnosis, although not always. Some teams propose MRI exploration. We report a case of intra-abdominal textiloma in a 31-year-old female patient, operated on 8 years earlier for an ectopic pregnancy, in whom abdomino-pelvic CT suggested a peritoneal hydatid cyst without liver involvement. She was treated by extraction of a small 25x15 cm surgical drape adherent to the sigmoid colon. The aim of this work is to highlight the diagnostic difficulty of this pathology and the importance of exploratory laparotomy. PMID:26523184

  7. Computational analysis of current-loss mechanisms in a post-hole convolute driven by magnetically insulated transmission lines

    DOE PAGES

    Rose, D.  V.; Madrid, E.  A.; Welch, D.  R.; ...

    2015-03-04

    Numerical simulations of a vacuum post-hole convolute driven by magnetically insulated vacuum transmission lines (MITLs) are used to study current losses due to charged particle emission from the MITL-convolute-system electrodes. This work builds on the results of a previous study [E. A. Madrid et al., Phys. Rev. ST Accel. Beams 16, 120401 (2013)] and adds realistic power pulses, Ohmic heating of anode surfaces, and a model for the formation and evolution of cathode plasmas. The simulations suggest that modestly larger anode-cathode gaps in the MITLs upstream of the convolute result in significantly less current loss. In addition, longer pulse durations lead to somewhat greater current loss due to cathode-plasma expansion. These results can be applied to the design of future MITL-convolute systems for high-current pulsed-power systems.

  8. Classification of urine sediment based on convolution neural network

    NASA Astrophysics Data System (ADS)

    Pan, Jingjing; Jiang, Cunbo; Zhu, Tiantian

    2018-04-01

    By designing a new convolutional neural network framework, this paper removes the constraints of the original framework, which requires large training samples of identical size. The input images are shifted and cropped to generate sub-images of the same size, and dropout is then applied to the generated sub-images, increasing the diversity of the samples and preventing overfitting. Proper subsets of the sub-image set are selected at random such that each subset contains the same number of elements but no two subsets are identical. These subsets are used as input layers for the convolutional neural network. Through the convolution layers, pooling, fully connected layer, and output layer, we obtain the classification loss rates of the test and training sets. In a classification experiment on red blood cells, white blood cells, and calcium oxalate crystals, a classification accuracy of 97% or more was achieved.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Jesse S.; Sinogeikin, Stanislav V.; Lin, Chuanlong

    Complementary advances in high pressure research apparatus and techniques make it possible to carry out time-resolved high pressure research using what would customarily be considered static high pressure apparatus. This work specifically explores time-resolved high pressure x-ray diffraction with rapid compression and/or decompression of a sample in a diamond anvil cell. Key aspects of the synchrotron beamline and ancillary equipment are presented, including source considerations, rapid (de)compression apparatus, high frequency imaging detectors, and software suitable for processing large volumes of data. A number of examples are presented, including fast equation of state measurements, compression-rate-dependent synthesis of metastable states in silicon and germanium, and ultrahigh compression rates using a piezoelectric driven diamond anvil cell.

  10. An adaptive distributed data aggregation based on RCPC for wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Hua, Guogang; Chen, Chang Wen

    2006-05-01

    One of the most important design issues in wireless sensor networks is energy efficiency. Data aggregation has a significant impact on the energy efficiency of wireless sensor networks. With massive deployment of sensor nodes and limited energy supply, data aggregation has been considered an essential paradigm for data collection in sensor networks. Recently, distributed source coding has been demonstrated to possess several advantages in data aggregation for wireless sensor networks. Distributed source coding is able to encode sensor data at a lower bit rate without direct communication among sensor nodes. To ensure reliable and high-throughput transmission of the aggregated data, we propose in this research progressive transmission and decoding of Rate-Compatible Punctured Convolutional (RCPC) coded data aggregation with distributed source coding. Our proposed rate-1/2 RSC codes with the Viterbi algorithm for distributed source coding guarantee that, even without any correlation between the data, the decoder can always decode the data correctly without wasting energy. The proposed approach achieves two aspects of adaptive data aggregation for wireless sensor networks. First, the RCPC coding facilitates adaptive compression corresponding to the correlation of the sensor data: when the data correlation is high, a higher compression ratio can be achieved; otherwise, a lower compression ratio results. Second, the data aggregation is adaptively accumulated. There is no waste of energy in the transmission; even if there is no correlation among the data, the energy consumed is at the same level as raw data collection. Experimental results have shown that the proposed distributed data aggregation based on RCPC is able to achieve high-throughput and low-energy-consumption data collection for wireless sensor networks.
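
    The abstract does not specify the mother code or puncturing pattern, but the generic RCPC mechanism can be sketched with an assumed (7,5)-octal rate-1/2 mother code and a puncturing pattern that raises the rate to 2/3:

        import numpy as np

        def conv_encode_r12(bits, g1=0b111, g2=0b101):
            # Rate-1/2 mother code, constraint length 3 (generators 7,5
            # octal); illustrative only, not the code used in the paper.
            state, out = 0, []
            for b in bits:
                state = ((state << 1) | b) & 0b111
                out += [bin(state & g1).count("1") % 2,
                        bin(state & g2).count("1") % 2]
            return np.array(out)

        def puncture(coded, pattern=(1, 1, 1, 0)):
            # Keep only positions flagged 1 in the repeating puncture
            # pattern; (1,1,1,0) turns the rate-1/2 stream into rate 2/3.
            mask = np.resize(np.array(pattern), len(coded)).astype(bool)
            return coded[mask]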

  11. A generalized four-fifth law for compressible turbulence

    NASA Astrophysics Data System (ADS)

    Aluie, Hussein

    2016-11-01

    Kolmogorov's 4/5th law is a celebrated exact result of incompressible turbulence and is key to the formulation of his 1941 phenomenology. We will present its generalization to compressible turbulence. Partial support was provided by NSF Grant OCE-1259794, US Department of Energy (US DOE) Grant DE-SC0014318, and the LANL LDRD program through Project Number 20150568ER.
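
    For reference, the incompressible statement being generalized relates the third-order longitudinal velocity increment to the mean dissipation rate at inertial-range separations (standard notation; the compressible generalization itself is not given in the abstract):

        \langle \delta u_L^3(\ell) \rangle = -\tfrac{4}{5}\,\varepsilon\,\ell,
        \qquad
        \delta u_L(\ell) \equiv \left[ \mathbf{u}(\mathbf{x}+\boldsymbol{\ell}) - \mathbf{u}(\mathbf{x}) \right] \cdot \hat{\boldsymbol{\ell}}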

  12. Dynamic Negative Compressibility of Few-Layer Graphene, h-BN, and MoS2

    NASA Astrophysics Data System (ADS)

    Neves, Bernardo; Barboza, Ana Paula; Chacham, Helio; Oliveira, Camilla; Fernandes, Thales; Martins Ferreira, Erlon; Archanjo, Braulio; Batista, Ronaldo; Oliveira, Alan

    2013-03-01

    We report a novel mechanical response of few-layer graphene, h-BN, and MoS2 to simultaneous compression and shear by an atomic force microscope (AFM) tip. The response is characterized by the vertical expansion of these two-dimensional (2D) layered materials upon compression. The effect is proportional to the applied load, leading to vertical strain values (opposite to the applied force) of up to 150%. The effect is null in the absence of shear, increases with tip velocity, and is anisotropic. It also has similar magnitudes in these solid-lubricant materials (few-layer graphene, h-BN, and MoS2), but it is absent in single-layer graphene and in few-layer mica and Bi2Se3. We propose a physical mechanism for the effect in which the combined compressive and shear stresses from the tip induce dynamical wrinkling of the upper material layers, leading to the observed flake thickening. The new effect (and, therefore, the proposed wrinkling) is reversible in the three materials where it is observed [2]. Financial support from CNPq, Fapemig, Rede Nacional de Pesquisa em Nanotubos de Carbono and INCT-Nano-Carbono.

  13. Linear diffusion-wave channel routing using a discrete Hayami convolution method

    Treesearch

    Li Wang; Joan Q. Wu; William J. Elliot; Fritz R. Feidler; Sergey Lapin

    2014-01-01

    The convolution of an input with a response function has been widely used in hydrology as a means to solve various problems analytically. Due to the high computation demand in solving the functions using numerical integration, it is often advantageous to use the discrete convolution instead of the integration of the continuous functions. This approach greatly reduces...
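
    A minimal sketch of the discrete approach, assuming the standard Hayami (1951) kernel with wave celerity c and diffusivity D (the paper's exact discretization is not given in the abstract):

        import numpy as np

        def hayami_kernel(t, x, c, D):
            # Hayami response function of the linear diffusion-wave
            # equation for routing distance x (assumed standard form).
            return x / (2.0 * t * np.sqrt(np.pi * D * t)) * \
                np.exp(-((x - c * t) ** 2) / (4.0 * D * t))

        def route(inflow, dt, x, c, D):
            # Discrete convolution of the inflow hydrograph with the
            # kernel, replacing the continuous integral.
            t = dt * (np.arange(len(inflow)) + 0.5)  # avoid t = 0
            h = hayami_kernel(t, x, c, D)
            return np.convolve(inflow, h)[:len(inflow)] * dt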

  14. Optimal convolution SOR acceleration of waveform relaxation with application to semiconductor device simulation

    NASA Technical Reports Server (NTRS)

    Reichelt, Mark

    1993-01-01

    In this paper we describe a novel generalized SOR (successive overrelaxation) algorithm for accelerating the convergence of the dynamic iteration method known as waveform relaxation. A new convolution SOR algorithm is presented, along with a theorem for determining the optimal convolution SOR parameter. Both analytic and experimental results are given to demonstrate that the convergence of the convolution SOR algorithm is substantially faster than that of the more obvious frequency-independent waveform SOR algorithm. Finally, to demonstrate the general applicability of this new method, it is used to solve the differential-algebraic system generated by spatial discretization of the time-dependent semiconductor device equations.

  15. A Geometric Construction of Cyclic Cocycles on Twisted Convolution Algebras

    NASA Astrophysics Data System (ADS)

    Angel, Eitan

    2010-09-01

    In this thesis we give a construction of cyclic cocycles on convolution algebras twisted by gerbes over discrete translation groupoids. In his seminal book, Connes constructs a map from the equivariant cohomology of a manifold carrying the action of a discrete group into the periodic cyclic cohomology of the associated convolution algebra. Furthermore, for proper étale groupoids, J.-L. Tu and P. Xu provide a map between the periodic cyclic cohomology of a gerbe twisted convolution algebra and twisted cohomology groups. Our focus will be the convolution algebra with a product defined by a gerbe over a discrete translation groupoid. When the action is not proper, we cannot construct an invariant connection on the gerbe; therefore to study this algebra, we instead develop simplicial notions related to ideas of J. Dupont to construct a simplicial form representing the Dixmier-Douady class of the gerbe. Then by using a JLO formula we define a morphism from a simplicial complex twisted by this simplicial Dixmier-Douady form to the mixed bicomplex of certain matrix algebras. Finally, we define a morphism from this complex to the mixed bicomplex computing the periodic cyclic cohomology of the twisted convolution algebras.

  16. Study of the electronic and magnetic structure of CrO2

    NASA Astrophysics Data System (ADS)

    Matar, S.; Demazeau, G.; Sticht, J.; Eyert, V.; Kübler, J.

    1992-03-01

    The electronic and magnetic properties of CrO2 were investigated using the self-consistent A.S.W. method in a new approach to study the evolution of its magnetic properties at decreasing volume and to assess recent photoemission results on thin films. The results show that a magnetic transition of ferro → antiferromagnetic type is likely to be induced under pressure. The experimental results could be explained by a compression of the CrO6 octahedron within the cell.

  17. Minimal-memory realization of pearl-necklace encoders of general quantum convolutional codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Houshmand, Monireh; Hosseini-Khayat, Saied

    2011-02-15

    Quantum convolutional codes, like their classical counterparts, promise to offer higher error correction performance than block codes of equivalent encoding complexity, and are expected to find important applications in reliable quantum communication where a continuous stream of qubits is transmitted. Grassl and Roetteler devised an algorithm to encode a quantum convolutional code with a "pearl-necklace" encoder. Despite their algorithm's theoretical significance as a neat way of representing quantum convolutional codes, it is not well suited to practical realization. In fact, there is no straightforward way to implement any given pearl-necklace structure. This paper closes the gap between theoretical representation and practical implementation. In our previous work, we presented an efficient algorithm to find a minimal-memory realization of a pearl-necklace encoder for Calderbank-Shor-Steane (CSS) convolutional codes. This work is an extension of our previous work and presents an algorithm for turning a pearl-necklace encoder for a general (non-CSS) quantum convolutional code into a realizable quantum convolutional encoder. We show that a minimal-memory realization depends on the commutativity relations between the gate strings in the pearl-necklace encoder. We find a realization by means of a weighted graph which details the noncommutative paths through the pearl necklace. The weight of the longest path in this graph is equal to the minimal amount of memory needed to implement the encoder. The algorithm has a polynomial-time complexity in the number of gate strings in the pearl-necklace encoder.

  18. Coset Codes Viewed as Terminated Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Fossorier, Marc P. C.; Lin, Shu

    1996-01-01

    In this paper, coset codes are considered as terminated convolutional codes. Based on this approach, three new general results are presented. First, it is shown that the iterative squaring construction can equivalently be defined from a convolutional code whose trellis terminates. This convolutional code determines a simple encoder for the coset code considered, and the state and branch labelings of the associated trellis diagram become straightforward. Also, from the generator matrix of the code in its convolutional code form, much information about the trade-off between the state connectivity and complexity at each section, and the parallel structure of the trellis, is directly available. Based on this generator matrix, it is shown that the parallel branches in the trellis diagram of the convolutional code represent the same coset code C(sub 1), of smaller dimension and shorter length. Utilizing this fact, a two-stage optimum trellis decoding method is devised. The first stage decodes C(sub 1), while the second stage decodes the associated convolutional code, using the branch metrics delivered by stage 1. Finally, a bidirectional decoding of each received block starting at both ends is presented. If about the same number of computations is required, this approach remains very attractive from a practical point of view as it roughly doubles the decoding speed. This fact is particularly interesting whenever the second half of the trellis is the mirror image of the first half, since the same decoder can be implemented for both parts.

  19. Simultaneous Retrieval of Temperature, Water Vapor and Ozone Atmospheric Profiles from IASI: Compression, De-noising, First Guess Retrieval and Inversion Algorithms

    NASA Technical Reports Server (NTRS)

    Aires, F.; Rossow, W. B.; Scott, N. A.; Chedin, A.; Hansen, James E. (Technical Monitor)

    2001-01-01

    A fast temperature, water vapor, and ozone atmospheric profile retrieval algorithm is developed for the high-spectral-resolution Infrared Atmospheric Sounding Interferometer (IASI) space-borne instrument. Compression and de-noising of IASI observations are performed using Principal Component Analysis. This preprocessing methodology also allows for fast pattern recognition in a climatological data set to obtain a first guess. Then, a neural network using the first-guess information is developed to retrieve simultaneously temperature, water vapor, and ozone atmospheric profiles. The performance of the resulting fast and accurate inverse model is evaluated with a large, diversified data set of radiosonde atmospheres, including rare events.
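
    A minimal sketch of the PCA compression/de-noising step (the number of retained components is an assumed parameter):

        import numpy as np

        def pca_compress(spectra, n_components=100):
            # spectra: (n_obs, n_channels) radiances. Projection onto the
            # leading principal components compresses each spectrum; the
            # truncated reconstruction discards mostly noise (de-noising).
            mean = spectra.mean(axis=0)
            centered = spectra - mean
            _, _, Vt = np.linalg.svd(centered, full_matrices=False)
            basis = Vt[:n_components]          # leading eigen-spectra
            scores = centered @ basis.T        # compressed representation
            denoised = scores @ basis + mean   # truncated reconstruction
            return scores, denoised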

  20. Signal Detection and Frame Synchronization of Multiple Wireless Networking Waveforms

    DTIC Science & Technology

    2007-09-01

    punctured to obtain coding rates of 2/3 and 3/4. Convolutional forward error correction coding is used to detect and correct bit...likely to be isolated and be correctable by the convolutional decoder...binary convolutional code. A shortened Reed-Solomon technique is employed first. The code is shortened depending upon the data

  1. Using convolutional decoding to improve time delay and phase estimation in digital communications

    DOEpatents

    Ormesher, Richard C [Albuquerque, NM; Mason, John J [Albuquerque, NM

    2010-01-26

    The time delay and/or phase of a communication signal received by a digital communication receiver can be estimated based on a convolutional decoding operation that the communication receiver performs on the received communication signal. If the original transmitted communication signal has been spread according to a spreading operation, a corresponding despreading operation can be integrated into the convolutional decoding operation.

  2. X-Ray Thomson Scattering and Radiography from Spherical Implosions on the OMEGA Laser

    NASA Astrophysics Data System (ADS)

    Saunders, A. M.; Laziki-Jenei, A.; Doeppner, T.; Landen, O. L.; MacDonald, M.; Nilsen, J.; Swift, D.; Falcone, R. W.

    2017-10-01

    X-ray Thomson scattering (XRTS) is an experimental technique that directly probes the physics of warm dense matter by measuring electron density, electron temperature, and ionization state. XRTS in combination with x-ray radiography offers a unique ability to measure an absolute equation of state (EOS) of material under compression. Recent experiments highlight uncertainties in EOS models and the predicted ionization of compressed matter, suggesting that more validation of models is needed. We present XRTS and x-ray radiography measurements taken at the OMEGA Laser Facility on directly driven solid carbon spheres, at densities on the order of 1×10^24 cm^-3 and temperatures on the order of 30 eV. The results shed light on the equations of state of matter under compression. This work was performed under the auspices of the US Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and under the Stewardship Science Graduate Fellowship, Grant Number DE-NA0002135.

  3. Comparison of 4 supraglottic devices used by paramedics during simulated CPR: a randomized controlled crossover trial.

    PubMed

    Szarpak, Łukasz; Kurowski, Andrzej; Truszewski, Zenon; Robak, Oliver; Frass, Michael

    2015-08-01

    Ensuring an open airway during cardiopulmonary resuscitation is fundamental. The aim of this study was to determine the success rate of blind intubation during simulated cardiopulmonary resuscitation by untrained personnel. Four devices were compared in a simulated resuscitation scenario: ILMA (Intavent Direct Ltd, Buckinghamshire, United Kingdom), Cobra PLA (Engineered Medical Systems Inc, Indianapolis, IN), Supraglottic Airway Laryngopharyngeal Tube (SALT) (ECOLAB, St. Paul, MN), and Air-Q (Mercury Medical, Clearwater, FL). A group of 210 paramedics intubated a manikin with continuous chest compressions. The mean times to intubation were 40.46 ± 4.64, 33.96 ± 6.23, 17.2 ± 4.63, and 49.23 ± 13.19 seconds (SALT vs ILMA, Cobra PLA, and Air-Q; P < .05). The success ratios of blind intubation for the devices were 86.7%, 85.7%, 100%, and 71.4% (SALT vs ILMA, Cobra PLA, and Air-Q; P < .05). The study showed that the most efficient device with the shortest blind intubation time was the SALT device. Copyright © 2015 Elsevier Inc. All rights reserved.

  4. Final Report for DE-AR0000708

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Donohue, Marc; Aranovich, Gregory; Wang, Chao

    This project determined the effect of adsorption compression on the rates of catalytic chemical reactions. It was shown that in regions of strong adsorption compression there is a dramatic increase in the rate of catalytic chemical reaction. Experiments focused on the conversion of NO to molecular nitrogen and oxygen. Data analysis techniques were developed to allow interpretation of experimental data and prediction of conditions for optimal reaction rates.

  5. Single image super-resolution based on convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Zou, Lamei; Luo, Ming; Yang, Weidong; Li, Peng; Jin, Liujia

    2018-03-01

    We present a deep learning method for single-image super-resolution (SISR). The proposed approach learns an end-to-end mapping between low-resolution (LR) images and high-resolution (HR) images. The mapping is represented as a deep convolutional neural network that takes the LR image as input and outputs the HR image. Our network uses 5 convolution layers, whose kernel sizes include 5×5, 3×3 and 1×1. In our proposed network, we use residual learning and combine convolution kernels of different sizes at the same layer. The experimental results show that our proposed method outperforms existing methods in reconstruction quality metrics and human visual assessment on benchmark images.
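
    A hedged PyTorch sketch of such a network, with illustrative channel widths (the abstract gives only the kernel sizes): one block combines kernels of different sizes at the same layer, and a global residual connection makes the body predict only the missing detail.

        import torch
        import torch.nn as nn

        class MultiKernelBlock(nn.Module):
            # Parallel 5x5, 3x3 and 1x1 convolutions whose outputs are
            # concatenated: one way to combine kernel sizes in one layer.
            def __init__(self, cin, cout):
                super().__init__()
                self.b5 = nn.Conv2d(cin, cout, 5, padding=2)
                self.b3 = nn.Conv2d(cin, cout, 3, padding=1)
                self.b1 = nn.Conv2d(cin, cout, 1)

            def forward(self, x):
                return torch.relu(
                    torch.cat([self.b5(x), self.b3(x), self.b1(x)], 1))

        class SISRNet(nn.Module):
            def __init__(self):
                super().__init__()
                self.head = MultiKernelBlock(1, 16)  # 48 channels out
                self.tail = nn.Sequential(
                    nn.Conv2d(48, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 1, 3, padding=1),
                )

            def forward(self, lr_up):
                # Residual learning on a bicubically upscaled LR input.
                return lr_up + self.tail(self.head(lr_up))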

  6. Error-trellis Syndrome Decoding Techniques for Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1984-01-01

    An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.

  7. Error-trellis syndrome decoding techniques for convolutional codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1985-01-01

    An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.

  8. Molecular graph convolutions: moving beyond fingerprints

    NASA Astrophysics Data System (ADS)

    Kearnes, Steven; McCloskey, Kevin; Berndl, Marc; Pande, Vijay; Riley, Patrick

    2016-08-01

    Molecular "fingerprints" encoding structural information are the workhorse of cheminformatics and machine learning in drug discovery applications. However, fingerprint representations necessarily emphasize particular aspects of the molecular structure while ignoring others, rather than allowing the model to make data-driven decisions. We describe molecular graph convolutions, a machine learning architecture for learning from undirected graphs, specifically small molecules. Graph convolutions use a simple encoding of the molecular graph—atoms, bonds, distances, etc.—which allows the model to take greater advantage of information in the graph structure. Although graph convolutions do not outperform all fingerprint-based methods, they (along with other graph-based methods) represent a new paradigm in ligand-based virtual screening with exciting opportunities for future improvement.
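
    A much-simplified sketch of one graph-convolution step (the architecture in the paper also carries bond/pair features, omitted here; the weights and toy molecule are arbitrary):

        import numpy as np

        def graph_conv_layer(X, A, W_self, W_nei):
            # Update each atom's feature vector from itself and the sum
            # over its bonded neighbours, followed by a ReLU.
            # X: (n_atoms, d) features; A: (n_atoms, n_atoms) adjacency.
            return np.maximum(X @ W_self + (A @ X) @ W_nei, 0.0)

        rng = np.random.default_rng(0)
        X = rng.normal(size=(3, 4))           # 3 atoms, 4 features each
        A = np.array([[0, 1, 0],              # a 3-atom chain
                      [1, 0, 1],
                      [0, 1, 0]], dtype=float)
        H = graph_conv_layer(X, A, rng.normal(size=(4, 8)),
                             rng.normal(size=(4, 8)))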

  9. Molecular graph convolutions: moving beyond fingerprints.

    PubMed

    Kearnes, Steven; McCloskey, Kevin; Berndl, Marc; Pande, Vijay; Riley, Patrick

    2016-08-01

    Molecular "fingerprints" encoding structural information are the workhorse of cheminformatics and machine learning in drug discovery applications. However, fingerprint representations necessarily emphasize particular aspects of the molecular structure while ignoring others, rather than allowing the model to make data-driven decisions. We describe molecular graph convolutions, a machine learning architecture for learning from undirected graphs, specifically small molecules. Graph convolutions use a simple encoding of the molecular graph (atoms, bonds, distances, etc.), which allows the model to take greater advantage of information in the graph structure. Although graph convolutions do not outperform all fingerprint-based methods, they (along with other graph-based methods) represent a new paradigm in ligand-based virtual screening with exciting opportunities for future improvement.

  10. Simulation of laminar and turbulent internal compressible flows by a finite element method

    NASA Astrophysics Data System (ADS)

    Rebaine, Ali

    1997-08-01

    This work consists of the numerical simulation of two-dimensional laminar and turbulent internal compressible flows, with particular interest in flows in supersonic ejectors. The Navier-Stokes equations are formulated in conservative form and use, as independent variables, the so-called enthalpic variables: static pressure, momentum, and specific total enthalpy. A stable variational formulation of the Navier-Stokes equations is used. It is based on the SUPG (Streamline Upwinding Petrov-Galerkin) method and uses an operator to capture strong gradients. A turbulence model for the simulation of ejector flows is developed. It consists of separating two distinct regions: a region close to the solid wall, where the Baldwin-Lomax model is used, and the region far from the wall, where a new formulation based on the Schlichting model for jets is proposed. A technique for computing the turbulent viscosity on an unstructured mesh is implemented. The spatial discretization of the variational form is carried out with the finite element method using a mixed approximation: quadratic for the components of momentum and velocity, and linear for the remaining variables. The temporal discretization is performed by a finite difference method using the implicit Euler scheme. The matrix system resulting from the space-time discretization is solved with the GMRES algorithm using a diagonal preconditioner. Numerical validations were carried out on several types of nozzles and ejectors. The principal validation consists of the simulation of the flow in the ejector tested at the NASA Lewis research center. The results obtained compare very well with those of previous works and are clearly superior for turbulent flows in ejectors.

  11. Automated image quality evaluation of T2 -weighted liver MRI utilizing deep learning architecture.

    PubMed

    Esses, Steven J; Lu, Xiaoguang; Zhao, Tiejun; Shanbhogue, Krishna; Dane, Bari; Bruno, Mary; Chandarana, Hersh

    2018-03-01

    To develop and test a deep learning approach based on a Convolutional Neural Network (CNN) for automated screening of T2-weighted (T2WI) liver acquisitions for nondiagnostic images, and compare this automated approach to evaluation by two radiologists. We evaluated 522 liver magnetic resonance imaging (MRI) exams performed at 1.5T and 3T at our institution between November 2014 and May 2016 for CNN training and validation. The CNN consisted of an input layer, convolutional layer, fully connected layer, and output layer. 351 T2WI were anonymized for training. Each case was annotated with a label of being diagnostic or nondiagnostic for detecting lesions and assessing liver morphology. Another 171 independently collected cases were sequestered for a blind test. These 171 T2WI were assessed independently by two radiologists and annotated as being diagnostic or nondiagnostic. These 171 T2WI were then presented to the CNN algorithm, and the image quality (IQ) output of the algorithm was compared to that of the two radiologists. There was concordance in IQ label between Reader 1 and the CNN in 79% of cases and between Reader 2 and the CNN in 73%. The sensitivity and specificity of the CNN algorithm in identifying nondiagnostic IQ were 67% and 81% with respect to Reader 1, and 47% and 80% with respect to Reader 2. The negative predictive value of the algorithm for identifying nondiagnostic IQ was 94% and 86% (relative to Readers 1 and 2). We demonstrate a CNN algorithm that yields a high negative predictive value when screening for nondiagnostic T2WI of the liver. Level of Evidence: 2. Technical Efficacy: Stage 2. J. Magn. Reson. Imaging 2018;47:723-728. © 2017 International Society for Magnetic Resonance in Medicine.

  12. Resting State fMRI Functional Connectivity-Based Classification Using a Convolutional Neural Network Architecture

    PubMed Central

    Meszlényi, Regina J.; Buza, Krisztian; Vidnyánszky, Zoltán

    2017-01-01

    Machine learning techniques have become increasingly popular in the field of resting state fMRI (functional magnetic resonance imaging) network based classification. However, the application of convolutional networks has been proposed only very recently and has remained largely unexplored. In this paper we describe a convolutional neural network architecture for functional connectome classification called connectome-convolutional neural network (CCNN). Our results on simulated datasets and a publicly available dataset for amnestic mild cognitive impairment classification demonstrate that our CCNN model can efficiently distinguish between subject groups. We also show that the connectome-convolutional network is capable of combining information from diverse functional connectivity metrics and that models using a combination of different connectivity descriptors are able to outperform classifiers using only one metric. From this flexibility it follows that our proposed CCNN model can be easily adapted to a wide range of connectome based classification or regression tasks, by varying which connectivity descriptor combinations are used to train the network. PMID:29089883

  13. Face recognition: a convolutional neural-network approach.

    PubMed

    Lawrence, S; Giles, C L; Tsoi, A C; Back, A D

    1997-01-01

    We present a hybrid neural-network for human face recognition which compares favourably with other methods. The system combines local image sampling, a self-organizing map (SOM) neural network, and a convolutional neural network. The SOM provides a quantization of the image samples into a topological space where inputs that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image sample, and the convolutional neural network provides partial invariance to translation, rotation, scale, and deformation. The convolutional network extracts successively larger features in a hierarchical set of layers. We present results using the Karhunen-Loeve transform in place of the SOM, and a multilayer perceptron (MLP) in place of the convolutional network for comparison. We use a database of 400 images of 40 individuals which contains quite a high degree of variability in expression, pose, and facial details. We analyze the computational complexity and discuss how new classes could be added to the trained recognizer.

  14. Resting State fMRI Functional Connectivity-Based Classification Using a Convolutional Neural Network Architecture.

    PubMed

    Meszlényi, Regina J; Buza, Krisztian; Vidnyánszky, Zoltán

    2017-01-01

    Machine learning techniques have become increasingly popular in the field of resting state fMRI (functional magnetic resonance imaging) network based classification. However, the application of convolutional networks has been proposed only very recently and has remained largely unexplored. In this paper we describe a convolutional neural network architecture for functional connectome classification called connectome-convolutional neural network (CCNN). Our results on simulated datasets and a publicly available dataset for amnestic mild cognitive impairment classification demonstrate that our CCNN model can efficiently distinguish between subject groups. We also show that the connectome-convolutional network is capable of combining information from diverse functional connectivity metrics and that models using a combination of different connectivity descriptors are able to outperform classifiers using only one metric. From this flexibility it follows that our proposed CCNN model can be easily adapted to a wide range of connectome based classification or regression tasks, by varying which connectivity descriptor combinations are used to train the network.

  15. Comparison of the convolution quadrature method and enhanced inverse FFT with application in elastodynamic boundary element method

    NASA Astrophysics Data System (ADS)

    Schanz, Martin; Ye, Wenjing; Xiao, Jinyou

    2016-04-01

    Transient problems can often be solved with transformation methods, where the inverse transformation is usually performed numerically. Here, the discrete Fourier transform in combination with the exponential window method is compared with the convolution quadrature method formulated as an inverse transformation. Both are inverse Laplace transforms, which are formally identical but use different complex frequencies. A numerical study is performed, first with simple convolution integrals and, second, with a boundary element method (BEM) for elastodynamics. Essentially, when combined with the BEM, the discrete Fourier transform needs fewer frequency calculations but a finer mesh than the convolution quadrature method to obtain the same level of accuracy. If fast methods like the fast multipole method are further used to accelerate the boundary element method, the convolution quadrature method is better, because the iterative solver needs far fewer iterations to converge. This is caused by the larger real part of the complex frequencies necessary for the calculation, which improves the conditioning of the system matrix.

  16. ASIC-based architecture for the real-time computation of 2D convolution with large kernel size

    NASA Astrophysics Data System (ADS)

    Shao, Rui; Zhong, Sheng; Yan, Luxin

    2015-12-01

    Bidimensional convolution is a low-level processing algorithm of interest in many areas, but its high computational cost constrains the size of the kernels, especially in real-time embedded systems. This paper presents a hardware architecture for the ASIC-based implementation of 2-D convolution with medium-to-large kernels. Aiming to improve the efficiency of on-chip storage resources and to reduce off-chip bandwidth, a data-cache reuse scheme is constructed: multi-block SPRAM caches image stripes, and an on-chip ping-pong operation takes full advantage of data reuse in the convolution calculation. A new ASIC data-scheduling scheme and overall architecture are designed around this cache. Experimental results show that the structure can achieve real-time convolution with kernels up to 40×32 in size while improving the utilization of on-chip memory bandwidth and on-chip memory resources; the structure maximizes data throughput while reducing the need for off-chip memory bandwidth.

  17. Electronic Structure of Energetic Molecules and Crystals Under Compression

    NASA Astrophysics Data System (ADS)

    Kay, Jeffrey

    Understanding how the electronic structure of energetic materials changes under compression is important to elucidating mechanisms of shock-induced reactions and detonation. In this presentation, the electronic structure of prototypical energetic crystals is examined under high degrees of compression using ab initio quantum chemical calculations. The effects of compression on, and the interactions between, the constituent molecules are examined in particular. The insights these results provide into previous experimental observations and theoretical predictions of energetic materials under high pressure are discussed. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  18. Prehospital randomised assessment of a mechanical compression device in out-of-hospital cardiac arrest (PARAMEDIC): a pragmatic, cluster randomised trial and economic evaluation.

    PubMed

    Gates, Simon; Lall, Ranjit; Quinn, Tom; Deakin, Charles D; Cooke, Matthew W; Horton, Jessica; Lamb, Sarah E; Slowther, Anne-Marie; Woollard, Malcolm; Carson, Andy; Smyth, Mike; Wilson, Kate; Parcell, Garry; Rosser, Andrew; Whitfield, Richard; Williams, Amanda; Jones, Rebecca; Pocock, Helen; Brock, Nicola; Black, John Jm; Wright, John; Han, Kyee; Shaw, Gary; Blair, Laura; Marti, Joachim; Hulme, Claire; McCabe, Christopher; Nikolova, Silviya; Ferreira, Zenia; Perkins, Gavin D

    2017-03-01

    Mechanical chest compression devices may help to maintain high-quality cardiopulmonary resuscitation (CPR), but little evidence exists for their effectiveness. We evaluated whether or not the introduction of Lund University Cardiopulmonary Assistance System-2 (LUCAS-2; Jolife AB, Lund, Sweden) mechanical CPR into front-line emergency response vehicles would improve survival from out-of-hospital cardiac arrest (OHCA). Evaluation of the LUCAS-2 device as a routine ambulance service treatment for OHCA. Pragmatic, cluster randomised trial including adults with non-traumatic OHCA. Ambulance dispatch staff and those collecting the primary outcome were blind to treatment allocation. Blinding of the ambulance staff who delivered the interventions and reported initial response to treatment was not possible. We also conducted a health economic evaluation and a systematic review of all trials of out-of-hospital mechanical chest compression. Four UK ambulance services (West Midlands, North East England, Wales and South Central), comprising 91 urban and semiurban ambulance stations. Clusters were ambulance service vehicles, which were randomly assigned (approximately 1 : 2) to the LUCAS-2 device or manual CPR. Patients were included if they were in cardiac arrest in the out-of-hospital environment. Exclusions were patients with cardiac arrest as a result of trauma, with known or clinically apparent pregnancy, or aged < 18 years. Patients received LUCAS-2 mechanical chest compression or manual chest compressions according to the first trial vehicle to arrive on scene. Survival at 30 days following cardiac arrest; survival without significant neurological impairment [Cerebral Performance Category (CPC) score of 1 or 2]. We enrolled 4471 eligible patients (1652 assigned to the LUCAS-2 device and 2819 assigned to control) between 15 April 2010 and 10 June 2013. A total of 985 (60%) patients in the LUCAS-2 group received mechanical chest compression and 11 (< 1%) patients in the control group received LUCAS-2. In the intention-to-treat analysis, 30-day survival was similar in the LUCAS-2 (104/1652, 6.3%) and manual CPR groups [193/2819, 6.8%; adjusted odds ratio (OR) 0.86, 95% confidence interval (CI) 0.64 to 1.15]. Survival with a CPC score of 1 or 2 may have been worse in the LUCAS-2 group (adjusted OR 0.72, 95% CI 0.52 to 0.99). No serious adverse events were noted. The systematic review found no evidence of a survival advantage if mechanical chest compression was used. The health economic analysis showed that LUCAS-2 was dominated by manual chest compression. There was substantial non-compliance in the LUCAS-2 arm. For 272 out of 1652 patients (16.5%), mechanical chest compression was not used for reasons that would not occur in clinical practice. We addressed this issue by using complier average causal effect analyses. We attempted to measure CPR quality during the resuscitation attempts of trial participants, but were unable to do so. There was no evidence of improvement in 30-day survival with LUCAS-2 compared with manual compressions. Our systematic review of recent randomised trials did not suggest that survival or survival without significant disability may be improved by the use of mechanical chest compression. The use of mechanical chest compression for in-hospital cardiac arrest, and in specific circumstances (e.g. transport), has not yet been evaluated. Current Controlled Trials ISRCTN08233942. 
This project was funded by the National Institute for Health Research (NIHR) Health Technology Assessment programme and will be published in full in Health Technology Assessment; Vol. 21, No. 11. See the NIHR Journals Library website for further project information.

  19. Performance Analysis of IEEE 802.11g TCM Waveforms Transmitted over a Channel with Pulse-Noise Interference

    DTIC Science & Technology

    2007-06-01

    Table 2. Best (maximum free distance) rate r=2/3 punctured convolutional code...Hamming distance between all pairs of non-zero paths. Table 2 lists the best rate r=2/3 punctured convolutional code information weight structure...Table 2. Best (maximum free distance) rate r=2/3 punctured convolutional code information weight structure. (From: [12]).

  20. A FAST POLYNOMIAL TRANSFORM PROGRAM WITH A MODULARIZED STRUCTURE

    NASA Technical Reports Server (NTRS)

    Truong, T. K.

    1994-01-01

    This program utilizes a fast polynomial transformation (FPT) algorithm applicable to two-dimensional mathematical convolutions. Two-dimensional convolution has many applications, particularly in image processing. Two-dimensional cyclic convolutions can be converted to a one-dimensional convolution in a polynomial ring. Traditional FPT methods decompose the one-dimensional cyclic polynomial into polynomial convolutions of different lengths. This program will decompose a cyclic polynomial into polynomial convolutions of the same length. Thus, only FPTs and Fast Fourier Transforms of the same length are required. This modular approach can save computational resources. To further enhance its appeal, the program is written in the transportable 'C' language. The steps in the algorithm are: 1) formulate the modulus reduction equations, 2) calculate the polynomial transforms, 3) multiply the transforms using a generalized fast Fourier transformation, 4) compute the inverse polynomial transforms, and 5) reconstruct the final matrices using the Chinese remainder theorem. Input to this program is comprised of the row and column dimensions and the initial two matrices. The matrices are printed out at all steps, ending with the final reconstruction. This program is written in 'C' for batch execution and has been implemented on the IBM PC series of computers under DOS with a central memory requirement of approximately 18K of 8 bit bytes. This program was developed in 1986.
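
    The polynomial-transform decomposition itself is involved, but the quantity the program computes, a two-dimensional cyclic convolution, has a compact FFT-based reference implementation that is useful for checking results (this is not the FPT algorithm itself):

        import numpy as np

        def cyclic_conv2d(a, b):
            # 2-D cyclic convolution via the FFT; a and b must have the
            # same shape (zero-pad the kernel to the image size first).
            return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))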

  1. Effects of Convoluted Divergent Flap Contouring on the Performance of a Fixed-Geometry Nonaxisymmetric Exhaust Nozzle

    NASA Technical Reports Server (NTRS)

    Asbury, Scott C.; Hunter, Craig A.

    1999-01-01

    An investigation was conducted in the model preparation area of the Langley 16-Foot Transonic Tunnel to determine the effects of convoluted divergent-flap contouring on the internal performance of a fixed-geometry, nonaxisymmetric, convergent-divergent exhaust nozzle. Testing was conducted at static conditions using a sub-scale nozzle model with one baseline and four convoluted configurations. All tests were conducted with no external flow at nozzle pressure ratios from 1.25 to approximately 9.50. Results indicate that baseline nozzle performance was dominated by unstable, shock-induced, boundary-layer separation at overexpanded conditions. Convoluted configurations were found to significantly reduce, and in some cases totally alleviate, separation at overexpanded conditions. This result was attributed to the ability of convoluted contouring to energize and improve the condition of the nozzle boundary layer. Separation alleviation offers potential for installed nozzle aeropropulsive (thrust-minus-drag) performance benefits by reducing drag at forward flight speeds, even though this may reduce nozzle thrust ratio by as much as 6.4% at off-design conditions. At on-design conditions, nozzle thrust ratio for the convoluted configurations ranged from 1% to 2.9% below the baseline configuration; this was a result of increased skin friction and oblique shock losses inside the nozzle.

  2. Review of passive-blind detection in digital video forgery based on sensing and imaging techniques

    NASA Astrophysics Data System (ADS)

    Tao, Junjie; Jia, Lili; You, Ying

    2016-01-01

    Advances in digital video compression and IP communication technologies have raised new issues and challenges concerning the integrity and authenticity of surveillance videos. It is important that the system ensure that, once recorded, the video cannot be altered, so that the audit trail remains intact for evidential purposes. This paper gives an overview of passive techniques in digital video forensics, which are based on intrinsic fingerprints inherent in digital surveillance videos. We performed a thorough survey of the literature on video manipulation detection methods that accomplish blind authentication without referring to any auxiliary information. We present a review of the various existing methods; much more work remains to be done in this field of video forensics based on video data analysis and observation of surveillance systems.

  3. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs.

    PubMed

    Chen, Liang-Chieh; Papandreou, George; Kokkinos, Iasonas; Murphy, Kevin; Yuille, Alan L

    2018-04-01

    In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed "DeepLab" system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.
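
    A minimal PyTorch illustration of atrous convolution and an ASPP-style multi-rate head; the channel counts are placeholders, and the rates are those commonly quoted for DeepLab:

        import torch
        import torch.nn as nn

        # A 3x3 kernel with dilation r covers a (2r+1) x (2r+1) field of
        # view with the same 9 parameters; padding=r preserves resolution.
        rates = (1, 6, 12, 18)
        aspp = nn.ModuleList(
            nn.Conv2d(256, 64, kernel_size=3, padding=r, dilation=r)
            for r in rates
        )

        features = torch.randn(1, 256, 32, 32)  # dummy DCNN feature map
        pyramid = torch.cat([branch(features) for branch in aspp], dim=1)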

  4. Blooming Artifact Reduction in Coronary Artery Calcification by A New De-blooming Algorithm: Initial Study.

    PubMed

    Li, Ping; Xu, Lei; Yang, Lin; Wang, Rui; Hsieh, Jiang; Sun, Zhonghua; Fan, Zhanming; Leipsic, Jonathon A

    2018-05-02

    The aim of this study was to investigate the use of a de-blooming algorithm in coronary CT angiography (CCTA) for optimal evaluation of calcified plaques. Calcified plaques were simulated on a coronary vessel phantom and a cardiac motion phantom. Two convolution kernels, standard (STND) and high-definition standard (HD STND), were used for image reconstruction. A dedicated de-blooming algorithm was used for image processing. We found a smaller bias in the measurement of stenosis when using the de-blooming algorithm (STND: bias 24.6% vs 15.0%, range 10.2% to 39.0% vs 4.0% to 25.9%; HD STND: bias 17.9% vs 11.0%, range 8.9% to 30.6% vs 0.5% to 21.5%). With use of the de-blooming algorithm, specificity for diagnosing significant stenosis increased from 45.8% to 75.0% (STND) and from 62.5% to 83.3% (HD STND), while positive predictive value (PPV) increased from 69.8% to 83.3% (STND) and from 76.9% to 88.2% (HD STND). In the patient group, the reduction in calcification volume was 48.1 ± 10.3%, and the reduction in coronary diameter stenosis over calcified plaque was 52.4 ± 24.2%. Our results suggest that the novel de-blooming algorithm can effectively decrease the blooming artifacts caused by coronary calcified plaques, and consequently improve the diagnostic accuracy of CCTA in assessing coronary stenosis.

  5. An Intrinsically Digital Amplification Scheme for Hearing Aids

    NASA Astrophysics Data System (ADS)

    Blamey, Peter J.; Macfarlane, David S.; Steele, Brenton R.

    2005-12-01

    Results for linear amplification and wide-dynamic-range compression were compared with a new 64-channel digital amplification strategy in three separate studies. The new strategy addresses the requirements of the hearing aid user with efficient computations on an open-platform digital signal processor (DSP). The new amplification strategy is not modeled on prior analog strategies like compression and linear amplification, but uses statistical analysis of the signal to optimize the output dynamic range in each frequency band independently. Using the open-platform DSP also provided the opportunity for blind-trial comparisons of the different processing schemes in BTE and ITE devices of a high commercial standard. The speech perception scores and questionnaire results show that it is possible to provide improved audibility for sound in many narrow frequency bands while simultaneously improving comfort, speech intelligibility in noise, and sound quality.

  6. Electrochemical force microscopy

    DOEpatents

    Kalinin, Sergei V.; Jesse, Stephen; Collins, Liam F.; Rodriguez, Brian J.

    2017-01-10

    A system and method for electrochemical force microscopy are provided. The system and method are based on a multidimensional detection scheme that is sensitive to forces experienced by a biased electrode in a solution. The multidimensional approach allows separation of fast processes, such as double layer charging, and charge relaxation, and slow processes, such as diffusion and faradaic reactions, as well as capturing the bias dependence of the response. The time-resolved and bias measurements can also allow probing both linear (small bias range) and non-linear (large bias range) electrochemical regimes and potentially the de-convolution of charge dynamics and diffusion processes from steric effects and electrochemical reactivity.

  7. Deconvolution Method on OSL Curves from ZrO2 Irradiated by Beta and UV Radiations

    NASA Astrophysics Data System (ADS)

    Rivera, T.; Kitis, G.; Azorín, J.; Furetta, C.

    This paper reports the optically stimulated luminescent (OSL) response of ZrO2 to beta and ultraviolet radiations in order to investigate the potential use of this material as a radiation dosimeter. The experimentally obtained OSL decay curves were analyzed using the computerized curve de-convolution (CCD) method. It was found that the OSL curve structure, for the short (practical) illumination time used, consists of three first order components. The individual OSL dose response behavior of each component was found. The values of the time at the OSL peak maximum and the decay constant of each component were also estimated.
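
    A minimal sketch of the curve de-convolution idea described above, assuming a synthetic decay curve: the OSL signal is modeled as a sum of three first-order (exponential) components and fitted by nonlinear least squares. All amplitudes and decay constants below are illustrative, not values from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def osl_model(t, a1, l1, a2, l2, a3, l3):
    """Sum of three first-order OSL components: I(t) = sum_i a_i * exp(-l_i * t)."""
    return a1 * np.exp(-l1 * t) + a2 * np.exp(-l2 * t) + a3 * np.exp(-l3 * t)

t = np.linspace(0, 100, 500)                    # illumination time (s), assumed
true = osl_model(t, 50.0, 0.50, 20.0, 0.08, 5.0, 0.01)
data = true + np.random.normal(0, 0.3, t.size)  # synthetic noisy decay curve

p0 = [40, 1.0, 10, 0.1, 1, 0.01]                # rough initial guesses
popt, _ = curve_fit(osl_model, t, data, p0=p0, maxfev=20000)
print("fitted (a_i, lambda_i) pairs:\n", popt.reshape(3, 2))
```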

  8. Centrifugal Compressors, Flow Phenomena and Performance.

    DTIC Science & Technology

    1980-11-01

    of the diffuser indicate that rotating nonuniformities (rotating stall) may be observed at certain operating conditions. The last paper in this...used as an isolated stage, without a return channel, this compressor can provide a pressure ratio of τ = 5.3 with refrigerant 12 (that is, τ = 5.6 with air

  9. Photoacoustic image reconstruction from ultrasound post-beamformed B-mode image

    NASA Astrophysics Data System (ADS)

    Zhang, Haichong K.; Guo, Xiaoyu; Kang, Hyun Jae; Boctor, Emad M.

    2016-03-01

    A requirement for reconstructing a photoacoustic (PA) image is channel data acquisition synchronized with laser firing. Unfortunately, most clinical ultrasound (US) systems do not offer an interface to obtain synchronized channel data. To broaden the impact of clinical PA imaging, we propose a PA image reconstruction algorithm utilizing the US B-mode image, which is readily available from clinical scanners. A US B-mode image involves a series of processing steps: beamforming, followed by envelope detection, and ending with log compression. It will, however, be defocused when PA signals are the input, owing to the incorrect delay function. Our approach is to reverse the order of the image processing steps and recover the original US post-beamformed radio-frequency (RF) data, to which a synthetic-aperture-based PA rebeamforming algorithm can then be applied. Taking the B-mode image as input, we first recover the US post-beamformed RF data by applying log decompression and convolving with an acoustic impulse response to restore carrier frequency information. The US post-beamformed RF data is then treated as pre-beamformed RF data for the adaptive PA beamforming algorithm, and the new delay function is applied taking into account that the focal depth in US beamforming is at half the depth of the PA case. The feasibility of the proposed method was validated through simulation and experimentally demonstrated using an acoustic point source. The point source was successfully beamformed from a US B-mode image, and the full width at half maximum (FWHM) of the point improved by a factor of 3.97. Compared with the ground-truth reconstruction using channel data, the FWHM was slightly degraded, by a factor of 1.28, owing to information loss during envelope detection and convolution of the RF information.
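
    The inversion chain described above can be sketched as follows, under assumed parameter values: undo the log compression of a B-mode line to recover a linear-scale envelope, then convolve with an assumed acoustic impulse response to restore carrier-frequency content. The dynamic range, carrier frequency and impulse response are all illustrative stand-ins.

```python
import numpy as np

fs, fc = 40e6, 5e6          # sampling and assumed carrier frequencies (Hz)
dyn_range = 60.0            # assumed display dynamic range (dB)

def log_decompress(bmode_line):
    """Map 8-bit log-compressed pixels back to a linear-scale envelope."""
    db = bmode_line / 255.0 * dyn_range - dyn_range   # [0,255] -> [-60,0] dB
    return 10.0 ** (db / 20.0)

# Assumed impulse response: a short Gaussian-windowed tone burst at the carrier.
t = np.arange(-32, 33) / fs
h = np.cos(2 * np.pi * fc * t) * np.exp(-(t * fs / 12.0) ** 2)

bmode_line = np.random.randint(0, 256, 1024).astype(float)  # stand-in B-mode line
envelope = log_decompress(bmode_line)
rf_estimate = np.convolve(envelope, h, mode="same")  # pseudo post-beamformed RF
```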

  10. Pharmaceutical Product Lead Optimization for Better In vivo Bioequivalence Performance: A case study of Diclofenac Sodium Extended Release Matrix Tablets.

    PubMed

    Shahiwala, Aliasgar; Zarar, Aisha

    2018-01-01

    In order to prove the validity of a new formulation, a considerable amount of effort is required to study bioequivalence, which not only increases the burden of carrying out a number of bioequivalence studies but also eventually increases the cost of the optimization process. The aim of the present study was to develop sustained release matrix tablets containing diclofenac sodium using natural polymers, and to demonstrate the step-by-step process of product development up to the prediction of in vivo equivalence with the marketed product. Different batches of tablets were prepared by direct compression. In vitro drug release studies were performed as per USP. The drug release data were assessed using model-dependent, model-independent and convolution approaches. Drug release profiles showed that extended release action was in the following order: Gum Tragacanth > Sodium Alginate > Gum Acacia. Among the different batches prepared, only F1 and F8 passed the USP criteria for drug release. The developed formulas were found to fit the Higuchi kinetic model with a Fickian (case I) diffusion-mediated release mechanism. Model-independent kinetics confirmed that a total of four batches passed, based on similarity factors computed against the marketed diclofenac product. The results of the in vivo predictive convolution model indicated that the predicted AUC, Cmax and Tmax values for batch F8 were similar to those of the marketed product. This study provides a simple yet effective outline of the pharmaceutical product development process that will minimize formulation development trials and maximize product success in bioequivalence studies.
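
    As a concrete illustration of the model-independent comparison mentioned above, the following sketch computes the f2 similarity factor between a test and a reference dissolution profile; the profiles are invented, and f2 >= 50 is the conventional similarity criterion.

```python
import numpy as np

def f2_similarity(reference, test):
    """f2 = 50 * log10(100 / sqrt(1 + mean squared difference))."""
    r, t = np.asarray(reference, float), np.asarray(test, float)
    msd = np.mean((r - t) ** 2)
    return 50.0 * np.log10(100.0 / np.sqrt(1.0 + msd))

ref  = [18, 35, 52, 66, 78, 87]   # % released at each time point (illustrative)
test = [15, 33, 50, 68, 80, 88]
print(f"f2 = {f2_similarity(ref, test):.1f}")  # f2 > 50 indicates similar profiles
```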

  11. A separable two-dimensional discrete Hartley transform

    NASA Technical Reports Server (NTRS)

    Watson, A. B.; Poirson, A.

    1985-01-01

    Bracewell has proposed the Discrete Hartley Transform (DHT) as a substitute for the Discrete Fourier Transform (DFT), particularly as a means of convolution. Here, it is shown that the most natural extension of the DHT to two dimensions fails to be separable in the two dimensions, and is therefore inefficient. An alternative separable form is considered, and the corresponding convolution theorem is derived. It is also argued that the DHT is unlikely to provide faster convolution than the DFT.
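
    A small numerical check of the convolution issue discussed above, using the one-dimensional DHT (computed here from the FFT): the Hartley convolution theorem holds, but it requires both H[k] and H[N-k] rather than a plain pointwise product, which is part of why DHT convolution is not obviously faster than DFT convolution.

```python
import numpy as np

def dht(x):
    """DHT via the FFT: the cas-kernel transform equals Re(F) - Im(F)."""
    F = np.fft.fft(x)
    return F.real - F.imag

def hartley_circular_convolution(x, h):
    X, H = dht(x), dht(h)
    Hrev = np.roll(H[::-1], 1)          # H[N-k], with H[0] kept in place
    Xrev = np.roll(X[::-1], 1)
    Y = 0.5 * (X * (H + Hrev) + Xrev * (H - Hrev))
    N = len(x)
    return dht(Y) / N                   # the DHT is self-inverse up to a factor N

rng = np.random.default_rng(0)
x, h = rng.normal(size=16), rng.normal(size=16)
ref = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))
assert np.allclose(hartley_circular_convolution(x, h), ref)
```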

  12. Iterative deep convolutional encoder-decoder network for medical image segmentation.

    PubMed

    Jung Uk Kim; Hak Gu Kim; Yong Man Ro

    2017-07-01

    In this paper, we propose a novel medical image segmentation method using an iterative deep learning framework. We combine an iterative learning approach with an encoder-decoder network to improve segmentation results, which enables precise localization of regions of interest (ROIs), including complex shapes or detailed textures in medical images, in an iterative manner. The proposed iterative deep convolutional encoder-decoder network consists of two main paths: a convolutional encoder path and a convolutional decoder path with iterative learning. Experimental results show that the proposed framework yields excellent segmentation performance on various medical images. The effectiveness of the proposed method is demonstrated by comparison with other state-of-the-art medical image segmentation methods.

  13. Reconfigurable Gabor Filter For Fingerprint Recognition Using FPGA Verilog

    NASA Astrophysics Data System (ADS)

    Rosshidi, H. T.; Hadi, A. R.

    2009-06-01

    This paper presents an implementation of a Gabor filter for fingerprint recognition using Verilog HDL. The work demonstrates the application of the Gabor filter technique to enhance fingerprint images. The incoming signal, in the form of image pixels, is filtered (convolved) by the Gabor filter to delineate the ridge and valley regions of the fingerprint. This is done with a real-time convolver based on a Field Programmable Gate Array (FPGA) that performs the convolution operation. The main characteristics of the proposed approach are the use of memory to store the incoming image pixels and the Gabor filter coefficients before the convolution takes place. The result is the signal convolved with the Gabor coefficients.
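
    A software sketch of the operation the FPGA implements above: convolving a fingerprint image patch with a Gabor kernel tuned to a ridge orientation and frequency. The kernel parameters and the random stand-in image are illustrative.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size=11, sigma=3.0, theta=0.0, freq=0.12):
    """Real Gabor kernel: Gaussian envelope times a cosine carrier along the ridge normal."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates by theta
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

image = np.random.rand(64, 64)                    # stand-in fingerprint patch
enhanced = convolve2d(image, gabor_kernel(theta=np.pi / 4), mode="same", boundary="symm")
```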

  14. Convolutional neural network for road extraction

    NASA Astrophysics Data System (ADS)

    Li, Junping; Ding, Yazhou; Feng, Fajie; Xiong, Baoyu; Cui, Weihong

    2017-11-01

    In this paper, a convolutional neural network with large input blocks and small output blocks was used to extract roads. To reflect the complex road characteristics in the study area, the deep convolutional neural network VGG19 was used for road extraction. Based on an analysis of the characteristics of different input block sizes, output block sizes and the resulting extraction quality, the votes of several deep convolutional neural networks were used as the final road prediction. The study image was a GF-2 panchromatic and multi-spectral fusion image of Yinchuan. The precision of road extraction was 91%. The experiments showed that model averaging can improve the accuracy to some extent. The paper also gives some advice on the choice of input and output block sizes.

  15. Symmetric convolution of asymmetric multidimensional sequences using discrete trigonometric transforms.

    PubMed

    Foltz, T M; Welsh, B M

    1999-01-01

    This paper uses the fact that the discrete Fourier transform diagonalizes a circulant matrix to provide an alternate derivation of the symmetric convolution-multiplication property for discrete trigonometric transforms. Derived in this manner, the symmetric convolution-multiplication property extends easily to multiple dimensions using the notion of block circulant matrices and generalizes to multidimensional asymmetric sequences. The symmetric convolution of multidimensional asymmetric sequences can then be accomplished by taking the product of the trigonometric transforms of the sequences and then applying an inverse trigonometric transform to the result. An example is given of how this theory can be used for applying a two-dimensional (2-D) finite impulse response (FIR) filter with nonlinear phase which models atmospheric turbulence.
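
    A minimal numerical check of the convolution-multiplication property for one symmetry type: sequences extended with whole-sample symmetry convolve circularly via a pointwise product of their DCT-I transforms. The scaling follows scipy's unnormalized DCT-I, under which the transform is self-inverse up to a factor 2(N-1).

```python
import numpy as np
from scipy.fft import dct

def symmetric_convolution(x, h):
    """Pointwise product in the DCT-I domain; a second DCT-I inverts up to 2(N-1)."""
    N = len(x)
    Y = dct(x, type=1) * dct(h, type=1)
    return dct(Y, type=1) / (2 * (N - 1))

rng = np.random.default_rng(1)
N = 8
x, h = rng.normal(size=N), rng.normal(size=N)

# Reference: FFT circular convolution of the explicit whole-sample symmetric extensions.
xe = np.concatenate([x, x[-2:0:-1]])          # length 2N-2 symmetric extension
he = np.concatenate([h, h[-2:0:-1]])
ref = np.real(np.fft.ifft(np.fft.fft(xe) * np.fft.fft(he)))[:N]
assert np.allclose(symmetric_convolution(x, h), ref)
```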

  16. Molecular graph convolutions: moving beyond fingerprints

    PubMed Central

    Kearnes, Steven; McCloskey, Kevin; Berndl, Marc; Pande, Vijay; Riley, Patrick

    2016-01-01

    Molecular “fingerprints” encoding structural information are the workhorse of cheminformatics and machine learning in drug discovery applications. However, fingerprint representations necessarily emphasize particular aspects of the molecular structure while ignoring others, rather than allowing the model to make data-driven decisions. We describe molecular graph convolutions, a machine learning architecture for learning from undirected graphs, specifically small molecules. Graph convolutions use a simple encoding of the molecular graph—atoms, bonds, distances, etc.—which allows the model to take greater advantage of information in the graph structure. Although graph convolutions do not outperform all fingerprint-based methods, they (along with other graph-based methods) represent a new paradigm in ligand-based virtual screening with exciting opportunities for future improvement. PMID:27558503
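
    To make the idea above concrete, the following is a generic neighborhood-aggregation (graph convolution) step, H' = relu(A_hat H W); it is an illustrative layer, not the specific architecture of the paper. A is the molecular graph's adjacency matrix and H holds per-atom feature vectors.

```python
import numpy as np

def graph_conv(H, A, W):
    """Aggregate each atom's features with its bonded neighbors, then transform."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)
    return np.maximum(0.0, (A_hat @ H) / deg @ W)  # mean aggregation + ReLU

A = np.array([[0, 1, 0],                           # a 3-atom toy molecule (chain)
              [1, 0, 1],
              [0, 1, 0]], float)
H = np.random.rand(3, 4)                           # 4 input features per atom
W = np.random.rand(4, 8)                           # 8 output features per atom
H_next = graph_conv(H, A, W)                       # shape (3, 8)
```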

  17. Concatenated coding systems employing a unit-memory convolutional code and a byte-oriented decoding algorithm

    NASA Technical Reports Server (NTRS)

    Lee, L.-N.

    1977-01-01

    Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively modest coding complexity, it is proposed to concatenate a byte-oriented unit-memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real-time minimal-byte-error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.

  18. Concatenated coding systems employing a unit-memory convolutional code and a byte-oriented decoding algorithm

    NASA Technical Reports Server (NTRS)

    Lee, L. N.

    1976-01-01

    Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively small coding complexity, it is proposed to concatenate a byte oriented unit memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real time minimal byte error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.
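
    A minimal sketch of a byte-oriented unit-memory convolutional encoder of the kind proposed in these two records: each output block depends only on the current input byte and the single byte of memory, v_t = u_t G0 + u_{t-1} G1 over GF(2). The generator matrices below are arbitrary illustrations, not the codes from the papers.

```python
import numpy as np

rng = np.random.default_rng(7)
G0 = rng.integers(0, 2, (8, 16))   # 8-bit input byte -> 16 output bits (rate 1/2)
G1 = rng.integers(0, 2, (8, 16))   # taps on the single byte of memory

def um_encode(bytes_in):
    prev = np.zeros(8, dtype=int)             # unit memory: one byte
    out = []
    for u in bytes_in:
        v = (u @ G0 + prev @ G1) % 2          # GF(2) arithmetic
        out.append(v)
        prev = u
    return np.concatenate(out)

msg = rng.integers(0, 2, (5, 8))              # five message bytes as bit vectors
codeword = um_encode(msg)                     # 5 * 16 = 80 coded bits
```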

  19. Halftoning processing on a JPEG-compressed image

    NASA Astrophysics Data System (ADS)

    Sibade, Cedric; Barizien, Stephane; Akil, Mohamed; Perroton, Laurent

    2003-12-01

    Digital image processing algorithms are usually designed for the raw format, that is, for an uncompressed representation of the image. Therefore, prior to transforming or processing a compressed format, decompression is applied; the result of the processing is then re-compressed for further transfer or storage. This change of data representation is resource-consuming in terms of computation, time and memory usage. In the wide-format printing industry, this becomes an important issue: e.g. a 1 m2 input color image scanned at 600 dpi exceeds 1.6 GB in its raw representation. However, some image processing algorithms can be performed in the compressed domain, by applying an equivalent operation to the compressed format. This paper presents an innovative application of halftoning by screening, applied to a JPEG-compressed image. This compressed-domain transform is performed by computing the threshold operation of the screening algorithm in the DCT domain. The algorithm is illustrated with examples for different halftone masks. A pre-sharpening operation applied to a low-quality JPEG-compressed image is also described; it de-noises the image and enhances its contours.

  20. A rare medical-surgical emergency: spinal epidural abscess (a report of 3 cases)

    PubMed Central

    Saqui, Abderrazzak El; Aggouri, Mohamed; Benzagmout, Mohamed; Chakour, Khalid; Chaoui, Mohamed El Faiz

    2017-01-01

    Infections of the epidural space are increasingly well understood thanks to developments in neurosurgery, and notably to MRI. Spinal epidural abscesses are a rare but functionally very serious condition, with a potential risk to life. We report three cases of spinal epidural abscess, all diagnosed in male patients, aged 52, 57 and 63 years respectively. Two patients were admitted to the neurosurgical emergency department with slowly progressive spinal cord compression evolving in a context of infection, and the third complained of right S1 sciatica refractory to treatment, with urinary leakage. No entry point was identified in the initial work-up. All patients were operated on via a posterior approach, with spinal cord/nerve root decompression and evacuation of the epidural abscess. Bacteriological examination found a pyogenic organism, justifying adapted antibiotic therapy in all three cases. The outcome was favorable in two cases; the third patient died three days post-operatively of severe sepsis. PMID:28533868

  1. Effectiveness of De Qi during acupuncture for the treatment of tinnitus: study protocol for a randomized controlled trial.

    PubMed

    Xie, Hui; Li, Xinrong; Lai, Jiaqin; Zhou, Yanan; Wang, Caiying; Liang, Jiao

    2014-10-15

    Acupuncture has been used in China to treat tinnitus for a long time. There is debate as to whether or not De Qi is a key factor in achieving the efficacy of acupuncture, but there is insufficient evidence from randomized controlled trials to confirm the role of De Qi in acupuncture treatment of tinnitus. This study aims to identify the effect of De Qi in patients who receive acupuncture to alleviate tinnitus, in a prospective, double-blind, randomized, sham-controlled trial. The study compares two acupuncture groups (with or without manipulation) in 292 patients with a history of subjective tinnitus. The trial will be conducted in the Teaching Hospital of Chengdu University of Traditional Chinese Medicine. Patients will be randomly assigned into two groups according to a computer-generated randomization list and assessed prior to treatment. They will then receive 5 daily sessions of 30 minutes each for 4 consecutive weeks and undergo a 12-week follow-up phase. The administration of acupuncture follows the guidelines for clinical research on acupuncture (WHO Regional Publication, Western Pacific Series Number 15, 1995), and is performed double-blind by physicians well trained in acupuncture. Outcome measures include subjective symptom scores and quantitative sensations of De Qi evaluated by Visual Analog Scales (VAS) and the Chinese version of the 'modified' Massachusetts General Hospital Acupuncture Sensation Scale (C-MMASS). Furthermore, adverse events are recorded and analyzed. If any subjects are withdrawn from the trial, intention-to-treat (ITT) and per-protocol (PP) analyses will be performed. The key features of this trial include the randomization procedures, the large sample and the standardized protocol for evaluating De Qi qualitatively and quantitatively in acupuncture treatment of tinnitus. The trial will be the first study with a high evidence level in China to assess the efficacy of De Qi in the treatment of tinnitus in a randomized, double-blind, sham-controlled manner. Chinese Clinical Trial Registry: ChiCTR-TRC-14004720 (6 May 2014).

  2. Restoration of recto-verso colour documents using correlated component analysis

    NASA Astrophysics Data System (ADS)

    Tonazzini, Anna; Bedini, Luigi

    2013-12-01

    In this article, we consider the problem of removing see-through interference from pairs of recto-verso documents acquired either in grayscale or RGB modality. The see-through effect is a typical degradation of historical and archival documents or manuscripts, caused by transparency or seeping of ink from the reverse side of the page. We formulate the problem as one of separating two individual texts, overlapped in the recto and verso maps of the colour channels through a linear convolutional mixing operator, where the mixing coefficients are unknown while the blur kernels are assumed known a priori or estimated off-line. We exploit statistical techniques of blind source separation to estimate both the unknown model parameters and the ideal, uncorrupted images of the two document sides. We show that recently proposed correlated component analysis techniques improve on the already satisfactory performance of independent component analysis and colour decorrelation, even when the two texts are appreciably correlated.
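
    A minimal sketch of the colour-decorrelation baseline mentioned above, assuming a toy instantaneous (blur-free) mixing model: the recto image and the registered, flipped verso image are treated as two linear mixtures of two source texts and decorrelated by symmetric whitening of the observation covariance. The full method in the paper additionally models blur kernels and source correlation.

```python
import numpy as np

def decorrelate(recto, verso_flipped):
    X = np.vstack([recto.ravel(), verso_flipped.ravel()])
    X = X - X.mean(axis=1, keepdims=True)
    C = np.cov(X)                              # 2x2 observation covariance
    vals, vecs = np.linalg.eigh(C)
    W = vecs @ np.diag(vals ** -0.5) @ vecs.T  # C^(-1/2): symmetric whitening
    S = W @ X                                  # decorrelated source estimates
    return [s.reshape(recto.shape) for s in S]

# Toy demo: two random "texts" mixed by an unknown 2x2 see-through operator.
rng = np.random.default_rng(2)
s1, s2 = rng.random((32, 32)), rng.random((32, 32))
recto = 1.0 * s1 + 0.4 * s2                    # verso ink seeping into the recto
verso = 0.3 * s1 + 1.0 * s2
est1, est2 = decorrelate(recto, verso)
```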

  3. Insights into aquifer vulnerability and potential recharge zones from the borehole response to barometric pressure changes

    NASA Astrophysics Data System (ADS)

    El Araby, Mahmoud; Odling, Noelle; Clark, Roger; West, Jared

    2010-05-01

    Borehole water levels fluctuate in response to deformation of the surrounding aquifer caused by surface loading due to barometric pressure, or by strain caused by Earth and ocean tides. The magnitude and nature of this response depend mainly on the hydraulic properties of the aquifer and overlying units and on borehole design. Water level responses therefore reflect the effectiveness of a confining unit as a protective layer against aquifer contamination (and therefore groundwater vulnerability) and point to potential aquifer recharge/discharge zones. In this study, time series of borehole water levels and barometric pressure are investigated using time series analysis and signal processing techniques, with the aim of developing a methodology for assessing recharge/discharge distribution and groundwater vulnerability in the confined/semi-confined part of the Chalk aquifer in East Yorkshire, UK. The Chalk aquifer in East Yorkshire is an important source of industrial and domestic water supply. Its water quality is threatened by surface pollution, particularly by nitrates from agricultural fertilizers. The confined/semi-confined part of this aquifer is covered by various types of superficial deposits, giving a wide range in the aquifer's degree of confinement. A number of boreholes have been selected for monitoring to cover these various types of confining units. Automatic pressure transducers record water levels and barometric pressure at each borehole at 15-minute intervals. In strictly confined aquifers, the borehole water level response to barometric pressure is an undrained, instantaneous response and is a constant fraction of the barometric pressure change. This static confined constant is called the barometric efficiency, which can be estimated simply from the slope of a regression of water levels on barometric pressure. In the semi-confined case, however, the response is lagged owing to water movement between the aquifer and the confining layer. The static constant barometric efficiency is then not applicable, and the response is represented by a barometric response function, which reflects the timing and frequency of the barometric pressure loading. In this study, the barometric response function is estimated using de-convolution techniques both in the time domain (least squares regression de-convolution) and in the frequency domain (discrete Fourier transform de-convolution). To estimate the barometric response function, borehole water level fluctuations due to factors other than barometric pressure must be removed (de-trended), as otherwise they mask the response relation of interest. The collected borehole records show that four main factors other than barometric pressure contribute to water level fluctuations: rainfall recharge, Earth tides, sea tides and pumping activities close to the borehole. Owing to the highly variable nature of the UK weather, rainfall recharge varies widely through the winter and summer seasons. This gives a complicated recharge signal over a wide range of frequencies, which must be de-trended from the borehole water level data in order to estimate the barometric response function. Methods for removing this recharge signal are developed and discussed. Earth tides are calculated theoretically at each borehole location, taking oceanic loading effects into account. Ocean tide effects on water level fluctuations are clear for boreholes located close to the coast. A Matlab code calculates and de-trends the periodic fluctuations in borehole water levels due to Earth and ocean tides by least squares regression on a sum of sine and cosine model functions. The results have been confirmed using spectral analysis techniques.
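
    A minimal sketch of the time-domain step described above: the barometric response function is estimated by least-squares regression de-convolution, regressing water-level changes on current and lagged barometric-pressure changes. The series below are synthetic; real data would first be de-trended for recharge, tides and pumping as described.

```python
import numpy as np

def barometric_response(water_level, pressure, n_lags=24):
    dW = np.diff(water_level)
    dB = np.diff(pressure)
    rows = len(dB) - n_lags
    # Design matrix of lagged pressure changes: column j holds dB delayed by j samples.
    A = np.column_stack([dB[n_lags - j : n_lags - j + rows] for j in range(n_lags + 1)])
    coeffs, *_ = np.linalg.lstsq(A, dW[n_lags:], rcond=None)
    return np.cumsum(coeffs)      # step response = cumulative impulse response

rng = np.random.default_rng(3)
B = np.cumsum(rng.normal(size=2000))               # synthetic barometric record
W = -0.5 * B + rng.normal(scale=0.1, size=B.size)  # confined response, BE ~ 0.5
brf = barometric_response(W, B)                    # tends to ~ -0.5 at all lags
```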

  4. Optimal measurement of tilt and displacement of a Gaussian beam

    NASA Astrophysics Data System (ADS)

    Delaubert, V.; Treps, N.; Fabre, C.; Harb, C.; Lam, P. K.; Bachor, H.

    2006-10-01

    We perform an experiment on the optimal measurement of small displacements of a TEM00 Gaussian beam, based on homodyne detection with a TEM10 local oscillator. We demonstrate a 56% improvement in the detected signal compared with a two-zone (split) detector. This new scheme also allows optimal measurement of small tilts, the conjugate quantity of displacement. Finally, we show that squeezing the TEM10 mode of the incident beam allows a displacement measurement beyond the standard quantum limit.

  5. Self-synchronization for spread spectrum audio watermarks after time scale modification

    NASA Astrophysics Data System (ADS)

    Nadeau, Andrew; Sharma, Gaurav

    2014-02-01

    De-synchronizing operations such as insertion, deletion, and warping pose significant challenges for watermarking. Because these operations are not typical of classical communications, watermarking techniques such as spread spectrum can perform poorly; conversely, specialized synchronization solutions can be challenging to analyze and optimize. This paper addresses desynchronization for blind spread spectrum watermarks, detected without reference to any unmodified signal, using the robustness properties of short blocks. Synchronization relies on dynamic time warping to search over block alignments for the sequence with maximum correlation to the watermark. This differs from synchronization schemes that must first locate invariant features of the original signal, or estimate and reverse the desynchronization before detection. Without these extra synchronization steps, the analysis of the proposed scheme builds on classical SS concepts and allows characterizing the relationship between the size of the search space (number of detection alignment tests) and intrinsic robustness (the continuous region of the search space covered by each individual detection test). The critical metrics that determine the search space, robustness, and performance are the time-frequency resolution of the watermarking transform and the block-length resolution of the alignment. Simultaneous robustness to (a) MP3 compression, (b) insertion/deletion, and (c) time-scale modification is demonstrated for a practical audio watermarking scheme developed in the proposed framework.

  6. A digital pixel cell for address event representation image convolution processing

    NASA Astrophysics Data System (ADS)

    Camunas-Mesa, Luis; Acosta-Jimenez, Antonio; Serrano-Gotarredona, Teresa; Linares-Barranco, Bernabe

    2005-06-01

    Address Event Representation (AER) is an emergent neuromorphic interchip communication protocol that allows real-time virtual massive connectivity between huge numbers of neurons located on different chips. By exploiting high-speed digital communication circuits (with nanosecond timing), synaptic neural connections can be time multiplexed, while neural activity signals (with millisecond timing) are sampled at low frequencies. Neurons generate events according to their information levels: neurons with more information (activity, derivative of activity, contrast, motion, edges, ...) generate more events per unit time and access the interchip communication channel more frequently, while neurons with low activity consume less communication bandwidth. AER technology has been used and reported for the implementation of various types of image sensors or retinae: luminance with local AGC, contrast retinae, motion retinae, and others. There has also been a proposal for programmable-kernel image convolution chips. Such convolution chips contain an array of pixels that perform weighted addition of events. Once a pixel has accumulated sufficient event contributions to reach a fixed threshold, it fires an event, which is then routed out of the chip for further processing. Such convolution chips have been proposed using pulsed current-mode mixed analog and digital circuit techniques. In this paper we present a fully digital pixel implementation to perform the weighted additions and fire the events. This way, for a given technology, there is a fully digital reference implementation against which to compare mixed-signal implementations. We have designed, implemented and tested a fully digital AER convolution pixel. This pixel will be used to implement a full AER convolution chip for programmable-kernel image convolution processing.
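
    A behavioral sketch of the digital convolution pixel described above: each incoming address event adds a kernel-weighted contribution to an array of accumulators, and any pixel whose accumulator crosses a fixed threshold fires an output event and resets. Kernel, threshold and array size are illustrative.

```python
import numpy as np

SIZE, THRESHOLD = 32, 8.0
kernel = np.ones((3, 3))                 # 3x3 all-ones kernel (illustrative)
acc = np.zeros((SIZE, SIZE))             # per-pixel accumulators

def on_event(x, y, out_events):
    """Process one incoming AER event at pixel (x, y)."""
    kh, kw = kernel.shape
    for dy in range(kh):
        for dx in range(kw):
            px, py = x + dx - kw // 2, y + dy - kh // 2
            if 0 <= px < SIZE and 0 <= py < SIZE:
                acc[py, px] += kernel[dy, dx]        # weighted addition of the event
                if acc[py, px] >= THRESHOLD:
                    out_events.append((px, py))      # fire an output event...
                    acc[py, px] = 0.0                # ...and reset the accumulator

out = []
rng = np.random.default_rng(4)
for x, y in rng.integers(0, SIZE, size=(500, 2)):    # a stream of input events
    on_event(x, y, out)
print(len(out), "output events fired")
```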

  7. Software Communications Architecture (SCA) Compliant Software Defined Radio Design for IEEE 802.16 Wirelessman-OFDMTM Transceiver

    DTIC Science & Technology

    2006-12-01

    Convolutional encoder of rate 1/2 (From [10]). Table 3 shows the puncturing patterns used to derive the different code rates. X precedes Y in the order... convolutional code with puncturing configuration (From [10])... Table 4. Mandatory channel coding per modulation (From [10...a concatenation of a Reed-Solomon outer code and a rate-adjustable convolutional inner code. At the transmitter, data shall first be encoded with

  8. Synchronization Analysis and Simulation of a Standard IEEE 802.11G OFDM Signal

    DTIC Science & Technology

    2004-03-01

    Figure 26 Convolutional Encoder Parameters. Figure 27 Puncturing Parameters. As per Table 3, the required code rate is r = 3/4, which requires...to achieve the higher data rates required by the Standard 802.11b was accomplished by using packet binary convolutional coding (PBCC). Essentially...higher data rates are achieved by using convolutional coding combined with BPSK or QPSK modulation. The data is first encoded with a rate one-half

  9. Design and System Implications of a Family of Wideband HF Data Waveforms

    DTIC Science & Technology

    2010-09-01

    code rates (i.e. 8/9, 9/10) will be used to attain the highest data rates for surface wave links. Very high puncturing of convolutional codes can...Communication Links", Edition 1, North Atlantic Treaty Organization, 2009. [14] Yasuda, Y., Kashiki, K., Hirata, Y. "High-Rate Punctured Convolutional Codes ...length 7 convolutional code that has been used for over two decades in 110A. In addition, repetition coding and puncturing were
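
    A minimal sketch of the puncturing idea running through these records: encode with the familiar constraint-length-7, rate-1/2 code (generators 171 and 133 octal) and delete coded bits according to a repeating puncturing pattern to raise the rate. The rate-3/4 pattern below is illustrative, not the pattern of any particular standard; very high-rate patterns (8/9, 9/10) work the same way but leave far fewer redundant bits.

```python
import numpy as np

G = (0o171, 0o133)          # generator polynomials, constraint length K = 7

def conv_encode_r12(bits):
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0x7F            # 7-bit shift register
        out += [bin(state & g).count("1") % 2 for g in G]
    return np.array(out)

def puncture(coded, pattern):
    """Keep only the coded bits where the repeating pattern is 1."""
    mask = np.resize(np.array(pattern), coded.size).astype(bool)
    return coded[mask]

PATTERN_3_4 = [1, 1, 0, 1, 1, 0]      # illustrative rate-3/4 puncturing pattern
msg = np.random.randint(0, 2, 30)
coded = conv_encode_r12(msg)          # 60 bits at rate 1/2
sent = puncture(coded, PATTERN_3_4)   # 40 bits -> overall rate 3/4
print(len(coded), "->", len(sent), "bits")
```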

  10. Video Super-Resolution via Bidirectional Recurrent Convolutional Networks.

    PubMed

    Huang, Yan; Wang, Wei; Wang, Liang

    2018-04-01

    Super-resolving a low-resolution video, namely video super-resolution (SR), is usually handled by either single-image SR or multi-frame SR. Single-image SR deals with each video frame independently and ignores the intrinsic temporal dependency of video frames, which actually plays a very important role in video SR. Multi-frame SR generally extracts motion information, e.g., optical flow, to model the temporal dependency, but often incurs a high computational cost. Considering that recurrent neural networks (RNNs) can model long-term temporal dependency of video sequences well, we propose a fully convolutional RNN, named bidirectional recurrent convolutional network, for efficient multi-frame SR. Different from vanilla RNNs: 1) the commonly used full feedforward and recurrent connections are replaced with weight-sharing convolutional connections, which greatly reduces the number of network parameters and models the temporal dependency at a finer, patch-based rather than frame-based, level; and 2) connections from input layers at previous timesteps to the current hidden layer are added by 3D feedforward convolutions, which aim to capture discriminative spatio-temporal patterns for short-term, fast-varying motions in local adjacent frames. Owing to the cheap convolutional operations, our model has low computational complexity and runs orders of magnitude faster than other multi-frame SR methods. With this powerful temporal dependency modeling, our model can super-resolve videos with complex motions and achieve good performance.

  11. Further Developments in the Communication Link and Error Analysis (CLEAN) Simulator

    NASA Technical Reports Server (NTRS)

    Ebel, William J.; Ingels, Frank M.

    1995-01-01

    During the period 1 July 1993 - 30 June 1994, significant developments to the Communication Link and Error ANalysis (CLEAN) simulator were completed. Many of these were reported in the Semi-Annual report dated December 1993, which is included in this report as Appendix A. Since December 1993, a number of additional modules have been added involving Unit-Memory Convolutional codes (UMC). These are: (1) a Unit-Memory Convolutional Encoder module (UMCEncd); (2) a hard-decision Unit-Memory Convolutional Decoder using the Viterbi decoding algorithm (VitUMC); and (3) a number of utility modules designed to investigate the performance of UMCs, such as the UMC column distance function (UMCdc), UMC free distance function (UMCdfree), UMC row distance function (UMCdr), and UMC transformation (UMCTrans). The study of UMCs was driven, in part, by the desire to investigate high-rate convolutional codes which are better suited as inner codes for a concatenated coding scheme. A number of high-rate UMCs were found which are good candidates for inner codes. Besides the further development of the simulation, a study was performed to construct a table of the best known Unit-Memory Convolutional codes. Finally, a preliminary study of the usefulness of the Periodic Convolutional Interleaver (PCI) was completed and documented in a technical note dated March 17, 1994. This technical note is also included in this final report.

  12. Cost Structure and Life Cycle Cost (LCC) for Military Systems (structures de couts globaux de possession (LCC) pour systemes militaires)

    DTIC Science & Technology

    2003-06-01

    variables. In the plane (p = 2), the linear regression method fits, to the cloud of points, a straight line that minimizes the sum of squared deviations between...consistency of the product definition. For example, for an armoured vehicle, the "harmony" of the trio of mass, engine power and maximum speed must be validated. It...in operational condition. More generally, the DGA should move towards the development and acquisition of cost-estimation models for the

  13. Generalised Category Attack—Improving Histogram-Based Attack on JPEG LSB Embedding

    NASA Astrophysics Data System (ADS)

    Lee, Kwangsoo; Westfeld, Andreas; Lee, Sangjin

    We present a generalised and improved version of the category attack on LSB steganography in JPEG images with a straddled embedding path. It detects low embedding rates more reliably and is less disturbed by double-compressed images. The proposed methods are evaluated on several thousand images, and the results are compared to both recent blind and specific attacks for JPEG embedding. The proposed attack permits more reliable detection, although it is based on first-order statistics only. Its simple structure makes it very fast.

  14. Deep convolutional neural networks for automatic classification of gastric carcinoma using whole slide images in digital histopathology.

    PubMed

    Sharma, Harshita; Zerbe, Norman; Klempert, Iris; Hellwich, Olaf; Hufnagl, Peter

    2017-11-01

    Deep learning using convolutional neural networks is an actively emerging field in histological image analysis. This study explores deep learning methods for computer-aided classification in H&E stained histopathological whole slide images of gastric carcinoma. An introductory convolutional neural network architecture is proposed for two computerized applications, namely, cancer classification based on immunohistochemical response and necrosis detection based on the existence of tumor necrosis in the tissue. Classification performance of the developed deep learning approach is quantitatively compared with traditional image analysis methods in digital histopathology requiring prior computation of handcrafted features, such as statistical measures using gray level co-occurrence matrix, Gabor filter-bank responses, LBP histograms, gray histograms, HSV histograms and RGB histograms, followed by random forest machine learning. Additionally, the widely known AlexNet deep convolutional framework is comparatively analyzed for the corresponding classification problems. The proposed convolutional neural network architecture reports favorable results, with an overall classification accuracy of 0.6990 for cancer classification and 0.8144 for necrosis detection.

  15. Towards dropout training for convolutional neural networks.

    PubMed

    Wu, Haibing; Gu, Xiaodong

    2015-11-01

    Recently, dropout has seen increasing use in deep learning. For deep convolutional neural networks, dropout is known to work well in fully-connected layers. However, its effect in convolutional and pooling layers is still not clear. This paper demonstrates that max-pooling dropout is equivalent to randomly picking an activation based on a multinomial distribution at training time. In light of this insight, we advocate employing our proposed probabilistic weighted pooling, instead of the commonly used max-pooling, to act as model averaging at test time. Empirical evidence validates the superiority of probabilistic weighted pooling. We also empirically show that the effect of convolutional dropout is not trivial, despite the dramatically reduced possibility of over-fitting due to the convolutional architecture. Elaborately designing dropout training simultaneously in max-pooling and fully-connected layers, we achieve state-of-the-art performance on MNIST, and very competitive results on CIFAR-10 and CIFAR-100, relative to other approaches without data augmentation. Finally, we compare max-pooling dropout and stochastic pooling, both of which introduce stochasticity based on multinomial distributions at the pooling stage.
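
    A minimal sketch of the test-time rule advocated above, assuming ReLU (non-negative) activations: under max-pooling dropout with retain probability p, the pooled output equals the i-th smallest activation with probability p(1-p)^(n-i), so probabilistic weighted pooling replaces the plain max with this expectation.

```python
import numpy as np

def probabilistic_weighted_pool(activations, p=0.5):
    """Expected max-pooling-dropout output over one pooling region."""
    a = np.sort(np.asarray(activations, float))      # a_(1) <= ... <= a_(n)
    n = a.size
    probs = p * (1 - p) ** (n - 1 - np.arange(n))    # P(max of survivors = a_(i))
    return np.sum(probs * a)                         # the all-dropped case contributes 0

region = [0.2, 1.5, 0.7, 0.9]                        # a 2x2 pooling region, flattened
print(probabilistic_weighted_pool(region, p=0.5))    # < max(region) = 1.5
```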

  16. A fully convolutional networks (FCN) based image segmentation algorithm in binocular imaging system

    NASA Astrophysics Data System (ADS)

    Long, Zourong; Wei, Biao; Feng, Peng; Yu, Pengwei; Liu, Yuanyuan

    2018-01-01

    This paper proposes an image segmentation algorithm based on fully convolutional networks (FCN) for a binocular imaging system under various circumstances. Image segmentation is addressed as semantic segmentation: the FCN classifies pixels, achieving image-level semantic segmentation. Different from classical convolutional neural networks (CNN), the FCN uses convolution layers instead of fully connected layers, so it can accept images of arbitrary size. In this paper, we combine the convolutional neural network with scale-invariant feature matching to solve the problem of visual positioning in different scenarios. All high-resolution images are captured with our calibrated binocular imaging system, and several groups of test data were collected to verify the method. The experimental results show that the binocular images are effectively segmented without over-segmentation. With these segmented images, feature matching via the SURF method is implemented to obtain regional information for further image processing. The final positioning procedure shows that the results are acceptable in the range of 1.4-1.6 m, with a distance error of less than 10 mm.

  17. Error control techniques for satellite and space communications

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.

    1992-01-01

    Work performed during the reporting period is summarized. Robustly good trellis codes for use with sequential decoding were constructed; these codes provide a much better trade-off between free distance and distance profile. The unequal error protection capabilities of convolutional codes were studied. The problem of finding good large-constraint-length, low-rate convolutional codes for deep space applications was investigated. A formula for computing the free distance of rate-1/n convolutional codes was discovered. Double memory (DM) codes, codes with two memory units per unit bit position, were studied, and a search for optimal DM codes is being conducted. An algorithm for constructing convolutional codes from a given quasi-cyclic code was developed. Papers based on the above work are included in the appendix.
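
    As a concrete companion to the free-distance work mentioned above, the following sketch computes the free distance of a rate-1/n convolutional code by running Dijkstra's algorithm over the code trellis for the minimum-weight path that diverges from and remerges with the all-zero state; it is shown for the standard (7,5) octal rate-1/2 code, whose free distance is 5. This is a generic search, not the closed-form formula the report refers to.

```python
import heapq

def free_distance(generators, K):
    """generators: polynomials as integers; K: constraint length."""

    def step(state, bit):
        reg = (bit << (K - 1)) | state
        out_weight = sum(bin(reg & g).count("1") % 2 for g in generators)
        return (reg >> 1), out_weight              # next state, branch weight

    # Start on the branch that diverges from state 0 with input 1.
    start, w0 = step(0, 1)
    dist = {start: w0}
    heap = [(w0, start)]
    while heap:
        d, s = heapq.heappop(heap)
        if s == 0:
            return d                               # first remerge with 0 is minimal
        if d > dist.get(s, float("inf")):
            continue
        for bit in (0, 1):
            t, w = step(s, bit)
            if d + w < dist.get(t, float("inf")):
                dist[t] = d + w
                heapq.heappush(heap, (d + w, t))

print(free_distance((0o7, 0o5), K=3))              # prints 5
```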

  18. Efficient airport detection using region-based fully convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Xin, Peng; Xu, Yuelei; Zhang, Xulei; Ma, Shiping; Li, Shuai; Lv, Chao

    2018-04-01

    This paper presents a model for airport detection using region-based fully convolutional neural networks. To achieve fast detection with high accuracy, we share the convolutional layers between the region proposal procedure and the airport detection procedure, and use graphics processing units (GPUs) to speed up training and testing. For lack of labeled data, we transfer the convolutional layers of ZF net, pretrained on ImageNet, to initialize the shared convolutional layers, and then retrain the model using the alternating optimization training strategy. The proposed model has been tested on an airport dataset consisting of 600 images. Experiments show that the proposed method can distinguish airports in our dataset from similar background scenes almost in real time and with high accuracy, which is much better than traditional methods.

  19. A Further Compilation of Compressible Boundary Layer Data with a Survey of Turbulence Data,

    DTIC Science & Technology

    1981-11-01

    publication is once more funded by AGARD and we thank R.H. Rollins, executive of the Fluid Dynamics Panel, for his help and encouragement. We are specially...in a shock wave-boundary layer interaction. Thesis, Université de Poitiers. Lee R.E., Yanta W.J., Leonas A.C. 1969 Velocity profile, skin friction

  20. The Axial Compressive Strength of High Performance Polymer Fibers

    DTIC Science & Technology

    1985-03-01

    consists of axially oriented graphitic microfibrils that have the strong and stiff graphite crystal basal plane oriented parallel to the long axis of the... microfibrils [3,4]. The synthetic rigid polymer fibers are represented by only one commercial material: the PPTA fibers produced by E.I. DuPont de...and/or microfibrils is presented. A potential energy balance analysis is used to calculate critical stresses for the onset of compressive buckling

  1. Paired comparisons of nonlinear frequency compression, extended bandwidth, and restricted bandwidth hearing-aid processing for children and adults with hearing loss

    PubMed Central

    Brennan, Marc A.; McCreery, Ryan; Kopun, Judy; Hoover, Brenda; Alexander, Joshua; Lewis, Dawna; Stelmachowicz, Patricia G.

    2014-01-01

    Background Preference for speech and music processed with nonlinear frequency compression and two controls (restricted and extended bandwidth hearing-aid processing) was examined in adults and children with hearing loss. Purpose Determine if stimulus type (music, sentences), age (children, adults) and degree of hearing loss influence listener preference for nonlinear frequency compression, restricted bandwidth and extended bandwidth. Research Design Within-subject, quasi-experimental study. Using a round-robin procedure, participants listened to amplified stimuli that were 1) frequency-lowered using nonlinear frequency compression, 2) low-pass filtered at 5 kHz to simulate the restricted bandwidth of conventional hearing aid processing, or 3) low-pass filtered at 11 kHz to simulate extended bandwidth amplification. The examiner and participants were blinded to the type of processing. Using a two-alternative forced-choice task, participants selected the preferred music or sentence passage. Study Sample Sixteen children (8–16 years) and 16 adults (19–65 years) with mild-to-severe sensorineural hearing loss. Intervention All subjects listened to speech and music processed using a hearing-aid simulator fit to the Desired Sensation Level algorithm v.5.0a (Scollie et al, 2005). Results Children and adults did not differ in their preferences. For speech, participants preferred extended bandwidth to both nonlinear frequency compression and restricted bandwidth. Participants also preferred nonlinear frequency compression to restricted bandwidth. Preference was not related to degree of hearing loss. For music, listeners did not show a preference. However, participants with greater hearing loss preferred nonlinear frequency compression to restricted bandwidth more than participants with less hearing loss. Conversely, participants with greater hearing loss were less likely to prefer extended bandwidth to restricted bandwidth. Conclusion Both age groups preferred access to high frequency sounds, as demonstrated by their preference for either the extended bandwidth or nonlinear frequency compression conditions over the restricted bandwidth condition. Preference for extended bandwidth can be limited for those with greater degrees of hearing loss, but participants with greater hearing loss may be more likely to prefer nonlinear frequency compression. Further investigation using participants with more severe hearing loss may be warranted. PMID:25514451

  2. Fusion of Deep Learning and Compressed Domain features for Content Based Image Retrieval.

    PubMed

    Liu, Peizhong; Guo, Jing-Ming; Wu, Chi-Yi; Cai, Danlin

    2017-08-29

    This paper presents an effective image retrieval method combining high-level features from a Convolutional Neural Network (CNN) model and low-level features from Dot-Diffused Block Truncation Coding (DDBTC). The low-level features, e.g., texture and color, are constructed as VQ-indexed histograms from the DDBTC bitmap and the maximum and minimum quantizers. In contrast, the high-level CNN features can effectively capture human perception. With the fusion of the DDBTC and CNN features, the extended deep-learning two-layer codebook features (DL-TLCF) are generated using the proposed two-layer codebook, dimension reduction, and similarity reweighting to improve the overall retrieval rate. Two metrics, average precision rate (APR) and average recall rate (ARR), are employed to examine various datasets. As documented in the experimental results, the proposed schemes achieve superior performance compared to state-of-the-art methods with either low- or high-level features in terms of retrieval rate. Thus, the method is a strong candidate for various image retrieval applications.

  3. The Application of Virtex-II Pro FPGA in High-Speed Image Processing Technology of Robot Vision Sensor

    NASA Astrophysics Data System (ADS)

    Ren, Y. J.; Zhu, J. G.; Yang, X. Y.; Ye, S. H.

    2006-10-01

    The Virtex-II Pro FPGA is applied to the vision sensor tracking system of an IRB2400 robot. The hardware platform, which undertakes the tasks of improving SNR and compressing data, is built around the high-speed image processing capability of the FPGA. The low-level image-processing algorithms are realized by combining the FPGA fabric with an embedded CPU. Image processing is accelerated by the combination of FPGA and CPU, and the embedded CPU makes the interface logic easy to realize. Some key techniques, such as the read-write process, template matching, and convolution, are presented, and several modules are simulated. Finally, implementations of the modules using this design, a PC, and a DSP are compared. Because the core of the high-speed image processing system is an FPGA, whose function can be conveniently updated, the measurement system is, to a degree, intelligent.

  4. FPGA based hardware optimized implementation of signal processing system for LFM pulsed radar

    NASA Astrophysics Data System (ADS)

    Azim, Noor ul; Jun, Wang

    2016-11-01

    Signal processing is one of the main parts of any radar system. Different signal processing algorithms are used to extract information about parameters such as range, speed, and direction of a target in radar communication. This paper presents LFM (Linear Frequency Modulation) pulsed radar signal processing algorithms which are used to improve target detection and range resolution and to estimate target speed. The algorithms are first simulated in MATLAB to verify the concept and theory. After conceptual verification in MATLAB, the simulation is converted into a hardware implementation on a Xilinx FPGA, the Virtex-6 (XC6VLX75T). For the hardware implementation, pipelining and other resource optimizations are adopted. The algorithms in focus for improving target detection, range resolution, and speed estimation are hardware-optimized, fast-convolution-based pulse compression and pulse Doppler processing.
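
    A minimal sketch of the fast-convolution pulse compression named above: an LFM chirp echo is matched-filtered in the frequency domain (multiplication by the conjugate chirp spectrum), compressing the long pulse into a narrow peak at the target delay. All radar parameters are illustrative.

```python
import numpy as np

fs, T, B = 10e6, 20e-6, 2e6          # sample rate, pulse width, sweep bandwidth
t = np.arange(int(T * fs)) / fs
chirp = np.exp(1j * np.pi * (B / T) * t**2)   # unit-amplitude LFM pulse

# Echo: the chirp delayed inside a longer noisy receive window.
rx = np.zeros(1024, complex)
rx[300:300 + chirp.size] += chirp
rx += 0.1 * (np.random.randn(1024) + 1j * np.random.randn(1024))

# Fast convolution with the matched filter h(t) = conj(chirp(-t)).
n = 1024 + chirp.size - 1
compressed = np.fft.ifft(np.fft.fft(rx, n) * np.conj(np.fft.fft(chirp, n)))
peak = np.argmax(np.abs(compressed))
print("target at sample", peak)       # ~300: start of the echo
```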

  5. Collaborative identification method for sea battlefield target based on deep convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Zheng, Guangdi; Pan, Mingbo; Liu, Wei; Wu, Xuetong

    2018-03-01

    Target identification in the sea battlefield is a prerequisite for judging enemy intent in modern naval battle. In this paper, a collaborative identification method based on convolutional neural networks is proposed to identify typical sea battlefield targets. Different from traditional single-input/single-output identification methods, the proposed method constructs a multi-input/single-output co-identification architecture based on an optimized convolutional neural network and weighted D-S evidence theory. The simulation results show that

  6. A convolution model for computing the far-field directivity of a parametric loudspeaker array.

    PubMed

    Shi, Chuang; Kajikawa, Yoshinobu

    2015-02-01

    This paper describes a method to compute the far-field directivity of a parametric loudspeaker array (PLA), whereby a steerable parametric loudspeaker can be implemented when phased array techniques are applied. The convolution of the product directivity with the Westervelt directivity is suggested, substituting for the past practice of using the product directivity only. The directivity of a PLA computed using the proposed convolution model agrees significantly better with measured directivity, at a negligible computational cost.
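
    A minimal sketch of the convolution model described above: the far-field directivity is computed as the convolution, over angle, of the product directivity of the primary waves with a Westervelt-type directivity. The Gaussian stand-in for the Westervelt term, the equal primary frequencies and all array parameters are assumptions for illustration; the paper derives the actual forms.

```python
import numpy as np

theta = np.linspace(-np.pi / 2, np.pi / 2, 721)      # angle grid (rad)

def steered_array_factor(theta, n=8, d=0.01, f=40e3, c=343.0, steer=0.0):
    """Directivity of an n-element line array at ultrasonic frequency f."""
    k = 2 * np.pi * f / c
    psi = k * d * (np.sin(theta) - np.sin(steer))
    with np.errstate(divide="ignore", invalid="ignore"):
        af = np.where(psi == 0, 1.0, np.sin(n * psi / 2) / (n * np.sin(psi / 2)))
    return np.abs(af)

# Product directivity of the two primary beams (equal frequencies assumed here).
product = steered_array_factor(theta) * steered_array_factor(theta)

# Assumed narrow Westervelt-type directivity, modeled as a Gaussian in angle.
westervelt = np.exp(-(theta / np.deg2rad(4.0)) ** 2)

audio_directivity = np.convolve(product, westervelt, mode="same")
audio_directivity /= audio_directivity.max()          # normalize to 0 dB on axis
```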

  7. Laser Welding Process Monitoring Systems: Advanced Signal Analysis for Quality Assurance

    NASA Astrophysics Data System (ADS)

    D'Angelo, Giuseppe

    Laser material processing is widely used in industry today. Laser welding in particular has become one of the key technologies, e.g., for the automotive sector, owing to the improvement and development of new laser sources and the knowledge gained in countless scientific research projects. Nevertheless, it is still not possible to use the full potential of this technology, and the introduction and application of quality-assurance systems is required. For a long time, the statement "the best sensor is no sensor" was often heard. Today, a change of paradigm can be observed. On the one hand, ISO 9000 and other regulations enforced by law have led to the understanding that quality monitoring is an essential tool in modern manufacturing and is necessary to keep production results within deterministic boundaries. On the other hand, rising quality requirements not only place ever higher demands on the process technology but also call for quality-assurance measures that ensure reliable recognition of process faults. As a result, there is a need for reliable online detection and correction of welding faults by means of in-process monitoring. This chapter describes an advanced signal analysis technique to extract information from signals detected by optical sensors during the laser welding process. The technique is based on the method of reassignment, which was first applied to the spectrogram by Kodera, Gendrin and de Villedary [22, 23] and later generalized to any bilinear time-frequency representation by Auger and Flandrin [24]. Key to the method is a nonlinear convolution where the value of the convolution is not placed at the center of the convolution kernel but rather reassigned to the center of mass of the function within the kernel. The resulting reassigned representation yields significantly improved component localization. We compare the proposed time-frequency distributions by analyzing signals detected during the laser welding of tailored blanks, demonstrating the advantages of the reassigned representation and the practical applicability of the proposed method.

  8. Sequential compression pump effect on hypotension due to spinal anesthesia for cesarean section: A double blind clinical trial.

    PubMed

    Zadeh, Fatemeh Javaherforoosh; Alqozat, Mostafa; Zadeh, Reza Akhond

    2017-05-01

    Spinal anesthesia (SA) is a standard technique for cesarean section. Hypotension has an incidence of 80-85% after SA in pregnant women. The objective was to determine the effect of intermittent pneumatic compression of the lower limbs on reducing spinal anesthesia induced hypotension during cesarean section. This double-blind prospective clinical study was conducted on 76 non-laboring parturient patients, aged 18-45 years, with American Society of Anesthesiologists physical status I or II, who were scheduled for elective cesarean section at Razi Hospital, Ahvaz, Iran from December 21, 2015 to January 20, 2016. Patients were divided into a treatment mechanical pump group (Group M) or a control group (Group C) by simple random sampling. Fetal presentation, birth weight, Apgar at 1 and 5 min, time taken for pre-hydration (min), time from pre-hydration to the administration of spinal anesthesia (min), time from initiation of spinal anesthesia to delivery (min), total volume of intravenous fluids, and total doses of ephedrine and metoclopramide were recorded. Data were analyzed with SPSS version 19, using repeated measures ANOVA and the Chi square test. Heart rate, MAP, DAP and SAP changes were significantly higher in the off-pump group at baseline and at the first minute (p<0.05), and at the other time points these changes differed significantly from the control group. This research showed the suitability of a Sequential Compression Device (SCD) for reducing hypotension after spinal anesthesia for cesarean section; the method can also reduce the vasopressor dose needed to restore blood pressure, but confirmation of its effectiveness requires repetition of the study with a larger sample size. The trial was registered at the Iranian Registry of Clinical Trials (http://www.irct.ir) with the IRCT ID: IRCT2015011217742N3. The authors received no financial support for the research, authorship, and/or publication of this article.

  9. Sequential compression pump effect on hypotension due to spinal anesthesia for cesarean section: A double blind clinical trial

    PubMed Central

    Zadeh, Fatemeh Javaherforoosh; Alqozat, Mostafa; Zadeh, Reza Akhond

    2017-01-01

    Background Spinal anesthesia (SA) is a standard technique for cesarean section. Hypotension has an incidence of 80–85% after SA in pregnant women. Objective To determine the effect of intermittent pneumatic compression of the lower limbs on reducing spinal anesthesia induced hypotension during cesarean section. Methods This double-blind prospective clinical study was conducted on 76 non-laboring parturient patients, aged 18–45 years, with American Society of Anesthesiologists physical status I or II, who were scheduled for elective cesarean section at Razi Hospital, Ahvaz, Iran from December 21, 2015 to January 20, 2016. Patients were divided into a treatment mechanical pump group (Group M) or a control group (Group C) by simple random sampling. Fetal presentation, birth weight, Apgar at 1 and 5 min, time taken for pre-hydration (min), time from pre-hydration to the administration of spinal anesthesia (min), time from initiation of spinal anesthesia to delivery (min), total volume of intravenous fluids, and total doses of ephedrine and metoclopramide were recorded. Data were analyzed with SPSS version 19, using repeated measures ANOVA and the Chi square test. Results Heart rate, MAP, DAP and SAP changes were significantly higher in the off-pump group at baseline and at the first minute (p<0.05), and at the other time points these changes differed significantly from the control group. Conclusion This research showed the suitability of a Sequential Compression Device (SCD) for reducing hypotension after spinal anesthesia for cesarean section; the method can also reduce the vasopressor dose needed to restore blood pressure, but confirmation of its effectiveness requires repetition of the study with a larger sample size. Trial registration The trial was registered at the Iranian Registry of Clinical Trials (http://www.irct.ir) with the IRCT ID: IRCT2015011217742N3. Funding The authors received no financial support for the research, authorship, and/or publication of this article. PMID:28713516

  10. A stratigraphy fieldtrip for people with visual impairment

    NASA Astrophysics Data System (ADS)

    Gomez-Heras, Miguel; Gonzalez-Acebron, Laura; Muñoz-Garcia, Belen; Garcia-Frank, Alejandra; Fesharaki, Omid

    2017-04-01

    This communication presents how a stratigraphy fieldtrip adapted to people with visual impairment was prepared and carried out. The fieldtrip aimed to promote scientific knowledge of Earth sciences among people with visual impairment and to inspire Earth scientists to take the needs of people with disabilities into account when designing public engagement activities. The theme chosen for the fieldtrip was the importance of sedimentary rocks in shaping the Earth and the information that can be extracted from observing sedimentary structures. The Triassic outcrops of Riba de Santiuste (Guadalajara, Spain) were observed during this fieldtrip. The expected learning outcomes were: a) understanding what sedimentary rocks are, how they form, and how they fold and crop out; b) knowing what a sedimentary structure is and recognising some of them; and c) being able to infer the sedimentary environment from certain sedimentary structures. The fieldtrip was prepared through the NGO "Science without Barriers" together with the Madrid delegation of the National Association for Spanish Blind People (ONCE-Madrid). ONCE-Madrid was responsible for advertising this activity to its affiliates as part of its yearly cultural program. A preparatory fieldtrip was carried out to test the teaching methodology and to make an appropriate risk assessment; this was done together with the head of the Culture Area of ONCE-Madrid and two blind people. The involvement of end-users in the preparation of activities is at the core of the European Disability Forum motto: "Nothing about us without us". A crucial aspect of the site was accessibility: in terms of perambulatory access to the outcrops the site is excellent and suitable, to some extent, for end-users regardless of their physical fitness. The fieldtrip itself took place on October 15th 2016 and 30 people with and without visual disability attended. In addition to overall observations and explanations of strata and stratification, five types of sedimentary structures were examined in detail: grain-size differences and their meaning in terms of the energy of the sedimentary environment, plant-root bioturbation traces, flute casts, ripples, and convolute stratification. An introduction to the fieldtrip was available in Braille, as were maps and figures in relief. A 3D plaster model representing the whole outcrop was used to give an overall view of the area, as it was noted during the preparatory fieldtrip that totally blind people with no geological background had problems "zooming out", i.e. imagining the whole geological structure from detailed manipulation of strata. The feedback from the majority of the attendants was very enthusiastic: they highlighted the suitability of the activities and materials, perceived the fieldtrip as an enjoyable learning experience, and met the expected learning outcomes to some extent. It is noteworthy that the fieldtrip was perceived positively by attendants with and without visual disability. This fieldtrip was possible thanks to a European Geosciences Union Public Outreach Grant.

  11. Sight-threatening optic neuropathy is associated with paranasal lymphoma

    PubMed Central

    Hayashi, Takahiko; Watanabe, Ken; Tsuura, Yukio; Tsuji, Gengo; Koyama, Shingo; Yoshigi, Jun; Hirata, Naoko; Yamane, Shin; Iizima, Yasuhito; Toyota, Shigeo; Takeuchi, Satoshi

    2010-01-01

    Malignant lymphoma around the orbit is very rare. We present a rare case of optic neuropathy caused by lymphoma. A 61-year-old Japanese woman was referred to our hospital for evaluation of idiopathic optic neuropathy affecting her right eye. The patient was treated with steroid pulse therapy (methylprednisolone 1 g daily for 3 days) under a presumed diagnosis of idiopathic optic neuritis. After she had been switched to oral steroid therapy, endoscopic sinus surgery was performed, which revealed diffuse large B cell lymphoma of the ethmoidal sinus. Although R-CHOP therapy was immediately started, prolonged optic nerve compression resulted in irreversible blindness. Accordingly, patients with suspected idiopathic optic neuritis should be carefully reassessed when they show a poor response to treatment, and imaging of the orbits and brain should always be performed at initial diagnosis because compression by a tumor may be present. PMID:20390034

  12. Current progress in multiple-image blind demixing algorithms

    NASA Astrophysics Data System (ADS)

    Szu, Harold H.

    2000-06-01

    Imagery edges occur naturally in human visual systems as a consequence of redundancy reduction towards 'sparse and orthogonal feature maps,' which have recently been derived from the maximum-entropy information-theoretic first principle of artificial neural networks. After a brief review of such Independent Component Analysis, or Blind Source Separation, of edge maps, we explore the de-mixing condition for more than two imagery objects recognizable by an intelligent pair of cameras with memory in a time-multiplexed fashion.
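
    As a rough illustration of the blind demixing idea discussed above, the sketch below unmixes two linearly mixed images with FastICA from scikit-learn. It is a generic ICA demonstration under an assumed mixing matrix, not the paper's edge-map method; the "images" and sizes are made up.

```python
# Generic blind demixing sketch: recover two sources from two linear
# mixtures with FastICA (scikit-learn). The mixing matrix and "images"
# are made-up stand-ins, not the paper's edge-map procedure.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
s1, s2 = rng.random((64, 64)), rng.random((64, 64))   # two source images
S = np.stack([s1.ravel(), s2.ravel()], axis=1)        # (pixels, sources)

A = np.array([[0.7, 0.3],
              [0.4, 0.6]])                            # unknown mixing matrix
X = S @ A.T                                           # two observed mixtures

ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)          # sources up to permutation and scale
demixed = [S_hat[:, i].reshape(64, 64) for i in range(2)]
```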

  13. Venous leg ulcer healing with electric stimulation therapy: a pilot randomised controlled trial.

    PubMed

    Miller, C; McGuiness, W; Wilson, S; Cooper, K; Swanson, T; Rooney, D; Piller, N; Woodward, M

    2017-03-02

    Compression therapy is a gold standard treatment to promote venous leg ulcer (VLU) healing. Concordance with compression therapy is, however, often sub-optimal. The aim of this study was to evaluate the effectiveness of electric stimulation therapy (EST) to facilitate healing of VLUs among people who do not use moderate-to-high levels of compression (>25 mmHg). A pilot multicentre, single-blinded randomised controlled trial was conducted. Participants were randomised (2:1) to an intervention group or a control group, where EST or a sham device was used 4 times daily for 20 minutes per session. Participants were monitored fortnightly for eight weeks. The primary outcome measure was percentage change in wound area. In the 23 patients recruited, an average reduction in wound size of 23.15% (standard deviation [SD]: 61.23) was observed for the control group compared with 32.67% (SD: 42.54) for the intervention group. A moderate effect size favouring the intervention group was detected from univariate [F(1,18)=1.588, p=0.224, partial eta squared=0.081] and multivariate repeated-measures [F(1,18)=2.053, p=0.169, partial eta squared=0.102] analyses. The pilot study was not powered to detect statistical significance; however, the difference in healing outcomes is encouraging. EST may be an effective adjunct treatment among patients who have experienced difficulty adhering to moderate-to-high levels of compression therapy.

  14. Evaluation of an impedance threshold device in patients receiving active compression-decompression cardiopulmonary resuscitation for out of hospital cardiac arrest.

    PubMed

    Plaisance, Patrick; Lurie, Keith G; Vicaut, Eric; Martin, Dominique; Gueugniaud, Pierre-Yves; Petit, Jean-Luc; Payen, Didier

    2004-06-01

    The purpose of this multicentre clinical randomized controlled blinded prospective trial was to determine whether an inspiratory impedance threshold device (ITD), when used in combination with active compression-decompression (ACD) cardiopulmonary resuscitation (CPR), would improve survival rates in patients with out-of-hospital cardiac arrest. Patients were randomized to receive either a sham (n = 200) or an active impedance threshold device (n = 200) during advanced cardiac life support performed with active compression-decompression cardiopulmonary resuscitation. The primary endpoint of this study was 24 h survival. The 24 h survival rates were 44/200 (22%) with the sham valve and 64/200 (32%) with the active valve (P = 0.02). The numbers of patients with return of spontaneous circulation (ROSC), intensive care unit (ICU) admission, and hospital discharge were 77 (39%), 57 (29%), and 8 (4%) in the sham valve group versus 96 (48%) (P = 0.05), 79 (40%) (P = 0.02), and 10 (5%) (P = 0.6) in the active valve group. Six of ten survivors in the active valve group and 1 of 8 survivors in the sham group had normal neurological function at hospital discharge (P = 0.1). The use of an impedance valve in patients receiving active compression-decompression cardiopulmonary resuscitation for out-of-hospital cardiac arrest significantly improved 24 h survival rates.

  15. De novo peptide sequencing by deep learning

    PubMed Central

    Tran, Ngoc Hieu; Zhang, Xianglilan; Xin, Lei; Shan, Baozhen; Li, Ming

    2017-01-01

    De novo peptide sequencing from tandem MS data is the key technology in proteomics for the characterization of proteins, especially for new sequences, such as mAbs. In this study, we propose a deep neural network model, DeepNovo, for de novo peptide sequencing. DeepNovo architecture combines recent advances in convolutional neural networks and recurrent neural networks to learn features of tandem mass spectra, fragment ions, and sequence patterns of peptides. The networks are further integrated with local dynamic programming to solve the complex optimization task of de novo sequencing. We evaluated the method on a wide variety of species and found that DeepNovo considerably outperformed state of the art methods, achieving 7.7–22.9% higher accuracy at the amino acid level and 38.1–64.0% higher accuracy at the peptide level. We further used DeepNovo to automatically reconstruct the complete sequences of antibody light and heavy chains of mouse, achieving 97.5–100% coverage and 97.2–99.5% accuracy, without assisting databases. Moreover, DeepNovo is retrainable to adapt to any sources of data and provides a complete end-to-end training and prediction solution to the de novo sequencing problem. Not only does our study extend the deep learning revolution to a new field, but it also shows an innovative approach in solving optimization problems by using deep learning and dynamic programming. PMID:28720701

  16. Randomized Double-Blind Phase III Pivotal Field Trial of the Comparative Immunogenicity Safety and Tolerability of Two Yellow Fever 17D Vaccines (ARILVAX(Trademark) and YF-VAX(Trademark)) in Healthy Infants and Children in Peru

    DTIC Science & Technology

    2004-08-17

    Only OCR fragments of this record's abstract survive. They include a citation of the expert meeting "Estrategias de Prevencion y Control de la Fiebre Amarilla y Riesgo de Urbanizacion en las Americas" (Strategies for the Prevention and Control of Yellow Fever and Urbanization Risk in the Americas), May 14-15, 1998, Lima, Peru: U.S. Agency...; a note that vaccination is indicated in endemic areas and that the origin, derivation, production, and genomic sequences of these vaccines have been previously de...; and baseline tabulations by study site of mean age, sex, weight (in kg), height (in cm), body mass index (kg/m2), pulse (beats per minute), allergy history, anaphylactic...

  17. Electroencephalography Based Fusion Two-Dimensional (2D)-Convolution Neural Networks (CNN) Model for Emotion Recognition System.

    PubMed

    Kwon, Yea-Hoon; Shin, Sae-Byuk; Kim, Shin-Dug

    2018-04-30

    The purpose of this study is to improve human emotion classification accuracy using a convolutional neural network (CNN) model and to suggest an overall method for classifying emotion from multimodal data. We improved classification performance by combining electroencephalogram (EEG) and galvanic skin response (GSR) signals. GSR signals are preprocessed using the zero-crossing rate. Sufficient EEG feature extraction can be obtained through a CNN; we therefore propose a suitable CNN model for feature extraction, tuning the hyperparameters of the convolution filters. The EEG signal is preprocessed prior to convolution by a wavelet transform, which considers time and frequency simultaneously. We use the DEAP (Database for Emotion Analysis using Physiological Signals) open dataset to verify the proposed process, achieving 73.4% accuracy and showing significant performance improvement over the current best-practice models.
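
    To make the pipeline concrete, here is a minimal PyTorch sketch of a small 2-D CNN operating on a wavelet time-frequency map of an EEG segment. The 32×32 input size, channel counts, and kernel sizes are illustrative assumptions, not the paper's tuned hyperparameters.

```python
# Hedged sketch of a 2-D CNN over EEG time-frequency maps (PyTorch).
# All layer sizes and the 32x32 scalogram input are assumptions.
import torch
import torch.nn as nn

class EmotionCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, n_classes)

    def forward(self, x):              # x: (batch, 1, 32, 32) scalogram
        h = self.features(x)
        return self.classifier(h.flatten(1))

logits = EmotionCNN()(torch.randn(4, 1, 32, 32))   # -> shape (4, 2)
```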

  18. Efficient convolutional sparse coding

    DOEpatents

    Wohlberg, Brendt

    2017-06-20

    Computationally efficient algorithms may be applied for fast dictionary learning solving the convolutional sparse coding problem in the Fourier domain. More specifically, efficient convolutional sparse coding may be derived within an alternating direction method of multipliers (ADMM) framework that utilizes fast Fourier transforms (FFT) to solve the main linear system in the frequency domain. Such algorithms may enable a significant reduction in computational cost over conventional approaches by implementing a linear solver for the most critical and computationally expensive component of the conventional iterative algorithm. The theoretical computational cost of the algorithm may be reduced from O(M^3 N) to O(MN log N), where N is the dimensionality of the data and M is the number of elements in the dictionary. This significant improvement in efficiency may greatly increase the range of problems that can practically be addressed via convolutional sparse representations.
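
    The core trick, fitting the convolutional model in the frequency domain, can be sketched with a plain ISTA loop in place of the patented ADMM solver; the signal, dictionary, and penalty weight below are illustrative assumptions.

```python
# ISTA-style sketch of convolutional sparse coding with FFT-domain
# gradients. Illustrative stand-in for the ADMM solver described above.
import numpy as np

def csc_ista(s, D, lam=0.05, n_iter=100):
    """s: 1-D signal; D: (M, N) filters zero-padded to len(s)."""
    Sf = np.fft.fft(s)
    Df = np.fft.fft(D, axis=1)                           # filter spectra
    step = 1.0 / np.max((np.abs(Df) ** 2).sum(axis=0))   # Lipschitz bound
    X = np.zeros_like(D)                                 # coefficient maps
    for _ in range(n_iter):
        Rf = (Df * np.fft.fft(X, axis=1)).sum(axis=0) - Sf  # residual
        G = np.fft.ifft(np.conj(Df) * Rf).real           # gradient per map
        X = X - step * G
        X = np.sign(X) * np.maximum(np.abs(X) - step * lam, 0.0)  # shrink
    return X

rng = np.random.default_rng(0)
s = rng.standard_normal(256)
D = np.zeros((4, 256)); D[:, :16] = rng.standard_normal((4, 16))
X = csc_ista(s, D)      # sparse coefficient maps, one per filter
```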

  19. Multithreaded implicitly dealiased convolutions

    NASA Astrophysics Data System (ADS)

    Roberts, Malcolm; Bowman, John C.

    2018-03-01

    Implicit dealiasing is a method for computing in-place linear convolutions via fast Fourier transforms that decouples work memory from input data. It offers easier memory management and, for long one-dimensional input sequences, greater efficiency than conventional zero-padding. Furthermore, for convolutions of multidimensional data, the segregation of data and work buffers can be exploited to reduce memory usage and execution time significantly. This is accomplished by processing and discarding data as it is generated, allowing work memory to be reused, for greater data locality and performance. A multithreaded implementation of implicit dealiasing that accepts an arbitrary number of input and output vectors and a general multiplication operator is presented, along with an improved one-dimensional Hermitian convolution that avoids the loop dependency inherent in previous work. An alternate data format that can accommodate a Nyquist mode and enhance cache efficiency is also proposed.
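
    For contrast, the conventional explicit zero-padding that implicit dealiasing improves on can be written in a few lines: padding each length-N input to 2N before the FFT removes the aliasing that a circular (unpadded) transform would introduce. A minimal numpy sketch:

```python
# Baseline that implicit dealiasing improves on: explicitly zero-padded
# FFT convolution of two length-N sequences.
import numpy as np

def padded_convolution(f, g):
    N = len(f)
    F = np.fft.fft(f, 2 * N)        # explicit zero-padding to 2N
    G = np.fft.fft(g, 2 * N)
    return np.fft.ifft(F * G)[:N].real   # first N terms, now alias-free

f, g = np.random.randn(64), np.random.randn(64)
assert np.allclose(padded_convolution(f, g), np.convolve(f, g)[:64])
```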

  20. Detecting atrial fibrillation by deep convolutional neural networks.

    PubMed

    Xia, Yong; Wulan, Naren; Wang, Kuanquan; Zhang, Henggui

    2018-02-01

    Atrial fibrillation (AF) is the most common cardiac arrhythmia. The incidence of AF increases with age, causing high risks of stroke and increased morbidity and mortality. Efficient and accurate diagnosis of AF based on the ECG is valuable in clinical settings and remains challenging. In this paper, we proposed a novel method with high reliability and accuracy for AF detection via deep learning. The short-term Fourier transform (STFT) and stationary wavelet transform (SWT) were used to analyze ECG segments to obtain two-dimensional (2-D) matrix input suitable for deep convolutional neural networks. Then, two different deep convolutional neural network models corresponding to STFT output and SWT output were developed. Our new method did not require detection of P or R peaks, nor feature designs for classification, in contrast to existing algorithms. Finally, the performances of the two models were evaluated and compared with those of existing algorithms. Our proposed method demonstrated favorable performances on ECG segments as short as 5 s. The deep convolutional neural network using input generated by STFT presented a sensitivity of 98.34%, specificity of 98.24% and accuracy of 98.29%. For the deep convolutional neural network using input generated by SWT, a sensitivity of 98.79%, specificity of 97.87% and accuracy of 98.63% were achieved. The proposed method using deep convolutional neural networks shows high sensitivity, specificity and accuracy, and, therefore, is a valuable tool for AF detection. Copyright © 2017 Elsevier Ltd. All rights reserved.
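
    A minimal sketch of the preprocessing step, turning a short ECG segment into a 2-D time-frequency matrix with the STFT, might look as follows; the sampling rate and window parameters are assumptions, not the paper's settings.

```python
# Sketch: convert a 5-s ECG segment into a 2-D time-frequency input for
# a CNN using the STFT (scipy). Sampling rate and window are assumed.
import numpy as np
from scipy.signal import stft

fs = 300                                   # assumed sampling rate (Hz)
ecg = np.random.randn(5 * fs)              # stand-in for a 5-s ECG segment

f, t, Z = stft(ecg, fs=fs, nperseg=128, noverlap=64)
spectrogram = np.abs(Z)                    # 2-D matrix: (freq bins, frames)
cnn_input = spectrogram[np.newaxis, np.newaxis]   # (batch, channel, H, W)
```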

  1. Effect of lower limb compression on blood flow and performance in elite wheelchair rugby athletes

    PubMed Central

    Vaile, Joanna; Stefanovic, Brad; Askew, Christopher D.

    2016-01-01

    Objective To investigate the effects of compression socks worn during exercise on performance and physiological responses in elite wheelchair rugby athletes. Design In a non-blinded randomized crossover design, participants completed two exercise trials (4 × 8 min bouts of submaximal exercise, each finishing with a timed maximal sprint) separated by 24 hr, with or without compression socks. Setting National Sports Training Centre, Queensland, Australia. Participants Ten national representative male wheelchair rugby athletes with cervical spinal cord injuries volunteered to participate. Interventions Participants wore medical grade compression socks on both legs during the exercise task (COMP), and during the control trial no compression was worn (CON). Outcome Measures The efficacy of the compression socks was determined by assessments of limb blood flow, core body temperature, heart rate, and ratings of perceived exertion, perceived thermal strain, and physical performance. Results While no significant differences between conditions were observed for maximal sprint time, average lap time was better maintained in COMP compared to CON (P<0.05). Lower limb blood flow increased from pre- to post-exercise by the same magnitude in both conditions (COMP: 2.51 ± 2.34; CON: 2.20 ± 1.85 ml·100 ml−1·min−1), whereas there was a greater increase in upper limb blood flow pre- to post-exercise in COMP (10.77 ± 8.24 ml·100 ml−1·min−1) compared to CON (6.21 ± 5.73 ml·100 ml−1·min−1; P < 0.05). Conclusion These findings indicate that compression socks worn during exercise are an effective intervention for maintaining submaximal performance during wheelchair exercise, and this performance benefit may be associated with an augmentation of upper limb blood flow. PMID:25582434

  2. Instructions to "put the phone down" do not improve the quality of bystander initiated dispatcher-assisted cardiopulmonary resuscitation.

    PubMed

    Brown, Todd B; Saini, Devashish; Pepper, Tracy; Mirza, Muzna; Nandigam, Hari Krishna; Kaza, Niroop; Cofield, Stacey S

    2008-02-01

    The quality of early bystander CPR appears important in maximizing survival. This trial tests whether explicit instructions to "put the phone down" improve the quality of bystander initiated dispatch-assisted CPR. In a randomized, double-blinded, controlled trial, subjects were randomized to a modified version of the Medical Priority Dispatch System (MPDS) version 11.2 protocol or a simplified protocol, each with or without instruction to "put the phone down" during CPR. Data were recorded from a Laerdal Resusci Anne Skillreporter manikin. A simulated emergency medical dispatcher, contacted by cell phone, delivered standardized instructions. Primary outcome measures included chest compression rate, depth, and the proportion of compressions without error, with correct hand position, adequate depth, and total release. Time was measured in two distinct ways: time required for initiation of CPR and total amount of time hands were off the chest during CPR. Proportions were analyzed by Wilcoxon rank sum tests and time variables with ANOVA. All tests used a two-sided alpha-level of 0.05. Two hundred and fifteen subjects were randomized: 107 to the "put the phone down" instruction group and 108 to the group without "put the phone down" instructions. The groups were comparable across demographic and experiential variables. The additional instruction to "put the phone down" had no effect on the proportion of compressions administered without error, with the correct depth, and with the correct hand position. Likewise, "put the phone down" did not affect the average compression depth, the average compression rate, the total hands-off-chest time, or the time to initiate chest compressions. A statistically significant, yet trivial, effect was found in the proportion of compressions with total release of the chest wall. Instructions to "put the phone down" had no effect on the quality of bystander initiated dispatcher-assisted CPR in this trial.

  3. A rare cause of spinal cord compression: spinal epidural arachnoid cyst (report of three cases)

    PubMed Central

    El Saqui, Abderrazzak; Aggouri, Mohamed; Benzagmout, Mohamed; Chakour, Khalid; Chaoui, Mohamed El Faiz

    2017-01-01

    The spinal epidural arachnoid cyst (SEAC) is a benign lesion whose pathophysiology remains uncertain. It is most often asymptomatic but can cause severe neurological sequelae, especially when treatment is not started in time. We report the experience of the Neurosurgery Department of Hassan II University Hospital, Fès, in the management of SEAC through a retrospective analysis of three cases: two male patients and one woman, with a mean age of 35 years (range: 16 to 56 years), all admitted with progressive spinal cord compression. All patients underwent spinal MRI, which showed an epidural fluid collection with the same signal as CSF, compressing the adjacent spinal cord; the collection was thoracic in all cases. All patients were operated on via a posterior approach, with excision of the cyst and ligation of its neck in two cases and dural repair in one case. Histopathological examination concluded an arachnoid cyst. The postoperative course was favorable in all cases. The aim of this work is to review this condition, emphasizing the need for early management given the tendency toward progressive worsening in the absence of appropriate therapy, and to recall its clinical, paraclinical, and therapeutic features. PMID:28533855

  4. Magnetic resonance direct thrombus imaging of the evolution of acute deep vein thrombosis of the leg.

    PubMed

    Westerbeek, R E; Van Rooden, C J; Tan, M; Van Gils, A P G; Kok, S; De Bats, M J; De Roos, A; Huisman, M V

    2008-07-01

    Accurate diagnosis of acute recurrent deep vein thrombosis (DVT) is relevant to avoid improper diagnosis and unnecessary life-long anticoagulant treatment. Compression ultrasound has high accuracy for a first episode of DVT, but is often unreliable in suspected recurrent disease. Magnetic resonance direct thrombus imaging (MR DTI) has been shown to accurately detect acute DVT. The purpose of this prospective study was to determine the MR signal change during 6 months of follow-up in patients with acute DVT. Forty-three consecutive patients with a first episode of acute DVT demonstrated by compression ultrasound were included. All patients underwent MR DTI. Follow-up was performed with MR DTI and compression ultrasound at 3 and 6 months, respectively. All data were coded, stored and assessed by two blinded observers. MR direct thrombus imaging identified acute DVT in 41 of 43 patients (sensitivity 95%). There was no abnormal MR signal in controls, or in the contralateral extremity of patients with DVT (specificity 100%). In none of the 39 patients available at 6 months of follow-up was the abnormal MR signal seen at the initial acute DVT still observed, whereas in 12 of these patients (30.8%) compression ultrasound remained abnormal. Magnetic resonance direct thrombus imaging thus normalizes over a period of 6 months in all patients with diagnosed DVT, while compression ultrasound remains abnormal in a third of these patients. MR DTI may potentially allow accurate detection in patients with suspected acute recurrent DVT, and this should be studied prospectively.

  5. Supraorbital keyhole surgery for optic nerve decompression and dura repair.

    PubMed

    Chen, Yuan-Hao; Lin, Shinn-Zong; Chiang, Yung-Hsiao; Ju, Da-Tong; Liu, Ming-Ying; Chen, Guann-Juh

    2004-07-01

    Supraorbital keyhole surgery is a limited surgical procedure with reduced traumatic manipulation of tissue and little time spent opening and closing wounds. We utilized the approach to treat head injury patients complicated with optic nerve compression and cerebrospinal fluid (CSF) leakage. Eleven cases of basal skull fracture complicated with optic nerve compression and/or CSF leakage were surgically treated at our department from February 1995 to June 1999. Six cases had primary optic nerve compression, four had CSF leakage, and one case involved both injuries. Supraorbital craniotomy was carried out using a keyhole-sized burr hole plus a small craniotomy of approximately 2 × 3 cm². The optic nerve was decompressed by removing the roof of the optic canal and the anterior clinoid process with high-speed drills. The dural defect was repaired with two pieces of fascia lata attached to both sides of the torn dura with tissue glue. The seven cases with optic nerve injury included five cases of total blindness and two of light perception before operation; vision improved in four cases. The CSF leakage was stopped successfully in all four cases without complication. As optic nerve compression and CSF leakage are skull base lesions, supraorbital keyhole surgery constitutes a suitable approach: it allows an anterior approach to the skull base and the treatment of both CSF leakage and optic nerve compression. Our results indicate that the supraorbital keyhole operation is a safe and effective method for preserving or improving vision and attenuating CSF leakage following injury.

  6. Effect of pre-straining on the evolution of material anisotropy in rolled magnesium alloy AZ31 sheet

    NASA Astrophysics Data System (ADS)

    Qiao, H.; Guo, X. Q.; Wu, P. D.

    2013-12-01

    The large strain Elastic Visco-Plastic Self-Consistent (EVPSC) model and the recently developed Twinning and De-Twinning (TDT) model are applied to study the mechanical behavior of rolled magnesium alloy AZ31 sheet. Three different specimen orientations with tilt angles of 0°, 45° and 90° between the rolling direction and longitudinal specimen axis are used to study the mechanical anisotropy under both uniaxial tension and compression. The effect of pre-strain in uniaxial compression along the rolling direction on subsequent uniaxial tension/compression along the three directions is also investigated. It is demonstrated that the twinning during pre-strain in compression and the detwinning in the subsequent deformation have a significant influence on the mechanical anisotropy. Numerical results are in good agreement with the experimental observations found in the literature.

  7. Beyond the Borders: A Partnership between U.S. and Mexican Schools for Students Who Are Visually Impaired. Practice Report

    ERIC Educational Resources Information Center

    Wood, Jackie; Poel, Elissa Wolfe

    2006-01-01

    Since 2002, the New Mexico School for the Blind and Visually Impaired (NMSBVI) in Alamogordo, New Mexico, has worked to create a partnership with the "Centro de Capacitacion para Invidentes" in Durango, Mexico, and the "Instituto de Asesoria y Apoyo para Ciegor" in Ciudad Juarez, Mexico. The purpose of this association was to…

  8. Compression Strength of Sulfur Concrete Subjected to Extreme Cold

    NASA Technical Reports Server (NTRS)

    Grugel, Richard N.

    2008-01-01

    Sulfur concrete cubes were cycled between liquid nitrogen and room temperature to simulate extreme exposure conditions. Subsequent compression testing showed the strength of cycled samples to be roughly five times less than that of non-cycled samples. Fracture surface examination showed de-bonding of the sulfur from the aggregate material in the cycled samples but not in the non-cycled ones. The large discrepancy found between the samples is attributed to the relative thermal properties of the materials constituting the concrete.

  9. Left atrium and pulmonary artery compression due to aortic aneurysm causing heart failure symptoms.

    PubMed

    Jorge, Antonio José Lagoeiro; Martins, Wolney de Andrade; Moutinho, Victor M; Rezende, Juliano M; Alves, Patricia Y; Villacorta, Humberto; Silveira, Pedro F; Couto, Antonio A

    2018-06-01

    Patients with thoracic aortic aneurysm (TAA) are mostly asymptomatic and TAA is rarely related to heart failure (HF). We report the case of an 80-year-old female patient, with type A TAA without dissection, with right pulmonary artery and left atrium compression, who presented with HF, preserved ejection fraction and acute pulmonary edema. Copyright © 2018 Sociedade Portuguesa de Cardiologia. Publicado por Elsevier España, S.L.U. All rights reserved.

  10. Off-resonance artifacts correction with convolution in k-space (ORACLE).

    PubMed

    Lin, Wei; Huang, Feng; Simonotto, Enrico; Duensing, George R; Reykowski, Arne

    2012-06-01

    Off-resonance artifacts hinder the wider applicability of echo-planar imaging and non-Cartesian MRI methods such as radial and spiral. In this work, a general and rapid method is proposed for off-resonance artifacts correction based on data convolution in k-space. The acquired k-space is divided into multiple segments based on their acquisition times. Off-resonance-induced artifact within each segment is removed by applying a convolution kernel, which is the Fourier transform of an off-resonance correcting spatial phase modulation term. The field map is determined from the inverse Fourier transform of a basis kernel, which is calibrated from data fitting in k-space. The technique was demonstrated in phantom and in vivo studies for radial, spiral and echo-planar imaging datasets. For radial acquisitions, the proposed method allows the self-calibration of the field map from the imaging data, when an alternating view-angle ordering scheme is used. An additional advantage for off-resonance artifacts correction based on data convolution in k-space is the reusability of convolution kernels to images acquired with the same sequence but different contrasts. Copyright © 2011 Wiley-Liss, Inc.
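
    The identity the method builds on is that multiplying an image by a spatial phase-modulation term is equivalent to convolving its k-space data with the Fourier transform of that term. A 1-D numpy check of this equivalence, with an arbitrary quadratic phase standing in for off-resonance:

```python
# 1-D numpy check of the image-space/k-space duality: modulating the
# image by a phase term equals circularly convolving its k-space data
# with the DFT of that term (divided by n). Phase term is illustrative.
import numpy as np

n = 128
img = np.random.randn(n)
phase = np.exp(1j * 0.05 * np.arange(n) ** 2 / n)   # off-resonance-like term

X, P = np.fft.fft(img), np.fft.fft(phase)
# circular convolution of X with P, computed via nested FFTs
kspace_conv = np.fft.ifft(np.fft.fft(X) * np.fft.fft(P)) / n
kspace_direct = np.fft.fft(img * phase)             # modulate, then transform
assert np.allclose(kspace_conv, kspace_direct)
```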

  11. Automated Detection of Obstructive Sleep Apnea Events from a Single-Lead Electrocardiogram Using a Convolutional Neural Network.

    PubMed

    Urtnasan, Erdenebayar; Park, Jong-Uk; Joo, Eun-Yeon; Lee, Kyoung-Joung

    2018-04-23

    In this study, we propose a method for the automated detection of obstructive sleep apnea (OSA) from a single-lead electrocardiogram (ECG) using a convolutional neural network (CNN). A CNN model was designed with six optimized convolution layers including activation, pooling, and dropout layers. One-dimensional (1D) convolution, rectified linear units (ReLU), and max pooling were applied to the convolution, activation, and pooling layers, respectively. For training and evaluation of the CNN model, a single-lead ECG dataset was collected from 82 subjects with OSA and was divided into training (data from 63 patients with 34,281 events) and testing (data from 19 patients with 8,571 events) datasets. Using this CNN model, a precision of 0.99, a recall of 0.99, and an F1-score of 0.99 were attained on the training dataset; these values were all 0.96 when the CNN was applied to the testing dataset. These results show that the proposed CNN model can be used to detect OSA accurately on the basis of a single-lead ECG. Ultimately, this CNN model may be used as a screening tool for those suspected to suffer from OSA.
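
    The layer recipe described above (1-D convolution, ReLU activation, max pooling, dropout) can be sketched in PyTorch as follows; two blocks instead of the paper's six, and all sizes are illustrative assumptions.

```python
# Sketch of a 1-D CNN for single-lead ECG windows (PyTorch). Channel
# counts, kernel sizes, and the 1000-sample window are assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool1d(2), nn.Dropout(0.25),
    nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool1d(2), nn.Dropout(0.25),
    nn.Flatten(),
    nn.Linear(32 * 250, 2),            # apnea event vs. normal
)
scores = model(torch.randn(8, 1, 1000))   # 8 ECG windows -> shape (8, 2)
```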

  12. Convolutional virtual electric field for image segmentation using active contours.

    PubMed

    Wang, Yuanquan; Zhu, Ce; Zhang, Jiawan; Jian, Yuden

    2014-01-01

    Gradient vector flow (GVF) is an effective external force for active contours; however, it suffers from a heavy computational load. The virtual electric field (VEF) model, which can be implemented in real time using the fast Fourier transform (FFT), was later proposed as a remedy for the GVF model. In this work, we present an extension of the VEF model, referred to as the CONvolutional Virtual Electric Field (CONVEF) model. The proposed CONVEF model treats the VEF model as a convolution operation and employs a modified distance in the convolution kernel. The CONVEF model is also closely related to the vector field convolution (VFC) model. Compared with the GVF, VEF and VFC models, the CONVEF model possesses not only desirable properties of those models, such as enlarged capture range, U-shape concavity convergence, subject contour convergence and initialization insensitivity, but also other interesting properties such as G-shape concavity convergence, separation of neighboring objects, and noise suppression with simultaneous weak-edge preservation. Meanwhile, the CONVEF model can also be implemented in real time using the FFT. Experimental results illustrate these advantages of the CONVEF model on both synthetic and natural images.
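
    A rough sketch of computing a CONVEF-style external force, convolving the edge map with a vector-valued kernel built from a modified distance, via FFT-based convolution; the kernel exponent and smoothing constant are illustrative assumptions, not the paper's exact kernel.

```python
# Sketch of a convolution-based external force for active contours:
# convolve an edge map with a vector kernel via FFT. h and eps assumed.
import numpy as np
from scipy.signal import fftconvolve

def convef_force(edge_map, h=2.0, eps=1.0, n=31):
    y, x = np.mgrid[-(n // 2):n // 2 + 1, -(n // 2):n // 2 + 1]
    r = np.sqrt(x ** 2 + y ** 2 + eps)          # "modified" distance
    kx, ky = x / r ** h, y / r ** h             # vector-valued kernel
    fx = fftconvolve(edge_map, kx, mode='same')
    fy = fftconvolve(edge_map, ky, mode='same')
    return fx, fy                               # external force field

edges = np.zeros((128, 128)); edges[40:90, 40] = 1.0   # toy edge map
fx, fy = convef_force(edges)
```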

  13. Application of wavelet packet transform to compressing Raman spectra data

    NASA Astrophysics Data System (ADS)

    Chen, Chen; Peng, Fei; Cheng, Qinghua; Xu, Dahai

    2008-12-01

    The wavelet transform has become established, along with the Fourier transform, as a data-processing method in analytical fields; its main applications relate to de-noising, compression, variable reduction, and signal suppression. Raman spectroscopy (RS) is characterized by frequency shifts that carry information about the molecule. Every substance has its own characteristic Raman spectrum, from which the structure, components, concentrations and other properties of a sample can be analyzed, making RS a powerful analytical tool for detection and identification. There are many RS databases, but Raman spectra require substantial storage space and long search times. In this paper, the wavelet packet transform is chosen to compress Raman spectra of some benzene-series compounds. The obtained results show that the energy retained is as high as 99.9% after compression, while the percentage of zeroed coefficients is 87.50%. It is concluded that the wavelet packet transform is of significant value for compressing RS data.
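
    A minimal PyWavelets sketch of the compression scheme, decompose into a wavelet packet tree, zero the small coefficients, and reconstruct, is given below; the wavelet, level, and threshold are illustrative assumptions, not the paper's settings.

```python
# Wavelet-packet compression sketch with PyWavelets: decompose, zero
# small coefficients, reconstruct, and report the energy retained.
import numpy as np
import pywt

spectrum = np.random.rand(1024)            # stand-in for a Raman spectrum

wp = pywt.WaveletPacket(spectrum, wavelet='db4', mode='symmetric', maxlevel=4)
for node in wp.get_level(4, order='natural'):
    c = node.data
    c[np.abs(c) < 0.1 * np.abs(c).max()] = 0.0   # zero small coefficients
    node.data = c

compressed = wp.reconstruct(update=False)[:spectrum.size]
energy_retained = np.sum(compressed ** 2) / np.sum(spectrum ** 2)
```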

  14. Improved convolutional coding

    NASA Technical Reports Server (NTRS)

    Doland, G. D.

    1970-01-01

    Convolutional coding, used to upgrade digital data transmission under adverse signal conditions, has been improved by a method which ensures data transitions, permitting bit synchronizer operation at lower signal levels. The method also increases decoding ability by removing an ambiguous condition.

  15. Nonlinear Algorithms for Channel Equalization and Map Symbol Detection.

    NASA Astrophysics Data System (ADS)

    Giridhar, K.

    The transfer of information through a communication medium invariably results in various kinds of distortion to the transmitted signal. In this dissertation, a feed-forward neural network-based equalizer and a family of maximum a posteriori (MAP) symbol detectors are proposed for signal recovery in the presence of intersymbol interference (ISI) and additive white Gaussian noise. The proposed neural network-based equalizer employs a novel bit-mapping strategy to handle multilevel data signals in an equivalent bipolar representation. It uses a training procedure to learn the channel characteristics, and at the end of training, the multilevel symbols are recovered from the corresponding inverse bit-mapping. When the channel characteristics are unknown and no training sequences are available, blind estimation of the channel (or its inverse) and simultaneous data recovery is required. Convergence properties of several existing Bussgang-type blind equalization algorithms are studied through computer simulations, and a unique gain-independent approach is used to obtain a fair comparison of their rates of convergence. Although simple to implement, the slow convergence of these Bussgang-type blind equalizers makes them unsuitable for many high data-rate applications. Rapidly converging blind algorithms based on the principle of MAP symbol-by-symbol detection are proposed, which adaptively estimate the channel impulse response (CIR) and simultaneously decode the received data sequence. Assuming a linear and Gaussian measurement model, the near-optimal blind MAP symbol detector (MAPSD) consists of a parallel bank of conditional Kalman channel estimators, where the conditioning is done on each possible data subsequence that can convolve with the CIR. This algorithm is also extended to the recovery of convolutionally encoded waveforms in the presence of ISI. Since the complexity of the MAPSD algorithm increases exponentially with the length of the assumed CIR, a suboptimal decision-feedback mechanism is introduced to truncate the channel memory "seen" by the MAPSD section. Also, simpler gradient-based updates for the channel estimates, and a metric pruning technique, are used to further reduce the MAPSD complexity. Spatial diversity MAP combiners are developed to enhance the error rate performance and combat channel fading. As a first application of the MAPSD algorithm, dual-mode recovery techniques for TDMA (time-division multiple access) mobile radio signals are presented. Combined estimation of the symbol timing and the multipath parameters is proposed, using an auxiliary extended Kalman filter during the training cycle, with tracking of the fading parameters performed during the data cycle using the blind MAPSD algorithm. For the second application, a single-input receiver is employed to jointly recover cochannel narrowband signals. Assuming known channels, this two-stage joint MAPSD (JMAPSD) algorithm is compared to the optimal joint maximum likelihood sequence estimator and to the joint decision-feedback detector. A blind MAPSD algorithm for the joint recovery of cochannel signals is also presented. Computer simulation results are provided to quantify the performance of the various algorithms proposed in this dissertation.
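
    As a concrete instance of the Bussgang-type blind equalizers discussed above, the constant modulus algorithm (CMA) adapts an FIR equalizer from the received signal alone. A minimal sketch, with an assumed channel, noise level, and step size:

```python
# Constant modulus algorithm (CMA): a Bussgang-type blind equalizer.
# Channel taps, noise level, step size, and tap count are assumptions.
import numpy as np

rng = np.random.default_rng(1)
symbols = rng.choice([-1.0, 1.0], size=5000)        # BPSK source
channel = np.array([1.0, 0.4, 0.2])                 # unknown ISI channel
received = np.convolve(symbols, channel)[:symbols.size]
received += 0.01 * rng.standard_normal(symbols.size)

n_taps, mu, R = 11, 1e-3, 1.0                       # R: constant-modulus target
w = np.zeros(n_taps); w[n_taps // 2] = 1.0          # center-spike initialization
for k in range(n_taps, received.size):
    x = received[k - n_taps:k][::-1]                # regressor (newest first)
    y = w @ x                                       # equalizer output
    w -= mu * y * (y ** 2 - R) * x                  # stochastic gradient step
```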

  16. Design of convolutional tornado code

    NASA Astrophysics Data System (ADS)

    Zhou, Hui; Yang, Yao; Gao, Hongmin; Tan, Lu

    2017-09-01

    As a linear block code, the traditional tornado (tTN) code is inefficient in burst-erasure environment and its multi-level structure may lead to high encoding/decoding complexity. This paper presents a convolutional tornado (cTN) code which is able to improve the burst-erasure protection capability by applying the convolution property to the tTN code, and reduce computational complexity by abrogating the multi-level structure. The simulation results show that cTN code can provide a better packet loss protection performance with lower computation complexity than tTN code.

  17. Performance Evaluation of UHF Fading Satellite Channel by Simulation for Different Modulation Schemes

    DTIC Science & Technology

    1992-12-01

    Only OCR fragments of this record survive: a standard disclaimer ("The views expressed in this thesis are those of the author and do not reflect the official policy or position of the Department of Defense or the US...") and pieces of the simulation's MATLAB code, including v = cncd(2,1,6,G64,u,zeros(1,12)); % Convolutional encoding, mm = bm(2,v); % Binary to M-ary conversion, mm = inter(50,200,mm); % Interleaving (50..., save result err, and the header of the encoder function, B. CNCD.M (CONVOLUTIONAL ENCODER FUNCTION): function [v,vr] = cncd(n,k,m,Gr,u,r) % CONVOLUTIONAL ENCODER % Paul H. Moose % Naval...

  18. Time history solution program, L225 (TEV126). Volume 1: Engineering and usage

    NASA Technical Reports Server (NTRS)

    Kroll, R. I.; Tornallyay, A.; Clemmons, R. E.

    1979-01-01

    Volume 1 of a two volume document is presented. The usage of the convolution program L225 (TEV 126) is described. The program calculates the time response of a linear system by convoluting the impulsive response function with the time-dependent excitation function. The convolution is performed as a multiplication in the frequency domain. Fast Fourier transform techniques are used to transform the product back into the time domain to obtain response time histories. A brief description of the analysis used is presented.

  19. Transfer Function Bounds for Partial-unit-memory Convolutional Codes Based on Reduced State Diagram

    NASA Technical Reports Server (NTRS)

    Lee, P. J.

    1984-01-01

    The performance of a coding system consisting of a convolutional encoder and a Viterbi decoder is analytically found by the well-known transfer function bounding technique. For the partial-unit-memory byte-oriented convolutional encoder with m0 binary memory cells and k0 (k0 > m0) inputs, a state diagram of 2^k0 states was required for the transfer function bound. A reduced state diagram of (2^m0 + 1) states is used for easy evaluation of transfer function bounds for partial-unit-memory codes.

  20. A Projection of the Characteristics of Group 4 Facsimile Equipment.

    DTIC Science & Technology

    1981-02-01

    Only OCR fragments of this record survive: garbled French text on pulse-compression matched filters (a filter that does not modify the phase; "filtre adapté"), DD Form 1473 security-classification boilerplate, a note that the equipment would "...be used on the public telephone network...", a remark that some elements "...are not necessarily visible to host computers attached to the network", and a glossary fragment: "Datagram: A finite length packet of data together with destination host..."

  1. Blind compressive sensing dynamic MRI

    PubMed Central

    Lingala, Sajan Goud; Jacob, Mathews

    2013-01-01

    We propose a novel blind compressive sensing (BCS) framework to recover dynamic magnetic resonance images from undersampled measurements. This scheme models the dynamic signal as a sparse linear combination of temporal basis functions, chosen from a large dictionary. In contrast to classical compressed sensing, the BCS scheme simultaneously estimates the dictionary and the sparse coefficients from the undersampled measurements. Apart from the sparsity of the coefficients, the key difference of the BCS scheme from current low-rank methods is the non-orthogonal nature of the dictionary basis functions. Since the number of degrees of freedom of the BCS model is smaller than that of the low-rank methods, it provides improved reconstructions at high acceleration rates. We formulate the reconstruction as a constrained optimization problem; the objective function is the linear combination of a data consistency term and a sparsity-promoting ℓ1 prior of the coefficients. The Frobenius norm dictionary constraint is used to avoid scale ambiguity. We introduce a simple and efficient majorize-minimize algorithm, which decouples the original criterion into three simpler subproblems. An alternating minimization strategy is used, where we cycle through the minimization of the three simpler problems. This algorithm is seen to be considerably faster than approaches that alternate between sparse coding and dictionary estimation, as well as an extension of the K-SVD dictionary learning scheme. The use of the ℓ1 penalty and Frobenius norm dictionary constraint enables the attenuation of insignificant basis functions compared to the ℓ0 norm and column norm constraint assumed in most dictionary learning algorithms; this is especially important since the number of basis functions that can be reliably estimated is restricted by the available measurements. We also observe that the proposed scheme is more robust to local minima compared to the K-SVD method, which relies on greedy sparse coding. Our phase transition experiments demonstrate that the BCS scheme provides much better recovery rates than classical Fourier-based CS schemes, while being only marginally worse than the dictionary-aware setting. Since the overhead in additionally estimating the dictionary is low, this method can be very useful in dynamic MRI applications, where the signal is not sparse in known dictionaries. We demonstrate the utility of the BCS scheme in accelerating contrast-enhanced dynamic data. We observe superior reconstruction performance with the BCS scheme in comparison to existing low-rank and compressed sensing schemes. PMID:23542951
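
    A generic sketch of the alternating strategy, a soft-threshold step on the sparse coefficients followed by a gradient step on the dictionary with a Frobenius-norm rescaling, is shown below. The matrix sizes and the single gradient step per cycle are illustrative simplifications, not the authors' majorize-minimize implementation.

```python
# Generic alternating-minimization sketch for a BCS-style model X ~ D C:
# l1 soft-threshold on coefficients, gradient step on the dictionary,
# then a Frobenius-norm rescaling. Sizes and steps are assumptions.
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(0)
X = rng.standard_normal((64, 200))     # data matrix (e.g., voxels x frames)
D = rng.standard_normal((64, 30))      # overcomplete, non-orthogonal dictionary
C = np.zeros((30, 200))                # sparse coefficients
lam, step = 0.1, 1e-3

for _ in range(100):
    C = soft(C - step * D.T @ (D @ C - X), step * lam)   # sparse-coding step
    D -= step * (D @ C - X) @ C.T                        # dictionary step
    D *= np.sqrt(D.shape[1]) / np.linalg.norm(D)         # fixed Frobenius norm
```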

  2. [Pediatric orbital emphysema caused by a compressed-air pistol shot: a case report].

    PubMed

    Navarro-Mingorance, A; Reyes-Dominguez, S B; León-León, M C

    2014-09-01

    We report the case of a 2-year-old child with orbital emphysema secondary to a compressed-air gun shot to the malar region, with no evidence of orbital wall fracture. Conservative treatment was applied, and no complications were observed. Orbital emphysema in the absence of an orbital wall fracture is a rare situation; it is usually seen in facial trauma associated with damage to the adjacent paranasal sinuses or facial bones. To our knowledge there have been very few reports of orbital emphysema caused by a compressed-air injury. Copyright © 2012 Sociedad Española de Oftalmología. Published by Elsevier Espana. All rights reserved.

  3. Ultrafast compression of graphite observed with sub-ps time resolution diffraction on LCLS

    NASA Astrophysics Data System (ADS)

    Armstrong, Michael; Goncharov, A.; Crowhurst, J.; Zaug, J.; Radousky, H.; Grivickas, P.; Bastea, S.; Goldman, N.; Stavrou, E.; Belof, J.; Gleason, A.; Lee, H. J.; Nagler, R.; Holtgrewe, N.; Walter, P.; Pakaprenka, V.; Nam, I.; Granados, E.; Presher, C.; Koroglu, B.

    2017-06-01

    We will present ps time resolution pulsed x-ray diffraction measurements of rapidly compressed highly oriented pyrolytic graphite along its basal plane at the Materials under Extreme Conditions (MEC) sector of the Linac Coherent Light Source (LCLS). These experiments explore the possibility of rapid (<100 ps time scale) material transformations occurring under very highly anisotropic compression conditions. Under such conditions, non-equilibrium mechanisms may play a role in the transformation process. We will present experimental results and simulations which explore this possibility. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Security, LLC under Contract DE-AC52-07NA27344.

  4. Simulation of ICD-9 to ICD-10-CM Transition for Family Medicine: Simple or Convoluted?

    PubMed

    Grief, Samuel N; Patel, Jesal; Kochendorfer, Karl M; Green, Lee A; Lussier, Yves A; Li, Jianrong; Burton, Michael; Boyd, Andrew D

    2016-01-01

    The objective of this study was to examine the impact of the transition from the International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM), to the International Classification of Diseases, 10th Revision, Clinical Modification (ICD-10-CM), on family medicine and to identify areas where additional training might be required. Family medicine ICD-9-CM codes were obtained from an Illinois Medicaid data set (113,000 patient visits and $5.5 million in claims). Using the science of networks, we evaluated each ICD-9-CM code used by family medicine physicians to determine whether the transition was simple or convoluted. A simple transition is defined as 1 ICD-9-CM code mapping to 1 ICD-10-CM code, or 1 ICD-9-CM code mapping to multiple ICD-10-CM codes. A convoluted transition is one where the transitions between coding systems are nonreciprocal and complex, with multiple codes whose definitions become intertwined. Three family medicine physicians evaluated the most frequently encountered complex mappings for clinical accuracy. Of the 1635 diagnosis codes used by family medicine physicians, 70% of the codes were categorized as simple, 27% of codes were convoluted, and 3% had no mapping. For the visits, 75%, 24%, and 1% corresponded with simple, convoluted, and no mapping, respectively. Payment for submitted claims was similarly aligned. Of the frequently encountered convoluted codes, 3 diagnosis codes were clinically incorrect, but they represent only <0.1% of the overall diagnosis codes. The transition to ICD-10-CM is simple for 70% or more of diagnosis codes, visits, and reimbursement for a family medicine physician. However, some frequently used codes for disease management are convoluted and incorrect, and additional resources need to be invested in these to ensure a successful transition to ICD-10-CM. © Copyright 2016 by the American Board of Family Medicine.

  5. Simulation of ICD-9 to ICD-10-CM transition for family medicine: simple or convoluted?

    PubMed Central

    Grief, Samuel N.; Patel, Jesal; Lussier, Yves A.; Li, Jianrong; Burton, Michael; Boyd, Andrew D.

    2017-01-01

    Objectives The objective of this study was to examine the impact of the transition from the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) to the International Classification of Diseases, Tenth Revision, Clinical Modification (ICD-10-CM) on family medicine and to identify areas where additional training might be required. Methods Family medicine ICD-9-CM codes were obtained from an Illinois Medicaid data set (113,000 patient visits and $5.5 million in claims). Using the science of networks we evaluated each ICD-9-CM code used by family medicine physicians to determine if the transition was simple or convoluted. A simple translation is defined as one ICD-9-CM code mapping to one ICD-10-CM code or one ICD-9-CM code mapping to multiple ICD-10-CM codes. A convoluted transition is one where the transitions between coding systems are non-reciprocal and complex, with multiple codes whose definitions become intertwined. Three family medicine physicians evaluated the most frequently encountered complex mappings for clinical accuracy. Results Of the 1635 diagnosis codes used by the family medicine physicians, 70% of the codes were categorized as simple, 27% of the diagnosis codes were convoluted and 3% were found to have no mapping. For the visits, 75%, 24%, and 1% corresponded with simple, convoluted, and no mapping, respectively. Payment for submitted claims was similarly aligned. Of the frequently encountered convoluted codes, 3 diagnosis codes were clinically incorrect, but they represent only <0.1% of the overall diagnosis codes. Conclusions The transition to ICD-10-CM is simple for 70% or more of diagnosis codes, visits, and reimbursement for a family medicine physician. However, some frequently used codes for disease management are convoluted and incorrect, and additional resources need to be invested in these to ensure a successful transition to ICD-10-CM. PMID:26769875

  6. Error control techniques for satellite and space communications

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.

    1994-01-01

    Brief summaries of research in the following areas are presented: (1) construction of optimum geometrically uniform trellis codes; (2) a statistical approach to constructing convolutional code generators; and (3) calculating the exact performance of a convolutional code.

  7. a Novel Deep Convolutional Neural Network for Spectral-Spatial Classification of Hyperspectral Data

    NASA Astrophysics Data System (ADS)

    Li, N.; Wang, C.; Zhao, H.; Gong, X.; Wang, D.

    2018-04-01

    Spatial and spectral information are obtained simultaneously by hyperspectral remote sensing, and the joint extraction of this information is one of the most important approaches to hyperspectral image classification. In this paper, a novel deep convolutional neural network (CNN) is proposed which correctly extracts the spectral-spatial information of hyperspectral images. The proposed model not only learns sufficient knowledge from a limited number of samples, but also has powerful generalization ability. The proposed framework, based on three-dimensional convolution, can extract spectral-spatial features of labeled samples effectively. Though CNNs have shown robustness to distortion, they cannot extract features at different scales through a traditional pooling layer that has only one size of pooling window. Hence, spatial pyramid pooling (SPP) is introduced into the three-dimensional local convolutional filters for hyperspectral classification. Experimental results with a widely used hyperspectral remote sensing dataset show that the proposed model provides competitive performance.
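
    A minimal PyTorch sketch of the three-dimensional convolution at the heart of such models, with the kernel sliding jointly over the spectral axis and the two spatial axes, follows; the patch shape and channel counts are assumptions, and a single adaptive pooling layer stands in for the multi-scale SPP stage.

```python
# 3-D convolution over a hyperspectral patch (PyTorch). The kernel spans
# (bands, height, width); sizes are illustrative assumptions.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(3, 1, 1)), nn.ReLU(),
    nn.Conv3d(8, 16, kernel_size=(7, 3, 3), padding=(3, 1, 1)), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1),       # single-scale stand-in for the SPP stage
    nn.Flatten(),
    nn.Linear(16, 9),              # 9 land-cover classes (assumed)
)
patch = torch.randn(4, 1, 103, 7, 7)   # (batch, ch, bands, height, width)
print(net(patch).shape)                # -> torch.Size([4, 9])
```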

  8. Detection of prostate cancer on multiparametric MRI

    NASA Astrophysics Data System (ADS)

    Seah, Jarrel C. Y.; Tang, Jennifer S. N.; Kitchen, Andy

    2017-03-01

    In this manuscript, we describe our approach and methods to the ProstateX challenge, which achieved an overall AUC of 0.84 and the runner-up position. We train a deep convolutional neural network to classify lesions marked on multiparametric MRI of the prostate as clinically significant or not. We implement a novel addition to the standard convolutional architecture described as auto-windowing which is clinically inspired and designed to overcome some of the difficulties faced in MRI interpretation, where high dynamic ranges and low contrast edges may cause difficulty for traditional convolutional neural networks trained on high contrast natural imagery. We demonstrate that this system can be trained end to end and outperforms a similar architecture without such additions. Although a relatively small training set was provided, we use extensive data augmentation to prevent overfitting and transfer learning to improve convergence speed, showing that deep convolutional neural networks can be feasibly trained on small datasets.

  9. No-reference image quality assessment based on statistics of convolution feature maps

    NASA Astrophysics Data System (ADS)

    Lv, Xiaoxin; Qin, Min; Chen, Xiaohui; Wei, Guo

    2018-04-01

    We propose a Convolutional Feature Maps (CFM) driven approach to accurately predict image quality. Our motivation is based on the finding that Natural Scene Statistics (NSS) features computed on convolutional feature maps are significantly sensitive to the degree of distortion of an image. In our method, a Convolutional Neural Network (CNN) is trained to obtain kernels for generating the CFM. We design a forward NSS layer which operates on the CFM to better extract NSS features. The quality-aware features derived from the output of the NSS layer are effective in describing the type and degree of distortion an image has suffered. Finally, a Support Vector Regression (SVR) is employed in our No-Reference Image Quality Assessment (NR-IQA) model to predict the subjective quality score of a distorted image. Experiments conducted on two public databases demonstrate that the performance of the proposed method is competitive with state-of-the-art NR-IQA methods.

  10. Sensitivity Kernels for the Cross-Convolution Measure: Eliminate the Source in Waveform Tomography

    NASA Astrophysics Data System (ADS)

    Menke, W. H.

    2017-12-01

    We use the adjoint method to derive sensitivity kernels for the cross-convolution measure, a goodness-of-fit criterion that is applicable to seismic data containing closely-spaced multiple arrivals, such as reverberating compressional waves and split shear waves. In addition to a general formulation, specific expressions for sensitivity with respect to density, Lamé parameter and shear modulus are derived for an isotropic elastic solid. As is typical of adjoint methods, the kernels depend upon an adjoint field, the source of which, in this case, is the reference displacement field, pre-multiplied by a matrix of cross-correlations of components of the observed field. We use a numerical simulation to evaluate the resolving power of a tomographic inversion that employs the cross-convolution measure. The estimated resolution kernel is point-like, indicating that the cross-convolution measure will perform well in waveform tomography settings.

  11. Transform push, oblique subduction resistance, and intraplate stress of the Juan de Fuca plate

    USGS Publications Warehouse

    Wang, K.; He, J.; Davis, E.E.

    1997-01-01

    The Juan de Fuca plate is a small oceanic plate between the Pacific and North America plates. In the southernmost region, referred to as the Gorda deformation zone, the maximum compressive stress σ1, constrained by earthquake focal mechanisms, is N-S. Off Oregon, and possibly off Washington, NW-trending left-lateral faults cutting the Juan de Fuca plate indicate a σ1 in a NE-SW to E-W direction. The magnitude of differential stress increases from north to south; this is inferred from the plastic yielding and distribution of earthquakes throughout the Gorda deformation zone. To understand how tectonic forces determine the stress field of the Juan de Fuca plate, we have modeled the intraplate stress using both elastic and elastic-perfectly plastic plane-stress finite element models. We conclude that the right-lateral shear motion of the Pacific and North America plates is primarily responsible for the stress pattern of the Juan de Fuca plate. The most important roles are played by a compressional force normal to the Mendocino transform fault, a result of the northward push by the Pacific plate, and a horizontal resistance operating against the northward, or margin-parallel, component of oblique subduction. Margin-parallel subduction resistance results in large N-S compression in the Gorda deformation zone because the force is integrated over the full length of the Cascadia subduction zone. The Mendocino transform fault serves as a strong buttress that is very weak in shear but capable of transmitting large strike-normal compressive stresses. Internal failure of the Gorda deformation zone potentially places limits on the magnitude of the fault-normal stresses being transmitted and correspondingly on the magnitude of strike-parallel subduction resistance. Transform faults and oblique subduction zones in other parts of the world can be expected to transmit and create stresses in the same manner. Copyright 1997 by the American Geophysical Union.

  12. Dimensionality-varied convolutional neural network for spectral-spatial classification of hyperspectral data

    NASA Astrophysics Data System (ADS)

    Liu, Wanjun; Liang, Xuejian; Qu, Haicheng

    2017-11-01

    Hyperspectral image (HSI) classification is one of the most popular topics in the remote sensing community. Traditional and deep learning-based classification methods have been proposed constantly in recent years. In order to improve the classification accuracy and robustness, a dimensionality-varied convolutional neural network (DVCNN) was proposed in this paper. DVCNN was a novel deep architecture based on the convolutional neural network (CNN). The input of DVCNN was a set of 3D patches selected from the HSI which contained spectral-spatial joint information. In the following feature extraction process, each patch was transformed into several different 1D vectors by 3D convolution kernels, which were able to extract features from spectral-spatial data. The rest of DVCNN was much the same as a general CNN and processed the 2D matrix constituted by all the 1D vectors, so that DVCNN could not only extract more accurate and richer features than a CNN, but also fuse spectral-spatial information to improve classification accuracy. Moreover, the robustness of the network on water-absorption bands was enhanced in the process of spectral-spatial fusion by 3D convolution, and the calculation was simplified by dimensionality-varied convolution. Experiments were performed on both the Indian Pines and Pavia University scene datasets, and the results showed that the classification accuracy of DVCNN improved by 32.87% on Indian Pines and 19.63% on Pavia University scene compared with a spectral-only CNN. The maximum accuracy improvement of DVCNN was 13.72% compared with other state-of-the-art HSI classification methods, and the robustness of DVCNN to water-absorption band noise was demonstrated.
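
    A minimal sketch of the dimensionality-varied idea described above, assuming a 3D kernel that spans the full spatial window of a patch so that each kernel reduces the patch to a 1D spectral vector; all sizes are illustrative:

        # Sketch: 3D patches -> 1D vectors -> 2D matrix for a standard CNN.
        import numpy as np
        from scipy.signal import fftconvolve

        def patch_to_1d(patch, kernel):
            """patch: (h, w, bands); kernel: (h, w, kb) -> 1D vector."""
            return fftconvolve(patch, kernel, mode='valid').ravel()

        rng = np.random.default_rng(0)
        patch = rng.random((5, 5, 200))              # spectral-spatial patch
        kernels = [rng.standard_normal((5, 5, 7)) for _ in range(16)]
        matrix_2d = np.stack([patch_to_1d(patch, k) for k in kernels])
        print(matrix_2d.shape)    # (16, 194): input to the 2D CNN stages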

  13. Inertial Confinement Fusion as an Extreme Example of Dynamic Compression

    NASA Astrophysics Data System (ADS)

    Moses, E.

    2013-06-01

    Initiating and controlling thermonuclear burn at the National Ignition Facility (NIF) will require the manipulation of matter to extreme energy densities. We will discuss recent advances in both controlling the dynamic compression of ignition targets and our understanding of the physical states and processes leading to ignition. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory in part under Contract W-7405-Eng-48 and in part under Contract DE-AC52-07NA27344.

  14. Translations on Narcotics and Dangerous Drugs, Number 283

    DTIC Science & Technology

    1977-02-03

    Contents include: Drug Campaign in Tamaulipas Pledged (EL SOL DE MEXICO, 27 Dec 76); Police Uncover Drug Ring Operated From Penitentiary (various sources, various dates); One in Four in Culiacan Is an Addict; Culiacan Police Turn Blind Eye to Drug Traffic (EL SOL DE MEXICO, various dates); Reassigned Drug Campaign Coordinator Reviews Six-Month Campaign Results (Ciudad Juarez EL FRONTERIZO).

  15. Simultaneous Tumor Segmentation, Image Restoration, and Blur Kernel Estimation in PET Using Multiple Regularizations

    PubMed Central

    Li, Laquan; Wang, Jian; Lu, Wei; Tan, Shan

    2016-01-01

    Accurate tumor segmentation from PET images is crucial in many radiation oncology applications. Among others, the partial volume effect (PVE) is recognized as one of the most important factors degrading imaging quality and segmentation accuracy in PET. Taking into account that image restoration and tumor segmentation are tightly coupled and can promote each other, we proposed a variational method to solve both problems simultaneously in this study. The proposed method integrated total variation (TV) semi-blind de-convolution and Mumford-Shah segmentation with multiple regularizations. Unlike many existing energy minimization methods using either TV or L2 regularization, the proposed method employed TV regularization over tumor edges to preserve edge information, and L2 regularization inside tumor regions to preserve the smooth change of the metabolic uptake in a PET image. The blur kernel was modeled as an anisotropic Gaussian to address the resolution difference in the transverse and axial directions commonly seen in a clinical PET scanner. The energy functional was rephrased using the Γ-convergence approximation and was iteratively optimized using the alternating minimization (AM) algorithm. The performance of the proposed method was validated on a physical phantom and two clinical datasets with non-Hodgkin's lymphoma and esophageal cancer, respectively. Experimental results demonstrated that the proposed method had high performance for simultaneous image restoration, tumor segmentation and scanner blur kernel estimation. Particularly, the recovery coefficients (RC) of the restored images of the proposed method in the phantom study were close to 1, indicating an efficient recovery of the original blurred images; for segmentation the proposed method achieved average dice similarity indexes (DSIs) of 0.79 and 0.80 for the two clinical datasets, respectively; and the relative errors of the estimated blur kernel widths were less than 19% in the transverse direction and 7% in the axial direction. PMID:28603407
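
    Only the forward blur model lends itself to a compact sketch: the scanner point-spread function is an anisotropic Gaussian with a different width in the transverse and axial directions. The widths below are assumptions for illustration; the actual method estimates them jointly with the image and the contour.

        # Sketch: anisotropic Gaussian blur model for a PET volume.
        import numpy as np
        from scipy.ndimage import gaussian_filter

        sigma_transverse, sigma_axial = 2.0, 3.5     # assumed widths (voxels)
        rng = np.random.default_rng(0)
        true_image = rng.random((32, 64, 64))        # (axial, y, x) volume
        blurred = gaussian_filter(
            true_image,
            sigma=(sigma_axial, sigma_transverse, sigma_transverse))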

  16. Development of an LSI maximum-likelihood convolutional decoder for advanced forward error correction capability on the NASA 30/20 GHz program

    NASA Technical Reports Server (NTRS)

    Clark, R. T.; Mccallister, R. D.

    1982-01-01

    The particular coding option identified as providing the best level of coding gain performance in an LSI-efficient implementation was the optimal constraint length five, rate one-half convolutional code. To determine the specific set of design parameters which optimally matches this decoder to the LSI constraints, a breadboard MCD (maximum-likelihood convolutional decoder) was fabricated and used to generate detailed performance trade-off data. The extensive performance testing data gathered during this design tradeoff study are summarized, and the functional and physical MCD chip characteristics are presented.
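
    For reference, a rate-1/2, constraint-length-5 encoder fits in a few lines. The generator pair (23, 35) in octal is a commonly cited optimal choice for K = 5; whether it matches the code used on the MCD chip is an assumption.

        # Sketch: rate-1/2, K=5 convolutional encoder.
        G = [0o23, 0o35]                  # generator polynomials (octal)
        K = 5

        def conv_encode(bits):
            state, out = 0, []
            for b in bits:
                state = ((state << 1) | b) & ((1 << K) - 1)
                for g in G:
                    out.append(bin(state & g).count('1') & 1)  # parity of taps
            return out

        print(conv_encode([1, 0, 1, 1]))  # two coded bits per input bit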

  17. A unitary convolution approximation for the impact-parameter dependent electronic energy loss

    NASA Astrophysics Data System (ADS)

    Schiwietz, G.; Grande, P. L.

    1999-06-01

    In this work, we propose a simple method to calculate the impact-parameter dependence of the electronic energy loss of bare ions for all impact parameters. This perturbative convolution approximation (PCA) is based on first-order perturbation theory, and thus, it is only valid for fast particles with low projectile charges. Using Bloch's stopping-power result and a simple scaling, we get rid of the restriction to low charge states and derive the unitary convolution approximation (UCA). Results of the UCA are then compared with full quantum-mechanical coupled-channel calculations for the impact-parameter dependent electronic energy loss.

  18. Coordinated design of coding and modulation systems

    NASA Technical Reports Server (NTRS)

    Massey, J. L.; Ancheta, T.; Johannesson, R.; Lauer, G.; Lee, L.

    1976-01-01

    The joint optimization of the coding and modulation systems employed in telemetry systems was investigated. Emphasis was placed on formulating inner and outer coding standards used by the Goddard Space Flight Center. Convolutional codes were found that are nearly optimum for use with Viterbi decoding in the inner coding of concatenated coding systems. A convolutional code, the unit-memory code, was discovered and is ideal for inner system usage because of its byte-oriented structure. Simulations of sequential decoding on the deep-space channel were carried out to compare directly various convolutional codes that are proposed for use in deep-space systems.

  19. On the application of a fast polynomial transform and the Chinese remainder theorem to compute a two-dimensional convolution

    NASA Technical Reports Server (NTRS)

    Truong, T. K.; Lipes, R.; Reed, I. S.; Wu, C.

    1980-01-01

    A fast algorithm is developed to compute two-dimensional convolutions of an array of d1 x d2 complex-number points, where d2 = 2^m and d1 = 2^(m-r+1) for some 1 ≤ r ≤ m. This algorithm requires fewer multiplications and about the same number of additions as the conventional fast Fourier transform method for computing the two-dimensional convolution. It also has the advantage that the operation of transposing the matrix of data can be avoided.
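
    The conventional FFT route that the algorithm is compared against is worth stating concretely: a 2D cyclic convolution is an element-wise product of 2D FFTs. A minimal sketch (array sizes are illustrative powers of two):

        # Sketch: 2D cyclic convolution via the convolution theorem.
        import numpy as np

        d1, d2 = 8, 16
        rng = np.random.default_rng(0)
        x = rng.standard_normal((d1, d2)) + 1j * rng.standard_normal((d1, d2))
        h = rng.standard_normal((d1, d2)) + 1j * rng.standard_normal((d1, d2))

        y = np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(h))

        # Check one sample against the direct definition (indices mod sizes).
        m = n = 3
        direct = sum(x[i, j] * h[(m - i) % d1, (n - j) % d2]
                     for i in range(d1) for j in range(d2))
        assert np.isclose(y[m, n], direct)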

  20. Cascaded K-means convolutional feature learner and its application to face recognition

    NASA Astrophysics Data System (ADS)

    Zhou, Daoxiang; Yang, Dan; Zhang, Xiaohong; Huang, Sheng; Feng, Shu

    2017-09-01

    Currently, considerable efforts have been devoted to devising image representations. However, handcrafted methods need strong domain knowledge and show low generalization ability, and conventional feature learning methods require enormous training data and rich parameter-tuning experience. A lightweight feature learner is presented to solve these problems, with application to face recognition, which shares a similar topology with a convolutional neural network. Our model is divided into three components: a cascaded convolution filter bank learning layer, a nonlinear processing layer, and a feature pooling layer. Specifically, in the filter learning layer, we use K-means to learn convolution filters. Features are extracted by convolving images with the learned filters. Afterward, in the nonlinear processing layer, the hyperbolic tangent is employed to capture the nonlinear feature. In the feature pooling layer, to remove redundant information and incorporate the spatial layout, we exploit a multilevel spatial pyramid second-order pooling technique to pool the features in subregions and concatenate them together as the final representation. Extensive experiments on four representative datasets demonstrate the effectiveness and robustness of our model to various variations, yielding competitive recognition results on extended Yale B and FERET. In addition, our method achieves the best identification performance on AR and labeled faces in the wild datasets among the comparative methods.
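
    A minimal sketch of the filter-learning stage under stated assumptions: K-means centroids of normalized image patches serve as convolution filters, followed by the hyperbolic-tangent nonlinearity (patch size and filter count are illustrative, and the pyramid pooling stage is omitted):

        # Sketch: K-means filter bank + tanh nonlinearity.
        import numpy as np
        from scipy.signal import convolve2d
        from sklearn.cluster import KMeans

        def learn_filters(images, patch=7, n_filters=16, n_samples=2000, seed=0):
            rng = np.random.default_rng(seed)
            patches = []
            for _ in range(n_samples):
                img = images[rng.integers(len(images))]
                r = rng.integers(img.shape[0] - patch)
                c = rng.integers(img.shape[1] - patch)
                p = img[r:r + patch, c:c + patch].ravel()
                patches.append((p - p.mean()) / (p.std() + 1e-6))
            km = KMeans(n_clusters=n_filters, n_init=4, random_state=seed)
            km.fit(np.stack(patches))
            return km.cluster_centers_.reshape(n_filters, patch, patch)

        rng = np.random.default_rng(0)
        images = [rng.random((32, 32)) for _ in range(10)]
        filters = learn_filters(images)
        feature_maps = [np.tanh(convolve2d(images[0], f, mode='valid'))
                        for f in filters]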

  1. Efficient Modeling of Gravity Fields Caused by Sources with Arbitrary Geometry and Arbitrary Density Distribution

    NASA Astrophysics Data System (ADS)

    Wu, Leyuan

    2018-01-01

    We present a brief review of gravity forward algorithms in the Cartesian coordinate system, including both space-domain and Fourier-domain approaches, after which we introduce a truly general and efficient algorithm, namely the convolution-type Gauss fast Fourier transform (Conv-Gauss-FFT) algorithm, for 2D and 3D modeling of the gravity potential and its derivatives due to sources with arbitrary geometry and arbitrary density distribution which are defined either by discrete or by continuous functions. The Conv-Gauss-FFT algorithm is based on the combined use of a hybrid rectangle-Gaussian grid and the fast Fourier transform (FFT) algorithm. Since the gravity forward problem in the Cartesian coordinate system can be expressed as continuous convolution-type integrals, we first approximate the continuous convolution by a weighted sum of a series of shifted discrete convolutions, and then each shifted discrete convolution, which is essentially a Toeplitz system, is calculated efficiently and accurately by combining circulant embedding with the FFT algorithm. Synthetic and real model tests show that the Conv-Gauss-FFT algorithm can obtain high-precision forward results very efficiently for almost any practical model, and it works especially well for complex 3D models when gravity fields on large 3D regular grids are needed.
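
    The core numerical trick, embedding a Toeplitz (linear-convolution) system in a circulant one that the FFT diagonalizes exactly, can be sketched in 1D; the 2D/3D cases follow with fftn on zero-padded grids. This is a generic illustration, not the authors' code.

        # Sketch: exact linear convolution via circulant embedding + FFT.
        import numpy as np

        def linear_convolution_fft(kernel, signal):
            n = len(kernel) + len(signal) - 1   # circulant embedding size
            K = np.fft.rfft(kernel, n)
            S = np.fft.rfft(signal, n)
            return np.fft.irfft(K * S, n)

        rng = np.random.default_rng(0)
        k, s = rng.standard_normal(64), rng.standard_normal(256)
        assert np.allclose(linear_convolution_fft(k, s), np.convolve(k, s))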

  2. A convolutional neural network to filter artifacts in spectroscopic MRI.

    PubMed

    Gurbani, Saumya S; Schreibmann, Eduard; Maudsley, Andrew A; Cordova, James Scott; Soher, Brian J; Poptani, Harish; Verma, Gaurav; Barker, Peter B; Shim, Hyunsuk; Cooper, Lee A D

    2018-03-09

    Proton MRSI is a noninvasive modality capable of generating volumetric maps of in vivo tissue metabolism without the need for ionizing radiation or injected contrast agent. Magnetic resonance spectroscopic imaging has been shown to be a viable imaging modality for studying several neuropathologies. However, a key hurdle in the routine clinical adoption of MRSI is the presence of spectral artifacts that can arise from a number of sources, possibly leading to false information. A deep learning model was developed that was capable of identifying and filtering out poor quality spectra. The core of the model used a tiled convolutional neural network that analyzed frequency-domain spectra to detect artifacts. When compared with a panel of MRS experts, our convolutional neural network achieved high sensitivity and specificity with an area under the curve of 0.95. A visualization scheme was implemented to better understand how the convolutional neural network made its judgement on single-voxel or multivoxel MRSI, and the convolutional neural network was embedded into a pipeline capable of producing whole-brain spectroscopic MRI volumes in real time. The fully automated method for assessment of spectral quality provides a valuable tool to support clinical MRSI or spectroscopic MRI studies for use in fields such as adaptive radiation therapy planning. © 2018 International Society for Magnetic Resonance in Medicine.

  3. Dynamic XRD, Shock and Static Compression of CaF2

    NASA Astrophysics Data System (ADS)

    Kalita, Patricia; Specht, Paul; Root, Seth; Sinclair, Nicholas; Schuman, Adam; White, Melanie; Cornelius, Andrew; Smith, Jesse; Sinogeikin, Stanislav

    2017-06-01

    The high-pressure behavior of CaF2 is probed with x-ray diffraction (XRD) combined with both dynamic compression, using a two-stage light gas gun, and static compression, using diamond anvil cells. We use XRD to follow the unfolding of a shock-driven, fluorite to cotunnite phase transition, on the timescale of nanoseconds. The dynamic behavior of CaF2 under shock loading is contrasted with that under static compression. This work leverages experimental capabilities at the Advanced Photon Source: dynamic XRD and shock experiments at the Dynamic Compression Sector, as well as XRD and static compression in diamond anvil cells at the High-Pressure Collaborative Access Team. These experiments and cross-platform comparisons open the door to an unprecedented understanding of equations of state and phase transitions at the microstructural level and at different time scales, and will ultimately improve our capability to simulate the behavior of materials at extreme conditions. Sandia National Laboratories is a multi-mission laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  4. Diagnostic performance of dual-energy contrast-enhanced subtracted mammography in dense breasts compared to mammography alone: interobserver blind-reading analysis.

    PubMed

    Cheung, Yun-Chung; Lin, Yu-Ching; Wan, Yung-Liang; Yeow, Kee-Min; Huang, Pei-Chin; Lo, Yung-Feng; Tsai, Hsiu-Pei; Ueng, Shir-Hwa; Chang, Chee-Jen

    2014-10-01

    To analyse the diagnostic accuracy of dual-energy contrast-enhanced subtracted mammography (CESM) in dense breasts in comparison with conventional mammography (Mx) alone. CESM cases of dense breasts with histological proof were evaluated in the present study. Four radiologists with varying experience in mammography interpretation blindly read Mx first, followed by CESM. The diagnostic profiles, consistency and learning curve were analysed statistically. One hundred lesions (28 benign and 72 breast malignancies) in 89 females were analysed. Use of CESM improved the cancer diagnosis by 21.2 % in sensitivity (71.5 % to 92.7 %), by 16.1 % in specificity (51.8 % to 67.9 %) and by 19.8 % in accuracy (65.9 % to 85.8 %) compared with Mx. The interobserver diagnostic consistency was markedly higher using CESM than using Mx alone (0.6235 vs. 0.3869 using the kappa ratio). The probability of a correct prediction was elevated from 80 % to 90 % after 75 consecutive case readings. CESM provided additional information with consistent improvement of the cancer diagnosis in dense breasts compared to Mx alone. The prediction of the diagnosis could be improved by the interpretation of a significant number of cases in the presence of 6 % benign contrast enhancement in this study. • DE-CESM improves the cancer diagnosis in dense breasts compared with mammography. • DE-CESM shows greater consistency than mammography alone by interobserver blind reading. • Diagnostic improvement of DE-CESM is independent of the mammographic reading experience.

  5. Enhanced line integral convolution with flow feature detection

    DOT National Transportation Integrated Search

    1995-01-01

    Prepared ca. 1995. The Line Integral Convolution (LIC) method, which blurs white noise textures along a vector field, is an effective way to visualize overall flow patterns in a 2D domain [Cabral & Leedom '93]. The method produces a flow texture imag...

  6. "Push as hard as you can" instruction for telephone cardiopulmonary resuscitation: a randomized simulation study.

    PubMed

    van Tulder, Raphael; Roth, Dominik; Havel, Christof; Eisenburger, Philip; Heidinger, Benedikt; Chwojka, Christof Constantin; Novosad, Heinz; Sterz, Fritz; Herkner, Harald; Schreiber, Wolfgang

    2014-03-01

    The medical priority dispatch system (MPDS®) assists lay rescuers in protocol-driven telephone-assisted cardiopulmonary resuscitation (CPR). Our aim was to clarify which CPR instruction leads to sufficient compression depth. This was an investigator-blinded, randomized, parallel group, simulation study to investigate 10 min of chest compressions after the instruction "push down firmly 5 cm" vs. "push as hard as you can." The primary outcome was defined as compression depth. Secondary outcomes were participants' exertion measured by the Borg scale, the provider's systolic and diastolic blood pressure, and quality values measured by the skill-reporting program of the Resusci(®) Anne Simulator manikin. For the analysis of the primary outcome, we used a linear random intercept model to allow for the repeated measurements with the intervention as a covariate. Thirteen participants each were allocated to the control and intervention groups. One participant (intervention) dropped out after minute 7 because of exhaustion. The primary outcome showed a mean compression depth of 44.1 mm, with an inter-individual standard deviation (SDb) of 13.0 mm and an intra-individual standard deviation (SDw) of 6.7 mm for the control group, vs. 46.1 mm with an SDb of 9.0 mm and an SDw of 10.3 mm for the intervention group (difference: 1.9; 95% confidence interval -6.9 to 10.8; p = 0.66). Secondary outcomes showed no difference for exhaustion and CPR-quality values. There is no difference in compression depth, quality of CPR, or physical strain on lay rescuers using the initial instruction "push as hard as you can" vs. the standard MPDS(®) instruction "push down firmly 5 cm." Copyright © 2014 Elsevier Inc. All rights reserved.

  7. Magnetic resonance imaging validation of pituitary gland compression and distortion by typical sellar pathology.

    PubMed

    Cho, Charles H; Barkhoudarian, Garni; Hsu, Liangge; Bi, Wenya Linda; Zamani, Amir A; Laws, Edward R

    2013-12-01

    Identification of the normal pituitary gland is an important component of presurgical planning, defining many aspects of the surgical approach and facilitating normal gland preservation. Magnetic resonance imaging is a proven imaging modality for optimal soft-tissue contrast discrimination in the brain. This study is designed to validate the accuracy of localization of the normal pituitary gland with MRI in a cohort of surgical patients with pituitary mass lesions, and to evaluate for correlation between presurgical pituitary hormone values and pituitary gland characteristics on neuroimaging. Fifty-eight consecutive patients with pituitary mass lesions were included in the study. Anterior pituitary hormone levels were measured preoperatively in all patients. Video recordings from the endoscopic or microscopic surgical procedures were available for evaluation in 47 cases. Intraoperative identification of the normal gland was possible in 43 of 58 cases. Retrospective MR images were reviewed in a blinded fashion for the 43 cases, emphasizing the position of the normal gland and the extent of compression and displacement by the lesion. There was excellent agreement between imaging and surgery in 84% of the cases for normal gland localization, and in 70% for compression or noncompression of the normal gland. There was no consistent correlation between preoperative pituitary dysfunction and pituitary gland localization on imaging, gland identification during surgery, or pituitary gland compression. Magnetic resonance imaging proved to be accurate in identifying the normal gland in patients with pituitary mass lesions, and was useful for preoperative surgical planning.

  8. Aspirin in venous leg ulcer study (ASPiVLU): study protocol for a randomised controlled trial.

    PubMed

    Weller, Carolina D; Barker, Anna; Darby, Ian; Haines, Terrence; Underwood, Martin; Ward, Stephanie; Aldons, Pat; Dapiran, Elizabeth; Madan, Jason J; Loveland, Paula; Sinha, Sankar; Vicaretti, Mauro; Wolfe, Rory; Woodward, Michael; McNeil, John

    2016-04-11

    Venous leg ulceration is a common and costly problem that is expected to worsen as the population ages. Current treatment is compression therapy; however, up to 50 % of ulcers remain unhealed after 2 years, and ulcer recurrence is common. New treatments are needed to address those wounds that are more challenging to heal. Targeting the inflammatory processes present in venous ulcers is a possible strategy. Limited evidence suggests that a daily dose of aspirin may be an effective adjunct to aid ulcer healing and reduce recurrence. The Aspirin in Venous Leg Ulcer study (ASPiVLU) will investigate whether 300-mg oral doses of aspirin improve time to healing. This randomised, double-blinded, multicentre, placebo-controlled, clinical trial will recruit participants with venous leg ulcers from community settings and hospital outpatient wound clinics across Australia. Two hundred sixty-eight participants with venous leg ulcers will be randomised to receive either aspirin or placebo, in addition to compression therapy, for 24 weeks. The primary outcome is time to healing within 12 weeks. Secondary outcomes are ulcer recurrence, wound pain, quality of life and wellbeing, adherence to study medication, adherence to compression therapy, serum inflammatory markers, hospitalisations, and adverse events at 24 weeks. The ASPiVLU trial will investigate the efficacy and safety of aspirin as an adjunct to compression therapy to treat venous leg ulcers. Study completion is anticipated to occur in December 2018. Australian New Zealand Clinical Trials Registry, ACTRN12614000293662.

  9. Comparison between Frame-Constrained Fix-Pixel-Value and Frame-Free Spiking-Dynamic-Pixel ConvNets for Visual Processing

    PubMed Central

    Farabet, Clément; Paz, Rafael; Pérez-Carrasco, Jose; Zamarreño-Ramos, Carlos; Linares-Barranco, Alejandro; LeCun, Yann; Culurciello, Eugenio; Serrano-Gotarredona, Teresa; Linares-Barranco, Bernabe

    2012-01-01

    Most scene segmentation and categorization architectures for the extraction of features in images and patches make exhaustive use of 2D convolution operations for template matching, template search, and denoising. Convolutional Neural Networks (ConvNets) are one example of such architectures that can implement general-purpose bio-inspired vision systems. In standard digital computers 2D convolutions are usually expensive in terms of resource consumption and impose severe limitations for efficient real-time applications. Nevertheless, neuro-cortex inspired solutions, like dedicated Frame-Based or Frame-Free Spiking ConvNet Convolution Processors, are advancing real-time visual processing. These two approaches share the neural inspiration, but each of them solves the problem in different ways. Frame-Based ConvNets process frame-by-frame video information in a very robust and fast way that requires using and sharing the available hardware resources (such as multipliers and adders). Hardware resources are fixed and time-multiplexed by fetching data in and out. Thus memory bandwidth and size are important for good performance. On the other hand, spike-based convolution processors are a frame-free alternative that is able to perform convolution of a spike-based source of visual information with very low latency, which makes them ideal for very high-speed applications. However, hardware resources need to be available all the time and cannot be time-multiplexed. Thus, hardware should be modular, reconfigurable, and expansible. Hardware implementations in both VLSI custom integrated circuits (digital and analog) and FPGA have already been used to demonstrate the performance of these systems. In this paper we present a comparison study of these two neuro-inspired solutions. A brief description of both systems is presented, along with a discussion of their differences, pros and cons. PMID:22518097

  10. Comparison between Frame-Constrained Fix-Pixel-Value and Frame-Free Spiking-Dynamic-Pixel ConvNets for Visual Processing.

    PubMed

    Farabet, Clément; Paz, Rafael; Pérez-Carrasco, Jose; Zamarreño-Ramos, Carlos; Linares-Barranco, Alejandro; Lecun, Yann; Culurciello, Eugenio; Serrano-Gotarredona, Teresa; Linares-Barranco, Bernabe

    2012-01-01

    Most scene segmentation and categorization architectures for the extraction of features in images and patches make exhaustive use of 2D convolution operations for template matching, template search, and denoising. Convolutional Neural Networks (ConvNets) are one example of such architectures that can implement general-purpose bio-inspired vision systems. In standard digital computers 2D convolutions are usually expensive in terms of resource consumption and impose severe limitations for efficient real-time applications. Nevertheless, neuro-cortex inspired solutions, like dedicated Frame-Based or Frame-Free Spiking ConvNet Convolution Processors, are advancing real-time visual processing. These two approaches share the neural inspiration, but each of them solves the problem in different ways. Frame-Based ConvNets process frame-by-frame video information in a very robust and fast way that requires using and sharing the available hardware resources (such as multipliers and adders). Hardware resources are fixed and time-multiplexed by fetching data in and out. Thus memory bandwidth and size are important for good performance. On the other hand, spike-based convolution processors are a frame-free alternative that is able to perform convolution of a spike-based source of visual information with very low latency, which makes them ideal for very high-speed applications. However, hardware resources need to be available all the time and cannot be time-multiplexed. Thus, hardware should be modular, reconfigurable, and expansible. Hardware implementations in both VLSI custom integrated circuits (digital and analog) and FPGA have already been used to demonstrate the performance of these systems. In this paper we present a comparison study of these two neuro-inspired solutions. A brief description of both systems is presented, along with a discussion of their differences, pros and cons.

  11. [SGLT2 inhibitors: a new therapeutic class for the treatment of type 2 diabetes mellitus].

    PubMed

    Dagan, Amir; Dagan, Bracha; Segal, Gad

    2015-03-01

    SGLT2 (sodium-glucose co-transporter 2) inhibitors are a new group of oral medications for the treatment of type 2 diabetes mellitus patients. These medications interfere with the process of glucose reabsorption in the proximal convoluted tubules of the kidneys, thereby increasing both glucose and water diuresis. SGLT2 inhibitors were found to be effective in lowering HbA1c levels in double-blinded studies, both as monotherapy and in combination with other oral hypoglycemic medications of various other mechanisms of action. SGLT2 inhibitors are not a risk factor for hypoglycemia and are suitable for combination with insulin therapy. Their unique mode of action, relying on glomerular filtration, makes these medications unsuitable as treatment for type 2 diabetes patients who also suffer from moderate to severe renal failure. Their main adverse effects are an increased risk of urinary and genital tract infections. The following review describes the relevant pathophysiology addressed by these novel medications, the evidence for their efficacy, and the safety profile of SGLT2 inhibitors.

  12. The decoding of majority-multiplexed signals by means of dyadic convolution

    NASA Astrophysics Data System (ADS)

    Losev, V. V.

    1980-09-01

    The maximum likelihood method often cannot be used for the decoding of majority-multiplexed signals because of the large number of computations required. This paper describes a fast dyadic convolution transform which can be used to reduce the number of computations.
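
    Dyadic convolution replaces index subtraction with bitwise XOR, and the Walsh-Hadamard transform (WHT) diagonalizes it exactly as the Fourier transform diagonalizes cyclic convolution. A minimal sketch of this generic construction (not necessarily the paper's exact algorithm):

        # Sketch: fast dyadic convolution, (f (*) g)[k] = sum_i f[i] g[k XOR i],
        # in O(N log N) via the fast Walsh-Hadamard transform.
        import numpy as np

        def fwht(a):
            """Unnormalized fast Walsh-Hadamard transform."""
            a = a.copy()
            h = 1
            while h < len(a):
                for i in range(0, len(a), 2 * h):
                    x, y = a[i:i + h].copy(), a[i + h:i + 2 * h].copy()
                    a[i:i + h], a[i + h:i + 2 * h] = x + y, x - y
                h *= 2
            return a

        def dyadic_convolution(f, g):
            n = len(f)                        # must be a power of two
            return fwht(fwht(f) * fwht(g)) / n

        rng = np.random.default_rng(0)
        f, g = rng.standard_normal(8), rng.standard_normal(8)
        direct = np.array([sum(f[i] * g[k ^ i] for i in range(8))
                           for k in range(8)])
        assert np.allclose(dyadic_convolution(f, g), direct)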

  13. Directional Radiometry and Radiative Transfer: the Convoluted Path From Centuries-old Phenomenology to Physical Optics

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.

    2014-01-01

    This Essay traces the centuries-long history of the phenomenological disciplines of directional radiometry and radiative transfer in turbid media, discusses their fundamental weaknesses, and outlines the convoluted process of their conversion into legitimate branches of physical optics.

  14. Lossless Compression of Stromatolite Images: A Biogenicity Index?

    NASA Astrophysics Data System (ADS)

    Corsetti, Frank A.; Storrie-Lombardi, Michael C.

    2003-12-01

    It has been underappreciated that inorganic processes can produce stromatolites (laminated macroscopic constructions commonly attributed to microbiological activity), thus calling into question the long-standing use of stromatolites as de facto evidence for ancient life. Using lossless compression on unmagnified reflectance red-green-blue (RGB) images of matched stromatolite-sediment matrix pairs as a complexity metric, the compressibility index (δc, the log of the ratio of the compressibility of the matrix versus the target) of a putative abiotic test stromatolite is significantly less than the δc of a putative biotic test stromatolite. There is a clear separation in δc between the different stromatolites discernible at the outcrop scale. In terms of absolute compressibility, the sediment matrix between the stromatolite columns was low in both cases, the putative abiotic stromatolite was similar to the intracolumnar sediment, and the putative biotic stromatolite was much greater (again discernible at the outcrop scale). We propose that this metric would be useful for evaluating the biogenicity of images obtained by the camera systems available on every Mars surface probe launched to date including Viking, Pathfinder, Beagle, and the two Mars Exploration Rovers.
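
    The metric is easy to reproduce in outline. A minimal sketch, assuming zlib as a stand-in for whatever lossless codec is used and synthetic grayscale stand-ins for the matrix and target images:

        # Sketch: compressibility index delta_c = log(C_matrix / C_target).
        import zlib
        import numpy as np

        def compressibility(img):
            raw = img.tobytes()
            return len(raw) / len(zlib.compress(raw, 9))

        def delta_c(matrix_img, target_img):
            return np.log(compressibility(matrix_img) /
                          compressibility(target_img))

        rng = np.random.default_rng(0)
        matrix = rng.integers(0, 256, (128, 128), dtype=np.uint8)  # noisy matrix
        target = np.repeat(np.arange(128, dtype=np.uint8)[:, None],
                           128, axis=1)                            # laminated
        print(delta_c(matrix, target))  # negative: target far more compressible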

  15. Lossless compression of stromatolite images: a biogenicity index?

    PubMed

    Corsetti, Frank A; Storrie-Lombardi, Michael C

    2003-01-01

    It has been underappreciated that inorganic processes can produce stromatolites (laminated macroscopic constructions commonly attributed to microbiological activity), thus calling into question the long-standing use of stromatolites as de facto evidence for ancient life. Using lossless compression on unmagnified reflectance red-green-blue (RGB) images of matched stromatolite-sediment matrix pairs as a complexity metric, the compressibility index (delta(c), the log of the ratio of the compressibility of the matrix versus the target) of a putative abiotic test stromatolite is significantly less than the delta(c) of a putative biotic test stromatolite. There is a clear separation in delta(c) between the different stromatolites discernible at the outcrop scale. In terms of absolute compressibility, the sediment matrix between the stromatolite columns was low in both cases, the putative abiotic stromatolite was similar to the intracolumnar sediment, and the putative biotic stromatolite was much greater (again discernible at the outcrop scale). We propose that this metric would be useful for evaluating the biogenicity of images obtained by the camera systems available on every Mars surface probe launched to date including Viking, Pathfinder, Beagle, and the two Mars Exploration Rovers.

  16. [Application of numerical convolution in in vivo/in vitro correlation research].

    PubMed

    Yue, Peng

    2009-01-01

    This paper introduced the concept and principle of in vivo/in vitro correlation (IVIVC) and convolution/deconvolution methods, and elucidated in detail the convolution strategy and method for calculating the in vivo absorption performance of pharmaceutics from their pharmacokinetic data in Excel, then carried the results forward to the IVIVC research. Firstly, the pharmacokinetic data were fitted by mathematical software to fill in missing points. Secondly, the parameters of the optimal fitted input function were determined by a trial-and-error method according to the convolution principle in Excel, under the hypothesis that all the input functions fit Weibull functions. Finally, the IVIVC between the in vivo input function and the in vitro dissolution was studied. In the examples, not only was the application of this method demonstrated in detail, but its simplicity and effectiveness were also proved by comparison with the compartment model method and the deconvolution method. It is shown to be a powerful tool for IVIVC research.
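
    A minimal numerical sketch of the convolution step (in Python rather than Excel, with an assumed Weibull input and a one-compartment unit impulse response; all parameter values are illustrative):

        # Sketch: predicted plasma profile = (input rate) convolved with UIR.
        import numpy as np

        dt = 0.1                                  # h, sampling interval
        t = np.arange(0, 24, dt)

        F = 1 - np.exp(-(t / 4.0) ** 1.2)         # Weibull fraction absorbed
        input_rate = np.gradient(F, dt)           # in vivo input rate

        uir = np.exp(-0.2 * t)                    # unit impulse response (assumed k_el)

        conc = np.convolve(input_rate, uir)[: len(t)] * dt   # c(t)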

  17. DeepFix: A Fully Convolutional Neural Network for Predicting Human Eye Fixations.

    PubMed

    Kruthiventi, Srinivas S S; Ayush, Kumar; Babu, R Venkatesh

    2017-09-01

    Understanding and predicting the human visual attention mechanism is an active area of research in the fields of neuroscience and computer vision. In this paper, we propose DeepFix, a fully convolutional neural network, which models the bottom-up mechanism of visual attention via saliency prediction. Unlike classical works, which characterize the saliency map using various hand-crafted features, our model automatically learns features in a hierarchical fashion and predicts the saliency map in an end-to-end manner. DeepFix is designed to capture semantics at multiple scales while taking global context into account, by using network layers with very large receptive fields. Generally, fully convolutional nets are spatially invariant; this prevents them from modeling location-dependent patterns (e.g., centre-bias). Our network handles this by incorporating a novel location-biased convolutional layer. We evaluate our model on multiple challenging saliency data sets and show that it achieves state-of-the-art results.
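
    A minimal sketch of a location-biased convolution in this spirit: fixed coordinate and centre-bias maps are concatenated to the incoming features so the following convolution can learn location-dependent patterns. The exact DeepFix layer is not reproduced here; the choice of maps is an assumption.

        # Sketch: location-biased convolution (PyTorch).
        import torch
        import torch.nn as nn

        class LocationBiasedConv(nn.Module):
            def __init__(self, in_ch, out_ch, size=64):
                super().__init__()
                ys = torch.linspace(-1, 1, size)
                xs = torch.linspace(-1, 1, size)
                yy, xx = torch.meshgrid(ys, xs, indexing='ij')
                centre = torch.exp(-(xx**2 + yy**2) / 0.5)   # centre-bias map
                self.register_buffer('bias_maps', torch.stack([xx, yy, centre]))
                self.conv = nn.Conv2d(in_ch + 3, out_ch, 3, padding=1)

            def forward(self, x):
                maps = self.bias_maps.expand(x.shape[0], -1, -1, -1)
                return self.conv(torch.cat([x, maps], dim=1))

        layer = LocationBiasedConv(16, 32, size=64)
        out = layer(torch.randn(2, 16, 64, 64))   # -> (2, 32, 64, 64)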

  18. Spatial and Time Domain Feature of ERP Speller System Extracted via Convolutional Neural Network.

    PubMed

    Yoon, Jaehong; Lee, Jungnyun; Whang, Mincheol

    2018-01-01

    Features of the event-related potential (ERP) are not completely understood, and the illiteracy problem remains unsolved. To date, the P300 peak has been used as the ERP feature in most brain-computer interface applications, but subjects who do not show such a peak are common. The recent development of convolutional neural networks provides a way to analyze the spatial and temporal features of the ERP. Here, we train a convolutional neural network with 2 convolutional layers whose feature maps represent the spatial and temporal features of the event-related potential. We found that nonilliterate subjects' ERPs show high correlation between the occipital lobe and the parietal lobe, whereas illiterate subjects only show correlation between neural activities from the frontal lobe and the central lobe. The nonilliterates showed peaks at P300, P500, and P700, whereas illiterates mostly showed peaks around P700. P700 was strong in both groups. We conclude that the P700 peak may be the key feature of the ERP, as it appears in both illiterate and nonilliterate subjects.

  19. Spatial and Time Domain Feature of ERP Speller System Extracted via Convolutional Neural Network

    PubMed Central

    2018-01-01

    Features of the event-related potential (ERP) are not completely understood, and the illiteracy problem remains unsolved. To date, the P300 peak has been used as the ERP feature in most brain–computer interface applications, but subjects who do not show such a peak are common. The recent development of convolutional neural networks provides a way to analyze the spatial and temporal features of the ERP. Here, we train a convolutional neural network with 2 convolutional layers whose feature maps represent the spatial and temporal features of the event-related potential. We found that nonilliterate subjects' ERPs show high correlation between the occipital lobe and the parietal lobe, whereas illiterate subjects only show correlation between neural activities from the frontal lobe and the central lobe. The nonilliterates showed peaks at P300, P500, and P700, whereas illiterates mostly showed peaks around P700. P700 was strong in both groups. We conclude that the P700 peak may be the key feature of the ERP, as it appears in both illiterate and nonilliterate subjects.

  20. Convolutional encoding of self-dual codes

    NASA Technical Reports Server (NTRS)

    Solomon, G.

    1994-01-01

    There exist almost complete convolutional encodings of self-dual codes, i.e., block codes of rate 1/2 with weights w, w = 0 mod 4. The codes are of length 8m with the convolutional portion of length 8m-2 and the nonsystematic information of length 4m-1. The last two bits are parity checks on the two (4m-1) length parity sequences. The final information bit complements one of the extended parity sequences of length 4m. Solomon and van Tilborg have developed algorithms to generate these for the Quadratic Residue (QR) Codes of lengths 48 and beyond. For these codes and reasonable constraint lengths, there are sequential decodings for both hard and soft decisions. There are also possible Viterbi-type decodings that may be simple, as in a convolutional encoding/decoding of the extended Golay Code. In addition, the previously found constraint length K = 9 for the QR (48, 24;12) Code is lowered here to K = 8.

  1. Mapping in-vivo optic nerve head strains caused by intraocular and intracranial pressures

    NASA Astrophysics Data System (ADS)

    Tran, H.; Grimm, J.; Wang, B.; Smith, M. A.; Gogola, A.; Nelson, S.; Tyler-Kabara, E.; Schuman, J.; Wollstein, G.; Sigal, I. A.

    2017-02-01

    Although it is well documented that abnormal levels of either intraocular (IOP) or intracranial pressure (ICP) can lead to potentially blinding conditions, such as glaucoma and papilledema, little is known about how the pressures actually affect the eye. Even less is known about the potential interplay between their effects, namely how the level of one pressure might alter the effects of the other. Our goal was to measure in-vivo the pressure-induced stretch and compression of the lamina cribrosa due to acute changes of IOP and ICP. The lamina cribrosa is a structure within the optic nerve head, in the back of the eye. It is important because it is in the lamina cribrosa that the pressure-induced deformations are believed to initiate damage to neural tissues leading to blindness. An eye of a rhesus macaque monkey was imaged in-vivo with optical coherence tomography while IOP and ICP were controlled through cannulas in the anterior chamber and lateral ventricle, respectively. The image volumes were analyzed with a newly developed digital image correlation technique. The effects of both pressures were highly localized, nonlinear and non-monotonic, with strong interactions. Pressure variations from the baseline normal levels caused substantial stretch and compression of the neural tissues in the posterior pole, sometimes exceeding 20%. Chronic exposure to such high levels of biomechanical insult would likely lead to neural tissue damage and loss of vision. Our results demonstrate the power of digital image correlation techniques based on non-invasive imaging technologies to help understand how pressures induce biomechanical insults and lead to vision problems.

  2. Spectral interpolation - Zero fill or convolution. [image processing

    NASA Technical Reports Server (NTRS)

    Forman, M. L.

    1977-01-01

    Zero fill, or augmentation by zeros, is a method used in conjunction with fast Fourier transforms to obtain spectral spacing at intervals closer than obtainable from the original input data set. In the present paper, an interpolation technique (interpolation by repetitive convolution) is proposed which yields values accurate enough for plotting purposes and which lie within the limits of calibration accuracies. The technique is shown to operate faster than zero fill, since fewer operations are required. The major advantages of interpolation by repetitive convolution are that efficient use of memory is possible (thus avoiding the difficulties encountered in decimation-in-time FFTs) and that it is easy to implement.
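
    The zero-fill baseline is compactly stated: appending zeros before the FFT samples the same underlying spectrum on a finer frequency grid, adding no new information. A minimal sketch (signal and padding factor are illustrative):

        # Sketch: zero-fill spectral interpolation.
        import numpy as np

        n, pad = 128, 4
        t = np.arange(n)
        signal = np.sin(2 * np.pi * 0.1237 * t)

        coarse = np.fft.rfft(signal)              # native bin spacing
        fine = np.fft.rfft(signal, n * pad)       # zero-filled, 4x denser

        # Every pad-th fine bin coincides with a coarse bin.
        assert np.allclose(fine[::pad], coarse)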

  3. A low-power, high-throughput maximum-likelihood convolutional decoder chip for NASA's 30/20 GHz program

    NASA Technical Reports Server (NTRS)

    Mccallister, R. D.; Crawford, J. J.

    1981-01-01

    It is pointed out that the NASA 30/20 GHz program will place in geosynchronous orbit a technically advanced communication satellite which can process time-division multiple access (TDMA) information bursts with a data throughput in excess of 4 Gbps. To guarantee acceptable data quality during periods of signal attenuation, it will be necessary to provide a significant forward error correction (FEC) capability. Convolutional decoding (utilizing maximum-likelihood techniques) was identified as the most attractive FEC strategy. Design trade-offs regarding a maximum-likelihood convolutional decoder (MCD) in a single-chip CMOS implementation are discussed.

  4. Handling of computational in vitro/in vivo correlation problems by Microsoft Excel: III. Convolution and deconvolution.

    PubMed

    Langenbucher, Frieder

    2003-11-01

    Convolution and deconvolution are the classical in-vitro-in-vivo correlation tools to describe the relationship between input and weighting/response in a linear system, where input represents the drug release in vitro and weighting/response any body response in vivo. While functional treatment, e.g. in terms of a polyexponential or Weibull distribution, is more appropriate for general survey or prediction, numerical algorithms are useful for treating actual experimental data. Deconvolution is not considered an algorithm on its own, but the inversion of a corresponding convolution. MS Excel is shown to be a useful tool for all these applications.
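
    Numerically, the deconvolution-as-inversion view amounts to solving a lower-triangular Toeplitz system. A minimal sketch (in Python rather than Excel; the weighting function and input are synthetic placeholders):

        # Sketch: deconvolution by inverting the convolution system r = W x.
        import numpy as np
        from scipy.linalg import toeplitz, solve_triangular

        dt = 0.25
        t = np.arange(0, 12, dt)
        weighting = np.exp(-0.3 * t)               # unit impulse response
        x_true = np.exp(-((t - 3) / 1.5) ** 2)     # unknown input
        response = np.convolve(weighting, x_true)[: len(t)] * dt

        W = toeplitz(weighting, np.zeros(len(t))) * dt   # lower triangular
        x_rec = solve_triangular(W, response, lower=True)
        assert np.allclose(x_rec, x_true)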

  5. A De-Identification Pipeline for Ultrasound Medical Images in DICOM Format.

    PubMed

    Monteiro, Eriksson; Costa, Carlos; Oliveira, José Luís

    2017-05-01

    Clinical data sharing between healthcare institutions, and between practitioners, is often hindered by privacy protection requirements. This problem is critical in collaborative scenarios where data sharing is fundamental for establishing a workflow among parties. The anonymization of patient information burned into DICOM images requires elaborate processes somewhat more complex than the simple de-identification of textual information. Usually, before sharing, there is a need for manual removal of specific areas containing sensitive information in the images. In this paper, we present a pipeline for ultrasound medical image de-identification, provided as a free anonymization REST service for medical image applications, and a Software-as-a-Service to streamline automatic de-identification of medical images, which is freely available for end-users. The proposed approach applies image processing functions and machine-learning models to bring about an automatic system to anonymize medical images. To perform character recognition, we evaluated several machine-learning models, with Convolutional Neural Networks (CNNs) selected as the best approach. To assess the system quality, 500 processed images were manually inspected, showing an anonymization rate of 89.2%. The tool can be accessed at https://bioinformatics.ua.pt/dicom/anonymizer and it is available with the most recent versions of Google Chrome, Mozilla Firefox and Safari. A Docker image containing the proposed service is also publicly available for the community.

  6. A semi-blind logo watermarking scheme for color images by comparison and modification of DFT coefficients

    NASA Astrophysics Data System (ADS)

    Kusyk, Janusz; Eskicioglu, Ahmet M.

    2005-10-01

    Digital watermarking is considered to be a major technology for the protection of multimedia data. Some of the important applications are broadcast monitoring, copyright protection, and access control. In this paper, we present a semi-blind watermarking scheme for embedding a logo in color images using the DFT domain. After computing the DFT of the luminance layer of the cover image, the magnitudes of DFT coefficients are compared, and modified. A given watermark is embedded in three frequency bands: Low, middle, and high. Our experiments show that the watermarks extracted from the lower frequencies have the best visual quality for low pass filtering, adding Gaussian noise, JPEG compression, resizing, rotation, and scaling, and the watermarks extracted from the higher frequencies have the best visual quality for cropping, intensity adjustment, histogram equalization, and gamma correction. Extractions from the fragmented and translated image are identical to extractions from the unattacked watermarked image. The collusion and rewatermarking attacks do not provide the hacker with useful tools.

  7. Abdominal textiloma: a case report

    PubMed Central

    Serghini, Issam; El Fikri, Abdelghani; Salim Lalaoui, Jaafar; Zoubir, Mohamed; Boui, Mohammed; Boughanem, Mohamed

    2011-01-01

    Textiloma is a very rare but well-known postoperative complication. It is a foreign body consisting of surgical compress(es) or drape(s) left in the operative site. The discovery of an abdominal textiloma is generally late. The patient history is therefore essential for diagnosis, since the clinical picture is inconclusive. Clinically, chronic transit disorders are associated with sub-occlusive syndromes, and plain abdominal radiography contributes little. Ultrasound is reliable. Computed tomography allows a precise topographic diagnosis. Some teams propose MRI exploration. We report a case of intra-abdominal textiloma in a female patient operated on 6 months earlier for a uterine fibroid. PMID:22355422

  8. Acral melanoma detection using a convolutional neural network for dermoscopy images.

    PubMed

    Yu, Chanki; Yang, Sejung; Kim, Wonoh; Jung, Jinwoong; Chung, Kee-Yang; Lee, Sang Wook; Oh, Byungho

    2018-01-01

    Acral melanoma is the most common type of melanoma in Asians, and usually results in a poor prognosis due to late diagnosis. We applied a convolutional neural network to dermoscopy images of acral melanoma and benign nevi on the hands and feet and evaluated its usefulness for the early diagnosis of these conditions. A total of 724 dermoscopy images comprising acral melanoma (350 images from 81 patients) and benign nevi (374 images from 194 patients), all confirmed by histopathological examination, were analyzed in this study. To perform the 2-fold cross-validation, we split them into two mutually exclusive subsets: half of the total image dataset was selected for training and the rest for testing, and we calculated the accuracy of diagnosis, comparing it with the dermatologist's and non-expert's evaluations. The accuracy (percentage of true positives and true negatives among all images) of the convolutional neural network was 83.51% and 80.23%, which was higher than the non-expert's evaluation (67.84%, 62.71%) and close to that of the expert (81.08%, 81.64%). Moreover, the convolutional neural network showed area-under-the-curve values of 0.80 and 0.84 and Youden's index values of 0.6795 and 0.6073, which were similar to the scores of the expert. Although further data analysis is necessary to improve their accuracy, convolutional neural networks would be helpful to detect acral melanoma from dermoscopy images of the hands and feet.

  9. Accelerating Convolutional Sparse Coding for Curvilinear Structures Segmentation by Refining SCIRD-TS Filter Banks.

    PubMed

    Annunziata, Roberto; Trucco, Emanuele

    2016-11-01

    Deep learning has shown great potential for curvilinear structure (e.g., retinal blood vessels and neurites) segmentation as demonstrated by a recent auto-context regression architecture based on filter banks learned by convolutional sparse coding. However, learning such filter banks is very time-consuming, thus limiting the number of filters employed and the adaptation to other data sets (i.e., slow re-training). We address this limitation by proposing a novel acceleration strategy to speed up convolutional sparse coding filter learning for curvilinear structure segmentation. Our approach is based on a novel initialisation strategy (warm start), and therefore it is different from recent methods improving the optimisation itself. Our warm-start strategy is based on carefully designed hand-crafted filters (SCIRD-TS), modelling appearance properties of curvilinear structures which are then refined by convolutional sparse coding. Experiments on four diverse data sets, including retinal blood vessels and neurites, suggest that the proposed method significantly reduces the time taken to learn convolutional filter banks (by up to 82%) compared to conventional initialisation strategies. Remarkably, this speed-up does not worsen performance; in fact, filters learned with the proposed strategy often achieve a much lower reconstruction error and match or exceed the segmentation performance of random and DCT-based initialisation, when used as input to a random forest classifier.

  10. Compressed air blast injury with palpebral, orbital, facial, cervical, and mediastinal emphysema through an eyelid laceration: a case report and review of literature.

    PubMed

    Hiraoka, Takahiro; Ogami, Tomohiro; Okamoto, Fumiki; Oshika, Tetsuro

    2013-11-07

    To the best of our knowledge, only 14 cases of orbital or periorbital compressed air injuries from air guns or hoses have been reported in the literature. A 30-year-old man was accidentally injured when a compressed air hose nozzle hit his right eye. The right half of his face was markedly swollen and a skin laceration near the right medial canthus was identified. A computed tomography scan showed subcutaneous and intraorbital emphysema around the right eye as well as cervical and mediastinal emphysema. He was prophylactically treated with systemic and topical antibiotics to prevent infection. All emphysemas had completely resolved 2 weeks after the injury. A review of all 15 cases (including ours) showed that all patients were male and that 6 of the 15 (40.0%) cases were related to industrial accidents. Although emphysema was restricted to the subconjunctival space in 2 (13.3%) cases, it spread to the orbit in the remaining 13 (86.7%) cases. Cervical and mediastinal emphysemas were found in 3 (20.0%) cases, and intracranial emphysema was confirmed in 6 (40.0%) cases. Prophylactic antibiotics were used in most cases and the prognosis was generally good in all but one patient, who developed optic atrophy and blindness.

  11. Statistical analysis plan for the Pneumatic CompREssion for PreVENting Venous Thromboembolism (PREVENT) trial: a study protocol for a randomized controlled trial.

    PubMed

    Arabi, Yaseen; Al-Hameed, Fahad; Burns, Karen E A; Mehta, Sangeeta; Alsolamy, Sami; Almaani, Mohammed; Mandourah, Yasser; Almekhlafi, Ghaleb A; Al Bshabshe, Ali; Finfer, Simon; Alshahrani, Mohammed; Khalid, Imran; Mehta, Yatin; Gaur, Atul; Hawa, Hassan; Buscher, Hergen; Arshad, Zia; Lababidi, Hani; Al Aithan, Abdulsalam; Jose, Jesna; Abdukahil, Sheryl Ann I; Afesh, Lara Y; Dbsawy, Maamoun; Al-Dawood, Abdulaziz

    2018-03-15

    The Pneumatic CompREssion for Preventing VENous Thromboembolism (PREVENT) trial evaluates the effect of adjunctive intermittent pneumatic compression (IPC) with pharmacologic thromboprophylaxis compared to pharmacologic thromboprophylaxis alone on venous thromboembolism (VTE) in critically ill adults. In this multicenter randomized trial, critically ill patients receiving pharmacologic thromboprophylaxis will be randomized to an IPC or a no IPC (control) group. The primary outcome is "incident" proximal lower-extremity deep vein thrombosis (DVT) within 28 days after randomization. Radiologists interpreting the lower-extremity ultrasonography will be blinded to intervention allocation, whereas the patients and treating team will be unblinded. The trial has 80% power to detect a 3% absolute risk reduction in the rate of proximal DVT from 7% to 4%. Consistent with international guidelines, we have developed a detailed plan to guide the analysis of the PREVENT trial. This plan specifies the statistical methods for the evaluation of primary and secondary outcomes, and defines covariates for adjusted analyses a priori. Application of this statistical analysis plan to the PREVENT trial will facilitate unbiased analyses of clinical data. ClinicalTrials.gov , ID: NCT02040103 . Registered on 3 November 2013; Current controlled trials, ID: ISRCTN44653506 . Registered on 30 October 2013.

  12. An Auxiliary Gas Supply to Improve Safety During Aborted Dives with the Canadian Underwater Mine Countermeasures Apparatus (CUMA) (Un Systeme Auxiliaire D’approvisionnement en gaz Augmente la Securite des Plongeurs Utilisant L’appareil Canadien de Deminage Sous-marin (ACDSM) lors des Remontees D’urgence)

    DTIC Science & Technology

    2010-11-01

    Validation experiments were conducted from June 2002 to November 2003, over four series of dives. The data recorded by ... Eaton; A.J. Ward; D.J. Woodward; DRDC Toronto TR 2010-081; Defence R&D Canada – Toronto; November 2010. Introduction or background: The apparatus ... weeks, which took place from June 2002 to November 2003. Doppler monitoring of participants for decompression purposes and continuous gas analysis

  13. An Interactive Graphics Program for Assistance in Learning Convolution.

    ERIC Educational Resources Information Center

    Frederick, Dean K.; Waag, Gary L.

    1980-01-01

    A program has been written for the interactive computer graphics facility at Rensselaer Polytechnic Institute that is designed to assist the user in learning the mathematical technique of convolving two functions. Because convolution can be represented graphically by a sequence of steps involving folding, shifting, multiplying, and integration, it…

  14. Method of improving a digital image

    NASA Technical Reports Server (NTRS)

    Jobson, Daniel J. (Inventor); Woodell, Glenn A. (Inventor); Rahman, Zia-ur (Inventor)

    1999-01-01

    A method of improving a digital image is provided. The image is initially represented by digital data indexed to represent positions on a display. The digital data is indicative of an intensity value I_i(x,y) for each position (x,y) in each i-th spectral band. The intensity value for each position in each i-th spectral band is adjusted to generate an adjusted intensity value for each position in each i-th spectral band in accordance with R_i(x,y) = sum over n = 1..N of W_n [log I_i(x,y) - log(F_n(x,y) * I_i(x,y))], i = 1, ..., S, where S is the number of unique spectral bands included in said digital data, W_n is a weighting factor and * denotes the convolution operator. Each surround function F_n(x,y) is uniquely scaled to improve an aspect of the digital image, e.g., dynamic range compression, color constancy, and lightness rendition. The adjusted intensity value for each position in each i-th spectral band is filtered with a common function and then presented to a display device. For color images, a novel color restoration step is added to give the image true-to-life color that closely matches human observation.
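
    A minimal sketch of the surround processing for one spectral band, assuming Gaussian surround functions and uniform weights (both assumptions; the patent covers other scalings):

        # Sketch: multiscale log-ratio adjustment of one spectral band.
        import numpy as np
        from scipy.ndimage import gaussian_filter

        def adjust_band(I, scales=(5, 25, 100), eps=1e-6):
            w = 1.0 / len(scales)                  # uniform weights W_n
            R = np.zeros_like(I, dtype=float)
            for c in scales:
                surround = gaussian_filter(I.astype(float), sigma=c)  # F_n * I
                R += w * (np.log(I + eps) - np.log(surround + eps))
            return R

        rng = np.random.default_rng(0)
        band = rng.random((256, 256)) * 255
        adjusted = adjust_band(band)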

  15. Strength statistics of single crystals and metallic glasses under small stressed volumes

    DOE PAGES

    Gao, Yanfei; Bei, Hongbin

    2016-05-13

    It has been well documented that plastic deformation of crystalline and amorphous metals/alloys shows a general trend of "smaller is stronger". The majority of the experimental and modeling studies along this line have focused on finding and reasoning about the scaling slope or exponent in the logarithmic plot of strength versus size. In contrast to this view, here we show that the universal picture should be the thermally activated nucleation mechanisms in small stressed volumes, the stochastic behavior of finding the weakest links at intermediate sizes of the stressed volume, and the convolution of these two mechanisms with respect to variables such as indenter radius in nanoindentation pop-in, crystallographic orientation, pre-strain level, sample length as in uniaxial tests, and others. Furthermore, experiments that cover the entire spectrum of length scales and a unified model that treats both thermal activation and spatial stochasticity have opened new perspectives for understanding and correlating the strength statistics in a vast range of observations in nanoindentation, micro-pillar compression, and fiber/whisker tension tests of single crystals and metallic glasses.

  16. Aggregated channels network for real-time pedestrian detection

    NASA Astrophysics Data System (ADS)

    Ghorban, Farzin; Marín, Javier; Su, Yu; Colombo, Alessandro; Kummert, Anton

    2018-04-01

    Convolutional neural networks (CNNs) have demonstrated their superiority in numerous computer vision tasks, yet their computational cost is prohibitive for many real-time applications such as pedestrian detection, which is usually performed on low-consumption hardware. In order to alleviate this drawback, most strategies focus on using a two-stage cascade approach. Essentially, in the first stage a fast method generates a significant but reduced number of high-quality proposals that later, in the second stage, are evaluated by the CNN. In this work, we propose a novel detection pipeline that further benefits from the two-stage cascade strategy. More concretely, the enriched and subsequently compressed features used in the first stage are reused as the CNN input. As a consequence, a simpler network architecture, adapted for such small input sizes, allows us to achieve real-time performance and obtain results close to the state of the art while running significantly faster without the use of a GPU. In particular, considering that the proposed pipeline runs at frame rate, the achieved performance is highly competitive. We furthermore demonstrate that the proposed pipeline by itself can serve as an effective proposal generator.

  17. Al Servicio del Diabetico no Vidente o Discapacitado Visual: Guia de Recursos para Consejeros Vocacionales de Rehabilitacion (Serving Individuals with Diabetes Who Are Blind or Visually Impaired: A Resource Guide for Vocational Rehabilitation Counselors).

    ERIC Educational Resources Information Center

    Bryant, Ed, Ed.

    Designed for Spanish-speaking vocational rehabilitation counselors, this book provides information about diabetes and treating diabetes. Much of the material previously appeared as articles in "Voice of the Diabetic" and is written not just by doctors and diabetes professionals, but also by members of the National Federation of the Blind…

  18. Steatorrhoea in rats with an intestinal cul-de-sac

    PubMed Central

    Hoet, P. P.; Eyssen, H.

    1964-01-01

    Steatorrhoea in rats with an intestinal cul-de-sac is mainly due to malabsorption of alimentary fats but faecal lipids of endogenous origin are also increased. Steatorrhoea depends on the site of the blind loop in the small intestine and is mainly caused by bacterial proliferation in the lumen of the gut. The aetiological role of Gram-positive anaerobic microbes, especially Clostridium welchii, is suggested. PMID:14209913

  19. Multidimensional deconvolution of optical microscope and ultrasound imaging using adaptive least-mean-square (LMS) inverse filtering

    NASA Astrophysics Data System (ADS)

    Sapia, Mark Angelo

    2000-11-01

    Three-dimensional microscope images typically suffer from reduced resolution due to the effects of convolution, optical aberrations and out-of-focus blurring. Two-dimensional ultrasound images are also degraded by convolutional blurring and various sources of noise. Speckle noise is a major problem in ultrasound images. In microscopy and ultrasound, various methods of digital filtering have been used to improve image quality. Several methods of deconvolution filtering have been used to improve resolution by reversing the convolutional effects, many of which are based on regularization techniques and non-linear constraints. The technique discussed here is a unique linear filter for deconvolving 3D fluorescence microscopy or 2D ultrasound images. The process is to solve for the filter completely in the spatial domain using an adaptive algorithm to converge to an optimum solution for de-blurring and resolution improvement. There are two key advantages of using an adaptive solution: (1) it efficiently solves for the filter coefficients by taking into account all sources of noise and degraded resolution at the same time, and (2) it achieves near-perfect convergence to the ideal linear deconvolution filter. This linear adaptive technique has other advantages such as avoiding artifacts of frequency-domain transformations and concurrent adaptation to suppress noise. Ultimately, this approach results in better signal-to-noise characteristics with virtually no edge-ringing. Many researchers have not adopted linear techniques because of poor convergence, noise instability and negative-valued data in the results. The methods presented here overcome many of these well-documented disadvantages and provide results that clearly out-perform other linear methods and may also out-perform regularization and constrained algorithms. In particular, the adaptive solution is most responsible for overcoming the poor performance associated with linear techniques. This linear adaptive approach to deconvolution is demonstrated with results of restoring blurred phantoms for both microscopy and ultrasound and restoring 3D microscope images of biological cells and 2D ultrasound images of human subjects (courtesy of General Electric and Diasonics, Inc.).
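
    A one-dimensional sketch of the adaptive LMS inverse-filter idea: an FIR filter is adapted sample by sample to undo a known blur against a reference signal. The blur kernel, tap count, step size, and delay are illustrative assumptions, not the dissertation's 2D/3D implementation:

```python
import numpy as np

def lms_inverse_filter(blurred, reference, n_taps=21, mu=0.01,
                       n_epochs=20, delay=10):
    """Adapt an FIR filter w so that (w * blurred)[n] tracks the clean
    reference delayed by `delay` samples, via the LMS update w += mu*e*x."""
    w = np.zeros(n_taps)
    for _ in range(n_epochs):
        for n in range(n_taps - 1, len(blurred)):
            x = blurred[n - n_taps + 1:n + 1][::-1]  # x[0] is the newest sample
            e = reference[n - delay] - w @ x         # instantaneous error
            w += mu * e * x                          # stochastic-gradient step
    return w

rng = np.random.default_rng(0)
truth = rng.standard_normal(2000)
psf = np.array([0.2, 1.0, 0.2]); psf /= psf.sum()    # mild symmetric blur
blurred = np.convolve(truth, psf, mode="same")
w = lms_inverse_filter(blurred, truth)
restored = np.convolve(blurred, w)[10:10 + truth.size]  # undo the 10-sample delay
print("MSE before:", np.mean((blurred - truth) ** 2),
      " after:", np.mean((restored - truth) ** 2))
```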

  20. Multiple soliton production and the Korteweg-de Vries equation.

    NASA Technical Reports Server (NTRS)

    Hershkowitz, N.; Romesser, T.; Montgomery, D.

    1972-01-01

    Compressive square-wave pulses are launched in a double-plasma device. Their evolution is interpreted according to the Korteweg-de Vries description of Washimi and Taniuti. Square-wave pulses are an excitation for which an explicit solution of the Schrodinger equation permits an analytical prediction of the number and amplitude of emergent solitons. Bursts of energetic particles (pseudowaves) appear above excitation voltages greater than an electron thermal energy, and may be mistaken for solitons.

  1. Vulnerability Analysis of HD Photo Image Viewer Applications

    DTIC Science & Technology

    2007-09-01

    …renamed to HD Photo in November of 2006, is being touted as the successor to the ubiquitous JPEG image format, as well as the eventual de facto standard in the digital photography market. With massive efforts…associated state-of-the-art compression algorithm "specifically designed [for] all types of continuous tone photographic" images [HDPhotoFeatureSpec

  2. Encyclopedia of Explosives and Related Items. Volume 6

    DTIC Science & Technology

    1974-01-01

    …guns: metal clad steel jacket, containing 10 grains of compressed incendiary mixture IM**11…Similar unsatisfactory "cook-off" prematures were obtained…which essentially dampens the extremely high pressures and provides the desired distribution of forces to the metal part for forming. The metal…flammable liquid or gel contained in a strong metal reservoir is put under heavy pressure by…1) Stationary type: a) German Flammenwerfer contained 45

  3. Waveform Design for Multimedia Airborne Networks: Robust Multimedia Data Transmission in Cognitive Radio Networks

    DTIC Science & Technology

    2011-03-01

    at the sensor. According to Candes, Tao and Romberg [1], a small number of random projections of a signal that is compressible is all the…[Block diagram: original signal (noisy) → random projection of signal → transform (i. DWT, ii. FFT, iii. DCT) → solve the minimization problem → reconstruct signal → channel (AWGN or noiseless) → de-noise signal.]
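
    The Candes-Tao-Romberg result invoked here says a few random projections suffice to reconstruct a compressible signal. A minimal, self-contained illustration using iterative soft-thresholding (ISTA) for the l1-regularized reconstruction; the dimensions, sparsity level, and solver choice are assumptions for demonstration, not the report's pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 256, 80, 8                    # signal length, projections, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random projection matrix
y = A @ x_true                                 # compressed measurements

# ISTA: iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1
lam, L = 0.01, np.linalg.norm(A, 2) ** 2       # L = Lipschitz constant of gradient
x = np.zeros(n)
for _ in range(500):
    z = x - A.T @ (A @ x - y) / L              # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```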

  4. An In-situ method for the study of strain broadening usingsynchrotronx-ray diffraction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Chiu C.; Lynch, Peter A.; Cheary, Robert W.

    2006-12-15

    A tensometer for stretching metal foils has been constructed for the study of strain broadening in x-ray diffraction line profiles. This device, which is designed for use on the powder diffractometer in Station 2.3 at Daresbury Laboratory, allows in-situ measurements to be performed on samples under stress. It can be used for data collection in either transmission or reflection modes using either symmetric or asymmetric diffraction geometries. As a test case, measurements were carried out on an 18 μm thick copper foil experiencing strain levels of up to 5 percent using both symmetric reflection and symmetric transmission diffraction. All the diffraction profiles displayed peak broadening and asymmetry which increased with strain. The measured profiles were analysed by the fundamental parameters approach using the TOPAS peak fitting software. All the observed broadened profiles were modelled by convoluting a refineable diffraction profile, representing the dislocation and crystallite size broadening, with a fixed instrumental profile pre-determined using high quality LaB6 reference powder. The de-convolution process yielded "pure" sample integral breadths and asymmetry results which displayed a strong dependence on applied strain and increased almost linearly with applied strain. Assuming crystallite size broadening in combination with dislocation broadening arising from fcc a/2<110>111 dislocations, we have extracted the variation of mechanical property with strain. The observation of both peak asymmetry and broadening has been interpreted as a manifestation of a cellular structure with cell walls and cell interiors possessing high and low dislocation densities.

  5. Isentropic compression of liquid metals near the melt line

    NASA Astrophysics Data System (ADS)

    Seagle, Christopher; Porwitzky, Andrew

    2017-06-01

    A series of experiments designed to study the liquid metal response to isentropic compression have been conducted at Sandia's Z Pulsed Power Facility. Cerium and Tin have been shock melted by driving a quasi-ballistic flyer into the samples followed by a ramp compression wave generated by an increased driving magnetic field. The sound speed of the liquid metals has been investigated with the purpose of exploring possible solidification on ramp compression. Additional surface sensitive diagnostics have been employed to search for signatures of solidification at the window interface. Results of these experiments will be discussed in relation to the existing equation of state models and phase diagrams for these materials as well as future plans for exploring the response of liquid metals near the melt line. Sandia National Laboratories is a multi-mission laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  6. Convolutional neural networks applied to neutrino events in a liquid argon time projection chamber

    NASA Astrophysics Data System (ADS)

    Acciarri, R.; Adams, C.; An, R.; Asaadi, J.; Auger, M.; Bagby, L.; Baller, B.; Barr, G.; Bass, M.; Bay, F.; Bishai, M.; Blake, A.; Bolton, T.; Bugel, L.; Camilleri, L.; Caratelli, D.; Carls, B.; Castillo Fernandez, R.; Cavanna, F.; Chen, H.; Church, E.; Cianci, D.; Collin, G. H.; Conrad, J. M.; Convery, M.; Crespo-Anadón, J. I.; Del Tutto, M.; Devitt, D.; Dytman, S.; Eberly, B.; Ereditato, A.; Escudero Sanchez, L.; Esquivel, J.; Fleming, B. T.; Foreman, W.; Furmanski, A. P.; Garvey, G. T.; Genty, V.; Goeldi, D.; Gollapinni, S.; Graf, N.; Gramellini, E.; Greenlee, H.; Grosso, R.; Guenette, R.; Hackenburg, A.; Hamilton, P.; Hen, O.; Hewes, J.; Hill, C.; Ho, J.; Horton-Smith, G.; James, C.; de Vries, J. Jan; Jen, C.-M.; Jiang, L.; Johnson, R. A.; Jones, B. J. P.; Joshi, J.; Jostlein, H.; Kaleko, D.; Karagiorgi, G.; Ketchum, W.; Kirby, B.; Kirby, M.; Kobilarcik, T.; Kreslo, I.; Laube, A.; Li, Y.; Lister, A.; Littlejohn, B. R.; Lockwitz, S.; Lorca, D.; Louis, W. C.; Luethi, M.; Lundberg, B.; Luo, X.; Marchionni, A.; Mariani, C.; Marshall, J.; Martinez Caicedo, D. A.; Meddage, V.; Miceli, T.; Mills, G. B.; Moon, J.; Mooney, M.; Moore, C. D.; Mousseau, J.; Murrells, R.; Naples, D.; Nienaber, P.; Nowak, J.; Palamara, O.; Paolone, V.; Papavassiliou, V.; Pate, S. F.; Pavlovic, Z.; Porzio, D.; Pulliam, G.; Qian, X.; Raaf, J. L.; Rafique, A.; Rochester, L.; von Rohr, C. Rudolf; Russell, B.; Schmitz, D. W.; Schukraft, A.; Seligman, W.; Shaevitz, M. H.; Sinclair, J.; Snider, E. L.; Soderberg, M.; Söldner-Rembold, S.; Soleti, S. R.; Spentzouris, P.; Spitz, J.; St. John, J.; Strauss, T.; Szelc, A. M.; Tagg, N.; Terao, K.; Thomson, M.; Toups, M.; Tsai, Y.-T.; Tufanli, S.; Usher, T.; Van de Water, R. G.; Viren, B.; Weber, M.; Weston, J.; Wickremasinghe, D. A.; Wolbers, S.; Wongjirad, T.; Woodruff, K.; Yang, T.; Zeller, G. P.; Zennamo, J.; Zhang, C.

    2017-03-01

    We present several studies of convolutional neural networks applied to data coming from the MicroBooNE detector, a liquid argon time projection chamber (LArTPC). The algorithms studied include the classification of single particle images, the localization of single particle and neutrino interactions in an image, and the detection of a simulated neutrino event overlaid with cosmic ray backgrounds taken from real detector data. These studies demonstrate the potential of convolutional neural networks for particle identification or event detection on simulated neutrino interactions. We also address technical issues that arise when applying this technique to data from a large LArTPC at or near ground level.

  7. Rock images classification by using deep convolution neural network

    NASA Astrophysics Data System (ADS)

    Cheng, Guojian; Guo, Wenhui

    2017-08-01

    Granularity analysis is one of the most essential issues in the identification of rocks under the microscope. To improve the efficiency and accuracy of traditional manual work, a convolutional neural network based method is proposed for granularity analysis of thin section images, which chooses and extracts features from image samples while building a classifier to recognize the granularity of input image samples. 4800 samples from the Ordos basin are used for experiments under the HSV, YCbCr and RGB colour spaces respectively. On the test dataset, the correct rate in RGB colour space is 98.5%, and the results in HSV and YCbCr colour spaces are also reliable. The results show that the convolutional neural network can classify the rock images with high reliability.

  8. Patient-specific dosimetry based on quantitative SPECT imaging and 3D-DFT convolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akabani, G.; Hawkins, W.G.; Eckblade, M.B.

    1999-01-01

    The objective of this study was to validate the use of a 3-D discrete Fourier transform (3D-DFT) convolution method to carry out the dosimetry for I-131 for soft tissues in radioimmunotherapy procedures. To validate this convolution method, mathematical and physical phantoms were used as a basis of comparison with Monte Carlo transport (MCT) calculations, which were carried out using the EGS4 system code. The mathematical phantom consisted of a sphere containing uniform and nonuniform activity distributions. The physical phantom consisted of a cylinder containing uniform and nonuniform activity distributions. Quantitative SPECT reconstruction was carried out using the Circular Harmonic Transform (CHT) algorithm.
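
    The 3D-DFT convolution at the heart of this approach multiplies the transforms of the cumulated-activity map and a dose-point kernel. A toy sketch, with an invented exponential kernel standing in for a real I-131 dose-point kernel:

```python
import numpy as np

def dose_by_fft_convolution(activity, kernel):
    """Voxelized absorbed dose as the circular 3D convolution of the
    cumulated-activity map with a dose-point kernel, via the 3D DFT."""
    return np.real(np.fft.ifftn(np.fft.fftn(activity) * np.fft.fftn(kernel)))

shape = (32, 32, 32)
activity = np.zeros(shape)
activity[16, 16, 16] = 1.0                      # toy point source
z, y, x = np.indices(shape)
r = np.sqrt((x - 16.0) ** 2 + (y - 16.0) ** 2 + (z - 16.0) ** 2)
kernel = np.exp(-r) / (4 * np.pi * np.maximum(r, 0.5) ** 2)  # toy falloff
kernel = np.fft.ifftshift(kernel)   # center the kernel at the origin for the DFT
dose = dose_by_fft_convolution(activity, kernel)
```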

  9. Convolute laminations — a theoretical analysis: example of a Pennsylvanian sandstone

    NASA Astrophysics Data System (ADS)

    Visher, Glenn S.; Cunningham, Russ D.

    1981-03-01

    Data from an outcropping laminated interval were collected and analyzed to test the applicability of a theoretical model describing instability of layered systems. Rayleigh-Taylor wave perturbations result at the interface between fluids of contrasting density, viscosity, and thickness. In the special case where reverse density and viscosity interlaminations are developed, the deformation response produces a single wave with predictable amplitudes, wavelengths, and amplification rates. Physical measurements from both the outcropping section and modern sediments suggest the usefulness of the model for the interpretation of convolute laminations. Internal characteristics of the stratigraphic interval, and the developmental sequence of convoluted beds, are used to document the developmental history of these structures.

  10. Detecting of foreign object debris on airfield pavement using convolution neural network

    NASA Astrophysics Data System (ADS)

    Cao, Xiaoguang; Gu, Yufeng; Bai, Xiangzhi

    2017-11-01

    It is of great practical significance to detect foreign object debris (FOD) timely and accurately on the airfield pavement, because FOD is a fatal threat to runway safety in airports. In this paper, a new FOD detection framework based on the Single Shot MultiBox Detector (SSD) is proposed. Two strategies, making the detection network lighter and using dilated convolution, are proposed to better solve the FOD detection problem. The advantages mainly include: (i) the network structure becomes lighter, which speeds up the detection task and enhances detection accuracy; (ii) dilated convolution is applied in the network structure to handle smaller FOD. Thus, we get a faster and more accurate detection system.

  11. Coding performance of the Probe-Orbiter-Earth communication link

    NASA Technical Reports Server (NTRS)

    Divsalar, D.; Dolinar, S.; Pollara, F.

    1993-01-01

    The coding performance of the Probe-Orbiter-Earth communication link is analyzed and compared for several cases. It is assumed that the coding system consists of a convolutional code at the Probe, a quantizer and another convolutional code at the Orbiter, and two cascaded Viterbi decoders or a combined decoder on the ground.
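
    For readers unfamiliar with the building block shared by these coding records, a minimal rate-1/2 convolutional encoder is sketched below. The constraint length and generator polynomials (7 and 5 octal, a textbook pair) are illustrative assumptions, not the codes used on the Probe or Orbiter links:

```python
def conv_encode(bits, g1=0b111, g2=0b101, k=3):
    """Rate-1/2 convolutional encoder with constraint length k.
    Each input bit shifts into a register (newest bit in the LSB); the two
    output bits are the parities of the register masked by g1 and g2."""
    state, out = 0, []
    for b in bits:
        state = ((state << 1) | b) & ((1 << k) - 1)
        out.append(bin(state & g1).count("1") % 2)
        out.append(bin(state & g2).count("1") % 2)
    return out

print(conv_encode([1, 0, 1, 1]))  # -> [1, 1, 1, 0, 0, 0, 0, 1]
```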

  12. Identification and Classification of Orthogonal Frequency Division Multiple Access (OFDMA) Signals Used in Next Generation Wireless Systems

    DTIC Science & Technology

    2012-03-01

    [Glossary excerpt: AAS, advanced antenna systems; AMC, adaptive modulation and coding; AWGN, additive white Gaussian noise; BPSK, binary phase shift keying; BS, base station; BTC…] …QAM-16, and QAM-64, and coding types include convolutional coding (CC), convolutional turbo coding (CTC), block turbo coding (BTC), zero-terminating

  13. Sequential Syndrome Decoding of Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1984-01-01

    The algebraic structure of convolutional codes is reviewed and sequential syndrome decoding is applied to these codes. These concepts are then used to realize, by example, actual sequential decoding using the stack algorithm. The Fano metric for use in sequential decoding is modified so that it can be utilized to sequentially find the minimum weight error sequence.

  14. Lunar Circular Structure Classification from Chang 'e 2 High Resolution Lunar Images with Convolutional Neural Network

    NASA Astrophysics Data System (ADS)

    Zeng, X. G.; Liu, J. J.; Zuo, W.; Chen, W. L.; Liu, Y. X.

    2018-04-01

    Circular structures are widely distributed across the lunar surface. The most typical of these are lunar impact craters and lunar domes. In this approach, we use a Convolutional Neural Network to classify lunar circular structures from lunar images.

  15. Deep Convolutional Extreme Learning Machine and Its Application in Handwritten Digit Classification

    PubMed Central

    Yang, Xinyi

    2016-01-01

    In recent years, some deep learning methods have been developed and applied to image classification applications, such as the convolutional neural network (CNN) and the deep belief network (DBN). However, they suffer from problems such as local minima, slow convergence rates, and intensive human intervention. In this paper, we propose a rapid learning method, namely, the deep convolutional extreme learning machine (DC-ELM), which combines the power of CNN and the fast training of ELM. It uses multiple alternating convolution layers and pooling layers to effectively abstract high-level features from input images. Then the abstracted features are fed to an ELM classifier, which leads to better generalization performance with faster learning speed. DC-ELM also introduces stochastic pooling in the last hidden layer to greatly reduce the dimensionality of features, thus saving much training time and computational resources. We systematically evaluated the performance of DC-ELM on two handwritten digit data sets: MNIST and USPS. Experimental results show that our method achieved better testing accuracy with significantly shorter training time in comparison with deep learning methods and other ELM methods. PMID:27610128

  16. A pre-trained convolutional neural network based method for thyroid nodule diagnosis.

    PubMed

    Ma, Jinlian; Wu, Fa; Zhu, Jiang; Xu, Dong; Kong, Dexing

    2017-01-01

    In ultrasound images, most thyroid nodules have heterogeneous appearances with various internal components and vague boundaries, so it is difficult for physicians to discriminate malignant thyroid nodules from benign ones. In this study, we propose a hybrid method for thyroid nodule diagnosis, which is a fusion of two pre-trained convolutional neural networks (CNNs) with different convolutional layers and fully-connected layers. Firstly, the two networks pre-trained with the ImageNet database are separately trained. Secondly, we fuse feature maps learned by the trained convolutional filters, pooling and normalization operations of the two CNNs. Finally, with the fused feature maps, a softmax classifier is used to diagnose thyroid nodules. The proposed method is validated on 15,000 ultrasound images collected from two local hospitals. Experiment results show that the proposed CNN based methods can accurately and effectively diagnose thyroid nodules. In addition, the fusion of the two CNN based models leads to a significant performance improvement, with an accuracy of 83.02%±0.72%. These results demonstrate the potential clinical applications of this method. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Enhancement of digital radiography image quality using a convolutional neural network.

    PubMed

    Sun, Yuewen; Li, Litao; Cong, Peng; Wang, Zhentao; Guo, Xiaojing

    2017-01-01

    Digital radiography systems are widely used for noninvasive security checks and medical imaging examinations. However, such systems are limited by lower image quality in spatial resolution and signal to noise ratio. In this study, we explored whether the image quality acquired by the digital radiography system can be improved with a modified convolutional neural network that generates high-resolution images with reduced noise from the original low-quality images. The experiment, evaluated on a test dataset containing 5 X-ray images, showed that the proposed method outperformed the traditional methods (i.e., bicubic interpolation and the 3D block-matching approach) by about 1.3 dB in peak signal to noise ratio (PSNR), while keeping processing time within one second. Experimental results demonstrated that a residual to residual (RTR) convolutional neural network remarkably improved the image quality of object structural details by increasing the image resolution and reducing image noise. Thus, this study indicated that applying this RTR convolutional neural network system was useful to improve image quality acquired by the digital radiography system.
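
    The reported 1.3 dB gain is measured in peak signal-to-noise ratio, which follows the standard definition sketched below; the function and test data are a generic illustration, not the paper's evaluation code:

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10*log10(peak^2 / MSE)."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

ref = np.random.randint(0, 256, (64, 64))
noisy = np.clip(ref + np.random.normal(0, 10, ref.shape), 0, 255)
print(f"PSNR: {psnr(ref, noisy):.2f} dB")
```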

  18. Alcoholism Detection by Data Augmentation and Convolutional Neural Network with Stochastic Pooling.

    PubMed

    Wang, Shui-Hua; Lv, Yi-Ding; Sui, Yuxiu; Liu, Shuai; Wang, Su-Jing; Zhang, Yu-Dong

    2017-11-17

    Alcohol use disorder (AUD) is an important brain disease that alters brain structure. Recently, scholars have tended to use computer vision based techniques to detect AUD. We collected 235 subjects, 114 alcoholic and 121 non-alcoholic. Among the 235 images, 100 were used as the training set, to which a data augmentation method was applied. The remaining 135 images were used as the test set. Further, we chose the latest powerful technique, the convolutional neural network (CNN), based on convolutional layers, rectified linear unit layers, pooling layers, fully connected layers, and a softmax layer. We also compared three different pooling techniques: max pooling, average pooling, and stochastic pooling. The results showed that our method achieved a sensitivity of 96.88%, a specificity of 97.18%, and an accuracy of 97.04%. Our method was better than three state-of-the-art approaches. Besides, stochastic pooling performed better than max pooling and average pooling. We validated that a CNN with five convolution layers and two fully connected layers performed the best. The GPU yielded a 149× acceleration in training and a 166× acceleration in testing, compared to the CPU.
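
    Of the three pooling techniques compared, stochastic pooling is the least standard; a small numpy sketch of the Zeiler-Fergus formulation is given below. The 2x2 region size and the ReLU'd random feature map are illustrative assumptions:

```python
import numpy as np

def stochastic_pool_2x2(fmap, rng):
    """Stochastic pooling: within each 2x2 region, sample one activation
    with probability proportional to its (non-negative) value."""
    h, w = fmap.shape
    out = np.empty((h // 2, w // 2))
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            region = fmap[i:i + 2, j:j + 2].ravel()
            p = region / region.sum() if region.sum() > 0 else np.full(4, 0.25)
            out[i // 2, j // 2] = rng.choice(region, p=p)
    return out

rng = np.random.default_rng(0)
fmap = np.maximum(rng.standard_normal((8, 8)), 0)   # post-ReLU activations
print(stochastic_pool_2x2(fmap, rng))
```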

  19. Deep Convolutional Extreme Learning Machine and Its Application in Handwritten Digit Classification.

    PubMed

    Pang, Shan; Yang, Xinyi

    2016-01-01

    In recent years, some deep learning methods have been developed and applied to image classification applications, such as convolutional neuron network (CNN) and deep belief network (DBN). However they are suffering from some problems like local minima, slow convergence rate, and intensive human intervention. In this paper, we propose a rapid learning method, namely, deep convolutional extreme learning machine (DC-ELM), which combines the power of CNN and fast training of ELM. It uses multiple alternate convolution layers and pooling layers to effectively abstract high level features from input images. Then the abstracted features are fed to an ELM classifier, which leads to better generalization performance with faster learning speed. DC-ELM also introduces stochastic pooling in the last hidden layer to reduce dimensionality of features greatly, thus saving much training time and computation resources. We systematically evaluated the performance of DC-ELM on two handwritten digit data sets: MNIST and USPS. Experimental results show that our method achieved better testing accuracy with significantly shorter training time in comparison with deep learning methods and other ELM methods.

  20. Lp-stability (1 less than or equal to p less than or equal to infinity) of multivariable nonlinear time-varying feedback systems that are open-loop unstable. [noting unstable convolution subsystem forward control and time varying nonlinear feedback

    NASA Technical Reports Server (NTRS)

    Callier, F. M.; Desoer, C. A.

    1973-01-01

    A class of multivariable, nonlinear time-varying feedback systems with an unstable convolution subsystem as feedforward and a time-varying nonlinear gain as feedback was considered. The impulse response of the convolution subsystem is the sum of a finite number of increasing exponentials multiplied by nonnegative powers of the time t, a term that is absolutely integrable and an infinite series of delayed impulses. The main result is a theorem. It essentially states that if the unstable convolution subsystem can be stabilized by a constant feedback gain F and if incremental gain of the difference between the nonlinear gain function and F is sufficiently small, then the nonlinear system is L(p)-stable for any p between one and infinity. Furthermore, the solutions of the nonlinear system depend continuously on the inputs in any L(p)-norm. The fixed point theorem is crucial in deriving the above theorem.

  1. Deep learning for tumor classification in imaging mass spectrometry.

    PubMed

    Behrmann, Jens; Etmann, Christian; Boskamp, Tobias; Casadonte, Rita; Kriegsmann, Jörg; Maaß, Peter

    2018-04-01

    Tumor classification using imaging mass spectrometry (IMS) data has a high potential for future applications in pathology. Due to the complexity and size of the data, automated feature extraction and classification steps are required to fully process the data. Since mass spectra exhibit certain structural similarities to image data, deep learning may offer a promising strategy for classification of IMS data as it has been successfully applied to image classification. Methodologically, we propose an adapted architecture based on deep convolutional networks to handle the characteristics of mass spectrometry data, as well as a strategy to interpret the learned model in the spectral domain based on a sensitivity analysis. The proposed methods are evaluated on two algorithmically challenging tumor classification tasks and compared to a baseline approach. Competitiveness of the proposed methods is shown on both tasks by studying the performance via cross-validation. Moreover, the learned models are analyzed by the proposed sensitivity analysis revealing biologically plausible effects as well as confounding factors of the considered tasks. Thus, this study may serve as a starting point for further development of deep learning approaches in IMS classification tasks. https://gitlab.informatik.uni-bremen.de/digipath/Deep_Learning_for_Tumor_Classification_in_IMS. jbehrmann@uni-bremen.de or christianetmann@uni-bremen.de. Supplementary data are available at Bioinformatics online.

  2. Transverse Compression Response of a Multi-Ply Kevlar Vest

    DTIC Science & Technology

    2004-09-01


  3. Microscale investigation of dynamic impact of dry and saturated glass powder

    NASA Astrophysics Data System (ADS)

    Herbold, Eric; Crum, Ryan; Hurley, Ryan; Lind, Jonathan; Homel, Michael; Akin, Minta

    2017-06-01

    The response of particulate materials to shock loading involves complex interactions between grains involving fracture/comminution and possible interstitial material. The strength of saturated powders is attributed to ``effective stress'' where the fluid stiffens the material response and reduces the shear strength. However, detailed information regarding the effects of saturation under dynamic loading is lacking since static equilibrium between phases cannot be assumed and the interaction becomes more complex. Recent experiments at the dynamic compression sector (DCS) have captured in-situ images of shock loaded soda lime glass spheres in dry and saturated conditions. The differences between the modes of deformation and compaction are compared with mesoscale simulations to help develop our ideas about the observed response. This work was performed under the auspices of the U.S. Department of Energy (DOE) by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LDRD tracking code 16-ERD-010. The Dynamic Compression Sector (DCS, sector 35) is supported by DOE/NNSA Award Number DE-NA0002442. The use of Advanced Photon Source is operated by Argonne National Laboratory under Contract No. DE-AC02-06CH11357.

  4. Animation-assisted CPRII program as a reminder tool in achieving effective one-person-CPR performance.

    PubMed

    Choa, Minhong; Cho, Junho; Choi, Young Hwan; Kim, Seungho; Sung, Ji Min; Chung, Hyun Soo

    2009-06-01

    The objective of this study is to compare the skill retention of two groups of lay persons, six months after their last CPR training. The intervention group was provided with animation-assisted CPRII (AA-CPRII) instruction on their cellular phones, and the control group had nothing but what they had learned from their previous training. This study was a single-blind randomized controlled trial. The participants' last CPR trainings had been held at least six months earlier. We revised our CPR animation for on-site CPR instruction content, emphasizing the importance of chest compression. Participants were randomized into two groups, the AA-CPRII group (n=42) and the control group (n=38). Both groups performed three cycles of CPR and their performances were video recorded. These video clips were assessed by three evaluators using a checklist. The psychomotor skills were evaluated using the ResusciAnne SkillReporter. Using the 30-point scoring checklist, the AA-CPRII group had a significantly better score compared to the control group (p<0.001). Psychomotor skill evaluation showed that the AA-CPRII group performed better in hand positioning (p=0.025), compression depth (p=0.035) and compression rate (p<0.001) than the control group. The AA-CPRII group achieved better checklist scores, including chest compression rate, depth and hand positioning. Animation-assisted CPR could be used as a reminder tool in achieving effective one-person-CPR performance. Installing the CPR instruction on cellular phones and teaching participants CPR with it during training enabled them to perform better CPR.

  5. Evolutionary image simplification for lung nodule classification with convolutional neural networks.

    PubMed

    Lückehe, Daniel; von Voigt, Gabriele

    2018-05-29

    Understanding decisions of deep learning techniques is important. Especially in the medical field, the reasons for a decision in a classification task are as crucial as the pure classification results. In this article, we propose a new approach to compute relevant parts of a medical image. Knowing the relevant parts makes it easier to understand decisions. In our approach, a convolutional neural network is employed to learn structures of images of lung nodules. Then, an evolutionary algorithm is applied to compute a simplified version of an unknown image based on the learned structures by the convolutional neural network. In the simplified version, irrelevant parts are removed from the original image. In the results, we show simplified images which allow the observer to focus on the relevant parts. In these images, more than 50% of the pixels are simplified. The simplified pixels do not change the meaning of the images based on the learned structures by the convolutional neural network. An experimental analysis shows the potential of the approach. Besides the examples of simplified images, we analyze the run time development. Simplified images make it easier to focus on relevant parts and to find reasons for a decision. The combination of an evolutionary algorithm employing a learned convolutional neural network is well suited for the simplification task. From a research perspective, it is interesting which areas of the images are simplified and which parts are taken as relevant.

  6. Producing data-based sensitivity kernels from convolution and correlation in exploration geophysics.

    NASA Astrophysics Data System (ADS)

    Chmiel, M. J.; Roux, P.; Herrmann, P.; Rondeleux, B.

    2016-12-01

    Many studies have shown that seismic interferometry can be used to estimate surface wave arrivals by correlation of seismic signals recorded at a pair of locations. In the case of ambient noise sources, convergence towards the surface wave Green's functions is obtained under the criterion of equipartitioned energy. However, seismic acquisition with active, controlled sources gives more possibilities when it comes to interferometry. The use of controlled sources makes it possible to recover the surface wave Green's function between two points using either correlation or convolution. We investigate the convolutional and correlational approaches using land active-seismic data from exploration geophysics. The data were recorded on 10,710 vertical receivers using 51,808 sources (seismic vibrator trucks). The source spacing is the same in both the X and Y directions (30 m), which is known as "carpet shooting". The receivers are placed in parallel lines with a spacing of 150 m in the X direction and 30 m in the Y direction. Invoking spatial reciprocity between sources and receivers, correlation and convolution functions can thus be constructed between either pairs of receivers or pairs of sources. Benefiting from the dense acquisition, we extract sensitivity kernels from correlation and convolution measurements of the seismic data. These sensitivity kernels are subsequently used to produce phase-velocity dispersion curves between two points and to separate the higher mode from the fundamental mode for surface waves. Potential application to surface wave cancellation is also envisaged.
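
    A minimal numerical analogue of the correlational approach: stacking cross-correlations of two receivers' recordings over many sources recovers the inter-receiver travel time. The synthetic single-arrival model below is an assumption for illustration, not the field geometry described above:

```python
import numpy as np

rng = np.random.default_rng(2)
n_src, n_t, lag_true = 200, 512, 17   # sources, samples, inter-receiver lag

# Each source emits noise; receiver B records receiver A's signal delayed
# by lag_true samples (a single propagating arrival).
corr = np.zeros(2 * n_t - 1)
for _ in range(n_src):
    s = rng.standard_normal(n_t)
    rec_a = s
    rec_b = np.roll(s, lag_true)
    corr += np.correlate(rec_b, rec_a, mode="full")  # stack over sources

lags = np.arange(-(n_t - 1), n_t)
print("peak at lag:", lags[np.argmax(corr)])  # ~= lag_true
```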

  7. The wasteland of random supergravities

    NASA Astrophysics Data System (ADS)

    Marsh, David; McAllister, Liam; Wrase, Timm

    2012-03-01

    We show that in a general N = 1 supergravity with N ≫ 1 scalar fields, an exponentially small fraction of the de Sitter critical points are metastable vacua. Taking the superpotential and Kähler potential to be random functions, we construct a random matrix model for the Hessian matrix, which is well-approximated by the sum of a Wigner matrix and two Wishart matrices. We compute the eigenvalue spectrum analytically from the free convolution of the constituent spectra and find that in typical configurations, a significant fraction of the eigenvalues are negative. Building on the Tracy-Widom law governing fluctuations of extreme eigenvalues, we determine the probability P of a large fluctuation in which all the eigenvalues become positive. Strong eigenvalue repulsion makes this extremely unlikely: we find P ∝ exp(-c N^p), with c and p constants. For generic critical points we find p ≈ 1.5, while for approximately-supersymmetric critical points, p ≈ 1.3. Our results have significant implications for the counting of de Sitter vacua in string theory, but the number of vacua remains vast.
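
    The spectrum described here can also be sampled numerically. A small sketch drawing the eigenvalues of a Wigner matrix plus two Wishart matrices; the relative signs and normalizations are schematic assumptions, not the paper's precise Hessian construction:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 400

# Wigner matrix: symmetric with iid Gaussian entries.
G = rng.standard_normal((N, N))
wigner = (G + G.T) / np.sqrt(2 * N)

def wishart():
    """Wishart matrix A A^T / N with a square Gaussian A."""
    A = rng.standard_normal((N, N))
    return A @ A.T / N

H = wigner + wishart() - wishart()   # schematic stand-in for the Hessian model
eigs = np.linalg.eigvalsh(H)
print("fraction of negative eigenvalues:", np.mean(eigs < 0))
```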

  8. Seismically-induced soft-sediment deformation structures associated with the Magallanes-Fagnano Fault System (Isla Grande de Tierra del Fuego, Argentina)

    NASA Astrophysics Data System (ADS)

    Onorato, M. Romina; Perucca, Laura; Coronato, Andrea; Rabassa, Jorge; López, Ramiro

    2016-10-01

    In this paper, evidence of paleoearthquake-induced soft-sediment deformation structures associated with the Magallanes-Fagnano Fault System in the Isla Grande de Tierra del Fuego, southern Argentina, has been identified. Well-preserved soft-sediment deformation structures were found in a Holocene sequence of the Udaeta pond. These structures were analyzed in terms of their geometrical characteristics, deformation mechanism, driving force system and possible trigger agent. They were also grouped in different morphological types: sand dykes, convolute lamination, load structures and faulted soft-sediment deformation features. Udaeta, a small pond in Argentina Tierra del Fuego, is considered a Quaternary pull-apart basin related to the Magallanes-Fagnano Fault System. The recognition of these seismically-induced features is an essential tool for paleoseismic studies. Since the three main urban centers in the Tierra del Fuego province of Argentina (Ushuaia, Río Grande and Tolhuin) have undergone an explosive growth in recent years, the results of this study will hopefully contribute to future analyses of the seismic risk of the region.

  9. A Differential Evolution-Based Routing Algorithm for Environmental Monitoring Wireless Sensor Networks

    PubMed Central

    Li, Xiaofang; Xu, Lizhong; Wang, Huibin; Song, Jie; Yang, Simon X.

    2010-01-01

    The traditional Low Energy Adaptive Cluster Hierarchy (LEACH) routing protocol is a clustering-based protocol. The uneven selection of cluster heads results in premature death of cluster heads and premature blind nodes inside the clusters, thus reducing the overall lifetime of the network. With a full consideration of information on energy and distance distribution of neighboring nodes inside the clusters, this paper proposes a new routing algorithm based on differential evolution (DE) to improve the LEACH routing protocol. To meet the requirements of monitoring applications in outdoor environments such as the meteorological, hydrological and wetland ecological environments, the proposed algorithm uses the simple and fast search features of DE to optimize the multi-objective selection of cluster heads and prevent blind nodes for improved energy efficiency and system stability. Simulation results show that the proposed new LEACH routing algorithm has better performance, effectively extends the working lifetime of the system, and improves the quality of the wireless sensor networks. PMID:22219670
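
    A compact sketch of the DE/rand/1/bin scheme underlying such an optimizer; the LEACH-specific multi-objective cluster-head fitness from the paper is replaced here by a simple sphere function for illustration:

```python
import numpy as np

def differential_evolution(fobj, bounds, pop_size=30, F=0.8, CR=0.9, n_gen=200):
    """Minimal DE/rand/1/bin: mutate with scaled difference vectors,
    binomial crossover, greedy selection."""
    rng = np.random.default_rng(4)
    dim = len(bounds)
    lo, hi = np.array(bounds).T
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)
    fit = np.array([fobj(p) for p in pop])
    for _ in range(n_gen):
        for i in range(pop_size):
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True       # ensure at least one gene
            trial = np.where(cross, mutant, pop[i])
            f_trial = fobj(trial)
            if f_trial < fit[i]:                  # greedy selection
                pop[i], fit[i] = trial, f_trial
    return pop[fit.argmin()], fit.min()

best, val = differential_evolution(lambda x: np.sum(x ** 2), [(-5, 5)] * 3)
print(best, val)
```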

  10. Quantifying the interplay effect in prostate IMRT delivery using a convolution-based method.

    PubMed

    Li, Haisen S; Chetty, Indrin J; Solberg, Timothy D

    2008-05-01

    The authors present a segment-based convolution method to account for the interplay effect between intrafraction organ motion and the multileaf collimator position for each particular segment in intensity modulated radiation therapy (IMRT) delivered in a step-and-shoot manner. In this method, the static dose distribution attributed to each segment is convolved with the probability density function (PDF) of motion during delivery of the segment, whereas in the conventional convolution method ("average-based convolution"), the static dose distribution is convolved with the PDF averaged over an entire fraction, an entire treatment course, or even an entire patient population. In the case of IMRT delivered in a step-and-shoot manner, the average-based convolution method assumes that in each segment the target volume experiences the same motion pattern (PDF) as that of population. In the segment-based convolution method, the dose during each segment is calculated by convolving the static dose with the motion PDF specific to that segment, allowing both intrafraction motion and the interplay effect to be accounted for in the dose calculation. Intrafraction prostate motion data from a population of 35 patients tracked using the Calypso system (Calypso Medical Technologies, Inc., Seattle, WA) was used to generate motion PDFs. These were then convolved with dose distributions from clinical prostate IMRT plans. For a single segment with a small number of monitor units, the interplay effect introduced errors of up to 25.9% in the mean CTV dose compared against the planned dose evaluated by using the PDF of the entire fraction. In contrast, the interplay effect reduced the minimum CTV dose by 4.4%, and the CTV generalized equivalent uniform dose by 1.3%, in single fraction plans. For entire treatment courses delivered in either a hypofractionated (five fractions) or conventional (> 30 fractions) regimen, the discrepancy in total dose due to interplay effect was negligible.
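
    A one-dimensional toy version of the comparison between segment-based and average-based convolution; the Gaussian dose profiles and motion PDFs are invented for illustration and stand in for the planned segment doses and the Calypso-derived PDFs:

```python
import numpy as np

def convolve_pdf(dose, pdf):
    """Blur a static 1-D dose profile with a motion PDF on the same grid."""
    return np.convolve(dose, pdf / pdf.sum(), mode="same")

x = np.linspace(-30, 30, 121)                       # position grid, mm
static_segment_dose = [np.exp(-((x - c) / 8.0) ** 2) for c in (-5, 0, 5)]

def gauss(mu, sig=2.0):
    g = np.exp(-((x - mu) / sig) ** 2)
    return g / g.sum()

# Per-segment motion PDFs (the target sat at different offsets during each
# segment) versus the PDF averaged over the whole fraction.
segment_pdfs = [gauss(-3), gauss(0), gauss(3)]
average_pdf = sum(segment_pdfs) / 3

segment_based = sum(convolve_pdf(d, p)
                    for d, p in zip(static_segment_dose, segment_pdfs))
average_based = sum(convolve_pdf(d, average_pdf) for d in static_segment_dose)
print("max interplay discrepancy:", np.abs(segment_based - average_based).max())
```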

  11. SU-E-T-371: Evaluating the Convolution Algorithm of a Commercially Available Radiosurgery Irradiator Using a Novel Phantom

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cates, J; Drzymala, R

    2015-06-15

    Purpose: The purpose of this study was to develop and use a novel phantom to evaluate the accuracy and usefulness of the Leksell GammaPlan convolution-based dose calculation algorithm compared with the current TMR10 algorithm. Methods: A novel phantom was designed to fit the Leksell Gamma Knife G Frame which could accommodate various materials in the form of one inch diameter, cylindrical plugs. The plugs were split axially to allow EBT2 film placement. Film measurements were made during two experiments. The first utilized plans generated on a homogeneous acrylic phantom setup using the TMR10 algorithm, with various materials inserted into the phantom during film irradiation to assess the effect on delivered dose due to unplanned heterogeneities upstream in the beam path. The second experiment utilized plans made on CT scans of different heterogeneous setups, with one plan using the TMR10 dose calculation algorithm and the second using the convolution-based algorithm. Materials used to introduce heterogeneities included air, LDPE, polystyrene, Delrin, Teflon, and aluminum. Results: The data show that, as would be expected, having heterogeneities in the beam path does induce dose delivery error when using the TMR10 algorithm, with the largest errors being due to the heterogeneities with electron densities most different from that of water, i.e. air, Teflon, and aluminum. Additionally, the convolution algorithm did account for the heterogeneous material and provided a more accurate predicted dose, in extreme cases up to a 7-12% improvement over the TMR10 algorithm. The convolution algorithm expected dose was accurate to within 3% in all cases. Conclusion: This study proves that the convolution algorithm is an improvement over the TMR10 algorithm when heterogeneities are present. More work is needed to determine the heterogeneity size/volume limits within which this improvement exists, and in what clinical and/or research cases this would be relevant.

  12. Improvement of anthropometric and biochemical, but not of vitamin A, status in adolescents who undergo Roux-en-Y gastric bypass: a 1-year follow up study.

    PubMed

    Silva, Jacqueline Souza; Chaves, Gabriela Villaça; Stenzel, Ana Paula; Pereira, Silvia Elaine; Saboya, Carlos José; Ramalho, Andréa

    2017-02-01

    The aim of this study was to describe anthropometric, biochemical, co-morbidity, and vitamin A nutritional status in severely obese adolescents before and 30, 180, and 365 days after Roux-en-Y gastric bypass (RYGB). Federal University of Rio de Janeiro, Rio de Janeiro, Brazil. Sixty-four adolescents (15-19 years old) with a body mass index ≥ 40 kg/m² were enrolled in a prospective follow-up study. Vitamin A status was evaluated before surgery (T0), and 30 (T30), 180 (T180), and 365 (T365) days after surgery, applying biochemical and functional indicators. Anthropometric measures, lipid profile, glycemia, and basal insulin also were assessed. No patients were lost during follow-up. Before surgery, 26.6% of the sample group experienced vitamin A deficiency (VAD). Serum retinol levels dropped significantly 30 days after surgery and then returned to basal levels. There was a significant increase in the prevalence of β-carotene deficiency and night blindness throughout the postsurgery period. A significant reduction in blood glucose, insulin resistance, lipid profile, and anthropometric parameters was observed. The finding that oral daily supplementation with 5000 IU retinol acetate failed to reverse VAD and night blindness after RYGB is highly significant. We recommend assessment of VAD and night blindness in extremely obese adolescents before and after RYGB. We further recommend monitoring for an additional 180 days (for VAD) and 365 days (for night blindness) after surgery, with particular attention to daily supplementation needs. Copyright © 2017 American Society for Bariatric Surgery. Published by Elsevier Inc. All rights reserved.

  13. An Approach Toward Synthesis of Bridgmanite in Dynamic Compression Experiments

    NASA Astrophysics Data System (ADS)

    Reppart, J. J.

    2015-12-01

    Bridgmanite occurs in heavily shocked meteorites and provides a useful constraint on pressure-temperature conditions during shock metamorphism. Its occurrence also provides constraints on the shock release path. Shock release and shock duration are important parameters in estimating the size of impactors that generate the observed shock metamorphic record. Thus, it is timely to examine whether bridgmanite can be synthesized in dynamic compression experiments, with the goal of establishing a correlation between shock duration and grain size. Up to now only one high pressure polymorph of an Mg-silicate has been synthesized and recovered in a shock experiment (wadsleyite). Therefore, it is not given that shock synthesis of bridgmanite is possible. This project started recently, so we present an outline of shock experiment designs and, potentially, results from the first experiments. FUNDING ACKNOWLEDGMENT UNLV HiPSEC: This research was sponsored (or sponsored in part) by the National Nuclear Security Administration under the Stewardship Science Academic Alliances program through DOE Cooperative Agreement #DE-NA0001982. HPCAT: Portions of this work were performed at HPCAT (Sector 16), Advanced Photon Source (APS), Argonne National Laboratory. HPCAT operations are supported by DOE-NNSA under Award No. DE-NA0001974 and DOE-BES under Award No. DE-FG02-99ER45775, with partial instrumentation funding by NSF. APS is supported by DOE-BES under Contract No. DE-AC02-06CH11357.

  14. Cold Texturing under Triaxial Stress of the High-T_c Phase of Prereacted Bi(Pb)SrCaCuO (Texturation à froid sous contraintes triaxiales de phase à haute T_c de Bi(Pb)SrCaCuO préréagie)

    NASA Astrophysics Data System (ADS)

    Langlois, P.; Massat, H.; Suryanarayanan, R.

    1994-11-01

    The alignment of grains in isostatically precompacted samples of prereacted Bi{1,8}Pb{0,4}Sr{2,0}Ca{2,2}Cu{3,0}O{10,3 + x} powder has been achieved by compressive plastic deformation under isostatic pressure at room temperature. Isostatic pressures were in the range 0.1 to 1 GPa and deformation levels of up to 57 % were reached. Prior to sintering, X-ray diffraction measurements corroborate an expected high-T_c phase purity of nearly 85 % and indicate that the as-deformed samples have been textured with the c-axes parallel to the pressing direction, whilst a.c. susceptibility measurements ascertain a high transition temperature around 107 K. Intergranular connection does not occur until sintering at 850 °C for 80 h, and measurements indicate then that the texture has been retained. Superconducting properties themselves show sensitivity to texture through anisotropy-related distinctive irreversibility lines.

  15. Retrobulbar chlorpromazine in management of painful eye in blind or low vision patients.

    PubMed

    Ortiz, A; Galvis, V; Tello, A; Miro-Quesada, J J; Barrera, R; Ochoa, M

    2017-04-01

    To evaluate the results of applying retrobulbar chlorpromazine in the management of patients with painful blind eyes or with very poor vision. A retrospective, descriptive review was carried out on the medical records of 33 patients who were treated with a retrobulbar injection of chlorpromazine (25mg) for the management of painful blind eyes in Centro Oftalmológico Virgilio Galvis. Pain control was achieved in 90% of cases (with mean follow-up of 2.1 years). The mean intraocular pressure decreased by 37%. In 7 out of 12 eyes that maintained residual vision, loss of some degree of vision was acknowledged. One patient required an additional cyclodestructive procedure, another one required an absolute alcohol injection, and in an additional case evisceration surgery was necessary to achieve pain control. No serious complications were noted with this therapy. Retrobulbar injection of chlorpromazine is a valid option in painful, blind eye cases (or with very poor vision) with a poor visual prognosis. Copyright © 2016 Sociedad Española de Oftalmología. Publicado por Elsevier España, S.L.U. All rights reserved.

  16. High Performance Computing Software Applications for Space Situational Awareness

    NASA Astrophysics Data System (ADS)

    Giuliano, C.; Schumacher, P.; Matson, C.; Chun, F.; Duncan, B.; Borelli, K.; Desonia, R.; Gusciora, G.; Roe, K.

    The High Performance Computing Software Applications Institute for Space Situational Awareness (HSAI-SSA) has completed its first full year of applications development. The emphasis of our work in this first year was on improving space surveillance sensor models and image enhancement software. These applications are the Space Surveillance Network Analysis Model (SSNAM), the Air Force Space Fence simulation (SimFence), and the physically constrained iterative de-convolution (PCID) image enhancement software tool. Specifically, we have demonstrated order-of-magnitude speed-ups in those codes running on the latest Cray XD-1 Linux supercomputer (Hoku) at the Maui High Performance Computing Center. The software application improvements that HSAI-SSA has made have had a significant impact on the warfighter and have fundamentally changed the role of high performance computing in SSA.

  17. Discrete transparent boundary conditions for the mixed KDV-BBM equation

    NASA Astrophysics Data System (ADS)

    Besse, Christophe; Noble, Pascal; Sanchez, David

    2017-09-01

    In this paper, we consider artificial boundary conditions for the linearized mixed Korteweg-de Vries (KDV) and Benjamin-Bona-Mahoney (BBM) equation which models water waves in the small amplitude, large wavelength regime. Continuous (respectively discrete) artificial boundary conditions involve non-local operators in time, which in turn require computing time convolutions and inverting the Laplace transform of an analytic function (respectively the Z-transform of a holomorphic function). In this paper, we propose a new, stable and fairly general strategy to carry out this crucial step in the design of transparent boundary conditions. For large-time simulations, we also introduce a methodology based on the asymptotic expansion of the coefficients involved in exact discrete transparent boundary conditions. We illustrate the accuracy of our methods for Gaussian and wave packet initial data.

  18. Symposium on Turbulent Shear Flows (8th) Held in Munich, Germany on 9-11 September 1991. Volume 2. Sessions 19-31, Poster Sessions

    DTIC Science & Technology

    1991-09-01

    FLAMES IN THE ATMOSPHERE USING A SECOND MOMENT TURBULENCE MODEL; Hermilo RAMIREZ-LEON, Claude REY and Jean-François SINI, LABORATOIRE DE MECANIQUE DES…directly satisfied by an extended version of the artificial compressibility implicit method (Ramirez-Leon et al., 1991)…the isotropization-of-production concept. For the compressible fluid case, Ramirez-Leon et al. (1990)…With regard to Eq. (2), the right-hand side of

  19. VLSI single-chip (255,223) Reed-Solomon encoder with interleaver

    NASA Technical Reports Server (NTRS)

    Hsu, In-Shek (Inventor); Deutsch, Leslie J. (Inventor); Truong, Trieu-Kie (Inventor); Reed, Irving S. (Inventor)

    1990-01-01

    The invention relates to a concatenated Reed-Solomon/convolutional encoding system consisting of a Reed-Solomon outer code and a convolutional inner code for downlink telemetry in space missions, and more particularly to a Reed-Solomon encoder with programmable interleaving of the information symbols and code correction symbols to combat error bursts in the Viterbi decoder.
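
    The programmable interleaving the encoder performs can be illustrated with a plain row-column block interleaver; the depth and the symbol list below are illustrative assumptions rather than the chip's actual (255,223) symbol layout:

```python
def interleave(symbols, depth):
    """Row-column block interleaver: write symbols row-wise into a
    depth x width array, read them out column-wise. A burst of channel
    errors is thereby spread across `depth` different codewords."""
    width = len(symbols) // depth
    rows = [symbols[i * width:(i + 1) * width] for i in range(depth)]
    return [rows[r][c] for c in range(width) for r in range(depth)]

def deinterleave(symbols, depth):
    """Inverse of interleave: read column-major chunks back row-wise."""
    width = len(symbols) // depth
    cols = [symbols[c * depth:(c + 1) * depth] for c in range(width)]
    return [cols[c][r] for r in range(depth) for c in range(width)]

data = list(range(12))
assert deinterleave(interleave(data, depth=3), depth=3) == data
```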

  20. Deep feature representation with stacked sparse auto-encoder and convolutional neural network for hyperspectral imaging-based detection of cucumber defects

    USDA-ARS?s Scientific Manuscript database

    It is challenging to achieve rapid and accurate processing of large amounts of hyperspectral image data. This research was aimed to develop a novel classification method by employing deep feature representation with the stacked sparse auto-encoder (SSAE) and the SSAE combined with convolutional neur...

  1. A Real-Time Convolution Algorithm and Architecture with Applications in SAR Processing

    DTIC Science & Technology

    1993-10-01

    multidimensional lOnnulation of the DFT and convolution. IEEE-ASSP, ASSP-25(3):239-242, June 1977. [6] P. Hoogenboom et al. Definition study PHARUS: final...algorithms and Ihe role of lhe tensor product. IEEE-ASSP, ASSP-40( 1 2):292 J-2930, December 1992. 181 P. Hoogenboom , P. Snoeij. P.J. Koomen. and H

  2. Two-level convolution formula for nuclear structure function

    NASA Astrophysics Data System (ADS)

    Ma, Boqiang

    1990-05-01

    A two-level convolution formula for the nuclear structure function is derived by considering the nucleus as a composite system of baryons and mesons, which are in turn composite systems of quarks and gluons. The results show that the European Muon Collaboration (EMC) effect cannot be explained by nuclear effects such as nucleon Fermi motion and nuclear binding contributions.
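
    For orientation, a schematic of the convolution picture iterated over the two levels is sketched below in standard notation; this is our assumption of the generic formalism, not the paper's exact formula:

    ```latex
    % Level 1: nucleus A as a composite of baryon-meson constituents B.
    % Level 2: each constituent B as a composite of quarks and gluons q.
    F_2^A(x) = \sum_B \int_x^{A} dy \, f_{B/A}(y) \, F_2^B\!\left(\frac{x}{y}\right),
    \qquad
    F_2^B(z) = \sum_q \int_z^{1} du \, f_{q/B}(u) \, \hat{F}_2^{\,q}\!\left(\frac{z}{u}\right)
    ```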

  3. DSN telemetry system performance with convolutionally coded data

    NASA Technical Reports Server (NTRS)

    Mulhall, B. D. L.; Benjauthrit, B.; Greenhall, C. A.; Kuma, D. M.; Lam, J. K.; Wong, J. S.; Urech, J.; Vit, L. D.

    1975-01-01

    The results obtained to date and the plans for future experiments for the DSN telemetry system were presented. The performance of the DSN telemetry system in decoding convolutionally coded data by both sequential and maximum likelihood techniques is being determined by testing at various deep space stations. The evaluation of performance models is also an objective of this activity.

  4. Introducing DeBRa: a detailed breast model for radiological studies

    NASA Astrophysics Data System (ADS)

    Ma, Andy K. W.; Gunn, Spencer; Darambara, Dimitra G.

    2009-07-01

    Currently, x-ray mammography is the method of choice in breast cancer screening programmes. As the mammography technology moves from 2D imaging modalities to 3D, conventional computational phantoms do not have sufficient detail to support the studies of these advanced imaging systems. Studies of these 3D imaging systems call for a realistic and sophisticated computational model of the breast. DeBRa (Detailed Breast model for Radiological studies) is the most advanced, detailed, 3D computational model of the breast developed recently for breast imaging studies. A DeBRa phantom can be constructed to model a compressed breast, as in film/screen, digital mammography and digital breast tomosynthesis studies, or a non-compressed breast as in positron emission mammography and breast CT studies. Both the cranial-caudal and mediolateral oblique views can be modelled. The anatomical details inside the phantom include the lactiferous duct system, the Cooper ligaments and the pectoral muscle. The fibroglandular tissues are also modelled realistically. In addition, abnormalities such as microcalcifications, irregular tumours and spiculated tumours are inserted into the phantom. Existing sophisticated breast models require specialized simulation codes. Unlike its predecessors, DeBRa has elemental compositions and densities incorporated into its voxels including those of the explicitly modelled anatomical structures and the noise-like fibroglandular tissues. The voxel dimensions are specified as needed by any study and the microcalcifications are embedded into the voxels so that the microcalcification sizes are not limited by the voxel dimensions. Therefore, DeBRa works with general-purpose Monte Carlo codes. Furthermore, general-purpose Monte Carlo codes allow different types of imaging modalities and detector characteristics to be simulated with ease. DeBRa is a versatile and multipurpose model specifically designed for both x-ray and γ-ray imaging studies.

  5. Two-dimensional convolute integers for analytical instrumentation

    NASA Technical Reports Server (NTRS)

    Edwards, T. R.

    1982-01-01

    As new analytical instruments and techniques emerge with increased dimensionality, a corresponding need is seen for data processing logic which can appropriately address the data. Two-dimensional measurements reveal enhanced unknown mixture analysis capability as a result of the greater spectral information content over two one-dimensional methods taken separately. It is noted that two-dimensional convolute integers are merely an extension of the work by Savitzky and Golay (1964). It is shown that these low-pass, high-pass and band-pass digital filters are truly two-dimensional and that they can be applied in a manner identical with their one-dimensional counterpart, that is, a weighted nearest-neighbor, moving average with zero phase shifting, convoluted integer (universal number) weighting coefficients.
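
    A minimal sketch of how such two-dimensional convolute-integer (Savitzky-Golay-style) weights can be generated by a least-squares polynomial fit follows; the window size and polynomial order are illustrative choices:

    ```python
    import numpy as np

    def convolute_integers_2d(window=5, order=2):
        """Least-squares weights for 2-D polynomial smoothing: fit a bivariate
        polynomial of total degree <= order over a window x window neighbourhood;
        the smoothed centre value is then a fixed linear combination of the data."""
        half = window // 2
        pts = [(x, y) for y in range(-half, half + 1) for x in range(-half, half + 1)]
        # design matrix of monomials x**i * y**j with i + j <= order;
        # the constant term (i = j = 0) comes first
        terms = [(i, j) for i in range(order + 1) for j in range(order + 1 - i)]
        A = np.array([[x**i * y**j for (i, j) in terms] for (x, y) in pts], float)
        # row 0 of the pseudo-inverse evaluates the fitted polynomial at (0, 0)
        return np.linalg.pinv(A)[0].reshape(window, window)

    weights = convolute_integers_2d()  # apply as an ordinary moving-window convolution
    ```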

  6. A convolutional neural network neutrino event classifier

    DOE PAGES

    Aurisano, A.; Radovic, A.; Rocco, D.; ...

    2016-09-01

    Here, convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of the CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network) identifies neutrino interactions based on their topology without the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.

  7. Airplane detection in remote sensing images using convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Ouyang, Chao; Chen, Zhong; Zhang, Feng; Zhang, Yifei

    2018-03-01

    Airplane detection in remote sensing images remains a challenging problem and has attracted great interest from researchers. In this paper we propose an effective method to detect airplanes in remote sensing images using convolutional neural networks. Deep learning methods show greater advantages than traditional methods with the rise of deep neural networks in target detection, and we give an explanation of why this happens. To improve airplane detection performance, we combine a region proposal algorithm with convolutional neural networks. In the training phase, we divide the background into multiple classes rather than one, which reduces false alarms. Our experimental results show that the proposed method is effective and robust in detecting airplanes.

  8. Convolutional neural networks applied to neutrino events in a liquid argon time projection chamber

    DOE PAGES

    Acciarri, R.; Adams, C.; An, R.; ...

    2017-03-14

    Here, we present several studies of convolutional neural networks applied to data coming from the MicroBooNE detector, a liquid argon time projection chamber (LArTPC). The algorithms studied include the classification of single particle images, the localization of single particle and neutrino interactions in an image, and the detection of a simulated neutrino event overlaid with cosmic ray backgrounds taken from real detector data. These studies demonstrate the potential of convolutional neural networks for particle identification or event detection on simulated neutrino interactions. Lastly, we also address technical issues that arise when applying this technique to data from a large LArTPC at or near ground level.

  9. Video-based convolutional neural networks for activity recognition from robot-centric videos

    NASA Astrophysics Data System (ADS)

    Ryoo, M. S.; Matthies, Larry

    2016-05-01

    In this evaluation paper, we discuss convolutional neural network (CNN)-based approaches for human activity recognition. In particular, we investigate CNN architectures designed to capture temporal information in videos and their applications to the human activity recognition problem. Multiple previous works have used CNN features for videos. These include CNNs using 3-D XYT convolutional filters, CNNs using pooling operations on top of per-frame image-based CNN descriptors, and recurrent neural networks that learn temporal changes in per-frame CNN descriptors. We experimentally compare some of these representative CNNs on first-person human activity videos. We especially focus on videos from a robot's viewpoint, captured during its operations and human-robot interactions.

  10. Gas Classification Using Deep Convolutional Neural Networks.

    PubMed

    Peng, Pai; Zhao, Xiaojin; Pan, Xiaofang; Ye, Wenbin

    2018-01-08

    In this work, we propose a novel Deep Convolutional Neural Network (DCNN) tailored for gas classification. Inspired by the great success of DCNNs in the field of computer vision, we designed a DCNN with up to 38 layers. The proposed gas neural network, named GasNet, consists of six convolutional blocks, each consisting of six layers, followed by a pooling layer and a fully-connected layer. Together, these layers make up a powerful deep model for gas classification. Experimental results show that the proposed DCNN method is an effective technique for classifying electronic nose data. We also demonstrate that the DCNN method can provide higher classification accuracy than comparable Support Vector Machine (SVM) methods and Multiple Layer Perceptron (MLP).
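
    A hedged PyTorch sketch of a GasNet-like stack is shown below; the channel counts, the sensors-as-channels 1-D input layout, and the reading of "six layers per block" are our assumptions, not the authors' specification:

    ```python
    import torch
    import torch.nn as nn

    def conv_block(c_in, c_out):
        # "six layers" per block approximated here as conv/BN/ReLU twice
        return nn.Sequential(
            nn.Conv1d(c_in, c_out, kernel_size=3, padding=1),
            nn.BatchNorm1d(c_out), nn.ReLU(),
            nn.Conv1d(c_out, c_out, kernel_size=3, padding=1),
            nn.BatchNorm1d(c_out), nn.ReLU(),
        )

    class GasNetSketch(nn.Module):
        def __init__(self, n_sensors=16, n_classes=10):
            super().__init__()
            chans = [n_sensors, 32, 64, 64, 128, 128, 256]   # six blocks
            self.blocks = nn.Sequential(*[conv_block(a, b)
                                          for a, b in zip(chans, chans[1:])])
            self.pool = nn.AdaptiveAvgPool1d(1)              # the pooling layer
            self.fc = nn.Linear(chans[-1], n_classes)        # the fully-connected layer

        def forward(self, x):            # x: (batch, sensors, time)
            h = self.pool(self.blocks(x)).squeeze(-1)
            return self.fc(h)
    ```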

  11. Gas Classification Using Deep Convolutional Neural Networks

    PubMed Central

    Peng, Pai; Zhao, Xiaojin; Pan, Xiaofang; Ye, Wenbin

    2018-01-01

    In this work, we propose a novel Deep Convolutional Neural Network (DCNN) tailored for gas classification. Inspired by the great success of DCNNs in the field of computer vision, we designed a DCNN with up to 38 layers. The proposed gas neural network, named GasNet, consists of six convolutional blocks, each consisting of six layers, followed by a pooling layer and a fully-connected layer. Together, these layers make up a powerful deep model for gas classification. Experimental results show that the proposed DCNN method is an effective technique for classifying electronic nose data. We also demonstrate that the DCNN method can provide higher classification accuracy than comparable Support Vector Machine (SVM) methods and Multiple Layer Perceptron (MLP). PMID:29316723

  12. Applications of deep convolutional neural networks to digitized natural history collections.

    PubMed

    Schuettpelz, Eric; Frandsen, Paul B; Dikow, Rebecca B; Brown, Abel; Orli, Sylvia; Peters, Melinda; Metallo, Adam; Funk, Vicki A; Dorr, Laurence J

    2017-01-01

    Natural history collections contain data that are critical for many scientific endeavors. Recent efforts in mass digitization are generating large datasets from these collections that can provide unprecedented insight. Here, we present examples of how deep convolutional neural networks can be applied in analyses of imaged herbarium specimens. We first demonstrate that a convolutional neural network can detect mercury-stained specimens across a collection with 90% accuracy. We then show that such a network can correctly distinguish two morphologically similar plant families 96% of the time. Discarding the most challenging specimen images increases accuracy to 94% and 99%, respectively. These results highlight the importance of mass digitization and deep learning approaches and reveal how they can together deliver powerful new investigative tools.

  13. A convolutional neural network neutrino event classifier

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aurisano, A.; Radovic, A.; Rocco, D.

    Here, convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of the CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network) identifies neutrino interactions based on their topology without the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.

  14. Clinicopathologic correlations in Alibert-type mycosis fungoides.

    PubMed

    Eng, A M; Blekys, I; Worobec, S M

    1981-06-01

    Five cases of mycosis fungoides of the Alibert type were studied by taking multiple biopsy specimens at different stages of the disease. Large, hyperchromatic, slightly irregular mononuclear cells were the most frequent cells. Ultrastructurally, these cells were only slightly convoluted, had prominent heterochromatin banding at the nuclear membrane, and unremarkable cytoplasmic organelles. Highly convoluted cerebriform nucleated cells were few. Large, regular, vesicular histiocytes were prominent in the early stages. Ultrastructurally, these cells showed evenly distributed euchromatin. Epidermotropism was equally as important as Pautrier's abscess as a hallmark of the disease. Stereologic techniques comparing the infiltrate with regard to size and convolution of cells in all stages of mycosis fungoides with infiltrates seen in a variety of benign dermatoses showed no statistically significant differences.

  15. Deep Learning with Hierarchical Convolutional Factor Analysis

    PubMed Central

    Chen, Bo; Polatkan, Gungor; Sapiro, Guillermo; Blei, David; Dunson, David; Carin, Lawrence

    2013-01-01

    Unsupervised multi-layered (“deep”) models are considered for general data, with a particular focus on imagery. The model is represented using a hierarchical convolutional factor-analysis construction, with sparse factor loadings and scores. The computation of layer-dependent model parameters is implemented within a Bayesian setting, employing a Gibbs sampler and variational Bayesian (VB) analysis, that explicitly exploit the convolutional nature of the expansion. In order to address large-scale and streaming data, an online version of VB is also developed. The number of basis functions or dictionary elements at each layer is inferred from the data, based on a beta-Bernoulli implementation of the Indian buffet process. Example results are presented for several image-processing applications, with comparisons to related models in the literature. PMID:23787342

  16. Toward Sodium X-Ray Diffraction in the High-Pressure Regime

    NASA Astrophysics Data System (ADS)

    Gong, X.; Polsin, D. N.; Rygg, J. R.; Boehly, T. R.; Crandall, L.; Henderson, B. J.; Hu, S. X.; Huff, M.; Saha, R.; Collins, G. W.; Smith, R.; Eggert, J.; Lazicki, A. E.; McMahon, M.

    2017-10-01

    We are working to quasi-isentropically compress sodium into the terapascal regime to test theoretical predictions that sodium transforms to an electride. A series of hydrodynamic simulations have been performed to design experiments to investigate the structure and optical properties of sodium at pressures up to 500 GPa. We show preliminary results where sodium samples, sandwiched between diamond plates and lithium-fluoride windows, are ramp compressed by a gradual increase in the drive-laser intensity. The low sound speed in sodium makes it particularly susceptible to forming a shock; therefore, it is difficult to compress without melting the sample. Powder x-ray diffraction is used to provide information on the structure of sodium at these high pressures. This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0001944.

  17. Simulations of Bubble Motion in an Oscillating Liquid

    NASA Astrophysics Data System (ADS)

    Kraynik, A. M.; Romero, L. A.; Torczynski, J. R.

    2010-11-01

    Finite-element simulations are used to investigate the motion of a gas bubble in a liquid undergoing vertical vibration. The effect of bubble compressibility is studied by comparing "compressible" bubbles that obey the ideal gas law with "incompressible" bubbles that are taken to have constant volume. Compressible bubbles exhibit a net downward motion away from the free surface that does not exist for incompressible bubbles. Net (rectified) velocities are extracted from the simulations and compared with theoretical predictions. The dependence of the rectified velocity on ambient gas pressure, bubble diameter, and bubble depth are in agreement with the theory. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  18. Seismic data compression speeds exploration projects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Galibert, P.Y.

    As part of an ongoing commitment to ensure industry-wide distribution of its revolutionary seismic data compression technology, Chevron Petroleum Technology Co. (CPTC) has entered into licensing agreements with Compagnie Generale de Geophysique (CGG) and other seismic contractors for use of its software in oil and gas exploration programs. CPTC expects use of the technology to be far-reaching for all of its industry partners involved in seismic data collection, processing, analysis and storage. Here, CGG--one of the world's leading seismic acquisition and processing companies--talks about its success in applying the new methodology to replace full on-board seismic processing. Chevron's technology is already being applied on large off-shore 3-D seismic surveys. Worldwide, CGG has acquired more than 80,000 km of seismic data using the data compression technology.

  19. Management of moderate and severe dysthyroid orbitopathy: a report of 22 cases

    PubMed Central

    Daldoul, Nadia; Knani, Leila; Gatfaoui, Faten; Mahjoub, Hechmi

    2017-01-01

    To describe the therapeutic management of moderate and severe dysthyroid orbitopathy and to evaluate, through a statistical study, the factors associated with optic neuropathy as well as the factors of poor visual prognosis. We conducted a retrospective study of 22 patients with moderate to severe dysthyroid ophthalmopathy in at least one eye, hospitalized in the ophthalmology department of CHU Farhat Hached Sousse over a period extending from 1998 to 2015. Therapeutic indications were based on the EUGOGO activity and severity criteria as well as on the evaluation of factors of poor visual prognosis. The mean age of our patients was 40 years, with a slight male predominance (54.5%). 68.2% of patients were euthyroid and 18.2% were smokers. The factor most significantly associated with neuropathy was compression at the orbital apex (P = 0.03). Treatment was based on intravenous corticosteroids and/or orbital decompression, depending on the activity and severity of the disease. The overall course after treatment was marked by an improvement in inflammatory signs and a reduction of the exophthalmos. The visual prognosis was worse in older patients (P = 0.0001), in males (P = 0.03) and in those treated with radioactive iodine (P = 0.04). Within the limits of a retrospective study, our results were broadly consistent with the literature. Dysthyroid orbitopathy remains a disease whose assessment and therapeutic management are not yet well elucidated. Cohort studies, probably multicentric, should be considered to improve management. PMID:29187926

  20. Integrated Fast-Ignition Core-Heating Experiments on OMEGA

    NASA Astrophysics Data System (ADS)

    Theobald, W.

    2010-11-01

    Integrated fast-ignition core-heating experiments are carried out at the Omega Laser Facility. Plastic (CD) shell targets with a re-entrant gold cone are compressed with a ~20-kJ, UV low-adiabat laser pulse. A 1-kJ, 10-ps pulse from OMEGA EP generates fast electrons in the hollow cone that are transported into the compressed core. The experiments demonstrate a significant enhancement of the neutron yield. The neutron-yield enhancement caused by the high-intensity pulse is 1.5 x 10^7, which is more than 150% of the implosion yield. For the first time, measurements of the breakout time of the compression-induced shock wave through the cone were performed for the same targets as used in the integrated experiments. The shock breakout was measured to be ~100 ps after peak neutron production. The experiments demonstrate that the cone tip is intact at the time when the short-pulse laser interacts with the cone. This work was supported by the U.S. Department of Energy Office of Inertial Confinement Fusion under Cooperative Agreement Nos. DE-FC52-08NA28302, DE-FC02-04ER54789, and DE-FG02-05ER54839. In collaboration with A. A. Solodov, K. S. Anderson, R. Betti (LLE/FSC); C. Stoeckl, T.R. Boehly, R.S. Craxton, J.A. Delettrez, V.Yu. Glebov, J.P. Knauer, F.J. Marshall, K.L. Marshall, D.D. Meyerhofer, P.M. Nilson, T.C. Sangster, W. Seka (LLE); F.N. Beg (UCSD), H. Habara (ILE), P.K. Patel (LLNL), R.B. Stephens (GA); J.A. Frenje, N. Sinenian (PSFC/MIT).

  1. Zn influence on the plasticity of Cd{0.96}Zn{0.04}Te

    NASA Astrophysics Data System (ADS)

    Imhoff, D.; Zozime, A.; Triboulet, R.

    1991-11-01

    Compression tests were performed on CdTe and Cd{0.96}Zn{0.04}Te to elucidate the mechanism through which Zn inhibits dislocation formation and motion during CdTe crystal growth, thus leading to a decrease in the dislocation density. Uniaxial deformation experiments performed on CdTe and CdZnTe at constant strain rate within a wide temperature range (0.14 T_m ≤ T ≤ 0.87 T_m, T_m = 1365 K) revealed a strong hardening effect of Zn over the whole temperature range. They also showed in CdZnTe a Portevin-Le Chatelier effect between 770 K and 920 K, confirmed by static strain aging experiments. Critical resolved shear stress (C.R.S.S.) values at T = 195 K and static strain aging results with CdZnTe point to the size effect as the dominant interaction between Zn and dislocations. Thermal activation parameters were estimated in both materials.

  2. [Constitutional syndrome as a presentation of a cerebellopontine meningioma].

    PubMed

    Ruiz-Serrato, A; Mata-Palma, A; Olmedo-Llanes, J; García-Ordóñez, M A

    2014-03-01

    Meningiomas are basically benign tumours arising in the meninges and account for 15-25% of intracranial tumours in adults. Their clinical signs are due to compression of the neighbouring structures, with the main symptoms being migraine, behavioural changes, and neurological deficits. We present a case where constitutional syndrome was the first and principal manifestation of an intracranial cerebellopontine meningioma. Copyright © 2012 Sociedad Española de Médicos de Atención Primaria (SEMERGEN). Published by Elsevier España. All rights reserved.

  3. Investigation of Compressible Fluids for Use in soft Recoil Mechanisms

    DTIC Science & Technology

    1977-09-01

    [Only OCR fragments of this report survive.] Recoverable fragments mention a fluid from E.I. du Pont deNemours & Co., Wilmington, DE, available only in limited stocks and no longer produced; a sample cooled in a dry-ice bath, transported to the laboratory, connected to the gas buret and allowed to warm to room temperature, where the gas volume was measured; and the MIL-H-5606 fluid, included for comparison with published bulk modulus data, with a note that the flash points of some candidate fluids are very low.

  4. A Novel Multivoxel-Based Quantitation of Metabolites and Lipids Noninvasively Combined with Diffusion-Weighted Imaging in Breast Cancer

    DTIC Science & Technology

    2013-10-01

    [Only OCR fragments of this report survive.] Recoverable fragments describe work in breast cancer aimed at improving overall specificity; recent work has focused on testing retrospective Maximum Entropy and Compressed Sensing reconstruction of the 4D data, which increases the entropy or sparsity of the reconstructed spectrum by narrowing peak linewidths and de-noising smaller features; the constraints can be "tightened" beyond the standard deviation of the noise in an effort to reduce the RMSE and reconstruction non-linearity, but this prevents the ...

  5. Application of the Convolution Formalism to the Ocean Tide Potential: Results from the Gravity Recovery and Climate Experiment (GRACE)

    NASA Technical Reports Server (NTRS)

    Desai, S. D.; Yuan, D. -N.

    2006-01-01

    A computationally efficient approach to reducing omission errors in ocean tide potential models is derived and evaluated using data from the Gravity Recovery and Climate Experiment (GRACE) mission. Ocean tide height models are usually explicitly available at a few frequencies, and a smooth unit response is assumed to infer the response across the tidal spectrum. The convolution formalism of Munk and Cartwright (1966) models this response function with a Fourier series. This allows the total ocean tide height, and therefore the total ocean tide potential, to be modeled as a weighted sum of past, present, and future values of the tide-generating potential. Previous applications of the convolution formalism have usually been limited to tide height models, but we extend it to ocean tide potential models. We use luni-solar ephemerides to derive the required tide-generating potential so that the complete spectrum of the ocean tide potential is efficiently represented. In contrast, the traditionally adopted harmonic model of the ocean tide potential requires the explicit sum of the contributions from individual tidal frequencies. It is therefore subject to omission errors from neglected frequencies and is computationally more intensive. Intersatellite range rate data from the GRACE mission are used to compare convolution and harmonic models of the ocean tide potential. The monthly range rate residual variance is smaller by 4-5%, and the daily residual variance is smaller by as much as 15% when using the convolution model than when using a harmonic model that is defined by twice the number of parameters.
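
    A minimal numpy sketch of the response-method idea, modelling the tide height as a weighted sum of lagged values of the tide-generating potential, follows; the lag spacing, number of lags, and weights are placeholders, not values from the paper:

    ```python
    import numpy as np

    def response_tide(potential, weights, lag_step):
        """Munk-Cartwright-style response: tide height as a weighted sum of
        past, present and future values of the tide-generating potential,
        h(t) = sum_k w_k * V(t - k * lag_step).  Edges wrap via np.roll,
        which is acceptable for a long series in a sketch."""
        n_lags = len(weights)                 # use an odd number of lags
        half = n_lags // 2
        h = np.zeros_like(potential, dtype=float)
        for k, w in zip(range(-half, half + 1), weights):
            h += w * np.roll(potential, k * lag_step)
        return h
    ```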

  6. Compressed air blast injury with palpebral, orbital, facial, cervical, and mediastinal emphysema through an eyelid laceration: a case report and review of literature

    PubMed Central

    2013-01-01

    Background To the best of our knowledge, only 14 cases of orbital or periorbital compressed air injuries from air guns or hoses have been reported in the literature. Case presentation A 30-year-old man was accidentally injured when a compressed air hose nozzle hit his right eye. The right half of his face was markedly swollen and a skin laceration near the right medial canthus was identified. A computed tomography scan showed subcutaneous and intraorbital emphysema around the right eye as well as cervical and mediastinal emphysema. He was prophylactically treated with systemic and topical antibiotics to prevent infection. All emphysemas had completely resolved 2 weeks after the injury. Conclusions A review of all 15 cases (including ours) showed that all patients were male and that 6 of the 15 (40.0%) cases were related to industrial accidents. Although emphysema was restricted to the subconjunctival space in 2 (13.3%) cases, it spread to the orbit in the remaining 13 (86.7%) cases. Cervical and mediastinal emphysemas were found in 3 (20.0%) cases, and intracranial emphysema was confirmed in 6 (40.0%) cases. Prophylactic antibiotics were used in most cases and the prognosis was generally good in all but one patient, who developed optic atrophy and blindness. PMID:24195485

  7. Chest compressions in newborn animal models: A review.

    PubMed

    Solevåg, Anne Lee; Cheung, Po-Yin; Lie, Helene; O'Reilly, Megan; Aziz, Khalid; Nakstad, Britt; Schmölzer, Georg Marcus

    2015-11-01

    Much of the knowledge about the optimal way to perform chest compressions (CC) in newborn infants is derived from animal studies. The objective of this review was to identify studies of CC in newborn term animal models and review the evidence. We also provide an overview of the different models. Data sources: MEDLINE, EMBASE and CINAHL, searched until September 29th, 2014. Study eligibility criteria and interventions: term newborn animal models in which CC was performed. Based on 419 studies retrieved from MEDLINE and 502 from EMBASE, 28 studies were included. No additional studies were identified in CINAHL. Most of the studies were performed in pigs after perinatal transition, without long-term follow-up. The models differed widely in methodological aspects, which limits the possibility of comparing and synthesizing findings. Studies uncommonly reported the method of randomization and allocation concealment, and a limited number were blinded. Only the evidence in favour of the two-thumb encircling hands technique for performing CC, a CC-to-ventilation ratio of 3:1, and the use of air for ventilation during CC was supported by more than one study. Animal studies should be performed and reported with the same rigor as human randomized trials. Good transitional and survival models are needed to further increase the strength of the evidence derived from animal studies of newborn chest compressions. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  8. Thromboprophylaxis using combined intermittent pneumatic compression and pharmacologic prophylaxis versus pharmacologic prophylaxis alone in critically ill patients: study protocol for a randomized controlled trial.

    PubMed

    Arabi, Yaseen M; Alsolamy, Sami; Al-Dawood, Abdulaziz; Al-Omari, Awad; Al-Hameed, Fahad; Burns, Karen E A; Almaani, Mohammed; Lababidi, Hani; Al Bshabshe, Ali; Mehta, Sangeeta; Al-Aithan, Abdulsalam M; Mandourah, Yasser; Almekhlafi, Ghaleb; Finfer, Simon; Abdukahil, Sheryl Ann I; Afesh, Lara Y; Dbsawy, Maamoun; Sadat, Musharaf

    2016-08-03

    Venous thromboembolism (VTE) remains a common problem in critically ill patients. Pharmacologic prophylaxis is currently the standard of care based on high-level evidence from randomized controlled trials. However, limited evidence exists regarding the effectiveness of intermittent pneumatic compression (IPC) devices. The Pneumatic compREssion for preventing VENous Thromboembolism (PREVENT trial) aims to determine whether the adjunct use of IPC with pharmacologic prophylaxis compared to pharmacologic prophylaxis alone in critically ill patients reduces the risk of VTE. The PREVENT trial is a multicenter randomized controlled trial, which will recruit 2000 critically ill patients from over 20 hospitals in three countries. The primary outcome is the incidence of proximal lower extremity deep vein thrombosis (DVT) within 28 days after randomization. Radiologists interpreting the scans are blinded to intervention allocation, whereas the patients and caregivers are unblinded. The trial has 80% power to detect a 3% absolute risk reduction in proximal DVT from 7% to 4%. The first patient was enrolled in July 2014. As of May 2015, a total of 650 patients have been enrolled from 13 centers in Saudi Arabia, Canada and Australia. The first interim analysis is anticipated in July 2016. We expect to complete recruitment by 2018. Clinicaltrials.gov: NCT02040103 (registered on 3 November 2013). Current controlled trials: ISRCTN44653506 (registered on 30 October 2013).

  9. Diuretic versus placebo in normotensive acute pulmonary embolism with right ventricular enlargement and injury: a double-blind randomised placebo controlled study. Protocol of the DiPER study.

    PubMed

    Gallet, Romain; Meyer, Guy; Ternacle, Julien; Biendel, Caroline; Brunet, Anne; Meneveau, Nicolas; Rosario, Roger; Couturaud, Francis; Sebbane, Mustapha; Lamblin, Nicolas; Bouvaist, Helene; Coste, Pierre; Maitre, Bernard; Bastuji-Garin, Sylvie; Dubois-Rande, Jean-Luc; Lim, Pascal

    2015-05-22

    In acute pulmonary embolism (PE), poor outcome is usually related to right ventricular (RV) failure due to the increase in RV afterload. Treatment of PE with RV failure without shock is controversial and usually relies on fluid expansion to increase RV preload. However, several studies suggest that fluid expansion may worsen acute RV failure by increasing RV dilation and ischaemia, and increase left ventricular compression by RV dilation. By reducing RV enlargement, diuretic treatment may break this vicious circle and provide early improvement in normotensive patients referred for acute PE with RV failure. The Diuretic versus placebo in Pulmonary Embolism with Right ventricular enlargement trial (DiPER) is a prospective, multicentre, randomised (1:1), double-blind, placebo controlled study assessing the superiority of furosemide as compared with placebo in normotensive patients with confirmed acute PE and RV dilation (diagnosed on echocardiography or CT of the chest) and positive brain natriuretic peptide result. The primary end point will be a combined clinical criterion derived from simplified Pulmonary Embolism Severity Index (PESI) score and evaluated at 24 h. It will include: (1) urine output >0.5 mL/kg/min for the past 24 h; (2) heart rate <110 bpm; (3) systolic blood pressure >100 mm Hg and (4) arterial oxyhaemoglobin level >90%. Thirty-day major cardiac events defined as death, cardiac arrest, mechanical ventilation, need for catecholamine and thrombolysis, will be evaluated as a secondary end point. Assuming an increase of 30% in the primary end point with furosemide and a β risk of 10%, 270 patients will be required. Ethical approval was received from the ethical committee of Ile de France (2014-001090-14). The findings of the trial will be disseminated through peer-reviewed journals, and national and international conference presentations. NCT02268903. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  10. The VLSI design of an error-trellis syndrome decoder for certain convolutional codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Jensen, J. M.; Hsu, I.-S.; Truong, T. K.

    1986-01-01

    A recursive algorithm using the error-trellis decoding technique is developed to decode convolutional codes (CCs). An example, illustrating the very large scale integration (VLSI) architecture of such a decoder, is given for a dual-K CC. It is demonstrated that such a decoder can be realized readily on a single chip with metal-nitride-oxide-semiconductor technology.

  11. System Design for FEC in Aeronautical Telemetry

    DTIC Science & Technology

    2012-03-12

    [Only OCR fragments of this report survive.] Recoverable fragments discuss rate-punctured convolutional codes for soft-decision Viterbi decoding; the treatment follows that given in [8]: the final coding rate of exactly 2/3 is achieved by puncturing the rate-1/2 code, beginning with the buffer c1 ...; the system uses a serially concatenated convolutional code (SCCC). The contributions of the paper are at the system-design level; one major contribution is the design of an SCCC code.
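
    For reference, a minimal sketch of puncturing a rate-1/2 mother code to rate 2/3, and of reinserting erasures before decoding, is given below; the puncturing pattern is illustrative, not the one used in the report:

    ```python
    def puncture(coded_bits, pattern=(1, 1, 1, 0)):
        """Keep the coded bits where the repeating pattern has a 1.  Starting
        from a rate-1/2 mother code (2 output bits per input bit), keeping
        3 of every 4 output bits yields 2 input bits per 3 transmitted bits,
        i.e. an overall rate of 2/3."""
        return [b for i, b in enumerate(coded_bits) if pattern[i % len(pattern)]]

    def depuncture(received, n_coded, pattern=(1, 1, 1, 0), erasure=None):
        """Reinsert erasures at punctured positions before Viterbi decoding."""
        it = iter(received)
        return [next(it) if pattern[i % len(pattern)] else erasure
                for i in range(n_coded)]
    ```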

  12. Convolutional coding results for the MVM '73 X-band telemetry experiment

    NASA Technical Reports Server (NTRS)

    Layland, J. W.

    1978-01-01

    Results of simulation of several short-constraint-length convolutional codes using a noisy symbol stream obtained via the turnaround ranging channels of the MVM'73 spacecraft are presented. First operational use of this coding technique is on the Voyager mission. The relative performance of these codes in this environment is as previously predicted from computer-based simulations.

  13. The VLSI design of error-trellis syndrome decoding for convolutional codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Jensen, J. M.; Truong, T. K.; Hsu, I. S.

    1985-01-01

    A recursive algorithm using the error-trellis decoding technique is developed to decode convolutional codes (CCs). An example, illustrating the very large scale integration (VLSI) architecture of such a decoder, is given for a dual-K CC. It is demonstrated that such a decoder can be realized readily on a single chip with metal-nitride-oxide-semiconductor technology.

  14. A deep learning method for early screening of lung cancer

    NASA Astrophysics Data System (ADS)

    Zhang, Kunpeng; Jiang, Huiqin; Ma, Ling; Gao, Jianbo; Yang, Xiaopeng

    2018-04-01

    Lung cancer is the leading cause of cancer-related deaths among men. In this paper, we propose a pulmonary nodule detection method for early screening of lung cancer based on an improved AlexNet model. In order to maintain the same image quality as the existing B/S-architecture PACS system, we first convert the original CT image into JPEG format by parsing the DICOM file. Second, in view of the large size and complex background of chest CT images, we design the convolutional neural network on the basis of the AlexNet model and a sparse convolution structure. Finally, we train our models with DIGITS, software provided by NVIDIA. The main contribution of this paper is to apply the convolutional neural network to early screening of lung cancer and to improve the screening accuracy by combining the AlexNet model with the sparse convolution structure. We perform a series of experiments on chest CT images using the proposed method; the resulting sensitivity and specificity indicate that the method can effectively improve the accuracy of early screening for lung cancer and has clinical significance.
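
    A hedged sketch of the DICOM-to-JPEG preprocessing step is shown below, assuming the pydicom and Pillow libraries; the window/level values are our assumptions, not the paper's settings:

    ```python
    import numpy as np
    import pydicom
    from PIL import Image

    def dicom_to_jpeg(dcm_path, jpg_path, center=-600, width=1500):
        """Convert one CT slice to an 8-bit JPEG using a lung window
        (center/width in Hounsfield units are illustrative)."""
        ds = pydicom.dcmread(dcm_path)
        hu = ds.pixel_array * ds.RescaleSlope + ds.RescaleIntercept  # to HU
        lo, hi = center - width / 2, center + width / 2
        img = np.clip((hu - lo) / (hi - lo), 0, 1) * 255             # window/level
        Image.fromarray(img.astype(np.uint8)).save(jpg_path, "JPEG")
    ```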

  15. Robust hepatic vessel segmentation using multi deep convolution network

    NASA Astrophysics Data System (ADS)

    Kitrungrotsakul, Titinunt; Han, Xian-Hua; Iwamoto, Yutaro; Foruzan, Amir Hossein; Lin, Lanfen; Chen, Yen-Wei

    2017-03-01

    Extraction of the blood vessels of an organ is a challenging task in medical image processing. It is difficult to obtain accurate vessel segmentation results even with manual labeling by a human. The difficulty of vessel segmentation lies in the complicated structure of blood vessels and their large variations, which make them hard to recognize. In this paper, we present a deep artificial neural network architecture to automatically segment the hepatic vessels from a computed tomography (CT) image. We propose a novel deep neural network (DNN) architecture for vessel segmentation from a medical CT volume, which consists of three deep convolutional neural networks that extract features from different planes of the CT data. The three networks share features at the first convolution layer but separately learn their own features in the second layer. All three networks join again at the top layer. To validate the effectiveness and efficiency of our proposed method, we conduct experiments on 12 CT volumes, of which 5 are randomly selected to generate training data and 7 are used for testing. Our network yields an average Dice coefficient of 0.830, while a 3D deep convolutional neural network yields around 0.7 and a multi-scale approach yields only 0.6.
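
    A minimal PyTorch sketch of the tri-planar idea, three branches sharing the first convolution and joining at the top, follows; the channel counts and the classification head are assumptions, not the authors' configuration:

    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TriPlanarNet(nn.Module):
        """Three 2-D branches (one per CT plane) share the first convolution,
        apply separate second convolutions, and join at the top for a
        vessel / non-vessel decision."""
        def __init__(self):
            super().__init__()
            self.shared = nn.Conv2d(1, 16, kernel_size=5, padding=2)   # shared layer 1
            self.second = nn.ModuleList(
                [nn.Conv2d(16, 32, kernel_size=5, padding=2) for _ in range(3)]
            )                                                          # per-plane layer 2
            self.top = nn.Linear(3 * 32, 2)                            # joint top layer

        def forward(self, planes):
            # planes: list of three (batch, 1, H, W) patches, one per plane
            feats = [F.relu(conv(F.relu(self.shared(p)))).mean(dim=(2, 3))
                     for p, conv in zip(planes, self.second)]
            return self.top(torch.cat(feats, dim=1))
    ```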

  16. Joint multiple fully connected convolutional neural network with extreme learning machine for hepatocellular carcinoma nuclei grading.

    PubMed

    Li, Siqi; Jiang, Huiyan; Pang, Wenbo

    2017-05-01

    Accurate cell grading of cancerous tissue pathological images is of great importance in medical diagnosis and treatment. This paper proposes a joint multiple fully connected convolutional neural network with extreme learning machine (MFC-CNN-ELM) architecture for hepatocellular carcinoma (HCC) nuclei grading. First, in the preprocessing stage, each grayscale image patch of fixed size is obtained using the center-proliferation segmentation (CPS) method, and the corresponding labels are marked under the guidance of three pathologists. Next, a multiple fully connected convolutional neural network (MFC-CNN) is designed to extract the multi-form feature vectors of each input image automatically, which considers multi-scale contextual information of deep-layer maps sufficiently. After that, a convolutional neural network extreme learning machine (CNN-ELM) model is proposed to grade HCC nuclei. Finally, a back-propagation (BP) algorithm, which contains a new up-sampling method, is utilized to train the MFC-CNN-ELM architecture. The experimental comparison results demonstrate that our proposed MFC-CNN-ELM has superior performance compared with related works for HCC nuclei grading. Meanwhile, external validation using the ICPR 2014 HEp-2 cell dataset shows the good generalization of our MFC-CNN-ELM architecture. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Chinese character recognition based on Gabor feature extraction and CNN

    NASA Astrophysics Data System (ADS)

    Xiong, Yudian; Lu, Tongwei; Jiang, Yongyuan

    2018-03-01

    As an important application in the field of text-line recognition and office automation, Chinese character recognition has become an important subject of pattern recognition. However, due to the large number of Chinese characters and the complexity of their structure, Chinese character recognition presents great difficulty. In order to solve this problem, this paper proposes a method for printed Chinese character recognition based on Gabor feature extraction and a Convolutional Neural Network (CNN). The main steps are preprocessing, feature extraction, and training/classification. First, the gray-scale Chinese character image is binarized and normalized to reduce the redundancy of the image data. Second, each image is convolved with Gabor filters at different orientations, and feature maps for eight orientations of the Chinese characters are extracted. Third, the feature maps from the Gabor filters and the original image are convolved with learned kernels, and the result of this convolution is the input to the pooling layer. Finally, the feature vector is used for classification and recognition. In addition, the generalization capacity of the network is improved by the Dropout technique. The experimental results show that this method can effectively extract the characteristics of Chinese characters and recognize them.
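
    A small numpy sketch of the Gabor front end at eight orientations follows; the kernel parameters are illustrative assumptions, not the paper's values:

    ```python
    import numpy as np

    def gabor_kernel(theta, ksize=11, sigma=3.0, lambd=6.0, gamma=0.5):
        """Real part of a Gabor filter oriented at angle theta (radians)."""
        half = ksize // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)    # rotated coordinates
        yr = -x * np.sin(theta) + y * np.cos(theta)
        return (np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
                * np.cos(2 * np.pi * xr / lambd))

    # eight orientations, as in the abstract; convolve each with the input image
    bank = [gabor_kernel(k * np.pi / 8) for k in range(8)]
    ```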

  18. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1998-01-01

    Decoding algorithms based on the trellis representation of a code (block or convolutional) drastically reduce decoding complexity. The best known and most commonly used trellis-based decoding algorithm is the Viterbi algorithm. It is a maximum likelihood decoding algorithm. Convolutional codes with the Viterbi decoding have been widely used for error control in digital communications over the last two decades. This chapter is concerned with the application of the Viterbi decoding algorithm to linear block codes. First, the Viterbi algorithm is presented. Then, optimum sectionalization of a trellis to minimize the computational complexity of a Viterbi decoder is discussed and an algorithm is presented. Some design issues for IC (integrated circuit) implementation of a Viterbi decoder are considered and discussed. Finally, a new decoding algorithm based on the principle of compare-select-add is presented. This new algorithm can be applied to both block and convolutional codes and is more efficient than the conventional Viterbi algorithm based on the add-compare-select principle. This algorithm is particularly efficient for rate 1/n antipodal convolutional codes and their high-rate punctured codes. It reduces computational complexity by one-third compared with the Viterbi algorithm.
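
    For reference, a compact hard-decision Viterbi decoder over a generic trellis is sketched below, following the conventional add-compare-select principle (not the chapter's new compare-select-add variant); the state-machine tables are supplied by the caller, and the branch cost is a simple symbol-level mismatch count:

    ```python
    def viterbi(received, next_state, output, n_states):
        """received: list of channel symbols; next_state[s][b] and output[s][b]
        give the successor state and output symbol for input bit b in state s."""
        INF = float("inf")
        metric = [0.0] + [INF] * (n_states - 1)      # start in state 0
        paths = [[] for _ in range(n_states)]
        for r in received:
            new_metric = [INF] * n_states
            new_paths = [None] * n_states
            for s in range(n_states):
                if metric[s] == INF:
                    continue
                for b in (0, 1):                     # add-compare-select
                    ns = next_state[s][b]
                    m = metric[s] + (output[s][b] != r)   # mismatch cost
                    if m < new_metric[ns]:
                        new_metric[ns] = m
                        new_paths[ns] = paths[s] + [b]
            metric, paths = new_metric, new_paths
        best = min(range(n_states), key=lambda s: metric[s])
        return paths[best]                           # most likely input bits
    ```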

  19. Defect detection and classification of galvanized stamping parts based on fully convolution neural network

    NASA Astrophysics Data System (ADS)

    Xiao, Zhitao; Leng, Yanyi; Geng, Lei; Xi, Jiangtao

    2018-04-01

    In this paper, a new convolutional neural network method is proposed for the inspection and classification of galvanized stamping parts. First, all workpieces are divided into normal and defective by image processing, and the defective workpieces, extracted from the region of interest (ROI), are input to trained fully convolutional networks (FCN). The network uses end-to-end, pixel-to-pixel trained convolutions, currently the most advanced technology in semantic segmentation, and predicts a result for each pixel. Second, we mark different pixel values for the workpiece, defect and background in the training images, and use the pixel values and the number of pixels to recognize the defects in the output picture. Finally, a threshold on defect area, chosen according to the needs of the project, is set to achieve the final classification of the workpiece. The experimental results show that the proposed method can successfully achieve defect detection and classification of galvanized stamping parts under ordinary camera and illumination conditions, and its accuracy can reach 99.6%. Moreover, it overcomes the problem of complex image preprocessing and difficult feature extraction, and shows better adaptability.
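
    A small sketch of the final thresholding step, classifying a workpiece by its predicted defect area, is given below; the label value and the threshold are placeholders, not the project's settings:

    ```python
    import numpy as np

    def classify_workpiece(pred, defect_label=1, area_threshold=50):
        """pred: 2-D array of per-pixel labels from the FCN; count the pixels
        carrying the defect label and compare against an area threshold."""
        defect_area = int(np.sum(pred == defect_label))
        return "defective" if defect_area >= area_threshold else "normal"
    ```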

  20. Traffic sign recognition based on deep convolutional neural network

    NASA Astrophysics Data System (ADS)

    Yin, Shi-hao; Deng, Ji-cai; Zhang, Da-wei; Du, Jing-yuan

    2017-11-01

    Traffic sign recognition (TSR) is an important component of automated driving systems. It is a rather challenging task to design a high-performance classifier for the TSR system. In this paper, we propose a new method for TSR systems based on a deep convolutional neural network. In order to enhance the expressive power of the network, a novel structure (dubbed block-layer below) which combines network-in-network and residual connections is designed. Our network has 10 layers with parameters (a block-layer counted as a single layer): the first seven are alternating convolutional layers and block-layers, and the remaining three are fully-connected layers. We train our TSR network on the German Traffic Sign Recognition Benchmark (GTSRB) dataset. To reduce overfitting, we perform data augmentation on the training images and employ a regularization method named "dropout". The activation function we employ adopts scaled exponential linear units (SELUs), which induce self-normalizing properties. To speed up training, we use an efficient GPU to accelerate the convolutional operations. On the test dataset of GTSRB, we achieve an accuracy of 99.67%, exceeding state-of-the-art results.
