Sample records for gain compression factor

  1. Gain compression and its dependence on output power in quantum dot lasers

    NASA Astrophysics Data System (ADS)

    Zhukov, A. E.; Maximov, M. V.; Savelyev, A. V.; Shernyakov, Yu. M.; Zubov, F. I.; Korenev, V. V.; Martinez, A.; Ramdane, A.; Provost, J.-G.; Livshits, D. A.

    2013-06-01

    The gain compression coefficient was evaluated by applying the frequency modulation/amplitude modulation technique in a distributed feedback InAs/InGaAs quantum dot laser. A strong dependence of the gain compression coefficient on the output power was found. Our analysis of the gain compression within the framework of the modified well-barrier hole burning model reveals that the gain compression coefficient decreases beyond the lasing threshold, which is in good agreement with the experimental observations.
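
    A standard phenomenological way to express the gain compression discussed above (a generic textbook form, not necessarily the exact model used by the authors) relates the modal gain g to the photon density S through the gain compression coefficient ε:

```latex
% Generic saturated-gain form; \varepsilon is the gain compression coefficient.
g(N, S) = \frac{g_0(N)}{1 + \varepsilon S}
        \approx g_0(N)\,\bigl(1 - \varepsilon S\bigr)
        \quad \text{for } \varepsilon S \ll 1 .
```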

  2. Effects of bandwidth, compression speed, and gain at high frequencies on preferences for amplified music.

    PubMed

    Moore, Brian C J

    2012-09-01

    This article reviews a series of studies on the factors influencing sound quality preferences, mostly for jazz and classical music stimuli. The data were obtained using ratings of individual stimuli or using the method of paired comparisons. For normal-hearing participants, the highest ratings of sound quality were obtained when the reproduction bandwidth was wide (55 to 16000 Hz) and ripples in the frequency response were small (less than ± 5 dB). For hearing-impaired participants listening via a simulated five-channel compression hearing aid with gains set using the CAM2 fitting method, preferences for upper cutoff frequency varied across participants: Some preferred a 7.5- or 10-kHz upper cutoff frequency over a 5-kHz cutoff frequency, and some showed the opposite preference. Preferences for a higher upper cutoff frequency were associated with a shallow high-frequency slope of the audiogram. A subsequent study comparing the CAM2 and NAL-NL2 fitting methods, with gains slightly reduced for participants who were not experienced hearing aid users, showed a consistent preference for CAM2. Since the two methods differ mainly in the gain applied for frequencies above 4 kHz (CAM2 recommending higher gain than NAL-NL2), these results suggest that extending the upper cutoff frequency is beneficial. A system for reducing "overshoot" effects produced by compression gave small but significant benefits for sound quality of a percussion instrument (xylophone). For a high-input level (80 dB SPL), slow compression was preferred over fast compression.

  3. The Fusion Gain Analysis of the Inductively Driven Liner Compression Based Fusion

    NASA Astrophysics Data System (ADS)

    Shimazu, Akihisa; Slough, John

    2016-10-01

    An analytical study of the fusion gain expected in inductively driven liner compression (IDLC) based fusion is conducted to identify the fusion gain scaling at various operating conditions. IDLC-based fusion is a magneto-inertial fusion concept in which a Field-Reversed Configuration (FRC) plasmoid is compressed via an inductively driven metal liner to drive the FRC to fusion conditions. In the past, an approximate scaling law for the expected fusion gain of IDLC-based fusion was obtained under the key assumptions of (1) D-T fuel at 5-40 keV, (2) adiabatic scaling laws for the FRC dynamics, (3) FRC energy dominated by the pressure balance with the edge magnetic field at peak compression, and (4) a liner dwell time equal to the liner final diameter divided by the peak liner velocity. In this study, various assumptions made in the previous derivation are relaxed to study the change in the fusion gain scaling from the previous result of G ∝ m_l^(1/2) E_l^(11/8), where m_l is the liner mass and E_l is the peak liner kinetic energy. The implication of the modified fusion gain scaling for the performance of the IDLC fusion reactor system is also explored.
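
    For illustration only, the previous-result scaling quoted above, G ∝ m_l^(1/2) E_l^(11/8), can be used to compare two operating points; the proportionality constant is not given in the abstract, so only relative gains are meaningful in this sketch.

```python
# Relative fusion gain under the quoted scaling G ∝ m_l^(1/2) * E_l^(11/8).
# Illustrative only: the constant of proportionality is unknown.
def relative_gain(m1: float, e1: float, m2: float, e2: float) -> float:
    """Ratio G2/G1 for liner masses m1, m2 and peak liner kinetic energies e1, e2."""
    return (m2 / m1) ** 0.5 * (e2 / e1) ** (11.0 / 8.0)

# Doubling the peak liner kinetic energy at fixed liner mass:
print(relative_gain(1.0, 1.0, 1.0, 2.0))  # ~2.59x higher gain
```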

  4. Effect of Human Auditory Efferent Feedback on Cochlear Gain and Compression

    PubMed Central

    Drga, Vit; Plack, Christopher J.

    2014-01-01

    The mammalian auditory system includes a brainstem-mediated efferent pathway from the superior olivary complex by way of the medial olivocochlear system, which reduces the cochlear response to sound (Warr and Guinan, 1979; Liberman et al., 1996). The human medial olivocochlear response has an onset delay of between 25 and 40 ms and rise and decay constants in the region of 280 and 160 ms, respectively (Backus and Guinan, 2006). Physiological studies with nonhuman mammals indicate that onset and decay characteristics of efferent activation are dependent on the temporal and level characteristics of the auditory stimulus (Bacon and Smith, 1991; Guinan and Stankovic, 1996). This study uses a novel psychoacoustical masking technique employing a precursor sound to obtain a measure of the efferent effect in humans. This technique avoids confounds currently associated with other psychoacoustical measures. Both the temporal and level dependency of the efferent effect were measured, providing a comprehensive measure of the effect of human auditory efferents on cochlear gain and compression. Results indicate that a precursor (>20 dB SPL) induced efferent activation, resulting in a decrease in both maximum gain and maximum compression, with linearization of the compressive function for input sound levels between 50 and 70 dB SPL. Estimated gain decreased as precursor level increased, and increased as the silent interval between the precursor and the combined masker-signal stimulus increased, consistent with a decay of the efferent effect. Human auditory efferent activation linearizes the cochlear response for mid-level sounds while reducing maximum gain. PMID:25392499

  5. Shock ignition targets: gain and robustness vs ignition threshold factor

    NASA Astrophysics Data System (ADS)

    Atzeni, Stefano; Antonelli, Luca; Schiavi, Angelo; Picone, Silvia; Volponi, Gian Marco; Marocchino, Alberto

    2017-10-01

    Shock ignition is a laser direct-drive inertial confinement fusion scheme, in which the stages of compression and hot spot formation are partly separated. The hot spot is created at the end of the implosion by a converging shock driven by a final ``spike'' of the laser pulse. Several shock-ignition target concepts have been proposed and relevant gain curves computed (see, e.g.). Here, we consider both pure-DT targets and more facility-relevant targets with a plastic ablator. The investigation is conducted with 1D and 2D hydrodynamic simulations. We determine ignition threshold factors (ITFs) and their dependence on laser pulse parameters by means of 1D simulations. 2D simulations indicate that robustness to long-scale perturbations increases with ITF. Gain curves (gain vs laser energy) for different ITFs are generated using 1D simulations. Work partially supported by Sapienza Project C26A15YTMA, Sapienza 2016 (n. 257584), Eurofusion Project AWP17-ENR-IFE-CEA-01.

  6. Pressure and compressibility factor of bidisperse magnetic fluids

    NASA Astrophysics Data System (ADS)

    Minina, Elena S.; Blaak, Ronald; Kantorovich, Sofia S.

    2018-04-01

    In this work, we investigate the pressure and compressibility factors of bidisperse magnetic fluids with relatively weak dipolar interactions and different granulometric compositions. In order to study these properties, we employ the method of diagram expansion, taking into account two possible scenarios: (1) dipolar particles repel each other as hard spheres; (2) the polymer shell on the surface of the particles is modelled through a soft-sphere approximation. The theoretical predictions of the pressure and compressibility factors of bidisperse ferrofluids at different granulometric compositions are supported by data obtained by means of molecular dynamics computer simulations, which we also carried out for these systems. Both theory and simulations reveal that the pressure and compressibility factors decrease with growing dipolar correlations in the system, namely with an increasing fraction of large particles. We also demonstrate that even if dipolar interactions are too weak for any self-assembly to take place, the interparticle correlations lead to a qualitative change in the behaviour of the compressibility factors when compared to that of non-dipolar spheres, making the dependence monotonic.

  7. Laser pulse self-compression in an active fibre with a finite gain bandwidth under conditions of a nonstationary nonlinear response

    NASA Astrophysics Data System (ADS)

    Balakin, A. A.; Litvak, A. G.; Mironov, V. A.; Skobelev, S. A.

    2018-04-01

    We study the influence of a nonstationary nonlinear response of a medium on the self-compression of soliton-like laser pulses in active fibres with a finite gain bandwidth. Based on the variational approach, we qualitatively analyse the self-action of the wave packet in the system under consideration in order to classify the main evolution regimes and to determine the minimum achievable laser pulse duration during self-compression. The existence of stable soliton-type structures is shown in the framework of the parabolic approximation of the gain profile (i.e. in the approximation of the Ginzburg-Landau equation). An analysis of the self-action of laser pulses in the framework of the nonlinear Schrödinger equation with a sign-constant gain profile demonstrates a qualitative change in the dynamics of the wave field in the case of a nonstationary nonlinear response, which shifts the laser pulse spectrum out of the amplification region and stops the pulse compression. Expressions for the minimum duration of a soliton-like laser pulse are obtained as a function of the problem parameters and are in good agreement with the results of numerical simulation.

  8. Effects of Instantaneous Multiband Dynamic Compression on Speech Intelligibility

    NASA Astrophysics Data System (ADS)

    Herzke, Tobias; Hohmann, Volker

    2005-12-01

    The recruitment phenomenon, that is, the reduced dynamic range between threshold and uncomfortable level, is attributed to the loss of instantaneous dynamic compression on the basilar membrane. Despite this, hearing aids commonly use slow-acting dynamic compression for its compensation, because this was found to be the most successful strategy in terms of speech quality and intelligibility rehabilitation. Former attempts to use fast-acting compression gave ambiguous results, raising the question as to whether auditory-based recruitment compensation by instantaneous compression is in principle applicable in hearing aids. This study thus investigates instantaneous multiband dynamic compression based on an auditory filterbank. Instantaneous envelope compression is performed in each frequency band of a gammatone filterbank, which provides a combination of time and frequency resolution comparable to the normal healthy cochlea. The gain characteristics used for dynamic compression are deduced from categorical loudness scaling. In speech intelligibility tests, the instantaneous dynamic compression scheme was compared against a linear amplification scheme, which used the same filterbank for frequency analysis, but employed constant gain factors that restored the sound level for medium perceived loudness in each frequency band. In subjective comparisons, five of nine subjects preferred the linear amplification scheme and would not accept the instantaneous dynamic compression in hearing aids. Four of nine subjects did not perceive any quality differences. A sentence intelligibility test in noise (Oldenburg sentence test) showed little to no negative effects of the instantaneous dynamic compression, compared to linear amplification. A word intelligibility test in quiet (one-syllable rhyme test) showed that the subjects benefit from the larger amplification at low levels provided by instantaneous dynamic compression. Further analysis showed that the increase in intelligibility

  9. Astigmatism and spontaneous emission factor of laser diodes with parabolic gain

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mamine, T.

    1983-04-01

    An explicit relation between the astigmatism and the spontaneous emission factor of gain guiding lasers has been derived with the assumption that the gain profile can be approximated by a parabola or that the lowest order mode in the cavity is approximately Gaussian. The maximum value of the spontaneous emission factor is shown to be √2 if index guiding is dominant. Beyond K = √2, where gain guiding is dominant, the astigmatism decreases with the spontaneous emission factor. It is also shown that the spontaneous emission factor of gain guiding lasers does not much exceed ten, and this conclusion has been confirmed experimentally for lasers whose stripe widths are larger than 4 μm.

  10. SeqCompress: an algorithm for biological sequence compression.

    PubMed

    Sardaraz, Muhammad; Tahir, Muhammad; Ikram, Ataul Aziz; Bajwa, Hassan

    2014-10-01

    The growth of Next Generation Sequencing technologies presents significant research challenges, specifically to design bioinformatics tools that handle massive amounts of data efficiently. The cost of storing biological sequence data has become a noticeable proportion of the total cost of generation and analysis. In particular, the increase in DNA sequencing rate is significantly outstripping the rate of increase in disk storage capacity, which may exceed the limit of storage capacity. It is essential to develop algorithms that handle large data sets via better memory management. This article presents a DNA sequence compression algorithm, SeqCompress, that copes with the space complexity of biological sequences. The algorithm is based on lossless data compression and uses a statistical model as well as arithmetic coding to compress DNA sequences. The proposed algorithm is compared with recent specialized compression tools for biological sequences. Experimental results show that the proposed algorithm achieves better compression gain than other existing algorithms. Copyright © 2014 Elsevier Inc. All rights reserved.
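
    As a rough intuition for why statistical modelling plus arithmetic coding pays off (a generic information-theoretic sketch, not the SeqCompress algorithm itself), an order-0 model bounds the achievable lossless rate by the base-frequency entropy, which an arithmetic coder can approach:

```python
# Order-0 entropy of a DNA string: the bits/base an arithmetic coder driven by
# a simple frequency model would approach (illustration only, not SeqCompress).
from collections import Counter
from math import log2

def entropy_bits_per_base(seq: str) -> float:
    counts = Counter(seq)
    n = len(seq)
    return -sum((c / n) * log2(c / n) for c in counts.values())

seq = "ACGTACGGAATTCCGGACGT"
print(f"{entropy_bits_per_base(seq):.3f} bits/base (2.0 = incompressible under this model)")
```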

  11. Development of 1D Liner Compression Code for IDL

    NASA Astrophysics Data System (ADS)

    Shimazu, Akihisa; Slough, John; Pancotti, Anthony

    2015-11-01

    A 1D liner compression code is developed to model liner implosion dynamics in the Inductively Driven Liner Experiment (IDL), where an FRC plasmoid is compressed via inductively driven metal liners. The driver circuit, magnetic field, joule heating, and liner dynamics calculations are performed at each time step in sequence to couple these effects in the code. To obtain more realistic magnetic field results for a given drive coil geometry, 2D and 3D effects are incorporated into the 1D field calculation through a correction-factor table-lookup approach. A commercial low-frequency electromagnetic field solver, ANSYS Maxwell 3D, is used to solve the magnetic field profile for the static liner condition at various liner radii in order to derive correction factors for the 1D field calculation in the code. The liner dynamics results from the code are verified to be in good agreement with the results from a commercial explicit dynamics solver, ANSYS Explicit Dynamics, and with a previous liner experiment. The developed code is used to optimize the capacitor bank and driver coil design for better energy transfer and coupling. FRC gain calculations are also performed using the liner compression data from the code for the conceptual design of a reactor-sized system for fusion energy gains.

  12. Factors affecting patient compliance with compressive brace therapy for pectus carinatum.

    PubMed

    Kang, Du-Young; Jung, Junho; Chung, Sangho; Cho, Jinkyung; Lee, Sungsoo

    2014-12-01

    The aim of this study was to identify factors affecting patient compliance with brace therapy for pectus carinatum. Eighty-six pectus carinatum patients who started brace therapy from August 2008 to November 2011 were included in this study. Patients were divided into two groups: patients who wore the brace for ≥6 months (compliance group) or patients who wore the brace for <6 months (non-compliance group). Factors affecting patient compliance were assessed at the last day of follow-up with a multiple-choice questionnaire. The questionnaire comprised seven items: pain at compression site, skin problems on compression area, confidence in brace treatment, shame, discomfort, initial result of bracing treatment and total number of factors affecting patient compliance. Eighty-six patients completed the survey, including seven (8.1%) female patients and 79 (91.9%) male patients, with a mean age of 12.0 years at the time of treatment (range, 3-20 years). The initial result of the compression period (P <0.001) and total number of factors affecting patient compliance (P <0.05) were significant predictors of patient compliance. An initial successful result of the compression period may increase patient compliance during treatment for pectus carinatum. Additional efforts to decrease pain, skin problems, shame and discomfort, and to give confidence may be beneficial in increasing compliance with bracing treatment. © The Author 2014. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.

  13. Soliton compression to few-cycle pulses with a high quality factor by engineering cascaded quadratic nonlinearities.

    PubMed

    Zeng, Xianglong; Guo, Hairun; Zhou, Binbin; Bache, Morten

    2012-11-19

    We propose an efficient approach to improve few-cycle soliton compression with cascaded quadratic nonlinearities by using an engineered multi-section structure of the nonlinear crystal. By exploiting engineering of the cascaded quadratic nonlinearities, in each section soliton compression with a low effective order is realized, and high-quality few-cycle pulses with large compression factors are feasible. Each subsequent section is designed so that the compressed pulse exiting the previous section experiences an overall effective self-defocusing cubic nonlinearity corresponding to a modest soliton order, which is kept larger than unity to ensure further compression. This is done by increasing the cascaded quadratic nonlinearity in the new section with an engineered reduced residual phase mismatch. The low soliton orders in each section ensure excellent pulse quality and high efficiency. Numerical results show that compressed pulses with less than three-cycle duration can be achieved even when the compression factor is very large, and in contrast to standard soliton compression, these compressed pulses have minimal pedestal and high quality factor.

  14. High-Gain High-Field Fusion Plasma

    PubMed Central

    Li, Ge

    2015-01-01

    A Faraday wheel (FW)—an electric generator of constant electrical polarity that produces huge currents—could be implemented in an existing tokamak to study high-gain high-field (HGHF) fusion plasma, such as the Experimental Advanced Superconducting Tokamak (EAST). HGHF plasma can be realized in EAST by updating its pulsed-power system to compress plasma in two steps by induction fields; high gains of the Lawson trinity parameter and fusion power are both predicted by formulating the HGHF plasma. Both gain rates are faster than the decrease rate of the plasma volume. The formulation is checked by earlier ATC tests. Good agreement between theory and tests indicates that scaling to over 10 T at EAST may be possible by two-step compressions with a compression ratio of the minor radius of up to 3. These results point to a quick new path of fusion plasma study, i.e., simulating the Sun by EAST. PMID:26507314

  15. RF pulse compression for future linear colliders

    NASA Astrophysics Data System (ADS)

    Wilson, Perry B.

    1995-07-01

    Future (nonsuperconducting) linear colliders will require very high values of peak rf power per meter of accelerating structure. The role of rf pulse compression in producing this power is examined within the context of overall rf system design for three future colliders at energies of 1.0-1.5 TeV, 5 TeV, and 25 TeV. In order to keep the average AC input power and the length of the accelerator within reasonable limits, a collider in the 1.0-1.5 TeV energy range will probably be built at an x-band rf frequency, and will require a peak power on the order of 150-200 MW per meter of accelerating structure. A 5 TeV collider at 34 GHz with a reasonable length (35 km) and AC input power (225 MW) would require about 550 MW per meter of structure. Two-beam accelerators can achieve peak powers of this order by applying dc pulse compression techniques (induction linac modules) to produce the drive beam. Klystron-driven colliders achieve high peak power by a combination of dc pulse compression (modulators) and rf pulse compression, with about the same overall rf system efficiency (30-40%) as a two-beam collider. A high gain (6.8) three-stage binary pulse compression system with high efficiency (80%) is described, which (compared to a SLED-II system) can be used to reduce the klystron peak power by about a factor of two, or alternatively, to cut the number of klystrons in half for a 1.0-1.5 TeV x-band collider. For a 5 TeV klystron-driven collider, a high gain, high efficiency rf pulse compression system is essential.

  16. Method for increasing the rate of compressive strength gain in hardenable mixtures containing fly ash

    DOEpatents

    Liskowitz, John W.; Wecharatana, Methi; Jaturapitakkul, Chai; Cerkanowicz, deceased, Anthony E.

    1997-01-01

    The present invention relates to concrete, mortar and other hardenable mixtures comprising cement and fly ash for use in construction. The invention provides a method for increasing the rate of strength gain of a hardenable mixture containing fly ash by exposing the fly ash to an aqueous slurry of calcium oxide (lime) prior to its incorporation into the hardenable mixture. The invention further relates to such hardenable mixtures, e.g., concrete and mortar, that contain fly ash pre-reacted with calcium oxide. In particular, the fly ash is added to a slurry of calcium oxide in water, prior to incorporating the fly ash in a hardenable mixture. The hardenable mixture may be concrete or mortar. In a specific embodiment, mortar containing fly ash treated by exposure to an aqueous lime slurry is prepared and tested for compressive strength at early time points.

  17. Compression of electromyographic signals using image compression techniques.

    PubMed

    Costa, Marcus Vinícius Chaffim; Berger, Pedro de Azevedo; da Rocha, Adson Ferreira; de Carvalho, João Luiz Azevedo; Nascimento, Francisco Assis de Oliveira

    2008-01-01

    Despite the growing interest in the transmission and storage of electromyographic signals for long periods of time, few studies have addressed the compression of such signals. In this article we present an algorithm for compression of electromyographic signals based on the JPEG2000 coding system. Although the JPEG2000 codec was originally designed for compression of still images, we show that it can also be used to compress EMG signals for both isotonic and isometric contractions. For EMG signals acquired during isometric contractions, the proposed algorithm provided compression factors ranging from 75 to 90%, with an average PRD ranging from 3.75% to 13.7%. For isotonic EMG signals, the algorithm provided compression factors ranging from 75 to 90%, with an average PRD ranging from 3.4% to 7%. The compression results using the JPEG2000 algorithm were compared to those using other algorithms based on the wavelet transform.
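
    For reference, the compression factor (CF) and percent root-mean-square difference (PRD) quoted above are commonly defined as follows in the biomedical signal compression literature (the paper may use a slight variant, e.g. a mean-removed PRD):

```latex
\mathrm{CF} = 100\% \times \left(1 - \frac{\text{compressed size}}{\text{original size}}\right),
\qquad
\mathrm{PRD} = 100\% \times \sqrt{\frac{\sum_{n}\bigl(x[n]-\hat{x}[n]\bigr)^{2}}{\sum_{n} x[n]^{2}}},
```

    where x is the original EMG signal and x̂ is its reconstruction after compression and decompression.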

  18. Theoretical Assessment of Compressibility Factor of Gases by Using Second Virial Coefficient

    NASA Astrophysics Data System (ADS)

    Mamedov, Bahtiyar A.; Somuncu, Elif; Askerov, Iskender M.

    2018-01-01

    We present a new analytical approximation for determining the compressibility factor of real gases at various temperature values. This algorithm is suitable for the accurate evaluation of the compressibility factor using the second virial coefficient with a Lennard-Jones (12-6) potential. Numerical examples are presented for the gases H2, N2, He, CO2, CH4 and air, and the results are compared with other studies in the literature. Our results showed good agreement with the data in the literature. The consistency of the results demonstrates the effectiveness of our analytical approximation for real gases.
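
    A minimal numerical sketch of the quantities involved, in reduced Lennard-Jones units (this is a generic virial-expansion calculation, not the authors' analytical approximation):

```python
# Second virial coefficient B2(T) for a Lennard-Jones (12-6) potential by
# numerical quadrature, then the truncated virial estimate Z ≈ 1 + B2*rho.
# Reduced units: r in sigma, T* = kT/epsilon, density rho* = rho*sigma^3.
import numpy as np

def b2_reduced(t_star: float) -> float:
    """B2 in units of sigma^3 per particle: -2*pi * ∫ (exp(-u/kT) - 1) r^2 dr."""
    r = np.linspace(1e-3, 10.0, 20000)
    u_star = 4.0 * (r**-12 - r**-6)            # LJ potential in units of epsilon
    integrand = (np.exp(-u_star / t_star) - 1.0) * r**2
    return -2.0 * np.pi * np.trapz(integrand, r)

t_star, rho_star = 1.5, 0.05                    # reduced temperature and density
b2 = b2_reduced(t_star)
z = 1.0 + b2 * rho_star                         # compressibility factor estimate
print(f"B2 = {b2:.3f} sigma^3, Z ≈ {z:.4f}")
```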

  19. Method for increasing the rate of compressive strength gain in hardenable mixtures containing fly ash

    DOEpatents

    Liskowitz, J.W.; Wecharatana, M.; Jaturapitakkul, C.; Cerkanowicz, A.E.

    1997-10-28

    The present invention relates to concrete, mortar and other hardenable mixtures comprising cement and fly ash for use in construction. The invention provides a method for increasing the rate of strength gain of a hardenable mixture containing fly ash by exposing the fly ash to an aqueous slurry of calcium oxide (lime) prior to its incorporation into the hardenable mixture. The invention further relates to such hardenable mixtures, e.g., concrete and mortar, that contain fly ash pre-reacted with calcium oxide. In particular, the fly ash is added to a slurry of calcium oxide in water, prior to incorporating the fly ash in a hardenable mixture. The hardenable mixture may be concrete or mortar. In a specific embodiment, mortar containing fly ash treated by exposure to an aqueous lime slurry is prepared and tested for compressive strength at early time points. 2 figs.

  20. Compressive sensing in medical imaging

    PubMed Central

    Graff, Christian G.; Sidky, Emil Y.

    2015-01-01

    The promise of compressive sensing, exploitation of compressibility to achieve high quality image reconstructions with less data, has attracted a great deal of attention in the medical imaging community. At the Compressed Sensing Incubator meeting held in April 2014 at OSA Headquarters in Washington, DC, presentations were given summarizing some of the research efforts ongoing in compressive sensing for x-ray computed tomography and magnetic resonance imaging systems. This article provides an expanded version of these presentations. Sparsity-exploiting reconstruction algorithms that have gained popularity in the medical imaging community are studied, and examples of clinical applications that could benefit from compressive sensing ideas are provided. The current and potential future impact of compressive sensing on the medical imaging field is discussed. PMID:25968400
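
    The core reconstruction problem that these sparsity-exploiting algorithms address can be summarized in its generic basis-pursuit-denoising form (not tied to any particular scanner or presentation from the meeting):

```latex
\hat{x} = \arg\min_{x} \;\|\Psi x\|_{1}
\quad \text{subject to} \quad \|A x - y\|_{2} \le \epsilon ,
```

    where y are the undersampled measurements, A the system (acquisition) matrix, Ψ a sparsifying transform, and ε the noise tolerance.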

  1. Effect of compressive force on PEM fuel cell performance

    NASA Astrophysics Data System (ADS)

    MacDonald, Colin Stephen

    question and the performance gains from the aforementioned compression factors were quantified. The study provided a considerable amount of practical and analytical knowledge in the area of cell compression and shed light on the importance of precision compressive control within the PEM fuel cell.

  2. Compressive strain induced enhancement in thermoelectric-power-factor in monolayer MoS2 nanosheet

    NASA Astrophysics Data System (ADS)

    Dimple; Jena, Nityasagar; De Sarkar, Abir

    2017-06-01

    Strain- and temperature-induced tunability of the thermoelectric properties of monolayer MoS2 (ML-MoS2) has been demonstrated using density functional theory coupled to semi-classical Boltzmann transport theory. Compressive strain in general, and uniaxial compressive strain along the zig-zag direction in particular, is found to be most effective in enhancing the thermoelectric power factor, owing to the higher electronic mobility and its sensitivity to lattice compression along this direction. Variation in the Seebeck coefficient and electronic band gap with strain is found to follow the Goldsmid-Sharp relation. n-type doping is found to raise the relaxation-time-scaled thermoelectric power factor above that of p-type doping, and this divide widens with increasing temperature. The relaxation-time-scaled thermoelectric power factor in optimally n-doped ML-MoS2 undergoes maximal enhancement under 3% uniaxial compressive strain along the zig-zag direction, when both the (direct) electronic band gap and the Seebeck coefficient reach their maximum, while the electron mobility drops drastically from 73.08 to 44.15 cm² V⁻¹ s⁻¹. Such strain-sensitive thermoelectric responses in ML-MoS2 could open doorways for a variety of applications in emerging areas of 2D thermoelectrics, such as on-chip thermoelectric power generation and waste thermal energy harvesting.
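
    The thermoelectric power factor referred to above is the standard combination of the Seebeck coefficient S and the electrical conductivity σ; because Boltzmann-transport calculations typically yield σ/τ rather than σ, results are often reported scaled by the relaxation time τ (a general convention, assumed here to match the paper's usage):

```latex
\mathrm{PF} = S^{2}\sigma ,
\qquad
\frac{\mathrm{PF}}{\tau} = \frac{S^{2}\sigma}{\tau} .
```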

  3. High-quality lossy compression: current and future trends

    NASA Astrophysics Data System (ADS)

    McLaughlin, Steven W.

    1995-01-01

    This paper is concerned with current and future trends in the lossy compression of real sources such as imagery, video, speech and music. We put all lossy compression schemes into a common framework in which each can be characterized in terms of three well-defined advantages: cell-shape, region-shape and memory advantages. We concentrate on image compression and discuss how new entropy-constrained trellis-based compressors achieve cell-shape, region-shape and memory gains, resulting in high fidelity and high compression.

  4. Effects of dynamic range compression on spatial selective auditory attention in normal-hearing listeners.

    PubMed

    Schwartz, Andrew H; Shinn-Cunningham, Barbara G

    2013-04-01

    Many hearing aids introduce compressive gain to accommodate the reduced dynamic range that often accompanies hearing loss. However, natural sounds produce complicated temporal dynamics in hearing aid compression, as gain is driven by whichever source dominates at a given moment. Moreover, independent compression at the two ears can introduce fluctuations in interaural level differences (ILDs) important for spatial perception. While independent compression can interfere with spatial perception of sound, it does not always interfere with localization accuracy or speech identification. Here, normal-hearing listeners reported a target message played simultaneously with two spatially separated masker messages. We measured the amount of spatial separation required between the target and maskers for subjects to perform at threshold in this task. Fast, syllabic compression that was independent at the two ears increased the required spatial separation, but linking the compressors to provide identical gain to both ears (preserving ILDs) restored much of the deficit caused by fast, independent compression. Effects were less clear for slower compression. Percent-correct performance was lower with independent compression, but only for small spatial separations. These results may help explain differences in previous reports of the effect of compression on spatial perception of sound.
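
    A toy calculation (with an assumed simple static compressor, not the study's actual processing) shows why independent compression at the two ears shrinks interaural level differences while linked compression preserves them:

```python
# Hedged illustration: independent vs. linked compression and the resulting ILD.
# Assumed static compressor: above threshold T (dB SPL), gain in dB is
# (T - L) * (1 - 1/R) for input level L and compression ratio R.
def comp_gain_db(level_db: float, threshold_db: float = 45.0, ratio: float = 3.0) -> float:
    return min(0.0, (threshold_db - level_db) * (1.0 - 1.0 / ratio))

left, right = 70.0, 60.0                      # a 10 dB ILD favouring the left ear
ild_in = left - right

# Independent compression: each ear's gain is driven by its own level.
ild_indep = (left + comp_gain_db(left)) - (right + comp_gain_db(right))

# Linked compression: both ears receive the gain of the louder ear.
g = comp_gain_db(max(left, right))
ild_linked = (left + g) - (right + g)

print(ild_in, ild_indep, ild_linked)          # 10.0, ~3.33, 10.0
```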

  5. Adaptive efficient compression of genomes

    PubMed Central

    2012-01-01

    Modern high-throughput sequencing technologies are able to generate DNA sequences at an ever increasing rate. In parallel to the decreasing experimental time and cost necessary to produce DNA sequences, computational requirements for analysis and storage of the sequences are steeply increasing. Compression is a key technology to deal with this challenge. Recently, referential compression schemes, storing only the differences between a to-be-compressed input and a known reference sequence, gained a lot of interest in this field. However, memory requirements of the current algorithms are high and run times often are slow. In this paper, we propose an adaptive, parallel and highly efficient referential sequence compression method which allows fine-tuning of the trade-off between required memory and compression speed. When using 12 MB of memory, our method is for human genomes on-par with the best previous algorithms in terms of compression ratio (400:1) and compression speed. In contrast, it compresses a complete human genome in just 11 seconds when provided with 9 GB of main memory, which is almost three times faster than the best competitor while using less main memory. PMID:23146997
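
    A toy sketch of the general referential idea described above, encoding the input as matches against a known reference plus literal bases (this is not the paper's adaptive, parallel algorithm):

```python
# Greedy referential compression sketch: emit (ref_position, length) matches
# found via a k-mer index of the reference, and literal bases where no match exists.
def referential_compress(reference: str, target: str, k: int = 8):
    index = {}
    for i in range(len(reference) - k + 1):
        index.setdefault(reference[i:i + k], i)   # first occurrence of each k-mer
    ops, i = [], 0
    while i < len(target):
        seed = target[i:i + k]
        if len(seed) == k and seed in index:
            j, length = index[seed], k
            while (i + length < len(target) and j + length < len(reference)
                   and target[i + length] == reference[j + length]):
                length += 1                       # extend the match greedily
            ops.append(("match", j, length))
            i += length
        else:
            ops.append(("literal", target[i]))
            i += 1
    return ops

ref = "ACGTACGTGGTTACGTACGTACGTTTGACCA"
tgt = "ACGTACGTGGTTTCGTACGTACGTTTGACCA"   # one substitution vs. the reference
print(referential_compress(ref, tgt))
```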

  6. The spontaneous emission factor for lasers with gain induced waveguiding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Newstein, M.

    1984-11-01

    The expression for the spontaneous emission factor for lasers with gain induced waveguiding has a factor K, called by Petermann ''the astigmatism parameter.'' This factor has been invoked to explain spectral and dynamic characteristics of this class of lasers. We contend that the widely accepted form of the K factor is based on a derivation which is not appropriate for the typical laser situation where the spontaneous emission factor is much smaller than unity. An alternative derivation is presented which leads to a different form for the K factor. The new expression predicts much smaller values under conditions where the previous theory gave values large compared to unity. Petermann's form for the K factor is shown to be relevant to large gain linear amplifiers where the power is amplified spontaneous emission noise. The expression for the power output has Petermann's value of K as a factor. The difference in the two situations is that in the laser oscillator the typical atom of interest couples a small portion of its incoherent spontaneous emission into the dominant mode, whereas in the amplifier only the atoms at the input end are important as sources and their output is converted to a greater degree into the dominant mode through the propagation process. In this analysis the authors use a classical model of radiating point dipoles in a continuous medium characterized by a complex permittivity. Since uncritical use of this model will lead to infinite radiation resistance, they address the problem of its self-consistency.

  7. Optimization of Error-Bounded Lossy Compression for Hard-to-Compress HPC Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Di, Sheng; Cappello, Franck

    Since today's scientific applications are producing vast amounts of data, compressing them before storage/transmission is critical. Results of existing compressors show two types of HPC data sets: highly compressible and hard to compress. In this work, we carefully design and optimize error-bounded lossy compression for hard-to-compress scientific data. We propose an optimized algorithm that can adaptively partition the HPC data into best-fit consecutive segments, each having mutually close data values, such that the compression condition can be optimized. Another significant contribution is the optimization of the shifting offset such that the XOR-leading-zero length between two consecutive unpredictable data points can be maximized. We finally devise an adaptive method to select the best-fit compressor at runtime for maximizing the compression factor. We evaluate our solution using 13 benchmarks based on real-world scientific problems, and we compare it with 9 other state-of-the-art compressors. Experiments show that our compressor can always guarantee the compression errors within the user-specified error bounds. Most importantly, our optimization can improve the compression factor effectively, by up to 49% for hard-to-compress data sets with similar compression/decompression time cost.
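
    A minimal sketch of the error-bound guarantee itself, using plain uniform quantization (the paper's contribution lies in the adaptive partitioning, offset shifting, and compressor selection built on top of such a guarantee):

```python
# Uniform quantization with bin width 2*eb keeps every reconstructed value
# within eb of the original -- the invariant an error-bounded compressor preserves.
import numpy as np

def quantize(data: np.ndarray, eb: float) -> np.ndarray:
    return np.round(data / (2.0 * eb)).astype(np.int64)

def reconstruct(codes: np.ndarray, eb: float) -> np.ndarray:
    return codes * (2.0 * eb)

data = np.random.default_rng(0).normal(size=1000)
codes = quantize(data, eb=1e-3)               # small integers that encode compactly
assert np.max(np.abs(reconstruct(codes, 1e-3) - data)) <= 1e-3
```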

  8. Dynamics of cochlear nonlinearity: Automatic gain control or instantaneous damping?

    PubMed

    Altoè, Alessandro; Charaziak, Karolina K; Shera, Christopher A

    2017-12-01

    Measurements of basilar-membrane (BM) motion show that the compressive nonlinearity of cochlear mechanical responses is not an instantaneous phenomenon. For this reason, the cochlear amplifier has been thought to incorporate an automatic gain control (AGC) mechanism characterized by a finite reaction time. This paper studies the effect of instantaneous nonlinear damping on the responses of oscillatory systems. The principal results are that (i) instantaneous nonlinear damping produces a noninstantaneous gain control that differs markedly from typical AGC strategies; (ii) the kinetics of compressive nonlinearity implied by the finite reaction time of an AGC system appear inconsistent with the nonlinear dynamics measured on the gerbil basilar membrane; and (iii) conversely, those nonlinear dynamics can be reproduced using a harmonic oscillator with instantaneous nonlinear damping. Furthermore, existing cochlear models that include instantaneous gain-control mechanisms capture the principal kinetics of BM nonlinearity. Thus, an AGC system with finite reaction time appears neither necessary nor sufficient to explain nonlinear gain control in the cochlea.
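
    One generic way to write a harmonic oscillator with instantaneous nonlinear damping (the specific functional form used in the paper may differ) is:

```latex
\ddot{x} + \gamma_{0}\bigl(1 + \beta x^{2}\bigr)\dot{x} + \omega_{0}^{2}\,x = F(t),
```

    where the damping term depends on the instantaneous state x(t) rather than on a time-averaged level estimate, in contrast to an AGC loop with a finite reaction time.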

  9. Moving beyond the illness: factors contributing to gaining and maintaining employment.

    PubMed

    Cunningham, K; Wolbert, R; Brockmeier, M B

    2000-08-01

    The work presented here, exploratory in nature, uses a comparative and qualitative approach to understand the factors associated with the ability of individuals with severe and persistent mental illness to successfully gain and maintain employment. Based on open-ended interviews with individuals in an Assertive Community Treatment (ACT) program, we compare the experiences of those who have been successful gaining and maintaining employment, with those who have been successful gaining but not maintaining work, and those who have been unsuccessful gaining employment. The three groups seemed to differ in three significant ways: (1) in the ways the individuals talked about their illness, (2) in the ways the individuals talked about work, and (3) in the strategies they described for coping with bad days. In each of these areas individuals' awareness of and attitude toward their illness was significant. The findings have clear implications for agencies working to help people with severe and persistent mental illness obtain and maintain employment.

  10. Measurement of the Compressibility Factor of Gases: A Physical Chemistry Laboratory Experiment

    ERIC Educational Resources Information Center

    Varberg, Thomas D.; Bendelsmith, Andrew J.; Kuwata, Keith T.

    2011-01-01

    In this article, we describe an experiment for the undergraduate physical chemistry laboratory in which students measure the compressibility factor of two gases, helium and carbon dioxide, as a function of pressure at constant temperature. The experimental apparatus is relatively inexpensive to construct and is described and diagrammed in detail.…

  11. A design approach for systems based on magnetic pulse compression.

    PubMed

    Kumar, D Durga Praveen; Mitra, S; Senthil, K; Sharma, D K; Rajan, Rehim N; Sharma, Archana; Nagesh, K V; Chakravarthy, D P

    2008-04-01

    A design approach is given that yields the optimum number of stages in a magnetic pulse compression circuit and the gain per stage. The limitation on the maximum gain per stage is discussed. Total system volume is minimized by considering the energy storage capacitor volume and the magnetic core volume at each stage. At the end of this paper, the design of a magnetic-pulse-compression-based linear induction accelerator (200 kV, 5 kA, 100 ns, at a repetition rate of 100 Hz) is discussed along with its experimental results.
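
    For context, if each of N identical stages provides a compression gain g, the overall gain multiplies across stages, which is what couples the choice of stage count to the achievable gain per stage (a generic relation, not the paper's full volume-optimization result):

```latex
G_{\text{total}} = \prod_{i=1}^{N} g_{i} = g^{N}
\quad \Longrightarrow \quad
N = \frac{\ln G_{\text{total}}}{\ln g} .
```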

  12. Digital mammography, cancer screening: Factors important for image compression

    NASA Technical Reports Server (NTRS)

    Clarke, Laurence P.; Blaine, G. James; Doi, Kunio; Yaffe, Martin J.; Shtern, Faina; Brown, G. Stephen; Winfield, Daniel L.; Kallergi, Maria

    1993-01-01

    The use of digital mammography for breast cancer screening poses several novel problems such as development of digital sensors, computer assisted diagnosis (CAD) methods for image noise suppression, enhancement, and pattern recognition, compression algorithms for image storage, transmission, and remote diagnosis. X-ray digital mammography using novel direct digital detection schemes or film digitizers results in large data sets and, therefore, image compression methods will play a significant role in the image processing and analysis by CAD techniques. In view of the extensive compression required, the relative merit of 'virtually lossless' versus lossy methods should be determined. A brief overview is presented here of the developments of digital sensors, CAD, and compression methods currently proposed and tested for mammography. The objective of the NCI/NASA Working Group on Digital Mammography is to stimulate the interest of the image processing and compression scientific community for this medical application and identify possible dual use technologies within the NASA centers.

  13. Contrast Gain Control in Auditory Cortex

    PubMed Central

    Rabinowitz, Neil C.; Willmore, Ben D.B.; Schnupp, Jan W.H.; King, Andrew J.

    2011-01-01

    Summary The auditory system must represent sounds with a wide range of statistical properties. One important property is the spectrotemporal contrast in the acoustic environment: the variation in sound pressure in each frequency band, relative to the mean pressure. We show that neurons in ferret auditory cortex rescale their gain to partially compensate for the spectrotemporal contrast of recent stimulation. When contrast is low, neurons increase their gain, becoming more sensitive to small changes in the stimulus, although the effectiveness of contrast gain control is reduced at low mean levels. Gain is primarily determined by contrast near each neuron's preferred frequency, but there is also a contribution from contrast in more distant frequency bands. Neural responses are modulated by contrast over timescales of ∼100 ms. By using contrast gain control to expand or compress the representation of its inputs, the auditory system may be seeking an efficient coding of natural sounds. PMID:21689603

  14. Determination of preferred parameters for multichannel compression using individually fitted simulated hearing AIDS and paired comparisons.

    PubMed

    Moore, Brian C J; Füllgrabe, Christian; Stone, Michael A

    2011-01-01

    To determine preferred parameters of multichannel compression using individually fitted simulated hearing aids and a method of paired comparisons. Fourteen participants with mild to moderate hearing loss listened via a simulated five-channel compression hearing aid fitted using the CAMEQ2-HF method to pairs of speech sounds (a male talker and a female talker) and musical sounds (a percussion instrument, orchestral classical music, and a jazz trio) presented sequentially and indicated which sound of the pair was preferred and by how much. The sounds in each pair were derived from the same token and differed along a single dimension in the type of processing applied. For the speech sounds, participants judged either pleasantness or clarity; in the latter case, the speech was presented in noise at a 2-dB signal-to-noise ratio. For musical sounds, they judged pleasantness. The parameters explored were time delay of the audio signal relative to the gain control signal (the alignment delay), compression speed (attack and release times), bandwidth (5, 7.5, or 10 kHz), and gain at high frequencies relative to that prescribed by CAMEQ2-HF. Pleasantness increased with increasing alignment delay only for the percussive musical sound. Clarity was not affected by alignment delay. There was a trend for pleasantness to decrease slightly with increasing bandwidth, but this was significant only for female speech with fast compression. Judged clarity was significantly higher for the 7.5- and 10-kHz bandwidths than for the 5-kHz bandwidth for both slow and fast compression and for both talker genders. Compression speed had little effect on pleasantness for 50- or 65-dB SPL input levels, but slow compression was generally judged as slightly more pleasant than fast compression for an 80-dB SPL input level. Clarity was higher for slow than for fast compression for input levels of 80 and 65 dB SPL but not for a level of 50 dB SPL. Preferences for pleasantness were approximately equal

  15. Flynn effects on sub-factors of episodic and semantic memory: parallel gains over time and the same set of determining factors.

    PubMed

    Rönnlund, Michael; Nilsson, Lars-Göran

    2009-09-01

    The study examined the extent to which time-related gains in cognitive performance, so-called Flynn effects, generalize across sub-factors of episodic memory (recall and recognition) and semantic memory (knowledge and fluency). We conducted time-sequential analyses of data drawn from the Betula prospective cohort study, involving four age-matched samples (35-80 years; N=2996) tested on the same battery of memory tasks on either of four occasions (1989, 1995, 1999, and 2004). The results demonstrate substantial time-related improvements on recall and recognition as well as on fluency and knowledge, with a trend of larger gains on semantic as compared with episodic memory [Rönnlund, M., & Nilsson, L. -G. (2008). The magnitude, generality, and determinants of Flynn effects on forms of declarative memory: Time-sequential analyses of data from a Swedish cohort study. Intelligence], but highly similar gains across the sub-factors. Finally, the association with markers of environmental change was similar, with evidence that historical increases in quantity of schooling was a main driving force behind the gains, both on the episodic and semantic sub-factors. The results obtained are discussed in terms of brain regions involved.

  16. A conceptual model of psychosocial risk and protective factors for excessive gestational weight gain.

    PubMed

    Hill, Briony; Skouteris, Helen; McCabe, Marita; Milgrom, Jeannette; Kent, Bridie; Herring, Sharon J; Hartley-Clark, Linda; Gale, Janette

    2013-02-01

    nearly half of all women exceed the guideline recommended pregnancy weight gain for their Body Mass Index (BMI) category. Excessive gestational weight gain (GWG) is correlated positively with postpartum weight retention and is a predictor of long-term, higher BMI in mothers and their children. Psychosocial factors are generally not targeted in GWG behaviour change interventions, however, multifactorial, conceptual models that include these factors, may be useful in determining the pathways that contribute to excessive GWG. We propose a conceptual model, underpinned by health behaviour change theory, which outlines the psychosocial determinants of GWG, including the role of motivation and self-efficacy towards healthy behaviours. This model is based on a review of the existing literature in this area. there is increasing evidence to show that psychosocial factors, such as increased depressive symptoms, anxiety, lower self-esteem and body image dissatisfaction, are associated with excessive GWG. What is less known is how these factors might lead to excessive GWG. Our conceptual model proposes a pathway of factors that affect GWG, and may be useful for understanding the mechanisms by which interventions impact on weight management during pregnancy. This involves tracking the relationships among maternal psychosocial factors, including body image concerns, motivation to adopt healthy lifestyle behaviours, confidence in adopting healthy lifestyle behaviours for the purposes of weight management, and actual behaviour changes. health-care providers may improve weight gain outcomes in pregnancy if they assess and address psychosocial factors in pregnancy. Copyright © 2011 Elsevier Ltd. All rights reserved.

  17. Measurement of Compression Factor and Error Sensitivity Factor of the Modified READ Facsimile Coding Technique.

    DTIC Science & Technology

    1980-08-01

    Compression factor and error sensitivity together with statistical data have also been tabulated. This TIB is a companion document to NCS TIB's 79-7...

  18. Compression for radiological images

    NASA Astrophysics Data System (ADS)

    Wilson, Dennis L.

    1992-07-01

    The viewing of radiological images has peculiarities that must be taken into account in the design of a compression technique. The images may be manipulated on a workstation to change the contrast, to change the center of the brightness levels that are viewed, and even to invert the images. Because of the possible consequences of losing information in a medical application, bit-preserving compression is used for the images used for diagnosis. However, for archiving, the images may be compressed to 1/10 of their original size. A compression technique based on the Discrete Cosine Transform (DCT) takes the viewing factors into account by compressing the changes in the local brightness levels. The compression technique is a variation of the CCITT JPEG compression that suppresses the blocking of the DCT except in areas of very high contrast.

  19. Optimal color coding for compression of true color images

    NASA Astrophysics Data System (ADS)

    Musatenko, Yurij S.; Kurashov, Vitalij N.

    1998-11-01

    In this paper we present a method that improves lossy compression of true color and other multispectral images. The essence of the method is to project the initial color planes onto the Karhunen-Loeve (KL) basis, which gives a completely decorrelated representation of the image, and to compress the basis functions instead of the planes. To do this, a new fast algorithm for constructing the true KL basis with low memory consumption is suggested, and our recently proposed scheme for finding the optimal losses of the KL functions during compression is used. Compared to standard JPEG compression of CMYK images, the method provides a PSNR gain of 0.2 to 2 dB at typical compression ratios. Experimental results are obtained for high-resolution CMYK images. It is demonstrated that the presented scheme can run on common hardware.
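
    A compact sketch of the core projection step (standard principal-component, i.e. Karhunen-Loeve, decorrelation of the colour planes; this is not the authors' low-memory basis construction or their optimal loss-allocation scheme):

```python
# Decorrelate the colour planes of an image by projecting onto the KL basis
# (eigenvectors of the inter-plane covariance matrix).
import numpy as np

def kl_transform_planes(image: np.ndarray):
    """image: H x W x C array; returns decorrelated planes, KL basis and mean."""
    h, w, c = image.shape
    planes = image.reshape(-1, c).astype(np.float64)
    mean = planes.mean(axis=0)
    cov = np.cov(planes - mean, rowvar=False)
    _, basis = np.linalg.eigh(cov)                 # eigenvectors = KL basis
    decorrelated = (planes - mean) @ basis
    return decorrelated.reshape(h, w, c), basis, mean

# Three strongly correlated synthetic colour planes as a stand-in for an image.
rng = np.random.default_rng(1)
base = rng.normal(size=(64, 64, 1))
img = np.concatenate([base + 0.1 * rng.normal(size=(64, 64, 1)) for _ in range(3)], axis=2)

kl_planes, basis, mean = kl_transform_planes(img)
print(np.round(np.cov(kl_planes.reshape(-1, 3), rowvar=False), 4))  # ~diagonal
```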

  20. Gain curves and hydrodynamic modeling for shock ignition

    NASA Astrophysics Data System (ADS)

    Lafon, M.; Ribeyre, X.; Schurtz, G.

    2010-05-01

    Ignition of a precompressed thermonuclear fuel by means of a converging shock is now considered as a credible scheme to obtain high gains for inertial fusion energy. This work aims at modeling the successive stages of the fuel time history, from compression to final thermonuclear combustion, in order to provide the gain curves of shock ignition (SI). The leading physical mechanism at work in SI is pressure amplification, at first by spherical convergence, and by collision with the shock reflected at center during the stagnation process. These two effects are analyzed, and ignition conditions are provided as functions of the shock pressure and implosion velocity. Ignition conditions are obtained from a non-isobaric fuel assembly, for which we present a gain model. The corresponding gain curves exhibit a significantly lower ignition threshold and higher target gains than conventional central ignition.

  1. Raman Spectroscopy of Rdx Single Crystals Under Static Compression

    NASA Astrophysics Data System (ADS)

    Dreger, Zbigniew A.; Gupta, Yogendra M.

    2007-12-01

    To gain insight into the high-pressure response of the energetic crystal RDX, Raman measurements were performed under hydrostatic compression up to 15 GPa. Several distinct changes in the spectra were found at 4.0±0.3 GPa, confirming the α-γ phase transition previously observed in polycrystalline samples. Symmetry correlation analyses indicate that the γ-polymorph may assume a space group isomorphous with a point group D2h with eight molecules occupying the C1 symmetry sites, similar to the α-phase. It is proposed that factor group coupling can account for the observed increase in the number of modes in the γ-phase.

  2. FRESCO: Referential compression of highly similar sequences.

    PubMed

    Wandelt, Sebastian; Leser, Ulf

    2013-01-01

    In many applications, sets of similar texts or sequences are of high importance. Prominent examples are revision histories of documents or genomic sequences. Modern high-throughput sequencing technologies are able to generate DNA sequences at an ever-increasing rate. In parallel to the decreasing experimental time and cost necessary to produce DNA sequences, computational requirements for analysis and storage of the sequences are steeply increasing. Compression is a key technology to deal with this challenge. Recently, referential compression schemes, storing only the differences between a to-be-compressed input and a known reference sequence, gained a lot of interest in this field. In this paper, we propose a general open-source framework to compress large amounts of biological sequence data called Framework for REferential Sequence COmpression (FRESCO). Our basic compression algorithm is shown to be one to two orders of magnitude faster than comparable related work, while achieving similar compression ratios. We also propose several techniques to further increase compression ratios, while still retaining the advantage in speed: 1) selecting a good reference sequence; and 2) rewriting a reference sequence to allow for better compression. In addition, we propose a new way of further boosting the compression ratios by applying referential compression to already referentially compressed files (second-order compression). This technique allows for compression ratios way beyond the state of the art, for instance, 4,000:1 and higher for human genomes. We evaluate our algorithms on a large data set from three different species (more than 1,000 genomes, more than 3 TB) and on a collection of versions of Wikipedia pages. Our results show that real-time compression of highly similar sequences at high compression ratios is possible on modern hardware.

  3. Video Compression Study: h.265 vs h.264

    NASA Technical Reports Server (NTRS)

    Pryor, Jonathan

    2016-01-01

    H.265 video compression (also known as High Efficiency Video Encoding (HEVC)) promises to provide double the video quality at half the bandwidth, or the same quality at half the bandwidth of h.264 video compression [1]. This study uses a Tektronix PQA500 to determine the video quality gains by using h.265 encoding. This study also compares two video encoders to see how different implementations of h.264 and h.265 impact video quality at various bandwidths.

  4. Factors Influencing Student Gains from Undergraduate Research Experiences at a Hispanic-Serving Institution

    PubMed Central

    Daniels, Heather; Grineski, Sara E.; Collins, Timothy W.; Morales, Danielle X.; Morera, Osvaldo; Echegoyen, Lourdes

    2016-01-01

    Undergraduate research experiences (UREs) confer many benefits to students, including improved self-confidence, better communication skills, and an increased likelihood of pursuing science careers. Additionally, UREs may be particularly important for racial/ethnic minority students who are underrepresented in the science workforce. We examined factors hypothetically relevant to underrepresented minority student gains from UREs at a Hispanic-serving institution, such as mentoring quality, family income, being Latino/a, and caring for dependents. Data came from a 2013 survey of University of Texas at El Paso students engaged in 10 URE programs (n = 227). Using generalized linear models (GzLMs) and adjusting for known covariates, we found that students who reported receiving higher-quality mentorship, spending more hours caring for dependents, and receiving more programmatic resources experienced significantly greater gains from their URE in all three areas we examined (i.e., thinking and working like a scientist, personal gains, and gains in skills). In two of three areas, duration of the URE was positive and significant. Being Latino/a was positive and significant only in the model predicting personal gains. Across the three models, quality of mentorship was the most important correlate of gains. This suggests that providing training to faculty mentors involved in UREs may improve student outcomes and increase program efficacy. PMID:27521234

  5. Application of content-based image compression to telepathology

    NASA Astrophysics Data System (ADS)

    Varga, Margaret J.; Ducksbury, Paul G.; Callagy, Grace

    2002-05-01

    Telepathology is a means of practicing pathology at a distance, viewing images on a computer display rather than directly through a microscope. Without compression, images take too long to transmit to a remote location and are very expensive to store for future examination. However, to date the use of compressed images in pathology remains controversial. This is because commercial image compression algorithms such as JPEG achieve data compression without knowledge of the diagnostic content. Often images are lossily compressed at the expense of corrupting informative content. None of the currently available lossy compression techniques are concerned with what information has been preserved and what data has been discarded. Their sole objective is to compress and transmit the images as fast as possible. By contrast, this paper presents a novel image compression technique, which exploits knowledge of the slide diagnostic content. This 'content based' approach combines visually lossless and lossy compression techniques, judiciously applying each in the appropriate context across an image so as to maintain 'diagnostic' information while still maximising the possible compression. Standard compression algorithms, e.g. wavelets, can still be used, but their use in a context sensitive manner can offer high compression ratios and preservation of diagnostically important information. When compared with lossless compression the novel content-based approach can potentially provide the same degree of information with a smaller amount of data. When compared with lossy compression it can provide more information for a given amount of compression. The precise gain in the compression performance depends on the application (e.g. database archive or second opinion consultation) and the diagnostic content of the images.
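
    A minimal sketch of the content-based idea follows: the image is split into tiles, tiles flagged as diagnostically important are stored losslessly, and the remainder are compressed lossily. The tile size, the JPEG quality, and the importance mask (which in practice would come from an upstream tissue/ROI detector) are illustrative assumptions, and PNG/JPEG merely stand in for the wavelet coders discussed in the paper.

      # Content-based compression sketch: lossless coding for diagnostically
      # important tiles, lossy coding elsewhere (requires numpy and Pillow).
      import io
      import numpy as np
      from PIL import Image

      def compress_tiles(image, important, tile=64, jpeg_quality=40):
          """image: HxWx3 uint8 array; important: boolean map with one flag per tile."""
          blobs = []
          h, w, _ = image.shape
          for ty in range(0, h, tile):
              for tx in range(0, w, tile):
                  patch = Image.fromarray(image[ty:ty + tile, tx:tx + tile])
                  buf = io.BytesIO()
                  if important[ty // tile, tx // tile]:
                      patch.save(buf, format="PNG")                         # lossless
                  else:
                      patch.save(buf, format="JPEG", quality=jpeg_quality)  # lossy
                  blobs.append(((ty, tx), buf.getvalue()))
          return blobs

      rng = np.random.default_rng(0)
      img = rng.integers(0, 256, size=(256, 256, 3), dtype=np.uint8)
      mask = np.zeros((4, 4), dtype=bool)
      mask[1, 1] = True                # pretend one tile holds diagnostic content
      blobs = compress_tiles(img, mask)
      print(sum(len(b) for _, b in blobs), "bytes in total")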

  6. Risk Factors Related to Lower Limb Edema, Compression, and Physical Activity During Pregnancy: A Retrospective Study.

    PubMed

    Ochalek, Katarzyna; Pacyga, Katarzyna; Curyło, Marta; Frydrych-Szymonik, Aleksandra; Szygula, Zbigniew

    2017-06-01

    The aim of the article was to assess risk factors and to analyze methods applied in the prevention and treatment of lower limb edema in pregnant women, with a particular focus on compression therapy and exercise. Fifty-four women during the early 24-hour period following delivery were assigned to two groups: either a group with swelling of the lower limbs during pregnancy, located mostly in the region of the feet and lower legs (Group A, n = 42), or a group without edema (Group B, n = 12). Two subgroups, namely A1 and A2, were additionally distinguished in Group A. Compression therapy, which consisted of wearing circular-knit compression garments, usually at compression level 1 (ccl1), with three cases of compression level 2 (ccl2), was applied only in Group A1 (n = 18 women). The analysis has led to the conclusion that there is a link between the occurrence of edema during pregnancy on the one hand and pregravidity episodes of venous conditions (vascular insufficiency and thrombosis, p < 0.05) and the lack of physical exercise during pregnancy (p = 0.01) on the other hand. However, interdependence between the occurrence of edema and the number of times a woman had been pregnant, physical activity before gravidity, or body mass index before gravidity has not been identified. Only 33% of the analyzed women applied compression therapy during pregnancy; half of them continued to apply compression during the postpartum period. Compression therapy in combination with proper physical exercise appears to be an effective means to prevent and treat venous thrombosis and lower limb edema in pregnant women, yet further research in line with the principles of evidence-based medicine is required.

  7. Improvements in Cardiovascular Risk Factors in Young Adults in a Randomized Trial of Approaches to Weight Gain Prevention

    PubMed Central

    Wing, Rena R.; Tate, Deborah F.; Garcia, Katelyn R.; Bahnson, Judy; Lewis, Cora E.; Espeland, Mark A.

    2017-01-01

    Objective Weight gain occurs commonly in young adults and increases cardiovascular (CVD) risk. We previously reported that two self-regulation interventions reduced weight gain relative to control. Here we examine whether these interventions also benefit CVD risk factors. Methods SNAP (Study of Novel Approaches to Weight Gain Prevention) was a randomized trial in 2 academic settings (N=599; 18–35 years; body mass index 21–30 kg/m2) comparing two interventions (Self-Regulation with Small Changes; Self-Regulation with Large Changes) and Control. Small Changes taught participants to make small daily changes (approximately 100 calories) in intake and activity. Large Changes taught participants to initially lose 5–10 pounds to buffer anticipated weight gains. CVD risk factors were assessed at baseline and 2 years in 471 participants. Results Although Large Changes was associated with more beneficial changes in glucose, insulin, and HOMA-IR than Control, these differences were not significant after adjusting for multiple comparisons or 2-year weight change. Comparison of participants grouped by percent weight change from baseline to 2 years showed significant differences for several CVD risk factors, with no interaction with treatment condition. Conclusions Magnitude of weight change, rather than the specific weight gain prevention intervention, was related to changes in CVD risk factors in young adults. PMID:28782918

  8. An image assessment study of image acceptability of the Galileo low gain antenna mission

    NASA Technical Reports Server (NTRS)

    Chuang, S. L.; Haines, R. F.; Grant, T.; Gold, Yaron; Cheung, Kar-Ming

    1994-01-01

    This paper describes a study conducted by NASA Ames Research Center (ARC) in collaboration with the Jet Propulsion Laboratory (JPL), Pasadena, California, on the image acceptability of the Galileo Low Gain Antenna mission. The primary objective of the study was to determine the impact of the Integer Cosine Transform (ICT) compression algorithm on Galilean images of atmospheric bodies, moons, asteroids, and Jupiter's rings. The study involved fifteen volunteer subjects representing twelve institutions associated with the Galileo Solid State Imaging (SSI) experiment. Four different experiment-specific quantization tables (q-tables) and various compression stepsizes (q-factors) were used to achieve different compression ratios. The study then determined the acceptability of the compressed monochromatic astronomical images as evaluated by Galileo SSI mission scientists. Each observer viewed two versions of the same image side by side on a high-resolution monitor, each compressed using a different quantization stepsize. Observers were asked to select which image had the highest overall quality to support their visual evaluations of image content, and then rated both images on a one-to-five scale of judged usefulness. Up to four pre-selected types of images were presented, with and without noise, to each subject based upon the results of a previously administered survey of their image preferences. Fourteen different images in seven image groups were studied. The results showed that: (1) acceptable compression ratios vary widely with the type of image; (2) noisy images detract greatly from image acceptability and acceptable compression ratios; and (3) atmospheric images of Jupiter seem to tolerate compression ratios 4 to 5 times higher than those of some clear surface satellite images.

  9. Gestational weight gain and its associated factors in Harari Regional State: Institution based cross-sectional study, Eastern Ethiopia.

    PubMed

    Asefa, Fekede; Nemomsa, Dereje

    2016-08-30

    Gestational weight gain is an important factor that supports an optimal outcome for mothers and their infants. Whereas women who do not gain enough weight during pregnancy risk bearing a baby with low birth weight, those who gain excessive weight are at increased risk of preeclampsia and gestational diabetes. Nonetheless, data on gestational weight gain and its determinants are scarce in developing countries, as it is difficult to collect the information throughout the pregnancy period. Therefore, the aim of the study was to assess weight gain during pregnancy and its associated factors. The study employed a health facility based quantitative cross-sectional design in Harari Regional State. The study included 411 women who had given birth at health institutions from January to July of 2014. The researchers collected both primary and secondary data by using a structured questionnaire and a checklist. Using logistic regression, the factors associated with gestational weight gain were assessed and, based on the United States Institute of Medicine criteria, gestational weight gains were categorized as inadequate, adequate, and excessive. The study revealed that 69.3 %, 28 %, and 2.7 % of the women gained inadequate, adequate, and excessive gestational weight, respectively. The mean gestational weight gain was 8.96 (SD ±3.27) kg. The factors associated with adequate gestational weight gain were body mass index ≥ 25 kg/m(2) at early pregnancy (AOR = 3.2, 95 % CI 1.6, 6.3); engaging in regular physical exercise (AOR = 2.1, 95 % CI 1.2, 3.6); antenatal care visits ≥ 4 times (AOR = 2.9, 95 % CI 1.7, 5.2); and consuming fruit and vegetables (AOR = 2.7, 95 % CI 1.2, 6.6) and meat (AOR = 2.7, 95 % CI 1.1, 97.2). Generally, a small proportion of the women gained adequate gestational weight. The women who were with higher body mass index at early pregnancy, who frequently attended antenatal care visits, and who consumed diverse food items were

  10. Optimal wavelets for biomedical signal compression.

    PubMed

    Nielsen, Mogens; Kamavuako, Ernest Nlandu; Andersen, Michael Midtgaard; Lucas, Marie-Françoise; Farina, Dario

    2006-07-01

    Signal compression is gaining importance in biomedical engineering due to the potential applications in telemedicine. In this work, we propose a novel scheme of signal compression based on signal-dependent wavelets. To adapt the mother wavelet to the signal for the purpose of compression, it is necessary to define (1) a family of wavelets that depend on a set of parameters and (2) a quality criterion for wavelet selection (i.e., wavelet parameter optimization). We propose the use of an unconstrained parameterization of the wavelet for wavelet optimization. A natural performance criterion for compression is the minimization of the signal distortion rate given the desired compression rate. For coding the wavelet coefficients, we adopted the embedded zerotree wavelet coding algorithm, although any coding scheme may be used with the proposed wavelet optimization. As a representative example of application, the coding/encoding scheme was applied to surface electromyographic signals recorded from ten subjects. The distortion rate strongly depended on the mother wavelet (for example, for a 50% compression rate: optimal wavelet, mean ± SD, 5.46 ± 1.01%; worst wavelet, 12.76 ± 2.73%). Thus, optimization significantly improved performance with respect to previous approaches based on classic wavelets. The algorithm can be applied to any signal type since the optimal wavelet is selected on a signal-by-signal basis. Examples of application to ECG and EEG signals are also reported.
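
    The selection criterion can be illustrated with a small sketch that compares a few standard wavelets by the distortion they produce when only a fixed fraction of coefficients is retained. This is only an illustration of the distortion-rate criterion: the paper optimizes a continuously parameterized wavelet and uses embedded zerotree coding, neither of which is reproduced here (requires NumPy and PyWavelets).

      # Compare candidate wavelets by percent distortion at a fixed retention
      # rate; the wavelet giving the lowest distortion would be selected.
      import numpy as np
      import pywt

      def compression_distortion(signal, wavelet, keep=0.5, level=4):
          coeffs = pywt.wavedec(signal, wavelet, level=level)
          flat = np.concatenate(coeffs)
          thr = np.quantile(np.abs(flat), 1.0 - keep)      # keep the largest |c|
          kept = [np.where(np.abs(c) >= thr, c, 0.0) for c in coeffs]
          rec = pywt.waverec(kept, wavelet)[:len(signal)]
          return 100.0 * np.sum((signal - rec) ** 2) / np.sum(signal ** 2)

      t = np.linspace(0.0, 1.0, 1024)
      test_signal = np.sin(2 * np.pi * 35 * t) * np.random.default_rng(1).normal(size=t.size)
      for w in ["db2", "db4", "sym5", "coif3"]:
          print(w, round(compression_distortion(test_signal, w), 2), "% distortion")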

  11. Integer cosine transform compression for Galileo at Jupiter: A preliminary look

    NASA Technical Reports Server (NTRS)

    Ekroot, L.; Dolinar, S.; Cheung, K.-M.

    1993-01-01

    The Galileo low-gain antenna mission has a severely rate-constrained channel over which we wish to send large amounts of information. Because of this link pressure, compression techniques for image and other data are being selected. The compression technique that will be used for images is the integer cosine transform (ICT). This article investigates the compression performance of Galileo's ICT algorithm as applied to Galileo images taken during the early portion of the mission and to images that simulate those expected from the encounter at Jupiter.

  12. DNABIT Compress - Genome compression algorithm.

    PubMed

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-22

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that the "DNABIT Compress" algorithm is the best among existing compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves the running time of all previous DNA compression programs. Assigning binary bits (unique bit codes) to fragments of a DNA sequence (exact repeats, reverse repeats) is also a concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio below 1.72 bits/base.
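
    The baseline idea of bit-level coding of DNA can be shown with the simplest possible mapping, two bits per base. This is not the DNABIT Compress bit-code assignment (which codes repeat and non-repeat fragments separately); it only illustrates the packing step that such schemes build on.

      # 2 bits per base (A=00, C=01, G=10, T=11): the baseline packing that
      # bit-level DNA compressors improve upon.
      CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
      BASE = {v: k for k, v in CODE.items()}

      def pack(seq: str) -> bytes:
          out = bytearray()
          for i in range(0, len(seq), 4):
              chunk = seq[i:i + 4]
              byte = 0
              for base in chunk:
                  byte = (byte << 2) | CODE[base]
              byte <<= 2 * (4 - len(chunk))        # pad the final byte
              out.append(byte)
          return bytes(out)

      def unpack(data: bytes, n_bases: int) -> str:
          bases = []
          for byte in data:
              for shift in (6, 4, 2, 0):
                  bases.append(BASE[(byte >> shift) & 0b11])
          return "".join(bases[:n_bases])

      seq = "ACGTTGCAGT"
      assert unpack(pack(seq), len(seq)) == seq     # 10 bases stored in 3 bytes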

  13. Multivariable control of vapor compression systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, X.D.; Liu, S.; Asada, H.H.

    1999-07-01

    This paper presents the results of a study of multi-input multi-output (MIMO) control of vapor compression cycles that have multiple actuators and sensors for regulating multiple outputs, e.g., superheat and evaporating temperature. The conventional single-input single-output (SISO) control was shown to have very limited performance. A low order lumped-parameter model was developed to describe the significant dynamics of vapor compression cycles. Dynamic modes were analyzed based on the low order model to provide physical insight into system dynamic behavior. To synthesize a MIMO control system, the Linear-Quadratic Gaussian (LQG) technique was applied to coordinate compressor speed and expansion valve opening with guaranteed stability robustness in the design. Furthermore, to control a vapor compression cycle over a wide range of operating conditions where system nonlinearities become evident, a gain scheduling scheme was used so that the MIMO controller could adapt to changing operating conditions. Both analytical studies and experimental tests showed that the MIMO control could significantly improve the transient behavior of vapor compression cycles compared to the conventional SISO control scheme. The MIMO control proposed in this paper could be extended to the control of vapor compression cycles in a variety of HVAC and refrigeration applications to improve system performance and energy efficiency.
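
    The gain-synthesis step can be sketched with a toy linear-quadratic design for an assumed 2-state, 2-input linearized cycle model. The A and B matrices below are placeholders, not an identified plant, and the sketch computes only the state-feedback part of an LQG design (the Kalman estimator and the gain-scheduling grid are only indicated in the comments); it uses NumPy and SciPy.

      # Toy MIMO state-feedback gain for a linearized vapor-compression model:
      #   states x = [superheat, evaporating temperature]
      #   inputs u = [compressor speed, expansion valve opening]
      import numpy as np
      from scipy.linalg import solve_continuous_are

      def lqr_gain(A, B, Q, R):
          P = solve_continuous_are(A, B, Q, R)
          return np.linalg.solve(R, B.T @ P)        # K = R^-1 B^T P

      A = np.array([[-0.4, 0.1],
                    [0.05, -0.2]])                  # placeholder dynamics
      B = np.array([[0.3, -0.6],
                    [0.1, 0.4]])                    # placeholder input matrix
      Q = np.diag([10.0, 1.0])                      # weight superheat more heavily
      R = np.diag([1.0, 1.0])
      K = lqr_gain(A, B, Q, R)
      print("state-feedback gain K =\n", K)

      # Gain scheduling, as in the paper, would recompute or interpolate K over
      # a grid of operating points: K_i = lqr_gain(A_i, B_i, Q, R).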

  14. High-quality JPEG compression history detection for fake uncompressed images

    NASA Astrophysics Data System (ADS)

    Zhang, Rong; Wang, Rang-Ding; Guo, Li-Jun; Jiang, Bao-Chuan

    2017-05-01

    Authenticity is one of the most important evaluation factors of images for photography competitions or journalism. Unusual compression history of an image often implies the illicit intent of its author. Our work aims at distinguishing real uncompressed images from fake uncompressed images that are saved in uncompressed formats but have been previously compressed. To detect the potential image JPEG compression, we analyze the JPEG compression artifacts based on the tetrolet covering, which corresponds to the local image geometrical structure. Since the compression can alter the structure information, the tetrolet covering indexes may be changed if a compression is performed on the test image. Such changes can provide valuable clues about the image compression history. To be specific, the test image is first compressed with different quality factors to generate a set of temporary images. Then, the test image is compared with each temporary image block-by-block to investigate whether the tetrolet covering index of each 4×4 block is different between them. The percentages of the changed tetrolet covering indexes corresponding to the quality factors (from low to high) are computed and used to form the p-curve, the local minimum of which may indicate the potential compression. Our experimental results demonstrate the advantage of our method to detect JPEG compressions of high quality, even the highest quality factors such as 98, 99, or 100 of the standard JPEG compression, from uncompressed-format images. At the same time, our detection algorithm can accurately identify the corresponding compression quality factor.
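
    The recompression sweep behind the p-curve can be sketched as follows: the test image is recompressed at a range of quality factors and the fraction of changed 4x4 blocks is recorded for each, and a local minimum over the quality axis hints at the original compression setting. Plain pixel differences are used here as a stand-in for the paper's tetrolet covering indexes, and the synthetic gradient image is only a placeholder input (requires NumPy and Pillow).

      # Build a simplified "p-curve": fraction of changed 4x4 blocks after
      # recompressing the test image at each candidate JPEG quality factor.
      import io
      import numpy as np
      from PIL import Image

      def p_curve(test_image, qualities=range(50, 101, 5)):
          ref = np.asarray(test_image.convert("L"), dtype=np.int16)
          h, w = (d - d % 4 for d in ref.shape)
          curve = {}
          for q in qualities:
              buf = io.BytesIO()
              test_image.save(buf, format="JPEG", quality=q)
              rec = np.asarray(Image.open(buf).convert("L"), dtype=np.int16)
              changed, blocks = 0, 0
              for y in range(0, h, 4):
                  for x in range(0, w, 4):
                      blocks += 1
                      if np.any(ref[y:y + 4, x:x + 4] != rec[y:y + 4, x:x + 4]):
                          changed += 1
              curve[q] = changed / blocks
          return curve        # inspect for a local minimum over q

      gradient = np.tile(np.arange(256, dtype=np.uint8), (256, 1))  # placeholder image
      print(p_curve(Image.fromarray(gradient)))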

  15. Channel coding/decoding alternatives for compressed TV data on advanced planetary missions.

    NASA Technical Reports Server (NTRS)

    Rice, R. F.

    1972-01-01

    The compatibility of channel coding/decoding schemes with a specific TV compressor developed for advanced planetary missions is considered. Under certain conditions, it is shown that compressed data can be transmitted at approximately the same rate as uncompressed data without any loss in quality. Thus, the full gains of data compression can be achieved in real-time transmission.

  16. Fundamental study of compression for movie files of coronary angiography

    NASA Astrophysics Data System (ADS)

    Ando, Takekazu; Tsuchiya, Yuichiro; Kodera, Yoshie

    2005-04-01

    When network distribution of movie files is considered, lossy-compressed movie files with small file sizes can be useful for reference purposes. We chose three kinds of coronary stricture movies with different motion speeds as test objects: movies with slow, normal, and fast heart rates. MPEG-1, DivX5.11, WMV9 (Windows Media Video 9), and WMV9-VCM (Windows Media Video 9-Video Compression Manager) movies were made from the three kinds of AVI-format movies with different motion speeds. Five kinds of movies, the four compressed versions and the uncompressed AVI (used instead of the DICOM format), were evaluated by Thurstone's method. The evaluation factors were sharpness, granularity, contrast, and comprehensive evaluation. In the virtual bradycardia movie, AVI received the best evaluation for all factors except granularity. In the virtual normal movie, a different compression technique performed best for each evaluation factor. In the virtual tachycardia movie, MPEG-1 received the best evaluation for all factors except contrast. A suitable compression format therefore depends on the speed of the movie, because of differences among the compression algorithms. This is thought to reflect the influence of inter-frame compression: video compression combines inter-frame and intra-frame coding, and since each compression method affects the image differently, the relation between the compression algorithm and our results needs further examination.

  17. AFRESh: an adaptive framework for compression of reads and assembled sequences with random access functionality.

    PubMed

    Paridaens, Tom; Van Wallendael, Glenn; De Neve, Wesley; Lambert, Peter

    2017-05-15

    The past decade has seen the introduction of new technologies that have steadily lowered the cost of genomic sequencing. We can even observe that the cost of sequencing is dropping significantly faster than the cost of storage and transmission. The latter motivates a need for continuous improvements in the area of genomic data compression, not only at the level of effectiveness (compression rate), but also at the level of functionality (e.g. random access), configurability (effectiveness versus complexity, coding tool set …) and versatility (support for both sequenced reads and assembled sequences). In that regard, we can point out that current approaches mostly do not support random access, requiring full files to be transmitted, and that current approaches are restricted to either read or sequence compression. We propose AFRESh, an adaptive framework for no-reference compression of genomic data with random access functionality, targeting the effective representation of the raw genomic symbol streams of both reads and assembled sequences. AFRESh makes use of a configurable set of prediction and encoding tools, extended by a Context-Adaptive Binary Arithmetic Coding (CABAC) scheme, to compress raw genetic codes. To the best of our knowledge, our paper is the first to describe an effective implementation of CABAC outside of its original application. By applying CABAC, the compression effectiveness improves by up to 19% for assembled sequences and up to 62% for reads. By applying AFRESh to the genomic symbols of the MPEG genomic compression test set for reads, a compression gain is achieved of up to 51% compared to SCALCE, 42% compared to LFQC and 44% compared to ORCOM. When comparing to generic compression approaches, a compression gain is achieved of up to 41% compared to GNU Gzip and 22% compared to 7-Zip at the Ultra setting. Additionally, when compressing assembled sequences of the Human Genome, a compression gain is achieved of up to 34% compared to GNU Gzip and 16

  18. Factors that influence the tribocharging of pulverulent materials in compressed-air devices

    NASA Astrophysics Data System (ADS)

    Das, S.; Medles, K.; Mihalcioiu, A.; Beleca, R.; Dragan, C.; Dascalescu, L.

    2008-12-01

    Tribocharging of pulverulent materials in compressed-air devices is a typical multi-factorial process. This paper aims at demonstrating the value of using the design-of-experiments methodology in association with virtual instrumentation for quantifying the effects of various process variables and of their interactions, as a prerequisite for the development of new tribocharging devices for industrial applications. The study is focused on the tribocharging of PVC powders in compressed-air devices similar to those employed in electrostatic painting. A classical 2³ full-factorial design (three factors at two levels) was employed for conducting the experiments. The response function was the charge/mass ratio of the material collected in a modified Faraday cage at the exit of the tribocharging device. The charge/mass ratio was found to increase with the injection pressure and the vortex pressure in the tribocharging device, and to decrease with increasing feed rate. In the present study, in-house design-of-experiments software was employed for statistical analysis of the experimental data and validation of the experimental model.
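
    The design and the main-effect estimation can be sketched generically as below. The coded design matrix is the standard 2³ layout; the charge/mass values are invented placeholders (chosen only to mirror the signs reported in the abstract), not measured data.

      # 2^3 full-factorial design: three factors at coded levels -1/+1.
      # A main effect is the mean response at the high level minus the mean
      # response at the low level of that factor.
      import itertools
      import numpy as np

      factors = ["injection_pressure", "vortex_pressure", "feed_rate"]
      design = np.array(list(itertools.product([-1, 1], repeat=3)))      # 8 runs
      charge_mass = np.array([0.8, 0.5, 1.1, 0.7, 1.3, 0.9, 1.6, 1.2])   # placeholder responses

      for j, name in enumerate(factors):
          effect = (charge_mass[design[:, j] == 1].mean()
                    - charge_mass[design[:, j] == -1].mean())
          print(f"main effect of {name}: {effect:+.3f}")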

  19. Using Compression Isotherms of Phospholipid Monolayers to Explore Critical Phenomena: A Biophysical Chemistry Experiment

    ERIC Educational Resources Information Center

    Gragson, Derek E.; Beaman, Dan; Porter, Rhiannon

    2008-01-01

    Two experiments are described in which students explore phase transitions and critical phenomena by obtaining compression isotherms of phospholipid monolayers using a Langmuir trough. Through relatively simple analysis of their data students gain a better understanding of compression isotherms, the application of the Clapeyron equation, the…

  20. Study of communications data compression methods

    NASA Technical Reports Server (NTRS)

    Jones, H. W.

    1978-01-01

    A simple monochrome conditional replenishment system was extended to higher compression and to higher motion levels, by incorporating spatially adaptive quantizers and field repeating. Conditional replenishment combines intraframe and interframe compression, and both areas are investigated. The gain of conditional replenishment depends on the fraction of the image changing, since only changed parts of the image need to be transmitted. If the transmission rate is set so that only one fourth of the image can be transmitted in each field, greater change fractions will overload the system. A computer simulation was prepared which incorporated (1) field repeat of changes, (2) a variable change threshold, (3) frame repeat for high change, and (4) two mode, variable rate Hadamard intraframe quantizers. The field repeat gives 2:1 compression in moving areas without noticeable degradation. Variable change threshold allows some flexibility in dealing with varying change rates, but the threshold variation must be limited for acceptable performance.
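
    The core of conditional replenishment, transmitting only the blocks that changed beyond a threshold since the last transmitted frame, is sketched below with NumPy. The block size and threshold are illustrative, and the field repeat, frame repeat, and adaptive Hadamard quantizers described above are omitted.

      # Conditional replenishment sketch: send only the blocks whose mean
      # absolute change since the last transmitted frame exceeds a threshold.
      import numpy as np

      def replenish(prev, curr, block=16, threshold=8.0):
          """Return (updated reference frame, list of transmitted block origins)."""
          out = prev.copy()
          sent = []
          h, w = curr.shape
          for y in range(0, h, block):
              for x in range(0, w, block):
                  diff = np.abs(curr[y:y + block, x:x + block].astype(float)
                                - prev[y:y + block, x:x + block].astype(float)).mean()
                  if diff > threshold:
                      out[y:y + block, x:x + block] = curr[y:y + block, x:x + block]
                      sent.append((y, x))
          return out, sent

      rng = np.random.default_rng(2)
      frame0 = rng.integers(0, 256, (128, 128), dtype=np.uint8)
      frame1 = frame0.copy()
      frame1[32:64, 32:64] = rng.integers(0, 256, (32, 32), dtype=np.uint8)  # moving area
      ref, sent = replenish(frame0, frame1)
      print(f"transmitted {len(sent)} of {(128 // 16) ** 2} blocks")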

  1. Adaptive gain and filtering circuit for a sound reproduction system

    NASA Technical Reports Server (NTRS)

    Engebretson, A. Maynard (Inventor); O'Connell, Michael P. (Inventor)

    1998-01-01

    Adaptive compressive gain and level dependent spectral shaping circuitry for a hearing aid include a microphone to produce an input signal and a plurality of channels connected to a common circuit output. Each channel has a preset frequency response. Each channel includes a filter with a preset frequency response to receive the input signal and to produce a filtered signal, a channel amplifier to amplify the filtered signal to produce a channel output signal, a threshold register to establish a channel threshold level, and a gain circuit. The gain circuit increases the gain of the channel amplifier when the channel output signal falls below the channel threshold level and decreases the gain of the channel amplifier when the channel output signal rises above the channel threshold level. A transducer produces sound in response to the signal passed by the common circuit output.
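
    A software analogue of this multi-channel adaptive-gain behavior is sketched below with NumPy/SciPy: the input is split into frequency channels, and in each channel the gain is nudged up when the channel output falls below its threshold and down when it rises above it. The band layout, thresholds, frame length, and step size are illustrative choices, and filter state is not carried across frames in this simplified sketch.

      # Per-channel compressive gain: raise the gain when the channel level is
      # below its threshold, lower it when above (digital sketch of the idea).
      import numpy as np
      from scipy.signal import butter, sosfilt

      fs = 16000
      bands = [(100, 1000), (1000, 4000)]             # two example channels
      sos = [butter(4, b, btype="bandpass", fs=fs, output="sos") for b in bands]
      thresholds = [0.05, 0.05]                       # per-channel target level
      gains = [1.0, 1.0]
      step = 0.05
      frame = 160                                     # 10 ms frames

      rng = np.random.default_rng(3)
      x = 0.1 * rng.normal(size=fs)                   # 1 s of noise as test input
      y = np.zeros_like(x)
      for start in range(0, len(x), frame):
          seg = x[start:start + frame]
          for ch in range(len(bands)):
              out = gains[ch] * sosfilt(sos[ch], seg)
              level = np.sqrt(np.mean(out ** 2))
              gains[ch] *= (1 + step) if level < thresholds[ch] else (1 - step)
              y[start:start + frame] += out
      print("final channel gains:", [round(g, 2) for g in gains])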

  2. A review of lossless audio compression standards and algorithms

    NASA Astrophysics Data System (ADS)

    Muin, Fathiah Abdul; Gunawan, Teddy Surya; Kartiwi, Mira; Elsheikh, Elsheikh M. A.

    2017-09-01

    Over the years, lossless audio compression has gained popularity as researchers and businesses have become more aware of the need for better quality and the growing demand for storage. This paper analyses various lossless audio coding algorithms and standards that are used and available in the market, focusing on Linear Predictive Coding (LPC) specifically due to its popularity and robustness in audio compression; nevertheless, other prediction methods are compared for verification. Advanced representations of LPC such as LSP decomposition techniques are also discussed within this paper.

  3. Complete chirp analysis of a gain-switched pulse using an interferometric two-photon absorption autocorrelation.

    PubMed

    Chin, Sang Hoon; Kim, Young Jae; Song, Ho Seong; Kim, Dug Young

    2006-10-10

    We propose a simple but powerful scheme for the complete analysis of the frequency chirp of a gain-switched optical pulse using a fringe-resolved interferometric two-photon absorption autocorrelator. The frequency chirp imposed on the gain-switched pulse from a laser diode was retrieved from both the intensity autocorrelation trace and the envelope of the second-harmonic interference fringe pattern. To verify the accuracy of the proposed phase retrieval method, we performed an optical pulse compression experiment using dispersion-compensating fibers of different lengths. We obtained close agreement, with less than 1% error, between the compressed pulse widths and the numerically calculated pulse widths.

  4. Spectral compression algorithms for the analysis of very large multivariate images

    DOEpatents

    Keenan, Michael R.

    2007-10-16

    A method for spectrally compressing data sets enables the efficient analysis of very large multivariate images. The spectral compression algorithm uses a factored representation of the data that can be obtained from Principal Components Analysis or other factorization technique. Furthermore, a block algorithm can be used for performing common operations more efficiently. An image analysis can be performed on the factored representation of the data, using only the most significant factors. The spectral compression algorithm can be combined with a spatial compression algorithm to provide further computational efficiencies.
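
    A minimal NumPy sketch of the factored representation is given below: the (pixels x channels) data matrix is decomposed, only the leading factors are kept, and the image can be reconstructed approximately from scores and loadings. Block-wise processing and the combined spatial compression of the patent are omitted, and the toy data cube is a placeholder.

      # Spectral compression via truncated PCA (SVD): keep only the leading
      # factors of the (pixels x channels) matrix.
      import numpy as np

      def spectral_compress(cube, n_factors):
          """cube: (rows, cols, channels) -> (scores, loadings, channel mean)."""
          r, c, ch = cube.shape
          X = cube.reshape(r * c, ch).astype(float)
          mean = X.mean(axis=0)
          U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
          scores = U[:, :n_factors] * S[:n_factors]     # (pixels, k)
          loadings = Vt[:n_factors]                     # (k, channels)
          return scores, loadings, mean

      def spectral_reconstruct(scores, loadings, mean, shape):
          return (scores @ loadings + mean).reshape(shape)

      rng = np.random.default_rng(4)
      cube = rng.normal(size=(32, 32, 100))             # toy 100-channel image
      s, l, m = spectral_compress(cube, n_factors=5)
      approx = spectral_reconstruct(s, l, m, cube.shape)
      print("stored values:", s.size + l.size + m.size, "vs original:", cube.size)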

  5. A Framework of Hyperspectral Image Compression using Neural Networks

    DOE PAGES

    Masalmah, Yahya M.; Martínez Nieves, Christian; Rivera Soto, Rafael; ...

    2015-01-01

    Hyperspectral image analysis has gained great attention due to its wide range of applications. Hyperspectral images provide a vast amount of information about underlying objects in an image by using a large range of the electromagnetic spectrum for each pixel. However, since the same image is taken multiple times using distinct electromagnetic bands, the size of such images tends to be significant, which leads to greater processing requirements. The aim of this paper is to present a proposed framework for image compression and to study the possible effects of spatial compression on the quality of unmixing results. Image compression allows us to reduce the dimensionality of an image while still preserving most of the original information, which could lead to faster image processing. Lastly, this paper presents preliminary results of different training techniques used in an Artificial Neural Network (ANN) based compression algorithm.

  6. Neural network for image compression

    NASA Astrophysics Data System (ADS)

    Panchanathan, Sethuraman; Yeap, Tet H.; Pilache, B.

    1992-09-01

    In this paper, we propose a new scheme for image compression using neural networks. Image data compression deals with minimization of the amount of data required to represent an image while maintaining an acceptable quality. Several image compression techniques have been developed in recent years. We note that the coding performance of these techniques may be improved by employing adaptivity. Over the last few years, neural networks have emerged as an effective tool for solving a wide range of problems involving adaptivity and learning. A multilayer feed-forward neural network trained using the backward error propagation algorithm is used in many applications. However, this model is not suitable for image compression because of its poor coding performance. Recently, a self-organizing feature map (SOFM) algorithm has been proposed which yields good coding performance. However, this algorithm requires a long training time because the network starts with random initial weights. In this paper we use the backward error propagation (BEP) algorithm to quickly obtain the initial weights, which are then used to speed up the training time required by the SOFM algorithm. The proposed approach (BEP-SOFM) combines the advantages of the two techniques and, hence, achieves good coding performance in a shorter training time. Our simulation results demonstrate the potential gains of the proposed technique.

  7. DNA copy number gains at loci of growth factors and their receptors in salivary gland adenoid cystic carcinoma.

    PubMed

    Vékony, Hedy; Ylstra, Bauke; Wilting, Saskia M; Meijer, Gerrit A; van de Wiel, Mark A; Leemans, C René; van der Waal, Isaäc; Bloemena, Elisabeth

    2007-06-01

    Adenoid cystic carcinoma (ACC) is a malignant salivary gland tumor with a high mortality rate due to late, distant metastases. This study aimed at unraveling common genetic abnormalities associated with ACC. Additionally, chromosomal changes were correlated with patient characteristics and survival. Microarray-based comparative genomic hybridization was applied to a series of 18 paraffin-embedded primary ACCs using a genome-wide scanning BAC array. A total of 238 aberrations were detected, representing more gains than losses (205 versus 33, respectively). The most frequent gains (>60%) were observed at 9q33.3-q34.3, 11q13.3, 11q23.3, 19p13.3-p13.11, 19q12-q13.43, 21q22.3, and 22q13.33. These loci harbor numerous growth factor [fibroblast growth factor (FGF) and platelet-derived growth factor (PDGF)] and growth factor receptor (FGFR3 and PDGFRbeta) genes. Gains at the FGF(R) regions occurred significantly more frequently in the recurred/metastasized ACCs compared with indolent ACCs. Furthermore, patients with 17 or more chromosomal aberrations had a significantly less favorable outcome than patients with fewer chromosomal aberrations (log-rank = 5.2; P = 0.02). Frequent DNA copy number gains at loci of growth factors and their receptors suggest their involvement in ACC initiation and progression. Additionally, the presence of FGFR3 and PDGFRbeta in gained chromosomal regions suggests a possible role for autocrine stimulation in ACC tumorigenesis.

  8. DNABIT Compress – Genome compression algorithm

    PubMed Central

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-01

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, “DNABIT Compress”, for DNA sequences based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that the “DNABIT Compress” algorithm is the best among existing compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves the running time of all previous DNA compression programs. Assigning binary bits (unique bit codes) to fragments of a DNA sequence (exact repeats, reverse repeats) is also a concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio below 1.72 bits/base. PMID:21383923

  9. OpenCL-based vicinity computation for 3D multiresolution mesh compression

    NASA Astrophysics Data System (ADS)

    Hachicha, Soumaya; Elkefi, Akram; Ben Amar, Chokri

    2017-03-01

    3D multiresolution mesh compression systems are still widely addressed in many domains. These systems increasingly require volumetric data to be processed in real time, so performance is becoming constrained by hardware resource usage and the need to reduce overall computation time. In this paper, our contribution lies entirely in computing, in real time, the triangle neighborhoods of 3D progressive meshes for a robust compression algorithm based on the scan-based wavelet transform (WT) technique. The originality of this latter algorithm is to compute the WT with minimum memory usage by processing data as they are acquired. However, with large data, this technique is poor in terms of computational complexity. For that reason, this work exploits the GPU to accelerate the computation, using OpenCL as a heterogeneous programming language. Experiments demonstrate that, aside from the portability across various platforms and the flexibility guaranteed by the OpenCL-based implementation, this method achieves a speedup factor of 5 compared to the sequential CPU implementation.

  10. Proposed data compression schemes for the Galileo S-band contingency mission

    NASA Technical Reports Server (NTRS)

    Cheung, Kar-Ming; Tong, Kevin

    1993-01-01

    The Galileo spacecraft is currently on its way to Jupiter and its moons. In April 1991, the high gain antenna (HGA) failed to deploy as commanded. If the current efforts to deploy the HGA fail, communications during the Jupiter encounters will be through one of two low gain antennas (LGA) on an S-band (2.3 GHz) carrier. Considerable effort has been and will continue to be devoted to attempts to open the HGA. Various options for improving Galileo's telemetry downlink performance are also being evaluated in the event that the HGA does not open by Jupiter arrival. Among all viable options, the most promising and powerful one is to perform image and non-image data compression in software onboard the spacecraft. This involves in-flight reprogramming of the existing flight software of Galileo's Command and Data Subsystem processors and Attitude and Articulation Control System (AACS) processor, which have very limited computational and memory resources. In this article we describe the proposed data compression algorithms and give their respective compression performance. The planned image compression algorithm is a 4 x 4 or an 8 x 8 multiplication-free integer cosine transform (ICT) scheme, which can be viewed as an integer approximation of the popular discrete cosine transform (DCT) scheme. The implementation complexity of the ICT schemes is much lower than that of DCT-based schemes, yet the performances of the two algorithms are indistinguishable. The proposed non-image compression algorithm is a Lempel-Ziv-Welch (LZW) variant, which is a lossless universal compression algorithm based on a dynamic dictionary lookup table. We developed a simple and efficient hashing function to perform the string search.
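
    For the non-image path, a textbook byte-oriented LZW encoder/decoder is sketched below to illustrate the dictionary-based scheme; it is not the flight implementation (the custom hashing function and the processors' memory limits are not modeled).

      # Textbook LZW over bytes: a dynamic dictionary maps growing strings to
      # integer codes; the decoder rebuilds the same dictionary on the fly.
      def lzw_encode(data: bytes):
          table = {bytes([i]): i for i in range(256)}
          w, codes = b"", []
          for b in data:
              wc = w + bytes([b])
              if wc in table:
                  w = wc
              else:
                  codes.append(table[w])
                  table[wc] = len(table)
                  w = bytes([b])
          if w:
              codes.append(table[w])
          return codes

      def lzw_decode(codes):
          table = {i: bytes([i]) for i in range(256)}
          w = table[codes[0]]
          out = bytearray(w)
          for code in codes[1:]:
              entry = table[code] if code in table else w + w[:1]   # KwKwK case
              out += entry
              table[len(table)] = w + entry[:1]
              w = entry
          return bytes(out)

      msg = b"TOBEORNOTTOBEORTOBEORNOT"
      assert lzw_decode(lzw_encode(msg)) == msg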

  11. Loaded delay lines for future RF pulse compression systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, R.M.; Wilson, P.B.; Kroll, N.M.

    1995-05-01

    The peak power delivered by the klystrons in the NLCTA (Next Linear Collider Test Accelerator) now under construction at SLAC is enhanced by a factor of four in a SLED-II type of RF pulse compression system (pulse width compression ratio of six). To achieve the desired output pulse duration of 250 ns, a delay line constructed from a 36 m length of circular waveguide is used. Future colliders, however, will require even higher peak power and larger compression factors, which favors a more efficient binary pulse compression approach. Binary pulse compression, however, requires a line whose delay time is approximately proportional to the compression factor. To reduce the length of these lines to manageable proportions, periodically loaded delay lines are being analyzed using a generalized scattering matrix approach. One issue under study is the possibility of propagating two TE{sub o} modes, one with a high group velocity and one with a group velocity of the order of 0.05c, for use in a single-line binary pulse compression system. Particular attention is paid to time domain pulse degradation and to Ohmic losses.

  12. Quality evaluation of motion-compensated edge artifacts in compressed video.

    PubMed

    Leontaris, Athanasios; Cosman, Pamela C; Reibman, Amy R

    2007-04-01

    Little attention has been paid to an impairment common in motion-compensated video compression: the addition of high-frequency (HF) energy as motion compensation displaces blocking artifacts off block boundaries. In this paper, we employ an energy-based approach to measure this motion-compensated edge artifact, using both compressed bitstream information and decoded pixels. We evaluate the performance of our proposed metric, along with several blocking and blurring metrics, on compressed video in two ways. First, ordinal scales are evaluated through a series of expectations that a good quality metric should satisfy: the objective evaluation. Then, the best performing metrics are subjectively evaluated. The same subjective data set is finally used to obtain interval scales to gain more insight. Experimental results show that we accurately estimate the percentage of the added HF energy in compressed video.

  13. The development and use of SPIO Lycra compression bracing in children with neuromotor deficits.

    PubMed

    Hylton, N; Allen, C

    1997-01-01

    The use of flexible compression bracing in persons with neuromotor deficits offers improved possibilities for stability and movement control without severely limiting joint movement options. At the Children's Therapy Center in Kent, Washington, this treatment modality has been explored with increasing application in children with moderate to severe cerebral palsy and other neuromotor deficits over the past 6 years, with good success. Significant functional improvements using neoprene shoulder/trunk/hip bracing led us to experiment with much lighter compression materials. The stabilizing pressure input orthosis, or SPIO, bracing system (developed by Cheryl Allen, parent and Chief Designer, and Nancy Hylton, PT) is custom-fitted to the stability, movement control, and sensory deficit needs of a specific individual. SPIO bracing developed for a specific child has often become part of a rapidly growing set of flexible bracing options that appear to provide an improved base of support for functional gains in balance, dynamic stability, and general and specific movement control, with improved postural and muscle readiness. Both deep sensory and subtle biomechanical factors may account for the functional changes observed. This article discusses the development and current use of flexible compression SPIO bracing in this area.

  14. Factors Associated With Women's Plans to Gain Weight Categorized as Above or Below the National Guidelines During Pregnancy.

    PubMed

    Park, Christina K; Timm, Valerie; Neupane, Binod; Beyene, Joseph; Schmidt, Louis A; McDonald, Sarah D

    2015-03-01

    Given that planning to gain gestational weight categorized as above the national guidelines is associated with actually gaining above the guidelines, we sought to identify physical, lifestyle, knowledge, and psychological factors associated with planned weight gain. Using a piloted, self-administered questionnaire, a cross-sectional study of women with singleton pregnancies was conducted. Women's plans for weight gain were categorized as above, within, or below the guidelines. Univariate and multivariate analyses were performed. The response rate was 90.7% (n = 330). Compared with women whose plans to gain weight were within the guidelines, women whose plans to gain were above the guidelines were more likely to be older (adjusted odds ratio [aOR] 1.09 per year; 95% CI 1.03 to 1.16), to have a greater pre-pregnancy BMI (aOR 1.17 per unit of BMI; 95% CI 1.10 to 1.25), to drink more than one glass of soft drink or juice per day (aOR 2.73; 95% CI 1.27 to 5.87), and to report receiving a recommendation by their care provider to gain weight above the guidelines (aOR 5.46; 95% CI 1.56 to 19.05). Women whose plans to gain weight were categorized as below the guidelines were more likely to eat lunch in front of a screen (aOR 2.27; 95% CI 1.11 to 4.66) and to aspire to greater social desirability (aOR 2.51; 95% CI 1.01 to 6.22). Modifiable factors associated with planned gestational weight gain categorized as above the guidelines included soft drink or juice consumption and having a recommendation from a care provider, while planned weight gain categorized as below the guidelines was associated with eating lunch in front of a screen and social desirability.

  15. Nova Upgrade: A proposed ICF facility to demonstrate ignition and gain, revision 1

    NASA Astrophysics Data System (ADS)

    1992-07-01

    The present objective of the national Inertial Confinement Fusion (ICF) Program is to determine the scientific feasibility of compressing and heating a small mass of mixed deuterium and tritium (DT) to conditions at which fusion occurs and significant energy is released. The potential applications of ICF will be determined by the resulting fusion energy yield (amount of energy produced) and gain (ratio of energy released to energy required to heat and compress the DT fuel). Important defense and civilian applications, including weapons physics, weapons effects simulation, and ultimately the generation of electric power, will become possible if yields of 100 to 1,000 MJ and gains exceeding approximately 50 can be achieved. Once ignition and propagating burn producing modest gain (2 to 10) at moderate drive energy (1 to 2 MJ) have been achieved, the extension to high gain (greater than 50) is straightforward. Therefore, the demonstration of ignition and modest gain is the final step in establishing the scientific feasibility of ICF. Lawrence Livermore National Laboratory (LLNL) proposes the Nova Upgrade Facility to achieve this demonstration by the end of the decade. This facility would be constructed within the existing Nova building at LLNL for a total cost of approximately $400 M over the proposed FY 1995-1999 construction period. This report discusses this facility.

  16. Compression evaluation of surgery video recordings retaining diagnostic credibility (compression evaluation of surgery video)

    NASA Astrophysics Data System (ADS)

    Duplaga, M.; Leszczuk, M. I.; Papir, Z.; Przelaskowski, A.

    2008-12-01

    Wider dissemination of medical digital video libraries is affected by two correlated factors: resource-effective content compression and its direct influence on diagnostic credibility. It has been shown that it is possible to meet these contradictory requirements halfway for long-lasting, low-motion surgery recordings at compression ratios close to 100 (bronchoscopic procedures were investigated as a case study). As the main supporting assumption, it has been accepted that the content can be compressed as long as clinicians are not able to sense a loss of video diagnostic fidelity (visually lossless compression). Different commercial codecs were inspected by means of combined subjective and objective tests of their usability in medical video libraries. The subjective tests involved a panel of clinicians who had to rank compressed bronchoscopic video content according to its quality under the bubble sort algorithm. For the objective tests, two metrics (a hybrid vector measure and Hosaka plots) were calculated frame by frame and averaged over a whole sequence.

  17. Cement Leakage in Percutaneous Vertebral Augmentation for Osteoporotic Vertebral Compression Fractures: Analysis of Risk Factors.

    PubMed

    Xie, Weixing; Jin, Daxiang; Ma, Hui; Ding, Jinyong; Xu, Jixi; Zhang, Shuncong; Liang, De

    2016-05-01

    The risk factors for cement leakage were retrospectively reviewed in 192 patients who underwent percutaneous vertebral augmentation (PVA). The aim was to discuss the factors related to cement leakage in the PVA procedure for the treatment of osteoporotic vertebral compression fractures. PVA is widely applied for the treatment of osteoporotic vertebral fractures. Cement leakage is a major complication of this procedure, and the risk factors for cement leakage are controversial. A retrospective review of 192 patients who underwent PVA was conducted. The following data were recorded: age, sex, bone density, number of fractured vertebrae before surgery, number of treated vertebrae, severity of the treated vertebrae, operative approach, volume of injected bone cement, preoperative vertebral compression ratio, preoperative local kyphosis angle, intraosseous clefts, preoperative vertebral cortical bone defects, and ratio and type of cement leakage. To study the correlation between each factor and the cement leakage ratio, bivariate regression analysis was employed for univariate analysis, whereas multivariate linear regression analysis was employed for multivariate analysis. The study included 192 patients (282 treated vertebrae), and cement leakage occurred in 100 vertebrae (35.46%). Vertebrae with preoperative cortical bone defects generally exhibited a higher cement leakage ratio, and the leakage was typically type C. Vertebrae with intact cortical bone before the procedure tended to experience type S leakage. Univariate analysis showed that patient age, bone density, number of fractured vertebrae before surgery, and vertebral cortical bone were associated with the cement leakage ratio (P<0.05). Multivariate analysis showed that the main factors influencing bone cement leakage are bone density and vertebral cortical bone defect, with standardized partial regression coefficients of -0.085 and 0.144, respectively. High bone density and vertebral cortical bone defect are

  18. Postpartum Depressive Symptoms: Gestational Weight Gain as a Risk Factor for Adolescents Who Are Overweight or Obese.

    PubMed

    Cunningham, Shayna D; Mokshagundam, Shilpa; Chai, Hannah; Lewis, Jessica B; Levine, Jessica; Tobin, Jonathan N; Ickovics, Jeannette R

    2018-03-01

    Obesity is a risk factor for adverse physical health outcomes during pregnancy. Much less is known about the association between obesity and maternal mental health. Evidence suggests that prenatal depression is associated with excessive weight gain during pregnancy and that this relationship may vary according to pregravid body mass index (BMI). Young women may be particularly vulnerable to postpartum depression. The objective of this study is to examine the association between prepregnancy BMI, gestational weight gain, and postpartum depressive symptoms among adolescents. Participants were 505 pregnant adolescents aged 14 to 21 years followed during pregnancy and 6 months postpartum. Data were collected via interviews and medical record abstraction. Multilevel linear mixed models were used to test the association between excessive gestational weight gain as defined by National Academy of Medicine Guidelines and postpartum depressive symptoms measured via the validated Center for Epidemiologic Studies Depression (CES-D) scale. Analyses controlled for sociodemographic factors (maternal age, race, ethnicity, relationship status), health behaviors (nutrition, physical activity), prenatal depressive symptoms, and postpartum weight retention. Prepregnancy BMI was classified as follows: 11% underweight, 53% healthy weight, 19% overweight, and 18% obese. One-half (50%) of participants exceeded recommended guidelines for gestational weight gain. Adolescents with excessive gestational weight gain who entered pregnancy overweight or obese had significantly higher postpartum depressive symptoms (β, 2.41; SE, 1.06 vs β, 2.58; SE, 1.08, respectively; both P < .05) compared with those with healthy prepregnancy BMI and appropriate gestational weight gain. Adolescents who gained gestational weight within clinically recommended guidelines were not at risk for increased depressive symptoms. Adolescents who enter pregnancy overweight or obese and experience excessive weight gain

  19. Comparison of lossless compression techniques for prepress color images

    NASA Astrophysics Data System (ADS)

    Van Assche, Steven; Denecker, Koen N.; Philips, Wilfried R.; Lemahieu, Ignace L.

    1998-12-01

    In the pre-press industry color images have both a high spatial and a high color resolution. Such images require a considerable amount of storage space and impose long transmission times. Data compression is desired to reduce these storage and transmission problems. Because of the high quality requirements in the pre-press industry only lossless compression is acceptable. Most existing lossless compression schemes operate on gray-scale images. In this case the color components of color images must be compressed independently. However, higher compression ratios can be achieved by exploiting inter-color redundancies. In this paper we present a comparison of three state-of-the-art lossless compression techniques which exploit such color redundancies: IEP (Inter- color Error Prediction) and a KLT-based technique, which are both linear color decorrelation techniques, and Interframe CALIC, which uses a non-linear approach to color decorrelation. It is shown that these techniques are able to exploit color redundancies and that color decorrelation can be done effectively and efficiently. The linear color decorrelators provide a considerable coding gain (about 2 bpp) on some typical prepress images. The non-linear interframe CALIC predictor does not yield better results, but the full interframe CALIC technique does.
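
    The KLT-style inter-color decorrelation can be sketched in a few lines: estimate the 3x3 covariance of the RGB components, rotate the pixels into the decorrelated basis, and hand each channel to a lossless coder. In the sketch below, zlib merely stands in for a real lossless image coder such as CALIC, and the rounding of the transformed channels is only for the size comparison; a genuinely lossless pipeline would use a reversible integer approximation of the transform.

      # KLT (PCA) color decorrelation before per-channel lossless coding.
      import zlib
      import numpy as np

      def klt_decorrelate(img):
          """img: HxWx3 uint8 -> (transformed channels, channel mean, KLT basis)."""
          X = img.reshape(-1, 3).astype(float)
          mean = X.mean(axis=0)
          cov = np.cov((X - mean).T)
          _, vecs = np.linalg.eigh(cov)            # columns = KLT basis vectors
          Y = (X - mean) @ vecs
          return Y.reshape(img.shape), mean, vecs

      rng = np.random.default_rng(5)
      base = rng.integers(0, 256, (64, 64, 1))
      img = np.clip(np.repeat(base, 3, axis=2)
                    + rng.integers(-8, 8, (64, 64, 3)), 0, 255).astype(np.uint8)

      Y, mean, vecs = klt_decorrelate(img)
      raw = sum(len(zlib.compress(np.ascontiguousarray(img[..., c]).tobytes()))
                for c in range(3))
      klt = sum(len(zlib.compress(np.round(Y[..., c]).astype(np.int16).tobytes()))
                for c in range(3))
      print("zlib bytes, independent RGB channels:", raw, " after KLT:", klt)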

  20. Bioavailable Insulin-Like Growth Factor-I Inversely Related to Weight Gain in Postmenopausal Women regardless of Exogenous Estrogen

    PubMed Central

    Jung, Su Yon; Hursting, Stephen D.; Guindani, Michele; Vitolins, Mara Z.; Paskett, Electra; Chang, Shine

    2014-01-01

    Background Weight gain, insulin-like growth factor-I (IGF-I) levels, and excess exogenous steroid hormone use are putative cancer risk factors, yet their interconnected pathways have not been fully characterized. This cross-sectional study investigated the relationship between plasma IGF-I levels and weight gain according to body mass index (BMI), leptin levels, and exogenous estrogen use among postmenopausal women. Methods This study included 794 postmenopausal women who enrolled in an ancillary study of the Women's Health Initiative Observational Study between February 1995 and July 1998. The relationship between IGF-I levels and weight gain was analyzed using ordinal logistic regression. We used the molar ratio of IGF-I to IGF binding protein-3 (IGF-I/IGFBP-3) or circulating IGF-I levels adjusted for IGFBP-3 as a proxy for bioavailable IGF-I. The plasma concentrations were expressed as quartiles. Results Among the obese group, women in the third quartile (Q3) of IGF-I and the highest quartile of IGF-I/IGFBP-3 were less likely to gain weight (>3% from baseline) than were women in the first quartiles (Q1). Among the normal weight group, women in Q2 and Q3 of IGF-I/IGFBP-3 were 70% less likely than those in Q1 to gain weight. Among current estrogen users, Q3 of IGF-I/IGFBP-3 had 0.5 times the odds of gaining weight compared with Q1. Conclusions Bioavailable IGF-I levels were inversely related to weight gain overall. Impact Although weight gain was not consistent with increases in IGF-I levels among postmenopausal women in this report, avoidance of weight gain as a strategy to reduce cancer risk may be recommended. PMID:24363252

  1. Wearable EEG via lossless compression.

    PubMed

    Dufort, Guillermo; Favaro, Federico; Lecumberry, Federico; Martin, Alvaro; Oliver, Juan P; Oreggioni, Julian; Ramirez, Ignacio; Seroussi, Gadiel; Steinfeld, Leonardo

    2016-08-01

    This work presents a wearable multi-channel EEG recording system featuring a lossless compression algorithm. The algorithm, based on an algorithm previously reported by the authors, exploits the temporal correlation between samples at different sampling times and the spatial correlation between different electrodes across the scalp. The low-power platform is able to compress, by a factor between 2.3 and 3.6, up to 300 sps from 64 channels with a power consumption of 176 μW/ch. The performance of the algorithm compares favorably with the best compression rates reported to date in the literature.
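
    A minimal illustration of exploiting both correlations follows: each sample is predicted from the previous sample on the same channel (temporal) and from a reference channel (spatial), and the residuals are then entropy-coded. In the sketch, zlib stands in for the paper's low-power coder, the synthetic EEG is a placeholder, and the whole transform is exactly invertible.

      # Lossless predictive coding sketch: temporal differences per channel,
      # then spatial differences against channel 0, then entropy coding.
      import zlib
      import numpy as np

      def residuals(eeg):
          """eeg: (channels, samples) int16; channel 0 is the spatial reference."""
          res = np.empty_like(eeg)
          res[:, 0] = eeg[:, 0]
          res[:, 1:] = eeg[:, 1:] - eeg[:, :-1]     # temporal prediction
          res[1:, 1:] -= res[0:1, 1:]               # spatial prediction vs channel 0
          return res                                # exactly invertible

      rng = np.random.default_rng(6)
      common = np.cumsum(rng.integers(-3, 4, 3000))                 # shared slow activity
      eeg = np.stack([common + np.cumsum(rng.integers(-2, 3, 3000))
                      for _ in range(8)]).astype(np.int16)

      raw = len(zlib.compress(eeg.tobytes()))
      pred = len(zlib.compress(residuals(eeg).tobytes()))
      print(f"compression factor, raw: {eeg.nbytes / raw:.2f}  "
            f"with prediction: {eeg.nbytes / pred:.2f}")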

  2. Pressure prediction model for compression garment design.

    PubMed

    Leung, W Y; Yuen, D W; Ng, Sun Pui; Shi, S Q

    2010-01-01

    Based on the application of Laplace's law to compression garments, an equation for predicting garment pressure, incorporating the body circumference, the cross-sectional area of the fabric, the applied strain (as a function of the reduction factor), and its corresponding Young's modulus, is developed. Design procedures are presented to predict garment pressure using the aforementioned parameters for clinical applications. Compression garments have been widely used in treating burn scars. Fabricating a compression garment with a required pressure is important in the healing process. A systematic and scientific design method can enable the occupational therapist and the compression garment manufacturer to custom-make a compression garment with a specific pressure. The objectives of this study are 1) to develop a pressure prediction model incorporating different design factors to estimate the pressure exerted by compression garments before fabrication; and 2) to propose design procedures for clinical applications. Three kinds of fabrics cut at different bias angles were tested under uniaxial tension, as were samples made in a double-layered structure. Sets of nonlinear force-extension data were obtained for calculating the predicted pressure. Using the value at a 0° bias angle as reference, the Young's modulus can vary by as much as 29% for fabric type P11117, 43% for fabric type PN2170, and even 360% for fabric type AP85120 at a reduction factor of 20%. When comparing the predicted pressure calculated from the single-layered and double-layered fabrics, the double-layered construction provides a larger range of target pressure at a particular strain. The anisotropic and nonlinear behaviors of the fabrics have thus been determined. Compression garments can be methodically designed with the proposed analytical pressure prediction model.
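
    The abstract does not give the prediction equation explicitly, but its Laplace's-law core can be sketched as below. The tension and strain expressions (tension from Young's modulus, strain, and fabric cross-section; strain from the reduction factor) are standard textbook assumptions rather than the paper's exact model, and the numerical inputs are illustrative.

      # Laplace's-law garment pressure sketch: on a roughly cylindrical limb,
      #   P = T / r,  with r = C / (2*pi)
      # where T is the fabric tension per unit garment width.  Here T is
      # approximated as E * strain * A / w (E: Young's modulus, A: fabric
      # cross-sectional area, w: garment width), and the strain is derived
      # from the reduction factor.  All numbers below are illustrative.
      import math

      def garment_pressure(circumference_m, youngs_modulus_pa, area_m2,
                           width_m, reduction_factor):
          strain = reduction_factor / (1.0 - reduction_factor)   # assumed relation
          tension_per_len = youngs_modulus_pa * strain * area_m2 / width_m   # N/m
          radius = circumference_m / (2.0 * math.pi)
          return tension_per_len / radius                        # pressure in Pa

      # Example: 25 cm limb circumference, 20% reduction factor
      p = garment_pressure(0.25, 8.0e5, 2.5e-5, 0.05, 0.20)
      print(f"predicted pressure: {p:.0f} Pa ({p / 133.3:.1f} mmHg)")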

  3. The Significance of Education for Mortality Compression in the United States*

    PubMed Central

    Brown, Dustin C.; Hayward, Mark D.; Montez, Jennifer Karas; Hummer, Robert A.; Chiu, Chi-Tsun; Hidajat, Mira M.

    2012-01-01

    Recent studies of old-age mortality trends assess whether longevity improvements over time are linked to increasing compression of mortality at advanced ages. The historical backdrop of these studies is the long-term improvements in a population's socioeconomic resources that fueled longevity gains. We extend this line of inquiry by examining whether socioeconomic differences in longevity within a population are accompanied by old-age mortality compression. Specifically, we document educational differences in longevity and mortality compression for older men and women in the United States. Drawing on the fundamental cause of disease framework, we hypothesize that both longevity and compression increase with higher levels of education and that women with the highest levels of education will exhibit the greatest degree of longevity and compression. Results based on the Health and Retirement Study and the National Health Interview Survey Linked Mortality File confirm a strong educational gradient in both longevity and mortality compression. We also find that mortality is more compressed within educational groups among women than men. The results suggest that educational attainment in the United States maximizes life chances by delaying the biological aging process. PMID:22556045

  4. Dynamic compression of rabbit adipose-derived stem cells transfected with insulin-like growth factor 1 in chitosan/gelatin scaffolds induces chondrogenesis and matrix biosynthesis.

    PubMed

    Li, Jianjun; Zhao, Qun; Wang, Enbo; Zhang, Chuanhui; Wang, Guangbin; Yuan, Quan

    2012-05-01

    Articular cartilage is routinely subjected to mechanical forces and growth factors. Adipose-derived stem cells (ASCs) are multipotent adult stem cells capable of chondrogenesis. In the present study, we investigated the comparative and interactive effects of dynamic compression and insulin-like growth factor-I (IGF-I) on the chondrogenesis of rabbit ASCs in chitosan/gelatin scaffolds. Rabbit ASCs with or without a plasmid overexpressing human IGF-1 were cultured in chitosan/gelatin scaffolds for 2 days and then subjected to cyclic compression at 5% strain and 1 Hz for 4 h per day for seven consecutive days. Dynamic compression induced chondrogenesis of rabbit ASCs by activating calcium signaling pathways and up-regulating the expression of Sox-9. Dynamic compression plus IGF-1 overexpression up-regulated the expression of chondrocyte-specific extracellular matrix genes, including type II collagen, Sox-9, and aggrecan, with no effect on type X collagen expression. Furthermore, dynamic compression and IGF-1 expression promoted cellular proliferation and the deposition of proteoglycan and collagen. The intracellular calcium ion concentration and peak currents of Ca(2+) ion channels were consistent with those of chondrocytes. The tissue-engineered cartilage from this process had excellent mechanical properties. When applied together, the effects achieved by the two stimuli (dynamic compression and IGF-1) were greater than those achieved by either stimulus alone. Our results suggest that dynamic compression combined with IGF-1 overexpression might benefit articular cartilage tissue engineering in cartilage regeneration. Copyright © 2011 Wiley Periodicals, Inc.

  5. Clinical risk factors for weight gain during psychopharmacologic treatment of depression: results from 2 large German observational studies.

    PubMed

    Kloiber, Stefan; Domschke, Katharina; Ising, Marcus; Arolt, Volker; Baune, Bernhard T; Holsboer, Florian; Lucae, Susanne

    2015-06-01

    Weight gain during psychopharmacologic treatment has considerable impact on the clinical management of depression, treatment continuation, and risk for metabolic disorders. As no robust clinical risk factors have been identified so far, the aim of our analyses was to determine clinical risk factors associated with short-term weight development in 2 large observational psychopharmacologic treatment studies of major depression. Clinical variables at baseline (age, gender, depression psychopathology, anthropometry, disease history, and disease entity) were analyzed for association with percent change in body mass index (BMI; normal range, 18.5 to 25 kg/m(2)) during 5 weeks of naturalistic psychopharmacologic treatment in patients experiencing a depressive episode, either as a single depressive episode or in the course of recurrent unipolar depression or bipolar disorder according to DSM-IV criteria. 703 patients participated in the Munich Antidepressant Response Signature (MARS) project, an ongoing study since 2002, and 214 patients participated in a study conducted at the University of Muenster from 2004 to 2006 in Germany. Lower BMI, weight-increasing side effects of medication, severity of depression, and psychotic symptoms were identified as clinical risk factors associated with elevated weight gain during the initial 5-week treatment phase in both studies. Based on these results, a composite risk score for weight gain consisting of BMI ≤ 25 kg/m(2), Hamilton Depression Rating Scale (17-item) score > 20, presence of psychotic symptoms, and administration of psychopharmacologic medication with potential weight-gaining side effects was highly discriminative for mean weight gain (F(4,909) = 26.77, P = 5.14E-21) during short-term psychopharmacologic treatment. On the basis of our results, depressed patients with low to normal BMI, severe depression, or psychotic symptoms should be considered at higher risk for weight gain during acute antidepressant treatment. We introduce
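
    The four criteria of the composite score are stated in the abstract; the sketch below simply encodes them, assuming one point per criterion since the record does not give the weighting.

```python
# Hypothetical encoding of the composite weight-gain risk score described
# above; equal weights (one point per criterion) are assumed.
from dataclasses import dataclass

@dataclass
class Patient:
    bmi: float                       # body mass index, kg/m^2
    hamd17: int                      # 17-item Hamilton Depression Rating Scale
    psychotic_symptoms: bool
    weight_gaining_medication: bool

def weight_gain_risk_score(p: Patient) -> int:
    """Count how many of the four baseline risk criteria are met (0-4)."""
    return sum([
        p.bmi <= 25.0,                    # low-to-normal body mass index
        p.hamd17 > 20,                    # severe depressive symptoms
        p.psychotic_symptoms,             # psychotic features present
        p.weight_gaining_medication,      # drug with weight-gain liability
    ])

print(weight_gain_risk_score(Patient(bmi=22.4, hamd17=24,
                                     psychotic_symptoms=False,
                                     weight_gaining_medication=True)))  # -> 3
```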

  6. Compression of Probabilistic XML Documents

    NASA Astrophysics Data System (ADS)

    Veldman, Irma; de Keijzer, Ander; van Keulen, Maurice

    Database techniques to store, query and manipulate data that contains uncertainty receive increasing research interest. Such uncertain database management systems (UDBMSs) can be classified according to their underlying data model: relational, XML, or RDF. We focus on uncertain XML DBMSs, with the Probabilistic XML model (PXML) of [10,9] as a representative example. The size of a PXML document is obviously a factor in performance. There are PXML-specific techniques to reduce the size, such as a push-down mechanism that produces equivalent but more compact PXML documents. It can only be applied, however, where possibilities are dependent. For normal XML documents there also exist several techniques for compressing a document. Since Probabilistic XML is (a special form of) normal XML, it might benefit from these methods even more. In this paper, we show that existing compression mechanisms can be combined with PXML-specific compression techniques. We also show that the best compression rates are obtained by combining a PXML-specific technique with a rather simple generic DAG-compression technique.

  7. Neurofilaments Function as Shock Absorbers: Compression Response Arising from Disordered Proteins.

    PubMed

    Kornreich, Micha; Malka-Gibor, Eti; Zuker, Ben; Laser-Azogui, Adi; Beck, Roy

    2016-09-30

    What can cells gain by using disordered, rather than folded, proteins in the architecture of their skeleton? Disordered proteins take multiple coexisting conformations, and often contain segments which act as random-walk-shaped polymers. Using x-ray scattering we measure the compression response of disordered protein hydrogels, which are the main stress-responsive component of neuron cells. We find that at high compression their mechanics are dominated by gaslike steric and ionic repulsions. At low compression, specific attractive interactions dominate. This is demonstrated by the considerable hydrogel expansion induced by the truncation of critical short protein segments. Accordingly, the floppy disordered proteins form a weakly cross-bridged hydrogel, and act as shock absorbers that sustain large deformations without failure.

  8. Neurofilaments Function as Shock Absorbers: Compression Response Arising from Disordered Proteins

    NASA Astrophysics Data System (ADS)

    Kornreich, Micha; Malka-Gibor, Eti; Zuker, Ben; Laser-Azogui, Adi; Beck, Roy

    2016-09-01

    What can cells gain by using disordered, rather than folded, proteins in the architecture of their skeleton? Disordered proteins take multiple coexisting conformations, and often contain segments which act as random-walk-shaped polymers. Using x-ray scattering we measure the compression response of disordered protein hydrogels, which are the main stress-responsive component of neuron cells. We find that at high compression their mechanics are dominated by gaslike steric and ionic repulsions. At low compression, specific attractive interactions dominate. This is demonstrated by the considerable hydrogel expansion induced by the truncation of critical short protein segments. Accordingly, the floppy disordered proteins form a weakly cross-bridged hydrogel, and act as shock absorbers that sustain large deformations without failure.

  9. Experimental investigation of the mass flow gain factor in a draft tube with cavitation vortex rope

    NASA Astrophysics Data System (ADS)

    Landry, C.; Favrel, A.; Müller, A.; Yamamoto, K.; Alligné, S.; Avellan, F.

    2017-04-01

    At off-design operating conditions, cavitating flow is often observed in hydraulic machines. The presence of a cavitation vortex rope may induce draft tube surge and electrical power swings at part load and full load operation. The stability analysis of these operating conditions requires a numerical pipe model taking into account the complexity of the two-phase flow. Among the hydroacoustic parameters describing the cavitating draft tube flow in the numerical model, the mass flow gain factor, representing the mass excitation source expressed as the rate of change of the cavitation volume as a function of the discharge, remains difficult to model. This paper presents a quasi-static method to estimate the mass flow gain factor in the draft tube for a given cavitation vortex rope volume in the case of a reduced-scale physical model of a ν = 0.27 Francis turbine. The methodology is based on an experimental identification of the natural frequency of the test rig hydraulic system for different Thoma numbers. With the identification of the natural frequency, it is possible to model the wave speed, the cavitation compliance, and the volume of the cavitation vortex rope. By applying this new methodology for different discharge values, it becomes possible to identify the mass flow gain factor and improve the accuracy of the system stability analysis.
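
    For reference, the two hydroacoustic source terms named above are commonly written as partial derivatives of the cavitation volume; the notation below is assumed here (it is not quoted from the paper) and sign conventions vary across the literature.

```latex
% Common 1-D hydroacoustic definitions (notation assumed; sign conventions
% vary). V_c is the cavitation volume, Q the discharge and h the net head
% at the draft tube section.
\[
  \chi = \frac{\partial V_c}{\partial Q} \quad \text{(mass flow gain factor)},
  \qquad
  C_c = -\,\frac{\partial V_c}{\partial h} \quad \text{(cavitation compliance)}.
\]
```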

  10. Alternative Fuels Data Center: Little Rock Gains Momentum with Natural Gas

    Science.gov Websites

    Little Rock, Arkansas, is gaining momentum with transit buses that run on compressed natural gas. For more information about this project, contact Arkansas Clean Cities.

  11. SEMG signal compression based on two-dimensional techniques.

    PubMed

    de Melo, Wheidima Carneiro; de Lima Filho, Eddie Batista; da Silva Júnior, Waldir Sabino

    2016-04-18

    Recently, two-dimensional techniques have been successfully employed for compressing surface electromyographic (SEMG) records as images, through the use of image and video encoders. Such schemes usually provide specific compressors, which are tuned for SEMG data, or employ preprocessing techniques before the two-dimensional encoding procedure, in order to provide a suitable data organization whose correlations can be better exploited by off-the-shelf encoders. Besides preprocessing input matrices, one may also depart from those approaches and employ an adaptive framework, which is able to directly tackle SEMG signals reassembled as images. This paper proposes a new two-dimensional approach for SEMG signal compression, which is based on a recurrent pattern matching algorithm called the multidimensional multiscale parser (MMP). The mentioned encoder was modified in order to efficiently work with SEMG signals and exploit their inherent redundancies. Moreover, a new preprocessing technique, named segmentation by similarity (SbS), which has the potential to enhance the exploitation of intra- and intersegment correlations, is introduced; the percentage difference sorting (PDS) algorithm is employed with different image compressors; and results with the high efficiency video coding (HEVC), H.264/AVC, and JPEG2000 encoders are presented. Experiments were carried out with real isometric and dynamic records acquired in the laboratory. Dynamic signals compressed with H.264/AVC and HEVC, when combined with preprocessing techniques, resulted in good percent root-mean-square difference (PRD) and compression factor figures, for low and high compression factors, respectively. Besides, regarding isometric signals, the modified two-dimensional MMP algorithm outperformed state-of-the-art schemes for low compression factors, the combination of SbS and HEVC proved to be competitive for high compression factors, and JPEG2000, combined with PDS, provided good performance
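
    The figures of merit mentioned above, percent root-mean-square difference (PRD) and compression factor (CF), have standard textbook definitions; the helper below shows those definitions on synthetic data and is not code from the cited paper.

```python
# Standard distortion and size metrics used to report SEMG compression results
# (textbook definitions, not code from the cited paper).
import numpy as np

def prd_percent(original: np.ndarray, reconstructed: np.ndarray) -> float:
    """Percent root-mean-square difference between original and decoded signal."""
    return 100.0 * np.sqrt(np.sum((original - reconstructed) ** 2)
                           / np.sum(original ** 2))

def compression_factor(original_bits: int, compressed_bits: int) -> float:
    """Ratio of uncompressed to compressed size (higher is better)."""
    return original_bits / compressed_bits

x = np.sin(np.linspace(0, 20 * np.pi, 4000))
x_hat = x + np.random.default_rng(1).normal(0, 0.01, x.size)
print(f"PRD = {prd_percent(x, x_hat):.2f} %")
print(f"CF  = {compression_factor(4000 * 16, 4000 * 2):.1f}")
```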

  12. Two-stage optical parametric chirped-pulse amplifier using sub-nanosecond pump pulse generated by stimulated Brillouin scattering compression

    NASA Astrophysics Data System (ADS)

    Ogino, Jumpei; Miyamoto, Sho; Matsuyama, Takahiro; Sueda, Keiichi; Yoshida, Hidetsugu; Tsubakimoto, Koji; Miyanaga, Noriaki

    2014-12-01

    We demonstrate optical parametric chirped-pulse amplification (OPCPA) based on two-beam pumping, using sub-nanosecond pulses generated by stimulated Brillouin scattering compression. Seed pulse energy, duration, and center wavelength were 5 nJ, 220 ps, and ~1065 nm, respectively. The 532 nm pulse from a Q-switched Nd:YAG laser was compressed to ~400 ps in heavy fluorocarbon FC-40 liquid. Stacking of two time-delayed pump pulses reduced the amplifier gain fluctuation. Using a walk-off-compensated two-stage OPCPA at a pump energy of 34 mJ, a total gain of 1.6 × 10^5 was obtained, yielding an output energy of 0.8 mJ. The amplified chirped pulse was compressed to 97 fs.

  13. Cost-Effectiveness Analysis of Percutaneous Vertebroplasty for Osteoporotic Compression Fractures.

    PubMed

    Takura, Tomoyuki; Yoshimatsu, Misako; Sugimori, Hiroki; Takizawa, Kenji; Furumatsu, Yoshiyuki; Ikeda, Hirotaka; Kato, Hiroshi; Ogawa, Yukihisa; Hamaguchi, Shingo; Fujikawa, Atsuko; Satoh, Toshihiko; Nakajima, Yasuo

    2017-04-01

    Single-center, single-arm, prospective time-series study. To assess the cost-effectiveness and improvement in quality of life (QOL) of percutaneous vertebroplasty (PVP). PVP is known to relieve back pain and increase QOL for osteoporotic compression fractures. However, the economic value of PVP has never been evaluated in Japan, where a universal health care system is in place. We prospectively followed up 163 patients with acute vertebral osteoporotic compression fractures, 44 males aged 76.4±6.0 years and 119 females aged 76.8±7.1 years, who underwent PVP. To measure health-related QOL and pain during 52 weeks of observation, we used the European Quality of Life-5 Dimensions (EQ-5D), the Roland-Morris Disability Questionnaire (RMD), the 8-item Short-Form health survey (SF-8), and a visual analogue scale (VAS). Quality-adjusted life years (QALY) were calculated from the change in health utility on the EQ-5D. The direct medical cost was calculated from the hospital accounting system and the Japanese health insurance system. Cost-effectiveness was analyzed using the incremental cost-effectiveness ratio (ICER): Δ medical cost/Δ QALY. After PVP, improvements in EQ-5D, RMD, SF-8, and VAS scores were observed. The QALY gain over 52 weeks was 0.162. The estimated lifetime gain in QALY reached 1.421. The direct medical cost of PVP was ¥286,740 (about 3061 US dollars). Cost-effectiveness analysis using the ICER showed that the lifetime medical cost for a gain of 1 QALY was ¥201,748 (about 2154 US dollars). Correlations between changes in EQ-5D scores and the other parameters (RMD, SF-8, and VAS) were observed during most of the study period, which might support the reliability and applicability of the EQ-5D for measuring health utilities for osteoporotic compression fractures in Japan. PVP may improve QOL, ameliorate pain for acute osteoporotic compression fractures, and be cost-effective in Japan.
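
    The ICER quoted above is simply the incremental cost divided by the incremental QALY gain; the two-line check below uses the figures from the abstract, and the small difference from the reported ¥201,748 presumably reflects rounding or discounting in the full analysis.

```python
# Back-of-the-envelope ICER check using the figures quoted in the abstract.
direct_cost_jpy = 286_740        # direct medical cost of PVP
lifetime_qaly_gain = 1.421       # estimated lifetime QALY gain

icer = direct_cost_jpy / lifetime_qaly_gain
print(f"ICER ~ {icer:,.0f} JPY per QALY gained")   # ~201,787 JPY/QALY
```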

  14. Induction of a shorter compression phase is correlated with a deeper chest compression during metronome-guided cardiopulmonary resuscitation: a manikin study.

    PubMed

    Chung, Tae Nyoung; Bae, Jinkun; Kim, Eui Chung; Cho, Yun Kyung; You, Je Sung; Choi, Sung Wook; Kim, Ok Jun

    2013-07-01

    Recent studies have shown that there may be an interaction between duty cycle and other factors related to the quality of chest compression. Duty cycle is the fraction of each compression cycle occupied by the compression (downstroke) phase. We aimed to investigate the effect of a shorter compression phase on average chest compression depth during metronome-guided cardiopulmonary resuscitation. Senior medical students performed 12 sets of chest compressions following guiding sounds, with three down-stroke patterns (normal, fast, and very fast) and four rates (80, 100, 120, and 140 compressions/min) in random sequence. Repeated-measures analysis of variance was used to compare the average chest compression depth and duty cycle among the trials. The average chest compression depth increased and the duty cycle decreased in a linear fashion as the down-stroke pattern shifted from normal to very fast (p<0.001 for both). A linear increase of average chest compression depth with increasing compression rate was observed only with the normal down-stroke pattern (p=0.004). Induction of a shorter compression phase is correlated with a deeper chest compression during metronome-guided cardiopulmonary resuscitation.

  15. Effect of compressibility on the hypervelocity penetration

    NASA Astrophysics Data System (ADS)

    Song, W. J.; Chen, X. W.; Chen, P.

    2018-02-01

    We further consider the effect of rod strength by employing the compressible penetration model to study the effect of compressibility on hypervelocity penetration. Meanwhile, we define different instances of penetration efficiency in various modified models and compare these penetration efficiencies to identify the effects of different factors in the compressible model. To systematically discuss the effect of compressibility in different metallic rod-target combinations, we consider three cases: a more compressible rod penetrating a less compressible target, a rod penetrating a target of similar compressibility, and a less compressible rod penetrating a more compressible target. The effects of volumetric strain, internal energy, and strength on the penetration efficiency are analyzed simultaneously. The results indicate that the compressibility of the rod and target increases the pressure at the rod/target interface. The more compressible rod/target has larger volumetric strain and higher internal energy. Both the larger volumetric strain and higher strength enhance the penetration or anti-penetration ability. On the other hand, the higher internal energy weakens the penetration or anti-penetration ability. The two trends conflict, but the volumetric strain dominates the variation of the penetration efficiency, which would not approach the hydrodynamic limit if the rod and target are not of similar compressibility. However, if the compressibility of the rod and target is similar, it has little effect on the penetration efficiency.

  16. Improved integral images compression based on multi-view extraction

    NASA Astrophysics Data System (ADS)

    Dricot, Antoine; Jung, Joel; Cagnazzo, Marco; Pesquet, Béatrice; Dufaux, Frédéric

    2016-09-01

    Integral imaging is a technology based on plenoptic photography that captures and samples the light field of a scene through a micro-lens array. It provides views of the scene from several angles and is therefore foreseen as a key technology for future immersive video applications. However, integral images have a very high resolution and a structure based on micro-images that is challenging to encode. A compression scheme for integral images based on view extraction has previously been proposed, with average BD-rate gains of 15.7% (up to 31.3%) reported over HEVC when using a single extracted view. As the efficiency of the scheme depends on a trade-off between the bitrate required to encode the view and the quality of the image reconstructed from the view, it is proposed to increase the number of extracted views. Several configurations are tested with different positions and numbers of extracted views. Compression efficiency is increased, with average BD-rate gains of 22.2% (up to 31.1%) reported over the HEVC anchor, with a realistic runtime increase.

  17. Optical properties of highly compressed polystyrene: An ab initio study

    NASA Astrophysics Data System (ADS)

    Hu, S. X.; Collins, L. A.; Colgan, J. P.; Goncharov, V. N.; Kilcrease, D. P.

    2017-10-01

    Using all-electron density functional theory, we have performed an ab initio study of the x-ray absorption spectra of highly compressed polystyrene (CH). We found that the K-edge shifts in strongly coupled, degenerate polystyrene cannot be explained by existing continuum-lowering models adopted in traditional plasma physics. To gain insights into the K-edge shift in warm, dense CH, we have developed a model designated "single mixture in a box" (SMIAB), which incorporates both the lowering of the continuum and the rising of the Fermi surface resulting from high compression. This simple SMIAB model correctly predicts the K-edge shift of carbon in highly compressed CH, in good agreement with results from quantum molecular dynamics (QMD) calculations. Traditional opacity models failed to give the proper K-edge shifts as the CH density increased. Based on QMD calculations, we have established a first-principles opacity table (FPOT) for CH in a wide range of densities and temperatures [ρ = 0.1–100 g/cm³ and T = 2000–1,000,000 K]. The FPOT gives much higher Rosseland mean opacity compared to the cold-opacity-patched astrophysics opacity table for warm, dense CH and compares favorably to the newly improved Los Alamos atomic model for moderately compressed CH (ρ_CH ≤ 10 g/cm³), but remains a factor of 2 to 3 higher at extremely high densities (ρ_CH ≥ 50 g/cm³). We anticipate that the established FPOT of CH will find important applications in the reliable design of high-energy-density experiments. Moreover, the understanding of K-edge shifting revealed in this study could provide guidance for improving the traditional opacity models to properly handle the strongly coupled and degenerate conditions.

  18. Optical properties of highly compressed polystyrene: An ab initio study

    DOE PAGES

    Hu, S. X.; Collins, L. A.; Colgan, J. P.; ...

    2017-10-16

    Using all-electron density functional theory, we have performed an ab initio study on x-ray absorption spectra of highly compressed polystyrene (CH). Here, we found that the K-edge shifts in strongly coupled, degenerate polystyrene cannot be explained by existing continuum-lowering models adopted in traditional plasma physics. To gain insights into the K-edge shift in warm, dense CH, we have developed a model designated as “single-mixture-in-a-box” (SMIAB), which incorporates both the lowering of the continuum and the rising of the Fermi surface resulting from high compression. This simple SMIAB model correctly predicts the K-edge shift of carbon in highly compressed CH in good agreement with results from quantum-molecular-dynamics (QMD) calculations. Traditional opacity models failed to give the proper K-edge shifts as the CH density increased. Based on QMD calculations, we have established a first-principles opacity table (FPOT) for CH in a wide range of densities and temperatures [ρ = 0.1 to 100 g/cm³ and T = 2000 to 1,000,000 K]. The FPOT gives much higher Rosseland mean opacity compared to the cold-opacity-patched astrophysics opacity table for warm, dense CH and favorably compares to the newly improved Los Alamos ATOMIC model for moderately compressed CH (ρ_CH ≤ 10 g/cm³) but remains a factor of 2 to 3 higher at extremely high densities (ρ_CH ≥ 50 g/cm³). We anticipate the established FPOT of CH will find important applications to reliable designs of high-energy-density experiments. Moreover, the understanding of K-edge shifting revealed in this study could provide guides for improving the traditional opacity models to properly handle the strongly coupled and degenerate conditions.

  19. Optical properties of highly compressed polystyrene: An ab initio study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, S. X.; Collins, L. A.; Colgan, J. P.

    Using all-electron density functional theory, we have performed an ab initio study on x-ray absorption spectra of highly compressed polystyrene (CH). Here, we found that the K-edge shifts in strongly coupled, degenerate polystyrene cannot be explained by existing continuum-lowering models adopted in traditional plasma physics. To gain insights into the K-edge shift in warm, dense CH, we have developed a model designated as “single-mixture-in-a-box” (SMIAB), which incorporates both the lowering of the continuum and the rising of the Fermi surface resulting from high compression. This simple SMIAB model correctly predicts the K-edge shift of carbon in highly compressed CH in good agreement with results from quantum-molecular-dynamics (QMD) calculations. Traditional opacity models failed to give the proper K-edge shifts as the CH density increased. Based on QMD calculations, we have established a first-principles opacity table (FPOT) for CH in a wide range of densities and temperatures [ρ = 0.1 to 100 g/cm³ and T = 2000 to 1,000,000 K]. The FPOT gives much higher Rosseland mean opacity compared to the cold-opacity-patched astrophysics opacity table for warm, dense CH and favorably compares to the newly improved Los Alamos ATOMIC model for moderately compressed CH (ρ_CH ≤ 10 g/cm³) but remains a factor of 2 to 3 higher at extremely high densities (ρ_CH ≥ 50 g/cm³). We anticipate the established FPOT of CH will find important applications to reliable designs of high-energy-density experiments. Moreover, the understanding of K-edge shifting revealed in this study could provide guides for improving the traditional opacity models to properly handle the strongly coupled and degenerate conditions.

  20. Lifestyle Risk Factors for Weight Gain in Children with and without Asthma

    PubMed Central

    Jensen, Megan E.; Gibson, Peter G.; Collins, Clare E.; Hilton, Jodi M.; Wood, Lisa G.

    2017-01-01

    A higher proportion of children with asthma are overweight and obese compared to children without asthma; however, it is unknown whether asthmatic children are at increased risk of weight gain due to modifiable lifestyle factors. Thus, the aim of this cross-sectional study was to compare weight-gain risk factors (sleep, appetite, diet, activity) in an opportunistic sample of children with and without asthma. Non-obese children with (n = 17; age 10.7 (2.4) years) and without asthma (n = 17; age 10.8 (2.3) years), referred for overnight polysomnography, underwent measurement of lung function, plasma appetite hormones, dietary intake and food cravings, activity, and daytime sleepiness. Sleep latency (56.6 (25.5) vs. 40.9 (16.9) min, p = 0.042) and plasma triglycerides (1.0 (0.8, 1.2) vs. 0.7 (0.7, 0.8) mmol/L, p = 0.013) were significantly greater in asthmatic versus non-asthmatic children. No group difference was observed in appetite hormones, dietary intake, or activity levels (p > 0.05). Sleep duration paralleled overall diet quality (r = 0.36, p = 0.04), whilst daytime sleepiness paralleled plasma lipids (r = 0.61, p = 0.001) and sedentary time (r = 0.39, p = 0.02). Disturbances in sleep quality and plasma triglycerides were evident in non-obese asthmatic children referred for polysomnography, versus non-asthmatic children. Observed associations between diet quality, sedentary behavior, and metabolic and sleep-related outcomes warrant further investigation, particularly the long-term health implications. PMID:28245609

  1. Investigations of gain redshift in high peak power Ti:sapphire laser systems

    NASA Astrophysics Data System (ADS)

    Wu, Fenxiang; Yu, Linpeng; Zhang, Zongxin; Li, Wenkai; Yang, Xiaojun; Wu, Yuanfeng; Li, Shuai; Wang, Cheng; Liu, Yanqi; Lu, Xiaoming; Xu, Yi; Leng, Yuxin

    2018-07-01

    Gain redshift in high peak power Ti:sapphire laser systems can result in narrowband spectral output and hence lengthen the compressed pulse duration. In order to realize broadband spectral output in 10 PW-class Ti:sapphire lasers, the influence on gain redshift of spectral pre-shaping, the gain distribution of cascaded amplifiers, and the Extraction During Pumping (EDP) technique has been investigated. The theoretical and experimental results show that the redshift of the output spectrum is sensitive to the spectral pre-shaping and the gain distribution of cascaded amplifiers, while insensitive to whether the pumping scheme uses EDP. Moreover, the output spectrum of our future 10 PW Ti:sapphire laser is theoretically analyzed based on the investigations above, which indicates that a Fourier-transform-limited (FTL) pulse duration of 21 fs can be achieved just by optimizing the spectral pre-shaping and gain distribution in 10 PW-class Ti:sapphire lasers.

  2. Video compression via log polar mapping

    NASA Astrophysics Data System (ADS)

    Weiman, Carl F. R.

    1990-09-01

    A three stage process for compressing real time color imagery by factors in the range of 1600-to-1 is proposed for remote driving. The key is to match the resolution gradient of human vision and preserve only those cues important for driving. Some hardware components have been built and a research prototype is planned. Stage 1 is log polar mapping, which reduces peripheral image sampling resolution to match the peripheral gradient in human visual acuity. This can yield 25-to-1 compression. Stage 2 partitions color and contrast into separate channels. This can yield 8-to-1 compression. Stage 3 is conventional block data compression such as hybrid DCT/DPCM which can yield 8-to-1 compression. The product of all three stages is 1600-to-1 data compression. The compressed signal can be transmitted over FM bands which do not require line-of-sight, greatly increasing the range of operation and reducing the topographic exposure of teleoperated vehicles. Since the compressed channel data contains the essential constituents of human visual perception, imagery reconstructed by inverting each of the three compression stages is perceived as complete, provided the operator's direction of gaze is at the center of the mapping. This can be achieved by eye-tracker feedback which steers the center of log polar mapping in the remote vehicle to match the teleoperator's direction of gaze.
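
    As a rough illustration of stage 1 only (not the authors' hardware mapping, whose ring spacing was tuned to human acuity), the sketch below resamples an image onto a log-polar grid centered at the assumed point of gaze; the grid size here is arbitrary.

```python
# Minimal log-polar resampling sketch; grid size and ring spacing are
# illustrative assumptions, not the parameters of the cited system.
import numpy as np

def log_polar_sample(img, center, n_rings=64, n_wedges=128, r_min=2.0):
    """Sample a grayscale image onto an (n_rings x n_wedges) log-polar grid."""
    h, w = img.shape
    cy, cx = center
    r_max = min(h, w) / 2.0
    # Ring radii grow geometrically, mimicking the fall-off of peripheral acuity.
    radii = r_min * (r_max / r_min) ** (np.arange(n_rings) / (n_rings - 1))
    thetas = np.linspace(0.0, 2.0 * np.pi, n_wedges, endpoint=False)
    rr, tt = np.meshgrid(radii, thetas, indexing="ij")
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, w - 1)
    return img[ys, xs]

img = np.random.default_rng(0).integers(0, 256, (480, 640)).astype(np.uint8)
lp = log_polar_sample(img, center=(240, 320))
print("sample-domain reduction: %.0f-to-1" % (img.size / lp.size))
```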

  3. Coating process optimization through in-line monitoring for coating weight gain using Raman spectroscopy and design of experiments.

    PubMed

    Kim, Byungsuk; Woo, Young-Ah

    2018-05-30

    In this study the authors developed a real-time Process Analytical Technology (PAT) method for a coating process by applying in-line Raman spectroscopy to evaluate coating weight gain, that is, to quantitatively analyze the film coating layer. A wide area illumination (WAI) Raman probe was connected to the pan coater for real-time monitoring of changes in the weight gain of the coating layer. Under the proposed in-line Raman scheme, a non-contact, non-destructive analysis was performed using WAI Raman probes with a spot size of 6 mm. The in-line Raman probe maintained a focal length of 250 mm, and a compressed air line was designed to protect the lens surface from spray droplets. Design of Experiments (DOE) was applied to identify factors affecting the background of the Raman spectra under laser irradiation. The factors selected for the DOE were the strength of the compressed air connected to the probe and the shielding of light by the transparent door connecting the probe to the pan coater. To develop a quantitative model, partial least squares (PLS) models were developed as multivariate calibrations based on the three spectral regions showing specificity for TiO2, used individually or in combination. For the three single peaks (636 cm⁻¹, 512 cm⁻¹, and 398 cm⁻¹), the least squares method (LSM) was applied to develop three univariate quantitative analysis models. The best multivariate quantitative model, built with a single factor, gave the lowest RMSEP values of 0.128, 0.129, and 0.125 for the prediction batches. When LSM was applied to the single peak at 636 cm⁻¹, the univariate quantitative model, with an R² of 0.9863, a slope of 0.5851, and a y-intercept of 0.8066, had the lowest RMSEP values of 0.138, 0.144, and 0.153 for the prediction batches. The in-line Raman spectroscopic method for the analysis of coating weight gain was verified by considering system suitability and parameters such as specificity, range, linearity, accuracy, and precision in accordance with ICH Q2 regarding
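
    The abstract describes a one-factor PLS calibration relating Raman spectra to coating weight gain; the sketch below shows the shape of such a calibration on synthetic spectra using scikit-learn, and is not the authors' model (their calibration used measured TiO2 band intensities).

```python
# One-factor PLS calibration sketch on synthetic "Raman" spectra; the real
# model in the paper was built from measured spectra of the TiO2 coating.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
wavenumbers = np.linspace(350, 700, 300)
weight_gain = rng.uniform(1.0, 4.0, size=40)          # % coating weight gain (y)

def synthetic_spectrum(gain):
    band = np.exp(-0.5 * ((wavenumbers - 636.0) / 8.0) ** 2)   # TiO2-like band
    return gain * band + rng.normal(0, 0.02, wavenumbers.size)

X = np.array([synthetic_spectrum(g) for g in weight_gain])

pls = PLSRegression(n_components=1)          # "a single factor", as above
pls.fit(X[:30], weight_gain[:30])
pred = pls.predict(X[30:]).ravel()
rmsep = np.sqrt(np.mean((pred - weight_gain[30:]) ** 2))
print(f"RMSEP on held-out spectra: {rmsep:.3f} % weight gain")
```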

  4. Flynn Effects on Sub-Factors of Episodic and Semantic Memory: Parallel Gains over Time and the Same Set of Determining Factors

    ERIC Educational Resources Information Center

    Ronnlund, Michael; Nilsson, Lars-Goran.

    2009-01-01

    The study examined the extent to which time-related gains in cognitive performance, so-called Flynn effects, generalize across sub-factors of episodic memory (recall and recognition) and semantic memory (knowledge and fluency). We conducted time-sequential analyses of data drawn from the Betula prospective cohort study, involving four age-matched…

  5. HbA1c and Gestational Weight Gain Are Factors that Influence Neonatal Outcome in Mothers with Gestational Diabetes.

    PubMed

    Barquiel, Beatriz; Herranz, Lucrecia; Hillman, Natalia; Burgos, Ma Ángeles; Grande, Cristina; Tukia, Keleni M; Bartha, José Luis; Pallardo, Luis Felipe

    2016-06-01

    Maternal glucose and weight gain are related to neonatal outcome in women with gestational diabetes mellitus (GDM). The aim of this study was to explore the influence of average third-trimester HbA1c and excess gestational weight gain on GDM neonatal complications. This observational study included 2037 Spanish singleton pregnant women with GDM followed in our Diabetes and Pregnancy Unit. The maternal HbA1c level was measured monthly from GDM diagnosis to delivery. Women were compared by average HbA1c level and weight gain categorized into ≤ or > the current Institute of Medicine (IOM) recommendations for body mass index. The differential effects of these factors on large-for-gestational-age birth weight and a composite of neonatal complications were assessed. Women with an average third-trimester HbA1c ≥5.0% (n = 1319) gave birth to 7.3% versus 3.8% (p = 0.005) of large-for-gestational-age neonates and 22.0% versus 16.0% (p = 0.006) of neonates with complications. Women with excess gestational weight gain (n = 299) delivered 12.5% versus 5.2% (p < 0.001) of large-for-gestational-age neonates and 24.7% versus 19.0% (p = 0.022) of neonates with complications. In an adjusted multiple logistic regression analysis among mothers exposed to the respective risk factors, ∼47% and 52% of large-for-gestational-age neonates and 32% and 37% of neonatal complications were potentially preventable by attaining an average third-trimester HbA1c level <5.0% and optimizing gestational weight gain. Average third-trimester HbA1c level ≥5% and gestational weight gain above the IOM recommendation are relevant risk factors for neonatal complications in mothers with gestational diabetes.

  6. MFCompress: a compression tool for FASTA and multi-FASTA data.

    PubMed

    Pinho, Armando J; Pratas, Diogo

    2014-01-01

    The data deluge phenomenon is becoming a serious problem in most genomic centers. To alleviate it, general purpose tools, such as gzip, are used to compress the data. However, although pervasive and easy to use, these tools fall short when the intention is to reduce the data as much as possible, for example, for medium- and long-term storage. A number of algorithms have been proposed for the compression of genomics data, but unfortunately only a few of them have been made available as usable and reliable compression tools. In this article, we describe one such tool, MFCompress, specially designed for the compression of FASTA and multi-FASTA files. In comparison to gzip and applied to multi-FASTA files, MFCompress can provide additional average compression gains of almost 50%, i.e. it potentially doubles the available storage, although at the cost of some additional computation time. On highly redundant datasets, and in comparison with gzip, 8-fold size reductions have been obtained. Both source code and binaries for several operating systems are freely available for non-commercial use at http://bioinformatics.ua.pt/software/mfcompress/.

  7. Preexisting severe cervical spinal cord compression is a significant risk factor for severe paralysis development in patients with traumatic cervical spinal cord injury without bone injury: a retrospective cohort study.

    PubMed

    Oichi, Takeshi; Oshima, Yasushi; Okazaki, Rentaro; Azuma, Seiichi

    2016-01-01

    The objective of this study was to investigate whether preexisting severe cervical spinal cord compression affects the severity of paralysis once patients develop traumatic cervical spinal cord injury (CSCI) without bone injury. We retrospectively investigated 122 consecutive patients with traumatic CSCI without bone injury. The severity of paralysis on admission was assessed by the American Spinal Injury Association impairment scale (AIS). The degree of preexisting cervical spinal cord compression was evaluated by the maximum spinal cord compression (MSCC) and was divided into three categories: minor compression (MSCC ≤ 20%), moderate compression (20% < MSCC ≤ 40%), and severe compression (40% < MSCC). We investigated soft-tissue damage on magnetic resonance imaging to estimate the external force applied. Other potential risk factors, including age, sex, fused vertebrae, and ossification of the longitudinal ligament, were also reviewed. A multivariate logistic regression analysis was performed to investigate the risk factors for developing severe paralysis (AIS A-C) on admission. Our study included 103 males and 19 females with a mean age of 65 years. Sixty-one patients showed severe paralysis (AIS A-C) on admission. The average MSCC was 22%. Moderate compression was observed in 41 patients, and severe compression in 20. Soft-tissue damage was observed in 91. A multivariate analysis showed that severe cervical spinal cord compression significantly affected the severity of paralysis at the time of injury, whereas minor and moderate compression did not. Soft-tissue damage was also significantly associated with severe paralysis on admission. Preexisting severe cervical cord compression is an independent risk factor for severe paralysis once patients develop traumatic CSCI without bone injury.

  8. Oblivious image watermarking combined with JPEG compression

    NASA Astrophysics Data System (ADS)

    Chen, Qing; Maitre, Henri; Pesquet-Popescu, Beatrice

    2003-06-01

    For most data hiding applications, the main source of concern is the effect of lossy compression on the hidden information. The objective of watermarking is fundamentally in conflict with that of lossy compression. The latter attempts to remove all irrelevant and redundant information from a signal, while the former uses the irrelevant information to mask the presence of hidden data. Compression of a watermarked image can significantly affect the retrieval of the watermark. Past investigations of this problem have relied heavily on simulation. It is desirable not only to measure the effect of compression on the embedded watermark, but also to control the embedding process so that the watermark survives lossy compression. In this paper, we focus on oblivious watermarking by assuming that the watermarked image inevitably undergoes JPEG compression prior to watermark extraction. We propose an image-adaptive watermarking scheme in which the watermarking algorithm and the JPEG compression standard are jointly considered. Watermark embedding takes into consideration the JPEG compression quality factor and exploits a human visual system (HVS) model to adaptively attain a proper trade-off among transparency, hiding data rate, and robustness to JPEG compression. The scheme estimates the image-dependent payload under JPEG compression to achieve the watermarking bit allocation in a determinate way, while maintaining consistent watermark retrieval performance.

  9. Approximate reversibility in the context of entropy gain, information gain, and complete positivity

    NASA Astrophysics Data System (ADS)

    Buscemi, Francesco; Das, Siddhartha; Wilde, Mark M.

    2016-06-01

    There are several inequalities in physics which limit how well we can process physical systems to achieve some intended goal, including the second law of thermodynamics, entropy bounds in quantum information theory, and the uncertainty principle of quantum mechanics. Recent results provide physically meaningful enhancements of these limiting statements, determining how well one can attempt to reverse an irreversible process. In this paper, we apply and extend these results to give strong enhancements to several entropy inequalities, having to do with entropy gain, information gain, entropic disturbance, and complete positivity of open quantum systems dynamics. Our first result is a remainder term for the entropy gain of a quantum channel. This result implies that a small increase in entropy under the action of a subunital channel is a witness to the fact that the channel's adjoint can be used as a recovery map to undo the action of the original channel. We apply this result to pure-loss, quantum-limited amplifier, and phase-insensitive quantum Gaussian channels, showing how a quantum-limited amplifier can serve as a recovery from a pure-loss channel and vice versa. Our second result regards the information gain of a quantum measurement, both without and with quantum side information. We find here that a small information gain implies that it is possible to undo the action of the original measurement if it is efficient. The result also has operational ramifications for the information-theoretic tasks known as measurement compression without and with quantum side information. Our third result shows that the loss of Holevo information caused by the action of a noisy channel on an input ensemble of quantum states is small if and only if the noise can be approximately corrected on average. We finally establish that the reduced dynamics of a system-environment interaction are approximately completely positive and trace preserving if and only if the data processing

  10. Application of nonlinear models to estimate the gain of one-dimensional free-electron lasers

    NASA Astrophysics Data System (ADS)

    Peter, E.; Rizzato, F. B.; Endler, A.

    2017-06-01

    In the present work, we make use of simplified nonlinear models based on the compressibility factor (Peter et al., Phys. Plasmas, vol. 20 (12), 2013, 123104) to predict the gain of one-dimensional (1-D) free-electron lasers (FELs), considering space-charge and thermal effects. These models have proved reasonable for estimating some aspects of 1-D FEL theory, such as the position of the onset of mixing in the case of an initially cold electron beam, and the position of the breakdown of the laminar regime in the case of an initially warm beam (Peter et al., Phys. Plasmas, vol. 21 (11), 2014, 113104). The results given by the models are compared to wave-particle simulations, showing a reasonable agreement.

  11. Psychological factors and trimester-specific gestational weight gain: a systematic review.

    PubMed

    Kapadia, Mufiza Zia; Gaston, Anca; Van Blyderveen, Sherry; Schmidt, Louis; Beyene, Joseph; McDonald, Helen; McDonald, Sarah

    2015-01-01

    Excess gestational weight gain (GWG), which has reached epidemic proportions, is associated with numerous adverse pregnancy outcomes. Early pregnancy provides a unique opportunity for counseling pregnant women, since many women are motivated to engage in healthy behaviors. A systematic review was conducted to summarize the relation between psychological factors and trimester-specific GWG, i.e. GWG measured at the end of each trimester. Eight databases were searched for affect, cognition, and personality factors. The Meta-analysis of Observational Studies in Epidemiology (MOOSE) guidelines were followed. The methodological quality of each study was assessed using a modified Newcastle-Ottawa Scale. Of 3620 non-duplicate titles and abstracts, 74 articles underwent full-text review. Two cohort studies met the inclusion criteria. Distress was negatively associated with first-trimester GWG among both adolescents and non-adolescents. Body image dissatisfaction was associated with second-trimester GWG only among non-adolescents. No association emerged between perceived stress, state and trait anxiety, or body image dissatisfaction among adolescents and trimester-specific GWG. The relations between trimester-specific GWG and a number of weight-related and dietary-related cognitions, affective states, and personality traits remain unexplored. Given the limited number of studies, further high-quality evidence is required to examine the association between psychological factors and trimester-specific GWG, especially for cognitive and personality factors.

  12. Wavelet compression of noisy tomographic images

    NASA Astrophysics Data System (ADS)

    Kappeler, Christian; Mueller, Stefan P.

    1995-09-01

    3D data acquisition is increasingly used in positron emission tomography (PET) to collect a larger fraction of the emitted radiation. A major practical difficulty with data storage and transmission in 3D-PET is the large size of the data sets. A typical dynamic study contains about 200 Mbyte of data. PET images inherently have a high level of photon noise and therefore are usually evaluated after being processed by a smoothing filter. In this work we examined lossy compression schemes under the postulate that they not induce image modifications exceeding those resulting from low-pass filtering. The standard we refer to is the Hanning filter. Resolution and inhomogeneity serve as figures of merit for the quantification of image quality. The images to be compressed are transformed to a wavelet representation using Daubechies-12 wavelets and, after filtering, compressed by thresholding. We do not include further compression by quantization and coding here. Achievable compression factors at this level of processing are thirty to fifty.
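
    A minimal sketch of the transform-and-threshold step is given below, using PyWavelets with Daubechies-12 wavelets; as in the study, quantization and coding are omitted, so the printed "compression factor" is only the fraction of coefficients that survive thresholding, and the threshold choice is an assumption.

```python
# Wavelet hard-thresholding sketch with Daubechies-12 wavelets (PyWavelets).
# Quantization and entropy coding are omitted, as in the study, so the ratio
# below only counts surviving coefficients.
import numpy as np
import pywt

def threshold_compress(image, level=3, keep_fraction=0.03):
    coeffs = pywt.wavedec2(image, "db12", level=level)
    flat = np.concatenate([coeffs[0].ravel()] +
                          [a.ravel() for tup in coeffs[1:] for a in tup])
    thresh = np.quantile(np.abs(flat), 1.0 - keep_fraction)
    kept = [coeffs[0]] + [tuple(pywt.threshold(a, thresh, mode="hard")
                                for a in tup) for tup in coeffs[1:]]
    recon = pywt.waverec2(kept, "db12")[:image.shape[0], :image.shape[1]]
    nonzero = kept[0].size + sum(np.count_nonzero(a)
                                 for tup in kept[1:] for a in tup)
    return recon, image.size / nonzero

noisy = np.random.default_rng(0).poisson(30, (128, 128)).astype(float)
recon, cf = threshold_compress(noisy)
print(f"approximate compression factor: {cf:.1f}")
```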

  13. Protective effect of caspase inhibition on compression-induced muscle damage

    PubMed Central

    Teng, Bee T; Tam, Eric W; Benzie, Iris F; Siu, Parco M

    2011-01-01

    There are currently no effective therapies for treating pressure-induced deep tissue injury. This study tested the efficacy of pharmacological inhibition of caspase in preventing muscle damage following sustained moderate compression. Adult Sprague-Dawley rats were subjected to prolonged moderate compression. Static pressure of 100 mmHg was applied to an area of 1.5 cm² in the tibialis region of the right limb of the rats for 6 h each day for two consecutive days. The left, uncompressed limb served as an intra-animal control. Rats were randomized to receive either vehicle (DMSO) as control treatment (n = 8) or 6 mg kg⁻¹ of caspase inhibitor (z-VAD-fmk; n = 8) prior to the 6 h compression on the two consecutive days. Muscle tissues directly underneath the compressed region of the compressed limb and the same region of the control limb were harvested after the compression procedure. Histological examination and biochemical/molecular measurements of apoptosis and autophagy were performed. Caspase inhibition was effective in alleviating the compression-induced pathohistology of muscle. The increases in caspase-3 protease activity, TUNEL index, apoptotic DNA fragmentation and pro-apoptotic factors (Bax, p53 and EndoG) and the decreases in anti-apoptotic factors (XIAP and HSP70) observed in compressed muscle of DMSO-treated animals were not found in animals treated with the caspase inhibitor. The mRNA content of autophagic factors (Beclin-1, Atg5 and Atg12) and the protein content of LC3, FoxO3 and phospho-FoxO3, which were down-regulated in compressed muscle of DMSO-treated animals, were all maintained at their basal levels in the caspase inhibitor-treated animals. Our data provide evidence that caspase inhibition attenuates compression-induced muscle apoptosis and maintains the basal autophagy level. These findings demonstrate that pharmacological inhibition of caspase/apoptosis is effective in alleviating muscle damage induced by prolonged compression.

  14. The Instructional Effects of Diagrams and Time-Compressed Instruction on Student Achievement and Learners' Perceptions of Cognitive Load

    ERIC Educational Resources Information Center

    Pastore, Raymond S.

    2009-01-01

    The purpose of this study was to examine the effects of visual representations and time-compressed instruction on learning and learners' perceptions of cognitive load. Time-compressed instruction refers to instruction that has been increased in speed without sacrificing quality. It was anticipated that learners would be able to gain a conceptual…

  15. Comparative performance between compressed and uncompressed airborne imagery

    NASA Astrophysics Data System (ADS)

    Phan, Chung; Rupp, Ronald; Agarwal, Sanjeev; Trang, Anh; Nair, Sumesh

    2008-04-01

    The US Army's RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD), Countermine Division is evaluating the compressibility of airborne multi-spectral imagery for mine and minefield detection applications. Of particular interest is the highest image data compression rate that can be afforded without loss of image quality for war fighters in the loop or loss of performance of the near-real-time mine detection algorithm. The JPEG-2000 compression standard is used to perform data compression. Both lossless and lossy compression are considered. A multi-spectral anomaly detector such as RX (Reed-Xiaoli), which is widely used as a core baseline algorithm in airborne mine and minefield detection on different mine types, minefields, and terrains to identify potential individual targets, is used to compare mine detection performance. This paper presents the compression scheme and compares detection performance results between compressed and uncompressed imagery for various levels of compression. The compression efficiency is evaluated, and its dependence on different backgrounds and other factors is documented and presented using multi-spectral data.
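
    The RX detector referenced above scores each pixel by its Mahalanobis distance from the background statistics; the sketch below is the textbook global-RX form on synthetic data, not NVESD's operational implementation.

```python
# Textbook global RX (Reed-Xiaoli) anomaly detector: Mahalanobis distance of
# each pixel spectrum from the scene mean and covariance.
import numpy as np

def rx_scores(cube: np.ndarray) -> np.ndarray:
    """cube: (rows, cols, bands) multi-spectral image -> (rows, cols) scores."""
    rows, cols, bands = cube.shape
    pixels = cube.reshape(-1, bands).astype(float)
    mu = pixels.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(pixels, rowvar=False) + 1e-6 * np.eye(bands))
    centered = pixels - mu
    scores = np.einsum("ij,jk,ik->i", centered, cov_inv, centered)
    return scores.reshape(rows, cols)

rng = np.random.default_rng(0)
cube = rng.normal(0.0, 1.0, (64, 64, 6))
cube[32, 32, :] += 8.0                     # implant one spectral anomaly
print("anomaly found at:", np.unravel_index(rx_scores(cube).argmax(), (64, 64)))
```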

  16. SCALCE: boosting sequence compression algorithms using locally consistent encoding

    PubMed Central

    Hach, Faraz; Numanagić, Ibrahim; Sahinalp, S Cenk

    2012-01-01

    Motivation: The high throughput sequencing (HTS) platforms generate unprecedented amounts of data that introduce challenges for the computational infrastructure. Data management, storage and analysis have become major logistical obstacles for those adopting the new platforms. The requirement for large investment for this purpose almost signalled the end of the Sequence Read Archive hosted at the National Center for Biotechnology Information (NCBI), which holds most of the sequence data generated world wide. Currently, most HTS data are compressed through general purpose algorithms such as gzip. These algorithms are not designed for compressing data generated by the HTS platforms; for example, they do not take advantage of the specific nature of genomic sequence data, that is, limited alphabet size and high similarity among reads. Fast and efficient compression algorithms designed specifically for HTS data should be able to address some of the issues in data management, storage and communication. Such algorithms would also help with analysis provided they offer additional capabilities such as random access to any read and indexing for efficient sequence similarity search. Here we present SCALCE, a ‘boosting’ scheme based on Locally Consistent Parsing technique, which reorganizes the reads in a way that results in a higher compression speed and compression rate, independent of the compression algorithm in use and without using a reference genome. Results: Our tests indicate that SCALCE can improve the compression rate achieved through gzip by a factor of 4.19—when the goal is to compress the reads alone. In fact, on SCALCE reordered reads, gzip running time can improve by a factor of 15.06 on a standard PC with a single core and 6 GB memory. Interestingly even the running time of SCALCE + gzip improves that of gzip alone by a factor of 2.09. When compared with the recently published BEETL, which aims to sort the (inverted) reads in lexicographic order for

  17. SCALCE: boosting sequence compression algorithms using locally consistent encoding.

    PubMed

    Hach, Faraz; Numanagic, Ibrahim; Alkan, Can; Sahinalp, S Cenk

    2012-12-01

    The high throughput sequencing (HTS) platforms generate unprecedented amounts of data that introduce challenges for the computational infrastructure. Data management, storage and analysis have become major logistical obstacles for those adopting the new platforms. The requirement for large investment for this purpose almost signalled the end of the Sequence Read Archive hosted at the National Center for Biotechnology Information (NCBI), which holds most of the sequence data generated world wide. Currently, most HTS data are compressed through general purpose algorithms such as gzip. These algorithms are not designed for compressing data generated by the HTS platforms; for example, they do not take advantage of the specific nature of genomic sequence data, that is, limited alphabet size and high similarity among reads. Fast and efficient compression algorithms designed specifically for HTS data should be able to address some of the issues in data management, storage and communication. Such algorithms would also help with analysis provided they offer additional capabilities such as random access to any read and indexing for efficient sequence similarity search. Here we present SCALCE, a 'boosting' scheme based on Locally Consistent Parsing technique, which reorganizes the reads in a way that results in a higher compression speed and compression rate, independent of the compression algorithm in use and without using a reference genome. Our tests indicate that SCALCE can improve the compression rate achieved through gzip by a factor of 4.19-when the goal is to compress the reads alone. In fact, on SCALCE reordered reads, gzip running time can improve by a factor of 15.06 on a standard PC with a single core and 6 GB memory. Interestingly even the running time of SCALCE + gzip improves that of gzip alone by a factor of 2.09. When compared with the recently published BEETL, which aims to sort the (inverted) reads in lexicographic order for improving bzip2, SCALCE + gzip

  18. Factors affecting weight gain and dietary intake in Latino males residing in Mississippi: A preliminary study

    USDA-ARS?s Scientific Manuscript database

    Research indicates that as Latinos become more acculturated to the United States, their diet changes and they experience weight gain. There is also a high incidence of depression in this population. The purpose of this preliminary study was to examine the correlations between sociodemographic factor...

  19. Cosmological Particle Data Compression in Practice

    NASA Astrophysics Data System (ADS)

    Zeyen, M.; Ahrens, J.; Hagen, H.; Heitmann, K.; Habib, S.

    2017-12-01

    In cosmological simulations trillions of particles are handled and several terabytes of unstructured particle data are generated in each time step. Transferring this data directly from memory to disk in an uncompressed way results in a massive load on I/O and storage systems. Hence, one goal of domain scientists is to compress the data before storing it to disk while minimizing the loss of information. To prevent reading back uncompressed data from disk, this can be done in an in-situ process. Since the simulation continuously generates data, the available time for the compression of one time step is limited. Therefore, the evaluation of compression techniques has shifted from focusing only on compression rates to including run-times and scalability. In recent years several compression techniques for cosmological data have become available. These techniques can be either lossless or lossy. For both cases, this study aims to evaluate and compare the state-of-the-art compression techniques for unstructured particle data. The study focuses on the techniques available in the Blosc framework with its multi-threading support, the XZ Utils toolkit with the LZMA algorithm that achieves high compression rates, and the widespread FPZIP and ZFP methods for lossy compression. For the investigated compression techniques, quantitative performance indicators such as compression rates, run-time/throughput, and reconstruction errors are measured. Based on these factors, this study offers a comprehensive analysis of the individual techniques and discusses their applicability for in-situ compression. In addition, domain-specific measures are evaluated on the reconstructed data sets, and the relative error rates and statistical properties are analyzed and compared. Based on this study, future challenges and directions in the compression of unstructured cosmological particle data are identified.
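
    The Blosc framework mentioned above is scriptable from Python; the snippet below (assuming the python-blosc bindings are installed, and using arbitrary codec settings) shows the kind of rate and throughput measurement described, on synthetic particle positions rather than real simulation output.

```python
# Rate/throughput measurement on synthetic "particle" positions with Blosc
# (python-blosc bindings assumed installed; codec settings are arbitrary).
import time
import numpy as np
import blosc

rng = np.random.default_rng(0)
particles = np.sort(rng.random((2_000_000, 3)).astype(np.float32), axis=0)
raw = particles.tobytes()                  # sorted to mimic spatial locality

start = time.perf_counter()
packed = blosc.compress(raw, typesize=4, clevel=5,
                        shuffle=blosc.SHUFFLE, cname="zstd")
elapsed = time.perf_counter() - start

print(f"compression rate: {len(raw) / len(packed):.2f}x")
print(f"throughput      : {len(raw) / elapsed / 1e9:.2f} GB/s")
```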

  20. Beam steering performance of compressed Luneburg lens based on transformation optics

    NASA Astrophysics Data System (ADS)

    Gao, Ju; Wang, Cong; Zhang, Kuang; Hao, Yang; Wu, Qun

    2018-06-01

    In this paper, two types of compressed Luneburg lenses based on transformation optics are investigated and simulated using two different sources, namely, waveguides and dipoles, which represent plane and spherical wave sources, respectively. We determined that the largest beam steering angle and the related feed point are intrinsic characteristics of a certain type of compressed Luneburg lens, and that the optimized distance between the feed and lens, gain enhancement, and side-lobe suppression are related to the type of source. Based on our results, we anticipate that these lenses will prove useful in various future antenna applications.

  1. Cloud solution for histopathological image analysis using region of interest based compression.

    PubMed

    Kanakatte, Aparna; Subramanya, Rakshith; Delampady, Ashik; Nayak, Rajarama; Purushothaman, Balamuralidhar; Gubbi, Jayavardhana

    2017-07-01

    Recent technological gains have led to the adoption of innovative cloud-based solutions in the medical imaging field. Once a medical image is acquired, it can be viewed, modified, annotated and shared on many devices. This advancement is mainly due to the introduction of cloud computing in the medical domain. Tissue pathology images are complex and are normally collected at different focal lengths using a microscope. A single whole slide image contains many multi-resolution images stored in a pyramidal structure, with the highest resolution image at the base and the smallest thumbnail image at the top of the pyramid. The highest resolution image is used for tissue pathology diagnosis and analysis. Transferring and storing such huge images is a big challenge. Compression is a very useful and effective technique to reduce the size of these images. As pathology images are used for diagnosis, no information can be lost during compression (lossless compression). A novel method of extracting the tissue region and applying lossless compression on this region and lossy compression on the empty regions is proposed in this paper. The resulting compression ratio, together with lossless compression of the tissue region, is in an acceptable range, allowing efficient storage and transmission to and from the Cloud.
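
    A minimal sketch of this region-of-interest idea follows. It is not the authors' pipeline: the tissue mask here is a simple brightness threshold, the lossless branch is zlib on the masked pixels, and the lossy branch keeps only coarse block means of the near-white background; all names and thresholds are illustrative.

```python
import zlib

import numpy as np

def compress_roi(slide, background_level=230, block=8):
    """Toy ROI compression: lossless zlib on tissue pixels, coarse block
    means (lossy) for the near-white background of an H x W x 3 uint8 slide."""
    gray = slide.mean(axis=2).astype(np.uint8)
    tissue = gray < background_level                  # brightness threshold mask
    lossless = zlib.compress(slide[tissue].tobytes(), level=9)
    h, w = gray.shape
    bg = gray[:h - h % block, :w - w % block]
    bg = bg.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    lossy = zlib.compress(bg.astype(np.uint8).tobytes(), level=9)
    return tissue, lossless, lossy
```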

  2. Electron core ionization in compressed alkali metal cesium

    NASA Astrophysics Data System (ADS)

    Degtyareva, V. F.

    2018-01-01

    Elements of groups I and II in the periodic table have valence electrons of s-type and are usually considered as simple metals. Crystal structures of these elements at ambient pressure are close-packed and high-symmetry of bcc and fcc-types, defined by electrostatic (Madelung) energy. Diverse structures were found under high pressure with decrease of the coordination number, packing fraction and symmetry. Formation of complex structures can be understood within the model of Fermi sphere-Brillouin zone interactions and supported by Hume-Rothery arguments. With the volume decrease there is a gain of band structure energy accompanied by a formation of many-faced Brillouin zone polyhedra. Under compression to less than a half of the initial volume the interatomic distances become close to or smaller than the ionic radius which should lead to the electron core ionization. At strong compression it is necessary to assume that for alkali metals the valence electron band overlaps with the upper core electrons, which increases the valence electron count under compression.

  3. Factors that Prevent Children from Gaining Access to Schooling: A Study of Delhi Slum Households

    ERIC Educational Resources Information Center

    Tsujita, Yuko

    2013-01-01

    This paper examines the factors that prevent slum children aged 5-14 from gaining access to schooling in light of the worsening urban poverty and sizable increase in rural-to-urban migration. Bias against social disadvantage in terms of gender and caste is not clearly manifested in schooling, while migrated children are less likely to attend…

  4. Stokes Profile Compression Applied to VSM Data

    NASA Astrophysics Data System (ADS)

    Toussaint, W. A.; Henney, C. J.; Harvey, J. W.

    2012-02-01

    The practical details of applying the Expansion in Hermite Functions (EHF) method to compression of full-disk full-Stokes solar spectroscopic data from the SOLIS/VSM instrument are discussed in this paper. The algorithm developed and discussed here preserves the 630.15 and 630.25 nm Fe i lines, along with the local continuum and telluric lines. This compression greatly reduces the amount of space required to store these data sets while maintaining the quality of the data, allowing these observations to be archived and made publicly available with limited bandwidth. Applying EHF to the full-Stokes profiles and saving the coefficient files with Rice compression reduces the disk space required to store these observations by a factor of 20, while maintaining the quality of the data and with a total compression time only 35% slower than the standard gzip (GNU zip) compression.
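
    To give a flavour of profile-expansion compression, the sketch below fits a line profile with Hermite polynomials using NumPy and keeps only the coefficients. This is a crude stand-in for the EHF method, not the VSM pipeline; the degree and normalization are illustrative.

```python
import numpy as np
from numpy.polynomial import hermite as H

def compress_profile(wavelength, profile, degree=10):
    """Fit a Stokes profile with Hermite polynomials; only the (degree + 1)
    coefficients need to be stored instead of the full sampled profile."""
    x = (wavelength - wavelength.mean()) / wavelength.std()
    return H.hermfit(x, profile, degree)

def decompress_profile(wavelength, coeffs):
    x = (wavelength - wavelength.mean()) / wavelength.std()
    return H.hermval(x, coeffs)
```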

  5. Data compression for full motion video transmission

    NASA Technical Reports Server (NTRS)

    Whyte, Wayne A., Jr.; Sayood, Khalid

    1991-01-01

    Clearly transmission of visual information will be a major, if not dominant, factor in determining the requirements for, and assessing the performance of the Space Exploration Initiative (SEI) communications systems. Projected image/video requirements which are currently anticipated for SEI mission scenarios are presented. Based on this information and projected link performance figures, the image/video data compression requirements which would allow link closure are identified. Finally several approaches which could satisfy some of the compression requirements are presented and possible future approaches which show promise for more substantial compression performance improvement are discussed.

  6. Modeling Two-Stage Bunch Compression With Wakefields: Macroscopic Properties And Microbunching Instability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bosch, R.A.; Kleman, K.J.; /Wisconsin U., SRC

    2011-09-08

    In a two-stage compression and acceleration system, where each stage compresses a chirped bunch in a magnetic chicane, wakefields affect high-current bunches. The longitudinal wakes affect the macroscopic energy and current profiles of the compressed bunch and cause microbunching at short wavelengths. For macroscopic wavelengths, impedance formulas and tracking simulations show that the wakefields can be dominated by the resistive impedance of coherent edge radiation. For this case, we calculate the minimum initial bunch length that can be compressed without producing an upright tail in phase space and associated current spike. Formulas are also obtained for the jitter in the bunch arrival time downstream of the compressors that results from the bunch-to-bunch variation of current, energy, and chirp. Microbunching may occur at short wavelengths where the longitudinal space-charge wakes dominate or at longer wavelengths dominated by edge radiation. We model this range of wavelengths with frequency-dependent impedance before and after each stage of compression. The growth of current and energy modulations is described by analytic gain formulas that agree with simulations.

  7. CNES studies for on-board implementation via HLS tools of a cloud-detection module for selective compression

    NASA Astrophysics Data System (ADS)

    Camarero, R.; Thiebaut, C.; Dejean, Ph.; Speciel, A.

    2010-08-01

    Future CNES high resolution instruments for remote sensing missions will lead to higher data-rates because of the increase in resolution and dynamic range. For example, the ground resolution improvement has induced a data-rate multiplied by 8 from SPOT4 to SPOT5 [1] and by 28 to PLEIADES-HR [2]. Innovative "smart" compression techniques will then be required, performing different types of compression inside a scene, in order to reach higher global compression ratios while complying with image quality requirements. This so-called "selective compression" allows important compression gains by detecting and then differently compressing the regions-of-interest (ROI) and regions of non-interest in the image (e.g. higher compression ratios are assigned to the non-interesting data). Given that most of CNES high resolution images are cloudy [1], significant mass-memory and transmission gains could be reached by just detecting and suppressing (or compressing significantly) the areas covered by clouds. Since 2007, CNES has been working on a cloud detection module [3] as a simplification for on-board implementation of an already existing module used on-ground for PLEIADES-HR album images [4]. The different steps of this Support Vector Machine classifier have already been analyzed, for simplification and optimization, during this on-board implementation study: reflectance computation, characteristics vector computation (based on multispectral criteria) and computation of the SVM output. In order to speed up the hardware design phase, a new approach based on HLS [5] tools is being tested for the VHDL description stage. The aim is to obtain a bit-true VHDL design directly from a high level description language such as C or Matlab/Simulink [6].
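
    The classification stage lends itself to a compact illustration. The sketch below is not the CNES module: it trains a generic scikit-learn SVM on illustrative per-block features (per-band mean reflectance plus a simple whiteness measure) and uses the predicted label to pick a compression ratio per block; every feature name and threshold is an assumption.

```python
import numpy as np
from sklearn.svm import SVC

def block_features(block_reflectance):
    """Illustrative features for one image block (bands x h x w):
    per-band mean reflectance plus the spread across bands (clouds are
    'white', i.e. bright with a small spread)."""
    means = block_reflectance.mean(axis=(1, 2))
    return np.append(means, means.std())

clf = SVC(kernel="rbf")
# clf.fit(np.stack([block_features(b) for b in training_blocks]), cloud_labels)
# For each new block, assign a harsher ratio to cloud-covered data:
# ratio = 100 if clf.predict([block_features(block)])[0] == 1 else 4
```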

  8. iDoComp: a compression scheme for assembled genomes

    PubMed Central

    Ochoa, Idoia; Hernaez, Mikel; Weissman, Tsachy

    2015-01-01

    Motivation: With the release of the latest next-generation sequencing (NGS) machine, the HiSeq X by Illumina, the cost of sequencing a human genome has dropped to a mere $4000. Thus we are approaching a milestone in sequencing history, known as the $1000 genome era, where the sequencing of individuals is affordable, opening the doors to effective personalized medicine. Massive generation of genomic data, including assembled genomes, is expected in the following years. There is a crucial need for genome compression guaranteed to perform well simultaneously on different species, from simple bacteria to humans, which will ease their transmission, dissemination and analysis. Further, most of the new genomes to be compressed will correspond to individuals of a species from which a reference already exists on the database. Thus, it is natural to propose compression schemes that assume and exploit the availability of such references. Results: We propose iDoComp, a compressor of assembled genomes presented in FASTA format that compresses an individual genome using a reference genome for both the compression and the decompression. In terms of compression efficiency, iDoComp outperforms previously proposed algorithms in most of the studied cases, with comparable or better running time. For example, we observe compression gains of up to 60% in several cases, including H. sapiens data, when comparing with the best compression performance among the previously proposed algorithms. Availability: iDoComp is written in C and can be downloaded from: http://www.stanford.edu/~iochoa/iDoComp.html (We also provide a full explanation on how to run the program and an example with all the necessary files to run it.). Contact: iochoa@stanford.edu Supplementary information: Supplementary Data are available at Bioinformatics online. PMID:25344501
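
    The essence of reference-based compression can be sketched in a few lines. The code below is not iDoComp's mapping generation: it greedily encodes the target as (reference position, match length, next character) triplets and entropy-codes the triplet stream with zlib, purely to illustrate why a nearby reference makes individual genomes cheap to store.

```python
import zlib

def encode_against_reference(target, reference, max_match=64):
    """Greedy reference-based encoding of `target` as
    (ref_pos, match_len, next_char) triplets, then zlib on the triplet stream."""
    triplets, i = [], 0
    while i < len(target):
        best_pos, best_len = 0, 0
        for length in range(min(max_match, len(target) - i), 0, -1):
            pos = reference.find(target[i:i + length])
            if pos != -1:
                best_pos, best_len = pos, length
                break
        nxt = target[i + best_len] if i + best_len < len(target) else ""
        triplets.append(f"{best_pos},{best_len},{nxt}")
        i += best_len + 1
    return zlib.compress(";".join(triplets).encode())
```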

  9. Direct compression of chitosan: process and formulation factors to improve powder flow and tablet performance.

    PubMed

    Buys, Gerhard M; du Plessis, Lissinda H; Marais, Andries F; Kotze, Awie F; Hamman, Josias H

    2013-06-01

    Chitosan is a polymer derived from chitin that is widely available at relatively low cost, but due to compression challenges it has limited application for the production of direct compression tablets. The aim of this study was to use certain process and formulation variables to improve manufacturing of tablets containing chitosan as bulking agent. Chitosan particle size and flow properties were determined, which included bulk density, tapped density, compressibility and moisture uptake. The effect of process variables (i.e. compression force, punch depth, percentage compaction in a novel double fill compression process) and formulation variables (i.e. type of glidant, citric acid, pectin, coating with Eudragit S®) on chitosan tablet performance (i.e. mass variation, tensile strength, dissolution) was investigated. Moisture content of the chitosan powder, particle size and the inclusion of glidants had a pronounced effect on its flow ability. Varying the percentage compaction during the first cycle of a double fill compression process produced chitosan tablets with more acceptable tensile strength and dissolution rate properties. The inclusion of citric acid and pectin into the formulation significantly decreased the dissolution rate of isoniazid from the tablets due to gel formation. Direct compression of chitosan powder into tablets can be significantly improved by the investigated process and formulation variables as well as applying a double fill compression process.

  10. Study of factors influencing the mechanical properties of polyurethane foams under dynamic compression

    NASA Astrophysics Data System (ADS)

    Linul, E.; Marsavina, L.; Voiconi, T.; Sadowski, T.

    2013-07-01

    The effects of density, loading rate, material orientation and temperature on the dynamic compression behavior of rigid polyurethane foams are investigated in this paper. These parameters play a very important role, taking into account that foams are used as packing materials or dampers which require high energy impact absorption. The experimental study was carried out on closed-cell rigid polyurethane (PUR) foam specimens of different densities (100, 160 and 300 kg/m3), having a cubic shape. The specimens were subjected to uniaxial dynamic compression with loading rates in the range of 1.37-3.25 m/s, using four different temperatures (20, 60, 90, 110°C) and two loading planes (direction (3) - rise direction and direction (2) - in plane). Experimental results show that Young's modulus, yield stress and plateau stress values increase with increasing density. Density has the most significant effect on the mechanical properties of rigid PUR foams in dynamic compression, but loading speed, material orientation and temperature also influence the compression behavior.

  11. Compression embedding

    DOEpatents

    Sandford, M.T. II; Handel, T.G.; Bradley, J.N.

    1998-07-07

    A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table with integer index values existing in a representation of host data created by a lossy compression technique are disclosed. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use loss-less compression to reduce to the final size the intermediate representation as indices. The efficiency of the loss-less compression, known also as entropy coding compression, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%. 21 figs.
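
    The mechanism described in these patent abstracts, nudging quantization indices that are adjacent in value so their least significant bits carry auxiliary data, can be illustrated with a short sketch. This is an illustrative parity-embedding toy, not the patented method; the index values are made up.

```python
import numpy as np

def embed_bits(indices, bits):
    """Force the parity (LSB) of selected quantization indices to match the
    auxiliary bits; a one-unit change stays within the coder's uncertainty."""
    out = indices.copy()
    for k, bit in enumerate(bits):
        if out[k] % 2 != bit:
            out[k] += 1 if out[k] % 2 == 0 else -1
    return out

def extract_bits(indices, n):
    return [int(v % 2) for v in indices[:n]]

idx = np.array([12, 7, 30, 5, 18, 9])          # hypothetical quantization indices
stego = embed_bits(idx, [1, 0, 1, 1, 0, 0])
assert extract_bits(stego, 6) == [1, 0, 1, 1, 0, 0]
```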

  12. Compression embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.

    1998-01-01

    A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table with integer index values existing in a representation of host data created by a lossy compression technique. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use loss-less compression to reduce to the final size the intermediate representation as indices. The efficiency of the loss-less compression, known also as entropy coding compression, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%.

  13. Comparative data compression techniques and multi-compression results

    NASA Astrophysics Data System (ADS)

    Hasan, M. R.; Ibrahimy, M. I.; Motakabber, S. M. A.; Ferdaus, M. M.; Khan, M. N. H.

    2013-12-01

    Data compression is very necessary in business data processing because of the cost savings it offers and the large volume of data manipulated in many business applications. It is a method or system for transmitting a digital image (i.e., an array of pixels) from a digital data source to a digital data receiver. The smaller the data, the better the transmission speed and the greater the time savings. In such communication, the goal is always to transmit data efficiently and without noise. This paper provides several techniques for lossless compression of text-type data, together with comparative results for single and multiple compression, which help identify better compression outputs and support the development of compression algorithms.
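
    The single-versus-multiple comparison the paper describes is easy to reproduce with standard-library codecs. The sketch below is illustrative: it measures the compressed size of a text once with each codec and once with the codecs chained, which typically shows that recompressing already-compressed data gains little.

```python
import bz2
import lzma
import zlib

def compare(text):
    raw = text.encode()
    single = {
        "zlib": len(zlib.compress(raw, 9)),
        "bz2": len(bz2.compress(raw, 9)),
        "lzma": len(lzma.compress(raw)),
    }
    chained = len(lzma.compress(bz2.compress(zlib.compress(raw, 9), 9)))
    return single, chained

# single, chained = compare(open("corpus.txt", encoding="utf-8").read())
```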

  14. The effect of compression on individual pressure vessel nickel/hydrogen components

    NASA Technical Reports Server (NTRS)

    Manzo, Michelle A.; Perez-Davis, Marla E.

    1988-01-01

    Compression tests were performed on representative Individual Pressure Vessel (IPV) Nickel/Hydrogen cell components in an effort to better understand the effects of force on component compression and the interactions of components under compression. It appears that the separator is the most easily compressed of all of the stack components. It will typically partially compress before any of the other components begin to compress. The compression characteristics of the cell components in assembly differed considerably from what would be predicted based on individual compression characteristics. Component interactions played a significant role in the stack response to compression. The results of the compression tests were factored into the design and selection of Belleville washers added to the cell stack to accommodate nickel electrode expansion while keeping the pressure on the stack within a reasonable range of the original preset.

  15. Distributed Compressive CSIT Estimation and Feedback for FDD Multi-User Massive MIMO Systems

    NASA Astrophysics Data System (ADS)

    Rao, Xiongbin; Lau, Vincent K. N.

    2014-06-01

    To fully utilize the spatial multiplexing gains or array gains of massive MIMO, the channel state information must be obtained at the transmitter side (CSIT). However, conventional CSIT estimation approaches are not suitable for FDD massive MIMO systems because of the overwhelming training and feedback overhead. In this paper, we consider multi-user massive MIMO systems and deploy the compressive sensing (CS) technique to reduce the training as well as the feedback overhead in the CSIT estimation. Multi-user massive MIMO systems exhibit a hidden joint sparsity structure in the user channel matrices due to the shared local scatterers in the physical propagation environment. As such, instead of naively applying the conventional CS to the CSIT estimation, we propose a distributed compressive CSIT estimation scheme so that the compressed measurements are observed at the users locally, while the CSIT recovery is performed at the base station jointly. A joint orthogonal matching pursuit recovery algorithm is proposed to perform the CSIT recovery, with the capability of exploiting the hidden joint sparsity in the user channel matrices. We analyze the obtained CSIT quality in terms of the normalized mean absolute error, and through the closed-form expressions, we obtain simple insights into how the joint channel sparsity can be exploited to improve the CSIT recovery performance.
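
    A single-user orthogonal matching pursuit step conveys the flavour of the recovery stage. The sketch below is a plain OMP for one user, not the joint algorithm in the paper; extending it to exploit the shared support across users is the paper's contribution.

```python
import numpy as np

def omp(Phi, y, sparsity):
    """Orthogonal matching pursuit: recover a `sparsity`-sparse channel h
    from compressed pilot measurements y = Phi @ h."""
    residual, support = y.astype(complex), []
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(Phi.conj().T @ residual))))
        sub = Phi[:, support]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        residual = y - sub @ coef
    h = np.zeros(Phi.shape[1], dtype=complex)
    h[support] = coef
    return h
```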

  16. Predictive Factors for Vision Recovery after Optic Nerve Decompression for Chronic Compressive Neuropathy: Systematic Review and Meta-Analysis

    PubMed Central

    Carlson, Andrew P.; Stippler, Martina; Myers, Orrin

    2012-01-01

    Objectives Surgical optic nerve decompression for chronic compressive neuropathy results in variable success of vision improvement. We sought to determine the effects of various factors using meta-analysis of the available literature. Design Systematic review of MEDLINE databases for the period 1990 to 2010. Setting Academic research center. Participants Studies reporting patients with vision loss from chronic compressive neuropathy undergoing surgery. Main outcome measures Vision outcome reported by each study. Odds ratios (ORs) and 95% confidence intervals (CIs) for predictor variables were calculated. Overall odds ratios were then calculated for each factor, adjusting for inter-study heterogeneity. Results Seventy-six studies were identified. Factors with significant odds of improvement were: less severe vision loss (OR 2.31 [95% CI = 1.76 to 3.04]), no disc atrophy (OR 2.60 [95% CI = 1.17 to 5.81]), smaller size (OR 1.82 [95% CI = 1.22 to 2.73]), primary tumor resection (not recurrent) (OR 3.08 [95% CI = 1.84 to 5.14]), no cavernous sinus extension (OR 1.88 [95% CI = 1.03 to 3.43]), soft consistency (OR 4.91 [95% CI = 2.27 to 10.63]), presence of an arachnoid plane (OR 5.60 [95% CI = 2.08 to 15.07]), and more extensive resection (OR 0.61 [95% CI = 0.4 to 0.93]). Conclusions Ophthalmologic factors and factors directly related to the lesion are most important in determining vision outcome. The decision to perform optic nerve decompression for vision loss should be made based on careful examination of the patient and realistic discussion regarding the probability of improvement. PMID:24436885

  17. 30 CFR 75.1730 - Compressed air; general; compressed air systems.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 30 Mineral Resources 1 2013-07-01 2013-07-01 false Compressed air; general; compressed air systems... Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped with...

  18. 30 CFR 75.1730 - Compressed air; general; compressed air systems.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Compressed air; general; compressed air systems... Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped with...

  19. 30 CFR 75.1730 - Compressed air; general; compressed air systems.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 30 Mineral Resources 1 2014-07-01 2014-07-01 false Compressed air; general; compressed air systems... Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped with...

  20. 30 CFR 75.1730 - Compressed air; general; compressed air systems.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Compressed air; general; compressed air systems... Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped with...

  1. Is Weight Gain after Smoking Cessation Inevitable?

    ERIC Educational Resources Information Center

    Talcott, Gerald W.; And Others

    1995-01-01

    Studied weight gain after smoking cessation in a naturalistic setting where all smokers quit and risk factors for postcessation weight gain were modified. Results showed no significant weight changes for smokers who quit. Suggests that an intensive program featuring dietary guidelines and increased physical activity can attenuate weight gain. (RJM)

  2. Adaptive bit plane quadtree-based block truncation coding for image compression

    NASA Astrophysics Data System (ADS)

    Li, Shenda; Wang, Jin; Zhu, Qing

    2018-04-01

    Block truncation coding (BTC) is a fast image compression technique applied in the spatial domain. Traditional BTC and its variants mainly focus on reducing computational complexity for low bit rate compression, at the cost of lower quality of decoded images, especially for images with rich texture. To solve this problem, in this paper, a quadtree-based block truncation coding algorithm combined with adaptive bit plane transmission is proposed. First, the direction of the edge in each block is detected using the Sobel operator. For the block with minimal size, an adaptive bit plane is utilized to optimize the BTC, which depends on its MSE loss encoded by absolute moment block truncation coding (AMBTC). Extensive experimental results show that our method gains 0.85 dB PSNR on average compared to some other state-of-the-art BTC variants, so it is desirable for real-time image compression applications.
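
    For reference, the AMBTC building block the method relies on is compact enough to sketch. The code below encodes and decodes a single block in the classic AMBTC way (bit plane plus high/low group means); it is illustrative and does not include the paper's quadtree or adaptive bit-plane logic.

```python
import numpy as np

def ambtc_encode(block):
    """AMBTC for one block: a bit plane (pixel >= block mean) plus the
    means of the 'high' and 'low' pixel groups."""
    mean = block.mean()
    bitplane = block >= mean
    high = block[bitplane].mean() if bitplane.any() else mean
    low = block[~bitplane].mean() if (~bitplane).any() else mean
    return bitplane, high, low

def ambtc_decode(bitplane, high, low):
    return np.where(bitplane, high, low)
```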

  3. Science-based Region-of-Interest Image Compression

    NASA Technical Reports Server (NTRS)

    Wagstaff, K. L.; Castano, R.; Dolinar, S.; Klimesh, M.; Mukai, R.

    2004-01-01

    As the number of currently active space missions increases, so does competition for Deep Space Network (DSN) resources. Even given unbounded DSN time, power and weight constraints onboard the spacecraft limit the maximum possible data transmission rate. These factors highlight a critical need for very effective data compression schemes. Images tend to be the most bandwidth-intensive data, so image compression methods are particularly valuable. In this paper, we describe a method for prioritizing regions in an image based on their scientific value. Using a wavelet compression method that can incorporate priority information, we ensure that the highest priority regions are transmitted with the highest fidelity.

  4. Shear wave pulse compression for dynamic elastography using phase-sensitive optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Nguyen, Thu-Mai; Song, Shaozhen; Arnal, Bastien; Wong, Emily Y.; Huang, Zhihong; Wang, Ruikang K.; O'Donnell, Matthew

    2014-01-01

    Assessing the biomechanical properties of soft tissue provides clinically valuable information to supplement conventional structural imaging. In the previous studies, we introduced a dynamic elastography technique based on phase-sensitive optical coherence tomography (PhS-OCT) to characterize submillimetric structures such as skin layers or ocular tissues. Here, we propose to implement a pulse compression technique for shear wave elastography. We performed shear wave pulse compression in tissue-mimicking phantoms. Using a mechanical actuator to generate broadband frequency-modulated vibrations (1 to 5 kHz), induced displacements were detected at an equivalent frame rate of 47 kHz using a PhS-OCT. The recorded signal was digitally compressed to a broadband pulse. Stiffness maps were then reconstructed from spatially localized estimates of the local shear wave speed. We demonstrate that a simple pulse compression scheme can increase shear wave detection signal-to-noise ratio (>12 dB gain) and reduce artifacts in reconstructing stiffness maps of heterogeneous media.
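
    The digital compression step amounts to matched filtering against the transmitted chirp. The sketch below is a generic illustration with SciPy, not the authors' processing chain: a 1-5 kHz chirp is correlated with a detected displacement trace to collapse the long excitation into a short broadband pulse; the sampling rate and durations are assumptions.

```python
import numpy as np
from scipy.signal import chirp, correlate

fs = 47_000                                    # assumed equivalent frame rate, Hz
t = np.arange(0, 0.02, 1 / fs)                 # 20 ms excitation window
tx = chirp(t, f0=1_000, t1=t[-1], f1=5_000)    # 1-5 kHz frequency-modulated push

def compress_shear_wave(displacement):
    """Matched-filter (cross-correlate) the measured displacement trace with
    the transmitted chirp to obtain a compressed broadband pulse."""
    return correlate(displacement, tx, mode="same")
```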

  5. Boundary conditions for the solution of compressible Navier-Stokes equations by an implicit factored method

    NASA Technical Reports Server (NTRS)

    Shih, T. I.-P.; Smith, G. E.; Springer, G. S.; Rimon, Y.

    1983-01-01

    A method is presented for formulating the boundary conditions in implicit finite-difference form needed for obtaining solutions to the compressible Navier-Stokes equations by the Beam and Warming implicit factored method. The usefulness of the method was demonstrated (a) by establishing the boundary conditions applicable to the analysis of the flow inside an axisymmetric piston-cylinder configuration and (b) by calculating velocities and mass fractions inside the cylinder for different geometries and different operating conditions. Stability, selection of time step and grid sizes, and computer time requirements are discussed in reference to the piston-cylinder problem analyzed.

  6. A 1-channel 3-band wide dynamic range compression chip for vibration transducer of implantable hearing aids.

    PubMed

    Kim, Dongwook; Seong, Kiwoong; Kim, Myoungnam; Cho, Jinho; Lee, Jyunghyun

    2014-01-01

    In this paper, a digital audio processing chip using a wide dynamic range compression (WDRC) algorithm is designed and implemented for an implantable hearing aid system. The designed chip operates from a single 3.3 V supply and drives 16-bit parallel input and output at a 32 kHz sample rate. The chip implements a 1-channel, 3-band WDRC composed of a FIR filter bank, a level detector, and a compression part. To verify the performance of the designed chip, we measured the frequency separation of the bands and the compression gain control that reflects the hearing threshold level.
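
    The compression part of such a band can be summarised by a static gain rule plus a level detector. The sketch below is a generic software model, not the chip's fixed-point implementation; the knee point, ratio and time constants are illustrative.

```python
import numpy as np

def wdrc_gain_db(level_db, knee_db=45.0, ratio=3.0, linear_gain_db=20.0):
    """Static WDRC curve for one band: constant gain below the compression
    knee, gain reduced by (1 - 1/ratio) dB per dB above it."""
    over = np.maximum(level_db - knee_db, 0.0)
    return linear_gain_db - over * (1.0 - 1.0 / ratio)

def process_band(x, fs=32_000, attack=0.005, release=0.050):
    """Apply the gain sample by sample with a simple attack/release level detector."""
    level, out = 1e-6, np.empty_like(x, dtype=float)
    for n, s in enumerate(x):
        tau = attack if abs(s) > level else release
        alpha = np.exp(-1.0 / (tau * fs))
        level = alpha * level + (1.0 - alpha) * abs(s)
        gain = 10.0 ** (wdrc_gain_db(20.0 * np.log10(level)) / 20.0)
        out[n] = gain * s
    return out
```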

  7. Determination of the effective refractive index spectrum of a quantum-well semiconductor laser diode from the measured modal gain spectrum

    NASA Astrophysics Data System (ADS)

    Wu, Linzhang; Tian, Wei; Gao, Feng

    2004-09-01

    This paper presents a self-consistent method to directly determine the effective refractive-index spectrum of a semiconductor quantum-well (QW) laser diode from the measured modal gain spectrum for a given current. The dispersion spectra of the optical waveguide confinement factor and of the strongly carrier-density-dependent refractive index of the QW active layer of the test laser are also accurately obtained. An experimental result from a single-QW GaInP/AlGaInP laser diode, which has a 6 nm thick compressively strained Ga0.4InP active layer sandwiched between two 80 nm thick Al0.33GaInP layers, is presented.

  8. Combining image-processing and image compression schemes

    NASA Technical Reports Server (NTRS)

    Greenspan, H.; Lee, M.-C.

    1995-01-01

    An investigation into combining image-processing schemes, specifically an image enhancement scheme, with existing compression schemes is discussed. Results are presented for the pyramid coding scheme, the subband coding scheme, and progressive transmission. Encouraging results are demonstrated for the combination of image enhancement and pyramid image coding schemes, especially at low bit rates. Adding the enhancement scheme to progressive image transmission allows enhanced visual perception at low resolutions. In addition, further processing of the transmitted images, such as edge detection schemes, can benefit from the added image resolution provided by the enhancement.

  9. Lossy compression of weak lensing data

    DOE PAGES

    Vanderveld, R. Ali; Bernstein, Gary M.; Stoughton, Chris; ...

    2011-07-12

    Future orbiting observatories will survey large areas of sky in order to constrain the physics of dark matter and dark energy using weak gravitational lensing and other methods. Lossy compression of the resultant data will improve the cost and feasibility of transmitting the images through the space communication network. We evaluate the consequences of the lossy compression algorithm of Bernstein et al. (2010) for the high-precision measurement of weak-lensing galaxy ellipticities. This square-root algorithm compresses each pixel independently, and the information discarded is by construction less than the Poisson error from photon shot noise. For simulated space-based images (without cosmic rays) digitized to the typical 16 bits per pixel, application of the lossy compression followed by image-wise lossless compression yields images with only 2.4 bits per pixel, a factor of 6.7 compression. We demonstrate that this compression introduces no bias in the sky background. The compression introduces a small amount of additional digitization noise to the images, and we demonstrate a corresponding small increase in ellipticity measurement noise. The ellipticity measurement method is biased by the addition of noise, so the additional digitization noise is expected to induce a multiplicative bias on the galaxies' measured ellipticities. After correcting for this known noise-induced bias, we find a residual multiplicative ellipticity bias of m ≈ -4 × 10^-4. This bias is small when compared to the many other issues that precision weak lensing surveys must confront, and furthermore we expect it to be reduced further with better calibration of ellipticity measurement methods.
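
    The square-root idea itself is short enough to sketch. The code below is a generic Anscombe-style variance-stabilizing quantizer, not the exact Bernstein et al. (2010) algorithm: after the transform the photon shot noise is roughly unity, so quantizing with a step below one discards less than the Poisson error, and the resulting small integers compress well losslessly.

```python
import numpy as np

def sqrt_compress(counts, step=0.5):
    """Quantize 2*sqrt(counts + 3/8); with noise ~1 after the transform,
    a step < 1 keeps the quantization error below the shot noise."""
    return np.round(2.0 * np.sqrt(counts + 3.0 / 8.0) / step).astype(np.int32)

def sqrt_decompress(codes, step=0.5):
    return (codes * step / 2.0) ** 2 - 3.0 / 8.0
```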

  10. High-performance compression of astronomical images

    NASA Technical Reports Server (NTRS)

    White, Richard L.

    1993-01-01

    Astronomical images have some rather unusual characteristics that make many existing image compression techniques either ineffective or inapplicable. A typical image consists of a nearly flat background sprinkled with point sources and occasional extended sources. The images are often noisy, so that lossless compression does not work very well; furthermore, the images are usually subjected to stringent quantitative analysis, so any lossy compression method must be proven not to discard useful information, but must instead discard only the noise. Finally, the images can be extremely large. For example, the Space Telescope Science Institute has digitized photographic plates covering the entire sky, generating 1500 images each having 14000 x 14000 16-bit pixels. Several astronomical groups are now constructing cameras with mosaics of large CCD's (each 2048 x 2048 or larger); these instruments will be used in projects that generate data at a rate exceeding 100 MBytes every 5 minutes for many years. An effective technique for image compression may be based on the H-transform (Fritze et al. 1977). The method that we have developed can be used for either lossless or lossy compression. The digitized sky survey images can be compressed by at least a factor of 10 with no noticeable losses in the astrometric and photometric properties of the compressed images. The method has been designed to be computationally efficient: compression or decompression of a 512 x 512 image requires only 4 seconds on a Sun SPARCstation 1. The algorithm uses only integer arithmetic, so it is completely reversible in its lossless mode, and it could easily be implemented in hardware for space applications.
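
    A single level of the transform behind this method can be sketched as 2x2 sums and differences. The code below is a Haar-style stand-in for one H-transform level on an even-sized image, followed by the coarse quantization of the difference bands that makes the scheme lossy; it is not the full recursive, integer-exact implementation described above.

```python
import numpy as np

def h_transform_level(img):
    """One 2x2 sum/difference level: smooth band plus horizontal, vertical
    and diagonal difference bands (image dimensions assumed even)."""
    a = img[0::2, 0::2].astype(np.int32)
    b = img[0::2, 1::2].astype(np.int32)
    c = img[1::2, 0::2].astype(np.int32)
    d = img[1::2, 1::2].astype(np.int32)
    return a + b + c + d, a - b + c - d, a + b - c - d, a - b - c + d

def quantize(band, q):
    """Lossy step: coarse quantization of a difference band suppresses noise."""
    return np.round(band / q).astype(np.int32)
```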

  11. Higher Pre-pregnancy BMI and Excessive Gestational Weight Gain are Risk Factors for Rapid Weight Gain in Infants.

    PubMed

    Subhan, Fatheema Begum; Colman, Ian; McCargar, Linda; Bell, Rhonda C

    2017-06-01

    Objective To describe the effects of maternal pre-pregnancy body mass index (BMI) and gestational weight gain (GWG) on infant anthropometrics at birth and 3 months and infant growth rates between birth and 3 months. Methods Body weight prior to and during pregnancy and infant weight and length at birth and 3 months were collected from 600 mother-infant pairs. Adherence to GWG was based on IOM recommendations. Age and sex specific z-scores were calculated for infant weight and length at birth and 3 months. Rapid postnatal growth was defined as a difference of >0.67 in weight-for-age z-score between birth and 3 months. Relationships between maternal and infant characteristics were analysed using multilinear regression. Results Most women (65%) had a normal pre-pregnancy BMI and 57% gained above GWG recommendations. Infants were 39.3 ± 1.2 weeks and 3431 ± 447.9 g at birth. At 3 months postpartum 60% were exclusively breast fed while 38% received breast milk and formula. Having a pre-pregnancy BMI >25 kg/m² was associated with higher z-scores for birth weight and weight-for-age at 3 months. Gaining above recommendations was associated with higher z-scores for birth weight, weight-for-age and BMI. Infants who experienced rapid postnatal growth had higher odds of being born to women who gained above recommendations. Conclusion for Practice Excessive GWG is associated with higher birth weight and rapid weight gain in infants. Interventions that optimize GWG should explore effects on total and rates of early infant growth.

  12. [Antipsychotic-induced weight gain--pharmacogenetic studies].

    PubMed

    Olajossy-Hilkesberger, Luiza; Godlewska, Beata; Marmurowska-Michałowskal, Halina; Olajossy, Marcin; Landowski, Jerzy

    2006-01-01

    Drug-naive patients with schizophrenia often present metabolic abnormalities and obesity. Weight gain may be a side effect of treatment with many antipsychotic drugs. Genetic effects, besides many other factors, are known to influence obesity in patients with schizophrenia treated with antipsychotics. Numerous studies of polymorphisms in several genes have been performed. The -759C/T polymorphism of the 5HT2C gene has attracted most attention. In 5 independent studies of this polymorphism an association of the T allele with lower AP-induced weight gain was detected. No associations could be detected between weight gain and other polymorphisms of serotonergic system genes or histaminergic system genes. Studies of the adrenergic and dopaminergic systems have not produced unambiguous results either. Analysis of the newest candidate genes (SAP-25, leptin gene) confirmed the role of genetic factors in AP-induced weight gain. It is worth emphasising that the studies have been conducted in relatively small and heterogeneous groups and that various treatment strategies were used.

  13. Numerical study on the maximum small-signal gain coefficient in passively mode-locked fiber lasers

    NASA Astrophysics Data System (ADS)

    Tang, Xin; Wang, Jian; Chen, Zhaoyang; Lin, Chengyou; Ding, Yingchun

    2017-06-01

    Ultrashort pulses have been found to have important applications in many fields, such as ultrafast diagnosis, biomedical engineering, and optical imaging. Passively mode-locked fiber lasers have become a tool for generating picosecond and femtosecond pulses. In this paper, the evolution of a picosecond laser pulse in different stable passively mode-locked fiber lasers is analyzed using the nonlinear Schrödinger equation. First, different mode-locked regimes are calculated for different net cavity dispersions (from -0.3 ps² to +0.3 ps²). Then we calculate the maximum small-signal gain for the different net cavity dispersion conditions, and estimate the pulse width, 3 dB bandwidth and time-bandwidth product (TBP) when the small-signal gain coefficient is set to this maximum value. The results show that the small-signal gain coefficient is approximately proportional to the net cavity dispersion. Moreover, when the small-signal gain coefficient reaches the maximum value, the pulse width of the output pulse and the corresponding TBP increase gradually, while the 3 dB bandwidth first increases and then decreases. In addition, when the net dispersion is positive the pulse carries a large frequency chirp, so dechirping of the output pulse is investigated; the dechirped pulse is compressed by a factor of more than 10. The results provide a reference for the optimization of passively mode-locked fiber lasers.

  14. Pulse compression using a tapered microstructure optical fiber.

    PubMed

    Hu, Jonathan; Marks, Brian S; Menyuk, Curtis R; Kim, Jinchae; Carruthers, Thomas F; Wright, Barbara M; Taunay, Thierry F; Friebele, E J

    2006-05-01

    We calculate the pulse compression in a tapered microstructure optical fiber with four layers of holes. We show that the primary limitation on pulse compression is the loss due to mode leakage. As a fiber's diameter decreases due to the tapering, so does the air-hole diameter, and at a sufficiently small diameter the guided mode loss becomes unacceptably high. For the four-layer geometry we considered, a compression factor of 10 can be achieved by a pulse with an initial FWHM duration of 3 ps in a tapered fiber that is 28 m long. We find that there is little difference in the pulse compression between a linear taper profile and a Gaussian taper profile. More layers of air-holes allows the pitch to decrease considerably before losses become unacceptable, but only a moderate increase in the degree of pulse compression is obtained.

  15. Issues with Strong Compression of Plasma Target by Stabilized Imploding Liner

    NASA Astrophysics Data System (ADS)

    Turchi, Peter; Frese, Sherry; Frese, Michael

    2017-10-01

    Calculations of strong compression (10:1 in radius) of an FRC by imploding liquid metal liners stabilized against Rayleigh-Taylor modes, using different loss scalings based on Bohm vs. 100X classical diffusion rates, predict useful compressions with implosion times half the initial energy lifetime. The elongation (length-to-diameter ratio) near peak compression needed to satisfy the empirical stability criterion and also retain alpha particles is about ten. The present paper extends these considerations to issues of the initial FRC, including stability conditions (S*/E) and allowable angular speeds. Furthermore, efficient recovery of the implosion energy and alpha-particle work, in order to reduce the nuclear gain necessary for an economical power reactor, is seen as an important element of the stabilized liner implosion concept for fusion. We describe recent progress in the design and construction of the high energy-density prototype of a Stabilized Liner Compressor (SLC) leading to repetitive laboratory experiments to develop the plasma target. Supported by the ARPA-E ALPHA Program.

  16. SAR data compression: Application, requirements, and designs

    NASA Technical Reports Server (NTRS)

    Curlander, John C.; Chang, C. Y.

    1991-01-01

    The feasibility of reducing data volume and data rate is evaluated for the Earth Observing System (EOS) Synthetic Aperture Radar (SAR). All elements of data stream from the sensor downlink data stream to electronic delivery of browse data products are explored. The factors influencing design of a data compression system are analyzed, including the signal data characteristics, the image quality requirements, and the throughput requirements. The conclusion is that little or no reduction can be achieved in the raw signal data using traditional data compression techniques (e.g., vector quantization, adaptive discrete cosine transform) due to the induced phase errors in the output image. However, after image formation, a number of techniques are effective for data compression.

  17. Compression embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.

    1998-01-01

    A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and uncertainty in value by one unit. Indices which are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use lossless compression, known also as entropy coding, to reduce the intermediate representation as indices to its final size. The efficiency of the entropy coding is increased by manipulating the indices at the intermediate stage in the manner taught by the method.

  18. Optimum SNR data compression in hardware using an Eigencoil array.

    PubMed

    King, Scott B; Varosi, Steve M; Duensing, G Randy

    2010-05-01

    With the number of receivers available on clinical MRI systems now ranging from 8 to 32 channels, data compression methods are being explored to lessen the demands on the computer for data handling and processing. Although software-based methods of compression after reception lessen computational requirements, a hardware-based method before the receiver also reduces the number of receive channels required. An eight-channel Eigencoil array is constructed by placing a hardware radiofrequency signal combiner inline after preamplification, before the receiver system. The Eigencoil array produces signal-to-noise ratio (SNR) of an optimal reconstruction using a standard sum-of-squares reconstruction, with peripheral SNR gains of 30% over the standard array. The concept of "receiver channel reduction" or MRI data compression is demonstrated, with optimal SNR using only four channels, and with a three-channel Eigencoil, superior sum-of-squares SNR was achieved over the standard eight-channel array. A three-channel Eigencoil portion of a product neurovascular array confirms in vivo SNR performance and demonstrates parallel MRI up to R = 3. This SNR-preserving data compression method advantageously allows users of MRI systems with fewer receiver channels to achieve the SNR of higher-channel MRI systems. (c) 2010 Wiley-Liss, Inc.
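
    A software analogue of the hardware channel combiner can be sketched with an SVD. The code below is not the Eigencoil design: it simply projects the physical channels onto the dominant singular vectors of the channel-by-sample data matrix, which is a common way to compress receive channels while retaining most of the SNR; the channel counts are illustrative.

```python
import numpy as np

def compress_channels(data, n_virtual=4):
    """Project a channels x samples (complex) data matrix onto its n_virtual
    dominant singular vectors, emulating a fixed hardware combiner in software."""
    U, s, Vh = np.linalg.svd(data, full_matrices=False)
    retained = (s[:n_virtual] ** 2).sum() / (s ** 2).sum()   # fraction of energy kept
    weights = U[:, :n_virtual]                               # combiner weights
    return weights.conj().T @ data, retained
```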

  19. Pulse compression at 1.06 μm in dispersion-decreasing holey fibers

    NASA Astrophysics Data System (ADS)

    Tse, M. L. V.; Horak, P.; Price, J. H. V.; Poletti, F.; He, F.; Richardson, D. J.

    2006-12-01

    We report compression of low-power femtosecond pulses at 1.06 μm in a dispersion-decreasing holey fiber. Near-adiabatic compression of 130 fs pulses down to 60 fs has been observed. Measured spectra and pulse shapes agree well with numerical simulations. Compression factors of ten are possible in optimized fibers.

  20. CURRENT CONCEPTS AND TREATMENT OF PATELLOFEMORAL COMPRESSIVE ISSUES.

    PubMed

    Mullaney, Michael J; Fukunaga, Takumi

    2016-12-01

    Patellofemoral disorders, commonly encountered in sports and orthopedic rehabilitation settings, may result from dysfunction in patellofemoral joint compression. Osseous and soft tissue factors, as well as the mechanical interaction of the two, contribute to increased patellofemoral compression and pain. Treatment of patellofemoral compressive issues is based on identification of contributory impairments. Use of reliable tests and measures is essential in detecting impairments in hip flexor, quadriceps, iliotibial band, hamstrings, and gastrocnemius flexibility, as well as in joint mobility, myofascial restrictions, and proximal muscle weakness. Once relevant impairments are identified, a combination of manual techniques, instrument-assisted methods, and therapeutic exercises are used to address the impairments and promote functional improvements. The purpose of this clinical commentary is to describe the clinical presentation, contributory considerations, and interventions to address patellofemoral joint compressive issues.

  1. CURRENT CONCEPTS AND TREATMENT OF PATELLOFEMORAL COMPRESSIVE ISSUES

    PubMed Central

    Fukunaga, Takumi

    2016-01-01

    Patellofemoral disorders, commonly encountered in sports and orthopedic rehabilitation settings, may result from dysfunction in patellofemoral joint compression. Osseous and soft tissue factors, as well as the mechanical interaction of the two, contribute to increased patellofemoral compression and pain. Treatment of patellofemoral compressive issues is based on identification of contributory impairments. Use of reliable tests and measures is essential in detecting impairments in hip flexor, quadriceps, iliotibial band, hamstrings, and gastrocnemius flexibility, as well as in joint mobility, myofascial restrictions, and proximal muscle weakness. Once relevant impairments are identified, a combination of manual techniques, instrument-assisted methods, and therapeutic exercises are used to address the impairments and promote functional improvements. The purpose of this clinical commentary is to describe the clinical presentation, contributory considerations, and interventions to address patellofemoral joint compressive issues. PMID:27904792

  2. Compression embedding

    DOEpatents

    Sandford, M.T. II; Handel, T.G.; Bradley, J.N.

    1998-03-10

    A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique is disclosed. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and uncertainty in value by one unit. Indices which are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use lossless compression, known also as entropy coding, to reduce the intermediate representation as indices to its final size. The efficiency of the entropy coding is increased by manipulating the indices at the intermediate stage in the manner taught by the method. 11 figs.

  3. Potential capabilities for compression of information of certain data processing systems

    NASA Technical Reports Server (NTRS)

    Khodarev, Y. K.; Yevdokimov, V. P.; Pokras, V. M.

    1974-01-01

    This article studies a generalized block diagram of a spacecraft data collection and processing system in which a number of sensors or outputs of scientific instruments are cyclically interrogated by a commutator, methods of writing the supplementary information into a frame using the example of a hypothetical telemetry system, and the influence of the statistics of the number of active channels in a frame on the frame compression factor. Separating the data compression factor of the spacecraft collection and processing system into two parts, as done in this work, allows the compression factor of an active frame to be determined as a function not only of the statistics of channel activity in the telemetry frame, but also of the method used to introduce the additional address and time information into each frame.

  4. Compressed domain indexing of losslessly compressed images

    NASA Astrophysics Data System (ADS)

    Schaefer, Gerald

    2001-12-01

    Image retrieval and image compression have been pursued separately in the past. Only little research has been done on a synthesis of the two by allowing image retrieval to be performed directly in the compressed domain of images without the need to uncompress them first. In this paper methods for image retrieval in the compressed domain of losslessly compressed images are introduced. While most image compression techniques are lossy, i.e. discard visually less significant information, lossless techniques are still required in fields like medical imaging or in situations where images must not be changed due to legal reasons. The algorithms in this paper are based on predictive coding methods where a pixel is encoded based on the pixel values of its (already encoded) neighborhood. The first method is based on an understanding that predictively coded data is itself indexable and represents a textural description of the image. The second method operates directly on the entropy encoded data by comparing codebooks of images. Experiments show good image retrieval results for both approaches.

  5. Radiological Image Compression

    NASA Astrophysics Data System (ADS)

    Lo, Shih-Chung Benedict

    The movement toward digital images in radiology presents the problem of how to conveniently and economically store, retrieve, and transmit the volume of digital images. Basic research into image data compression is necessary in order to move from a film-based department to an efficient digital-based department. Digital data compression technology consists of two types of compression technique: error-free and irreversible. Error-free image compression is desired; however, present techniques can only achieve compression ratios of 1.5:1 to 3:1, depending upon the image characteristics. Irreversible image compression can achieve a much higher compression ratio; however, the image reconstructed from the compressed data shows some difference from the original image. This dissertation studies both error-free and irreversible image compression techniques. In particular, some modified error-free techniques have been tested and the recommended strategies for various radiological images are discussed. A full-frame bit-allocation irreversible compression technique has been derived. A total of 76 images which include CT head and body, and radiographs digitized to 2048 x 2048, 1024 x 1024, and 512 x 512 have been used to test this algorithm. The normalized mean-square-error (NMSE) on the difference image, defined as the difference between the original and the reconstructed image from a given compression ratio, is used as a global measurement of the quality of the reconstructed image. The NMSE's of a total of 380 reconstructed and 380 difference images are measured and the results tabulated. Three complex compression methods are also suggested to compress images with special characteristics. Finally, various parameters which would affect the quality of the reconstructed images are discussed. A proposed hardware compression module is given in the last chapter.

  6. Compression in Working Memory and Its Relationship With Fluid Intelligence.

    PubMed

    Chekaf, Mustapha; Gauvrit, Nicolas; Guida, Alessandro; Mathy, Fabien

    2018-06-01

    Working memory has been shown to be strongly related to fluid intelligence; however, our goal is to shed further light on the process of information compression in working memory as a determining factor of fluid intelligence. Our main hypothesis was that compression in working memory is an excellent indicator for studying the relationship between working-memory capacity and fluid intelligence because both depend on the optimization of storage capacity. Compressibility of memoranda was estimated using an algorithmic complexity metric. The results showed that compressibility can be used to predict working-memory performance and that fluid intelligence is well predicted by the ability to compress information. We conclude that the ability to compress information in working memory is the reason why both manipulation and retention of information are linked to intelligence. This result offers a new concept of intelligence based on the idea that compression and intelligence are equivalent problems. Copyright © 2018 Cognitive Science Society, Inc.
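
    A common practical proxy for the algorithmic complexity metric used to estimate compressibility is the length of a standard compressor's output. The sketch below is illustrative only and is not the metric used in the study: it scores a memorandum by its zlib compression ratio, so regular sequences score low (compressible) and irregular ones score higher.

```python
import zlib

def compressibility(sequence):
    """Crude complexity proxy: compressed length divided by raw length
    (lower = more regular, hence easier to chunk/compress in memory)."""
    raw = "".join(map(str, sequence)).encode()
    return len(zlib.compress(raw, 9)) / len(raw)

print(compressibility("ABABABABABABABABABAB"))   # regular span, low score
print(compressibility("QKXZPMRLTVBWGHNJCDSY"))   # irregular span, higher score
```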

  7. Filtered gradient reconstruction algorithm for compressive spectral imaging

    NASA Astrophysics Data System (ADS)

    Mejia, Yuri; Arguello, Henry

    2017-04-01

    Compressive sensing matrices are traditionally based on random Gaussian and Bernoulli entries. Nevertheless, they are subject to physical constraints, and their structure unusually follows a dense matrix distribution, such as the case of the matrix related to compressive spectral imaging (CSI). The CSI matrix represents the integration of coded and shifted versions of the spectral bands. A spectral image can be recovered from CSI measurements by using iterative algorithms for linear inverse problems that minimize an objective function including a quadratic error term combined with a sparsity regularization term. However, current algorithms are slow because they do not exploit the structure and sparse characteristics of the CSI matrices. A gradient-based CSI reconstruction algorithm, which introduces a filtering step in each iteration of a conventional CSI reconstruction algorithm that yields improved image quality, is proposed. Motivated by the structure of the CSI matrix, Φ, this algorithm modifies the iterative solution such that it is forced to converge to a filtered version of the residual Φᵀy, where y is the compressive measurement vector. We show that the filtered-based algorithm converges to better quality performance results than the unfiltered version. Simulation results highlight the relative performance gain over the existing iterative algorithms.
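
    The iteration the paper builds on can be sketched generically. The code below is a simplified Landweber-style variant with a smoothing filter applied to the running estimate each iteration, to convey the filtered-gradient idea; it is not the authors' algorithm, and the filter, step size and iteration count are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def filtered_gradient_recover(Phi, y, shape, iters=200, step=1e-3):
    """Gradient steps on ||y - Phi x||^2 with a filtering (smoothing) step
    applied to the estimate after each update."""
    x = np.zeros(int(np.prod(shape)))
    for _ in range(iters):
        x = x + step * (Phi.T @ (y - Phi @ x))                 # data-fit gradient step
        x = uniform_filter(x.reshape(shape), size=3).ravel()   # filtering step
    return x.reshape(shape)
```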

  8. Enabling Near Real-Time Remote Search for Fast Transient Events with Lossy Data Compression

    NASA Astrophysics Data System (ADS)

    Vohl, Dany; Pritchard, Tyler; Andreoni, Igor; Cooke, Jeffrey; Meade, Bernard

    2017-09-01

    We present a systematic evaluation of JPEG2000 (ISO/IEC 15444) as a transport data format to enable rapid remote searches for fast transient events as part of the Deeper Wider Faster programme. The Deeper Wider Faster programme uses 20 telescopes from radio to gamma rays to perform simultaneous and rapid-response follow-up searches for fast transient events on millisecond-to-hours timescales. The programme's search demands have a set of constraints that is becoming common amongst large collaborations. Here, we focus on the rapid optical data component of the Deeper Wider Faster programme led by the Dark Energy Camera at Cerro Tololo Inter-American Observatory. Each Dark Energy Camera image comprises 70 charge-coupled devices saved as a 1.2 gigabyte FITS file. Near real-time data processing and fast transient candidate identification (in minutes, for rapid follow-up triggers on other telescopes) require computational power exceeding what is currently available on-site at Cerro Tololo Inter-American Observatory. In this context, data files need to be transmitted rapidly to a foreign location for supercomputing post-processing, source finding, visualisation and analysis. This step in the search process poses a major bottleneck, and reducing the data size helps accommodate faster data transmission. To maximise our gain in transfer time and still achieve our science goals, we opt for lossy data compression, keeping in mind that raw data is archived and can be evaluated at a later time. We evaluate how lossy JPEG2000 compression affects the process of finding transients, and find only a negligible effect for compression ratios up to 25:1. We also find a linear relation between compression ratio and the mean estimated data transmission speed-up factor. Adding highly customised compression and decompression steps to the science pipeline considerably reduces the transmission time, validating its introduction to the Deeper Wider Faster programme science pipeline and
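
    A small back-of-the-envelope sketch of how compression ratio maps to transfer speed-up for a fixed link; the 1.2 GB file size comes from the abstract, while the 100 Mbit/s link bandwidth is an assumed figure purely for illustration:

    ```python
    def transfer_time_seconds(file_gb: float, link_mbps: float, ratio: float = 1.0) -> float:
        """Time to move one file over a link after compressing it by `ratio`."""
        file_bits = file_gb * 8e9                  # decimal gigabytes -> bits
        return (file_bits / ratio) / (link_mbps * 1e6)

    raw_gb, link = 1.2, 100.0                      # one DECam exposure, assumed link speed
    baseline = transfer_time_seconds(raw_gb, link)
    for r in (1, 5, 10, 25):
        t = transfer_time_seconds(raw_gb, link, r)
        print(f"ratio {r:>2}:1 -> {t:6.1f} s (speed-up x{baseline / t:.0f})")
    ```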

  9. Texture characterization for joint compression and classification based on human perception in the wavelet domain.

    PubMed

    Fahmy, Gamal; Black, John; Panchanathan, Sethuraman

    2006-06-01

    Today's multimedia applications demand sophisticated compression and classification techniques in order to store, transmit, and retrieve audio-visual information efficiently. Over the last decade, perceptually based image compression methods have been gaining importance. These methods take into account the abilities (and the limitations) of human visual perception (HVP) when performing compression. The upcoming MPEG 7 standard also addresses the need for succinct classification and indexing of visual content for efficient retrieval. However, there has been no research that has attempted to exploit the characteristics of the human visual system to perform both compression and classification jointly. One area of HVP that has unexplored potential for joint compression and classification is spatial frequency perception. Spatial frequency content that is perceived by humans can be characterized in terms of three parameters, which are: 1) magnitude; 2) phase; and 3) orientation. While the magnitude of spatial frequency content has been exploited in several existing image compression techniques, the novel contribution of this paper is its focus on the use of phase coherence for joint compression and classification in the wavelet domain. Specifically, this paper describes a human visual system-based method for measuring the degree to which an image contains coherent (perceptible) phase information, and then exploits that information to provide joint compression and classification. Simulation results that demonstrate the efficiency of this method are presented.

  10. Comparison of reversible methods for data compression

    NASA Astrophysics Data System (ADS)

    Heer, Volker K.; Reinfelder, Hans-Erich

    1990-07-01

    Widely differing methods for data compression described in the ACR-NEMA draft are used in medical imaging. In our contribution we review various methods briefly and discuss the relevant advantages and disadvantages. In detail we evaluate 1st-order DPCM, pyramid transformation, and S transformation. We compare as coding algorithms both fixed and adaptive Huffman coding and Lempel-Ziv coding. Our comparison is performed on typical medical images from CT, MR, DSA and DLR (Digital Luminescence Radiography). Apart from the achieved compression factors we take into account the CPU time required and the main memory requirement, both for compression and for decompression. For a realistic comparison we have implemented the mentioned algorithms in the C programming language on a MicroVAX II and a SPARCstation 1.
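
    A minimal sketch of the 1st-order DPCM stage named above, assuming a simple row-wise left-neighbour predictor; the Huffman/Lempel-Ziv entropy-coding back end that the paper evaluates is omitted:

    ```python
    import numpy as np

    def dpcm_encode(image: np.ndarray) -> np.ndarray:
        """First-order DPCM: residual = pixel minus its left neighbour."""
        img = image.astype(np.int32)
        residual = img.copy()
        residual[:, 1:] = img[:, 1:] - img[:, :-1]   # first column stored verbatim
        return residual

    def dpcm_decode(residual: np.ndarray) -> np.ndarray:
        """Invert the predictor by cumulative summation along each row."""
        return np.cumsum(residual, axis=1)

    rng = np.random.default_rng(1)
    img = rng.integers(0, 4096, size=(256, 256))
    assert np.array_equal(dpcm_decode(dpcm_encode(img)), img)   # lossless round trip
    ```

    The residuals, which cluster near zero for smooth medical images, are what a fixed or adaptive Huffman or Lempel-Ziv coder then compresses.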

  11. Nonlinear combining and compression in multicore fibers

    DOE PAGES

    Chekhovskoy, I. S.; Rubenchik, A. M.; Shtyrina, O. V.; ...

    2016-10-25

    In this paper, we demonstrate numerically light-pulse combining and pulse compression using wave-collapse (self-focusing) energy-localization dynamics in a continuous-discrete nonlinear system, as implemented in a multicore fiber (MCF) using one-dimensional (1D) and 2D core distribution designs. Large-scale numerical simulations were performed to determine the conditions of the most efficient coherent combining and compression of pulses injected into the considered MCFs. We demonstrate the possibility of combining in a single core 90% of the total energy of pulses initially injected into all cores of a 7-core MCF with a hexagonal lattice. Finally, a pulse compression factor of about 720 can be obtained with a 19-core ring MCF.

  12. Planning/scheduling techniques for VQ-based image compression

    NASA Technical Reports Server (NTRS)

    Short, Nicholas M., Jr.; Manohar, Mareboyana; Tilton, James C.

    1994-01-01

    The enormous size of the data holdings and the complexity of the information system resulting from the EOS system pose several challenges to computer scientists, one of which is data archival and dissemination. More than ninety percent of the data holdings of NASA are in the form of images which will be accessed by users across the computer networks. Accessing the image data in its full resolution creates data traffic problems. Image browsing using lossy compression reduces this data traffic, as well as storage, by a factor of 30-40. Of the several image compression techniques, VQ is the most appropriate for this application since the decompression of VQ-compressed images is a table lookup process which makes minimal additional demands on the user's computational resources. Lossy compression of image data needs expert-level knowledge in general and is not straightforward to use. This is especially true in the case of VQ. It involves the selection of appropriate codebooks for a given data set, vector dimensions for each compression ratio, etc. A planning and scheduling system is described for using the VQ compression technique in the data access and ingest of raw satellite data.
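
    To illustrate why VQ fits this low-cost browsing scenario, a minimal sketch of encoding as nearest-codeword search and decoding as a pure table lookup; the toy codebook below is an arbitrary subset of the data, standing in for the properly trained codebooks the planning/scheduling system would select:

    ```python
    import numpy as np

    def vq_encode(blocks: np.ndarray, codebook: np.ndarray) -> np.ndarray:
        """Map each block (row vector) to the index of its nearest codeword."""
        d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        return np.argmin(d, axis=1).astype(np.uint16)

    def vq_decode(indices: np.ndarray, codebook: np.ndarray) -> np.ndarray:
        """Decompression is just a table lookup into the codebook."""
        return codebook[indices]

    rng = np.random.default_rng(2)
    blocks = rng.normal(size=(1000, 16))                          # 4x4 image blocks, flattened
    codebook = blocks[rng.choice(1000, size=256, replace=False)]  # toy 256-entry codebook
    recon = vq_decode(vq_encode(blocks, codebook), codebook)      # lossy reconstruction
    ```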

  13. Pressure-induced structural change in liquid GaIn eutectic alloy.

    PubMed

    Yu, Q; Ahmad, A S; Ståhl, K; Wang, X D; Su, Y; Glazyrin, K; Liermann, H P; Franz, H; Cao, Q P; Zhang, D X; Jiang, J Z

    2017-04-25

    Synchrotron x-ray diffraction reveals a pressure-induced crystallization at about 3.4 GPa and a polymorphic transition near 10.3 GPa when a liquid GaIn eutectic alloy is compressed up to ~13 GPa at room temperature in a diamond anvil cell. Upon decompression, the high-pressure crystalline phase remains almost unchanged until it transforms to the liquid state at around 2.3 GPa. The ab initio molecular dynamics calculations can reproduce the low-pressure crystallization and give some hints for understanding the transition between the liquid and the crystalline phase on the atomic level. The calculated pair correlation function g(r) shows a non-uniform contraction, reflected by the different compressibility of the short-range (1st shell) and the intermediate-range (2nd to 4th shells) order. It is concluded that the pressure-induced liquid-crystalline phase transformation likely arises from the changes in local atomic packing of the nearest neighbors as well as electronic structures at the transition pressure.

  14. Qualitative analysis of gain spectra of InGaAlAs/InP lasing nano-heterostructure

    NASA Astrophysics Data System (ADS)

    Lal, Pyare; Yadav, Rashmi; Sharma, Meha; Rahman, F.; Dalela, S.; Alvi, P. A.

    2014-08-01

    This paper deals with the study of the lasing characteristics, along with the gain spectra, of a compressively strained, step-SCH based In0.71Ga0.21Al0.08As/InP lasing nano-heterostructure within the TE polarization mode, taking into account the variation in well width of the single quantum well of the nano-heterostructure. In addition, the conduction and valence band dispersion profiles of the compressively strained quantum well of material composition In0.71Ga0.21Al0.08As, at a temperature of 300 K and a strain of 1.12%, have been studied using a 4 × 4 Luttinger Hamiltonian. For the proposed nano-heterostructure, the quantum-well-width dependence of the differential gain, refractive index change and relaxation oscillation frequency with current density has been studied. Moreover, the G-J characteristics of the nano-heterostructure at different well widths have also been investigated, which provide significant information about the threshold current density, threshold gain and transparency current density. The results obtained in the study of the nano-heterostructure suggest that the gain and relaxation oscillation frequency both decrease with increasing quantum well width, but the required lasing wavelength is found to shift towards higher values. On the basis of the qualitative analysis of the structure, a well width of 6 nm is found to be most suitable for lasing action at the wavelength of 1.55 μm due to minimum optical attenuation and minimum dispersion within the waveguide. The results achieved are, therefore, very important in the emerging area of nano-optoelectronics.
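
    The G-J characteristics mentioned above are often summarized, for quantum-well gain media, with an empirical logarithmic gain-current relation; the form below is that common textbook approximation, with generic symbols (Γ, g0, Jtr, Gth) rather than the paper's fitted values:

    ```latex
    G(J) \approx \Gamma\, g_0 \ln\!\left(\frac{J}{J_{tr}}\right),
    \qquad
    G(J_{th}) = G_{th} \;\Rightarrow\; J_{th} = J_{tr}\,\exp\!\left(\frac{G_{th}}{\Gamma g_0}\right),
    ```

    where Jtr is the transparency current density, Gth the threshold (modal) gain, g0 the gain coefficient and Γ the optical confinement factor.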

  15. Maternal obesity and gestational weight gain are risk factors for infant death

    PubMed Central

    Bodnar, Lisa M.; Siminerio, Lara L.; Himes, Katherine P.; Hutcheon, Jennifer A.; Lash, Timothy L.; Parisi, Sara M.; Abrams, Barbara

    2015-01-01

    Objective To assess the joint and independent relationships of gestational weight gain and prepregnancy body mass index (BMI) on risk of infant mortality. Methods We used Pennsylvania linked birth-infant death records (2003–2011) from infants without anomalies born to underweight (n=58,973), normal weight (n=610,118), overweight (n=296,630), grade 1 obese (n=147,608), grade 2 obese (n=71,740), and grade 3 obese (n=47,277) mothers. Multivariable logistic regression models stratified by BMI category were used to estimate dose-response associations between z-scores of gestational weight gain and infant death after confounder adjustment. Results Infant mortality risk was lowest among normal weight women and increased with rising BMI category. For all BMI groups except for grade 3 obesity, there were U-shaped associations between gestational weight gain and risk of infant death. Weight loss and very low weight gain among women with grade 1 and 2 obesity were associated with high risks of infant mortality. However, even when gestational weight gain in women with obesity was optimized, the predicted risk of infant death remained higher than that of normal weight women. Conclusions Interventions aimed at substantially reducing preconception weight among women with obesity and avoiding very low or very high gestational weight gain may reduce risk of infant death. PMID:26572932

  16. Efficient genotype compression and analysis of large genetic variation datasets

    PubMed Central

    Layer, Ryan M.; Kindlon, Neil; Karczewski, Konrad J.; Quinlan, Aaron R.

    2015-01-01

    Genotype Query Tools (GQT) is a new indexing strategy that expedites analyses of genome variation datasets in VCF format based on sample genotypes, phenotypes and relationships. GQT’s compressed genotype index minimizes decompression for analysis, and performance relative to existing methods improves with cohort size. We show substantial (up to 443 fold) performance gains over existing methods and demonstrate GQT’s utility for exploring massive datasets involving thousands to millions of genomes. PMID:26550772

  17. Personality type influences the gestational weight gain.

    PubMed

    Franik, Grzegorz; Lipka, Nela; Kopyto, Katarzyna; Kopocińska, Joanna; Owczarek, Aleksander; Sikora, Jerzy; Madej, Paweł; Chudek, Jerzy; Olszanecka-Glinianowicz, Magdalena

    2017-08-01

    Pregnancy is frequently followed by the development of obesity. Aside from psychological factors, hormonal changes influence weight gain in pregnant women. We attempted to assess the potential association between personality type and the extent of gestational weight gain. The study group involved 773 women after term delivery (age 26.3 ± 3.9 years, body mass before pregnancy 61.2 ± 11.1 kg). Weight gain during pregnancy was calculated using self-reported body mass prior to pregnancy and during the 38th week of pregnancy. Personality type was assessed using the Polish version of the Framingham Type A Behavior Patterns Questionnaire (adapted by Juczynski). Two hundred forty-six (31.8%) study subjects represented type A personalities, 272 (35.2%) type B and 255 (33.0%) an indirect type. Gestational weight gain was related to the behavior patterns questionnaire score and age. In women <30 years with type A personality, the weight gain was higher than in women with type B behavior of the same age. In women >30 years, the gestational weight gain was larger for type B personalities. Type A personality and increased urgency increase the risk of developing obesity during pregnancy in women below 30 years of age. A higher level of competitiveness is a risk factor for excessive weight gain during pregnancy regardless of age.

  18. Estimating JPEG2000 compression for image forensics using Benford's Law

    NASA Astrophysics Data System (ADS)

    Qadir, Ghulam; Zhao, Xi; Ho, Anthony T. S.

    2010-05-01

    With the tremendous growth and usage of digital images nowadays, the integrity and authenticity of digital content is becoming increasingly important, and a growing concern to many government and commercial sectors. Image forensics, based on a passive statistical analysis of the image data only, is an alternative approach to the active embedding of data associated with digital watermarking. Benford's Law was first introduced to analyse the probability distribution of the 1st digits (1-9) of natural data, and has since been applied to accounting forensics for detecting fraudulent income tax returns [9]. More recently, Benford's Law has been further applied to image processing and image forensics. For example, Fu et al. [5] proposed a Generalised Benford's Law technique for estimating the Quality Factor (QF) of JPEG compressed images. In our previous work, we proposed a framework incorporating the Generalised Benford's Law to accurately detect unknown JPEG compression rates of watermarked images in semi-fragile watermarking schemes. JPEG2000 (a relatively new image compression standard) offers higher compression rates and better image quality as compared to JPEG compression. In this paper, we propose the novel use of Benford's Law for estimating JPEG2000 compression for image forensics applications. By analysing the DWT coefficients and JPEG2000 compression on 1338 test images, the initial results indicate that the 1st digit probability of the DWT coefficients follows Benford's Law. The unknown JPEG2000 compression rate of an image can also be derived and verified with the help of a divergence factor, which shows the deviation between the probabilities and Benford's Law. Based on 1338 test images, the mean divergence for DWT coefficients is approximately 0.0016, which is lower than that for DCT coefficients at 0.0034. However, the mean divergence for JPEG2000 images at a compression rate of 0.1 is 0.0108, which is much higher than for uncompressed DWT coefficients. This result
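
    A minimal sketch of the first-digit test underlying the approach above: gather the leading digits of (for example, DWT) coefficients, compare with Benford's law and report a divergence figure; the synthetic coefficients and the chi-square-style divergence below are illustrative assumptions, not the paper's exact divergence factor:

    ```python
    import numpy as np

    def first_digit_distribution(coefficients: np.ndarray) -> np.ndarray:
        """Empirical probabilities of leading digits 1..9 among non-zero coefficients."""
        c = np.abs(coefficients[coefficients != 0]).astype(np.float64)
        first = (c / 10 ** np.floor(np.log10(c))).astype(int)    # leading digit, 1..9
        counts = np.bincount(first, minlength=10)[1:10]
        return counts / counts.sum()

    benford = np.log10(1.0 + 1.0 / np.arange(1, 10))             # Benford's law p(d)
    coeffs = np.random.default_rng(3).lognormal(mean=0.0, sigma=2.0, size=100_000)
    p = first_digit_distribution(coeffs)
    divergence = float(np.sum((p - benford) ** 2 / benford))     # deviation from Benford
    print(p.round(3), divergence)
    ```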

  19. Compressed NMR: Combining compressive sampling and pure shift NMR techniques.

    PubMed

    Aguilar, Juan A; Kenwright, Alan M

    2017-12-26

    Historically, the resolution of multidimensional nuclear magnetic resonance (NMR) has been orders of magnitude lower than the intrinsic resolution that NMR spectrometers are capable of producing. The slowness of Nyquist sampling as well as the existence of signals as multiplets instead of singlets have been two of the main reasons for this underperformance. Fortunately, two compressive techniques have appeared that can overcome these limitations. Compressive sensing, also known as compressed sampling (CS), avoids the first limitation by exploiting the compressibility of typical NMR spectra, thus allowing sampling at sub-Nyquist rates, and pure shift techniques eliminate the second issue "compressing" multiplets into singlets. This paper explores the possibilities and challenges presented by this combination (compressed NMR). First, a description of the CS framework is given, followed by a description of the importance of combining it with the right pure shift experiment. Second, examples of compressed NMR spectra and how they can be combined with covariance methods will be shown. Copyright © 2017 John Wiley & Sons, Ltd.

  20. Psychosocial working conditions and weight gain among employees.

    PubMed

    Lallukka, T; Laaksonen, M; Martikainen, P; Sarlio-Lähteenkorva, S; Lahelma, E

    2005-08-01

    To study the associations between psychosocial working conditions and weight gain. Data from postal questionnaires (response rate 67%) sent to 40- to 60-y-old women (n=7093) and men (n=1799) employed by the City of Helsinki in 2000-2002 were analysed. Weight gain during the previous 12 months was the outcome variable in logistic regression analyses. Independent variables included Karasek's job demands and job control, work fatigue, working overtime, work-related mental strain, social support and the work-home interface. The final models were adjusted for age, education, marital status, physical strain and body mass index. In the previous 12 months, 25% of women and 19% of men reported weight gain. Work fatigue and working overtime were associated with weight gain in both sexes. Women who were dissatisfied with combining paid work and family life were more likely to have gained weight. Men with low job demands were less likely to have gained weight. All of these associations were independent of each other. Few work-related factors were associated with weight gain. However, our study suggests that work fatigue and working overtime are potential risk factors for weight gain. These findings need to be confirmed in prospective studies.

  1. Cost-effectiveness of compression technologies for evidence-informed leg ulcer care: results from the Canadian Bandaging Trial

    PubMed Central

    2012-01-01

    Background Venous leg ulcers, affecting approximately 1% of the population, are costly to manage due to poor healing and high recurrence rates. We evaluated an evidence-informed leg ulcer care protocol with two frequently used high compression systems: ‘four-layer bandage’ (4LB) and ‘short-stretch bandage’ (SSB). Methods We conducted a cost-effectiveness analysis using individual patient data from the Canadian Bandaging Trial, a publicly funded, pragmatic, randomized trial evaluating high compression therapy with 4LB (n = 215) and SSB (n = 209) for community care of venous leg ulcers. We estimated costs (in 2009–2010 Canadian dollars) from the societal perspective and used a time horizon corresponding to each trial participant’s first year. Results Relative to SSB, 4LB was associated with an average 15 ulcer-free days gained, although the 95% confidence interval [−32, 21 days] crossed zero, indicating no treatment difference; an average health benefit of 0.009 QALYs gained [−0.019, 0.037] and overall, an average cost increase of $420 [$235, $739] (due to twice as many 4LB bandages used); or equivalently, a cost of $46,667 per QALY gained. If decision makers are willing to pay from $50,000 to $100,000 per QALY, the probability of 4LB being more cost effective increased from 51% to 63%. Conclusions Our findings differ from the emerging clinical and economic evidence that supports high compression therapy with 4LB, and therefore suggest another perspective on high compression practice, namely when delivered by trained registered nurses using an evidence-informed protocol, both 4LB and SSB systems offer comparable effectiveness and value for money. Trial registration ClinicalTrials.gov Identifier: NCT00202267 PMID:23031428
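
    As a quick arithmetic check (no new data, just the figures quoted above), the cost-per-QALY value follows directly from the incremental cost and incremental QALY estimates:

    ```latex
    \text{ICER} = \frac{\Delta \text{Cost}}{\Delta \text{QALY}}
                = \frac{\$420}{0.009\ \text{QALY}}
                \approx \$46{,}667 \ \text{per QALY gained}
    ```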

  2. Lossless Astronomical Image Compression and the Effects of Random Noise

    NASA Technical Reports Server (NTRS)

    Pence, William

    2009-01-01

    In this paper we compare a variety of modern image compression methods on a large sample of astronomical images. We begin by demonstrating from first principles how the amount of noise in the image pixel values sets a theoretical upper limit on the lossless compression ratio of the image. We derive simple procedures for measuring the amount of noise in an image and for quantitatively predicting how much compression will be possible. We then compare the traditional technique of using the GZIP utility to externally compress the image, with a newer technique of dividing the image into tiles, and then compressing and storing each tile in a FITS binary table structure. This tiled-image compression technique offers a choice of other compression algorithms besides GZIP, some of which are much better suited to compressing astronomical images. Our tests on a large sample of images show that the Rice algorithm provides the best combination of speed and compression efficiency. In particular, Rice typically produces 1.5 times greater compression and provides much faster compression speed than GZIP. Floating point images generally contain too much noise to be effectively compressed with any lossless algorithm. We have developed a compression technique which discards some of the useless noise bits by quantizing the pixel values as scaled integers. The integer images can then be compressed by a factor of 4 or more. Our image compression and uncompression utilities (called fpack and funpack) that were used in this study are publicly available from the HEASARC web site. Users may run these stand-alone programs to compress and uncompress their own images.
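
    A minimal sketch of the noise-bit quantization idea described above (scale float pixels by a step tied to the measured background noise, round to integers, then hand the result to a lossless coder); the q parameter and noise estimate are assumptions for illustration, not fpack's implementation:

    ```python
    import numpy as np

    def quantize_to_scaled_int(pixels: np.ndarray, sigma_noise: float, q: float = 4.0):
        """Quantize float pixels with step sigma_noise / q (larger q keeps more noise bits)."""
        scale = sigma_noise / q
        offset = float(pixels.min())
        ints = np.round((pixels - offset) / scale).astype(np.int32)
        return ints, scale, offset

    def dequantize(ints: np.ndarray, scale: float, offset: float) -> np.ndarray:
        return ints * scale + offset

    rng = np.random.default_rng(4)
    img = 1000.0 + rng.normal(scale=5.0, size=(512, 512))        # sky level plus noise
    q_img, s, o = quantize_to_scaled_int(img, sigma_noise=5.0, q=4.0)
    print(q_img.dtype, np.abs(dequantize(q_img, s, o) - img).max())   # error <= scale / 2
    ```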

  3. The perceptual learning of time-compressed speech: A comparison of training protocols with different levels of difficulty

    PubMed Central

    Gabay, Yafit; Karni, Avi; Banai, Karen

    2017-01-01

    Speech perception can improve substantially with practice (perceptual learning) even in adults. Here we compared the effects of four training protocols that differed in whether and how task difficulty was changed during a training session, in terms of the gains attained and the ability to apply (transfer) these gains to previously un-encountered items (tokens) and to different talkers. Participants trained in judging the semantic plausibility of sentences presented as time-compressed speech and were tested on their ability to reproduce, in writing, the target sentences; trial-by-trial feedback was afforded in all training conditions. In two conditions task difficulty (low or high compression) was kept constant throughout the training session, whereas in the other two conditions task difficulty was changed in an adaptive manner (incrementally from easy to difficult, or using a staircase procedure). Compared to a control group (no training), all four protocols resulted in significant post-training improvement in the ability to reproduce the trained sentences accurately. However, training in the constant-high-compression protocol elicited the smallest gains in deciphering and reproducing trained items and in reproducing novel, untrained, items after training. Overall, these results suggest that training procedures that start off with relatively little signal distortion (“easy” items, not far removed from standard speech) may be advantageous compared to conditions wherein severe distortions are presented to participants from the very beginning of the training session. PMID:28545039

  4. Fast Ignition Thermonuclear Fusion: Enhancement of the Pellet Gain by the Colossal-Magnetic-Field Shells

    NASA Astrophysics Data System (ADS)

    Stefan, V. Alexander

    2013-10-01

    The fast ignition fusion pellet gain can be enhanced by a laser-generated B-field shell. The B-field shell (similar to Earth's B-field, but with alternating B-poles) follows the pellet compression in a frozen-in B-field regime. A properly designed laser-pellet coupling can lead to the generation of a B-field shell (up to 100 MG), which inhibits electron thermal transport and confines the alpha-particles. In principle, a pellet gain of a few hundred can be achieved in this manner. Supported in part by Nikola Tesla Labs, Stefan University, 1010 Pearl, La Jolla, CA 92038-1007.

  5. Compression mechanisms in the plasma focus pinch

    NASA Astrophysics Data System (ADS)

    Lee, S.; Saw, S. H.; Ali, Jalil

    2017-03-01

    The compression of the plasma focus pinch is a dynamic process, governed by the electrodynamics of pinch elongation and opposed by the negative rate of change of current dI/dt associated with the current dip. The compressibility of the plasma is influenced by the thermodynamics, primarily the specific heat ratio, with greater compressibility as the specific heat ratio γ reduces with increasing degrees of freedom f of the plasma ensemble due to the ionization energy of the higher Z (atomic number) gases. The most drastic compression occurs when the emitted radiation of a high-Z plasma dominates the dynamics, leading in extreme cases to radiative collapse, which is terminated only when the compressed density is sufficiently high for the inevitable self-absorption of radiation to occur. We discuss the central pinch equation, which contains the basic electrodynamic terms with built-in thermodynamic factors and a dQ/dt term, with Q made up of a Joule heat component and absorption-corrected radiative terms. Deuterium is considered as a thermodynamic reference (fully ionized perfect gas with f = 3) as well as a zero-radiation reference (bremsstrahlung only; with radiation power negligible compared with electrodynamic power). Higher-Z gases are then considered and regimes of thermodynamic enhancement of compression are systematically identified, as are regimes of radiation enhancement. The code which incorporates all these effects is used to compute pinch radius ratios in various gases as a measure of compression. Systematic numerical experiments reveal increasing severity in the radiation enhancement of compression as atomic number increases. The work progresses towards a scaling law for radiative collapse and a generalized specific heat ratio incorporating radiation.
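
    As context for the compressibility argument above, a standard ideal-ensemble relation links the specific heat ratio to the effective degrees of freedom; the limiting values below are illustrative, not taken from the paper's code:

    ```latex
    % specific heat ratio versus effective degrees of freedom f
    \gamma = \frac{f+2}{f}, \qquad
    f = 3 \ (\text{fully ionized deuterium}) \Rightarrow \gamma = \tfrac{5}{3}, \qquad
    f \gg 3 \ (\text{high-}Z \text{ gas, energy sunk into ionization}) \Rightarrow \gamma \rightarrow 1 .
    ```

    As γ approaches 1 the pinch column becomes increasingly compressible, which is the thermodynamic-enhancement regime the abstract describes.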

  6. Results of subscale MTF compression experiments

    NASA Astrophysics Data System (ADS)

    Howard, Stephen; Mossman, A.; Donaldson, M.; Fusion Team, General

    2016-10-01

    In magnetized target fusion (MTF) a magnetized plasma torus is compressed in a time shorter than its own energy confinement time, thereby heating to fusion conditions. Understanding plasma behavior and scaling laws is needed to advance toward a reactor-scale demonstration. General Fusion is conducting a sequence of subscale experiments of compact toroid (CT) plasmas being compressed by chemically driven implosion of an aluminum liner, providing data on several key questions. CT plasmas are formed by a coaxial Marshall gun, with magnetic fields supported by internal plasma currents and eddy currents in the wall. Configurations that have been compressed so far include decaying and sustained spheromaks and an ST that is formed into a pre-existing toroidal field. Diagnostics measure B, ne, visible and x-ray emission, Ti and Te. Before compression the CT has an energy of 10 kJ magnetic, 1 kJ thermal, with Te of 100-200 eV and ne of 5×10^20 m^-3. Plasma was stable during a compression factor R0/R > 3 on best shots. A reactor-scale demonstration would require 10x higher initial B and ne but similar Te. Liner improvements have minimized ripple, tearing and ejection of micro-debris. Plasma facing surfaces have included plasma-sprayed tungsten, bare Cu and Al, and gettering with Ti and Li.

  7. Maternal obesity and gestational weight gain are risk factors for infant death.

    PubMed

    Bodnar, Lisa M; Siminerio, Lara L; Himes, Katherine P; Hutcheon, Jennifer A; Lash, Timothy L; Parisi, Sara M; Abrams, Barbara

    2016-02-01

    Assessment of the joint and independent relationships of gestational weight gain and prepregnancy body mass index (BMI) on risk of infant mortality was performed. This study used Pennsylvania linked birth-infant death records (2003-2011) from infants without anomalies born to mothers with prepregnancy BMI categorized as underweight (n = 58,973), normal weight (n = 610,118), overweight (n = 296,630), grade 1 obesity (n = 147,608), grade 2 obesity (n = 71,740), and grade 3 obesity (n = 47,277). Multivariable logistic regression models stratified by BMI category were used to estimate dose-response associations between z scores of gestational weight gain and infant death after confounder adjustment. Infant mortality risk was lowest among normal-weight women and increased with rising BMI category. For all BMI groups except for grade 3 obesity, there were U-shaped associations between gestational weight gain and risk of infant death. Weight loss and very low weight gain among women with grades 1 and 2 obesity were associated with high risks of infant mortality. However, even when gestational weight gain in women with obesity was optimized, the predicted risk of infant death remained higher than that of normal-weight women. Interventions aimed at substantially reducing preconception weight among women with obesity and avoiding very low or very high gestational weight gain may reduce risk of infant death. © 2015 The Obesity Society.

  8. Theory of compressive modeling and simulation

    NASA Astrophysics Data System (ADS)

    Szu, Harold; Cha, Jae; Espinola, Richard L.; Krapels, Keith

    2013-05-01

    Modeling and Simulation (M&S) has been evolving along two general directions: (i) the data-rich approach, suffering the curse of dimensionality, and (ii) the equation-rich approach, suffering from computing-power and turnaround-time limits. We suggest a third approach, which we call (iii) compressive M&S (CM&S), because the basic minimum Helmholtz free energy (MFE) principle facilitating CM&S can reproduce and generalize the Candes, Romberg, Tao & Donoho (CRT&D) Compressive Sensing (CS) paradigm as a linear Lagrange Constraint Neural Network (LCNN) algorithm. CM&S-based MFE can generalize LCNN to 2nd order as a nonlinear augmented LCNN. For example, during the sunset we can avoid the reddish bias of sunlight illumination due to long-range Rayleigh scattering over the horizon: with CM&S we can use a night-vision camera instead of a day camera. We decomposed the long-wave infrared (LWIR) band with a filter into 2 vector components (8~10 μm and 10~12 μm) and used LCNN to find, pixel by pixel, the map of Emissive-Equivalent Planck Radiation Sources (EPRS). Then, we up-shifted consistently, according to the de-mixed source map, to a sub-micron RGB color image. Moreover, night-vision imaging can also be down-shifted to Passive Millimeter Wave (PMMW) imaging, suffering less blur owing to dusty smoke scattering and enjoying the apparent smoothness of the surface reflectivity of man-made objects under the Rayleigh resolution. One loses three orders of magnitude in spatial Rayleigh resolution, but gains two orders of magnitude in reflectivity and another two orders in propagation without obscuring smog. Since CM&S can generate missing data and hard-to-get dynamic transients, CM&S can reduce unnecessary measurements and their associated cost and computing in the sense of super-saving CS: measuring one and getting one's neighborhood free.

  9. Compression in Working Memory and Its Relationship with Fluid Intelligence

    ERIC Educational Resources Information Center

    Chekaf, Mustapha; Gauvrit, Nicolas; Guida, Alessandro; Mathy, Fabien

    2018-01-01

    Working memory has been shown to be strongly related to fluid intelligence; however, our goal is to shed further light on the process of information compression in working memory as a determining factor of fluid intelligence. Our main hypothesis was that compression in working memory is an excellent indicator for studying the relationship between…

  10. Subjective and objective assessment of patients' compression therapy skills as a predictor of ulcer recurrence.

    PubMed

    Mościcka, Paulina; Szewczyk, Maria T; Jawień, Arkadiusz; Cierzniakowska, Katarzyna; Cwajda-Białasik, Justyna

    2016-07-01

    To verify whether the subjectively and objectively assessed patient's skills in applying compression therapy constitute a predictive factor of venous ulcer recurrence. Systematic implementation of compression therapy by the patient is at the core of prophylaxis for recurrent ulcers. Therefore, patient education constitutes a significant element of care. However, controversies remain as to whether all individuals benefit equally from education. A retrospective analysis. The study included medical records of patients with venous ulcers (n = 351) treated between 2001 and 2011 at the Clinic for Chronic Wounds at Bydgoszcz Clinical Hospital. We compared two groups of patients, (1) with at least one episode of recurrent ulcer during the five-year observation period, and (2) without recurrences throughout the analysed period, in terms of their theoretical skills and knowledge of compression therapy recorded at baseline and after one month. Very good self-assessment of a patient's compression therapy skills and weak assessment of these skills by a nurse proved to be significant risk factors for recurrence of the ulcers on univariate analysis. The significance of these variables as independent risk factors for recurrent ulcers has also been confirmed on multivariate analysis, which also took into account other clinical parameters. Building up proper compression therapy skills among patients should be the key element of a properly constructed nurse-based prophylactic program, as it is the most significant modifiable risk factor for recurrent ulcers. Although the development of compression skills is undeniably important, other factors should also be considered, e.g. surgical correction of superficial reflux. Instruction on compression therapy should be conducted by properly trained nursing personnel - the nurses should have received both content and psychological training. The compression therapy training should contain practical instruction with guided exercises and in-depth objective

  11. Investigation of Spheromak Plasma Cooling through Metallic Liner Spallation during Compression

    NASA Astrophysics Data System (ADS)

    Ross, Keeton; Mossman, Alex; Young, William; Ivanov, Russ; O'Shea, Peter; Howard, Stephen

    2016-10-01

    Various magnetized-target fusion (MTF) reactor concepts involve a preliminary magnetic confinement stage, followed by a metallic liner implosion that compresses the plasma to fusion conditions. The process is repeated to produce a pulsed, net-gain energy system. General Fusion, Inc. is pursuing one scheme that involves the compression of spheromak plasmas inside a liner formed by a collapsing vortex of liquid Pb-Li. The compression is driven by focused acoustic waves launched by gas-driven piston impacts. Here we describe a project exploring the effects of possible liner spallation during compression on the spheromak's temperature, lifetime, and stability. We employ a 1 J, 10 ns pulsed YAG laser at 532 nm focused onto a thin film of Li or Al to inject a known quantity of metallic impurities into a spheromak plasma and then measure the response. Diagnostics including visible and ultraviolet spectrometers, ion Doppler spectroscopy, B-probes, and Thomson scattering are used for plasma characterization. We then plan to apply the trends measured under these controlled conditions to evaluate the role of wall impurities during 'field shots', where spheromaks are compressed through a chemically driven implosion of an aluminum flux conserver. The hope is that with further study we can more accurately include the effect of wall impurities on the fusion yield of a reactor-scale MTF system. Experimental procedures and results are presented, along with their relation to other liner-driven MTF schemes.

  12. View compensated compression of volume rendered images for remote visualization.

    PubMed

    Lalgudi, Hariharan G; Marcellin, Michael W; Bilgin, Ali; Oh, Han; Nadar, Mariappan S

    2009-07-01

    Remote visualization of volumetric images has gained importance over the past few years in medical and industrial applications. Volume visualization is a computationally intensive process, often requiring hardware acceleration to achieve a real-time viewing experience. One remote visualization model that can accomplish this would transmit rendered images from a server, based on viewpoint requests from a client. For constrained server-client bandwidth, an efficient compression scheme is vital for transmitting high quality rendered images. In this paper, we present a new view compensation scheme that utilizes the geometric relationship between viewpoints to exploit the correlation between successive rendered images. The proposed method obviates motion estimation between rendered images, enabling a significant reduction in the complexity of the compressor. Additionally, the view compensation scheme, in conjunction with JPEG2000, performs better than AVC, the state-of-the-art video compression standard.

  13. Universal data compression

    NASA Astrophysics Data System (ADS)

    Lindsay, R. A.; Cox, B. V.

    Universal and adaptive data compression techniques have the capability to globally compress all types of data without loss of information, but have the disadvantage of complexity and computation speed. Advances in hardware speed and the reduction of computational costs have made universal data compression feasible. Implementations of the Adaptive Huffman and Lempel-Ziv compression algorithms are evaluated for performance. Compression ratios versus run times for different sizes of data files are graphically presented and discussed in the paper. Adjustments needed for optimum performance of the algorithms relative to theoretically achievable limits are outlined.
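
    A minimal sketch of a dictionary-based Lempel-Ziv-style coder (an LZW variant) of the kind evaluated above; real implementations add variable-width codes and dictionary resets, which are omitted here:

    ```python
    def lzw_compress(data: bytes) -> list:
        """Toy LZW: emit dictionary indices for the longest already-seen prefix."""
        dictionary = {bytes([i]): i for i in range(256)}
        w, out = b"", []
        for byte in data:
            wc = w + bytes([byte])
            if wc in dictionary:
                w = wc
            else:
                out.append(dictionary[w])
                dictionary[wc] = len(dictionary)     # grow the dictionary adaptively
                w = bytes([byte])
        if w:
            out.append(dictionary[w])
        return out

    sample = b"abababababababababab" * 50
    codes = lzw_compress(sample)
    print(len(sample), "bytes ->", len(codes), "codes")   # ratio depends on data statistics
    ```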

  14. A novel "gain chip" concept for high-power lasers (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Li, Min; Li, Mingzhong; Wang, Zhenguo; Yan, Xiongwei; Jiang, Xinying; Zheng, Jiangang; Cui, Xudong; Zhang, Xiaomin

    2017-05-01

    High-power lasers, including high-peak-power lasers (HPPL) and high-average-power lasers (HAPL), attract much interest for an enormous variety of applications in inertial fusion energy (IFE), materials processing, defense, spectroscopy, and high-field physics research. To meet the requirements of high efficiency and quality, a "gain chip" concept is proposed to properly design the pumping, cooling and lasing fields. The gain chip mainly consists of laser diode arrays, a lens duct, a rectangular waveguide and slab-shaped gain media. For the pumping field, the pump light is compressed and homogenized by the lens duct to high irradiance via total internal reflection, and further coupled into the gain media through its two edge faces. For the cooling field, the coolant travels along the flow channel created by the adjacent slabs in the other two edge-face directions and cools the lateral faces of the gain media. For the lasing field, the laser beam travels through the lateral faces and experiences minimum thermal wavefront distortion. Thereby, these three fields are orthogonal, offering more spatial freedom to handle them during the construction of the lasers. Transverse gradient doping profiles for HPPL and HAPL have been employed to achieve uniform gain distributions (UGD) within the gain media. This UGD improves the management of both amplified spontaneous emission (ASE) and thermal behavior. Since each "gain chip" has its own pump source, power scaling can easily be achieved by placing identical "gain chips" along the laser beam axis without disturbing the gain and thermal distributions. To detail our concept, a 1-kJ pulsed amplifier is designed and an optical-to-optical efficiency of up to 40% has been obtained. We believe that with a proper coolant (gas or liquid) and gain media (Yb:YAG, Nd:glass or Nd:YAG) our "gain chip" concept might provide a general configuration for high-power lasers with high efficiency and quality.

  15. Culture: copying, compression, and conventionality.

    PubMed

    Tamariz, Mónica; Kirby, Simon

    2015-01-01

    Through cultural transmission, repeated learning by new individuals transforms cultural information, which tends to become increasingly compressible (Kirby, Cornish, & Smith; Smith, Tamariz, & Kirby). Existing diffusion chain studies include in their design two processes that could be responsible for this tendency: learning (storing patterns in memory) and reproducing (producing the patterns again). This paper manipulates the presence of learning in a simple iterated drawing design experiment. We find that learning seems to be the causal factor behind the increase in compressibility observed in the transmitted information, while reproducing is a source of random heritable innovations. Only a theory invoking these two aspects of cultural learning will be able to explain human culture's fundamental balance between stability and innovation. Copyright © 2014 Cognitive Science Society, Inc.

  16. Collaborative Wideband Compressed Signal Detection in Interplanetary Internet

    NASA Astrophysics Data System (ADS)

    Wang, Yulin; Zhang, Gengxin; Bian, Dongming; Gou, Liang; Zhang, Wei

    2014-07-01

    As autonomous radio develops in the deep space network, it becomes possible to establish communication between explorers, aircraft, rovers and satellites, e.g. from different countries, adopting different signal modes. The first task of the autonomous radio is to detect signals of the explorer autonomously without disturbing the original communication. This paper develops a collaborative wideband compressed signal detection approach for the InterPlaNetary (IPN) Internet, where sparse active signals exist in the deep space environment. Compressed sensing (CS) can be utilized by exploiting the sparsity of the IPN Internet communication signal, whose useful frequency support occupies only a small portion of an entirely wide spectrum. An estimate of the signal spectrum can be obtained by using reconstruction algorithms. Against deep space shadowing and channel fading, multiple satellites collaboratively sense and make a final decision according to a certain fusion rule to gain spatial diversity. A couple of novel discrete cosine transform (DCT) and Walsh-Hadamard transform (WHT) based compressed spectrum detection methods are proposed, which significantly improve the performance of spectrum recovery and signal detection. Finally, extensive simulation results are presented to show the effectiveness of our proposed collaborative scheme for signal detection in the IPN Internet. Compared with the conventional discrete Fourier transform (DFT) based method, our DCT and WHT based methods reduce computational complexity, decrease processing time, save energy and enhance the probability of detection.
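
    A small sketch of the spatial-diversity gain from a simple OR fusion rule, assuming independent per-satellite detections with identical detection probability; the actual fusion rule and fading model used in the paper may differ:

    ```python
    def or_fusion_probability(per_satellite_pd: float, n_satellites: int) -> float:
        """Probability that at least one of n independent detectors declares a detection."""
        miss_all = (1.0 - per_satellite_pd) ** n_satellites
        return 1.0 - miss_all

    pd_single = 0.6                   # assumed single-satellite detection probability
    for n in (1, 2, 3, 4):
        print(n, "satellites ->", round(or_fusion_probability(pd_single, n), 3))
    ```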

  17. Metastatic Spinal Cord Compression from Non-Small-Cell Lung Cancer Treated with Surgery and Adjuvant Therapies: A Retrospective Analysis of Outcomes and Prognostic Factors in 116 Patients.

    PubMed

    Tang, Yu; Qu, Jintao; Wu, Juan; Li, Song; Zhou, Yue; Xiao, Jianru

    2015-09-02

    Metastatic spinal cord compression is a disastrous consequence of non-small-cell lung cancer (NSCLC). There have been few studies of the outcomes or prognostic factors in patients with metastatic spinal cord compression from NSCLC treated with surgery and adjuvant therapies. From 2002 to 2013, 116 patients with metastatic spinal cord compression from NSCLC treated with surgery and adjuvant therapies were enrolled in this retrospective analysis. Kaplan-Meier methods and Cox regression analysis were used to estimate overall survival and identify prognostic factors for survival. Multivariate analysis suggested that the Eastern Cooperative Oncology Group performance status (ECOG-PS), preoperative and postoperative Frankel scores, postoperative adjuvant radiation therapy, and target therapy were independent prognostic factors. Ninety patients died at a median of twelve months (range, three to forty-seven months) postoperatively, and twenty-six patients were still alive at the time of final follow-up (at a median of fifteen months [range, five to fifty-four months]). The complete disappearance of deficits in spinal cord function after surgery was the most robust predictor of survival. Adjuvant radiation therapy and target therapy were also associated with a better prognosis. Prognostic Level IV. See Instructions for Authors for a complete description of levels of evidence. Copyright © 2015 by The Journal of Bone and Joint Surgery, Incorporated.

  18. Audiovisual focus of attention and its application to Ultra High Definition video compression

    NASA Astrophysics Data System (ADS)

    Rerabek, Martin; Nemoto, Hiromi; Lee, Jong-Seok; Ebrahimi, Touradj

    2014-02-01

    Using Focus of Attention (FoA) as a perceptual process in image and video compression belongs to the well-known approaches to increasing coding efficiency. It has been shown that foveated coding, where compression quality varies across the image according to the region of interest, is more efficient than the alternative coding, where all regions are compressed in a similar way. However, widespread use of such foveated compression has been prevented by two main conflicting causes, namely, the complexity and the efficiency of algorithms for FoA detection. One way around these is to use as much information as possible from the scene. Since most video sequences have an associated audio, and moreover, in many cases there is a correlation between the audio and the visual content, audiovisual FoA can improve the efficiency of the detection algorithm while remaining low in complexity. This paper discusses a simple yet efficient audiovisual FoA algorithm based on the correlation of dynamics between audio and video signal components. Results of the audiovisual FoA detection algorithm are subsequently taken into account for foveated coding and compression. This approach is implemented in an H.265/HEVC encoder producing a bitstream which is fully compliant with any H.265/HEVC decoder. The influence of audiovisual FoA on the perceived quality of high and ultra-high definition audiovisual sequences is explored and the amount of gain in compression efficiency is analyzed.

  19. Comparison of Seven Methods for Boolean Factor Analysis and Their Evaluation by Information Gain.

    PubMed

    Frolov, Alexander A; Húsek, Dušan; Polyakov, Pavel Yu

    2016-03-01

    A common task in large data set analysis is searching for an appropriate data representation in a space of fewer dimensions. One of the most efficient methods to solve this task is factor analysis. In this paper, we compare seven methods for Boolean factor analysis (BFA) in solving the so-called bars problem (BP), which is a BFA benchmark. The performance of the methods is evaluated by means of information gain. Study of the results obtained in solving BPs of different levels of complexity has allowed us to reveal the strengths and weaknesses of these methods. It is shown that the Likelihood maximization Attractor Neural Network with Increasing Activity (LANNIA) is the most efficient BFA method in solving BP in many cases. The efficacy of the LANNIA method is also shown when applied to real data from the Kyoto Encyclopedia of Genes and Genomes database, which contains full genome sequencing for 1368 organisms, and to the text data set R52 (from Reuters 21578) typically used for label categorization.

  20. Multi-millijoule few-cycle mid-infrared pulses through nonlinear self-compression in bulk

    PubMed Central

    Shumakova, V.; Malevich, P.; Ališauskas, S.; Voronin, A.; Zheltikov, A. M.; Faccio, D.; Kartashov, D.; Baltuška, A.; Pugžlys, A.

    2016-01-01

    The physics of strong-field applications requires driver laser pulses that are both energetic and extremely short. Whereas optical amplifiers, laser and parametric, boost the energy, their gain bandwidth restricts the attainable pulse duration, requiring additional nonlinear spectral broadening to enable few or even single cycle compression and a corresponding peak power increase. Here we demonstrate, in the mid-infrared wavelength range that is important for scaling the ponderomotive energy in strong-field interactions, a simple energy-efficient and scalable soliton-like pulse compression in a mm-long yttrium aluminium garnet crystal with no additional dispersion management. Sub-three-cycle pulses with >0.44 TW peak power are compressed and extracted before the onset of modulation instability and multiple filamentation as a result of a favourable interplay between strong anomalous dispersion and optical nonlinearity around the wavelength of 3.9 μm. As a manifestation of the increased peak power, we show the evidence of mid-infrared pulse filamentation in atmospheric air. PMID:27620117

  1. Compression of CCD raw images for digital still cameras

    NASA Astrophysics Data System (ADS)

    Sriram, Parthasarathy; Sudharsanan, Subramania

    2005-03-01

    Lossless compression of raw CCD images captured using color filter arrays has several benefits. The benefits include improved storage capacity, reduced memory bandwidth, and lower power consumption for digital still camera processors. The paper discusses the benefits in detail and proposes the use of a computationally efficient block adaptive scheme for lossless compression. Experimental results are provided that indicate that the scheme performs well for CCD raw images attaining compression factors of more than two. The block adaptive method also compares favorably with JPEG-LS. A discussion is provided indicating how the proposed lossless coding scheme can be incorporated into digital still camera processors enabling lower memory bandwidth and storage requirements.

  2. Correlation between compressive strength and ultrasonic pulse velocity of high strength concrete incorporating chopped basalt fibre

    NASA Astrophysics Data System (ADS)

    Shafiq, Nasir; Fadhilnuruddin, Muhd; Elshekh, Ali Elheber Ahmed; Fathi, Ahmed

    2015-07-01

    Ultrasonic pulse velocity (UPV) is considered the most important non-destructive test used to evaluate the mechanical characteristics of high strength concrete (HSC). The relationship between the compressive strength of HSC containing chopped basalt fibre strands (CBSF) and UPV was investigated. The concrete specimens were prepared using different ratios of CBSF as internal strengthening material. The compressive strength measurements were conducted at sample ages of 3, 7, 28, 56 and 90 days, whilst the ultrasonic pulse velocity was measured at 28 days. The compressive strength of HSC with the chopped basalt fibre did not show any improvement; instead, it decreased. The UPV of the chopped basalt fibre reinforced concrete was found to be less than that of the control mix for each addition ratio of the basalt fibre. A relationship plot was obtained between the cube compressive strength of HSC and UPV for various amounts of chopped basalt fibres.

  3. Systems aspects of COBE science data compression

    NASA Technical Reports Server (NTRS)

    Freedman, I.; Boggess, E.; Seiler, E.

    1993-01-01

    A general approach to compression of diverse data from large scientific projects has been developed, and this paper addresses the appropriate system and scientific constraints together with the algorithm development and test strategy. This framework has been implemented for the COsmic Background Explorer spacecraft (COBE) by retrofitting the existing VAX-based data management system with high-performance compression software permitting random access to the data. Algorithms which incorporate scientific knowledge and consume relatively few system resources are preferred over ad hoc methods. COBE exceeded its planned storage by a large and growing factor, and the retrieval of data significantly affects the processing, delaying the availability of data for scientific usage and software test. Embedded compression software is planned to make the project tractable by reducing the data storage volume to an acceptable level during normal processing.

  4. Influence of several factors on ignition lag in a compression-ignition engine

    NASA Technical Reports Server (NTRS)

    Gerrish, Harold C; Voss, Fred

    1932-01-01

    This investigation was made to determine the influence of fuel quality, injection advance angle, injection valve-opening pressure, inlet-air pressure, compression ratio, and engine speed on the time lag of auto-ignition of a Diesel fuel oil in a single-cylinder compression-ignition engine as obtained from an analysis of indicator diagrams. Three cam-operated fuel-injection pumps, two pump cams, and an automatic injection valve with two different nozzles were used. Ignition lag was considered to be the interval between the start of injection of the fuel as determined with a Stroborama and the start of effective combustion as determined from the indicator diagram, the latter being the point where 4.0 x 10(exp-6) pound of fuel had been effectively burned. For this particular engine and fuel it was found that: (1) for a constant start and the same rate of fuel injection up to the point of cut-off, a variation in fuel quantity from 1.2 x 10(exp-4) to 4.1 x 10(exp-4) pound per cycle has no appreciable effect on the ignition lag; (2) injection advance angle increases or decreases the lag according to whether density, temperature, or turbulence has the controlling influence; (3) increase in valve-opening pressure slightly increases the lag; and (4) increase of inlet-air pressure, compression ratio, and engine speed reduces the lag.

  5. Coding gains and error rates from the Big Viterbi Decoder

    NASA Technical Reports Server (NTRS)

    Onyszchuk, I. M.

    1991-01-01

    A prototype hardware Big Viterbi Decoder (BVD) was completed for an experiment with the Galileo spacecraft. Searches for new convolutional codes, studies of Viterbi decoder hardware designs and architectures, mathematical formulations, decompositions of the deBruijn graph into identical and hierarchical subgraphs, and very large scale integration (VLSI) chip design are just a few examples of tasks completed for this project. The BVD bit error rates (BER), measured from hardware and software simulations, are plotted as a function of the bit signal-to-noise ratio E_b/N_0 on the additive white Gaussian noise channel. Using the constraint length 15, rate 1/4, experimental convolutional code for the Galileo mission, the BVD gains 1.5 dB over the NASA standard (7,1/2) Maximum Likelihood Convolutional Decoder (MCD) at a BER of 0.005. At this BER, the same gain results when the (255,223) NASA standard Reed-Solomon decoder is used, which yields a word error rate of 2.1 x 10(exp -8) and a BER of 1.4 x 10(exp -9). The (15, 1/6) code to be used by the Cometary Rendezvous Asteroid Flyby (CRAF)/Cassini missions yields 1.7 dB of coding gain. These gains are measured with respect to symbols input to the BVD and increase with decreasing BER. Also, 8-bit input symbol quantization makes the BVD resistant to demodulated signal-level variations. Because the experimental codes require a higher bandwidth than the NASA (7,1/2) code, these gains are offset by about 0.1 dB of expected additional receiver losses. Coding gains of several decibels are possible by compressing all spacecraft data.

  6. The Influence of Compression Stocking on Jumping Performance of Athlete

    NASA Astrophysics Data System (ADS)

    Salleh, M. N.; Lazim, H. M.; Lamsali, H.; Salleh, A. F.

    2018-05-01

    Evidence of compression stocking effectiveness is mixed, with some researchers suggesting that the stocking can enhance performance while others dispute the finding. One of the factors thought to cause the mixed results is the level of pressure used in the studies. This research tested fourteen athletes. Their bodies were scanned and a customized compression stocking that exerts pressure corresponding to the intended level was developed. An experiment was conducted to measure the effect of wearing the compression stocking on jumping performance. The results show mixed outcomes. For the female athletes, there is a significant difference in knee power between wearing and not wearing the compression stocking (p<0.05). However, there is no significant difference for male athletes whether wearing it or not.

  7. Inflammatory bowel disease: risk factors for adverse pregnancy outcome and the impact of maternal weight gain.

    PubMed

    Oron, Galia; Yogev, Yariv; Shcolnick, Smadar; Shkolnik, Smadar; Hod, Moshe; Fraser, Gerald; Wiznitzer, Arnon; Melamed, Nir

    2012-11-01

    To identify risk factors for adverse pregnancy outcome in women with inflammatory bowel disease (IBD) and to assess the effect of maternal pre-pregnancy weight and weight gain during pregnancy on pregnancy outcome. A retrospective, matched control study of all gravid women with IBD treated in a single tertiary center. Data were compared with healthy controls matched by age, parity and pre-pregnancy BMI in a 3:1 ratio. Overall, 300 women were enrolled, 75 women in the study group (28 with ulcerative colitis and 47 with Crohn's disease) and 225 in the control group. The rates of preterm delivery and small for gestational age were higher in the study group (13.3 vs. 5.3%, p = 0.02 and 6.7 vs. 0.9%, p = 0.004). The rates of cesarean section (36 vs. 19.1%; p = 0.002), NICU admission (10.7 vs. 4.0%, p = 0.03) and low 5-min Apgar score (4.0 vs. 0.4%, p = 0.02) were also increased in the study group. Disease activity within 3 months of conception [OR 8.4 (1.3-16.3)] and maternal weight gain of less than 12 kg [OR 3.6 (1.1-12.2)] were associated with adverse pregnancy outcome. Active disease at conception and inappropriate weight gain during pregnancy are associated with increased adverse pregnancy outcome in patients with IBD.

  8. An efficient and robust 3D mesh compression based on 3D watermarking and wavelet transform

    NASA Astrophysics Data System (ADS)

    Zagrouba, Ezzeddine; Ben Jabra, Saoussen; Didi, Yosra

    2011-06-01

    The compression and watermarking of 3D meshes are very important in many areas of activity, including digital cinematography, virtual reality and CAD design. However, most studies on 3D watermarking and 3D compression have been done independently. To achieve a good trade-off between protection and fast transfer of 3D meshes, this paper proposes a new approach that combines 3D mesh compression with mesh watermarking. This combination is based on a wavelet transformation. In fact, the compression method used is decomposed into two stages: geometric encoding and topologic encoding. The proposed approach consists of inserting a signature between these two stages. First, the wavelet transformation is applied to the original mesh to obtain two components: wavelet coefficients and a coarse mesh. Then, geometric encoding is applied to these two components. The obtained coarse mesh is marked using a robust mesh watermarking scheme. This insertion into the coarse mesh provides high robustness to several attacks. Finally, topologic encoding is applied to the marked coarse mesh to obtain the compressed mesh. Combining compression and watermarking makes it possible to detect the presence of the signature after compression of the marked mesh. Moreover, it allows protected 3D meshes to be transferred at minimal size. Experiments and evaluations show that the proposed approach gives efficient results in terms of compression gain, invisibility and robustness of the signature against many attacks.
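
    A rough illustration of the pipeline ordering described above (wavelet decomposition, geometric encoding of the details, watermarking of the coarse mesh, then topologic encoding) is sketched below. Every helper is a toy placeholder operating on a bare vertex array and is an assumption made for illustration, not the authors' scheme.

    ```python
    # Illustrative ordering only: wavelet decomposition -> geometric encoding of
    # the details -> watermark the coarse mesh -> topologic encoding. All helpers
    # are toy placeholders (assumptions) acting on a bare vertex array.
    import numpy as np

    def wavelet_decompose(vertices):
        """Split vertices into a 'coarse mesh' (even rows) and detail residuals."""
        coarse = vertices[::2]
        prediction = 0.5 * (coarse[:-1] + coarse[1:])        # predict odd vertices
        details = vertices[1:2 * len(prediction):2] - prediction
        return coarse, details

    def encode_geometry(coords, step=1e-3):
        """Uniform quantization of coordinates (stand-in for geometric encoding)."""
        return np.round(coords / step).astype(np.int32)

    def watermark_coarse(coarse, signature_bits, eps=1e-4):
        """Embed one signature bit per coarse vertex as a tiny +/- offset on z."""
        marked = coarse.copy()
        bits = np.resize(signature_bits, len(marked))
        marked[:, 2] += eps * (2 * bits - 1)
        return marked

    def encode_topology(marked_coarse):
        """Stand-in for topologic encoding: simply serialize to bytes."""
        return marked_coarse.astype(np.float32).tobytes()

    vertices = np.random.rand(100, 3)                          # toy mesh vertices
    coarse, details = wavelet_decompose(vertices)
    geometry_stream = encode_geometry(details)                 # geometric encoding
    marked = watermark_coarse(coarse, np.array([1, 0, 1, 1]))  # signature in between
    topology_stream = encode_topology(marked)                  # topologic encoding
    print(geometry_stream.shape, len(topology_stream))
    ```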

  9. The level of awareness and the attitude of patients recommended for use of compression stockings in Turkish society, and investigation of the factors affecting their use.

    PubMed

    Manduz, Şinasi; Ada, Fatih; Ada, Yusuf

    2018-01-01

    The purpose of this study was to reveal the treatment outlook, usage habits, and factors affecting these habits, in addition to providing suggestions for solutions for patients who are frequently recommended the use of compression stockings as treatment for conditions such as chronic venous insufficiency, deep vein thrombosis, lymphedema, and pregnancy. The study was conducted as a face-to-face questionnaire session with 1,004 patients who had previously registered at the cardiovascular surgeon's polyclinic of Sivas Numune Hospital between March 29, 2017, and October 31, 2017. In the study, basic criteria such as the patients' history, physical examination findings, and the use of compression stockings were evaluated. The survey was conducted in patients who were recommended compression stockings treatment for conditions such as chronic venous insufficiency, deep vein thrombosis, lymphedema, or pregnancy. The patients were asked about their demographics, characteristics of the compression stockings, whether compression stockings were used or not, and doctor evaluations related to the diagnosis. At the end of the study, it was found that 20.5% of the patients who were recommended compression stockings never bought them and only 11.5% of the patients regularly used them. Another surprising detail was that only 54.7% of the patients thought that the compression stockings were part of the treatment and 44.0% of the patients thought that they would benefit from using them. In many guidelines, use of compression stockings is the cornerstone of treatment of venous diseases. However, when the treatment incompatibility of the patients is taken into account, many duties fall to the doctors. The first of these is to inform the patient about the treatment and to answer any questions from the patients. In addition, the socioeconomic and sociocultural status of patients should be considered by the doctors.

  10. The level of awareness and the attitude of patients recommended for use of compression stockings in Turkish society, and investigation of the factors affecting their use

    PubMed Central

    Manduz, Şinasi; Ada, Fatih; Ada, Yusuf

    2018-01-01

    Objective The purpose of this study was to reveal the treatment outlook, usage habits, and factors affecting these habits, in addition to providing suggestions for solutions for patients who are frequently recommended the use of compression stockings as treatment for conditions such as chronic venous insufficiency, deep vein thrombosis, lymphedema, and pregnancy. Methods The study was conducted as a face-to-face questionnaire session with 1,004 patients who had previously registered at the cardiovascular surgeon’s polyclinic of Sivas Numune Hospital between March 29, 2017, and October 31, 2017. In the study, basic criteria such as the patients’ history, physical examination findings, and the use of compression stockings were evaluated. The survey was conducted in patients who were recommended compression stockings treatment for conditions such as chronic venous insufficiency, deep vein thrombosis, lymphedema, or pregnancy. The patients were asked about their demographics, characteristics of the compression stockings, whether compression stockings were used or not, and doctor evaluations related to the diagnosis. Results At the end of the study, it was found that 20.5% of the patients who were recommended compression stockings never bought them and only 11.5% of the patients regularly used them. Another surprising detail was that only 54.7% of the patients thought that the compression stockings were part of the treatment and 44.0% of the patients thought that they would benefit from using them. Conclusion In many guidelines, use of compression stockings is the cornerstone of treatment of venous diseases. However, when the treatment incompatibility of the patients is taken into account, many duties fall to the doctors. The first of these is to inform the patient about the treatment and to answer any questions from the patients. In addition, the socioeconomic and sociocultural status of patients should be considered by the doctors. PMID:29588577

  11. Recce imagery compression options

    NASA Astrophysics Data System (ADS)

    Healy, Donald J.

    1995-09-01

    The errors introduced into reconstructed RECCE imagery by ATARS DPCM compression are compared to those introduced by the more modern DCT-based JPEG compression algorithm. For storage applications in which uncompressed sensor data is available, JPEG provides better mean-square-error performance while also providing more flexibility in the selection of compressed data rates. When ATARS DPCM compression has already been performed, lossless encoding techniques may be applied to the DPCM deltas to achieve further compression without introducing additional errors. The abilities of several lossless compression algorithms, including Huffman, Lempel-Ziv, Lempel-Ziv-Welch, and Rice encoding, to provide this additional compression of ATARS DPCM deltas are compared. It is shown that the amount of noise in the original imagery significantly affects these comparisons.
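
    As a rough illustration of one of the lossless coders named above, the sketch below Rice-codes a stream of signed DPCM deltas; the zigzag mapping, the Rice parameter k, and the synthetic delta distribution are assumptions for illustration and are not tied to the ATARS data format.

    ```python
    # Rice coding of signed DPCM deltas, sketched for illustration. The zigzag
    # mapping, the Rice parameter k and the synthetic near-zero deltas are
    # assumptions, not properties of the ATARS format.
    import numpy as np

    def zigzag(d):
        # Map signed deltas to non-negative integers: 0,-1,1,-2,2 -> 0,1,2,3,4.
        return 2 * d if d >= 0 else -2 * d - 1

    def rice_encode(deltas, k=2):
        # Each value: unary-coded quotient, a '0' separator, then k remainder bits.
        bits = []
        for d in deltas:
            u = zigzag(int(d))
            q, r = u >> k, u & ((1 << k) - 1)
            bits.append("1" * q + "0" + format(r, f"0{k}b"))
        return "".join(bits)

    rng = np.random.default_rng(0)
    deltas = rng.integers(-4, 5, size=1000)       # toy deltas clustered near zero
    stream = rice_encode(deltas, k=2)
    print(f"average bits per delta: {len(stream) / len(deltas):.2f}")
    ```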

  12. Compression failure of angle-ply laminates

    NASA Technical Reports Server (NTRS)

    Peel, Larry D.; Hyer, Michael W.; Shuart, Mark J.

    1991-01-01

    The present work deals with modes and mechanisms of failure in compression of angle-ply laminates. Experimental results were obtained from 42 angle-ply IM7/8551-7a specimens with a lay-up of ((plus or minus theta)/(plus or minus theta)) sub 6s where theta, the off-axis angle, ranged from 0 degrees to 90 degrees. The results showed four failure modes, these modes being a function of off-axis angle. Failure modes include fiber compression, inplane transverse tension, inplane shear, and inplane transverse compression. Excessive interlaminar shear strain was also considered as an important mode of failure. At low off-axis angles, experimentally observed values were considerably lower than published strengths. It was determined that laminate imperfections in the form of layer waviness could be a major factor in reducing compression strength. Previously developed linear buckling and geometrically nonlinear theories were used, with modifications and enhancements, to examine the influence of layer waviness on compression response. The wavy layer is described by a wave amplitude and a wave length. Linear elastic stress-strain response is assumed. The geometrically nonlinear theory, in conjunction with the maximum stress failure criterion, was used to predict compression failure and failure modes for the angle-ply laminates. A range of wave lengths and amplitudes were used. It was found that for 0 less than or equal to theta less than or equal to 15 degrees failure was most likely due to fiber compression. For 15 degrees less than theta less than or equal to 35 degrees, failure was most likely due to inplane transverse tension. For 35 degrees less than theta less than or equal to 70 degrees, failure was most likely due to inplane shear. For theta greater than 70 degrees, failure was most likely due to inplane transverse compression. The fiber compression and transverse tension failure modes depended more heavily on wave length than on wave amplitude. Thus using a single

  13. Compressing turbulence and sudden viscous dissipation with compression-dependent ionization state

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davidovits, Seth; Fisch, Nathaniel J.

    Turbulent plasma flow, amplified by rapid three-dimensional compression, can be suddenly dissipated under continuing compression. This effect relies on the sensitivity of the plasma viscosity to the temperature, μ ~ T^(5/2). The plasma viscosity is also sensitive to the plasma ionization state. Here, we show that the sudden dissipation phenomenon may be prevented when the plasma ionization state increases during compression, and we demonstrate the regime of net viscosity dependence on compression where sudden dissipation is guaranteed. In addition, it is shown that, compared to cases with no ionization, ionization during compression is associated with larger increases in turbulent energy and can make the difference between growing and decreasing turbulent energy.

  14. Compressing turbulence and sudden viscous dissipation with compression-dependent ionization state

    DOE PAGES

    Davidovits, Seth; Fisch, Nathaniel J.

    2016-11-14

    Turbulent plasma flow, amplified by rapid three-dimensional compression, can be suddenly dissipated under continuing compression. This effect relies on the sensitivity of the plasma viscosity to the temperature, μ ~ T^(5/2). The plasma viscosity is also sensitive to the plasma ionization state. Here, we show that the sudden dissipation phenomenon may be prevented when the plasma ionization state increases during compression, and we demonstrate the regime of net viscosity dependence on compression where sudden dissipation is guaranteed. In addition, it is shown that, compared to cases with no ionization, ionization during compression is associated with larger increases in turbulent energy and can make the difference between growing and decreasing turbulent energy.

  15. Evaluation of Algorithms for Compressing Hyperspectral Data

    NASA Technical Reports Server (NTRS)

    Cook, Sid; Harsanyi, Joseph; Faber, Vance

    2003-01-01

    With EO-1 Hyperion in orbit, NASA is showing its continued commitment to hyperspectral imaging (HSI). As HSI sensor technology continues to mature, the ever-increasing amounts of sensor data generated will result in a need for more cost-effective communication and data handling systems. Lockheed Martin, with considerable experience in spacecraft design and developing special purpose onboard processors, has teamed with Applied Signal & Image Technology (ASIT), which has an extensive heritage in HSI spectral compression, and with Mapping Science (MSI), which provides JPEG 2000 spatial compression expertise, to develop a real-time and intelligent onboard processing (OBP) system to reduce HSI sensor downlink requirements. Our goal is to reduce the downlink requirement by a factor > 100, while retaining the necessary spectral and spatial fidelity of the sensor data needed to satisfy the many science, military, and intelligence goals of these systems. Our compression algorithms leverage commercial-off-the-shelf (COTS) spectral and spatial exploitation algorithms. We are currently evaluating these compression algorithms using statistical analysis and assessments by NASA scientists. We are also developing special purpose processors for executing these algorithms onboard a spacecraft.

  16. Nonlinear pulse compression in pulse-inversion fundamental imaging.

    PubMed

    Cheng, Yun-Chien; Shen, Che-Chou; Li, Pai-Chi

    2007-04-01

    Coded excitation can be applied in ultrasound contrast agent imaging to enhance the signal-to-noise ratio with minimal destruction of the microbubbles. Although the axial resolution is usually compromised by the requirement for a long coded transmit waveform, it can be restored by using a compression filter to compress the received echo. However, nonlinear responses from microbubbles may cause difficulties in pulse compression and result in severe range side-lobe artifacts, particularly in pulse-inversion-based (PI) fundamental imaging. The efficacy of pulse compression in nonlinear contrast imaging was evaluated by investigating several factors relevant to PI fundamental generation, using both in-vitro experiments and simulations. The results indicate that the acoustic pressure and the bubble size can alter the nonlinear characteristics of microbubbles and change the performance of the compression filter. When nonlinear responses from contrast agents are enhanced by using a higher acoustic pressure, or when more microbubbles are near the resonance size of the transmit frequency, higher range side lobes are produced in both linear imaging and PI fundamental imaging. On the other hand, contrast detection in PI fundamental imaging depends significantly on the magnitude of the nonlinear responses of the bubbles, and thus the resultant contrast-to-tissue ratio (CTR) still increases with acoustic pressure and the nonlinear resonance of microbubbles. It should be noted, however, that the CTR in PI fundamental imaging after compression is consistently lower than that before compression due to obvious side-lobe artifacts. Therefore, the use of coded excitation is not beneficial in PI fundamental contrast detection.
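
    The sketch below illustrates generic pulse compression of a coded (linear-FM) transmit waveform by its matched filter, the mechanism that restores axial resolution in the linear case; the sampling rate, chirp parameters and reflector layout are assumed values, and the nonlinear microbubble response that degrades compression in PI imaging is not modeled.

    ```python
    # Generic pulse compression: a long linear-FM (chirp) code is compressed by
    # its matched filter, concentrating the echo energy back into a short pulse.
    # All parameters are assumed values for illustration only.
    import numpy as np

    fs = 40e6                                   # sampling rate, Hz (assumed)
    T = 10e-6                                   # chirp duration, s
    f0, f1 = 2e6, 6e6                           # swept band, Hz
    t = np.arange(0, T, 1 / fs)
    chirp = np.sin(2 * np.pi * (f0 * t + 0.5 * (f1 - f0) / T * t**2))

    echo = np.zeros(4000)                       # echoes from two point reflectors
    for delay in (20e-6, 22e-6):
        i = int(delay * fs)
        echo[i:i + len(chirp)] += chirp
    echo += 0.05 * np.random.randn(len(echo))   # additive receiver noise

    compressed = np.convolve(echo, chirp[::-1], mode="same")   # matched filter
    envelope = np.abs(compressed)
    print("strongest compressed peak at sample", int(np.argmax(envelope)))
    ```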

  17. Fast Compressive Tracking.

    PubMed

    Zhang, Kaihua; Zhang, Lei; Yang, Ming-Hsuan

    2014-10-01

    It is a challenging task to develop effective and efficient appearance models for robust object tracking due to factors such as pose variation, illumination change, occlusion, and motion blur. Existing online tracking algorithms often update models with samples from observations in recent frames. Although much success has been demonstrated, numerous issues remain to be addressed. First, while these adaptive appearance models are data-dependent, there is not a sufficient amount of data for online algorithms to learn from at the outset. Second, online tracking algorithms often encounter the drift problem. As a result of self-taught learning, misaligned samples are likely to be added and to degrade the appearance models. In this paper, we propose a simple yet effective and efficient tracking algorithm with an appearance model based on features extracted from a multiscale image feature space with a data-independent basis. The proposed appearance model employs non-adaptive random projections that preserve the structure of the image feature space of objects. A very sparse measurement matrix is constructed to efficiently extract the features for the appearance model. We compress sample images of the foreground target and the background using the same sparse measurement matrix. The tracking task is formulated as a binary classification via a naive Bayes classifier with online update in the compressed domain. A coarse-to-fine search strategy is adopted to further reduce the computational complexity in the detection procedure. The proposed compressive tracking algorithm runs in real time and performs favorably against state-of-the-art methods on challenging sequences in terms of efficiency, accuracy and robustness.
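
    The compressed-feature idea can be illustrated with a minimal sketch: a very sparse random measurement matrix projects high-dimensional image features to a low-dimensional vector on which a classifier can then operate. The dimensions and the sparsity rule below are illustrative assumptions, not the parameters of the published tracker.

    ```python
    # Compressed appearance features via a very sparse random measurement matrix.
    # Sizes and the sparsity rule are assumptions made for illustration.
    import numpy as np

    def sparse_measurement_matrix(n_low, n_high, s=None, seed=0):
        # Entries are +sqrt(s), 0, -sqrt(s) with probabilities 1/(2s), 1-1/s, 1/(2s)
        # (a very sparse random projection in the Achlioptas/Li style).
        rng = np.random.default_rng(seed)
        s = s or n_high // 4
        p = [1 / (2 * s), 1 - 1 / s, 1 / (2 * s)]
        return rng.choice([np.sqrt(s), 0.0, -np.sqrt(s)], size=(n_low, n_high), p=p)

    R = sparse_measurement_matrix(n_low=50, n_high=10000)
    foreground = np.random.rand(10000)           # stand-in for multiscale features
    background = np.random.rand(10000)
    v_fg, v_bg = R @ foreground, R @ background  # low-dimensional compressed features
    # A naive Bayes classifier would then be updated online on v_fg vs. v_bg.
    print(v_fg.shape, "nonzero fraction of R:", np.count_nonzero(R) / R.size)
    ```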

  18. A test data compression scheme based on irrational numbers stored coding.

    PubMed

    Wu, Hai-feng; Cheng, Yu-sheng; Zhan, Wen-fa; Cheng, Yi-fei; Wu, Qiong; Zhu, Shi-juan

    2014-01-01

    The testing problem has already become an important factor restricting the development of the integrated circuit industry. A new test data compression scheme, namely irrational numbers stored (INS) coding, is presented. To compress test data efficiently, the test data are converted into floating-point numbers and stored in the form of irrational numbers. An algorithm for precisely converting a floating-point number to an irrational number is given. Experimental results for some ISCAS 89 benchmarks show that the compression effect of the proposed scheme is better than that of coding methods such as FDR, AARLC, INDC, FAVLC, and VRL.

  19. Finding Optimal Gains In Linear-Quadratic Control Problems

    NASA Technical Reports Server (NTRS)

    Milman, Mark H.; Scheid, Robert E., Jr.

    1990-01-01

    Analytical method based on Volterra factorization leads to new approximations for optimal control gains in finite-time linear-quadratic control problem of system having infinite number of dimensions. Circumvents need to analyze and solve Riccati equations and provides more transparent connection between dynamics of system and optimal gain.

  20. Brillouin gain enhancement in nano-scale photonic waveguide

    NASA Astrophysics Data System (ADS)

    Nouri Jouybari, Soodabeh

    2018-05-01

    The enhancement of stimulated Brillouin scattering in nano-scale waveguides contributes greatly to the improvement of photonic device technology. The key factors in Brillouin gain are the electrostriction force and the radiation pressure generated by optical waves in the waveguide. In this article, we propose a new nano-scale waveguide scheme in which the Brillouin gain is considerably improved compared to previously reported schemes. The contribution of radiation pressure to the Brillouin gain is much larger than that of the electrostriction force. The Brillouin gain depends strongly on the structural parameters of the waveguide, and a maximum value of 12127 W^(-1) m^(-1) is obtained for the Brillouin gain.

  1. An experimental study on compressive behavior of rubble stone walls retrofitted with BFRP grids

    NASA Astrophysics Data System (ADS)

    Huang, Hui; Jia, Bin; Li, Wenjing; Liu, Xiao; Yang, Dan; Deng, Chuanli

    2018-03-01

    An experimental study was conducted to investigate the compressive behavior of rubble stone walls retrofitted with BFRP grids. The experimental program consisted of four rubble stone walls: one unretrofitted rubble stone wall (reference wall) and three walls retrofitted with BFRP grids. The main purpose of the tests was to gain a better understanding of the compressive behavior of rubble stone walls retrofitted with different amounts of BFRP grids. The experimental results showed that the reference wall failed by out-of-plane collapse due to poor connection between the rubble stone blocks, while the three BFRP-grid-retrofitted walls failed by BFRP grid rupture followed by out-of-plane collapse. The measured compressive strength of the retrofitted walls is about 1.4 to 2.5 times that of the reference wall. In addition, the rubble stone wall retrofitted with the largest amount of BFRP grids showed the smallest vertical and out-of-plane displacements under the same load.

  2. External cardiac compression may be harmful in some scenarios of pulseless electrical activity.

    PubMed

    Hogan, T S

    2012-10-01

    Pulseless electrical activity occurs when organised or semi-organised electrical activity of the heart persists but the product of systemic vascular resistance and the increase in systemic arterial flow generated by the ejection of the left ventricular stroke volume is not sufficient to produce a clinically detectable pulse. Pulseless electrical activity encompasses a very heterogeneous variety of severe circulatory shock states ranging in severity from pseudo-cardiac arrest to effective cardiac arrest. Outcomes of cardiopulmonary resuscitation for pulseless electrical activity are generally poor. Impairment of cardiac filling is the limiting factor to cardiac output in many scenarios of pulseless electrical activity, including extreme vasodilatory shock states. There is no evidence that external cardiac compression can increase cardiac output when impaired cardiac filling is the limiting factor to cardiac output. If impaired cardiac filling is the limiting factor to cardiac output and the heart is effectively ejecting all the blood returning to it, then external cardiac compression can only increase cardiac output if it increases venous return and cardiac filling. Repeated cardiac compression asynchronous with the patient's cardiac cycle and raised mean intrathoracic pressure due to chest compression can be expected to reduce rather than to increase cardiac filling and therefore to reduce rather than to increase cardiac output in such circumstances. The hypothesis is proposed that the performance of external cardiac compression will have zero or negative effect on cardiac output in pulseless electrical activity when impaired cardiac filling is the limiting factor to cardiac output. External cardiac compression may be both directly and indirectly harmful to significant sub-groups of patients with pulseless electrical activity. We have neither evidence nor theory to provide comfort that external cardiac compression is not harmful in many scenarios of pulseless

  3. High Average Power Laser Gain Medium With Low Optical Distortion Using A Transverse Flowing Liquid Host

    DOEpatents

    Comaskey, Brian J.; Ault, Earl R.; Kuklo, Thomas C.

    2005-07-05

    A high average power, low optical distortion laser gain media is based on a flowing liquid media. A diode laser pumping device with tailored irradiance excites the laser active atom, ion or molecule within the liquid media. A laser active component of the liquid media exhibits energy storage times longer than or comparable to the thermal optical response time of the liquid. A circulation system that provides a closed loop for mixing and circulating the lasing liquid into and out of the optical cavity includes a pump, a diffuser, and a heat exchanger. A liquid flow gain cell includes flow straighteners and flow channel compression.

  4. Effects of having a baby on weight gain.

    PubMed

    Brown, Wendy J; Hockey, Richard; Dobson, Annette J

    2010-02-01

    Women often blame weight gain in early adulthood on having a baby. The aim was to estimate the weight gain attributable to having a baby, after disentangling the effects of other factors that influence weight change at this life stage. A longitudinal study of a randomly selected cohort of 6458 Australian women, aged 18-23 years in 1996, was conducted. Self-report mailed surveys were completed in 1996, 2000, 2003, and 2006, and data were analyzed in 2008. On average, women gained weight at the rate of 0.93% per year (95% CI=0.89, 0.98) or 605 g/year (95% CI=580, 635) for a 65-kg woman. Over the 10-year study period, partnered women with one baby gained almost 4 kg more, and those with a partner but no baby gained 1.8 kg more, than unpartnered childless women (after adjustment for other significant factors: initial BMI and age; physical activity, sitting time, energy intake (2003); education level, hours in paid work, and smoking). Having a baby has a marked effect on 10-year weight gain, but there is also an effect attributable to getting married or living with a partner. Social and lifestyle as well as energy balance variables should be considered when developing strategies to prevent weight gain in young adult women.

  5. Imaging industry expectations for compressed sensing in MRI

    NASA Astrophysics Data System (ADS)

    King, Kevin F.; Kanwischer, Adriana; Peters, Rob

    2015-09-01

    Compressed sensing requires compressible data, incoherent acquisition and a nonlinear reconstruction algorithm to force creation of a compressible image consistent with the acquired data. MRI images are compressible using various transforms (commonly total variation or wavelets). Incoherent acquisition of MRI data by appropriate selection of pseudo-random or non-Cartesian locations in k-space is straightforward. Increasingly, commercial scanners are sold with enough computing power to enable iterative reconstruction in reasonable times. Therefore integration of compressed sensing into commercial MRI products and clinical practice is beginning. MRI frequently requires the tradeoff of spatial resolution, temporal resolution and volume of spatial coverage to obtain reasonable scan times. Compressed sensing improves scan efficiency and reduces the need for this tradeoff. Benefits to the user will include shorter scans, greater patient comfort, better image quality, more contrast types per patient slot, the enabling of previously impractical applications, and higher throughput. Challenges to vendors include deciding which applications to prioritize, guaranteeing diagnostic image quality, maintaining acceptable usability and workflow, and acquisition and reconstruction algorithm details. Application choice depends on which customer needs the vendor wants to address. The changing healthcare environment is putting cost and productivity pressure on healthcare providers. The improved scan efficiency of compressed sensing can help alleviate some of this pressure. Image quality is strongly influenced by image compressibility and acceleration factor, which must be appropriately limited. Usability and workflow concerns include reconstruction time and user interface friendliness and response. Reconstruction times are limited to about one minute for acceptable workflow. The user interface should be designed to optimize workflow and minimize additional customer training. Algorithm
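
    A toy sketch of the three ingredients listed above (a compressible signal, incoherent sampling, and a nonlinear reconstruction) is given below using a 1-D sparse signal, randomly undersampled Fourier measurements, and iterative soft thresholding; it is an illustrative stand-in, not a vendor reconstruction algorithm, and all sizes, step and threshold values are assumptions.

    ```python
    # Toy compressed-sensing recovery: sparse 1-D signal, randomly undersampled
    # Fourier measurements, and iterative soft thresholding (ISTA).
    import numpy as np

    rng = np.random.default_rng(1)
    n, m, k = 256, 80, 8
    x = np.zeros(n)
    x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)  # compressible signal

    rows = rng.choice(n, m, replace=False)                 # incoherent sample pattern
    F = np.fft.fft(np.eye(n), norm="ortho")[rows]          # undersampled DFT operator
    y = F @ x                                              # acquired measurements

    def soft(z, t):
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    x_hat = np.zeros(n)
    for _ in range(300):                                   # ISTA iterations
        grad = F.conj().T @ (F @ x_hat - y)
        x_hat = soft((x_hat - grad).real, 0.01)

    print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
    ```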

  6. Low complexity lossless compression of underwater sound recordings.

    PubMed

    Johnson, Mark; Partan, Jim; Hurst, Tom

    2013-03-01

    Autonomous listening devices are increasingly used to study vocal aquatic animals, and there is a constant need to record longer or with greater bandwidth, requiring efficient use of memory and battery power. Real-time compression of sound has the potential to extend recording durations and bandwidths at the expense of increased processing operations and therefore power consumption. Whereas lossy methods such as MP3 introduce undesirable artifacts, lossless compression algorithms (e.g., flac) guarantee exact data recovery. But these algorithms are relatively complex due to the wide variety of signals they are designed to compress. A simpler lossless algorithm is shown here to provide compression factors of three or more for underwater sound recordings over a range of noise environments. The compressor was evaluated using samples from drifting and animal-borne sound recorders with sampling rates of 16-240 kHz. It achieves >87% of the compression of more-complex methods but requires about 1/10 of the processing operations resulting in less than 1 mW power consumption at a sampling rate of 192 kHz on a low-power microprocessor. The potential to triple recording duration with a minor increase in power consumption and no loss in sound quality may be especially valuable for battery-limited tags and robotic vehicles.

  7. On the wide-range bias dependence of transistor d.c. and small-signal current gain factors.

    NASA Technical Reports Server (NTRS)

    Schmidt, P.; Das, M. B.

    1972-01-01

    Critical reappraisal of the bias dependence of the dc and small-signal ac current gain factors of planar bipolar transistors over a wide range of currents. This is based on a straightforward consideration of the three basic components of the dc base current arising due to emitter-to-base injected minority carrier transport, base-to-emitter carrier injection, and emitter-base surface depletion layer recombination effects. Experimental results on representative n-p-n and p-n-p silicon devices are given which support most of the analytical findings.

  8. Prognostic factors in patients with metastatic spinal cord compression secondary to melanoma: a systematic review.

    PubMed

    Hadden, Nicholas J; McIntosh, Jerome R D; Jay, Samuel; Whittaker, Paula J

    2018-02-01

    Melanoma is one of the most common primary tumours associated with metastatic spinal cord compression (MSCC). The aim of this review is to identify prognostic factors specifically for MSCC secondary to melanoma. A systematic search of literature was performed in MEDLINE, Embase and the Cochrane Library to identify studies reporting prognostic factors for patients with MSCC secondary to melanoma. Two studies, involving a total of 39 patients, fulfilled the inclusion criteria. The variables associated with increased survival were receiving postoperative radiotherapy, receiving chemotherapy, perioperative lactate dehydrogenase level less than or equal to 8.0 µkat/l, preoperative haemoglobin level more than 11.5 mg/dl, an interval of 4 or more years between melanoma diagnosis and skeletal metastasis, absence of further skeletal metastases, absence of visceral metastases, Eastern Cooperative Oncology Group Performance Status of 2 or less, two or fewer involved vertebrae, being ambulatory preradiotherapy and an interval of more than 7 days between developing motor deficits and radiotherapy. The variables associated with good functional outcome were slow development of motor dysfunction, good performance status and being ambulatory before radiotherapy. The most important prognostic factors for survival are Eastern Cooperative Oncology Group Performance Status of 2 or less and absence of visceral metastases. There is a lack of studies looking specifically at prognostic factors for patients with MSCC secondary to melanoma, and the number of patients involved in the existing studies is small.

  9. Intelligent bandwidth compression

    NASA Astrophysics Data System (ADS)

    Tseng, D. Y.; Bullock, B. L.; Olin, K. E.; Kandt, R. K.; Olsen, J. D.

    1980-02-01

    The feasibility of a 1000:1 bandwidth compression ratio for image transmission has been demonstrated using image-analysis algorithms and a rule-based controller. Such a high compression ratio was achieved by first analyzing scene content using auto-cueing and feature-extraction algorithms, and then transmitting only the pertinent information consistent with mission requirements. A rule-based controller directs the flow of analysis and performs priority allocations on the extracted scene content. The reconstructed bandwidth-compressed image consists of an edge map of the scene background, with primary and secondary target windows embedded in the edge map. The bandwidth-compressed images are updated at a basic rate of 1 frame per second, with the high-priority target window updated at 7.5 frames per second. The scene-analysis algorithms used in this system together with the adaptive priority controller are described. Results of simulated 1000:1 bandwidth-compressed images are presented. A video tape simulation of the Intelligent Bandwidth Compression system has been produced using a sequence of video input from the data base.

  10. Simplified Modeling of Steady-State and Transient Brillouin Gain in Magnetoactive Non-Centrosymmetric Semiconductors

    NASA Astrophysics Data System (ADS)

    Singh, M.; Aghamkar, P.; Sen, P. K.

    With the aid of a hydrodynamic model of semiconductor plasmas, a detailed analytical investigation is made of both the steady-state and the transient Brillouin gain in magnetized non-centrosymmetric III-V semiconductors arising from the nonlinear interaction of an intense pump beam with the internally generated acoustic wave due to the piezoelectric and electrostrictive properties of the crystal. Using the fact that the origin of coherent Brillouin scattering (CBS) lies in the third-order (Brillouin) susceptibility of the medium, we obtain an expression for the gain coefficient of the backward Stokes mode in the steady-state and transient regimes and study how its growth rate depends on piezoelectricity, magnetic field and pump pulse duration. The threshold pump intensity and optimum pulse duration for the onset of transient CBS are estimated. The piezoelectricity and externally applied magnetic field substantially enhance the transient CBS gain coefficient in III-V semiconductors, which can be of great use in the compression of scattered pulses.

  11. Compressed sensing system considerations for ECG and EMG wireless biosensors.

    PubMed

    Dixon, Anna M R; Allstot, Emily G; Gangopadhyay, Daibashish; Allstot, David J

    2012-04-01

    Compressed sensing (CS) is an emerging signal processing paradigm that enables sub-Nyquist processing of sparse signals such as electrocardiogram (ECG) and electromyogram (EMG) biosignals. Consequently, it can be applied to biosignal acquisition systems to reduce the data rate to realize ultra-low-power performance. CS is compared to conventional and adaptive sampling techniques and several system-level design considerations are presented for CS acquisition systems including sparsity and compression limits, thresholding techniques, encoder bit-precision requirements, and signal recovery algorithms. Simulation studies show that compression factors greater than 16X are achievable for ECG and EMG signals with signal-to-quantization noise ratios greater than 60 dB.

  12. A new compression format for fiber tracking datasets.

    PubMed

    Presseau, Caroline; Jodoin, Pierre-Marc; Houde, Jean-Christophe; Descoteaux, Maxime

    2015-04-01

    A single diffusion MRI streamline fiber tracking dataset may contain hundreds of thousands, and often millions, of streamlines and can take up to several gigabytes of memory. This amount of data is not only heavy to compute, but also difficult to visualize and hard to store on disk (especially when dealing with a collection of brains). These problems call for a fiber-specific compression format that simplifies its manipulation. As of today, no fiber compression format has yet been adopted and the need for one is becoming an issue for future connectomics research. In this work, we propose a new compression format, .zfib, for streamline tractography datasets reconstructed from diffusion magnetic resonance imaging (dMRI). Tracts contain a large amount of redundant information and are relatively smooth; hence, they are highly compressible. The proposed method is a processing pipeline containing a linearization, a quantization and an encoding step. Our pipeline is tested and validated under a wide range of DTI and HARDI tractography configurations (step size, streamline number, deterministic and probabilistic tracking) and compression options. Similar to JPEG, the user has one parameter to select: a worst-case maximum tolerance error in millimeters (mm). Overall, we find a compression factor of more than 96% for a maximum error of 0.1 mm without any perceptual change or change of diffusion statistics (mean fractional anisotropy and mean diffusivity) along bundles. This opens new opportunities for connectomics and tractometry applications.
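
    The error-bounded quantization idea can be sketched as follows: choosing the quantization step from the user's worst-case tolerance guarantees a per-coordinate bound on the reconstruction error. The linearization and encoding stages of the pipeline are omitted, and this is not the .zfib format itself; the toy streamline and helper names are assumptions.

    ```python
    # Error-bounded quantization sketch: a grid spacing of twice the tolerance
    # keeps every coordinate within the tolerance of its original value.
    import numpy as np

    def quantize_streamline(points_mm, max_error_mm=0.1):
        step = 2.0 * max_error_mm          # rounding moves a coordinate <= step/2
        return np.round(points_mm / step).astype(np.int32), step

    def dequantize(q, step):
        return q.astype(np.float64) * step

    streamline = np.cumsum(0.5 * np.random.randn(500, 3), axis=0)   # toy tract, mm
    q, step = quantize_streamline(streamline, max_error_mm=0.1)
    recon = dequantize(q, step)
    print("worst per-coordinate error (mm):", np.abs(recon - streamline).max())
    ```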

  13. Seeding magnetic fields for laser-driven flux compression in high-energy-density plasmas.

    PubMed

    Gotchev, O V; Knauer, J P; Chang, P Y; Jang, N W; Shoup, M J; Meyerhofer, D D; Betti, R

    2009-04-01

    A compact, self-contained magnetic-seed-field generator (5 to 16 T) is the enabling technology for a novel laser-driven flux-compression scheme in laser-driven targets. A magnetized target is directly irradiated by a kilojoule or megajoule laser to compress the preseeded magnetic field to thousands of teslas. A fast (300 ns), 80 kA current pulse delivered by a portable pulsed-power system is discharged into a low-mass coil that surrounds the laser target. A >15 T target field has been demonstrated using a <100 J capacitor bank, a laser-triggered switch, and a low-impedance (<1 Omega) strip line. The device has been integrated into a series of magnetic-flux-compression experiments on the 60 beam, 30 kJ OMEGA laser [T. R. Boehly et al., Opt. Commun. 133, 495 (1997)]. The initial application is a novel magneto-inertial fusion approach [O. V. Gotchev et al., J. Fusion Energy 27, 25 (2008)] to inertial confinement fusion (ICF), where the amplified magnetic field can inhibit thermal conduction losses from the hot spot of a compressed target. This can lead to the ignition of massive shells imploded with low velocity, a way of reaching higher gains than is possible with conventional ICF.

  14. Turbulence in Compressible Flows

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Lecture notes for the AGARD Fluid Dynamics Panel (FDP) Special Course on 'Turbulence in Compressible Flows' have been assembled in this report. The following topics were covered: Compressible Turbulent Boundary Layers, Compressible Turbulent Free Shear Layers, Turbulent Combustion, DNS/LES and RANS Simulations of Compressible Turbulent Flows, and Case Studies of Applications of Turbulence Models in Aerospace.

  15. A simple accurate chest-compression depth gauge using magnetic coils during cardiopulmonary resuscitation

    NASA Astrophysics Data System (ADS)

    Kandori, Akihiko; Sano, Yuko; Zhang, Yuhua; Tsuji, Toshio

    2015-12-01

    This paper describes a new method for calculating chest compression depth and a simple chest-compression gauge for validating the accuracy of the method. The chest-compression gauge has two plates incorporating two magnetic coils, a spring, and an accelerometer. The coils are located at both ends of the spring, and the accelerometer is set on the bottom plate. Waveforms obtained using the magnetic coils (hereafter, "magnetic waveforms"), which are proportional to the compression-force waveforms, and the acceleration waveforms were measured at the same time. The weight factor expressing the relationship between the second derivatives of the magnetic waveforms and the measured acceleration waveforms was calculated. An estimated compression-displacement (depth) waveform was obtained by multiplying the weight factor and the magnetic waveforms. Displacements of two large springs (with similar spring constants) within a thorax and displacements of a cardiopulmonary resuscitation training manikin were measured using the gauge to validate the accuracy of the calculated waveform. A laser-displacement detection system was used to compare the real displacement waveform and the estimated waveform. Intraclass correlation coefficients (ICCs) between the real displacements measured by the laser system and the estimated displacement waveforms were calculated. The estimated displacement error of the compression depth was within 2 mm (<1 standard deviation). All ICCs (two springs and a manikin) were above 0.85 (0.99 in the case of one of the springs). The developed simple chest-compression gauge, based on a new calculation method, provides an accurate compression depth (estimation error < 2 mm).
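
    A minimal sketch of the estimation step described above: fit a weight factor so that the measured acceleration matches the second derivative of the magnetic waveform scaled by that factor, then estimate depth by multiplying the magnetic waveform by the same factor. The signals below are synthetic stand-ins with assumed parameters, not data from the gauge.

    ```python
    # Fit a weight factor w so that acceleration ~= w * d2(magnetic)/dt2,
    # then estimate depth as w * magnetic. Synthetic signals, assumed parameters.
    import numpy as np

    fs = 200.0                                     # sample rate, Hz (assumed)
    t = np.arange(0, 10, 1 / fs)
    depth_true = 0.025 * (1 - np.cos(2 * np.pi * 2 * t))   # ~2 Hz, 5 cm compressions (m)

    c = 40.0                                       # unknown coil gain: magnetic = c * depth
    magnetic = c * depth_true
    accel = np.gradient(np.gradient(depth_true, t), t)     # accelerometer signal
    accel += 0.1 * np.random.randn(len(t))                 # sensor noise

    d2m = np.gradient(np.gradient(magnetic, t), t)         # second derivative of coil signal
    w = np.dot(d2m, accel) / np.dot(d2m, d2m)              # least-squares weight factor

    depth_est = w * magnetic                               # estimated depth waveform
    err_mm = 1000 * np.abs(depth_est - depth_true).max()
    print(f"w = {w:.4f} (true 1/c = {1 / c:.4f}), max depth error = {err_mm:.2f} mm")
    ```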

  16. Distributing value gain from three growth factors for yellow-poplar

    Treesearch

    Roger E. McCay

    1969-01-01

    A method of apportioning the maximum dollar value gain from tree growth into the amounts contributed by diameter growth, merchantable height increase, and quality improvement is described. The results of this method are presented for various sizes and qualities of yellow-poplar trees.

  17. Knowledge, attitudes, and beliefs regarding weight gain during pregnancy among Hispanic women.

    PubMed

    Tovar, Alison; Chasan-Taber, Lisa; Bermudez, Odilia I; Hyatt, Raymond R; Must, Aviva

    2010-11-01

    Pregnancy weight gain may be a risk factor for the development of obesity highlighting the importance of identifying psychosocial risk factors for pregnancy weight gain. The goal of this qualitative pilot study was to evaluate knowledge, attitudes and beliefs regarding weight gain during pregnancy among predominantly Puerto Rican women, a group with higher rates of obesity as compared to non-Hispanic white women. We conducted four focus groups stratified by level of acculturation and BMI. Women reported receiving advice about pregnancy weight gain predominantly from nutritionists and family members rather than from their physicians. The majority of overweight/obese women reported that they had not received any recommendations for weight gain during pregnancy from physicians. Pregnancy weight gain advice was not consistent with the 1990 Institute of Medicine Guidelines. Overall, attitudes towards weight gain recommendations differed by weight status, whereas feelings and dietary beliefs about weight gain differed according to level of acculturation. Our findings inform behavior change strategies for meeting pregnancy weight gain recommendations.

  18. Knowledge, Attitudes, and Beliefs Regarding Weight Gain During Pregnancy Among Hispanic Women

    PubMed Central

    Chasan-Taber, Lisa; Bermudez, Odilia I.; Hyatt, Raymond R.; Must, Aviva

    2012-01-01

    Pregnancy weight gain may be a risk factor for the development of obesity highlighting the importance of identifying psychosocial risk factors for pregnancy weight gain. The goal of this qualitative pilot study was to evaluate knowledge, attitudes and beliefs regarding weight gain during pregnancy among predominantly Puerto Rican women, a group with higher rates of obesity as compared to non-Hispanic white women. We conducted four focus groups stratified by level of acculturation and BMI. Women reported receiving advice about pregnancy weight gain predominantly from nutritionists and family members rather than from their physicians. The majority of overweight/obese women reported that they had not received any recommendations for weight gain during pregnancy from physicians. Pregnancy weight gain advice was not consistent with the 1990 Institute of Medicine Guidelines. Overall, attitudes towards weight gain recommendations differed by weight status, whereas feelings and dietary beliefs about weight gain differed according to level of acculturation. Our findings inform behavior change strategies for meeting pregnancy weight gain recommendations. PMID:19760160

  19. Weight gain following treatment of hyperthyroidism.

    PubMed

    Dale, J; Daykin, J; Holder, R; Sheppard, M C; Franklyn, J A

    2001-08-01

    Patients frequently express concern that treating hyperthyroidism will lead to excessive weight gain. This study aimed to determine the extent of, and risk factors for, weight gain in an unselected group of hyperthyroid patients. We investigated 162 consecutive hyperthyroid patients followed for at least 6 months. Height, weight, clinical features, biochemistry and management were recorded at each clinic visit. Documented weight gain was 5.42 +/- 0.46 kg (mean +/- SE) and increase in BMI was 8.49 +/- 0.71%, over a mean 24.2 +/- 1.6 months. Pre-existing obesity, Graves' disease causing hyperthyroidism, weight loss before presentation and length of follow-up each independently predicted weight gain. Patients treated with thionamides or radioiodine gained a similar amount of weight (thionamides, n = 87, 5.16 +/- 0.63 kg vs. radioiodine, n = 62, 4.75 +/- 0.57 kg, P = 0.645), but patients who underwent thyroidectomy (n = 13) gained more weight (10.27 +/- 2.56 kg vs. others, P = 0.007). Development of hypothyroidism (even transiently) was associated with weight gain (never hypothyroid, n = 102, 4.57 +/- 0.52 kg, transiently hypothyroid, n = 29, 5.37 +/- 0.85 kg, on T4, n = 31, 8.06 +/- 1.42 kg, P = 0.014). This difference remained after correcting for length of follow-up. In the whole cohort, weight increased by 3.95 +/- 0.40 kg at 1 year (n = 144) to 9.91 +/- 1.62 kg after 4 years (n = 27) (P = 0.008), representing a mean weight gain of 3.66 +/- 0.44 kg/year. We have demonstrated marked weight gain after treatment of hyperthyroidism. Pre-existing obesity, a diagnosis of Graves' disease and prior weight loss independently predicted weight gain and weight continued to rise with time. Patients who became hypothyroid, despite T4 replacement, gained most weight.

  20. Solution of weakly compressible isothermal flow in landfill gas collection networks

    NASA Astrophysics Data System (ADS)

    Nec, Y.; Huculak, G.

    2017-12-01

    Pipe networks collecting gas in sanitary landfills operate under the regime of a weakly compressible isothermal flow of ideal gas. The effect of compressibility has traditionally been neglected in this application in favour of simplicity, thereby creating a conceptual incongruity between the flow equations and the thermodynamic equation of state. Here the flow is solved by generalisation of the classic Darcy-Weisbach equation for an incompressible steady flow in a pipe to an ordinary differential equation, permitting continuous variation of density, viscosity and related fluid parameters, as well as head loss or gain due to gravity, in isothermal flow. The differential equation is solved analytically in the case of ideal gas for a single edge in the network. Thereafter the solution is used in an algorithm developed to construct the flow equations automatically for a network characterised by an incidence matrix, and to determine pressure distribution, flow rates and all associated parameters therein.
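
    A minimal sketch of this kind of calculation for a single edge, under assumed pipe and gas data (not the paper's network solver): a Darcy-Weisbach-type momentum balance is integrated numerically along the pipe for steady isothermal flow of an ideal gas, with density taken from the local pressure and the flow-acceleration term neglected under the weakly compressible assumption.

    ```python
    # Single-pipe sketch: integrate a Darcy-Weisbach-type momentum balance for
    # steady isothermal ideal-gas flow, letting density vary with local pressure.
    import numpy as np
    from scipy.integrate import solve_ivp

    R, M, T = 8.314, 0.028, 300.0        # gas constant, molar mass (kg/mol), temperature (K)
    D, L, f = 0.15, 200.0, 0.02          # pipe diameter (m), length (m), Darcy friction factor
    theta = np.deg2rad(2.0)              # pipe inclination (assumed)
    mdot = 0.05                          # mass flow rate, kg/s (assumed)
    A = np.pi * D**2 / 4
    g = 9.81

    def dpdx(x, p):
        rho = p[0] * M / (R * T)         # ideal-gas density at the local pressure
        v = mdot / (rho * A)             # velocity from mass conservation
        return [-f * rho * v**2 / (2 * D) - rho * g * np.sin(theta)]

    sol = solve_ivp(dpdx, [0.0, L], [95e3])          # inlet pressure ~95 kPa
    print(f"outlet pressure: {sol.y[0, -1] / 1e3:.2f} kPa")
    ```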

  1. Compressed gas manifold

    DOEpatents

    Hildebrand, Richard J.; Wozniak, John J.

    2001-01-01

    A compressed gas storage cell interconnecting manifold including a thermally activated pressure relief device, a manual safety shut-off valve, and a port for connecting the compressed gas storage cells to a motor vehicle power source and to a refueling adapter. The manifold is mechanically and pneumatically connected to a compressed gas storage cell by a bolt including a gas passage therein.

  2. Coding Strategies and Implementations of Compressive Sensing

    NASA Astrophysics Data System (ADS)

    Tsai, Tsung-Han

    This dissertation studies coding strategies for computational imaging that overcome the limitations of conventional sensing techniques. The information capacity of conventional sensing is limited by the physical properties of optics, such as aperture size, detector pixels, quantum efficiency, and sampling rate. These parameters determine the spatial, depth, spectral, temporal, and polarization sensitivity of each imager. Increasing sensitivity in any one dimension can significantly compromise the others. This research applies various coding strategies to optical multidimensional imaging and acoustic sensing in order to extend their sensing abilities. The proposed coding strategies combine hardware modification and signal processing to exploit bandwidth and sensitivity from conventional sensors. We discuss the hardware architecture, compression strategies, sensing process modeling, and reconstruction algorithm of each sensing system. Optical multidimensional imaging measures three or more dimensions of the optical signal. Traditional multidimensional imagers acquire extra dimensional information at the cost of degraded temporal or spatial resolution. Compressive multidimensional imaging multiplexes the transverse spatial, spectral, temporal, and polarization information on a two-dimensional (2D) detector. The corresponding spectral, temporal and polarization coding strategies adapt optics, electronic devices, and designed modulation techniques for multiplexed measurement. This computational imaging technique provides multispectral, temporal super-resolution, and polarization imaging abilities with minimal loss in spatial resolution and noise level while maintaining or gaining higher temporal resolution. The experimental results show that appropriate coding strategies can increase sensing capacity by a factor of hundreds. The human auditory system has an astonishing ability to localize, track, and filter selected sound sources or

  3. Data Compression Techniques for Maps

    DTIC Science & Technology

    1989-01-01

    Lempel-Ziv compression is applied to the classified and unclassified images as well as to the output of the compression algorithms. The algorithms ... resulted in a compression of 7:1. The output of the quadtree coding algorithm was then compressed using Lempel-Ziv coding. The compression ratio achieved ... using Lempel-Ziv coding. The unclassified image gave a compression ratio of only 1.4:1. The K-means classified image

  4. Mammographic compression in Asian women.

    PubMed

    Lau, Susie; Abdul Aziz, Yang Faridah; Ng, Kwan Hoong

    2017-01-01

    To investigate: (1) the variability of mammographic compression parameters amongst Asian women; and (2) the effects of reducing compression force on image quality and mean glandular dose (MGD) in Asian women based on a phantom study. We retrospectively collected 15818 raw digital mammograms from 3772 Asian women aged 35-80 years who underwent screening or diagnostic mammography between Jan 2012 and Dec 2014 at our center. The mammograms were processed using a volumetric breast density (VBD) measurement software (Volpara) to assess compression force, compression pressure, compressed breast thickness (CBT), breast volume, VBD and MGD against breast contact area. The effects of reducing compression force on image quality and MGD were also evaluated based on measurements obtained from 105 Asian women, as well as using the RMI156 Mammographic Accreditation Phantom and polymethyl methacrylate (PMMA) slabs. Compression force, compression pressure, CBT, breast volume, VBD and MGD correlated significantly with breast contact area (p<0.0001). Compression parameters including compression force, compression pressure, CBT and breast contact area were widely variable between [relative standard deviation (RSD)≥21.0%] and within (p<0.0001) Asian women. The median compression force should be about 8.1 daN compared to the current 12.0 daN. Decreasing compression force from 12.0 daN to 9.0 daN increased CBT by 3.3±1.4 mm, increased MGD by 6.2-11.0%, and caused no significant effects on image quality (p>0.05). The force-standardized protocol led to widely variable compression parameters in Asian women. Based on the phantom study, it is feasible to reduce compression force by up to 32.5% with minimal effects on image quality and MGD.

  5. Extreme compression for extreme conditions: pilot study to identify optimal compression of CT images using MPEG-4 video compression.

    PubMed

    Peterson, P Gabriel; Pak, Sung K; Nguyen, Binh; Jacobs, Genevieve; Folio, Les

    2012-12-01

    This study aims to evaluate the utility of compressed computed tomography (CT) studies (to expedite transmission) using Motion Pictures Experts Group, Layer 4 (MPEG-4) movie formatting in combat hospitals when guiding major treatment regimens. This retrospective analysis was approved by the Walter Reed Army Medical Center institutional review board with a waiver for the informed consent requirement. Twenty-five CT chest, abdomen, and pelvis exams were converted from Digital Imaging and Communications in Medicine to MPEG-4 movie format at various compression ratios. Three board-certified radiologists reviewed various levels of compression on emergent CT findings on 25 combat casualties and compared them with the interpretation of the original series. A Universal Trauma Window was selected at -200 HU level and 1,500 HU width, then compressed at three lossy levels. Sensitivities and specificities for each reviewer were calculated along with 95 % confidence intervals using the method of general estimating equations. The compression ratios compared were 171:1, 86:1, and 41:1 with combined sensitivities of 90 % (95 % confidence interval, 79-95), 94 % (87-97), and 100 % (93-100), respectively. Combined specificities were 100 % (85-100), 100 % (85-100), and 96 % (78-99), respectively. The introduction of CT in combat hospitals with increasing detectors and image data in recent military operations has increased the need for effective teleradiology, mandating compression technology. Image compression is currently used to transmit images from combat hospitals to tertiary care centers with subspecialists, and our study demonstrates MPEG-4 technology as a reasonable means of achieving such compression.

  6. Alternative Compression Garments

    NASA Technical Reports Server (NTRS)

    Stenger, M. B.; Lee, S. M. C.; Ribeiro, L. C.; Brown, A. K.; Westby, C. M.; Platts, S. H.

    2011-01-01

    Orthostatic intolerance after spaceflight is still an issue for astronauts as no in-flight countermeasure has been 100% effective. Future anti-gravity suits (AGS) may be similar to the Shuttle era inflatable AGS or may be a mechanical compression device like the Russian Kentavr. We have evaluated the above garments as well as elastic, gradient compression garments of varying magnitude and determined that breast-high elastic compression garments may be a suitable replacement to the current AGS. This new garment should be more comfortable than the AGS, easy to don and doff, and as effective a countermeasure to orthostatic intolerance. Furthermore, these new compression garments could be worn for several days after space flight as necessary if symptoms persisted. We conducted two studies to evaluate elastic, gradient compression garments. The purpose of these studies was to evaluate the comfort and efficacy of an alternative compression garment (ACG) immediately after actual space flight and 6 degree head-down tilt bed rest as a model of space flight, and to determine if they would impact recovery if worn for up to three days after bed rest.

  7. Intelligent bandwidth compression

    NASA Astrophysics Data System (ADS)

    Tseng, D. Y.; Bullock, B. L.; Olin, K. E.; Kandt, R. K.; Olsen, J. D.

    1980-02-01

    The feasibility of a 1000:1 bandwidth compression ratio for image transmission has been demonstrated using image-analysis algorithms and a rule-based controller. Such a high compression ratio was achieved by first analyzing scene content using auto-cueing and feature-extraction algorithms, and then transmitting only the pertinent information consistent with mission requirements. A rule-based controller directs the flow of analysis and performs priority allocations on the extracted scene content. The reconstructed bandwidth-compressed image consists of an edge map of the scene background, with primary and secondary target windows embedded in the edge map. The bandwidth-compressed images are updated at a basic rate of 1 frame per second, with the high-priority target window updated at 7.5 frames per second. The scene-analysis algorithms used in this system together with the adaptive priority controller are described. Results of simulated 1000:1 bandwidth-compressed images are presented.

  8. Biological sequence compression algorithms.

    PubMed

    Matsumoto, T; Sadakane, K; Imai, H

    2000-01-01

    Today, more and more DNA sequences are becoming available. The information about DNA sequences is stored in molecular biology databases. The size and importance of these databases will continue to grow; therefore, this information must be stored and communicated efficiently. Furthermore, sequence compression can be used to define similarities between biological sequences. Standard compression algorithms such as gzip or compress cannot compress DNA sequences; they only expand them in size. On the other hand, CTW (Context Tree Weighting Method) can compress DNA sequences to less than two bits per symbol. These algorithms do not exploit the special structures of biological sequences. Two characteristic structures of DNA sequences are known. One is palindromes, or reverse complements, and the other is approximate repeats. Several specific algorithms for DNA sequences that use these structures can compress them to less than two bits per symbol. In this paper, we improve CTW so that the characteristic structures of DNA sequences can be exploited. Before encoding the next symbol, the algorithm searches for an approximate repeat or palindrome using hashing and dynamic programming. If there is a palindrome or an approximate repeat of sufficient length, our algorithm represents it by its length and distance. With this preprocessing, the new program achieves a slightly higher compression ratio than existing DNA-oriented compression algorithms. We also describe a new compression algorithm for protein sequences.
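
    The search-and-reference step described above can be illustrated with a small sketch. This is not the paper's CTW-based coder: matching is exact rather than approximate, and the references are simply listed instead of being entropy coded; the sequence and k-mer length are arbitrary examples.

```python
# Sketch of the preprocessing idea: before encoding the next k-mer, check
# whether it occurred earlier either directly (repeat) or as a reverse
# complement (palindrome), and describe it by (position, kind, distance).
COMP = str.maketrans("ACGT", "TGCA")

def reverse_complement(s: str) -> str:
    return s.translate(COMP)[::-1]

def find_references(seq: str, k: int = 8):
    """Return (position, kind, distance) for k-mers already seen in the sequence."""
    index = {}   # k-mer -> earliest position
    refs = []
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        if kmer in index:
            refs.append((i, "repeat", i - index[kmer]))
        elif reverse_complement(kmer) in index:
            refs.append((i, "palindrome", i - index[reverse_complement(kmer)]))
        index.setdefault(kmer, i)
    return refs

print(find_references("ACGTACGTTTTTAAAAACGTACGT"))
```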

  9. Tensile and Compressive Constitutive Response of 316 Stainless Steel at Elevated Temperatures

    NASA Technical Reports Server (NTRS)

    Manson, S. S.; Muralidharan, U.; Halford, G. R.

    1983-01-01

    Creep rate in compression is lower by factors of 2 to 10 than in tension if the microstructure of the two specimens is the same and the specimens are tested at equal temperatures and equal but opposite stresses. Such behavior is characteristic of both monotonic creep and conditions involving cyclic creep. In the latter case the creep rate in both tension and compression progressively increases from cycle to cycle, rendering questionable the possibility of expressing a time-stabilized constitutive relationship. The difference in creep rates in tension and compression is considerably reduced if the tension specimen is first subjected to cycles of tensile creep (reversed by compressive plasticity), while the compression specimen is first subjected to cycles of compressive creep (reversed by tensile plasticity). In both cases, the test temperature is the same and the stresses are equal and opposite. Such reduction is a reflection of differences in the microstructure of the specimens resulting from different prior mechanical history.

  10. Tensile and compressive constitutive response of 316 stainless steel at elevated temperatures

    NASA Technical Reports Server (NTRS)

    Manson, S. S.; Muralidharan, U.; Halford, G. R.

    1982-01-01

    It is demonstrated that creep rate of 316 SS is lower by factors of 2 to 10 in compression than in tension if the microstructure is the same and tests are conducted at identical temperatures and equal but opposite stresses. Such behavior was observed for both monotonic creep and conditions involving cyclic creep. In the latter case creep rate in both tension and compression progressively increases from cycle to cycle, rendering questionable the possibility of expressing a time-stabilized constitutive relationship. The difference in creep rates in tension and compression is considerably reduced if the tension specimen is first subjected to cycles of tensile creep (reversed by compressive plasticity), while the compression specimen is first subjected to cycles of compressive creep (reversed by tensile plasticity). In both cases, the test temperature is the same and the stresses are equal and opposite. Such reduction is a reflection of differences in microstructure of the specimens resulting from different prior mechanical history.

  11. Interviewing in Virtual Worlds: A Phenomenological Study Exploring the Success Factors of Job Applicants Utilizing Second Life to Gain Employment

    ERIC Educational Resources Information Center

    Koufoudakis-Whittington, Stefania

    2014-01-01

    This study explored the phenomenon of success factors of job applicants utilizing Second Life to gain employment. The study focused on identifying the perception of what qualified as a successful interview through the lived common experiences of 16 employment recruiters. The research problem was that a gap existed in scholarly research on…

  12. Video bandwidth compression system

    NASA Astrophysics Data System (ADS)

    Ludington, D.

    1980-08-01

    The objective of this program was the development of a Video Bandwidth Compression brassboard model for use by the Air Force Avionics Laboratory, Wright-Patterson Air Force Base, in evaluation of bandwidth compression techniques for use in tactical weapons and to aid in the selection of particular operational modes to be implemented in an advanced flyable model. The bandwidth compression system is partitioned into two major divisions: the encoder, which processes the input video with a compression algorithm and transmits the most significant information; and the decoder where the compressed data is reconstructed into a video image for display.

  13. Energy Savings Potential and RD&D Opportunities for Non-Vapor-Compression HVAC Technologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    none,

    While vapor-compression technologies have served heating, ventilation, and air-conditioning (HVAC) needs very effectively, and have been the dominant HVAC technology for close to 100 years, the conventional refrigerants used in vapor-compression equipment contribute to global climate change when released to the atmosphere. This Building Technologies Office report: --Identifies alternatives to vapor-compression technology in residential and commercial HVAC applications --Characterizes these technologies based on their technical energy savings potential, development status, non-energy benefits, and other factors affecting end-user acceptance and their ability to compete with conventional vapor-compression systems --Makes specific research, development, and deployment (RD&D) recommendations to support further development of these technologies, should DOE choose to support non-vapor-compression technology further.

  14. Compressibility of binary powder formulations: investigation and evaluation with compaction equations.

    PubMed

    Gentis, Nicolaos D; Betz, Gabriele

    2012-02-01

    The purpose of this work was to investigate and evaluate the powder compressibility of binary mixtures containing a well-compressible compound (microcrystalline cellulose) and a brittle active drug (paracetamol or mefenamic acid), and how compressibility changes as drug load increases. The drug concentration range was 0%-100% (m/m) with 10% intervals. The powder formulations were compacted to several relative densities with the Zwick material tester. The compaction force and tensile strength were fitted to several mathematical models that yield representative factors for powder compressibility. The factors k and C (Heckel and modified Heckel equation) mostly showed a nonlinear correlation with increasing drug load. The biggest drop in both factors occurred at the far regions of the drug load range. This outcome is important because, in binary mixtures, drug load regions showing a pronounced change in the fitted factors could indicate an existing percolation threshold. The susceptibility value (Leuenberger equation) varied between formulations without the expected decrease at higher drug loads. The outcomes of this study highlight the main challenges for good formulation design. Thus, we conclude that such mathematical plots are mandatory for a scientific evaluation and prediction of the powder compaction process. Copyright © 2011 Wiley Periodicals, Inc.

  15. Mental Aptitude and Comprehension of Time-Compressed and Compressed-Expanded Listening Selections.

    ERIC Educational Resources Information Center

    Sticht, Thomas G.

    The comprehensibility of materials compressed and then expanded by means of an electromechanical process was tested with 280 Army inductees divided into groups of high and low mental aptitude. Three short listening selections relating to military activities were subjected to compression and compression-expansion to produce seven versions. Data…

  16. CSAM: Compressed SAM format.

    PubMed

    Cánovas, Rodrigo; Moffat, Alistair; Turpin, Andrew

    2016-12-15

    Next generation sequencing machines produce vast amounts of genomic data. For the data to be useful, it is essential that it can be stored and manipulated efficiently. This work responds to the combined challenge of compressing genomic data, while providing fast access to regions of interest, without necessitating decompression of whole files. We describe CSAM (Compressed SAM format), a compression approach offering lossless and lossy compression for SAM files. The structures and techniques proposed are suitable for representing SAM files, as well as supporting fast access to the compressed information. They generate more compact lossless representations than BAM, which is currently the preferred lossless compressed SAM-equivalent format; and are self-contained, that is, they do not depend on any external resources to compress or decompress SAM files. An implementation is available at https://github.com/rcanovas/libCSAM. Contact: canovas-ba@lirmm.fr. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  17. Real-time 3D video compression for tele-immersive environments

    NASA Astrophysics Data System (ADS)

    Yang, Zhenyu; Cui, Yi; Anwar, Zahid; Bocchino, Robert; Kiyanclar, Nadir; Nahrstedt, Klara; Campbell, Roy H.; Yurcik, William

    2006-01-01

    Tele-immersive systems can improve productivity and aid communication by allowing distributed parties to exchange information via a shared immersive experience. The TEEVE research project at the University of Illinois at Urbana-Champaign and the University of California at Berkeley seeks to foster the development and use of tele-immersive environments by a holistic integration of existing components that capture, transmit, and render three-dimensional (3D) scenes in real time to convey a sense of immersive space. However, the transmission of 3D video poses significant challenges. First, it is bandwidth-intensive, as it requires the transmission of multiple large-volume 3D video streams. Second, existing schemes for 2D color video compression such as MPEG, JPEG, and H.263 cannot be applied directly because the 3D video data contains depth as well as color information. Our goal is to explore a different part of the 3D compression design space, with factors including complexity, compression ratio, quality, and real-time performance. To investigate these trade-offs, we present and evaluate two simple 3D compression schemes. For the first scheme, we use color reduction to compress the color information, which we then compress along with the depth information using zlib. For the second scheme, we use motion JPEG to compress the color information and run-length encoding followed by Huffman coding to compress the depth information. We apply both schemes to 3D videos captured from a real tele-immersive environment. Our experimental results show that: (1) the compressed data preserves enough information to communicate the 3D images effectively (min. PSNR > 40) and (2) even without inter-frame motion estimation, very high compression ratios (avg. > 15) are achievable at speeds sufficient to allow real-time communication (avg. ~ 13 ms per 3D video frame).
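
    A minimal sketch in the spirit of the first scheme (color reduction plus zlib). The synthetic frame, resolution, and bit depths are assumptions for illustration; the TEEVE implementation and its real captured streams are not reproduced here.

```python
# Reduce each RGB channel from 8 to 4 bits, then compress colour and 16-bit
# depth together with zlib, and report the resulting compression ratio.
import zlib
import numpy as np

ys, xs = np.mgrid[0:240, 0:320]                              # synthetic 3D video frame
color = np.stack([xs % 256, ys % 256, (xs + ys) % 256], axis=-1).astype(np.uint8)
depth = ((xs + ys) * 4).astype(np.uint16)                    # depth map, arbitrary units

color_reduced = (color >> 4).astype(np.uint8)                # colour reduction: 8 -> 4 bits
payload = color_reduced.tobytes() + depth.tobytes()
compressed = zlib.compress(payload, level=6)

raw_bytes = color.nbytes + depth.nbytes
print(f"compression ratio ~ {raw_bytes / len(compressed):.1f}:1")
```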

  18. Fatigue life of additively manufactured Ti6Al4V scaffolds under tension-tension, tension-compression and compression-compression fatigue load.

    PubMed

    Lietaert, Karel; Cutolo, Antonio; Boey, Dries; Van Hooreweder, Brecht

    2018-03-21

    Mechanical performance of additively manufactured (AM) Ti6Al4V scaffolds has mostly been studied in uniaxial compression. However, in real-life applications, more complex load conditions occur. To address this, a novel sample geometry was designed, tested and analyzed in this work. The new scaffold geometry, with porosity gradient between the solid ends and scaffold middle, was successfully used for quasi-static tension, tension-tension (R = 0.1), tension-compression (R = -1) and compression-compression (R = 10) fatigue tests. Results show that global loading in tension-tension leads to a decreased fatigue performance compared to global loading in compression-compression. This difference in fatigue life can be understood fairly well by approximating the local tensile stress amplitudes in the struts near the nodes. Local stress based Haigh diagrams were constructed to provide more insight in the fatigue behavior. When fatigue life is interpreted in terms of local stresses, the behavior of single struts is shown to be qualitatively the same as bulk Ti6Al4V. Compression-compression and tension-tension fatigue regimes lead to a shorter fatigue life than fully reversed loading due to the presence of a mean local tensile stress. Fractographic analysis showed that most fracture sites were located close to the nodes, where the highest tensile stresses are located.

  19. Algorithm for Compressing Time-Series Data

    NASA Technical Reports Server (NTRS)

    Hawkins, S. Edward, III; Darlington, Edward Hugo

    2012-01-01

    An algorithm based on Chebyshev polynomials effects lossy compression of time-series data or other one-dimensional data streams (e.g., spectral data) that are arranged in blocks for sequential transmission. The algorithm was developed for use in transmitting data from spacecraft scientific instruments to Earth stations. In spite of its lossy nature, the algorithm preserves the information needed for scientific analysis. The algorithm is computationally simple, yet compresses data streams by factors much greater than two. The algorithm is not restricted to spacecraft or scientific uses: it is applicable to time-series data in general. The algorithm can also be applied to general multidimensional data that have been converted to time-series data, a typical example being image data acquired by raster scanning. However, unlike most prior image-data-compression algorithms, this algorithm neither depends on nor exploits the two-dimensional spatial correlations that are generally present in images. In order to understand the essence of this compression algorithm, it is necessary to understand that the net effect of this algorithm and the associated decompression algorithm is to approximate the original stream of data as a sequence of finite series of Chebyshev polynomials. For the purpose of this algorithm, a block of data or interval of time for which a Chebyshev polynomial series is fitted to the original data is denoted a fitting interval. Chebyshev approximation has two properties that make it particularly effective for compressing serial data streams with minimal loss of scientific information: The errors associated with a Chebyshev approximation are nearly uniformly distributed over the fitting interval (this is known in the art as the "equal error property"); and the maximum deviations of the fitted Chebyshev polynomial from the original data have the smallest possible values (this is known in the art as the "min-max property").
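
    A minimal sketch of the block-wise idea: fit a low-degree Chebyshev series over each fitting interval and keep only the coefficients. The least-squares fit below is an illustration, not the flight algorithm (which enforces the min-max error behavior described above); the block size and degree are arbitrary choices.

```python
# Block-wise Chebyshev compression of a 1-D data stream: keep degree+1
# coefficients per block; decompression re-evaluates the series on the grid.
import numpy as np
from numpy.polynomial import chebyshev as C

def compress(block: np.ndarray, degree: int) -> np.ndarray:
    x = np.linspace(-1.0, 1.0, block.size)       # map the fitting interval to [-1, 1]
    return C.chebfit(x, block, degree)           # degree + 1 coefficients kept

def decompress(coeffs: np.ndarray, n: int) -> np.ndarray:
    x = np.linspace(-1.0, 1.0, n)
    return C.chebval(x, coeffs)

t = np.linspace(0.0, 1.0, 256)
data = np.sin(8 * t) + 0.3 * t**2                # one smooth time-series block
coeffs = compress(data, degree=9)                # 256 samples -> 10 coefficients
recon = decompress(coeffs, data.size)
print("compression factor:", data.size / coeffs.size)
print("max abs error:", float(np.max(np.abs(recon - data))))
```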

  20. Comparison of three portable instruments to measure compression pressure.

    PubMed

    Partsch, H; Mosti, G

    2010-10-01

    Measurement of interface pressure between the skin and a compression device has gained practical importance, not only for characterizing the efficacy of different compression products in physiological and clinical studies but also for the training of medical staff. A newly developed portable pneumatic pressure transducer (Picopress®) was compared with two established systems (Kikuhime® and SIGaT tester®) by measuring linearity, variability and accuracy on a cylindrical model using a stepwise inflated sphygmomanometer as the reference. In addition, the variation coefficients were measured by applying the transducers repeatedly under a blood pressure cuff on the distal lower leg of a healthy human subject with stepwise inflation. In the pressure range between 10 and 80 mmHg all three devices showed a linear association with the sphygmomanometer values (Pearson r>0.99). The best reproducibility (variation coefficients between 1.05% and 7.4%) and the highest degree of accuracy, demonstrated by Bland-Altman plots, were achieved with the Picopress® transducer. Repeated measurements of pressure in a human leg revealed average variation coefficients for the three devices of 4.17% (Kikuhime®), 8.52% (SIGaT®) and 2.79% (Picopress®). The results suggest that the Picopress® transducer, which also allows dynamic pressure tracing in connection with a software program and which may be left under a bandage for several days, is a reliable instrument for measuring the pressure under a compression device.
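
    A minimal sketch of the two summary statistics used above: the variation coefficient of repeated readings, and Bland-Altman bias with limits of agreement against the sphygmomanometer reference. All numbers are hypothetical placeholders, not data from the study.

```python
# Variation coefficient of repeated measurements and Bland-Altman agreement
# statistics for device readings versus reference pressures (placeholder data).
import numpy as np

repeated = np.array([30.1, 29.4, 30.8, 29.9, 30.5])          # mmHg, one target pressure
cv = 100.0 * repeated.std(ddof=1) / repeated.mean()
print(f"variation coefficient: {cv:.2f}%")

device = np.array([10.5, 20.3, 30.9, 40.7, 50.2, 60.8])      # device readings (mmHg)
reference = np.array([10.0, 20.0, 30.0, 40.0, 50.0, 60.0])   # sphygmomanometer (mmHg)
diff = device - reference
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)
print(f"Bland-Altman bias {bias:.2f} mmHg, limits of agreement ±{loa:.2f} mmHg")
```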

  1. Thermofluidic compression effects to achieve combustion in a low-compression scramjet engine

    NASA Astrophysics Data System (ADS)

    Moura, A. F.; Wheatley, V.; Jahn, I.

    2018-07-01

    The compression provided by a scramjet inlet is an important parameter in its design. It must be low enough to limit thermal and structural loads and stagnation pressure losses, but high enough to provide the conditions favourable for combustion. Inlets are typically designed to achieve sufficient compression without accounting for the fluidic, and subsequently thermal, compression provided by the fuel injection, which can enable robust combustion in a low-compression engine. This is investigated using Reynolds-averaged Navier-Stokes numerical simulations of a simplified scramjet engine designed to have insufficient compression to auto-ignite fuel in the absence of thermofluidic compression. The engine was designed with a wide rectangular combustor and a single centrally located injector, in order to reduce three-dimensional effects of the walls on the fuel plume. By varying the injected mass flow rate of hydrogen fuel (equivalence ratios of 0.22, 0.17, and 0.13), it is demonstrated that higher equivalence ratios lead to earlier ignition and more rapid combustion, even though mean conditions in the combustor change by no more than 5% for pressure and 3% for temperature with higher equivalence ratio. By supplementing the lower equivalence ratio with helium to achieve a higher mass flow rate, it is confirmed that these benefits are primarily due to the local compression provided by the extra injected mass. Investigation of the conditions around the fuel plume indicated two connected mechanisms. The higher mass flow rate for higher equivalence ratios generated a stronger injector bow shock that compresses the free-stream gas, increasing OH radical production and promoting ignition. This was observed both in the higher equivalence ratio case and in the case with helium. This earlier ignition led to increased temperature and pressure downstream and, consequently, stronger combustion. The heat release from combustion provided thermal compression in the combustor, further

  2. Thermofluidic compression effects to achieve combustion in a low-compression scramjet engine

    NASA Astrophysics Data System (ADS)

    Moura, A. F.; Wheatley, V.; Jahn, I.

    2017-12-01

    The compression provided by a scramjet inlet is an important parameter in its design. It must be low enough to limit thermal and structural loads and stagnation pressure losses, but high enough to provide the conditions favourable for combustion. Inlets are typically designed to achieve sufficient compression without accounting for the fluidic, and subsequently thermal, compression provided by the fuel injection, which can enable robust combustion in a low-compression engine. This is investigated using Reynolds-averaged Navier-Stokes numerical simulations of a simplified scramjet engine designed to have insufficient compression to auto-ignite fuel in the absence of thermofluidic compression. The engine was designed with a wide rectangular combustor and a single centrally located injector, in order to reduce three-dimensional effects of the walls on the fuel plume. By varying the injected mass flow rate of hydrogen fuel (equivalence ratios of 0.22, 0.17, and 0.13), it is demonstrated that higher equivalence ratios lead to earlier ignition and more rapid combustion, even though mean conditions in the combustor change by no more than 5% for pressure and 3% for temperature with higher equivalence ratio. By supplementing the lower equivalence ratio with helium to achieve a higher mass flow rate, it is confirmed that these benefits are primarily due to the local compression provided by the extra injected mass. Investigation of the conditions around the fuel plume indicated two connected mechanisms. The higher mass flow rate for higher equivalence ratios generated a stronger injector bow shock that compresses the free-stream gas, increasing OH radical production and promoting ignition. This was observed both in the higher equivalence ratio case and in the case with helium. This earlier ignition led to increased temperature and pressure downstream and, consequently, stronger combustion. The heat release from combustion provided thermal compression in the combustor, further

  3. Bunch length compression method for free electron lasers to avoid parasitic compressions

    DOEpatents

    Douglas, David R.; Benson, Stephen; Nguyen, Dinh Cong; Tennant, Christopher; Wilson, Guy

    2015-05-26

    A bunch length compression method for a free electron laser (FEL) that avoids parasitic compressions by 1) applying acceleration on the falling portion of the RF waveform, 2) compressing using a positive momentum compaction (R56>0), and 3) compensating for aberrations by using nonlinear magnets in the compressor beam line.
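
    For context, the standard first-order bunch-compression relation (general accelerator-physics background, not taken from the patent) links the RF-induced energy chirp to the compressor's momentum compaction:

```latex
% A bunch with correlated energy chirp h = d\delta/dz passing through a
% compressor with momentum compaction R_{56} has its length scaled as
\sigma_{z,f} \;\simeq\; \left| 1 + h\,R_{56} \right| \, \sigma_{z,i},
% so the slope of the RF waveform (which sets the sign of h) and the sign of
% R_{56} (here R_{56} > 0) together determine whether the bunch is compressed;
% the nonlinear magnets address the higher-order terms omitted here.
```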

  4. The Compressibility Burble

    NASA Technical Reports Server (NTRS)

    Stack, John

    1935-01-01

    Simultaneous air-flow photographs and pressure-distribution measurements have been made of the NACA 4412 airfoil at high speeds in order to determine the physical nature of the compressibility burble. The flow photographs were obtained by the Schlieren method and the pressures were simultaneously measured for 54 stations on the 5-inch-chord wing by means of a multiple-tube photographic manometer. Pressure-measurement results and typical Schlieren photographs are presented. The general nature of the phenomenon called the "compressibility burble" is shown by these experiments. The source of the increased drag is the compression shock that occurs, the excess drag being due to the conversion of a considerable amount of the air-stream kinetic energy into heat at the compression shock.

  5. Strain-dependent dynamic compressive properties of magnetorheological elastomeric foams

    NASA Astrophysics Data System (ADS)

    Wereley, Norman M.; Perez, Colette; Choi, Young T.

    2018-05-01

    This paper addresses the strain-dependent dynamic compressive properties (i.e., so-called Payne effect) of magnetorheological elastomeric foams (MREFs). Isotropic MREF samples (i.e., no oriented particle chain structures), fabricated in flat square shapes (nominal size of 26.5 mm x 26.5 mm x 9.5 mm) were synthesized by randomly dispersing micron-sized iron oxide particles (Fe3O4) into a liquid silicone foam in the absence of magnetic field. Five different Fe3O4 particle concentrations of 0, 2.5, 5.0, 7.5, and 10 percent by volume fraction (hereinafter denoted as vol%) were used to investigate the effect of particle concentration on the dynamic compressive properties of the MREFs. The MREFs were sandwiched between two multi-pole flexible plate magnets in order to activate the magnetorheological (MR) strengthening effect. Under two different pre-compression conditions (i.e., 35% and 50%), the dynamic compressive stresses of the MREFs with respect to dynamic strain amplitudes (i.e., 1%-10%) were measured by using a servo-hydraulic testing machine. The complex modulus (i.e., storage modulus and loss modulus) and loss factors of the MREFs with respect to dynamic strain amplitudes were presented as performance indices to evaluate their strain-dependent dynamic compressive behavior.
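
    For reference, the quantities reported above follow from standard linear viscoelasticity (stated here as background, not taken from the paper):

```latex
% For a sinusoidal strain \varepsilon(t) = \varepsilon_0 \sin(\omega t) with
% stress response \sigma(t) = \sigma_0 \sin(\omega t + \delta):
E' = \frac{\sigma_0}{\varepsilon_0}\cos\delta, \qquad
E'' = \frac{\sigma_0}{\varepsilon_0}\sin\delta, \qquad
\tan\delta = \frac{E''}{E'},
% where E' is the storage modulus, E'' the loss modulus and \tan\delta the loss
% factor; the Payne effect appears as a drop in E' with increasing dynamic
% strain amplitude.
```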

  6. Gain-scheduled H∞ buckling control of a circular beam-column subject to time-varying axial loads

    NASA Astrophysics Data System (ADS)

    Schaeffner, Maximilian; Platz, Roland

    2018-06-01

    For slender beam-columns loaded by axial compressive forces, active buckling control provides a possibility to increase the maximum bearable axial load above that of a purely passive structure. In this paper, an approach for gain-scheduled H∞ buckling control of a slender beam-column with circular cross-section subject to time-varying axial loads is investigated experimentally. Piezo-elastic supports with integrated piezoelectric stack actuators at the beam-column ends allow an active stabilization in arbitrary lateral directions. The axial loads on the beam-column influence its lateral dynamic behavior and, eventually, cause the beam-column to buckle. A reduced modal model of the beam-column subject to axial loads including the dynamics of the electrical components is set up and calibrated with experimental data. Particularly, the linear parameter-varying open-loop plant is used to design a model-based gain-scheduled H∞ buckling control that is implemented in an experimental test setup. The beam-column is loaded by ramp- and step-shaped time-varying axial compressive loads that result in a lateral deformation of the beam-column due to imperfections, such as predeformation, eccentric loading or clamping moments. The lateral deformations and the maximum bearable loads of the beam-column are analyzed and compared for the beam-column with and without gain-scheduled H∞ buckling control or, respectively, active and passive configuration. With the proposed gain-scheduled H∞ buckling control it is possible to increase the maximum bearable load of the active beam-column by 19% for ramp-shaped axial loads and to significantly reduce the beam-column deformations for step-shaped axial loads compared to the passive structure.

  7. The Prevalence and Phenotype of Activated Microglia/Macrophages within the Spinal Cord of the Hyperostotic Mouse (twy/twy) Changes in Response to Chronic Progressive Spinal Cord Compression: Implications for Human Cervical Compressive Myelopathy

    PubMed Central

    Hirai, Takayuki; Uchida, Kenzo; Nakajima, Hideaki; Guerrero, Alexander Rodriguez; Takeura, Naoto; Watanabe, Shuji; Sugita, Daisuke; Yoshida, Ai; Johnson, William E. B.; Baba, Hisatoshi

    2013-01-01

    Background Cervical compressive myelopathy, e.g. due to spondylosis or ossification of the posterior longitudinal ligament is a common cause of spinal cord dysfunction. Although human pathological studies have reported neuronal loss and demyelination in the chronically compressed spinal cord, little is known about the mechanisms involved. In particular, the neuroinflammatory processes that are thought to underlie the condition are poorly understood. The present study assessed the localized prevalence of activated M1 and M2 microglia/macrophages in twy/twy mice that develop spontaneous cervical spinal cord compression, as a model of human disease. Methods Inflammatory cells and cytokines were assessed in compressed lesions of the spinal cords in 12-, 18- and 24-weeks old twy/twy mice by immunohistochemical, immunoblot and flow cytometric analysis. Computed tomography and standard histology confirmed a progressive spinal cord compression through the spontaneously development of an impinging calcified mass. Results The prevalence of CD11b-positive cells, in the compressed spinal cord increased over time with a concurrent decrease in neurons. The CD11b-positive cell population was initially formed of arginase-1- and CD206-positive M2 microglia/macrophages, which later shifted towards iNOS- and CD16/32-positive M1 microglia/macrophages. There was a transient increase in levels of T helper 2 (Th2) cytokines at 18 weeks, whereas levels of Th1 cytokines as well as brain-derived neurotrophic factor (BDNF), nerve growth factor (NGF) and macrophage antigen (Mac) −2 progressively increased. Conclusions Spinal cord compression was associated with a temporal M2 microglia/macrophage response, which may act as a possible repair or neuroprotective mechanism. However, the persistence of the neural insult also associated with persistent expression of Th1 cytokines and increased prevalence of activated M1 microglia/macrophages, which may lead to neuronal loss and demyelination

  8. Image compression technique

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1997-01-01

    An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels each of the pixels in the filled edge array which corresponds to an edge pixel having a value equal to the value of a pixel of the image array selected in response to the edge pixel, and each of the pixels in the filled edge array which does not correspond to an edge pixel having a value which is a weighted average of the values of surrounding pixels in the filled edge array which do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques also are described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image.
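
    A minimal sketch of the edge-plus-residual idea described in the patent abstract. The details are simplified assumptions: a Jacobi relaxation stands in for the multi-grid Laplace solver, zlib stands in for the contour and difference file coding, and the toy image is arbitrary.

```python
# Edge-based compression sketch: detect edge pixels, fill the rest by relaxing
# Laplace's equation with edge values held fixed, then compress the edge data
# and the residual (difference) separately.
import zlib
import numpy as np

def compress_image(img: np.ndarray, edge_thresh: float = 20.0, iters: int = 200):
    gy, gx = np.gradient(img.astype(float))
    edges = np.hypot(gx, gy) > edge_thresh                 # edge-pixel mask

    filled = np.where(edges, img, img.mean()).astype(float)
    for _ in range(iters):                                 # Jacobi relaxation
        avg = 0.25 * (np.roll(filled, 1, 0) + np.roll(filled, -1, 0) +
                      np.roll(filled, 1, 1) + np.roll(filled, -1, 1))
        filled = np.where(edges, filled, avg)

    diff = img.astype(np.int16) - np.round(filled).astype(np.int16)
    edge_stream = zlib.compress(np.packbits(edges).tobytes() +
                                img[edges].astype(np.uint8).tobytes())
    diff_stream = zlib.compress(diff.tobytes())
    return edge_stream, diff_stream

img = np.zeros((64, 64), dtype=np.uint8)
img[:, 32:] = 200                                          # toy image with one sharp edge
edge_stream, diff_stream = compress_image(img)
print("compressed bytes:", len(edge_stream) + len(diff_stream), "raw:", img.nbytes)
```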

  9. The impact of posture and prolonged cyclic compressive loading on vertebral joint mechanics.

    PubMed

    Gooyers, Chad E; McMillan, Robert D; Howarth, Samuel J; Callaghan, Jack P

    2012-08-01

    An in vitro biomechanics investigation exposing porcine functional spinal units (FSUs) to submaximal cyclic or static compressive forces while in a flexed, neutral, or extended posture. To investigate the combined effect of cyclically applied compressive force (e.g., vibration) and postural deviation on intervertebral joint mechanics. Independently, prolonged vibration exposure and non-neutral postures are known risk factors for the development of low back pain and injury. However, there is limited basic scientific evidence to explain how the risk of low back injury from vibration exposure is modified by other mechanical factors. This work examined the influence of static postural deviation on vertebral joint height loss and compressive stiffness under cyclically applied compressive force. Forty-eight FSUs, consisting of 2 adjacent vertebrae, ligaments, and the intervening intervertebral disc were included in the study. Each specimen was randomized to 1 of 3 experimental posture conditions (neutral, flexed, or extended) and assigned to 1 of 2 loading protocols, consisting of (1) cyclic (1500 ± 1200 N applied at 5 Hz using a sinusoidal waveform, resulting in 0.2 g rms acceleration) or (2) 1500 N of static compressive force. As expected, FSU height loss followed a typical first-order response in both the static and cyclic loading protocols, with the majority (~50%) of the loss occurring in the first 20 minutes of testing. A significant interaction between posture and loading protocol (P < 0.001) was noted in the magnitude of FSU height loss. Subsequent analysis of simple effects revealed significant differences between cyclic and static loading protocols in both a neutral (P = 0.016) and a flexed posture (P < 0.0001). No significant differences (P = 0.320) were noted between pre/post measurements of FSU compressive stiffness. Posture is an important mechanical factor to consider when assessing the risk of injury from cyclic loading to the lumbar spine.

  10. The effects of lossy compression on diagnostically relevant seizure information in EEG signals.

    PubMed

    Higgins, G; McGinley, B; Faul, S; McEvoy, R P; Glavin, M; Marnane, W P; Jones, E

    2013-01-01

    This paper examines the effects of compression on EEG signals, in the context of automated detection of epileptic seizures. Specifically, it examines the use of lossy compression on EEG signals in order to reduce the amount of data which has to be transmitted or stored, while having as little impact as possible on the information in the signal relevant to diagnosing epileptic seizures. Two popular compression methods, JPEG2000 and SPIHT, were used. A range of compression levels was selected for both algorithms in order to compress the signals with varying degrees of loss. This compression was applied to the database of epileptiform data provided by the University of Freiburg, Germany. An automated seizure detection system (real-time EEG analysis for event detection) was used in place of a trained clinician for scoring the reconstructed data. Results demonstrate that compression by a factor of up to 120:1 can be achieved, with minimal loss in seizure detection performance as measured by the area under the receiver operating characteristic curve of the seizure detection system.
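
    A minimal sketch of lossy compression of a 1-D EEG segment via wavelet coefficient thresholding. This is a simpler stand-in for the JPEG2000 and SPIHT coders used in the study; the synthetic signal, wavelet, and retained fraction are arbitrary assumptions, and PyWavelets (pywt) is assumed to be installed.

```python
# Keep only the largest-magnitude wavelet coefficients of an EEG segment and
# reconstruct, trading compression against reconstruction error.
import numpy as np
import pywt

def lossy_compress(signal: np.ndarray, keep_fraction: float = 0.05):
    coeffs = pywt.wavedec(signal, "db4", level=5)
    flat = np.concatenate(coeffs)
    thresh = np.quantile(np.abs(flat), 1.0 - keep_fraction)
    return [pywt.threshold(c, thresh, mode="hard") for c in coeffs]

def reconstruct(kept, length: int) -> np.ndarray:
    return pywt.waverec(kept, "db4")[:length]

fs = 256
t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.default_rng(1).standard_normal(t.size)

kept = lossy_compress(eeg, keep_fraction=0.05)       # retain ~5% of coefficients
recon = reconstruct(kept, eeg.size)
nonzero = sum(int(np.count_nonzero(c)) for c in kept)
print("retained coefficients:", nonzero, "of", eeg.size, "samples")
print("RMS error:", float(np.sqrt(np.mean((eeg - recon) ** 2))))
```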

  11. Parallel image compression

    NASA Technical Reports Server (NTRS)

    Reif, John H.

    1987-01-01

    A parallel compression algorithm for the 16,384-processor MPP machine was developed. The serial version of the algorithm can be viewed as a combination of on-line dynamic lossless text compression techniques (which employ simple learning strategies) and vector quantization. These concepts are described. How these concepts are combined to form a new strategy for performing dynamic on-line lossy compression is discussed. Finally, the implementation of this algorithm in a massively parallel fashion on the MPP is discussed.

  12. Sequential neural text compression.

    PubMed

    Schmidhuber, J; Heil, S

    1996-01-01

    The purpose of this paper is to show that neural networks may be promising tools for data compression without loss of information. We combine predictive neural nets and statistical coding techniques to compress text files. We apply our methods to certain short newspaper articles and obtain compression ratios exceeding those of the widely used Lempel-Ziv algorithms (which build the basis of the UNIX functions "compress" and "gzip"). The main disadvantage of our methods is that they are about three orders of magnitude slower than standard methods.
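
    A minimal sketch of the predictive-coding principle behind this approach: whatever model supplies the next-symbol probabilities, an ideal entropy coder spends -log2 p bits per symbol. A Laplace-smoothed character bigram model stands in here for the predictive neural network; the text is an arbitrary placeholder.

```python
# Estimate the compressed size implied by an adaptive next-character model;
# the per-symbol probabilities would normally be fed to an arithmetic coder.
import math
from collections import defaultdict

def predictive_code_length(text: str) -> float:
    counts = defaultdict(lambda: defaultdict(int))   # context -> next-char counts
    totals = defaultdict(int)
    alphabet = sorted(set(text))
    bits = 0.0
    prev = ""
    for ch in text:
        p = (counts[prev][ch] + 1) / (totals[prev] + len(alphabet))
        bits += -math.log2(p)                        # ideal coding cost of this symbol
        counts[prev][ch] += 1                        # adapt after coding the symbol
        totals[prev] += 1
        prev = ch
    return bits

article = "the compression of short newspaper articles " * 40
bits = predictive_code_length(article)
print(f"~{bits / 8:.0f} bytes predicted vs {len(article)} raw bytes")
```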

  13. Material gain engineering in GeSn/Ge quantum wells integrated with an Si platform

    NASA Astrophysics Data System (ADS)

    Mączko, H. S.; Kudrawiec, R.; Gladysiewicz, M.

    2016-09-01

    It is shown that compressively strained Ge1-xSnx/Ge quantum wells (QWs) grown on a Ge substrate with 0.1 ≤ x ≤ 0.2 and width of 8 nm ≤ d ≤ 14 nm are a very promising gain medium for lasers integrated with an Si platform. Such QWs are type-I QWs with a direct bandgap and a positive transverse electric mode of the material gain, i.e. a positive modal gain. The electronic band structure near the center of the Brillouin zone has been calculated for various Ge1-xSnx/Ge QWs using the 8-band k·p Hamiltonian. To calculate the material gain for these QWs, occupation of the L valley in the Ge barriers has been taken into account. It is clearly shown that this occupation strongly influences the material gain in QWs with low Sn concentrations (Sn < 15%) and is less important for QWs with larger Sn concentrations (Sn > 15%). However, for QWs with Sn > 20% the critical thickness of a GeSn layer deposited on a Ge substrate starts to play an important role. Reduction in the QW width shifts the ground electron subband in the QW upwards and increases occupation of the L valley in the barriers instead of the Γ valley in the QW region.

  14. Reduction of oxidative stress by compression stockings in standing workers.

    PubMed

    Flore, Roberto; Gerardino, Laura; Santoliquido, Angelo; Catananti, Cesare; Pola, Paolo; Tondi, Paolo

    2007-08-01

    Healthy workers who stand for prolonged periods show enhanced production of reactive oxygen species (ROS) in their systemic circulation. Oxidative stress is thought to be a risk factor for chronic venous insufficiency and other systemic diseases. To evaluate the effectiveness of compression stockings in the prevention of oxidative stress at work. ROS and venous pressure of the lower limbs were measured in 55 theatre nurses who stood in the operating theatre for >6 h, 23 industrial ironers who stood for up to 5 h during their shift and 65 outpatient department nurses and 35 laundry workers who acted as controls. Subjects and controls were examined on two consecutive days before and after work and with and without compression stockings. Without compression stockings, lower limb venous pressure increased significantly after work in all subjects and controls (P < 0.001), while only operating theatre nurses showed significantly higher mean levels of ROS (P < 0.001). There was no significant difference in venous pressures and ROS levels after work in subjects or controls when wearing compression stockings. Our data suggest a preventive role of compression stockings against oxidative stress in healthy workers with a standing occupation.

  15. Influence of compression parameters on mechanical behavior of mozzarella cheese.

    PubMed

    Fogaça, Davi Novaes Ladeia; da Silva, William Soares; Rodrigues, Luciano Brito

    2017-10-01

    Studies on the interaction between direction and degree of compression in the Texture Profile Analysis (TPA) of cheeses are limited. For this reason the present study aimed to evaluate the mechanical properties of Mozzarella cheese by TPA at different compression degrees (65, 75, and 85%) and directions (axes X, Y, and Z). Data obtained were compared in order to identify possible interactions between the two factors. Compression direction did not affect any mechanical variable; that is, the cheese showed isotropic behavior in TPA. Compression degree had a significant influence (p < 0.05) on TPA responses, except for TPA chewiness (N), which remained constant. Texture profile data were fitted to models to explain the mechanical behavior according to the compression degree used in the test. The isotropic behavior observed may be a result of differences in the production method of Mozzarella cheese, especially the stretching of the cheese mass. Texture Profile Analysis (TPA) is a technique widely used to assess the mechanical properties of food, particularly cheese. The precise choice of the instrumental test configuration is essential for achieving results that represent the material analyzed. The manufacturing method is another factor that may directly influence the mechanical properties of food. This can be seen, for instance, in stretched-curd cheeses such as Mozzarella. Knowledge of such mechanical properties is highly relevant to the food industry, owing to the mechanical resistance required in piling, pressing, packaging, and transport, and to the melting behavior of the food at the high temperatures used in preparing pizzas, snacks, sandwiches, and appetizers. © 2016 Wiley Periodicals, Inc.

  16. The Performance of Wavelets for Data Compression in Selected Military Applications

    DTIC Science & Technology

    1990-02-23

    The quoted compression ratio is conservative in the sense that it understates the theoretical compression ratio by taking into account the actual memory ..., which has the effect of reducing the compression ratios quoted in the table by the factor 7.8/8.0 = 0.975 (AWARE, Inc.).

  17. A Simple, Low Overhead Data Compression Algorithm for Converting Lossy Compression Processes to Lossless

    DTIC Science & Technology

    1993-12-01

    Naval Postgraduate School, Monterey, California. Thesis: A Simple, Low Overhead Data Compression Algorithm for Converting Lossy Compression Processes to Lossless. Author: Abbott, Walter D., III. Approved for public release; distribution is unlimited.

  18. Massive data compression for parameter-dependent covariance matrices

    NASA Astrophysics Data System (ADS)

    Heavens, Alan F.; Sellentin, Elena; de Mijolla, Damien; Vianello, Alvise

    2017-12-01

    We show how the massive data compression algorithm MOPED can be used to reduce, by orders of magnitude, the number of simulated data sets which are required to estimate the covariance matrix required for the analysis of Gaussian-distributed data. This is relevant when the covariance matrix cannot be calculated directly. The compression is especially valuable when the covariance matrix varies with the model parameters. In this case, it may be prohibitively expensive to run enough simulations to estimate the full covariance matrix throughout the parameter space. This compression may be particularly valuable for the next generation of weak lensing surveys, such as proposed for Euclid and the Large Synoptic Survey Telescope, for which the number of summary data (such as band power or shear correlation estimates) is very large, ∼10^4, due to the large number of tomographic redshift bins into which the data will be divided. In the pessimistic case where the covariance matrix is estimated separately for all points in a Markov chain Monte Carlo analysis, this may require an unfeasible 10^9 simulations. We show here that MOPED can reduce this number by a factor of 1000, or a factor of ∼10^6 if some regularity in the covariance matrix is assumed, reducing the number of simulations required to a manageable 10^3, making an otherwise intractable analysis feasible.
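
    A minimal sketch of the single-parameter MOPED weighting vector (the standard construction from the MOPED literature, shown for orientation only): the full method handles several parameters via Gram-Schmidt orthogonalization and, in this paper, parameter-dependent covariances. The toy covariance and derivative vector are arbitrary.

```python
# Compress a data vector x to one number y = b.x, where b is chosen so that y
# preserves the Fisher information about a single parameter (for fixed C).
import numpy as np

rng = np.random.default_rng(2)
n = 50
C = np.diag(rng.uniform(0.5, 2.0, n))           # toy data covariance (diagonal)
mu_deriv = rng.normal(size=n)                   # d<mu>/d(theta) at the fiducial model

Cinv_mu = np.linalg.solve(C, mu_deriv)
b = Cinv_mu / np.sqrt(mu_deriv @ Cinv_mu)       # MOPED weighting vector, b.C.b = 1

x = rng.multivariate_normal(np.zeros(n), C)     # one simulated data vector
y = b @ x                                       # compressed summary (a single number)
print("compressed", n, "numbers to 1; b.C.b =", float(b @ C @ b))
```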

  19. Onboard Processor for Compressing HSI Data

    NASA Technical Reports Server (NTRS)

    Cook, Sid; Harsanyi, Joe; Day, John H. (Technical Monitor)

    2002-01-01

    With EO-1 Hyperion and MightySat in orbit NASA and the DoD are showing their continued commitment to hyperspectral imaging (HSI). As HSI sensor technology continues to mature, the ever-increasing amounts of sensor data generated will result in a need for more cost effective communication and data handling systems. Lockheed Martin, with considerable experience in spacecraft design and developing special purpose onboard processors, has teamed with Applied Signal & Image Technology (ASIT), who has an extensive heritage in HSI, to develop a real-time and intelligent onboard processing (OBP) system to reduce HSI sensor downlink requirements. Our goal is to reduce the downlink requirement by a factor greater than 100, while retaining the necessary spectral fidelity of the sensor data needed to satisfy the many science, military, and intelligence goals of these systems. Our initial spectral compression experiments leverage commercial-off-the-shelf (COTS) spectral exploitation algorithms for segmentation, material identification and spectral compression that ASIT has developed. ASIT will also support the modification and integration of this COTS software into the OBP. Other commercially available COTS software for spatial compression will also be employed as part of the overall compression processing sequence. Over the next year elements of a high-performance reconfigurable OBP will be developed to implement proven preprocessing steps that distill the HSI data stream in both spectral and spatial dimensions. The system will intelligently reduce the volume of data that must be stored, transmitted to the ground, and processed while minimizing the loss of information.

  20. Compressing DNA sequence databases with coil.

    PubMed

    White, W Timothy J; Hendy, Michael D

    2008-05-20

    Publicly available DNA sequence databases such as GenBank are large, and are growing at an exponential rate. The sheer volume of data being dealt with presents serious storage and data communications problems. Currently, sequence data is usually kept in large "flat files," which are then compressed using standard Lempel-Ziv (gzip) compression - an approach which rarely achieves good compression ratios. While much research has been done on compressing individual DNA sequences, surprisingly little has focused on the compression of entire databases of such sequences. In this study we introduce the sequence database compression software coil. We have designed and implemented a portable software package, coil, for compressing and decompressing DNA sequence databases based on the idea of edit-tree coding. coil is geared towards achieving high compression ratios at the expense of execution time and memory usage during compression - the compression time represents a "one-off investment" whose cost is quickly amortised if the resulting compressed file is transmitted many times. Decompression requires little memory and is extremely fast. We demonstrate a 5% improvement in compression ratio over state-of-the-art general-purpose compression tools for a large GenBank database file containing Expressed Sequence Tag (EST) data. Finally, coil can efficiently encode incremental additions to a sequence database. coil presents a compelling alternative to conventional compression of flat files for the storage and distribution of DNA sequence databases having a narrow distribution of sequence lengths, such as EST data. Increasing compression levels for databases having a wide distribution of sequence lengths is a direction for future work.

  1. Image compression technique

    DOEpatents

    Fu, C.Y.; Petrich, L.I.

    1997-03-25

    An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels each of the pixels in the filled edge array which corresponds to an edge pixel having a value equal to the value of a pixel of the image array selected in response to the edge pixel, and each of the pixels in the filled edge array which does not correspond to an edge pixel having a value which is a weighted average of the values of surrounding pixels in the filled edge array which do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques also are described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image. 16 figs.

  2. Potential health gains and health losses in eleven EU countries attainable through feasible prevalences of the life-style related risk factors alcohol, BMI, and smoking: a quantitative health impact assessment.

    PubMed

    Lhachimi, Stefan K; Nusselder, Wilma J; Smit, Henriette A; Baili, Paolo; Bennett, Kathleen; Fernández, Esteve; Kulik, Margarete C; Lobstein, Tim; Pomerleau, Joceline; Boshuizen, Hendriek C; Mackenbach, Johan P

    2016-08-05

    Influencing the life-style risk-factors alcohol, body mass index (BMI), and smoking is an European Union (EU) wide objective of public health policy. The population-level health effects of these risk-factors depend on population specific characteristics and are difficult to quantify without dynamic population health models. For eleven countries-approx. 80 % of the EU-27 population-we used evidence from the publicly available DYNAMO-HIA data-set. For each country the age- and sex-specific risk-factor prevalence and the incidence, prevalence, and excess mortality of nine chronic diseases are utilized; including the corresponding relative risks linking risk-factor exposure causally to disease incidence and all-cause mortality. Applying the DYNAMO-HIA tool, we dynamically project the country-wise potential health gains and losses using feasible, i.e. observed elsewhere, risk-factor prevalence rates as benchmarks. The effects of the "worst practice", "best practice", and the currently observed risk-factor prevalence on population health are quantified and expected changes in life expectancy, morbidity-free life years, disease cases, and cumulative mortality are reported. Applying the best practice smoking prevalence yields the largest gains in life expectancy with 0.4 years for males and 0.3 year for females (approx. 332,950 and 274,200 deaths postponed, respectively) while the worst practice smoking prevalence also leads to the largest losses with 0.7 years for males and 0.9 year for females (approx. 609,400 and 710,550 lives lost, respectively). Comparing morbidity-free life years, the best practice smoking prevalence shows the highest gains for males with 0.4 years (342,800 less disease cases), whereas for females the best practice BMI prevalence yields the largest gains with 0.7 years (1,075,200 less disease cases). Smoking is still the risk-factor with the largest potential health gains. BMI, however, has comparatively large effects on morbidity. Future

  3. Structure and Properties of Silica Glass Densified in Cold Compression and Hot Compression

    NASA Astrophysics Data System (ADS)

    Guerette, Michael; Ackerson, Michael R.; Thomas, Jay; Yuan, Fenglin; Bruce Watson, E.; Walker, David; Huang, Liping

    2015-10-01

    Silica glass has been shown in numerous studies to possess significant capacity for permanent densification under pressure at different temperatures to form high density amorphous (HDA) silica. However, it is unknown to what extent the processes leading to irreversible densification of silica glass in cold-compression at room temperature and in hot-compression (e.g., near the glass transition temperature) are common in nature. In this work, a hot-compression technique was used to quench silica glass from high temperature (1100 °C) and high pressure (up to 8 GPa) conditions, which leads to a density increase of ~25% and a Young's modulus increase of ~71% relative to that of pristine silica glass at ambient conditions. Our experiments and molecular dynamics (MD) simulations provide solid evidence that the intermediate-range order of the hot-compressed HDA silica is distinct from that of the counterpart cold-compressed at room temperature. This explains the much higher thermal and mechanical stability of the former relative to the latter upon heating and compression, as revealed in our in-situ Brillouin light scattering (BLS) experiments. Our studies demonstrate the limitation of the resulting density as a structural indicator of polyamorphism, and point out the importance of temperature during compression in order to fundamentally understand HDA silica.

  4. Compact compressive arc and beam switchyard for energy recovery linac-driven ultraviolet free electron lasers

    NASA Astrophysics Data System (ADS)

    Akkermans, J. A. G.; Di Mitri, S.; Douglas, D.; Setija, I. D.

    2017-08-01

    High gain free electron lasers (FELs) driven by high repetition rate recirculating accelerators have received considerable attention in the scientific and industrial communities in recent years. Cost-performance optimization of such facilities encourages limiting machine size and complexity, and a compact machine can be realized by combining bending and bunch length compression during the last stage of recirculation, just before lasing. The impact of coherent synchrotron radiation (CSR) on electron beam quality during compression can, however, limit FEL output power. When methods to counteract CSR are implemented, appropriate beam diagnostics become critical to ensure that the target beam parameters are met before lasing, as well as to guarantee reliable, predictable performance and rapid machine setup and recovery. This article describes a beam line for bunch compression and recirculation, and beam switchyard accessing a diagnostic line for EUV lasing at 1 GeV beam energy. The footprint is modest, with 12 m compressive arc diameter and ˜20 m diagnostic line length. The design limits beam quality degradation due to CSR both in the compressor and in the switchyard. Advantages and drawbacks of two switchyard lines providing, respectively, off-line and on-line measurements are discussed. The entire design is scalable to different beam energies and charges.

  5. Compressing DNA sequence databases with coil

    PubMed Central

    White, W Timothy J; Hendy, Michael D

    2008-01-01

    Background Publicly available DNA sequence databases such as GenBank are large, and are growing at an exponential rate. The sheer volume of data being dealt with presents serious storage and data communications problems. Currently, sequence data is usually kept in large "flat files," which are then compressed using standard Lempel-Ziv (gzip) compression – an approach which rarely achieves good compression ratios. While much research has been done on compressing individual DNA sequences, surprisingly little has focused on the compression of entire databases of such sequences. In this study we introduce the sequence database compression software coil. Results We have designed and implemented a portable software package, coil, for compressing and decompressing DNA sequence databases based on the idea of edit-tree coding. coil is geared towards achieving high compression ratios at the expense of execution time and memory usage during compression – the compression time represents a "one-off investment" whose cost is quickly amortised if the resulting compressed file is transmitted many times. Decompression requires little memory and is extremely fast. We demonstrate a 5% improvement in compression ratio over state-of-the-art general-purpose compression tools for a large GenBank database file containing Expressed Sequence Tag (EST) data. Finally, coil can efficiently encode incremental additions to a sequence database. Conclusion coil presents a compelling alternative to conventional compression of flat files for the storage and distribution of DNA sequence databases having a narrow distribution of sequence lengths, such as EST data. Increasing compression levels for databases having a wide distribution of sequence lengths is a direction for future work. PMID:18489794

  6. Block sparsity-based joint compressed sensing recovery of multi-channel ECG signals.

    PubMed

    Singh, Anurag; Dandapat, Samarendra

    2017-04-01

    In recent years, compressed sensing (CS) has emerged as an effective alternative to conventional wavelet-based data compression techniques. This is due to its simple and energy-efficient data reduction procedure, which makes it suitable for resource-constrained wireless body area network (WBAN)-enabled electrocardiogram (ECG) telemonitoring applications. Both spatial and temporal correlations exist simultaneously in multi-channel ECG (MECG) signals. Exploitation of both types of correlations is very important in CS-based ECG telemonitoring systems for better performance. However, most of the existing CS-based works exploit either of the correlations, which results in a suboptimal performance. In this work, within a CS framework, the authors propose to exploit both types of correlations simultaneously using a sparse Bayesian learning-based approach. A spatiotemporal sparse model is employed for joint compression/reconstruction of MECG signals. Discrete wavelet transform domain block sparsity of MECG signals is exploited for simultaneous reconstruction of all the channels. Performance evaluations using the Physikalisch-Technische Bundesanstalt MECG diagnostic database show a significant gain in the diagnostic reconstruction quality of the MECG signals compared with state-of-the-art techniques at a reduced number of measurements. The low measurement requirement may lead to significant savings in the energy cost of existing CS-based WBAN systems.
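
    A minimal numpy sketch of the measurement model this work builds on, assuming a synthetic block-sparse coefficient vector and a random Gaussian sensing matrix; the sparse Bayesian learning reconstruction used by the authors is not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    n, m, block = 256, 96, 8          # signal length, measurements, block size
    k_blocks = 4                      # number of active coefficient blocks

    # Block-sparse coefficient vector: a few contiguous blocks of nonzeros,
    # mimicking wavelet-domain block sparsity of multi-channel ECG.
    x = np.zeros(n)
    active = rng.choice(n // block, size=k_blocks, replace=False)
    for b in active:
        x[b * block:(b + 1) * block] = rng.standard_normal(block)

    # Compressed sensing measurement: y = Phi @ x with a random Gaussian matrix.
    Phi = rng.standard_normal((m, n)) / np.sqrt(m)
    y = Phi @ x

    print(f"compression ratio m/n = {m}/{n} = {m / n:.2f}")
    # Reconstruction (not shown) would exploit the block structure, e.g. via
    # block sparse Bayesian learning, rather than plain element-wise sparsity.
    ```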

  7. Color image lossy compression based on blind evaluation and prediction of noise characteristics

    NASA Astrophysics Data System (ADS)

    Ponomarenko, Nikolay N.; Lukin, Vladimir V.; Egiazarian, Karen O.; Lepisto, Leena

    2011-03-01

    The paper deals with JPEG adaptive lossy compression of color images formed by digital cameras. Adaptation to the noise characteristics and blur estimated for each given image is carried out. The dominant factor degrading image quality is determined in a blind manner, and the characteristics of this dominant factor are then estimated. Finally, a scaling factor that determines the quantization steps for the default JPEG table is set adaptively. Within this general framework, two possible strategies are considered. The first presumes blind estimation for an image after all operations in the digital image processing chain, just before compressing a given raster image. The second strategy is based on predicting noise and blur parameters from analysis of the RAW image, under quite general assumptions concerning the transformations the image will be subjected to at further processing stages. The advantages of both strategies are discussed. The first strategy provides more accurate estimation and a larger benefit in image compression ratio (CR) compared to the super-high quality (SHQ) mode; however, it is more complicated and requires more resources. The second strategy is simpler but less beneficial. The proposed approaches are tested on a large set of real-life color images acquired by digital cameras and are shown to provide more than a two-fold increase in average CR compared to the SHQ mode, without introducing visible distortions with respect to SHQ-compressed images.
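
    The first step of either strategy, blindly estimating the dominant degradation and mapping it to a quantization scaling factor, can be illustrated with a standard single-image noise estimator (Immerkaer's fast noise variance method) and a hypothetical mapping to a JPEG quality setting; the mapping thresholds below are placeholders, not the values used by the authors.

    ```python
    import numpy as np
    from scipy.ndimage import convolve

    def estimate_noise_sigma(gray: np.ndarray) -> float:
        """Blind noise estimate (Immerkaer 1996) from a single grayscale image."""
        mask = np.array([[1, -2, 1],
                         [-2, 4, -2],
                         [1, -2, 1]], dtype=float)
        h, w = gray.shape
        resp = convolve(gray.astype(float), mask, mode="reflect")
        return np.sqrt(np.pi / 2.0) * np.abs(resp).sum() / (6.0 * (w - 2) * (h - 2))

    def jpeg_quality_for_noise(sigma: float) -> int:
        """Hypothetical mapping: noisier images tolerate coarser quantization."""
        if sigma < 2:
            return 95          # near-noiseless: keep quantization fine
        if sigma < 8:
            return 85
        return 70              # strong noise: larger quantization steps stay invisible

    # Usage (hypothetical): quality = jpeg_quality_for_noise(estimate_noise_sigma(img))
    ```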

  8. Galileo mission planning for Low Gain Antenna based operations

    NASA Technical Reports Server (NTRS)

    Gershman, R.; Buxbaum, K. L.; Ludwinski, J. M.; Paczkowski, B. G.

    1994-01-01

    The Galileo mission operations concept is undergoing substantial redesign, necessitated by the deployment failure of the High Gain Antenna while the spacecraft is on its way to Jupiter. The new design applies state-of-the-art technology and processes to increase the telemetry rate available through the Low Gain Antenna and to increase the information density of the telemetry. This paper describes the mission planning process being developed as part of this redesign. Principal topics include a brief description of the new mission concept and anticipated science return (these have been covered more extensively in earlier papers), identification of key drivers on the mission planning process, a description of the process and its implementation schedule, a discussion of the application of automated mission planning tools to the process, and a status report on mission planning work to date. Galileo enhancements include extensive reprogramming of on-board computers and substantial hardware and software upgrades for the Deep Space Network (DSN). The principal mode of operation will be onboard recording of science data followed by extended playback periods. A variety of techniques will be used to compress and edit the data both before recording and during playback. A highly compressed real-time science data stream will also be important. The telemetry rate will be increased using advanced coding techniques and advanced receivers. Galileo mission planning for orbital operations now involves partitioning of several scarce resources. Particularly difficult are division of the telemetry among the many users (eleven instruments, radio science, engineering monitoring, and navigation) and allocation of space on the tape recorder at each of the ten satellite encounters. The planning process is complicated by uncertainty in the forecast performance of the DSN modifications and the non-deterministic nature of the new data compression schemes. Key mission planning steps include

  9. Galileo mission planning for Low Gain Antenna based operations

    NASA Astrophysics Data System (ADS)

    Gershman, R.; Buxbaum, K. L.; Ludwinski, J. M.; Paczkowski, B. G.

    1994-11-01

    The Galileo mission operations concept is undergoing substantial redesign, necessitated by the deployment failure of the High Gain Antenna while the spacecraft is on its way to Jupiter. The new design applies state-of-the-art technology and processes to increase the telemetry rate available through the Low Gain Antenna and to increase the information density of the telemetry. This paper describes the mission planning process being developed as part of this redesign. Principal topics include a brief description of the new mission concept and anticipated science return (these have been covered more extensively in earlier papers), identification of key drivers on the mission planning process, a description of the process and its implementation schedule, a discussion of the application of automated mission planning tools to the process, and a status report on mission planning work to date. Galileo enhancements include extensive reprogramming of on-board computers and substantial hardware and software upgrades for the Deep Space Network (DSN). The principal mode of operation will be onboard recording of science data followed by extended playback periods. A variety of techniques will be used to compress and edit the data both before recording and during playback. A highly compressed real-time science data stream will also be important. The telemetry rate will be increased using advanced coding techniques and advanced receivers. Galileo mission planning for orbital operations now involves partitioning of several scarce resources. Particularly difficult are division of the telemetry among the many users (eleven instruments, radio science, engineering monitoring, and navigation) and allocation of space on the tape recorder at each of the ten satellite encounters. The planning process is complicated by uncertainty in the forecast performance of the DSN modifications and the non-deterministic nature of the new data compression schemes. Key mission planning steps include

  10. The estimation of uniaxial compressive strength conversion factor of trona and interbeds from point load tests and numerical modeling

    NASA Astrophysics Data System (ADS)

    Ozturk, H.; Altinpinar, M.

    2017-07-01

    The point load (PL) test is generally used for estimation of the uniaxial compressive strength (UCS) of rocks because of its economic advantages and simplicity in testing. If the PL index of a specimen is known, the UCS can be estimated using conversion factors. Several conversion factors have been proposed by various researchers, and they are dependent upon the rock type. In the literature, conversion factors for different sedimentary, igneous and metamorphic rocks can be found, but no study exists for trona. In this study, laboratory UCS and field PL tests were carried out on trona and interbeds of volcano-sedimentary rocks. Based on these tests, PL to UCS conversion factors for trona and interbeds are proposed. The tests were modeled numerically using distinct element method (DEM) software, the particle flow code (PFC), in an attempt to guide researchers dealing with various types of modeling problems (excavation, cavern design, hydraulic fracturing, etc.) in the abovementioned rock types. Average PFC parallel bond contact model micro properties for the trona and interbeds were determined within this study so that future researchers can use them and avoid the rigorous PFC calibration procedure. It was observed that PFC overestimates the tensile strength of the rocks by a factor that ranges from 22 to 106.
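
    The conversion the study estimates is conventionally written UCS ≈ K · Is(50). Given paired laboratory UCS and point-load index values, K follows from a zero-intercept least-squares fit, as in the sketch below (synthetic numbers, not the paper's data).

    ```python
    import numpy as np

    # Hypothetical paired measurements (MPa): point-load index Is(50) and lab UCS.
    is50 = np.array([1.1, 1.5, 2.0, 2.4, 3.1])
    ucs  = np.array([18.0, 26.0, 33.0, 41.0, 52.0])

    # Zero-intercept least squares for UCS = K * Is(50):
    # K = sum(Is50 * UCS) / sum(Is50^2)
    K = np.dot(is50, ucs) / np.dot(is50, is50)
    print(f"estimated conversion factor K = {K:.1f}")   # UCS ~ K * Is(50)
    ```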

  11. Continuous direct compression as manufacturing platform for sustained release tablets.

    PubMed

    Van Snick, B; Holman, J; Cunningham, C; Kumar, A; Vercruysse, J; De Beer, T; Remon, J P; Vervaet, C

    2017-03-15

    This study presents a framework for process and product development on a continuous direct compression manufacturing platform. A challenging sustained release formulation with a high content of a poorly flowing, low density drug was selected. Two HPMC grades were evaluated as matrix former: standard Methocel CR and directly compressible Methocel DC2. The feeding behavior of each formulation component was investigated by deriving feed factor profiles. The maximum feed factor was used to estimate the drive command and depended strongly upon the density of the material. Furthermore, the shape of the feed factor profile allowed definition of a customized refill regime for each material. In-line NIR spectroscopy was used to estimate the residence time distribution (RTD) in the mixer and to monitor blend uniformity. Tablet content and weight variability were determined as additional measures of mixing performance. For Methocel CR, the best axial mixing (i.e., feeder fluctuation damping) was achieved when an impeller with a high number of radial mixing blades operated at low speed. However, the variability in tablet weight and content uniformity deteriorated under this condition. One can therefore conclude that balancing axial mixing with tablet quality is critical for Methocel CR. However, reformulating with the directly compressible Methocel DC2 as matrix former vastly improved tablet quality. Furthermore, both process and product were significantly more robust to changes in process and design variables. This observation underpins the importance of flowability during continuous blending and die-filling. At the compaction stage, blends with Methocel CR showed better tabletability, driven by a higher compressibility as the smaller CR particles have a higher bonding area. However, tablets of similar strength were achieved using Methocel DC2 by targeting equal porosity. Compaction pressure impacted tablet properties and dissolution. Hence controlling thickness during continuous manufacturing of

  12. A study of compressibility and compactibility of directly compressible tableting materials containing tramadol hydrochloride.

    PubMed

    Mužíková, Jitka; Kubíčková, Alena

    2016-09-01

    The paper evaluates and compares the compressibility and compactibility of directly compressible tableting materials for the preparation of hydrophilic gel matrix tablets containing tramadol hydrochloride and the coprocessed dry binders Prosolv® SMCC 90 and Disintequik™ MCC 25. The selected types of hypromellose are Methocel™ Premium K4M and Methocel™ Premium K100M in 30 and 50% concentrations, the lubricant being magnesium stearate in a 1% concentration. Compressibility is evaluated by means of the energy profile of the compression process, and compactibility by the tensile strength of tablets. The values of total energy of compression and plasticity were higher in the tableting materials containing Prosolv® SMCC 90 than in those containing Disintequik™ MCC 25. Tramadol slightly decreased the values of total energy of compression and plasticity. Tableting materials containing Prosolv® SMCC 90 yielded stronger tablets. Tramadol decreased the strength of tablets from both coprocessed dry binders.

  13. The compression of perceived time in a hot environment depends on physiological and psychological factors.

    PubMed

    Tamm, Maria; Jakobson, Ainika; Havik, Merle; Burk, Andres; Timpmann, Saima; Allik, Jüri; Oöpik, Vahur; Kreegipuu, Kairi

    2014-01-01

    The human perception of time was observed under extremely hot conditions. Young healthy men performed a time production task repeatedly in 4 experimental trials in either a temperate (22 °C, relative humidity 35%) or a hot (42 °C, relative humidity 18%) environment, with or without moderate-intensity treadmill exercise. Within 1 hour, the produced durations indicated a significant compression of short intervals (0.5 to 10 s) in the combination of exercise and high ambient temperature, while neither condition alone was enough to yield the effect. Temporal judgement was analysed in relation to different indicators of arousal, such as critical flicker frequency (CFF), core temperature, heart rate, and subjective ratings of fatigue and exertion. The arousal-sensitive internal clock model (originally proposed by Treisman) is used to explain the temporal compression while exercising in heat. As a result, we suggest that the psychological response to heat stress, more precisely perceived fatigue, is important in describing the relationship between core temperature and time perception. Temporal compression is related to higher core temperature, but only if a certain level of perceived fatigue is accounted for, implying the existence of a thermoemotional internal clock.

  14. Improving multispectral satellite image compression using onboard subpixel registration

    NASA Astrophysics Data System (ADS)

    Albinet, Mathieu; Camarero, Roberto; Isnard, Maxime; Poulet, Christophe; Perret, Jokin

    2013-09-01

    Future CNES Earth observation missions will have to deal with an ever increasing telemetry data rate due to improvements in resolution and the addition of spectral bands. Current CNES image compressors implement a discrete wavelet transform (DWT) followed by a bit plane encoding (BPE), but only on a monospectral basis, and do not profit from the multispectral redundancy of the observed scenes. Recent CNES studies have proven a substantial gain in the achievable compression ratio, +20% to +40% on selected scenarios, by implementing a multispectral compression scheme based on a Karhunen-Loève transform (KLT) followed by the classical DWT+BPE. But such results can be achieved only on perfectly registered bands; a registration error as low as 0.5 pixel ruins all the benefits of multispectral compression. In this work, we first study the possibility of implementing multi-band subpixel onboard registration based on registration grids generated on-the-fly by the satellite attitude control system and simplified resampling and interpolation techniques. Indeed, band registration is usually performed on the ground using sophisticated techniques that are too computationally intensive for onboard use. This fully quantized algorithm is tuned to meet acceptable registration performance within stringent image quality criteria, with the objective of onboard real-time processing. In a second part, we describe an FPGA implementation developed to evaluate the design complexity and, by extrapolation, the data rate achievable on a space-qualified ASIC. Finally, we present the impact of this approach on the processing chain, not only onboard but also on the ground, and the impacts on the design of the instrument.
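
    A minimal numpy sketch of the spectral decorrelation step (a KLT across bands), applied to a synthetic, already-registered multispectral cube; the DWT+BPE stages and the onboard registration algorithm itself are not reproduced here.

    ```python
    import numpy as np

    def spectral_klt(cube: np.ndarray):
        """Karhunen-Loeve transform across the bands of a (bands, rows, cols) cube."""
        b, r, c = cube.shape
        x = cube.reshape(b, -1).astype(float)
        mean = x.mean(axis=1, keepdims=True)
        cov = np.cov(x - mean)                      # b x b spectral covariance
        eigval, eigvec = np.linalg.eigh(cov)        # ascending eigenvalues
        order = eigval.argsort()[::-1]
        klt = eigvec[:, order].T                    # rows = principal components
        components = (klt @ (x - mean)).reshape(b, r, c)
        return components, klt, mean

    # Synthetic 4-band cube with strong inter-band correlation (hypothetical data).
    rng = np.random.default_rng(1)
    base = rng.standard_normal((128, 128))
    cube = np.stack([base + 0.05 * rng.standard_normal((128, 128)) for _ in range(4)])
    comps, klt, mean = spectral_klt(cube)
    print("component variances:", comps.reshape(4, -1).var(axis=1).round(3))
    # Most energy collapses into the first component, which is what makes the
    # subsequent DWT + bit-plane encoding more efficient -- provided the bands
    # are registered to subpixel accuracy.
    ```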

  15. Compression fractures of the back

    MedlinePlus

    ... treatments. Surgery can include: balloon kyphoplasty, vertebroplasty, and spinal fusion. Other surgery may be done to remove bone ... Alternative names: vertebral compression fractures; osteoporosis - compression fracture.

  16. A joint source-channel distortion model for JPEG compressed images.

    PubMed

    Sabir, Muhammad F; Sheikh, Hamid Rahim; Heath, Robert W; Bovik, Alan C

    2006-06-01

    The need for efficient joint source-channel coding (JSCC) is growing as new multimedia services are introduced in commercial wireless communication systems. An important component of practical JSCC schemes is a distortion model that can predict the quality of compressed digital multimedia such as images and videos. The usual approach in the JSCC literature for quantifying the distortion due to quantization and channel errors is to estimate it for each image using the statistics of the image for a given signal-to-noise ratio (SNR). This is not an efficient approach in the design of real-time systems because of the computational complexity. A more useful and practical approach would be to design JSCC techniques that minimize average distortion for a large set of images based on some distortion model rather than carrying out per-image optimizations. However, models for estimating average distortion due to quantization and channel bit errors in a combined fashion for a large set of images are not available for practical image or video coding standards employing entropy coding and differential coding. This paper presents a statistical model for estimating the distortion introduced in progressive JPEG compressed images due to quantization and channel bit errors in a joint manner. Statistical modeling of important compression techniques such as Huffman coding, differential pulse-code modulation, and run-length coding is included in the model. Examples show that the distortion in terms of peak signal-to-noise ratio (PSNR) can be predicted within a 2-dB maximum error over a variety of compression ratios and bit-error rates. To illustrate the utility of the proposed model, we present an unequal power allocation scheme as a simple application of our model. Results show that it gives a PSNR gain of around 6.5 dB at low SNRs, as compared to equal power allocation.
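
    For reference, the quality metric used throughout, PSNR, is computed as in the helper below; the paper's statistical distortion model itself is not reproduced here.

    ```python
    import numpy as np

    def psnr(reference: np.ndarray, test: np.ndarray, peak: float = 255.0) -> float:
        """Peak signal-to-noise ratio in dB between a reference and a test image."""
        mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
        if mse == 0:
            return float("inf")
        return 10.0 * np.log10(peak ** 2 / mse)

    # A predicted-vs-measured PSNR difference below 2 dB, as reported above, would be
    # checked by comparing psnr(original, decoded) against the model's estimate for
    # the same compression ratio and bit-error rate.
    ```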

  17. Buckling behavior of origami unit cell facets under compressive loads

    NASA Astrophysics Data System (ADS)

    Kshad, Mohamed Ali Emhmed; Naguib, Hani E.

    2018-03-01

    Origami structures used as cores for sandwich structures are designed to withstand compressive loads and to dissipate compressive energy. The deformation of the origami panels and of the unit cell facets is the primary factor behind the compressive energy dissipation in origami structures. During the loading stage, the origami structures deform through the folding and unfolding of the unit cell facets, and also through plastic deformation of the facets. This work presents a numerical study of the buckling behavior of different origami unit cell elements under compressive loading. The studied configurations were Miura and Ron-Resch-like origami structures, and the buckling behavior of the unit cell facets of these two types of structures was investigated with a finite element package. The simulation was conducted using the ANSYS finite element software, in which the unit cell was modeled with shell elements and the eigenvalue buckling solver was used to predict the theoretical buckling of the unit cell elements.

  18. Data compression: The end-to-end information systems perspective for NASA space science missions

    NASA Technical Reports Server (NTRS)

    Tai, Wallace

    1991-01-01

    The unique characteristics of compressed data have important implications for the design of space science data systems, science applications, and data compression techniques. The sequential nature of, or data dependence between, the sample values within a block of compressed data introduces an error multiplication or propagation factor which compounds the effects of communication errors. The data communication characteristics of the onboard data acquisition, storage, and telecommunication channels may influence the size of the compressed blocks and the frequency of included re-initialization points. The organization of the compressed data is continually changing depending on the entropy of the input data. This also results in a variable output rate from the instrument, which may require buffering to interface with the spacecraft data system. On the ground, there exist key tradeoff issues associated with the distribution and management of the science data products when data compression techniques are applied in order to alleviate the constraints imposed by ground communication bandwidth and data storage capacity.

  19. Analysis of Compression Algorithm in Ground Collision Avoidance Systems (Auto-GCAS)

    NASA Technical Reports Server (NTRS)

    Schmalz, Tyler; Ryan, Jack

    2011-01-01

    Automatic Ground Collision Avoidance Systems (Auto-GCAS) utilize Digital Terrain Elevation Data (DTED) stored onboard an aircraft to determine potential recovery maneuvers. Because of the current limitations of computer hardware on military airplanes such as the F-22 and F-35, the DTED must be compressed through a lossy technique called binary-tree tip-tilt. The purpose of this study is to determine the accuracy of the compressed data with respect to the original DTED. This study is mainly interested in the magnitude of the error between the two as well as the overall distribution of the errors throughout the DTED. By understanding how the errors of the compression technique are affected by various factors (topography, density of sampling points, sub-sampling techniques, etc.), modifications can be made to the compression technique resulting in better accuracy. This, in turn, would minimize unnecessary activation of Auto-GCAS during flight as well as maximize its contribution to fighter safety.
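
    The error analysis described, the magnitude and distribution of elevation differences between the original and compressed DTED, reduces to summary statistics over the difference grid. The sketch below (hypothetical arrays; the binary-tree tip-tilt codec itself is not reproduced) shows the kind of summary involved.

    ```python
    import numpy as np

    def dted_error_stats(original: np.ndarray, compressed: np.ndarray) -> dict:
        """Summarize elevation error (metres) introduced by lossy terrain compression."""
        err = compressed.astype(float) - original.astype(float)
        return {
            "max_abs": float(np.abs(err).max()),
            "mean": float(err.mean()),
            "rmse": float(np.sqrt((err ** 2).mean())),
            "p95_abs": float(np.percentile(np.abs(err), 95)),
        }

    # Hypothetical usage with two co-registered elevation grids:
    # stats = dted_error_stats(dted_original, dted_binary_tree_tip_tilt)
    ```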

  20. Digital compression algorithms for HDTV transmission

    NASA Technical Reports Server (NTRS)

    Adkins, Kenneth C.; Shalkhauser, Mary JO; Bibyk, Steven B.

    1990-01-01

    Digital compression of video images is a possible avenue for high definition television (HDTV) transmission. Compression needs to be optimized while picture quality remains high. Two techniques for compressing the digital images are explained, and comparisons are drawn between the human vision system and artificial compression techniques. Suggestions for improving compression algorithms through the use of neural and analog circuitry are given.

  1. Micro-Mechanical Analysis About Kink Band in Carbon Fiber/Epoxy Composites Under Longitudinal Compression

    NASA Astrophysics Data System (ADS)

    Zhang, Mi; Guan, Zhidong; Wang, Xiaodong; Du, Shanyi

    2017-10-01

    Kink banding is a typical phenomenon for composites under longitudinal compression. In this paper, theoretical analysis and finite element simulation were conducted to analyze the kink angle as well as the compressive strength of composites. The kink angle was considered to be an important characteristic throughout the longitudinal compression process. Three factors, including the plastic matrix, initial fiber misalignment and rotation due to loading, were considered in the theoretical analysis. In addition, the relationship between kink angle and fiber volume fraction was improved and optimized by theoretical derivation. Finite element models accounting for stochastic fiber strength and a Drucker-Prager constitutive model for the matrix were built in ABAQUS to analyze the kink band formation process, and the results corresponded with the experimental observations. Through simulation, the loading and failure process can be clearly divided into three stages: an elastic stage, a softening stage, and a fiber break stage. The simulations also show that the kink band is a result of fiber misalignment and the plastic matrix. Different values of initial fiber misalignment angle, wavelength and fiber volume fraction were considered to explore their effects on compressive strength and kink angle. Results show that compressive strength increases with decreasing initial fiber misalignment angle, decreasing initial fiber misalignment wavelength and increasing fiber volume fraction, while the kink angle decreases in these situations. An orthogonal array was also constructed to rank the influence of these factors; it indicates that the initial fiber misalignment angle has the largest impact on compressive strength and kink angle.

  2. Biomedical sensor design using analog compressed sensing

    NASA Astrophysics Data System (ADS)

    Balouchestani, Mohammadreza; Krishnan, Sridhar

    2015-05-01

    The main drawback of current healthcare systems is their location-specific nature due to the use of fixed/wired biomedical sensors. Since biomedical sensors are usually driven by a battery, power consumption is the most important factor determining the life of a biomedical sensor. They are also restricted by size, cost, and transmission capacity. Therefore, it is important to reduce the sampling load by merging the sampling and compression steps, in order to reduce storage usage, transmission times, and power consumption and so expand current healthcare systems into Wireless Healthcare Systems (WHSs). In this work, we present an implementation of a low-power biomedical sensor using an analog Compressed Sensing (CS) framework for sparse biomedical signals that addresses both the energy and telemetry bandwidth constraints of wearable and wireless Body-Area Networks (BANs). This architecture enables continuous data acquisition and compression of biomedical signals that are suitable for a variety of diagnostic and treatment purposes. At the transmitter side, an analog CS framework is applied at the sensing step, before the analog-to-digital converter (ADC), in order to generate the compressed version of the input analog bio-signal. At the receiver side, a reconstruction algorithm based on the Restricted Isometry Property (RIP) condition is applied in order to reconstruct the original bio-signals from the compressed bio-signals with high probability and sufficient accuracy. We examine the proposed algorithm with healthy and neuropathy surface electromyography (sEMG) signals. The proposed algorithm achieves an Average Recognition Rate (ARR) of 93% and a reconstruction accuracy of 98.9%. In addition, the proposed architecture reduces the total computation time from 32 to 11.5 seconds at a sampling rate equal to 29% of the Nyquist rate, with a Percentage Residual Difference (PRD) of 26% and a Root Mean Squared Error (RMSE) of 3%.
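
    The two reconstruction-quality figures quoted, PRD and RMSE, follow the usual definitions; small helpers for reference (assuming PRD is expressed as a percentage of the reference signal energy).

    ```python
    import numpy as np

    def prd(reference: np.ndarray, reconstructed: np.ndarray) -> float:
        """Percentage Residual Difference: 100 * ||x - x_hat|| / ||x||."""
        return 100.0 * np.linalg.norm(reference - reconstructed) / np.linalg.norm(reference)

    def rmse(reference: np.ndarray, reconstructed: np.ndarray) -> float:
        """Root mean squared error between reference and reconstructed signals."""
        return float(np.sqrt(np.mean((reference - reconstructed) ** 2)))

    # Usage with hypothetical sEMG vectors: prd(x, x_hat), rmse(x, x_hat)
    ```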

  3. Subjective evaluation of compressed image quality

    NASA Astrophysics Data System (ADS)

    Lee, Heesub; Rowberg, Alan H.; Frank, Mark S.; Choi, Hyung-Sik; Kim, Yongmin

    1992-05-01

    Lossy data compression generates distortion or error in the reconstructed image, and the distortion becomes visible as the compression ratio increases. Even at the same compression ratio, the distortion appears differently depending on the compression method used. Because of the nonlinearity of the human visual system and of lossy data compression methods, we have evaluated subjectively the quality of medical images compressed with two different methods, an intraframe and an interframe coding algorithm. The evaluated raw data were analyzed statistically to measure interrater reliability and the reliability of an individual reader. Analysis of variance was also used to identify which compression method is statistically better, and from what compression ratio the quality of a compressed image is judged poorer than that of the original. Nine x-ray CT head images from three patients were used as test cases. Six radiologists participated in reading the 99 images (some were duplicates) compressed at four different levels: original, 5:1, 10:1, and 15:1. The six readers agreed more than by chance alone, and their agreement was statistically significant, but there were large variations among readers as well as within a reader. The displacement-estimated interframe coding algorithm is significantly better in quality than the 2-D block DCT at a significance level of 0.05. Also, 10:1 compressed images produced with the interframe coding algorithm do not show any significant differences from the original at level 0.05.

  4. Sensitivity Analysis in RIPless Compressed Sensing

    DTIC Science & Technology

    2014-10-01

    The compressive sensing framework finds a wide range of applications in signal processing and analysis. Within this ... compressed sensing. More specifically, we show that in a noiseless and RIP-less setting [11], the recovery process of a compressed sensing framework is

  5. Highly efficient frequency conversion with bandwidth compression of quantum light

    PubMed Central

    Allgaier, Markus; Ansari, Vahid; Sansoni, Linda; Eigner, Christof; Quiring, Viktor; Ricken, Raimund; Harder, Georg; Brecht, Benjamin; Silberhorn, Christine

    2017-01-01

    Hybrid quantum networks rely on efficient interfacing of dissimilar quantum nodes, as elements based on parametric downconversion sources, quantum dots, colour centres or atoms are fundamentally different in their frequencies and bandwidths. Although pulse manipulation has been demonstrated in very different systems, to date no interface exists that provides both an efficient bandwidth compression and a substantial frequency translation at the same time. Here we demonstrate an engineered sum-frequency-conversion process in lithium niobate that achieves both goals. We convert pure photons at telecom wavelengths to the visible range while compressing the bandwidth by a factor of 7.47 and preserving non-classical photon-number statistics. We achieve internal conversion efficiencies of 61.5%, significantly outperforming spectral filtering for bandwidth compression. Our system thus makes the connection between previously incompatible quantum systems as a step towards usable quantum networks. PMID:28134242

  6. Comparison of the effectiveness of compression stockings and layer compression systems in venous ulceration treatment

    PubMed Central

    Jawień, Arkadiusz; Cierzniakowska, Katarzyna; Cwajda-Białasik, Justyna; Mościcka, Paulina

    2010-01-01

    Introduction: The aim of the research was to compare the dynamics of venous ulcer healing when treated with compression stockings as well as with original two- and four-layer bandage systems. Material and methods: A group of 46 patients suffering from venous ulcers was studied. This group consisted of 36 (78.3%) women and 10 (21.7%) men aged between 41 and 88 years (the average age was 66.6 years and the median was 67). Patients were randomized into three groups, for treatment with the ProGuide two-layer system, Profore four-layer compression, or class II compression stockings. In the case of multi-layer compression, compression ensuring a pressure of 40 mmHg at ankle level was used. Results: In all patients, independently of the type of compression therapy, statistically significant changes of ulceration area over time were observed (Student’s t test for matched pairs, p < 0.05). The largest loss of ulceration area in each of the successive measurements was observed in patients treated with the four-layer system – on average 0.63 cm2 per week. The smallest loss of ulceration area was observed in patients using compression stockings – on average 0.44 cm2 per week. However, the observed differences were not statistically significant (Kruskal-Wallis test H = 4.45, p > 0.05). Conclusions: Systematic compression therapy, applied with an initial pressure of 40 mmHg, is an effective method of conservative treatment of venous ulcers. Compression stockings and the prepared multi-layer compression systems were characterized by similar clinical effectiveness. PMID:22419941

  7. Novel image compression-encryption hybrid algorithm based on key-controlled measurement matrix in compressive sensing

    NASA Astrophysics Data System (ADS)

    Zhou, Nanrun; Zhang, Aidi; Zheng, Fen; Gong, Lihua

    2014-10-01

    The existing ways to encrypt images based on compressive sensing usually treat the whole measurement matrix as the key, which renders the key too large to distribute, memorize or store. To solve this problem, a new image compression-encryption hybrid algorithm is proposed to realize compression and encryption simultaneously, where the key is easily distributed, stored or memorized. The input image is divided into 4 blocks to compress and encrypt, and the pixels of adjacent blocks are then exchanged randomly using random matrices. The measurement matrices in compressive sensing are constructed by utilizing circulant matrices and controlling the original row vectors of the circulant matrices with a logistic map. The random matrices used in the random pixel exchange are bound to the measurement matrices. Simulation results verify the effectiveness and security of the proposed algorithm and its acceptable compression performance.
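
    A minimal sketch of the key-controlled measurement matrix construction described above: a circulant matrix whose generating vector comes from a logistic map seeded by the key, truncated to m rows. Parameter values and normalization are illustrative, not taken from the paper.

    ```python
    import numpy as np
    from scipy.linalg import circulant

    def logistic_sequence(x0: float, n: int, r: float = 3.99, burn_in: int = 100) -> np.ndarray:
        """Chaotic logistic-map sequence x_{k+1} = r * x_k * (1 - x_k), seeded by the key x0."""
        x = x0
        for _ in range(burn_in):          # discard the transient so the key is not exposed directly
            x = r * x * (1.0 - x)
        out = np.empty(n)
        for i in range(n):
            x = r * x * (1.0 - x)
            out[i] = x
        return out

    def keyed_measurement_matrix(key: float, m: int, n: int) -> np.ndarray:
        """m x n sensing matrix: first m rows of a circulant matrix built from the keyed sequence."""
        gen = 2.0 * logistic_sequence(key, n) - 1.0    # map (0, 1) -> (-1, 1)
        return circulant(gen)[:m, :] / np.sqrt(m)      # normalization is illustrative

    Phi = keyed_measurement_matrix(key=0.3456, m=64, n=256)
    print(Phi.shape)   # (64, 256); only the scalar key needs to be shared
    ```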

  8. Comparison of chest compression quality between the modified chest compression method with the use of smartphone application and the standardized traditional chest compression method during CPR.

    PubMed

    Park, Sang-Sub

    2014-01-01

    The purpose of this study is to determine the difference in chest compression quality between a modified chest compression method using a smartphone application and the standardized traditional chest compression method. Of the 70 people who agreed to participate after completing the CPR curriculum, 64 took part (6 were absent). Participants were divided into a smartphone group (33 people), using the modified chest compression method, and a traditional group (31 people), using the standardized method. Both groups used the same equipment: a practice manikin and an evaluation manikin. The smartphone group used an application running on two smartphone products (G, i) with the Android and iOS operating systems. The measurements were conducted from September 25th to 26th, 2012, and the data were analyzed with the SPSS WIN 12.0 program. Compression depth was more adequate (p < 0.01) in the traditional group (53.77 mm) than in the smartphone group (48.35 mm). The proportion of proper chest compressions was also higher (p < 0.05) in the traditional group (73.96%) than in the smartphone group (60.51%). The traditional group likewise rated its awareness of chest compression accuracy higher (3.83 points) than the smartphone group (2.32 points; p < 0.001). In an additional single-question survey administered only to the smartphone group, the main reasons given against the modified method were hand-back pain in the rescuer (48.5%) and unstable posture (21.2%).

  9. Survey of Header Compression Techniques

    NASA Technical Reports Server (NTRS)

    Ishac, Joseph

    2001-01-01

    This report provides a summary of several different header compression techniques. The different techniques included are: (1) Van Jacobson's header compression (RFC 1144); (2) SCPS (Space Communications Protocol Standards) header compression (SCPS-TP, SCPS-NP); (3) Robust header compression (ROHC); and (4) the header compression techniques in RFC2507 and RFC2508. The methodology for compression and error correction in these schemes is described in the remainder of this document. All of the header compression schemes support compression over simplex links, provided that the end receiver has some means of sending data back to the sender. However, if that return path does not exist, then neither Van Jacobson's nor SCPS can be used, since both rely on TCP (Transmission Control Protocol). In addition, under link conditions of low delay and low error, all of the schemes perform as expected. However, based on the methodology of the schemes, each scheme is likely to behave differently as conditions degrade. Van Jacobson's header compression relies heavily on the TCP retransmission timer and would suffer an increase in loss propagation should the link possess a high delay and/or bit error rate (BER). The SCPS header compression scheme protects against high delay environments by avoiding delta encoding between packets. Thus, loss propagation is avoided. However, SCPS is still affected by an increased BER, since the lack of delta encoding results in larger header sizes. Next, the schemes found in RFC2507 and RFC2508 perform well for non-TCP connections in poor conditions. RFC2507 performance with TCP connections is improved by various techniques over Van Jacobson's, but still suffers a performance hit with poor link properties. Also, RFC2507 offers the ability to send TCP data without delta encoding, similar to what SCPS offers. ROHC is similar to the previous two schemes, but adds additional CRCs (cyclic redundancy checks) into headers and improves
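
    The idea shared by Van Jacobson-style and RFC2507/RFC2508-style schemes, sending only the differences between successive header field values once a full header has established context, can be sketched with a toy delta encoder over a few integer fields; this is an illustration of the principle, not any of the actual wire formats.

    ```python
    def delta_encode(headers):
        """Toy header compression: send the first header in full, then per-field deltas."""
        if not headers:
            return []
        stream = [dict(headers[0])]                         # full context establishes the state
        prev = headers[0]
        for h in headers[1:]:
            stream.append({k: h[k] - prev[k] for k in h})   # small deltas are cheap to encode
            prev = h
        return stream

    def delta_decode(stream):
        if not stream:
            return []
        headers = [dict(stream[0])]
        for deltas in stream[1:]:
            headers.append({k: headers[-1][k] + v for k, v in deltas.items()})
        return headers

    # Toy TCP-like fields: sequence number, ack number, IP ID.
    hdrs = [{"seq": 1000, "ack": 500, "ipid": 7},
            {"seq": 2460, "ack": 500, "ipid": 8},
            {"seq": 3920, "ack": 500, "ipid": 9}]
    assert delta_decode(delta_encode(hdrs)) == hdrs
    # A single lost packet desynchronizes the decoder state, which is exactly the
    # loss-propagation weakness the survey attributes to delta-based schemes.
    ```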

  10. Postnatal Weight Gain Modifies Severity and Functional Outcome of Oxygen-Induced Proliferative Retinopathy

    PubMed Central

    Stahl, Andreas; Chen, Jing; Sapieha, Przemyslaw; Seaward, Molly R.; Krah, Nathan M.; Dennison, Roberta J.; Favazza, Tara; Bucher, Felicitas; Löfqvist, Chatarina; Ong, Huy; Hellström, Ann; Chemtob, Sylvain; Akula, James D.; Smith, Lois E.H.

    2010-01-01

    In clinical studies, postnatal weight gain is strongly associated with retinopathy of prematurity (ROP). However, animal studies are needed to investigate the pathophysiological mechanisms of how postnatal weight gain affects the severity of ROP. In the present study, we identify nutritional supply as one potent parameter that affects the extent of retinopathy in mice with identical birth weights and the same genetic background. Wild-type pups with poor postnatal nutrition and poor weight gain (PWG) exhibit a remarkably prolonged phase of retinopathy compared to medium weight gain or extensive weight gain pups. A high (r2 = 0.83) parabolic association between postnatal weight gain and oxygen-induced retinopathy severity is observed, as is a significantly prolonged phase of proliferative retinopathy in PWG pups (20 days) compared with extensive weight gain pups (6 days). The extended retinopathy is concomitant with prolonged overexpression of retinal vascular endothelial growth factor in PWG pups. Importantly, PWG pups show low serum levels of nonfasting glucose, insulin, and insulin-like growth factor-1 as well as high levels of ghrelin in the early postoxygen-induced retinopathy phase, a combination indicative of poor metabolic supply. These differences translate into visual deficits in adult PWG mice, as demonstrated by impaired bipolar and proximal neuronal function. Together, these results provide evidence for a pathophysiological correlation between poor postnatal nutritional supply, slow weight gain, prolonged retinal vascular endothelial growth factor overexpression, protracted retinopathy, and reduced final visual outcome. PMID:21056995

  11. Postnatal weight gain modifies severity and functional outcome of oxygen-induced proliferative retinopathy.

    PubMed

    Stahl, Andreas; Chen, Jing; Sapieha, Przemyslaw; Seaward, Molly R; Krah, Nathan M; Dennison, Roberta J; Favazza, Tara; Bucher, Felicitas; Löfqvist, Chatarina; Ong, Huy; Hellström, Ann; Chemtob, Sylvain; Akula, James D; Smith, Lois E H

    2010-12-01

    In clinical studies, postnatal weight gain is strongly associated with retinopathy of prematurity (ROP). However, animal studies are needed to investigate the pathophysiological mechanisms of how postnatal weight gain affects the severity of ROP. In the present study, we identify nutritional supply as one potent parameter that affects the extent of retinopathy in mice with identical birth weights and the same genetic background. Wild-type pups with poor postnatal nutrition and poor weight gain (PWG) exhibit a remarkably prolonged phase of retinopathy compared to medium weight gain or extensive weight gain pups. A high (r(2) = 0.83) parabolic association between postnatal weight gain and oxygen-induced retinopathy severity is observed, as is a significantly prolonged phase of proliferative retinopathy in PWG pups (20 days) compared with extensive weight gain pups (6 days). The extended retinopathy is concomitant with prolonged overexpression of retinal vascular endothelial growth factor in PWG pups. Importantly, PWG pups show low serum levels of nonfasting glucose, insulin, and insulin-like growth factor-1 as well as high levels of ghrelin in the early postoxygen-induced retinopathy phase, a combination indicative of poor metabolic supply. These differences translate into visual deficits in adult PWG mice, as demonstrated by impaired bipolar and proximal neuronal function. Together, these results provide evidence for a pathophysiological correlation between poor postnatal nutritional supply, slow weight gain, prolonged retinal vascular endothelial growth factor overexpression, protracted retinopathy, and reduced final visual outcome.

  12. Reversible Watermarking Surviving JPEG Compression.

    PubMed

    Zain, J; Clarke, M

    2005-01-01

    This paper discusses the properties of watermarking medical images. We also discuss the possibility of such images being compressed by JPEG and give an overview of JPEG compression. We then propose a watermarking scheme that is reversible and robust to JPEG compression. The purpose is to verify the integrity and authenticity of medical images. We used 800x600x8-bit ultrasound (US) images in our experiment. The SHA-256 hash of the image is embedded in the least significant bits (LSBs) of an 8x8 block in the region of non-interest (RONI). The image is then compressed using JPEG and decompressed using Photoshop 6.0. If the image has not been altered, the extracted watermark will match the SHA-256 hash of the original image. The results show that the embedded watermark is robust to JPEG compression up to image quality 60 (~91% compressed).
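
    A sketch of the embedding step described above: the SHA-256 digest of an 8-bit grayscale image written into pixel LSBs inside a region of non-interest. The digest (256 bits) is spread over a 16x16 patch here rather than the paper's 8x8 block layout, and the reversibility and JPEG-robustness of the actual scheme are not reproduced.

    ```python
    import hashlib
    import numpy as np

    def embed_hash_lsb(image: np.ndarray, roni_origin=(0, 0)) -> np.ndarray:
        """Embed the image's SHA-256 digest into pixel LSBs inside a region of non-interest.

        Sketch only: assumes a uint8 grayscale image with at least a 16x16 patch at roni_origin.
        """
        digest = hashlib.sha256(image.tobytes()).digest()            # 32 bytes = 256 bits
        bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8)).reshape(16, 16)
        out = image.copy()
        r0, c0 = roni_origin
        patch = out[r0:r0 + 16, c0:c0 + 16]
        patch[:] = (patch & 0xFE) | bits                             # overwrite LSBs in place
        return out

    def extract_lsb_bits(image: np.ndarray, roni_origin=(0, 0)) -> np.ndarray:
        r0, c0 = roni_origin
        return (image[r0:r0 + 16, c0:c0 + 16] & 1).astype(np.uint8).reshape(-1)

    # Note: because the digest is computed before embedding, verification compares the
    # extracted bits against a hash of the protected (non-watermarked) region, which is
    # where the reversibility of the actual scheme comes in.
    ```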

  13. Behavioral Treatment Approaches to Prevent Weight Gain Following Smoking Cessation.

    ERIC Educational Resources Information Center

    Grinstead, Olga A.

    Personality and physiological, cognitive, and environmental factors have all been suggested as critical variables in smoking cessation and relapse. Weight gain and the fear of weight gain after smoking cessation may also prevent many smokers from quitting. A sample of 45 adult smokers participated in a study in which three levels of preventive…

  14. Use of compression garments by women with lymphoedema secondary to breast cancer treatment.

    PubMed

    Longhurst, E; Dylke, E S; Kilbreath, S L

    2018-02-19

    The aim of this study was to determine the use of compression garments by women with lymphoedema secondary to breast cancer treatment and the factors which underpin their use. An online survey was distributed to the Survey and Review group of the Breast Cancer Network Australia. The survey included questions related to the participants' demographics, breast cancer and lymphoedema medical history, prescription and use of compression garments, and their beliefs about compression and lymphoedema. Data were analysed using principal component analysis and multivariable logistic regression. Compression garments had been prescribed to 83% of 201 women with lymphoedema within the last 5 years, although 37 women had discontinued their use. Even when accounting for severity of swelling, the type of garment(s) and the advice given for use varied across participants. Use of compression garments was driven by women's beliefs that they were vulnerable to progression of their disease and that compression would prevent its worsening. Common reasons for discontinuing use included discomfort and the perception that their lymphoedema was stable. Participant characteristics associated with discontinuance of compression garments included (i) a belief that the garments were not effective in managing their condition, (ii) mild to moderate swelling, and/or (iii) swelling experienced for more than 5 years. The prescription of compression garments for lymphoedema is highly varied, which may be due to a lack of underpinning evidence to inform treatment.

  15. Compressed Sensing for Body MRI

    PubMed Central

    Feng, Li; Benkert, Thomas; Block, Kai Tobias; Sodickson, Daniel K; Otazo, Ricardo; Chandarana, Hersh

    2016-01-01

    The introduction of compressed sensing for increasing imaging speed in MRI has raised significant interest among researchers and clinicians, and has initiated a large body of research across multiple clinical applications over the last decade. Compressed sensing aims to reconstruct unaliased images from fewer measurements than are traditionally required in MRI by exploiting image compressibility or sparsity. Moreover, appropriate combinations of compressed sensing with previously introduced fast imaging approaches, such as parallel imaging, have demonstrated further improved performance. The advent of compressed sensing marks the prelude to a new era of rapid MRI, where the focus of data acquisition has changed from sampling based on the nominal number of voxels and/or frames to sampling based on the desired information content. This paper presents a brief overview of the application of compressed sensing techniques in body MRI, where imaging speed is crucial due to the presence of respiratory motion along with stringent constraints on spatial and temporal resolution. The first section provides an overview of the basic compressed sensing methodology, including the notions of sparsity, incoherence, and non-linear reconstruction. The second section reviews state-of-the-art compressed sensing techniques that have been demonstrated for various clinical body MRI applications. In the final section, the paper discusses current challenges and future opportunities. PMID:27981664

  16. Radiometric resolution enhancement by lossy compression as compared to truncation followed by lossless compression

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Manohar, Mareboyana

    1994-01-01

    Recent advances in imaging technology make it possible to obtain imagery data of the Earth at high spatial, spectral and radiometric resolutions from Earth orbiting satellites. The rate at which the data is collected from these satellites can far exceed the channel capacity of the data downlink. Reducing the data rate to within the channel capacity can often require painful trade-offs in which certain scientific returns are sacrificed for the sake of others. In this paper we model the radiometric version of this form of lossy compression by dropping a specified number of least significant bits from each data pixel and compressing the remaining bits using an appropriate lossless compression technique. We call this approach 'truncation followed by lossless compression' or TLLC. We compare the TLLC approach with applying a lossy compression technique to the data for reducing the data rate to the channel capacity, and demonstrate that each of three different lossy compression techniques (JPEG/DCT, VQ and Model-Based VQ) gives a better effective radiometric resolution than TLLC for a given channel rate.
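
    The TLLC baseline defined above, dropping a fixed number of least significant bits per pixel and then compressing losslessly, is straightforward to reproduce; the sketch below uses zlib as the lossless stage, which is an assumption rather than the coder used in the paper.

    ```python
    import zlib
    import numpy as np

    def tllc_ratio(data: np.ndarray, dropped_bits: int) -> float:
        """Truncation followed by lossless compression: return the achieved compression ratio."""
        truncated = (data.astype(np.uint16) >> dropped_bits).astype(np.uint16)
        compressed = zlib.compress(truncated.tobytes(), level=9)
        return data.nbytes / len(compressed)

    # Hypothetical 12-bit radiometric image stored in uint16:
    rng = np.random.default_rng(2)
    img = rng.normal(2000, 60, size=(512, 512)).clip(0, 4095).astype(np.uint16)
    for k in (0, 2, 4):
        print(f"drop {k} LSBs -> compression ratio {tllc_ratio(img, k):.2f}")
    ```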

  17. Phase diagram of matrix compressed sensing

    NASA Astrophysics Data System (ADS)

    Schülke, Christophe; Schniter, Philip; Zdeborová, Lenka

    2016-12-01

    In the problem of matrix compressed sensing, we aim to recover a low-rank matrix from a few noisy linear measurements. In this contribution, we analyze the asymptotic performance of a Bayes-optimal inference procedure for a model where the matrix to be recovered is a product of random matrices. The results that we obtain using the replica method describe the state evolution of the Parametric Bilinear Generalized Approximate Message Passing (P-BiG-AMP) algorithm, recently introduced in J. T. Parker and P. Schniter [IEEE J. Select. Top. Signal Process. 10, 795 (2016), 10.1109/JSTSP.2016.2539123]. We show the existence of two different types of phase transition and their implications for the solvability of the problem, and we compare the results of our theoretical analysis to the numerical performance reached by P-BiG-AMP. Remarkably, the asymptotic replica equations for matrix compressed sensing are the same as those for a related but formally different problem of matrix factorization.

  18. Report from the 2013 meeting of the International Compression Club on advances and challenges of compression therapy.

    PubMed

    Delos Reyes, Arthur P; Partsch, Hugo; Mosti, Giovanni; Obi, Andrea; Lurie, Fedor

    2014-10-01

    The International Compression Club, a collaboration of medical experts and industry representatives, was founded in 2005 to develop consensus reports and recommendations regarding the use of compression therapy in the treatment of acute and chronic vascular disease. During the recent meeting of the International Compression Club, member presentations were focused on the clinical application of intermittent pneumatic compression in different disease scenarios as well as on the use of inelastic and short stretch compression therapy. In addition, several new compression devices and systems were introduced by industry representatives. This article summarizes the presentations and subsequent discussions and provides a description of the new compression therapies presented.

  19. A Bunch Compression Method for Free Electron Lasers that Avoids Parasitic Compressions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benson, Stephen V.; Douglas, David R.; Tennant, Christopher D.

    2015-09-01

    Virtually all existing high energy (>few MeV) linac-driven FELs compress the electron bunch length through the use of off-crest acceleration on the rising side of the RF waveform followed by transport through a magnetic chicane. This approach has at least three flaws: 1) it is difficult to correct aberrations, particularly RF curvature; 2) rising-side acceleration exacerbates space charge-induced distortion of the longitudinal phase space; and 3) all achromatic "negative compaction" compressors create parasitic compression during the final compression process, increasing the CSR-induced emittance growth. One can avoid these deficiencies by using acceleration on the falling side of the RF waveform and a compressor with M56 > 0. This approach offers multiple advantages: 1) it is readily achieved in beam lines supporting simple schemes for aberration compensation; 2) longitudinal space charge (LSC)-induced phase space distortion tends, on the falling side of the RF waveform, to enhance the chirp; and 3) compressors with M56 > 0 can be configured to avoid spurious over-compression. We will discuss this bunch compression scheme in detail and give results of a successful beam test in April 2012 using the JLab UV Demo FEL.
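
    A hedged restatement of the standard first-order longitudinal optics behind this scheme, in the M56 notation used above: a particle at longitudinal position z_i with fractional energy deviation delta = h z_i (h being the chirp imposed by off-crest acceleration) leaves the compressor at

    \[
    z_f \simeq z_i + M_{56}\,\delta = \bigl(1 + h\,M_{56}\bigr)\,z_i, \qquad h \equiv \frac{d\delta}{dz},
    \]

    so the bunch length scales by |1 + h M56| and full compression requires h M56 -> -1. Accelerating on the falling rather than the rising side of the RF waveform reverses the sign of the chirp, which is why it pairs with a compressor of the opposite M56 sign (here M56 > 0), consistent with the scheme described in the abstract; this is standard linear optics, not a result taken from the paper itself.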

  20. QLog Solar-Cell Mode Photodiode Logarithmic CMOS Pixel Using Charge Compression and Readout.

    PubMed

    Ni, Yang

    2018-02-14

    In this paper, we present a new logarithmic pixel design currently under development at New Imaging Technologies SA (NIT). This new logarithmic pixel design uses charge domain logarithmic signal compression and charge-transfer-based signal readout. This structure gives a linear response in low light conditions and logarithmic response in high light conditions. The charge transfer readout efficiently suppresses the reset (KTC) noise by using true correlated double sampling (CDS) in low light conditions. In high light conditions, thanks to charge domain logarithmic compression, it has been demonstrated that 3000 electrons should be enough to cover a 120 dB dynamic range with a mobile phone camera-like signal-to-noise ratio (SNR) over the whole dynamic range. This low electron count permits the use of ultra-small floating diffusion capacitance (sub-fF) without charge overflow. The resulting large conversion gain permits a single photon detection capability with a wide dynamic range without a complex sensor/system design. A first prototype sensor with 320 × 240 pixels has been implemented to validate this charge domain logarithmic pixel concept and modeling. The first experimental results validate the logarithmic charge compression theory and the low readout noise due to the charge-transfer-based readout.
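
    As a quick, hedged consistency check of the figures quoted (simple arithmetic, not taken from the paper's own analysis): a 120 dB dynamic range corresponds to an illuminance ratio

    \[
    20 \log_{10} R = 120\ \mathrm{dB} \;\Rightarrow\; R = 10^{6},
    \]

    i.e. six decades. With charge-domain logarithmic compression the stored charge grows roughly with the logarithm of the light level, so about 3000/6 = 500 electrons per decade span the range, whereas a purely linear response would need a full well of order 10^6 electrons to represent the same ratio down to the single-electron level.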

  1. Gains in Life Expectancy Associated with Higher Education in Men

    PubMed Central

    Bijwaard, Govert E.; van Poppel, Frans; Ekamper, Peter; Lumey, L. H.

    2015-01-01

    Background Many studies show large differences in life expectancy across the range of education, intelligence, and socio-economic status. As educational attainment, intelligence, and socio-economic status are highly interrelated, appropriate methods are required to disentangle their separate effects. The aim of this paper is to present a novel method to estimate gains in life expectancy specifically associated with increased education. Our analysis is based on a structural model in which education level, IQ at age 18 and mortality all depend on (latent) intelligence. The model allows for (selective) educational choices based on observed factors and on an unobserved factor capturing intelligence. Our estimates are based on information from health examinations of military conscripts born in 1944–1947 in The Netherlands and their vital status through age 66 (n = 39,798). Results Our empirical results show that men with higher education have lower mortality. Using structural models to account for education choice, the estimated gain in life expectancy for men moving up one educational level ranges from 0.3 to 2 years. The estimated gain in months alive over the observational period ranges from -1.2 to 5.7 months. The selection effect is positive and amounts to a gain of one to two months. Decomposition of the selection effect shows that the gain from selection on (latent) intelligence is larger than the gain from selection on observed factors and amounts to 1.0 to 1.7 additional months alive. Conclusion Our findings confirm the strong selection into education based on socio-economic status and intelligence. They also show significant higher life expectancy among individuals with higher education after the selectivity of education choice has been taken into account. Based on these estimates, it is plausible therefore that increases in education could lead to increases in life expectancy. PMID:26496647

  2. Gains in Life Expectancy Associated with Higher Education in Men.

    PubMed

    Bijwaard, Govert E; van Poppel, Frans; Ekamper, Peter; Lumey, L H

    2015-01-01

    Many studies show large differences in life expectancy across the range of education, intelligence, and socio-economic status. As educational attainment, intelligence, and socio-economic status are highly interrelated, appropriate methods are required to disentangle their separate effects. The aim of this paper is to present a novel method to estimate gains in life expectancy specifically associated with increased education. Our analysis is based on a structural model in which education level, IQ at age 18 and mortality all depend on (latent) intelligence. The model allows for (selective) educational choices based on observed factors and on an unobserved factor capturing intelligence. Our estimates are based on information from health examinations of military conscripts born in 1944-1947 in The Netherlands and their vital status through age 66 (n = 39,798). Our empirical results show that men with higher education have lower mortality. Using structural models to account for education choice, the estimated gain in life expectancy for men moving up one educational level ranges from 0.3 to 2 years. The estimated gain in months alive over the observational period ranges from -1.2 to 5.7 months. The selection effect is positive and amounts to a gain of one to two months. Decomposition of the selection effect shows that the gain from selection on (latent) intelligence is larger than the gain from selection on observed factors and amounts to 1.0 to 1.7 additional months alive. Our findings confirm the strong selection into education based on socio-economic status and intelligence. They also show significant higher life expectancy among individuals with higher education after the selectivity of education choice has been taken into account. Based on these estimates, it is plausible therefore that increases in education could lead to increases in life expectancy.

  3. Generalized massive optimal data compression

    NASA Astrophysics Data System (ADS)

    Alsing, Justin; Wandelt, Benjamin

    2018-05-01

    In this paper, we provide a general procedure for optimally compressing N data down to n summary statistics, where n is equal to the number of parameters of interest. We show that compression to the score function - the gradient of the log-likelihood with respect to the parameters - yields n compressed statistics that are optimal in the sense that they preserve the Fisher information content of the data. Our method generalizes earlier work on linear Karhunen-Loève compression for Gaussian data whilst recovering both lossless linear compression and quadratic estimation as special cases when they are optimal. We give a unified treatment that also includes the general non-Gaussian case as long as mild regularity conditions are satisfied, producing optimal non-linear summary statistics when appropriate. As a worked example, we derive explicitly the n optimal compressed statistics for Gaussian data in the general case where both the mean and covariance depend on the parameters.
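
    As a compact restatement of the compression rule described above (notation mine; this is the standard score-compression identity and its Gaussian special case, not the paper's full derivation), the n summaries are the components of the score evaluated at a fiducial parameter point, which for Gaussian data with parameter-independent covariance reduces to the familiar linear Karhunen-Loève form:

      t(d) = \nabla_{\theta} \ln \mathcal{L}(d \mid \theta) \big|_{\theta = \theta_*}

      t(d) = (\nabla_{\theta} \mu)^{\mathsf T} C^{-1} \left( d - \mu(\theta_*) \right) \qquad \text{(Gaussian data, } C \text{ independent of } \theta\text{)}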

  4. Fractal-Based Image Compression

    DTIC Science & Technology

    1990-01-01

    Only OCR fragments of the abstract survive; they indicate that the Ziv-Lempel-Welch (LZW) compression algorithm [4], [5] was used for experiments and software development, cite J. Ziv and A. Lempel, "Compression of Individual Sequences via Variable-Rate Coding" [5], and mention the Collage Theorem and a deterministic algorithm for computing the IFS attractor for fast image compression.

  5. 29 CFR 1917.154 - Compressed air.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 7 2013-07-01 2013-07-01 false Compressed air. 1917.154 Section 1917.154 Labor Regulations...) MARINE TERMINALS Related Terminal Operations and Equipment § 1917.154 Compressed air. Employees shall be... this part during cleaning with compressed air. Compressed air used for cleaning shall not exceed a...

  6. 29 CFR 1917.154 - Compressed air.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 7 2012-07-01 2012-07-01 false Compressed air. 1917.154 Section 1917.154 Labor Regulations...) MARINE TERMINALS Related Terminal Operations and Equipment § 1917.154 Compressed air. Employees shall be... this part during cleaning with compressed air. Compressed air used for cleaning shall not exceed a...

  7. 29 CFR 1917.154 - Compressed air.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 7 2014-07-01 2014-07-01 false Compressed air. 1917.154 Section 1917.154 Labor Regulations...) MARINE TERMINALS Related Terminal Operations and Equipment § 1917.154 Compressed air. Employees shall be... this part during cleaning with compressed air. Compressed air used for cleaning shall not exceed a...

  8. Compressed storage of arterial pressure waveforms by selection of significant points.

    PubMed

    de Graaf, P M; van Goudoever, J; Wesseling, K H

    1997-09-01

    Continuous records of arterial blood pressure can be obtained non-invasively with Finapres, even for periods of 24 hours. Increasingly, storage of such records is done digitally, requiring large disc capacities. It is therefore necessary to find methods to store blood pressure waveforms in compressed form. The method of selection of significant points known from ECG data compression is adapted. Points are selected as significant wherever the first derivative of the pressure wave changes sign. As a second stage recursive partitioning is used to select additional points such that the difference between the selected points, linearly interpolated, and the original curve remains below a maximum. This method is tested on finger arterial pressure waveform epochs of 60 s duration taken from 32 patients with a wide range of blood pressures and heart rates. An average compression factor of 4.6 (SD 1.0) is obtained when accepting a maximum difference of 3 mmHg. The root mean squared error is 1 mmHg averaged over the group of patient waveforms. Clinically relevant parameters such as systolic, diastolic and mean pressure are reproduced with an offset error of less than 0.5 (0.3) mmHg and scatter less than 0.6 (0.1) mmHg. It is concluded that a substantial compression factor can be achieved with a simple and computationally fast algorithm and little deterioration in waveform quality and pressure level accuracy.
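
    A minimal sketch of the two-stage selection described above: stage one keeps points where the first derivative changes sign, and stage two recursively adds points until linear interpolation stays within a tolerance. The function names and the default 3 mmHg tolerance are illustrative assumptions, not the authors' implementation.

      import numpy as np

      def significant_points(p, max_err=3.0):
          """Select indices of significant points from a pressure waveform p (mmHg)."""
          d = np.diff(p)
          # Stage 1: local extrema (sign changes of the first derivative) plus both endpoints.
          idx = {0, len(p) - 1}
          idx.update(i + 1 for i in range(len(d) - 1) if d[i] * d[i + 1] < 0)
          idx = sorted(idx)

          # Stage 2: recursive partitioning. Keep adding the worst-fitting point until the
          # linear interpolation between selected points deviates from p by at most max_err.
          def refine(lo, hi, out):
              if hi - lo < 2:
                  return
              x = np.arange(lo, hi + 1)
              line = np.interp(x, [lo, hi], [p[lo], p[hi]])
              err = np.abs(p[lo:hi + 1] - line)
              k = int(np.argmax(err))
              if err[k] > max_err:
                  split = lo + k
                  out.add(split)
                  refine(lo, split, out)
                  refine(split, hi, out)

          selected = set(idx)
          for a, b in zip(idx[:-1], idx[1:]):
              refine(a, b, selected)
          return np.array(sorted(selected))

      # Usage: keep = significant_points(wave)
      #        rebuilt = np.interp(np.arange(len(wave)), keep, wave[keep])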

  9. Image quality (IQ) guided multispectral image compression

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Chen, Genshe; Wang, Zhonghai; Blasch, Erik

    2016-05-01

    Image compression is necessary for data transportation, as it saves both transfer time and storage space. In this paper, we focus our discussion on lossy compression. There are many standard image formats and corresponding compression algorithms, for example, JPEG (DCT -- discrete cosine transform), JPEG 2000 (DWT -- discrete wavelet transform), BPG (better portable graphics) and TIFF (LZW -- Lempel-Ziv-Welch). The image quality (IQ) of the decompressed image is measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and the Structural Similarity (SSIM) index. Given an image and a specified IQ, we investigate how to select a compression method and its parameters to achieve the expected compression. Our scenario consists of three steps. The first step is to compress a set of images of interest with varying parameters and compute their IQs for each compression method. The second step is to create several regression models per compression method after analyzing the IQ measurement versus compression parameter over a number of compressed images. The third step is to compress the given image at the specified IQ using the selected compression method (JPEG, JPEG 2000, BPG, or TIFF) according to the regressed models. If the IQ is specified by a compression ratio (e.g., 100), we select the compression method giving the highest IQ (SSIM or PSNR); if the IQ is specified by an IQ metric (e.g., SSIM = 0.8 or PSNR = 50), we select the compression method giving the highest compression ratio. Our experiments on thermal (long-wave infrared) grayscale images showed very promising results.
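
    A rough sketch of steps two and three under simplifying assumptions: PSNR on 8-bit data is used as the IQ metric, JPEG (via Pillow) as the single compression method, and a quadratic polynomial as the regression model; the file name and parameter ranges in the usage note are illustrative only.

      import io
      import numpy as np
      from PIL import Image

      def psnr(a, b):
          """PSNR in dB for 8-bit images."""
          mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
          return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

      def fit_iq_model(img, qualities=range(10, 96, 5)):
          """Steps 1-2: compress at several JPEG quality settings and regress PSNR vs quality."""
          ref = np.asarray(img)
          xs, ys = [], []
          for q in qualities:
              buf = io.BytesIO()
              img.save(buf, format="JPEG", quality=q)
              buf.seek(0)
              rec = np.asarray(Image.open(buf).convert(img.mode))
              xs.append(q)
              ys.append(psnr(ref, rec))
          return np.polyfit(xs, ys, 2)          # quadratic model: PSNR ~ f(quality)

      def quality_for_target_psnr(model, target, qualities=range(10, 96)):
          """Step 3: pick the lowest quality (highest compression) predicted to meet the target IQ."""
          preds = np.polyval(model, list(qualities))
          ok = [q for q, p in zip(qualities, preds) if p >= target]
          return min(ok) if ok else max(qualities)

      # Usage (hypothetical file):
      # img = Image.open("thermal_frame.png").convert("L")
      # model = fit_iq_model(img)
      # q = quality_for_target_psnr(model, target=40.0)   # quality expected to give PSNR >= 40 dB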

  10. Radiation hardness of thin Low Gain Avalanche Detectors

    NASA Astrophysics Data System (ADS)

    Kramberger, G.; Carulla, M.; Cavallaro, E.; Cindro, V.; Flores, D.; Galloway, Z.; Grinstein, S.; Hidalgo, S.; Fadeyev, V.; Lange, J.; Mandić, I.; Medin, G.; Merlos, A.; McKinney-Martinez, F.; Mikuž, M.; Quirion, D.; Pellegrini, G.; Petek, M.; Sadrozinski, H. F.-W.; Seiden, A.; Zavrtanik, M.

    2018-05-01

    Low Gain Avalanche Detectors (LGAD) are based on an n⁺⁺-p⁺-p-p⁺⁺ structure where an appropriate doping of the multiplication layer (p⁺) leads to high enough electric fields for impact ionization. Gain factors of a few tens in charge significantly improve the resolution of timing measurements, particularly for thin detectors, where the timing performance was shown to be limited by Landau fluctuations. The main obstacle for their operation is the decrease of gain with irradiation, attributed to effective acceptor removal in the gain layer. Sets of thin sensors were produced by two different producers on different substrates, with different gain layer doping profiles and thicknesses (45, 50 and 80 μm). Their performance in terms of gain/collected charge and leakage current was compared before and after irradiation with neutrons and pions up to the equivalent fluences of 5 · 10¹⁵ cm⁻². Transient Current Technique and charge collection measurements with LHC speed electronics were employed to characterize the detectors. The thin LGAD sensors were shown to perform much better than sensors of standard thickness (∼300 μm) and offer larger charge collection with respect to detectors without a gain layer for fluences < 2 · 10¹⁵ cm⁻². Larger initial gain prolongs the beneficial performance of LGADs. Pions were found to be more damaging than neutrons at the same equivalent fluence, while no significant difference was found between different producers. At very high fluences and bias voltages the gain appears due to deep acceptors in the bulk, hence also in thin standard detectors.

  11. Effect of body image on pregnancy weight gain.

    PubMed

    Mehta, Ushma J; Siega-Riz, Anna Maria; Herring, Amy H

    2011-04-01

    The majority of women gain more weight during pregnancy than what is recommended. Since gestational weight gain is related to short and long-term maternal health outcomes, it is important to identify women at greater risk of not adhering to guidelines. The objective of this study was to examine the relationship between body image and gestational weight gain. The Body Image Assessment for Obesity tool was used to measure ideal and current body sizes in 1,192 women participating in the Pregnancy, Infection and Nutrition Study. Descriptive and multivariable techniques were used to assess the effects of ideal body size and discrepancy score (current-ideal body sizes), which reflected the level of body dissatisfaction, on gestational weight gain. Women who preferred to be thinner had increased risk of excessive gain if they started the pregnancy at a BMI ≤26 kg/m(2) but a decreased risk if they were overweight or obese. Comparing those who preferred thin body silhouettes to those who preferred average size silhouettes, low income women had increased risk of inadequate weight gain [RR = 1.76 (1.08, 2.88)] while those with lower education were at risk of excessive gain [RR = 1.11 (1.00, 1.22)]. Our results revealed that body image was associated with gestational weight gain but the relationship is complex. Identifying factors that affect whether certain women are at greater risk of gaining outside of guidelines may improve our ability to decrease pregnancy-related health problems.

  12. Influence of rate of force application during compression on tablet capping.

    PubMed

    Sarkar, Srimanta; Ooi, Shing Ming; Liew, Celine Valeria; Heng, Paul Wan Sia

    2015-04-01

    Root cause and possible processing remediation of tablet capping were investigated using a specially designed tablet press with an air compensator installed above the precompression roll to limit compression force and allow extended dwell time in the precompression event. Using acetaminophen-starch (77.9:22.1) as a model formulation, tablets were prepared by various combinations of precompression and main compression forces, set precompression thickness, and turret speed. The rate of force application (RFA) was the main factor contributing to the tablet mechanical strength and capping. When target force above the force required for strong interparticulate bond formation, the resultant high RFA contributed to more pronounced air entrapment, uneven force distribution, and consequently, stratified densification in compact together with high viscoelastic recovery. These factors collectively had contributed to the tablet capping. As extended dwell time assisted particle rearrangement and air escape, a denser and more homogenous packing in the die could be achieved. This occurred during the extended dwell time when a low precompression force was applied, followed by application of main compression force for strong interparticulate bond formation that was the most beneficial option to solve capping problem. © 2015 Wiley Periodicals, Inc. and the American Pharmacists Association.

  13. Compression Ratio Adjuster

    NASA Technical Reports Server (NTRS)

    Akkerman, J. W.

    1982-01-01

    New mechanism alters compression ratio of internal-combustion engine according to load so that engine operates at top fuel efficiency. Ordinary gasoline, diesel and gas engines with their fixed compression ratios are inefficient at partial load and at low-speed full load. Mechanism ensures engines operate as efficiently under these conditions as they do at high load and high speed.

  14. Artificial acoustic stiffness reduction in fully compressible, direct numerical simulation of combustion

    NASA Astrophysics Data System (ADS)

    Wang, Yi; Trouvé, Arnaud

    2004-09-01

    A pseudo-compressibility method is proposed to modify the acoustic time step restriction found in fully compressible, explicit flow solvers. The method manipulates terms in the governing equations of order Ma², where Ma is a characteristic flow Mach number. A decrease in the speed of acoustic waves is obtained by adding an extra term in the balance equation for total energy. This term is proportional to flow dilatation and uses a decomposition of the dilatational field into an acoustic component and a component due to heat transfer. The present method is a variation of the pressure gradient scaling (PGS) method proposed in Ramshaw et al. (1985, 'Pressure gradient scaling method for fluid flow with nearly uniform pressure', J. Comput. Phys. 58, 361-76). It achieves gains in computational efficiency similar to PGS: at the cost of a slightly more involved right-hand-side computation, the numerical time step increases by a full order of magnitude. It also features the added benefit of preserving the hydrodynamic pressure field. The original and modified PGS methods are implemented into a parallel direct numerical simulation solver developed for applications to turbulent reacting flows with detailed chemical kinetics. The performance of the pseudo-compressibility methods is illustrated in a series of test problems ranging from isothermal sound propagation to laminar premixed flame problems.
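
    For orientation, the classic PGS idea cited above can be written schematically as dividing the pressure-gradient term of the momentum equation by \alpha^2 (my notation and a sketch only; the authors' variant instead adds a dilatation-proportional term to the total-energy equation and thereby preserves the hydrodynamic pressure field):

      \frac{\partial (\rho \mathbf{u})}{\partial t} + \nabla \cdot (\rho \mathbf{u}\mathbf{u}) = -\frac{1}{\alpha^{2}}\nabla p + \nabla \cdot \boldsymbol{\tau},
      \qquad c_{\mathrm{eff}} = \frac{c}{\alpha},
      \qquad \Delta t_{\mathrm{acoustic}} \sim \frac{\Delta x}{|\mathbf{u}| + c/\alpha}.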

  15. Shock compression and release of a-axis magnesium single crystals: Anisotropy and time dependent inelastic response

    DOE PAGES

    Renganathan, P.; Winey, J. M.; Gupta, Y. M.

    2017-01-19

    Here, to gain insight into inelastic deformation mechanisms for shocked hexagonal close-packed (hcp) metals, particularly the role of crystal anisotropy, magnesium (Mg) single crystals were subjected to shock compression and release along the a-axis to 3.0 and 4.8 GPa elastic impact stresses. Wave profiles measured at several thicknesses, using laser interferometry, show a sharply peaked elastic wave followed by the plastic wave. Additionally, a smooth and featureless release wave is observed following peak compression. When compared to wave profiles measured previously for c-axis Mg, the elastic wave amplitudes for a-axis Mg are lower for the same propagation distance, and less attenuation of elastic wave amplitude is observed for a given peak stress. The featureless release wave for a-axis Mg is in marked contrast to the structured features observed for c-axis unloading. Numerical simulations, using a time-dependent anisotropic modeling framework, showed that the wave profiles calculated using prismatic slip or (10$\bar{1}$2) twinning, individually, do not match the measured compression profiles for a-axis Mg. However, a combination of slip and twinning provides a good overall match to the measured compression profiles. In contrast to compression, prismatic slip alone provides a reasonable match to the measured release wave profiles; (10$\bar{1}$2) twinning due to its uni-directionality is not activated during release. The experimental results and wave profile simulations for a-axis Mg presented here are quite different from the previously published c-axis results, demonstrating the important role of crystal anisotropy on the time-dependent inelastic deformation of Mg single crystals under shock compression and release.

  16. Shock compression and release of a-axis magnesium single crystals: Anisotropy and time dependent inelastic response

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Renganathan, P.; Winey, J. M.; Gupta, Y. M.

    Here, to gain insight into inelastic deformation mechanisms for shocked hexagonal close-packed (hcp) metals, particularly the role of crystal anisotropy, magnesium (Mg) single crystals were subjected to shock compression and release along the a-axis to 3.0 and 4.8 GPa elastic impact stresses. Wave profiles measured at several thicknesses, using laser interferometry, show a sharply peaked elastic wave followed by the plastic wave. Additionally, a smooth and featureless release wave is observed following peak compression. When compared to wave profiles measured previously for c-axis Mg, the elastic wave amplitudes for a-axis Mg are lower for the same propagation distance, and less attenuation of elastic wave amplitude is observed for a given peak stress. The featureless release wave for a-axis Mg is in marked contrast to the structured features observed for c-axis unloading. Numerical simulations, using a time-dependent anisotropic modeling framework, showed that the wave profiles calculated using prismatic slip or (10$\bar{1}$2) twinning, individually, do not match the measured compression profiles for a-axis Mg. However, a combination of slip and twinning provides a good overall match to the measured compression profiles. In contrast to compression, prismatic slip alone provides a reasonable match to the measured release wave profiles; (10$\bar{1}$2) twinning due to its uni-directionality is not activated during release. The experimental results and wave profile simulations for a-axis Mg presented here are quite different from the previously published c-axis results, demonstrating the important role of crystal anisotropy on the time-dependent inelastic deformation of Mg single crystals under shock compression and release.

  17. Velocity relaxation of a particle in a confined compressible fluid

    NASA Astrophysics Data System (ADS)

    Tatsumi, Rei; Yamamoto, Ryoichi

    2013-05-01

    The velocity relaxation of an impulsively forced spherical particle in a fluid confined by two parallel plane walls is studied using a direct numerical simulation approach. During the relaxation process, the momentum of the particle is transmitted in the ambient fluid by viscous diffusion and sound wave propagation, and the fluid flow accompanying each mechanism has a different character and affects the particle motion differently. Because of the bounding walls, viscous diffusion is hampered, and the accompanying shear flow is gradually diminished. However, the sound wave is repeatedly reflected and spreads diffusely. As a result, the particle motion is governed by the sound wave and backtracks differently than it does in a bulk fluid. The time when the backtracking of the particle occurs changes non-monotonically with respect to the compressibility factor ε = ν/(ac) and is minimized at the characteristic compressibility factor. This factor depends on the wall spacing, and the dependence is different at small and large wall spacing regions based on the different mechanisms causing the backtracking.

  18. Compressibility of the protein-water interface

    NASA Astrophysics Data System (ADS)

    Persson, Filip; Halle, Bertil

    2018-06-01

    The compressibility of a protein relates to its stability, flexibility, and hydrophobic interactions, but the measurement, interpretation, and computation of this important thermodynamic parameter present technical and conceptual challenges. Here, we present a theoretical analysis of protein compressibility and apply it to molecular dynamics simulations of four globular proteins. Using additively weighted Voronoi tessellation, we decompose the solution compressibility into contributions from the protein and its hydration shells. We find that positively cross-correlated protein-water volume fluctuations account for more than half of the protein compressibility that governs the protein's pressure response, while the self correlations correspond to small (∼0.7%) fluctuations of the protein volume. The self compressibility is nearly the same as for ice, whereas the total protein compressibility, including cross correlations, is ∼45% of the bulk-water value. Taking the inhomogeneous solvent density into account, we decompose the experimentally accessible protein partial compressibility into intrinsic, hydration, and molecular exchange contributions and show how they can be computed with good statistical accuracy despite the dominant bulk-water contribution. The exchange contribution describes how the protein solution responds to an applied pressure by redistributing water molecules from lower to higher density; it is negligibly small for native proteins, but potentially important for non-native states. Because the hydration shell is an open system, the conventional closed-system compressibility definitions yield a pseudo-compressibility. We define an intrinsic shell compressibility, unaffected by occupation number fluctuations, and show that it approaches the bulk-water value exponentially with a decay "length" of one shell, less than the bulk-water compressibility correlation length. In the first hydration shell, the intrinsic compressibility is 25%-30% lower than in bulk water.

  19. Compressibility of the protein-water interface.

    PubMed

    Persson, Filip; Halle, Bertil

    2018-06-07

    The compressibility of a protein relates to its stability, flexibility, and hydrophobic interactions, but the measurement, interpretation, and computation of this important thermodynamic parameter present technical and conceptual challenges. Here, we present a theoretical analysis of protein compressibility and apply it to molecular dynamics simulations of four globular proteins. Using additively weighted Voronoi tessellation, we decompose the solution compressibility into contributions from the protein and its hydration shells. We find that positively cross-correlated protein-water volume fluctuations account for more than half of the protein compressibility that governs the protein's pressure response, while the self correlations correspond to small (∼0.7%) fluctuations of the protein volume. The self compressibility is nearly the same as for ice, whereas the total protein compressibility, including cross correlations, is ∼45% of the bulk-water value. Taking the inhomogeneous solvent density into account, we decompose the experimentally accessible protein partial compressibility into intrinsic, hydration, and molecular exchange contributions and show how they can be computed with good statistical accuracy despite the dominant bulk-water contribution. The exchange contribution describes how the protein solution responds to an applied pressure by redistributing water molecules from lower to higher density; it is negligibly small for native proteins, but potentially important for non-native states. Because the hydration shell is an open system, the conventional closed-system compressibility definitions yield a pseudo-compressibility. We define an intrinsic shell compressibility, unaffected by occupation number fluctuations, and show that it approaches the bulk-water value exponentially with a decay "length" of one shell, less than the bulk-water compressibility correlation length. In the first hydration shell, the intrinsic compressibility is 25%-30% lower than in bulk water.
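
    Both records above rest on the standard volume-fluctuation expression for the isothermal compressibility; assuming the Voronoi partition V = V_P + V_W into protein and hydration-water volumes (my notation, not necessarily the authors'), the protein term splits into the self and cross contributions discussed in the abstracts:

      \kappa_T = \frac{\langle \delta V^{2} \rangle}{k_{\mathrm B} T \, \langle V \rangle},
      \qquad
      \langle \delta V_P \, \delta V \rangle = \langle \delta V_P^{2} \rangle + \langle \delta V_P \, \delta V_W \rangle .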

  20. Magnetic compression laser driving circuit

    DOEpatents

    Ball, D.G.; Birx, D.; Cook, E.G.

    1993-01-05

    A magnetic compression laser driving circuit is disclosed. The magnetic compression laser driving circuit compresses voltage pulses in the range of 1.5 microseconds at 20 kilovolts of amplitude to pulses in the range of 40 nanoseconds and 60 kilovolts of amplitude. The magnetic compression laser driving circuit includes a multi-stage magnetic switch where the last stage includes a switch having at least two turns which has larger saturated inductance with less core material so that the efficiency of the circuit and hence the laser is increased.

  1. Magnetic compression laser driving circuit

    DOEpatents

    Ball, Don G.; Birx, Dan; Cook, Edward G.

    1993-01-01

    A magnetic compression laser driving circuit is disclosed. The magnetic compression laser driving circuit compresses voltage pulses in the range of 1.5 microseconds at 20 Kilovolts of amplitude to pulses in the range of 40 nanoseconds and 60 Kilovolts of amplitude. The magnetic compression laser driving circuit includes a multi-stage magnetic switch where the last stage includes a switch having at least two turns which has larger saturated inductance with less core material so that the efficiency of the circuit and hence the laser is increased.

  2. Authenticity examination of compressed audio recordings using detection of multiple compression and encoders' identification.

    PubMed

    Korycki, Rafal

    2014-05-01

    Since the appearance of digital audio recordings, audio authentication has been becoming increasingly difficult. The currently available technologies and free editing software allow a forger to cut or paste any single word without audible artifacts. Nowadays, the only method referring to digital audio files commonly approved by forensic experts is the ENF criterion. It consists in fluctuation analysis of the mains frequency induced in electronic circuits of recording devices. Therefore, its effectiveness is strictly dependent on the presence of mains signal in the recording, which is a rare occurrence. Recently, much attention has been paid to authenticity analysis of compressed multimedia files and several solutions were proposed for detection of double compression in both digital video and digital audio. This paper addresses the problem of tampering detection in compressed audio files and discusses new methods that can be used for authenticity analysis of digital recordings. The presented approaches consist in the evaluation of statistical features extracted from the MDCT coefficients as well as other parameters that may be obtained from compressed audio files. Calculated feature vectors are used for training selected machine learning algorithms. The detection of multiple compression covers tampering activities as well as the identification of traces of montage in digital audio recordings. To enhance the methods' robustness, an encoder identification algorithm was developed and applied based on analysis of inherent parameters of compression. The effectiveness of the tampering detection algorithms is tested on a predefined large music database consisting of nearly one million compressed audio files. The influence of compression algorithms' parameters on the classification performance is discussed, based on the results of the current study. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
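
    A schematic of the feature-plus-classifier pipeline described above, under simplifying assumptions: frame-wise DCT coefficients stand in for true MDCT output, only three summary statistics per frame are pooled, and an off-the-shelf SVM is the learner; none of this reproduces the paper's feature set or database.

      import numpy as np
      from scipy.fftpack import dct
      from scipy.stats import kurtosis
      from sklearn.svm import SVC

      def spectral_features(audio, frame=1024):
          """Pool simple statistics of frame-wise DCT coefficients into one feature vector."""
          n = (len(audio) // frame) * frame
          frames = audio[:n].reshape(-1, frame)
          coeffs = dct(frames, type=2, norm="ortho", axis=1)
          return np.concatenate([coeffs.mean(axis=0)[:32],
                                 coeffs.std(axis=0)[:32],
                                 kurtosis(coeffs, axis=0)[:32]])

      # Hypothetical training data: decoded waveforms labelled
      # 0 = single compression, 1 = multiple compression (tampering suspected).
      def train_detector(waveforms, labels):
          X = np.vstack([spectral_features(w) for w in waveforms])
          clf = SVC(kernel="rbf", C=10.0)
          clf.fit(X, labels)
          return clf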

  3. On-Chip Neural Data Compression Based On Compressed Sensing With Sparse Sensing Matrices.

    PubMed

    Zhao, Wenfeng; Sun, Biao; Wu, Tong; Yang, Zhi

    2018-02-01

    On-chip neural data compression is an enabling technique for wireless neural interfaces that suffer from insufficient bandwidth and power budgets to transmit the raw data. The data compression algorithm and its implementation should be power and area efficient and functionally reliable over different datasets. Compressed sensing is an emerging technique that has been applied to compress various neurophysiological data. However, the state-of-the-art compressed sensing (CS) encoders leverage random but dense binary measurement matrices, which incur substantial implementation costs on both power and area that could offset the benefits from the reduced wireless data rate. In this paper, we propose two CS encoder designs based on sparse measurement matrices that could lead to efficient hardware implementation. Specifically, two different approaches for the construction of sparse measurement matrices, i.e., the deterministic quasi-cyclic array code (QCAC) matrix and -sparse random binary matrix [-SRBM] are exploited. We demonstrate that the proposed CS encoders lead to comparable recovery performance. And efficient VLSI architecture designs are proposed for QCAC-CS and -SRBM encoders with reduced area and total power consumption.
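
    A minimal sketch of CS encoding with a column-sparse random binary measurement matrix; the per-column sparsity parameter (its symbol is lost in the extracted abstract above) is taken here as an assumed value k, and the matrix sizes in the usage note are illustrative only.

      import numpy as np

      def sparse_binary_matrix(m, n, k, rng=None):
          """m x n binary measurement matrix with exactly k ones per column (k << m)."""
          rng = np.random.default_rng(rng)
          phi = np.zeros((m, n), dtype=np.int8)
          for j in range(n):
              phi[rng.choice(m, size=k, replace=False), j] = 1
          return phi

      def cs_encode(phi, x):
          """Compressed measurements y = Phi @ x; with a binary Phi this needs only additions."""
          return phi @ x

      # Usage: roughly 10x compression of a 256-sample neural data window, 3 ones per column.
      # phi = sparse_binary_matrix(m=26, n=256, k=3, rng=0)
      # y = cs_encode(phi, window)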

  4. Micromechanics of composite laminate compression failure

    NASA Technical Reports Server (NTRS)

    Guynn, E. Gail; Bradley, Walter L.

    1986-01-01

    The Dugdale analysis for metals loaded in tension was adapted to model the failure of notched composite laminates loaded in compression. Compression testing details, MTS alignment verification, and equipment needs were resolved. Thus far, only 2 ductile material systems, HST7 and F155, were selected for study. A Wild M8 Zoom Stereomicroscope and necessary attachments for video taping and 35 mm pictures were purchased. Currently, this compression test system is fully operational. A specimen is loaded in compression, and load vs shear-crippling zone size is monitored and recorded. Data from initial compression tests indicate that the Dugdale model does not accurately predict the load vs damage zone size relationship of notched composite specimens loaded in compression.

  5. Formation of nanosecond SBS-compressed pulses for pumping an ultra-high power parametric amplifier

    NASA Astrophysics Data System (ADS)

    Kuz’min, A. A.; Kulagin, O. V.; Rodchenkov, V. I.

    2018-04-01

    Compression of pulsed Nd : glass laser radiation under stimulated Brillouin scattering (SBS) in perfluorooctane is investigated. Compression of 16-ns pulses at a beam diameter of 30 mm is implemented. The maximum compression coefficient is 28 in the optimal range of laser pulse energies from 2 to 4 J. The Stokes pulse power exceeds that of the initial laser pulse by a factor of about 11.5. The Stokes pulse jitter (fluctuations of the Stokes pulse exit time from the compressor) is studied. The rms spread of these fluctuations is found to be 0.85 ns.

  6. Improving Remote Health Monitoring: A Low-Complexity ECG Compression Approach

    PubMed Central

    Elgendi, Mohamed; Al-Ali, Abdulla; Mohamed, Amr; Ward, Rabab

    2018-01-01

    Recent advances in mobile technology have created a shift towards using battery-driven devices in remote monitoring settings and smart homes. Clinicians are carrying out diagnostic and screening procedures based on the electrocardiogram (ECG) signals collected remotely for outpatients who need continuous monitoring. High-speed transmission and analysis of large recorded ECG signals are essential, especially with the increased use of battery-powered devices. Exploring low-power alternative compression methodologies that have high efficiency and that enable ECG signal collection, transmission, and analysis in a smart home or remote location is required. Compression algorithms based on adaptive linear predictors and decimation by a factor B/K are evaluated based on compression ratio (CR), percentage root-mean-square difference (PRD), and heartbeat detection accuracy of the reconstructed ECG signal. With two databases (153 subjects), the new algorithm demonstrates the highest compression performance (CR=6 and PRD=1.88) and overall detection accuracy (99.90% sensitivity, 99.56% positive predictivity) over both databases. The proposed algorithm presents an advantage for the real-time transmission of ECG signals using a faster and more efficient method, which meets the growing demand for more efficient remote health monitoring. PMID:29337892

  7. Improving Remote Health Monitoring: A Low-Complexity ECG Compression Approach.

    PubMed

    Elgendi, Mohamed; Al-Ali, Abdulla; Mohamed, Amr; Ward, Rabab

    2018-01-16

    Recent advances in mobile technology have created a shift towards using battery-driven devices in remote monitoring settings and smart homes. Clinicians are carrying out diagnostic and screening procedures based on the electrocardiogram (ECG) signals collected remotely for outpatients who need continuous monitoring. High-speed transmission and analysis of large recorded ECG signals are essential, especially with the increased use of battery-powered devices. Exploring low-power alternative compression methodologies that have high efficiency and that enable ECG signal collection, transmission, and analysis in a smart home or remote location is required. Compression algorithms based on adaptive linear predictors and decimation by a factor B / K are evaluated based on compression ratio (CR), percentage root-mean-square difference (PRD), and heartbeat detection accuracy of the reconstructed ECG signal. With two databases (153 subjects), the new algorithm demonstrates the highest compression performance ( CR = 6 and PRD = 1.88 ) and overall detection accuracy (99.90% sensitivity, 99.56% positive predictivity) over both databases. The proposed algorithm presents an advantage for the real-time transmission of ECG signals using a faster and more efficient method, which meets the growing demand for more efficient remote health monitoring.
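
    For reference, the two figures of merit quoted in both records above are conventionally computed as follows (a generic sketch; the decimation-based compressor itself is not reproduced here):

      import numpy as np

      def compression_ratio(original_bits, compressed_bits):
          """CR: size of the raw record divided by the size actually transmitted."""
          return original_bits / compressed_bits

      def prd(x, x_rec):
          """Percentage root-mean-square difference between original and reconstructed ECG."""
          x = np.asarray(x, dtype=float)
          x_rec = np.asarray(x_rec, dtype=float)
          return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

      # Example of the reported operating point: CR = 6 means the signal is rebuilt from
      # one sixth of the original data volume, at PRD of about 1.88 %.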

  8. Real-time demonstration hardware for enhanced DPCM video compression algorithm

    NASA Technical Reports Server (NTRS)

    Bizon, Thomas P.; Whyte, Wayne A., Jr.; Marcopoli, Vincent R.

    1992-01-01

    The lack of available wideband digital links as well as the complexity of implementation of bandwidth efficient digital video CODECs (encoder/decoder) has worked to keep the cost of digital television transmission too high to compete with analog methods. Terrestrial and satellite video service providers, however, are now recognizing the potential gains that digital video compression offers and are proposing to incorporate compression systems to increase the number of available program channels. NASA is similarly recognizing the benefits of and trend toward digital video compression techniques for transmission of high quality video from space and therefore, has developed a digital television bandwidth compression algorithm to process standard National Television Systems Committee (NTSC) composite color television signals. The algorithm is based on differential pulse code modulation (DPCM), but additionally utilizes a non-adaptive predictor, non-uniform quantizer and multilevel Huffman coder to reduce the data rate substantially below that achievable with straight DPCM. The non-adaptive predictor and multilevel Huffman coder combine to set this technique apart from other DPCM encoding algorithms. All processing is done on a intra-field basis to prevent motion degradation and minimize hardware complexity. Computer simulations have shown the algorithm will produce broadcast quality reconstructed video at an average transmission rate of 1.8 bits/pixel. Hardware implementation of the DPCM circuit, non-adaptive predictor and non-uniform quantizer has been completed, providing realtime demonstration of the image quality at full video rates. Video sampling/reconstruction circuits have also been constructed to accomplish the analog video processing necessary for the real-time demonstration. Performance results for the completed hardware compare favorably with simulation results. Hardware implementation of the multilevel Huffman encoder/decoder is currently under development

  9. Fixed-Rate Compressed Floating-Point Arrays.

    PubMed

    Lindstrom, Peter

    2014-12-01

    Current compression schemes for floating-point data commonly take fixed-precision values and compress them to a variable-length bit stream, complicating memory management and random access. We present a fixed-rate, near-lossless compression scheme that maps small blocks of 4^d values in d dimensions to a fixed, user-specified number of bits per block, thereby allowing read and write random access to compressed floating-point data at block granularity. Our approach is inspired by fixed-rate texture compression methods widely adopted in graphics hardware, but has been tailored to the high dynamic range and precision demands of scientific applications. Our compressor is based on a new, lifted, orthogonal block transform and embedded coding, allowing each per-block bit stream to be truncated at any point if desired, thus facilitating bit rate selection using a single compression scheme. To avoid compression or decompression upon every data access, we employ a software write-back cache of uncompressed blocks. Our compressor has been designed with computational simplicity and speed in mind to allow for the possibility of a hardware implementation, and uses only a small number of fixed-point arithmetic operations per compressed value. We demonstrate the viability and benefits of lossy compression in several applications, including visualization, quantitative data analysis, and numerical simulation.
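
    A toy illustration of the fixed-rate idea only: each block of 4 values shares one exponent and gets a fixed number of mantissa bits, so every block costs the same number of bits and can be located directly. This is not the lifted orthogonal block transform, embedded coding, or write-back cache described above.

      import numpy as np

      def encode_block(block, bits=16):
          """Code a block of 4 doubles as one shared exponent plus 4 signed `bits`-bit integers."""
          m = float(np.max(np.abs(block)))
          e = int(np.ceil(np.log2(m))) if m > 0 else 0
          scale = 2.0 ** (bits - 1 - e)
          q = np.clip(np.round(block * scale), -(2 ** (bits - 1)), 2 ** (bits - 1) - 1)
          return e, q.astype(np.int32)       # fixed cost: one exponent + 4*bits bits per block

      def decode_block(e, q, bits=16):
          return q.astype(np.float64) / 2.0 ** (bits - 1 - e)

      # Because every block occupies the same number of bits, the i-th block of a compressed
      # array can be read, decoded, or rewritten without touching its neighbours.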

  10. A cascadable circular concentrator with parallel compressed structure for increasing the energy density

    NASA Astrophysics Data System (ADS)

    Ku, Nai-Lun; Chen, Yi-Yung; Hsieh, Wei-Che; Whang, Allen Jong-Woei

    2012-02-01

    Due to the energy crisis, green energy has gained popularity, leading to increasing interest in renewable sources such as solar energy. Collecting sunlight for indoor illumination has therefore become our ultimate target. With growing environmental awareness, we use natural light as the light source and have devoted ourselves to developing a solar collecting system. The Natural Light Guiding System includes three parts: a collecting part, a transmitting part, and a lighting part. The idea behind our solar collecting system is to combine buildings with a set of collecting modules, so the system can be used anywhere sunlight directly impinges on buildings equipped with collecting elements. While collecting the sunlight with high efficiency, we can transmit it indoors over a short distance through a light pipe to wherever the light is needed. We propose a novel design including a disk-type collective lens module. With this design, the incident light and exit light are parallel and compressed, and every output beam is compressed within the proposed optical structure. In this way, we increase the light compression ratio, obtain better efficiency, and make the energy distribution more uniform for indoor illumination. Defining a performance index "KPI" for light density in lm/mm², the simulation results show that the proposed concentrator reaches 40,000,000 KPI, much better than the 800,000 KPI measured for traditional designs.

  11. Neural Net Gains Estimation Based on an Equivalent Model

    PubMed Central

    Aguilar Cruz, Karen Alicia; Medel Juárez, José de Jesús; Fernández Muñoz, José Luis; Esmeralda Vigueras Velázquez, Midory

    2016-01-01

    A model of an Equivalent Artificial Neural Net (EANN) describes the gains set, viewed as parameters in a layer; this description is a reproducible process, applicable to a neuron in a neural net (NN). The EANN helps to estimate the NN gains or parameters, so we propose two methods to determine them. The first combines fuzzy inference with the traditional Kalman filter, obtaining the equivalent model and estimating, in a fuzzy sense, the gains matrix A and the proper gain K within the traditional filter identification. The second develops a direct estimation in state space, describing an EANN using the expected value and the recursive description of the gains estimation. Finally, a comparison of both descriptions is performed, highlighting that the analytical method describes the neural net coefficients in a direct form, whereas the other technique requires selecting, in the Knowledge Base (KB), the factors based on the functional error and the reference signal built from the system's past information. PMID:27366146

  12. Neural Net Gains Estimation Based on an Equivalent Model.

    PubMed

    Aguilar Cruz, Karen Alicia; Medel Juárez, José de Jesús; Fernández Muñoz, José Luis; Esmeralda Vigueras Velázquez, Midory

    2016-01-01

    A model of an Equivalent Artificial Neural Net (EANN) describes the gains set, viewed as parameters in a layer; this description is a reproducible process, applicable to a neuron in a neural net (NN). The EANN helps to estimate the NN gains or parameters, so we propose two methods to determine them. The first combines fuzzy inference with the traditional Kalman filter, obtaining the equivalent model and estimating, in a fuzzy sense, the gains matrix A and the proper gain K within the traditional filter identification. The second develops a direct estimation in state space, describing an EANN using the expected value and the recursive description of the gains estimation. Finally, a comparison of both descriptions is performed, highlighting that the analytical method describes the neural net coefficients in a direct form, whereas the other technique requires selecting, in the Knowledge Base (KB), the factors based on the functional error and the reference signal built from the system's past information.

  13. JPEG and wavelet compression of ophthalmic images

    NASA Astrophysics Data System (ADS)

    Eikelboom, Robert H.; Yogesan, Kanagasingam; Constable, Ian J.; Barry, Christopher J.

    1999-05-01

    This study was designed to determine the degree and methods of digital image compression that produce ophthalmic images of sufficient quality for transmission and diagnosis. The photographs of 15 subjects, which included eyes with normal, subtle and distinct pathologies, were digitized to produce 1.54 MB images and compressed to five different levels with both JPEG and wavelet methods. Image quality was then assessed in three ways: (i) objectively, by calculating the RMS error between the uncompressed and compressed images; (ii) semi-subjectively, by assessing the visibility of blood vessels; and (iii) subjectively, by asking a number of experienced observers to assess the images for quality and clinical interpretation. Results showed that, as a function of compressed image size, wavelet compressed images produced less RMS error than JPEG compressed images. Blood vessel branching could be observed to a greater extent after wavelet compression than after JPEG compression, and wavelet compression produced better images than JPEG compression for a given image size. Overall, it was shown that images had to be compressed to below 2.5 percent of their original size for JPEG and 1.7 percent for wavelet compression before fine detail was lost or image quality became too poor to make a reliable diagnosis.

  14. Compressed sensing for high-resolution nonlipid suppressed ¹H FID MRSI of the human brain at 9.4T.

    PubMed

    Nassirpour, Sahar; Chang, Paul; Avdievitch, Nikolai; Henning, Anke

    2018-04-29

    The aim of this study was to apply compressed sensing to accelerate the acquisition of high resolution metabolite maps of the human brain using a nonlipid suppressed ultra-short TR and TE ¹H FID MRSI sequence at 9.4T. X-t sparse compressed sensing reconstruction was optimized for nonlipid suppressed ¹H FID MRSI data. Coil-by-coil x-t sparse reconstruction was compared with SENSE x-t sparse and low rank reconstruction. The effect of matrix size and spatial resolution on the achievable acceleration factor was studied. Finally, in vivo metabolite maps with different acceleration factors of 2, 4, 5, and 10 were acquired and compared. Coil-by-coil x-t sparse compressed sensing reconstruction was not able to reliably recover the nonlipid suppressed data, rather a combination of parallel and sparse reconstruction was necessary (SENSE x-t sparse). For acceleration factors of up to 5, both the low-rank and the compressed sensing methods were able to reconstruct the data comparably well (root mean squared errors [RMSEs] ≤ 10.5% for Cre). However, the reconstruction time of the low rank algorithm was drastically longer than compressed sensing. Using the optimized compressed sensing reconstruction, acceleration factors of 4 or 5 could be reached for the MRSI data with a matrix size of 64 × 64. For lower spatial resolutions, an acceleration factor of up to R∼4 was successfully achieved. By tailoring the reconstruction scheme to the nonlipid suppressed data through parameter optimization and performance evaluation, we present high resolution (97 µL voxel size) accelerated in vivo metabolite maps of the human brain acquired at 9.4T within scan times of 3 to 3.75 min. © 2018 International Society for Magnetic Resonance in Medicine.
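
    A generic sketch of sparsity-regularized reconstruction (plain ISTA on a real-valued undersampled problem); the study's actual x-t sparse SENSE reconstruction, coil combination, and parameter choices are not reproduced, and the regularization weight and iteration count below are assumptions.

      import numpy as np

      def soft_threshold(z, t):
          return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

      def ista(A, y, lam=0.01, n_iter=200):
          """Solve min_x 0.5*||A x - y||^2 + lam*||x||_1 by iterative soft thresholding."""
          L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
          x = np.zeros(A.shape[1])
          for _ in range(n_iter):
              grad = A.T @ (A @ x - y)           # gradient of the data-fidelity term
              x = soft_threshold(x - grad / L, lam / L)
          return x

      # Usage: A models the undersampled acquisition (e.g., selected encoding rows times a
      # sparsifying transform); x is the sparse representation recovered from measurements y.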

  15. Costs Associated With Compressed Natural Gas Vehicle Fueling Infrastructure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, M.; Gonzales, J.

    2014-09-01

    This document is designed to help fleets understand the cost factors associated with fueling infrastructure for compressed natural gas (CNG) vehicles. It provides estimated cost ranges for various sizes and types of CNG fueling stations and an overview of factors that contribute to the total cost of an installed station. The information presented is based on input from professionals in the natural gas industry who design, sell equipment for, and/or own and operate CNG stations.

  16. Magnitude and determinants of inadequate third-trimester weight gain in rural Bangladesh

    PubMed Central

    Hasan, S. M. Tafsir; Rahman, Sabuktagin; Locks, Lindsey Mina; Rahman, Mizanur; Hore, Samar Kumar; Saqeeb, Kazi Nazmus; Khan, Md. Alfazal

    2018-01-01

    Objectives The objective of this study was to estimate the magnitude and determinants of inadequate weight gain in the third-trimester among rural women in Matlab, Bangladesh. Methods The study analyzed data on weight gain in the third trimester in 1,883 pregnant women in Matlab, Bangladesh. All these women were admitted to Matlab hospital of the International Centre for Diarrhoeal Disease Research, Bangladesh (icddr,b) for childbirth during 2012–2014, and they had singleton live births at term. Data were retrieved from the electronic databases of Matlab Health and Demographic Surveillance System and Matlab hospital. A multivariable logistic regression for inadequate weight gain in the third trimester (≤4 kg) was built with sociodemographic, environmental and maternal factors as predictors. Results One thousand and twenty-six (54%) pregnant women had inadequate weight gain in the third trimester. In the multivariable model, short stature turned out to be the most robust risk factor for inadequate weight gain in the third trimester (OR = 2.5; 95% CI 1.8, 3.5 for short compared to tall women). Pre-third-trimester BMI was inversely associated with insufficient weight gain (OR = 0.96; 95% CI 0.93, 0.99 for 1 unit increase in BMI). Other risk factors for inadequate weight gain in the third trimester were advanced age (OR = 1.9; 95% CI 1.2, 3.1 for ≥35 years compared to ≤19 years), parity (OR = 1.5; 95% CI 1.2, 1.9 for multipara compared to nulliparous women), low socioeconomic status (OR = 1.7; 95% CI 1.2, 2.3 for women in the lowest compared to women in the highest wealth quintile), low level of education (OR = 1.6; 95% CI 1.2, 2.1 for ≤5 years compared to ≥10 years of education), belonging to the Hindu religious community (OR = 1.8; 95% CI 1.3, 2.5), consuming arsenic-contaminated water (OR = 1.4; 95% CI 1.1, 1.9), and conceiving during monsoon or dry season compared to summer (OR = 1.4; 95% CI 1.1, 1.8). Conclusions Among rural Bangladeshi women in Matlab

  17. Competitive Parallel Processing For Compression Of Data

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B.; Fender, Antony R. H.

    1990-01-01

    Momentarily-best compression algorithm selected. Proposed competitive-parallel-processing system compresses data for transmission in channel of limited band-width. Likely application for compression lies in high-resolution, stereoscopic color-television broadcasting. Data from information-rich source like color-television camera compressed by several processors, each operating with different algorithm. Referee processor selects momentarily-best compressed output.

  18. Mechanical behaviour of Arabica coffee (Coffea arabica) beans under loading compression

    NASA Astrophysics Data System (ADS)

    Sigalingging, R.; Herak, D.; Kabutey, A.; Sigalingging, C.

    2018-02-01

    The uniformity of the product of the grinding process depends on various factors, including the brittleness of the roasted coffee bean, and it affects the extraction of soluble solids to obtain the coffee brew. Therefore, reaching a certain degree of brittleness is very important for the grinding to which coffee beans have to be subjected before brewing. The aim of this study is to show the mechanical behaviour of Arabica coffee beans from Tobasa (Indonesia) roasted for different times (40, 60 and 80 minutes at a temperature of 174 °C) under compression loading up to 225 kN. A universal compression testing machine was used with a pressing vessel diameter of 60 mm and a compression speed of 10 mm min⁻¹, with different initial pressing heights ranging from 20 to 60 mm. The results showed a significant correlation between roasting time and brittleness.

  19. The Distinction of Hot Herbal Compress, Hot Compress, and Topical Diclofenac as Myofascial Pain Syndrome Treatment.

    PubMed

    Boonruab, Jurairat; Nimpitakpong, Netraya; Damjuti, Watchara

    2018-01-01

    This randomized controlled trial aimed to investigate the differences in outcome among hot herbal compress, hot compress, and topical diclofenac. The participants were divided equally into three groups receiving hot herbal compress, hot compress, or topical diclofenac, with the last serving as the control group. After the treatment courses, the Visual Analog Scale and the 36-Item Short Form Health Survey were used, respectively, to establish the level of pain intensity and quality of life. In addition, cervical range of motion and pressure pain threshold were examined to identify effects on motion. All treatments showed a significantly decreased level of pain intensity and increased cervical range of motion, while the intervention groups performed markedly better than the topical diclofenac group on pressure pain threshold and quality of life. In summary, hot herbal compress holds promise as an efficacious treatment comparable to hot compress and topical diclofenac.

  20. Prediction of compression-induced image interpretability degradation

    NASA Astrophysics Data System (ADS)

    Blasch, Erik; Chen, Hua-Mei; Irvine, John M.; Wang, Zhonghai; Chen, Genshe; Nagy, James; Scott, Stephen

    2018-04-01

    Image compression is an important component in modern imaging systems as the volume of the raw data collected is increasing. To reduce the volume of data while collecting imagery useful for analysis, choosing the appropriate image compression method is desired. Lossless compression is able to preserve all the information, but it has limited reduction power. On the other hand, lossy compression, which may result in very high compression ratios, suffers from information loss. We model the compression-induced information loss in terms of the National Imagery Interpretability Rating Scale or NIIRS. NIIRS is a user-based quantification of image interpretability widely adopted by the Geographic Information System community. Specifically, we present the Compression Degradation Image Function Index (CoDIFI) framework that predicts the NIIRS degradation (i.e., a decrease of NIIRS level) for a given compression setting. The CoDIFI-NIIRS framework enables a user to broker the maximum compression setting while maintaining a specified NIIRS rating.

  1. Influence of Selected Factors on the Relationship between the Dynamic Elastic Modulus and Compressive Strength of Concrete

    PubMed Central

    Jurowski, Krystian; Grzeszczyk, Stefania

    2018-01-01

    In this paper, the relationship between the static and dynamic elastic modulus of concrete and the relationship between the static elastic modulus and compressive strength of concrete have been formulated. These relationships are based on investigations of different types of concrete and take into account the type and amount of aggregate and binder used. The dynamic elastic modulus of concrete was tested using impulse excitation of vibration and the modal analysis method. This method could be used as a non-destructive way of estimating the compressive strength of concrete. PMID:29565830

  2. Influence of Selected Factors on the Relationship between the Dynamic Elastic Modulus and Compressive Strength of Concrete.

    PubMed

    Jurowski, Krystian; Grzeszczyk, Stefania

    2018-03-22

    In this paper, the relationship between the static and dynamic elastic modulus of concrete and the relationship between the static elastic modulus and compressive strength of concrete have been formulated. These relationships are based on investigations of different types of concrete and take into account the type and amount of aggregate and binder used. The dynamic elastic modulus of concrete was tested using impulse excitation of vibration and the modal analysis method. This method could be used as a non-destructive way of estimating the compressive strength of concrete.

  3. Psychosocial factors and excessive gestational weight gain: The effect of parity in an Australian cohort.

    PubMed

    Hartley, Eliza; McPhie, Skye; Fuller-Tyszkiewicz, Matthew; Hill, Briony; Skouteris, Helen

    2016-01-01

    Psychosocial variables can be protective or risk factors for excessive gestational weight gain (GWG). Parity has also been associated with GWG; however, its effect on psychosocial risk factors for GWG is yet to be determined. The aim of this study was to investigate if, and how, psychosocial factors vary in their impact on the GWG of primiparous and multiparous women. Pregnant women were recruited in 2011 via study advertisements placed in hospitals, online, in parenting magazines, and at baby and children's markets, resulting in a sample of 256 women (113 primiparous, 143 multiparous). Participants completed questionnaires at 16-18 weeks' gestation and their pregravid BMI was recorded. Final weight before delivery was measured and used to calculate GWG. The findings revealed that primiparous women had significantly higher feelings of attractiveness (a facet of body attitude; p=0.01) than multiparous women. Hierarchical regressions revealed that in the overall sample, increased GWG was associated significantly with lower pre-pregnancy BMI (standardised coefficient β=-0.39, p<0.001), higher anxiety symptoms (β=0.25, p=0.004), and reduced self-efficacy to eat a healthy diet (β=-0.20, p=0.02). Although higher GWG was predicted significantly by decreased feelings of strength and fitness for primiparous women (β=-0.25, p=0.04) and higher anxiety was related significantly to greater GWG for multiparous women (β=0.43, p<0.001), statistical comparison of the model across the two groups suggested the magnitude of these effects did not differ across groups (p>0.05). The findings suggest that psychosocial screening and interventions by healthcare professionals may help to identify women who are at risk of excessive GWG, and there may be specific psychosocial factors that are more relevant for each parity group. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. Automatic attention-based prioritization of unconstrained video for compression

    NASA Astrophysics Data System (ADS)

    Itti, Laurent

    2004-06-01

    We apply a biologically-motivated algorithm that selects visually-salient regions of interest in video streams to multiply-foveated video compression. Regions of high encoding priority are selected based on nonlinear integration of low-level visual cues, mimicking processing in primate occipital and posterior parietal cortex. A dynamic foveation filter then blurs (foveates) every frame, increasingly with distance from high-priority regions. Two variants of the model (one with continuously-variable blur proportional to saliency at every pixel, and the other with blur proportional to distance from three independent foveation centers) are validated against eye fixations from 4-6 human observers on 50 video clips (synthetic stimuli, video games, outdoors day and night home video, television newscast, sports, talk-shows, etc.). Significant overlap is found between human and algorithmic foveations on every clip with one variant, and on 48 out of 50 clips with the other. Compressed file sizes are reduced by about half on average for foveated compared to unfoveated clips. These results suggest a general-purpose usefulness of the algorithm in improving compression ratios of unconstrained video.
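
    A crude stand-in for the foveation filter described above (closer to the three-center variant): the frame is blended between its original and a blurred copy with a weight that grows with distance from the nearest foveation center. The kernel width and distance scale are arbitrary assumptions, not the authors' parameters.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def foveate(frame, centers, sigma=6.0, falloff=80.0):
          """Blur a grayscale frame increasingly with distance from the nearest center (row, col)."""
          h, w = frame.shape
          rows, cols = np.mgrid[0:h, 0:w]
          dist = np.min([np.hypot(rows - r, cols - c) for r, c in centers], axis=0)
          weight = np.clip(dist / falloff, 0.0, 1.0)        # 0 near a center, 1 far away
          blurred = gaussian_filter(frame.astype(np.float64), sigma=sigma)
          return (1.0 - weight) * frame + weight * blurred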

  5. Compressive sensing for single-shot two-dimensional coherent spectroscopy

    NASA Astrophysics Data System (ADS)

    Harel, E.; Spencer, A.; Spokoyny, B.

    2017-02-01

    In this work, we explore the use of compressive sensing for the rapid acquisition of two-dimensional optical spectra that encode the electronic structure and ultrafast dynamics of condensed-phase molecular species. Specifically, we have developed a means to combine multiplexed single-element detection and single-shot and phase-resolved two-dimensional coherent spectroscopy. The method described, which we call Single Point Array Reconstruction by Spatial Encoding (SPARSE), eliminates the need for costly array detectors while speeding up acquisition by several orders of magnitude compared to scanning methods. Physical implementation of SPARSE is facilitated by combining spatiotemporal encoding of the nonlinear optical response and signal modulation by a high-speed digital micromirror device. We demonstrate the approach by investigating a well-characterized cyanine molecule and a photosynthetic pigment-protein complex. Hadamard and compressive sensing algorithms are demonstrated, with the latter achieving compression factors as high as ten. Both show good agreement with directly detected spectra. We envision a myriad of applications in nonlinear spectroscopy using SPARSE with broadband femtosecond light sources in so-far unexplored regions of the electromagnetic spectrum.
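
    For readers unfamiliar with the compressive-sensing step that such methods rely on, the hedged sketch below recovers a sparse vector from far fewer random measurements than its length by l1-regularized least squares, solved with plain iterative soft-thresholding (ISTA). The matrix sizes, sparsity level, and solver are illustrative and are not taken from the SPARSE implementation.

```python
# Generic compressed-sensing recovery: y = A @ x with m << n, x sparse.
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                       # signal length, measurements, nonzeros
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)

A = rng.normal(size=(m, n)) / np.sqrt(m)   # random Gaussian sensing matrix
y = A @ x_true                             # compressed measurements (factor n/m = 4)

def ista(A, y, lam=1e-3, n_iter=2000):
    """Minimize 0.5*||A x - y||^2 + lam*||x||_1 by iterative soft-thresholding."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return x

x_hat = ista(A, y)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```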

  6. QLog Solar-Cell Mode Photodiode Logarithmic CMOS Pixel Using Charge Compression and Readout †

    PubMed Central

    Ni, Yang

    2018-01-01

    In this paper, we present a new logarithmic pixel design currently under development at New Imaging Technologies SA (NIT). This new logarithmic pixel design uses charge domain logarithmic signal compression and charge-transfer-based signal readout. This structure gives a linear response in low light conditions and logarithmic response in high light conditions. The charge transfer readout efficiently suppresses the reset (KTC) noise by using true correlated double sampling (CDS) in low light conditions. In high light conditions, thanks to charge domain logarithmic compression, it has been demonstrated that 3000 electrons should be enough to cover a 120 dB dynamic range with a mobile phone camera-like signal-to-noise ratio (SNR) over the whole dynamic range. This low electron count permits the use of ultra-small floating diffusion capacitance (sub-fF) without charge overflow. The resulting large conversion gain permits a single photon detection capability with a wide dynamic range without a complex sensor/system design. A first prototype sensor with 320 × 240 pixels has been implemented to validate this charge domain logarithmic pixel concept and modeling. The first experimental results validate the logarithmic charge compression theory and the low readout noise due to the charge-transfer-based readout. PMID:29443903
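
    A purely illustrative toy model, not NIT's actual pixel transfer function, of a response that is linear below a knee charge level and logarithmic above it, as the abstract describes; the knee and slope parameters are invented for the sketch.

```python
# Toy linear-then-logarithmic response curve (arbitrary units, illustrative only).
import math

def qlog_response(photocharge, q_knee=1000.0, k=500.0):
    """Stored signal as a function of collected photocharge."""
    if photocharge <= q_knee:
        return photocharge                               # linear regime (low light)
    return q_knee + k * math.log(photocharge / q_knee)   # logarithmic compression (high light)
```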

  7. Compressing Spin-Polarized 3He With a Modified Diaphragm Pump

    PubMed Central

    Gentile, T. R.; Rich, D. R.; Thompson, A. K.; Snow, W. M.; Jones, G. L.

    2001-01-01

    Nuclear spin-polarized 3He gas at pressures on the order of 100 kPa (1 bar) is required for several applications, such as neutron spin filters and magnetic resonance imaging. The metastability-exchange optical pumping (MEOP) method for polarizing 3He gas can rapidly produce highly polarized gas, but the best results are obtained at much lower pressure (~0.1 kPa). We describe a compact compression apparatus for polarized gas that is based on a modified commercial diaphragm pump. The gas is polarized by MEOP at a typical pressure of 0.25 kPa (2.5 mbar), and compressed into a storage cell at a typical pressure of 100 kPa. In the storage cell, we have obtained 20 % to 35 % 3He polarization using pure 3He gas and 35 % to 50 % 3He polarization using 3He-4He mixtures. By maintaining the storage cell at liquid nitrogen temperature during compression, the density has been increased by a factor of four. PMID:27500044

  8. Rapid-Rate Compression Testing of Sheet Materials at High Temperatures

    NASA Technical Reports Server (NTRS)

    Bernett, E. C.; Gerberich, W. W.

    1961-01-01

    This Report describes the test equipment that was developed and the procedures that were used to evaluate structural sheet-material compression properties at preselected constant strain rates and/or loads. Electrical self-resistance was used to achieve a rapid heating rate of 200 F/sec. Four materials were tested at maximum temperatures which ranged from 600 F for the aluminum alloy to 2000 F for the Ni-Cr-Co iron-base alloy. Tests at 0.1, 0.001, and 0.00001 in./in./sec showed that strain rate has a major effect on the measured strength, especially at the high temperatures. The tests, under conditions of constant temperature and constant compression stress, showed that creep deformation can be a critical factor even when the time involved is on the order of a few seconds or less. The theoretical and practical aspects of rapid-rate compression testing are presented, and suggestions are made regarding possible modifications of the equipment which would improve the over-all capabilities.

  9. Acculturation and gestational weight gain in a predominantly Puerto Rican population.

    PubMed

    Tovar, Alison; Chasan-Taber, Lisa; Bermudez, Odilia I; Hyatt, Raymond R; Must, Aviva

    2012-11-21

    Identifying risk factors that affect excess weight gain during pregnancy is critical, especially among women who are at a higher risk for obesity. The goal of this study was to determine if acculturation, a possible risk factor, was associated with gestational weight gain in a predominantly Puerto Rican population. We utilized data from Proyecto Buena Salud, a prospective cohort study of Hispanic women in Western Massachusetts, United States. Height, weight and gestational age were abstracted from medical records among participants with full-term pregnancies (n=952). Gestational weight gain was calculated as the difference between delivery and prepregnancy weight. Acculturation (measured via a psychological acculturation scale, generation in the US, place of birth and spoken language preference) was assessed in early pregnancy. Adjusting for age, parity, perceived stress, gestational age, and prepregnancy weight, women who had at least one parent born in Puerto Rico/Dominican Republic (PR/DR) and both grandparents born in PR/DR had a significantly higher mean total gestational weight gain (0.9 kg for at least one parent born in PR/DR and 2.2 kg for grandparents born in PR/DR) and rate of weight gain (0.03 kg/wk for at least one parent born in PR/DR and 0.06 kg/wk for grandparents born in PR/DR) vs. women who were born in PR/DR. Similarly, women born in the US had significantly higher mean total gestational weight gain (1.0 kg) and rate of weight gain (0.03 kg/wk) vs. women who were born in PR/DR. Spoken language preference and psychological acculturation were not significantly associated with total or rate of pregnancy weight gain. We found that psychological acculturation was not associated with gestational weight gain while place of birth and higher generation in the US were significantly associated with higher gestational weight gain. We interpret these findings to suggest the potential importance of the US "obesogenic" environment in influencing unhealthy

  10. Edemagenic gain and interstitial fluid volume regulation.

    PubMed

    Dongaonkar, R M; Quick, C M; Stewart, R H; Drake, R E; Cox, C S; Laine, G A

    2008-02-01

    Under physiological conditions, interstitial fluid volume is tightly regulated by balancing microvascular filtration and lymphatic return to the central venous circulation. Even though microvascular filtration and lymphatic return are governed by conservation of mass, their interaction can result in exceedingly complex behavior. Without making simplifying assumptions, investigators must solve the fluid balance equations numerically, which limits the generality of the results. We thus made critical simplifying assumptions to develop a simple solution to the standard fluid balance equations that is expressed as an algebraic formula. Using a classical approach to describe systems with negative feedback, we formulated our solution as a "gain" relating the change in interstitial fluid volume to a change in effective microvascular driving pressure. The resulting "edemagenic gain" is a function of microvascular filtration coefficient (K(f)), effective lymphatic resistance (R(L)), and interstitial compliance (C). This formulation suggests two types of gain: "multivariate" dependent on C, R(L), and K(f), and "compliance-dominated" approximately equal to C. The latter forms a basis of a novel method to estimate C without measuring interstitial fluid pressure. Data from ovine experiments illustrate how edemagenic gain is altered with pulmonary edema induced by venous hypertension, histamine, and endotoxin. Reformulation of the classical equations governing fluid balance in terms of edemagenic gain thus yields new insight into the factors affecting an organ's susceptibility to edema.
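
    The two regimes described above can be captured by a simple algebraic form. The expression below is an assumption chosen only because it reproduces both limits (dependence on all three parameters when R_L*K_f is small, and G approximately equal to C when R_L*K_f is large); it is not necessarily the paper's exact formula.

```python
# Hedged illustration of an edemagenic-gain expression consistent with the
# limiting behaviour described in the abstract (assumed form, not verified).
def edemagenic_gain(C, R_L, K_f):
    """Change in interstitial fluid volume per unit change in driving pressure."""
    return C * R_L * K_f / (1.0 + R_L * K_f)

print(edemagenic_gain(C=1.0, R_L=100.0, K_f=10.0))   # ~1.0  -> compliance-dominated (G ~ C)
print(edemagenic_gain(C=1.0, R_L=0.1, K_f=0.1))      # ~0.01 -> depends on C, R_L and K_f
```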

  11. Differential Motion and Compression Between the Plantaris and Achilles Tendons: A Contributing Factor to Midportion Achilles Tendinopathy?

    PubMed

    Stephen, Joanna M; Marsland, Daniel; Masci, Lorenzo; Calder, James D F; Daou, Hadi El

    2018-03-01

    motion as compared with the AT. Tendon compression was elevated in terminal plantarflexion, suggesting that adapting rehabilitation tendon-loading programs to avoid this position may be beneficial. The insertion pattern of the PT may be a factor in plantaris-related midportion Achilles tendinopathy. Terminal range plantarflexion and hindfoot valgus both increased AT and PT compression, suggesting that these should be avoided in this patient population.

  12. Video Compression

    NASA Technical Reports Server (NTRS)

    1996-01-01

    Optivision developed two PC-compatible boards and associated software under a Goddard Space Flight Center Small Business Innovation Research grant for NASA applications in areas such as telerobotics, telesciences and spaceborne experimentation. From this technology, the company used its own funds to develop commercial products, the OPTIVideo MPEG Encoder and Decoder, which are used for realtime video compression and decompression. They are used in commercial applications including interactive video databases and video transmission. The encoder converts video source material to a compressed digital form that can be stored or transmitted, and the decoder decompresses bit streams to provide high quality playback.

  13. Data compression for sequencing data

    PubMed Central

    2013-01-01

    Post-Sanger sequencing methods produce tons of data, and there is a general agreement that the challenge to store and process them must be addressed with data compression. In this review we first answer the question “why compression” in a quantitative manner. Then we also answer the questions “what” and “how”, by sketching the fundamental compression ideas, describing the main sequencing data types and formats, and comparing the specialized compression algorithms and tools. Finally, we go back to the question “why compression” and give other, perhaps surprising answers, demonstrating the pervasiveness of data compression techniques in computational biology. PMID:24252160

  14. Effect of interfragmentary gap on compression force in a headless compression screw used for scaphoid fixation.

    PubMed

    Tan, E S; Mat Jais, I S; Abdul Rahim, S; Tay, S C

    2018-01-01

    We investigated the effect of an interfragmentary gap on the final compression force using the Acutrak 2 Mini headless compression screw (length 26 mm) (Acumed, Hillsboro, OR, USA). Two blocks of solid rigid polyurethane foam in a custom jig were separated by spacers of varying thickness (1.0, 1.5, 2.0 and 2.5 mm) to simulate an interfragmentary gap. The spacers were removed before full insertion of the screw and the compression force was measured when the screw was buried 2 mm below the surface of the upper block. Gaps of 1.5 mm and 2.0 mm resulted in significantly decreased compression forces, whereas there was no significant decrease in compression force with a gap of 1 mm. An interfragmentary gap of 2.5 mm did not result in any contact between blocks. We conclude that an increased interfragmentary gap leads to decreased compression force with this screw, which may have implications for fracture healing.

  15. Analysis of the Optimum Usage of Slag for the Compressive Strength of Concrete.

    PubMed

    Lee, Han-Seung; Wang, Xiao-Yong; Zhang, Li-Na; Koh, Kyung-Taek

    2015-03-18

    Ground granulated blast furnace slag is widely used as a mineral admixture to replace partial Portland cement in the concrete industry. As the amount of slag increases, the late-age compressive strength of concrete mixtures increases. However, after an optimum point, any further increase in slag does not improve the late-age compressive strength. This optimum replacement ratio of slag is a crucial factor for its efficient use in the concrete industry. This paper proposes a numerical procedure to analyze the optimum usage of slag for the compressive strength of concrete. This numerical procedure starts with a blended hydration model that simulates cement hydration, slag reaction, and interactions between cement hydration and slag reaction. The amount of calcium silicate hydrate (CSH) is calculated considering the contributions from cement hydration and slag reaction. Then, by using the CSH contents, the compressive strength of the slag-blended concrete is evaluated. Finally, based on the parameter analysis of the compressive strength development of concrete with different slag inclusions, the optimum usage of slag in concrete mixtures is determined to be approximately 40% of the total binder content. The proposed model is verified through experimental results of the compressive strength of slag-blended concrete with different water-to-binder ratios and different slag inclusions.

  16. Analysis of the Optimum Usage of Slag for the Compressive Strength of Concrete

    PubMed Central

    Lee, Han-Seung; Wang, Xiao-Yong; Zhang, Li-Na; Koh, Kyung-Taek

    2015-01-01

    Ground granulated blast furnace slag is widely used as a mineral admixture to replace partial Portland cement in the concrete industry. As the amount of slag increases, the late-age compressive strength of concrete mixtures increases. However, after an optimum point, any further increase in slag does not improve the late-age compressive strength. This optimum replacement ratio of slag is a crucial factor for its efficient use in the concrete industry. This paper proposes a numerical procedure to analyze the optimum usage of slag for the compressive strength of concrete. This numerical procedure starts with a blended hydration model that simulates cement hydration, slag reaction, and interactions between cement hydration and slag reaction. The amount of calcium silicate hydrate (CSH) is calculated considering the contributions from cement hydration and slag reaction. Then, by using the CSH contents, the compressive strength of the slag-blended concrete is evaluated. Finally, based on the parameter analysis of the compressive strength development of concrete with different slag inclusions, the optimum usage of slag in concrete mixtures is determined to be approximately 40% of the total binder content. The proposed model is verified through experimental results of the compressive strength of slag-blended concrete with different water-to-binder ratios and different slag inclusions. PMID:28787998

  17. Simulation of breast compression in mammography using finite element analysis: A preliminary study

    NASA Astrophysics Data System (ADS)

    Liu, Yan-Lin; Liu, Pei-Yuan; Huang, Mei-Lan; Hsu, Jui-Ting; Han, Ruo-Ping; Wu, Jay

    2017-11-01

    Adequate compression during mammography lowers the absorbed dose in the breast and improves the image quality. The compressed breast thickness (CBT) is affected by various factors, such as breast volume, glandularity, and compression force. In this study, we used the finite element analysis to simulate breast compression and deformation and validated the simulated CBT with clinical mammography results. Image data from ten subjects who had undergone mammography screening and breast magnetic resonance imaging (MRI) were collected, and their breast models were created according to the MR images. The non-linear tissue deformation under 10-16 daN in the cranial-caudal direction was simulated. When the clinical compression force was used, the simulated CBT ranged from 2.34 to 5.90 cm. The absolute difference between the simulated CBT and the clinically measured CBT ranged from 0.5 to 7.1 mm. The simulated CBT had a strong positive linear relationship to breast volume and a weak negative correlation to glandularity. The average simulated CBT under 10, 12, 14, and 16 daN was 5.68, 5.12, 4.67, and 4.25 cm, respectively. Through this study, the relationships between CBT, breast volume, glandularity, and compression force are provided for use in clinical mammography.
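
    As a quick illustration of the reported trend (not part of the study's finite element analysis), a least-squares line through the four average simulated CBT values quoted above summarizes how CBT falls with compression force.

```python
# Linear fit to the average simulated CBT values reported in the abstract.
import numpy as np

force_daN = np.array([10.0, 12.0, 14.0, 16.0])
cbt_cm = np.array([5.68, 5.12, 4.67, 4.25])

slope, intercept = np.polyfit(force_daN, cbt_cm, 1)
print(f"CBT ~ {intercept:.2f} {slope:+.3f} * force (cm, force in daN)")   # roughly 8.0 - 0.24*F
```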

  18. Cost-effectiveness analysis of treatments for vertebral compression fractures.

    PubMed

    Edidin, Avram A; Ong, Kevin L; Lau, Edmund; Schmier, Jordana K; Kemner, Jason E; Kurtz, Steven M

    2012-07-01

    Vertebral compression fractures (VCFs) can be treated by nonsurgical management or by minimally invasive surgical treatment including vertebroplasty and balloon kyphoplasty. The purpose of the present study was to characterize the cost to Medicare for treating VCF-diagnosed patients by nonsurgical management, vertebroplasty, or kyphoplasty. We hypothesized that surgical treatments for VCFs using vertebroplasty or kyphoplasty would be a cost-effective alternative to nonsurgical management for the Medicare patient population. Cost per life-year gained for VCF patients in the US Medicare population was compared between operated (kyphoplasty and vertebroplasty) and non-operated patients and between kyphoplasty and vertebroplasty patients, all as a function of patient age and gender. Life expectancy was estimated using a parametric Weibull survival model (adjusted for comorbidities) for 858 978 VCF patients in the 100% Medicare dataset (2005-2008). Median payer costs were identified for each treatment group for up to 3 years following VCF diagnosis, based on 67 018 VCF patients in the 5% Medicare dataset (2005-2008). A discount rate of 3% was used for the base case in the cost-effectiveness analysis, with 0% and 5% discount rates used in sensitivity analyses. After accounting for the differences in median costs and using a discount rate of 3%, the cost per life-year gained for kyphoplasty and vertebroplasty patients ranged from $US1863 to $US6687 and from $US2452 to $US13 543, respectively, compared with non-operated patients. The cost per life-year gained for kyphoplasty compared with vertebroplasty ranged from -$US4878 (cost saving) to $US2763. Among patients for whom surgical treatment was indicated, kyphoplasty was found to be cost effective, and perhaps even cost saving, compared with vertebroplasty. Even for the oldest patients (85 years of age and older), both interventions would be considered cost effective in terms of cost per life-year gained.
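
    A generic sketch of the metric used in the study, incremental cost per life-year gained with discounting at 3% per year; the cost and survival streams below are hypothetical placeholders, not values from the Medicare datasets.

```python
# Incremental cost per life-year gained with annual discounting (illustrative numbers).
def discounted(values_per_year, rate=0.03):
    """Present value of a yearly stream, discounting year t by (1 + rate)**t."""
    return sum(v / (1.0 + rate) ** t for t, v in enumerate(values_per_year))

# Hypothetical 3-year cost (USD) and life-year streams for operated vs non-operated cohorts.
cost_op, ly_op = [12000, 2000, 2000], [1.0, 0.9, 0.8]
cost_non, ly_non = [8000, 3000, 3000], [1.0, 0.8, 0.6]

delta_cost = discounted(cost_op) - discounted(cost_non)
delta_ly = discounted(ly_op) - discounted(ly_non)
print("cost per life-year gained:", delta_cost / delta_ly)
```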

  19. A Semi-implicit Method for Time Accurate Simulation of Compressible Flow

    NASA Astrophysics Data System (ADS)

    Wall, Clifton; Pierce, Charles D.; Moin, Parviz

    2001-11-01

    A semi-implicit method for time accurate simulation of compressible flow is presented. The method avoids the acoustic CFL limitation, allowing a time step restricted only by the convective velocity. Centered discretization in both time and space allows the method to achieve zero artificial attenuation of acoustic waves. The method is an extension of the standard low Mach number pressure correction method to the compressible Navier-Stokes equations, and the main feature of the method is the solution of a Helmholtz type pressure correction equation similar to that of Demirdžić et al. (Int. J. Num. Meth. Fluids, Vol. 16, pp. 1029-1050, 1993). The method is attractive for simulation of acoustic combustion instabilities in practical combustors. In these flows, the Mach number is low; therefore the time step allowed by the convective CFL limitation is significantly larger than that allowed by the acoustic CFL limitation, resulting in significant efficiency gains. Also, the method's property of zero artificial attenuation of acoustic waves is important for accurate simulation of the interaction between acoustic waves and the combustion process. The method has been implemented in a large eddy simulation code, and results from several test cases will be presented.
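
    The efficiency argument can be made concrete with one line of arithmetic comparing the acoustic and convective CFL limits; the velocity, sound speed, grid spacing, and CFL number below are illustrative values, not taken from the paper.

```python
# At low Mach number, the convective CFL time step greatly exceeds the acoustic one.
u, c, dx, cfl = 10.0, 340.0, 1e-3, 0.5        # convective velocity, sound speed, grid spacing, CFL
dt_acoustic = cfl * dx / (abs(u) + c)         # limit of an explicit compressible solver
dt_convective = cfl * dx / abs(u)             # limit of the semi-implicit scheme
print(dt_convective / dt_acoustic)            # ~35x larger time step at Mach ~0.03
```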

  20. Compression techniques in tele-radiology

    NASA Astrophysics Data System (ADS)

    Lu, Tianyu; Xiong, Zixiang; Yun, David Y.

    1999-10-01

    This paper describes a prototype telemedicine system for remote 3D radiation treatment planning. Due to the voluminous medical image data and the image streams generated at interactive frame rates in this application, the importance of deploying adjustable lossy-to-lossless compression techniques is emphasized in order to achieve acceptable performance over various kinds of communication networks. In particular, compression of the data substantially reduces the transmission time and therefore allows large-scale radiation distribution simulation and interactive volume visualization using remote supercomputing resources in a timely fashion. The compression algorithms currently used in the software we developed are the JPEG and H.263 lossy methods and the Lempel-Ziv (LZ77) lossless method. Both objective and subjective assessments of the effect of lossy compression methods on the volume data are conducted. Favorable results are obtained, showing that substantial compression ratios are achievable within the distortion tolerance. From our experience, we conclude that 30 dB (PSNR) is about the lower bound to achieve acceptable quality when applying lossy compression to anatomy volume data (e.g. CT). For computer-simulated data, a much higher PSNR (up to 100 dB) can be expected. This work not only introduces a novel approach for delivering medical services that will have a significant impact on existing cooperative image-based services, but also provides a platform for physicians to assess the effects of lossy compression techniques on the diagnostic and aesthetic appearance of medical imaging.
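
    For reference, the 30 dB acceptability bound quoted above refers to peak signal-to-noise ratio; a standard definition for 8-bit image data is sketched below (the paper does not specify its exact implementation).

```python
# Standard PSNR definition for 8-bit image or volume data.
import numpy as np

def psnr(original, compressed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two arrays of the same shape."""
    mse = np.mean((original.astype(float) - compressed.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```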

  1. Impact of Various Compression Ratio on the Compression Ignition Engine with Diesel and Jatropha Biodiesel

    NASA Astrophysics Data System (ADS)

    Sivaganesan, S.; Chandrasekaran, M.; Ruban, M.

    2017-03-01

    The present experimental investigation evaluates the effects of using a blend of diesel fuel with a 20% concentration of methyl ester of Jatropha biodiesel at various compression ratios. Both the diesel and the biodiesel fuel blend were injected at 23° BTDC into the combustion chamber. The experiment was carried out at three different compression ratios. Biodiesel was extracted from Jatropha oil; a 20% (B20) concentration was found to be the best blend ratio in an earlier experimental study. The engine was operated at compression ratios of 17.5, 16.5, and 15.5. The main objective is to obtain minimum specific fuel consumption, better efficiency, and lower emissions at the different compression ratios. The results show that at full load there is an increase in efficiency compared with diesel; the highest efficiency is obtained with B20MEOJBA at a compression ratio of 17.5. It is noted that there is an increase in thermal efficiency as the blend ratio increases. The biodiesel blend performs close to diesel, but emissions are reduced in all B20MEOJBA blends compared to diesel. Thus this work focuses on the best compression ratio and the suitability of biodiesel blends as an alternate fuel in diesel engines.

  2. Compressive sensing for efficient health monitoring and effective damage detection of structures

    NASA Astrophysics Data System (ADS)

    Jayawardhana, Madhuka; Zhu, Xinqun; Liyanapathirana, Ranjith; Gunawardana, Upul

    2017-02-01

    Real-world Structural Health Monitoring (SHM) systems consist of sensors on the scale of hundreds, each sensor generating extremely large amounts of data, which raises the issue of the cost associated with data transfer and storage. Sensor energy is a major component of this cost, especially in Wireless Sensor Networks (WSN). Data compression is one of the techniques being explored to mitigate these issues. In contrast to traditional data compression techniques, Compressive Sensing (CS), a very recent development, introduces the means of accurately reproducing a signal from far fewer samples than Nyquist's theorem requires. CS achieves this by exploiting the sparsity of the signal. By reducing the number of data samples acquired, CS may help reduce the energy consumption and storage costs associated with SHM systems. This paper investigates CS-based data acquisition in SHM, in particular the implications of CS for damage detection and localization. CS is implemented in a simulation environment to compress structural response data from a Reinforced Concrete (RC) structure. Promising results were obtained from the compressed data reconstruction process as well as the subsequent damage identification process using the reconstructed data. A reconstruction accuracy of 99% could be achieved at a Compression Ratio (CR) of 2.48 using the experimental data. Further analysis using the reconstructed signals provided accurate damage detection and localization results using two damage detection algorithms, showing that CS did not compromise the crucial information on structural damage during the compression process.

  3. Vestibulo-ocular reflex gain values in the suppression head impulse test of healthy subjects.

    PubMed

    Rey-Martinez, Jorge; Thomas-Arrizabalaga, Izaskun; Espinosa-Sanchez, Juan Manuel; Batuecas-Caletrio, Angel; Trinidad-Ruiz, Gabriel; Matiño-Soler, Eusebi; Perez-Fernandez, Nicolas

    2018-02-15

    To assess whether there are differences in vestibulo-ocular reflex (VOR) gain between the suppression head impulse (SHIMP) and head impulse (HIMP) video head impulse test paradigms, and if so, what their causes are. A prospective multicenter observational double-blind nonrandomized clinical study was performed, enrolling 80 healthy subjects from four reference hospitals. SHIMP data were postprocessed to eliminate impulses in which early SHIMP saccades were detected. Differences between HIMP and SHIMP VOR gain values were statistically evaluated. Head impulse maximum velocity, gender, age, direction of impulse, and hospital center were considered as possible influential factors. A small but statistically significant difference between HIMP and SHIMP VOR gain values was found on repeated measures analysis of variance (-0.05 ± 0.006, P < 0.001). An optimized linear model showed a significant influence of age on the observed differences between HIMP and SHIMP gain values and found no influence of maximum head impulse velocity on these differences. Both HIMP and SHIMP VOR gain values were significantly lower (-0.09, P < 0.001) when the impulses were performed to the left side. We observed a difference between SHIMP and HIMP gain values that is not adequately explained by known gain modification factors. The persistence of this slight but significant difference indicates that there are additional factors causing lower SHIMP VOR gain values. This difference must be considered in further studies as well as in clinical SHIMP testing protocols. We hypothesize that VOR phasic response inhibition could be the underlying cause of this difference. IIb. Laryngoscope, 2018. © 2018 The American Laryngological, Rhinological and Otological Society, Inc.

  4. The association between patient participation and functional gain following inpatient rehabilitation.

    PubMed

    Morghen, Sara; Morandi, Alessandro; Guccione, Andrew A; Bozzini, Michela; Guerini, Fabio; Gatti, Roberto; Del Santo, Francesco; Gentile, Simona; Trabucchi, Marco; Bellelli, Giuseppe

    2017-08-01

    To evaluate patients' participation during physical therapy sessions as assessed with the Pittsburgh rehabilitation participation scale (PRPS) as a possible predictor of functional gain after rehabilitation training. All patients aged 65 years or older consecutively admitted to a Department of Rehabilitation and Aged Care (DRAC) were evaluated on admission regarding their health, nutritional, functional and cognitive status. Functional status was assessed with the functional independence measure (FIM) on admission and at discharge. Participation during rehabilitation sessions was measured with the PRPS. Functional gain was evaluated using the Montebello rehabilitation factor score (MRFS efficacy), and patients stratified in two groups according to their level of functional gain and their sociodemographic, clinical and functional characteristics were compared. Predictors of poor functional gain were evaluated using a multivariable logistic regression model adjusted for confounding factors. A total of 556 subjects were included in this study. Patients with poor functional gain at discharge demonstrated lower participation during physical therapy sessions and were significantly older, more cognitively and functionally impaired on admission, more depressed, more comorbid, and more frequently admitted for cardiac disease or immobility syndrome than their counterparts. There was a significant linear association between PRPS scores and MRFS efficacy. In a multivariable logistic regression model, participation was independently associated with functional gain at discharge (odds ratio 1.51, 95 % confidence interval 1.19-1.91). This study showed that participation during physical therapy affects the extent of functional gain at discharge in a large population of older patients with multiple diseases receiving in-hospital rehabilitation.

  5. Fractal-Based Image Compression, II

    DTIC Science & Technology

    1990-06-01

    The need for data compression is not new. With humble beginnings such as the use of acronyms and abbreviations in spoken and written word, the methods for data compression became more advanced as the need for information grew. The Morse code, developed because of the need for faster telegraphy, was an early example of a data compression technique. Largely because of the

  6. Clinically significant weight gain 1 year after occupational back injury.

    PubMed

    Keeney, Benjamin J; Fulton-Kehoe, Deborah; Wickizer, Thomas M; Turner, Judith A; Chan, Kwun Chuen Gary; Franklin, Gary M

    2013-03-01

    To examine the incidence of clinically significant weight gain 1 year after occupational back injury, and risk factors for that gain. A cohort of Washington State workers with wage-replacement benefits for back injuries completed baseline and 1-year follow-up telephone interviews. We obtained additional measures from claims and medical records. Among 1263 workers, 174 (13.8%) reported clinically significant weight gain (≥7%) 1 year after occupational back injury. Women and workers who had more than 180 days on wage replacement at 1 year were twice as likely (adjusted odds ratio = 2.17, 95% confidence interval = 1.54 to 3.07; adjusted odds ratio = 2.40, 95% confidence interval = 1.63 to 3.53, respectively; both P < 0.001) to have clinically significant weight gain. Women and workers on wage replacement for more than 180 days may be susceptible to clinically significant weight gain after occupational back injury.

  7. Clinically Significant Weight Gain One Year After Occupational Back Injury

    PubMed Central

    Keeney, Benjamin J.; Fulton-Kehoe, Deborah; Wickizer, Thomas M.; Turner, Judith A.; Chan, Kwun Chuen Gary; Franklin, Gary M.

    2014-01-01

    Objective To examine the incidence of clinically significant weight gain one year after occupational back injury, and risk factors for that gain. Methods A cohort of Washington State workers with wage-replacement benefits for back injuries completed baseline and 1-year follow-up telephone interviews. We obtained additional measures from claims and medical records. Results Among 1,263 workers, 174 (13.8%) reported clinically significant weight gain (≥7%) 1 year after occupational back injury. Women and workers who had >180 days on wage replacement at 1 year were twice as likely (adjusted OR=2.17, 95% CI=1.54–3.07; adjusted OR=2.40, 95% CI=1.63–3.53, respectively; both P<0.001) to have clinically significant weight gain. Conclusions Women and workers on wage replacement >180 days may be susceptible to clinically significant weight gain following occupational back injury. PMID:23247606

  8. Compressible Flow Toolbox

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin J.

    2006-01-01

    The Compressible Flow Toolbox is primarily a MATLAB-language implementation of a set of algorithms that solve approximately 280 linear and nonlinear classical equations for compressible flow. The toolbox is useful for analysis of one-dimensional steady flow with either constant entropy, friction, heat transfer, or Mach number greater than 1. The toolbox also contains algorithms for comparing and validating the equation-solving algorithms against solutions previously published in open literature. The classical equations solved by the Compressible Flow Toolbox are as follows: The isentropic-flow equations, The Fanno flow equations (pertaining to flow of an ideal gas in a pipe with friction), The Rayleigh flow equations (pertaining to frictionless flow of an ideal gas, with heat transfer, in a pipe of constant cross section), The normal-shock equations, The oblique-shock equations, and The expansion equations.
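
    As an example of the classical relations the toolbox covers, the isentropic stagnation-to-static pressure ratio can be written in a few lines. The toolbox itself is a MATLAB implementation; the Python sketch below is only an illustration of the underlying formula.

```python
# Isentropic stagnation-to-static pressure ratio for a perfect gas.
def stagnation_pressure_ratio(mach, gamma=1.4):
    """p0/p as a function of Mach number for isentropic flow."""
    return (1.0 + 0.5 * (gamma - 1.0) * mach ** 2) ** (gamma / (gamma - 1.0))

print(stagnation_pressure_ratio(1.0))   # ~1.893 at Mach 1
```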

  9. Compressed Sensing for Chemistry

    NASA Astrophysics Data System (ADS)

    Sanders, Jacob Nathan

    Many chemical applications, from spectroscopy to quantum chemistry, involve measuring or computing a large amount of data, and then compressing this data to retain the most chemically-relevant information. In contrast, compressed sensing is an emergent technique that makes it possible to measure or compute an amount of data that is roughly proportional to its information content. In particular, compressed sensing enables the recovery of a sparse quantity of information from significantly undersampled data by solving an ℓ 1-optimization problem. This thesis represents the application of compressed sensing to problems in chemistry. The first half of this thesis is about spectroscopy. Compressed sensing is used to accelerate the computation of vibrational and electronic spectra from real-time time-dependent density functional theory simulations. Using compressed sensing as a drop-in replacement for the discrete Fourier transform, well-resolved frequency spectra are obtained at one-fifth the typical simulation time and computational cost. The technique is generalized to multiple dimensions and applied to two-dimensional absorption spectroscopy using experimental data collected on atomic rubidium vapor. Finally, a related technique known as super-resolution is applied to open quantum systems to obtain realistic models of a protein environment, in the form of atomistic spectral densities, at lower computational cost. The second half of this thesis deals with matrices in quantum chemistry. It presents a new use of compressed sensing for more efficient matrix recovery whenever the calculation of individual matrix elements is the computational bottleneck. The technique is applied to the computation of the second-derivative Hessian matrices in electronic structure calculations to obtain the vibrational modes and frequencies of molecules. When applied to anthracene, this technique results in a threefold speed-up, with greater speed-ups possible for larger molecules. The

  10. Simulations of in situ x-ray diffraction from uniaxially compressed highly textured polycrystalline targets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGonegle, David, E-mail: d.mcgonegle1@physics.ox.ac.uk; Wark, Justin S.; Higginbotham, Andrew

    2015-08-14

    A growing number of shock compression experiments, especially those involving laser compression, are taking advantage of in situ x-ray diffraction as a tool to interrogate structure and microstructure evolution. Although these experiments are becoming increasingly sophisticated, there has been little work on exploiting the textured nature of polycrystalline targets to gain information on sample response. Here, we describe how to generate simulated x-ray diffraction patterns from materials with an arbitrary texture function subject to a general deformation gradient. We will present simulations of Debye-Scherrer x-ray diffraction from highly textured polycrystalline targets that have been subjected to uniaxial compression, as may occur under planar shock conditions. In particular, we study samples with a fibre texture, and find that the azimuthal dependence of the diffraction patterns contains information that, in principle, affords discrimination between a number of similar shock-deformation mechanisms. For certain cases, we compare our method with results obtained by taking the Fourier transform of the atomic positions calculated by classical molecular dynamics simulations. Illustrative results are presented for the shock-induced α–ϵ phase transition in iron, the α–ω transition in titanium and deformation due to twinning in tantalum that is initially preferentially textured along [001] and [011]. The simulations are relevant to experiments that can now be performed using 4th generation light sources, where single-shot x-ray diffraction patterns from crystals compressed via laser-ablation can be obtained on timescales shorter than a phonon period.
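
    A hedged sketch of the geometric core of such simulations: a reciprocal-lattice vector g transforms as inv(F)^T g under a deformation gradient F, which shifts the d-spacing and hence the Bragg angle depending on the plane's orientation to the compression axis. Texture weighting, structure factors, and detector geometry are all omitted, and the numbers are illustrative.

```python
# Bragg angle shift of a lattice plane under an applied deformation gradient.
import numpy as np

def bragg_two_theta(g, F, wavelength):
    """2-theta (degrees) for reciprocal-lattice vector g (1/Angstrom) after deformation F."""
    g_def = np.linalg.inv(F).T @ g                 # reciprocal vectors transform as inv(F)^T
    d = 1.0 / np.linalg.norm(g_def)                # deformed d-spacing in Angstrom
    return 2.0 * np.degrees(np.arcsin(wavelength / (2.0 * d)))

F = np.diag([1.0, 1.0, 0.9])                       # 10% uniaxial compression along z
g_axial = np.array([0.0, 0.0, 0.5])                # plane normal parallel to the compression axis (d0 = 2 A)
g_lateral = np.array([0.5, 0.0, 0.0])              # plane normal perpendicular to the axis (d0 = 2 A)
print(bragg_two_theta(g_axial, F, 1.54))           # shifted to a higher angle (~50.7 deg)
print(bragg_two_theta(g_lateral, F, 1.54))         # essentially unchanged (~45.3 deg)
```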

  11. Simulations of in situ x-ray diffraction from uniaxially compressed highly textured polycrystalline targets

    DOE PAGES

    McGonegle, David; Milathianaki, Despina; Remington, Bruce A.; ...

    2015-08-11

    A growing number of shock compression experiments, especially those involving laser compression, are taking advantage of in situ x-ray diffraction as a tool to interrogate structure and microstructure evolution. Although these experiments are becoming increasingly sophisticated, there has been little work on exploiting the textured nature of polycrystalline targets to gain information on sample response. Here, we describe how to generate simulated x-ray diffraction patterns from materials with an arbitrary texture function subject to a general deformation gradient. We will present simulations of Debye-Scherrer x-ray diffraction from highly textured polycrystalline targets that have been subjected to uniaxial compression, as may occur under planar shock conditions. In particular, we study samples with a fibre texture, and find that the azimuthal dependence of the diffraction patterns contains information that, in principle, affords discrimination between a number of similar shock-deformation mechanisms. For certain cases, we compare our method with results obtained by taking the Fourier transform of the atomic positions calculated by classical molecular dynamics simulations. Illustrative results are presented for the shock-induced α–ϵ phase transition in iron, the α–ω transition in titanium and deformation due to twinning in tantalum that is initially preferentially textured along [001] and [011]. In conclusion, the simulations are relevant to experiments that can now be performed using 4th generation light sources, where single-shot x-ray diffraction patterns from crystals compressed via laser-ablation can be obtained on timescales shorter than a phonon period.

  12. Compressed sensing approach for wrist vein biometrics.

    PubMed

    Lantsov, Aleksey; Ryabko, Maxim; Shchekin, Aleksey

    2018-04-01

    The work describes features of the compressed sensing (CS) approach utilized for the development of a wearable system for wrist vein recognition with single-pixel detection; we consider this system useful for biometric authentication purposes. The CS approach implies the use of spatial light modulation (SLM), which, in our case, can be performed in different ways: with a liquid crystal display or with a diffusely scattering medium. We show that compressed sensing combined with the above-mentioned means of SLM allows us to avoid using an optical system, a limiting factor for wearable devices. The trade-off between the two SLM approaches with regard to the practical implementation of the CS approach for wrist vein recognition is discussed. A possible solution to the misalignment problem, a typical issue for imaging systems based upon 2D arrays of photodiodes, is also proposed. The proposed design of the wearable device for wrist vein recognition is based upon single-pixel detection. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Compression Frequency Choice for Compression Mass Gauge Method and Effect on Measurement Accuracy

    NASA Astrophysics Data System (ADS)

    Fu, Juan; Chen, Xiaoqian; Huang, Yiyong

    2013-12-01

    Gauging the liquid fuel mass in a tank on a spacecraft under microgravity conditions is a difficult task. Without strong buoyancy, the configuration of the liquid and gas in the tank is uncertain, and more than one bubble may exist in the liquid. All of this affects the measurement accuracy of liquid mass gauging, especially for the method called Compression Mass Gauge (CMG). Four resonance sources affect the choice of compression frequency for the CMG method: structural resonance, liquid sloshing, transducer resonance, and bubble resonance. A ground experimental apparatus was designed and built to validate the gauging method and to study the influence of different compression frequencies at different fill levels on the measurement accuracy. Harmonic phenomena should be considered during filter design when processing test data. Results demonstrate that the ground experiment system performs well with high accuracy, and that the measurement accuracy increases as the compression frequency climbs at low fill levels, whereas low compression frequencies are the better choice at high fill levels. Liquid sloshing degrades the measurement accuracy when the surface is excited into waves by an external disturbance at the liquid's natural frequency. The measurement accuracy is still acceptable for small-amplitude vibration.
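
    A hedged sketch of the principle behind the CMG method: a known small volume change applied to the tank produces a pressure change set by the gas (ullage) volume alone, so the liquid volume follows by subtraction. Isothermal compression is assumed here for simplicity; the real method must account for the gas process and the resonance effects discussed above, and the numbers are illustrative.

```python
# Boyle's-law estimate of liquid volume from the pressure response to a known
# volume compression (isothermal assumption, illustrative values).
def liquid_volume(tank_volume, pressure, dV, dP):
    gas_volume = pressure * dV / dP        # dP ~ P * dV / V_gas for small isothermal compression
    return tank_volume - gas_volume

# Example: 1 m^3 tank at 100 kPa; compressing by 0.1 L raises the pressure by 50 Pa.
print(liquid_volume(1.0, 100e3, 0.1e-3, 50.0))   # ~0.8 m^3 of liquid
```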

  14. Testing compression strength of wood logs by drilling resistance

    NASA Astrophysics Data System (ADS)

    Kalny, Gerda; Rados, Kristijan; Rauch, Hans Peter

    2017-04-01

    Soil bioengineering is a construction technique using biological components for hydraulic and civil engineering solutions, based on the application of living plants and other auxiliary materials including among others log wood. Considering the reliability of the construction it is important to know about the durability and the degradation process of the wooden logs to estimate and retain the integral performance of a soil bioengineering system. An important performance indicator is the compression strength, but this parameter is not easy to examine by non-destructive methods. The Rinntech Resistograph is an instrument to measure the drilling resistance by a 3 mm wide needle in a wooden log. It is a quasi-non-destructive method as the remaining hole has no weakening effects to the wood. This is an easy procedure but result in values, hard to interpret. To assign drilling resistance values to specific compression strengths, wooden specimens were tested in an experiment and analysed with the Resistograph. Afterwards compression tests were done at the same specimens. This should allow an easier interpretation of drilling resistance curves in future. For detailed analyses specimens were investigated by means of branch inclusions, cracks and distances between annual rings. Wood specimens are tested perpendicular to the grain. First results show a correlation between drilling resistance and compression strength by using the mean drilling resistance, average width of the annual rings and the mean range of the minima and maxima values as factors for the drilling resistance. The extended limit of proportionality, the offset yield strength and the maximum strength were taken as parameters for compression strength. Further investigations at a second point in time strengthen these results.

  15. Compressed air injection technique to standardize block injection pressures.

    PubMed

    Tsui, Ban C H; Li, Lisa X Y; Pillay, Jennifer J

    2006-11-01

    Presently, no standardized technique exists to monitor injection pressures during peripheral nerve blocks. Our objective was to determine if a compressed air injection technique, using an in vitro model based on Boyle's law and typical regional anesthesia equipment, could consistently maintain injection pressures below a 1293 mmHg level associated with clinically significant nerve injury. Injection pressures for 20 and 30 mL syringes with various needle sizes (18G, 20G, 21G, 22G, and 24G) were measured in a closed system. A set volume of air was aspirated into a saline-filled syringe and then compressed and maintained at various percentages while pressure was measured. The needle was inserted into the injection port of a pressure sensor, which had attached extension tubing with an injection plug clamped "off". Using linear regression with all data points, the pressure value and 99% confidence interval (CI) at 50% air compression was estimated. The linearity of Boyle's law was demonstrated with a high correlation, r = 0.99, and a slope of 0.984 (99% CI: 0.967-1.001). The net pressure generated at 50% compression was estimated as 744.8 mmHg, with the 99% CI between 729.6 and 760.0 mmHg. The various syringe/needle combinations had similar results. By creating and maintaining syringe air compression at 50% or less, injection pressures will be substantially below the 1293 mmHg threshold considered to be an associated risk factor for clinically significant nerve injury. This technique may allow simple, real-time and objective monitoring during local anesthetic injections while inherently reducing injection speed.
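
    The 50%-compression figure follows directly from Boyle's law: halving the air volume doubles the absolute pressure, so the gauge (net) pressure equals roughly one atmosphere, consistent with the 744.8 mmHg estimate reported above. A one-function sketch of that arithmetic is shown below.

```python
# Gauge pressure generated by compressing a trapped air volume (Boyle's law).
P_ATM_MMHG = 760.0

def net_pressure(compression_fraction, p_atm=P_ATM_MMHG):
    """Gauge pressure after compressing the air volume by `compression_fraction`."""
    remaining = 1.0 - compression_fraction       # V2 / V1
    return p_atm / remaining - p_atm             # P2 - P1, from P1*V1 = P2*V2

print(net_pressure(0.5))    # 760 mmHg gauge at 50% compression, well below 1293 mmHg
```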

  16. Compressed normalized block difference for object tracking

    NASA Astrophysics Data System (ADS)

    Gao, Yun; Zhang, Dengzhuo; Cai, Donglan; Zhou, Hao; Lan, Ge

    2018-04-01

    Feature extraction is very important for robust and real-time tracking. Compressive sensing has provided technical support for real-time feature extraction. However, all existing compressive trackers are based on the compressed Haar-like feature, and how to compress other, more powerful high-dimensional features is worth researching. In this paper, a novel compressed normalized block difference (CNBD) feature is proposed. To resist noise effectively in the high-dimensional normalized pixel difference (NPD) feature, the normalized block difference feature extends the two pixels in the original NPD formula to two blocks. A CNBD feature can then be obtained by compressing the normalized block difference feature based on compressive sensing theory, with a sparse random Gaussian matrix as the measurement matrix. Comparative experiments with 7 trackers on 20 challenging sequences showed that the tracker based on the CNBD feature performs better than the other trackers, especially the FCT tracker based on the compressed Haar-like feature, in terms of AUC, SR, and precision.
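
    A hedged sketch of the feature construction described above: block means take the place of single pixels in the NPD formula (x - y)/(x + y), and the resulting high-dimensional vector is compressed by a random projection in the spirit of compressive sensing. Block positions, sizes, and dimensions are illustrative, and a dense Gaussian matrix is used here for brevity where the paper specifies a sparse random Gaussian measurement matrix.

```python
# Normalized block differences followed by random projection (CNBD-style sketch).
import numpy as np

def block_mean(img, top, left, size):
    return img[top:top + size, left:left + size].mean()

def nbd_vector(img, pairs, size=4):
    """Normalized block difference (a - b) / (a + b) for each pair of block positions."""
    feats = []
    for (t1, l1), (t2, l2) in pairs:
        a, b = block_mean(img, t1, l1, size), block_mean(img, t2, l2, size)
        feats.append(0.0 if a + b == 0 else (a - b) / (a + b))
    return np.array(feats)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(float)
pairs = [((rng.integers(0, 60), rng.integers(0, 60)),
          (rng.integers(0, 60), rng.integers(0, 60))) for _ in range(500)]

high_dim = nbd_vector(img, pairs)              # high-dimensional block-difference feature
R = rng.normal(size=(50, high_dim.size))       # random Gaussian measurement matrix
compressed = R @ high_dim                      # 50-dimensional compressed feature
```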

  17. Optimization of the segmented method for optical compression and multiplexing system

    NASA Astrophysics Data System (ADS)

    Al Falou, Ayman

    2002-05-01

    Because of the constantly increasing demand for image exchange, and despite the ever increasing bandwidth of networks, compression and multiplexing of images are becoming inseparable from their generation and display. For high-resolution, real-time motion pictures, performing compression electronically requires complex and time-consuming processing units. By contrast, owing to its inherently two-dimensional character, coherent optics is well suited to such processes, which are basically two-dimensional data handling in the Fourier domain. Additionally, the main limiting factor, the maximum frame rate, is vanishing because of recent improvements in spatial light modulator technology. The purpose of this communication is to benefit from recent optical correlation algorithms: the segmented filtering used to store multiple references in a given space-bandwidth-product optical filter can be applied to networks to compress and multiplex images in a given bandwidth channel.

  18. Tomographic Image Compression Using Multidimensional Transforms.

    ERIC Educational Resources Information Center

    Villasenor, John D.

    1994-01-01

    Describes a method for compressing tomographic images obtained using Positron Emission Tomography (PET) and Magnetic Resonance (MR) by applying transform compression using all available dimensions. This takes maximum advantage of redundancy of the data, allowing significant increases in compression efficiency and performance. (13 references) (KRN)

  19. 30 CFR 77.412 - Compressed air systems.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 30 Mineral Resources 1 2014-07-01 2014-07-01 false Compressed air systems. 77.412 Section 77.412... for Mechanical Equipment § 77.412 Compressed air systems. (a) Compressors and compressed-air receivers... involving the pressure system of compressors, receivers, or compressed-air-powered equipment shall not be...

  20. 30 CFR 77.412 - Compressed air systems.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Compressed air systems. 77.412 Section 77.412... for Mechanical Equipment § 77.412 Compressed air systems. (a) Compressors and compressed-air receivers... involving the pressure system of compressors, receivers, or compressed-air-powered equipment shall not be...

  1. Onboard Data Compression of Synthetic Aperture Radar Data: Status and Prospects

    NASA Technical Reports Server (NTRS)

    Klimesh, Matthew A.; Moision, Bruce

    2008-01-01

    Synthetic aperture radar (SAR) instruments on spacecraft are capable of producing huge quantities of data. Onboard lossy data compression is commonly used to reduce the burden on the communication link. In this paper an overview is given of various SAR data compression techniques, along with an assessment of how much improvement is possible (and practical) and how to approach the problem of obtaining it. Synthetic aperture radar (SAR) instruments on spacecraft are capable of acquiring huge quantities of data. As a result, the available downlink rate and onboard storage capacity can be limiting factors in mission design for spacecraft with SAR instruments. This is true both for Earth-orbiting missions and missions to more distant targets such as Venus, Titan, and Europa. (Of course for missions beyond Earth orbit downlink rates are much lower and thus potentially much more limiting.) Typically spacecraft with SAR instruments use some form of data compression in order to reduce the storage size and/or downlink rate necessary to accommodate the SAR data. Our aim here is to give an overview of SAR data compression strategies that have been considered, and to assess the prospects for additional improvements.

  2. Damage development under compression-compression fatigue loading in a stitched uniwoven graphite/epoxy composite material

    NASA Technical Reports Server (NTRS)

    Vandermey, Nancy E.; Morris, Don H.; Masters, John E.

    1991-01-01

    Damage initiation and growth under compression-compression fatigue loading were investigated for a stitched uniweave material system with an underlying AS4/3501-6 quasi-isotropic layup. Performance of unnotched specimens having stitch rows at either 0 degree or 90 degrees to the loading direction was compared. Special attention was given to the effects of stitching related manufacturing defects. Damage evaluation techniques included edge replication, stiffness monitoring, x-ray radiography, residual compressive strength, and laminate sectioning. It was found that the manufacturing defect of inclined stitches had the greatest adverse effect on material performance. Zero degree and 90 degree specimen performances were generally the same. While the stitches were the source of damage initiation, they also slowed damage propagation both along the length and across the width and affected through-the-thickness damage growth. A pinched layer zone formed by the stitches particularly affected damage initiation and growth. The compressive failure mode was transverse shear for all specimens, both in static compression and fatigue cycling effects.

  3. Efficient compression of molecular dynamics trajectory files.

    PubMed

    Marais, Patrick; Kenwood, Julian; Smith, Keegan Carruthers; Kuttel, Michelle M; Gain, James

    2012-10-15

    We investigate whether specific properties of molecular dynamics trajectory files can be exploited to achieve effective file compression. We explore two classes of lossy, quantized compression scheme: "interframe" predictors, which exploit temporal coherence between successive frames in a simulation, and more complex "intraframe" schemes, which compress each frame independently. Our interframe predictors are fast, memory-efficient and well suited to on-the-fly compression of massive simulation data sets, and significantly outperform the benchmark BZip2 application. Our schemes are configurable: atomic positional accuracy can be sacrificed to achieve greater compression. For high fidelity compression, our linear interframe predictor gives the best results at very little computational cost: at moderate levels of approximation (12-bit quantization, maximum error ≈ 10^-2 Å), we can compress a 1-2 fs trajectory file to 5-8% of its original size. For 200 fs time steps, typically used in fine-grained water diffusion experiments, we can compress files to ~25% of their input size, still substantially better than BZip2. While compression performance degrades with high levels of quantization, the simulation error is typically much greater than the associated approximation error in such cases. Copyright © 2012 Wiley Periodicals, Inc.
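
    A hedged sketch of a linear interframe predictor of the kind described above: each quantized coordinate is predicted by linear extrapolation from the two previous frames, and only the (usually small) integer residual is kept. The quantization step is an illustrative choice rather than the paper's setting, and the entropy coding of the residuals is omitted.

```python
# Quantize coordinates, then store residuals of a linear two-frame predictor.
import numpy as np

STEP = 10.0 / 4096.0                       # Angstrom per quantization level (illustrative)

def encode(frames):
    """frames: array of shape (T, N, 3) of atomic coordinates in Angstrom."""
    q = np.round(np.asarray(frames) / STEP).astype(np.int64)   # quantized coordinates
    residuals = q.copy()                                       # frame 0 stored as-is
    residuals[1] = q[1] - q[0]
    residuals[2:] = q[2:] - (2 * q[1:-1] - q[:-2])             # linear extrapolation residuals
    return residuals

def decode(residuals):
    q = residuals.copy()
    q[1] = residuals[1] + q[0]
    for t in range(2, len(q)):
        q[t] = residuals[t] + 2 * q[t - 1] - q[t - 2]
    return q * STEP                                            # back to Angstrom (lossy via quantization)
```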

  4. Vertebral Augmentation Involving Vertebroplasty or Kyphoplasty for Cancer-Related Vertebral Compression Fractures: An Economic Analysis.

    PubMed

    2016-01-01

    Untreated vertebral compression fractures can have serious clinical consequences and impose a considerable impact on patients' quality of life and on caregivers. Since non-surgical management of these fractures has limited effectiveness, vertebral augmentation procedures are gaining acceptance in clinical practice for pain control and fracture stabilization. The objective of this analysis was to determine the cost-effectiveness and budgetary impact of kyphoplasty or vertebroplasty compared with non-surgical management for the treatment of vertebral compression fractures in patients with cancer. We performed a systematic review of health economic studies to identify relevant studies that compare the cost-effectiveness of kyphoplasty or vertebroplasty with non-surgical management for the treatment of vertebral compression fractures in adults with cancer. We also performed a primary cost-effectiveness analysis to assess the clinical benefits and costs of kyphoplasty or vertebroplasty compared with non-surgical management in the same population. We developed a Markov model to forecast benefits and harms of treatments, and corresponding quality-adjusted life years and costs. Clinical data and utility data were derived from published sources, while costing data were derived using Ontario administrative sources. We performed sensitivity analyses to examine the robustness of the results. In addition, a 1-year budget impact analysis was performed using data from Ontario administrative sources. Two scenarios were explored: (a) an increase in the total number of vertebral augmentation procedures performed among patients with cancer in Ontario, maintaining the current proportion of kyphoplasty versus vertebroplasty; and (b) no increase in the total number of vertebral augmentation procedures performed among patients with cancer in Ontario but an increase in the proportion of kyphoplasties versus vertebroplasties. The base case considered each of kyphoplasty and vertebroplasty

  5. Vertebral Augmentation Involving Vertebroplasty or Kyphoplasty for Cancer-Related Vertebral Compression Fractures: An Economic Analysis

    PubMed Central

    2016-01-01

    Background Untreated vertebral compression fractures can have serious clinical consequences and impose a considerable impact on patients' quality of life and on caregivers. Since non-surgical management of these fractures has limited effectiveness, vertebral augmentation procedures are gaining acceptance in clinical practice for pain control and fracture stabilization. The objective of this analysis was to determine the cost-effectiveness and budgetary impact of kyphoplasty or vertebroplasty compared with non-surgical management for the treatment of vertebral compression fractures in patients with cancer. Methods We performed a systematic review of health economic studies to identify relevant studies that compare the cost-effectiveness of kyphoplasty or vertebroplasty with non-surgical management for the treatment of vertebral compression fractures in adults with cancer. We also performed a primary cost-effectiveness analysis to assess the clinical benefits and costs of kyphoplasty or vertebroplasty compared with non-surgical management in the same population. We developed a Markov model to forecast benefits and harms of treatments, and corresponding quality-adjusted life years and costs. Clinical data and utility data were derived from published sources, while costing data were derived using Ontario administrative sources. We performed sensitivity analyses to examine the robustness of the results. In addition, a 1-year budget impact analysis was performed using data from Ontario administrative sources. Two scenarios were explored: (a) an increase in the total number of vertebral augmentation procedures performed among patients with cancer in Ontario, maintaining the current proportion of kyphoplasty versus vertebroplasty; and (b) no increase in the total number of vertebral augmentation procedures performed among patients with cancer in Ontario but an increase in the proportion of kyphoplasties versus vertebroplasties. Results The base case considered each of

  6. Poor chest compression quality with mechanical compressions in simulated cardiopulmonary resuscitation: a randomized, cross-over manikin study.

    PubMed

    Blomberg, Hans; Gedeborg, Rolf; Berglund, Lars; Karlsten, Rolf; Johansson, Jakob

    2011-10-01

    Mechanical chest compression devices are being implemented as an aid in cardiopulmonary resuscitation (CPR), despite lack of evidence of improved outcome. This manikin study evaluates the CPR-performance of ambulance crews, who had a mechanical chest compression device implemented in their routine clinical practice 8 months previously. The objectives were to evaluate time to first defibrillation, no-flow time, and estimate the quality of compressions. The performance of 21 ambulance crews (ambulance nurse and emergency medical technician) with the authorization to perform advanced life support was studied in an experimental, randomized cross-over study in a manikin setup. Each crew performed two identical CPR scenarios, with and without the aid of the mechanical compression device LUCAS. A computerized manikin was used for data sampling. There were no substantial differences in time to first defibrillation or no-flow time until first defibrillation. However, the fraction of adequate compressions in relation to total compressions was remarkably low in LUCAS-CPR (58%) compared to manual CPR (88%) (95% confidence interval for the difference: 13-50%). Only 12 out of the 21 ambulance crews (57%) applied the mandatory stabilization strap on the LUCAS device. The use of a mechanical compression aid was not associated with substantial differences in time to first defibrillation or no-flow time in the early phase of CPR. However, constant but poor chest compressions due to failure in recognizing and correcting a malposition of the device may counteract a potential benefit of mechanical chest compressions. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  7. MHD simulation of plasma compression experiments

    NASA Astrophysics Data System (ADS)

    Reynolds, Meritt; Barsky, Sandra; de Vietien, Peter

    2017-10-01

    General Fusion (GF) is working to build a magnetized target fusion (MTF) power plant based on compression of magnetically-confined plasma by liquid metal. GF is testing this compression concept by collapsing solid aluminum liners onto plasmas formed by coaxial helicity injection in a series of experiments called PCS (Plasma Compression, Small). We simulate the PCS experiments using the finite-volume MHD code VAC. The single-fluid plasma model includes temperature-dependent resistivity and anisotropic heat transport. The time-dependent curvilinear mesh for MHD simulation is derived from LS-DYNA simulations of actual field tests of liner implosion. We will discuss how 3D simulations reproduced instability observed in the PCS13 experiment and correctly predicted stabilization of PCS14 by ramping the shaft current during compression. We will also present a comparison of simulated Mirnov and x-ray diagnostics with experimental measurements indicating that PCS14 compressed well to a linear compression ratio of 2.5:1.

  8. Determinants of Weight Gain during the First Two Years of Life—The GECKO Drenthe Birth Cohort

    PubMed Central

    Küpers, Leanne K.; L’Abée, Carianne; Bocca, Gianni; Stolk, Ronald P.; Sauer, Pieter J. J.; Corpeleijn, Eva

    2015-01-01

    Objectives To explain weight gain patterns in the first two years of life, we compared the predictive values of potential risk factors individually and within four different domains: prenatal, nutrition, lifestyle and socioeconomic factors. Methods In a Dutch population-based birth cohort, length and weight were measured in 2475 infants at 1, 6, 12 and 24 months. Factors that might influence weight gain (e.g. birth weight, parental BMI, breastfeeding, hours of sleep and maternal education) were retrieved from health care files and parental questionnaires. Factors were compared with linear regression to best explain differences in weight gain, defined as changes in Z-score of weight-for-age and weight-for-length over 1–6, 6–12 and 12–24 months. In a two-step approach, factors were first studied individually for their association with growth velocity, followed by a comparison of the explained variance of the four domains. Results Birth weight and type of feeding were most importantly related to weight gain in the first six months. Breastfeeding versus formula feeding showed distinct growth patterns in the first six months, but not thereafter. From six months onwards, the ability to explain differences in weight gain decreased substantially (from R² total = 38.7% to R² total < 7%). Conclusion Birth weight and breast feeding were most important to explain early weight gain, especially in the first six months of life. After the first six months of life other yet undetermined factors start to play a role. PMID:26192417

  9. Visually lossless compression of digital hologram sequences

    NASA Astrophysics Data System (ADS)

    Darakis, Emmanouil; Kowiel, Marcin; Näsänen, Risto; Naughton, Thomas J.

    2010-01-01

    Digital hologram sequences have great potential for the recording of 3D scenes of moving macroscopic objects as their numerical reconstruction can yield a range of perspective views of the scene. Digital holograms inherently have large information content and lossless coding of holographic data is rather inefficient due to the speckled nature of the interference fringes they contain. Lossy coding of still holograms and hologram sequences has shown promising results. By definition, lossy compression introduces errors in the reconstruction. In all of the previous studies, numerical metrics were used to measure the compression error and through it, the coding quality. Digital hologram reconstructions are highly speckled and the speckle pattern is very sensitive to data changes. Hence, numerical quality metrics can be misleading. For example, for low compression ratios, a numerically significant coding error can have visually negligible effects. Yet, in several cases, it is of high interest to know how much lossy compression can be achieved, while maintaining the reconstruction quality at visually lossless levels. Using an experimental threshold estimation method, the staircase algorithm, we determined the highest compression ratio that was not perceptible to human observers for objects compressed with Dirac and MPEG-4 compression methods. This level of compression can be regarded as the point below which compression is perceptually lossless although physically the compression is lossy. It was found that up to 4 to 7.5 fold compression can be obtained with the above methods without any perceptible change in the appearance of video sequences.
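
    As an aside, the "staircase" threshold estimation mentioned above is a standard adaptive psychophysical procedure; a generic up-down version is sketched below in Python. The 1-up/1-down step rule, the stopping criterion, and the respond() callback are illustrative assumptions, not the exact protocol used in the study.

        def staircase_threshold(respond, start=2.0, step=0.5, n_reversals=8):
            # respond(level) -> True if the observer perceives a difference at this
            # compression level. Raise the level after "not visible", lower it after
            # "visible"; the threshold estimate is the mean of the reversal points.
            level, direction, reversals = start, 0, []
            while len(reversals) < n_reversals:
                new_dir = -1 if respond(level) else +1
                if direction and new_dir != direction:
                    reversals.append(level)           # direction changed: record a reversal
                direction = new_dir
                level = max(step, level + new_dir * step)
            return sum(reversals) / len(reversals)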

  10. 46 CFR 147.60 - Compressed gases.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    46 CFR 147.60, Other Special Requirements for Particular Materials: Compressed gases. (a) Cylinder requirements. Cylinders used for containing hazardous ships' stores that are compressed gases must be— (1) Authorized for...

  11. Limits of Single-stage Compression in Centrifugal Superchargers for Aircraft

    NASA Technical Reports Server (NTRS)

    Kollmann, K

    1940-01-01

    The limits of single-stage compression in superchargers at the present state of development are determined by five factors: (1) the rotor material; (2) the formation of the flow; (3) the manufacture of double-shrouded rotors; (4) the bearing problem; and (5) the drive method.

  12. Compressing with dominant hand improves quality of manual chest compressions for rescuers who performed suboptimal CPR in manikins.

    PubMed

    Wang, Juan; Tang, Ce; Zhang, Lei; Gong, Yushun; Yin, Changlin; Li, Yongqin

    2015-07-01

    The question of whether the placement of the dominant hand against the sternum could improve the quality of manual chest compressions remains controversial. In the present study, we evaluated the influence of dominant vs nondominant hand positioning on the quality of conventional cardiopulmonary resuscitation (CPR) during prolonged basic life support (BLS) by rescuers who performed optimal and suboptimal compressions. Six months after completing a standard BLS training course, 101 medical students were instructed to perform adult single-rescuer BLS for 8 minutes on a manikin with a randomized hand position. Twenty-four hours later, the students placed the opposite hand in contact with the sternum while performing CPR. Those with an average compression depth of less than 50 mm were considered suboptimal. Participants who had performed suboptimal compressions were significantly shorter (170.2 ± 6.8 vs 174.0 ± 5.6 cm, P = .008) and lighter (58.9 ± 7.6 vs 66.9 ± 9.6 kg, P < .001) than those who performed optimal compressions. No significant differences in CPR quality were observed between dominant and nondominant hand placements for those who had an average compression depth greater than 50 mm. However, both the compression depth (49.7 ± 4.2 vs 46.5 ± 4.1 mm, P = .003) and proportion of chest compressions with an appropriate depth (47.6% ± 27.8% vs 28.0% ± 23.4%, P = .006) were remarkably higher when compressing the chest with the dominant hand against the sternum for those who performed suboptimal CPR. Chest compression quality significantly improved when the dominant hand was placed against the sternum for those who performed suboptimal compressions during conventional CPR. Copyright © 2015 Elsevier Inc. All rights reserved.

  13. Cluster compression algorithm: A joint clustering/data compression concept

    NASA Technical Reports Server (NTRS)

    Hilbert, E. E.

    1977-01-01

    The Cluster Compression Algorithm (CCA), which was developed to reduce costs associated with transmitting, storing, distributing, and interpreting LANDSAT multispectral image data is described. The CCA is a preprocessing algorithm that uses feature extraction and data compression to more efficiently represent the information in the image data. The format of the preprocessed data enables simply a look-up table decoding and direct use of the extracted features to reduce user computation for either image reconstruction, or computer interpretation of the image data. Basically, the CCA uses spatially local clustering to extract features from the image data to describe spectral characteristics of the data set. In addition, the features may be used to form a sequence of scalar numbers that define each picture element in terms of the cluster features. This sequence, called the feature map, is then efficiently represented by using source encoding concepts. Various forms of the CCA are defined and experimental results are presented to show trade-offs and characteristics of the various implementations. Examples are provided that demonstrate the application of the cluster compression concept to multi-spectral images from LANDSAT and other sources.
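
    A toy illustration of the cluster-plus-feature-map idea (not the CCA itself) is sketched below in Python; ordinary k-means stands in for the spatially local clustering, and the resulting index map is what a source/entropy coder would then represent efficiently.

        import numpy as np
        from sklearn.cluster import KMeans

        def cluster_compress(image, n_clusters=16):
            # image: (rows, cols, bands) multispectral array.
            pixels = image.reshape(-1, image.shape[-1]).astype(float)
            km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(pixels)
            features = km.cluster_centers_                      # spectral "features"
            feature_map = km.labels_.reshape(image.shape[:2])   # scalar index per pixel
            return features, feature_map.astype(np.uint8)

        def cluster_reconstruct(features, feature_map):
            # Look-up-table decoding: each pixel index is replaced by its centroid spectrum.
            return features[feature_map]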

  14. Lossy Wavefield Compression for Full-Waveform Inversion

    NASA Astrophysics Data System (ADS)

    Boehm, C.; Fichtner, A.; de la Puente, J.; Hanzich, M.

    2015-12-01

    We present lossy compression techniques, tailored to the inexact computation of sensitivity kernels, that significantly reduce the memory requirements of adjoint-based minimization schemes. Adjoint methods are a powerful tool to solve tomography problems in full-waveform inversion (FWI). Yet they face the challenge of massive memory requirements caused by the opposite directions of forward and adjoint simulations and the necessity to access both wavefields simultaneously during the computation of the sensitivity kernel. Thus, storage, I/O operations, and memory bandwidth become key topics in FWI. In this talk, we present strategies for the temporal and spatial compression of the forward wavefield. This comprises re-interpolation with coarse time steps and an adaptive polynomial degree of the spectral element shape functions. In addition, we predict the projection errors on a hierarchy of grids and re-quantize the residuals with an adaptive floating-point accuracy to improve the approximation. Furthermore, we use the first arrivals of adjoint waves to identify "shadow zones" that do not contribute to the sensitivity kernel at all. Updating and storing the wavefield within these shadow zones is skipped, which reduces memory requirements and computational costs at the same time. Compared to check-pointing, our approach has only a negligible computational overhead, utilizing the fact that a sufficiently accurate sensitivity kernel does not require a fully resolved forward wavefield. Furthermore, we use adaptive compression thresholds during the FWI iterations to ensure convergence. Numerical experiments on the reservoir scale and for the Western Mediterranean prove the high potential of this approach with an effective compression factor of 500-1000. Furthermore, it is computationally cheap and easy to integrate in both, finite-differences and finite-element wave propagation codes.
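
    A stripped-down sketch of the temporal part of such a scheme (coarse time sampling plus reduced floating-point precision, with linear re-interpolation on decompression) might look as follows; the spatial/polynomial-degree adaptation, residual re-quantization, and shadow-zone skipping described above are omitted, and the half-precision storage is an illustrative choice.

        import numpy as np

        def compress_wavefield(u, keep_every=4):
            # u: (nt, nx) forward-wavefield snapshots; keep every k-th one at half precision.
            return u[::keep_every].astype(np.float16)

        def decompress_wavefield(coarse, keep_every, nt):
            # Re-interpolate the missing time steps linearly, column by column.
            t_coarse = np.arange(0, nt, keep_every)
            t_full = np.arange(nt)
            cols = [np.interp(t_full, t_coarse, coarse[:, j].astype(float))
                    for j in range(coarse.shape[1])]
            return np.stack(cols, axis=1)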

  15. The retrograde delivery of adenovirus vector carrying the gene for brain-derived neurotrophic factor protects neurons and oligodendrocytes from apoptosis in the chronically compressed spinal cord of twy/twy mice.

    PubMed

    Uchida, Kenzo; Nakajima, Hideaki; Hirai, Takayuki; Yayama, Takafumi; Chen, Kebing; Guerrero, Alexander Rodriguez; Johnson, William Eustace; Baba, Hisatoshi

    2012-12-15

    The twy/twy mouse undergoes spontaneous chronic mechanical compression of the spinal cord; this in vivo model system was used to examine the effects of retrograde adenovirus (adenoviral vector [AdV])-mediated brain-derived neurotrophic factor (BDNF) gene delivery to spinal neural cells. To investigate the targeting and potential neuroprotective effect of retrograde AdV-mediated BDNF gene transfection in the chronically compressed spinal cord in terms of prevention of apoptosis of neurons and oligodendrocytes. Several studies have investigated the neuroprotective effects of neurotrophins, including BDNF, in spinal cord injury. However, no report has described the effects of retrograde neurotrophic factor gene delivery in compressed spinal cords, including gene targeting and the potential to prevent neural cell apoptosis. AdV-BDNF or AdV-LacZ (as a control gene) was injected into the bilateral sternomastoid muscles of 18-week old twy/twy mice for retrograde gene delivery via the spinal accessory motor neurons. Heterozygous Institute of Cancer Research mice (+/twy), which do not undergo spontaneous spinal compression, were used as a control for the effects of such compression on gene delivery. The localization and cell specificity of β-galactosidase expression (produced by LacZ gene transfection) and BDNF expression in the spinal cord were examined by coimmunofluorescence staining for neural cell markers (NeuN, neurons; reactive immunology protein, oligodendrocytes; glial fibrillary acidic protein, astrocytes; OX-42, microglia) 4 weeks after gene injection. The possible neuroprotection afforded by retrograde AdV-BDNF gene delivery versus AdV-LacZ-transfected control mice was assessed by scoring the prevalence of apoptotic cells (terminal deoxynucleotidyl transferase-mediated dUTP-biotin nick end labeling-positive cells) and immunoreactivity to active caspases -3, -8, and -9, p75, neurofilament 200 kD (NF), and for the oligodendroglial progenitor marker, NG2.

  16. Image compression-encryption scheme based on hyper-chaotic system and 2D compressive sensing

    NASA Astrophysics Data System (ADS)

    Zhou, Nanrun; Pan, Shumin; Cheng, Shan; Zhou, Zhihong

    2016-08-01

    Most image encryption algorithms based on low-dimensional chaos systems bear security risks and suffer encryption data expansion when adopting nonlinear transformation directly. To overcome these weaknesses and reduce the possible transmission burden, an efficient image compression-encryption scheme based on hyper-chaotic system and 2D compressive sensing is proposed. The original image is measured by the measurement matrices in two directions to achieve compression and encryption simultaneously, and then the resulting image is re-encrypted by the cycle shift operation controlled by a hyper-chaotic system. Cycle shift operation can change the values of the pixels efficiently. The proposed cryptosystem decreases the volume of data to be transmitted and simplifies the keys distribution simultaneously as a nonlinear encryption system. Simulation results verify the validity and the reliability of the proposed algorithm with acceptable compression and security performance.
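
    The measure-then-shift structure can be sketched as follows (Python/NumPy); a logistic map is used here as a stand-in for the hyper-chaotic system of the paper, and the Gaussian measurement matrices and parameters are illustrative assumptions only.

        import numpy as np

        def cs_compress_encrypt(img, ratio=0.5, key=0.37):
            # 2D compressive sensing: measure the image along both directions.
            rng = np.random.default_rng(1)
            n1, n2 = img.shape
            phi1 = rng.standard_normal((int(n1 * ratio), n1)) / np.sqrt(n1)
            phi2 = rng.standard_normal((int(n2 * ratio), n2)) / np.sqrt(n2)
            y = phi1 @ img @ phi2.T               # compression + first-stage encryption

            # Re-encryption: cyclic shift of each row driven by a chaotic key stream.
            x = key
            for i in range(y.shape[0]):
                x = 3.99 * x * (1.0 - x)          # logistic map stands in for hyper-chaos
                y[i] = np.roll(y[i], int(x * y.shape[1]))
            return y, (phi1, phi2)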

  17. MP3 compression of Doppler ultrasound signals.

    PubMed

    Poepping, Tamie L; Gill, Jeremy; Fenster, Aaron; Holdsworth, David W

    2003-01-01

    The effect of lossy MP3 compression on spectral parameters derived from Doppler ultrasound (US) signals was investigated. Compression was tested on signals acquired from two sources: 1. phase quadrature and 2. stereo audio directional output. A total of eleven 10-s acquisitions of Doppler US signal were collected from each source at three sites in a flow phantom. Doppler signals were digitized at 44.1 kHz and compressed using four grades of MP3 compression (in kilobits per second, kbps; compression ratios in brackets): 1400 kbps (uncompressed), 128 kbps (11:1), 64 kbps (22:1) and 32 kbps (44:1). Doppler spectra were characterized by peak velocity, mean velocity, spectral width, integrated power and ratio of spectral power between negative and positive velocities. The results suggest that MP3 compression of digital Doppler US signals is feasible at 128 kbps, with a resulting 11:1 compression ratio, without compromising clinically relevant information. Higher compression ratios led to significant differences for both signal sources when compared with the uncompressed signals. Copyright 2003 World Federation for Ultrasound in Medicine & Biology.

  18. A New Compression Method for FITS Tables

    NASA Technical Reports Server (NTRS)

    Pence, William; Seaman, Rob; White, Richard L.

    2010-01-01

    As the size and number of FITS binary tables generated by astronomical observatories increases, so does the need for a more efficient compression method to reduce the amount of disk space and network bandwidth required to archive and download the data tables. We have developed a new compression method for FITS binary tables that is modeled after the FITS tiled-image compression convention that has been in use for the past decade. Tests of this new method on a sample of FITS binary tables from a variety of current missions show that on average this new compression technique saves about 50% more disk space than simply compressing the whole FITS file with gzip. Other advantages of this method are (1) the compressed FITS table is itself a valid FITS table, (2) the FITS headers remain uncompressed, thus allowing rapid read and write access to the keyword values, and (3) in the common case where the FITS file contains multiple tables, each table is compressed separately and may be accessed without having to uncompress the whole file.

  19. The influence of place on weight gain during early childhood: a population-based, longitudinal study.

    PubMed

    Carter, Megan Ann; Dubois, Lise; Tremblay, Mark S; Taljaard, Monica

    2013-04-01

    The objective of this paper was to determine the influence of place factors on weight gain in a contemporary cohort of children while also adjusting for early life and individual/family social factors. Participants from the Québec Longitudinal Study of Child Development comprised the sample for analysis (n = 1,580). A mixed-effects regression analysis was conducted to determine the longitudinal relationship between these place factors and standardized BMI, from age 4 to 10 years. The average relationship with time was found to be quadratic (rate of weight gain increased over time). Neighborhood material deprivation was found to be positively related to weight gain. Social deprivation, social disorder, and living in a medium density area were inversely related, while no association was found for social cohesion. Early life factors and genetic proxies appeared to be important in explaining weight gain in this sample. This study suggests that residential environments may play a role in childhood weight change; however, pathways are likely to be complex and interacting and perhaps not as important as early life factors and genetic proxies. Further work is required to clarify these relationships.

  20. 76 FR 4338 - Research and Development Strategies for Compressed & Cryo-Compressed Hydrogen Storage Workshops

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-25

    ... DEPARTMENT OF ENERGY Research and Development Strategies for Compressed & Cryo- Compressed Hydrogen Storage Workshops AGENCY: Fuel Cell Technologies Program, Office of Energy Efficiency and Renewable Energy, Department of Energy. ACTION: Notice of meeting. SUMMARY: The Systems Integration group of...

  1. Multichannel Compression, Temporal Cues, and Audibility.

    ERIC Educational Resources Information Center

    Souza, Pamela E.; Turner, Christopher W.

    1998-01-01

    The effect of the reduction of the temporal envelope produced by multichannel compression on recognition was examined in 16 listeners with hearing loss, with particular focus on audibility of the speech signal. Multichannel compression improved speech recognition when superior audibility was provided by a two-channel compression system over linear…

  2. Compression-sensitive magnetic resonance elastography

    NASA Astrophysics Data System (ADS)

    Hirsch, Sebastian; Beyer, Frauke; Guo, Jing; Papazoglou, Sebastian; Tzschaetzsch, Heiko; Braun, Juergen; Sack, Ingolf

    2013-08-01

    Magnetic resonance elastography (MRE) quantifies the shear modulus of biological tissue to detect disease. Complementary to the shear elastic properties of tissue, the compression modulus may be a clinically useful biomarker because it is sensitive to tissue pressure and poromechanical interactions. In this work, we analyze the capability of MRE to measure volumetric strain and the dynamic bulk modulus (P-wave modulus) at a harmonic drive frequency commonly used in shear-wave-based MRE. Gel phantoms with various densities were created by introducing CO2-filled cavities to establish a compressible effective medium. The dependence of the effective medium's bulk modulus on phantom density was investigated via static compression tests, which confirmed theoretical predictions. The P-wave modulus of three compressible phantoms was calculated from volumetric strain measured by 3D wave-field MRE at 50 Hz drive frequency. The results demonstrate the MRE-derived volumetric strain and P-wave modulus to be sensitive to the compression properties of effective media. Since the reconstruction of the P-wave modulus requires third-order derivatives, noise remains critical, and P-wave moduli are systematically underestimated. Focusing on relative changes in the effective bulk modulus of tissue, compression-sensitive MRE may be useful for the noninvasive detection of diseases involving pathological pressure alterations such as hepatic hypertension or hydrocephalus.

  3. Effect of Kollidon VA®64 particle size and morphology as directly compressible excipient on tablet compression properties.

    PubMed

    Chaudhary, R S; Patel, C; Sevak, V; Chan, M

    2018-01-01

    The study evaluates the use of Kollidon VA®64 and a combination of Kollidon VA®64 with Kollidon VA®64 Fine as excipients in a direct-compression tableting process. The combination of the two grades is evaluated for capping, lamination and excessive friability. Interparticulate void space is higher for this excipient because of the hollow structure of the Kollidon VA®64 particles; during tablet compression, air remains trapped in the blend, giving poor compression and compromised physical properties of the tablets. The composition of Kollidon VA®64 and Kollidon VA®64 Fine is evaluated by design of experiments (DoE). Scanning electron microscopy (SEM) of the two grades of Kollidon VA®64 shows morphological differences between the coarse and fine grades. The tablet compression process is evaluated with a blend consisting entirely of Kollidon VA®64 and with two blends containing Kollidon VA®64 and Kollidon VA®64 Fine in ratios of 77:23 and 65:35. Statistical modeling of the DoE results identified the optimum composition for direct tablet compression as a 77:23 combination of Kollidon VA®64 and Kollidon VA®64 Fine. This combination, compressed with the predicted parameters from the statistical model (main compression force between 5 and 15 kN, pre-compression force between 2 and 3 kN, feeder speed fixed at 25 rpm, and compression speed of 45-49 rpm), produced tablets with hardness between 19 and 21 kp and no friability, capping, or lamination issues.

  4. Enhanced Performance of Streamline-Traced External-Compression Supersonic Inlets

    NASA Technical Reports Server (NTRS)

    Slater, John W.

    2015-01-01

    A computational design study was conducted to enhance the aerodynamic performance of streamline-traced, external-compression inlets for Mach 1.6. Compared to traditional external-compression, two-dimensional and axisymmetric inlets, streamline-traced inlets promise reduced cowl wave drag and sonic boom, but at the expense of reduced total pressure recovery and increased total pressure distortion. The current study explored a new parent flowfield for the streamline tracing and several variations of inlet design factors, including the axial displacement and angle of the subsonic cowl lip, the vertical placement of the engine axis, and the use of porous bleed in the subsonic diffuser. The performance was enhanced over that of an earlier streamline-traced inlet such as to increase the total pressure recovery and reduce total pressure distortion.

  5. GPU Lossless Hyperspectral Data Compression System

    NASA Technical Reports Server (NTRS)

    Aranki, Nazeeh I.; Keymeulen, Didier; Kiely, Aaron B.; Klimesh, Matthew A.

    2014-01-01

    Hyperspectral imaging systems onboard aircraft or spacecraft can acquire large amounts of data, putting a strain on limited downlink and storage resources. Onboard data compression can mitigate this problem but may require a system capable of a high throughput. In order to achieve a high throughput with a software compressor, a graphics processing unit (GPU) implementation of a compressor was developed targeting the current state-of-the-art GPUs from NVIDIA(R). The implementation is based on the fast lossless (FL) compression algorithm reported in "Fast Lossless Compression of Multispectral-Image Data" (NPO- 42517), NASA Tech Briefs, Vol. 30, No. 8 (August 2006), page 26, which operates on hyperspectral data and achieves excellent compression performance while having low complexity. The FL compressor uses an adaptive filtering method and achieves state-of-the-art performance in both compression effectiveness and low complexity. The new Consultative Committee for Space Data Systems (CCSDS) Standard for Lossless Multispectral & Hyperspectral image compression (CCSDS 123) is based on the FL compressor. The software makes use of the highly-parallel processing capability of GPUs to achieve a throughput at least six times higher than that of a software implementation running on a single-core CPU. This implementation provides a practical real-time solution for compression of data from airborne hyperspectral instruments.
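
    The core of such predictive compressors can be illustrated with a deliberately simplified adaptive inter-band predictor; this is not the FL or CCSDS-123 algorithm, and the single LMS-updated weight and missing entropy-coding stage are simplifications made for clarity.

        import numpy as np

        def interband_residuals(cube, mu=1e-8):
            # cube: (bands, rows, cols) hyperspectral cube.
            # Predict each band from the previous one with a scalar weight that adapts
            # (LMS-style) to keep residuals small; residuals would then be entropy coded.
            resid = np.empty(cube.shape, dtype=float)
            resid[0] = cube[0]                        # first band stored as-is
            w = 1.0
            for b in range(1, cube.shape[0]):
                ref = cube[b - 1].astype(float)
                err = cube[b].astype(float) - w * ref
                resid[b] = err
                w += mu * float(np.mean(err * ref))   # adapt the weight toward smaller error
            return resid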

  6. Changes in compressed neurons from dogs with acute and severe cauda equina constrictions following intrathecal injection of brain-derived neurotrophic factor-conjugated polymer nanoparticles☆

    PubMed Central

    Tan, Junming; Shi, Jiangang; Shi, Guodong; Liu, Yanling; Liu, Xiaohong; Wang, Chaoyang; Chen, Dechun; Xing, Shunming; Shen, Lianbing; Jia, Lianshun; Ye, Xiaojian; He, Hailong; Li, Jiashun

    2013-01-01

    This study established a dog model of acute multiple cauda equina constriction by experimental constriction injury (48 hours) of the lumbosacral central processes in dorsal root ganglia neurons. The repair effect of intrathecal injection of brain-derived neurotrophic factor with 15 mg encapsulated biodegradable poly(lactide-co-glycolide) nanoparticles on this injury was then analyzed. Dorsal root ganglion cells (L7) of all experimental dogs were analyzed using hematoxylin-eosin staining and immunohistochemistry at 1, 2 and 4 weeks following model induction. Intrathecal injection of brain-derived neurotrophic factor can relieve degeneration and inflammation, and elevate the expression of brain-derived neurotrophic factor in sensory neurons of compressed dorsal root ganglion. Simultaneously, intrathecal injection of brain-derived neurotrophic factor obviously improved neurological function in the dog model of acute multiple cauda equina constriction. Results verified that sustained intraspinal delivery of brain-derived neurotrophic factor encapsulated in biodegradable nanoparticles promoted the repair of histomorphology and function of neurons within the dorsal root ganglia in dogs with acute and severe cauda equina syndrome. PMID:25206593

  7. Lossless compression of otoneurological eye movement signals.

    PubMed

    Tossavainen, Timo; Juhola, Martti

    2002-12-01

    We studied the performance of several lossless compression algorithms on eye movement signals recorded in otoneurological balance and other physiological laboratories. Despite the wide use of these signals their compression has not been studied prior to our research. The compression methods were based on the common model of using a predictor to decorrelate the input and using an entropy coder to encode the residual. We found that these eye movement signals recorded at 400 Hz and with 13 bit amplitude resolution could losslessly be compressed with a compression ratio of about 2.7.
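
    The predictor-plus-entropy-coder model referred to above can be made concrete with a short sketch; the second-order polynomial predictor and the ideal-entropy estimate below are illustrative choices, not necessarily the ones the authors tested.

        import numpy as np

        def prediction_residuals(x):
            # x: integer eye-movement samples (e.g. 13-bit amplitude, 400 Hz).
            # Predictor x_hat[n] = 2*x[n-1] - x[n-2]; the residuals are what gets coded.
            x = np.asarray(x, dtype=np.int64)
            pred = np.zeros_like(x)
            pred[1] = x[0]
            pred[2:] = 2 * x[1:-1] - x[:-2]
            return x - pred

        def ideal_bits_per_sample(residuals):
            # Lower bound that an entropy coder can approach for this residual stream.
            _, counts = np.unique(residuals, return_counts=True)
            p = counts / counts.sum()
            return float(-(p * np.log2(p)).sum())

    With 13-bit samples, the best lossless ratio such a scheme could approach is roughly 13 divided by ideal_bits_per_sample(residuals).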

  8. Effect of uniaxial stress on electroluminescence, valence band modification, optical gain, and polarization modes in tensile strained p-AlGaAs/GaAsP/n-AlGaAs laser diode structures: Numerical calculations and experimental results

    NASA Astrophysics Data System (ADS)

    Bogdanov, E. V.; Minina, N. Ya.; Tomm, J. W.; Kissel, H.

    2012-11-01

    The effects of uniaxial compression in the [110] direction on energy-band structures, heavy and light hole mixing, optical matrix elements, and gain in laser diodes with a "light hole up" configuration of valence band levels in GaAsP quantum wells with different widths and phosphorus contents are numerically calculated. The development of light and heavy hole mixing caused by symmetry lowering, and the converging behavior of light and heavy hole levels in such quantum wells under uniaxial compression, are displayed. The light or heavy hole nature of each level is established for all considered values of uniaxial stress. The results of optical gain calculations for TM and TE polarization modes show that uniaxial compression leads to a significant increase of the TE mode and a minor decrease of the TM mode. Electroluminescence experiments were performed under uniaxial compression up to 5 kbar at 77 K on a model laser diode structure (p-AlxGa1-xAs/GaAs1-yPy/n-AlxGa1-xAs) with y = 0.16 and a quantum well width of 14 nm. They reveal a maximum blue shift of 27 meV in the electroluminescence spectra, which is well described by the calculated change of the optical gap, and an increase in intensity that is attributed to TE-mode enhancement. Numerical calculations and electroluminescence data indicate that uniaxial compression may be used for moderate tuning of the wavelength and the TM/TE intensity ratio.

  9. Excess noise in gain-guided amplifiers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deutsch, I.H.; Garrison, J.C.; Wright, E.M.

    1991-06-01

    A second-quantized theory of the radiation field is used to study the origin of the excess noise observed in gain-guided amplifiers. We find that the reduction of the signal-to-noise ratio is a function of the length of the amplifier, and thus the enhancement of the noise is a propagation effect arising from longitudinally inhomogeneous gain of the noise rather than from an excess of local spontaneous emission. We confirm this conclusion by showing that the microscopic rate of spontaneous emission into a given non-power-orthogonal cavity mode is not enhanced by the Petermann factor. In addition, we illustrate the difficulties associated with photon statistics for this and other open systems by showing that no acceptable family of photon-number operators corresponds to a set of non-power-orthogonal cavity modes.

  10. Widefield compressive multiphoton microscopy.

    PubMed

    Alemohammad, Milad; Shin, Jaewook; Tran, Dung N; Stroud, Jasper R; Chin, Sang Peter; Tran, Trac D; Foster, Mark A

    2018-06-15

    A single-pixel compressively sensed architecture is exploited to simultaneously achieve a 10× reduction in acquired data compared with the Nyquist rate, while alleviating limitations faced by conventional widefield temporal focusing microscopes due to scattering of the fluorescence signal. Additionally, we demonstrate an adaptive sampling scheme that further improves the compression and speed of our approach.

  11. Fast Lossless Compression of Multispectral-Image Data

    NASA Technical Reports Server (NTRS)

    Klimesh, Matthew

    2006-01-01

    An algorithm that effects fast lossless compression of multispectral-image data is based on low-complexity, proven adaptive-filtering algorithms. This algorithm is intended for use in compressing multispectral-image data aboard spacecraft for transmission to Earth stations. Variants of this algorithm could be useful for lossless compression of three-dimensional medical imagery and, perhaps, for compressing image data in general.

  12. Neighborhood factors associated with physical activity and adequacy of weight gain during pregnancy

    EPA Science Inventory

    Healthy diet, physical activity, smoking, and adequate weight gain are all associated with maternal health and fetal growth during pregnancy. Neighborhood characteristics have been associated with poor maternal and child health outcomes, yet conceptualization of potential mechani...

  13. Magnetized Plasma Compression for Fusion Energy

    NASA Astrophysics Data System (ADS)

    Degnan, James; Grabowski, Christopher; Domonkos, Matthew; Amdahl, David

    2013-10-01

    Magnetized Plasma Compression (MPC) uses magnetic inhibition of thermal conduction and enhancement of charged-particle product capture to greatly reduce the temporal and spatial compression required relative to un-magnetized inertial fusion (IFE)--to microseconds and centimeters versus nanoseconds and sub-millimeter scales. MPC greatly reduces the required confinement time relative to MFE--to microseconds versus minutes. Proof of principle can be demonstrated or refuted using high-current pulsed-power-driven compression of magnetized plasmas using magnetic-pressure-driven implosions of metal shells, known as imploding liners. This can be done at a cost of a few tens of millions of dollars. If demonstrated, it becomes worthwhile to develop repetitive implosion drivers. One approach is to use arrays of heavy ion beams for energy production, though with much less temporal and spatial compression than that envisioned for un-magnetized IFE, with larger compression targets, and with much less ambitious compression ratios. A less expensive, repetitive pulsed power driver, if feasible, would require engineering development for transient, rapidly replaceable transmission lines such as envisioned by Sandia National Laboratories. Supported by DOE-OFES.

  14. Gains and Losses of Transcription Factor Binding Sites in Saccharomyces cerevisiae and Saccharomyces paradoxus

    PubMed Central

    Schaefke, Bernhard; Wang, Tzi-Yuan; Wang, Chuen-Yi; Li, Wen-Hsiung

    2015-01-01

    Gene expression evolution occurs through changes in cis- or trans-regulatory elements or both. Interactions between transcription factors (TFs) and their binding sites (TFBSs) constitute one of the most important points where these two regulatory components intersect. In this study, we investigated the evolution of TFBSs in the promoter regions of different Saccharomyces strains and species. We divided the promoter of a gene into the proximal region and the distal region, which are defined, respectively, as the 200-bp region upstream of the transcription starting site and as the 200-bp region upstream of the proximal region. We found that the predicted TFBSs in the proximal promoter regions tend to be evolutionarily more conserved than those in the distal promoter regions. Additionally, Saccharomyces cerevisiae strains used in the fermentation of alcoholic drinks have experienced more TFBS losses than gains compared with strains from other environments (wild strains, laboratory strains, and clinical strains). We also showed that differences in TFBSs correlate with the cis component of gene expression evolution between species (comparing S. cerevisiae and its sister species Saccharomyces paradoxus) and within species (comparing two closely related S. cerevisiae strains). PMID:26220934

  15. Lossless compression of VLSI layout image data.

    PubMed

    Dai, Vito; Zakhor, Avideh

    2006-09-01

    We present a novel lossless compression algorithm called Context Copy Combinatorial Code (C4), which integrates the advantages of two very disparate compression techniques: context-based modeling and Lempel-Ziv (LZ) style copying. While the algorithm can be applied to many lossless compression applications, such as document image compression, our primary target application has been lossless compression of integrated circuit layout image data. These images contain a heterogeneous mix of data: dense repetitive data better suited to LZ-style coding, and less dense structured data, better suited to context-based encoding. As part of C4, we have developed a novel binary entropy coding technique called combinatorial coding which is simultaneously as efficient as arithmetic coding, and as fast as Huffman coding. Compression results show C4 outperforms JBIG, ZIP, BZIP2, and two-dimensional LZ, and achieves lossless compression ratios greater than 22 for binary layout image data, and greater than 14 for gray-pixel image data.

  16. Compressed/reconstructed test images for CRAF/Cassini

    NASA Technical Reports Server (NTRS)

    Dolinar, S.; Cheung, K.-M.; Onyszchuk, I.; Pollara, F.; Arnold, S.

    1991-01-01

    A set of compressed, then reconstructed, test images submitted to the Comet Rendezvous Asteroid Flyby (CRAF)/Cassini project is presented as part of its evaluation of near lossless high compression algorithms for representing image data. A total of seven test image files were provided by the project. The seven test images were compressed, then reconstructed with high quality (root mean square error of approximately one or two gray levels on an 8 bit gray scale), using discrete cosine transforms or Hadamard transforms and efficient entropy coders. The resulting compression ratios varied from about 2:1 to about 10:1, depending on the activity or randomness in the source image. This was accomplished without any special effort to optimize the quantizer or to introduce special postprocessing to filter the reconstruction errors. A more complete set of measurements, showing the relative performance of the compression algorithms over a wide range of compression ratios and reconstruction errors, shows that additional compression is possible at a small sacrifice in fidelity.

  17. Heating and Acceleration of Charged Particles by Weakly Compressible Magnetohydrodynamic Turbulence

    NASA Astrophysics Data System (ADS)

    Lynn, Jacob William

    We investigate the interaction between low-frequency magnetohydrodynamic (MHD) turbulence and a distribution of charged particles. Understanding this physics is central to understanding the heating of the solar wind, as well as the heating and acceleration of other collisionless plasmas. Our central method is to simulate weakly compressible MHD turbulence using the Athena code, along with a distribution of test particles which feel the electromagnetic fields of the turbulence. We also construct analytic models of transit-time damping (TTD), which results from the mirror force caused by compressible (fast or slow) MHD waves. Standard linear-theory models in the literature require an exact resonance between particle and wave velocities to accelerate particles. The models developed in this thesis go beyond standard linear theory to account for the fact that wave-particle interactions decorrelate over a short time, which allows particles with velocities off resonance to undergo acceleration and velocity diffusion. We use the test particle simulation results to calibrate and distinguish between different models for this velocity diffusion. Test particle heating is larger than the linear theory prediction, due to continued acceleration of particles with velocities off-resonance. We also include an artificial pitch-angle scattering to the test particle motion, representing the effect of high-frequency waves or velocity-space instabilities. For low scattering rates, we find that the scattering enforces isotropy and enhances heating by a modest factor. For much higher scattering rates, the acceleration is instead due to a non-resonant effect, as particles "frozen" into the fluid adiabatically gain and lose energy as eddies expand and contract. Lastly, we generalize our calculations to allow for relativistic test particles. Linear theory predicts that relativistic particles with velocities much higher than the speed of waves comprising the turbulence would undergo no

  18. Compression of rehydratable vegetables and cereals

    NASA Technical Reports Server (NTRS)

    Burns, E. E.

    1978-01-01

    Characteristics of freeze-dried compressed carrots, such as rehydration, volatile retention, and texture, were studied by relating histological changes to textural quality evaluation, and by determining the effects of storage temperature on freeze-dried compressed carrot bars. Results show that samples compressed with a high moisture content undergo only slight structural damage and rehydrate quickly. Cellular disruption as a result of compression at low moisture levels was the main reason for rehydration and texture differences. Products prepared from carrot cubes having 48% moisture compared favorably with a freshly cooked product in cohesiveness and elasticity, but were found slightly harder and more chewy.

  19. Memory hierarchy using row-based compression

    DOEpatents

    Loh, Gabriel H.; O'Connor, James M.

    2016-10-25

    A system includes a first memory and a device coupleable to the first memory. The device includes a second memory to cache data from the first memory. The second memory includes a plurality of rows, each row including a corresponding set of compressed data blocks of non-uniform sizes and a corresponding set of tag blocks. Each tag block represents a corresponding compressed data block of the row. The device further includes decompression logic to decompress data blocks accessed from the second memory. The device further includes compression logic to compress data blocks to be stored in the second memory.
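
    A software analogue of the row layout described in the claims (variable-size compressed blocks packed into one row, each described by a tag) is sketched below; the zlib codec and the tag fields are illustrative stand-ins for whatever hardware compression and tag format the patent actually specifies.

        import zlib
        from dataclasses import dataclass

        @dataclass
        class Tag:
            offset: int        # where the block's compressed bytes start within the row
            length: int        # how many compressed bytes the block occupies

        def pack_row(blocks):
            # Pack non-uniformly sized compressed blocks into one row plus its tags.
            payload, tags = bytearray(), []
            for blk in blocks:
                comp = zlib.compress(blk)
                tags.append(Tag(len(payload), len(comp)))
                payload.extend(comp)
            return tags, bytes(payload)

        def read_block(tags, payload, i):
            # A tag lookup is enough to locate and decompress one cached block.
            t = tags[i]
            return zlib.decompress(payload[t.offset:t.offset + t.length])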

  20. Corneal Staining and Hot Black Tea Compresses.

    PubMed

    Achiron, Asaf; Birger, Yael; Karmona, Lily; Avizemer, Haggay; Bartov, Elisha; Rahamim, Yocheved; Burgansky-Eliash, Zvia

    2017-03-01

    Warm compresses are widely touted as an effective treatment for ocular surface disorders. Black tea compresses are a common household remedy, although there is no evidence in the medical literature proving their effect and their use may lead to harmful side effects. To describe a case in which the application of black tea to an eye with a corneal epithelial defect led to anterior stromal discoloration; evaluate the prevalence of hot tea compress use; and analyze, in vitro, the discoloring effect of tea compresses on a model of a porcine eye. We assessed the prevalence of hot tea compresses in our community and explored the effect of warm tea compresses on the cornea when the corneal epithelium's integrity is disrupted. An in vitro experiment in which warm compresses were applied to 18 fresh porcine eyes was performed. In half the eyes a corneal epithelial defect was created and in the other half the epithelium was intact. Both groups were divided into subgroups of three eyes each and treated experimentally with warm black tea compresses, pure water, or chamomile tea compresses. We also performed a study in patients with a history of tea compress use. Brown discoloration of the anterior stroma appeared only in the porcine corneas that had an epithelial defect and were treated with black tea compresses. No other eyes from any group showed discoloration. Of the patients included in our survey, approximately 50% had applied some sort of tea ingredient as a solid compressor or as the hot liquid. An intact corneal epithelium serves as an effective barrier against tea-stain discoloration. Only when this layer is disrupted does the damage occur. Therefore, direct application of black tea (Camellia sinensis) to a cornea with an epithelial defect should be avoided.

  1. Adequacy of Prenatal Care and Gestational Weight Gain

    PubMed Central

    Crandell, Jamie L.; Jones-Vessey, Kathleen

    2016-01-01

    Abstract Background: The goal of prenatal care is to maximize health outcomes for a woman and her fetus. We examined how prenatal care is associated with meeting the 2009 Institute of Medicine (IOM) guidelines for gestational weight gain. Sample: The study used deidentified birth certificate data supplied by the North Carolina State Center for Health Statistics. The sample included 197,354 women (≥18 years) who delivered singleton full-term infants in 2011 and 2012. Methods: A generalized multinomial model was used to identify how adequate prenatal care was associated with the odds of gaining excessive or insufficient weight during pregnancy according to the 2009 IOM guidelines. The model adjusted for prepregnancy body size, sociodemographic factors, and birth weight. Results: A total of 197,354 women (≥18 years) delivered singleton full-term infants. The odds ratio (OR) for excessive weight gain was 2.44 (95% CI 2.37–2.50) in overweight and 2.33 (95% CI 2.27–2.40) in obese women compared with normal weight women. The OR for insufficient weight gain was 1.15 (95% CI 1.09–1.22) for underweight and 1.34 (95% CI 1.30–1.39) for obese women compared with normal weight women. Prenatal care at the inadequate or intermediate levels was associated with insufficient weight gain (OR: 1.32, 95% CI 1.27–1.38; OR: 1.15, 95% CI 1.09–1.21, respectively) compared with adequate prenatal care. Women with inadequate care were less likely to gain excessive weight (OR: 0.88, 95% CI 0.86–0.91). Conclusions: Whereas prenatal care was effective for preventing insufficient weight gain regardless of prepregnancy body size, educational background, and racial/ethnic group, there were no indications that adequate prenatal care was associated with reduced risk for excessive gestational weight gain. Further research is needed to improve prenatal care programs for preventing excess weight gain. PMID:26741198
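
    A minimal version of such a model can be fit with standard tools; the sketch below uses a multinomial logit with hypothetical column names (gain_category, care_adequacy, prepreg_bmi, age), since the actual birth-certificate coding is not described here, and omits the additional covariates the study adjusted for.

        import numpy as np
        import statsmodels.api as sm

        def fit_gain_model(df):
            # df columns (hypothetical): gain_category (0=insufficient, 1=adequate,
            # 2=excessive), care_adequacy, prepreg_bmi, age.
            y = df["gain_category"]
            X = sm.add_constant(df[["care_adequacy", "prepreg_bmi", "age"]])
            fit = sm.MNLogit(y, X).fit(disp=False)
            return np.exp(fit.params)      # odds ratios relative to the reference category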

  2. Shock-adiabatic to quasi-isentropic compression of warm dense helium up to 150 GPa

    NASA Astrophysics Data System (ADS)

    Zheng, J.; Chen, Q. F.; Gu, Y. J.; Li, J. T.; Li, Z. G.; Li, C. J.; Chen, Z. Y.

    2017-06-01

    Multiple reverberation compression can achieve higher pressure, higher temperature, but lower entropy. It is available to provide an important validation for the elaborate and wider planetary models and simulate the inertial confinement fusion capsule implosion process. In the work, we have developed the thermodynamic and optical properties of helium from shock-adiabatic to quasi-isentropic compression by means of a multiple reverberation technique. By this technique, the initial dense gaseous helium was compressed to high pressure and high temperature and entered the warm dense matter (WDM) region. The experimental equation of state (EOS) of WDM helium in the pressure-density-temperature (P-ρ -T) range of 1 -150 GPa , 0.1 -1.1 g c m-3 , and 4600-24 000 K were measured. The optical radiations emanating from the WDM helium were recorded, and the particle velocity profiles detecting from the sample/window interface were obtained successfully up to 10 times compression. The optical radiation results imply that dense He has become rather opaque after the 2nd compression with a density of about 0.3 g c m-3 and a temperature of about 1 eV. The opaque states of helium under multiple compression were analyzed by the particle velocity measurements. The multiple compression technique could efficiently enhanced the density and the compressibility, and our multiple compression ratios (ηi=ρi/ρ0,i =1 -10 ) of helium are greatly improved from 3.5 to 43 based on initial precompressed density (ρ0) . For the relative compression ratio (ηi'=ρi/ρi -1) , it increases with pressure in the lower density regime and reversely decreases in the higher density regime, and a turning point occurs at the 3rd and 4th compression states under the different loading conditions. This nonmonotonic evolution of the compression is controlled by two factors, where the excitation of internal degrees of freedom results in the increasing compressibility and the repulsive interactions between the

  3. Gain Scheduling for the Orion Launch Abort Vehicle Controller

    NASA Technical Reports Server (NTRS)

    McNamara, Sara J.; Restrepo, Carolina I.; Madsen, Jennifer M.; Medina, Edgar A.; Proud, Ryan W.; Whitley, Ryan J.

    2011-01-01

    One of NASA's challenges for the Orion vehicle is the control system design for the Launch Abort Vehicle (LAV), which is required to abort safely at any time during the atmospheric ascent portion of flight. The focus of this paper is the gain design and scheduling process for a controller that covers the wide range of vehicle configurations and flight conditions experienced during the full envelope of potential abort trajectories from the pad to exo-atmospheric flight. Several factors are taken into account in the automation process for tuning the gains, including the abort effectors, the environmental changes and the autopilot modes. Gain scheduling is accomplished using a linear quadratic regulator (LQR) approach for the decoupled, simplified linear model throughout the operational envelope in time, altitude and Mach number. The derived gains are then implemented into the full linear model for controller requirement validation. Finally, the gains are tested and evaluated in a non-linear simulation using the vehicle's flight software to ensure performance requirements are met. An overview of the LAV controller design and a description of the linear plant models are presented. Examples of the most significant challenges with the automation of the gain tuning process are then discussed. In conclusion, the paper will consider the lessons learned throughout the process, especially in regards to automation, and examine the usefulness of the gain scheduling tool and process developed as applicable to non-Orion vehicles.
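
    The gain-scheduling step itself reduces to solving an LQR problem at each point of a scheduling grid (e.g. Mach and altitude) and interpolating between the resulting gains in flight; a generic sketch with made-up plant matrices, not the Orion models, is given below.

        import numpy as np
        from scipy.linalg import solve_continuous_are

        def lqr_gain(A, B, Q, R):
            # K = R^-1 B^T P, with P solving the continuous algebraic Riccati equation.
            P = solve_continuous_are(A, B, Q, R)
            return np.linalg.solve(R, B.T @ P)

        def schedule_gains(plants, Q, R):
            # plants: {(mach, altitude): (A, B)} linearized models over the abort envelope.
            # The flight controller would interpolate these gains between grid points.
            return {point: lqr_gain(A, B, Q, R) for point, (A, B) in plants.items()}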

  4. Perceptual Image Compression in Telemedicine

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Ahumada, Albert J., Jr.; Eckstein, Miguel; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    The next era of space exploration, especially the "Mission to Planet Earth," will generate immense quantities of image data. For example, the Earth Observing System (EOS) is expected to generate in excess of one terabyte/day. NASA confronts a major technical challenge in managing this great flow of imagery: in collection, pre-processing, transmission to earth, archiving, and distribution to scientists at remote locations. Expected requirements in most of these areas clearly exceed current technology. Part of the solution to this problem lies in efficient image compression techniques. For much of this imagery, the ultimate consumer is the human eye. In this case image compression should be designed to match the visual capacities of the human observer. We have developed three techniques for optimizing image compression for the human viewer. The first consists of a formula, developed jointly with IBM and based on psychophysical measurements, that computes a DCT quantization matrix for any specified combination of viewing distance, display resolution, and display brightness. This DCT quantization matrix is used in most recent standards for digital image compression (JPEG, MPEG, CCITT H.261). The second technique optimizes the DCT quantization matrix for each individual image, based on the contents of the image. This is accomplished by means of a model of visual sensitivity to compression artifacts. The third technique extends the first two techniques to the realm of wavelet compression. Together these three techniques will allow systematic perceptual optimization of image compression in NASA imaging systems. Many of the image management challenges faced by NASA are mirrored in the field of telemedicine. Here too there are severe demands for transmission and archiving of large image databases, and the imagery is ultimately used primarily by human observers, such as radiologists. In this presentation I will describe some of our preliminary explorations of the applications
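
    The flavor of the first technique can be conveyed by a toy quantization-matrix generator driven by a contrast-sensitivity curve; the specific CSF shape and constants below are placeholders, not the published formula.

        import numpy as np

        def dct_quant_matrix(view_dist_cm=60.0, dpi=96.0, n=8):
            # Spatial frequency (cycles/degree) of each DCT basis function for this
            # viewing distance and display resolution.
            cyc_per_px = np.arange(n) / (2.0 * n)
            px_per_deg = dpi * (view_dist_cm / 2.54) * np.tan(np.radians(1.0))
            f = np.hypot.outer(cyc_per_px, cyc_per_px) * px_per_deg
            # Toy contrast-sensitivity function: peaks at a few cyc/deg, falls off at high f.
            csf = 200.0 * (0.2 + f) * np.exp(-0.3 * f)
            # Frequencies the eye is less sensitive to tolerate coarser quantization steps.
            return np.clip(np.round(255.0 / np.maximum(csf, 1.0)), 1, 255)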

  5. Friction of Compression-ignition Engines

    NASA Technical Reports Server (NTRS)

    Moore, Charles S; Collins, John H , Jr

    1936-01-01

    The cost in mean effective pressure of generating air flow in the combustion chambers of single-cylinder compression-ignition engines was determined for the prechamber and the displaced-piston types of combustion chamber. For each type a wide range of air-flow quantities, speeds, and boost pressures was investigated. Supplementary tests were made to determine the effect of lubricating-oil temperature, cooling-water temperature, and compression ratio on the friction mean effective pressure of the single-cylinder test engine. Friction curves are included for two 9-cylinder, radial, compression-ignition aircraft engines. The results indicate that generating the optimum forced air flow increased the motoring losses approximately 5 pounds per square inch mean effective pressure regardless of chamber type or engine speed. With a given type of chamber, the rate of increase in friction mean effective pressure with engine speed is independent of the air-flow speed. The effect of boost pressure on the friction cannot be predicted because the friction was decreased, unchanged, or increased depending on the combustion-chamber type and design details. High compression ratio accounts for approximately 5 pounds per square inch mean effective pressure of the friction of these single-cylinder compression-ignition engines. The single-cylinder test engines used in this investigation had a much higher friction mean effective pressure than conventional aircraft engines or than the 9-cylinder, radial, compression-ignition engines tested so that performance should be compared on an indicated basis.

  6. Optimal Compression Methods for Floating-point Format Images

    NASA Technical Reports Server (NTRS)

    Pence, W. D.; White, R. L.; Seaman, R.

    2009-01-01

    We report on the results of a comparison study of different techniques for compressing FITS images that have floating-point (real*4) pixel values. Standard file compression methods like GZIP are generally ineffective in this case (with compression ratios only in the range 1.2 - 1.6), so instead we use a technique of converting the floating-point values into quantized scaled integers which are compressed using the Rice algorithm. The compressed data stream is stored in FITS format using the tiled-image compression convention. This is technically a lossy compression method, since the pixel values are not exactly reproduced; however, all the significant photometric and astrometric information content of the image can be preserved while still achieving file compression ratios in the range of 4 to 8. We also show that introducing dithering, or randomization, when assigning the quantized pixel-values can significantly improve the photometric and astrometric precision in the stellar images in the compressed file without adding noise. We quantify our results by comparing the stellar magnitudes and positions as measured in the original uncompressed image to those derived from the same image after applying successively greater amounts of compression.
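
    The quantize-with-dither idea can be sketched in a few lines. The following is a simplified illustration, not the FITS tiled-image or fpack implementation; the quantization step and seed handling are illustrative assumptions.

        # Simplified quantize-with-dither sketch (not the FITS tiled-image code):
        # floating-point pixels become scaled integers, with a per-pixel random
        # offset applied before rounding and removed on restore, which keeps the
        # quantization error decorrelated from the signal.
        import numpy as np

        def quantize_dither(pixels, q_step, seed=0):
            dither = np.random.default_rng(seed).random(pixels.shape)   # uniform [0, 1)
            ints = np.floor(pixels / q_step + dither).astype(np.int64)
            return ints, dither

        def dequantize(ints, dither, q_step):
            # a real format would regenerate the dither from a stored seed
            return (ints - dither + 0.5) * q_step

        data = np.random.default_rng(1).normal(100.0, 3.0, (64, 64)).astype(np.float32)
        q_step = 3.0 / 16.0                       # e.g. a fraction of the noise sigma
        ints, dither = quantize_dither(data, q_step)
        restored = dequantize(ints, dither, q_step)
        print("max abs error:", np.abs(restored - data).max())   # below q_step / 2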

  7. LOW-VELOCITY COMPRESSIBLE FLOW THEORY

    EPA Science Inventory

    The widespread application of incompressible flow theory dominates low-velocity fluid dynamics, virtually preventing research into compressible low-velocity flow dynamics. Yet, compressible solutions to simple and well-defined flow problems and a series of contradictions in incom...

  8. Subjective evaluation of mobile 3D video content: depth range versus compression artifacts

    NASA Astrophysics Data System (ADS)

    Jumisko-Pyykkö, Satu; Haustola, Tomi; Boev, Atanas; Gotchev, Atanas

    2011-02-01

    Mobile 3D television is a new form of media experience, which combines the freedom of mobility with the greater realism of presenting visual scenes in 3D. Achieving this combination is a challenging task, as a greater viewing experience has to be achieved with the limited resources of the mobile delivery channel, such as limited bandwidth and a power-constrained handheld player. This challenge creates a need for tight optimization of the overall mobile 3DTV system. The depth range and the compression artifacts present in the played 3D video are two major factors that influence the viewer's subjective quality of experience and satisfaction. The primary goal of this study has been to examine the influence of varying depth and compression artifacts on the subjective quality of experience for mobile 3D video content. In addition, the influence of the studied variables on simulator sickness symptoms has been studied, and a vocabulary-based descriptive evaluation of quality of experience has been conducted for a subset of variables in order to understand the perceptual characteristics in detail. In the experiment, 30 participants evaluated the overall quality of different 3D video contents with varying depth ranges, compressed with varying quantization parameters. The test video content was presented on a portable autostereoscopic LCD display with a horizontal double-density pixel arrangement. The results of the psychometric study indicate that compression artifacts are a dominant factor determining the quality of experience compared to varying depth range. More specifically, content with strong compression was rejected by the viewers and deemed unacceptable. The results of the descriptive study confirm the dominance of visible spatial artifacts, along with the added value of depth for artifact-free content. The level of visual discomfort was judged not to be objectionable.

  9. Dogmas and controversies in compression therapy: report of an International Compression Club (ICC) meeting, Brussels, May 2011.

    PubMed

    Flour, Mieke; Clark, Michael; Partsch, Hugo; Mosti, Giovanni; Uhl, Jean-Francois; Chauveau, Michel; Cros, Francois; Gelade, Pierre; Bender, Dean; Andriessen, Anneke; Schuren, Jan; Cornu-Thenard, André; Arkans, Ed; Milic, Dragan; Benigni, Jean-Patrick; Damstra, Robert; Szolnoky, Gyozo; Schingale, Franz

    2013-10-01

    The International Compression Club (ICC) is a partnership between academics, clinicians and industry focused upon understanding the role of compression in the management of different clinical conditions. The ICC meets regularly, and from these meetings has produced a series of eight consensus publications upon topics ranging from evidence-based compression to compression trials for arm lymphoedema. All of the current consensus documents can be accessed on the ICC website (http://www.icc-compressionclub.com/index.php). In May 2011, the ICC met in Brussels during the European Wound Management Association (EWMA) annual conference. With almost 50 members in attendance, the day-long ICC meeting challenged a series of dogmas and myths that exist when considering compression therapies. In preparation for a discussion on beliefs surrounding compression, a forum was established on the ICC website where presenters were able to display a summary of their thoughts upon each dogma to be discussed during the meeting. Members of the ICC could then provide comments on each topic, thereby widening the discussion to the entire membership of the ICC rather than simply those who were attending the EWMA conference. This article presents an extended report of the issues that were discussed, with each dogma covered in a separate section. The ICC discussed 12 'dogmas', with areas 1 through 7 dedicated to materials and application techniques used to apply compression and the remaining topics (8 through 12) related to the indications for using compression. © 2012 The Authors. International Wound Journal © 2012 John Wiley & Sons Ltd and Medicalhelplines.com Inc.

  10. A Two-Stage Reconstruction Processor for Human Detection in Compressive Sensing CMOS Radar.

    PubMed

    Tsao, Kuei-Chi; Lee, Ling; Chu, Ta-Shun; Huang, Yuan-Hao

    2018-04-05

    Complementary metal-oxide-semiconductor (CMOS) radar has recently gained much research attention because small and low-power CMOS devices are very suitable for deploying sensing nodes in a low-power wireless sensing system. This study focuses on the signal processing of a wireless CMOS impulse radar system that can detect humans and objects in a home-care internet-of-things sensing system. The challenges of low-power CMOS radar systems are the weakness of human signals and the high computational complexity of the target detection algorithm. A compressive sensing-based detection algorithm can relax the computational costs by avoiding the use of matched filters and reducing the analog-to-digital converter bandwidth requirement. Orthogonal matching pursuit (OMP) is one of the popular signal reconstruction algorithms for compressive sensing radar; however, its complexity is still very high because the high resolution of human respiration leads to high-dimensional signal reconstruction. Thus, this paper proposes a two-stage reconstruction algorithm for compressive sensing radar. The proposed algorithm not only reduces complexity by 75% relative to the OMP algorithm but also achieves better positioning performance, especially in noisy environments. The algorithm was also designed and implemented on a Virtex-7 FPGA chip (Xilinx, San Jose, CA, USA). The proposed reconstruction processor can support a 256 × 13 real-time radar image display with a throughput of 28.2 frames per second.
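
    For reference, a compact textbook implementation of orthogonal matching pursuit (OMP), the baseline reconstruction algorithm named in the record, is sketched below; it is not the proposed two-stage processor, and the matrix sizes and sparsity are arbitrary illustrations.

        # Generic textbook orthogonal matching pursuit, not the proposed two-stage
        # processor; matrix sizes and sparsity below are arbitrary illustrations.
        import numpy as np

        def omp(A, y, sparsity):
            """Recover a sparse x from y = A @ x, A being the m x n measurement matrix."""
            residual, support = y.copy(), []
            x = np.zeros(A.shape[1])
            for _ in range(sparsity):
                idx = int(np.argmax(np.abs(A.T @ residual)))   # most correlated column
                if idx not in support:
                    support.append(idx)
                coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                residual = y - A[:, support] @ coef            # re-fit, update residual
            x[support] = coef
            return x

        rng = np.random.default_rng(1)
        n, m, k = 256, 64, 5
        A = rng.standard_normal((m, n)) / np.sqrt(m)
        x_true = np.zeros(n)
        x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        x_hat = omp(A, A @ x_true, k)
        print("reconstruction error:", np.linalg.norm(x_hat - x_true))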

  11. A Two-Stage Reconstruction Processor for Human Detection in Compressive Sensing CMOS Radar

    PubMed Central

    Tsao, Kuei-Chi; Lee, Ling; Chu, Ta-Shun

    2018-01-01

    Complementary metal-oxide-semiconductor (CMOS) radar has recently gained much research attention because small and low-power CMOS devices are very suitable for deploying sensing nodes in a low-power wireless sensing system. This study focuses on the signal processing of a wireless CMOS impulse radar system that can detect humans and objects in a home-care internet-of-things sensing system. The challenges of low-power CMOS radar systems are the weakness of human signals and the high computational complexity of the target detection algorithm. A compressive sensing-based detection algorithm can relax the computational costs by avoiding the use of matched filters and reducing the analog-to-digital converter bandwidth requirement. Orthogonal matching pursuit (OMP) is one of the popular signal reconstruction algorithms for compressive sensing radar; however, its complexity is still very high because the high resolution of human respiration leads to high-dimensional signal reconstruction. Thus, this paper proposes a two-stage reconstruction algorithm for compressive sensing radar. The proposed algorithm not only reduces complexity by 75% relative to the OMP algorithm but also achieves better positioning performance, especially in noisy environments. The algorithm was also designed and implemented on a Virtex-7 FPGA chip (Xilinx, San Jose, CA, USA). The proposed reconstruction processor can support a 256×13 real-time radar image display with a throughput of 28.2 frames per second. PMID:29621170

  12. Real-Time Aggressive Image Data Compression

    DTIC Science & Technology

    1990-03-31

    implemented with higher degrees of modularity, concurrency, and higher levels of machine intelligence, thereby providing higher data-throughput rates... Project Summary. Project Title: Real-Time Aggressive Image Data Compression. Principal Investigators: Dr. Yih-Fang Huang and Dr. Ruey-wen Liu. Institution... Summary: The objective of the proposed research is to develop reliable algorithms that can achieve aggressive image data compression (with a compression

  13. Transverse compression of PPTA fibers

    NASA Astrophysics Data System (ADS)

    Singletary, James

    2000-07-01

    Results of single transverse compression testing of PPTA and PIPD fibers, using a novel test device, are presented and discussed. In the tests, short lengths of single fibers are compressed between two parallel, stiff platens. The fiber elastic deformation is analyzed as a Hertzian contact problem. The inelastic deformation is analyzed by elastic-plastic FE simulation and by laser-scanning confocal microscopy of the compressed fibers ex post facto. The results obtained are compared to those in the literature and to the theoretical predictions of PPTA fiber transverse elasticity based on PPTA crystal elasticity.

  14. Data Compression Using the Dictionary Approach Algorithm

    DTIC Science & Technology

    1990-12-01

    Compression Technique: The LZ77 is an OPM/L data compression scheme suggested by Ziv and Lempel. A slightly modified... June 1984. 12. Witten I. H., Neal R. M. and Cleary J. G., Arithmetic Coding for Data Compression, Communications of the ACM, June 1987. 13. Ziv J. and Lempel A... AD-A242 539, Naval Postgraduate School, Monterey, California. Thesis: Data Compression Using the Dictionary Approach Algorithm
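
    Since the record names the LZ77 sliding-window (dictionary) scheme of Ziv and Lempel, a minimal illustrative encoder/decoder is sketched below; practical implementations add hashed match search and entropy-code the (offset, length, literal) triples, and the window and match-length limits here are arbitrary.

        # Minimal LZ77-style codec illustrating the sliding-window dictionary idea;
        # real implementations add hashed match search and entropy-code the triples.
        def lz77_encode(data: bytes, window: int = 4096, max_len: int = 18):
            i, out = 0, []
            while i < len(data):
                best_off, best_len = 0, 0
                for j in range(max(0, i - window), i):
                    length = 0
                    # extend the match, always leaving at least one literal byte
                    while (length < max_len and i + length < len(data) - 1
                           and data[j + length] == data[i + length]):
                        length += 1
                    if length > best_len:
                        best_off, best_len = i - j, length
                out.append((best_off, best_len, data[i + best_len]))
                i += best_len + 1
            return out

        def lz77_decode(tokens):
            out = bytearray()
            for off, length, literal in tokens:
                for _ in range(length):
                    out.append(out[-off])      # copy from the already-decoded window
                out.append(literal)
            return bytes(out)

        msg = b"abracadabra abracadabra abracadabra"
        tokens = lz77_encode(msg)
        assert lz77_decode(tokens) == msg
        print(len(msg), "bytes ->", len(tokens), "triples")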

  15. Application of improved technology to a preprototype vapor compression distillation /VCD/ water recovery subsystem

    NASA Technical Reports Server (NTRS)

    Johnson, K. L.; Reysa, R. P.; Fricks, D. H.

    1981-01-01

    Vapor compression distillation (VCD) is considered the most efficient water recovery process for spacecraft application. This paper reports on a preprototype VCD which has undergone the most extensive operational and component development testing of any VCD subsystem to date. The component development effort was primarily aimed at eliminating corrosion and the need for lubrication, upgrading electronics, and substituting nonmetallics in key rotating components. The VCD evolution is documented by test results on specific design and/or materials changes. Innovations worthy of further investigation and additional testing are summarized for future VCD subsystem development reference. Conclusions on experience gained are presented.

  16. A hybrid data compression approach for online backup service

    NASA Astrophysics Data System (ADS)

    Wang, Hua; Zhou, Ke; Qin, MingKang

    2009-08-01

    With the popularity of SaaS (Software as a Service), backup service is becoming a hot topic in storage applications. Because of the large number of backup users, reducing the massive data load is a key problem for the system designer. Data compression provides a good solution. Traditional data compression applications tend to adopt a single method, which has limitations in some respects. For example, data stream compression can only realize intra-file compression, de-duplication is used to eliminate inter-file redundant data, and neither alone meets the compression-efficiency needs of backup service software. This paper proposes a novel hybrid compression approach, which includes two levels: global compression and block compression. The former eliminates redundant inter-file copies across different users; the latter adopts data stream compression technology to eliminate intra-file redundancy. Several compression algorithms were adopted to measure the compression ratio and CPU time. The adaptability of using different algorithms in particular situations is also analyzed. The performance analysis shows that a great improvement is made through the hybrid compression policy.
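
    A hedged sketch of the two-level idea described above follows: content-hash de-duplication of chunks across files and users ("global compression") and conventional stream compression of each unique chunk ("block compression"). The BackupStore class, chunk size, hash and zlib back end are illustrative choices, not those of the paper.

        # Two-level sketch: hash-based chunk de-duplication across files/users
        # ("global"), then zlib on each unique chunk ("block").  Chunk size, hash and
        # codec are illustrative choices only.
        import hashlib, zlib

        class BackupStore:
            def __init__(self, chunk_size=4096):
                self.chunk_size = chunk_size
                self.chunks = {}                 # sha256 digest -> compressed chunk

            def put(self, data: bytes):
                """Store a file; return the chunk digests that reference it."""
                refs = []
                for i in range(0, len(data), self.chunk_size):
                    chunk = data[i:i + self.chunk_size]
                    digest = hashlib.sha256(chunk).hexdigest()
                    if digest not in self.chunks:            # inter-file de-duplication
                        self.chunks[digest] = zlib.compress(chunk, 6)   # intra-chunk compression
                    refs.append(digest)
                return refs

            def get(self, refs):
                return b"".join(zlib.decompress(self.chunks[d]) for d in refs)

        store = BackupStore()
        user_a = b"shared corporate template " * 400 + b"notes from user A"
        user_b = b"shared corporate template " * 400 + b"different notes from user B"
        refs_a, refs_b = store.put(user_a), store.put(user_b)
        assert store.get(refs_a) == user_a and store.get(refs_b) == user_b
        print("references:", len(refs_a) + len(refs_b), "unique stored chunks:", len(store.chunks))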

  17. Computational Simulation of Breast Compression Based on Segmented Breast and Fibroglandular Tissues on Magnetic Resonance Images

    PubMed Central

    Shih, Tzu-Ching; Chen, Jeon-Hor; Liu, Dongxu; Nie, Ke; Sun, Lizhi; Lin, Muqing; Chang, Daniel; Nalcioglu, Orhan; Su, Min-Ying

    2010-01-01

    This study presents a finite element-based computational model to simulate the three-dimensional deformation of the breast and the fibroglandular tissues under compression. The simulation was based on 3D MR images of the breast, and the craniocaudal and mediolateral oblique compression as used in mammography was applied. The geometry of the whole breast and the segmented fibroglandular tissues within the breast were reconstructed as triangular meshes using the Avizo® 6.0 software package. Due to the large deformation in breast compression, a finite element model was used to simulate the non-linear elastic tissue deformation under compression, using the MSC.Marc® software package. The model was tested in 4 cases. The results showed a higher displacement along the compression direction compared to the other two directions. The compressed breast thickness in these 4 cases at 60% compression ratio was in the range of 5-7 cm, which is the typical range of thickness in mammography. The projection of the fibroglandular tissue mesh at 60% compression ratio was compared to the corresponding mammograms of two women, and they demonstrated spatially matched distributions. However, since the compression was based on MRI, which has much coarser spatial resolution than the in-plane resolution of mammography, this method is unlikely to generate a synthetic mammogram close to the clinical quality. Whether this model may be used to understand the technical factors that may impact the variations in breast density measurements needs further investigation. Since this method can be applied to simulate compression of the breast at different views and different compression levels, another possible application is to provide a tool for comparing breast images acquired using different imaging modalities – such as MRI, mammography, whole breast ultrasound, and molecular imaging – that are performed using different body positions and different compression conditions. PMID:20601773

  18. Computational simulation of breast compression based on segmented breast and fibroglandular tissues on magnetic resonance images.

    PubMed

    Shih, Tzu-Ching; Chen, Jeon-Hor; Liu, Dongxu; Nie, Ke; Sun, Lizhi; Lin, Muqing; Chang, Daniel; Nalcioglu, Orhan; Su, Min-Ying

    2010-07-21

    This study presents a finite element-based computational model to simulate the three-dimensional deformation of a breast and fibroglandular tissues under compression. The simulation was based on 3D MR images of the breast, and craniocaudal and mediolateral oblique compression, as used in mammography, was applied. The geometry of the whole breast and the segmented fibroglandular tissues within the breast were reconstructed as triangular meshes using the Avizo 6.0 software package. Due to the large deformation in breast compression, a finite element model was used to simulate the nonlinear elastic tissue deformation under compression, using the MSC.Marc software package. The model was tested in four cases. The results showed a higher displacement along the compression direction compared to the other two directions. The compressed breast thickness in these four cases at a compression ratio of 60% was in the range of 5-7 cm, which is a typical range of thickness in mammography. The projection of the fibroglandular tissue mesh at a compression ratio of 60% was compared to the corresponding mammograms of two women, and they demonstrated spatially matched distributions. However, since the compression was based on magnetic resonance imaging (MRI), which has much coarser spatial resolution than the in-plane resolution of mammography, this method is unlikely to generate a synthetic mammogram close to the clinical quality. Whether this model may be used to understand the technical factors that may impact the variations in breast density needs further investigation. Since this method can be applied to simulate compression of the breast at different views and different compression levels, another possible application is to provide a tool for comparing breast images acquired using different imaging modalities--such as MRI, mammography, whole breast ultrasound and molecular imaging--that are performed using different body positions and under different compression conditions.

  19. ERGC: an efficient referential genome compression algorithm

    PubMed Central

    Saha, Subrata; Rajasekaran, Sanguthevar

    2015-01-01

    Motivation: Genome sequencing has become faster and more affordable. Consequently, the number of available complete genomic sequences is increasing rapidly. As a result, the cost to store, process, analyze and transmit the data is becoming a bottleneck for research and future medical applications. So, the need for devising efficient data compression and data reduction techniques for biological sequencing data is growing by the day. Although there exists a number of standard data compression algorithms, they are not efficient in compressing biological data. These generic algorithms do not exploit some inherent properties of the sequencing data while compressing. To exploit statistical and information-theoretic properties of genomic sequences, we need specialized compression algorithms. Five different next-generation sequencing data compression problems have been identified and studied in the literature. We propose a novel algorithm for one of these problems known as reference-based genome compression. Results: We have done extensive experiments using five real sequencing datasets. The results on real genomes show that our proposed algorithm is indeed competitive and performs better than the best known algorithms for this problem. It achieves compression ratios that are better than those of the currently best performing algorithms. The time to compress and decompress the whole genome is also very promising. Availability and implementation: The implementations are freely available for non-commercial purposes. They can be downloaded from http://engr.uconn.edu/∼rajasek/ERGC.zip. Contact: rajasek@engr.uconn.edu PMID:26139636
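
    The reference-based idea can be illustrated with a toy encoder that describes a target sequence as copy operations against a reference plus literal bases. This is only a greedy sketch, not the ERGC algorithm; the k-mer size and function names are arbitrary choices for the example.

        # Toy reference-based encoder (not ERGC): the target is described as copy
        # operations against the reference plus literal bases; the k-mer size is an
        # arbitrary choice.
        def ref_encode(target: str, reference: str, k: int = 16):
            index = {}
            for i in range(len(reference) - k + 1):             # index reference k-mers
                index.setdefault(reference[i:i + k], i)
            ops, i = [], 0
            while i < len(target):
                pos = index.get(target[i:i + k])
                if pos is None:
                    ops.append(("lit", target[i]))              # unmatched base
                    i += 1
                else:
                    length = k                                  # greedily extend the match
                    while (i + length < len(target) and pos + length < len(reference)
                           and target[i + length] == reference[pos + length]):
                        length += 1
                    ops.append(("copy", pos, length))
                    i += length
            return ops

        def ref_decode(ops, reference: str) -> str:
            return "".join(op[1] if op[0] == "lit" else reference[op[1]:op[1] + op[2]]
                           for op in ops)

        reference = "ACGT" * 2500
        target = reference[:4000] + "TTTT" + reference[4000:]   # one small insertion
        ops = ref_encode(target, reference)
        assert ref_decode(ops, reference) == target
        print(len(target), "bases described by", len(ops), "operations")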

  20. Multichannel Compressive Sensing MRI Using Noiselet Encoding

    PubMed Central

    Pawar, Kamlesh; Egan, Gary; Zhang, Jingxin

    2015-01-01

    The incoherence between measurement and sparsifying transform matrices and the restricted isometry property (RIP) of the measurement matrix are two of the key factors in determining the performance of compressive sensing (CS). In CS-MRI, the randomly under-sampled Fourier matrix is used as the measurement matrix and the wavelet transform is usually used as the sparsifying transform matrix. However, the incoherence between the randomly under-sampled Fourier matrix and the wavelet matrix is not optimal, which can deteriorate the performance of CS-MRI. Using the mathematical result that noiselets are maximally incoherent with wavelets, this paper introduces the noiselet unitary bases as the measurement matrix to improve the incoherence and RIP in CS-MRI. Based on an empirical RIP analysis that compares the multichannel noiselet and multichannel Fourier measurement matrices in CS-MRI, we propose a multichannel compressive sensing (MCS) framework to take advantage of the multichannel data acquisition used in MRI scanners. Simulations are presented in the MCS framework to compare the performance of noiselet encoding reconstructions and Fourier encoding reconstructions at different acceleration factors. The comparisons indicate that the multichannel noiselet measurement matrix has better RIP than its Fourier counterpart, and that noiselet encoded MCS-MRI outperforms Fourier encoded MCS-MRI in preserving image resolution and can achieve higher acceleration factors. To demonstrate the feasibility of the proposed noiselet encoding scheme, a pulse sequence with tailored spatially selective RF excitation pulses was designed and implemented on a 3T scanner to acquire the data in the noiselet domain from a phantom and a human brain. The results indicate that noiselet encoding preserves image resolution better than Fourier encoding. PMID:25965548

  1. Compression and fast retrieval of SNP data

    PubMed Central

    Sambo, Francesco; Di Camillo, Barbara; Toffolo, Gianna; Cobelli, Claudio

    2014-01-01

    Motivation: The increasing interest in rare genetic variants and epistatic genetic effects on complex phenotypic traits is currently pushing genome-wide association study design towards datasets of increasing size, both in the number of studied subjects and in the number of genotyped single nucleotide polymorphisms (SNPs). This, in turn, is leading to a compelling need for new methods for compression and fast retrieval of SNP data. Results: We present a novel algorithm and file format for compressing and retrieving SNP data, specifically designed for large-scale association studies. Our algorithm is based on two main ideas: (i) compress linkage disequilibrium blocks in terms of differences with a reference SNP and (ii) compress reference SNPs exploiting information on their call rate and minor allele frequency. Tested on two SNP datasets and compared with several state-of-the-art software tools, our compression algorithm is shown to be competitive in terms of compression rate and to outperform all tools in terms of time to load compressed data. Availability and implementation: Our compression and decompression algorithms are implemented in a C++ library, are released under the GNU General Public License and are freely downloadable from http://www.dei.unipd.it/~sambofra/snpack.html. Contact: sambofra@dei.unipd.it or cobelli@dei.unipd.it. PMID:25064564
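
    Idea (i) above, storing each SNP in a linkage-disequilibrium block as its differences from a reference SNP, can be sketched as follows. The 0/1/2 genotype coding, the random block, and the function names are placeholders; this is not the snpack file format.

        # Illustration of idea (i): within an LD block, each SNP is stored as the
        # positions where its genotype vector differs from a reference SNP.
        import numpy as np

        def encode_block(block):
            """block: (n_snps, n_subjects) genotypes coded 0/1/2."""
            reference = block[0]
            diffs = []
            for row in block[1:]:
                idx = np.flatnonzero(row != reference)       # subjects that differ
                diffs.append((idx, row[idx]))
            return reference, diffs

        def decode_block(reference, diffs):
            rows = [reference]
            for idx, vals in diffs:
                row = reference.copy()
                row[idx] = vals
                rows.append(row)
            return np.vstack(rows)

        rng = np.random.default_rng(2)
        ref_snp = rng.integers(0, 3, 1000)
        block = np.vstack([ref_snp] +
                          [np.where(rng.random(1000) < 0.02, rng.integers(0, 3, 1000), ref_snp)
                           for _ in range(9)])
        reference, diffs = encode_block(block)
        assert np.array_equal(decode_block(reference, diffs), block)
        print("mean stored differences per SNP:", np.mean([len(d[0]) for d in diffs]))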

  2. Improved compression technique for multipass color printers

    NASA Astrophysics Data System (ADS)

    Honsinger, Chris

    1998-01-01

    A multipass color printer prints a color image by printing one color plane at a time in a prescribed order, e.g., in a four-color system, the cyan plane may be printed first, the magenta next, and so on. It is desirable to discard the data related to each color plane once it has been printed, so that data from the next print may be downloaded. In this paper, we present a compression scheme that allows the release of a color plane memory, but still takes advantage of the correlation between the color planes. The compression scheme is based on a block adaptive technique for decorrelating the color planes followed by a spatial lossy compression of the decorrelated data. A preferred method of lossy compression is the DCT-based JPEG compression standard, as it is shown that the block adaptive decorrelation operations can be efficiently performed in the DCT domain. The results of the compression technique are compared to those of using JPEG on RGB data without any decorrelating transform. In general, the technique is shown to improve the compression performance over a practical range of compression ratios by at least 30 percent in all images, and up to 45 percent in some images.
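
    The inter-plane decorrelation step can be illustrated with a simplified per-block predictor: each block of the current plane is predicted from the co-located block of a previously printed plane, and only the residual would be passed to the spatial (JPEG-style) coder. The least-squares gain used below is a stand-in for the paper's block-adaptive transform, and the array sizes are invented for the example.

        # Simplified per-block inter-plane prediction (a stand-in for the paper's
        # block-adaptive decorrelation): the current plane is predicted from the
        # previously printed plane and only the residual would go to the spatial coder.
        import numpy as np

        def decorrelate(current, previous, block=8):
            residual = np.empty(current.shape, dtype=float)
            gains = np.zeros((current.shape[0] // block, current.shape[1] // block))
            for bi in range(gains.shape[0]):
                for bj in range(gains.shape[1]):
                    sl = np.s_[bi*block:(bi+1)*block, bj*block:(bj+1)*block]
                    p, c = previous[sl].astype(float), current[sl].astype(float)
                    g = (p * c).sum() / max((p * p).sum(), 1e-9)   # per-block gain
                    gains[bi, bj] = g
                    residual[sl] = c - g * p                        # what gets compressed
            return residual, gains

        rng = np.random.default_rng(3)
        cyan = rng.integers(0, 256, (64, 64))
        magenta = np.clip(0.7 * cyan + rng.normal(0, 5, cyan.shape), 0, 255)  # correlated plane
        residual, gains = decorrelate(magenta, cyan)
        print("plane variance:", float(magenta.var()), "residual variance:", float(residual.var()))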

  3. Simulating compressible-incompressible two-phase flows

    NASA Astrophysics Data System (ADS)

    Denner, Fabian; van Wachem, Berend

    2017-11-01

    Simulating compressible gas-liquid flows, e.g. air-water flows, presents considerable numerical issues and requires substantial computational resources, particularly because of the stiff equation of state for the liquid and the different Mach number regimes. Treating the liquid phase (low Mach number) as incompressible, yet concurrently considering the gas phase (high Mach number) as compressible, can improve the computational performance of such simulations significantly without sacrificing important physical mechanisms. A pressure-based algorithm for the simulation of two-phase flows is presented, in which a compressible and an incompressible fluid are separated by a sharp interface. The algorithm is based on a coupled finite-volume framework, discretised in conservative form, with a compressive VOF method to represent the interface. The bulk phases are coupled via a novel acoustically-conservative interface discretisation method that retains the acoustic properties of the compressible phase and does not require a Riemann solver. Representative test cases are presented to scrutinize the proposed algorithm, including the reflection of acoustic waves at the compressible-incompressible interface, shock-drop interaction and gas-liquid flows with surface tension. Financial support from the EPSRC (Grant EP/M021556/1) is gratefully acknowledged.

  4. Distributed Coding of Compressively Sensed Sources

    NASA Astrophysics Data System (ADS)

    Goukhshtein, Maxim

    In this work we propose a new method for compressing multiple correlated sources with a very low-complexity encoder in the presence of side information. Our approach uses ideas from compressed sensing and distributed source coding. At the encoder, syndromes of the quantized compressively sensed sources are generated and transmitted. The decoder uses side information to predict the compressed sources. The predictions are then used to recover the quantized measurements via a two-stage decoding process consisting of bitplane prediction and syndrome decoding. Finally, guided by the structure of the sources and the side information, the sources are reconstructed from the recovered measurements. As a motivating example, we consider the compression of multispectral images acquired on board satellites, where resources, such as computational power and memory, are scarce. Our experimental results exhibit a significant improvement in the rate-distortion trade-off when compared against approaches with similar encoder complexity.

  5. Exploring compression techniques for ROOT IO

    NASA Astrophysics Data System (ADS)

    Zhang, Z.; Bockelman, B.

    2017-10-01

    ROOT provides a flexible format used throughout the HEP community. The number of use cases - from an archival data format to end-stage analysis - has required a number of tradeoffs to be exposed to the user. For example, a high “compression level” in the traditional DEFLATE algorithm will result in a smaller file (saving disk space) at the cost of slower decompression (costing CPU time when read). At the scale of the LHC experiment, poor design choices can result in terabytes of wasted space or wasted CPU time. We explore and attempt to quantify some of these tradeoffs. Specifically, we explore: the use of alternate compression algorithms to optimize for read performance; an alternate method of compressing individual events to allow efficient random access; and a new approach to whole-file compression. Quantitative results are given, as well as guidance on how to make compression decisions for different use cases.
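
    The compression-level tradeoff mentioned above is easy to reproduce with the standard zlib (DEFLATE) library. The synthetic payload below is only a stand-in for ROOT event data; the LZ4/ZSTD alternatives and the per-event random-access scheme studied in the paper are not reproduced.

        # Size vs. CPU-time tradeoff for DEFLATE levels on a synthetic payload; a
        # stand-in for ROOT event data, not a reproduction of the paper's measurements.
        import random, time, zlib

        random.seed(0)
        payload = b"".join(random.choice([b"electron", b"muon", b"jet",
                                          bytes([random.randrange(256)])])
                           for _ in range(200_000))

        for level in (1, 6, 9):
            t0 = time.perf_counter()
            blob = zlib.compress(payload, level)
            t_c = time.perf_counter() - t0
            t0 = time.perf_counter()
            zlib.decompress(blob)
            t_d = time.perf_counter() - t0
            print(f"level {level}: ratio {len(payload) / len(blob):5.2f}  "
                  f"compress {t_c * 1e3:6.1f} ms  decompress {t_d * 1e3:5.1f} ms")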

  6. Wavelet-based audio embedding and audio/video compression

    NASA Astrophysics Data System (ADS)

    Mendenhall, Michael J.; Claypoole, Roger L., Jr.

    2001-12-01

    Watermarking, traditionally used for copyright protection, is used in a new and exciting way. An efficient wavelet-based watermarking technique embeds audio information into a video signal. Several effective compression techniques are applied to compress the resulting audio/video signal in an embedded fashion. This wavelet-based compression algorithm incorporates bit-plane coding, index coding, and Huffman coding. To demonstrate the potential of this audio embedding and audio/video compression algorithm, we embed an audio signal into a video signal and then compress. Results show that overall compression rates of 15:1 can be achieved. The video signal is reconstructed with a median PSNR of nearly 33 dB. Finally, the audio signal is extracted from the compressed audio/video signal without error.

  7. Homogenous charge compression ignition engine having a cylinder including a high compression space

    DOEpatents

    Agama, Jorge R.; Fiveland, Scott B.; Maloney, Ronald P.; Faletti, James J.; Clarke, John M.

    2003-12-30

    The present invention relates generally to the field of homogeneous charge compression engines. In these engines, fuel is injected upstream or directly into the cylinder when the power piston is relatively close to its bottom dead center position. The fuel mixes with air in the cylinder as the power piston advances to create a relatively lean homogeneous mixture that preferably ignites when the power piston is relatively close to the top dead center position. However, if the ignition event occurs either earlier or later than desired, lowered performance, engine misfire, or even engine damage, can result. Thus, the present invention divides the homogeneous charge between a controlled volume higher compression space and a lower compression space to better control the start of ignition.

  8. Implementation of Compressed Work Schedules: Participation and Job Redesign as Critical Factors for Employee Acceptance.

    ERIC Educational Resources Information Center

    Latack, Janina C.; Foster, Lawrence W.

    1985-01-01

    Analyzes the effects of an implementation of a three-day/thirty-eight hour (3/38) work schedule among information systems personnel (N=84). Data showed that 18 months after implementation, 3/38 employees still strongly favor the compressed schedule. Data also suggest substantial organizational payoffs including reductions in sick time, overtime,…

  9. Assessment of compressive failure process of cortical bone materials using damage-based model.

    PubMed

    Ng, Theng Pin; R Koloor, S S; Djuansjah, J R P; Abdul Kadir, M R

    2017-02-01

    The main failure factors of cortical bone are aging or osteoporosis, accident and high energy trauma or physiological activities. However, the mechanism of damage evolution coupled with yield criterion is considered as one of the unclear subjects in failure analysis of cortical bone materials. Therefore, this study attempts to assess the structural response and progressive failure process of cortical bone using a brittle damaged plasticity model. For this reason, several compressive tests are performed on cortical bone specimens made of bovine femur, in order to obtain the structural response and mechanical properties of the material. Complementary finite element (FE) model of the sample and test is prepared to simulate the elastic-to-damage behavior of the cortical bone using the brittle damaged plasticity model. The FE model is validated in a comparative method using the predicted and measured structural response as load-compressive displacement through simulation and experiment. FE results indicated that the compressive damage initiated and propagated at central region where maximum equivalent plastic strain is computed, which coincided with the degradation of structural compressive stiffness followed by a vast amount of strain energy dissipation. The parameter of compressive damage rate, which is a function dependent on damage parameter and the plastic strain is examined for different rates. Results show that considering a similar rate to the initial slope of the damage parameter in the experiment would give a better sense for prediction of compressive failure. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Psychophysical Comparisons in Image Compression Algorithms.

    DTIC Science & Technology

    1999-03-01

    Leister, M., "Lossy Lempel-Ziv Algorithm for Large Alphabet Sources and Applications to Image Compression," IEEE Proceedings, v. I, pp. 225-228, September... 1623-1642, September 1990. Sanford, M.A., An Analysis of Data Compression Algorithms Used in the Transmission of Imagery, Master's Thesis, Naval... Naval Postgraduate School, Monterey, California. Thesis: Psychophysical Comparisons in Image Compression Algorithms, by Christopher J. Bodine, March

  11. The effects of compressive preloads on the compression-after-impact strength of carbon/epoxy

    NASA Technical Reports Server (NTRS)

    Nettles, A. T.; Lance, D. G.

    1992-01-01

    A preloading device was used to examine the effects of compressive prestress on the compression-after-impact (CAI) strength of 16-ply, quasi-isotropic carbon epoxy test coupons. T300/934 material was evaluated at preloads from 200 to 4000 lb at impact energies from 1 to 9 joules. IM7/8551-7 material was evaluated at preloads from 4000 to 10,000 lb at impact energies from 4 to 16 joules. Advanced design of experiments methodology was used to design and evaluate the test matrices. The results showed that no statistically significant change in CAI strength could be attributed to the amount of compressive preload applied to the specimen.

  12. Algorithms for the prediction of retinopathy of prematurity based on postnatal weight gain.

    PubMed

    Binenbaum, Gil

    2013-06-01

    Current ROP screening guidelines represent a simple risk model with two dichotomized factors, birth weight and gestational age at birth. Pioneering work has shown that tracking postnatal weight gain, a surrogate for low insulin-like growth factor 1, may capture the influence of many other ROP risk factors and improve risk prediction. Models including weight gain, such as WINROP, ROPScore, and CHOP ROP, have demonstrated accurate ROP risk assessment and a potentially large reduction in ROP examinations, compared to current guidelines. However, there is a need for larger studies, and generalizability is limited in countries with developing neonatal care systems. Copyright © 2013 Elsevier Inc. All rights reserved.

  13. A modified JPEG-LS lossless compression method for remote sensing images

    NASA Astrophysics Data System (ADS)

    Deng, Lihua; Huang, Zhenghua

    2015-12-01

    As with many variable-length source coders, JPEG-LS is highly vulnerable to channel errors, which occur in the transmission of remote sensing images. Error diffusion is one of the important factors that affect its robustness. The common method of improving the error resilience of JPEG-LS is dividing the image into many strips or blocks and then coding each of them independently, but this method reduces the coding efficiency. In this paper, a block-based JPEG-LS lossless compression method with an adaptive parameter is proposed. In the modified scheme, the threshold parameter RESET is adapted to each image, and the compression efficiency is close to that of the conventional JPEG-LS.

  14. Dynamic Compression of Chondrocyte-Agarose Constructs Reveals New Candidate Mechanosensitive Genes

    PubMed Central

    Bougault, Carole; Aubert-Foucher, Elisabeth; Paumier, Anne; Perrier-Groult, Emeline; Huot, Ludovic; Hot, David; Duterque-Coquillaud, Martine; Mallein-Gerin, Frédéric

    2012-01-01

    Articular cartilage is physiologically exposed to repeated loads. The mechanical properties of cartilage are due to its extracellular matrix, and homeostasis is maintained by the sole cell type found in cartilage, the chondrocyte. Although mechanical forces clearly control the functions of articular chondrocytes, the biochemical pathways that mediate cellular responses to mechanical stress have not been fully characterised. The aim of our study was to examine early molecular events triggered by dynamic compression in chondrocytes. We used an experimental system consisting of primary mouse chondrocytes embedded within an agarose hydrogel; embedded cells were pre-cultured for one week and subjected to short-term compression experiments. Using Western blots, we demonstrated that chondrocytes maintain a differentiated phenotype in this model system and reproduce typical chondrocyte-cartilage matrix interactions. We investigated the impact of dynamic compression on the phosphorylation state of signalling molecules and genome-wide gene expression. After 15 min of dynamic compression, we observed transient activation of ERK1/2 and p38 (members of the mitogen-activated protein kinase (MAPK) pathways) and Smad2/3 (members of the canonical transforming growth factor (TGF)-β pathways). A microarray analysis performed on chondrocytes compressed for 30 min revealed that only 20 transcripts were modulated more than 2-fold. A less conservative list of 325 modulated genes included genes related to the MAPK and TGF-β pathways and/or known to be mechanosensitive in other biological contexts. Of these candidate mechanosensitive genes, 85% were down-regulated. Down-regulation may therefore represent a general control mechanism for a rapid response to dynamic compression. Furthermore, modulation of transcripts corresponding to different aspects of cellular physiology was observed, such as non-coding RNAs or primary cilium. This study provides new insight into how chondrocytes respond

  15. Developmental gains in visuospatial memory predict gains in mathematics achievement.

    PubMed

    Li, Yaoran; Geary, David C

    2013-01-01

    Visuospatial competencies are related to performance in mathematical domains in adulthood, but are not consistently related to mathematics achievement in children. We confirmed the latter for first graders and demonstrated that children who show above average first-to-fifth grade gains in visuospatial memory have an advantage over other children in mathematics. The study involved the assessment of the mathematics and reading achievement of 177 children in kindergarten to fifth grade, inclusive, and their working memory capacity and processing speed in first and fifth grade. Intelligence was assessed in first grade and their second to fourth grade teachers reported on their in-class attentive behavior. Developmental gains in visuospatial memory span (d = 2.4) were larger than gains in the capacity of the central executive (d = 1.6) that in turn were larger than gains in phonological memory span (d = 1.1). First to fifth grade gains in visuospatial memory and in speed of numeral processing predicted end of fifth grade mathematics achievement, as did first grade central executive scores, intelligence, and in-class attentive behavior. The results suggest there are important individual differences in the rate of growth of visuospatial memory during childhood and that these differences become increasingly important for mathematics learning.

  16. Developmental Gains in Visuospatial Memory Predict Gains in Mathematics Achievement

    PubMed Central

    Li, Yaoran; Geary, David C.

    2013-01-01

    Visuospatial competencies are related to performance in mathematical domains in adulthood, but are not consistently related to mathematics achievement in children. We confirmed the latter for first graders and demonstrated that children who show above average first-to-fifth grade gains in visuospatial memory have an advantage over other children in mathematics. The study involved the assessment of the mathematics and reading achievement of 177 children in kindergarten to fifth grade, inclusive, and their working memory capacity and processing speed in first and fifth grade. Intelligence was assessed in first grade and their second to fourth grade teachers reported on their in-class attentive behavior. Developmental gains in visuospatial memory span (d = 2.4) were larger than gains in the capacity of the central executive (d = 1.6) that in turn were larger than gains in phonological memory span (d = 1.1). First to fifth grade gains in visuospatial memory and in speed of numeral processing predicted end of fifth grade mathematics achievement, as did first grade central executive scores, intelligence, and in-class attentive behavior. The results suggest there are important individual differences in the rate of growth of visuospatial memory during childhood and that these differences become increasingly important for mathematics learning. PMID:23936154

  17. An efficient compression scheme for bitmap indices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Kesheng; Otoo, Ekow J.; Shoshani, Arie

    2004-04-13

    When using an out-of-core indexing method to answer a query, it is generally assumed that the I/O cost dominates the overall query response time. Because of this, most research on indexing methods concentrates on reducing the sizes of indices. For bitmap indices, compression has been used for this purpose. However, in most cases, operations on these compressed bitmaps, mostly bitwise logical operations such as AND, OR, and NOT, spend more time in CPU than in I/O. To speed up these operations, a number of specialized bitmap compression schemes have been developed; the best known of which is the byte-aligned bitmap code (BBC). They are usually faster in performing logical operations than the general purpose compression schemes, but the time spent in CPU still dominates the total query response time. To reduce the query response time, we designed a CPU-friendly scheme named the word-aligned hybrid (WAH) code. In this paper, we prove that the sizes of WAH compressed bitmap indices are about two words per row for a large range of attributes. This size is smaller than typical sizes of commonly used indices, such as a B-tree. Therefore, WAH compressed indices are not only appropriate for low cardinality attributes but also for high cardinality attributes. In the worst case, the time to operate on compressed bitmaps is proportional to the total size of the bitmaps involved. The total size of the bitmaps required to answer a query on one attribute is proportional to the number of hits. These indicate that WAH compressed bitmap indices are optimal. To verify their effectiveness, we generated bitmap indices for four different datasets and measured the response time of many range queries. Tests confirm that sizes of compressed bitmap indices are indeed smaller than B-tree indices, and query processing with WAH compressed indices is much faster than with BBC compressed indices, projection indices and B-tree indices. In addition, we also verified that the average query
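
    The word-aligned idea can be conveyed with a simplified run-length sketch: the bitmap is cut into 31-bit groups, and runs of identical all-zero or all-one groups collapse into a single fill token. This is only in the spirit of WAH, not the real codec, which packs fills and literals into 32-bit machine words and operates on them directly during logical operations.

        # Run-length sketch in the spirit of WAH (not the real codec): the bitmap is
        # split into 31-bit groups; runs of all-zero or all-one groups collapse into
        # one "fill" token, anything else is kept as a literal group.
        def wah_encode(bits):
            groups = [bits[j:j + 31] for j in range(0, len(bits), 31)]
            words, i = [], 0
            while i < len(groups):
                g = groups[i]
                if len(g) == 31 and (all(g) or not any(g)):
                    run_val, run_len = g[0], 1
                    while i + run_len < len(groups) and groups[i + run_len] == g:
                        run_len += 1
                    words.append(("fill", run_val, run_len))   # one token for the run
                    i += run_len
                else:
                    words.append(("lit", tuple(g)))            # verbatim group
                    i += 1
            return words

        def wah_decode(words):
            bits = []
            for w in words:
                bits.extend([w[1]] * (31 * w[2]) if w[0] == "fill" else w[1])
            return bits

        bitmap = [0] * 310 + [1, 0, 1] + [1] * 620 + [0] * 7
        encoded = wah_encode(bitmap)
        assert wah_decode(encoded) == bitmap
        print(len(bitmap), "bits ->", len(encoded), "tokens")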

  18. Compression set in gas-blown condensation-cured polysiloxane elastomers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patel, Mogon; Chinn, Sarah; Maxwell, Robert S.

    2010-12-01

    Accelerated thermal ageing studies on foamed condensation cured polysiloxane materials have been performed in support of life assessment and material replacement programmes. Two different types of filled hydrogen-blown and condensation cured polysiloxane foams were tested; commercial (RTV S5370), and an in-house formulated polysiloxane elastomer (Silfoam). Compression set properties were investigated using thermomechanical analysis (TMA) studies and compared against two separate longer term ageing trials carried out in air and in dry inert gas atmospheres using compression jigs. Isotherms measured from these studies were assessed using time-temperature (T/t) superposition. Acceleration factors were determined and fitted to Arrhenius kinetics. For both materials, the thermo-mechanical results were found to closely follow the longer term accelerated ageing trials. Comparison of the accelerated ageing data in dry nitrogen atmospheres against field trial results showed that the accelerated ageing trends over-predict; however, the comparison is difficult as the field data suffer from significant component-to-component variability. Of the long term ageing trials reported here, those carried out in air deviate more significantly from field trials data compared to those carried out in dry nitrogen atmospheres. For field return samples, there is evidence for residual post-curing reactions influencing mechanical performance, which would accelerate compression set. Multiple quantum-NMR studies suggest that compression set is not associated with significant changes in net crosslink density, but that some degree of network rearrangement has occurred due to viscoelastic relaxation as well as bond breaking and forming processes, with possible post-curing reactions at early times.
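
    The Arrhenius acceleration factor used in such accelerated-ageing analyses follows directly from the rate equation. In the sketch below, the 0.8 eV activation energy and the temperatures are generic placeholders, not values reported for these materials.

        # Arrhenius acceleration factor as used in accelerated-ageing analysis; the
        # 0.8 eV activation energy is a generic placeholder, not a value from the study.
        import math

        def acceleration_factor(t_use_c, t_accel_c, ea_ev=0.8):
            """AF = exp[(Ea/k) * (1/T_use - 1/T_accel)], temperatures in kelvin."""
            k_ev = 8.617e-5                              # Boltzmann constant, eV/K
            t_use, t_accel = t_use_c + 273.15, t_accel_c + 273.15
            return math.exp((ea_ev / k_ev) * (1.0 / t_use - 1.0 / t_accel))

        # e.g. how much faster ageing proceeds at 70 C than at a 25 C storage temperature
        print(round(acceleration_factor(25.0, 70.0), 1))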

  19. A New Approach for Fingerprint Image Compression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mazieres, Bertrand

    1997-12-01

    The FBI has been collecting fingerprint cards since 1924 and now has over 200 million of them. Digitized with 8 bits of grayscale resolution at 500 dots per inch, this amounts to 2000 terabytes of information. Also, without any compression, transmitting a 10 Mb card over a 9600 baud connection would need 3 hours. Hence we need compression, and compression as close to lossless as possible: all fingerprint details must be kept. Lossless compression usually does not give a better compression ratio than 2:1, which is not sufficient. Compressing these images with the JPEG standard leads to artefacts which appear even at low compression rates. Therefore, in 1993 the FBI chose a compression scheme based on a wavelet transform, followed by a scalar quantization and an entropy coding: the so-called WSQ. This scheme achieves compression ratios of 20:1 without any perceptible loss of quality. The FBI publication specifies a decoder, which means that many parameters can be changed in the encoding process: the type of analysis/reconstruction filters, the way the bit allocation is made, the number of Huffman tables used for the entropy coding. The first encoder used 9/7 filters for the wavelet transform and did the bit allocation using a high-rate bit assumption. Since the transform is made into 64 subbands, quite a lot of bands receive only a few bits even at an archival quality compression rate of 0.75 bit/pixel. Thus, after a brief overview of the standard, we will discuss a new approach for the bit allocation that seems to make more sense where theory is concerned. Then we will talk about some implementation aspects, particularly for the new entropy coder and the features that allow other applications than fingerprint image compression. Finally, we will compare the performance of the new encoder to that of the first encoder.

  20. A biological compression model and its applications.

    PubMed

    Cao, Minh Duc; Dix, Trevor I; Allison, Lloyd

    2011-01-01

    A biological compression model, expert model, is presented which is superior to existing compression algorithms in both compression performance and speed. The model is able to compress whole eukaryotic genomes. Most importantly, the model provides a framework for knowledge discovery from biological data. It can be used for repeat element discovery, sequence alignment and phylogenetic analysis. We demonstrate that the model can handle statistically biased sequences and distantly related sequences where conventional knowledge discovery tools often fail.

  1. Design and development of novel bandages for compression therapy.

    PubMed

    Rajendran, Subbiyan; Anand, Subhash

    2003-03-01

    During the past few years there have been increasing concerns relating to the performance of bandages, especially their pressure distribution properties for the treatment of venous leg ulcers. This is because compression therapy is a complex system and requires two or multi-layer bandages, and the performance properties of each layer differs from other layers. The widely accepted sustained graduated compression mainly depends on the uniform pressure distribution of different layers of bandages, in which textile fibres and bandage structures play a major role. This article examines how the fibres, fibre blends and structures influence the absorption and pressure distribution properties of bandages. It is hoped that the research findings will help medical professionals, especially nurses, to gain an insight into the development of bandages. A total of 12 padding bandages have been produced using various fibres and fibre blends. A new technique that would facilitate good resilience and cushioning properties, higher and more uniform pressure distribution and enhanced water absorption and retention was adopted during the production. It has been found that the properties of developed padding bandages, which include uniform pressure distribution around the leg, are superior to existing commercial bandages and possess a number of additional properties required to meet the criteria stipulated for an ideal padding bandage. Results have indicated that none of the mostly used commercial padding bandages provide the required uniform pressure distribution around the limb.

  2. Optimisation algorithms for ECG data compression.

    PubMed

    Haugland, D; Heber, J G; Husøy, J H

    1997-07-01

    The use of exact optimisation algorithms for compressing digital electrocardiograms (ECGs) is demonstrated. As opposed to traditional time-domain methods, which use heuristics to select a small subset of representative signal samples, the problem of selecting the subset is formulated in rigorous mathematical terms. This approach makes it possible to derive algorithms guaranteeing the smallest possible reconstruction error when a bounded selection of signal samples is interpolated. The proposed model resembles well-known network models and is solved by a cubic dynamic programming algorithm. When applied to standard test problems, the algorithm produces a compressed representation for which the distortion is about one-half of that obtained by traditional time-domain compression techniques at reasonable compression ratios. This illustrates that, in terms of the accuracy of decoded signals, existing time-domain heuristics for ECG compression may be far from what is theoretically achievable. The paper is an attempt to bridge this gap.
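
    The sample-selection idea can be sketched with a small dynamic program that keeps a fixed number of samples, always including the endpoints, so as to minimise the squared error of linear interpolation between them. This is a simplified illustration in the spirit of the cubic dynamic program described, not the authors' exact network-model formulation; the function names and the toy signal are invented for the example.

        # Dynamic-programming sketch of the sample-selection idea: keep n_keep samples
        # (always including the endpoints) so that linear interpolation between kept
        # samples minimises the total squared error.
        import numpy as np

        def segment_cost(x, i, j):
            """Squared error of linearly interpolating x between kept samples i and j."""
            if j - i < 2:
                return 0.0
            t = np.arange(i + 1, j)
            interp = x[i] + (x[j] - x[i]) * (t - i) / (j - i)
            return float(np.sum((x[t] - interp) ** 2))

        def best_samples(x, n_keep):
            n = len(x)
            cost = [[segment_cost(x, i, j) for j in range(n)] for i in range(n)]
            dp = np.full((n, n_keep), np.inf)      # dp[j, k]: best error ending at j with k+1 kept
            back = np.zeros((n, n_keep), dtype=int)
            dp[0, 0] = 0.0
            for j in range(1, n):
                for k in range(1, n_keep):
                    for i in range(j):
                        c = dp[i, k - 1] + cost[i][j]
                        if c < dp[j, k]:
                            dp[j, k], back[j, k] = c, i
            keep, j, k = [n - 1], n - 1, n_keep - 1
            while k > 0:                           # backtrack the chosen samples
                j, k = back[j, k], k - 1
                keep.append(j)
            return sorted(int(v) for v in keep), float(dp[n - 1, n_keep - 1])

        t = np.linspace(0, 1, 80)
        signal = np.exp(-((t - 0.5) / 0.02) ** 2) + 0.1 * np.sin(2 * np.pi * 3 * t)  # toy beat
        kept, err = best_samples(signal, n_keep=12)
        print("kept sample indices:", kept, " squared error:", round(err, 4))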

  3. High gain solar photovoltaics

    NASA Astrophysics Data System (ADS)

    MacDonald, B.; Finot, M.; Heiken, B.; Trowbridge, T.; Ackler, H.; Leonard, L.; Johnson, E.; Chang, B.; Keating, T.

    2009-08-01

    Skyline Solar Inc. has developed a novel silicon-based PV system to simultaneously reduce energy cost and improve scalability of solar energy. The system achieves high gain through a combination of high capacity factor and optical concentration. The design approach drives innovation not only into the details of the system hardware, but also into manufacturing and deployment-related costs and bottlenecks. The result of this philosophy is a modular PV system whose manufacturing strategy relies only on currently existing silicon solar cell, module, reflector and aluminum parts supply chains, as well as turnkey PV module production lines and metal fabrication industries that already exist at enormous scale. Furthermore, with a high gain system design, the generating capacity of all components is multiplied, leading to a rapidly scalable system. The product design and commercialization strategy cooperate synergistically to promise dramatically lower LCOE with substantially lower risk relative to materials-intensive innovations. In this paper, we will present the key design aspects of Skyline's system, including aspects of the optical, mechanical and thermal components, revealing the ease of scalability, low cost and high performance. Additionally, we will present performance and reliability results on modules and the system, using ASTM and UL/IEC methodologies.

  4. Nonpainful wide-area compression inhibits experimental pain

    PubMed Central

    Honigman, Liat; Bar-Bachar, Ofrit; Yarnitsky, David; Sprecher, Elliot; Granovsky, Yelena

    2016-01-01

    Abstract Compression therapy, a well-recognized treatment for lymphoedema and venous disorders, pressurizes limbs and generates massive non-noxious afferent sensory barrages. The aim of this study was to study whether such afferent activity has an analgesic effect when applied on the lower limbs, hypothesizing that larger compression areas will induce stronger analgesic effects, and whether this effect correlates with conditioned pain modulation (CPM). Thirty young healthy subjects received painful heat and pressure stimuli (47°C for 30 seconds, forearm; 300 kPa for 15 seconds, wrist) before and during 3 compression protocols of either SMALL (up to ankles), MEDIUM (up to knees), or LARGE (up to hips) compression areas. Conditioned pain modulation (heat pain conditioned by noxious cold water) was tested before and after each compression protocol. The LARGE protocol induced more analgesia for heat than the SMALL protocol (P < 0.001). The analgesic effect interacted with gender (P = 0.015). The LARGE protocol was more efficient for females, whereas the MEDIUM protocol was more efficient for males. Pressure pain was reduced by all protocols (P < 0.001) with no differences between protocols and no gender effect. Conditioned pain modulation was more efficient than the compression-induced analgesia. For the LARGE protocol, precompression CPM efficiency positively correlated with compression-induced analgesia. Large body area compression exerts an area-dependent analgesic effect on experimental pain stimuli. The observed correlation with pain inhibition in response to robust non-noxious sensory stimulation may suggest that compression therapy shares similar mechanisms with inhibitory pain modulation assessed through CPM. PMID:27152691

  5. ERGC: an efficient referential genome compression algorithm.

    PubMed

    Saha, Subrata; Rajasekaran, Sanguthevar

    2015-11-01

    Genome sequencing has become faster and more affordable. Consequently, the number of available complete genomic sequences is increasing rapidly. As a result, the cost to store, process, analyze and transmit the data is becoming a bottleneck for research and future medical applications. So, the need for devising efficient data compression and data reduction techniques for biological sequencing data is growing by the day. Although there exists a number of standard data compression algorithms, they are not efficient in compressing biological data. These generic algorithms do not exploit some inherent properties of the sequencing data while compressing. To exploit statistical and information-theoretic properties of genomic sequences, we need specialized compression algorithms. Five different next-generation sequencing data compression problems have been identified and studied in the literature. We propose a novel algorithm for one of these problems known as reference-based genome compression. We have done extensive experiments using five real sequencing datasets. The results on real genomes show that our proposed algorithm is indeed competitive and performs better than the best known algorithms for this problem. It achieves compression ratios that are better than those of the currently best performing algorithms. The time to compress and decompress the whole genome is also very promising. The implementations are freely available for non-commercial purposes. They can be downloaded from http://engr.uconn.edu/∼rajasek/ERGC.zip. rajasek@engr.uconn.edu. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  6. Nonpainful wide-area compression inhibits experimental pain.

    PubMed

    Honigman, Liat; Bar-Bachar, Ofrit; Yarnitsky, David; Sprecher, Elliot; Granovsky, Yelena

    2016-09-01

    Compression therapy, a well-recognized treatment for lymphoedema and venous disorders, pressurizes limbs and generates massive non-noxious afferent sensory barrages. The aim of this study was to examine whether such afferent activity, applied to the lower limbs, has an analgesic effect, hypothesizing that larger compression areas induce stronger analgesic effects, and whether this effect correlates with conditioned pain modulation (CPM). Thirty young healthy subjects received painful heat and pressure stimuli (47°C for 30 seconds, forearm; 300 kPa for 15 seconds, wrist) before and during 3 compression protocols covering SMALL (up to the ankles), MEDIUM (up to the knees), or LARGE (up to the hips) areas. Conditioned pain modulation (heat pain conditioned by noxious cold water) was tested before and after each compression protocol. The LARGE protocol induced more analgesia for heat than the SMALL protocol (P < 0.001). The analgesic effect interacted with gender (P = 0.015): the LARGE protocol was more efficient for females, whereas the MEDIUM protocol was more efficient for males. Pressure pain was reduced by all protocols (P < 0.001), with no differences between protocols and no gender effect. Conditioned pain modulation was more efficient than the compression-induced analgesia. For the LARGE protocol, precompression CPM efficiency positively correlated with compression-induced analgesia. Large body area compression exerts an area-dependent analgesic effect on experimental pain stimuli. The observed correlation with pain inhibition in response to robust non-noxious sensory stimulation may suggest that compression therapy shares mechanisms with the inhibitory pain modulation assessed through CPM.

  7. Classification Techniques for Digital Map Compression

    DTIC Science & Technology

    1989-03-01

    classification improved the performance of the K-means classification algorithm, resulting in a compression of 8.06:1 with Lempel-Ziv coding. Run-length coding... compression performance are run-length coding [2], [8] and Lempel-Ziv coding [10], [11]. These techniques are chosen because they are most efficient when... investigated. After the classification, some standard file compression methods, such as Lempel-Ziv and run-length encoding, were applied to the
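
    As a hedged illustration of the run-length coding referenced in this record, the sketch below collapses runs of identical class labels in one row of a classified map; the labels and layout are hypothetical, not taken from the report.

        def rle_encode(row):
            """Collapse consecutive repeats into (value, run_length) pairs."""
            out = []
            for value in row:
                if out and out[-1][0] == value:
                    out[-1][1] += 1
                else:
                    out.append([value, 1])
            return [tuple(pair) for pair in out]

        def rle_decode(pairs):
            return [value for value, count in pairs for _ in range(count)]

        row = [3, 3, 3, 3, 7, 7, 1, 1, 1, 1, 1]
        encoded = rle_encode(row)
        print(encoded)                    # [(3, 4), (7, 2), (1, 5)]
        assert rle_decode(encoded) == row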

  8. Compression of surface myoelectric signals using MP3 encoding.

    PubMed

    Chan, Adrian D C

    2011-01-01

    The potential of MP3 compression of surface myoelectric signals is explored in this paper. MP3 compression is a perceptual-based encoder scheme, used traditionally to compress audio signals. The ubiquity of MP3 compression (e.g., portable consumer electronics and internet applications) makes it an attractive option for remote monitoring and telemedicine applications. The effects of muscle site and contraction type are examined at different MP3 encoding bitrates. Results demonstrate that MP3 compression is sensitive to the myoelectric signal bandwidth, with larger signal distortion associated with myoelectric signals that have higher bandwidths. Compared to other myoelectric signal compression techniques reported previously (embedded zero-tree wavelet compression and adaptive differential pulse code modulation), MP3 compression demonstrates superior performance (i.e., lower percent residual differences for the same compression ratios).
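
    To make the figures of merit concrete, the sketch below computes the percent residual difference (PRD) and compression ratio used to compare codecs in this record; the exact normalization in the paper (e.g., whether the signal mean is removed) may differ, and the signals here are synthetic stand-ins rather than recorded myoelectric data.

        import numpy as np

        def percent_residual_difference(original, reconstructed):
            """PRD: RMS reconstruction error expressed as a percentage of the original signal energy."""
            return 100.0 * np.sqrt(np.sum((original - reconstructed) ** 2) / np.sum(original ** 2))

        def compression_ratio(original_bytes, compressed_bytes):
            return original_bytes / compressed_bytes

        rng = np.random.default_rng(0)
        emg = rng.standard_normal(10_000)                      # stand-in myoelectric record
        decoded = emg + 0.05 * rng.standard_normal(10_000)     # stand-in lossy codec output
        print(f"PRD = {percent_residual_difference(emg, decoded):.2f}%")
        print(f"CR  = {compression_ratio(2 * emg.size, 2_000):.1f}:1")   # assuming 16-bit samples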

  9. Near-wall modeling of compressible turbulent flow

    NASA Technical Reports Server (NTRS)

    So, Ronald M. C.

    1991-01-01

    A near-wall two-equation model for compressible flows is proposed. The model is formulated by relaxing the assumption of dynamic field similarity between compressible and incompressible flows. A postulate is made to justify the extension of incompressible models to account for compressibility effects. This requires formulating the turbulent kinetic energy equation in a form similar to its incompressible counterpart. As a result, the compressible dissipation function has to be split into a solenoidal part, which is not sensitive to changes of the compressibility indicators, and a dilatational part, which is directly affected by these changes. A model with an explicit dependence on the turbulent Mach number is proposed for the dilatational dissipation rate.
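
    The split described in the abstract is commonly written as below; the dilatational closure shown is a representative Sarkar-type model with an explicit turbulent-Mach-number dependence, given only to illustrate the form and not necessarily the exact model proposed in this report:

        \varepsilon \;=\; \varepsilon_s + \varepsilon_d,
        \qquad
        \varepsilon_d \;\approx\; \alpha_1\, M_t^{2}\, \varepsilon_s,
        \qquad
        M_t^{2} \;=\; \frac{2k}{a^{2}},

    where \varepsilon_s is the solenoidal dissipation, \varepsilon_d the dilatational dissipation, k the turbulent kinetic energy, a the local speed of sound, and \alpha_1 a modeling constant of order one.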

  10. Advances in high throughput DNA sequence data compression.

    PubMed

    Sardaraz, Muhammad; Tahir, Muhammad; Ikram, Ataul Aziz

    2016-06-01

    Advances in high throughput sequencing technologies and reductions in the cost of sequencing have led to exponential growth in high throughput DNA sequence data. This growth has posed challenges such as storage, retrieval, and transmission of sequencing data, and data compression is used to cope with them. Various methods have been developed to compress genomic and sequencing data. In this article, we present a comprehensive review of methods for genome and reads compression. Algorithms are categorized as referential or reference-free. Experimental results and a comparative analysis of various methods for data compression are presented. Finally, key challenges and research directions in DNA sequence data compression are highlighted.

  11. Compression and fast retrieval of SNP data.

    PubMed

    Sambo, Francesco; Di Camillo, Barbara; Toffolo, Gianna; Cobelli, Claudio

    2014-11-01

    The increasing interest in rare genetic variants and epistatic genetic effects on complex phenotypic traits is currently pushing genome-wide association study design towards datasets of increasing size, both in the number of studied subjects and in the number of genotyped single nucleotide polymorphisms (SNPs). This, in turn, is leading to a compelling need for new methods for compression and fast retrieval of SNP data. We present a novel algorithm and file format for compressing and retrieving SNP data, specifically designed for large-scale association studies. Our algorithm is based on two main ideas: (i) compress linkage disequilibrium blocks in terms of differences with a reference SNP and (ii) compress reference SNPs exploiting information on their call rate and minor allele frequency. Tested on two SNP datasets and compared with several state-of-the-art software tools, our compression algorithm is shown to be competitive in terms of compression rate and to outperform all tools in terms of time to load compressed data. Our compression and decompression algorithms are implemented in a C++ library, are released under the GNU General Public License and are freely downloadable from http://www.dei.unipd.it/~sambofra/snpack.html.
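
    A hypothetical sketch of idea (i) above: within a linkage-disequilibrium block, one reference SNP is stored in full and every other SNP only as the sample positions (and genotype values) where it differs from that reference. Genotypes are coded 0/1/2; the names and data layout are illustrative, not those of the released C++ library.

        import numpy as np

        def encode_block(block):
            """block: (n_snps, n_samples) genotype matrix; row 0 is taken as the reference SNP."""
            reference = block[0]
            diffs = []
            for snp in block[1:]:
                idx = np.flatnonzero(snp != reference)
                diffs.append((idx, snp[idx]))
            return reference, diffs

        def decode_block(reference, diffs):
            rows = [reference]
            for idx, values in diffs:
                snp = reference.copy()
                snp[idx] = values
                rows.append(snp)
            return np.vstack(rows)

        block = np.array([[0, 1, 2, 1, 0, 0],
                          [0, 1, 2, 1, 0, 1],    # differs from the reference at one sample
                          [0, 1, 2, 2, 0, 0]])
        reference, diffs = encode_block(block)
        assert np.array_equal(decode_block(reference, diffs), block)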

  12. Methodology for the Design of Streamline-Traced External-Compression Supersonic Inlets

    NASA Technical Reports Server (NTRS)

    Slater, John W.

    2014-01-01

    A design methodology based on streamline-tracing is discussed for the design of external-compression, supersonic inlets for flight below Mach 2.0. The methodology establishes a supersonic compression surface and capture cross-section by tracing streamlines through an axisymmetric Busemann flowfield. The compression system of shock and Mach waves is altered through modifications to the leading edge and shoulder of the compression surface. An external terminal shock is established to create subsonic flow which is diffused in the subsonic diffuser. The design methodology was implemented into the SUPIN inlet design tool. SUPIN uses specified design factors to design the inlets and computes the inlet performance, which includes the flow rates, total pressure recovery, and wave drag. A design study was conducted using SUPIN and the Wind-US computational fluid dynamics code to design and analyze the properties of two streamline-traced, external-compression (STEX) supersonic inlets for Mach 1.6 freestream conditions. The STEX inlets were compared to axisymmetric pitot, two-dimensional, and axisymmetric spike inlets. The STEX inlets had slightly lower total pressure recovery and higher levels of total pressure distortion than the axisymmetric spike inlet. The cowl wave drag coefficients of the STEX inlets were 20% of those for the axisymmetric spike inlet. The STEX inlets had external sound pressures that were 37% of those of the axisymmetric spike inlet, which may result in lower adverse sonic boom characteristics. The flexibility of the shape of the capture cross-section may result in benefits for the integration of STEX inlets with aircraft.

  13. Effects of compression and individual variability on face recognition performance

    NASA Astrophysics Data System (ADS)

    McGarry, Delia P.; Arndt, Craig M.; McCabe, Steven A.; D'Amato, Donald P.

    2004-08-01

    The Enhanced Border Security and Visa Entry Reform Act of 2002 requires that the Visa Waiver Program be available only to countries that have a program to issue to their nationals machine-readable passports incorporating biometric identifiers complying with applicable standards established by the International Civil Aviation Organization (ICAO). In June 2002, the New Technologies Working Group of ICAO unanimously endorsed the use of face recognition (FR) as the globally interoperable biometric for machine-assisted identity confirmation with machine-readable travel documents (MRTDs), although Member States may elect to use fingerprint and/or iris recognition as additional biometric technologies. The means and formats are still being developed through which biometric information might be stored in the constrained space of integrated circuit chips embedded within travel documents. Such information will be stored in an open, yet unalterable and very compact format, probably as digitally signed and efficiently compressed images. The objective of this research is to characterize the many factors that affect FR system performance with respect to the legislated mandates concerning FR. A photograph acquisition environment and a commercial face recognition system have been installed at Mitretek, and over 1,400 images have been collected of volunteers. The image database and FR system are being used to analyze the effects of lossy image compression, individual differences, such as eyeglasses and facial hair, and the acquisition environment on FR system performance. Images are compressed by varying ratios using JPEG2000 to determine the trade-off points between recognition accuracy and compression ratio. The various acquisition factors that contribute to differences in FR system performance among individuals are also being measured. The results of this study will be used to refine and test efficient face image interchange standards that ensure highly accurate recognition, both

  14. Highly Efficient Compression Algorithms for Multichannel EEG.

    PubMed

    Shaw, Laxmi; Rahman, Daleef; Routray, Aurobinda

    2018-05-01

    The difficulty associated with processing and understanding the high dimensionality of electroencephalogram (EEG) data requires developing efficient and robust compression algorithms. In this paper, different lossless compression techniques of single and multichannel EEG data, including Huffman coding, arithmetic coding, Markov predictor, linear predictor, context-based error modeling, multivariate autoregression (MVAR), and a low complexity bivariate model have been examined and their performances have been compared. Furthermore, a high compression algorithm named general MVAR and a modified context-based error modeling for multichannel EEG have been proposed. The resulting compression algorithm produces a higher relative compression ratio of 70.64% on average compared with the existing methods, and in some cases, it goes up to 83.06%. The proposed methods are designed to compress a large amount of multichannel EEG data efficiently so that the data storage and transmission bandwidth can be effectively used. These methods have been validated using several experimental multichannel EEG recordings of different subjects and publicly available standard databases. The satisfactory parametric measures of these methods, namely percent-root-mean square distortion, peak signal-to-noise ratio, root-mean-square error, and cross correlation, show their superiority over the state-of-the-art compression methods.
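
    As a hedged illustration of the predict-then-encode pattern that underlies the lossless schemes compared above, the sketch below applies a first-order linear predictor and a general-purpose entropy coder (zlib standing in for the Huffman or arithmetic coders of the paper) to a synthetic single-channel signal.

        import zlib
        import numpy as np

        def compress_channel(samples):
            """First-order prediction (previous sample) followed by entropy coding of the residuals."""
            residuals = np.diff(samples, prepend=0).astype(np.int16)
            return zlib.compress(residuals.tobytes(), level=9)

        def decompress_channel(blob):
            residuals = np.frombuffer(zlib.decompress(blob), dtype=np.int16)
            return np.cumsum(residuals.astype(np.int32)).astype(np.int16)

        rng = np.random.default_rng(1)
        eeg = np.cumsum(rng.integers(-5, 6, size=50_000)).astype(np.int16)   # smooth stand-in signal
        blob = compress_channel(eeg)
        assert np.array_equal(decompress_channel(blob), eeg)
        print(f"compression ratio: {eeg.nbytes / len(blob):.2f}:1")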

  15. Halftoning processing on a JPEG-compressed image

    NASA Astrophysics Data System (ADS)

    Sibade, Cedric; Barizien, Stephane; Akil, Mohamed; Perroton, Laurent

    2003-12-01

    Digital image processing algorithms are usually designed for the raw format, that is, for an uncompressed representation of the image. Therefore, prior to transforming or processing a compressed format, decompression is applied; the result of the processing is then re-compressed for further transfer or storage. This change of data representation is resource-consuming in terms of computation, time and memory usage. In the wide format printing industry, this becomes an important issue: e.g. a 1 m2 input color image scanned at 600 dpi exceeds 1.6 GB in its raw representation. However, some image processing algorithms can be performed in the compressed domain, by applying an equivalent operation on the compressed format. This paper presents an innovative application of the halftoning operation by screening, applied to a JPEG-compressed image. The compressed-domain transform is performed by computing the threshold operation of the screening algorithm in the DCT domain. The algorithm is illustrated by examples for different halftone masks. A pre-sharpening operation, applied to a JPEG-compressed low quality image, is also described; it de-noises the image and enhances its contours.
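
    For context, the sketch below shows baseline screening (halftoning by thresholding against a periodic mask) in the spatial domain; the paper's contribution is to perform the equivalent threshold operation directly on the JPEG DCT coefficients, which this illustrative sketch does not reproduce. The mask and test image are hypothetical.

        import numpy as np

        def screen_halftone(gray, mask):
            """gray: 2-D array in [0, 255]; mask: small threshold tile (e.g., a Bayer matrix)."""
            h, w = gray.shape
            reps = (h // mask.shape[0] + 1, w // mask.shape[1] + 1)
            thresholds = np.tile(mask, reps)[:h, :w]
            return (gray > thresholds).astype(np.uint8) * 255

        bayer4 = (np.array([[ 0,  8,  2, 10],
                            [12,  4, 14,  6],
                            [ 3, 11,  1,  9],
                            [15,  7, 13,  5]]) + 0.5) * (255.0 / 16.0)
        image = np.linspace(0, 255, 64 * 64).reshape(64, 64)
        halftoned = screen_halftone(image, bayer4)
        print(halftoned.mean())   # roughly half the pixels turn on for this gray ramp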

  16. Gain weighted eigenspace assignment

    NASA Technical Reports Server (NTRS)

    Davidson, John B.; Andrisani, Dominick, II

    1994-01-01

    This report presents the development of the gain weighted eigenspace assignment methodology. This provides a designer with a systematic methodology for trading off eigenvector placement versus gain magnitudes, while still maintaining desired closed-loop eigenvalue locations. This is accomplished by forming a cost function composed of a scalar measure of error between desired and achievable eigenvectors and a scalar measure of gain magnitude, determining analytical expressions for the gradients, and solving for the optimal solution by numerical iteration. For this development the scalar measure of gain magnitude is chosen to be a weighted sum of the squares of all the individual elements of the feedback gain matrix. An example is presented to demonstrate the method. In this example, solutions yielding achievable eigenvectors close to the desired eigenvectors are obtained with significant reductions in gain magnitude compared to a solution obtained using a previously developed eigenspace (eigenstructure) assignment method.
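
    A hedged sketch of the kind of cost function described above, assuming a quadratic eigenvector-error term and the stated weighted sum of squared gain elements; the exact norms and weights used in the report may differ:

        J(K) \;=\; \sum_{i=1}^{n}
                   \bigl(\mathbf{v}_i^{a} - \mathbf{v}_i^{d}\bigr)^{H} Q_i
                   \bigl(\mathbf{v}_i^{a} - \mathbf{v}_i^{d}\bigr)
                   \;+\; \sum_{j}\sum_{k} w_{jk}\, K_{jk}^{2},

    where \mathbf{v}_i^{d} and \mathbf{v}_i^{a} are the desired and achievable eigenvectors associated with the i-th specified closed-loop eigenvalue, Q_i weights the eigenvector error, K is the feedback gain matrix, and the w_{jk} weight its individual elements. Analytical gradients of J are then used in a numerical iteration to trade eigenvector placement against gain magnitude, as described in the abstract.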

  17. Data Compression With Application to Geo-Location

    DTIC Science & Technology

    2010-08-01

    wireless sensor network requires the estimation of time-difference-of-arrival (TDOA) parameters using data collected by a set of spatially separated sensors. Compressing the data that is shared among the sensors can provide tremendous savings in terms of the energy and transmission latency. Traditional MSE and perceptual based data compression schemes fail to accurately capture the effects of compression on the TDOA estimation task; therefore, it is necessary to investigate compression algorithms suitable for TDOA parameter estimation. This thesis explores the
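
    A minimal sketch of the TDOA estimation task described in this record, assuming the delay between two sensors is taken as the lag that maximizes their cross-correlation; any compression applied to the shared data would need to preserve the location of this peak. The signals and sampling rate below are synthetic.

        import numpy as np

        def estimate_tdoa(x, y, fs):
            """Return the delay of y relative to x, in seconds, via the cross-correlation peak."""
            corr = np.correlate(y, x, mode="full")
            lag = int(np.argmax(corr)) - (len(x) - 1)
            return lag / fs

        fs = 1_000.0                                   # Hz
        rng = np.random.default_rng(2)
        signal = rng.standard_normal(4_096)
        delay = 37                                     # samples
        delayed = np.concatenate([np.zeros(delay), signal])[:len(signal)]
        print(estimate_tdoa(signal, delayed, fs))      # ~0.037 s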

  18. Crystal and Particle Engineering Strategies for Improving Powder Compression and Flow Properties to Enable Continuous Tablet Manufacturing by Direct Compression.

    PubMed

    Chattoraj, Sayantan; Sun, Changquan Calvin

    2018-04-01

    Continuous manufacturing of tablets has many advantages, including batch size flexibility, demand-adaptive scale up or scale down, consistent product quality, small operational foot print, and increased manufacturing efficiency. Simplicity makes direct compression the most suitable process for continuous tablet manufacturing. However, deficiencies in powder flow and compression of active pharmaceutical ingredients (APIs) limit the range of drug loading that can routinely be considered for direct compression. For the widespread adoption of continuous direct compression, effective API engineering strategies to address powder flow and compression problems are needed. Appropriate implementation of these strategies would facilitate the design of high-quality robust drug products, as stipulated by the Quality-by-Design framework. Here, several crystal and particle engineering strategies for improving powder flow and compression properties are summarized. The focus is on the underlying materials science, which is the foundation for effective API engineering to enable successful continuous manufacturing by the direct compression process.

  19. Temporal compressive imaging for video

    NASA Astrophysics Data System (ADS)

    Zhou, Qun; Zhang, Linxia; Ke, Jun

    2018-01-01

    In many situations, such as gunpowder blasting analysis or the observation of high-speed biological phenomena, imagers are required to operate at higher imaging speeds. However, capturing high-speed video is a challenge for camera design, especially in the infrared spectrum. In this paper, we reconstruct a high-frame-rate video from compressive video measurements using temporal compressive imaging (TCI) with a temporal compression ratio T=8. This means that 8 unique high-speed temporal frames are obtained from a single compressive frame using a reconstruction algorithm; equivalently, the video frame rate is increased by a factor of 8. Two methods, the two-step iterative shrinkage/thresholding (TwIST) algorithm and the Gaussian mixture model (GMM) method, are used for reconstruction. To reduce reconstruction time and memory usage, each frame of size 256×256 is divided into patches of size 8×8. The influence of different coded masks on the reconstruction is discussed, and the reconstruction qualities obtained with TwIST and GMM are compared.
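
    A hedged sketch of the TCI forward model described above, under the stated compression ratio T=8: each high-speed frame is modulated by its own coded mask and the modulated frames are summed into a single compressive measurement. The reconstruction step (TwIST or GMM) is not reproduced here, and all arrays are synthetic.

        import numpy as np

        T, H, W = 8, 256, 256
        rng = np.random.default_rng(3)
        frames = rng.random((T, H, W))                   # stand-in for the high-speed scene
        masks = rng.integers(0, 2, size=(T, H, W))       # one binary coded mask per temporal frame
        measurement = (masks * frames).sum(axis=0)       # single compressive frame, shape (H, W)
        print(measurement.shape)                         # (256, 256)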

  20. Compressed gas fuel storage system

    DOEpatents

    Wozniak, John J.; Tiller, Dale B.; Wienhold, Paul D.; Hildebrand, Richard J.

    2001-01-01

    A compressed gas vehicle fuel storage system comprised of a plurality of compressed gas pressure cells supported by shock-absorbing foam positioned within a shape-conforming container. The container is dimensioned relative to the compressed gas pressure cells whereby a radial air gap surrounds each compressed gas pressure cell. The radial air gap allows pressure-induced expansion of the pressure cells without resulting in the application of pressure to adjacent pressure cells or physical pressure to the container. The pressure cells are interconnected by a gas control assembly including a thermally activated pressure relief device, a manual safety shut-off valve, and means for connecting the fuel storage system to a vehicle power source and a refueling adapter. The gas control assembly is enclosed by a protective cover attached to the container. The system is attached to the vehicle with straps to enable the chassis to deform as intended in a high-speed collision.