Sample records for limited coding capacity

  1. Quantum-capacity-approaching codes for the detected-jump channel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Grassl, Markus; Wei, Zhaohui; Ji, Zhengfeng

    2010-12-15

The quantum-channel capacity gives the ultimate limit for the rate at which quantum data can be reliably transmitted through a noisy quantum channel. Degradable quantum channels are among the few channels whose quantum capacities are known. Given the quantum capacity of a degradable channel, it remains challenging to find a practical coding scheme which approaches capacity. Here we discuss code designs for the detected-jump channel, a degradable channel with practical relevance describing the physics of spontaneous decay of atoms with detected photon emission. We show that this channel can be used to simulate a binary classical channel with both erasures and bit flips. The capacity of the simulated classical channel gives a lower bound on the quantum capacity of the detected-jump channel. When the jump probability is small, it almost equals the quantum capacity. Hence using a classical capacity-approaching code for the simulated classical channel yields a quantum code which approaches the quantum capacity of the detected-jump channel.
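The classical errors-and-erasures channel invoked above has the closed-form capacity (1 − e)(1 − h(f/(1 − e))) for erasure probability e and flip probability f. As a rough illustration (not taken from the paper; the probabilities below are hypothetical), this sketch recovers that value numerically with the generic Blahut–Arimoto algorithm for discrete memoryless channels:

```python
import math

def blahut_arimoto(P, iters=500):
    """Capacity (bits/use) of a DMC with transition matrix P[x][y],
    computed by the Blahut-Arimoto alternating optimization."""
    nx, ny = len(P), len(P[0])
    q = [1.0 / nx] * nx                      # input distribution, start uniform
    for _ in range(iters):
        # output distribution induced by the current input distribution
        r = [sum(q[x] * P[x][y] for x in range(nx)) for y in range(ny)]
        # multiplicative update: q[x] *= exp(D(P[x] || r))
        w = []
        for x in range(nx):
            d = sum(P[x][y] * math.log(P[x][y] / r[y])
                    for y in range(ny) if P[x][y] > 0)
            w.append(q[x] * math.exp(d))
        s = sum(w)
        q = [wx / s for wx in w]
    # mutual information at the converged input distribution
    r = [sum(q[x] * P[x][y] for x in range(nx)) for y in range(ny)]
    return sum(q[x] * P[x][y] * math.log2(P[x][y] / r[y])
               for x in range(nx) for y in range(ny) if P[x][y] > 0)

# Binary channel with flip probability f and erasure probability e
# (outputs: 0, 1, erasure). Closed form: (1 - e) * (1 - h(f / (1 - e))).
f, e = 0.01, 0.1
P = [[1 - f - e, f, e],
     [f, 1 - f - e, e]]
print(f"capacity ~ {blahut_arimoto(P):.4f} bits/use")
```

For small f and e the value stays close to 1 bit per use, consistent with the abstract's point that the classical lower bound is nearly tight for small jump probability.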

  2. Effects of Voice Coding and Speech Rate on a Synthetic Speech Display in a Telephone Information System

    DTIC Science & Technology

    1988-05-01

[Abstract garbled in the source scan; the recoverable fragments reference Broadbent's (1958) original limited-capacity channel model (Figures 2 and 3) and analysis-synthesis speech methods that electronically model the human voice to provide an unlimited variety of voices from digital recording sources.]

  3. Quantum Dense Coding About a Two-Qubit Heisenberg XYZ Model

    NASA Astrophysics Data System (ADS)

    Xu, Hui-Yun; Yang, Guo-Hui

    2017-09-01

By taking into account a nonuniform magnetic field, quantum dense coding with thermal entangled states of a two-qubit anisotropic Heisenberg XYZ chain is investigated in detail. We mainly show how the dense coding capacity (χ) varies with the different parameters. It is found that the dense coding capacity χ can be enhanced by decreasing the magnetic field B, the degree of inhomogeneity b, and the temperature T, or by increasing the coupling constant along the z-axis, J_z. In addition, we find that χ remains stable as the anisotropy of the XY plane, Δ, changes under certain temperature conditions. By studying the effect of the different parameters on χ, we show that one can tune the values of B, b, J_z, and Δ, or adjust the temperature T, to obtain a valid dense coding capacity (χ > 1). Moreover, the temperature plays a key role in adjusting the value of χ: a valid dense coding capacity can always be obtained in the low-temperature limit.
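For context, one standard (Holevo-type) expression for the dense coding capacity of a shared state ρ_AB is χ = log2(d) + S(ρ_B) − S(ρ_AB). As a hedged sketch (an illustration of the χ > 1 validity criterion, not the paper's thermal XYZ computation), the snippet below evaluates it for two-qubit Bell-diagonal states, whose reduced states are maximally mixed so that S(ρ_B) = 1:

```python
import math

def shannon_bits(probs):
    """Shannon entropy in bits of a probability vector."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def dense_coding_chi(bell_probs):
    """Dense coding capacity chi = log2(d) + S(rho_B) - S(rho_AB) for a
    two-qubit Bell-diagonal state with eigenvalues bell_probs.
    Reduced states of Bell-diagonal states are maximally mixed, S(rho_B) = 1;
    when coherent signaling gives no advantage, chi falls back to log2(d) = 1."""
    s_ab = shannon_bits(bell_probs)
    return max(1.0, 1.0 + 1.0 - s_ab)

print(dense_coding_chi([1.0, 0.0, 0.0, 0.0]))    # pure Bell state -> 2.0
print(dense_coding_chi([0.25] * 4))              # maximally mixed -> 1.0
print(dense_coding_chi([0.85, 0.05, 0.05, 0.05]))  # valid dense coding, chi > 1
```

For the partially mixed state in the last line, χ ≈ 1.15 > 1, i.e. it still beats the classical one-bit-per-qubit rate.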

  4. Capacity of a direct detection optical communication channel

    NASA Technical Reports Server (NTRS)

    Tan, H. H.

    1980-01-01

    The capacity of a free space optical channel using a direct detection receiver is derived under both peak and average signal power constraints and without a signal bandwidth constraint. The addition of instantaneous noiseless feedback from the receiver to the transmitter does not increase the channel capacity. In the absence of received background noise, an optimally coded PPM system is shown to achieve capacity in the limit as signal bandwidth approaches infinity. In the case of large peak to average signal power ratios, an interleaved coding scheme with PPM modulation is shown to have a computational cutoff rate far greater than ordinary coding schemes.

  5. Coherent-state constellations and polar codes for thermal Gaussian channels

    NASA Astrophysics Data System (ADS)

    Lacerda, Felipe; Renes, Joseph M.; Scholz, Volkher B.

    2017-06-01

    Optical communication channels are ultimately quantum mechanical in nature, and we must therefore look beyond classical information theory to determine their communication capacity as well as to find efficient encoding and decoding schemes of the highest rates. Thermal channels, which arise from linear coupling of the field to a thermal environment, are of particular practical relevance; their classical capacity has been recently established, but their quantum capacity remains unknown. While the capacity sets the ultimate limit on reliable communication rates, it does not promise that such rates are achievable by practical means. Here we construct efficiently encodable codes for thermal channels which achieve the classical capacity and the so-called Gaussian coherent information for transmission of classical and quantum information, respectively. Our codes are based on combining polar codes with a discretization of the channel input into a finite "constellation" of coherent states. Encoding of classical information can be done using linear optics.

  6. Interleaved concatenated codes: new perspectives on approaching the Shannon limit.

    PubMed

    Viterbi, A J; Viterbi, A M; Sindhushayana, N T

    1997-09-02

    The last few years have witnessed a significant decrease in the gap between the Shannon channel capacity limit and what is practically achievable. Progress has resulted from novel extensions of previously known coding techniques involving interleaved concatenated codes. A considerable body of simulation results is now available, supported by an important but limited theoretical basis. This paper presents a computational technique which further ties simulation results to the known theory and reveals a considerable reduction in the complexity required to approach the Shannon limit.
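The "Shannon limit" gap the authors discuss is usually quoted in Eb/N0. A minimal sketch (standard textbook formulas, not from the paper) of the minimum Eb/N0 needed for reliable transmission at a given code rate on the real AWGN channel:

```python
import math

def ebno_limit_db(rate):
    """Minimum Eb/N0 (dB) for reliable transmission at code rate R on the
    real-valued AWGN channel: Eb/N0 >= (2**(2R) - 1) / (2R)."""
    ebno = (2 ** (2 * rate) - 1) / (2 * rate)
    return 10 * math.log10(ebno)

for r in (1/3, 1/2, 3/4):
    print(f"rate {r:.3f}: Shannon limit {ebno_limit_db(r):+.2f} dB")

# As rate -> 0 the limit approaches ln(2), i.e. about -1.59 dB.
print(f"ultimate limit: {10 * math.log10(math.log(2)):.2f} dB")
```

A code whose threshold is, say, 0.5 dB above these values is operating within 0.5 dB of capacity at its rate.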

  7. LDPC-coded MIMO optical communication over the atmospheric turbulence channel using Q-ary pulse-position modulation.

    PubMed

    Djordjevic, Ivan B

    2007-08-06

    We describe a coded power-efficient transmission scheme based on repetition MIMO principle suitable for communication over the atmospheric turbulence channel, and determine its channel capacity. The proposed scheme employs the Q-ary pulse-position modulation. We further study how to approach the channel capacity limits using low-density parity-check (LDPC) codes. Component LDPC codes are designed using the concept of pairwise-balanced designs. Contrary to the several recent publications, bit-error rates and channel capacities are reported assuming non-ideal photodetection. The atmospheric turbulence channel is modeled using the Gamma-Gamma distribution function due to Al-Habash et al. Excellent bit-error rate performance improvement, over uncoded case, is found.

  8. Interleaved concatenated codes: New perspectives on approaching the Shannon limit

    PubMed Central

    Viterbi, A. J.; Viterbi, A. M.; Sindhushayana, N. T.

    1997-01-01

    The last few years have witnessed a significant decrease in the gap between the Shannon channel capacity limit and what is practically achievable. Progress has resulted from novel extensions of previously known coding techniques involving interleaved concatenated codes. A considerable body of simulation results is now available, supported by an important but limited theoretical basis. This paper presents a computational technique which further ties simulation results to the known theory and reveals a considerable reduction in the complexity required to approach the Shannon limit. PMID:11038568

  9. Analysis of Optical CDMA Signal Transmission: Capacity Limits and Simulation Results

    NASA Astrophysics Data System (ADS)

    Garba, Aminata A.; Yim, Raymond M. H.; Bajcsy, Jan; Chen, Lawrence R.

    2005-12-01

We present performance limits of optical code-division multiple-access (OCDMA) networks. In particular, we evaluate the information-theoretic capacity of OCDMA transmission when single-user detection (SUD) is used by the receiver. First, we model the OCDMA transmission as a discrete memoryless channel, evaluate its capacity when binary modulation is used in the interference-limited (noiseless) case, and extend this analysis to the case when additive white Gaussian noise (AWGN) corrupts the received signals. Next, we analyze the benefits of using nonbinary signaling to increase the throughput of optical CDMA transmission. It turns out that up to a fourfold increase in network throughput can be achieved with practical numbers of modulation levels in comparison to the traditionally considered binary case. Finally, we present BER simulation results for channel-coded binary and nonbinary OCDMA transmission systems. In particular, we apply turbo codes concatenated with Reed-Solomon codes so that up to several hundred concurrent optical CDMA users can be supported at low target bit error rates. We observe that unlike conventional OCDMA systems, turbo-empowered OCDMA can allow overloading (more active users than the length of the spreading sequences) with good bit error rate performance.

  10. Capacity, cutoff rate, and coding for a direct-detection optical channel

    NASA Technical Reports Server (NTRS)

    Massey, J. L.

    1980-01-01

It is shown that Pierce's pulse position modulation scheme with 2^L pulse positions used on a self-noise-limited direct detection optical communication channel results in a 2^L-ary erasure channel that is equivalent to the parallel combination of L completely correlated binary erasure channels. The capacity of the full channel is the sum of the capacities of the component channels, but the cutoff rate of the full channel is shown to be much smaller than the sum of the cutoff rates. An interpretation of the cutoff rate is given that suggests a complexity advantage in coding separately on the component channels. It is shown that if short-constraint-length convolutional codes with Viterbi decoders are used on the component channels, then the performance and complexity compare favorably with the Reed-Solomon coding system proposed by McEliece for the full channel. The reasons for this unexpectedly fine performance by the convolutional code system are explored in detail, as are various facets of the channel structure.
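Massey's capacity-versus-cutoff-rate observation can be reproduced from the standard erasure-channel formulas. In this sketch (parameter values are illustrative, not the paper's), the full 2^L-ary erasure channel's cutoff rate falls far below the sum of the cutoff rates of its L binary components, even though the capacities coincide at L(1 − p):

```python
import math

def erasure_capacity(M, p):
    """Capacity (bits/use) of an M-ary erasure channel with erasure prob p."""
    return (1 - p) * math.log2(M)

def erasure_cutoff(M, p):
    """Cutoff rate R0 (bits/use) of an M-ary erasure channel:
    R0 = log2(M) - log2(1 + (M - 1) p)."""
    return math.log2(M) - math.log2(1 + (M - 1) * p)

L, p = 8, 0.2            # 2**L PPM positions, erasure probability p
M = 2 ** L
print(f"capacity of full channel:      {erasure_capacity(M, p):.3f} bits")
print(f"cutoff rate of full channel:   {erasure_cutoff(M, p):.3f} bits")
# L completely correlated binary erasure components, coded separately:
print(f"sum of component cutoff rates: {L * erasure_cutoff(2, p):.3f} bits")
```

The large gap between the two cutoff-rate figures is exactly the complexity advantage of coding separately on the component channels that the abstract describes.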

  11. Bandwidth efficient coding: Theoretical limits and real achievements. Error control techniques for satellite and space communications

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.; Courturier, Servanne; Levy, Yannick; Mills, Diane G.; Perez, Lance C.; Wang, Fu-Quan

    1993-01-01

In his seminal 1948 paper 'A Mathematical Theory of Communication,' Claude E. Shannon derived the 'channel coding theorem,' which gives an explicit upper bound, called the channel capacity, on the rate at which information can be transmitted reliably over a given communication channel. Shannon's result was an existence theorem and did not give specific codes to achieve the bound. Some skeptics have claimed that the dramatic performance improvements predicted by Shannon are not achievable in practice. The advances made in the area of coded modulation in the past decade have made communications engineers optimistic about the possibility of achieving, or at least coming close to, channel capacity. Here we consider that possibility in light of current research results.
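Shannon's bound itself is easy to evaluate. A minimal sketch of the capacity formula for the band-limited AWGN channel (the numbers are illustrative):

```python
import math

def awgn_capacity(bandwidth_hz, snr_linear):
    """Shannon capacity of the band-limited AWGN channel:
    C = W * log2(1 + S/N), in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# e.g., a 3 kHz telephone-grade channel at 30 dB SNR
snr = 10 ** (30 / 10)        # 30 dB -> 1000 (linear)
print(f"capacity: {awgn_capacity(3000, snr):.0f} bit/s")
```

At 3 kHz and 30 dB SNR this comes to roughly 30 kbit/s; no code, however clever, can reliably exceed it.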

  12. Slot-like capacity and resource-like coding in a neural model of multiple-item working memory.

    PubMed

    Standage, Dominic; Pare, Martin

    2018-06-27

For the past decade, research on the storage limitations of working memory has been dominated by two fundamentally different hypotheses. On the one hand, the contents of working memory may be stored in a limited number of 'slots', each with a fixed resolution. On the other hand, any number of items may be stored, but with decreasing resolution. These two hypotheses have been invaluable in characterizing the computational structure of working memory, but neither provides a complete account of the available experimental data, nor speaks to the neural basis of the limitations it characterizes. To address these shortcomings, we simulated a multiple-item working memory task with a cortical network model, the cellular resolution of which allowed us to quantify the coding fidelity of memoranda as a function of memory load, as measured by the discriminability, regularity and reliability of simulated neural spiking. Our simulations account for a wealth of neural and behavioural data from human and non-human primate studies, and they demonstrate that feedback inhibition lowers both capacity and coding fidelity. Because the strength of inhibition scales with the number of items stored by the network, increasing this number progressively lowers fidelity until capacity is reached. Crucially, the model makes specific, testable predictions for neural activity on multiple-item working memory tasks.

  13. Performance and capacity analysis of Poisson photon-counting based Iter-PIC OCDMA systems.

    PubMed

    Li, Lingbin; Zhou, Xiaolin; Zhang, Rong; Zhang, Dingchen; Hanzo, Lajos

    2013-11-04

In this paper, an iterative parallel interference cancellation (Iter-PIC) technique is developed for optical code-division multiple-access (OCDMA) systems relying on shot-noise-limited Poisson photon-counting reception. The novel semi-analytical tool of extrinsic information transfer (EXIT) charts is used for analysing both the bit error rate (BER) performance and the channel capacity of these systems, and the results are verified by Monte Carlo simulations. The proposed Iter-PIC OCDMA system is capable of achieving two orders of magnitude of BER improvement and a 0.1-nat capacity improvement over conventional chip-level OCDMA systems at a coding rate of 1/10.

  14. Assessing Capacity for Sustainability of Effective Programs and Policies in Local Health Departments.

    PubMed

    Tabak, Rachel G; Duggan, Katie; Smith, Carson; Aisaka, Kristelle; Moreland-Russell, Sarah; Brownson, Ross C

    2016-01-01

Sustainability has been defined as the existence of structures and processes that allow a program to leverage resources to effectively implement and maintain evidence-based public health, and it is important in local health departments (LHDs) to retain the benefits of effective programs. Objective: explore the applicability of the Program Sustainability Framework in high- and low-capacity LHDs as defined by national performance standards. Case study interviews were conducted from June to July 2013. Standard qualitative methodology was used to code transcripts; codes were developed inductively and deductively. Participants: 35 practitioners from six geographically diverse LHDs (3 high capacity and 3 low capacity). Thematic reports explored the 8 domains (Organizational Capacity, Program Adaptation, Program Evaluation, Communications, Strategic Planning, Funding Stability, Environmental Support, and Partnerships) of the Program Sustainability Framework. High-capacity LHDs described having environmental support, while low-capacity LHDs reported this was lacking. Both high- and low-capacity LHDs described limited funding; however, high-capacity LHDs reported greater funding flexibility. Partnerships were important to high- and low-capacity LHDs, and both described building partnerships to sustain programming. Regarding organizational capacity, high-capacity LHDs reported better access to and support for adequate staff and staff training when compared with low-capacity LHDs. While high-capacity LHDs described integration of program evaluation into implementation and sustainability, low-capacity LHDs reported limited capacity for measurement specifically and evaluation generally. When high-capacity LHDs described program adoption, they discussed an opportunity to adapt and evaluate. Low-capacity LHDs struggled with programs requiring adaptation. High-capacity LHDs described higher quality communication than low-capacity LHDs. High- and low-capacity LHDs described strategic planning, but high-capacity LHDs reported efforts to integrate evidence-based public health. Investments in leadership support for improving organizational capacity, improvements in communication from the top of the organization, integrating program evaluation into implementation, and greater funding flexibility may enhance sustainability of evidence-based public health in LHDs.

  15. Fundamentals of Free-Space Optical Communications

    NASA Technical Reports Server (NTRS)

    Dolinar, Sam; Moision, Bruce; Erkmen, Baris

    2012-01-01

    Free-space optical communication systems potentially gain many dBs over RF systems. There is no upper limit on the theoretically achievable photon efficiency when the system is quantum-noise-limited: a) Intensity modulations plus photon counting can achieve arbitrarily high photon efficiency, but with sub-optimal spectral efficiency. b) Quantum-ideal number states can achieve the ultimate capacity in the limit of perfect transmissivity. Appropriate error correction codes are needed to communicate reliably near the capacity limits. Poisson-modeled noises, detector losses, and atmospheric effects must all be accounted for: a) Theoretical models are used to analyze performance degradations. b) Mitigation strategies derived from this analysis are applied to minimize these degradations.
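The claim that photon efficiency is unbounded for intensity modulation plus photon counting can be illustrated with a simple PPM model (a sketch under idealized Poisson-detection assumptions, not the authors' analysis): photon efficiency grows with the PPM order M, while spectral efficiency shrinks.

```python
import math

def ppm_rates(M, ns):
    """Photon and spectral efficiency of M-ary PPM with Poisson counting:
    a pulse of mean photon number ns goes undetected with prob exp(-ns),
    giving an M-ary erasure channel of capacity (1 - exp(-ns)) * log2(M)."""
    bits_per_word = (1 - math.exp(-ns)) * math.log2(M)
    return bits_per_word / ns, bits_per_word / M   # bits/photon, bits/slot

for M in (4, 64, 1024, 16384):
    bpp, bps = ppm_rates(M, ns=1.0)
    print(f"M={M:6d}: {bpp:5.2f} bits/photon, {bps:.5f} bits/slot")
```

Bits per photon grow without bound as M increases, but each PPM word occupies M slots, so the rate per unit bandwidth collapses: exactly the photon-versus-spectral-efficiency trade the abstract describes.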

  16. On ways to overcome the magical capacity limit of working memory.

    PubMed

    Turi, Zsolt; Alekseichuk, Ivan; Paulus, Walter

    2018-04-01

    The ability to simultaneously process and maintain multiple pieces of information is limited. Over the past 50 years, observational methods have provided a large amount of insight regarding the neural mechanisms that underpin the mental capacity that we refer to as "working memory." More than 20 years ago, a neural coding scheme was proposed for working memory. As a result of technological developments, we can now not only observe but can also influence brain rhythms in humans. Building on these novel developments, we have begun to externally control brain oscillations in order to extend the limits of working memory.

  17. Capacity of noncoherent MFSK channels

    NASA Technical Reports Server (NTRS)

    Bar-David, I.; Butman, S. A.; Klass, M. J.; Levitt, B. K.; Lyon, R. F.

    1974-01-01

    Performance limits theoretically achievable over noncoherent channels perturbed by additive Gaussian noise in hard decision, optimal, and soft decision receivers are computed as functions of the number of orthogonal signals and the predetection signal-to-noise ratio. Equations are derived for orthogonal signal capacity, the ultimate MFSK capacity, and the convolutional coding and decoding limit. It is shown that performance improves as the signal-to-noise ratio increases, provided the bandwidth can be increased, that the optimum number of signals is not infinite (except for the optimal receiver), and that the optimum number decreases as the signal-to-noise ratio decreases, but is never less than 7 for even the hard decision receiver.
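As a rough companion to these results, the hard-decision receiver can be modeled as an M-ary symmetric channel; the sketch below evaluates its capacity for a given symbol error probability p (illustrative values, not the paper's computed curves):

```python
import math

def msc_capacity(M, p):
    """Capacity (bits/symbol) of an M-ary symmetric channel with symbol
    error probability p spread evenly over the M - 1 wrong symbols --
    a standard model for a hard-decision noncoherent MFSK receiver."""
    if p == 0:
        return math.log2(M)
    return (math.log2(M)
            + (1 - p) * math.log2(1 - p)
            + p * math.log2(p / (M - 1)))

for M in (2, 8, 32):
    print(f"M={M:2d}, p=0.1: {msc_capacity(M, 0.1):.3f} bits/symbol")
```

For fixed p the capacity rises with M, but in the actual MFSK setting p itself depends on M and the signal-to-noise ratio, which is why the paper finds a finite optimum number of signals.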

  18. A rate-compatible family of protograph-based LDPC codes built by expurgation and lengthening

    NASA Technical Reports Server (NTRS)

    Dolinar, Sam

    2005-01-01

    We construct a protograph-based rate-compatible family of low-density parity-check codes that cover a very wide range of rates from 1/2 to 16/17, perform within about 0.5 dB of their capacity limits for all rates, and can be decoded conveniently and efficiently with a common hardware implementation.

  19. [INVITED] Luminescent QR codes for smart labelling and sensing

    NASA Astrophysics Data System (ADS)

    Ramalho, João F. C. B.; António, L. C. F.; Correia, S. F. H.; Fu, L. S.; Pinho, A. S.; Brites, C. D. S.; Carlos, L. D.; André, P. S.; Ferreira, R. A. S.

    2018-05-01

QR (Quick Response) codes are two-dimensional barcodes composed of special geometric patterns of black modules on a white square background that can encode different types of information with high density and robustness, correcting errors and tolerating physical damage, thus keeping the stored information protected. Recently, these codes have gained increased attention as they offer a simple physical tool for quick access to Web sites for advertising and social interaction. Challenges include increasing the storage-capacity limit, even though they can store approximately 350 times more information than common barcodes, and encoding different types of characters (e.g., numeric, alphanumeric, kanji and kana). In this work, we fabricate luminescent QR codes based on a poly(methyl methacrylate) substrate coated with organic-inorganic hybrid materials doped with trivalent terbium (Tb3+) and europium (Eu3+) ions, demonstrating a twofold increase in storage capacity per unit area through colour multiplexing, when compared to conventional QR codes. A novel methodology to decode the multiplexed QR codes is developed based on a colour-separation threshold, where a decision level is calculated through a maximum-likelihood criterion to minimize the error probability of the demultiplexed modules, maximizing the foreseen total storage capacity. Moreover, the thermal dependence of the emission colour coordinates of the Eu3+/Tb3+-based hybrids enables simultaneous QR-code colour multiplexing and temperature sensing (reproducibility higher than 93%), opening new fields of application for QR codes as smart labels for sensing.
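The colour-separation threshold described above can be sketched with a toy equal-variance Gaussian model (the intensities and noise level here are hypothetical, not the paper's measured values): the maximum-likelihood decision level is the midpoint of the two means, and the per-module error probability follows from the Gaussian tail.

```python
from statistics import NormalDist

def ml_threshold(mu0, mu1):
    """ML decision level separating two module colours whose measured
    intensity is Gaussian with equal variance: the midpoint of the means."""
    return (mu0 + mu1) / 2

def error_probability(mu0, mu1, sigma):
    """Probability of demultiplexing a module incorrectly: Q(d / (2 sigma)),
    where d is the separation of the two means."""
    d = abs(mu1 - mu0)
    return 1 - NormalDist().cdf(d / (2 * sigma))

# hypothetical normalized green-channel intensities for the two emitters
mu_tb, mu_eu, sigma = 0.30, 0.70, 0.08
print(f"threshold: {ml_threshold(mu_tb, mu_eu):.2f}")
print(f"module error probability: {error_probability(mu_tb, mu_eu, sigma):.2e}")
```

Unequal variances or priors would shift the optimal level away from the midpoint, which is why the paper computes it from the likelihoods rather than fixing it a priori.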

20. Accumulate-Repeat-Accumulate-Accumulate Codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Sam; Thorpe, Jeremy

    2004-01-01

Inspired by recently proposed Accumulate-Repeat-Accumulate (ARA) codes [15], in this paper we propose a channel coding scheme called Accumulate-Repeat-Accumulate-Accumulate (ARAA) codes. These codes can be seen as serial turbo-like codes or as a subclass of Low Density Parity Check (LDPC) codes, and they have a projected graph or protograph representation; this allows for a high-speed iterative decoder implementation using belief propagation. An ARAA code can be viewed as a precoded Repeat-and-Accumulate (RA) code with puncturing in concatenation with another accumulator, where simply an accumulator is chosen as the precoder; thus ARAA codes have a very fast encoder structure. Using density evolution on their associated protographs, we find examples of rate-1/2 ARAA codes with maximum variable node degree 4 for which a minimum bit-SNR as low as 0.21 dB from the channel capacity limit can be achieved as the block size goes to infinity. Such a low threshold cannot be achieved by RA or Irregular RA (IRA) or unstructured irregular LDPC codes with the same constraint on the maximum variable node degree. Furthermore by puncturing the accumulators we can construct families of higher rate ARAA codes with thresholds that stay close to their respective channel capacity thresholds uniformly. Iterative decoding simulation results show comparable performance with the best-known LDPC codes but with very low error floor even at moderate block sizes.

  1. Sh ble and Cre adapted for functional genomics and metabolic engineering of Pichia stipitis

    Treesearch

    Jose M. Laplaza; Beatriz Rivas Torres; Yong-Su Jin; Thomas W. Jeffries

    2006-01-01

Pichia stipitis is widely studied for its capacity to ferment d-xylose to ethanol. Strain improvement has been facilitated by the recent completion of the P. stipitis genome. P. stipitis uses CUG to code for serine rather than leucine, as in the universal genetic code, thereby limiting the availability of heterologous drug resistance markers for transformation...

  2. Cloning Components of Human Telomerase.

    DTIC Science & Technology

    1999-07-01

et al. 1990). Somatic cells have a limited replicative capacity (Hayflick 1961), and the lack of telomerase seems to be the reason for this, since...expression of telomerase in otherwise normal fibroblasts allows them to double indefinitely, escaping the Hayflick limit (Bodnar et al. 1998)...

  3. Cloning Components of Human Telomerase.

    DTIC Science & Technology

    1998-07-01

absent, and the cells are unable to double further. Somatic cells have a limited replicative capacity (Hayflick 1961), and the lack of telomerase... Hayflick limit (Bodnar et al. 1998). Immortal cells must have a method of maintaining telomeres, and indeed it has been found that immortalized cell lines...

  4. Two high-density recording methods with run-length limited turbo code for holographic data storage system

    NASA Astrophysics Data System (ADS)

    Nakamura, Yusuke; Hoshizawa, Taku

    2016-09-01

Two methods for increasing the data capacity of a holographic data storage system (HDSS) were developed. The first method is called “run-length-limited (RLL) high-density recording”. An RLL modulation has the same effect as enlarging the pixel pitch; namely, it optically reduces the hologram size. Accordingly, the method doubles the raw-data recording density. The second method is called “RLL turbo signal processing”. The RLL turbo code consists of RLL(1,∞) trellis modulation and an optimized convolutional code. The remarkable point of the developed turbo code is that it employs the RLL modulator and demodulator as parts of the error-correction process. The turbo code improves the capability of error correction more than a conventional LDPC code, even though interpixel interference is generated. These two methods will increase the data density 1.78-fold. Moreover, by simulation and experiment, a data density of 2.4 Tbit/in.2 is confirmed.
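The rate cost of the RLL(1,∞) constraint can be computed directly: its capacity is log2 of the largest eigenvalue of the constraint graph's adjacency matrix (the golden ratio), about 0.694 bits per channel symbol. A small sketch using standard constrained-coding theory (not taken from the paper):

```python
import math

def rll_capacity_1_inf(iters=200):
    """Capacity of the (1, inf) run-length-limited constraint (at least one
    0 between 1s): log2 of the largest eigenvalue of the constraint graph's
    adjacency matrix [[1, 1], [1, 0]], found here by power iteration.
    The eigenvalue is the golden ratio, so the capacity is ~0.694 bits."""
    v = [1.0, 1.0]
    lam = 1.0
    for _ in range(iters):
        w = [v[0] + v[1], v[0]]          # multiply v by [[1, 1], [1, 0]]
        lam = max(w)                     # Rayleigh-style estimate
        v = [x / lam for x in w]         # renormalize
    return math.log2(lam)

print(f"(1, inf) RLL capacity: {rll_capacity_1_inf():.4f} bits/symbol")
```

Any (1,∞) modulation code must therefore have rate below ~0.694, which is the raw-rate price paid for the optical benefit of the enlarged effective pixel pitch.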

  5. Second-Order Asymptotics for the Classical Capacity of Image-Additive Quantum Channels

    NASA Astrophysics Data System (ADS)

    Tomamichel, Marco; Tan, Vincent Y. F.

    2015-08-01

    We study non-asymptotic fundamental limits for transmitting classical information over memoryless quantum channels, i.e. we investigate the amount of classical information that can be transmitted when a quantum channel is used a finite number of times and a fixed, non-vanishing average error is permissible. In this work we consider the classical capacity of quantum channels that are image-additive, including all classical to quantum channels, as well as the product state capacity of arbitrary quantum channels. In both cases we show that the non-asymptotic fundamental limit admits a second-order approximation that illustrates the speed at which the rate of optimal codes converges to the Holevo capacity as the blocklength tends to infinity. The behavior is governed by a new channel parameter, called channel dispersion, for which we provide a geometrical interpretation.
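A classical analogue of this second-order approximation is easy to evaluate. For a binary symmetric channel the best achievable rate behaves as R ≈ C − sqrt(V/n)·Qinv(ε), with dispersion V = p(1 − p)·log2((1 − p)/p)²; the sketch below (illustrative parameters, not the quantum setting of the paper) shows the rate approaching capacity as the blocklength grows:

```python
import math
from statistics import NormalDist

def h2(p):
    """Binary entropy function (bits)."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def normal_approx_rate(p, n, eps):
    """Second-order (normal) approximation to the best achievable rate of a
    BSC(p) at blocklength n and average error probability eps:
    R ~ C - sqrt(V / n) * Qinv(eps), with channel dispersion
    V = p (1 - p) * log2((1 - p) / p)**2."""
    C = 1 - h2(p)
    V = p * (1 - p) * math.log2((1 - p) / p) ** 2
    qinv = NormalDist().inv_cdf(1 - eps)     # Q^{-1}(eps)
    return C - math.sqrt(V / n) * qinv

p, eps = 0.11, 1e-3
for n in (100, 1000, 10000):
    print(f"n={n:6d}: R ~ {normal_approx_rate(p, n, eps):.4f}  (C = {1 - h2(p):.4f})")
```

The O(1/sqrt(n)) back-off from capacity, governed entirely by the dispersion V, is exactly the kind of behavior the paper establishes for the Holevo capacity of image-additive quantum channels.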

  6. The frontal eye fields limit the capacity of visual short-term memory in rhesus monkeys.

    PubMed

    Lee, Kyoung-Min; Ahn, Kyung-Ha

    2013-01-01

The frontal eye fields (FEF) in rhesus monkeys have been implicated in visual short-term memory (VSTM) as well as in the control of visual attention. Here we examined the importance of this area in VSTM capacity and the relationship between VSTM and attention, using the chemical inactivation technique and multi-target saccade tasks with or without the need for target-location memory. During FEF inactivation, serial saccades to targets defined by color contrast were unaffected, but saccades relying on short-term memory were impaired when the target count was at the capacity limit of VSTM. The memory impairment was specific to the FEF-coded retinotopic locations, and subject to competition among targets distributed across visual fields. These results together suggest that the FEF plays a crucial role during the entry of information into VSTM, by enabling attention deployment on targets to be remembered. In this view, the memory capacity results from the limited availability of attentional resources provided by the FEF: the FEF can concurrently maintain only a limited number of activations to register targets into memory. When lesions render part of the area unavailable for activation, this number decreases, further reducing the capacity of VSTM.

  7. Evaluation of large girth LDPC codes for PMD compensation by turbo equalization.

    PubMed

    Minkov, Lyubomir L; Djordjevic, Ivan B; Xu, Lei; Wang, Ting; Kueppers, Franko

    2008-08-18

Large-girth quasi-cyclic LDPC codes have been experimentally evaluated for use in PMD compensation by turbo equalization in a 10 Gb/s NRZ optical transmission system, observing one sample per bit. The net effective coding gain improvement of the girth-10, rate-0.906 code of length 11936 over the maximum a posteriori probability (MAP) detector, for a differential group delay of 125 ps, is 6.25 dB at a BER of 10^-6. The girth-10 LDPC code of rate 0.8 outperforms the girth-10 code of rate 0.906 by 2.75 dB and provides a net effective coding gain improvement of 9 dB at the same BER. It is experimentally determined that girth-10 LDPC codes of length around 15000 approach the channel capacity limit within 1.25 dB.

  8. A reaction-diffusion-based coding rate control mechanism for camera sensor networks.

    PubMed

    Yamamoto, Hiroshi; Hyodo, Katsuya; Wakamiya, Naoki; Murata, Masayuki

    2010-01-01

A wireless camera sensor network is useful for surveillance and monitoring because of its visibility and easy deployment. However, it suffers from the limited capacity of wireless communication, and a network is easily overwhelmed by a considerable amount of video traffic. In this paper, we propose an autonomous video coding rate control mechanism where each camera sensor node can autonomously determine its coding rate in accordance with the location and velocity of target objects. For this purpose, we adopted a biological model, i.e., a reaction-diffusion model, inspired by the similarity between biological spatial patterns and the spatial distribution of video coding rate. Through simulation and practical experiments, we verify the effectiveness of our proposal.

  9. 47 CFR 22.901 - Cellular service requirements and limitations.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... (“AMPS”) to cellular telephones designed in conformance with the specifications contained in sections 1.../federal_register/code_of_federal_regulations/ibr_locations.html. (2) Provide AMPS, upon request, to... that the quality of AMPS provided, in terms of geographic coverage and traffic capacity, is fully...

  10. Coding Strategies and Implementations of Compressive Sensing

    NASA Astrophysics Data System (ADS)

    Tsai, Tsung-Han

This dissertation studies the coding strategies of computational imaging to overcome the limitations of conventional sensing techniques. The information capacity of conventional sensing is limited by the physical properties of optics, such as aperture size, detector pixels, quantum efficiency, and sampling rate. These parameters determine the spatial, depth, spectral, temporal, and polarization sensitivity of each imager. Increasing sensitivity in any one dimension can significantly compromise the others. This research implements various coding strategies for optical multidimensional imaging and acoustic sensing in order to extend their sensing abilities. The proposed coding strategies combine hardware modification and signal processing to exploit bandwidth and sensitivity from conventional sensors. We discuss the hardware architecture, compression strategies, sensing process modeling, and reconstruction algorithm of each sensing system. Optical multidimensional imaging measures three or more dimensions of the optical signal. Traditional multidimensional imagers acquire extra dimensional information at the cost of degrading temporal or spatial resolution. Compressive multidimensional imaging multiplexes the transverse spatial, spectral, temporal, and polarization information on a two-dimensional (2D) detector. The corresponding spectral, temporal and polarization coding strategies adapt optics, electronic devices, and designed modulation techniques for multiplex measurement. This computational imaging technique provides multispectral, temporal super-resolution, and polarization imaging abilities with minimal loss in spatial resolution and noise level while maintaining or gaining higher temporal resolution. The experimental results show that appropriate coding strategies can improve sensing capacity by a factor of hundreds.
The human auditory system has an astonishing ability to localize, track, and filter selected sound sources or information in a noisy environment. Accomplishing the same task by engineering usually requires multiple detectors, advanced computational algorithms, or artificial intelligence systems. Compressive acoustic sensing incorporates acoustic metamaterials into compressive sensing theory to emulate sound localization and selective attention. This research investigates and optimizes the sensing capacity and spatial sensitivity of the acoustic sensor. The well-modeled acoustic sensor can localize multiple speakers in both stationary and dynamic auditory scenes, and can distinguish mixed conversations from independent sources with a high audio recognition rate.
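
    The compressive sensing pipeline described above (multiplexed measurement followed by sparse reconstruction) can be sketched in a few lines; the signal sizes, random Gaussian sensing matrix, and orthogonal matching pursuit (OMP) solver below are illustrative choices, not the dissertation's actual hardware model:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 128, 8                     # signal length, measurements, sparsity

# Ground-truth k-sparse signal and a random Gaussian sensing (coding) matrix.
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true                            # multiplexed (compressed) measurement

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick the column most correlated
    with the residual, then re-fit by least squares on the chosen support."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

x_hat = omp(A, y, k)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

With far fewer measurements than signal samples (m = 128 < n = 256), the sparse signal is still recovered almost exactly, which is the mechanism that lets a 2D detector carry extra spectral, temporal, or polarization dimensions.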

  11. Development of Working Memory for Verbal-Spatial Associations

    ERIC Educational Resources Information Center

    Cowan, Nelson; Saults, J. Scott; Morey, Candice C.

    2006-01-01

    Verbal-to-spatial associations in working memory may index a core capacity for abstract information limited in the amount concurrently retained. However, what look like associative, abstract representations could instead reflect verbal and spatial codes held separately and then used in parallel. We investigated this issue in two experiments on…

  12. Analysis of automatic repeat request methods for deep-space downlinks

    NASA Technical Reports Server (NTRS)

    Pollara, F.; Ekroot, L.

    1995-01-01

    Automatic repeat request (ARQ) methods cannot increase the capacity of a memoryless channel. However, they can be used to decrease the complexity of the channel-coding system to achieve essentially error-free transmission and to reduce link margins when the channel characteristics are poorly predictable. This article considers ARQ methods on a power-limited channel (e.g., the deep-space channel), where it is important to minimize the total power needed to transmit the data, as opposed to a bandwidth-limited channel (e.g., terrestrial data links), where the spectral efficiency or the total required transmission time is the most relevant performance measure. In the analysis, we compare the performance of three reference concatenated coded systems used in actual deep-space missions to that obtainable by ARQ methods using the same codes, in terms of required power, time to transmit with a given number of retransmissions, and achievable probability of word error. The ultimate limits of ARQ with an arbitrary number of retransmissions are also derived.
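
    One step of this kind of analysis is easy to reproduce: with unlimited retransmissions on a memoryless channel, the number of transmissions per word is geometric, so the expected cost and the residual error after a limited number of retransmissions follow directly (a minimal sketch; the probabilities below are illustrative, not mission values):

```python
def expected_transmissions(p_w: float) -> float:
    """Mean transmissions per word under ideal ARQ with unlimited retries,
    when a single transmission fails independently with probability p_w."""
    assert 0.0 <= p_w < 1.0
    return 1.0 / (1.0 - p_w)

def residual_word_error(p_w: float, max_retx: int) -> float:
    """Probability a word is still in error after max_retx retransmissions."""
    return p_w ** (max_retx + 1)

for p_w in (0.1, 0.01):
    print(p_w, expected_transmissions(p_w), residual_word_error(p_w, 2))
```

This is why ARQ cannot raise capacity but can trade a small power or time overhead for an essentially error-free link on a power-limited channel.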

  13. Combustor Computations for CO2-Neutral Aviation

    NASA Technical Reports Server (NTRS)

    Hendricks, Robert C.; Brankovic, Andreja; Ryder, Robert C.; Huber, Marcia

    2011-01-01

    Knowing the pure-component C(sub p)(sup 0) or mixture C(sub p)(sup 0) as computed by a flexible code such as NIST-STRAPP or McBride-Gordon, one can, within reasonable accuracy, determine the thermophysical properties necessary to predict combustion characteristics when there are no tabulated or computed data for those fluid mixtures, or only limited results at lower temperatures. (Note: C(sub p)(sup 0) is the molar heat capacity at constant pressure.) The method can be used in the evaluation of synthetic and biological fuels and blends, using the NIST code to compute the C(sub p)(sup 0) of the mixture. In this work, the values of the heat capacity were set at zero pressure, which provided the basis for integration to determine the required combustor properties from the injector to the combustor exit plane. The McBride-Gordon code was used to determine the heat capacity at zero pressure over a wide range of temperatures (room temperature to 6,000 K). The selected fluids were Jet-A, 224TMP (octane), and C12. It was found that the heat-capacity loci were similar in form. It was then determined that the results (near 400 to 3,000 K) could be represented to within acceptable engineering accuracy by the simplified equation C(sub p)(sup 0) = A/T + B, where A and B are fluid-dependent constants and T is temperature (K).
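
    The simplified fit integrates in closed form, which is what makes it convenient for combustor property calculations; a quick numerical cross-check (the A and B values below are hypothetical placeholders, not the paper's fitted constants for Jet-A, 224TMP, or C12):

```python
import numpy as np

# Simplified fit C_p^0(T) = A/T + B.  A and B are fluid-dependent constants;
# the values here are hypothetical placeholders, not fitted NIST data.
A, B = 1.2e4, 150.0

def cp0(T):
    return A / T + B

def delta_h(T1, T2):
    """Enthalpy change from integrating C_p^0 dT: A*ln(T2/T1) + B*(T2 - T1)."""
    return A * np.log(T2 / T1) + B * (T2 - T1)

# Cross-check the closed form against trapezoidal quadrature over 400-3,000 K.
T = np.linspace(400.0, 3000.0, 200001)
c = cp0(T)
numeric = float(np.sum(0.5 * (c[:-1] + c[1:]) * np.diff(T)))
print(numeric, delta_h(400.0, 3000.0))
```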

  14. Implementation of generalized quantum measurements: Superadditive quantum coding, accessible information extraction, and classical capacity limit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takeoka, Masahiro; Fujiwara, Mikio; Mizuno, Jun

    2004-05-01

    Quantum-information theory predicts that when the transmission resource is doubled in quantum channels, the amount of information transmitted can be increased more than twice by quantum-channel coding techniques, whereas the increase is at most twice in classical information theory. This remarkable feature, the superadditive quantum-coding gain, can be implemented by appropriate choices of code words and corresponding quantum decoding, which requires a collective quantum measurement. Recently, an experimental demonstration was reported [M. Fujiwara et al., Phys. Rev. Lett. 90, 167906 (2003)]. The purpose of this paper is to describe our experiment in detail. In particular, a design strategy for quantum-collective decoding in physical quantum circuits is emphasized. We also address the practical implication of the gain on communication performance by introducing a quantum-classical hybrid coding scheme. We show how the superadditive quantum-coding gain, even at small code length, can boost the communication performance of conventional coding techniques.

  15. Optimal superdense coding over memory channels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shadman, Z.; Kampermann, H.; Bruss, D.

    2011-10-15

    We study the superdense coding capacity in the presence of quantum channels with correlated noise. We investigate both the cases of unitary and nonunitary encoding. Pauli channels for arbitrary dimensions are treated explicitly. The superdense coding capacity for some special channels and resource states is derived for unitary encoding. We also provide an example of a memory channel where nonunitary encoding leads to an improvement in the superdense coding capacity.

  16. FPGA-based LDPC-coded APSK for optical communication systems.

    PubMed

    Zou, Ding; Lin, Changyu; Djordjevic, Ivan B

    2017-02-20

    In this paper, with the aid of mutual information and generalized mutual information (GMI) capacity analyses, it is shown that geometrically shaped APSK that mimics an optimal Gaussian distribution with equiprobable signaling, together with the corresponding Gray-mapping rules, can approach the Shannon limit more closely than conventional quadrature amplitude modulation (QAM) over a certain range of FEC overhead for both 16-APSK and 64-APSK. The field-programmable gate array (FPGA) based LDPC-coded APSK emulation is conducted on block-interleaver-based and bit-interleaver-based systems; the results verify a significant improvement in hardware-efficient bit-interleaver-based systems. In bit-interleaver-based emulation, the LDPC-coded 64-APSK outperforms 64-QAM, in terms of symbol signal-to-noise ratio (SNR), by 0.1 dB, 0.2 dB, and 0.3 dB at spectral efficiencies of 4.8, 4.5, and 4.2 b/s/Hz, respectively. It is found by emulation that LDPC-coded 64-APSK for spectral efficiencies of 4.8, 4.5, and 4.2 b/s/Hz is 1.6 dB, 1.7 dB, and 2.2 dB away from the GMI capacity.

  17. Divided multimodal attention: sensory trace and context coding strategies in spatially congruent auditory and visual presentation.

    PubMed

    Kristjánsson, Tómas; Thorvaldsson, Tómas Páll; Kristjánsson, Arni

    2014-01-01

    Previous research involving both unimodal and multimodal studies suggests that single-response change detection is a capacity-free process while a discriminatory up or down identification is capacity-limited. The trace/context model assumes that this reflects different memory strategies rather than inherent differences between identification and detection. To perform such tasks, one of two strategies is used, a sensory trace or a context coding strategy, and if one is blocked, people will automatically use the other. A drawback to most preceding studies is that stimuli are presented at separate locations, creating the possibility of a spatial confound, which invites alternative interpretations of the results. We describe a series of experiments, investigating divided multimodal attention, without the spatial confound. The results challenge the trace/context model. Our critical experiment involved a gap before a change in volume and brightness, which according to the trace/context model blocks the sensory trace strategy, simultaneously with a roaming pedestal, which should block the context coding strategy. The results clearly show that people can use strategies other than sensory trace and context coding in the tasks and conditions of these experiments, necessitating changes to the trace/context model.

  18. Comparison of FDMA and CDMA for second generation land-mobile satellite communications

    NASA Technical Reports Server (NTRS)

    Yongacoglu, A.; Lyons, R. G.; Mazur, B. A.

    1990-01-01

    Code Division Multiple Access (CDMA) and Frequency Division Multiple Access (FDMA) (both analog and digital) system capacities are compared on the basis of identical link availabilities and physical propagation models. Parameters are optimized for a bandwidth-limited, multibeam environment. For CDMA, the benefits of voice-activated carriers, antenna discrimination, polarization reuse, return-link power control, and multipath suppression are included in the analysis. For FDMA, the advantages of bandwidth-efficient modulation/coding combinations, voice-activated carriers, polarization reuse, beam placement, and frequency staggering are taken into account.

  19. Is QR code an optimal data container in optical encryption systems from an error-correction coding perspective?

    PubMed

    Jiao, Shuming; Jin, Zhi; Zhou, Changyuan; Zou, Wenbin; Li, Xia

    2018-01-01

    Quick response (QR) code has been employed as a data carrier for optical cryptosystems in many recent research works, and the error-correction coding mechanism allows the decrypted result to be noise free. However, in this paper, we point out for the first time that the Reed-Solomon coding algorithm in QR code is not a very suitable option for the nonlocally distributed speckle noise in optical cryptosystems from an information coding perspective. The average channel capacity is proposed to measure the data storage capacity and noise-resistant capability of different encoding schemes. We design an alternative 2D barcode scheme based on Bose-Chaudhuri-Hocquenghem (BCH) coding, which demonstrates substantially better average channel capacity than QR code in numerical simulated optical cryptosystems.
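
    The "average channel capacity" figure of merit can be sketched under the simplest noise model: treat each region of the barcode as a binary symmetric channel and average the per-region capacities (the flip rates below are hypothetical, not the paper's measured speckle statistics):

```python
from math import log2

def h2(p: float) -> float:
    """Binary entropy function in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def bsc_capacity(p: float) -> float:
    """Capacity in bits per cell of a binary symmetric channel with flip probability p."""
    return 1.0 - h2(p)

# A code of rate R on a region with flip probability p is reliably decodable
# only if R < 1 - h2(p); averaging over a nonuniform (speckle-like) noise
# profile gives a crude version of the average-channel-capacity measure.
noise_profile = [0.01, 0.05, 0.12, 0.20]   # hypothetical per-region flip rates
avg_capacity = sum(bsc_capacity(p) for p in noise_profile) / len(noise_profile)
print(round(avg_capacity, 3))
```

A coding scheme (Reed-Solomon, BCH, ...) whose rate sits further below this average wastes storage; one whose rate exceeds the capacity of the noisiest regions fails there, which is the paper's argument against QR's Reed-Solomon layer for nonlocal speckle noise.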

  20. Impaired Letter-String Processing in Developmental Dyslexia: What Visual-to-Phonology Code Mapping Disorder?

    ERIC Educational Resources Information Center

    Valdois, Sylviane; Lassus-Sangosse, Delphine; Lobier, Muriel

    2012-01-01

    Poor parallel letter-string processing in developmental dyslexia was taken as evidence of poor visual attention (VA) span, that is, a limitation of visual attentional resources that affects multi-character processing. However, the use of letter stimuli in oral report tasks was challenged on its capacity to highlight a VA span disorder. In…

  1. Perceptually tuned low-bit-rate video codec for ATM networks

    NASA Astrophysics Data System (ADS)

    Chou, Chun-Hsien

    1996-02-01

    In order to maintain high visual quality in transmitting low bit-rate video signals over asynchronous transfer mode (ATM) networks, a layered coding scheme that incorporates the human visual system (HVS), motion compensation (MC), and conditional replenishment (CR) is presented in this paper. An empirical perceptual model is proposed to estimate the spatio-temporal just-noticeable distortion (STJND) profile for each frame, by which perceptually important (PI) prediction-error signals can be located. Because of the limited channel capacity of the base layer, only coded data of motion vectors, the PI signals within a small strip of the prediction-error image and, if there are remaining bits, the PI signals outside the strip are transmitted by the cells of the base-layer channel. The rest of the coded data are transmitted by the second-layer cells, which may be lost due to channel error or network congestion. Simulation results show that visual quality of the reconstructed CIF sequence is acceptable when the capacity of the base-layer channel is allocated 2 × 64 kbps and the cells of the second layer are all lost.

  2. Constructing LDPC Codes from Loop-Free Encoding Modules

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Samuel; Jones, Christopher; Thorpe, Jeremy; Andrews, Kenneth

    2009-01-01

    A method of constructing certain low-density parity-check (LDPC) codes by use of relatively simple loop-free coding modules has been developed. The subclasses of LDPC codes to which the method applies include accumulate-repeat-accumulate (ARA) codes, accumulate-repeat-check-accumulate codes, and the codes described in Accumulate-Repeat-Accumulate-Accumulate Codes (NPO-41305), NASA Tech Briefs, Vol. 31, No. 9 (September 2007), page 90. All of the affected codes can be characterized as serial/parallel (hybrid) concatenations of such relatively simple modules as accumulators, repetition codes, differentiators, and punctured single-parity-check codes. These are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. These codes can also be characterized as hybrid turbolike codes that have projected graph or protograph representations (for example, see figure); these characteristics make it possible to design high-speed iterative decoders that utilize belief-propagation algorithms. The present method comprises two related submethods for constructing LDPC codes from simple loop-free modules with circulant permutations. The first submethod is an iterative encoding method based on the erasure-decoding algorithm. The computations required by this method are well organized because they involve a parity-check matrix having a block-circulant structure. The second submethod involves the use of block-circulant generator matrices. The encoders of this method are very similar to those of recursive convolutional codes. Some encoders according to this second submethod have been implemented in a small field-programmable gate array that operates at a speed of 100 megasymbols per second.
By use of density evolution (a computational-simulation technique for analyzing the performance of LDPC codes), it has been shown through examples that as the block size goes to infinity, low iterative decoding thresholds close to channel-capacity limits can be achieved for codes of the type in question having low maximum variable-node degrees. The decoding thresholds in these examples are lower than those of the best-known unstructured irregular LDPC codes constrained to have the same maximum node degrees. Furthermore, the present method enables the construction of codes of any desired rate with thresholds that stay uniformly close to their respective channel-capacity thresholds.
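
    Density evolution is easy to reproduce for the binary erasure channel; this sketch tracks the erasure probability x_{l+1} = eps * (1 - (1 - x_l)^(dc-1))^(dv-1) for a plain regular (3,6) ensemble rather than the protograph codes of the article:

```python
def density_evolution_bec(eps, dv=3, dc=6, iters=2000):
    """Iterate the BEC density-evolution recursion for a regular (dv, dc)
    LDPC ensemble; returns the residual erasure probability."""
    x = eps
    for _ in range(iters):
        x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
    return x

# The decoding threshold is the largest eps for which x -> 0.  For the (3,6)
# ensemble it is about 0.4294, while the Shannon limit for rate 1/2 is 0.5.
print(density_evolution_bec(0.42))   # below threshold: converges toward 0
print(density_evolution_bec(0.44))   # above threshold: stuck at a nonzero fixed point
```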

  3. Quantum coding with finite resources.

    PubMed

    Tomamichel, Marco; Berta, Mario; Renes, Joseph M

    2016-05-09

    The quantum capacity of a memoryless channel determines the maximal rate at which we can communicate reliably over asymptotically many uses of the channel. Here we illustrate that this asymptotic characterization is insufficient in practical scenarios where decoherence severely limits our ability to manipulate large quantum systems in the encoder and decoder. In practical settings, we should instead focus on the optimal trade-off between three parameters: the rate of the code, the size of the quantum devices at the encoder and decoder, and the fidelity of the transmission. We find approximate and exact characterizations of this trade-off for various channels of interest, including dephasing, depolarizing and erasure channels. In each case, the trade-off is parameterized by the capacity and a second channel parameter, the quantum channel dispersion. In the process, we develop several bounds that are valid for general quantum channels and can be computed for small instances.
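
    The trade-off is commonly summarized by the second-order normal approximation R ≈ C + sqrt(V/n) * Phi^{-1}(eps), with n channel uses, tolerated error eps, capacity C, and dispersion V. The sketch below evaluates it for a dephasing channel with flip probability p, taking C = 1 - h(p) and, as an assumption broadly consistent with the dephasing analysis described here, the binary entropy variance as V; the numbers are illustrative only:

```python
from math import log2, sqrt
from statistics import NormalDist

def h2(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def v2(p):
    """Entropy variance of a Bernoulli(p) source, in bits^2 (assumed dispersion)."""
    return p * (1 - p) * log2((1 - p) / p) ** 2

def approx_rate(n, eps, p):
    """Second-order approximation of the best achievable rate over n channel
    uses with error tolerance eps; Phi^{-1}(eps) < 0 for eps < 1/2, so the
    finite-length rate sits below capacity."""
    C, V = 1.0 - h2(p), v2(p)
    return C + sqrt(V / n) * NormalDist().inv_cdf(eps)

p = 0.1
for n in (100, 1000, 10000):
    print(n, round(approx_rate(n, 0.01, p), 4))   # climbs toward C = 1 - h2(0.1)
```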

  4. Quantum coding with finite resources

    PubMed Central

    Tomamichel, Marco; Berta, Mario; Renes, Joseph M.

    2016-01-01

    The quantum capacity of a memoryless channel determines the maximal rate at which we can communicate reliably over asymptotically many uses of the channel. Here we illustrate that this asymptotic characterization is insufficient in practical scenarios where decoherence severely limits our ability to manipulate large quantum systems in the encoder and decoder. In practical settings, we should instead focus on the optimal trade-off between three parameters: the rate of the code, the size of the quantum devices at the encoder and decoder, and the fidelity of the transmission. We find approximate and exact characterizations of this trade-off for various channels of interest, including dephasing, depolarizing and erasure channels. In each case, the trade-off is parameterized by the capacity and a second channel parameter, the quantum channel dispersion. In the process, we develop several bounds that are valid for general quantum channels and can be computed for small instances. PMID:27156995

  5. Materials Genome in Action: Identifying the Performance Limits of Physical Hydrogen Storage

    PubMed Central

    2017-01-01

    The Materials Genome is in action: the molecular codes for millions of materials have been sequenced, predictive models have been developed, and now the challenge of hydrogen storage is targeted. Renewably generated hydrogen is an attractive transportation fuel with zero carbon emissions, but its storage remains a significant challenge. Nanoporous adsorbents have shown promising physical adsorption of hydrogen approaching targeted capacities, but the scope of studies has remained limited. Here the Nanoporous Materials Genome, containing over 850 000 materials, is analyzed with a variety of computational tools to explore the limits of hydrogen storage. Optimal features that maximize net capacity at room temperature include pore sizes of around 6 Å and void fractions of 0.1, while at cryogenic temperatures pore sizes of 10 Å and void fractions of 0.5 are optimal. Our top candidates are found to be commercially attractive as “cryo-adsorbents”, with promising storage capacities at 77 K and 100 bar with 30% enhancement to 40 g/L, a promising alternative to liquefaction at 20 K and compression at 700 bar. PMID:28413259

  6. Implementing controlled-unitary operations over the butterfly network

    NASA Astrophysics Data System (ADS)

    Soeda, Akihito; Kinjo, Yoshiyuki; Turner, Peter S.; Murao, Mio

    2014-12-01

    We introduce a multiparty quantum computation task over a network in a situation where the capacities of both the quantum and classical communication channels of the network are limited and a bottleneck occurs. Using a resource setting introduced by Hayashi [1], we present an efficient protocol for performing controlled-unitary operations between two input nodes and two output nodes over the butterfly network, one of the most fundamental networks exhibiting the bottleneck problem. This result opens the possibility of developing a theory of quantum network coding for multiparty quantum computation, whereas the conventional network coding only treats multiparty quantum communication.

  7. Implementing controlled-unitary operations over the butterfly network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soeda, Akihito; Kinjo, Yoshiyuki; Turner, Peter S.

    2014-12-04

    We introduce a multiparty quantum computation task over a network in a situation where the capacities of both the quantum and classical communication channels of the network are limited and a bottleneck occurs. Using a resource setting introduced by Hayashi [1], we present an efficient protocol for performing controlled-unitary operations between two input nodes and two output nodes over the butterfly network, one of the most fundamental networks exhibiting the bottleneck problem. This result opens the possibility of developing a theory of quantum network coding for multiparty quantum computation, whereas the conventional network coding only treats multiparty quantum communication.

  8. Coded Cooperation for Multiway Relaying in Wireless Sensor Networks †

    PubMed Central

    Si, Zhongwei; Ma, Junyang; Thobaben, Ragnar

    2015-01-01

    Wireless sensor networks have been considered as an enabling technology for constructing smart cities. One important feature of wireless sensor networks is that the sensor nodes collaborate in some manner for communications. In this manuscript, we focus on the model of multiway relaying with full data exchange where each user wants to transmit and receive data to and from all other users in the network. We derive the capacity region for this specific model and propose a coding strategy through coset encoding. To obtain good performance with practical codes, we choose spatially-coupled LDPC (SC-LDPC) codes for the coded cooperation. In particular, for the message broadcasting from the relay, we construct multi-edge-type (MET) SC-LDPC codes by repeatedly applying coset encoding. Due to the capacity-achieving property of the SC-LDPC codes, we prove that the capacity region can theoretically be achieved by the proposed MET SC-LDPC codes. Numerical results with finite node degrees are provided, which show that the achievable rates approach the boundary of the capacity region in both binary erasure channels and additive white Gaussian channels. PMID:26131675

  9. Coded Cooperation for Multiway Relaying in Wireless Sensor Networks.

    PubMed

    Si, Zhongwei; Ma, Junyang; Thobaben, Ragnar

    2015-06-29

    Wireless sensor networks have been considered as an enabling technology for constructing smart cities. One important feature of wireless sensor networks is that the sensor nodes collaborate in some manner for communications. In this manuscript, we focus on the model of multiway relaying with full data exchange where each user wants to transmit and receive data to and from all other users in the network. We derive the capacity region for this specific model and propose a coding strategy through coset encoding. To obtain good performance with practical codes, we choose spatially-coupled LDPC (SC-LDPC) codes for the coded cooperation. In particular, for the message broadcasting from the relay, we construct multi-edge-type (MET) SC-LDPC codes by repeatedly applying coset encoding. Due to the capacity-achieving property of the SC-LDPC codes, we prove that the capacity region can theoretically be achieved by the proposed MET SC-LDPC codes. Numerical results with finite node degrees are provided, which show that the achievable rates approach the boundary of the capacity region in both binary erasure channels and additive white Gaussian channels.

  10. Limited capacity in US pediatric drug trials: qualitative analysis of expert interviews.

    PubMed

    Wasserman, Richard; Bocian, Alison; Harris, Donna; Slora, Eric

    2011-04-01

    The recently renewed Best Pharmaceuticals for Children and Pediatric Research Equity Acts (BPCA/PREA) have continued industry incentives and opportunities for pediatric drug trials (PDTs). However, there is no current assessment of the capacity to perform PDTs. The aim of this study was to deepen understanding of the capacity for US PDTs by assessing PDT infrastructure, present barriers to PDTs, and potential approaches and solutions to identified issues. Pediatric clinical research experts participated in semi-structured interviews on current US pediatric research capacity (February-July 2007). An initial informant list was developed using purposive sampling, and supplemented and refined to generate a group of respondents to explore emerging themes. Each phone interview included a physician researcher and two health researchers who took notes and recorded the calls. Health researchers produced detailed summaries, which were verified by the physician researcher and informants. We then undertook qualitative analysis of the summaries, employing multiple coding, with the two health researchers and the physician researcher independently coding each summary for themes and subthemes. Coding variations were resolved by physician researcher/health researcher discussion and consensus achieved on themes and subthemes. The 33 informants' primary or secondary roles included academia (n = 21), federal official (5), industry medical officer (8), pediatric research network leader (10), pediatric specialist leader (8), pediatric clinical pharmacologist (5), and practitioner/research site director (9). While most experts noted an increase in PDTs since the initial passage of BPCA/PREA, a dominant theme of insufficient US PDT capacity emerged. 
Subthemes included (i) lack of systems for finding, incentivizing, and/or maintaining trial sites; (ii) complexity/demands of conducting PDTs in clinical settings; (iii) inadequate numbers of qualified pediatric pharmacologists and clinician investigators trained in FDA Good Clinical Practice; and (iv) poor PDT protocol design resulting in operational and enrollment difficulties in the pediatric population. Suggested potential solutions for insufficient PDT capacity included (i) consensus-building among stakeholders to create PDT systems; (ii) initiatives to train more pediatric pharmacologists and educate clinicians in Good Clinical Practice; (iii) advocacy for PDT protocols designed by individuals sensitive to pediatric issues; and (iv) physician and public education on the importance of PDTs. Insufficient US PDT capacity may hinder the development of new drugs for children and limit studies on the safety and efficacy of drugs presently used to treat pediatric conditions. Further public policy initiatives may be needed to achieve the full promise of BPCA/PREA.

  11. Capacity Maximizing Constellations

    NASA Technical Reports Server (NTRS)

    Barsoum, Maged; Jones, Christopher

    2010-01-01

    Some non-traditional signal constellations have been proposed for transmission of data over the Additive White Gaussian Noise (AWGN) channel using such channel-capacity-approaching codes as low-density parity-check (LDPC) or turbo codes. Computational simulations have shown performance gains of more than 1 dB over traditional constellations. These gains could be translated into bandwidth-efficient communications, variously, over longer distances, using less power, or using smaller antennas. The proposed constellations have been used in a bit-interleaved coded-modulation system employing state-of-the-art LDPC codes. In computational simulations, these constellations were shown to afford performance gains over traditional constellations, as predicted by the gap between the parallel decoding capacity of the constellations and the Gaussian capacity.
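
    The capacity quantities mentioned here start from the constellation-constrained mutual information, which is easy to estimate by Monte Carlo; the sketch below computes the joint (symbol-wise) mutual information of an equiprobable constellation on the AWGN channel, using QPSK as a stand-in since the proposed constellations are not tabulated in this abstract:

```python
import numpy as np

rng = np.random.default_rng(7)

def constellation_mi(points, snr_db, n_samples=50000):
    """Monte Carlo estimate of I(X;Y) in bits/symbol for equiprobable
    complex constellation points over an AWGN channel."""
    s = np.asarray(points, dtype=complex)
    s = s / np.sqrt(np.mean(np.abs(s) ** 2))      # normalize to unit symbol energy
    snr = 10.0 ** (snr_db / 10.0)
    sigma = np.sqrt(1.0 / (2.0 * snr))            # per-dimension noise std
    x = rng.choice(s, n_samples)
    y = x + sigma * (rng.standard_normal(n_samples)
                     + 1j * rng.standard_normal(n_samples))
    # I(X;Y) = log2(M) - E[log2(sum_j exp(-(|y-s_j|^2 - |y-x|^2) / (2 sigma^2)))]
    d2 = np.abs(y[:, None] - s[None, :]) ** 2
    dx2 = np.abs(y - x) ** 2
    log_ratio = np.log2(np.exp(-(d2 - dx2[:, None]) / (2 * sigma ** 2)).sum(axis=1))
    return np.log2(len(s)) - float(np.mean(log_ratio))

qpsk = [1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]
print(constellation_mi(qpsk, 10.0))   # approaches 2 bits/symbol at 10 dB SNR
```

Comparing this curve against the unconstrained Gaussian capacity log2(1 + SNR) exposes the shaping gap that the proposed constellations are designed to close.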

  12. Topics in quantum cryptography, quantum error correction, and channel simulation

    NASA Astrophysics Data System (ADS)

    Luo, Zhicheng

    In this thesis, we mainly investigate four different topics: efficiently implementable codes for quantum key expansion [51], quantum error-correcting codes based on privacy amplification [48], private classical capacity of quantum channels [44], and classical channel simulation with quantum side information [49, 50]. For the first topic, we propose an efficiently implementable quantum key expansion protocol, capable of increasing the size of a pre-shared secret key by a constant factor. Previously, the Shor-Preskill proof [64] of the security of the Bennett-Brassard 1984 (BB84) [6] quantum key distribution protocol relied on the theoretical existence of good classical error-correcting codes with the "dual-containing" property. But the explicit and efficiently decodable construction of such codes is unknown. We show that we can lift the dual-containing constraint by employing non-dual-containing codes with excellent performance and efficient decoding algorithms. For the second topic, we propose a construction of Calderbank-Shor-Steane (CSS) [19, 68] quantum error-correcting codes, which are originally based on pairs of mutually dual-containing classical codes, by combining a classical code with a two-universal hash function. We show, using the results of Renner and Koenig [57], that the communication rates of such codes approach the hashing bound on tensor powers of Pauli channels in the limit of large block length. For the third topic, we prove a regularized formula for the secret-key-assisted capacity region of a quantum channel for transmitting private classical information. This result parallels the work of Devetak on entanglement-assisted quantum communication capacity. The formula provides a new family of protocols, the private father protocol, under the resource inequality framework, which includes private classical communication without the assisted secret keys as a child protocol.
For the fourth topic, we study and solve the problem of classical channel simulation with quantum side information at the receiver. Our main theorem has two important corollaries: rate-distortion theory with quantum side information and common randomness distillation. Simple proofs of achievability of classical multi-terminal source coding problems can be made via a unified approach using the channel simulation theorem as building blocks. The fully quantum generalization of the problem is also conjectured with outer and inner bounds on the achievable rate pairs.

  13. Relating quantum discord with the quantum dense coding capacity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Xin; Qiu, Liang, E-mail: lqiu@cumt.edu.cn; Li, Song

    2015-01-15

    We establish the relations between quantum discord and the quantum dense coding capacity in (n + 1)-particle quantum states. A necessary condition for the vanishing discord monogamy score is given. We also find that the loss of quantum dense coding capacity due to decoherence is bounded below by the sum of quantum discord. When these results are restricted to three-particle quantum states, some complementarity relations are obtained.

  14. The "Wow! signal" of the terrestrial genetic code

    NASA Astrophysics Data System (ADS)

    shCherbak, Vladimir I.; Makukov, Maxim A.

    2013-05-01

    It has been repeatedly proposed to expand the scope for SETI, and one of the suggested alternatives to radio is the biological medium. Genomic DNA is already used on Earth to store non-biological information. Though smaller in capacity, the genetic code is stronger in noise immunity. The code is a flexible mapping between codons and amino acids, and this flexibility allows modifying the code artificially. But once fixed, the code might stay unchanged over cosmological timescales; in fact, it is the most durable construct known. Therefore it represents an exceptionally reliable storage for an intelligent signature, if that conforms to biological and thermodynamic requirements. As the actual scenario for the origin of terrestrial life is far from being settled, the proposal that it might have been seeded intentionally cannot be ruled out. A statistically strong intelligent-like "signal" in the genetic code is then a testable consequence of such a scenario. Here we show that the terrestrial code displays a thorough precision-type orderliness matching the criteria to be considered an informational signal. Simple arrangements of the code reveal an ensemble of arithmetical and ideographical patterns of the same symbolic language. Accurate and systematic, these underlying patterns appear as a product of precision logic and nontrivial computing rather than of stochastic processes (the null hypothesis that they are due to chance coupled with presumable evolutionary pathways is rejected with P-value < 10^-13). The patterns are profound to the extent that the code mapping itself is uniquely deduced from their algebraic representation. The signal displays readily recognizable hallmarks of artificiality, among which are the symbol of zero, the privileged decimal syntax and semantical symmetries. Besides, extraction of the signal involves logically straightforward but abstract operations, making the patterns essentially irreducible to any natural origin.
Plausible ways of embedding the signal into the code and possible interpretation of its content are discussed. Overall, while the code is nearly optimized biologically, its limited capacity is used extremely efficiently to pass non-biological information.

  15. Experimental realization of the analogy of quantum dense coding in classical optics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Zhenwei; Sun, Yifan; Li, Pengyun

    2016-06-15

    We report on the experimental realization of the analogy of quantum dense coding in classical optical communication using classical optical correlations. Compared to quantum dense coding that uses pairs of photons entangled in polarization, we find that the proposed design exhibits many advantages. Considering that it is convenient to realize in optical communication, the attainable channel capacity in the experiment for dense coding can reach 2 bits, which is higher than that of the usual quantum coding capacity (1.585 bits). This increased channel capacity has been proven experimentally by transmitting ASCII characters in 12 quaternary digits instead of the usual 24 bits.
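
    The capacity figures quoted above follow from simple alphabet-size arithmetic; a quick sanity check (illustrative only, not part of the experiment):

```python
from math import log2

# Noiseless channel capacity per symbol is log2(alphabet size).
dense_coding_bits = log2(4)    # 4 distinguishable Bell states -> 2 bits per photon
partial_bell_bits = log2(3)    # only 3 distinguishable outcomes -> ~1.585 bits

# 12 quaternary (4-ary) digits carry the same information as 24 bits:
assert 12 * dense_coding_bits == 24
```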

  16. Investigation of fast initialization of spacecraft bubble memory systems

    NASA Technical Reports Server (NTRS)

    Looney, K. T.; Nichols, C. D.; Hayes, P. J.

    1984-01-01

    Bubble domain technology offers significant improvement in reliability and functionality for spacecraft onboard memory applications. In considering potential memory system organizations, minimization of power in high-capacity bubble memory systems necessitates activating only the desired portions of the memory. Power strobing arbitrary memory segments requires a fast turn-on capability. Bubble device architectures that provide redundant loop coding in the bubble devices limit the initialization speed. Alternate initialization techniques are investigated to overcome this design limitation. An initialization technique using a small amount of external storage is demonstrated.

  17. Facile and High-Throughput Synthesis of Functional Microparticles with Quick Response Codes.

    PubMed

    Ramirez, Lisa Marie S; He, Muhan; Mailloux, Shay; George, Justin; Wang, Jun

    2016-06-01

    Encoded microparticles are in high demand for multiplexed assays and labeling. However, current methods for the synthesis and coding of microparticles either lack robustness and reliability or possess limited coding capacity. Here, a massive coding of dissociated elements (MiCODE) technology is introduced, based on a chemically reactive off-stoichiometry thiol-allyl photocurable polymer and standard lithography, to produce a large number of quick response (QR) code microparticles. The coding process is performed by photobleaching the QR code patterns onto microparticles when fluorophores are incorporated into the prepolymer formulation. The fabricated encoded microparticles can be released from a substrate without changing their features. Excess thiol functionality on the microparticle surface allows for grafting of amine groups and further DNA probes. A multiplexed assay is demonstrated using the DNA-grafted QR code microparticles. The MiCODE technology is further characterized by showing the incorporation of BODIPY-maleimide (BDP-M) and Nile Red fluorophores for coding and the use of microcontact printing for immobilizing DNA probes on microparticle surfaces. This versatile technology leverages mature lithography facilities for fabrication and thus is amenable to future scale-up, with potential applications in bioassays and in labeling consumer products. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Correcting quantum errors with entanglement.

    PubMed

    Brun, Todd; Devetak, Igor; Hsieh, Min-Hsiu

    2006-10-20

    We show how entanglement shared between encoder and decoder can simplify the theory of quantum error correction. The entanglement-assisted quantum codes we describe do not require the dual-containing constraint necessary for standard quantum error-correcting codes, thus allowing us to "quantize" all of classical linear coding theory. In particular, efficient modern classical codes that attain the Shannon capacity can be made into entanglement-assisted quantum codes attaining the hashing bound (closely related to the quantum capacity). For systems without large amounts of shared entanglement, these codes can also be used as catalytic codes, in which a small amount of initial entanglement enables quantum communication.

  19. Self-organizing feature maps for dynamic control of radio resources in CDMA microcellular networks

    NASA Astrophysics Data System (ADS)

    Hortos, William S.

    1998-03-01

    The application of artificial neural networks to the channel assignment problem for cellular code-division multiple access (CDMA) cellular networks has previously been investigated. CDMA takes advantage of voice activity and spatial isolation because its capacity is only interference limited, unlike time-division multiple access (TDMA) and frequency-division multiple access (FDMA) where capacities are bandwidth-limited. Any reduction in interference in CDMA translates linearly into increased capacity. To satisfy the high demands for new services and improved connectivity for mobile communications, microcellular and picocellular systems are being introduced. For these systems, there is a need to develop robust and efficient management procedures for the allocation of power and spectrum to maximize radio capacity. Topology-conserving mappings play an important role in the biological processing of sensory inputs. The same principles underlying Kohonen's self-organizing feature maps (SOFMs) are applied to the adaptive control of radio resources to minimize interference and hence maximize capacity in direct-sequence (DS) CDMA networks. The approach based on SOFMs is applied to some published examples of both theoretical and empirical models of DS/CDMA microcellular networks in metropolitan areas. The results of the approach for these examples are informally compared to the performance of algorithms, based on Hopfield-Tank neural networks and on genetic algorithms, for the channel assignment problem.

  20. Optimal decoding and information transmission in Hodgkin-Huxley neurons under metabolic cost constraints.

    PubMed

    Kostal, Lubomir; Kobayashi, Ryota

    2015-10-01

    Information theory quantifies the ultimate limits on reliable information transfer by means of the channel capacity. However, the channel capacity is known to be an asymptotic quantity, assuming unlimited metabolic cost and computational power. We investigate a single-compartment Hodgkin-Huxley type neuronal model under the spike-rate coding scheme and address how the metabolic cost and the decoding complexity affect the optimal information transmission. We find that the sub-threshold stimulation regime, although attaining the smallest capacity, allows for the most efficient balance between the information transmission and the metabolic cost. Furthermore, we determine post-synaptic firing rate histograms that are optimal from the information-theoretic point of view, which enables the comparison of our results with experimental data. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
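
    The channel capacity invoked here can be computed numerically for any discrete memoryless channel via the standard Blahut-Arimoto algorithm; a minimal sketch (the textbook algorithm only, not the authors' neuronal model — the channel matrix format is an assumption for illustration):

```python
import numpy as np

def blahut_arimoto(W, iters=200):
    """Capacity (bits) of a discrete memoryless channel, W[x, y] = P(y | x)."""
    n_in = W.shape[0]
    p = np.full(n_in, 1.0 / n_in)           # input distribution, start uniform
    for _ in range(iters):
        q = p[:, None] * W                  # joint p(x) P(y|x)
        q /= q.sum(axis=0, keepdims=True)   # posterior q(x | y)
        # p(x) <- exp( sum_y P(y|x) log q(x|y) ), then normalize
        logq = np.where(W > 0, np.log(np.where(q > 0, q, 1.0)), 0.0)
        r = np.exp((W * logq).sum(axis=1))
        p = r / r.sum()
    py = p @ W                              # output distribution
    terms = np.where(W > 0, W * np.log2(np.where(py > 0, W / py, 1.0)), 0.0)
    return float(p @ terms.sum(axis=1))     # mutual information at final p

# Binary symmetric channel with crossover 0.1: capacity = 1 - H2(0.1) ~ 0.531 bits
bsc = np.array([[0.9, 0.1], [0.1, 0.9]])
```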

  1. Polar codes for achieving the classical capacity of a quantum channel

    NASA Astrophysics Data System (ADS)

    Guha, Saikat; Wilde, Mark

    2012-02-01

    We construct the first near-explicit, linear, polar codes that achieve the capacity for classical communication over quantum channels. The codes exploit the channel polarization phenomenon observed by Arikan for classical channels. Channel polarization is an effect in which one can synthesize a set of channels, by "channel combining" and "channel splitting," in which a fraction of the synthesized channels is perfect for data transmission while the other fraction is completely useless for data transmission, with the good fraction equal to the capacity of the channel. Our main technical contributions are threefold. First, we demonstrate that the channel polarization effect occurs for channels with classical inputs and quantum outputs. We then construct linear polar codes based on this effect, and the encoding complexity is O(N log N), where N is the blocklength of the code. We also demonstrate that a quantum successive cancellation decoder works well, i.e., the word error rate decays exponentially with the blocklength of the code. For a quantum channel with binary pure-state outputs, such as a binary-phase-shift-keyed coherent-state optical communication alphabet, the symmetric Holevo information rate is in fact the ultimate channel capacity, which is achieved by our polar code.
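
    The polarization effect described above is easiest to see for the classical binary erasure channel (BEC), where the synthesized channels' erasure probabilities follow a closed-form recursion; a toy illustration (classical BEC only — an assumption far simpler than the quantum-output channels treated in the paper):

```python
# Combining two copies of a BEC with erasure probability z synthesizes one
# worse channel (erasure 2z - z^2) and one better channel (erasure z^2).
# Iterating drives almost every channel toward 0 or 1; the fraction of
# near-perfect channels approaches the capacity 1 - eps.
def polarize(eps, levels):
    zs = [eps]
    for _ in range(levels):
        zs = [f(z) for z in zs for f in (lambda z: 2*z - z*z, lambda z: z*z)]
    return zs

eps = 0.5
zs = polarize(eps, 14)                      # 2^14 synthesized channels
good = sum(z < 0.01 for z in zs) / len(zs)  # near-perfect fraction

# The average erasure probability is conserved exactly at every level:
assert abs(sum(zs) / len(zs) - eps) < 1e-9
print(good)   # creeps up toward the capacity 1 - eps = 0.5 as levels grow
```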

  2. Reed-Solomon Codes and the Deep Hole Problem

    NASA Astrophysics Data System (ADS)

    Keti, Matt

    In many types of modern communication, a message is transmitted over a noisy medium. When this is done, there is a chance that the message will be corrupted. An error-correcting code adds redundant information to the message which allows the receiver to detect and correct errors accrued during the transmission. We will study the famous Reed-Solomon code (found in QR codes, compact discs, deep space probes, ...) and investigate the limits of its error-correcting capacity. It can be shown that understanding this is related to understanding the "deep hole" problem, which is a question of determining when a received message has, in a sense, incurred the worst possible corruption. We partially resolve this in its traditional context, when the code is based on the finite field F_q or its multiplicative group F_q^*, as well as in new contexts, when it is based on a subgroup of F_q^* or the image of a Dickson polynomial. This is a new and important problem that could give insight on the true error-correcting potential of the Reed-Solomon code.
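
    The evaluation view of Reed-Solomon codes underlying the deep hole problem can be sketched in a few lines; a toy erasure-only example over the prime field F_257 (parameters chosen for illustration — real deployments use GF(2^8) and full error decoding, not just erasures):

```python
P = 257  # prime modulus, so every nonzero element has an inverse via Fermat

def lagrange_eval(points, x0):
    """Evaluate the unique degree-<k polynomial through `points` at x0 (mod P)."""
    total = 0
    for xi, yi in points:
        num, den = 1, 1
        for xj, _ in points:
            if xj != xi:
                num = num * (x0 - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def rs_encode(msg, n):
    """Systematic encoding: message symbols (< P) are codeword values at 0..k-1."""
    k = len(msg)
    pts = list(enumerate(msg))
    return msg + [lagrange_eval(pts, x) for x in range(k, n)]

def rs_recover(survivors, k):
    """Recover the message from any k surviving (position, value) pairs."""
    return [lagrange_eval(survivors, x) for x in range(k)]

msg = [10, 20, 30, 40]
code = rs_encode(msg, n=8)                         # tolerates up to 4 erasures
survivors = [(i, code[i]) for i in (1, 3, 6, 7)]   # only 4 symbols survive
assert rs_recover(survivors, k=4) == msg
```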

  3. Towards measuring the semantic capacity of a physical medium demonstrated with elementary cellular automata.

    PubMed

    Dittrich, Peter

    2018-02-01

    The organic code concept and its operationalization by molecular codes have been introduced to study the semiotic nature of living systems. This contribution develops further the idea that the semantic capacity of a physical medium can be measured by assessing its ability to implement a code as a contingent mapping. For demonstration and evaluation, the approach is applied to a formal medium: elementary cellular automata (ECA). The semantic capacity is measured by counting the number of ways codes can be implemented. Additionally, a link to information theory is established by taking multivariate mutual information for quantifying contingency. It is shown how ECAs differ in their semantic capacities, how this is related to various ECA classifications, and how this depends on how a meaning is defined. Interestingly, if the meaning should persist for a certain while, the highest semantic capacity is found in CAs with apparently simple behavior, i.e., the fixed-point and two-cycle class. Synergy as a predictor for a CA's ability to implement codes can only be used if contexts implementing codes are common. For large context spaces with sparse coding contexts, synergy is a weak predictor. In conclusion, the approach presented here can distinguish CA-like systems with respect to their ability to implement contingent mappings. Applying this to physical systems appears straightforward and might lead to a novel physical property indicating how suitable a physical medium is to implement a semiotic system. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. DNA Barcoding through Quaternary LDPC Codes

    PubMed Central

    Tapia, Elizabeth; Spetale, Flavio; Krsticevic, Flavia; Angelone, Laura; Bulacio, Pilar

    2015-01-01

    For many parallel applications of Next-Generation Sequencing (NGS) technologies short barcodes able to accurately multiplex a large number of samples are demanded. To address these competitive requirements, the use of error-correcting codes is advised. Current barcoding systems are mostly built from short random error-correcting codes, a feature that strongly limits their multiplexing accuracy and experimental scalability. To overcome these problems on sequencing systems impaired by mismatch errors, the alternative use of binary BCH and pseudo-quaternary Hamming codes has been proposed. However, these codes either fail to provide a fine-scale with regard to size of barcodes (BCH) or have intrinsic poor error correcting abilities (Hamming). Here, the design of barcodes from shortened binary BCH codes and quaternary Low Density Parity Check (LDPC) codes is introduced. Simulation results show that although accurate barcoding systems of high multiplexing capacity can be obtained with any of these codes, using quaternary LDPC codes may be particularly advantageous due to the lower rates of read losses and undetected sample misidentification errors. Even at mismatch error rates of 10^-2 per base, 24-nt LDPC barcodes can be used to multiplex roughly 2000 samples with a sample misidentification error rate in the order of 10^-9 at the expense of a rate of read losses just in the order of 10^-6. PMID:26492348
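
    The multiplexing accuracy discussed above hinges on the minimum pairwise Hamming distance of the barcode set: a set with minimum distance d >= 2t + 1 is guaranteed to correct t mismatches per read. A minimal checker (the barcode strings below are made-up examples, not from the paper):

```python
from itertools import combinations

def hamming(a, b):
    """Number of positions at which two equal-length strings differ."""
    return sum(x != y for x, y in zip(a, b))

def min_distance(barcodes):
    """Minimum pairwise Hamming distance over the whole barcode set."""
    return min(hamming(a, b) for a, b in combinations(barcodes, 2))

barcodes = ["ACGTACGT", "TTGCAACG", "GACCTGTA", "CGTAGTAC"]
d = min_distance(barcodes)
t = (d - 1) // 2        # guaranteed correctable mismatches per read
print(d, t)
```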

  6. A Plastic Temporal Brain Code for Conscious State Generation

    PubMed Central

    Dresp-Langley, Birgitta; Durup, Jean

    2009-01-01

    Consciousness is known to be limited in processing capacity and often described in terms of a unique processing stream across a single dimension: time. In this paper, we discuss a purely temporal pattern code, functionally decoupled from spatial signals, for conscious state generation in the brain. Arguments in favour of such a code include Dehaene et al.'s long-distance reverberation postulate, Ramachandran's remapping hypothesis, evidence for a temporal coherence index and coincidence detectors, and Grossberg's Adaptive Resonance Theory. A time-bin resonance model is developed, where temporal signatures of conscious states are generated on the basis of signal reverberation across large distances in highly plastic neural circuits. The temporal signatures are delivered by neural activity patterns which, beyond a certain statistical threshold, activate, maintain, and terminate a conscious brain state like a bar code would activate, maintain, or inactivate the electronic locks of a safe. Such temporal resonance would reflect a higher level of neural processing, independent from sensorial or perceptual brain mechanisms. PMID:19644552

  7. A thesaurus for a neural population code

    PubMed Central

    Ganmor, Elad; Segev, Ronen; Schneidman, Elad

    2015-01-01

    Information is carried in the brain by the joint spiking patterns of large groups of noisy, unreliable neurons. This noise limits the capacity of the neural code and determines how information can be transmitted and read-out. To accurately decode, the brain must overcome this noise and identify which patterns are semantically similar. We use models of network encoding noise to learn a thesaurus for populations of neurons in the vertebrate retina responding to artificial and natural videos, measuring the similarity between population responses to visual stimuli based on the information they carry. This thesaurus reveals that the code is organized in clusters of synonymous activity patterns that are similar in meaning but may differ considerably in their structure. This organization is highly reminiscent of the design of engineered codes. We suggest that the brain may use this structure and show how it allows accurate decoding of novel stimuli from novel spiking patterns. DOI: http://dx.doi.org/10.7554/eLife.06134.001 PMID:26347983

  8. QR code optical encryption using spatially incoherent illumination

    NASA Astrophysics Data System (ADS)

    Cheremkhin, P. A.; Krasnov, V. V.; Rodin, V. G.; Starikov, R. S.

    2017-02-01

    Optical encryption is an actively developing field of science. The majority of encryption techniques use coherent illumination and suffer from speckle noise, which severely limits their applicability. The spatially incoherent encryption technique does not have this drawback, but its effectiveness depends on the Fourier spectrum properties of the image to be encrypted. The application of a quick response (QR) code as a data container solves this problem, and the embedded error correction code also enables errorless decryption. The optical encryption of digital information in the form of QR codes using spatially incoherent illumination was implemented experimentally. The encryption is based on the optical convolution of the image to be encrypted with the kinoform point spread function, which serves as an encryption key. Two liquid crystal spatial light modulators were used in the experimental setup for the QR code and the kinoform imaging, respectively. The quality of the encryption and decryption was analyzed in relation to the QR code size. Decryption was conducted digitally. The successful decryption of encrypted QR codes of up to 129 × 129 pixels was demonstrated. A comparison with the coherent QR code encryption technique showed that the proposed technique has a signal-to-noise ratio that is at least two times higher.

  9. Moderate Deviation Analysis for Classical Communication over Quantum Channels

    NASA Astrophysics Data System (ADS)

    Chubb, Christopher T.; Tan, Vincent Y. F.; Tomamichel, Marco

    2017-11-01

    We analyse families of codes for classical data transmission over quantum channels that have both a vanishing probability of error and a code rate approaching capacity as the code length increases. To characterise the fundamental tradeoff between decoding error, code rate and code length for such codes we introduce a quantum generalisation of the moderate deviation analysis proposed by Altuğ and Wagner as well as Polyanskiy and Verdú. We derive such a tradeoff for classical-quantum (as well as image-additive) channels in terms of the channel capacity and the channel dispersion, giving further evidence that the latter quantity characterises the necessary backoff from capacity when transmitting finite blocks of classical data. To derive these results we also study asymmetric binary quantum hypothesis testing in the moderate deviations regime. Due to the central importance of the latter task, we expect that our techniques will find further applications in the analysis of other quantum information processing tasks.

  10. Using Third-Party Inspectors in Building Energy Codes Enforcement in India

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Sha; Evans, Meredydd; Kumar, Pradeep

    India is experiencing fast income growth and urbanization, and this leads to unprecedented increases in demand for building energy services and resulting energy consumption. In response to rapid growth in building energy use, the Government of India issued the Energy Conservation Building Code (ECBC) in 2007, which is consistent with and based on the 2001 Energy Conservation Act. ECBC implementation has been voluntary since its enactment and a few states have started to make progress towards mandatory implementation. Rajasthan is the first state in India to adopt ECBC as a mandatory code. The State adopted ECBC with minor additions on March 28, 2011 through a stakeholder process; it became mandatory in Rajasthan on September 28, 2011. Tamil Nadu, Gujarat, and Andhra Pradesh have started to draft an implementation roadmap and build capacity for its implementation. The Bureau of Energy Efficiency (BEE) plans to encourage more states to adopt ECBC in the near future, including Haryana, Uttar Pradesh, Karnataka, Maharashtra, West Bengal, and Delhi. Since its inception, India has applied the code on a voluntary basis, but the Government of India is developing a strategy to mandate compliance. Implementing ECBC requires coordination between the Ministry of Power and the Ministry of Urban Development at the national level as well as interdepartmental coordination at the state level. One challenge is that the Urban Local Bodies (ULBs), the enforcement entities of building by-laws, lack capacity to implement ECBC effectively. For example, ULBs in some states might find the building permitting procedures to be too complex; in other cases, lack of awareness and technical knowledge on ECBC slows down the amendment of local building by-laws as well as ECBC implementation. The intent of this white paper is to share with Indian decision-makers code enforcement approaches: through code officials, third-party inspectors, or a hybrid approach. Given the limited capacity and human resources available in the state and local governments, involving third-party inspectors could rapidly expand the capacity for plan reviews and broad implementation. However, the procedures for involving third parties need to be carefully designed in order to guarantee a fair process. For example, there should be multiple checks and certification requirements for third-party inspectors, and the government should have the final approval when third-party inspectors are used in a project. This paper discusses different approaches to involving third parties in ECBC enforcement; the Indian states may choose the approaches that work best in their given circumstances.

  11. Applications of Derandomization Theory in Coding

    NASA Astrophysics Data System (ADS)

    Cheraghchi, Mahdi

    2011-07-01

    Randomized techniques play a fundamental role in theoretical computer science and discrete mathematics, in particular for the design of efficient algorithms and construction of combinatorial objects. The basic goal in derandomization theory is to eliminate or reduce the need for randomness in such randomized constructions. In this thesis, we explore some applications of the fundamental notions in derandomization theory to problems outside the core of theoretical computer science, and in particular, certain problems related to coding theory. First, we consider the wiretap channel problem which involves a communication system in which an intruder can eavesdrop a limited portion of the transmissions, and construct efficient and information-theoretically optimal communication protocols for this model. Then we consider the combinatorial group testing problem. In this classical problem, one aims to determine a set of defective items within a large population by asking a number of queries, where each query reveals whether a defective item is present within a specified group of items. We use randomness condensers to explicitly construct optimal, or nearly optimal, group testing schemes for a setting where the query outcomes can be highly unreliable, as well as the threshold model where a query returns positive if the number of defectives pass a certain threshold. Finally, we design ensembles of error-correcting codes that achieve the information-theoretic capacity of a large class of communication channels, and then use the obtained ensembles for construction of explicit capacity achieving codes. [This is a shortened version of the actual abstract in the thesis.]

  12. A novel all-optical label processing for OPS networks based on multiple OOC sequences from multiple-groups OOC

    NASA Astrophysics Data System (ADS)

    Qiu, Kun; Zhang, Chongfu; Ling, Yun; Wang, Yibo

    2007-11-01

    This paper proposes an all-optical label processing scheme using multiple optical orthogonal code sequences (MOOCS) for optical packet switching (OPS) (MOOCS-OPS) networks, for the first time to the best of our knowledge. In this scheme, the multiple optical orthogonal codes (MOOC) from multiple-group optical orthogonal codes (MGOOC) are permuted and combined to obtain the MOOCS for the optical labels, which effectively enlarges the pool of optical codes available for labels. Optical label processing (OLP) schemes are reviewed and analyzed, the principles of MOOCS-based optical labels for OPS networks are given and analyzed, and then the MOOCS-OPS topology and the key units realizing MOOCS-based optical label packets are studied in detail. The performance of this novel all-optical label processing technique is analyzed, and the corresponding simulation is performed. The analysis and results show that the proposed scheme can overcome the shortage of available optical orthogonal code (OOC)-based optical labels caused by the limited number of single OOCs with short code length, and indicate that the MOOCS-OPS scheme is feasible.

  13. Low-density parity-check codes for volume holographic memory systems.

    PubMed

    Pishro-Nik, Hossein; Rahnavard, Nazanin; Ha, Jeongseok; Fekri, Faramarz; Adibi, Ali

    2003-02-10

    We investigate the application of low-density parity-check (LDPC) codes in volume holographic memory (VHM) systems. We show that a carefully designed irregular LDPC code has very good performance in VHM systems. We optimize high-rate LDPC codes for the nonuniform error pattern in holographic memories to reduce the bit error rate substantially. The prior knowledge of the noise distribution is used for designing as well as decoding the LDPC codes. We show that these codes have performance superior to that of Reed-Solomon (RS) codes and regular LDPC counterparts. Our simulation shows that we can increase the maximum storage capacity of holographic memories by more than 50 percent if we use irregular LDPC codes with soft-decision decoding instead of the conventionally employed RS codes with hard-decision decoding. The performance of these LDPC codes is close to the information-theoretic capacity.
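
    Hard-decision decoding, the baseline the authors improve on, can be illustrated with syndrome decoding on a tiny parity-check code; a (7,4) Hamming toy stand-in (much simpler than the irregular LDPC codes in the paper, but it shows what a hard decision on a parity-check syndrome means):

```python
def hamming74_encode(data):
    """data: 4 bits -> 7-bit codeword, positions 1..7, parity at 1, 2, 4."""
    cw = [0] * 8                       # index 0 unused
    for pos, bit in zip((3, 5, 6, 7), data):
        cw[pos] = bit
    s = 0
    for i in range(1, 8):
        if cw[i]:
            s ^= i                     # syndrome = XOR of set positions
    for j in (1, 2, 4):                # setting parity bit j toggles bit j of s
        if s & j:
            cw[j] = 1
    return cw[1:]

def hamming74_correct(word):
    """Correct one bit flip; returns (corrected word, error position or 0)."""
    s = 0
    for i, bit in enumerate(word, start=1):
        if bit:
            s ^= i                     # nonzero syndrome = flipped position
    fixed = list(word)
    if s:
        fixed[s - 1] ^= 1
    return fixed, s

cw = hamming74_encode([1, 0, 1, 1])
noisy = list(cw); noisy[4] ^= 1        # flip position 5
fixed, err = hamming74_correct(noisy)
assert fixed == cw and err == 5
```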

  14. The experimental verification on the shear bearing capacity of exposed steel column foot

    NASA Astrophysics Data System (ADS)

    Xijin, LIU

    2017-04-01

    Regarding the shear bearing capacity of the exposed steel column foot, there have been many studies both in China and abroad. However, most are limited to theoretical analysis, and few include experimental analysis. Based on the prototype of an industrial plant in Beijing, this paper designs an experimental model composed of six steel structural members in two groups: three members without a shear key and three members with a shear key. The paper checks the shear bearing capacity of the two groups under different axial forces. The experiment shows that the anchor bolts of the exposed steel column foot provide a relatively large shear bearing capacity that cannot be neglected. The results derived from the calculation methods proposed in this paper for the two situations match the experimental results for the shear bearing capacity of the steel column foot. The paper also proposes suggestions for revising the Code for Design of Steel Structures with respect to setting the shear key in the steel column foot.

  15. Lithographically encoded polymer microtaggant using high-capacity and error-correctable QR code for anti-counterfeiting of drugs.

    PubMed

    Han, Sangkwon; Bae, Hyung Jong; Kim, Junhoi; Shin, Sunghwan; Choi, Sung-Eun; Lee, Sung Hoon; Kwon, Sunghoon; Park, Wook

    2012-11-20

    A QR-coded microtaggant for the anti-counterfeiting of drugs is proposed that can provide high capacity and error-correction capability. It is fabricated lithographically in a microfluidic channel with special consideration of the island patterns in the QR Code. The microtaggant is incorporated in the drug capsule ("on-dose authentication") and can be read by a simple smartphone QR Code reader application when removed from the capsule and washed free of drug. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Dissociating action-effect activation and effect-based response selection.

    PubMed

    Schwarz, Katharina A; Pfister, Roland; Wirth, Robert; Kunde, Wilfried

    2018-05-25

    Anticipated action effects have been shown to govern action selection and initiation, as described in ideomotor theory, and they have also been demonstrated to determine crosstalk between different tasks in multitasking studies. Such effect-based crosstalk was observed not only in a forward manner (with a first task influencing performance in a following second task) but also in a backward manner (the second task influencing the preceding first task), suggesting that action effect codes can become activated prior to a capacity-limited processing stage often denoted as response selection. The process of effect-based response production, by contrast, has been proposed to be capacity-limited. These observations jointly suggest that effect code activation can occur independently of effect-based response production, though this theoretical implication has not been tested directly at present. We tested this hypothesis by employing a dual-task set-up in which we manipulated the ease of effect-based response production (via response-effect compatibility) in an experimental design that allows for observing forward and backward crosstalk. We observed robust crosstalk effects and response-effect compatibility effects alike, but no interaction between both effects. These results indicate that effect activation can occur in parallel for several tasks, independently of effect-based response production, which is confined to one task at a time. Copyright © 2018 Elsevier B.V. All rights reserved.

  17. Materials genome in action: Identifying the performance limits of physical hydrogen storage

    DOE PAGES

    Thornton, Aaron W.; Simon, Cory M.; Kim, Jihan; ...

    2017-03-08

    The Materials Genome is in action: the molecular codes for millions of materials have been sequenced, predictive models have been developed, and now the challenge of hydrogen storage is targeted. Renewably generated hydrogen is an attractive transportation fuel with zero carbon emissions, but its storage remains a significant challenge. Nanoporous adsorbents have shown promising physical adsorption of hydrogen approaching targeted capacities, but the scope of studies has remained limited. Here the Nanoporous Materials Genome, containing over 850 000 materials, is analyzed with a variety of computational tools to explore the limits of hydrogen storage. Optimal features that maximize net capacity at room temperature include pore sizes of around 6 Å and void fractions of 0.1, while at cryogenic temperatures pore sizes of 10 Å and void fractions of 0.5 are optimal. Finally, our top candidates are found to be commercially attractive as “cryo-adsorbents”, with promising storage capacities at 77 K and 100 bar with 30% enhancement to 40 g/L, a promising alternative to liquefaction at 20 K and compression at 700 bar.

  18. Materials genome in action: Identifying the performance limits of physical hydrogen storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thornton, Aaron W.; Simon, Cory M.; Kim, Jihan

    The Materials Genome is in action: the molecular codes for millions of materials have been sequenced, predictive models have been developed, and now the challenge of hydrogen storage is targeted. Renewably generated hydrogen is an attractive transportation fuel with zero carbon emissions, but its storage remains a significant challenge. Nanoporous adsorbents have shown promising physical adsorption of hydrogen approaching targeted capacities, but the scope of studies has remained limited. Here the Nanoporous Materials Genome, containing over 850 000 materials, is analyzed with a variety of computational tools to explore the limits of hydrogen storage. Optimal features that maximize net capacity at room temperature include pore sizes of around 6 Å and void fractions of 0.1, while at cryogenic temperatures pore sizes of 10 Å and void fractions of 0.5 are optimal. Finally, our top candidates are found to be commercially attractive as “cryo-adsorbents”, with promising storage capacities at 77 K and 100 bar with 30% enhancement to 40 g/L, a promising alternative to liquefaction at 20 K and compression at 700 bar.

  19. LDPC decoder with a limited-precision FPGA-based floating-point multiplication coprocessor

    NASA Astrophysics Data System (ADS)

    Moberly, Raymond; O'Sullivan, Michael; Waheed, Khurram

    2007-09-01

    Implementing the sum-product algorithm, in an FPGA with an embedded processor, invites us to consider a tradeoff between computational precision and computational speed. The algorithm, known outside of the signal processing community as Pearl's belief propagation, is used for iterative soft-decision decoding of LDPC codes. We determined the feasibility of a coprocessor that will perform product computations. Our FPGA-based coprocessor design performs computer algebra with significantly less precision than the standard (e.g. integer, floating-point) operations of general purpose processors. Using synthesis, targeting a 3,168 LUT Xilinx FPGA, we show that key components of a decoder are feasible and that the full single-precision decoder could be constructed using a larger part. Soft-decision decoding by the iterative belief propagation algorithm is impacted both positively and negatively by a reduction in the precision of the computation. Reducing precision reduces the coding gain, but the limited-precision computation can operate faster. A proposed solution offers custom logic to perform computations with less precision, yet uses the floating-point format to interface with the software. Simulation results show the achievable coding gain. Synthesis results help estimate the full capacity and performance of an FPGA-based coprocessor.
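
    The precision-versus-speed tradeoff discussed above can be illustrated with the min-sum approximation to the sum-product check-node update, operating on log-likelihood-ratio messages quantized to a coarse grid. This is a generic sketch, not the paper's coprocessor design; the step size and clipping range are arbitrary choices.

    ```python
    # Min-sum check-node update on quantized LLR messages: a generic sketch of
    # limited-precision belief-propagation arithmetic (not the paper's design).

    def quantize(x, step=0.25, max_mag=4.0):
        """Clip and round an LLR message to a fixed grid (limited precision)."""
        x = max(-max_mag, min(max_mag, x))
        return round(x / step) * step

    def check_node_update(incoming):
        """Min-sum approximation: each outgoing message takes the sign product
        and minimum magnitude of all *other* incoming messages."""
        out = []
        for i in range(len(incoming)):
            others = incoming[:i] + incoming[i + 1:]
            sign = 1.0
            for v in others:
                sign *= 1.0 if v >= 0 else -1.0
            out.append(sign * min(abs(v) for v in others))
        return out

    msgs = [quantize(v) for v in (1.3, -0.6, 2.2)]
    updated = check_node_update(msgs)
    ```

    Coarsening `step` reduces the bit width needed per message (speeding up hardware) at the cost of decoding accuracy, which is the tradeoff the abstract describes.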

  20. Error suppression via complementary gauge choices in Reed-Muller codes

    NASA Astrophysics Data System (ADS)

    Chamberland, Christopher; Jochym-O'Connor, Tomas

    2017-09-01

    Concatenation of two quantum error-correcting codes with complementary sets of transversal gates can provide a means toward universal fault-tolerant quantum computation. We first show that it is generally preferable to choose the inner code with the higher pseudo-threshold to achieve lower logical failure rates. We then explore the threshold properties of a wide range of concatenation schemes. Notably, we demonstrate that the concatenation of complementary sets of Reed-Muller codes can increase the code capacity threshold under depolarizing noise when compared to extensions of previously proposed concatenation models. We also analyze the properties of logical errors under circuit-level noise, showing that smaller codes perform better for all sampled physical error rates. Our work provides new insights into the performance of universal concatenated quantum codes for both code capacity and circuit-level noise.

  1. Do humans make good decisions?

    PubMed Central

    Summerfield, Christopher; Tsetsos, Konstantinos

    2014-01-01

    Human performance on perceptual classification tasks approaches that of an ideal observer, but economic decisions are often inconsistent and intransitive, with preferences reversing according to the local context. We discuss the view that suboptimal choices may result from the efficient coding of decision-relevant information, a strategy that allows expected inputs to be processed with higher gain than unexpected inputs. Efficient coding leads to ‘robust’ decisions that depart from optimality but maximise the information transmitted by a limited-capacity system in a rapidly-changing world. We review recent work showing that when perceptual environments are variable or volatile, perceptual decisions exhibit the same suboptimal context-dependence as economic choices, and propose a general computational framework that accounts for findings across the two domains. PMID:25488076

  2. 3DFEMWATER/3DLEWASTE: NUMERICAL CODES FOR DELINEATING WELLHEAD PROTECTION AREAS IN AGRICULTURAL REGIONS BASED ON THE ASSIMILATIVE CAPACITY CRITERION

    EPA Science Inventory

    Two related numerical codes, 3DFEMWATER and 3DLEWASTE, are presented and used to delineate wellhead protection areas in agricultural regions using the assimilative capacity criterion. 3DFEMWATER (Three-dimensional Finite Element Model of Water Flow Through Saturated-Unsaturated Media) ...

  3. Lung volumes: measurement, clinical use, and coding.

    PubMed

    Flesch, Judd D; Dine, C Jessica

    2012-08-01

    Measurement of lung volumes is an integral part of complete pulmonary function testing. Some lung volumes can be measured during spirometry; however, measurement of the residual volume (RV), functional residual capacity (FRC), and total lung capacity (TLC) requires special techniques. FRC is typically measured by one of three methods. Body plethysmography uses Boyle's Law to determine lung volumes, whereas inert gas dilution and nitrogen washout use dilution properties of gases. After determination of FRC, expiratory reserve volume and inspiratory vital capacity are measured, which allows the calculation of the RV and TLC. Lung volumes are commonly used for the diagnosis of restriction. In obstructive lung disease, they are used to assess for hyperinflation. Changes in lung volumes can also be seen in a number of other clinical conditions. Reimbursement for measurement of lung volumes requires knowledge of current procedural terminology (CPT) codes, relevant indications, and an appropriate level of physician supervision. Because of recent efforts to eliminate payment inefficiencies, the 10 previous CPT codes for lung volumes, airway resistance, and diffusing capacity have been bundled into four new CPT codes.
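
    The derivation described above is simple arithmetic once FRC has been measured: RV = FRC - ERV, and TLC = RV + IVC. The sketch below uses illustrative values in liters.

    ```python
    # Derive residual volume (RV) and total lung capacity (TLC) from measured
    # FRC, expiratory reserve volume (ERV), and inspiratory vital capacity (IVC).
    # Values are illustrative, in liters.

    def derive_volumes(frc, erv, ivc):
        """Return (RV, TLC) given FRC, ERV, and IVC."""
        rv = frc - erv      # residual volume
        tlc = rv + ivc      # total lung capacity
        return rv, tlc

    rv, tlc = derive_volumes(frc=3.0, erv=1.2, ivc=4.0)
    ```

    With these example numbers the residual volume is 1.8 L and the total lung capacity is 5.8 L.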

  4. QR Code Mania!

    ERIC Educational Resources Information Center

    Shumack, Kellie A.; Reilly, Erin; Chamberlain, Nik

    2013-01-01

    space, has error-correction capacity, and can be read from any direction. These codes are used in manufacturing, shipping, and marketing, as well as in education. QR codes can be created to produce…

  5. Recent Developments in the Application of Biologically Inspired Computation to Chemical Sensing

    NASA Astrophysics Data System (ADS)

    Marco, S.; Gutierrez-Gálvez, A.

    2009-05-01

    Biological olfaction outperforms chemical instrumentation in specificity, response time, detection limit, coding capacity, time stability, robustness, size, power consumption, and portability. This biological function provides outstanding performance due, to a large extent, to the unique architecture of the olfactory pathway, which combines a high degree of redundancy, an efficient combinatorial coding along with unmatched chemical information processing mechanisms. The last decade has witnessed important advances in the understanding of the computational primitives underlying the functioning of the olfactory system. In this work, the state of the art concerning biologically inspired computation for chemical sensing will be reviewed. Instead of reviewing the whole body of computational neuroscience of olfaction, we restrict this review to the application of models to the processing of real chemical sensor data.

  6. Error control techniques for satellite and space communications

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.

    1995-01-01

    This report focuses on the results obtained during the PI's recent sabbatical leave at the Swiss Federal Institute of Technology (ETH) in Zurich, Switzerland, from January 1, 1995 through June 30, 1995. Two projects investigated various properties of TURBO codes, a new form of concatenated coding that achieves near channel capacity performance at moderate bit error rates. The performance of TURBO codes is explained in terms of the code's distance spectrum. These results explain both the near capacity performance of the TURBO codes and the observed 'error floor' for moderate and high signal-to-noise ratios (SNR's). A semester project, entitled 'The Realization of the Turbo-Coding System,' involved a thorough simulation study of the performance of TURBO codes and verified the results claimed by previous authors. A copy of the final report for this project is included as Appendix A. A diploma project, entitled 'On the Free Distance of Turbo Codes and Related Product Codes,' includes an analysis of TURBO codes and an explanation for their remarkable performance. A copy of the final report for this project is included as Appendix B.

  7. TRAC-PF1/MOD1 pretest predictions of MIST experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boyack, B.E.; Steiner, J.L.; Siebe, D.A.

    Los Alamos National Laboratory is a participant in the Integral System Test (IST) program initiated in June 1983 to provide integral system test data on specific issues and phenomena relevant to post small-break loss-of-coolant accidents (SBLOCAs) in Babcock and Wilcox plant designs. The Multi-Loop Integral System Test (MIST) facility is the largest single component in the IST program. During Fiscal Year 1986, Los Alamos performed five MIST pretest analyses. The five experiments were chosen on the basis of their potential either to approach the facility limits or to challenge the predictive capability of the TRAC-PF1/MOD1 code. Three SBLOCA tests were examined which included nominal test conditions, throttled auxiliary feedwater and asymmetric steam-generator cooldown, and reduced high-pressure-injection (HPI) capacity, respectively. Also analyzed were two ''feed-and-bleed'' cooling tests with reduced HPI and delayed HPI initiation. Results of the tests showed that the MIST facility limits would not be approached in the five tests considered. Early comparisons with preliminary test data indicate that the TRAC-PF1/MOD1 code is correctly calculating the dominant phenomena occurring in the MIST facility during the tests. Posttest analyses are planned to provide a quantitative assessment of the code's ability to predict MIST transients.

  8. Building Energy Efficiency in Rural China

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evans, Meredydd; Yu, Sha; Song, Bo

    2014-04-01

    Rural buildings in China now account for more than half of China’s total building energy use. Forty percent of the floorspace in China is in rural villages and towns. Most of these buildings are very energy inefficient, and may struggle to meet basic needs. They are cold in the winter, and often experience indoor air pollution from fuel use. The Chinese government plans to adopt a voluntary building energy code, or design standard, for rural homes. The goal is to build on China’s success with codes in urban areas to improve efficiency and comfort in rural homes. The Chinese government recognizes rural buildings represent a major opportunity for improving national building energy efficiency. The challenges of rural China are also greater than those of urban areas in many ways because of the limited local capacity and low income levels. The Chinese government wants to expand on new programs to subsidize energy efficiency improvements in rural homes to build capacity for larger-scale improvement. This article summarizes the trends and status of rural building energy use in China. It then provides an overview of the new rural building design standard, and describes options and issues to move forward with implementation.

  9. Expanding Capacity and Promoting Inclusion in Introductory Computer Science: A Focus on Near-Peer Mentor Preparation and Code Review

    ERIC Educational Resources Information Center

    Pon-Barry, Heather; Packard, Becky Wai-Ling; St. John, Audrey

    2017-01-01

    A dilemma within computer science departments is developing sustainable ways to expand capacity within introductory computer science courses while remaining committed to inclusive practices. Training near-peer mentors for peer code review is one solution. This paper describes the preparation of near-peer mentors for their role, with a focus on…

  10. A qualitative study of DRG coding practice in hospitals under the Thai Universal Coverage scheme.

    PubMed

    Pongpirul, Krit; Walker, Damian G; Winch, Peter J; Robinson, Courtland

    2011-04-08

    In the Thai Universal Coverage health insurance scheme, hospital providers are paid for their inpatient care using Diagnosis Related Group-based retrospective payment, for which quality of the diagnosis and procedure codes is crucial. However, there has been limited understanding of which health care professions are involved and how the diagnosis and procedure coding is actually done within hospital settings. The objective of this study is to detail hospital coding structure and process, and to describe the roles of key hospital staff, and other related internal dynamics in Thai hospitals that affect quality of data submitted for inpatient care reimbursement. The research involved qualitative semi-structured interviews with 43 participants at 10 hospitals chosen to represent a range of hospital sizes (small/medium/large), location (urban/rural), and type (public/private). Hospital Coding Practice has structural and process components. While the structural component includes human resources, hospital committee, and information technology infrastructure, the process component comprises all activities from patient discharge to submission of the diagnosis and procedure codes. At least eight health care professional disciplines are involved in the coding process, which comprises seven major steps, each of which involves different hospital staff: 1) Discharge Summarization, 2) Completeness Checking, 3) Diagnosis and Procedure Coding, 4) Code Checking, 5) Relative Weight Challenging, 6) Coding Report, and 7) Internal Audit. The hospital coding practice can be affected by at least five main factors: 1) Internal Dynamics, 2) Management Context, 3) Financial Dependency, 4) Resource and Capacity, and 5) External Factors. Hospital coding practice comprises both structural and process components, involves many health care professional disciplines, and is greatly varied across hospitals as a result of five main factors.

  11. A qualitative study of DRG coding practice in hospitals under the Thai Universal Coverage Scheme

    PubMed Central

    2011-01-01

    Background In the Thai Universal Coverage health insurance scheme, hospital providers are paid for their inpatient care using Diagnosis Related Group-based retrospective payment, for which quality of the diagnosis and procedure codes is crucial. However, there has been limited understanding of which health care professions are involved and how the diagnosis and procedure coding is actually done within hospital settings. The objective of this study is to detail hospital coding structure and process, and to describe the roles of key hospital staff, and other related internal dynamics in Thai hospitals that affect quality of data submitted for inpatient care reimbursement. Methods The research involved qualitative semi-structured interviews with 43 participants at 10 hospitals chosen to represent a range of hospital sizes (small/medium/large), location (urban/rural), and type (public/private). Results Hospital Coding Practice has structural and process components. While the structural component includes human resources, hospital committee, and information technology infrastructure, the process component comprises all activities from patient discharge to submission of the diagnosis and procedure codes. At least eight health care professional disciplines are involved in the coding process, which comprises seven major steps, each of which involves different hospital staff: 1) Discharge Summarization, 2) Completeness Checking, 3) Diagnosis and Procedure Coding, 4) Code Checking, 5) Relative Weight Challenging, 6) Coding Report, and 7) Internal Audit. The hospital coding practice can be affected by at least five main factors: 1) Internal Dynamics, 2) Management Context, 3) Financial Dependency, 4) Resource and Capacity, and 5) External Factors. Conclusions Hospital coding practice comprises both structural and process components, involves many health care professional disciplines, and is greatly varied across hospitals as a result of five main factors. PMID:21477310

  12. Accumulate repeat accumulate codes

    NASA Technical Reports Server (NTRS)

    Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung

    2004-01-01

    In this paper we propose an innovative channel coding scheme called 'Accumulate Repeat Accumulate' (ARA) codes. This class of codes can be viewed as serial turbo-like codes, or as a subclass of Low Density Parity Check (LDPC) codes, so belief propagation can be used for iterative decoding of ARA codes on a graph. The encoder for this class can be viewed as a precoded Repeat Accumulate (RA) code or a precoded Irregular Repeat Accumulate (IRA) code, where an accumulator is simply chosen as the precoder. ARA codes thus have a simple and very fast encoder structure when represented as LDPC codes. Using density evolution for LDPC codes, we show through examples that for ARA codes with maximum variable node degree 5, a minimum bit SNR as low as 0.08 dB from channel capacity can be achieved at rate 1/2 as the block size goes to infinity. With a fixed low maximum variable node degree, the ARA threshold therefore outperforms not only the RA and IRA codes but also the best known LDPC codes with the same maximum node degree. Furthermore, by puncturing the accumulators, any desired high-rate codes close to rate 1 can be obtained, with thresholds that stay uniformly close to the channel capacity thresholds. Iterative decoding simulation results are provided. ARA codes also have a projected graph, or protograph, representation that allows for high-speed decoder implementation.
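
    The RA structure that ARA codes extend can be sketched in a few lines: repeat each information bit q times, permute, then pass the stream through an accumulator (a running XOR). The ARA precoder, an extra accumulator applied before the repetition, is omitted here, and the interleaver is an arbitrary seeded shuffle rather than a designed permutation.

    ```python
    # Sketch of classic Repeat Accumulate (RA) encoding: repeat, interleave,
    # accumulate. Illustrative only; the ARA precoder stage is omitted and the
    # permutation is an arbitrary seeded shuffle.
    import random

    def ra_encode(bits, q=3, seed=0):
        repeated = [b for b in bits for _ in range(q)]    # repetition, rate 1/q
        rng = random.Random(seed)
        perm = list(range(len(repeated)))
        rng.shuffle(perm)
        interleaved = [repeated[i] for i in perm]         # interleaver
        out, acc = [], 0
        for b in interleaved:                             # accumulator: running XOR
            acc ^= b
            out.append(acc)
        return out

    codeword = ra_encode([1, 0, 1], q=3)
    ```

    Viewed as an LDPC code, this structure gives the sparse graph on which belief propagation runs, which is why the encoder is both simple and fast.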

  13. Capacity Evaluations of Psychiatric Patients Requesting Assisted Death in the Netherlands

    PubMed Central

    Doernberg, Samuel N.; Peteet, John R.; Kim, Scott Y.H.

    2016-01-01

    Objective Euthanasia or physician-assisted suicide (EAS) of psychiatric patients is legal in some countries but remains controversial. This study examined a frequently raised concern about the practice: how physicians address the issue of decision-making capacity of persons requesting psychiatric EAS. Methods A review of psychiatric EAS case summaries published by the Dutch Regional Euthanasia Review Committees. Directed content analysis using a capacity-specific 4 abilities model (understanding of facts, applying those facts to self, weighing/reasoning, and evidencing choice) was used to code texts discussing capacity. 66 cases from 2011-2014 were reviewed. Results In 55% (36 of 66) of cases the capacity-specific discussion consisted of only global judgments of patients’ capacity, even in patients with psychotic disorders. 32% (21 of 66) of cases included evidentiary statements regarding capacity-specific abilities; only 5 cases (8%) mentioned all four abilities. Physicians frequently stated that psychosis or depression did or did not impact capacity but provided little explanation regarding their judgments. Physicians in 8 cases (12%) disagreed about capacity; even when no explanation is given for the disagreement, the review committees generally accepted the judgment of the physician performing EAS. In one case, the physicians noted that not all capacity-specific abilities were intact but deemed the patient capable. Conclusion Case summaries of psychiatric EAS in the Netherlands do not show that a high threshold of capacity is required for granting EAS. Although this may reflect limitations in documentation, it likely represents a practice that reflects the normative position of the review committees. PMID:27590345

  14. A severe capacity limit in the consolidation of orientation information into visual short-term memory.

    PubMed

    Becker, Mark W; Miller, James R; Liu, Taosheng

    2013-04-01

    Previous research has suggested that two color patches can be consolidated into visual short-term memory (VSTM) via an unlimited parallel process. Here we examined whether the same unlimited-capacity parallel process occurs for two oriented grating patches. Participants viewed two gratings that were presented briefly and masked. In blocks of trials, the gratings were presented either simultaneously or sequentially. In Experiments 1 and 2, the presentation of the stimuli was followed by a location cue that indicated the grating on which to base one's response. In Experiment 1, participants responded whether the target grating was oriented clockwise or counterclockwise with respect to vertical. In Experiment 2, participants indicated whether the target grating was oriented along one of the cardinal directions (vertical or horizontal) or was obliquely oriented. Finally, in Experiment 3, the location cue was replaced with a third grating that appeared at fixation, and participants indicated whether either of the two test gratings matched this probe. Despite the fact that these responses required fairly coarse coding of the orientation information, across all methods of responding we found superior performance for sequential over simultaneous presentations. These findings suggest that the consolidation of oriented gratings into VSTM is severely limited in capacity and differs from the consolidation of color information.

  15. Protograph based LDPC codes with minimum distance linearly growing with block size

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Sam; Thorpe, Jeremy

    2005-01-01

    We propose several LDPC code constructions that simultaneously achieve good threshold and error floor performance. Minimum distance is shown to grow linearly with block size (similar to regular codes of variable degree at least 3) by considering ensemble average weight enumerators. Our constructions are based on projected graph, or protograph, structures that support high-speed decoder implementations. As with irregular ensembles, our constructions are sensitive to the proportion of degree-2 variable nodes. A code with too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code with too many such nodes tends to not exhibit a minimum distance that grows linearly in block length. In this paper we also show that precoding can be used to lower the threshold of regular LDPC codes. The decoding thresholds of the proposed codes, which have minimum distance increasing linearly in block size, outperform those of regular LDPC codes. Furthermore, a family of low to high rate codes, with thresholds that adhere closely to their respective channel capacity thresholds, is presented. Simulation results for a few example codes show that the proposed codes have low error floors as well as good threshold SNR performance.

  16. Experimental demonstration of entanglement-assisted coding using a two-mode squeezed vacuum state

    NASA Astrophysics Data System (ADS)

    Mizuno, Jun; Wakui, Kentaro; Furusawa, Akira; Sasaki, Masahide

    2005-01-01

    We have experimentally realized the scheme initially proposed as quantum dense coding with continuous variables [Ban, J. Opt. B: Quantum Semiclassical Opt. 1, L9 (1999); Braunstein and Kimble, Phys. Rev. A 61, 042302 (2000)]. In our experiment, a pair of EPR (Einstein-Podolsky-Rosen) beams is generated from two independent squeezed vacua. After adding a two-quadrature signal to one of the EPR beams, two squeezed beams that contain the signal were recovered. Although our squeezing level is not sufficient to demonstrate the channel capacity gain over the Holevo limit of a single-mode channel without entanglement, our channel is superior to conventional channels such as coherent and squeezing channels. In addition, the optical addition and subtraction processes demonstrated are elementary operations of universal quantum information processing on continuous variables.

  17. Efficient Signal, Code, and Receiver Designs for MIMO Communication Systems

    DTIC Science & Technology

    2003-06-01

    Concatenation of a tilted-QAM inner code with an LDPC outer code with a two-component iterative soft-decision decoder. Coding for AWGN channels has long been studied; there are well-known soft-decision codes, like turbo codes and low density parity check (LDPC) codes, that can approach capacity. The coded bits are randomly interleaved so that nearby bits go through different sub-channels.

  18. Experimental study of an optimized PSP-OSTBC scheme with m-PPM in ultraviolet scattering channel for optical MIMO system.

    PubMed

    Han, Dahai; Gu, Yanjie; Zhang, Min

    2017-08-10

    An optimized scheme of pulse symmetrical position-orthogonal space-time block codes (PSP-OSTBC) is proposed and applied with m-pulse position modulation (m-PPM), without the use of a complex decoding algorithm, in an optical multi-input multi-output (MIMO) ultraviolet (UV) communication system. The proposed scheme breaks through the limitation of the traditional Alamouti code and is suitable for high-order m-PPM in a UV scattering channel, as verified by both simulation experiments and field tests with specific parameters. The performances of 1×1, 2×1, and 2×2 PSP-OSTBC systems with 4-PPM are compared experimentally as the optimal tradeoff between modulation and coding in practical application. Meanwhile, the feasibility of the proposed scheme for 8-PPM is examined by a simulation experiment as well. The results suggest that the proposed scheme makes the system insensitive to the influence of path loss and provides a larger channel capacity; a higher diversity gain and coding gain with a simple decoding algorithm are achieved by exploiting the orthogonality of m-PPM in an optical-MIMO-based ultraviolet scattering channel.
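
    The traditional 2×1 Alamouti code that the abstract takes as its baseline can be sketched for the noise-free case; the paper's PSP-OSTBC construction for m-PPM is not reproduced here. Two symbols are spread over two antennas and two time slots, and linear combining at the receiver recovers each symbol scaled by the total channel gain.

    ```python
    # Sketch of the classic 2x1 Alamouti space-time block code (noise-free),
    # as a baseline for the OSTBC family discussed above.

    def alamouti_encode(s1, s2):
        """Slot 1 transmits (s1, s2); slot 2 transmits (-s2*, s1*)."""
        return [(s1, s2), (-s2.conjugate(), s1.conjugate())]

    def alamouti_decode(r1, r2, h1, h2):
        """Linear combining at a single receive antenna with gains h1, h2."""
        s1_hat = h1.conjugate() * r1 + h2 * r2.conjugate()
        s2_hat = h2.conjugate() * r1 - h1 * r2.conjugate()
        return s1_hat, s2_hat

    h1, h2 = 0.8 + 0.2j, 0.5 - 0.4j
    s1, s2 = 1 + 1j, 1 - 1j
    (x11, x12), (x21, x22) = alamouti_encode(s1, s2)
    r1 = h1 * x11 + h2 * x12      # received in slot 1 (no noise)
    r2 = h1 * x21 + h2 * x22      # received in slot 2
    s1_hat, s2_hat = alamouti_decode(r1, r2, h1, h2)
    # Each estimate equals (|h1|^2 + |h2|^2) times the transmitted symbol.
    ```

    The orthogonality of the two-slot code is what allows this simple linear decoder; the PSP-OSTBC scheme exploits the orthogonality of m-PPM positions to keep the decoder similarly simple at higher modulation orders.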

  19. A proposal for seismic evaluation index of mid-rise existing RC buildings in Afghanistan

    NASA Astrophysics Data System (ADS)

    Naqi, Ahmad; Saito, Taiki

    2017-10-01

    Mid-rise RC buildings have been rising gradually in Kabul and across Afghanistan since 2001 due to the rapid increase of population. To protect the safety of residents, the Afghan Structure Code was issued in 2012, but buildings constructed before 2012 fail to conform to the code requirements. In Japan, a new set of rules and laws for the seismic design of buildings was issued in 1981, and severe earthquake damage was revealed in buildings designed before 1981. Hence, the Standard for Seismic Evaluation of RC Buildings, published in 1977, has been widely used in Japan to evaluate the seismic capacity of existing buildings designed before 1981. A similar problem now exists in Afghanistan; therefore, this research examined the seismic capacity of six RC buildings built before 2012 in Kabul by applying the seismic screening procedure of the Japanese standard. Among the three screening procedures of differing capability, the least detailed one, the first level of screening, was applied. The study finds an average seismic index (IS-average = 0.21) for the target buildings. The results were then compared with those of the more accurate seismic evaluation procedures of the Capacity Spectrum Method (CSM) and Time History Analysis (THA). The CSM and THA results show poor seismic performance of the target buildings, unable to satisfy the safety design limit (1/100) on the maximum story drift. The target buildings were then improved by installing RC shear walls. The seismic indices of these retrofitted buildings were recalculated, and the maximum story drifts were analyzed by CSM and THA. Comparing the seismic indices with the CSM and THA results shows that buildings with a seismic index larger than IS-average = 0.4 are able to satisfy the safety design limit. Finally, to screen existing buildings and minimize earthquake damage, a judgment seismic index (IS-Judgment = 0.5) for the first level of screening is proposed.

  20. Coherent state coding approaches the capacity of non-Gaussian bosonic channels

    NASA Astrophysics Data System (ADS)

    Huber, Stefan; König, Robert

    2018-05-01

    The additivity problem asks if the use of entanglement can boost the information-carrying capacity of a given channel beyond what is achievable by coding with simple product states only. This has recently been shown not to be the case for phase-insensitive one-mode Gaussian channels, but remains unresolved in general. Here we consider two general classes of bosonic noise channels, which include phase-insensitive Gaussian channels as special cases: these are attenuators with general, potentially non-Gaussian environment states and classical noise channels with general probabilistic noise. We show that additivity violations, if existent, are rather minor for all these channels: the maximal gain in classical capacity is bounded by a constant independent of the input energy. Our proof shows that coding by simple classical modulation of coherent states is close to optimal.
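
    For a concrete special case of the attenuators discussed above, coherent-state coding over the pure-loss bosonic channel (transmissivity eta, input mean photon number N) achieves C = g(eta*N), where g(x) = (x+1)log2(x+1) - x*log2(x) is the entropy of a thermal state. This well-known pure-loss formula is shown only to illustrate the quantities involved; it is not the paper's general non-Gaussian result.

    ```python
    # Classical capacity of the pure-loss bosonic channel under coherent-state
    # coding: C = g(eta * N), with g the thermal-state entropy in bits.
    # Illustrative special case, not the general non-Gaussian bound of the paper.
    import math

    def g(x):
        """Entropy (in bits) of a thermal state with mean photon number x."""
        if x <= 0:
            return 0.0
        return (x + 1) * math.log2(x + 1) - x * math.log2(x)

    def pure_loss_capacity(eta, n_mean):
        return g(eta * n_mean)

    c = pure_loss_capacity(eta=0.5, n_mean=4.0)  # bits per channel use
    ```

    The abstract's claim is that for the broader channel classes considered, coding beyond such simple coherent-state modulation can gain at most a constant number of bits, independent of the input energy.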

  1. Properties of a certain stochastic dynamical system, channel polarization, and polar codes

    NASA Astrophysics Data System (ADS)

    Tanaka, Toshiyuki

    2010-06-01

    A new family of codes, called polar codes, has recently been proposed by Arikan. Polar codes are of theoretical importance because they are provably capacity achieving with low-complexity encoding and decoding. We first discuss basic properties of a certain stochastic dynamical system, on the basis of which properties of channel polarization and polar codes are reviewed, with emphasis on our recent results.
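
    The stochastic dynamical system mentioned above takes a particularly simple form for the binary erasure channel: the Bhattacharyya parameter Z of a synthesized channel splits into Z^2 (the better channel) and 2Z - Z^2 (the worse channel) at each polarization step. The sketch below iterates this recursion; the thresholds used to count "good" and "bad" channels are arbitrary illustrative choices.

    ```python
    # Channel polarization for the binary erasure channel: iterate the
    # Bhattacharyya recursion Z -> {Z^2, 2Z - Z^2} over all branch choices.
    # The average of the two children equals Z, so the mean is preserved
    # (a martingale), while individual values polarize toward 0 or 1.

    def polarize(z0, levels):
        zs = [z0]
        for _ in range(levels):
            zs = [z for old in zs for z in (old * old, 2 * old - old * old)]
        return zs

    zs = polarize(0.5, 10)                       # 1024 synthesized channels
    good = sum(1 for z in zs if z < 1e-3)        # nearly noiseless
    bad = sum(1 for z in zs if z > 1 - 1e-3)     # nearly useless
    frac_good = good / len(zs)
    ```

    As the number of levels grows, the fraction of nearly noiseless channels approaches the capacity of the underlying channel (here 0.5 for BEC(0.5)), which is the sense in which polar codes are provably capacity achieving.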

  2. Final Report: Laboratory Development of a High Capacity Gas-Fired Paper Dryer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yaroslav Chudnovsky; Aleksandr Kozlov; Lester Sherrow

    2005-09-30

    Paper drying is the most energy-intensive and temperature-critical aspect of papermaking. It is estimated that about 67% of the total energy required in papermaking is used to dry paper. The conventional drying method uses a series of steam-heated metal cylinders that are required to meet ASME codes for pressure vessels, which limits the steam pressure to about 160 psig. Consequently, the shell temperature and the drying capacity are also limited. Gas Technology Institute together with Boise Paper Solutions, Groupe Laperrier and Verreault (GL&V) USA Inc., Flynn Burner Corporation and with funding support from the U.S. Department of Energy, U.S. natural gas industry, and Gas Research Institute is developing a high efficiency gas-fired paper dryer based on a combination of a ribbon burner and advanced heat transfer enhancement technique. The Gas-Fired Paper Dryer (GFPD) is a high-efficiency alternative to conventional steam-heated drying drums that typically operate at surface temperatures in the 300 deg F range. The new approach was evaluated in laboratory and pilot-scale testing at the Western Michigan University Paper Pilot Plant. Drum surface temperatures of more than 400 deg F were reached with linerboard (basis weight 126 lb/3000 ft2) production and resulted in a 4-5 times increase in drying rate over a conventional steam-heated drying drum. Successful GFPD development and commercialization will provide large energy savings to the paper industry and increase paper production rates from dryer-limited (space- or steam-limited) paper machines by an estimated 10 to 20%, resulting in significant capital costs savings for both retrofits and new capacity.

  3. Laboratory Development of A High Capacity Gas-Fired paper Dryer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chudnovsky, Yaroslav; Kozlov, Aleksandr; Sherrow, Lester

    2005-09-30

    Paper drying is the most energy-intensive and temperature-critical aspect of papermaking. It is estimated that about 67% of the total energy required in papermaking is used to dry paper. The conventional drying method uses a series of steam-heated metal cylinders that are required to meet ASME codes for pressure vessels, which limits the steam pressure to about 160 psig. Consequently, the shell temperature and the drying capacity are also limited. Gas Technology Institute together with Boise Paper Solutions, Groupe Laperrier and Verreault (GL&V) USA Inc., Flynn Burner Corporation and with funding support from the U.S. Department of Energy, U.S. natural gas industry, and Gas Research Institute is developing a high efficiency gas-fired paper dryer based on a combination of a ribbon burner and advanced heat transfer enhancement technique. The Gas-Fired Paper Dryer (GFPD) is a high-efficiency alternative to conventional steam-heated drying drums that typically operate at surface temperatures in the 300 °F range. The new approach was evaluated in laboratory and pilot-scale testing at the Western Michigan University Paper Pilot Plant. Drum surface temperatures of more than 400 °F were reached with linerboard (basis weight 126 lb/3000 ft2) production and resulted in a 4-5 times increase in drying rate over a conventional steam-heated drying drum. Successful GFPD development and commercialization will provide large energy savings to the paper industry and increase paper production rates from dryer-limited (space- or steam-limited) paper machines by an estimated 10 to 20%, resulting in significant capital costs savings for both retrofits and new capacity.

  4. On the optimum signal constellation design for high-speed optical transport networks.

    PubMed

    Liu, Tao; Djordjevic, Ivan B

    2012-08-27

    In this paper, we first describe an optimum signal constellation design algorithm, called MMSE-OSCD, which is optimum in the MMSE sense for a channel-capacity-achieving source distribution. Second, we introduce a feedback-channel-capacity-inspired optimum signal constellation design (FCC-OSCD) to further improve on MMSE-OSCD, motivated by the fact that the capacity of a channel with feedback is higher than that of a system without feedback. The constellations obtained by FCC-OSCD are, however, OSNR-dependent. The optimization is performed jointly with regular quasi-cyclic low-density parity-check (LDPC) code design. The resulting coded-modulation scheme, in combination with polarization-multiplexing, is suitable as an enabling technology for both 400 Gb/s and multi-Tb/s optical transport. Using a large-girth LDPC code, we demonstrate by Monte Carlo simulations that a 32-ary signal constellation obtained by FCC-OSCD outperforms the previously proposed optimized 32-ary CIPQ signal constellation by 0.8 dB at a BER of 10^-7. On the other hand, the LDPC-coded 16-ary FCC-OSCD outperforms 16-QAM by 1.15 dB at the same BER.

  5. A narrowband CDMA communications payload for little LEOS applications

    NASA Astrophysics Data System (ADS)

    Michalik, H.; Hävecker, W.; Ginati, A.

    1996-09-01

    In recent years Code Division Multiple Access (CDMA) techniques have been investigated for application in Local Area Networks [J. A. Salehi, IEEE Trans. Commun. 37 (1989)] as well as in Mobile Communications [R. Kohno et al., IEEE Commun. Mag. Jan (1995)]. The main attraction of these techniques is the potentially higher throughput and capacity of such systems, under certain conditions, compared to conventional multi-access schemes like frequency and time division multiplexing. Mobile communication over a satellite link represents in some respects the "worst case" for operating a CDMA system. Considering e.g. the uplink from mobile to satellite, imperfections due to different and time-varying channel conditions add to the well-known effects of Multiple Access Interference (MAI) between the simultaneously active users at the satellite receiver. In addition, bandwidth constraints exist for small systems due to the non-availability of large-bandwidth channels in the frequency bands of interest. As a result, for a given service in terms of user data rates, the practical code sequence lengths are limited, as is the available number of codes within a code set. In this paper a communications payload for small satellite applications with CDMA uplink and C/TDMA downlink under the constraint of bandwidth limitations is proposed. To optimise performance under the above imperfections, the system provides power control and synchronisation for the CDMA uplink. The major objectives of this project are the study, development and testing of such a system for educational purposes and technology development at Hochschule Bremen.
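    The basic spreading/despreading principle behind such a CDMA uplink can be sketched with orthogonal Walsh codes; this is a minimal illustration of synchronous CDMA, not the payload's actual (longer, power-controlled) sequences.

```python
import numpy as np

def walsh(n):
    """Hadamard matrix of order 2**n; its rows are orthogonal spreading codes."""
    H = np.array([[1]])
    for _ in range(n):
        H = np.block([[H, H], [H, -H]])
    return H

codes = walsh(3)                                    # 8 codes of length 8
bits_a, bits_b = np.array([1, -1, 1]), np.array([-1, -1, 1])
# Each user spreads its +/-1 bits with its own code; the signals add on the channel.
tx = np.concatenate([b * codes[1] for b in bits_a]) \
   + np.concatenate([b * codes[2] for b in bits_b])
# Despreading: correlate each chip block with the wanted code and take the sign.
rx_a = np.sign(tx.reshape(-1, 8) @ codes[1])
print(rx_a)                                         # recovers user A's bits
```

    Orthogonality of the codes is what removes the other user's contribution here; with asynchronous users and fading, the residual cross-correlation becomes the MAI discussed in the abstract.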

  6. [Implications of mental image processing in the deficits of verbal information coding during normal aging].

    PubMed

    Plaie, Thierry; Thomas, Delphine

    2008-06-01

    Our study specifies the contributions of the image generation and image maintenance processes occurring at the time of imaginal coding of verbal information in memory during normal aging. The memory capacities of 19 young adults (average age 24 years) and 19 older adults (average age 75 years) were assessed using recall tasks according to the imagery value of the stimuli to be learned. Mental visual imagery capacities were assessed using tasks of image generation and temporary storage of mental images. The analysis of variance indicates a greater decrease with age of the concreteness effect. The major contribution of our study is to show that the age-related decline of dual coding of verbal information in memory results primarily from the decline of image maintenance capacities and from a slowdown in image generation. (PsycINFO Database Record (c) 2008 APA, all rights reserved).

  7. Zero-forcing pre-coding for MIMO WiMAX transceivers: Performance analysis and implementation issues

    NASA Astrophysics Data System (ADS)

    Cattoni, A. F.; Le Moullec, Y.; Sacchi, C.

    Next-generation wireless communication networks are expected to achieve ever-increasing data rates. Multi-User Multiple-Input-Multiple-Output (MU-MIMO) is a key technique for obtaining the expected performance, because it combines the high capacity achievable with the MIMO channel with the benefits of space division multiple access. In MU-MIMO systems, the base station transmits signals to two or more users over the same channel; as a result, every user can experience inter-user interference. This paper provides a capacity analysis of an online, interference-based pre-coding algorithm able to mitigate the multi-user interference of MU-MIMO systems in the context of a realistic WiMAX application scenario. Simulation results show that pre-coding can significantly increase the channel capacity. Furthermore, the paper presents several feasibility considerations for the implementation of the analyzed technique in a possible FPGA-based software-defined radio.
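    A minimal zero-forcing pre-coding sketch illustrates the interference-nulling idea; the 4×4 channel below is hypothetical (square and invertible), whereas the paper evaluates the technique in a realistic WiMAX setting.

```python
import numpy as np

rng = np.random.default_rng(0)
# Rows = users, columns = transmit antennas (hypothetical 4x4 complex channel).
H = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

# Zero-forcing pre-coder: right pseudo-inverse of H, so H @ W = I and each user
# sees only its own stream (inter-user interference nulled at the transmitter).
W = H.conj().T @ np.linalg.inv(H @ H.conj().T)

s = np.array([1, -1, 1j, -1j])     # one symbol per user
y = H @ (W @ s)                    # received samples, noise-free
print(np.round(y, 6))              # equals s: interference removed
```

    The cost of this design choice is transmit-power inflation when H is ill-conditioned, which is one reason implementation feasibility (e.g. on an FPGA) matters.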

  8. A Comparative Study on Safe Pile Capacity as Shown in Table 1 of IS 2911 (Part III): 1980

    NASA Astrophysics Data System (ADS)

    Pakrashi, Somdev

    2017-06-01

    Code of practice for design and construction of under-reamed pile foundations, IS 2911 (Part III): 1980, presents one table of safe loads for bored cast-in-situ under-reamed piles in sandy and clayey soils, including black cotton soils, with pile stem diameters ranging from 20 to 50 cm and an effective length of 3.50 m. A comparative study was taken up by working out the safe pile capacity for one 400 dia., 3.5 m long bored cast-in-situ under-reamed pile based on subsoil properties obtained from soil investigation work, as well as subsoil properties of different magnitudes for clayey and sandy soils, and comparing these with the safe pile capacity shown in Table 1 of that IS code. The study reveals that the safe pile capacity computed from subsoil properties, barring a very few cases, differs considerably from that shown in the aforesaid code, and calls for more research work and study to find a conclusive explanation of this probable anomaly.

  9. Development of a cryogenic mixed fluid J-T cooling computer code, 'JTMIX'

    NASA Technical Reports Server (NTRS)

    Jones, Jack A.

    1991-01-01

    An initial study was performed for analyzing and predicting the temperatures and cooling capacities when mixtures of fluids are used in Joule-Thomson coolers and in heat pipes. A computer code, JTMIX, was developed for mixed gas J-T analysis for any fluid combination of neon, nitrogen, various hydrocarbons, argon, oxygen, carbon monoxide, carbon dioxide, and hydrogen sulfide. When used in conjunction with the NIST computer code, DDMIX, it has accurately predicted order-of-magnitude increases in J-T cooling capacities when various hydrocarbons are added to nitrogen, and it predicts nitrogen normal boiling point depressions to as low as 60 K when neon is added.

  10. Neuronal foundations of human numerical representations.

    PubMed

    Eger, E

    2016-01-01

    The human species has developed complex mathematical skills which likely emerge from a combination of multiple foundational abilities. One of them seems to be a preverbal capacity to extract and manipulate the numerosity of sets of objects, which is shared with other species and in humans is thought to be integrated with symbolic knowledge to result in a more abstract representation of numerical concepts. Concerning the functional neuroanatomy of this capacity, neuropsychology and functional imaging have localized key substrates of numerical processing in parietal and frontal cortex. However, traditional fMRI mapping relying on a simple subtraction approach to compare numerical and nonnumerical conditions is too limited to tackle the issue of the underlying code for number with sufficient precision and detail, a question which more easily lends itself to investigation by methods with higher spatial resolution, such as neurophysiology. In recent years, progress has been made through the introduction of approaches sensitive to within-category discrimination in combination with fMRI (adaptation and multivariate pattern recognition), and the present review summarizes what these have revealed so far about the neural coding of individual numbers in the human brain, the format of these representations, and parallels between human and monkey neurophysiology findings. © 2016 Elsevier B.V. All rights reserved.

  11. Channel-capacity gain in entanglement-assisted communication protocols based exclusively on linear optics, single-photon inputs, and coincidence photon counting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lougovski, P.; Uskov, D. B.

    Entanglement can effectively increase communication channel capacity as evidenced by dense coding that predicts a capacity gain of 1 bit when compared to entanglement-free protocols. However, dense coding relies on Bell states and when implemented using photons the capacity gain is bounded by 0.585 bits due to one's inability to discriminate between the four optically encoded Bell states. In this research we study the following question: Are there alternative entanglement-assisted protocols that rely only on linear optics, coincidence photon counting, and separable single-photon input states and at the same time provide a greater capacity gain than 0.585 bits? In this study, we show that besides the Bell states there is a class of bipartite four-mode two-photon entangled states that facilitate an increase in channel capacity. We also discuss how the proposed scheme can be generalized to the case of two-photon N-mode entangled states for N=6,8.

  12. Achievable rate degradation of ultra-wideband coherent fiber communication systems due to stimulated Raman scattering.

    PubMed

    Semrau, Daniel; Killey, Robert; Bayvel, Polina

    2017-06-12

    As the bandwidths of optical communication systems are increased to maximize channel capacity, the impact of stimulated Raman scattering (SRS) on the achievable information rates (AIR) in ultra-wideband coherent WDM systems becomes significant, and is investigated in this work, for the first time. By modifying the GN-model to account for SRS, it is possible to derive a closed-form expression that predicts the optical signal-to-noise ratio of all channels at the receiver for bandwidths of up to 15 THz, which is in excellent agreement with numerical calculations. It is shown that, with fixed modulation and coding rate, SRS leads to a drop of approximately 40% in achievable information rates for bandwidths higher than 15 THz. However, if adaptive modulation and coding rates are applied across the entire spectrum, this AIR reduction can be limited to only 10%.

  13. Hiding message into DNA sequence through DNA coding and chaotic maps.

    PubMed

    Liu, Guoyan; Liu, Hongjun; Kadir, Abdurahman

    2014-09-01

    The paper proposes an improved reversible substitution method to hide data in a deoxyribonucleic acid (DNA) sequence. Four measures are taken to enhance robustness and enlarge the hiding capacity: encoding the secret message by DNA coding, encrypting it with a pseudo-random sequence, generating the relative hiding locations with a piecewise linear chaotic map, and embedding the encoded and encrypted message into a randomly selected DNA sequence using the complementary rule. The key space and the hiding capacity are analyzed. Experimental results indicate that the proposed method performs better than competing methods with respect to robustness and capacity.
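    Two of the ingredients named above can be sketched directly: 2-bit DNA coding of the message and a piecewise linear chaotic map (PWLCM) generating hiding locations. The A/C/G/T mapping, initial value, and control parameter below are illustrative, not the paper's.

```python
BASE = {0: "A", 1: "C", 2: "G", 3: "T"}   # illustrative 2-bit coding rule

def dna_encode(data: bytes) -> str:
    """Encode bytes as a DNA string, 2 bits per nucleotide."""
    return "".join(BASE[(b >> s) & 3] for b in data for s in (6, 4, 2, 0))

def pwlcm(x, p):
    """One iteration of the piecewise linear chaotic map on [0, 1)."""
    if x < p:
        return x / p
    if x < 0.5:
        return (x - p) / (0.5 - p)
    return pwlcm(1.0 - x, p)              # the map is symmetric about 0.5

def hiding_locations(n, length, x=0.123456, p=0.3):
    """n chaos-driven indices into a host sequence of the given length."""
    locs = []
    for _ in range(n):
        x = pwlcm(x, p)
        locs.append(int(x * length))
    return locs

print(dna_encode(b"hi"))
locs = hiding_locations(5, 1000)
print(locs)
```

    The chaotic map makes the locations reproducible for anyone holding the key (x, p) yet hard to predict without it, which is what ties hiding capacity to key space in the analysis.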

  14. Channel-capacity gain in entanglement-assisted communication protocols based exclusively on linear optics, single-photon inputs, and coincidence photon counting

    DOE PAGES

    Lougovski, P.; Uskov, D. B.

    2015-08-04

    Entanglement can effectively increase communication channel capacity as evidenced by dense coding that predicts a capacity gain of 1 bit when compared to entanglement-free protocols. However, dense coding relies on Bell states and when implemented using photons the capacity gain is bounded by 0.585 bits due to one's inability to discriminate between the four optically encoded Bell states. In this research we study the following question: Are there alternative entanglement-assisted protocols that rely only on linear optics, coincidence photon counting, and separable single-photon input states and at the same time provide a greater capacity gain than 0.585 bits? In this study, we show that besides the Bell states there is a class of bipartite four-mode two-photon entangled states that facilitate an increase in channel capacity. We also discuss how the proposed scheme can be generalized to the case of two-photon N-mode entangled states for N=6,8.

  15. The magical number 4 in short-term memory: a reconsideration of mental storage capacity.

    PubMed

    Cowan, N

    2001-02-01

    Miller (1956) summarized evidence that people can remember about seven chunks in short-term memory (STM) tasks. However, that number was meant more as a rough estimate and a rhetorical device than as a real capacity limit. Others have since suggested that there is a more precise capacity limit, but that it is only three to five chunks. The present target article brings together a wide variety of data on capacity limits suggesting that the smaller capacity limit is real. Capacity limits will be useful in analyses of information processing only if the boundary conditions for observing them can be carefully described. Four basic conditions in which chunks can be identified and capacity limits can accordingly be observed are: (1) when information overload limits chunks to individual stimulus items, (2) when other steps are taken specifically to block the recoding of stimulus items into larger chunks, (3) in performance discontinuities caused by the capacity limit, and (4) in various indirect effects of the capacity limit. Under these conditions, rehearsal and long-term memory cannot be used to combine stimulus items into chunks of an unknown size; nor can storage mechanisms that are not capacity-limited, such as sensory memory, allow the capacity-limited storage mechanism to be refilled during recall. A single, central capacity limit averaging about four chunks is implicated along with other, noncapacity-limited sources. The pure STM capacity limit expressed in chunks is distinguished from compound STM limits obtained when the number of separately held chunks is unclear. Reasons why pure capacity estimates fall within a narrow range are discussed and a capacity limit for the focus of attention is proposed.

  16. Capacity of optical communications over a lossy bosonic channel with a receiver employing the most general coherent electro-optic feedback control

    NASA Astrophysics Data System (ADS)

    Chung, Hye Won; Guha, Saikat; Zheng, Lizhong

    2017-07-01

    We study the problem of designing optical receivers to discriminate between multiple coherent states using coherent processing receivers—i.e., one that uses arbitrary coherent feedback control and quantum-noise-limited direct detection—which was shown by Dolinar to achieve the minimum error probability in discriminating any two coherent states. We first derive and reinterpret Dolinar's binary-hypothesis minimum-probability-of-error receiver as the one that optimizes the information efficiency at each time instant, based on recursive Bayesian updates within the receiver. Using this viewpoint, we propose a natural generalization of Dolinar's receiver design to discriminate M coherent states, each of which could now be a codeword, i.e., a sequence of N coherent states, each drawn from a modulation alphabet. We analyze the channel capacity of the pure-loss optical channel with a general coherent-processing receiver in the low-photon number regime and compare it with the capacity achievable with direct detection and the Holevo limit (achieving the latter would require a quantum joint-detection receiver). We show compelling evidence that despite the optimal performance of Dolinar's receiver for the binary coherent-state hypothesis test (either in error probability or mutual information), the asymptotic communication rate achievable by such a coherent-processing receiver is only as good as direct detection. This suggests that in the infinitely long codeword limit, all potential benefits of coherent processing at the receiver can be obtained by designing a good code and direct detection, with no feedback within the receiver.
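    The Holevo limit invoked above has a widely cited closed form for the pure-loss channel: capacity g(ηN̄) bits per mode, where N̄ is the mean transmitted photon number, η the transmissivity, and g(x) = (x+1)log₂(x+1) − x·log₂(x). The photon numbers evaluated below are illustrative.

```python
from math import log2

def g(x):
    """Holevo capacity of the pure-loss bosonic channel, bits/mode, at mean
    received photon number x (with x = eta * N_bar)."""
    return (x + 1) * log2(x + 1) - (x * log2(x) if x > 0 else 0.0)

for n in (0.01, 0.1, 1.0):
    print(f"mean received photons {n}: Holevo capacity {g(n):.4f} bits/mode")
```

    Comparing such values against the rates achievable with direct detection is the kind of gap the paper's coherent-processing receiver analysis addresses.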

  17. SeqCompress: an algorithm for biological sequence compression.

    PubMed

    Sardaraz, Muhammad; Tahir, Muhammad; Ikram, Ataul Aziz; Bajwa, Hassan

    2014-10-01

    The growth of Next Generation Sequencing technologies presents significant research challenges, specifically in designing bioinformatics tools that handle massive amounts of data efficiently. Biological sequence data storage has become a noticeable proportion of the total cost of sequence generation and analysis. In particular, the DNA sequencing rate is increasing significantly faster than disk storage capacity and may eventually exceed it. It is essential to develop algorithms that handle large data sets via better memory management. This article presents a DNA sequence compression algorithm, SeqCompress, that copes with the space complexity of biological sequences. The algorithm is based on lossless data compression and uses a statistical model together with arithmetic coding to compress DNA sequences. The proposed algorithm is compared with recent specialized compression tools for biological sequences. Experimental results show that the proposed algorithm achieves better compression gain than existing algorithms. Copyright © 2014 Elsevier Inc. All rights reserved.
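    A useful reference point for any DNA compressor is the naive fixed-rate baseline it must beat: packing each of A/C/G/T into 2 bits (4 bases per byte). This sketch shows only that baseline, not SeqCompress's statistical model or arithmetic coder.

```python
CODE = {"A": 0, "C": 1, "G": 2, "T": 3}

def pack(seq: str) -> bytes:
    """Pack a DNA string at 2 bits per base (last byte padded if needed)."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        b = 0
        for base in seq[i:i + 4]:
            b = (b << 2) | CODE[base]
        out.append(b)
    return bytes(out)

seq = "ACGTACGTACGT"
packed = pack(seq)
print(len(seq), "bases ->", len(packed), "bytes")  # 12 bases -> 3 bytes
```

    Statistical modeling plus arithmetic coding gains over this baseline exactly when base frequencies or contexts are skewed, driving the per-base entropy below 2 bits.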

  18. Processing of Visual--Action Codes by Deaf and Hearing Children: Coding Orientation or "M"-Capacity?

    ERIC Educational Resources Information Center

    Todman, John; Cowdy, Natascha

    1993-01-01

    Results from a study in which 25 deaf children and 25 hearing children completed a vocabulary test and a compound stimulus visual information task support the hypothesis that performance on cognitive tasks is dependent on compatibility of task demands with a coding orientation. (SLD)

  19. Research culture in a regional allied health setting.

    PubMed

    Borkowski, Donna; McKinstry, Carol; Cotchett, Matthew

    2017-07-01

    Research evidence is required to guide best practice, inform policy and improve the health of communities. Current indicators consider allied health research culture to be low. This study aimed to measure the allied health research culture and capacity in a Victorian regional health service. The Research Capacity and Culture tool was used to evaluate research capacity and culture across individual, team and organisation domains. One-way ANOVA was used to determine differences between allied health professions, whereas responses to open-ended questions were themed using open coding. One hundred thirty-six allied health professionals completed the survey. There were statistically significant differences in the organisation domain between social work, physiotherapy and occupational therapy professions; in the team domain, between social work and all other professions. Motivators for conducting research included providing a high-quality service, developing skills and increasing job satisfaction. Barriers included other work roles taking priority, a lack of time and limited research skills. Multi-layered strategies including establishing conjoint research positions are recommended to increase allied health research culture in this regional area.

  20. One-way quantum repeaters with quantum Reed-Solomon codes

    NASA Astrophysics Data System (ADS)

    Muralidharan, Sreraman; Zou, Chang-Ling; Li, Linshu; Jiang, Liang

    2018-05-01

    We show that quantum Reed-Solomon codes constructed from classical Reed-Solomon codes can approach the capacity on the quantum erasure channel of d -level systems for large dimension d . We study the performance of one-way quantum repeaters with these codes and obtain a significant improvement in key generation rate compared to previously investigated encoding schemes with quantum parity codes and quantum polynomial codes. We also compare the three generations of quantum repeaters using quantum Reed-Solomon codes and identify parameter regimes where each generation performs the best.
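    The capacity statement above can be checked numerically under the standard CSS construction, in which a classical [n, k] Reed-Solomon code with k > n/2 yields a quantum [[n, 2k−n]] code correcting n−k erasures; choosing k/n ≈ 1−p on an erasure channel with erasure probability p then gives rate (2k−n)/n → 1−2p, the quantum erasure-channel capacity per qudit (for p < 1/2). The block lengths below are illustrative.

```python
def qrs_rate(n, p):
    """Rate of a quantum [[n, 2k-n]] RS-based code sized to tolerate ~n*p erasures."""
    k = int(n * (1 - p))        # classical dimension; n - k erasures correctable
    return (2 * k - n) / n

p = 0.2
for n in (16, 256, 4096):
    print(f"n={n}: rate {qrs_rate(n, p):.4f} (capacity {1 - 2 * p})")
```

    The rate climbs toward 1−2p = 0.6 as n grows, which is the sense in which these codes approach capacity for large dimension d.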

  1. Adaptive software-defined coded modulation for ultra-high-speed optical transport

    NASA Astrophysics Data System (ADS)

    Djordjevic, Ivan B.; Zhang, Yequn

    2013-10-01

    In optically-routed networks, different wavelength channels carrying the traffic to different destinations can have quite different optical signal-to-noise ratios (OSNRs), and the signal is differently impacted by various channel impairments. Regardless of the data destination, an optical transport system (OTS) must provide the target bit-error rate (BER) performance. To provide the target BER regardless of the data destination, we adjust the forward error correction (FEC) strength. Depending on the information obtained from the monitoring channels, we select the appropriate code rate matching the OSNR range into which the current channel OSNR falls. To avoid frame synchronization issues, we keep the codeword length fixed, independent of the FEC code being employed. The common denominator is the employment of quasi-cyclic (QC-) LDPC codes in FEC. For high-speed implementation, low-complexity LDPC decoding algorithms are needed, and some of them are described in this invited paper. Instead of conventional QAM-based modulation schemes, we employ the signal constellations obtained by the optimum signal constellation design (OSCD) algorithm. To improve the spectral efficiency, we perform simultaneous rate adaptation and signal constellation size selection so that the product of the number of bits per symbol × code rate is closest to the channel capacity. Further, we describe the advantages of using 4D signaling instead of polarization-division multiplexed (PDM) QAM, by using 4D MAP detection combined with LDPC coding in a turbo equalization fashion. Finally, to solve the problems related to the limited bandwidth of the information infrastructure, high energy consumption, and the heterogeneity of optical networks, we describe an adaptive energy-efficient hybrid coded-modulation scheme, which in addition to amplitude, phase, and polarization state employs the spatial modes as additional basis functions for multidimensional coded-modulation.
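    The rate-adaptation rule described above can be sketched as follows: pick the constellation size and code rate whose product (bits/symbol × code rate) comes closest to, without exceeding, an estimate of the channel capacity. The candidate rate and constellation sets below are illustrative, not the paper's.

```python
from math import log2

RATES = (0.8, 0.85, 0.9)       # hypothetical candidate code rates
BITS = (2, 3, 4, 5, 6)         # bits/symbol: QPSK ... 64-ary constellations

def select(snr_db):
    """Return ((bits/symbol, code rate), capacity estimate) for a given SNR."""
    cap = log2(1 + 10 ** (snr_db / 10))      # per-symbol capacity estimate
    best = max(((m, r) for m in BITS for r in RATES if m * r <= cap),
               key=lambda mr: mr[0] * mr[1])
    return best, cap

for snr in (5, 10, 15):
    (m, r), cap = select(snr)
    print(f"{snr} dB: capacity {cap:.2f} -> {m} bits/symbol, rate {r}")
```

    In the paper the OSNR range reported by the monitoring channels plays the role of the SNR estimate here, and the codeword length stays fixed across all selections.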

  2. Optimal Near-Hitless Network Failure Recovery Using Diversity Coding

    ERIC Educational Resources Information Center

    Avci, Serhat Nazim

    2013-01-01

    Link failures in wide area networks are common and cause significant data losses. Mesh-based protection schemes offer high capacity efficiency, but they are slow, require complex signaling, and are unstable. Diversity coding is a proactive coding-based recovery technique which offers near-hitless (sub-ms) restoration with a competitive spare capacity…

  3. Error Control Techniques for Satellite and Space Communications

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.

    1996-01-01

    In this report, we present the results of our recent work on turbo coding in two formats. Appendix A includes the overheads of a talk that has been given at four different locations over the last eight months. This presentation has received much favorable comment from the research community and has resulted in the full-length paper included as Appendix B, 'A Distance Spectrum Interpretation of Turbo Codes'. Turbo codes use a parallel concatenation of rate-1/2 convolutional encoders combined with iterative maximum a posteriori probability (MAP) decoding to achieve a bit error rate (BER) of 10^-5 at a signal-to-noise ratio (SNR) of only 0.7 dB. The channel capacity for a rate-1/2 code with binary phase-shift-keyed modulation on the AWGN (additive white Gaussian noise) channel is 0 dB, and thus the turbo coding scheme comes within 0.7 dB of capacity at a BER of 10^-5.
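    The 0 dB figure quoted above can be reproduced from one standard form of the Shannon limit for a rate-R code on the (unconstrained-input) AWGN channel: the minimum Eb/N0 satisfies R = ½·log₂(1 + 2R·Eb/N0), i.e. Eb/N0 = (2^(2R) − 1)/(2R), which equals exactly 1 (0 dB) at R = 1/2.

```python
from math import log10

def shannon_limit_db(rate):
    """Minimum Eb/N0 (dB) for reliable communication at the given code rate."""
    ebno = (2 ** (2 * rate) - 1) / (2 * rate)
    return 10 * log10(ebno)

for r in (1 / 3, 1 / 2, 3 / 4):
    print(f"rate {r:.3f}: minimum Eb/N0 = {shannon_limit_db(r):.3f} dB")
```

    Measured against this limit, a code reaching BER 10^-5 at 0.7 dB is within 0.7 dB of capacity, which was the striking claim of the original turbo-code results.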

  4. LSB-based Steganography Using Reflected Gray Code for Color Quantum Images

    NASA Astrophysics Data System (ADS)

    Li, Panchi; Lu, Aiping

    2018-02-01

    At present, classical least-significant-bit (LSB) based image steganography has been extended to quantum image processing. For existing LSB-based quantum image steganography schemes, the embedding capacity is no more than 3 bits per pixel. Therefore, it is meaningful to study how to improve the embedding capacity of quantum image steganography. This work presents a novel LSB-based steganography using reflected Gray code for color quantum images, with an embedding capacity of up to 4 bits per pixel. In the proposed scheme, the secret qubit sequence is treated as a sequence of 4-bit segments. For the four bits in each segment, the first bit is embedded in the second LSB of the B channel of the cover image, and the remaining three bits are embedded in the LSBs of the RGB channels of each color pixel simultaneously, using reflected Gray code to determine the embedded bit from the secret information. Following the transforming rule, the LSBs of the stego-image are not always the same as the secret bits, and the differences amount to almost 50%. Experimental results confirm that the proposed scheme performs well and outperforms previous schemes in the literature in terms of embedding capacity.
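    The reflected Gray code used in the scheme maps an integer n to n ^ (n >> 1), so successive codewords differ in exactly one bit; that property is what lets an embedding rule change cover pixels minimally. A minimal sketch of the code itself (not of the quantum embedding circuit):

```python
def gray(n: int) -> int:
    """Reflected (binary) Gray code of n."""
    return n ^ (n >> 1)

def gray_inverse(g: int) -> int:
    """Recover n from its Gray code by cumulative XOR of shifted copies."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

print([format(gray(i), "03b") for i in range(8)])
# 3-bit reflected Gray sequence: 000 001 011 010 110 111 101 100
```

    Adjacent entries in the printed sequence differ in a single bit position, which is the defining property exploited when choosing which LSB to flip.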

  5. New Trends of Digital Data Storage in DNA

    PubMed Central

    2016-01-01

    With the exponential growth in the capacity of information generated and the emerging need for data to be stored for prolonged periods of time, there emerges a need for a storage medium with high capacity, high storage density, and the possibility to withstand extreme environmental conditions. DNA emerges as a prospective medium for data storage with its striking features. Diverse encoding models for reading and writing data onto DNA, codes for encrypting data which address issues of error generation, and approaches for developing codons and storage styles have been developed over the recent past. DNA has been identified as a potential medium for secret writing, which paves the way towards DNA cryptography and steganography. DNA utilized as an organic memory device, along with big data storage and analytics in DNA, has paved the way towards DNA computing for solving computational problems. This paper critically analyzes the various methods used for encoding and encrypting data onto DNA while identifying the advantages and capability of every scheme to overcome the drawbacks identified previously. Cryptography and steganography techniques have been analyzed critically while identifying the limitations of each method. This paper also identifies the advantages and limitations of DNA as a memory device and memory applications. PMID:27689089
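    One encoding model from the DNA-storage literature can be sketched in simplified form: a Goldman-style rotating code that writes data in base 3 and chooses each nucleotide from the three bases differing from the previous one, so the synthesized strand never contains homopolymer runs (which are error-prone to sequence). The trit sequence below is illustrative.

```python
BASES = "ACGT"

def encode(trits, prev="A"):
    """Map base-3 digits to nucleotides, never repeating the previous base."""
    out = []
    for t in trits:                               # each t in {0, 1, 2}
        choices = [b for b in BASES if b != prev]
        prev = choices[t]
        out.append(prev)
    return "".join(out)

strand = encode([0, 2, 1, 1, 0, 2])
print(strand)
assert all(a != b for a, b in zip(strand, strand[1:]))   # no homopolymers
```

    The decoder inverts the mapping by tracking the same "previous base" state, so the rotation costs nothing in rate beyond the binary-to-ternary conversion.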

  6. New Trends of Digital Data Storage in DNA.

    PubMed

    De Silva, Pavani Yashodha; Ganegoda, Gamage Upeksha

    With the exponential growth in the capacity of information generated and the emerging need for data to be stored for prolonged periods of time, there emerges a need for a storage medium with high capacity, high storage density, and the possibility to withstand extreme environmental conditions. DNA emerges as a prospective medium for data storage with its striking features. Diverse encoding models for reading and writing data onto DNA, codes for encrypting data which address issues of error generation, and approaches for developing codons and storage styles have been developed over the recent past. DNA has been identified as a potential medium for secret writing, which paves the way towards DNA cryptography and steganography. DNA utilized as an organic memory device, along with big data storage and analytics in DNA, has paved the way towards DNA computing for solving computational problems. This paper critically analyzes the various methods used for encoding and encrypting data onto DNA while identifying the advantages and capability of every scheme to overcome the drawbacks identified previously. Cryptography and steganography techniques have been analyzed critically while identifying the limitations of each method. This paper also identifies the advantages and limitations of DNA as a memory device and memory applications.

  7. Long distance quantum communication with quantum Reed-Solomon codes

    NASA Astrophysics Data System (ADS)

    Muralidharan, Sreraman; Zou, Chang-Ling; Li, Linshu; Jiang, Liang; Jianggroup Team

    We study the construction of quantum Reed-Solomon codes from classical Reed-Solomon codes and show that they achieve the capacity of the quantum erasure channel for multi-level quantum systems. We extend the application of quantum Reed-Solomon codes to long-distance quantum communication, investigate the local resource overhead needed for the functioning of one-way quantum repeaters with these codes, and numerically identify the parameter regime where these codes perform better than the known quantum polynomial codes and quantum parity codes. Finally, we discuss the implementation of these codes in time-bin photonic states of qubits and qudits, respectively, and optimize the performance for one-way quantum repeaters.

  8. On optimal designs of transparent WDM networks with 1 + 1 protection leveraged by all-optical XOR network coding schemes

    NASA Astrophysics Data System (ADS)

    Dao, Thanh Hai

    2018-01-01

    Network coding techniques are seen as a new dimension for improving network performance thanks to their capability of utilizing network resources more efficiently. Indeed, the application of network coding to failure recovery in optical networks marks a major departure from traditional protection schemes, as it can potentially achieve both rapid recovery and capacity improvement, challenging the prevailing wisdom of trading capacity efficiency for recovery speed and vice versa. In this context, the maturing of all-optical XOR technologies appears well matched to the need for more efficient protection in transparent optical networks. In addressing this opportunity, we propose to use practical all-optical XOR network coding to leverage conventional 1 + 1 optical path protection in transparent WDM optical networks. The network coding-assisted protection solution combines the protection flows of two demands sharing the same destination node in supportive conditions, paving the way for reducing the backup capacity. A novel mathematical model taking into account the operation of the new protection scheme for optimal network design is formulated as an integer linear program. Numerical results based on extensive simulations on realistic topologies, the COST239 and NSFNET networks, are presented to highlight the benefits of our proposal compared to the conventional approach in terms of wavelength resource efficiency and network throughput.
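The core recovery idea can be sketched in a few lines: two demands sharing a destination carry one XOR-coded backup instead of two dedicated copies, and after a single failure the destination recovers the lost signal by XORing the coded backup with the surviving working copy. The byte-level model below is a toy abstraction of the all-optical operation, not the paper's design.

```python
# Toy sketch of XOR-coded 1+1 protection: demands A and B share a
# destination. One coded backup A XOR B replaces two dedicated backups.
def xor_bytes(x: bytes, y: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(x, y))

payload_a = b"demand-A frame"
payload_b = b"demand-B frame"
coded_backup = xor_bytes(payload_a, payload_b)  # single backup flow

# suppose A's working path fails; B still arrives on its working path:
recovered_a = xor_bytes(coded_backup, payload_b)
assert recovered_a == payload_a
```

The saving is one backup wavelength per coded pair, at the cost of only protecting against single failures among the combined demands.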

  9. Best interests of adults who lack capacity part 2: key considerations.

    PubMed

    Griffith, Richard

    Last month's article discussed the key concepts underpinning the notion of best interests. In this article the author discusses the requirements for determining the best interests of an adult who lacks capacity under the provisions of the Mental Capacity Act 2005 and its code of practice (Department for Constitutional Affairs 2007).

  10. Work reintegration and cardiovascular disease: medical and rehabilitation influences.

    PubMed

    O'Hagan, F T; Coutu, M F; Thomas, S G; Mertens, D J

    2012-06-01

    Research into work reintegration following cardiovascular disease onset is limited in its clinical and individual focus. There is no research examining worker experience in context during the return to work process. Qualitative case study method informed by applied ethnography. Worker experience was assessed through longitudinal in-depth interviews with 12 workers returning to work following disabling cardiac illness. Workplace context (Canadian auto manufacturing plant) was assessed through site visits and meetings with stakeholders including occupational health personnel. Data was analyzed using constant comparison and progressive coding. Twelve men (43-63 years) participated in the study. Results revealed that unyielding production demands and performance monitoring pushed worker capacities and caused "insidious stress". Medical reassurance was important in the workers' decisions to return to work and stay on the job but medical restrictions were viewed as having limited relevance owing to limited understanding of work demands. Medical sanction was important for transient absence from the workplace as well as permanent disability. Cardiac rehabilitation programs were beneficial for lifestyle modification and building exercise capacity, but had limited benefit on work reintegration. Occupational health provided monitoring and support during work reintegration. Medical reassurance can be an important influence on worker representations of disease threat. Medical advice as it pertained to work activities was less valued as it lacked considerations of work conditions. Cardiac rehabilitation lacked intensity and relevance to work demands. Occupational health was reassuring for workers and played an important role in developing return to work plans.

  11. Review of Punching Shear Behaviour of Flat Slabs Reinforced with FRP Bars

    NASA Astrophysics Data System (ADS)

    Mohamed, Osama A.; Khattab, Rania

    2017-10-01

    Using Fibre Reinforced Polymer (FRP) bars to reinforce two-way concrete slabs can extend the service life, reduce maintenance cost and improve life-cycle cost efficiency. FRP reinforcing bars are more environmentally friendly alternatives to traditional reinforcing steel. Shear behaviour of reinforced concrete structural members is a complex phenomenon that relies on the development of internal load-carrying mechanisms, the magnitude and combination of which is still a subject of research. Many building codes and design standards provide design formulas for estimation of punching shear capacity of FRP reinforced flat slabs. Building code formulas take into account the effects of the axial stiffness of main reinforcement bars, the ratio of the perimeter of the critical section to the slab effective depth, and the slab thickness on the punching shear capacity of two-way slabs reinforced with FRP bars or grids. The goal of this paper is to compare experimental data published in the literature to the equations offered by building codes for the estimation of punching shear capacity of concrete flat slabs reinforced with FRP bars. Emphasis in this paper is on two North American codes, namely, ACI 440.1R-15 and CSA S806-12. The experimental data covered in this paper include flat slabs reinforced with GFRP, BFRP, and CFRP bars. Both ACI 440.1R-15 and CSA S806-12 are shown to be in good agreement with test results in terms of predicting the punching shear capacity.

  12. Belief propagation decoding of quantum channels by passing quantum messages

    NASA Astrophysics Data System (ADS)

    Renes, Joseph M.

    2017-07-01

    The belief propagation (BP) algorithm is a powerful tool in a wide range of disciplines from statistical physics to machine learning to computational biology, and is ubiquitous in decoding classical error-correcting codes. The algorithm works by passing messages between nodes of the factor graph associated with the code and enables efficient decoding of the channel, in some cases even up to the Shannon capacity. Here we construct the first BP algorithm which passes quantum messages on the factor graph and is capable of decoding the classical-quantum channel with pure state outputs. This gives explicit decoding circuits whose number of gates is quadratic in the code length. We also show that this decoder can be modified to work with polar codes for the pure state channel and as part of a decoder for transmitting quantum information over the amplitude damping channel. These represent the first explicit capacity-achieving decoders for non-Pauli channels.
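The quantum message passing itself cannot be reproduced classically, but the message-passing pattern BP generalizes can be: the minimal classical sum-product sketch below decodes the length-3 repetition code, whose factor graph (checks x0+x1=0 and x1+x2=0) is a tree, so BP is exact. LLR convention: positive favours bit 0. This is an illustration of classical BP only, not the paper's decoder.

```python
import math

# Minimal classical sum-product BP on a cycle-free factor graph:
# the length-3 repetition code with checks x0+x1=0 and x1+x2=0.
def check_msg(llrs):
    # message out of a parity check given incoming variable-to-check LLRs
    prod = 1.0
    for l in llrs:
        prod *= math.tanh(l / 2.0)
    return 2.0 * math.atanh(prod)

def decode(channel_llrs, checks, iters=5):
    n = len(channel_llrs)
    # msgs[(c, v)]: message from check c to variable v, flooding schedule
    msgs = {(c, v): 0.0 for c, vs in enumerate(checks) for v in vs}
    for _ in range(iters):
        new = {}
        for c, vs in enumerate(checks):
            for v in vs:
                # extrinsic info: every other variable on this check
                incoming = [channel_llrs[u] +
                            sum(msgs[(d, u)] for d, ws in enumerate(checks)
                                if u in ws and d != c)
                            for u in vs if u != v]
                new[(c, v)] = check_msg(incoming)
        msgs = new
    post = [channel_llrs[v] +
            sum(msgs[(c, v)] for c, vs in enumerate(checks) if v in vs)
            for v in range(n)]
    return [0 if L >= 0 else 1 for L in post]

# bit 2 received wrong but weakly; the two reliable bits correct it
assert decode([+2.0, +2.0, -1.0], checks=[(0, 1), (1, 2)]) == [0, 0, 0]
```

On a tree the posteriors converge to the exact bit-wise marginals; on loopy graphs (as in practical LDPC codes) the same updates run as an approximation.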

  13. Arylamine N-acetyltransferase (NAT2) mutations and their allelic linkage in unrelated caucasian individuals: Correlation with phenotypic activity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cascorbi, I.; Drakoulis, N.; Brockmoeller, J.

    1995-09-01

    The polymorphic arylamine N-acetyltransferase (NAT2; EC 2.3.1.5) is supposed to be a susceptibility factor for several drug side effects and certain malignancies. A group of 844 unrelated German subjects was genotyped for their acetylation type, and 563 of them were also phenotyped. Seven mutations of the NAT2 gene were evaluated by allele-specific PCR (mutation 341C to T) and PCR-RFLP for mutations at nt positions 191, 282, 481, 590, 803, and 857. From the mutation pattern eight different alleles, including the wild type coding for rapid acetylation and seven alleles coding for slow phenotype, were determined. Four hundred ninety-seven subjects had a genotype of slow acetylation (58.9%; 95% confidence limits 55.5%-62.2%). Phenotypic acetylation capacity was expressed as the ratio of 5-acetylamino-6-formylamino-3-methyluracil and 1-methylxanthine in urine after caffeine intake. Some 6.7% of the cases deviated in genotype and phenotype, but sequencing DNA of these probands revealed no new mutations. Furthermore, the linkage pattern of the mutations was always confirmed, as tested in 533 subjects. In vivo acetylation capacity of homozygous wild-type subjects (NAT2*4/*4) was significantly higher than in heterozygous genotypes (P = .001). All mutant alleles showed low in vivo acetylation capacities, including the previously not-yet-defined alleles *5A, *5C, and *13. Moreover, distinct slow genotypes differed significantly among each other, as reflected in the lower acetylation capacity of the *6A, *7B, and *13 alleles than the group of *5 alleles. The study demonstrated differential phenotypic activity of various NAT2 genes and gives a solid basis for clinical and molecular-epidemiological investigations. 34 refs., 4 figs., 7 tabs.

  14. Volume Holographic Storage of Digital Data Implemented in Photorefractive Media

    NASA Astrophysics Data System (ADS)

    Heanue, John Frederick

    A holographic data storage system is fundamentally different from conventional storage devices. Information is recorded in a volume, rather than on a two-dimensional surface. Data is transferred in parallel, on a page-by -page basis, rather than serially. These properties, combined with a limited need for mechanical motion, lead to the potential for a storage system with high capacity, fast transfer rate, and short access time. The majority of previous volume holographic storage experiments have involved direct storage and retrieval of pictorial information. Success in the development of a practical holographic storage device requires an understanding of the performance capabilities of a digital system. This thesis presents a number of contributions toward this goal. A description of light diffraction from volume gratings is given. The results are used as the basis for a theoretical and numerical analysis of interpage crosstalk in both angular and wavelength multiplexed holographic storage. An analysis of photorefractive grating formation in photovoltaic media such as lithium niobate is presented along with steady-state expressions for the space-charge field in thermal fixing. Thermal fixing by room temperature recording followed by ion compensation at elevated temperatures is compared to simultaneous recording and compensation at high temperature. In particular, the tradeoff between diffraction efficiency and incomplete Bragg matching is evaluated. An experimental investigation of orthogonal phase code multiplexing is described. Two unique capabilities, the ability to perform arithmetic operations on stored data pages optically, rather than electronically, and encrypted data storage, are demonstrated. A comparison of digital signal representations, or channel codes, is carried out. The codes are compared in terms of bit-error rate performance at constant capacity. 
A well-known one-dimensional digital detection technique, maximum likelihood sequence estimation, is extended for use in a two-dimensional page format memory. The effectiveness of the technique in a system corrupted by intersymbol interference is investigated both experimentally and through numerical simulations. The experimental implementation of a fully-automated multiple page digital holographic storage system is described. Finally, projections of the performance limits of holographic data storage are made taking into account typical noise sources.
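The one-dimensional baseline the thesis extends can be sketched concretely: Viterbi-based MLSE picks the input sequence whose noiseless channel output is closest (in squared error) to the observations. The toy below uses a hypothetical 2-tap ISI channel y[k] = x[k] + 0.5*x[k-1]; it illustrates standard 1-D MLSE only, not the thesis's 2-D page extension.

```python
# Sketch of 1-D maximum likelihood sequence estimation (Viterbi)
# for a toy 2-tap ISI channel y[k] = x[k] + 0.5*x[k-1], x[k] in {0, 1}.
TAPS = (1.0, 0.5)

def channel(bits):
    prev, out = 0, []
    for b in bits:
        out.append(TAPS[0] * b + TAPS[1] * prev)
        prev = b
    return out

def viterbi(y):
    # state = previous bit; keep the best (survivor) path per state
    paths = {0: ([], 0.0)}             # channel memory assumed to start at 0
    for obs in y:
        new = {}
        for state, (bits, cost) in paths.items():
            for b in (0, 1):
                c = cost + (obs - (TAPS[0] * b + TAPS[1] * state)) ** 2
                if b not in new or c < new[b][1]:
                    new[b] = (bits + [b], c)
        paths = new
    return min(paths.values(), key=lambda p: p[1])[0]

tx = [1, 0, 1, 1, 0, 0, 1]
assert viterbi(channel(tx)) == tx      # noiseless channel: exact recovery
```

With additive noise the same trellis search returns the maximum-likelihood sequence; the state count grows exponentially with channel memory, which is what makes the 2-D page case hard.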

  15. DNA barcode goes two-dimensions: DNA QR code web server.

    PubMed

    Liu, Chang; Shi, Linchun; Xu, Xiaolan; Li, Huan; Xing, Hang; Liang, Dong; Jiang, Kun; Pang, Xiaohui; Song, Jingyuan; Chen, Shilin

    2012-01-01

    The DNA barcoding technology uses a standard region of DNA sequence for species identification and discovery. At present, "DNA barcode" actually refers to DNA sequences, which are not amenable to information storage, recognition, and retrieval. Our aim is to identify the best symbology that can represent DNA barcode sequences in practical applications. A comprehensive set of sequences for five DNA barcode markers ITS2, rbcL, matK, psbA-trnH, and CO1 was used as the test data. Fifty-three different types of one-dimensional and ten two-dimensional barcode symbologies were compared based on different criteria, such as coding capacity, compression efficiency, and error detection ability. The quick response (QR) code was found to have the largest coding capacity and relatively high compression ratio. To facilitate the further usage of QR code-based DNA barcodes, a web server was developed and is accessible at http://qrfordna.dnsalias.org. The web server allows users to retrieve the QR code for a species of interests, convert a DNA sequence to and from a QR code, and perform species identification based on local and global sequence similarities. In summary, the first comprehensive evaluation of various barcode symbologies has been carried out. The QR code has been found to be the most appropriate symbology for DNA barcode sequences. A web server has also been constructed to allow biologists to utilize QR codes in practical DNA barcoding applications.
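A back-of-envelope calculation shows why the QR symbology's coding capacity matters for DNA barcodes: a version-40 QR symbol at the lowest error-correction level holds 2953 bytes in byte mode, and packing bases at 2 bits each (A, C, G, T) fits four times as much sequence as plain ASCII text. The 2-bit packing here is an illustration; the paper's web server may use a different compression.

```python
# Capacity sketch: bases that fit in one QR symbol, ASCII vs 2-bit packed.
QR_V40_L_BYTES = 2953          # max byte-mode payload of a version 40-L QR code

def max_bases(bits_per_base):
    return QR_V40_L_BYTES * 8 // bits_per_base

print(max_bases(8))   # plain ASCII sequence text: 2953 bases
print(max_bases(2))   # 2-bit packed bases:       11812 bases
```

Typical barcode markers (ITS2, rbcL, matK, psbA-trnH, COI) run a few hundred to roughly a thousand bases, so either representation fits comfortably in a single symbol, with packing leaving room for metadata and stronger error correction.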

  16. Bilayer Protograph Codes for Half-Duplex Relay Channels

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; VanNguyen, Thuy; Nosratinia, Aria

    2013-01-01

    Direct to Earth return links are limited by the size and power of lander devices. A standard alternative is provided by a two-hop return link: a proximity link (from lander to orbiter relay) and a deep-space link (from orbiter relay to Earth). Using this additional link and a proposed coding for relay channels, one can obtain a more reliable signal. Although significant progress has been made in the relay coding problem, existing codes must be painstakingly optimized to match a single set of channel conditions, many of them do not offer easy encoding, and most of them lack structured design. A high-performing LDPC (low-density parity-check) code for the relay channel addresses two important issues simultaneously: a code structure that allows low encoding complexity, and a flexible rate-compatible code that allows matching to various channel conditions. Most previous high-performance LDPC codes for the relay channel are tightly optimized for a given channel quality, and are not easily adapted to various channel conditions without extensive re-optimization. This code for the relay channel combines structured design and easy encoding with rate compatibility, allowing adaptation to the three links involved in the relay channel, and furthermore offers very good performance. The proposed code is constructed by synthesizing a bilayer structure with a protograph. In addition to the contribution to relay encoding, an improved family of protograph codes was produced for the point-to-point AWGN (additive white Gaussian noise) channel whose high-rate members enjoy thresholds that are within 0.07 dB of capacity.
These LDPC relay codes address three important issues in an integrative manner: low encoding complexity, modular structure allowing for easy design, and rate compatibility so that the code can be easily matched to a variety of channel conditions without extensive re-optimization. The main problem of half-duplex relay coding can be reduced to the simultaneous design of two codes at two rates and two SNRs (signal-to-noise ratios), such that one is a subset of the other. This problem can be addressed by forceful optimization, but a clever method of addressing it is via the bilayer lengthened (BL) LDPC structure. This method uses a bilayer Tanner graph to make the two codes while using a concept of "parity forwarding" with subsequent successive decoding that removes the need to directly address the issue of uneven SNRs among the symbols of a given codeword. This method is attractive in that it addresses some of the main issues in the design of relay codes, but it does not by itself give rise to highly structured codes with simple encoding, nor does it give rate-compatible codes. The main contribution of this work is to construct a class of codes that simultaneously possess a bilayer parity-forwarding mechanism, while also benefiting from the properties of protograph codes: easy encoding, modular design, and rate compatibility.

  17. Hippocampal Remapping Is Constrained by Sparseness rather than Capacity

    PubMed Central

    Kammerer, Axel; Leibold, Christian

    2014-01-01

    Grid cells in the medial entorhinal cortex encode space with firing fields that are arranged on the nodes of spatial hexagonal lattices. Potential candidates to read out the space information of this grid code and to combine it with other sensory cues are hippocampal place cells. In this paper, we investigate a population of grid cells providing feed-forward input to place cells. The capacity of the underlying synaptic transformation is determined by both spatial acuity and the number of different spatial environments that can be represented. The codes for different environments arise from phase shifts of the periodical entorhinal cortex patterns that induce a global remapping of hippocampal place fields, i.e., a new random assignment of place fields for each environment. If only a single environment is encoded, the grid code can be read out at high acuity with only few place cells. A surplus in place cells can be used to store a space code for more environments via remapping. The number of stored environments can be increased even more efficiently by stronger recurrent inhibition and by partitioning the place cell population such that learning affects only a small fraction of them in each environment. We find that the spatial decoding acuity is much more resilient to multiple remappings than the sparseness of the place code. Since the hippocampal place code is sparse, we thus conclude that the projection from grid cells to the place cells is not using its full capacity to transfer space information. Both populations may encode different aspects of space. PMID:25474570

  18. MetaboAnalystR: an R package for flexible and reproducible analysis of metabolomics data.

    PubMed

    Chong, Jasmine; Xia, Jianguo

    2018-06-28

    The MetaboAnalyst web application has been widely used for metabolomics data analysis and interpretation. Despite its user-friendliness, the web interface has inherent limitations (especially for advanced users) with regard to flexibility in creating customized workflows, support for reproducible analysis, and capacity for dealing with large data. To address these limitations, we have developed a companion R package (MetaboAnalystR) based on the R code base of the web server. The package has been thoroughly tested to ensure that the same R commands will produce identical results from both interfaces. MetaboAnalystR complements the MetaboAnalyst web server to facilitate transparent, flexible and reproducible analysis of metabolomics data. MetaboAnalystR is freely available from https://github.com/xia-lab/MetaboAnalystR. Supplementary data are available at Bioinformatics online.

  19. High-capacity quantum secure direct communication using hyper-entanglement of photonic qubits

    NASA Astrophysics Data System (ADS)

    Cai, Jiarui; Pan, Ziwen; Wang, Tie-Jun; Wang, Sihai; Wang, Chuan

    2016-11-01

    Hyper-entanglement is a system of photons entangled in multiple degrees of freedom (DOF), considered a promising way of increasing channel capacity and providing a powerful safeguard against eavesdropping. In this work, we propose a coding scheme based on a 3-particle hyper-entangled system of polarization and orbital angular momentum (OAM), and its application as a quantum secure direct communication (QSDC) protocol. The OAM values are encoded by the Fibonacci sequence, and the polarization carries information through defined unitary operations. The internal relations within the secret message enhance security, owing to the principles of quantum mechanics and the properties of the Fibonacci sequence. We also discuss the coding capacity and security properties, along with some simulation results, to show the scheme's superiority and extensibility.
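The abstract states that OAM values are encoded via the Fibonacci sequence but does not give the mapping. As one plausible illustration only (an assumption, not the paper's scheme), Zeckendorf's theorem provides a classic Fibonacci-based code: every positive integer is a unique sum of non-consecutive Fibonacci numbers.

```python
# Zeckendorf decomposition: a greedy algorithm that writes n as a sum of
# non-consecutive Fibonacci numbers (illustrative Fibonacci-based coding;
# the paper's actual OAM mapping is not specified in the abstract).
def zeckendorf(n: int):
    fibs = [1, 2]
    while fibs[-1] < n:
        fibs.append(fibs[-1] + fibs[-2])
    terms = []
    for f in reversed(fibs):
        if f <= n:
            terms.append(f)
            n -= f
    return terms   # greedy choice guarantees non-consecutive Fibonacci terms

print(zeckendorf(100))   # prints [89, 8, 3]
```

The uniqueness of the decomposition is what makes such representations usable as codes: a set of Fibonacci-valued OAM quanta can address integers without ambiguity.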

  20. The ADVANCE Code of Conduct for collaborative vaccine studies.

    PubMed

    Kurz, Xavier; Bauchau, Vincent; Mahy, Patrick; Glismann, Steffen; van der Aa, Lieke Maria; Simondon, François

    2017-04-04

    Lessons learnt from the 2009 (H1N1) flu pandemic highlighted factors limiting the capacity to collect European data on vaccine exposure, safety and effectiveness, including lack of rapid access to available data sources or expertise, difficulties to establish efficient interactions between multiple parties, lack of confidence between private and public sectors, concerns about possible or actual conflicts of interest (or perceptions thereof) and inadequate funding mechanisms. The Innovative Medicines Initiative's Accelerated Development of VAccine benefit-risk Collaboration in Europe (ADVANCE) consortium was established to create an efficient and sustainable infrastructure for rapid and integrated monitoring of post-approval benefit-risk of vaccines, including a code of conduct and governance principles for collaborative studies. The development of the code of conduct was guided by three core and common values (best science, strengthening public health, transparency) and a review of existing guidance and relevant published articles. The ADVANCE Code of Conduct includes 45 recommendations in 10 topics (Scientific integrity, Scientific independence, Transparency, Conflicts of interest, Study protocol, Study report, Publication, Subject privacy, Sharing of study data, Research contract). Each topic includes a definition, a set of recommendations and a list of additional reading. The concept of the study team is introduced as a key component of the ADVANCE Code of Conduct with a core set of roles and responsibilities. It is hoped that adoption of the ADVANCE Code of Conduct by all partners involved in a study will facilitate and speed up its initiation, design, conduct and reporting. Adoption of the ADVANCE Code of Conduct should be stated in the study protocol, study report and publications, and journal editors are encouraged to use it as an indication that good principles of public health, science and transparency were followed throughout the study. 
Copyright © 2017. Published by Elsevier Ltd.

  1. Combined utilization of partial-response coding and equalization for high-speed WDM-PON with centralized lightwaves.

    PubMed

    Guo, Qi; Tran, An V

    2012-12-17

    In this paper, we investigate the transmission impairments in a high-speed single-feeder wavelength-division-multiplexed passive optical network (WDM-PON) employing a low-bandwidth upstream transmitter. A 1-GHz reflective semiconductor optical amplifier (RSOA) is operated at the rates of 10 Gb/s and 20 Gb/s in the proposed WDM-PON. Since the system performance is seriously limited by its uplink in both capacity and reach owing to inter-symbol interference and reflection noise, we present a novel technique with simultaneous capability of spectral efficiency enhancement and transmission distance extension in the uplink via coding and equalization that exploit the principles of partial-response (PR) signaling. It is experimentally demonstrated that the proposed system supports the delivery of 10 Gb/s and 20 Gb/s upstream signals over 75-km and 25-km bidirectional fiber, respectively. The configuration of the PR equalizer is optimized for its best performance-complexity trade-off. The reflection tolerance of the 10 Gb/s and 20 Gb/s channels is improved by 8 dB and 6 dB, respectively, with PR coding. The proposed cost-effective signal processing scheme has great potential for the next-generation access networks.
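The partial-response idea the paper exploits can be sketched with the textbook duobinary (PR class-1, 1+D) scheme: a mod-2 precoder makes the 3-level PR signal decodable symbol by symbol without error propagation. This is a generic illustration of PR signaling, not the paper's exact coder or equalizer.

```python
# Textbook duobinary (1+D) partial-response sketch with precoding.
def pr1_encode(bits):
    coded, prev = [], 0
    for a in bits:
        b = a ^ prev            # precoder: b[k] = a[k] XOR b[k-1]
        coded.append(b + prev)  # 3-level channel-shaped output b[k] + b[k-1]
        prev = b
    return coded

def pr1_decode(levels):
    # (b[k] + b[k-1]) mod 2 == b[k] XOR b[k-1] == a[k], so a mod-2
    # slice recovers each bit independently (no error propagation)
    return [level % 2 for level in levels]

data = [1, 0, 1, 1, 0, 1, 0, 0]
assert pr1_decode(pr1_encode(data)) == data
```

The controlled intersymbol interference of the 1+D shaping narrows the signal spectrum, which is why PR techniques help a low-bandwidth transmitter carry a higher symbol rate.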

  2. LRFD software for design and actual ultimate capacity of confined rectangular columns.

    DOT National Transportation Integrated Search

    2013-04-01

    The analysis of concrete columns using unconfined concrete models is a well-established practice. On the other hand, prediction of the actual ultimate capacity of confined concrete columns requires specialized nonlinear analysis. Modern codes and...

  3. A co-designed equalization, modulation, and coding scheme

    NASA Technical Reports Server (NTRS)

    Peile, Robert E.

    1992-01-01

    The commercial impact and technical success of Trellis Coded Modulation seems to illustrate that, if Shannon's capacity is going to be neared, the modulation and coding of an analogue signal ought to be viewed as an integrated process. More recent work has focused on going beyond the gains obtained for Additive White Gaussian Noise and has tried to combine the coding/modulation with adaptive equalization. The motive is to gain similar advances on channels that are less perfect or idealized.

  4. Short- and long-term memory contributions to immediate serial recognition: evidence from serial position effects.

    PubMed

    Purser, Harry; Jarrold, Christopher

    2010-04-01

    A long-standing body of research supports the existence of separable short- and long-term memory systems, relying on phonological and semantic codes, respectively. The aim of the current study was to measure the contribution of long-term knowledge to short-term memory performance by looking for evidence of phonologically and semantically coded storage within a short-term recognition task, among developmental samples. Each experimental trial presented 4-item lists. In Experiment 1 typically developing children aged 5 to 6 years old showed evidence of phonologically coded storage across all 4 serial positions, but evidence of semantically coded storage at Serial Positions 1 and 2. In a further experiment, a group of individuals with Down syndrome was investigated as a test case that might be expected to use semantic coding to support short-term storage, but these participants showed no evidence of semantically coded storage and evidenced phonologically coded storage only at Serial Position 4, suggesting that individuals with Down syndrome have a verbal short-term memory capacity of 1 item. Our results suggest that previous evidence of semantic effects on "short-term memory performance" does not reflect semantic coding in short-term memory itself, and provide an experimental method for researchers wishing to take a relatively pure measure of verbal short-term memory capacity, in cases where rehearsal is unlikely.

  5. Kinetic turbulence simulations at extreme scale on leadership-class systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Bei; Ethier, Stephane; Tang, William

    2013-01-01

    Reliable predictive simulation capability addressing confinement properties in magnetically confined fusion plasmas is critically important for ITER, a 20 billion dollar international burning plasma device under construction in France. The complex study of kinetic turbulence, which can severely limit the energy confinement and impact the economic viability of fusion systems, requires simulations at extreme scale for such an unprecedented device size. Our newly optimized, global, ab initio particle-in-cell code solving the nonlinear equations underlying gyrokinetic theory achieves excellent performance with respect to "time to solution" at the full capacity of the IBM Blue Gene/Q, on the 786,432 cores of Mira at ALCF and recently on the 1,572,864 cores of Sequoia at LLNL. Recent multithreading and domain decomposition optimizations in the new GTC-P code represent critically important software advances for modern, low-memory-per-core systems by enabling routine simulations at unprecedented size (130 million grid points, ITER scale) and resolution (65 billion particles).

  6. Network adaptation improves temporal representation of naturalistic stimuli in Drosophila eye: I dynamics.

    PubMed

    Zheng, Lei; Nikolaev, Anton; Wardill, Trevor J; O'Kane, Cahir J; de Polavieja, Gonzalo G; Juusola, Mikko

    2009-01-01

    Because of the limited processing capacity of eyes, retinal networks must adapt constantly to best present the ever changing visual world to the brain. However, we still know little about how adaptation in retinal networks shapes neural encoding of changing information. To study this question, we recorded voltage responses from photoreceptors (R1-R6) and their output neurons (LMCs) in the Drosophila eye to repeated patterns of contrast values, collected from natural scenes. By analyzing the continuous photoreceptor-to-LMC transformations of these graded-potential neurons, we show that the efficiency of coding is dynamically improved by adaptation. In particular, adaptation enhances both the frequency and amplitude distribution of LMC output by improving sensitivity to under-represented signals within seconds. Moreover, the signal-to-noise ratio of LMC output increases in the same time scale. We suggest that these coding properties can be used to study network adaptation using the genetic tools in Drosophila, as shown in a companion paper (Part II).

  7. Network Adaptation Improves Temporal Representation of Naturalistic Stimuli in Drosophila Eye: I Dynamics

    PubMed Central

    Wardill, Trevor J.; O'Kane, Cahir J.; de Polavieja, Gonzalo G.; Juusola, Mikko

    2009-01-01

    Because of the limited processing capacity of eyes, retinal networks must adapt constantly to best present the ever changing visual world to the brain. However, we still know little about how adaptation in retinal networks shapes neural encoding of changing information. To study this question, we recorded voltage responses from photoreceptors (R1–R6) and their output neurons (LMCs) in the Drosophila eye to repeated patterns of contrast values, collected from natural scenes. By analyzing the continuous photoreceptor-to-LMC transformations of these graded-potential neurons, we show that the efficiency of coding is dynamically improved by adaptation. In particular, adaptation enhances both the frequency and amplitude distribution of LMC output by improving sensitivity to under-represented signals within seconds. Moreover, the signal-to-noise ratio of LMC output increases in the same time scale. We suggest that these coding properties can be used to study network adaptation using the genetic tools in Drosophila, as shown in a companion paper (Part II). PMID:19180196

  8. Research on pre-processing of QR Code

    NASA Astrophysics Data System (ADS)

    Sun, Haixing; Xia, Haojie; Dong, Ning

    2013-10-01

    QR code encodes many kinds of information thanks to its advantages: large storage capacity, high reliability, high-speed omnidirectional reading, small printed size, efficient representation of Chinese characters, etc. In order to obtain a clearer binarized image from a complex background and improve the recognition rate of QR codes, this paper investigates pre-processing methods for QR (Quick Response) codes and presents algorithms and results of image pre-processing for QR code recognition. The conventional method is improved by modifying Sauvola's adaptive binarization method. Additionally, a QR code extraction step that adapts to different image sizes and a flexible image correction approach are introduced, improving the efficiency and accuracy of QR code image processing.
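The baseline the paper modifies is Sauvola's adaptive threshold, T(x, y) = m * (1 + k * (s / R - 1)), with local mean m and standard deviation s over a window, and the usual parameters k and R. The pure-Python toy below runs on a tiny grayscale grid; it sketches the standard formula only, not the paper's modification (real pipelines would use OpenCV or NumPy).

```python
import math

# Standard Sauvola adaptive binarization on a small grayscale grid.
def sauvola_binarize(img, window=3, k=0.2, R=128.0):
    h, w = len(img), len(img[0])
    r = window // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # local window, clipped at the image borders
            vals = [img[yy][xx]
                    for yy in range(max(0, y - r), min(h, y + r + 1))
                    for xx in range(max(0, x - r), min(w, x + r + 1))]
            m = sum(vals) / len(vals)
            s = math.sqrt(sum((v - m) ** 2 for v in vals) / len(vals))
            t = m * (1 + k * (s / R - 1))        # Sauvola threshold
            out[y][x] = 1 if img[y][x] > t else 0
    return out

# dark QR-like modules (20) on a bright background (220)
img = [[220, 220, 220, 220],
       [220,  20,  20, 220],
       [220,  20,  20, 220],
       [220, 220, 220, 220]]
assert sauvola_binarize(img) == [[1, 1, 1, 1],
                                 [1, 0, 0, 1],
                                 [1, 0, 0, 1],
                                 [1, 1, 1, 1]]
```

Because the threshold tracks local statistics, dark modules stay separable even when illumination varies across the symbol, which a single global threshold cannot guarantee.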

  9. Implementing Subduction Models in the New Mantle Convection Code Aspect

    NASA Astrophysics Data System (ADS)

    Arredondo, Katrina; Billen, Magali

    2014-05-01

    The geodynamic community has utilized various numerical modeling codes as scientific questions arise and computer processing power increases. Citcom, a widely used mantle convection code, has limitations and vulnerabilities such as temperature overshoots of hundreds or thousands of kelvin (e.g., Kommu et al., 2013). Recently, Aspect, intended as a more powerful successor, has been in active development, with additions such as Adaptive Mesh Refinement (AMR) and improved solvers (Kronbichler et al., 2012). The validity and ease of use of Aspect are important to its survival and its role as a possible upgrade and replacement for Citcom. Development of publishable models illustrates the capacity of Aspect. We present work on the addition of non-linear solvers and stress-dependent rheology to Aspect. With a solid foundational knowledge of C++, these additions were easily added to Aspect and tested against CitcomS. Time-dependent subduction models akin to those in Billen and Hirth (2007) are built and compared in CitcomS and Aspect. Comparison with CitcomS assists in Aspect development and showcases its flexibility, usability and capabilities. References: Billen, M. I., and G. Hirth, 2007. Rheologic controls on slab dynamics. Geochemistry, Geophysics, Geosystems. Kommu, R., E. Heien, L. H. Kellogg, W. Bangerth, T. Heister, E. Studley, 2013. The Overshoot Phenomenon in Geodynamics Codes. American Geophysical Union Fall Meeting. M. Kronbichler, T. Heister, W. Bangerth, 2012. High Accuracy Mantle Convection Simulation through Modern Numerical Methods, Geophys. J. Int.

  10. Impaired letter-string processing in developmental dyslexia: what visual-to-phonology code mapping disorder?

    PubMed

    Valdois, Sylviane; Lassus-Sangosse, Delphine; Lobier, Muriel

    2012-05-01

    Poor parallel letter-string processing in developmental dyslexia was taken as evidence of poor visual attention (VA) span, that is, a limitation of visual attentional resources that affects multi-character processing. However, the use of letter stimuli in oral report tasks was challenged on its capacity to highlight a VA span disorder. In particular, report of poor letter/digit-string processing but preserved symbol-string processing was viewed as evidence of poor visual-to-phonology code mapping, in line with the phonological theory of developmental dyslexia. We assessed here the visual-to-phonological-code mapping disorder hypothesis. In Experiment 1, letter-string, digit-string and colour-string processing was assessed to disentangle a phonological versus visual familiarity account of the letter/digit versus symbol dissociation. Against a visual-to-phonological-code mapping disorder but in support of a familiarity account, results showed poor letter/digit-string processing but preserved colour-string processing in dyslexic children. In Experiment 2, two tasks of letter-string report were used, one of which was performed simultaneously to a high-taxing phonological task. Results show that dyslexic children are similarly impaired in letter-string report whether a concurrent phonological task is simultaneously performed or not. Taken together, these results provide strong evidence against a phonological account of poor letter-string processing in developmental dyslexia. Copyright © 2012 John Wiley & Sons, Ltd.

  11. Independent evolution of genomic characters during major metazoan transitions.

    PubMed

    Simakov, Oleg; Kawashima, Takeshi

    2017-07-15

    Metazoan evolution encompasses a vast evolutionary time scale spanning over 600 million years. Our ability to infer ancestral metazoan characters, both morphological and functional, is limited by our understanding of the nature and evolutionary dynamics of the underlying regulatory networks. Increasing coverage of metazoan genomes enables us to identify the evolutionary changes of the relevant genomic characters such as the loss or gain of coding sequences, gene duplications, micro- and macro-synteny, and non-coding element evolution in different lineages. In this review we describe recent advances in our understanding of ancestral metazoan coding and non-coding features, as deduced from genomic comparisons. Some genomic changes such as innovations in gene and linkage content occur at different rates across metazoan clades, suggesting some level of independence among genomic characters. While their contribution to biological innovation remains largely unclear, we review recent literature about certain genomic changes that do correlate with changes to specific developmental pathways and metazoan innovations. In particular, we discuss the origins of the recently described pharyngeal cluster which is conserved across deuterostome genomes, and highlight different genomic features that have contributed to the evolution of this group. We also assess our current capacity to infer ancestral metazoan states from gene models and comparative genomics tools and elaborate on the future directions of metazoan comparative genomics relevant to evo-devo studies. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  12. Improved Secret Image Sharing Scheme in Embedding Capacity without Underflow and Overflow.

    PubMed

    Pang, Liaojun; Miao, Deyu; Li, Huixian; Wang, Qiong

    2015-01-01

    Computational secret image sharing (CSIS) is an effective way to protect a secret image during its transmission and storage, and thus it has attracted a lot of attention since its appearance. It has become a hot topic for researchers to improve the embedding capacity and to eliminate the underflow and overflow situations, which are awkward and difficult to deal with. The scheme with the highest embedding capacity among existing schemes suffers from the underflow and overflow problems; and although the underflow and overflow situations have been well handled by various methods, the embedding capacities of those methods are reduced to varying degrees. Motivated by these concerns, we propose a novel scheme in which we use differential coding, Huffman coding and data conversion to compress the secret image before embedding it, to further improve the embedding capacity, and a pixel mapping matrix embedding method with a newly designed matrix is used to embed the secret image data into the cover image so as to avoid the underflow and overflow situations. Experimental results show that our scheme improves the embedding capacity and eliminates the underflow and overflow situations at the same time.
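    The compression front-end described above (differential coding followed by Huffman coding) can be illustrated with a minimal sketch. The pipeline below is generic, not the authors' exact implementation, and the sample pixel row is invented.

```python
import heapq
from collections import Counter

def diff_encode(pixels):
    """Differential coding: keep the first pixel, then successive differences."""
    return [pixels[0]] + [pixels[i] - pixels[i - 1] for i in range(1, len(pixels))]

def huffman_code(symbols):
    """Return a prefix-free map symbol -> bitstring built from frequencies."""
    freq = Counter(symbols)
    if len(freq) == 1:                      # degenerate single-symbol case
        return {next(iter(freq)): "0"}
    heap = [[f, i, {s: ""}] for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        merged = {s: "0" + c for s, c in lo[2].items()}
        merged.update({s: "1" + c for s, c in hi[2].items()})
        heapq.heappush(heap, [lo[0] + hi[0], tie, merged])
        tie += 1
    return heap[0][2]

# A smooth image row: the differences are small and repetitive, so they
# Huffman-compress far better than the raw 8-bit pixel values.
row = [100, 101, 102, 102, 103, 104, 104, 105]
diffs = diff_encode(row)                    # [100, 1, 1, 0, 1, 1, 0, 1]
code = huffman_code(diffs)
bits = sum(len(code[d]) for d in diffs)
print(bits, "bits vs", 8 * len(row), "raw bits")
```

    The point of differencing first is that natural images are locally smooth, so the difference alphabet is tiny and highly skewed, exactly the situation in which Huffman coding pays off.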

  13. Improved Secret Image Sharing Scheme in Embedding Capacity without Underflow and Overflow

    PubMed Central

    Pang, Liaojun; Miao, Deyu; Li, Huixian; Wang, Qiong

    2015-01-01

    Computational secret image sharing (CSIS) is an effective way to protect a secret image during its transmission and storage, and thus it has attracted a lot of attention since its appearance. It has become a hot topic for researchers to improve the embedding capacity and to eliminate the underflow and overflow situations, which are awkward and difficult to deal with. The scheme with the highest embedding capacity among existing schemes suffers from the underflow and overflow problems; and although the underflow and overflow situations have been well handled by various methods, the embedding capacities of those methods are reduced to varying degrees. Motivated by these concerns, we propose a novel scheme in which we use differential coding, Huffman coding and data conversion to compress the secret image before embedding it, to further improve the embedding capacity, and a pixel mapping matrix embedding method with a newly designed matrix is used to embed the secret image data into the cover image so as to avoid the underflow and overflow situations. Experimental results show that our scheme improves the embedding capacity and eliminates the underflow and overflow situations at the same time. PMID:26351657

  14. DNA Barcode Goes Two-Dimensions: DNA QR Code Web Server

    PubMed Central

    Li, Huan; Xing, Hang; Liang, Dong; Jiang, Kun; Pang, Xiaohui; Song, Jingyuan; Chen, Shilin

    2012-01-01

    The DNA barcoding technology uses a standard region of DNA sequence for species identification and discovery. At present, “DNA barcode” actually refers to DNA sequences, which are not amenable to information storage, recognition, and retrieval. Our aim is to identify the best symbology that can represent DNA barcode sequences in practical applications. A comprehensive set of sequences for five DNA barcode markers ITS2, rbcL, matK, psbA-trnH, and CO1 was used as the test data. Fifty-three different types of one-dimensional and ten two-dimensional barcode symbologies were compared based on different criteria, such as coding capacity, compression efficiency, and error detection ability. The quick response (QR) code was found to have the largest coding capacity and a relatively high compression ratio. To facilitate the further usage of QR code-based DNA barcodes, a web server was developed and is accessible at http://qrfordna.dnsalias.org. The web server allows users to retrieve the QR code for a species of interest, convert a DNA sequence to and from a QR code, and perform species identification based on local and global sequence similarities. In summary, the first comprehensive evaluation of various barcode symbologies has been carried out. The QR code has been found to be the most appropriate symbology for DNA barcode sequences. A web server has also been constructed to allow biologists to utilize QR codes in practical DNA barcoding applications. PMID:22574113
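    The coding-capacity and compression criteria above come down to how compactly a DNA sequence can be represented before it is handed to a symbology. A four-letter alphabet needs only 2 bits per base, as the sketch below shows; this packing is an assumed illustration, not the web server's actual encoding.

```python
# Each nucleotide needs only 2 bits, so a DNA barcode can be packed to a
# quarter of its ASCII size before being turned into a QR payload.
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
BASE = {v: k for k, v in CODE.items()}

def pack(seq):
    """Pack a DNA string into bytes, 4 bases per byte (length stored separately)."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        b = 0
        chunk = seq[i:i + 4]
        for ch in chunk:
            b = (b << 2) | CODE[ch]
        b <<= 2 * (4 - len(chunk))          # left-pad the final partial byte
        out.append(b)
    return bytes(out)

def unpack(data, n):
    """Recover the first n bases from packed bytes."""
    seq = []
    for b in data:
        for shift in (6, 4, 2, 0):
            seq.append(BASE[(b >> shift) & 0b11])
    return "".join(seq[:n])

barcode = "ATCGGATTACA"                     # 11 ASCII bytes
packed = pack(barcode)
print(len(packed), "bytes;", unpack(packed, len(barcode)) == barcode)
```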

  15. Exploring nutrition capacity in Australia's charitable food sector.

    PubMed

    Wingrove, Kate; Barbour, Liza; Palermo, Claire

    2017-11-01

    The primary aim of this study was to explore the capacity of community organisations within Australia's charitable food sector to provide nutritious food to people experiencing food insecurity. A secondary aim was to explore their capacity to provide food in an environment that encourages social interaction. This qualitative research used an exploratory case study design and was informed by a nutrition capacity framework. Participants were recruited through SecondBite, a not-for-profit food rescue organisation in Australia. Convenience sampling methods were used. Semi-structured interviews were conducted to explore the knowledge, attitudes and experiences of people actively involved in emergency food relief provision. Transcripts were thematically analysed using an open coding technique. Nine interviews were conducted. The majority of participants were female (n = 7, 77.8%) and worked or volunteered at organisations within Victoria (n = 7, 77.8%). Results suggest that the capacity for community organisations to provide nutritious food to their clients may be limited by resource availability more so than the nutrition-related knowledge and attitudes of staff members and volunteers. Australia's charitable food sector plays a vital role in addressing the short-term needs of people experiencing food insecurity. To ensure the food provided to people experiencing food insecurity is nutritious and provided in an environment that encourages social interaction, it appears that the charitable food sector requires additional resources. In order to reduce demand for emergency food relief, an integrated policy approach targeting the underlying determinants of food insecurity may be needed. © 2016 Dietitians Association of Australia.

  16. Low Density Parity Check Codes: Bandwidth Efficient Channel Coding

    NASA Technical Reports Server (NTRS)

    Fong, Wai; Lin, Shu; Maki, Gary; Yeh, Pen-Shu

    2003-01-01

    Low Density Parity Check (LDPC) codes provide near-Shannon-capacity performance for NASA missions. These codes have high coding rates, R = 0.82 and 0.875, with moderate code lengths, n = 4096 and 8176. Their decoders have inherently parallel structures, which allow for high-speed implementation. Two codes based on Euclidean Geometry (EG) were selected for flight ASIC implementation. These codes are cyclic and quasi-cyclic in nature and therefore have a simple encoder structure, which yields power and size benefits. They also have a large minimum distance, as much as d_min = 65, giving them powerful error-correcting capabilities and very low error floors. This paper will present the development of the LDPC flight encoder and decoder, its applications and status.
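    The defining property of any parity-check code, LDPC included, is that valid codewords satisfy H · c = 0 (mod 2). A toy sketch with the small (7,4) Hamming parity-check matrix, standing in for the much larger, sparse EG matrices of the paper:

```python
import numpy as np

# Toy parity-check matrix (the (7,4) Hamming code); real LDPC matrices are
# far larger and sparse, but the validity test H @ c = 0 (mod 2) is the same.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def syndrome(word):
    """All-zero syndrome <=> word is a valid codeword."""
    return H.dot(word) % 2

codeword = np.array([1, 0, 1, 1, 0, 1, 0])
corrupted = codeword.copy()
corrupted[2] ^= 1                           # channel flips one bit
# The nonzero syndrome of the corrupted word equals column 2 of H,
# which is how the decoder localizes the flipped bit.
print(syndrome(codeword), syndrome(corrupted))
```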

  17. Non-coding, mRNA-like RNAs database Y2K.

    PubMed

    Erdmann, V A; Szymanski, M; Hochberg, A; Groot, N; Barciszewski, J

    2000-01-01

    In the last few years much data has accumulated on various non-translatable RNA transcripts that are synthesised in different cells. They lack protein-coding capacity and it seems that they work mainly or exclusively at the RNA level. All known non-coding RNA transcripts are collected in the database: http://www.man.poznan.pl/5SData/ncRNA/index.html

  18. Non-coding, mRNA-like RNAs database Y2K

    PubMed Central

    Erdmann, Volker A.; Szymanski, Maciej; Hochberg, Abraham; Groot, Nathan de; Barciszewski, Jan

    2000-01-01

    In the last few years much data has accumulated on various non-translatable RNA transcripts that are synthesised in different cells. They lack protein-coding capacity and it seems that they work mainly or exclusively at the RNA level. All known non-coding RNA transcripts are collected in the database: http://www.man.poznan.pl/5SData/ncRNA/index.html PMID:10592224

  19. Genomic analysis of organismal complexity in the multicellular green alga Volvox carteri

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prochnik, Simon E.; Umen, James; Nedelcu, Aurora

    2010-07-01

    Analysis of the Volvox carteri genome reveals that this green alga's increased organismal complexity and multicellularity are associated with modifications in protein families shared with its unicellular ancestor, and not with large-scale innovations in protein coding capacity. The multicellular green alga Volvox carteri and its morphologically diverse close relatives (the volvocine algae) are uniquely suited for investigating the evolution of multicellularity and development. We sequenced the 138 Mb genome of V. carteri and compared its {approx}14,500 predicted proteins to those of its unicellular relative, Chlamydomonas reinhardtii. Despite fundamental differences in organismal complexity and life history, the two species have similar protein-coding potentials, and few species-specific protein-coding gene predictions. Interestingly, volvocine algal-specific proteins are enriched in Volvox, including those associated with an expanded and highly compartmentalized extracellular matrix. Our analysis shows that increases in organismal complexity can be associated with modifications of lineage-specific proteins rather than large-scale invention of protein-coding capacity.

  20. Limited static and dynamic delivering capacity allocations in scale-free networks

    NASA Astrophysics Data System (ADS)

    Haddou, N. Ben; Ez-Zahraouy, H.; Rachadi, A.

    In traffic networks, it is quite important to assign proper packet delivering capacities to the routers at minimum cost. In this respect, many allocation models based on static and dynamic properties have been proposed. In this paper, we are interested in the impact of limiting the packet delivering capacities already allocated to the routers; each node is assigned a packet delivering capacity limited by the maximal capacity Cmax of the routers. To study the limitation effect, we use two basic delivering capacity allocation models: static delivering capacity allocation (SDCA) and dynamic delivering capacity allocation (DDCA). In the SDCA, the capacity allocated is proportional to the node degree; in the DDCA, it is proportional to the node's queue length. We have studied and compared the limitation of both allocation models under the shortest path (SP) routing strategy as well as the efficient path (EP) routing protocol. In the SP case, we noted a similarity in the results: the network capacity increases with increasing Cmax. For the EP scheme, the network capacity stops increasing at a relatively small packet delivering capacity limit Cmax for both allocation strategies; however, it reaches high values under the limited DDCA before saturation. We also find that, in the DDCA case, the network capacity remains constant when the traffic information available to each router is updated only after long time periods τ.
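    The two capped allocation rules can be sketched as follows. The proportionality constants and total budget are assumptions chosen for illustration, not the paper's parameters.

```python
def sdca(degrees, beta, c_max):
    """Static allocation: capacity proportional to node degree, capped at c_max."""
    return [min(max(1, round(beta * k)), c_max) for k in degrees]

def ddca(queue_lengths, c_max, total):
    """Dynamic allocation: a fixed total budget split in proportion to the
    current queue lengths, again capped at c_max per router."""
    q_sum = sum(queue_lengths) or 1
    return [min(max(1, round(total * q / q_sum)), c_max) for q in queue_lengths]

degrees = [1, 2, 4, 8, 32]          # hub-dominated, scale-free-like profile
print(sdca(degrees, beta=1.0, c_max=10))    # the hub is capped at 10
queues = [0, 5, 5, 10, 30]
print(ddca(queues, c_max=10, total=50))
```

    The cap Cmax bites exactly where the two rules concentrate capacity: on high-degree hubs under SDCA, and on the currently congested routers under DDCA, which is why limiting it affects the two strategies differently.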

  1. Protograph-Based Raptor-Like Codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Chen, Tsung-Yi; Wang, Jiadong; Wesel, Richard D.

    2014-01-01

    Theoretical analysis has long indicated that feedback improves the error exponent but not the capacity of point-to-point memoryless channels. Analytic and empirical results indicate that, in the short-blocklength regime, practical rate-compatible punctured convolutional (RCPC) codes achieve low latency with the use of noiseless feedback. In 3GPP, standard rate-compatible punctured turbo (RCPT) codes did not outperform convolutional codes in the short-blocklength regime, because convolutional codes with a small number of states can be decoded optimally using the Viterbi decoder. Despite the excellent performance of convolutional codes at very short blocklengths, their strength does not scale with blocklength for a fixed number of trellis states.

  2. Code-to-Code Comparison, and Material Response Modeling of Stardust and MSL using PATO and FIAT

    NASA Technical Reports Server (NTRS)

    Omidy, Ali D.; Panerai, Francesco; Martin, Alexandre; Lachaud, Jean R.; Cozmuta, Ioana; Mansour, Nagi N.

    2015-01-01

    This report provides a code-to-code comparison between PATO, a recently developed high-fidelity material response code, and FIAT, NASA's legacy code for ablation response modeling. The goal is to demonstrate that FIAT and PATO generate the same results when using the same models. Test cases of increasing complexity are used, drawn from both arc-jet testing and a flight experiment. When using exactly the same physical models, material properties and boundary conditions, the two codes give results that agree within 2%. The minor discrepancy is attributed to the inclusion of the gas-phase heat capacity (cp) in the energy equation in PATO but not in FIAT.

  3. The CAOS camera platform: ushering in a paradigm change in extreme dynamic range imager design

    NASA Astrophysics Data System (ADS)

    Riza, Nabeel A.

    2017-02-01

    Multi-pixel imaging devices such as CCD, CMOS and Focal Plane Array (FPA) photo-sensors dominate the imaging world. These Photo-Detector Array (PDA) devices certainly have their merits, including increasingly high pixel counts and shrinking pixel sizes; nevertheless, they are hampered by limitations in instantaneous dynamic range, inter-pixel crosstalk, quantum full-well capacity, signal-to-noise ratio, sensitivity, spectral flexibility and, in some cases, imager response time. The recently invented Coded Access Optical Sensor (CAOS) camera platform works in unison with current PDA technology to counter fundamental limitations of PDA-based imagers while providing sufficiently high imaging spatial resolution and pixel counts. Engineering the CAOS camera platform using, for example, the Texas Instruments (TI) Digital Micromirror Device (DMD) ushers in a paradigm change in advanced imager design, particularly for extreme dynamic range applications.
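    The coded-access idea of multiplexing many pixels onto one detector with orthogonal time codes can be illustrated with Walsh-Hadamard codes. This is a toy model of code-division pixel access, not the actual CAOS signal chain.

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of 2)."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

# Four "pixels" with wildly different brightness (extreme dynamic range).
pixels = np.array([1e6, 1e2, 3.0, 0.5])
H = hadamard(4)

# In each time slot the single point detector records the pixel sum
# weighted by one row of the +1/-1 code matrix.
measurements = H @ pixels

# Correlation decoding: orthogonality of the rows isolates each pixel.
recovered = H.T @ measurements / 4
print(np.allclose(recovered, pixels))
```

    Every pixel contributes to every measurement, so a weak pixel is never buried in a single readout; it is recovered by correlating against its own code, which is the core of the dynamic range advantage the abstract describes.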

  4. Shared filtering processes link attentional and visual short-term memory capacity limits.

    PubMed

    Bettencourt, Katherine C; Michalka, Samantha W; Somers, David C

    2011-09-30

    Both visual attention and visual short-term memory (VSTM) have been shown to have capacity limits of 4 ± 1 objects, driving the hypothesis that they share a visual processing buffer. However, these capacity limitations also show strong individual differences, making the degree to which these capacities are related unclear. Moreover, other research has suggested a distinction between attention and VSTM buffers. To explore the degree to which capacity limitations reflect the use of a shared visual processing buffer, we compared individual subject's capacities on attentional and VSTM tasks completed in the same testing session. We used a multiple object tracking (MOT) and a VSTM change detection task, with varying levels of distractors, to measure capacity. Significant correlations in capacity were not observed between the MOT and VSTM tasks when distractor filtering demands differed between the tasks. Instead, significant correlations were seen when the tasks shared spatial filtering demands. Moreover, these filtering demands impacted capacity similarly in both attention and VSTM tasks. These observations fail to support the view that visual attention and VSTM capacity limits result from a shared buffer but instead highlight the role of the resource demands of underlying processes in limiting capacity.

  5. Protograph LDPC Codes Over Burst Erasure Channels

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Sam; Jones, Christopher

    2006-01-01

    In this paper we design high-rate protograph-based LDPC codes suitable for binary erasure channels. To simplify the encoder and decoder implementation for high-data-rate transmission, the structure of the codes is based on protographs and circulants. These LDPC codes can improve data link and network layer protocols in support of communication networks. Two classes of codes were designed. One class is designed for large block sizes, with an iterative decoding threshold that approaches the capacity of the binary erasure channel. The other class is designed for short block sizes by maximizing the minimum stopping-set size. For high code rates and short blocks, the second class outperforms the first.
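    On a binary erasure channel, iterative LDPC decoding reduces to a peeling process: repeatedly find a parity check that involves exactly one erased bit and solve it. A minimal sketch on a small parity-check matrix, illustrative rather than the protograph codes of the paper:

```python
import numpy as np

def peel_decode(H, received):
    """Peeling erasure decoder: repeatedly solve any parity check with
    exactly one erased position; None marks an erasure."""
    word = list(received)
    progress = True
    while progress:
        progress = False
        for row in H:
            unknown = [j for j in range(len(word)) if row[j] and word[j] is None]
            if len(unknown) == 1:           # this check is solvable
                known = [word[j] for j in range(len(word))
                         if row[j] and word[j] is not None]
                word[unknown[0]] = sum(known) % 2
                progress = True
    return word

H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
received = [1, None, 1, None, 0, 1, 0]      # two bits erased by the channel
print(peel_decode(H, received))
```

    Decoding stalls exactly when every remaining check covers two or more erasures, i.e., when the erasures contain a stopping set; this is why the short-block design in the abstract maximizes the minimum stopping-set size.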

  6. Geotechnical LFRD calculations of settlement and bearing capacity of GDOT shallow bridge foundations and retaining walls.

    DOT National Transportation Integrated Search

    2016-08-09

    The AASHTO codes for Load Resistance Factored Design (LRFD) regarding shallow bridge foundations : and walls have been implemented into a set of spreadsheet algorithms to facilitate the calculations of bearing : capacity and footing settlements on na...

  7. Interference and memory capacity limitations.

    PubMed

    Endress, Ansgar D; Szabó, Szilárd

    2017-10-01

    Working memory (WM) is thought to have a fixed and limited capacity. However, the origins of these capacity limitations are debated, and generally attributed to active, attentional processes. Here, we show that the existence of interference among items in memory mathematically guarantees fixed and limited capacity limits under very general conditions, irrespective of any processing assumptions. Assuming that interference (a) increases with the number of interfering items and (b) brings memory performance to chance levels for large numbers of interfering items, capacity limits are a simple function of the relative influence of memorization and interference. In contrast, we show that time-based memory limitations do not lead to fixed memory capacity limitations that are independent of the timing properties of an experiment. We show that interference can mimic both slot-like and continuous resource-like memory limitations, suggesting that these types of memory performance might not be as different as commonly believed. We speculate that slot-like WM limitations might arise from crowding-like phenomena in memory when participants have to retrieve items. Further, based on earlier research on parallel attention and enumeration, we suggest that crowding-like phenomena might be a common reason for the 3 major cognitive capacity limitations. As suggested by Miller (1956) and Cowan (2001), these capacity limitations might arise because of a common reason, even though they likely rely on distinct processes. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
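    The core argument, that interference growing with the number of items and driving accuracy toward chance yields a fixed capacity, can be seen in a toy model. The Luce-choice form below is our assumption for illustration, not the authors' exact formalization.

```python
# Toy model: each of the n-1 other items contributes interference i against
# memorization strength m, so per-item retrieval accuracy falls toward
# chance as n grows, and the expected number of items recalled saturates
# at the fixed ratio m / i regardless of list length.
def expected_recall(n, m=4.0, i=1.0):
    p_correct = m / (m + (n - 1) * i)
    return n * p_correct

for n in (1, 2, 4, 8, 16, 64, 256):
    print(n, round(expected_recall(n), 2))
# the values climb toward, but never exceed, the capacity limit m / i = 4.0
```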

  8. Three-dimensional quick response code based on inkjet printing of upconversion fluorescent nanoparticles for drug anti-counterfeiting

    NASA Astrophysics Data System (ADS)

    You, Minli; Lin, Min; Wang, Shurui; Wang, Xuemin; Zhang, Ge; Hong, Yuan; Dong, Yuqing; Jin, Guorui; Xu, Feng

    2016-05-01

    Medicine counterfeiting is a serious issue worldwide, involving potentially devastating health repercussions. Advanced anti-counterfeit technology for drugs has therefore aroused intensive interest. However, existing anti-counterfeit technologies are associated with drawbacks such as the high cost, complex fabrication process, sophisticated operation and incapability in authenticating drug ingredients. In this contribution, we developed a smart phone recognition based upconversion fluorescent three-dimensional (3D) quick response (QR) code for tracking and anti-counterfeiting of drugs. We firstly formulated three colored inks incorporating upconversion nanoparticles with RGB (i.e., red, green and blue) emission colors. Using a modified inkjet printer, we printed a series of colors by precisely regulating the overlap of these three inks. Meanwhile, we developed a multilayer printing and splitting technology, which significantly increases the information storage capacity per unit area. As an example, we directly printed the upconversion fluorescent 3D QR code on the surface of drug capsules. The 3D QR code consisted of three different color layers with each layer encoded by information of different aspects of the drug. A smart phone APP was designed to decode the multicolor 3D QR code, providing the authenticity and related information of drugs. The developed technology possesses merits in terms of low cost, ease of operation, high throughput and high information capacity, thus holds great potential for drug anti-counterfeiting.
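    The layer-per-color-channel idea can be sketched with toy binary layers in place of real QR symbols. Composing and splitting RGB channels as below is illustrative only, not the paper's upconversion-ink decoding.

```python
import numpy as np

# Three toy binary "code layers" standing in for three QR symbols.
rng = np.random.default_rng(0)
layers = rng.integers(0, 2, size=(3, 8, 8))

# Compose: each RGB channel of the printed image carries one layer,
# tripling the information stored per unit area.
color_image = np.stack([layers[c] * 255 for c in range(3)], axis=-1)

# Decode: split the channels and threshold each back to a binary layer.
decoded = (color_image.transpose(2, 0, 1) > 127).astype(int)
print((decoded == layers).all())
```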

  9. Family medicine practice performance and knowledge management.

    PubMed

    Orzano, A John; McInerney, Claire R; Tallia, Alfred F; Scharf, Davida; Crabtree, Benjamin F

    2008-01-01

    Knowledge management (KM) is the process by which people in organizations find, share, and develop knowledge for action. KM affects performance by influencing work relationships to enhance learning and decision making. To identify how family medicine practices exhibit KM. A model and a template of KM concepts were derived from a comprehensive organizational literature review. Two higher and two lower performing family medicine practices were purposefully selected from existing comparative case studies based on prevention delivery rates and innovation. Interviews, fieldnotes of operations, and clinical encounters were coded independently using the template. Face-to-face discussions resolved coding differences. All practices had processes and tools for finding, sharing, and developing knowledge; however, KM overall was limited despite implementation of expensive technologies like an electronic medical record. Where present, KM processes and tools were used by individuals but not integrated throughout the organization. Loss of information was prominent, and finding knowledge was underdeveloped. The use of technical tools and developing knowledge by reconfiguration and measurement were particularly limited. Socially related tools, such as face-to-face-communication for sharing and developing knowledge, were more developed. As in other organizations, tool use was tailored for specific outcomes and leveraged by other organizational capacities. Differences in KM occur within family practices and between family practices and other organizations and may have implications for improving practice performance. Understanding interaction patterns of work relationships and KM may explain why costly technical or externally imposed "one size fits all" practice organizational interventions have had mixed results and limited sustainability.

  10. Perspectives on the Future of CFD

    NASA Technical Reports Server (NTRS)

    Kwak, Dochan

    2000-01-01

    This viewgraph presentation gives an overview of the future of computational fluid dynamics (CFD), a field that has pioneered flow simulation. Over time, CFD has progressed along with computing power, and numerical methods have advanced as CPU and memory capacity have increased. Complex configurations are now routinely computed, and direct numerical simulation (DNS) and large eddy simulation (LES) are used to study turbulence. As computing resources moved to parallel and distributed platforms, computer-science aspects such as scalability (algorithmic and implementation), portability and transparent coding have advanced. Examples of potential future (or current) challenges include risk assessment, the limitations of heuristic models, and the development of CFD and information technology (IT) tools.

  11. The application of LDPC code in MIMO-OFDM system

    NASA Astrophysics Data System (ADS)

    Liu, Ruian; Zeng, Beibei; Chen, Tingting; Liu, Nan; Yin, Ninghao

    2018-03-01

    The combination of MIMO and OFDM technology has become one of the key technologies of fourth-generation mobile communication, as it can overcome the frequency-selective fading of the wireless channel, increase system capacity and improve frequency utilization. Error-correcting coding introduced into the system can further improve its performance. The LDPC (low-density parity-check) code is a kind of error-correcting code that can improve system reliability and anti-interference ability, and its decoding is simple and easy to implement. This paper mainly discusses the application of LDPC codes in the MIMO-OFDM system.

  12. Multitasking vs. multiplexing: Toward a normative account of limitations in the simultaneous execution of control-demanding behaviors

    PubMed Central

    Feng, S. F.; Schwemmer, M.; Gershman, S. J.; Cohen, J. D.

    2014-01-01

    Why is it that behaviors that rely on control, so striking in their diversity and flexibility, are also subject to such striking limitations? Typically, people cannot engage in more than a few — and usually only a single — control-demanding task at a time. This limitation was a defining element in the earliest conceptualizations of controlled processing, it remains one of the most widely accepted axioms of cognitive psychology, and is even the basis for some laws (e.g., against the use of mobile devices while driving). Remarkably, however, the source of this limitation is still not understood. Here, we examine one potential source of this limitation, in terms of a tradeoff between the flexibility and efficiency of representation (“multiplexing”) and the simultaneous engagement of different processing pathways (“multitasking”). We show that even a modest amount of multiplexing rapidly introduces cross-talk among processing pathways, thereby constraining the number that can be productively engaged at once. We propose that, given the large number of advantages of efficient coding, the human brain has favored this over the capacity for multitasking of control-demanding processes. PMID:24481850

  13. Multitasking versus multiplexing: Toward a normative account of limitations in the simultaneous execution of control-demanding behaviors.

    PubMed

    Feng, S F; Schwemmer, M; Gershman, S J; Cohen, J D

    2014-03-01

    Why is it that behaviors that rely on control, so striking in their diversity and flexibility, are also subject to such striking limitations? Typically, people cannot engage in more than a few-and usually only a single-control-demanding task at a time. This limitation was a defining element in the earliest conceptualizations of controlled processing; it remains one of the most widely accepted axioms of cognitive psychology, and is even the basis for some laws (e.g., against the use of mobile devices while driving). Remarkably, however, the source of this limitation is still not understood. Here, we examine one potential source of this limitation, in terms of a trade-off between the flexibility and efficiency of representation ("multiplexing") and the simultaneous engagement of different processing pathways ("multitasking"). We show that even a modest amount of multiplexing rapidly introduces cross-talk among processing pathways, thereby constraining the number that can be productively engaged at once. We propose that, given the large number of advantages of efficient coding, the human brain has favored this over the capacity for multitasking of control-demanding processes.

  14. Design for minimum energy in interstellar communication

    NASA Astrophysics Data System (ADS)

    Messerschmitt, David G.

    2015-02-01

    Microwave digital communication at interstellar distances is the foundation of extraterrestrial civilization (SETI and METI) communication of information-bearing signals. Large distances demand large transmitted power and/or large antennas, while the propagation is transparent over a wide bandwidth. Recognizing a fundamental tradeoff, reducing the energy delivered to the receiver at the expense of wide bandwidth (the opposite of terrestrial objectives) is advantageous. Wide bandwidth also results in simpler design and implementation, allowing circumvention of dispersion and scattering arising in the interstellar medium and of motion effects, and obviating any related processing. The minimum energy delivered to the receiver per bit of information is determined by the cosmic microwave background alone. By mapping a single bit onto a carrier burst, the Morse code invented for the telegraph in 1836 comes closer to this minimum energy than the approaches used in modern terrestrial radio. Rather than adding phases and amplitudes (the terrestrial approach, which increases information capacity while minimizing bandwidth), adding multiple time-frequency locations for carrier bursts increases capacity while minimizing energy per information bit. The resulting location code is simple and yet can approach the minimum energy as bandwidth is expanded. It is consistent with easy discovery, since carrier bursts are energetic and straightforward modifications to post-detection pattern recognition can identify burst patterns. Time and frequency coherence constraints leading to simple signal discovery are addressed, and observations of the interstellar medium by transmitter and receiver constrain the burst parameters and limit the search scope.
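The floor on received energy per bit mentioned above can be checked with one line of arithmetic: for power-limited, bandwidth-unlimited signaling, Shannon's limit gives Eb >= N0 ln 2, and the abstract's point is that N0 = k*T is set by the cosmic microwave background. A minimal sketch (using the commonly quoted CMB temperature of 2.725 K):

```python
import math

# Power-limited, bandwidth-unlimited Shannon limit: Eb >= N0 * ln 2,
# where the noise spectral density N0 = k * T is set by the CMB.
k_B = 1.380649e-23   # Boltzmann constant, J/K
T_cmb = 2.725        # CMB temperature, K

Eb_min = k_B * T_cmb * math.log(2)   # joules per bit
print(f"Minimum received energy per bit: {Eb_min:.3e} J")
```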

  15. 78 FR 79363 - Hazardous Materials: Adoption of ASME Code Section XII and the National Board Inspection Code

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-30

    ... Rulemaking Division, (202) 366-8553, or Stanley Staniszewski, Engineering and Research [[Page 79364... increased capacity to transport product. A review of previous research by PHMSA's Engineering and Research..., knowledge-sharing, and skill development across all engineering disciplines. ASME is recognized globally for...

  16. Quantum and Private Capacities of Low-Noise Channels

    NASA Astrophysics Data System (ADS)

    Leditzky, Felix; Leung, Debbie; Smith, Graeme

    2018-04-01

    We determine both the quantum and the private capacities of low-noise quantum channels to leading orders in the channel's distance to the perfect channel. It has been an open problem for more than 20 yr to determine the capacities of some of these low-noise channels such as the depolarizing channel. We also show that both capacities are equal to the single-letter coherent information of the channel, again to leading orders. We thus find that, in the low-noise regime, superadditivity and degenerate codes have a negligible benefit for the quantum capacity, and shielding does not improve the private capacity beyond the quantum capacity, in stark contrast to the situation when noisier channels are considered.
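For the depolarizing channel named above, the single-letter coherent information has a simple closed form, the standard hashing bound (this is the generic textbook expression, not a result specific to this paper): with each Pauli error X, Y, Z occurring with probability p/3, Q >= 1 - h2(p) - p*log2(3).

```python
import math

def h2(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def hashing_bound(p):
    """Single-letter coherent information of the depolarizing channel
    with X, Y, Z errors each of probability p/3."""
    return 1 - h2(p) - p * math.log2(3)

for p in (0.01, 0.05, 0.10):
    print(f"p = {p:.2f}: Q >= {hashing_bound(p):.4f}")
```

In the low-noise regime the paper addresses, this bound equals the quantum capacity to leading order in p.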

  17. User's manual for the BNW-II optimization code for dry/wet-cooled power plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Braun, D.J.; Bamberger, J.A.; Braun, D.J.

    1978-05-01

    The User's Manual describes how to operate BNW-II, a computer code developed by the Pacific Northwest Laboratory (PNL) as a part of its activities under the Department of Energy (DOE) Dry Cooling Enhancement Program. The computer program offers a comprehensive method of evaluating the cost savings potential of dry/wet-cooled heat rejection systems. Going beyond simple "figure-of-merit" cooling tower optimization, this method includes such items as the cost of annual replacement capacity, and the optimum split between plant scale-up and replacement capacity, as well as the purchase and operating costs of all major heat rejection components. Hence the BNW-II code is a useful tool for determining potential cost savings of new dry/wet surfaces, new piping, or other components as part of an optimized system for a dry/wet-cooled plant.

  18. Capacity achieving nonbinary LDPC coded non-uniform shaping modulation for adaptive optical communications.

    PubMed

    Lin, Changyu; Zou, Ding; Liu, Tao; Djordjevic, Ivan B

    2016-08-08

    A mutual-information-inspired nonbinary coded modulation design with non-uniform shaping is proposed. Instead of the traditional power-of-two signal constellation sizes, we design 5-QAM, 7-QAM and 9-QAM constellations, which can be used in adaptive optical networks. The non-uniform shaping and the LDPC code rate are jointly considered in the design, which results in a better-performing scheme for the same SNR values. A matched nonbinary (NB) LDPC code is used for this scheme, which further improves the coding gain and the overall performance. We analyze both the coding performance and the system SNR performance. We show that the proposed NB-LDPC-coded 9-QAM has more than 2 dB gain in symbol SNR compared to traditional LDPC-coded star-8-QAM. On the other hand, the proposed NB-LDPC-coded 5-QAM and 7-QAM have even better performance than LDPC-coded QPSK.
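As a toy illustration of the non-uniform shaping idea (the 3x3 constellation and the Maxwell-Boltzmann-style prior below are hypothetical, not the paper's optimized design): skewing probability toward low-energy points trades a little source entropy for a lower average transmit power.

```python
import math
from itertools import product

# Hypothetical 9-QAM: the 3x3 grid {-1,0,1}^2 with a Maxwell-Boltzmann-like
# prior favoring low-energy points (shaping parameter lam is arbitrary).
points = list(product((-1, 0, 1), repeat=2))
lam = 0.5
w = [math.exp(-lam * (x * x + y * y)) for x, y in points]
Z = sum(w)
probs = [wi / Z for wi in w]

entropy = -sum(p * math.log2(p) for p in probs)              # bits/symbol
avg_power = sum(p * (x * x + y * y) for p, (x, y) in zip(probs, points))
uniform_power = sum(x * x + y * y for x, y in points) / len(points)

print(f"source entropy: {entropy:.3f} bits/symbol (uniform: {math.log2(9):.3f})")
print(f"average power : {avg_power:.3f} (uniform: {uniform_power:.3f})")
```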

  19. Impact of limited solvent capacity on metabolic rate, enzyme activities, and metabolite concentrations of S. cerevisiae glycolysis.

    PubMed

    Vazquez, Alexei; de Menezes, Marcio A; Barabási, Albert-László; Oltvai, Zoltan N

    2008-10-01

    The cell's cytoplasm is crowded by its various molecular components, resulting in a limited solvent capacity for the allocation of new proteins, thus constraining various cellular processes such as metabolism. Here we study the impact of the limited solvent capacity constraint on the metabolic rate, enzyme activities, and metabolite concentrations using a computational model of Saccharomyces cerevisiae glycolysis as a case study. We show that given the limited solvent capacity constraint, the optimal enzyme activities and the metabolite concentrations necessary to achieve a maximum rate of glycolysis are in agreement with their experimentally measured values. Furthermore, the predicted maximum glycolytic rate determined by the solvent capacity constraint is close to that measured in vivo. These results indicate that the limited solvent capacity is a relevant constraint acting on S. cerevisiae at physiological growth conditions, and that a full kinetic model together with the limited solvent capacity constraint can be used to predict both metabolite concentrations and enzyme activities in vivo.
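The qualitative argument can be made concrete with a deliberately tiny model (not the paper's full kinetic model): in a linear pathway with per-step flux v_i = k_i * e_i, steady state forces e_i = v / k_i, so a crowding ("solvent capacity") constraint sum(a_i * e_i) <= C caps the flux at v_max = C / sum(a_i / k_i). All rate and cost values below are hypothetical.

```python
# Toy linear pathway under a solvent-capacity constraint.
k = [10.0, 2.0, 5.0]    # hypothetical turnover rates per step
a = [1.0, 1.5, 0.8]     # hypothetical volume cost per unit enzyme
C = 1.0                 # total solvent capacity available

# Closed-form optimum: all capacity is used, flux is the harmonic-style cap.
v_max = C / sum(ai / ki for ai, ki in zip(a, k))
e_opt = [v_max / ki for ki in k]   # enzyme levels that realize v_max

print(f"max flux: {v_max:.4f}")
print("optimal enzyme levels:", [round(e, 4) for e in e_opt])
```

Note that the slowest, bulkiest step (large a_i/k_i) dominates the denominator, mirroring the paper's point that crowding, not kinetics alone, can set the achievable metabolic rate.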

  20. Impact of Limited Solvent Capacity on Metabolic Rate, Enzyme Activities, and Metabolite Concentrations of S. cerevisiae Glycolysis

    PubMed Central

    Vazquez, Alexei; de Menezes, Marcio A.; Barabási, Albert-László; Oltvai, Zoltan N.

    2008-01-01

    The cell's cytoplasm is crowded by its various molecular components, resulting in a limited solvent capacity for the allocation of new proteins, thus constraining various cellular processes such as metabolism. Here we study the impact of the limited solvent capacity constraint on the metabolic rate, enzyme activities, and metabolite concentrations using a computational model of Saccharomyces cerevisiae glycolysis as a case study. We show that given the limited solvent capacity constraint, the optimal enzyme activities and the metabolite concentrations necessary to achieve a maximum rate of glycolysis are in agreement with their experimentally measured values. Furthermore, the predicted maximum glycolytic rate determined by the solvent capacity constraint is close to that measured in vivo. These results indicate that the limited solvent capacity is a relevant constraint acting on S. cerevisiae at physiological growth conditions, and that a full kinetic model together with the limited solvent capacity constraint can be used to predict both metabolite concentrations and enzyme activities in vivo. PMID:18846199

  1. High-efficiency Gaussian key reconciliation in continuous variable quantum key distribution

    NASA Astrophysics Data System (ADS)

    Bai, ZengLiang; Wang, XuYang; Yang, ShenShen; Li, YongMin

    2016-01-01

    Efficient reconciliation is a crucial step in continuous variable quantum key distribution. The progressive-edge-growth (PEG) algorithm is an efficient method to construct relatively short block length low-density parity-check (LDPC) codes. The quasi-cyclic construction method can extend short block length codes and further eliminate the shortest cycle. In this paper, by combining the PEG algorithm and the quasi-cyclic construction method, we design long block length irregular LDPC codes with high error-correcting capacity. Based on these LDPC codes, we achieve high-efficiency Gaussian key reconciliation with slice reconciliation based on multilevel coding/multistage decoding, with an efficiency of 93.7%.
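Reconciliation efficiency is conventionally measured as beta = R / C, the ratio of the code rate to the Gaussian-channel capacity at the operating SNR. The sketch below evaluates that ratio; the SNR value is a hypothetical operating point chosen only to land near the efficiency quoted above.

```python
import math

def reconciliation_efficiency(code_rate, snr):
    """beta = R / C with C = 0.5 * log2(1 + SNR) per channel use
    (one-dimensional Gaussian channel)."""
    capacity = 0.5 * math.log2(1.0 + snr)
    return code_rate / capacity

# Hypothetical operating point: a rate-0.5 LDPC code at SNR = 1.1.
beta = reconciliation_efficiency(0.5, 1.1)
print(f"efficiency: {beta:.1%}")
```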

  2. Disjointness of Stabilizer Codes and Limitations on Fault-Tolerant Logical Gates

    NASA Astrophysics Data System (ADS)

    Jochym-O'Connor, Tomas; Kubica, Aleksander; Yoder, Theodore J.

    2018-04-01

    Stabilizer codes are among the most successful quantum error-correcting codes, yet they have important limitations on their ability to fault tolerantly compute. Here, we introduce a new quantity, the disjointness of the stabilizer code, which, roughly speaking, is the number of mostly nonoverlapping representations of any given nontrivial logical Pauli operator. The notion of disjointness proves useful in limiting transversal gates on any error-detecting stabilizer code to a finite level of the Clifford hierarchy. For code families, we can similarly restrict logical operators implemented by constant-depth circuits. For instance, we show that it is impossible, with a constant-depth but possibly geometrically nonlocal circuit, to implement a logical non-Clifford gate on the standard two-dimensional surface code.

  3. Benefit of adaptive FEC in shared backup path protected elastic optical network.

    PubMed

    Guo, Hong; Dai, Hua; Wang, Chao; Li, Yongcheng; Bose, Sanjay K; Shen, Gangxiang

    2015-07-27

    We apply an adaptive forward error correction (FEC) allocation strategy to an Elastic Optical Network (EON) operated with shared backup path protection (SBPP). To maximize the protected network capacity that can be carried, an Integer Linear Programing (ILP) model and a spectrum window plane (SWP)-based heuristic algorithm are developed. Simulation results show that the FEC coding overhead required by the adaptive FEC scheme is significantly lower than that needed by a fixed FEC allocation strategy resulting in higher network capacity for the adaptive strategy. The adaptive FEC allocation strategy can also significantly outperform the fixed FEC allocation strategy both in terms of the spare capacity redundancy and the average FEC coding overhead needed per optical channel. The proposed heuristic algorithm is efficient and not only performs closer to the ILP model but also does much better than the shortest-path algorithm.
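The adaptive-FEC idea can be sketched as a lookup: assign each lightpath the smallest coding overhead whose net coding gain covers its OSNR shortfall. The overhead/gain menu and thresholds below are hypothetical, not the paper's ILP formulation.

```python
# Toy adaptive-FEC picker (all numbers hypothetical).
FEC_MENU = [            # (overhead fraction, net coding gain in dB), sorted
    (0.07, 6.0),
    (0.15, 8.0),
    (0.25, 9.5),
]

def pick_fec(required_osnr_db, path_osnr_db):
    """Return the smallest overhead whose gain covers the OSNR shortfall."""
    shortfall = required_osnr_db - path_osnr_db
    for overhead, gain in FEC_MENU:
        if gain >= shortfall:
            return overhead
    return None          # path not serviceable even with the strongest FEC

print(pick_fec(14.0, 7.5))   # shortfall 6.5 dB -> 0.15 overhead
```

A fixed-FEC strategy would assign the worst-case overhead (here 0.25) to every path, which is the spare-capacity redundancy the adaptive scheme avoids.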

  4. Essential Properties of Language, or, Why Language Is Not a Code

    ERIC Educational Resources Information Center

    Kravchenko, Alexander V.

    2007-01-01

    Despite a strong tradition of viewing "coded equivalence" as the underlying principle of linguistic semiotics, it lacks the power needed to understand and explain language as an empirical phenomenon characterized by complex dynamics. Applying the biology of cognition to the nature of the human cognitive/linguistic capacity as rooted in the…

  5. Investigation of Near Shannon Limit Coding Schemes

    NASA Technical Reports Server (NTRS)

    Kwatra, S. C.; Kim, J.; Mo, Fan

    1999-01-01

    Turbo codes can deliver performance that is very close to the Shannon limit. This report investigates algorithms for convolutional turbo codes and block turbo codes; both coding schemes can achieve performance near the Shannon limit. The performance of the schemes is obtained using computer simulations. There are three sections in this report. The first section is the introduction, which discusses fundamental knowledge about coding, block coding, and convolutional coding. In the second section, the basic concepts of convolutional turbo codes are introduced and the performance of turbo codes, especially high-rate turbo codes, is provided from the simulation results. After introducing all the parameters that help turbo codes achieve such good performance, it is concluded that the output weight distribution should be the main consideration in designing turbo codes. Based on the output weight distribution, performance bounds for turbo codes are given. Then, the relationships between the output weight distribution and factors such as the generator polynomial, the interleaver, and the puncturing pattern are examined, and a criterion for the best selection of system components is provided. The puncturing pattern algorithm is discussed in detail, and different puncturing patterns are compared for each high rate. For most of the high-rate codes, the puncturing pattern does not show any significant effect on the code performance if a pseudo-random interleaver is used in the system. For some special-rate codes with poor performance, an alternative puncturing algorithm is designed which restores their performance close to the Shannon limit. Finally, in section three, for iterative decoding of block codes, the method of building a trellis for block codes, the structure of the iterative decoding system, and the calculation of extrinsic values are discussed.
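Puncturing, as discussed above, simply deletes parity bits from the rate-1/3 turbo output according to a periodic pattern. A minimal sketch (the alternating pattern below is a common illustrative choice, not one of the report's optimized patterns):

```python
# A rate-1/3 turbo encoder emits (systematic, parity1, parity2) per input
# bit; deleting parity bits with a periodic pattern raises the rate.
def puncture(triples, pattern):
    """triples: list of (s, p1, p2); pattern: list of (keep_p1, keep_p2)."""
    out = []
    for i, (s, p1, p2) in enumerate(triples):
        keep1, keep2 = pattern[i % len(pattern)]
        out.append(s)                 # systematic bits are always kept
        if keep1:
            out.append(p1)
        if keep2:
            out.append(p2)
    return out

coded = [(1, 0, 1), (0, 1, 1), (1, 1, 0), (0, 0, 1)]
sent = puncture(coded, [(1, 0), (0, 1)])   # keep p1 and p2 alternately
print(sent, "rate =", len(coded) / len(sent))   # rate = 0.5
```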

  6. Recalculation with SEACAB of the activation by spent fuel neutrons and residual dose originated in the racks replaced at Cofrentes NPP

    NASA Astrophysics Data System (ADS)

    Ortego, Pedro; Rodriguez, Alain; Töre, Candan; Compadre, José Luis de Diego; Quesada, Baltasar Rodriguez; Moreno, Raul Orive

    2017-09-01

    In order to increase the storage capacity of the East Spent Fuel Pool at the Cofrentes NPP, located in the Valencia province, Spain, the existing stainless steel storage racks were replaced by a new design of compact borated stainless steel racks, allowing a 65% increase in fuel storage capacity. Calculation of the activation of the used racks was successfully performed with the MCNP4B code. Additionally, the dose rate in contact with a row of racks in standing position and behind a wall of shielding material was calculated using the MCNP4B code as well. These results allowed a preliminary definition of the bunker required for the storage of the racks. Recently, the activity in the racks has been recalculated with the SEACAB system, which combines the mesh tally of the MCNP codes with the activation code ACAB, applying the rigorous two-step (R2S) method developed in-house, benchmarked against FNG irradiation experiments and usually applied in fusion calculations for the ITER project.

  7. Drug-laden 3D biodegradable label using QR code for anti-counterfeiting of drugs.

    PubMed

    Fei, Jie; Liu, Ran

    2016-06-01

    Wiping out counterfeit drugs is a great task for public health care around the world. The spread of these drugs makes treatment potentially harmful or even lethal. In this paper, a biodegradable drug-laden QR code label for the anti-counterfeiting of drugs is proposed that provides non-fluorescence recognition and high capacity. It is fabricated by laser cutting, which produces different roughness across the surface of the translucent material and hence the differences in gray levels that form the QR code pattern, followed by a micro-mold process to obtain the drug-laden biodegradable label. We screened biomaterials meeting the relevant conditions and the further requirements of the package. The drug-laden microlabel sits on the surface of a troche or the bottom of a capsule and can be read by a simple smartphone QR code reader application. Labeling the pill directly and decoding the information successfully means a more convenient and simpler operation, with non-fluorescence recognition and high capacity, in contrast to traditional methods. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. Experimental Investigations on Axially and Eccentrically Loaded Masonry Walls

    NASA Astrophysics Data System (ADS)

    Keshava, Mangala; Raghunath, Seshagiri Rao

    2017-12-01

    In India, un-reinforced masonry walls are often used as the main structural components in load bearing structures. The Indian code on masonry accounts for the reduction in strength of walls by using stress reduction factors in its design philosophy. This code was introduced in 1987 and reaffirmed in 1995. The present study investigates the use of these factors for south Indian masonry. Also, with the growing popularity of block work construction, the aim of this study was to find out the suitability of the factors given in the Indian code for block work masonry. Normally, the load carrying capacity of masonry walls can be assessed in three ways, namely, (1) tests on masonry constituents, (2) tests on masonry prisms and (3) tests on full-scale wall specimens. Tests on bricks/blocks, cement-sand mortar, brick/block masonry prisms and 14 full-scale brick/block masonry walls formed the experimental investigation. The behavior of the walls was investigated under varying slenderness and eccentricity ratios. Hollow concrete blocks, normally used as in-fill masonry, can be considered as load bearing elements, as their load carrying capacity was found to be high when compared to conventional brick masonry. Higher slenderness and eccentricity ratios drastically reduced the strength capacity of south Indian brick masonry walls. The reduction in strength due to slenderness and eccentricity is presented in the form of stress reduction factors in the Indian code. The factors obtained through experiments on eccentrically loaded brick masonry walls were lower, while those for brick/block masonry under axial loads were higher, than the values indicated in the Indian code. Also, the reduction in strength is different for brick and block work masonry, indicating the need for separate stress reduction factors for these two masonry materials.
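The design check described above can be sketched as a table lookup with linear interpolation on the slenderness ratio; the factor values and stresses below are hypothetical placeholders, NOT the IS 1905 tables.

```python
# Hypothetical (slenderness ratio, stress reduction factor k_s) table.
TABLE = [(6, 1.00), (10, 0.89), (14, 0.78), (18, 0.67)]

def stress_reduction(sr):
    """Linearly interpolate k_s from the table, clamping at the ends."""
    if sr <= TABLE[0][0]:
        return TABLE[0][1]
    for (x0, y0), (x1, y1) in zip(TABLE, TABLE[1:]):
        if sr <= x1:
            return y0 + (y1 - y0) * (sr - x0) / (x1 - x0)
    return TABLE[-1][1]

# Permissible load per metre length = k_s * basic stress * area (hypothetical).
f_b = 0.5e6            # basic compressive stress, Pa
A = 0.23 * 1.0         # wall cross-section per metre length, m^2
sr = 12.0
P = stress_reduction(sr) * f_b * A
print(f"k_s = {stress_reduction(sr):.3f}, permissible load = {P / 1e3:.1f} kN/m")
```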

  9. The effect of working memory capacity limitations on the intuitive assessment of correlation: amplification, attenuation, or both?

    PubMed

    Cahan, Sorel; Mor, Yaniv

    2007-03-01

    This article challenges Yaakov Kareev's (1995a, 2000) argument regarding the positive bias of intuitive correlation estimates due to working memory capacity limitations and its adaptive value. The authors show that, under narrow window theory's primacy effect assumption, there is a considerable between-individual variability of the effects of capacity limitations on the intuitive assessment of correlation, in terms of both sign and magnitude: Limited capacity acts as an amplifier for some individuals and as a silencer for others. Furthermore, the average amount of attenuation exceeds the average amount of amplification, and the more so, the smaller the capacity. Implications regarding the applicability and contribution of the bias notion in this context and the evaluation of the adaptive value of capacity limitations are discussed.
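Kareev's "narrow window" effect discussed above is easy to reproduce by simulation: with small samples, the sampling distribution of r is skewed, so the typical (median) observed correlation exceeds the population value even though the mean falls short of it. A sketch with an assumed rho = 0.4 and window size n = 7:

```python
import random
import statistics

def sample_r(rho, n, rng):
    """Pearson r from n bivariate-normal pairs with population correlation rho."""
    xs = [rng.gauss(0, 1) for _ in range(n)]
    ys = [rho * x + (1 - rho ** 2) ** 0.5 * rng.gauss(0, 1) for x in xs]
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

rng = random.Random(0)
rs = [sample_r(0.4, 7, rng) for _ in range(20000)]
print("median r:", statistics.median(rs), " mean r:", statistics.fmean(rs))
```

The median landing above 0.4 is the amplification Kareev emphasized; the mean landing below it is the attenuation side of the trade-off the article analyzes.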

  10. Combined electric and acoustic hearing performance with Zebra® speech processor: speech reception, place, and temporal coding evaluation.

    PubMed

    Vaerenberg, Bart; Péan, Vincent; Lesbros, Guillaume; De Ceulaer, Geert; Schauwers, Karen; Daemers, Kristin; Gnansia, Dan; Govaerts, Paul J

    2013-06-01

    To assess the auditory performance of Digisonic(®) cochlear implant users with electric stimulation (ES) and electro-acoustic stimulation (EAS) with special attention to the processing of low-frequency temporal fine structure. Six patients implanted with a Digisonic(®) SP implant and showing low-frequency residual hearing were fitted with the Zebra(®) speech processor providing both electric and acoustic stimulation. Assessment consisted of monosyllabic speech identification tests in quiet and in noise at different presentation levels, and a pitch discrimination task using harmonic and disharmonic intonating complex sounds ( Vaerenberg et al., 2011 ). These tests investigate place and time coding through pitch discrimination. All tasks were performed with ES only and with EAS. Speech results in noise showed significant improvement with EAS when compared to ES. Whereas EAS did not yield better results in the harmonic intonation test, the improvements in the disharmonic intonation test were remarkable, suggesting better coding of pitch cues requiring phase locking. These results suggest that patients with residual hearing in the low-frequency range still have good phase-locking capacities, allowing them to process fine temporal information. ES relies mainly on place coding but provides poor low-frequency temporal coding, whereas EAS also provides temporal coding in the low-frequency range. Patients with residual phase-locking capacities can make use of these cues.

  11. Emergency and urgent care capacity in a resource-limited setting: an assessment of health facilities in western Kenya

    PubMed Central

    Burke, Thomas F; Hines, Rosemary; Ahn, Roy; Walters, Michelle; Young, David; Anderson, Rachel Eleanor; Tom, Sabrina M; Clark, Rachel; Obita, Walter; Nelson, Brett D

    2014-01-01

    Objective Injuries, trauma and non-communicable diseases are responsible for a rising proportion of death and disability in low-income and middle-income countries. Delivering effective emergency and urgent healthcare for these and other conditions in resource-limited settings is challenging. In this study, we sought to examine and characterise emergency and urgent care capacity in a resource-limited setting. Methods We conducted an assessment within all 30 primary and secondary hospitals and within a stratified random sampling of 30 dispensaries and health centres in western Kenya. The key informants were the most senior facility healthcare provider and manager available. Emergency physician researchers utilised a semistructured assessment tool, and data were analysed using descriptive statistics and thematic coding. Results No lower level facilities and 30% of higher level facilities reported having a defined, organised approach to trauma. 43% of higher level facilities had access to an anaesthetist. The majority of lower level facilities had suture and wound care supplies and gloves but typically lacked other basic trauma supplies. For cardiac care, 50% of higher level facilities had morphine, but a minority had functioning ECG, sublingual nitroglycerine or a defibrillator. Only 20% of lower level facilities had glucometers, and only 33% of higher level facilities could care for diabetic emergencies. No facilities had sepsis clinical guidelines. Conclusions Large gaps in essential emergency care capabilities were identified at all facility levels in western Kenya. There are great opportunities for a universally deployed basic emergency care package, an advanced emergency care package and facility designation scheme, and a reliable prehospital care transportation and communications system in resource-limited settings. PMID:25260371

  12. 77 FR 35667 - Commission Information Collection Activities (FERC-567); Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-14

    ... Act of 1995, 44 United States Code (U.S.C.) 3507(a)(1)(D), the Federal Energy Regulatory Commission... Reports of System Flow Diagrams and System Capacity to the Office of Management and Budget (OMB) for... System Flow Diagrams and System Capacity. OMB Control No.: 1902-0005. Type of Request: Three-year...

  13. Energy-efficient spatial-domain-based hybrid multidimensional coded-modulations enabling multi-Tb/s optical transport.

    PubMed

    Djordjevic, Ivan B

    2011-08-15

    In addition to capacity, future high-speed optical transport networks will also be constrained by energy consumption. In order to address the capacity and energy constraints simultaneously, in this paper we propose the use of energy-efficient hybrid D-dimensional signaling (D>4) that employs all available degrees of freedom for conveying information over a single carrier, including amplitude, phase, polarization and orbital angular momentum (OAM). Given that the OAM eigenstates, associated with the azimuthal phase dependence of the complex electric field, are orthogonal, they can be used as basis functions for multidimensional signaling. Since the information capacity is a linear function of the number of dimensions, D-dimensional signal constellations can significantly improve the overall optical channel capacity. The energy-efficiency problem is solved, in this paper, by properly designing the D-dimensional signal constellation so that the mutual information is maximized while taking the energy constraint into account. We demonstrate the high potential of the proposed energy-efficient hybrid D-dimensional coded-modulation scheme by Monte Carlo simulations. © 2011 Optical Society of America

  14. Understanding the investigators: a qualitative study investigating the barriers and enablers to the implementation of local investigator-initiated clinical trials in Ethiopia

    PubMed Central

    Franzen, Samuel R P; Chandler, Clare; Enquselassie, Fikre; Siribaddana, Sisira; Atashili, Julius; Angus, Brian; Lang, Trudie

    2013-01-01

    Objectives Clinical trials provide ‘gold standard’ evidence for policy, but insufficient locally relevant trials are conducted in low-income and middle-income countries. Local investigator-initiated trials could generate highly relevant data for national governments, but information is lacking on how to facilitate them. We aimed to identify barriers and enablers to investigator-initiated trials in Ethiopia to inform and direct capacity strengthening initiatives. Design Exploratory, qualitative study comprising of in-depth interviews (n=7) and focus group discussions (n=3). Setting Fieldwork took place in Ethiopia during March 2011. Participants Local health researchers with previous experiences of clinical trials or stakeholders with an interest in trials were recruited through snowball sampling (n=20). Outcome measures Detailed discussion notes were analysed using thematic coding analysis and key themes were identified. Results All participants perceived investigator-initiated trials as important for generating local evidence. System and organisational barriers included: limited funding allocation, weak regulatory and administrative systems, few learning opportunities, limited human and material capacity and poor incentives for conducting research. Operational hurdles were symptomatic of these barriers. Lack of awareness, confidence and motivation to undertake trials were important individual barriers. Training, knowledge sharing and experience exchange were key enablers to trial conduct and collaboration was unanimously regarded as important for improving capacity. Conclusions Barriers to trial conduct were found at individual, operational, organisational and system levels. These findings indicate that to increase locally led trial conduct in Ethiopia, system wide changes are needed to create a more receptive and enabling research environment. Crucially, the creation of research networks between potential trial groups could provide much needed practical collaborative support through sharing of financial and project management burdens, knowledge and resources. These findings could have important implications for capacity-strengthening initiatives but further research is needed before the results can be generalised more widely. PMID:24285629

  15. Performance enhancement of optical code-division multiple-access systems using transposed modified Walsh code

    NASA Astrophysics Data System (ADS)

    Sikder, Somali; Ghosh, Shila

    2018-02-01

    This paper presents the construction of unipolar transposed modified Walsh code (TMWC) and analysis of its performance in optical code-division multiple-access (OCDMA) systems. Specifically, the signal-to-noise ratio, bit error rate (BER), cardinality, and spectral efficiency were investigated. The theoretical analysis demonstrated that the wavelength-hopping time-spreading system using TMWC was robust against multiple-access interference and more spectrally efficient than systems using other existing OCDMA codes. In particular, the spectral efficiency was calculated to be 1.0370 when TMWC of weight 3 was employed. The BER and eye pattern for the designed TMWC were also successfully obtained using OptiSystem simulation software. The results indicate that the proposed code design is promising for enhancing network capacity.

  16. Community-based research in action: tales from the Ktunaxa community learning centres project.

    PubMed

    Stacy, Elizabeth; Wisener, Katherine; Liman, Yolanda; Beznosova, Olga; Lauscher, Helen Novak; Ho, Kendall; Jarvis-Selinger, Sandra

    2014-01-01

    Rural communities, particularly Aboriginal communities, often have limited access to health information, a situation that can have significant negative consequences. To address the lack of culturally and geographically relevant health information, a community-university partnership was formed to develop, implement, and evaluate Aboriginal Community Learning Centres (CLCs). The objective of this paper is to evaluate the community-based research process used in the development of the CLCs. It focuses on the process of building relationships among partners and the CLC's value and sustainability. Semistructured interviews were conducted with key stakeholders, including principal investigators, community research leads, and supervisors. The interview transcripts were analyzed using an open-coding process to identify themes. Key challenges included enacting shared project governance, negotiating different working styles, and hiring practices based on commitment to project objectives rather than skill set. Technological access provided by the CLCs increased capacity for learning and collective community initiatives, as well as building community leads' skills, knowledge, and self-efficacy. An important lesson was to meet all partners "where they are" in building trusting relationships and adapting research methods to fit the project's context and strengths. Successful results were dependent upon persistence and patience in working through differences, and breaking the project into achievable goals, which collectively contributed to trust and capacity building. The process of building these partnerships resulted in increased capacity of communities to facilitate learning and change initiatives, and the capacity of the university to engage in successful research partnerships with Aboriginal communities in the future.

  17. Addressing the threat to biodiversity from botanic gardens.

    PubMed

    Hulme, Philip E

    2011-04-01

    Increasing evidence highlights the role that botanic gardens might have in plant invasions across the globe. Botanic gardens, often in global biodiversity hotspots, have been implicated in the early cultivation and/or introduction of most environmental weeds listed by IUCN as among the worst invasive species worldwide. Furthermore, most of the popular ornamental species in living collections around the globe have records as alien weeds. Voluntary codes of conduct to prevent the dissemination of invasive plants from botanic gardens have had limited uptake, with few risk assessments undertaken of individual living collections. A stronger global networking of botanic gardens to tackle biological invasions involving public outreach, information sharing and capacity building is a priority to prevent the problems of the past occurring in the future.

  18. MAI statistics estimation and analysis in a DS-CDMA system

    NASA Astrophysics Data System (ADS)

    Alami Hassani, A.; Zouak, M.; Mrabti, M.; Abdi, F.

    2018-05-01

A primary limitation of Direct-Sequence Code Division Multiple Access (DS-CDMA) link performance and system capacity is multiple-access interference (MAI). To examine the performance of CDMA systems in the presence of MAI, i.e., in a multiuser environment, several works have assumed that the interference can be approximated by a Gaussian random variable. In this paper, we first develop a new and simple approach to characterizing the MAI in a multiuser system. In addition to statistically quantifying the MAI power, the paper also proposes a statistical model for both the variance and the mean of the MAI for synchronous and asynchronous CDMA transmission. We show that the MAI probability density function (PDF) is Gaussian for the equal-received-energy case and validate this by computer simulations.
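The Gaussian approximation behind this analysis can be illustrated numerically: summing the cross-correlations of many random spreading sequences produces an MAI distribution whose variance matches the standard K/N prediction. A minimal sketch, in which the spreading gain, interferer count, and random-signature model are illustrative assumptions rather than the paper's actual parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 31       # spreading gain (chips per bit) -- assumed value
K = 10       # number of interfering users    -- assumed value
trials = 20000

# Desired user's random +/-1 signature sequence.
desired = rng.choice([-1, 1], size=N)

# Each interferer contributes (data bit) x (normalized cross-correlation
# of its random signature with the desired signature); an MAI sample is
# the sum over all K interferers.
sigs = rng.choice([-1, 1], size=(trials, K, N))
bits = rng.choice([-1, 1], size=(trials, K))
mai = np.sum(bits * (sigs @ desired) / N, axis=1)

# For random signatures each term has zero mean and variance 1/N, so the
# total MAI variance is approximately K/N; the central limit theorem then
# motivates the Gaussian PDF approximation.
print(np.mean(mai), np.var(mai), K / N)
```

With many interferers the empirical mean is near zero and the variance near K/N, consistent with the Gaussian model the abstract describes.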

  19. Optimizations of a Hardware Decoder for Deep-Space Optical Communications

    NASA Technical Reports Server (NTRS)

    Cheng, Michael K.; Nakashima, Michael A.; Moision, Bruce E.; Hamkins, Jon

    2007-01-01

The National Aeronautics and Space Administration has developed a capacity-approaching modulation and coding scheme that comprises a serial concatenation of an inner accumulate pulse-position modulation (PPM) and an outer convolutional code [or serially concatenated PPM (SCPPM)] for deep-space optical communications. Decoding of this code uses the turbo principle. However, due to the nonbinary property of SCPPM, a straightforward application of classical turbo decoding is very inefficient. Here, we present various optimizations applicable in hardware implementation of the SCPPM decoder. More specifically, we feature a Super Gamma computation to efficiently handle parallel trellis edges, a pipeline-friendly 'maxstar top-2' circuit that reduces the max-only approximation penalty, a low-latency cyclic redundancy check circuit for window-based decoders, and a high-speed algorithmic polynomial interleaver that leads to memory savings. Using the featured optimizations, we implement a 6.72-megabits-per-second (Mbps) SCPPM decoder on a single field-programmable gate array (FPGA). Compared to the current data rate of 256 kilobits per second from Mars, the SCPPM coded scheme represents a throughput increase of more than twenty-six-fold. Extension to a 50-Mbps decoder on a board with multiple FPGAs follows naturally. We show through hardware simulations that the SCPPM coded system can operate within 1 dB of the Shannon capacity at nominal operating conditions.
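The "max-only approximation penalty" mentioned above comes from replacing the Jacobian logarithm with a plain maximum in the log-domain decoder. A generic sketch of the underlying identity (not the decoder's actual top-2 circuit) shows what the correction term restores:

```python
import math

def max_only(x, y):
    # Hardware-friendly approximation: log(e^x + e^y) ~ max(x, y).
    return max(x, y)

def max_star(x, y):
    # Exact Jacobian logarithm: log(e^x + e^y) = max(x, y) + correction.
    # The correction term is what a "max-star" circuit adds back to
    # reduce the max-only approximation penalty.
    return max(x, y) + math.log1p(math.exp(-abs(x - y)))

x, y = 1.0, 1.5
exact = math.log(math.exp(x) + math.exp(y))
print(exact, max_star(x, y), max_only(x, y))
```

The correction matters most when the two operands are close; when they differ widely, max-only and max-star coincide, which is why the approximation is tempting in hardware.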

  20. Harnessing high-dimensional hyperentanglement through a biphoton frequency comb

    NASA Astrophysics Data System (ADS)

    Xie, Zhenda; Zhong, Tian; Shrestha, Sajan; Xu, Xinan; Liang, Junlin; Gong, Yan-Xiao; Bienfang, Joshua C.; Restelli, Alessandro; Shapiro, Jeffrey H.; Wong, Franco N. C.; Wei Wong, Chee

    2015-08-01

    Quantum entanglement is a fundamental resource for secure information processing and communications, and hyperentanglement or high-dimensional entanglement has been separately proposed for its high data capacity and error resilience. The continuous-variable nature of the energy-time entanglement makes it an ideal candidate for efficient high-dimensional coding with minimal limitations. Here, we demonstrate the first simultaneous high-dimensional hyperentanglement using a biphoton frequency comb to harness the full potential in both the energy and time domain. Long-postulated Hong-Ou-Mandel quantum revival is exhibited, with up to 19 time-bins and 96.5% visibilities. We further witness the high-dimensional energy-time entanglement through Franson revivals, observed periodically at integer time-bins, with 97.8% visibility. This qudit state is observed to simultaneously violate the generalized Bell inequality by up to 10.95 standard deviations while observing recurrent Clauser-Horne-Shimony-Holt S-parameters up to 2.76. Our biphoton frequency comb provides a platform for photon-efficient quantum communications towards the ultimate channel capacity through energy-time-polarization high-dimensional encoding.

  1. LDPC Codes with Minimum Distance Proportional to Block Size

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel; Thorpe, Jeremy

    2009-01-01

    Low-density parity-check (LDPC) codes characterized by minimum Hamming distances proportional to block sizes have been demonstrated. Like the codes mentioned in the immediately preceding article, the present codes are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. The previously mentioned codes have low decoding thresholds and reasonably low error floors. However, the minimum Hamming distances of those codes do not grow linearly with code-block sizes. Codes that have this minimum-distance property exhibit very low error floors. Examples of such codes include regular LDPC codes with variable degrees of at least 3. Unfortunately, the decoding thresholds of regular LDPC codes are high. Hence, there is a need for LDPC codes characterized by both low decoding thresholds and, in order to obtain acceptably low error floors, minimum Hamming distances that are proportional to code-block sizes. The present codes were developed to satisfy this need. The minimum Hamming distances of the present codes have been shown, through consideration of ensemble-average weight enumerators, to be proportional to code block sizes. As in the cases of irregular ensembles, the properties of these codes are sensitive to the proportion of degree-2 variable nodes. A code having too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code having too many such nodes tends not to exhibit a minimum distance that is proportional to block size. Results of computational simulations have shown that the decoding thresholds of codes of the present type are lower than those of regular LDPC codes. 
Included in the simulations were a few examples from a family of codes characterized by rates ranging from low to high and by thresholds that adhere closely to their respective channel capacity thresholds; the simulation results from these examples showed that the codes in question have low error floors as well as low decoding thresholds. As an example, the illustration shows the protograph (which represents the blueprint for overall construction) of one proposed code family for code rates greater than or equal to 1/2. Any size LDPC code can be obtained by copying the protograph structure N times, then permuting the edges. The illustration also provides Field Programmable Gate Array (FPGA) hardware performance simulations for this code family. In addition, the illustration provides minimum signal-to-noise ratios (Eb/No) in decibels (decoding thresholds) to achieve zero error rates as the code block size goes to infinity for various code rates. In comparison with the codes mentioned in the preceding article, these codes have slightly higher decoding thresholds.
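The "copy N times, then permute the edges" construction can be sketched as replacing each edge of a base (protograph) matrix with an N x N permutation matrix. The small base matrix and lifting factor below are hypothetical illustrations, not the code family in the article:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical small base (protograph) matrix: rows are check nodes,
# columns are variable nodes; a 1 marks an edge.
base = np.array([[1, 1, 1, 0],
                 [0, 1, 1, 1]])

N = 8  # lifting (copy) factor -- assumed value

def lift(base, N):
    """Copy-and-permute: replace each 1 in the base matrix by a random
    N x N permutation matrix and each 0 by the N x N zero matrix."""
    rows, cols = base.shape
    H = np.zeros((rows * N, cols * N), dtype=int)
    for i in range(rows):
        for j in range(cols):
            if base[i, j]:
                perm = rng.permutation(N)
                H[i*N:(i+1)*N, j*N:(j+1)*N] = np.eye(N, dtype=int)[perm]
    return H

H = lift(base, N)
print(H.shape)             # lifted parity-check matrix is (2*N, 4*N)
print(H.sum(axis=1)[:3])   # each lifted check keeps the base row weight
```

Because permutation matrices have exactly one 1 per row and column, every lifted check and variable node inherits the degree profile of the protograph, which is what makes the protograph a "blueprint" for the whole family.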

  2. Superdense coding interleaved with forward error correction

    DOE PAGES

    Humble, Travis S.; Sadlier, Ronald J.

    2016-05-12

Superdense coding promises increased classical capacity and communication security but this advantage may be undermined by noise in the quantum channel. We present a numerical study of how forward error correction (FEC) applied to the encoded classical message can be used to mitigate against quantum channel noise. By studying the bit error rate under different FEC codes, we identify the unique role that burst errors play in superdense coding, and we show how these can be mitigated against by interleaving the FEC codewords prior to transmission. As a result, we conclude that classical FEC with interleaving is a useful method to improve the performance in near-term demonstrations of superdense coding.

  3. Cooperation in scale-free networks with limited associative capacities

    NASA Astrophysics Data System (ADS)

    Poncela, Julia; Gómez-Gardeñes, Jesús; Moreno, Yamir

    2011-05-01

In this work we study the effect of limiting the number of interactions (the associative capacity) that a node can establish per round of a prisoner's dilemma game. We focus on the way this limitation influences the level of cooperation sustained by scale-free networks. We show that when the game includes cooperation costs, limiting the associative capacity of nodes to a fixed quantity yields in some cases larger values of cooperation than in the unrestricted scenario. This allows one to define an optimum capacity for which cooperation is maximally enhanced. Finally, for the case without cooperation costs, we find that even a tight limitation of the associative capacity of nodes yields the same levels of cooperation as in the original network.

  4. Testing the Predictions of the Central Capacity Sharing Model

    ERIC Educational Resources Information Center

    Tombu, Michael; Jolicoeur, Pierre

    2005-01-01

    The divergent predictions of 2 models of dual-task performance are investigated. The central bottleneck and central capacity sharing models argue that a central stage of information processing is capacity limited, whereas stages before and after are capacity free. The models disagree about the nature of this central capacity limitation. The…

  5. Clinical Manifestations, Hematology, and Chemistry Profiles of the Six Most Common Etiologies from an Observational Study of Acute Febrile Illness in Indonesia

    PubMed Central

    Kosasih, Herman; Karyana, Muhammad; Lokida, Dewi; Alisjahbana, Bachti; Tjitra, Emiliana; Gasem, Muhammad Hussein; Aman, Abu Tholib; Merati, Ketut Tuti; Arif, Mansyur; Sudarmono, Pratiwi; Suharto, Suharto; Lisdawati, Vivi; Neal, Aaron; Siddiqui, Sophia

    2017-01-01

    Abstract Background Infectious diseases remain a significant healthcare burden in the developing world. In Indonesia, clinicians often manage and treat patients solely based on clinical presentations since the diagnostic testing capacities of hospitals are limited. Unfortunately, the most common infections in this tropical environment share highly similar manifestations, complicating the identification of etiologies and leading to the misdiagnosis of illness. When pathogen-specific testing is available, generally at top-tier specialist hospitals, the limited range of tests and slow turnaround times may never lead to a definitive diagnosis or improved patient outcomes. Methods To identify clinical parameters that can be used for differentiating the most common causes of fever in Indonesia, we evaluated clinical data from 1,486 acute febrile patients enrolled in a multi-site observational cohort study during 2013 to 2016. Results From the 66% of subjects with confirmed etiologies, the six most common infections were dengue virus (455), Salmonella spp. (124), Rickettsia spp. (109), influenza virus (64), Leptospira spp. (53), and chikungunya virus (37). The accompanying figure shows the clinical signs and symptoms (A) and hematology and blood chemistry results (B) for the color-coded pathogens. Comparing the profiles of all infected subjects reveals parameters that are uniquely associated with particular pathogens, such as leukopenia with dengue virus. Conclusion These observations will assist clinicians in healthcare systems with limited diagnostic testing capacities and may be useful in formulating diagnostic algorithms for Indonesia and other developing countries. Disclosures All authors: No reported disclosures.

  6. Increasing Road Infrastructure Capacity Through the Use of Autonomous Vehicles

    DTIC Science & Technology

    2016-12-01

Naval Postgraduate School, Monterey, California. Master's thesis: Increasing Road Infrastructure Capacity Through the Use of Autonomous Vehicles. Approved for public release; distribution is unlimited. Keywords: driverless vehicles, road infrastructure. 65 pages.

  7. Verbal Short-Term Memory Span in Speech-Disordered Children: Implications for Articulatory Coding in Short-Term Memory.

    ERIC Educational Resources Information Center

    Raine, Adrian; And Others

    1991-01-01

    Children with speech disorders had lower short-term memory capacity and smaller word length effect than control children. Children with speech disorders also had reduced speech-motor activity during rehearsal. Results suggest that speech rate may be a causal determinant of verbal short-term memory capacity. (BC)

  8. 76 FR 5611 - Notice of Availability of the Environmental Assessment for the Short Term Sentences Acquisition

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-01

    ... action alternatives and the No Action Alternative. Natural, cultural, and socioeconomic resource impacts.... Cohn, Chief, or Issac J. Gaston, Site Selection Specialist, Capacity Planning and Site Selection Branch..., Capacity Planning and Site Selection Branch. [FR Doc. 2011-1817 Filed 1-31-11; 8:45 am] BILLING CODE P ...

  9. Quantum Kronecker sum-product low-density parity-check codes with finite rate

    NASA Astrophysics Data System (ADS)

    Kovalev, Alexey A.; Pryadko, Leonid P.

    2013-07-01

    We introduce an ansatz for quantum codes which gives the hypergraph-product (generalized toric) codes by Tillich and Zémor and generalized bicycle codes by MacKay as limiting cases. The construction allows for both the lower and the upper bounds on the minimum distance; they scale as a square root of the block length. Many thus defined codes have a finite rate and limited-weight stabilizer generators, an analog of classical low-density parity-check (LDPC) codes. Compared to the hypergraph-product codes, hyperbicycle codes generally have a wider range of parameters; in particular, they can have a higher rate while preserving the estimated error threshold.

  10. Methodology and Method and Apparatus for Signaling With Capacity Optimized Constellations

    NASA Technical Reports Server (NTRS)

    Barsoum, Maged F. (Inventor); Jones, Christopher R. (Inventor)

    2014-01-01

Communication systems are described that use geometrically shaped constellations that have increased capacity compared to conventional constellations operating within a similar SNR band. In several embodiments, the geometrically shaped constellation is optimized based upon a capacity measure such as parallel decoding capacity or joint capacity. In many embodiments, a capacity-optimized geometrically shaped constellation can be used to replace a conventional constellation as part of a firmware upgrade to transmitters and receivers within a communication system. In a number of embodiments, the geometrically shaped constellation is optimized for an Additive White Gaussian Noise channel or a fading channel. In numerous embodiments, the communication uses adaptive rate encoding and the location of points within the geometrically shaped constellation changes as the code rate changes.
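A capacity measure of the kind used to rank candidate constellations can be estimated by Monte Carlo integration. The sketch below estimates the mutual information of an equiprobable constellation on a complex AWGN channel; QPSK is used only as a familiar stand-in, not one of the patent's optimized constellations:

```python
import numpy as np

rng = np.random.default_rng(2)

def mi_awgn(points, snr_db, n=50000):
    """Monte Carlo estimate of mutual information (bits/symbol) of an
    equiprobable constellation on a complex AWGN channel."""
    points = np.asarray(points, dtype=complex)
    points /= np.sqrt(np.mean(np.abs(points) ** 2))  # unit average energy
    M = len(points)
    N0 = 10 ** (-snr_db / 10)                        # noise variance (Es = 1)
    k = rng.integers(M, size=n)
    noise = np.sqrt(N0 / 2) * (rng.standard_normal(n)
                               + 1j * rng.standard_normal(n))
    y = points[k] + noise
    # Squared distances (scaled by N0) from each received sample to every
    # candidate symbol; log-sum-exp with the usual max-trick for stability.
    d = np.abs(y[:, None] - points[None, :]) ** 2 / N0
    dmin = d.min(axis=1)
    lse = -dmin + np.log(np.exp(-(d - dmin[:, None])).sum(axis=1))
    # I = log2(M) - E[ log2( sum_j p(y|s_j) / p(y|s_k) ) ]
    return np.log2(M) - np.mean((lse + d[np.arange(n), k]) / np.log(2))

qpsk = [1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]
print(mi_awgn(qpsk, 10.0))   # approaches 2 bits/symbol at high SNR
```

Optimizing point locations against such a measure at each target SNR is the general idea behind geometric shaping; the patent's specific optimization procedure is not reproduced here.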

  11. Optical Vector Receiver Operating Near the Quantum Limit

    NASA Astrophysics Data System (ADS)

    Vilnrotter, V. A.; Lau, C.-W.

    2005-05-01

    An optical receiver concept for binary signals with performance approaching the quantum limit at low average-signal energies is developed and analyzed. A conditionally nulling receiver that reaches the quantum limit in the absence of background photons has been devised by Dolinar. However, this receiver requires ideal optical combining and complicated real-time shaping of the local field; hence, it tends to be difficult to implement at high data rates. A simpler nulling receiver that approaches the quantum limit without complex optical processing, suitable for high-rate operation, had been suggested earlier by Kennedy. Here we formulate a vector receiver concept that incorporates the Kennedy receiver with a physical beamsplitter, but it also utilizes the reflected signal component to improve signal detection. It is found that augmenting the Kennedy receiver with classical coherent detection at the auxiliary beamsplitter output, and optimally processing the vector observations, always improves on the performance of the Kennedy receiver alone, significantly so at low average-photon rates. This is precisely the region of operation where modern codes approach channel capacity. It is also shown that the addition of background radiation has little effect on the performance of the coherent receiver component, suggesting a viable approach for near-quantum-limited performance in high background environments.
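The gap the vector receiver narrows can be seen by comparing the standard textbook error-probability expressions for discriminating binary coherent states: the quantum (Helstrom) limit, the Kennedy nulling receiver, and classical homodyne detection. These are generic literature formulas, not the paper's exact analysis:

```python
import math

def helstrom(n_bar):
    # Quantum (Helstrom) limit for equiprobable coherent states |+a>, |-a>
    # with mean photon number n_bar = |a|^2 (standard textbook form).
    return 0.5 * (1.0 - math.sqrt(1.0 - math.exp(-4.0 * n_bar)))

def kennedy(n_bar):
    # Kennedy nulling receiver: displace so one hypothesis becomes vacuum;
    # an error occurs when the other hypothesis yields zero photocounts.
    return 0.5 * math.exp(-4.0 * n_bar)

def homodyne(n_bar):
    # Classical coherent (homodyne) detection limit.
    return 0.5 * math.erfc(math.sqrt(2.0 * n_bar))

for n_bar in (0.1, 0.5, 1.0):
    print(n_bar, helstrom(n_bar), kennedy(n_bar), homodyne(n_bar))
```

At low average photon numbers the Kennedy receiver trails the Helstrom limit noticeably, which is precisely the regime where the abstract reports that adding a coherent-detection branch to the vector receiver pays off.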

  12. ASME AG-1 Section FC Qualified HEPA Filters; a Particle Loading Comparison - 13435

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stillo, Andrew; Ricketts, Craig I.

High Efficiency Particulate Air (HEPA) Filters used to protect personnel, the public and the environment from airborne radioactive materials are designed, manufactured and qualified in accordance with ASME AG-1 Code section FC (HEPA Filters) [1]. The qualification process requires that filters manufactured in accordance with this ASME AG-1 code section must meet several performance requirements. These requirements include performance specifications for resistance to airflow, aerosol penetration, resistance to rough handling, resistance to pressure (includes high humidity and water droplet exposure), resistance to heated air, spot flame resistance and a visual/dimensional inspection. None of these requirements evaluate the particle loading capacity of a HEPA filter design. Concerns over the particle loading capacity of the different designs included within the ASME AG-1 section FC code [1] have been voiced in the recent past. Additionally, the ability of a filter to maintain its integrity, if subjected to severe operating conditions such as elevated relative humidity, fog conditions or elevated temperature, after loading in use over long service intervals is also a major concern. Although currently qualified HEPA filter media are likely to have similar loading characteristics when evaluated independently, filter pleat geometry can have a significant impact on the in-situ particle loading capacity of filter packs. Aerosol particle characteristics, such as size and composition, may also have a significant impact on filter loading capacity. Test results comparing filter loading capacities for three different aerosol particles and three different filter pack configurations are reviewed. The information presented represents an empirical performance comparison among the filter designs tested. The results may serve as a basis for further discussion toward the possible development of a particle loading test to be included in the qualification requirements of ASME AG-1 Code sections FC and FK [1]. (authors)

  13. A "theory of relativity" for cognitive elasticity of time and modality dimensions supporting constant working memory capacity: involvement of harmonics among ultradian clocks?

    PubMed

    Glassman, R B

    2000-02-01

1. The capacity of working memory (WM) for about 7+/-2 ("the magical number") serially organized simple verbal items may represent a fundamental constant of cognition. Indeed, there is the same capacity for sense of familiarity of a number of recently encountered places, observed in radial maze performance both of lab rats and of humans. 2. Moreover, both species show a peculiar capacity for retaining WM of place over delays. The literature also describes paradoxes of extended time duration in certain human verbal recall tasks. Certain bird species have comparable capacity for delayed recall of about 4 to 8 food caches in a laboratory room. 3. In addition to these paradoxes of the time dimension with WM (still sometimes called "short-term" memory) there is another set of paradoxes of dimensionality for human judgment of magnitudes, noted by Miller in his classic 1956 paper on "the magical number." We are able to reliably refer magnitudes to a rating scale of up to about seven divisions. Remarkably, that finding is largely independent of perceptual modality or even of the extent of a linear interval selected within any given modality. 4. These paradoxes suggest that "the magical number 7+/-2" depends on fundamental properties of mammalian brains. 5. This paper theorizes that WM numerosity is conserved as a fundamental constant, by means of elasticity of cognitive dimensionality, including the temporal pace of arrival of significant items of cognitive information. 6. A conjectural neural code for WM item-capacity is proposed here, which extends the hypothetical principle of binding-by-synchrony. The hypothesis is that several coactive frequencies of brain electrical rhythms each mark a WM item. 7. If, indeed, WM does involve a brain wave frequency code (perhaps within the gamma frequency range that has often been suggested with the binding hypothesis) mathematical considerations suggest additional relevance of harmonic relationships.
That is, if copresent sinusoids bear harmony-like ratios and are confined within a single octave, then they have fast temporal properties, while avoiding spurious difference rhythms. Therefore, if the present hypothesis is valid, it implies a natural limit on parallel processing of separate items in organismic brains. 8. Similar logic of periodic signals may hold for slower ultradian rhythms, including hypothetical ones that contribute to time-tagging and fresh sense of familiarity of a day's event memories. Similar logic may also hold for spatial periodic functions across brain tissue that, hypothetically, represent cognitive information. Thus, harmonic transitions among temporal and spatial periodic functions are a possible vehicle for the cognitive dimensional elasticity that conserves WM capacity. 9. Supporting roles are proposed of (a) basal ganglia, as a high-capacity cache for traces of recent experience temporarily suspended from active task-relevant processing and (b) of hippocampus as a phase and interval comparator for oscillating signals, whose spatiotemporal dynamics are topologically equivalent to a toroidal grid.

  14. 49 CFR 192.201 - Required capacity of pressure relieving and limiting stations.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Design of Pipeline Components § 192.201 Required capacity of pressure relieving and limiting stations. (a) Each pressure relief station or pressure limiting station or group of those stations installed to... 49 Transportation 3 2012-10-01 2012-10-01 false Required capacity of pressure relieving and...

  15. 49 CFR 192.201 - Required capacity of pressure relieving and limiting stations.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Design of Pipeline Components § 192.201 Required capacity of pressure relieving and limiting stations. (a) Each pressure relief station or pressure limiting station or group of those stations installed to... 49 Transportation 3 2011-10-01 2011-10-01 false Required capacity of pressure relieving and...

  16. 49 CFR 192.201 - Required capacity of pressure relieving and limiting stations.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Design of Pipeline Components § 192.201 Required capacity of pressure relieving and limiting stations. (a) Each pressure relief station or pressure limiting station or group of those stations installed to... 49 Transportation 3 2013-10-01 2013-10-01 false Required capacity of pressure relieving and...

  17. 49 CFR 192.201 - Required capacity of pressure relieving and limiting stations.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Design of Pipeline Components § 192.201 Required capacity of pressure relieving and limiting stations. (a) Each pressure relief station or pressure limiting station or group of those stations installed to... 49 Transportation 3 2014-10-01 2014-10-01 false Required capacity of pressure relieving and...

  18. Integrated Performance of Next Generation High Data Rate Receiver and AR4JA LDPC Codec for Space Communications

    NASA Technical Reports Server (NTRS)

    Cheng, Michael K.; Lyubarev, Mark; Nakashima, Michael A.; Andrews, Kenneth S.; Lee, Dennis

    2008-01-01

Low-density parity-check (LDPC) codes are the state-of-the-art in forward error correction (FEC) technology that exhibits capacity-approaching performance. The Jet Propulsion Laboratory (JPL) has designed a family of LDPC codes that are similar in structure and therefore lead to a single decoder implementation. The Accumulate-Repeat-by-4-Jagged-Accumulate (AR4JA) code design offers a family of codes with rates 1/2, 2/3, 4/5 and lengths 1024, 4096, 16384 information bits. Performance is less than one dB from capacity for all combinations. Integrating a stand-alone LDPC decoder with a commercial-off-the-shelf (COTS) receiver poses additional challenges compared to building a single receiver-decoder unit from scratch. In this work, we outline the issues and show that these additional challenges can be overcome by simple solutions. To demonstrate that an LDPC decoder can be made to work seamlessly with a COTS receiver, we interface an AR4JA LDPC decoder developed on a field-programmable gate array (FPGA) with a modern high data rate receiver and measure the combined receiver-decoder performance. Through optimizations that include an improved frame synchronizer and different soft-symbol scaling algorithms, we show that a combined implementation loss of less than one dB is possible and therefore, most of the coding gain evident in theory can also be obtained in practice. Our techniques can benefit any modem that utilizes an advanced FEC code.

  19. Mental Capacity Act 2005: statutory principles and key concepts.

    PubMed

    Griffith, Richard; Tengnah, Cassam

    2008-05-01

The Mental Capacity Act 2005 represents the most significant development in the law relating to people who lack decision-making capacity since the Mental Health Act 1959 removed the state's parens patriae jurisdiction, preventing relatives, courts and government bodies from consenting on behalf of incapable adults (F vs West Berkshire HA [1990]). The Mental Capacity Act 2005 impacts on the care and treatment provided by district nurses and it is essential that you have a sound working knowledge of its provisions and code of practice. In the first article of a series focusing on how the Mental Capacity Act 2005 applies to district nurse practice, Richard Griffith and Cassam Tengnah consider the principles and key concepts underpinning the Act.

  20. Bidirectional automatic release of reserve for low voltage network made with low capacity PLCs

    NASA Astrophysics Data System (ADS)

    Popa, I.; Popa, G. N.; Diniş, C. M.; Deaconu, S. I.

    2018-01-01

The article presents the design of a bidirectional automatic release of reserve built on two types of low-capacity programmable logic controllers: PS-3 from Klöckner-Moeller and Zelio from Schneider. It analyses the electronic timing circuits that can be used for making the bidirectional automatic release of reserve: a time-on delay circuit and a time-off delay circuit (two types). The paper presents the timing code sequences for the PS-3 PLC, the logical functions for the bidirectional automatic release of reserve, the classical control electrical diagram (with contacts, relays, and time relays), the electronic control diagram (with logic gates and timing circuits), the code (in IL language) written for the PS-3 PLC, and the code (in FBD language) written for the Zelio PLC. A comparative analysis of the use of the two types of PLC is carried out, and the advantages of using PLCs are presented.

  1. Expanding capacity and promoting inclusion in introductory computer science: a focus on near-peer mentor preparation and code review

    NASA Astrophysics Data System (ADS)

    Pon-Barry, Heather; Packard, Becky Wai-Ling; St. John, Audrey

    2017-01-01

    A dilemma within computer science departments is developing sustainable ways to expand capacity within introductory computer science courses while remaining committed to inclusive practices. Training near-peer mentors for peer code review is one solution. This paper describes the preparation of near-peer mentors for their role, with a focus on regular, consistent feedback via peer code review and inclusive pedagogy. Introductory computer science students provided consistently high ratings of the peer mentors' knowledge, approachability, and flexibility, and credited peer mentor meetings for their strengthened self-efficacy and understanding. Peer mentors noted the value of videotaped simulations with reflection, discussions of inclusion, and the cohort's weekly practicum for improving practice. Adaptations of peer mentoring for different types of institutions are discussed. Computer science educators, with hopes of improving the recruitment and retention of underrepresented groups, can benefit from expanding their peer support infrastructure and improving the quality of peer mentor preparation.

  2. Analysis on applicable error-correcting code strength of storage class memory and NAND flash in hybrid storage

    NASA Astrophysics Data System (ADS)

    Matsui, Chihiro; Kinoshita, Reika; Takeuchi, Ken

    2018-04-01

A hybrid of storage class memory (SCM) and NAND flash is a promising technology for high-performance storage. Error correction is indispensable on SCM and NAND flash because their bit error rate (BER) increases with write/erase (W/E) cycles, data retention, and program/read disturb. In addition, scaling and multi-level cell technologies increase BER. However, error-correcting code (ECC) degrades storage performance because of extra memory reading and encoding/decoding time. Therefore, the applicable ECC strength of SCM and NAND flash is evaluated independently by fixing the ECC strength of one memory in the hybrid storage. As a result, a weak BCH ECC with a small number of correctable bits is recommended for the hybrid storage with large SCM capacity because the SCM is accessed frequently. In contrast, a strong but long-latency LDPC ECC can be applied to the NAND flash in the hybrid storage with large SCM capacity because the large-capacity SCM improves the storage performance.
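The trade-off described here, a weak low-latency code with a small correctable-bit count versus a stronger but slower code, can be made concrete with the simplest single-error-correcting code. The Hamming(7,4) sketch below (t = 1) is a generic illustration of "weak ECC", not the BCH or LDPC codes evaluated in the study:

```python
import numpy as np

# Hamming(7,4) in systematic form: corrects any single bit error (t = 1),
# with a cheap syndrome lookup -- the kind of weak, low-latency ECC suited
# to a frequently accessed memory.
G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
H = np.array([[1,1,0,1,1,0,0],
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])

def encode(msg):
    # 4 message bits -> 7-bit codeword (mod-2 arithmetic).
    return (np.array(msg) @ G) % 2

def decode(word):
    word = np.array(word)
    syndrome = (H @ word) % 2
    if syndrome.any():
        # A nonzero syndrome matches exactly one column of H:
        # that column index is the errored bit position.
        err = int(np.argmax((H.T == syndrome).all(axis=1)))
        word[err] ^= 1
    return word[:4]   # systematic code: message is the first 4 bits

cw = encode([1, 0, 1, 1])
cw[5] ^= 1                      # inject a single bit error
print(decode(cw))               # recovers the original message
```

Stronger codes (larger t, e.g. long BCH or LDPC) buy lower residual BER at the cost of longer decoding latency, which is exactly the axis along which the paper assigns different ECC strengths to SCM and NAND flash.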

  3. Airborne antenna radiation pattern code user's manual

    NASA Technical Reports Server (NTRS)

    Burnside, Walter D.; Kim, Jacob J.; Grandchamp, Brett; Rojas, Roberto G.; Law, Philip

    1985-01-01

The use of a newly developed computer code to analyze the radiation patterns of antennas mounted on an ellipsoid and in the presence of a set of finite flat plates is described. It is shown how the code allows the user to simulate a wide variety of complex electromagnetic radiation problems using the ellipsoid/plates model. The code has the capacity to calculate radiation patterns around an arbitrary conical cut specified by the user. The organization of the code, definition of input and output data, and numerous practical examples are also presented. The analysis is based on the Uniform Geometrical Theory of Diffraction (UTD), and most of the computed patterns are compared with experimental results to show the accuracy of this solution.

  4. Revealing Future Research Capacity from an Analysis of a National Database of Discipline-Coded Australian PhD Thesis Records

    ERIC Educational Resources Information Center

    Pittayachawan, Siddhi; Macauley, Peter; Evans, Terry

    2016-01-01

    This article reports how statistical analyses of PhD thesis records can reveal future research capacities for disciplines beyond their primary fields. The previous research showed that most theses contributed to and/or used methodologies from more than one discipline. In Australia, there was a concern for declining mathematical teaching and…

  5. 49 CFR 179.13 - Tank car capacity and gross weight limitation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 49 Transportation 3 2013-10-01 2013-10-01 false Tank car capacity and gross weight limitation. 179... FOR TANK CARS General Design Requirements § 179.13 Tank car capacity and gross weight limitation. Except as provided in this section, tank cars, built after November 30, 1970, or any existing tank cars...

  6. 49 CFR 179.13 - Tank car capacity and gross weight limitation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 49 Transportation 3 2012-10-01 2012-10-01 false Tank car capacity and gross weight limitation. 179... FOR TANK CARS General Design Requirements § 179.13 Tank car capacity and gross weight limitation. Except as provided in this section, tank cars, built after November 30, 1970, or any existing tank cars...

  7. 49 CFR 179.13 - Tank car capacity and gross weight limitation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 2 2010-10-01 2010-10-01 false Tank car capacity and gross weight limitation. 179... FOR TANK CARS General Design Requirements § 179.13 Tank car capacity and gross weight limitation. Except as provided in this section, tank cars, built after November 30, 1970, or any existing tank cars...

  8. 49 CFR 179.13 - Tank car capacity and gross weight limitation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 49 Transportation 3 2014-10-01 2014-10-01 false Tank car capacity and gross weight limitation. 179... FOR TANK CARS General Design Requirements § 179.13 Tank car capacity and gross weight limitation. Except as provided in this section, tank cars, built after November 30, 1970, or any existing tank cars...

  9. 49 CFR 179.13 - Tank car capacity and gross weight limitation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 3 2011-10-01 2011-10-01 false Tank car capacity and gross weight limitation. 179... FOR TANK CARS General Design Requirements § 179.13 Tank car capacity and gross weight limitation. Except as provided in this section, tank cars, built after November 30, 1970, or any existing tank cars...

  10. Construction of Protograph LDPC Codes with Linear Minimum Distance

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Sam; Jones, Christopher

    2006-01-01

    A construction method for protograph-based LDPC codes that simultaneously achieve low iterative decoding threshold and linear minimum distance is proposed. We start with a high-rate protograph LDPC code with variable node degrees of at least 3. Lower rate codes are obtained by splitting check nodes and connecting them by degree-2 nodes. This guarantees the linear minimum distance property for the lower-rate codes. Excluding checks connected to degree-1 nodes, we show that the number of degree-2 nodes should be at most one less than the number of checks for the protograph LDPC code to have linear minimum distance. Iterative decoding thresholds are obtained by using the reciprocal channel approximation. Thresholds are lowered by using either precoding or at least one very high-degree node in the base protograph. A family of high- to low-rate codes with minimum distance linearly increasing in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.
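
    The degree-2 counting condition above can be checked mechanically on a toy protograph. The base matrix below is hypothetical, chosen only to illustrate one reading of the rule: excluding checks connected to degree-1 variable nodes, the number of degree-2 variable nodes must be at most one less than the number of remaining checks.

```python
# Check the necessary condition from the abstract on a toy protograph.
# Rows are check nodes, columns are variable nodes, entries are edge counts.
def satisfies_deg2_condition(base):
    var_deg = [sum(col) for col in zip(*base)]
    deg1_vars = {j for j, d in enumerate(var_deg) if d == 1}
    # Checks not touching any degree-1 variable node.
    checks_excl = [i for i, row in enumerate(base)
                   if not any(row[j] for j in deg1_vars)]
    n_deg2 = sum(1 for d in var_deg if d == 2)
    return n_deg2 <= len(checks_excl) - 1

# Hypothetical base matrix: 3 checks x 4 variables, two degree-2 variables.
base = [[1, 1, 1, 1],
        [1, 1, 1, 0],
        [1, 0, 1, 1]]
print(satisfies_deg2_condition(base))
```

    Adding one more degree-2 column to this base would tip the count to three degree-2 nodes against three checks and violate the bound, which is how the construction in the abstract limits the chains of degree-2 nodes produced by check splitting.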

  11. Error control techniques for satellite and space communications

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.

    1991-01-01

    Shannon's capacity bound shows that coding can achieve large reductions in the required signal-to-noise ratio per information bit (Eb/N0, where Eb is the energy per bit and N0/2 is the double-sided noise density) in comparison to uncoded schemes. For bandwidth efficiencies of 2 bit/sym or greater, these improvements were obtained through the use of Trellis Coded Modulation and Block Coded Modulation. A method of obtaining these high efficiencies using multidimensional Multiple Phase Shift Keying (MPSK) and Quadrature Amplitude Modulation (QAM) signal sets with trellis coding is described. These schemes have advantages in decoding speed, phase transparency, and coding gain in comparison to other trellis coding schemes. Finally, a general parity check equation for rotationally invariant trellis codes is introduced, from which non-linear codes for two-dimensional MPSK and QAM signal sets are found. These codes are fully transparent to all rotations of the signal set.
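
    The capacity bound mentioned above has a closed form for the band-limited AWGN channel: reliable communication at spectral efficiency eta (bits/s/Hz) requires Eb/N0 >= (2**eta - 1)/eta. The short sketch below evaluates this Shannon limit at the 2 bit/sym efficiency discussed in the abstract and at the wideband limit:

```python
import math

def min_ebn0_db(eta):
    """Shannon limit on Eb/N0 (in dB) at spectral efficiency eta bits/s/Hz."""
    return 10 * math.log10((2 ** eta - 1) / eta)

print(round(min_ebn0_db(2), 2))    # limit at 2 bit/sym
print(round(min_ebn0_db(1e-9), 2)) # eta -> 0 gives 10*log10(ln 2), about -1.59 dB
```

    At 2 bit/sym the limit is about 1.76 dB, so the coding gains the abstract describes are measured against an uncoded operating point far above this floor.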

  12. Linear chirp phase perturbing approach for finding binary phased codes

    NASA Astrophysics Data System (ADS)

    Li, Bing C.

    2017-05-01

    Binary phased codes have many applications in communication and radar systems. These applications require binary phased codes with low sidelobes in order to reduce interference and false detections. Barker codes satisfy these requirements and have the lowest maximum sidelobes; however, Barker codes exist only for very limited code lengths (13 or less), while many applications, including low-probability-of-intercept radar and spread-spectrum communication, require much longer codes. The conventional techniques for finding binary phased codes in the literature include exhaustive search, neural networks, and evolutionary methods, all of which require very expensive computation for large code lengths. These techniques are therefore limited to finding binary phased codes with small code lengths (less than 100). In this paper, by analyzing Barker code, linear chirp, and P3 phases, we propose a new approach to finding binary codes. Experiments show that the proposed method is able to find long, low-sidelobe binary phased codes (code length >500) at reasonable computational cost.
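
    The sidelobe property that makes Barker codes attractive is easy to verify directly. The sketch below computes the aperiodic autocorrelation of the length-13 Barker code and confirms that every nonzero-lag sidelobe has magnitude at most 1 against a mainlobe of 13:

```python
# Aperiodic autocorrelation of the length-13 Barker code: the mainlobe is 13
# while every sidelobe has magnitude at most 1, the property that makes
# Barker codes attractive for radar pulse compression.
barker13 = [+1, +1, +1, +1, +1, -1, -1, +1, +1, -1, +1, -1, +1]

def autocorr(code):
    n = len(code)
    return [sum(code[i] * code[i + k] for i in range(n - k)) for k in range(n)]

acf = autocorr(barker13)
print(acf[0], max(abs(v) for v in acf[1:]))
```

    The brute-force searches the abstract mentions try to approach this unit-sidelobe behaviour at lengths where no Barker code exists, which is what makes the search expensive.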

  13. Performance optimization of PM-16QAM transmission system enabled by real-time self-adaptive coding.

    PubMed

    Qu, Zhen; Li, Yao; Mo, Weiyang; Yang, Mingwei; Zhu, Shengxiang; Kilper, Daniel C; Djordjevic, Ivan B

    2017-10-15

    We experimentally demonstrate self-adaptive coded 5×100 Gb/s WDM polarization-multiplexed 16-quadrature-amplitude-modulation transmission over a 100 km fiber link, enabled by a real-time control plane. The real-time optical signal-to-noise ratio (OSNR) is measured using an optical performance monitoring device. The OSNR measurement is processed and fed back using control-plane logic and messaging to the transmitter side for code adaptation, where the binary data are adaptively encoded with three types of large-girth low-density parity-check (LDPC) codes with code rates of 0.8, 0.75, and 0.7. The total code-adaptation latency is measured to be 2273 ms. Compared with transmission without adaptation, average net capacity improvements of 102%, 36%, and 7.5%, respectively, are obtained by adaptive LDPC coding.

  14. Three-dimensional quick response code based on inkjet printing of upconversion fluorescent nanoparticles for drug anti-counterfeiting.

    PubMed

    You, Minli; Lin, Min; Wang, Shurui; Wang, Xuemin; Zhang, Ge; Hong, Yuan; Dong, Yuqing; Jin, Guorui; Xu, Feng

    2016-05-21

    Medicine counterfeiting is a serious issue worldwide, with potentially devastating health repercussions. Advanced anti-counterfeiting technology for drugs has therefore aroused intensive interest. However, existing anti-counterfeiting technologies suffer drawbacks such as high cost, complex fabrication, sophisticated operation, and inability to authenticate drug ingredients. In this contribution, we developed a smartphone-recognized upconversion fluorescent three-dimensional (3D) quick response (QR) code for tracking and anti-counterfeiting of drugs. We first formulated three colored inks incorporating upconversion nanoparticles with RGB (i.e., red, green, and blue) emission colors. Using a modified inkjet printer, we printed a series of colors by precisely regulating the overlap of these three inks. Meanwhile, we developed a multilayer printing and splitting technology, which significantly increases the information storage capacity per unit area. As an example, we directly printed the upconversion fluorescent 3D QR code on the surface of drug capsules. The 3D QR code consists of three color layers, each encoding information on a different aspect of the drug. A smartphone app was designed to decode the multicolor 3D QR code, providing the authenticity and related information of the drug. The developed technology has the merits of low cost, ease of operation, high throughput, and high information capacity, and thus holds great potential for drug anti-counterfeiting.

  15. Characterization of LDPC-coded orbital angular momentum modes transmission and multiplexing over a 50-km fiber.

    PubMed

    Wang, Andong; Zhu, Long; Chen, Shi; Du, Cheng; Mo, Qi; Wang, Jian

    2016-05-30

    Mode-division multiplexing over fibers has attracted increasing attention in recent years as a potential route to further increase fiber transmission capacity. In this paper, we demonstrate the viability of orbital angular momentum (OAM) mode transmission over a 50-km few-mode fiber (FMF). By analyzing the properties of the eigenmodes of an FMF, we study the inner-mode-group differential modal delay (DMD), which may limit transmission capacity in long-distance OAM mode transmission and multiplexing. To mitigate the impact of large inner-mode-group DMD in long-distance fiber-based OAM transmission, we use low-density parity-check (LDPC) codes to increase system reliability. Evaluating LDPC-coded single-OAM-mode transmission over the 50-km fiber, we demonstrate significant coding gains of >4 dB, 8 dB, and 14 dB for 1-Gbaud, 2-Gbaud, and 5-Gbaud quadrature phase-shift keying (QPSK) signals, respectively. Furthermore, to verify and compare the influence of DMD in long-distance fiber transmission, single-OAM-mode transmission over a 10-km FMF is also demonstrated in the experiment. Finally, we experimentally demonstrate OAM multiplexing and transmission over a 50-km FMF using LDPC-coded 1-Gbaud QPSK signals to compensate for the influence of mode crosstalk and DMD in the 50-km FMF.

  16. Capacity limits in list item recognition: evidence from proactive interference.

    PubMed

    Cowan, Nelson; Johnson, Troy D; Saults, J Scott

    2005-01-01

    Capacity limits in short-term recall were investigated using proactive interference (PI) from previous lists in a speeded-recognition task. PI was taken to indicate that the target list length surpassed working memory capacity. Unlike previous studies, words were presented either concurrently or sequentially and a new method was introduced to increase the amount of PI. On average, participants retrieved about four items without PI. We suggest an activation-based account of capacity limits.

  17. Assimilative capacity-based emission load management in a critically polluted industrial cluster.

    PubMed

    Panda, Smaranika; Nagendra, S M Shiva

    2017-12-01

    In the present study, a modified approach was adopted to quantify the assimilative capacity (i.e., the maximum emission an area can take without violating the permissible pollutant standards) of a major industrial cluster (Manali, India) and to assess the effectiveness of the air pollution control measures adopted in the region. Seasonal analysis of assimilative capacity was carried out for critical, high, medium, and low pollution levels to identify the best and worst conditions for industrial operations. A bottom-up approach was employed to quantify sulfur dioxide (SO2), nitrogen dioxide (NO2), and particulate matter (aerodynamic diameter <10 μm; PM10) emissions at a fine spatial resolution of 500 m × 500 m in the Manali industrial cluster. AERMOD (American Meteorological Society/U.S. Environmental Protection Agency Regulatory Model), a U.S. EPA regulatory model, was used for estimating assimilative capacity. Results indicated that 22.8 tonnes/day of SO2, 7.8 tonnes/day of NO2, and 7.1 tonnes/day of PM10 were emitted from the industries of Manali. The estimated assimilative capacities for SO2, NO2, and PM10 were 16.05, 17.36, and 19.78 tonnes/day, respectively. Current SO2 emissions were found to exceed the estimated safe load by 6.7 tonnes/day, whereas PM10 and NO2 were within safe limits. Seasonal analysis showed that post-monsoon had the lowest load-carrying capacity, followed by winter, summer, and monsoon, and the allowable SO2 emissions during the post-monsoon and winter seasons were 35% and 26% lower, respectively, than during the monsoon season. The authors present a modified approach for quantitative estimation of the assimilative capacity of a critically polluted Indian industrial cluster.
    The authors developed a geo-coded fine-resolution PM10, NO2, and SO2 emission inventory for the Manali industrial area and further quantitatively estimated its season-wise assimilative capacities corresponding to various pollution levels. This quantitative representation of assimilative capacity (in terms of emissions), compared with the routine qualitative representation, provides better data for quantifying the carrying capacity of an area. This information helps policy makers and regulatory authorities develop effective mitigation plans for air pollution abatement.
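
    The safe-load comparison above can be reproduced from the abstract's own figures. The sketch below uses the reported emissions and capacities (the unrounded SO2 exceedance works out to 6.75 tonnes/day, which the abstract quotes as 6.7) and flags pollutants whose emissions exceed the estimated assimilative capacity:

```python
# Emission loads vs. estimated assimilative capacities (tonnes/day),
# values taken from the abstract for the Manali industrial cluster.
emissions = {"SO2": 22.8, "NO2": 7.8, "PM10": 7.1}
capacity  = {"SO2": 16.05, "NO2": 17.36, "PM10": 19.78}

# Positive exceedance means the pollutant is over the safe load.
exceedance = {p: round(emissions[p] - capacity[p], 2) for p in emissions}
over_limit = [p for p, e in exceedance.items() if e > 0]
print(exceedance, over_limit)
```

    Only SO2 comes out over the safe load, matching the abstract's conclusion that PM10 and NO2 are within limits.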

  18. Non-coding functions of alternative pre-mRNA splicing in development

    PubMed Central

    Mockenhaupt, Stefan; Makeyev, Eugene V.

    2015-01-01

    A majority of messenger RNA precursors (pre-mRNAs) in higher eukaryotes undergo alternative splicing to generate more than one mature product. By targeting the open reading frame region, this process increases the diversity of protein isoforms beyond the nominal coding capacity of the genome. However, alternative splicing also frequently controls the output levels and spatiotemporal features of cellular and organismal gene expression programs. Here we discuss how these non-coding functions of alternative splicing contribute to development through regulation of mRNA stability, translational efficiency, and cellular localization. PMID:26493705

  19. A qualitative study of regional anaesthesia for vitreo-retinal surgery.

    PubMed

    McCloud, Christine; Harrington, Ann; King, Lindy

    2014-05-01

    The aim of this research was to collect experiential knowledge about regional ocular anaesthesia, an integral component of most vitreo-retinal surgery. Anaesthesia for vitreo-retinal surgery has predominantly been general anaesthesia, because of the length and complexity of the surgical procedure. However, recent advances in surgical instrumentation and techniques have reduced surgical times, and this has led to the adoption of regional ocular anaesthesia for vitreo-retinal day surgery. Although regional ocular anaesthesia has been studied from several perspectives, knowledge about patients' experience of the procedure is limited. The study used an interpretive qualitative research methodology underpinned by Gadamer's philosophical hermeneutics. Eighteen participants were interviewed in depth between July 2006 and December 2007 following regional ocular anaesthesia. Interview data were thematically analysed by coding and grouping concepts. Four themes were identified: 'not knowing', the time prior to the experience of a regional eye block; 'experiencing', the experience of regional ocular anaesthesia; 'enduring', the capacity participants displayed to endure regional ocular anaesthesia in the hope that their vision would be restored; and 'knowing', when further surgery was required and past experiences were recalled. The experience of regional ocular anaesthesia had the capacity to invoke anxiety in the participants in this study. Many found the experience overwhelming and painful. What became clear was the participants' capacity to stoically 'endure' regional ocular anaesthesia, indicating the value people placed on visual function. © 2013 John Wiley & Sons Ltd.

  20. Thermodynamic Analysis of the Combustion of Metallic Materials

    NASA Technical Reports Server (NTRS)

    Wilson, D. Bruce; Stoltzfus, Joel M.

    2000-01-01

    Two types of computer codes are available to assist in the thermodynamic analysis of metallic-material combustion. One type calculates phase equilibrium data and is represented by CALPHAD. The other type calculates chemical reaction equilibria and is represented by the Gordon-McBride code. The first has seen significant application for alloy-phase diagrams, but only recently has it been considered for oxidation systems. The Gordon-McBride code has been applied to the combustion of metallic materials. Both codes are limited by their treatment of non-ideal solutions and by the fact that they treat volatile and gaseous species as ideal. This paper examines the significance of these limitations for the combustion of metallic materials. In addition, the applicability of linear free-energy relationships to solid-phase oxidation, and their possible extension to liquid-phase systems, is examined.

  1. Seismic design repair and retrofit strategies for steel roof deck diaphragms

    NASA Astrophysics Data System (ADS)

    Franquet, John-Edward

    Structural engineers often rely on the roof diaphragm to transfer lateral seismic loads to the bracing system of single-storey structures. The implementation of capacity-based design in the NBCC 2005 has increased the diaphragm design load because of the need to use the probable capacity of the bracing system, resulting in thicker decks, closer connector patterns, and higher construction costs. Previous studies have shown that accounting for the in-plane flexibility of the diaphragm when calculating the overall building period can result in lower seismic forces and a more cost-efficient design. However, recent studies estimating the fundamental period of single-storey structures using ambient vibration testing showed that the in-situ approximation was much shorter than that obtained by analytical means. The difference lies partially in the diaphragm stiffness characteristics, which have been shown to decrease under increasing excitation amplitude. Using the diaphragm as the energy-dissipating element in the seismic force resisting system has also been investigated, as this would take advantage of the diaphragm's ductility and limited overstrength, and thus lower capacity-based seismic forces would result. An experimental program on 21.0 m by 7.31 m diaphragm test specimens was carried out to investigate the dynamic properties of diaphragms, including stiffness, ductility, and capacity. The specimens consisted of 20- and 22-gauge panels with nailed frame fasteners and screwed sidelap connections, as well as a welded and button-punched specimen. Repair strategies for diaphragms that have previously undergone inelastic deformations were devised in an attempt to restore the original stiffness and strength, and were then experimentally evaluated. Experimental estimates of strength and stiffness are compared with those predicted by the Steel Deck Institute (SDI) method. A building design comparative study was also completed.
    This study examines the differences in design and cost between previous and current design practice with EBF braced frames. Two alternative design methodologies, in which the period is not restricted by code limitations and the diaphragm force is limited to the equivalent shear force calculated with RdRo = 1.95, are also used for comparison. The study highlights the importance of incorporating the diaphragm stiffness in design and the potential cost savings.

  2. Adaptive Transmission and Channel Modeling for Frequency Hopping Communications

    DTIC Science & Technology

    2009-09-21

    proposed adaptive transmission method has much greater system capacity than a conventional non-adaptive MC direct-sequence (DS)-CDMA system. • We...several mobile radio systems. First, a new improved allocation algorithm was proposed for the multicarrier code-division multiple access (MC-CDMA) system...Multicarrier code-division multiple access (MC-CDMA) with adaptive frequency hopping (AFH) has attracted the attention of researchers due to its

  3. Feedback Codes and Action Plans: Building the Capacity of First-Year Students to Apply Feedback to a Scientific Report

    ERIC Educational Resources Information Center

    Bird, Fiona L.; Yucel, Robyn

    2015-01-01

    Effective feedback can build self-assessment skills in students so that they become more competent and confident to identify and self-correct weaknesses in their work. In this study, we trialled a feedback code as part of an integrated programme of formative and summative assessment tasks, which provided feedback to first-year students on their…

  4. Leadership, infrastructure and capacity to support child injury prevention: can these concepts help explain differences in injury mortality rankings between 18 countries in Europe?

    PubMed

    MacKay, J Morag; Vincenten, Joanne A

    2012-02-01

    Mortality and morbidity rates, traditionally used indicators for child injury, are limited in their ability to explain differences in child injury between countries, are inadequate in capturing actions to address the problem of child injury and do not adequately identify progress made within countries. There is a need for a broader set of indicators to help better understand the success of countries with low rates of child injury, provide guidance and benchmarks for policy makers looking to make investments to reduce their rates of fatal and non-fatal child injury and allow monitoring of progress towards achieving these goals. This article describes an assessment of national leadership, infrastructure and capacity in the context of child injury prevention in 18 countries in Europe and explores the potential of these to be used as additional indicators to support child injury prevention practice. Partners in 18 countries coordinated data collection on 21 items relating to leadership, infrastructure and capacity. Responses were coded into an overall score and scores for each of the three areas and were compared with child injury mortality rankings using Spearman's rank correlation. Overall score and scores for leadership and capacity were significantly negatively correlated to child injury mortality ranking. Findings of this preliminary work suggest that these three policy areas may provide important guidance for the types of commitments that are needed in the policy arena to support advances in child safety and their assessment a way to measure progress.
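
    The study's statistical step, comparing coded leadership/infrastructure/capacity scores with mortality rankings via Spearman's rank correlation, can be sketched as follows. The scores and rankings below are hypothetical, constructed so that higher capacity scores go with better (lower) mortality rankings, yielding the negative correlation the study reports:

```python
def spearman_rho(x, y):
    """Spearman's rank correlation via 1 - 6*sum(d^2)/(n(n^2-1)); assumes no ties."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical data: countries with higher capacity scores have lower
# (better) child injury mortality rankings, i.e., a negative correlation.
scores   = [21, 18, 15, 12, 9, 6]   # overall score out of 21 items
rankings = [1, 2, 3, 4, 5, 6]       # 1 = lowest child injury mortality
print(spearman_rho(scores, rankings))
```

    With this perfectly inverse ordering rho is -1.0; the study's real data would give a weaker but still significantly negative correlation.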

  5. Coding for Parallel Links to Maximize the Expected Value of Decodable Messages

    NASA Technical Reports Server (NTRS)

    Klimesh, Matthew A.; Chang, Christopher S.

    2011-01-01

    When multiple parallel communication links are available, it is useful to consider link-utilization strategies that provide tradeoffs between reliability and throughput. Interesting cases arise when there are three or more available links. Under the model considered, the links have known probabilities of being in working order, and each link has a known capacity. The sender has a number of messages to send to the receiver. Each message has a size and a value (i.e., a worth or priority). Messages may be divided into pieces arbitrarily, and the value of each piece is proportional to its size. The goal is to choose combinations of messages to send on the links so that the expected value of the messages decodable by the receiver is maximized. There are three parts to the innovation: (1) Applying coding to parallel links under the model; (2) Linear programming formulation for finding the optimal combinations of messages to send on the links; and (3) Algorithms for assisting in finding feasible combinations of messages, as support for the linear programming formulation. There are similarities between this innovation and methods developed in the field of network coding. However, network coding has generally been concerned with either maximizing throughput in a fixed network, or robust communication of a fixed volume of data. In contrast, under this model, the throughput is expected to vary depending on the state of the network. Examples of error-correcting codes that are useful under this model but which are not needed under previous models have been found. This model can represent either a one-shot communication attempt, or a stream of communications. Under the one-shot model, message sizes and link capacities are quantities of information (e.g., measured in bits), while under the communications stream model, message sizes and link capacities are information rates (e.g., measured in bits/second). 
This work has the potential to increase the value of data returned from spacecraft under certain conditions.
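
    A toy instance (all numbers hypothetical) illustrates why coding helps under this model. Three links each work independently with probability 0.9 and each carries half a message-unit; one message of size 1 and value 1 may be divided, with piece values proportional to size. Brute-force enumeration over link states compares plain splitting, splitting plus duplication, and a 2-of-3 erasure code in which any two surviving pieces recover the whole message:

```python
from itertools import product

P_WORK = 0.9  # each link independently works with this probability

def expected_value(decodable_fraction):
    """Average the decodable message fraction over all 2^3 link states."""
    total = 0.0
    for state in product([True, False], repeat=3):
        p = 1.0
        for up in state:
            p *= P_WORK if up else (1 - P_WORK)
        total += p * decodable_fraction(state)
    return total

# Strategy A: halves on links 0 and 1, link 2 idle.
split = expected_value(lambda s: 0.5 * s[0] + 0.5 * s[1])
# Strategy B: halves on links 0 and 1, duplicate of half 0 on link 2.
dup = expected_value(lambda s: 0.5 * (s[0] or s[2]) + 0.5 * s[1])
# Strategy C: 2-of-3 erasure code; any two surviving pieces recover all.
coded = expected_value(lambda s: 1.0 if sum(s) >= 2 else 0.0)
print(split, dup, coded)
```

    The expected values come out to 0.9, 0.945, and 0.972 respectively, so the erasure-coded strategy beats both uncoded alternatives, which is the kind of case where the abstract notes error-correcting codes become useful under this model.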

  6. The international implications of national and local coordination on building energy codes: Case studies in six cities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evans, Meredydd; Yu, Sha; Staniszewski, Aaron

    Building energy efficiency is an important strategy for reducing greenhouse gas emissions globally. In fact, 55 countries have included building energy efficiency in their Nationally Determined Contributions (NDCs) under the Paris Agreement. This research uses building energy code implementation in six cities across different continents as case studies to assess what it may take for countries to implement the ambitions of their energy efficiency goals. Specifically, we look at the cases of Bogota, Colombia; Da Nang, Vietnam; Eskisehir, Turkey; Mexico City, Mexico; Rajkot, India; and Tshwane, South Africa, all of which are “deep dive” cities under the Sustainable Energy for All's Building Efficiency Accelerator. The research focuses on understanding the baseline with existing gaps in implementation and coordination. The methodology used a combination of surveys on code status and interviews with stakeholders at the local and national level, as well as review of published documents. We looked at code development, implementation, and evaluation. The cities are all working to improve implementation; however, the challenges they currently face include gaps in resources, capacity, tools, and institutions to check for compliance. Better coordination between national and local governments could help improve implementation, but that coordination is not yet well established. For example, all six of the cities reported that there was little to no involvement of local stakeholders in development of the national code; only one city reported that it had access to national funding to support code implementation. More robust coordination could better link cities with capacity building and funding for compliance, and ensure that the code reflects local priorities. By understanding gaps in implementation, it can also help in designing more targeted interventions to scale up energy savings.

  7. The international implications of national and local coordination on building energy codes: Case studies in six cities

    DOE PAGES

    Evans, Meredydd; Yu, Sha; Staniszewski, Aaron; ...

    2018-04-17

    Building energy efficiency is an important strategy for reducing greenhouse gas emissions globally. In fact, 55 countries have included building energy efficiency in their Nationally Determined Contributions (NDCs) under the Paris Agreement. This research uses building energy code implementation in six cities across different continents as case studies to assess what it may take for countries to implement the ambitions of their energy efficiency goals. Specifically, we look at the cases of Bogota, Colombia; Da Nang, Vietnam; Eskisehir, Turkey; Mexico City, Mexico; Rajkot, India; and Tshwane, South Africa, all of which are “deep dive” cities under the Sustainable Energy for All's Building Efficiency Accelerator. The research focuses on understanding the baseline with existing gaps in implementation and coordination. The methodology used a combination of surveys on code status and interviews with stakeholders at the local and national level, as well as review of published documents. We looked at code development, implementation, and evaluation. The cities are all working to improve implementation; however, the challenges they currently face include gaps in resources, capacity, tools, and institutions to check for compliance. Better coordination between national and local governments could help improve implementation, but that coordination is not yet well established. For example, all six of the cities reported that there was little to no involvement of local stakeholders in development of the national code; only one city reported that it had access to national funding to support code implementation. More robust coordination could better link cities with capacity building and funding for compliance, and ensure that the code reflects local priorities. By understanding gaps in implementation, it can also help in designing more targeted interventions to scale up energy savings.

  8. Integrating collaborative place-based health promotion coalitions into existing health system structures: the experience from one Australian health coalition.

    PubMed

    Ehrlich, Carolyn; Kendall, Elizabeth

    2015-01-01

    Increasingly, place-based collaborative partnerships are being implemented to develop the capacity of communities to build supportive environments and improve population health outcomes. These place-based initiatives require cooperative and coordinated responses that can exist within social systems and integrate multiple responses. However, the dynamic interplay between co-existing systems and new ways of working makes implementation outcomes unpredictable. We interviewed eight programme leaders, three programme teams and two advisory groups to explore the capacity of one social system to implement and normalise a collaborative integrated place-based health promotion initiative in the Logan and Beaudesert area in South East Queensland, Australia. The construct of capacity as defined in the General Theory of Implementation was used to develop a coding framework. Data were then placed into conceptually coherent groupings according to this framework until all data could be accounted for. Four themes defined capacity for implementation of a collaborative and integrated response; namely, the ability to (1) traverse a nested and contradictory social landscape, (2) be a responsive and 'good' community partner, (3) establish the scaffolding required to work 'in place'; and (4) build a shared meaning and engender trust. Overall, we found that the capacity of the system to embed a place-based health promotion initiative was severely limited by the absence of these features. Conflict, disruption and constant change within the context into which the place-based collaborative partnership was being implemented meant that existing relationships were constantly undermined and the capacity of the partners to develop trust-based coherent partnerships was constantly diminished. 
To enhance the likelihood that collaborative and integrated place-based health promotion initiatives will become established ways of working, an agreed, meaningful and clearly articulated vision and identity are required; goals must be prioritised and negotiated; and sustainable resourcing must be assured.

  9. Models of verbal working memory capacity: what does it take to make them work?

    PubMed

    Cowan, Nelson; Rouder, Jeffrey N; Blume, Christopher L; Saults, J Scott

    2012-07-01

    Theories of working memory (WM) capacity limits will be more useful when we know what aspects of performance are governed by the limits and what aspects are governed by other memory mechanisms. Whereas considerable progress has been made on models of WM capacity limits for visual arrays of separate objects, less progress has been made in understanding verbal materials, especially when words are mentally combined to form multiword units or chunks. Toward a more comprehensive theory of capacity limits, we examined models of forced-choice recognition of words within printed lists, using materials designed to produce multiword chunks in memory (e.g., leather brief case). Several simple models were tested against data from a variety of list lengths and potential chunk sizes, with test conditions that only imperfectly elicited the interword associations. According to the most successful model, participants retained about 3 chunks on average in a capacity-limited region of WM, with some chunks being only subsets of the presented associative information (e.g., leather brief case retained with leather as one chunk and brief case as another). The addition to the model of an activated long-term memory component unlimited in capacity was needed. A fixed-capacity limit appears critical to account for immediate verbal recognition and other forms of WM. We advance a model-based approach that allows capacity to be assessed despite other important processing contributions. Starting with a psychological-process model of WM capacity developed to understand visual arrays, we arrive at a more unified and complete model. Copyright 2012 APA, all rights reserved.
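The ~3-chunk estimate reported above comes from model fitting, but capacity in this literature is routinely summarized with a formula of the Cowan kind for single-probe recognition, k = N(H + CR − 1), where N is the list or array size, H the hit rate, and CR the correct-rejection rate. A minimal illustrative sketch (this is the standard visual-arrays formula, not the chunk-plus-LTM model fitted in the paper):

```python
def cowan_k(set_size: int, hit_rate: float, correct_rejection_rate: float) -> float:
    """Cowan's k for single-probe recognition: k = N * (H + CR - 1).
    Estimates how many of the N presented items were held in the
    capacity-limited store at test."""
    return set_size * (hit_rate + correct_rejection_rate - 1.0)

# e.g. with lists of 6 items, 85% hits and 65% correct rejections,
# the estimate lands near the ~3-chunk capacity reported above
print(cowan_k(6, 0.85, 0.65))
```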

  10. Models of Verbal Working Memory Capacity: What Does It Take to Make Them Work?

    PubMed Central

    Cowan, Nelson; Rouder, Jeffrey N.; Blume, Christopher L.; Saults, J. Scott

    2013-01-01

    Theories of working memory (WM) capacity limits will be more useful when we know what aspects of performance are governed by the limits and what aspects are governed by other memory mechanisms. Whereas considerable progress has been made on models of WM capacity limits for visual arrays of separate objects, less progress has been made in understanding verbal materials, especially when words are mentally combined to form multi-word units or chunks. Toward a more comprehensive theory of capacity limits, we examine models of forced-choice recognition of words within printed lists, using materials designed to produce multi-word chunks in memory (e.g., leather brief case). Several simple models were tested against data from a variety of list lengths and potential chunk sizes, with test conditions that only imperfectly elicited the inter-word associations. According to the most successful model, participants retained about 3 chunks on average in a capacity-limited region of WM, with some chunks being only subsets of the presented associative information (e.g., leather brief case retained with leather as one chunk and brief case as another). The addition to the model of an activated long-term memory (LTM) component unlimited in capacity was needed. A fixed capacity limit appears critical to account for immediate verbal recognition and other forms of WM. We advance a model-based approach that allows capacity to be assessed despite other important processing contributions. Starting with a psychological-process model of WM capacity developed to understand visual arrays, we arrive at a more unified and complete model. PMID:22486726

  11. Performance optimization of spectral amplitude coding OCDMA system using new enhanced multi diagonal code

    NASA Astrophysics Data System (ADS)

    Imtiaz, Waqas A.; Ilyas, M.; Khan, Yousaf

    2016-11-01

    This paper proposes a new code to optimize the performance of spectral amplitude coding optical code division multiple access (SAC-OCDMA) systems. The unique two-matrix structure of the proposed enhanced multi-diagonal (EMD) code and its effective correlation properties between intended and interfering subscribers significantly elevate the performance of the SAC-OCDMA system by negating multiple access interference (MAI) and the associated phase-induced intensity noise (PIIN). Performance of the SAC-OCDMA system based on the proposed code is thoroughly analyzed for two detection techniques through analytical and simulation analysis, with reference to bit error rate (BER), signal-to-noise ratio (SNR), and eye patterns at the receiving end. It is shown that the EMD code, when used with the SDD technique, provides high transmission capacity, reduces receiver complexity, and performs better than the complementary subtraction detection (CSD) technique. Furthermore, analysis shows that, for a minimum acceptable BER of 10^-9, the proposed system supports 64 subscribers at data rates of up to 2 Gbps for both uplink and downlink transmission.

  12. An evaluation of the effect of JPEG, JPEG2000, and H.264/AVC on CQR codes decoding process

    NASA Astrophysics Data System (ADS)

    Vizcarra Melgar, Max E.; Farias, Mylène C. Q.; Zaghetto, Alexandre

    2015-02-01

    This paper presents a binary matrix code based on the QR Code (Quick Response Code), denoted the CQR Code (Colored Quick Response Code), and evaluates the effect of JPEG, JPEG2000, and H.264/AVC compression on the decoding process. The proposed CQR Code has three additional colors (red, green, and blue), which enables twice as much storage capacity compared to the traditional black-and-white QR Code. Using a Reed-Solomon error-correcting code, the CQR Code model has a theoretical correction capability of 38.41%. The goal of this paper is to evaluate the effect that degradations introduced by common image compression algorithms have on the decoding process. Results show that a successful decoding process can be achieved for compression rates up to 0.3877 bits/pixel, 0.1093 bits/pixel, and 0.3808 bits/pixel for the JPEG, JPEG2000, and H.264/AVC formats, respectively. H.264/AVC presents the best performance, followed by JPEG2000 and JPEG.
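The quoted 38.41% correction capability follows from the specific Reed-Solomon parameters chosen for the CQR Code, which this abstract does not give. For any RS(n, k) code the correctable fraction of codeword symbols is t/n with t = ⌊(n − k)/2⌋; a quick sketch with a well-known parameter set for comparison:

```python
def rs_correction_capability(n: int, k: int) -> float:
    """Correctable fraction of an RS(n, k) codeword: t/n, t = floor((n-k)/2).
    n is the codeword length in symbols, k the number of data symbols."""
    t = (n - k) // 2
    return t / n

# RS(255, 223), widely used elsewhere: t = 16 symbols, about 6.3% of a codeword
print(rs_correction_capability(255, 223))
```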

  13. Cellular miR-2909 RNomics governs the genes that ensure immune checkpoint regulation.

    PubMed

    Kaul, Deepak; Malik, Deepti; Wani, Sameena

    2018-06-20

    Cross-talk between coding RNAs and regulatory non-coding microRNAs within the human genome has provided compelling evidence for the existence of flexible checkpoint control of T-cell activation. The present study attempts to demonstrate that the interplay between miR-2909 and its effector KLF4 gene has the inherent capacity to regulate genes coding for CTLA4, CD28, CD40, CD134, PDL1, CD80, CD86, IL-6 and IL-10 within normal human peripheral blood mononuclear cells (PBMCs). Based upon these findings, we propose a pathway that links miR-2909 RNomics with the genes coding for immune checkpoint regulators required for the maintenance of immune homeostasis.

  14. Vector platforms for gene therapy of inherited retinopathies

    PubMed Central

    Trapani, Ivana; Puppo, Agostina; Auricchio, Alberto

    2014-01-01

    Inherited retinopathies (IR) are common untreatable blinding conditions. Most of them are inherited as monogenic disorders, due to mutations in genes expressed in retinal photoreceptors (PR) and in retinal pigment epithelium (RPE). The retina’s compatibility with gene transfer has made transduction of different retinal cell layers in small and large animal models via viral and non-viral vectors possible. The ongoing identification of novel viruses as well as modifications of existing ones based either on rational design or directed evolution have generated vector variants with improved transduction properties. Dozens of promising proofs of concept have been obtained in IR animal models with both viral and non-viral vectors, and some of them have been relayed to clinical trials. To date, recombinant vectors based on the adeno-associated virus (AAV) represent the most promising tool for retinal gene therapy, given their ability to efficiently deliver therapeutic genes to both PR and RPE and their excellent safety and efficacy profiles in humans. However, AAVs’ limited cargo capacity has prevented application of the viral vector to treatments requiring transfer of genes with a coding sequence larger than 5 kb. Vectors with larger capacity, i.e. nanoparticles, adenoviral and lentiviral vectors are being exploited for gene transfer to the retina in animal models and, more recently, in humans. This review focuses on the available platforms for retinal gene therapy to fight inherited blindness, highlights their main strengths and examines the efforts to overcome some of their limitations. PMID:25124745

  15. 40 CFR 86.1542 - Information required.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...), fuel system (including number of carburetors, number of carburetor barrels, fuel injection type and fuel tank(s) capacity and location), engine code, gross vehicle weight rating, inertia weight class and...

  16. Electrical generating unit inventory 1976-1986: Illinois, Indiana, Kentucky, Ohio, Pennsylvania and West Virginia

    NASA Astrophysics Data System (ADS)

    Jansen, S. D.

    1981-09-01

    The ORBES region consists of all of Kentucky, most of West Virginia, substantial parts of Illinois, Indiana, and Ohio, and southwestern Pennsylvania. The inventory lists installed electrical generating capacity in commercial service as of December 1, 1976, and scheduled capacity additions and removals between 1977 and 1986 in the six ORBES states (Illinois, Indiana, Kentucky, Ohio, Pennsylvania, and West Virginia). The following information is included for each electrical generating unit: unit ID code, company index, whether point or industrial ownership, plant name, whether inside or outside the ORBES region, FIPS county code, type of unit, size in megawatts, type of megawatt rating, status of unit, date of commercial operation, scheduled retirement date, primary fuel, alternate fuel, type of cooling, source of cooling water, and source of information.

  17. How much of reinforcement learning is working memory, not reinforcement learning? A behavioral, computational, and neurogenetic analysis

    PubMed Central

    Collins, Anne G. E.; Frank, Michael J.

    2012-01-01

    Instrumental learning involves corticostriatal circuitry and the dopaminergic system. This system is typically modeled in the reinforcement learning (RL) framework by incrementally accumulating reward values of states and actions. However, human learning also implicates prefrontal cortical mechanisms involved in higher level cognitive functions. The interaction of these systems remains poorly understood, and models of human behavior often ignore working memory (WM) and therefore incorrectly assign behavioral variance to the RL system. Here we designed a task that highlights the profound entanglement of these two processes, even in simple learning problems. By systematically varying the size of the learning problem and delay between stimulus repetitions, we separately extracted WM-specific effects of load and delay on learning. We propose a new computational model that accounts for the dynamic integration of RL and WM processes observed in subjects' behavior. Incorporating capacity-limited WM into the model allowed us to capture behavioral variance that could not be captured in a pure RL framework even if we (implausibly) allowed separate RL systems for each set size. The WM component also allowed for a more reasonable estimation of a single RL process. Finally, we report effects of two genetic polymorphisms having relative specificity for prefrontal and basal ganglia functions. Whereas the COMT gene coding for catechol-O-methyl transferase selectively influenced model estimates of WM capacity, the GPR6 gene coding for G-protein-coupled receptor 6 influenced the RL learning rate. Thus, this study allowed us to specify distinct influences of the high-level and low-level cognitive functions on instrumental learning, beyond the possibilities offered by simple RL models. PMID:22487033
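A minimal executable sketch of the kind of RL+WM mixture the abstract describes (the structure, parameter names, and stimulus-action mapping below are all assumptions for illustration, not the authors' model): WM recalls the last rewarded action for a stimulus but its influence shrinks when set size exceeds capacity, while RL learns incrementally by the delta rule.

```python
import math
import random

def softmax(values, beta):
    """Softmax policy over action values with inverse temperature beta."""
    exps = [math.exp(beta * v) for v in values]
    total = sum(exps)
    return [e / total for e in exps]

def run_rlwm(set_size=3, n_actions=3, capacity=3, alpha=0.1,
             beta=8.0, trials=300, seed=1):
    """Toy RL+WM mixture: choice probabilities blend a capacity-weighted WM
    policy (last rewarded action per stimulus) with a Q-learning policy."""
    rng = random.Random(seed)
    stimuli = list(range(set_size))
    correct = {s: s % n_actions for s in stimuli}  # hypothetical S-A mapping
    q = {s: [1.0 / n_actions] * n_actions for s in stimuli}
    wm = {}  # stimulus -> last rewarded action
    w = min(1.0, capacity / set_size)  # WM reliance, scaled by capacity
    rewards = []
    for _ in range(trials):
        s = rng.choice(stimuli)
        p_rl = softmax(q[s], beta)
        if s in wm:
            p = [w * (1.0 if a == wm[s] else 0.0) + (1.0 - w) * p_rl[a]
                 for a in range(n_actions)]
        else:
            p = p_rl
        a = rng.choices(range(n_actions), weights=p)[0]
        r = 1.0 if a == correct[s] else 0.0
        q[s][a] += alpha * (r - q[s][a])  # incremental RL update
        if r == 1.0:
            wm[s] = a
        rewards.append(r)
    return rewards

history = run_rlwm()  # accuracy climbs as WM and RL accumulate evidence
```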

  18. Onboard Image Processing System for Hyperspectral Sensor

    PubMed Central

    Hihara, Hiroki; Moritani, Kotaro; Inoue, Masao; Hoshi, Yoshihiro; Iwasaki, Akira; Takada, Jun; Inada, Hitomi; Suzuki, Makoto; Seki, Taeko; Ichikawa, Satoshi; Tanii, Jun

    2015-01-01

    Onboard image processing systems for a hyperspectral sensor have been developed in order to maximize image data transmission efficiency for large-volume and high-speed data downlink capacity. Since more than 100 channels are required for hyperspectral sensors on Earth observation satellites, fast and small-footprint lossless image compression capability is essential for reducing the size and weight of a sensor system. A fast lossless image compression algorithm has been developed and implemented in the onboard correction circuitry of sensitivity and linearity of Complementary Metal Oxide Semiconductor (CMOS) sensors in order to maximize the compression ratio. The employed image compression method is based on the Fast, Efficient, Lossless Image compression System (FELICS), which is a hierarchical predictive coding method with resolution scaling. To improve FELICS's performance in image decorrelation and entropy coding, we apply two-dimensional interpolation prediction and adaptive Golomb-Rice coding. It supports progressive decompression using resolution scaling while still maintaining superior performance in speed and complexity. Coding efficiency and compression speed enlarge the effective capacity of signal transmission channels, which leads to reduced onboard hardware by multiplexing sensor signals into fewer compression circuits. The circuitry is embedded into the data formatter of the sensor system without adding size, weight, power consumption, or fabrication cost. PMID:26404281
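The Golomb-Rice stage mentioned above encodes each non-negative prediction residual as a unary-coded quotient plus k binary remainder bits, after mapping signed residuals to non-negative integers. A self-contained sketch of that encoding (the adaptive selection of k used by the actual FELICS variant is omitted):

```python
def rice_encode(value, k):
    """Rice code with parameter k: unary quotient (q ones + a terminating
    zero), then k binary remainder bits."""
    q, r = value >> k, value & ((1 << k) - 1)
    rem = format(r, "0{}b".format(k)) if k > 0 else ""
    return "1" * q + "0" + rem

def rice_decode(bits, k):
    """Decode a single Rice codeword produced by rice_encode."""
    q = 0
    while bits[q] == "1":
        q += 1
    r = int(bits[q + 1:q + 1 + k], 2) if k > 0 else 0
    return (q << k) | r

def zigzag(e):
    """Map a signed prediction residual to a non-negative integer:
    0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ..."""
    return (e << 1) if e >= 0 else -(e << 1) - 1

print(rice_encode(zigzag(-3), 2))  # residual -3 maps to 5, encodes as "1001"
```

Small k suits peaky residual distributions; larger k trades shorter unary prefixes for longer remainders, which is what the adaptive stage tunes per context.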

  19. An examination of some safety issues among commercial motorcyclists in Nigeria: a case study.

    PubMed

    Arosanyin, Godwin Tunde; Olowosulu, Adekunle Taiwo; Oyeyemi, Gafar Matanmi

    2013-01-01

    The reduction of road crashes and injuries among motorcyclists in Nigeria requires a system inquiry into safety issues at the pre-crash, crash and post-crash stages to guide action plans. This paper examines safety issues such as age restriction, motorcycle engine capacity, highway code awareness, licence holding, helmet usage, crash involvement, rescue and payment for treatment among commercial motorcyclists. The primary data, derived from a structured questionnaire administered to 334 commercial motorcyclists in Samaru, Zaria, were analysed using descriptive statistics and the logistic regression technique. There was total compliance with age restriction and motorcycle engine capacity. About 41.8% of the operators were not aware of the existence of the highway code. The odds of licence holding increased with highway code awareness, education (with above senior secondary as the reference category) and earnings. The odds of crash involvement decreased with highway code awareness, earnings and mode of operation. About 84% of the motorcyclists did not use crash helmets, in spite of being aware of the benefit, and 65.4% of motorcycle crashes were found to be with other road users. The promotion of safety among motorcyclists therefore requires strict traffic law enforcement and modification of road design to segregate traffic and protect pedestrians.

  20. Dementia, Decision Making, and Capacity.

    PubMed

    Darby, R Ryan; Dickerson, Bradford C

    After participating in this activity, learners should be better able to:
    • Assess the neuropsychological literature on decision making and the medical and legal assessment of capacity in patients with dementia
    • Identify the limitations of integrating findings from decision-making research into capacity assessments for patients with dementia

    ABSTRACT: Medical and legal professionals face the challenge of assessing capacity and competency to make medical, legal, and financial decisions in dementia patients with impaired decision making. While such assessments have classically focused on the capacity for complex reasoning and executive functions, research in decision making has revealed that motivational and metacognitive processes are also important. We first briefly review the neuropsychological literature on decision making and on the medical and legal assessment of capacity. Next, we discuss the limitations of integrating findings from decision-making research into capacity assessments, including the group-to-individual inference problem, the unclear role of neuroimaging in capacity assessments, and the lack of capacity measures that integrate important facets of decision making. Finally, we present several case examples where we attempt to demonstrate the potential benefits and important limitations of using decision-making research to aid in capacity determinations.

  1. Hybrid RAID With Dual Control Architecture for SSD Reliability

    NASA Astrophysics Data System (ADS)

    Chatterjee, Santanu

    2010-10-01

    Solid-state devices (SSDs), which are increasingly being adopted in today's data storage systems, offer higher capacity and performance but lower reliability, which leads to more frequent rebuilds and a higher risk of data loss. Although SSDs are very energy efficient compared to hard disk drives, the bit error rate (BER) characteristics of an SSD require expensive erase operations between successive writes. Parity-based RAID (for example RAID 4, 5, and 6) provides data integrity using parity information and tolerates the loss of any one drive (RAID 4, 5) or two drives (RAID 6), but the parity blocks are updated more often than the data blocks due to random access patterns, so the SSDs holding more parity receive more writes and consequently age faster. To address this problem, we propose a model-based hybrid disk array architecture that uses the RAID 4 (striping with dedicated parity) technique with SSDs as data drives, while fast hard disk drives of the same capacity serve as dedicated parity drives. This proposed architecture opens the door to using commodity SSDs past their erasure limit and can also reduce the need for expensive hardware error-correcting code (ECC) in the devices.
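The dedicated-parity scheme at the heart of RAID 4 reduces to a byte-wise XOR across the data blocks of each stripe, which is also what makes single-drive recovery possible. A toy sketch of the arithmetic (illustrative only, not the proposed controller logic):

```python
def raid4_parity(blocks):
    """Dedicated-parity computation: byte-wise XOR across equal-length blocks."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

def recover_block(surviving_blocks, parity):
    """Rebuild the single lost data block: XOR of the survivors and the parity."""
    return raid4_parity(list(surviving_blocks) + [parity])

stripe = [b"abc", b"xyz", b"123"]  # data drives in one stripe
parity = raid4_parity(stripe)      # written to the dedicated parity drive
rebuilt = recover_block([stripe[0], stripe[2]], parity)  # drive 1 failed
```

Because every data write touches the parity drive, the parity device absorbs the heaviest write load, which is exactly why the abstract places it on an HDD rather than an SSD.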

  2. Non-coding functions of alternative pre-mRNA splicing in development.

    PubMed

    Mockenhaupt, Stefan; Makeyev, Eugene V

    2015-12-01

    A majority of messenger RNA precursors (pre-mRNAs) in higher eukaryotes undergo alternative splicing to generate more than one mature product. By targeting the open reading frame region, this process increases the diversity of protein isoforms beyond the nominal coding capacity of the genome. However, alternative splicing also frequently controls output levels and spatiotemporal features of cellular and organismal gene expression programs. Here we discuss how these non-coding functions of alternative splicing contribute to development through regulation of mRNA stability, translational efficiency and cellular localization. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  3. Error Correction using Quantum Quasi-Cyclic Low-Density Parity-Check(LDPC) Codes

    NASA Astrophysics Data System (ADS)

    Jing, Lin; Brun, Todd; Quantum Research Team

    Quasi-cyclic LDPC codes can approach the Shannon capacity and have efficient decoders. Hagiwara et al. (2007) presented a method to calculate parity-check matrices with high girth. Two distinct, orthogonal matrices Hc and Hd are used. Using submatrices obtained from Hc and Hd by deleting rows, we can alter the code rate. The submatrix of Hc is used to correct Pauli X errors, and the submatrix of Hd to correct Pauli Z errors. We simulated this system for depolarizing noise on USC's High Performance Computing Cluster and obtained the block error rate (BER) as a function of the error weight and code rate. From the rates of uncorrectable errors under different error weights we can extrapolate the BER to any small error probability. Our results show that this code family can perform reasonably well even at high code rates, thus considerably reducing the overhead compared to concatenated and surface codes. This makes these codes promising as storage blocks in fault-tolerant quantum computation.
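For a CSS-style construction of this kind, the X-check and Z-check matrices must satisfy Hc · Hd^T = 0 over GF(2): every row of Hc must overlap every row of Hd in an even number of positions, so that the X- and Z-type stabilizers commute. A small check with toy matrices (the paper's actual Hc and Hd are not reproduced here):

```python
def gf2_orthogonal(hc, hd):
    """CSS condition Hc * Hd^T = 0 over GF(2): every row of Hc must share an
    even number of 1-positions with every row of Hd."""
    return all(
        sum(a & b for a, b in zip(row_c, row_d)) % 2 == 0
        for row_c in hc for row_d in hd
    )

# toy matrices (NOT the paper's Hc, Hd): each Hc row overlaps the Hd row
# in exactly two positions, so the corresponding checks commute
hc = [[1, 1, 0, 0], [0, 0, 1, 1]]
hd = [[1, 1, 1, 1]]
print(gf2_orthogonal(hc, hd))  # -> True
```

Deleting rows from Hc or Hd, as the abstract describes for rate adjustment, preserves this condition, since every remaining row pair was already orthogonal.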

  4. Computer model for electrochemical cell performance loss over time in terms of capacity, power, and conductance (CPC)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gering, Kevin L.

    2015-09-01

    Available capacity, power, and cell conductance figure centrally into performance characterization of electrochemical cells (such as Li-ion cells) over their service life. For example, capacity loss in Li-ion cells is due to a combination of mechanisms, including loss of free available lithium, loss of active host sites, shifts in the potential-capacity curve, etc. Further distinctions can be made regarding irreversible and reversible capacity loss mechanisms. There are tandem needs for accurate interpretation of capacity at characterization conditions (cycling rate, temperature, etc.) and for robust self-consistent modeling techniques that can be used for diagnostic analysis of cell data as well as forecasting of future performance. Analogous issues exist for aging effects on cell conductance and available power. To address these needs, a modeling capability was developed that provides a systematic analysis of the contributing factors to battery performance loss over aging and to act as a regression/prediction platform for cell performance. The modeling basis is a summation of self-consistent chemical kinetics rate expressions, which as individual expressions each covers a distinct mechanism (e.g., loss of active host sites, lithium loss), but collectively account for the net loss of premier metrics (e.g., capacity) over time for a particular characterization condition. Specifically, sigmoid-based rate expressions are utilized to describe each contribution to performance loss. Through additional mathematical development another tier of expressions is derived and used to perform differential analyses and segregate irreversible versus reversible contributions, as well as to determine concentration profiles over cell aging for affected Li+ ion inventory and fraction of active sites that remain at each time step. Reversible fade components are surmised by comparing fade rates at fast versus slow cycling conditions.
    The model is easily utilized for predictive calculations so that future capacity performance can be estimated. The invention covers mathematical and theoretical frameworks, and demonstrates application to various Li-ion cells covering test periods that vary in duration, and shows model predictions well past the end of test periods. Version 2.0 Enhancements: the code now covers path-dependent aging scenarios, wherein the framework allows for arbitrarily-chosen aging conditions over a timeline to accommodate prediction of battery aging over a multiplicity of changing conditions. The code framework also allows for cell conductance and power loss evaluations over cell aging, analysis of series strings that contain a thermal anomaly (hot spot), and evaluation of battery thermal management parameters that impact battery lifetimes. Lastly, a comprehensive GUI now resides in the Ver. 2.0 code.
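A sigmoid-based rate expression of the kind described rises from zero at beginning of life toward a mechanism-specific maximum extent; summing one term per mechanism gives the net capacity loss. A hedged sketch (the functional form and parameter names below are illustrative, not the report's exact expressions):

```python
import math

def sigmoid_loss(t, m, a, b):
    """One fade mechanism's contribution to fractional capacity loss at time t.
    Rises from 0 at t = 0 toward the mechanism's maximum extent m.
    Parameters (m, a, b) are illustrative: extent, rate, stretch exponent.
    Keep a*t moderate; math.exp overflows for very large arguments."""
    return 2.0 * m * (0.5 - 1.0 / (1.0 + math.exp((a * t) ** b)))

def total_capacity(t, q0, mechanisms):
    """Remaining capacity: initial capacity q0 minus the summed contributions
    of each mechanism (e.g. lithium-inventory loss, active-site loss)."""
    return q0 * (1.0 - sum(sigmoid_loss(t, *mech) for mech in mechanisms))

# two hypothetical mechanisms: (max extent, rate, stretch exponent)
mechs = [(0.20, 0.01, 1.0), (0.10, 0.02, 0.9)]
```

Fitting one such term per mechanism to characterization data, then evaluating the sum at future times, is what enables the predictive use described above.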

  5. Wireless visual sensor network resource allocation using cross-layer optimization

    NASA Astrophysics Data System (ADS)

    Bentley, Elizabeth S.; Matyjas, John D.; Medley, Michael J.; Kondi, Lisimachos P.

    2009-01-01

    In this paper, we propose an approach to manage network resources for a Direct Sequence Code Division Multiple Access (DS-CDMA) visual sensor network where nodes monitor scenes with varying levels of motion. It uses cross-layer optimization across the physical layer, the link layer and the application layer. Our technique simultaneously assigns a source coding rate, a channel coding rate, and a power level to all nodes in the network based on one of two criteria that maximize the quality of video of the entire network as a whole, subject to a constraint on the total chip rate. One criterion results in the minimal average end-to-end distortion amongst all nodes, while the other criterion minimizes the maximum distortion of the network. Our approach allows one to determine the capacity of the visual sensor network based on the number of nodes and the quality of video that must be transmitted. For bandwidth-limited applications, one can also determine the minimum bandwidth needed to accommodate a number of nodes with a specific target chip rate. Video captured by a sensor node camera is encoded and decoded using the H.264 video codec by a centralized control unit at the network layer. To reduce the computational complexity of the solution, Universal Rate-Distortion Characteristics (URDCs) are obtained experimentally to relate bit error probabilities to the distortion of corrupted video. Bit error rates are found first by using Viterbi's upper bounds on the bit error probability and second, by simulating nodes transmitting data spread by Total Square Correlation (TSC) codes over a Rayleigh-faded DS-CDMA channel and receiving that data using Auxiliary Vector (AV) filtering.
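The two allocation criteria above, minimizing the network's average end-to-end distortion versus minimizing its maximum distortion, can be illustrated with a toy exhaustive search over per-node operating points (the paper's actual optimizer works over URDCs under a total chip-rate constraint; everything below is a hypothetical miniature):

```python
from itertools import product

def allocate(options_per_node, total_budget, criterion="minmax"):
    """Toy cross-layer allocation: each node picks one (cost, distortion)
    operating point; only choices within the shared budget are feasible.
    'minmax' minimizes the worst node's distortion, 'avg' the network mean."""
    best_score, best_choice = None, None
    for choice in product(*options_per_node):
        cost = sum(c for c, _ in choice)
        if cost > total_budget:
            continue  # infeasible under the chip-rate-like constraint
        dists = [d for _, d in choice]
        score = max(dists) if criterion == "minmax" else sum(dists) / len(dists)
        if best_score is None or score < best_score:
            best_score, best_choice = score, choice
    return best_score, best_choice

# two hypothetical nodes, each with a cheap/noisy and a costly/clean point
options = [[(1, 5.0), (2, 3.0)], [(1, 4.0), (2, 2.0)]]
```

Note the criteria can disagree: under a budget of 3, 'minmax' spends on the worst node while 'avg' may tolerate one poor node to lower the mean.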

  6. The ALFA (Activity Log Files Aggregation) toolkit: a method for precise observation of the consultation.

    PubMed

    de Lusignan, Simon; Kumarapeli, Pushpa; Chan, Tom; Pflug, Bernhard; van Vlymen, Jeremy; Jones, Beryl; Freeman, George K

    2008-09-08

    There is a lack of tools to evaluate and compare Electronic patient record (EPR) systems to inform a rational choice or development agenda. To develop a tool kit to measure the impact of different EPR system features on the consultation. We first developed a specification to overcome the limitations of existing methods. We divided this into work packages: (1) developing a method to display multichannel video of the consultation; (2) code and measure activities, including computer use and verbal interactions; (3) automate the capture of nonverbal interactions; (4) aggregate multiple observations into a single navigable output; and (5) produce an output interpretable by software developers. We piloted this method by filming live consultations (n = 22) by 4 general practitioners (GPs) using different EPR systems. We compared the time taken and variations during coded data entry, prescribing, and blood pressure (BP) recording. We used nonparametric tests to make statistical comparisons. We contrasted methods of BP recording using Unified Modeling Language (UML) sequence diagrams. We found that 4 channels of video were optimal. We identified an existing application for manual coding of video output. We developed in-house tools for capturing use of keyboard and mouse and to time stamp speech. The transcript is then typed within this time stamp. Although we managed to capture body language using pattern recognition software, we were unable to use this data quantitatively. We loaded these observational outputs into our aggregation tool, which allows simultaneous navigation and viewing of multiple files. This also creates a single exportable file in XML format, which we used to develop UML sequence diagrams. In our pilot, the GP using the EMIS LV (Egton Medical Information Systems Limited, Leeds, UK) system took the longest time to code data (mean 11.5 s, 95% CI 8.7-14.2). 
Nonparametric comparison of EMIS LV with the other systems showed a significant difference, with EMIS PCS (Egton Medical Information Systems Limited, Leeds, UK) (P = .007), iSoft Synergy (iSOFT, Banbury, UK) (P = .014), and INPS Vision (INPS, London, UK) (P = .006) facilitating faster coding. In contrast, prescribing was fastest with EMIS LV (mean 23.7 s, 95% CI 20.5-26.8), but nonparametric comparison showed no statistically significant difference. UML sequence diagrams showed that the simplest BP recording interface was not the easiest to use, as users spent longer navigating or looking up previous blood pressures separately. Complex interfaces with free-text boxes left clinicians unsure of what to add. The ALFA method allows the precise observation of the clinical consultation. It enables rigorous comparison of core elements of EPR systems. Pilot data suggests its capacity to demonstrate differences between systems. Its outputs could provide the evidence base for making more objective choices between systems.

  7. Community and District Empowerment for Scale-up (CODES): a complex district-level management intervention to improve child survival in Uganda: study protocol for a randomized controlled trial.

    PubMed

    Waiswa, Peter; O'Connell, Thomas; Bagenda, Danstan; Mullachery, Pricila; Mpanga, Flavia; Henriksson, Dorcus Kiwanuka; Katahoire, Anne Ruhweza; Ssegujja, Eric; Mbonye, Anthony K; Peterson, Stefan Swartling

    2016-03-11

    Innovative and sustainable strategies to strengthen districts and other sub-national health systems and management are urgently required to reduce child mortality. Although highly effective evidence-based and affordable child survival interventions are well known, at the district level a lack of data, motivation, and analytic and planning capacity often impedes prioritization, and management weaknesses impede implementation. The Community and District Empowerment for Scale-up (CODES) project is a complex management intervention designed to test whether districts, when empowered with data and management tools, can prioritize and implement evidence-based child survival interventions equitably. The CODES strategy combines management, diagnostic, and evaluation tools to identify and analyze the causes of bottlenecks to implementation, build the capacity of district management teams to implement context-specific solutions, and foster community monitoring and social accountability to increase demand for services. CODES combines UNICEF tools designed to systematize priority setting, allocation of resources, and problem solving with community dialogues based on Citizen Report Cards and U-Reports, used to engage and empower communities in monitoring health service provision and to demand quality services. Implementation and all data collection will be carried out by the district teams or local community-based organizations, who will be supported by two local implementing partners. The study will be evaluated as a cluster randomized trial with eight intervention and eight comparison districts over a period of 3 years. Evaluation will focus on differences in uptake of child survival interventions and will follow an intention-to-treat analysis. We will also document and analyze experiences in implementation, including changes in management practices.
    By increasing the District Health Management Teams' capacity to prioritize and implement context-specific solutions, and by empowering communities to become active partners in service delivery, coverage of child survival interventions will increase. Lessons learned on strengthening district-level managerial capacities and mechanisms for community monitoring may have implications, not only in Uganda but also in other similar settings, especially with regard to accelerating effective coverage of key child survival interventions using locally available resources. ISRCTN15705788, date of registration: 24 July 2015.

  8. Methodology and Method and Apparatus for Signaling with Capacity Optimized Constellations

    NASA Technical Reports Server (NTRS)

    Barsoum, Maged F. (Inventor); Jones, Christopher R. (Inventor)

    2016-01-01

    Communication systems are described that use geometrically shaped PSK constellations with increased capacity compared to conventional PSK constellations operating within a similar SNR band. The geometrically shaped PSK constellation is optimized based upon parallel decoding capacity. In many embodiments, a capacity-optimized geometrically shaped constellation can be used to replace a conventional constellation as part of a firmware upgrade to transmitters and receivers within a communication system. In a number of embodiments, the geometrically shaped constellation is optimized for an additive white Gaussian noise channel or a fading channel. In numerous embodiments, the communication uses adaptive rate encoding, and the location of points within the geometrically shaped constellation changes as the code rate changes.

  9. Discrete capacity limits and neuroanatomical correlates of visual short-term memory for objects and spatial locations.

    PubMed

    Konstantinou, Nikos; Constantinidou, Fofi; Kanai, Ryota

    2017-02-01

    Working memory is responsible for keeping information in mind when it is no longer in view, linking perception with higher cognitive functions. Despite this crucial role, short-term maintenance of visual information is severely limited. Research suggests that capacity limits in visual short-term memory (VSTM) are correlated with sustained activity in distinct brain areas. Here, we investigated whether variability in the structure of the brain is reflected in individual differences in behavioral capacity estimates for spatial and object VSTM. Behavioral capacity estimates were calculated separately for spatial and object information using a novel adaptive staircase procedure and were found to be unrelated, supporting domain-specific VSTM capacity limits. Voxel-based morphometry (VBM) analyses revealed dissociable neuroanatomical correlates of spatial versus object VSTM. Interindividual variability in spatial VSTM was reflected in the gray matter density of the inferior parietal lobule. In contrast, object VSTM was reflected in the gray matter density of the left insula. These dissociable findings highlight the importance of considering domain-specific estimates of VSTM capacity and point to the crucial brain regions that limit VSTM capacity for different types of visual information. Hum Brain Mapp 38:767-778, 2017. © 2016 Wiley Periodicals, Inc.

  10. Signal detection evidence for limited capacity in visual search

    PubMed Central

    Fencsik, David E.; Flusberg, Stephen J.; Horowitz, Todd S.; Wolfe, Jeremy M.

    2014-01-01

    The nature of capacity limits (if any) in visual search has been a topic of controversy for decades. In 30 years of work, researchers have attempted to distinguish between two broad classes of visual search models. Attention-limited models have proposed two stages of perceptual processing: an unlimited-capacity preattentive stage, and a limited-capacity selective attention stage. Conversely, noise-limited models have proposed a single, unlimited-capacity perceptual processing stage, with decision processes influenced only by stochastic noise. Here, we use signal detection methods to test a strong prediction of attention-limited models. In standard attention-limited models, performance of some searches (feature searches) should only be limited by a preattentive stage. Other search tasks (e.g., spatial configuration search for a “2” among “5”s) should be additionally limited by an attentional bottleneck. We equated average accuracies for a feature and a spatial configuration search over set sizes of 1–8 for briefly presented stimuli. The strong prediction of attention-limited models is that, given overall equivalence in performance, accuracy should be better on the spatial configuration search than on the feature search for set size 1, and worse for set size 8. We confirm this crossover interaction and show that it is problematic for at least one class of one-stage decision models. PMID:21901574
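The unlimited-capacity (noise-limited) account can be explored with a toy max-rule signal detection simulation. This is our own illustrative sketch, not the authors' analysis; the d′, criterion, and trial counts are arbitrary. Each item contributes one noisy familiarity sample, and the observer reports "target present" when the maximum sample exceeds a criterion, so accuracy falls with set size purely through decision noise:

```python
import random

def search_accuracy(set_size, d_prime, criterion, n_trials=20000, seed=7):
    # Max-rule signal detection observer for yes/no visual search:
    # distractors ~ N(0,1); on target-present trials one item ~ N(d',1).
    rng = random.Random(seed)
    correct = 0
    for t in range(n_trials):
        target_present = t % 2 == 0   # alternate present/absent trials
        samples = [rng.gauss(0, 1) for _ in range(set_size)]
        if target_present:
            samples[0] = rng.gauss(d_prime, 1)
        says_present = max(samples) > criterion
        correct += says_present == target_present
    return correct / n_trials
```

With d′ = 2 and criterion 1, accuracy drops from roughly 0.84 at set size 1 toward 0.6 at set size 8, because the maximum of many noise samples increasingly produces false alarms even without any attentional bottleneck.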

  11. The role of ethical principles in health care and the implications for ethical codes.

    PubMed Central

    Limentani, A E

    1999-01-01

    A common ethical code for everybody involved in health care is desirable, but there are important limitations to the role such a code could play. In order to understand these limitations the approach to ethics using principles and their application to medicine is discussed, and in particular the implications of their being prima facie. The expectation of what an ethical code can do changes depending on how ethical properties in general are understood. The difficulties encountered when ethical values are applied reactively to an objective world can be avoided by seeing them as a more integral part of our understanding of the world. It is concluded that an ethical code can establish important values and describe a common ethical context for health care but is of limited use in solving new and complex ethical problems. PMID:10536764

  12. 47 CFR 52.19 - Area code relief.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... new area codes within their states. Such matters may include, but are not limited to: Directing... realignment; establishing new area code boundaries; establishing necessary dates for the implementation of... code relief planning encompasses all functions related to the implementation of new area codes that...

  13. 47 CFR 52.19 - Area code relief.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... new area codes within their states. Such matters may include, but are not limited to: Directing... realignment; establishing new area code boundaries; establishing necessary dates for the implementation of... code relief planning encompasses all functions related to the implementation of new area codes that...

  14. An efficient decoding for low density parity check codes

    NASA Astrophysics Data System (ADS)

    Zhao, Ling; Zhang, Xiaolin; Zhu, Manjie

    2009-12-01

    Low density parity check (LDPC) codes are a class of forward-error-correction codes. They are among the best-known codes capable of achieving low bit error rates (BER) approaching Shannon's capacity limit. Recently, LDPC codes have been adopted by the European Digital Video Broadcasting (DVB-S2) standard and have been proposed for the emerging IEEE 802.16 fixed and mobile broadband wireless-access standard. The Consultative Committee for Space Data Systems (CCSDS) has also recommended LDPC codes for deep-space and near-Earth communications. LDPC codes are therefore likely to be widely used in wired and wireless communication, magnetic recording, optical networking, DVB, and other fields in the near future, so efficient hardware implementation is of great interest. This paper presents an efficient partially parallel decoder architecture suited for quasi-cyclic (QC) LDPC codes, using the belief propagation algorithm for decoding. Algorithmic transformation and architectural-level optimization are incorporated to reduce the critical path. First, the parity check matrix of the LDPC code is analyzed to determine the relationship between the row weight and the column weight. The sharing level of the check node updating units (CNU) and the variable node updating units (VNU) is then determined according to this relationship. After that, the CNU and VNU are rearranged and divided into several smaller parts; with the help of some auxiliary logic, these smaller parts can be grouped into CNUs during check node update processing and into VNUs during variable node update processing. The smaller parts are called node update kernel units (NKU) and the auxiliary logic circuits are called node update auxiliary units (NAU). With the NAUs' help, the two iteration steps are completed by the NKUs, which yields a large reduction in hardware resources. 
Meanwhile, efficient techniques have been developed to reduce the computation delay of the node processing units and to minimize hardware overhead for parallel processing. The method applies not only to regular LDPC codes but also to irregular ones. Based on the proposed architecture, a (7493, 6096) irregular QC-LDPC code decoder is described in the Verilog hardware description language and implemented on an Altera StratixII EP2S130 field programmable gate array (FPGA). The implementation results show that over 20% of logic core size can be saved compared with conventional partially parallel decoder architectures, without any performance degradation. At a decoding clock of 100 MHz, the proposed decoder achieves a maximum (source data) decoding throughput of 133 Mb/s at 18 iterations.
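A full belief-propagation decoder is too involved to reproduce here, but its hard-decision cousin, bit-flipping decoding, illustrates the same check-node/variable-node interplay in a few lines. The sketch below is our own illustration: the parity-check matrix is the tiny (7,4) Hamming code, chosen only for brevity (real QC-LDPC matrices are far larger and sparser), and the flip rule simply targets the bit involved in the most unsatisfied checks:

```python
# Toy parity-check matrix: the (7,4) Hamming code (NOT a real LDPC code,
# but small enough to trace by hand).
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def syndrome(H, r):
    # one bit per check node: 1 means the parity check is unsatisfied
    return [sum(h * b for h, b in zip(row, r)) % 2 for row in H]

def bit_flip_decode(H, received, max_iters=10):
    r = list(received)
    for _ in range(max_iters):
        s = syndrome(H, r)
        if not any(s):
            return r  # all parity checks satisfied: valid codeword
        # for every variable node, count the unsatisfied checks it touches
        counts = [sum(s[i] for i, row in enumerate(H) if row[j])
                  for j in range(len(r))]
        r[counts.index(max(counts))] ^= 1  # flip the most-suspect bit, retry
    return r
```

Decoding a codeword with one flipped bit recovers the original in one pass; the CNU/VNU units described in the paper are hardware realizations of exactly these two alternating computations, the check over `H`'s rows and the per-bit count over its columns.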

  15. Health research capacity building in Georgia: a case-based needs assessment.

    PubMed

    Squires, A; Chitashvili, T; Djibuti, M; Ridge, L; Chyun, D

    2017-06-01

    Research capacity building in the health sciences in low- and middle-income countries (LMICs) has typically focused on bench-science capacity, but research examining health service delivery and the health workforce is equally necessary to determine the best ways to deliver care. The Republic of Georgia, formerly a part of the Soviet Union, has multiple issues within its healthcare system that would benefit from expanded research capacity, but the current research environment needs to be explored prior to examining research-focused activities. The purpose of this project was to conduct a needs assessment focused on developing research capacity in the Republic of Georgia, with an emphasis on workforce and network development. We used a case study approach guided by a needs assessment format. We conducted in-country, informal, semi-structured interviews in English with key informants and focus groups with faculty, students, and representatives of local non-governmental organizations. Purposive and snowball sampling approaches were used to recruit participants, with key informant interviews scheduled prior to arrival in country. Documents relevant to research capacity building were also included. Interview results were coded via content analysis. Final results were organized into a SWOT (strengths, weaknesses, opportunities, threats) analysis format, with the report shared with participants. There is widespread interest among students and faculty in Georgia in building research capacity. Lack of funding was identified by many informants as a barrier to research. Many critical research skills, such as proposal development, qualitative research skills, and statistical analysis, were reported as very limited. Participants expressed concerns about the ethics of research, with some suggesting that research is undertaken to punish or 'expose' subjects. 
However, students and faculty are highly motivated to improve their skills, are open to a variety of learning modalities, and have research priorities aligned with Georgian health needs. This study's findings indicate that while the Georgian research infrastructure needs further development, Georgian students and faculty are eager to fill its gaps by improving their own skills. These findings are consistent with those seen in other developing country contexts. Copyright © 2017 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.

  16. Communication, Correlation and Complementarity

    NASA Astrophysics Data System (ADS)

    Schumacher, Benjamin Wade

    1990-01-01

    In quantum communication, a sender prepares a quantum system in a state corresponding to his message and conveys it to a receiver, who performs a measurement on it. The receiver acquires information about the message based on the outcome of his measurement. Since the state of a single quantum system is not always completely determinable from measurement, quantum mechanics limits the information capacity of such channels. According to a theorem of Kholevo, the amount of information conveyed by the channel can be no greater than the entropy of the ensemble of possible physical signals. The connection between information and entropy allows general theorems to be proved regarding the energy requirements of communication. For example, it can be shown that one particular quantum coding scheme, called thermal coding, uses energy with maximum efficiency. A close analogy between communication and quantum correlation can be made using Everett's notion of relative states. Kholevo's theorem can be used to prove that the mutual information of a pair of observables on different systems is bounded by the entropy of the state of each system. This confirms and extends an old conjecture of Everett. The complementarity of quantum observables can be described by information-theoretic uncertainty relations, several of which have been previously derived. These relations imply limits on the degree to which different messages can be coded in complementary observables of a single channel. Complementarity also restricts the amount of information that can be recovered from a given channel using a given decoding observable. Information inequalities can be derived which are analogous to the well-known Bell inequalities for correlated quantum systems. These inequalities are satisfied for local hidden variable theories but are violated by quantum systems, even where the correlation is weak. 
These information inequalities are metric inequalities for an "information distance", and their structure can be made exactly analogous to that of the familiar covariance Bell inequalities by introducing a "covariance distance". Similar inequalities derived for successive measurements on a single system are also violated in quantum mechanics.
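Kholevo's (Holevo's) bound can be evaluated directly for small examples. The sketch below is our own illustration, restricted to 2x2 density matrices so the eigenvalues can be computed in closed form; it evaluates χ = S(ρ̄) − Σᵢ pᵢ S(ρᵢ) and confirms that signalling with the non-orthogonal qubit states |0⟩ and |+⟩ conveys strictly less than one bit:

```python
import math

def eig2(m):
    # eigenvalues of a 2x2 Hermitian matrix [[a, b], [conj(b), d]]
    (a, b), (_, d) = m
    tr, det = a + d, a * d - abs(b) ** 2
    disc = math.sqrt(max(tr * tr - 4 * det, 0.0))
    return [(tr - disc) / 2, (tr + disc) / 2]

def entropy(m):
    # von Neumann entropy in bits
    return -sum(l * math.log2(l) for l in eig2(m) if l > 1e-12)

def holevo_chi(probs, states):
    # chi = S(average rho) - sum_i p_i * S(rho_i)
    avg = [[sum(p * s[r][c] for p, s in zip(probs, states))
            for c in range(2)] for r in range(2)]
    return entropy(avg) - sum(p * entropy(s) for p, s in zip(probs, states))

ket0 = [[1, 0], [0, 0]]            # |0><0|
plus = [[0.5, 0.5], [0.5, 0.5]]    # |+><+|
```

For the equiprobable {|0⟩, |+⟩} ensemble χ ≈ 0.60 bit, while orthogonal states reach the full 1 bit, which is the sense in which non-orthogonal signal states limit the channel's information capacity.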

  17. Tinkering with Translation: Protein Synthesis in Virus-Infected Cells

    PubMed Central

    Walsh, Derek; Mathews, Michael B.; Mohr, Ian

    2013-01-01

    Viruses are obligate intracellular parasites, and their replication requires host cell functions. Although the size, composition, complexity, and functions encoded by their genomes are remarkably diverse, all viruses rely absolutely on the protein synthesis machinery of their host cells. Lacking their own translational apparatus, they must recruit cellular ribosomes in order to translate viral mRNAs and produce the protein products required for their replication. In addition, there are other constraints on viral protein production. Crucially, host innate defenses and stress responses capable of inactivating the translation machinery must be effectively neutralized. Furthermore, the limited coding capacity of the viral genome needs to be used optimally. These demands have resulted in complex interactions between virus and host that exploit ostensibly virus-specific mechanisms and, at the same time, illuminate the functioning of the cellular protein synthesis apparatus. PMID:23209131

  18. The biosynthetic capacities of the plastids and integration between cytoplasmic and chloroplast processes.

    PubMed

    Rolland, Norbert; Curien, Gilles; Finazzi, Giovanni; Kuntz, Marcel; Maréchal, Eric; Matringe, Michel; Ravanel, Stéphane; Seigneurin-Berny, Daphné

    2012-01-01

    Plastids are semiautonomous organelles derived from cyanobacterial ancestors. Following endosymbiosis, plastids have evolved to optimize their functions, thereby limiting metabolic redundancy with other cell compartments. Contemporary plastids have also recruited proteins produced by the nuclear genome of the host cell. In addition, many genes acquired from the cyanobacterial ancestor evolved to code for proteins that are targeted to cell compartments other than the plastid. Consequently, metabolic pathways are now a patchwork of enzymes of diverse origins, located in various cell compartments. Because of this, a wide range of metabolites and ions traffic between the plastids and other cell compartments. In this review, we provide a comprehensive analysis of the well-known, and of the as yet uncharacterized, chloroplast/cytosol exchange processes, which can be deduced from what is currently known about compartmentation of plant-cell metabolism.

  19. Unified method of knowledge representation in the evolutionary artificial intelligence systems

    NASA Astrophysics Data System (ADS)

    Bykov, Nickolay M.; Bykova, Katherina N.

    2003-03-01

    The evolution of artificial intelligence systems, driven by the growing complexity of their application domains and by scientific progress, has diversified the methods and algorithms used to represent and apply knowledge in these systems. This often makes it difficult to design effective methods for knowledge discovery and manipulation. In this work, the authors propose a method for the unified representation of a system's knowledge about external-world objects through rank transformations of their descriptions, made in different feature spaces: deterministic, probabilistic, fuzzy, and others. A proof is presented that information about the rank configuration of the object states in the feature space is sufficient for decision making. It is shown that the geometric and combinatorial models of the set of rank configurations form a system of incidence, which allows information about them to be stored in a compressed form. A method for describing rank configurations by a DRP code (distance rank preserving code) is proposed, and its completeness, information capacity, noise immunity, and privacy are reviewed. It is shown that the capacity of a transmission channel under this representation exceeds unity, since the code words carry information both about the object states and about the distance ranks between them. An efficient data clustering algorithm for identifying object states, based on this code, is described. Representing knowledge with rank configurations makes it possible to unify and simplify decision-making algorithms by performing logical operations on DRP code words. Examples of the proposed clustering technique on a given sample set, the rank configuration of the resulting clusters, and its DRP codes are presented.

  20. Operational evaluation of a DGPS / SATCOM VTS : final report

    DOT National Transportation Integrated Search

    1996-09-01

    Satellite communications (SATCOM) using code division multiple access(CDMA) modulation and burst messaging, provided a new dimension to communication channel capacity, operating dependability, and area of coverage. This technology, together with diff...

  1. High levels of time contraction in young children in dual tasks are related to their limited attention capacities.

    PubMed

    Hallez, Quentin; Droit-Volet, Sylvie

    2017-09-01

    Numerous studies have shown that durations are judged shorter in a dual-task condition than in a simple-task condition. The resource-based theory of time perception suggests that this is due to the processing of temporal information, which is a demanding cognitive task that consumes limited attention resources. Our study investigated whether this time contraction in a dual-task condition is greater in younger children and, if so, whether this is specifically related to their limited attention capacities. Children aged 5-7 years were given a temporal reproduction task in a simple-task condition and a dual-task condition. In addition, different neuropsychological tests were used to assess not only their attention capacities but also their capacities in terms of working memory and information processing speed. The results showed a shortening of perceived time in the dual task compared with the simple task, and this increased as age decreased. The extent of this shortening effect was directly linked to younger children's limited attentional capacities; the lower their attentional capacities, the greater the time contraction. This study demonstrated that children's errors in time judgments are linked to their cognitive capacities rather than to capacities that are specific to time. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. 32 CFR 935.132 - Speed limits.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 32 National Defense 6 2014-07-01 2014-07-01 false Speed limits. 935.132 Section 935.132 National... WAKE ISLAND CODE Motor Vehicle Code § 935.132 Speed limits. Each person operating a motor vehicle on... day, road and weather conditions, the kind of motor vehicle, and the proximity to persons or buildings...

  3. 32 CFR 935.132 - Speed limits.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 32 National Defense 6 2013-07-01 2013-07-01 false Speed limits. 935.132 Section 935.132 National... WAKE ISLAND CODE Motor Vehicle Code § 935.132 Speed limits. Each person operating a motor vehicle on... day, road and weather conditions, the kind of motor vehicle, and the proximity to persons or buildings...

  4. 32 CFR 935.132 - Speed limits.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 32 National Defense 6 2012-07-01 2012-07-01 false Speed limits. 935.132 Section 935.132 National... WAKE ISLAND CODE Motor Vehicle Code § 935.132 Speed limits. Each person operating a motor vehicle on... day, road and weather conditions, the kind of motor vehicle, and the proximity to persons or buildings...

  5. 32 CFR 935.132 - Speed limits.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 32 National Defense 6 2010-07-01 2010-07-01 false Speed limits. 935.132 Section 935.132 National... WAKE ISLAND CODE Motor Vehicle Code § 935.132 Speed limits. Each person operating a motor vehicle on Wake Island shall operate it at a speed— (a) That is reasonable, safe, and proper, considering time of...

  6. 32 CFR 935.132 - Speed limits.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 32 National Defense 6 2011-07-01 2011-07-01 false Speed limits. 935.132 Section 935.132 National... WAKE ISLAND CODE Motor Vehicle Code § 935.132 Speed limits. Each person operating a motor vehicle on Wake Island shall operate it at a speed— (a) That is reasonable, safe, and proper, considering time of...

  7. International consultation on long-term global health research priorities, research capacity and research uptake in developing countries.

    PubMed

    Conalogue, David Mc; Kinn, Sue; Mulligan, Jo-Ann; McNeil, Malcolm

    2017-03-21

    In recognition of the need for long-term planning for global health research, and to inform future global health research priorities, the United Kingdom Department for International Development (DfID) carried out a public consultation between May and June 2015. The consultation aimed to elicit views on (1) long-term future global health research priorities; (2) areas likely to become less important over time; (3) how to improve research uptake in low-income countries; and (4) how to build research capacity in low-income countries. An online consultation was used to survey a wide range of participants on global health research priorities. The qualitative data were analysed using a thematic analysis, with the frequency of codes in responses tabulated to approximate the relative importance of themes and sub-themes. The public consultation yielded 421 responses. The survey responses confirmed the growing importance of non-communicable disease as a global health research priority, being placed above infectious diseases. Participants felt that the key area for reducing funding prioritisation was infectious diseases. The involvement of policymakers and other key stakeholders was seen as critical to drive research uptake, as was collaboration and partnership. Several methods to build research capacity in low-income countries were described, including capacity building educational programmes, mentorship programmes, and research institution collaboration and partnership. The outcomes from this consultation survey provide valuable insights into how DfID stakeholders prioritise research. The outcomes from this survey were reviewed alongside other elements of a wider DfID consultation process to help inform long-term prioritisation of global health research. There are limitations in this approach; the opportunistic nature of the survey's dissemination means the findings presented may not be representative of the full range of stakeholders or views.

  8. Vastus lateralis surface and single motor unit EMG following submaximal shortening and lengthening contractions.

    PubMed

    Altenburg, Teatske M; de Ruiter, Cornelis J; Verdijk, Peter W L; van Mechelen, Willem; de Haan, Arnold

    2008-12-01

    A single shortening contraction reduces the force capacity of muscle fibers, whereas force capacity is enhanced following lengthening. However, how motor unit recruitment and discharge rate (muscle activation) are adapted to such changes in force capacity during submaximal contractions remains unknown. Additionally, there is limited evidence for force enhancement in larger muscles. We therefore investigated lengthening- and shortening-induced changes in activation of the knee extensors. We hypothesized that when the same submaximal torque had to be generated following shortening, muscle activation had to be increased, whereas a lower activation would suffice to produce the same torque following lengthening. Muscle activation following shortening and lengthening (20 degrees at 10 degrees/s) was determined using rectified surface electromyography (rsEMG) in a first session (at 10% and 50% maximal voluntary contraction (MVC)) and additionally with EMG of 42 vastus lateralis motor units recorded in a second session (at 4%-47% MVC). rsEMG and motor unit discharge rates following shortening and lengthening were normalized to isometric reference contractions. As expected, normalized rsEMG (1.15 +/- 0.19) and discharge rate (1.11 +/- 0.09) were higher following shortening (p < 0.05). Following lengthening, normalized rsEMG (0.91 +/- 0.10) was, as expected, lower than 1.0 (p < 0.05), but normalized discharge rate (0.99 +/- 0.08) was not (p > 0.05). Thus, muscle activation was increased to compensate for a reduced force capacity following shortening by increasing the discharge rate of the active motor units (rate coding). In contrast, following lengthening, rsEMG decreased while the discharge rates of active motor units remained similar, suggesting that derecruitment of units might have occurred.

  9. Response of two identical seven-story structures to the San Fernando earthquake of February 9, 1971

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freeman, S.A.; Honda, K.K.

    1973-10-01

    The results of the structural dynamic investigation of two seven-story reinforced concrete frame structures are presented here. The structures are both Holiday Inn motor hotels that are essentially identical: one is located about 13 miles and the other about 26 miles from the epicenter of the February 9, 1971, San Fernando earthquake. Appreciable nonstructural damage as well as some structural damage was observed. Strong-motion seismic records were obtained for the roof, intermediate story, and ground floor of each structure. The analyses are based on data from the structural drawings, architectural drawings, photographs, engineering reports, and seismogram records obtained before, during, and after the San Fernando earthquake. Both structures experienced motion well beyond the limits of the building code design criteria. A change in fundamental period was observed for each structure after several seconds of response to the earthquake, which indicated nonlinear response. The analyses indicated that the elastic capacity of some structural members was exceeded. Idealized linear models were constructed to approximate response at various time segments. A method for approximating the nonlinear response of each structure is presented. The effects of nonstructural elements, yielding beams, and column capacities are illustrated. Comparisons of the two buildings are made for ductility factors, dynamic response characteristics, and damage. Conclusions are drawn concerning the effects of the earthquake on the structures and the future capacities of the structures.

  10. A class of cellular automata modeling winnerless competition

    NASA Astrophysics Data System (ADS)

    Afraimovich, V.; Ordaz, F. C.; Urías, J.

    2002-06-01

    Neural units introduced by Rabinovich et al. ("Sensory coding with dynamically competitive networks," UCSD and CIT, February 1999) motivate a class of cellular automata (CA) where spatio-temporal encoding is feasible. The spatio-temporal information capacity of a CA is estimated by the information capacity of the attractor set, which happens to be finitely specified. Two-dimensional CA are studied in detail. An example is given for which the attractor is not a subshift.
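The idea of measuring a CA's information capacity through its attractor set can be illustrated in one dimension, where the attractor states of an elementary CA on a small ring can be enumerated exhaustively. This is our own illustrative sketch (the paper studies a different, two-dimensional class of CA); it follows every initial state until it falls onto a cycle and takes log2 of the number of cycle states:

```python
import math
from itertools import product

def step(state, rule):
    # one synchronous update of an elementary (Wolfram-numbered) CA on a ring
    n = len(state)
    return tuple(
        (rule >> (state[(i - 1) % n] * 4 + state[i] * 2 + state[(i + 1) % n])) & 1
        for i in range(n)
    )

def attractor_states(rule, n):
    # every trajectory in a finite deterministic system ends on a cycle;
    # collect the union of all cycle states
    on_cycle = set()
    for start in product((0, 1), repeat=n):
        seen = {}
        s = start
        while s not in seen:
            seen[s] = len(seen)
            s = step(s, rule)
        first = seen[s]  # s closes the cycle; states from `first` on are cyclic
        on_cycle.update(t for t, i in seen.items() if i >= first)
    return on_cycle

def attractor_capacity(rule, n):
    return math.log2(len(attractor_states(rule, n)))
```

Two sanity checks: rule 204 (the identity rule) leaves all 2^n states on attractors, giving capacity n bits, while rule 0 collapses everything to a single fixed point, giving capacity 0.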

  11. Error control techniques for satellite and space communications

    NASA Technical Reports Server (NTRS)

    Costello, D. J., Jr.

    1986-01-01

    High rate concatenated coding systems with trellis inner codes and Reed-Solomon (RS) outer codes for application in satellite communication systems are considered. Two types of inner codes are studied: high rate punctured binary convolutional codes, which result in overall effective information rates between 1/2 and 1 bit per channel use; and bandwidth efficient signal space trellis codes, which can achieve overall effective information rates greater than 1 bit per channel use. Channel capacity calculations with and without side information were performed for the concatenated coding system, and concatenated coding schemes were investigated. In Scheme 1, the inner code is decoded with the Viterbi algorithm and the outer RS code performs error-correction only (decoding without side information). In Scheme 2, the inner code is decoded with a modified Viterbi algorithm which produces reliability information along with the decoded output. In this algorithm, path metrics are used to estimate the entire information sequence, while branch metrics are used to provide the reliability information on the decoded sequence. This information is used to erase unreliable bits in the decoded output. An errors-and-erasures RS decoder is then used for the outer code. These two schemes are proposed for use on NASA satellite channels. Results indicate that high system reliability can be achieved with little or no bandwidth expansion.
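As a minimal concrete example of the inner decoding stage, here is hard-decision Viterbi decoding of the classic rate-1/2, constraint-length-3 convolutional code with octal generators (7, 5). This is our own toy sketch: the schemes above use punctured and signal-space trellis codes, soft metrics, and reliability outputs, all omitted here.

```python
def conv_encode(bits):
    # rate-1/2, constraint-length-3 code, generators (7, 5) octal
    s1 = s2 = 0
    out = []
    for u in bits + [0, 0]:  # two flush bits drive the encoder back to state 0
        out += [u ^ s1 ^ s2, u ^ s2]
        s1, s2 = u, s1
    return out

def viterbi_decode(received, n_bits):
    # hard-decision Viterbi over the 4-state trellis (state = s1*2 + s2)
    INF = float("inf")
    metric = {0: 0, 1: INF, 2: INF, 3: INF}
    paths = {s: [] for s in metric}
    for t in range(n_bits + 2):
        r = received[2 * t: 2 * t + 2]
        new_metric = {s: INF for s in metric}
        new_paths = {}
        for state in metric:
            if metric[state] == INF:
                continue
            s1, s2 = state >> 1, state & 1
            for u in (0, 1):
                expected = [u ^ s1 ^ s2, u ^ s2]
                cost = metric[state] + sum(a != b for a, b in zip(expected, r))
                nxt = (u << 1) | s1
                if cost < new_metric[nxt]:   # keep the survivor path per state
                    new_metric[nxt] = cost
                    new_paths[nxt] = paths[state] + [u]
        metric, paths = new_metric, new_paths
    return paths[0][:n_bits]  # terminated at state 0; drop the flush bits
```

Because this code's free distance is 5, a single flipped channel bit is always corrected, which is the behavior the outer RS code then relies on for cleaning up residual burst errors.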

  12. Association of nutritional status and functional capacity in gastrointestinal cancer patients.

    PubMed

    Pérez-Cruz, Elizabeth; Camacho-Limas, Christian Patricio

    To determine the nutritional status and its association with functional capacity in patients with digestive tract cancer, we retrospectively studied all hospitalized adult patients diagnosed with a cancer of the digestive tract. Nutritional status and functional capacity were assessed. Descriptive statistics and odds ratios were used to determine the association, in SPSS 14.0. 57 patients were included; 96% had weight loss. Using subjective global assessment (SGA) as a screening method, malnutrition was found in 82.5% of the patients, and in 82% and 65% by biochemical and immunological tests, respectively. Functional capacity was assessed by the Karnofsky index, finding that 75.5% of the patients have some activity limitation. Results show an association between malnutrition by SGA and limitation in functional capacity (χ2 = 1.56; p = 0.212; OR: 2.46; 95% confidence interval [95% CI]: 0.581-10.465). In addition, we observed an association between the total lymphocyte count and limitation in functional capacity (χ2 = 6.94; p = 0.008; OR: 5.23; 95% CI: 1.441-19.025). Malnutrition in patients with digestive tract cancer was associated with limitation in functional capacity. Copyright: © 2017 Secretaría de Salud
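Odds-ratio statistics of the kind reported above can be reproduced mechanically from a 2x2 contingency table. The sketch below uses made-up counts (the study's raw table is not given in the abstract) and computes the OR with its Woolf-method 95% confidence interval:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    # 2x2 table: a = exposed with outcome, b = exposed without,
    #            c = unexposed with outcome, d = unexposed without
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR), Woolf method
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```

For hypothetical counts a=20, b=10, c=8, d=19 this gives OR = (20*19)/(10*8) = 4.75 with a CI excluding 1, the same shape of result as the lymphocyte-count association reported above.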

  13. Quantum Limits of Space-to-Ground Optical Communications

    NASA Technical Reports Server (NTRS)

    Hemmati, H.; Dolinar, S.

    2012-01-01

    For a pure loss channel, the ultimate capacity can be achieved with classical coherent states (i.e., ideal laser light): (1) the capacity-achieving receiver (measurement) is yet to be determined; (2) heterodyne detection approaches the ultimate capacity at high mean photon numbers; (3) photon-counting approaches the ultimate capacity at low mean photon numbers. A number of current technology limits drive the achievable performance of free-space communication links. Approaching fundamental limits in the bandwidth-limited regime: (1) heterodyne detection with high-order coherent-state modulation approaches ultimate limits; SOA improvements to laser phase noise and adaptive optics systems for atmospheric transmission would help. (2) High-order intensity modulation and photon-counting can approach heterodyne detection within approximately a factor of 2; this may have advantages over coherent detection in the presence of turbulence. Approaching fundamental limits in the photon-limited regime: (1) low-duty-cycle binary coherent-state modulation (OOK, PPM) approaches ultimate limits; SOA improvements to laser extinction ratio, receiver dark noise, jitter, and blocking would help. (2) In some link geometries (near-field links), number-state transmission could improve over coherent-state transmission.
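The two regimes contrasted above follow from closed-form expressions: the Holevo (ultimate) capacity of the pure-loss channel, g(n̄) = (n̄+1)log₂(n̄+1) − n̄·log₂(n̄) bits per mode for mean received photon number n̄, versus the heterodyne capacity log₂(1+n̄). A quick numeric check (our own sketch):

```python
import math

def ultimate_capacity(nbar):
    # Holevo limit of the pure-loss bosonic channel, bits per mode
    if nbar == 0:
        return 0.0
    return (nbar + 1) * math.log2(nbar + 1) - nbar * math.log2(nbar)

def heterodyne_capacity(nbar):
    # Shannon capacity of the heterodyne-detection channel, bits per mode
    return math.log2(1 + nbar)
```

At n̄ = 100 heterodyne detection trails the ultimate limit by only about log₂(e) ≈ 1.44 bits, while at n̄ = 0.01 it falls short by several-fold, which is why photon-counting schemes such as PPM dominate in the photon-limited regime.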

  14. A Spherical Active Coded Aperture for 4π Gamma-ray Imaging

    DOE PAGES

    Hellfeld, Daniel; Barton, Paul; Gunter, Donald; ...

    2017-09-22

    Gamma-ray imaging facilitates the efficient detection, characterization, and localization of compact radioactive sources in cluttered environments. Fieldable detector systems employing active planar coded apertures have demonstrated broad energy sensitivity via both coded aperture and Compton imaging modalities. However, planar configurations suffer from a limited field-of-view, especially in the coded aperture mode. In order to improve upon this limitation, we introduce a novel design by rearranging the detectors into an active coded spherical configuration, resulting in a 4π isotropic field-of-view for both coded aperture and Compton imaging. This work focuses on the low-energy coded aperture modality and the optimization techniques used to determine the optimal number and configuration of 1 cm³ CdZnTe coplanar grid detectors on a 14 cm diameter sphere with 192 available detector locations.

  15. A shared, flexible neural map architecture reflects capacity limits in both visual short-term memory and enumeration.

    PubMed

    Knops, André; Piazza, Manuela; Sengupta, Rakesh; Eger, Evelyn; Melcher, David

    2014-07-23

    Human cognition is characterized by severe capacity limits: we can accurately track, enumerate, or hold in mind only a small number of items at a time. It remains debated whether capacity limitations across tasks are determined by a common system. Here we measure brain activation of adult subjects performing either a visual short-term memory (vSTM) task consisting of holding in mind precise information about the orientation and position of a variable number of items, or an enumeration task consisting of assessing the number of items in those sets. We show that task-specific capacity limits (three to four items in enumeration and two to three in vSTM) are neurally reflected in the activity of the posterior parietal cortex (PPC): an identical set of voxels in this region, commonly activated during the two tasks, changed its overall response profile reflecting task-specific capacity limitations. These results, replicated in a second experiment, were further supported by multivariate pattern analysis in which we could decode the number of items presented over a larger range during enumeration than during vSTM. Finally, we simulated our results with a computational model of PPC using a saliency map architecture in which the level of mutual inhibition between nodes gives rise to capacity limitations and reflects the task-dependent precision with which objects need to be encoded (high precision for vSTM, lower precision for enumeration). Together, our work supports the existence of a common, flexible system underlying capacity limits across tasks in PPC that may take the form of a saliency map. Copyright © 2014 the authors 0270-6474/14/349857-10$15.00/0.

  16. Maximizing the optical network capacity

    PubMed Central

    Bayvel, Polina; Maher, Robert; Liga, Gabriele; Shevchenko, Nikita A.; Lavery, Domaniç; Killey, Robert I.

    2016-01-01

    Most of the digital data transmitted are carried by optical fibres, forming the great part of the national and international communication infrastructure. The information-carrying capacity of these networks has increased vastly over the past decades through the introduction of wavelength division multiplexing, advanced modulation formats, digital signal processing and improved optical fibre and amplifier technology. These developments sparked the communication revolution and the growth of the Internet, and have created an illusion of infinite capacity being available. But as the volume of data continues to increase, is there a limit to the capacity of an optical fibre communication channel? The optical fibre channel is nonlinear, and the intensity-dependent Kerr nonlinearity limit has been suggested as a fundamental limit to optical fibre capacity. Current research is focused on whether this is the case, and on linear and nonlinear techniques, both optical and electronic, to understand, unlock and maximize the capacity of optical communications in the nonlinear regime. This paper describes some of them and discusses future prospects for success in the quest for capacity. PMID:26809572

  17. Working Memory Capacity as a Dynamic Process

    PubMed Central

    Simmering, Vanessa R.; Perone, Sammy

    2013-01-01

    A well-known characteristic of working memory (WM) is its limited capacity. The source of such limitations, however, is a continued point of debate. Developmental research is positioned to address this debate by jointly identifying the source(s) of limitations and the mechanism(s) underlying capacity increases. Here we provide a cross-domain survey of studies and theories of WM capacity development, which reveals a complex picture: dozens of studies from 50 papers show nearly universal increases in capacity estimates with age, but marked variation across studies, tasks, and domains. We argue that the full pattern of performance cannot be captured through traditional approaches emphasizing single causes, or even multiple separable causes, underlying capacity development. Rather, we consider WM capacity as a dynamic process that emerges from a unified cognitive system flexibly adapting to the context and demands of each task. We conclude by enumerating specific challenges for researchers and theorists that will need to be met in order to move our understanding forward. PMID:23335902

  18. Models Provide Specificity: Testing a Proposed Mechanism of Visual Working Memory Capacity Development

    ERIC Educational Resources Information Center

    Simmering, Vanessa R.; Patterson, Rebecca

    2012-01-01

    Numerous studies have established that visual working memory has a limited capacity that increases during childhood. However, debate continues over the source of capacity limits and its developmental increase. Simmering (2008) adapted a computational model of spatial cognitive development, the Dynamic Field Theory, to explain not only the source…

  19. Short-Term Memory Limitations in Children: Capacity or Processing Deficits?

    ERIC Educational Resources Information Center

    Chi, Michelene T. H.

    1976-01-01

    Evaluates the assertion that short-term memory (STM) capacity increases with age and concludes that the STM capacity limitation in children is due to the deficits in the processing strategies and speeds, which presumably improve with age through cumulative learning. (JM) Available from: Memory and Cognition, Psychonomic Society, 1018 West 34…

  20. Optimal Codes for the Burst Erasure Channel

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon

    2010-01-01

    Deep space communications over noisy channels lead to certain packets that are not decodable. These packets leave gaps, or bursts of erasures, in the data stream. Burst erasure correcting codes overcome this problem. These are forward erasure correcting codes that allow one to recover the missing gaps of data. Much of the recent work on this topic concentrated on Low-Density Parity-Check (LDPC) codes. These are more complicated to encode and decode than Single Parity Check (SPC) codes or Reed-Solomon (RS) codes, and so far have not been able to achieve the theoretical limit for burst erasure protection. A block interleaved maximum distance separable (MDS) code (e.g., an SPC or RS code) offers near-optimal burst erasure protection, in the sense that no other scheme of equal total transmission length and code rate could improve the guaranteed correctible burst erasure length by more than one symbol. The optimality does not depend on the length of the code, i.e., a short MDS code block interleaved to a given length would perform as well as a longer MDS code interleaved to the same overall length. As a result, this approach offers lower decoding complexity with better burst erasure protection compared to other recent designs for the burst erasure channel (e.g., LDPC codes). A limitation of the design is its lack of robustness to channels that have impairments other than burst erasures (e.g., additive white Gaussian noise), making its application best suited for correcting data erasures in layers above the physical layer. The efficiency of a burst erasure code is the length of its burst erasure correction capability divided by the theoretical upper limit on this length. The inefficiency is one minus the efficiency. 
The illustration compares the inefficiency of interleaved RS codes to Quasi-Cyclic (QC) LDPC codes, Euclidean Geometry (EG) LDPC codes, extended Irregular Repeat Accumulate (eIRA) codes, array codes, and random LDPC codes previously proposed for burst erasure protection. As can be seen, the simple interleaved RS codes have substantially lower inefficiency over a wide range of transmission lengths.
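
    The near-optimality argument can be illustrated with the simplest MDS code, the single parity check: a depth-d block interleaver spreads a burst of d erased symbols across d codewords, leaving each codeword with at most one erasure, which the parity symbol repairs. A minimal sketch with illustrative parameters (not taken from the report):

```python
import random

def spc_encode(block):
    """Append a single XOR parity symbol (SPC code: corrects 1 erasure)."""
    parity = 0
    for s in block:
        parity ^= s
    return block + [parity]

def interleave(codewords):
    """Column-wise interleave d codewords of equal length into one stream."""
    n = len(codewords[0])
    d = len(codewords)
    return [codewords[j][i] for i in range(n) for j in range(d)]

def deinterleave(stream, d, n):
    return [[stream[i * d + j] for i in range(n)] for j in range(d)]

def spc_recover(word):
    """Fill in at most one erased symbol (None) via the parity check."""
    erased = [i for i, s in enumerate(word) if s is None]
    if len(erased) > 1:
        raise ValueError("more than one erasure in a codeword")
    if erased:
        parity = 0
        for s in word:
            if s is not None:
                parity ^= s
        word[erased[0]] = parity
    return word[:-1]  # strip the parity symbol

d, k = 8, 4                      # interleaving depth, data symbols per codeword
data = [[random.randrange(256) for _ in range(k)] for _ in range(d)]
stream = interleave([spc_encode(row) for row in data])

# Erase a burst of d consecutive symbols: after de-interleaving,
# each codeword sees at most one erasure, which SPC can repair.
start = 5
for i in range(start, start + d):
    stream[i] = None

recovered = [spc_recover(w) for w in deinterleave(stream, d, k + 1)]
assert recovered == data
```

The guaranteed correctable burst length here is d symbols, one per codeword; a depth-d interleaved (n, k) MDS code generalizes this to bursts of d·(n−k) erasures.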

  1. Low-complexity video encoding method for wireless image transmission in capsule endoscope.

    PubMed

    Takizawa, Kenichi; Hamaguchi, Kiyoshi

    2010-01-01

    This paper presents a low-complexity video encoding method applicable for wireless image transmission in capsule endoscopes. The encoding method is based on Wyner-Ziv theory, in which source statistics that conventional coding would exploit at the transmitter are instead exploited as side information at the receiver. Complex processes in video encoding, such as motion-vector estimation, can therefore be moved to the receiver side, which has a larger-capacity battery. As a result, the encoding process reduces to decimating the channel-coded original data. We provide a performance evaluation for a low-density parity-check (LDPC) coding method in the AWGN channel.

  2. Capturing Energy-Saving Opportunities: Improving Building Efficiency in Rajasthan through Energy Code Implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tan, Qing; Yu, Sha; Evans, Meredydd

    2016-05-01

    India adopted the Energy Conservation Building Code (ECBC) in 2007. Rajasthan is the first state to make ECBC mandatory at the state level. In collaboration with Malaviya National Institute of Technology (MNIT) Jaipur, Pacific Northwest National Laboratory (PNNL) has been working with Rajasthan to facilitate the implementation of ECBC. This report summarizes milestones made in Rajasthan and PNNL's contribution in institutional set-ups, capacity building, compliance enforcement and pilot building construction.

  3. Identification of limit cycles in multi-nonlinearity, multiple path systems

    NASA Technical Reports Server (NTRS)

    Mitchell, J. R.; Barron, O. L.

    1979-01-01

    A method of analysis which identifies limit cycles in autonomous systems with multiple nonlinearities and multiple forward paths is presented. The FORTRAN code for implementing the Harmonic Balance Algorithm is reported. The FORTRAN code is used to identify limit cycles in multiple path and nonlinearity systems while retaining the effects of several harmonic components.

  4. Energy efficient rateless codes for high speed data transfer over free space optical channels

    NASA Astrophysics Data System (ADS)

    Prakash, Geetha; Kulkarni, Muralidhar; Acharya, U. S.

    2015-03-01

    Terrestrial Free Space Optical (FSO) links transmit information by using the atmosphere (free space) as a medium. In this paper, we have investigated the use of Luby Transform (LT) codes as a means to mitigate the effects of data corruption induced by an imperfect channel, which usually takes the form of lost or corrupted packets. LT codes, which are a class of Fountain codes, can be used independently of the channel rate, and as many code words as required can be generated to recover all the message bits irrespective of the channel performance. Achieving error-free high data rates with limited energy resources is possible with FSO systems if error correction codes with minimal power overheads can be used. We also employ a combination of Binary Phase Shift Keying (BPSK) with provision for threshold modification, and optimized LT codes decoded by belief propagation. These techniques provide additional protection even under strong turbulence regimes. Automatic Repeat Request (ARQ) is another method of improving link reliability. Performance of ARQ is limited by the number of retransmissions and the corresponding time delay. We show through theoretical computations and simulations that LT codes consume less energy per bit. We validate the feasibility of using energy-efficient LT codes instead of ARQ for FSO links to be used in optical wireless sensor networks within eye safety limits.

  5. Addressing the limits to adaptation across four damage--response systems

    EPA Science Inventory

    Our ability to adapt to climate change is not boundless, and previous modeling shows that capacity limited adaptation will play a policy-significant role in future decisions about climate change. These limits are delineated by capacity thresholds, after which climate damages beg...

  6. Maximum likelihood decoding analysis of accumulate-repeat-accumulate codes

    NASA Technical Reports Server (NTRS)

    Abbasfar, A.; Divsalar, D.; Yao, K.

    2004-01-01

    In this paper, the performance of accumulate-repeat-accumulate codes with maximum likelihood (ML) decoding is analyzed and compared to random codes using very tight bounds. Some simple codes are shown to perform very close to the Shannon limit with maximum likelihood decoding.
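
    The Shannon limit against which such codes are measured can be computed directly: for code rate R on the unconstrained-input AWGN channel, R ≤ ½·log₂(1 + 2R·Eb/N0) gives Eb/N0 ≥ (2^(2R) − 1)/(2R), which tends to ln 2 ≈ −1.59 dB as R → 0. A small sketch:

```python
import math

def shannon_ebno_limit_db(rate):
    """Minimum Eb/N0 (dB) for reliable rate-R transmission over the real
    AWGN channel with unconstrained input: Eb/N0 >= (2**(2R) - 1) / (2R)."""
    linear = (2 ** (2 * rate) - 1) / (2 * rate)
    return 10 * math.log10(linear)

if __name__ == "__main__":
    for r in (0.01, 1 / 3, 1 / 2, 3 / 4, 0.9):
        print(f"rate {r:.2f}: Eb/N0 limit = {shannon_ebno_limit_db(r):6.2f} dB")
```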

  7. Limits to sustained energy intake. XIII. Recent progress and future perspectives.

    PubMed

    Speakman, John R; Król, Elżbieta

    2011-01-15

    Several theories have been proposed to explain limits on the maximum rate at which animals can ingest and expend energy. These limits are likely to be intrinsic to the animal, and potentially include the capacity of the alimentary tract to assimilate energy (the 'central limitation' hypothesis). Experimental evidence from lactating mice exposed to different ambient temperatures allows us to reject this and similar ideas. Two alternative ideas have been proposed. The 'peripheral limitation' hypothesis suggests that the maximal sustained energy intake reflects the summed demands of individual tissues, which have their own intrinsic limitations on capacity. In contrast, the 'heat dissipation limit' (HDL) theory suggests that animals are constrained by the maximal capacity to dissipate body heat. Abundant evidence in domesticated livestock supports the HDL theory, but data from smaller mammals are less conclusive. Here, we develop a novel framework showing how the HDL and peripheral limitations are likely to be important in all animals, but to different extents. The HDL theory makes a number of predictions, in particular that there is no fixed limit on sustained energy expenditure as a multiple of basal metabolic rate, but rather that the maximum sustained scope is positively correlated with the capacity to dissipate heat.

  8. Flexible cognitive resources: competitive content maps for attention and memory

    PubMed Central

    Franconeri, Steven L.; Alvarez, George A.; Cavanagh, Patrick

    2013-01-01

    The brain has finite processing resources so that, as tasks become harder, performance degrades. Where do the limits on these resources come from? We focus on a variety of capacity-limited buffers related to attention, recognition, and memory that we claim have a two-dimensional ‘map’ architecture, where individual items compete for cortical real estate. This competitive format leads to capacity limits that are flexible, set by the nature of the content and their locations within an anatomically delimited space. We contrast this format with the standard ‘slot’ architecture and its fixed capacity. Using visual spatial attention and visual short-term memory as case studies, we suggest that competitive maps are a concrete and plausible architecture that limits cognitive capacity across many domains. PMID:23428935

  9. Prevalence of Prescription Opioid Misuse/Abuse as Determined by International Classification of Diseases Codes: A Systematic Review.

    PubMed

    Roland, Carl L; Lake, Joanita; Oderda, Gary M

    2016-12-01

    We conducted a systematic review to evaluate the worldwide English-language literature published from 2009 to 2014 on the prevalence of opioid misuse/abuse in retrospective databases where International Classification of Diseases (ICD) codes were used. Inclusion criteria for the studies were: use of a retrospective database; measurement of abuse, dependence, and/or poisoning using ICD codes; a stated or derivable prevalence; and a documented time frame. A meta-analysis was not performed. A qualitative narrative synthesis was used, and 16 studies were included for data abstraction. ICD code use varies; 10 studies used ICD codes that encompassed all three terms: abuse, dependence, or poisoning. Eight studies limited determination of misuse/abuse to an opioid user population. Abuse prevalence among opioid users in commercial databases using all three terms of ICD codes varied depending on the opioid: 21 per 1000 persons (reformulated extended-release oxymorphone; 2011-2012) to 113 per 1000 persons (immediate-release opioids; 2010-2011). Abuse prevalence in general populations using all three ICD code terms ranged from 1.15 per 1000 persons (commercial; 6 months 2010) to 8.7 per 1000 persons (Medicaid; 2002-2003). Prevalence increased over time. When similar ICD codes are used, the highest prevalence is in US government-insured populations. Limiting the population to continuous opioid users increases prevalence. Prevalence varies depending on the ICD codes used, population, time frame, and years studied. Researchers using ICD codes to determine opioid abuse prevalence need to be aware of these cautions and limitations.
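
    A sketch of how such a prevalence figure is typically derived from a claims table. The patient rows below are invented for illustration; the ICD-9-CM prefixes follow the review's three-term definition (304.0 opioid dependence, 305.5 opioid abuse, 965.0 opioid poisoning):

```python
# Hypothetical claims rows: (patient_id, icd9_code).
OPIOID_MISUSE_PREFIXES = ("304.0", "305.5", "965.0")

def misuse_prevalence_per_1000(claims, population_size):
    """Prevalence per 1000 persons of any misuse/dependence/poisoning code,
    counting each patient at most once."""
    flagged = {pid for pid, code in claims
               if code.startswith(OPIOID_MISUSE_PREFIXES)}
    return 1000 * len(flagged) / population_size

claims = [
    (1, "304.01"), (1, "250.00"),   # dependence plus an unrelated code
    (2, "965.09"),                  # opioid poisoning
    (3, "401.9"),                   # hypertension only: not flagged
]
print(misuse_prevalence_per_1000(claims, 1000))  # → 2.0
```

Restricting `population_size` (and `claims`) to continuous opioid users rather than the whole enrolled population raises the resulting prevalence, as the review notes.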

  10. An Integrated Magnetic Circuit Model and Finite Element Model Approach to Magnetic Bearing Design

    NASA Technical Reports Server (NTRS)

    Provenza, Andrew J.; Kenny, Andrew; Palazzolo, Alan B.

    2003-01-01

    A code for designing magnetic bearings is described. The code generates curves from magnetic circuit equations relating important bearing performance parameters. Bearing parameters selected from the curves by a designer to meet the requirements of a particular application are input directly by the code into a three-dimensional finite element analysis preprocessor. This means that a three-dimensional computer model of the bearing being developed is immediately available for viewing. The finite element model solution can be used to show areas of magnetic saturation and make more accurate predictions of the bearing load capacity, current stiffness, position stiffness, and inductance than the magnetic circuit equations did at the start of the design process. In summary, the code combines one-dimensional and three-dimensional modeling methods for designing magnetic bearings.
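
    A minimal example of the kind of magnetic circuit equations such a design code starts from (illustrative numbers, not those of the report; iron reluctance, leakage, and fringing are neglected, which is precisely what the finite element stage then corrects):

```python
import math

MU0 = 4 * math.pi * 1e-7  # permeability of free space (H/m)

def gap_flux_density(turns, current, gap):
    """1-D magnetic circuit for a two-gap horseshoe actuator with
    infinitely permeable iron: B = mu0 * N * I / (2 * g)."""
    return MU0 * turns * current / (2 * gap)

def pole_force(b, pole_area):
    """Attractive force over two pole faces of area A each:
    F = 2 * (B^2 * A / (2 * mu0)) = B^2 * A / mu0."""
    return b ** 2 * pole_area / MU0

# Illustrative operating point: 200 turns, 3 A, 0.5 mm gap, 3 cm^2 per pole.
B = gap_flux_density(200, 3.0, 0.5e-3)
F = pole_force(B, 3e-4)
print(f"B = {B:.3f} T, load capacity ~ {F:.1f} N")
```

Curves of load capacity versus gap, current, and turns generated from such relations are what a designer would use to pick parameters before handing the geometry to the finite element preprocessor.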

  11. Colour-barcoded magnetic microparticles for multiplexed bioassays.

    PubMed

    Lee, Howon; Kim, Junhoi; Kim, Hyoki; Kim, Jiyun; Kwon, Sunghoon

    2010-09-01

    Encoded particles have a demonstrated value for multiplexed high-throughput bioassays such as drug discovery and clinical diagnostics. In diverse samples, the ability to use a large number of distinct identification codes on assay particles is important to increase throughput. Proper handling schemes are also needed to readout these codes on free-floating probe microparticles. Here we create vivid, free-floating structural coloured particles with multi-axis rotational control using a colour-tunable magnetic material and a new printing method. Our colour-barcoded magnetic microparticles offer a coding capacity easily into the billions with distinct magnetic handling capabilities including active positioning for code readouts and active stirring for improved reaction kinetics in microscale environments. A DNA hybridization assay is done using the colour-barcoded magnetic microparticles to demonstrate multiplexing capabilities.

  12. A Spherical Active Coded Aperture for 4π Gamma-ray Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hellfeld, Daniel; Barton, Paul; Gunter, Donald

    Gamma-ray imaging facilitates the efficient detection, characterization, and localization of compact radioactive sources in cluttered environments. Fieldable detector systems employing active planar coded apertures have demonstrated broad energy sensitivity via both coded aperture and Compton imaging modalities. However, planar configurations suffer from a limited field-of-view, especially in the coded aperture mode. In order to improve upon this limitation, we introduce a novel design by rearranging the detectors into an active coded spherical configuration, resulting in a 4π isotropic field-of-view for both coded aperture and Compton imaging. This work focuses on the low-energy coded aperture modality and the optimization techniques used to determine the optimal number and configuration of 1 cm³ CdZnTe coplanar grid detectors on a 14 cm diameter sphere with 192 available detector locations.

  13. Adaptive beamforming in a CDMA mobile satellite communications system

    NASA Technical Reports Server (NTRS)

    Munoz-Garcia, Samuel G.

    1993-01-01

    Code-Division Multiple-Access (CDMA) stands out as a strong contender for the choice of multiple access scheme in future mobile communication systems. This is due to a variety of reasons, such as its excellent performance in multipath environments, high scope for frequency reuse, and graceful degradation near saturation. However, the capacity of CDMA is limited by the self-interference between the transmissions of the different users in the network. Moreover, the disparity between the received power levels gives rise to the near-far problem; that is, weak signals are severely degraded by the transmissions from other users. In this paper, the use of time-reference adaptive digital beamforming on board the satellite is proposed as a means to overcome the problems associated with CDMA. This technique enables a high number of independently steered beams to be generated from a single phased array antenna, which automatically track the desired user signal and null the unwanted interference sources. Since CDMA is interference limited, the interference protection provided by the antenna converts directly and linearly into an increase in capacity. Furthermore, the proposed concept allows the near-far effect to be mitigated without requiring tight coordination of the users in terms of power control. A payload architecture is presented that illustrates the practical implementation of this concept. This digital payload architecture shows that, with the advent of high-performance CMOS digital processing, the on-board implementation of complex DSP techniques (in particular, digital beamforming) has become possible and is most attractive for mobile satellite communications.
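
    The time-reference adaptation idea can be sketched with a standard LMS loop: the weights adapt so that the array output tracks a known training sequence, which automatically steers gain toward the desired user and a null toward interferers. A toy two-user, four-element model (not the paper's payload design):

```python
import cmath
import math
import random

def steering_vector(n_elem, theta, spacing=0.5):
    """Uniform linear array response at angle theta (rad); spacing in wavelengths."""
    return [cmath.exp(2j * math.pi * spacing * k * math.sin(theta))
            for k in range(n_elem)]

def lms_beamformer(snapshots, reference, mu=0.02):
    """Time-reference LMS: adapt w so that y = w^H x tracks the known
    reference sequence, via w <- w + mu * conj(e) * x."""
    w = [0j] * len(snapshots[0])
    for x, d in zip(snapshots, reference):
        y = sum(wi.conjugate() * xi for wi, xi in zip(w, x))
        e = d - y
        w = [wi + mu * e.conjugate() * xi for wi, xi in zip(w, x)]
    return w

def gain(w, a):
    """Magnitude of the array response w^H a toward steering vector a."""
    return abs(sum(wi.conjugate() * ai for wi, ai in zip(w, a)))

random.seed(0)
n, steps = 4, 3000
a_sig = steering_vector(n, 0.0)                 # desired user at broadside
a_int = steering_vector(n, math.radians(40))    # interfering user at 40 deg

ref, snaps = [], []
for _ in range(steps):
    s = random.choice([-1, 1])   # known training chips of the desired user
    i = random.choice([-1, 1])   # chips of the interfering user
    snaps.append([s * a_sig[k] + i * a_int[k] for k in range(n)])
    ref.append(s)

w = lms_beamformer(snaps, ref)
print(f"gain toward user: {gain(w, a_sig):.2f}, "
      f"toward interferer: {gain(w, a_int):.2f}")
```

After convergence the response toward the desired user approaches unity while the interferer is nulled, which is the mechanism by which interference suppression converts into CDMA capacity.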

  14. Adaptive beamforming in a CDMA mobile satellite communications system

    NASA Astrophysics Data System (ADS)

    Munoz-Garcia, Samuel G.

    Code-Division Multiple-Access (CDMA) stands out as a strong contender for the choice of multiple access scheme in future mobile communication systems. This is due to a variety of reasons, such as its excellent performance in multipath environments, high scope for frequency reuse, and graceful degradation near saturation. However, the capacity of CDMA is limited by the self-interference between the transmissions of the different users in the network. Moreover, the disparity between the received power levels gives rise to the near-far problem; that is, weak signals are severely degraded by the transmissions from other users. In this paper, the use of time-reference adaptive digital beamforming on board the satellite is proposed as a means to overcome the problems associated with CDMA. This technique enables a high number of independently steered beams to be generated from a single phased array antenna, which automatically track the desired user signal and null the unwanted interference sources. Since CDMA is interference limited, the interference protection provided by the antenna converts directly and linearly into an increase in capacity. Furthermore, the proposed concept allows the near-far effect to be mitigated without requiring tight coordination of the users in terms of power control. A payload architecture is presented that illustrates the practical implementation of this concept. This digital payload architecture shows that, with the advent of high-performance CMOS digital processing, the on-board implementation of complex DSP techniques (in particular, digital beamforming) has become possible and is most attractive for mobile satellite communications.

  15. The impact of rural hospital closures on equity of commuting time for haemodialysis patients: simulation analysis using the capacity-distance model.

    PubMed

    Matsumoto, Masatoshi; Ogawa, Takahiko; Kashima, Saori; Takeuchi, Keisuke

    2012-07-23

    Frequent and long-term commuting is a requirement for dialysis patients. Accessibility thus affects their quality of life. In this paper, a new model for accessibility measurement is proposed in which both geographic distance and facility capacity are taken into account. Simulations of the closure of rural facilities and of capacity transfer between urban and rural facilities are conducted to evaluate the impacts of these phenomena on equity of accessibility among dialysis patients. Postcode information as of August 2011 for all 7,374 patients certified by municipalities of Hiroshima prefecture as having first or third grade renal disability was collected. Information on the postcode and the maximum number of outpatients (capacity) of all 98 dialysis facilities was also collected. Using geographic information systems, patient commuting times were calculated in two models: one that takes into account road distance (distance model), and the other that takes into account both road distance and facility capacity (capacity-distance model). Simulations of closures of rural and urban facilities were then conducted. The median commuting time among rural patients was more than twice as long as that among urban patients (15 versus 7 minutes, p<0.001). In the capacity-distance model, 36.1% of patients commuted to facilities different from those in the distance model, creating a substantial gap in commuting times between the two models. In the simulation, when five rural public facilities were closed, the Gini coefficient of commuting times among the patients increased by 16%, indicating a substantial worsening of equity, and the number of patients with commuting times longer than 90 minutes increased 72-fold. In contrast, closure of four urban public facilities with similar capacities did not affect these values. 
Closures of dialysis facilities in rural areas have a substantially larger impact on equity of commuting times among dialysis patients than closures of urban facilities. The accessibility simulations using the capacity-distance model will provide an analytic framework upon which rational resource distribution policies might be planned.
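
    The equity metric used here, the Gini coefficient, is straightforward to compute from a list of commuting times. A sketch with toy numbers (not the study's data), where closing a rural unit pushes its patients onto distant facilities and worsens equity:

```python
def gini(values):
    """Gini coefficient: half the mean absolute difference divided by the
    mean, computed via the O(n log n) sorted-values identity."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    weighted = sum((2 * (i + 1) - n - 1) * x for i, x in enumerate(xs))
    return weighted / (n * total)

# Toy commuting times in minutes, before and after a rural closure.
before = [7, 7, 8, 9, 15, 15, 16, 18]
after = [7, 7, 8, 9, 35, 40, 45, 50]
print(f"Gini before: {gini(before):.3f}, after: {gini(after):.3f}")
```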

  16. The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information.

    ERIC Educational Resources Information Center

    Miller, George A.

    1994-01-01

    Capacity limitations in absolute judgment tasks are discussed in relation to information theory. Information theory can provide a quantitative way of resolving questions about limitations on the amount of information we can receive and the process of recoding. (SLD)

  17. The next-generation ESL continuum gyrokinetic edge code

    NASA Astrophysics Data System (ADS)

    Cohen, R.; Dorr, M.; Hittinger, J.; Rognlien, T.; Colella, P.; Martin, D.

    2009-05-01

    The Edge Simulation Laboratory (ESL) project is developing continuum-based approaches to kinetic simulation of edge plasmas. A new code is being developed, based on a conservative formulation and fourth-order discretization of full-f gyrokinetic equations in parallel-velocity, magnetic-moment coordinates. The code exploits mapped multiblock grids to deal with the geometric complexities of the edge region, and utilizes a new flux limiter [P. Colella and M.D. Sekora, JCP 227, 7069 (2008)] to suppress unphysical oscillations about discontinuities while maintaining high-order accuracy elsewhere. The code is just becoming operational; we will report initial tests for neoclassical orbit calculations in closed-flux surface and limiter (closed plus open flux surfaces) geometry. It is anticipated that the algorithmic refinements in the new code will address the slow numerical instability that was observed in some long simulations with the existing TEMPEST code. We will also discuss the status and plans for physics enhancements to the new code.

  18. Limited capacity for contour curvature in iconic memory.

    PubMed

    Sakai, Koji

    2006-06-01

    We measured the difference threshold for contour curvature in iconic memory by using the cued discrimination method. The study stimulus, consisting of 2 to 6 curved contours, was briefly presented in the fovea, followed by two lines as cues. Subjects discriminated the curvature of the two cued curves. The cue delays were 0 msec and 300 msec in Exps. 1 and 2, respectively, and 50 msec before the study offset in Exp. 3. Analysis of data from Exps. 1 and 2 showed that the Weber fraction rose monotonically with the increase in set size. Clear set-size effects indicate that iconic memory has a limited capacity. Moreover, the clear set-size effect in Exp. 3 indicates that perception itself has a limited capacity. Larger set-size effects in Exp. 1 than in Exp. 3 suggest that iconic memory after the perceptual process has a limited capacity. These properties of iconic memory at the threshold level contradict the traditional view that iconic memory has a high capacity at both the suprathreshold and categorical levels.

  19. Visual Working Memory Capacity: From Psychophysics and Neurobiology to Individual Differences

    PubMed Central

    Luck, Steven J.; Vogel, Edward K.

    2013-01-01

    Visual working memory capacity is of great interest because it is strongly correlated with overall cognitive ability, can be understood at the level of neural circuits, and is easily measured. Recent studies have shown that capacity influences tasks ranging from saccade targeting to analogical reasoning. A debate has arisen over whether capacity is constrained by a limited number of discrete representations or by an infinitely divisible resource, but the empirical evidence and neural network models currently favor a discrete item limit. Capacity differs markedly across individuals and groups, and recent research indicates that some of these differences reflect true differences in storage capacity whereas others reflect variations in the ability to use memory capacity efficiently. PMID:23850263

  20. Foundational numerical capacities and the origins of dyscalculia.

    PubMed

    Butterworth, Brian

    2010-12-01

    One important cause of very low attainment in arithmetic (dyscalculia) seems to be a core deficit in an inherited foundational capacity for numbers. According to one set of hypotheses, arithmetic ability is built on an inherited system responsible for representing approximate numerosity. One account holds that this is supported by a system for representing exactly a small number (less than or equal to four) of individual objects. In these approaches, the core deficit in dyscalculia lies in either of these systems. An alternative proposal holds that the deficit lies in an inherited system for sets of objects and operations on them (numerosity coding) on which arithmetic is built. I argue that a deficit in numerosity coding, not in the approximate number system or the small number system, is responsible for dyscalculia. Nevertheless, critical tests should involve both longitudinal studies and intervention, and these have yet to be carried out. Copyright © 2010 Elsevier Ltd. All rights reserved.

  1. A novel data hiding scheme for block truncation coding compressed images using dynamic programming strategy

    NASA Astrophysics Data System (ADS)

    Chang, Ching-Chun; Liu, Yanjun; Nguyen, Son T.

    2015-03-01

    Data hiding is a technique that embeds information into digital cover data. Work on this technique has concentrated on the uncompressed spatial domain; embedding in the compressed domain, e.g., vector quantization, JPEG, and block truncation coding (BTC), is considered more challenging. In this paper, we propose a new data hiding scheme for BTC-compressed images. In the proposed scheme, a dynamic programming strategy is used to search for the optimal bijective mapping function for LSB substitution. According to this optimal solution, each mean value embeds three secret bits, yielding high hiding capacity with low distortion. The experimental results indicated that the proposed scheme obtained both higher hiding capacity and higher hiding efficiency than four existing schemes, while ensuring good visual quality of the stego-image. In addition, the proposed scheme achieved a bit rate as low as that of the original BTC algorithm.
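
    The scheme builds on standard BTC, which replaces each pixel block by a bitmap and two quantization levels. As a rough illustration only (naive LSB substitution into a level, not the paper's DP-optimized bijective mapping, and with made-up pixel values):

```python
def btc_compress(block):
    """Classic block truncation coding: keep a bitmap plus two quantization levels."""
    flat = [p for row in block for p in row]
    m = sum(flat) / len(flat)
    bitmap = [[p >= m for p in row] for row in block]
    hi_px = [p for p in flat if p >= m]
    lo_px = [p for p in flat if p < m]
    hi = round(sum(hi_px) / len(hi_px)) if hi_px else 0
    lo = round(sum(lo_px) / len(lo_px)) if lo_px else 0
    return bitmap, hi, lo

def embed_lsb(level, bits):
    """Replace the len(bits) lowest bits of a quantization level with secret bits."""
    k = len(bits)
    payload = int("".join(map(str, bits)), 2)
    return (level >> k << k) | payload

# Hypothetical 4x4 pixel block, not from the paper.
block = [[10, 12, 200, 202],
         [11, 13, 201, 203],
         [10, 12, 200, 202],
         [11, 13, 201, 203]]
bitmap, hi, lo = btc_compress(block)
print(hi, lo, embed_lsb(hi, [1, 0, 1]))  # three secret bits per mean value
```

    Embedding three bits perturbs a level by at most 7 gray values, which is why hiding in the means can keep distortion low.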

  2. Brief surgical procedure code lists for outcomes measurement and quality improvement in resource-limited settings.

    PubMed

    Liu, Charles; Kayima, Peter; Riesel, Johanna; Situma, Martin; Chang, David; Firth, Paul

    2017-11-01

    The lack of a classification system for surgical procedures in resource-limited settings hinders outcomes measurement and reporting. Existing procedure coding systems are prohibitively large and expensive to implement. We describe the creation and prospective validation of 3 brief procedure code lists applicable in low-resource settings, based on analysis of surgical procedures performed at Mbarara Regional Referral Hospital, Uganda's second largest public hospital. We reviewed operating room logbooks to identify all surgical operations performed at Mbarara Regional Referral Hospital during 2014. Based on the documented indication for surgery and procedure(s) performed, we assigned each operation up to 4 procedure codes from the International Classification of Diseases, 9th Revision, Clinical Modification. Coding of procedures was performed by 2 investigators, and a random 20% of procedures were coded by both investigators. These codes were aggregated to generate procedure code lists. During 2014, 6,464 surgical procedures were performed at Mbarara Regional Referral Hospital, to which we assigned 435 unique procedure codes. Substantial inter-rater reliability was achieved (κ = 0.7037). The 111 most common procedure codes accounted for 90% of all codes assigned, 180 accounted for 95%, and 278 accounted for 98%. We considered these sets of codes as 3 procedure code lists. In a prospective validation, we found that these lists described 83.2%, 89.2%, and 92.6% of surgical procedures performed at Mbarara Regional Referral Hospital during August to September of 2015, respectively. Empirically generated brief procedure code lists based on International Classification of Diseases, 9th Revision, Clinical Modification can be used to classify almost all surgical procedures performed at a Ugandan referral hospital. Such a standardized procedure coding system may enable better surgical data collection for administration, research, and quality improvement in resource-limited settings. Copyright © 2017 Elsevier Inc. All rights reserved.
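
    The list construction described above, ranking codes by frequency and cutting at cumulative coverage targets, can be sketched as follows; the code tallies here are hypothetical toy data, not the Mbarara logbook counts:

```python
from collections import Counter

def codes_for_coverage(code_counts, targets=(0.90, 0.95, 0.98)):
    """Return how many of the most frequent codes are needed to reach each
    cumulative coverage target (the construction behind the brief code lists)."""
    total = sum(code_counts.values())
    ranked = sorted(code_counts.values(), reverse=True)
    results = {}
    covered = 0
    remaining = list(targets)
    for n, count in enumerate(ranked, start=1):
        covered += count
        while remaining and covered / total >= remaining[0]:
            results[remaining.pop(0)] = n
    return results

# Hypothetical ICD-9-CM tallies; the real data set had 435 unique codes.
toy = Counter({"74.1": 50, "47.0": 30, "79.35": 10, "86.22": 6, "54.11": 3, "77.6": 1})
print(codes_for_coverage(toy))
```

    Applied to the real 2014 tallies, the same cut-off logic would reproduce the 111/180/278 list sizes reported above.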

  3. Color-Coded Batteries - Electro-Photonic Inverse Opal Materials for Enhanced Electrochemical Energy Storage and Optically Encoded Diagnostics.

    PubMed

    O'Dwyer, Colm

    2016-07-01

    For consumer electronic devices, long-life, stable, and reasonably fast charging Li-ion batteries with good stable capacities are a necessity. For exciting and important advances in the materials that drive innovations in electrochemical energy storage (EES), modular thin-film solar cells, and wearable, flexible technology of the future, real-time analysis and indication of battery performance and health is crucial. Here, developments in color-coded assessment of battery material performance and diagnostics are described, and a vision for using electro-photonic inverse opal materials and all-optical probes to assess, characterize, and monitor the processes non-destructively in real time are outlined. By structuring any cathode or anode material in the form of a photonic crystal or as a 3D macroporous inverse opal, color-coded "chameleon" battery-strip electrodes may provide an amenable way to distinguish the type of process, the voltage, material and chemical phase changes, remaining capacity, cycle health, and state of charge or discharge of either existing or new materials in Li-ion or emerging alternative battery types, simply by monitoring its color change. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Position-based coding and convex splitting for private communication over quantum channels

    NASA Astrophysics Data System (ADS)

    Wilde, Mark M.

    2017-10-01

    The classical-input quantum-output (cq) wiretap channel is a communication model involving a classical sender X, a legitimate quantum receiver B, and a quantum eavesdropper E. The goal of a private communication protocol that uses such a channel is for the sender X to transmit a message in such a way that the legitimate receiver B can decode it reliably, while the eavesdropper E learns essentially nothing about which message was transmitted. The ɛ -one-shot private capacity of a cq wiretap channel is equal to the maximum number of bits that can be transmitted over the channel, such that the privacy error is no larger than ɛ \\in (0,1). The present paper provides a lower bound on the ɛ -one-shot private classical capacity, by exploiting the recently developed techniques of Anshu, Devabathini, Jain, and Warsi, called position-based coding and convex splitting. The lower bound is equal to a difference of the hypothesis testing mutual information between X and B and the "alternate" smooth max-information between X and E. The one-shot lower bound then leads to a non-trivial lower bound on the second-order coding rate for private classical communication over a memoryless cq wiretap channel.
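
    The lower bound can be rendered schematically in this notation; the smoothing parameters and exact correction terms are paraphrased from the abstract, not reproduced from the paper:

```latex
% Schematic one-shot lower bound on private classical communication:
% hypothesis-testing mutual information to Bob minus the "alternate"
% smooth max-information leaked to Eve (error/correction terms elided).
\log_2 M^{*}(\varepsilon) \;\gtrsim\;
  I_{H}^{\varepsilon_{1}}(X;B)_{\rho}
  \;-\;
  \widetilde{I}_{\max}^{\varepsilon_{2}}(X;E)_{\rho}
```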

  5. 76 FR 38160 - Pesticide Products; Registration Applications

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-29

    .... Potentially affected entities may include, but are not limited to: Crop production (NAICS code 111). Animal production (NAICS code 112). Food manufacturing (NAICS code 311). Pesticide manufacturing (NAICS code 32532... classification/Use: For control of certain diseases in almond, grape (small fruit vine climbing group, except...

  6. Factors Affecting Nickel-oxide Electrode Capacity in Nickel-hydrogen Cells

    NASA Technical Reports Server (NTRS)

    Ritterman, P. F.

    1984-01-01

    The nickel-oxide electrode, common to both nickel-hydrogen and nickel-cadmium cells, is by design the limiting (capacity-determining) electrode on both charge and discharge. The usable discharge capacity of this electrode, and hence of the cell, can be optimized through charge rate, charge temperature, and additives to the electrode and electrolyte. Recent tests with nickel-hydrogen cells, and tests performed almost 25 years ago with nickel-cadmium cells, indicate improved capacity with increased electrolyte concentration.

  7. Physical-layer security analysis of a quantum-noise randomized cipher based on the wire-tap channel model.

    PubMed

    Jiao, Haisong; Pu, Tao; Zheng, Jilin; Xiang, Peng; Fang, Tao

    2017-05-15

    The physical-layer security of a quantum-noise randomized cipher (QNRC) system is, for the first time, quantitatively evaluated with secrecy capacity employed as the performance metric. Considering quantum noise as a channel advantage for legitimate parties over eavesdroppers, the specific wire-tap models for both channels of the key and data are built with channel outputs yielded by quantum heterodyne measurement; the general expressions of secrecy capacities for both channels are derived, where the matching codes are proved to be uniformly distributed. The maximal achievable secrecy rate of the system is proposed, under which secrecy of both the key and data is guaranteed. The influences of various system parameters on secrecy capacities are assessed in detail. The results indicate that QNRC combined with proper channel codes is a promising framework of secure communication for long distance with high speed, which can be orders of magnitude higher than the perfect secrecy rates of other encryption systems. Even if the eavesdropper intercepts more signal power than the legitimate receiver, secure communication (up to Gb/s) can still be achievable. Moreover, the secrecy of running key is found to be the main constraint to the systemic maximal secrecy rate.
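
    The wire-tap principle underlying this analysis can be illustrated with the classical Gaussian case: secrecy capacity is the gap between the legitimate channel's capacity and the eavesdropper's. This is only a generic sketch with illustrative SNRs, not the paper's quantum-noise derivation:

```python
import math

def gaussian_secrecy_capacity(snr_main, snr_eve):
    """Generic degraded-wiretap secrecy capacity, in bits per channel use:
    the legitimate channel's Shannon capacity minus the eavesdropper's,
    clipped at zero when the eavesdropper's channel is better."""
    return max(0.0, math.log2(1 + snr_main) - math.log2(1 + snr_eve))

# Quantum measurement noise degrades Eve's effective SNR even when she
# intercepts more signal power (illustrative numbers, not from the paper).
print(gaussian_secrecy_capacity(snr_main=100.0, snr_eve=3.0))
```

    In the QNRC setting the quantum-noise masking plays the role of the channel advantage, so a positive gap (and hence a positive secrecy rate) can survive even a strong intercept.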

  8. Modulation and coding for throughput-efficient optical free-space links

    NASA Technical Reports Server (NTRS)

    Georghiades, Costas N.

    1993-01-01

    Optical direct-detection systems are currently being considered for some high-speed inter-satellite links, where data rates of a few hundred megabits per second are envisioned under power and pulsewidth constraints. In this paper we investigate the capacity, cutoff-rate, and error-probability performance of uncoded and trellis-coded systems for various modulation schemes and under various throughput and power constraints. Modulation schemes considered are on-off keying (OOK), pulse-position modulation (PPM), overlapping PPM (OPPM), and multi-pulse (combinatorial) PPM (MPPM).
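
    For plain M-ary PPM the throughput/power trade-off follows directly from the format: one pulse placed in one of M slots carries log2(M) bits, so bandwidth efficiency falls as photon efficiency rises. A minimal sketch (OPPM and MPPM, which improve on this trade-off, are not shown):

```python
import math

def ppm_throughput(M):
    """M-ary pulse-position modulation: log2(M) bits carried by a single
    pulse in one of M slots, so throughput is log2(M)/M bits per slot."""
    return math.log2(M) / M

# slots per symbol, bits per slot, bits per pulse
for M in (2, 4, 16, 256):
    print(M, ppm_throughput(M), math.log2(M))
```

    The table this prints shows why large M suits power-limited links: bits per pulse grow while bits per slot (throughput) shrink, the tension the coded schemes above are designed around.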

  9. Superdense Coding over Optical Fiber Links with Complete Bell-State Measurements

    NASA Astrophysics Data System (ADS)

    Williams, Brian P.; Sadlier, Ronald J.; Humble, Travis S.

    2017-02-01

    Adapting quantum communication to modern networking requires transmitting quantum information through a fiber-based infrastructure. We report the first demonstration of superdense coding over optical fiber links, taking advantage of a complete Bell-state measurement enabled by time-polarization hyperentanglement, linear optics, and common single-photon detectors. We demonstrate the highest single-qubit channel capacity to date utilizing linear optics, 1.665 ± 0.018, and we provide a full experimental implementation of a hybrid, quantum-classical communication protocol for image transfer.
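
    A reported channel capacity such as 1.665 bits per qubit can be read as the mutual information of the four-message superdense-coding channel under measurement errors; ideal Bell-state discrimination gives exactly 2 bits. A hedged sketch with an assumed symmetric confusion matrix (not the experiment's actual statistics):

```python
import math

def mutual_information(confusion):
    """I(X;Y) in bits for uniformly distributed inputs, given rows P(y|x)."""
    n = len(confusion)
    px = 1.0 / n
    py = [sum(confusion[x][y] * px for x in range(n)) for y in range(n)]
    mi = 0.0
    for x in range(n):
        for y in range(n):
            p = confusion[x][y]
            if p > 0:
                mi += px * p * math.log2(p / py[y])
    return mi

perfect = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
print(mutual_information(perfect))  # the 2-bit superdense-coding ideal

# Illustrative 94%-correct discrimination, errors spread evenly.
noisy = [[0.94 if i == j else 0.02 for j in range(4)] for i in range(4)]
print(round(mutual_information(noisy), 3))  # ~1.58 bits
```

    Real Bell-state analyzers have structured (not symmetric) error patterns, so the measured 1.665 does not correspond to any single error rate in this toy model.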

  10. Physical-Layer Network Coding for VPN in TDM-PON

    NASA Astrophysics Data System (ADS)

    Wang, Qike; Tse, Kam-Hon; Chen, Lian-Kuan; Liew, Soung-Chang

    2012-12-01

    We experimentally demonstrate a novel optical physical-layer network coding (PNC) scheme over a time-division multiplexing (TDM) passive optical network (PON). Full-duplex error-free communication between optical network units (ONUs) at 2.5 Gb/s is shown for all-optical virtual private network (VPN) applications. Compared to the conventional half-duplex communication set-up, our scheme can increase the capacity by 100% with a power penalty of less than 3 dB. Synchronization of the two ONUs is not required for the proposed VPN scheme.
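
    The logical core of physical-layer network coding, the relay forwarding only the XOR of the two upstream packets so one broadcast replaces two, is what yields the 100% capacity gain. The demonstrated system performs this mixing optically; the byte-level sketch below only captures the coding logic, not the optics:

```python
def pnc_relay(pkt_a, pkt_b):
    """The relay forwards a single packet: the XOR of both uplink packets."""
    return bytes(x ^ y for x, y in zip(pkt_a, pkt_b))

a = b"ONU-A data"   # what ONU A sent (and therefore knows)
b_ = b"ONU-B data"  # what ONU B sent
coded = pnc_relay(a, b_)

# Each ONU XORs the broadcast with its own packet to recover the other's:
assert pnc_relay(coded, a) == b_
assert pnc_relay(coded, b_) == a
```

    Because each ONU already knows its own transmission, the single coded broadcast is fully decodable at both ends, halving the downstream traffic for the VPN exchange.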

  11. Hierarchical image coding with diamond-shaped sub-bands

    NASA Technical Reports Server (NTRS)

    Li, Xiaohui; Wang, Jie; Bauer, Peter; Sauer, Ken

    1992-01-01

    We present a sub-band image coding/decoding system using a diamond-shaped pyramid frequency decomposition that matches visual sensitivities more closely than conventional rectangular bands. The filter banks are composed of simple, low-order IIR components. The coder is specifically designed for multiple-resolution reconstruction, in situations such as variable-capacity channels or receivers where images must be reconstructed without the entire pyramid of sub-bands. We use a nonlinear interpolation technique for lost sub-bands to compensate for the loss of aliasing cancellation.

  12. Implementation of a Medication Reconciliation Assistive Technology: A Qualitative Analysis

    PubMed Central

    Wright, Theodore B.; Adams, Kathleen; Church, Victoria L.; Ferraro, Mimi; Ragland, Scott; Sayers, Anthony; Tallett, Stephanie; Lovejoy, Travis; Ash, Joan; Holahan, Patricia J.; Lesselroth, Blake J.

    2017-01-01

    Objective: To aid the implementation of a medication reconciliation process within a hybrid primary-specialty care setting by using qualitative techniques to describe the climate of implementation and provide guidance for future projects. Methods: Guided by McMullen et al.'s Rapid Assessment Process, we performed semi-structured interviews prior to and iteratively throughout the implementation. Interviews were coded and analyzed using grounded theory and cross-examined for validity. Results: We identified five barriers and five facilitators that impacted the implementation. Facilitators identified were process alignment with user values, and motivation and clinical champions fostered by the implementation team rather than the administration. Barriers included a perceived limited capacity for change, diverging priorities, and inconsistencies in process standards and role definitions. Discussion: A more complete, qualitative understanding of existing barriers and facilitators helps to guide critical decisions on the design and implementation of a successful medication reconciliation process. PMID:29854251

  13. Signal Processing for Metagenomics: Extracting Information from the Soup

    PubMed Central

    Rosen, Gail L.; Sokhansanj, Bahrad A.; Polikar, Robi; Bruns, Mary Ann; Russell, Jacob; Garbarine, Elaine; Essinger, Steve; Yok, Non

    2009-01-01

    Traditionally, studies in microbial genomics have focused on single-genomes from cultured species, thereby limiting their focus to the small percentage of species that can be cultured outside their natural environment. Fortunately, recent advances in high-throughput sequencing and computational analyses have ushered in the new field of metagenomics, which aims to decode the genomes of microbes from natural communities without the need for cultivation. Although metagenomic studies have shed a great deal of insight into bacterial diversity and coding capacity, several computational challenges remain due to the massive size and complexity of metagenomic sequence data. Current tools and techniques are reviewed in this paper which address challenges in 1) genomic fragment annotation, 2) phylogenetic reconstruction, 3) functional classification of samples, and 4) interpreting complementary metaproteomics and metametabolomics data. Also surveyed are important applications of metagenomic studies, including microbial forensics and the roles of microbial communities in shaping human health and soil ecology. PMID:20436876

  14. Picornavirus Modification of a Host mRNA Decay Protein

    PubMed Central

    Rozovics, Janet M.; Chase, Amanda J.; Cathcart, Andrea L.; Chou, Wayne; Gershon, Paul D.; Palusa, Saiprasad; Wilusz, Jeffrey; Semler, Bert L.

    2012-01-01

    ABSTRACT Due to the limited coding capacity of picornavirus genomic RNAs, host RNA binding proteins play essential roles during viral translation and RNA replication. Here we describe experiments suggesting that AUF1, a host RNA binding protein involved in mRNA decay, plays a role in the infectious cycle of picornaviruses such as poliovirus and human rhinovirus. We observed cleavage of AUF1 during poliovirus or human rhinovirus infection, as well as interaction of this protein with the 5′ noncoding regions of these viral genomes. Additionally, the picornavirus proteinase 3CD, encoded by poliovirus or human rhinovirus genomic RNAs, was shown to cleave all four isoforms of recombinant AUF1 at a specific N-terminal site in vitro. Finally, endogenous AUF1 was found to relocalize from the nucleus to the cytoplasm in poliovirus-infected HeLa cells to sites adjacent to (but distinct from) putative viral RNA replication complexes. PMID:23131833

  15. Estimation of the behavior factor of existing RC-MRF buildings

    NASA Astrophysics Data System (ADS)

    Vona, Marco; Mastroberti, Monica

    2018-01-01

    In recent years, several research groups have studied a new generation of analysis methods for seismic response assessment of existing buildings. Nevertheless, many important developments are still needed to define more reliable and effective assessment procedures. For existing buildings in particular, the low achievable knowledge level often means that linear elastic analysis is the only analysis method allowed. Codes such as NTC2008 and EC8 consider linear dynamic analysis with a behavior factor to be the reference method for evaluating seismic demand. This type of analysis is based on a linear-elastic structural model subjected to a design spectrum, obtained by reducing the elastic spectrum through a behavior factor. The behavior factor (reduction factor, or q factor in some codes) is used to reduce the elastic spectrum ordinates, or the forces obtained from a linear analysis, to account for nonlinear structural capacity. Behavior factors should be defined based on the parameters that influence nonlinear seismic capacity, such as mechanical material characteristics, structural system, irregularity, and design procedures. In practical applications, there is still an evident lack of detailed rules and accurate behavior factor values adequate for existing buildings. In this work, the seismic capacity of the main existing RC-MRF building types is investigated. To evaluate seismic force demand correctly, behavior factor values consistent with a force-based seismic safety assessment procedure are proposed and compared with the values reported in the Italian seismic code, NTC08.

  16. General phase spaces: from discrete variables to rotor and continuum limits

    NASA Astrophysics Data System (ADS)

    Albert, Victor V.; Pascazio, Saverio; Devoret, Michel H.

    2017-12-01

    We provide a basic introduction to discrete-variable, rotor, and continuous-variable quantum phase spaces, explaining how the latter two can be understood as limiting cases of the first. We extend the limit-taking procedures used to travel between phase spaces to a general class of Hamiltonians (including many local stabilizer codes) and provide six examples: the Harper equation, the Baxter parafermionic spin chain, the Rabi model, the Kitaev toric code, the Haah cubic code (which we generalize to qudits), and the Kitaev honeycomb model. We obtain continuous-variable generalizations of all models, some of which are novel. The Baxter model is mapped to a chain of coupled oscillators and the Rabi model to the optomechanical radiation pressure Hamiltonian. The procedures also yield rotor versions of all models, five of which are novel many-body extensions of the almost Mathieu equation. The toric and cubic codes are mapped to lattice models of rotors, with the toric code case related to U(1) lattice gauge theory.

  17. Attentional Demands Predict Short-Term Memory Load Response in Posterior Parietal Cortex

    ERIC Educational Resources Information Center

    Magen, Hagit; Emmanouil, Tatiana-Aloi; McMains, Stephanie A.; Kastner, Sabine; Treisman, Anne

    2009-01-01

    Limits to the capacity of visual short-term memory (VSTM) indicate a maximum storage of only 3 or 4 items. Recently, it has been suggested that activity in a specific part of the brain, the posterior parietal cortex (PPC), is correlated with behavioral estimates of VSTM capacity and might reflect a capacity-limited store. In three experiments that…

  18. Capacities of quantum amplifier channels

    NASA Astrophysics Data System (ADS)

    Qi, Haoyu; Wilde, Mark M.

    2017-01-01

    Quantum amplifier channels are at the core of several physical processes. Not only do they model the optical process of spontaneous parametric down-conversion, but the transformation corresponding to an amplifier channel also describes the physics of the dynamical Casimir effect in superconducting circuits, the Unruh effect, and Hawking radiation. Here we study the communication capabilities of quantum amplifier channels. Invoking a recently established minimum output-entropy theorem for single-mode phase-insensitive Gaussian channels, we determine capacities of quantum-limited amplifier channels in three different scenarios. First, we establish the capacities of quantum-limited amplifier channels for one of the most general communication tasks, characterized by the trade-off between classical communication, quantum communication, and entanglement generation or consumption. Second, we establish capacities of quantum-limited amplifier channels for the trade-off between public classical communication, private classical communication, and secret key generation. Third, we determine the capacity region for a broadcast channel induced by the quantum-limited amplifier channel, and we also show that a fully quantum strategy outperforms those achieved by classical coherent-detection strategies. In all three scenarios, we find that the capacities significantly outperform communication rates achieved with a naive time-sharing strategy.

  19. Maximizing the optical network capacity.

    PubMed

    Bayvel, Polina; Maher, Robert; Xu, Tianhua; Liga, Gabriele; Shevchenko, Nikita A; Lavery, Domaniç; Alvarado, Alex; Killey, Robert I

    2016-03-06

    Most of the digital data transmitted are carried by optical fibres, forming the greater part of the national and international communication infrastructure. The information-carrying capacity of these networks has increased vastly over the past decades through the introduction of wavelength division multiplexing, advanced modulation formats, digital signal processing and improved optical fibre and amplifier technology. These developments sparked the communication revolution and the growth of the Internet, and have created an illusion of infinite capacity being available. But as the volume of data continues to increase, is there a limit to the capacity of an optical fibre communication channel? The optical fibre channel is nonlinear, and the intensity-dependent Kerr nonlinearity limit has been suggested as a fundamental limit to optical fibre capacity. Current research is focused on whether this is the case, and on linear and nonlinear techniques, both optical and electronic, to understand, unlock and maximize the capacity of optical communications in the nonlinear regime. This paper describes some of them and discusses future prospects for success in the quest for capacity. © 2016 The Authors.
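
    The Kerr-nonlinearity limit discussed above is often approximated with a Gaussian-noise model in which nonlinear interference grows as the cube of launch power, so capacity rises, peaks, and then falls. The noise and nonlinearity coefficients below are illustrative placeholders, not values from the paper:

```python
import math

def nonlinear_capacity(power, ase_noise, eta):
    """Gaussian-noise-model sketch: amplifier (ASE) noise plus a Kerr term
    growing as power**3 cap the effective SNR, giving a peaked capacity curve
    (bits/symbol, per polarization, arbitrary units)."""
    snr = power / (ase_noise + eta * power ** 3)
    return math.log2(1 + snr)

powers = [0.1 * k for k in range(1, 31)]                    # launch powers (arbitrary units)
caps = [nonlinear_capacity(p, ase_noise=0.01, eta=0.05) for p in powers]
best = max(range(len(caps)), key=caps.__getitem__)
print(powers[best], caps[best])                             # the finite optimum launch power
```

    Unlike the linear Shannon formula, raising power here eventually hurts: past the optimum, the cubic interference term dominates, which is the "nonlinear capacity limit" the research above seeks to understand and push back.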

  20. Baculovirus-based genome editing in primary cells.

    PubMed

    Mansouri, Maysam; Ehsaei, Zahra; Taylor, Verdon; Berger, Philipp

    2017-03-01

    Genome editing in eukaryotes became easier in the last years with the development of nucleases that induce double strand breaks in DNA at user-defined sites. CRISPR/Cas9-based genome editing is currently one of the most powerful strategies. In the easiest case, a nuclease (e.g. Cas9) and a target defining guide RNA (gRNA) are transferred into a target cell. Non-homologous end joining (NHEJ) repair of the DNA break following Cas9 cleavage can lead to inactivation of the target gene. Specific repair or insertion of DNA with Homology Directed Repair (HDR) needs the simultaneous delivery of a repair template. Recombinant Lentivirus or Adenovirus genomes have enough capacity for a nuclease coding sequence and the gRNA but are usually too small to also carry large targeting constructs. We recently showed that a baculovirus-based multigene expression system (MultiPrime) can be used for genome editing in primary cells since it possesses the necessary capacity to carry the nuclease and gRNA expression constructs and the HDR targeting sequences. Here we present new Acceptor plasmids for MultiPrime that allow simplified cloning of baculoviruses for genome editing and we show their functionality in primary cells with limited life span and induced pluripotent stem cells (iPS). Copyright © 2017 Elsevier Inc. All rights reserved.

  1. Accelerometry-Based Activity Recognition and Assessment in Rheumatic and Musculoskeletal Diseases.

    PubMed

    Billiet, Lieven; Swinnen, Thijs Willem; Westhovens, Rene; de Vlam, Kurt; Van Huffel, Sabine

    2016-12-16

    One of the important aspects to be considered in rheumatic and musculoskeletal diseases is the patient's activity capacity (or performance), defined as the ability to perform a task. Currently, it is assessed by physicians or health professionals mainly by means of a patient-reported questionnaire, sometimes combined with the therapist's judgment on performance-based tasks. This work introduces an approach to assess the activity capacity at home in a more objective, yet interpretable way. It offers a pilot study on 28 patients suffering from axial spondyloarthritis (axSpA) to demonstrate its efficacy. Firstly, a protocol is introduced to recognize a limited set of six transition activities in the home environment using a single accelerometer. To this end, a hierarchical classifier with the rejection of non-informative activity segments has been developed drawing on both direct pattern recognition and statistical signal features. Secondly, the recognized activities should be assessed, similarly to the scoring performed by patients themselves. This is achieved through the interval coded scoring (ICS) system, a novel method to extract an interpretable scoring system from data. The activity recognition reaches an average accuracy of 93.5%; assessment is currently 64.3% accurate. These results indicate the potential of the approach; a next step should be its validation in a larger patient study.

  2. How female community health workers navigate work challenges and why there are still gaps in their performance: a look at female community health workers in maternal and child health in two Indian districts through a reciprocal determinism framework.

    PubMed

    Sarin, Enisha; Lunsford, Sarah Smith

    2017-07-01

    Accredited Social Health Activists (ASHAs) are community health workers tasked to deliver health prevention in communities and link them with the health care sector. This paper examines the social, cultural, and institutional influences that either facilitate or impede ASHAs' abilities to deliver services effectively through the lens of the reciprocal determinism framework of social cognitive theory. We conducted 98 semi-structured, in-depth interviews with ASHAs (n = 49) and their family members (n = 49) in Gurdaspur and Mewat districts. Data were analyzed by comparing and contrasting codes leading to the identification of patterns which were explained with the help of a theoretical framework. We found that while the work of ASHAs led to some positive health changes in the community, thus providing them with a sense of self-worth and motivation, community norms and beliefs as well as health system attitudes and practices limited their capacity as community health workers. We outline potential mechanisms for improving ASHA capacity such as improved sensitization about religious, cultural, and gender norms; enhanced communication skills; and sensitization and advocating their work with health and state officials.

  3. Toward Intelligent Software Defect Detection

    NASA Technical Reports Server (NTRS)

    Benson, Markland J.

    2011-01-01

    Source code level software defect detection has gone from state of the art to a software engineering best practice. Automated code analysis tools streamline many of the aspects of formal code inspections but have the drawback of being difficult to construct and either prone to false positives or severely limited in the set of defects that can be detected. Machine learning technology provides the promise of learning software defects by example, easing construction of detectors and broadening the range of defects that can be found. Pinpointing software defects with the same level of granularity as prominent source code analysis tools distinguishes this research from past efforts, which focused on analyzing software engineering metrics data with granularity limited to that of a particular function rather than a line of code.

  4. Divided attention limits perception of 3-D object shapes

    PubMed Central

    Scharff, Alec; Palmer, John; Moore, Cathleen M.

    2013-01-01

    Can one perceive multiple object shapes at once? We tested two benchmark models of object shape perception under divided attention: an unlimited-capacity and a fixed-capacity model. Under unlimited-capacity models, shapes are analyzed independently and in parallel. Under fixed-capacity models, shapes are processed at a fixed rate (as in a serial model). To distinguish these models, we compared conditions in which observers were presented with simultaneous or sequential presentations of a fixed number of objects (the extended simultaneous-sequential method; Scharff, Palmer, & Moore, 2011a, 2011b). We used novel physical objects as stimuli, minimizing the role of semantic categorization in the task. Observers searched for a specific object among similar objects. We ensured that non-shape stimulus properties such as color and texture could not be used to complete the task. Unpredictable viewing angles were used to preclude image-matching strategies. The results rejected unlimited-capacity models for object shape perception and were consistent with the predictions of a fixed-capacity model. In contrast, a task that required observers to recognize 2-D shapes with predictable viewing angles yielded an unlimited-capacity result. Further experiments ruled out alternative explanations for the capacity limit, leading us to conclude that there is a fixed-capacity limit on the ability to perceive 3-D object shapes. PMID:23404158

  5. Visual Working Memory Capacity and Proactive Interference

    PubMed Central

    Hartshorne, Joshua K.

    2008-01-01

    Background Visual working memory capacity is extremely limited and appears to be relatively immune to practice effects or the use of explicit strategies. The recent discovery that visual working memory tasks, like verbal working memory tasks, are subject to proactive interference, coupled with the fact that typical visual working memory tasks are particularly conducive to proactive interference, suggests that visual working memory capacity may be systematically under-estimated. Methodology/Principal Findings Working memory capacity was probed behaviorally in adult humans both in laboratory settings and via the Internet. Several experiments show that although the effect of proactive interference on visual working memory is significant and can last over several trials, it only changes the capacity estimate by about 15%. Conclusions/Significance This study further confirms the sharp limitations on visual working memory capacity, both in absolute terms and relative to verbal working memory. It is suggested that future research take these limitations into account in understanding differences across a variety of tasks between human adults, prelinguistic infants and nonlinguistic animals. PMID:18648493

  6. Visual working memory capacity and proactive interference.

    PubMed

    Hartshorne, Joshua K

    2008-07-23

    Visual working memory capacity is extremely limited and appears to be relatively immune to practice effects or the use of explicit strategies. The recent discovery that visual working memory tasks, like verbal working memory tasks, are subject to proactive interference, coupled with the fact that typical visual working memory tasks are particularly conducive to proactive interference, suggests that visual working memory capacity may be systematically under-estimated. Working memory capacity was probed behaviorally in adult humans both in laboratory settings and via the Internet. Several experiments show that although the effect of proactive interference on visual working memory is significant and can last over several trials, it only changes the capacity estimate by about 15%. This study further confirms the sharp limitations on visual working memory capacity, both in absolute terms and relative to verbal working memory. It is suggested that future research take these limitations into account in understanding differences across a variety of tasks between human adults, prelinguistic infants and nonlinguistic animals.
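
    Capacity estimates in change-detection studies of this kind typically come from Cowan's K formula, K = N x (hit rate - false-alarm rate). A sketch with hypothetical hit and false-alarm rates illustrating a roughly 15% shift of the estimate (the numbers are not from the study):

```python
def cowan_k(set_size, hit_rate, false_alarm_rate):
    """Standard capacity estimate for change-detection tasks:
    K = N * (hit rate - false-alarm rate)."""
    return set_size * (hit_rate - false_alarm_rate)

# Illustrative numbers: proactive interference slightly depresses hits
# and inflates false alarms, shifting the estimate by about 15%.
baseline = cowan_k(8, 0.60, 0.20)   # ~3.2 items
with_pi = cowan_k(8, 0.55, 0.21)    # ~2.7 items
print(baseline, with_pi, 1 - with_pi / baseline)
```

    Even after correcting such a shift, the estimate stays in the familiar 3-4 item range, which is the study's central point about the sharpness of the limit.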

  7. Systematic network coding for two-hop lossy transmissions

    NASA Astrophysics Data System (ADS)

    Li, Ye; Blostein, Steven; Chan, Wai-Yip

    2015-12-01

    In this paper, we consider network transmissions over a single or multiple parallel two-hop lossy paths. These scenarios occur in applications such as sensor networks or WiFi offloading. Random linear network coding (RLNC), where previously received packets are re-encoded at intermediate nodes and forwarded, is known to be a capacity-achieving approach for these networks. However, a major drawback of RLNC is its high encoding and decoding complexity. In this work, a systematic network coding method is proposed. We show through both analysis and simulation that the proposed method achieves higher end-to-end rate as well as lower computational cost than RLNC for finite field sizes and finite-sized packet transmissions.
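The complexity trade-off at issue can be made concrete with a toy sketch over GF(2) (the paper's actual scheme, field sizes, and analysis are not reproduced here; packet values are made up). Coded packets carry a coefficient vector plus an XOR combination of source packets, and decoding is Gaussian elimination over those vectors; a systematic scheme sends the originals first (unit coefficient vectors), so elimination work is only needed for the few re-encoded repair packets:

```python
def gf2_decode(coded, k):
    """Recover k source packets from (coeff_vector, payload) pairs by
    incremental Gaussian elimination over GF(2).
    Payloads are ints; integer XOR stands in for packet-wise XOR."""
    pivot = [None] * k  # pivot[c]: a row whose leading 1 is at column c
    for coeffs, payload in coded:
        c, p = coeffs[:], payload
        for col in range(k):
            if not c[col]:
                continue
            if pivot[col] is None:
                pivot[col] = (c, p)
                break
            pc, pp = pivot[col]
            c = [a ^ b for a, b in zip(c, pc)]
            p ^= pp
        # a row reduced to all zeros was non-innovative and is dropped
    if any(r is None for r in pivot):
        return None  # rank-deficient: too few innovative packets received
    for col in reversed(range(k)):  # back substitution
        c, p = pivot[col]
        for j in range(col + 1, k):
            if c[j]:
                c[j] = 0
                p ^= pivot[j][1]
        pivot[col] = (c, p)
    return [p for _, p in pivot]

# Systematic transmission of k = 4 packets: the originals go out first
# (unit coefficient vectors); suppose the first is lost on the second hop
# and the relay supplies one re-encoded repair packet covering it.
pkts = [0x12, 0x34, 0x56, 0x78]
received = [
    ([0, 1, 0, 0], pkts[1]),
    ([0, 0, 1, 0], pkts[2]),
    ([0, 0, 0, 1], pkts[3]),
    ([1, 1, 0, 1], pkts[0] ^ pkts[1] ^ pkts[3]),  # repair packet
]
assert gf2_decode(received, 4) == pkts
```

Because three of the four received rows are already unit vectors, elimination only touches the single repair row; with pure RLNC every row would be a dense random combination, which is the encoding/decoding cost the systematic approach avoids.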

  8. Robust Nonlinear Neural Codes

    NASA Astrophysics Data System (ADS)

    Yang, Qianli; Pitkow, Xaq

    2015-03-01

Most interesting natural sensory stimuli are encoded in the brain in a form that can only be decoded nonlinearly. But despite being a core function of the brain, nonlinear population codes are rarely studied and poorly understood. Interestingly, the few existing models of nonlinear codes are inconsistent with known architectural features of the brain. In particular, these codes have information content that scales with the size of the cortical population, even if that violates the data processing inequality by exceeding the amount of information entering the sensory system. Here we provide a valid theory of nonlinear population codes by generalizing recent work on information-limiting correlations in linear population codes. Although these generalized, nonlinear information-limiting correlations bound the performance of any decoder, they also make decoding more robust to suboptimal computation, allowing many suboptimal decoders to achieve nearly the same efficiency as an optimal decoder. Although these correlations are extremely difficult to measure directly, particularly for nonlinear codes, we provide a simple, practical test by which one can use choice-related activity in small populations of neurons to determine whether decoding is suboptimal or optimal and limited by correlated noise. We conclude by describing an example computation in the vestibular system where this theory applies. QY and XP were supported by a grant from the McNair Foundation.

  9. The ALFA (Activity Log Files Aggregation) Toolkit: A Method for Precise Observation of the Consultation

    PubMed Central

    2008-01-01

    Background There is a lack of tools to evaluate and compare Electronic patient record (EPR) systems to inform a rational choice or development agenda. Objective To develop a tool kit to measure the impact of different EPR system features on the consultation. Methods We first developed a specification to overcome the limitations of existing methods. We divided this into work packages: (1) developing a method to display multichannel video of the consultation; (2) code and measure activities, including computer use and verbal interactions; (3) automate the capture of nonverbal interactions; (4) aggregate multiple observations into a single navigable output; and (5) produce an output interpretable by software developers. We piloted this method by filming live consultations (n = 22) by 4 general practitioners (GPs) using different EPR systems. We compared the time taken and variations during coded data entry, prescribing, and blood pressure (BP) recording. We used nonparametric tests to make statistical comparisons. We contrasted methods of BP recording using Unified Modeling Language (UML) sequence diagrams. Results We found that 4 channels of video were optimal. We identified an existing application for manual coding of video output. We developed in-house tools for capturing use of keyboard and mouse and to time stamp speech. The transcript is then typed within this time stamp. Although we managed to capture body language using pattern recognition software, we were unable to use this data quantitatively. We loaded these observational outputs into our aggregation tool, which allows simultaneous navigation and viewing of multiple files. This also creates a single exportable file in XML format, which we used to develop UML sequence diagrams. In our pilot, the GP using the EMIS LV (Egton Medical Information Systems Limited, Leeds, UK) system took the longest time to code data (mean 11.5 s, 95% CI 8.7-14.2). 
Nonparametric comparison of EMIS LV with the other systems showed a significant difference, with EMIS PCS (Egton Medical Information Systems Limited, Leeds, UK) (P = .007), iSoft Synergy (iSOFT, Banbury, UK) (P = .014), and INPS Vision (INPS, London, UK) (P = .006) facilitating faster coding. In contrast, prescribing was fastest with EMIS LV (mean 23.7 s, 95% CI 20.5-26.8), but nonparametric comparison showed no statistically significant difference. UML sequence diagrams showed that the simplest BP recording interface was not the easiest to use, as users spent longer navigating or looking up previous blood pressures separately. Complex interfaces with free-text boxes left clinicians unsure of what to add. Conclusions The ALFA method allows the precise observation of the clinical consultation. It enables rigorous comparison of core elements of EPR systems. Pilot data suggests its capacity to demonstrate differences between systems. Its outputs could provide the evidence base for making more objective choices between systems. PMID:18812313

  10. 49 CFR 192.201 - Required capacity of pressure relieving and limiting stations.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Design of Pipeline Components § 192.201 Required capacity of pressure relieving and limiting stations. (a) Each pressure relief station or pressure limiting station or group of those stations installed to... part of the pipeline or distribution system in excess of those for which it was designed, or against...

  11. 75 FR 11171 - Notice of Filing of Several Pesticide Petitions for Residues of Pesticide Chemicals in or on...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-10

    ... limited to: Crop production (NAICS code 111). Animal production (NAICS code 112). Food manufacturing (NAICS code 311). Pesticide manufacturing (NAICS code 32532). This listing is not intended to be... Environmental protection, Agricultural commodities, Feed additives, Food additives, Pesticides and pests...

  12. 75 FR 60452 - Notice of Filing of Several Pesticide Petitions for Residues of Pesticide Chemicals in or on...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-30

    ... limited to: Crop production (NAICS code 111). Animal production (NAICS code 112). Food manufacturing (NAICS code 311). Pesticide manufacturing (NAICS code 32532). This listing is not intended to be... commodities, Feed additives, Food additives, Pesticides and pests, Reporting and recordkeeping requirements...

  13. Dual Coding, Reasoning and Fallacies.

    ERIC Educational Resources Information Center

    Hample, Dale

    1982-01-01

    Develops the theory that a fallacy is not a comparison of a rhetorical text to a set of definitions but a comparison of one person's cognition with another's. Reviews Paivio's dual coding theory, relates nonverbal coding to reasoning processes, and generates a limited fallacy theory based on dual coding theory. (PD)

  14. 76 FR 36479 - Receipt of a Pesticide Petition Filed for Residues of Pesticide Chemicals in or on Various...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-22

    ... limited to: Crop production (NAICS code 111). Animal production (NAICS code 112). Food manufacturing (NAICS code 311). Pesticide manufacturing (NAICS code 32532). This listing is not intended to be... commodities, Feed additives, Food additives, Pesticides and pests, Reporting and recordkeeping requirements...

  15. Users' Manual for Computer Code SPIRALI Incompressible, Turbulent Spiral Grooved Cylindrical and Face Seals

    NASA Technical Reports Server (NTRS)

    Walowit, Jed A.; Shapiro, Wilbur

    2005-01-01

    The SPIRALI code predicts the performance characteristics of incompressible cylindrical and face seals with or without the inclusion of spiral grooves. Performance characteristics include load capacity (for face seals), leakage flow, power requirements and dynamic characteristics in the form of stiffness, damping and apparent mass coefficients in 4 degrees of freedom for cylindrical seals and 3 degrees of freedom for face seals. These performance characteristics are computed as functions of seal and groove geometry, load or film thickness, running and disturbance speeds, fluid viscosity, and boundary pressures. A derivation of the equations governing the performance of turbulent, incompressible, spiral groove cylindrical and face seals along with a description of their solution is given. The computer codes are described, including an input description, sample cases, and comparisons with results of other codes.

  16. Converting HAZUS capacity curves to seismic hazard-compatible building fragility functions: effect of hysteretic models

    USGS Publications Warehouse

    Ryu, Hyeuk; Luco, Nicolas; Baker, Jack W.; Karaca, Erdem

    2008-01-01

    A methodology was recently proposed for the development of hazard-compatible building fragility models using parameters of capacity curves and damage state thresholds from HAZUS (Karaca and Luco, 2008). In the methodology, HAZUS curvilinear capacity curves were used to define nonlinear dynamic SDOF models that were subjected to the nonlinear time history analysis instead of the capacity spectrum method. In this study, we construct a multilinear capacity curve with negative stiffness after an ultimate (capping) point for the nonlinear time history analysis, as an alternative to the curvilinear model provided in HAZUS. As an illustration, here we propose parameter values of the multilinear capacity curve for a moderate-code low-rise steel moment resisting frame building (labeled S1L in HAZUS). To determine the final parameter values, we perform nonlinear time history analyses of SDOF systems with various parameter values and investigate their effects on resulting fragility functions through sensitivity analysis. The findings improve capacity curves and thereby fragility and/or vulnerability models for generic types of structures.
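The multilinear backbone described here can be sketched as a simple piecewise-linear function: elastic up to yield, hardening up to the ultimate (capping) point, then negative stiffness beyond it. The parameter values below are illustrative placeholders, not the S1L values proposed in the paper:

```python
def multilinear_capacity(d, d_y=0.4, f_y=0.3, d_u=2.0, f_u=0.5, k_neg=-0.05):
    """Piecewise-linear capacity curve (displacement -> strength):
    elastic to the yield point (d_y, f_y), hardening to the capping
    point (d_u, f_u), then negative stiffness k_neg beyond it.
    All parameter values are illustrative, not the HAZUS S1L ones."""
    if d <= d_y:                                   # elastic branch
        return f_y * d / d_y
    if d <= d_u:                                   # hardening branch
        return f_y + (f_u - f_y) * (d - d_y) / (d_u - d_y)
    return max(0.0, f_u + k_neg * (d - d_u))       # post-capping degradation

assert abs(multilinear_capacity(0.2) - 0.15) < 1e-12   # halfway to yield
assert abs(multilinear_capacity(2.0) - 0.5) < 1e-12    # capping point
assert abs(multilinear_capacity(4.0) - 0.4) < 1e-12    # degrading branch
assert multilinear_capacity(20.0) == 0.0               # fully degraded
```

A curve of this form can be sampled directly as the backbone of an SDOF model for nonlinear time history analysis, which is the role the paper's multilinear curve plays relative to the HAZUS curvilinear one.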

  17. 76 FR 283 - International Fisheries; Pacific Tuna Fisheries; Vessel Capacity Limit in the Purse Seine Fishery...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-04

    ... vessel capacity limit of 158,000 cubic meters for all vessels authorized by the IATTC to fish for tuna... EPO of 31,775 cubic meters. When Resolution C-02-03 was adopted, the United States was authorized to have a total of 39,228 cubic meters of total well volume capacity in the purse seine fishery, as well...

  18. Limitless capacity: a dynamic object-oriented approach to short-term memory.

    PubMed

    Macken, Bill; Taylor, John; Jones, Dylan

    2015-01-01

    The notion of capacity-limited processing systems is a core element of cognitive accounts of limited and variable performance, enshrined within the short-term memory construct. We begin with a detailed critical analysis of the conceptual bases of this view and argue that there are fundamental problems - ones that go to the heart of cognitivism more generally - that render it untenable. In place of limited capacity systems, we propose a framework for explaining performance that focuses on the dynamic interplay of three aspects of any given setting: the particular task that must be accomplished, the nature and form of the material upon which the task must be performed, and the repertoire of skills and perceptual-motor functions possessed by the participant. We provide empirical examples of the applications of this framework in areas of performance typically accounted for by reference to capacity-limited short-term memory processes.

  19. Sustaining Open Source Communities through Hackathons - An Example from the ASPECT Community

    NASA Astrophysics Data System (ADS)

    Heister, T.; Hwang, L.; Bangerth, W.; Kellogg, L. H.

    2016-12-01

    The ecosystem surrounding a successful scientific open source software package combines both social and technical aspects. Much thought has been given to the technology side of writing sustainable software for large infrastructure projects and software libraries, but less about building the human capacity to perpetuate scientific software used in computational modeling. One effective format for building capacity is regular multi-day hackathons. Scientific hackathons bring together a group of science domain users and scientific software contributors to make progress on a specific software package. Innovation comes through the chance to work with established and new collaborations. Especially in the domain sciences with small communities, hackathons give geographically distributed scientists an opportunity to connect face-to-face. They foster lively discussions amongst scientists with different expertise, promote new collaborations, and increase transparency in both the technical and scientific aspects of code development. ASPECT is an open source, parallel, extensible finite element code to simulate thermal convection, that began development in 2011 under the Computational Infrastructure for Geodynamics. ASPECT hackathons for the past 3 years have grown the number of authors to >50, training new code maintainers in the process. Hackathons begin with leaders establishing project-specific conventions for development, demonstrating the workflow for code contributions, and reviewing relevant technical skills. Each hackathon expands the developer community. Over 20 scientists add >6,000 lines of code during the >1 week event. Participants grow comfortable contributing to the repository and over half continue to contribute afterwards. A high return rate of participants ensures continuity and stability of the group as well as mentoring for novice members. We hope to build other software communities on this model, but anticipate each to bring their own unique challenges.

  20. International codes and agreements to restrict the promotion of harmful products can hold lessons for the control of alcohol marketing.

    PubMed

    Landon, Jane; Lobstein, Tim; Godfrey, Fiona; Johns, Paula; Brookes, Chris; Jernigan, David

    2017-01-01

Background and aims The 2011 UN Summit on Non-Communicable Disease failed to call for global action on alcohol marketing despite calls in the World Health Organization (WHO) Global Action Plan on Non-Communicable Diseases 2013-20 to restrict or ban alcohol advertising. In this paper we ask what it might take to match the global approach to tobacco enshrined in the Framework Convention on Tobacco Control (FCTC), and suggest that public health advocates can learn from the development of the FCTC and the Code of Marketing on infant formula milks and the recent recommendations on restricting food marketing to children. Methods Narrative review of qualitative accounts of the processes that created and monitor existing codes and treaties to restrict the marketing of consumer products, specifically breast milk substitutes, unhealthy foods and tobacco. Findings The development of treaties and codes for market restrictions includes: (i) evidence of a public health crisis; (ii) the cost of inaction; (iii) civil society advocacy; (iv) the building of capacity; (v) the management of conflicting interests in policy development; and (vi) the need to consider monitoring and accountability to ensure compliance. Conclusion International public health treaties and codes provide an umbrella under which national governments can strengthen their own legislation, assisted by technical support from international agencies and non-governmental organizations. Three examples of international agreements, those for breast milk substitutes, unhealthy foods and tobacco, can provide lessons for the public health community to make progress on alcohol controls. Lessons include stronger alliances of advocates and health professionals and better tools and capacity to monitor and report current marketing practices and trends. © 2016 Society for the Study of Addiction.

  1. The impact of rural hospital closures on equity of commuting time for haemodialysis patients: simulation analysis using the capacity-distance model

    PubMed Central

    2012-01-01

Background Frequent and long-term commuting is a requirement for dialysis patients. Accessibility thus affects their quality of life. In this paper, a new model for accessibility measurement is proposed in which both geographic distance and facility capacity are taken into account. Simulation of closure of rural facilities and that of capacity transfer between urban and rural facilities are conducted to evaluate the impacts of these phenomena on equity of accessibility among dialysis patients. Methods Post code information as of August 2011 of all the 7,374 patients certified by municipalities of Hiroshima prefecture as having first or third grade renal disability were collected. Information on post code and the maximum number of outpatients (capacity) of all the 98 dialysis facilities were also collected. Using geographic information systems, patient commuting times were calculated in two models: one that takes into account road distance (distance model), and the other that takes into account both the road distance and facility capacity (capacity-distance model). Simulations of closures of rural and urban facilities were then conducted. Results The median commuting time among rural patients was more than twice as long as that among urban patients (15 versus 7 minutes, p < 0.001). In the capacity-distance model 36.1% of patients commuted to facilities different from those in the distance model, creating a substantial gap in commuting time between the two models. In the simulation, when five rural public facilities were closed, the Gini coefficient of commuting times among the patients increased by 16%, indicating a substantial worsening of equity, and the number of patients with commuting times longer than 90 minutes increased 72-fold. In contrast, closure of four urban public facilities with similar capacities did not affect these values.
Conclusions Closures of dialysis facilities in rural areas have a substantially larger impact on equity of commuting times among dialysis patients than closures of urban facilities. The accessibility simulations using the capacity-distance model will provide an analytic framework upon which rational resource distribution policies might be planned. PMID:22824294
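The equity metric used in this study, the Gini coefficient, can be computed directly from its definition as the mean absolute difference over all pairs of values, normalised by twice the mean (a generic sketch; the commuting-time values below are made up for illustration, not data from the study):

```python
def gini(values):
    """Gini coefficient: mean absolute pairwise difference,
    normalised by twice the mean. 0 = perfect equality."""
    n = len(values)
    mean = sum(values) / n
    if mean == 0:
        return 0.0
    total = sum(abs(x - y) for x in values for y in values)
    return total / (2 * n * n * mean)

assert gini([10, 10, 10, 10]) == 0.0  # everyone commutes the same time
assert gini([0, 1]) == 0.5
# hypothetical commuting times (minutes): one long rural tail
# drives the coefficient up
times = [7, 7, 8, 15, 20, 90]
assert abs(gini(times) - 0.523) < 0.001
```

Adding long rural commutes while urban times stay flat raises the coefficient, which is exactly the worsening of equity the facility-closure simulations report.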

  2. Additive Classical Capacity of Quantum Channels Assisted by Noisy Entanglement.

    PubMed

    Zhuang, Quntao; Zhu, Elton Yechao; Shor, Peter W

    2017-05-19

    We give a capacity formula for the classical information transmission over a noisy quantum channel, with separable encoding by the sender and limited resources provided by the receiver's preshared ancilla. Instead of a pure state, we consider the signal-ancilla pair in a mixed state, purified by a "witness." Thus, the signal-witness correlation limits the resource available from the signal-ancilla correlation. Our formula characterizes the utility of different forms of resources, including noisy or limited entanglement assistance, for classical communication. With separable encoding, the sender's signals across multiple channel uses are still allowed to be entangled, yet our capacity formula is additive. In particular, for generalized covariant channels, our capacity formula has a simple closed form. Moreover, our additive capacity formula upper bounds the general coherent attack's information gain in various two-way quantum key distribution protocols. For Gaussian protocols, the additivity of the formula indicates that the collective Gaussian attack is the most powerful.

  3. Cardinality enhancement utilizing Sequential Algorithm (SeQ) code in OCDMA system

    NASA Astrophysics Data System (ADS)

    Fazlina, C. A. S.; Rashidi, C. B. M.; Rahman, A. K.; Aljunid, S. A.

    2017-11-01

Optical Code Division Multiple Access (OCDMA) has become important with the increasing demand for capacity and speed in optical networks, because the high efficiency achievable with OCDMA allows the fibre bandwidth to be fully used. In this paper we focus on the Sequential Algorithm (SeQ) code with an AND detection technique, using the Optisystem design tool. The results reveal that the SeQ code is capable of eliminating Multiple Access Interference (MAI) and improving the Bit Error Rate (BER), Phase Induced Intensity Noise (PIIN) and orthogonality between users in the system. The SeQ code shows good BER performance and can accommodate 190 simultaneous users, in contrast with existing codes; it enhances the system capacity by about 36% and 111% relative to the FCC and DCS codes, respectively. In addition, SeQ achieves a good BER of 10^-25 at 155 Mbps, in comparison with the 622 Mbps, 1 Gbps and 2 Gbps bit rates; 155 Mbps is a suitable speed for FTTH and LAN networks. Based on the superior performance of the SeQ code, these codes offer an opportunity for better quality of service in future OCDMA optical access networks.

  4. The association of color memory and the enumeration of multiple spatially overlapping sets.

    PubMed

    Poltoratski, Sonia; Xu, Yaoda

    2013-07-09

    Using dot displays, Halberda, Sires, and Feigenson (2006) showed that observers could simultaneously encode the numerosity of two spatially overlapping sets and the superset of all items at a glance. With the brief display and the masking used in Halberda et al., the task required observers to encode the colors of each set in order to select and enumerate all the dots in that set. As such, the observed capacity limit for set enumeration could reflect a limit in visual short-term memory (VSTM) capacity for the set color rather than a limit in set enumeration per se. Here, we largely replicated Halberda et al. and found successful enumeration of approximately two sets (the superset was not probed). We also found that only about two and a half colors could be remembered from the colored dot displays whether or not the enumeration task was performed concurrently with the color VSTM task. Because observers must remember the color of a set prior to enumerating it, the under three-item VSTM capacity for color necessarily dictates that set enumeration capacity in this paradigm could not exceed two sets. Thus, the ability to enumerate multiple spatially overlapping sets is likely limited by VSTM capacity to retain the discriminating feature of these sets. This relationship suggests that the capacity for set enumeration cannot be considered independently from the capacity for the set's defining features.

  5. Healthcare Utilization Monitoring System in Korea

    PubMed Central

    Shin, Hyun Chul; Lee, Youn Tae; Jo, Emmanuel C.

    2015-01-01

Objectives It is important to monitor the healthcare utilization of patients at the national level to make evidence-based policy decisions and manage the nation's healthcare sector. The Health Insurance Review & Assessment Service (HIRA) has run a Healthcare Utilization Monitoring System (HUMS) since 2008. The objective of this paper is to introduce HIRA's HUMS. Methods This study described the HUMS's system structure, capacity, functionalities, and output formats run by HIRA in the Republic of Korea. Regarding output formats, this study extracted diabetes related health insurance claims through the HUMS from August 1, 2014 to May 31, 2015. Results The HUMS has kept records of health insurance claim data for 4 years. It has a 14-terabyte hardware capacity and employs several easy-to-use programs for maintenance of the system, such as MSTR, SAS, etc. Regarding functionalities, users should input disease codes, target periods, facility types, and types of attributes, such as the number of healthcare utilizations or healthcare costs. It also has a functionality to predict healthcare utilization and costs. When this study extracted diabetes related data, it was found that the trend of healthcare costs for the treatment of diabetes and the number of patients with diabetes were increasing. Conclusions HIRA's HUMS works well to monitor healthcare utilization of patients at the national level. The HUMS has a high-capacity hardware infrastructure and several operational programs that allow easy access to summaries as well as details to identify contributing factors for abnormality, but it has a limitation in that there is often a time lag between the provision of healthcare to patients and the filing of health claims. PMID:26279955

  6. Bilingual First Language Acquisition: Exploring the Limits of the Language Faculty.

    ERIC Educational Resources Information Center

    Genesee, Fred

    2001-01-01

    Reviews current research in three domains of bilingual acquisition: pragmatic features of bilingual code mixing, grammatical constraints on child bilingual code mixing, and bilingual syntactic development. Examines implications from these domains for the understanding of the limits of the mental faculty to acquire language. (Author/VWL)

  7. What the success of brain imaging implies about the neural code.

    PubMed

    Guest, Olivia; Love, Bradley C

    2017-01-19

    The success of fMRI places constraints on the nature of the neural code. The fact that researchers can infer similarities between neural representations, despite fMRI's limitations, implies that certain neural coding schemes are more likely than others. For fMRI to succeed given its low temporal and spatial resolution, the neural code must be smooth at the voxel and functional level such that similar stimuli engender similar internal representations. Through proof and simulation, we determine which coding schemes are plausible given both fMRI's successes and its limitations in measuring neural activity. Deep neural network approaches, which have been forwarded as computational accounts of the ventral stream, are consistent with the success of fMRI, though functional smoothness breaks down in the later network layers. These results have implications for the nature of the neural code and ventral stream, as well as what can be successfully investigated with fMRI.
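The "functional smoothness" argument can be caricatured in a few lines (a cartoon, not the authors' proofs or simulations; tuning widths, population size, and voxel size below are arbitrary): if similar stimuli engender similar population patterns, then even after averaging many neurons into one coarse voxel, nearby stimuli still map to nearby voxel patterns, so representational similarity survives fMRI's low resolution:

```python
import math

def smooth_code(s, centers):
    """Tuned responses: similar stimuli -> similar population patterns."""
    return [math.exp(-((s - c) ** 2) / 0.02) for c in centers]

def voxelize(pattern, voxel_size):
    """Average neighbouring 'neurons' into coarse voxels, as fMRI does."""
    return [sum(pattern[i:i + voxel_size]) / voxel_size
            for i in range(0, len(pattern), voxel_size)]

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

centers = [i / 40 for i in range(40)]        # 40 model neurons, 5 voxels
v = lambda s: voxelize(smooth_code(s, centers), voxel_size=8)

# After heavy spatial averaging, nearby stimuli still yield nearby
# voxel patterns: the similarity structure survives coarse measurement.
assert dist(v(0.50), v(0.52)) < dist(v(0.50), v(0.90))
```

With a non-smooth code (e.g. responses assigned arbitrarily per stimulus), this ordering would not be preserved after averaging, which is the sense in which fMRI's success constrains the coding scheme.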

  8. Neural representation of objects in space: a dual coding account.

    PubMed Central

    Humphreys, G W

    1998-01-01

    I present evidence on the nature of object coding in the brain and discuss the implications of this coding for models of visual selective attention. Neuropsychological studies of task-based constraints on: (i) visual neglect; and (ii) reading and counting, reveal the existence of parallel forms of spatial representation for objects: within-object representations, where elements are coded as parts of objects, and between-object representations, where elements are coded as independent objects. Aside from these spatial codes for objects, however, the coding of visual space is limited. We are extremely poor at remembering small spatial displacements across eye movements, indicating (at best) impoverished coding of spatial position per se. Also, effects of element separation on spatial extinction can be eliminated by filling the space with an occluding object, indicating that spatial effects on visual selection are moderated by object coding. Overall, there are separate limits on visual processing reflecting: (i) the competition to code parts within objects; (ii) the small number of independent objects that can be coded in parallel; and (iii) task-based selection of whether within- or between-object codes determine behaviour. Between-object coding may be linked to the dorsal visual system while parallel coding of parts within objects takes place in the ventral system, although there may additionally be some dorsal involvement either when attention must be shifted within objects or when explicit spatial coding of parts is necessary for object identification. PMID:9770227

  9. Clique-Based Neural Associative Memories with Local Coding and Precoding.

    PubMed

    Mofrad, Asieh Abolpour; Parker, Matthew G; Ferdosi, Zahra; Tadayon, Mohammad H

    2016-08-01

Techniques from coding theory are able to improve the efficiency of neuroinspired and neural associative memories by forcing some construction and constraints on the network. In this letter, the approach is to embed coding techniques into neural associative memory in order to increase their performance in the presence of partial erasures. The motivation comes from recent work by Gripon, Berrou, and coauthors, which revisited Willshaw networks and presented a neural network with interacting neurons that partitioned into clusters. The model introduced stores patterns as small-size cliques that can be retrieved in spite of partial error. We focus on improving the success of retrieval by applying two techniques: doing a local coding in each cluster and then applying a precoding step. We use a slightly different decoding scheme, which is appropriate for partial erasures and converges faster. Although the ideas of local coding and precoding are not new, the way we apply them is different. Simulations show an increase in the pattern retrieval capacity for both techniques. Moreover, we use self-dual additive codes over the field GF(4), which have very interesting properties and a simple-graph representation.
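The underlying Gripon-Berrou storage scheme the letter builds on can be sketched in a few lines (this shows only basic clique storage and winner-take-all retrieval under partial erasure; the local coding and precoding refinements and the modified decoder are not reproduced here, and the patterns are made up):

```python
import itertools

class CliqueMemory:
    """Clustered binary associative memory (Gripon-Berrou style).
    A message is one neuron per cluster; storing it adds the edges of
    the corresponding clique. Retrieval fills each erased cluster with
    the neuron best connected to the known ones."""

    def __init__(self, clusters, neurons_per_cluster):
        self.c = clusters
        self.l = neurons_per_cluster
        self.edges = set()  # undirected edges between (cluster, neuron) nodes

    def store(self, message):
        nodes = list(enumerate(message))  # (cluster_index, neuron_index)
        for a, b in itertools.combinations(nodes, 2):
            self.edges.add(frozenset((a, b)))

    def retrieve(self, partial):
        """partial: list of neuron indices, with None marking erased clusters."""
        known = [(i, n) for i, n in enumerate(partial) if n is not None]
        out = list(partial)
        for i, n in enumerate(partial):
            if n is not None:
                continue
            # winner-take-all within the erased cluster
            scores = [sum(frozenset(((i, cand), k)) in self.edges for k in known)
                      for cand in range(self.l)]
            out[i] = max(range(self.l), key=lambda cand: scores[cand])
        return out

mem = CliqueMemory(clusters=4, neurons_per_cluster=8)
mem.store([1, 5, 2, 7])
mem.store([3, 5, 6, 0])
assert mem.retrieve([1, None, 2, 7]) == [1, 5, 2, 7]
assert mem.retrieve([3, 5, None, 0]) == [3, 5, 6, 0]
```

The letter's contribution sits on top of this structure: coding within each cluster and a precoding step raise retrieval success as erasures accumulate.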

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singleton, Jr., Robert; Israel, Daniel M.; Doebling, Scott William

For code verification, one compares the code output against known exact solutions. There are many standard test problems used in this capacity, such as the Noh and Sedov problems. ExactPack is a utility that integrates many of these exact solution codes into a common API (application program interface), and can be used as a stand-alone code or as a python package. ExactPack consists of python driver scripts that access a library of exact solutions written in Fortran or Python. The spatial profiles of the relevant physical quantities, such as the density, fluid velocity, sound speed, or internal energy, are returned at a time specified by the user. The solution profiles can be viewed and examined by a command line interface or a graphical user interface, and a number of analysis tools and unit tests are also provided. We have documented the physics of each problem in the solution library, and provided complete documentation on how to extend the library to include additional exact solutions. ExactPack’s code architecture makes it easy to extend the solution-code library to include additional exact solutions in a robust, reliable, and maintainable manner.
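As an illustration of what such an exact solution looks like (a standalone sketch, independent of ExactPack's actual API), the planar Noh problem with gamma = 5/3 has a simple closed form: cold gas of unit density streams toward a wall at unit speed, and an outward-moving shock at x = t/3 leaves behind a stagnant region of density 4 and pressure 4/3:

```python
def noh_planar(x, t, gamma=5.0 / 3.0):
    """Exact solution of the planar Noh problem at position x >= 0, time t > 0.
    Initial state: density 1, velocity -1 (toward the wall at x = 0), pressure 0.
    Returns (density, velocity, pressure)."""
    shock_pos = 0.5 * (gamma - 1.0) * t   # shock moves out at speed (gamma-1)/2
    if x < shock_pos:                     # shocked, stagnant region
        return (gamma + 1.0) / (gamma - 1.0), 0.0, 0.5 * (gamma + 1.0)
    return 1.0, -1.0, 0.0                 # undisturbed infalling gas

# At t = 0.6 the shock sits at x = 0.2; behind it rho = 4, p = 4/3
rho, u, p = noh_planar(0.1, 0.6)
assert abs(rho - 4.0) < 1e-12 and u == 0.0 and abs(p - 4.0 / 3.0) < 1e-12
assert noh_planar(0.5, 0.6) == (1.0, -1.0, 0.0)
```

A verification run compares a hydrocode's density, velocity, and pressure profiles against a sampled curve like this one; ExactPack packages many such solutions behind one interface.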

  11. Tough2{_}MP: A parallel version of TOUGH2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Keni; Wu, Yu-Shu; Ding, Chris

    2003-04-09

TOUGH2{_}MP is a massively parallel version of TOUGH2. It was developed for running on distributed-memory parallel computers to simulate large problems that may not be solved by the standard, single-CPU TOUGH2 code. The new code implements an efficient massively parallel scheme, while preserving the full capacity and flexibility of the original TOUGH2 code. The new software uses the METIS software package for grid partitioning and the AZTEC software package for linear-equation solving. The standard message-passing interface is adopted for communication among processors. Numerical performance of the current version code has been tested on CRAY-T3E and IBM RS/6000 SP platforms. In addition, the parallel code has been successfully applied to real field problems of multi-million-cell simulations for three-dimensional multiphase and multicomponent fluid and heat flow, as well as solute transport. In this paper, we will review the development of TOUGH2{_}MP, and discuss the basic features, modules, and their applications.

  12. Sparse, decorrelated odor coding in the mushroom body enhances learned odor discrimination.

    PubMed

    Lin, Andrew C; Bygrave, Alexei M; de Calignon, Alix; Lee, Tzumin; Miesenböck, Gero

    2014-04-01

    Sparse coding may be a general strategy of neural systems for augmenting memory capacity. In Drosophila melanogaster, sparse odor coding by the Kenyon cells of the mushroom body is thought to generate a large number of precisely addressable locations for the storage of odor-specific memories. However, it remains untested how sparse coding relates to behavioral performance. Here we demonstrate that sparseness is controlled by a negative feedback circuit between Kenyon cells and the GABAergic anterior paired lateral (APL) neuron. Systematic activation and blockade of each leg of this feedback circuit showed that Kenyon cells activated APL and APL inhibited Kenyon cells. Disrupting the Kenyon cell-APL feedback loop decreased the sparseness of Kenyon cell odor responses, increased inter-odor correlations and prevented flies from learning to discriminate similar, but not dissimilar, odors. These results suggest that feedback inhibition suppresses Kenyon cell activity to maintain sparse, decorrelated odor coding and thus the odor specificity of memories.
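
    A toy rate model can illustrate how feedback inhibition sparsens a population code. The connectivity and inhibition gain below are invented for illustration; this is not the paper's biophysical circuit.

```python
import numpy as np

rng = np.random.default_rng(0)

def kc_responses(odor, weights, gain):
    """Kenyon-cell (KC) responses with APL-like feedback: inhibition
    proportional to the population's mean drive is subtracted before
    rectification (a toy rate model, not a biophysical one)."""
    drive = weights @ odor
    return np.maximum(drive - gain * drive.mean(), 0.0)

n_kc, n_orn = 200, 20
W = rng.random((n_kc, n_orn))      # random ORN -> KC connectivity
odor = rng.random(n_orn)

no_fb = kc_responses(odor, W, gain=0.0)    # feedback blocked
with_fb = kc_responses(odor, W, gain=1.5)  # feedback intact

sparseness_no = float(np.mean(no_fb > 0))  # fraction of active cells
sparseness_fb = float(np.mean(with_fb > 0))
```

    With inhibition blocked every cell responds; with the feedback intact only cells driven well above the population mean remain active.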

  13. Health information management: an introduction to disease classification and coding.

    PubMed

    Mony, Prem Kumar; Nagaraj, C

    2007-01-01

    Morbidity and mortality data constitute an important component of a health information system, and their coding enables uniform data collation and analysis as well as meaningful comparisons between regions or countries. Strengthening the recording and reporting systems for health monitoring is a basic requirement for an efficient health information management system. Increased advocacy for and awareness of a uniform coding system, together with adequate capacity building of physicians, coders and other allied health and information technology personnel, would pave the way for a valid and reliable health information management system in India. The core requirements for the implementation of disease coding are: (i) support from national/institutional health administrators; (ii) widespread availability of the ICD-10 material for morbidity and mortality coding; (iii) enhanced human and financial resources; and (iv) optimal use of informatics. We describe the methodology of a disease classification and codification system as well as its applications for developing and maintaining an effective health information management system for India.

  14. Variable Coding and Modulation Experiment Using NASA's Space Communication and Navigation Testbed

    NASA Technical Reports Server (NTRS)

    Downey, Joseph A.; Mortensen, Dale J.; Evans, Michael A.; Tollis, Nicholas S.

    2016-01-01

    National Aeronautics and Space Administration (NASA)'s Space Communication and Navigation Testbed on the International Space Station provides a unique opportunity to evaluate advanced communication techniques in an operational system. The experimental nature of the Testbed allows for rapid demonstrations while using flight hardware in a deployed system within NASA's networks. One example is variable coding and modulation, which is a method to increase data-throughput in a communication link. This paper describes recent flight testing with variable coding and modulation over S-band using a direct-to-earth link between the SCaN Testbed and the Glenn Research Center. The testing leverages the established Digital Video Broadcasting Second Generation (DVB-S2) standard to provide various modulation and coding options. The experiment was conducted in a challenging environment due to the multipath and shadowing caused by the International Space Station structure. Performance of the variable coding and modulation system is evaluated and compared to the capacity of the link, as well as standard NASA waveforms.
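
    The modcod-selection logic behind variable coding and modulation can be sketched as follows. The Es/N0 thresholds below are illustrative stand-ins, not the official DVB-S2 performance values.

```python
# Illustrative Es/N0 thresholds (dB) -- NOT the official DVB-S2 values.
MODCODS = [
    ("QPSK 1/2",   1.0, 2, 1.0 / 2),
    ("QPSK 3/4",   4.0, 2, 3.0 / 4),
    ("8PSK 2/3",   6.6, 3, 2.0 / 3),
    ("8PSK 5/6",   9.4, 3, 5.0 / 6),
    ("16APSK 3/4", 10.2, 4, 3.0 / 4),
]

def pick_modcod(esn0_db, margin_db=1.0):
    """Variable coding and modulation: return (name, bits/s/Hz) of the
    highest-efficiency modcod whose threshold plus link margin the measured
    Es/N0 supports, or None if even the most robust modcod cannot close
    the link."""
    best = None
    for name, threshold, bits_per_symbol, code_rate in MODCODS:
        if esn0_db >= threshold + margin_db:
            efficiency = bits_per_symbol * code_rate
            if best is None or efficiency > best[1]:
                best = (name, efficiency)
    return best
```

    As the link degrades (for example under ISS multipath and shadowing), the selected modcod steps down to a more robust, lower-throughput point.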

  15. [Assessment of Functioning when Conducting Occupational Capacity Evaluations--What is "Evidence-Based"?].

    PubMed

    Canela, Carlos; Schleifer, Roman; Dube, Anish; Hengartner, Michael P; Ebner, Gerhard; Seifritz, Erich; Liebrenz, Michael

    2016-03-01

    Occupational capacity evaluations have previously been subject to criticism for lacking in quality and consistency. To the authors' knowledge, there is no clear consensus on the best way to formally assess functioning within capacity evaluations. In this review we investigated different instruments that are used to assess functioning in occupational capacity evaluations. Systematic review of the literature. Though several instruments that assess functional capacity were found in our search, a specific validated instrument assessing occupational capacity as part of a larger psychiatric evaluation was not found. The limitations of the existing instruments on assessing functional capacity are discussed. Medical experts relying on instruments to conduct functional capacity evaluations should be cognizant of their limitations. The findings call for the development and use of an instrument specifically designed to assess the functional and occupational capacity of psychiatric patients, which is also likely to improve the quality of these reports. © Georg Thieme Verlag KG Stuttgart · New York.

  16. Rotation Capacity of Bolted Flush End-Plate Stiffened Beam-to-Column Connection

    NASA Astrophysics Data System (ADS)

    Ostrowski, Krzysztof; Kozłowski, Aleksander

    2017-06-01

    One of the flexibility parameters of semi-rigid joints is rotation capacity. Plastic rotation capacity is especially important in the plastic design of framed structures. Current design codes, including Eurocode 3, do not provide procedures enabling designers to obtain the value of rotation capacity. In this paper a calculation procedure for the rotation capacity of stiffened bolted flush end-plate beam-to-column connections is proposed. Theory of experiment design was applied with the use of Hartley's PS/DS-P:Ha3 plan. The analysis was performed with the finite element method (ANSYS), based on the numerical experiment plan. The determination of the maximal rotation angle was carried out with the use of regression analysis. The main variables analyzed in the parametric study were: the pitch of the bolts "w" (120-180 mm), the distance between the bolt axis and the beam upper edge cg1 (50-90 mm), and the thickness of the end-plate tp (10-20 mm). A power function was proposed to describe the available rotation capacity of the joint. The influence of the particular components on the rotation capacity was also investigated, and a general procedure for the determination of rotation capacity was proposed.
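
    The regression step, fitting a power function of the three parameters to numerical-experiment results, can be sketched with synthetic data. The coefficients below are invented for illustration, not the values fitted in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "numerical experiment" results: rotation capacity generated from
# an assumed power law with illustrative coefficients.
w   = rng.uniform(120, 180, 50)   # bolt pitch, mm
cg1 = rng.uniform(50, 90, 50)     # bolt axis to beam upper edge, mm
tp  = rng.uniform(10, 20, 50)     # end-plate thickness, mm
phi = 0.8 * w**0.5 * cg1**-0.3 * tp**-1.2   # rotation capacity (arb. units)

# A power function phi = C * w^a * cg1^b * tp^c is linear in log space,
# so ordinary least squares recovers the exponents.
X = np.column_stack([np.ones_like(w), np.log(w), np.log(cg1), np.log(tp)])
coef, *_ = np.linalg.lstsq(X, np.log(phi), rcond=None)
exponents = coef[1:]   # estimates of a, b, c
```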

  17. Picornaviruses and nuclear functions: targeting a cellular compartment distinct from the replication site of a positive-strand RNA virus

    PubMed Central

    Flather, Dylan; Semler, Bert L.

    2015-01-01

    The compartmentalization of DNA replication and gene transcription in the nucleus and protein production in the cytoplasm is a defining feature of eukaryotic cells. The nucleus functions to maintain the integrity of the nuclear genome of the cell and to control gene expression based on intracellular and environmental signals received through the cytoplasm. The spatial separation of the major processes that lead to the expression of protein-coding genes establishes the necessity of a transport network to allow biomolecules to translocate between these two regions of the cell. The nucleocytoplasmic transport network is therefore essential for regulating normal cellular functioning. The Picornaviridae virus family is one of many viral families that disrupt the nucleocytoplasmic trafficking of cells to promote viral replication. Picornaviruses contain positive-sense, single-stranded RNA genomes and replicate in the cytoplasm of infected cells. As a result of the limited coding capacity of these viruses, cellular proteins are required by these intracellular parasites for both translation and genomic RNA replication. Being of messenger RNA polarity, a picornavirus genome can immediately be translated upon entering the cell cytoplasm. However, the replication of viral RNA requires the activity of RNA-binding proteins, many of which function in host gene expression, and are consequently localized to the nucleus. As a result, picornaviruses disrupt nucleocytoplasmic trafficking to exploit protein functions normally localized to a different cellular compartment from which they translate their genome to facilitate efficient replication. Furthermore, picornavirus proteins are also known to enter the nucleus of infected cells to limit host-cell transcription and down-regulate innate antiviral responses. The interactions of picornavirus proteins and host-cell nuclei are extensive, required for a productive infection, and are the focus of this review. PMID:26150805

  18. Injury in China: a systematic review of injury surveillance studies conducted in Chinese hospital emergency departments

    PubMed Central

    2011-01-01

    Background Injuries represent a significant and growing public health concern in China. This Review was conducted to document the characteristics of injured patients presenting to the emergency department of Chinese hospitals and to assess the nature of information collected and reported in published surveillance studies. Methods A systematic search of MEDLINE and China Academic Journals supplemented with a hand search of journals was performed. Studies published in the period 1997 to 2007 were included and research published in Chinese was the focus. Search terms included emergency, injury, and medical care. Results Of the 268 studies identified, 13 were injury surveillance studies set in the emergency department. Nine were collaborative studies, of which eight were prospective studies. Of the five single-centre studies, only one was of a prospective design. Transport, falls and industrial injuries were common mechanisms of injury. Study strengths were large patient sample sizes and, for the collaborative studies, a large number of participating hospitals. There was, however, limited use of internationally recognised injury classification and severity coding indices. Conclusion Despite the limited number of studies identified, the scope of each highlights the willingness and the capacity to conduct surveillance studies in the emergency department. This Review highlights the need for the adoption of standardized injury coding indices in the collection and reporting of patient health data. While high-level injury surveillance systems focus on population-based priority setting, this Review demonstrates the need to establish an internationally comparable trauma registry that would permit monitoring of the trauma system and would by extension facilitate the optimal care of the injured patient through the development of informed quality assurance programs and the implementation of evidence-based health policy. PMID:22029774

  19. Dynamics of social contagions with limited contact capacity.

    PubMed

    Wang, Wei; Shu, Panpan; Zhu, Yu-Xiao; Tang, Ming; Zhang, Yi-Cheng

    2015-10-01

    Individuals are always limited by some inelastic resources, such as time and energy, which restrict the time they can dedicate to social interaction and limit their contact capacities. Contact capacity plays an important role in the dynamics of social contagions, which has so far eluded theoretical analysis. In this paper, we first propose a non-Markovian model to understand the effects of contact capacity on social contagions, in which each adopted individual can only contact and transmit the information to a finite number of neighbors. We then develop a heterogeneous edge-based compartmental theory for this model, and a remarkable agreement with simulations is obtained. Through theory and simulations, we find that enlarging the contact capacity makes the network more fragile to behavior spreading. Interestingly, we find that both continuous and discontinuous dependence of the final adoption size on the information transmission probability can arise, with a crossover phenomenon between the two types of dependence. More specifically, the crossover phenomenon can be induced by enlarging the contact capacity only when the degree exponent is above a critical degree exponent, while the final behavior adoption size always grows continuously for any contact capacity when the degree exponent is below the critical degree exponent.
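
    A minimal simulation sketch of a contact-capacity constraint follows. This is a crude Markovian caricature with invented parameters; the paper's non-Markovian model additionally tracks how many pieces of information each individual has accumulated before adopting.

```python
import random

def simulate(adjacency, capacity, p, seed=0, max_steps=50):
    """Toy contagion with limited contact capacity: each adopted node
    contacts at most `capacity` randomly chosen neighbours per step, and
    each contact transmits with probability p."""
    rng = random.Random(seed)
    adopted = {0}
    for _ in range(max_steps):
        newly = set()
        for node in adopted:
            neighbours = adjacency[node]
            contacts = rng.sample(neighbours, min(capacity, len(neighbours)))
            for nb in contacts:
                if nb not in adopted and rng.random() < p:
                    newly.add(nb)
        if not newly:
            break
        adopted |= newly
    return len(adopted)

# Ring lattice: 100 nodes, each connected to its 4 nearest neighbours.
N = 100
adjacency = [[(i + d) % N for d in (-2, -1, 1, 2)] for i in range(N)]
size_c1 = simulate(adjacency, capacity=1, p=0.5)
size_c4 = simulate(adjacency, capacity=4, p=0.5)
```

    Sweeping `capacity` and `p` in such a sandbox reproduces the qualitative effect the paper analyzes: a larger contact capacity promotes spreading.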

  20. Relating saturation capacity to charge density in strong cation exchangers.

    PubMed

    Steinebach, Fabian; Coquebert de Neuville, Bertrand; Morbidelli, Massimo

    2017-07-21

    In this work the relation between physical and chemical resin characteristics and the total amount of adsorbed protein (saturation capacity) for ion-exchange resins is discussed. Eleven different packing materials with a sulfo-functionalization and one multimodal resin were analyzed in terms of their porosity, pore size distribution, ligand density and binding capacity. By normalizing the ligand density and binding capacity by the total and accessible surface area, two different groups of resins were identified: below a ligand density of approximately 2.5 μmol/m² the ligand density controls the saturation capacity, while above this limit the accessible surface area becomes the limiting factor. This results in a maximum protein uptake of around 2.5 mg/m² of accessible surface area. The obtained results allow the saturation capacity to be estimated from independent resin characteristics: it mainly depends on "library data" such as the accessible and total surface area and the charge density. Hence these results give an insight into the fundamentals of protein adsorption and help to find suitable resins, thus limiting the experimental effort in early process development stages. Copyright © 2017 Elsevier B.V. All rights reserved.
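
    The two regimes can be captured in a small estimator. Only the ~2.5 μmol/m² crossover and the ~2.5 mg/m² plateau come from the abstract; the per-ligand uptake factor is an illustrative placeholder.

```python
def saturation_capacity(accessible_area_m2, ligand_density_umol_m2,
                        uptake_per_umol_mg=1.0, max_uptake_mg_m2=2.5,
                        crossover_umol_m2=2.5):
    """Estimate protein saturation capacity (mg) of an ion-exchange resin.

    Below the crossover ligand density, capacity scales with ligand density
    (uptake_per_umol_mg is a placeholder conversion, mg per umol of ligand);
    above it, the accessible surface area caps uptake at max_uptake_mg_m2.
    """
    if ligand_density_umol_m2 < crossover_umol_m2:
        return (accessible_area_m2 * ligand_density_umol_m2
                * uptake_per_umol_mg)
    return accessible_area_m2 * max_uptake_mg_m2
```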

  1. LRFD software for design and actual ultimate capacity of confined rectangular columns : [technical summary].

    DOT National Transportation Integrated Search

    2013-04-01

    Columns are considered the most critical elements in structures. The unconfined analysis for columns is well established in the literature. Structural design codes dictate reduction factors for safety. It wasn't until very recently that design spec...

  2. The capacity limitations of orientation summary statistics

    PubMed Central

    Attarha, Mouna; Moore, Cathleen M.

    2015-01-01

    The simultaneous–sequential method was used to test the processing capacity of establishing mean orientation summaries. Four clusters of oriented Gabor patches were presented in the peripheral visual field. One of the clusters had a mean orientation that was tilted either left or right while the mean orientations of the other three clusters were roughly vertical. All four clusters were presented at the same time in the simultaneous condition whereas the clusters appeared in temporal subsets of two in the sequential condition. Performance was lower when the means of all four clusters had to be processed concurrently than when only two had to be processed in the same amount of time. The advantage for establishing fewer summaries at a given time indicates that the processing of mean orientation engages limited-capacity processes (Experiment 1). This limitation cannot be attributed to crowding, low target-distractor discriminability, or a limited-capacity comparison process (Experiments 2 and 3). In contrast to the limitations of establishing multiple summary representations, establishing a single summary representation unfolds without interference (Experiment 4). When interpreted in the context of recent work on the capacity of summary statistics, these findings encourage reevaluation of the view that early visual perception consists of summary statistic representations that unfold independently across multiple areas of the visual field. PMID:25810160

  3. Structural design, analysis, and code evaluation of an odd-shaped pressure vessel

    NASA Astrophysics Data System (ADS)

    Rezvani, M. A.; Ziada, H. H.

    1992-12-01

    An effort to design, analyze, and evaluate a rectangular pressure vessel is described. Normally pressure vessels are designed in circular or spherical shapes to prevent stress concentrations. In this case, because of operational limitations, the choice of vessels was limited to a rectangular pressure box with a removable cover plate. The American Society of Mechanical Engineers (ASME) Boiler and Pressure Vessel Code is used as a guideline for pressure containments whose width or depth exceeds 15.24 cm (6.0 in.) and where pressures will exceed 103.4 kPa (15.0 lbf/in²). This evaluation used Section 8 of this Code, hereafter referred to as the Code. The dimensions and working pressure of the subject vessel fall within the pressure vessel category of the Code. The Code design guidelines and rules do not directly apply to this vessel. Therefore, finite-element methodology was used to analyze the pressure vessel, and the Code then was used in qualifying the vessel to be stamped to the Code. Section 8, Division 1 of the Code was used for evaluation. This action was justified by selecting a material for which fatigue damage would not be a concern. The stress analysis results were then checked against the Code, and the thicknesses adjusted to satisfy Code requirements. Although not directly applicable, the Code design formulas for rectangular vessels were also considered and presented.

  4. On the design of turbo codes

    NASA Technical Reports Server (NTRS)

    Divsalar, D.; Pollara, F.

    1995-01-01

    In this article, we design new turbo codes that can achieve near-Shannon-limit performance. The design criterion for random interleavers is based on maximizing the effective free distance of the turbo code, i.e., the minimum output weight of codewords due to weight-2 input sequences. An upper bound on the effective free distance of a turbo code is derived. This upper bound can be achieved if the feedback connection of convolutional codes uses primitive polynomials. We review multiple turbo codes (parallel concatenation of q convolutional codes), which increase the so-called 'interleaving gain' as q and the interleaver size increase, and a suitable decoder structure derived from an approximation to the maximum a posteriori probability decision rule. We develop new rate 1/3, 2/3, 3/4, and 4/5 constituent codes to be used in the turbo encoder structure. These codes, with 2 to 32 states, are designed by using primitive polynomials. The resulting turbo codes have rates b/n (b = 1, 2, 3, 4 and n = 2, 3, 4, 5, 6), and include random interleavers for better asymptotic performance. These codes are suitable for deep-space communications with low throughput and for near-Earth communications where high throughput is desirable. The performance of these codes is within 1 dB of the Shannon limit at a bit-error rate of 10^-6 for throughputs from 1/15 up to 4 bits/s/Hz.
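
    The weight-2 design criterion can be checked numerically. The sketch below uses the classic (7,5)-octal recursive systematic constituent encoder as a stand-in example (not necessarily one of the codes developed in the article) and finds the minimum parity weight z_min over terminating weight-2 inputs; for two identical constituents the effective free distance is then 2 + 2*z_min.

```python
def rsc_parity(bits):
    """(7,5)-octal recursive systematic convolutional encoder: feedback
    polynomial 1+D+D^2, forward polynomial 1+D^2, zero initial state.
    Returns (parity bit sequence, final state)."""
    s1 = s2 = 0
    out = []
    for u in bits:
        fb = u ^ s1 ^ s2          # feedback 1 + D + D^2
        out.append(fb ^ s2)       # forward  1 + D^2
        s1, s2 = fb, s1
    return out, (s1, s2)

def min_weight2_parity(block_len=20):
    """Minimum parity weight over all weight-2 inputs whose error event
    terminates (encoder returns to the zero state)."""
    best = None
    for i in range(block_len):
        for j in range(i + 1, block_len):
            bits = [0] * block_len
            bits[i] = bits[j] = 1
            parity, state = rsc_parity(bits)
            if state != (0, 0):
                continue
            w = sum(parity)
            best = w if best is None else min(best, w)
    return best

z_min = min_weight2_parity()   # minimum parity weight of one constituent
d_eff = 2 + 2 * z_min          # effective free distance of the turbo code
```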

  5. 70% efficiency of bistate molecular machines explained by information theory, high dimensional geometry and evolutionary convergence.

    PubMed

    Schneider, Thomas D

    2010-10-01

    The relationship between information and energy is key to understanding biological systems. We can display the information in DNA sequences specifically bound by proteins by using sequence logos, and we can measure the corresponding binding energy. These can be compared by noting that one of the forms of the second law of thermodynamics defines the minimum energy dissipation required to gain one bit of information. Under the isothermal conditions in which molecular machines function this is k_B T ln(2) joules per bit (k_B is Boltzmann's constant and T is the absolute temperature). Then an efficiency of binding can be computed by dividing the information in a logo by the free energy of binding after it has been converted to bits. The isothermal efficiencies of not only genetic control systems, but also visual pigments are near 70%. From information and coding theory, the theoretical efficiency limit for bistate molecular machines is ln 2 = 0.6931. Evolutionary convergence to maximum efficiency is limited by the constraint that molecular states must be distinct from each other. The result indicates that natural molecular machines operate close to their information processing maximum (the channel capacity), and implies that nanotechnology can attain this goal.
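
    The efficiency calculation can be carried out directly from the second-law minimum. The binding energy and information content used here are illustrative placeholders, not the paper's measurements.

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K

def bits_from_energy(delta_g_joules, temperature_k=300.0):
    """Maximum bits of information selectable when dissipating delta_g,
    from the second-law minimum of k_B*T*ln(2) joules per bit."""
    return delta_g_joules / (K_B * temperature_k * math.log(2))

def isothermal_efficiency(info_bits, delta_g_joules, temperature_k=300.0):
    """Information actually gained divided by the maximum number of bits
    the dissipated binding free energy could have bought."""
    return info_bits / bits_from_energy(delta_g_joules, temperature_k)

# Illustrative numbers: a binding site conveying 18 bits of sequence
# information with 1.0e-19 J of binding free energy.
eff = isothermal_efficiency(18.0, 1.0e-19)
```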

  6. 70% efficiency of bistate molecular machines explained by information theory, high dimensional geometry and evolutionary convergence

    PubMed Central

    Schneider, Thomas D.

    2010-01-01

    The relationship between information and energy is key to understanding biological systems. We can display the information in DNA sequences specifically bound by proteins by using sequence logos, and we can measure the corresponding binding energy. These can be compared by noting that one of the forms of the second law of thermodynamics defines the minimum energy dissipation required to gain one bit of information. Under the isothermal conditions in which molecular machines function this is k_B T ln(2) joules per bit (k_B is Boltzmann's constant and T is the absolute temperature). Then an efficiency of binding can be computed by dividing the information in a logo by the free energy of binding after it has been converted to bits. The isothermal efficiencies of not only genetic control systems, but also visual pigments are near 70%. From information and coding theory, the theoretical efficiency limit for bistate molecular machines is ln 2 = 0.6931. Evolutionary convergence to maximum efficiency is limited by the constraint that molecular states must be distinct from each other. The result indicates that natural molecular machines operate close to their information processing maximum (the channel capacity), and implies that nanotechnology can attain this goal. PMID:20562221

  7. The Impact of Modeling Assumptions in Galactic Chemical Evolution Models

    NASA Astrophysics Data System (ADS)

    Côté, Benoit; O'Shea, Brian W.; Ritter, Christian; Herwig, Falk; Venn, Kim A.

    2017-02-01

    We use the OMEGA galactic chemical evolution code to investigate how the assumptions used for the treatment of galactic inflows and outflows impact numerical predictions. The goal is to determine how our capacity to reproduce the chemical evolution trends of a galaxy is affected by the choice of implementation used to include those physical processes. In pursuit of this goal, we experiment with three different prescriptions for galactic inflows and outflows and use OMEGA within a Markov Chain Monte Carlo code to recover the set of input parameters that best reproduces the chemical evolution of nine elements in the dwarf spheroidal galaxy Sculptor. This provides a consistent framework for comparing the best-fit solutions generated by our different models. Despite their different degrees of intended physical realism, we found that all three prescriptions can reproduce in an almost identical way the stellar abundance trends observed in Sculptor. This result supports the similar conclusions originally claimed by Romano & Starkenburg for Sculptor. While the three models have the same capacity to fit the data, the best values recovered for the parameters controlling the number of SNe Ia and the strength of galactic outflows are substantially different and in fact mutually exclusive from one model to another. For the purpose of understanding how a galaxy evolves, we conclude that only reproducing the evolution of a limited number of elements is insufficient and can lead to misleading conclusions. More elements or additional constraints such as the Galaxy’s star-formation efficiency and the gas fraction are needed in order to break the degeneracy between the different modeling assumptions. Our results show that the successes and failures of chemical evolution models are predominantly driven by the input stellar yields, rather than by the complexity of the Galaxy model itself. Simple models such as OMEGA are therefore sufficient to test and validate stellar yields.
OMEGA is part of the NuGrid chemical evolution package and is publicly available online at http://nugrid.github.io/NuPyCEE.

  8. Field Validation of the Stability Limit of a Multi MW Turbine

    NASA Astrophysics Data System (ADS)

    Kallesøe, Bjarne S.; Kragh, Knud A.

    2016-09-01

    Long slender blades of modern multi-megawatt turbines exhibit a flutter-like instability at rotor speeds above a critical rotor speed. Knowing the critical rotor speed is crucial to a safe turbine design. The flutter-like instability can only be estimated using geometrically non-linear aeroelastic codes. In this study, the estimated rotor speed stability limit of a 7 MW state-of-the-art wind turbine is validated experimentally. The stability limit is estimated using Siemens Wind Power's in-house aeroelastic code, and the results show that the predicted stability limit is within 5% of the experimentally observed limit.

  9. Towards a European code of medical ethics. Ethical and legal issues.

    PubMed

    Patuzzo, Sara; Pulice, Elisabetta

    2017-01-01

    The feasibility of a common European code of medical ethics is discussed, with consideration and evaluation of the difficulties such a project is going to face, from both the legal and ethical points of view. On the one hand, the analysis will underline the limits of a common European code of medical ethics as an instrument for harmonising national professional rules in the European context; on the other hand, we will highlight some of the potentials of this project, which could be increased and strengthened through a proper rulemaking process and through adequate and careful choice of content. We will also stress specific elements and devices that should be taken into consideration during the establishment of the code, from both procedural and content perspectives. Regarding methodological issues, the limits and potentialities of a common European code of medical ethics will be analysed from an ethical point of view and then from a legal perspective. The aim of this paper is to clarify the framework for the potential but controversial role of the code in the European context, showing the difficulties in enforcing and harmonising national ethical rules into a European code of medical ethics. Published by the BMJ Publishing Group Limited.

  10. Superdense Coding over Optical Fiber Links with Complete Bell-State Measurements

    DOE PAGES

    Williams, Brian P.; Sadlier, Ronald J.; Humble, Travis S.

    2017-02-01

    Adopting quantum communication to modern networking requires transmitting quantum information through a fiber-based infrastructure. In this paper, we report the first demonstration of superdense coding over optical fiber links, taking advantage of a complete Bell-state measurement enabled by time-polarization hyperentanglement, linear optics, and common single-photon detectors. We also demonstrate the highest single-qubit channel capacity to date utilizing linear optics, 1.665 ± 0.018 bits, and we provide a full experimental implementation of a hybrid, quantum-classical communication protocol for image transfer.
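
    The reported channel capacity is the mutual information of the four-message superdense-coding channel. A sketch of that computation follows; the error model for the noisy case is invented for illustration, not the experiment's measured confusion matrix.

```python
import math

def mutual_information_bits(confusion):
    """I(X;Y) in bits for equiprobable sent messages, where
    confusion[i][j] = P(decode message j | sent message i). Superdense
    coding with perfect discrimination of the 4 Bell states reaches
    log2(4) = 2 bits per transmitted qubit."""
    n = len(confusion)
    px = 1.0 / n
    py = [sum(row[j] for row in confusion) * px for j in range(n)]
    info = 0.0
    for i in range(n):
        for j in range(n):
            p = confusion[i][j]
            if p > 0.0:
                info += px * p * math.log2(p / py[j])
    return info

perfect = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
# Illustrative imperfect measurement: 95% correct, errors spread evenly.
noisy = [[0.95 if i == j else 0.05 / 3 for j in range(4)] for i in range(4)]

cap_perfect = mutual_information_bits(perfect)
cap_noisy = mutual_information_bits(noisy)
```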

  11. Building locally relevant ethics curricula for nursing education in Botswana.

    PubMed

    Barchi, F; Kasimatis Singleton, M; Magama, M; Shaibu, S

    2014-12-01

    The goal of this multi-institutional collaboration was to develop an innovative, locally relevant ethics curriculum for nurses in Botswana. Nurses in Botswana face ethical challenges that are compounded by lack of resources, pressures to handle tasks beyond training or professional levels, workplace stress and professional isolation. Capacity to teach nursing ethics in the classroom and in professional practice settings has been limited. A pilot curriculum, including cases set in local contexts, was tested with nursing faculty in Botswana in 2012. Thirty-three per cent of the faculty members indicated they would be more comfortable teaching ethics. A substantial number of faculty members were more likely to introduce the International Council of Nurses Code of Ethics in teaching, practice and mentoring as a result of the training. Based on evaluation data, curricular materials were developed using the Code and the regulatory requirements for nursing practice in Botswana. A web-based repository of sample lectures, discussion cases and evaluation rubrics was created to support the use of the materials. A new master degree course, Nursing Ethics in Practice, has been proposed for fall 2015 at the University of Botswana. The modular nature of the materials and the availability of cases set within the context of clinical nurse practice in Botswana make them readily adaptable to various student academic levels and continuing professional development programmes. The ICN Code of Ethics for Nursing is a valuable teaching tool in developing countries when taught using locally relevant case materials and problem-based teaching methods. The approach used in the development of a locally relevant nursing ethics curriculum in Botswana can serve as a model for nursing education and continuing professional development programmes in other sub-Saharan African countries to enhance use of the ICN Code of Ethics in nursing practice. © 2014 International Council of Nurses.

  12. MO-F-CAMPUS-T-05: SQL Database Queries to Determine Treatment Planning Resource Usage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fox, C; Gladstone, D

    2015-06-15

    Purpose: A radiation oncology clinic's treatment capacity is traditionally thought to be limited by the number of machines in the clinic. As the number of fractions per course decreases and the number of adaptive plans increases, the question of how many treatment plans a clinic can produce becomes increasingly important. This work seeks to lay the groundwork for assessing treatment planning resource usage. Methods: Care path templates were created using the Aria 11 care path interface. Care path tasks included key steps in the treatment planning process from the completion of CT simulation through the first radiation treatment. SQL Server Management Studio was used to run SQL queries to extract task completion time stamps along with care path template information and diagnosis codes from the Aria database. Six months of planning cycles were evaluated. Elapsed time was evaluated in terms of work hours within Monday - Friday, 7am to 5pm. Results: For the 195 validated treatment planning cycles, the average time for planning and MD review was 22.8 hours. Of those cases, 33 were categorized as urgent. The average planning time for urgent plans was 5 hours. A strong correlation was observed between diagnosis code and the range of elapsed planning time. It was also observed that tasks were more likely to be completed by the date they were due than by the time they were due. Follow-up confirmed that most users did not look at the due time. Conclusion: Evaluation of elapsed planning times suggests that care paths should be adjusted to allow for different contouring and planning times for certain diagnosis codes and urgent cases. Additional clinic training around task due times versus due dates, or a structuring of care paths around due dates, is also needed.
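
    The work-hours bookkeeping can be sketched as follows. The handling of partial days here is an assumption; the study's exact convention is not stated.

```python
from datetime import datetime, timedelta, time

def work_hours_elapsed(start, end, day_start=time(7), day_end=time(17)):
    """Elapsed time in work hours (Mon-Fri, 7am-5pm) between two
    timestamps: sum, for each weekday in the span, the overlap of
    [start, end] with that day's working window."""
    total = timedelta()
    day = start.date()
    while day <= end.date():
        if day.weekday() < 5:  # Monday=0 ... Friday=4
            window_open = datetime.combine(day, day_start)
            window_close = datetime.combine(day, day_end)
            lo = max(start, window_open)
            hi = min(end, window_close)
            if hi > lo:
                total += hi - lo
        day += timedelta(days=1)
    return total.total_seconds() / 3600.0

# Friday 4pm to Monday 8am spans two work hours (1h Friday + 1h Monday).
hours = work_hours_elapsed(datetime(2023, 6, 2, 16, 0),
                           datetime(2023, 6, 5, 8, 0))
```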

  13. Biological Information Transfer Beyond the Genetic Code: The Sugar Code

    NASA Astrophysics Data System (ADS)

    Gabius, H.-J.

    In the era of genetic engineering, cloning, and genome sequencing the focus of research on the genetic code has received an even further accentuation in the public eye. In attempting, however, to understand intra- and intercellular recognition processes comprehensively, the two biochemical dimensions established by nucleic acids and proteins are not sufficient to satisfactorily explain all molecular events in, for example, cell adhesion or routing. The consideration of further code systems is essential to bridge this gap. A third biochemical alphabet forming code words with an information storage capacity second to no other substance class in rather small units (words, sentences) is established by monosaccharides (letters). As hardware oligosaccharides surpass peptides by more than seven orders of magnitude in the theoretical ability to build isomers, when the total of conceivable hexamers is calculated. In addition to the sequence complexity, the use of magnetic resonance spectroscopy and molecular modeling has been instrumental in discovering that even small glycans can often reside in not only one but several distinct low-energy conformations (keys). Intriguingly, conformers can display notably different capacities to fit snugly into the binding site of nonhomologous receptors (locks). This process, experimentally verified for two classes of lectins, is termed "differential conformer selection." It adds potential for shifts of the conformer equilibrium to modulate ligand properties dynamically and reversibly to the well-known changes in sequence (including anomeric positioning and linkage points) and in pattern of substitution, for example, by sulfation. In the intimate interplay with sugar receptors (lectins, enzymes, and antibodies) the message of coding units of the sugar code is deciphered. Their recognition will trigger postbinding signaling and the intended biological response. 
Knowledge about the driving forces for the molecular rendezvous, i.e., contributions of bidentate or cooperative hydrogen bonds, dispersion forces, stacking, and solvent rearrangement, will enable the design of high-affinity ligands or mimetics thereof. They embody clinical applications reaching from receptor localization in diagnostic pathology to cell type-selective targeting of drugs and inhibition of undesired cell adhesion in bacterial/viral infections, inflammation, or metastasis.

  14. User's manual for the BNW-I optimization code for dry-cooled power plants. Volume I

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Braun, D.J.; Daniel, D.J.; De Mier, W.V.

    1977-01-01

This User's Manual provides information on the use and operation of three versions of BNW-I, a computer code developed by Battelle, Pacific Northwest Laboratory (PNL) as part of its activities under the ERDA Dry Cooling Tower Program. These three versions of BNW-I were used, as reported elsewhere, to obtain comparative incremental costs of electrical power production by two advanced concepts (one using plastic heat exchangers and one using ammonia as an intermediate heat transfer fluid) and a state-of-the-art system. The computer program offers a comprehensive method of evaluating the cost savings potential of dry-cooled heat rejection systems and components for power plants. This method goes beyond simple "figure-of-merit" optimization of the cooling tower and includes such items as the cost of replacement capacity needed on an annual basis and the optimum split between plant scale-up and replacement capacity, as well as the purchase and operating costs of all major heat rejection components. Hence, the BNW-I code is a useful tool for determining the potential cost savings of new heat transfer surfaces, new piping, or other components as part of an optimized system for a dry-cooled power plant.

  15. Transcriptional landscapes of Axolotl (Ambystoma mexicanum).

    PubMed

    Caballero-Pérez, Juan; Espinal-Centeno, Annie; Falcon, Francisco; García-Ortega, Luis F; Curiel-Quesada, Everardo; Cruz-Hernández, Andrés; Bako, Laszlo; Chen, Xuemei; Martínez, Octavio; Alberto Arteaga-Vázquez, Mario; Herrera-Estrella, Luis; Cruz-Ramírez, Alfredo

    2018-01-15

The axolotl (Ambystoma mexicanum) is the vertebrate model system with the highest regeneration capacity. Experimental tools established over the past 100 years have been fundamental to start unraveling the cellular and molecular basis of tissue and limb regeneration. In the absence of a reference genome for the axolotl, transcriptomic analysis becomes fundamental to understanding the genetic basis of regeneration. Here we present one of the most diverse transcriptomic data sets for the axolotl, profiling coding and non-coding RNAs from diverse tissues. We reconstructed a population of 115,906 putative protein-coding mRNAs as full ORFs (including isoforms). We also identified 352 conserved miRNAs and 297 novel putative mature miRNAs. Systematic enrichment analysis of gene expression allowed us to identify tissue-specific protein-coding transcripts. We also found putative novel and conserved microRNAs that potentially target mRNAs reported as important disease candidates in heart and liver. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. STELLTRANS: A Transport Analysis Suite for Stellarators

    NASA Astrophysics Data System (ADS)

    Mittelstaedt, Joseph; Lazerson, Samuel; Pablant, Novimir; Weir, Gavin; W7-X Team

    2016-10-01

The stellarator transport code STELLTRANS allows us to better analyze the power balance in W7-X. Although profiles of temperature and density are measured experimentally, geometrical factors are needed in conjunction with these measurements to properly analyze heat flux densities in stellarators. The STELLTRANS code interfaces with VMEC to find an equilibrium flux surface configuration and with TRAVIS to determine the RF heating and current drive in the plasma. Stationary transport equations are then solved using a boundary value differential equation solver. The equations and quantities considered are averaged over flux surfaces to reduce the system to an essentially one-dimensional problem. We have applied this code to data from W7-X and were able to calculate the heat flux coefficients. We will also present extensions of the code to a predictive capability, which would utilize DKES to find neoclassical transport coefficients to update the temperature and density profiles.
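A minimal sketch of the kind of one-dimensional boundary value problem such a flux-surface-averaged power balance reduces to. The constant diffusivity, constant source, and slab-like geometry are simplifications of mine; the actual STELLTRANS equations carry VMEC geometry factors and TRAVIS heating profiles.

```python
# Hypothetical 1-D stationary heat transport sketch (not the STELLTRANS
# equations): solve -d/drho( chi * dT/drho ) = S on rho in [0, 1], with
# symmetry dT/drho(0) = 0 and fixed edge value T(1) = T_edge, using
# central finite differences and the Thomas (tridiagonal) algorithm.

def solve_transport(n=200, chi=1.5, source=2.0, t_edge=0.1):
    h = 1.0 / n
    # Unknowns T_0 .. T_{n-1}; T_n = t_edge is the Dirichlet boundary value.
    a = [0.0] * n   # sub-diagonal
    b = [0.0] * n   # diagonal
    c = [0.0] * n   # super-diagonal
    d = [0.0] * n   # right-hand side
    # Neumann condition at rho = 0 via a ghost point: T_{-1} = T_1.
    b[0], c[0], d[0] = 2.0 * chi / h**2, -2.0 * chi / h**2, source
    for i in range(1, n):
        a[i], b[i], c[i] = -chi / h**2, 2.0 * chi / h**2, -chi / h**2
        d[i] = source
    d[n - 1] += chi / h**2 * t_edge   # fold the edge value into the RHS
    # Thomas algorithm: forward elimination, then back-substitution.
    for i in range(1, n):
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    t = [0.0] * n
    t[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        t[i] = (d[i] - c[i] * t[i + 1]) / b[i]
    return t

t = solve_transport()
# For constant chi the analytic core value is T(0) = T_edge + S / (2 * chi).
print(t[0], 0.1 + 2.0 / (2 * 1.5))
```

Because the exact solution of this simplified problem is quadratic, the central-difference solution reproduces it to rounding error; real profiles would feed measured densities and geometry into the coefficients.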

  17. New reactor cavity cooling system having passive safety features using novel shape for HTGRs and VHTRs

    DOE PAGES

    Takamatsu, Kuniyoshi; Hu, Rui

    2014-11-27

A new, highly efficient reactor cavity cooling system (RCCS) with passive safety features, requiring neither electricity nor mechanical drive, is proposed for high temperature gas cooled reactors (HTGRs) and very high temperature reactors (VHTRs). The RCCS design consists of continuous closed regions: one is an ex-reactor pressure vessel (RPV) region and the other is a cooling region with heat transfer area to ambient air assumed at 40 °C. The RCCS uses a novel shape to efficiently remove the heat released from the RPV by radiation and natural convection. Employing air as the working fluid and the ambient air as the ultimate heat sink, the new RCCS design strongly reduces the possibility of losing the heat sink for decay heat removal. Therefore, HTGRs and VHTRs adopting the new RCCS design can avoid core melting due to overheating of the fuel. Simulation results from a commercial CFD code, STAR-CCM+, show that the temperature distribution of the RCCS stays within the temperature limits of the structures, such as the maximum operating temperature of the RPV, 713.15 K (440 °C), and that the heat released from the RPV can be removed safely even during a loss of coolant accident (LOCA). Finally, provided the RCCS can remove 600 kW of the rated nominal state even during a LOCA, the safety review for building the HTTR could confirm that the temperature distribution of the HTTR remains within the structural temperature limits, securing structures and fuels after shutdown, because the large heat capacity of the graphite core can absorb heat from the fuel over a short period. Therefore, the capacity of the new RCCS design would be sufficient for decay heat removal.
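The radiative part of the heat path from the RPV can be bounded with the Stefan-Boltzmann law. This is a rough order-of-magnitude sketch, not the STAR-CCM+ model: the emissivity value, the unit view factor, and the neglect of natural convection are all assumptions of mine.

```python
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiative_flux(t_hot_k: float, t_cold_k: float, emissivity: float = 0.8) -> float:
    """Net radiative heat flux (W/m^2) between two surfaces, assuming a
    view factor of 1 and a single effective emissivity (assumed values)."""
    return emissivity * SIGMA * (t_hot_k**4 - t_cold_k**4)

# RPV surface at its 440 °C (713.15 K) limit, sink at the 40 °C (313.15 K)
# ambient assumed in the abstract; emissivity 0.8 is a guessed value.
flux = radiative_flux(713.15, 313.15)
area_for_600kw = 600e3 / flux
print(f"flux ≈ {flux / 1e3:.1f} kW/m^2, area for 600 kW ≈ {area_for_600kw:.0f} m^2")
```

Radiation alone at these temperatures gives roughly 11 kW/m², so a few tens of square metres of heat-transfer area would suffice for the 600 kW decay-heat figure, consistent with natural convection only adding margin.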

  18. The Need for Vendor Source Code at NAS. Revised

    NASA Technical Reports Server (NTRS)

    Carter, Russell; Acheson, Steve; Blaylock, Bruce; Brock, David; Cardo, Nick; Ciotti, Bob; Poston, Alan; Wong, Parkson; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

The Numerical Aerodynamic Simulation (NAS) Facility has a long-standing practice of maintaining buildable source code for installed hardware. There are two reasons for this: NAS's designated pathfinding role, and the need to maintain smoothly running operational capacity given the widely diversified nature of the vendor installations. NAS needs to maintain support capabilities when vendors are not able to; to diagnose and remedy hardware or software problems where applicable; and to support ongoing system software development activities whether or not the relevant vendors feel such support is justified. This note provides an informal history of these activities at NAS, and brings together the general principles that drive the requirement that systems integrated into the NAS environment run binaries built from source code, onsite.

  19. Nitrogen-source preference in blueberry (Vaccinium sp.): Enhanced shoot nitrogen assimilation in response to direct supply of nitrate.

    PubMed

    Alt, Douglas S; Doyle, John W; Malladi, Anish

    2017-09-01

Blueberry (Vaccinium sp.) is thought to display a preference for the ammonium (NH₄⁺) form over the nitrate (NO₃⁻) form of inorganic nitrogen (N). This N-source preference has been associated with a generally low capacity to assimilate the NO₃⁻ form of N, especially within the shoot tissues. Nitrate assimilation is mediated by nitrate reductase (NR), a rate-limiting enzyme that converts NO₃⁻ to nitrite (NO₂⁻). We investigated potential limitations of NO₃⁻ assimilation in two blueberry species, rabbiteye (Vaccinium ashei) and southern highbush (Vaccinium corymbosum), by supplying NO₃⁻ to the roots, leaf surface, or through the cut stem. Both species displayed relatively low but similar root uptake rates for both forms of inorganic N. Nitrate uptake through the roots transiently increased NR activity by up to 3.3-fold and root NR gene expression by up to 4-fold. However, supplying NO₃⁻ to the roots did not increase its transport in the xylem, nor did it increase NR activity in the leaves, indicating that the acquired N was largely assimilated or stored within the roots. Foliar application of NO₃⁻ increased leaf NR activity by up to 3.5-fold, but did not alter NO₃⁻ metabolism-related gene expression, suggesting that blueberries are capable of post-translational regulation of NR activity in the shoots. Additionally, supplying NO₃⁻ to the cut ends of stems resulted in around a 5-fold increase in NR activity, a 10-fold increase in NR transcript accumulation, and up to a 195-fold increase in transcript accumulation of NITRITE REDUCTASE (NiR1), which codes for the enzyme catalyzing the conversion of NO₂⁻ to NH₄⁺. These data indicate that blueberry shoots are capable of assimilating NO₃⁻ when it is directly supplied to these tissues. Together, these data suggest that limitations in the uptake and translocation of NO₃⁻ to the shoots may limit overall NO₃⁻ assimilation capacity in blueberry. Copyright © 2017 Elsevier GmbH. All rights reserved.

  20. Categories of Code-Switching in Hispanic Communities: Untangling the Terminology. Sociolinguistic Working Paper Number 76.

    ERIC Educational Resources Information Center

    Baker, Opal Ruth

    Research on Spanish/English code switching is reviewed and the definitions and categories set up by the investigators are examined. Their methods of locating, limiting, and classifying true code switches, and the terms used and results obtained, are compared. It is found that in these studies, conversational (intra-discourse) code switching is…

  1. 75 FR 19261 - Alkyl (C12-C16) Dimethyl Ammonio Acetate; Exemption From the Requirement of a Tolerance

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-14

    ..., it gave a negative response for skin sensitization in in vivo guinea pigs as determined by Magnusson.... Potentially affected entities may include, but are not limited to: Crop production (NAICS code 111). Animal production (NAICS code 112). Food manufacturing (NAICS code 311). Pesticide manufacturing (NAICS code 32532...

  2. 75 FR 42318 - Poly(oxy-1,2-ethanediyl), α-isotridecyl-ω-methoxy; Exemption from the Requirement of a Tolerance

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-21

    ....2600) in guinea pigs showed skin sensitization when exposed to poly(oxy-1,2-ethanediyl), [alpha.... Potentially affected entities may include, but are not limited to: Crop production (NAICS code 111). Animal production (NAICS code 112). Food manufacturing (NAICS code 311). Pesticide manufacturing (NAICS code 32532...

  3. A Proposal for the Maximum KIC for Use in ASME Code Flaw and Fracture Toughness Evaluations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kirk, Mark; Stevens, Gary; Erickson, Marjorie A

    2011-01-01

Nonmandatory Appendices A [1] and G [2] of Section XI of the ASME Code use the KIc curve (indexed to the material reference transition temperature, RTNDT) in reactor pressure vessel (RPV) flaw evaluations, and for the purpose of establishing RPV pressure-temperature (P-T) limits. Neither of these appendices places an upper limit on the KIc value that may be used in these assessments. Over the years, some members of the ASME Section XI Code committees responsible for maintaining Appendices A and G have suggested that there is a practical upper limit of 200 ksi√in (220 MPa√m) [4]. This upper limit is not well recognized by all users of the ASME Code, is not explicitly documented within the Code itself, and the one source known to the authors where it is defended [4] relies on data that is either in error or is less than 220 MPa√m. However, as part of the NRC/industry pressurized thermal shock (PTS) re-evaluation effort, empirical models were developed that propose common temperature dependencies for all ferritic steels operating on the upper shelf. These models relate the fracture toughness properties in the transition regime to those on the upper shelf and, combined with the data for a wide variety of RPV steels and welds on which they are based, suggest that the practical upper limit of 220 MPa√m exceeds the upper-shelf fracture toughness of most RPV steels by a considerable amount, especially for irradiated steels. In this paper, available models and data are used to propose upper-bound limits of applicability for the KIc curve for use in ASME Code, Section XI, Nonmandatory Appendices A and G evaluations that are consistent with available data for RPV steels.

  4. Comparative genomic and plasmid analysis of beer-spoiling and non-beer-spoiling Lactobacillus brevis isolates.

    PubMed

    Bergsveinson, Jordyn; Ziola, Barry

    2017-12-01

    Beer-spoilage-related lactic acid bacteria (BSR LAB) belong to multiple genera and species; however, beer-spoilage capacity is isolate-specific and partially acquired via horizontal gene transfer within the brewing environment. Thus, the extent to which genus-, species-, or environment- (i.e., brewery-) level genetic variability influences beer-spoilage phenotype is unknown. Publicly available Lactobacillus brevis genomes were analyzed via BlAst Diagnostic Gene findEr (BADGE) for BSR genes and assessed for pangenomic relationships. Also analyzed were functional coding capacities of plasmids of LAB inhabiting extreme niche environments. Considerable genetic variation was observed in L. brevis isolated from clinical samples, whereas 16 candidate genes distinguish BSR and non-BSR L. brevis genomes. These genes are related to nutrient scavenging of gluconate or pentoses, mannose, and metabolism of pectin. BSR L. brevis isolates also have higher average nucleotide identity and stronger pangenome association with one another, though isolation source (i.e., specific brewery) also appears to influence the plasmid coding capacity of BSR LAB. Finally, it is shown that niche-specific adaptation and phenotype are plasmid-encoded for both BSR and non-BSR LAB. The ultimate combination of plasmid-encoded genes dictates the ability of L. brevis to survive in the most extreme beer environment, namely, gassed (i.e., pressurized) beer.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bradler, Kamil; Hayden, Patrick; Touchette, Dave

Coding theorems in quantum Shannon theory express the ultimate rates at which a sender can transmit information over a noisy quantum channel. More often than not, the known formulas expressing these transmission rates are intractable, requiring an optimization over an infinite number of uses of the channel. Researchers have rarely found quantum channels with a tractable classical or quantum capacity, but when such a finding occurs, it demonstrates a complete understanding of that channel's capabilities for transmitting classical or quantum information. Here we show that the three-dimensional capacity region for entanglement-assisted transmission of classical and quantum information is tractable for the Hadamard class of channels. Examples of Hadamard channels include generalized dephasing channels, cloning channels, and the Unruh channel. The generalized dephasing channels and the cloning channels are natural processes that occur in quantum systems through the loss of quantum coherence or stimulated emission, respectively. The Unruh channel is a noisy process that occurs in relativistic quantum information theory as a result of the Unruh effect and bears a strong relationship to the cloning channels. We give exact formulas for the entanglement-assisted classical and quantum communication capacity regions of these channels. The coding strategy for each of these examples is superior to a naive time-sharing strategy, and we introduce a measure to determine this improvement.

  6. High-Capacity Communications from Martian Distances

    NASA Technical Reports Server (NTRS)

    Williams, W. Dan; Collins, Michael; Hodges, Richard; Orr, Richard S.; Sands, O. Scott; Schuchman, Leonard; Vyas, Hemali

    2007-01-01

High-capacity communications from Martian distances, required for the envisioned human exploration and desirable for data-intensive science missions, is challenging. NASA's Deep Space Network currently requires large antennas to close RF telemetry links operating at kilobit-per-second data rates. To accommodate higher-rate communications, NASA is considering means to achieve greater effective aperture at its ground stations. This report, focusing on the return link from Mars to Earth, demonstrates that without excessive research and development expenditure, operational Mars-to-Earth RF communications systems can achieve data rates up to 1 Gbps by 2020 using technology that today is at technology readiness level (TRL) 4-5. Advanced technology to achieve the needed increase in spacecraft power and transmit aperture is feasible at only a moderate increase in spacecraft mass and technology risk. In addition, both power-efficient, near-capacity coding and modulation and greater aperture from the DSN array will be required. In accord with these results and conclusions, investment in the following technologies is recommended: (1) lightweight (1 kg/sq m areal density) spacecraft antenna systems; (2) a Ka-band receive ground array consisting of relatively small (10-15 m) antennas; (3) coding and modulation technology that reduces spacecraft power by at least 3 dB; and (4) efficient generation of kilowatt-level spacecraft RF power.
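The scale of the problem can be seen from the standard RF link equation. The sketch below is illustrative only: every parameter value (transmit power, antenna gains, system temperature, required Eb/N0, losses) is an assumption of mine, not a figure from the report.

```python
import math

K_BOLTZ = 1.380649e-23      # Boltzmann constant, J/K
C_LIGHT = 299792458.0       # speed of light, m/s

def db_to_lin(db: float) -> float:
    return 10.0 ** (db / 10.0)

def data_rate_bps(p_tx_w, g_tx_db, g_rx_db, freq_hz, dist_m,
                  t_sys_k, ebno_req_db, losses_db=0.0):
    """Supportable data rate from the standard link equation:
    R = P_rx / (k * T_sys * (Eb/N0)_req), with P_rx from free-space loss."""
    lam = C_LIGHT / freq_hz
    path_gain = (lam / (4.0 * math.pi * dist_m)) ** 2
    p_rx = (p_tx_w * db_to_lin(g_tx_db) * db_to_lin(g_rx_db)
            * path_gain / db_to_lin(losses_db))
    return p_rx / (K_BOLTZ * t_sys_k * db_to_lin(ebno_req_db))

# Hypothetical Ka-band link at roughly mean Mars distance (~2.25e11 m):
# kilowatt-class transmitter, large spacecraft antenna, a ground array
# with 80 dB effective receive gain, near-capacity coding at ~1 dB Eb/N0.
rate = data_rate_bps(p_tx_w=1000.0, g_tx_db=60.0, g_rx_db=80.0,
                     freq_hz=32e9, dist_m=2.25e11,
                     t_sys_k=40.0, ebno_req_db=1.0, losses_db=3.0)
print(f"≈ {rate / 1e9:.2f} Gbps")
```

Under these assumed numbers the link closes near 1 Gbps, which is why the report's recommendations pair kilowatt-level RF power and large lightweight apertures with near-capacity coding.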

  7. Reimbursement Policies for Carotid Duplex Ultrasound that are Based on International Classification of Diseases Codes May Discourage Testing in High-Yield Groups.

    PubMed

    Go, Michael R; Masterson, Loren; Veerman, Brent; Satiani, Bhagwan

    2016-02-01

To curb increasing volumes of diagnostic imaging and costs, reimbursement for carotid duplex ultrasound (CDU) is dependent on "appropriate" indications as documented by International Classification of Diseases (ICD) codes entered by ordering physicians. Historically, asymptomatic indications for CDU yield lower rates of abnormal results than symptomatic indications, and consensus documents agree that most asymptomatic indications for CDU are inappropriate. In our vascular laboratory, we perceived an increased rate of incorrect or inappropriate ICD codes. We therefore sought to determine if ICD codes were useful in predicting the frequency of abnormal CDU. We hypothesized that asymptomatic or nonspecific ICD codes would yield a lower rate of abnormal CDU than symptomatic codes, validating efforts to limit reimbursement in asymptomatic, low-yield groups. We reviewed all outpatient CDU done in 2011 at our institution. ICD codes were recorded, and each medical record was then reviewed by a vascular surgeon to determine if the assigned ICD code appropriately reflected the clinical scenario. CDU findings, categorized as abnormal (>50% stenosis) or normal (<50% stenosis), were recorded. Each individual ICD code, and group 1 (asymptomatic), group 2 (nonhemispheric symptoms), group 3 (hemispheric symptoms), group 4 (preoperative cardiovascular examination), and group 5 (nonspecific) ICD code groups, were analyzed for correlation with CDU results. Nine hundred ninety-four patients had 74 primary ICD codes listed as indications for CDU. Of the assigned ICD codes, 17.4% were deemed inaccurate. Overall, 14.8% of CDU were abnormal. Of the 13 highest-frequency ICD codes, only 433.10, an asymptomatic code, was associated with abnormal CDU. Four symptomatic codes were associated with normal CDU; none of the other high-frequency codes were associated with CDU result.
Patients in group 1 (asymptomatic) were significantly more likely to have an abnormal CDU compared to each of the other groups (P < 0.001, P < 0.001, P = 0.020, P = 0.002) and to all other groups combined (P < 0.001). Asymptomatic indications by ICD codes yielded higher rates of abnormal CDU than symptomatic indications. This finding is inconsistent with clinical experience and historical data, and we suggest that inaccurate coding may play a role. Limiting reimbursement for CDU in low-yield groups is reasonable. However, reimbursement policies based on ICD coding, for example, limiting payment for asymptomatic ICD codes, may impede use of CDU in high-yield patient groups. Copyright © 2016 Elsevier Inc. All rights reserved.

  8. Protograph LDPC Codes with Node Degrees at Least 3

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Jones, Christopher

    2006-01-01

In this paper we present protograph codes with a small number of degree-3 nodes and one high-degree node. The iterative decoding thresholds of the proposed rate-1/2 codes are lower, by about 0.2 dB, than those of the best known irregular LDPC codes with degree at least 3. The main motivations are to gain linear minimum distance, and thus a low error floor, and to construct rate-compatible protograph-based LDPC codes of fixed block length that simultaneously achieve low iterative decoding thresholds and linear minimum distance. We start with a rate-1/2 protograph LDPC code with degree-3 nodes and one high-degree node. Higher-rate codes are obtained by connecting check nodes with degree-2 non-transmitted nodes, which is equivalent to constraint combining in the protograph; combining all constraints yields the highest-rate code. The combined constraint must be connected to nodes of degree at least three for the graph to have linear minimum distance, so requiring node degree at least 3 at rate 1/2 preserves the linear-minimum-distance property at higher rates. Through examples we show that iterative decoding thresholds as low as 0.544 dB can be achieved for small protographs with node degrees of at least three. A family of low- to high-rate codes with minimum distance increasing linearly in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.
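The rate arithmetic behind constraint combining can be sketched with the usual protograph design-rate formula r = (V − C) / (V − P), where V is the number of variable nodes, C the number of checks, and P the number of punctured (non-transmitted) nodes. The node counts below are hypothetical, and the formula assumes all check constraints are linearly independent.

```python
def protograph_design_rate(variables: int, checks: int, punctured: int = 0) -> float:
    """Design rate of a protograph LDPC code, assuming independent checks:
    r = (variables - checks) / transmitted, transmitted = variables - punctured."""
    transmitted = variables - punctured
    return (variables - checks) / transmitted

# Hypothetical rate-1/2 base protograph: 4 variable nodes, 2 check nodes.
base = protograph_design_rate(4, 2)            # (4 - 2) / 4 = 0.5

# Connecting the two checks through one degree-2 non-transmitted node adds
# a punctured variable: (5 - 2) / (5 - 1) = 0.75.
combined = protograph_design_rate(5, 2, punctured=1)

# Equivalently, merging the two checks into one: (4 - 1) / 4 = 0.75.
merged = protograph_design_rate(4, 1)
print(base, combined, merged)   # 0.5 0.75 0.75
```

The two higher-rate computations give the same value, which is the sense in which connecting checks by a degree-2 punctured node "is equivalent to constraint combining."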

  9. Complete plastid genome sequence of Daucus carota: implications for biotechnology and phylogeny of angiosperms.

    PubMed

    Ruhlman, Tracey; Lee, Seung-Bum; Jansen, Robert K; Hostetler, Jessica B; Tallon, Luke J; Town, Christopher D; Daniell, Henry

    2006-08-31

Carrot (Daucus carota) is a major food crop in the US and worldwide. Its capacity for storage and its lifecycle as a biennial make it an attractive species for the introduction of foreign genes, especially for oral delivery of vaccines and other therapeutic proteins. Until recently, efforts to express recombinant proteins in carrot have had limited success in terms of protein accumulation in the edible tap roots. Plastid genetic engineering offers the potential to overcome this limitation, as demonstrated by the accumulation of BADH in chromoplasts of carrot taproots to confer exceedingly high levels of salt resistance. The complete plastid genome of carrot provides essential information required for genetic engineering. Additionally, the sequence data add to the rapidly growing database of plastid genomes for assessing phylogenetic relationships among angiosperms. The complete carrot plastid genome is 155,911 bp in length, with 115 unique genes and 21 genes duplicated within the inverted repeat (IR). There are four ribosomal RNAs, 30 distinct tRNA genes, and 18 intron-containing genes. Repeat analysis reveals 12 direct and 2 inverted repeats ≥30 bp with a sequence identity ≥90%. Phylogenetic analysis of nucleotide sequences for 61 protein-coding genes using both maximum parsimony (MP) and maximum likelihood (ML) was performed for 29 angiosperms. Phylogenies from both methods provide strong support for the monophyly of several major angiosperm clades, including monocots, eudicots, rosids, asterids, eurosids II, euasterids I, and euasterids II. The carrot plastid genome contains a number of dispersed direct and inverted repeats scattered throughout coding and non-coding regions. This is the first sequenced plastid genome of the family Apiaceae and only the second published genome sequence of the species-rich euasterid II clade. Both MP and ML trees provide very strong support (100% bootstrap) for the sister relationship of Daucus with Panax in the euasterid II clade.
These results provide the best taxon sampling of complete chloroplast genomes and the strongest support yet for the sister relationship of Caryophyllales to the asterids. The availability of the complete plastid genome sequence should facilitate improved transformation efficiency and foreign gene expression in carrot through utilization of endogenous flanking sequences and regulatory elements.
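The kind of direct-repeat survey mentioned above can be caricatured with a naive exact-match k-mer scan. This is a toy sketch on made-up data; real plastid repeat analyses use dedicated tools that handle the ≥90% identity threshold, which this exact-match version does not model.

```python
from collections import defaultdict

def direct_repeats(seq: str, min_len: int):
    """Naive scan for exact direct repeats: substrings of length min_len
    occurring more than once, with their start positions. Illustrative
    only -- no mismatches/identity threshold, no inverted repeats."""
    positions = defaultdict(list)
    for i in range(len(seq) - min_len + 1):
        positions[seq[i:i + min_len]].append(i)
    return {kmer: pos for kmer, pos in positions.items() if len(pos) > 1}

# Toy sequence with one planted 10 bp direct repeat (hypothetical data):
toy = "TTTT" + "ACGTTGCAAG" + "CCCC" + "ACGTTGCAAG" + "GGGG"
print(direct_repeats(toy, 10))   # {'ACGTTGCAAG': [4, 18]}
```

An inverted-repeat scan would additionally look up each window's reverse complement; at genome scale one would use a suffix-array-based tool rather than this quadratic-memory dictionary.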

  10. Some conservative estimates in quantum cryptography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Molotkov, S. N.

    2006-08-15

A relationship is established between the security of the BB84 quantum key distribution protocol and the forward and converse coding theorems for quantum communication channels. The upper bound Q_c ≈ 11% on the bit error rate compatible with secure key distribution is determined by solving the transcendental equation H(Q_c) = C̄(ρ)/2, where ρ is the density matrix of the input ensemble, C̄(ρ) is the classical capacity of a noiseless quantum channel, and H(Q) is the capacity of a classical binary symmetric channel with error rate Q.
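The Q_c ≈ 11% bound can be reproduced numerically. Reading H as the binary entropy and taking C̄(ρ) = 1 bit (an assumption of mine for an ideal input ensemble), the condition reduces to H(Q_c) = 1/2, solvable by bisection since H is monotone on (0, 1/2]:

```python
import math

def binary_entropy(q: float) -> float:
    """Binary entropy H(q) in bits."""
    if q <= 0.0 or q >= 1.0:
        return 0.0
    return -q * math.log2(q) - (1.0 - q) * math.log2(1.0 - q)

def solve_threshold(target: float, lo: float = 1e-9, hi: float = 0.5,
                    tol: float = 1e-12) -> float:
    """Solve H(q) = target for q in (0, 1/2] by bisection;
    H is strictly increasing on this interval."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if binary_entropy(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# With C̄(ρ) taken as 1 bit, H(Q_c) = C̄(ρ)/2 becomes H(Q_c) = 1/2:
q_c = solve_threshold(0.5)
print(f"Q_c ≈ {q_c:.4f}")   # Q_c ≈ 0.1100
```

The root, Q_c ≈ 0.110, matches the well-known ≈11% BB84 error-rate threshold quoted in the abstract.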

  11. Context-Sensitive Ethics in School Psychology

    ERIC Educational Resources Information Center

    Lasser, Jon; Klose, Laurie McGarry; Robillard, Rachel

    2013-01-01

    Ethical codes and licensing rules provide foundational guidance for practicing school psychologists, but these sources fall short in their capacity to facilitate effective decision-making. When faced with ethical dilemmas, school psychologists can turn to decision-making models, but step-wise decision trees frequently lack the situation…

  12. 49 CFR 571.120 - Tire selection and rims and motor home/recreation vehicle trailer load carrying capacity...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... industry or manufacturer's designation for a rim by style or code. Weather side means the surface area of... CFR 1.50) [42 FR 7144, Feb. 7, 1977] Editorial Note: For Federal Register citations affecting § 571...

  13. Strengths and limitations of the NATALI code for aerosol typing from multiwavelength Raman lidar observations

    NASA Astrophysics Data System (ADS)

    Nicolae, Doina; Talianu, Camelia; Vasilescu, Jeni; Nicolae, Victor; Stachlewska, Iwona S.

    2018-04-01

A Python code was developed to automatically retrieve the aerosol type (and its predominant component in the mixture) from EARLINET's three backscatter and two extinction data products. The typing relies on artificial neural networks trained to identify the most probable aerosol type from a set of mean-layer intensive optical parameters. This paper presents the use and limitations of the code with respect to the quality of the input lidar profiles, as well as to the assumptions made in the aerosol model.

  14. Electrical generating unit inventory, 1976-1986: Illinois, Indiana, Kentucky, Ohio, Pennsylvania and West Virginia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jansen, S.D.

    1981-09-01

The report was prepared as part of the Ohio River Basin Energy Study (ORBES), a multidisciplinary policy research program. The ORBES region consists of all of Kentucky, most of West Virginia, substantial parts of Illinois, Indiana, and Ohio, and southwestern Pennsylvania. The inventory lists installed electrical generating capacity in commercial service as of December 1, 1976, and scheduled capacity additions and removals between 1977 and 1986 in the six ORBES states (Illinois, Indiana, Kentucky, Ohio, Pennsylvania, and West Virginia). The following information is included for each electrical generating unit: unit ID code, company index, whether joint or industrial ownership, plant name, whether inside or outside the ORBES region, FIPS county code, type of unit, size in megawatts, type of megawatt rating, status of unit, date of commercial operation (actual or scheduled), scheduled retirement date (if any), primary fuel, alternate fuel, type of cooling, source of cooling water, and source of information.

  15. The Focus of Spatial Attention Determines the Number and Precision of Face Representations in Working Memory.

    PubMed

    Towler, John; Kelly, Maria; Eimer, Martin

    2016-06-01

    The capacity of visual working memory for faces is extremely limited, but the reasons for these limitations remain unknown. We employed event-related brain potential measures to demonstrate that individual faces have to be focally attended in order to be maintained in working memory, and that attention is allocated to only a single face at a time. When 2 faces have to be memorized simultaneously in a face identity-matching task, the focus of spatial attention during encoding predicts which of these faces can be successfully maintained in working memory and matched to a subsequent test face. We also show that memory representations of attended faces are maintained in a position-dependent fashion. These findings demonstrate that the limited capacity of face memory is directly linked to capacity limits of spatial attention during the encoding and maintenance of individual face representations. We suggest that the capacity and distribution of selective spatial attention is a dynamic resource that constrains the capacity and fidelity of working memory for faces. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  16. Quantity not quality: The relationship between fluid intelligence and working memory capacity

    PubMed Central

    Fukuda, Keisuke; Vogel, Edward; Mayr, Ulrich; Awh, Edward

    2010-01-01

    A key motivation for understanding capacity in working memory (WM) is its relationship with fluid intelligence. Recent evidence has suggested a 2-factor model that distinguishes between the number of representations that can be maintained in WM and the resolution of those representations. To determine how these factors relate to fluid intelligence, we conducted an exploratory factor analysis on multiple number-limited and resolution-limited measures of WM ability. The results strongly supported the 2-factor model, with fully orthogonal factors accounting for performance in the number-limited and resolution-limited conditions. Furthermore, the reliable relationship between WM capacity and fluid intelligence was exclusively supported by the number factor (r = .66), while the resolution factor made no reliable contribution (r = −.05). Thus, the relationship between WM capacity and standard measures of fluid intelligence is mediated by the number of representations that can be simultaneously maintained in WM rather than by the precision of those representations. PMID:21037165

  17. Stimulus discriminability in visual search.

    PubMed

    Verghese, P; Nakayama, K

    1994-09-01

    We measured the probability of detecting the target in a visual search task, as a function of the following parameters: the discriminability of the target from the distractors, the duration of the display, and the number of elements in the display. We examined the relation between these parameters at criterion performance (80% correct) to determine if the parameters traded off according to the predictions of a limited capacity model. For the three dimensions that we studied, orientation, color, and spatial frequency, the observed relationship between the parameters deviates significantly from a limited capacity model. The data relating discriminability to display duration are better than predicted over the entire range of orientation and color differences that we examined, and are consistent with the prediction for only a limited range of spatial frequency differences--from 12 to 23%. The relation between discriminability and number varies considerably across the three dimensions and is better than the limited capacity prediction for two of the three dimensions that we studied. Orientation discrimination shows a strong number effect, color discrimination shows almost no effect, and spatial frequency discrimination shows an intermediate effect. The different trading relationships in each dimension are more consistent with early filtering in that dimension, than with a common limited capacity stage. Our results indicate that higher-level processes that group elements together also play a strong role. Our experiments provide little support for limited capacity mechanisms over the range of stimulus differences that we examined in three different dimensions.
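    One common instance of the limited-capacity prediction tested here is the sample-size model, in which d' grows as the square root of processing time divided by the number of display elements, so holding performance at criterion requires display duration to scale linearly with set size. A minimal sketch of that prediction (illustrative only, not the authors' fitted model):

```python
import numpy as np

def required_duration(n_items, d_target, rate=1.0):
    """Display duration needed to reach d' = d_target under a sample-size
    limited-capacity model: d' = sqrt(rate * T / N)  =>  T = N * d'**2 / rate.
    Illustrative model, not the authors' fitted equations."""
    return n_items * d_target**2 / rate

# Under this model, doubling the number of display elements doubles the
# duration needed to hold performance at criterion.
t4 = required_duration(4, d_target=1.5)
t8 = required_duration(8, d_target=1.5)
print(t4, t8)
```

    The reported deviations from this linear trade-off are what argue against a single common limited-capacity stage.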

  18. Mental Capacity and Mental Health Acts part 1: advance decisions.

    PubMed

    Griffith, Richard

    The Department of Health is undertaking a review of the Mental Health Act 1983 code of practice and as part of that review has opened a consultation on what changes should be made. One key area for change is a chapter that provides clearer information about the interface between the Mental Health Act 1983 and the Mental Capacity Act 2005. Both the House of Commons Health Select Committee and the House of Lords Mental Capacity Act Committee have argued that poor understanding of the interface has led to flawed decision making by doctors and nurses. In the first of a short series of articles, Richard Griffith considers the interface between these two important statutes, beginning with advance decisions to refuse treatment (ADRT).

  19. Electronic data processing codes for California wildland plants

    Treesearch

    Merton J. Reed; W. Robert Powell; Bur S. Bal

    1963-01-01

    Systematized codes for plant names are helpful to a wide variety of workers who must record the identity of plants in the field. We have developed such codes for a majority of the vascular plants encountered on California wildlands and have published the codes in pocket size, using photo-reductions of the output from data processing machines. A limited number of the...

  20. 12 CFR 573.12 - Limits on sharing account number information for marketing purposes.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... access number or access code, does not include a number or code in an encrypted form, as long as you do... account number or similar form of access number or access code for a consumer's credit card account... or access code: (1) To your agent or service provider solely in order to perform marketing for your...

  1. 17 CFR 248.12 - Limits on sharing account number information for marketing purposes.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... number or access code, does not include a number or code in an encrypted form, as long as you do not... agency, an account number or similar form of access number or access code for a consumer's credit card... number or access code: (1) To your agent or service provider solely in order to perform marketing for...

  2. 12 CFR 573.12 - Limits on sharing account number information for marketing purposes.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... access number or access code, does not include a number or code in an encrypted form, as long as you do... reporting agency, an account number or similar form of access number or access code for a consumer's credit... number or access code: (1) To your agent or service provider solely in order to perform marketing for...

  3. 12 CFR 573.12 - Limits on sharing account number information for marketing purposes.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... access number or access code, does not include a number or code in an encrypted form, as long as you do... reporting agency, an account number or similar form of access number or access code for a consumer's credit... number or access code: (1) To your agent or service provider solely in order to perform marketing for...

  4. 12 CFR 40.12 - Limits on sharing account number information for marketing purposes.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... similar form of access number or access code, does not include a number or code in an encrypted form, as... reporting agency, an account number or similar form of access number or access code for a consumer's credit... number or access code: (1) To the bank's agent or service provider solely in order to perform marketing...

  5. 17 CFR 248.12 - Limits on sharing account number information for marketing purposes.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... number or access code, does not include a number or code in an encrypted form, as long as you do not... agency, an account number or similar form of access number or access code for a consumer's credit card... number or access code: (1) To your agent or service provider solely in order to perform marketing for...

  6. 12 CFR 40.12 - Limits on sharing account number information for marketing purposes.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... similar form of access number or access code, does not include a number or code in an encrypted form, as... reporting agency, an account number or similar form of access number or access code for a consumer's credit... number or access code: (1) To the bank's agent or service provider solely in order to perform marketing...

  7. 12 CFR 40.12 - Limits on sharing account number information for marketing purposes.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... similar form of access number or access code, does not include a number or code in an encrypted form, as... reporting agency, an account number or similar form of access number or access code for a consumer's credit... number or access code: (1) To the bank's agent or service provider solely in order to perform marketing...

  8. 17 CFR 160.12 - Limits on sharing account number information for marketing purposes.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... form of access number or access code, does not include a number or code in an encrypted form, as long... consumer reporting agency, an account number or similar form of access number or access code for a consumer... similar form of access number or access code: (1) To your agent or service provider solely in order to...

  9. 12 CFR 716.12 - Limits on sharing of account number information for marketing purposes.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... form of access number or access code, does not include a number or code in an encrypted form, as long... consumer reporting agency, an account number or similar form of access number or access code for a consumer... similar form of access number or access code: (1) To your agent or service provider solely in order to...

  10. 17 CFR 248.12 - Limits on sharing account number information for marketing purposes.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ..., or similar form of access number or access code, does not include a number or code in an encrypted... consumer reporting agency, an account number or similar form of access number or access code for a consumer... similar form of access number or access code: (1) To your agent or service provider solely in order to...

  11. 12 CFR 716.12 - Limits on sharing of account number information for marketing purposes.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... form of access number or access code, does not include a number or code in an encrypted form, as long... consumer reporting agency, an account number or similar form of access number or access code for a consumer... similar form of access number or access code: (1) To your agent or service provider solely in order to...

  12. 17 CFR 248.12 - Limits on sharing account number information for marketing purposes.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... number or access code, does not include a number or code in an encrypted form, as long as you do not... agency, an account number or similar form of access number or access code for a consumer's credit card... number or access code: (1) To your agent or service provider solely in order to perform marketing for...

  13. 12 CFR 40.12 - Limits on sharing account number information for marketing purposes.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... similar form of access number or access code, does not include a number or code in an encrypted form, as... reporting agency, an account number or similar form of access number or access code for a consumer's credit... number or access code: (1) To the bank's agent or service provider solely in order to perform marketing...

  14. 12 CFR 573.12 - Limits on sharing account number information for marketing purposes.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... access number or access code, does not include a number or code in an encrypted form, as long as you do... account number or similar form of access number or access code for a consumer's credit card account... or access code: (1) To your agent or service provider solely in order to perform marketing for your...

  15. 12 CFR 716.12 - Limits on sharing of account number information for marketing purposes.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... form of access number or access code, does not include a number or code in an encrypted form, as long... consumer reporting agency, an account number or similar form of access number or access code for a consumer... similar form of access number or access code: (1) To your agent or service provider solely in order to...

  16. What the success of brain imaging implies about the neural code

    PubMed Central

    Guest, Olivia; Love, Bradley C

    2017-01-01

    The success of fMRI places constraints on the nature of the neural code. The fact that researchers can infer similarities between neural representations, despite fMRI’s limitations, implies that certain neural coding schemes are more likely than others. For fMRI to succeed given its low temporal and spatial resolution, the neural code must be smooth at the voxel and functional level such that similar stimuli engender similar internal representations. Through proof and simulation, we determine which coding schemes are plausible given both fMRI’s successes and its limitations in measuring neural activity. Deep neural network approaches, which have been forwarded as computational accounts of the ventral stream, are consistent with the success of fMRI, though functional smoothness breaks down in the later network layers. These results have implications for the nature of the neural code and ventral stream, as well as what can be successfully investigated with fMRI. DOI: http://dx.doi.org/10.7554/eLife.21397.001 PMID:28103186
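    The functional-smoothness argument can be illustrated with a toy simulation: coarse "voxel" averaging preserves similarity structure for a smoothly tuned population code but destroys it for an arbitrary, hash-like code. Everything below is a hypothetical construction, not the paper's proof or simulations.

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_stim = 200, 20
stim = np.linspace(0, 1, n_stim)

# Smooth code: Gaussian-tuned neurons -> similar stimuli, similar patterns.
centers = rng.uniform(0, 1, n_neurons)
smooth = np.exp(-(stim[:, None] - centers[None, :])**2 / (2 * 0.1**2))

# Non-smooth code: each stimulus gets an unrelated random pattern.
rough = rng.standard_normal((n_stim, n_neurons))

def voxel_similarity(code, voxel_size=20):
    # Average neurons into coarse "voxels", then correlate the voxel
    # patterns of neighboring (i.e., similar) stimuli.
    voxels = code.reshape(n_stim, -1, voxel_size).mean(axis=2)
    r = [np.corrcoef(voxels[i], voxels[i + 1])[0, 1] for i in range(n_stim - 1)]
    return float(np.mean(r))

print(voxel_similarity(smooth), voxel_similarity(rough))
```

    Only the smooth code yields high voxel-level similarity for similar stimuli, which is the property fMRI similarity analyses rely on.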

  17. An assessment of multibody simulation tools for articulated spacecraft

    NASA Technical Reports Server (NTRS)

    Man, Guy K.; Sirlin, Samuel W.

    1989-01-01

    A survey of multibody simulation codes was conducted in the spring of 1988, to obtain an assessment of the state of the art in multibody simulation codes from the users of the codes. This survey covers the most often used articulated multibody simulation codes in the spacecraft and robotics community. There was no attempt to perform a complete survey of all available multibody codes in all disciplines. Furthermore, this is not an exhaustive evaluation of even robotics and spacecraft multibody simulation codes, as the survey was designed to capture feedback on issues most important to the users of simulation codes. We must keep in mind that the information received was limited and the technical background of the respondents varied greatly. Therefore, only the most often cited observations from the questionnaire are reported here. In this survey, it was found that no one code had both many users (reports) and no limitations. The first section is a report on multibody code applications. Following applications is a discussion of execution time, which is the most troublesome issue for flexible multibody codes. The representation of component flexible bodies, which affects both simulation setup time as well as execution time, is presented next. Following component data preparation, two sections address the accessibility or usability of a code, evaluated by considering its user interface design and examining the overall simulation integrated environment. A summary of user efforts at code verification is reported, before a tabular summary of the questionnaire responses. Finally, some conclusions are drawn.

  18. [The precautionary principle and the deontology code].

    PubMed

    Glorion, B

    2000-01-01

    Whatever the doctor's mode of practice or specialty, the medical act always involves risks for the patient. It is essential for a doctor to know the beneficial effects of a therapy and to have the capacity to weigh its advantages against its disadvantages. The Code of ethics states this formally in several of its articles and points out the need to strike the right balance: "to recognize the limits of one's competence, and not to expose the patient to an unjustified risk". But ever-accelerating progress carries ever greater risks. Prevention seeks to set aside known and identified risks, and diligence, prudence and competence remain the essential virtues of the practitioner. Precaution goes further, challenging the decision maker to consider risks that are still unknown but which, if realized, could engage his responsibility. This worrying progression raises questions within the medical profession that for the moment have no satisfactory answers. On first analysis, the precautionary principle seems better suited to public health actions than to an individual medical act. Measuring risks for a defined population can be done rigorously thanks to epidemiology; but guarding against the totality of the risks incurred by a given individual following a medical decision would be to deny the very nature of man, who is by definition an individual, bearing witness to his personality, his autonomy and his own identity.

  19. Response surface modeling of boron adsorption from aqueous solution by vermiculite using different adsorption agents: Box-Behnken experimental design.

    PubMed

    Demirçivi, Pelin; Saygılı, Gülhayat Nasün

    2017-07-01

    In this study, a different method was applied for boron removal by using vermiculite as the adsorbent. Vermiculite, which was used in the experiments, was not modified with adsorption agents before boron adsorption using a separate process. Hexadecyltrimethylammonium bromide (HDTMA) and Gallic acid (GA) were used as adsorption agents for vermiculite by maintaining the solid/liquid ratio at 12.5 g/L. HDTMA/GA concentration, contact time, pH, initial boron concentration, inert electrolyte and temperature effects on boron adsorption were analyzed. A three-factor, three-level Box-Behnken design model combined with response surface method (RSM) was employed to examine and optimize process variables for boron adsorption from aqueous solution by vermiculite using HDTMA and GA. Solution pH (2-12), temperature (25-60 °C) and initial boron concentration (50-8,000 mg/L) were chosen as independent variables and coded x1, x2 and x3 at three levels (-1, 0 and 1). Analysis of variance was used to test the significance of variables and their interactions with a 95% confidence limit (α = 0.05). According to the regression coefficients, a second-order empirical equation was evaluated between the adsorption capacity (qi) and the coded variables tested (xi). Optimum values of the variables were also evaluated for maximum boron adsorption by vermiculite-HDTMA (HDTMA-Verm) and vermiculite-GA (GA-Verm).
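    A second-order RSM fit to coded Box-Behnken variables amounts to ordinary least squares on a 10-term quadratic polynomial. The sketch below uses a hypothetical response surface (made-up coefficients, not the study's data) to show the design matrix and the fit.

```python
import numpy as np

rng = np.random.default_rng(2)

# Coded levels (-1, 0, 1) for three factors: a Box-Behnken design has
# 12 edge-midpoint runs plus center-point replicates.
edges = [(i, j) for i in range(3) for j in range(i + 1, 3)]
X = []
for i, j in edges:
    for a in (-1, 1):
        for b in (-1, 1):
            row = [0, 0, 0]
            row[i], row[j] = a, b
            X.append(row)
X += [[0, 0, 0]] * 3  # center replicates
X = np.array(X, float)

def design_matrix(X):
    # Full second-order model: intercept, linear, interaction, quadratic terms.
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1 * x2, x1 * x3, x2 * x3,
                            x1**2, x2**2, x3**2])

# Hypothetical "true" surface for the adsorption capacity q (coded units):
beta_true = np.array([20, 3, -1, 5, 0.5, 0, -0.8, -2, -1.5, -4])
y = design_matrix(X) @ beta_true + 0.01 * rng.standard_normal(len(X))

beta_hat, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)
print(np.round(beta_hat, 2))
```

    With 15 runs and 10 parameters, the design leaves a few degrees of freedom for lack-of-fit testing, which is the usual reason for the center replicates.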

  20. On the capacity of MIMO-OFDM based diversity and spatial multiplexing in Radio-over-Fiber system

    NASA Astrophysics Data System (ADS)

    El Yahyaoui, Moussa; El Moussati, Ali; El Zein, Ghaïs

    2017-11-01

    This paper proposes a realistic and global simulation to predict the behavior of a Radio over Fiber (RoF) system before its realization. In this work we consider a 2 × 2 Multiple-Input Multiple-Output (MIMO) Orthogonal Frequency Division Multiplexing (OFDM) RoF system at 60 GHz. This system is based on Spatial Diversity (SD), which increases reliability (decreases probability of error), and Spatial Multiplexing (SMX), which increases data rate but not necessarily reliability. The 60 GHz MIMO channel model employed in this work is the Triple-S and Valenzuela (TSV) model, which is based on extensive measured data and statistical analysis. To the authors' best knowledge, this is the first time that this type of TSV channel model has been employed for a 60 GHz MIMO-RoF system. We have evaluated and compared the performance of this system according to the diversity technique, modulation schemes, and channel coding rate for a Line-Of-Sight (LOS) desktop environment. Coded SMX is proposed as an intermediate system to improve the Signal to Noise Ratio (SNR) and the data rate. The resulting 2 × 2 MIMO-OFDM SMX system achieves a data rate of up to 70 Gb/s with 64QAM and a Forward Error Correction (FEC) limit of 10^-3 over 25-km fiber transmission followed by 3-m wireless transmission using 7 GHz bandwidth of the millimeter wave band.
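    For context, the baseline quantity behind such comparisons is the Shannon capacity of a MIMO channel with equal power allocation. The sketch below computes it for an idealized channel; the matrix H, SNR, and dimensions are illustrative assumptions, unrelated to the paper's TSV-channel simulations.

```python
import numpy as np

def mimo_capacity(H, snr_linear):
    """Shannon capacity (bits/s/Hz) of a MIMO channel with equal power
    allocation across Nt antennas: C = log2 det(I + (snr/Nt) * H H^H)."""
    nr, nt = H.shape
    gram = np.eye(nr) + (snr_linear / nt) * H @ H.conj().T
    return float(np.real(np.log2(np.linalg.det(gram))))

# Ideal 2x2 channel (H = I): two parallel streams, each at half the power.
snr = 10 ** (20 / 10)  # 20 dB
c = mimo_capacity(np.eye(2), snr)
print(f"{c:.2f} bits/s/Hz")
```

    For H = I the expression reduces to 2·log2(1 + SNR/2), the two-parallel-stream case that motivates spatial multiplexing.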

  1. [Design of flat field holographic concave grating for near-infrared spectrophotometer].

    PubMed

    Xiang, Xian-Yi; Wen, Zhi-Yu

    2008-07-01

    Near-infrared spectrum analysis can be used to determine the nature or test quantitatively some chemical compositions by detecting molecular double frequency and multiple frequency absorption. It has been used in agriculture, biology, petrifaction, foodstuff, medicament, spinning and other fields. Near-infrared spectrophotometer is the main apparatus for near-infrared spectrum analysis, and the grating is the most important part of the apparatus. Based on holographic concave grating theory and optic design software CODE V, a flat field holographic concave grating for near-infrared spectrophotometer was designed from primary structure, which relied on global optimization of the software. The contradiction between wide spectrum bound and limited spectrum extension was resolved, aberrations were reduced successfully, spectrum information was utilized fully, and the optic structure of spectrometer was highly efficient. Using CODE V software, complex high-order aberration equations need not be solved, the result can be evaluated quickly, flat field and resolving power can be kept in balance, and the work efficiency is also enhanced. A paradigm of flat field holographic concave grating is given, it works between 900 nm to 1 700 nm, the diameter of the concave grating is 25 mm, and F/ # is 1. 5. The design result was analyzed and evaluated. It was showed that if the slit source, whose width is 50 microm, is used to reconstruction, the theoretic resolution capacity is better than 6.3 nm.

  2. Winter photosynthesis in red spruce (Picea rubens Sarg.): limitations, potential benefits, and risks

    Treesearch

    P.G. Schaberg

    2000-01-01

    Numerous cold-induced changes in physiology limit the capacity of northern conifers to photosynthesize during winter. Studies of red spruce (Picea rubens Sarg.) have shown that rates of field photosynthesis (Pfield) and laboratory measurements of photosynthetic capacity (Pmax) generally parallel seasonal...

  3. 47 CFR 101.521 - Spectrum utilization.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... applicants for DEMS frequencies in the 10.6 GHz band must submit as part of the original application a... contain detailed descriptions of the modulation method, the channel time sharing method, any error detecting and/or correcting codes, any spatial frequency reuse system and the total data throughput capacity...

  4. Using individual differences to test the role of temporal and place cues in coding frequency modulation

    PubMed Central

    Whiteford, Kelly L.; Oxenham, Andrew J.

    2015-01-01

    The question of how frequency is coded in the peripheral auditory system remains unresolved. Previous research has suggested that slow rates of frequency modulation (FM) of a low carrier frequency may be coded via phase-locked temporal information in the auditory nerve, whereas FM at higher rates and/or high carrier frequencies may be coded via a rate-place (tonotopic) code. This hypothesis was tested in a cohort of 100 young normal-hearing listeners by comparing individual sensitivity to slow-rate (1-Hz) and fast-rate (20-Hz) FM at a carrier frequency of 500 Hz with independent measures of phase-locking (using dynamic interaural time difference, ITD, discrimination), level coding (using amplitude modulation, AM, detection), and frequency selectivity (using forward-masking patterns). All FM and AM thresholds were highly correlated with each other. However, no evidence was obtained for stronger correlations between measures thought to reflect phase-locking (e.g., slow-rate FM and ITD sensitivity), or between measures thought to reflect tonotopic coding (fast-rate FM and forward-masking patterns). The results suggest that either psychoacoustic performance in young normal-hearing listeners is not limited by peripheral coding, or that similar peripheral mechanisms limit both high- and low-rate FM coding. PMID:26627783

  5. Using individual differences to test the role of temporal and place cues in coding frequency modulation.

    PubMed

    Whiteford, Kelly L; Oxenham, Andrew J

    2015-11-01

    The question of how frequency is coded in the peripheral auditory system remains unresolved. Previous research has suggested that slow rates of frequency modulation (FM) of a low carrier frequency may be coded via phase-locked temporal information in the auditory nerve, whereas FM at higher rates and/or high carrier frequencies may be coded via a rate-place (tonotopic) code. This hypothesis was tested in a cohort of 100 young normal-hearing listeners by comparing individual sensitivity to slow-rate (1-Hz) and fast-rate (20-Hz) FM at a carrier frequency of 500 Hz with independent measures of phase-locking (using dynamic interaural time difference, ITD, discrimination), level coding (using amplitude modulation, AM, detection), and frequency selectivity (using forward-masking patterns). All FM and AM thresholds were highly correlated with each other. However, no evidence was obtained for stronger correlations between measures thought to reflect phase-locking (e.g., slow-rate FM and ITD sensitivity), or between measures thought to reflect tonotopic coding (fast-rate FM and forward-masking patterns). The results suggest that either psychoacoustic performance in young normal-hearing listeners is not limited by peripheral coding, or that similar peripheral mechanisms limit both high- and low-rate FM coding.

  6. 77 FR 12823 - Solicitation of Comments on a Proposed Change to the Disclosure Limitation Policy for Information...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-02

    ... policy for information reported on fuel ethanol production capacity, (both nameplate and maximum... fuel ethanol production capacity, (both nameplate and maximum sustainable capacity) on Form EIA-819 as... treat all information reported on fuel ethanol production capacity, (both nameplate and maximum...

  7. Code development for ships -- A demonstration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ayyub, B.; Mansour, A.E.; White, G.

    1996-12-31

    A demonstration summary of a reliability-based structural design code for ships is presented for two ship types, a cruiser and a tanker. For both ship types, code requirements cover four failure modes: hull girder buckling, unstiffened plate yielding and buckling, stiffened plate buckling, and fatigue of critical details. Both serviceability and ultimate limit states are considered. Because of length limitations, only hull girder modes are presented in this paper; code requirements for the other modes will be presented in a future publication. A specific provision of the code is a safety check expression. The design variables are to be taken at their nominal values, typically values on the safe side of the respective distributions. Other safety check expressions for hull girder failure that include load combination factors, as well as consequence-of-failure factors, are considered. This paper provides a summary of safety check expressions for the hull girder modes.
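    A generic LRFD-style safety check of the kind described, with nominal design variables and partial safety factors, can be sketched as follows. The factor values and load case are hypothetical, not the calibrated factors of the proposed ship code.

```python
def safety_check(resistance_nominal, phi, loads):
    """Generic LRFD-style check: phi * R_n >= sum(gamma_i * Q_i).
    `loads` is a list of (nominal_load, load_factor) pairs.
    Illustrative only -- not the calibrated ship-code factors."""
    demand = sum(q * gamma for q, gamma in loads)
    return phi * resistance_nominal >= demand

# Hypothetical hull-girder bending check (units: MN*m):
ok = safety_check(resistance_nominal=5000, phi=0.85,
                  loads=[(2500, 1.2),   # stillwater bending moment
                         (900, 1.3)])   # wave-induced bending moment
print(ok)
```

    The load factors play the role of the load combination factors mentioned in the abstract; a consequence-of-failure factor would further scale the resistance side.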

  8. Advanced Imaging Optics Utilizing Wavefront Coding.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scrymgeour, David; Boye, Robert; Adelsberger, Kathleen

    2015-06-01

    Image processing offers a potential to simplify an optical system by shifting some of the imaging burden from lenses to the more cost-effective electronics. Wavefront coding using a cubic phase plate combined with image processing can extend the system's depth of focus, reducing many of the focus-related aberrations as well as material-related chromatic aberrations. However, the optimal design process and physical limitations of wavefront coding systems with respect to first-order optical parameters and noise are not well documented. We examined image quality of simulated and experimental wavefront-coded images before and after reconstruction in the presence of noise. Challenges in the implementation of cubic phase in an optical system are discussed. In particular, we found that limits must be placed on system noise, aperture, field of view and bandwidth to develop a robust wavefront-coded system.
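    The defocus invariance that motivates wavefront coding can be reproduced in a few lines of Fourier optics: with a cubic phase mask, the point spread function changes far less with defocus than without it. The pupil sampling, mask strength `alpha`, and defocus values below are illustrative assumptions, not the paper's system parameters.

```python
import numpy as np

n = 128
x = np.linspace(-1, 1, n)
X, Y = np.meshgrid(x, x)
pupil = (X**2 + Y**2 <= 1).astype(float)

def psf(defocus_waves, alpha=0.0):
    """PSF of a circular pupil with defocus and an optional cubic phase
    mask, phi = alpha * (x^3 + y^3) waves. Illustrative scaling."""
    phase = 2 * np.pi * (defocus_waves * (X**2 + Y**2) + alpha * (X**3 + Y**3))
    field = pupil * np.exp(1j * phase)
    p = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2
    return p / p.sum()

def similarity(a, b):
    # Cosine similarity between two intensity PSFs.
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

# With the cubic mask, the PSF is nearly invariant to 2 waves of defocus.
plain = similarity(psf(0.0), psf(2.0))
coded = similarity(psf(0.0, alpha=10), psf(2.0, alpha=10))
print(plain, coded)
```

    The nearly defocus-invariant coded PSF is what allows a single deconvolution kernel to restore images over an extended depth of focus.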

  9. DREAM-3D and the importance of model inputs and boundary conditions

    NASA Astrophysics Data System (ADS)

    Friedel, Reiner; Tu, Weichao; Cunningham, Gregory; Jorgensen, Anders; Chen, Yue

    2015-04-01

    Recent work on radiation belt 3D diffusion codes, such as the Los Alamos "DREAM-3D" code, has demonstrated the ability of such codes to reproduce realistic magnetospheric storm events in the relativistic electron dynamics, as long as sufficient "event-oriented" boundary conditions and code inputs, such as wave powers, low-energy boundary conditions, background plasma densities, and the last closed drift shell (outer boundary), are available. In this talk we will argue that the main limiting factor in our modeling ability is no longer our inability to represent the key physical processes that govern the dynamics of the radiation belts (radial, pitch angle, and energy diffusion) but rather our limitations in specifying accurate boundary conditions and code inputs. Here we use DREAM-3D runs to show the sensitivity of the modeled outcomes to these boundary conditions and inputs, and also discuss alternate "proxy" approaches for obtaining the required inputs from other (ground-based) sources.
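    The sensitivity to boundary conditions can be made concrete with a minimal explicit finite-difference step of the 1-D radial diffusion equation, df/dt = L^2 d/dL (D_LL / L^2 * df/dL), in which the outer-boundary value is an explicit input. This is a toy sketch with made-up grid and diffusion values, not the DREAM-3D implementation.

```python
import numpy as np

def radial_diffusion_step(f, L, D_LL, dt, f_outer):
    """One explicit step of df/dt = L^2 d/dL (D_LL / L^2 * df/dL).
    The outer-boundary value f_outer is an *input*, mirroring the point
    that boundary conditions drive the solution. Illustrative sketch."""
    dL = L[1] - L[0]
    f = f.copy()
    f[-1] = f_outer  # last closed drift shell (outer boundary)
    # Fluxes at cell interfaces, then divergence on interior points.
    flux = D_LL[:-1] / (0.5 * (L[:-1] + L[1:]))**2 * np.diff(f) / dL
    f[1:-1] += dt * L[1:-1]**2 * np.diff(flux) / dL
    return f

# Demo: a raised outer boundary diffuses inward over time (hypothetical values).
L = np.linspace(3, 7, 41)
f = np.ones_like(L)
for _ in range(200):
    f = radial_diffusion_step(f, L, np.full_like(L, 1e-2), 1e-3, f_outer=2.0)
print(f[-2])
```

    Changing only `f_outer` changes the interior solution, which is the sense in which the outer boundary, rather than the diffusion operator itself, limits the model.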

  10. Capacity for conducting systematic reviews in low- and middle-income countries: a rapid appraisal.

    PubMed

    Oliver, Sandy; Bangpan, Mukdarut; Stansfield, Claire; Stewart, Ruth

    2015-04-26

    Systematic reviews of research are increasingly recognised as important for informing decisions across policy sectors and for setting priorities for research. Although reviews draw on international research, the host institutions and countries can focus attention on their own priorities. The uneven capacity for conducting research around the world raises questions about the capacity for conducting systematic reviews. A rapid appraisal was conducted of current capacity and capacity strengthening activities for conducting systematic reviews in low- and middle-income countries (LMICs). A systems approach to analysis considered the capacity of individuals nested within the larger units of research teams, institutions that fund, support, and/or conduct systematic reviews, and systems that support systematic reviewing internationally. International systematic review networks, and their support organisations, are dominated by members from high-income countries. The largest network comprising a skilled workforce and established centres is the Cochrane Collaboration. Other networks, although smaller, provide support for systematic reviews addressing questions beyond effective clinical practice which require a broader range of methods. Capacity constraints were apparent at the levels of individuals, review teams, organisations, and system wide. Constraints at each level limited the capacity at levels nested within them. Skills training for individuals had limited utility if not allied to opportunities for review teams to practice the skills. Skills development was further constrained by language barriers, lack of support from academic organisations, and the limitations of wider systems for communication and knowledge management. All networks hosted some activities for strengthening the capacities of individuals and teams, although these were usually independent of core academic programmes and traditional career progression. Even rarer were efforts to increase demand for systematic reviews and to strengthen links between producers and potential users of systematic reviews. Limited capacity for conducting systematic reviews within LMICs presents a major technical and social challenge to advancing their health systems. Effective capacity in LMICs can be spread through investing effort at multiple levels simultaneously, supported by countries (predominantly high-income countries) with established skills and experience.

  11. Categorical Variables in Multiple Regression: Some Cautions.

    ERIC Educational Resources Information Center

    O'Grady, Kevin E.; Medoff, Deborah R.

    1988-01-01

    Limitations of dummy coding and nonsense coding as methods of coding categorical variables for use as predictors in multiple regression analysis are discussed. The combination of these approaches often yields estimates and tests of significance that are not intended by researchers for inclusion in their models. (SLD)

  12. Overview of Recent Radiation Transport Code Comparisons for Space Applications

    NASA Astrophysics Data System (ADS)

    Townsend, Lawrence

    Recent advances in radiation transport code development for space applications have resulted in various comparisons of code predictions for a variety of scenarios and codes. Comparisons among both Monte Carlo and deterministic codes have been made and published by various groups and collaborations, including comparisons involving, but not limited to, HZETRN, HETC-HEDS, FLUKA, GEANT, PHITS, and MCNPX. In this work, an overview of recent code prediction inter-comparisons, including comparisons to available experimental data, is presented and discussed, with emphasis on areas of agreement and disagreement among the various code predictions and published data.

  13. Utilization of recently developed codes for high power Brayton and Rankine cycle power systems

    NASA Technical Reports Server (NTRS)

    Doherty, Michael P.

    1993-01-01

    Two recently developed FORTRAN computer codes for high power Brayton and Rankine thermodynamic cycle analysis for space power applications are presented. The codes were written in support of an effort to develop a series of subsystem models for multimegawatt Nuclear Electric Propulsion, but their use is not limited just to nuclear heat sources or to electric propulsion. Code development background, a description of the codes, some sample input/output from one of the codes, and state future plans/implications for the use of these codes by NASA's Lewis Research Center are provided.

  14. Proceedings from a Workshop on Ecological Carrying Capacity of Salmonids in the Columbia River Basin : Measure 7.1A of the Northwest Power Planning Council's 1994 Fish and Wildlife Program : Report 3 of 4, Final Report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Gary E.; Neitzel, D.A.; Mavros, William V.

    1996-05-01

    This report contains the proceedings of a workshop held during 1995 in Portland, Oregon. The objective of the workshop was to assemble a group of experts that could help us define carrying capacity for Columbia River Basin salmonids. The workshop was one activity designed to answer the questions asked in Measure 7.1A of the Council's Fish and Wildlife Program. Based, in part, on the information we learned during the workshop, we concluded that the approach inherent in 7.1A will not increase understanding of ecology, carrying capacity, or limiting factors that influence salmon under current conditions. Measure 7.1A requires a definition of carrying capacity and a list of determinants (limiting factors) of capacity. The implication or inference then follows that asking what we know and do not know about the determinants will lead to research that increases our understanding of what is limiting salmon survival. It is then assumed that research results will point to management actions that can remove or repair the limiting factors. Most ecologists and fisheries scientists that have studied carrying capacity clearly conclude that this approach is an oversimplification of complex ecological processes. To pursue the capacity parameter, that is, a single number or set of numbers that quantify how many salmon the basin or any part of the basin can support, is meaningless by itself and will not provide useful information.

  15. A Computational Model of Spatial Visualization Capacity

    ERIC Educational Resources Information Center

    Lyon, Don R.; Gunzelmann, Glenn; Gluck, Kevin A.

    2008-01-01

    Visualizing spatial material is a cornerstone of human problem solving, but human visualization capacity is sharply limited. To investigate the sources of this limit, we developed a new task to measure visualization accuracy for verbally-described spatial paths (similar to street directions), and implemented a computational process model to…

  16. Information Processing in the Cerebral Hemispheres: Selective Hemispheric Activation and Capacity Limitations.

    ERIC Educational Resources Information Center

    Hellige, Joseph B.; And Others

    1979-01-01

    Five experiments are reported concerning the effect on visual information processing of concurrently maintaining verbal information. The results suggest that the left cerebral hemisphere functions as a typical limited-capacity information processing system that can be influenced somewhat separately from the right hemisphere system. (Author/CTM)

  17. Working memory is not fixed-capacity: More active storage capacity for real-world objects than for simple stimuli

    PubMed Central

    Brady, Timothy F.; Störmer, Viola S.; Alvarez, George A.

    2016-01-01

    Visual working memory is the cognitive system that holds visual information active to make it resistant to interference from new perceptual input. Information about simple stimuli (colors and orientations) is encoded into working memory rapidly: In under 100 ms, working memory "fills up," revealing a stark capacity limit. However, for real-world objects, the same behavioral limits do not hold: With increasing encoding time, people store more real-world objects and do so with more detail. This boost in performance for real-world objects is generally assumed to reflect the use of a separate episodic long-term memory system, rather than working memory. Here we show that this behavioral increase in capacity with real-world objects is not solely due to the use of separate episodic long-term memory systems. In particular, we show that this increase is a result of active storage in working memory, as shown by directly measuring neural activity during the delay period of a working memory task using EEG. These data challenge fixed-capacity working memory models and demonstrate that working memory and its capacity limitations are dependent upon our existing knowledge. PMID:27325767

  18. Working memory is not fixed-capacity: More active storage capacity for real-world objects than for simple stimuli.

    PubMed

    Brady, Timothy F; Störmer, Viola S; Alvarez, George A

    2016-07-05

    Visual working memory is the cognitive system that holds visual information active to make it resistant to interference from new perceptual input. Information about simple stimuli (colors and orientations) is encoded into working memory rapidly: In under 100 ms, working memory "fills up," revealing a stark capacity limit. However, for real-world objects, the same behavioral limits do not hold: With increasing encoding time, people store more real-world objects and do so with more detail. This boost in performance for real-world objects is generally assumed to reflect the use of a separate episodic long-term memory system, rather than working memory. Here we show that this behavioral increase in capacity with real-world objects is not solely due to the use of separate episodic long-term memory systems. In particular, we show that this increase is a result of active storage in working memory, as shown by directly measuring neural activity during the delay period of a working memory task using EEG. These data challenge fixed-capacity working memory models and demonstrate that working memory and its capacity limitations are dependent upon our existing knowledge.

  19. The business of pediatric hospital medicine.

    PubMed

    Percelay, Jack M; Zipes, David G

    2014-07-01

    Pediatric hospital medicine (PHM) programs are mission driven, not margin driven. Very rarely do professional fee revenues exceed physician billing collections. In general, inpatient hospital care codes reimburse less than procedures, payer mix is poor, and pediatric inpatient care is inherently time-consuming. Using traditional accounting principles, almost all PHM programs will have a negative bottom line in the narrow sense of program costs and revenues generated. However, well-run PHM programs contribute positively to the bottom line of the system as a whole through the value-added services hospitalists provide and hospitalists' ability to improve overall system efficiency and productivity. This article provides an overview of the business of hospital medicine with emphasis on the basics of designing and maintaining a program that attends carefully to physician staffing (the major cost component of a program) and physician charges (the major revenue component of the program). Outside of these traditional calculations, resource stewardship is discussed as a way to reduce hospital costs in a capitated or diagnosis-related group reimbursement model and further improve profit-or at least limit losses. Shortening length of stay creates bed capacity for a program already running at capacity. The article concludes with a discussion of how hospitalists add value to the system by making other providers and other parts of the hospital more efficient and productive. Copyright 2014, SLACK Incorporated.

  20. On the delay analysis of a TDMA channel with finite buffer capacity

    NASA Technical Reports Server (NTRS)

    Yan, T.-Y.

    1982-01-01

    The throughput performance of a TDMA channel with finite buffer capacity for transmitting data messages is considered. Each station has limited message buffer capacity and has Poisson message arrivals. Message arrivals will be blocked if the buffers are congested. Using the embedded Markov chain model, the solution procedure for the limiting system-size probabilities is presented in a recursive fashion. Numerical examples are given to demonstrate the tradeoffs between the blocking probabilities and the buffer sizing strategy.
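
    The flavor of a recursive solution for limiting probabilities in a finite-buffer system can be conveyed with a much simpler queue. The sketch below computes the blocking probability of an M/M/1/K queue; it is a stand-in that assumes exponential service and is not the embedded Markov chain model of the paper.

    ```python
    def blocking_probability(lam, mu, K):
        # Stationary blocking probability of an M/M/1/K queue: arrivals at
        # rate lam are lost when all K positions are occupied. The
        # unnormalised stationary probabilities p_n (proportional to rho**n)
        # are built recursively, echoing the recursive style of the paper.
        rho = lam / mu
        p = [1.0]
        for _ in range(K):
            p.append(p[-1] * rho)
        return p[K] / sum(p)

    # A larger buffer lowers blocking at fixed load:
    pb_small = blocking_probability(1.0, 2.0, 2)   # K = 2
    pb_large = blocking_probability(1.0, 2.0, 4)   # K = 4
    ```

    Sweeping K against the offered load reproduces the kind of blocking-versus-buffer-size tradeoff the numerical examples illustrate.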

  1. NASA Glenn Steady-State Heat Pipe Code Users Manual, DOS Input. Version 2

    NASA Technical Reports Server (NTRS)

    Tower, Leonard K.

    2000-01-01

    The heat pipe code LERCHP has been revised, corrected, and extended. New features include provisions for pipes with curvature and bends in "G" fields. Heat pipe limits are examined in detail and limit envelopes are shown for some sodium and lithium-filled heat pipes. Refluxing heat pipes and gas-loaded or variable conductance heat pipes were not considered.

  2. tRNA acceptor-stem and anticodon bases embed separate features of amino acid chemistry

    PubMed Central

    Carter, Charles W.; Wolfenden, Richard

    2016-01-01

    The universal genetic code is a translation table by which nucleic acid sequences can be interpreted as polypeptides with a wide range of biological functions. That information is used by aminoacyl-tRNA synthetases to translate the code. Moreover, amino acid properties dictate protein folding. We recently reported that digital correlation techniques could identify patterns in tRNA identity elements that govern recognition by synthetases. Our analysis, and the functionality of truncated synthetases that cannot recognize the tRNA anticodon, support the conclusion that the tRNA acceptor stem houses an independent code for the same 20 amino acids that likely functioned earlier in the emergence of genetics. The acceptor-stem code, related to amino acid size, is distinct from a code in the anticodon that is related to amino acid polarity. Details of the acceptor-stem code suggest that it was useful in preserving key properties of stereochemically-encoded peptides that had developed the capacity to interact catalytically with RNA. The quantitative embedding of the chemical properties of amino acids into tRNA bases has implications for the origins of molecular biology. PMID:26595350

  3. Combining local and global limitations of visual search.

    PubMed

    Põder, Endel

    2017-04-01

    There are different opinions about the roles of local interactions and central processing capacity in visual search. This study attempts to clarify the problem using a new version of relevant set cueing. A central precue indicates two symmetrical segments (that may contain a target object) within a circular array of objects presented briefly around the fixation point. The number of objects in the relevant segments, and density of objects in the array were varied independently. Three types of search experiments were run: (a) search for a simple visual feature (color, size, and orientation); (b) conjunctions of simple features; and (c) spatial configuration of simple features (rotated Ts). For spatial configuration stimuli, the results were consistent with a fixed global processing capacity and standard crowding zones. For simple features and their conjunctions, the results were different, dependent on the features involved. While color search exhibits virtually no capacity limits or crowding, search for an orientation target was limited by both. Results for conjunctions of features can be partly explained by the results from the respective features. This study shows that visual search is limited by both local interference and global capacity, and the limitations are different for different visual features.

  4. How Can the Operating Environment for Nutrition Research Be Improved in Sub-Saharan Africa? The Views of African Researchers

    PubMed Central

    Van Royen, Kathleen; Lachat, Carl; Holdsworth, Michelle; Smit, Karlien; Kinabo, Joyce; Roberfroid, Dominique; Nago, Eunice; Garimoi Orach, Christopher; Kolsteren, Patrick

    2013-01-01

    Optimal nutrition is critical for human development and economic growth. Sub-Saharan Africa is facing high levels of food insecurity, and only a few sub-Saharan African countries are on track to eradicate extreme poverty and hunger by 2015. Effective research capacity is crucial for addressing emerging challenges and designing appropriate mitigation strategies in sub-Saharan Africa. A clear understanding of the operating environment for nutrition research in sub-Saharan Africa is a much-needed prerequisite. We collected data on the barriers and requirements for conducting nutrition research in sub-Saharan Africa through semi-structured interviews with 144 participants involved in nutrition research in 35 countries in sub-Saharan Africa. A total of 133 interviews were retained for coding. The main barriers identified for effective nutrition research were the lack of funding due to poor recognition by policymakers of the importance of nutrition research and under-utilisation of research findings for developing policy, as well as an absence of research priority setting from within Africa. Current research topics were perceived to be mainly determined by funding bodies from outside Africa. Nutrition researchers argued for more commitment from policymakers at national level. The low capacity for nutrition research was mainly seen as a consequence of insufficient numbers of nutrition researchers, limited skills and a poor research infrastructure. In conclusion, African nutrition researchers argued that research priorities need to be identified by African stakeholders, with consensus building to create a problem-driven national research agenda. In addition, it was considered necessary to promote interactions among researchers, and between researchers and policymakers. Multidisciplinary research and international and cross-African collaboration were seen as crucial to build capacity in sub-Saharan nutrition research. PMID:23776663

  5. 77 FR 25716 - Fipronil; Receipt of Applications for Emergency Exemptions, Solicitation of Public Comment

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-01

    ... pesticide manufacturer. Potentially affected entities may include, but are not limited to: Crop production (NAICS code 111). Animal production (NAICS code 112). Food manufacturing (NAICS code 311). Pesticide... your concerns and suggest alternatives. vii. Explain your views as clearly as possible, avoiding the...

  6. 75 FR 28155 - Acephate, Cacodylic acid, Dicamba, Dicloran et al.; Proposed Tolerance Actions

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-19

    ... (NAICS code 112). Food manufacturing (NAICS code 311). Pesticide manufacturing (NAICS code 32532). This..., including food service, manufacturing and processing establishments, such as restaurants, cafeterias... concentration shall be limited to a maximum of 1.0 percent active ingredient. Contamination of food or food...

  7. 76 FR 26194 - Metarhizium anisopliae Strain F52; Exemption From the Requirement of a Tolerance

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-06

    ... sensitization--guinea pig (Harmonized Guideline 870.2600; MRID No. 448447-15). An acceptable dermal... pesticide manufacturer. Potentially affected entities may include, but are not limited to: Crop production (NAICS code 111). Animal production (NAICS code 112). Food manufacturing (NAICS code 311). Pesticide...

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petitpas, Guillaume; Whitesides, Russel

    UQHCCI_2 propagates the uncertainties of mass-average quantities (temperature, heat capacity ratio) and the output performances (IMEP, heat release, CA50 and RI) of a HCCI engine test bench using the pressure trace, and intake and exhaust molar fraction and IVC temperature distributions, as inputs (those inputs may be computed using another code UQHCCI_2, or entered independently).
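
    Propagating input distributions to output quantities, as described above, can be sketched with a tiny Monte Carlo loop: sample the uncertain inputs, push each sample through the model, and read off the output spread. Everything here (the toy model, the assumed distributions) is invented for illustration; UQHCCI's pressure-trace processing is far more involved.

    ```python
    import random
    import statistics

    def toy_engine_output(t_ivc, gamma):
        # Hypothetical stand-in for the engine model: a smooth function of
        # IVC temperature and heat capacity ratio. Not UQHCCI's model.
        return 0.01 * t_ivc * (gamma - 1.0)

    random.seed(0)
    samples = []
    for _ in range(10_000):
        t_ivc = random.gauss(400.0, 5.0)   # assumed IVC temperature [K]
        gamma = random.gauss(1.35, 0.01)   # assumed heat capacity ratio
        samples.append(toy_engine_output(t_ivc, gamma))

    mean_out = statistics.mean(samples)    # propagated output mean
    std_out = statistics.stdev(samples)    # propagated output uncertainty
    ```

    The output standard deviation reflects both input spreads weighted by the model's sensitivities, which is the essence of forward uncertainty propagation.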

  9. 75 FR 51615 - Establishment of Pakistan and Afghanistan Support Office

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-23

    ... Establishment of Pakistan and Afghanistan Support Office By the authority vested in me as President by the... 3161 of title 5, United States Code, a temporary organization to be known as the Pakistan and... strengthening the governments in Afghanistan and Pakistan, enhancing the capacity of those governments to resist...

  10. 32 CFR 534.2 - Allowable expenses for reporters.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... MILITARY COURT FEES § 534.2 Allowable expenses for reporters. (a) General. Reporters appointed under the Uniform Code of Military Justice, Article 28, are entitled to payment for their services in such capacity... to or greater than one-half hour, actually spent in court during the trial or hearing. A fractional...

  11. 32 CFR 534.2 - Allowable expenses for reporters.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... MILITARY COURT FEES § 534.2 Allowable expenses for reporters. (a) General. Reporters appointed under the Uniform Code of Military Justice, Article 28, are entitled to payment for their services in such capacity... to or greater than one-half hour, actually spent in court during the trial or hearing. A fractional...

  12. 32 CFR 534.2 - Allowable expenses for reporters.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... MILITARY COURT FEES § 534.2 Allowable expenses for reporters. (a) General. Reporters appointed under the Uniform Code of Military Justice, Article 28, are entitled to payment for their services in such capacity... to or greater than one-half hour, actually spent in court during the trial or hearing. A fractional...

  13. 24 CFR 248.420 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 24 Housing and Urban Development 2 2013-04-01 2013-04-01 false Definitions. 248.420 Section 248... OF LOW INCOME HOUSING MORTGAGES Technical Assistance and Capacity Building § 248.420 Definitions...) of the Internal Revenue Code of 1986; (2) Has been in existence for at least two years prior to the...

  14. 24 CFR 248.420 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 24 Housing and Urban Development 2 2011-04-01 2011-04-01 false Definitions. 248.420 Section 248... OF LOW INCOME HOUSING MORTGAGES Technical Assistance and Capacity Building § 248.420 Definitions...) of the Internal Revenue Code of 1986; (2) Has been in existence for at least two years prior to the...

  15. 24 CFR 248.420 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false Definitions. 248.420 Section 248... OF LOW INCOME HOUSING MORTGAGES Technical Assistance and Capacity Building § 248.420 Definitions...) of the Internal Revenue Code of 1986; (2) Has been in existence for at least two years prior to the...

  16. 24 CFR 248.420 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 24 Housing and Urban Development 2 2014-04-01 2014-04-01 false Definitions. 248.420 Section 248... OF LOW INCOME HOUSING MORTGAGES Technical Assistance and Capacity Building § 248.420 Definitions...) of the Internal Revenue Code of 1986; (2) Has been in existence for at least two years prior to the...

  17. 24 CFR 248.420 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 24 Housing and Urban Development 2 2012-04-01 2012-04-01 false Definitions. 248.420 Section 248... OF LOW INCOME HOUSING MORTGAGES Technical Assistance and Capacity Building § 248.420 Definitions...) of the Internal Revenue Code of 1986; (2) Has been in existence for at least two years prior to the...

  18. Active Cooperation Between Primary Users and Cognitive Radio Users in Heterogeneous Ad-Hoc Networks

    DTIC Science & Technology

    2012-04-01

    processing to wireless communications and networking, including space-time coding and modulation for MIMO wireless communications, MIMO-OFDM systems, and...multi-input multi-output (MIMO) system that can significantly increase the link capacity and realize a new form of spatial diversity which has been termed

  19. Climate change and temperature-dependent biogeography: oxygen limitation of thermal tolerance in animals.

    PubMed

    Pörtner, H O

    2001-04-01

    Recent years have shown a rise in mean global temperatures and a shift in the geographical distribution of ectothermic animals. For a cause and effect analysis the present paper discusses those physiological processes limiting thermal tolerance. The lower heat tolerance in metazoa compared with unicellular eukaryotes and bacteria suggests that a complex systemic rather than molecular process is limiting in metazoa. Whole-animal aerobic scope appears as the first process limited at low and high temperatures, linked to the progressively insufficient capacity of circulation and ventilation. Oxygen levels in body fluids may decrease, reflecting excessive oxygen demand at high temperatures or insufficient aerobic capacity of mitochondria at low temperatures. Aerobic scope falls at temperatures beyond the thermal optimum and vanishes at low or high critical temperatures when transition to an anaerobic mitochondrial metabolism occurs. The adjustment of mitochondrial densities on top of parallel molecular or membrane adjustments appears crucial for maintaining aerobic scope and for shifting thermal tolerance. In conclusion, the capacity of oxygen delivery matches full aerobic scope only within the thermal optimum. At temperatures outside this range, only time-limited survival is supported by residual aerobic scope, then anaerobic metabolism and finally molecular protection by heat shock proteins and antioxidative defence. In a cause and effect hierarchy, the progressive increase in oxygen limitation at extreme temperatures may even enhance oxidative and denaturation stress. As a corollary, capacity limitations at a complex level of organisation, the oxygen delivery system, define thermal tolerance limits before molecular functions become disturbed.

  20. [Variations in patient data coding affect hospital standardized mortality ratio (HSMR)].

    PubMed

    van den Bosch, Wim F; Silberbusch, Joseph; Roozendaal, Klaas J; Wagner, Cordula

    2010-01-01

    To investigate the impact of coding variations on the 'hospital standardized mortality ratio' (HSMR) and to define measures for reducing variation. Retrospective, descriptive. We analysed coding variations in HSMR parameters for main diagnosis, urgency of admission and comorbidity in the national medical registration (LMR) database of admissions in 6 Dutch top clinical hospitals during 2003-2007. More than a quarter of these admission records had been included in the HSMR calculation. Admissions with ICD-9 main diagnosis codes that were excluded from HSMR calculations were investigated for inter-hospital variability and correctness of exclusion. Variation in coding admission type was signalled by analysing admission records with diagnoses that were of an emergency nature by their title. Variation in the average number of comorbidity diagnoses per admission was determined as an indicator of coding variation. Interviews with coding teams were used to check whether the conclusions of the analysis were correct. Over 165,000 admissions that were excluded from HSMR calculations showed large variability between hospitals; this figure amounted to 40% of all admissions that were included. Of the admissions with a main diagnosis indicating an emergency, 34% to 93% were recorded as an emergency. The average number of comorbidity diagnoses varied between hospitals from 0.9 to 3.0 per admission. Coding of main diagnoses, urgency of admission and comorbidities showed strong inter-hospital variation with a potentially large impact on hospitals' HSMR outcomes. Coding variations originated from differences in interpretation of coding rules, differences in coding capacity, in the quality of patient records and discharge documentation, and in the timely delivery of these.

  1. Dual-balanced detection scheme with optical hard-limiters in an optical code division multiple access system

    NASA Astrophysics Data System (ADS)

    Liu, Maw-Yang; Hsu, Yi-Kai

    2017-03-01

    A three-arm dual-balanced detection scheme is studied in an optical code division multiple access system. As multiple-access interference (MAI) and beat noise are the main sources of performance degradation, we utilize optical hard-limiters to alleviate these channel impairments. In addition, once the channel condition is improved effectively, the proposed two-dimensional error correction code can remarkably enhance the system performance. In our proposed scheme, the optimal thresholds of the optical hard-limiters and the decision circuitry are fixed, and they do not change with other system parameters. Our proposed scheme can accommodate a large number of simultaneous users and is suitable for bursty traffic with asynchronous transmission. Therefore, it is highly recommended as a platform for broadband optical access networks.
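
    An ideal optical hard-limiter is simply a clipping nonlinearity. The sketch below, with an assumed unit threshold and hypothetical chip intensities, shows how clipping bounds the contribution of strong multiple-access interference before detection; the paper's three-arm dual-balanced receiver is considerably more elaborate.

    ```python
    def hard_limit(intensity, threshold=1.0):
        # Ideal optical hard-limiter: any received intensity above the
        # threshold is clipped to the threshold (threshold value assumed).
        return min(intensity, threshold)

    # Hypothetical chip intensities: interference from other users pushes
    # some chips well above a single user's power; clipping bounds the MAI.
    received = [0.0, 1.0, 3.0, 2.0]
    limited = [hard_limit(x) for x in received]
    assert limited == [0.0, 1.0, 1.0, 1.0]
    ```

    Because excess intensity on any chip is discarded, interferers can no longer masquerade as a strong on-chip signal, which is why hard-limiting improves the decision statistics.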

  2. Risk Informed Design and Analysis Criteria for Nuclear Structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salmon, Michael W.

    2015-06-17

    Target performance can be achieved by defining the design basis ground motion from the results of a probabilistic seismic hazards assessment and introducing known levels of conservatism in the design above the DBE. ASCE 4, ASCE 43, and DOE-STD-1020 define the DBE at an annual exceedance frequency of 4×10⁻⁴ and introduce only slight levels of conservatism in response. ASCE 4, ASCE 43, and DOE-STD-1020 assume code capacities aim for about 98% non-exceedance probability (NEP); there is a need for a uniform target (98% NEP) for code developers (ACI, AISC, etc.) to work toward. In considering strengthening options, one must also weigh cost against the risk reduction achieved.

  3. Source Listings for Computer Code SPIRALI Incompressible, Turbulent Spiral Grooved Cylindrical and Face Seals

    NASA Technical Reports Server (NTRS)

    Walowit, Jed A.; Shapiro, Wilbur

    2005-01-01

    This is the source listing of the computer code SPIRALI which predicts the performance characteristics of incompressible cylindrical and face seals with or without the inclusion of spiral grooves. Performance characteristics include load capacity (for face seals), leakage flow, power requirements and dynamic characteristics in the form of stiffness, damping and apparent mass coefficients in 4 degrees of freedom for cylindrical seals and 3 degrees of freedom for face seals. These performance characteristics are computed as functions of seal and groove geometry, load or film thickness, running and disturbance speeds, fluid viscosity, and boundary pressures.

  4. Extending the imaging volume for biometric iris recognition.

    PubMed

    Narayanswamy, Ramkumar; Johnson, Gregory E; Silveira, Paulo E X; Wach, Hans B

    2005-02-10

    The use of the human iris as a biometric has recently attracted significant interest in the area of security applications. The need to capture an iris without active user cooperation places demands on the optical system. Unlike a traditional optical design, in which a large imaging volume is traded off for diminished imaging resolution and capacity for collecting light, Wavefront Coded imaging is a computational imaging technology capable of expanding the imaging volume while maintaining an accurate and robust iris identification capability. We apply Wavefront Coded imaging to extend the imaging volume of the iris recognition application.

  5. Oscillator Neural Network Retrieving Sparsely Coded Phase Patterns

    NASA Astrophysics Data System (ADS)

    Aoyagi, Toshio; Nomura, Masaki

    1999-08-01

    Little is known theoretically about the associative memory capabilities of neural networks in which information is encoded not only in the mean firing rate but also in the timing of firings. Particularly, in the case of sparsely coded patterns, it is biologically important to consider the timings of firings and to study how such consideration influences storage capacities and quality of recalled patterns. For this purpose, we propose a simple extended model of oscillator neural networks to allow for expression of a nonfiring state. Analyzing both equilibrium states and dynamical properties in recalling processes, we find that the system possesses good associative memory.
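
    The phase dynamics underlying such oscillator associative memories can be illustrated with a minimal Kuramoto-type network. This sketch uses uniform attractive coupling between identical-frequency oscillators, whereas the model in the paper uses pattern-dependent couplings and an additional nonfiring state; it shows only the basic mechanism of phases pulling into synchrony.

    ```python
    import math

    def kuramoto_step(phases, coupling, dt=0.01):
        # One Euler step of an identical-frequency Kuramoto network with
        # uniform all-to-all coupling (an assumption; the paper's couplings
        # are Hebbian and pattern-specific).
        n = len(phases)
        new_phases = []
        for th_i in phases:
            drive = sum(math.sin(th_j - th_i) for th_j in phases) * coupling / n
            new_phases.append(th_i + dt * drive)
        return new_phases

    phases = [0.0, 1.0, 2.0, 3.0]
    for _ in range(5000):
        phases = kuramoto_step(phases, coupling=2.0)
    spread = max(phases) - min(phases)  # attractive coupling pulls phases together
    ```

    With pattern-dependent couplings, the same relaxation dynamics drives the network toward a stored phase pattern rather than toward global synchrony.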

  6. Reliability and throughput issues for optical wireless and RF wireless systems

    NASA Astrophysics Data System (ADS)

    Yu, Meng

    The rapid development of wireless communication technologies shows two main trends. On one hand, in point-to-point communications, the demand for higher throughput has called for the emergence of wireless broadband techniques, including optical wireless (OW). On the other hand, wireless networks are becoming pervasive; new applications ask for more flexible system infrastructures, beyond the point-to-point prototype, to achieve better performance. This dissertation investigates two topics on the reliability and throughput of new wireless technologies. The first topic is the capacity of, and practical forward error control strategies for, OW systems. We investigate the performance of OW systems under weak atmospheric turbulence. We first investigate the capacity and power allocation for multi-laser and multi-detector systems. Our results show that uniform power allocation is a practically optimal solution for parallel channels. We also investigate the performance of Reed-Solomon (RS) codes and turbo codes for OW systems and present RS codes as good candidates. The second topic targets user cooperation in wireless networks. We evaluate the relative merits of amplify-forward (AF) and decode-forward (DF) in practical scenarios. Both analysis and simulations show that the overall system performance is critically affected by the quality of the inter-user channel. Following this result, we investigate two schemes to improve the overall system performance. We first investigate the impact of relay location on the overall system performance and determine the optimal location of the relay. A best-selective single-relay system is proposed and evaluated. Through analysis of the average capacity and outage, we show that a small candidate pool of 3 to 5 relays suffices to reap most of the "geometric" gain available to a selective system. Second, we propose a new user cooperation scheme that provides an effectively better inter-user channel. Most user cooperation protocols work in a time-sharing manner, where a node forwards others' messages and sends its own message in different sections of a provisioned time slot. In the proposed scheme, the two messages are encoded together into a single codeword using network coding and transmitted in the given time slot. We also propose a general multiple-user cooperation framework and show that, under this framework, network coding can achieve better diversity and provide effectively better inter-user channels than time sharing. The last part of the dissertation focuses on multi-relay packet transmission. We propose an adaptive and distributed coding scheme in which relay nodes adaptively cooperate and forward messages; the adaptive scheme shows a performance gain over fixed schemes. We then shift our viewpoint and represent the network as consisting partly of encoders and partly of decoders.
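
    The core idea of encoding two users' messages into a single transmission can be illustrated with the simplest network code, a bitwise XOR. This sketch is illustrative only; the dissertation's scheme combines network coding with channel coding inside the provisioned time slot, and the message contents here are invented.

    ```python
    def xor_combine(a: bytes, b: bytes) -> bytes:
        # Bitwise XOR of two equal-length packets: one coded packet carries
        # both messages, and a receiver holding either original recovers
        # the other by XOR-ing again.
        assert len(a) == len(b)
        return bytes(x ^ y for x, y in zip(a, b))

    m1, m2 = b"node-A-msg", b"node-B-msg"
    coded = xor_combine(m1, m2)           # single relayed transmission
    assert xor_combine(coded, m1) == m2   # partner A recovers B's message
    assert xor_combine(coded, m2) == m1   # partner B recovers A's message
    ```

    Because one coded transmission replaces two time-shared ones, each message effectively traverses two paths, which is the source of the diversity gain over time sharing.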

  7. Effects of capacity limits, memory loss, and sound type in change deafness.

    PubMed

    Gregg, Melissa K; Irsik, Vanessa C; Snyder, Joel S

    2017-11-01

    Change deafness, the inability to notice changes to auditory scenes, has the potential to provide insights about sound perception in busy situations typical of everyday life. We determined the extent to which change deafness to sounds is due to the capacity of processing multiple sounds and the loss of memory for sounds over time. We also determined whether these processing limitations work differently for varying types of sounds within a scene. Auditory scenes composed of naturalistic sounds, spectrally dynamic unrecognizable sounds, tones, and noise rhythms were presented in a change-detection task. On each trial, two scenes were presented that were same or different. We manipulated the number of sounds within each scene to measure memory capacity and the silent interval between scenes to measure memory loss. For all sounds, change detection was worse as scene size increased, demonstrating the importance of capacity limits. Change detection to the natural sounds did not deteriorate much as the interval between scenes increased up to 2,000 ms, but it did deteriorate substantially with longer intervals. For artificial sounds, in contrast, change-detection performance suffered even for very short intervals. The results suggest that change detection is generally limited by capacity, regardless of sound type, but that auditory memory is more enduring for sounds with naturalistic acoustic structures.

  8. Synaptic efficacy shapes resource limitations in working memory.

    PubMed

    Krishnan, Nikhil; Poll, Daniel B; Kilpatrick, Zachary P

    2018-06-01

    Working memory (WM) is limited in its temporal length and capacity. Classic conceptions of WM capacity assume the system possesses a finite number of slots, but recent evidence suggests WM may be a continuous resource. Resource models typically assume there is no hard upper bound on the number of items that can be stored, but WM fidelity decreases with the number of items. We analyze a neural field model of multi-item WM that associates each item with the location of a bump in a finite spatial domain, considering items that span a one-dimensional continuous feature space. Our analysis relates the neural architecture of the network to accumulated errors and capacity limitations arising during the delay period of a multi-item WM task. Networks with stronger synapses support wider bumps that interact more, whereas networks with weaker synapses support narrower bumps that are more susceptible to noise perturbations. There is an optimal synaptic strength that limits both bump interaction events and the effects of noise perturbations. This optimum shifts to weaker synapses as the number of items stored in the network is increased. Our model not only provides a circuit-based explanation for WM capacity, but also speaks to how capacity relates to the arrangement of stored items in a feature space.

  9. Contributions of Sensory Coding and Attentional Control to Individual Differences in Performance in Spatial Auditory Selective Attention Tasks.

    PubMed

    Dai, Lengshi; Shinn-Cunningham, Barbara G

    2016-01-01

    Listeners with normal hearing thresholds (NHTs) differ in their ability to steer attention to whatever sound source is important. This ability depends on top-down executive control, which modulates the sensory representation of sound in the cortex. Yet, this sensory representation also depends on the coding fidelity of the peripheral auditory system. Both of these factors may thus contribute to the individual differences in performance. We designed a selective auditory attention paradigm in which we could simultaneously measure envelope following responses (EFRs, reflecting peripheral coding), onset event-related potentials (ERPs) from the scalp (reflecting cortical responses to sound) and behavioral scores. We performed two experiments that varied stimulus conditions to alter the degree to which performance might be limited due to fine stimulus details vs. due to control of attentional focus. Consistent with past work, in both experiments we find that attention strongly modulates cortical ERPs. Importantly, in Experiment I, where coding fidelity limits the task, individual behavioral performance correlates with subcortical coding strength (derived by computing how the EFR is degraded for fully masked tones compared to partially masked tones); however, in this experiment, the effects of attention on cortical ERPs were unrelated to individual subject performance. In contrast, in Experiment II, where sensory cues for segregation are robust (and thus less of a limiting factor on task performance), inter-subject behavioral differences correlate with subcortical coding strength. In addition, after factoring out the influence of subcortical coding strength, behavioral differences are also correlated with the strength of attentional modulation of ERPs. 
These results support the hypothesis that behavioral abilities amongst listeners with NHTs can arise due to both subcortical coding differences and differences in attentional control, depending on stimulus characteristics and task demands.

  10. No more freeways : urban land use-transportation dynamics without freeway capacity expansion

    DOT National Transportation Integrated Search

    2011-01-01

    Observations of the various limitations of freeway capacity expansion have led to a provocative planning and policy question: What if we completely stop building additional freeway capacity? From a theoretical perspective, as a freeway transport...

  11. 14 CFR 125.93 - Airplane limitations.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 3 2012-01-01 2012-01-01 false Airplane limitations. 125.93 Section 125.93...: AIRPLANES HAVING A SEATING CAPACITY OF 20 OR MORE PASSENGERS OR A MAXIMUM PAYLOAD CAPACITY OF 6,000 POUNDS OR MORE; AND RULES GOVERNING PERSONS ON BOARD SUCH AIRCRAFT Airplane Requirements § 125.93 Airplane...

  12. 14 CFR 125.93 - Airplane limitations.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 3 2014-01-01 2014-01-01 false Airplane limitations. 125.93 Section 125.93...: AIRPLANES HAVING A SEATING CAPACITY OF 20 OR MORE PASSENGERS OR A MAXIMUM PAYLOAD CAPACITY OF 6,000 POUNDS OR MORE; AND RULES GOVERNING PERSONS ON BOARD SUCH AIRCRAFT Airplane Requirements § 125.93 Airplane...

  13. 14 CFR 125.93 - Airplane limitations.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 3 2011-01-01 2011-01-01 false Airplane limitations. 125.93 Section 125.93...: AIRPLANES HAVING A SEATING CAPACITY OF 20 OR MORE PASSENGERS OR A MAXIMUM PAYLOAD CAPACITY OF 6,000 POUNDS OR MORE; AND RULES GOVERNING PERSONS ON BOARD SUCH AIRCRAFT Airplane Requirements § 125.93 Airplane...

  14. 14 CFR 125.93 - Airplane limitations.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 3 2013-01-01 2013-01-01 false Airplane limitations. 125.93 Section 125.93...: AIRPLANES HAVING A SEATING CAPACITY OF 20 OR MORE PASSENGERS OR A MAXIMUM PAYLOAD CAPACITY OF 6,000 POUNDS OR MORE; AND RULES GOVERNING PERSONS ON BOARD SUCH AIRCRAFT Airplane Requirements § 125.93 Airplane...

  15. 14 CFR 125.93 - Airplane limitations.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 3 2010-01-01 2010-01-01 false Airplane limitations. 125.93 Section 125.93...: AIRPLANES HAVING A SEATING CAPACITY OF 20 OR MORE PASSENGERS OR A MAXIMUM PAYLOAD CAPACITY OF 6,000 POUNDS OR MORE; AND RULES GOVERNING PERSONS ON BOARD SUCH AIRCRAFT Airplane Requirements § 125.93 Airplane...

  16. A Formal Model of Capacity Limits in Working Memory

    ERIC Educational Resources Information Center

    Oberauer, Klaus; Kliegl, Reinhold

    2006-01-01

    A mathematical model of working-memory capacity limits is proposed on the key assumption of mutual interference between items in working memory. Interference is assumed to arise from overwriting of features shared by these items. The model was fit to time-accuracy data of memory-updating tasks from four experiments using nonlinear mixed effect…

  17. Dividing Attention within and between Hemispheres: Testing a Multiple Resources Approach to Limited-Capacity Information Processing.

    ERIC Educational Resources Information Center

    Friedman, Alinda; And Others

    1982-01-01

    Two experiments tested the limiting case of a multiple resources approach to resource allocation in information processing. Results contradict a single-capacity model, supporting the idea that the hemispheres' resource supplies are independent and have implications for both cerebral specialization and divided attention issues. (Author/PN)

  18. A satellite mobile communication system based on Band-Limited Quasi-Synchronous Code Division Multiple Access (BLQS-CDMA)

    NASA Technical Reports Server (NTRS)

    Degaudenzi, R.; Elia, C.; Viola, R.

    1990-01-01

    Discussed here is a new approach to code division multiple access applied to a mobile system for voice (and data) services based on Band-Limited Quasi-Synchronous Code Division Multiple Access (BLQS-CDMA). The system requires users to be chip synchronized to reduce the contribution of self-interference and to make use of voice activation in order to increase the satellite power efficiency. In order to achieve spectral efficiency, Nyquist chip pulse shaping is used with no detection performance impairment. The synchronization problems are solved in the forward link by distributing a master code, whereas carrier forced activation and closed-loop control techniques have been adopted in the return link. System performance sensitivity to nonlinear amplification and timing/frequency synchronization errors is analyzed.

  19. An information capacity limitation of visual short-term memory.

    PubMed

    Sewell, David K; Lilburn, Simon D; Smith, Philip L

    2014-12-01

    Research suggests that visual short-term memory (VSTM) has both an item capacity, of around 4 items, and an information capacity. We characterize the information capacity limits of VSTM using a task in which observers discriminated the orientation of a single probed item in displays consisting of 1, 2, 3, or 4 orthogonally oriented Gabor patch stimuli that were presented in noise for 50 ms, 100 ms, 150 ms, or 200 ms. The observed capacity limitations are well described by a sample-size model, which predicts invariance of Σ_i (d'_i)² for displays of different sizes and linearity of (d'_i)² for displays of different durations. Performance was the same for simultaneously and sequentially presented displays, which implicates VSTM as the locus of the observed invariance and rules out explanations that ascribe it to divided attention or stimulus encoding. The invariance of Σ_i (d'_i)² is predicted by the competitive interaction theory of Smith and Sewell (2013), which attributes it to the normalization of VSTM trace strengths arising from competition among stimuli entering VSTM.
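    The sample-size invariance in the abstract above can be illustrated numerically (a hypothetical sketch, assuming a fixed total sample budget S split evenly across n items and (d'_i)² proportional to the samples per item; the constant k and budget S are invented for illustration):

    ```python
    # Sample-size model sketch: with S total samples shared over n items,
    # each item's squared sensitivity (d')^2 scales as k * S / n,
    # so the sum over items of (d')^2 is constant across display sizes.

    def dprime_squared_per_item(total_samples: float, n_items: int, k: float = 0.1) -> float:
        """Squared sensitivity of one item when S samples are split over n items."""
        return k * total_samples / n_items

    def summed_dprime_squared(total_samples: float, n_items: int, k: float = 0.1) -> float:
        """Sum of (d'_i)^2 over all items in the display."""
        return n_items * dprime_squared_per_item(total_samples, n_items, k)

    sums = [summed_dprime_squared(200.0, n) for n in (1, 2, 3, 4)]
    assert all(abs(s - sums[0]) < 1e-9 for s in sums)  # invariant across set size
    ```

    Per-item sensitivity falls as display size grows, but the summed quantity stays fixed, which is exactly the invariance the model predicts.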

  20. Damping in Space Constructions

    NASA Astrophysics Data System (ADS)

    de Vreugd, Jan; de Lange, Dorus; Winters, Jasper; Human, Jet; Kamphues, Fred; Tabak, Erik

    2014-06-01

    Monolithic structures are often used in optomechanical designs for space applications to achieve high dimensional stability and to prevent possible backlash and friction phenomena. The capacity of monolithic structures to dissipate mechanical energy is, however, limited by their high Q-factor, which might result in high stresses during dynamic launch loads like random vibration, sine sweeps, and shock. To reduce the Q-factor in space applications, the effect of constrained layer damping (CLD) is investigated in this work. To predict the damping increase, the CLD effect is implemented locally at the supporting struts in an existing FE model of an optical instrument. Numerical simulations show that the effect of local damping treatment in this instrument could reduce the vibrational stresses by 30-50%. Validation experiments on a simple structure showed good agreement between measured and predicted damping properties. This paper presents material characterization, material modeling, numerical implementation of damping models in finite element code, numerical results on space hardware, and the results of validation experiments.
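    The Q-factor/damping relation invoked above can be sketched with the standard single-degree-of-freedom relation Q = 1/(2ζ) (the damping ratios below are illustrative numbers, not values from the paper):

    ```python
    # Q-factor of a lightly damped resonance from its damping ratio zeta:
    # Q = 1 / (2 * zeta). At resonance, the displacement amplification of a
    # SDOF oscillator is approximately Q, so doubling damping halves both.

    def q_factor(zeta: float) -> float:
        """Quality factor of a lightly damped single-DOF resonance."""
        return 1.0 / (2.0 * zeta)

    def resonant_amplification(zeta: float) -> float:
        """Approximate amplification of base excitation at resonance (~Q)."""
        return q_factor(zeta)

    q_monolithic = q_factor(0.005)  # zeta = 0.5%: Q = 100 (very lightly damped)
    q_with_cld = q_factor(0.010)    # zeta = 1.0%: Q = 50 after added damping
    assert q_monolithic == 100.0 and q_with_cld == 50.0
    ```

    This is why even a modest increase in damping ratio from a local CLD treatment can cut resonant stresses substantially.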
