NASA Technical Reports Server (NTRS)
Rost, Martin C.; Sayood, Khalid
1991-01-01
A method for efficiently coding natural images using a vector-quantized, variable-blocksize transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The selection of the coder used for any given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incurring extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.
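The threshold-driven coder selection at the heart of MBC can be sketched as a quadtree refinement loop. The Python sketch below is illustrative only (a toy zonal DCT coder, hypothetical function names, and an assumed mean-squared-error threshold), not the authors' implementation: a block is recoded with smaller DCT coders whenever its distortion exceeds the threshold.

```python
import numpy as np
from scipy.fft import dctn, idctn  # 2-D type-II DCT and its inverse

def code_block(block, keep=16):
    """Toy zonal DCT coder: keep only the 'keep' lowest-frequency coefficients."""
    c = dctn(block, norm='ortho')
    k = int(np.sqrt(keep))
    mask = np.zeros_like(c)
    mask[:k, :k] = 1.0
    return idctn(c * mask, norm='ortho')

def mbc(block, threshold, min_size=4):
    """MBC-style selection sketch: if the coded block's MSE exceeds the
    threshold, split into four sub-blocks and recode each at smaller size."""
    approx = code_block(block)
    mse = np.mean((block - approx) ** 2)
    n = block.shape[0]
    if mse <= threshold or n <= min_size:
        return approx
    h = n // 2
    out = np.empty_like(block)
    for i in (0, h):
        for j in (0, h):
            out[i:i+h, j:j+h] = mbc(block[i:i+h, j:j+h], threshold, min_size)
    return out

region = np.random.rand(16, 16)          # stand-in for an image region
reconstruction = mbc(region, threshold=1e-3)
```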
Mixture block coding with progressive transmission in packet video. Appendix 1: Item 2. M.S. Thesis
NASA Technical Reports Server (NTRS)
Chen, Yun-Chung
1989-01-01
Video transmission will become an important part of future multimedia communication because of dramatically increasing user demand for video and the rapid evolution of coding algorithms and VLSI technology. Video transmission will be part of the broadband integrated services digital network (B-ISDN). Asynchronous transfer mode (ATM) is a viable candidate for implementation of B-ISDN due to its inherent flexibility, service independence, and high performance. According to the characteristics of ATM, the information has to be coded into discrete cells which travel independently in the packet-switching network. A practical realization of an ATM video codec called Mixture Block Coding with Progressive Transmission (MBCPT) is presented. This variable-bit-rate coding algorithm shows how constant-quality performance can be obtained according to user demand. Interactions between the codec and the network are emphasized, including packetization, service synchronization, flow control, and error recovery. Finally, some simulation results based on MBCPT coding with error recovery are presented.
Implementation issues in source coding
NASA Technical Reports Server (NTRS)
Sayood, Khalid; Chen, Yun-Chung; Hadenfeldt, A. C.
1989-01-01
An edge-preserving image coding scheme which can be operated in both a lossy and a lossless manner was developed. The technique is an extension of the lossless encoding algorithm developed for the Mars Observer spectral data. It can also be viewed as a modification of the DPCM algorithm. A packet video simulator was also developed from an existing modified packet network simulator. The coding scheme for this system is a modification of the mixture block coding (MBC) scheme described in the previous report. Coding algorithms for packet video were also investigated.
NASA Astrophysics Data System (ADS)
Hastuty, I. P.; Sembiring and Nursyamsi, I. S.
2018-02-01
Paving block is one of the materials used as the top layer of a road structure, besides asphalt and concrete. Paving block is usually made of mixed materials such as Portland cement or another adhesive material, water, and aggregate. People nowadays prefer paving block to other pavements such as concrete or asphalt. Interest in the use of paving block is increasing because it is an eco-friendly construction material that helps soil water conservation, can be installed faster, is easier to install and maintain, offers a variety of shades that increase the aesthetic value, and costs less than the alternatives. Specimens with a mixture of Sinabung ash, and with a mixture of Sinabung ash and lime, were prepared with a cement : sand : stone ash ratio of 1 : 2 : 3. The mixture is used as a substitute material by reducing the percentage of the weight of the cement, with composition ratio variations based on the comparative volume of the paving block aggregate, i.e., 0%, 5%, 10%, 15%, 20%, and 25%. The results of this research show that the maximum compressive strength value, 42.27 MPa, was obtained from a mixture of 10% lime with a curing time of 28 days. The maximum compressive strength value obtained from the mixture of Sinabung ash is 41.60 MPa, from a mixture of 15% Sinabung ash. With these two materials, the paving blocks produced are classified as quality A and B paving blocks (350 - 400 kg/cm²) in accordance with the SNI 03-0691-1996 specification.
NASA Technical Reports Server (NTRS)
Lin, Shu; Rhee, Dojun
1996-01-01
This paper is concerned with the construction of multilevel concatenated block modulation codes using a multilevel concatenation scheme for the frequency non-selective Rayleigh fading channel. In the construction of multilevel concatenated modulation codes, block modulation codes are used as the inner codes. Various types of codes (block or convolutional, binary or nonbinary) are considered as the outer codes. In particular, we focus on the special case for which Reed-Solomon (RS) codes are used as the outer codes. For this special case, a systematic algebraic technique for constructing q-level concatenated block modulation codes is proposed. Codes have been constructed for certain specific values of q and compared with single-level concatenated block modulation codes using the same inner codes. A multilevel closest-coset decoding scheme for these codes is proposed.
A robust low-rate coding scheme for packet video
NASA Technical Reports Server (NTRS)
Chen, Y. C.; Sayood, Khalid; Nelson, D. J.; Arikan, E. (Editor)
1991-01-01
Due to the rapidly evolving fields of image processing and networking, video information promises to be an important part of telecommunication systems. Although up to now video transmission has been carried mainly over circuit-switched networks, it is likely that packet-switched networks will dominate the communication world in the near future. Asynchronous transfer mode (ATM) techniques in broadband ISDN can provide a flexible, service-independent, and high-performance environment for video communication. In this paper, the network simulator was used only as a channel. Mixture block coding with progressive transmission (MBCPT) has been investigated for use over packet networks and has been found to provide a high compression rate with good visual performance, robustness to packet loss, tractable integration with network mechanics, and simplicity in parallel implementation.
Li, Yongliang; Jiang, Tao; Lin, Shaoliang; Lin, Jiaping; Cai, Chunhua; Zhu, Xingyu
2015-01-01
The self-assembly behavior of a mixture system containing rod-coil block copolymers and rigid homopolymers was investigated by using Brownian dynamics simulations. The morphologies of the hierarchical self-assemblies formed were found to depend on the Lennard-Jones (LJ) interaction εRR between rod blocks, the lengths of the rod and coil blocks in the copolymer, and the mixture ratio of block copolymers to homopolymers. As the εRR value decreases, the self-assembled structures of the mixtures transform from an abacus-like structure to a helical structure, then to a plain fiber, and finally break up into unimers. The order parameter of the rod blocks was calculated to confirm the structure transition. By varying the lengths of the rod and coil blocks, the regions of thermodynamic stability of abacus, helix, plain fiber, and unimers were mapped. Moreover, it was discovered that two levels of rod-block ordering exist in the helices. The block copolymers are helically wrapped on the homopolymer bundles to form a helical string, while the rod blocks are twistingly packed inside the string. In addition, the simulation results are in good agreement with experimental observations. The present work reveals the mechanism behind the formation of helical (experimentally super-helical) structures and may provide useful information for the design and preparation of complex structures. PMID:25965726
Li, Xue; Zhao, Shuying; Zhang, Shuxiang; Kim, Dong Ha; Knoll, Wolfgang
2007-06-19
The inorganic compound HAuCl4, which can form a complex with pyridine, is introduced into a poly(styrene-block-2-vinylpyridine) (PS-b-P2VP) block copolymer/poly(methyl methacrylate) (PMMA) homopolymer mixture. Orientation of the cylindrical microdomains formed by the P2VP block, PMMA, and HAuCl4 normal to the substrate surface can be generated via cooperative self-assembly of the mixture. Selective removal of the homopolymer leads to porous nanostructures containing metal components in the P2VP domains, which have a novel photoluminescence property.
Synthesis and solution self-assembly of side-chain cobaltocenium-containing block copolymers.
Ren, Lixia; Hardy, Christopher G; Tang, Chuanbing
2010-07-07
The synthesis of side-chain cobaltocenium-containing block copolymers and their self-assembly in solution were studied. Highly pure monocarboxycobaltocenium was prepared and subsequently attached to the side chains of poly(tert-butyl acrylate)-block-poly(2-hydroxyethyl acrylate), yielding poly(tert-butyl acrylate)-block-poly(2-acryloyloxyethyl cobaltoceniumcarboxylate). The cobaltocenium block copolymers exhibited vesicle morphology in a mixture of acetone and water, while micelles or nanotubes were formed in a mixture of acetone and chloroform.
Multi-level bandwidth efficient block modulation codes
NASA Technical Reports Server (NTRS)
Lin, Shu
1989-01-01
The multilevel technique for combining block coding and modulation is investigated. There are four parts. In the first part, a formulation is presented for the signal sets on which modulation codes are to be constructed. Distance measures on a signal set are defined and their properties are developed. In the second part, a general formulation is presented for multilevel modulation codes in terms of component codes with appropriate Euclidean distances. The distance properties, Euclidean weight distribution, and linear structure of multilevel modulation codes are investigated. In the third part, several specific methods for constructing multilevel block modulation codes with interdependency among component codes are proposed. Given a multilevel block modulation code C with no interdependency among the binary component codes, the proposed methods give a multilevel block modulation code C' which has the same rate as C, a minimum squared Euclidean distance not less than that of C, a trellis diagram with the same number of states as that of C, and a smaller number of nearest-neighbor codewords than that of C. In the last part, the error performance of block modulation codes is analyzed for an AWGN channel based on soft-decision maximum likelihood decoding. Error probabilities of some specific codes are evaluated based on their Euclidean weight distributions and simulation results.
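The Euclidean distance measures that drive such constructions can be made concrete with a small numerical check. The sketch below is an illustration (not taken from the paper): it computes the minimum squared Euclidean distance at each level of the standard set-partitioning chain 8-PSK → QPSK → BPSK, the distance profile against which multilevel component codes are matched.

```python
import numpy as np
from itertools import combinations

def min_sq_euclidean(points):
    """Minimum squared Euclidean distance over all pairs of signal points."""
    return min(abs(a - b) ** 2 for a, b in combinations(points, 2))

psk8 = [np.exp(2j * np.pi * k / 8) for k in range(8)]   # unit-energy 8-PSK
qpsk = psk8[::2]                                        # one QPSK subset
bpsk = psk8[::4]                                        # one BPSK subset

for name, pts in [("8-PSK", psk8), ("QPSK subset", qpsk), ("BPSK subset", bpsk)]:
    print(f"{name}: {min_sq_euclidean(pts):.3f}")
# Prints roughly 0.586, 2.000, 4.000: intra-level distances grow,
# so progressively weaker component codes suffice at higher levels.
```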
Soft-decision decoding techniques for linear block codes and their error performance analysis
NASA Technical Reports Server (NTRS)
Lin, Shu
1996-01-01
The first paper presents a new minimum-weight trellis-based soft-decision iterative decoding algorithm for binary linear block codes. The second paper derives an upper bound on the probability of block error for multilevel concatenated codes (MLCC). The bound evaluates the difference in performance for different decompositions of some codes. The third paper investigates the bit error probability of maximum likelihood decoding of binary linear codes. The fourth and final paper included in this report concerns itself with the construction of multilevel concatenated block modulation codes using a multilevel concatenation scheme for the frequency non-selective Rayleigh fading channel.
Effect of nanoscale morphology on selective ethanol transport through block copolymer membranes
USDA-ARS?s Scientific Manuscript database
We report on the effect of block copolymer domain size on transport of liquid mixtures through the membranes by presenting pervaporation data of an 8 wt% ethanol/water mixture through A-B-A and B-A-B triblock copolymer membranes. The A-block was chosen to facilitate ethanol transport while the B-blo...
2008-09-01
Convolutional Encoder Block Diagram of code rate r = 1/2 ... most commonly used along with block codes. They were introduced in 1955 by Elias [7]. Convolutional codes are characterized by the code rate r = k/n ... a convolutional code for r = 1/2 and κ = 3, namely [7 5], is used. Figure 2: Convolutional Encoder Block Diagram of code rate r = 1/2.
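The rate r = 1/2, κ = 3 encoder with octal generators [7 5] named in the excerpt is small enough to write out directly. A minimal Python sketch (assumed convention: zero-initialized shift register, most recent bit entering first):

```python
def conv_encode_7_5(bits):
    """Rate-1/2, constraint-length-3 convolutional encoder with octal
    generators (7, 5), i.e. g1 = 1 + D + D^2 and g2 = 1 + D^2."""
    s1 = s2 = 0                      # shift-register contents (D, D^2)
    out = []
    for u in bits:
        out.append(u ^ s1 ^ s2)      # output of generator g1 = 111
        out.append(u ^ s2)           # output of generator g2 = 101
        s1, s2 = u, s1               # shift the register
    return out

print(conv_encode_7_5([1, 0, 1, 1]))  # two coded bits per input bit
```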
NASA Technical Reports Server (NTRS)
Lin, Shu; Fossorier, Marc
1998-01-01
For long linear block codes, maximum likelihood decoding based on full code trellises would be very hard to implement, if not impossible. In this case, we may wish to trade error performance for a reduction in decoding complexity. Suboptimum soft-decision decoding of a linear block code based on a low-weight subtrellis can be devised to provide an effective trade-off between error performance and decoding complexity. This chapter presents such a suboptimum decoding algorithm for linear block codes. This decoding algorithm is iterative in nature and based on an optimality test. It has the following important features: (1) a simple method to generate a sequence of candidate codewords, one at a time, for testing; (2) a sufficient condition for testing a candidate codeword for optimality; and (3) a low-weight subtrellis search for finding the most likely (ML) codeword.
Block-based scalable wavelet image codec
NASA Astrophysics Data System (ADS)
Bao, Yiliang; Kuo, C.-C. Jay
1999-10-01
This paper presents a high-performance block-based wavelet image coder which is designed to be of very low implementational complexity yet rich in features. In this image coder, the Dual-Sliding Wavelet Transform (DSWT) is first applied to the image data to generate wavelet coefficients in fixed-size blocks. Here, a block consists only of wavelet coefficients from a single subband. The coefficient blocks are directly coded with the Low Complexity Binary Description (LCBiD) coefficient coding algorithm. Each block is encoded using binary context-based bitplane coding. No parent-child correlation is exploited in the coding process. There is also no intermediate buffering needed between DSWT and LCBiD. The compressed bit stream generated by the proposed coder is both SNR and resolution scalable, as well as highly resilient to transmission errors. Both DSWT and LCBiD process the data in blocks whose size is independent of the size of the original image. This gives more flexibility in the implementation. The codec has very good coding performance even when the block size is as small as 16x16.
Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes
NASA Technical Reports Server (NTRS)
Lin, Shu
1998-01-01
A code trellis is a graphical representation of a code, block or convolutional, in which every path represents a codeword (or a code sequence for a convolutional code). This representation makes it possible to implement maximum likelihood decoding (MLD) of a code with reduced decoding complexity. The best known trellis-based MLD algorithm is the Viterbi algorithm. The trellis representation was first introduced and used for convolutional codes [23]. This representation, together with the Viterbi decoding algorithm, has resulted in a wide range of applications of convolutional codes for error control in digital communications over the last two decades. Research on the trellis structure of linear block codes, however, remained inactive for a long time. There are two major reasons for this inactive period of research in this area. First, most coding theorists at that time believed that block codes did not have a simple trellis structure like convolutional codes, and that maximum likelihood decoding of linear block codes using the Viterbi algorithm was practically impossible, except for very short block codes. Second, since almost all linear block codes are constructed algebraically or based on finite geometries, it was the belief of many coding theorists that algebraic decoding was the only way to decode these codes. These two reasons seriously hindered the development of efficient soft-decision decoding methods for linear block codes and their applications to error control in digital communications. This led to a general belief that block codes are inferior to convolutional codes and hence not useful. Chapter 2 gives a brief review of linear block codes. The goal is to provide the essential background material for the development of trellis structure and trellis-based decoding algorithms for linear block codes in the later chapters. Chapters 3 through 6 present the fundamental concepts, finite-state machine model, state space formulation, basic structural properties, state labeling, construction procedures, complexity, minimality, and sectionalization of trellises. Chapter 7 discusses trellis decomposition and subtrellises for low-weight codewords. Chapter 8 first presents well-known methods for constructing long powerful codes from short component codes or component codes of smaller dimensions, and then provides methods for constructing their trellises, including Shannon and Cartesian product techniques. Chapter 9 deals with convolutional codes, puncturing, zero-tail termination and tail-biting. Chapters 10 through 13 present various trellis-based decoding algorithms, old and new. Chapter 10 first discusses the application of the well-known Viterbi decoding algorithm to linear block codes, optimum sectionalization of a code trellis to minimize computational complexity, and design issues for IC (integrated circuit) implementation of a Viterbi decoder. It then presents a new decoding algorithm for convolutional codes, named the Differential Trellis Decoding (DTD) algorithm. Chapter 12 presents a suboptimum reliability-based iterative decoding algorithm with a low-weight trellis search for the most likely codeword. This decoding algorithm provides a good trade-off between error performance and decoding complexity. All the decoding algorithms presented in Chapters 10 through 12 are devised to minimize word error probability. Chapter 13 presents decoding algorithms that minimize bit error probability and provide the corresponding soft (reliability) information at the output of the decoder. The decoding algorithms presented are the MAP (maximum a posteriori probability) decoding algorithm and the Soft-Output Viterbi Algorithm (SOVA). Finally, the minimization of bit error probability in trellis-based MLD is discussed.
Ion Transport in Nanostructured Block Copolymer/Ionic Liquid Membranes
NASA Astrophysics Data System (ADS)
Hoarfrost, Megan Lane
Incorporating an ionic liquid into one block copolymer microphase provides a platform for combining the outstanding electrochemical properties of ionic liquids with a number of favorable attributes provided by block copolymers. In particular, block copolymers thermodynamically self-assemble into well-ordered nanostructures, which can be engineered to provide a durable mechanical scaffold and template the ionic liquid into continuous ion-conducting nanochannels. Understanding how the addition of an ionic liquid affects the thermodynamic self-assembly of block copolymers, and how the confinement of ionic liquids to block copolymer nanodomains affects their ion-conducting properties is essential for predictable structure-property control. The lyotropic phase behavior of block copolymer/ionic liquid mixtures is shown to be reminiscent of mixtures of block copolymers with selective molecular solvents. A variety of ordered microstructures corresponding to lamellae, hexagonally close-packed cylinders, body-centered cubic, and face-centered cubic oriented micelles are observed in a model system composed of mixtures of imidazolium bis(trifluoromethylsulfonyl)imide ([Im][TFSI]) and poly(styrene-
Dynamic code block size for JPEG 2000
NASA Astrophysics Data System (ADS)
Tsai, Ping-Sing; LeCornec, Yann
2008-02-01
Since the standardization of JPEG 2000, it has found its way into many different applications such as DICOM (Digital Imaging and Communications in Medicine), satellite photography, military surveillance, the Digital Cinema Initiative, professional video cameras, and so on. The unified framework of the JPEG 2000 architecture makes practical high-quality real-time compression possible even in video mode, i.e., Motion JPEG 2000. In this paper, we present a study of the compression impact of using dynamic code block sizes instead of the fixed code block size specified in the JPEG 2000 standard. The simulation results show that there is no significant impact on compression if dynamic code block sizes are used. In this study, we also unveil the advantages of using dynamic code block sizes.
Ice/water slurry blocking phenomenon at a tube orifice.
Hirochi, Takero; Yamada, Shuichi; Shintate, Tuyoshi; Shirakashi, Masataka
2002-10-01
The phenomenon of ice-particle/water mixture blocking flow through a pipeline is a problem that needs to be solved before mixture flow can be applied for practical use in cold energy transportation in a district cooling system. In this work, the blocking mechanism of ice-particle slurry at a tube orifice is investigated and a criterion for blocking is presented. The cohesive nature of ice particles is shown to cause compressed plug type blocking and the compressive yield stress of a particle cluster is presented as a measure for the cohesion strength of ice particles.
Rate-Compatible LDPC Codes with Linear Minimum Distance
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel
2009-01-01
A recently developed method of constructing protograph-based low-density parity-check (LDPC) codes provides for low iterative decoding thresholds and minimum distances proportional to block sizes, and can be used for various code rates. A code constructed by this method can have either a fixed input block size or a fixed output block size and, in either case, provides rate compatibility. The method comprises two submethods: one for fixed input block size and one for fixed output block size. The first submethod is useful for applications in which there are requirements for rate-compatible codes that have fixed input block sizes. These are codes in which only the numbers of parity bits are allowed to vary. The fixed-output-block-size submethod is useful for applications in which framing constraints are imposed on the physical layers of affected communication systems. An example of such a system is one that conforms to one of many new wireless-communication standards that involve the use of orthogonal frequency-division modulation.
Synergism and Combinatorial Coding for Binary Odor Mixture Perception in Drosophila
Chakraborty, Tuhin Subhra; Siddiqi, Obaid
2016-01-01
Most odors in the natural environment are mixtures of several compounds. Olfactory receptors housed in the olfactory sensory neurons detect these odors and transmit the information to the brain, leading to decision-making. But whether the olfactory system detects the ingredients of a mixture separately or treats mixtures as different entities is not well understood. Using Drosophila melanogaster as a model system, we have demonstrated that fruit flies perceive binary odor mixtures in a manner that is heavily dependent on both the proportion and the degree of dilution of the components, suggesting a combinatorial coding at the peripheral level. This coding strategy appears to be receptor specific and is independent of interneuronal interactions. PMID:27588303
FBCOT: a fast block coding option for JPEG 2000
NASA Astrophysics Data System (ADS)
Taubman, David; Naman, Aous; Mathew, Reji
2017-09-01
Based on the EBCOT algorithm, JPEG 2000 finds application in many fields, including high-performance scientific, geospatial and video coding applications. Beyond digital cinema, JPEG 2000 is also attractive for low-latency video communications. The main obstacle for some of these applications is the relatively high computational complexity of the block coder, especially at high bit-rates. This paper proposes a drop-in replacement for the JPEG 2000 block coding algorithm, achieving much higher encoding and decoding throughputs, with only modest loss in coding efficiency (typically < 0.5 dB). The algorithm provides only limited quality/SNR scalability, but offers truly reversible transcoding to/from any standard JPEG 2000 block bit-stream. The proposed FAST block coder can be used with EBCOT's post-compression RD-optimization methodology, allowing a target compressed bit-rate to be achieved even at low latencies, leading to the name FBCOT (Fast Block Coding with Optimized Truncation).
Upadya, Madhusudan; Neeta, S; Manissery, Jesni Joseph; Kuriakose, Nigel; Singh, Rakesh Raushan
2016-01-01
Background and Aims: Bupivacaine is available in isobaric and hyperbaric forms for intrathecal use, and opioids are used as additives to modify their effects. The aim of this study was to compare the efficacy and haemodynamic effects of an intrathecal isobaric bupivacaine-fentanyl mixture and a hyperbaric bupivacaine-fentanyl mixture in common urological procedures. Methods: One hundred American Society of Anesthesiologists physical status 1 and 2 patients undergoing urological procedures were randomized into two groups. Group 1 received 3 ml of 0.5% isobaric bupivacaine with 25 μg fentanyl while Group 2 received 3 ml of 0.5% hyperbaric bupivacaine with 25 μg fentanyl. The parameters measured included heart rate, blood pressure, respiratory rate, and onset and duration of motor and sensory blockade. Student's unpaired t-test and the χ2 test were used to analyse the results, using SPSS version 11.5 software. Results: Haemodynamic stability was better with the isobaric bupivacaine-fentanyl mixture (Group 1) than with the hyperbaric bupivacaine-fentanyl mixture (Group 2). The mean onset time in Group 1 for both sensory block (4 min) and motor block (5 min) was longer compared with Group 2. The duration of sensory block (127.8 ± 38.64 min) and motor block (170.4 ± 27.8 min) was shorter in the isobaric bupivacaine group than in the hyperbaric bupivacaine group (sensory blockade 185.4 ± 16.08 min and motor blockade 201.6 ± 14.28 min). Seventy percent of patients in Group 2 had a maximum sensory block level of T6, whereas it was 53% in Group 1. More patients in Group 1 required sedation compared with Group 2. Conclusion: The isobaric bupivacaine-fentanyl mixture was found to provide adequate anaesthesia with minimal incidence of haemodynamic instability. PMID:26962255
Discrete Cosine Transform Image Coding With Sliding Block Codes
NASA Astrophysics Data System (ADS)
Divakaran, Ajay; Pearlman, William A.
1989-11-01
A transform trellis coding scheme for images is presented. A two-dimensional discrete cosine transform is applied to the image, followed by a search on a trellis-structured code. This code is a sliding block code that utilizes a constrained-size reproduction alphabet. The image is divided into blocks by the transform coding. The non-stationarity of the image is counteracted by grouping these blocks into clusters through a clustering algorithm and then encoding the clusters separately. Mandela-ordered sequences are formed from each cluster, i.e., identically indexed coefficients from each block are grouped together to form one-dimensional sequences. A separate search ensues on each of these Mandela-ordered sequences. Padding sequences are used to improve the trellis search fidelity; they absorb the error caused by the building up of the trellis to full size. The simulations were carried out on a 256x256 image ('LENA'). The results are comparable to those of existing schemes. The visual quality of the image is enhanced considerably by the padding and clustering.
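The Mandela ordering described above is essentially a transpose of the block/coefficient axes: coefficient k is gathered from every block of a cluster to form one 1-D sequence. A short illustrative sketch (an assumed 8x8 block size, hypothetical function name):

```python
import numpy as np

def mandela_order(blocks):
    """Given DCT blocks of shape (num_blocks, 8, 8), return 64 sequences,
    one per coefficient index, each of length num_blocks."""
    num_blocks = blocks.shape[0]
    flat = blocks.reshape(num_blocks, 64)   # one row per block
    return flat.T                           # row k = coefficient k across all blocks

cluster = np.random.rand(100, 8, 8)   # stand-in for 100 DCT blocks in one cluster
sequences = mandela_order(cluster)    # shape (64, 100); each row is searched separately
```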
NASA Astrophysics Data System (ADS)
Kwak, Jongheon; Han, Sunghyun; Kim, Jin Kon
2014-03-01
A binary mixture of two block copolymers whose blocks are capable of forming hydrogen bonds allows one to obtain various microdomains that could not be expected for a neat block copolymer. For instance, binary blends of symmetric polystyrene-block-poly(2-vinylpyridine) copolymer (PS-b-P2VP) and polystyrene-block-polyhydroxystyrene copolymer (PS-b-PHS), in which hydrogen bonding occurs between P2VP and PHS, showed hexagonally packed (HEX) cylindrical and body-centered cubic (BCC) spherical microdomains. To determine the exact location of the short block copolymer chains at the interface, we synthesized a deuterated polystyrene-block-polyhydroxystyrene copolymer (dPS-b-PHS) and prepared a binary mixture with PS-b-P2VP. We investigate, via small-angle X-ray scattering (SAXS) and neutron reflectivity (NR), the exact location of the shorter dPS block chains near the interface of the microdomains.
Surface code implementation of block code state distillation
Fowler, Austin G.; Devitt, Simon J.; Jones, Cody
2013-01-01
State distillation is the process of taking a number of imperfect copies of a particular quantum state and producing fewer better copies. Until recently, the lowest overhead method of distilling states produced a single improved |A〉 state given 15 input copies. New block code state distillation methods can produce k improved |A〉 states given 3k + 8 input copies, potentially significantly reducing the overhead associated with state distillation. We construct an explicit surface code implementation of block code state distillation and quantitatively compare the overhead of this approach to the old. We find that, using the best available techniques, for parameters of practical interest, block code state distillation does not always lead to lower overhead, and, when it does, the overhead reduction is typically less than a factor of three. PMID:23736868
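The stated input counts make the potential saving easy to quantify: the old protocol consumes 15 input copies per improved state, while the block code protocol consumes (3k + 8)/k per improved state, which approaches 3 for large k. A quick arithmetic check (illustration only):

```python
for k in (1, 2, 4, 8, 16, 64):
    per_state = (3 * k + 8) / k          # block code inputs per improved state
    print(k, per_state, 15 / per_state)  # k, cost, naive saving vs. 15-to-1
# The naive ratio approaches 5x, but the surface code analysis above shows
# the realized overhead reduction is typically less than a factor of three.
```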
LDPC Codes with Minimum Distance Proportional to Block Size
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel; Thorpe, Jeremy
2009-01-01
Low-density parity-check (LDPC) codes characterized by minimum Hamming distances proportional to block sizes have been demonstrated. Like the codes mentioned in the immediately preceding article, the present codes are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. The previously mentioned codes have low decoding thresholds and reasonably low error floors. However, the minimum Hamming distances of those codes do not grow linearly with code-block sizes. Codes that have this minimum-distance property exhibit very low error floors. Examples of such codes include regular LDPC codes with variable degrees of at least 3. Unfortunately, the decoding thresholds of regular LDPC codes are high. Hence, there is a need for LDPC codes characterized by both low decoding thresholds and, in order to obtain acceptably low error floors, minimum Hamming distances that are proportional to code-block sizes. The present codes were developed to satisfy this need. The minimum Hamming distances of the present codes have been shown, through consideration of ensemble-average weight enumerators, to be proportional to code-block sizes. As in the cases of irregular ensembles, the properties of these codes are sensitive to the proportion of degree-2 variable nodes. A code having too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code having too many such nodes tends not to exhibit a minimum distance that is proportional to block size. Results of computational simulations have shown that the decoding thresholds of codes of the present type are lower than those of regular LDPC codes. Included in the simulations were a few examples from a family of codes characterized by rates ranging from low to high and by thresholds that adhere closely to their respective channel-capacity thresholds; the simulation results from these examples showed that the codes in question have low error floors as well as low decoding thresholds. As an example, the illustration shows the protograph (which represents the blueprint for overall construction) of one proposed code family for code rates greater than or equal to 1/2. Any size of LDPC code can be obtained by copying the protograph structure N times, then permuting the edges. The illustration also provides field-programmable gate array (FPGA) hardware performance simulations for this code family. In addition, the illustration provides the minimum signal-to-noise ratios (Eb/No) in decibels (decoding thresholds) needed to achieve zero error rates as the code block size goes to infinity, for various code rates. In comparison with the codes mentioned in the preceding article, these codes have slightly higher decoding thresholds.
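The copy-and-permute construction mentioned above can be sketched directly: each edge of the protograph base matrix is replaced by an N x N permutation, giving a parity-check matrix N times larger. The following sketch uses a generic lifting with random circulant shifts and a hypothetical base matrix, not the specific protograph in the article's illustration:

```python
import numpy as np

def lift_protograph(base, N, rng=np.random.default_rng(0)):
    """Expand a protograph base matrix into a binary parity-check matrix.
    Each base entry b is replaced by the XOR of b distinct random NxN
    circulant permutations (entries > 1 represent parallel edges)."""
    rows, cols = base.shape
    H = np.zeros((rows * N, cols * N), dtype=np.uint8)
    I = np.eye(N, dtype=np.uint8)
    for i in range(rows):
        for j in range(cols):
            block = np.zeros((N, N), dtype=np.uint8)
            for shift in rng.choice(N, size=base[i, j], replace=False):
                block ^= np.roll(I, shift, axis=1)  # one circulant per edge
            H[i*N:(i+1)*N, j*N:(j+1)*N] = block
    return H

base = np.array([[1, 2, 1, 0],   # hypothetical protograph (not the article's)
                 [0, 1, 2, 1]])
H = lift_protograph(base, N=8)   # 16 x 32 parity-check matrix
```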
On the Application of Time-Reversed Space-Time Block Code to Aeronautical Telemetry
2014-06-01
Keying (SOQPSK), bit error rate (BER), Orthogonal Frequency Division Multiplexing (OFDM), generalized time-reversed space-time block codes (GTR-STBC) ... Alamouti code [4]) is optimum [2]. Although OFDM is generally applied on a per-subcarrier basis in frequency-selective fading, it is not a viable ... Calderbank, "Finite-length MIMO decision feedback equalization for space-time block-coded signals over multipath-fading channels," IEEE Transactions on
De Lisi, Rosario; Milioto, Stefania; Muratore, Nicola
2009-01-01
The thermodynamics of conventional surfactants, block copolymers, and their mixtures in water is described in light of the enthalpy function. The two methodologies, i.e., the van't Hoff approach and isothermal calorimetry, used to determine the enthalpy of micellization of pure surfactants and block copolymers are described, and the van't Hoff method is critically discussed. The aqueous copolymer + surfactant mixtures were analyzed by means of isothermal titration calorimetry and the enthalpy of transfer of the copolymer from water to the aqueous surfactant solutions. Thermodynamic models are presented to show the procedure for extracting straightforward molecular insights from the bulk properties. PMID:19742173
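For reference, the van't Hoff route discussed above infers the micellization enthalpy indirectly from the temperature dependence of the critical micelle concentration (cmc). In its usual phase-separation-model form (a standard relation stated here as context, not quoted from the paper):

$$\Delta H_{\mathrm{mic}}^{\mathrm{vH}} = -\,R\,T^{2}\left(\frac{\partial \ln x_{\mathrm{cmc}}}{\partial T}\right)_{p}$$

Isothermal titration calorimetry instead measures the enthalpy directly; discrepancies between the calorimetric and van't Hoff values are one reason the van't Hoff method is critically discussed.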
Knowledge and Processes in Design
1992-09-03
... statement codings were then organized into larger control-flow structures centered around design components called modules. The general assumption was
Efficient preparation of large-block-code ancilla states for fault-tolerant quantum computation
NASA Astrophysics Data System (ADS)
Zheng, Yi-Cong; Lai, Ching-Yi; Brun, Todd A.
2018-03-01
Fault-tolerant quantum computation (FTQC) schemes that use multiqubit large block codes can potentially reduce the resource overhead to a great extent. A major obstacle is the requirement for a large number of clean ancilla states of different types without correlated errors inside each block. These ancilla states are usually logical stabilizer states of the data-code blocks, which are generally difficult to prepare if the code size is large. Previously, we have proposed an ancilla distillation protocol for Calderbank-Shor-Steane (CSS) codes by classical error-correcting codes. It was assumed that the quantum gates in the distillation circuit were perfect; however, in reality, noisy quantum gates may introduce correlated errors that are not treatable by the protocol. In this paper, we show that additional postselection by another classical error-detecting code can be applied to remove almost all correlated errors. Consequently, the revised protocol is fully fault tolerant and capable of preparing a large set of stabilizer states sufficient for FTQC using large block codes. At the same time, the yield rate can be boosted from O(t^(-2)) to O(1) in practice for an [[n, k, d = 2t + 1
Constructions for finite-state codes
NASA Technical Reports Server (NTRS)
Pollara, F.; Mceliece, R. J.; Abdel-Ghaffar, K.
1987-01-01
A class of codes called finite-state (FS) codes is defined and investigated. These codes, which generalize both block and convolutional codes, are defined by their encoders, which are finite-state machines with parallel inputs and outputs. A family of upper bounds on the free distance of a given FS code is derived from known upper bounds on the minimum distance of block codes. A general construction for FS codes is then given, based on the idea of partitioning a given linear block code into cosets of one of its subcodes, and it is shown that in many cases the FS codes constructed in this way have a d_free which is as large as possible. These codes are found without the need for lengthy computer searches, and have potential applications for future deep-space coding systems. The issue of catastrophic error propagation (CEP) for FS codes is also investigated.
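The coset-partitioning step behind the FS-code construction can be illustrated with a small linear code. The sketch below is illustrative (it uses the [7,4] Hamming code as the parent code, which is an assumption, not the paper's example): it partitions the code into cosets of a subcode, the building blocks among which a finite-state encoder switches.

```python
import numpy as np
from itertools import product

G = np.array([[1,0,0,0,1,1,0],   # generator matrix of the [7,4] Hamming code
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]], dtype=np.uint8)

def span(rows):
    """All GF(2) linear combinations of the given generator rows."""
    combos = set()
    for bits in product([0, 1], repeat=len(rows)):
        v = np.zeros(7, dtype=np.uint8)
        for r, b in zip(rows, bits):
            if b:
                v ^= r
        combos.add(tuple(v))
    return combos

code = span(list(G))           # all 16 codewords
subcode = span(list(G[:2]))    # 4-codeword subcode spanned by the first two rows

cosets, seen = [], set()
for c in code:
    if c not in seen:          # each unvisited codeword leads a new coset
        coset = {tuple(np.array(c) ^ np.array(s)) for s in subcode}
        cosets.append(coset)
        seen |= coset
print(len(cosets))             # 4 cosets, each a shifted copy of the subcode
```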
A Synchronization Algorithm and Implementation for High-Speed Block Codes Applications. Part 4
NASA Technical Reports Server (NTRS)
Lin, Shu; Zhang, Yu; Nakamura, Eric B.; Uehara, Gregory T.
1998-01-01
Block codes have trellis structures and decoders amenable to high speed CMOS VLSI implementation. For a given CMOS technology, these structures enable operating speeds higher than those achievable using convolutional codes for only modest reductions in coding gain. As a result, block codes have tremendous potential for satellite trunk and other future high-speed communication applications. This paper describes a new approach for implementation of the synchronization function for block codes. The approach utilizes the output of the Viterbi decoder and therefore employs the strength of the decoder. Its operation requires no knowledge of the signal-to-noise ratio of the received signal, has a simple implementation, adds no overhead to the transmitted data, and has been shown to be effective in simulation for received SNR greater than 2 dB.
Bounds on Block Error Probability for Multilevel Concatenated Codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Moorthy, Hari T.; Stojanovic, Diana
1996-01-01
Maximum likelihood decoding of long block codes is not feasible due to its large complexity. Some classes of codes are shown to be decomposable into multilevel concatenated codes (MLCC). For these codes, multistage decoding provides a good trade-off between performance and complexity. In this paper, we derive an upper bound on the probability of block error for MLCC. We use this bound to evaluate the difference in performance for different decompositions of some codes. The examples given show that a significant reduction in complexity can be achieved by increasing the number of decoding stages. The resulting performance degradation varies for different decompositions. A guideline is given for finding good m-level decompositions.
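The flavor of such a bound can be shown with the standard union bound on block error probability for soft-decision ML decoding over BPSK/AWGN, P_B <= sum_d A_d Q(sqrt(2 d R Eb/N0)), where A_d is the number of codewords of Hamming weight d. The sketch below evaluates this generic bound (not the tighter MLCC bound derived in the paper) for the known weight enumerator of the [7,4] Hamming code:

```python
import math

def q_func(x):
    """Gaussian tail function Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def union_bound(weight_enum, rate, ebno_db):
    """Union bound on block error probability, ML decoding, BPSK/AWGN.
    weight_enum maps Hamming weight d -> multiplicity A_d."""
    ebno = 10 ** (ebno_db / 10)
    return sum(A_d * q_func(math.sqrt(2 * d * rate * ebno))
               for d, A_d in weight_enum.items())

A = {3: 7, 4: 7, 7: 1}                      # [7,4] Hamming weight enumerator
print(union_bound(A, rate=4/7, ebno_db=6.0))
```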
Protograph LDPC Codes Over Burst Erasure Channels
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Dolinar, Sam; Jones, Christopher
2006-01-01
In this paper we design high-rate protograph-based LDPC codes suitable for binary erasure channels. To simplify the encoder and decoder implementation for high-data-rate transmission, the structure of the codes is based on protographs and circulants. These LDPC codes can improve data link and network layer protocols in support of communication networks. Two classes of codes were designed. One class is designed for large block sizes with an iterative decoding threshold that approaches the capacity of binary erasure channels. The other class is designed for short block sizes based on maximizing the minimum stopping set size. For high code rates and short blocks the second class outperforms the first class.
Encoders for block-circulant LDPC codes
NASA Technical Reports Server (NTRS)
Andrews, Kenneth; Dolinar, Sam; Thorpe, Jeremy
2005-01-01
In this paper, we present two encoding methods for block-circulant LDPC codes. The first is an iterative encoding method based on the erasure decoding algorithm, and the computations required are well organized due to the block-circulant structure of the parity check matrix. The second method uses block-circulant generator matrices, and the encoders are very similar to those for recursive convolutional codes. Some encoders of the second type have been implemented in a small Field Programmable Gate Array (FPGA) and operate at 100 Msymbols/second.
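Encoding with a circulant generator matrix reduces to cyclic shifts and XORs, which is what makes these encoders hardware-friendly. A minimal sketch of multiplying a message by one binary circulant block (illustrative only; real codes combine many blocks and use sparse first rows, and the row shown is assumed):

```python
import numpy as np

def circulant_encode(msg, first_row):
    """Multiply a message vector by a binary circulant matrix over GF(2).
    The circulant is defined by its first row; row i is a right shift by i."""
    out = np.zeros(len(first_row), dtype=np.uint8)
    for i, bit in enumerate(msg):
        if bit:                          # accumulate the shifted rows mod 2
            out ^= np.roll(first_row, i)
    return out

msg = np.array([1, 0, 1, 1, 0, 0, 0, 1], dtype=np.uint8)
row = np.array([1, 1, 0, 1, 0, 0, 0, 0], dtype=np.uint8)  # assumed sparse first row
parity = circulant_encode(msg, row)
```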
Protograph based LDPC codes with minimum distance linearly growing with block size
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Jones, Christopher; Dolinar, Sam; Thorpe, Jeremy
2005-01-01
We propose several LDPC code constructions that simultaneously achieve good threshold and error floor performance. Minimum distance is shown to grow linearly with block size (similar to regular codes of variable degree at least 3) by considering ensemble average weight enumerators. Our constructions are based on projected graph, or protograph, structures that support high-speed decoder implementations. As with irregular ensembles, our constructions are sensitive to the proportion of degree-2 variable nodes. A code with too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code with too many such nodes tends not to exhibit a minimum distance that grows linearly with block length. In this paper we also show that precoding can be used to lower the threshold of regular LDPC codes. The decoding thresholds of the proposed codes, which have minimum distance increasing linearly with block size, outperform those of regular LDPC codes. Furthermore, a family of low- to high-rate codes, with thresholds that adhere closely to their respective channel capacity thresholds, is presented. Simulation results for a few example codes show that the proposed codes have low error floors as well as good threshold SNR performance.
Adaptive bit plane quadtree-based block truncation coding for image compression
NASA Astrophysics Data System (ADS)
Li, Shenda; Wang, Jin; Zhu, Qing
2018-04-01
Block truncation coding (BTC) is a fast image compression technique applied in the spatial domain. Traditional BTC and its variants mainly focus on reducing computational complexity for low-bit-rate compression, at the cost of lower quality of the decoded images, especially for images with rich texture. To solve this problem, in this paper, a quadtree-based block truncation coding algorithm combined with adaptive bit plane transmission is proposed. First, the direction of the edge in each block is detected using the Sobel operator. For the blocks of minimal size, an adaptive bit plane is utilized to optimize the BTC, depending on the MSE loss when encoded by absolute moment block truncation coding (AMBTC). Extensive experimental results show that our method gains 0.85 dB PSNR on average compared to other state-of-the-art BTC variants, so it is desirable for real-time image compression applications.
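As background for the AMBTC-based cost measure mentioned above, the basic absolute moment block truncation coder is compact enough to show in full. A sketch of standard AMBTC (assumed 4x4 blocks; not the paper's quadtree variant):

```python
import numpy as np

def ambtc_encode(block):
    """AMBTC: represent a block by a bit plane plus two reconstruction levels,
    the means of the pixels above and below the block mean."""
    mean = block.mean()
    bitplane = block >= mean
    hi = block[bitplane].mean() if bitplane.any() else mean
    lo = block[~bitplane].mean() if (~bitplane).any() else mean
    return bitplane, lo, hi

def ambtc_decode(bitplane, lo, hi):
    return np.where(bitplane, hi, lo)

block = np.random.randint(0, 256, (4, 4)).astype(float)
bp, lo, hi = ambtc_encode(block)
recon = ambtc_decode(bp, lo, hi)
mse = np.mean((block - recon) ** 2)   # the distortion that drives quadtree splitting
```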
Encoders for block-circulant LDPC codes
NASA Technical Reports Server (NTRS)
Divsalar, Dariush (Inventor); Abbasfar, Aliazam (Inventor); Jones, Christopher R. (Inventor); Dolinar, Samuel J. (Inventor); Thorpe, Jeremy C. (Inventor); Andrews, Kenneth S. (Inventor); Yao, Kung (Inventor)
2009-01-01
Methods and apparatus to encode message input symbols in accordance with an accumulate-repeat-accumulate code with repetition three or four are disclosed. Block circulant matrices are used. A first method and apparatus make use of the block-circulant structure of the parity check matrix. A second method and apparatus use block-circulant generator matrices.
Maximum-likelihood soft-decision decoding of block codes using the A* algorithm
NASA Technical Reports Server (NTRS)
Ekroot, L.; Dolinar, S.
1994-01-01
The A* algorithm finds the path in a finite-depth binary tree that optimizes a function. Here, it is applied to maximum-likelihood soft-decision decoding of block codes, where the function optimized over the codewords is the likelihood function of the received sequence given each codeword. The algorithm considers codewords one bit at a time, making use of the most reliable received symbols first and pursuing only the partially expanded codewords that might be maximally likely. A version of the A* algorithm for maximum-likelihood decoding of block codes has been implemented for block codes up to 64 bits in length. The efficiency of this algorithm makes simulations of codes up to length 64 feasible. This article details the implementation currently in use, compares its decoding complexity with that of exhaustive search and Viterbi decoding algorithms, and presents performance curves obtained with this implementation of the A* algorithm for several codes.
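The search order described above (most reliable symbols first, expanding only prefixes that could still be maximally likely) can be sketched for a small code whose codeword list is available explicitly. This is an illustration of the A* idea under that assumption, not the article's optimized implementation; for 64-bit codes the prefix-consistency test is done through the code's tree structure rather than a codeword list.

```python
import heapq
import numpy as np

def astar_ml_decode(codewords, r):
    """A* maximum-likelihood soft-decision decoding, BPSK mapping 0->+1, 1->-1.
    codewords: (M, n) array of 0/1 codewords; r: received real vector.
    The ML codeword maximizes the correlation sum_i r_i * (1 - 2*c_i)."""
    n = len(r)
    order = np.argsort(-np.abs(r))          # visit most reliable symbols first
    rp, cw = r[order], codewords[:, order]
    # h(t) = best achievable metric on remaining positions t..n-1 (admissible)
    tail = np.append(np.cumsum(np.abs(rp)[::-1])[::-1], 0.0)
    heap = [(-tail[0], 0, ())]              # (-(g+h), depth, partial codeword)
    while heap:
        neg_f, t, prefix = heapq.heappop(heap)
        if t == n:                          # first full path popped is ML-optimal
            c = np.empty(n, dtype=np.uint8)
            c[order] = prefix               # undo the reliability reordering
            return c
        g = -neg_f - tail[t]
        for bit in (0, 1):
            cand = prefix + (bit,)
            # expand only prefixes consistent with at least one codeword
            if np.any(np.all(cw[:, :t + 1] == cand, axis=1)):
                g_new = g + rp[t] * (1 - 2 * bit)
                heapq.heappush(heap, (-(g_new + tail[t + 1]), t + 1, cand))
```

Because the heuristic never underestimates the achievable correlation, the first complete codeword popped from the queue is guaranteed to be the ML decision.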
Fast ITTBC using pattern code on subband segmentation
NASA Astrophysics Data System (ADS)
Koh, Sung S.; Kim, Hanchil; Lee, Kooyoung; Kim, Hongbin; Jeong, Hun; Cho, Gangseok; Kim, Chunghwa
2000-06-01
Iterated-transformation-theory-based coding suffers from very high computational complexity in the encoding phase, due to its exhaustive search. In this paper, our proposed image coding algorithm preprocesses an original image into a subband segmentation image by wavelet transform before image coding, to reduce encoding complexity. A similar block is searched for by using 24 block pattern codes, which encode the edge information in an image block, on the domain pool of the subband segmentation. As a result, numerical data show that the encoding time of the proposed coding method is reduced by 98.82% compared with that of Jacquin's method, while the loss in quality relative to Jacquin's is about 0.28 dB in PSNR, which is visually negligible.
Blocking and the detection of odor components in blends.
Hosler, J S; Smith, B H
2000-09-01
Recent studies of olfactory blocking have revealed that binary odorant mixtures are not always processed as though they give rise to mixture-unique configural properties. When animals are conditioned to one odorant (A) and then conditioned to a mixture of that odorant with a second (X), the ability to learn or express the association of X with reinforcement appears to be reduced relative to animals that were not preconditioned to A. A recent model of odor-based response patterns in the insect antennal lobe predicts that the strength of the blocking effect will be related to the perceptual similarity between the two odorants, i.e. greater similarity should increase the blocking effect. Here, we test that model in the honeybee Apis mellifera by first establishing a generalization matrix for three odorants and then testing for blocking between all possible combinations of them. We confirm earlier findings demonstrating the occurrence of the blocking effect in olfactory learning of compound stimuli. We show that the occurrence and the strength of the blocking effect depend on the odorants used in the experiment. In addition, we find very good agreement between our results and the model, and less agreement between our results and an alternative model recently proposed to explain the effect.
Codestream-Based Identification of JPEG 2000 Images with Different Coding Parameters
NASA Astrophysics Data System (ADS)
Watanabe, Osamu; Fukuhara, Takahiro; Kiya, Hitoshi
A method of identifying JPEG 2000 images with different coding parameters, such as code-block sizes, quantization step sizes, and resolution levels, is presented. It does not produce false-negative matches regardless of different coding parameters (compression rate, code-block size, and discrete wavelet transform (DWT) resolution levels) or quantization step sizes. This feature is not provided by conventional methods. Moreover, the proposed approach is fast because it uses the number of zero bit-planes, which can be extracted from the JPEG 2000 codestream by only parsing the header information, without embedded block coding with optimized truncation (EBCOT) decoding. The experimental results revealed the effectiveness of image identification based on the new method.
Chaki, Tomohiro; Sugino, Shigekazu; Janicki, Piotr K; Ishioka, Yoshiya; Hatakeyama, Yosuke; Hayase, Tomo; Kaneuchi-Yamashita, Miki; Kohri, Naonori; Yamakage, Michiaki
2016-01-01
Mixtures of various local anesthetics, such as lidocaine and ropivacaine, have been widely used. However, their efficacy and safety for scalp nerve blocks and local infiltration during awake craniotomy have not been fully elucidated. We prospectively investigated 53 patients who underwent awake craniotomy. Scalp block was performed for the blockade of the supraorbital, supratrochlear, zygomaticotemporal, auriculotemporal, greater occipital, and lesser occipital nerves with a mixture containing equal volumes of 2% lidocaine and 0.75% ropivacaine, including 5 μg/mL of epinephrine. Infiltration anesthesia was applied at the site of the skin incision using the same mixture. The study outcomes included changes in heart rate and blood pressure after head pinning and skin incision, and the incidence of severe pain on emergence from anesthesia. Total doses and plasma concentrations of lidocaine and ropivacaine were measured at different time points after performing the block. The heart rate and blood pressure after head pinning were marginally, but significantly, increased when compared with baseline values. There were no significant differences in heart rate and blood pressure before and after the skin incision. Nineteen percent of the patients (10/53) complained of incisional pain at emergence from anesthesia. The highest observed blood concentrations of lidocaine and ropivacaine were 1.9±0.9 and 1.1±0.4 μg/mL, respectively. No acute anesthetic toxicity symptoms were observed. Scalp block with a mixture of lidocaine and ropivacaine seems to provide effective and safe anesthetic management in patients undergoing awake craniotomy.
Nakamura, Issei
2014-05-29
We studied the thermodynamic properties of ion solvation in polymer blends and block copolymer melts and developed a dipolar self-consistent field theory for polymer mixtures. Our theory accounts for the chain connectivity of polymerized monomers, the compressibility of the liquid mixtures under electrostriction, the permanent and induced dipole moments of monomers, and the resultant dielectric contrast among species. In our coarse-grained model, dipoles are attached to the monomers and allowed to rotate freely in response to electrostatic fields. We demonstrate that a strong electrostatic field near an ion reorganizes dipolar monomers, resulting in nonmonotonic changes in the volume fraction profile and the dielectric function of the polymers with respect to those of simple liquid mixtures. For the parameter sets used, the spatial variations near an ion can be in the range of 1 nm or larger, producing significant differences in the solvation energy among simple liquid mixtures, polymer blends, and block copolymers. The solvation energy of an ion depends substantially on the chain length in block copolymers; thus, our theory predicts the preferential solvation of ions arising from differences in chain length.
High-efficiency Gaussian key reconciliation in continuous variable quantum key distribution
NASA Astrophysics Data System (ADS)
Bai, ZengLiang; Wang, XuYang; Yang, ShenShen; Li, YongMin
2016-01-01
Efficient reconciliation is a crucial step in continuous variable quantum key distribution. The progressive-edge-growth (PEG) algorithm is an efficient method to construct relatively short block length low-density parity-check (LDPC) codes. The quasi-cyclic construction method can extend short block length codes and further eliminate the shortest cycles. In this paper, by combining the PEG algorithm and the quasi-cyclic construction method, we design long block length irregular LDPC codes with high error-correcting capacity. Based on these LDPC codes, we achieve high-efficiency Gaussian key reconciliation with slice reconciliation based on multilevel coding/multistage decoding, with an efficiency of 93.7%.
NASA Technical Reports Server (NTRS)
Kumar, A.; Graves, R. A., Jr.; Weilmuenster, K. J.
1980-01-01
A vectorized code, EQUIL, was developed for calculating the equilibrium chemistry of a reacting gas mixture on the Control Data STAR-100 computer. The code provides species mole fractions, mass fractions, and thermodynamic and transport properties of the mixture for a given temperature, pressure, and elemental mass fractions. The code is set up for the electron, H, He, C, O, N system of elements. In all, 24 chemical species are included.
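As a small illustration of one quantity such a code reports, converting species mole fractions to mass fractions needs only the species molar masses. This is the generic mixture relation w_i = x_i M_i / sum_j x_j M_j, not EQUIL's internals, and the example composition below is assumed:

```python
def mole_to_mass_fractions(x, molar_mass):
    """w_i = x_i * M_i / sum_j(x_j * M_j) for a gas mixture."""
    mix = sum(xi * Mi for xi, Mi in zip(x, molar_mass))
    return [xi * Mi / mix for xi, Mi in zip(x, molar_mass)]

# Example: dissociated-air-like mixture of N2, O2, NO, N, O (molar masses in g/mol).
x = [0.70, 0.15, 0.05, 0.04, 0.06]
M = [28.014, 31.998, 30.006, 14.007, 15.999]
print(mole_to_mass_fractions(x, M))
```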
Program structure-based blocking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bertolli, Carlo; Eichenberger, Alexandre E.; O'Brien, John K.
2017-09-26
Embodiments relate to program structure-based blocking. An aspect includes receiving source code corresponding to a computer program by a compiler of a computer system. Another aspect includes determining a prefetching section in the source code by a marking module of the compiler. Yet another aspect includes performing, by a blocking module of the compiler, blocking of instructions located in the prefetching section into instruction blocks, such that the instruction blocks of the prefetching section only contain instructions that are located in the prefetching section.
Perceptually-Based Adaptive JPEG Coding
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Rosenholtz, Ruth; Null, Cynthia H. (Technical Monitor)
1996-01-01
An extension to the JPEG standard (ISO/IEC DIS 10918-3) allows spatially adaptive coding of still images. As with baseline JPEG coding, one quantization matrix applies to an entire image channel, but in addition the user may specify a multiplier for each 8x8 block, which scales the quantization matrix, yielding the new matrix for the block. MPEG 1 and 2 use much the same scheme, except that there the multiplier changes only on macroblock boundaries. We propose a method for perceptual optimization of the set of multipliers. We compute the perceptual error for each block based upon DCT quantization error adjusted according to contrast sensitivity, light adaptation, and contrast masking, and pick the set of multipliers which yields maximally flat perceptual error over the blocks of the image. We investigate the bit-rate savings due to this adaptive coding scheme and the relative importance of the different sorts of masking on adaptive coding.
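The per-block mechanism is simple to state in code: every 8x8 block is quantized with the base matrix scaled by its own multiplier. A sketch of that quantization step only (illustrative names and a flat stand-in matrix; the perceptual-error model that chooses the multipliers is the substance of the paper and is not reproduced here):

```python
import numpy as np
from scipy.fft import dctn, idctn

def quantize_block(block, q_matrix, multiplier):
    """DCT-quantize one 8x8 block with a per-block scaled quantization matrix,
    as in the spatially adaptive JPEG extension described above."""
    q = q_matrix * multiplier                  # block-adaptive matrix
    coeff = dctn(block - 128.0, norm='ortho')  # level-shifted 2-D DCT
    return np.round(coeff / q), q

def dequantize_block(levels, q):
    return idctn(levels * q, norm='ortho') + 128.0

base_q = np.full((8, 8), 16.0)                 # stand-in quantization matrix
block = np.random.randint(0, 256, (8, 8)).astype(float)
levels, q = quantize_block(block, base_q, multiplier=1.5)
recon = dequantize_block(levels, q)
```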
Investigation of Near Shannon Limit Coding Schemes
NASA Technical Reports Server (NTRS)
Kwatra, S. C.; Kim, J.; Mo, Fan
1999-01-01
Turbo codes can deliver performance that is very close to the Shannon limit. This report investigates algorithms for convolutional turbo codes and block turbo codes. Both coding schemes can achieve performance near the Shannon limit. The performance of the schemes is obtained using computer simulations. There are three sections in this report. The first section is the introduction, which discusses fundamental knowledge about coding, block coding, and convolutional coding. In the second section, the basic concepts of convolutional turbo codes are introduced, and the performance of turbo codes, especially high-rate turbo codes, is presented from the simulation results. After introducing all the parameters that help turbo codes achieve such good performance, it is concluded that the output weight distribution should be the main consideration in designing turbo codes. Based on the output weight distribution, performance bounds for turbo codes are given. Then, the relationships between the output weight distribution and factors like the generator polynomial, the interleaver, and the puncturing pattern are examined. A criterion for the best selection of system components is provided. The puncturing pattern algorithm is discussed in detail, and different puncturing patterns are compared for each high rate. For most of the high-rate codes, the puncturing pattern does not show any significant effect on the code performance if a pseudo-random interleaver is used in the system. For some special-rate codes with poor performance, an alternative puncturing algorithm is designed which restores their performance close to the Shannon limit. Finally, in section three, for iterative decoding of block codes, the method of building trellises for block codes, the structure of the iterative decoding system, and the calculation of extrinsic values are discussed.
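The puncturing mechanism the report analyzes can be shown in a few lines: a rate-1/3 turbo encoder output (one systematic plus two parity streams) is thinned by a periodic pattern to reach a higher rate. A sketch with an assumed alternating pattern, not one of the report's optimized patterns:

```python
def puncture(systematic, parity1, parity2, pattern):
    """Periodically delete parity bits. pattern[s][t] == 1 keeps parity
    stream s at time t (mod period); systematic bits are always kept here."""
    period = len(pattern[0])
    out = []
    for t, (u, p1, p2) in enumerate(zip(systematic, parity1, parity2)):
        out.append(u)
        if pattern[0][t % period]:
            out.append(p1)
        if pattern[1][t % period]:
            out.append(p2)
    return out

# Alternate the two parity streams: one parity bit per info bit -> rate 1/2.
pattern = [[1, 0], [0, 1]]
coded = puncture([1, 0, 1, 1], [0, 1, 1, 0], [1, 1, 0, 1], pattern)
```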
Gonzato, Carlo; Semsarilar, Mona; Jones, Elizabeth R; Li, Feng; Krooshof, Gerard J P; Wyman, Paul; Mykhaylyk, Oleksandr O; Tuinier, Remco; Armes, Steven P
2014-08-06
Block copolymer self-assembly is normally conducted via post-polymerization processing at high dilution. In the case of block copolymer vesicles (or "polymersomes"), this approach normally leads to relatively broad size distributions, which is problematic for many potential applications. Herein we report the rational synthesis of low-polydispersity diblock copolymer vesicles in concentrated solution via polymerization-induced self-assembly using reversible addition-fragmentation chain transfer (RAFT) polymerization of benzyl methacrylate. Our strategy utilizes a binary mixture of a relatively long and a relatively short poly(methacrylic acid) stabilizer block, which become preferentially expressed at the outer and inner poly(benzyl methacrylate) membrane surface, respectively. Dynamic light scattering was utilized to construct phase diagrams to identify suitable conditions for the synthesis of relatively small, low-polydispersity vesicles. Small-angle X-ray scattering (SAXS) was used to verify that this binary mixture approach produced vesicles with significantly narrower size distributions compared to conventional vesicles prepared using a single (short) stabilizer block. Calculations performed using self-consistent mean field theory (SCMFT) account for the preferred self-assembled structures of the block copolymer binary mixtures and are in reasonable agreement with experiment. Finally, both SAXS and SCMFT indicate a significant degree of solvent plasticization for the membrane-forming poly(benzyl methacrylate) chains.
NASA Technical Reports Server (NTRS)
Tsuchiya, T.; Murthy, S. N. B.
1982-01-01
A computer code is presented for the prediction of off-design axial-flow compressor performance with water ingestion. Four processes were considered to account for the aero-thermo-mechanical interactions during operation with air-water droplet mixture flow: (1) blade performance change, (2) centrifuging of water droplets, (3) heat and mass transfer between the gaseous and the liquid phases, and (4) droplet size redistribution due to break-up. Stage and compressor performance are obtained by a stage-stacking procedure using representative velocity diagrams at rotor inlet and outlet mean radii. The code has options for performance estimation with (1) mixtures of gases and (2) gas-water droplet mixtures, and therefore can take into account the humidity present in ambient conditions. A test case illustrates the method of using the code. The code follows closely the methodology and architecture of the NASA-STGSTK code for the estimation of axial-flow compressor performance with air flow.
Šmigovec Ljubič, Tina; Pahovnik, David; Žigon, Majda; Žagar, Ema
2012-01-01
The separation of a mixture of three poly(styrene-block-t-butyl methacrylate) copolymers (PS-b-PtBMA), consisting of polystyrene (PS) blocks of similar length and t-butyl methacrylate (PtBMA) blocks of different lengths, was performed using various chromatographic techniques, that is, a gradient liquid chromatography on reversed-phase (C18 and C8) and normal-phase columns, a liquid chromatography under critical conditions for polystyrene as well as a fully automated two-dimensional liquid chromatography that separates block copolymers by chemical composition in the first dimension and by molar mass in the second dimension. The results show that a partial separation of the mixture of PS-b-PtBMA copolymers can be achieved only by gradient liquid chromatography on reversed-phase columns. The coelution of the two block copolymers is ascribed to a much shorter PtBMA block length, compared to the PS block, as well as a small difference in the length of the PtBMA block in two of these copolymers, which was confirmed by SEC-MALS and NMR spectroscopy. PMID:22489207
Least reliable bits coding (LRBC) for high data rate satellite communications
NASA Technical Reports Server (NTRS)
Vanderaar, Mark; Budinger, James; Wagner, Paul
1992-01-01
LRBC, a bandwidth-efficient multilevel/multistage block-coded modulation technique, is analyzed. LRBC uses simple multilevel component codes that provide increased error protection on increasingly unreliable modulated bits in order to maintain an overall high code rate that increases spectral efficiency. Soft-decision multistage decoding is used to make decisions on unprotected bits through corrections made on more protected bits. Analytical expressions and tight performance bounds are used to show that LRBC can achieve increased spectral efficiency and maintain equivalent or better power efficiency compared to that of BPSK. The relative simplicity of Galois field algebra vs the Viterbi algorithm and the availability of high-speed commercial VLSI for block codes indicate that LRBC using block codes is a desirable method for high data rate implementations.
Han, Yuchun; Xia, Lin; Zhu, Linyi; Zhang, Shusheng; Li, Zhibo; Wang, Yilin
2012-10-30
The association behaviors of the single-chain surfactant dodecyltrimethylammonium bromide (DTAB) with the double hydrophilic block copolymers poly(ethylene glycol)-b-poly(sodium glutamate) (PEG(113)-PGlu(50) or PEG(113)-PGlu(100)) were investigated using isothermal titration microcalorimetry, cryogenic transmission electron microscopy, circular dichroism, ζ potential, and particle size measurements. The electrostatic interaction between DTAB and the oppositely charged carboxylate groups of PEG-PGlu induces the formation of super-amphiphiles, which further self-assemble into ordered aggregates. Depending upon the charge ratio between DTAB and the glutamic acid residues of the copolymer, the mixture solutions can change from transparent to opalescent without precipitation. Depending upon the chain length of the PGlu block, the mixtures of DTAB and the PEG-PGlu diblocks form two different aggregates at their corresponding electroneutral points. Spherical and rod-like aggregates are formed in the PEG(113)-PGlu(50)/DTAB mixture, while vesicular aggregates are observed in the PEG(113)-PGlu(100)/DTAB mixture solution. Because the PEG(113)-PGlu(100)/DTAB super-amphiphile has more hydrophobic content than the PEG(113)-PGlu(50)/DTAB super-amphiphile, the former prefers forming vesicular aggregates with lower curvature, while the latter prefers forming ordered aggregates with higher curvature, such as spheres and rods.
Multi-stage decoding for multi-level block modulation codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Kasami, Tadao
1991-01-01
Various types of multistage decoding for multilevel block modulation codes, in which the decoding of a component code at each stage can be either soft decision or hard decision, maximum likelihood or bounded distance, are discussed. Error performance of the codes is analyzed for a memoryless additive channel based on various types of multistage decoding, and upper bounds on the probability of an incorrect decoding are derived. It was found that, if the component codes of a multilevel modulation code and the types of decoding at the various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. It was also found that the difference in performance between suboptimum multistage soft-decision maximum likelihood decoding of a modulation code and single-stage optimum decoding of the overall code is very small: only a fraction of a dB loss in SNR at a block-error probability of 10(exp -6). Multistage decoding of multilevel modulation codes thus offers a way to achieve the best of three worlds: bandwidth efficiency, coding gain, and decoding complexity.
Selective encryption for H.264/AVC video coding
NASA Astrophysics Data System (ADS)
Shi, Tuo; King, Brian; Salama, Paul
2006-02-01
Due to the ease with which digital data can be manipulated and due to the ongoing advancements that have brought us closer to pervasive computing, the secure delivery of video and images has become a challenging problem. Despite the advantages and opportunities that digital video provides, illegal copying and distribution as well as plagiarism of digital audio, images, and video is still ongoing. In this paper we describe two techniques for securing H.264 coded video streams. The first technique, SEH264Algorithm1, groups the data into the following blocks: (1) a block that contains the sequence parameter set and the picture parameter set, (2) a block containing a compressed intra coded frame, (3) a block containing the slice header of a P slice, all the headers of the macroblocks within the same P slice, and all the luma and chroma DC coefficients belonging to all the macroblocks within the same slice, (4) a block containing all the AC coefficients, and (5) a block containing all the motion vectors. The first three are encrypted whereas the last two are not. The second method, SEH264Algorithm2, relies on the use of multiple slices per coded frame. The algorithm searches the compressed video sequence for start codes (0x000001) and then encrypts the next N bits of data.
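As a rough sketch of the start-code scan underlying SEH264Algorithm2 as summarized above, the following locates 0x000001 start codes and XORs the bytes after each one; the XOR keystream is an insecure placeholder for a real cipher, and the byte count and test stream are invented:

```python
import itertools

def encrypt_after_start_codes(data: bytes, n_bytes: int, keystream) -> bytes:
    """Find each 0x000001 start code and XOR the following n_bytes
    (standing in for the 'next N bits' of the abstract) with a keystream.
    NOTE: XOR with a fixed keystream is a placeholder, not a secure cipher."""
    out = bytearray(data)
    i = 0
    while i <= len(out) - 3:
        if out[i] == 0 and out[i + 1] == 0 and out[i + 2] == 1:
            for j in range(i + 3, min(i + 3 + n_bytes, len(out))):
                out[j] ^= next(keystream)
            i += 3 + n_bytes  # skip past the encrypted region
        else:
            i += 1
    return bytes(out)

stream = b"\x00\x00\x01\x67\xaa\xbb\x00\x00\x01\x41\xcc\xdd"
print(encrypt_after_start_codes(stream, 2, itertools.cycle([0x5A])).hex())
```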
Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3
NASA Technical Reports Server (NTRS)
Lin, Shu
1998-01-01
Decoding algorithms based on the trellis representation of a code (block or convolutional) drastically reduce decoding complexity. The best known and most commonly used trellis-based decoding algorithm is the Viterbi algorithm. It is a maximum likelihood decoding algorithm. Convolutional codes with the Viterbi decoding have been widely used for error control in digital communications over the last two decades. This chapter is concerned with the application of the Viterbi decoding algorithm to linear block codes. First, the Viterbi algorithm is presented. Then, optimum sectionalization of a trellis to minimize the computational complexity of a Viterbi decoder is discussed and an algorithm is presented. Some design issues for IC (integrated circuit) implementation of a Viterbi decoder are considered and discussed. Finally, a new decoding algorithm based on the principle of compare-select-add is presented. This new algorithm can be applied to both block and convolutional codes and is more efficient than the conventional Viterbi algorithm based on the add-compare-select principle. This algorithm is particularly efficient for rate 1/n antipodal convolutional codes and their high-rate punctured codes. It reduces computational complexity by one-third compared with the Viterbi algorithm.
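For orientation, here is a compact hard-decision Viterbi decoder for the standard (7,5) rate-1/2 convolutional code; it only illustrates the trellis principle the chapter applies to linear block codes and is not the chapter's own algorithm or its compare-select-add variant:

```python
import itertools

def step(state, bit):
    """Encode one bit from a 2-bit state (s1, s2); returns next state and
    the two output bits of generators 1+D+D^2 (7) and 1+D^2 (5)."""
    s1, s2 = state
    return (bit, s1), (bit ^ s1 ^ s2, bit ^ s2)

def viterbi(received_pairs):
    """Hard-decision Viterbi: keep, per state, the minimum Hamming-metric
    path; return the input sequence of the overall best survivor."""
    states = list(itertools.product((0, 1), repeat=2))
    metric = {s: (0 if s == (0, 0) else float("inf")) for s in states}
    paths = {s: [] for s in states}
    for r in received_pairs:
        new_metric, new_paths = {}, {}
        for s in states:
            for bit in (0, 1):
                ns, out = step(s, bit)
                m = metric[s] + (out[0] ^ r[0]) + (out[1] ^ r[1])
                if ns not in new_metric or m < new_metric[ns]:
                    new_metric[ns], new_paths[ns] = m, paths[s] + [bit]
        metric, paths = new_metric, new_paths
    return paths[min(metric, key=metric.get)]

# Decode a noiseless encoding of 1, 0, 1 (trellis not terminated here).
print(viterbi([(1, 1), (1, 0), (0, 0)]))  # -> [1, 0, 1]
```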
Short-Block Protograph-Based LDPC Codes
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Dolinar, Samuel; Jones, Christopher
2010-01-01
Short-block low-density parity-check (LDPC) codes of a special type are intended to be especially well suited for potential applications that include transmission of command and control data, cellular telephony, data communications in wireless local area networks, and satellite data communications. [In general, LDPC codes belong to a class of error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels.] The codes of the present special type exhibit low error floors, low bit and frame error rates, and low latency (in comparison with related prior codes). These codes also achieve a low maximum rate of undetected errors over all signal-to-noise ratios, without requiring the use of cyclic redundancy checks, which would significantly increase the overhead for short blocks. These codes have protograph representations; this is advantageous in that, for reasons that exceed the scope of this article, the applicability of protograph representations makes it possible to design high-speed iterative decoders that utilize belief-propagation algorithms.
The trellis complexity of convolutional codes
NASA Technical Reports Server (NTRS)
Mceliece, R. J.; Lin, W.
1995-01-01
It has long been known that convolutional codes have a natural, regular trellis structure that facilitates the implementation of Viterbi's algorithm. It has gradually become apparent that linear block codes also have a natural, though not in general a regular, 'minimal' trellis structure, which allows them to be decoded with a Viterbi-like algorithm. In both cases, the complexity of the Viterbi decoding algorithm can be accurately estimated by the number of trellis edges per encoded bit. It would, therefore, appear that we are in a good position to make a fair comparison of the Viterbi decoding complexity of block and convolutional codes. Unfortunately, however, this comparison is somewhat muddled by the fact that some convolutional codes, the punctured convolutional codes, are known to have trellis representations that are significantly less complex than the conventional trellis. In other words, the conventional trellis representation for a convolutional code may not be the minimal trellis representation. Thus, ironically, at present we seem to know more about the minimal trellis representation for block than for convolutional codes. In this article, we provide a remedy, by developing a theory of minimal trellises for convolutional codes. (A similar theory has recently been given by Sidorenko and Zyablov). This allows us to make a direct performance-complexity comparison for block and convolutional codes. A by-product of our work is an algorithm for choosing, from among all generator matrices for a given convolutional code, what we call a trellis-minimal generator matrix, from which the minimal trellis for the code can be directly constructed. Another by-product is that, in the new theory, punctured convolutional codes no longer appear as a special class, but simply as high-rate convolutional codes whose trellis complexity is unexpectedly small.
An electrostatic Particle-In-Cell code on multi-block structured meshes
NASA Astrophysics Data System (ADS)
Meierbachtol, Collin S.; Svyatskiy, Daniil; Delzanno, Gian Luca; Vernon, Louis J.; Moulton, J. David
2017-12-01
We present an electrostatic Particle-In-Cell (PIC) code on multi-block, locally structured, curvilinear meshes called Curvilinear PIC (CPIC). Multi-block meshes are essential to capture complex geometries accurately and with good mesh quality, something that would not be possible with single-block structured meshes that are often used in PIC and for which CPIC was initially developed. Despite the structured nature of the individual blocks, multi-block meshes resemble unstructured meshes in a global sense and introduce several new challenges, such as the presence of discontinuities in the mesh properties and coordinate orientation changes across adjacent blocks, and polyjunction points where an arbitrary number of blocks meet. In CPIC, these challenges have been met by an approach that features: (1) a curvilinear formulation of the PIC method: each mesh block is mapped from the physical space, where the mesh is curvilinear and arbitrarily distorted, to the logical space, where the mesh is uniform and Cartesian on the unit cube; (2) a mimetic discretization of Poisson's equation suitable for multi-block meshes; and (3) a hybrid (logical-space position/physical-space velocity), asynchronous particle mover that mitigates the performance degradation created by the necessity to track particles as they move across blocks. The numerical accuracy of CPIC was verified using two standard plasma-material interaction tests, which demonstrate good agreement with the corresponding analytic solutions. Compared to PIC codes on unstructured meshes, which have also been used for their flexibility in handling complex geometries but whose performance suffers from issues associated with data locality and indirect data access patterns, PIC codes on multi-block structured meshes may offer the best compromise for capturing complex geometries while also maintaining solution accuracy and computational efficiency.
Multi-stage decoding for multi-level block modulation codes
NASA Technical Reports Server (NTRS)
Lin, Shu
1991-01-01
In this paper, we investigate various types of multi-stage decoding for multi-level block modulation codes, in which the decoding of a component code at each stage can be either soft-decision or hard-decision, maximum likelihood or bounded-distance. Error performance of codes is analyzed for a memoryless additive channel based on various types of multi-stage decoding, and upper bounds on the probability of an incorrect decoding are derived. Based on our study and computation results, we find that, if the component codes of a multi-level modulation code and the types of decoding at the various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. In particular, we find that the difference in performance between suboptimum multi-stage soft-decision maximum likelihood decoding of a modulation code and single-stage optimum decoding of the overall code is very small: only a fraction of a dB loss in SNR at a block-error probability of 10(exp -6). Multi-stage decoding of multi-level modulation codes really offers a way to achieve the best of three worlds: bandwidth efficiency, coding gain, and decoding complexity.
Experimental Investigations on Axially and Eccentrically Loaded Masonry Walls
NASA Astrophysics Data System (ADS)
Keshava, Mangala; Raghunath, Seshagiri Rao
2017-12-01
In India, un-reinforced masonry walls are often used as main structural components in load bearing structures. The Indian code on masonry accounts for the reduction in strength of walls by using stress reduction factors in its design philosophy. This code was introduced in 1987 and reaffirmed in 1995. The present study investigates the use of these factors for south Indian masonry. Also, given the growing popularity of block work construction, the aim of this study was to find out the suitability of the factors given in the Indian code to block work masonry. Normally, the load carrying capacity of masonry walls can be assessed in three ways, namely, (1) tests on masonry constituents, (2) tests on masonry prisms and (3) tests on full-scale wall specimens. Tests on bricks/blocks, cement-sand mortar, brick/block masonry prisms and 14 full-scale brick/block masonry walls formed the experimental investigation. The behavior of the walls was investigated under varying slenderness and eccentricity ratios. Hollow concrete blocks, normally used as in-fill masonry, can be considered as load bearing elements, as their load carrying capacity was found to be high when compared to conventional brick masonry. Higher slenderness and eccentricity ratios drastically reduced the strength capacity of south Indian brick masonry walls. The reduction in strength due to slenderness and eccentricity is presented in the form of stress reduction factors in the Indian code. The factors obtained through experiments on eccentrically loaded brick masonry walls were lower, while those for brick/block masonry under axial loads were higher, than the values indicated in the Indian code. Also, the reduction in strength is different for brick and block work masonry, indicating the need for separate stress reduction factors for these two masonry materials.
Coding tools investigation for next generation video coding based on HEVC
NASA Astrophysics Data System (ADS)
Chen, Jianle; Chen, Ying; Karczewicz, Marta; Li, Xiang; Liu, Hongbin; Zhang, Li; Zhao, Xin
2015-09-01
The new state-of-the-art video coding standard, H.265/HEVC, was finalized in 2013 and achieves roughly 50% bit rate saving compared to its predecessor, H.264/MPEG-4 AVC. This paper provides evidence that there is still potential for further coding efficiency improvements. A brief overview of HEVC is first given in the paper. Then, our improvements on each main module of HEVC are presented. For instance, the recursive quadtree block structure is extended to support larger coding units and transform units. The motion information prediction scheme is improved by advanced temporal motion vector prediction, which inherits the motion information of each small block within a large block from a temporal reference picture. Cross-component prediction with a linear prediction model improves intra prediction, and overlapped block motion compensation improves the efficiency of inter prediction. Furthermore, coding of both intra and inter prediction residual is improved by an adaptive multiple transform technique. Finally, in addition to the deblocking filter and SAO, an adaptive loop filter is applied to further enhance the reconstructed picture quality. This paper describes the above-mentioned techniques in detail and evaluates their coding performance benefits based on the common test conditions used during HEVC development. The simulation results show that significant performance improvement over the HEVC standard can be achieved, especially for high resolution video material.
Variable Coded Modulation software simulation
NASA Astrophysics Data System (ADS)
Sielicki, Thomas A.; Hamkins, Jon; Thorsen, Denise
This paper reports on the design and performance of a new Variable Coded Modulation (VCM) system. This VCM system comprises eight of NASA's recommended codes from the Consultative Committee for Space Data Systems (CCSDS) standards, including four turbo and four AR4JA/C2 low-density parity-check codes, together with six modulation types (BPSK, QPSK, 8-PSK, 16-APSK, 32-APSK, 64-APSK). The signaling protocol for the transmission mode is based on a CCSDS recommendation. The coded modulation may be dynamically chosen, block to block, to optimize throughput.
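A toy illustration of block-to-block mode selection in the spirit of VCM follows; the mode table, thresholds, and margin are invented and do not reproduce the CCSDS modes or the paper's signaling protocol:

```python
MODES = [
    # (name, info bits per symbol, required Es/No in dB) - assumed values
    ("BPSK   r1/2", 0.50, 1.0),
    ("QPSK   r1/2", 1.00, 4.0),
    ("8PSK   r2/3", 2.00, 9.0),
    ("16APSK r3/4", 3.00, 12.5),
]

def pick_mode(esno_db, margin_db=1.0):
    """Choose, block to block, the highest-throughput mode whose
    threshold (plus a link margin) the current Es/No supports."""
    feasible = [m for m in MODES if esno_db >= m[2] + margin_db]
    return max(feasible, key=lambda m: m[1]) if feasible else MODES[0]

for esno in (2.0, 6.0, 14.0):
    print(esno, "->", pick_mode(esno)[0])
```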
Sinabung Volcanic Ash Utilization As The Additive for Paving Block Quality A and B
NASA Astrophysics Data System (ADS)
Sembiring, I. S.; Hastuty, I. P.
2017-03-01
Paving block is one of the building materials used as the top layer of the road structure besides asphalt and concrete. Paving block is made of mixed materials such as portland cement or other adhesive materials, water and aggregate. In this research, the material used as an additive to cement and concrete is volcanic ash from Mount Sinabung; material testing shows that Sinabung ash contains 74.3% silica (SiO2). This research aims to analyze the behavior of paving blocks of quality A and B with and without a mixture of Sinabung ash, to analyze the workability of fresh concrete using Sinabung ash as an additive, and to compare the test results of paving blocks with and without Sinabung ash. The samples comprise a normal mix without additive and mixes with the addition of Sinabung ash at 5%, 10%, 15%, 20% and 25% of the volume of concrete/m3. Each variation consists of 10 concrete samples with a 28-day curing time. Compressive strength and water absorption tests were performed on the samples to determine whether they are in accordance with the type needed. According to the test results, paving blocks with Sinabung ash reach quality A with curing at 0%, 5% and 10% mixture, with compressive strengths of 50.14 MPa, 46.20 MPa and 1.49 MPa, respectively, and reach quality B with curing at 15%, 20%, 25% mixture, and without curing at 0%, 5%, 10%, 15%, 20% and 25% mixture. The measured absorption values of 6.66%, 6.73%, 6.88%, 7.03%, 7.09% and 7.16% all exceed the SNI limit of 6%, so all samples correspond to quality C in absorption. Based on the compressive strength and absorption data obtained, Sinabung ash cannot fully replace cement as the binder because of its low CaO content.
High Frequency Scattering Code in a Distributed Processing Environment
1991-06-01
[Report documentation page (SF 298) residue; only fragments of the abstract survive. Recoverable fragments note that the use of automated analysis tools is indicated, mention a tool developed by Pacific-Sierra Research Corporation and marketed by Intel Corporation, and quote an input deck (XQ: execute code, EN: end code) that differs from the manual because the "PP" option is disabled in the modified code.]
USDA-ARS?s Scientific Manuscript database
We report on the effect of changing nanoscale morphology on pervaporation of ethanol/water mixtures through block copolymer membranes. Experiments were conducted using polystyrene-b-polybutadiene-b-polystyrene (SBS) copolymers with polybutadiene (PB) as the ethanol transporting block, using an 8 wt%...
Comparison of heavy-ion transport simulations: Collision integral in a box
NASA Astrophysics Data System (ADS)
Zhang, Ying-Xun; Wang, Yong-Jia; Colonna, Maria; Danielewicz, Pawel; Ono, Akira; Tsang, Manyee Betty; Wolter, Hermann; Xu, Jun; Chen, Lie-Wen; Cozma, Dan; Feng, Zhao-Qing; Das Gupta, Subal; Ikeno, Natsumi; Ko, Che-Ming; Li, Bao-An; Li, Qing-Feng; Li, Zhu-Xia; Mallik, Swagata; Nara, Yasushi; Ogawa, Tatsuhiko; Ohnishi, Akira; Oliinychenko, Dmytro; Papa, Massimo; Petersen, Hannah; Su, Jun; Song, Taesoo; Weil, Janus; Wang, Ning; Zhang, Feng-Shou; Zhang, Zhen
2018-03-01
Simulations by transport codes are indispensable to extract valuable physical information from heavy-ion collisions. In order to understand the origins of discrepancies among different widely used transport codes, we compare 15 such codes under controlled conditions of a system confined to a box with periodic boundary, initialized with Fermi-Dirac distributions at saturation density and temperatures of either 0 or 5 MeV. In such calculations, one is able to check separately the different ingredients of a transport code. In this second publication of the code evaluation project, we only consider the two-body collision term; i.e., we perform cascade calculations. When the Pauli blocking is artificially suppressed, the collision rates are found to be consistent for most codes (to within 1% or better) with analytical results, or with completely controlled results of a basic cascade code. In order to reach that goal, it was necessary to eliminate correlations within the same pair of colliding particles that can be present depending on the adopted collision prescription. In calculations with active Pauli blocking, the blocking probability was found to deviate from the expected reference values. The reason is found in substantial phase-space fluctuations and smearing tied to numerical algorithms and model assumptions in the representation of phase space. This results in a reduction of the blocking probability in most transport codes, so that the simulated system gradually evolves away from the Fermi-Dirac toward a Boltzmann distribution. Since the numerical fluctuations are weaker in the Boltzmann-Uehling-Uhlenbeck codes, the Fermi-Dirac statistics is maintained there for a longer time than in the quantum molecular dynamics codes. As a result of this investigation, we are able to make judgements about the most effective strategies in transport simulations for determining the collision probabilities and the Pauli blocking. Investigation in a similar vein of other ingredients in transport calculations, such as the mean-field propagation or the production of nucleon resonances and mesons, will be discussed in future publications.
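For context, the Pauli-blocking acceptance test that cascade codes of this kind implement can be sketched as follows; the final-state occupancies are taken as given here, whereas estimating them from the test-particle phase space is exactly where the compared codes differ:

```python
import random

def pauli_blocked(f3: float, f4: float) -> bool:
    """An attempted collision scattering into final phase-space cells with
    occupancies f3 and f4 is accepted with probability (1-f3)(1-f4),
    i.e., blocked with probability 1 - (1-f3)(1-f4)."""
    return random.random() >= (1.0 - f3) * (1.0 - f4)

# For a cold Fermi gas, f -> 1 inside the Fermi sphere, so collisions
# scattering into occupied cells are almost always blocked.
trials = 10000
blocked = sum(pauli_blocked(0.9, 0.9) for _ in range(trials))
print(blocked / trials)  # approximately 0.99
```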
Structure, rheology and shear alignment of Pluronic block copolymer mixtures.
Newby, Gemma E; Hamley, Ian W; King, Stephen M; Martin, Christopher M; Terrill, Nicholas J
2009-01-01
The structure and flow behaviour of binary mixtures of Pluronic block copolymers P85 and P123 is investigated by small-angle scattering, rheometry and mobility tests. Micelle dimensions are probed by dynamic light scattering. The micelle hydrodynamic radius for the 50/50 mixture is larger than that for either P85 or P123 alone, due to the formation of mixed micelles with a higher association number. The phase diagram for 50/50 mixtures contains regions of cubic and hexagonal phases similar to those for the parent homopolymers; however, the region of stability of the cubic phase is enhanced at low temperature and concentrations above 40 wt%. This is ascribed to favourable packing of the mixed micelles containing core blocks with two different chain lengths, but similar corona chain lengths. The shear flow alignment of face-centred cubic and hexagonal phases is probed by in situ small-angle X-ray or neutron scattering with simultaneous rheology. The hexagonal phase can be aligned using steady shear in a Couette geometry; however, the high-modulus cubic phase cannot be aligned well in this way and requires the application of oscillatory shear or compression.
Movahed, Mohammad-Reza; Hashemzadeh, Mehrtash; Jamal, M Mazen
2005-10-01
Diabetes mellitus (DM) is a major risk factor for cardiovascular disease and mortality. There is some evidence that third-degree atrioventricular (AV) block occurs more commonly in patients with DM. In this study, we evaluated any possible association between DM and third-degree AV block using International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes in a very large inpatient database. We used patient treatment files containing discharge diagnoses using ICD-9 codes of inpatient treatment from all Veterans Health Administration hospitals. The cohort was stratified using the ICD-9-CM code for DM (n = 293,124), a control group with hypertension but no DM (n = 552,623), and the ICD-9 codes for third-degree AV block (426.0) and smoking (305.1, V15.82). We performed multivariate analysis adjusting for coronary artery disease, congestive heart failure, smoking, and hyperlipidemia. Continuous and binary variables were analyzed using chi-square and Fisher exact tests. A third-degree AV block diagnosis was present in 3,240 of the DM patients (1.1%) vs 3,367 patients (0.6%) in the control group. Using multivariate analysis, DM remained strongly associated with third-degree AV block (odds ratio, 3.1; 95% confidence interval, 3.0 to 3.3; p < 0.0001). Third-degree AV block occurs significantly more often in patients with DM. This finding may, in part, explain the high cardiovascular mortality in DM patients.
FANS-3D Users Guide (ESTEP Project ER 201031)
2016-08-01
governing laminar and turbulent flows in body-fitted curvilinear grids. The code employs multi-block overset (chimera) grids, including fully matched... governing incompressible flow in body-fitted grids. The code allows for multi-block overset (chimera) grids, which can be fully matched, arbitrarily... interested reader may consult the Chimera Overset Structured Mesh-Interpolation Code (COSMIC) Users' Manual (Chen, 2009). The input file used for
The Statistical Analysis Techniques to Support the NGNP Fuel Performance Experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bihn T. Pham; Jeffrey J. Einerson
2010-06-01
This paper describes the development and application of statistical analysis techniques to support the AGR experimental program on NGNP fuel performance. The experiments conducted in the Idaho National Laboratory's Advanced Test Reactor employ fuel compacts placed in a graphite cylinder shrouded by a steel capsule. The tests are instrumented with thermocouples embedded in graphite blocks and the target quantity (fuel/graphite temperature) is regulated by the He-Ne gas mixture that fills the gap volume. Three techniques for statistical analysis, namely control charting, correlation analysis, and regression analysis, are implemented in the SAS-based NGNP Data Management and Analysis System (NDMAS) for automated processing and qualification of the AGR measured data. The NDMAS also stores daily neutronic (power) and thermal (heat transfer) code simulation results along with the measurement data, allowing for their combined use and comparative scrutiny. The ultimate objective of this work includes (a) a multi-faceted system for data monitoring and data accuracy testing, (b) identification of possible modes of diagnostics deterioration and changes in experimental conditions, (c) qualification of data for use in code validation, and (d) identification and use of data trends to support effective control of test conditions with respect to the test target. Analysis results and examples given in the paper show the three statistical analysis techniques providing a complementary capability to warn of thermocouple failures. It also suggests that the regression analysis models relating calculated fuel temperatures and thermocouple readings can enable online regulation of experimental parameters (i.e. gas mixture content), to effectively maintain the target quantity (fuel temperature) within a given range.
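A minimal control-charting sketch of the kind described above follows; the baseline data and limits are invented, and NDMAS itself is SAS-based rather than Python:

```python
import statistics

def control_chart_flags(baseline, readings, k=3.0):
    """Flag readings outside mean +/- k*sigma control limits derived
    from an in-control baseline period."""
    mu = statistics.fmean(baseline)
    sigma = statistics.stdev(baseline)
    return [i for i, x in enumerate(readings) if abs(x - mu) > k * sigma]

baseline = [1002, 998, 1001, 1000, 999, 1003]       # degrees C, made up
print(control_chart_flags(baseline, [1001, 1250]))  # -> [1], a suspect TC
```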
Mixture and odorant processing in the olfactory systems of insects: a comparative perspective.
Clifford, Marie R; Riffell, Jeffrey A
2013-11-01
Natural olfactory stimuli are often complex mixtures of volatiles, of which the identities and ratios of constituents are important for odor-mediated behaviors. Despite this importance, the mechanism by which the olfactory system processes this complex information remains an area of active study. In this review, we describe recent progress in how odorants and mixtures are processed in the brain of insects. We use a comparative approach toward contrasting olfactory coding and the behavioral efficacy of mixtures in different insect species, and organize these topics around four sections: (1) Examples of the behavioral efficacy of odor mixtures and the olfactory environment; (2) mixture processing in the periphery; (3) mixture coding in the antennal lobe; and (4) evolutionary implications and adaptations for olfactory processing. We also include pertinent background information about the processing of individual odorants and comparative differences in wiring and anatomy, as these topics have been richly investigated and inform the processing of mixtures in the insect olfactory system. Finally, we describe exciting studies that have begun to elucidate the role of the processing of complex olfactory information in evolution and speciation.
Zero-block mode decision algorithm for H.264/AVC.
Lee, Yu-Ming; Lin, Yinyi
2009-03-01
In a previous paper, we proposed a zero-block intermode decision algorithm for H.264 video coding based upon the number of zero-blocks of 4 x 4 DCT coefficients between the current macroblock and the co-located macroblock. The proposed algorithm achieves a significant reduction in computation, but its performance is limited for high bit-rate coding. To improve computation efficiency, in this paper, we suggest an enhanced zero-block decision algorithm, which uses an early zero-block detection method to compute the number of zero-blocks instead of direct DCT and quantization (DCT/Q) calculation and incorporates two adequate decision methods for semi-stationary and nonstationary regions of a video sequence. In addition, the zero-block decision algorithm is also applied to intramode prediction in P frames. The enhanced zero-block decision algorithm yields an average reduction of 27% in total encoding time compared to the original zero-block decision algorithm.
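To illustrate the quantity driving the decision, the following sketch counts 4 x 4 blocks whose quantized DCT coefficients are all zero; the orthonormal floating-point DCT and flat quantizer are simplifications of H.264's actual integer transform and quantization:

```python
import numpy as np

def dct4_matrix():
    """Orthonormal 4-point DCT-II matrix."""
    n = 4
    m = np.array([[np.cos(np.pi * (2 * j + 1) * i / (2 * n))
                   for j in range(n)] for i in range(n)])
    m[0] *= np.sqrt(1 / n)
    m[1:] *= np.sqrt(2 / n)
    return m

def count_zero_blocks(residual, qstep):
    """Count 4x4 blocks whose quantized DCT coefficients are all zero."""
    d = dct4_matrix()
    h, w = residual.shape
    zero = 0
    for y in range(0, h, 4):
        for x in range(0, w, 4):
            coeff = d @ residual[y:y + 4, x:x + 4] @ d.T
            if not np.any(np.round(coeff / qstep)):
                zero += 1
    return zero

res = np.zeros((16, 16))
res[0:4, 0:4] = 30.0                   # one active block, made-up data
print(count_zero_blocks(res, qstep=8.0))  # -> 15 of 16 blocks are zero
```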
Improved lossless intra coding for H.264/MPEG-4 AVC.
Lee, Yung-Lyul; Han, Ki-Hun; Sullivan, Gary J
2006-09-01
A new lossless intra coding method based on sample-by-sample differential pulse code modulation (DPCM) is presented as an enhancement of the H.264/MPEG-4 AVC standard. The H.264/AVC design includes a multidirectional spatial prediction method to reduce spatial redundancy by using neighboring samples as a prediction for the samples in a block of data to be encoded. In the new lossless intra coding method, the spatial prediction is performed based on samplewise DPCM instead of in the block-based manner used in the current H.264/AVC standard, while the block structure is retained for the residual difference entropy coding process. We show that the new method, based on samplewise DPCM, does not have a major complexity penalty, despite its apparent pipeline dependencies. Experiments show that the new lossless intra coding method reduces the bit rate by approximately 12% in comparison with the lossless intra coding method previously included in the H.264/AVC standard. As a result, the new method is currently being adopted into the H.264/AVC standard in a new enhancement project.
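A minimal sketch of samplewise horizontal DPCM in the spirit described above follows; the actual H.264/AVC mode also covers vertical prediction and entropy-codes the residuals block-wise:

```python
import numpy as np

def dpcm_encode(row):
    """Each sample is predicted from its left neighbor; the first sample
    is sent as-is. Lossless: decoding reverses the prediction exactly."""
    residual = np.empty_like(row)
    residual[0] = row[0]
    residual[1:] = row[1:] - row[:-1]
    return residual

def dpcm_decode(residual):
    return np.cumsum(residual)

row = np.array([100, 101, 103, 103, 99], dtype=np.int64)
res = dpcm_encode(row)
assert np.array_equal(dpcm_decode(res), row)
print(res)  # small residuals entropy-code more compactly than raw samples
```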
Performance Bounds on Two Concatenated, Interleaved Codes
NASA Technical Reports Server (NTRS)
Moision, Bruce; Dolinar, Samuel
2010-01-01
A method has been developed for computing bounds on the performance of a code comprised of two linear binary codes generated by two encoders serially concatenated through an interleaver. Originally intended for use in evaluating the performances of some codes proposed for deep-space communication links, the method can also be used in evaluating the performances of short-block-length codes in other applications. The method applies, more specifically, to a communication system in which the following processes take place: At the transmitter, the original binary information that one seeks to transmit is first processed by an encoder into an outer code (Co) characterized by, among other things, a pair of numbers (n,k), where n (n > k) is the total number of code bits associated with k information bits and n - k bits are used for correcting or at least detecting errors. Next, the outer code is processed through either a block or a convolutional interleaver. In the block interleaver, the words of the outer code are processed in blocks of I words. In the convolutional interleaver, the interleaving operation is performed bit-wise in N rows with delays that are multiples of B bits. The output of the interleaver is processed through a second encoder to obtain an inner code (Ci) characterized by (ni,ki). The output of the inner code is transmitted over an additive-white-Gaussian-noise channel characterized by a symbol signal-to-noise ratio (SNR) Es/No and a bit SNR Eb/No. At the receiver, an inner decoder generates estimates of bits. Depending on whether a block or a convolutional interleaver is used at the transmitter, the sequence of estimated bits is processed through a block or a convolutional de-interleaver, respectively, to obtain estimates of code words. Then the estimates of the code words are processed through an outer decoder, which generates estimates of the original information along with flags indicating which estimates are presumed to be correct and which are found to be erroneous. From the perspective of the present method, the topic of major interest is the performance of the communication system as quantified in the word-error rate and the undetected-error rate as functions of the SNRs and the total latency of the interleaver and inner code. The method is embodied in equations that describe bounds on these functions. Throughout the derivation of the equations that embody the method, it is assumed that the decoder for the outer code corrects any error pattern of t or fewer errors, detects any error pattern of s or fewer errors, may detect some error patterns of more than s errors, and does not correct any patterns of more than t errors. Because a mathematically complete description of the equations that embody the method and of their derivation would greatly exceed the space available for this article, it must suffice to summarize by reporting that the derivation includes consideration of several complex issues, including relationships between latency and memory requirements for block and convolutional codes, burst error statistics, enumeration of error-event intersections, and effects of different interleaving depths. In a demonstration, the method was used to calculate bounds on the performances of several communication systems, each based on serial concatenation of a (63,56) expurgated Hamming code with a convolutional inner code through a convolutional interleaver.
The bounds calculated by use of the method were compared with results of numerical simulations of performances of the systems to show the regions where the bounds are tight (see figure).
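For intuition about the interleaver in this system model, here is a minimal block interleaver sketch; the word count and word length are invented, and in the real system the interleaving sits between the outer and inner encoders as described above:

```python
def block_interleave(words):
    """words: a block of I equal-length code words. Symbols are written
    word by word and read out column-wise, so a channel burst is spread
    over many outer code words."""
    return [w[j] for j in range(len(words[0])) for w in words]

def block_deinterleave(symbols, num_words, word_len):
    words = [[None] * word_len for _ in range(num_words)]
    it = iter(symbols)
    for j in range(word_len):
        for i in range(num_words):
            words[i][j] = next(it)
    return words

words = [list("AAAA"), list("BBBB"), list("CCCC")]  # I = 3 code words
tx = block_interleave(words)
print("".join(tx))  # ABCABCABCABC - a 3-symbol burst hits each word once
assert block_deinterleave(tx, 3, 4) == words
```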
Extracellular Matrix Induced Integrin Signal Transduction and Breast Cancer Invasion.
1995-10-01
[Report documentation page (SF 298) residue. Recoverable keywords: metalloproteinase, breast, mammary, integrin, collagen, RGDS, matrilysin, breast cancer. A surviving fragment describes areas of necrosis in the center of the tumor, with a portion of the mammary gland visible at the lower right, and notes that matrilysin was shown by in situ hybridization.]
Coded diffraction system in X-ray crystallography using a boolean phase coded aperture approximation
NASA Astrophysics Data System (ADS)
Pinilla, Samuel; Poveda, Juan; Arguello, Henry
2018-03-01
Phase retrieval is a problem present in many applications such as optics, astronomical imaging, computational biology and X-ray crystallography. Recent work has shown that the phase can be better recovered when the acquisition architecture includes a coded aperture, which modulates the signal before diffraction, such that the underlying signal is recovered from coded diffraction patterns. This type of modulation, applied before the diffraction operation, can be obtained using a phase coded aperture placed just after the sample under study. However, a practical implementation of a phase coded aperture in an X-ray application is not feasible, because it is computationally modeled as a matrix with complex entries, which requires changing the phase of the diffracted beams. Changing the phase implies finding a material that can deviate the direction of an X-ray beam, which can considerably increase implementation costs. Hence, this paper describes a low-cost coded X-ray diffraction system based on block-unblock coded apertures that enables phase reconstruction. The proposed system approximates the phase coded aperture with a block-unblock coded aperture by using the detour-phase method. The SAXS/WAXS X-ray crystallography software was used to simulate the diffraction patterns of a real crystal structure called Rhombic Dodecahedron. Additionally, several simulations were carried out to analyze the performance of the block-unblock approximations in recovering the phase from the simulated diffraction patterns. The quality of the reconstructions was measured in terms of the Peak Signal to Noise Ratio (PSNR). Results show that the performance of the block-unblock phase coded aperture approximation decreases by at most 12.5% compared with the phase coded aperture, and the quality of the reconstructions using the boolean approximation is up to 2.5 dB lower in PSNR than the phase coded aperture reconstructions.
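For reference, the PSNR used above to score the reconstructions is the standard definition; the sketch below assumes an 8-bit image (peak value 255) and made-up test data:

```python
import numpy as np

def psnr(reference, reconstruction, peak=255.0):
    """PSNR = 10 log10(peak^2 / MSE), in dB."""
    mse = np.mean((reference.astype(float)
                   - reconstruction.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

ref = np.random.randint(0, 256, (64, 64))
noisy = np.clip(ref + np.random.normal(0, 5, ref.shape), 0, 255)
print(round(psnr(ref, noisy), 1))  # roughly 34 dB for sigma = 5
```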
Ensemble Weight Enumerators for Protograph LDPC Codes
NASA Technical Reports Server (NTRS)
Divsalar, Dariush
2006-01-01
Recently LDPC codes with projected graph, or protograph structures have been proposed. In this paper, finite length ensemble weight enumerators for LDPC codes with protograph structures are obtained. Asymptotic results are derived as the block size goes to infinity. In particular we are interested in obtaining ensemble average weight enumerators for protograph LDPC codes which have minimum distance that grows linearly with block size. As with irregular ensembles, linear minimum distance property is sensitive to the proportion of degree-2 variable nodes. In this paper the derived results on ensemble weight enumerators show that linear minimum distance condition on degree distribution of unstructured irregular LDPC codes is a sufficient but not a necessary condition for protograph LDPC codes.
Chavis, Michelle A.; Smilgies, Detlef-M.; Wiesner, Ulrich B.; Ober, Christopher K.
2015-01-01
Thin films of block copolymers are extremely attractive for nanofabrication because of their ability to form uniform and periodic nanoscale structures by microphase separation. One shortcoming of this approach is that to date the design of a desired equilibrium structure requires synthesis of a block copolymer de novo within the corresponding volume ratio of the blocks. In this work, we investigated solvent vapor annealing in supported thin films of poly(2-hydroxyethyl methacrylate)-block-poly(methyl methacrylate) [PHEMA-b-PMMA] by means of grazing incidence small angle X–ray scattering (GISAXS). A spin-coated thin film of lamellar block copolymer was solvent vapor annealed to induce microphase separation and improve the long-range order of the self-assembled pattern. Annealing in a mixture of solvent vapors using a controlled volume ratio of solvents (methanol, MeOH, and tetrahydrofuran, THF), which are chosen to be preferential for each block, enabled selective formation of ordered lamellae, gyroid, hexagonal or spherical morphologies from a single block copolymer with a fixed volume fraction. The selected microstructure was then kinetically trapped in the dry film by rapid drying. To our knowledge, this paper describes the first reported case where in-situ methods are used to study the transition of block copolymer films from one initial disordered morphology to four different ordered morphologies, covering much of the theoretical diblock copolymer phase diagram. PMID:26819574
Execution of a parallel edge-based Navier-Stokes solver on commodity graphics processor units
NASA Astrophysics Data System (ADS)
Corral, Roque; Gisbert, Fernando; Pueblas, Jesus
2017-02-01
The implementation of an edge-based three-dimensional Reynolds-Averaged Navier-Stokes solver for unstructured grids able to run on multiple graphics processing units (GPUs) is presented. Loops over edges, which are the most time-consuming part of the solver, have been written to exploit the massively parallel capabilities of GPUs. Non-blocking communications between parallel processes and between the GPU and the central processing unit (CPU) have been used to enhance code scalability. The code is written using a mixture of C++ and OpenCL, to allow the execution of the source code on GPUs. The Message Passing Interface (MPI) library is used to allow the parallel execution of the solver on multiple GPUs. A comparative study of the solver's parallel performance is carried out using a cluster of CPUs and another of GPUs. It is shown that a single GPU is up to 64 times faster than a single CPU core. The parallel scalability of the solver is mainly degraded by the loss of computing efficiency of the GPU when the size of the case decreases. However, for large enough grid sizes, the scalability is strongly improved. A cluster featuring commodity GPUs and a high-bandwidth network is ten times less costly and consumes 33% less energy than a CPU-based cluster with equivalent computational power.
NASA Astrophysics Data System (ADS)
Kim, Sehee; Char, Kookheon; Sohn, Byeong-Hyeok
2010-03-01
Diblock copolymers consisting of two immiscible polymer blocks covalently bonded together form various self-assembled nanostructures such as spheres, cylinders, and lamellae in bulk phase. In a selective solvent, however, they assemble into micelles with soluble corona brushes and immiscible cores. Both polystyrene-poly(4-vinylpyridine) (PS-b-P4VP) and polystyrene-poly(2-vinylpyridine) (PS-b-P2VP) diblock copolymers form micelles with PS coronas and P4VP or P2VP cores in a PS selective solvent (toluene). By varying the mixture ratio between PS-b-P4VP and PS-b-P2VP, composite films based on the micellar mixtures of PS-b-P4VP and PS-b-P2VP were obtained by spin-coating, followed by the solvent annealing with tetrahydrofuran (THF) vapor. Since THF is a solvent for both PS and P2VP blocks and, at the same time, a non-solvent for the P4VP block, PS-P2VP micelles transformed to lamellar multilayers while PS-P4VP micelles remained intact during the THF annealing. The spontaneous evolution of nanostructure in composite films consisting of lamellae layers with BCP micelles were investigated in detail by cross-sectional TEM and AFM.
Error Control Coding Techniques for Space and Satellite Communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.; Takeshita, Oscar Y.; Cabral, Hermano A.
1998-01-01
It is well known that the BER performance of a parallel concatenated turbo-code improves roughly as 1/N, where N is the information block length. However, it has been observed by Benedetto and Montorsi that for most parallel concatenated turbo-codes, the FER performance does not improve monotonically with N. In this report, we study the FER of turbo-codes, and the effects of their concatenation with an outer code. Two methods of concatenation are investigated: across several frames and within each frame. Some asymmetric codes are shown to have excellent FER performance with an information block length of 16384. We also show that the proposed outer coding schemes can improve the BER performance as well by eliminating pathological frames generated by the iterative MAP decoding process.
The random coding bound is tight for the average code.
NASA Technical Reports Server (NTRS)
Gallager, R. G.
1973-01-01
The random coding bound of information theory provides a well-known upper bound to the probability of decoding error for the best code of a given rate and block length. The bound is constructed by upperbounding the average error probability over an ensemble of codes. The bound is known to give the correct exponential dependence of error probability on block length for transmission rates above the critical rate, but it gives an incorrect exponential dependence at rates below a second lower critical rate. Here we derive an asymptotic expression for the average error probability over the ensemble of codes used in the random coding bound. The result shows that the weakness of the random coding bound at rates below the second critical rate is due not to upperbounding the ensemble average, but rather to the fact that the best codes are much better than the average at low rates.
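For context, the random coding bound discussed above is usually written in Gallager's standard textbook form shown below; this is background, not an equation reproduced from the article:

```latex
% Random coding bound on the ensemble-average block error probability:
\bar{P}_e \;\le\; e^{-N E_r(R)},
\qquad
E_r(R) \;=\; \max_{0 \le \rho \le 1} \, \max_{Q} \bigl[ E_0(\rho, Q) - \rho R \bigr],
% with the Gallager function
E_0(\rho, Q) \;=\; -\ln \sum_{j} \Bigl[ \sum_{k} Q(k)\, P(j \mid k)^{1/(1+\rho)} \Bigr]^{1+\rho},
% where N is the block length, R the rate, Q the input distribution,
% and P(j|k) the channel transition probabilities.
```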
Daitou, Fumikazu; Maruta, Michito; Kawachi, Giichiro; Tsuru, Kanji; Matsuya, Shigeki; Terada, Yoshihiro; Ishikawa, Kunio
2010-05-01
In this study, we investigated a novel method for fabricating carbonate apatite blocks without ionic movement between the precursor and solution, by using a precursor that includes all constituent ions of carbonate apatite. A powder mixture prepared from dicalcium phosphate anhydrous and calcite at appropriate Ca/P ratios (1.5, 1.67, and 1.8) was used as the starting material. For preparation of the specimens, a slurry made from the powder mixture and distilled water was packed in a split stainless steel mold and heat-treated at temperatures ranging from 60 degrees C to 100 degrees C for up to 48 hours at 100% humidity. Carbonate apatite could be obtained above 70 degrees C, and monophasic carbonate apatite could be obtained from the powder mixture at a Ca/P ratio of 1.67. The carbonate content of the specimens was about 5-7%. The diametral tensile strength of the carbonate apatite blocks slightly decreased with increasing treatment temperature; this decrease is thought to be related to the crystal size of the carbonate apatite formed.
Dynamic Detection of Malicious Code in COTS Software
2000-04-01
[Report documentation page and product-comparison table residue. Recoverable fragments indicate that the tools were run against documented hostile applets and ActiveX controls, that some of the tools work only on mobile code (Java, ActiveX), and that a comparison table listed products such as eSafe Protect Desktop (9/9 hostile applets blocked, 13/17 blocked in a second category) and Surfinshield Online (9/9 blocked); Exploder is described as an ActiveX control that performs a clean shutdown of the computer.]
Power optimization of wireless media systems with space-time block codes.
Yousefi'zadeh, Homayoun; Jafarkhani, Hamid; Moshfeghi, Mehran
2004-07-01
We present analytical and numerical solutions to the problem of power control in wireless media systems with multiple antennas. We formulate a set of optimization problems aimed at minimizing the total power consumption of wireless media systems subject to a given level of QoS and an available bit rate. Our formulation takes into consideration the power consumption related to source coding, channel coding, and transmission over multiple transmit antennas. In our study, we consider Gauss-Markov and video source models, Rayleigh fading channels along with the Bernoulli/Gilbert-Elliott loss models, and space-time block codes.
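As background on the space-time block codes in this formulation, here is a sketch of the two-antenna Alamouti scheme; the channel is noiseless here purely to show the combining, and this is illustrative rather than the paper's system model:

```python
import numpy as np

rng = np.random.default_rng(0)
s1, s2 = (1 + 1j) / np.sqrt(2), (1 - 1j) / np.sqrt(2)  # two QPSK symbols
h1, h2 = (rng.normal(size=2) + 1j * rng.normal(size=2)) / np.sqrt(2)

# Two symbol periods: the antennas send (s1, s2), then (-s2*, s1*).
r1 = h1 * s1 + h2 * s2
r2 = -h1 * np.conj(s2) + h2 * np.conj(s1)

# Linear combining decouples the symbols (up to the channel gain),
# giving two-branch diversity with a rate-1 code.
gain = abs(h1) ** 2 + abs(h2) ** 2
s1_hat = (np.conj(h1) * r1 + h2 * np.conj(r2)) / gain
s2_hat = (np.conj(h2) * r1 - h1 * np.conj(r2)) / gain
print(np.allclose([s1_hat, s2_hat], [s1, s2]))  # True
```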
USDA-ARS?s Scientific Manuscript database
Burkholderia sacchari DSM 17165 was used as a biocatalyst for the production of poly-3-hydroxybutyrate-co-3-hydroxyvalerate block copolymers (Poly-3HB-block-3HV) from xylose and levulinic acid. Among the carbon source mixtures, levulinic acid was preferred and was consumed early in the fermentations...
van Kuringen, Huub P C; de la Rosa, Victor R; Fijten, Martin W M; Heuts, Johan P A; Hoogenboom, Richard
2012-05-14
The ability to merge the properties of poly(2-oxazoline)s and poly(ethylene imine) is of high interest for various biomedical applications, including gene delivery, biosensors, and switchable surfaces and nanoparticles. In the present research, a methodology for the controlled and selective hydrolysis of (co)poly(2-oxazoline)s is developed in an ethanol-water solvent mixture, opening the path toward a wide range of block poly(2-oxazoline-co-ethylene imine) (POx-PEI) copolymers with tunable properties. The unexpected influence of the selected ethanol-water binary solvent mixture on the hydrolysis kinetics and selectivity is highlighted in the pursuit of well-defined POx-PEI block copolymers. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
A Shifted Block Lanczos Algorithm 1: The Block Recurrence
NASA Technical Reports Server (NTRS)
Grimes, Roger G.; Lewis, John G.; Simon, Horst D.
1990-01-01
In this paper we describe a block Lanczos algorithm that is used as the key building block of a software package for the extraction of eigenvalues and eigenvectors of large sparse symmetric generalized eigenproblems. The software package comprises: a version of the block Lanczos algorithm specialized for spectrally transformed eigenproblems; an adaptive strategy for choosing shifts; and efficient codes for factoring large sparse symmetric indefinite matrices. This paper describes the algorithmic details of our block Lanczos recurrence. This uses a novel combination of block generalizations of several features that have only been investigated independently in the past. In particular new forms of partial reorthogonalization, selective reorthogonalization and local reorthogonalization are used, as is a new algorithm for obtaining the M-orthogonal factorization of a matrix. The heuristic shifting strategy, the integration with sparse linear equation solvers and numerical experience with the code are described in a companion paper.
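As a rough illustration of the block three-term recurrence described above, here is a minimal sketch in Python/NumPy. It assumes a dense symmetric matrix, substitutes full reorthogonalization for the partial/selective/local variants the paper actually develops, and omits the spectral transformation and shifting entirely.

```python
import numpy as np

def block_lanczos(A, Q0, steps):
    """Simplified block Lanczos recurrence for a symmetric matrix A.
    Q0 is an n-by-p orthonormal starting block. Returns the diagonal
    blocks (alphas), off-diagonal blocks (betas), and the basis."""
    Q = [Q0]
    alphas, betas = [], []
    B = None
    for j in range(steps):
        R = A @ Q[j]                      # expand the subspace
        if j > 0:
            R -= Q[j - 1] @ B.T           # three-term recurrence term
        Aj = Q[j].T @ R                   # diagonal block alpha_j
        R -= Q[j] @ Aj
        for Qi in Q:                      # full reorthogonalization
            R -= Qi @ (Qi.T @ R)
        Qn, B = np.linalg.qr(R)           # next block and beta_{j+1}
        alphas.append(Aj)
        betas.append(B)
        Q.append(Qn)
    return alphas, betas, np.hstack(Q[:-1])
```

The eigenvalues of the block-tridiagonal matrix assembled from the alphas and betas approximate extremal eigenvalues of A; the production code described above replaces the full-reorthogonalization loop with cheaper partial and selective variants.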
SINDA/FLUINT Stratified Tank Modeling for Cryogenic Propellant Tanks
NASA Technical Reports Server (NTRS)
Sakowski, Barbara
2014-01-01
A general purpose SINDA/FLUINT (S/F) stratified tank model was created to simulate self-pressurization and axial jet TVS. Stratified layers in the vapor and liquid are modeled using S/F lumps. The stratified tank model was constructed to permit incorporating the following additional features: multiple or singular lumps in the liquid and vapor regions of the tank; real gases (also mixtures) and compressible liquids; venting, pressurizing, and draining; condensation and evaporation/boiling; wall heat transfer; and elliptical, cylindrical, and spherical tank geometries. Extensive user logic is used to allow detailed tailoring, so everything does not have to be rebuilt from scratch. Most code input for a specific case is done through the Registers Data Block. Lump volumes are determined through user input of geometric tank dimensions (height, width, etc.), and the liquid level can be input either as a volume percentage of fill level or as the actual liquid-level height.
Sinda/Fluint Stratified Tank Modeling
NASA Technical Reports Server (NTRS)
Sakowski, Barbara A.
2014-01-01
A general purpose SINDA/FLUINT (S/F) stratified tank model was created and used to simulate the Ksite1 LH2 liquid self-pressurization tests as well as axial jet mixing within the liquid region of the tank. The S/F model employed stratified layers, i.e., S/F lumps, in the vapor ullage as well as in the liquid region. The model was constructed to analyze a general purpose stratified tank that could incorporate the following features: multiple or singular lumps in the liquid and vapor regions of the tank; real gases (also mixtures) and compressible liquids; venting, pressurizing, and draining; condensation and evaporation/boiling; wall heat transfer; and elliptical, cylindrical, and spherical tank geometries. Extensive user logic was used to allow for tailoring of the above features to specific cases. Most of the code input for a specific case could be done through the Registers Data Block.
Least Reliable Bits Coding (LRBC) for high data rate satellite communications
NASA Technical Reports Server (NTRS)
Vanderaar, Mark; Wagner, Paul; Budinger, James
1992-01-01
An analysis and discussion of a bandwidth efficient multi-level/multi-stage block coded modulation technique called Least Reliable Bits Coding (LRBC) is presented. LRBC uses simple multi-level component codes that provide increased error protection on increasingly unreliable modulated bits in order to maintain an overall high code rate that increases spectral efficiency. Further, soft-decision multi-stage decoding is used to make decisions on unprotected bits through corrections made on more protected bits. Using analytical expressions and tight performance bounds it is shown that LRBC can achieve increased spectral efficiency and maintain equivalent or better power efficiency compared to that of Binary Phase Shift Keying (BPSK). Bit error rates (BER) vs. channel bit energy with Additive White Gaussian Noise (AWGN) are given for a set of LRB Reed-Solomon (RS) encoded 8PSK modulation formats with an ensemble rate of 8/9. All formats exhibit a spectral efficiency of (log2(8))(8/9) = 2.67 information bps/Hz. Bit-by-bit coded and uncoded error probabilities with soft-decision information are determined. These are traded with code rate to determine parameters that achieve good performance. The relative simplicity of Galois field algebra vs. the Viterbi algorithm and the availability of high-speed commercial Very Large Scale Integration (VLSI) for block codes indicate that LRBC using block codes is a desirable method for high data rate implementations.
Adaptive EAGLE dynamic solution adaptation and grid quality enhancement
NASA Technical Reports Server (NTRS)
Luong, Phu Vinh; Thompson, J. F.; Gatlin, B.; Mastin, C. W.; Kim, H. J.
1992-01-01
In the effort described here, the elliptic grid generation procedure in the EAGLE grid code was separated from the main code into a subroutine, and a new subroutine which evaluates several grid quality measures at each grid point was added. The elliptic grid routine can now be called, either by a computational fluid dynamics (CFD) code to generate a new adaptive grid based on flow variables and quality measures through multiple adaptation, or by the EAGLE main code to generate a grid based on quality measure variables through static adaptation. Arrays of flow variables can be read into the EAGLE grid code for use in static adaptation as well. These major changes in the EAGLE adaptive grid system make it easier to convert any CFD code that operates on a block-structured grid (or single-block grid) into a multiple adaptive code.
Performance analysis of a cascaded coding scheme with interleaved outer code
NASA Technical Reports Server (NTRS)
Lin, S.
1986-01-01
A cascaded coding scheme for a random-error channel with a given bit-error rate is analyzed. In this scheme, the inner code C_1 is an (n_1, m_1 l) binary linear block code which is designed for simultaneous error correction and detection. The outer code C_2 is a linear block code with symbols from the Galois field GF(2^l) which is designed for correcting both symbol errors and erasures, and is interleaved with a degree m_1. A procedure for computing the probability of a correct decoding is presented and an upper bound on the probability of a decoding error is derived. The bound provides much better results than the previous bound for a cascaded coding scheme with an interleaved outer code. Example schemes with inner codes ranging from high rates to very low rates are evaluated. Several schemes provide extremely high reliability even for very high bit-error rates, say 10^-1 to 10^-2.
Background-Modeling-Based Adaptive Prediction for Surveillance Video Coding.
Zhang, Xianguo; Huang, Tiejun; Tian, Yonghong; Gao, Wen
2014-02-01
The exponential growth of surveillance videos presents an unprecedented challenge for high-efficiency surveillance video coding technology. Compared with the existing coding standards that were basically developed for generic videos, surveillance video coding should be designed to make the best use of the special characteristics of surveillance videos (e.g., relatively static background). To do so, this paper first conducts two analyses on how to improve the background and foreground prediction efficiencies in surveillance video coding. Following the analysis results, we propose a background-modeling-based adaptive prediction (BMAP) method. In this method, all blocks to be encoded are first classified into three categories. Then, according to the category of each block, two novel inter predictions are selectively utilized, namely, the background reference prediction (BRP) that uses the background modeled from the original input frames as the long-term reference and the background difference prediction (BDP) that predicts the current data in the background difference domain. For background blocks, the BRP can effectively improve the prediction efficiency using the higher quality background as the reference; whereas for foreground-background-hybrid blocks, the BDP can provide a better reference after subtracting its background pixels. Experimental results show that the BMAP can achieve at least twice the compression ratio on surveillance videos as AVC (MPEG-4 Advanced Video Coding) high profile, with only a slight additional encoding complexity. Moreover, for the foreground coding performance, which is crucial to the subjective quality of moving objects in surveillance videos, BMAP also obtains remarkable gains over several state-of-the-art methods.
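The category-dependent prediction just described might be sketched as follows. The classification thresholds and the pixel-difference rule here are hypothetical stand-ins (the paper's classifier and rate-distortion-based mode decision are more elaborate), and blocks are assumed to be float arrays.

```python
import numpy as np

def classify_block(block, background, pix_thresh=10.0):
    """Crude three-way block classification against the modeled
    background; thresholds are illustrative assumptions only."""
    moving = np.abs(block - background) > pix_thresh
    frac = float(np.mean(moving))
    if frac < 0.05:
        return "background"
    if frac > 0.95:
        return "foreground"
    return "hybrid"

def prediction_residual(block, background, inter_ref, category):
    """Residual to encode per category: BRP references the modeled
    background directly; BDP predicts in the background-difference
    domain; foreground blocks fall back to ordinary inter prediction."""
    if category == "background":
        return block - background                               # BRP
    if category == "hybrid":
        return (block - background) - (inter_ref - background)  # BDP
    return block - inter_ref
```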
49 CFR 387.301 - Surety bond, certificate of insurance, or other securities.
Code of Federal Regulations, 2010 CFR
2010-10-01
... in bulk. Cement, building blocks. Charcoal. Chemical fertilizer. Cinder blocks. Cinders, coal. Coal. Coke. Commercial fertilizer. Concrete materials and added mixtures. Corn cobs. Cottonseed hulls... nitrate of soda. Anhydrous ammonia—used as a fertilizer only. Ashes, wood or coal. Bituminous concrete...
A motion compensation technique using sliced blocks and its application to hybrid video coding
NASA Astrophysics Data System (ADS)
Kondo, Satoshi; Sasai, Hisao
2005-07-01
This paper proposes a new motion compensation method using "sliced blocks" in DCT-based hybrid video coding. In H.264/MPEG-4 Advanced Video Coding, a recent international video coding standard, motion compensation can be performed by splitting macroblocks into multiple square or rectangular regions. In the proposed method, on the other hand, macroblocks or sub-macroblocks are divided into two regions (sliced blocks) by an arbitrary line segment. The shapes of the segmented regions are therefore not limited to squares or rectangles, allowing them to better match the boundaries between moving objects. Thus, the proposed method can improve the performance of the motion compensation. In addition, adaptive prediction of the shape according to the region shape of the surrounding macroblocks can reduce the overhead needed to describe shape information in the bitstream. The proposed method also has the advantage that conventional coding techniques such as mode decision using rate-distortion optimization can be utilized, since coding processes such as frequency transform and quantization are performed on a macroblock basis, similar to conventional coding methods. The proposed method is implemented in an H.264-based P-picture codec, and a bit-rate improvement of 5% over H.264 is confirmed.
Acoustic Behavior of Hollow Blocks and Bricks Made of Concrete Doped with Waste-Tire Rubber.
Fraile-Garcia, Esteban; Ferreiro-Cabello, Javier; Defez, Beatriz; Peris-Fajanes, Guillermo
2016-11-26
In this paper, we investigate the acoustic behaviour of building elements made of concrete doped with waste-tire rubber. Three different mixtures were created, with 0%, 10%, and 20% rubber in their composition. Bricks, lattice joists, and hollow blocks were manufactured with each mixture, and three different cells were built and tested against aerial and impact noise. The values of the global acoustic isolation and the reduction of the sound pressure level of impacts were measured. Results proved that highly doped elements are an excellent option to isolate low frequency sounds, whereas intermediate and standard elements constitute a most interesting option to block middle and high frequency sounds. In both cases, the considerable amount of waste-tire rubber recycled could justify the employment of the doped materials for the sake of the environment.
Solubility modeling of refrigerant/lubricant mixtures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michels, H.H.; Sienel, T.H.
1996-12-31
A general model for predicting the solubility properties of refrigerant/lubricant mixtures has been developed based on applicable theory for the excess Gibbs energy of non-ideal solutions. In our approach, flexible thermodynamic forms are chosen to describe the properties of both the gas and liquid phases of refrigerant/lubricant mixtures. After an extensive study of models for describing non-ideal liquid effects, the Wohl-suffix equations, which have been extensively utilized in the analysis of hydrocarbon mixtures, have been developed into a general form applicable to mixtures where one component is a POE lubricant. In the present study we have analyzed several POEs where structural and thermophysical property data were available. Data were also collected from several sources on the solubility of refrigerant/lubricant binary pairs. We have developed a computer code (NISC), based on the Wohl model, that predicts dew point or bubble point conditions over a wide range of composition and temperature. Our present analysis covers mixtures containing up to three refrigerant molecules and one lubricant. The present code can be used to analyze the properties of R-410a and R-407c in mixtures with a POE lubricant. Comparisons with other models, such as the Wilson or modified Wilson equations, indicate that the Wohl-suffix equations yield more reliable predictions for HFC/POE mixtures.
Optimal block cosine transform image coding for noisy channels
NASA Technical Reports Server (NTRS)
Vaishampayan, V.; Farvardin, N.
1986-01-01
The two dimensional block transform coding scheme based on the discrete cosine transform was studied extensively for image coding applications. While this scheme has proven to be efficient in the absence of channel errors, its performance degrades rapidly over noisy channels. A method is presented for the joint source-channel coding optimization of a scheme based on the 2-D block cosine transform when the output of the encoder is to be transmitted over a memoryless noisy channel; the optimization concerns the design of the quantizers used for encoding the transform coefficients. This algorithm produces a set of locally optimum quantizers and the corresponding binary code assignment for the assumed transform coefficient statistics. To determine the optimum bit assignment among the transform coefficients, an algorithm based on the steepest-descent method was used, which, under certain convexity conditions on the performance of the channel-optimized quantizers, yields the optimal bit allocation. Comprehensive simulation results for the performance of this locally optimum system over noisy channels were obtained, and appropriate comparisons were made against a reference system designed for error-free channels.
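The bit-assignment step can be illustrated with a simpler stand-in: the classical greedy integer allocation under the high-rate model D_k proportional to var_k * 2^(-2 b_k). This assumes a noiseless channel, whereas the paper's algorithm performs a steepest-descent search over channel-optimized quantizers, so the sketch below is only illustrative.

```python
import numpy as np

def greedy_bit_allocation(variances, total_bits):
    """Greedy integer bit allocation across transform coefficients:
    each extra bit goes to the coefficient with the largest current
    distortion under the high-rate model D_k ~ var_k * 2**(-2*b_k)."""
    bits = np.zeros(len(variances), dtype=int)
    dist = np.asarray(variances, dtype=float)
    for _ in range(total_bits):
        k = int(np.argmax(dist))
        bits[k] += 1
        dist[k] /= 4.0   # one extra bit cuts distortion by ~6 dB
    return bits
```

For example, greedy_bit_allocation([16.0, 4.0, 1.0, 0.25], 8) concentrates bits on the high-variance (low-frequency) DCT coefficients, mirroring the qualitative behavior of the optimal allocation.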
There is no MacWilliams identity for convolutional codes. [transmission gain comparison
NASA Technical Reports Server (NTRS)
Shearer, J. B.; Mceliece, R. J.
1977-01-01
An example is provided of two convolutional codes that have the same transmission gain but whose dual codes do not. This shows that no analog of the MacWilliams identity for block codes can exist relating the transmission gains of a convolutional code and its dual.
Multispectral data compression through transform coding and block quantization
NASA Technical Reports Server (NTRS)
Ready, P. J.; Wintz, P. A.
1972-01-01
Transform coding and block quantization techniques are applied to multispectral aircraft scanner data and digitized satellite imagery. The multispectral source is defined and an appropriate mathematical model proposed. The Karhunen-Loeve, Fourier, and Hadamard encoders are considered and are compared to the rate distortion function for the equivalent Gaussian source and to the performance of the single-sample PCM encoder.
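A minimal sketch of the Karhunen-Loeve option applied across spectral bands is shown below; the data layout (pixels by bands) and the truncation rule are illustrative assumptions, not details from the paper.

```python
import numpy as np

def klt_fit_encode(pixels, keep):
    """Karhunen-Loeve (eigenvector) transform across spectral bands.
    pixels: (num_pixels, num_bands) array; keep: components retained."""
    mean = pixels.mean(axis=0)
    centered = pixels - mean
    cov = np.cov(centered, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)                # ascending order
    basis = evecs[:, np.argsort(evals)[::-1][:keep]]  # top 'keep' PCs
    return centered @ basis, basis, mean

def klt_decode(coeffs, basis, mean):
    """Invert the truncated transform (lossy if keep < num_bands)."""
    return coeffs @ basis.T + mean
```

The KLT is the optimal decorrelating transform for a Gaussian source, which is why the Fourier and Hadamard encoders in the study are judged against it.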
NASA Technical Reports Server (NTRS)
Woo, Simon S.; Cheng, Michael K.
2011-01-01
The original Luby Transform (LT) coding scheme is extended to account for data transmissions where some information symbols in a message block are more important than others. Prioritized LT codes provide unequal error protection (UEP) of data on an erasure channel by modifying the original LT encoder. The prioritized algorithm improves high-priority data protection without penalizing low-priority data recovery. Moreover, low-latency decoding is also obtained for high-priority data due to fast encoding. Prioritized LT codes only require a slight change in the original encoding algorithm, and no changes at all at the decoder. Hence, with a small complexity increase in the LT encoder, an improved UEP and low-decoding latency performance for high-priority data can be achieved. LT encoding partitions a data stream into fixed-sized message blocks each with a constant number of information symbols. To generate a code symbol from the information symbols in a message, the Robust-Soliton probability distribution is first applied in order to determine the number of information symbols to be used to compute the code symbol. Then, the specific information symbols are chosen uniform randomly from the message block. Finally, the selected information symbols are XORed to form the code symbol. The Prioritized LT code construction includes an additional restriction that code symbols formed by a relatively small number of XORed information symbols select some of these information symbols from the pool of high-priority data. Once high-priority data are fully covered, encoding continues with the conventional LT approach where code symbols are generated by selecting information symbols from the entire message block including all different priorities. Therefore, if code symbols derived from high-priority data experience an unusual high number of erasures, Prioritized LT codes can still reliably recover both high- and low-priority data. This hybrid approach decides not only "how to encode" but also "what to encode" to achieve UEP. Another advantage of the priority encoding process is that the majority of high-priority data can be decoded sooner since only a small number of code symbols are required to reconstruct high-priority data. This approach increases the likelihood that high-priority data is decoded first over low-priority data. The Prioritized LT code scheme achieves an improvement in high-priority data decoding performance as well as overall information recovery without penalizing the decoding of low-priority data, assuming high-priority data is no more than half of a message block. The cost is in the additional complexity required in the encoder. If extra computation resource is available at the transmitter, image, voice, and video transmission quality in terrestrial and space communications can benefit from accurate use of redundancy in protecting data with varying priorities.
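To make the encoding steps concrete, here is a compact sketch. The Robust-Soliton construction is standard, but the low_degree routing threshold is a hypothetical parameter (the description above does not specify one), and the refinement of reverting to whole-block encoding once high-priority data are fully covered is omitted for brevity.

```python
import math
import random

def robust_soliton(K, c=0.1, delta=0.5):
    """Robust-Soliton degree distribution over degrees 1..K
    (standard construction; c and delta are tuning parameters)."""
    R = c * math.log(K / delta) * math.sqrt(K)
    pivot = int(round(K / R))
    tau = [0.0] * (K + 1)
    for d in range(1, min(pivot, K + 1)):
        tau[d] = R / (d * K)
    if 1 <= pivot <= K:
        tau[pivot] = R * math.log(R / delta) / K
    rho = [0.0] * (K + 1)
    rho[1] = 1.0 / K
    for d in range(2, K + 1):
        rho[d] = 1.0 / (d * (d - 1))
    weights = [rho[d] + tau[d] for d in range(1, K + 1)]
    Z = sum(weights)
    return [w / Z for w in weights]

def prioritized_lt_symbol(message, n_high, weights, low_degree=3):
    """Make one code symbol from integer information symbols; degrees
    <= low_degree draw only from the high-priority prefix
    message[:n_high], per the priority restriction described above."""
    K = len(message)
    d = random.choices(range(1, K + 1), weights=weights)[0]
    pool = list(range(n_high)) if d <= low_degree else list(range(K))
    chosen = random.sample(pool, min(d, len(pool)))
    sym = 0
    for i in chosen:
        sym ^= message[i]   # XOR the selected information symbols
    return chosen, sym
```

Because low-degree symbols concentrate on the high-priority prefix, those symbols are both more likely to survive erasures usefully and quicker to resolve at the decoder, which is the source of the low-latency property claimed above.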
47 CFR 52.20 - Thousands-block number pooling.
Code of Federal Regulations, 2013 CFR
2013-10-01
... (CONTINUED) NUMBERING Number Portability § 52.20 Thousands-block number pooling. (a) Definition. Thousands-block number pooling is a process by which the 10,000 numbers in a central office code (NXX) are...
On decoding of multi-level MPSK modulation codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Gupta, Alok Kumar
1990-01-01
The decoding problem of multi-level block modulation codes is investigated. The hardware design of a soft-decision Viterbi decoder for some short-length 8-PSK block modulation codes is presented. An effective way to reduce the hardware complexity of the decoder by reducing the branch and path metrics, using a non-uniform floating-point-to-integer mapping scheme, is proposed and discussed. Simulation results of the design are presented. The multi-stage decoding (MSD) of multi-level modulation codes is also investigated. The cases of soft-decision and hard-decision MSD are considered, and their performance is evaluated for several codes of different lengths and different minimum squared Euclidean distances. It is shown that soft-decision MSD reduces the decoding complexity drastically and is suboptimum. Hard-decision MSD further simplifies the decoding while still maintaining a reasonable coding gain over the uncoded system, if the component codes are chosen properly. Finally, some basic 3-level 8-PSK modulation codes using BCH codes as component codes are constructed and their coding gains are found for hard-decision multistage decoding.
Abo, Takayuki; Hilberer, Allison; Behle-Wagner, Christine; Watanabe, Mika; Cameron, David; Kirst, Annette; Nukada, Yuko; Yuki, Takuo; Araki, Daisuke; Sakaguchi, Hitoshi; Itagaki, Hiroshi
2018-04-01
The Short Time Exposure (STE) test method is an alternative method for assessing eye irritation potential using Statens Seruminstitut Rabbit Cornea cells and has been adopted as test guideline 491 by the Organisation for Economic Co-operation and Development. Its good predictive performance in identifying the Globally Harmonized System (GHS) No Category (NC) or Irritant Category has been demonstrated in evaluations of water-soluble substances, oil-soluble substances, and water-soluble mixtures. However, the predictive performance for oil-soluble mixtures had not been evaluated. Twenty-four oil-soluble mixtures were evaluated using the STE test method. The GHS NC or Irritant Category classifications of 22 oil-soluble mixtures were consistent with those of a Reconstructed human Cornea-like Epithelium (RhCE) test method. Inter-laboratory reproducibility was then confirmed using 20 blind-coded water- and oil-soluble mixtures. The concordance in GHS NC or Irritant Category among four laboratories was 90%-100%. In conclusion, the concordance with the results of the RhCE test method for the 24 oil-soluble mixtures and the inter-laboratory reproducibility for the 20 blind-coded mixtures were good, indicating that the STE test method is a suitable alternative for predicting the eye irritation potential of both substances and mixtures. Copyright © 2018 Elsevier Ltd. All rights reserved.
Zhu, Yunqing; Romain, Charles; Williams, Charlotte K
2015-09-30
Selective catalysis is used to prepare block copolyesters by combining ring-opening polymerization of lactones and ring-opening copolymerization of epoxides/anhydrides. By using a dizinc complex with mixtures of up to three different monomers and controlling the chemistry of the Zn-O(polymer chain) it is possible to select for a particular polymerization route and thereby control the composition of block copolyesters.
An Energy-Efficient Compressive Image Coding for Green Internet of Things (IoT).
Li, Ran; Duan, Xiaomeng; Li, Xu; He, Wei; Li, Yanling
2018-04-17
Aimed at the low energy consumption required by the Green Internet of Things (IoT), this paper presents an energy-efficient compressive image coding scheme, which provides a compressive encoder and a real-time decoder according to Compressive Sensing (CS) theory. The compressive encoder adaptively measures each image block based on the block-based gradient field, which models the distribution of block sparse degree, and the real-time decoder linearly reconstructs each image block through a projection matrix, which is learned by the Minimum Mean Square Error (MMSE) criterion. Both the encoder and decoder have low computational complexity, so they consume only a small amount of energy. Experimental results show that the proposed scheme not only has low encoding and decoding complexity compared with traditional methods, but also provides good objective and subjective reconstruction quality. In particular, it presents better time-distortion performance than JPEG. Therefore, the proposed compressive image coding is a potential energy-efficient scheme for the Green IoT.
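A minimal sketch of the block-wise measure-then-linearly-decode pipeline follows, with a random Gaussian measurement matrix and the pseudo-inverse standing in for the MMSE-learned projection matrix; the adaptive, gradient-field-driven measurement allocation is not modeled here.

```python
import numpy as np

rng = np.random.default_rng(0)
# 25% measurement rate for a 16x16 block (256 pixels -> 64 samples)
phi = rng.standard_normal((64, 256)) / np.sqrt(64.0)
# Stand-in for the learned MMSE projection matrix of the paper
proj = np.linalg.pinv(phi)

def cs_encode_block(block):
    """Measure a flattened 16x16 image block: y = Phi x."""
    return phi @ block.reshape(-1)

def cs_decode_block(y):
    """Real-time linear reconstruction x_hat = P y (no iterations)."""
    return (proj @ y).reshape(16, 16)
```

The point of the linear decoder is exactly what the abstract claims: reconstruction is a single matrix-vector product per block, so decoding cost (and hence energy) stays low compared with iterative CS solvers.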
Verification of combined thermal-hydraulic and heat conduction analysis code FLOWNET/TRUMP
NASA Astrophysics Data System (ADS)
Maruyama, Soh; Fujimoto, Nozomu; Kiso, Yoshihiro; Murakami, Tomoyuki; Sudo, Yukio
1988-09-01
This report presents the verification results of FLOWNET/TRUMP, a combined thermal-hydraulic and heat-conduction analysis code. The code has been utilized in the core thermal-hydraulic design of the High Temperature Engineering Test Reactor (HTTR), especially for the analysis of flow distribution among fuel-block coolant channels, the determination of thermal boundary conditions for fuel-block stress analysis, and the estimation of fuel temperature in the case of a fuel-block coolant-channel blockage accident. The Japan Atomic Energy Research Institute has been planning to construct the HTTR in order to establish basic technologies for future advanced very-high-temperature gas-cooled reactors and to serve as an irradiation test reactor for the promotion of innovative high-temperature new-frontier technologies. The code was verified through comparison between analytical results and experimental results from the Helium Engineering Demonstration Loop Multi-channel Test Section (HENDEL T1-M) with simulated fuel rods and fuel blocks.
NASA Astrophysics Data System (ADS)
Stefanski, Douglas Lawrence
A finite volume method for solving the Reynolds Averaged Navier-Stokes (RANS) equations on unstructured hybrid grids is presented. Capabilities for handling arbitrary mixtures of reactive gas species within the unstructured framework are developed. The modeling of turbulent effects is carried out via the 1998 Wilcox k-ω model. This unstructured solver is incorporated within VULCAN, a multi-block structured-grid code, as part of a novel patching procedure in which non-matching interfaces between structured blocks are replaced by transitional unstructured grids. This approach provides a fully conservative alternative to VULCAN's non-conservative patching methods for handling such interfaces. In addition, the further development of the standalone unstructured solver toward large-eddy simulation (LES) applications is also carried out. Dual time-stepping using a Crank-Nicolson formulation is added to recover time accuracy, and modeling of sub-grid-scale effects is incorporated to provide higher-fidelity LES solutions for turbulent flows. A switch based on the work of Ducros et al. is implemented to transition from a monotonicity-preserving flux scheme near shocks to a central-difference method in vorticity-dominated regions in order to better resolve small-scale turbulent structures. The updated unstructured solver is used to carry out large-eddy simulations of a supersonic constrained mixing layer.
2012-03-01
[Abstract garbled in extraction; the fragments come from the report's acronym list and text. Recoverable content: advanced antenna systems; AMC, adaptive modulation and coding; AWGN, additive white Gaussian noise; BPSK, binary phase shift keying; BS, base station; BTC, block turbo coding. Modulation types include QPSK, QAM-16, and QAM-64; coding types include convolutional coding (CC), convolutional turbo coding (CTC), block turbo coding (BTC), and zero-terminating (truncated in the source).]
QX MAN: Q and X file manipulation
NASA Technical Reports Server (NTRS)
Krein, Mark A.
1992-01-01
QX MAN is a grid and solution file manipulation program written primarily for the PARC code and the GRIDGEN family of grid generation codes. QX MAN combines many of the features frequently encountered in grid generation, grid refinement, the setting-up of initial conditions, and post processing. QX MAN allows the user to manipulate single block and multi-block grids (and their accompanying solution files) by splitting, concatenating, rotating, translating, re-scaling, and stripping or adding points. In addition, QX MAN can be used to generate an initial solution file for the PARC code. The code was written to provide several formats for input and output in order for it to be useful in a broad spectrum of applications.
A Radiation Solver for the National Combustion Code
NASA Technical Reports Server (NTRS)
Sockol, Peter M.
2015-01-01
A methodology is given that converts an existing finite-volume radiative transfer method that requires input of local absorption coefficients into one that can treat a mixture of combustion gases and compute the coefficients on the fly from the local mixture properties. The full-spectrum k-distribution method is used to transform the radiative transfer equation (RTE) to an alternative wavenumber variable, g. The coefficients in the transformed equation are calculated at discrete temperatures and participating-species mole fractions that span the values of the problem for each value of g. These results are stored in a table, and interpolation is used to find the coefficients at every cell in the field. Finally, the transformed RTE is solved for each g, and Gaussian quadrature is used to find the radiant heat flux throughout the field. The present implementation is in an existing Cartesian/cylindrical-grid radiative transfer code, and the local mixture properties are given by a solution of the National Combustion Code (NCC) on the same grid. Based on this work, the intention is to apply this method to an existing unstructured-grid radiation code which can then be coupled directly to NCC.
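The table-plus-quadrature strategy described above can be sketched generically; all names and array layouts below are hypothetical, since the abstract does not expose the code's interfaces.

```python
import numpy as np

# Gauss-Legendre nodes and weights, mapped from [-1, 1] to g in [0, 1]
g_nodes, g_weights = np.polynomial.legendre.leggauss(8)
g_nodes = 0.5 * (g_nodes + 1.0)
g_weights = 0.5 * g_weights

def k_at_cell(T_cell, T_table, k_table):
    """Interpolate tabulated absorption coefficients k(T, g) to one
    cell temperature; k_table has shape (len(T_table), len(g_nodes))."""
    return np.array([np.interp(T_cell, T_table, k_table[:, j])
                     for j in range(k_table.shape[1])])

def radiant_flux(flux_per_g):
    """Assemble the total radiant heat flux by Gaussian quadrature over
    g, given one transformed-RTE solution per g node."""
    return float(np.sum(g_weights * flux_per_g))
```

In the actual method the table is indexed by temperature and species mole fractions together; the one-dimensional interpolation above is a simplification to show the lookup-then-quadrature flow.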
NASA Technical Reports Server (NTRS)
Hinds, Erold W. (Principal Investigator)
1996-01-01
This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.
An In vitro evaluation of the reliability of QR code denture labeling technique.
Poovannan, Sindhu; Jain, Ashish R; Krishnan, Cakku Jalliah Venkata; Chandran, Chitraa R
2016-01-01
Positive identification of the dead after accidents and disasters through labeled dentures plays a key role in the forensic scenario. A number of denture labeling methods are available, and studies evaluating their reliability under drastic conditions are vital. This in vitro study was conducted to evaluate the reliability of QR (Quick Response) Codes labeled at various depths in heat-cured acrylic blocks after acid treatment, heat treatment (burns), and fracture in forensics. The study included 160 specimens of heat-cured acrylic blocks (1.8 cm × 1.8 cm), divided into 4 groups (40 samples per group). QR Codes were incorporated in the samples using clear acrylic sheet, and they were assessed for reliability under various depths, acid, heat, and fracture. Data were analyzed using the Chi-square test and test of proportion. The QR Code inclusion technique was reliable under various depths of acrylic sheet, acid (sulfuric acid 99%, hydrochloric acid 40%), and heat (up to 370°C). Results were variable with fracture of QR Code labeled acrylic blocks. Within the limitations of the study, the results clearly indicated that the QR Code technique was reliable under various depths of acrylic sheet, acid, and heat (370°C). Effectiveness varied with fracture and depended on the level of distortion. This study thus suggests that the QR Code is an effective and simple denture labeling method.
Dingus, Cheryl A; Teuschler, Linda K; Rice, Glenn E; Simmons, Jane Ellen; Narotsky, Michael G
2011-10-01
In complex mixture toxicology, there is growing emphasis on testing environmentally representative doses that improve the relevance of results for health risk assessment, but are typically much lower than those used in traditional toxicology studies. Traditional experimental designs with typical sample sizes may have insufficient statistical power to detect effects caused by environmentally relevant doses. Proper study design, with adequate statistical power, is critical to ensuring that experimental results are useful for environmental health risk assessment. Studies with environmentally realistic complex mixtures have practical constraints on sample concentration factor and sample volume as well as the number of animals that can be accommodated. This article describes methodology for calculation of statistical power for non-independent observations for a multigenerational rodent reproductive/developmental bioassay. The use of the methodology is illustrated using the U.S. EPA's Four Lab study in which rodents were exposed to chlorinated water concentrates containing complex mixtures of drinking water disinfection by-products. Possible experimental designs included two single-block designs and a two-block design. Considering the possible study designs and constraints, a design of two blocks of 100 females with a 40:60 ratio of control:treated animals and a significance level of 0.05 yielded maximum prospective power (~90%) to detect pup weight decreases, while providing the most power to detect increased prenatal loss.
Constructing LDPC Codes from Loop-Free Encoding Modules
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Dolinar, Samuel; Jones, Christopher; Thorpe, Jeremy; Andrews, Kenneth
2009-01-01
A method of constructing certain low-density parity-check (LDPC) codes by use of relatively simple loop-free coding modules has been developed. The subclasses of LDPC codes to which the method applies include accumulate-repeat-accumulate (ARA) codes, accumulate-repeat-check-accumulate codes, and the codes described in Accumulate-Repeat-Accumulate-Accumulate Codes (NPO-41305), NASA Tech Briefs, Vol. 31, No. 9 (September 2007), page 90. All of the affected codes can be characterized as serial/parallel (hybrid) concatenations of such relatively simple modules as accumulators, repetition codes, differentiators, and punctured single-parity-check codes. These are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. These codes can also be characterized as hybrid turbo-like codes that have projected-graph or protograph representations; these characteristics make it possible to design high-speed iterative decoders that utilize belief-propagation algorithms. The present method comprises two related submethods for constructing LDPC codes from simple loop-free modules with circulant permutations. The first submethod is an iterative encoding method based on the erasure-decoding algorithm. The computations required by this method are well organized because they involve a parity-check matrix having a block-circulant structure. The second submethod involves the use of block-circulant generator matrices. The encoders of this method are very similar to those of recursive convolutional codes. Some encoders according to this second submethod have been implemented in a small field-programmable gate array that operates at a speed of 100 megasymbols per second. By use of density evolution (a computational-simulation technique for analyzing performances of LDPC codes), it has been shown through some examples that as the block size goes to infinity, low iterative decoding thresholds close to channel capacity limits can be achieved for the codes of the type in question having low maximum variable node degrees. The decoding thresholds in these examples are lower than those of the best-known unstructured irregular LDPC codes constrained to have the same maximum node degrees. Furthermore, the present method enables the construction of codes of any desired rate with thresholds that stay uniformly close to their respective channel capacity thresholds.
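To illustrate how such loop-free modules compose, here is a toy accumulate-repeat-accumulate chain. It uses a random interleaver instead of circulant permutations and no puncturing, so it shows the module structure rather than either of the article's actual submethods.

```python
import numpy as np

def accumulate(bits):
    """Accumulator module (transfer function 1/(1+D)): running XOR."""
    return np.bitwise_xor.accumulate(bits)

def repeat(bits, q):
    """Repetition module: repeat every bit q times."""
    return np.repeat(bits, q)

def toy_ara_encode(info_bits, q=3, seed=1):
    """Chain the loop-free modules into a toy accumulate-repeat-
    accumulate encoder (illustrative only; no puncturing, and a
    random rather than circulant interleaver)."""
    pre = accumulate(info_bits)                       # precoding accumulator
    rep = repeat(pre, q)                              # repetition code
    perm = np.random.default_rng(seed).permutation(rep.size)
    return accumulate(rep[perm])                      # interleave, accumulate

# e.g. toy_ara_encode(np.array([1, 0, 1, 1])) yields a length-12 codeword
```

Each stage is loop-free and encodable in a single linear pass, which is what makes the overall encoder fast even though the resulting LDPC code is decoded iteratively.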
Error Correction using Quantum Quasi-Cyclic Low-Density Parity-Check (LDPC) Codes
NASA Astrophysics Data System (ADS)
Jing, Lin; Brun, Todd; Quantum Research Team
Quasi-cyclic LDPC codes can approach the Shannon capacity and have efficient decoders. Hagiwara et al. (2007) presented a method to calculate parity-check matrices with high girth. Two distinct, orthogonal matrices Hc and Hd are used. Using submatrices obtained from Hc and Hd by deleting rows, we can alter the code rate. The submatrix of Hc is used to correct Pauli X errors, and the submatrix of Hd to correct Pauli Z errors. We simulated this system for depolarizing noise on USC's High Performance Computing Cluster and obtained the block error rate (BER) as a function of the error weight and code rate. From the rates of uncorrectable errors under different error weights, we can extrapolate the BER to any small error probability. Our results show that this code family can perform reasonably well even at high code rates, thus considerably reducing the overhead compared to concatenated and surface codes. This makes these codes promising as storage blocks in fault-tolerant quantum computation.
Self-recovery reversible image watermarking algorithm
Sun, He; Gao, Shangbing; Jin, Shenghua
2018-01-01
The integrity of image content is essential, although most watermarking algorithms can achieve image authentication but not automatically repair damaged areas or restore the original image. In this paper, a self-recovery reversible image watermarking algorithm is proposed to recover the tampered areas effectively. First of all, the original image is divided into homogeneous blocks and non-homogeneous blocks through multi-scale decomposition, and the feature information of each block is calculated as the recovery watermark. Then, the original image is divided into 4×4 non-overlapping blocks classified into smooth blocks and texture blocks according to image textures. Finally, the recovery watermark generated by homogeneous blocks and error-correcting codes is embedded into the corresponding smooth block by mapping; watermark information generated by non-homogeneous blocks and error-correcting codes is embedded into the corresponding non-embedded smooth block and the texture block via mapping. The correlation attack is detected by invariant moments when the watermarked image is attacked. To determine whether a sub-block has been tampered with, its feature is calculated and the recovery watermark is extracted from the corresponding block. If the image has been tampered with, it can be recovered. The experimental results show that the proposed algorithm can effectively recover the tampered areas with high accuracy and high quality. The algorithm is characterized by sound visual quality and excellent image restoration. PMID:29920528
DMD-based implementation of patterned optical filter arrays for compressive spectral imaging.
Rueda, Hoover; Arguello, Henry; Arce, Gonzalo R
2015-01-01
Compressive spectral imaging (CSI) captures multispectral imagery using fewer measurements than those required by traditional Shannon-Nyquist theory-based sensing procedures. CSI systems acquire coded and dispersed random projections of the scene rather than direct measurements of the voxels. To date, the coding procedure in CSI has been realized through the use of block-unblock coded apertures (CAs), commonly implemented as chrome-on-quartz photomasks. These apertures either block or transmit the entire spectrum of the scene at given spatial locations, thus modulating the spatial characteristics of the scene. This paper extends the framework of CSI by replacing the traditional block-unblock photomasks with patterned optical filter arrays, referred to as colored coded apertures (CCAs). These, in turn, allow the source to be modulated not only spatially but spectrally as well, entailing more powerful coding strategies. The proposed CCAs are synthesized through linear combinations of low-pass, high-pass, and bandpass filters, paired with binary pattern ensembles realized by a digital micromirror device. The optical forward model of the proposed CSI architecture is presented along with a proof-of-concept implementation, which achieves noticeable improvements in the quality of the reconstruction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
MAGEE,GLEN I.
Computers transfer data in a number of different ways. Whether through a serial port, a parallel port, over a modem, over an ethernet cable, or internally from a hard disk to memory, some data will be lost. To compensate for that loss, numerous error detection and correction algorithms have been developed. One of the most common error correction codes is the Reed-Solomon code, which is a special subset of BCH (Bose-Chaudhuri-Hocquenghem) linear cyclic block codes. In the AURA project, an unmanned aircraft sends the data it collects back to earth so it can be analyzed during flight and possible flight modifications made. To counter possible data corruption during transmission, the data is encoded using a multi-block Reed-Solomon implementation with a possibly shortened final block. In order to maximize the amount of data transmitted, it was necessary to reduce the computation time of a Reed-Solomon encoding to three percent of the processor's time. To achieve such a reduction, many code optimization techniques were employed. This paper outlines the steps taken to reduce the processing time of a Reed-Solomon encoding and the insight into modern optimization techniques gained from the experience.
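For concreteness, a minimal textbook Reed-Solomon systematic encoder over GF(2^8) is sketched below; this is the generic algorithm, not the AURA project's optimized multi-block implementation.

```python
# GF(2^8) arithmetic tables, primitive polynomial x^8+x^4+x^3+x^2+1 (0x11d)
GF_EXP = [0] * 512
GF_LOG = [0] * 256
x = 1
for i in range(255):
    GF_EXP[i] = x
    GF_LOG[x] = i
    x <<= 1
    if x & 0x100:
        x ^= 0x11d
for i in range(255, 512):
    GF_EXP[i] = GF_EXP[i - 255]

def gf_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return GF_EXP[GF_LOG[a] + GF_LOG[b]]

def poly_mul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] ^= gf_mul(pi, qj)
    return r

def rs_generator_poly(nsym):
    """Generator polynomial with roots alpha^0 .. alpha^(nsym-1)."""
    g = [1]
    for i in range(nsym):
        g = poly_mul(g, [1, GF_EXP[i]])
    return g

def rs_encode(msg, nsym):
    """Systematic encoding: append the remainder of msg*x^nsym mod g."""
    gen = rs_generator_poly(nsym)
    res = list(msg) + [0] * nsym
    for i in range(len(msg)):
        coef = res[i]
        if coef != 0:
            for j in range(1, len(gen)):
                res[i + j] ^= gf_mul(gen[j], coef)
    return list(msg) + res[len(msg):]
```

For a (255, 223)-style code, rs_encode(data, 32) appends 32 parity symbols to a 223-symbol message; the table-driven multiply is exactly the kind of inner loop the paper's optimizations target.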
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1987-10-01
Huffman codes, comma-free codes, and block codes with shift indicators are important candidate message-compression codes for improving the efficiency of communications systems. This study was undertaken to determine if these codes could be used to increase the throughput of the fixed very-low-frequency (FVLF) communication system. This application involves the use of compression codes in a channel with errors.
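Of the three candidates, the Huffman construction is the most standard; a minimal sketch of building the code from symbol frequencies follows.

```python
import heapq

def huffman_code(freqs):
    """Build a Huffman code (dict: symbol -> bitstring) from a
    symbol->frequency mapping using the classic two-least-frequent
    merge; the integer tiebreaker keeps heap comparisons well-defined."""
    heap = [[w, i, [s, ""]] for i, (s, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    nxt = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        for pair in lo[2:]:
            pair[1] = "0" + pair[1]   # prefix the lighter subtree with 0
        for pair in hi[2:]:
            pair[1] = "1" + pair[1]   # and the heavier subtree with 1
        heapq.heappush(heap, [lo[0] + hi[0], nxt] + lo[2:] + hi[2:])
        nxt += 1
    return dict(heap[0][2:])

# e.g. huffman_code({"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5})
```

The study's concern, channels with errors, is exactly where variable-length codes like this are fragile: a single flipped bit can desynchronize all subsequent codewords, which motivates the comma-free and shift-indicator alternatives it also considers.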
RTE: A computer code for Rocket Thermal Evaluation
NASA Technical Reports Server (NTRS)
Naraghi, Mohammad H. N.
1995-01-01
The numerical model for a rocket thermal analysis code (RTE) is discussed. RTE is a comprehensive thermal analysis code for thermal analysis of regeneratively cooled rocket engines. The input to the code consists of the composition of the fuel/oxidant mixture and flow rates, chamber pressure, coolant temperature and pressure, dimensions of the engine, materials, and the number of nodes in different parts of the engine. The code allows for temperature variation in axial, radial and circumferential directions. By implementing an iterative scheme, it provides nodal temperature distribution, rates of heat transfer, hot gas and coolant thermal and transport properties. The fuel/oxidant mixture ratio can be varied along the thrust chamber. This feature allows the user to incorporate a non-equilibrium model or an energy release model for the hot-gas-side. The user has the option of bypassing the hot-gas-side calculations and directly inputting the gas-side fluxes. This feature is used to link RTE to a boundary layer module for the hot-gas-side heat flux calculations.
Ion Conduction in Perfectly Aligned Block Copolymer-Ionic Liquid Mixtures
NASA Astrophysics Data System (ADS)
Choi, Jae-Hong; Elabd, Yossef A.; Winey, Karen I.
2011-03-01
Our earlier work correlating transport measurements in diblock copolymer-ionic liquid mixtures was limited by bulk samples with only partial alignment. Here, thin films with perfect alignment of lamellar microdomains from mixtures of a poly(methyl methacrylate-b-styrene) diblock copolymer and an ionic liquid, 1-ethyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide, have been studied. The morphologies will be characterized by cross-sectional transmission electron microscopy. Ion conduction will be presented within and through the thin film.
Code-Switching and Competition: An Examination of a Situational Response
ERIC Educational Resources Information Center
Bernstein, Eve; Herman, Ariela
2014-01-01
Code switching is primarily a linguistic term that refers to the use of two or more languages within the same conversation, or same sentence, to convey a single message. One field of linguistics, sociocultural linguistics, is broad and interdisciplinary, a mixture of language, culture, and society. In sociocultural linguistics, the code, or…
NASA Technical Reports Server (NTRS)
Smith, S. D.
1984-01-01
The overall contractual effort and the theory and numerical solution for the Reacting and Multi-Phase (RAMP2) computer code are described. The code can be used to model the dominant phenomena which affect the prediction of liquid and solid rocket nozzle and orbital plume flow fields. Fundamental equations for steady flow of reacting gas-particle mixtures, method of characteristics, mesh point construction, and numerical integration of the conservation equations are considered herein.
Numerical study of supersonic combustors by multi-block grids with mismatched interfaces
NASA Technical Reports Server (NTRS)
Moon, Young J.
1990-01-01
A three dimensional, finite rate chemistry, Navier-Stokes code was extended to a multi-block code with mismatched interfaces for practical calculations of supersonic combustors. To ensure global conservation, a conservative algorithm was used for the treatment of mismatched interfaces. The extended code was checked against one test case, i.e., a generic supersonic combustor with transverse fuel injection, examining solution accuracy, convergence, and local mass flux error. After testing, the code was used to simulate the chemically reacting flow fields in a scramjet combustor with parallel fuel injectors (unswept and swept ramps). Computational results were compared with experimental shadowgraph and pressure measurements. Fuel-air mixing characteristics of the unswept and swept ramps were compared and investigated.
NASA Astrophysics Data System (ADS)
Rabie, M.; Franck, C. M.
2016-06-01
We present a freely available MATLAB code for the simulation of electron transport in arbitrary gas mixtures in the presence of uniform electric fields. For steady-state electron transport, the program provides the transport coefficients, reaction rates and the electron energy distribution function. The program uses established Monte Carlo techniques and is compatible with the electron scattering cross section files from the open-access Plasma Data Exchange Project LXCat. The code is written in object-oriented design, allowing the tracing and visualization of the spatiotemporal evolution of electron swarms and the temporal development of the mean energy and the electron number due to attachment and/or ionization processes. We benchmark our code with well-known model gases as well as the real gases argon, N2, O2, CF4, SF6 and mixtures of N2 and O2.
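The abstract does not expose the code's internals, but the established Monte Carlo technique it refers to typically samples free flights and collision types as in the following sketch (the interfaces are hypothetical; the actual program reads LXCat cross-section files).

```python
import numpy as np

def free_flight_time(nu_max, rng):
    """Sample a flight time from an exponential distribution using a
    constant null-collision frequency nu_max bounding the real one."""
    return -np.log(rng.random()) / nu_max

def collide(energy, cross_sections, gas_density, speed, nu_max, rng):
    """Pick a collision process with probability proportional to its
    frequency at this energy; otherwise report a null collision.
    cross_sections is a list of callables sigma_k(energy)."""
    r = rng.random() * nu_max
    acc = 0.0
    for k, sigma in enumerate(cross_sections):
        acc += gas_density * sigma(energy) * speed
        if r < acc:
            return k        # real collision of process k
    return None             # null collision: electron state unchanged
```

Averaging position, velocity, and energy over many such electrons between collisions is what yields the swarm transport coefficients, reaction rates, and energy distribution the program reports.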
NASA Technical Reports Server (NTRS)
Benedetto, S.; Divsalar, D.; Montorsi, G.; Pollara, F.
1998-01-01
Soft-input soft-output building blocks (modules) are presented to construct and iteratively decode in a distributed fashion code networks, a new concept that includes, and generalizes, various forms of concatenated coding schemes.
Neural Coding Mechanisms in Gustation.
1980-09-15
[Abstract OCR-garbled beyond full recovery. Recoverable content: the report concerns the theory that the taste world is composed of four primary tastes (sweet, sour, salty, and bitter), each carried by a separate and private neural line, and analyses based on hierarchical clustering of the responses of many neurons. Report keywords: taste, neural coding, neural organization, stimulus organization, olfaction.]
NASA Technical Reports Server (NTRS)
Cannizzaro, Frank E.; Ash, Robert L.
1992-01-01
A state-of-the-art computer code has been developed that incorporates a modified Runge-Kutta time-integration scheme, upwind numerical techniques, multigrid acceleration, and multi-block capabilities (RUMM). A three-dimensional thin-layer formulation of the Navier-Stokes equations is employed. For turbulent flow cases, the Baldwin-Lomax algebraic turbulence model is used. Two different upwind techniques are available: van Leer's flux-vector splitting and Roe's flux-difference splitting. Full-approximation multigrid plus implicit residual and corrector smoothing were implemented to enhance the rate of convergence. Multi-block capabilities were developed to provide geometric flexibility. This feature allows the developed computer code to accommodate any grid topology or grid configuration with multiple topologies. The results shown in this dissertation were chosen to validate the computer code and display its geometric flexibility, which is provided by the multi-block structure.
Independent Assessment Plan: LAV-25
1989-06-27
NASA Astrophysics Data System (ADS)
Passas, Georgios; Freear, Steven; Fawcett, Darren
2010-01-01
Space-time coding (STC) is an important milestone in modern wireless communications. In this technique, copies of the same signal are transmitted through different antennas (space) and different symbol periods (time) to improve the robustness of a wireless system by increasing its diversity gain. STCs are channel coding algorithms that can be readily implemented on a field programmable gate array (FPGA) device. This work provides figures for the FPGA hardware resources required, the speed at which the algorithms can operate, and the power consumption requirements of a space-time block code (STBC) encoder. Seven encoder very high-speed integrated circuit hardware description language (VHDL) designs have been coded, synthesised and tested. Each design realises a complex orthogonal space-time block code with a different transmission matrix. All VHDL designs are parameterisable in terms of sample precision. Precisions ranging from 4 bits to 32 bits have been synthesised. Alamouti's STBC encoder design [Alamouti, S.M. (1998), 'A Simple Transmit Diversity Technique for Wireless Communications', IEEE Journal on Selected Areas in Communications, 16(8):1451-1458] proved to be the best trade-off, since it is on average 3.2 times smaller, 1.5 times faster and requires slightly less power than the next best trade-off in the comparison, which is a 3/4-rate full-diversity 3Tx-antenna STBC.
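Alamouti's transmission matrix, the scheme this comparison singles out, is simple enough to state directly; a one-function sketch:

```python
import numpy as np

def alamouti_encode(s1, s2):
    """Alamouti space-time block code: row t is the symbol period,
    column i is the transmit antenna."""
    return np.array([[s1, s2],
                     [-np.conj(s2), np.conj(s1)]])
```

For two complex symbols, antenna 1 sends s1 then -s2*, while antenna 2 sends s2 then s1*, giving full transmit diversity at rate 1; the column orthogonality of this matrix is what keeps both the encoder and the receiver's combining logic so small in hardware.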
Araldite as an Embedding Medium for Electron Microscopy
Glauert, Audrey M.; Glauert, R. H.
1958-01-01
Epoxy resins are suitable media for embedding for electron microscopy, as they set uniformly with virtually no shrinkage. A mixture of araldite epoxy resins has been developed which is soluble in ethanol, and which yields a block of the required hardness for thin sectioning. The critical modifications to the conventional mixtures are the choice of a plasticized resin in conjunction with an aliphatic anhydride as the hardener. The hardness of the final block can be varied by incorporating additional plasticizer, and the rate of setting can be controlled by the use of an amine accelerator. The properties of the araldite mixture can be varied quite widely by adjusting the proportions of the various constituents. The procedure for embedding biological specimens is similar to that employed with methacrylates, although longer soaking times are recommended to ensure the complete penetration of the more viscous epoxy resin. An improvement in the preservation of the fine structure of a variety of specimens has already been reported, and a typical electron micrograph illustrates the present paper. PMID:13525433
Accumulate-Repeat-Accumulate-Accumulate-Codes
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Dolinar, Sam; Thorpe, Jeremy
2004-01-01
Inspired by recently proposed Accumulate-Repeat-Accumulate (ARA) codes [15], in this paper we propose a channel coding scheme called Accumulate-Repeat-Accumulate-Accumulate (ARAA) codes. These codes can be seen as serial turbo-like codes or as a subclass of Low Density Parity Check (LDPC) codes, and they have a projected graph or protograph representation; this allows for a high-speed iterative decoder implementation using belief propagation. An ARAA code can be viewed as a precoded Repeat-and-Accumulate (RA) code with puncturing in concatenation with another accumulator, where simply an accumulator is chosen as the precoder; thus ARAA codes have a very fast encoder structure. Using density evolution on their associated protographs, we find examples of rate-1/2 ARAA codes with maximum variable node degree 4 for which a minimum bit-SNR as low as 0.21 dB from the channel capacity limit can be achieved as the block size goes to infinity. Such a low threshold cannot be achieved by RA or Irregular RA (IRA) or unstructured irregular LDPC codes with the same constraint on the maximum variable node degree. Furthermore by puncturing the accumulators we can construct families of higher rate ARAA codes with thresholds that stay close to their respective channel capacity thresholds uniformly. Iterative decoding simulation results show comparable performance with the best-known LDPC codes but with very low error floor even at moderate block sizes.
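To make the constituent operations concrete, here is a minimal Python sketch of the repeat and accumulate steps that ARAA codes chain together (an illustration only; the interleaving and puncturing that the actual construction requires are omitted). The accumulator is a running mod-2 sum, which is why the encoder is fast:

    def repeat(bits, q):
        # Repeat each information bit q times.
        return [b for b in bits for _ in range(q)]

    def accumulate(bits):
        # Running mod-2 sum: out[i] = bits[i] XOR out[i-1].
        out, state = [], 0
        for b in bits:
            state ^= b
            out.append(state)
        return out

    info = [1, 0, 1, 1]
    # A toy repeat-accumulate-accumulate chain (no interleavers shown):
    codeword = accumulate(accumulate(repeat(info, 3)))
    print(codeword)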
2014-05-01
function Value = Select_Element(Index, Signal) %#eml
Value = Signal(Index);
(Code Listing 1: code for the Selector block)
function shiftedSignal = fcn(signal, Shift) %#eml
shiftedSignal = circshift(signal, Shift);
(Code Listing 2: code for the CircShift block)
Bit Error Probability for Maximum Likelihood Decoding of Linear Block Codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Fossorier, Marc P. C.; Rhee, Dojun
1996-01-01
In this paper, the bit error probability P(sub b) for maximum likelihood decoding of binary linear codes is investigated. The contribution of each information bit to P(sub b) is considered. For randomly generated codes, it is shown that the conventional high-SNR approximation, P(sub b) approximately equal to (d(sub H)/N)P(sub s), where P(sub s) represents the block error probability, holds for systematic encoding only. Also, systematic encoding provides the minimum P(sub b) when the inverse mapping corresponding to the generator matrix of the code is used to retrieve the information sequence. The bit error performances corresponding to other generator matrix forms are also evaluated. Although derived for codes with a randomly generated generator matrix, these results are shown to provide good approximations for codes used in practice. Finally, for decoding methods which require a generator matrix with a particular structure, such as trellis decoding or algebraic-based soft decision decoding, equivalent schemes that reduce the bit error probability are discussed.
Comparison of Measured and Block Structured Simulations for the F-16XL Aircraft
NASA Technical Reports Server (NTRS)
Boelens, O. J.; Badcock, K. J.; Elmilgui, A.; Abdol-Hamid, K. S.; Massey, S. J.
2008-01-01
This article presents a comparison of the predictions of three RANS codes for flight conditions of the F-16XL aircraft which feature vortical flow. The three codes, ENSOLV, PMB and PAB3D, solve on structured multi-block grids. Flight data for comparison were available in the form of surface pressures, skin friction, boundary layer data and photographs of tufts. The three codes provided predictions which were consistent with expectations based on the turbulence modelling used, which was k-ω, k-ω with vortex corrections, and an Algebraic Stress Model. The agreement with flight data was good, with the exception of the outer wing primary vortex strength. The confidence in the application of the CFD codes to complex fighter configurations increased significantly through this study.
NASA Astrophysics Data System (ADS)
Corona, Enrique; Nutter, Brian; Mitra, Sunanda; Guo, Jiangling; Karp, Tanja
2008-03-01
Efficient retrieval of high quality Regions-Of-Interest (ROI) from high resolution medical images is essential for reliable interpretation and accurate diagnosis. Random access to high quality ROI from codestreams is becoming an essential feature in many still image compression applications, particularly in viewing diseased areas from large medical images. This feature is easier to implement in block based codecs because of the inherent spatial independency of the code blocks. This independency implies that the decoding order of the blocks is unimportant as long as the position for each is properly identified. In contrast, wavelet-tree based codecs naturally use some interdependency that exploits the decaying spectrum model of the wavelet coefficients. Thus one must keep track of the decoding order from level to level with such codecs. We have developed an innovative multi-rate image subband coding scheme using "Backward Coding of Wavelet Trees (BCWT)" which is fast, memory efficient, and resolution scalable. It offers far less complexity than many other existing codecs, including both wavelet-tree and block-based algorithms. The ROI feature in BCWT is implemented through a transcoder stage that generates a new BCWT codestream containing only the information associated with the user-defined ROI. This paper presents an efficient technique that locates a particular ROI within the BCWT coded domain, and decodes it back to the spatial domain. This technique allows better access and proper identification of pathologies in high resolution images since only a small fraction of the codestream is required to be transmitted and analyzed.
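As a hedged sketch of the bookkeeping such a transcoder must perform (the actual BCWT codestream layout is not reproduced here; all names are illustrative), the following Python snippet maps a user-defined spatial ROI to the coefficient index box it touches at each wavelet decomposition level:

    def roi_to_level_ranges(x0, y0, x1, y1, levels):
        # Map a spatial ROI (pixel box, exclusive upper bounds) to the
        # coefficient index box it covers at each decomposition level.
        # At level L each coefficient spans a 2**L x 2**L pixel footprint,
        # so pixel coordinates are divided (floor/ceil) by 2**L.
        ranges = {}
        for lvl in range(1, levels + 1):
            scale = 2 ** lvl
            ranges[lvl] = (x0 // scale, y0 // scale,
                           -(-x1 // scale), -(-y1 // scale))  # ceil division
        return ranges

    # A 100x80-pixel ROI at offset (256, 128) in a 5-level decomposition:
    for lvl, box in roi_to_level_ranges(256, 128, 356, 208, 5).items():
        print(f"level {lvl}: coefficient box {box}")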
Optimum Cyclic Redundancy Codes for Noisy Channels
NASA Technical Reports Server (NTRS)
Posner, E. C.; Merkey, P.
1986-01-01
Capabilities and limitations of cyclic redundancy codes (CRC's) for detecting transmission errors in data sent over relatively noisy channels (e.g., voice-grade telephone lines or very-high-density storage media) are discussed in a 16-page report. Because data transmission predominantly uses bytes in multiples of 8 bits, the report is primarily concerned with cases in which both the block length and the number of redundant bits (check bits for use in error detection) included in each block are multiples of 8 bits.
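For readers unfamiliar with the mechanics, here is a minimal bitwise CRC sketch in Python; the generator polynomial shown (x^8 + x^2 + x + 1) is chosen purely for illustration and is not taken from the report:

    def crc8(data: bytes, poly: int = 0x07, init: int = 0x00) -> int:
        # Compute an 8-bit CRC bit by bit (poly 0x07 = x^8 + x^2 + x + 1).
        crc = init
        for byte in data:
            crc ^= byte
            for _ in range(8):
                # Shift left; when the top bit falls off, reduce by the polynomial.
                crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
        return crc

    message = b"cyclic redundancy check"
    check = crc8(message)
    # A receiver recomputes the CRC and flags a mismatch as a detected error.
    assert crc8(message) == check
    # A single corrupted byte is a burst of at most 8 bits, which CRC-8 detects:
    assert crc8(b"cyclic redundancy chick") != check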
Survey of adaptive image coding techniques
NASA Technical Reports Server (NTRS)
Habibi, A.
1977-01-01
The general problem of image data compression is discussed briefly with attention given to the use of Karhunen-Loeve transforms, suboptimal systems, and block quantization. A survey is then conducted encompassing the four categories of adaptive systems: (1) adaptive transform coding (adaptive sampling, adaptive quantization, etc.), (2) adaptive predictive coding (adaptive delta modulation, adaptive DPCM encoding, etc.), (3) adaptive cluster coding (blob algorithms and the multispectral cluster coding technique), and (4) adaptive entropy coding.
JPEG 2000 Encoding with Perceptual Distortion Control
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Liu, Zhen; Karam, Lina J.
2008-01-01
An alternative approach has been devised for encoding image data in compliance with JPEG 2000, the most recent still-image data-compression standard of the Joint Photographic Experts Group. Heretofore, JPEG 2000 encoding has been implemented by several related schemes classified as rate-based distortion-minimization encoding. In each of these schemes, the end user specifies a desired bit rate and the encoding algorithm strives to attain that rate while minimizing a mean squared error (MSE). While rate-based distortion minimization is appropriate for transmitting data over a limited-bandwidth channel, it is not the best approach for applications in which the perceptual quality of reconstructed images is a major consideration. A better approach for such applications is the present alternative one, denoted perceptual distortion control, in which the encoding algorithm strives to compress data to the lowest bit rate that yields at least a specified level of perceptual image quality. Some additional background information on JPEG 2000 is prerequisite to a meaningful summary of JPEG encoding with perceptual distortion control. The JPEG 2000 encoding process includes two subprocesses known as tier-1 and tier-2 coding. In order to minimize the MSE for the desired bit rate, a rate-distortion-optimization subprocess is introduced between the tier-1 and tier-2 subprocesses. In tier-1 coding, each coding block is independently bit-plane coded from the most-significant-bit (MSB) plane to the least-significant-bit (LSB) plane, using three coding passes (except for the MSB plane, which is coded using only one "clean up" coding pass). For M bit planes, this subprocess involves a total of (3M - 2) coding passes. An embedded bit stream is then generated for each coding block. Information on the reduction in distortion and the increase in the bit rate associated with each coding pass is collected. This information is then used in a rate-control procedure to determine the contribution of each coding block to the output compressed bit stream.
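The following Python sketch illustrates, under assumed and much-simplified inputs (it is not the JPEG 2000 reference algorithm), how the per-pass rate and distortion statistics collected in tier-1 can drive perceptual distortion control: coding passes are kept, in order, until a perceptual distortion measure for the block falls below a target:

    def truncate_block(passes, target_distortion):
        # Choose a truncation point for one code block.
        # `passes` is a list of (bits_after_pass, distortion_after_pass),
        # cumulative and ordered from the first to the last coding pass.
        # Return the smallest bit budget whose distortion meets the target,
        # or the full bit stream if the target is never reached.
        for bits, distortion in passes:
            if distortion <= target_distortion:
                return bits
        return passes[-1][0] if passes else 0

    # Hypothetical statistics for one block: rate grows, distortion falls.
    stats = [(120, 9.1), (250, 4.3), (410, 1.8), (600, 0.6)]
    print(truncate_block(stats, target_distortion=2.0))  # -> 410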
Thermodynamic properties of UF6 measured with a ballistic piston compressor
NASA Technical Reports Server (NTRS)
Sterritt, D. E.; Lalos, G. T.; Schneider, R. T.
1973-01-01
From experiments performed with a ballistic piston compressor, certain thermodynamic properties of uranium hexafluoride were investigated. Difficulties presented by the nonideal processes encountered in ballistic compressors are discussed and a computer code BCCC (Ballistic Compressor Computer Code) is developed to analyze the experimental data. The BCCC unfolds the thermodynamic properties of uranium hexafluoride from the helium-uranium hexafluoride mixture used as the test gas in the ballistic compressor. The thermodynamic properties deduced include the specific heat at constant volume, the ratio of specific heats for UF6, and the viscous coupling constant of helium-uranium hexafluoride mixtures.
Protecting quantum memories using coherent parity check codes
NASA Astrophysics Data System (ADS)
Roffe, Joschka; Headley, David; Chancellor, Nicholas; Horsman, Dominic; Kendon, Viv
2018-07-01
Coherent parity check (CPC) codes are a new framework for the construction of quantum error correction codes that encode multiple qubits per logical block. CPC codes have a canonical structure involving successive rounds of bit and phase parity checks, supplemented by cross-checks to fix the code distance. In this paper, we provide a detailed introduction to CPC codes using conventional quantum circuit notation. We demonstrate the implementation of a CPC code on real hardware, by designing a [[4, 2, 2]] detection code.
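As a self-contained illustration of parity-check structure in this setting (my sketch, using the standard [[4, 2, 2]] stabilizers XXXX and ZZZZ rather than anything specific to the CPC construction), the following Python snippet verifies stabilizer commutation via the binary symplectic product:

    import numpy as np

    def pauli_to_symplectic(pauli: str) -> np.ndarray:
        # Encode an n-qubit Pauli string as a binary (x|z) vector.
        x = [1 if p in "XY" else 0 for p in pauli]
        z = [1 if p in "ZY" else 0 for p in pauli]
        return np.array(x + z, dtype=int)

    def commute(p: str, q: str) -> bool:
        # Two Paulis commute iff their symplectic product is 0 mod 2.
        n = len(p)
        a, b = pauli_to_symplectic(p), pauli_to_symplectic(q)
        return (a[:n] @ b[n:] + a[n:] @ b[:n]) % 2 == 0

    # Stabilizers of the [[4, 2, 2]] error-detection code:
    stabilizers = ["XXXX", "ZZZZ"]
    assert all(commute(p, q) for p in stabilizers for q in stabilizers)
    # A single-qubit X error anticommutes with ZZZZ, so it is detected:
    assert not commute("XIII", "ZZZZ")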
Hosoya, Haruo; Hyvärinen, Aapo
2017-07-01
Experimental studies have revealed evidence of both parts-based and holistic representations of objects and faces in the primate visual system. However, it is still a mystery how such seemingly contradictory types of processing can coexist within a single system. Here, we propose a novel theory called mixture of sparse coding models, inspired by the formation of category-specific subregions in the inferotemporal (IT) cortex. We developed a hierarchical network that constructed a mixture of two sparse coding submodels on top of a simple Gabor analysis. The submodels were each trained with face or non-face object images, which resulted in separate representations of facial parts and object parts. Importantly, evoked neural activities were modeled by Bayesian inference, which had a top-down explaining-away effect that enabled recognition of an individual part to depend strongly on the category of the whole input. We show that this explaining-away effect was indeed crucial for the units in the face submodel to exhibit significant selectivity to face images over object images in a similar way to actual face-selective neurons in the macaque IT cortex. Furthermore, the model explained, qualitatively and quantitatively, several tuning properties to facial features found in the middle patch of face processing in IT as documented by Freiwald, Tsao, and Livingstone (2009). These included, in particular, tuning to only a small number of facial features that were often related to geometrically large parts like face outline and hair, preference and anti-preference of extreme facial features (e.g., very large/small inter-eye distance), and reduction of the gain of feature tuning for partial face stimuli compared to whole face stimuli. Thus, we hypothesize that the coding principle of facial features in the middle patch of face processing in the macaque IT cortex may be closely related to mixture of sparse coding models. PMID:28742816
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Xiao-Ying; Yao, Juan; He, Hua
2012-01-01
Extensive testing shows that the current version of the Chemical Mixture Methodology (CMM) is meeting its intended mission to provide conservative estimates of the health effects from exposure to airborne chemical mixtures. However, the current version of the CMM could benefit from several enhancements that are designed to improve its application of Health Code Numbers (HCNs) and employ weighting factors to reduce over-conservatism.
Highly Selective Ionic Block Copolymer Membranes
2010-11-10
Multicomponent Diffusion and Sorption in an Ionic Polymer Membrane: We recently measured the diffusion and sorption of methanol/water mixtures in Nafion (most ... methanol feed concentration (17 M). Figure 1 shows one experiment where hydrated Nafion was exposed to a 2 M methanol/water liquid mixture resulting ... copolymer membranes revealed several surprising results. Contrary to what has been observed in most ionic polymer membranes (e.g., Nafion), the proton ...
Hydrodynamic modeling of petroleum reservoirs using simulator MUFITS
NASA Astrophysics Data System (ADS)
Afanasyev, Andrey
2015-04-01
MUFITS is new noncommercial software for numerical modeling of subsurface processes in various applications (www.mufits.imec.msu.ru). To this point, the simulator has been used for modeling nonisothermal flows in geothermal reservoirs and for modeling underground carbon dioxide storage. In this work, we present a recent extension of the code to petroleum reservoirs. The simulator can be applied in conventional black oil modeling, but it also provides more complicated models for volatile oil and gas condensate reservoirs, as well as for oil rim fields. We give a brief overview of the code by describing the internal representation of reservoir models, which are constructed of grid blocks, interfaces and stock tanks, as well as pipe segments and pipe junctions for modeling wells and surface networks. For the conventional black oil approach, we present simulation results for the SPE comparative tests. We propose an accelerated compositional modeling method for sub- and supercritical flows subject to various phase equilibria, particularly three-phase equilibria of vapour-liquid-liquid type. The method is based on calculating the thermodynamic potential of the reservoir fluid as a function of pressure, total enthalpy and total composition, and storing its values as a spline table, which is used in hydrodynamic simulation for accelerated prediction of PVT properties. We describe both the spline calculation procedure and the flashing algorithm. We evaluate the thermodynamic potential for a mixture of two pseudo-components modeling the heavy and light hydrocarbon fractions. We develop a technique for converting black oil PVT tables to the potential, which can be used for in-situ prediction of multiphase hydrocarbon equilibria under sub- and supercritical conditions, particularly in gas condensate and volatile oil reservoirs. We simulate recovery from a reservoir subject to near-critical initial conditions for the hydrocarbon mixture. We acknowledge financial support by a grant from the President of the Russian Federation (SP-2222.2012.5) and by the Russian Foundation for Basic Research (RFBR 15-31-20585).
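The table-lookup idea can be illustrated with a much simpler stand-in: the Python sketch below precomputes a property on a rectangular pressure-enthalpy grid and answers runtime queries by bilinear interpolation (MUFITS itself stores the thermodynamic potential as spline tables; every name and the property function here are illustrative):

    import numpy as np

    # Precompute a property (a made-up density-like function) on a coarse
    # pressure (Pa) x specific-enthalpy (J/kg) grid.
    p_axis = np.linspace(1e6, 5e7, 50)
    h_axis = np.linspace(1e5, 3e6, 60)
    table = 1000.0 / (1.0 + h_axis[None, :] / 1e6) + p_axis[:, None] / 1e6

    def lookup(p, h):
        # Bilinear interpolation of the tabulated property at (p, h).
        i = int(np.clip(np.searchsorted(p_axis, p) - 1, 0, len(p_axis) - 2))
        j = int(np.clip(np.searchsorted(h_axis, h) - 1, 0, len(h_axis) - 2))
        tp = (p - p_axis[i]) / (p_axis[i + 1] - p_axis[i])
        th = (h - h_axis[j]) / (h_axis[j + 1] - h_axis[j])
        return ((1 - tp) * (1 - th) * table[i, j] + tp * (1 - th) * table[i + 1, j]
                + (1 - tp) * th * table[i, j + 1] + tp * th * table[i + 1, j + 1])

    # Runtime query, far cheaper than a full equation-of-state flash:
    print(lookup(2.3e7, 1.2e6))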
Parallel design of JPEG-LS encoder on graphics processing units
NASA Astrophysics Data System (ADS)
Duan, Hao; Fang, Yong; Huang, Bormin
2012-01-01
With recent technical advances in graphics processing units (GPUs), GPUs have outperformed CPUs in terms of compute capability and memory bandwidth. Many successful GPU applications to high performance computing have been reported. JPEG-LS is an ISO/IEC standard for lossless image compression which utilizes adaptive context modeling and run-length coding to improve compression ratio. However, adaptive context modeling causes data dependency among adjacent pixels and the run-length coding has to be performed in a sequential way. Hence, using JPEG-LS to compress large-volume hyperspectral image data is quite time-consuming. We implement an efficient parallel JPEG-LS encoder for lossless hyperspectral compression on an NVIDIA GPU using the compute unified device architecture (CUDA) programming technology. We use the block parallel strategy, as well as such CUDA techniques as coalesced global memory access, parallel prefix sum, and asynchronous data transfer. We also show the relation between GPU speedup and AVIRIS block size, as well as the relation between compression ratio and AVIRIS block size. When AVIRIS images are divided into blocks, each with 64×64 pixels, we gain the best GPU performance with 26.3x speedup over its original CPU code.
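The block-parallel strategy itself is easy to show outside CUDA: the Python sketch below splits an image into independent 64x64 tiles and compresses them concurrently, with zlib standing in for the JPEG-LS pipeline, so this is a structural illustration only, not the paper's encoder:

    import zlib
    from concurrent.futures import ProcessPoolExecutor
    import numpy as np

    def tiles(image, size=64):
        # Yield independent size x size blocks (edge tiles may be smaller).
        h, w = image.shape
        for y in range(0, h, size):
            for x in range(0, w, size):
                yield image[y:y + size, x:x + size].copy()

    def compress_tile(tile):
        # Each tile is compressed independently, so tiles can run in parallel.
        return zlib.compress(tile.tobytes())

    if __name__ == "__main__":
        # A smooth synthetic image stands in for a hyperspectral band.
        image = (np.add.outer(np.arange(512), np.arange(512)) % 256).astype(np.uint8)
        with ProcessPoolExecutor() as pool:
            compressed = list(pool.map(compress_tile, tiles(image)))
        ratio = image.nbytes / sum(len(c) for c in compressed)
        print(f"{len(compressed)} blocks, compression ratio {ratio:.2f}")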
NASA Astrophysics Data System (ADS)
Bieringer, R.; Abetz, V.; Müller, A. H. E.
ABC triblock copolymers of the type poly[5-(N,N-dimethyl amino)isoprene]-block-polystyrene-block-poly(tert-butyl methacrylate) (AiST) were synthesized and hydrolyzed to yield poly[5-(N,N-dimethyl amino)isoprene]-block-polystyrene-block-poly(methacrylic acid) (AiSA) triblock copolyampholytes. Due to their complex solubility behavior, the solution properties of these materials had to be investigated in THF/water solvent mixtures. Potentiometric titrations of AiSA triblock copolyampholytes showed two inflection points, with the A block being deprotonated prior to the Ai hydrochloride block, thus forming a polyzwitterion at the isoelectric point (iep). The aggregation behavior was studied by dynamic light scattering (DLS) and freeze-fracture/transmission electron microscopy (TEM). Large vesicular structures with almost pH-independent radii were observed.
Minimal Increase Network Coding for Dynamic Networks.
Zhang, Guoyin; Fan, Xu; Wu, Yanxia
2016-01-01
Because of the mobility, computing power and changeable topology of dynamic networks, it is difficult for random linear network coding (RLNC) designed for static networks to satisfy the requirements of dynamic networks. To alleviate this problem, a minimal increase network coding (MINC) algorithm is proposed. By identifying the nonzero elements of an encoding vector, it selects blocks to be encoded on the basis of the relationship between the nonzero elements, which controls changes in the degrees of the blocks; the encoding time is thereby shortened in a dynamic network. The results of simulations show that, compared with existing encoding algorithms, the MINC algorithm provides reduced computational complexity of encoding and an increased probability of delivery. PMID:26867211
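For background, the following Python sketch shows plain random linear network coding over GF(2), the baseline that MINC improves upon (illustrative code, not the MINC algorithm itself): coded packets are random XOR combinations of source blocks, and a receiver can decode once the collected coefficient vectors reach full rank:

    import random

    def rlnc_encode(blocks, rng=random):
        # Return (coefficient_vector, coded_block) over GF(2): a random
        # XOR combination of the source blocks.
        coeffs = [rng.randint(0, 1) for _ in blocks]
        if not any(coeffs):
            coeffs[rng.randrange(len(blocks))] = 1  # avoid the zero vector
        coded = bytes(len(blocks[0]))
        for c, blk in zip(coeffs, blocks):
            if c:
                coded = bytes(a ^ b for a, b in zip(coded, blk))
        return coeffs, coded

    def gf2_rank(rows):
        # Rank of 0/1 coefficient vectors via greedy XOR-basis reduction.
        rows = [int("".join(map(str, r)), 2) for r in rows if any(r)]
        pivots = []
        for r in rows:
            for p in pivots:
                r = min(r, r ^ p)
            if r:
                pivots.append(r)
        return len(pivots)

    blocks = [bytes([i] * 8) for i in range(4)]
    received = [rlnc_encode(blocks) for _ in range(6)]
    print("decodable:", gf2_rank([c for c, _ in received]) == len(blocks))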
Weighted bi-prediction for light field image coding
NASA Astrophysics Data System (ADS)
Conti, Caroline; Nunes, Paulo; Ducla Soares, Luís.
2017-09-01
Light field imaging based on a single-tier camera equipped with a microlens array - also known as integral, holoscopic, and plenoptic imaging - has recently emerged as a practical and promising approach for future visual applications and services. However, successfully deploying actual light field imaging applications and services will require developing adequate coding solutions to efficiently handle the massive amount of data involved in these systems. In this context, self-similarity compensated prediction is a non-local spatial prediction scheme based on block matching that has been shown to achieve high efficiency for light field image coding based on the High Efficiency Video Coding (HEVC) standard. As previously shown by the authors, this is possible by simply averaging two predictor blocks that are jointly estimated from a causal search window in the current frame itself, referred to as self-similarity bi-prediction. However, theoretical analyses of motion compensated bi-prediction have suggested that it is still possible to achieve further rate-distortion performance improvements by adaptively estimating the weighting coefficients of the two predictor blocks. Therefore, this paper presents a comprehensive study of the rate-distortion performance of HEVC-based light field image coding when using different sets of weighting coefficients for self-similarity bi-prediction. Experimental results demonstrate that it is possible to extend the previous theoretical conclusions to light field image coding and show that the proposed adaptive weighting coefficient selection leads to up to 5% bit savings compared to the previous self-similarity bi-prediction scheme.
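A hedged Python sketch of the core idea (illustrative, not the authors' HEVC integration): given two predictor blocks, try a small candidate set of weight pairs and keep the pair minimizing the squared error against the original block. The candidate weights below are assumptions for the example:

    import numpy as np

    # Candidate weight pairs (w0, w1) with w0 + w1 = 1; plain bi-prediction
    # corresponds to (0.5, 0.5). This candidate set is an assumption.
    WEIGHTS = [(0.5, 0.5), (0.25, 0.75), (0.75, 0.25),
               (0.375, 0.625), (0.625, 0.375)]

    def best_biprediction(original, pred0, pred1):
        # Return (weights, prediction) minimizing SSD against `original`.
        best = None
        for w0, w1 in WEIGHTS:
            pred = w0 * pred0 + w1 * pred1
            ssd = float(np.sum((original - pred) ** 2))
            if best is None or ssd < best[0]:
                best = (ssd, (w0, w1), pred)
        return best[1], best[2]

    rng = np.random.default_rng(1)
    block = rng.normal(size=(8, 8))
    p0 = block + rng.normal(scale=0.3, size=(8, 8))  # better predictor
    p1 = block + rng.normal(scale=0.6, size=(8, 8))  # noisier predictor
    print("chosen weights:", best_biprediction(block, p0, p1)[0])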
Kazachenko, Sergey; Giovinazzo, Mark; Hall, Kyle Wm; Cann, Natalie M
2015-09-15
A custom code for molecular dynamics simulations has been designed to run on CUDA-enabled NVIDIA graphics processing units (GPUs). The double-precision code simulates multicomponent fluids, with intramolecular and intermolecular forces, coarse-grained and atomistic models, holonomic constraints, Nosé-Hoover thermostats, and the generation of distribution functions. Algorithms to compute Lennard-Jones and Gay-Berne interactions, and the electrostatic force using Ewald summations, are discussed. A neighbor list is introduced to improve scaling with respect to system size. Three test systems are examined: SPC/E water; an n-hexane/2-propanol mixture; and a liquid crystal mesogen, 2-(4-butyloxyphenyl)-5-octyloxypyrimidine. Code performance is analyzed for each system. With one GPU, a 33-119 fold increase in performance is achieved compared with the serial code while the use of two GPUs leads to a 69-287 fold improvement and three GPUs yield a 101-377 fold speedup. © 2015 Wiley Periodicals, Inc.
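As a minimal, hedged illustration of one ingredient mentioned in the abstract (the Lennard-Jones interaction; nothing here reproduces the authors' CUDA code), this Python snippet computes the truncated LJ energy and forces for a small particle set:

    import numpy as np

    def lj_forces(pos, epsilon=1.0, sigma=1.0, r_cut=2.5):
        # Pairwise truncated Lennard-Jones energy and forces:
        # U(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6) for r < r_cut.
        n = len(pos)
        forces = np.zeros_like(pos)
        energy = 0.0
        for i in range(n - 1):
            for j in range(i + 1, n):
                rij = pos[i] - pos[j]
                r2 = float(rij @ rij)
                if r2 > r_cut ** 2:
                    continue  # a neighbor list would skip these pairs cheaply
                inv6 = (sigma ** 2 / r2) ** 3
                energy += 4 * epsilon * (inv6 ** 2 - inv6)
                # f = -dU/dr along rij: 24*eps*(2*inv12 - inv6)/r2 * rij
                fij = 24 * epsilon * (2 * inv6 ** 2 - inv6) / r2 * rij
                forces[i] += fij
                forces[j] -= fij
        return energy, forces

    rng = np.random.default_rng(7)
    energy, forces = lj_forces(rng.uniform(0, 5, size=(16, 3)))
    print(energy, np.abs(forces).max())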
Block Copolymer Membranes for Biofuel Purification
NASA Astrophysics Data System (ADS)
Evren Ozcam, Ali; Balsara, Nitash
2012-02-01
Purification of biofuels such as ethanol is a matter of considerable concern as they are produced in complex multicomponent fermentation broths. Our objective is to design pervaporation membranes for concentrating ethanol from dilute aqueous mixtures. Polystyrene-b-polydimethylsiloxane-b-polystyrene block copolymers were synthesized by anionic polymerization. The polydimethylsiloxane domains provide ethanol-transporting pathways, while the polystyrene domains provide structural integrity for the membrane. The morphology of the membranes is governed by the composition of the block copolymer while the size of the domains is governed by the molecular weight of the block copolymer. Pervaporation data as a function of these two parameters will be presented.
Kristjánsson, Tómas; Thorvaldsson, Tómas Páll; Kristjánsson, Arni
2014-01-01
Previous research involving both unimodal and multimodal studies suggests that single-response change detection is a capacity-free process while a discriminatory up or down identification is capacity-limited. The trace/context model assumes that this reflects different memory strategies rather than inherent differences between identification and detection. To perform such tasks, one of two strategies is used, a sensory trace or a context coding strategy, and if one is blocked, people will automatically use the other. A drawback to most preceding studies is that stimuli are presented at separate locations, creating the possibility of a spatial confound, which invites alternative interpretations of the results. We describe a series of experiments, investigating divided multimodal attention, without the spatial confound. The results challenge the trace/context model. Our critical experiment involved a gap before a change in volume and brightness, which according to the trace/context model blocks the sensory trace strategy, simultaneously with a roaming pedestal, which should block the context coding strategy. The results clearly show that people can use strategies other than sensory trace and context coding in the tasks and conditions of these experiments, necessitating changes to the trace/context model.
Putting Priors in Mixture Density Mercer Kernels
NASA Technical Reports Server (NTRS)
Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd
2004-01-01
This paper presents a new methodology for automatic knowledge-driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn kernel functions directly from data, rather than using predefined kernels. These data-adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing for physical information to be encoded in the model. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS). The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code. The results show that the Mixture Density Mercer Kernel described here outperforms tree-based classification in distinguishing high-redshift galaxies from low-redshift galaxies by approximately 16% on test data, bagged trees by approximately 7%, and bagged trees built on a much larger sample of data by approximately 2%.
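One common way to realize such a kernel, sketched here in Python as an assumption-laden illustration rather than the paper's exact construction: fit an ensemble of Gaussian mixture models and define K(x, y) as the averaged probability that x and y fall in the same mixture component; built from posterior responsibilities, this is positive semi-definite and hence a valid Mercer kernel:

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def mixture_density_kernel(X, Y, n_components=3, n_models=5, seed=0):
        # K(x, y) = average over models of sum_k P(k|x) P(k|y).
        K = np.zeros((len(X), len(Y)))
        for m in range(n_models):
            gmm = GaussianMixture(n_components=n_components,
                                  random_state=seed + m).fit(np.vstack([X, Y]))
            rx = gmm.predict_proba(X)  # responsibilities P(k|x)
            ry = gmm.predict_proba(Y)
            K += rx @ ry.T
        return K / n_models

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(5, 1, (20, 2))])
    K = mixture_density_kernel(X, X)
    # Same-cluster pairs share mixture components, so their kernel values
    # are much larger than cross-cluster pairs:
    print(K[0, 1].round(2), K[0, -1].round(2))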
PHISICS/RELAP5-3D RESULTS FOR EXERCISES II-1 AND II-2 OF THE OECD/NEA MHTGR-350 BENCHMARK
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strydom, Gerhard
2016-03-01
The Idaho National Laboratory (INL) Advanced Reactor Technologies (ART) High-Temperature Gas-Cooled Reactor (HTGR) Methods group currently leads the Modular High-Temperature Gas-Cooled Reactor (MHTGR) 350 benchmark. The benchmark consists of a set of lattice-depletion, steady-state, and transient problems that can be used by HTGR simulation groups to assess the performance of their code suites. The paper summarizes the results obtained for the first two transient exercises defined for Phase II of the benchmark. The Parallel and Highly Innovative Simulation for INL Code System (PHISICS), coupled with the INL system code RELAP5-3D, was used to generate the results for the Depressurized Conduction Cooldown (DCC) (exercise II-1a) and Pressurized Conduction Cooldown (PCC) (exercise II-2) transients. These exercises require the time-dependent simulation of coupled neutronics and thermal-hydraulics phenomena, and utilize the steady-state solution previously obtained for exercise I-3 of Phase I. This paper also includes a comparison of the benchmark results obtained with a traditional system code “ring” model against a more detailed “block” model that includes kinetics feedback on an individual block level and thermal feedbacks on a triangular sub-mesh. The higher spatial fidelity that can be obtained by the block model is illustrated with comparisons of the maximum fuel temperatures, especially in the case of natural convection conditions that dominate the DCC and PCC events. Differences up to 125 K (or 10%) were observed between the ring and block model predictions of the DCC transient, mostly due to the block model’s capability of tracking individual block decay powers and more detailed helium flow distributions. In general, the block model only required DCC and PCC calculation times twice as long as the ring models, and it therefore seems that the additional development and calculation time required for the block model could be worth the gain that can be obtained in the spatial resolution.
Role of Oxidative Stress in Transformation Induced by Metal Mixture
Martín, Silva-Aguilar; Emilio, Rojas; Mahara, Valverde
2011-01-01
Metals are ubiquitous pollutants present as mixtures. In particular, the mixture of arsenic-cadmium-lead is among the leading toxic agents detected in the environment. These metals have carcinogenic and cell-transforming potential. In this study, we used a two-step cell transformation model to determine the role of oxidative stress in transformation induced by a mixture of arsenic-cadmium-lead. Oxidative damage and antioxidant response were determined. Metal mixture treatment induced an increase in damage markers and in the antioxidant response. Loss of cell viability and increased transforming potential were observed during the promotion phase. This finding correlated significantly with generation of reactive oxygen species. Cotreatment with N-acetyl-cysteine affected the transforming capacity: a diminution was found in the initiation phase, while in the promotion phase a total block of the transforming capacity was observed. Our results suggest that oxidative stress generated by the metal mixture plays an important role only in the promotion phase, promoting transforming capacity. PMID:22191014
40 CFR Table 9 to Part 455 - Group 2 Mixtures
Code of Federal Regulations, 2011 CFR
2011-07-01
... fatty acids of coconut oil (coded 079). 505200 Isoparaffinic hydrocarbons. 1 Shaughnessey codes and.... 016601 2 Dry ice. 022003 Coal tar. 025001 Coal tar neutral oils. 025003 Creosote oil (Note: Derived from... BNOA. 063501 Kerosene. 063502 Mineral oil—includes paraffin oil from 063503. 063503 Petroleum...
40 CFR Table 9 to Part 455 - Group 2 Mixtures
Code of Federal Regulations, 2012 CFR
2012-07-01
... the fatty acids of coconut oil (coded 079). 505200 Isoparaffinic hydrocarbons. 1 Shaughnessey codes... aromatic naphtha. 016601 2 Dry ice. 022003 Coal tar. 025001 Coal tar neutral oils. 025003 Creosote oil... acids. 055601 BNOA. 063501 Kerosene. 063502 Mineral oil—includes paraffin oil from 063503. 063503...
40 CFR Table 9 to Part 455 - Group 2 Mixtures
Code of Federal Regulations, 2013 CFR
2013-07-01
... the fatty acids of coconut oil (coded 079). 505200 Isoparaffinic hydrocarbons. 1 Shaughnessey codes... aromatic naphtha. 016601 2 Dry ice. 022003 Coal tar. 025001 Coal tar neutral oils. 025003 Creosote oil... acids. 055601 BNOA. 063501 Kerosene. 063502 Mineral oil—includes paraffin oil from 063503. 063503...
40 CFR Table 9 to Part 455 - Group 2 Mixtures
Code of Federal Regulations, 2014 CFR
2014-07-01
... the fatty acids of coconut oil (coded 079). 505200 Isoparaffinic hydrocarbons. 1 Shaughnessey codes... aromatic naphtha. 016601 2 Dry ice. 022003 Coal tar. 025001 Coal tar neutral oils. 025003 Creosote oil... acids. 055601 BNOA. 063501 Kerosene. 063502 Mineral oil—includes paraffin oil from 063503. 063503...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bond, J.W.
1988-01-01
Data-compression codes offer the possibility of improving the throughput of existing communication systems in the near term. This study was undertaken to determine if data-compression codes could be utilized to provide message compression in a channel with up to a 0.10-bit error rate. The data-compression capabilities of codes were investigated by estimating the average number of bits-per-character required to transmit narrative files. The performance of the codes in a channel with errors (a noisy channel) was investigated in terms of the average numbers of characters-decoded-in-error and of characters-printed-in-error-per-bit-error. Results were obtained by encoding four narrative files, which were resident on an IBM-PC and use a 58-character set. The study focused on Huffman codes and suffix/prefix comma-free codes. Other data-compression codes, in particular block codes and some simple variants of block codes, are briefly discussed to place the study results in context. Comma-free codes were found to have the most promising data compression because error propagation due to bit errors is limited to a few characters for these codes. A technique was found to identify a suffix/prefix comma-free code giving nearly the same data compression as a Huffman code with much less error propagation than the Huffman codes. Greater data compression can be achieved through the use of comma-free code word assignments based on conditioned probabilities of character occurrence.
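For context, here is a minimal Huffman coder in Python (illustrative; the report's codes were built for a 58-character narrative-file alphabet, and its comma-free variants are not shown here). It builds a code from character frequencies and reports the average bits per character:

    import heapq
    from collections import Counter

    def huffman_code(text):
        # Return {char: bitstring} built from character frequencies.
        freq = Counter(text)
        # Heap items: (weight, tiebreaker, {char: partial codeword}).
        heap = [(w, i, {c: ""}) for i, (c, w) in enumerate(freq.items())]
        heapq.heapify(heap)
        tie = len(heap)
        while len(heap) > 1:
            w0, _, c0 = heapq.heappop(heap)
            w1, _, c1 = heapq.heappop(heap)
            # Merge the two lightest subtrees, prefixing their codewords.
            merged = {c: "0" + code for c, code in c0.items()}
            merged.update({c: "1" + code for c, code in c1.items()})
            heapq.heappush(heap, (w0 + w1, tie, merged))
            tie += 1
        return heap[0][2]

    text = "this is a small narrative file standing in for the test data"
    code = huffman_code(text)
    bits = sum(len(code[c]) for c in text)
    print(f"average {bits / len(text):.2f} bits/character")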
CRASH: A BLOCK-ADAPTIVE-MESH CODE FOR RADIATIVE SHOCK HYDRODYNAMICS-IMPLEMENTATION AND VERIFICATION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van der Holst, B.; Toth, G.; Sokolov, I. V.
We describe the Center for Radiative Shock Hydrodynamics (CRASH) code, a block-adaptive-mesh code for multi-material radiation hydrodynamics. The implementation solves the radiation diffusion model with a gray or multi-group method and uses a flux-limited diffusion approximation to recover the free-streaming limit. Electrons and ions are allowed to have different temperatures and we include flux-limited electron heat conduction. The radiation hydrodynamic equations are solved in the Eulerian frame by means of a conservative finite-volume discretization in either one-, two-, or three-dimensional slab geometry or in two-dimensional cylindrical symmetry. An operator-split method is used to solve these equations in three substeps: (1) an explicit step of a shock-capturing hydrodynamic solver; (2) a linear advection of the radiation in frequency-logarithm space; and (3) an implicit solution of the stiff radiation diffusion, heat conduction, and energy exchange. We present a suite of verification test problems to demonstrate the accuracy and performance of the algorithms. The applications are for astrophysics and laboratory astrophysics. The CRASH code is an extension of the Block-Adaptive Tree Solarwind Roe Upwind Scheme (BATS-R-US) code with a new radiation transfer and heat conduction library and equation-of-state and multi-group opacity solvers. Both CRASH and BATS-R-US are part of the publicly available Space Weather Modeling Framework.
Dependency graph for code analysis on emerging architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shashkov, Mikhail Jurievich; Lipnikov, Konstantin
A directed acyclic dependency graph (DAG) is becoming the standard for modern multi-physics codes. The ideal DAG is the true block-scheme of a multi-physics code. It is therefore a convenient object for in situ analysis of the cost of computations and of algorithmic bottlenecks related to statistically frequent data motion and the dynamical machine state.
The Gift Code User Manual. Volume I. Introduction and Input Requirements
1975-07-01
The GIFT code is a FORTRAN computer program. The basic input to the GIFT code is data called ...
A Measurement and Simulation Based Methodology for Cache Performance Modeling and Tuning
NASA Technical Reports Server (NTRS)
Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)
1998-01-01
We present a cache performance modeling methodology that facilitates the tuning of uniprocessor cache performance for applications executing on shared memory multiprocessors by accurately predicting the effects of source code level modifications. Measurements on a single processor are initially used for identifying parts of code where cache utilization improvements may significantly impact the overall performance. Cache simulation based on trace-driven techniques can be carried out without gathering detailed address traces. Minimal runtime information for modeling cache performance of a selected code block includes: base virtual addresses of arrays, virtual addresses of variables, and loop bounds for that code block. The rest of the information is obtained from the source code. We show that the cache performance predictions are as reliable as those obtained through trace-driven simulations. This technique is particularly helpful for exploring various "what-if" scenarios regarding the cache performance impact of alternative code structures. We explain and validate this methodology using a simple matrix-matrix multiplication program. We then apply this methodology to predict and tune the cache performance of two realistic scientific applications taken from the Computational Fluid Dynamics (CFD) domain.
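To make the trace-driven idea concrete, here is a small Python sketch (my illustration, not the paper's tool) that replays the address stream of a matrix traversal through a direct-mapped cache model and compares miss rates for row-major versus column-major loop orders:

    def miss_rate(addresses, cache_lines=256, line_bytes=64):
        # Direct-mapped cache model: one tag per line, no associativity.
        tags = [None] * cache_lines
        misses = 0
        for addr in addresses:
            block = addr // line_bytes
            idx = block % cache_lines
            if tags[idx] != block:
                tags[idx] = block
                misses += 1
        return misses / len(addresses)

    N, elem = 512, 8  # 512x512 matrix of 8-byte elements, base address 0
    row_major = [elem * (i * N + j) for i in range(N) for j in range(N)]
    col_major = [elem * (i * N + j) for j in range(N) for i in range(N)]
    # Row-major order reuses each cache line; column-major strides past it.
    print(f"row-major miss rate: {miss_rate(row_major):.3f}")
    print(f"column-major miss rate: {miss_rate(col_major):.3f}")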
Design of convolutional tornado code
NASA Astrophysics Data System (ADS)
Zhou, Hui; Yang, Yao; Gao, Hongmin; Tan, Lu
2017-09-01
As a linear block code, the traditional tornado (tTN) code is inefficient in burst-erasure environments, and its multi-level structure may lead to high encoding/decoding complexity. This paper presents a convolutional tornado (cTN) code which is able to improve the burst-erasure protection capability by applying the convolution property to the tTN code, and to reduce computational complexity by eliminating the multi-level structure. The simulation results show that the cTN code can provide better packet loss protection performance with lower computational complexity than the tTN code.
Effect of the addition of rocuronium to 2% lignocaine in peribulbar block for cataract surgery.
Patil, Vishalakshi; Farooqy, Allauddin; Chaluvadi, Balaraju Thayappa; Rajashekhar, Vinayak; Malshetty, Ashwini
2017-01-01
Peribulbar anesthesia is associated with delayed orbital akinesia compared with retrobulbar anesthesia. We designed this study to test the hypothesis that rocuronium added to a mixture of local anesthetics (LAs) could improve the speed of onset of akinesia in peribulbar block (PB). This study examined the effects of adding rocuronium 5 mg to 2% lignocaine with adrenaline on orbital and eyelid akinesia in patients undergoing cataract surgery. In a prospective, randomized, double-blind study, 100 patients were equally randomized to receive a mixture of 0.5 ml normal saline, 6 ml lidocaine 2% with adrenaline and hyaluronidase 50 IU/ml (Group I), or a mixture of rocuronium 0.5 ml (5 mg), 6 ml lidocaine 2% with adrenaline and hyaluronidase 50 IU/ml (Group II). Orbital akinesia was assessed on a 0-8 score (0 = no movement, 8 = normal) at 2 min intervals for 10 min. Time to adequate anesthesia was also recorded. Results are presented as mean ± standard deviation. The rocuronium group demonstrated significantly better akinesia scores than the control group at 2 min intervals post-PB (significant P value obtained). No significant complications were recorded. Rocuronium added to a mixture of LAs improved the quality of akinesia in PB and reduced the need for supplementary injections. The addition of rocuronium 5 mg to a mixture of lidocaine 2% with adrenaline and hyaluronidase 50 IU/ml shortened the onset time of peribulbar anesthesia in patients undergoing cataract surgery without causing adverse effects.
Neighboring block based disparity vector derivation for multiview compatible 3D-AVC
NASA Astrophysics Data System (ADS)
Kang, Jewon; Chen, Ying; Zhang, Li; Zhao, Xin; Karczewicz, Marta
2013-09-01
3D-AVC, being developed under the Joint Collaborative Team on 3D Video Coding (JCT-3V), significantly outperforms Multiview Video Coding plus Depth (MVC+D), which simultaneously encodes texture views and depth views with the multiview extension of H.264/AVC (MVC). However, when 3D-AVC is configured to support multiview compatibility, in which texture views are decoded without depth information, the coding performance degrades significantly. The reason is that the advanced coding tools incorporated into 3D-AVC do not perform well due to the lack of a disparity vector converted from the depth information. In this paper, we propose a disparity vector derivation method utilizing only the information of texture views. Motion information of neighboring blocks is used to determine a disparity vector for a macroblock, so that the derived disparity vector is efficiently used by the coding tools in 3D-AVC. The proposed method significantly improves the coding gain of 3D-AVC in the multiview compatible mode, with about 20% BD-rate savings in the coded views and 26% BD-rate savings in the synthesized views on average.
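A hedged sketch of the general idea (not the normative 3D-AVC derivation process): scan a fixed set of spatial neighbors for inter-view motion and fall back to a zero disparity vector if none is found:

    def derive_disparity(neighbors):
        # Derive a disparity vector for the current macroblock from
        # spatially neighboring blocks (e.g., left, above, above-right).
        # Each neighbor is a dict with 'is_interview' (bool) and 'mv' (x, y).
        # The first neighbor coded with inter-view prediction supplies its
        # motion vector as the disparity vector; otherwise fall back to zero.
        for nb in neighbors:
            if nb is not None and nb["is_interview"]:
                return nb["mv"]
        return (0, 0)

    left = {"is_interview": False, "mv": (3, -1)}
    above = {"is_interview": True, "mv": (-12, 0)}   # points into the base view
    above_right = None                               # outside the picture
    print(derive_disparity([left, above, above_right]))  # -> (-12, 0)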
Hyperbolic/parabolic development for the GIM-STAR code. [flow fields in supersonic inlets
NASA Technical Reports Server (NTRS)
Spradley, L. W.; Stalnaker, J. F.; Ratliff, A. W.
1980-01-01
Flow fields in supersonic inlet configurations were computed using the elliptic GIM code on the STAR computer. Spillage flow under the lower cowl was calculated to be 33% of the incoming stream. The shock/boundary layer interaction on the upper propulsive surface was computed, including separation. All shocks produced by the flow system were captured. Linearized block implicit (LBI) schemes were examined to determine their applicability to the GIM code. Pure explicit methods have stability limitations and fully implicit schemes are inherently inefficient; however, LBI schemes show promise as an effective compromise. A quasiparabolic version of the GIM code was developed using classical parabolized Navier-Stokes methods combined with quasitime relaxation. This scheme is referred to as quasiparabolic, although it applies equally well to hyperbolic supersonic inviscid flows. Second-order windward differences are used in the marching coordinate, and either explicit or linear block implicit time relaxation can be incorporated.
A combinatorial code for pattern formation in Drosophila oogenesis.
Yakoby, Nir; Bristow, Christopher A; Gong, Danielle; Schafer, Xenia; Lembong, Jessica; Zartman, Jeremiah J; Halfon, Marc S; Schüpbach, Trudi; Shvartsman, Stanislav Y
2008-11-01
Two-dimensional patterning of the follicular epithelium in Drosophila oogenesis is required for the formation of three-dimensional eggshell structures. Our analysis of a large number of published gene expression patterns in the follicle cells suggests that they follow a simple combinatorial code based on six spatial building blocks and the operations of union, difference, intersection, and addition. The building blocks are related to the distribution of inductive signals, provided by the highly conserved epidermal growth factor receptor and bone morphogenetic protein signaling pathways. We demonstrate the validity of the code by testing it against a set of patterns obtained in a large-scale transcriptional profiling experiment. Using the proposed code, we distinguish 36 distinct patterns for 81 genes expressed in the follicular epithelium and characterize their joint dynamics over four stages of oogenesis. The proposed combinatorial framework allows systematic analysis of the diversity and dynamics of two-dimensional transcriptional patterns and guides future studies of gene regulation.
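The algebra of the code can be shown directly (a hedged illustration: the six real building blocks are spatial domains set by the EGFR and BMP signals, which are replaced here by arbitrary toy masks): patterns are composed from boolean masks with union, difference, intersection, and addition:

    import numpy as np

    # Toy stand-ins for spatial building blocks on a follicle-cell sheet,
    # represented as boolean masks over a 2D grid of cells.
    yy, xx = np.mgrid[0:40, 0:60]
    midline = np.abs(yy - 20) < 5
    anterior = xx < 15
    dorsal = yy < 20

    # Compose expression patterns with the code's four operations:
    union = midline | anterior
    difference = dorsal & ~anterior
    intersection = midline & dorsal
    addition = midline.astype(int) + anterior.astype(int)  # graded overlap

    for name, pat in [("union", union), ("difference", difference),
                      ("intersection", intersection)]:
        print(f"{name}: {int(pat.sum())} cells expressing")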
An installed nacelle design code using a multiblock Euler solver. Volume 2: User guide
NASA Technical Reports Server (NTRS)
Chen, H. C.
1992-01-01
This is a user manual for the general multiblock Euler design (GMBEDS) code. The code is for the design of a nacelle installed on a geometrically complex configuration, such as a complete airplane with wing/body/nacelle/pylon. It consists of two major building blocks: a design module developed by LaRC using direct iterative surface curvature (DISC), and a general multiblock Euler (GMBE) flow solver. The flow field surrounding a complex configuration is divided into a number of topologically simple blocks to facilitate surface-fitted grid generation and improve flow solution efficiency. This user guide provides input data formats along with examples of input files and a Unix script for program execution in the UNICOS environment.
Prediction of U-Mo dispersion nuclear fuels with Al-Si alloy using artificial neural network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Susmikanti, Mike, E-mail: mike@batan.go.id; Sulistyo, Jos, E-mail: soj@batan.go.id
2014-09-30
Dispersion nuclear fuels, consisting of U-Mo particles dispersed in an Al-Si matrix, are being developed as fuel for research reactors. The equilibrium relationship for a mixture component can be expressed in the phase diagram, and it is important to analyze whether a mixture component is in the equilibrium phase or another phase. For this purpose, a model of the phase diagram is needed, so that it can be determined whether the mixture component is in the stable or melting condition. An artificial neural network (ANN) is a modeling tool for processes involving multivariable nonlinear relationships. The objective of the present work is to develop code based on artificial neural network models of the equilibrium relationship of U-Mo in an Al-Si matrix. This model can be used to predict the type of resulting mixture, and whether a point is in the equilibrium phase or in another phase region. The equilibrium model data for prediction and modeling were generated from experimental data. An artificial neural network with the resilient backpropagation method was chosen to predict the dispersion of nuclear fuels U-Mo in the Al-Si matrix. The developed code was built with some functions in MATLAB. For simulations using the ANN, the Levenberg-Marquardt method was also used for optimization. The artificial neural network is able to predict whether a point is in the equilibrium phase or in another phase region. The developed code based on artificial neural network models was built to analyze the equilibrium relationship of U-Mo in an Al-Si matrix.
Fluorescence Lifetime Study of Cyclodextrin Complexes of Substituted Naphthalenes.
1987-08-15
Subject terms: fluorescence lifetime measurements; cyclodextrins; spectroscopic techniques.
Sanders, Elizabeth A.; Berninger, Virginia W.; Abbott, Robert D.
2017-01-01
Sequential regression was used to evaluate whether language-related working memory components uniquely predict reading and writing achievement beyond cognitive-linguistic translation for students in grades 4–9 (N=103) with specific learning disabilities (SLDs) in subword handwriting (dysgraphia, n=25), word reading and spelling (dyslexia, n=60), or oral and written language (OWL LD, n=18). That is, SLDs are defined on the basis of cascading level of language impairment (subword, word, and syntax/text). A 5-block regression model sequentially predicted literacy achievement from cognitive-linguistic translation (Block 1); working memory components for word form coding (Block 2), phonological and orthographic loops (Block 3), and supervisory focused or switching attention (Block 4); and SLD groups (Block 5). Results showed that cognitive-linguistic translation explained an average of 27% and 15% of the variance in reading and writing achievement, respectively, but working memory components explained an additional 39% and 27% of the variance. Orthographic word form coding uniquely predicted nearly every measure, whereas attention switching only uniquely predicted reading. Finally, differences in reading and writing persisted between dyslexia and dysgraphia, with dysgraphia higher, even after controlling for Block 1 to 4 predictors. Differences in literacy achievement between students with dyslexia and OWL LD were largely explained by the Block 1 predictors. Applications to identifying and teaching students with these SLDs are discussed. PMID:28199175
NASA Astrophysics Data System (ADS)
Cooksey, Tyler; Singh, Avantika; Mai Le, Kim; Wang, Shu; Kelley, Elizabeth; He, Lilin; Vajjala Kesava, Sameer; Gomez, Enrique; Kidd, Bryce; Madsen, Louis; Robertson, Megan
The self-assembly of block copolymers into micelles when introduced to selective solvents enables a wide array of applications, ranging from drug delivery to personal care products to nanoreactors. In order to probe the assembly and dynamics of micellar systems, the structural properties and solvent uptake of biocompatible poly(ethylene oxide-b-ɛ-caprolactone) (PEO-PCL) diblock copolymers in deuterated water (D2O)/tetrahydrofuran (THFd8) mixtures were investigated using small-angle neutron scattering in combination with nuclear magnetic resonance. PEO-PCL block copolymers, of varying molecular weight yet constant block ratio, formed spherical micelles through a wide range of solvent compositions. Varying the composition from 10 to 60% by volume THFd8 in D2O/THFd8 mixtures was a means of varying the core-corona interfacial tension in the micelle system. An increase in THFd8 content in the bulk solvent increased the solvent uptake within the micelle core, which was comparable for the two series, irrespective of the polymer molecular weight. Differences in the behaviors of the micelle size parameters as the solvent composition varied originated from the differing trends in aggregation number for the two micelle series. Incorporation of the known unimer content determined from NMR allowed refinement of extracted micelle parameters.
Lightweight Ceramic Insulation
NASA Technical Reports Server (NTRS)
Wheeler, W. H.; Creedon, J. F.
1986-01-01
Fiber burnout process yields low densities. Low density attained by process of sacrificial burnout. Graphite or carbon fibers mixed into slurry of silica, alumina, and boron-compound fibers in amounts ranging from 25 to 75 percent of total fiber content by weight. Mixture formed into blocks and dried. Blocks placed in kiln and heated to 1,600 degrees F (870 degrees C) for several hours. Graphite or carbon fibers slowly oxidize away, leaving voids and reducing block density. Finally, blocks heated to 2,350 degrees F (1,290 degrees C) for 90 minutes to bond remaining ceramic fibers together. Developed for use on Space Shuttle and other spacecraft, rigid insulation machined to requisite shape and bonded in place.
NASA Astrophysics Data System (ADS)
Schmieschek, S.; Shamardin, L.; Frijters, S.; Krüger, T.; Schiller, U. D.; Harting, J.; Coveney, P. V.
2017-08-01
We introduce the lattice-Boltzmann code LB3D, version 7.1. Building on a parallel program and supporting tools which have enabled research utilising high performance computing resources for nearly two decades, LB3D version 7 provides a subset of the research code functionality as an open source project. Here, we describe the theoretical basis of the algorithm as well as computational aspects of the implementation. The software package is validated against simulations of meso-phases resulting from self-assembly in ternary fluid mixtures comprising immiscible and amphiphilic components such as water-oil-surfactant systems. The impact of the surfactant species on the dynamics of spinodal decomposition is tested, and quantitative measurement of the permeability of a body centred cubic (BCC) model porous medium for a simple binary mixture is described. Single-core performance and scaling behaviour of the code are reported for simulations on current supercomputer architectures.
1998-07-01
An analysis of a mixture of herbs in Essiac, an alternative-medicine anti-cancer therapy, has shown it contains a variety of compounds which have antioxidant activity as well as the ability to block cell growth. The Essiac mixture contains burdock root, Indian rhubarb, sheep sorrel, inner bark of slippery elm, watercress, blessed thistle, red clover, and kelp. A review of patients taking Essiac shows that there was no obvious toxicity. Clinical trials are recommended to determine Essiac's efficacy.
Lim, Seng Koon; Wong, Andrew S W; de Hoog, Hans-Peter M; Rangamani, Padmini; Parikh, Atul N; Nallani, Madhavan; Sandin, Sara; Liedberg, Bo
2017-02-08
Many common amphiphiles self-assemble in water to produce heterogeneous populations of discrete and symmetric but polydisperse and multilamellar vesicles isolating the encapsulated aqueous core from the surrounding bulk. But when mixtures of amphiphiles of vastly different elastic properties co-assemble, their non-uniform molecular organization can stabilize lower symmetries and produce novel shapes. Here, using high resolution electron cryomicroscopy and tomography, we identify the spontaneous formation of a membrane morphology consisting of unilamellar tubular vesicles in dilute aqueous solutions of binary mixtures of two different amphiphiles of vastly different origins. Our results show that aqueous phase mixtures of a fluid-phase phospholipid and an amphiphilic block copolymer spontaneously assume a bimodal polymorphic character in a composition dependent manner: over a broad range of compositions (15-85 mol% polymer component), a tubular morphology co-exists with spherical vesicles. Strikingly, in the vicinity of equimolar compositions, an exclusively tubular morphology (Lt; diameter, ∼15 nm; length, >1 μm; core, ∼2.0 nm; wall, ∼5-6 nm) emerges in an apparent steady state. Theory suggests that the spontaneous stabilization of cylindrical vesicles, unaided by extraneous forces, requires a significant spontaneous bilayer curvature, which in turn necessitates a strongly asymmetric membrane composition. We confirm that such dramatic compositional asymmetry is indeed produced spontaneously in aqueous mixtures of a lipid and polymer through two independent biochemical assays - (1) reduction in the quenching of fluorophore-labeled lipids and (2) inhibition in the activity of externally added lipid-hydrolyzing phospholipase A2 - resulting in a significant enrichment of the polymer component in the outer leaflet. Taken together, these results illustrate the coupling of the membrane shape with local composition through spontaneous curvature generation under conditions of asymmetric distribution of mixtures of disparate amphiphiles.
Protograph LDPC Codes with Node Degrees at Least 3
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Jones, Christopher
2006-01-01
In this paper we present protograph codes with a small number of degree-3 nodes and one high-degree node. The iterative decoding thresholds of the proposed rate-1/2 codes are lower, by about 0.2 dB, than those of the best known irregular LDPC codes with node degrees at least 3. The main motivation is to gain linear minimum distance, to achieve a low error floor, and to construct rate-compatible protograph-based LDPC codes of fixed block length that simultaneously achieve low iterative decoding thresholds and linear minimum distance. We start with a rate-1/2 protograph LDPC code with degree-3 nodes and one high-degree node. Higher-rate codes are obtained by connecting check nodes with degree-2 non-transmitted nodes. This is equivalent to constraint combining in the protograph. The case where all constraints are combined corresponds to the highest-rate code. This combined constraint must be connected to nodes of degree at least three for the graph to have linear minimum distance. Thus, having node degrees at least 3 at rate 1/2 guarantees that the linear-minimum-distance property is preserved for higher rates. Through examples we show that iterative decoding thresholds as low as 0.544 dB can be achieved for small protographs with node degrees at least three. A family of low- to high-rate codes with minimum distance linearly increasing in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.
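The circulant-lifting step that turns a protograph into a full parity-check matrix can be illustrated as follows. This is a generic sketch, not the authors' construction: the base matrix, lift size Z, and random shift selection are placeholder choices (a real design would search shifts for girth and preserve the linear-minimum-distance property).

```python
import numpy as np

def lift_protograph(base, Z, rng=None):
    """Lift a protograph base matrix into a binary LDPC parity-check matrix
    by replacing each edge with a Z x Z circulant permutation.
    base[i][j] = number of parallel edges between check i and variable j."""
    rng = np.random.default_rng(rng)
    m, n = len(base), len(base[0])
    H = np.zeros((m * Z, n * Z), dtype=np.uint8)
    I = np.eye(Z, dtype=np.uint8)
    for i in range(m):
        for j in range(n):
            # distinct shifts so parallel edges cannot cancel over GF(2)
            for s in rng.choice(Z, size=base[i][j], replace=False):
                H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] ^= np.roll(I, s, axis=1)
    return H

# Toy base matrix (not the paper's): three degree-3 variable nodes plus
# one higher-degree variable node shared by both checks.
base = [[1, 1, 1, 3],
        [2, 2, 2, 3]]
H = lift_protograph(base, Z=16, rng=0)
print(H.shape, H.sum(axis=0).min(), H.sum(axis=0).max())   # (32, 64) 3 6
```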
1979-09-01
Keywords: target descriptions; GIFT code; COMGEOM descriptions; FASTGEN code. ... The code which accepts the COMGEOM target description and produces the shotline data is the GIFT code. The GIFT code evolved from and has ... the Army uses the COMGEOM/GIFT methodology, while the Navy and Air Force use the PATCH/SHOTGEN-FASTGEN methodology. Bain, Lawrence W.; Heisinger, Mathew J.
Evaluation of three coding schemes designed for improved data communication
NASA Technical Reports Server (NTRS)
Snelsire, R. W.
1974-01-01
Three coding schemes designed for improved data communication are evaluated. Four block codes are evaluated relative to a quality function, which is a function of both the amount of data rejected and the error rate. The Viterbi maximum likelihood decoding algorithm as a decoding procedure is reviewed. This evaluation is obtained by simulating the system on a digital computer. Short constraint length rate 1/2 quick-look codes are studied, and their performance is compared to general nonsystematic codes.
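As a point of reference for the decoding procedure reviewed above, here is a minimal hard-decision Viterbi decoder for the textbook rate-1/2, constraint-length-3 code with generators (7, 5) octal; it is illustrative only, not one of the codes evaluated in the study.

```python
def conv_encode(bits, K=3, gens=(0b111, 0b101)):
    """Rate-1/2 feedforward convolutional encoder (state = last K-1 inputs)."""
    s, out = 0, []
    for b in bits:
        reg = (b << (K - 1)) | s                 # newest bit enters on the left
        out += [bin(reg & g).count("1") & 1 for g in gens]
        s = reg >> 1
    return out

def viterbi_decode(rx, K=3, gens=(0b111, 0b101)):
    """Hard-decision Viterbi decoding with a Hamming-distance branch metric."""
    n_states, INF = 1 << (K - 1), float("inf")
    metric = [0.0] + [INF] * (n_states - 1)      # encoder starts in state 0
    paths = [[] for _ in range(n_states)]
    for t in range(0, len(rx), 2):
        nm, npth = [INF] * n_states, [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                reg = (b << (K - 1)) | s
                out = [bin(reg & g).count("1") & 1 for g in gens]
                m = metric[s] + (out[0] != rx[t]) + (out[1] != rx[t + 1])
                ns = reg >> 1                    # next state drops the oldest bit
                if m < nm[ns]:                   # keep only the survivor path
                    nm[ns], npth[ns] = m, paths[s] + [b]
        metric, paths = nm, npth
    return paths[min(range(n_states), key=metric.__getitem__)]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
rx = conv_encode(msg)
rx[5] ^= 1                                       # one channel bit error
assert viterbi_decode(rx) == msg                 # corrected
```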
Navier-Stokes analysis of cold scramjet-afterbody flows
NASA Technical Reports Server (NTRS)
Baysal, Oktay; Engelund, Walter C.; Eleshaky, Mohamed E.
1989-01-01
The progress of two efforts in coding solutions of Navier-Stokes equations is summarized. The first effort concerns a 3-D space marching parabolized Navier-Stokes (PNS) code being modified to compute the supersonic mixing flow through an internal/external expansion nozzle with multicomponent gases. The 3-D PNS equations, coupled with a set of species continuity equations, are solved using an implicit finite difference scheme. The completed work is summarized and includes code modifications for four chemical species, computing the flow upstream of the upper cowl for a theoretical air mixture, developing an initial plane solution for the inner nozzle region, and computing the flow inside the nozzle for both a N2/O2 mixture and a Freon-12/Ar mixture, and plotting density-pressure contours for the inner nozzle region. The second effort concerns a full Navier-Stokes code. The species continuity equations account for the diffusion of multiple gases. This 3-D explicit afterbody code has the ability to use high order numerical integration schemes such as the 4th order MacCormack, and the Gottlieb-MacCormack schemes. Changes to the work are listed and include, but are not limited to: (1) internal/external flow capability; (2) new treatments of the cowl wall boundary conditions and relaxed computations around the cowl region and cowl tip; (3) the entering of the thermodynamic and transport properties of Freon-12, Ar, O, and N; (4) modification to the Baldwin-Lomax turbulence model to account for turbulent eddies generated by cowl walls inside and external to the nozzle; and (5) adopting a relaxation formula to account for the turbulence in the mixing shear layer.
NASA Astrophysics Data System (ADS)
Mense, Mario; Schindelhauer, Christian
We introduce the Read-Write-Coding-System (RWC) - a very flexible class of linear block codes that generate efficient and flexible erasure codes for storage networks. In particular, given a message x of k symbols and a codeword y of n symbols, an RW code defines additional parameters k ≤ r, w ≤ n that offer enhanced possibilities to adjust the fault-tolerance capability of the code. More precisely, an RWC provides linear (n, k, d)-codes that have (a) minimum distance d = n - r + 1 between any two codewords, and (b) for each codeword, a codeword for each other message within distance at most w. Furthermore, depending on the values of r, w and the code alphabet, different block codes such as parity codes (e.g. RAID 4/5) or Reed-Solomon (RS) codes (if r = k and thus w = n) can be generated. In storage networks in which I/O accesses are very costly and redundancy is crucial, this flexibility has considerable advantages, as r and w can be adapted optimally to read- or write-intensive applications; only w symbols must be updated even if the message x changes completely, unlike other codes, which must rewrite y entirely whenever x changes. In this paper, we first state a tight lower bound and basic conditions for all RW codes. Furthermore, we introduce special RW codes in which all mentioned parameters are adjustable even online, that is, those RW codes adapt to changing demands. Finally, we point out some useful properties regarding safety and security of the stored data.
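The read/write flexibility that motivates RW codes can be felt in the simplest special case, a RAID-4-style parity code, where replacing one message block requires touching only two stored blocks rather than the whole codeword. The sketch below shows this elementary case only, not the RWC construction itself.

```python
from functools import reduce
from operator import xor

def parity(blocks):
    """RAID-4-style parity over equal-length data blocks (bytewise XOR)."""
    return bytes(reduce(xor, col) for col in zip(*blocks))

data = [b"alpha", b"bravo", b"gamma"]
p = parity(data)

# Small-write update: replacing one data block touches only that block
# and the parity block -- not the whole codeword.
old, new = data[1], b"delta"
p = bytes(a ^ b ^ c for a, b, c in zip(p, old, new))
data[1] = new
assert p == parity(data)          # differential update matches full recompute

# Erasure recovery: any single lost block is the XOR of the survivors.
lost = data[2]
recovered = parity([data[0], data[1], p])
assert recovered == lost
```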
Testing of Error-Correcting Sparse Permutation Channel Codes
NASA Technical Reports Server (NTRS)
Shcheglov, Kirill, V.; Orlov, Sergei S.
2008-01-01
A computer program performs Monte Carlo direct numerical simulations for testing sparse permutation channel codes, which offer strong error-correction capabilities at high code rates and are considered especially suitable for storage of digital data in holographic and volume memories. A word in a code of this type is characterized by, among other things, a sparseness parameter (M) and a fixed number (K) of 1 or "on" bits in a channel block length of N.
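A Monte Carlo test harness of this general shape can be sketched briefly; the program's actual codebook construction and decoder are not described in the abstract, so the random sparse codewords and minimum-distance decoder below are stand-ins.

```python
import random

def sparse_codebook(N, K, size, seed=0):
    """Random codebook of N-bit words, each with exactly K 'on' bits
    (a stand-in for the sparse permutation codewords; illustrative only)."""
    rng = random.Random(seed)
    words = set()
    while len(words) < size:
        words.add(frozenset(rng.sample(range(N), K)))
    return list(words)

def bsc(word, N, p, rng):
    """Pass a set-of-on-bits word through a binary symmetric channel."""
    return {i for i in range(N) if (i in word) ^ (rng.random() < p)}

def word_error_rate(N=32, K=4, M=64, p=0.02, trials=2000, seed=1):
    rng = random.Random(seed)
    book = sparse_codebook(N, K, M)
    errors = 0
    for _ in range(trials):
        sent = rng.choice(book)
        rx = bsc(sent, N, p, rng)
        # minimum-Hamming-distance decoding over the whole codebook
        decoded = min(book, key=lambda w: len(rx ^ w))
        errors += decoded != sent
    return errors / trials

print(f"estimated word error rate: {word_error_rate():.4f}")
```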
Light Infantry in the Defense of Urban Europe.
1986-12-14
1983-03-08
... a block copolymer can sometimes be transformed into a homogeneous, disordered structure. The temperature of the transition depends on the degree of ... probably that the morphology is gradually transformed from spherical to cylindrical and eventually to lamellar packing. There is, however, no evidence of ...
Some partial-unit-memory convolutional codes
NASA Technical Reports Server (NTRS)
Abdel-Ghaffar, K.; Mceliece, R. J.; Solomon, G.
1991-01-01
The results of a study on a class of error correcting codes called partial unit memory (PUM) codes are presented. This class of codes, though not entirely new, has until now remained relatively unexplored. The possibility of using the well-developed theory of block codes to construct a large family of promising PUM codes is shown. The performance of several specific PUM codes is compared with that of the Voyager standard (2, 1, 6) convolutional code. It was found that these codes can outperform the Voyager code with little or no increase in decoder complexity. This suggests that there may very well be PUM codes that can be used for deep space telemetry, offering both increased performance and decreased implementation complexity over current coding systems.
Quantum Kronecker sum-product low-density parity-check codes with finite rate
NASA Astrophysics Data System (ADS)
Kovalev, Alexey A.; Pryadko, Leonid P.
2013-07-01
We introduce an ansatz for quantum codes which gives the hypergraph-product (generalized toric) codes by Tillich and Zémor and generalized bicycle codes by MacKay as limiting cases. The construction allows for both the lower and the upper bounds on the minimum distance; they scale as a square root of the block length. Many thus defined codes have a finite rate and limited-weight stabilizer generators, an analog of classical low-density parity-check (LDPC) codes. Compared to the hypergraph-product codes, hyperbicycle codes generally have a wider range of parameters; in particular, they can have a higher rate while preserving the estimated error threshold.
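The hypergraph-product limiting case mentioned above is easy to write down explicitly: from two classical parity-check matrices it yields a pair of commuting CSS stabilizer matrices. The sketch below uses the [3,1] repetition code for both factors, an illustrative choice that gives a small surface-code-like example.

```python
import numpy as np

def hypergraph_product(H1, H2):
    """CSS stabilizer matrices of the hypergraph-product code of two binary
    classical parity-check matrices. Returns (HX, HZ) satisfying the
    commutation condition HX @ HZ.T = 0 over GF(2)."""
    m1, n1 = H1.shape
    m2, n2 = H2.shape
    HX = np.hstack([np.kron(H1, np.eye(n2, dtype=int)),
                    np.kron(np.eye(m1, dtype=int), H2.T)]) % 2
    HZ = np.hstack([np.kron(np.eye(n1, dtype=int), H2),
                    np.kron(H1.T, np.eye(m2, dtype=int))]) % 2
    return HX, HZ

# Both factors: parity checks of the [3,1] repetition code.
H = np.array([[1, 1, 0],
              [0, 1, 1]])
HX, HZ = hypergraph_product(H, H)
assert not ((HX @ HZ.T) % 2).any()     # X and Z stabilizers commute
print(HX.shape, HZ.shape)              # (6, 13) (6, 13): a 13-qubit code
```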
Saturation of recognition elements blocks evolution of new tRNA identities
Saint-Léger, Adélaïde; Bello, Carla; Dans, Pablo D.; Torres, Adrian Gabriel; Novoa, Eva Maria; Camacho, Noelia; Orozco, Modesto; Kondrashov, Fyodor A.; Ribas de Pouplana, Lluís
2016-01-01
Understanding the principles that led to the current complexity of the genetic code is a central question in evolution. Expansion of the genetic code required the selection of new transfer RNAs (tRNAs) with specific recognition signals that allowed them to be matured, modified, aminoacylated, and processed by the ribosome without compromising the fidelity or efficiency of protein synthesis. We show that saturation of recognition signals blocks the emergence of new tRNA identities and that the rate of nucleotide substitutions in tRNAs is higher in species with fewer tRNA genes. We propose that the growth of the genetic code stalled because a limit was reached in the number of identity elements that can be effectively used in the tRNA structure. PMID:27386510
On complexity of trellis structure of linear block codes
NASA Technical Reports Server (NTRS)
Lin, Shu
1990-01-01
The trellis structure of linear block codes (LBCs) is discussed. The state and branch complexities of a trellis diagram (TD) for an LBC are investigated. The TD with the minimum number of states is said to be minimal. The branch complexity of a minimal TD for an LBC is expressed in terms of the dimensions of specific subcodes of the given code. Upper and lower bounds are then derived on the number of states of a minimal TD for an LBC, and it is shown that a cyclic (or shortened cyclic) code is the worst in terms of state complexity among LBCs of the same length and dimension. Furthermore, it is shown that the structural complexity of a minimal TD for an LBC depends on the order of its bit positions. This fact suggests that an appropriate permutation of the bit positions of a code may result in an equivalent code with a much simpler minimal TD. Boolean polynomial representation of the codewords of an LBC is also considered. This representation helps in the study of the trellis structure of the code and is applied to construct its minimal TD. Particular emphasis is placed on the construction of minimal trellises for Reed-Muller codes and for the extended and permuted binary primitive BCH codes which contain Reed-Muller codes as subcodes. Finally, the structural complexity of minimal trellises for the extended and permuted double-error-correcting BCH codes is analyzed and presented. It is shown that these codes have relatively simple trellis structure and hence can be decoded with the Viterbi decoding algorithm.
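The state complexity discussed here is straightforward to compute once a generator matrix is in trellis-oriented (minimal-span) form: the state-space dimension at each depth is the number of rows whose active span crosses that boundary. A sketch, assuming the matrix is already trellis-oriented (the span-reduction step is omitted):

```python
def state_profile(togm):
    """State-space dimensions of the minimal trellis of a linear block code,
    given a trellis-oriented generator matrix (all row starts distinct,
    all row ends distinct). The state dimension at boundary i is the number
    of rows whose active span [start, end] satisfies start < i <= end."""
    n = len(togm[0])
    spans = []
    for row in togm:
        on = [j for j, bit in enumerate(row) if bit]
        spans.append((on[0], on[-1]))
    return [sum(start < i <= end for start, end in spans) for i in range(n + 1)]

# Trellis-oriented generator matrix for the (8,4) first-order Reed-Muller code.
togm = [
    [1, 1, 1, 1, 0, 0, 0, 0],
    [0, 1, 0, 1, 1, 0, 1, 0],
    [0, 0, 1, 1, 1, 1, 0, 0],
    [0, 0, 0, 0, 1, 1, 1, 1],
]
print(state_profile(togm))   # [0, 1, 2, 3, 2, 3, 2, 1, 0] -> at most 2^3 states
```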
NASA Technical Reports Server (NTRS)
Hall, Edward J.; Heidegger, Nathan J.; Delaney, Robert A.
1999-01-01
The overall objective of this study was to evaluate the effects of turbulence models in a 3-D numerical analysis on the wake prediction capability. The current version of the computer code resulting from this study is referred to as ADPAC v7 (Advanced Ducted Propfan Analysis Codes -Version 7). This report is intended to serve as a computer program user's manual for the ADPAC code used and modified under Task 15 of NASA Contract NAS3-27394. The ADPAC program is based on a flexible multiple-block and discretization scheme permitting coupled 2-D/3-D mesh block solutions with application to a wide variety of geometries. Aerodynamic calculations are based on a four-stage Runge-Kutta time-marching finite volume solution technique with added numerical dissipation. Steady flow predictions are accelerated by a multigrid procedure. Turbulence models now available in the ADPAC code are: a simple mixing-length model, the algebraic Baldwin-Lomax model with user defined coefficients, the one-equation Spalart-Allmaras model, and a two-equation k-R model. The consolidated ADPAC code is capable of executing in either a serial or parallel computing mode from a single source code.
MRNA and miRNA expression patterns associated to pathways linked to metal mixture health effects.
Martínez-Pacheco, M; Hidalgo-Miranda, A; Romero-Córdoba, S; Valverde, M; Rojas, E
2014-01-10
Metals are a threat to human health by increasing disease risk. Experimental data have linked altered miRNA expression with exposure to some metals. MiRNAs comprise a large family of non-coding single-stranded molecules that primarily function to negatively regulate gene expression post-transcriptionally. Although several human populations are exposed to low concentrations of As, Cd and Pb as a mixture, most toxicology research focuses on the individual effects that these metals exert. Thus, this study aims to evaluate global miRNA and mRNA expression changes induced by a metal mixture containing NaAsO2, CdCl2, Pb(C2H3O2)2·3H2O and to predict possible metal-associated disease development under these conditions. Our results show that this metal mixture results in a miRNA expression profile that may be responsible for the mRNA expression changes observed under experimental conditions in which coding proteins are involved in cellular processes, including cell death, growth and proliferation related to the metal-associated inflammatory response and cancer. © 2013 Elsevier B.V. All rights reserved.
Effect of Long Term Low-Level Gamma Radiation on Thermal Sensitivity of RDX/HMX Mixtures
1976-11-01
... 1.1x10 R. It was concluded that the slight exothermic reaction before the HMX polymorphic transition could be caused by a radiation-induced ... Final report on the effect of long-term low-level gamma radiation on the thermal sensitivity of RDX/HMX mixtures. Keywords: gamma radiation; weight loss; HMX; impact sensitivity test; RDX; vacuum stability test; DTA; infrared spectrometry; TGA.
Nucleation in Polymers and Soft Matter
NASA Astrophysics Data System (ADS)
Xu, Xiaofei; Ting, Christina L.; Kusaka, Isamu; Wang, Zhen-Gang
2014-04-01
Nucleation is a ubiquitous phenomenon in many physical, chemical, and biological processes. In this review, we describe recent progress on the theoretical study of nucleation in polymeric fluids and soft matter, including binary mixtures (polymer blends, polymers in poor solvents, compressible polymer-small molecule mixtures), block copolymer melts, and lipid membranes. We discuss the methodological development for studying nucleation as well as novel insights and new physics obtained in the study of the nucleation behavior in these systems.
Compress compound images in H.264/MPEG-4 AVC by exploiting spatial correlation.
Lan, Cuiling; Shi, Guangming; Wu, Feng
2010-04-01
Compound images are a combination of text, graphics, and natural image. They present strong anisotropic features, especially in the text and graphics parts. These anisotropic features often render conventional compression inefficient. Thus, this paper proposes a novel coding scheme built on H.264 intraframe coding. In the scheme, two new intra modes are developed to better exploit spatial correlation in compound images. The first is the residual scalar quantization (RSQ) mode, where intra-predicted residues are directly quantized and coded without transform. The second is the base colors and index map (BCIM) mode, which can be viewed as an adaptive color quantization. In this mode, an image block is represented by several representative colors, referred to as base colors, plus an index map, and this representation is what gets compressed. Every block selects its coding mode from the two new modes and the existing H.264 intra modes by rate-distortion optimization (RDO). Experimental results show that the proposed scheme improves coding efficiency by more than 10 dB at most bit rates for compound images while keeping performance comparable to H.264 for natural images.
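The BCIM idea, representing a block by a small palette plus an index map, is essentially adaptive color quantization, which a few lines of k-means make concrete. This toy sketch omits the entropy coding of the map and the RDO mode decision described above; the block size, palette size, and iteration count are arbitrary choices.

```python
import numpy as np

def bcim_encode(block, n_colors=4, iters=10, seed=0):
    """Toy base-colors-and-index-map coding of one image block: quantize the
    block's pixels to a few representative colors (Lloyd/k-means) and store
    the palette plus a per-pixel index map. Illustrative only."""
    h, w, _ = block.shape
    pixels = block.reshape(-1, 3).astype(float)
    rng = np.random.default_rng(seed)
    palette = pixels[rng.choice(len(pixels), n_colors, replace=False)]
    for _ in range(iters):                      # Lloyd iterations
        d = ((pixels[:, None, :] - palette[None, :, :]) ** 2).sum(-1)
        idx = d.argmin(1)                       # nearest base color per pixel
        for c in range(n_colors):               # recentre non-empty clusters
            if (idx == c).any():
                palette[c] = pixels[idx == c].mean(0)
    return palette.round().astype(np.uint8), idx.reshape(h, w).astype(np.uint8)

block = np.random.default_rng(1).integers(0, 256, (16, 16, 3), dtype=np.uint8)
palette, index_map = bcim_encode(block)
reconstructed = palette[index_map]              # decoder side: palette lookup
```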
Data compression using adaptive transform coding. Appendix 1: Item 1. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Rost, Martin Christopher
1988-01-01
Adaptive low-rate source coders are described in this dissertation. These coders adapt by adjusting the complexity of the coder to match the local coding difficulty of the image. This is accomplished by using a threshold-driven maximum-distortion criterion to select the specific coder used. The different coders are built using variable-blocksize transform techniques, and the threshold criterion selects small transform blocks to code the more difficult regions and larger blocks to code the less complex regions. A theoretical framework is constructed within which these coders can be studied. An algorithm for selecting the optimal bit allocation for the quantization of transform coefficients is developed; it achieves more accurate bit assignments than the algorithms currently used in the literature. Upper and lower bounds for the bit-allocation distortion-rate function are developed, and an obtainable distortion-rate function is derived for a particular scalar quantizer mixing method that can be used to code transform coefficients at any rate.
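For orientation, the textbook high-rate bit-allocation rule that such algorithms refine assigns each coefficient bits according to its log variance relative to the geometric mean. The sketch below is that baseline rule plus a common clip-and-redistribute heuristic, not the dissertation's more accurate algorithm.

```python
import numpy as np

def bit_allocation(variances, avg_rate):
    """Textbook high-rate bit allocation for transform coefficients:
        b_i = R + 0.5 * log2(var_i / geometric_mean(var)).
    Negative allocations are clipped to zero and the deficit redistributed
    (a common heuristic; more accurate schemes iterate on the distortion)."""
    var = np.asarray(variances, dtype=float)
    gm = np.exp(np.log(var).mean())                # geometric mean of variances
    bits = avg_rate + 0.5 * np.log2(var / gm)
    while (bits < 0).any():                        # clip and redistribute
        neg = bits < 0
        surplus = bits[neg].sum()                  # negative: taken from others
        bits[neg] = 0.0
        pos = bits > 0
        bits[pos] += surplus / pos.sum()
    return bits

var = [100.0, 30.0, 9.0, 2.0, 0.5, 0.1]            # made-up coefficient variances
print(bit_allocation(var, avg_rate=2.0).round(2))
```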
A seismic data compression system using subband coding
NASA Technical Reports Server (NTRS)
Kiely, A. B.; Pollara, F.
1995-01-01
This article presents a study of seismic data compression techniques and a compression algorithm based on subband coding. The algorithm includes three stages: a decorrelation stage, a quantization stage that introduces a controlled amount of distortion to allow for high compression ratios, and a lossless entropy coding stage based on a simple but efficient arithmetic coding method. Subband coding methods are particularly suited to the decorrelation of nonstationary processes such as seismic events. Adaptivity to the nonstationary behavior of the waveform is achieved by dividing the data into separate blocks that are encoded separately with an adaptive arithmetic encoder. This is done with high efficiency due to the low overhead introduced by the arithmetic encoder in specifying its parameters. The technique could be used as a progressive transmission system, where successive refinements of the data can be requested by the user. This allows seismologists to first examine a coarse version of waveforms with minimal usage of the channel and then decide where refinements are required. Rate-distortion performance results are presented and comparisons are made with two block transform methods.
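The subband split at the heart of such a scheme can be illustrated with the simplest two-band filter pair (orthonormal Haar); the article's actual filters and its adaptive arithmetic coder are not reproduced here.

```python
import numpy as np

def haar_analysis(x):
    """One level of a two-band (Haar) subband split of a 1-D trace:
    a half-rate low-pass average band and a half-rate high-pass detail band."""
    x = np.asarray(x, dtype=float)
    even, odd = x[0::2], x[1::2]
    return (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)

def haar_synthesis(low, high):
    """Inverse of haar_analysis: interleave the reconstructed samples."""
    x = np.empty(2 * len(low))
    x[0::2] = (low + high) / np.sqrt(2)
    x[1::2] = (low - high) / np.sqrt(2)
    return x

# Synthetic "seismic" trace: a smooth signal plus noise; quantization of the
# two bands (not shown) would happen between analysis and synthesis.
trace = np.sin(np.linspace(0, 20, 256)) \
        + 0.05 * np.random.default_rng(0).normal(size=256)
low, high = haar_analysis(trace)
assert np.allclose(haar_synthesis(low, high), trace)   # perfect reconstruction
```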
NASA Technical Reports Server (NTRS)
Steinbrenner, John P.; Chawner, John R.
1992-01-01
GRIDGEN is a government-domain software package for interactive generation of multiple-block grids around general configurations. Though it has been freely available since 1989, it has not been widely embraced by the internal flow community due to a misconception that it was designed for external flow use only. In reality, GRIDGEN has always worked for internal flow applications, and ongoing enhancements are improving the quality of, and the efficiency with which, grids for external and internal flow problems may be constructed. The software consists of four codes used to perform the four steps of the grid generation process. GRIDBLOCK is first used to decompose the flow domain into a collection of component blocks and then to establish interblock connections and flow solver boundary conditions. GRIDGEN2D is then used to generate surface grids on the outer shell of each component block. GRIDGEN3D generates grid points on the interior of each block, and finally GRIDVUE3D is used to inspect the resulting multiple-block grid. Three of these codes (GRIDBLOCK, GRIDGEN2D, and GRIDVUE3D) are highly interactive and graphical in nature, and currently run on Silicon Graphics, Inc., and IBM RS/6000 workstations. The lone batch code (GRIDGEN3D) may be run on any of several Unix-based platforms. Surface grid generation in GRIDGEN2D is being improved with the addition of higher-order surface definitions (NURBS and parametric surfaces input in IGES format and bicubic surfaces input in PATRAN Neutral File format) and double-precision mathematics. In addition, two types of automation have been added to GRIDGEN2D that reduce the learning-curve slope for new users and eliminate work for experienced users. Volume grid generation using GRIDGEN3D has been improved via the addition of an advanced hybrid control function formulation that provides both orthogonality and clustering control at the block faces and clustering control on the block interior.
DIVWAG Model Documentation. Volume II. Programmer/Analyst Manual. Part 5.
1976-07-01
... Data words include: mission type (1 = DAFS; 2 = CAS); estimated X coordinate of target; estimated Y coordinate of target; reject code (0 = mission unit ...); abort indicator (0 = no abort; 1 = abort); X coordinate of target; Y coordinate of target; aircraft munitions item code ... CALL TRNSMT to transmit first block of data ... request for input ... CALL TRNSMT to transmit last block of data (Figure VII-3-B-10).
Sanders, Elizabeth A; Berninger, Virginia W; Abbott, Robert D
Sequential regression was used to evaluate whether language-related working memory components uniquely predict reading and writing achievement beyond cognitive-linguistic translation for students in Grades 4 through 9 (N = 103) with specific learning disabilities (SLDs) in subword handwriting (dysgraphia, n = 25), word reading and spelling (dyslexia, n = 60), or oral and written language (oral and written language learning disabilities, n = 18). That is, SLDs are defined on the basis of cascading level of language impairment (subword, word, and syntax/text). A five-block regression model sequentially predicted literacy achievement from cognitive-linguistic translation (Block 1); working memory components for word-form coding (Block 2), phonological and orthographic loops (Block 3), and supervisory focused or switching attention (Block 4); and SLD groups (Block 5). Results showed that cognitive-linguistic translation explained an average of 27% and 15% of the variance in reading and writing achievement, respectively, but working memory components explained an additional 39% and 27% of variance. Orthographic word-form coding uniquely predicted nearly every measure, whereas attention switching uniquely predicted only reading. Finally, differences in reading and writing persisted between dyslexia and dysgraphia, with dysgraphia higher, even after controlling for Block 1 to 4 predictors. Differences in literacy achievement between students with dyslexia and oral and written language learning disabilities were largely explained by the Block 1 predictors. Applications to identifying and teaching students with these SLDs are discussed.
NASA Technical Reports Server (NTRS)
Rajpal, Sandeep; Rhee, Do Jun; Lin, Shu
1997-01-01
The first part of this paper presents a simple and systematic technique for constructing multidimensional M-ary phase shift keying (MPSK) trellis coded modulation (TCM) codes. The construction is based on a multilevel concatenation approach in which binary convolutional codes with good free branch distances are used as the outer codes and block MPSK modulation codes are used as the inner codes (or the signal spaces). Conditions on the phase invariance of these codes are derived and a multistage decoding scheme for them is proposed. The proposed technique can be used to construct good codes for both the additive white Gaussian noise (AWGN) and fading channels, as is shown in the second part of this paper.
Chen, Qianqian; Chen, Xiaoxiang; Zhang, Sichao; Lan, Ke; Lu, Jian; Zhang, Chiyu
2015-01-01
The development of simple, accurate, rapid and cost-effective technologies for mutation detection is crucial to the early diagnosis and prevention of numerous genetic diseases, pharmacogenetics, and drug resistance. Proofreading PCR (PR-PCR) was developed for mutation detection in 1998 but is rarely applied due to its low efficiency in allele discrimination. Here we developed a modified PR-PCR method using a ddNTP-blocked primer and a mixture of DNA polymerases with and without the 3'-5' proofreading function. The ddNTP-blocked primer exhibited the best blocking efficiency to avoid nonspecific primer extension, while the mixture of a tiny amount of high-fidelity DNA polymerase with a routine amount of Taq DNA polymerase provided the best discrimination and amplification effects. The modified PR-PCR method is quite capable of detecting various mutation types, including point mutations and insertions/deletions (indels), and allows discriminative amplification when the mismatch is located within the last eight nucleotides from the 3'-end of the ddNTP-blocked primer. The modified PR-PCR has a sensitivity of 1-5 × 10² copies and a selectivity of 5 × 10⁻⁵ mutant among 10⁷ copies of wild-type DNA. It showed a 100% accuracy rate in the detection of the P72R germ-line mutation in the TP53 gene among 60 clinical blood samples, and a high potential to detect rifampin-resistance mutations at low frequency in Mycobacterium tuberculosis using an adaptor and a fusion-blocked primer. These results suggest that the modified PR-PCR technique is effective in the detection of various mutations or polymorphisms as a simple, sensitive and promising approach. PMID:25915410
Feedback Effects in Computer-Based Skill Learning
1989-09-12
... rather than tangible feedback (Barringer & Gholson, 1979) and when they receive punishment (either alone or with reward) rather than reward alone ... "graphed" response latencies across the four conditions (r = .58), indicating that subjects were sensitive to block-by-block trends in their response ...
NIRP Core Software Suite v. 1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitener, Dustin Heath; Folz, Wesley; Vo, Duong
The NIRP Core Software Suite is a core set of code that supports multiple applications. It includes miscellaneous base code for data objects, mathematical equations, and user interface components, and the framework includes several fully-developed software applications that exist as stand-alone tools to complement other applications. The stand-alone tools are described below. Analyst Manager: an application to manage contact information for people (analysts) that use the software products; this information is often included in generated reports and may be used to identify the owners of calculations. Radionuclide Viewer: an application for viewing the DCFPAK radiological data; complements the Mixture Manager tool. Mixture Manager: an application to create and manage radionuclide mixtures that are commonly used in other applications. High Explosive Manager: an application to manage explosives and their properties. Chart Viewer: an application to view charts of data (e.g. meteorology charts). Other applications may use this framework to create charts specific to their data needs.
Phase Behavior of a Single Structured Ionomer Chain in Solution
Aryal, Dipak; Etampawala, Thusitha; Perahia, Dvora; ...
2014-08-14
Structured polymers offer a means to tailor transport pathways within mechanically stable manifolds. Here we examine the building block of such a membrane, namely a single large pentablock copolymer consisting of a center block of randomly sulfonated polystyrene, designed for transport, tethered to poly(ethylene-r-propylene) and end-capped by poly(t-butylstyrene) for mechanical stability, using molecular dynamics simulations. The polymer structure is extracted in a cyclohexane-heptane mixture, a technologically viable solvent, and in water, a poor solvent for all segments and a ubiquitous substance. In all solvents the pentablock collapsed into nearly spherical aggregates in which the ionic block is segregated. In hydrophobic solvents, the ionic block resides in the center, surrounded by a swollen intermix of flexible and end blocks. In water all blocks are collapsed, with the sulfonated block residing on the surface. Our results demonstrate that solvents drive different local nano-segregation, providing a gateway to assembling membranes with controlled topology.
Wartime Tracking of Class I Surface Shipments from Production or Procurement to Destination
1992-04-01
NASA Technical Reports Server (NTRS)
Smith, S. D.
1984-01-01
A user's manual for the RAMP2 computer code is provided. The RAMP2 code can be used to model the dominant phenomena which affect the prediction of liquid and solid rocket nozzle and orbital plume flow fields. The general structure and operation of RAMP2 are discussed, and a user input/output guide for the modified TRAN72 computer code and the RAMP2F code is given. The application and use of the BLIMPJ module are considered. Sample problems involving the space shuttle main engine and motor are included.
NASA Technical Reports Server (NTRS)
Smith, S. D.
1984-01-01
All of the elements used in the Reacting and Multi-Phase (RAMP2) computer code are described in detail. The code can be used to model the dominant phenomena which affect the prediction of liquid and solid rocket nozzle and orbital plume flow fields.
Binary weight distributions of some Reed-Solomon codes
NASA Technical Reports Server (NTRS)
Pollara, F.; Arnold, S.
1992-01-01
The binary weight distributions of the (7,5) and (15,9) Reed-Solomon (RS) codes and their duals are computed using the MacWilliams identities. Several mappings of symbols to bits are considered, and those offering the largest binary minimum distance are found. These results are then used to compute bounds on the soft-decoding performance of these codes in the presence of additive Gaussian noise. These bounds are useful for finding large binary block codes with good performance and for verifying the performance obtained by specific soft-decoding algorithms presently under development.
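The MacWilliams computation itself is compact: for a binary [n, k] code, the dual's weight distribution follows from the code's own via Krawtchouk polynomials. A sketch, checked here on the (7,4) Hamming code rather than on the binary images of the RS codes above:

```python
from math import comb

def macwilliams_dual(A):
    """Weight distribution of the dual of a binary [n, k] code from the
    code's own distribution A = [A_0, ..., A_n], via the MacWilliams
    identities:  B_j = (1/|C|) * sum_i A_i * K_j(i),  where the binary
    Krawtchouk polynomial is K_j(i) = sum_s (-1)^s C(i,s) C(n-i, j-s)."""
    n = len(A) - 1
    size = sum(A)                                   # |C| = 2^k
    K = lambda j, i: sum((-1) ** s * comb(i, s) * comb(n - i, j - s)
                         for s in range(j + 1))
    return [sum(A[i] * K(j, i) for i in range(n + 1)) // size
            for j in range(n + 1)]

# Weight distribution of the (7,4) Hamming code ...
A = [1, 0, 0, 7, 7, 0, 0, 1]
# ... yields that of its dual, the (7,3) simplex code (all nonzero weight 4).
print(macwilliams_dual(A))   # [1, 0, 0, 0, 7, 0, 0, 0]
```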
Yu, Lin; Zhang, Zheng; Zhang, Huan; Ding, Jiandong
2009-06-08
A facile method to obtain a thermoreversible physical hydrogel was found by simply mixing an aqueous sol of a block copolymer with a precipitate of a similar copolymer with a different block ratio. Two ABA-type triblock copolymers, poly(D,L-lactic acid-co-glycolic acid)-b-poly(ethylene glycol)-b-poly(D,L-lactic acid-co-glycolic acid) (PLGA-PEG-PLGA), were synthesized. One sample in water was a sol over a broad temperature region, while the other in water was simply a precipitate. The mixture of these two samples at a certain mix ratio underwent, however, a sol-to-gel-to-precipitate transition upon an increase of temperature. A dramatic tuning of the sol-gel transition temperature was conveniently achieved by merely varying the mix ratio, even for copolymers of similar molecular weight. Our study indicates that the balance of hydrophobicity and hydrophilicity within this sort of amphiphilic copolymer is critical to the inverse thermal gelation in water resulting from aggregation of micelles. The ability of the thermogelling systems to encapsulate and sustain the release of lysozyme, a model protein, was confirmed. This "mix" method provides a very convenient approach to designing injectable thermogelling biomaterials with a broad adjustable window, and the novel copolymer mixture platform is potentially useful in drug delivery and other biomedical applications.
Gettel, Douglas L; Sanborn, Jeremy; Patel, Mira A; de Hoog, Hans-Peter; Liedberg, Bo; Nallani, Madhavan; Parikh, Atul N
2014-07-23
Substrate-mediated fusion of small polymersomes, derived from mixtures of lipids and amphiphilic block copolymers, produces hybrid, supported planar bilayers at hydrophilic surfaces, monolayers at hydrophobic surfaces, and binary monolayer/bilayer patterns at amphiphilic surfaces, directly responding to local measures of (and variations in) surface free energy. Despite the large thickness mismatch in their hydrophobic cores, the hybrid membranes do not exhibit microscopic phase separation, reflecting irreversible adsorption and limited lateral reorganization of the polymer component. With increasing fluid-phase lipid fraction, these hybrid, supported membranes undergo a fluidity transition, producing a fully percolating fluid lipid phase beyond a critical area fraction, which matches the percolation threshold for the immobile point obstacles. This then suggests that polymer-lipid hybrid membranes might be useful models for studying obstructed diffusion, such as occurs in lipid membranes containing proteins.
Accelerated Gaussian mixture model and its application on image segmentation
NASA Astrophysics Data System (ADS)
Zhao, Jianhui; Zhang, Yuanyuan; Ding, Yihua; Long, Chengjiang; Yuan, Zhiyong; Zhang, Dengyi
2013-03-01
The Gaussian mixture model (GMM) has been widely used for image segmentation in recent years due to its superior adaptability and simplicity of implementation. However, the traditional GMM has the disadvantage of high computational complexity. In this paper an accelerated GMM is designed, adopting the following approaches: establishing a lookup table for the Gaussian probability matrix to avoid repeated probability calculations over all pixels; employing a blocking detection method on each block of pixels to further decrease the complexity; and changing the structure of the lookup table from 3D to 1D, with a simpler data type, to reduce the space requirement. The accelerated GMM is applied to image segmentation with the help of the OTSU method to decide the threshold value automatically. Our algorithm has been tested on the segmentation of flames and faces from a set of real pictures, and the experimental results prove its efficiency in segmentation precision and computational cost.
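The lookup-table idea is the easy part to show: with 8-bit pixels there are only 256 possible intensities, so the per-component Gaussian probabilities can be evaluated once and then fetched by indexing. The sketch below covers only this step, not the blocking detection or OTSU thresholding; the means, sigmas, and weights are arbitrary placeholders.

```python
import numpy as np

def gaussian_lut(means, sigmas, weights):
    """Precompute, for every 8-bit intensity, the weighted Gaussian
    probability of each mixture component: 256*K evaluations once,
    instead of one evaluation per pixel."""
    g = np.arange(256.0)[:, None]                    # all possible gray levels
    mu, sd, w = (np.asarray(a, float)[None, :] for a in (means, sigmas, weights))
    return w * np.exp(-0.5 * ((g - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

lut = gaussian_lut(means=[50, 160], sigmas=[12, 25], weights=[0.4, 0.6])  # 256 x 2

img = np.random.default_rng(0).integers(0, 256, (480, 640), dtype=np.uint8)
probs = lut[img]                    # per-pixel component probabilities by lookup
labels = probs.argmax(axis=-1)      # hard assignment, e.g. flame vs. background
```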
Personnel-General: Army Substance Abuse Program Civilian Services
2001-10-15
... destroyed. Additional reproduction and distribution of completed records is prohibited. ... Section I, Identification: Block 1, date of report. ... Drug codes: A = amphetamines; B = barbiturates; C = cocaine; H = hallucinogens (LSD); M = methaqualone, sedative, hypnotic, or anxiolytic; O = opiates; P = phencyclidine (PCP); T = cannabis. Table 5-6, codes for Table F (T-DIAG-CODE): 30390 = alcohol dependence; 30400 = opioid dependence; 30410 = sedative, hypnotic, or anxiolytic ...
Inclusion Complexes of Diisopropyl Fluorophosphate with Cyclodextrins.
1987-09-01
For submission to the Journal of Catalysis.
Monte Carlo study of four dimensional binary hard hypersphere mixtures
NASA Astrophysics Data System (ADS)
Bishop, Marvin; Whitlock, Paula A.
2012-01-01
A multithreaded Monte Carlo code was used to study the properties of binary mixtures of hard hyperspheres in four dimensions. The ratios of the diameters of the hyperspheres examined were 0.4, 0.5, 0.6, and 0.8. Many total densities of the binary mixtures were investigated. The pair correlation functions and the equations of state were determined and compared with other simulation results and theoretical predictions. At lower diameter ratios the pair correlation functions of the mixture agree with the pair correlation function of a one component fluid at an appropriately scaled density. The theoretical results for the equation of state compare well to the Monte Carlo calculations for all but the highest densities studied.
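The acceptance rule for hard particles is all-or-nothing: a trial displacement is kept only if it creates no overlap. A single-threaded sketch of one sweep for an additive binary mixture in a periodic 4-D box follows (the published code is multithreaded, and a real run would start from a non-overlapping configuration):

```python
import numpy as np

def mc_sweep(pos, diam, box, step, rng):
    """One Metropolis sweep for a binary hard-hypersphere mixture in a
    periodic 4-D box. diam[i] is the diameter of particle i; a trial move
    is accepted only if it creates no pair overlap (additive diameters)."""
    for i in rng.permutation(len(pos)):
        trial = (pos[i] + rng.uniform(-step, step, 4)) % box
        d = pos - trial
        d -= box * np.rint(d / box)                  # minimum-image convention
        r2 = (d * d).sum(axis=1)
        r2[i] = np.inf                               # skip the self-distance
        contact = 0.5 * (diam + diam[i])             # additive hard-core distances
        if (r2 >= contact ** 2).all():               # hard-core acceptance rule
            pos[i] = trial

# Toy setup: 64 large (1.0) and 64 small (0.5) hyperspheres; note a real run
# would begin from a lattice or other overlap-free configuration.
rng = np.random.default_rng(0)
box = 6.0
pos = rng.uniform(0, box, (128, 4))
diam = np.array([1.0] * 64 + [0.5] * 64)
for _ in range(100):
    mc_sweep(pos, diam, box, step=0.1, rng=rng)
```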
NASA Technical Reports Server (NTRS)
Thompson, Richard A.; Lee, Kam-Pui; Gupta, Roop N.
1991-01-01
The computer codes developed here provide self-consistent thermodynamic and transport properties for equilibrium air for temperatures from 500 to 30,000 K over a pressure range of 10⁻⁴ to 10⁻² atm. These properties are computed through the use of temperature-dependent curve fits for discrete values of pressure, with interpolation employed for intermediate values of pressure. The curve fits are based on mixture values calculated from an 11-species air model. Individual species properties used in the mixture relations are obtained from a recent study by the present authors. A review and discussion of the sources and accuracy of the curve-fitted data used herein are given in NASA RP 1260.
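The tabulate-then-interpolate scheme reads, in outline, as below: temperature curve fits at discrete pressures, with linear interpolation in log pressure in between. The pressure grid and polynomial coefficients here are placeholders, not the report's fits.

```python
import numpy as np

def property_at(T, p, pressures, fits):
    """Evaluate a property from temperature curve fits tabulated at discrete
    pressures, interpolating linearly in log10(p) between the two bracketing
    levels -- the interpolation scheme described above."""
    grid = np.log10(pressures)
    j = int(np.clip(np.searchsorted(grid, np.log10(p)), 1, len(grid) - 1))
    w = (np.log10(p) - grid[j - 1]) / (grid[j] - grid[j - 1])
    return (1 - w) * fits[j - 1](T) + w * fits[j](T)

pressures = [1e-4, 1e-3, 1e-2]
fits = [np.poly1d([2.0e-5, 0.9]),      # property vs. T at 1e-4 atm (made up)
        np.poly1d([1.5e-5, 1.0]),      # at 1e-3 atm
        np.poly1d([1.0e-5, 1.1])]      # at 1e-2 atm
print(property_at(T=12000.0, p=3e-4, pressures=pressures, fits=fits))
```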
Cork-resin ablative insulation for complex surfaces and method for applying the same
NASA Technical Reports Server (NTRS)
Walker, H. M.; Sharpe, M. H.; Simpson, W. G. (Inventor)
1980-01-01
A method of applying cork-resin ablative insulation material to complex curved surfaces is disclosed. The material is prepared by mixing finely divided cork with a B-stage curable thermosetting resin, forming the resulting mixture into a block, B-stage curing the resin-containing block, and slicing the block into sheets. The B-stage cured sheet is shaped to conform to the surface being insulated, and further curing is then performed. Curing of the resins only to B-stage before shaping enables application of sheet material to complex curved surfaces and avoids limitations and disadvantages presented in handling of fully cured sheet material.
Numerical Analysis of Convection/Transpiration Cooling
NASA Technical Reports Server (NTRS)
Glass, David E.; Dilley, Arthur D.; Kelly, H. Neale
1999-01-01
An innovative concept utilizing the natural porosity of refractory-composite materials and hydrogen coolant to provide CONvective and TRANspiration (CONTRAN) cooling and oxidation protection has been numerically studied for surfaces exposed to a high-heat-flux, high-temperature environment such as hypersonic vehicle engine combustor walls. A boundary-layer code and a porous-media finite difference code were utilized to analyze the effect of convection and transpiration cooling on surface heat flux and temperature. The boundary-layer code determined that the transpiration flow is able to block the surface heat flux only if it is above a minimum level, due to heat addition from combustion of the hydrogen transpirant. The porous-media analysis indicated that cooling of the surface is attained with coolant flow rates in the same range as those required for blocking, indicating that a coupled analysis would be beneficial.
Recent update of the RPLUS2D/3D codes
NASA Technical Reports Server (NTRS)
Tsai, Y.-L. Peter
1991-01-01
The development of the RPLUS2D/3D codes is summarized. These codes utilize LU algorithms to solve chemical non-equilibrium flows in a body-fitted coordinate system. The motivation behind the development of these codes is the need to numerically predict chemical non-equilibrium flows for the National AeroSpace Plane Program. Recent improvements include vectorization method, blocking algorithms for geometric flexibility, out-of-core storage for large-size problems, and an LU-SW/UP combination for CPU-time efficiency and solution quality.
Problem-Solving Under Time Constraints: Alternatives for the Commander’s Estimate
1990-03-26
School of Advanced Military Studies, USAC&GSC. Subject terms: decision making. ...
SKIRT: The design of a suite of input models for Monte Carlo radiative transfer simulations
NASA Astrophysics Data System (ADS)
Baes, M.; Camps, P.
2015-09-01
The Monte Carlo method is the most popular technique to perform radiative transfer simulations in a general 3D geometry. The algorithms behind and acceleration techniques for Monte Carlo radiative transfer are discussed extensively in the literature, and many different Monte Carlo codes are publicly available. On the contrary, the design of a suite of components that can be used for the distribution of sources and sinks in radiative transfer codes has received very little attention. The availability of such models, with different degrees of complexity, has many benefits. For example, they can serve as toy models to test new physical ingredients, or as parameterised models for inverse radiative transfer fitting. For 3D Monte Carlo codes, this requires algorithms to efficiently generate random positions from 3D density distributions. We describe the design of a flexible suite of components for the Monte Carlo radiative transfer code SKIRT. The design is based on a combination of basic building blocks (which can be either analytical toy models or numerical models defined on grids or a set of particles) and the extensive use of decorators that combine and alter these building blocks to more complex structures. For a number of decorators, e.g. those that add spiral structure or clumpiness, we provide a detailed description of the algorithms that can be used to generate random positions. Advantages of this decorator-based design include code transparency, the avoidance of code duplication, and an increase in code maintainability. Moreover, since decorators can be chained without problems, very complex models can easily be constructed out of simple building blocks. Finally, based on a number of test simulations, we demonstrate that our design using customised random position generators is superior to a simpler design based on a generic black-box random position generator.
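A minimal Python rendering of the decorator idea conveys the design, though the class names and interfaces below are illustrative rather than SKIRT's actual C++ API: a clumpy decorator wraps any geometry, reuses the wrapped model's own random-position generator to seed clump centres, and can itself be wrapped again.

```python
import numpy as np

class Geometry:
    """Minimal building-block interface: a model only needs to supply
    random positions drawn from its density distribution."""
    def random_position(self, rng):
        raise NotImplementedError

class PlummerGeometry(Geometry):
    def __init__(self, scale):
        self.a = scale
    def random_position(self, rng):
        u = max(rng.random(), 1e-12)            # invert the cumulative mass profile
        r = self.a / np.sqrt(u ** (-2.0 / 3.0) - 1.0)
        v = rng.normal(size=3)
        return r * v / np.linalg.norm(v)        # isotropic direction

class ClumpyDecorator(Geometry):
    """Decorator: relocates a fraction of any wrapped model's mass into
    spherical clumps whose centres are seeded by the wrapped model itself,
    so decorators can be chained into arbitrarily complex structures."""
    def __init__(self, base, frac, n_clumps, clump_radius, rng):
        self.base, self.frac, self.cr = base, frac, clump_radius
        self.centres = [base.random_position(rng) for _ in range(n_clumps)]
    def random_position(self, rng):
        if rng.random() >= self.frac:           # smooth part: delegate to base
            return self.base.random_position(rng)
        c = self.centres[rng.integers(len(self.centres))]
        v = rng.normal(size=3)                  # uniform point inside a clump
        return c + self.cr * rng.random() ** (1 / 3) * v / np.linalg.norm(v)

rng = np.random.default_rng(42)
model = ClumpyDecorator(PlummerGeometry(scale=1.0), frac=0.3,
                        n_clumps=50, clump_radius=0.2, rng=rng)
points = [model.random_position(rng) for _ in range(10000)]
```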
Xenomicrobiology: a roadmap for genetic code engineering.
Acevedo-Rocha, Carlos G; Budisa, Nediljko
2016-09-01
Biology is an analytical and informational science that is becoming increasingly dependent on chemical synthesis. One example is the high-throughput and low-cost synthesis of DNA, which is a foundation for the research field of synthetic biology (SB). The aim of SB is to provide biotechnological solutions to health, energy and environmental issues as well as unsustainable manufacturing processes within the frame of naturally existing chemical building blocks. Xenobiology (XB) goes a step further by implementing non-natural building blocks in living cells. In this context, genetic code engineering enables the re-design of genes/genomes and proteins/proteomes with non-canonical nucleic acids (XNAs) and non-canonical amino acids (ncAAs), respectively. Besides studying information flow and evolutionary innovation in living systems, XB allows the development of new-to-nature therapeutic proteins/peptides, new biocatalysts for potential applications in synthetic organic chemistry, and biocontainment strategies for enhanced biosafety. In this perspective, we provide a brief history and evolution of the genetic code in the context of XB. We then discuss the latest efforts and challenges ahead for engineering the genetic code, with focus on substitutions and additions of ncAAs as well as standard amino acid reductions. Finally, we present a roadmap for the directed evolution of artificial microbes for emancipating rare sense codons that could be used to introduce novel building blocks. The development of such xenomicroorganisms endowed with a 'genetic firewall' will also allow the study and understanding of the relation between code evolution and horizontal gene transfer. © 2016 The Authors. Microbial Biotechnology published by John Wiley & Sons Ltd and Society for Applied Microbiology.
Dai, Wenrui; Xiong, Hongkai; Jiang, Xiaoqian; Chen, Chang Wen
2014-01-01
This paper proposes a novel model for intra coding in High Efficiency Video Coding (HEVC), which simultaneously predicts blocks of pixels with optimal rate-distortion. It utilizes the spatial statistical correlation for optimal prediction based on 2-D contexts, in addition to formulating the data-driven structural interdependences to make the prediction error coherent with the probability distribution, which is desirable for successful transform and coding. The structured set prediction model incorporates a max-margin Markov network (M3N) to regulate and optimize multiple block predictions. The model parameters are learned by discriminating the actual pixel value from other possible estimates to maximize the margin (i.e., decision boundary bandwidth). Compared to existing methods that focus on minimizing prediction error, the M3N-based model adaptively maintains the coherence of a set of predictions. Specifically, the proposed model concurrently optimizes a set of predictions by associating the loss for individual blocks with the joint distribution of succeeding discrete cosine transform coefficients. When the sample size grows, the prediction error is asymptotically upper bounded by the training error under a decomposable loss function. As an internal step, we optimize the underlying Markov network structure to find states that achieve the maximal energy using expectation propagation. For validation, we integrate the proposed model into HEVC for optimal mode selection in rate-distortion optimization. The proposed prediction model obtains up to 2.85% bit rate reduction and achieves better visual quality in comparison to HEVC intra coding. PMID:25505829
Structural ceramics containing electric arc furnace dust.
Stathopoulos, V N; Papandreou, A; Kanellopoulou, D; Stournaras, C J
2013-11-15
In the present work the stabilization of electric arc furnace dust (EAFD) waste in structural clay ceramics was investigated. EAFD was collected over eleven production days. The collected waste was characterized for its chemical composition by flame atomic absorption spectroscopy. The crystal structure was studied by powder XRD, while the fineness of the material was determined by a laser particle size analyzer. The environmental characterization was carried out by testing the dust according to the EN 12457 standard; Zn, Pb and Cd leached from the sample in significant amounts. The objective of this study is to investigate the stabilization properties of EAFD/clay ceramic structures and the potential of EAFD utilization in structural ceramics production (blocks). Mixtures of clay with 2.5% and 5% EAFD content were studied by TG/DTA, XRD, SEM, EN 12457 standard leaching, and mechanical properties as a function of firing temperature at 850, 900 and 950 °C. All laboratory facilities were maintained at 20 ± 1 °C. Consequently, a pilot-scale experiment was conducted with an addition of 2.5% and 5% EAFD to the extrusion mixture for the production of blocks. During block manufacturing, the firing step reached 950 °C in a tunnel kiln, with laboratory heating/cooling gradients similar to those of the pilot-scale firing. The as-produced blocks were then subjected to quality-control tests, i.e. dimensions according to EN 772-17, water absorbance according to EN 772-6, and compressive strength according to EN 772-1, in laboratory facilities certified under EN 17025. The data obtained showed that the incorporation of EAFD resulted in an increase of mechanical strength. Moreover, leaching tests performed according to the European standards on the EAFD-block samples showed that the quantities of heavy metals leached from crushed blocks were within the regulatory limits. Thus the EAFD-blocks can be regarded as material of no environmental concern. Copyright © 2013 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Kumar, A.; Graeves, R. A.
1980-01-01
A user's guide is provided for the computer code COLTS (Coupled Laminar and Turbulent Solutions), which calculates laminar and turbulent hypersonic flows with radiation and coupled ablation injection past a Jovian entry probe. Time-dependent viscous-shock-layer equations are used to describe the flow field. These equations are solved by an explicit, two-step, time-asymptotic finite-difference method. Eddy viscosity in the turbulent flow is approximated by a two-layer model. In all, 19 chemical species are used to describe the injection of carbon-phenolic ablator into the hydrogen-helium gas mixture. The equilibrium composition of the mixture is determined by a free-energy minimization technique. A detailed frequency dependence of the absorption coefficient for the various species is considered to obtain the radiative flux. The code is written for a CDC-CYBER-203 computer and is capable of providing solutions for ablated probe shapes as well.
Microphase Separation in Oil-Water Mixtures Containing Hydrophilic and Hydrophobic Ions
NASA Astrophysics Data System (ADS)
Tasios, Nikos; Samin, Sela; van Roij, René; Dijkstra, Marjolein
2017-11-01
We develop a lattice-based Monte Carlo simulation method for charged mixtures capable of treating dielectric heterogeneities. Using this method, we study oil-water mixtures containing an antagonistic salt, with hydrophilic cations and hydrophobic anions. Our simulations reveal several phases with a spatially modulated solvent composition, in which the ions partition between water-rich and water-poor regions according to their affinity. In addition to the recently observed lamellar phase, we find tubular and droplet phases, reminiscent of those found in block copolymers and surfactant systems. Interestingly, these structures stem from ion-mediated interactions, which allows for tuning of the phase behavior via the concentrations, the ionic properties, and the temperature.
Error-correction coding for digital communications
NASA Astrophysics Data System (ADS)
Clark, G. C., Jr.; Cain, J. B.
This book is written for the design engineer who must build the coding and decoding equipment and for the communication system engineer who must incorporate this equipment into a system. It is also suitable as a senior-level or first-year graduate text for an introductory one-semester course in coding theory. Fundamental concepts of coding are discussed along with group codes, taking into account basic principles, practical constraints, performance computations, coding bounds, generalized parity check codes, polynomial codes, and important classes of group codes. Other topics explored are related to simple nonalgebraic decoding techniques for group codes, soft decision decoding of block codes, algebraic techniques for multiple error correction, the convolutional code structure and Viterbi decoding, syndrome decoding techniques, and sequential decoding techniques. System applications are also considered, giving attention to concatenated codes, coding for the white Gaussian noise channel, interleaver structures for coded systems, and coding for burst noise channels.
Guillemet, Baptiste; Faatz, Michael; Gröhn, Franziska; Wegner, Gerhard; Gnanou, Yves
2006-02-14
Particles of amorphous calcium carbonate (ACC), formed in situ from calcium chloride by the slow release of carbon dioxide by alkaline hydrolysis of dimethyl carbonate in water, are stabilized against coalescence in the presence of very small amounts of double hydrophilic block copolymers (DHBCs) composed of poly(ethylene oxide) (PEO) and poly(acrylic acid) (PAA) blocks. Under optimized conditions, spherical particles of ACC with diameters less than 100 nm and narrow size distribution are obtained at a concentration of only 3 ppm of PEO-b-PAA as additive. Equivalent triblock or star DHBCs are compared to diblock copolymers. The results are interpreted assuming an interaction of the PAA blocks with the surface of the liquid droplets of the concentrated CaCO3 phase, formed by phase separation from the initially homogeneous reaction mixture. The adsorption layer of the block copolymer protects the liquid precursor of ACC from coalescence and/or coagulation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vondy, D.R.; Fowler, T.B.; Cunningham, G.W.
1975-10-01
The computer code block VENTURE, designed to solve multigroup neutronics problems by applying the finite-difference diffusion-theory approximation to neutron transport (or, alternatively, simple P1) in up to three-dimensional geometry, is described. A variety of problem types may be solved: the usual eigenvalue problem; a direct criticality search on the buckling, on a reciprocal-velocity absorber (prompt mode), or on nuclide concentrations; or an indirect criticality search on nuclide concentrations or on dimensions. First-order perturbation analysis capability is available at the macroscopic cross-section level. (auth)
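As a toy illustration of the eigenvalue problem VENTURE solves, here is a one-group, 1-D slab version using finite differences and power iteration on the fission source; the cross sections are made up, and VENTURE's multigroup, 3-D capabilities and search options are far beyond this sketch.

```python
import numpy as np

# Toy one-group, 1-D slab analog of the finite-difference diffusion
# eigenvalue problem:
#   -D phi'' + Sigma_a phi = (1/k) nuSigma_f phi,  phi = 0 at both faces.
N, L = 100, 100.0                         # mesh cells, slab width (cm)
h = L / N
D, sig_a, nu_sig_f = 1.2, 0.03, 0.035     # made-up one-group constants

main = 2.0 * D / h**2 + sig_a             # tridiagonal loss operator A
A = (np.diag([main] * N)
     + np.diag([-D / h**2] * (N - 1), 1)
     + np.diag([-D / h**2] * (N - 1), -1))

phi, k = np.ones(N), 1.0
for _ in range(200):                      # power (fission-source) iteration
    fis = nu_sig_f * phi
    phi = np.linalg.solve(A, fis / k)
    k *= (nu_sig_f * phi).sum() / fis.sum()
    phi /= phi.max()                      # renormalize the flux shape
print(f"k_eff ~ {k:.4f}")
```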
Initial development of 5D COGENT
NASA Astrophysics Data System (ADS)
Cohen, R. H.; Lee, W.; Dorf, M.; Dorr, M.
2015-11-01
COGENT is a continuum gyrokinetic edge code being developed by the Edge Simulation Laboratory (ESL) collaboration. Work to date has been primarily focussed on a 4D (axisymmetric) version that models transport properties of edge plasmas. We have begun development of an initial 5D version to study edge turbulence, with initial focus on kinetic effects on blob dynamics and drift-wave instability in a shearless magnetic field. We are employing compiler directives and preprocessor macros to create a single source code that can be compiled in 4D or 5D, which helps to ensure consistency of the physics representation between the two versions. A key aspect of COGENT is its mapped multi-block grid capability, used to handle the complexity of divertor geometry. It is planned to eventually exploit this capability to handle magnetic shear through a series of successively skewed unsheared grid blocks. The initial version has an unsheared grid and will be used to explore the degree to which a radial domain must be block decomposed. We report on the status of code development and initial tests. Work performed for USDOE, at LLNL under contract DE-AC52-07NA27344.
Ordered porous mesostructured materials from nanoparticle-block copolymer self-assembly
Warren, Scott; Wiesner, Ulrich; DiSalvo, Jr., Francis J
2013-10-29
The invention provides mesostructured materials and methods of preparing mesostructured materials including metal-rich mesostructured nanoparticle-block copolymer hybrids, porous metal-nonmetal nanocomposite mesostructures, and ordered metal mesostructures with uniform pores. The nanoparticles can be metal, metal alloy, metal mixture, intermetallic, metal-carbon, metal-ceramic, semiconductor-carbon, semiconductor-ceramic, insulator-carbon or insulator-ceramic nanoparticles, or combinations thereof. A block copolymer/ligand-stabilized nanoparticle solution is cast, resulting in the formation of a metal-rich (or semiconductor-rich or insulator-rich) mesostructured nanoparticle-block copolymer hybrid. The hybrid is heated to an elevated temperature, resulting in the formation of an ordered porous nanocomposite mesostructure. A nonmetal component (e.g., carbon or ceramic) is then removed to produce an ordered mesostructure with ordered and large uniform pores.
The FORTRAN static source code analyzer program (SAP) system description
NASA Technical Reports Server (NTRS)
Decker, W.; Taylor, W.; Merwarth, P.; Oneill, M.; Goorevich, C.; Waligora, S.
1982-01-01
A source code analyzer program (SAP) designed to assist personnel in conducting studies of FORTRAN programs is described. The SAP scans FORTRAN source code and produces reports that present statistics and measures of statements and structures that make up a module. The processing performed by SAP, and the routines, COMMON blocks, and files used by SAP, are described. The system generation procedure for SAP is also presented.
Robot Task Commander with Extensible Programming Environment
NASA Technical Reports Server (NTRS)
Hart, Stephen W (Inventor); Wightman, Brian J (Inventor); Dinh, Duy Paul (Inventor); Yamokoski, John D. (Inventor); Gooding, Dustin R (Inventor)
2014-01-01
A system for developing distributed robot application-level software includes a robot having an associated control module which controls motion of the robot in response to a commanded task, and a robot task commander (RTC) in networked communication with the control module over a network transport layer (NTL). The RTC includes a script engine(s) and a GUI, with a processor and a centralized library of code blocks constructed from an interpretive computer programming code and having input and output connections. The GUI provides access to a Visual Programming Language (VPL) environment and a text editor. In executing a method, the VPL is opened, a task for the robot is built from the code library blocks, and data is assigned to input and output connections identifying input and output data for each block. A task sequence(s) is sent to the control module(s) over the NTL to command execution of the task.
Introduction to Forward-Error-Correcting Coding
NASA Technical Reports Server (NTRS)
Freeman, Jon C.
1996-01-01
This reference publication introduces forward-error-correcting (FEC) coding and stresses definitions and basic calculations for use by engineers. The seven chapters include 41 example problems, worked in detail to illustrate points. A glossary of terms is included, as well as an appendix on the Q function. Block and convolutional codes are covered.
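As a flavor of the block codes such a tutorial covers, the following sketch encodes and decodes the classic Hamming (7,4) code, which corrects any single bit error per block (a generic example, not taken from the publication):

    import numpy as np

    G = np.array([[1,0,0,0,1,1,0],      # systematic generator matrix [I | P]
                  [0,1,0,0,1,0,1],
                  [0,0,1,0,0,1,1],
                  [0,0,0,1,1,1,1]])
    H = np.array([[1,1,0,1,1,0,0],      # parity-check matrix [P^T | I]
                  [1,0,1,1,0,1,0],
                  [0,1,1,1,0,0,1]])

    def encode(msg):                    # 4 message bits -> 7-bit codeword
        return (np.asarray(msg) @ G) % 2

    def decode(rx):                     # corrects any single bit error
        rx = np.asarray(rx).copy()
        syn = (H @ rx) % 2
        if syn.any():                   # syndrome equals the flipped column of H
            err = int(np.argmax((H.T == syn).all(axis=1)))
            rx[err] ^= 1
        return rx[:4]                   # systematic code: message is first 4 bits

    cw = encode([1, 0, 1, 1])
    cw[2] ^= 1                          # inject a single bit error
    assert decode(cw).tolist() == [1, 0, 1, 1]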
14 CFR Sec. 1-4 - System of accounts coding.
Code of Federal Regulations, 2010 CFR
2010-01-01
... General Accounting Provisions Sec. 1-4 System of accounts coding. (a) A four digit control number is assigned for each balance sheet and profit and loss account. Each balance sheet account is numbered sequentially, within blocks, designating basic balance sheet classifications. The first two digits of the four...
Two-Dimensional Liquid Chromatography Analysis of Polystyrene/Polybutadiene Block Copolymers.
Lee, Sanghoon; Choi, Heejae; Chang, Taihyun; Staal, Bastiaan
2018-05-15
A detailed characterization of a commercial polystyrene/polybutadiene block copolymer material (Styrolux) was carried out using two-dimensional liquid chromatography (2D-LC). Styrolux is prepared by a statistical linking reaction of two different polystyrene-block-polybutadienyl anion precursors with a multivalent linking agent. It is therefore a mixture of a number of branched block copolymers differing in molecular weight, composition, and chain architecture. While no individual LC analysis, whether size exclusion chromatography, interaction chromatography, or liquid chromatography at the critical condition, is good enough to resolve all the polymer species, 2D-LC separations coupling two chromatography methods were able to resolve all polymer species present in the sample: at least 13 block copolymer species plus a blended homopolystyrene. Four different 2D-LC analyses, each combining a different pair of LC methods, provide characteristic separation results, and their separation characteristics are compared to elucidate the elution behavior of the block copolymer species.
Combustor Computations for CO2-Neutral Aviation
NASA Technical Reports Server (NTRS)
Hendricks, Robert C.; Brankovic, Andreja; Ryder, Robert C.; Huber, Marcia
2011-01-01
Knowing the pure-component or mixture C_p^0 as computed by a flexible code such as NIST-STRAPP or McBride-Gordon, one can, within reasonable accuracy, determine the thermophysical properties necessary to predict the combustion characteristics when there are no tabulated or computed data for those fluid mixtures, or only limited results at lower temperatures. (Note: C_p^0 is the molar heat capacity at constant pressure.) The method can be used in the evaluation of synthetic and biological fuels and blends, using the NIST code to compute the C_p^0 of the mixture. In this work, the values of the heat capacity were set at zero pressure, which provided the basis for integration to determine the required combustor properties from the injector to the combustor exit plane. The McBride-Gordon code was used to determine the heat capacity at zero pressure over a wide range of temperatures (room temperature to 6,000 K). The selected fluids were Jet-A, 224TMP (octane), and C12. It was found that the heat capacity loci were form-similar. It was then determined that the results (near 400 to 3,000 K) could be represented to within acceptable engineering accuracy by the simplified equation C_p^0 = A/T + B, where A and B are fluid-dependent constants and T is temperature (K).
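A minimal sketch of the simplified fit described above: given heat-capacity samples from a code such as McBride-Gordon, recover the fluid-dependent constants A and B in C_p^0 = A/T + B by least squares. The sample values below are invented for illustration, not taken from the report:

    import numpy as np

    # Invented (T, Cp0) samples standing in for McBride-Gordon output.
    T   = np.array([400.0, 800.0, 1200.0, 1800.0, 2400.0, 3000.0])   # K
    cp0 = np.array([160.0, 230.0, 262.0, 285.0, 296.0, 303.0])       # J/(mol K)

    X = np.column_stack([1.0 / T, np.ones_like(T)])   # model: Cp0 = A/T + B
    (A, B), *_ = np.linalg.lstsq(X, cp0, rcond=None)
    print(f"A = {A:.0f} J K/mol, B = {B:.1f} J/(mol K)")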
NASA Technical Reports Server (NTRS)
Hall, E. J.; Topp, D. A.; Delaney, R. A.
1996-01-01
The overall objective of this study was to develop a 3-D numerical analysis for compressor casing treatment flowfields. The current version of the computer code resulting from this study is referred to as ADPAC (Advanced Ducted Propfan Analysis Codes-Version 7). This report is intended to serve as a computer program user's manual for the ADPAC code developed under Tasks 6 and 7 of the NASA Contract. The ADPAC program is based on a flexible multiple-block grid discretization scheme permitting coupled 2-D/3-D mesh block solutions with application to a wide variety of geometries. Aerodynamic calculations are based on a four-stage Runge-Kutta time-marching finite volume solution technique with added numerical dissipation. Steady flow predictions are accelerated by a multigrid procedure. An iterative implicit algorithm is available for rapid time-dependent flow calculations, and an advanced two-equation turbulence model is incorporated to predict complex turbulent flows. The consolidated code generated during this study is capable of executing in either a serial or parallel computing mode from a single source code. Numerous examples are given in the form of test cases to demonstrate the utility of this approach for predicting the aerodynamics of modern turbomachinery configurations.
Optimal Codes for the Burst Erasure Channel
NASA Technical Reports Server (NTRS)
Hamkins, Jon
2010-01-01
Deep space communications over noisy channels lead to certain packets that are not decodable. These packets leave gaps, or bursts of erasures, in the data stream. Burst erasure correcting codes overcome this problem. These are forward erasure correcting codes that allow one to recover the missing gaps of data. Much of the recent work on this topic concentrated on Low-Density Parity-Check (LDPC) codes. These are more complicated to encode and decode than Single Parity Check (SPC) codes or Reed-Solomon (RS) codes, and so far have not been able to achieve the theoretical limit for burst erasure protection. A block interleaved maximum distance separable (MDS) code (e.g., an SPC or RS code) offers near-optimal burst erasure protection, in the sense that no other scheme of equal total transmission length and code rate could improve the guaranteed correctable burst erasure length by more than one symbol. The optimality does not depend on the length of the code, i.e., a short MDS code block interleaved to a given length would perform as well as a longer MDS code interleaved to the same overall length. As a result, this approach offers lower decoding complexity with better burst erasure protection compared to other recent designs for the burst erasure channel (e.g., LDPC codes). A limitation of the design is its lack of robustness to channels that have impairments other than burst erasures (e.g., additive white Gaussian noise), making its application best suited for correcting data erasures in layers above the physical layer. The efficiency of a burst erasure code is the length of its burst erasure correction capability divided by the theoretical upper limit on this length. The inefficiency is one minus the efficiency. The illustration compares the inefficiency of interleaved RS codes to Quasi-Cyclic (QC) LDPC codes, Euclidean Geometry (EG) LDPC codes, extended Irregular Repeat Accumulate (eIRA) codes, array codes, and random LDPC codes previously proposed for burst erasure protection. As can be seen, the simple interleaved RS codes have substantially lower inefficiency over a wide range of transmission lengths.
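The block-interleaving idea can be sketched with the simplest MDS code, a single parity check, which corrects one erasure per codeword: interleaving to depth m turns a burst of m erasures into at most one erasure per codeword. This is a generic illustration, not the article's exact construction:

    import numpy as np

    n, depth = 8, 16                         # (8,7) SPC codewords, interleaver depth
    rng = np.random.default_rng(1)
    data = rng.integers(0, 2, (depth, n - 1))
    cw = np.hstack([data, data.sum(axis=1, keepdims=True) % 2])  # even parity

    tx = cw.T.flatten()                      # block interleaver: send column-wise
    rx = tx.astype(float)
    rx[40:40 + depth] = np.nan               # erasure burst of length `depth`

    deint = rx.reshape(n, depth).T           # de-interleave: <= 1 erasure per row
    for row in deint:
        gap = np.isnan(row)
        if gap.any():                        # erased bit = parity of the known bits
            row[gap] = np.nansum(row) % 2
    assert (deint.astype(int) == cw).all()   # the whole burst is recovered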
Efficient Polar Coding of Quantum Information
NASA Astrophysics Data System (ADS)
Renes, Joseph M.; Dupuis, Frédéric; Renner, Renato
2012-08-01
Polar coding, introduced in 2008 by Arıkan, is the first (very) efficiently encodable and decodable coding scheme whose information transmission rate provably achieves the Shannon bound for classical discrete memoryless channels in the asymptotic limit of large block sizes. Here, we study the use of polar codes for the transmission of quantum information. Focusing on the case of qubit Pauli channels and qubit erasure channels, we use classical polar codes to construct a coding scheme that asymptotically achieves a net transmission rate equal to the coherent information using efficient encoding and decoding operations and code construction. Our codes generally require preshared entanglement between sender and receiver, but for channels with a sufficiently low noise level we demonstrate that the rate of preshared entanglement required is zero.
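The heart of classical polar coding is the recursive butterfly transform x = u F^(kron n) with F = [[1,0],[1,1]]; a compact sketch follows (the bit-reversal permutation is omitted, and the "reliable" positions below are assumed rather than computed from channel polarization):

    import numpy as np

    def polar_transform(u):
        """x = u * F^(kron n) over GF(2); len(u) must be a power of two."""
        m = len(u)
        if m == 1:
            return u
        a, b = u[:m // 2], u[m // 2:]
        # Kronecker recursion: x = ((a xor b) G', b G')
        return np.concatenate([polar_transform((a + b) % 2), polar_transform(b)])

    N = 8
    u = np.zeros(N, dtype=int)
    u[[3, 5, 6, 7]] = [1, 0, 1, 1]      # data in (assumed) reliable positions
    print(polar_transform(u))           # transmitted codeword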
Immobilization of organic radioactive and non-radioactive liquid waste in a composite matrix
DOE Office of Scientific and Technical Information (OSTI.GOV)
Galkin, Anatoliy; Gelis, Artem V.; Castiglioni, Andrew J.
A method for immobilizing liquid radioactive waste is provided, the method having the steps of mixing waste with polymer to form a non-liquid waste; contacting the non-liquid waste with a solidifying agent to create a mixture, heating the mixture to cause the polymer, waste, and filler to irreversibly bind in a solid phase, and compressing the solid phase into a monolith. The invention also provides a method for immobilizing liquid radioactive waste containing tritium, the method having the steps of mixing liquid waste with polymer to convert the liquid waste to a non-liquid waste, contacting the non-liquid waste with a solidifying agent to create a mixture, heating the mixture to form a homogeneous, chemically stable solid phase, and compressing the chemically stable solid phase into a final waste form, wherein the polymer comprises approximately a 9:1 weight ratio mixture of styrene block co-polymers and cross-linked co-polymers of acrylamides.
Intra prediction using face continuity in 360-degree video coding
NASA Astrophysics Data System (ADS)
Hanhart, Philippe; He, Yuwen; Ye, Yan
2017-09-01
This paper presents a new reference sample derivation method for intra prediction in 360-degree video coding. Unlike the conventional reference sample derivation method for 2D video coding, which uses the samples located directly above and on the left of the current block, the proposed method considers the spherical nature of 360-degree video when deriving reference samples located outside the current face to which the block belongs, and derives reference samples that are geometric neighbors on the sphere. The proposed reference sample derivation method was implemented in the Joint Exploration Model 3.0 (JEM-3.0) for the cubemap projection format. Simulation results for the all intra configuration show that, when compared with the conventional reference sample derivation method, the proposed method gives, on average, a luma BD-rate reduction of 0.3% in terms of the weighted spherical PSNR (WS-PSNR) and spherical PSNR (SPSNR) metrics.
Parallel Adaptive Mesh Refinement Library
NASA Technical Reports Server (NTRS)
Mac-Neice, Peter; Olson, Kevin
2005-01-01
Parallel Adaptive Mesh Refinement Library (PARAMESH) is a package of Fortran 90 subroutines designed to provide a computer programmer with an easy route to extension of (1) a previously written serial code that uses a logically Cartesian structured mesh into (2) a parallel code with adaptive mesh refinement (AMR). Alternatively, in its simplest use, and with minimal effort, PARAMESH can operate as a domain-decomposition tool for users who want to parallelize their serial codes but who do not wish to utilize adaptivity. The package builds a hierarchy of sub-grids to cover the computational domain of a given application program, with spatial resolution varying to satisfy the demands of the application. The sub-grid blocks form the nodes of a tree data structure (a quad-tree in two or an oct-tree in three dimensions). Each grid block has a logically Cartesian mesh. The package supports one-, two- and three-dimensional models.
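The block-tree bookkeeping a package like PARAMESH automates can be pictured with a toy quad-tree of grid blocks, where refining a leaf replaces it with four half-size children. This is a schematic Python sketch of the data structure, not the package's Fortran 90 interface:

    class Block:
        """A grid block carrying a logically Cartesian mesh; leaves hold the solution."""
        def __init__(self, x0, y0, size, level=0):
            self.x0, self.y0, self.size, self.level = x0, y0, size, level
            self.children = []                      # empty list marks a leaf

        def refine(self):                           # replace leaf by 4 half-size children
            h = self.size / 2
            self.children = [Block(self.x0 + i * h, self.y0 + j * h, h, self.level + 1)
                             for j in (0, 1) for i in (0, 1)]

        def leaves(self):
            if not self.children:
                yield self
            else:
                for c in self.children:
                    yield from c.leaves()

    root = Block(0.0, 0.0, 1.0)
    root.refine()                                   # refine the whole domain once
    next(root.leaves()).refine()                    # refine one corner block again
    print([(b.x0, b.y0, b.size, b.level) for b in root.leaves()])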
A Simple Secure Hash Function Scheme Using Multiple Chaotic Maps
NASA Astrophysics Data System (ADS)
Ahmad, Musheer; Khurana, Shruti; Singh, Sushmita; AlSharari, Hamed D.
2017-06-01
Chaotic maps possess high parameter sensitivity, random-like behavior, and one-way computation, which favor the construction of cryptographic hash functions. In this paper, we present a novel hash function scheme which uses multiple chaotic maps to generate efficient variable-sized hash codes. The message is divided into four parts, and each part is processed by a different 1D chaotic map unit yielding an intermediate hash code. The four codes are concatenated into two blocks, and each block is then processed through a 2D chaotic map unit separately. The final hash value is generated by combining the two partial hash codes. Simulation analyses such as distribution of hashes, statistical properties of confusion and diffusion, message and key sensitivity, collision resistance, and flexibility are performed. The results reveal that the proposed hash scheme is simple and efficient and holds comparable capabilities when compared with some recent chaos-based hash algorithms.
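A toy sketch of the general construction (split the message into four parts, drive a chaotic map per part, mix the results) using logistic maps throughout; the map parameters and mixing steps here are assumptions for illustration, not the authors' exact scheme:

    def logistic(x, r=3.99):
        return r * x * (1.0 - x)

    def chaotic_hash(msg: bytes, n_bits=128):
        parts = [msg[i::4] for i in range(4)]         # divide message into 4 parts
        states = []
        for p in parts:
            x = 0.5
            for byte in p:                            # inject each byte into the orbit
                x = logistic((x + (byte + 1) / 257.0) % 1.0)
            states.append(x)
        y1 = (states[0] + states[1]) % 1.0            # stand-in for the 2D-map stage
        y2 = (states[2] + states[3]) % 1.0
        x, out = (y1 + y2) % 1.0, 0
        for _ in range(n_bits):                       # squeeze out the hash bits
            x = logistic(x)
            out = (out << 1) | int(x > 0.5)
        return f"{out:0{n_bits // 4}x}"

    print(chaotic_hash(b"hello"), chaotic_hash(b"hellp"))  # one-byte change, new hash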
Computer Description of the Field Artillery Ammunition Supply Vehicle
1983-04-01
Keywords: Combinatorial Geometry (COM-GEOM); GIFT Computer Code; Computer Target Description. A combinatorial geometry (COM-GEOM) target description is input to the GIFT computer code to generate target vulnerability data. The Geometric Information for Targets (GIFT) computer code accepts the COM-GEOM description and ...
A Combinatorial Geometry Computer Description of the MEP-021A Generator Set
1979-02-01
Keywords: Generator Computer Description; Gasoline Generator; GIFT; MEP-021A. A COM-GEOM target description was prepared for the Geometric Information for Targets (GIFT) computer code, which traces shotlines through the COM-GEOM description from any specified attack direction. The GIFT description is also stored on magnetic tape for future vulnerability analysis.
Numerical simulation of turbulent gas flames in tubes.
Salzano, E; Marra, F S; Russo, G; Lee, J H S
2002-12-02
Computational fluid dynamics (CFD) is an emerging technique for predicting the possible consequences of gas explosions and is often considered a powerful and accurate tool for obtaining detailed results. However, systematic analyses of the reliability of this approach for real-scale industrial configurations are still needed, and few experimental data are available for comparison and validation. In this work, a set of well documented experimental data on flame acceleration in obstacle-filled tubes filled with flammable gas-air mixtures has been simulated. In these experiments, terminal steady flame speeds corresponding to different propagation regimes were observed, allowing a clear and prompt characterization of the numerical results with respect to numerical parameters (grid definition), geometrical parameters (blockage ratio), and mixture parameters (mixture reactivity). The CFD code AutoReaGas was used for the simulations. Numerical predictions were compared with the available experimental data and some insights into the code's accuracy were obtained. Computational results are satisfactory for the relatively slower turbulent deflagration regimes and become only fair when the choking regime is observed, whereas transition to quasi-detonation or Chapman-Jouguet (CJ) detonation was never predicted.
Bit-wise arithmetic coding for data compression
NASA Technical Reports Server (NTRS)
Kiely, A. B.
1994-01-01
This article examines the problem of compressing a uniformly quantized independent and identically distributed (IID) source. We present a new compression technique, bit-wise arithmetic coding, that assigns fixed-length codewords to the quantizer output and uses arithmetic coding to compress the codewords, treating the codeword bits as independent. We examine the performance of this method and evaluate the overhead required when used block-adaptively. Simulation results are presented for Gaussian and Laplacian sources. This new technique could be used as the entropy coder in a transform or subband coding system.
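The flavor of such a scheme, an arithmetic coder driven by an independent-bit model, can be sketched with a floating-point toy; this is fine for short inputs only (real coders use integer arithmetic with renormalization) and is not the article's exact block-adaptive coder:

    import math

    def encode_bits(bits, p0=0.8):
        """Toy arithmetic encoder, independent-bit model with P(bit=0) = p0."""
        lo, hi = 0.0, 1.0
        for b in bits:                                  # shrink the model interval
            mid = lo + p0 * (hi - lo)
            lo, hi = (lo, mid) if b == 0 else (mid, hi)
        nbits = math.ceil(-math.log2(hi - lo)) + 1      # enough bits to land inside
        v, code = (lo + hi) / 2, []
        for _ in range(nbits):                          # emit bits of the midpoint
            v *= 2
            code.append(int(v >= 1.0))
            v -= code[-1]
        return code

    def decode_bits(code, n, p0=0.8):
        v = sum(bit * 0.5 ** (i + 1) for i, bit in enumerate(code))
        lo, hi, out = 0.0, 1.0, []
        for _ in range(n):                              # replay the model forward
            mid = lo + p0 * (hi - lo)
            if v < mid:
                out.append(0); hi = mid
            else:
                out.append(1); lo = mid
        return out

    bits = [0, 0, 1, 0, 0, 0, 0, 1, 0, 0]
    assert decode_bits(encode_bits(bits), len(bits)) == bits

With p0 = 0.8, the ten input bits above compress to nine code bits; the gain grows with longer, more skewed inputs.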
An Examination of the Reliability of the Organizational Assessment Package (OAP).
1981-07-01
reactivity or pretest sensitization (Bracht and Glass, 1968) may occur. In this case, the change from pretest to posttest can be caused just by the ... content items. The blocks for supervisor's code were left blank, work group code was coded as all ones, and each person's seminar number was coded in ... [Table of scale reliability coefficients omitted: Work Group Effectiveness, Job Related Satisfaction, ...]
Neural Coding of Formant-Exaggerated Speech in the Infant Brain
ERIC Educational Resources Information Center
Zhang, Yang; Koerner, Tess; Miller, Sharon; Grice-Patil, Zach; Svec, Adam; Akbari, David; Tusler, Liz; Carney, Edward
2011-01-01
Speech scientists have long proposed that formant exaggeration in infant-directed speech plays an important role in language acquisition. This event-related potential (ERP) study investigated neural coding of formant-exaggerated speech in 6-12-month-old infants. Two synthetic /i/ vowels were presented in alternating blocks to test the effects of…
NASA Technical Reports Server (NTRS)
Lin, Shu; Fossorier, Marc
1998-01-01
The Viterbi algorithm is indeed a very simple and efficient method of implementing the maximum likelihood decoding. However, if we take advantage of the structural properties in a trellis section, other efficient trellis-based decoding algorithms can be devised. Recently, an efficient trellis-based recursive maximum likelihood decoding (RMLD) algorithm for linear block codes has been proposed. This algorithm is more efficient than the conventional Viterbi algorithm in both computation and hardware requirements. Most importantly, the implementation of this algorithm does not require the construction of the entire code trellis; only some special one-section trellises of relatively small state and branch complexities are needed for constructing path (or branch) metric tables recursively. At the end, there is only one table which contains only the most likely codeword and its metric for a given received sequence r = (r_1, r_2, ..., r_n). This algorithm basically uses the divide and conquer strategy. Furthermore, it allows parallel/pipeline processing of received sequences to speed up decoding.
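For contrast with the trellis machinery discussed above, here is a minimal hard-decision Viterbi decoder over an explicitly tabulated trellis section, using the rate-1/2, 4-state convolutional code with generators (7,5) octal. This is a generic illustration of Viterbi decoding, not the RMLD algorithm:

    # trellis[state] = [(next_state, input_bit, output_bits), ...]
    trellis = {
        0: [(0, 0, (0, 0)), (2, 1, (1, 1))],
        1: [(0, 0, (1, 1)), (2, 1, (0, 0))],
        2: [(1, 0, (1, 0)), (3, 1, (0, 1))],
        3: [(1, 0, (0, 1)), (3, 1, (1, 0))],
    }

    def viterbi(rx):
        """Maximum likelihood input bits under the Hamming metric; start state 0."""
        INF = float("inf")
        metric = {0: 0.0, 1: INF, 2: INF, 3: INF}
        paths = {s: [] for s in trellis}
        for sym in rx:
            new_m = {s: INF for s in trellis}
            new_p = {s: [] for s in trellis}
            for s, m in metric.items():
                if m == INF:
                    continue
                for nxt, u, out in trellis[s]:
                    d = m + sum(a != b for a, b in zip(out, sym))
                    if d < new_m[nxt]:                 # keep the survivor path
                        new_m[nxt], new_p[nxt] = d, paths[s] + [u]
            metric, paths = new_m, new_p
        return paths[min(metric, key=metric.get)]

    def encode(bits, s=0):
        out = []
        for u in bits:
            s, _, sym = next(t for t in trellis[s] if t[1] == u)
            out.append(sym)
        return out

    rx = encode([1, 0, 1, 1])
    rx[1] = (1, 1)                      # one corrupted channel symbol
    assert viterbi(rx) == [1, 0, 1, 1]  # recovered despite the error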
Low-rate image coding using vector quantization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makur, A.
1990-01-01
This thesis deals with the development and analysis of a computationally simple vector quantization image compression system for coding monochrome images at low bit rate. Vector quantization has been known to be an effective compression scheme when a low bit rate is desirable, but the intensive computation required in a vector quantization encoder has been a handicap in using it for low rate image coding. The present work shows that, without substantially increasing the coder complexity, it is indeed possible to achieve acceptable picture quality while attaining a high compression ratio. Several modifications to the conventional vector quantization coder are proposed in the thesis. These modifications are shown to offer better subjective quality when compared to the basic coder. Distributed blocks are used instead of spatial blocks to construct the input vectors. A class of input-dependent weighted distortion functions is used to incorporate psychovisual characteristics in the distortion measure. Computationally simple filtering techniques are applied to further improve the decoded image quality. Finally, unique designs of the vector quantization coder using electronic neural networks are described, so that the coding delay is reduced considerably.
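The core encoder operation, a nearest-codeword search over image blocks, fits in a few lines; at 256 codewords on 4x4 blocks the index costs 8 bits per 16 pixels, i.e. 0.5 bit/pel. The random stand-ins below replace a trained codebook and a real image:

    import numpy as np

    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, (64, 64)).astype(float)        # stand-in image
    codebook = rng.integers(0, 256, (256, 16)).astype(float)  # stand-in codebook

    blocks = (img.reshape(16, 4, 16, 4).swapaxes(1, 2)
                 .reshape(-1, 16))                   # 4x4 blocks as 16-vectors
    dist = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = dist.argmin(axis=1)                        # transmitted indices
    recon = codebook[idx].reshape(16, 16, 4, 4).swapaxes(1, 2).reshape(64, 64)
    print(f"rate = {np.log2(len(codebook)) / 16} bit/pel")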
NASA Technical Reports Server (NTRS)
Flores, J.; Gundy, K.
1986-01-01
A fast diagonalized Beam-Warming algorithm is coupled with a zonal approach to solve the three-dimensional Euler/Navier-Stokes equations. The computer code, called Transonic Navier-Stokes (TNS), uses a total of four zones for wing configurations (or can be extended to complete aircraft configurations by adding zones). In the inner blocks near the wing surface, the thin-layer Navier-Stokes equations are solved, while in the outer two blocks the Euler equations are solved. The diagonal algorithm yields a speedup of as much as a factor of 40 over the original algorithm/zonal method code. The TNS code, in addition, has the capability to model wind tunnel walls. Transonic viscous solutions are obtained on a 150,000-point mesh for a NACA 0012 wing. A three-order-of-magnitude drop in the L2-norm of the residual requires approximately 500 iterations, which takes about 45 min of CPU time on a Cray-XMP processor. Simulations are also conducted for a different geometrical wing called WING C. All cases show good agreement with experimental data.
MODELING OF MULTICOMPONENT PERVAPORATION FOR REMOVAL OF VOLATILE ORGANIC COMPOUNDS FROM WATER
A resistance-in-series model was used to study the pervaporation of multiple volatile organic compounds (VOCs)-water mixtures. Permeation experiments were carried out for four membranes: poly(dimethylsiloxane) (PDMS), polyether-block-polyamides (PEBA), polyurethane (PUR) and sil...
Resonant Acoustic Determination of Complex Elastic Moduli
1991-03-01
Program listing fragment (instrument-control display, HP BASIC):
4090 DISP "Run: ";Block2$
4100 WAIT 1
4110 DISP "Mode: ";Block3$
4120 WAIT 1
4130 DISP "Date: ";Block4$
4140 WAIT 1
4150 DISP "Mass: ";Mass;" grams"
NASA Astrophysics Data System (ADS)
Nicolae, Doina; Talianu, Camelia; Vasilescu, Jeni; Nicolae, Victor; Stachlewska, Iwona S.
2018-04-01
A Python code was developed to automatically retrieve the aerosol type (and its predominant component in the mixture) from EARLINET 3 backscatter + 2 extinction lidar data. The typing relies on artificial neural networks which are trained to identify the most probable aerosol type from a set of mean-layer intensive optical parameters. This paper presents the use and limitations of the code with respect to the quality of the input lidar profiles, as well as the assumptions made in the aerosol model.
Methods and codes for neutronic calculations of the MARIA research reactor.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrzejewski, K.; Kulikowska, T.; Bretscher, M. M.
2002-02-18
The core of the MARIA high flux multipurpose research reactor is highly heterogeneous. It consists of beryllium blocks arranged in a 6 x 8 matrix, tubular fuel assemblies, control rods, and irradiation channels. The reflector is also heterogeneous and consists of graphite blocks clad with aluminum. Its structure is perturbed by the experimental beam tubes. This paper presents methods and codes used to calculate the MARIA reactor neutronics characteristics and experience gained thus far at IAE and ANL. At ANL the methods of MARIA calculations were developed in connection with the RERTR program. At IAE the package of programs was developed to help its operator in optimization of fuel utilization.
A grid generation system for multi-disciplinary design optimization
NASA Technical Reports Server (NTRS)
Jones, William T.; Samareh-Abolhassani, Jamshid
1995-01-01
A general multi-block three-dimensional volume grid generator is presented which is suitable for Multi-Disciplinary Design Optimization. The code is timely, robust, highly automated, and written in ANSI 'C' for platform independence. Algebraic techniques are used to generate and/or modify block face and volume grids to reflect geometric changes resulting from design optimization. Volume grids are generated/modified in a batch environment and controlled via an ASCII user input deck. This allows the code to be incorporated directly into the design loop. Generated volume grids are presented for a High Speed Civil Transport (HSCT) Wing/Body geometry as well a complex HSCT configuration including horizontal and vertical tails, engine nacelles and pylons, and canard surfaces.
Study on a novel laser target detection system based on software radio technique
NASA Astrophysics Data System (ADS)
Song, Song; Deng, Jia-hao; Wang, Xue-tian; Gao, Zhen; Sun, Ji; Sun, Zhi-hui
2008-12-01
This paper presents a laser target detection system that applies software radio techniques together with pseudo-random code modulation. Based on the theory of software radio, the basic framework of the system, the hardware platform, and the implementation of the software system are detailed. The block diagram of the system, the DSP circuit, the block diagram of the pseudo-random code generator, and the software flow diagram of the signal processing are also designed. Experimental results show that the application of software radio techniques provides a novel way to realize the modularization, miniaturization, and intelligence of laser target detection systems, and makes upgrading and improving the system simpler, more convenient, and cheaper.
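The pseudo-random ranging code such a detector modulates is typically an m-sequence from a maximal-length LFSR. A small sketch with assumed taps follows (x^7 + x^6 + 1 is a known maximal polynomial; the paper's generator may differ):

    def lfsr(taps=(7, 6), nbits=7, seed=0b0000001):
        """Fibonacci LFSR; with maximal taps the period is 2**nbits - 1."""
        state = seed
        for _ in range((1 << nbits) - 1):
            yield state & 1
            fb = 0
            for t in taps:                            # xor the tapped stages
                fb ^= (state >> (t - 1)) & 1
            state = (state >> 1) | (fb << (nbits - 1))

    seq = list(lfsr())
    assert len(seq) == 127 and sum(seq) == 64   # balance property of m-sequences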
Block structured adaptive mesh and time refinement for hybrid, hyperbolic + N-body systems
NASA Astrophysics Data System (ADS)
Miniati, Francesco; Colella, Phillip
2007-11-01
We present a new numerical algorithm for the solution of coupled collisional and collisionless systems, based on the block structured adaptive mesh and time refinement strategy (AMR). We describe the issues associated with the discretization of the system equations and the synchronization of the numerical solution on the hierarchy of grid levels. We implement a code based on a higher order, conservative and directionally unsplit Godunov's method for hydrodynamics; a symmetric, time-centered modified symplectic scheme for the collisionless component; and a multilevel, multigrid relaxation algorithm for the elliptic equation coupling the two components. Numerical results that illustrate the accuracy of the code and the relative merit of various implemented schemes are also presented.
A modified JPEG-LS lossless compression method for remote sensing images
NASA Astrophysics Data System (ADS)
Deng, Lihua; Huang, Zhenghua
2015-12-01
Like many variable-length source coders, JPEG-LS is highly vulnerable to the channel errors which occur in the transmission of remote sensing images, and error diffusion is one of the important factors that affect its robustness. The common method of improving the error resilience of JPEG-LS is to divide the image into many strips or blocks and then code each of them independently, but this method reduces the coding efficiency. In this paper, a block-based JPEG-LS lossless compression method with an adaptive parameter is proposed. In the modified scheme, the threshold parameter RESET is adapted to the image, and the compression efficiency is close to that of conventional JPEG-LS.
Effects of Fuel Distribution on Detonation Tube Performance
NASA Technical Reports Server (NTRS)
Perkins, H. Douglas; Sung, Chih-Jen
2003-01-01
A pulse detonation engine uses a series of high frequency intermittent detonation tubes to generate thrust. The process of filling the detonation tube with fuel and air for each cycle may yield non-uniform mixtures. Uniform mixing is commonly assumed when calculating detonation tube thrust performance. In this study, detonation cycles featuring idealized non-uniform H2/air mixtures were analyzed using a two-dimensional Navier-Stokes computational fluid dynamics code with detailed chemistry. Mixture non-uniformities examined included axial equivalence ratio gradients, transverse equivalence ratio gradients, and partially fueled tubes. Three different average test section equivalence ratios were studied; one stoichiometric, one fuel lean, and one fuel rich. All mixtures were detonable throughout the detonation tube. Various mixtures representing the same average test section equivalence ratio were shown to have specific impulses within 1% of each other, indicating that good fuel/air mixing is not a prerequisite for optimal detonation tube performance under the conditions investigated.
Simulations of pattern dynamics for reaction-diffusion systems via SIMULINK
2014-01-01
Background: Investigation of the nonlinear pattern dynamics of a reaction-diffusion system almost always requires numerical solution of the system’s set of defining differential equations. Traditionally, this would be done by selecting an appropriate differential equation solver from a library of such solvers, then writing computer codes (in a programming language such as C or Matlab) to access the selected solver and display the integrated results as a function of space and time. This “code-based” approach is flexible and powerful, but requires a certain level of programming sophistication. A modern alternative is to use a graphical programming interface such as Simulink to construct a data-flow diagram by assembling and linking appropriate code blocks drawn from a library. The result is a visual representation of the inter-relationships between the state variables whose output can be made completely equivalent to the code-based solution.
Results: As a tutorial introduction, we first demonstrate application of the Simulink data-flow technique to the classical van der Pol nonlinear oscillator, and compare Matlab and Simulink coding approaches to solving the van der Pol ordinary differential equations. We then show how to introduce space (in one and two dimensions) by solving numerically the partial differential equations for two different reaction-diffusion systems: the well-known Brusselator chemical reactor, and a continuum model for a two-dimensional sheet of human cortex whose neurons are linked by both chemical and electrical (diffusive) synapses. We compare the relative performances of the Matlab and Simulink implementations.
Conclusions: The pattern simulations by Simulink are in good agreement with theoretical predictions. Compared with traditional coding approaches, the Simulink block-diagram paradigm reduces the time and programming burden required to implement a solution for reaction-diffusion systems of equations. Construction of the block-diagram does not require high-level programming skills, and the graphical interface lends itself to easy modification and use by non-experts. PMID:24725437
Simulations of pattern dynamics for reaction-diffusion systems via SIMULINK.
Wang, Kaier; Steyn-Ross, Moira L; Steyn-Ross, D Alistair; Wilson, Marcus T; Sleigh, Jamie W; Shiraishi, Yoichi
2014-04-11
Investigation of the nonlinear pattern dynamics of a reaction-diffusion system almost always requires numerical solution of the system's set of defining differential equations. Traditionally, this would be done by selecting an appropriate differential equation solver from a library of such solvers, then writing computer codes (in a programming language such as C or Matlab) to access the selected solver and display the integrated results as a function of space and time. This "code-based" approach is flexible and powerful, but requires a certain level of programming sophistication. A modern alternative is to use a graphical programming interface such as Simulink to construct a data-flow diagram by assembling and linking appropriate code blocks drawn from a library. The result is a visual representation of the inter-relationships between the state variables whose output can be made completely equivalent to the code-based solution. As a tutorial introduction, we first demonstrate application of the Simulink data-flow technique to the classical van der Pol nonlinear oscillator, and compare Matlab and Simulink coding approaches to solving the van der Pol ordinary differential equations. We then show how to introduce space (in one and two dimensions) by solving numerically the partial differential equations for two different reaction-diffusion systems: the well-known Brusselator chemical reactor, and a continuum model for a two-dimensional sheet of human cortex whose neurons are linked by both chemical and electrical (diffusive) synapses. We compare the relative performances of the Matlab and Simulink implementations. The pattern simulations by Simulink are in good agreement with theoretical predictions. Compared with traditional coding approaches, the Simulink block-diagram paradigm reduces the time and programming burden required to implement a solution for reaction-diffusion systems of equations. Construction of the block-diagram does not require high-level programming skills, and the graphical interface lends itself to easy modification and use by non-experts.
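For readers outside the Matlab/Simulink ecosystem, the same code-based solution of the van der Pol system takes only a few lines with scipy. This is a Python analogue of the paper's Matlab example, not the authors' code:

    from scipy.integrate import solve_ivp

    def van_der_pol(t, y, mu=1.0):
        x, v = y
        return [v, mu * (1.0 - x ** 2) * v - x]   # x'' - mu (1 - x^2) x' + x = 0

    sol = solve_ivp(van_der_pol, (0.0, 30.0), [2.0, 0.0],
                    args=(1.0,), max_step=0.01)
    print(f"final state: x = {sol.y[0, -1]:.3f}, v = {sol.y[1, -1]:.3f}")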
Effects of Fuel Distribution on Detonation Tube Performance
NASA Technical Reports Server (NTRS)
Perkins, Hugh Douglas
2002-01-01
A pulse detonation engine (PDE) uses a series of high frequency intermittent detonation tubes to generate thrust. The process of filling the detonation tube with fuel and air for each cycle may yield non-uniform mixtures. Lack of mixture uniformity is commonly ignored when calculating detonation tube thrust performance. In this study, detonation cycles featuring idealized non-uniform H2/air mixtures were analyzed using the SPARK two-dimensional Navier-Stokes CFD code with 7-step H2/air reaction mechanism. Mixture non-uniformities examined included axial equivalence ratio gradients, transverse equivalence ratio gradients, and partially fueled tubes. Three different average test section equivalence ratios (phi), stoichiometric (phi = 1.00), fuel lean (phi = 0.90), and fuel rich (phi = 1.10), were studied. All mixtures were detonable throughout the detonation tube. It was found that various mixtures representing the same test section equivalence ratio had specific impulses within 1 percent of each other, indicating that good fuel/air mixing is not a prerequisite for optimal detonation tube performance.
Tutorial on Reed-Solomon error correction coding
NASA Technical Reports Server (NTRS)
Geisel, William A.
1990-01-01
This tutorial attempts to provide a frank, step-by-step approach to Reed-Solomon (RS) error correction coding. RS encoding and RS decoding both with and without erasing code symbols are emphasized. There is no need to present rigorous proofs and extreme mathematical detail. Rather, the simple concepts of groups and fields, specifically Galois fields, are presented with a minimum of complexity. Before RS codes are presented, other block codes are presented as a technical introduction into coding. A primitive (15, 9) RS coding example is then completely developed from start to finish, demonstrating the encoding and decoding calculations and a derivation of the famous error-locator polynomial. The objective is to present practical information about Reed-Solomon coding in a manner such that it can be easily understood.
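The Galois-field housekeeping behind a (15, 9) RS code reduces to antilog/log tables for GF(2^4); a short sketch using the primitive polynomial x^4 + x + 1 (a standard choice, though the tutorial may use another):

    PRIM = 0b10011                    # x^4 + x + 1
    exp = [0] * 15
    a = 1
    for i in range(15):               # powers of the primitive element alpha
        exp[i] = a
        a <<= 1                       # multiply by x
        if a & 0b10000:
            a ^= PRIM                 # reduce modulo the primitive polynomial
    log = {v: i for i, v in enumerate(exp)}

    def gf_mul(x, y):                 # field multiplication via the tables
        return 0 if 0 in (x, y) else exp[(log[x] + log[y]) % 15]

    assert gf_mul(3, 3) == 5          # (x + 1)^2 = x^2 + 1 in GF(16)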
The moving mesh code SHADOWFAX
NASA Astrophysics Data System (ADS)
Vandenbroucke, B.; De Rijcke, S.
2016-07-01
We introduce the moving mesh code SHADOWFAX, which can be used to evolve a mixture of gas, subject to the laws of hydrodynamics and gravity, and any collisionless fluid only subject to gravity, such as cold dark matter or stars. The code is written in C++ and its source code is made available to the scientific community under the GNU Affero General Public Licence. We outline the algorithm and the design of our implementation, and demonstrate its validity through the results of a set of basic test problems, which are also part of the public version. We also compare SHADOWFAX with a number of other publicly available codes using different hydrodynamical integration schemes, illustrating the advantages and disadvantages of the moving mesh technique.
Numerical study of shock-induced combustion in methane-air mixtures
NASA Technical Reports Server (NTRS)
Yungster, Shaye; Rabinowitz, Martin J.
1993-01-01
The shock-induced combustion of methane-air mixtures in hypersonic flows is investigated using a new reaction mechanism consisting of 19 reacting species and 52 elementary reactions. This reduced model is derived from a full kinetic mechanism via the Detailed Reduction technique. Zero-dimensional computations of several shock-tube experiments are presented first. The reaction mechanism is then combined with a fully implicit Navier-Stokes computational fluid dynamics (CFD) code to conduct numerical simulations of two-dimensional and axisymmetric shock-induced combustion experiments of stoichiometric methane-air mixtures at a Mach number of M = 6.61. Applications to the ram accelerator concept are also presented.
Non-Ideal Detonation Properties of Ammonium Nitrate and Activated Carbon Mixtures
NASA Astrophysics Data System (ADS)
Miyake, Atsumi; Echigoya, Hiroshi; Kobayashi, Hidefumi; Ogawa, Terushige; Katoh, Katsumi; Kubota, Shiro; Wada, Yuji; Ogata, Yuji
To obtain a better understanding of the detonation properties of ammonium nitrate (AN) and activated carbon (AC) mixtures, steel tube tests with several diameters were carried out for various compositions of powdered AN and AC mixtures, and the influence of the charge diameter on the detonation velocity was investigated. The results showed that the detonation velocity increased with increasing charge diameter. The experimentally observed values were far below the values predicted by the thermodynamic CHEETAH code, showing so-called non-ideal detonation. The detonation velocity of the stoichiometric composition extrapolated to infinite diameter showed good agreement with the theoretical value.
NASA Astrophysics Data System (ADS)
Manjanaik, N.; Parameshachari, B. D.; Hanumanthappa, S. N.; Banu, Reshma
2017-08-01
The intra prediction process of the H.264 video coding standard is used to code the first frame (the intra frame) of a video and achieves better coding efficiency than earlier standards. Intra frame coding reduces spatial pixel redundancy within the current frame, reduces computational complexity, and provides better rate-distortion performance. Intra frames are conventionally coded with rate distortion optimization (RDO), but RDO increases computational complexity and bit rate and reduces picture quality, making it difficult to use in real-time applications; many researchers have therefore developed fast mode decision algorithms for intra frame coding. Previous fast mode decision algorithms for H.264 intra frame coding reduced computational complexity (saving encoding time) but increased the bit rate and degraded picture quality (PSNR) across quantization parameters. To avoid the increase in bit rate and the loss of picture quality, this paper develops a better approach: Gaussian-pulse weighting for intra frame coding with the diagonal down-left intra prediction mode, which achieves higher coding efficiency in terms of PSNR and bit rate. In the proposed method, a Gaussian pulse is multiplied with each 4x4 block of frequency-domain coefficients of the current frame's macroblocks before quantization. Multiplying each 4x4 integer-transformed coefficient block by the Gaussian pulse scales the coefficient information in a reversible manner; frequency samples are attenuated in a known and controllable way without intermixing of coefficients, which keeps the picture from degrading badly at higher quantization parameter values. The proposed work was implemented using MATLAB and the JM 18.6 reference software, and PSNR, bit rate, and compression of intra frames were measured for YUV video sequences at QCIF resolution under different quantization parameter values, with the Gaussian weighting applied to the diagonal down-left intra prediction mode. The simulation results are tabulated and compared with the previous algorithm of Tian et al.; the proposed algorithm reduced the bit rate by 30.98% on average while maintaining consistent picture quality for QCIF sequences.
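A sketch of the core step: elementwise, reversible scaling of each 4x4 coefficient block by a Gaussian pulse before uniform quantization, undone after inverse quantization. The pulse center and width below are assumptions, as is the scalar quantizer standing in for H.264's:

    import numpy as np

    i, j = np.meshgrid(np.arange(4), np.arange(4), indexing="ij")
    gauss = np.exp(-(i ** 2 + j ** 2) / (2 * 2.0 ** 2))   # pulse peaking at DC

    def forward(coeffs, step):
        return np.round((coeffs * gauss) / step)          # scale, then quantize

    def inverse(levels, step):
        return (levels * step) / gauss                    # dequantize, un-weight

    block = np.random.default_rng(2).normal(0.0, 20.0, (4, 4))
    rec = inverse(forward(block, step=8.0), step=8.0)
    print(np.abs(rec - block).max())                      # quantization error only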
Tail Biting Trellis Representation of Codes: Decoding and Construction
NASA Technical Reports Server (NTRS)
Shao. Rose Y.; Lin, Shu; Fossorier, Marc
1999-01-01
This paper presents two new iterative algorithms for decoding linear codes based on their tail biting trellises, one unidirectional and the other bidirectional. Both algorithms are computationally efficient and achieve virtually optimum error performance with a small number of decoding iterations. They outperform all previous suboptimal decoding algorithms, and the bidirectional algorithm also reduces decoding delay. Also presented in the paper is a method for constructing tail biting trellises for linear block codes.
Rapid Prediction of Unsteady Three-Dimensional Viscous Flows in Turbopump Geometries
NASA Technical Reports Server (NTRS)
Dorney, Daniel J.
1998-01-01
A program is underway to improve the efficiency of a three-dimensional Navier-Stokes code and generalize it for nozzle and turbopump geometries. Code modifications will include the implementation of parallel processing software, incorporating new physical models and generalizing the multi-block capability to allow the simultaneous simulation of nozzle and turbopump configurations. The current report contains details of code modifications, numerical results of several flow simulations and the status of the parallelization effort.
Predictions of GPS X-Set Performance during the Places Experiment
1979-07-01
A previously existing GPS X-set receiver simulation was modified to include the received signal spectrum and the receiver code correlation operation. ... The X-set receiver simulation documented in Reference 3-1 is a direct sampled-data digital implementation of the GPS X-set ... [Figure 3-6: Simplified block diagram of code correlator operation and I-Q sampling.]
Unitals and ovals of symmetric block designs in LDPC and space-time coding
NASA Astrophysics Data System (ADS)
Andriamanalimanana, Bruno R.
2004-08-01
An approach to the design of LDPC (low density parity check) error-correction and space-time modulation codes involves starting with known mathematical and combinatorial structures, and deriving code properties from structure properties. This paper reports on an investigation of unital and oval configurations within generic symmetric combinatorial designs, not just classical projective planes, as the underlying structure for classes of space-time LDPC outer codes. Of particular interest are the encoding and iterative (sum-product) decoding gains that these codes may provide. Various small-length cases have been numerically implemented in Java and Matlab for a number of channel models.
21 CFR 1304.33 - Reports to ARCOS.
Code of Federal Regulations, 2012 CFR
2012-04-01
... substance having a stimulant effect on the central nervous system, which material, compound, mixture or... assigned to the product under the National Drug Code System of the Food and Drug Administration. (e...
21 CFR 1304.33 - Reports to ARCOS.
Code of Federal Regulations, 2014 CFR
2014-04-01
... substance having a stimulant effect on the central nervous system, which material, compound, mixture or... assigned to the product under the National Drug Code System of the Food and Drug Administration. (e...
21 CFR 1304.33 - Reports to ARCOS.
Code of Federal Regulations, 2013 CFR
2013-04-01
... substance having a stimulant effect on the central nervous system, which material, compound, mixture or... assigned to the product under the National Drug Code System of the Food and Drug Administration. (e...
21 CFR 1304.33 - Reports to ARCOS.
Code of Federal Regulations, 2011 CFR
2011-04-01
... substance having a stimulant effect on the central nervous system, which material, compound, mixture or... assigned to the product under the National Drug Code System of the Food and Drug Administration. (e...
Evaluation of Ultrafiltration Performance for Phospholipid Separation
NASA Astrophysics Data System (ADS)
Aryanti, N.; Wardhani, D. H.; Maulana, Z. S.; Roberto, D.
2017-11-01
Ultrafiltration membranes have been applied to the degumming of crude palm oil as an alternative method, since the membrane process requires fewer steps than conventional degumming. This research focused on the examination of ultrafiltration performance for phospholipid separation in a model of crude palm oil degumming. Specifically, flux and rejection profiles, as well as the blocking mechanism, were investigated. Feeds consisting of refined crude palm oil-isopropanol-lecithin mixtures represented crude palm oil degumming, with lecithin as the phospholipid component; the lecithin concentration in the feed was varied over 0.1%, 0.2%, and 0.3%, and phospholipid content was determined as phosphorus content. These lecithin concentrations corresponded to phospholipid concentrations of 8.45, 24.87, and 57.58 mg/kg. Flux profiles confirmed that there was a flux decline during filtration; moreover, the lecithin concentration did not significantly affect the extent of further flux decline. Rejection characteristics and the phospholipid concentration in the permeate showed that phospholipid rejection by ultrafiltration was in the range of 23-79.5%, corresponding to permeate phospholipid concentrations of 1.73-44.25 mg/kg. Evaluation of the fouling mechanism with Hermia's blocking models confirmed that standard blocking is the dominant mechanism in the ultrafiltration of the lecithin mixture.
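The standard-blocking conclusion can be checked with Hermia's linearized form t/V = (Ks/2)t + 1/Q0: if t/V is a straight line in t, standard blocking is supported. The (t, V) samples below are invented for illustration, not the study's data:

    import numpy as np

    t = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0])      # min
    V = np.array([40.0, 72.0, 98.0, 119.0, 137.0, 152.0])  # mL cumulative permeate

    slope, intercept = np.polyfit(t, t / V, 1)   # linearity in t => standard blocking
    Ks, Q0 = 2.0 * slope, 1.0 / intercept
    print(f"Ks = {Ks:.4f} 1/mL, Q0 = {Q0:.1f} mL/min")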
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chremos, Alexandros, E-mail: achremos@imperial.ac.uk; Nikoubashman, Arash, E-mail: arashn@princeton.edu; Panagiotopoulos, Athanassios Z.
In this contribution, we develop a coarse-graining methodology for mapping specific block copolymer systems to bead-spring particle-based models. We map the constituent Kuhn segments to Lennard-Jones particles, and establish a semi-empirical correlation between the experimentally determined Flory-Huggins parameter χ and the interaction of the model potential. For these purposes, we have performed an extensive set of isobaric-isothermal Monte Carlo simulations of binary mixtures of Lennard-Jones particles with the same size but with asymmetric energetic parameters. The phase behavior of these monomeric mixtures is then extended to chains with finite sizes through theoretical considerations. Such a top-down coarse-graining approach is important from a computational point of view, since many characteristic features of block copolymer systems are on time and length scales which are still inaccessible through fully atomistic simulations. We demonstrate the applicability of our method for generating parameters by reproducing the morphology diagram of a specific diblock copolymer, namely, poly(styrene-b-methyl methacrylate), which has been extensively studied in experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gallagher, Neal B.; Blake, Thomas A.; Gassman, Paul L.
2006-07-01
Multivariate curve resolution (MCR) is a powerful technique for extracting chemical information from measured spectra of complex mixtures. The difficulty with applying MCR to soil reflectance measurements is that light scattering artifacts can contribute much more variance to the measurements than the analyte(s) of interest. Two methods were integrated into an MCR decomposition to account for light scattering effects. First, an extended mixture model using pure analyte spectra augmented with scattering 'spectra' was used for the measured spectra. Second, second-derivative preprocessed spectra, which have higher selectivity than the unprocessed spectra, were included as a second block in the decomposition. The conventional alternating least squares (ALS) algorithm was modified to simultaneously decompose the measured and second-derivative spectra in a two-block decomposition. Equality constraints were also included to incorporate information about sampling conditions. The result was an MCR decomposition that provided interpretable spectra from soil reflectance measurements.
NASA Technical Reports Server (NTRS)
Hall, Edward J.; Delaney, Robert A.
1993-01-01
The primary objective of this study was the development of a time-marching three-dimensional Euler/Navier-Stokes aerodynamic analysis to predict steady and unsteady compressible transonic flows about ducted and unducted propfan propulsion systems employing multiple blade rows. The computer codes resulting from this study are referred to as ADPAC-AOACR (Advanced Ducted Propfan Analysis Codes-Angle of Attack Coupled Row). This document is the final report describing the theoretical basis and analytical results from the ADPAC-AOACR codes developed under Task 5 of NASA Contract NAS3-25270, Unsteady Counterrotating Ducted Propfan Analysis. The ADPAC-AOACR program is based on a flexible multiple blocked grid discretization scheme permitting coupled 2-D/3-D mesh block solutions with application to a wide variety of geometries. For convenience, several standard mesh block structures are described for turbomachinery applications. Aerodynamic calculations are based on a four-stage Runge-Kutta time-marching finite volume solution technique with added numerical dissipation. Steady flow predictions are accelerated by a multigrid procedure. Numerical calculations are compared with experimental data for several test cases to demonstrate the utility of this approach for predicting the aerodynamics of modern turbomachinery configurations employing multiple blade rows.
NASA Technical Reports Server (NTRS)
Hall, Edward J.; Delaney, Robert A.; Adamczyk, John J.; Miller, Christopher J.; Arnone, Andrea; Swanson, Charles
1993-01-01
The primary objective of this study was the development of a time-marching three-dimensional Euler/Navier-Stokes aerodynamic analysis to predict steady and unsteady compressible transonic flows about ducted and unducted propfan propulsion systems employing multiple blade rows. The computer codes resulting from this study are referred to as ADPAC-AOACR (Advanced Ducted Propfan Analysis Codes-Angle of Attack Coupled Row). This report is intended to serve as a computer program user's manual for the ADPAC-AOACR codes developed under Task 5 of NASA Contract NAS3-25270, Unsteady Counterrotating Ducted Propfan Analysis. The ADPAC-AOACR program is based on a flexible multiple blocked grid discretization scheme permitting coupled 2-D/3-D mesh block solutions with application to a wide variety of geometries. For convenience, several standard mesh block structures are described for turbomachinery applications. Aerodynamic calculations are based on a four-stage Runge-Kutta time-marching finite volume solution technique with added numerical dissipation. Steady flow predictions are accelerated by a multigrid procedure. Numerical calculations are compared with experimental data for several test cases to demonstrate the utility of this approach for predicting the aerodynamics of modern turbomachinery configurations employing multiple blade rows.
SOLAR OPACITY CALCULATIONS USING THE SUPER-TRANSITION-ARRAY METHOD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krief, M.; Feigel, A.; Gazit, D., E-mail: menahem.krief@mail.huji.ac.il
A new opacity model has been developed based on the Super-Transition-Array (STA) method for the calculation of monochromatic opacities of plasmas in local thermodynamic equilibrium. The atomic code, named STAR (STA-Revised), is described and used to calculate spectral opacities for a solar model implementing the recent AGSS09 composition. Calculations are carried out throughout the solar radiative zone. The relative contributions of different chemical elements and atomic processes to the total Rosseland mean opacity are analyzed in detail. Monochromatic opacities and charge-state distributions are compared with the widely used Opacity Project (OP) code, for several elements near the radiation-convection interface. STAR Rosseland opacities for the solar mixture show a very good agreement with OP and the OPAL opacity code throughout the radiation zone. Finally, an explicit STA calculation was performed of the full AGSS09 photospheric mixture, including all heavy metals. It was shown that, due to their extremely low abundance, and despite being very good photon absorbers, the heavy elements do not affect the Rosseland opacity.
Block-Parallel Data Analysis with DIY2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morozov, Dmitriy; Peterka, Tom
DIY2 is a programming model and runtime for block-parallel analytics on distributed-memory machines. Its main abstraction is block-structured data parallelism: data are decomposed into blocks; blocks are assigned to processing elements (processes or threads); computation is described as iterations over these blocks, and communication between blocks is defined by reusable patterns. By expressing computation in this general form, the DIY2 runtime is free to optimize the movement of blocks between slow and fast memories (disk and flash vs. DRAM) and to concurrently execute blocks residing in memory with multiple threads. This enables the same program to execute in-core, out-of-core, serial, parallel, single-threaded, multithreaded, or combinations thereof. This paper describes the implementation of the main features of the DIY2 programming model and optimizations to improve performance. DIY2 is evaluated on benchmark test cases to establish baseline performance for several common patterns and on larger complete analysis codes running on large-scale HPC machines.
Simulation of a hydrocarbon fueled scramjet exhaust
NASA Technical Reports Server (NTRS)
Leng, J.
1982-01-01
Exhaust nozzle flow fields for a fully integrated, hydrocarbon burning scramjet were calculated for flight conditions of M (undisturbed free stream) = 4 at 6.1 km altitude and M (undisturbed free stream) = 6 at 30.5 km altitude. Equilibrium flow, frozen flow, and finite rate chemistry effects are considered. All flow fields were calculated by method of characteristics. Finite rate chemistry results were evaluated by a one dimensional code (Bittker) using streamtube area distributions extracted from the equilibrium flow field, and compared to very slow artificial rate cases for the same streamtube area distribution. Several candidate substitute gas mixtures, designed to simulate the gas dynamics of the real engine exhaust flow, were examined. Two mixtures are found to give excellent simulations of the specified exhaust flow fields when evaluated by the same method of characteristics computer code.
NASA Technical Reports Server (NTRS)
Talcott, N. A., Jr.
1977-01-01
Equations and computer code are given for the thermodynamic properties of gaseous fluorocarbons in chemical equilibrium. In addition, isentropic equilibrium expansions of two binary mixtures of fluorocarbons and argon are included. The computer code calculates the equilibrium thermodynamic properties and, in some cases, the transport properties for the following fluorocarbons: CCl3F, CCl2F2, CBrF3, CF4, CHCl2F, CHF3, CCl2F-CCl2F, CClF2-CClF2, CF3-CF3, and C4F8. Equilibrium thermodynamic properties are tabulated for six of the fluorocarbons (CCl3F, CCl2F2, CBrF3, CF4, CF3-CF3, and C4F8), and pressure-enthalpy diagrams are presented for CBrF3.
Code of Federal Regulations, 2010 CFR
2010-04-01
... ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES MEDICAL DEVICE REPORTING... preexisting medical conditions. (c) Device information (Form 3500A, Block D). You must submit the following... device code (refer to FDA MEDWATCH Medical Device Reporting Code Instructions); (11) Whether a report was...
Code of Federal Regulations, 2011 CFR
2011-04-01
... ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES MEDICAL DEVICE REPORTING... preexisting medical conditions. (c) Device information (Form 3500A, Block D). You must submit the following... device code (refer to FDA MEDWATCH Medical Device Reporting Code Instructions); (11) Whether a report was...
NASA Technical Reports Server (NTRS)
Lahmeyer, Charles R. (Inventor)
1987-01-01
A Reed-Solomon decoder with dedicated hardware for five sequential algorithms was designed with overall pipelining by memory swapping between input, processing and output memories, and internal pipelining through the five algorithms. The code definition used in decoding is specified by a keyword received with each block of data so that a number of different code formats may be decoded by the same hardware.
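The memory-swapping idea can be sketched independently of the Reed-Solomon algorithms themselves. In the hedged Python sketch below, a placeholder transform stands in for the decoder; three buffers rotate through input, processing, and output roles each cycle, so that in hardware the load of block k+1, the decode of block k, and the emission of block k-1 can overlap (here the stages run sequentially just to show the rotation).

```python
# Three-buffer rotation sketch; decode() is a placeholder, not a
# Reed-Solomon implementation.
def run_pipeline(blocks, decode):
    buf = [None, None, None]                # input / processing / output slots
    results = []
    for item in list(blocks) + [None, None, None]:  # trailing Nones flush the pipe
        if buf[2] is not None:
            results.append(buf[2])          # output stage: emit finished block
        processed = decode(buf[1]) if buf[1] is not None else None
        buf = [item, buf[0], processed]     # swap memories: roles rotate each cycle
    return results

decoded = run_pipeline([b"blk0", b"blk1", b"blk2"], decode=bytes.upper)
assert decoded == [b"BLK0", b"BLK1", b"BLK2"]
```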
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hathaway, M.D.; Wood, J.R.
1997-10-01
CFD codes capable of utilizing multi-block grids provide the capability to analyze the complete geometry of centrifugal compressors. Attendant with this increased capability is potentially increased grid setup time and more computational overhead, with a resultant increase in wall clock time to obtain a solution. If the increased difficulty of obtaining a solution significantly improves the solution over one obtained by modeling the features of the tip clearance flow or the typical bluntness of a centrifugal compressor's trailing edge, then the additional burden is worthwhile. However, if the additional information obtained is of marginal use, then modeling of certain features of the geometry may provide reasonable solutions for designers to make comparative choices when pursuing a new design. In this spirit, a sequence of grids was generated to study the relative importance of modeling versus detailed gridding of the tip gap and blunt trailing edge regions of the NASA large low-speed centrifugal compressor, for which considerable detailed internal laser anemometry data are available for comparison. The results indicate: (1) There is no significant difference in predicted tip clearance mass flow rate whether the tip gap is gridded or modeled. (2) Gridding rather than modeling the trailing edge results in better predictions of some flow details downstream of the impeller, but otherwise appears to offer no great benefits. (3) The pitchwise variation of absolute flow angle decreases rapidly up to 8% impeller radius ratio and much more slowly thereafter. Although some improvements in prediction of flow field details are realized as a result of analyzing the actual geometry, there is no clear consensus that any of the grids investigated produced superior results in every case when compared to the measurements. However, if a multi-block code is available, it should be used, as it has the propensity for enabling better predictions than a single-block code.
NASA Technical Reports Server (NTRS)
1998-01-01
Pointwise Inc.'s Gridgen software is a system for the generation of 3D (three-dimensional) multiple-block, structured grids. Gridgen is a visually oriented, graphics-based interactive code used to decompose a 3D domain into blocks, distribute grid points on curves, initialize and refine grid points on surfaces, and initialize volume grid points. Gridgen is available to U.S. citizens and American-owned companies by license.
Program EAGLE User’s Manual. Volume 3. Grid Generation Code
1988-09-01
Contents include composite grid structure and block interfaces. In principle it is possible to establish a correspondence between any physical region and a single empty rectangular block for general three-dimensional configurations. Since the second surrounding layer is not involved in the grid generation, no further account is taken of its presence here.
Total x-ray power measurements in the Sandia LIGA program.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malinowski, Michael E.; Ting, Aili
2005-08-01
Total X-ray power measurements using aluminum block calorimetry and other techniques were made at LIGA X-ray scanner synchrotron beamlines located at both the Advanced Light Source (ALS) and the Advanced Photon Source (APS). This block calorimetry work was initially performed on LIGA beamline 3.3.1 of the ALS to provide experimental checks of predictions of the LEX-D (LIGA Exposure-Development) code for LIGA X-ray exposures, version 7.56, the version of the code in use at the time calorimetry was done. These experiments showed that it was necessary to use bend magnet field strengths and electron storage ring energies different from the default values originally in the code in order to obtain good agreement between experiment and theory. The results indicated that agreement between LEX-D predictions and experiment could be as good as 5% only if (1) more accurate values of the ring energies, (2) local values of the magnet field at the beamline source point, and (3) the NIST database for X-ray/materials interactions were used as code inputs. These local magnetic field values and accurate ring energies, together with the NIST database, are now defaults in the newest release of LEX-D, version 7.61. Three-dimensional simulations of the temperature distributions in the aluminum calorimeter block for a typical ALS power measurement were made with the ABAQUS code and found to be in good agreement with the experimental temperature data. As an application of the block calorimetry technique, the X-ray power exiting the mirror in place at a LIGA scanner located at APS beamline 10 BM was measured with a calorimeter similar to the one used at the ALS. The overall results at the APS demonstrated the utility of calorimetry in helping to characterize the total X-ray power in LIGA beamlines. In addition to the block calorimetry work at the ALS and APS, a preliminary comparison of the use of heat flux sensors, photodiodes, and modified beam calorimeters as total X-ray power monitors was made at the ALS, beamline 3.3.1. This work showed that a modification of a commercially available heat flux sensor could result in a simple, direct-reading beam power meter that could be useful for monitoring total X-ray power in Sandia's LIGA exposure stations at the ALS, APS, and Stanford Synchrotron Radiation Laboratory (SSRL).
Samal, Monica; Mohapatra, Priya Ranjan; Yun, Kyu Sik
2015-09-01
A diblock copolymer, poly(2-vinyl pyridine)-b-poly(n-hexyl isocyanate) (P2VP-b-PHIC), is used for the present study. It has two blocks: a rod-shaped PHIC block that adopts a helical conformation, and a coil-shaped P2VP block. In a polar solvent such as THF, both the PHIC and P2VP blocks are soluble. In mixtures of two solvents, such as THF and methanol, the solubility of the P2VP component is augmented while that of PHIC is decreased, leading to the formation of reversed micelles. The pyridine nitrogen in the P2VP block is a reactive site; it forms complexes with a suitable metal ion, such as Cd2+. The micelle is employed as a nanoreactor for the synthesis of CdS quantum dots (QDs). In this paper, the micellization behaviour of the copolymer and the use of the micelles for the synthesis and controlled growth of CdS nanocrystals are demonstrated.
CFD simulation of coaxial injectors
NASA Technical Reports Server (NTRS)
Landrum, D. Brian
1993-01-01
The development of improved performance models for the Space Shuttle Main Engine (SSME) is an important, ongoing program at NASA MSFC. These models allow prediction of overall system performance, as well as analysis of run-time anomalies which might adversely affect engine performance or safety. Due to the complexity of the flow fields associated with the SSME, NASA has increasingly turned to Computational Fluid Dynamics (CFD) techniques as modeling tools. An important component of the SSME system is the fuel preburner, which consists of a cylindrical chamber with a plate containing 264 coaxial injector elements at one end. A fuel-rich mixture of gaseous hydrogen and liquid oxygen is injected and combusted in the chamber. This process preheats the hydrogen fuel before it enters the main combustion chamber, powers the hydrogen turbopump, and provides a heat dump for nozzle cooling. Issues of interest include the temperature and pressure fields at the turbine inlet and the thermal compatibility between the preburner chamber and injector plate. Performance anomalies can occur due to incomplete combustion, blocked injector ports, etc., and the performance model should include the capability to simulate the effects of these anomalies. The current approach to the numerical simulation of the SSME fuel preburner flow field is to use a global model based on the MSFC-sponsored FDNS code. This code does not have the capability of modeling several aspects of the problem, such as detailed modeling of the coaxial injectors. Therefore, an effort has been initiated to develop a detailed simulation of the preburner coaxial injectors and provide gas-phase boundary conditions just downstream of the injector face as input to the FDNS code. This simulation should include three-dimensional geometric effects such as proximity of injectors to baffles and chamber walls and interaction between injectors. This report describes an investigation into the numerical simulation of GH2/LOX coaxial injectors. The following sections discuss the physical aspects of injectors, the CFD code employed, and preliminary results of a simulation of a single coaxial injector for which experimental data are available. It is hoped that this work will lay the foundation for the development of a unique and useful tool to support the SSME program.
Anthropogenic endocrine disrupting chemicals (EDCs) or chemical mixtures alter androgen-response tissues via a variety of mechanisms including mimicking or blocking the action of the natural ligand to the androgen receptor (AR), inhibiting steroid hormone synthesis or by acting a...
Complementary Reliability-Based Decodings of Binary Linear Block Codes
NASA Technical Reports Server (NTRS)
Fossorier, Marc P. C.; Lin, Shu
1997-01-01
This correspondence presents a hybrid reliability-based decoding algorithm which combines the reprocessing method based on the most reliable basis and a generalized Chase-type algebraic decoder based on the least reliable positions. It is shown that reprocessing with a simple additional algebraic decoding effort achieves significant coding gain. For long codes, the order of reprocessing required to achieve asymptotic optimum error performance is reduced by approximately 1/3. This significantly reduces the computational complexity, especially for long codes. Also, a more efficient criterion for stopping the decoding process is derived based on the knowledge of the algebraic decoding solution.
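The least-reliable-positions half of such a hybrid scheme can be illustrated with a Chase-type search. In the hedged sketch below, a (7,4) Hamming syndrome decoder stands in for the algebraic decoder, test patterns are applied to the least reliable positions of the hard decision, and the candidate codeword with the best correlation metric wins; the most-reliable-basis reprocessing half is omitted.

```python
# Chase-type candidate search sketch; the (7,4) Hamming code is a stand-in
# for the generalized algebraic decoder of the paper.
import itertools
import numpy as np

H = np.array([[1,0,1,0,1,0,1],
              [0,1,1,0,0,1,1],
              [0,0,0,1,1,1,1]])     # parity checks; column j encodes j in binary

def hamming_decode(hard):
    """Single-error syndrome decoding."""
    syndrome = H @ hard % 2
    pos = int("".join(map(str, syndrome[::-1])), 2)  # syndrome = error position
    out = hard.copy()
    if pos:
        out[pos - 1] ^= 1
    return out

def chase(r, num_flips=2):
    hard = (r < 0).astype(int)                # BPSK: bit 0 -> +1, bit 1 -> -1
    lrp = np.argsort(np.abs(r))[:num_flips]   # least reliable positions
    best, best_metric = None, -np.inf
    for flips in itertools.product([0, 1], repeat=num_flips):
        cand = hard.copy()
        cand[lrp] ^= flips                    # apply test pattern
        cw = hamming_decode(cand)             # algebraic decoding of candidate
        metric = np.sum(r * (1 - 2 * cw))     # correlation with +/-1 codeword
        if metric > best_metric:
            best, best_metric = cw, metric
    return best

r = np.array([0.9, -1.1, 0.2, 1.0, -0.1, -0.8, 1.2])  # noisy received values
print(chase(r))
```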
Djordjevic, Ivan B; Xu, Lei; Wang, Ting
2008-09-15
We present two PMD compensation schemes suitable for use in multilevel (M>=2) block-coded modulation schemes with coherent detection. The first scheme is based on a BLAST-type polarization-interference cancellation scheme, and the second is based on iterative polarization cancellation. Both schemes use LDPC codes as channel codes. The proposed PMD compensation schemes are evaluated by employing coded-OFDM and coherent detection. When used in combination with girth-10 LDPC codes, those schemes outperform polarization-time-coding-based OFDM by 1 dB at a BER of 10^-9, and provide two times higher spectral efficiency. The proposed schemes perform comparably and are able to compensate even 1200 ps of differential group delay with negligible penalty.
Algorithm 782: codes for rank-revealing QR factorizations of dense matrices.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bischof, C. H.; Quintana-Orti, G.; Mathematics and Computer Science
1998-06-01
This article describes a suite of codes, as well as associated testing and timing drivers, for computing rank-revealing QR (RRQR) factorizations of dense matrices. The main contribution is an efficient block algorithm for approximating an RRQR factorization, employing a windowed version of the commonly used Golub pivoting strategy and improved versions of the RRQR algorithms for triangular matrices originally suggested by Chandrasekaran and Ipsen and by Pan and Tang, respectively. We highlight usage and features of these codes.
1985-10-01
Keywords: regions, Com-Geom, region identification, GIFT, material codes. The technique of combinatorial geometry (Com-Geom) is used to describe targets; the Com-Geom data are used as input to the Geometric Information for Targets (GIFT) computer code. This report documents the combinatorial geometry (Com-Geom) target description data, which is the input data for the GIFT code.
Genetic code, hamming distance and stochastic matrices.
He, Matthew X; Petoukhov, Sergei V; Ricci, Paolo E
2004-09-01
In this paper we use the Gray code representation of the genetic code C=00, U=10, G=11 and A=01 (C pairs with G, A pairs with U) to generate a sequence of genetic code-based matrices. In connection with these code-based matrices, we use the Hamming distance to generate a sequence of numerical matrices. We then further investigate the properties of the numerical matrices and show that they are doubly stochastic and symmetric. We determine the frequency distributions of the Hamming distances, building blocks of the matrices, decomposition and iterations of matrices. We present an explicit decomposition formula for the genetic code-based matrix in terms of permutation matrices, which provides a hypercube representation of the genetic code. It is also observed that there is a Hamiltonian cycle in a genetic code-based hypercube.
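A hedged sketch of the construction follows, using dinucleotides rather than full codons to keep the matrix small: the Gray-code labels generate bit strings, pairwise Hamming distances give a numerical matrix, and constant row and column sums confirm the doubly stochastic property after normalization.

```python
# Gray-code labels for the bases and the Hamming-distance matrix they induce.
from itertools import product

import numpy as np

LABEL = {"C": "00", "U": "10", "G": "11", "A": "01"}   # Gray-code assignment

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

dinucs = ["".join(p) for p in product("CUGA", repeat=2)]
bits = ["".join(LABEL[ch] for ch in d) for d in dinucs]

# 16x16 matrix of Hamming distances between dinucleotide bit labels.
D = np.array([[hamming(a, b) for b in bits] for a in bits])

# Every row and column sums to the same constant, so D / row_sum is doubly
# stochastic, and the matrix is symmetric by construction.
assert np.all(D.sum(axis=0) == D.sum(axis=0)[0])
assert np.all(D.sum(axis=1) == D.sum(axis=1)[0])
assert np.array_equal(D, D.T)
print(D.sum(axis=0)[0])   # common row/column sum (32 here)
```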
Multi-blocking strategies for the INS3D incompressible Navier-Stokes code
NASA Technical Reports Server (NTRS)
Gatlin, Boyd
1990-01-01
With the continuing development of bigger and faster supercomputers, computational fluid dynamics (CFD) has become a useful tool for real-world engineering design and analysis. However, the number of grid points necessary to resolve realistic flow fields numerically can easily exceed the memory capacity of available computers. In addition, geometric shapes of flow fields, such as those in the Space Shuttle Main Engine (SSME) power head, may be impossible to fill with continuous grids upon which to obtain numerical solutions to the equations of fluid motion. The solution to this dilemma is simply to decompose the computational domain into subblocks of manageable size. Computer codes that are single-block by construction can be modified to handle multiple blocks, but ad-hoc changes in the FORTRAN have to be made for each geometry treated. For engineering design and analysis, what is needed is generalization so that the blocking arrangement can be specified by the user. INS3D is a computer program for the solution of steady, incompressible flow problems. It is used frequently to solve engineering problems in the CFD Branch at Marshall Space Flight Center. INS3D uses an implicit solution algorithm and the concept of artificial compressibility to provide the necessary coupling between the pressure field and the velocity field. The development of generalized multi-block capability in INS3D is described.
The analysis of convolutional codes via the extended Smith algorithm
NASA Technical Reports Server (NTRS)
Mceliece, R. J.; Onyszchuk, I.
1993-01-01
Convolutional codes have been the central part of most error-control systems in deep-space communication for many years. Almost all such applications, however, have used the restricted class of (n,1), also known as 'rate 1/n,' convolutional codes. The more general class of (n,k) convolutional codes contains many potentially useful codes, but their algebraic theory is difficult and has proved to be a stumbling block in the evolution of convolutional coding systems. In this article, the situation is improved by describing a set of practical algorithms for computing certain basic things about a convolutional code (among them the degree, the Forney indices, a minimal generator matrix, and a parity-check matrix), which are usually needed before a system using the code can be built. The approach is based on the classic Forney theory for convolutional codes, together with the extended Smith algorithm for polynomial matrices, which is introduced in this article.
A comparison of retrobulbar block, sub-Tenon block, and topical anesthesia during cataract surgery.
Ryu, Jung-Hee; Kim, Minsuk; Bahk, Jae-Hyon; Do, Sang-Hwan; Cheong, Il-Young; Kim, Yong-Chul
2009-01-01
This randomized, double-blinded, prospective study was performed to compare the intraoperative hemodynamic variables and the patient-reported outcomes, such as intra- and postoperative analgesia and patient satisfaction, of retrobulbar block, sub-Tenon block, and topical anesthesia during cataract surgery under monitored anesthesia care. Eighty-one patients, ASA physical status I-III, undergoing elective cataract surgery under monitored anesthesia care, aged between 43 and 78 years, were randomly assigned to three groups: retrobulbar block (group R), sub-Tenon block (group S), or topical anesthesia (group T). Three minutes after the start of monitored anesthesia care with lidocaine-propofol-remifentanil mixture, an ophthalmologist performed regional anesthesia. Intraoperative hemodynamics, pain score, and patients' satisfaction with the anesthetic experiences were recorded by a study-blinded anesthesiologist. Mean arterial pressure and heart rate in group R were significantly higher than those in groups S and T during and just after the regional block (p<0.05). Group R required smaller dosage of patient controlled sedation and fewer supplemental bolus doses than groups S and T (p<0.05). On the other hand, group S showed the highest satisfaction scores among the three groups (p<0.05). Sub-Tenon block seems to be better than retrobulbar block and topical anesthesia in patient satisfaction though adequate analgesia was achieved after retrobulbar block during cataract surgery under monitored anesthesia care.
Shadowfax: Moving mesh hydrodynamical integration code
NASA Astrophysics Data System (ADS)
Vandenbroucke, Bert
2016-05-01
Shadowfax simulates galaxy evolution. Written in object-oriented modular C++, it evolves a mixture of gas, subject to the laws of hydrodynamics and gravity, and any collisionless fluid only subject to gravity, such as cold dark matter or stars. For the hydrodynamical integration, it makes use of a (co-) moving Lagrangian mesh. The code has a 2D and 3D version, contains utility programs to generate initial conditions and visualize simulation snapshots, and its input/output is compatible with a number of other simulation codes, e.g. Gadget2 (ascl:0003.001) and GIZMO (ascl:1410.003).
The three-dimensional Multi-Block Advanced Grid Generation System (3DMAGGS)
NASA Technical Reports Server (NTRS)
Alter, Stephen J.; Weilmuenster, Kenneth J.
1993-01-01
As the size and complexity of three dimensional volume grids increases, there is a growing need for fast and efficient 3D volumetric elliptic grid solvers. Present day solvers are limited by computational speed and do not have all the capabilities such as interior volume grid clustering control, viscous grid clustering at the wall of a configuration, truncation error limiters, and convergence optimization residing in one code. A new volume grid generator, 3DMAGGS (Three-Dimensional Multi-Block Advanced Grid Generation System), which is based on the 3DGRAPE code, has evolved to meet these needs. This is a manual for the usage of 3DMAGGS and contains five sections, including the motivations and usage, a GRIDGEN interface, a grid quality analysis tool, a sample case for verifying correct operation of the code, and a comparison to both 3DGRAPE and GRIDGEN3D. Since it was derived from 3DGRAPE, this technical memorandum should be used in conjunction with the 3DGRAPE manual (NASA TM-102224).
Evolutionary Construction of Block-Based Neural Networks in Consideration of Failure
NASA Astrophysics Data System (ADS)
Takamori, Masahito; Koakutsu, Seiichi; Hamagami, Tomoki; Hirata, Hironori
In this paper we propose a modified gene coding and an evolutionary construction procedure that accounts for failure in the evolutionary construction of Block-Based Neural Networks (BBNNs). In the modified gene coding, the genes of the weights are arranged on the chromosome according to the positional relation between the weight genes and the structure genes. This increases the efficiency of search by crossover, and is thus expected to improve the convergence rate of construction and shorten construction time. In the evolutionary construction accounting for failure, a structure adapted to the failure is built in the state where the failure has occurred, so that the BBNN can be reconstructed in a short time when a failure occurs. To evaluate the proposed method, we apply it to pattern classification and autonomous mobile robot control problems. The computational experiments indicate that the proposed method can improve the convergence rate of construction and shorten construction and reconstruction times.
Protograph LDPC Codes for the Erasure Channel
NASA Technical Reports Server (NTRS)
Pollara, Fabrizio; Dolinar, Samuel J.; Divsalar, Dariush
2006-01-01
This viewgraph presentation reviews the use of protograph Low Density Parity Check (LDPC) codes for erasure channels. A protograph is a Tanner graph with a relatively small number of nodes. A "copy-and-permute" operation can be applied to the protograph to obtain larger derived graphs of various sizes. For very high code rates and short block sizes, a low asymptotic threshold criterion is not the best approach to designing LDPC codes; simple protographs with much regularity and low maximum node degrees appear to be the best choices. Quantized-rateless protograph LDPC codes can be built by careful design of the protograph such that multiple puncturing patterns still permit message-passing decoding to proceed.
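A hedged sketch of the copy-and-permute operation: each nonzero entry of a small base matrix is replaced by a Z x Z circulant permutation, producing the derived graph's parity-check matrix. The base matrix and shift values below are illustrative, and protographs with parallel edges are not handled.

```python
# Protograph lifting ("copy-and-permute") sketch with circulant permutations.
import numpy as np

def lift(base, shifts, Z):
    """Expand a protograph base matrix into an H of size (m*Z) x (n*Z)."""
    m, n = base.shape
    H = np.zeros((m * Z, n * Z), dtype=int)
    I = np.eye(Z, dtype=int)
    for i in range(m):
        for j in range(n):
            if base[i, j]:
                H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] = np.roll(I, shifts[i][j], axis=1)
    return H

base = np.array([[1, 1, 1, 0],
                 [0, 1, 1, 1]])     # small 2x4 protograph
shifts = [[0, 1, 3, 0],
          [0, 2, 0, 1]]            # circulant shift per protograph edge
H = lift(base, shifts, Z=4)
print(H.shape)                     # (8, 16): the derived graph is Z copies
```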
The Design and Implementation of a Read Prediction Buffer
1992-12-01
Table of contents excerpt: thesis structure; the read prediction algorithm and buffer design; figures include a basic multiplexer cell and block diagram simulation labels.
Automatic-repeat-request error control schemes
NASA Technical Reports Server (NTRS)
Lin, S.; Costello, D. J., Jr.; Miller, M. J.
1983-01-01
Error detection incorporated with automatic-repeat-request (ARQ) is widely used for error control in data communication systems. This method of error control is simple and provides high system reliability. If a properly chosen code is used for error detection, virtually error-free data transmission can be attained. Various types of ARQ and hybrid ARQ schemes, and error detection using linear block codes are surveyed.
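A hedged sketch of the idea in the last sentence: a stop-and-wait ARQ loop in which a CRC, standing in for the linear block codes used for error detection, decides between accepting a frame and retransmitting over a randomly corrupting channel.

```python
# Stop-and-wait ARQ sketch with CRC-32 error detection and a simulated
# bit-flipping channel.
import random
import zlib

def send_with_arq(payload, p_corrupt=0.3, max_tries=20):
    for attempt in range(1, max_tries + 1):
        frame = payload + zlib.crc32(payload).to_bytes(4, "big")
        # Channel: occasionally flip one bit somewhere in the frame.
        if random.random() < p_corrupt:
            i = random.randrange(len(frame) * 8)
            frame = bytearray(frame)
            frame[i // 8] ^= 1 << (i % 8)
            frame = bytes(frame)
        data, crc = frame[:-4], int.from_bytes(frame[-4:], "big")
        if zlib.crc32(data) == crc:      # receiver: accept on a clean check
            return data, attempt
        # otherwise: discard and fall through to retransmit
    raise RuntimeError("retry limit exceeded")

data, tries = send_with_arq(b"telemetry block")
print(tries, data)
```

An undetected error requires the corruption pattern to preserve the CRC, which is what "virtually error-free transmission" with a properly chosen detection code refers to.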
Scene-aware joint global and local homographic video coding
NASA Astrophysics Data System (ADS)
Peng, Xiulian; Xu, Jizheng; Sullivan, Gary J.
2016-09-01
Perspective motion is commonly represented in video content that is captured and compressed for various applications including cloud gaming, vehicle and aerial monitoring, etc. Existing approaches based on an eight-parameter homography motion model cannot deal with this efficiently, either due to low prediction accuracy or excessive bit rate overhead. In this paper, we consider the camera motion model and scene structure in such video content and propose a joint global and local homography motion coding approach for video with perspective motion. The camera motion is estimated by a computer vision approach, and camera intrinsic and extrinsic parameters are globally coded at the frame level. The scene is modeled as piece-wise planes, and three plane parameters are coded at the block level. Fast gradient-based approaches are employed to search for the plane parameters for each block region. In this way, improved prediction accuracy and low bit costs are achieved. Experimental results based on the HEVC test model show that up to 9.1% bit rate savings can be achieved (with equal PSNR quality) on test video content with perspective motion. Test sequences for the example applications showed a bit rate savings ranging from 3.7 to 9.1%.
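The split between globally coded camera parameters and three per-block plane parameters follows from the plane-induced homography. The sketch below uses the common H = K (R - t v^T) K^-1 form with v = n/d; sign and normalization conventions vary, and all numeric values are illustrative.

```python
# Plane-induced homography sketch: global (K, R, t) plus per-block plane
# parameters v determine the block's warp between frames.
import numpy as np

def plane_homography(K, R, t, v):
    """3x3 homography mapping reference-frame pixels to current-frame pixels."""
    return K @ (R - np.outer(t, v)) @ np.linalg.inv(K)

K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])  # intrinsics
R = np.eye(3)                                              # small test motion
t = np.array([0.1, 0.0, 0.0])
v = np.array([0.0, 0.0, 1.0])   # plane parameters n/d: three values per block

H = plane_homography(K, R, t, v)
p = np.array([320.0, 240, 1])   # predict where a reference pixel moves
q = H @ p
print(q[:2] / q[2])
```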
1987-08-18
Subject terms: synthetic enzymes; chymotrypsin; molecular modeling; peptide synthesis. Synthetic models of alpha-chymotrypsin built using cyclodextrins show catalytic activity over a limited pH range.
Building Toward an Unmanned Aircraft System Training Strategy
2014-01-01
Operators are either trained into a new career field or cross-trained from another Air Force Specialty Code; those for Global Hawk come from the imagery analyst field. Systems discussed include the RQ-4A Global Hawk/BAMS-D Block 10 (USAF and Navy; ISR and maritime domain awareness for the Navy) and the RQ-4B Global Hawk Block 20/30 (USAF).
Anomaly-Based Intrusion Detection Systems Utilizing System Call Data
2012-03-01
Camouflage techniques of the malware include renaming its image or appending its image to a victim process. The malware targeted a particular industrial plant; exactly which one was targeted remains unknown, but a majority of the attacks took place in Iran [24]. The attack drives the plant into an unstable phase and eventually to physical damage. It is interesting to note that a particular block of code, block DB8061, is automatically ...
Numerical Simulation of the Detonation of Condensed Explosives
NASA Astrophysics Data System (ADS)
Wang, Cheng; Ye, Ting; Ning, Jianguo
The detonation process of a condensed explosive was simulated using a finite difference method. The Euler equations were applied to describe the detonation flow field, an ignition-and-growth model to describe the chemical reaction, and the Jones-Wilkins-Lee (JWL) equation of state to describe the states of the explosive and the detonation products. Based on the simple mixture rule that treats the reacting explosive as a mixture of reactant and product components, 1D and 2D codes were developed to simulate the detonation of the high explosive PBX9404. The numerical results are in good agreement with the experimental results, which demonstrates that the finite difference method, mixture rule, and chemical reaction model proposed in this paper are adequate and feasible.
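For reference, the JWL products equation of state has the closed form p(V, E) = A(1 - w/(R1 V)) exp(-R1 V) + B(1 - w/(R2 V)) exp(-R2 V) + w E / V, with V the relative volume and E the internal energy per unit initial volume. A minimal sketch follows; the coefficient values are illustrative, not a calibrated PBX9404 parameter set.

```python
# JWL pressure evaluation sketch; coefficients below are illustrative only.
import math

def jwl_pressure(V, E, A, B, R1, R2, w):
    return (A * (1 - w / (R1 * V)) * math.exp(-R1 * V)
            + B * (1 - w / (R2 * V)) * math.exp(-R2 * V)
            + w * E / V)

# Units: A, B, E in GPa (E per unit initial volume); V dimensionless.
p = jwl_pressure(V=2.2, E=8.5, A=852.4, B=18.02, R1=4.6, R2=1.3, w=0.38)
print(f"{p:.2f} GPa")
```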
Good Trellises for IC Implementation of Viterbi Decoders for Linear Block Codes
NASA Technical Reports Server (NTRS)
Moorthy, Hari T.; Lin, Shu; Uehara, Gregory T.
1997-01-01
This paper investigates trellis structures of linear block codes for the integrated circuit (IC) implementation of Viterbi decoders capable of achieving high decoding speed while satisfying a constraint on the structural complexity of the trellis in terms of the maximum number of states at any particular depth. Only uniform sectionalizations of the code trellis diagram are considered. An upper bound on the number of parallel and structurally identical (or isomorphic) subtrellises in a proper trellis for a code, without exceeding the maximum state complexity of the minimal trellis of the code, is first derived. Parallel structures of trellises with various section lengths for binary BCH and Reed-Muller (RM) codes of lengths 32 and 64 are analyzed. Next, the complexity of IC implementation of a Viterbi decoder based on an L-section trellis diagram for a code is investigated. A structural property of a Viterbi decoder called add-compare-select (ACS) connectivity, which is related to state connectivity, is introduced. This parameter affects the complexity of wire routing (interconnections within the IC). The effect of five parameters, namely (1) effective computational complexity, (2) complexity of the ACS circuit, (3) traceback complexity, (4) ACS connectivity, and (5) branch complexity of a trellis diagram, on the very large scale integration (VLSI) complexity of a Viterbi decoder is investigated. It is shown that an IC implementation of a Viterbi decoder based on a nonminimal trellis requires less area and is capable of operation at higher speed than one based on the minimal trellis when the commonly used ACS-array architecture is considered.
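The state-complexity constraint at the center of this analysis is computable directly from a generator matrix: by the standard past/future decomposition, the minimal-trellis state dimension at depth i is s_i = rank(G[:, :i]) + rank(G[:, i:]) - k over GF(2). A hedged sketch:

```python
# State-complexity profile of a binary linear code from GF(2) ranks of
# column prefixes and suffixes of its generator matrix.
import numpy as np

def gf2_rank(M):
    M = M.copy() % 2
    rank = 0
    rows, cols = M.shape
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]       # move pivot row up
        for r in range(rows):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]                   # eliminate column c elsewhere
        rank += 1
    return rank

def state_profile(G):
    k, n = G.shape
    return [gf2_rank(G[:, :i]) + gf2_rank(G[:, i:]) - k for i in range(n + 1)]

# Generator matrix of the (8,4) Reed-Muller / extended Hamming code.
G = np.array([[1,1,1,1,1,1,1,1],
              [0,0,0,0,1,1,1,1],
              [0,0,1,1,0,0,1,1],
              [0,1,0,1,0,1,0,1]])
print(state_profile(G))   # number of states at depth i is 2**s_i
```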
Good trellises for IC implementation of viterbi decoders for linear block codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Moorthy, Hari T.; Uehara, Gregory T.
1996-01-01
This paper investigates trellis structures of linear block codes for the IC (integrated circuit) implementation of Viterbi decoders capable of achieving high decoding speed while satisfying a constraint on the structural complexity of the trellis in terms of the maximum number of states at any particular depth. Only uniform sectionalizations of the code trellis diagram are considered. An upper bound on the number of parallel and structurally identical (or isomorphic) subtrellises in a proper trellis for a code without exceeding the maximum state complexity of the minimal trellis of the code is first derived. Parallel structures of trellises with various section lengths for binary BCH and Reed-Muller (RM) codes of lengths 32 and 64 are analyzed. Next, the complexity of IC implementation of a Viterbi decoder based on an L-section trellis diagram for a code is investigated. A structural property of a Viterbi decoder called ACS-connectivity which is related to state connectivity is introduced. This parameter affects the complexity of wire-routing (interconnections within the IC). The effect of five parameters namely: (1) effective computational complexity; (2) complexity of the ACS-circuit; (3) traceback complexity; (4) ACS-connectivity; and (5) branch complexity of a trellis diagram on the VLSI complexity of a Viterbi decoder is investigated. It is shown that an IC implementation of a Viterbi decoder based on a non-minimal trellis requires less area and is capable of operation at higher speed than one based on the minimal trellis when the commonly used ACS-array architecture is considered.
Hinde, Jesse; Bray, Jeremy; Kaiser, David; Mallonee, Erin
2017-02-01
To examine how institutional constraints, comprising federal actions and states' substance abuse policy environments, influence states' decisions to activate Medicaid reimbursement codes for screening and brief intervention for risky substance use in the United States. A discrete-time duration model was used to estimate the effect of institutional constraints on the likelihood of activating the Medicaid reimbursement codes. Primary constraints included federal Screening, Brief Intervention and Referral to Treatment (SBIRT) grant funding, substance abuse priority, economic climate, political climate and interstate diffusion. Study data came from publicly available secondary data sources. Federal SBIRT grant funding did not affect significantly the likelihood of activation (P = 0.628). A $1 increase in per-capita block grant funding was associated with a 10-percentage point reduction in the likelihood of activation (P = 0.003) and a $1 increase in per-capita state substance use disorder expenditures was associated with a 2-percentage point increase in the likelihood of activation (P = 0.004). States with enacted parity laws (P = 0.016) and a Democratic-controlled state government were also more likely to activate the codes. In the United States, the determinants of state activation of Medicaid Screening, Brief Intervention and Referral to Treatment (SBIRT) reimbursement codes are complex, and include more than financial considerations. Federal block grant funding is a strong disincentive to activating the SBIRT reimbursement codes, while more direct federal SBIRT grant funding has no detectable effects. © 2017 Society for the Study of Addiction.
Ho, B T; Tsai, M J; Wei, J; Ma, M; Saipetch, P
1996-01-01
A new method of video compression for angiographic images has been developed to achieve a high compression ratio (~20:1) while eliminating the block artifacts that lead to loss of diagnostic accuracy. This method adopts the Moving Picture Experts Group (MPEG) motion-compensated prediction to take advantage of frame-to-frame correlation. However, in contrast to MPEG, the error images arising from mismatches in the motion estimation are encoded by the discrete wavelet transform (DWT) rather than the block discrete cosine transform (DCT). Furthermore, the authors developed a classification scheme which labels each block in an image as intra, error, or background type and encodes it accordingly. This hybrid coding can significantly improve the compression efficiency in certain cases. The method can be generalized for any dynamic image sequence application sensitive to block artifacts.
SimITK: visual programming of the ITK image-processing library within Simulink.
Dickinson, Andrew W L; Abolmaesumi, Purang; Gobbi, David G; Mousavi, Parvin
2014-04-01
The Insight Segmentation and Registration Toolkit (ITK) is a software library used for image analysis, visualization, and image-guided surgery applications. ITK is a collection of C++ classes that poses the challenge of a steep learning curve should the user not have appropriate C++ programming experience. To remove the programming complexities and facilitate rapid prototyping, an implementation of ITK within a higher-level visual programming environment is presented: SimITK. ITK functionalities are automatically wrapped into "blocks" within Simulink, the visual programming environment of MATLAB, where these blocks can be connected to form workflows: visual schematics that closely represent the structure of a C++ program. The heavily templated C++ nature of ITK does not facilitate direct interaction between Simulink and ITK; an intermediary is required to convert respective data types and allow intercommunication. As such, a SimITK "Virtual Block" has been developed that serves as a wrapper around an ITK class which is capable of resolving the ITK data types to native Simulink data types. Part of the challenge surrounding this implementation involves automatically capturing and storing the pertinent class information that need to be refined from an initial state prior to being reflected within the final block representation. The primary result from the SimITK wrapping procedure is multiple Simulink block libraries. From these libraries, blocks are selected and interconnected to demonstrate two examples: a 3D segmentation workflow and a 3D multimodal registration workflow. Compared to their pure-code equivalents, the workflows highlight ITK usability through an alternative visual interpretation of the code that abstracts away potentially confusing technicalities.
Bandwidth efficient coding for satellite communications
NASA Technical Reports Server (NTRS)
Lin, Shu; Costello, Daniel J., Jr.; Miller, Warner H.; Morakis, James C.; Poland, William B., Jr.
1992-01-01
An error control coding scheme was devised to achieve large coding gain and high reliability by using coded modulation with reduced decoding complexity. To achieve a 3 to 5 dB coding gain and moderate reliability, the decoding complexity is quite modest; in fact, to achieve a 3 dB coding gain the decoding complexity is quite simple, no matter whether trellis coded modulation or block coded modulation is used. However, to achieve coding gains exceeding 5 dB, the decoding complexity increases drastically, and the implementation of the decoder becomes very expensive and impractical. The use of coded modulation in conjunction with concatenated (or cascaded) coding is proposed. A good short bandwidth-efficient modulation code is used as the inner code, and a relatively powerful Reed-Solomon code is used as the outer code. With properly chosen inner and outer codes, a concatenated coded modulation scheme not only can achieve large coding gains and high reliability with good bandwidth efficiency but also can be practically implemented. This combination of coded modulation and concatenated coding offers a way of achieving the best of three worlds: reliability and coding gain, bandwidth efficiency, and decoding complexity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adrian Miron; Joshua Valentine; John Christenson
2009-10-01
The current state of the art in nuclear fuel cycle (NFC) modeling is an eclectic mixture of codes with various levels of applicability, flexibility, and availability. In support of advanced fuel cycle systems analyses, especially those of the Advanced Fuel Cycle Initiative (AFCI), the University of Cincinnati, in collaboration with Idaho State University, carried out a detailed review of the existing codes describing various aspects of the nuclear fuel cycle and identified the research and development needs required for a comprehensive model of the global nuclear energy infrastructure and the associated nuclear fuel cycles. Relevant information obtained on the NFC codes was compiled into a relational database that allows easy access to various codes' properties. Additionally, the research analyzed the gaps in the NFC computer codes with respect to their potential integration into programs that perform comprehensive NFC analysis.
Non-Genomic Effects of Xenoestrogen Mixtures
Viñas, René; Jeng, Yow-Jiun; Watson, Cheryl S.
2012-01-01
Xenoestrogens (XEs) are chemicals derived from a variety of natural and anthropogenic sources that can interfere with endogenous estrogens by either mimicking or blocking their responses via non-genomic and/or genomic signaling mechanisms. Disruption of estrogens’ actions through the less-studied non-genomic pathway can alter such functional end points as cell proliferation, peptide hormone release, catecholamine transport, and apoptosis, among others. Studies of potentially adverse effects due to mixtures and to low doses of endocrine-disrupting chemicals have recently become more feasible, though few so far have included actions via the non-genomic pathway. Physiologic estrogens and XEs evoke non-monotonic dose responses, with different compounds having different patterns of actions dependent on concentration and time, making mixture assessments all the more challenging. In order to understand the spectrum of toxicities and their mechanisms, future work should focus on carefully studying individual and mixture components across a range of concentrations and cellular pathways in a variety of tissue types. PMID:23066391
Qiu, Guo-Hua
2016-01-01
In this review, the protective function of the abundant non-coding DNA in the eukaryotic genome is discussed from the perspective of genome defense against exogenous nucleic acids. Peripheral non-coding DNA has been proposed to act as a bodyguard that protects the genome and the central protein-coding sequences from ionizing radiation-induced DNA damage. In the proposed mechanism of protection, the radicals generated by water radiolysis in the cytosol and IR energy are absorbed, blocked and/or reduced by peripheral heterochromatin; then, the DNA damage sites in the heterochromatin are removed and expelled from the nucleus to the cytoplasm through nuclear pore complexes, most likely through the formation of extrachromosomal circular DNA. To strengthen this hypothesis, this review summarizes the experimental evidence supporting the protective function of non-coding DNA against exogenous nucleic acids. Based on these data, I hypothesize herein about the presence of an additional line of defense formed by small RNAs in the cytosol in addition to their bodyguard protection mechanism in the nucleus. Therefore, exogenous nucleic acids may be initially inactivated in the cytosol by small RNAs generated from non-coding DNA via mechanisms similar to the prokaryotic CRISPR-Cas system. Exogenous nucleic acids may enter the nucleus, where some are absorbed and/or blocked by heterochromatin and others integrate into chromosomes. The integrated fragments and the sites of DNA damage are removed by repetitive non-coding DNA elements in the heterochromatin and excluded from the nucleus. Therefore, the normal eukaryotic genome and the central protein-coding sequences are triply protected by non-coding DNA against invasion by exogenous nucleic acids. This review provides evidence supporting the protective role of non-coding DNA in genome defense. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Palmer, Grant; Prabhu, Dinesh; Brandis, Aaron; McIntyre, Timothy J.
2011-01-01
Thermochemical relaxation behind a normal shock in Mars and Titan gas mixtures is simulated using a CFD solver, DPLR, for a hemisphere of 1 m radius; the thermochemical relaxation along the stagnation streamline is considered equivalent to the flow behind a normal shock. Flow simulations are performed for a Titan gas mixture (98% N2, 2% CH4 by volume) for shock speeds of 5.7 and 7.6 km/s and pressures ranging from 20 to 1000 Pa, and a Mars gas mixture (96% CO2, and 4% N2 by volume) for a shock speed of 8.6 km/s and freestream pressure of 13 Pa. For each case, the temperatures and number densities of chemical species obtained from the CFD flow predictions are used as an input to a line-by-line radiation code, NEQAIR. The NEQAIR code is then used to compute the spatial distribution of volumetric radiance starting from the shock front to the point where thermochemical equilibrium is nominally established. Computations of volumetric spectral radiance assume Boltzmann distributions over radiatively linked electronic states of atoms and molecules. The results of these simulations are compared against experimental data acquired in the X2 facility at the University of Queensland, Australia. The experimental measurements were taken over a spectral range of 310-450 nm where the dominant contributor to radiation is the CN violet band system. In almost all cases, the present approach of computing the spatial variation of post-shock volumetric radiance by applying NEQAIR along a stagnation line computed using a high-fidelity flow solver with good spatial resolution of the relaxation zone is shown to replicate trends in measured relaxation of radiance for both Mars and Titan gas mixtures.
An object-oriented approach for parallel self adaptive mesh refinement on block structured grids
NASA Technical Reports Server (NTRS)
Lemke, Max; Witsch, Kristian; Quinlan, Daniel
1993-01-01
Self-adaptive mesh refinement dynamically matches the computational demands of a solver for partial differential equations to the activity in the application's domain. In this paper we present two C++ class libraries, P++ and AMR++, which significantly simplify the development of sophisticated adaptive mesh refinement codes on (massively) parallel distributed memory architectures. The development is based on our previous research in this area. The C++ class libraries provide abstractions to separate the issues of developing parallel adaptive mesh refinement applications into those of parallelism, abstracted by P++, and adaptive mesh refinement, abstracted by AMR++. P++ is a parallel array class library to permit efficient development of architecture independent codes for structured grid applications, and AMR++ provides support for self-adaptive mesh refinement on block-structured grids of rectangular non-overlapping blocks. Using these libraries, the application programmers' work is greatly simplified to primarily specifying the serial single grid application and obtaining the parallel and self-adaptive mesh refinement code with minimal effort. Initial results for simple singular perturbation problems solved by self-adaptive multilevel techniques (FAC, AFAC), being implemented on the basis of prototypes of the P++/AMR++ environment, are presented. Singular perturbation problems frequently arise in large applications, e.g. in the area of computational fluid dynamics. They usually have solutions with layers which require adaptive mesh refinement and fast basic solvers in order to be resolved efficiently.
The study on dynamic cadastral coding rules based on kinship relationship
NASA Astrophysics Data System (ADS)
Xu, Huan; Liu, Nan; Liu, Renyi; Lu, Jingfeng
2007-06-01
Cadastral coding rules are an important supplement to the existing national and local standard specifications for building cadastral databases. After analyzing the course of cadastral change, especially parcel change, with the method of object-oriented analysis, a set of dynamic cadastral coding rules based on kinship relationships corresponding to the cadastral change is put forward, and a coding format composed of street code, block code, father parcel code, child parcel code, and grandchild parcel code is worked out within the county administrative area. The coding rules have been applied to the development of an urban cadastral information system called "ReGIS", which is not only able to generate the cadastral code automatically according to both the type of parcel change and the coding rules, but is also capable of checking whether the code is spatiotemporally unique before the parcel is stored in the database. The system has been used in several cities of Zhejiang Province and received a favorable response, which verifies the feasibility and effectiveness of the coding rules to some extent.
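A hedged sketch of the five-field code format described above; the field widths and the zero-means-unsubdivided convention are assumptions for illustration, not the paper's specification.

```python
# Compose/parse sketch of a street / block / father / child / grandchild
# cadastral code; widths are illustrative.
from dataclasses import dataclass

@dataclass
class CadastralCode:
    street: int
    block: int
    father: int
    child: int = 0        # 0 = not subdivided at this level
    grandchild: int = 0

    def encode(self) -> str:
        return (f"{self.street:03d}{self.block:03d}"
                f"{self.father:04d}{self.child:03d}{self.grandchild:03d}")

    @classmethod
    def decode(cls, s: str) -> "CadastralCode":
        return cls(int(s[0:3]), int(s[3:6]), int(s[6:10]),
                   int(s[10:13]), int(s[13:16]))

parent = CadastralCode(street=12, block=5, father=27)
child = CadastralCode(12, 5, 27, child=1)   # kinship: a split of the father parcel
assert CadastralCode.decode(parent.encode()) == parent
print(parent.encode(), child.encode())
```

Because a child parcel's code embeds its father's code, the kinship chain of any parcel is recoverable from the code alone, which is what makes the coding "dynamic" across cadastral changes.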
Coding and decoding for code division multiple user communication systems
NASA Technical Reports Server (NTRS)
Healy, T. J.
1985-01-01
A new algorithm is introduced which decodes code division multiple user communication signals. The algorithm makes use of the distinctive form or pattern of each signal to separate it from the composite signal created by the multiple users. Although the algorithm is presented in terms of frequency-hopped signals, the actual transmitter modulator can use any of the existing digital modulation techniques. The algorithm is applicable to error-free codes or to codes where controlled interference is permitted. It can be used when block synchronization is assumed, and in some cases when it is not. The paper also discusses briefly some of the codes which can be used in connection with the algorithm, and relates the algorithm to past studies which use other approaches to the same problem.
Studies on the muscle-paralyzing components of the juice of the banana plant.
Singh, Y N; Inman, W D; Johnson, A; Linnell, E J
1993-01-01
The stem juice of the banana plant (Musa species) has been used as an arrow poison by African tribesmen. Lyophilized, partially purified extracts of the juice augment and then block both directly and indirectly evoked contractions of the mouse diaphragm. We have isolated, purified, and determined the chemical composition of the active ingredients, and characterized their pharmacological activity. The lyophilized sample was extracted with a methanol-water (MeOH-H2O) (50/50) mixture and vacuum filtered. The filtrate was rotary evaporated and crystallized in a MeOH-H2O mixture to yield potassium nitrate crystals (melting point 332-334 degrees C). The filtrate was concentrated and chromatographed over Sephadex LH-20 gel using MeOH-H2O (40/60) as the eluent. The active component was found to be magnesium nitrate crystals (melting point 87-89 degrees C). In the mouse isolated phrenic nerve-hemidiaphragm preparation, the pharmacological profile of the first component was similar to that of authentic potassium nitrate, which augments in low concentrations and, in higher concentrations, augments and then blocks both directly evoked muscle contraction and neuromuscular transmission. The second component had a profile of activity similar to that of authentic magnesium nitrate, which only blocks neuromuscular transmission. It can be concluded that the two major active principles in the banana stem juice are potassium nitrate and magnesium nitrate.
Baddeley, Michelle; Tobler, Philippe N.; Schultz, Wolfram
2016-01-01
Given that the range of rewarding and punishing outcomes of actions is large but neural coding capacity is limited, efficient processing of outcomes by the brain is necessary. One mechanism to increase efficiency is to rescale neural output to the range of outcomes expected in the current context, and process only experienced deviations from this expectation. However, this mechanism comes at the cost of not being able to discriminate between unexpectedly low losses when times are bad versus unexpectedly high gains when times are good. Thus, too much adaptation would result in disregarding information about the nature and absolute magnitude of outcomes, preventing learning about the longer-term value structure of the environment. Here we investigate the degree of adaptation in outcome coding brain regions in humans, for directly experienced outcomes and observed outcomes. We scanned participants while they performed a social learning task in gain and loss blocks. Multivariate pattern analysis showed two distinct networks of brain regions adapt to the most likely outcomes within a block. Frontostriatal areas adapted to directly experienced outcomes, whereas lateral frontal and temporoparietal regions adapted to observed social outcomes. Critically, in both cases, adaptation was incomplete and information about whether the outcomes arose in a gain block or a loss block was retained. Univariate analysis confirmed incomplete adaptive coding in these regions but also detected nonadapting outcome signals. Thus, although neural areas rescale their responses to outcomes for efficient coding, they adapt incompletely and keep track of the longer-term incentives available in the environment. SIGNIFICANCE STATEMENT Optimal value-based choice requires that the brain precisely and efficiently represents positive and negative outcomes. One way to increase efficiency is to adapt responding to the most likely outcomes in a given context. However, too strong adaptation would result in loss of precise representation (e.g., when the avoidance of a loss in a loss-context is coded the same as receipt of a gain in a gain-context). We investigated an intermediate form of adaptation that is efficient while maintaining information about received gains and avoided losses. We found that frontostriatal areas adapted to directly experienced outcomes, whereas lateral frontal and temporoparietal regions adapted to observed social outcomes. Importantly, adaptation was intermediate, in line with influential models of reference dependence in behavioral economics. PMID:27683899
NASA Technical Reports Server (NTRS)
Barth, Timothy J.
2014-01-01
Simulation codes often utilize finite-dimensional approximation, resulting in numerical error. Examples include numerical methods utilizing grids and finite-dimensional basis functions, and particle methods using a finite number of particles. These same simulation codes also often contain sources of uncertainty, for example, uncertain parameters and fields associated with the imposition of initial and boundary data, and uncertain physical model parameters such as chemical reaction rates, mixture model parameters, material property parameters, etc.
We developed two new EOS additions to the TOUGH+ family of codes, the RealGasH2O and RealGas. The RealGasH2O EOS option describes the non-isothermal two-phase flow of water and a real gas mixture in gas reservoirs, with a particular focus in ultra-tight (such as tight-sand and sh...
NASA Technical Reports Server (NTRS)
Chang, C. Y.; Kwok, R.; Curlander, J. C.
1987-01-01
Five coding techniques in the spatial and transform domains have been evaluated for SAR image compression: linear three-point predictor (LTPP), block truncation coding (BTC), microadaptive picture sequencing (MAPS), adaptive discrete cosine transform (ADCT), and adaptive Hadamard transform (AHT). These techniques have been tested with Seasat data. Both LTPP and BTC spatial domain coding techniques provide very good performance at rates of 1-2 bits/pixel. The two transform techniques, ADCT and AHT, demonstrate the capability to compress the SAR imagery to less than 0.5 bits/pixel without visible artifacts. Tradeoffs such as the rate distortion performance, the computational complexity, the algorithm flexibility, and the controllability of compression ratios are also discussed.
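Of the five techniques, block truncation coding is compact enough to sketch in full: each block is reduced to its mean, standard deviation, and a one-bit plane, and the decoder reconstructs two moment-preserving levels. The 4x4 block size below is illustrative.

```python
# Block truncation coding (BTC) sketch: moment-preserving two-level quantizer.
import numpy as np

def btc_encode(block):
    mu, sigma = block.mean(), block.std()
    bitplane = block >= mu                       # 1 bit per pixel
    return mu, sigma, bitplane

def btc_decode(mu, sigma, bitplane):
    n, q = bitplane.size, int(bitplane.sum())    # q pixels at or above the mean
    if q in (0, n):
        return np.full(bitplane.shape, mu)       # flat block
    a = mu - sigma * np.sqrt(q / (n - q))        # level for pixels below the mean
    b = mu + sigma * np.sqrt((n - q) / q)        # level for pixels at/above it
    return np.where(bitplane, b, a)

rng = np.random.default_rng(0)
block = rng.integers(0, 256, (4, 4)).astype(float)
rec = btc_decode(*btc_encode(block))
print(np.abs(block - rec).mean())                # per-block reconstruction error
```

At this block size the cost is the bit plane plus two statistics per block, which is consistent with the 1-2 bits/pixel operating range reported for the spatial-domain methods.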
Structured Low-Density Parity-Check Codes with Bandwidth Efficient Modulation
NASA Technical Reports Server (NTRS)
Cheng, Michael K.; Divsalar, Dariush; Duy, Stephanie
2009-01-01
In this work, we study the performance of structured Low-Density Parity-Check (LDPC) codes together with bandwidth-efficient modulations. We consider protograph-based LDPC codes that facilitate high-speed hardware implementations and have minimum distances that grow linearly with block sizes. We cover various higher-order modulations such as 8-PSK, 16-APSK, and 16-QAM. During demodulation, a demapper transforms the received in-phase and quadrature samples into reliability information that feeds the binary LDPC decoder. We compare various low-complexity demappers and provide simulation results for assorted coded-modulation combinations on the additive white Gaussian noise and independent Rayleigh fading channels.
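The demapper step can be made concrete with a max-log sketch for 8-PSK: each received sample yields one LLR per bit, computed from the squared distances to the constellation points whose labels carry a 0 or a 1 in that position. The bit labeling and noise model below are illustrative.

```python
# Max-log demapper sketch for 8-PSK producing per-bit LLRs for an LDPC decoder.
import numpy as np

M, BITS = 8, 3
labels = np.array([0, 1, 3, 2, 6, 7, 5, 4])        # illustrative Gray-like labels
points = np.exp(2j * np.pi * np.arange(M) / M)     # unit-energy 8-PSK points

def demap_maxlog(y, noise_var):
    d2 = np.abs(y - points) ** 2                   # distance to each point
    llrs = []
    for b in range(BITS):
        bit = (labels >> b) & 1
        # max-log LLR: best (smallest) distance among bit=1 points minus
        # best distance among bit=0 points; positive favors bit 0.
        llrs.append((d2[bit == 1].min() - d2[bit == 0].min()) / noise_var)
    return llrs

y = points[3] + (0.1 + 0.05j)                      # noisy sample near symbol 3
print([round(l, 2) for l in demap_maxlog(y, noise_var=0.1)])
```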
High-efficiency reconciliation for continuous variable quantum key distribution
NASA Astrophysics Data System (ADS)
Bai, Zengliang; Yang, Shenshen; Li, Yongmin
2017-04-01
Quantum key distribution (QKD) is the most mature application of quantum information technology. Information reconciliation is a crucial step in QKD and significantly affects the final secret key rates shared between two legitimate parties. We analyze and compare various construction methods of low-density parity-check (LDPC) codes and design high-performance irregular LDPC codes with a block length of 10^6. Starting from these good codes and exploiting the slice reconciliation technique based on multilevel coding and multistage decoding, we realize high-efficiency Gaussian key reconciliation with efficiency higher than 95% for signal-to-noise ratios above 1. Our demonstrated method can be readily applied in continuous variable QKD.
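The quoted efficiency has a simple definition worth making explicit: it is the ratio of the rate actually extracted by the reconciliation stage to the mutual information available on the Gaussian channel, 0.5*log2(1 + SNR). A hedged sketch with an illustrative extracted rate:

```python
# Reconciliation efficiency sketch: beta = R / (0.5 * log2(1 + SNR)).
import math

def capacity(snr):
    return 0.5 * math.log2(1 + snr)

def efficiency(rate, snr):
    return rate / capacity(snr)

snr = 1.0                     # signal-to-noise ratio of the channel
rate = 0.475                  # illustrative net rate of the slice/LDPC stage
print(f"beta = {efficiency(rate, snr):.3f}")   # 0.95 at SNR = 1
```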
Progress of IRSN R&D on ITER Safety Assessment
NASA Astrophysics Data System (ADS)
Van Dorsselaere, J. P.; Perrault, D.; Barrachin, M.; Bentaib, A.; Gensdarmes, F.; Haeck, W.; Pouvreau, S.; Salat, E.; Seropian, C.; Vendel, J.
2012-08-01
The French "Institut de Radioprotection et de Sûreté Nucléaire" (IRSN), in support to the French "Autorité de Sûreté Nucléaire", is analysing the safety of ITER fusion installation on the basis of the ITER operator's safety file. IRSN set up a multi-year R&D program in 2007 to support this safety assessment process. Priority has been given to four technical issues and the main outcomes of the work done in 2010 and 2011 are summarized in this paper: for simulation of accident scenarios in the vacuum vessel, adaptation of the ASTEC system code; for risk of explosion of gas-dust mixtures in the vacuum vessel, adaptation of the TONUS-CFD code for gas distribution, development of DUST code for dust transport, and preparation of IRSN experiments on gas inerting, dust mobilization, and hydrogen-dust mixtures explosion; for evaluation of the efficiency of the detritiation systems, thermo-chemical calculations of tritium speciation during transport in the gas phase and preparation of future experiments to evaluate the most influent factors on detritiation; for material neutron activation, adaptation of the VESTA Monte Carlo depletion code. The first results of these tasks have been used in 2011 for the analysis of the ITER safety file. In the near future, this R&D global programme may be reoriented to account for the feedback of the latter analysis or for new knowledge.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerhard Strydom
2014-04-01
The INL PHISICS code system consists of three modules providing improved core simulation capability: INSTANT (performing 3D nodal transport core calculations), MRTAU (depletion and decay heat generation) and a perturbation/mixer module. Coupling of the PHISICS code suite to the thermal hydraulics system code RELAP5-3D has recently been finalized, and as part of the code verification and validation program the exercises defined for Phase I of the OECD/NEA MHTGR 350 MW Benchmark were completed. This paper provides an overview of the MHTGR Benchmark and presents selected results of the three steady-state exercises 1-3 defined for Phase I. For Exercise 1, a stand-alone steady-state neutronics solution for an End of Equilibrium Cycle Modular High Temperature Reactor (MHTGR) was calculated with INSTANT, using the provided geometry, material descriptions, and detailed cross-section libraries. Exercise 2 required the modeling of a stand-alone thermal fluids solution. The RELAP5-3D results of four sub-cases are discussed, consisting of various combinations of coolant bypass flows and material thermophysical properties. Exercise 3 combined the first two exercises in a coupled neutronics and thermal fluids solution, and the coupled code suite PHISICS/RELAP5-3D was used to calculate the results of two sub-cases. The main focus of the paper is a comparison of the traditional RELAP5-3D "ring" model approach vs. a much more detailed model that includes kinetics feedback on the individual block level and thermal feedbacks on a triangular sub-mesh. The higher fidelity of the block model is illustrated with comparison results on the temperature, power density and flux distributions, and the typical under-predictions produced by the ring model approach are highlighted.
Experimental validation of thermodynamic mixture rules at extreme pressures and densities
NASA Astrophysics Data System (ADS)
Bradley, P. A.; Loomis, E. N.; Merritt, E. C.; Guzik, J. A.; Denne, P. H.; Clark, T. T.
2018-01-01
Accurate modeling of a mixed-material Equation of State (EOS) at high pressures (~1 to 100 Mbar) is critical for simulating inertial confinement fusion and high energy density systems. This paper presents a comparison of two mixing rule models with experiment to assess their applicability in this regime. The shock velocities of polystyrene, aluminum, and nickel aluminide (NiAl) were measured at a shock pressure of ~3 TPa (~30 Mbar) on the Omega EP laser facility (Laboratory for Laser Energetics, University of Rochester, New York). The resultant shock velocities were compared to those derived from the RAGE (Eulerian) hydrodynamics code to validate various mixing rules used to construct an EOS for NiAl. The simulated shock transit time through the sample (Al or NiAl) matched the measurements to within the ±45 ps measurement uncertainty. The law of partial volumes (Amagat) and the law of partial pressures (Dalton) mixture rules provided equally good matches to the NiAl shock data. Other studies showed that the Amagat mixing rule is superior, and we recommend it since our results also show a satisfactory match. The comparable quality of the simulation-to-data match for the Al and NiAl samples implies that a mixture rule can supply an EOS for plasma mixtures with adequate fidelity for simulations where mixing takes place, such as advective mix in an Eulerian code or when two materials are mixed together via diffusion, turbulence, or other physical processes.
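To make the two rules concrete, here is a hypothetical Python sketch contrasting them for a toy two-component system. The hard-sphere-style component EOS, covolumes, and conditions are all illustrative assumptions standing in for real tabular EOS data: Amagat adds mole-weighted component volumes at a common pressure, while Dalton sums component pressures at a common volume.

```python
from scipy.optimize import brentq

R = 8.314                      # J/(mol K)
b1, b2 = 3.0e-5, 1.0e-5        # illustrative covolumes (m^3/mol) of the two components

def v_component(P, T, b):
    """Molar volume from a toy component EOS, P = RT/(v - b)."""
    return R * T / P + b

def v_amagat(P, T, x1):
    """Law of partial volumes: mole-weighted component volumes at common P, T."""
    return x1 * v_component(P, T, b1) + (1 - x1) * v_component(P, T, b2)

def v_dalton(P, T, x1):
    """Law of partial pressures: each component fills the mixture volume;
    solve for v such that the component pressures sum to P."""
    def excess(v):
        p1 = x1 * R * T / (v - x1 * b1)
        p2 = (1 - x1) * R * T / (v - (1 - x1) * b2)
        return p1 + p2 - P
    v_min = max(x1 * b1, (1 - x1) * b2) + 1e-9   # just above the covolume singularity
    return brentq(excess, v_min, 1.0)

print(v_amagat(1.0e5, 300.0, 0.5), v_dalton(1.0e5, 300.0, 0.5))
```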
Mackay, Richard; Sammells, Anthony F.
2000-01-01
Ceramics of the composition Ln_xSr_{2-x-y}Ca_yB_zM_{2-z}O_{5+δ} are provided, where Ln is an element selected from the f-block lanthanide elements and yttrium, or mixtures thereof; B is an element selected from Al, Ga, In, or mixtures thereof; M is a d-block transition element or mixtures thereof; 0.01 ≤ x ≤ 1.0; 0.01 ≤ y ≤ 0.7; 0.01 ≤ z ≤ 1.0; and δ is a number that varies to maintain charge neutrality. These ceramics are useful in ceramic membranes and exhibit high ionic conductivity, high chemical stability under catalytic membrane reactor conditions, and low coefficients of expansion. The materials of the invention are particularly useful in producing synthesis gas.
Tissue Distribution, Excretion, and Hepatic Biotransformation of Microcystin-LR in Mice
1990-07-09
[Report documentation page fragment; the abstract is truncated.] Subject terms: microcystin-LR, pharmacokinetics, biotransformation, protein binding. Surviving figure-caption fragments: the void volume of the column measured with blue dextran; Fig. 6, Econo-Pac 10DG desalting column profile of hepatic cytosolic radiolabel under denaturing conditions.
Lee, Bumshik; Kim, Munchurl
2016-08-01
In this paper, a low-complexity coding unit (CU)-level rate and distortion estimation scheme is proposed for hardware-friendly implementation of High Efficiency Video Coding (HEVC), where a Walsh-Hadamard transform (WHT)-based low-complexity integer discrete cosine transform (DCT) is employed for distortion estimation. Since HEVC adopts quadtree structures of coding blocks with hierarchical coding depths, it is difficult to estimate accurate rate and distortion values without actually performing transform, quantization, inverse transform, de-quantization, and entropy coding. Furthermore, the DCT used for rate-distortion optimization (RDO) is computationally expensive, because it requires many multiplication and addition operations for the transform block sizes of orders 4, 8, 16, and 32, and requires recursive computations to decide the optimal depths of the CU or transform unit. Full RDO-based encoding is therefore highly complex, especially for low-power implementations of HEVC encoders. In this paper, a CU-level rate and distortion estimation scheme is proposed, based on a low-complexity integer DCT that can be computed in terms of the WHT, whose coefficients are produced in the prediction stages. For CU-level rate and distortion estimation, two orthogonal matrices of sizes 4×4 and 8×8, newly designed in a butterfly structure using only addition and shift operations, are applied to the WHT. By applying the integer DCT based on the WHT and the newly designed transforms in each CU block, the texture rate can be precisely estimated after quantization using the number of non-zero quantized coefficients, and the distortion can also be precisely estimated in the transform domain without requiring de-quantization or inverse transform. In addition, a non-texture rate estimation based on a pseudo-entropy code is proposed to obtain accurate total rate estimates. The proposed rate and distortion estimation scheme can effectively be used for hardware-friendly implementation of HEVC encoders, with a 9.8% loss relative to HEVC full RDO, much less than the 20.3% and 30.2% losses of a conventional approach and a Hadamard-only scheme, respectively.
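As a side note, the arithmetic style the abstract describes (butterflies of additions and subtractions, no multipliers) is easy to sketch. Below is a hypothetical Python illustration of an unnormalized fast WHT, a transform-domain distortion estimate via Parseval's relation, and a texture-rate proxy from the count of non-zero quantized coefficients; the quantizer step and the rate proxy are our own illustrative assumptions, not the paper's model.

```python
import numpy as np

def fwht(x):
    """Unnormalized fast Walsh-Hadamard transform (length a power of 2);
    the butterfly uses only additions and subtractions."""
    x = np.array(x, dtype=float)
    h = 1
    while h < len(x):
        for i in range(0, len(x), 2 * h):
            for j in range(i, i + h):
                u, v = x[j], x[j + h]
                x[j], x[j + h] = u + v, u - v
        h *= 2
    return x

residual = np.array([3.0, -1.0, 0.5, 0.0, 2.0, -2.0, 1.0, 0.0])  # prediction residual
coeffs = fwht(residual)
q = 4.0                                   # illustrative quantizer step
quant = np.round(coeffs / q)

# Parseval for the unnormalized WHT: pixel-domain SSD equals the
# coefficient-domain SSD divided by the block length.
distortion = np.sum((coeffs - quant * q) ** 2) / len(residual)
rate_proxy = np.count_nonzero(quant)      # texture-rate estimate from non-zeros
```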
DOE Office of Scientific and Technical Information (OSTI.GOV)
2017-05-17
PeleC is an adaptive-mesh compressible hydrodynamics code for reacting flows. It solves the compressible Navier-Stokes equations with multispecies transport in a block-structured framework. The resulting algorithm is well suited to flows with localized resolution requirements and is robust to discontinuities. User-controllable refinement criteria can yield extremely small numerical dissipation and dispersion, making this code appropriate for both research and applied usage. The code is built on the AMReX library, which facilitates hierarchical parallelism and manages distributed-memory parallelism; PeleC algorithms are implemented to express shared-memory parallelism.
An Advanced simulation Code for Modeling Inductive Output Tubes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thuc Bui; R. Lawrence Ives
2012-04-27
During the Phase I program, CCR completed several major building blocks for a 3D large-signal, inductive output tube (IOT) code using modern computer languages and programming techniques. These included a 3D, Helmholtz, time-harmonic field solver with a fully functional graphical user interface (GUI), automeshing, and adaptivity. Other building blocks included the improved electrostatic Poisson solver with temporal boundary conditions, which provides temporal fields for the time-stepping particle pusher as well as the self electric field caused by time-varying space charge. The magnetostatic field solver was also updated to solve for the self magnetic field caused by time-changing current density in the output cavity gap. The goal function to optimize an IOT cavity was also formulated, and the optimization methodologies were investigated.
A look at scalable dense linear algebra libraries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dongarra, J.J.; Van de Geijn, R.A.; Walker, D.W.
1992-01-01
We discuss the essential design features of a library of scalable software for performing dense linear algebra computations on distributed-memory concurrent computers. The square block scattered decomposition is proposed as a flexible and general-purpose way of decomposing most, if not all, dense matrix problems. An object-oriented interface to the library permits more portable applications to be written, and is easy to learn and use, since details of the parallel implementation are hidden from the user. Experiments on the Intel Touchstone Delta system with a prototype code that uses the square block scattered decomposition to perform LU factorization are presented and analyzed. It was found that the code was both scalable and efficient, performing at about 14 GFLOPS (double precision) for the largest problem considered.
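A minimal sketch of the square block scattered (block-cyclic) assignment the abstract refers to, with our own toy parameters: the matrix is tiled into square blocks, and block (i, j) is owned by process (i mod Pr, j mod Pc), which spreads every region of the matrix across the whole process grid.

```python
def owner(i_block, j_block, p_rows, p_cols):
    """Square block-scattered (block-cyclic) decomposition: matrix block
    (i_block, j_block) is assigned to process (i mod p_rows, j mod p_cols)."""
    return (i_block % p_rows, j_block % p_cols)

# 6x6 blocks on a 2x3 process grid: each process owns a scattered set of blocks,
# so work stays balanced even as a factorization sweeps across the matrix.
for i in range(6):
    print([owner(i, j, 2, 3) for j in range(6)])
```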
NASA Astrophysics Data System (ADS)
Lei, Ted Chih-Wei; Tseng, Fan-Shuo
2017-07-01
This paper addresses the problem of high-complexity decoding in traditional Wyner-Ziv video coding (WZVC). The key focus is the migration of two traditionally computationally complex encoder algorithms, namely motion estimation and mode decision. To reduce the computational burden in this process, the proposed architecture adopts the partial boundary matching algorithm and four flexible types of block mode decision at the decoder. This approach does away with the need for motion estimation and mode decision at the encoder. The experimental results show that the proposed padding-block-based WZVC not only decreases decoder complexity to approximately one hundredth that of state-of-the-art DISCOVER decoding but also outperforms the DISCOVER codec by up to 3 to 4 dB.
NASA Technical Reports Server (NTRS)
Wohlen, R. L.
1976-01-01
Techniques are presented for the solution of structural dynamic systems on an electronic digital computer using FORMA (FORTRAN Matrix Analysis). FORMA is a library of subroutines coded in FORTRAN 4 for the efficient solution of structural dynamics problems. These subroutines are in the form of building blocks that can be put together to solve a large variety of structural dynamics problems. The obvious advantage of the building block approach is that programming and checkout time are limited to that required for putting the blocks together in the proper order.
Combinatorics associated with inflections and bitangents of plane quartics
NASA Astrophysics Data System (ADS)
Gizatullin, M. Kh
2013-08-01
After a preliminary survey and a description of some small Steiner systems from the standpoint of the theory of invariants of binary forms, we construct a binary Golay code (of length 24) using ideas from J. Grassmann's thesis of 1875. One of our tools is a pair of disjoint Fano planes. Another application of such pairs and properties of plane quartics is a construction of a new block design on 28 objects. This block design is a part of a dissection of the set of 288 Aronhold sevens. The dissection distributes the Aronhold sevens into 8 disjoint block designs of this type.
Parallel Gaussian elimination of a block tridiagonal matrix using multiple microcomputers
NASA Technical Reports Server (NTRS)
Blech, Richard A.
1989-01-01
The solution of a block tridiagonal matrix using parallel processing is demonstrated. The multiprocessor system on which results were obtained and the software environment used to program that system are described. Theoretical partitioning and resource allocation for the Gaussian elimination method used to solve the matrix are discussed. The results obtained from running 1, 2 and 3 processor versions of the block tridiagonal solver are presented. The PASCAL source code for these solvers is given in the appendix and may be transportable to other shared-memory parallel processors, provided that the synchronization routines are reproduced on the target system.
1981-12-01
[Fragment of a FORTRAN listing; only the following is recoverable.] ASTORE is a 256-value REAL array used to store the voltages converted from ISTORE; SBLK is the starting block; the converted voltages lie between -5.00 V and +5.00 V. Declarations include INTEGER IFILE(13), SBLK, CBLK, ISTORE(256), ST(22), IBLOCKS, JFILE(13), EBLK and REAL ASTORE(256). A loop (DO 60 I=1,256) converts each block to be printed into voltages, stores them in ASTORE, and writes ASTORE into the file named by JFILE.
Image compression using quad-tree coding with morphological dilation
NASA Astrophysics Data System (ADS)
Wu, Jiaji; Jiang, Weiwei; Jiao, Licheng; Wang, Lei
2007-11-01
In this paper, we propose a new algorithm which integrates the morphological dilation operation into quad-tree coding, so that each technique compensates for the other's drawbacks. The new algorithm can not only quickly find the seed significant coefficient for dilation but also break the block-boundary limit of quad-tree coding. We also make full use of both within-subband and cross-subband correlation to avoid the expensive cost of representing insignificant coefficients. Experimental results show that our algorithm outperforms SPECK and SPIHT. Without using any arithmetic coding, our algorithm achieves good performance with low computational cost, and it is well suited to mobile devices and scenarios with strict real-time requirements.
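To make the dilation idea concrete, here is a small hypothetical Python sketch (not the authors' code): a seed significant coefficient located by quad-tree descent grows into a cluster by repeated 4-neighbor binary dilation intersected with the significance map, a growth process that is free to cross quad-tree block boundaries. The threshold and data are illustrative.

```python
import numpy as np

def dilate4(mask):
    """One step of binary morphological dilation, 4-connected structuring element."""
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]
    out[:-1, :] |= mask[1:, :]
    out[:, 1:] |= mask[:, :-1]
    out[:, :-1] |= mask[:, 1:]
    return out

coeffs = np.abs(np.random.default_rng(1).normal(size=(16, 16)))
significant = coeffs > 2.0               # illustrative significance threshold
seed = np.zeros_like(significant)
seed[np.unravel_index(np.argmax(coeffs), coeffs.shape)] = True  # seed from quad-tree search

grown = seed
while True:                              # grow the significant cluster around the seed
    nxt = dilate4(grown) & significant
    if (nxt == grown).all():
        break
    grown = nxt
```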
NASA Astrophysics Data System (ADS)
Krasilenko, Vladimir G.; Bardachenko, Vitaliy F.; Nikolsky, Alexander I.; Lazarev, Alexander A.
2007-04-01
In this paper we show that the biologically motivated concept of time-pulse encoding offers a number of advantages (a single methodological basis, universality, simplicity of tuning, training, and programming, among others) for the creation and design of sensor systems with parallel input-output and processing, and of 2D structures for hybrid and neuro-fuzzy neurocomputers of the next generations. We present the principles of construction of programmable relational optoelectronic time-pulse coded processors based on continuous logic, order logic, and temporal wave processes. We consider a structure that extracts an analog signal of a given grade (order) and sorts analog and time-pulse coded variables. We offer an optoelectronic realization of the base relational elements of order logic, consisting of time-pulse coded phototransformers (pulse-width and pulse-phase modulators) with direct and complementary outputs, a sorting network of logical elements, and programmable commutation blocks. Simulation and experimental research provide estimates of the basic technical parameters of such devices and processors: optical input signal power of 0.2-20 μW, processing times of microseconds, supply voltages of 1.5-10 V, consumption power of hundreds of microwatts per element, extended functional possibilities, and training possibilities. We discuss possible rules and principles of training and of programmable tuning to a required function or relational operation, and the realization of hardware blocks for modifications of such processors. We show how, on the basis of such quasi-universal hardware and simple, flexible programmable tuning, it is possible to create sorting machines, neural networks, and hybrid data-processing systems with untraditional numerical systems and picture operands.
WETAIR: A computer code for calculating thermodynamic and transport properties of air-water mixtures
NASA Technical Reports Server (NTRS)
Fessler, T. E.
1979-01-01
A computer program subroutine, WETAIR, was developed to calculate the thermodynamic and transport properties of air-water mixtures. It determines the thermodynamic state from assigned values of temperature and density, pressure and density, temperature and pressure, pressure and entropy, or pressure and enthalpy. WETAIR calculates the properties of dry air and water (steam) by interpolating to obtain values from property tables, then uses simple mixing laws to calculate the properties of air-water mixtures. Properties of mixtures with water contents below 40 percent (by mass) can be calculated at temperatures from 273.2 to 1497 K and pressures to 450 MN/sq m. Dry air properties can be calculated at temperatures as low as 150 K. Water properties can be calculated at temperatures to 1747 K and pressures to 100 MN/sq m. WETAIR is available in both SFTRAN and FORTRAN.
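A minimal sketch of the interpolate-then-mix approach, assuming a simple mass-weighted mixing law and tiny made-up property tables; the real WETAIR tables and mixing laws are more elaborate.

```python
import numpy as np

# Illustrative per-component property tables: enthalpy (kJ/kg) vs temperature (K).
T_grid = np.array([300.0, 600.0, 900.0, 1200.0])
h_air = np.array([300.0, 607.0, 933.0, 1278.0])       # made-up values
h_steam = np.array([2977.0, 3705.0, 4396.0, 5148.0])  # made-up values

def mixture_enthalpy(T, w_water):
    """Interpolate each pure-component table, then apply a mass-weighted mixing law."""
    ha = np.interp(T, T_grid, h_air)
    hw = np.interp(T, T_grid, h_steam)
    return (1.0 - w_water) * ha + w_water * hw

print(mixture_enthalpy(750.0, 0.10))  # 10% water by mass, inside the <40% validity range
```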
Simulation Analysis of Computer-Controlled pressurization for Mixture Ratio Control
NASA Technical Reports Server (NTRS)
Alexander, Leslie A.; Bishop-Behel, Karen; Benfield, Michael P. J.; Kelley, Anthony; Woodcock, Gordon R.
2005-01-01
A procedural-code (C++) simulation was developed to investigate the potential for mixture ratio control of pressure-fed spacecraft rocket propulsion systems by measuring propellant flows, tank liquid quantities, or both, and using feedback from these measurements to adjust propellant tank pressures to set the correct operating mixture ratio for minimum propellant residuals. The pressurization system eliminated mechanical regulators in favor of a computer-controlled, servo-driven throttling valve. We found that a quasi-steady-state simulation (pressure and flow transients in the pressurization systems resulting from changes in flow control valve position are ignored) is adequate for this purpose. Monte Carlo methods are used to obtain simulated statistics on propellant depletion. Mixture ratio control algorithms based on proportional-integral-differential (PID) controller methods were developed. These algorithms actually set target tank pressures; the tank pressures are controlled by another PID controller. Simulation indicates this approach can provide reductions in residual propellants.
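The cascaded structure described here (an outer mixture-ratio loop that sets target tank pressures, and an inner loop that drives the throttling valve to hold them) might look as follows. This is a schematic sketch with made-up gains and signal names, not the paper's C++ simulation.

```python
class PID:
    """Textbook proportional-integral-derivative controller."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral, self.prev_err = 0.0, None

    def step(self, err, dt):
        self.integral += err * dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

mr_loop = PID(kp=0.5, ki=0.05, kd=0.0)   # mixture-ratio error -> tank-pressure target
p_loop = PID(kp=2.0, ki=0.2, kd=0.0)     # pressure error -> valve command

def control_step(mr_meas, mr_set, p_meas, p_nominal, dt):
    """One control cycle: outer loop retargets pressure, inner loop moves the valve."""
    p_target = p_nominal + mr_loop.step(mr_set - mr_meas, dt)
    valve_cmd = p_loop.step(p_target - p_meas, dt)
    return p_target, valve_cmd
```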
NASA Technical Reports Server (NTRS)
Kemp, N. H.; Lewis, P. F.
1980-01-01
The development of a computer program for the design of the thrust chamber of a CW laser-heated thruster was examined. Hydrogen was employed as the propellant gas and high-temperature absorber. The laser absorption coefficient of the mixture/laser-radiation combination is given as a function of temperature and species densities. Radiative and absorptive properties are given to determine radiation from such gas mixtures. A computer code for calculating the axisymmetric channel flow of a gas mixture in chemical equilibrium, with laser energy absorption and convective and radiative heating, is described. It is concluded that: (1) small amounts of cesium seed substantially increase the absorption coefficient of hydrogen; (2) cesium is a strong radiator and contributes greatly to the radiation of cesium-seeded hydrogen; (3) water vapor is a poor absorber; and (4) for 5.3 μm radiation, both H2O/CO and NO/CO seeded hydrogen mixtures are good absorbers.
[Quantitative analysis of nucleotide mixtures with terahertz time domain spectroscopy].
Zhang, Zeng-yan; Xiao, Ti-qiao; Zhao, Hong-wei; Yu, Xiao-han; Xi, Zai-jun; Xu, Hong-jie
2008-09-01
Adenosine, thymidine, guanosine, cytidine and uridine form the building blocks of ribonucleic acid (RNA) and deoxyribonucleic acid (DNA). Nucleosides and their derivatives all have biological activities; some can be used as medicines directly or as materials to synthesize other medicines. It is therefore meaningful to detect the components and their contents in nucleoside mixtures. In the present paper, the components and contents of mixtures of adenosine, thymidine, guanosine, cytidine and uridine were analyzed. THz absorption spectra of the pure nucleosides were set as standard spectra. The mixtures' absorption spectra were analyzed by linear regression with a non-negativity constraint to identify the components and their relative contents. The experimental and analytical results show that it is simple and effective to obtain the components and their relative percentages in the mixtures by terahertz time-domain spectroscopy, with a relative error of less than 10%. A component which is absent can be excluded exactly by this method, and the error sources are also analyzed. All the experiments and analysis confirm that this method causes no damage or contamination to the sample, so it should be a simple, effective, new method for biochemical material analysis, extending the application field of THz-TDS.
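The "linear regression with non-negative constraint" step is non-negative least squares. A hedged sketch with random placeholder spectra standing in for the measured THz standards; scipy.optimize.nnls performs the constrained fit, and components that are absent come out with zero weight, matching the exclusion behavior described above.

```python
import numpy as np
from scipy.optimize import nnls

# Columns: standard THz absorption spectra of the five pure nucleosides on a
# common frequency grid (random placeholders here, not measured data).
rng = np.random.default_rng(0)
A = rng.random((200, 5))             # 200 frequency points x 5 components
true_w = np.array([0.5, 0.2, 0.3, 0.0, 0.0])
y = A @ true_w                       # "measured" mixture spectrum

w, residual = nnls(A, y)             # non-negative least-squares fit
relative = w / w.sum()               # relative content of each component
print(np.round(relative, 3))         # absent components recover weight ~0
```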
Development of a cryogenic mixed fluid J-T cooling computer code, 'JTMIX'
NASA Technical Reports Server (NTRS)
Jones, Jack A.
1991-01-01
An initial study was performed for analyzing and predicting the temperatures and cooling capacities when mixtures of fluids are used in Joule-Thomson coolers and in heat pipes. A computer code, JTMIX, was developed for mixed gas J-T analysis for any fluid combination of neon, nitrogen, various hydrocarbons, argon, oxygen, carbon monoxide, carbon dioxide, and hydrogen sulfide. When used in conjunction with the NIST computer code, DDMIX, it has accurately predicted order-of-magnitude increases in J-T cooling capacities when various hydrocarbons are added to nitrogen, and it predicts nitrogen normal boiling point depressions to as low as 60 K when neon is added.
Molecular Effects on Coacervate-Driven Block Copolymer Self Assembly
NASA Astrophysics Data System (ADS)
Lytle, Tyer; Radhakrishna, Mithun; Sing, Charles
Two oppositely charged polymers can undergo associative phase separation in a salt solution in a process known as "complex coacervation." Recent work has used this as a motif to control the self-assembly behavior of a mixture of oppositely-charged block copolymers which form nanoscale structures. The materials formed from these complex coacervate-block copolymers (BCPs) have potential use as drug delivery systems, gels, and sensors. We have developed a hybrid Monte Carlo-Single Chain in a Mean Field (MC-SCMF) simulation method that is able to determine morphological phase diagrams for BCPs. This technique is an efficient way to calculate morphological phase diagrams and provides a clear link between molecular-level features and self-assembly behaviors. Morphological phase diagrams showing the effects of polymer concentration, salt concentration, chain length, and charge-block fraction at large charge densities on self-assembly behavior have been determined. An unexpected phase transition from disorder to hexagonal packing at large salt concentrations has been observed for charge-block fractions equal to and larger than 0.5. This is attributed to the salt filling space stabilizing the morphology of the BCP.
Li, Yuk Mun; Srinivasan, Divya; Vaidya, Parth; Gu, Yibei; Wiesner, Ulrich
2016-10-01
Deviating from the traditional formation of block copolymer derived isoporous membranes from one block copolymer chemistry, here asymmetric membranes with isoporous surface structure are derived from two chemically distinct block copolymers blended during standard membrane fabrication. As a first proof of principle, the fabrication of asymmetric membranes is reported, which are blended from two chemically distinct triblock terpolymers, poly(isoprene-b-styrene-b-(4-vinyl)pyridine) (ISV) and poly(isoprene-b-styrene-b-(dimethylamino)ethyl methacrylate) (ISA), differing in the pH-responsive hydrophilic segment. Using block copolymer self-assembly and nonsolvent induced phase separation process, pure and blended membranes are prepared by varying weight ratios of ISV to ISA. Pure and blended membranes exhibit a thin, selective layer of pores above a macroporous substructure. Observed permeabilities at varying pH values of blended membranes depend on relative triblock terpolymer composition. These results open a new direction for membrane fabrication through the use of mixtures of chemically distinct block copolymers enabling the tailoring of membrane surface chemistries and functionalities. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Correia, Andrew W; Peters, Junenette L; Levy, Jonathan I; Melly, Steven; Dominici, Francesca
2013-10-08
To investigate whether exposure to aircraft noise increases the risk of hospitalization for cardiovascular diseases in older people (≥ 65 years) residing near airports. Multi-airport retrospective study of approximately 6 million older people residing near airports in the United States. We superimposed contours of aircraft noise levels (in decibels, dB) for 89 airports for 2009 provided by the US Federal Aviation Administration on census block resolution population data to construct two exposure metrics applicable to zip code resolution health insurance data: population weighted noise within each zip code, and 90th centile of noise among populated census blocks within each zip code. 2218 zip codes surrounding 89 airports in the contiguous states. 6 027 363 people eligible to participate in the national medical insurance (Medicare) program (aged ≥ 65 years) residing near airports in 2009. Percentage increase in the hospitalization admission rate for cardiovascular disease associated with a 10 dB increase in aircraft noise, for each airport and on average across airports adjusted by individual level characteristics (age, sex, race), zip code level socioeconomic status and demographics, zip code level air pollution (fine particulate matter and ozone), and roadway density. Averaged across all airports and using the 90th centile noise exposure metric, a zip code with 10 dB higher noise exposure had a 3.5% higher (95% confidence interval 0.2% to 7.0%) cardiovascular hospital admission rate, after controlling for covariates. Despite limitations related to potential misclassification of exposure, we found a statistically significant association between exposure to aircraft noise and risk of hospitalization for cardiovascular diseases among older people living near airports.
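For concreteness, the two exposure metrics can be computed from block-level data in a few lines; the noise levels and populations below are made-up illustrations, not study data.

```python
import numpy as np

# Illustrative census blocks within one zip code: noise contour level (dB)
# and block population.
noise = np.array([52.0, 57.5, 61.0, 48.0, 66.5])
pop = np.array([120, 300, 80, 40, 210])

pop_weighted_noise = np.average(noise, weights=pop)   # metric 1: population-weighted
pct90_noise = np.percentile(noise[pop > 0], 90)       # metric 2: 90th centile, populated blocks
print(pop_weighted_noise, pct90_noise)
```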
Bandwidth efficient CCSDS coding standard proposals
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.; Perez, Lance C.; Wang, Fu-Quan
1992-01-01
The basic concatenated coding system for the space telemetry channel consists of a Reed-Solomon (RS) outer code, a symbol interleaver/deinterleaver, and a bandwidth-efficient trellis inner code. A block diagram of this configuration is shown. The system may operate with or without the outer code and interleaver. In this recommendation, the outer code remains the (255,223) RS code over GF(2^8) with an error-correcting capability of t = 16 eight-bit symbols. This code's excellent performance and the existence of fast, cost-effective decoders justify its continued use. The purpose of the interleaver/deinterleaver is to distribute burst errors out of the inner decoder over multiple codewords of the outer code. This utilizes the error-correcting capability of the outer code more efficiently and reduces the probability of an RS decoder failure. Since the space telemetry channel is not considered bursty, the required interleaving depth is primarily a function of the inner decoding method. A diagram of an interleaver with depth 4 that is compatible with the (255,223) RS code is shown. Specific interleaver requirements are discussed after the inner code recommendations.
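For illustration, a depth-4 block symbol interleaver matched to 255-symbol RS codewords can be sketched as below (a generic row-write/column-read construction under our own assumptions, not the text of the recommendation). A channel burst of length L then touches at most ceil(L/4) symbols of any one codeword.

```python
def interleave(symbols, depth=4, codeword_len=255):
    """Write `depth` codewords row-wise, read the array column-wise."""
    assert len(symbols) == depth * codeword_len
    rows = [symbols[i * codeword_len:(i + 1) * codeword_len] for i in range(depth)]
    return [rows[r][c] for c in range(codeword_len) for r in range(depth)]

def deinterleave(symbols, depth=4, codeword_len=255):
    """Invert interleave(): scatter received symbols back into codeword order."""
    out = [None] * (depth * codeword_len)
    k = 0
    for c in range(codeword_len):
        for r in range(depth):
            out[r * codeword_len + c] = symbols[k]
            k += 1
    return out

data = list(range(4 * 255))
assert deinterleave(interleave(data)) == data
```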
Sachan, Prachee; Kumar, Nidhi; Sharma, Jagdish Prasad
2014-01-01
Background: The density of drugs injected intrathecally is an important factor that influences their spread in the cerebrospinal fluid. Mixing adjuvants with local anesthetics (LA) alters their density and hence their spread, compared to when they are given sequentially in separate syringes. Aims: To evaluate the efficacy of intrathecal administration of hyperbaric bupivacaine (HB) and clonidine as a mixture and sequentially, in terms of block characteristics, hemodynamics, neonatal outcome, and postoperative pain. Setting and Design: Prospective randomized single-blind study at a tertiary center from 2010 to 2012. Materials and Methods: Ninety full-term parturients scheduled for elective cesarean sections were divided into three groups on the basis of the technique of intrathecal drug administration. Group M received a mixture of 75 μg clonidine and 10 mg HB 0.5%. Group A received 75 μg clonidine after administration of 10 mg HB 0.5% through a separate syringe. Group B received 75 μg clonidine before HB 0.5% (10 mg) through a separate syringe. Statistical Analysis Used: Observational descriptive statistics, analysis of variance with Bonferroni multiple comparison post hoc test, and Chi-square test. Results: Time to achieve complete sensory and motor block was less in groups A and B, in which drugs were given sequentially. Duration of analgesia lasted longer in group B (474.3 ± 20.79 min) and group A (472.50 ± 22.11 min) than in group M (337 ± 18.22 min), with clinically insignificant influence on hemodynamic parameters and sedation. Conclusion: The sequential technique reduces the time to achieve complete sensory and motor block, delays block regression, and significantly prolongs the duration of analgesia. However, it did not matter much whether clonidine was administered before or after HB. PMID:25886098
Zhao, Hongbo; Chen, Yuying; Feng, Wenquan; Zhuang, Chen
2018-05-25
Inter-satellite links are an important component of the new generation of satellite navigation systems, characterized by low signal-to-noise ratio (SNR), complex electromagnetic interference, and the short time slot of each satellite, all of which complicate the acquisition stage. The inter-satellite links in both the Global Positioning System (GPS) and the BeiDou Navigation Satellite System (BDS) adopt a long-code spread spectrum system. However, long code acquisition is a difficult and time-consuming task due to the long code period. Traditional folding methods such as the extended replica folding acquisition search technique (XFAST) and direct averaging are largely restricted because of code Doppler and the additional SNR loss caused by replica folding. The dual folding method (DF-XFAST) and the dual-channel method have been proposed to achieve long code acquisition in low-SNR and high-dynamic situations, respectively, but the former is easily affected by code Doppler and the latter is not fast enough. Considering the environment of inter-satellite links and the problems of existing algorithms, this paper proposes a new long code acquisition algorithm named the dual-channel acquisition method based on the extended replica folding algorithm (DC-XFAST). This method employs dual channels for verification. Each channel contains an incoming signal block. Local code samples are folded and zero-padded to the length of the incoming signal block. After a circular FFT operation, the correlation results contain two peaks of the same magnitude and specified relative position. The detection process is eased by finding the two largest values. The verification takes all the full and partial peaks into account. Numerical results reveal that the DC-XFAST method improves acquisition performance while acquisition speed is guaranteed. The method has a significantly higher acquisition probability than the folding methods XFAST and DF-XFAST. Moreover, with the advantage of higher detection probability and lower false alarm probability, it has a lower mean acquisition time than traditional XFAST, DF-XFAST and zero-padding.
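A schematic sketch of the folding-plus-FFT correlation at the heart of this family of methods, under our own simplifying assumptions (real-valued toy code, no Doppler or noise): the local code is folded to the block length and circularly correlated against each channel's incoming block, and detection then checks that both channels peak at the expected relative position.

```python
import numpy as np

def fold(code, block_len):
    """Fold the long local code by summing complete block_len segments
    (a full implementation would zero-pad the final partial segment)."""
    segs = code[: (len(code) // block_len) * block_len].reshape(-1, block_len)
    return segs.sum(axis=0)

def circular_correlation(block, folded):
    """Circular correlation via FFT; peak locations are candidate code phases."""
    return np.abs(np.fft.ifft(np.fft.fft(block) * np.conj(np.fft.fft(folded))))

rng = np.random.default_rng(2)
long_code = rng.choice([-1.0, 1.0], size=4096)      # toy long spreading code
block_len = 512
phase = 137                                          # unknown code phase (toy)
incoming1 = np.roll(long_code, -phase)[:block_len]               # channel-1 block
incoming2 = np.roll(long_code, -phase - block_len)[:block_len]   # channel-2 block

folded = fold(long_code, block_len)
c1 = circular_correlation(incoming1, folded)
c2 = circular_correlation(incoming2, folded)
# Both channels must peak, at the relative offset implied by the folding.
print(int(np.argmax(c1)), int(np.argmax(c2)))
```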
On codes with multi-level error-correction capabilities
NASA Technical Reports Server (NTRS)
Lin, Shu
1987-01-01
In conventional coding for error control, all the information symbols of a message are regarded as equally significant, and hence codes are devised to provide equal protection for each information symbol against channel errors. However, on some occasions, some information symbols in a message are more significant than the others. As a result, it is desirable to devise codes with multi-level error-correcting capabilities. Another situation where codes with multi-level error-correcting capabilities are desired is in broadcast communication systems. An m-user broadcast channel has one input and m outputs. The single input and each output form a component channel. The component channels may have different noise levels, and hence the messages transmitted over the component channels require different levels of protection against errors. Block codes with multi-level error-correcting capabilities are also known as unequal error protection (UEP) codes. Structural properties of these codes are derived. Based on these structural properties, two classes of UEP codes are constructed.
Construction of Protograph LDPC Codes with Linear Minimum Distance
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Dolinar, Sam; Jones, Christopher
2006-01-01
A construction method for protograph-based LDPC codes that simultaneously achieve low iterative decoding threshold and linear minimum distance is proposed. We start with a high-rate protograph LDPC code with variable node degrees of at least 3. Lower rate codes are obtained by splitting check nodes and connecting them by degree-2 nodes. This guarantees the linear minimum distance property for the lower-rate codes. Excluding checks connected to degree-1 nodes, we show that the number of degree-2 nodes should be at most one less than the number of checks for the protograph LDPC code to have linear minimum distance. Iterative decoding thresholds are obtained by using the reciprocal channel approximation. Thresholds are lowered by using either precoding or at least one very high-degree node in the base protograph. A family of high- to low-rate codes with minimum distance linearly increasing in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.
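The rate-lowering move described above (splitting a check node and tying the two halves together with a degree-2 variable node) can be written directly on the protomatrix. Below is a hypothetical numpy sketch in our own notation, not the authors' tooling; `split` says how many of each original edge bundle the first new check keeps.

```python
import numpy as np

def split_check(B, row, split):
    """Split check `row` of protomatrix B into two checks whose edge
    multiplicities sum to the original row, then append one degree-2
    variable node connecting the two new checks."""
    B = np.asarray(B, dtype=int)
    split = np.asarray(split, dtype=int)
    assert np.all(0 <= split) and np.all(split <= B[row])
    top = np.vstack([B[:row], split, B[row] - split, B[row + 1:]])
    col = np.zeros((top.shape[0], 1), dtype=int)   # the new degree-2 variable node
    col[row, 0], col[row + 1, 0] = 1, 1
    return np.hstack([top, col])

B = np.array([[2, 1, 1],      # toy high-rate protograph (checks x variables)
              [1, 2, 1]])
B_low = split_check(B, row=0, split=[1, 1, 0])
print(B_low)                  # 3 checks, 4 variables: rate drops from 1/3 to 1/4
```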
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.
1991-01-01
Shannon's capacity bound shows that coding can achieve large reductions in the required signal-to-noise ratio per information bit (E_b/N_0, where E_b is the energy per bit and N_0/2 is the double-sided noise density) in comparison to uncoded schemes. For bandwidth efficiencies of 2 bit/sym or greater, these improvements were obtained through the use of Trellis Coded Modulation and Block Coded Modulation. A method of obtaining these high efficiencies using multidimensional Multiple Phase Shift Keying (MPSK) and Quadrature Amplitude Modulation (QAM) signal sets with trellis coding is described. These schemes have advantages in decoding speed, phase transparency, and coding gain in comparison to other trellis coding schemes. Finally, a general parity check equation for rotationally invariant trellis codes is introduced, from which non-linear codes for two-dimensional MPSK and QAM signal sets are found. These codes are fully transparent to all rotations of the signal set.
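As a quick numerical companion to the capacity argument: an AWGN channel used at spectral efficiency η bits/s/Hz has capacity η when 2^η = 1 + SNR, so the minimum E_b/N_0 is (2^η - 1)/η. A short sketch (our own illustration, not part of the report):

```python
import math

def min_ebn0_db(eta):
    """Shannon limit on Eb/N0 (dB) at spectral efficiency eta over AWGN."""
    return 10.0 * math.log10((2.0 ** eta - 1.0) / eta)

print(min_ebn0_db(2.0))  # about 1.76 dB: the floor that 2 bit/sym coded schemes chase
print(min_ebn0_db(3.0))  # about 3.68 dB at 3 bit/sym
```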
NASA Astrophysics Data System (ADS)
Morkel, Matthias; Rupprechter, Günther; Freund, Hans-Joachim
2003-11-01
Sum frequency generation (SFG) vibrational spectroscopy was carried out in conjunction with thermal desorption spectroscopy, low-energy electron diffraction, and Auger electron spectroscopy to examine the coadsorption of CO and H2 on Pd(111). Sequential dosing and various CO/H2 mixtures were utilized to study intermolecular interactions between CO and H2. Preadsorbed CO effectively prevented the dissociative adsorption of hydrogen for CO coverages ⩾0.33 ML. While preadsorbed hydrogen was able to hinder CO adsorption at low temperature (100 K), hydrogen was displaced from the surface by CO at 150 K. When 1:1 mixtures of CO/H2 were used at 100 K, hydrogen selectively hindered CO adsorption on on-top sites, while above ˜125 K no blocking of CO adsorption was observed. The observations are explained in terms of mutual site blocking, of a CO-H phase separation, and of a CO-assisted hydrogen dissolution in the Pd bulk. The temperature-dependent site blocking effect of hydrogen is attributed to the ability (inability) of surface hydrogen to diffuse into the Pd bulk above (below) ˜125 K. Nonlinear optical SFG spectroscopy allowed us to study these effects not only in ultrahigh vacuum but also in a high-pressure environment. Using an SFG-compatible ultrahigh vacuum-high-pressure cell, spectra of 1:10 CO/H2 mixtures were acquired up to 55 mbar and 550 K, with simultaneous gas chromatographic and mass spectrometric gas phase analysis. Under reaction conditions, CO coverages ⩾0.5 ML were observed, which strongly limit H2 adsorption and thus may be partly responsible for the low CO hydrogenation rate. The high-pressure and high-temperature SFG spectra also showed indications of a reversible surface roughening or a highly dynamic (not perfectly ordered) CO adsorbate phase. Implications of the observed adsorbate structures for catalytic CO hydrogenation on supported Pd nanoparticles are discussed.
Du, Jin Peng; Fan, Yong; Liu, Ji Jun; Zhang, Jia Nan; Chang Liu, Shi; Hao, Dingjun
2017-12-01
Nerve root block is applied mainly for diagnosis, with less application in intraoperative treatment. The aim of this study was to observe the clinical and imaging outcomes of applying a gelatin sponge impregnated with a mixture of 3 drugs to intraoperative nerve root block, combined with robot-assisted minimally invasive transforaminal lumbar interbody fusion surgery, to treat adult degenerative lumbar scoliosis. From January 2012 to November 2014, 108 patients with adult degenerative lumbar scoliosis were treated with robot-assisted minimally invasive transforaminal lumbar interbody fusion surgery combined with intraoperative application of a gelatin sponge impregnated with a mixture of 3 drugs. Visual analog scale and Oswestry Disability Index scores were used to evaluate postoperative improvement of back and leg pain, and clinical effects were assessed according to the 36-Item Short-Form Health Survey. Imaging was obtained preoperatively, 1 week and 3 months postoperatively, and at the last follow-up. Fusion status, complications, and other outcomes were assessed. Follow-up was complete for 96 patients. Visual analog scale scores of leg and back pain on postoperative days 1-7 were decreased compared with preoperative values. At 1 week postoperatively, 3 months postoperatively, and last follow-up, visual analog scale score, Oswestry Disability Index score, coronal Cobb angle, and coronal and sagittal deviated distance decreased significantly (P = 0.000) and lumbar lordosis angle increased (P = 0.000) compared with preoperative values. The improvement rate of the Oswestry Disability Index was 81.8 ± 7.4%. The fusion rate between vertebral bodies was 92.7%. Application of a gelatin sponge impregnated with 3 drugs, combined with robot-assisted minimally invasive transforaminal lumbar interbody fusion, for treatment of adult degenerative lumbar scoliosis is safe and feasible, with the advantages of good short-term analgesic effect, minimal invasiveness, short length of stay, and good long-term clinical outcomes. Copyright © 2017 Elsevier Inc. All rights reserved.
Zhou, Ming; Chang, Shoude; Grover, Chander
2004-06-28
Further to optical coding based on fluorescent semiconductor quantum dots (QDs), a concept of using mixtures of multiple single-color QDs to create highly secret cryptograms based on their absorption/emission properties was demonstrated. The key to reading out the optical codes is a group of excitation lights with predetermined wavelengths, programmed in a secret manner. The cryptograms can be printed on the surfaces of different objects, such as valuable documents, for security purposes.
Electron transport in solid targets and in the active mixture of a CO2 laser amplifier
NASA Astrophysics Data System (ADS)
Galkowski, A.
The paper examines the use of the NIKE code for the Monte Carlo computation of the deposited energy profile and other characteristics of the absorption process of an electron beam in a solid target and the spatial distribution of primary ionization in the active mixture of a CO2 laser amplifier. The problem is considered in connection with the generation of intense electron beams and the acceleration of thin metal foils, as well as in connection with the electric discharge pumping of a CO2 laser amplifier.
The Urtica dioica Agglutinin Is a Complex Mixture of Isolectins
Van Damme, Els J. M.; Broekaert, Willem F.; Peumans, Willy J.
1988-01-01
Rhizomes of stinging nettle (Urtica dioica) contain a complex mixture of isolectins. Ion exchange chromatography with a high resolution fast protein liquid chromatography system revealed six isoforms which exhibit identical agglutination properties and carbohydrate-binding specificity and in addition have the same molecular structure and virtually identical biochemical properties. However, since the U. dioica agglutinin isolectins differ definitely with respect to their amino acid composition, it is likely that at least some of them are different polypeptides coded for by different genes. PMID:16665952
Finite-block-length analysis in classical and quantum information theory.
Hayashi, Masahito
2017-01-01
Coding technology is used in several information processing tasks. In particular, when noise during transmission disturbs communications, coding technology is employed to protect the information. However, there are two types of coding technology: coding in classical information theory and coding in quantum information theory. Although the physical media used to transmit information ultimately obey quantum mechanics, we need to choose the type of coding depending on the kind of information device, classical or quantum, that is being used. In both branches of information theory, there are many elegant theoretical results under the ideal assumption that an infinitely large system is available. In a realistic situation, we need to account for finite size effects. The present paper reviews finite size effects in classical and quantum information theory with respect to various topics, including applied aspects.
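One concrete finite-size tool in this literature is the normal approximation to the best achievable rate at blocklength n and error probability ε, R ≈ C - sqrt(V/n) Q^{-1}(ε) + log2(n)/(2n), where V is the channel dispersion (Polyanskiy, Poor, and Verdú). A small Python illustration for the binary symmetric channel, written as our own sketch:

```python
import math
from scipy.stats import norm

def h2(p):
    """Binary entropy in bits."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def normal_approx_rate(n, p, eps):
    """Normal approximation to the best rate over a BSC(p) at blocklength n."""
    C = 1.0 - h2(p)                                   # capacity
    V = p * (1 - p) * math.log2((1 - p) / p) ** 2     # channel dispersion
    return C - math.sqrt(V / n) * norm.isf(eps) + math.log2(n) / (2 * n)

for n in (100, 1000, 10000):
    print(n, normal_approx_rate(n, p=0.05, eps=1e-3))  # climbs toward C ~ 0.714
```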
Optical LDPC decoders for beyond 100 Gbits/s optical transmission.
Djordjevic, Ivan B; Xu, Lei; Wang, Ting
2009-05-01
We present an optical low-density parity-check (LDPC) decoder suitable for implementation above 100 Gbits/s, which provides large coding gains when based on large-girth LDPC codes. We show that a basic building block, the probabilities multiplier circuit, can be implemented using a Mach-Zehnder interferometer, and we propose a corresponding probability-domain sum-product algorithm (SPA). We perform simulations of a fully parallel implementation employing girth-10 LDPC codes and the proposed SPA. The girth-10 LDPC(24015,19212) code of rate 0.8 outperforms the BCH(128,113)×BCH(256,239) turbo-product code of rate 0.82 by 0.91 dB (for binary phase-shift keying at 100 Gbits/s and a bit error rate of 10^-9), and provides a net effective coding gain of 10.09 dB.
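As a generic illustration of the probability-domain SPA the abstract builds on (a textbook formulation, not the optical circuit itself), both node updates reduce to products of probabilities, which is what the Mach-Zehnder multiplier stage provides:

```python
import numpy as np

def check_node(p_in):
    """P(parity of independent bits is 1), given each P(bit=1);
    built from a product of (1 - 2p) terms, i.e. the multiplier operation."""
    return 0.5 * (1.0 - np.prod(1.0 - 2.0 * np.asarray(p_in)))

def variable_node(p_in):
    """Fuse independent estimates of P(bit=1) by a normalized product."""
    p_in = np.asarray(p_in)
    num = np.prod(p_in)
    return num / (num + np.prod(1.0 - p_in))

print(check_node([0.9, 0.2]))     # 0.5 * (1 - (-0.8)(0.6)) = 0.74
print(variable_node([0.9, 0.8]))  # 0.72 / (0.72 + 0.02) ~ 0.973
```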
The proposed coding standard at GSFC
NASA Technical Reports Server (NTRS)
Morakis, J. C.; Helgert, H. J.
1977-01-01
As part of the continuing effort to introduce standardization of spacecraft and ground equipment in satellite systems, NASA's Goddard Space Flight Center and other NASA facilities have supported the development of a set of standards for the use of error control coding in telemetry subsystems. These standards are intended to ensure compatibility between spacecraft and ground encoding equipment, while allowing sufficient flexibility to meet all anticipated mission requirements. The standards which have been developed to date cover the application of block codes in error detection and error correction modes, as well as short and long constraint length convolutional codes decoded via the Viterbi and sequential decoding algorithms, respectively. Included are detailed specifications of the codes, and their implementation. Current effort is directed toward the development of standards covering channels with burst noise characteristics, channels with feedback, and code concatenation.
Parallel deterministic neutronics with AMR in 3D
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clouse, C.; Ferguson, J.; Hendrickson, C.
1997-12-31
AMTRAN, a three-dimensional Sn neutronics code with adaptive mesh refinement (AMR), has been parallelized over spatial domains and energy groups and runs on the Meiko CS-2 with MPI message passing. Block-refined AMR is used with linear finite element representations for the fluxes, which allows for a straightforward interpretation of fluxes at block interfaces with zoning differences. The load balancing algorithm assumes 8 spatial domains, which minimizes idle time among processors.
The Antimicrobial Effects of Various Nutrient Electrolyte Beverages
1986-05-01
[Report documentation page fragment; the abstract is truncated.] The purpose of this study was to determine if Staphylococcus aureus, Saccharomyces cerevisiae ... A beverage (containing, among other ingredients, sodium benzoate and maltodextrin) inoculated with A. flavus was incubated for various time periods at 30°C. Cell volumes (mL) were obtained as ...
Software Library: A Reusable Software Issue.
1984-06-01
[Report documentation page fragment.] Keywords: Software Library; Program Library; Reusability; Generator. From the truncated abstract: ... the Software Library. A particular example of the Software Library, the Program Library, is described as a prototype of a reusable library. Hierarchical programming libraries are described. Finally, non-code products in the Software Library are discussed.
NASA Technical Reports Server (NTRS)
Ryer, M. J.
1978-01-01
HAL/S is a computer programming language; it is a representation for algorithms which can be interpreted by either a person or a computer. HAL/S compilers transform blocks of HAL/S code into machine language which can then be directly executed by a computer. When the machine language is executed, the algorithm specified by the HAL/S code (source) is performed. This document describes how to read and write HAL/S source.
Toward enhancing the distributed video coder under a multiview video codec framework
NASA Astrophysics Data System (ADS)
Lee, Shih-Chieh; Chen, Jiann-Jone; Tsai, Yao-Hong; Chen, Chin-Hua
2016-11-01
The advance of video coding technology enables multiview video (MVV) or three-dimensional television (3-D TV) display for users with or without glasses. For mobile devices or wireless applications, a distributed video coder (DVC) can be utilized to shift the encoder complexity to the decoder under the MVV coding framework, denoted as multiview distributed video coding (MDVC). We proposed to exploit both inter- and intraview video correlations to enhance side information (SI) and improve MDVC performance: (1) based on the multiview motion estimation (MVME) framework, a categorized block matching prediction with fidelity weights (COMPETE) was proposed to yield a high-quality SI frame for better DVC-reconstructed images; (2) the block transform coefficient properties, i.e., DCs and ACs, were exploited to design a priority rate control for the turbo code, such that DVC decoding can be carried out with the fewest parity bits. In comparison, the proposed COMPETE method demonstrated lower time complexity while presenting better reconstructed video quality. Simulations show that the proposed COMPETE reduces the time complexity of MVME by a factor of 1.29 to 2.56 compared to previous hybrid MVME methods, while the peak signal-to-noise ratios (PSNRs) of the decoded video improve by 0.2 to 3.5 dB compared to H.264/AVC intracoding.
Progress Towards a Rad-Hydro Code for Modern Computing Architectures LA-UR-10-02825
NASA Astrophysics Data System (ADS)
Wohlbier, J. G.; Lowrie, R. B.; Bergen, B.; Calef, M.
2010-11-01
We are entering an era of high performance computing where data movement, rather than the speed of floating-point operations per processor, is the overwhelming bottleneck to scalable performance. All multi-core hardware paradigms, whether heterogeneous or homogeneous (be it the Cell processor, GPGPU, or multi-core x86), share this common trait. In multi-physics applications such as inertial confinement fusion or astrophysics, one may be solving multi-material hydrodynamics with tabular equation of state data lookups, radiation transport, nuclear reactions, and charged particle transport in a single time cycle. The algorithms are intensely data dependent (e.g., on EOS, opacity, and nuclear data lookups), and multi-core hardware memory restrictions are forcing code developers to rethink code and algorithm design. For the past two years LANL has been funding a small effort, referred to as Multi-Physics on Multi-Core, to explore ideas for code design pertaining to inertial confinement fusion and astrophysics applications. The near-term goals of this project are to have a multi-material radiation hydrodynamics capability, with tabular equation of state lookups, on Cartesian and curvilinear block-structured meshes. In the longer term we plan to add fully implicit multi-group radiation diffusion and material heat conduction, and block-structured AMR. We will report on our progress to date.
Novel modes and adaptive block scanning order for intra prediction in AV1
NASA Astrophysics Data System (ADS)
Hadar, Ofer; Shleifer, Ariel; Mukherjee, Debargha; Joshi, Urvang; Mazar, Itai; Yuzvinsky, Michael; Tavor, Nitzan; Itzhak, Nati; Birman, Raz
2017-09-01
The demand for streaming video content is on the rise and growing exponentially. Network bandwidth is very costly, and therefore there is a constant effort to improve video compression rates and enable the sending of reduced data volumes while retaining quality of experience (QoE). One basic feature that utilizes the spatial correlation of pixels for video compression is intra prediction, which determines the codec's compression efficiency. Intra prediction enables significant reduction of the intra-frame (I-frame) size and therefore contributes to efficient exploitation of bandwidth. In this presentation, we propose new intra-prediction algorithms that improve the AV1 prediction model and provide better compression ratios. Two types of methods are considered: (1) a new scanning-order method that maximizes spatial correlation in order to reduce prediction error; and (2) implementation of new intra-prediction modes in AV1. Modern video coding standards, including the AV1 codec, utilize fixed scan orders in processing blocks during intra coding. The fixed scan orders typically result in residual blocks with high prediction error, mainly in blocks with edges. This means that the fixed scan orders cannot fully exploit the content-adaptive spatial correlations between adjacent blocks, so the bitrate after compression tends to be large. To reduce the bitrate induced by inaccurate intra prediction, the proposed approach adaptively chooses the scanning order of blocks according to the criterion of first predicting the blocks with the maximum number of surrounding, already inter-predicted blocks. Using the modified scanning order and the new modes reduced the MSE by up to five times compared to the conventional TM mode with raster scan, and by up to two times compared to the conventional CALIC mode with raster scan, depending on the image characteristics (which determine the percentage of blocks predicted with inter prediction, which in turn impacts the efficiency of the new scanning method). For the same cases, the PSNR improved by up to 7.4 dB and up to 4 dB, respectively. The new modes yielded a 5% improvement in BD-rate over traditionally used modes when run on K-frames, which is expected to yield a 1% overall improvement.
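The neighbor-counting criterion can be illustrated with a small self-contained sketch (our own toy, not the authors' AV1 implementation): blocks already reconstructed by inter prediction seed a mask, and the next block to intra-predict is always the uncoded block with the most coded 4-neighbors.

```python
import numpy as np

def adaptive_scan_order(coded):
    """Greedy scan: repeatedly pick the uncoded block with the most already-coded
    4-neighbors; ties fall back to raster order."""
    h, w = coded.shape
    coded = coded.copy()
    order = []

    def n_coded(r, c):
        return sum(coded[rr, cc]
                   for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                   if 0 <= rr < h and 0 <= cc < w)

    while not coded.all():
        best = max((n_coded(r, c), -(r * w + c), r, c)
                   for r in range(h) for c in range(w) if not coded[r, c])
        _, _, r, c = best
        order.append((r, c))
        coded[r, c] = True
    return order

inter_mask = np.zeros((4, 4), dtype=bool)
inter_mask[1, 1] = inter_mask[2, 2] = True    # blocks already inter-predicted
print(adaptive_scan_order(inter_mask))        # intra blocks, best-neighbored first
```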
Simulations of Laboratory Astrophysics Experiments using the CRASH code
NASA Astrophysics Data System (ADS)
Trantham, Matthew; Kuranz, Carolyn; Fein, Jeff; Wan, Willow; Young, Rachel; Keiter, Paul; Drake, R. Paul
2015-11-01
Computer simulations can assist in the design and analysis of laboratory astrophysics experiments. The Center for Radiative Shock Hydrodynamics (CRASH) at the University of Michigan developed a code that has been used to design and analyze high-energy-density experiments on OMEGA, NIF, and other large laser facilities. This Eulerian code uses block-adaptive mesh refinement (AMR) with implicit multigroup radiation transport, electron heat conduction and laser ray tracing. This poster will demonstrate some of the experiments the CRASH code has helped design or analyze including: Kelvin-Helmholtz, Rayleigh-Taylor, magnetized flows, jets, and laser-produced plasmas. This work is funded by the following grants: DEFC52-08NA28616, DE-NA0001840, and DE-NA0002032.
NASA Astrophysics Data System (ADS)
Yoon, Jong Moon; Shin, Dong Ok; Yin, You; Seo, Hyeon Kook; Kim, Daewoon; Kim, Yong In; Jin, Jung Ho; Kim, Yong Tae; Bae, Byeong-Soo; Kim, Sang Ouk; Lee, Jeong Yong
2012-06-01
Mushroom-shaped phase change memory (PCM) consisting of a Cr/In3Sb1Te2 (IST)/TiN (bottom electrode) nanoarray was fabricated via block copolymer lithography and single-step dry etching with a gas mixture of Ar/Cl2. The process was performed on a high performance transparent glass-fabric reinforced composite film (GFR Hybrimer) suitable for use as a novel substrate for flexible devices. The use of GFR Hybrimer with low thermal expansion and flat surfaces enabled successful nanoscale patterning of functional phase change materials on flexible substrates. Block copolymer lithography employing asymmetrical block copolymer blends with hexagonal cylindrical self-assembled morphologies resulted in the creation of hexagonal nanoscale PCM cell arrays with an areal density of approximately 176 Gb/in2.
Analysis of Flow Migration in an Ultra-Compact Combustor
2011-03-01
Only fragments of this record survive: acronym definitions from the front matter, CFD (Computational Fluid Dynamics), UNICORN (Unsteady Ignition and Combustion with Reactions), LBO (Lean Blowout), and an abstract excerpt: "...the magnitude of enhanced flame speeds due to g-loading using the UNICORN CFD code. The study examined flame propagation for a hydrogen-air mixture in ..."
Reuse of steel slag in bituminous paving mixtures.
Sorlini, Sabrina; Sanzeni, Alex; Rondi, Luca
2012-03-30
This paper presents a comprehensive study to evaluate the mechanical properties and environmental suitability of electric arc furnace (EAF) steel slag in bituminous paving mixtures. A variety of tests were executed on samples of EAF slag to characterize the physical, geometrical, mechanical and chemical properties as required by UNI EN specifications, focusing additionally on the volumetric expansion associated with hydration of free CaO and MgO. Five bituminous mixtures of aggregates for flexible road pavement were designed containing up to 40% of EAF slag and were tested to determine Marshall stability and indirect tensile strength. The leaching behaviour of slag samples and bituminous mixtures was evaluated according to the UNI EN leaching test. The tested slag showed satisfactory physical and mechanical properties and a release of pollutants generally below the limits set by the Italian code. Tests on volume stability of fresh materials confirmed that a period of 2-3 months is necessary to reduce effects of oxides hydration. The results of tests performed on bituminous mixtures with EAF slag were comparable with the performance of mixtures containing natural aggregates and the leaching tests provided satisfactory results.
Luyet, Cédric; Eng, Kenneth T; Kertes, Peter J; Avila, Arsenio; Muni, Rajeev H; McHardy, Paul
2012-01-01
The aims of this prospective observational study were to assess the incidence of intraconal spread during peribulbar (extraconal) anesthesia by real-time ultrasound imaging of the retro-orbital compartment and to determine whether a complete sensory and motor block (with akinesia) of the eye is directly related to the intraconal spread. Ultrasound imaging was performed in 100 patients who underwent a surgical procedure on the posterior segment of the eye. All patients received a peribulbar block using the inferolateral approach. Once the needle was in place, a linear ultrasound transducer was placed over the eyelid and the spread of local anesthetics was assessed during the injection (real time). Akinesia was assessed by a blinded observer 10 minutes after block placement. The incidence of intraconal spread and its correlation with a complete akinesia was measured. The overall block failure rate was 28% in terms of akinesia, and the rate of rescue blocks was 20%. Clear intraconal spread during injection of the local anesthetic mixture could be detected with ultrasound imaging in 61 of 100 patients. The positive predictive value for successful block when intraconal spread was detected was 98% (95% confidence interval, 91%-100%). The association between clear and no evidence of intraconal spread and effective block was statistically significant (χ² test, P < 0.001). Ultrasound imaging provides information of local anesthetic spread within the retro-orbital space, which might assist in the prediction of block success.
NASA Astrophysics Data System (ADS)
Loh, C. W.
1980-03-01
A method was developed for determining equilibrium constants, heat of reaction, and change in free energy and entropy during a 1:1 complex formation in solutions. The measurements were carried out on ternary systems containing two interacting solutes in an inert solvent. The procedure was applied to the investigation of hydrogen bond complex formations in two mixture systems: phenol and pyridine in carbon tetrachloride, and 4,5,6,7-tetrachloro-2-trifluoromethylbenzimidazole (TTFB) and alkyl acetate in styrene. The first mixture system was studied in order to compare the results with those obtained by other methods. Results for the second mixture system indicated strong association between molecules of TTFB and alkyl acetate and suggested that the blocking of valinomycin-mediated bilayer membrane conductance by substituted benzimidazoles was due to competition for a limited number of adsorption sites on the membrane surface.
Wang, Jianji; Zheng, Nanning
2013-09-01
Fractal image compression (FIC) is an image coding technology based on the local similarity of image structure. It is widely used in many fields such as image retrieval, image denoising, image authentication, and encryption. FIC, however, suffers from the high computational complexity in encoding. Although many schemes are published to speed up encoding, they do not easily satisfy the encoding time or the reconstructed image quality requirements. In this paper, a new FIC scheme is proposed based on the fact that the affine similarity between two blocks in FIC is equivalent to the absolute value of Pearson's correlation coefficient (APCC) between them. First, all blocks in the range and domain pools are chosen and classified using an APCC-based block classification method to increase the matching probability. Second, by sorting the domain blocks with respect to APCCs between these domain blocks and a preset block in each class, the matching domain block for a range block can be searched in the selected domain set in which these APCCs are closer to APCC between the range block and the preset block. Experimental results show that the proposed scheme can significantly speed up the encoding process in FIC while preserving the reconstructed image quality well.
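A toy illustration of the matching criterion, assuming NumPy; `apcc` is a hypothetical helper name, and the random blocks stand in for real range and domain pools.

```python
import numpy as np

def apcc(a, b):
    """Absolute Pearson correlation coefficient between two equal-size blocks."""
    a = a.ravel().astype(float) - a.mean()
    b = b.ravel().astype(float) - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return 0.0 if denom == 0 else abs(np.dot(a, b)) / denom

rng = np.random.default_rng(0)
range_block = rng.random((8, 8))
domain_pool = [rng.random((8, 8)) for _ in range(50)]
# the best affine match for the range block maximizes APCC over the domain pool
best = max(range(len(domain_pool)), key=lambda i: apcc(range_block, domain_pool[i]))
print("best domain block:", best, "APCC:", round(apcc(range_block, domain_pool[best]), 3))
```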
Development of V/STOL methodology based on a higher order panel method
NASA Technical Reports Server (NTRS)
Bhateley, I. C.; Howell, G. A.; Mann, H. W.
1983-01-01
The development of a computational technique to predict the complex flowfields of V/STOL aircraft was initiated, in which a number of modules and a potential flow aerodynamic code were combined in a comprehensive computer program. The modules were developed in a building-block approach to assist the user in preparing the geometric input and to compute parameters needed to simulate certain flow phenomena that cannot be handled directly within a potential flow code. The PAN AIR aerodynamic code, which is a higher-order panel method, forms the nucleus of this program. PAN AIR's extensive capability for generalized boundary conditions allows the modules to interact with the aerodynamic code through the input and output files, requiring no changes to the basic code and allowing easy replacement of updated modules.
A General Sparse Tensor Framework for Electronic Structure Theory
Manzer, Samuel; Epifanovsky, Evgeny; Krylov, Anna I.; ...
2017-01-24
Linear-scaling algorithms must be developed in order to extend the domain of applicability of electronic structure theory to molecules of any desired size. But, the increasing complexity of modern linear-scaling methods makes code development and maintenance a significant challenge. A major contributor to this difficulty is the lack of robust software abstractions for handling block-sparse tensor operations. We therefore report the development of a highly efficient symbolic block-sparse tensor library in order to provide access to high-level software constructs to treat such problems. Our implementation supports arbitrary multi-dimensional sparsity in all input and output tensors. We then avoid cumbersome machine-generated code by implementing all functionality as a high-level symbolic C++ language library and demonstrate that our implementation attains very high performance for linear-scaling sparse tensor contractions.
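The library's actual C++ API is not shown in the abstract; the following Python sketch only illustrates the underlying idea of contracting dictionaries of nonzero blocks, so that absent (zero) blocks cost nothing.

```python
import numpy as np

def block_sparse_matmul(A, B, bs):
    """A, B: dicts {(I, J): bs x bs ndarray} holding only nonzero blocks.
    Returns C = A @ B in the same block-sparse representation."""
    C = {}
    for (I, K), a in A.items():
        for (K2, J), b in B.items():
            if K == K2:  # only matching inner block indices contribute
                C[(I, J)] = C.get((I, J), np.zeros((bs, bs))) + a @ b
    return C

A = {(0, 0): np.eye(4), (1, 2): np.ones((4, 4))}
B = {(0, 1): np.full((4, 4), 2.0), (2, 0): np.eye(4)}
C = block_sparse_matmul(A, B, 4)
print(sorted(C))  # [(0, 1), (1, 0)]: only two output blocks are ever formed
```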
Representation of deformable motion for compression of dynamic cardiac image data
NASA Astrophysics Data System (ADS)
Weinlich, Andreas; Amon, Peter; Hutter, Andreas; Kaup, André
2012-02-01
We present a new approach for efficient estimation and storage of tissue deformation in dynamic medical image data like 3-D+t computed tomography reconstructions of human heart acquisitions. Tissue deformation between two points in time can be described by means of a displacement vector field indicating, for each voxel of a slice, from which position in the previous time step's slice (at a fixed position in the third dimension) it has moved. Our deformation model represents the motion in a compact manner using a down-sampled potential function of the displacement vector field. This function is obtained by a Gauss-Newton minimization of the estimation error image, i.e., the difference between the current and the deformed previous slice. For lossless or lossy compression of volume slices, the potential function and the error image can afterwards be coded separately. By assuming deformations instead of translational motion, a subsequent coding algorithm using this method achieves better compression ratios for medical volume data than conventional block-based motion compensation known from video coding. Due to the smooth prediction without block artifacts, whole-image transforms like wavelet decomposition as well as intra-slice prediction methods can particularly benefit from this approach. We show that with the discrete cosine as well as the Karhunen-Loève transform the method can achieve a better energy compaction of the error image than block-based motion compensation while reaching approximately the same prediction error energy.
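A minimal sketch of the prediction step, assuming NumPy and SciPy and synthetic data; the stored quantity is the coarse potential, and its gradient supplies the displacement vector field. Sizes and interpolation orders are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates, zoom

H, W = 64, 64
prev_slice = np.random.rand(H, W)   # slice at time t-1
curr_slice = np.random.rand(H, W)   # slice at time t, to be predicted
# down-sampled potential (this is what would be stored), then up-sampled for use
phi = zoom(np.random.randn(8, 8), (H / 8, W / 8), order=3)
u, v = np.gradient(phi)             # displacement vector field = grad(phi)
rows, cols = np.mgrid[0:H, 0:W].astype(float)
# fetch each voxel from the position it moved from in the previous slice
pred = map_coordinates(prev_slice, [rows - u, cols - v], order=1, mode='nearest')
error_image = curr_slice - pred     # coded separately from the potential
print(float((error_image ** 2).mean()))
```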
NASA Astrophysics Data System (ADS)
Karczewicz, Marta; Chen, Peisong; Joshi, Rajan; Wang, Xianglin; Chien, Wei-Jung; Panchal, Rahul; Coban, Muhammed; Chong, In Suk; Reznik, Yuriy A.
2011-01-01
This paper describes the video coding technology proposal submitted by Qualcomm Inc. in response to a joint call for proposals (CfP) issued by ITU-T SG16 Q.6 (VCEG) and ISO/IEC JTC1/SC29/WG11 (MPEG) in January 2010. The proposed video codec follows a hybrid coding approach based on temporal prediction, followed by transform, quantization, and entropy coding of the residual. Some of its key features are extended block sizes (up to 64x64), recursive integer transforms, single pass switched interpolation filters with offsets (single pass SIFO), mode dependent directional transform (MDDT) for intra-coding, luma and chroma high precision filtering, geometry motion partitioning, and adaptive motion vector resolution. It also incorporates internal bit-depth increase (IBDI) and modified quadtree based adaptive loop filtering (QALF). Simulation results are presented for a variety of bit rates, resolutions and coding configurations to demonstrate the high compression efficiency achieved by the proposed video codec at a moderate level of encoding and decoding complexity. For the random access hierarchical B configuration (HierB), the proposed video codec achieves an average BD-rate reduction of 30.88% compared to the H.264/AVC alpha anchor. For the low delay hierarchical P (HierP) configuration, the proposed video codec achieves average BD-rate reductions of 32.96% and 48.57%, compared to the H.264/AVC beta and gamma anchors, respectively.
NASA Astrophysics Data System (ADS)
Xiao, Fei; Liu, Bo; Zhang, Lijia; Xin, Xiangjun; Zhang, Qi; Tian, Qinghua; Tian, Feng; Wang, Yongjun; Rao, Lan; Ullah, Rahat; Zhao, Feng; Li, Deng'ao
2018-02-01
A rate-adaptive multilevel coded modulation (RA-MLC) scheme based on fixed code length, together with a corresponding decoding scheme, is proposed. The RA-MLC scheme combines multilevel coding and modulation technology with a binary linear block code at the transmitter. Bit division, coding, optional interleaving, and modulation are carried out according to a preset rule, and the signal is then transmitted through a standard single-mode fiber span of 100 km. The receiver improves decoding accuracy by passing soft information through the different layers, which enhances performance. Simulations are carried out in an intensity modulation-direct detection optical communication system using MATLAB®. Results show that the RA-MLC scheme can achieve a bit error rate of 1E-5 when the optical signal-to-noise ratio is 20.7 dB. It also reduced the number of decoders by 72% and realized 22 rate-adaptation settings without significantly increasing the computing time. The coding gain is increased by 7.3 dB at BER=1E-3.
NASA Astrophysics Data System (ADS)
He, Jing; Wen, Xuejie; Chen, Ming; Chen, Lin
2015-09-01
In this paper, a Golay complementary training sequence (TS)-based symbol synchronization scheme is proposed and experimentally demonstrated in a multiband orthogonal frequency division multiplexing (MB-OFDM) ultra-wideband over fiber (UWBoF) system with a variable-rate low-density parity-check (LDPC) code. Meanwhile, the coding gain and spectral efficiency in the variable-rate LDPC-coded MB-OFDM UWBoF system are investigated. By utilizing the non-periodic auto-correlation property of the Golay complementary pair, the start point of the LDPC-coded MB-OFDM UWB signal can be estimated accurately. After 100 km standard single-mode fiber (SSMF) transmission, at a bit error rate of 1×10^-3, the experimental results show that short block length 64QAM-LDPC coding provides a coding gain of 4.5 dB, 3.8 dB and 2.9 dB for code rates of 62.5%, 75% and 87.5%, respectively.
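The synchronization property rests on the defining feature of a Golay complementary pair: the sum of the two aperiodic autocorrelations vanishes at every nonzero lag, giving a sharp correlation peak at the frame start. A short numerical check, assuming NumPy:

```python
import numpy as np

def golay_pair(n_iter):
    """Recursive construction: (a, b) -> (a|b, a|-b) doubles the length."""
    a, b = np.array([1.0]), np.array([1.0])
    for _ in range(n_iter):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

def acorr(x):
    return np.correlate(x, x, mode='full')  # aperiodic autocorrelation

a, b = golay_pair(5)            # length-32 complementary pair
s = acorr(a) + acorr(b)
print(s[len(a) - 1])            # 64.0 = 2 * length at zero lag
print(np.abs(np.delete(s, len(a) - 1)).max())  # 0.0 at every other lag
```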
Convolutional encoding of self-dual codes
NASA Technical Reports Server (NTRS)
Solomon, G.
1994-01-01
There exist almost complete convolutional encodings of self-dual codes, i.e., block codes of rate 1/2 with weights w, w = 0 mod 4. The codes are of length 8m with the convolutional portion of length 8m-2 and the nonsystematic information of length 4m-1. The last two bits are parity checks on the two (4m-1) length parity sequences. The final information bit complements one of the extended parity sequences of length 4m. Solomon and van Tilborg have developed algorithms to generate these for the Quadratic Residue (QR) Codes of lengths 48 and beyond. For these codes and reasonable constraint lengths, there are sequential decodings for both hard and soft decisions. There are also possible Viterbi-type decodings that may be simple, as in a convolutional encoding/decoding of the extended Golay Code. In addition, the previously found constraint length K = 9 for the QR (48, 24;12) Code is lowered here to K = 8.
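The paper's specific self-dual constructions are not reproduced here; purely as a generic illustration of the rate-1/2 convolutional encoding step, the sketch below uses the standard constraint-length-3 generators (octal 7 and 5), which are an assumption, not the codes of the paper.

```python
def conv_encode(bits, g1=0b111, g2=0b101, K=3):
    """Rate-1/2 convolutional encoder: two output bits per input bit.
    No tail bits are appended; this is a bare illustration."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & ((1 << K) - 1)  # shift the new bit in
        out.append(bin(state & g1).count('1') % 2)   # parity over taps of g1
        out.append(bin(state & g2).count('1') % 2)   # parity over taps of g2
    return out

print(conv_encode([1, 0, 1, 1]))  # [1, 1, 1, 0, 0, 0, 0, 1]
```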
Imani, Farnad; Hemati, Karim; Rahimzadeh, Poupak; Kazemi, Mohamad Reza; Hejazian, Kokab
2016-01-01
Stellate ganglion block (SGB) is an effective technique that may be used to manage upper extremity pain due to chronic regional pain syndrome (CRPS); in this study we evaluated the effectiveness of this procedure under two different types of guidance for management of this syndrome. The purpose of this study was to evaluate the effectiveness of ultrasound-guided SGB by comparing it with fluoroscopy-guided SGB in upper extremity CRPS patients in reducing pain and dysfunction of the affected limb. Fourteen patients with sympathetic CRPS in the upper extremities were divided by block randomization into two equal groups (ultrasound or fluoroscopic guidance). The first group was blocked under fluoroscopic guidance and the second group under ultrasound guidance. After correct positioning of the needle, a mixture of 5 mL of bupivacaine 0.25% and 1 mL of triamcinolone was injected. The data showed no meaningful statistical difference between the two groups in the number of pain attacks before the blocks, a borderline correlation between the two groups one week and one month after the block, and a significant statistical correlation between the two groups three months after the block. The data showed no meaningful statistical difference between the patients of either group in pain intensity (from one week to six months after the block), p-value = 0.61. The data showed a meaningful statistical difference within each group and between the two groups in pain intensity (before the block until six months after the block); p-values were 0.001 and 0.031, respectively. According to these data, compared with fluoroscopic guidance, stellate ganglion block under ultrasound guidance is a safe and effective method with fewer complications and better improvement in patients' disability indexes.
Wang, Rong; Tang, Ping; Qiu, Feng; Yang, Yuliang
2005-09-15
The complex microstructures of amphiphilic ABC linear triblock copolymers, in which one of the end blocks is relatively short and hydrophilic and the other two blocks B and C are hydrophobic, in a dilute solution have been investigated by the real-space implementation of self-consistent field theory (SCFT) in two dimensions (2D). In contrast to diblock copolymers in solution, the aggregation of triblock copolymers is more complicated due to the presence of the second hydrophobic block and, hence, the large range of parameter space controlling the morphology. By tailoring the degree of hydrophobicity and its difference between the blocks B and C, various shapes are observed: vesicles, circlelike and linelike micelles (possibly corresponding to spherelike and rodlike micelles in 3D) and, especially, peanutlike micelles not found in diblock copolymers. The transition from vesicles to circlelike micelles occurs with increasing hydrophobicity of the blocks B and C, while the transition from circlelike to linelike micelles, or from a mixture of micelles and vesicles to long linelike micelles, takes place when the repulsive interaction of the end hydrophobic block C is stronger than that of the middle hydrophobic block B. Furthermore, dispersion of the block copolymer in the solvent into aggregates is favored when the repulsion of the solvent to the end hydrophobic block is larger than that of the solvent to the middle hydrophobic block. Especially when the bulk block copolymers are in a weak segregation regime, competition between microphase separation and macrophase separation exists, and large compound micelle-like aggregates are found due to macrophase separation with increasing hydrophobicity of blocks B and C, which is absent in diblock copolymer solutions. The simulation results successfully reproduce the existing experimental ones.
DOE Office of Scientific and Technical Information (OSTI.GOV)
CURRY, MATTHEW LEON; WARD, H. LEE; & SKJELLUM, ANTHONY
Gibraltar is a library and associated test suite which performs Reed-Solomon coding and decoding of data buffers using graphics processing units which support NVIDIA's CUDA technology. This library is used to generate redundant data allowing for recovery of lost information. For example, a user can generate m new blocks of data from n original blocks, distributing those pieces over n+m devices. If any m devices fail, the contents of those devices can be recovered from the contents of the other n devices, even if some of the original blocks are lost. This is a generalized description of RAID, a technique for increasing data storage speed and size.
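As a much-simplified illustration of the n+m redundancy idea, the sketch below shows the m = 1 special case, where a single XOR parity block recovers any one lost block; Gibraltar's Reed-Solomon coding generalizes this to recover up to m lost blocks for arbitrary m.

```python
import functools
import operator

def xor_blocks(blocks):
    """Byte-wise XOR of a list of equal-length byte strings."""
    return bytes(functools.reduce(operator.xor, t) for t in zip(*blocks))

data = [b'\x01\x02', b'\x10\x20', b'\xaa\x55']   # n = 3 original blocks
parity = xor_blocks(data)                        # m = 1 redundant block

lost = 1                                         # pretend device 1 failed
survivors = [blk for i, blk in enumerate(data) if i != lost] + [parity]
print(xor_blocks(survivors) == data[lost])       # True: lost block recovered
```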
Solution of 3-dimensional time-dependent viscous flows. Part 2: Development of the computer code
NASA Technical Reports Server (NTRS)
Weinberg, B. C.; Mcdonald, H.
1980-01-01
There is considerable interest in developing a numerical scheme for solving the time dependent viscous compressible three dimensional flow equations to aid in the design of helicopter rotors. The development of a computer code to solve a three dimensional unsteady approximate form of the Navier-Stokes equations employing a linearized block implicit technique in conjunction with a QR operator scheme is described. Results of calculations of several Cartesian test cases are presented. The computer code can be applied to more complex flow fields such as those encountered on rotating airfoils.
Dalmay, Tamas
2018-01-01
RNA interference (RNAi) is a complex and highly conserved regulatory mechanism mediated via small RNAs (sRNAs). Recent technical advances in high-throughput sequencing have enabled an increasingly detailed analysis of sRNA abundances and profiles in specific body parts and tissues. This enables investigations of the localized roles of microRNAs (miRNAs) and small interfering RNAs (siRNAs). However, variation in the proportions of non-coding RNAs in the samples being compared can hinder these analyses. Specific tissues may vary significantly in the proportions of fragments of longer non-coding RNAs (such as ribosomal RNA or transfer RNA) present, potentially reflecting tissue-specific differences in biological functions. For example, in Drosophila, some tissues contain a highly abundant 30 nt rRNA fragment (the 2S rRNA) as well as abundant 5' and 3' terminal rRNA fragments. These can pose difficulties for the construction of sRNA libraries as they can swamp the sequencing space and obscure sRNA abundances. Here we addressed this problem and present a modified "rRNA blocking" protocol for the construction of high-definition (HD) adapter sRNA libraries in D. melanogaster reproductive tissues. The results showed that 2S rRNAs targeted by blocking oligos were reduced from >80% to <0.01% of total reads. In addition, the use of multiple rRNA blocking oligos to bind the most abundant rRNA fragments allowed us to reveal the underlying sRNA populations at increased resolution. Side-by-side comparisons of sequencing libraries of blocked and non-blocked samples revealed that rRNA blocking did not change the miRNA populations present, but instead enhanced their abundances. We suggest that this rRNA blocking procedure offers the potential to improve the in-depth analysis of differentially expressed sRNAs within and across different tissues. PMID:29474379
Borges, Cleber N; Bruns, Roy E; Almeida, Aline A; Scarminio, Ieda S
2007-07-09
A composite simplex centroid-simplex centroid mixture design is proposed for simultaneously optimizing two mixture systems. The complementary model is formed by multiplying special cubic models for the two systems. The design was applied to the simultaneous optimization of both mobile phase chromatographic mixtures and extraction mixtures for the Camellia sinensis Chinese tea plant. The extraction mixtures investigated contained varying proportions of ethyl acetate, ethanol and dichloromethane while the mobile phase was made up of varying proportions of methanol, acetonitrile and a methanol-acetonitrile-water (MAW) 15%:15%:70% mixture. The experiments were block randomized corresponding to a split-plot error structure to minimize laboratory work and reduce environmental impact. Coefficients of an initial saturated model were obtained using Scheffe-type equations. A cumulative probability graph was used to determine an approximate reduced model. The split-plot error structure was then introduced into the reduced model by applying generalized least square equations with variance components calculated using the restricted maximum likelihood approach. A model was developed to calculate the number of peaks observed with the chromatographic detector at 210 nm. A 20-term model contained essentially all the statistical information of the initial model and had a root mean square calibration error of 1.38. The model was used to predict the number of peaks eluted in chromatograms obtained from extraction solutions that correspond to axial points of the simplex centroid design. The significant model coefficients are interpreted in terms of interacting linear, quadratic and cubic effects of the mobile phase and extraction solution components.
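A sketch of how the design points can be enumerated, assuming the standard simplex-centroid construction; the component names in the comments are taken from the abstract, and crossing the two designs yields the composite runs.

```python
from itertools import combinations

def simplex_centroid(q):
    """All 2**q - 1 blends of a simplex-centroid design: every nonempty
    subset of the q components, mixed in equal proportions."""
    pts = []
    for k in range(1, q + 1):
        for subset in combinations(range(q), k):
            p = [0.0] * q
            for i in subset:
                p[i] = 1.0 / k
            pts.append(tuple(p))
    return pts

mobile_phase = simplex_centroid(3)  # methanol, acetonitrile, MAW 15:15:70
extraction = simplex_centroid(3)    # ethyl acetate, ethanol, dichloromethane
composite = [(m, e) for m in mobile_phase for e in extraction]
print(len(mobile_phase), len(composite))  # 7 points per system, 49 combined runs
```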
1993-07-01
Only fragments of a DTIC report form survive. Subject terms: Bayou Plantation, archeology, tenements, Yatton Plantation. Abstract excerpt: "This report presents the results of Phase 1 ... Revetment Projects, M-270.2 to 246.0-R ... Previously Recorded Archeological Sites in the ..."
Role of Interfaces and Interphases in the Evolution Mechanics of Material Systems
1992-03-26
K. Reifsnider, W. Stinchcomb, D. Dillard, R. Swain, K. Jayaraman, Y. Chiang, J. Lesko, M. Elahi, Z. Gao, A. Razvan; Materials Response Group. Only a fragment of the abstract survives: "This final report summarizes the activities conducted under this ..."
Design and Implementation of a CMOS Chip for a Prolog
1988-03-01
Only fragments survive: the carry generation scheme uses the P-circuit [9] with pre-conditioning and post-conditioning circuits [2,3] to generate the carry. The system generates vertical microcode for a general-purpose processor, the NCR 9300 system, from W-code [7]; "three significant pieces of software are ..." The datapath includes a calculation block generating the propagate (P) and generate (G) signals needed for carry calculation, and a sum block supplying the final result.
Genes involved in androgen biosynthesis and the male phenotype.
Waterman, M R; Keeney, D S
1992-01-01
A series of enzymatic steps in the testis lead to the conversion of cholesterol to the male sex steroid hormones, testosterone and 5 alpha-dihydrotestosterone. Mutations in any one of these steps are presumed to alter or block the development of the male phenotype. Most of the genes encoding the enzymes involved in this pathway have now been cloned, and mutations within the coding regions of these genes do, in fact, block development of the male phenotype.
1982-11-01
Only fragments of a DTIC report form survive: "Return Difference Feedback Design for Robust Uncertainty Tolerance in Sto..." University of Southern California, Los Angeles, Department of Electrical Engineering; Directorate of Mathematical & Information Systems. Subject terms: systems theory; control; feedback; automatic control.
COM-GEOM Interactive Display Debugger (CIDD)
1984-08-01
Only fragments survive. Subject terms: target description, GIFT, interactive computer graphics, solid geometry, combinatorial geometry, COM-GEOM. Abstract excerpt: a program was written to speed up the process of formulating the COM-GEOM data used by the Geometric Information for Targets (GIFT) computer code. Reference: Lawrence W. Bain, Mathew J. Reisinger, "The GIFT Code User Manual; Volume I, Introduction and Input Requirements (U)," BRL Report No. 1802.
1989-07-31
Only fragments of a DTIC report form survive: "NTRFACE for MAGIC," by N. T. Gladd, an interface for the MAGIC particle-in-cell simulation code. Abstract excerpt: "The NTRFACE system was developed ... made concrete by applying it to a specific application: a mature, highly complex plasma physics particle-in-cell simulation code named MAGIC."
Rapid Trust Establishment for Transient Use of Unmanaged Hardware
2006-12-01
Only fragments of a report form and figure captions survive: (a) boot with trust initiator; (b) boot trusted Host OS (from disk), validating the OS; (c) launch and validate applications, separating trusted from untrusted code. Example notification from the trust alerter: "Execution of process with Id 3535 has been blocked to minimize security risks."
ARES: A System for Real-Time Operational and Tactical Decision Support
1986-12-01
Only fragments of a Naval Postgraduate School (Monterey, California) thesis form survive: "ARES: A System for Real-Time Operational and Tactical Decision Support." Subject terms: decision support system; logistics model; operational.
The Role of the National Training Center during Full Mobilization
1991-06-07
Only fragments of a report form survive: "... resources are proposed by this study." Subject terms: National Training Center (NTC); training; mobilization; combat. 217 pages. Abstract excerpt: "... Regular Army and a transfer of their roles to the Reserve Component. The end of the Cold War makes future mobilization needs less likely and argues for ..."
Kaplowitz, Stan A; Perlstadt, Harry; D'Onofrio, Gail; Melnick, Edward R; Baum, Carl R; Kirrane, Barbara M; Post, Lori A
2012-01-01
We derived a clinical decision rule for determining which young children need testing for lead poisoning. We developed an equation that combines lead exposure self-report questions with the child's census-block housing and socioeconomic characteristics, personal demographic characteristics, and Medicaid status. This equation better predicts elevated blood lead level (EBLL) than one using ZIP code and Medicaid status. A survey regarding potential lead exposure was administered from October 2001 to January 2003 to Michigan parents at pediatric clinics (n=3,396). These self-report survey data were linked to a statewide clinical registry of blood lead level (BLL) tests. Sensitivity and specificity were calculated and then used to estimate the cost-effectiveness of the equation. The census-block group prediction equation explained 18.1% of the variance in BLLs. Replacing block group characteristics with the self-report questions and dichotomized ZIP code risk explained only 12.6% of the variance. Adding three self-report questions to the census-block group model increased the variance explained to 19.9% and increased specificity with no loss in sensitivity in detecting EBLLs of ≥ 10 micrograms per deciliter. Relying solely on self-reports of lead exposure predicted BLL less effectively than the block group model. However, adding three of 13 self-report questions to our clinical decision rule significantly improved prediction of which children require a BLL test. Using the equation as the clinical decision rule would annually eliminate more than 7,200 unnecessary tests in Michigan and save more than $220,000.
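A hedged sketch of the kind of model behind the reported "variance explained" figures, on synthetic data only; the coefficients and predictors below are invented for illustration, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
census = rng.normal(size=(n, 4))         # census-block housing/SES covariates
questions = rng.integers(0, 2, (n, 3))   # three yes/no exposure questions
bll = census @ [0.5, 0.3, 0.2, 0.1] + questions @ [0.4, 0.3, 0.2] + rng.normal(size=n)

def r_squared(X, y):
    """R^2 of an ordinary least squares fit with an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

print(r_squared(census, bll))                                # census covariates only
print(r_squared(np.column_stack([census, questions]), bll))  # plus self-report items
```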
Convection and chemistry effects in CVD: A 3-D analysis for silicon deposition
NASA Technical Reports Server (NTRS)
Gokoglu, S. A.; Kuczmarski, M. A.; Tsui, P.; Chait, A.
1989-01-01
The computational fluid dynamics code FLUENT has been adopted to simulate the entire rectangular-channel-like (3-D) geometry of an experimental CVD reactor designed for Si deposition. The code incorporated the effects of both homogeneous (gas phase) and heterogeneous (surface) chemistry with finite reaction rates of important species existing in silane dissociation. The experiments were designed to elucidate the effects of gravitationally-induced buoyancy-driven convection flows on the quality of the grown Si films. This goal is accomplished by contrasting the results obtained from a carrier gas mixture of H2/Ar with the ones obtained from the same molar mixture ratio of H2/He, without any accompanying change in the chemistry. Computationally, these cases are simulated in the terrestrial gravitational field and in the absence of gravity. The numerical results compare favorably with experiments. Powerful computational tools provide invaluable insights into the complex physicochemical phenomena taking place in CVD reactors. Such information is essential for the improved design and optimization of future CVD reactors.
Nonequilibrium radiation behind a strong shock wave in CO2-N2
NASA Astrophysics Data System (ADS)
Rond, C.; Boubert, P.; Félio, J.-M.; Chikhaoui, A.
2007-11-01
This work presents experiments reproducing re-entry plasma conditions for one trajectory point of a Martian mission. The typical facility for investigating such hypersonic flows is the shock tube; here we used the free-piston shock tube TCM2. Measurements of the radiative flux behind the shock wave were obtained by time-resolved emission spectroscopy calibrated in intensity. As the CN violet system is the main radiator in the near UV-visible range, we focused our study on its spectrum. Moreover, a physical model for calculating the nonequilibrium radiation behind a shock wave, based on a multi-temperature kinetic code and a radiative code, was developed for CO2-N2-Ar mixtures. Comparisons between experiments and calculations show that standard kinetic models (Park, McKenzie) cannot reproduce our experimental results. We therefore propose new rate coefficients, in particular for the dissociation of CO2, showing the way towards a better description of the chemistry of the mixture.
Turbine Internal and Film Cooling Modeling For 3D Navier-Stokes Codes
NASA Technical Reports Server (NTRS)
DeWitt, Kenneth; Garg, Vijay; Ameri, Ali
2005-01-01
The aim of this research project is to make use of NASA Glenn on-site computational facilities in order to develop, validate and apply aerodynamic, heat transfer, and turbine cooling models for use in advanced 3D Navier-Stokes Computational Fluid Dynamics (CFD) codes such as the Glenn-HT code. Specific areas of effort include: application of the Glenn-HT code to specific configurations made available under the Turbine Based Combined Cycle (TBCC) and Ultra Efficient Engine Technology (UEET) projects; and validating the use of a multi-block code for the time accurate computation of the detailed flow and heat transfer of cooled turbine airfoils. The goal of the current research is to improve the predictive ability of the Glenn-HT code. This will enable one to design more efficient turbine components for both aviation and power generation. The models will be tested against specific configurations provided by NASA Glenn.
NASA Astrophysics Data System (ADS)
Park, Joon-Sang; Lee, Uichin; Oh, Soon Young; Gerla, Mario; Lun, Desmond Siumen; Ro, Won Woo; Park, Joonseok
Vehicular ad hoc networks (VANETs) aim to enhance vehicle navigation safety by providing an early warning system: any chance of an accident is communicated through wireless communication between vehicles. For the warning system to work, it is crucial that safety messages be reliably delivered to the target vehicles in a timely manner; a reliable and timely data dissemination service is thus the key building block of VANET. A data muling technique combined with three strategies, network coding, erasure coding, and repetition coding, is proposed for reliable and timely data dissemination. In particular, vehicles in the opposite direction on a highway are exploited as data mules, mobile nodes physically delivering data to destinations, to overcome the intermittent network connectivity caused by sparse vehicle traffic. Using analytic models, we show that in such a highway data muling scenario the network coding based strategy outperforms the erasure coding and repetition based strategies.
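A small analytic comparison in the spirit of the abstract's models, with illustrative numbers: each transmission succeeds independently with probability p, k data units are required, and the sender either repeats each unit r times or sends n coded units of which any k suffice (the defining property of erasure and network coding).

```python
from math import comb

def p_repetition(k, r, p):
    """All k units delivered when each is sent r times independently."""
    return (1 - (1 - p) ** r) ** k

def p_coded(k, n, p):
    """At least k of n coded units delivered (any k suffice to decode)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

k, p = 10, 0.6
print(p_repetition(k, r=3, p=p))  # 30 transmissions, per-unit repetition
print(p_coded(k, n=30, p=p))      # 30 transmissions, coded: markedly higher
```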
Adaptive distributed source coding.
Varodayan, David; Lin, Yao-Chung; Girod, Bernd
2012-05-01
We consider distributed source coding in the presence of hidden variables that parameterize the statistical dependence among sources. We derive the Slepian-Wolf bound and devise coding algorithms for a block-candidate model of this problem. The encoder sends, in addition to syndrome bits, a portion of the source to the decoder uncoded as doping bits. The decoder uses the sum-product algorithm to simultaneously recover the source symbols and the hidden statistical dependence variables. We also develop novel techniques based on density evolution (DE) to analyze the coding algorithms. We experimentally confirm that our DE analysis closely approximates practical performance. This result allows us to efficiently optimize parameters of the algorithms. In particular, we show that the system performs close to the Slepian-Wolf bound when an appropriate doping rate is selected. We then apply our coding and analysis techniques to a reduced-reference video quality monitoring system and show a bit rate saving of about 75% compared with fixed-length coding.
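A toy version of the syndrome approach, using a Hamming(7,4) code instead of the paper's LDPC codes and sum-product decoding: the encoder sends only the 3-bit syndrome of the 7-bit source x, and side information y that differs from x in at most one bit lets the decoder recover x exactly.

```python
import numpy as np

# Parity-check matrix of Hamming(7,4); column j is the binary expansion of j+1
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

x = np.array([1, 0, 1, 1, 0, 0, 1])   # source at the encoder
y = x.copy(); y[4] ^= 1               # decoder's side info: one bit flipped

syndrome = H @ x % 2                  # all the encoder transmits (3 bits, not 7)
err_syn = (H @ y + syndrome) % 2      # = H (x XOR y): locates the single flip
if err_syn.any():
    pos = int(''.join(map(str, err_syn[::-1])), 2) - 1  # column index of the error
    y[pos] ^= 1
print(np.array_equal(y, x))           # True: x recovered from syndrome + side info
```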
75 FR 22165 - Request for Certification of Compliance-Rural Industrialization Loan and Grant Program
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-27
...-fit an existing manufacturing facility to produce autoclaved aerated concrete (AAC) "green" building materials. The NAICS industry code for this enterprise is: 327331 Concrete Block and Brick Manufacturing...
Hettinger, Thomas P.; Savoy, Lawrence D.; Frank, Marion E.
2012-01-01
Component signaling in taste mixtures containing both beneficial and dangerous chemicals depends on peripheral processing. Unidirectional mixture suppression of chorda tympani (CT) nerve responses to sucrose by quinine and acid is documented for golden hamsters (Mesocricetus auratus). To investigate mixtures of NaCl and acids, we recorded multifiber responses to 50 mM NaCl, 1 and 3 mM citric acid and acetic acid, 250 μM citric acid, 20 mM acetic acid, and all binary combinations of each acid with NaCl (with and without 30 μM amiloride added). By blocking epithelial Na+ channels, amiloride treatment separated amiloride-sensitive NaCl-specific responses from amiloride-insensitive electrolyte-generalist responses, which encompass all of the CT response to the acids as well as responses to NaCl. Like CT sucrose responses, the amiloride-sensitive NaCl responses were suppressed by as much as 50% by citric acid (P = 0.001). The amiloride-insensitive electrolyte-generalist responses to NaCl + acid mixtures approximated the sum of NaCl and acid component responses. Thus, although NaCl-specific responses to NaCl were weakened in NaCl–acid mixtures, electrolyte-generalist responses to acid and NaCl, which tastes KCl-like, were transmitted undiminished in intensity to the central nervous system. The 2 distinct CT pathways are consistent with known rodent behavioral discriminations. PMID:22451526
Rainey, Nathan E; Saric, Ana; Leberre, Alexandre; Dewailly, Etienne; Slomianny, Christian; Vial, Guillaume; Zeliger, Harold I; Petit, Patrice X
2017-07-05
Humans are exposed to multiple exogenous environmental pollutants. Many of these compounds are parts of mixtures that can exacerbate the harmful effects of the individual mixture components. 2,3,7,8-Tetrachlorodibenzo-p-dioxin (TCDD) is primarily produced via industrial processes including incineration and the manufacture of herbicides. Both endosulfan and TCDD are persistent organic pollutants which elicit cytotoxic effects by inducing reactive oxygen species generation. Sublethal concentrations of mixtures of TCDD and endosulfan increase oxidative stress as well as disruption of mitochondrial homeostasis, which is preceded by a calcium rise and ultimately induces cell death. TCDD+endosulfan elicits a complex signaling sequence involving endoplasmic reticulum destabilization, which leads to a Ca2+ rise, superoxide anion production, an ATP drop, and late NADP(H) depletion associated with mitochondrially induced apoptosis concomitant with early autophagic processes. The ROS scavenger N-acetyl-cysteine blocks both the mixture-induced autophagy and death. Calcium chelators act similarly, and mitochondrially targeted anti-oxidants also abrogate these effects. Inhibition of the autophagic fluxes with 3-methyladenine increases mixture-induced cell death. These findings show that subchronic doses of pollutants may act synergistically. They also reveal that the onset of autophagy might serve as a protective mechanism against ROS-triggered cytotoxic effects of a cocktail of pollutants in Caco-2 cells and increase their tumorigenicity.
High-voltage pulsed generator for dynamic fragmentation of rocks
NASA Astrophysics Data System (ADS)
Kovalchuk, B. M.; Kharlov, A. V.; Vizir, V. A.; Kumpyak, V. V.; Zorin, V. B.; Kiselev, V. N.
2010-10-01
A portable high-voltage (HV) pulsed generator has been designed for rock fragmentation experiments. The generator can be used also for other technological applications. The installation consists of low voltage block, HV block, coaxial transmission line, fragmentation chamber, and control system block. Low voltage block of the generator, consisting of a primary capacitor bank (300 μF) and a thyristor switch, stores pulse energy and transfers it to the HV block. The primary capacitor bank stores energy of 600 J at the maximum charging voltage of 2 kV. HV block includes HV pulsed step up transformer, HV capacitive storage, and two electrode gas switch. The following technical parameters of the generator were achieved: output voltage up to 300 kV, voltage rise time of ~50 ns, current amplitude of ~6 kA with the 40 Ω active load, and ~20 kA in a rock fragmentation regime (with discharge in a rock-water mixture). Typical operation regime is a burst of 1000 pulses with a frequency of 10 Hz. The operation process can be controlled within a wide range of parameters. The entire installation (generator, transmission line, treatment chamber, and measuring probes) is designed like a continuous Faraday's cage (complete shielding) to exclude external electromagnetic perturbations.
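The quoted figures are self-consistent, as a quick check of the capacitor energy E = CV²/2 shows:

```python
C = 300e-6   # primary bank capacitance, farads
V = 2e3      # maximum charging voltage, volts
print(0.5 * C * V**2)   # 600.0 joules, matching the stated 600 J
```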
Matuszewska, Alicja; Uchman, Mariusz; Adamczyk-Woźniak, Agnieszka; Sporzyński, Andrzej; Pispas, Stergios; Kováčik, Lubomír; Štěpánek, Miroslav
2015-12-14
Coassembly behavior of the double hydrophilic block copolymer poly(4-hydroxystyrene)-block-poly(ethylene oxide) (PHOS-PEO) with three amphiphilic phenylboronic acids (PBA) differing in hydrophobicity, 4-dodecyloxyphenylboronic acid (C12), 4-octyloxyphenylboronic acid (C8), and 4-isobutoxyphenylboronic acid (i-Bu) was studied in alkaline aqueous solutions and in mixtures of NaOHaq/THF by spin-echo (1)H NMR spectroscopy, dynamic and electrophoretic light scattering, and SAXS. The study reveals that only the coassembly of C12 with PHOS-PEO provides spherical nanoparticles with intermixed PHOS and PEO blocks, containing densely packed C12 micelles. NMR measurements have shown that spatial proximity of PHOS-PEO and C12 leads to the formation of ester bonds between -OH of PHOS block and hydroxyl groups of -B(OH)2. Due to the presence of PBA moieties, the release of compounds with 1,2- or 1,3-dihydroxy groups loaded in the coassembled PHOS-PEO/PBA nanoparticles by covalent binding to PBA can be triggered by addition of a surplus of glucose that bind to PBA competitively. The latter feature has been confirmed by fluorescence measurements using Alizarin Red as a model compound. Nanoparticles were proved to exhibit swelling in response to glucose as detected by light scattering.
Manimegalai, C T; Gauni, Sabitha; Kalimuthu, K
2017-12-04
Wireless body area networks (WBANs) are a breakthrough technology in healthcare areas such as hospitals and telemedicine. The human body is a complex mixture of different tissues, and the propagation of electromagnetic signals is expected to be distinct in each of these tissues. This forms the basis for the WBAN, which differs from other environments. In this paper, the ultra-wideband (UWB) channel is characterized for the WBAN (IEEE 802.15.6) system. Channel parameters are measured over the 3.1-10.6 GHz frequency range. The proposed system transmits data at up to 480 Mbps by using LDPC-coded APSK-modulated differential space-time-frequency-coded MB-OFDM to increase the throughput and power efficiency.
Analysis of view synthesis prediction architectures in modern coding standards
NASA Astrophysics Data System (ADS)
Tian, Dong; Zou, Feng; Lee, Chris; Vetro, Anthony; Sun, Huifang
2013-09-01
Depth-based 3D formats are currently being developed as extensions to both AVC and HEVC standards. The availability of depth information facilitates the generation of intermediate views for advanced 3D applications and displays, and also enables more efficient coding of the multiview input data through view synthesis prediction techniques. This paper outlines several approaches that have been explored to realize view synthesis prediction in modern video coding standards such as AVC and HEVC. The benefits and drawbacks of various architectures are analyzed in terms of performance, complexity, and other design considerations. It is hence concluded that block-based VSP prediction for multiview video signals provides attractive coding gains with comparable complexity as traditional motion/disparity compensation.
Reduction of PAPR in coded OFDM using fast Reed-Solomon codes over prime Galois fields
NASA Astrophysics Data System (ADS)
Motazedi, Mohammad Reza; Dianat, Reza
2017-02-01
In this work, two new techniques using Reed-Solomon (RS) codes over GF(257) and GF(65,537) are proposed for peak-to-average power ratio (PAPR) reduction in coded orthogonal frequency division multiplexing (OFDM) systems. The lengths of these codes are well-matched to the length of OFDM frames. Over these fields, the block lengths of codes are powers of two and we fully exploit the radix-2 fast Fourier transform algorithms. Multiplications and additions are simple modulus operations. These codes provide desirable randomness with a small perturbation in information symbols that is essential for generation of different statistically independent candidates. Our simulations show that the PAPR reduction ability of RS codes is the same as that of conventional selected mapping (SLM), but contrary to SLM, we can get error correction capability. Also for the second proposed technique, the transmission of side information is not needed. To the best of our knowledge, this is the first work using RS codes for PAPR reduction in single-input single-output systems.
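For reference, conventional selected mapping (the baseline the proposal is compared against) can be sketched as follows; random phase sequences stand in here for the RS-codeword-derived candidate sequences of the paper, and all sizes are illustrative.

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    return 10 * np.log10(np.max(np.abs(x)**2) / np.mean(np.abs(x)**2))

rng = np.random.default_rng(0)
N, n_cand = 256, 8
symbols = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], N)   # one QPSK OFDM frame
# each candidate applies a different phase sequence before the IFFT
candidates = [np.fft.ifft(symbols * np.exp(1j * rng.uniform(0, 2*np.pi, N)))
              for _ in range(n_cand)]
best = min(candidates, key=papr_db)                   # transmit the lowest-PAPR one
print(round(papr_db(np.fft.ifft(symbols)), 2), '->', round(papr_db(best), 2))
```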
Association of a multifunctional ionic block copolymer in a selective solvent
Etampawala, Thusitha N.; Aryal, Dipak; Osti, Naresh C.; ...
2016-11-14
The self-assembly of multiblock copolymers in solutions is controlled by a delicate balance between inherent phase segregation due to incompatibility of the blocks and the interaction of the individual blocks with the solvent. The current study elucidates the association of pentablock copolymers in a mixture of selective solvents which are good for the hydrophobic segments and poor for the hydrophilic blocks using small angle neutron scattering (SANS). The pentablock consists of a center block of randomly sulfonated polystyrene, designed for transport, tethered to poly-ethylene-r-propylene and end-capped by poly-t-butyl styrene, for mechanical stability. We find that the pentablock forms ellipsoidal core-shell micelles with the sulfonated polystyrene in the core and Gaussian decaying chains of swollen poly-ethylene-r-propylene and poly-t-butyl styrene in the corona. With increasing solution concentration, the size of the micelle, the thickness of the corona, and the aggregation number increase, while the solvent fraction in the core decreases. As a result, in dilute solution the micelle increases in size as the temperature is increased; however, temperature effects dissipate with increasing solution concentration.
Synthesis and Structure of Fully Conjugated Block Copolymers Utilized in Organic Photovoltaics
NASA Astrophysics Data System (ADS)
Lee, Youngmin; Aplan, Melissa; Wang, Qing; Gomez, Enrique D.
2015-03-01
Fully conjugated block copolymers have the potential to overcome many of the limitations of mixtures and blends as photoactive layers in solar cells; furthermore, they may serve as model systems to study fundamental questions regarding optoelectronic properties and charge transfer. However, the synthesis of fully conjugated block copolymers remains a challenge in the field. We have optimized the two-step synthesis of P3HT-b-PFTBT, which is composed of Grignard metathesis polymerization of P3HT followed by chain extension through a Suzuki-Miyaura polycondensation. We find that the concentration of the Grignard reagent is critical for end-group control such that P3HT is terminated by H at one end and Br at the other. Furthermore, we can utilize an asymmetric feed ratio of monomers for the Suzuki-Miyaura reaction to minimize the amount of uncoupled homopolymer and to control the molecular weight of the second block. We investigated the chemical composition, structure, and electrical characteristics of the polymers prepared by the different synthetic methods, and demonstrate that these strategies can be utilized for the synthesis of block copolymers beyond P3HT-b-PFTBT.
NASA Astrophysics Data System (ADS)
Kim, Seonguk; Min, Kyoungdoug
2008-08-01
The CAI (controlled auto-ignition) engine ignites the fuel-air mixture by trapping high-temperature burnt gas using a negative valve overlap. Due to auto-ignition in CAI combustion, efficiency improvements and low NOx emission levels can be obtained. Meanwhile, the CAI combustion regime is restricted and control parameters are limited. Start-of-combustion data are the most critical input for controlling the overall combustion in a compression ignition engine. In this research, the engine block vibration signal is transformed by the Meyer wavelet to analyze CAI combustion more easily and accurately. Acquiring the engine block vibration signal is a more practical method than measuring in-cylinder pressure. A new method for detecting the start of combustion in CAI engines through wavelet transformation of the engine block vibration signal was developed, and results indicate that it is accurate enough to analyze the start of combustion. Experimental results show that wavelet transformation of engine block vibration can track the start of combustion in each cycle. With this newly developed method, start-of-combustion data in CAI engines can be detected more easily and used as input for controlling CAI combustion.
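A rough sketch of the detection idea on a synthetic signal, assuming the PyWavelets package is available ('dmey', its discrete Meyer wavelet, stands in for the Meyer wavelet of the paper); the sampling rate, thresholds, and knock model are invented for illustration, and the coefficient-to-time mapping ignores filter delay.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

fs = 50_000                                    # sampling rate, Hz (assumed)
t = np.arange(0, 0.02, 1 / fs)
vibration = 0.05 * np.random.randn(t.size)     # background engine noise
vibration[t > 0.012] += np.sin(2 * np.pi * 15_000 * t[t > 0.012])  # "knock"

coeffs = pywt.wavedec(vibration, 'dmey', level=3)
d1 = coeffs[-1]                                # finest detail band (~12.5-25 kHz)
energy = np.convolve(d1 ** 2, np.ones(16) / 16, mode='same')
idx = np.argmax(energy > 10 * energy[:50].mean())  # first threshold crossing
print(f"estimated start of combustion: {idx * 2 / fs * 1000:.2f} ms")  # ~12 ms
```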
Simulation of Unsteady Hypersonic Combustion Around Projectiles in an Expansion Tube
NASA Technical Reports Server (NTRS)
Yungster, S.; Radhakrishnan, K.
1999-01-01
The temporal evolution of combustion flowfields established by the interaction between wedge-shaped bodies and explosive hydrogen-oxygen-nitrogen mixtures accelerated to hypersonic speeds in an expansion tube is investigated. The analysis is carried out using a fully implicit, time-accurate, computational fluid dynamics code that we developed recently for solving the Navier-Stokes equations for a chemically reacting gas mixture. The numerical results are compared with experimental data from the Stanford University expansion tube for two different gas mixtures at Mach numbers of 4.2 and 5.2. The experimental work showed that flow unstart occurred for the Mach 4.2 cases. These results are reproduced by our numerical simulations and, more significantly, the causes for unstart are explained. For the Mach 5.2 mixtures, the experiments and numerical simulations both produced stable combustion. However, the computations indicate that in one case the experimental data were obtained during the transient phase of the flow; that is, before steady state had been attained.
Medical Image Compression Based on Vector Quantization with Variable Block Sizes in Wavelet Domain
Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo
2012-01-01
An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficient subband. Then an optimal quadtree method was employed to partition each wavelet coefficient subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. At last, vector quantization coding was implemented for the different types of sub-blocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as contrast algorithms. Experimental results show that the proposed method can improve the compression performance and can achieve a balance between the compression ratio and the image visual quality. PMID:23049544
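A sketch of the variable-block-size partition step, with block variance standing in for the local fractal dimension used in the paper; thresholds and sizes are illustrative. Busy regions end up as small leaves, smooth regions as large ones, and each leaf would then be coded with one VQ index.

```python
import numpy as np

def quadtree(block, r, c, size, min_size=4, thresh=0.01, out=None):
    """Recursively split a square region until it is small or low-activity.
    Returns a list of (row, col, size) leaves."""
    if out is None:
        out = []
    if size <= min_size or block.var() < thresh:
        out.append((r, c, size))        # leaf: one sub-block to vector-quantize
        return out
    h = size // 2
    for dr in (0, h):
        for dc in (0, h):
            quadtree(block[dr:dr+h, dc:dc+h], r + dr, c + dc, h,
                     min_size, thresh, out)
    return out

subband = np.zeros((32, 32))
subband[8:16, 8:16] = np.random.randn(8, 8)   # one busy region in a smooth subband
print(quadtree(subband, 0, 0, 32))            # small leaves only around the activity
```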
Long non-coding RNA CRYBG3 blocks cytokinesis by directly binding G-actin.
Pei, Hailong; Hu, Wentao; Guo, Ziyang; Chen, Huaiyuan; Ma, Ji; Mao, Weidong; Li, Bingyan; Wang, Aiqing; Wan, Jianmei; Zhang, Jian; Nie, Jing; Zhou, Guangming; Hei, Tom K
2018-06-22
The dynamic interchange between monomeric globular actin (G-actin) and polymeric filamentous actin filaments (F-actin) is fundamental and essential to many cellular processes including cytokinesis and maintenance of genomic stability. Here we report that the long non-coding RNA LNC CRYBG3 directly binds G-actin to inhibit its polymerization and formation of contractile rings, resulting in M-Phase cell arrest. Knockdown of LNC CRYBG3 in tumor cells enhanced their malignant phenotypes. Nucleotide sequence 228-237 of the full-length LNC CRYBG3 and the ser14 domain of beta-actin are essential for their interaction, and mutation of either of these sites abrogated binding of LNC CRYBG3 to G-actin. Binding of LNC CRYBG3 to G-actin blocked nuclear localization of MAL, which consequently kept serum response factor (SRF) away from the promoter region of several immediate early genes, including JUNB and Arp3, which are necessary for cellular proliferation, tumor growth, adhesion, movement, and metastasis. These findings reveal a novel lncRNA-actin-MAL-SRF pathway and highlight LNC CRYBG3 as a means to block cytokinesis and treat cancer by targeting the actin cytoskeleton. Copyright ©2018, American Association for Cancer Research.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Müller, C.; Hughes, E. D.; Niederauer, G. F.
1998-10-01
Los Alamos National Laboratory (LANL) and Forschungszentrum Karlsruhe (FzK) are developing GASFLOW, a three-dimensional (3D) fluid dynamics field code, as a best-estimate tool to characterize local phenomena within a flow field. Examples of 3D phenomena include circulation patterns; flow stratification; hydrogen distribution, mixing, and stratification; combustion and flame propagation; effects of noncondensable gas distribution on local condensation and evaporation; and aerosol entrainment, transport, and deposition. An analysis with GASFLOW results in a prediction of the gas composition and discrete particle distribution in space and time throughout the facility, and the resulting pressure and temperature loadings on the walls and internal structures, with or without combustion. A major application of GASFLOW is predicting the transport, mixing, and combustion of hydrogen and other gases in nuclear reactor containments and other facilities. It has been applied to situations involving the transport and distribution of combustible gas mixtures. It has been used to study gas dynamic behavior in low-speed, buoyancy-driven flows, as well as sonic or diffusion-dominated flows, and in chemically reacting flows, including deflagrations. The effects of controlling such mixtures by safety systems can be analyzed. The code version described in this manual is designated GASFLOW 2.1, which combines previous versions of the United States Nuclear Regulatory Commission code HMS (for Hydrogen Mixing Studies) and the Department of Energy and FzK versions of GASFLOW. The code is written in standard Fortran 90. This manual comprises three volumes. Volume I describes the governing physical equations and computational model. Volume II describes how to use the code to set up a model geometry, specify gas species and material properties, define initial and boundary conditions, and specify different outputs, especially graphical displays. Sample problems are included. Volume III contains some of the assessments performed by LANL and FzK.
Comminution and sizing processes of concrete block waste as recycled aggregates.
Gomes, P C C; Ulsen, C; Pereira, F A; Quattrone, M; Angulo, S C
2015-11-01
Due to the environmental impact of construction and demolition waste (CDW), recycling is mandatory. It is also important that recycled concrete aggregates (RCA) are used in concrete to meet market demands. In the literature, the influence of RCAs on concrete has been investigated, but very limited studies have been conducted on how the origin of concrete waste and comminution processes influence RCA characteristics. This paper aims to investigate the influence of three different comminution and sizing processes (simple screening, crushing and grinding) on the composition, shape and porosity characteristics of RCA obtained from concrete block waste. Crushing and grinding implies a reduction of RCA porosity. However, due to the presence of coarse quartz rounded river pebbles in the original concrete block mixtures, the shape characteristics deteriorated. A large amount of powder (<0.15 mm) without detectable anhydrous cement was also generated. Copyright © 2015 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Atarashiya, Koji; Kurokawa, Kazuya; Nagai, Tadao
1992-08-01
Generally speaking, the preparation of FGM blocks, especially in metal-nitride systems, by powder metallurgy requires extremely high temperature and pressure. In this work, however, using a ductile nickel metal powder or ultrafine particles, FGM blocks were easily prepared by powder metallurgy at a lower temperature. A mixture of a metallic powder and a non-metallic powder, whose contents were gradually changed, was pressed in a steel die under a pressure of 20-32 MPa. These green compacts were heated at 900-1573 K in a controlled atmosphere under no applied pressure. The FGM blocks prepared by this method were characterized by their properties and were used in joinings. The joinings of metal/FGM/ceramics, metal/FGM, and ceramics/FGM were completely accomplished at 900-1573 K. 6 refs.
2015-01-01
Structure control in solution-processed hybrid perovskites is crucial to design and fabricate highly efficient solar cells. Here, we utilize in situ grazing incidence wide-angle X-ray scattering and scanning electron microscopy to investigate the structural evolution and film morphologies of methylammonium lead tri-iodide/chloride (CH3NH3PbI3–xClx) in mesoporous block copolymer derived alumina superstructures during thermal annealing. We show the CH3NH3PbI3–xClx material evolution to be characterized by three distinct structures: a crystalline precursor structure not described previously, a 3D perovskite structure, and a mixture of compounds resulting from degradation. Finally, we demonstrate how understanding the processing parameters provides the foundation needed for optimal perovskite film morphology and coverage, leading to enhanced block copolymer-directed perovskite solar cell performance. PMID:24684494
Block copolymer with simultaneous electric and ionic conduction for use in lithium ion batteries
Javier, Anna Esmeralda K; Balsara, Nitash Pervez; Patel, Shrayesh Naran; Hallinan, Jr., Daniel T
2013-10-08
Redox reactions that occur at the electrodes of batteries require transport of both ions and electrons to the active centers. Reported is the synthesis of a block copolymer that exhibits simultaneous electronic and ionic conduction. A combination of Grignard metathesis polymerization and click reaction was used successively to synthesize the block copolymer containing regioregular poly(3-hexylthiophene) (P3HT) and poly(ethylene oxide) (PEO) segments. The P3HT-PEO/LiTFSI mixture was then used to make a lithium battery cathode with LiFePO4 as the only other component. All-solid lithium batteries comprising this cathode, a solid electrolyte, and a lithium foil anode showed capacities within experimental error of the theoretical capacity of the battery. The ability of P3HT-PEO to serve all of the transport and binding functions required in a lithium battery electrode is thus demonstrated.
NASA Technical Reports Server (NTRS)
Smith, Crawford F.; Podleski, Steve D.
1993-01-01
The proper use of a computational fluid dynamics code requires a good understanding of the particular code being applied. In this report, results obtained with CFL3D, a thin-layer Navier-Stokes code, are compared with results obtained from PARC3D, a full Navier-Stokes code. In order to gain an understanding of the use of CFL3D, a simple problem was chosen in which several key features of the code could be exercised: a cone in supersonic flow at an angle of attack. The issues of grid resolution, grid blocking, and multigridding with CFL3D are explored. The use of multigridding resulted in a significant reduction in the computational time required to solve the problem. The CFL3D solutions compared well with those of the full Navier-Stokes solver PARC3D.
Adaptive variable-length coding for efficient compression of spacecraft television data.
NASA Technical Reports Server (NTRS)
Rice, R. F.; Plaunt, J. R.
1971-01-01
An adaptive variable length coding system is presented. Although developed primarily for the proposed Grand Tour missions, many features of this system clearly indicate a much wider applicability. Using sample to sample prediction, the coding system produces output rates within 0.25 bit/picture element (pixel) of the one-dimensional difference entropy for entropy values ranging from 0 to 8 bit/pixel. This is accomplished without the necessity of storing any code words. Performance improvements of 0.5 bit/pixel can be simply achieved by utilizing previous line correlation. A Basic Compressor, using concatenated codes, adapts to rapid changes in source statistics by automatically selecting one of three codes to use for each block of 21 pixels. The system adapts to less frequent, but more dramatic, changes in source statistics by adjusting the mode in which the Basic Compressor operates on a line-to-line basis. Furthermore, the compression system is independent of the quantization requirements of the pulse-code modulation system.
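The per-block code selection can be sketched as follows. This toy version (the function names and the particular split-sample options are assumptions, not the flight algorithm) maps signed pixel differences to non-negative integers and picks, for each 21-pixel block, whichever of three Rice-style options would emit the fewest bits.

```python
import numpy as np

def fs_bits(vals):
    # fundamental-sequence (unary) code length: value + 1 bits per sample
    return int(np.sum(vals) + len(vals))

def block_code_choice(residuals, k_options=(0, 1, 2)):
    """Pick, per block, the split-sample option with the fewest bits.

    Option k codes the top bits of each mapped residual in unary and
    sends k raw LSBs per sample, so cost = fs_bits(v >> k) + k * n."""
    mapped = np.where(residuals >= 0, 2 * residuals, -2 * residuals - 1)
    costs = {k: fs_bits(mapped >> k) + k * len(mapped) for k in k_options}
    best = min(costs, key=costs.get)
    return best, costs[best]

rng = np.random.default_rng(0)
block = rng.integers(-6, 7, size=21)      # 21 signed pixel differences
print(block_code_choice(block))           # (chosen option, bits emitted)
```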
Sub-block motion derivation for merge mode in HEVC
NASA Astrophysics Data System (ADS)
Chien, Wei-Jung; Chen, Ying; Chen, Jianle; Zhang, Li; Karczewicz, Marta; Li, Xiang
2016-09-01
The new state-of-the-art video coding standard, H.265/HEVC, has been finalized in 2013 and it achieves roughly 50% bit rate saving compared to its predecessor, H.264/MPEG-4 AVC. In this paper, two additional merge candidates, advanced temporal motion vector predictor and spatial-temporal motion vector predictor, are developed to improve motion information prediction scheme under the HEVC structure. The proposed method allows each Prediction Unit (PU) to fetch multiple sets of motion information from multiple blocks smaller than the current PU. By splitting a large PU into sub-PUs and filling motion information for all the sub-PUs of the large PU, signaling cost of motion information could be reduced. This paper describes above-mentioned techniques in detail and evaluates their coding performance benefits based on the common test condition during HEVC development. Simulation results show that 2.4% performance improvement over HEVC can be achieved.
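A rough sketch of the sub-PU motion fetch described above, under the assumption that the collocated picture's motion field is stored on a 4x4 sample grid; the function and field layout are illustrative, not the HEVC reference software.

```python
import numpy as np

def sub_pu_motion(col_mv_field, pu_y, pu_x, pu_h, pu_w, sub=8):
    """Fetch one motion vector per sub-PU from a collocated MV field.

    One merge candidate thus carries many MVs, one per sub-block, with
    no extra signaling. col_mv_field has shape (H/4, W/4, 2)."""
    mvs = np.empty((pu_h // sub, pu_w // sub, 2), dtype=col_mv_field.dtype)
    for i in range(pu_h // sub):
        for j in range(pu_w // sub):
            # sample the field at the centre of each sub-PU
            cy, cx = pu_y + i * sub + sub // 2, pu_x + j * sub + sub // 2
            mvs[i, j] = col_mv_field[cy // 4, cx // 4]
    return mvs

field = np.random.randint(-32, 33, size=(68, 120, 2))  # hypothetical MV grid
print(sub_pu_motion(field, pu_y=64, pu_x=96, pu_h=32, pu_w=32).shape)
```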
NASA Astrophysics Data System (ADS)
Weng, Yi; He, Xuan; Yao, Wang; Pacheco, Michelle C.; Wang, Junyi; Pan, Zhongqi
2017-07-01
In this paper, we explore the performance of a space-time block-coding (STBC) assisted multiple-input multiple-output (MIMO) scheme for modal dispersion and mode-dependent loss (MDL) mitigation in spatial-division multiplexed optical communication systems, where the weight matrices of the frequency-domain equalizer (FDE) are updated heuristically using a decision-directed recursive least squares (RLS) algorithm for convergence and channel estimation. The proposed STBC-RLS algorithm achieves a 43.6% improvement in convergence rate over conventional least mean squares (LMS) for quadrature phase-shift keying (QPSK) signals, with merely a 16.2% increase in hardware complexity. The overall optical signal-to-noise ratio (OSNR) tolerance can be improved via STBC by approximately 3.1, 4.9, and 7.8 dB for QPSK, 16-quadrature amplitude modulation (QAM), and 64-QAM at their respective bit-error rates (BER) under minimum-mean-square-error (MMSE) equalization.
Vectorization, threading, and cache-blocking considerations for hydrocodes on emerging architectures
Fung, J.; Aulwes, R. T.; Bement, M. T.; ...
2015-07-14
This work reports on considerations for improving computational performance in preparation for current and expected changes to computer architecture. The algorithms studied include increasingly complex prototypes for radiation hydrodynamics codes, such as gradient routines and diffusion matrix assembly (e.g., in [1-6]). The meshes considered for the algorithms are structured or unstructured. The considerations applied for performance improvements are meant to be general in terms of architecture (not specific to graphics processing units (GPUs) or multi-core machines, for example) and include techniques for vectorization, threading, tiling, and cache blocking. From a survey of optimization techniques on applications such as diffusion and hydrodynamics, we make general recommendations with a view toward making these techniques conceptually accessible to the applications code developer. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
NASA Astrophysics Data System (ADS)
Chang, Ching-Chun; Liu, Yanjun; Nguyen, Son T.
2015-03-01
Data hiding is a technique that embeds information into digital cover data. Work on this technique has concentrated on the uncompressed spatial domain, and it is considered more challenging to perform in the compressed domain, i.e., vector quantization, JPEG, and block truncation coding (BTC). In this paper, we propose a new data hiding scheme for BTC-compressed images. In the proposed scheme, a dynamic programming strategy is used to search for the optimal bijective mapping function for LSB substitution. Then, according to the optimal solution, each mean value embeds three secret bits to obtain high hiding capacity with low distortion. The experimental results indicate that the proposed scheme obtains both higher hiding capacity and higher hiding efficiency than four existing schemes, while ensuring good visual quality of the stego-image. In addition, the proposed scheme achieves a bit rate as low as that of the original BTC algorithm.
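For orientation, here is a minimal sketch of plain BTC coding of one block plus a naive 3-LSB embedding into a quantization level; the paper's dynamic-programming search for an optimal bijective LSB mapping is not reproduced.

```python
import numpy as np

def btc_encode(block):
    """Two-level BTC of one image block: a bitmap plus low/high
    reconstruction values that preserve the block mean and variance."""
    m, s = block.mean(), block.std()
    bitmap = block >= m
    q, n = int(bitmap.sum()), block.size
    if q in (0, n):                     # flat block: one level suffices
        return bitmap, m, m
    low = m - s * np.sqrt(q / (n - q))
    high = m + s * np.sqrt((n - q) / q)
    return bitmap, low, high

def embed_lsbs(value, bits):
    """Naive stand-in for optimized LSB substitution: overwrite the
    three LSBs of a quantization level with three secret bits."""
    return (int(round(value)) & ~0b111) | (bits & 0b111)

block = np.random.default_rng(1).integers(0, 256, size=(4, 4))
bitmap, low, high = btc_encode(block)
print(embed_lsbs(low, 0b101), embed_lsbs(high, 0b011))
```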
NASA Astrophysics Data System (ADS)
González, Diego; Botella, Guillermo; García, Carlos; Prieto, Manuel; Tirado, Francisco
2013-12-01
This contribution focuses on the optimization of matching-based motion estimation algorithms widely used in video coding standards, using an Altera custom-instruction-based paradigm and a combination of synchronous dynamic random access memory (SDRAM) with on-chip memory in Nios II processors. A complete profile of the algorithms is obtained before optimization, locating the performance-critical code sections; a custom instruction set is then created and added to the specific design, enhancing the original system. In addition, every possible memory combination between on-chip memory and SDRAM has been tested to achieve the best performance. The final throughput of the complete designs is shown. This manuscript outlines a low-cost system, mapped using very-large-scale integration technology, which accelerates software algorithms by converting them into custom hardware logic blocks, and it identifies the best combination of on-chip memory and SDRAM for the Nios II processor.
Fast image interpolation for motion estimation using graphics hardware
NASA Astrophysics Data System (ADS)
Kelly, Francis; Kokaram, Anil
2004-05-01
Motion estimation and compensation is the key to high-quality video coding. Block-matching motion estimation is used in most video codecs, including MPEG-2, MPEG-4, H.263 and H.26L. Motion estimation is also a key component in the digital restoration of archived video and in post-production and special effects in the movie industry. Sub-pixel accurate motion vectors can improve the quality of the vector field and lead to more efficient video coding. However, sub-pixel accuracy requires interpolation of the image data. Image interpolation is a key requirement of many image processing algorithms. Often interpolation can be a bottleneck in these applications, especially in motion estimation, due to the large number of pixels involved. In this paper we propose using commodity computer graphics hardware for fast image interpolation. We use the full-search block-matching algorithm to illustrate the problems and limitations of using graphics hardware in this way.
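A compact full-search block matcher of the kind referred to above (integer-pel only; sub-pixel refinement would interpolate around the returned vector):

```python
import numpy as np

def full_search(ref, cur, by, bx, bsize=16, search=8):
    """Full-search block matching: scan every candidate offset in the
    search window, keep the one minimizing the sum of absolute differences."""
    block = cur[by:by + bsize, bx:bx + bsize].astype(np.int32)
    best, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + bsize > ref.shape[0] or x + bsize > ref.shape[1]:
                continue
            sad = np.abs(ref[y:y + bsize, x:x + bsize].astype(np.int32) - block).sum()
            if best is None or sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv, best

rng = np.random.default_rng(2)
ref = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
cur = np.roll(ref, (3, -2), axis=(0, 1))   # known shift to recover
print(full_search(ref, cur, by=24, bx=24))  # expect mv (-3, 2), SAD 0
```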
Numerical Solution of the Three-Dimensional Navier-Stokes Equation.
1982-03-01
compressible, viscous fluid in an arbitrary geometry. We wish to use a grid-generating scheme, so we assume that the geometry of the physical problem is given in... [the Jaco]bian J of the mapping are provided. (For work on grid-generating schemes see [4], [5], or [6].) Hence we must solve the following system of equations... [Given] these limitations, the data structure used in the ILLIAC code is to partition the grid into 8 x 8 x 8 blocks. A row of these blocks in a given...
Beer Drinking Games: Categories, Level of Risk, and their Correlation with Sensation Seeking
1994-07-01
Productivity of Populus in monoclonal and polyclonal blocks at three spacings.
Dean S. DeBell; Constance A. Harrington
1997-01-01
Four Populus clones were grown at three spacings (0.5, 1.0, and 1.5 m) in monoclonal plots and in polyclonal plots with all clones in intimate mixture. After the third year, many individual tree and stand traits differed significantly by clone, spacing, deployment method, and their interactions. Differences among clones in growth and stem form were...
Calculation of Transport Coefficients in Dense Plasma Mixtures
NASA Astrophysics Data System (ADS)
Haxhimali, T.; Cabot, W. H.; Caspersen, K. J.; Greenough, J.; Miller, P. L.; Rudd, R. E.; Schwegler, E. R.
2011-10-01
We use classical molecular dynamics (MD) to estimate species diffusivity and viscosity in mixed dense plasmas. The Yukawa potential is used to describe the screened Coulomb interaction between the ions. This potential has been used widely, providing the basis for models of dense stellar materials, inertially confined plasmas, and colloidal particles in electrolytes. We calculate transport coefficients in equilibrium simulations using the Green-Kubo relation over a range of thermodynamic conditions, including the viscosity and the self-diffusivity for each component of the mixture. The interdiffusivity (or mutual diffusivity) can then be related to the self-diffusivities by using a generalization of the Darken equation. We have also employed non-equilibrium MD to estimate interdiffusivity during the broadening of the interface between two regions, each with a high concentration of either species. Here we present results for an asymmetric mixture of Ar and H; these can easily be extended to other plasma mixtures. A main motivation for this study is to develop accurate transport models that can be incorporated into hydrodynamic codes to study hydrodynamic instabilities. This work was performed under the auspices of the US Dept. of Energy by Lawrence Livermore National Security, LLC under Contract DE-AC52-07NA27344.
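As an illustration of the Green-Kubo route to a transport coefficient, the sketch below estimates a self-diffusivity from a velocity trajectory via the velocity autocorrelation function; the synthetic data stand in for the Yukawa-plasma MD output.

```python
import numpy as np

def self_diffusivity(vel, dt):
    """Green-Kubo self-diffusivity from an MD velocity trajectory.

    vel has shape (n_steps, n_atoms, 3); D = (1/3) * integral of the
    velocity autocorrelation function, averaged over time origins and atoms."""
    n = vel.shape[0]
    vacf = np.array([(vel[:n - lag] * vel[lag:]).sum(axis=2).mean()
                     for lag in range(n // 2)])
    return np.trapz(vacf, dx=dt) / 3.0

rng = np.random.default_rng(3)
vel = rng.normal(size=(400, 64, 3)) * 0.1   # stand-in thermal velocities
print(self_diffusivity(vel, dt=0.005))
```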
NASA Astrophysics Data System (ADS)
Wendler, Th.; Meyer-Ebrecht, D.
1982-01-01
Picture archiving and communication systems, especially those for medical applications, offer the potential to integrate various image sources of different natures. A major problem, however, is the incompatibility of the different matrix sizes and data formats. This may be overcome by a novel hierarchical coding process, which could lead to a unified picture format standard. A picture coding scheme is described which decomposes a given (2^n) x (2^n) picture matrix into a basic (2^m) x (2^m) coarse information matrix (representing lower spatial frequencies) and a set of n-m detail matrices containing information of increasing spatial resolution. Thus, the picture is described by an ordered set of data blocks rather than by a full-resolution matrix of pixels. The blocks of data are transferred and stored using data formats that have to be standardized throughout the system. Picture sources that produce pictures of different resolutions will provide the coarse-matrix data block and, additionally, only those detail matrices that correspond to their required resolution. Correspondingly, only those detail-matrix blocks need be retrieved from the picture base that are actually required for softcopy or hardcopy output. Thus, picture sources and retrieval terminals of diverse natures, and retrieval processes for diverse purposes, are easily made compatible. Furthermore, this approach yields an economic use of storage space and transmission capacity: in contrast to fixed formats, redundant data blocks are always skipped. The user will get a coarse representation even of a high-resolution picture almost instantaneously, with details added gradually, and may abort transmission at any desired detail level. The coding scheme applies the S-transform, a simple add/subtract algorithm basically derived from the Hadamard transform. Thus, additional data compression can easily be achieved, especially for high-resolution pictures, by applying appropriate non-linear and/or adaptive quantizing.
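One level of the described decomposition can be sketched as a simple add/subtract transform on 2x2 blocks, producing the coarse matrix plus three detail matrices; the integer bookkeeping needed for exact lossless reconstruction is omitted from this sketch.

```python
import numpy as np

def s_transform_level(img):
    """One add/subtract level on 2x2 blocks: a half-size coarse matrix
    plus three detail matrices (horizontal, vertical, diagonal)."""
    a = img[0::2, 0::2].astype(np.int32)
    b = img[0::2, 1::2].astype(np.int32)
    c = img[1::2, 0::2].astype(np.int32)
    d = img[1::2, 1::2].astype(np.int32)
    coarse = (a + b + c + d) // 4               # lower-resolution picture
    h, v, g = a + b - c - d, a - b + c - d, a - b - c + d
    return coarse, (h, v, g)

img = np.arange(64, dtype=np.int32).reshape(8, 8)
coarse, details = s_transform_level(img)
# recursing on `coarse` yields the n-m level hierarchy described above
print(coarse.shape, [m.shape for m in details])
```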
NASA Technical Reports Server (NTRS)
Shyam, Vikram
2010-01-01
A preprocessor for the Computational Fluid Dynamics (CFD) code TURBO has been developed and tested. The preprocessor converts grids produced by GridPro (Program Development Company (PDC)) into a format readable by TURBO and generates the necessary input files associated with the grid. The preprocessor also generates information that enables the user to decide how to allocate the computational load in a multiple block per processor scenario.
Modulation/demodulation techniques for satellite communications. Part 1: Background
NASA Technical Reports Server (NTRS)
Omura, J. K.; Simon, M. K.
1981-01-01
Basic characteristics of digital data transmission systems described include the physical communication links, the notion of bandwidth, FCC regulations, and performance measurements such as bit rates, bit error probabilities, throughputs, and delays. The error probability performance and spectral characteristics of various modulation/demodulation techniques commonly used or proposed for use in radio and satellite communication links are summarized. Forward error correction with block or convolutional codes is also discussed along with the important coding parameter, channel cutoff rate.
NASA Astrophysics Data System (ADS)
Yan, Beichuan; Regueiro, Richard A.
2018-02-01
A three-dimensional (3D) DEM code for simulating complex-shaped granular particles is parallelized using the message-passing interface (MPI). The concepts of link-block, ghost/border layer, and migration layer are put forward in the design of the parallel algorithm, and a theoretical function for 3D DEM scalability and memory usage is derived. Many performance-critical implementation details are managed optimally to achieve high performance and scalability, such as minimizing communication overhead, maintaining dynamic load balance, handling particle migrations across block borders, transmitting C++ dynamic particle objects between MPI processes efficiently, and eliminating redundant contact information between adjacent MPI processes. The code executes on multiple US Department of Defense (DoD) supercomputers and is tested on up to 2048 compute nodes simulating 10 million three-axis ellipsoidal particles. Performance analyses of the code, including speedup, efficiency, scalability, and granularity across five orders of magnitude of simulation scale (number of particles), are provided, and they demonstrate high speedup and excellent scalability. It is also found that communication time is a decreasing function of the number of compute nodes in strong-scaling measurements. The code's capability of simulating a large number of complex-shaped particles on modern supercomputers will be of value both in laboratory studies of the micromechanical properties of granular materials and in many realistic engineering applications involving granular materials.
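A toy analogue of the ghost/border-layer exchange, written with mpi4py rather than the authors' C++ code: each rank owns one 1-D block of the domain and swaps its border particles with neighbouring ranks so contacts across block faces can be detected locally (periodic wrap assumed for simplicity).

```python
from mpi4py import MPI  # assumes an MPI environment; run with mpiexec -n 4
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# each rank owns a 1-D block [block_lo, block_hi) holding 50 particles
block_lo, block_hi = rank * 10.0, (rank + 1) * 10.0
parts = np.random.default_rng(rank).uniform(block_lo, block_hi, 50)
cutoff = 1.0                                   # contact-detection range

border_hi = parts[parts > block_hi - cutoff]   # border layer toward right
border_lo = parts[parts < block_lo + cutoff]   # border layer toward left

right, left = (rank + 1) % size, (rank - 1) % size
# send our border layers out; receive neighbours' borders as ghost layers
ghost_from_left = comm.sendrecv(border_hi, dest=right, source=left)
ghost_from_right = comm.sendrecv(border_lo, dest=left, source=right)
print(rank, len(ghost_from_left), len(ghost_from_right))
```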
Weight distributions for turbo codes using random and nonrandom permutations
NASA Technical Reports Server (NTRS)
Dolinar, S.; Divsalar, D.
1995-01-01
This article takes a preliminary look at the weight distributions achievable for turbo codes using random, nonrandom, and semirandom permutations. Due to the recursiveness of the encoders, it is important to distinguish between self-terminating and non-self-terminating input sequences. The non-self-terminating sequences have little effect on decoder performance, because they accumulate high encoded weight until they are artificially terminated at the end of the block. From probabilistic arguments based on selecting the permutations randomly, it is concluded that the self-terminating weight-2 data sequences are the most important consideration in the design of constituent codes; higher-weight self-terminating sequences have successively decreasing importance. Also, increasing the number of codes and, correspondingly, the number of permutations makes it more and more likely that the bad input sequences will be broken up by one or more of the permuters. It is possible to design nonrandom permutations that ensure that the minimum distance due to weight-2 input sequences grows roughly as the square root of (2N), where N is the block length. However, these nonrandom permutations amplify the bad effects of higher-weight inputs, and as a result they are inferior in performance to randomly selected permutations. But there are 'semirandom' permutations that perform nearly as well as the designed nonrandom permutations with respect to weight-2 input sequences and are not as susceptible to being foiled by higher-weight inputs.
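A minimal version of the 'semirandom' construction: draw a permutation greedily so that inputs within s positions of each other are sent more than s apart, retrying when the greedy pass dead-ends. Parameter names are illustrative.

```python
import numpy as np

def s_random_permutation(n, s, rng):
    """Draw an S-random ('semirandom') permutation of length n: each new
    output must differ by more than s from the previous s outputs, which
    spreads the weight-2 input sequences that dominate minimum distance."""
    while True:                                  # retry until a draw succeeds
        perm, remaining = [], list(rng.permutation(n))
        for _ in range(n):
            pick = next((p for p in remaining
                         if all(abs(p - q) > s for q in perm[-s:])), None)
            if pick is None:
                break                            # greedy pass dead-ended
            perm.append(pick)
            remaining.remove(pick)
        if len(perm) == n:
            return np.array(perm)

print(s_random_permutation(64, s=4, rng=np.random.default_rng(4))[:16])
```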
Accumulate repeat accumulate codes
NASA Technical Reports Server (NTRS)
Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung
2004-01-01
In this paper we propose an innovative channel coding scheme called Accumulate Repeat Accumulate (ARA) codes. This class of codes can be viewed as serial turbo-like codes or as a subclass of low-density parity-check (LDPC) codes, so belief propagation can be used for iterative decoding of ARA codes on a graph. The encoder structure for this class can be viewed as a precoded Repeat Accumulate (RA) code or as a precoded Irregular Repeat Accumulate (IRA) code, where an accumulator is simply chosen as the precoder. ARA codes thus have a simple and very fast encoder structure when they represent LDPC codes. Based on density evolution for LDPC codes, we show through examples that for a maximum variable node degree of 5, a minimum bit SNR as low as 0.08 dB from channel capacity for rate 1/2 can be achieved as the block size goes to infinity. Thus, for a fixed low maximum variable node degree, the ARA threshold outperforms not only the RA and IRA codes but also the best known LDPC codes with the same maximum node degree. Furthermore, by puncturing the accumulators, any desired high-rate codes close to rate 1 can be obtained, with thresholds that stay uniformly close to the channel capacity thresholds. Iterative decoding simulation results are provided. ARA codes also have a projected graph or protograph representation that allows for high-speed decoder implementation.
NASA Technical Reports Server (NTRS)
Mullins, D. W., Jr.; Senaratne, N.; Lacey, J. C., Jr.
1984-01-01
In the present paper, a report is presented on the effect of pH and carbonate on the hydrolysis rate constants of N-blocked and free aminoacyl adenylate anhydrides. Whereas the hydrolysis of free aminoacyl adenylates seems principally catalyzed by OH(-), the hydrolysis of the N-blocked species is also catalyzed by H(+), giving this compound a U-shaped hydrolysis vs. pH curve. At pH values less than 8, carbonate has an extreme catalytic effect on the hydrolysis of the free aminoacyl-AMP anhydride, but essentially no effect on the hydrolysis of the N-blocked aminoacyl-AMP anhydride. In addition, the N-blocked aminoacyl-AMP anhydride is a very efficient generator of peptides using free glycine as the acceptor. The possible significance of these observations for prebiological peptide synthesis is discussed.
NASA Astrophysics Data System (ADS)
Gutiérrez Marcantoni, L. F.; Tamagno, J.; Elaskar, S.
2017-10-01
A new solver developed within the framework of OpenFOAM 2.3.0, called rhoCentralRfFoam, which can be regarded as an evolution of rhoCentralFoam, is presented. Its use in numerical simulations of the initiation and propagation of planar detonation waves in combustible H2-Air and H2-O2-Ar mixtures is described. The unsteady one-dimensional (1D) Euler equations, coupled with source terms that account for chemical activity, are solved numerically using the second-order scheme of Kurganov, Noelle, and Petrova in a domain discretized with finite volumes. The computational code can work with any number of species and corresponding reactions, but here it was tested with 13 species (one of them inert) and 33 elementary reactions. A gaseous igniter, which acts like a shock-tube driver and is powerful enough to generate a strong shock capable of triggering exothermic chemical reactions in fuel mixtures, is used to start planar detonations. The following main aspects of planar detonations are treated here: induction times of the combustible mixtures cited above and the required mesh resolutions; convergence of overdriven detonations to Chapman-Jouguet states; detonation structure (ZND model); and the use of reflected shocks to determine induction times experimentally. The rhoCentralRfFoam code was verified by comparing numerical results, and it was validated against analytical results and experimental data.
Phase equilibria computations of multicomponent mixtures at specified internal energy and volume
NASA Astrophysics Data System (ADS)
Myint, Philip C.; Nichols, Albert L., III; Springer, H. Keo
2017-06-01
Hydrodynamic simulation codes for high-energy density science applications often use internal energy and volume as their working variables. As a result, the codes must determine the thermodynamic state that corresponds to the specified energy and volume by finding the global maximum in entropy. This task is referred to as the isoenergetic-isochoric flash. Solving it for multicomponent mixtures is difficult because one must find not only the temperature and pressure consistent with the energy and volume, but also the number of phases present and the composition of the phases. The few studies on isoenergetic-isochoric flash that currently exist all require the evaluation of many derivatives that can be tedious to implement. We present an alternative approach that is based on a derivative-free method: particle swarm optimization. The global entropy maximum is found by running several instances of particle swarm optimization over different sets of randomly selected points in the search space. For verification, we compare the predicted temperature and pressure to results from the related, but simpler problem of isothermal-isobaric flash. All of our examples involve the equation of state we have recently developed for multiphase mixtures of the energetic materials HMX, RDX, and TNT. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
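A generic particle-swarm maximizer of the kind described, applied to a toy surface standing in for the mixture-entropy model; the multi-start aspect and the equation-of-state evaluations are omitted, and all names and coefficients are illustrative.

```python
import numpy as np

def pso_maximize(f, bounds, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=5):
    """Derivative-free particle swarm search for the global maximum of f."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    for _ in range(iters):
        g = pbest[pval.argmax()]                  # swarm-best point so far
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx > pval
        pbest[better], pval[better] = x[better], fx[better]
    return pbest[pval.argmax()], pval.max()

entropy = lambda z: -(z[0] - 1.2) ** 2 - (z[1] - 0.8) ** 2  # toy surface
print(pso_maximize(entropy, (np.array([0., 0.]), np.array([3., 3.]))))
```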
ABINIT: Plane-Wave-Based Density-Functional Theory on High Performance Computers
NASA Astrophysics Data System (ADS)
Torrent, Marc
2014-03-01
For several years, a continuous effort has been made to adapt electronic structure codes based on density-functional theory (DFT) to future computing architectures. Among these codes, ABINIT is based on a plane-wave description of the wave functions, which makes it possible to treat systems of any kind. Porting such a code to petascale architectures poses difficulties related to the many-body nature of the DFT equations. To improve the performance of ABINIT - especially for standard LDA/GGA ground-state and response-function calculations - several strategies have been followed. A full multi-level MPI parallelization scheme has been implemented, exploiting all possible levels and distributing both computation and memory. It allows the number of distributed processes to be increased and could not have been achieved without a strong restructuring of the code. The core algorithm used to solve the eigenproblem ("Locally Optimal Blocked Conjugate Gradient"), a blocked Davidson-like algorithm, is based on a distribution of processes combining plane-waves and bands. In addition to the distributed-memory parallelization, a full hybrid scheme has been implemented, using standard shared-memory directives (OpenMP/OpenACC) or porting some time-consuming code sections to graphics processing units (GPUs). As no simple performance model exists, the complexity of use has increased; the code's efficiency strongly depends on the distribution of processes among the numerous levels. ABINIT is able to predict the performance of several process distributions and automatically choose the most favourable one. On the other hand, a substantial effort has been made to analyse the performance of the code on petascale architectures, showing which sections of code have to be improved; they are all related to matrix algebra (diagonalization, orthogonalization). The different strategies employed to improve the code's scalability will be described. They are based on the exploration of new diagonalization algorithms, as well as the use of external optimized libraries. Part of this work has been supported by the European PRACE project (Partnership for Advanced Computing in Europe) in the framework of its work package 8.
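The eigensolver family mentioned above is available off the shelf; a small sketch using SciPy's LOBPCG on a stand-in sparse matrix (not a DFT Hamiltonian) illustrates the blocked iteration over several bands at once.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import lobpcg

# Stand-in sparse "Hamiltonian": tridiagonal, diagonal 1..n, off-diagonals -1.
n, nbands = 500, 8
H = diags([np.arange(1, n + 1, dtype=float),
           -np.ones(n - 1), -np.ones(n - 1)], [0, -1, 1])

# Random initial block of nbands trial vectors, iterated together,
# analogous to solving for several bands simultaneously.
X = np.random.default_rng(6).normal(size=(n, nbands))
w, v = lobpcg(H, X, largest=False, tol=1e-8, maxiter=500)
print(np.sort(w))   # lowest eigenvalues of the stand-in operator
```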
Labeled Nucleoside Triphosphates with Reversibly Terminating Aminoalkoxyl Groups
Hutter, Daniel; Kim, Myong-Jung; Karalkar, Nilesh; Leal, Nicole A.; Chen, Fei; Guggenheim, Evan; Visalakshi, Visa; Olejnik, Jerzy; Gordon, Steven; Benner, Steven A.
2013-01-01
Nucleoside triphosphates having a 3′-ONH2 blocking group have been prepared with and without fluorescent tags on their nucleobases. DNA polymerases were identified that accepted these, adding a single nucleotide to the 3′-end of a primer in a template-directed extension reaction that then stops. Nitrite chemistry was developed to cleave the 3′-ONH2 group under mild conditions to allow continued primer extension. Extension-cleavage-extension cycles in solution were demonstrated with untagged nucleotides and mixtures of tagged and untagged nucleotides. Multiple extension-cleavage-extension cycles were demonstrated on an Intelligent Bio-Systems Sequencer, showing the potential of the 3′-ONH2 blocking group in “next generation sequencing”. PMID:21128174
GPU COMPUTING FOR PARTICLE TRACKING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nishimura, Hiroshi; Song, Kai; Muriki, Krishna
2011-03-25
This is a feasibility study of using a modern graphics processing unit (GPU) to parallelize an accelerator particle-tracking code. To demonstrate the massive parallelization features provided by GPU computing, a simplified TracyGPU program is developed for dynamic aperture calculation. Performance, issues, and challenges arising from introducing GPUs are also discussed. General-purpose computation on graphics processing units (GPGPU) brings massive parallel computing capabilities to numerical calculation. However, the unique architecture of the GPU requires a comprehensive understanding of the hardware and programming model in order to optimize existing applications well. In the field of accelerator physics, the dynamic aperture calculation of a storage ring, which is often the most time-consuming part of accelerator modeling and simulation, can benefit from GPUs due to its embarrassingly parallel nature, which fits well with the GPU programming model. In this paper, we use the Tesla C2050 GPU, which consists of 14 multi-processors (MPs) with 32 cores on each MP, for a total of 448 cores, to host thousands of threads dynamically. A thread is a logical execution unit of the program on the GPU. In the GPU programming model, threads are grouped into a collection of blocks. Within each block, multiple threads share the same code and up to 48 KB of shared memory. Multiple thread blocks form a grid, which is executed as a GPU kernel. A simplified code that is a subset of Tracy++ [2] is developed to demonstrate the possibility of using the GPU to speed up the dynamic aperture calculation by having each thread track a particle.
NASA Astrophysics Data System (ADS)
Krasilenko, Vladimir G.; Nikolsky, Alexander I.; Lazarev, Alexander A.; Lazareva, Maria V.
2010-05-01
In this paper we show that the biologically motivated concept of time-pulse encoding offers a set of advantages (a single methodological basis, universality, simplicity of tuning, learning, and programming, among others) in the creation and design of sensor systems with parallel input-output and processing for 2D-structure hybrid and next-generation neuro-fuzzy neurocomputers. We show design principles for programmable relational optoelectronic time-pulse-encoded processors based on continuous logic, order logic, and temporal wave processes. We consider a structure that performs analog signal extraction and the sorting of analog and time-pulse-coded variables. We propose an optoelectronic realization of such a basic relational order-logic element, consisting of time-pulse-coded photoconverters (pulse-width and pulse-phase modulators) with direct and complementary outputs, a sorting network of logical elements, and programmable commutation blocks. We estimate the technical parameters of devices and processors based on such elements through simulation and experimental research: optical input signal power 0.2-20 uW, processing time 1-10 us, supply voltage 1-3 V, power consumption 10-100 uW, extended functional possibilities, and learning capability. We discuss some aspects of possible rules and principles for learning and programmable tuning to a required function or relational operation, and the realization of hardware blocks for modifications of such processors. We show that it is possible to create sorting machines, neural networks, and hybrid data-processing systems with untraditional numerical systems and picture operands on the basis of such quasi-universal, simple hardware blocks with flexible programmable tuning.
15 CFR 30.37 - Miscellaneous exemptions.
Code of Federal Regulations, 2014 CFR
2014-01-01
... of Electronic Export Information § 30.37 Miscellaneous exemptions. Filing EEI is not required for the...), exports of commodities where the value of the commodities shipped from one USPPI to one consignee on a... classification codes regardless of the total shipment value. In instances where a shipment contains a mixture of...
English Code Switching in Indonesian Language
ERIC Educational Resources Information Center
Setiawan, Dedy
2016-01-01
There is a growing phenomenon, worldwide, of inserting English words, phrases, or expressions into the local language: this is part of the acceptance of English as the current world language. Indonesia is experiencing this mixing of languages when either Indonesian or a local language is used; English words, phrases, and expressions…
Programmable and Shape-Memorizing Information Carriers.
Li, Wenbing; Liu, Yanju; Leng, Jinsong
2017-12-27
Shape memory polymers (SMPs) are expected to play increasingly important roles in space-deployable structures, smart actuators, and other high-tech areas. Nevertheless, because of difficulties in fabrication and in programming the temporary shape recovery, SMPs have not yet been widely applied in real-world settings. It is ideal to incorporate different independent functional building blocks into one material. Herein, we designed a simple method to incorporate four functional building blocks: a neat epoxy-based shape memory (neat SMEP) resin, an SMEP composited with Fe3O4 (SMEP-Fe3O4), an SMEP composited with multiwalled carbon nanotubes, and an SMEP composited with p-aminodiphenylimide, into a multicomposite in which the four region surfaces could be programmed with different language code patterns according to a preset command by imprint lithography. We then reprogrammed the initially raised code patterns into temporary flat patterns using a programming mold; when triggered by a preset stimulus such as an alternating magnetic field, a radiofrequency field, 365 nm UV light, or direct heating, the material transforms these language codes into the information to be passed to the customer. The concept introduced here can be applied to other available SMPs and provides a practical method to realize information delivery.
Piecemeal Buildup of the Genetic Code, Ribosomes, and Genomes from Primordial tRNA Building Blocks
Caetano-Anollés, Derek; Caetano-Anollés, Gustavo
2016-01-01
The origin of biomolecular machinery likely centered around an ancient and central molecule capable of interacting with emergent macromolecular complexity. tRNA is the oldest and most central nucleic acid molecule of the cell. Its co-evolutionary interactions with aminoacyl-tRNA synthetase protein enzymes define the specificities of the genetic code and those with the ribosome their accurate biosynthetic interpretation. Phylogenetic approaches that focus on molecular structure allow reconstruction of evolutionary timelines that describe the history of RNA and protein structural domains. Here we review phylogenomic analyses that reconstruct the early history of the synthetase enzymes and the ribosome, their interactions with RNA, and the inception of amino acid charging and codon specificities in tRNA that are responsible for the genetic code. We also trace the age of domains and tRNA onto ancient tRNA homologies that were recently identified in rRNA. Our findings reveal a timeline of recruitment of tRNA building blocks for the formation of a functional ribosome, which holds both the biocatalytic functions of protein biosynthesis and the ability to store genetic memory in primordial RNA genomic templates. PMID:27918435
NASA Astrophysics Data System (ADS)
Huang, Shaowei; Baba, Ken-Ichi; Murata, Masayuki; Kitayama, Ken-Ichi
2006-12-01
In traditional lambda-based multigranularity optical networks, a lambda is always treated as the basic routing unit, resulting in low wavelength utilization. On the basis of optical code division multiplexing (OCDM) technology, a novel OCDM-based multigranularity optical cross-connect (MG-OXC) is proposed. Compared with the traditional lambda-based MG-OXC, its switching capability is extended to support fiber switching, waveband switching, lambda switching, and OCDM switching. In a network composed of OCDM-based MG-OXCs, a single wavelength can be shared by distinct label-switched paths (LSPs), called OCDM-LSPs, and OCDM-LSP switching can be implemented in the optical domain. To improve network flexibility for OCDM-LSP provisioning, two kinds of switches enabling hybrid optical code (OC)-wavelength conversion are designed. Simulation results indicate that a blocking probability reduction of two orders of magnitude can be obtained by deploying only five OCs on a single wavelength. Furthermore, compared with time-division-multiplexed LSPs (TDM-LSPs), owing to asynchronous accessibility and OC conversion, OCDM-LSPs permit a simpler switch architecture and achieve better blocking performance.
Isometries and binary images of linear block codes over ℤ4 + uℤ4 and ℤ8 + uℤ8
NASA Astrophysics Data System (ADS)
Sison, Virgilio; Remillion, Monica
2017-10-01
Let F_2 be the binary field and ℤ_{2^r} the residue class ring of integers modulo 2^r, where r is a positive integer. For the finite 16-element commutative local Frobenius non-chain ring ℤ4 + uℤ4, where u is nilpotent of index 2, two weight functions are considered, namely the Lee weight and the homogeneous weight. With the appropriate application of these weights, isometric maps from ℤ4 + uℤ4 to the binary spaces F_2^4 and F_2^8, respectively, are established via the composition of other weight-based isometries. The classical Hamming weight is used on the binary space. The resulting isometries are then applied to linear block codes over ℤ4 + uℤ4 whose images are binary codes of predicted length, which may or may not be linear. Certain lower and upper bounds on the minimum distances of the binary images are also derived in terms of the parameters of the ℤ4 + uℤ4 codes. Several new codes and their images are constructed as illustrative examples. An analogous procedure is performed successfully on the ring ℤ8 + uℤ8, where u^2 = 0, which is a commutative local Frobenius non-chain ring of order 64. It turns out that the method is possible in general for the class of rings ℤ_{2^r} + uℤ_{2^r}, where u^2 = 0, for any positive integer r, using the generalized Gray map from ℤ_{2^r} to F_2^{2^{r-1}}.
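The building block underneath such composite isometries is the classical Gray map on ℤ4, which sends Lee weight to Hamming weight; a small sketch (for ℤ4 alone, not ℤ4 + uℤ4) checks the isometry on a codeword.

```python
import numpy as np

GRAY = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}   # classical Z4 Gray map
LEE = {0: 0, 1: 1, 2: 2, 3: 1}                        # Lee weights on Z4

def gray_image(codeword):
    """Binary image of a Z4 codeword; the Gray map is an isometry from
    (Z4^n, Lee weight) to (F2^{2n}, Hamming weight)."""
    return np.array([bit for sym in codeword for bit in GRAY[sym]])

def lee_weight(codeword):
    return sum(LEE[sym] for sym in codeword)

c = [1, 2, 3, 0, 2]
img = gray_image(c)
assert lee_weight(c) == int(img.sum())   # Hamming weight of the image
print(img, lee_weight(c))
```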
A contourlet transform based algorithm for real-time video encoding
NASA Astrophysics Data System (ADS)
Katsigiannis, Stamos; Papaioannou, Georgios; Maroulis, Dimitris
2012-06-01
In recent years, real-time video communication over the internet has been widely utilized for applications like video conferencing. Streaming live video over heterogeneous IP networks, including wireless networks, requires video coding algorithms that can support various levels of quality in order to adapt to the network end-to-end bandwidth and transmitter/receiver resources. In this work, a scalable video coding and compression algorithm based on the Contourlet Transform is proposed. The algorithm allows for multiple levels of detail, without re-encoding the video frames, by just dropping the encoded information referring to higher resolution than needed. Compression is achieved by means of lossy and lossless methods, as well as variable bit rate encoding schemes. Furthermore, due to the transformation utilized, it does not suffer from blocking artifacts that occur with many widely adopted compression algorithms. Another highly advantageous characteristic of the algorithm is the suppression of noise induced by low-quality sensors usually encountered in web-cameras, due to the manipulation of the transform coefficients at the compression stage. The proposed algorithm is designed to introduce minimal coding delay, thus achieving real-time performance. Performance is enhanced by utilizing the vast computational capabilities of modern GPUs, providing satisfactory encoding and decoding times at relatively low cost. These characteristics make this method suitable for applications like video-conferencing that demand real-time performance, along with the highest visual quality possible for each user. Through the presented performance and quality evaluation of the algorithm, experimental results show that the proposed algorithm achieves better or comparable visual quality relative to other compression and encoding methods tested, while maintaining a satisfactory compression ratio. Especially at low bitrates, it provides more human-eye friendly images compared to algorithms utilizing block-based coding, like the MPEG family, as it introduces fuzziness and blurring instead of artificial block artifacts.
NASA Astrophysics Data System (ADS)
Wang, Qian; Wu, Jianning; Meng, Guihua; Wang, Yixi; Liu, Zhiyong; Guo, Xuhong
2018-06-01
Wetting materials with controllable oil/water separation ability have drawn increasing public attention. In this article, a novel cotton fabric (CF) with a pH-controlled wettability transition was designed via a simple, environmentally friendly coating of copolymer/SiO2 nanoparticles, poly(heptadecafluorodecyl methacrylate-co-3-trimethoxysilylpropyl methacrylate-co-2-vinylpyridine) (PHDFDMA-co-PTMSPMA-co-P2VP). The structures and morphologies of the coated CF were confirmed by Fourier transform infrared spectroscopy (FTIR), NMR, GPC, scanning electron microscopy (SEM), and X-ray photoelectron spectroscopy (XPS). The coated CF exhibits switchable wettability between superhydrophobicity and superhydrophilicity by adjusting the pH value. When the coated CF is placed in neutral water (pH = 7.0), it is superhydrophobic in air and superoleophilic: it allows oil to pass through while blocking water. In an acidic aqueous environment (pH = 3.0), however, it turns superhydrophilic and underwater superoleophobic, allowing water to penetrate while blocking oil. Therefore, the coated CF can be applied to separate oil/water mixtures and ternary oil/water/water mixtures continuously, as well as different surfactant-stabilized emulsions (oil-in-water, water-in-oil), and it displays superior separation capacity for oil-water mixtures, with a high efficiency of 99.8%. Moreover, cycling tests demonstrate that the coated CF possesses excellent recyclability and durability. Such an eminent, controllable water/oil permeation feature makes the coated CF an ideal candidate for oil/water separation.
Numerical computation of space shuttle orbiter flow field
NASA Technical Reports Server (NTRS)
Tannehill, John C.
1988-01-01
A new parabolized Navier-Stokes (PNS) code has been developed to compute the hypersonic, viscous chemically reacting flow fields around 3-D bodies. The flow medium is assumed to be a multicomponent mixture of thermally perfect but calorically imperfect gases. The new PNS code solves the gas dynamic and species conservation equations in a coupled manner using a noniterative, implicit, approximately factored, finite difference algorithm. The space-marching method is made well-posed by special treatment of the streamwise pressure gradient term. The code has been used to compute hypersonic laminar flow of chemically reacting air over cones at angle of attack. The results of the computations are compared with the results of reacting boundary-layer computations and show excellent agreement.
Force field development with GOMC, a fast new Monte Carlo molecular simulation code
NASA Astrophysics Data System (ADS)
Mick, Jason Richard
In this work GOMC (GPU Optimized Monte Carlo), a fast, flexible, and free new molecular Monte Carlo code for the simulation of atomistic chemical systems, is presented. The results of a large Lennard-Jonesium simulation in the Gibbs ensemble are presented. Force fields developed using the code are also presented. To fit the models, a quantitative fitting process is outlined, using a scoring function and heat maps. The presented n-6 force fields include force fields for noble gases and branched alkanes. These force fields are shown to be the most accurate LJ or n-6 force fields to date for these compounds, capable of reproducing pure-fluid behavior and binary-mixture behavior to a high degree of accuracy.
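A bare-bones CPU sketch of the kind of canonical-ensemble Metropolis move such a code performs (reduced Lennard-Jones units; recomputing the total energy each step is O(N^2) and wasteful, but keeps the sketch short). This is an illustrative stand-in, not GOMC's implementation.

```python
import numpy as np

def lj_energy(pos, box, rc=2.5):
    """Total Lennard-Jones energy with minimum-image convention."""
    e = 0.0
    for i in range(len(pos) - 1):
        d = pos[i + 1:] - pos[i]
        d -= box * np.round(d / box)            # minimum image
        r2 = (d ** 2).sum(axis=1)
        inv6 = 1.0 / r2[r2 < rc * rc] ** 3
        e += float(np.sum(4.0 * (inv6 ** 2 - inv6)))
    return e

def metropolis_step(pos, box, beta, dmax, rng):
    """One canonical-ensemble displacement move with Metropolis acceptance."""
    i = rng.integers(len(pos))
    trial = pos.copy()
    trial[i] = (trial[i] + rng.uniform(-dmax, dmax, 3)) % box
    dE = lj_energy(trial, box) - lj_energy(pos, box)
    accept = rng.random() < np.exp(min(-beta * dE, 0.0))
    return (trial, True) if accept else (pos, False)

rng = np.random.default_rng(7)
box = 5.0
pos = rng.uniform(0, box, size=(30, 3))
for _ in range(100):
    pos, accepted = metropolis_step(pos, box, beta=1.0, dmax=0.2, rng=rng)
print(lj_energy(pos, box))
```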
NASA Technical Reports Server (NTRS)
Schallhorn, Paul; Majumdar, Alok; Tiller, Bruce
2001-01-01
A general purpose, one dimensional fluid flow code is currently being interfaced with the thermal analysis program SINDA/G. The flow code, GFSSP, is capable of analyzing steady state and transient flow in a complex network. The flow code is capable of modeling several physical phenomena including compressibility effects, phase changes, body forces (such as gravity and centrifugal) and mixture thermodynamics for multiple species. The addition of GFSSP to SINDA/G provides a significant improvement in convective heat transfer modeling for SINDA/G. The interface development is conducted in multiple phases. This paper describes the first phase of the interface which allows for steady and quasisteady (unsteady solid, steady fluid) conjugate heat transfer modeling.
Advanced imaging communication system
NASA Technical Reports Server (NTRS)
Hilbert, E. E.; Rice, R. F.
1977-01-01
Key elements of the system are imaging and nonimaging sensors, a data compressor/decompressor, an interleaved Reed-Solomon block coder, a convolutional-encoded/Viterbi-decoded telemetry channel, and Reed-Solomon decoding. Data compression provides an efficient representation of sensor data, and channel coding improves the reliability of data transmission.
Metal poisons for criticality in waste streams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williamson, T.G.; Goslen, A.Q.
1996-12-31
Many of the wastes from processing fissile materials contain metals that may serve as neutron poisons. It would be advantageous to the criticality evaluation of these wastes to demonstrate that the poisons remain with the fissile materials and to demonstrate an always-safe poison-to-fissile ratio. The first task, demonstrating that the materials stay together, is the job of the chemist; the second, calculating an always-safe ratio, is an object of this paper. In an earlier study, the authors demonstrated safe ratios for iron, manganese, and chromium oxides to 235U. In these studies, the Hansen-Roach 16-group cross sections were used with the Savannah River Site code HRXN. Multiplication factors were computed, and safe ratios were defined such that the adjusted neutron multiplication factors (k values) were <0.95. These safe weight ratios were Fe:235U = 77:1, Mn:235U = 30:1, and Cr:235U = 52:1. Palmer has shown that for certain mixtures of aluminum, iron, and zirconium with 235U, the computed infinite multiplication factors may differ by as much as 20% with different cross sections and processing systems. Parks et al. have further studied these mixtures and state, "...these metal/uranium mixtures are very sensitive to the metal cross-section data in the intermediate-energy range and the processing methods that are used." They conclude with a call for more experimental data. The purpose of this study is to reexamine earlier work with the cross sections and processing codes used at Westinghouse Savannah River Company today. This study focuses on 235U mixtures with iron, manganese, and chromium. Sodium is included in the list of poisons because it is abundant in many of the waste materials.
Chain exchange in triblock copolymer micelles
NASA Astrophysics Data System (ADS)
Lu, Jie; Lodge, Timothy; Bates, Frank
2015-03-01
Block polymer micelles offer a host of technological applications including drug delivery, viscosity modification, toughening of plastics, and colloidal stabilization. Molecular exchange between micelles directly influences the stability, structure, and access to an equilibrium state in such systems, and this property has recently been shown to be extraordinarily sensitive to the core block molecular weight in diblock copolymers. The dependence of micelle chain exchange dynamics on molecular architecture has not been reported. The present work conclusively addresses this issue using time-resolved small-angle neutron scattering (TR-SANS) applied to complementary S-EP-S and EP-S-EP triblock copolymers dissolved in squalane, a selective solvent for the EP blocks, where S and EP refer to poly(styrene) and poly(ethylenepropylene), respectively. Following the overall SANS intensity as a function of time from judiciously deuterium-labelled polymer and solvent mixtures directly probes the rate of molecular exchange. Remarkably, the two triblocks display exchange rates that differ by approximately ten orders of magnitude, even though the solvophobic S blocks are of comparable size. This discovery is considered in the context of a model that successfully explains S-EP diblock exchange dynamics.
Recycling of waste spent catalyst in road construction and masonry blocks.
Taha, Ramzi; Al-Kamyani, Zahran; Al-Jabri, Khalifa; Baawain, Mahad; Al-Shamsi, Khalid
2012-08-30
Waste spent catalyst is generated in Oman as a result of the cracking process of petroleum oil in the Mina Al-Fahl and Sohar Refineries. The disposal of spent catalyst is of major concern to oil refineries. Stabilized spent catalyst was evaluated for use in road construction as a whole replacement for crushed aggregates in the sub-base and base layers, and as a partial replacement for Portland cement in masonry block manufacturing. Stabilization is necessary because the waste spent catalyst exists in powder form, and binders are needed to attain the strength required to qualify its use in road construction. Raw spent catalyst was also blended with other virgin aggregates, as a sand or filler replacement, for use in road construction. Compaction, unconfined compressive strength, and leaching tests were performed on the stabilized mixtures. For use in masonry construction, blocks were tested for unconfined compressive strength at various curing periods. Results indicate that the spent catalyst has promising potential for use in road construction and masonry blocks without causing any negative environmental impacts.
Experimental QR code optical encryption: noise-free data recovering.
Barrera, John Fredy; Mira-Agudelo, Alejandro; Torroba, Roberto
2014-05-15
We report, to our knowledge for the first time, the experimental implementation of a quick response (QR) code as a "container" in an optical encryption system. A joint transform correlator architecture in an interferometric configuration is chosen as the experimental scheme. As the implementation is not possible in a single step, a multiplexing procedure is applied to encrypt the QR code of the original information. Once the QR code is correctly decrypted, the speckle noise present in the recovered QR code is eliminated by a simple digital procedure. Finally, the original information is retrieved completely free of any kind of degradation after reading the QR code. Additionally, we propose and implement a new protocol in which the reception of the encrypted QR code and its decryption, the digital block processing, and the reading of the decrypted QR code are performed using only one device (smartphone, tablet, or computer). The overall method proves to produce an outcome attractive enough to make adoption of the technique a plausible option. Experimental results are presented to demonstrate the practicality of the proposed security system.
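One plausible form of such a simple digital procedure (the paper's exact processing is not detailed here) is a per-module majority vote, which exploits the fact that each QR module spans many pixels; the sketch below demonstrates the idea on a synthetic pattern with assumed module size and noise level.

```python
import numpy as np

rng = np.random.default_rng(0)
MODULES, PX = 21, 8                     # 21x21 modules, 8x8 pixels each (assumed)
qr = rng.integers(0, 2, (MODULES, MODULES))          # stand-in QR pattern
img = np.kron(qr, np.ones((PX, PX)))                 # rendered image
noisy = np.clip(img + 0.8 * rng.standard_normal(img.shape), 0, 1)

# Majority vote (mean threshold) inside each module restores the binary
# pattern, since pixel-level speckle averages out over the module.
blocks = noisy.reshape(MODULES, PX, MODULES, PX).mean(axis=(1, 3))
cleaned = (blocks > 0.5).astype(int)
print("fraction of modules recovered:", (cleaned == qr).mean())
```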
Are mammal olfactory signals hiding right under our noses?
NASA Astrophysics Data System (ADS)
Apps, Peter James
2013-06-01
Chemical communication via olfactory semiochemicals plays a central role in the social behaviour and reproduction of mammals, but even after four decades of research, only a few mammal semiochemicals have been chemically characterized. Expectations that mammal chemical signals are coded by quantitative relationships among multiple components have persisted since the earliest studies of mammal semiochemistry, and continue to direct research strategies. Nonetheless, the chemistry of mammal excretions and secretions and the characteristics of those semiochemicals that have been identified show that mammal semiochemicals are as likely to be single compounds as to be mixtures, and are as likely to be coded by the presence and absence of chemical compounds as by their quantities. There is very scant support for the view that mammal semiochemicals code signals as specific ratios between components, and no evidence that they depend on a Gestalt or a chemical image. Of 31 semiochemicals whose chemical composition is known, 15 have a single component and 16 are coded by presence/absence; one may depend on a ratio between two compounds, and none of them is a chemical image. The expectation that mammal chemical signals have multiple components underpins the use of multivariate statistical analyses of chromatographic data, but the ways in which multivariate statistics are commonly used to search for active mixtures lead to single messenger compounds, and signals sent by the presence and absence of compounds, being overlooked. Research on mammal semiochemicals needs to accommodate the possibility that simple qualitative differences are no less likely than complex quantitative differences to encode chemical signals.
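The methodological point can be made concrete with a small sketch: the same chromatographic peak table reads very differently when treated quantitatively than when recoded as presence/absence. The peak areas below are invented for illustration.

```python
import numpy as np

# Rows: individuals; columns: compound peak areas (invented numbers).
peaks = np.array([
    [120.0, 0.0, 35.0],    # individual A: compound 2 absent
    [118.0, 6.0, 33.0],    # individual B: compound 2 present
])

# Common multivariate preprocessing: scale rows to relative proportions.
proportions = peaks / peaks.sum(axis=1, keepdims=True)
print(proportions.round(2))   # rows differ only modestly in proportion

# Binary recoding makes the qualitative (presence/absence) signal explicit.
presence = (peaks > 0).astype(int)
print(presence)
```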
ecode - Electron Transport Algorithm Testing v. 1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Franke, Brian C.; Olson, Aaron J.; Bruss, Donald Eugene
2016-10-05
ecode is a Monte Carlo code used for testing algorithms related to electron transport. The code can read basic physics parameters, such as energy-dependent stopping powers and screening parameters. The code permits simple planar geometries of slabs or cubes. Parallelization consists of domain replication, with work distributed at the start of the calculation and statistical results gathered at the end of the calculation. Some basic routines (such as input parsing, random number generation, and statistics processing) are shared with the Integrated Tiger Series codes. A variety of algorithms for uncertainty propagation are incorporated based on the stochastic collocation and stochastic Galerkin methods. These permit uncertainty only in the total and angular scattering cross sections. The code contains algorithms for simulating stochastic mixtures of two materials. The physics is approximate, ranging from mono-energetic and isotropic scattering to screened Rutherford angular scattering and Rutherford energy-loss scattering (simple electron transport models). No production of secondary particles is implemented, and no photon physics is implemented.
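One common approach to stochastic two-material mixtures, Markovian chord sampling, is sketched below: material chord lengths are drawn from exponential distributions and accumulated across a slab. The mean chord lengths, slab width, and alternation rule are assumptions for illustration, not necessarily ecode's algorithm.

```python
import random

random.seed(1)
MEAN_CHORD = {0: 0.3, 1: 0.7}    # cm, assumed mean chord length per material
SLAB = 5.0                       # cm, slab thickness (assumed)

def sample_realization():
    """Return a list of (material, thickness) segments spanning the slab."""
    x, mat, segs = 0.0, random.choice((0, 1)), []
    while x < SLAB:
        chord = min(random.expovariate(1.0 / MEAN_CHORD[mat]), SLAB - x)
        segs.append((mat, chord))
        x += chord
        mat = 1 - mat            # alternate materials at each interface
    return segs

# Average material-0 volume fraction over many sampled realizations;
# should land near 0.3 / (0.3 + 0.7) = 0.30 for long slabs.
frac = sum(sum(t for m, t in sample_realization() if m == 0)
           for _ in range(2000)) / (2000 * SLAB)
print(f"material-0 volume fraction ~ {frac:.2f}")
```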