Generalized type II hybrid ARQ scheme using punctured convolutional coding
NASA Astrophysics Data System (ADS)
Kallel, Samir; Haccoun, David
1990-11-01
A method is presented to construct rate-compatible convolutional (RCC) codes from known high-rate punctured convolutional codes, obtained from best-rate 1/2 codes. The construction method is rather simple and straightforward, and still yields good codes. Moreover, low-rate codes can be obtained without any limit on the lowest achievable code rate. Based on the RCC codes, a generalized type-II hybrid ARQ scheme, which combines the benefits of the modified type-II hybrid ARQ strategy of Hagenauer (1988) with the code-combining ARQ strategy of Chase (1985), is proposed and analyzed. With the proposed generalized type-II hybrid ARQ strategy, the throughput increases as the starting coding rate increases, and as the channel degrades, it tends to merge with the throughput of rate 1/2 type-II hybrid ARQ schemes with code combining, thus allowing the system to be flexible and adaptive to channel conditions, even under wide noise variations and severe degradations.
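To make the puncturing idea concrete, the following Python sketch shows how coded bits from a rate-1/2 convolutional encoder can be deleted according to a fixed pattern and later re-inserted as erasures for decoding; the pattern shown is a textbook example, not one of the rate-compatible codes constructed in the paper.

```python
# Illustrative sketch only: puncturing turns a rate-1/2 convolutional code into a
# higher-rate code by deleting coded bits according to a repeating pattern.

def puncture(coded_bits, pattern):
    """Keep coded_bits[i] only where the repeating pattern holds a 1."""
    return [b for i, b in enumerate(coded_bits) if pattern[i % len(pattern)] == 1]

def depuncture(punctured_bits, pattern, n):
    """Re-insert erasures (None) at deleted positions so a rate-1/2 decoder can run."""
    out, it = [], iter(punctured_bits)
    for i in range(n):
        out.append(next(it) if pattern[i % len(pattern)] == 1 else None)
    return out

# A rate-1/2 encoder emits 2 coded bits per information bit. Keeping 3 of every
# 4 coded bits (pattern 1,1,1,0) transmits 3 bits per 2 info bits, i.e. rate 2/3.
pattern_2_3 = [1, 1, 1, 0]
```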
Hybrid services efficient provisioning over the network coding-enabled elastic optical networks
NASA Astrophysics Data System (ADS)
Wang, Xin; Gu, Rentao; Ji, Yuefeng; Kavehrad, Mohsen
2017-03-01
As a variety of services have emerged, hybrid services have become more common in real optical networks. Although elastic spectrum resource optimization over elastic optical networks (EONs) has been widely investigated, little research has been carried out on routing and spectrum allocation (RSA) for hybrid services, especially over the network coding-enabled EON. We investigated the RSA for the unicast service and the network coding-based multicast service over the network coding-enabled EON with the constraints of time delay and transmission distance. To address this issue, a mathematical model was built to minimize the total spectrum consumption for the hybrid services over the network coding-enabled EON under these constraints. The model guarantees different routing constraints for different types of services. The intermediate nodes over the network coding-enabled EON are assumed to be capable of encoding the flows carrying different kinds of information. We propose an efficient heuristic algorithm, the network coding-based adaptive routing and layered graph-based spectrum allocation algorithm (NCAR-LGSA). From the simulation results, NCAR-LGSA shows highly efficient performance in terms of spectrum resource utilization under different network scenarios compared with the benchmark algorithms.
NASA Astrophysics Data System (ADS)
Jubran, Mohammad K.; Bansal, Manu; Kondi, Lisimachos P.
2006-01-01
In this paper, we consider the problem of optimal bit allocation for wireless video transmission over fading channels. We use a newly developed hybrid scalable/multiple-description codec that combines the functionality of both scalable and multiple-description codecs. It produces a base layer and multiple-description enhancement layers. Any of the enhancement layers can be decoded (in a non-hierarchical manner) with the base layer to improve the reconstructed video quality. Two different channel coding schemes (Rate-Compatible Punctured Convolutional (RCPC)/Cyclic Redundancy Check (CRC) coding, and product code Reed-Solomon (RS)+RCPC/CRC coding) are used for unequal error protection of the layered bitstream. Optimal allocation of the bitrate between source and channel coding is performed for discrete sets of source coding rates and channel coding rates. Experimental results are presented for a wide range of channel conditions. Also, comparisons with classical scalable coding show the effectiveness of using hybrid scalable/multiple-description coding for wireless transmission.
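As a rough illustration of what optimal allocation over discrete sets of source and channel coding rates involves, the following hedged Python sketch performs an exhaustive search over per-layer (source rate, channel rate) pairs under a total bit budget; the distortion model and rate sets are placeholders, not the ones used by the authors.

```python
# A minimal sketch (not the authors' optimizer): exhaustive search over discrete
# source/channel rate pairs for each layer, subject to a total bit budget.
# expected_distortion() is a hypothetical placeholder combining source distortion
# with the channel-induced loss probability of each layer.

from itertools import product

def allocate(layers, source_rates, channel_rates, total_budget, expected_distortion):
    best, best_d = None, float("inf")
    # One (source_rate, channel_rate) choice per layer.
    for choice in product(product(source_rates, channel_rates), repeat=len(layers)):
        # A channel code of rate rc expands rs source bits into rs / rc coded bits.
        bits = sum(rs / rc for rs, rc in choice)
        if bits > total_budget:
            continue
        d = expected_distortion(layers, choice)
        if d < best_d:
            best, best_d = choice, d
    return best, best_d
```

The search is exponential in the number of layers, so a practical system would prune it, but the sketch captures the discrete nature of the optimization described above.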
Zhu, Debin; Tang, Yabing; Xing, Da; Chen, Wei R
2008-05-15
A bio bar code assay based on oligonucleotide-modified gold nanoparticles (Au-NPs) provides a PCR-free method for quantitative detection of nucleic acid targets. However, the current bio bar code assay requires lengthy experimental procedures, including the preparation and release of bar code DNA probes from the target-nanoparticle complex and the immobilization and hybridization of the probes for quantification. Herein, we report a novel PCR-free electrochemiluminescence (ECL)-based bio bar code assay for the quantitative detection of genetically modified organisms (GMOs) from raw materials. It consists of tris-(2,2'-bipyridyl) ruthenium (TBR)-labeled bar code DNA, nucleic acid hybridization using Au-NPs and biotin-labeled probes, and selective capture of the hybridization complex by streptavidin-coated paramagnetic beads. The detection of target DNA is realized by direct measurement of the ECL emission of TBR. It can quantitatively detect target nucleic acids with high speed and sensitivity. This method can be used to quantitatively detect GMO fragments from real GMO products.
Breaking and Fixing Origin-Based Access Control in Hybrid Web/Mobile Application Frameworks.
Georgiev, Martin; Jana, Suman; Shmatikov, Vitaly
2014-02-01
Hybrid mobile applications (apps) combine the features of Web applications and "native" mobile apps. Like Web applications, they are implemented in portable, platform-independent languages such as HTML and JavaScript. Like native apps, they have direct access to local device resources-file system, location, camera, contacts, etc. Hybrid apps are typically developed using hybrid application frameworks such as PhoneGap. The purpose of the framework is twofold. First, it provides an embedded Web browser (for example, WebView on Android) that executes the app's Web code. Second, it supplies "bridges" that allow Web code to escape the browser and access local resources on the device. We analyze the software stack created by hybrid frameworks and demonstrate that it does not properly compose the access-control policies governing Web code and local code, respectively. Web code is governed by the same origin policy, whereas local code is governed by the access-control policy of the operating system (for example, user-granted permissions in Android). The bridges added by the framework to the browser have the same local access rights as the entire application, but are not correctly protected by the same origin policy. This opens the door to fracking attacks, which allow foreign-origin Web content included into a hybrid app (e.g., ads confined in iframes) to drill through the layers and directly access device resources. Fracking vulnerabilities are generic: they affect all hybrid frameworks, all embedded Web browsers, all bridge mechanisms, and all platforms on which these frameworks are deployed. We study the prevalence of fracking vulnerabilities in free Android apps based on the PhoneGap framework. Each vulnerability exposes sensitive local resources-the ability to read and write contacts list, local files, etc.-to dozens of potentially malicious Web domains. We also analyze the defenses deployed by hybrid frameworks to prevent resource access by foreign-origin Web content and explain why they are ineffectual. We then present NoFrak, a capability-based defense against fracking attacks. NoFrak is platform-independent, compatible with any framework and embedded browser, requires no changes to the code of the existing hybrid apps, and does not break their advertising-supported business model.
Transcriptional mapping of the ribosomal RNA region of mouse L-cell mitochondrial DNA.
Nagley, P; Clayton, D A
1980-01-01
The map positions in mouse mitochondrial DNA of the two ribosomal RNA genes and adjacent genes coding for several small transcripts have been determined precisely by application of a procedure in which DNA-RNA hybrids have been subjected to digestion by S1 nuclease under conditions of varying severity. Digestion of the DNA-RNA hybrids with S1 nuclease yielded a series of species which were shown to contain ribosomal RNA molecules together with adjacent transcripts hybridized conjointly to a continuous segment of mitochondrial DNA. There is one small transcript about 60 bases long whose gene adjoins the sequences coding the 5'-end of the small ribosomal RNA (950 bases) and which lies approximately 200 nucleotides from the D-loop origin of heavy-strand mitochondrial DNA synthesis. An 80-base transcript lies between the small and large ribosomal RNA genes, and genes for two further short transcripts (each about 80 bases in length) abut the sequences coding the 3'-end of the large ribosomal RNA (approximately 1500 bases). The ability to isolate a discrete DNA-RNA hybrid species approximately 2700 base pairs in length containing all these transcripts suggests that there can be few nucleotides in this region of mouse mitochondrial DNA which are not represented as stable RNA species. PMID:6253898
Noise suppression system of OCDMA with spectral/spatial 2D hybrid code
NASA Astrophysics Data System (ADS)
Matem, Rima; Aljunid, S. A.; Junita, M. N.; Rashidi, C. B. M.; Shihab Aqrab, Israa
2017-11-01
In this paper, we propose a novel 2D spectral/spatial hybrid code based on the 1D ZCC and 1D MD codes, both of which exhibit a zero cross-correlation property, and analyze the influence of optical noise sources such as phase-induced intensity noise (PIIN), shot noise and thermal noise. The new code is shown to effectively mitigate PIIN and suppress MAI. Using the 2D ZCC/MD code, the performance of the system can be improved and more simultaneous users supported compared with the 2D FCC/MDW and 2D DPDC codes.
2012-02-01
Final report, 15/11/2008 - 15/11/2011 (report date 01/02/2012): High-speed, Low Voltage, Miniature Electro-optic Modulators Based on Hybrid Photonic-Crystal/Polymer/Sol... Program Manager: Dr. Charles Y-C Lee. Subject terms: electro-optic modulator, silicon photonics, integrated optics, electro-optic polymer, avionics, optical communications, sol-gel, nanotechnology.
NASA Astrophysics Data System (ADS)
Gao, Fengtao; Wei, Min; Zhu, Ying; Guo, Hua; Chen, Songlin; Yang, Guanpin
2017-06-01
This study presents the complete mitochondrial genome of the hybrid Epinephelus moara♀× Epinephelus lanceolatus♂. The genome is 16886 bp in length, and contains 13 protein-coding genes, 2 rRNA genes, 22 tRNA genes, a light-strand replication origin and a control region. Additionally, phylogenetic analysis based on the nucleotide sequences of 13 conserved protein-coding genes using the maximum likelihood method indicated that the mitochondrial genome is maternally inherited. This study presents genomic data for studying phylogenetic relationships and breeding of hybrid Epinephelinae.
Pian, Cong; Zhang, Guangle; Chen, Zhi; Chen, Yuanyuan; Zhang, Jin; Yang, Tao; Zhang, Liangyun
2016-01-01
As a novel class of noncoding RNAs, long noncoding RNAs (lncRNAs) have been verified to be associated with various diseases. As large-scale transcripts are generated every year, it is important to accurately and quickly identify lncRNAs from thousands of assembled transcripts. To accurately discover new lncRNAs, we develop a random forest (RF) classification tool named LncRNApred based on a new hybrid feature set. This hybrid feature set includes three newly proposed features: MaxORF, RMaxORF and SNR. LncRNApred is effective for classifying lncRNAs and protein-coding transcripts accurately and quickly. Moreover, our RF model only requires training on human coding and non-coding transcripts; other species can also be predicted using LncRNApred. The results show that our method is more effective compared with the Coding Potential Calculator (CPC). The web server of LncRNApred is available for free at http://mm20132014.wicp.net:57203/LncRNApred/home.jsp.
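The following Python sketch illustrates the general feature-plus-random-forest pipeline the abstract describes; the longest-ORF feature and its length-normalised variant below are simplified stand-ins for the paper's MaxORF, RMaxORF and SNR features, and the training-data names are hypothetical.

```python
# Sketch under assumptions: a simplified "longest ORF" feature feeding a random
# forest. This is only an illustration of the pipeline, not a reimplementation
# of LncRNApred or its exact feature definitions.

from sklearn.ensemble import RandomForestClassifier

START, STOPS = "ATG", {"TAA", "TAG", "TGA"}

def max_orf_length(seq):
    """Length (nt) of the longest ORF over the three forward reading frames."""
    seq, best = seq.upper(), 0
    for frame in range(3):
        start = None
        for i in range(frame, len(seq) - 2, 3):
            codon = seq[i:i + 3]
            if start is None and codon == START:
                start = i
            elif start is not None and codon in STOPS:
                best = max(best, i + 3 - start)
                start = None
    return best

def features(seq):
    # MaxORF plus a crude length-normalised variant standing in for RMaxORF.
    m = max_orf_length(seq)
    return [m, m / max(len(seq), 1)]

def train(coding_seqs, noncoding_seqs):
    # Hypothetical inputs: lists of transcript sequences for each class.
    X = [features(s) for s in coding_seqs + noncoding_seqs]
    y = [1] * len(coding_seqs) + [0] * len(noncoding_seqs)
    return RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
```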
New t-gap insertion-deletion-like metrics for DNA hybridization thermodynamic modeling.
D'yachkov, Arkadii G; Macula, Anthony J; Pogozelski, Wendy K; Renz, Thomas E; Rykov, Vyacheslav V; Torney, David C
2006-05-01
We discuss the concept of t-gap block isomorphic subsequences and use it to describe new abstract string metrics that are similar to the Levenshtein insertion-deletion metric. Some of the metrics that we define can be used to model a thermodynamic distance function on single-stranded DNA sequences. Our model captures a key aspect of the nearest-neighbor thermodynamic model for hybridized DNA duplexes. One version of our metric gives the maximum number of stacked pairs of hydrogen-bonded nucleotide base pairs that can be present in any secondary structure in a hybridized DNA duplex without pseudoknots. Thermodynamic distance functions are important components in the construction of DNA codes, and DNA codes are important components in biomolecular computing, nanotechnology, and other biotechnical applications that employ DNA hybridization assays. We show how our new distances can be calculated by using a dynamic programming method, and we derive a Varshamov-Gilbert-like lower bound on the size of some of the codes using these distance functions as constraints. We also discuss software implementation of our DNA code design methods.
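For context, the classical Levenshtein insertion-deletion metric that these new metrics resemble can itself be computed with a small dynamic program, as in the following sketch (it is not the t-gap metric defined in the paper):

```python
# The classical insertion-deletion distance, computed by dynamic programming in the
# same spirit as the DP evaluation the authors describe for their new metrics.

def indel_distance(a, b):
    """Minimum number of insertions/deletions turning string a into string b."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                d[i][j] = d[i - 1][j - 1]
            else:
                d[i][j] = 1 + min(d[i - 1][j], d[i][j - 1])  # delete or insert
    return d[m][n]

# Example: indel_distance("ACGT", "AGT") == 1
```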
Hybrid real-code ant colony optimisation for constrained mechanical design
NASA Astrophysics Data System (ADS)
Pholdee, Nantiwat; Bureerat, Sujin
2016-01-01
This paper proposes a hybrid meta-heuristic based on integrating a local search simplex downhill (SDH) method into the search procedure of real-code ant colony optimisation (ACOR). This hybridisation leads to five hybrid algorithms where a Monte Carlo technique, a Latin hypercube sampling technique (LHS) and a translational propagation Latin hypercube design (TPLHD) algorithm are used to generate an initial population. Also, two numerical schemes for selecting an initial simplex are investigated. The original ACOR and its hybrid versions along with a variety of established meta-heuristics are implemented to solve 17 constrained test problems where a fuzzy set theory penalty function technique is used to handle design constraints. The comparative results show that the hybrid algorithms are the top performers. Using the TPLHD technique gives better results than the other sampling techniques. The hybrid optimisers are a powerful design tool for constrained mechanical design problems.
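One ingredient mentioned above, Latin hypercube sampling of the initial population, can be sketched in a few lines of Python; the bounds in the example are arbitrary, and the TPLHD variant and the ACOR/SDH hybrid itself are not reproduced.

```python
# Minimal sketch of Latin hypercube sampling (LHS) of an initial population within
# given design-variable bounds.

import numpy as np

def latin_hypercube(n_points, lower, upper, rng=None):
    rng = np.random.default_rng(rng)
    lower, upper = np.asarray(lower, dtype=float), np.asarray(upper, dtype=float)
    dim = lower.size
    u = np.empty((n_points, dim))
    for d in range(dim):
        # one sample in each of n_points equal strata, then shuffle the strata
        strata = (np.arange(n_points) + rng.random(n_points)) / n_points
        u[:, d] = rng.permutation(strata)
    return lower + u * (upper - lower)

# Example: 20 starting designs for a hypothetical 3-variable mechanical design problem.
pop = latin_hypercube(20, lower=[0.05, 0.25, 2.0], upper=[2.0, 1.3, 15.0], rng=0)
```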
A Network Coding Based Hybrid ARQ Protocol for Underwater Acoustic Sensor Networks
Wang, Hao; Wang, Shilian; Zhang, Eryang; Zou, Jianbin
2016-01-01
Underwater Acoustic Sensor Networks (UASNs) have attracted increasing interest in recent years due to their extensive commercial and military applications. However, the harsh underwater channel causes many challenges for the design of reliable underwater data transport protocols. In this paper, we propose an energy-efficient data transport protocol based on network coding and hybrid automatic repeat request (NCHARQ) to ensure reliability, efficiency and availability in UASNs. Moreover, an adaptive window length estimation algorithm is designed to optimize the throughput and energy consumption tradeoff. The algorithm can adaptively change the code rate and is insensitive to environmental changes. Extensive simulations and analysis show that NCHARQ significantly reduces energy consumption with short end-to-end delay. PMID:27618044
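The following minimal Python sketch shows only the core network-coding idea behind such a protocol: a single XOR repair packet can recover different lost packets at different receivers. It is an illustration, not the NCHARQ protocol itself.

```python
# Illustrative sketch only: retransmitting one XOR combination of a window of packets
# lets a receiver recover whichever single packet it happened to lose.

def xor_packets(packets):
    """Bitwise XOR of equal-length packets (bytes objects)."""
    out = bytearray(len(packets[0]))
    for p in packets:
        for i, byte in enumerate(p):
            out[i] ^= byte
    return bytes(out)

# Sender combines a window of packets into one coded repair packet.
p1, p2, p3 = b"\x01\x02", b"\x10\x20", b"\xa0\x0b"
repair = xor_packets([p1, p2, p3])

# A receiver that got p1 and p3 but lost p2 recovers it from the repair packet.
recovered_p2 = xor_packets([repair, p1, p3])
assert recovered_p2 == p2
```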
Solar wind interaction with Venus and Mars in a parallel hybrid code
NASA Astrophysics Data System (ADS)
Jarvinen, Riku; Sandroos, Arto
2013-04-01
We discuss the development and applications of a new parallel hybrid simulation, where ions are treated as particles and electrons as a charge-neutralizing fluid, for the interaction between the solar wind and Venus and Mars. The new simulation code under construction is based on the algorithm of the sequential global planetary hybrid model developed at the Finnish Meteorological Institute (FMI) and on the Corsair parallel simulation platform also developed at the FMI. The FMI's sequential hybrid model has been used for studies of plasma interactions of several unmagnetized and weakly magnetized celestial bodies for more than a decade. Especially, the model has been used to interpret in situ particle and magnetic field observations from plasma environments of Mars, Venus and Titan. Further, Corsair is an open source MPI (Message Passing Interface) particle and mesh simulation platform, mainly aimed for simulations of diffusive shock acceleration in solar corona and interplanetary space, but which is now also being extended for global planetary hybrid simulations. In this presentation we discuss challenges and strategies of parallelizing a legacy simulation code as well as possible applications and prospects of a scalable parallel hybrid model for the solar wind interactions of Venus and Mars.
Taki, M; Signorini, A; Oton, C J; Nannipieri, T; Di Pasquale, F
2013-10-15
We experimentally demonstrate the use of cyclic pulse coding for distributed strain and temperature measurements in hybrid Raman/Brillouin optical time-domain analysis (BOTDA) optical fiber sensors. The highly integrated proposed solution effectively addresses the strain/temperature cross-sensitivity issue affecting standard BOTDA sensors, allowing for simultaneous meter-scale strain and temperature measurements over 10 km of standard single mode fiber using a single narrowband laser source only.
Greenberg, Jay R.; Perry, Robert P.
1971-01-01
The relationship of the DNA sequences from which polyribosomal messenger RNA (mRNA) and heterogeneous nuclear RNA (NRNA) of mouse L cells are transcribed was investigated by means of hybridization kinetics and thermal denaturation of the hybrids. Hybridization was performed in formamide solutions at DNA excess. Under these conditions most of the hybridizing mRNA and NRNA react at values of Dot (DNA concentration multiplied by time) expected for RNA transcribed from the nonrepeated or rarely repeated fraction of the genome. However, a fraction of both mRNA and NRNA hybridize at values of Dot about 10,000 times lower, and therefore must be transcribed from highly redundant DNA sequences. The fraction of NRNA hybridizing to highly repeated sequences is about 1.7 times greater than the corresponding fraction of mRNA. The hybrids formed by the rapidly reacting fractions of both NRNA and mRNA melt over a narrow temperature range with a midpoint about 11°C below that of native L cell DNA. This indicates that these hybrids consist of partially complementary sequences with approximately 11% mismatching of bases. Hybrids formed by the slowly reacting fraction of NRNA melt within 4°–6°C of native DNA, indicating very little, if any, mismatching of bases. Hybrids of the slowly reacting components of mRNA, formed under conditions of sufficiently low RNA input, have a high thermal stability, similar to that observed for hybrids of the slowly reacting NRNA component. However, when higher inputs of mRNA are used, hybrids are formed which have a strikingly lower thermal stability. This observation can be explained by assuming that there is sufficient similarity among the relatively rare DNA sequences coding for mRNA so that under hybridization conditions, in which these DNA sequences are not truly in excess, reversible hybrids exhibiting a considerable amount of mispairing are formed. The fact that a comparable phenomenon has not been observed for NRNA may mean that there is less similarity among the relatively rare DNA sequences coding for NRNA than there is among the rare sequences coding for mRNA. PMID:4999767
Complementary Reliability-Based Decodings of Binary Linear Block Codes
NASA Technical Reports Server (NTRS)
Fossorier, Marc P. C.; Lin, Shu
1997-01-01
This correspondence presents a hybrid reliability-based decoding algorithm which combines the reprocessing method based on the most reliable basis and a generalized Chase-type algebraic decoder based on the least reliable positions. It is shown that reprocessing with a simple additional algebraic decoding effort achieves significant coding gain. For long codes, the order of reprocessing required to achieve asymptotic optimum error performance is reduced by approximately 1/3. This significantly reduces the computational complexity, especially for long codes. Also, a more efficient criterion for stopping the decoding process is derived based on the knowledge of the algebraic decoding solution.
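A hedged sketch of the Chase-type ingredient is shown below: the received soft values are ranked by reliability, and all combinations of the t least reliable hard decisions are flipped to form candidates for the algebraic decoder. The most-reliable-basis reprocessing half of the hybrid algorithm is not shown, and the BPSK mapping is an assumption.

```python
# Sketch under assumptions: generate Chase-type test patterns over the t least
# reliable positions of a received soft-decision vector.

from itertools import combinations

def chase_test_patterns(soft, t):
    """Yield hard-decision candidates obtained by flipping up to t least reliable bits."""
    hard = [1 if x < 0 else 0 for x in soft]                      # assumed BPSK mapping
    order = sorted(range(len(soft)), key=lambda i: abs(soft[i]))  # least reliable first
    least = order[:t]
    for k in range(t + 1):
        for flips in combinations(least, k):
            cand = hard[:]
            for i in flips:
                cand[i] ^= 1
            yield cand

# Example with a toy received vector (larger |value| = more reliable).
for pattern in chase_test_patterns([0.9, -0.1, 0.05, -1.2, 0.3], t=2):
    pass  # each pattern would be fed to the algebraic decoder
```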
Scalable hybrid computation with spikes.
Sarpeshkar, Rahul; O'Halloran, Micah
2002-09-01
We outline a hybrid analog-digital scheme for computing with three important features that enable it to scale to systems of large complexity: First, like digital computation, which uses several one-bit precise logical units to collectively compute a precise answer to a computation, the hybrid scheme uses several moderate-precision analog units to collectively compute a precise answer to a computation. Second, frequent discrete signal restoration of the analog information prevents analog noise and offset from degrading the computation. And, third, a state machine enables complex computations to be created using a sequence of elementary computations. A natural choice for implementing this hybrid scheme is one based on spikes because spike-count codes are digital, while spike-time codes are analog. We illustrate how spikes afford easy ways to implement all three components of scalable hybrid computation. First, as an important example of distributed analog computation, we show how spikes can create a distributed modular representation of an analog number by implementing digital carry interactions between spiking analog neurons. Second, we show how signal restoration may be performed by recursive spike-count quantization of spike-time codes. And, third, we use spikes from an analog dynamical system to trigger state transitions in a digital dynamical system, which reconfigures the analog dynamical system using a binary control vector; such feedback interactions between analog and digital dynamical systems create a hybrid state machine (HSM). The HSM extends and expands the concept of a digital finite-state-machine to the hybrid domain. We present experimental data from a two-neuron HSM on a chip that implements error-correcting analog-to-digital conversion with the concurrent use of spike-time and spike-count codes. We also present experimental data from silicon circuits that implement HSM-based pattern recognition using spike-time synchrony. We outline how HSMs may be used to perform learning, vector quantization, spike pattern recognition and generation, and how they may be reconfigured.
A Hybrid Task Graph Scheduler for High Performance Image Processing Workflows.
Blattner, Timothy; Keyrouz, Walid; Bhattacharyya, Shuvra S; Halem, Milton; Brady, Mary
2017-12-01
Designing applications for scalability is key to improving their performance in hybrid and cluster computing. Scheduling code to utilize parallelism is difficult, particularly when dealing with data dependencies, memory management, data motion, and processor occupancy. The Hybrid Task Graph Scheduler (HTGS) is an abstract execution model, framework, and API that improves programmer productivity when implementing hybrid workflows for multi-core and multi-GPU systems. HTGS manages dependencies between tasks, represents CPU and GPU memories independently, overlaps computations with disk I/O and memory transfers, keeps multiple GPUs occupied, and uses all available compute resources. Through these abstractions, data motion and memory are explicit; this makes data locality decisions more accessible. To demonstrate the HTGS application program interface (API), we present implementations of two example algorithms: (1) a matrix multiplication that shows how easily task graphs can be used; and (2) a hybrid implementation of microscopy image stitching that reduces code size by ≈ 43% compared to a manually coded hybrid workflow implementation and showcases the minimal overhead of task graphs in HTGS. Both of the HTGS-based implementations show good performance. In image stitching the HTGS implementation achieves similar performance to the hybrid workflow implementation. Matrix multiplication with HTGS achieves 1.3× and 1.8× speedup over the multi-threaded OpenBLAS library for 16k × 16k and 32k × 32k size matrices, respectively.
NASA Astrophysics Data System (ADS)
Li, Gaohua; Fu, Xiang; Wang, Fuxin
2017-10-01
The low-dissipation, high-order accurate hybrid upwind/central scheme based on fifth-order weighted essentially non-oscillatory (WENO) and sixth-order central schemes, the Spalart-Allmaras (SA)-based delayed detached eddy simulation (DDES) turbulence model, and flow feature-based adaptive mesh refinement (AMR) are implemented into a dual-mesh overset grid infrastructure with parallel computing capabilities, for the purpose of simulating vortex-dominated unsteady detached wake flows with high spatial resolution. The overset grid assembly (OGA) process, based on collection detection theory and an implicit hole-cutting algorithm, achieves automatic coupling of the near-body and off-body solvers, and a trial-and-error method is used to obtain a globally balanced load distribution among the composed multiple codes. The results for flows over a high-Reynolds-number cylinder and a two-bladed helicopter rotor show that the combination of the high-order hybrid scheme, advanced turbulence model, and overset adaptive mesh refinement can effectively enhance the spatial resolution for the simulation of turbulent wake eddies.
Genetic Algorithm Optimization of a Cost Competitive Hybrid Rocket Booster
NASA Technical Reports Server (NTRS)
Story, George
2015-01-01
Performance, reliability and cost have always been drivers in the rocket business. Hybrid rockets have been late entries into the launch business due to substantial early development work on liquid rockets and solid rockets. Slowly the technology readiness level of hybrids has been increasing due to various large-scale testing and flight tests of hybrid rockets. One remaining issue is the cost of hybrids versus the existing launch propulsion systems. This paper will review the known state-of-the-art hybrid development work to date and incorporate it into a genetic algorithm to optimize the configuration based on various parameters. A cost module will be incorporated into the code based on the weights of the components. The design will be optimized to meet the performance requirements at the lowest cost.
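A generic genetic-algorithm sketch of the kind of optimization described above is given below; the cost and performance models, bounds and GA settings are all hypothetical placeholders rather than the paper's weight-based cost module or requirements.

```python
# A generic GA sketch only, with hypothetical cost and performance models; minimizes
# cost subject to a performance requirement via a penalty term.

import random

def genetic_minimize(cost, performance, required, bounds, pop=40, gens=100, seed=0):
    rng = random.Random(seed)
    def fitness(x):
        # penalize designs that miss the performance requirement
        penalty = max(0.0, required - performance(x)) * 1e6
        return cost(x) + penalty
    def rand_ind():
        return [rng.uniform(lo, hi) for lo, hi in bounds]
    population = [rand_ind() for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness)
        parents = population[: pop // 2]               # elitist selection
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]   # blend crossover
            j = rng.randrange(len(child))                  # single-gene mutation
            lo, hi = bounds[j]
            child[j] = min(hi, max(lo, child[j] + rng.gauss(0, 0.05 * (hi - lo))))
            children.append(child)
        population = parents + children
    return min(population, key=fitness)
```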
Genetic Algorithm Optimization of a Cost Competitive Hybrid Rocket Booster
NASA Technical Reports Server (NTRS)
Story, George
2014-01-01
Performance, reliability and cost have always been drivers in the rocket business. Hybrid rockets have been late entries into the launch business due to substantial early development work on liquid rockets and later on solid rockets. Slowly the technology readiness level of hybrids has been increasing due to various large-scale testing and flight tests of hybrid rockets. A remaining issue is the cost of hybrids versus the existing launch propulsion systems. This paper will review the known state-of-the-art hybrid development work to date and incorporate it into a genetic algorithm to optimize the configuration based on various parameters. A cost module will be incorporated into the code based on the weights of the components. The design will be optimized to meet the performance requirements at the lowest cost.
Production Level CFD Code Acceleration for Hybrid Many-Core Architectures
NASA Technical Reports Server (NTRS)
Duffy, Austen C.; Hammond, Dana P.; Nielsen, Eric J.
2012-01-01
In this work, a novel graphics processing unit (GPU) distributed sharing model for hybrid many-core architectures is introduced and employed in the acceleration of a production-level computational fluid dynamics (CFD) code. The latest generation graphics hardware allows multiple processor cores to simultaneously share a single GPU through concurrent kernel execution. This feature has allowed the NASA FUN3D code to be accelerated in parallel with up to four processor cores sharing a single GPU. For codes to scale and fully use resources on these and the next generation machines, codes will need to employ some type of GPU sharing model, as presented in this work. Findings include the effects of GPU sharing on overall performance. A discussion of the inherent challenges that parallel unstructured CFD codes face in accelerator-based computing environments is included, with considerations for future generation architectures. This work was completed by the author in August 2010, and reflects the analysis and results of the time.
[INVITED] Luminescent QR codes for smart labelling and sensing
NASA Astrophysics Data System (ADS)
Ramalho, João F. C. B.; António, L. C. F.; Correia, S. F. H.; Fu, L. S.; Pinho, A. S.; Brites, C. D. S.; Carlos, L. D.; André, P. S.; Ferreira, R. A. S.
2018-05-01
QR (Quick Response) codes are two-dimensional barcodes composed of special geometric patterns of black modules in a white square background that can encode different types of information with high density and robustness, correct errors and physical damage, and thus keep the stored information protected. Recently, these codes have gained increased attention as they offer a simple physical tool for quick access to Web sites for advertising and social interaction. Challenges include increasing the storage capacity limit, even though they can store approximately 350 times more information than common barcodes, and encoding different types of characters (e.g., numeric, alphanumeric, kanji and kana). In this work, we fabricate luminescent QR codes based on a poly(methyl methacrylate) substrate coated with organic-inorganic hybrid materials doped with trivalent terbium (Tb3+) and europium (Eu3+) ions, demonstrating an increase of storage capacity per unit area by a factor of two through colour multiplexing, when compared to conventional QR codes. A novel methodology to decode the multiplexed QR codes is developed based on a colour separation threshold, where a decision level is calculated through a maximum-likelihood criterion to minimize the error probability of the demultiplexed modules, maximizing the foreseen total storage capacity. Moreover, the thermal dependence of the emission colour coordinates of the Eu3+/Tb3+-based hybrids enables simultaneous QR code colour multiplexing and may be used to sense temperature (reproducibility higher than 93%), opening new fields of application for QR codes as smart labels for sensing.
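A much-simplified sketch of the colour demultiplexing step is shown below: each colour channel is thresholded into its own module map, which can then be decoded as an ordinary QR code. The fixed thresholds are placeholders; the paper derives the decision level from a maximum-likelihood criterion.

```python
# Simplified sketch of colour demultiplexing: split a two-colour QR image into two
# independent module layers using per-channel thresholds (placeholder values).

import numpy as np

def demultiplex(rgb_image, thr_r=0.5, thr_g=0.5):
    """rgb_image: HxWx3 float array in [0, 1]. Returns two boolean module maps."""
    red, green = rgb_image[..., 0], rgb_image[..., 1]
    layer_1 = red > thr_r     # modules written with the red-emitting (Eu3+-like) ink
    layer_2 = green > thr_g   # modules written with the green-emitting (Tb3+-like) ink
    return layer_1, layer_2   # each layer is then decoded as an ordinary QR code
```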
A motion compensation technique using sliced blocks and its application to hybrid video coding
NASA Astrophysics Data System (ADS)
Kondo, Satoshi; Sasai, Hisao
2005-07-01
This paper proposes a new motion compensation method using "sliced blocks" in DCT-based hybrid video coding. In H.264/MPEG-4 Advanced Video Coding, a new international video coding standard, motion compensation can be performed by splitting macroblocks into multiple square or rectangular regions. In the proposed method, on the other hand, macroblocks or sub-macroblocks are divided into two regions (sliced blocks) by an arbitrary line segment. The result is that the shapes of the segmented regions are not limited to squares or rectangles, allowing the shapes of the segmented regions to better match the boundaries between moving objects. Thus, the proposed method can improve the performance of the motion compensation. In addition, adaptive prediction of the shape according to the region shapes of the surrounding macroblocks can reduce the overhead needed to describe shape information in the bitstream. The proposed method also has the advantage that conventional coding techniques such as mode decision using rate-distortion optimization can be utilized, since coding processes such as the frequency transform and quantization are performed on a macroblock basis, similar to conventional coding methods. The proposed method is implemented in an H.264-based P-picture codec and an improvement in bit rate of 5% is confirmed in comparison with H.264.
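The geometric core of the sliced-block idea can be sketched as follows: an arbitrary line through two points splits a block into two pixel regions, each of which can then carry its own motion vector. The shape coding and adaptive shape prediction described above are not shown.

```python
# Minimal sketch of the "sliced block" idea: split an NxN block into two regions by an
# arbitrary line through points p0 and p1 (given as (x, y) coordinates).

import numpy as np

def slice_mask(n, p0, p1):
    """Boolean NxN mask: True on one side of the line p0->p1, False on the other."""
    ys, xs = np.mgrid[0:n, 0:n]
    # sign of the 2D cross product (p1 - p0) x (pixel - p0)
    cross = (p1[0] - p0[0]) * (ys - p0[1]) - (p1[1] - p0[1]) * (xs - p0[0])
    return cross >= 0

mask = slice_mask(16, p0=(0, 3), p1=(15, 12))   # region A = mask, region B = ~mask
```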
NASA Technical Reports Server (NTRS)
Coakley, T. J.; Hsieh, T.
1985-01-01
Numerical simulations of steady and unsteady transonic diffuser flows using two different computer codes are discussed and compared with experimental data. The codes solve the Reynolds-averaged, compressible, Navier-Stokes equations using various turbulence models. One of the codes has been applied extensively to diffuser flows and uses the hybrid method of MacCormack. This code is relatively inefficient numerically. The second code, which was developed more recently, is fully implicit and is relatively efficient numerically. Simulations of steady flows using the implicit code are shown to be in good agreement with simulations using the hybrid code. Both simulations are in good agreement with experimental results. Simulations of unsteady flows using the two codes are in good qualitative agreement with each other, although the quantitative agreement is not as good as in the steady flow cases. The implicit code is shown to be eight times faster than the hybrid code for unsteady flow calculations and up to 32 times faster for steady flow calculations. Results of calculations using alternative turbulence models are also discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saad, Tony; Sutherland, James C.
2016-05-04
To address the coding and software challenges of modern hybrid architectures, we propose an approach to multiphysics code development for high-performance computing. This approach is based on using a Domain Specific Language (DSL) in tandem with a directed acyclic graph (DAG) representation of the problem to be solved that allows runtime algorithm generation. When coupled with a large-scale parallel framework, the result is a portable development framework capable of executing on hybrid platforms and handling the challenges of multiphysics applications. In addition, we share our experience developing a code in such an environment – an effort that spans an interdisciplinary team of engineers and computer scientists.
Performance Analysis of Hybrid ARQ Protocols in a Slotted Code Division Multiple-Access Network
1989-08-01
Excerpts from the report indicate that a low-rate (r = 0.5), high-constraint-length (e.g., 32) punctured convolutional code is used, that code puncturing provides a variable-rate code, and that the use of convolutional codes in Type II hybrid ARQ protocols is investigated; cited references include J. Hagenauer, "Rate Compatible Punctured Convolutional Codes," Proc. Int. Conf. Commun., pp. 21.4.1-21.4.5, 1987.
Multiplexed Detection of Cytokines Based on Dual Bar-Code Strategy and Single-Molecule Counting.
Li, Wei; Jiang, Wei; Dai, Shuang; Wang, Lei
2016-02-02
Cytokines play important roles in the immune system and have been regarded as biomarkers. Because a single cytokine is not specific and accurate enough to meet the strict requirements of diagnosis in practice, in this work we constructed a multiplexed detection method for cytokines based on a dual bar-code strategy and single-molecule counting. Taking interferon-γ (IFN-γ) and tumor necrosis factor-α (TNF-α) as model analytes, first, the magnetic nanobead was functionalized with the second antibody and primary bar-code strands, forming a magnetic nanoprobe. Then, through the specific reaction of the second antibody and the antigen fixed by the primary antibody, a sandwich-type immunocomplex was formed on the substrate. Next, the primary bar-code strands as amplification units triggered a multibranched hybridization chain reaction (mHCR), producing nicked double-stranded polymers with multiple branched arms, which served as secondary bar-code strands. Finally, the secondary bar-code strands hybridized with the multimolecule-labeled fluorescence probes, generating enhanced fluorescence signals. The numbers of fluorescence dots were counted one by one for quantification with an epi-fluorescence microscope. By integrating the primary and secondary bar-code-based amplification strategy and the multimolecule-labeled fluorescence probes, this method displayed excellent sensitivity, with detection limits of 5 fM for both targets. Unlike the typical bar-code assay, in which the bar-code strands should be released and identified on a microarray, this method is more direct. Moreover, because of the selective immune reaction and the dual bar-code mechanism, the resulting method could detect the two targets simultaneously. Multiplexed analysis in human serum was also performed, suggesting that our strategy is reliable and has great potential application in early clinical diagnosis.
NASA Astrophysics Data System (ADS)
Yan, Xing-Yu; Gong, Li-Hua; Chen, Hua-Ying; Zhou, Nan-Run
2018-05-01
A theoretical quantum key distribution scheme based on random hybrid quantum channel with EPR pairs and GHZ states is devised. In this scheme, EPR pairs and tripartite GHZ states are exploited to set up random hybrid quantum channel. Only one photon in each entangled state is necessary to run forth and back in the channel. The security of the quantum key distribution scheme is guaranteed by more than one round of eavesdropping check procedures. It is of high capacity since one particle could carry more than two bits of information via quantum dense coding.
Hybrid optical CDMA-FSO communications network under spatially correlated gamma-gamma scintillation.
Jurado-Navas, Antonio; Raddo, Thiago R; Garrido-Balsells, José María; Borges, Ben-Hur V; Olmos, Juan José Vegas; Monroy, Idelfonso Tafur
2016-07-25
In this paper, we propose a new hybrid network solution based on asynchronous optical code-division multiple-access (OCDMA) and free-space optical (FSO) technologies for last-mile access networks, where fiber deployment is impractical. The architecture of the proposed hybrid OCDMA-FSO network is thoroughly described. The users access the network in a fully asynchronous manner by means of assigned fast frequency hopping (FFH)-based codes. In the FSO receiver, an equal gain-combining technique is employed along with intensity modulation and direct detection. New analytical formalisms for evaluating the average bit error rate (ABER) performance are also proposed. These formalisms, based on the spatially correlated gamma-gamma statistical model, are derived considering three distinct scenarios, namely, uncorrelated, totally correlated, and partially correlated channels. Numerical results show that users can successfully achieve error-free ABER levels for the three scenarios considered as long as forward error correction (FEC) algorithms are employed. Therefore, OCDMA-FSO networks can be a prospective alternative to deliver high-speed communication services to access networks with deficient fiber infrastructure.
Portable implementation model for CFD simulations. Application to hybrid CPU/GPU supercomputers
NASA Astrophysics Data System (ADS)
Oyarzun, Guillermo; Borrell, Ricard; Gorobets, Andrey; Oliva, Assensi
2017-10-01
Nowadays, high performance computing (HPC) systems experience a disruptive moment with a variety of novel architectures and frameworks, without any clarity of which one is going to prevail. In this context, the portability of codes across different architectures is of major importance. This paper presents a portable implementation model based on an algebraic operational approach for direct numerical simulation (DNS) and large eddy simulation (LES) of incompressible turbulent flows using unstructured hybrid meshes. The strategy proposed consists in representing the whole time-integration algorithm using only three basic algebraic operations: sparse matrix-vector product, a linear combination of vectors and dot product. The main idea is based on decomposing the nonlinear operators into a concatenation of two SpMV operations. This provides high modularity and portability. An exhaustive analysis of the proposed implementation for hybrid CPU/GPU supercomputers has been conducted with tests using up to 128 GPUs. The main objective consists in understanding the challenges of implementing CFD codes on new architectures.
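The operational idea can be illustrated with a toy time-integration loop written only in terms of the three kernels named above; the sparse operator and step size below are stand-ins, not a CFD discretisation.

```python
# Sketch of the algebraic operational approach: a time step expressed using only three
# kernels -- sparse matrix-vector product (SpMV), linear combination of vectors (axpy)
# and dot product -- so the same algorithm maps to CPUs or GPUs by swapping kernels.

import numpy as np
from scipy.sparse import diags

def spmv(A, x):     return A @ x                  # kernel 1: SpMV
def axpy(a, x, y):  return a * x + y              # kernel 2: linear combination
def dot(x, y):      return float(np.dot(x, y))    # kernel 3: dot product

n = 1000
A = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n), format="csr")  # 1D Laplacian stand-in
x = np.random.rand(n)
dt = 1e-4

for _ in range(10):                   # explicit Euler written only with the 3 kernels
    x = axpy(dt, spmv(A, x), x)
norm = dot(x, x) ** 0.5               # monitoring uses the dot-product kernel
```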
Prediction of protein-protein interactions based on PseAA composition and hybrid feature selection.
Liu, Liang; Cai, Yudong; Lu, Wencong; Feng, Kaiyan; Peng, Chunrong; Niu, Bing
2009-03-06
Based on pseudo amino acid (PseAA) composition and a novel hybrid feature selection framework, this paper presents a computational system to predict PPIs (protein-protein interactions) using 8796 protein pairs. These pairs are coded by PseAA composition, resulting in 114 features. A hybrid feature selection system, mRMR-KNNs-wrapper, is applied to obtain an optimized feature set by excluding poorly performing and/or redundant features, resulting in 103 remaining features. Using the optimized 103-feature subset, a prediction model is trained and tested in the k-nearest neighbors (KNNs) learning system. This prediction model achieves an overall accurate prediction rate of 76.18%, evaluated by a 10-fold cross-validation test, which is 1.46% higher than using the initial 114 features and 6.51% higher than the 20 features coded by amino acid composition. The PPIs predictor, developed for this research, is available for public use at http://chemdata.shu.edu.cn/ppi.
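As an illustration of the final evaluation stage only, the sketch below scores a k-nearest-neighbours classifier with 10-fold cross-validation on an already-built feature matrix; the PseAA encoding and the mRMR-KNNs-wrapper selection are not reproduced, and the data are random placeholders.

```python
# Sketch under assumptions: KNN classification scored by 10-fold cross-validation on a
# placeholder feature matrix standing in for the optimized 103-feature subset.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# X: (n_pairs, 103) feature matrix, y: 1 = interacting pair, 0 = non-interacting.
X = np.random.rand(200, 103)              # placeholder data for illustration
y = np.random.randint(0, 2, size=200)

knn = KNeighborsClassifier(n_neighbors=3)
accuracy = cross_val_score(knn, X, y, cv=10, scoring="accuracy").mean()
print(f"10-fold CV accuracy: {accuracy:.4f}")
```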
Clinical evaluation of BrainTree, a motor imagery hybrid BCI speller
NASA Astrophysics Data System (ADS)
Perdikis, S.; Leeb, R.; Williamson, J.; Ramsay, A.; Tavella, M.; Desideri, L.; Hoogerwerf, E.-J.; Al-Khodairy, A.; Murray-Smith, R.; Millán, J. d. R.
2014-06-01
Objective. While brain-computer interfaces (BCIs) for communication have reached considerable technical maturity, there is still a great need for state-of-the-art evaluation by the end-users outside laboratory environments. To achieve this primary objective, it is necessary to augment a BCI with a series of components that allow end-users to type text effectively. Approach. This work presents the clinical evaluation of a motor imagery (MI) BCI text-speller, called BrainTree, by six severely disabled end-users and ten able-bodied users. Additionally, we define a generic model of code-based BCI applications, which serves as an analytical tool for evaluation and design. Main results. We show that all users achieved remarkable usability and efficiency outcomes in spelling. Furthermore, our model-based analysis highlights the added value of human-computer interaction techniques and hybrid BCI error-handling mechanisms, and reveals the effects of BCI performances on usability and efficiency in code-based applications. Significance. This study demonstrates the usability potential of code-based MI spellers, with BrainTree being the first to be evaluated by a substantial number of end-users, establishing them as a viable, competitive alternative to other popular BCI spellers. Another major outcome of our model-based analysis is the derivation of an 80% minimum command accuracy requirement for successful code-based application control, revising upwards previous estimates attempted in the literature.
Clinical evaluation of BrainTree, a motor imagery hybrid BCI speller.
Perdikis, S; Leeb, R; Williamson, J; Ramsay, A; Tavella, M; Desideri, L; Hoogerwerf, E-J; Al-Khodairy, A; Murray-Smith, R; Millán, J D R
2014-06-01
While brain-computer interfaces (BCIs) for communication have reached considerable technical maturity, there is still a great need for state-of-the-art evaluation by the end-users outside laboratory environments. To achieve this primary objective, it is necessary to augment a BCI with a series of components that allow end-users to type text effectively. This work presents the clinical evaluation of a motor imagery (MI) BCI text-speller, called BrainTree, by six severely disabled end-users and ten able-bodied users. Additionally, we define a generic model of code-based BCI applications, which serves as an analytical tool for evaluation and design. We show that all users achieved remarkable usability and efficiency outcomes in spelling. Furthermore, our model-based analysis highlights the added value of human-computer interaction techniques and hybrid BCI error-handling mechanisms, and reveals the effects of BCI performances on usability and efficiency in code-based applications. This study demonstrates the usability potential of code-based MI spellers, with BrainTree being the first to be evaluated by a substantial number of end-users, establishing them as a viable, competitive alternative to other popular BCI spellers. Another major outcome of our model-based analysis is the derivation of an 80% minimum command accuracy requirement for successful code-based application control, revising upwards previous estimates attempted in the literature.
Seemann, M D; Gebicke, K; Luboldt, W; Albes, J M; Vollmar, J; Schäfer, J F; Beinert, T; Englmeier, K H; Bitzer, M; Claussen, C D
2001-07-01
The aim of this study was to demonstrate the possibilities of a hybrid rendering method, the combination of a color-coded surface and volume rendering method, together with the feasibility of performing surface-based virtual endoscopy with different representation models, in the operative and interventional therapy control of the chest. In 6 consecutive patients with partial lung resection (n = 2) and lung transplantation (n = 4), thin-section spiral computed tomography of the chest was performed. The tracheobronchial system and the introduced metallic stents were visualized using a color-coded surface rendering method. The remaining thoracic structures were visualized using a volume rendering method. For virtual bronchoscopy, the tracheobronchial system was visualized using a triangle surface model, a shaded-surface model and a transparent shaded-surface model. The hybrid 3D visualization uses the advantages of both the color-coded surface and volume rendering methods and facilitates a clear representation of the tracheobronchial system and the complex topographical relationship of morphological and pathological changes without loss of diagnostic information. Performing virtual bronchoscopy with the transparent shaded-surface model facilitates a reasonable to optimal simultaneous visualization and assessment of the surface structure of the tracheobronchial system and the surrounding mediastinal structures and lesions. Hybrid rendering eases the morphological assessment of anatomical and pathological changes without the need for time-consuming detailed analysis and presentation of source images. Performing virtual bronchoscopy with a transparent shaded-surface model offers a promising alternative to flexible fiberoptic bronchoscopy.
Simulation of the hybrid and steady state advanced operating modes in ITER
NASA Astrophysics Data System (ADS)
Kessel, C. E.; Giruzzi, G.; Sips, A. C. C.; Budny, R. V.; Artaud, J. F.; Basiuk, V.; Imbeaux, F.; Joffrin, E.; Schneider, M.; Murakami, M.; Luce, T.; St. John, Holger; Oikawa, T.; Hayashi, N.; Takizuka, T.; Ozeki, T.; Na, Y.-S.; Park, J. M.; Garcia, J.; Tucillo, A. A.
2007-09-01
Integrated simulations are performed to establish a physics basis, in conjunction with present tokamak experiments, for the operating modes in the International Thermonuclear Experimental Reactor (ITER). Simulations of the hybrid mode are done using both fixed and free-boundary 1.5D transport evolution codes including CRONOS, ONETWO, TSC/TRANSP, TOPICS and ASTRA. The hybrid operating mode is simulated using the GLF23 and CDBM05 energy transport models. The injected powers are limited to the negative ion neutral beam, ion cyclotron and electron cyclotron heating systems. Several plasma parameters and source parameters are specified for the hybrid cases to provide a comparison of 1.5D core transport modelling assumptions, source physics modelling assumptions, as well as numerous peripheral physics modelling. Initial results indicate that very strict guidelines will need to be imposed on the application of GLF23, for example, to make useful comparisons. Some of the variations among the simulations are due to source models which vary widely among the codes used. In addition, there are a number of peripheral physics models that should be examined, some of which include fusion power production, bootstrap current, treatment of fast particles and treatment of impurities. The hybrid simulations project to fusion gains of 5.6-8.3, βN values of 2.1-2.6 and fusion powers ranging from 350 to 500 MW, under the assumptions outlined in section 3. Simulations of the steady state operating mode are done with the same 1.5D transport evolution codes cited above, except the ASTRA code. In these cases the energy transport model is more difficult to prescribe, so that energy confinement models will range from theory based to empirically based. The injected powers include the same sources as used for the hybrid with the possible addition of lower hybrid. The simulations of the steady state mode project to fusion gains of 3.5-7, βN values of 2.3-3.0 and fusion powers of 290 to 415 MW, under the assumptions described in section 4. These simulations will be presented and compared with particular focus on the resulting temperature profiles, source profiles and peripheral physics profiles. The steady state simulations are at an early stage and are focused on developing a range of safety factor profiles with 100% non-inductive current.
Hybrid model for simulation of plasma jet injection in tokamak
NASA Astrophysics Data System (ADS)
Galkin, Sergei A.; Bogatu, I. N.
2016-10-01
The hybrid kinetic model of plasma treats the ions as kinetic particles and the electrons as a charge-neutralizing massless fluid. The model is essentially applicable when most of the energy is concentrated in the ions rather than in the electrons, i.e., it is well suited for the high-density hyper-velocity C60 plasma jet. The hybrid model separates the slower ion time scale from the faster electron time scale, which can then be disregarded. That is why hybrid codes consistently outperform traditional PIC codes in computational efficiency, while still resolving kinetic ion effects. We discuss a 2D hybrid model and code with an exactly energy-conserving numerical algorithm and present some results of its application to the simulation of C60 plasma jet penetration through a tokamak-like magnetic barrier. We also examine the 3D model/code extension and its possible applications to tokamak and ionospheric plasmas. The work is supported in part by US DOE DE-SC0015776 Grant.
Software Certification for Temporal Properties With Affordable Tool Qualification
NASA Technical Reports Server (NTRS)
Xia, Songtao; DiVito, Benedetto L.
2005-01-01
It has been recognized that a framework based on proof-carrying code (also called semantic-based software certification in its community) could be used as a candidate software certification process for the avionics industry. To meet this goal, tools in the "trust base" of a proof-carrying code system must be qualified by regulatory authorities. A family of semantic-based software certification approaches is described, each different in expressive power, level of automation and trust base. Of particular interest is the so-called abstraction-carrying code, which can certify temporal properties. When a pure abstraction-carrying code method is used in the context of industrial software certification, the fact that the trust base includes a model checker would incur a high qualification cost. This position paper proposes a hybrid of abstraction-based and proof-based certification methods so that the model checker used by a client can be significantly simplified, thereby leading to lower cost in tool qualification.
Hybrid 3D model for the interaction of plasma thruster plumes with nearby objects
NASA Astrophysics Data System (ADS)
Cichocki, Filippo; Domínguez-Vázquez, Adrián; Merino, Mario; Ahedo, Eduardo
2017-12-01
This paper presents a hybrid particle-in-cell (PIC) fluid approach to model the interaction of a plasma plume with a spacecraft and/or any nearby object. Ions and neutrals are modeled with a PIC approach, while electrons are treated as a fluid. After a first iteration of the code, the domain is split into quasineutral and non-neutral regions, based on non-neutrality criteria, such as the relative charge density and the Debye length-to-cell size ratio. At the material boundaries of the former quasineutral region, a dedicated algorithm ensures that the Bohm condition is met. In the latter non-neutral regions, the electron density and electric potential are obtained by solving the coupled electron momentum balance and Poisson equations. Boundary conditions for both the electric current and potential are finally obtained with a plasma sheath sub-code and an equivalent circuit model. The hybrid code is validated by applying it to a typical plasma plume-spacecraft interaction scenario, and the physics and capabilities of the model are finally discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fu, Guoyong; Budny, Robert; Gorelenkov, Nikolai
We report here the work done for the FY14 OFES Theory Performance Target as given below: "Understanding alpha particle confinement in ITER, the world's first burning plasma experiment, is a key priority for the fusion program. In FY 2014, determine linear instability trends and thresholds of energetic particle-driven shear Alfven eigenmodes in ITER for a range of parameters and profiles using a set of complementary simulation models (gyrokinetic, hybrid, and gyrofluid). Carry out initial nonlinear simulations to assess the effects of the unstable modes on energetic particle transport". In the past year (FY14), a systematic study of the alpha-driven Alfven modes in ITER has been carried out jointly by researchers from six institutions involving seven codes, including the transport simulation code TRANSP (R. Budny and F. Poli, PPPL), three gyrokinetic codes: GEM (Y. Chen, Univ. of Colorado), GTC (J. McClenaghan, Z. Lin, UCI), and GYRO (E. Bass, R. Waltz, UCSD/GA), the hybrid code M3D-K (G.Y. Fu, PPPL), the gyro-fluid code TAEFL (D. Spong, ORNL), and the linear kinetic stability code NOVA-K (N. Gorelenkov, PPPL). A range of ITER parameters and profiles are specified by TRANSP simulation of a hybrid scenario case and a steady-state scenario case. Based on the specified ITER equilibria, linear stability calculations are done to determine the stability boundary of alpha-driven high-n TAEs using the five initial value codes (GEM, GTC, GYRO, M3D-K, and TAEFL) and the kinetic stability code (NOVA-K). Both the effects of alpha particles and beam ions have been considered. Finally, the effects of the unstable modes on energetic particle transport have been explored using GEM and M3D-K.
Analysis of hybrid subcarrier multiplexing of OCDMA based on single photodiode detection
NASA Astrophysics Data System (ADS)
Ahmad, N. A. A.; Junita, M. N.; Aljunid, S. A.; Rashidi, C. B. M.; Endut, R.
2017-11-01
This paper analyzes the performance of subcarrier multiplexing (SCM) in spectral amplitude coding optical code-division multiple access (SAC-OCDMA) by applying the Recursive Combinatorial (RC) code based on single photodiode detection (SPD). SPD is used in the receiver to reduce the effect of multiple access interference (MAI), which is a dominant noise source in incoherent SAC-OCDMA systems. Results indicate that the SCM OCDMA network performance can be improved by using lower data rates and a higher code weight. The total number of users can also be increased by using lower data rates and a higher number of subcarriers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsugane, Keisuke; Boku, Taisuke; Murai, Hitoshi
Recently, the Partitioned Global Address Space (PGAS) parallel programming model has emerged as a usable distributed memory programming model. XcalableMP (XMP) is a PGAS parallel programming language that extends base languages such as C and Fortran with directives in OpenMP-like style. XMP supports a global-view model that allows programmers to define global data and to map them to a set of processors, which execute the distributed global data as a single thread. In XMP, the concept of a coarray is also employed for local-view programming. In this study, we port Gyrokinetic Toroidal Code - Princeton (GTC-P), which is a three-dimensional gyrokinetic PIC code developed at Princeton University to study the microturbulence phenomenon in magnetically confined fusion plasmas, to XMP as an example of hybrid memory model coding with the global-view and local-view programming models. In local-view programming, the coarray notation is simple and intuitive compared with Message Passing Interface (MPI) programming while the performance is comparable to that of the MPI version. Thus, because the global-view programming model is suitable for expressing the data parallelism for a field of grid space data, we implement a hybrid-view version using a global-view programming model to compute the field and a local-view programming model to compute the movement of particles. Finally, the performance is degraded by 20% compared with the original MPI version, but the hybrid-view version facilitates more natural data expression for static grid space data (in the global-view model) and dynamic particle data (in the local-view model), and it also increases the readability of the code for higher productivity.
ERIC Educational Resources Information Center
Sanchez-Munoz, Ana
2013-01-01
This study explores various linguistic strategies that characterize what is commonly referred to as "Spanglish"; namely, code-switching, code-mixing, borrowings and other language contact phenomena commonly employed by Chicana/o bilinguals. The analysis of linguistic features is based on creative pieces of writing produced by Chicana/o…
Trung, Le Quang; VAN Puyvelde, Karolien; Triest, Ludwig
2008-03-01
Consensus primers, based on exon sequences of the cyp73 gene family coding for cinnamate 4-hydroxylase (C4H) of the lignin biosynthesis pathway, were designed for the tetraploid willow species Salix alba and Salix fragilis. Diagnostic alleles at species level were observed among introns of three cyp73 genes and allowed unambiguous detection of the first generation and introgressed hybrids in populations. Progeny analysis of a female S. alba with a male introgressed hybrid confirmed the codominant inheritance of each intron. Sequences of the diagnostic alleles of both species were similar to those found in the hybrids. © 2007 The Authors.
Viewing hybrid systems as products of control systems and automata
NASA Technical Reports Server (NTRS)
Grossman, R. L.; Larson, R. G.
1992-01-01
The purpose of this note is to show how hybrid systems may be modeled as products of nonlinear control systems and finite state automata. By a hybrid system, we mean a network consisting of continuous, nonlinear control systems connected to discrete, finite state automata. Our point of view is that the automaton switches between the control systems, and that this switching is a function of the discrete input symbols or letters that it receives. We show how a nonlinear control system may be viewed as a pair consisting of a bialgebra of operators coding the dynamics, and an algebra of observations coding the state space. We also show that a finite automaton has a similar representation. A hybrid system is then modeled by taking suitable products of the bialgebras coding the dynamics and the observation algebras coding the state spaces.
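The switching viewpoint described above can be illustrated with a minimal sketch: a two-letter automaton selects which continuous control system is integrated at each step. The alphabet, dynamics and integration scheme below are invented for illustration and are not taken from the note.

```python
import numpy as np

# Two simple nonlinear control systems, x' = f_i(x, u).
def f_stable(x, u):      # letter 'a' selects a damped system
    return -x + u

def f_oscillate(x, u):   # letter 'b' selects an oscillatory system
    return np.sin(x) + u

DYNAMICS = {"a": f_stable, "b": f_oscillate}

def run_hybrid(word, x0, u=0.1, dt=0.01, steps_per_letter=100):
    """Drive the hybrid system with a word over the alphabet {'a', 'b'}.

    Each input letter switches the automaton to the corresponding
    continuous system, which is then integrated with forward Euler.
    """
    x = x0
    trajectory = [x]
    for letter in word:
        f = DYNAMICS[letter]          # discrete transition: pick the dynamics
        for _ in range(steps_per_letter):
            x = x + dt * f(x, u)      # continuous evolution under that system
            trajectory.append(x)
    return np.array(trajectory)

print(run_hybrid("abba", x0=1.0)[-1])
```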
NASA Technical Reports Server (NTRS)
Rathjen, K. A.
1977-01-01
A digital computer code, CAVE (Conduction Analysis Via Eigenvalues), which finds application in the analysis of two-dimensional transient heating of hypersonic vehicles, is described. CAVE is written in FORTRAN 4 and is operational on both IBM 360-67 and CDC 6600 computers. The method of solution is a hybrid analytical-numerical technique that is inherently stable, permitting large time steps even with the best of conductors and the finest of mesh sizes. The aerodynamic heating boundary conditions are calculated by the code based on the input flight trajectory, or can optionally be calculated external to the code and then entered as input data. The code computes the network conduction and convection links, as well as capacitance values, given basic geometrical and mesh sizes, for four generations (leading edges, cooled panels, X-24C structure and slabs). Input and output formats are presented and explained. Sample problems are included. A brief summary of the hybrid analytical-numerical technique, which utilizes eigenvalues (thermal frequencies) and eigenvectors (thermal mode vectors), is given, along with the aerodynamic heating equations that have been incorporated in the code and flow charts.
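The eigenvalue technique the abstract refers to can be illustrated on a small linear conduction network dT/dt = A·T + b: diagonalizing A yields the thermal frequencies (eigenvalues) and thermal mode vectors (eigenvectors), and the transient response then follows in closed form for an arbitrarily large time step, which is why the method stays stable with fine meshes and good conductors. The three-node network below is a made-up example, not the CAVE model.

```python
import numpy as np

# Conduction network: dT/dt = A @ T + b, with A built from conductances/capacitances.
A = np.array([[-2.0,  1.0,  0.0],
              [ 1.0, -2.0,  1.0],
              [ 0.0,  1.0, -1.0]])
b = np.array([1.0, 0.0, 0.5])          # boundary/heating terms
T0 = np.zeros(3)

lam, V = np.linalg.eig(A)              # thermal frequencies and mode vectors
T_steady = -np.linalg.solve(A, b)      # particular (steady-state) solution

def T_at(t):
    """Exact transient solution via the eigen-decomposition:
    T(t) = T_steady + V diag(exp(lam*t)) V^{-1} (T0 - T_steady).
    The step t can be arbitrarily large without loss of stability."""
    coef = np.linalg.solve(V, T0 - T_steady)
    return T_steady + (V * np.exp(lam * t)) @ coef

print(T_at(0.5))
print(T_at(50.0))   # effectively the steady state
```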
Hybrid scheduling mechanisms for Next-generation Passive Optical Networks based on network coding
NASA Astrophysics Data System (ADS)
Zhao, Jijun; Bai, Wei; Liu, Xin; Feng, Nan; Maier, Martin
2014-10-01
Network coding (NC) integrated into Passive Optical Networks (PONs) is regarded as a promising solution to achieve higher throughput and energy efficiency. To efficiently support multimedia traffic under this new transmission mode, novel NC-based hybrid scheduling mechanisms for Next-generation PONs (NG-PONs) including energy management, time slot management, resource allocation, and Quality-of-Service (QoS) scheduling are proposed in this paper. First, we design an energy-saving scheme that is based on Bidirectional Centric Scheduling (BCS) to reduce the energy consumption of both the Optical Line Terminal (OLT) and Optical Network Units (ONUs). Next, we propose an intra-ONU scheduling and an inter-ONU scheduling scheme, which takes NC into account to support service differentiation and QoS assurance. The presented simulation results show that BCS achieves higher energy efficiency under low traffic loads, clearly outperforming the alternative NC-based Upstream Centric Scheduling (UCS) scheme. Furthermore, BCS is shown to provide better QoS assurance.
NASA Astrophysics Data System (ADS)
Karczewicz, Marta; Chen, Peisong; Joshi, Rajan; Wang, Xianglin; Chien, Wei-Jung; Panchal, Rahul; Coban, Muhammed; Chong, In Suk; Reznik, Yuriy A.
2011-01-01
This paper describes the video coding technology proposal submitted by Qualcomm Inc. in response to a joint call for proposals (CfP) issued by ITU-T SG16 Q.6 (VCEG) and ISO/IEC JTC1/SC29/WG11 (MPEG) in January 2010. The proposed video codec follows a hybrid coding approach based on temporal prediction, followed by transform, quantization, and entropy coding of the residual. Some of its key features are extended block sizes (up to 64x64), recursive integer transforms, single-pass switched interpolation filters with offsets (single-pass SIFO), mode-dependent directional transform (MDDT) for intra-coding, luma and chroma high-precision filtering, geometry motion partitioning, and adaptive motion vector resolution. It also incorporates internal bit-depth increase (IBDI) and modified quadtree-based adaptive loop filtering (QALF). Simulation results are presented for a variety of bit rates, resolutions and coding configurations to demonstrate the high compression efficiency achieved by the proposed video codec at a moderate level of encoding and decoding complexity. For the random access hierarchical B configuration (HierB), the proposed video codec achieves an average BD-rate reduction of 30.88% compared to the H.264/AVC alpha anchor. For the low-delay hierarchical P (HierP) configuration, the proposed video codec achieves average BD-rate reductions of 32.96% and 48.57% compared to the H.264/AVC beta and gamma anchors, respectively.
Hybrid petacomputing meets cosmology: The Roadrunner Universe project
NASA Astrophysics Data System (ADS)
Habib, Salman; Pope, Adrian; Lukić, Zarija; Daniel, David; Fasel, Patricia; Desai, Nehal; Heitmann, Katrin; Hsu, Chung-Hsing; Ankeny, Lee; Mark, Graham; Bhattacharya, Suman; Ahrens, James
2009-07-01
The target of the Roadrunner Universe project at Los Alamos National Laboratory is a set of very large cosmological N-body simulation runs on the hybrid supercomputer Roadrunner, the world's first petaflop platform. Roadrunner's architecture presents opportunities and difficulties characteristic of next-generation supercomputing. We describe a new code designed to optimize performance and scalability by explicitly matching the underlying algorithms to the machine architecture, and by using the physics of the problem as an essential aid in this process. While applications will differ in specific exploits, we believe that such a design process will become increasingly important in the future. The Roadrunner Universe project code, MC3 (Mesh-based Cosmology Code on the Cell), uses grid and direct particle methods to balance the capabilities of Roadrunner's conventional (Opteron) and accelerator (Cell BE) layers. Mirrored particle caches and spectral techniques are used to overcome communication bandwidth limitations and possible difficulties with complicated particle-grid interaction templates.
A new hybrid code (CHIEF) implementing the inertial electron fluid equation without approximation
NASA Astrophysics Data System (ADS)
Muñoz, P. A.; Jain, N.; Kilian, P.; Büchner, J.
2018-03-01
We present a new hybrid algorithm implemented in the code CHIEF (Code Hybrid with Inertial Electron Fluid) for simulations of electron-ion plasmas. The algorithm treats the ions kinetically, modeled by the particle-in-cell (PIC) method, and the electrons as an inertial fluid, modeled by electron fluid equations without any of the approximations used in most other hybrid codes with an inertial electron fluid. This kind of code is appropriate for modeling a large variety of quasineutral plasma phenomena where the electron inertia and/or ion kinetic effects are relevant. We present here the governing equations of the model, how these are discretized and implemented numerically, as well as six test problems to validate our numerical approach. Our chosen test problems, where the electron inertia and ion kinetic effects play the essential role, are: 0) excitation of parallel eigenmodes to check numerical convergence and stability, 1) parallel (to a background magnetic field) propagating electromagnetic waves, 2) perpendicularly propagating electrostatic waves (ion Bernstein modes), 3) the ion beam right-hand instability (resonant and non-resonant), 4) ion Landau damping, 5) the ion firehose instability, and 6) the 2D oblique ion firehose instability. Our results successfully reproduce the predictions of linear and non-linear theory for all these problems, validating our code. All these properties make the hybrid code ideal for studying multi-scale phenomena between electron and ion scales, such as collisionless shocks, magnetic reconnection and kinetic plasma turbulence in the dissipation range above the electron scales.
Pollier, Jacob; González-Guzmán, Miguel; Ardiles-Diaz, Wilson; Geelen, Danny; Goossens, Alain
2011-01-01
cDNA-Amplified Fragment Length Polymorphism (cDNA-AFLP) is a commonly used technique for genome-wide expression analysis that does not require prior sequence knowledge. Typically, quantitative expression data and sequence information are obtained for a large number of differentially expressed gene tags. However, most of the gene tags do not correspond to full-length (FL) coding sequences, which is a prerequisite for subsequent functional analysis. A medium-throughput screening strategy, based on integration of polymerase chain reaction (PCR) and colony hybridization, was developed that allows in parallel screening of a cDNA library for FL clones corresponding to incomplete cDNAs. The method was applied to screen for the FL open reading frames of a selection of 163 cDNA-AFLP tags from three different medicinal plants, leading to the identification of 109 (67%) FL clones. Furthermore, the protocol allows for the use of multiple probes in a single hybridization event, thus significantly increasing the throughput when screening for rare transcripts. The presented strategy offers an efficient method for the conversion of incomplete expressed sequence tags (ESTs), such as cDNA-AFLP tags, to FL-coding sequences.
NASA Astrophysics Data System (ADS)
Gao, Xiatian; Wang, Xiaogang; Jiang, Binhao
2017-10-01
UPSF (Universal Plasma Simulation Framework) is a new plasma simulation code designed for maximum flexibility by using cutting-edge techniques supported by the C++17 standard. Through the use of metaprogramming techniques, UPSF provides arbitrary-dimensional data structures and methods to support various kinds of plasma simulation models, such as Vlasov, particle-in-cell (PIC), fluid and Fokker-Planck models, as well as their variants and hybrid methods. Through C++ metaprogramming, a single code can be applied to systems of arbitrary dimension with no loss of performance. UPSF can also automatically parallelize the distributed data structures and accelerate matrix and tensor operations with BLAS. A three-dimensional particle-in-cell code has been developed based on UPSF. Two test cases, Landau damping and the Weibel instability, for the electrostatic and electromagnetic cases respectively, are presented to show the validity and performance of the UPSF code.
NASA Astrophysics Data System (ADS)
Wei, Chengying; Xiong, Cuilian; Liu, Huanlin
2017-12-01
Maximal multicast stream algorithms based on network coding (NC) can improve the throughput of wavelength-division multiplexing (WDM) networks, which is nevertheless still far below the network's theoretical maximum throughput. Moreover, the existing multicast stream algorithms do not determine the information distribution pattern and the routing at the same time. In this paper, an improved genetic algorithm is proposed to maximize the optical multicast throughput by NC and to determine the multicast stream distribution by hybrid chromosome construction for multicast with a single source and multiple destinations. The proposed hybrid chromosomes are constructed from binary chromosomes and integer chromosomes: the binary chromosomes represent the optical multicast routing and the integer chromosomes indicate the multicast stream distribution. A fitness function is designed to guarantee that each destination can receive the maximum number of decodable multicast streams. The simulation results show that the proposed method is far superior to typical maximal multicast stream algorithms based on NC in terms of network throughput in WDM networks.
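The hybrid-chromosome idea can be sketched as follows: one binary vector selects candidate links for the multicast routing, one integer vector assigns the number of coded streams per link, and the fitness rewards destinations that can decode many streams. Everything below (the toy topology, the decoding check, the GA parameters) is an illustrative assumption rather than the authors' algorithm.

```python
import random

N_LINKS, MAX_STREAMS, N_DEST = 10, 3, 4

def random_chromosome():
    """Hybrid chromosome = binary routing part + integer stream-distribution part."""
    routing = [random.randint(0, 1) for _ in range(N_LINKS)]            # link used?
    streams = [random.randint(0, MAX_STREAMS) for _ in range(N_LINKS)]  # coded streams per link
    return routing, streams

def decodable_streams(routing, streams, dest):
    """Toy decoding model: a destination decodes as many streams as the weakest
    used link on its (fixed, invented) 3-link path carries."""
    path = [(dest + k) % N_LINKS for k in range(3)]
    used = [streams[l] for l in path if routing[l]]
    return min(used) if len(used) == 3 else 0

def fitness(chrom):
    routing, streams = chrom
    # Reward total decodable streams, lightly penalize spectrum use (active links).
    return sum(decodable_streams(routing, streams, d) for d in range(N_DEST)) \
           - 0.1 * sum(routing)

def mutate(chrom):
    routing, streams = list(chrom[0]), list(chrom[1])
    routing[random.randrange(N_LINKS)] ^= 1                       # flip a binary gene
    streams[random.randrange(N_LINKS)] = random.randint(0, MAX_STREAMS)  # resample an integer gene
    return routing, streams

population = [random_chromosome() for _ in range(30)]
for _ in range(50):                                               # simple (mu+lambda)-style loop
    population += [mutate(c) for c in population]
    population = sorted(population, key=fitness, reverse=True)[:30]
print(fitness(population[0]))
```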
Design of a Hybrid Propulsion System for Orbit Raising Applications
NASA Astrophysics Data System (ADS)
Boman, N.; Ford, M.
2004-10-01
A trade-off between conventional liquid apogee engines used for orbit-raising applications and hybrid rocket engines (HRE) has been performed using a case study approach. Current requirements for lower cost and enhanced safety place hybrid propulsion systems in the spotlight. For the evaluation and design of a hybrid rocket engine, a parametric engineering code was developed, based on the combustion chamber characteristics of selected propellants. A single-port cylindrical fuel grain section is considered. Polyethylene (PE) and hydroxyl-terminated polybutadiene (HTPB) represent the fuels investigated. The engine design is optimized to minimize the propulsion system volume and mass, while keeping the system as simple as possible. It is found that the fuel grain L/D ratio boundary condition has a major impact on the overall hybrid rocket engine design.
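Parametric sizing of a single-port hybrid grain typically starts from the empirical regression-rate law ṙ = a·G_ox^n; the sketch below integrates that law to track port radius and fuel mass flow over a burn. The HTPB-like coefficients and operating point shown are rough order-of-magnitude assumptions for illustration, not values from the paper.

```python
import math

# Empirical regression-rate law: r_dot = a * G_ox**n  (single cylindrical port).
# The coefficients below are rough order-of-magnitude assumptions.
a, n = 2.0e-5, 0.68          # r_dot in m/s when G_ox is in kg/(m^2 s)
rho_fuel = 920.0             # kg/m^3, HTPB-like density (assumed)
mdot_ox = 2.0                # kg/s oxidizer flow (assumed)
r, L = 0.03, 1.0             # initial port radius (m), grain length (m)
dt, t = 0.05, 0.0

mdot_fuel = 0.0
while t < 20.0:
    G_ox = mdot_ox / (math.pi * r**2)          # oxidizer mass flux through the port
    r_dot = a * G_ox**n                        # fuel regression rate
    mdot_fuel = rho_fuel * 2 * math.pi * r * L * r_dot
    r += r_dot * dt                            # port opens up as fuel burns away
    t += dt

print(f"final port radius = {r * 100:.1f} cm, final O/F = {mdot_ox / mdot_fuel:.2f}")
```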
Evaluation of the Performance of the Hybrid Lattice Boltzmann Based Numerical Flux
NASA Astrophysics Data System (ADS)
Zheng, H. W.; Shu, C.
2016-06-01
It is well known that the numerical scheme is a key factor in the stability and accuracy of a Navier-Stokes solver. Recently, a new hybrid lattice Boltzmann numerical flux scheme (HLBFS) was developed by Shu's group. It combines two different LBFS schemes by a switch function, and it solves the Boltzmann equation instead of the Euler equation. In this article, the main objective is to evaluate the ability of this HLBFS scheme with our in-house cell-centered, hybrid-mesh-based Navier-Stokes code. Its performance is examined on several widely used benchmark test cases. Comparisons between computed and experimental results are conducted. They show that the scheme can capture the shock wave as well as resolve the boundary layer.
NASA Astrophysics Data System (ADS)
Datta, Jinia; Chowdhuri, Sumana; Bera, Jitendranath
2016-12-01
This paper presents a novel scheme for remote condition monitoring of a multi-machine system, in which secured and coded induction machine data for different parameters are communicated between state-of-the-art dedicated hardware units (DHUs) installed at the machine terminals and centralized PC-based machine data management (MDM) software. The DHUs are built for the acquisition of different parameters from the respective machines, and hence are placed at nearby panels in order to acquire the parameters cost-effectively during running conditions. The MDM software collects these data through a communication channel in which all the DHUs are networked using the RS485 protocol. Before transmission, the parameter-related data are modified with the adoption of differential pulse-code modulation (DPCM) and Huffman coding, and are further encrypted with a private key, where different keys are used for different DHUs. In this way a data security scheme is adopted during passage through the communication channel in order to avoid any third-party attack on the channel. The hybrid mode of DPCM and Huffman coding is chosen to reduce the data packet length. A MATLAB-based simulation and its practical implementation using DHUs at three machine terminals (one healthy three-phase, one healthy single-phase and one faulty three-phase machine) prove its efficacy and usefulness for condition-based maintenance of a multi-machine system. The data at the central control room are decrypted and decoded using the MDM software. In this work it is observed that the channel efficiency with respect to the different parameter measurements is increased substantially.
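The packet-length reduction described above combines differential coding with entropy coding; a minimal sketch of that DPCM + Huffman cascade for a slowly varying parameter stream is given below. The sample data and code structure are illustrative only, not the DHU firmware.

```python
import heapq
from collections import Counter

def dpcm_encode(samples):
    """Differential pulse-code modulation: send the first sample, then differences."""
    return [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]

def huffman_book(symbols):
    """Build a Huffman code book {symbol: bitstring} from symbol frequencies."""
    heap = [[w, i, {s: ""}] for i, (s, w) in enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    i = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {**{s: "0" + b for s, b in c1.items()},
                  **{s: "1" + b for s, b in c2.items()}}
        heapq.heappush(heap, [w1 + w2, i, merged])
        i += 1
    return heap[0][2]

# Slowly varying machine parameter (e.g. a temperature channel), invented values.
readings = [500, 501, 501, 502, 502, 502, 503, 503, 502, 502, 501, 501]
residuals = dpcm_encode(readings)            # small, repetitive values
book = huffman_book(residuals)
bitstream = "".join(book[r] for r in residuals)
print(residuals)
print(f"{len(bitstream)} bits after DPCM+Huffman vs "
      f"{len(readings) * 16} bits as raw 16-bit samples")
```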
Comparison Of The Performance Of Hybrid Coders Under Different Configurations
NASA Astrophysics Data System (ADS)
Gunasekaran, S.; Raina J., P.
1983-10-01
Picture bandwidth reduction employing DPCM and orthogonal transform (OT) coding for TV transmission has been widely discussed in the literature; both techniques have their own advantages and limitations in terms of compression ratio, implementation, sensitivity to picture statistics and sensitivity to channel noise. Hybrid coding, introduced by Habibi as a cascade of the two techniques, offers excellent performance and proves to be attractive, retaining the special advantages of both techniques. In recent times, interest has shifted to hybrid coding, and in the absence of a report on the relative performance of hybrid coders under different configurations, an attempt has been made here to collate that information. Fourier, Hadamard, Slant, Sine, Cosine and Haar transforms have been considered for the present work.
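The cascade Habibi introduced applies a 1-D orthogonal transform along one image dimension and DPCM along the other; the following sketch shows that basic structure. The choice of a DCT, a previous-column predictor and a uniform quantizer are illustrative assumptions, not the configurations compared in the paper.

```python
import numpy as np
from scipy.fft import dct, idct

def hybrid_encode(image, q_step=8.0):
    """1-D DCT along columns, then closed-loop DPCM across the coefficient columns."""
    coeffs = dct(image.astype(float), axis=0, norm="ortho")
    symbols = np.zeros(coeffs.shape, dtype=int)
    prev_rec = np.zeros(coeffs.shape[0])
    for j in range(coeffs.shape[1]):
        residual = coeffs[:, j] - prev_rec              # predict from reconstructed column
        symbols[:, j] = np.round(residual / q_step)
        prev_rec = prev_rec + symbols[:, j] * q_step    # track what the decoder will see
    return symbols

def hybrid_decode(symbols, q_step=8.0):
    coeffs = np.cumsum(symbols * q_step, axis=1)        # undo the DPCM loop
    return idct(coeffs, axis=0, norm="ortho")           # inverse transform each column

rng = np.random.default_rng(0)
img = np.cumsum(rng.normal(size=(16, 16)), axis=1)      # smooth-ish test "image"
rec = hybrid_decode(hybrid_encode(img))
print("RMS reconstruction error:", np.sqrt(np.mean((img - rec) ** 2)))
```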
A Hybrid RANS/LES Approach for Predicting Jet Noise
NASA Technical Reports Server (NTRS)
Goldstein, Marvin E.
2006-01-01
Hybrid acoustic prediction methods have an important advantage over the current Reynolds averaged Navier-Stokes (RANS) based methods in that they only involve modeling of the relatively universal subscale motion and not the configuration dependent larger scale turbulence. Unfortunately, they are unable to account for the high frequency sound generated by the turbulence in the initial mixing layers. This paper introduces an alternative approach that directly calculates the sound from a hybrid RANS/LES flow model (which can resolve the steep gradients in the initial mixing layers near the nozzle lip) and adopts modeling techniques similar to those used in current RANS based noise prediction methods to determine the unknown sources in the equations for the remaining unresolved components of the sound field. The resulting prediction method would then be intermediate between the current noise prediction codes and previously proposed hybrid noise prediction methods.
Hybrid and concatenated coding applications.
NASA Technical Reports Server (NTRS)
Hofman, L. B.; Odenwalder, J. P.
1972-01-01
Results are presented of a study to evaluate the performance and implementation complexity of a concatenated and a hybrid coding system for moderate-speed deep-space applications. It is shown that, with a total complexity of less than three times that of the basic Viterbi decoder, concatenated coding improves a constraint-length 8, rate 1/3 Viterbi decoding system by 1.1 and 2.6 dB at bit error probabilities of 10^-4 and 10^-8, respectively. With a somewhat greater total complexity, the hybrid coding system is shown to obtain a 0.9-dB computational performance improvement over the basic rate 1/3 sequential decoding system. Although substantial, these complexities are much less than those required to achieve the same performance with more complex Viterbi or sequential decoder systems.
Digital Plasma Control System for Alcator C-Mod
NASA Astrophysics Data System (ADS)
Ferrara, M.; Wolfe, S.; Stillerman, J.; Fredian, T.; Hutchinson, I.
2004-11-01
A digital plasma control system (DPCS) has been designed to replace the present C-Mod system, which is based on a hybrid analog-digital computer. The initial implementation of DPCS comprises two 64-channel, 16-bit, low-latency cPCI digitizers, each with 16 analog outputs, controlled by a rack-mounted single-processor Linux server, which also serves as the compute engine. A prototype system employing three older 32-channel digitizers was tested during the 2003-04 campaign. The hybrid system's linear PID feedback was emulated by IDL code executing a synchronous loop, using the same target waveforms and control parameters. Reliable real-time operation was accomplished under a standard Linux OS (RH9) by locking memory and disabling interrupts during the plasma pulse. The DPCS-computed outputs agreed to within a few percent with those produced by the hybrid system, except for discrepancies due to offsets and non-ideal behavior of the hybrid circuitry. The system operated reliably, with no sample loss, at more than twice the 10 kHz design specification, providing extra time for implementing more advanced control algorithms. The code is fault-tolerant and produces consistent output waveforms even with 10% sample loss.
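The emulated feedback can be illustrated with a short synchronous loop: at each sample period the measured channel is compared with the target waveform and a discrete PID law produces the output. The plant model, gains and timing below are illustrative assumptions, not the C-Mod control parameters.

```python
import numpy as np

dt = 1.0 / 10_000            # 10 kHz design sample rate
kp, ki, kd = 2.0, 50.0, 0.001

def pid_step(error, state):
    """One synchronous PID update; state = (integral, previous error)."""
    integral, prev_err = state
    integral += error * dt
    derivative = (error - prev_err) / dt
    out = kp * error + ki * integral + kd * derivative
    return out, (integral, error)

# Toy first-order "plasma parameter" plant driven toward a target waveform.
target = np.concatenate([np.linspace(0.0, 1.0, 5000), np.full(5000, 1.0)])
y, state, tau = 0.0, (0.0, 0.0), 0.02
history = []
for ref in target:
    u, state = pid_step(ref - y, state)
    y += dt * (u - y) / tau          # plant: tau * dy/dt = u - y
    history.append(y)

print("final value:", history[-1], "target:", target[-1])
```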
DOE Office of Scientific and Technical Information (OSTI.GOV)
Massimo, F. (E-mail: francesco.massimo@ensta-paristech.fr; Dipartimento SBAI, Università di Roma “La Sapienza”, Via A. Scarpa 14, 00161 Roma); Atzeni, S.
Architect, a time-explicit hybrid code designed to perform quick simulations of electron-driven plasma wakefield acceleration, is described. In order to obtain beam quality acceptable for applications, control of the beam-plasma dynamics is necessary. Particle-in-cell (PIC) codes represent the state-of-the-art technique to investigate the underlying physics and possible experimental scenarios; however, PIC codes demand heavy computational resources. The Architect code substantially reduces the need for computational resources by using a hybrid approach: relativistic electron bunches are treated kinetically, as in a PIC code, and the background plasma as a fluid. Cylindrical symmetry is assumed for the solution of the electromagnetic fields and fluid equations. In this paper both the underlying algorithms and a comparison with a fully three-dimensional particle-in-cell code are reported. The comparison highlights the good agreement between the two models up to the weakly non-linear regime. In highly non-linear regimes the two models only disagree in a localized region, where the plasma electrons expelled by the bunch close up at the end of the first plasma oscillation.
Prediction of properties of intraply hybrid composites
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Sinclair, J. H.
1979-01-01
Equations based on the mixtures rule are presented for predicting the physical, thermal, hygral, and mechanical properties of unidirectional intraply hybrid composites (UIHC) from the corresponding properties of their constituent composites. Bounds were derived for uniaxial longitudinal strengths, tension, compression, and flexure of UIHC. The equations predict shear and flexural properties which agree with experimental data from UIHC. Use of these equations in a composites mechanics computer code predicted flexural moduli which agree with experimental data from various intraply hybrid angleplied laminates (IHAL). It is indicated, briefly, how these equations can be used in conjunction with composite mechanics and structural analysis during the analysis/design process.
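The mixtures-rule estimates take the general form P_hybrid = Σ V_i P_i over the constituent composites, with an inverse (series) form used for some properties; a minimal sketch for a two-constituent intraply hybrid is given below. The constituent values are made up purely for illustration, and which form applies to which property follows the paper's equations, which are not reproduced here.

```python
def rule_of_mixtures(volume_fractions, properties):
    """Linear rule of mixtures: P_hybrid = sum(V_i * P_i)."""
    return sum(v * p for v, p in zip(volume_fractions, properties))

def inverse_rule_of_mixtures(volume_fractions, properties):
    """Inverse (series) rule: 1 / P_hybrid = sum(V_i / P_i)."""
    return 1.0 / sum(v / p for v, p in zip(volume_fractions, properties))

# Illustrative constituents: a graphite/epoxy and a glass/epoxy unidirectional ply.
vf = [0.7, 0.3]          # ply volume fractions in the intraply hybrid (assumed)
E11 = [145e9, 45e9]      # longitudinal moduli (Pa), assumed values
E22 = [10e9, 12e9]       # transverse moduli (Pa), assumed values

print("E11_hybrid = %.1f GPa" % (rule_of_mixtures(vf, E11) / 1e9))
print("E22_hybrid (series form) = %.1f GPa" % (inverse_rule_of_mixtures(vf, E22) / 1e9))
```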
Regoui, Chaouki; Durand, Guillaume; Belliveau, Luc; Léger, Serge
2013-01-01
This paper presents a novel hybrid DNA encryption (HyDEn) approach that uses randomized assignments of unique error-correcting DNA Hamming code words for single characters in the extended ASCII set. HyDEn relies on custom-built quaternary codes and a private key used in the randomized assignment of code words and the cyclic permutations applied on the encoded message. Along with its ability to detect and correct errors, HyDEn equals or outperforms existing cryptographic methods and represents a promising in silico DNA steganographic approach. PMID:23984392
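The basic mechanics (a private key driving a randomized assignment of fixed-length DNA code words to extended-ASCII characters, followed by a cyclic permutation of the encoded message) can be sketched as below. The 8-base code words here are plain random strings, not the error-correcting Hamming code words used by HyDEn, so the sketch illustrates only the keyed assignment and permutation steps.

```python
import random

BASES = "ACGT"
WORD_LEN = 8      # illustrative code-word length (HyDEn uses Hamming code words)

def keyed_codebook(key: int):
    """Randomized one-to-one assignment of DNA words to the 256 extended-ASCII codes."""
    rng = random.Random(key)
    words = set()
    while len(words) < 256:
        words.add("".join(rng.choice(BASES) for _ in range(WORD_LEN)))
    words = sorted(words)
    rng.shuffle(words)                       # keyed, randomized assignment
    return {chr(i): w for i, w in enumerate(words)}

def encrypt(message: str, key: int) -> str:
    book = keyed_codebook(key)
    encoded = "".join(book[c] for c in message)
    shift = random.Random(key ^ 0xA5A5).randrange(len(encoded))   # keyed cyclic permutation
    return encoded[shift:] + encoded[:shift]

def decrypt(ciphertext: str, key: int) -> str:
    book = keyed_codebook(key)
    inverse = {w: c for c, w in book.items()}
    shift = random.Random(key ^ 0xA5A5).randrange(len(ciphertext))
    encoded = ciphertext[-shift:] + ciphertext[:-shift]            # undo the permutation
    return "".join(inverse[encoded[i:i + WORD_LEN]]
                   for i in range(0, len(encoded), WORD_LEN))

msg = "hybrid DNA"
assert decrypt(encrypt(msg, key=1234), key=1234) == msg
print(encrypt(msg, key=1234)[:32], "...")
```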
NASA Astrophysics Data System (ADS)
Jellali, Nabiha; Najjar, Monia; Ferchichi, Moez; Rezig, Houria
2017-07-01
In this paper, a new two-dimensional spectral/spatial code family, named two-dimensional dynamic cyclic shift (2D-DCS) codes, is introduced. The 2D-DCS codes are derived from the dynamic cyclic shift code for the spectral and spatial coding. The proposed system can fully eliminate the multiple access interference (MAI) by using the MAI cancellation property. The effects of shot noise, phase-induced intensity noise and thermal noise are taken into account to analyze the code performance. In comparison with existing two-dimensional (2D) codes, such as the 2D perfect difference (2D-PD), 2D Extended Enhanced Double Weight (2D-Extended-EDW) and 2D hybrid (2D-FCC/MDW) codes, the numerical results show that our proposed codes have the best performance. By keeping the same code length and increasing the spatial code, the performance of our 2D-DCS system is enhanced: it provides higher data rates while using lower transmitted power and a smaller spectral width.
DCT based interpolation filter for motion compensation in HEVC
NASA Astrophysics Data System (ADS)
Alshin, Alexander; Alshina, Elena; Park, Jeong Hoon; Han, Woo-Jin
2012-10-01
The High Efficiency Video Coding (HEVC) draft standard has the challenging goal of doubling coding efficiency compared with H.264/AVC. Many aspects of the traditional hybrid coding framework were improved during the development of the new standard. Motion-compensated prediction, in particular the interpolation filter, is one area that was improved significantly over H.264/AVC. This paper presents the details of the interpolation filter design of the draft HEVC standard. The coding efficiency improvement over the H.264/AVC interpolation filter is studied and experimental results are presented, which show a 4.0% average bitrate reduction for the luma component and an 11.3% average bitrate reduction for the chroma component. The coding efficiency gains are significant for some video sequences and can reach up to 21.7%.
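The DCT-based interpolation filter (DCT-IF) computes fractional-sample values as a fixed FIR filter over neighboring integer samples. The sketch below applies an 8-tap half-sample filter of the kind adopted for HEVC luma; the taps shown are the widely cited half-pel coefficients and are included here for illustration rather than as a normative reference.

```python
import numpy as np

# 8-tap half-sample luma filter of the kind used by the HEVC DCT-IF
# (commonly cited half-pel taps, normalized by 64; illustrative).
HALF_PEL_TAPS = np.array([-1, 4, -11, 40, 40, -11, 4, -1], dtype=float)

def interpolate_half_pel(row):
    """Horizontal half-sample interpolation of a 1-D row of 8-bit luma samples.

    Each half-pel value between samples i and i+1 is a weighted sum of the
    eight surrounding integer samples; the taps are symmetric, so convolution
    and correlation coincide.
    """
    padded = np.pad(row.astype(float), (3, 4), mode="edge")
    half = np.convolve(padded, HALF_PEL_TAPS, mode="valid") / 64.0
    return np.clip(np.round(half), 0, 255).astype(int)

row = np.array([100, 102, 104, 110, 130, 160, 170, 172, 171, 170])
print(interpolate_half_pel(row))   # one half-pel value per integer position
```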
Bedard, Tanya; Lowry, R Brian; Sibbald, Barbara; Thomas, Mary Ann; Innes, A Micheil
2016-01-01
The use of array-based comparative genomic hybridization to assess DNA copy number is increasing in many jurisdictions. Such technology identifies more genetic causes of congenital anomalies; however, the clinical significance of some results may be challenging to interpret. A coding strategy to address cases with copy number variants has recently been implemented by the Alberta Congenital Anomalies Surveillance System and is described.
Mozumdar, Mohammad; Song, Zhen Yu; Lavagno, Luciano; Sangiovanni-Vincentelli, Alberto L.
2014-01-01
The Model Based Design (MBD) approach is a popular trend to speed up application development of embedded systems, which uses high-level abstractions to capture functional requirements in an executable manner, and which automates implementation code generation. Wireless Sensor Networks (WSNs) are an emerging very promising application area for embedded systems. However, there is a lack of tools in this area, which would allow an application developer to model a WSN application by using high level abstractions, simulate it mapped to a multi-node scenario for functional analysis, and finally use the refined model to automatically generate code for different WSN platforms. Motivated by this idea, in this paper we present a hybrid simulation framework that not only follows the MBD approach for WSN application development, but also interconnects a simulated sub-network with a physical sub-network and then allows one to co-simulate them, which is also known as Hardware-In-the-Loop (HIL) simulation. PMID:24960083
NASA Astrophysics Data System (ADS)
Fernandez, Eduardo; Borelli, Noah; Cappelli, Mark; Gascon, Nicolas
2003-10-01
Most current Hall thruster simulation efforts employ either 1D (axial), or 2D (axial and radial) codes. These descriptions crucially depend on the use of an ad-hoc perpendicular electron mobility. Several models for the mobility are typically invoked: classical, Bohm, empirically based, wall-induced, as well as combinations of the above. Experimentally, it is observed that fluctuations and electron transport depend on axial distance and operating parameters. Theoretically, linear stability analyses have predicted a number of unstable modes; yet the nonlinear character of the fluctuations and/or their contribution to electron transport remains poorly understood. Motivated by these observations, a 2D code in the azimuthal and axial coordinates has been written. In particular, the simulation self-consistently calculates the azimuthal disturbances resulting in fluctuating drifts, which in turn (if properly correlated with plasma density disturbances) result in fluctuation-driven electron transport. The characterization of the turbulence at various operating parameters and across the channel length is also the object of this study. A description of the hybrid code used in the simulation as well as the initial results will be presented.
Roshani, G H; Karami, A; Khazaei, A; Olfateh, A; Nazemi, E; Omidi, M
2018-05-17
The gamma-ray source plays a very important role in the precision of multiphase flow metering. In this study, different combinations of gamma-ray sources ((133Ba-137Cs), (133Ba-60Co), (241Am-137Cs), (241Am-60Co), (133Ba-241Am) and (60Co-137Cs)) were investigated in order to optimize the three-phase flow meter. The three phases were water, oil and gas, and the regime was considered annular. The required data were numerically generated using the MCNP-X code, which is a Monte Carlo code. Indeed, the present study is devoted to forecasting the volume fractions in the annular three-phase flow, based on a multi-energy metering system including various radiation sources and one NaI detector, using a hybrid model of an artificial neural network and the Jaya optimization algorithm. Since the summation of the volume fractions is constant, a constrained modeling problem exists, meaning that the hybrid model must forecast only two volume fractions. Six hybrid models, associated with the number of used radiation sources, are designed. The models are employed to forecast the gas and water volume fractions. The next step is to train the hybrid models based on the numerically obtained data. The results show that the best forecast results are obtained for the gas and water volume fractions of the system including (241Am-137Cs) as the radiation source. Copyright © 2018 Elsevier Ltd. All rights reserved.
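The forecasting stage can be illustrated with a small, purely synthetic stand-in: a feed-forward network maps detector features to the gas and water fractions, and the oil fraction is recovered from the constraint that the three fractions sum to one. The data generator, attenuation values and network settings below are illustrative assumptions, and a standard optimizer is used here in place of the paper's Jaya-trained network.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)

def fake_detector_features(gas, water):
    """Synthetic stand-in for registered counts in four energy windows."""
    oil = 1.0 - gas - water
    mu_gas   = np.array([0.05, 0.08, 0.12, 0.20])   # assumed effective attenuations
    mu_water = np.array([0.90, 0.70, 0.55, 0.40])
    mu_oil   = np.array([0.75, 0.60, 0.45, 0.35])
    attn = mu_gas * gas + mu_water * water + mu_oil * oil
    return np.exp(-attn) + rng.normal(0.0, 0.002, size=4)

# Training set: random admissible (gas, water) pairs with gas + water + oil = 1.
targets = rng.dirichlet(np.ones(3), size=2000)[:, :2]       # forecast only two fractions
features = np.array([fake_detector_features(g, w) for g, w in targets])

model = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=3000, random_state=0)
model.fit(features, targets)

gas_true, water_true = 0.55, 0.25
pred_gas, pred_water = model.predict([fake_detector_features(gas_true, water_true)])[0]
print(f"gas={pred_gas:.2f}, water={pred_water:.2f}, oil={1 - pred_gas - pred_water:.2f}")
```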
Split-gene system for hybrid wheat seed production.
Kempe, Katja; Rubtsova, Myroslava; Gils, Mario
2014-06-24
Hybrid wheat plants are superior in yield and growth characteristics compared with their homozygous parents. The commercial production of wheat hybrids is difficult because of the inbreeding nature of wheat and the lack of a practical fertility control that enforces outcrossing. We describe a hybrid wheat system that relies on the expression of a phytotoxic barnase and provides for male sterility. The barnase coding information is divided and distributed at two loci that are located on allelic positions of the host chromosome and are therefore "linked in repulsion." Functional complementation of the loci is achieved through coexpression of the barnase fragments and intein-mediated ligation of the barnase protein fragments. This system allows for growth and maintenance of male-sterile female crossing partners, whereas the hybrids are fertile. The technology does not require fertility restorers and is based solely on the genetic modification of the female crossing partner.
Sharma, Diksha; Badano, Aldo
2013-03-01
hybridMANTIS is a Monte Carlo package for modeling indirect x-ray imagers using columnar geometry based on a hybrid concept that maximizes the utilization of available CPU and graphics processing unit processors in a workstation. The authors compare hybridMANTIS x-ray response simulations to previously published MANTIS and experimental data for four cesium iodide scintillator screens. These screens have a variety of reflective and absorptive surfaces with different thicknesses. The authors analyze hybridMANTIS results in terms of modulation transfer function and calculate the root mean square difference and Swank factors from simulated and experimental results. The comparison suggests that hybridMANTIS better matches the experimental data as compared to MANTIS, especially at high spatial frequencies and for the thicker screens. hybridMANTIS simulations are much faster than MANTIS with speed-ups up to 5260. hybridMANTIS is a useful tool for improved description and optimization of image acquisition stages in medical imaging systems and for modeling the forward problem in iterative reconstruction algorithms.
Wavelet-based compression of M-FISH images.
Hua, Jianping; Xiong, Zixiang; Wu, Qiang; Castleman, Kenneth R
2005-05-01
Multiplex fluorescence in situ hybridization (M-FISH) is a recently developed technology that enables multi-color chromosome karyotyping for molecular cytogenetic analysis. Each M-FISH image set consists of a number of aligned images of the same chromosome specimen captured at different optical wavelength. This paper presents embedded M-FISH image coding (EMIC), where the foreground objects/chromosomes and the background objects/images are coded separately. We first apply critically sampled integer wavelet transforms to both the foreground and the background. We then use object-based bit-plane coding to compress each object and generate separate embedded bitstreams that allow continuous lossy-to-lossless compression of the foreground and the background. For efficient arithmetic coding of bit planes, we propose a method of designing an optimal context model that specifically exploits the statistical characteristics of M-FISH images in the wavelet domain. Our experiments show that EMIC achieves nearly twice as much compression as Lempel-Ziv-Welch coding. EMIC also performs much better than JPEG-LS and JPEG-2000 for lossless coding. The lossy performance of EMIC is significantly better than that of coding each M-FISH image with JPEG-2000.
Djordjevic, Ivan B
2011-08-15
In addition to capacity, the future high-speed optical transport networks will also be constrained by energy consumption. In order to solve the capacity and energy constraints simultaneously, in this paper we propose the use of energy-efficient hybrid D-dimensional signaling (D>4) by employing all available degrees of freedom for conveyance of the information over a single carrier including amplitude, phase, polarization and orbital angular momentum (OAM). Given the fact that the OAM eigenstates, associated with the azimuthal phase dependence of the complex electric field, are orthogonal, they can be used as basis functions for multidimensional signaling. Since the information capacity is a linear function of number of dimensions, through D-dimensional signal constellations we can significantly improve the overall optical channel capacity. The energy-efficiency problem is solved, in this paper, by properly designing the D-dimensional signal constellation such that the mutual information is maximized, while taking the energy constraint into account. We demonstrate high-potential of proposed energy-efficient hybrid D-dimensional coded-modulation scheme by Monte Carlo simulations. © 2011 Optical Society of America
Superdense Coding over Optical Fiber Links with Complete Bell-State Measurements
NASA Astrophysics Data System (ADS)
Williams, Brian P.; Sadlier, Ronald J.; Humble, Travis S.
2017-02-01
Adopting quantum communication to modern networking requires transmitting quantum information through a fiber-based infrastructure. We report the first demonstration of superdense coding over optical fiber links, taking advantage of a complete Bell-state measurement enabled by time-polarization hyperentanglement, linear optics, and common single-photon detectors. We demonstrate the highest single-qubit channel capacity to date utilizing linear optics, 1.665 ±0.018 , and we provide a full experimental implementation of a hybrid, quantum-classical communication protocol for image transfer.
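A compact way to see why a complete Bell-state measurement yields two classical bits per transmitted qubit is the standard superdense-coding calculation: the sender applies one of the four Paulis to her half of a shared Bell pair, mapping it to one of the four orthogonal Bell states, which the receiver then distinguishes. The sketch below reproduces that textbook calculation numerically; it is not a model of the fiber experiment itself.

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
PAULIS = {"00": I, "01": X, "10": Z, "11": Z @ X}      # message bits -> sender's operation

phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)         # shared Bell state |Phi+>

# The four Bell states form the measurement basis of a complete Bell-state analyzer.
BELL_BASIS = {
    "Phi+": np.array([1, 0, 0, 1]) / np.sqrt(2),
    "Phi-": np.array([1, 0, 0, -1]) / np.sqrt(2),
    "Psi+": np.array([0, 1, 1, 0]) / np.sqrt(2),
    "Psi-": np.array([0, 1, -1, 0]) / np.sqrt(2),
}

for message, op in PAULIS.items():
    state = np.kron(op, I) @ phi_plus                   # encode on the sender's qubit only
    probs = {name: abs(vec @ state) ** 2 for name, vec in BELL_BASIS.items()}
    outcome = max(probs, key=probs.get)
    print(f"bits {message} -> Bell state {outcome} (probability {probs[outcome]:.2f})")
```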
Hybrid Hard and Soft Decision Decoding of Reed-Solomon Codes for M-ary Frequency-Shift Keying
2010-06-01
Reed-Solomon (RS) coding, Orthogonal signaling, Additive White Gaussian Noise (AWGN), Pulse-Noise Interference (PNI), coherent detection, noncoherent ...Coherent Demodulation of MFSK ....................................................10 2. Noncoherent Demodulation of MFSK...62 V. PERFORMANCE SIMULATION AND ANALYSIS OF MFSK WITH RS ENCODING, HYBRID HD SD DECODING, AND NONCOHERENT DEMODULATION IN AWGN
NASA Astrophysics Data System (ADS)
Wang, Rongjiang; Heimann, Sebastian; Zhang, Yong; Wang, Hansheng; Dahm, Torsten
2017-04-01
A hybrid method is proposed to calculate complete synthetic seismograms based on a spherically symmetric and self-gravitating Earth with a multi-layered structure of atmosphere, ocean, mantle, liquid core and solid core. For large wavelengths, a numerical scheme is used to solve the geodynamic boundary-value problem without any approximation on the deformation and gravity coupling. With the decreasing wavelength, the gravity effect on the deformation becomes negligible and the analytical propagator scheme can be used. Many useful approaches are used to overcome the numerical problems that may arise in both analytical and numerical schemes. Some of these approaches have been established in the seismological community and the others are developed for the first time. Based on the stable and efficient hybrid algorithm, an all-in-one code QSSP is implemented to cover the complete spectrum of seismological interests. The performance of the code is demonstrated by various tests including the curvature effect on teleseismic body and surface waves, the appearance of multiple reflected, teleseismic core phases, the gravity effect on long period surface waves and free oscillations, the simulation of near-field displacement seismograms with the static offset, the coupling of tsunami and infrasound waves, and free oscillations of the solid Earth, the atmosphere and the ocean. QSSP is open source software that can be used as a stand-alone FORTRAN code or may be applied in combination with a Python toolbox to calculate and handle Green's function databases for efficient coding of source inversion problems.
CORSICA modelling of ITER hybrid operation scenarios
NASA Astrophysics Data System (ADS)
Kim, S. H.; Bulmer, R. H.; Campbell, D. J.; Casper, T. A.; LoDestro, L. L.; Meyer, W. H.; Pearlstein, L. D.; Snipes, J. A.
2016-12-01
The hybrid operating mode observed in several tokamaks is characterized by further enhancement over the high plasma confinement (H-mode) associated with reduced magneto-hydro-dynamic (MHD) instabilities linked to a stationary flat safety factor (q ) profile in the core region. The proposed ITER hybrid operation is currently aiming at operating for a long burn duration (>1000 s) with a moderate fusion power multiplication factor, Q , of at least 5. This paper presents candidate ITER hybrid operation scenarios developed using a free-boundary transport modelling code, CORSICA, taking all relevant physics and engineering constraints into account. The ITER hybrid operation scenarios have been developed by tailoring the 15 MA baseline ITER inductive H-mode scenario. Accessible operation conditions for ITER hybrid operation and achievable range of plasma parameters have been investigated considering uncertainties on the plasma confinement and transport. ITER operation capability for avoiding the poloidal field coil current, field and force limits has been examined by applying different current ramp rates, flat-top plasma currents and densities, and pre-magnetization of the poloidal field coils. Various combinations of heating and current drive (H&CD) schemes have been applied to study several physics issues, such as the plasma current density profile tailoring, enhancement of the plasma energy confinement and fusion power generation. A parameterized edge pedestal model based on EPED1 added to the CORSICA code has been applied to hybrid operation scenarios. Finally, fully self-consistent free-boundary transport simulations have been performed to provide information on the poloidal field coil voltage demands and to study the controllability with the ITER controllers. Extended from Proc. 24th Int. Conf. on Fusion Energy (San Diego, 2012) IT/P1-13.
Constructing LDPC Codes from Loop-Free Encoding Modules
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Dolinar, Samuel; Jones, Christopher; Thorpe, Jeremy; Andrews, Kenneth
2009-01-01
A method of constructing certain low-density parity-check (LDPC) codes by use of relatively simple loop-free coding modules has been developed. The subclasses of LDPC codes to which the method applies includes accumulate-repeat-accumulate (ARA) codes, accumulate-repeat-check-accumulate codes, and the codes described in Accumulate-Repeat-Accumulate-Accumulate Codes (NPO-41305), NASA Tech Briefs, Vol. 31, No. 9 (September 2007), page 90. All of the affected codes can be characterized as serial/parallel (hybrid) concatenations of such relatively simple modules as accumulators, repetition codes, differentiators, and punctured single-parity check codes. These are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. These codes can also be characterized as hybrid turbolike codes that have projected graph or protograph representations (for example see figure); these characteristics make it possible to design high-speed iterative decoders that utilize belief-propagation algorithms. The present method comprises two related submethods for constructing LDPC codes from simple loop-free modules with circulant permutations. The first submethod is an iterative encoding method based on the erasure-decoding algorithm. The computations required by this method are well organized because they involve a parity-check matrix having a block-circulant structure. The second submethod involves the use of block-circulant generator matrices. The encoders of this method are very similar to those of recursive convolutional codes. Some encoders according to this second submethod have been implemented in a small field-programmable gate array that operates at a speed of 100 megasymbols per second. By use of density evolution (a computational- simulation technique for analyzing performances of LDPC codes), it has been shown through some examples that as the block size goes to infinity, low iterative decoding thresholds close to channel capacity limits can be achieved for the codes of the type in question having low maximum variable node degrees. The decoding thresholds in these examples are lower than those of the best-known unstructured irregular LDPC codes constrained to have the same maximum node degrees. Furthermore, the present method enables the construction of codes of any desired rate with thresholds that stay uniformly close to their respective channel capacity thresholds.
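The block-circulant structure mentioned above can be made concrete with a small helper that assembles a parity-check matrix from circulant permutation blocks, each block being a cyclically shifted identity; the convention here of using -1 to mark an all-zero block, and the tiny protograph-style shift table itself, are arbitrary examples rather than one of the ARA designs in the article.

```python
import numpy as np

def circulant(shift, size):
    """Circulant permutation matrix: identity with columns cyclically shifted."""
    if shift < 0:                       # convention in this sketch: -1 = all-zero block
        return np.zeros((size, size), dtype=int)
    return np.roll(np.eye(size, dtype=int), shift, axis=1)

def parity_check_from_protograph(shift_table, block_size):
    """Expand a table of circulant shifts into a block-circulant H matrix."""
    return np.block([[circulant(s, block_size) for s in row] for row in shift_table])

# Arbitrary 2x4 protograph-style shift table, lifted with 5x5 circulants.
shifts = [[0,  1,  3, -1],
          [2, -1,  0,  4]]
H = parity_check_from_protograph(shifts, block_size=5)
print(H.shape)                           # (10, 20): rate ~1/2 before puncturing
print("row weights:", H.sum(axis=1))     # low-density rows
```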
NASA Astrophysics Data System (ADS)
Zhou, Zhenggan; Ma, Baoquan; Jiang, Jingtao; Yu, Guang; Liu, Kui; Zhang, Dongmei; Liu, Weiping
2014-10-01
Air-coupled ultrasonic testing (ACUT) has been viewed as a viable solution for defect detection in advanced composites used in the aerospace and aviation industries. However, the large mismatch of acoustic impedance at the air-solid interface makes the transmission efficiency of ultrasound low and leads to a poor signal-to-noise ratio (SNR) in the received signal. The use of signal-processing techniques in non-destructive testing is therefore highly valuable. This paper presents a hybrid method combining wavelet filtering and phase-coded pulse compression to improve the SNR and output power of the received signal. The wavelet transform is utilised to filter insignificant components from the noisy ultrasonic signal, and a pulse compression process is used to improve the power of the correlated signal based on a cross-correlation algorithm. For the purpose of reasonable parameter selection, different families of wavelets (Daubechies, Symlet and Coiflet) and different decomposition levels of the discrete wavelet transform are analysed, and different Barker codes (5-13 bits) are also analysed to achieve a higher main-to-side-lobe ratio. The performance of the hybrid method was verified on a honeycomb composite sample. Experimental results demonstrated that the proposed method is very efficient in improving the SNR and signal strength. The proposed method appears to be a very promising tool for evaluating the integrity of composite materials with high ultrasonic attenuation using ACUT.
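The two stages of the hybrid method (wavelet thresholding of the noisy received signal, then pulse compression against the transmitted phase-coded reference) can be sketched as below. The Barker-13 sequence is the longest Barker code; the waveform parameters, wavelet choice and threshold rule here are illustrative assumptions, not the values selected in the paper.

```python
import numpy as np
import pywt

fs, f0, cycles_per_chip = 1.0e6, 50.0e3, 2           # sample rate, carrier, chip length
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])

# Phase-coded transmit burst: each Barker chip flips the carrier phase.
t_chip = np.arange(int(cycles_per_chip * fs / f0)) / fs
chip = np.sin(2 * np.pi * f0 * t_chip)
reference = np.concatenate([b * chip for b in barker13])

# Simulated noisy echo: delayed, attenuated reference buried in noise.
rng = np.random.default_rng(1)
received = rng.normal(0.0, 0.5, 4000)
received[1200:1200 + reference.size] += 0.3 * reference

# Stage 1: discrete wavelet denoising (soft threshold on the detail coefficients).
coeffs = pywt.wavedec(received, "db4", level=4)
threshold = 0.3 * np.max(np.abs(coeffs[-1]))          # ad hoc threshold rule
coeffs = [coeffs[0]] + [pywt.threshold(c, threshold, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db4")[: received.size]

# Stage 2: pulse compression by cross-correlation with the coded reference.
compressed = np.correlate(denoised, reference, mode="valid")
print("estimated echo delay (samples):", np.argmax(np.abs(compressed)))
```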
Co-simulation coupling spectral/finite elements for 3D soil/structure interaction problems
NASA Astrophysics Data System (ADS)
Zuchowski, Loïc; Brun, Michael; De Martin, Florent
2018-05-01
The coupling between an implicit finite element (FE) code and an explicit spectral element (SE) code has been explored for solving elastic wave propagation in a soil/structure interaction problem. The coupling approach is based on domain decomposition methods in transient dynamics. The spatial coupling at the interface is managed by a standard mortar coupling approach, whereas the time integration is handled by a hybrid asynchronous time integrator. An external coupling software component, handling the interface problem, has been set up in order to couple the FE software Code_Aster with the SE software EFISPEC3D.
How Student Teachers (Don't) Talk about Race: An Intersectional Analysis
ERIC Educational Resources Information Center
Young, Kathryn S.
2016-01-01
This study explores how student teacher talk about their students illuminates the identities ascribed to these same students. It uses a hybrid intersectional framework based on Disability Studies, Critical Race Theory, and Latino Critical Theory and methodologies (like examining majoritarian stories, counter-storytelling, coded talk, and…
Improvements of the particle-in-cell code EUTERPE for petascaling machines
NASA Astrophysics Data System (ADS)
Sáez, Xavier; Soba, Alejandro; Sánchez, Edilberto; Kleiber, Ralf; Castejón, Francisco; Cela, José M.
2011-09-01
In the present work we report some performance measures and computational improvements recently carried out using the gyrokinetic code EUTERPE (Jost, 2000 [1] and Jost et al., 1999 [2]), which is based on the general particle-in-cell (PIC) method. The scalability of the code has been studied for up to sixty thousand processing elements and some steps towards a complete hybridization of the code were made. As a numerical example, non-linear simulations of Ion Temperature Gradient (ITG) instabilities have been carried out in screw-pinch geometry and the results are compared with earlier works. A parametric study of the influence of variables (step size of the time integrator, number of markers, grid size) on the quality of the simulation is presented.
Xia, Yidong; Podgorney, Robert; Huang, Hai
2016-03-17
FALCON (“Fracturing And Liquid CONvection”) is a hybrid continuous/discontinuous Galerkin finite element geothermal reservoir simulation code based on the MOOSE (“Multiphysics Object-Oriented Simulation Environment”) framework being developed and used for multiphysics applications. In the present work, a suite of verification and validation (“V&V”) test problems for FALCON was defined to meet the design requirements, and solved in the interest of enhanced geothermal system (“EGS”) design. Furthermore, the intent of this test problem suite is to provide baseline comparison data that demonstrates the performance of the FALCON solution methods. The simulation problems vary in complexity from single mechanical or thermal processes to coupled thermo-hydro-mechanical processes in geological porous media. Numerical results obtained by FALCON agreed well with either the available analytical solution or experimental data, indicating the verified and validated implementation of these capabilities in FALCON. Some form of solution verification has been attempted to identify sensitivities in the solution methods, where possible, and to suggest best practices when using the FALCON code.
Assessment of a hybrid finite element and finite volume code for turbulent incompressible flows
Xia, Yidong; Wang, Chuanjin; Luo, Hong; ...
2015-12-15
Hydra-TH is a hybrid finite-element/finite-volume incompressible/low-Mach flow simulation code based on the Hydra multiphysics toolkit being developed and used for thermal-hydraulics applications. In the present work, a suite of verification and validation (V&V) test problems for Hydra-TH was defined to meet the design requirements of the Consortium for Advanced Simulation of Light Water Reactors (CASL). The intent for this test problem suite is to provide baseline comparison data that demonstrates the performance of the Hydra-TH solution methods. The simulation problems vary in complexity from laminar to turbulent flows. A set of RANS and LES turbulence models were used in the simulation of four classical test problems. Numerical results obtained by Hydra-TH agreed well with either the available analytical solution or experimental data, indicating the verified and validated implementation of these turbulence models in Hydra-TH. Where possible, we have attempted some form of solution verification to identify sensitivities in the solution methods, and to suggest best practices when using the Hydra-TH code.
A hybrid gyrokinetic ion and isothermal electron fluid code for astrophysical plasma
NASA Astrophysics Data System (ADS)
Kawazura, Y.; Barnes, M.
2018-05-01
This paper describes a new code for simulating astrophysical plasmas that solves a hybrid model composed of gyrokinetic ions (GKI) and an isothermal electron fluid (ITEF) (Schekochihin et al., 2009 [9]). This model captures ion kinetic effects that are important near the ion gyro-radius scale while electron kinetic effects are ordered out by an electron-ion mass ratio expansion. The code is developed by incorporating the ITEF approximation into AstroGK, an Eulerian δf gyrokinetics code specialized to a slab geometry (Numata et al., 2010 [41]). The new code treats the linear terms in the ITEF equations implicitly while the nonlinear terms are treated explicitly. We show linear and nonlinear benchmark tests to prove the validity and applicability of the simulation code. Since the fast electron timescale is eliminated by the mass ratio expansion, the Courant-Friedrichs-Lewy condition is much less restrictive than in full gyrokinetic codes; the present hybrid code runs approximately 2√(mᵢ/mₑ) ≈ 100 times faster than AstroGK with a single ion species and kinetic electrons, where mᵢ/mₑ is the ion-electron mass ratio. The improvement of the computational time makes it feasible to execute ion scale gyrokinetic simulations with a high velocity space resolution and to run multiple simulations to determine the dependence of turbulent dynamics on parameters such as electron-ion temperature ratio and plasma beta.
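The splitting described above (linear terms implicit, nonlinear terms explicit) can be illustrated on a toy stiff equation; the sketch below applies backward Euler to the linear part and forward Euler to the nonlinear part. It is a generic semi-implicit (IMEX) demonstration with invented coefficients, not the gyrokinetic equations of the paper.

```python
import numpy as np

# Toy illustration of the implicit-linear / explicit-nonlinear splitting mentioned above,
# applied to du/dt = lam*u + N(u) with a stiff linear part (not the gyrokinetic equations).
lam = -200.0                      # stiff linear coefficient (treated implicitly)
N = lambda u: np.sin(u)           # mild nonlinearity (treated explicitly)
u, dt, t_end = 1.0, 0.05, 1.0     # dt far larger than the explicit stability limit ~2/|lam|

t = 0.0
while t < t_end - 1e-12:
    # backward Euler on lam*u, forward Euler on N(u):
    #   (u_new - u)/dt = lam*u_new + N(u)  =>  u_new = (u + dt*N(u)) / (1 - dt*lam)
    u = (u + dt * N(u)) / (1.0 - dt * lam)
    t += dt

print("u(t=1) from the semi-implicit step:", u)
```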
D'Alonzo, Marco; Dosen, Strahinja; Cipriani, Christian; Farina, Dario
2014-03-01
Electro- or vibro-tactile stimulations were used in the past to provide sensory information in many different applications ranging from human manual control to prosthetics. The two modalities were used separately in the past, and we hypothesized that a hybrid vibro-electrotactile (HyVE) stimulation could provide two afferent streams that are independently perceived by a subject, although delivered in parallel and through the same skin location. We conducted psychophysical experiments where healthy subjects were asked to recognize the intensities of electro- and vibro-tactile stimuli during hybrid and single-modality stimulations. The results demonstrated that the subjects were able to discriminate the features of the two modalities within the hybrid stimulus, and that the cross-modality interaction was limited enough to allow better transmission of discrete information (messages) using hybrid versus single-modality coding. The percentages of successful recognitions (mean ± standard deviation) for nine messages were 56 ± 11% and 72 ± 8% for two hybrid coding schemes, compared to 29 ± 7% for vibrotactile and 44 ± 4% for electrotactile coding. The HyVE can therefore be an attractive solution in numerous applications for providing sensory feedback in prostheses and rehabilitation, and it could be used to increase the resolution of a single variable or to simultaneously feed back two different variables.
Vector quantization for efficient coding of upper subbands
NASA Technical Reports Server (NTRS)
Zeng, W. J.; Huang, Y. F.
1994-01-01
This paper examines the application of vector quantization (VQ) to exploit both intra-band and inter-band redundancy in subband coding. The focus here is on the exploitation of inter-band dependency. It is shown that VQ is particularly suitable and effective for coding the upper subbands. Three subband decomposition-based VQ coding schemes are proposed here to exploit the inter-band dependency by making full use of the extra flexibility of the VQ approach over scalar quantization. A quadtree-based variable rate VQ (VRVQ) scheme which takes full advantage of the intra-band and inter-band redundancy is first proposed. Then, a more easily implementable alternative based on an efficient block-based edge estimation technique is employed to overcome the implementational barriers of the first scheme. Finally, a predictive VQ scheme formulated in the context of finite state VQ is proposed to further exploit the dependency among different subbands. A VRVQ scheme proposed elsewhere is extended to provide an efficient bit allocation procedure. Simulation results show that these three hybrid techniques have advantages, in terms of peak signal-to-noise ratio (PSNR) and complexity, over other existing subband-VQ approaches.
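For readers unfamiliar with the basic mechanism, the sketch below trains a small codebook by k-means on 2x2 blocks of a synthetic "upper subband" and encodes by nearest-neighbour lookup. It illustrates plain vector quantization only; the variable-rate and predictive schemes proposed in the paper are not represented, and all sizes are invented.

```python
import numpy as np

# Minimal vector quantizer for 2x2 blocks of a synthetic "upper subband".
rng = np.random.default_rng(0)
band = rng.laplace(scale=2.0, size=(64, 64))                  # stand-in for an upper subband

blocks = band.reshape(32, 2, 32, 2).transpose(0, 2, 1, 3).reshape(-1, 4)   # 2x2 vectors
K = 16                                                         # codebook size -> 4 bits/vector

codebook = blocks[rng.choice(len(blocks), K, replace=False)]   # k-means (Lloyd) training
for _ in range(20):
    d = np.linalg.norm(blocks[:, None, :] - codebook[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    for k in range(K):
        members = blocks[labels == k]
        if len(members):
            codebook[k] = members.mean(axis=0)

d = np.linalg.norm(blocks[:, None, :] - codebook[None, :, :], axis=2)
labels = d.argmin(axis=1)                                      # encoding = nearest codeword
recon = codebook[labels]                                       # decoding = table lookup
mse = np.mean((blocks - recon) ** 2)
print(f"rate = {np.log2(K)/4:.2f} bits/sample, MSE = {mse:.3f}")
```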
Domínguez, Carmen M; Ramos, Daniel; Mingorance, Jesús; Fierro, José L G; Tamayo, Javier; Calleja, Montserrat
2018-01-02
Carbapenem-resistant Enterobacteriaceae have recently become an important cause of morbidity and mortality due to healthcare-associated infections. Most commonly used diagnostic methods are incompatible with fast and accurate directed therapy. We report here the direct identification of the blaOXA-48 gene, which codes for the carbapenemase OXA-48, in lysate samples from Klebsiella pneumoniae. The method is PCR-free and label-free. It is based on the measurement of changes in the stiffness of DNA self-assembled monolayers anchored to microcantilevers that occur as a consequence of the hybridization. The stiffness of the DNA layer is measured through changes of the sensor resonance frequency upon hybridization and at varying relative humidity.
Seki, N; Muramatsu, M; Sugano, S; Suzuki, Y; Nakagawara, A; Ohhira, M; Hayashi, A; Hori, T; Saito, T
1998-01-01
Huntington disease (HD) is an inherited neurodegenerative disorder which is associated with CAG expansion in the coding region of the gene for huntingtin protein. Recently, a huntingtin interacting protein, HIP1, was isolated by the yeast two-hybrid system. Here we report the isolation of a cDNA clone for HIP1R (huntingtin interacting protein-1 related), which encodes a predicted protein product sharing a striking homology with HIP1. RT-PCR analysis showed that the messenger RNA was ubiquitously expressed in various human tissues. Based on PCR-assisted analysis of a radiation hybrid panel and fluorescence in situ hybridization, HIP1R was localized to the q24 region of chromosome 12.
NASA Astrophysics Data System (ADS)
Clark, Stephen; Winske, Dan; Schaeffer, Derek; Everson, Erik; Bondarenko, Anton; Constantin, Carmen; Niemann, Christoph
2014-10-01
We present 3D hybrid simulations of laser produced expanding debris clouds propagating through a magnetized ambient plasma in the context of magnetized collisionless shocks. New results from the 3D code are compared to previously obtained simulation results using a 2D hybrid code. The 3D code is an extension of a 2D code previously developed at Los Alamos National Laboratory. It has been parallelized and ported to execute on a cluster environment. The new simulations are used to verify scaling relationships, such as shock onset time and coupling parameter (Rm/ρd), developed via 2D simulations. Previous 2D results focus primarily on laboratory shock formation relevant to experiments being performed on the Large Plasma Device, where the shock propagates across the magnetic field. The new 3D simulations show wave structure and dynamics oblique to the magnetic field that introduce new physics to be considered in future experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharma, Diksha; Badano, Aldo
2013-03-15
Purpose: hybridMANTIS is a Monte Carlo package for modeling indirect x-ray imagers using columnar geometry based on a hybrid concept that maximizes the utilization of available CPU and graphics processing unit processors in a workstation. Methods: The authors compare hybridMANTIS x-ray response simulations to previously published MANTIS and experimental data for four cesium iodide scintillator screens. These screens have a variety of reflective and absorptive surfaces with different thicknesses. The authors analyze hybridMANTIS results in terms of modulation transfer function and calculate the root mean square difference and Swank factors from simulated and experimental results. Results: The comparison suggests that hybridMANTIS better matches the experimental data as compared to MANTIS, especially at high spatial frequencies and for the thicker screens. hybridMANTIS simulations are much faster than MANTIS with speed-ups up to 5260. Conclusions: hybridMANTIS is a useful tool for improved description and optimization of image acquisition stages in medical imaging systems and for modeling the forward problem in iterative reconstruction algorithms.
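The figures of merit named above can be computed with standard formulas: the Swank factor from the moments of a pulse-height distribution, I = M1^2/(M0*M2), and the root-mean-square difference between two MTF curves. The sketch below evaluates both on synthetic data; it is not hybridMANTIS output and the distributions are invented.

```python
import numpy as np

# Swank factor and MTF RMS difference on synthetic stand-in data (not hybridMANTIS output).
rng = np.random.default_rng(1)

pulse_height = rng.gamma(shape=30.0, scale=20.0, size=200_000)   # optical photons per x-ray
M0, M1, M2 = (np.mean(pulse_height**n) for n in (0, 1, 2))
swank = M1**2 / (M0 * M2)                                        # Swank factor I = M1^2/(M0*M2)

freq = np.linspace(0.0, 10.0, 50)                                # spatial frequency [cycles/mm]
mtf_meas = np.exp(-0.25 * freq)                                  # stand-in "measured" MTF
mtf_sim = np.exp(-0.25 * freq) + rng.normal(0.0, 0.01, freq.size)  # stand-in "simulated" MTF
rmsd = np.sqrt(np.mean((mtf_sim - mtf_meas) ** 2))

print(f"Swank factor: {swank:.4f}   MTF RMS difference: {rmsd:.4f}")
```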
7 CFR 457.112 - Hybrid sorghum seed crop insurance provisions.
Code of Federal Regulations, 2010 CFR
2010-01-01
... produced by crossing a male and female parent plant, each having a different genetic character. This... formula for establishing the value must be based on data provided by a public third party that establishes..., number or code assigned to a specific genetic cross by the seed company or the Special Provisions for the...
Superdense Coding over Optical Fiber Links with Complete Bell-State Measurements
Williams, Brian P.; Sadlier, Ronald J.; Humble, Travis S.
2017-02-01
Adopting quantum communication to modern networking requires transmitting quantum information through a fiber-based infrastructure. In this paper, we report the first demonstration of superdense coding over optical fiber links, taking advantage of a complete Bell-state measurement enabled by time-polarization hyperentanglement, linear optics, and common single-photon detectors. Finally, we demonstrate the highest single-qubit channel capacity to date utilizing linear optics, 1.665 ± 0.018, and we provide a full experimental implementation of a hybrid, quantum-classical communication protocol for image transfer.
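As background for the protocol demonstrated above, the sketch below simulates one ideal superdense-coding round with state vectors: prepare a Bell pair, encode two classical bits with a local Pauli on Alice's qubit, and decode with a complete Bell-state measurement. It is the textbook protocol only; the fiber links, hyperentanglement, and losses addressed in the paper are not modelled.

```python
import numpy as np

# Ideal superdense-coding round, simulated with state vectors (textbook protocol only).
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

bell = CNOT @ np.kron(H, I2) @ np.array([1, 0, 0, 0], dtype=complex)   # (|00> + |11>)/sqrt(2)
encode = {(0, 0): I2, (0, 1): X, (1, 0): Z, (1, 1): Z @ X}             # Alice's local Pauli

for bits, U in encode.items():
    state = np.kron(U, I2) @ bell                  # Alice encodes two classical bits
    # Bob's complete Bell-state measurement = inverse Bell circuit + computational readout
    state = np.kron(H, I2) @ CNOT @ state
    probs = np.abs(state) ** 2
    decoded = divmod(int(np.argmax(probs)), 2)
    print(f"sent {bits} -> decoded {decoded}")
```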
Observations Regarding Use of Advanced CFD Analysis, Sensitivity Analysis, and Design Codes in MDO
NASA Technical Reports Server (NTRS)
Newman, Perry A.; Hou, Gene J. W.; Taylor, Arthur C., III
1996-01-01
Observations regarding the use of advanced computational fluid dynamics (CFD) analysis, sensitivity analysis (SA), and design codes in gradient-based multidisciplinary design optimization (MDO) reflect our perception of the interactions required of CFD and our experience in recent aerodynamic design optimization studies using CFD. Sample results from these latter studies are summarized for conventional optimization (analysis - SA codes) and simultaneous analysis and design optimization (design code) using both Euler and Navier-Stokes flow approximations. The amount of computational resources required for aerodynamic design using CFD via analysis - SA codes is greater than that required for design codes. Thus, an MDO formulation that utilizes the more efficient design codes where possible is desired. However, in the aerovehicle MDO problem, the various disciplines that are involved have different design points in the flight envelope; therefore, CFD analysis - SA codes are required at the aerodynamic 'off design' points. The suggested MDO formulation is a hybrid multilevel optimization procedure that consists of both multipoint CFD analysis - SA codes and multipoint CFD design codes that perform suboptimizations.
NASA Astrophysics Data System (ADS)
Krasilenko, Vladimir G.; Bardachenko, Vitaliy F.; Nikolsky, Alexander I.; Lazarev, Alexander A.
2007-04-01
In the paper we show that the biologically motivated conception of time-pulse encoding offers a number of advantages (a single methodological basis, universality, simplicity of tuning, training and programming, etc.) in the creation and design of sensor systems with parallel input-output and processing, and of 2D structures for hybrid and neuro-fuzzy neurocomputers of next generations. We show the principles of construction of programmable relational optoelectronic time-pulse coded processors based on continuous logic, order logic and temporal wave processes. We consider a structure that performs extraction of an analog signal of a given grade (order) and sorting of analog and time-pulse coded variables. We offer an optoelectronic realization of such base relational elements of order logic, consisting of time-pulse coded phototransformers (pulse-width and pulse-phase modulators) with direct and complementary outputs, a sorting network of logical elements and programmable commutation blocks. We estimate the basic technical parameters of such base devices and processors built on them by simulation and experimental research: power of optical input signals 0.2-20 μW, processing time of microseconds, supply voltage 1.5-10 V, consumption power of hundreds of microwatts per element, extended functional possibilities, and training possibilities. We discuss some aspects of possible rules and principles of training and programmable tuning to the required function or relational operation, and the realization of hardware blocks for modifications of such processors. We show that, on the basis of such quasi-universal hardware blocks with simple and flexible programmable tuning, it is possible to create sorting machines, neural networks and hybrid data-processing systems with untraditional numerical systems and picture operands.
System Modeling and Diagnostics for Liquefying-Fuel Hybrid Rockets
NASA Technical Reports Server (NTRS)
Poll, Scott; Iverson, David; Ou, Jeremy; Sanderfer, Dwight; Patterson-Hine, Ann
2003-01-01
A Hybrid Combustion Facility (HCF) was recently built at NASA Ames Research Center to study the combustion properties of a new fuel formulation that burns approximately three times faster than conventional hybrid fuels. Researchers at Ames working in the area of Integrated Vehicle Health Management recognized a good opportunity to apply IVHM techniques to a candidate technology for next generation launch systems. Five tools were selected to examine various IVHM techniques for the HCF. Three of the tools, TEAMS (Testability Engineering and Maintenance System), L2 (Livingstone2), and RODON, are model-based reasoning (or diagnostic) systems. Two other tools in this study, ICS (Interval Constraint Simulator) and IMS (Inductive Monitoring System), do not attempt to isolate the cause of a failure but may be used for fault detection. Models of varying scope and completeness were created, both qualitative and quantitative. In each of the models, the structure and behavior of the physical system are captured. In the qualitative models, the temporal aspects of the system behavior and the abstraction of sensor data are handled outside of the model and require the development of additional code. In the quantitative model, some processing code is also necessary, though it is less extensive. Examples of fault diagnoses are given.
TDRSS telecommunications system, PN code analysis
NASA Technical Reports Server (NTRS)
Dixon, R.; Gold, R.; Kaiser, F.
1976-01-01
The pseudo noise (PN) codes required to support the TDRSS telecommunications services are analyzed and the impact of alternate coding techniques on the user transponder equipment, the TDRSS equipment, and all factors that contribute to the acquisition and performance of these telecommunication services is assessed. Possible alternatives to the currently proposed hybrid FH/direct sequence acquisition procedures are considered and compared relative to acquisition time, implementation complexity, operational reliability, and cost. The hybrid FH/direct sequence technique is analyzed and rejected in favor of a recommended approach which minimizes acquisition time and user transponder complexity while maximizing probability of acquisition and overall link reliability.
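For context on the PN codes analyzed above, the sketch below generates a maximal-length sequence with a linear feedback shift register and verifies its two-valued periodic autocorrelation. The 7-stage register and its taps are a generic textbook choice, not the TDRSS codes themselves.

```python
import numpy as np

# Generic PN (maximal-length) sequence from a linear feedback shift register, plus its
# periodic autocorrelation. The taps are a textbook example, not the TDRSS codes.
def lfsr_msequence(taps=(7, 6), nbits=7):
    state = [1] * nbits                       # any nonzero seed
    out = []
    for _ in range(2 ** nbits - 1):
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]             # shift in the feedback bit
    return np.array(out)

seq = lfsr_msequence()                        # length-127 m-sequence
chips = 1 - 2 * seq                           # map {0,1} -> {+1,-1}
acf = np.array([np.dot(chips, np.roll(chips, k)) for k in range(len(chips))])
print("period:", len(seq),
      " in-phase peak:", int(acf[0]),
      " off-peak values:", set(int(v) for v in acf[1:]))   # m-sequence: off-peak is always -1
```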
NASA Astrophysics Data System (ADS)
Kudryavtsev, Alexey N.; Kashkovsky, Alexander V.; Borisov, Semyon P.; Shershnev, Anton A.
2017-10-01
In the present work a computer code RCFS for numerical simulation of chemically reacting compressible flows on hybrid CPU/GPU supercomputers is developed. It solves 3D unsteady Euler equations for multispecies chemically reacting flows in general curvilinear coordinates using shock-capturing TVD schemes. Time advancement is carried out using explicit Runge-Kutta TVD schemes. The program implementation uses the CUDA application programming interface to perform GPU computations. Data are distributed among GPUs via a domain decomposition technique. The developed code is verified on a number of test cases including supersonic flow over a cylinder.
NASA Astrophysics Data System (ADS)
Krasilenko, Vladimir G.; Nikolsky, Alexander I.; Lazarev, Alexander A.; Lazareva, Maria V.
2010-05-01
In the paper we show that the biologically motivated conception of time-pulse encoding offers a set of advantages (a single methodological basis, universality, simplicity of tuning, learning and programming, etc.) in the creation and design of sensor systems with parallel input-output and processing for 2D-structure hybrid and next-generation neuro-fuzzy neurocomputers. We show design principles of programmable relational optoelectronic time-pulse encoded processors on the basis of continuous logic, order logic and temporal wave processes. We consider a structure that executes analog signal extraction and sorting of analog and time-pulse coded variables. We offer an optoelectronic realization of such a base relational order-logic element, which consists of time-pulse coded photoconverters (pulse-width and pulse-phase modulators) with direct and complementary outputs, a sorting network of logical elements and programmable commutation blocks. We estimate the technical parameters of devices and processors based on such elements by simulation and experimental research: optical input signal power 0.2-20 μW, processing time 1-10 μs, supply voltage 1-3 V, consumption power 10-100 μW, extended functional possibilities, and learning possibilities. We discuss some aspects of possible rules and principles of learning and programmable tuning to the required function or relational operation, and the realization of hardware blocks for modifications of such processors. We show that it is possible to create sorting machines, neural networks and hybrid data-processing systems with untraditional numerical systems and picture operands on the basis of such quasi-universal simple hardware blocks with flexible programmable tuning.
Hybrid rendering of the chest and virtual bronchoscopy [corrected].
Seemann, M D; Seemann, O; Luboldt, W; Gebicke, K; Prime, G; Claussen, C D
2000-10-30
Thin-section spiral computed tomography was used to acquire the volume data sets of the thorax. The tracheobronchial system and pathological changes of the chest were visualized using a color-coded surface rendering method. The structures of interest were then superimposed on a volume rendering of the other thoracic structures, thus producing a hybrid rendering. The hybrid rendering technique exploits the advantages of both rendering methods and enables virtual bronchoscopic examinations using different representation models. Virtual bronchoscopic examination with a transparent color-coded shaded-surface model enables the simultaneous visualization of both the airways and the adjacent structures behind the tracheobronchial wall and therefore offers a practical alternative to fiberoptic bronchoscopy. Hybrid rendering and virtual endoscopy obviate the need for time-consuming detailed analysis and presentation of axial source images.
Block structured adaptive mesh and time refinement for hybrid, hyperbolic + N-body systems
NASA Astrophysics Data System (ADS)
Miniati, Francesco; Colella, Phillip
2007-11-01
We present a new numerical algorithm for the solution of coupled collisional and collisionless systems, based on the block structured adaptive mesh and time refinement strategy (AMR). We describe the issues associated with the discretization of the system equations and the synchronization of the numerical solution on the hierarchy of grid levels. We implement a code based on a higher order, conservative and directionally unsplit Godunov’s method for hydrodynamics; a symmetric, time centered modified symplectic scheme for collisionless component; and a multilevel, multigrid relaxation algorithm for the elliptic equation coupling the two components. Numerical results that illustrate the accuracy of the code and the relative merit of various implemented schemes are also presented.
Specific and Modular Binding Code for Cytosine Recognition in Pumilio/FBF (PUF) RNA-binding Domains
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dong, Shuyun; Wang, Yang; Cassidy-Amstutz, Caleb
2011-10-28
Pumilio/fem-3 mRNA-binding factor (PUF) proteins possess a recognition code for bases A, U, and G, allowing designed RNA sequence specificity of their modular Pumilio (PUM) repeats. However, recognition side chains in a PUM repeat for cytosine are unknown. Here we report identification of a cytosine-recognition code by screening random amino acid combinations at conserved RNA recognition positions using a yeast three-hybrid system. This C-recognition code is specific and modular as specificity can be transferred to different positions in the RNA recognition sequence. A crystal structure of a modified PUF domain reveals specific contacts between an arginine side chain and the cytosine base. We applied the C-recognition code to design PUF domains that recognize targets with multiple cytosines and to generate engineered splicing factors that modulate alternative splicing. Finally, we identified a divergent yeast PUF protein, Nop9p, that may recognize natural target RNAs with cytosine. This work deepens our understanding of natural PUF protein target recognition and expands the ability to engineer PUF domains to recognize any RNA sequence.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lilienthal, P.
1997-12-01
This paper describes three different computer codes which have been written to model village power applications. The reasons which have driven the development of these codes include: the existence of limited field data; diverse applications can be modeled; models allow cost and performance comparisons; simulations generate insights into cost structures. The models which are discussed are: Hybrid2, a public code which provides detailed engineering simulations to analyze the performance of a particular configuration; HOMER - the hybrid optimization model for electric renewables - which provides economic screening for sensitivity analyses; and VIPOR - the village power model - which is a network optimization model for comparing mini-grids to individual systems. Examples of the output of these codes are presented for specific applications.
Using a Hybrid Approach to Facilitate Learning Introductory Programming
ERIC Educational Resources Information Center
Cakiroglu, Unal
2013-01-01
In order to facilitate students' understanding in introductory programming courses, different types of teaching approaches were conducted. In this study, a hybrid approach including comment first coding (CFC), analogy and template approaches were used. The goal was to investigate the effect of such a hybrid approach on students' understanding in…
Kalantzis, Georgios; Tachibana, Hidenobu
2014-01-01
For microdosimetric calculations, event-by-event Monte Carlo (MC) methods are considered the most accurate. The main shortcoming of those methods is the extensive requirement for computational time. In this work we present an event-by-event MC code of low projectile energy electron and proton tracks for accelerated microdosimetric MC simulations on a graphics processing unit (GPU). Additionally, a hybrid implementation scheme was realized by employing OpenMP and CUDA in such a way that both GPU and multi-core CPU were utilized simultaneously. The two implementation schemes have been tested and compared with the sequential single-threaded MC code on the CPU. Performance comparison was established on the speed-up for a set of benchmarking cases of electron and proton tracks. A maximum speedup of 67.2 was achieved for the GPU-based MC code, while a further improvement of the speedup of up to 20% was achieved for the hybrid approach. The results indicate the capability of our CPU-GPU implementation for accelerated MC microdosimetric calculations of both electron and proton tracks without loss of accuracy. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Tornga, Shawn R.
The Stand-off Radiation Detection System (SORDS) program is an Advanced Technology Demonstration (ATD) project through the Department of Homeland Security's Domestic Nuclear Detection Office (DNDO) with the goal of detection, identification and localization of weak radiological sources in the presence of large dynamic backgrounds. The Raytheon-SORDS Tri-Modal Imager (TMI) is a mobile truck-based, hybrid gamma-ray imaging system able to quickly detect, identify and localize radiation sources at standoff distances through improved sensitivity while minimizing the false alarm rate. Reconstruction of gamma-ray sources is performed using a combination of two imaging modalities: coded aperture and Compton scatter imaging. The TMI consists of 35 sodium iodide (NaI) crystals, 5×5×2 in³ each, arranged in a random coded aperture mask array (CA), followed by 30 position-sensitive NaI bars, each 24×2.5×3 in³, called the detection array (DA). The CA array acts as both a coded aperture mask and a scattering detector for Compton events. The large-area DA array acts as a collection detector for both Compton scattered events and coded aperture events. In this thesis, the developed coded aperture, Compton and hybrid imaging algorithms will be described along with their performance. It will be shown that multiple imaging modalities can be fused to improve detection sensitivity over a broader energy range than either alone. Since the TMI is a moving system, peripheral data, such as from a Global Positioning System (GPS) and Inertial Navigation System (INS), must also be incorporated. A method of adapting static imaging algorithms to a moving platform has been developed. Also, algorithms were developed in parallel with detector hardware, through the use of extensive simulations performed with the Geometry and Tracking Toolkit v4 (GEANT4). Simulations have been well validated against measured data. Results of image reconstruction algorithms at various speeds and distances will be presented, as well as localization capability.
The implementation of an aeronautical CFD flow code onto distributed memory parallel systems
NASA Astrophysics Data System (ADS)
Ierotheou, C. S.; Forsey, C. R.; Leatham, M.
2000-04-01
The parallelization of an industrially important in-house computational fluid dynamics (CFD) code for calculating the airflow over complex aircraft configurations using the Euler or Navier-Stokes equations is presented. The code discussed is the flow solver module of the SAUNA CFD suite. This suite uses a novel grid system that may include block-structured hexahedral or pyramidal grids, unstructured tetrahedral grids or a hybrid combination of both. To assist in the rapid convergence to a solution, a number of convergence acceleration techniques are employed including implicit residual smoothing and a multigrid full approximation storage scheme (FAS). Key features of the parallelization approach are the use of domain decomposition and encapsulated message passing to enable the execution in parallel using a single programme multiple data (SPMD) paradigm. In the case where a hybrid grid is used, a unified grid partitioning scheme is employed to define the decomposition of the mesh. The parallel code has been tested using both structured and hybrid grids on a number of different distributed memory parallel systems and is now routinely used to perform industrial scale aeronautical simulations.
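As a minimal illustration of the domain-decomposition and encapsulated message-passing idea described above, the sketch below gives each MPI rank a slab of a 1D grid with one-cell halos and exchanges those halos each iteration. It is not the SAUNA solver; mpi4py is assumed to be available, and the field and smoothing update are invented for the example.

```python
# Minimal SPMD domain-decomposition sketch with halo exchange (not the SAUNA flow solver).
# Assumes mpi4py is installed; run with e.g.:  mpirun -n 4 python halo_demo.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_global = 64
n_local = n_global // size                       # assume size divides n_global
left = rank - 1 if rank > 0 else MPI.PROC_NULL   # PROC_NULL makes boundary exchanges no-ops
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# local slab with one ghost (halo) cell on each side
u = np.zeros(n_local + 2)
u[1:-1] = rank + 1.0                             # dummy initial data

for _ in range(10):
    # exchange halos with neighbours (the "encapsulated message passing" step)
    comm.Sendrecv(u[1:2], dest=left, recvbuf=u[-1:], source=right)
    comm.Sendrecv(u[-2:-1], dest=right, recvbuf=u[0:1], source=left)
    # simple 3-point smoothing update on the owned cells only
    u[1:-1] = 0.5 * u[1:-1] + 0.25 * (u[:-2] + u[2:])

print(f"rank {rank}: owned-cell mean = {u[1:-1].mean():.4f}")
```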
NASA Astrophysics Data System (ADS)
Blum, Volker
This talk describes recent advances of a general, efficient, accurate all-electron electronic theory approach based on numeric atom-centered orbitals; emphasis is placed on developments related to materials for energy conversion and their discovery. For total energies and electron band structures, we show that the overall accuracy is on par with the best benchmark quality codes for materials, but scalable to large system sizes (1,000s of atoms) and amenable to both periodic and non-periodic simulations. A recent localized resolution-of-identity approach for the Coulomb operator enables O (N) hybrid functional based descriptions of the electronic structure of non-periodic and periodic systems, shown for supercell sizes up to 1,000 atoms; the same approach yields accurate results for many-body perturbation theory as well. For molecular systems, we also show how many-body perturbation theory for charged and neutral quasiparticle excitation energies can be efficiently yet accurately applied using basis sets of computationally manageable size. Finally, the talk highlights applications to the electronic structure of hybrid organic-inorganic perovskite materials, as well as to graphene-based substrates for possible future transition metal compound based electrocatalyst materials. All methods described here are part of the FHI-aims code. VB gratefully acknowledges contributions by numerous collaborators at Duke University, Fritz Haber Institute Berlin, TU Munich, USTC Hefei, Aalto University, and many others around the globe.
NASA Technical Reports Server (NTRS)
Habbal, Shadia Rifai
2005-01-01
Investigations of the physical processes responsible for coronal heating and the acceleration of the solar wind were pursued with the use of our recently developed 2D MHD solar wind code and our 1D multifluid code. In particular, we explored: (1) the role of proton temperature anisotropy in the expansion of the solar wind; (2) the role of plasma parameters at the coronal base in the formation of high speed solar wind streams at mid-latitudes; (3) a three-fluid model of the slow solar wind; (4) the heating of coronal loops; and (5) a newly developed hybrid code for the study of ion cyclotron resonance in the solar wind.
Extension of CE/SE method to non-equilibrium dissociating flows
NASA Astrophysics Data System (ADS)
Wen, C. Y.; Saldivar Massimi, H.; Shen, H.
2018-03-01
In this study, the hypersonic non-equilibrium flows over rounded nose geometries are numerically investigated by a robust conservation element and solution element (CE/SE) code, which is based on hybrid meshes consisting of triangular and quadrilateral elements. The dissociation and recombination chemical reactions as well as the vibrational energy relaxation are taken into account. The stiff source terms are solved by an implicit trapezoidal method of integration. Comparisons with laboratory and numerical cases are provided to demonstrate the accuracy and reliability of the present CE/SE code in simulating hypersonic non-equilibrium flows.
Zhang, Yuqin; Lin, Fanbo; Zhang, Youyu; Li, Haitao; Zeng, Yue; Tang, Hao; Yao, Shouzhuo
2011-01-01
A new method for the detection of point mutation in DNA based on monobase-coded cadmium telluride nanoprobes and the quartz crystal microbalance (QCM) technique was reported. A point mutation (single-base adenine, thymine, cytosine or guanine, namely A, T, C and G, mutation in a DNA strand, respectively) DNA QCM sensor was fabricated by immobilizing single-base mutation DNA-modified magnetic beads onto the electrode surface with an external magnetic field near the electrode. The DNA-modified magnetic beads were obtained from the biotin-avidin affinity reaction of biotinylated DNA and streptavidin-functionalized core/shell Fe(3)O(4)/Au magnetic nanoparticles, followed by a DNA hybridization reaction. Single-base coded CdTe nanoprobes (A-CdTe, T-CdTe, C-CdTe and G-CdTe, respectively) were used as the detection probes. The mutation site in DNA was distinguished by detecting the decreases of the resonance frequency of the piezoelectric quartz crystal when the coded nanoprobe was added to the test system. This proposed detection strategy for point mutation in DNA proved to be sensitive, simple, repeatable and low-cost; consequently, it has great potential for single nucleotide polymorphism (SNP) detection. 2011 © The Japan Society for Analytical Chemistry
SKIRT: Hybrid parallelization of radiative transfer simulations
NASA Astrophysics Data System (ADS)
Verstocken, S.; Van De Putte, D.; Camps, P.; Baes, M.
2017-07-01
We describe the design, implementation and performance of the new hybrid parallelization scheme in our Monte Carlo radiative transfer code SKIRT, which has been used extensively for modelling the continuum radiation of dusty astrophysical systems including late-type galaxies and dusty tori. The hybrid scheme combines distributed memory parallelization, using the standard Message Passing Interface (MPI) to communicate between processes, and shared memory parallelization, providing multiple execution threads within each process to avoid duplication of data structures. The synchronization between multiple threads is accomplished through atomic operations without high-level locking (also called lock-free programming). This improves the scaling behaviour of the code and substantially simplifies the implementation of the hybrid scheme. The result is an extremely flexible solution that adjusts to the number of available nodes, processors and memory, and consequently performs well on a wide variety of computing architectures.
Hypersonic simulations using open-source CFD and DSMC solvers
NASA Astrophysics Data System (ADS)
Casseau, V.; Scanlon, T. J.; John, B.; Emerson, D. R.; Brown, R. E.
2016-11-01
Hypersonic hybrid hydrodynamic-molecular gas flow solvers are required to satisfy the two essential requirements of any high-speed reacting code, these being physical accuracy and computational efficiency. The James Weir Fluids Laboratory at the University of Strathclyde is currently developing an open-source hybrid code which will eventually reconcile the direct simulation Monte-Carlo method, making use of the OpenFOAM application called dsmcFoam, and the newly coded open-source two-temperature computational fluid dynamics solver named hy2Foam. In conjunction with employing the CVDV chemistry-vibration model in hy2Foam, novel use is made of the QK rates in a CFD solver. In this paper, further testing is performed, in particular with the CFD solver, to ensure its efficacy before considering more advanced test cases. The hy2Foam and dsmcFoam codes have been shown to compare reasonably well, thus providing a useful basis for other codes to compare against.
Wu, Weitai; Zhou, Ting; Aiello, Michael; Zhou, Shuiqin
2010-08-15
A new class of optical glucose nanobiosensors with high sensitivity and selectivity at physiological pH is described. To construct these glucose nanobiosensors, fluorescent CdS quantum dots (QDs), serving as the optical code, were incorporated into glucose-sensitive poly(N-isopropylacrylamide-acrylamide-2-acrylamidomethyl-5-fluorophenylboronic acid) copolymer microgels, via both an in situ growth method and a "breathing in" method. The polymeric gel can adapt to surrounding glucose concentrations and regulate the fluorescence of the embedded QDs, converting biochemical signals into optical signals. The gradual swelling of the gel leads to quenching of the fluorescence at elevated glucose concentrations. The hybrid microgels displayed high selectivity to glucose over the potential primary interferents of lactate and human serum albumin in the physiologically important glucose concentration range. The stability, reversibility, and sensitivity of the organic-inorganic hybrid microgel-based biosensors were also systematically studied. These general properties of our nanobiosensors are well tunable with appropriate tailoring of the hybrid microgels, in particular simply through a change in the crosslinking degree of the microgels. The optical glucose nanobiosensors based on the organic-inorganic hybrid microgels have shown potential as a third-generation fluorescent biosensor. Copyright 2010 Elsevier B.V. All rights reserved.
An Object-Oriented Serial DSMC Simulation Package
NASA Astrophysics Data System (ADS)
Liu, Hongli; Cai, Chunpei
2011-05-01
A newly developed three-dimensional direct simulation Monte Carlo (DSMC) simulation package, named GRASP ("Generalized Rarefied gAs Simulation Package"), is reported in this paper. This package utilizes the concept of a simulation engine, many C++ features and software design patterns. The package has an open architecture which can benefit further development and maintenance of the code. In order to reduce the engineering time for three-dimensional models, a hybrid grid scheme, combined with a flexible data structure implemented in C++, is employed in this package. This scheme utilizes a local data structure based on the computational cell to achieve high performance on workstation processors. This data structure allows the DSMC algorithm to be very efficiently parallelized with domain decomposition and it provides much flexibility in terms of grid types. This package can utilize traditional structured, unstructured or hybrid grids within the framework of a single code to model arbitrarily complex geometries and to simulate rarefied gas flows. Benchmark test cases indicate that this package has satisfactory accuracy for complex rarefied gas flows.
NASA Astrophysics Data System (ADS)
Matsui, Chihiro; Kinoshita, Reika; Takeuchi, Ken
2018-04-01
A hybrid of storage class memory (SCM) and NAND flash is a promising technology for high performance storage. Error correction is inevitable on SCM and NAND flash because their bit error rate (BER) increases with write/erase (W/E) cycles, data retention, and program/read disturb. In addition, scaling and multi-level cell technologies increase BER. However, error-correcting code (ECC) degrades storage performance because of extra memory reading and encoding/decoding time. Therefore, the applicable ECC strength of SCM and NAND flash is evaluated independently by fixing the ECC strength of one memory in the hybrid storage. As a result, a weak BCH ECC with a small number of correctable bits is recommended for hybrid storage with large SCM capacity because SCM is accessed frequently. In contrast, a strong, long-latency LDPC ECC can be applied to NAND flash in hybrid storage with large SCM capacity because the large-capacity SCM improves the storage performance.
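To make the "weak, short ECC" end of the trade-off concrete, the sketch below implements a Hamming(7,4) single-error-correcting code. It is only a stand-in for the short BCH codes discussed above (the study itself evaluates BCH and LDPC, not this toy code), and the injected error pattern is invented.

```python
import numpy as np

# Minimal single-error-correcting Hamming(7,4) encoder/decoder (illustrative stand-in only).
G = np.array([[1, 0, 0, 0, 1, 1, 0],       # generator matrix, systematic form [I | P]
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],       # parity-check matrix [P^T | I]
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(data4):
    return (np.array(data4) @ G) % 2

def decode(word7):
    syndrome = (H @ word7) % 2
    if syndrome.any():                       # locate and flip the single erroneous bit
        col = np.where((H.T == syndrome).all(axis=1))[0][0]
        word7 = word7.copy()
        word7[col] ^= 1
    return word7[:4]                         # systematic code: data bits come first

data = [1, 0, 1, 1]
code = encode(data)
corrupted = code.copy()
corrupted[5] ^= 1                            # inject a single bit error
print("decoded:", decode(corrupted).tolist(), " original:", data)
```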
Pseudospectral Model for Hybrid PIC Hall-effect Thruster Simulation
2015-07-01
This work follows the example of Lam and Fernandez (hybrid-PIC) but substitutes a spectral description in the azimuthal direction, presenting a pseudospectral azimuthal-axial hybrid-PIC HET code which is designed to explicitly resolve and filter azimuthal fluctuations.
Fey, G; Lewis, J B; Grodzicker, T; Bothwell, A
1979-01-01
The adenovirus type 2-simian virus 40 (SV40) hybrid virus Ad2+ND1 dp2 (E. Lukanidin, manuscript in preparation) specified two proteins (molecular weights, 24,000 and 23,000) that are, in part, products of an insertion of SV40 early DNA sequences. This was demonstrated by translation in vitro from viral mRNA that had been selected by hybridization to SV40 DNA. These two phosphorylated, nonvirion proteins were produced late in infection in amounts similar to adenovirus 2 structural proteins and were closely related to each other in tryptic peptide composition. The portion of SV40 DNA (map units 0.17 to 0.22 on the SV40 genome) coding for these proteins was joined to sequences coding for the amino-terminal part of the adenovirus type 2 structural protein IV (fiber). The Ad2+ND1 dp2 23,000- and 24,000-molecular-weight proteins were hybrid polypeptides, with about two-thirds of their tryptic peptides contributed by the fiber protein and the remainder contributed by SV40 T-antigen. They shared with T-antigen (molecular weight, 96,000) a carboxy-terminal proline-rich tryptic peptide. Together, the tryptic peptide composition of these proteins and the known SV40 DNA sequences suggested the reading frame for the translation of T-antigen. The carboxy terminus for T-antigen would then be located on the SV40 genome map next to the TAA terminator triplet at position 0.175, 910 bases away from the cleavage site of the restriction endonuclease EcoRI. Seven host range mutants from Ad2+ND1 dp2 were isolated that had lost the capacity to propagate on monkey cells. They did not induce detectable levels of the hybrid proteins. Three of these mutants had lost the SV40 DNA insertion that codes in part for these proteins. Thus, in analogy to the Ad2+ND1 30,000-molecular-weight protein, the presence of these proteins correlates with the presence of the helper function for adenovirus replication on monkey cells. PMID:225516
Shared Memory Parallelization of an Implicit ADI-type CFD Code
NASA Technical Reports Server (NTRS)
Hauser, Th.; Huang, P. G.
1999-01-01
A parallelization study designed for ADI-type algorithms is presented using the OpenMP specification for shared-memory multiprocessor programming. Details of optimizations specifically addressed to cache-based computer architectures are described and performance measurements for the single and multiprocessor implementation are summarized. The paper demonstrates that optimization of memory access on a cache-based computer architecture controls the performance of the computational algorithm. A hybrid MPI/OpenMP approach is proposed for clusters of shared memory machines to further enhance the parallel performance. The method is applied to develop a new LES/DNS code, named LESTool. A preliminary DNS calculation of a fully developed channel flow at a Reynolds number of 180, Re(sub tau) = 180, has shown good agreement with existing data.
Hybrid spread spectrum radio system
Smith, Stephen F.; Dress, William B.
2010-02-02
Systems and methods are described for hybrid spread spectrum radio systems. A method includes modulating a signal by utilizing a subset of bits from a pseudo-random code generator to control an amplification circuit that provides a gain to the signal. Another method includes: modulating a signal by utilizing a subset of bits from a pseudo-random code generator to control a fast hopping frequency synthesizer; and fast frequency hopping the signal with the fast hopping frequency synthesizer, wherein multiple frequency hops occur within a single data-bit time.
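The two ingredients described in the claims above, PN generator bits driving a programmable gain and a fast-hopping carrier frequency, can be illustrated numerically. The sketch below is not the patented design; the sample rates, hop table, and gain table are invented for the example.

```python
import numpy as np

# Numerical sketch of the ideas above: PN bits select (a) a gain and (b) a hop frequency,
# with several hops per data bit. Parameters are invented; not the patented implementation.
rng = np.random.default_rng(7)
fs, hop_rate, bit_rate = 1.0e6, 8000.0, 1000.0          # sample, hop and data rates [Hz]
hop_freqs = np.array([50e3, 80e3, 110e3, 140e3])        # synthesizer table (4 entries -> 2 bits)
gains = np.array([0.5, 1.0])                            # amplifier settings (1 bit)

data_bits = rng.integers(0, 2, 8)                       # data to transmit
hops_per_bit = int(hop_rate / bit_rate)
samples_per_hop = int(fs / hop_rate)

pn = rng.integers(0, 2, (len(data_bits) * hops_per_bit, 3))   # stand-in PN generator output

t = np.arange(samples_per_hop) / fs
signal = []
for i, bit in enumerate(data_bits):
    for h in range(hops_per_bit):
        chunk_pn = pn[i * hops_per_bit + h]
        f = hop_freqs[chunk_pn[0] * 2 + chunk_pn[1]]    # 2 PN bits select the hop frequency
        g = gains[chunk_pn[2]]                          # 1 PN bit selects the gain
        phase = np.pi if bit else 0.0                   # BPSK data modulation
        signal.append(g * np.cos(2 * np.pi * f * t + phase))
signal = np.concatenate(signal)
print("generated", signal.size, "samples across", len(data_bits) * hops_per_bit, "hops")
```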
Implementing Molecular Dynamics for Hybrid High Performance Computers - 1. Short Range Forces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, W Michael; Wang, Peng; Plimpton, Steven J
The use of accelerators such as general-purpose graphics processing units (GPGPUs) has become popular in scientific computing applications due to their low cost, impressive floating-point capabilities, high memory bandwidth, and low electrical power requirements. Hybrid high performance computers, machines with more than one type of floating-point processor, are now becoming more prevalent due to these advantages. In this work, we discuss several important issues in porting a large molecular dynamics code for use on parallel hybrid machines - 1) choosing a hybrid parallel decomposition that works on central processing units (CPUs) with distributed memory and accelerator cores with shared memory, 2) minimizing the amount of code that must be ported for efficient acceleration, 3) utilizing the available processing power from both many-core CPUs and accelerators, and 4) choosing a programming model for acceleration. We present our solution to each of these issues for short-range force calculation in the molecular dynamics package LAMMPS. We describe algorithms for efficient short range force calculation on hybrid high performance machines. We describe a new approach for dynamic load balancing of work between CPU and accelerator cores. We describe the Geryon library that allows a single code to compile with both CUDA and OpenCL for use on a variety of accelerators. Finally, we present results on a parallel test cluster containing 32 Fermi GPGPUs and 180 CPU cores.
A comparison of Manchester symbol tracking loops for block 5 applications
NASA Technical Reports Server (NTRS)
Holmes, J. K.
1991-01-01
The linearized tracking errors of three Manchester (biphase coded) symbol tracking loops are compared to determine which is appropriate for Block 5 receiver applications. The first is a nonreturn to zero (NRZ) symbol synchronizer loop operating at twice the symbol rate (NRZ x 2) so that it operates on half symbols. The second near optimally processes the mid-symbol transitions and ignores the between-symbol transitions. In the third configuration, the first two approaches are combined as a hybrid to produce the best performance. Although this hybrid loop is the best at low symbol signal to noise ratios (SNRs), it has about the same performance as the NRZ x 2 loop at higher SNRs (greater than 0 dB E_s/N_0). Based on this analysis, it is tentatively recommended that the hybrid loop be implemented for Manchester data in the Block 5 receiver. However, the high data rate case and the hardware implications of each implementation must be understood and analyzed before the hybrid loop is recommended unconditionally.
An update on the BQCD Hybrid Monte Carlo program
NASA Astrophysics Data System (ADS)
Haar, Taylor Ryan; Nakamura, Yoshifumi; Stüben, Hinnerk
2018-03-01
We present an update of BQCD, our Hybrid Monte Carlo program for simulating lattice QCD. BQCD is one of the main production codes of the QCDSF collaboration and is used by CSSM and in some Japanese finite temperature and finite density projects. Since the first publication of the code at Lattice 2010 the program has been extended in various ways. New features of the code include: dynamical QED, action modification in order to compute matrix elements by using the Feynman-Hellmann theorem, more trace measurements (like Tr(D⁻ⁿ) for K, cSW and chemical potential reweighting), a more flexible integration scheme, polynomial filtering, term-splitting for RHMC, and a portable implementation of performance critical parts employing SIMD.
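For readers unfamiliar with the Hybrid Monte Carlo algorithm that BQCD implements, the sketch below shows only its bare structure (momentum refresh, leapfrog trajectory, Metropolis accept/reject) applied to a 1D Gaussian target. None of the lattice-QCD machinery listed above is represented; all parameters are invented.

```python
import numpy as np

# Bare-bones Hybrid Monte Carlo for a 1D Gaussian target (illustration of the algorithm only).
rng = np.random.default_rng(0)
U = lambda q: 0.5 * q**2          # potential = -log(target density)
dU = lambda q: q                  # its gradient

def hmc_step(q, n_leap=20, eps=0.1):
    p = rng.normal()                              # refresh the momentum
    q_new, p_new = q, p
    p_new -= 0.5 * eps * dU(q_new)                # leapfrog: half kick
    for _ in range(n_leap - 1):
        q_new += eps * p_new                      # drift
        p_new -= eps * dU(q_new)                  # full kick
    q_new += eps * p_new
    p_new -= 0.5 * eps * dU(q_new)                # closing half kick
    dH = (U(q_new) + 0.5 * p_new**2) - (U(q) + 0.5 * p**2)
    return q_new if rng.random() < np.exp(-dH) else q   # Metropolis accept/reject

samples, q = [], 0.0
for _ in range(5000):
    q = hmc_step(q)
    samples.append(q)
print("sample mean %.3f, variance %.3f (target: 0, 1)" % (np.mean(samples), np.var(samples)))
```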
NASA Astrophysics Data System (ADS)
Papior, Nick; Lorente, Nicolás; Frederiksen, Thomas; García, Alberto; Brandbyge, Mads
2017-03-01
We present novel methods implemented within the non-equilibrium Green function (NEGF) code TRANSIESTA based on density functional theory (DFT). Our flexible, next-generation DFT-NEGF code handles devices with one or multiple electrodes (Ne ≥ 1) with individual chemical potentials and electronic temperatures. We describe its novel methods for electrostatic gating, contour optimizations, and assertion of charge conservation, as well as the newly implemented algorithms for optimized and scalable matrix inversion, performance-critical pivoting, and hybrid parallelization. Additionally, a generic NEGF "post-processing" code (TBTRANS/PHTRANS) for electron and phonon transport is presented with several novelties such as Hamiltonian interpolations, Ne ≥ 1 electrode capability, bond-currents, a generalized interface for user-defined tight-binding transport, transmission projection using eigenstates of a projected Hamiltonian, and fast inversion algorithms for large-scale simulations easily exceeding 10⁶ atoms on workstation computers. The new features of both codes are demonstrated and benchmarked for relevant test systems.
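To illustrate the basic NEGF transmission machinery that such codes generalize, the sketch below computes T(E) = Tr[Γ_L G Γ_R G†] for a perfect 1D tight-binding chain coupled to two semi-infinite leads via their analytic surface Green's functions. It is a textbook toy, not the TRANSIESTA/TBTRANS implementation, and the hopping and device size are invented.

```python
import numpy as np

# Minimal NEGF transmission for a perfect 1D tight-binding chain between two leads.
t = 1.0                                         # nearest-neighbour hopping
N = 5                                           # device region of 5 sites
H = np.zeros((N, N))
for i in range(N - 1):
    H[i, i + 1] = H[i + 1, i] = -t

def surface_g(E, eta=1e-9):
    """Retarded surface Green's function of a semi-infinite 1D chain (on-site 0, hopping -t)."""
    z = E + 1j * eta
    return (z - 1j * np.sqrt(4 * t**2 - z**2)) / (2 * t**2)

def transmission(E):
    sigma_L = np.zeros((N, N), complex)
    sigma_R = np.zeros((N, N), complex)
    sigma_L[0, 0] = t**2 * surface_g(E)         # self-energies couple only to the end sites
    sigma_R[-1, -1] = t**2 * surface_g(E)
    G = np.linalg.inv((E + 1e-9j) * np.eye(N) - H - sigma_L - sigma_R)
    gamma_L = 1j * (sigma_L - sigma_L.conj().T)
    gamma_R = 1j * (sigma_R - sigma_R.conj().T)
    return np.trace(gamma_L @ G @ gamma_R @ G.conj().T).real

for E in (-1.0, 0.0, 1.0, 3.0):
    print(f"T(E={E:+.1f}) = {transmission(E):.3f}")   # ~1 inside the band, ~0 outside
```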
Chen, Huifang; Fan, Guangyu; Xie, Lei; Cui, Jun-Hong
2013-01-01
Due to the characteristics of underwater acoustic channel, media access control (MAC) protocols designed for underwater acoustic sensor networks (UWASNs) are quite different from those for terrestrial wireless sensor networks. Moreover, in a sink-oriented network with event information generation in a sensor field and message forwarding to the sink hop-by-hop, the sensors near the sink have to transmit more packets than those far from the sink, and then a funneling effect occurs, which leads to packet congestion, collisions and losses, especially in UWASNs with long propagation delays. An improved CDMA-based MAC protocol, named path-oriented code assignment (POCA) CDMA MAC (POCA-CDMA-MAC), is proposed for UWASNs in this paper. In the proposed MAC protocol, both the round-robin method and CDMA technology are adopted to make the sink receive packets from multiple paths simultaneously. Since the number of paths for information gathering is much less than that of nodes, the length of the spreading code used in the POCA-CDMA-MAC protocol is shorter greatly than that used in the CDMA-based protocols with transmitter-oriented code assignment (TOCA) or receiver-oriented code assignment (ROCA). Simulation results show that the proposed POCA-CDMA-MAC protocol achieves a higher network throughput and a lower end-to-end delay compared to other CDMA-based MAC protocols. PMID:24193100
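The protocol above assigns a spreading code to each path so the sink can receive several paths at once. As background on the CDMA ingredient only (not the POCA-CDMA-MAC protocol logic), the sketch below spreads two simultaneous packets with orthogonal Walsh codes and recovers each one by correlation; the code length and noise level are invented.

```python
import numpy as np

# Two "paths" transmit simultaneously with orthogonal Walsh spreading codes; the sink
# separates them by correlating against each code. Illustrates plain CDMA only.
walsh4 = np.array([[1,  1,  1,  1],
                   [1, -1,  1, -1],
                   [1,  1, -1, -1],
                   [1, -1, -1,  1]])                     # rows are mutually orthogonal

code_a, code_b = walsh4[1], walsh4[2]                    # codes assigned to two paths
bits_a = np.array([1, 0, 1, 1])
bits_b = np.array([0, 0, 1, 0])
sym_a, sym_b = 1 - 2 * bits_a, 1 - 2 * bits_b            # map {0,1} -> {+1,-1}

# each path spreads its symbols; the channel adds them (plus a little noise)
tx = np.concatenate([s * code_a for s in sym_a]) + \
     np.concatenate([s * code_b for s in sym_b])
rx = tx + np.random.default_rng(3).normal(0, 0.3, tx.size)

def despread(rx, code):
    chips = rx.reshape(-1, code.size)
    return (chips @ code < 0).astype(int)                # correlate and slice back to bits

print("path A:", despread(rx, code_a).tolist(), " path B:", despread(rx, code_b).tolist())
```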
Progressive Dictionary Learning with Hierarchical Predictive Structure for Scalable Video Coding.
Dai, Wenrui; Shen, Yangmei; Xiong, Hongkai; Jiang, Xiaoqian; Zou, Junni; Taubman, David
2017-04-12
Dictionary learning has emerged as a promising alternative to the conventional hybrid coding framework. However, the rigid structure of sequential training and prediction degrades its performance in scalable video coding. This paper proposes a progressive dictionary learning framework with a hierarchical predictive structure for scalable video coding, especially in the low-bitrate region. For pyramidal layers, sparse representation based on a spatio-temporal dictionary is adopted to improve the coding efficiency of enhancement layers (ELs) with a guarantee of reconstruction performance. The overcomplete dictionary is trained to adaptively capture local structures along motion trajectories as well as exploit the correlations between neighboring layers of resolutions. Furthermore, progressive dictionary learning is developed to enable scalability in the temporal domain and restrict error propagation in a closed-loop predictor. Under the hierarchical predictive structure, online learning is leveraged to guarantee the training and prediction performance with an improved convergence rate. To accommodate the state-of-the-art scalable extension of H.264/AVC and the latest HEVC, standardized codec cores are utilized to encode the base and enhancement layers. Experimental results show that the proposed method outperforms the latest SHVC and HEVC simulcast over extensive test sequences with various resolutions.
Nanoelectronics from the bottom up.
Lu, Wei; Lieber, Charles M
2007-11-01
Electronics obtained through the bottom-up approach of molecular-level control of material composition and structure may lead to devices and fabrication strategies not possible with top-down methods. This review presents a brief summary of bottom-up and hybrid bottom-up/top-down strategies for nanoelectronics with an emphasis on memories based on the crossbar motif. First, we will discuss representative electromechanical and resistance-change memory devices based on carbon nanotube and core-shell nanowire structures, respectively. These device structures show robust switching, promising performance metrics and the potential for terabit-scale density. Second, we will review architectures being developed for circuit-level integration, hybrid crossbar/CMOS circuits and array-based systems, including experimental demonstrations of key concepts such as lithography-independent, chemically coded stochastic demultiplexers. Finally, bottom-up fabrication approaches, including the opportunity for assembly of three-dimensional, vertically integrated multifunctional circuits, will be critically discussed.
Brain cDNA clone for human cholinesterase
DOE Office of Scientific and Technical Information (OSTI.GOV)
McTiernan, C.; Adkins, S.; Chatonnet, A.
1987-10-01
A cDNA library from human basal ganglia was screened with oligonucleotide probes corresponding to portions of the amino acid sequence of human serum cholinesterase. Five overlapping clones, representing 2.4 kilobases, were isolated. The sequenced cDNA contained 207 base pairs of coding sequence 5' to the amino terminus of the mature protein in which there were four ATG translation start sites in the same reading frame as the protein. Only the ATG coding for Met-(-28) lay within a favorable consensus sequence for functional initiators. There were 1722 base pairs of coding sequence corresponding to the protein found circulating in human serum. The amino acid sequence deduced from the cDNA exactly matched the 574 amino acid sequence of human serum cholinesterase, as previously determined by Edman degradation. Therefore, our clones represented cholinesterase rather than acetylcholinesterase. It was concluded that the amino acid sequences of cholinesterase from two different tissues, human brain and human serum, were identical. Hybridization of genomic DNA blots suggested that a single gene, or very few genes, coded for cholinesterase.
Hybrid Parallel Contour Trees, Version 1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sewell, Christopher; Fasel, Patricia; Carr, Hamish
A common operation in scientific visualization is to compute and render a contour of a data set. Given a function of the form f : R^d -> R, a level set is defined as an inverse image f^-1(h) for an isovalue h, and a contour is a single connected component of a level set. The Reeb graph can then be defined to be the result of contracting each contour to a single point, and is well defined for Euclidean spaces or for general manifolds. For simple domains, the graph is guaranteed to be a tree, and is called the contour tree. Analysis can then be performed on the contour tree in order to identify isovalues of particular interest, based on various metrics, and render the corresponding contours, without having to know such isovalues a priori. This code is intended to be the first data-parallel algorithm for computing contour trees. Our implementation will use the portable data-parallel primitives provided by Nvidia’s Thrust library, allowing us to compile our same code for both GPUs and multi-core CPUs. Native OpenMP and purely serial versions of the code will likely also be included. It will also be extended to provide a hybrid data-parallel / distributed algorithm, allowing scaling beyond a single GPU or CPU.
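The definition used above, a contour as a single connected component of a level set, can be illustrated with a tiny serial example: the sketch below counts the connected components of a superlevel set on a 2D grid using union-find. It is not the data-parallel contour-tree algorithm or its Thrust implementation; the test field and isovalues are invented.

```python
import numpy as np

# Count contours (connected components of the superlevel set {f >= h}) with union-find.
def find(parent, i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]        # path halving
        i = parent[i]
    return i

def count_contours(f, h):
    ny, nx = f.shape
    mask = f >= h
    parent = list(range(ny * nx))
    for y in range(ny):
        for x in range(nx):
            if not mask[y, x]:
                continue
            for dy, dx in ((0, 1), (1, 0)):              # 4-connectivity
                yy, xx = y + dy, x + dx
                if yy < ny and xx < nx and mask[yy, xx]:
                    ra, rb = find(parent, y * nx + x), find(parent, yy * nx + xx)
                    parent[ra] = rb
    roots = {find(parent, i) for i in range(ny * nx) if mask.flat[i]}
    return len(roots)

# two Gaussian bumps: one merged contour at a low isovalue, two separate ones higher up
yy, xx = np.mgrid[0:64, 0:64]
f = np.exp(-((xx - 20)**2 + (yy - 32)**2) / 60.0) + np.exp(-((xx - 44)**2 + (yy - 32)**2) / 60.0)
for h in (0.15, 0.7):
    print(f"isovalue {h}: {count_contours(f, h)} contour(s)")
```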
Seemann, M D; Claussen, C D
2001-06-01
A hybrid rendering method which combines a color-coded surface rendering method and a volume rendering method is described, which enables virtual endoscopic examinations using different representation models. 14 patients with malignancies of the lung and mediastinum (n=11) and lung transplantation (n=3) underwent thin-section spiral computed tomography. The tracheobronchial system and anatomical and pathological features of the chest were segmented using an interactive threshold interval volume-growing segmentation algorithm and visualized with a color-coded surface rendering method. The structures of interest were then superimposed on a volume rendering of the other thoracic structures. For the virtual endoscopy of the tracheobronchial system, a shaded-surface model without color coding, a transparent color-coded shaded-surface model and a triangle-surface model were tested and compared. The hybrid rendering technique exploits the advantages of both rendering methods, provides an excellent overview of the tracheobronchial system and allows a clear depiction of the complex spatial relationships of anatomical and pathological features. Virtual bronchoscopy with a transparent color-coded shaded-surface model allows both a simultaneous visualization of an airway, an airway lesion and mediastinal structures and a quantitative assessment of the spatial relationship between these structures, thus improving confidence in the diagnosis of endotracheal and endobronchial diseases. Hybrid rendering and virtual endoscopy obviate the need for time-consuming detailed analysis and presentation of axial source images. Virtual bronchoscopy with a transparent color-coded shaded-surface model offers a practical alternative to fiberoptic bronchoscopy and is particularly promising for patients in whom fiberoptic bronchoscopy is not feasible, contraindicated or refused. Furthermore, it can be used as a complementary procedure to fiberoptic bronchoscopy in evaluating airway stenosis and guiding bronchoscopic biopsy, surgical intervention and palliative therapy, and is likely to be increasingly accepted as a screening method for people with suspected endobronchial malignancy and as a control examination in the aftercare of patients with malignant diseases.
Type Ia Supernova Explosions from Hybrid Carbon-Oxygen-Neon White Dwarf Progenitors
NASA Astrophysics Data System (ADS)
Willcox, Donald E.; Townsley, Dean M.; Calder, Alan C.; Denissenkov, Pavel A.; Herwig, Falk
2016-11-01
Motivated by recent results in stellar evolution that predict the existence of hybrid white dwarf (WD) stars with a C-O core inside an O-Ne shell, we simulate thermonuclear (Type Ia) supernovae from these hybrid progenitors. We use the FLASH code to perform multidimensional simulations in the deflagration-to-detonation transition (DDT) explosion paradigm. Our hybrid progenitor models were produced with the MESA stellar evolution code and include the effects of the Urca process, and we map the progenitor model to the FLASH grid. We performed a suite of DDT simulations over a range of ignition conditions consistent with the progenitor’s thermal and convective structure assuming multiple ignition points. To compare the results from these hybrid WD stars to previous results from C-O WDs, we construct a set of C-O WD models with similar properties and similarly simulate a suite of explosions. We find that despite significant variability within each suite, trends distinguishing the explosions are apparent in their ⁵⁶Ni yields and the kinetic properties of the ejecta. We compare our results with other recent work that studies explosions from these hybrid progenitors.
On the effect of the neutral Hydrogen density on the 26 day variations of galactic cosmic rays
NASA Astrophysics Data System (ADS)
Engelbrecht, Nicholas; Burger, Renier; Ferreira, Stefan; Hitge, Mariette
Preliminary results of a 3D, steady-state ab-initio cosmic ray modulation code are presented. This modulation code utilizes analytical expressions for the parallel and perpendicular mean free paths based on the work of Teufel and Schlickeiser (2003) and Shalchi et al. (2004), incorporating Breech et al. (2008)'s model for the 2D variance, correlation scale, and normalized cross helicity. The effects of such a model for basic turbulence quantities, coupled with a 3D model for the neutral hydrogen density, on the 26-day variations of cosmic rays are investigated, utilizing a Schwadron-Parker hybrid heliospheric magnetic field.
Xu, Guoai; Li, Qi; Guo, Yanhui; Zhang, Miao
2017-01-01
Authorship attribution is the task of identifying the most likely author of a given sample among a set of known candidate authors. It can be applied not only to discover the original author of plain text, such as novels, blogs, emails, and posts, but also to identify the programmers of source code. Authorship attribution of source code is required in diverse applications, ranging from malicious code tracking to resolving authorship disputes and detecting software plagiarism. This paper aims to propose a new method to identify the programmer of Java source code samples with higher accuracy. To this end, it introduces a back-propagation (BP) neural network based on particle swarm optimization (PSO) into authorship attribution of source code. The method begins by computing a set of defined feature metrics, including lexical and layout metrics and structure and syntax metrics, 19 dimensions in total. These metrics are then input to the neural network for supervised learning, whose weights are produced by the hybrid PSO-BP algorithm. The effectiveness of the proposed method is evaluated on a collected dataset with 3,022 Java files belonging to 40 authors. Experimental results show that the proposed method achieves 91.060% accuracy. A comparison with previous work on authorship attribution of Java source code shows that the proposed method outperforms the others overall, with acceptable overhead. PMID:29095934
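As a rough sketch of the idea of letting a particle swarm search for neural-network weights, the snippet below optimizes a tiny one-hidden-layer classifier on synthetic 19-dimensional "style metric" vectors. All feature values, labels, network sizes and PSO constants are made-up placeholders, not those of the paper, and only the PSO half of the hybrid PSO-BP scheme is shown (no back-propagation refinement).

```python
# Illustrative sketch only: global-best PSO searching the flattened weights of a
# small feed-forward classifier on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n_features, n_hidden, n_authors = 19, 8, 4       # 19-dim metrics as in the abstract
X = rng.normal(size=(200, n_features))
y = rng.integers(0, n_authors, size=200)         # synthetic author labels

n_w = n_features * n_hidden + n_hidden * n_authors   # flattened weight vector length

def accuracy(w):
    W1 = w[:n_features * n_hidden].reshape(n_features, n_hidden)
    W2 = w[n_features * n_hidden:].reshape(n_hidden, n_authors)
    pred = (np.tanh(X @ W1) @ W2).argmax(axis=1)
    return (pred == y).mean()

# Standard global-best PSO loop (inertia and acceleration constants are arbitrary).
n_particles, iters, w_in, c1, c2 = 30, 50, 0.7, 1.5, 1.5
pos = rng.normal(size=(n_particles, n_w))
vel = np.zeros_like(pos)
pbest, pbest_fit = pos.copy(), np.array([accuracy(p) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = w_in * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    fit = np.array([accuracy(p) for p in pos])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmax()].copy()

print("training accuracy of best particle:", accuracy(gbest))
```

In the hybrid scheme described above, the best particle found this way would typically serve as the starting point for conventional BP training.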
Analysis of SMA Hybrid Composite Structures using Commercial Codes
NASA Technical Reports Server (NTRS)
Turner, Travis L.; Patel, Hemant D.
2004-01-01
A thermomechanical model for shape memory alloy (SMA) actuators and SMA hybrid composite (SMAHC) structures has been recently implemented in the commercial finite element codes MSC.Nastran and ABAQUS. The model may be easily implemented in any code that has the capability for analysis of laminated composite structures with temperature dependent material properties. The model is also relatively easy to use and requires input of only fundamental engineering properties. A brief description of the model is presented, followed by discussion of implementation and usage in the commercial codes. Results are presented from static and dynamic analysis of SMAHC beams of two types: a beam clamped at each end and a cantilevered beam. Nonlinear static (post-buckling) and random response analyses are demonstrated for the first specimen. Static deflection (shape) control is demonstrated for the cantilevered beam. Approaches for modeling SMAHC material systems with embedded SMA in ribbon and small round wire product forms are demonstrated and compared. The results from the commercial codes are compared to those from a research code as validation of the commercial implementations; excellent correlation is achieved in all cases.
NASA Astrophysics Data System (ADS)
Bach, Matthias; Lindenstruth, Volker; Philipsen, Owe; Pinke, Christopher
2013-09-01
We present an OpenCL-based Lattice QCD application using a heatbath algorithm for the pure gauge case and Wilson fermions in the twisted mass formulation. The implementation is platform independent and can be used on AMD or NVIDIA GPUs, as well as on classical CPUs. On the AMD Radeon HD 5870 our double precision dslash implementation performs at 60 GFLOPS over a wide range of lattice sizes. The hybrid Monte Carlo presented reaches a speedup of four over the reference code running on a server CPU.
Zhang, Wei-Zhuo; Xiong, Xue-Mei; Zhang, Xiu-Jie; Wan, Shi-Ming; Guan, Ning-Nan; Nie, Chun-Hong; Zhao, Bo-Wen; Hsiao, Chung-Der; Wang, Wei-Min; Gao, Ze-Xia
2016-01-01
Hybridization plays an important role in fish breeding. Bream fishes contribute substantially to aquaculture in China owing to their economically valuable characteristics, and the present study included five bream species, Megalobrama amblycephala, Megalobrama skolkovii, Megalobrama pellegrini, Megalobrama terminalis and Parabramis pekinensis. As maternal inheritance of the mitochondrial genome (mitogenome) involves species-specific regulation, we aimed to investigate how the inheritance of the mitogenome is affected by hybridization in these fish species. The complete mitogenomes of 7 hybrid groups of bream species are reported here for the first time, and a comparative analysis of 17 mitogenomes was conducted, including representatives of these 5 bream species, 6 first generation hybrids and 6 second generation hybrids. The results showed that these 17 mitogenomes shared the same gene arrangement, and had similar gene size and base composition. According to the phylogenetic analyses, all mitogenomes of the hybrids were consistent with maternal inheritance. However, a certain number of variable sites were detected in all F1 hybrid groups compared to their female parents, especially in the group of M. terminalis (♀) × M. amblycephala (♂) (MT×MA), with a total of 86 variable sites between MT×MA and its female parent. Among the mitogenome genes, the protein-coding gene nd5 displayed the highest variability. The number of variable sites was found to be related to the phylogenetic relationship of the parents: the more closely related the parents, the fewer variable sites their hybrids had. The second-generation hybrids showed less mitogenome variation than the first-generation hybrids. The ratio of non-synonymous to synonymous substitution rates (dN/dS) was calculated between each hybrid and its female parent, and the results indicated that most protein-coding genes (PCGs) were under negative selection. PMID:27391325
NASA Technical Reports Server (NTRS)
Ingels, F. M.; Mo, C. D.
1978-01-01
An empirical study of the performance of Viterbi decoders in bursty channels was carried out and an improved algebraic decoder for nonsystematic codes was developed. The hybrid algorithm was simulated for the (2,1), k = 7 code on a computer using 20 channels having various error statistics, ranging from pure random error to pure bursty channels. The hybrid system outperformed both the algebraic and the Viterbi decoders in every case, except for the 1% random-error channel, where the Viterbi decoder produced one fewer bit error.
Lathe, R
1985-05-05
Synthetic probes deduced from amino acid sequence data are widely used to detect cognate coding sequences in libraries of cloned DNA segments. The redundancy of the genetic code dictates that a choice must be made between (1) a mixture of probes reflecting all codon combinations, and (2) a single longer "optimal" probe. The second strategy is examined in detail. The frequency of sequences matching a given probe by chance alone can be determined and also the frequency of sequences closely resembling the probe and contributing to the hybridization background. Gene banks cannot be treated as random associations of the four nucleotides, and probe sequences deduced from amino acid sequence data occur more often than predicted by chance alone. Probe lengths must be increased to confer the necessary specificity. Examination of hybrids formed between unique homologous probes and their cognate targets reveals that short stretches of perfect homology occurring by chance make a significant contribution to the hybridization background. Statistical methods for improving homology are examined, taking human coding sequences as an example, and considerations of codon utilization and dinucleotide frequencies yield an overall homology of greater than 82%. Recommendations for probe design and hybridization are presented, and the choice between using multiple probes reflecting all codon possibilities and a unique optimal probe is discussed.
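A back-of-the-envelope calculation illustrates why probe length must grow with library size. Under the idealized assumption that a gene bank is a random, equiprobable string of the four nucleotides (the abstract stresses that real banks are not random, so this only bounds the background from below), the expected number of exact chance matches to a probe of length L is:

```latex
% Idealized random-sequence model; N = nucleotides searched (factor 2 for both strands).
E[\text{chance matches}] \;\approx\; 2N\left(\tfrac{1}{4}\right)^{L},
\qquad
E \le 1 \;\Longrightarrow\; L \;\ge\; \log_{4}(2N).
```

For example, with N = 3×10⁹ nucleotides this gives L ≥ log₄(6×10⁹) ≈ 16.3, i.e. a probe of at least about 17 bases, before any allowance for the near-matches and non-random base composition discussed above.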
NASA Astrophysics Data System (ADS)
Huang, Shaowei; Baba, Ken-Ichi; Murata, Masayuki; Kitayama, Ken-Ichi
2006-12-01
In traditional lambda-based multigranularity optical networks, a lambda is always treated as the basic routing unit, resulting in low wavelength utilization. On the basis of optical code division multiplexing (OCDM) technology, a novel OCDM-based multigranularity optical cross-connect (MG-OXC) is proposed. Compared with the traditional lambda-based MG-OXC, its switching capability has been extended to support fiber switching, waveband switching, lambda switching, and OCDM switching. In a network composed of OCDM-based MG-OXCs, a single wavelength can be shared by distinct label switched paths (LSPs) called OCDM-LSPs, and OCDM-LSP switching can be implemented in the optical domain. To improve the network flexibility for OCDM-LSP provisioning, two kinds of switches enabling hybrid optical code (OC)-wavelength conversion are designed. Simulation results indicate that a blocking probability reduction of two orders of magnitude can be obtained by deploying only five OCs to a single wavelength. Furthermore, compared with time-division-multiplexing LSP (TDM-LSP), owing to the asynchronous accessibility and the OC conversion, OCDM-LSPs have been shown to permit a simpler switch architecture and achieve better blocking performance than TDM-LSPs.
Esipov, Roman S; Stepanenko, Vasily N; Gurevich, Alexandr I; Chupova, Larisa A; Miroshnikov, Anatoly I
2006-01-01
Chemico-enzymatic synthesis and cloning in Escherichia coli of an artificial gene coding for human glucagon was performed. A recombinant plasmid containing the hybrid glucagon gene and the intein Ssp DnaB from Synechocystis sp. was designed. Expression of the obtained hybrid gene in E. coli, properties of the resulting hybrid protein, and the conditions of its autocatalytic cleavage leading to glucagon formation were studied.
Status and future plans for open source QuickPIC
NASA Astrophysics Data System (ADS)
An, Weiming; Decyk, Viktor; Mori, Warren
2017-10-01
QuickPIC is a three dimensional (3D) quasi-static particle-in-cell (PIC) code developed based on the UPIC framework. It can be used for efficiently modeling plasma-based accelerator (PBA) problems. With the quasi-static approximation, QuickPIC can use different time scales for calculating the beam (or laser) evolution and the plasma response, and a 3D plasma wake field can be simulated using a two-dimensional (2D) PIC code where the time variable is ξ = ct - z and z is the beam propagation direction. QuickPIC can be a thousand times faster than a conventional PIC code when simulating a PBA. It uses an MPI/OpenMP hybrid parallel algorithm, which can be run on either a laptop or the largest supercomputer. The open source QuickPIC is an object-oriented program with high level classes written in Fortran 2003. It can be found at https://github.com/UCLA-Plasma-Simulation-Group/QuickPIC-OpenSource.git
Antenna pattern control using impedance surfaces
NASA Technical Reports Server (NTRS)
Balanis, Constantine A.; Liu, Kefeng
1992-01-01
During this research period, we have effectively transferred existing computer codes from the CRAY supercomputer to workstation-based systems. The workstation-based version of our code preserved the accuracy of the numerical computations while giving a much better turn-around time than the CRAY supercomputer. Such a task relieved us of the heavy dependence on the supercomputer account budget and made the codes developed in this research project more feasible for applications. The analysis of pyramidal horns with impedance surfaces was our major focus during this research period. Three different modeling algorithms for analyzing lossy impedance surfaces were investigated and compared with measured data. Through this investigation, we discovered that a hybrid Fourier transform technique, which uses the eigenmodes in the stepped waveguide section and the Fourier-transformed field distributions across the stepped discontinuities for lossy impedance coatings, gives better accuracy in analyzing lossy coatings. After a further refinement of the present technique, we will perform an accurate radiation pattern synthesis in the coming reporting period.
Wilkinson, Karl A; Hine, Nicholas D M; Skylaris, Chris-Kriton
2014-11-11
We present a hybrid MPI-OpenMP implementation of Linear-Scaling Density Functional Theory within the ONETEP code. We illustrate its performance on a range of high performance computing (HPC) platforms comprising shared-memory nodes with fast interconnect. Our work has focused on applying OpenMP parallelism to the routines which dominate the computational load, attempting where possible to parallelize different loops from those already parallelized within MPI. This includes 3D FFT box operations, sparse matrix algebra operations, calculation of integrals, and Ewald summation. While the underlying numerical methods are unchanged, these developments represent significant changes to the algorithms used within ONETEP to distribute the workload across CPU cores. The new hybrid code exhibits much-improved strong scaling relative to the MPI-only code and permits calculations with a much higher ratio of cores to atoms. These developments result in a significantly shorter time to solution than was possible using MPI alone and facilitate the application of the ONETEP code to systems larger than previously feasible. We illustrate this with benchmark calculations from an amyloid fibril trimer containing 41,907 atoms. We use the code to study the mechanism of delamination of cellulose nanofibrils when undergoing sonication, a process which is controlled by a large number of interactions that collectively determine the structural properties of the fibrils. Many energy evaluations were needed for these simulations, and as these systems comprise up to 21,276 atoms this would not have been feasible without the developments described here.
Optimizing legacy molecular dynamics software with directive-based offload
NASA Astrophysics Data System (ADS)
Michael Brown, W.; Carrillo, Jan-Michael Y.; Gavhane, Nitin; Thakkar, Foram M.; Plimpton, Steven J.
2015-10-01
Directive-based programming models are one solution for exploiting many-core coprocessors to increase simulation rates in molecular dynamics. They offer the potential to reduce code complexity with offload models that can selectively target computations to run on the CPU, the coprocessor, or both. In this paper, we describe modifications to the LAMMPS molecular dynamics code to enable concurrent calculations on a CPU and coprocessor. We demonstrate that standard molecular dynamics algorithms can run efficiently on both the CPU and an x86-based coprocessor using the same subroutines. As a consequence, we demonstrate that code optimizations for the coprocessor also result in speedups on the CPU; in extreme cases up to 4.7X. We provide results for LAMMPS benchmarks and for production molecular dynamics simulations using the Stampede hybrid supercomputer with both Intel® Xeon Phi™ coprocessors and NVIDIA GPUs. The optimizations presented have increased simulation rates by over 2X for organic molecules and over 7X for liquid crystals on Stampede. The optimizations are available as part of the "Intel package" supplied with LAMMPS.
Computing border bases using mutant strategies
NASA Astrophysics Data System (ADS)
Ullah, E.; Abbas Khan, S.
2014-01-01
Border bases, a generalization of Gröbner bases, have actively been addressed during recent years due to their applicability to industrial problems. In cryptography and coding theory a useful application of border bases is to solve zero-dimensional systems of polynomial equations over finite fields, which motivates us to develop optimizations of the algorithms that compute border bases. In 2006, Kehrein and Kreuzer formulated the Border Basis Algorithm (BBA), an algorithm which allows the computation of border bases that relate to a degree compatible term ordering. In 2007, J. Ding et al. introduced mutant strategies based on finding special lower degree polynomials in the ideal. The mutant strategies aim to distinguish special lower degree polynomials (mutants) from the other polynomials and give them priority in the process of generating new polynomials in the ideal. In this paper we develop hybrid algorithms that use the ideas of J. Ding et al. involving the concept of mutants to optimize the Border Basis Algorithm for solving systems of polynomial equations over finite fields. In particular, we recall a version of the Border Basis Algorithm known as the Improved Border Basis Algorithm and propose two hybrid algorithms, called MBBA and IMBBA. The new mutant variants provide both space and time efficiency. The efficiency of these newly developed hybrid algorithms is discussed using standard cryptographic examples.
NASA Astrophysics Data System (ADS)
Qiu, Zhaoyang; Wang, Pei; Zhu, Jun; Tang, Bin
2016-12-01
The Nyquist folding receiver (NYFR) is a novel ultra-wideband receiver architecture that can realize wideband reception with a small amount of hardware. The linear frequency modulated/binary phase shift keying (LFM/BPSK) hybrid modulated signal is a novel kind of low probability of intercept signal with wide bandwidth. The NYFR is an effective architecture for intercepting the LFM/BPSK signal, and the intercepted signal acquires an additional local oscillator modulation. A parameter estimation algorithm for the NYFR output signal is proposed. According to the NYFR prior information, the chirp singular value ratio spectrum is proposed to estimate the chirp rate. Then, based on the self-characteristics of the output, a matching component function is designed to estimate the Nyquist zone (NZ) index. Finally, a matching code and a subspace method are employed to estimate the phase change points and the code length. Compared with existing methods, the proposed algorithm performs better and does not require a multi-channel structure, which keeps the computational complexity of the NZ index estimation small. The simulation results demonstrate the efficacy of the proposed algorithm.
Xia, Yidong; Lou, Jialin; Luo, Hong; ...
2015-02-09
Here, an OpenACC directive-based graphics processing unit (GPU) parallel scheme is presented for solving the compressible Navier–Stokes equations on 3D hybrid unstructured grids with a third-order reconstructed discontinuous Galerkin method. The developed scheme requires the minimum code intrusion and algorithm alteration for upgrading a legacy solver with the GPU computing capability at very little extra effort in programming, which leads to a unified and portable code development strategy. A face coloring algorithm is adopted to eliminate the memory contention because of the threading of internal and boundary face integrals. A number of flow problems are presented to verify the implementation of the developed scheme. Timing measurements were obtained by running the resulting GPU code on one Nvidia Tesla K20c GPU card (Nvidia Corporation, Santa Clara, CA, USA) and compared with those obtained by running the equivalent Message Passing Interface (MPI) parallel CPU code on a compute node (consisting of two AMD Opteron 6128 eight-core CPUs (Advanced Micro Devices, Inc., Sunnyvale, CA, USA)). Speedup factors of up to 24× and 1.6× for the GPU code were achieved with respect to one and 16 CPU cores, respectively. The numerical results indicate that this OpenACC-based parallel scheme is an effective and extensible approach to port unstructured high-order CFD solvers to GPU computing.
NASA Astrophysics Data System (ADS)
Yahampath, Pradeepa
2017-12-01
Consider communicating a correlated Gaussian source over a Rayleigh fading channel with no knowledge of the channel signal-to-noise ratio (CSNR) at the transmitter. In this case, a digital system cannot be optimal for a range of CSNRs. Analog transmission however is optimal at all CSNRs, if the source and channel are memoryless and bandwidth matched. This paper presents new hybrid digital-analog (HDA) systems for sources with memory and channels with bandwidth expansion, which outperform both digital-only and analog-only systems over a wide range of CSNRs. The digital part is either a predictive quantizer or a transform code, used to achieve a coding gain. The analog part uses linear encoding to transmit the quantization error, which improves the performance under CSNR variations. The hybrid encoder is optimized to achieve the minimum AMMSE (average minimum mean square error) over the CSNR distribution. To this end, analytical expressions are derived for the AMMSE of asymptotically optimal systems. It is shown that the outage CSNR of the channel code and the analog-digital power allocation must be jointly optimized to achieve the minimum AMMSE. In the case of HDA predictive quantization, a simple algorithm is presented to solve the optimization problem. Experimental results are presented for both Gauss-Markov sources and speech signals.
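The optimization target described above can be written compactly. The notation below is ours, not the paper's: D_min(γ) denotes the end-to-end distortion of the hybrid scheme at CSNR γ, p(γ) is the Rayleigh-fading CSNR density, γ₀ the outage CSNR of the channel code, and (P_d, P_a) the digital/analog power split.

```latex
% Hedged sketch of the objective, in our own notation:
\mathrm{AMMSE}
  \;=\; \int_{0}^{\infty} D_{\min}(\gamma)\, p(\gamma)\, \mathrm{d}\gamma ,
\qquad
\min_{\gamma_{0},\, P_{d},\, P_{a}} \ \mathrm{AMMSE}
\quad \text{subject to} \quad P_{d} + P_{a} \le P .
```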
A tactile-output paging communication system for the deaf-blind
NASA Technical Reports Server (NTRS)
Baer, J. A.
1979-01-01
A radio frequency paging communication system that has coded vibrotactile outputs suitable for use by deaf-blind people was developed. In concept, the system consists of a base station transmitting and receiving unit and many on-body transmitting and receiving units. The completed system has seven operating modes: fire alarm; time signal; repeated single character Morse code; manual Morse code; emergency aid request; operational status test; and message acknowledge. The on-body units can be addressed in three ways: all units; a group of units; or an individual unit. All the functions developed were integrated into a single package that can be worn on the user's wrist. The control portion of the on-body unit is implemented by a microcomputer. The microcomputer is packaged in a custom-designed hybrid circuit to reduce its physical size.
NASA Astrophysics Data System (ADS)
Kandouci, Chahinaz; Djebbari, Ali
2018-04-01
A new family of two-dimensional optical hybrid codes, which employs zero cross-correlation (ZCC) codes constructed from balanced incomplete block designs (BIBD) as both the time-spreading and the wavelength-hopping patterns, is used in this paper. The obtained codes have off-peak autocorrelation and cross-correlation values equal to zero and unity, respectively. The work in this paper is a computer experiment performed with the Optisystem 9.0 software package as a simulator to determine the performance limitations of the wavelength hopping/time spreading (WH/TS) OCDMA system. The system parameters considered in this work were the optical fiber length (transmission distance), the bit rate, the chip spacing and the transmitted power. This paper shows over what range of these parameters the system maintains sufficient performance (BER ≤ 10⁻⁹, Q ≥ 6).
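The two acceptance criteria quoted above are linked by the standard Gaussian-noise approximation used in optical receiver analysis, which is why BER ≤ 10⁻⁹ and Q ≥ 6 are normally stated together:

```latex
% Standard Gaussian-approximation relation between Q factor and bit error rate:
\mathrm{BER} \;\approx\; \tfrac{1}{2}\,\operatorname{erfc}\!\left(\frac{Q}{\sqrt{2}}\right),
\qquad
Q = 6 \;\Rightarrow\; \mathrm{BER} \approx 1\times 10^{-9}.
```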
Hybrid Cloud Computing Environment for EarthCube and Geoscience Community
NASA Astrophysics Data System (ADS)
Yang, C. P.; Qin, H.
2016-12-01
The NSF EarthCube Integration and Test Environment (ECITE) has built a hybrid cloud computing environment that provides cloud resources from private cloud environments using the cloud system software OpenStack and Eucalyptus, and also manages a public cloud, Amazon Web Services, allowing resource synchronizing and bursting between the private and public clouds. On the ECITE hybrid cloud platform, EarthCube and the geoscience community can deploy and manage applications by using base virtual machine images or customized virtual machines, analyze big datasets by using virtual clusters, and monitor virtual resource usage on the cloud in real time. Currently, a number of EarthCube projects have deployed or started migrating their projects to this platform, such as CHORDS, BCube, CINERGI, OntoSoft, and some other EarthCube building blocks. To accomplish the deployment or migration, the administrator of the ECITE hybrid cloud platform prepares the specific needs (e.g., images, port numbers, usable cloud capacity) of each project in advance based on communications between ECITE and the participant projects, and then the scientists or IT technicians in those projects launch one or multiple virtual machines, access the virtual machine(s) to set up the computing environment if need be, and migrate their code, documents, or data without having to deal with the heterogeneity in structure and operations among different cloud platforms.
Boltzmann Transport in Hybrid PIC HET Modeling
2015-07-01
Report documentation fragment (dates covered: July 2015; responsible person: Justin Koo). The recoverable portion of the abstract describes work to reproduce experimentally observed mobility trends derived from HPHall, a workhorse hybrid-PIC HET simulation code.
Simulation of Hypervelocity Impact on Aluminum-Nextel-Kevlar Orbital Debris Shields
NASA Technical Reports Server (NTRS)
Fahrenthold, Eric P.
2000-01-01
An improved hybrid particle-finite element method has been developed for hypervelocity impact simulation. The method combines the general contact-impact capabilities of particle codes with the true Lagrangian kinematics of large strain finite element formulations. Unlike some alternative schemes which couple Lagrangian finite element models with smooth particle hydrodynamics, the present formulation makes no use of slidelines or penalty forces. The method has been implemented in a parallel, three dimensional computer code. Simulations of three dimensional orbital debris impact problems using this parallel hybrid particle-finite element code show good agreement with experiment and good speedup in parallel computation. The simulations included single and multi-plate shields as well as aluminum and composite shielding materials, at an impact velocity of eleven kilometers per second.
A Hybrid Approach To Tandem Cylinder Noise
NASA Technical Reports Server (NTRS)
Lockard, David P.
2004-01-01
Aeolian tone generation from tandem cylinders is predicted using a hybrid approach. A standard computational fluid dynamics (CFD) code is used to compute the unsteady flow around the cylinders, and the acoustics are calculated using the acoustic analogy. The CFD code is nominally second order in space and time and includes several turbulence models, but the SST k-omega model is used for most of the calculations. Significant variation is observed between laminar and turbulent cases, and with changes in the turbulence model. A two-dimensional implementation of the Ffowcs Williams-Hawkings (FW-H) equation is used to predict the far-field noise.
Fast methods to numerically integrate the Reynolds equation for gas fluid films
NASA Technical Reports Server (NTRS)
Dimofte, Florin
1992-01-01
The alternating direction implicit (ADI) method is adopted, modified, and applied to the Reynolds equation for thin, gas fluid films. An efficient code is developed to predict both the steady-state and dynamic performance of an aerodynamic journal bearing. An alternative approach is shown for hybrid journal gas bearings by using Liebmann's iterative solution (LIS) for elliptic partial differential equations. The results are compared with known design criteria from experimental data. The developed methods show good accuracy and very short computer running time in comparison with methods based on matrix inversion. The computer codes need a small amount of memory and can be run on either personal computers or on mainframe systems.
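To make the ADI idea concrete, the sketch below applies the classic Peaceman-Rachford ADI scheme to a generic 2-D diffusion problem on the unit square: each time step is split into two half-steps, each implicit in only one coordinate direction, so that only tridiagonal systems need to be solved. This is a generic illustration only, not the Reynolds-equation bearing code; it assumes NumPy and SciPy, and the grid size and step counts are arbitrary.

```python
# Generic Peaceman-Rachford ADI sketch for u_t = u_xx + u_yy with zero Dirichlet
# boundaries. Illustrative of the ADI splitting only.
import numpy as np
from scipy.linalg import solve_banded

n = 64                       # interior points per direction
h = 1.0 / (n + 1)
dt = 0.25 * h                # time step (ADI is unconditionally stable)
r = dt / (2.0 * h * h)       # each half-step uses dt/2

x = np.linspace(h, 1 - h, n)
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.sin(np.pi * X) * np.sin(np.pi * Y)   # initial condition

# Banded (tridiagonal) matrix for the implicit operator (I - r*delta^2).
ab = np.zeros((3, n))
ab[0, 1:] = -r               # super-diagonal
ab[1, :] = 1.0 + 2.0 * r     # diagonal
ab[2, :-1] = -r              # sub-diagonal

def explicit(v):             # apply (I + r*delta^2) along axis 0, zero boundaries
    out = (1.0 - 2.0 * r) * v
    out[1:, :] += r * v[:-1, :]
    out[:-1, :] += r * v[1:, :]
    return out

for _ in range(200):
    rhs = explicit(u.T).T                      # explicit in y
    u_star = solve_banded((1, 1), ab, rhs)     # implicit in x (solve each column)
    rhs = explicit(u_star)                     # explicit in x
    u = solve_banded((1, 1), ab, rhs.T).T      # implicit in y

print("max |u| after 200 steps:", float(np.abs(u).max()))   # decays toward zero
```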
Design of hat-stiffened composite panels loaded in axial compression
NASA Astrophysics Data System (ADS)
Paul, T. K.; Sinha, P. K.
An integrated step-by-step analysis procedure for the design of axially compressed stiffened composite panels is outlined. The analysis makes use of the effective width concept. A computer code, BUSTCOP, is developed incorporating various aspects of buckling such as skin buckling, stiffener crippling and column buckling. Other salient features of the computer code include capabilities for generation of data based on micromechanics theories and hygrothermal analysis, and for prediction of strength failure. Parametric studies carried out on a hat-stiffened structural element indicate that, for all practical purposes, composite panels exhibit higher structural efficiency. Some hybrid laminates with outer layers made of aluminum alloy also show great promise for flight vehicle structural applications.
Adaptive coding of MSS imagery. [Multi Spectral band Scanners
NASA Technical Reports Server (NTRS)
Habibi, A.; Samulon, A. S.; Fultz, G. L.; Lumb, D.
1977-01-01
A number of adaptive data compression techniques are considered for reducing the bandwidth of multispectral data. They include adaptive transform coding, adaptive DPCM, adaptive cluster coding, and a hybrid method. The techniques are simulated and their performance in compressing the bandwidth of Landsat multispectral images is evaluated and compared using signal-to-noise ratio and classification consistency as fidelity criteria.
Second order tensor finite element
NASA Technical Reports Server (NTRS)
Oden, J. Tinsley; Fly, J.; Berry, C.; Tworzydlo, W.; Vadaketh, S.; Bass, J.
1990-01-01
The results of a research and software development effort are presented for the finite element modeling of the static and dynamic behavior of anisotropic materials, with emphasis on single crystal alloys. Various versions of two dimensional and three dimensional hybrid finite elements were implemented and compared with displacement-based elements. Both static and dynamic cases are considered. The hybrid elements developed in the project were incorporated into the SPAR finite element code. In an extension of the first phase of the project, optimization of experimental tests for anisotropic materials was addressed. In particular, the problems of calculating material properties from tensile tests and of calculating stresses from strain measurements were considered. For both cases, numerical procedures and software for the optimization of strain gauge and material axes orientation were developed.
Analysis of SMA Hybrid Composite Structures in MSC.Nastran and ABAQUS
NASA Technical Reports Server (NTRS)
Turner, Travis L.; Patel, Hemant D.
2005-01-01
A thermoelastic constitutive model for shape memory alloy (SMA) actuators and SMA hybrid composite (SMAHC) structures was recently implemented in the commercial finite element codes MSC.Nastran and ABAQUS. The model may be easily implemented in any code that has the capability for analysis of laminated composite structures with temperature dependent material properties. The model is also relatively easy to use and requires input of only fundamental engineering properties. A brief description of the model is presented, followed by discussion of implementation and usage in the commercial codes. Results are presented from static and dynamic analysis of SMAHC beams of two types: a beam clamped at each end and a cantilever beam. Nonlinear static (post-buckling) and random response analyses are demonstrated for the first specimen. Static deflection (shape) control is demonstrated for the cantilever beam. Approaches for modeling SMAHC material systems with embedded SMA in ribbon and small round wire product forms are demonstrated and compared. The results from the commercial codes are compared to those from a research code as validation of the commercial implementations; excellent correlation is achieved in all cases.
Xu, Yun; Muhamadali, Howbeer; Sayqal, Ali; Dixon, Neil; Goodacre, Royston
2016-10-28
Partial least squares (PLS) is one of the most commonly used supervised modelling approaches for analysing multivariate metabolomics data. PLS is typically employed as either a regression model (PLS-R) or a classification model (PLS-DA). However, in metabolomics studies it is common to investigate multiple, potentially interacting, factors simultaneously following a specific experimental design. Such data often cannot be considered as a "pure" regression or a classification problem. Nevertheless, these data have often still been treated as a regression or classification problem and this could lead to ambiguous results. In this study, we investigated the feasibility of designing a hybrid target matrix Y that better reflects the experimental design than simple regression or binary class membership coding commonly used in PLS modelling. The new design of Y coding was based on the same principle used by structural modelling in machine learning techniques. Two real metabolomics datasets were used as examples to illustrate how the new Y coding can improve the interpretability of the PLS model compared to classic regression/classification coding.
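The contrast between plain class-membership coding and a design-reflecting target matrix can be sketched in a few lines. The factors, level names and replicate counts below are hypothetical placeholders (not the paper's datasets), and the block layout is only a generic illustration of the structural-modelling idea, not the exact Y construction used in the study.

```python
# Illustrative sketch: dummy class coding vs. a Y matrix whose column blocks
# reflect a hypothetical two-factor design (factor A, factor B, A x B interaction).
import numpy as np
import pandas as pd

# Hypothetical design: 3 doses x 2 time points, 4 replicates each (24 samples).
design = pd.DataFrame(
    [(d, t) for d in ("ctrl", "low", "high") for t in (1, 2) for _ in range(4)],
    columns=["dose", "time"],
)

# (a) Classic PLS-DA coding: one dummy column per dose-x-time "class".
Y_classes = pd.get_dummies(design["dose"] + "_t" + design["time"].astype(str))

# (b) Structured coding: separate blocks for each factor and their interaction,
#     so PLS components can be interpreted factor by factor.
A = pd.get_dummies(design["dose"], prefix="dose").to_numpy(float)
B = pd.get_dummies(design["time"], prefix="time").to_numpy(float)
AB = np.einsum("ij,ik->ijk", A, B).reshape(len(design), -1)   # interaction block
Y_structured = np.hstack([A, B, AB])

print("class-membership Y:", Y_classes.shape)     # (24, 6)
print("structured Y:      ", Y_structured.shape)  # (24, 11)
```

Fitting PLS against Y_structured rather than Y_classes lets individual components be attributed to one factor or to the interaction, which is the kind of interpretability gain the abstract refers to.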
European Union RACE program contributions to digital audiovisual communications and services
NASA Astrophysics Data System (ADS)
de Albuquerque, Augusto; van Noorden, Leon; Badiqué, Eric
1995-02-01
The European Union RACE (R&D in advanced communications technologies in Europe) and the future ACTS (advanced communications technologies and services) programs have been contributing and continue to contribute to world-wide developments in audio-visual services. The paper focuses on research progress in: (1) Image data compression. Several methods of image analysis leading to the use of encoders based on improved hybrid DCT-DPCM (MPEG or not), object oriented, hybrid region/waveform or knowledge-based coding methods are discussed. (2) Program production in the aspects of 3D imaging, data acquisition, virtual scene construction, pre-processing and sequence generation. (3) Interoperability and multimedia access systems. The diversity of material available and the introduction of interactive or near- interactive audio-visual services led to the development of prestandards for video-on-demand (VoD) and interworking of multimedia services storage systems and customer premises equipment.
NASA Technical Reports Server (NTRS)
Dash, S. M.; Sinha, N.; Wolf, D. E.; York, B. J.
1986-01-01
An overview of computational models developed for the complete, design-oriented analysis of a scramjet propulsion system is provided. The modular approach taken involves the use of different PNS models to analyze the individual propulsion system components. The external compression and internal inlet flowfields are analyzed by the SCRAMP and SCRINT components discussed in Part II of this paper. The combustor is analyzed by the SCORCH code which is based upon SPLITP PNS pressure-split methodology formulated by Dash and Sinha. The nozzle is analyzed by the SCHNOZ code which is based upon SCIPVIS PNS shock-capturing methodology formulated by Dash and Wolf. The current status of these models, previous developments leading to this status, and, progress towards future hybrid and 3D versions are discussed in this paper.
3D Indoor Positioning of UAVs with Spread Spectrum Ultrasound and Time-of-Flight Cameras
Aguilera, Teodoro
2017-01-01
This work proposes the use of a hybrid acoustic and optical indoor positioning system for the accurate 3D positioning of Unmanned Aerial Vehicles (UAVs). The acoustic module of this system is based on a Time-Code Division Multiple Access (T-CDMA) scheme, where the sequential emission of five spread spectrum ultrasonic codes is performed to compute the horizontal vehicle position following a 2D multilateration procedure. The optical module is based on a Time-Of-Flight (TOF) camera that provides an initial estimation for the vehicle height. A recursive algorithm programmed on an external computer is then proposed to refine the estimated position. Experimental results show that the proposed system can increase the accuracy of a solely acoustic system by 70–80% in terms of positioning mean square error. PMID:29301211
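The 2D multilateration step mentioned above, in its most common linearized least-squares form, looks roughly like the sketch below. The beacon layout, noise level and reference-beacon trick are generic textbook choices, not details taken from the paper's T-CDMA system.

```python
# Generic 2-D multilateration sketch: linearize the range equations against a
# reference beacon and solve by least squares. All numbers are made up.
import numpy as np

rng = np.random.default_rng(1)
beacons = np.array([[0.0, 0.0], [6.0, 0.0], [6.0, 5.0], [0.0, 5.0], [3.0, 2.5]])
true_pos = np.array([2.0, 3.2])
ranges = np.linalg.norm(beacons - true_pos, axis=1) + rng.normal(0, 0.02, len(beacons))

# Subtracting the equation for beacon 0 from the others removes the quadratic term:
#   2 (b_i - b_0) . p = |b_i|^2 - |b_0|^2 - r_i^2 + r_0^2
A = 2.0 * (beacons[1:] - beacons[0])
b = (np.sum(beacons[1:] ** 2, axis=1) - np.sum(beacons[0] ** 2)
     - ranges[1:] ** 2 + ranges[0] ** 2)
est, *_ = np.linalg.lstsq(A, b, rcond=None)

print("true:", true_pos, "estimated:", est)
```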
NASA Astrophysics Data System (ADS)
Sharma, Diksha; Badal, Andreu; Badano, Aldo
2012-04-01
The computational modeling of medical imaging systems often requires obtaining a large number of simulated images with low statistical uncertainty which translates into prohibitive computing times. We describe a novel hybrid approach for Monte Carlo simulations that maximizes utilization of CPUs and GPUs in modern workstations. We apply the method to the modeling of indirect x-ray detectors using a new and improved version of the code MANTIS, an open source software tool used for the Monte Carlo simulations of indirect x-ray imagers. We first describe a GPU implementation of the physics and geometry models in fastDETECT2 (the optical transport model) and a serial CPU version of the same code. We discuss its new features like on-the-fly column geometry and columnar crosstalk in relation to the MANTIS code, and point out areas where our model provides more flexibility for the modeling of realistic columnar structures in large area detectors. Second, we modify PENELOPE (the open source software package that handles the x-ray and electron transport in MANTIS) to allow direct output of location and energy deposited during x-ray and electron interactions occurring within the scintillator. This information is then handled by optical transport routines in fastDETECT2. A load balancer dynamically allocates optical transport showers to the GPU and CPU computing cores. Our hybridMANTIS approach achieves a significant speed-up factor of 627 when compared to MANTIS and of 35 when compared to the same code running only in a CPU instead of a GPU. Using hybridMANTIS, we successfully hide hours of optical transport time by running it in parallel with the x-ray and electron transport, thus shifting the computational bottleneck from optical to x-ray transport. The new code requires much less memory than MANTIS and, as a result, allows us to efficiently simulate large area detectors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wagner, John C; Peplow, Douglas E.; Mosher, Scott W
2011-01-01
This paper provides a review of the hybrid (Monte Carlo/deterministic) radiation transport methods and codes used at the Oak Ridge National Laboratory and examples of their application for increasing the efficiency of real-world, fixed-source Monte Carlo analyses. The two principal hybrid methods are (1) Consistent Adjoint Driven Importance Sampling (CADIS) for optimization of a localized detector (tally) region (e.g., flux, dose, or reaction rate at a particular location) and (2) Forward Weighted CADIS (FW-CADIS) for optimizing distributions (e.g., mesh tallies over all or part of the problem space) or multiple localized detector regions (e.g., simultaneous optimization of two or more localized tally regions). The two methods have been implemented and automated in both the MAVRIC sequence of SCALE 6 and ADVANTG, a code that works with the MCNP code. As implemented, the methods utilize the results of approximate, fast-running 3-D discrete ordinates transport calculations (with the Denovo code) to generate consistent space- and energy-dependent source and transport (weight windows) biasing parameters. These methods and codes have been applied to many relevant and challenging problems, including calculations of PWR ex-core thermal detector response, dose rates throughout an entire PWR facility, site boundary dose from arrays of commercial spent fuel storage casks, radiation fields for criticality accident alarm system placement, and detector response for special nuclear material detection scenarios and nuclear well-logging tools. Substantial computational speed-ups, generally O(10²-10⁴), have been realized for all applications to date. This paper provides a brief review of the methods, their implementation, results of their application, and current development activities, as well as a considerable list of references for readers seeking more information about the methods and/or their applications.
Layer-based buffer aware rate adaptation design for SHVC video streaming
NASA Astrophysics Data System (ADS)
Gudumasu, Srinivas; Hamza, Ahmed; Asbun, Eduardo; He, Yong; Ye, Yan
2016-09-01
This paper proposes a layer-based buffer-aware rate adaptation design which is able to avoid abrupt video quality fluctuation, reduce re-buffering latency and improve bandwidth utilization when compared to a conventional simulcast-based adaptive streaming system. The proposed adaptation design schedules DASH segment requests based on the estimated bandwidth, dependencies among video layers and layer buffer fullness. Scalable HEVC video coding is the latest state-of-the-art video coding technique that can alleviate various issues caused by simulcast-based adaptive video streaming. With scalable coded video streams, the video is encoded once into a number of layers representing different qualities and/or resolutions: a base layer (BL) and one or more enhancement layers (EL), each incrementally enhancing the quality of the lower layers. Such a layer-based coding structure allows fine granularity rate adaptation for video streaming applications. Two video streaming use cases are presented in this paper. The first use case is to stream HD SHVC video over a wireless network where available bandwidth varies, and a performance comparison between the proposed layer-based streaming approach and the conventional simulcast streaming approach is provided. The second use case is to stream 4K/UHD SHVC video over a hybrid access network that consists of a 5G millimeter wave high-speed wireless link and a conventional wired or WiFi network. The simulation results verify that the proposed layer-based rate adaptation approach is able to utilize the bandwidth more efficiently. As a result, a more consistent viewing experience with higher quality video content and minimal video quality fluctuations can be presented to the user.
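As a toy illustration of what a layer-aware request decision can look like (the thresholds, bitrates and policy below are our own simplifications, not the algorithm proposed in the paper), a scheduler might first protect the base-layer buffer against re-buffering and only then spend bandwidth on enhancement layers whose dependencies are already buffered:

```python
# Toy layer-aware request decision; all constants and the policy are illustrative.
LAYER_KBPS = [1500, 2500, 4000]        # BL, EL1, EL2 incremental rates
BL_TARGET_S = 8.0                      # keep this many seconds of base layer buffered
EL_MARGIN_S = 2.0                      # fetch an EL only if the layer below is ahead of it

def next_request(buffer_s, est_kbps):
    """buffer_s[i]: seconds of layer i already buffered; returns layer index to fetch."""
    if buffer_s[0] < BL_TARGET_S:       # protect against re-buffering first
        return 0
    cumulative = 0
    for layer, rate in enumerate(LAYER_KBPS):
        cumulative += rate
        if layer == 0:
            continue
        dependency_ok = buffer_s[layer - 1] - buffer_s[layer] >= EL_MARGIN_S
        if dependency_ok and cumulative <= est_kbps:
            return layer                # upgrade quality of already-buffered segments
    return 0                            # otherwise keep extending the base layer

print(next_request([3.0, 1.0, 0.0], est_kbps=9000))   # -> 0 (fill base layer)
print(next_request([10.0, 4.0, 1.0], est_kbps=5000))  # -> 1 (EL1 fits, EL2 would not)
print(next_request([10.0, 9.0, 4.0], est_kbps=9000))  # -> 2 (EL1 buffer already ahead)
```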
1993-01-01
Report documentation fragment: Lutzomyia longipalpis is a Species Complex: Genetic Divergence and Interspecific Hybrid Sterility Among Three Populations. The recoverable abstract fragments state that between 7% and 22% of the loci studied were diagnostic for any two of the colony populations, and that experimental hybridization between the populations was performed; a further fragment refers to extending the results to natural populations. Subject terms: Lutzomyia longipalpis, Leishmania donovani chagasi.
INHYD: Computer code for intraply hybrid composite design. A users manual
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Sinclair, J. H.
1983-01-01
A computer program (INHYD) was developed for intraply hybrid composite design. A users manual for INHYD is presented. INHYD embodies several composite micromechanics theories, intraply hybrid composite theories, and an integrated hygrothermomechanical theory. INHYD can be run in both interactive and batch modes. It has considerable flexibility and capability, which the user can exercise through several options. These options are demonstrated through appropriate INHYD runs in the manual.
Application of MIMO Techniques in sky-surface wave hybrid networking sea-state radar system
NASA Astrophysics Data System (ADS)
Zhang, L.; Wu, X.; Yue, X.; Liu, J.; Li, C.
2016-12-01
The sky-surface wave hybrid networking sea-state radar system consists of sky-wave transmission stations at different sites and several surface wave radar stations. The subject comes from the national 863 High-tech Project of China. The hybrid sky-surface wave system and the HF surface wave system work simultaneously, and the HF surface wave radar (HFSWR) can work in multi-static and surface-wave networking modes. Compared with a single-mode radar system, this system offers better detection performance at far ranges for the inversion of ocean dynamics parameters. We have applied multiple-input multiple-output (MIMO) techniques in this sea-state radar system. Based on multiple-channel and non-causal transmit beam-forming techniques, the MIMO radar architecture can reduce the size of the receiving antennas and simplify antenna installation. In addition, by efficiently utilizing the system's available degrees of freedom, it provides a feasible approach for mitigating multipath effects and Doppler-spread clutter in over-the-horizon radar. In this radar, the slow-time phase-coded MIMO method is used: the transmitted waveforms are phase-coded in slow time so as to be orthogonal after Doppler processing at the receiver, so the MIMO method can be implemented without modifying the receiver hardware. After the radar system design, the MIMO experiments of this system were completed by Wuhan University during 2015 and 2016. The experiments used the Wuhan multi-channel ionospheric sounding system (WMISS) as the sky-wave transmitting source and three dual-frequency HFSWRs developed by the Oceanography Laboratory of Wuhan University. The transmitter system is located at Chongyang, with a five-element linear equi-spaced antenna array, and at Wuhan, with one log-periodic antenna. The RF signals are generated by synchronized but independent digital waveform generators, providing complete flexibility in element phase and amplitude control and in waveform type and parameters. The field experimental results show that the presented method is effective: the echoes are obvious and distinguishable both in co-located MIMO mode and in widely distributed MIMO mode. Key words: sky-surface wave hybrid networking; sea-state radar; MIMO; phase-coded
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mengod, G.; Martinez-Mir, M.I.; Vilaro, M.T.
1989-11-01
³²P-labeled oligonucleotides derived from the coding region of rat dopamine D₂ receptor cDNA were used as probes to localize cells in the rat brain that contain the mRNA coding for this receptor by using in situ hybridization histochemistry. The highest level of hybridization was found in the intermediate lobe of the pituitary gland. High mRNA content was observed in the anterior lobe of the pituitary gland, the nuclei caudate-putamen and accumbens, and the olfactory tubercle. Lower levels were seen in the substantia nigra pars compacta and the ventral tegmental area, as well as in the lateral mammillary body. In these areas the distribution was comparable to that of the dopamine D₂ receptor binding sites as visualized by autoradiography using [³H]SDZ 205-502 as a ligand. However, in some areas such as the olfactory bulb, neocortex, hippocampus, superior colliculus, and cerebellum, D₂ receptors have been visualized but no significant hybridization signal could be detected. The mRNA coding for these receptors in these areas could be contained in cells outside those brain regions, be different from the one recognized by our probes, or be present at levels below the detection limits of our procedure. The possibility of visualizing and quantifying the mRNA coding for the dopamine D₂ receptor at the microscopic level will yield more information about the in vivo regulation of the synthesis of these receptors and their alteration following selective lesions or drug treatments.
NASA Technical Reports Server (NTRS)
Lawson, Gary; Sosonkina, Masha; Baurle, Robert; Hammond, Dana
2017-01-01
In many fields, real-world applications for High Performance Computing have already been developed. For these applications to stay up-to-date, new parallel strategies must be explored to yield the best performance; however, restructuring or modifying a real-world application may be daunting depending on the size of the code. In this case, a mini-app may be employed to quickly explore such options without modifying the entire code. In this work, several mini-apps have been created to enhance the performance of a real-world application, namely the VULCAN code for complex flow analysis developed at the NASA Langley Research Center. These mini-apps explore hybrid parallel programming paradigms with Message Passing Interface (MPI) for distributed memory access and either Shared MPI (SMPI) or OpenMP for shared memory accesses. Performance testing shows that MPI+SMPI yields the best execution performance, while requiring the largest number of code changes. A maximum speedup of 23 was measured for MPI+SMPI, but only 11 was measured for MPI+OpenMP.
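The "Shared MPI" idea usually refers to MPI-3 shared-memory windows, where ranks on the same node map one allocation instead of exchanging messages. The sketch below is a generic illustration of that mechanism using mpi4py, not the VULCAN mini-apps themselves; the array size and fill pattern are arbitrary.

```python
# Minimal shared-MPI sketch with MPI-3 shared-memory windows via mpi4py.
# Run with e.g.:  mpiexec -n 4 python shared_mpi_sketch.py
from mpi4py import MPI
import numpy as np

world = MPI.COMM_WORLD
node = world.Split_type(MPI.COMM_TYPE_SHARED)     # ranks sharing a node
n = 1_000_000
itemsize = MPI.DOUBLE.Get_size()

# Only node-rank 0 allocates; every rank on the node maps the same memory.
size = n * itemsize if node.rank == 0 else 0
win = MPI.Win.Allocate_shared(size, itemsize, comm=node)
buf, _ = win.Shared_query(0)
field = np.ndarray(buffer=buf, dtype="d", shape=(n,))

# Each node-local rank fills its own slice of the shared array -- no messages or
# copies are needed inside the node; MPI is still used across nodes as usual.
lo = node.rank * n // node.size
hi = (node.rank + 1) * n // node.size
field[lo:hi] = node.rank
win.Fence()                                        # synchronize node-local ranks

if node.rank == 0:
    print("node-shared array filled by", node.size, "ranks; mean =", field.mean())
```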
Error Reduction Program. [combustor performance evaluation codes
NASA Technical Reports Server (NTRS)
Syed, S. A.; Chiappetta, L. M.; Gosman, A. D.
1985-01-01
The details of a study to select, incorporate and evaluate the best available finite difference scheme to reduce numerical error in combustor performance evaluation codes are described. The combustor performance computer programs chosen were the two dimensional and three dimensional versions of Pratt & Whitney's TEACH code. The criteria used to select schemes required that the difference equations mirror the properties of the governing differential equation, be more accurate than the current hybrid difference scheme, be stable and economical, be compatible with TEACH codes, use only modest amounts of additional storage, and be relatively simple. The methods of assessment used in the selection process consisted of examination of the difference equation, evaluation of the properties of the coefficient matrix, Taylor series analysis, and performance on model problems. Five schemes from the literature and three schemes developed during the course of the study were evaluated. This effort resulted in the incorporation of a scheme in 3D-TEACH which is usually more accurate than the hybrid differencing method and never less accurate.
NASA Technical Reports Server (NTRS)
Swift, Daniel W.
1991-01-01
The primary methodology during the grant period has been the use of micro or meso-scale simulations to address specific questions concerning magnetospheric processes related to the aurora and substorm morphology. This approach, while useful in providing some answers, has its limitations. Many of the problems relating to the magnetosphere are inherently global and kinetic. Effort during the last year of the grant period has increasingly focused on development of a global-scale hybrid code to model the entire, coupled magnetosheath - magnetosphere - ionosphere system. In particular, numerical procedures for curvilinear coordinate generation and exactly conservative differencing schemes for hybrid codes in curvilinear coordinates have been developed. The new computer algorithms and the massively parallel computer architectures now make this global code a feasible proposition. Support provided by this project has played an important role in laying the groundwork for the eventual development of a global-scale code to model and forecast magnetospheric weather.
Studies of Planet Formation Using a Hybrid N-Body + Planetesimal Code
NASA Technical Reports Server (NTRS)
Kenyon, Scott J.
2004-01-01
The goal of our proposal was to use a hybrid multi-annulus planetesimal/n-body code to examine the planetesimal theory, one of the two main theories of planet formation. We developed this code to follow the evolution of numerous 1 m to 1 km planetesimals as they collide, merge, and grow into full-fledged planets. Our goal was to apply the code to several well-posed, topical problems in planet formation and to derive observational consequences of the models. We planned to construct detailed models to address two fundamental issues: (1) icy planets: models for icy planet formation will demonstrate how the physical properties of debris disks - including the Kuiper Belt in our solar system - depend on initial conditions and input physics; and (2) terrestrial planets: calculations following the evolution of 1-10 km planetesimals into Earth-mass planets and rings of dust will provide a better understanding of how terrestrial planets form and interact with their environment.
NASA Technical Reports Server (NTRS)
Kikuchi, Hideaki; Kalia, Rajiv; Nakano, Aiichiro; Vashishta, Priya; Iyetomi, Hiroshi; Ogata, Shuji; Kouno, Takahisa; Shimojo, Fuyuki; Tsuruta, Kanji; Saini, Subhash;
2002-01-01
A multidisciplinary, collaborative simulation has been performed on a Grid of geographically distributed PC clusters. The multiscale simulation approach seamlessly combines i) atomistic simulation based on the molecular dynamics (MD) method and ii) quantum mechanical (QM) calculation based on the density functional theory (DFT), so that accurate but less scalable computations are performed only where they are needed. The multiscale MD/QM simulation code has been Grid-enabled using i) a modular, additive hybridization scheme, ii) multiple QM clustering, and iii) computation/communication overlapping. The Gridified MD/QM simulation code has been used to study environmental effects of water molecules on fracture in silicon. A preliminary run of the code has achieved a parallel efficiency of 94% on 25 PCs distributed over 3 PC clusters in the US and Japan, and a larger test involving 154 processors on 5 distributed PC clusters is in progress.
Coding for reliable satellite communications
NASA Technical Reports Server (NTRS)
Gaarder, N. T.; Lin, S.
1986-01-01
This research project was set up to study various kinds of coding techniques for error control in satellite and space communications for NASA Goddard Space Flight Center. During the project period, researchers investigated the following areas: (1) decoding of Reed-Solomon codes in terms of dual basis; (2) concatenated and cascaded error control coding schemes for satellite and space communications; (3) use of hybrid coding schemes (error correction and detection incorporated with retransmission) to improve system reliability and throughput in satellite communications; (4) good codes for simultaneous error correction and error detection, and (5) error control techniques for ring and star networks.
Strategies for Energy Efficient Resource Management of Hybrid Programming Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Dong; Supinski, Bronis de; Schulz, Martin
2013-01-01
Many scientific applications are programmed using hybrid programming models that use both message-passing and shared-memory, due to the increasing prevalence of large-scale systems with multicore, multisocket nodes. Previous work has shown that energy efficiency can be improved using software-controlled execution schemes that consider both the programming model and the power-aware execution capabilities of the system. However, such approaches have focused on identifying optimal resource utilization for one programming model, either shared-memory or message-passing, in isolation. The potential solution space, thus the challenge, increases substantially when optimizing hybrid models since the possible resource configurations increase exponentially. Nonetheless, with the accelerating adoption of hybrid programming models, we increasingly need improved energy efficiency in hybrid parallel applications on large-scale systems. In this work, we present new software-controlled execution schemes that consider the effects of dynamic concurrency throttling (DCT) and dynamic voltage and frequency scaling (DVFS) in the context of hybrid programming models. Specifically, we present predictive models and novel algorithms based on statistical analysis that anticipate application power and time requirements under different concurrency and frequency configurations. We apply our models and methods to the NPB MZ benchmarks and selected applications from the ASC Sequoia codes. Overall, we achieve substantial energy savings (8.74% on average and up to 13.8%) with some performance gain (up to 7.5%) or negligible performance loss.
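As a rough illustration of how a software-controlled scheme can combine DCT and DVFS, the C sketch below enumerates (thread count, frequency) configurations, applies a placeholder time/power model, and picks the minimum-energy configuration subject to a slowdown bound. The model coefficients and thresholds are invented for illustration and are not those of the paper.

/* Toy DCT+DVFS configuration selection: predict time and power for each
 * (threads, frequency) pair and choose the minimum-energy configuration
 * whose predicted slowdown stays within 5% of the all-resources baseline.
 * The predictive models are placeholders. */
#include <stdio.h>
#include <float.h>

typedef struct { int threads; double freq_ghz; } config_t;

static double predict_time(config_t c)   /* placeholder model */
{
    return 10.0 / (c.threads * 0.8) * (2.4 / c.freq_ghz);
}
static double predict_power(config_t c)  /* placeholder model */
{
    return 20.0 + 5.0 * c.threads * (c.freq_ghz / 2.4) * (c.freq_ghz / 2.4);
}

int main(void)
{
    config_t configs[] = { {4, 1.6}, {4, 2.4}, {8, 1.6}, {8, 2.4}, {16, 2.4} };
    int n = (int)(sizeof configs / sizeof configs[0]);
    double t_base = predict_time(configs[n - 1]);   /* all-resources reference */
    double best_energy = DBL_MAX;
    config_t best = configs[0];

    for (int i = 0; i < n; i++) {
        double t = predict_time(configs[i]);
        double e = t * predict_power(configs[i]);   /* energy = time x power */
        if (t <= 1.05 * t_base && e < best_energy) {
            best_energy = e;
            best = configs[i];
        }
    }
    printf("chosen: %d threads @ %.1f GHz (E = %.1f J)\n",
           best.threads, best.freq_ghz, best_energy);
    return 0;
}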
ABINIT: Plane-Wave-Based Density-Functional Theory on High Performance Computers
NASA Astrophysics Data System (ADS)
Torrent, Marc
2014-03-01
For several years, a continuous effort has been made to adapt electronic structure codes based on Density-Functional Theory to future computing architectures. Among these codes, ABINIT is based on a plane-wave description of the wave functions, which makes it possible to treat systems of any kind. Porting such a code to petascale architectures poses difficulties related to the many-body nature of the DFT equations. To improve the performance of ABINIT - especially for standard LDA/GGA ground-state and response-function calculations - several strategies have been followed: A full multi-level MPI parallelisation scheme has been implemented, exploiting all possible levels and distributing both computation and memory. It allows the number of distributed processes to be increased and could not have been achieved without a strong restructuring of the code. The core algorithm used to solve the eigenproblem (``Locally Optimal Blocked Conjugate Gradient''), a Blocked-Davidson-like algorithm, is based on a distribution of processes combining plane-waves and bands. In addition to the distributed-memory parallelization, a full hybrid scheme has been implemented, using standard shared-memory directives (OpenMP/OpenACC) or porting some time-consuming code sections to Graphics Processing Units (GPUs). As no simple performance model exists, the complexity of use has increased; the code efficiency strongly depends on the distribution of processes among the numerous levels. ABINIT is able to predict the performance of several process distributions and automatically choose the most favourable one. On the other hand, a large effort has been carried out to analyse the performance of the code on petascale architectures, showing which sections of the code have to be improved; they are all related to matrix algebra (diagonalisation, orthogonalisation). The different strategies employed to improve the code scalability will be described. They are based on an exploration of new diagonalization algorithms, as well as the use of external optimized libraries. Part of this work has been supported by the European PRACE project (Partnership for Advanced Computing in Europe) in the framework of its work package 8.
Praz, Coraline R; Menardo, Fabrizio; Robinson, Mark D; Müller, Marion C; Wicker, Thomas; Bourras, Salim; Keller, Beat
2018-01-01
Powdery mildew is an important disease of cereals. It is caused by one species, Blumeria graminis, which is divided into formae speciales, each of which is highly specialized to one host. Recently, a new form capable of growing on triticale (B.g. triticale) has emerged through hybridization between wheat and rye mildews (B.g. tritici and B.g. secalis, respectively). In this work, we used RNA sequencing to study the molecular basis of host adaptation in B.g. triticale. We analyzed gene expression in three B.g. tritici isolates, two B.g. secalis isolates and two B.g. triticale isolates and identified a core set of putative effector genes that are highly expressed in all formae speciales. We also found that the genes differentially expressed between isolates of the same form as well as between different formae speciales were enriched in putative effectors. Their coding genes belong to several families, including some which contain known members of mildew avirulence (Avr) and suppressor (Svr) genes. Based on these findings we propose that effectors play an important role in host adaptation that is mechanistically based on Avr-Resistance gene-Svr interactions. We also found that gene expression in the B.g. triticale hybrid is mostly conserved with the parent-of-origin, but some genes inherited from B.g. tritici showed a B.g. secalis-like expression. Finally, we identified 11 unambiguous cases of putative effector genes with hybrid-specific, non-parent-of-origin gene expression, and we propose that they are possible determinants of host specialization in triticale mildew. These data suggest that altered expression of multiple effector genes, in particular Avr- and Svr-related factors, might play a role in mildew host adaptation based on hybridization.
Model-based design of RNA hybridization networks implemented in living cells
Rodrigo, Guillermo; Prakash, Satya; Shen, Shensi; Majer, Eszter
2017-01-01
Abstract Synthetic gene circuits allow the behavior of living cells to be reprogrammed, and non-coding small RNAs (sRNAs) are increasingly being used as programmable regulators of gene expression. However, sRNAs (natural or synthetic) are generally used to regulate single target genes, while complex dynamic behaviors would require networks of sRNAs regulating each other. Here, we report a strategy for implementing such networks that exploits hybridization reactions carried out exclusively by multifaceted sRNAs that are both targets of and triggers for other sRNAs. These networks are ultimately coupled to the control of gene expression. We relied on a thermodynamic model of the different stable conformational states underlying this system at the nucleotide level. To test our model, we designed five different RNA hybridization networks with a linear architecture, and we implemented them in Escherichia coli. We validated the network architecture at the molecular level by native polyacrylamide gel electrophoresis, as well as the network function at the bacterial population and single-cell levels with a fluorescent reporter. Our results suggest that it is possible to engineer complex cellular programs based on RNA from first principles. Because these networks are mainly based on physical interactions, our designs could be expanded to other organisms as portable regulatory resources or to implement biological computations. PMID:28934501
Stability properties and fast ion confinement of hybrid tokamak plasma configurations
NASA Astrophysics Data System (ADS)
Graves, J. P.; Brunetti, D.; Pfefferle, D.; Faustin, J. M. P.; Cooper, W. A.; Kleiner, A.; Lanthaler, S.; Patten, H. W.; Raghunathan, M.
2015-11-01
In hybrid scenarios with flat q just above unity, extremely fast growing tearing modes are born from toroidal sidebands of the near-resonant ideal internal kink mode. New scalings of the growth rate with the magnetic Reynolds number arise from two-fluid effects and sheared toroidal flow. Non-linear saturated 1/1 dominant modes obtained from initial-value stability calculations agree with the amplitude of the 1/1 component of a 3D VMEC equilibrium calculation. A viable and realistic equilibrium representation of such internal kink modes allows fast ion studies to be accurately established. Calculations of MAST neutral beam ion distributions using the VENUS-LEVIS code show very good agreement with the observed impaired core fast ion confinement when long-lived modes occur. The 3D ICRH code SCENIC also enables the establishment of minority RF distributions in hybrid plasmas susceptible to saturated near-resonant internal kink modes.
High speed corner and gap-seal computations using an LU-SGS scheme
NASA Technical Reports Server (NTRS)
Coirier, William J.
1989-01-01
The hybrid Lower-Upper Symmetric Gauss-Seidel (LU-SGS) algorithm was added to a widely used series of 2D/3D Euler/Navier-Stokes solvers and was demonstrated for a particular class of high-speed flows. A limited study was conducted to compare the hybrid LU-SGS for approximate Newton iteration and diagonalized Beam-Warming (DBW) schemes on a work and convergence history basis. The hybrid LU-SGS algorithm is more efficient and easier to implement than the DBW scheme originally present in the code for the cases considered. The code was validated for the hypersonic flow through two mutually perpendicular flat plates and then used to investigate the flow field in and around a simplified scramjet module gap seal configuration. Due to the similarities, the gap seal flow was compared to hypersonic corner flow at the same freestream conditions and Reynolds number.
Research on stellarator-mirror fission-fusion hybrid
NASA Astrophysics Data System (ADS)
Moiseenko, V. E.; Kotenko, V. G.; Chernitskiy, S. V.; Nemov, V. V.; Ågren, O.; Noack, K.; Kalyuzhnyi, V. N.; Hagnestål, A.; Källne, J.; Voitsenya, V. S.; Garkusha, I. E.
2014-09-01
The development of a stellarator-mirror fission-fusion hybrid concept is reviewed. The hybrid comprises a fusion neutron source and a powerful sub-critical fast fission reactor core. The aim is the transmutation of spent nuclear fuel and safe fission energy production. In its fusion part, neutrons are generated in deuterium-tritium (D-T) plasma, confined magnetically in a stellarator-type system with an embedded magnetic mirror. Based on kinetic calculations, the energy balance for such a system is analyzed. Neutron calculations have been performed with the MCNPX code, and the principal design of the reactor part is developed. The neutron outflux at different outer parts of the reactor is calculated. Numerical simulations have been performed on the structure of the magnetic field in a model of the stellarator-mirror device, which is obtained by switching off one or two toroidal field coils in the Uragan-2M torsatron. The calculations predict the existence of closed magnetic surfaces under certain conditions. The confinement of fast particles in such a magnetic trap is analyzed.
Beyond the Boundary: Science, Industry, and Managing Symbiosis
ERIC Educational Resources Information Center
Hansen, Birgitte Gorm
2011-01-01
Whether celebratory or critical, STS research on science-industry relations has focused on the blurring of boundaries and hybridization of codes and practices. However, the vocabulary of boundary and hybrid tends to reify science and industry as separate in the attempt to map their relation. Drawing on interviews with the head of a research center…
NASA Technical Reports Server (NTRS)
Lawson, Gary; Poteat, Michael; Sosonkina, Masha; Baurle, Robert; Hammond, Dana
2016-01-01
In this work, several mini-apps have been created to enhance the performance of a real-world application, namely the VULCAN code for complex flow analysis developed at the NASA Langley Research Center. These mini-apps explore hybrid parallel programming paradigms with the Message Passing Interface (MPI) for distributed memory access and either Shared MPI (SMPI) or OpenMP for shared memory accesses. Performance testing shows that MPI+SMPI yields the best execution performance, while requiring the largest number of code changes. A maximum speedup of 23X was measured for MPI+SMPI, but only 10X was measured for MPI+OpenMP.
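A minimal MPI+OpenMP hybrid kernel of the general kind such mini-apps exercise is sketched below in C; this is a generic illustration, not VULCAN or one of the actual mini-apps. MPI ranks own slices of the data, OpenMP threads share the local loop, and a reduction combines per-rank results.

/* Minimal MPI+OpenMP hybrid sketch: each MPI rank owns a slice of a 1-D
 * array, OpenMP threads share the local loop, and a global reduction
 * combines the per-rank partial sums.
 * Build (typically):  mpicc -fopenmp hybrid.c -o hybrid
 * Run:                mpirun -np 4 ./hybrid                               */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

#define N_LOCAL 1000000L

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double local_sum = 0.0;
    #pragma omp parallel for reduction(+ : local_sum)
    for (long i = 0; i < N_LOCAL; i++) {
        long gi = (long)rank * N_LOCAL + i;       /* global index           */
        local_sum += 1.0 / (1.0 + (double)gi);    /* stand-in for flow work */
    }

    double global_sum = 0.0;
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("ranks=%d threads/rank=%d sum=%f\n",
               size, omp_get_max_threads(), global_sum);
    MPI_Finalize();
    return 0;
}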
Batshon, Hussam G; Djordjevic, Ivan; Xu, Lei; Wang, Ting
2010-06-21
In this paper, we present a modified coded hybrid subcarrier/amplitude/phase/polarization (H-SAPP) modulation scheme as a technique capable of achieving beyond 400 Gb/s single-channel transmission over optical channels. The modified H-SAPP scheme takes advantage of the available resources in addition to geometry to increase the bandwidth efficiency of the transmission system, and so increases the aggregate rate of the system. In this report we present the modified H-SAPP scheme and focus on an example that carries 11 bits/symbol and can achieve 440 Gb/s transmission using 50 Gigasymbol/s (GS/s) components.
Hybrid threshold adaptable quantum secret sharing scheme with reverse Huffman-Fibonacci-tree coding.
Lai, Hong; Zhang, Jun; Luo, Ming-Xing; Pan, Lei; Pieprzyk, Josef; Xiao, Fuyuan; Orgun, Mehmet A
2016-08-12
With prevalent attacks in communication, sharing a secret between communicating parties is an ongoing challenge. Moreover, it is important to integrate quantum solutions with classical secret sharing schemes with low computational cost for real-world use. This paper proposes a novel hybrid threshold adaptable quantum secret sharing scheme, using an m-bonacci orbital angular momentum (OAM) pump, Lagrange interpolation polynomials, and reverse Huffman-Fibonacci-tree coding. To be exact, we employ entangled states prepared by m-bonacci sequences to detect eavesdropping. Meanwhile, we encode m-bonacci sequences in Lagrange interpolation polynomials to generate the shares of a secret with reverse Huffman-Fibonacci-tree coding. The advantages of the proposed scheme are that it can detect eavesdropping without joint quantum operations, and that it permits secret sharing for an arbitrary number of classical participants no smaller than the threshold value, with much lower bandwidth. Also, in comparison with existing quantum secret sharing schemes, it still works when there are dynamic changes, such as the unavailability of some quantum channel, the arrival of new participants and the departure of participants. Finally, we provide a security analysis of the new hybrid quantum secret sharing scheme and discuss its useful features for modern applications.
Optimizing legacy molecular dynamics software with directive-based offload
Michael Brown, W.; Carrillo, Jan-Michael Y.; Gavhane, Nitin; ...
2015-05-14
Directive-based programming models are one solution for exploiting many-core coprocessors to increase simulation rates in molecular dynamics. They offer the potential to reduce code complexity with offload models that can selectively target computations to run on the CPU, the coprocessor, or both. In our paper, we describe modifications to the LAMMPS molecular dynamics code to enable concurrent calculations on a CPU and coprocessor. We also demonstrate that standard molecular dynamics algorithms can run efficiently on both the CPU and an x86-based coprocessor using the same subroutines. As a consequence, we demonstrate that code optimizations for the coprocessor also result in speedups on the CPU; in extreme cases up to 4.7X. We provide results for LAMMPS benchmarks and for production molecular dynamics simulations using the Stampede hybrid supercomputer with both Intel (R) Xeon Phi (TM) coprocessors and NVIDIA GPUs. The optimizations presented have increased simulation rates by over 2X for organic molecules and over 7X for liquid crystals on Stampede. The optimizations are available as part of the "Intel package" supplied with LAMMPS.
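The general idea of directive-based offload can be illustrated with standard OpenMP target directives, as in the C sketch below. This is a generic example, not the LAMMPS Intel package code, and the actual package additionally supports running the same work concurrently on CPU and coprocessor.

/* Generic directive-based offload sketch: an OpenMP "target" region offloads
 * a pairwise-interaction-style loop to a coprocessor or GPU if one is
 * available, and falls back to the host CPU otherwise -- the same loop body
 * runs in either place. */
#include <stdio.h>
#include <math.h>
#include <omp.h>

#define N 4096

int main(void)
{
    static double x[N], f[N];
    for (int i = 0; i < N; i++) { x[i] = 0.01 * i; f[i] = 0.0; }

    #pragma omp target teams distribute parallel for \
                map(to: x[0:N]) map(tofrom: f[0:N])
    for (int i = 0; i < N; i++) {
        double fi = 0.0;
        for (int j = 0; j < N; j++) {           /* toy O(N^2) "force" loop */
            double r = fabs(x[i] - x[j]) + 1.0;
            fi += 1.0 / (r * r);
        }
        f[i] = fi;
    }

    printf("f[0] = %f (offload devices available: %d)\n",
           f[0], omp_get_num_devices());
    return 0;
}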
Záveská, Eliška; Fér, Tomáš; Šída, Otakar; Marhold, Karol; Leong-Škorničková, Jana
2016-07-01
Discerning relationships among species evolved by reticulate and/or polyploid evolution is not an easy task, although it is widely discussed. The economically important genus Curcuma (ca. 120 spp.; Zingiberaceae), broadly distributed in tropical SE Asia, is a particularly interesting example of a group of palaeopolyploid origin whose evolution is driven mainly by hybridization and polyploidization. Although a phylogeny and a new infrageneric classification of Curcuma, based on commonly used molecular markers (ITS and cpDNA), have recently been proposed, significant evolutionary questions remain unresolved. We applied a multilocus approach and a combination of modern analytical methods to this genus to distinguish causes of gene tree incongruence and to identify hybrids and their parental species. Five independent regions of nuclear DNA (DCS, GAPDH, GLOBOSA3, LEAFY, ITS) and four non-coding cpDNA regions (trnL-trnF, trnT-trnL, psbA-trnH and matK), analysed as a single locus, were employed to construct a species tree and hybrid species trees using (*)BEAST and STEM-hy. Detection of hybridogenous species in the dataset was also conducted using the posterior predictive checking approach as implemented in JML. The resulting species tree outlines the relationships among major evolutionary lineages within Curcuma, which were previously unresolved or which conflicted depending upon whether they were based on ITS or cpDNA markers. Moreover, by using the additional markers in tests of plausible topologies of hybrid species trees for C. vamana, C. candida, C. roscoeana and C. myanmarensis suggested by previous molecular and morphological evidence, we found strong evidence that all the species except C. candida are of subgeneric hybrid origin. Copyright © 2016 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Brian P.; Sadlier, Ronald J.; Humble, Travis S.
Adopting quantum communication to modern networking requires transmitting quantum information through a fiber-based infrastructure. In this paper, we report the first demonstration of superdense coding over optical fiber links, taking advantage of a complete Bell-state measurement enabled by time-polarization hyperentanglement, linear optics, and common single-photon detectors. Finally, we demonstrate the highest single-qubit channel capacity to date utilizing linear optics, 1.665 ± 0.018, and we provide a full experimental implementation of a hybrid, quantum-classical communication protocol for image transfer.
Adaptive transmission based on multi-relay selection and rate-compatible LDPC codes
NASA Astrophysics Data System (ADS)
Su, Hualing; He, Yucheng; Zhou, Lin
2017-08-01
To adapt to dynamically changing channel conditions and improve the transmission reliability of the system, a cooperative system combining rate-compatible low-density parity-check (RC-LDPC) codes with a multi-relay selection protocol is proposed. In traditional relay selection protocols, only the channel state information (CSI) of the source-relay and relay-destination links is considered. The multi-relay selection protocol proposed in this paper additionally takes the CSI between relays into account in order to obtain more opportunities for collaboration. In addition, the ideas of hybrid automatic repeat request (HARQ) and rate compatibility are introduced. Simulation results show that the transmission reliability of the system can be significantly improved by the proposed protocol.
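As an illustration of CSI-driven multi-relay selection that also weighs inter-relay links, the C sketch below scores each relay by the bottleneck of its source-relay and relay-destination SNRs plus a small bonus for its best relay-relay link. The scoring rule and the numbers are illustrative, not the paper's exact criterion.

/* Illustrative CSI-based relay selection: score = min(SNR_sr, SNR_rd)
 * plus a small weight on the best inter-relay link, which rewards relays
 * that have good collaboration opportunities. */
#include <stdio.h>

#define NR 4

static double min2(double a, double b) { return a < b ? a : b; }

int main(void)
{
    double snr_sr[NR] = { 8.0, 12.0,  6.0, 10.0 };   /* source -> relay (dB) */
    double snr_rd[NR] = { 9.0,  5.0, 11.0, 10.0 };   /* relay  -> dest  (dB) */
    double snr_rr[NR][NR] = {                        /* relay  -> relay (dB) */
        { 0, 7, 4, 6 }, { 7, 0, 5, 9 }, { 4, 5, 0, 3 }, { 6, 9, 3, 0 } };

    int best = 0;
    double best_score = -1.0;
    for (int r = 0; r < NR; r++) {
        double bottleneck = min2(snr_sr[r], snr_rd[r]);
        double best_peer = 0.0;
        for (int k = 0; k < NR; k++)
            if (k != r && snr_rr[r][k] > best_peer) best_peer = snr_rr[r][k];
        double score = bottleneck + 0.1 * best_peer;  /* illustrative weight */
        if (score > best_score) { best_score = score; best = r; }
    }
    printf("selected relay: %d (score %.2f)\n", best, best_score);
    return 0;
}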
Modeling and Diagnostic Software for Liquefying-Fuel Rockets
NASA Technical Reports Server (NTRS)
Poll, Scott; Iverson, David; Ou, Jeremy; Sanderfer, Dwight; Patterson-Hine, Ann
2005-01-01
A report presents a study of five modeling and diagnostic computer programs considered for use in an integrated vehicle health management (IVHM) system during testing of liquefying-fuel hybrid rocket engines in the Hybrid Combustion Facility (HCF) at NASA Ames Research Center. Three of the programs -- TEAMS, L2, and RODON -- are model-based reasoning (or diagnostic) programs. The other two programs -- ICS and IMS -- do not attempt to isolate the causes of failures but can be used for detecting faults. In the study, qualitative models (in TEAMS and L2) and quantitative models (in RODON) having varying scope and completeness were created. Each of the models captured the structure and behavior of the HCF as a physical system. It was noted that in the cases of the qualitative models, the temporal aspects of the behavior of the HCF and the abstraction of sensor data are handled outside of the models, and it is necessary to develop additional code for this purpose. A need for additional code was also noted in the case of the quantitative model, though the amount of development effort needed was found to be less than that for the qualitative models.
NASA Technical Reports Server (NTRS)
White, Jeffrey A.; Baurle, Robert A.; Fisher, Travis C.; Quinlan, Jesse R.; Black, William S.
2012-01-01
The 2nd-order upwind inviscid flux scheme implemented in the multi-block, structured grid, cell centered, finite volume, high-speed reacting flow code VULCAN has been modified to reduce numerical dissipation. This modification was motivated by the desire to improve the code's ability to perform large eddy simulations. The reduction in dissipation was accomplished through a hybridization of non-dissipative and dissipative discontinuity-capturing advection schemes that reduces numerical dissipation while maintaining the ability to capture shocks. A methodology for constructing hybrid-advection schemes that blends non-dissipative fluxes, consisting of linear combinations of divergence and product-rule forms discretized using 4th-order symmetric operators, with dissipative, 3rd- or 4th-order reconstruction-based upwind flux schemes was developed and implemented. A series of benchmark problems with increasing spatial and fluid-dynamical complexity was utilized to examine the ability of the candidate schemes to resolve and propagate structures typical of turbulent flow, their discontinuity-capturing capability and their robustness. A realistic geometry typical of a high-speed propulsion system flowpath was computed using the most promising of the examined schemes and was compared with available experimental data to demonstrate simulation fidelity.
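The hybridization idea, blending a non-dissipative flux with a dissipative upwind flux under the control of a discontinuity sensor, can be illustrated in one dimension as below. This C sketch uses a simple pressure-based sensor on a scalar advection flux and is not VULCAN's actual discretization.

/* 1-D sketch of a hybrid advection flux: a non-dissipative central flux is
 * blended with a dissipative upwind flux at each interface, with the blend
 * weight driven by a simple pressure-based discontinuity sensor so that
 * dissipation is added only near shocks. */
#include <stdio.h>
#include <math.h>

/* sensor in [0,1]: ~0 in smooth regions, -> 1 across strong pressure jumps */
static double shock_sensor(double pL, double pR)
{
    double s = fabs(pR - pL) / fmin(pL, pR);
    return s / (s + 0.05);              /* 0.05: illustrative threshold scale */
}

/* scalar advection flux f = a*u at an interface, wave speed a > 0 */
static double hybrid_flux(double uL, double uR, double pL, double pR, double a)
{
    double f_central = 0.5 * a * (uL + uR);   /* non-dissipative */
    double f_upwind  = a * uL;                /* dissipative     */
    double w = shock_sensor(pL, pR);          /* blend weight    */
    return (1.0 - w) * f_central + w * f_upwind;
}

int main(void)
{
    printf("smooth region : %f\n", hybrid_flux(1.00, 1.01, 100.0, 100.5, 1.0));
    printf("across a shock: %f\n", hybrid_flux(1.00, 0.30, 100.0, 400.0, 1.0));
    return 0;
}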
Aeronautical audio broadcasting via satellite
NASA Technical Reports Server (NTRS)
Tzeng, Forrest F.
1993-01-01
A system design for aeronautical audio broadcasting, with C-band uplink and L-band downlink, via Inmarsat space segments is presented. Near-transparent-quality compression of 5-kHz bandwidth audio at 20.5 kbit/s is achieved based on a hybrid technique employing linear predictive modeling and transform-domain residual quantization. Concatenated Reed-Solomon/convolutional codes with quadrature phase shift keying are selected for bandwidth and power efficiency. An RF bandwidth of 25 kHz per channel, and a decoded bit error rate of 10^-6 at an Eb/N0 of 3.75 dB, are obtained. An interleaver, scrambler, modem synchronization, and frame format were designed, and frequency-division multiple access was selected over code-division multiple access. A link budget computation based on a worst-case scenario indicates sufficient system power margins. Transponder occupancy analysis for 72 audio channels demonstrates ample remaining capacity to accommodate emerging aeronautical services.
Microarray slide hybridization using fluorescently labeled cDNA.
Ares, Manuel
2014-01-01
Microarray hybridization is used to determine the amount and genomic origins of RNA molecules in an experimental sample. Unlabeled probe sequences for each gene or gene region are printed in an array on the surface of a slide, and fluorescently labeled cDNA derived from the RNA target is hybridized to it. This protocol describes a blocking and hybridization protocol for microarray slides. The blocking step is particular to the chemistry of "CodeLink" slides, but it serves to remind us that almost every kind of microarray has a treatment step that occurs after printing but before hybridization. We recommend making sure of the precise treatment necessary for the particular chemistry used in the slides to be hybridized because the attachment chemistries differ significantly. Hybridization is similar to northern or Southern blots, but on a much smaller scale.
NASA Astrophysics Data System (ADS)
Litinski, Daniel; Kesselring, Markus S.; Eisert, Jens; von Oppen, Felix
2017-07-01
We present a scalable architecture for fault-tolerant topological quantum computation using networks of voltage-controlled Majorana Cooper pair boxes and topological color codes for error correction. Color codes have a set of transversal gates which coincides with the set of topologically protected gates in Majorana-based systems, namely, the Clifford gates. In this way, we establish color codes as providing a natural setting in which advantages offered by topological hardware can be combined with those arising from topological error-correcting software for full-fledged fault-tolerant quantum computing. We provide a complete description of our architecture, including the underlying physical ingredients. We start by showing that in topological superconductor networks, hexagonal cells can be employed to serve as physical qubits for universal quantum computation, and we present protocols for realizing topologically protected Clifford gates. These hexagonal-cell qubits allow for a direct implementation of open-boundary color codes with ancilla-free syndrome read-out and logical T gates via magic-state distillation. For concreteness, we describe how the necessary operations can be implemented using networks of Majorana Cooper pair boxes, and we give a feasibility estimate for error correction in this architecture. Our approach is motivated by nanowire-based networks of topological superconductors, but it could also be realized in alternative settings such as quantum-Hall-superconductor hybrids.
Non-coding RNAs and plant male sterility: current knowledge and future prospects.
Mishra, Ankita; Bohra, Abhishek
2018-02-01
Latest outcomes assign functional role to non-coding (nc) RNA molecules in regulatory networks that confer male sterility to plants. Male sterility in plants offers great opportunity for improving crop performance through application of hybrid technology. In this respect, cytoplasmic male sterility (CMS) and sterility induced by photoperiod (PGMS)/temperature (TGMS) have greatly facilitated development of high-yielding hybrids in crops. Participation of non-coding (nc) RNA molecules in plant reproductive development is increasingly becoming evident. Recent breakthroughs in rice definitively associate ncRNAs with PGMS and TGMS. In case of CMS, the exact mechanism through which the mitochondrial ORFs exert influence on the development of male gametophyte remains obscure in several crops. High-throughput sequencing has enabled genome-wide discovery and validation of these regulatory molecules and their target genes, describing their potential roles performed in relation to CMS. Discovery of ncRNA localized in plant mtDNA with its possible implication in CMS induction is intriguing in this respect. Still, conclusive evidences linking ncRNA with CMS phenotypes are currently unavailable, demanding complementing genetic approaches like transgenics to substantiate the preliminary findings. Here, we review the recent literature on the contribution of ncRNAs in conferring male sterility to plants, with an emphasis on microRNAs. Also, we present a perspective on improved understanding about ncRNA-mediated regulatory pathways that control male sterility in plants. A refined understanding of plant male sterility would strengthen crop hybrid industry to deliver hybrids with improved performance.
Midthun, K; Flores, J; Taniguchi, K; Urasawa, S; Kapikian, A Z; Chanock, R M
1987-01-01
Antigenic characterization of human rotaviruses by plaque reduction neutralization assay has revealed four distinct serotypes. The outer capsid protein VP7, coded for by gene 8 or 9, is a major neutralization protein; however, studies of rotaviruses derived from genetic reassortment between two strains have confirmed that another outer capsid protein, VP3, is in some cases equally important in neutralization. In this study, the genetic relatedness of the genes coding for VP7 of human rotaviruses belonging to serotypes 1 through 4 was examined by hybridization of their denatured double-stranded genomic RNAs to labeled single-stranded mRNA probes derived from human-animal rotavirus reassortants containing only the VP7 gene of their human rotavirus parent. A high degree of homology was demonstrated between the VP7 genes of strain D and other serotype 1 human rotaviruses, strain DS-1 and other serotype 2 human rotaviruses, strain P and other serotype 3 human rotaviruses, and strain ST3 and other serotype 4 human rotaviruses. Hybrid bands could not be demonstrated between the VP7 gene of D, DS-1, P, or ST3 and the corresponding gene of human rotaviruses belonging to a different serotype. RNA specimens extracted from the stools of 15 Venezuelan children hospitalized with rotavirus diarrhea were hybridized to each of the reassortant probes representing the four human serotypes. All five viruses with short RNA patterns showed homology with the DS-1 strain VP7 gene; two of these were previously adapted to tissue culture and shown to be serotype 2 strains by tissue culture neutralization. Of the remaining 10 viruses with long RNA patterns, 2 hybridized only to the D strain VP7 gene, 6 hybridized only to the P strain VP7 gene, and 2 hybridized only to the ST3 strain VP7 gene. Hybridization using single human rotavirus gene substitution reassortants as probes may provide an alternative method for identifying the VP7 serotype of field isolates that would circumvent the need for tissue culture adaptation. Images PMID:3038948
NASA Astrophysics Data System (ADS)
Maeda, Takuto; Takemura, Shunsuke; Furumura, Takashi
2017-07-01
We have developed an open-source software package, Open-source Seismic Wave Propagation Code (OpenSWPC), for parallel numerical simulations of seismic wave propagation in 3D and 2D (P-SV and SH) viscoelastic media based on the finite difference method at local-to-regional scales. This code is equipped with a frequency-independent attenuation model based on the generalized Zener body and an efficient perfectly matched layer absorbing boundary condition. A hybrid-style programming model using OpenMP and the Message Passing Interface (MPI) is adopted for efficient parallel computation. OpenSWPC has wide applicability for seismological studies and great portability, allowing excellent performance on systems ranging from PC clusters to supercomputers. Without modifying the code, users can conduct seismic wave propagation simulations using their own velocity structure models and the necessary source representations by specifying them in an input parameter file. The code has various modes for different types of velocity structure model input and different source representations such as single force, moment tensor and plane-wave incidence, which can easily be selected via the input parameters. Widely used binary data formats, the Network Common Data Form (NetCDF) and the Seismic Analysis Code (SAC), are adopted for the input of the heterogeneous structure model and the outputs of the simulation results, so users can easily handle the input/output datasets. All codes are written in Fortran 2003 and are available with detailed documents in a public repository.
Background-Modeling-Based Adaptive Prediction for Surveillance Video Coding.
Zhang, Xianguo; Huang, Tiejun; Tian, Yonghong; Gao, Wen
2014-02-01
The exponential growth of surveillance videos presents an unprecedented challenge for high-efficiency surveillance video coding technology. Compared with the existing coding standards that were basically developed for generic videos, surveillance video coding should be designed to make the best use of the special characteristics of surveillance videos (e.g., relative static background). To do so, this paper first conducts two analyses on how to improve the background and foreground prediction efficiencies in surveillance video coding. Following the analysis results, we propose a background-modeling-based adaptive prediction (BMAP) method. In this method, all blocks to be encoded are firstly classified into three categories. Then, according to the category of each block, two novel inter predictions are selectively utilized, namely, the background reference prediction (BRP) that uses the background modeled from the original input frames as the long-term reference and the background difference prediction (BDP) that predicts the current data in the background difference domain. For background blocks, the BRP can effectively improve the prediction efficiency using the higher quality background as the reference; whereas for foreground-background-hybrid blocks, the BDP can provide a better reference after subtracting its background pixels. Experimental results show that the BMAP can achieve at least twice the compression ratio on surveillance videos as AVC (MPEG-4 Advanced Video Coding) high profile, yet with a slightly additional encoding complexity. Moreover, for the foreground coding performance, which is crucial to the subjective quality of moving objects in surveillance videos, BMAP also obtains remarkable gains over several state-of-the-art methods.
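A minimal sketch of the block-level decision described above is given below in C: a block is compared against the modeled background, classified as background, foreground-background hybrid, or foreground, and assigned BRP, BDP, or conventional prediction accordingly. The thresholds and classification rule are illustrative, not those of the BMAP method.

/* Toy block classifier for background-modeling-based prediction: count the
 * pixels that differ noticeably from the modeled background and choose a
 * prediction mode from the difference fraction.  Thresholds are illustrative. */
#include <stdio.h>
#include <stdlib.h>

#define BLK 16

typedef enum { PRED_BRP, PRED_BDP, PRED_CONVENTIONAL } pred_mode_t;

static pred_mode_t choose_prediction(const unsigned char *blk,
                                     const unsigned char *bg, int stride)
{
    int diff_pixels = 0;
    for (int y = 0; y < BLK; y++)
        for (int x = 0; x < BLK; x++)
            if (abs((int)blk[y * stride + x] - (int)bg[y * stride + x]) > 10)
                diff_pixels++;

    double frac = (double)diff_pixels / (BLK * BLK);
    if (frac < 0.05) return PRED_BRP;            /* background block     */
    if (frac < 0.60) return PRED_BDP;            /* hybrid block         */
    return PRED_CONVENTIONAL;                    /* (mostly) foreground  */
}

int main(void)
{
    static unsigned char bg[BLK][BLK], blk[BLK][BLK];
    for (int y = 0; y < BLK; y++)
        for (int x = 0; x < BLK; x++) {
            bg[y][x] = 100;
            blk[y][x] = (x < 6) ? 200 : 100;     /* moving object covers part */
        }
    printf("mode = %d (0=BRP, 1=BDP, 2=conventional)\n",
           choose_prediction(&blk[0][0], &bg[0][0], BLK));
    return 0;
}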
1989-07-01
Technical Report HL-89-14: Verification of the Hydrodynamic and Sediment Transport Hybrid Modeling System for Cumberland Sound and Kings Bay Navigation Channel, Georgia (author: Granat). Hydrodynamic results from RMA-2V were used in the numerical sediment transport code STUDH in modeling the interaction of the flow transport and ...
The ePLAS code for Ignition Studies
NASA Astrophysics Data System (ADS)
Faehl, R. J.; Mason, R. J.; Kirkpatrick, R. C.
2012-10-01
The ePLAS code is a multi-fluid/PIC hybrid developing self-consistent E and B fields by the Implicit Moment Method for stable calculations of high-density plasma problems with voids on the electron Courant time scale. See: http://www.researchapplicationscorp.com. Here, we outline typical applications to: 1) short-pulse-driven electron transport along void (or high-Z) insulated wires, and 2) the 2D development of shock ignition pressure peaks with B-fields. We outline the code's recent inclusion of SESAME EOS data, a DT/DD burn capability, and a new option for K-alpha imaging of modeling output, and we demonstrate a foil expansion tracked with either fluid or particle ions. Also, we describe a new super-hybrid extension of our implicit solver that permits full target dynamics studies on the ion Courant scale. Finally, we will touch on the very recent application of ePLAS to possible non-local/kinetic hydro effects in NIF capsules.
Input Files and Procedures for Analysis of SMA Hybrid Composite Beams in MSC.Nastran and ABAQUS
NASA Technical Reports Server (NTRS)
Turner, Travis L.; Patel, Hemant D.
2005-01-01
A thermoelastic constitutive model for shape memory alloys (SMAs) and SMA hybrid composites (SMAHCs) was recently implemented in the commercial codes MSC.Nastran and ABAQUS. The model is implemented and supported within the core of the commercial codes, so no user subroutines or external calculations are necessary. The model and resulting structural analysis has been previously demonstrated and experimentally verified for thermoelastic, vibration and acoustic, and structural shape control applications. The commercial implementations are described in related documents cited in the references, where various results are also shown that validate the commercial implementations relative to a research code. This paper is a companion to those documents in that it provides additional detail on the actual input files and solution procedures and serves as a repository for ASCII text versions of the input files necessary for duplication of the available results.
Jumping genes: Genomic ballast or powerhouse of biological diversification.
Choudhury, Rimjhim Roy; Parisod, Christian
2017-09-01
Studying hybridization has the potential to elucidate challenging questions in evolutionary biology such as the nature of adaptive genetic variation and reproductive isolation. A growing body of work highlights that the merging of divergent genomes goes beyond the reshuffling of standing variation from related species and promotes mutations (Abbott et al., ). However, to what extent such genome instability generates evolutionary significant variation remains largely elusive. In this issue of Molecular Ecology, Dennenmoser et al. () report considerable dynamics of transposable elements (TEs) in a recent invasive fish species of hybrid origin (Cottus; Figure ). It adds to the recent examples from plants to support TE-specific genome variation following hybridization. Insights from early, as well as established, hybrids are largely coherent with increased TE activity, and this fish system thus represents an inspiring opportunity to further address the possible association between genome dynamics and "rapid evolution of hybrid species." This work based on genome (re)sequencing contrasts with prior transcriptomics or PCR-based studies of TEs and illustrates how unprecedented amount of information promises a better understanding of the multiple patterns of variation across eukaryotic genomes; provided that we get the better of methodological advances. As discussed here, unbiased assessment of TE variation from genome surveys indeed remains a challenge precluding firm conclusions to be reached about the evolutionary significance of TEs. Despite methodological and conceptual developments that appear necessary to unambiguously uncover the unexplored iceberg below the known tip, the role of coding genes vs. TEs in promoting adaptation and speciation might be clarified in a not so remote future. © 2017 John Wiley & Sons Ltd.
System and method for deriving a process-based specification
NASA Technical Reports Server (NTRS)
Hinchey, Michael Gerard (Inventor); Rouff, Christopher A. (Inventor); Rash, James Larry (Inventor)
2009-01-01
A system and method for deriving a process-based specification for a system is disclosed. The process-based specification is mathematically inferred from a trace-based specification. The trace-based specification is derived from a non-empty set of traces or natural language scenarios. The process-based specification is mathematically equivalent to the trace-based specification. Code is generated, if applicable, from the process-based specification. A process, or phases of a process, using the features disclosed can be reversed and repeated to allow for an interactive development and modification of legacy systems. The process is applicable to any class of system, including, but not limited to, biological and physical systems, electrical and electro-mechanical systems in addition to software, hardware and hybrid hardware-software systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jason D. Hales; Veena Tikare
2014-04-01
The Used Fuel Disposition (UFD) program has initiated a project to develop a hydride formation modeling tool using a hybrid Potts-phase field approach. The Potts model is incorporated in the SPPARKS code from Sandia National Laboratories. The phase field model is provided through MARMOT from Idaho National Laboratory.
Large CMOS imager using hadamard transform based multiplexing
NASA Technical Reports Server (NTRS)
Karasik, Boris S.; Wadsworth, Mark V.
2005-01-01
We have developed a concept design for a large (10k x 10k) CMOS imaging array whose elements are grouped in small subarrays with N pixels in each. The subarrays are code-division multiplexed using Hadamard Transform (HT) based encoding. The Hadamard code improves the signal-to-noise ratio (SNR), referenced to the read-out amplifier noise, by a factor of N^(1/2). This way of grouping pixels reduces the number of hybridization bumps by a factor of N. A single-chip layout has been designed and the architecture of the imager has been developed to accommodate the HT-based multiplexing within existing CMOS technology. The imager architecture allows for a trade-off between speed and sensitivity. The envisioned imager would operate at a speed >100 fps with a pixel noise < 20 e-. The power dissipation would be 100 pW/pixel. The combination of large format, high speed, high sensitivity and low power dissipation can be very attractive for space reconnaissance applications.
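The encoding/decoding arithmetic behind Hadamard-transform multiplexing of an N-pixel subarray can be sketched as follows (Sylvester-construction matrix, N = 8). This illustrates only the mathematics of the code-division multiplexing, not the actual CMOS readout circuitry.

/* Hadamard-transform multiplexing sketch: each of the N multiplexed readouts
 * is a +/- combination of all N pixel signals (one Hadamard row); decoding
 * applies H^T/N, since the Sylvester matrix is symmetric with H*H = N*I. */
#include <stdio.h>

#define N 8   /* pixels per subarray (power of two) */

static void build_hadamard(int h[N][N])
{
    h[0][0] = 1;
    for (int size = 1; size < N; size *= 2)
        for (int i = 0; i < size; i++)
            for (int j = 0; j < size; j++) {
                h[i][j + size]        =  h[i][j];
                h[i + size][j]        =  h[i][j];
                h[i + size][j + size] = -h[i][j];
            }
}

int main(void)
{
    int h[N][N];
    double pixels[N] = { 10, 20, 15, 5, 30, 25, 12, 8 };
    double readout[N] = {0}, decoded[N] = {0};

    build_hadamard(h);

    for (int i = 0; i < N; i++)                 /* encode: one readout per row */
        for (int j = 0; j < N; j++)
            readout[i] += h[i][j] * pixels[j];

    for (int j = 0; j < N; j++) {               /* decode and verify */
        for (int i = 0; i < N; i++)
            decoded[j] += h[i][j] * readout[i];
        decoded[j] /= N;
        printf("pixel %d: original %.1f  decoded %.1f\n",
               j, pixels[j], decoded[j]);
    }
    return 0;
}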
Assessment of a hybrid finite element and finite volume code for turbulent incompressible flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xia, Yidong, E-mail: yidong.xia@inl.gov; Wang, Chuanjin; Luo, Hong
Hydra-TH is a hybrid finite-element/finite-volume incompressible/low-Mach flow simulation code based on the Hydra multiphysics toolkit being developed and used for thermal-hydraulics applications. In the present work, a suite of verification and validation (V&V) test problems for Hydra-TH was defined to meet the design requirements of the Consortium for Advanced Simulation of Light Water Reactors (CASL). The intent for this test problem suite is to provide baseline comparison data that demonstrates the performance of the Hydra-TH solution methods. The simulation problems vary in complexity from laminar to turbulent flows. A set of RANS and LES turbulence models were used in the simulation of four classical test problems. Numerical results obtained by Hydra-TH agreed well with either the available analytical solution or experimental data, indicating the verified and validated implementation of these turbulence models in Hydra-TH. Where possible, some form of solution verification has been attempted to identify sensitivities in the solution methods, and suggest best practices when using the Hydra-TH code. -- Highlights: • We performed a comprehensive study to verify and validate the turbulence models in Hydra-TH. • Hydra-TH delivers 2nd-order grid convergence for the incompressible Navier–Stokes equations. • Hydra-TH can accurately simulate the laminar boundary layers. • Hydra-TH can accurately simulate the turbulent boundary layers with RANS turbulence models. • Hydra-TH delivers high-fidelity LES capability for simulating turbulent flows in confined space.
Du, Ping; Li, Hongxia; Cao, Wei
2009-07-15
A novel and sensitive sandwich electrochemical biosensor based on the amplification of magnetic microbeads and Au nanoparticles (NPs) modified with bio bar codes and PbS nanoparticles was constructed in the present work. In this method, the magnetic microspheres were coated with 4 layers of polyelectrolytes in order to increase the number of carboxyl groups on the surface of the magnetic microbeads, which enhanced the amount of capture DNA. The amino-functionalized capture DNA on the surface of the magnetic microbeads hybridized with one end of the target DNA, the other end of which was hybridized with a signal DNA probe labelled with Au NPs at its terminus. The Au NPs were modified with bio bar codes, and the PbS NPs were used as a marker for identifying the target oligonucleotide. The modification of the magnetic microbeads could immobilize more amino-group-terminated capture DNA, and the bio bar codes could increase the amount of Au NPs that combined with the target DNA. The detection of lead ions by anodic stripping voltammetry (ASV) further improved the sensitivity of the biosensor. As a result, the present DNA biosensor showed good selectivity and sensitivity through the combined amplification. Under the optimum conditions, the response was linear with the target DNA concentration from 2.0 x 10^-14 M to 1.0 x 10^-12 M, and a detection limit as low as 5.0 x 10^-15 M was obtained.
Adaptive non-local smoothing-based weberface for illumination-insensitive face recognition
NASA Astrophysics Data System (ADS)
Yao, Min; Zhu, Changming
2017-07-01
Compensating the illumination of a face image is an important step toward effective face recognition under severe illumination conditions. This paper presents a novel illumination normalization method that specifically considers removing illumination boundaries as well as reducing regional illumination. We begin with an analysis of the commonly used reflectance model and then describe the hybrid use of adaptive non-local smoothing and local information coding based on Weber's law. The effectiveness and advantages of this combination are evidenced visually and experimentally. Results on the Extended YaleB database show its better performance than several other well-known methods.
USDA-ARS?s Scientific Manuscript database
The International Code of Nomenclature for algae, fungi and plants is revised every six years to incorporate decisions of the Nomenclature Section of successive International Botanical Congresses (IBC) on proposals to amend the Code. The proposal in this paper will be considered at the IBC in Shenzh...
NASA Technical Reports Server (NTRS)
Deere, Karen A.; Viken, Sally A.; Carter, Melissa B.; Viken, Jeffrey K.; Derlaga, Joseph M.; Stoll, Alex M.
2017-01-01
A variety of tools, from fundamental to high order, have been used to better understand applications of distributed electric propulsion to aid the wing and propulsion system design of the Leading Edge Asynchronous Propulsion Technology (LEAPTech) project and the X-57 Maxwell airplane. Three high-fidelity, Navier-Stokes computational fluid dynamics codes used during the project, with results presented here, are FUN3D, STAR-CCM+, and OVERFLOW. These codes employ various turbulence models to predict fully turbulent and transitional flow. Results from these codes are compared for two distributed electric propulsion configurations: the wing tested at NASA Armstrong on the Hybrid-Electric Integrated Systems Testbed truck, and the wing designed for the X-57 Maxwell airplane. Results from these computational tools for the high-lift wing tested on the Hybrid-Electric Integrated Systems Testbed truck and for the X-57 high-lift wing compare reasonably well. The goal of the X-57 wing and distributed electric propulsion system design, achieving or exceeding the required CL = 3.95 at stall speed, was confirmed with all of the computational codes.
Overcoming Challenges in Kinetic Modeling of Magnetized Plasmas and Vacuum Electronic Devices
NASA Astrophysics Data System (ADS)
Omelchenko, Yuri; Na, Dong-Yeop; Teixeira, Fernando
2017-10-01
We transform the state of the art of plasma modeling by taking advantage of novel computational techniques for fast and robust integration of multiscale hybrid (full particle ions, fluid electrons, no displacement current) and full-PIC models. These models are implemented in the 3D HYPERS and axisymmetric full-PIC CONPIC codes. HYPERS is a massively parallel, asynchronous code. The HYPERS solver does not step fields and particles synchronously in time but instead executes local variable updates (events) at their self-adaptive rates while preserving fundamental conservation laws. The charge-conserving CONPIC code has a matrix-free explicit finite-element (FE) solver based on a sparse approximate inverse (SPAI) algorithm. This explicit solver approximates the inverse FE system matrix ("mass" matrix) using successive sparsity pattern orders of the original matrix. It does not reduce the set of Maxwell's equations to a second-order vector-wave (curl-curl) equation but instead utilizes the standard coupled first-order Maxwell system. We discuss the ability of our codes to accurately and efficiently account for multiscale physical phenomena in 3D magnetized space and laboratory plasmas and in axisymmetric vacuum electronic devices.
Supernova Light Curves and Spectra from Two Different Codes: Supernu and Phoenix
NASA Astrophysics Data System (ADS)
Van Rossum, Daniel R; Wollaeger, Ryan T
2014-08-01
The observed similarities between light curve shapes of Type Ia supernovae, and in particular the correlation of light curve shape and brightness, have been actively studied for more than two decades. In recent years, hydrodynamic simulations of white dwarf explosions have advanced greatly, and multiple mechanisms that could potentially produce Type Ia supernovae have been explored in detail. The question of which of the proposed mechanisms is (or are) possibly realized in nature remains challenging to answer, but detailed synthetic light curves and spectra from explosion simulations are very helpful and important guidelines towards answering this question. We present results from a newly developed radiation transport code, Supernu. Supernu solves the supernova radiative transfer problem using a novel technique based on a hybrid of Implicit Monte Carlo and Discrete Diffusion Monte Carlo. This technique enhances efficiency with respect to traditional Implicit Monte Carlo codes and thus lends itself well to multi-dimensional simulations. We show direct comparisons of light curves and spectra from Type Ia simulations with Supernu versus the legacy Phoenix code.
NASA Astrophysics Data System (ADS)
Darazi, R.; Gouze, A.; Macq, B.
2009-01-01
Reproducing a natural and real scene as we see it in the real world every day is becoming more and more popular. Stereoscopic and multi-view techniques are used to this end. However, because more information is displayed, supporting technologies such as digital compression are required to ensure the storage and transmission of the sequences. In this paper, a new scheme for stereo image coding is proposed. The original left and right images are jointly coded. The main idea is to optimally exploit the existing correlation between the two images. This is done by designing an efficient transform that reduces the redundancy in the stereo image pair. This approach was inspired by the lifting scheme (LS). The novelty of our work is that the prediction step has been replaced by a hybrid step consisting of disparity compensation followed by luminance correction and an optimized prediction step. The proposed scheme can be used for lossless and for lossy coding. Experimental results show improvement in terms of performance and complexity compared to recently proposed methods.
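A one-dimensional toy of the modified prediction step is sketched below in C: the right view is predicted from the disparity-compensated, luminance-corrected left view, and only the small residual needs to be coded; the decoder inverts the step exactly, as in any lifting structure. The disparity, gain, and signals are invented for illustration, and the update step of a full lifting scheme is omitted.

/* 1-D toy of a disparity-compensated, luminance-corrected prediction step in
 * a lifting-style stereo coder.  Only the prediction step is shown. */
#include <stdio.h>

#define W 8

int main(void)
{
    double left[W]  = { 10, 12, 14, 16, 18, 20, 22, 24 };
    double right[W], residual[W], rec_right[W];
    int    disparity = 2;      /* assumed known per block */
    double gain      = 0.9;    /* luminance correction    */

    /* synthesize a "right" view consistent with the model, plus small noise */
    for (int i = 0; i < W; i++) {
        int j = i - disparity; if (j < 0) j = 0;
        right[i] = gain * left[j] + ((i % 2) ? 0.3 : -0.2);
    }

    /* lifting-style prediction step: residual = right - P(left) */
    for (int i = 0; i < W; i++) {
        int j = i - disparity; if (j < 0) j = 0;
        residual[i] = right[i] - gain * left[j];
    }

    /* decoder inverts the step exactly: right = residual + P(left) */
    for (int i = 0; i < W; i++) {
        int j = i - disparity; if (j < 0) j = 0;
        rec_right[i] = residual[i] + gain * left[j];
        printf("i=%d residual=%+.2f reconstructed=%.2f original=%.2f\n",
               i, residual[i], rec_right[i], right[i]);
    }
    return 0;
}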
Binary CFG Rebuilt of Self-Modifying Codes
2016-10-03
... antivirus software based on binary signatures. A popular method in industry to analyze malware is dynamic analysis in a sandbox. Alternatively, we apply a hybrid method combining concolic testing (dynamic symbolic execution) ...
Cloning of Trametes versicolor genes induced by nitrogen starvation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trudel, P.; Courchesne, D.; Roy, C.
1988-06-01
We have screened a genomic library of Trametes versicolor for genes whose expression is associated with nitrogen starvation, which has been shown to induce ligninolytic activity. Using two different approaches based on differential expression, we isolated 29 clones. These were shown by restriction mapping and cross-hybridization to code for 11 distinct differentially expressed genes. Northern analysis of the kinetics of expression of these genes revealed that at least four of them have kinetics of induction that parallel the kinetics of induction of ligninolytic activity.
Hybrid discrete ordinates and characteristics method for solving the linear Boltzmann equation
NASA Astrophysics Data System (ADS)
Yi, Ce
With the capability of computer hardware and software increasing rapidly, deterministic methods for solving the linear Boltzmann equation (LBE) have attracted attention for computational applications in both the nuclear engineering and medical physics fields. Among the various deterministic methods, the discrete ordinates method (SN) and the method of characteristics (MOC) are two of the most widely used. The SN method is the traditional approach to solving the LBE owing to its stability and efficiency, while the MOC has advantages in treating complicated geometries. However, in 3-D problems requiring a dense discretization grid in phase space (i.e., a large number of spatial meshes, directions, or energy groups), both methods can suffer from the need for large amounts of memory and computation time. In our study, we developed a new hybrid algorithm by combining the two methods into one code, TITAN. The hybrid approach is specifically designed for application to problems containing low scattering regions. A new serial 3-D time-independent transport code has been developed. Under the hybrid approach, the preferred method can be applied in different regions (blocks) within the same problem model. Since the characteristics method is numerically more efficient in low scattering media, the hybrid approach uses a block-oriented characteristics solver in low scattering regions and a block-oriented SN solver in the remainder of the physical model. In the TITAN code, a physical problem model is divided into a number of coarse meshes (blocks) in Cartesian geometry. Either the characteristics solver or the SN solver can be chosen to solve the LBE within a coarse mesh. A coarse mesh can be filled with fine meshes or characteristic rays, depending on the solver assigned to the coarse mesh. Furthermore, with its object-oriented programming paradigm and layered code structure, TITAN allows different individual spatial meshing schemes and angular quadrature sets for each coarse mesh. Two quadrature types (level-symmetric and Legendre-Chebyshev quadrature) along with ordinate splitting techniques (rectangular splitting and PN-TN splitting) are implemented. In the SN solver, we apply a memory-efficient 'front-line' style paradigm to handle the fine mesh interface fluxes. In the characteristics solver, we have developed a novel 'backward' ray-tracing approach, in which a bi-linear interpolation procedure is used on the incoming boundaries of a coarse mesh. A CPU-efficient scattering kernel is shared by both solvers within the source iteration scheme. Angular and spatial projection techniques are developed to transfer the angular fluxes on the interfaces of coarse meshes with different discretization grids. The performance of the hybrid algorithm is tested on a number of benchmark problems in both the nuclear engineering and medical physics fields, among them the Kobayashi benchmark problems and a computed tomography (CT) device model. We also developed an extra sweep procedure with a fictitious quadrature technique to calculate angular fluxes along directions of interest. The technique is applied to a single photon emission computed tomography (SPECT) phantom model to simulate the SPECT projection images. The accuracy and efficiency of the TITAN code are demonstrated in these benchmarks, along with its scalability. A modified version of the characteristics solver is integrated into the PENTRAN code and tested within the parallel engine of PENTRAN. The limitations of the hybrid algorithm are also studied.
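The block-wise solver assignment at the heart of this hybrid approach can be pictured with a short sketch. This is not TITAN code; the block data structure, the scattering-ratio criterion, and the threshold value below are illustrative assumptions only.

```python
# Minimal sketch: assign a transport solver to each coarse mesh (block)
# based on its scattering ratio, as the abstract describes.
def choose_solvers(blocks, scattering_threshold=0.3):
    """Return a mapping from block id to the solver used for that block."""
    assignment = {}
    for block in blocks:
        # scattering ratio c = sigma_s / sigma_t for the block's material
        c = block["sigma_s"] / block["sigma_t"]
        # characteristics (MOC) is numerically more efficient in low-scattering
        # media; use SN elsewhere
        assignment[block["id"]] = "MOC" if c < scattering_threshold else "SN"
    return assignment

blocks = [
    {"id": "duct",   "sigma_s": 0.01, "sigma_t": 0.50},  # low scattering -> MOC
    {"id": "shield", "sigma_s": 0.45, "sigma_t": 0.50},  # scattering-dominated -> SN
]
print(choose_solvers(blocks))  # {'duct': 'MOC', 'shield': 'SN'}
```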
Nyachionjeka, Kumbirayi
2014-01-01
In this paper, the performance and feasibility of a hybrid wavelength division multiplexing/time division multiplexing passive optical network (WDM/TDM PON) system with 128 optical network units (ONUs) are analysed. In this system, triple play services (video, voice and data) are successfully transmitted over a distance of up to 28 km. Moreover, we analysed and compared the performance of various modulation formats for different distances in the proposed hybrid WDM/TDM PON. NRZ rectangular emerged as the most appropriate modulation format for triple play transmission in the proposed hybrid PON. PMID:27382633
CPIC: a curvilinear Particle-In-Cell code for plasma-material interaction studies
NASA Astrophysics Data System (ADS)
Delzanno, G.; Camporeale, E.; Moulton, J. D.; Borovsky, J. E.; MacDonald, E.; Thomsen, M. F.
2012-12-01
We present a recently developed Particle-In-Cell (PIC) code in curvilinear geometry called CPIC (Curvilinear PIC) [1], where the standard PIC algorithm is coupled with a grid generation/adaptation strategy. Through the grid generator, which maps the physical domain to a logical domain where the grid is uniform and Cartesian, the code can simulate domains of arbitrary complexity, including the interaction of complex objects with a plasma. At present the code is electrostatic. Poisson's equation (in logical space) can be solved with either an iterative method based on the Conjugate Gradient (CG) or the Generalized Minimal Residual (GMRES) coupled with a multigrid solver used as a preconditioner, or directly with multigrid. The multigrid strategy is critical for the solver to perform optimally or nearly optimally as the dimension of the problem increases. CPIC also features a hybrid particle mover, where the computational particles are characterized by position in logical space and velocity in physical space. The advantage of a hybrid mover, as opposed to more conventional movers that move particles directly in the physical space, is that the interpolation of the particles in logical space is straightforward and computationally inexpensive, since one does not have to track the position of the particle. We will present our latest progress on the development of the code and document the code performance on standard plasma-physics tests. Then we will present the (preliminary) application of the code to a basic dynamic-charging problem, namely the charging and shielding of a spherical spacecraft in a magnetized plasma for various levels of magnetization, including the pulsed emission of an electron beam from the spacecraft. The dynamical evolution of the sheath and the time-dependent current collection will be described. This study is in support of the ConnEx mission concept to use an electron beam from a magnetospheric spacecraft to trace magnetic field lines from the magnetosphere to the ionosphere [2]. [1] G.L. Delzanno, E. Camporeale, "CPIC: a new Particle-in-Cell code for plasma-material interaction studies", in preparation (2012). [2] J.E. Borovsky, D.J. McComas, M.F. Thomsen, J.L. Burch, J. Cravens, C.J. Pollock, T.E. Moore, and S.B. Mende, "Magnetosphere-Ionosphere Observatory (MIO): A multisatellite mission designed to solve the problem of what generates auroral arcs," Eos. Trans. Amer. Geophys. Union 79 (45), F744 (2000).
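The hybrid mover described above (positions stored in logical space, velocities in physical space) can be illustrated with a minimal one-dimensional sketch. The stretched mapping x(xi) = L*xi^2, its Jacobian, and the simple kick-drift update are assumptions made for illustration and are not the CPIC implementation.

```python
# Hybrid-mover sketch: the logical grid is uniform, so locating a particle's
# cell from xi is trivial; only the velocity lives in physical space.
import numpy as np

L = 1.0
def x_of_xi(xi):        # physical coordinate from logical coordinate
    return L * xi**2
def jacobian(xi):       # dx/dxi of the (assumed) mapping
    return 2.0 * L * xi

def push(xi, v, E_phys, q_over_m, dt):
    """Kick-drift push: velocity updated in physical space, position in logical space."""
    v_new = v + q_over_m * E_phys * dt         # physical-space velocity update
    xi_new = xi + (v_new / jacobian(xi)) * dt  # dxi/dt = v / (dx/dxi)
    return xi_new, v_new

xi, v = 0.5, 0.1
xi, v = push(xi, v, E_phys=-0.2, q_over_m=-1.0, dt=1e-2)
print(xi, v, x_of_xi(xi))
```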
Araripe, Luciana O; Montenegro, Horácio; Lemos, Bernardo; Hartl, Daniel L
2010-12-14
Hybrid male sterility (HMS) is a common outcome of hybridization between closely related animal species. It arises because interactions between alleles that are functional within one species may be disrupted in hybrids. The identification of genes leading to hybrid sterility is of great interest for understanding the evolutionary process of speciation. In the current work we used marked P-element insertions as dominant markers to efficiently locate one genetic factor causing a severe reduction in fertility in hybrid males of Drosophila simulans and D. mauritiana. Our mapping effort identified a region of 9 kb on chromosome 3, containing three complete and one partial coding sequences. Within this region, two annotated genes are suggested as candidates for the HMS factor, based on the comparative molecular characterization and public-source information. Gene Taf1 is only partially contained in the region, yet shows high polymorphism, with four fixed non-synonymous substitutions between the two species. Its molecular functions involve sequence-specific DNA binding and transcription factor activity. Gene agt is a small, intronless gene, whose molecular function is annotated as methylated-DNA-protein-cysteine S-methyltransferase activity. High polymorphism and one fixed non-synonymous substitution suggest this is a fast-evolving gene. The gene trees of both genes perfectly separate D. simulans and D. mauritiana into monophyletic groups. Analysis of gene expression using microarray revealed trends that were similar to those previously found in comparisons between whole-genome hybrids and parental species. The identification and subsequent confirmation of the HMS candidate gene will add another case study contributing to the understanding of the evolutionary process of hybrid incompatibility.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wagner, John C; Peplow, Douglas E.; Mosher, Scott W
2010-01-01
This paper provides a review of the hybrid (Monte Carlo/deterministic) radiation transport methods and codes used at the Oak Ridge National Laboratory and examples of their application for increasing the efficiency of real-world, fixed-source Monte Carlo analyses. The two principal hybrid methods are (1) Consistent Adjoint Driven Importance Sampling (CADIS) for optimization of a localized detector (tally) region (e.g., flux, dose, or reaction rate at a particular location) and (2) Forward Weighted CADIS (FW-CADIS) for optimizing distributions (e.g., mesh tallies over all or part of the problem space) or multiple localized detector regions (e.g., simultaneous optimization of two or more localized tally regions). The two methods have been implemented and automated in both the MAVRIC sequence of SCALE 6 and ADVANTG, a code that works with the MCNP code. As implemented, the methods utilize the results of approximate, fast-running 3-D discrete ordinates transport calculations (with the Denovo code) to generate consistent space- and energy-dependent source and transport (weight windows) biasing parameters. These methods and codes have been applied to many relevant and challenging problems, including calculations of PWR ex-core thermal detector response, dose rates throughout an entire PWR facility, site boundary dose from arrays of commercial spent fuel storage casks, radiation fields for criticality accident alarm system placement, and detector response for special nuclear material detection scenarios and nuclear well-logging tools. Substantial computational speed-ups, generally O(10^2-10^4), have been realized for all applications to date. This paper provides a brief review of the methods, their implementation, results of their application, and current development activities, as well as a considerable list of references for readers seeking more information about the methods and/or their applications.
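A minimal sketch of the CADIS idea summarized above: an approximate adjoint (importance) flux from a fast deterministic calculation is used to build consistent source-biasing and weight-window parameters. The three-cell adjoint values and source strengths below are invented purely for illustration.

```python
# CADIS sketch: biased source pdf and weight-window targets from an adjoint flux.
import numpy as np

q        = np.array([1.0, 0.0, 0.0])    # true source strength per cell
adj_flux = np.array([0.02, 0.2, 2.0])   # approximate adjoint (importance) flux per cell

R = np.sum(q * adj_flux)                # estimated detector response
q_biased = q * adj_flux / R             # biased source pdf (integrates to 1)
ww_center = R / adj_flux                # weight-window target weight per cell
# particles born from the biased source carry weight R/adj_flux, so source
# biasing and weight windows are consistent by construction
print(q_biased, ww_center)
```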
NASA Astrophysics Data System (ADS)
Lin, Y.; Wang, X.; Fok, M. C. H.; Buzulukova, N.; Perez, J. D.; Chen, L. J.
2017-12-01
The interaction between the Earth's inner and outer magnetospheric regions associated with the tail fast flows is calculated by coupling the Auburn 3-D global hybrid simulation code (ANGIE3D) to the Comprehensive Inner Magnetosphere/Ionosphere (CIMI) model. The global hybrid code solves fully kinetic equations governing the ions and a fluid model for electrons in the self-consistent electromagnetic field of the dayside and night side outer magnetosphere. In the integrated computation model, the hybrid simulation provides the CIMI model with field data in the CIMI 3-D domain and particle data at its boundary, and the transport in the inner magnetosphere is calculated by the CIMI model. By joining the two existing codes, effects of the solar wind on particle transport through the outer magnetosphere into the inner magnetosphere are investigated. Our simulation shows that fast flows and flux ropes are localized transients in the magnetotail plasma sheet and their overall structures have a dawn-dusk asymmetry. Strong perpendicular ion heating is found at the fast flow braking, which affects the earthward transport of entropy-depleted bubbles. We report on the impacts from the temperature anisotropy and non-Maxwellian ion distributions associated with the fast flows on the ring current and the convection electric field.
First-principles calculations on the four phases of BaTiO3.
Evarestov, Robert A; Bandura, Andrei V
2012-04-30
Calculations based on linear combination of atomic orbitals basis functions, as implemented in the CRYSTAL09 computer code, have been performed for the cubic, tetragonal, orthorhombic, and rhombohedral modifications of the BaTiO3 crystal. Structural and electronic properties as well as phonon frequencies were obtained using local density approximation, generalized gradient approximation, and hybrid exchange-correlation density functional theory (DFT) functionals for the four stable phases of BaTiO3. A comparison was made between the results of the different DFT techniques. It is concluded that the hybrid PBE0 [J. P. Perdew, K. Burke, M. Ernzerhof, J. Chem. Phys. 1996, 105, 9982.] functional is able to correctly predict the structural stability and phonon properties for both the cubic and the ferroelectric phases of BaTiO3. A comparative phonon symmetry analysis of the four phases of BaTiO3 has been made for the first time, based on the site symmetry and irreducible representation indexes. Copyright © 2012 Wiley Periodicals, Inc.
Chromatic confocal microscope using hybrid aspheric diffractive lenses
NASA Astrophysics Data System (ADS)
Rayer, Mathieu; Mansfield, Daniel
2014-05-01
A chromatic confocal microscope is a single point non-contact distance measurement sensor. For three decades, the vast majority of chromatic confocal microscopes have used refractive lenses to code the measurement axis chromatically. However, such an approach limits the range of applications. In this paper the performances of refractive, diffractive and hybrid aspheric-diffractive lenses are compared. Hybrid aspheric-diffractive lenses combine the low geometric aberration of a diffractive lens with the high optical power of an aspheric lens. Hybrid aspheric-diffractive lenses can reduce the number of elements in an imaging system significantly or create large hyperchromatic lenses for sensing applications. In addition, diffractive lenses can improve the resolution and the dynamic range of a chromatic confocal microscope. However, to be suitable for commercial applications, the diffractive optical power must be significant; manufacturing such lenses is therefore a challenge. We show in this paper how a theoretical manufacturing model demonstrates that the best-performing hybrid aspheric-diffractive configuration is achieved with a stepped diffractive surface. The high optical quality of the stepped diffractive surface is then demonstrated experimentally.
GAMERA - The New Magnetospheric Code
NASA Astrophysics Data System (ADS)
Lyon, J.; Sorathia, K.; Zhang, B.; Merkin, V. G.; Wiltberger, M. J.; Daldorff, L. K. S.
2017-12-01
The Lyon-Fedder-Mobarry (LFM) code has been a main-line magnetospheric simulation code for 30 years. The code base, designed in the age of memory-to-memory vector machines, is still in wide use for science production but needs upgrading to ensure long-term sustainability. In this presentation, we will discuss our recent efforts to update and improve that code base and also highlight some recent results. The new project GAMERA, Grid Agnostic MHD for Extended Research Applications, has kept the original design characteristics of the LFM and made significant improvements. The original design included high order numerical differencing with very aggressive limiting, the ability to use arbitrary, but logically rectangular, grids, and maintenance of div B = 0 through the use of the Yee grid. Significant improvements include high-order upwinding and a non-clipping limiter. One other improvement with wider applicability is an improved averaging technique for the singularities in polar and spherical grids. The new code adopts a hybrid structure: multi-threaded OpenMP with an overarching MPI layer for large scale and coupled applications. The MPI layer uses a combination of standard MPI and the Global Array Toolkit from PNL to provide a lightweight mechanism for coupling codes together concurrently. The single processor code is highly efficient and can run magnetospheric simulations at the default CCMC resolution faster than real time on a MacBook Pro. We have run the new code through the Athena suite of tests, and the results compare favorably with the codes available to the astrophysics community. LFM/GAMERA has been applied to many different situations ranging from the inner and outer heliosphere to the magnetospheres of Venus, the Earth, Jupiter and Saturn. We present example results for the Earth's magnetosphere including a coupled ring current model (RCM), the magnetospheres of Jupiter and Saturn, and the inner heliosphere.
A hybrid approach to near-optimal launch vehicle guidance
NASA Technical Reports Server (NTRS)
Leung, Martin S. K.; Calise, Anthony J.
1992-01-01
This paper evaluates a proposed hybrid analytical/numerical approach to launch-vehicle guidance for ascent to orbit injection. The feedback-guidance approach is based on a piecewise nearly analytic zero-order solution evaluated using a collocation method. The zero-order solution is then improved through a regular perturbation analysis, wherein the neglected dynamics are corrected in the first-order term. For real-time implementation, the guidance approach requires solving a set of small dimension nonlinear algebraic equations and performing quadrature. Assessment of performance and reliability is carried out through closed-loop simulation for a vertically launched two-stage heavy-lift capacity vehicle to a low Earth orbit. The solutions are compared with optimal solutions generated from a multiple shooting code. In the example the guidance approach delivers over 99.9 percent of optimal performance and terminal constraint accuracy.
Color-coded Live Imaging of Heterokaryon Formation and Nuclear Fusion of Hybridizing Cancer Cells.
Suetsugu, Atsushi; Matsumoto, Takuro; Hasegawa, Kosuke; Nakamura, Miki; Kunisada, Takahiro; Shimizu, Masahito; Saji, Shigetoyo; Moriwaki, Hisataka; Bouvet, Michael; Hoffman, Robert M
2016-08-01
Fusion of cancer cells has been studied for over half a century. However, the steps involved after initial fusion between cells, such as heterokaryon formation and nuclear fusion, have been difficult to observe in real time. In order to be able to visualize these steps, we have established cancer-cell sublines from the human HT-1080 fibrosarcoma, one expressing green fluorescent protein (GFP) linked to histone H2B in the nucleus and a red fluorescent protein (RFP) in the cytoplasm and the other subline expressing RFP in the nucleus (mCherry) linked to histone H2B and GFP in the cytoplasm. The two reciprocal color-coded sublines of HT-1080 cells were fused using the Sendai virus. The fused cells were cultured on plastic and observed using an Olympus FV1000 confocal microscope. Multi-nucleate (heterokaryotic) cancer cells, in addition to hybrid cancer cells with single-or multiple-fused nuclei, including fused mitotic nuclei, were observed among the fused cells. Heterokaryons with red, green, orange and yellow nuclei were observed by confocal imaging, even in single hybrid cells. The orange and yellow nuclei indicate nuclear fusion. Red and green nuclei remained unfused. Cell fusion with heterokaryon formation and subsequent nuclear fusion resulting in hybridization may be an important natural phenomenon between cancer cells that may make them more malignant. The ability to image the complex processes following cell fusion using reciprocal color-coded cancer cells will allow greater understanding of the genetic basis of malignancy. Copyright© 2016 International Institute of Anticancer Research (Dr. John G. Delinassios), All rights reserved.
Hybrid-PIC Computer Simulation of the Plasma and Erosion Processes in Hall Thrusters
NASA Technical Reports Server (NTRS)
Hofer, Richard R.; Katz, Ira; Mikellides, Ioannis G.; Gamero-Castano, Manuel
2010-01-01
HPHall software simulates and tracks the time-dependent evolution of the plasma and erosion processes in the discharge chamber and near-field plume of Hall thrusters. HPHall is an axisymmetric solver that employs a hybrid fluid/particle-in-cell (Hybrid-PIC) numerical approach. HPHall, originally developed by MIT in 1998, was upgraded to HPHall-2 by the Polytechnic University of Madrid in 2006. The Jet Propulsion Laboratory has continued the development of HPHall-2 through upgrades to the physical models employed in the code, and the addition of entirely new ones. Primary among these are the inclusion of a three-region electron mobility model that more accurately depicts the cross-field electron transport, and the development of an erosion sub-model that allows for the tracking of the erosion of the discharge chamber wall. The code is being developed to provide NASA science missions with a predictive tool of Hall thruster performance and lifetime that can be used to validate Hall thrusters for missions.
NASA Astrophysics Data System (ADS)
Chen, Yi; Cartmell, Matthew
2010-03-01
A specialised hybrid controller is applied to the control of motorised space tether spin-up coupled with an axial and a torsional oscillation phenomenon. A seven-degree-of-freedom (7-DOF) dynamic model of a motorised momentum exchange tether is used as the basis for interplanetary payload exchange in the context of control. The tether comprises a symmetrical double payload configuration, with an outrigger counter inertia and massive central facility. It is shown that including axial and torsional elasticity permits an enhanced level of performance prediction accuracy and a useful departure from the usual rigid body representations, particularly for accurate payload positioning at strategic points. A simulation with given initial condition data has been devised in a connecting programme between control code written in MATLAB and dynamics simulation code constructed within MATHEMATICA. It is shown that there is an enhanced level of spin-up control for the 7-DOF motorised momentum exchange tether system using the specialised hybrid controller.
Model-based design of RNA hybridization networks implemented in living cells.
Rodrigo, Guillermo; Prakash, Satya; Shen, Shensi; Majer, Eszter; Daròs, José-Antonio; Jaramillo, Alfonso
2017-09-19
Synthetic gene circuits allow the behavior of living cells to be reprogrammed, and non-coding small RNAs (sRNAs) are increasingly being used as programmable regulators of gene expression. However, sRNAs (natural or synthetic) are generally used to regulate single target genes, while complex dynamic behaviors would require networks of sRNAs regulating each other. Here, we report a strategy for implementing such networks that exploits hybridization reactions carried out exclusively by multifaceted sRNAs that are both targets of and triggers for other sRNAs. These networks are ultimately coupled to the control of gene expression. We relied on a thermodynamic model of the different stable conformational states underlying this system at the nucleotide level. To test our model, we designed five different RNA hybridization networks with a linear architecture, and we implemented them in Escherichia coli. We validated the network architecture at the molecular level by native polyacrylamide gel electrophoresis, as well as the network function at the bacterial population and single-cell levels with a fluorescent reporter. Our results suggest that it is possible to engineer complex cellular programs based on RNA from first principles. Because these networks are mainly based on physical interactions, our designs could be expanded to other organisms as portable regulatory resources or to implement biological computations. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
A Hybrid Constraint Representation and Reasoning Framework
NASA Technical Reports Server (NTRS)
Golden, Keith; Pang, Wan-Lin
2003-01-01
This paper introduces JNET, a novel constraint representation and reasoning framework that supports procedural constraints and constraint attachments, providing a flexible way of integrating the constraint reasoner with a run-time software environment. Attachments in JNET are constraints over arbitrary Java objects, which are defined using Java code, at runtime, with no changes to the JNET source code.
MEASUREMENTS OF NEUTRON SPECTRA IN 0.8-GEV AND 1.6-GEV PROTON-IRRADIATED W AND NA THICK TARGETS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Titarenko, Y. E.; Batyaev, V. F.; Zhivun, V. M.
2001-01-01
Measurements of neutron spectra in W and Na targets irradiated by 0.8 GeV and 1.6 GeV protons are presented. Measurements were made by the TOF technique using the proton beam from the ITEP U-10 synchrotron. Neutrons were detected with BICRON-511 liquid scintillator-based detectors. The neutron detection efficiency was calculated via the SCINFUL and CECIL codes. The W results are compared with similar data obtained elsewhere. The measured neutron spectra are compared with the LAHET and CEM2k code simulation results. An attempt is made to explain some observed disagreements between experiments and simulations. The presented results are of interest both in terms of nuclear data buildup and as a benchmark of the up-to-date predictive power of the simulation codes used in designing hybrid accelerator-driven system (ADS) facilities with sodium-cooled tungsten targets.
Hsieh, Sheng-Hsun; Li, Yung-Hui; Tien, Chung-Hao; Chang, Chin-Chen
2016-12-01
Iris recognition has gained increasing popularity over the last few decades; however, the stand-off distance in a conventional iris recognition system is too short, which limits its application. In this paper, we propose a novel hardware-software hybrid method to increase the stand-off distance in an iris recognition system. When designing the system hardware, we use an optimized wavefront coding technique to extend the depth of field. To compensate for the blurring of the image caused by wavefront coding, on the software side, the proposed system uses a local patch-based super-resolution method to restore the blurred image to its clear version. The collaborative effect of the new hardware design and software post-processing showed great potential in our experiment. The experimental results showed that such improvement cannot be achieved by using a hardware- or software-only design. The proposed system can increase the capture volume of a conventional iris recognition system by three times and maintain the system's high recognition rate.
Chung, Kuo-Liang; Hsu, Tsu-Chun; Huang, Chi-Chao
2017-10-01
In this paper, we propose a novel and effective hybrid method, which joins the conventional chroma subsampling and the distortion-minimization-based luma modification together, to improve the quality of the reconstructed RGB full-color image. Assume the input RGB full-color image has been transformed to a YUV image prior to compression. For each 2×2 UV block, 4:2:0 subsampling is applied to determine the subsampled U and V components, U_s and V_s. Based on U_s, V_s, and the corresponding original 2×2 RGB block, a main theorem is provided to determine the ideally modified 2×2 luma block in constant time such that the color peak signal-to-noise ratio (CPSNR) distortion between the original 2×2 RGB block and the reconstructed 2×2 RGB block is minimized in a globally optimal sense. Furthermore, the proposed hybrid method and the delivered theorem are adjusted to tackle digital time delay integration images and Bayer mosaic images, whose Bayer CFA structure has been widely used in modern commercial digital cameras. Based on the IMAX, Kodak, and screen content test image sets, the experimental results demonstrate that, in High Efficiency Video Coding, the proposed hybrid method achieves substantial quality improvement of the reconstructed RGB images, in terms of CPSNR quality, visual effect, CPSNR-bitrate trade-off, and Bjøntegaard delta PSNR performance, when compared with existing chroma subsampling schemes.
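The pipeline discussed above (4:2:0 chroma subsampling of each 2×2 block followed by a luma modification chosen to minimize the RGB reconstruction error) can be sketched as follows. The paper derives the optimal luma in closed form; the brute-force search over a single common luma offset and the YCbCr-style conversion matrix used here are simplifying assumptions.

```python
# Sketch: subsample chroma over a 2x2 block, then search a common luma offset
# that minimizes the squared RGB reconstruction error for that block.
import numpy as np

M = np.array([[ 0.299,  0.587,  0.114],
              [-0.169, -0.331,  0.500],
              [ 0.500, -0.419, -0.081]])     # RGB -> YUV (approximate, BT.601-style)
Minv = np.linalg.inv(M)

rgb = np.array([[100, 50, 30], [110, 60, 35], [90, 45, 25], [120, 70, 40]], float)
yuv = rgb @ M.T
u_s, v_s = yuv[:, 1].mean(), yuv[:, 2].mean()    # 4:2:0: one U and one V per 2x2 block

def block_sse(y):
    """Squared error between the original block and its reconstruction from (y, u_s, v_s)."""
    rec = np.column_stack([y, np.full(4, u_s), np.full(4, v_s)]) @ Minv.T
    return np.sum((rec - rgb) ** 2)

best_y = min((yuv[:, 0] + d for d in np.linspace(-10, 10, 201)),
             key=block_sse)                      # naive search, offset shared by the block
print(block_sse(yuv[:, 0]), block_sse(best_y))   # distortion before and after luma modification
```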
Optimal lightpath placement on a metropolitan-area network linked with optical CDMA local nets
NASA Astrophysics Data System (ADS)
Wang, Yih-Fuh; Huang, Jen-Fa
2008-01-01
A flexible optical metropolitan-area network (OMAN) [J.F. Huang, Y.F. Wang, C.Y. Yeh, Optimal configuration of OCDMA-based MAN with multimedia services, in: 23rd Biennial Symposium on Communications, Queen's University, Kingston, Canada, May 29-June 2, 2006, pp. 144-148] structured with OCDMA linkage is proposed to support multimedia services with multi-rate or various qualities of service. To prioritize transmissions in OCDMA, the orthogonal variable spreading factor (OVSF) codes widely used in wireless CDMA are adopted. In addition, for feasible multiplexing, unipolar OCDMA modulation [L. Nguyen, B. Aazhang, J.F. Young, All-optical CDMA with bipolar codes, IEEE Electron. Lett. 31 (6) (1995) 469-470] is used to generate the code selector of multi-rate OMAN, and a flexible fiber-grating-based system is used for the equipment on OCDMA-OVSF code. These enable an OMAN to assign suitable OVSF codes when creating different-rate lightpaths. How to optimally configure a multi-rate OMAN is a challenge because of displaced lightpaths. In this paper, a genetically modified genetic algorithm (GMGA) [L.R. Chen, Flexible fiber Bragg grating encoder/decoder for hybrid wavelength-time optical CDMA, IEEE Photon. Technol. Lett. 13 (11) (2001) 1233-1235] is used to preplan lightpaths in order to optimally configure an OMAN. To evaluate the performance of the GMGA, we compared it with different preplanning optimization algorithms. Simulation results revealed that the GMGA very efficiently solved the problem.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eylenceoğlu, E.; Rafatov, I., E-mail: rafatov@metu.edu.tr; Kudryavtsev, A. A.
2015-01-15
A two-dimensional hybrid Monte Carlo–fluid numerical code is developed and applied to model the dc glow discharge. The model is based on the separation of electrons into two parts: the low energetic (slow) and high energetic (fast) electron groups. Ions and slow electrons are described within the fluid model using the drift-diffusion approximation for particle fluxes. Fast electrons, represented by a suitable number of super-particles emitted from the cathode, are responsible for ionization processes in the discharge volume, which are simulated by the Monte Carlo collision method. The electrostatic field is obtained from the solution of the Poisson equation. The test calculations were carried out for an argon plasma. Main properties of the glow discharge are considered. Current-voltage curves, the electric field reversal phenomenon, and the vortex current formation are presented and discussed. The results are compared to those obtained from the simple and extended fluid models. Contrary to reports in the literature, the analysis does not reveal significant advantages of existing hybrid methods over the extended fluid model.
Bichutskiy, Vadim Y.; Colman, Richard; Brachmann, Rainer K.; Lathrop, Richard H.
2006-01-01
Complex problems in life science research give rise to multidisciplinary collaboration, and hence, to the need for heterogeneous database integration. The tumor suppressor p53 is mutated in close to 50% of human cancers, and a small drug-like molecule with the ability to restore native function to cancerous p53 mutants is a long-held medical goal of cancer treatment. The Cancer Research DataBase (CRDB) was designed in support of a project to find such small molecules. As a cancer informatics project, the CRDB involved small molecule data, computational docking results, functional assays, and protein structure data. As an example of the hybrid strategy for data integration, it combined the mediation and data warehousing approaches. This paper uses the CRDB to illustrate the hybrid strategy as a viable approach to heterogeneous data integration in biomedicine, and provides a design method for those considering similar systems. More efficient data sharing implies increased productivity, and, hopefully, improved chances of success in cancer research. (Code and database schemas are freely downloadable, http://www.igb.uci.edu/research/research.html.) PMID:19458771
Plans for wind energy system simulation
NASA Technical Reports Server (NTRS)
Dreier, M. E.
1978-01-01
A digital computer code and a special-purpose hybrid computer were introduced. The digital computer program, the Root Perturbation Method or RPM, is an implementation of the classic Floquet procedure which circumvents numerical problems associated with the extraction of Floquet roots. The hybrid computer, the Wind Energy System Time domain simulator (WEST), yields real-time loads and deformation information essential to design and system stability investigations.
Hybrid codes with finite electron mass
NASA Astrophysics Data System (ADS)
Lipatov, A. S.
This report is devoted to the current status of the hybrid multiscale simulation technique. Different aspects of the modeling are discussed. In particular, we consider the different levels of description of the plasma, with the main attention paid to conventional hybrid models. We discuss the main steps of the time integration of the Vlasov/Maxwell system of equations. Particular attention is paid to models with finite electron mass. Such a model may allow us to explore plasma systems with multiscale phenomena ranging from ion to electron scales. As an application of the hybrid modeling technique, we consider the simulation of plasma processes at collisionless shocks and, briefly, magnetic field reconnection.
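The ordering of the sub-steps in a conventional hybrid model (kinetic ions, fluid electrons) can be sketched structurally as below. The field and moment computations are trivial placeholders, and the Ohm's-law comment shows the conventional massless-electron closure rather than the finite-electron-mass form discussed in the report.

```python
# Structural sketch of one hybrid time step; placeholders only, 1-D periodic domain.
import numpy as np

def hybrid_step(x, v, E, B, dt, qm=1.0):
    # 1) push kinetic ions in the electromagnetic field (placeholder: E-only kick)
    v = v + qm * np.interp(x, np.linspace(0, 1, E.size), E) * dt
    x = (x + v * dt) % 1.0
    # 2) deposit ion moments on the grid (placeholder: density by histogram)
    n, _ = np.histogram(x, bins=E.size, range=(0.0, 1.0))
    # 3) electron fluid closure: massless-electron Ohm's law is
    #    E = -u_i x B - grad(p_e)/(e n); finite electron mass adds an inertia term.
    #    Placeholder: isothermal electron pressure gradient only.
    pe = 1e-3 * n
    E = -np.gradient(pe) / np.maximum(n, 1)
    # 4) advance B from Faraday's law (placeholder: B unchanged in this toy)
    return x, v, E, B

x = np.random.default_rng(0).random(1000)
v = np.zeros(1000)
E, B = np.zeros(64), np.ones(64)
for _ in range(10):
    x, v, E, B = hybrid_step(x, v, E, B, dt=1e-3)
print(E.max())
```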
Holland, M J; Holland, J P; Thill, G P; Jackson, K A
1981-02-10
Segments of yeast genomic DNA containing two enolase structural genes have been isolated by subculture cloning procedures using a cDNA hybridization probe synthesized from purified yeast enolase mRNA. Based on restriction endonuclease and transcriptional maps of these two segments of yeast DNA, each hybrid plasmid contains a region of extensive nucleotide sequence homology which forms hybrids with the cDNA probe. The DNA sequences which flank this homologous region in the two hybrid plasmids are nonhomologous, indicating that these sequences are nontandemly repeated in the yeast genome. The complete nucleotide sequence of the coding as well as the flanking noncoding regions of these genes has been determined. The amino acid sequence predicted from one reading frame of both structural genes is extremely similar to that determined for yeast enolase (Chin, C. C. Q., Brewer, J. M., Eckard, E., and Wold, F. (1981) J. Biol. Chem. 256, 1370-1376), confirming that these isolated structural genes encode yeast enolase. The nucleotide sequences of the coding regions of the genes are approximately 95% homologous, and neither gene contains an intervening sequence. Codon utilization in the enolase genes follows the same biased pattern previously described for two yeast glyceraldehyde-3-phosphate dehydrogenase structural genes (Holland, J. P., and Holland, M. J. (1980) J. Biol. Chem. 255, 2596-2605). DNA blotting analysis confirmed that the isolated segments of yeast DNA are colinear with yeast genomic DNA and that there are two nontandemly repeated enolase genes per haploid yeast genome. The noncoding portions of the two enolase genes adjacent to the initiation and termination codons are approximately 70% homologous and contain sequences thought to be involved in the synthesis and processing of messenger RNA. Finally, there are regions of extensive homology between the two enolase structural genes and two yeast glyceraldehyde-3-phosphate dehydrogenase structural genes within the 5'-noncoding portions of these glycolytic genes.
Simulation Studies for Inspection of the Benchmark Test with PATRASH
NASA Astrophysics Data System (ADS)
Shimosaki, Y.; Igarashi, S.; Machida, S.; Shirakata, M.; Takayama, K.; Noda, F.; Shigaki, K.
2002-12-01
In order to delineate the halo-formation mechanisms in a typical FODO lattice, a 2-D simulation code PATRASH (PArticle TRAcking in a Synchrotron for Halo analysis) has been developed. The electric field originating from the space charge is calculated by the Hybrid Tree code method. Benchmark tests utilizing the three simulation codes ACCSIM, PATRASH and SIMPSONS were carried out; the results were confirmed to be in fair agreement with each other. The details of the PATRASH simulation are discussed with some examples.
2016-03-01
Fine Location. Code positions 9–10: this substring represents the spatial fine location itself (for example, upper, pretest, or Hybrid III mid-sized male ATD). Physical dimension. Code positions 13–14: this substring represents the type of the …
ALE3D: An Arbitrary Lagrangian-Eulerian Multi-Physics Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Noble, Charles R.; Anderson, Andrew T.; Barton, Nathan R.
ALE3D is a multi-physics numerical simulation software tool utilizing arbitrary Lagrangian-Eulerian (ALE) techniques. The code is written to address both two-dimensional (2D plane and axisymmetric) and three-dimensional (3D) physics and engineering problems using a hybrid finite element and finite volume formulation to model fluid and elastic-plastic response of materials on an unstructured grid. As shown in Figure 1, ALE3D is a single code that integrates many physical phenomena.
PENTACLE: Parallelized particle-particle particle-tree code for planet formation
NASA Astrophysics Data System (ADS)
Iwasawa, Masaki; Oshino, Shoichi; Fujii, Michiko S.; Hori, Yasunori
2017-10-01
We have newly developed a parallelized particle-particle particle-tree code for planet formation, PENTACLE, which is a parallelized hybrid N-body integrator executed on a CPU-based (super)computer. PENTACLE uses a fourth-order Hermite algorithm to calculate gravitational interactions between particles within a cut-off radius and a Barnes-Hut tree method for gravity from particles beyond. It also implements an open-source library designed for full automatic parallelization of particle simulations, FDPS (Framework for Developing Particle Simulator), to parallelize a Barnes-Hut tree algorithm for a memory-distributed supercomputer. These allow us to handle 1-10 million particles in a high-resolution N-body simulation on CPU clusters for collisional dynamics, including physical collisions in a planetesimal disc. In this paper, we show the performance and the accuracy of PENTACLE in terms of the cut-off radius R̃_cut and the time-step Δt. It turns out that the accuracy of a hybrid N-body simulation is controlled through Δt/R̃_cut, and that Δt/R̃_cut ≈ 0.1 is necessary to accurately simulate the accretion process of a planet for ≥10^6 yr. For all those interested in large-scale particle simulations, PENTACLE, customized for planet formation, will be freely available from https://github.com/PENTACLE-Team/PENTACLE under the MIT licence.
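The particle-particle particle-tree splitting named above can be illustrated with a short sketch: pairwise gravity inside the cut-off radius is accumulated for the direct (Hermite-integrated) part, while the remainder is assigned to the tree part. Here the tree contribution is evaluated by direct summation purely for illustration, and the sharp cut-off (no smooth changeover kernel) is a simplifying assumption.

```python
# P3T-style force splitting sketch (G = 1 units, Plummer softening eps).
import numpy as np

def split_accel(pos, mass, r_cut, eps=1e-3):
    n = len(mass)
    a_near = np.zeros_like(pos)   # short-range: direct sum, high-order integrator
    a_far  = np.zeros_like(pos)   # long-range: tree solver (placeholder: direct sum)
    for i in range(n):
        d = pos - pos[i]
        r = np.sqrt(np.sum(d * d, axis=1) + eps**2)
        r[i] = np.inf                               # skip self-interaction
        contrib = (mass / r**3)[:, None] * d
        near = r < r_cut
        a_near[i] = contrib[near].sum(axis=0)
        a_far[i]  = contrib[~near].sum(axis=0)
    return a_near, a_far

rng = np.random.default_rng(1)
pos, mass = rng.random((100, 3)), np.full(100, 1e-6)
a_near, a_far = split_accel(pos, mass, r_cut=0.1)
print(a_near[0] + a_far[0])   # total acceleration on particle 0
```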
NASA Astrophysics Data System (ADS)
Marx, Alain; Lütjens, Hinrich
2017-03-01
A hybrid MPI/OpenMP parallel version of the XTOR-2F code [Lütjens and Luciani, J. Comput. Phys. 229 (2010) 8130] solving the two-fluid MHD equations in full tokamak geometry by means of an iterative Newton-Krylov matrix-free method has been developed. The present work shows that the code has been parallelized significantly despite the numerical profile of the problem solved by XTOR-2F, i.e. a discretization with pseudo-spectral representations in all angular directions, the stiffness of the two-fluid stability problem in tokamaks, and the use of a direct LU decomposition to invert the physical pre-conditioner at every Krylov iteration of the solver. The execution time of the parallelized version is an order of magnitude smaller than the sequential one for low resolution cases, with an increasing speedup when the discretization mesh is refined. Moreover, it allows simulations with higher resolutions, previously out of reach because of memory limitations, to be performed.
Energy spectrum of 208Pb(n,x) reactions
NASA Astrophysics Data System (ADS)
Tel, E.; Kavun, Y.; Özdoǧan, H.; Kaplan, A.
2018-02-01
Fission and fusion reactor technologies have been investigated worldwide since the 1950s. For reactor technology, investigations of fission and fusion reactions play an important role in improving new-generation technologies. In particular, neutron reaction studies have an important place in the development of nuclear materials, so neutron effects on materials should be studied both theoretically and experimentally to improve reactor design. For this reason, nuclear reaction codes are very useful tools when experimental data are unavailable. For such circumstances, scientists have created many nuclear reaction codes, such as ALICE/ASH, CEM95, PCROSS, TALYS, GEANT, and FLUKA. In this study we used the ALICE/ASH, PCROSS and CEM95 codes to calculate the energy spectra of particles emitted from Pb bombarded by neutrons. While the Weisskopf-Ewing model has been used for the equilibrium process in the calculations, the full exciton, hybrid and geometry-dependent hybrid nuclear reaction models have been used for the pre-equilibrium process. The calculated results are discussed and compared with experimental data taken from EXFOR.
A Two-Stage Procedure Toward the Efficient Implementation of PANS and Other Hybrid Turbulence Models
NASA Technical Reports Server (NTRS)
Abdol-Hamid, Khaled S.; Girimaji, Sharath S.
2004-01-01
The main objective of this article is to introduce and to show the implementation of a novel two-stage procedure to efficiently estimate the level of scale resolution possible for a given flow on a given grid for Partially Averaged Navier-Stokes (PANS) and other hybrid models. It has been found that the prescribed scale resolution can play a major role in obtaining accurate flow solutions. The first step is to solve the unsteady or steady Reynolds Averaged Navier-Stokes (URANS/RANS) equations. From this preprocessing step, the turbulence length-scale field is obtained. This is then used to compute the characteristic length-scale ratio between the turbulence scale and the grid spacing. Based on this ratio, we can assess the finest scale resolution that a given grid for a given flow can support. Along with other additional criteria, we are able to analytically identify the appropriate hybrid solver resolution for different regions of the flow. This procedure removes the grid dependency issue that affects the results produced by different hybrid procedures in solving unsteady flows. The formulation, implementation methodology, and validation example are presented. We implemented this capability in a production Computational Fluid Dynamics (CFD) code, PAB3D, for the simulation of unsteady flows.
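A hedged sketch of the two-stage idea: take k and epsilon from the precursor RANS solution, form the turbulence length scale, compare it with the local grid spacing, and from that ratio prescribe a PANS unresolved-kinetic-energy fraction f_k for each region. The 2/3-power estimate is a commonly quoted guideline; the prefactor and the clipping used below are assumptions, not the paper's prescription.

```python
# Two-stage sketch: RANS-derived length scale -> per-region PANS resolution parameter.
import numpy as np

def fk_field(k, eps, delta, prefactor=3.0):
    """k, eps: RANS turbulence kinetic energy and dissipation; delta: local grid spacing."""
    L_t = k**1.5 / eps                        # turbulence length scale from the RANS field
    fk = prefactor * (delta / L_t) ** (2.0 / 3.0)
    return np.clip(fk, 0.0, 1.0)              # f_k = 1 recovers RANS; smaller f_k resolves more

k     = np.array([1.0, 0.5, 0.1])             # illustrative RANS values, m^2/s^2
eps   = np.array([10.0, 5.0, 1.0])            # m^2/s^3
delta = np.array([0.01, 0.01, 0.01])          # m
print(fk_field(k, eps, delta))
```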
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mori, Warren
The UCLA Plasma Simulation Group is a major partner of the "Community Petascale Project for Accelerator Science and Simulation". This is the final technical report. We include an overall summary, a list of publications, progress for the most recent year, and individual progress reports for each year. We have made tremendous progress during the three years. SciDAC funds have contributed to the development of a large number of skeleton codes that illustrate how to write PIC codes with a hierarchy of parallelism. These codes cover 2D and 3D as well as electrostatic solvers (which are used in beam dynamics codes and quasi-static codes) and electromagnetic solvers (which are used in plasma based accelerator codes). We also used these ideas to develop a GPU enabled version of OSIRIS. SciDAC funds also contributed to the development of strategies to eliminate the Numerical Cerenkov Instability (NCI), which is an issue when carrying out laser wakefield accelerator (LWFA) simulations in a boosted frame and when quantifying the emittance and energy spread of self-injected electron beams. This work included the development of a new code called UPIC-EMMA, which is an FFT-based electromagnetic PIC code, and of new hybrid algorithms in OSIRIS. A new hybrid (PIC in r-z and gridless in φ) algorithm was implemented in OSIRIS. In this algorithm the fields and current are expanded into azimuthal harmonics and the complex amplitude for each harmonic is calculated separately. The contributions from each harmonic are summed and then used to push the particles. This algorithm permits modeling plasma based acceleration with some 3D effects but with the computational load of a 2D r-z PIC code. We developed a rigorously charge conserving current deposit for this algorithm. Very recently, we made progress in combining the speed-up from the quasi-3D algorithm with that from the Lorentz boosted frame. SciDAC funds also contributed to the improvement and speed-up of the quasi-static PIC code QuickPIC. We have also used our suite of PIC codes to make scientific discoveries. Highlights include supporting FACET experiments which achieved the milestones of showing high beam loading and energy transfer efficiency from a drive electron beam to a witness electron beam, and the discovery of a self-loading regime for high-gradient acceleration of a positron beam. Both of these experimental milestones were published in Nature together with supporting QuickPIC simulation results. Simulation results from QuickPIC were used on the cover of Nature in one case. We are also making progress on using highly resolved QuickPIC simulations to show that ion motion may not lead to catastrophic emittance growth for tightly focused electron bunches loaded into nonlinear wakefields. This could mean that fully self-consistent beam loading scenarios are possible. This work remains in progress. OSIRIS simulations were used to discover how 200 MeV electron rings are formed in LWFA experiments, how to generate electrons that have a series of bunches on the nanometer scale, and how to transport electron beams from (into) plasma sections into (from) conventional beam optic sections.
Large-eddy simulation/Reynolds-averaged Navier-Stokes hybrid schemes for high speed flows
NASA Astrophysics Data System (ADS)
Xiao, Xudong
Three LES/RANS hybrid schemes have been proposed for the prediction of high speed separated flows. Each method couples the k-zeta (enstrophy) RANS model with an LES subgrid-scale one-equation model by using a blending function that is coordinate system independent. Two of these functions are based on the turbulence dissipation length scale and the grid size, while the third one has no explicit dependence on the grid. To implement the LES/RANS hybrid schemes, a new rescaling-reintroducing method is used to generate time-dependent turbulent inflow conditions. The hybrid schemes have been tested on a Mach 2.88 flow over a 25 degree compression-expansion ramp and a Mach 2.79 flow over a 20 degree compression ramp. A special computation procedure has been designed to prevent the separation zone from expanding upstream to the recycle-plane. The code is parallelized using the Message Passing Interface (MPI) and is optimized for running on an IBM SP3 parallel machine. The scheme was validated first for a flat plate. It was shown that the blending function has to be monotonic to prevent the RANS region from appearing in the LES region. In the 25 deg ramp case, the hybrid schemes provided better agreement with experiment in the recovery region. Grid refinement studies demonstrated the importance of using a grid-independent blending function and showed further improvement in agreement with experiment in the recovery region. In the 20 deg ramp case, with a relatively finer grid, the hybrid scheme characterized by the grid-independent blending function predicted the flow field well in both the separation region and the recovery region. Therefore, with an "appropriately" fine grid, current hybrid schemes are promising for the simulation of shock wave/boundary layer interaction problems.
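One way to picture a grid-dependent blending function of the kind compared above is sketched below: it compares the RANS turbulence dissipation length scale with the local grid scale and blends the RANS and subgrid eddy viscosities monotonically. The tanh form and its constants are assumptions; the abstract only requires monotonicity so that RANS pockets do not reappear inside the LES region.

```python
# Illustrative monotonic LES/RANS blending based on length-scale-to-grid ratio.
import numpy as np

def blend(nu_t_rans, nu_t_sgs, k, eps, delta, c=1.0, width=0.5):
    L_t = k**1.5 / eps                         # turbulence (dissipation) length scale
    xi = np.log(L_t / (c * delta))             # > 0: grid can resolve the large eddies
    f = 0.5 * (1.0 + np.tanh(xi / width))      # monotonic: 0 -> pure RANS, 1 -> pure LES
    return (1.0 - f) * nu_t_rans + f * nu_t_sgs

print(blend(nu_t_rans=1e-3, nu_t_sgs=1e-5, k=0.5, eps=5.0, delta=0.02))
```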
Praz, Coraline R.; Menardo, Fabrizio; Robinson, Mark D.; Müller, Marion C.; Wicker, Thomas; Bourras, Salim; Keller, Beat
2018-01-01
Powdery mildew is an important disease of cereals. It is caused by one species, Blumeria graminis, which is divided into formae speciales each of which is highly specialized to one host. Recently, a new form capable of growing on triticale (B.g. triticale) has emerged through hybridization between wheat and rye mildews (B.g. tritici and B.g. secalis, respectively). In this work, we used RNA sequencing to study the molecular basis of host adaptation in B.g. triticale. We analyzed gene expression in three B.g. tritici isolates, two B.g. secalis isolates and two B.g. triticale isolates and identified a core set of putative effector genes that are highly expressed in all formae speciales. We also found that the genes differentially expressed between isolates of the same form as well as between different formae speciales were enriched in putative effectors. Their coding genes belong to several families including some which contain known members of mildew avirulence (Avr) and suppressor (Svr) genes. Based on these findings we propose that effectors play an important role in host adaptation that is mechanistically based on Avr-Resistance gene-Svr interactions. We also found that gene expression in the B.g. triticale hybrid is mostly conserved with the parent-of-origin, but some genes inherited from B.g. tritici showed a B.g. secalis-like expression. Finally, we identified 11 unambiguous cases of putative effector genes with hybrid-specific, non-parent of origin gene expression, and we propose that they are possible determinants of host specialization in triticale mildew. These data suggest that altered expression of multiple effector genes, in particular Avr and Svr related factors, might play a role in mildew host adaptation based on hybridization. PMID:29441081
ERIC Educational Resources Information Center
Henning, Elizabeth
2012-01-01
From the field of developmental psycholinguistics and from conceptual development theory there is evidence that excessive linguistic "code-switching" in early school education may pose some hazards for the learning of young multilingual children. In this article the author addresses the issue, invoking post-Piagetian and neo-Vygotskian…
1978-07-01
(1) A paper titled "Particle-Fluid Hybrid Codes Applied to Beam-Plasma, Ring-Plasma Instabilities" was presented at Monterey (see Section V, "Particle-Fluid Hybrid Codes Applied to Beam-Plasma, Ring-Plasma Instabilities"). (2) A. Peiravi and C. K. Birdsall, "Self-Heating of … Thermal …"
NASA Technical Reports Server (NTRS)
Spradley, L.; Pearson, M.
1979-01-01
The General Interpolants Method (GIM), a three-dimensional, time-dependent, hybrid procedure for generating numerical analogs of the conservation laws, is described. The Navier-Stokes equations written for an Eulerian system are considered. The conversion of the GIM code to the STAR-100 computer and the implementation of 'GIM-ON-STAR' are discussed.
Automatic-repeat-request error control schemes
NASA Technical Reports Server (NTRS)
Lin, S.; Costello, D. J., Jr.; Miller, M. J.
1983-01-01
Error detection incorporated with automatic-repeat-request (ARQ) is widely used for error control in data communication systems. This method of error control is simple and provides high system reliability. If a properly chosen code is used for error detection, virtually error-free data transmission can be attained. Various types of ARQ and hybrid ARQ schemes, and error detection using linear block codes are surveyed.
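A minimal stop-and-wait sketch of the error-detection-plus-retransmission principle surveyed above is given below: each frame carries a CRC, the receiver accepts only frames whose CRC checks, and the sender retransmits otherwise. The lossy channel model and the retry limit are illustrative assumptions, with zlib.crc32 standing in for the error-detecting block code.

```python
# Stop-and-wait ARQ sketch with CRC-based error detection.
import random, zlib

def channel(frame, p_err=0.3, rng=random.Random(42)):
    """Flip one byte with probability p_err to emulate channel noise."""
    if rng.random() < p_err:
        i = rng.randrange(len(frame))
        frame = frame[:i] + bytes([frame[i] ^ 0xFF]) + frame[i + 1:]
    return frame

def send(payload, max_tries=10):
    crc = zlib.crc32(payload).to_bytes(4, "big")
    for attempt in range(1, max_tries + 1):
        received = channel(payload + crc)
        data, rx_crc = received[:-4], received[-4:]
        if zlib.crc32(data).to_bytes(4, "big") == rx_crc:
            return data, attempt            # receiver would send an ACK here
        # CRC failure: NAK / timeout, retransmit the same frame
    raise RuntimeError("retry limit exceeded")

data, attempts = send(b"hybrid ARQ test frame")
print(data, "delivered after", attempts, "attempt(s)")
```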
A NEW HYBRID N-BODY-COAGULATION CODE FOR THE FORMATION OF GAS GIANT PLANETS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bromley, Benjamin C.; Kenyon, Scott J., E-mail: bromley@physics.utah.edu, E-mail: skenyon@cfa.harvard.edu
2011-04-20
We describe an updated version of our hybrid N-body-coagulation code for planet formation. In addition to the features of our 2006-2008 code, our treatment now includes algorithms for the one-dimensional evolution of the viscous disk, the accretion of small particles in planetary atmospheres, gas accretion onto massive cores, and the response of N-bodies to the gravitational potential of the gaseous disk and the swarm of planetesimals. To validate the N-body portion of the algorithm, we use a battery of tests in planetary dynamics. As a first application of the complete code, we consider the evolution of Pluto-mass planetesimals in a swarm of 0.1-1 cm pebbles. In a typical evolution time of 1-3 Myr, our calculations transform 0.01-0.1 M_sun disks of gas and dust into planetary systems containing super-Earths, Saturns, and Jupiters. Low-mass planets form more often than massive planets; disks with smaller α form more massive planets than disks with larger α. For Jupiter-mass planets, masses of solid cores are 10-100 M⊕.
Studies of Planet Formation using a Hybrid N-body + Planetesimal Code
NASA Technical Reports Server (NTRS)
Kenyon, Scott J.; Bromley, Benjamin C.; Salamon, Michael (Technical Monitor)
2005-01-01
The goal of our proposal was to use a hybrid multi-annulus planetesimal/n-body code to examine the planetesimal theory, one of the two main theories of planet formation. We developed this code to follow the evolution of numerous 1 m to 1 km planetesimals as they collide, merge, and grow into full-fledged planets. Our goal was to apply the code to several well-posed, topical problems in planet formation and to derive observational consequences of the models. We planned to construct detailed models to address two fundamental issues: 1) icy planets - models for icy planet formation will demonstrate how the physical properties of debris disks, including the Kuiper Belt in our solar system, depend on initial conditions and input physics; and 2) terrestrial planets - calculations following the evolution of 1-10 km planetesimals into Earth-mass planets and rings of dust will provide a better understanding of how terrestrial planets form and interact with their environment. During the past year, we made progress on each issue. Papers published in 2004 are summarized. Summaries of work to be completed during the first half of 2005 and work planned for the second half of 2005 are included.
Demonstration of the CDMA-mode CAOS smart camera.
Riza, Nabeel A; Mazhar, Mohsin A
2017-12-11
Demonstrated is the code division multiple access (CDMA)-mode coded access optical sensor (CAOS) smart camera suited for bright target scenarios. Deploying a silicon CMOS sensor and a silicon point detector within a digital micro-mirror device (DMD)-based spatially isolating hybrid camera design, this smart imager first engages the DMD staring mode with a controlled high optical attenuation of the scene irradiance, by a factor of 200, to provide a classic unsaturated CMOS sensor-based image for target intelligence gathering. Next, the image data provided by this CMOS sensor are used to acquire, within a focused zone, a more robust un-attenuated true target image using the time-modulated CDMA mode of the CAOS camera. Using four different bright light test target scenes, successfully demonstrated is a proof-of-concept visible band CAOS smart camera operating in the CDMA mode using Walsh-design CAOS pixel codes of up to 4096 bits in length with a maximum 10 kHz code bit rate, giving a 0.4096 second CAOS frame acquisition time. A 16-bit analog-to-digital converter (ADC) with time domain correlation digital signal processing (DSP) generates the CDMA-mode images with a 3600 CAOS pixel count and a best spatial resolution of one square micro-mirror pixel of 13.68 μm side. The CDMA mode of the CAOS smart camera is suited for applications where robust high dynamic range (DR) imaging is needed for un-attenuated, un-spoiled, bright, spectrally diverse targets.
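The CDMA-mode principle demonstrated above can be sketched briefly: each CAOS pixel is time-modulated with its own Walsh (Hadamard) code, the point detector records the summed time signal, and per-pixel irradiances are recovered by correlating that record with each code. The 8-bit code length, the four-pixel scene, and the bipolar (+1/-1) codes are illustrative assumptions; the demonstrated camera uses much longer codes on a DMD.

```python
# CDMA-mode sketch: Walsh-coded pixels, single point detector, correlation decoding.
import numpy as np

def hadamard(n):
    """Sylvester construction; n must be a power of two."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

codes = hadamard(8)[1:5]              # 4 pixels, 8-bit Walsh codes (skip the all-ones row)
irradiance = np.array([3.0, 0.5, 7.0, 1.5])
detector_signal = irradiance @ codes  # time series seen by the single point detector
recovered = detector_signal @ codes.T / codes.shape[1]   # correlation (matched filter)
print(recovered)                      # ~ [3.0, 0.5, 7.0, 1.5]
```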
Benchmarking gyrokinetic simulations in a toroidal flux-tube
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Y.; Parker, S. E.; Wan, W.
2013-09-15
A flux-tube model is implemented in the global turbulence code GEM [Y. Chen and S. E. Parker, J. Comput. Phys. 220, 839 (2007)] in order to facilitate benchmarking with Eulerian codes. The global GEM assumes the magnetic equilibrium to be completely given. The initial flux-tube implementation simply selects a radial location as the center of the flux-tube and a radial size of the flux-tube, sets all equilibrium quantities (B, ∇B, etc.) to be equal to the values at the center of the flux-tube, and retains only a linear radial profile of the safety factor needed for boundary conditions. This implementation shows disagreement with Eulerian codes in linear simulations. An alternative flux-tube model based on a complete local equilibrium solution of the Grad-Shafranov equation [J. Candy, Plasma Phys. Controlled Fusion 51, 105009 (2009)] is then implemented. This results in better agreement between Eulerian codes and the particle-in-cell (PIC) method. The PIC algorithm based on the v_|| formalism [J. Reynders, Ph.D. dissertation, Princeton University, 1992] and the gyrokinetic ion/fluid electron hybrid model with kinetic electron closure [Y. Chan and S. E. Parker, Phys. Plasmas 18, 055703 (2011)] are also implemented in the flux-tube geometry and compared with the direct method for both the ion temperature gradient driven modes and the kinetic ballooning modes.
mdFoam+: Advanced molecular dynamics in OpenFOAM
NASA Astrophysics Data System (ADS)
Longshaw, S. M.; Borg, M. K.; Ramisetti, S. B.; Zhang, J.; Lockerby, D. A.; Emerson, D. R.; Reese, J. M.
2018-03-01
This paper introduces mdFoam+, which is an MPI parallelised molecular dynamics (MD) solver implemented entirely within the OpenFOAM software framework. It is open-source and released under the same GNU General Public License (GPL) as OpenFOAM. The source code is released as a publicly open software repository that includes detailed documentation and tutorial cases. Since mdFoam+ is designed entirely within the OpenFOAM C++ object-oriented framework, it inherits a number of key features. The code is designed for extensibility and flexibility, so it is aimed first and foremost as an MD research tool, in which new models and test cases can be developed and tested rapidly. Implementing mdFoam+ in OpenFOAM also enables easier development of hybrid methods that couple MD with continuum-based solvers. Setting up MD cases follows the standard OpenFOAM format, as mdFoam+ also relies upon the OpenFOAM dictionary-based directory structure. This ensures that useful pre- and post-processing capabilities provided by OpenFOAM remain available even though the fully Lagrangian nature of an MD simulation is not typical of most OpenFOAM applications. Results show that mdFoam+ compares well to another well-known MD code (e.g. LAMMPS) in terms of benchmark problems, although it also has additional functionality that does not exist in other open-source MD codes.
Primary proton and helium spectra around the knee observed by the Tibet air-shower experiment
NASA Astrophysics Data System (ADS)
Jing, Huang; Tibet ASγ Collaboration
A hybrid experiment was carried out to study the cosmic-ray primary composition in the 'knee' energy region. The experimental set-up consists of the Tibet-II air shower array (AS), the emulsion chamber (EC) and the burst detector (BD), which are operated simultaneously and provide information on the primary species. The experiment was carried out at Yangbajing (4,300 m a.s.l., 606 g/cm2) in Tibet during the period from 1996 through 1999. We have already reported the primary proton flux around the knee region based on the simulation code COSMOS. In this paper, we present the primary proton and helium spectra around the knee region. We also extensively examine the simulation codes COSMOS ad-hoc and CORSIKA with the interaction models QGSJET01, DPMJET 2.55, SIBYLL 2.1, VENUS 4.125, HDPM, and NEXUS 2. Based on these calculations, we briefly discuss the systematic errors involved in our experimental results due to the Monte Carlo simulation.
The structure of the human interferon alpha/beta receptor gene.
Lutfalla, G; Gardiner, K; Proudhon, D; Vielh, E; Uzé, G
1992-02-05
Using the cDNA coding for the human interferon alpha/beta receptor (IFNAR), the IFNAR gene has been physically mapped relative to the other loci of the chromosome 21q22.1 region. 32,906 base pairs covering the IFNAR gene have been cloned and sequenced. Primer extension and solution hybridization-ribonuclease protection have been used to determine that transcription of the gene is initiated within a broad region of 20 base pairs. Some aspects of the polymorphism of the gene, including noncoding sequences, have been analyzed; some are allelic differences in the coding sequence that induce amino acid variations in the resulting protein. The exon structure of the IFNAR gene and that of the available genes for the receptors of the cytokine/growth hormone/prolactin/interferon receptor family have been compared with the predictions for the secondary structure of those receptors. From this analysis, we postulate a common origin and propose a hypothesis for the divergence from the immunoglobulin superfamily.
The FLUKA code for space applications: recent developments
NASA Technical Reports Server (NTRS)
Andersen, V.; Ballarini, F.; Battistoni, G.; Campanella, M.; Carboni, M.; Cerutti, F.; Empl, A.; Fasso, A.; Ferrari, A.; Gadioli, E.;
2004-01-01
The FLUKA Monte Carlo transport code is widely used for fundamental research, radioprotection and dosimetry, hybrid nuclear energy systems and cosmic ray calculations. The validity of its physical models has been benchmarked against a variety of experimental data over a wide range of energies, ranging from accelerator data to cosmic ray showers in the Earth's atmosphere. The code is presently undergoing several developments in order to better fit the needs of space applications. The generation of particle spectra according to up-to-date cosmic ray data as well as the effects of solar and geomagnetic modulation have been implemented and already successfully applied to a variety of problems. The implementation of suitable models for heavy ion nuclear interactions has reached an operational stage. At medium/high energy FLUKA uses the DPMJET model. The major task of incorporating heavy ion interactions from a few GeV/n down to the threshold for inelastic collisions is also progressing, and promising results have been obtained using a modified version of the RQMD-2.4 code. This interim solution is now fully operational, while waiting for the development of new models based on the FLUKA hadron-nucleus interaction code, a newly developed QMD code, and the implementation of the Boltzmann master equation theory for low energy ion interactions. © 2004 COSPAR. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Kawai, Hiroyuki; Morimoto, Akihito; Higuchi, Kenichi; Sawahashi, Mamoru
This paper investigates the gain of inter-Node B macro diversity for a scheduled-based shared channel using single-carrier FDMA radio access in the Evolved UTRA (UMTS Terrestrial Radio Access) uplink based on system-level simulations. More specifically, we clarify the gain of inter-Node B soft handover (SHO) with selection combining at the radio frame length level (= 10 ms) compared to that for hard handover (HHO) for a scheduled-based shared data channel, considering the gains of key packet-specific techniques including channel-dependent scheduling, adaptive modulation and coding (AMC), hybrid automatic repeat request (ARQ) with packet combining, and slow transmission power control (TPC). Simulation results show that inter-Node B SHO increases the user throughput at the cell edge by approximately 10% for a short cell radius such as 100-300 m due to the diversity gain from a sudden change in other-cell interference, which is a feature specific to fully scheduled packet access. However, it is also shown that the gain of inter-Node B SHO compared to that for HHO is small in a macrocell environment when the cell radius is longer than approximately 500 m due to the gains from hybrid ARQ with packet combining, slow TPC, and proportional-fairness-based channel-dependent scheduling.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jeong, Hae-Yong; Ha, Kwi-Seok; Chang, Won-Pyo
The local blockage in a subassembly of a liquid metal-cooled reactor (LMR) is of importance to plant safety because of the compact design and the high power density of the core. To analyze the thermal-hydraulic parameters in a subassembly of a liquid metal-cooled reactor with a flow blockage, the Korea Atomic Energy Research Institute has developed the MATRA-LMR-FB code. This code uses the distributed resistance model to describe the sweeping flow formed by the wire wrap around the fuel rods and to model the recirculation flow after a blockage. The hybrid difference scheme is also adopted for the description of the convective terms in the recirculating wake region of low velocity. Some state-of-the-art turbulent mixing models were implemented in the code, and the models suggested by Rehme and by Zhukov are analyzed and found to be appropriate for the description of the flow blockage in an LMR subassembly. The MATRA-LMR-FB code accurately predicts the experimental data of the Oak Ridge National Laboratory 19-pin bundle with a blockage for both high-flow and low-flow conditions. The influences of the distributed resistance model, the hybrid difference method, and the turbulent mixing models are evaluated step by step against the experimental data. The appropriateness of the models has also been evaluated through a comparison with results from the COMMIX code calculation. The flow blockage for the KALIMER design has been analyzed with the MATRA-LMR-FB code and compared with the SABRE code to guarantee the design safety for the flow blockage.
Luna, M G; Martins, M M; Newton, S M; Costa, S O; Almeida, D F; Ferreira, L C
1997-01-01
Oligonucleotides coding for linear epitopes of the fimbrial colonization factor antigen I (CFA/I) of enterotoxigenic Escherichia coli (ETEC) were cloned and expressed in a deleted form of the Salmonella muenchen flagellin fliC (H1-d) gene. Four synthetic oligonucleotide pairs coding for regions corresponding to amino acids 1 to 15 (region I), amino acids 11 to 25 (region II), amino acids 32 to 45 (region III) and amino acids 88 to 102 (region IV) were synthesized and cloned in the Salmonella flagellin-coding gene. All four hybrid flagellins were exported to the bacterial surface where they produced flagella, but only three constructs were fully motile. Sera recovered from mice immunized with intraperitoneal injections of purified flagella containing region II (FlaII) or region IV (FlaIV) showed high titres against dissociated solid-phase-bound CFA/I subunits. Hybrid flagellins containing region I (FlaI) or region III (FlaIII) elicited a weak immune response as measured in enzyme-linked immunosorbent assay (ELISA) with dissociated CFA/I subunits. None of the sera prepared with purified hybrid flagella were able to agglutinate or inhibit haemagglutination promoted by CFA/I-positive strains. Moreover, inhibition ELISA tests indicated that antisera directed against region I, II, III or IV cloned in flagellin were not able to recognize surface-exposed regions on the intact CFA/I fimbriae.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prody, C.A.; Zevin-Sonkin, D.; Gnatt, A.
1987-06-01
To study the primary structure and regulation of human cholinesterases, oligodeoxynucleotide probes were prepared according to a consensus peptide sequence present in the active site of both human serum pseudocholinesterase and Torpedo electric organ true acetylcholinesterase. Using these probes, the authors isolated several cDNA clones from lambda gt10 libraries of fetal brain and liver origins. These include 2.4-kilobase cDNA clones that code for a polypeptide containing a putative signal peptide and the N-terminal, active site, and C-terminal peptides of human BtChoEase, suggesting that they code either for BtChoEase itself or for a very similar but distinct fetal form of cholinesterase. In RNA blots of poly(A)+ RNA from the cholinesterase-producing fetal brain and liver, these cDNAs hybridized with a single 2.5-kilobase band. Blot hybridization to human genomic DNA revealed that these fetal BtChoEase cDNA clones hybridize with DNA fragments of a total length of 17.5 kilobases, and signal intensities indicated that these sequences are not present in many copies. Both the cDNA-encoded protein and its nucleotide sequence display striking homology to parallel sequences published for Torpedo AcChoEase. These findings demonstrate extensive homologies between the fetal BtChoEase encoded by these clones and other cholinesterases of various forms and species.
Embedded-cluster calculations in a numeric atomic orbital density-functional theory framework.
Berger, Daniel; Logsdail, Andrew J; Oberhofer, Harald; Farrow, Matthew R; Catlow, C Richard A; Sherwood, Paul; Sokol, Alexey A; Blum, Volker; Reuter, Karsten
2014-07-14
We integrate the all-electron electronic structure code FHI-aims into the general ChemShell package for solid-state embedding quantum and molecular mechanical (QM/MM) calculations. A major undertaking in this integration is the implementation of pseudopotential functionality into FHI-aims to describe cations at the QM/MM boundary through effective core potentials and therewith prevent spurious overpolarization of the electronic density. Based on numeric atomic orbital basis sets, FHI-aims offers particularly efficient access to exact exchange and second order perturbation theory, rendering the established QM/MM setup an ideal tool for hybrid and double-hybrid level density functional theory calculations of solid systems. We illustrate this capability by calculating the reduction potential of Fe in the Fe-substituted ZSM-5 zeolitic framework and the reaction energy profile for (photo-)catalytic water oxidation at TiO2(110).
Embedded-cluster calculations in a numeric atomic orbital density-functional theory framework
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berger, Daniel, E-mail: daniel.berger@ch.tum.de; Oberhofer, Harald; Reuter, Karsten
2014-07-14
We integrate the all-electron electronic structure code FHI-aims into the general ChemShell package for solid-state embedding quantum and molecular mechanical (QM/MM) calculations. A major undertaking in this integration is the implementation of pseudopotential functionality into FHI-aims to describe cations at the QM/MM boundary through effective core potentials and therewith prevent spurious overpolarization of the electronic density. Based on numeric atomic orbital basis sets, FHI-aims offers particularly efficient access to exact exchange and second order perturbation theory, rendering the established QM/MM setup an ideal tool for hybrid and double-hybrid level density functional theory calculations of solid systems. We illustrate this capability by calculating the reduction potential of Fe in the Fe-substituted ZSM-5 zeolitic framework and the reaction energy profile for (photo-)catalytic water oxidation at TiO2(110).
Kalyanaraman, Ananth; Cannon, William R; Latt, Benjamin; Baxter, Douglas J
2011-11-01
A MapReduce-based implementation called MR-MSPolygraph for parallelizing peptide identification from mass spectrometry data is presented. The underlying serial method, MSPolygraph, uses a novel hybrid approach to match an experimental spectrum against a combination of a protein sequence database and a spectral library. Our MapReduce implementation can run on any Hadoop cluster environment. Experimental results demonstrate that, relative to the serial version, MR-MSPolygraph reduces the time to solution from weeks to hours for processing tens of thousands of experimental spectra. Speedup and other related performance studies are also reported on a 400-core Hadoop cluster using spectral datasets from environmental microbial communities as inputs. The source code, along with user documentation, is available at http://compbio.eecs.wsu.edu/MR-MSPolygraph. ananth@eecs.wsu.edu; william.cannon@pnnl.gov. Supplementary data are available at Bioinformatics online.
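A rough sketch of how such a map/reduce decomposition can look (function and field names are hypothetical; the real MSPolygraph scoring is far more involved): each map task scores one spectrum against both search spaces, and the reduce step keeps the best identification per spectrum.

```python
from collections import defaultdict

def map_phase(spectrum, protein_db, spectral_library, score):
    """Map: score one experimental spectrum against both search spaces
    (hybrid sequence-database + spectral-library idea). 'score' is any
    user-supplied similarity function."""
    for name, candidate in list(protein_db.items()) + list(spectral_library.items()):
        yield spectrum["id"], (name, score(spectrum, candidate))

def reduce_phase(scored):
    """Reduce: keep the best-scoring identification for each spectrum id."""
    best = defaultdict(lambda: (None, float("-inf")))
    for sid, (name, s) in scored:
        if s > best[sid][1]:
            best[sid] = (name, s)
    return dict(best)
```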
Investigation of the Thermomechanical Response of Shape Memory Alloy Hybrid Composite Beams
NASA Technical Reports Server (NTRS)
Davis, Brian A.
2005-01-01
Previous work at NASA Langley Research Center (LaRC) involved fabrication and testing of composite beams with embedded, pre-strained shape memory alloy (SMA) ribbons. That study also provided comparison of experimental results with numerical predictions from a research code making use of a new thermoelastic model for shape memory alloy hybrid composite (SMAHC) structures. The previous work showed qualitative validation of the numerical model. However, deficiencies in the experimental-numerical correlation were noted and hypotheses for the discrepancies were given for further investigation. The goal of this work is to refine the experimental measurement and numerical modeling approaches in order to better understand the discrepancies, improve the correlation between prediction and measurement, and provide rigorous quantitative validation of the numerical model. Thermal buckling, post-buckling, and random responses to thermal and inertial (base acceleration) loads are studied. Excellent agreement is achieved between the predicted and measured results, thereby quantitatively validating the numerical tool.
Recombinant pinoresinol/lariciresinol reductase, recombinant dirigent protein, and methods of use
Lewis, Norman G.; Davin, Laurence B.; Dinkova-Kostova, Albena T.; Fujita, Masayuki; Gang, David R.; Sarkanen, Simo; Ford, Joshua D.
2001-04-03
Dirigent proteins and pinoresinol/lariciresinol reductases have been isolated, together with cDNAs encoding dirigent proteins and pinoresinol/lariciresinol reductases. Accordingly, isolated DNA sequences are provided which code for the expression of dirigent proteins and pinoresinol/lariciresinol reductases. In other aspects, replicable recombinant cloning vehicles are provided which code for dirigent proteins or pinoresinol/lariciresinol reductases or for a base sequence sufficiently complementary to at least a portion of dirigent protein or pinoresinol/lariciresinol reductase DNA or RNA to enable hybridization therewith. In yet other aspects, modified host cells are provided that have been transformed, transfected, infected and/or injected with a recombinant cloning vehicle and/or DNA sequence encoding dirigent protein or pinoresinol/lariciresinol reductase. Thus, systems and methods are provided for the recombinant expression of dirigent proteins and/or pinoresinol/lariciresinol reductases.
LHCb migration from Subversion to Git
NASA Astrophysics Data System (ADS)
Clemencic, M.; Couturier, B.; Closier, J.; Cattaneo, M.
2017-10-01
Due to user demand and to support new development workflows based on code review and multiple development streams, LHCb decided to port its source code management from Subversion to Git, using the CERN GitLab hosting service. Although tools exist for this kind of migration, LHCb specificities and development models required careful planning of the migration, development of migration tools, changes to the development model, and redefinition of the release procedures. Moreover we had to support a hybrid situation with some software projects hosted in Git and others still in Subversion, or even branches of a single project hosted in different systems. We present the way we addressed the special LHCb requirements, the technical details of migrating large non-standard Subversion repositories, and how we managed to smoothly migrate the software projects following the schedule of each project manager.
Design sensitivity analysis using EAL. Part 1: Conventional design parameters
NASA Technical Reports Server (NTRS)
Dopker, B.; Choi, Kyung K.; Lee, J.
1986-01-01
A numerical implementation of design sensitivity analysis of built-up structures is presented, using the versatility and convenience of an existing finite element structural analysis code and its database management system. The finite element code used in the implementation presented is the Engineering Analysis Language (EAL), which is based on a hybrid method of analysis. It was shown that design sensitivity computations can be carried out using the database management system of EAL, without writing a separate program and a separate database. Conventional (sizing) design parameters such as the cross-sectional area of beams or the thickness of plates and plane elastic solid components are considered. Compliance, displacement, and stress functionals are considered as performance criteria. The method presented is being extended to implement shape design sensitivity analysis using a domain method and a design component method.
A four stage approach for ontology-based health information system design.
Kuziemsky, Craig E; Lau, Francis
2010-11-01
To describe and illustrate a four stage methodological approach to capture user knowledge in a biomedical domain area, use that knowledge to design an ontology, and then implement and evaluate the ontology as a health information system (HIS). A hybrid participatory design-grounded theory (GT-PD) method was used to obtain data and code them for ontology development. Prototyping was used to implement the ontology as a computer-based tool. Usability testing evaluated the computer-based tool. An empirically derived domain ontology and set of three problem-solving approaches were developed as a formalized model of the concepts and categories from the GT coding. The ontology and problem-solving approaches were used to design and implement a HIS that tested favorably in usability testing. The four stage approach illustrated in this paper is useful for designing and implementing an ontology as the basis for a HIS. The approach extends existing ontology development methodologies by providing an empirical basis for theory incorporated into ontology design. Copyright © 2010 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Wei, Xiaohui; Li, Weishan; Tian, Hailong; Li, Hongliang; Xu, Haixiao; Xu, Tianfu
2015-07-01
The numerical simulation of multiphase flow and reactive transport in porous media for complex subsurface problems is a computationally intensive application. To meet the increasing computational requirements, this paper presents a parallel computing method and architecture. Derived from TOUGHREACT, a well-established code for simulating subsurface multiphase flow and reactive transport problems, we developed a high-performance computing code, THC-MP, for massively parallel computers, which greatly extends the computational capability of the original code. The domain decomposition method was applied to the coupled numerical computing procedure in THC-MP. We designed the distributed data structure and implemented the data initialization and exchange between the computing nodes and the core solving module using a hybrid parallel iterative and direct solver. Numerical accuracy of THC-MP was verified on a CO2 injection-induced reactive transport problem by comparing the results obtained from the parallel computation and the sequential computation (original code). Execution efficiency and code scalability were examined through field-scale carbon sequestration applications on a multicore cluster. The results demonstrate the enhanced performance achieved using THC-MP on parallel computing facilities.
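A minimal sketch of the halo-exchange pattern implied by such a domain decomposition, written with mpi4py (the 1-D slab layout and array sizes are illustrative only and do not reflect the THC-MP data structures):

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# each rank owns a 1-D slab of the domain plus one ghost cell on each side
local = np.full(10 + 2, float(rank))
left  = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# exchange ghost cells with the neighbouring subdomains each iteration
comm.Sendrecv(sendbuf=local[1:2],   dest=left,  recvbuf=local[-1:], source=right)
comm.Sendrecv(sendbuf=local[-2:-1], dest=right, recvbuf=local[0:1], source=left)
```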
A Sub-filter Scale Noise Equation for Hybrid LES Simulations
NASA Technical Reports Server (NTRS)
Goldstein, Marvin E.
2006-01-01
Hybrid LES/subscale modeling approaches have an important advantage over the current noise prediction methods in that they only involve modeling of the relatively universal subscale motion and not the configuration-dependent larger-scale turbulence. Previous hybrid approaches use approximate statistical techniques or extrapolation methods to obtain the requisite information about the sub-filter scale motion. An alternative approach would be to adopt the modeling techniques used in the current noise prediction methods and determine the unknown stresses from experimental data. The present paper derives an equation for predicting the subscale sound from information that can be obtained with currently available experimental procedures. The resulting prediction method would then be intermediate between the current noise prediction codes and previously proposed hybrid techniques.
Blade Assessment for Ice Impact (BLASIM). User's manual, version 1.0
NASA Technical Reports Server (NTRS)
Reddy, E. S.; Abumeri, G. H.
1993-01-01
The Blade Assessment for Ice Impact (BLASIM) computer code can analyze solid, hollow, composite, and super hybrid blades. The solid blade is made up of a single material, whereas hollow, composite, and super hybrid blades are constructed with a prescribed composite layup. The properties of a composite blade can be specified by one of two input options: (1) individual ply properties, or (2) fiber/matrix combinations. When the second option is selected, BLASIM utilizes ICAN (Integrated Composite ANalyzer) to generate the temperature/moisture-dependent ply properties of the composite blade. Two types of geometry input can be given: airfoil coordinates or a NASTRAN-type finite element model. These features increase the flexibility of the program. The user's manual provides sample cases to facilitate efficient use of the code while gaining familiarity.
Li, Ying
2016-09-16
Fault-tolerant quantum computing in systems composed of both Majorana fermions and topologically unprotected quantum systems, e.g., superconducting circuits or quantum dots, is studied in this Letter. Errors caused by topologically unprotected quantum systems need to be corrected with error-correction schemes, for instance, the surface code. We find that the error-correction performance of such a hybrid topological quantum computer is not superior to a normal quantum computer unless the topological charge of Majorana fermions is insusceptible to noise. If errors changing the topological charge are rare, the fault-tolerance threshold is much higher than the threshold of a normal quantum computer and a surface-code logical qubit could be encoded in only tens of topological qubits instead of about 1,000 normal qubits.
Source characterization of underground explosions from hydrodynamic-to-elastic coupling simulations
NASA Astrophysics Data System (ADS)
Chiang, A.; Pitarka, A.; Ford, S. R.; Ezzedine, S. M.; Vorobiev, O.
2017-12-01
A major improvement in ground motion simulation capabilities for underground explosion monitoring during the first phase of the Source Physics Experiment (SPE) is the development of a wave propagation solver that can propagate explosion-generated non-linear near-field ground motions to the far field. The calculation is done using a hybrid modeling approach with a one-way hydrodynamic-to-elastic coupling in three dimensions, where near-field motions are computed using GEODYN-L, a Lagrangian hydrodynamics code, and then passed to WPP, an elastic finite-difference code for seismic waveform modeling. The advancement in ground motion simulation capabilities gives us the opportunity to assess moment tensor inversion of a realistic volumetric source with near-field effects in a controlled setting, where we can evaluate the recovered source properties as a function of modeling parameters (e.g., the velocity model) and can provide insights into previous source studies on SPE Phase I chemical shots and other historical nuclear explosions. For example, the moment tensor inversion of far-field SPE seismic data demonstrated that, while vertical motions are well modeled using existing velocity models, large misfits still persist in predicting tangential shear-wave motions from explosions. One possible explanation we can explore is errors and uncertainties in the underlying Earth model. Here we investigate the recovered moment tensor solution, particularly the non-volumetric component, by inverting far-field ground motions simulated from physics-based explosion source models in fractured material, where the physics-based source models are based on the modeling of SPE-4P, SPE-5 and SPE-6 near-field data. The hybrid modeling approach provides new prospects for modeling the explosion source and understanding the uncertainties associated with it.
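In its simplest linearized form, the moment tensor inversion mentioned above reduces to a least-squares fit of Green's-function responses to the observed waveforms. The sketch below uses random stand-in arrays purely to show the shape of the problem; the real Green's functions come from the wave-propagation simulations described in the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)
# G: Green's-function derivatives for the six independent moment-tensor
# components, stacked over all stations and samples; d: observed waveforms
G = rng.standard_normal((500, 6))
m_true = np.array([1.0, 1.0, 1.0, 0.1, 0.0, 0.0])   # mostly volumetric source
d = G @ m_true + 0.01 * rng.standard_normal(500)

# linear least-squares recovery of the moment tensor
m_est, *_ = np.linalg.lstsq(G, d, rcond=None)
isotropic = m_est[:3].mean()   # volumetric (explosion-like) part of the solution
```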
NASA Astrophysics Data System (ADS)
Miki, Nobuhiko; Atarashi, Hiroyuki; Higuchi, Kenichi; Sawahashi, Mamoru; Nakagawa, Masao
This paper presents experimental evaluations of the effect of time diversity obtained by hybrid automatic repeat request (HARQ) with soft combining in space and path diversity schemes on orthogonal frequency division multiplexing (OFDM)-based packet radio access in a downlink broadband multipath fading channel. The effect of HARQ is analyzed through laboratory experiments employing fading simulators and field experiments conducted in downtown Yokosuka near Tokyo. After confirming the validity of the experimental results against a numerical analysis of the time diversity gain in HARQ, we show experimentally that, for a fixed modulation and channel coding scheme (MCS), the time diversity obtained by HARQ is effective in reducing the required received signal-to-interference plus noise power ratio (SINR) as the number of transmissions, K, increases up to 10, even when diversity effects are already obtained through two-branch antenna diversity reception and path diversity using a number of multipaths greater than 12 observed in a real fading channel. Meanwhile, in combined use with the adaptive modulation and channel coding (AMC) scheme associated with space and path diversity, we clarify that the gain obtained by time diversity is almost saturated at a maximum number of transmissions in HARQ of K' = 4 in Chase combining and K' = 2 in incremental redundancy, since the improvement in the residual packet error rate (PER) obtained through time diversity becomes small owing to the low PER in the initial packet transmission arising from appropriately selecting the optimum MCS in AMC. However, the experimental results elucidate that the time diversity in HARQ with soft combining associated with antenna diversity reception is effective in improving the throughput even in a broadband multipath channel with sufficient path diversity.
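As a first-order illustration of the two soft-combining modes discussed above (highly simplified: ideal combining, no coding or channel details), Chase combining of identical retransmissions adds per-transmission SINRs, whereas incremental redundancy accumulates mutual information:

```python
import math

def chase_effective_sinr(sinrs):
    """Chase combining: identical retransmissions are soft-combined,
    so per-transmission SINRs add (ideal maximum-ratio combining)."""
    return sum(sinrs)

def ir_accumulated_info(sinrs):
    """Incremental redundancy: each retransmission carries new parity,
    so (to first order) mutual information accumulates instead."""
    return sum(math.log2(1.0 + s) for s in sinrs)

# hypothetical per-transmission SINRs (linear scale) for four HARQ rounds
sinrs = [0.8, 0.8, 0.8, 0.8]
print(math.log2(1.0 + chase_effective_sinr(sinrs)), ir_accumulated_info(sinrs))
```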
Computational hybrid anthropometric paediatric phantom library for internal radiation dosimetry
NASA Astrophysics Data System (ADS)
Xie, Tianwu; Kuster, Niels; Zaidi, Habib
2017-04-01
Hybrid computational phantoms combine voxel-based and simplified equation-based modelling approaches to provide unique advantages and more realism for the construction of anthropomorphic models. In this work, a methodology and C++ code are developed to generate hybrid computational phantoms covering statistical distributions of body morphometry in the paediatric population. The paediatric phantoms of the Virtual Population Series (IT’IS Foundation, Switzerland) were modified to match target anthropometric parameters, including body mass, body length, standing height and sitting height/stature ratio, determined from reference databases of the National Centre for Health Statistics and the National Health and Nutrition Examination Survey. The phantoms were selected as representative anchor phantoms for newborns and 1-, 2-, 5-, 10- and 15-year-old children, and were subsequently remodelled to create 1100 female and male phantoms with 10th, 25th, 50th, 75th and 90th percentile body morphometries. Evaluation was performed qualitatively using 3D visualization and quantitatively by analysing internal organ masses. Overall, the newly generated phantoms appear very reasonable and representative of the main characteristics of the paediatric population at various ages and for different genders, body sizes and sitting stature ratios. The mass of internal organs increases with height and body mass. The comparison of organ masses of the heart, kidney, liver, lung and spleen with published autopsy and ICRP reference data for children demonstrated that they follow the same trend when correlated with age. The constructed hybrid computational phantom library opens up the prospect of comprehensive radiation dosimetry calculations and risk assessment for the paediatric population of different age groups and diverse anthropometric parameters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, J.; Yuan, B.; Jin, M.
2012-07-01
Three-dimensional neutronics optimization calculations were performed to analyse the Tritium Breeding Ratio (TBR) and the maximum average Power Density (PDmax) in a helium-cooled multi-functional experimental fusion-fission hybrid reactor blanket named FDS-MFX (Fusion-Driven hybrid System, Multi-Functional experimental). Three-stage tests will be carried out successively, in which a tritium breeding blanket, a uranium-fueled blanket and a spent-fuel-fueled blanket will be utilized respectively. In this contribution, the main goal of the FDS-MFX blanket is to achieve a PDmax of about 100 MW/m³ with self-sustaining tritium (TBR ≥ 1.05), based on the second-stage test with the uranium-fueled blanket, to check and validate the demonstrator reactor blanket-relevant technologies based on viable fusion and fission technologies. Four different enriched uranium materials were taken into account to evaluate PDmax in the subcritical blanket: (i) natural uranium, (ii) 3.2% enriched uranium, (iii) 19.75% enriched uranium, and (iv) 64.4% enriched uranium carbide. These calculations and analyses were performed using the home-developed code VisualBUS and the Hybrid Evaluated Nuclear Data Library (HENDL). The results showed that the performance of the blanket loaded with 64.4% enriched uranium was the most attractive, and it could be promising to effectively obtain tritium self-sufficiency (TBR ≥ 1.05) and a high maximum average power density (≈100 MW/m³) when the blanket is loaded with a 235U mass of about 1 ton.
Zhang, Yu; Yao, Youlin; Jiang, Siyuan; Lu, Yilu; Liu, Yunqiang; Tao, Dachang; Zhang, Sizhong; Ma, Yongxin
2015-04-01
To identify protein-protein interaction partners of PER1 (period circadian protein homolog 1), a key component of the molecular oscillation system of the circadian rhythm, in tumors using a bacterial two-hybrid system. A human cervical carcinoma HeLa cell library was used. The recombinant bait plasmid pBT-PER1 and the pTRG cDNA plasmid library were cotransformed into the two-hybrid system reporter strain, which was cultured in a selective medium. Target clones were screened. After isolating the positive clones, the target clones were sequenced and analyzed. Fourteen protein-coding genes were identified, 4 of which contained the whole coding regions of genes, including optic atrophy 3 protein (OPA3), associated with mitochondrial dynamics, and homo sapiens cutA divalent cation tolerance homolog of E. coli (CUTA), associated with copper metabolism. The other hits included proteins related to cellular events, biochemical reactions and signal transduction. Identification of proteins potentially interacting with PER1 in tumors may provide new insights into the functions of the circadian clock protein PER1 during tumorigenesis.
Zhou, Yanrong; Lin, Yanli; Wu, Xiaojie; Xiong, Fuyin; Lv, Yuemeng; Zheng, Tao; Huang, Peitang; Chen, Hongxing
2012-02-01
Transgene expression for a mammary gland bioreactor aimed at producing recombinant proteins requires optimized expression vector construction. Previously we presented a hybrid gene locus strategy, originally tested with human lactoferrin (hLF) as the target transgene, with which an extremely high expression level of rhLF of 29.8 g/l in mouse milk was achieved. Here, to demonstrate the broad applicability of this strategy, another 38.4 kb mWAP-htPA hybrid gene locus was constructed, in which the 3-kb genomic coding sequence in the 24-kb mouse whey acidic protein (mWAP) gene locus was substituted by the 17.4-kb genomic coding sequence of human tissue plasminogen activator (htPA), exactly from the start codon to the stop codon. Five corresponding transgenic mouse lines were generated, and the highest expression level of rhtPA in the milk reached 3.3 g/l. Our strategy should provide a universal way for the large-scale production of pharmaceutical proteins in the mammary gland of transgenic animals.
Combustion performance and scale effect from N2O/HTPB hybrid rocket motor simulations
NASA Astrophysics Data System (ADS)
Shan, Fanli; Hou, Lingyun; Piao, Ying
2013-04-01
An HRM code for the simulation of N2O/HTPB hybrid rocket motor operation and scale-effect analysis has been developed. This code can be used to calculate motor thrust and distributions of physical properties inside the combustion chamber and nozzle during the operational phase by solving the unsteady Navier-Stokes equations using a corrected compressible difference scheme and a two-step, five-species combustion model. A dynamic fuel surface regression technique and a two-step calculation method, together with gas-solid coupling, are applied in the calculation of fuel regression and the determination of the combustion chamber wall profile as the fuel regresses. Both the calculated motor thrust from start-up to shut-down mode and the combustion chamber wall profile after motor operation are in good agreement with experimental data. The fuel regression rate equation and the relation between fuel regression rate and axial distance have been derived. Analysis of the results suggests improvements in combustion performance for the current hybrid rocket motor design and explains scale effects in the variation of fuel regression rate with combustion chamber diameter.
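Hybrid-rocket regression-rate relations of the kind derived above are conventionally written as a power law in the oxidizer mass flux. The snippet below shows that standard form only; the coefficients a and n are placeholders, not the values fitted in the paper:

```python
import math

def regression_rate(mdot_ox, port_radius, a=2.0e-5, n=0.62):
    """Standard hybrid-rocket correlation r_dot = a * Gox**n, where Gox is the
    oxidizer mass flux through the circular port. Coefficients are placeholders."""
    g_ox = mdot_ox / (math.pi * port_radius**2)   # oxidizer mass flux, kg/(m^2 s)
    return a * g_ox**n                            # regression rate, m/s
```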
Fatigue Life Analysis of Tapered Hybrid Composite Flexbeams
NASA Technical Reports Server (NTRS)
Murri, Gretchen B.; Schaff, Jeffery R.; Dobyns, Alan L.
2002-01-01
Nonlinear-tapered flexbeam laminates from a full-size composite helicopter rotor hub flexbeam were tested under combined constant axial tension and cyclic bending loads. The two different graphite/glass hybrid configurations tested under cyclic loading failed by delamination in the tapered region. A 2-D finite element model was developed which closely approximated the flexbeam geometry, boundary conditions, and loading. The analysis results from two geometrically nonlinear finite element codes, ANSYS and ABAQUS, are presented and compared. Strain energy release rates (G) obtained from the above codes using the virtual crack closure technique (VCCT) at a resin crack location in the flexbeams are presented for both hybrid material types. These results compare well with each other and suggest that the initial delamination growth from the resin crack toward the thick region of the flexbeam is strongly mode II. The peak calculated G values were used with material characterization data to calculate fatigue life curves and compared with test data. A curve relating maximum surface strain to number of loading cycles at delamination onset compared reasonably well with the test results.
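For reference, the VCCT estimate mentioned above reduces, for a 2-D model, to a product of the crack-tip nodal force and the relative displacement of the node pair behind the tip; a minimal sketch of the mode-II component (variable names hypothetical, per-unit-width 2-D form):

```python
def vcct_mode_II(F_x, du_x, delta_a, width):
    """Virtual crack closure estimate of the mode-II strain energy release rate
    from the shear force F_x at the crack-tip node and the relative sliding
    du_x of the node pair behind the tip, for crack-tip element length delta_a
    and out-of-plane width 'width'. Illustrative only."""
    return F_x * du_x / (2.0 * delta_a * width)
```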
Illustration of Some Consequences of the Indistinguishability of Electrons
ERIC Educational Resources Information Center
Moore, John W.; Davies, William G.
1976-01-01
Discusses how color-coded overhead transparencies of computer-generated dot-density diagrams can be used to illustrate hybrid orbitals and the principle of the indistinguishability of electrons. (MLH)
Development of the general interpolants method for the CYBER 200 series of supercomputers
NASA Technical Reports Server (NTRS)
Stalnaker, J. F.; Robinson, M. A.; Spradley, L. W.; Kurzius, S. C.; Thoenes, J.
1988-01-01
The General Interpolants Method (GIM) is a 3-D, time-dependent, hybrid procedure for generating numerical analogs of the conservation laws. This study is directed toward the development and application of the GIM computer code for fluid dynamic research applications as implemented for the CYBER 200 series of supercomputers. Elliptic and quasi-parabolic versions of the GIM code are discussed. Turbulence models, both algebraic and differential-equation based, were added to the basic viscous code. An equilibrium reacting chemistry model and an implicit finite difference scheme are also included.
Error Control Coding Techniques for Space and Satellite Communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.; Cabral, Hermano A.; He, Jiali
1997-01-01
Bootstrap Hybrid Decoding (BHD) (Jelinek and Cocke, 1971) is a coding/decoding scheme that adds extra redundancy to a set of convolutionally encoded codewords and uses this redundancy to provide reliability information to a sequential decoder. Theoretical results indicate that the bit error probability (BER) performance of BHD is close to that of Turbo codes, without some of their drawbacks. In this report we study the use of the Multiple Stack Algorithm (MSA) (Chevillat and Costello, Jr., 1977) as the underlying sequential decoding algorithm in BHD, which makes possible an iterative version of BHD.
Chromatin accessibility prediction via a hybrid deep convolutional neural network.
Liu, Qiao; Xia, Fei; Yin, Qijin; Jiang, Rui
2018-03-01
A majority of known genetic variants associated with human-inherited diseases lie in non-coding regions that lack adequate interpretation, making it indispensable to systematically discover functional sites at the whole genome level and precisely decipher their implications in a comprehensive manner. Although computational approaches have been complementing high-throughput biological experiments towards the annotation of the human genome, it still remains a big challenge to accurately annotate regulatory elements in the context of a specific cell type via automatic learning of the DNA sequence code from large-scale sequencing data. Indeed, the development of an accurate and interpretable model to learn the DNA sequence signature and further enable the identification of causative genetic variants has become essential in both genomic and genetic studies. We proposed Deopen, a hybrid framework mainly based on a deep convolutional neural network, to automatically learn the regulatory code of DNA sequences and predict chromatin accessibility. In a series of comparison with existing methods, we show the superior performance of our model in not only the classification of accessible regions against background sequences sampled at random, but also the regression of DNase-seq signals. Besides, we further visualize the convolutional kernels and show the match of identified sequence signatures and known motifs. We finally demonstrate the sensitivity of our model in finding causative noncoding variants in the analysis of a breast cancer dataset. We expect to see wide applications of Deopen with either public or in-house chromatin accessibility data in the annotation of the human genome and the identification of non-coding variants associated with diseases. Deopen is freely available at https://github.com/kimmo1019/Deopen. ruijiang@tsinghua.edu.cn. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
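To make the input representation concrete, the sketch below shows the standard one-hot encoding of DNA and a single convolutional filter scan (NumPy only; the actual Deopen model is a deep CNN and is not reproduced here, and the random kernel stands in for a learned motif detector):

```python
import numpy as np

BASES = "ACGT"

def one_hot(seq):
    """Encode a DNA sequence as a 4 x L one-hot matrix, the usual input
    representation for sequence-based convolutional models."""
    m = np.zeros((4, len(seq)))
    for i, b in enumerate(seq):
        if b in BASES:
            m[BASES.index(b), i] = 1.0
    return m

def conv_scan(x, kernel):
    """Valid 1-D convolution of a (4, L) one-hot matrix with a (4, k) kernel,
    i.e. one convolutional filter scanning for a motif-like pattern."""
    k = kernel.shape[1]
    return np.array([np.sum(x[:, i:i + k] * kernel)
                     for i in range(x.shape[1] - k + 1)])

scores = conv_scan(one_hot("ACGTGACGTTTACG"),
                   np.random.default_rng(1).standard_normal((4, 6)))
```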
Bearing performance degradation assessment based on time-frequency code features and SOM network
NASA Astrophysics Data System (ADS)
Zhang, Yan; Tang, Baoping; Han, Yan; Deng, Lei
2017-04-01
Bearing performance degradation assessment and prognostics are extremely important in supporting maintenance decisions and guaranteeing the system’s reliability. To achieve this goal, this paper proposes a novel feature extraction method for the degradation assessment and prognostics of bearings. Features of time-frequency codes (TFCs) are extracted from the time-frequency distribution using a hybrid procedure based on short-time Fourier transform (STFT) and non-negative matrix factorization (NMF) theory. An alternative way to design the health indicator is investigated by quantifying the similarity between feature vectors using a self-organizing map (SOM) network. On the basis of this idea, a new health indicator called the time-frequency code quantification error (TFCQE) is proposed to assess the performance degradation of the bearing. This indicator is constructed based on the bearing real-time behavior and the SOM model that is previously trained with only the TFC vectors under the normal condition. Vibration signals collected from bearing run-to-failure tests are used to validate the developed method. The comparison results demonstrate the superiority of the proposed TFCQE indicator over many other traditional features in terms of feature quality metrics, incipient degradation identification and achieving accurate prediction.
Highlights:
• Time-frequency codes are extracted to reflect the signals’ characteristics.
• The SOM network serves as a tool to quantify the similarity between feature vectors.
• A new health indicator is proposed to demonstrate the whole stage of degradation development.
• The method is useful for extracting degradation features and detecting incipient degradation.
• The superiority of the proposed method is verified using experimental data.
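A minimal sketch of the feature-extraction and quantification-error idea (SciPy and scikit-learn as stand-ins; the window length, component count and SOM codebook below are placeholders, not the paper's settings):

```python
import numpy as np
from scipy.signal import stft
from sklearn.decomposition import NMF

def tfc_features(signal, fs, n_components=8):
    """Magnitude STFT factorised by NMF; the averaged activations serve as a
    compact time-frequency code (TFC) for one vibration segment."""
    _, _, Z = stft(signal, fs=fs, nperseg=256)
    V = np.abs(Z)
    model = NMF(n_components=n_components, init="nndsvda", max_iter=400)
    model.fit_transform(V)              # learns spectral patterns W
    H = model.components_               # activations over time
    return H.mean(axis=1)

def quantization_error(feature, codebook):
    """Health indicator analogous to the TFCQE: distance of the current feature
    vector to its closest codebook vector from a SOM trained on healthy data
    (codebook: any (n_units, n_features) array)."""
    return np.min(np.linalg.norm(codebook - feature, axis=1))
```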
Dhawan, Sunita Singh; Shukla, Preeti; Gupta, Pankhuri; Lal, R K
2016-05-01
Ocimum (Lamiaceae) is an important source of essential oils and aroma chemicals, especially eugenol, methyl eugenol, linalool and methyl chavicol. An elite evergreen hybrid has been developed from Ocimum kilimandscharicum and Ocimum basilicum, which demonstrated adaptive behavior towards cold stress. A comparative molecular analysis was carried out using RAPD, AFLP, and ISSR markers among O. basilicum, O. kilimandscharicum and their evergreen cold-tolerant hybrid. The RAPD and AFLP analyses gave similar results, i.e., the hybrid of O. basilicum and O. kilimandscharicum shares the same cluster with O. kilimandscharicum, while O. basilicum behaves as an outgroup, whereas in the ISSR analysis the hybrid genotype grouped in the same cluster with O. basilicum. The Ocimum genotypes were also analyzed and compared for their trichome density. There were distinct differences in morphology, distribution, and structure between the two kinds of trichomes, i.e., glandular and non-glandular. Glandular trichomes contain essential oils, polyphenols, flavonoids, and acid polysaccharides. Hair-like, non-glandular trichomes help in keeping frost away from the living surface cells. O. basilicum showed fewer non-glandular trichomes on leaves compared to O. kilimandscharicum and the evergreen cold-tolerant hybrid. Trichomes were analyzed in O. kilimandscharicum, O. basilicum, and their hybrid. An increased proline content at the biochemical level indicates a higher potential to survive under a stress condition such as cold stress. In our analysis, the proline content is considerably higher in the tolerant O. kilimandscharicum, low in the susceptible O. basilicum, and intermediate in the hybrid. Gene expression analysis was carried out in O. basilicum, O. kilimandscharicum and their hybrid for the TTG1, GTL1, and STICHEL gene loci, which regulate trichome development and formation, and for the transcription factors WRKY and MPS involved in the regulation of plant responses to freezing and cold. The analysis showed that O. kilimandscharicum and the hybrid were very close to each other, while O. basilicum was more distinct in all respects. The WRKY-coding gene showed higher expression in the hybrid than in O. kilimandscharicum and O. basilicum, and the microspore-specific (MPS) transcription factor promoter also showed overexpression in the hybrid in response to cold stress. The developed evergreen interspecific hybrid may thus provide a base for various industries that depend upon the bioactive constituents of Ocimum species.
MIL-STD-1553 dynamic bus controller/remote terminal hybrid set
NASA Astrophysics Data System (ADS)
Friedman, S. N.
This paper describes the performance, physical and electrical requirements of a Dual Redundant BUS Interface Unit (BIU) acting as a BUS Controller Interface Unit (BCIU) or Remote Terminal Unit (RTU) between a Motorola 68000 VME bus and a MIL-STD-1553B Multiplex Data Bus. A discussion of how the BIU hybrid set is programmed and operates as a BCIU or RTU will be included. This paper will review Dynamic Bus Control and other Mode Code capabilities. The BIU hybrid set interfaces to a 68000 microprocessor with a VME bus using programmed I/O transfers. This special interface will be discussed along with the internal Dual Access Memory (4K x 16) used to support the data exchanges between the CPU and the BIU hybrid set. The hybrid set's physical size and power requirements will be covered, including the Double Eurocard on which the BIU function is presently offered.
Seligmann, Hervé
2013-03-01
Usual DNA→RNA transcription exchanges T→U. Assuming different systematic symmetric nucleotide exchanges during translation, some GenBank RNAs match exactly human mitochondrial sequences (exchange rules listed in decreasing transcript frequencies): C↔U, A↔U, A↔U+C↔G (two nucleotide pairs exchanged), G↔U, A↔G, C↔G, none for A↔C, A↔G+C↔U, and A↔C+G↔U. Most unusual transcripts involve exchanging uracil. Independent measures of rates of rare replicational enzymatic DNA nucleotide misinsertions predict frequencies of RNA transcripts systematically exchanging the corresponding misinserted nucleotides. Exchange transcripts self-hybridize less than other gene regions, self-hybridization increases with length, suggesting endoribonuclease-limited elongation. Blast detects stop codon depleted putative protein coding overlapping genes within exchange-transcribed mitochondrial genes. These align with existing GenBank proteins (mainly metazoan origins, prokaryotic and viral origins underrepresented). These GenBank proteins frequently interact with RNA/DNA, are membrane transporters, or are typical of mitochondrial metabolism. Nucleotide exchange transcript frequencies increase with overlapping gene densities and stop densities, indicating finely tuned counterbalancing regulation of expression of systematic symmetric nucleotide exchange-encrypted proteins. Such expression necessitates combined activities of suppressor tRNAs matching stops, and nucleotide exchange transcription. Two independent properties confirm predicted exchanged overlap coding genes: discrepancy of third codon nucleotide contents from replicational deamination gradients, and codon usage according to circular code predictions. Predictions from both properties converge, especially for frequent nucleotide exchange types. Nucleotide exchanging transcription apparently increases coding densities of protein coding genes without lengthening genomes, revealing unsuspected functional DNA coding potential. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Cloutier, Sara C; Wang, Siwen; Ma, Wai Kit; Al Husini, Nadra; Dhoondia, Zuzer; Ansari, Athar; Pascuzzi, Pete E; Tran, Elizabeth J
2016-02-04
Long non-coding (lnc)RNAs, once thought to merely represent noise from imprecise transcription initiation, have now emerged as major regulatory entities in all eukaryotes. In contrast to the rapidly expanding identification of individual lncRNAs, mechanistic characterization has lagged behind. Here we provide evidence that the GAL lncRNAs in the budding yeast S. cerevisiae promote transcriptional induction in trans by formation of lncRNA-DNA hybrids or R-loops. The evolutionarily conserved RNA helicase Dbp2 regulates formation of these R-loops as genomic deletion or nuclear depletion results in accumulation of these structures across the GAL cluster gene promoters and coding regions. Enhanced transcriptional induction is manifested by lncRNA-dependent displacement of the Cyc8 co-repressor and subsequent gene looping, suggesting that these lncRNAs promote induction by altering chromatin architecture. Moreover, the GAL lncRNAs confer a competitive fitness advantage to yeast cells because expression of these non-coding molecules correlates with faster adaptation in response to an environmental switch. Copyright © 2016 Elsevier Inc. All rights reserved.
M3D-K Simulations of Beam-Driven Alfven Eigenmodes in ASDEX-U
NASA Astrophysics Data System (ADS)
Wang, Ge; Fu, Guoyong; Lauber, Philipp; Schneller, Mirjam
2013-10-01
Core-localized Alfven eigenmodes are often observed in neutral-beam-heated plasmas in the ASDEX-U tokamak. In this work, hybrid simulations with the global kinetic/MHD hybrid code M3D-K have been carried out to investigate the linear stability and nonlinear dynamics of beam-driven Alfven eigenmodes using experimental parameters and profiles of an ASDEX-U discharge. The safety factor q profile is weakly reversed with a minimum value of about qmin = 3.0. The simulation results show that the n = 3 mode transits from a reversed shear Alfven eigenmode (RSAE) to a core-localized toroidal Alfven eigenmode (TAE) as qmin drops from 3.0 to 2.79, consistent with results from the stability code NOVA as well as the experimental measurement. The M3D-K results are being compared with those of the linear gyrokinetic stability code LIGKA for benchmarking. The simulation results will also be compared with the measured mode frequency and mode structure. This work was funded by the Max-Planck/Princeton Center for Plasma Physics.
Optimizing Tensor Contraction Expressions for Hybrid CPU-GPU Execution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, Wenjing; Krishnamoorthy, Sriram; Villa, Oreste
2013-03-01
Tensor contractions are generalized multidimensional matrix multiplication operations that widely occur in quantum chemistry. Efficient execution of tensor contractions on Graphics Processing Units (GPUs) requires several challenges to be addressed, including index permutation and small dimension sizes reducing thread block utilization. Moreover, to apply the same optimizations to various expressions, we need a code generation tool. In this paper, we present our approach to automatically generate CUDA code to execute tensor contractions on GPUs, including management of data movement between CPU and GPU. To evaluate our tool, GPU-enabled code is generated for the most expensive contractions in CCSD(T), a key coupled cluster method, and incorporated into NWChem, a popular computational chemistry suite. For this method, we demonstrate a speedup of over a factor of 8.4 using one GPU (instead of one core per node) and over 2.6 when utilizing the entire system using a hybrid CPU+GPU solution with 2 GPUs and 5 cores (instead of 7 cores per node). Finally, we analyze the implementation behavior on future GPU systems.
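A toy tensor contraction (not one of the actual CCSD(T) terms) illustrating what such a code generator produces: the einsum expression is mathematically equivalent to a reshape plus matrix multiply, and in general an index permutation is needed first so that the contracted index is innermost.

```python
import numpy as np

# toy contraction: R[i,j,a,b] = sum_c T[i,j,a,c] * F[c,b]
T = np.random.rand(8, 8, 16, 16)
F = np.random.rand(16, 16)
R = np.einsum("ijac,cb->ijab", T, F)

# equivalent reshape + GEMM formulation; here no permutation is needed because
# the contracted index c is already the innermost index of T
R2 = (T.reshape(-1, 16) @ F).reshape(8, 8, 16, 16)
assert np.allclose(R, R2)
```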
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hager, Robert; Lang, Jianying; Chang, C. S.
2017-05-24
As an alternative option to kinetic electrons, the gyrokinetic total-f particle-in-cell (PIC) code XGC1 has been extended to the MHD/fluid type electromagnetic regime by combining gyrokinetic PIC ions with massless drift-fluid electrons. Here, two representative long wavelength modes, shear Alfven waves and resistive tearing modes, are verified in cylindrical and toroidal magnetic field geometries.
Multiple pathogen biomarker detection using an encoded bead array in droplet PCR.
Periyannan Rajeswari, Prem Kumar; Soderberg, Lovisa M; Yacoub, Alia; Leijon, Mikael; Andersson Svahn, Helene; Joensson, Haakan N
2017-08-01
We present a droplet PCR workflow for the detection of multiple pathogen DNA biomarkers using fluorescent color-coded Luminex® beads. This strategy enables encoding of multiple singleplex droplet PCRs using a commercially available bead set with several hundred distinguishable fluorescence codes. This workflow provides scalability beyond the limited number offered by fluorescent detection probes such as TaqMan probes, commonly used in current multiplex droplet PCRs. The workflow was validated for three different Luminex bead sets coupled to target-specific capture oligos to detect hybridization of three microorganisms infecting poultry: avian influenza, infectious laryngotracheitis virus and Campylobacter jejuni. In this assay, the target DNA was amplified with fluorescently labeled primers by PCR in parallel in monodisperse picoliter droplets, to avoid amplification bias. The color codes of the Luminex detection beads allowed concurrent and accurate classification of the different bead sets used in this assay. The hybridization assay detected target DNA of all three microorganisms with high specificity, from samples with an average target concentration of a single DNA template molecule per droplet. This workflow demonstrates the possibility of increasing the droplet PCR assay detection panel to detect large numbers of targets in parallel, utilizing the scalability offered by the color-coded Luminex detection beads. Copyright © 2017. Published by Elsevier B.V.
Particle/Continuum Hybrid Simulation in a Parallel Computing Environment
NASA Technical Reports Server (NTRS)
Baganoff, Donald
1996-01-01
The objective of this study was to modify an existing parallel particle code based on the direct simulation Monte Carlo (DSMC) method to include a Navier-Stokes (NS) calculation so that a hybrid solution could be developed. In carrying out this work, it was determined that the following five issues had to be addressed before extensive program development of a three dimensional capability was pursued: (1) find a set of one-sided kinetic fluxes that are fully compatible with the DSMC method, (2) develop a finite volume scheme to make use of these one-sided kinetic fluxes, (3) make use of the one-sided kinetic fluxes together with DSMC type boundary conditions at a material surface so that velocity slip and temperature slip arise naturally for near-continuum conditions, (4) find a suitable sampling scheme so that the values of the one-sided fluxes predicted by the NS solution at an interface between the two domains can be converted into the correct distribution of particles to be introduced into the DSMC domain, (5) carry out a suitable number of tests to confirm that the developed concepts are valid, individually and in concert for a hybrid scheme.
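Issue (1) above concerns one-sided (half-range) kinetic fluxes; for a drifting Maxwellian the one-sided number flux through a plane has a standard closed form, sketched below (a textbook kinetic-theory result for illustration, not code from this study):

```python
import math

def one_sided_number_flux(n, T, u, m, kB=1.380649e-23):
    """Half-range number flux of a drifting Maxwellian through a plane,
    the kind of kinetically consistent flux needed at a DSMC/NS interface.
    n: number density, T: temperature, u: drift velocity normal to the
    plane, m: molecular mass."""
    beta = math.sqrt(m / (2.0 * kB * T))
    s = u * beta   # speed ratio
    return n * (math.sqrt(kB * T / (2.0 * math.pi * m)) * math.exp(-s * s)
                + 0.5 * u * (1.0 + math.erf(s)))
```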
Liu, Zhen-Fei; Egger, David A.; Refaely-Abramson, Sivan; ...
2017-02-21
The alignment of the frontier orbital energies of an adsorbed molecule with the substrate Fermi level at metal-organic interfaces is a fundamental observable of significant practical importance in nanoscience and beyond. Typical density functional theory calculations, especially those using local and semi-local functionals, often underestimate level alignment, leading to inaccurate electronic structure and charge transport properties. Here, we develop a new fully self-consistent predictive scheme to accurately compute level alignment at certain classes of complex heterogeneous molecule-metal interfaces based on optimally tuned range-separated hybrid functionals. Starting from a highly accurate description of the gas-phase electronic structure, our method by construction captures important nonlocal surface polarization effects via tuning of the long-range screened exchange in a range-separated hybrid in a non-empirical and system-specific manner. We implement this functional in a plane-wave code and apply it to several physisorbed and chemisorbed molecule-metal interface systems. Our results are in quantitative agreement with experiments, for both the level alignment and the work function changes. This approach constitutes a new practical scheme for accurate and efficient calculations of the electronic structure of molecule-metal interfaces.
Active Low Intrusion Hybrid Monitor for Wireless Sensor Networks
Navia, Marlon; Campelo, Jose C.; Bonastre, Alberto; Ors, Rafael; Capella, Juan V.; Serrano, Juan J.
2015-01-01
Several systems have been proposed to monitor wireless sensor networks (WSN). These systems may be active (causing a high degree of intrusion) or passive (low observability inside the nodes). This paper presents the implementation of an active hybrid (hardware and software) monitor with low intrusion. It is based on the addition to the sensor node of a monitor node (hardware part) which, through a standard interface, is able to receive the monitoring information sent by a piece of software executed in the sensor node. The intrusion on time, code, and energy caused in the sensor nodes by the monitor is evaluated as a function of data size and the interface used. Then different interfaces, commonly available in sensor nodes, are evaluated: serial transmission (USART), serial peripheral interface (SPI), and parallel. The proposed hybrid monitor provides highly detailed information, barely disturbed by the measurement tool (interference), about the behavior of the WSN that may be used to evaluate many properties such as performance, dependability, security, etc. Monitor nodes are self-powered and may be removed after the monitoring campaign to be reused in other campaigns and/or WSNs. No other hardware-independent monitoring platforms with such low interference have been found in the literature. PMID:26393604
NASA Astrophysics Data System (ADS)
Liu, Zhen-Fei; Egger, David A.; Refaely-Abramson, Sivan; Kronik, Leeor; Neaton, Jeffrey B.
2017-03-01
The alignment of the frontier orbital energies of an adsorbed molecule with the substrate Fermi level at metal-organic interfaces is a fundamental observable of significant practical importance in nanoscience and beyond. Typical density functional theory calculations, especially those using local and semi-local functionals, often underestimate level alignment, leading to inaccurate electronic structure and charge transport properties. In this work, we develop a new fully self-consistent predictive scheme to accurately compute level alignment at certain classes of complex heterogeneous molecule-metal interfaces based on optimally tuned range-separated hybrid functionals. Starting from a highly accurate description of the gas-phase electronic structure, our method by construction captures important nonlocal surface polarization effects via tuning of the long-range screened exchange in a range-separated hybrid in a non-empirical and system-specific manner. We implement this functional in a plane-wave code and apply it to several physisorbed and chemisorbed molecule-metal interface systems. Our results are in quantitative agreement with experiments for both the level alignment and work function changes. Our approach constitutes a new practical scheme for accurate and efficient calculations of the electronic structure of molecule-metal interfaces.
NASA Astrophysics Data System (ADS)
Wei, Wei; Ding, Bo-Jiang; Peysson, Y.; Decker, J.; Li, Miao-Hui; Zhang, Xin-Jun; Wang, Xiao-Jie; Zhang, Lei
2016-01-01
The optimized synergy conditions between electron cyclotron current drive (ECCD) and lower hybrid current drive (LHCD) with normal parameters of the EAST tokamak are studied by using the C3PO/LUKE code based on the understanding of the synergy mechanisms so as to obtain a higher synergistic current and provide theoretical reference for the synergistic effect in the EAST experiment. The dependences of the synergistic effect on the parameters of two waves (lower hybrid wave (LHW) and electron cyclotron wave (ECW)), including the radial position of the power deposition, the power value of the LH and EC waves, and the parallel refractive indices of the LHW (N∥) are presented and discussed. Project supported by the National Magnetic Confinement Fusion Science Program of China (Grant Nos. 2011GB102000, 2012GB103000, and 2013GB106001), the National Natural Science Foundation of China (Grant Nos. 11175206 and 11305211), the JSPS-NRF-NSFC A3 Foresight Program in the Field of Plasma Physics (Grant No. 11261140328), and the Fundamental Research Funds for the Central Universities of China (Grant No. JZ2015HGBZ0472).
Reconfigurable Polymer Shells on Shape-Anisotropic Gold Nanoparticle Cores.
Kim, Juyeong; Song, Xiaohui; Kim, Ahyoung; Luo, Binbin; Smith, John W; Ou, Zihao; Wu, Zixuan; Chen, Qian
2018-05-03
Reconfigurable hybrid nanoparticles made by decorating flexible polymer shells on rigid inorganic nanoparticle cores can provide a unique means to build stimuli-responsive functional materials. The polymer shell reconfiguration has been expected to depend on the local core shape details, but limited systematic investigations have been undertaken. Here, two literature methods are adapted to coat either thiol-terminated polystyrene (PS) or polystyrene-poly(acrylic acid) (PS-b-PAA) shells onto a series of anisotropic gold nanoparticles of shapes not studied previously, including octahedron, concave cube, and bipyramid. These core shapes are complex, rendering shell contours with nanoscale details (e.g., local surface curvature, shell thickness) that are imaged and analyzed quantitatively using the authors' customized analysis codes. It is found that the hybrid nanoparticles based on the chosen core shapes, when coated with the above two polymer shells, exhibit distinct shell segregations upon a variation in solvent polarity or temperature. It is demonstrated that, for the PS-b-PAA-coated hybrid nanoparticles, the shell segregation is maintained even after a further decoration of the shell periphery with gold seeds; these seeds can potentially facilitate subsequent deposition of other nanostructures to enrich structural and functional diversity. These synthesis, imaging, and analysis methods for the hybrid nanoparticles of anisotropically shaped cores can potentially aid in their predictive design for materials reconfigurable from the bottom up. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Celik, Metin
2009-03-01
The International Safety Management (ISM) Code defines a broad framework for the safe management and operation of merchant ships, maintaining high standards of safety and environmental protection. On the other hand, ISO 14001:2004 provides a generic, worldwide environmental management standard that has been utilized by several industries. Both the ISM Code and ISO 14001:2004 have the practical goal of establishing a sustainable Integrated Environmental Management System (IEMS) for shipping businesses. This paper presents a hybrid design methodology that shows how requirements from both standards can be combined into a single execution scheme. Specifically, the Analytic Hierarchy Process (AHP) and Fuzzy Axiomatic Design (FAD) are used to structure an IEMS for ship management companies. This research provides decision aid to maritime executives in order to enhance the environmental performance in the shipping industry.
Primout, M.; Babonneau, D.; Jacquet, L.; ...
2015-11-10
We studied the titanium K-shell emission spectra from multi-keV x-ray source experiments with hybrid targets on the OMEGA laser facility. Using the collisional-radiative TRANSPEC code, dedicated to K-shell spectroscopy, we reproduced the main features of the detailed spectra measured with the time-resolved MSPEC spectrometer. We developed a general method to infer the N_e, T_e, and T_i characteristics of the target plasma from the spectral analysis (ratio of integrated Lyman-α to Helium-α in-band emission and the peak amplitude of individual line ratios) of the multi-keV x-ray emission. Finally, these thermodynamic conditions are compared to those calculated independently by the radiation-hydrodynamics transport code FCI2.
Prody, C A; Zevin-Sonkin, D; Gnatt, A; Goldberg, O; Soreq, H
1987-01-01
To study the primary structure and regulation of human cholinesterases, oligodeoxynucleotide probes were prepared according to a consensus peptide sequence present in the active site of both human serum pseudocholinesterase (BtChoEase; EC 3.1.1.8) and Torpedo electric organ "true" acetylcholinesterase (AcChoEase; EC 3.1.1.7). Using these probes, we isolated several cDNA clones from lambda gt10 libraries of fetal brain and liver origins. These include 2.4-kilobase cDNA clones that code for a polypeptide containing a putative signal peptide and the N-terminal, active site, and C-terminal peptides of human BtChoEase, suggesting that they code either for BtChoEase itself or for a very similar but distinct fetal form of cholinesterase. In RNA blots of poly(A)+ RNA from the cholinesterase-producing fetal brain and liver, these cDNAs hybridized with a single 2.5-kilobase band. Blot hybridization to human genomic DNA revealed that these fetal BtChoEase cDNA clones hybridize with DNA fragments of the total length of 17.5 kilobases, and signal intensities indicated that these sequences are not present in many copies. Both the cDNA-encoded protein and its nucleotide sequence display striking homology to parallel sequences published for Torpedo AcChoEase. These findings demonstrate extensive homologies between the fetal BtChoEase encoded by these clones and other cholinesterases of various forms and species. PMID:3035536
Theory, modeling, and integrated studies in the Arase (ERG) project
NASA Astrophysics Data System (ADS)
Seki, Kanako; Miyoshi, Yoshizumi; Ebihara, Yusuke; Katoh, Yuto; Amano, Takanobu; Saito, Shinji; Shoji, Masafumi; Nakamizo, Aoi; Keika, Kunihiro; Hori, Tomoaki; Nakano, Shin'ya; Watanabe, Shigeto; Kamiya, Kei; Takahashi, Naoko; Omura, Yoshiharu; Nose, Masahito; Fok, Mei-Ching; Tanaka, Takashi; Ieda, Akimasa; Yoshikawa, Akimasa
2018-02-01
Understanding of underlying mechanisms of drastic variations of the near-Earth space (geospace) is one of the current focuses of magnetospheric physics. The science target of the geospace research project Exploration of energization and Radiation in Geospace (ERG) is to understand the geospace variations with a focus on the relativistic electron acceleration and loss processes. In order to achieve the goal, the ERG project consists of three parts: the Arase (ERG) satellite, ground-based observations, and theory/modeling/integrated studies. The role of the theory/modeling/integrated studies part is to promote relevant theoretical and simulation studies as well as integrated data analysis to combine different kinds of observations and modeling. Here we provide technical reports on simulation and empirical models related to the ERG project together with their roles in the integrated studies of dynamic geospace variations. The simulation and empirical models covered include the radial diffusion model of the radiation belt electrons, GEMSIS-RB and RBW models, CIMI model with global MHD simulation REPPU, GEMSIS-RC model, plasmasphere thermosphere model, self-consistent wave-particle interaction simulations (electron hybrid code and ion hybrid code), the ionospheric electric potential (GEMSIS-POT) model, and SuperDARN electric field models with data assimilation. ERG (Arase) science center tools to support integrated studies with various kinds of data are also briefly introduced.
Recombinant Pinoresinol-Lariciresinol Reductase, Recombinant Dirigent Protein And Methods Of Use
Lewis, Norman G.; Davin, Laurence B.; Dinkova-Kostova, Albena T.; Fujita, Masayuki; Gang, David R.; Sarkanen, Simo; Ford, Joshua D.
2003-10-21
Dirigent proteins and pinoresinol/lariciresinol reductases have been isolated, together with cDNAs encoding dirigent proteins and pinoresinol/lariciresinol reductases. Accordingly, isolated DNA sequences are provided from source species Forsythia intermedia, Thuja plicata, Tsuga heterophylla, Eucommia ulmoides, Linum usitatissimum, and Schisandra chinensis, which code for the expression of dirigent proteins and pinoresinol/lariciresinol reductases. In other aspects, replicable recombinant cloning vehicles are provided which code for dirigent proteins or pinoresinol/lariciresinol reductases or for a base sequence sufficiently complementary to at least a portion of dirigent protein or pinoresinol/lariciresinol reductase DNA or RNA to enable hybridization therewith. In yet other aspects, modified host cells are provided that have been transformed, transfected, infected and/or injected with a recombinant cloning vehicle and/or DNA sequence encoding dirigent protein or pinoresinol/lariciresinol reductase. Thus, systems and methods are provided for the recombinant expression of dirigent proteins and/or pinoresinol/lariciresinol reductases.
SIERRA/Aero Theory Manual Version 4.46.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sierra Thermal/Fluid Team
2017-09-01
SIERRA/Aero is a two- and three-dimensional, node-centered, edge-based finite volume code that approximates the compressible Navier-Stokes equations on unstructured meshes. It is applicable to inviscid and high Reynolds number laminar and turbulent flows. Currently, two classes of turbulence models are provided: Reynolds Averaged Navier-Stokes (RANS) and hybrid methods such as Detached Eddy Simulation (DES). Large Eddy Simulation (LES) models are currently under development. The gas may be modeled either as ideal, or as a non-equilibrium, chemically reacting mixture of ideal gases. This document describes the mathematical models contained in the code, as well as certain implementation details. First, the governing equations are presented, followed by a description of the spatial discretization. Next, the time discretization is described, and finally the boundary conditions. Throughout the document, SIERRA/Aero is referred to simply as Aero for brevity.
SIERRA/Aero Theory Manual Version 4.44
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sierra Thermal/Fluid Team
2017-04-01
SIERRA/Aero is a two- and three-dimensional, node-centered, edge-based finite volume code that approximates the compressible Navier-Stokes equations on unstructured meshes. It is applicable to inviscid and high Reynolds number laminar and turbulent flows. Currently, two classes of turbulence models are provided: Reynolds Averaged Navier-Stokes (RANS) and hybrid methods such as Detached Eddy Simulation (DES). Large Eddy Simulation (LES) models are currently under development. The gas may be modeled either as ideal, or as a non-equilibrium, chemically reacting mixture of ideal gases. This document describes the mathematical models contained in the code, as well as certain implementation details. First, the governing equations are presented, followed by a description of the spatial discretization. Next, the time discretization is described, and finally the boundary conditions. Throughout the document, SIERRA/Aero is referred to simply as Aero for brevity.
Implementing Molecular Dynamics on Hybrid High Performance Computers - Three-Body Potentials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, W Michael; Yamada, Masako
The use of coprocessors or accelerators such as graphics processing units (GPUs) has become popular in scientific computing applications due to their low cost, impressive floating-point capabilities, high memory bandwidth, and low electrical power requirements. Hybrid high-performance computers, defined as machines with nodes containing more than one type of floating-point processor (e.g. CPU and GPU), are now becoming more prevalent due to these advantages. Although there has been extensive research into methods to efficiently use accelerators to improve the performance of molecular dynamics (MD) employing pairwise potential energy models, little is reported in the literature for models that include many-body effects. 3-body terms are required for many popular potentials such as MEAM, Tersoff, REBO, AIREBO, Stillinger-Weber, Bond-Order Potentials, and others. Because the per-atom simulation times are much higher for models incorporating 3-body terms, there is a clear need for efficient algorithms usable on hybrid high performance computers. Here, we report a shared-memory force-decomposition for 3-body potentials that avoids memory conflicts to allow for a deterministic code with substantial performance improvements on hybrid machines. We describe modifications necessary for use in distributed memory MD codes and show results for the simulation of water with Stillinger-Weber on the hybrid Titan supercomputer. We compare performance of the 3-body model to the SPC/E water model when using accelerators. Finally, we demonstrate that our approach can attain a speedup of 5.1 with acceleration on Titan for production simulations to study water droplet freezing on a surface.
Hybrid Parallelization of Adaptive MHD-Kinetic Module in Multi-Scale Fluid-Kinetic Simulation Suite
Borovikov, Sergey; Heerikhuisen, Jacob; Pogorelov, Nikolai
2013-04-01
The Multi-Scale Fluid-Kinetic Simulation Suite has a computational tool set for solving partially ionized flows. In this paper we focus on recent developments of the kinetic module which solves the Boltzmann equation using the Monte-Carlo method. The module has been recently redesigned to utilize intra-node hybrid parallelization. We describe in detail the redesign process, implementation issues, and modifications made to the code. Finally, we conduct a performance analysis.
Proceedings of the Fourth International Mobile Satellite Conference (IMSC 1995)
NASA Technical Reports Server (NTRS)
Rigley, Jack R. (Compiler); Estabrook, Polly (Compiler); Reekie, D. Hugh M. (Editor)
1995-01-01
The theme of the 1995 International Mobile Satellite Conference was 'Mobile Satcom Comes of Age'. The sessions included Modulation, Coding, and Multiple Access; Hybrid Networks - 1; Spacecraft Technology; Propagation; Applications and Experiments - 1; Advanced System Concepts and Analysis; Aeronautical Mobile Satellite Communications; Mobile Terminal Antennas; Mobile Terminal Technology; Current and Planned Systems; Direct Broadcast Satellite; The Use of CDMA for LEO and ICO Mobile Satellite Systems; Hybrid Networks - 2; and Applications and Experiments - 2.
NASA Astrophysics Data System (ADS)
Ohba, Nobuko; Ogata, Shuji; Tamura, Tomoyuki; Kobayashi, Ryo; Yamakawa, Shunsuke; Asahi, Ryoji
2012-02-01
Enhancing the diffusivity of the Li ion in a Li-graphite intercalation compound that has been used as a negative electrode in the Li-ion rechargeable battery, is important in improving both the recharging speed and power of the battery. In the compound, the Li ion creates a long-range stress field around itself by expanding the interlayer spacing of graphite. We advance the hybrid quantum-classical simulation code to include the external electric field in addition to the long-range stress field by first-principles simulation. In the hybrid code, the quantum region selected adaptively around the Li ion is treated using the real-space density-functional theory for electrons. The rest of the system is described with an empirical interatomic potential that includes the term relating to the dispersion force between the C atoms in different layers. Hybrid simulation runs for Li dynamics in graphite are performed at 423 K under various settings of the amplitude and frequency of alternating electric fields perpendicular to C-layers. We find that the in-plane diffusivity of the Li ion is enhanced significantly by the electric field if the amplitude is larger than 0.2 V/Å within its order and the frequency is as high as 1.7 THz. The microscopic mechanisms of the enhancement are explained.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dokhane, A.; Canepa, S.; Ferroukhi, H.
For stability analyses of the Swiss operating Boiling Water Reactors (BWRs), the methodology employed and validated so far at the Paul Scherrer Institute (PSI) was based on the RAMONA-3 code with a hybrid upstream static lattice/core analysis approach using CASMO-4 and PRESTO-2. More recently, steps were undertaken towards a new methodology based on the SIMULATE-3K (S3K) code for the dynamical analyses combined with the CMSYS system relying on the CASMO/SIMULATE-3 suite of codes, which was established at PSI to serve as a framework for the development and validation of reference core models of all the Swiss reactors and operated cycles. This paper presents a first validation of the new methodology on the basis of a benchmark recently organised by a Swiss utility and including the participation of several international organisations with various codes/methods. In parallel, a transition from CASMO-4E (C4E) to CASMO-5M (C5M) as basis for the CMSYS core models was also recently initiated at PSI. Consequently, it was considered adequate to address the impact of this transition both for the steady-state core analyses and for the stability calculations, and thereby to achieve an integral approach for the validation of the new S3K methodology. Therefore, a comparative assessment of C4E versus C5M is also presented in this paper with particular emphasis on the void coefficients and their impact on the downstream stability analysis results. (authors)
A multiphysics and multiscale software environment for modeling astrophysical systems
NASA Astrophysics Data System (ADS)
Portegies Zwart, Simon; McMillan, Steve; Harfst, Stefan; Groen, Derek; Fujii, Michiko; Nualláin, Breanndán Ó.; Glebbeek, Evert; Heggie, Douglas; Lombardi, James; Hut, Piet; Angelou, Vangelis; Banerjee, Sambaran; Belkus, Houria; Fragos, Tassos; Fregeau, John; Gaburov, Evghenii; Izzard, Rob; Jurić, Mario; Justham, Stephen; Sottoriva, Andrea; Teuben, Peter; van Bever, Joris; Yaron, Ofer; Zemp, Marcel
2009-05-01
We present MUSE, a software framework for combining existing computational tools for different astrophysical domains into a single multiphysics, multiscale application. MUSE facilitates the coupling of existing codes written in different languages by providing inter-language tools and by specifying an interface between each module and the framework that represents a balance between generality and computational efficiency. This approach allows scientists to use combinations of codes to solve highly coupled problems without the need to write new codes for other domains or significantly alter their existing codes. MUSE currently incorporates the domains of stellar dynamics, stellar evolution and stellar hydrodynamics for studying generalized stellar systems. We have now reached a "Noah's Ark" milestone, with (at least) two available numerical solvers for each domain. MUSE can treat multiscale and multiphysics systems in which the time- and size-scales are well separated, like simulating the evolution of planetary systems, small stellar associations, dense stellar clusters, galaxies and galactic nuclei. In this paper we describe three examples calculated using MUSE: the merger of two galaxies, the merger of two evolving stars, and a hybrid N-body simulation. In addition, we demonstrate an implementation of MUSE on a distributed computer which may also include special-purpose hardware, such as GRAPEs or GPUs, to accelerate computations. The current MUSE code base is publicly available as open source at http://muse.li.
Ibrahim, Khaled Z.; Madduri, Kamesh; Williams, Samuel; ...
2013-07-18
The Gyrokinetic Toroidal Code (GTC) uses the particle-in-cell method to efficiently simulate plasma microturbulence. This paper presents novel analysis and optimization techniques to enhance the performance of GTC on large-scale machines. We introduce cell access analysis to better manage locality vs. synchronization tradeoffs on CPU and GPU-based architectures. Finally, our optimized hybrid parallel implementation of GTC uses MPI, OpenMP, and NVIDIA CUDA, achieves up to a 2× speedup over the reference Fortran version on multiple parallel systems, and scales efficiently to tens of thousands of cores.
TORUS: Radiation transport and hydrodynamics code
NASA Astrophysics Data System (ADS)
Harries, Tim
2014-04-01
TORUS is a flexible radiation transfer and radiation-hydrodynamics code. The code has a basic infrastructure that includes the AMR mesh scheme that is used by several physics modules including atomic line transfer in a moving medium, molecular line transfer, photoionization, radiation hydrodynamics and radiative equilibrium. TORUS is useful for a variety of problems, including magnetospheric accretion onto T Tauri stars, spiral nebulae around Wolf-Rayet stars, discs around Herbig AeBe stars, structured winds of O supergiants and Raman-scattered line formation in symbiotic binaries, and dust emission and molecular line formation in star forming clusters. The code is written in Fortran 2003 and is compiled using a standard GNU makefile. The code is parallelized using both MPI and OpenMP, and can use these parallel sections either separately or in a hybrid mode.
with natural gas, hydrogen, or electricity must pay an annual fee of $200. Plug-in hybrid electric vehicle owners must pay an annual fee of $100. (Reference West Virginia Code 17A-10-3c)
Adaptive software-defined coded modulation for ultra-high-speed optical transport
NASA Astrophysics Data System (ADS)
Djordjevic, Ivan B.; Zhang, Yequn
2013-10-01
In optically-routed networks, different wavelength channels carrying the traffic to different destinations can have quite different optical signal-to-noise ratios (OSNRs), and the signal is differently impacted by various channel impairments. Regardless of the data destination, an optical transport system (OTS) must provide the target bit-error rate (BER) performance. To provide the target BER regardless of the data destination, we adjust the forward error correction (FEC) strength. Depending on the information obtained from the monitoring channels, we select the appropriate code rate matching the OSNR range that the current channel OSNR falls into. To avoid frame synchronization issues, we keep the codeword length fixed independent of the FEC code being employed. The common denominator is the employment of quasi-cyclic (QC-) LDPC codes in FEC. For high-speed implementation, low-complexity LDPC decoding algorithms are needed, and some of them will be described in this invited paper. Instead of conventional QAM-based modulation schemes, we employ the signal constellations obtained by the optimum signal constellation design (OSCD) algorithm. To improve the spectral efficiency, we perform simultaneous rate adaptation and signal constellation size selection so that the product of the number of bits per symbol and the code rate is closest to the channel capacity. Further, we describe the advantages of using 4D signaling instead of polarization-division multiplexed (PDM) QAM, by using 4D MAP detection, combined with LDPC coding, in a turbo equalization fashion. Finally, to solve the problems related to the limited bandwidth of the information infrastructure, high energy consumption, and heterogeneity of optical networks, we describe an adaptive energy-efficient hybrid coded-modulation scheme, which in addition to amplitude, phase, and polarization state employs the spatial modes as additional basis functions for multidimensional coded-modulation.
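The rate-adaptation rule described above (pick the discrete pair of constellation size and code rate whose product best approaches the estimated channel capacity) can be sketched as follows. This is a minimal Python illustration assuming a Shannon-capacity proxy and hypothetical rate and constellation sets; the paper's actual OSCD constellations, QC-LDPC code rates, and OSNR-to-rate mapping are not specified here.

import math

# Hypothetical discrete sets; the actual tables used by the authors are assumptions here.
CODE_RATES = [0.75, 0.8, 0.85, 0.9]      # assumed QC-LDPC code rates
BITS_PER_SYMBOL = [2, 3, 4, 5, 6]        # assumed constellation sizes (bits per 2D symbol)

def capacity_bits_per_symbol(snr_db):
    # AWGN Shannon capacity per 2D symbol, used here only as a proxy for the achievable rate.
    snr = 10 ** (snr_db / 10.0)
    return math.log2(1.0 + snr)

def select_mode(snr_db, margin_db=1.0):
    # Pick the (bits/symbol, code rate) pair whose net rate m*r is closest to, but not
    # above, the estimated capacity, with a small safety margin.
    cap = capacity_bits_per_symbol(snr_db - margin_db)
    best = None
    for m in BITS_PER_SYMBOL:
        for r in CODE_RATES:
            net = m * r
            if net <= cap and (best is None or net > best[0]):
                best = (net, m, r)
    return best  # None means even the lowest mode is not supportable at this SNR

if __name__ == "__main__":
    for snr in (6, 10, 14, 18):
        print(snr, "dB ->", select_mode(snr))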
Center for Extended Magnetohydrodynamics Modeling - Final Technical Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parker, Scott
This project funding supported approximately 74 percent of a Ph.D. graduate student, not including costs of travel and supplies. We had a highly successful research project including the development of a second-order implicit electromagnetic kinetic ion hybrid model [Cheng 2013, Sturdevant 2016], direct comparisons with the extended MHD NIMROD code and kinetic simulation [Schnack 2013], modeling of slab tearing modes using the fully kinetic ion hybrid model and, finally, modeling global tearing modes in cylindrical geometry using gyrokinetic simulation [Chen 2015, Chen 2016]. We developed an electromagnetic second-order implicit kinetic ion fluid electron hybrid model [Cheng 2013]. As a first step, we assumed isothermal electrons, but have included drift-kinetic electrons in similar models [Chen 2011]. We used this simulation to study the nonlinear evolution of the tearing mode in slab geometry, including nonlinear evolution and saturation [Cheng 2013]. Later, we compared this model directly to extended MHD calculations using the NIMROD code [Schnack 2013]. In this study, we investigated the ion-temperature-gradient instability with an extended MHD code for the first time and got reasonable agreement with the kinetic calculation in terms of linear frequency, growth rate and mode structure. We then extended this model to include orbit averaging and sub-cycling of the ions and compared directly to gyrokinetic theory [Sturdevant 2016]. This work was highlighted in an Invited Talk at the International Conference on the Numerical Simulation of Plasmas in 2015. The orbit averaging sub-cycling multi-scale algorithm is amenable to hybrid architectures with GPUs or math co-processors. Additionally, our participation in the Center for Extended Magnetohydrodynamics motivated our research on developing the capability for gyrokinetic simulation to model a global tearing mode. We did this in cylindrical geometry where the results could be benchmarked with existing eigenmode calculations. First, we developed a gyrokinetic code capable of simulating long wavelengths using a fluid electron model [Chen 2015]. We benchmarked this code with an eigenmode calculation. Besides having to rewrite the field solver due to the breakdown in the gyrokinetic ordering for long wavelengths, very high radial resolution was required. We developed a technique where we used the solution from the eigenmode solver to specify radial boundary conditions, allowing for a very high radial resolution of the inner solution. Using this technique enabled us to use our direct algorithm with gyrokinetic ions and drift kinetic electrons [Chen 2016]. This work was highlighted in an Invited Talk at the American Physical Society - Division of Plasma Physics in 2015.
Analysis of Effectiveness of Phoenix Entry Reaction Control System
NASA Technical Reports Server (NTRS)
Dyakonov, Artem A.; Glass, Christopher E.; Desai, Prasun N.; VanNorman, John W.
2008-01-01
Interaction between the external flowfield and the reaction control system (RCS) thruster plumes of the Phoenix capsule during entry has been investigated. The analysis covered rarefied, transitional, hypersonic and supersonic flight regimes. Performance of pitch, yaw and roll control authority channels was evaluated, with specific emphasis on the yaw channel due to its low nominal yaw control authority. Because Phoenix had already been constructed and its RCS could not be modified before flight, an assessment of RCS efficacy along the trajectory was needed to determine possible issues and to make necessary software changes. Effectiveness of the system at various regimes was evaluated using a hybrid DSMC-CFD technique, based on DSMC Analysis Code (DAC) code and General Aerodynamic Simulation Program (GASP), the LAURA (Langley Aerothermal Upwind Relaxation Algorithm) code, and the FUN3D (Fully Unstructured 3D) code. Results of the analysis at hypersonic and supersonic conditions suggest a significant aero-RCS interference which reduced the efficacy of the thrusters and could likely produce control reversal. Very little aero-RCS interference was predicted in rarefied and transitional regimes. A recommendation was made to the project to widen controller system deadbands to minimize (if not eliminate) the use of RCS thrusters through hypersonic and supersonic flight regimes, where their performance would be uncertain.
Hybrid parallel code acceleration methods in full-core reactor physics calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Courau, T.; Plagne, L.; Ponicot, A.
2012-07-01
When dealing with nuclear reactor calculation schemes, the need for three dimensional (3D) transport-based reference solutions is essential for both validation and optimization purposes. Considering a benchmark problem, this work investigates the potential of discrete ordinates (Sn) transport methods applied to 3D pressurized water reactor (PWR) full-core calculations. First, the benchmark problem is described. It involves a pin-by-pin description of a 3D PWR first core, and uses an 8-group cross-section library prepared with the DRAGON cell code. Then, a convergence analysis is performed using the PENTRAN parallel Sn Cartesian code. It discusses the spatial refinement and the associated angular quadrature required to properly describe the problem physics. It also shows that initializing the Sn solution with the EDF SPN solver COCAGNE reduces the number of iterations required to converge by nearly a factor of 6. Using a best estimate model, PENTRAN results are then compared to multigroup Monte Carlo results obtained with the MCNP5 code. Good consistency is observed between the two methods (Sn and Monte Carlo), with discrepancies that are less than 25 pcm for the k_eff, and less than 2.1% and 1.6% for the flux at the pin-cell level and for the pin-power distribution, respectively. (authors)
Toward enhancing the distributed video coder under a multiview video codec framework
NASA Astrophysics Data System (ADS)
Lee, Shih-Chieh; Chen, Jiann-Jone; Tsai, Yao-Hong; Chen, Chin-Hua
2016-11-01
The advance of video coding technology enables multiview video (MVV) or three-dimensional television (3-D TV) display for users with or without glasses. For mobile devices or wireless applications, a distributed video coder (DVC) can be utilized to shift the encoder complexity to the decoder under the MVV coding framework, denoted as multiview distributed video coding (MDVC). We proposed to exploit both inter- and intraview video correlations to enhance side information (SI) and improve the MDVC performance: (1) based on the multiview motion estimation (MVME) framework, a categorized block matching prediction with fidelity weights (COMPETE) was proposed to yield a high quality SI frame for better DVC reconstructed images. (2) The block transform coefficient properties, i.e., DCs and ACs, were exploited to design the priority rate control for the turbo code, such that the DVC decoding can be carried out with the fewest parity bits. In comparison, the proposed COMPETE method demonstrated lower time complexity while presenting better reconstructed video quality. Simulations show that the proposed COMPETE can reduce the time complexity of MVME by a factor of 1.29 to 2.56, as compared to previous hybrid MVME methods, while the image peak signal-to-noise ratios (PSNRs) of a decoded video can be improved by 0.2 to 3.5 dB, as compared to H.264/AVC intracoding.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takeoka, Masahiro; Fujiwara, Mikio; Mizuno, Jun
2004-05-01
Quantum-information theory predicts that when the transmission resource is doubled in quantum channels, the amount of information transmitted can be increased more than twice by quantum-channel coding technique, whereas the increase is at most twice in classical information theory. This remarkable feature, the superadditive quantum-coding gain, can be implemented by appropriate choices of code words and corresponding quantum decoding which requires a collective quantum measurement. Recently, an experimental demonstration was reported [M. Fujiwara et al., Phys. Rev. Lett. 90, 167906 (2003)]. The purpose of this paper is to describe our experiment in detail. Particularly, a design strategy of quantum-collective decoding in physical quantum circuits is emphasized. We also address the practical implication of the gain on communication performance by introducing the quantum-classical hybrid coding scheme. We show how the superadditive quantum-coding gain, even in a small code length, can boost the communication performance of conventional coding techniques.
Hybrid Cryptosystem Using Tiny Encryption Algorithm and LUC Algorithm
NASA Astrophysics Data System (ADS)
Rachmawati, Dian; Sharif, Amer; Jaysilen; Andri Budiman, Mohammad
2018-01-01
Security is a very important issue in data transmission, and there are many methods for making files more secure. One of these methods is cryptography, which secures a file by transforming it into a hidden code that conceals the original content; anyone without the corresponding key cannot decrypt the hidden code to read the original file. Among the many techniques used in cryptography is the hybrid cryptosystem, which uses a symmetric algorithm to secure the file and an asymmetric algorithm to secure the symmetric key. In this research, the TEA algorithm is used as the symmetric algorithm and the LUC algorithm as the asymmetric algorithm. The system is tested by encrypting and decrypting a file with TEA and by encrypting and decrypting the TEA key with LUC. The results show that when TEA is used to encrypt the file, the ciphertext consists of characters from the ASCII (American Standard Code for Information Interchange) table represented as hexadecimal numbers, and the ciphertext size increases by sixteen bytes for every eight characters added to the plaintext.
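To make the symmetric half of such a hybrid cryptosystem concrete, the sketch below implements standard TEA block encryption in Python. This is a minimal illustration only; the paper's LUC key-exchange step, padding scheme, and file handling are not reproduced, and the zero-padding and hard-coded session key shown here are assumptions made for demonstration.

import struct

DELTA = 0x9E3779B9
MASK = 0xFFFFFFFF

def tea_encrypt_block(v0, v1, key):
    # Encrypt one 64-bit block (two 32-bit words) with a 128-bit key (four 32-bit words).
    k0, k1, k2, k3 = key
    s = 0
    for _ in range(32):
        s = (s + DELTA) & MASK
        v0 = (v0 + (((v1 << 4) + k0) ^ (v1 + s) ^ ((v1 >> 5) + k1))) & MASK
        v1 = (v1 + (((v0 << 4) + k2) ^ (v0 + s) ^ ((v0 >> 5) + k3))) & MASK
    return v0, v1

def tea_decrypt_block(v0, v1, key):
    # Inverse of tea_encrypt_block.
    k0, k1, k2, k3 = key
    s = (DELTA * 32) & MASK
    for _ in range(32):
        v1 = (v1 - (((v0 << 4) + k2) ^ (v0 + s) ^ ((v0 >> 5) + k3))) & MASK
        v0 = (v0 - (((v1 << 4) + k0) ^ (v1 + s) ^ ((v1 >> 5) + k1))) & MASK
        s = (s - DELTA) & MASK
    return v0, v1

def tea_encrypt(data, key):
    # Zero-pad to a multiple of 8 bytes (an assumed padding choice) and encrypt block by block.
    data += b"\x00" * (-len(data) % 8)
    out = b""
    for i in range(0, len(data), 8):
        v0, v1 = struct.unpack(">2I", data[i:i + 8])
        out += struct.pack(">2I", *tea_encrypt_block(v0, v1, key))
    return out

if __name__ == "__main__":
    key = (0x01234567, 0x89ABCDEF, 0xFEDCBA98, 0x76543210)  # hypothetical 128-bit session key
    ciphertext = tea_encrypt(b"hello hybrid cryptosystem", key)
    print(ciphertext.hex())

In the full hybrid scheme, only the 128-bit TEA session key would be encrypted with the asymmetric LUC algorithm and transmitted alongside the TEA ciphertext.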
Atomistic Simulations of Grain Boundary Pinning in CuFe Alloys
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zepeda-Ruiz, L A; Gilmer, G H; Sadigh, B
2005-05-22
The authors apply a hybrid Monte Carlo-molecular dynamics code to the study of grain boundary motion upon annealing of pure Cu and Cu with low concentrations of Fe. The hybrid simulations account for segregation and precipitation of the low solubility Fe, together with curvature driven grain boundary motion. Grain boundaries in two different systems, a Σ7 U-shaped half-loop grain and a nanocrystalline sample, were found to be pinned in the presence of Fe concentrations exceeding 3%.
Efficient hybrid-symbolic methods for quantum mechanical calculations
NASA Astrophysics Data System (ADS)
Scott, T. C.; Zhang, Wenxing
2015-06-01
We present hybrid symbolic-numerical tools to generate optimized numerical code for rapid prototyping and fast numerical computation, starting from a computer algebra system (CAS) and tailored to any given quantum mechanical problem. Although a major focus concerns the quantum chemistry methods of H. Nakatsuji, which have yielded successful and very accurate eigensolutions for small atoms and molecules, the tools are general and may be applied to any basis set calculation with a variational principle applied to its linear and non-linear parameters.
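As a minimal illustration of the hybrid symbolic-numerical idea (not the authors' actual toolchain), the following sketch uses SymPy as the CAS to derive a symbolic local-energy expression and then generates a fast NumPy-callable function from it; the hydrogen-like trial wavefunction and variable names are assumptions chosen for demonstration.

import sympy as sp
import numpy as np

r, alpha = sp.symbols("r alpha", positive=True)

# Symbolic trial wavefunction and radial local energy for a hydrogen-like atom (illustrative)
psi = sp.exp(-alpha * r)
local_energy = sp.simplify(
    -sp.Rational(1, 2) * sp.diff(r**2 * sp.diff(psi, r), r) / (r**2 * psi) - 1 / r
)

# Generate fast numerical code from the symbolic expression
E_loc = sp.lambdify((r, alpha), local_energy, modules="numpy")

rs = np.linspace(0.1, 10.0, 1000)
# Equals -0.5 hartree for alpha = 1, the exact exponent, independent of the sample points
print(float(np.mean(E_loc(rs, 1.0))))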
Transport Test Problems for Hybrid Methods Development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shaver, Mark W.; Miller, Erin A.; Wittman, Richard S.
2011-12-28
This report presents 9 test problems to guide testing and development of hybrid calculations for the ADVANTG code at ORNL. These test cases can be used for comparing different types of radiation transport calculations, as well as for guiding the development of variance reduction methods. Cases are drawn primarily from existing or previous calculations with a preference for cases which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22.
Analysis of high velocity impact on hybrid composite fan blades
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Sinclair, J. H.
1979-01-01
Recent developments in the analysis of high velocity impact of composite blades are described, using a computerized capability which consists of coupling a composites mechanics code with the direct-time integration features of NASTRAN. The application of the capability to determine the linear dynamic response of an interply hybrid composite aircraft engine fan blade is described in detail. The results also show that the impact stresses reach sufficiently high magnitudes to cause failures in the impact region at early times of the impact event.
Bråte, Jon; Adamski, Marcin; Neumann, Ralf S; Shalchian-Tabrizi, Kamran; Adamska, Maja
2015-12-22
Long non-coding RNAs (lncRNAs) play important regulatory roles during animal development, and it has been hypothesized that an RNA-based gene regulation was important for the evolution of developmental complexity in animals. However, most studies of lncRNA gene regulation have been performed using model animal species, and very little is known about this type of gene regulation in non-bilaterians. We have therefore analysed RNA-Seq data derived from a comprehensive set of embryogenesis stages in the calcareous sponge Sycon ciliatum and identified hundreds of developmentally expressed intergenic lncRNAs (lincRNAs) in this species. In situ hybridization of selected lincRNAs revealed dynamic spatial and temporal expression during embryonic development. More than 600 lincRNAs constitute integral parts of differentially expressed gene modules, which also contain known developmental regulatory genes, e.g. transcription factors and signalling molecules. This study provides insights into the non-coding gene repertoire of one of the earliest evolved animal lineages, and suggests that RNA-based gene regulation was probably present in the last common ancestor of animals. © 2015 The Authors.
NUMERICAL FLOW AND TRANSPORT SIMULATIONS SUPPORTING THE SALTSTONE FACILITY PERFORMANCE ASSESSMENT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flach, G.
2009-02-28
The Saltstone Disposal Facility Performance Assessment (PA) is being revised to incorporate requirements of Section 3116 of the Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005 (NDAA), and updated data and understanding of vault performance since the 1992 PA (Cook and Fowler 1992) and related Special Analyses. A hybrid approach was chosen for modeling contaminant transport from vaults and future disposal cells to exposure points. A higher resolution, largely deterministic, analysis is performed on a best-estimate Base Case scenario using the PORFLOW numerical analysis code. A few additional sensitivity cases are simulated to examine alternative scenarios and parameter settings. Stochastic analysis is performed on a simpler representation of the SDF system using the GoldSim code to estimate uncertainty and sensitivity about the Base Case. This report describes development of PORFLOW models supporting the SDF PA, and presents sample results to illustrate model behaviors and define impacts relative to key facility performance objectives. The SDF PA document, when issued, should be consulted for a comprehensive presentation of results.
Perceptual video quality assessment in H.264 video coding standard using objective modeling.
Karthikeyan, Ramasamy; Sainarayanan, Gopalakrishnan; Deepa, Subramaniam Nachimuthu
2014-01-01
Since usage of digital video is widespread nowadays, quality considerations have become essential, and industry demand for video quality measurement is rising. This proposal provides a method of perceptual quality assessment in the H.264 standard encoder using objective modeling. For this purpose, quality impairments are calculated and a model is developed to compute the perceptual video quality metric based on a no-reference method. Because of the differences between the original video and the encoded video introduced by encoding processes such as intra- and inter-prediction, the quality of the encoded picture is degraded. The proposed model takes into account the artifacts introduced by these spatial and temporal activities in hybrid block-based coding methods, and an objective modeling of these artifacts into a subjective quality estimate is proposed. The proposed model calculates the objective quality metric using subjective impairments (blockiness, blur, and jerkiness) compared to the existing bitrate-only calculation defined in the ITU-T G.1070 model. The accuracy of the proposed perceptual video quality metrics is compared against popular full-reference objective methods as defined by VQEG.
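One common way to estimate such no-reference impairments and fold them into a single objective score is sketched below with NumPy; the boundary-difference blockiness measure, Laplacian-variance blur measure, frame-difference jerkiness measure, and the combining weights are generic assumptions for illustration, not the specific formulas used in this paper.

import numpy as np

def blockiness(frame, block=8):
    # Mean absolute luminance jump across 8x8 block boundaries (simple no-reference proxy).
    cols = np.abs(np.diff(frame.astype(float), axis=1))
    rows = np.abs(np.diff(frame.astype(float), axis=0))
    return (cols[:, block - 1::block].mean() + rows[block - 1::block, :].mean()) / 2.0

def blur(frame):
    # Inverse of the Laplacian variance: larger values indicate a blurrier frame.
    f = frame.astype(float)
    lap = -4 * f[1:-1, 1:-1] + f[:-2, 1:-1] + f[2:, 1:-1] + f[1:-1, :-2] + f[1:-1, 2:]
    return 1.0 / (lap.var() + 1e-6)

def jerkiness(frames):
    # Variance of the mean frame-to-frame differences as a temporal-smoothness proxy.
    diffs = [np.abs(frames[i + 1].astype(float) - frames[i].astype(float)).mean()
             for i in range(len(frames) - 1)]
    return float(np.var(diffs))

def quality_score(frames, w=(0.4, 0.4, 0.2)):
    # Combine the three impairments into one no-reference score (hypothetical weights);
    # higher (less negative) means better estimated quality.
    b = np.mean([blockiness(f) for f in frames])
    u = np.mean([blur(f) for f in frames])
    j = jerkiness(frames)
    return -(w[0] * b + w[1] * u + w[2] * j)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clip = [rng.integers(0, 256, size=(144, 176), dtype=np.uint8) for _ in range(10)]
    print(quality_score(clip))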
MaMiCo: Software design for parallel molecular-continuum flow simulations
NASA Astrophysics Data System (ADS)
Neumann, Philipp; Flohr, Hanno; Arora, Rahul; Jarmatz, Piet; Tchipev, Nikola; Bungartz, Hans-Joachim
2016-03-01
The macro-micro-coupling tool (MaMiCo) was developed to ease the development of and modularize molecular-continuum simulations, retaining sequential and parallel performance. We demonstrate the functionality and performance of MaMiCo by coupling the spatially adaptive Lattice Boltzmann framework waLBerla with four molecular dynamics (MD) codes: the light-weight Lennard-Jones-based implementation SimpleMD, the node-level optimized software ls1 mardyn, and the community codes ESPResSo and LAMMPS. We detail interface implementations to connect each solver with MaMiCo. The coupling for each waLBerla-MD setup is validated in three-dimensional channel flow simulations which are solved by means of a state-based coupling method. We provide sequential and strong scaling measurements for the four molecular-continuum simulations. The overhead of MaMiCo is found to come at 10%-20% of the total (MD) runtime. The measurements further show that scalability of the hybrid simulations is reached on up to 500 Intel SandyBridge, and more than 1000 AMD Bulldozer compute cores.
Rate-compatible punctured convolutional codes (RCPC codes) and their applications
NASA Astrophysics Data System (ADS)
Hagenauer, Joachim
1988-04-01
The concept of punctured convolutional codes is extended by puncturing a low-rate 1/N code periodically with period P to obtain a family of codes with rate P/(P + l), where l can be varied between 1 and (N - 1)P. A rate-compatibility restriction on the puncturing tables ensures that all code bits of high rate codes are used by the lower-rate codes. This allows transmission of incremental redundancy in ARQ/FEC (automatic repeat request/forward error correction) schemes and continuous rate variation to change from low to high error protection within a data frame. Families of RCPC codes with rates between 8/9 and 1/4 are given for memories M from 3 to 6 (8 to 64 trellis states) together with the relevant distance spectra. These codes are almost as good as the best known general convolutional codes of the respective rates. It is shown that the same Viterbi decoder can be used for all RCPC codes of the same M. The application of RCPC codes to hybrid ARQ/FEC schemes is discussed for Gaussian and Rayleigh fading channels using channel-state information to optimize throughput.
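The rate family P/(P + l) and the rate-compatibility restriction can be illustrated with a small puncturing sketch in Python; the specific tables below (for a rate-1/2 mother code, N = 2, period P = 4) are illustrative assumptions, not the optimized tables published in the paper.

import numpy as np

P = 4  # puncturing period (information bits per period)

# Hypothetical rate-compatible puncturing tables: entry 1 = transmit, 0 = puncture.
# Each lower-rate table keeps every bit kept by the higher-rate tables, which is the
# rate-compatibility restriction.
TABLES = {
    "4/5": np.array([[1, 1, 1, 1],
                     [1, 0, 0, 0]]),  # keeps P + 1 = 5 coded bits per period
    "4/6": np.array([[1, 1, 1, 1],
                     [1, 0, 1, 0]]),  # keeps P + 2 = 6 coded bits per period
    "4/8": np.array([[1, 1, 1, 1],
                     [1, 1, 1, 1]]),  # no puncturing: the rate-1/2 mother code
}

def puncture(coded_streams, table):
    # coded_streams: (2, L) array holding the two output streams of the rate-1/2 encoder.
    n_streams, length = coded_streams.shape
    kept = []
    for j in range(length):
        for i in range(n_streams):
            if table[i, j % P]:
                kept.append(coded_streams[i, j])
    return np.array(kept)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    coded = rng.integers(0, 2, size=(2, 4 * P))  # stand-in for the mother encoder output
    for name, tab in TABLES.items():
        tx = puncture(coded, tab)
        print(name, "transmits", len(tx), "of", coded.size, "coded bits")

For incremental-redundancy ARQ, the transmitter would first send only the bits kept by the highest-rate table and, on each retransmission request, send just the additional bits enabled by the next table; rate compatibility guarantees the earlier bits remain part of every lower-rate code.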
The high-βN hybrid scenario for ITER and FNSF steady-state missions
Turco, Francesca; Petty, Clinton C.; Luce, Timothy C.; ...
2015-05-15
New experiments on DIII-D have demonstrated the steady-state potential of the hybrid scenario, with 1 MA of plasma current driven fully non-inductively and βN up to 3.7 sustained for ~3 s (~1.5 current diffusion times, τR, in DIII-D), providing the basis for an attractive option for steady-state operation in ITER and FNSF. Excellent confinement is achieved (H98y2 ~ 1.6) without performance limiting tearing modes. Furthermore, the hybrid regime overcomes the need for off-axis current drive efficiency, taking advantage of poloidal magnetic flux pumping that is believed to be the result of a saturated 3/2 tearing mode. This allows for efficient current drive close to the axis, without deleterious sawtooth instabilities. In these experiments, the edge surface loop voltage is driven down to zero for >1 τR when the poloidal β is increased above 1.9 at a plasma current of 1.0 MA and the ECH power is increased to 3.2 MW. Stationary operation of hybrid plasmas with all on-axis current drive is sustained at pressures slightly above the ideal no-wall limit, while the calculated ideal with-wall MHD limit is βN ~ 4-4.5. Off-axis NBI power has been used to broaden the pressure and current profiles in this scenario, seeking to take advantage of higher predicted kink stability limits and lower values of the tearing stability index Δ', as calculated by the DCON and PEST3 codes. Our results are based on measured profiles that predict ideal limits at βN > 4.5, 10% higher than the cases with on-axis NBI. A 0-D model, based on the present confinement, βN and shape values of the DIII-D hybrid scenario, shows that these plasmas are consistent with the ITER 9 MA, Q=5 mission and the FNSF 6.7 MA scenario with Q=3.5. With collisionality and edge safety factor values comparable to those envisioned for ITER and FNSF, the high-βN hybrid represents an attractive high performance option for the steady-state missions of these devices.
NASA Astrophysics Data System (ADS)
Nowak, Joshua Michael
A hybrid atmospheric pressure-electrospinning plasma system was developed to be used for the production of nanofibers and enhance their performance for various applications. Electrospun nanofibers are excellent candidates for protective clothing in the field of chemical and biological warfare defense; however, nanofibers are structurally weak and easily abrade and tear. They can be strengthened through the support of a substrate fabric, but they do not adhere well to substrates. Through the use of the developed hybrid system with either pure He or He/O2 (99/1) feed gas, adherence to the substrate along with abrasion and flex resistance were improved. The plasma source was diagnosed electrically, thermally, and optically. An equivalent circuit model was developed for non-thermal, highly collisional plasmas that can solve for average electron temperature and electron number density. The obtained temperatures (~ 3eV) correlate very well with the results of a neutral Bremsstrahlung continuum matching technique that was also employed. Using the temperatures and number densities obtained from the circuit model and the optical spectroscopy, a global chemical kinetics code was written in order to solve for radical and ion concentrations. This code shows that there are significant concentrations of oxygen radicals present. The XPS analysis confirmed that there was an increase of surface oxygen from 11.1% up to 16.6% for the He/O2 plasma and that the C-O bonding, which was not present in the control samples, has increased to 45.4%. The adhesive strength to the substrate has a significant increase of 81% for helium plasma and 144% for He/O2 plasma; however, these values remain below the desired values for protective clothing applications. The hybrid system displayed the ability to oxygenate nanofibers as they are being electrospun and shows the feasibility of making other surface modifications. The developed circuit model and chemical kinetics code both show promise as tools for deterministic atmospheric pressure plasma research in the field of surface modifications.
NASA Astrophysics Data System (ADS)
Preynas, M.; Goniche, M.; Hillairet, J.; Litaudon, X.; Ekedahl, A.; Colas, L.
2013-01-01
To achieve steady-state operation on future fusion devices, in particular on ITER, the coupling of the lower hybrid wave must be optimized over a wide range of edge conditions. However, under some specific conditions, deleterious effects on the lower hybrid current drive (LHCD) coupling are sometimes observed on Tore Supra. Accordingly, dedicated LHCD experiments have been performed using the LHCD system of Tore Supra, composed of two different conceptual designs of launcher: the fully active multi-junction (FAM) and the new passive active multi-junction (PAM) antennas. A non-linear interaction between the electron density and the electric field has been characterized in a thin plasma layer in front of the two LHCD antennas. The resulting dependence of the power reflection coefficient (RC) on the LHCD power is not predicted by the standard linear theory of LH wave coupling. A theoretical model is suggested to describe the non-linear wave-plasma interaction induced by the ponderomotive effect and implemented in a new full wave LHCD code, PICCOLO-2D (ponderomotive effect in a coupling code of lower hybrid wave-2D). The code self-consistently treats the wave propagation in the antenna vicinity and its interaction with the local edge plasma density. The simulation reproduces very well the occurrence of non-linear behaviour in the coupling observed in the LHCD experiments. The important differences and trends between the FAM and the PAM antennas, especially a larger increase in RC for the FAM, are also reproduced by the PICCOLO-2D simulation. The working hypothesis of the contribution of the ponderomotive effect to the non-linear observations of LHCD coupling is therefore validated through this comprehensive modelling for the first time on the FAM and PAM antennas on Tore Supra.
NASA Astrophysics Data System (ADS)
Cui, Z.; Welty, C.; Maxwell, R. M.
2011-12-01
Lagrangian, particle-tracking models are commonly used to simulate solute advection and dispersion in aquifers. They are computationally efficient and suffer from much less numerical dispersion than grid-based techniques, especially in heterogeneous and advectively-dominated systems. Although particle-tracking models are capable of simulating geochemical reactions, these reactions are often simplified to first-order decay and/or linear, first-order kinetics. Nitrogen transport and transformation in aquifers involves both biodegradation and higher-order geochemical reactions. In order to take advantage of the particle-tracking approach, we have enhanced an existing particle-tracking code SLIM-FAST, to simulate nitrogen transport and transformation in aquifers. The approach we are taking is a hybrid one: the reactive multispecies transport process is operator split into two steps: (1) the physical movement of the particles including the attachment/detachment to solid surfaces, which is modeled by a Lagrangian random-walk algorithm; and (2) multispecies reactions including biodegradation are modeled by coupling multiple Monod equations with other geochemical reactions. The coupled reaction system is solved by an ordinary differential equation solver. In order to solve the coupled system of equations, after step 1, the particles are converted to grid-based concentrations based on the mass and position of the particles, and after step 2 the newly calculated concentration values are mapped back to particles. The enhanced particle-tracking code is capable of simulating subsurface nitrogen transport and transformation in a three-dimensional domain with variably saturated conditions. Potential application of the enhanced code is to simulate subsurface nitrogen loading to the Chesapeake Bay and its tributaries. Implementation details, verification results of the enhanced code with one-dimensional analytical solutions and other existing numerical models will be presented in addition to a discussion of implementation challenges.
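The operator-splitting idea described above (a Lagrangian random-walk transport step, a particle-to-grid mapping, a cell-by-cell reaction step driven by Monod kinetics, and a mapping back to particle masses) can be sketched in a few lines of Python. This is an illustrative one-dimensional toy, not SLIM-FAST itself; the domain, velocity, dispersion coefficient, Monod parameters, and mass-scaling rule are assumptions chosen only to show the structure of the hybrid scheme.

import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(42)
nx, dx, dt = 50, 1.0, 0.1
v, D = 0.5, 0.05                        # advection velocity and dispersion coefficient

def move_particles(x):
    # Step 1: Lagrangian random-walk step (advection + dispersion) for each particle.
    return x + v * dt + rng.normal(0.0, np.sqrt(2.0 * D * dt), size=x.shape)

def particles_to_grid(x, mass):
    # Convert particle masses and positions to grid-based concentrations.
    conc = np.zeros(nx)
    idx = np.clip((x / dx).astype(int), 0, nx - 1)
    np.add.at(conc, idx, mass)
    return conc / dx, idx

def monod_rhs(t, y, mu_max=1.0, Ks=0.5, Y=0.4):
    # Step 2: coupled Monod kinetics for substrate S (e.g. nitrate) and biomass B.
    S, B = y
    growth = mu_max * S / (Ks + S) * B
    return [-growth / Y, growth]

x = rng.uniform(0, 10, size=2000)        # particle positions
mass = np.full(x.size, 1e-3)             # substrate mass carried by each particle
biomass = np.full(nx, 0.01)              # grid-based biomass

for step in range(100):
    x = move_particles(x)
    conc, idx = particles_to_grid(x, mass)
    for i in np.unique(idx):              # react cell by cell over one time step
        sol = solve_ivp(monod_rhs, (0.0, dt), [conc[i], biomass[i]], rtol=1e-6)
        scale = sol.y[0, -1] / conc[i] if conc[i] > 0 else 1.0
        mass[idx == i] *= scale           # map the updated concentration back to particle masses
        biomass[i] = sol.y[1, -1]

print("remaining substrate mass:", mass.sum())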
CFD Predictions for Transonic Performance of the ERA Hybrid Wing-Body Configuration
NASA Technical Reports Server (NTRS)
Deere, Karen A.; Luckring, James M.; McMillin, S. Naomi; Flamm, Jeffrey D.; Roman, Dino
2016-01-01
A computational study was performed for a Hybrid Wing Body configuration that was focused at transonic cruise performance conditions. In the absence of experimental data, two fully independent computational fluid dynamics analyses were conducted to add confidence to the estimated transonic performance predictions. The primary analysis was performed by Boeing with the structured overset-mesh code OVERFLOW. The secondary analysis was performed by NASA Langley Research Center with the unstructured-mesh code USM3D. Both analyses were performed at full-scale flight conditions and included three configurations customary to drag buildup and interference analysis: a powered complete configuration, the configuration with the nacelle/pylon removed, and the powered nacelle in isolation. The results in this paper are focused primarily on transonic performance up to cruise and through drag rise. Comparisons between the CFD results were very good despite some minor geometric differences in the two analyses.
3D unstructured-mesh radiation transport codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morel, J.
1997-12-31
Three unstructured-mesh radiation transport codes are currently being developed at Los Alamos National Laboratory. The first code is ATTILA, which uses an unstructured tetrahedral mesh in conjunction with standard Sn (discrete-ordinates) angular discretization, standard multigroup energy discretization, and linear-discontinuous spatial differencing. ATTILA solves the standard first-order form of the transport equation using source iteration in conjunction with diffusion-synthetic acceleration of the within-group source iterations, and is designed to run primarily on workstations. The second code is DANTE, which uses a hybrid finite-element mesh consisting of arbitrary combinations of hexahedra, wedges, pyramids, and tetrahedra. DANTE solves several second-order self-adjoint forms of the transport equation including the even-parity equation, the odd-parity equation, and a new equation called the self-adjoint angular flux equation. DANTE also offers three angular discretization options: Sn (discrete ordinates), Pn (spherical harmonics), and SPn (simplified spherical harmonics). DANTE is designed to run primarily on massively parallel message-passing machines, such as the ASCI-Blue machines at LANL and LLNL. The third code is PERICLES, which uses the same hybrid finite-element mesh as DANTE, but solves the standard first-order form of the transport equation rather than a second-order self-adjoint form. PERICLES uses a standard Sn discretization in angle in conjunction with trilinear-discontinuous spatial differencing, and diffusion-synthetic acceleration of the within-group source iterations. PERICLES was initially designed to run on workstations, but a version for massively parallel message-passing machines will be built. The three codes will be described in detail and computational results will be presented.
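As a minimal illustration of the source-iteration Sn approach named above (and not code from ATTILA, DANTE, or PERICLES), the following Python sketch solves a one-group, isotropically scattering slab problem with diamond differencing and vacuum boundaries; the cross sections, fixed source, and grid are arbitrary demonstration values, and diffusion-synthetic acceleration is omitted.

import numpy as np

def sn_slab(nx=200, length=10.0, n_ang=8, sigma_t=1.0, sigma_s=0.5, q=1.0, tol=1e-8):
    dx = length / nx
    mu, w = np.polynomial.legendre.leggauss(n_ang)   # angles and weights on [-1, 1]
    phi = np.zeros(nx)                                # scalar flux
    for it in range(1000):
        src = 0.5 * (sigma_s * phi + q)               # isotropic emission density per unit mu
        phi_new = np.zeros(nx)
        for m in range(n_ang):
            psi_in = 0.0                              # vacuum boundary condition
            cells = range(nx) if mu[m] > 0 else range(nx - 1, -1, -1)
            for i in cells:
                a = 2.0 * abs(mu[m])
                psi_c = (src[i] * dx + a * psi_in) / (sigma_t * dx + a)  # diamond difference
                psi_in = 2.0 * psi_c - psi_in         # outgoing edge flux becomes next inflow
                phi_new[i] += w[m] * psi_c
        if np.max(np.abs(phi_new - phi)) < tol:       # source iteration convergence check
            return phi_new, it
        phi = phi_new
    return phi, it

if __name__ == "__main__":
    flux, iters = sn_slab()
    print("converged in", iters, "iterations; midplane flux =", flux[len(flux) // 2])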
Alvarado, David M; Yang, Ping; Druley, Todd E; Lovett, Michael; Gurnett, Christina A
2014-06-01
Despite declining sequencing costs, few methods are available for cost-effective single-nucleotide polymorphism (SNP), insertion/deletion (INDEL) and copy number variation (CNV) discovery in a single assay. Commercially available methods require a large up-front investment in a specific region and are only cost-effective for large sample sets. Here, we introduce a novel, flexible approach for multiplexed targeted sequencing and CNV analysis of large genomic regions called multiplexed direct genomic selection (MDiGS). MDiGS combines biotinylated bacterial artificial chromosome (BAC) capture and multiplexed pooled capture for SNP/INDEL and CNV detection of 96 multiplexed samples on a single MiSeq run. MDiGS is advantageous over other methods for CNV detection because pooled sample capture and hybridization to large contiguous BAC baits reduce the sample and probe hybridization variability inherent in other methods. We performed MDiGS capture for three chromosomal regions consisting of ∼550 kb of coding and non-coding sequence with DNA from 253 patients with congenital lower limb disorders. PITX1 nonsense and HOXC11 S191F missense mutations were identified that segregate in clubfoot families. Using a novel pooled-capture reference strategy, we identified recurrent chr17q23.1q23.2 duplications and small HOXC 5' cluster deletions (51 kb and 12 kb). Given the current interest in coding and non-coding variants in human disease, MDiGS fulfills a niche for comprehensive and low-cost evaluation of CNVs, coding, and non-coding variants across candidate regions of interest. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
HACC: Extreme Scaling and Performance Across Diverse Architectures
NASA Astrophysics Data System (ADS)
Habib, Salman; Morozov, Vitali; Frontiere, Nicholas; Finkel, Hal; Pope, Adrian; Heitmann, Katrin
2013-11-01
Supercomputing is evolving towards hybrid and accelerator-based architectures with millions of cores. The HACC (Hardware/Hybrid Accelerated Cosmology Code) framework exploits this diverse landscape at the largest scales of problem size, obtaining high scalability and sustained performance. Developed to satisfy the science requirements of cosmological surveys, HACC melds particle and grid methods using a novel algorithmic structure that flexibly maps across architectures, including CPU/GPU, multi/many-core, and Blue Gene systems. We demonstrate the success of HACC on two very different machines, the CPU/GPU system Titan and the BG/Q systems Sequoia and Mira, attaining unprecedented levels of scalable performance. We demonstrate strong and weak scaling on Titan, obtaining up to 99.2% parallel efficiency, evolving 1.1 trillion particles. On Sequoia, we reach 13.94 PFlops (69.2% of peak) and 90% parallel efficiency on 1,572,864 cores, with 3.6 trillion particles, the largest cosmological benchmark yet performed. HACC design concepts are applicable to several other supercomputer applications.
A novel architecture of non-volatile magnetic arithmetic logic unit using magnetic tunnel junctions
NASA Astrophysics Data System (ADS)
Guo, Wei; Prenat, Guillaume; Dieny, Bernard
2014-04-01
Complementary metal-oxide-semiconductor (CMOS) technology is facing increasingly difficult obstacles such as power consumption and interconnection delay. Novel hybrid technologies and architectures are being investigated with the aim to circumvent some of these limits. In particular, hybrid CMOS/magnetic technology based on magnetic tunnel junctions (MTJs) is considered as a very promising approach thanks to the full compatibility of MTJs with CMOS technology. By tightly merging the conventional electronics with magnetism, both logic and memory functions can be implemented in the same device. As a result, non-volatility is directly brought into logic circuits, yielding significant improvement of device performances and new functionalities as well. We have conceived an innovative methodology to construct non-volatile magnetic arithmetic logic units (MALUs) combining spin-transfer torque MTJs with MOS transistors. The present 4-bit MALU utilizes 4 MTJ pairs to store its operation code (opcode). Its operations and performances have been confirmed and evaluated through electrical simulations.
Silicon based quantum dot hybrid qubits
NASA Astrophysics Data System (ADS)
Kim, Dohun
2015-03-01
The charge and spin degrees of freedom of an electron constitute natural bases for constructing quantum two level systems, or qubits, in semiconductor quantum dots. The quantum dot charge qubit offers a simple architecture and high-speed operation, but generally suffers from fast dephasing due to strong coupling of the environment to the electron's charge. On the other hand, quantum dot spin qubits have demonstrated long coherence times, but their manipulation is often slower than desired for important future applications. This talk will present experimental progress of a `hybrid' qubit, formed by three electrons in a Si/SiGe double quantum dot, which combines desirable characteristics (speed and coherence) in the past found separately in qubits based on either charge or spin degrees of freedom. Using resonant microwaves, we first discuss qubit operations near the `sweet spot' for charge qubit operation. Along with fast (>GHz) manipulation rates for any rotation axis on the Bloch sphere, we implement two independent tomographic characterization schemes in the charge qubit regime: traditional quantum process tomography (QPT) and gate set tomography (GST). We also present resonant qubit operations of the hybrid qubit performed on the same device, DC pulsed gate operations of which were recently demonstrated. We demonstrate three-axis control and the implementation of dynamic decoupling pulse sequences. Performing QPT on the hybrid qubit, we show that AC gating yields π rotation process fidelities higher than 93% for X-axis and 96% for Z-axis rotations, which demonstrates efficient quantum control of semiconductor qubits using resonant microwaves. We discuss a path forward for achieving fidelities better than the threshold for quantum error correction using surface codes. This work was supported in part by ARO (W911NF-12-0607), NSF (PHY-1104660), DOE (DE-FG02-03ER46028), and by the Laboratory Directed Research and Development program at Sandia National Laboratories under contract DE-AC04-94AL85000.
WOMBAT: A Scalable and High-performance Astrophysical Magnetohydrodynamics Code
NASA Astrophysics Data System (ADS)
Mendygral, P. J.; Radcliffe, N.; Kandalla, K.; Porter, D.; O'Neill, B. J.; Nolting, C.; Edmon, P.; Donnert, J. M. F.; Jones, T. W.
2017-02-01
We present a new code for astrophysical magnetohydrodynamics specifically designed and optimized for high performance and scaling on modern and future supercomputers. We describe a novel hybrid OpenMP/MPI programming model that emerged from a collaboration between Cray, Inc. and the University of Minnesota. This design utilizes MPI-RMA optimized for thread scaling, which allows the code to run extremely efficiently at very high thread counts ideal for the latest generation of multi-core and many-core architectures. Such performance characteristics are needed in the era of “exascale” computing. We describe and demonstrate our high-performance design in detail with the intent that it may be used as a model for other, future astrophysical codes intended for applications demanding exceptional performance.
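As a rough illustration of the one-sided (RMA) communication style mentioned above, the sketch below performs a fence-synchronized ghost-cell exchange with mpi4py; the buffer layout and synchronization choices are hypothetical and it does not reproduce WOMBAT's threaded OpenMP/MPI-RMA design.

```python
# Hedged sketch of an MPI-RMA halo exchange (run with, e.g., mpirun -n 4 python halo.py).
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

field = np.arange(8, dtype=np.float64) + 100.0 * rank   # interior cells of this rank
halo = np.zeros(2, dtype=np.float64)                     # [left ghost, right ghost]

# Expose each rank's ghost buffer through an RMA window (displacements in doubles).
win = MPI.Win.Create(halo, disp_unit=halo.itemsize, comm=comm)
left, right = (rank - 1) % size, (rank + 1) % size

win.Fence()
win.Put(field[-1:], right, target=(0, 1, MPI.DOUBLE))   # my right edge -> neighbour's left ghost
win.Put(field[:1], left, target=(1, 1, MPI.DOUBLE))     # my left edge  -> neighbour's right ghost
win.Fence()                                              # puts are complete after the fence

print(f"rank {rank}: ghosts = {halo}")
win.Free()
```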
Jongsma, A P; Burgerhout, W G
1977-01-01
Regional localization studies of genes coding for human PGD, PPH1, PGM1, UGPP, GuK1, Pep-C, and FH, which have been assigned to chromosome 1, were performed with man-Chinese hamster somatic cell hybrids. Informative hybrids that retained fragments of the human chromosome 1 were produced by fusion of hamster cells with human cells carrying reciprocal translocations involving chromosome 1. Analysis of the hybrids that retained one of the translocation chromosomes or de novo rearrangements involving the human chromosome 1 revealed the following gene positions: PGD and PPH1 in 1pter→1p32, PGM1 in 1p32→1p22, UGPP and GuK1 in 1q21→1q42, FH in 1qter→1q42, and Pep-C probably in 1q42.
Litman, G W; Berger, L; Jahn, C L
1982-06-11
High molecular weight genomic DNAs isolated from an insectivore, Tupaia, and a representative reptilian, Caiman, and avian, Gallus, were digested with restriction endonucleases, transferred to nitrocellulose and hybridized with nick-translated probes of murine VH genes. The derivations of the probes designated S107V (1) and mu 107V (2,3) have been described previously. Under conditions of reduced stringency, multiple hybridizing components were observed with Tupaia and Caiman; only mu 107V exhibited significant hybridization with the separated fragments of Gallus DNA. The nick-translated S107V probe was digested with Fnu4H1, and subinserts corresponding to the 5' and 3' regions both detected multiple hybridizing components in Tupaia and Caiman DNA. A 5' probe lacking the leader sequence identified the same components as the intact 5' probe, suggesting that the hybridizing components correspond to VH coding regions and that species as distant as the reptilians may possess multiple genetic components which exhibit significant homology with murine immunoglobulin VH regions.
Demonstration of Hybrid DSMC-CFD Capability for Nonequilibrium Reacting Flow
2018-02-09
... Lens-XX facility. This flow was chosen since a recent blind-code validation exercise revealed differences in CFD predictions and experimental data that could be due to rarefied flow effects. The CFD solutions (using the US3D code) were run with no-slip boundary conditions and with ... excellent agreement with that predicted by CFD. This implies that the difference between CFD predictions and experimental data is not due to rarefied ...
Improved numerical methods for turbulent viscous recirculating flows
NASA Technical Reports Server (NTRS)
Turan, A.
1985-01-01
The hybrid-upwind finite difference schemes employed in generally available combustor codes possess excessive numerical diffusion errors which preclude accurate quantitative calculations. The present study has as its primary objective the identification and assessment of an improved solution algorithm as well as discretization schemes applicable to the analysis of turbulent viscous recirculating flows. The assessment is carried out primarily in two-dimensional/axisymmetric geometries with a view to identifying an appropriate technique to be incorporated in a three-dimensional code.
Schroeder, H; Hoeltken, A M; Fladung, M
2012-03-01
Within the genus Populus several species belonging to different sections are cross-compatible. Hence, high numbers of interspecies hybrids occur naturally and, additionally, have been artificially produced in huge breeding programmes during the last 100 years. Therefore, determination of a single poplar species, used for the production of 'multi-species hybrids' is often difficult, and represents a great challenge for the use of molecular markers in species identification. Within this study, over 20 chloroplast regions, both intergenic spacers and coding regions, have been tested for their ability to differentiate different poplar species using 23 already published barcoding primer combinations and 17 newly designed primer combinations. About half of the published barcoding primers yielded amplification products, whereas the new primers designed on the basis of the total sequenced cpDNA genome of Populus trichocarpa Torr. & Gray yielded much higher amplification success. Intergenic spacers were found to be more variable than coding regions within the genus Populus. The highest discrimination power of Populus species was found in the combination of two intergenic spacers (trnG-psbK, psbK-psbl) and the coding region rpoC. In barcoding projects, the coding regions matK and rbcL are often recommended, but within the genus Populus they only show moderate variability and are not efficient in species discrimination. © 2011 German Botanical Society and The Royal Botanical Society of the Netherlands.
NASA Technical Reports Server (NTRS)
Quinlan, Jesse R.; Gern, Frank H.
2016-01-01
Simultaneously achieving the fuel consumption and noise reduction goals set forth by NASA's Environmentally Responsible Aviation (ERA) project requires innovative and unconventional aircraft concepts. In response, advanced hybrid wing body (HWB) aircraft concepts have been proposed and analyzed as a means of meeting these objectives. For the current study, several HWB concepts were analyzed using the Hybrid wing body Conceptual Design and structural optimization (HCDstruct) analysis code. HCDstruct is a medium-fidelity finite element based conceptual design and structural optimization tool developed to fill the critical analysis gap existing between lower order structural sizing approaches and detailed, often finite element based sizing methods for HWB aircraft concepts. Whereas prior versions of the tool used a half-model approach in building the representative finite element model, a full wing-tip-to-wing-tip modeling capability was recently added to HCDstruct, which alleviated the symmetry constraints at the model centerline in place of a free-flying model and allowed for more realistic center body, aft body, and wing loading and trim response. The latest version of HCDstruct was applied to two ERA reference cases, including the Boeing Open Rotor Engine Integration On an HWB (OREIO) concept and the Boeing ERA-0009H1 concept, and results agreed favorably with detailed Boeing design data and related Flight Optimization System (FLOPS) analyses. Following these benchmark cases, HCDstruct was used to size NASA's ERA HWB concepts and to perform a related scaling study.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jia, Guozhang; Xiang, Nong; Huang, Yueheng
2016-01-15
The propagation and mode conversion of lower hybrid waves in an inhomogeneous plasma are investigated by using the nonlinear δf algorithm in a two-dimensional particle-in-cell simulation code based on the gyrokinetic electron and fully kinetic ion (GeFi) scheme [Lin et al., Plasma Phys. Controlled Fusion 47, 657 (2005)]. The characteristics of the simulated waves, such as wavelength, frequency, phase, and group velocities, agree well with the linear theoretical analysis. It is shown that a significant reflection component emerges in the conversion process between the slow mode and the fast mode when the scale length of the density variation is comparable to the local wavelength. The dependences of the reflection coefficient on the scale length of the density variation are compared with the results based on the linear full wave model for cold plasmas. It is indicated that the mode conversion for the waves with a frequency of 2.45 GHz (ω ∼ 3ω_LH, where ω_LH represents the lower hybrid resonance) and within Tokamak relevant amplitudes can be well described in the linear scheme. As the frequency decreases, the modification due to the nonlinear term becomes important. For the low-frequency waves (ω ∼ 1.3ω_LH), the generations of the high harmonic modes and sidebands through nonlinear mode-mode coupling provide new power channels and thus could reduce the reflection significantly.
Yu, Shihui; Kielt, Matthew; Stegner, Andrew L; Kibiryeva, Nataliya; Bittel, Douglas C; Cooley, Linda D
2009-12-01
The American College of Medical Genetics guidelines for microarray analysis for constitutional cytogenetic abnormalities require abnormal or ambiguous results from microarray-based comparative genomic hybridization (aCGH) analysis be confirmed by an alternative method. We employed quantitative real-time polymerase chain reaction (qPCR) technology using SYBR Green I reagents for confirmation of 93 abnormal aCGH results (50 deletions and 43 duplications) and 54 parental samples. A novel qPCR protocol using DNA sequences coding for X-linked lethal diseases in males for designing reference primers was established. Of the 81 sets of test primers used for confirmation of 93 abnormal copy number variants (CNVs) in 80 patients, 71 sets worked after the initial primer design (88%), 9 sets were redesigned once, and 1 set twice because of poor amplification. Fifty-four parental samples were tested using 33 sets of test primers to follow up 34 CNVs in 30 patients. Nineteen CNVs were confirmed as inherited, 13 were negative in both parents, and 2 were inconclusive due to a negative result in a single parent. The qPCR assessment clarified aCGH results in two cases and corrected a fluorescence in situ hybridization result in one case. Our data illustrate that qPCR methodology using SYBR Green I reagents is accurate, highly sensitive, specific, rapid, and cost-effective for verification of chromosomal imbalances detected by aCGH in the clinical setting.
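For illustration, one common way to turn such qPCR measurements into a relative copy-number call is the 2^-ΔΔCt calculation sketched below; the Ct values and the factor-of-two diploid normalization are invented for the example, and the authors' exact quantification procedure is not spelled out in the abstract.

```python
# Hedged 2^-ΔΔCt sketch for SYBR Green-style relative copy-number estimation.
def relative_copies(ct_test_patient, ct_ref_patient, ct_test_control, ct_ref_control):
    """Copy number of the test amplicon, normalized to a reference amplicon
    and calibrated against a control sample assumed to carry two copies."""
    ddct = (ct_test_patient - ct_ref_patient) - (ct_test_control - ct_ref_control)
    return 2.0 ** (-ddct) * 2.0

# A heterozygous deletion shifts the test Ct up by ~1 cycle -> roughly one copy.
print(relative_copies(26.8, 25.0, 25.8, 25.0))   # ~1.0
```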
Quantum-dot-tagged microbeads for multiplexed optical coding of biomolecules.
Han, M; Gao, X; Su, J Z; Nie, S
2001-07-01
Multicolor optical coding for biological assays has been achieved by embedding different-sized quantum dots (zinc sulfide-capped cadmium selenide nanocrystals) into polymeric microbeads at precisely controlled ratios. Their novel optical properties (e.g., size-tunable emission and simultaneous excitation) render these highly luminescent quantum dots (QDs) ideal fluorophores for wavelength-and-intensity multiplexing. The use of 10 intensity levels and 6 colors could theoretically code one million nucleic acid or protein sequences. Imaging and spectroscopic measurements indicate that the QD-tagged beads are highly uniform and reproducible, yielding bead identification accuracies as high as 99.99% under favorable conditions. DNA hybridization studies demonstrate that the coding and target signals can be simultaneously read at the single-bead level. This spectral coding technology is expected to open new opportunities in gene expression studies, high-throughput screening, and medical diagnostics.
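The quoted coding capacity follows from simple combinatorics, as the back-of-the-envelope check below shows; excluding the all-off code is a common convention rather than something stated in the abstract.

```python
# m distinguishable intensity levels on each of n spectrally resolved colors
# give m**n bead codes (optionally minus the unusable all-off code).
def qd_codes(levels: int, colors: int) -> int:
    return levels ** colors - 1

print(qd_codes(10, 6))   # 999,999: the "one million sequences" figure
print(qd_codes(6, 5))    # a more conservative library size
```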
Analysis of LH Launcher Arrays (Like the ITER One) Using the TOPLHA Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maggiora, R.; Milanesio, D.; Vecchi, G.
2009-11-26
TOPLHA (Torino Polytechnic Lower Hybrid Antenna) code is an innovative tool for the 3D/1D simulation of Lower Hybrid (LH) antennas, i.e. accounting for realistic 3D waveguide geometry and for accurate 1D plasma models, without restrictions on waveguide shape, including curvature. This tool provides a detailed performance prediction of any LH launcher, by computing the antenna scattering parameters, the current distribution, electric field maps and power spectra for any user-specified waveguide excitation. In addition, a fully parallelized and multi-cavity version of TOPLHA permits the analysis of large and complex waveguide arrays in a reasonable simulation time. A detailed analysis of the performance of the proposed ITER LH antenna geometry has been carried out, underlining the strong dependence of the antenna input parameters on plasma conditions. A preliminary optimization of the antenna dimensions has also been accomplished. The electric current distribution on conductors, the electric field distribution at the interface with the plasma, and power spectra have been calculated as well. The analysis shows the strong capabilities of the TOPLHA code as a predictive tool and its usefulness for the detailed design of LH launcher arrays.
Dynamic Hybrid Simulation of the Lunar Wake During ARTEMIS Crossing
NASA Astrophysics Data System (ADS)
Wiehle, S.; Plaschke, F.; Angelopoulos, V.; Auster, H.; Glassmeier, K.; Kriegel, H.; Motschmann, U. M.; Mueller, J.
2010-12-01
The interaction of the highly dynamic solar wind with the Moon is simulated with the A.I.K.E.F. (Adaptive Ion Kinetic Electron Fluid) code for the ARTEMIS P1 flyby on February 13, 2010. The A.I.K.E.F. hybrid plasma simulation code is the improved version of the Braunschweig code. It is able to automatically increase simulation grid resolution in areas of interest during runtime, which greatly increases resolution as well as performance. As the Moon has no intrinsic magnetic field and no ionosphere, the solar wind particles are absorbed at its surface, resulting in the formation of the lunar wake at the nightside. The solar wind magnetic field is basically convected through the Moon and the wake is slowly filled up with solar wind particles. However, this interaction is strongly influenced by the highly dynamic solar wind during the flyby. This is considered by a dynamic variation of the upstream conditions in the simulation using OMNI solar wind measurement data. By this method, a very good agreement between simulation and observations is achieved. The simulations show that the stationary structure of the lunar wake constitutes a tableau vivant in space representing the well-known Friedrichs diagram for MHD waves.
Gu, Joyce Xiuweu-Xu; Wei, Michael Yang; Rao, Pulivarthi H.; Lau, Ching C.; Behl, Sanjiv; Man, Tsz-Kwong
2007-01-01
With the increasing application of various genomic technologies in biomedical research, there is a need to integrate these data to correlate candidate genes/regions that are identified by different genomic platforms. Although there are tools that can analyze data from individual platforms, essential software for integration of genomic data is still lacking. Here, we present a novel Java-based program called CGI (Cytogenetics-Genomics Integrator) that matches the BAC clones from array-based comparative genomic hybridization (aCGH) to genes from RNA expression profiling datasets. The matching is computed via a fast, backend MySQL database containing UCSC Genome Browser annotations. This program also provides an easy-to-use graphical user interface for visualizing and summarizing the correlation of DNA copy number changes and RNA expression patterns from a set of experiments. In addition, CGI uses a Java applet to display the copy number values of a specific BAC clone in aCGH experiments side by side with the expression levels of genes that are mapped back to that BAC clone from the microarray experiments. The CGI program is built on top of extensible, reusable graphic components specifically designed for biologists. It is cross-platform compatible and the source code is freely available under the General Public License. PMID:19936083
NASA Astrophysics Data System (ADS)
Menezes, Marcos; Capaz, Rodrigo
Black Phosphorus (BP) is a promising material for applications in electronics, especially due to the tuning of its band gap with the number of layers. In single-layer BP, also called Phosphorene, the P atoms form two staggered chains bonded by sp3 hybridization, while neighboring layers are bonded by van der Waals interactions. In this work, we present a Tight-Binding (TB) parametrization of the electronic structure of single- and few-layer BP, based on the Slater-Koster model within the two-center approximation. Our model includes all 3s and 3p orbitals, which makes this problem more complex than that of graphene, where only 2pz orbitals are needed for most purposes. The TB parameters are obtained from a least-squares fit to DFT calculations carried out with the SIESTA code. We compare the results for different basis sets used to expand the ab initio wavefunctions and discuss their applicability. Our model can fit a larger number of bands than previously reported calculations based on Wannier functions. Moreover, our parameters have a clear physical interpretation based on chemical bonding. As such, we expect our results to be useful for a further understanding of multilayer BP and other 2D materials characterized by strong sp3 hybridization. CNPq, FAPERJ, INCT-Nanomateriais de Carbono.
BiRen: predicting enhancers with a deep-learning-based model using the DNA sequence alone.
Yang, Bite; Liu, Feng; Ren, Chao; Ouyang, Zhangyi; Xie, Ziwei; Bo, Xiaochen; Shu, Wenjie
2017-07-01
Enhancer elements are noncoding stretches of DNA that play key roles in controlling gene expression programmes. Despite major efforts to develop accurate enhancer prediction methods, identifying enhancer sequences continues to be a challenge in the annotation of mammalian genomes. One of the major issues is the lack of large, sufficiently comprehensive and experimentally validated enhancers for humans or other species. Thus, the development of computational methods based on limited experimentally validated enhancers and deciphering the transcriptional regulatory code encoded in the enhancer sequences is urgent. We present a deep-learning-based hybrid architecture, BiRen, which predicts enhancers using the DNA sequence alone. Our results demonstrate that BiRen can learn common enhancer patterns directly from the DNA sequence and exhibits superior accuracy, robustness and generalizability in enhancer prediction relative to other state-of-the-art enhancer predictors based on sequence characteristics. Our BiRen will enable researchers to acquire a deeper understanding of the regulatory code of enhancer sequences. Our BiRen method can be freely accessed at https://github.com/wenjiegroup/BiRen . shuwj@bmi.ac.cn or boxc@bmi.ac.cn. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
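As a hedged illustration of this style of hybrid model, the sketch below stacks a 1-D convolution (local sequence motifs) and a bidirectional GRU (longer-range context) over one-hot DNA in PyTorch; the layer sizes and sequence length are arbitrary and this is not the published BiRen architecture.

```python
# Toy CNN + bidirectional-GRU enhancer classifier over one-hot encoded DNA.
import torch
import torch.nn as nn

class CnnBiGru(nn.Module):
    def __init__(self, n_filters=32, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(4, n_filters, kernel_size=9, padding=4),  # motif-like filters
            nn.ReLU(),
            nn.MaxPool1d(4),
        )
        self.gru = nn.GRU(n_filters, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)                    # enhancer probability

    def forward(self, x):                 # x: (batch, 4, seq_len) one-hot DNA
        h = self.conv(x).transpose(1, 2)  # -> (batch, time, features)
        _, h_n = self.gru(h)              # h_n: (2, batch, hidden)
        h_cat = torch.cat([h_n[0], h_n[1]], dim=1)
        return torch.sigmoid(self.head(h_cat)).squeeze(1)

model = CnnBiGru()
dna = torch.zeros(2, 4, 600)              # two dummy 600-bp sequences
dna[:, 0, :] = 1.0                        # pretend every base is 'A'
print(model(dna).shape)                   # torch.Size([2])
```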
Ruffing, T; Huchzermeier, P; Muhm, M; Winkler, H
2014-05-01
Precise coding is an essential requirement for generating a valid DRG. The aim of our study was to evaluate the quality of the initial coding of surgical procedures and to introduce our "hybrid model", in which a surgical specialist supervises the medical coding and a non-physician audits the cases. The department's DRG-responsible physician, as a surgical specialist, has profound knowledge of both surgery and DRG coding. At a Level 1 hospital, 1000 coded cases of surgical procedures were checked. In our department, the model of a DRG-responsible physician who is both surgeon and coder has proven itself for many years. The initial surgical DRG coding had to be corrected by the DRG-responsible physician in 42.2% of cases. On average, one hour per working day was necessary. The implementation of a DRG-responsible physician is a simple, effective way to connect medical and business expertise without interface problems. Permanent feedback promotes both the medical and economic sensitivity needed to improve coding quality.
A comparison of skyshine computational methods.
Hertel, Nolan E; Sweezy, Jeremy E; Shultis, J Kenneth; Warkentin, J Karl; Rose, Zachary J
2005-01-01
A variety of methods employing radiation transport and point-kernel codes have been used to model two skyshine problems. The first problem is a 1 MeV point source of photons on the surface of the earth inside a 2 m tall and 1 m radius silo having black walls. The skyshine radiation downfield from the point source was estimated with and without a 30-cm-thick concrete lid on the silo. The second benchmark problem is to estimate the skyshine radiation downfield from 12 cylindrical canisters emplaced in a low-level radioactive waste trench. The canisters are filled with ion-exchange resin with a representative radionuclide loading, largely 60Co, 134Cs and 137Cs. The solution methods include use of the MCNP code to solve the problem by directly employing variance reduction techniques, the single-scatter point kernel code GGG-GP, the QADMOD-GP point kernel code, the COHORT Monte Carlo code, the NAC International version of the SKYSHINE-III code, the KSU hybrid method and the associated KSU skyshine codes.
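On the point-kernel side, all of these codes ultimately combine exponential attenuation with a buildup factor; the sketch below is a generic single-point estimate using a Taylor-form buildup with illustrative coefficients, not a reimplementation of GGG-GP, QADMOD-GP, SKYSHINE-III or the KSU codes.

```python
# Hedged generic point-kernel estimate: uncollided fluence times a buildup factor.
import numpy as np

def point_kernel_fluence(S, mu, r, A=24.0, a1=-0.045, a2=0.072):
    """S photons/s seen through r cm of material with attenuation coefficient mu (1/cm).
    Buildup uses the Taylor form B = A*exp(-a1*mu*r) + (1 - A)*exp(-a2*mu*r);
    the coefficients here are illustrative, not fitted data."""
    mfp = mu * r
    buildup = A * np.exp(-a1 * mfp) + (1.0 - A) * np.exp(-a2 * mfp)
    return buildup * S * np.exp(-mfp) / (4.0 * np.pi * r**2)

# e.g. a 1 MeV point source viewed through ~30 cm of concrete-like material (mu ~ 0.15 /cm)
print(point_kernel_fluence(S=1e10, mu=0.15, r=30.0))   # photons / (cm^2 s)
```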
Das, Saptarshi; Pan, Indranil; Das, Shantanu
2013-07-01
Fuzzy logic based PID controllers have been studied in this paper, considering several combinations of hybrid controllers by grouping the proportional, integral and derivative actions with fuzzy inferencing in different forms. Fractional order (FO) rate of error signal and FO integral of control signal have been used in the design of a family of decomposed hybrid FO fuzzy PID controllers. The input and output scaling factors (SF) along with the integro-differential operators are tuned with real coded genetic algorithm (GA) to produce optimum closed loop performance by simultaneous consideration of the control loop error index and the control signal. Three different classes of fractional order oscillatory processes with various levels of relative dominance between time constant and time delay have been used to test the comparative merits of the proposed family of hybrid fractional order fuzzy PID controllers. Performance comparison of the different FO fuzzy PID controller structures has been done in terms of optimal set-point tracking, load disturbance rejection and minimal variation of manipulated variable or smaller actuator requirement etc. In addition, multi-objective Non-dominated Sorting Genetic Algorithm (NSGA-II) has been used to study the Pareto optimal trade-offs between the set point tracking and control signal, and the set point tracking and load disturbance performance for each of the controller structure to handle the three different types of processes. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
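The fractional-order rate and integral terms in such controllers are typically discretized with the Grünwald-Letnikov formula; the sketch below shows that approximation on a toy error signal with an arbitrary order, and it does not reproduce the scaling-factor tuning or the GA/NSGA-II optimization described above.

```python
# Hedged Grünwald-Letnikov approximation of a fractional derivative D^alpha f.
import numpy as np

def gl_fractional_derivative(f, alpha, h):
    """Approximate D^alpha f on a uniform grid with spacing h."""
    n = len(f)
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):                         # w_k = w_{k-1} * (1 - (alpha + 1) / k)
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    d = np.zeros(n)
    for j in range(n):                            # D^alpha f(t_j) ~ h**-alpha * sum_k w_k f(t_{j-k})
        d[j] = np.dot(w[:j + 1], f[j::-1]) / h**alpha
    return d

t = np.linspace(0.0, 2.0, 201)
e = np.sin(2.0 * np.pi * t)                       # toy error signal
print(gl_fractional_derivative(e, alpha=0.5, h=t[1] - t[0])[-1])
```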
NASA Astrophysics Data System (ADS)
Johnson, Perry; Lee, Choonsik; Johnson, Kevin; Siragusa, Daniel; Bolch, Wesley E.
2009-06-01
In this study, the influence of patient size on organ and effective dose conversion coefficients (DCCs) was investigated for a representative interventional fluoroscopic procedure, cardiac catheterization. The study was performed using hybrid phantoms representing an underweight, an average and an overweight American adult male. Reference body sizes were determined using the NHANES III database and parameterized based on standing height and total body mass. Organ and effective dose conversion coefficients were calculated for anterior-posterior, posterior-anterior, left anterior oblique and right anterior oblique projections using the Monte Carlo code MCNPX 2.5.0, with the dose-area product used as the normalization factor. Results show body size to have a clear influence on DCCs, which increased noticeably when body size decreased. It was also shown that if patient size is neglected when choosing a DCC, the organ and effective dose will be underestimated for an underweight patient and overestimated for an overweight patient, with errors as large as 113% for certain projections. Results were further compared with those published for the KTMAN-2 Korean patient-specific tomographic phantom. The published DCCs aligned best with the hybrid phantom that most closely matched it in overall body size. These results highlight the need for and the advantages of phantom-patient matching, and it is recommended that hybrid phantoms be used to create a more diverse library of patient-dependent anthropomorphic phantoms for medical dose reconstruction.
Modeling of photon migration in the human lung using a finite volume solver
NASA Astrophysics Data System (ADS)
Sikorski, Zbigniew; Furmanczyk, Michal; Przekwas, Andrzej J.
2006-02-01
The application of the frequency domain and steady-state diffusive optical spectroscopy (DOS) and steady-state near infrared spectroscopy (NIRS) to diagnosis of the human lung injury challenges many elements of these techniques. These include the DOS/NIRS instrument performance and accurate models of light transport in heterogeneous thorax tissue. The thorax tissue not only consists of different media (e.g. chest wall with ribs, lungs) but its optical properties also vary with time due to respiration and changes in thorax geometry with contusion (e.g. pneumothorax or hemothorax). This paper presents a finite volume solver developed to model photon migration in the diffusion approximation in heterogeneous complex 3D tissues. The code applies boundary conditions that account for Fresnel reflections. We propose an effective diffusion coefficient for the void volumes (pneumothorax) based on the assumption of the Lambertian diffusion of photons entering the pleural cavity and accounting for the local pleural cavity thickness. The code has been validated using the MCML Monte Carlo code as a benchmark. The code environment enables a semi-automatic preparation of 3D computational geometry from medical images and its rapid automatic meshing. We present the application of the code to analysis/optimization of the hybrid DOS/NIRS/ultrasound technique in which ultrasound provides data on the localization of thorax tissue boundaries. The code effectiveness (3D complex case computation takes 1 second) enables its use to quantitatively relate detected light signal to absorption and reduced scattering coefficients that are indicators of the pulmonary physiologic state (hemoglobin concentration and oxygenation).
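A stripped-down version of the finite-volume diffusion model fits in a few lines; the sketch below solves the steady one-dimensional problem -d/dx(D dphi/dx) + mu_a*phi = S with invented optical coefficients and crude boundary handling, whereas the solver described above is three-dimensional and applies Fresnel-reflection boundary conditions.

```python
# Hedged 1-D finite-volume sketch of the photon-migration diffusion approximation.
import numpy as np

n, L = 200, 4.0                           # cells, slab thickness (cm)
dx = L / n
mu_a = np.full(n, 0.05)                   # absorption coefficient (1/cm), toy values
mu_s_prime = np.full(n, 10.0)             # reduced scattering coefficient (1/cm)
mu_a[n // 2:] = 0.005                     # a lower-absorption region, e.g. a cavity-like layer
D = 1.0 / (3.0 * (mu_a + mu_s_prime))     # diffusion coefficient
S = np.zeros(n); S[2] = 1.0               # source near the illuminated surface

A = np.zeros((n, n))
for i in range(n):
    if i > 0:
        Dw = 2.0 * D[i] * D[i - 1] / (D[i] + D[i - 1])   # harmonic-mean face diffusivity
        A[i, i - 1] -= Dw / dx**2; A[i, i] += Dw / dx**2
    if i < n - 1:
        De = 2.0 * D[i] * D[i + 1] / (D[i] + D[i + 1])
        A[i, i + 1] -= De / dx**2; A[i, i] += De / dx**2
    A[i, i] += mu_a[i]
A[0, 0] += D[0] / dx**2                   # crude zero-fluence ghost cells at both edges
A[-1, -1] += D[-1] / dx**2

phi = np.linalg.solve(A, S)
print(phi[::40])                          # fluence sampled across the slab
```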
Sounds of silence: synonymous nucleotides as a key to biological regulation and complexity
Shabalina, Svetlana A.; Spiridonov, Nikolay A.; Kashina, Anna
2013-01-01
Messenger RNA is a key component of an intricate regulatory network of its own. It accommodates numerous nucleotide signals that overlap protein coding sequences and are responsible for multiple levels of regulation and generation of biological complexity. A wealth of structural and regulatory information, which mRNA carries in addition to the encoded amino acid sequence, raises the question of how these signals and overlapping codes are delineated along non-synonymous and synonymous positions in protein coding regions, especially in eukaryotes. Silent or synonymous codon positions, which do not determine amino acid sequences of the encoded proteins, define mRNA secondary structure and stability and affect the rate of translation, folding and post-translational modifications of nascent polypeptides. The RNA level selection is acting on synonymous sites in both prokaryotes and eukaryotes and is more common than previously thought. Selection pressure on the coding gene regions follows three-nucleotide periodic pattern of nucleotide base-pairing in mRNA, which is imposed by the genetic code. Synonymous positions of the coding regions have a higher level of hybridization potential relative to non-synonymous positions, and are multifunctional in their regulatory and structural roles. Recent experimental evidence and analysis of mRNA structure and interspecies conservation suggest that there is an evolutionary tradeoff between selective pressure acting at the RNA and protein levels. Here we provide a comprehensive overview of the studies that define the role of silent positions in regulating RNA structure and processing that exert downstream effects on proteins and their functions. PMID:23293005
WholeCellSimDB: a hybrid relational/HDF database for whole-cell model predictions
Karr, Jonathan R.; Phillips, Nolan C.; Covert, Markus W.
2014-01-01
Mechanistic ‘whole-cell’ models are needed to develop a complete understanding of cell physiology. However, extracting biological insights from whole-cell models requires running and analyzing large numbers of simulations. We developed WholeCellSimDB, a database for organizing whole-cell simulations. WholeCellSimDB was designed to enable researchers to search simulation metadata to identify simulations for further analysis, and quickly slice and aggregate simulation results data. In addition, WholeCellSimDB enables users to share simulations with the broader research community. The database uses a hybrid relational/hierarchical data format architecture to efficiently store and retrieve both simulation setup metadata and results data. WholeCellSimDB provides a graphical Web-based interface to search, browse, plot and export simulations; a JavaScript Object Notation (JSON) Web service to retrieve data for Web-based visualizations; a command-line interface to deposit simulations; and a Python API to retrieve data for advanced analysis. Overall, we believe WholeCellSimDB will help researchers use whole-cell models to advance basic biological science and bioengineering. Database URL: http://www.wholecellsimdb.org Source code repository URL: http://github.com/CovertLab/WholeCellSimDB PMID:25231498
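The hybrid relational/HDF split can be mimicked with off-the-shelf Python libraries; the sketch below keeps invented metadata fields in SQLite and bulk results in HDF5 through h5py, and is not WholeCellSimDB's actual schema or API.

```python
# Hedged sketch: relational metadata (sqlite3) + bulk results (h5py/HDF5).
import sqlite3
import numpy as np
import h5py

meta = sqlite3.connect("sims.db")
meta.execute("CREATE TABLE IF NOT EXISTS simulation "
             "(id INTEGER PRIMARY KEY, organism TEXT, seed INTEGER, length_s REAL)")
sim_id = meta.execute("INSERT INTO simulation (organism, seed, length_s) VALUES (?, ?, ?)",
                      ("M. genitalium", 42, 30000.0)).lastrowid
meta.commit()

with h5py.File("sims.h5", "a") as f:                      # bulk time series go to HDF5
    grp = f.require_group(f"simulation_{sim_id}")
    grp.create_dataset("growth", data=np.random.rand(10000), compression="gzip")

# Query side: locate runs via metadata, then slice only the results you need.
row = meta.execute("SELECT id FROM simulation WHERE seed = 42").fetchone()
with h5py.File("sims.h5", "r") as f:
    print(f[f"simulation_{row[0]}/growth"][:100].mean())
```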
Hybrid RAID With Dual Control Architecture for SSD Reliability
NASA Astrophysics Data System (ADS)
Chatterjee, Santanu
2010-10-01
Solid-state devices (SSDs), which are increasingly being adopted in today's data storage systems, offer higher capacity and performance but lower reliability, which leads to more frequent rebuilds and to higher risk. Although an SSD is very energy efficient compared to a hard disk drive, its bit error rate (BER) requires expensive erase operations between successive writes. Parity-based RAID (for example RAID 4, 5 and 6) provides data integrity using parity information and tolerates the loss of any one drive (RAID 4, 5) or any two drives (RAID 6), but the parity blocks are updated more often than the data blocks under random access patterns, so SSDs holding more parity receive more writes and consequently age faster. To address this problem, in this paper we propose a model-based hybrid disk array architecture that uses the RAID 4 (striping with dedicated parity) technique, with SSDs as data drives and fast hard disk drives of the same capacity as dedicated parity drives. The proposed architecture opens the door to using commodity SSDs past their erasure limit and can also reduce the need for expensive hardware error correction code (ECC) in the devices.
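The dedicated-parity behaviour at the heart of this proposal is block-wise XOR, as the toy sketch below shows; the drive contents are invented and real RAID 4 striping, caching and rebuild logic are considerably more involved.

```python
# Hedged toy RAID 4 illustration: SSD data blocks, one dedicated HDD parity block,
# and reconstruction of a lost data block from the survivors.
def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

data_drives = [b"AAAAAAAA", b"BBBBBBBB", b"CCCCCCCC"]   # stripes on the SSD data drives
parity_drive = xor_blocks(data_drives)                   # dedicated parity on the HDD

# Every data-block write also rewrites parity, so the parity drive absorbs the
# extra write traffic (the wear concentration that motivates a separate HDD).
lost = 1
survivors = [d for i, d in enumerate(data_drives) if i != lost] + [parity_drive]
print(xor_blocks(survivors) == data_drives[lost])        # True: lost block recovered
```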
NASA Astrophysics Data System (ADS)
Peysson, Y.; Bonoli, P. T.; Chen, J.; Garofalo, A.; Hillairet, J.; Li, M.; Qian, J.; Shiraiwa, S.; Decker, J.; Ding, B. J.; Ekedahl, A.; Goniche, M.; Zhai, X.
2017-10-01
The Lower Hybrid (LH) wave is widely used in existing tokamaks for tailoring the current density profile or extending the pulse duration to steady-state regimes. Its high efficiency makes it particularly attractive for a fusion reactor, and it is being considered for this purpose in the ITER tokamak. Nevertheless, although the basics of LH wave physics in tokamak plasmas are well known, quantitative modeling of experimental observations from first principles remains a highly challenging exercise, despite the considerable numerical efforts achieved so far. In this context, a rigorous methodology must be applied in the simulations to identify the minimum number of physical mechanisms that must be considered to reproduce experimental shot-to-shot observations and also scalings (density, power spectrum). Based on recent simulations carried out for the EAST, Alcator C-Mod and Tore Supra tokamaks, the state of the art in LH modeling is reviewed. The capability of fast electron bremsstrahlung, internal inductance li and LH driven current at zero loop voltage to jointly constrain LH simulations is discussed, as well as the need for further improvements (diagnostics, codes, LH model) for robust interpretative and predictive simulations.
Traverse, Charles C.
2017-01-01
Advances in sequencing technologies have enabled direct quantification of genome-wide errors that occur during RNA transcription. These errors occur at rates that are orders of magnitude higher than rates during DNA replication, but due to technical difficulties such measurements have been limited to single-base substitutions and have not yet quantified the scope of transcription insertions and deletions. Previous reporter gene assay findings suggested that transcription indels are produced exclusively by elongation complex slippage at homopolymeric runs, so we enumerated indels across the protein-coding transcriptomes of Escherichia coli and Buchnera aphidicola, which differ widely in their genomic base compositions and incidence of repeat regions. As anticipated from prior assays, transcription insertions prevailed in homopolymeric runs of A and T; however, transcription deletions arose in much more complex sequences and were rarely associated with homopolymeric runs. By reconstructing the relocated positions of the elongation complex as inferred from the sequences inserted or deleted during transcription, we show that continuation of transcription after slippage hinges on the degree of nucleotide complementarity within the RNA:DNA hybrid at the new DNA template location. PMID:28851848
Current/Pressure Profile Effects on Tearing Mode Stability in DIII-D Hybrid Discharges
NASA Astrophysics Data System (ADS)
Kim, K.; Park, J. M.; Murakami, M.; La Haye, R. J.; Na, Yong-Su
2015-11-01
It is important to understand the onset threshold and the evolution of tearing modes (TMs) for developing a high-performance, steady-state fusion reactor. As an initial and basic comparison to determine TM onset, the measured plasma profiles (such as temperature, density and rotation) and the calculated current profiles were compared between a pair of discharges with and without an n=1 mode, drawn from the database of DIII-D hybrid plasmas. The profiles were not much different, but the details were analyzed to determine their characteristics, especially near the rational surface. The tearing stability index Δ' calculated from PEST3 tends to increase rapidly just before the n=1 mode onset for these cases. Modeled equilibria with parametrically varied pressure or current profiles, based on the reference discharge, are reconstructed to check the dependence of the onset on Δ' and on neoclassical effects such as the bootstrap current. Simulations of TMs with the modeled equilibria using resistive MHD codes will also be presented and compared with experiments to determine the sensitivity for predicting TM onset. Work supported by US DOE under DE-FC02-04ER54698 and DE-AC52-07NA27344.
Towards universal hybrid star formation rate estimators
NASA Astrophysics Data System (ADS)
Boquien, M.; Kennicutt, R.; Calzetti, D.; Dale, D.; Galametz, M.; Sauvage, M.; Croxall, K.; Draine, B.; Kirkpatrick, A.; Kumari, N.; Hunt, L.; De Looze, I.; Pellegrini, E.; Relaño, M.; Smith, J.-D.; Tabatabaei, F.
2016-06-01
Context. To compute the star formation rate (SFR) of galaxies from the rest-frame ultraviolet (UV), it is essential to take the obscuration by dust into account. To do so, one of the most popular methods consists in combining the UV with the emission from the dust itself in the infrared (IR). Yet, different studies have derived different estimators, showing that no such hybrid estimator is truly universal. Aims: In this paper we aim at understanding and quantifying what physical processes fundamentally drive the variations between different hybrid estimators. In so doing, we aim at deriving new universal UV+IR hybrid estimators to correct the UV for dust attenuation at local and global scales, taking the intrinsic physical properties of galaxies into account. Methods: We use the CIGALE code to model the spatially resolved far-UV to far-IR spectral energy distributions of eight nearby star-forming galaxies drawn from the KINGFISH sample. This allows us to determine their local physical properties, and in particular their UV attenuation, average SFR, average specific SFR (sSFR), and their stellar mass. We then examine how hybrid estimators depend on said properties. Results: We find that hybrid UV+IR estimators strongly depend on the stellar mass surface density (in particular at 70 μm and 100 μm) and on the sSFR (in particular at 24 μm and the total infrared). Consequently, the IR scaling coefficients for UV obscuration can vary by almost an order of magnitude: from 1.55 to 13.45 at 24 μm for instance. This result contrasts with other groups who found relatively constant coefficients with small deviations. We exploit these variations to construct a new class of adaptative hybrid estimators based on observed UV to near-IR colours and near-IR luminosity densities per unit area. We find that they can reliably be extended to entire galaxies. Conclusions: The new estimators provide better estimates of attenuation-corrected UV emission than classical hybrid estimators published in the literature. Taking naturally variable impact of dust heated by old stellar populations into account, they constitute an important step towards universal estimators.
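In practice a hybrid estimator of this kind is a one-line correction, as sketched below; the 24 micron coefficient is only a representative value within the 1.55 to 13.45 range quoted above, and the FUV calibration constant is likewise illustrative rather than taken from this paper.

```python
# Hedged hybrid UV+IR star formation rate estimate: L(FUV)_corr = L(FUV)_obs + k24 * L(24um).
def hybrid_sfr(l_fuv_obs, l_24um, k_24=3.89, c_fuv=4.5e-44):
    """Luminosities in erg/s; returns SFR in Msun/yr for an assumed
    Kennicutt-style FUV calibration constant c_fuv (illustrative value)."""
    l_fuv_corr = l_fuv_obs + k_24 * l_24um   # add back the dust-obscured emission
    return c_fuv * l_fuv_corr

print(hybrid_sfr(l_fuv_obs=3e42, l_24um=5e42))   # Msun/yr for a toy galaxy
```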
CSEM-Steel hybrid wiggler/undulator magnetic field studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Halbach, K.; Hoyer, E.; Marks, S.
1985-06-01
Current designs of permanent-magnet wigglers/undulators use either pure charge sheet equivalent material (CSEM) or the CSEM-Steel hybrid configuration. Hybrid configurations offer higher field strength at small gaps, field distributions dominated by the pole surfaces, and pole tuning. Nominal performance of the hybrid is generally predicted using a 2-D magnetic design code, neglecting the transverse geometry. Magnetic measurements are presented showing the influence of the transverse configuration on performance, from a combination of models using CSEMs, REC (H_c = 9.2 kOe) and NdFe (H_c = 10.7 kOe), different pole widths and end configurations. Results show peak field improvement using NdFe in place of REC in identical models, gap peak field decrease with pole width decrease (all results less than computed 2-D fields), transverse gap field distributions, and the importance of CSEM material overhanging the poles in the transverse direction for the highest gap fields. 3 refs., 6 figs.
Constructing biomolecular motor-powered hybrid NEMS devices
NASA Astrophysics Data System (ADS)
Bachand, George D.; Montemagno, Carlo D.
1999-10-01
The recognition of many enzymes as nanoscale molecular motors has allowed for the potential creation of hybrid organic/inorganic nano-electro-mechanical systems (NEMS) devices. The long-range goal of this research is the integration of F1-ATPase with NEMS to produce useful nanoscale devices. A thermostable F1-ATPase coding sequence has been isolated, cloned, and engineered for high-level protein expression. Precise positioning, spacing, and orientation of single F1-ATPase molecules were achieved using patterned nickel arrays. An efficient, accurate, and adaptable assay was developed to assess the performance of single F1-ATPase motors, and confirmed a three-step mechanism of γ subunit rotation during ATP hydrolysis. Further evaluation of the bioengineering and biophysical properties of F1-ATPase is currently being conducted, as well as the construction of an F1-ATPase-powered, hybrid NEMS device. The evolution of this technology will permit the creation of novel classes of nanoscale, hybrid devices.
An efficient hybrid technique in RCS predictions of complex targets at high frequencies
NASA Astrophysics Data System (ADS)
Algar, María-Jesús; Lozano, Lorena; Moreno, Javier; González, Iván; Cátedra, Felipe
2017-09-01
Most computer codes for Radar Cross Section (RCS) prediction use Physical Optics (PO) and the Physical Theory of Diffraction (PTD) combined with Geometrical Optics (GO) and the Geometrical Theory of Diffraction (GTD). The latter approaches are computationally cheaper and much more accurate for curved surfaces, but they are not applicable to the computation of the RCS of all surfaces of a complex object because of caustic problems in the analysis of concave surfaces or of flat surfaces in the far field. The main contribution of this paper is the development of a hybrid method based on a new combination of two asymptotic techniques, GTD and PO, exploiting the advantages and avoiding the disadvantages of each. The new combination yields a very efficient and accurate method to analyze the RCS of complex structures at high frequencies. The proposed method has been validated by comparing RCS results obtained with the proposed approach to those of the rigorous Method of Moments (MoM) for some simple cases. Some complex cases have been examined at high frequencies, contrasting the results with PO. This study shows the accuracy and efficiency of the hybrid method and its suitability for computing the RCS of very large and complex targets at high frequencies.
Chromosomal localization of the human V3 pituitary vasopressin receptor gene (AVPR3) to 1q32
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rousseau-Merck, M.F.; Derre, J.; Berger, R.
1995-11-20
Vasopressin exerts its physiological effects on liver metabolism, fluid osmolarity, and corticotrophic response to stress through a set of at least three receptors, V1a, V2, and V3 (also called V1b), respectively. These receptors constitute a distinct group of the superfamily of G-protein-coupled cell surface receptors. When bound to vasopressin, they couple to G proteins activating phospholipase C for the V1a and V3 types and adenylate cyclase for the V2. The vasopressin receptor subfamily also includes the receptor for oxytocin, a structurally related hormone that signals through the activation of phospholipase C. The chromosomal position of the V2 receptor gene has been assigned to Xq28-qter by PCR-based screening of somatic cell hybrids, whereas the oxytocin receptor gene has been mapped to chromosome 3q26.2 by fluorescence in situ hybridization (FISH). The chromosomal location of the V1a gene is currently unknown. We recently cloned the cDNA and the gene coding for the human pituitary-specific V3 receptor (HGMW-approved symbol AVPR3). We report here the chromosomal localization of this gene by two distinct in situ hybridization techniques using radioactive and fluorescent probes. 11 refs., 1 fig.
Qin, QinBo; Wang, Juan; Wang, YuDe; Liu, Yun; Liu, ShaoJun
2015-03-13
Offspring with 100 chromosomes (abbreviated as GRCC) have been obtained in the first generation of Carassius auratus red var. (abbreviated as RCC, 2n = 100) (♀) × Megalobrama amblycephala (abbreviated as BSB, 2n = 48) (♂), in which both females and unexpected males are found. Chromosomal and karyotypic analyses of GRCC have been reported and a gynogenetic origin has been suggested, but direct genetic evidence has been lacking. Fluorescence in situ hybridization with species-specific centromere probes directly proves that GRCC possess two sets of RCC-derived chromosomes. Sequence analysis of the coding region (5S) and the adjacent nontranscribed spacer (abbreviated as NTS) reveals that the three types of 5S rDNA class (class I, class II and class III) in GRCC are completely inherited from their female parent (RCC) and show obvious base variations and insertions-deletions. Fluorescence in situ hybridization with the entire 5S rDNA probe reveals obvious variation of the chromosomal loci (class I and class II) in GRCC. This paper provides direct genetic evidence that GRCC is of gynogenetic origin. In addition, our results also reveal that gynogenesis induced by distant hybridization can lead to obvious variation in the sequence and in part of the chromosomal loci of the 5S rDNA gene.
RNAiFold: a web server for RNA inverse folding and molecular design.
Garcia-Martin, Juan Antonio; Clote, Peter; Dotu, Ivan
2013-07-01
Synthetic biology and nanotechnology are poised to make revolutionary contributions to the 21st century. In this article, we describe a new web server to support in silico RNA molecular design. Given an input target RNA secondary structure, together with optional constraints, such as requiring GC-content to lie within a certain range, requiring the number of strong (GC), weak (AU) and wobble (GU) base pairs to lie in a certain range, the RNAiFold web server determines one or more RNA sequences, whose minimum free-energy secondary structure is the target structure. RNAiFold provides access to two servers: RNA-CPdesign, which applies constraint programming, and RNA-LNSdesign, which applies the large neighborhood search heuristic; hence, it is suitable for larger input structures. Both servers can also solve the RNA inverse hybridization problem, i.e. given a representation of the desired hybridization structure, RNAiFold returns two sequences, whose minimum free-energy hybridization is the input target structure. The web server is publicly accessible at http://bioinformatics.bc.edu/clotelab/RNAiFold, which provides access to two specialized servers: RNA-CPdesign and RNA-LNSdesign. Source code for the underlying algorithms, implemented in COMET and supported on linux, can be downloaded at the server website.
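Conceptually, inverse folding searches sequence space for a sequence whose minimum free-energy structure matches the target; the toy sketch below does this by accepting non-worsening point mutations, assuming the ViennaRNA Python bindings (RNA.fold) are installed, and it is not the constraint-programming or large-neighborhood-search machinery behind RNAiFold.

```python
# Hedged toy inverse folding by adaptive point mutation, with ViennaRNA as the folding oracle.
import random
import RNA

def inverse_fold(target, n_steps=2000, seed=0):
    random.seed(seed)
    n = len(target)
    seq = "".join(random.choice("ACGU") for _ in range(n))

    def mismatches(s):
        return sum(a != b for a, b in zip(RNA.fold(s)[0], target))

    best = mismatches(seq)
    for _ in range(n_steps):
        i = random.randrange(n)
        cand = seq[:i] + random.choice("ACGU") + seq[i + 1:]
        d = mismatches(cand)
        if d <= best:                     # keep mutations that do not worsen the match
            seq, best = cand, d
        if best == 0:
            break
    return seq, best

seq, d = inverse_fold("((((....))))")
print(seq, d, RNA.fold(seq)[0])           # designed sequence, remaining mismatches, its MFE structure
```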
Bidard, Frédérique; Imbeaud, Sandrine; Reymond, Nancie; Lespinet, Olivier; Silar, Philippe; Clavé, Corinne; Delacroix, Hervé; Berteaux-Lecellier, Véronique; Debuchy, Robert
2010-06-18
The development of new microarray technologies makes custom long oligonucleotide arrays affordable for many experimental applications, notably gene expression analyses. Reliable results depend on probe design quality and selection. Probe design strategy should cope with the limited accuracy of de novo gene prediction programs, and annotation up-dating. We present a novel in silico procedure which addresses these issues and includes experimental screening, as an empirical approach is the best strategy to identify optimal probes in the in silico outcome. We used four criteria for in silico probe selection: cross-hybridization, hairpin stability, probe location relative to coding sequence end and intron position. This latter criterion is critical when exon-intron gene structure predictions for intron-rich genes are inaccurate. For each coding sequence (CDS), we selected a sub-set of four probes. These probes were included in a test microarray, which was used to evaluate the hybridization behavior of each probe. The best probe for each CDS was selected according to three experimental criteria: signal-to-noise ratio, signal reproducibility, and representative signal intensities. This procedure was applied for the development of a gene expression Agilent platform for the filamentous fungus Podospora anserina and the selection of a single 60-mer probe for each of the 10,556 P. anserina CDS. A reliable gene expression microarray version based on the Agilent 44K platform was developed with four spot replicates of each probe to increase statistical significance of analysis.
Impact of MPEG-4 3D mesh coding on watermarking algorithms for polygonal 3D meshes
NASA Astrophysics Data System (ADS)
Funk, Wolfgang
2004-06-01
The MPEG-4 multimedia standard addresses the scene-based composition of audiovisual objects. Natural and synthetic multimedia content can be mixed and transmitted over narrow and broadband communication channels. Synthetic natural hybrid coding (SNHC) within MPEG-4 provides tools for 3D mesh coding (3DMC). We investigate the robustness of two different 3D watermarking algorithms for polygonal meshes with respect to 3DMC. The first algorithm is a blind detection scheme designed for labelling applications that require high bandwidth and low robustness. The second algorithm is a robust non-blind one-bit watermarking scheme intended for copyright protection applications. Both algorithms have been proposed by Benedens. We expect 3DMC to have an impact on the watermarked 3D meshes, as the algorithms used for our simulations work on vertex coordinates to encode the watermark. We use the 3DMC implementation provided with the MPEG-4 reference software and the Princeton Shape Benchmark model database for our simulations. The watermarked models are sent through the 3DMC encoder and decoder, and the watermark decoding process is performed. For each algorithm under consideration we examine the detection properties as a function of the quantization of the vertex coordinates.
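The robustness test can be illustrated with a toy spread-spectrum vertex watermark and coordinate quantization standing in for the 3DMC encode/decode step; this is a schematic stand-in under assumed parameters, not Benedens' algorithms or the MPEG-4 reference software.

```python
# Toy robustness test: spread-spectrum watermark on vertex coordinates,
# with uniform quantization standing in for lossy 3D mesh coding (3DMC).
import numpy as np

rng = np.random.default_rng(1)
vertices = rng.uniform(-1.0, 1.0, size=(10000, 3))   # stand-in for a mesh vertex list
pattern = rng.standard_normal(vertices.shape)          # secret spreading pattern
strength, bit = 5e-4, +1                                # embed a single bit (+1/-1)

marked = vertices + strength * bit * pattern            # embedding on vertex coordinates

for step in (1e-4, 1e-3, 1e-2):                         # coarser step = stronger compression
    decoded = np.round(marked / step) * step            # "encode + decode" via quantization
    # informed (non-blind) detection: correlate the received-minus-original
    # difference with the secret pattern and read off the sign
    corr = float(np.sum((decoded - vertices) * pattern))
    print(f"step={step:g}  correlation={corr:+.2f}  detected bit={'+1' if corr > 0 else '-1'}")
```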
Hybrid Gibbs Sampling and MCMC for CMB Analysis at Small Angular Scales
NASA Technical Reports Server (NTRS)
Jewell, Jeffrey B.; Eriksen, H. K.; Wandelt, B. D.; Gorski, K. M.; Huey, G.; O'Dwyer, I. J.; Dickinson, C.; Banday, A. J.; Lawrence, C. R.
2008-01-01
A) Gibbs sampling has now been validated as an efficient, statistically exact, and practically useful method for the "low-L" regime (as demonstrated on WMAP temperature and polarization data). B) We are extending Gibbs sampling to directly propagate uncertainties in both foreground and instrument models to the total uncertainty in cosmological parameters for the entire range of angular scales relevant for Planck. C) This is made possible by the inclusion of foreground model parameters in the Gibbs sampling and by hybrid MCMC and Gibbs sampling for the low signal-to-noise (high-L) regime. D) Future items to be included in the Bayesian framework are: 1) integration with a hybrid likelihood (or posterior) code for cosmological parameters; 2) inclusion of other uncertainties in instrumental systematics (e.g. beam uncertainties, noise estimation, calibration errors, and others).
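A minimal sketch of the alternating-conditional idea behind Gibbs sampling is given below for a toy model in which independent Gaussian "modes" share one unknown variance (a stand-in for a power-spectrum amplitude); the model, priors and parameters are illustrative assumptions, not the WMAP/Planck analysis.

```python
# Toy Gibbs sampler: data d_i = s_i + n_i with known noise variance N and an
# unknown signal variance C shared by all modes (stand-in for a C_l amplitude).
import numpy as np

rng = np.random.default_rng(0)
k, C_true, N = 500, 2.0, 1.0
signal = rng.normal(0.0, np.sqrt(C_true), k)
data = signal + rng.normal(0.0, np.sqrt(N), k)

a0, b0 = 2.0, 2.0        # inverse-gamma prior on C (assumed, for conjugacy)
C = 1.0                  # initial guess
samples = []
for it in range(2000):
    # 1) draw the signal given C and the data (Wiener-filter mean plus fluctuation)
    var = 1.0 / (1.0 / C + 1.0 / N)
    mean = var * data / N
    s = rng.normal(mean, np.sqrt(var))
    # 2) draw C given the signal (conjugate inverse-gamma update)
    a, b = a0 + 0.5 * k, b0 + 0.5 * np.sum(s**2)
    C = b / rng.gamma(a)
    samples.append(C)

print("posterior mean of C:", np.mean(samples[500:]))   # should land near C_true
```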
User's Manual for FEM-BEM Method. 1.0
NASA Technical Reports Server (NTRS)
Butler, Theresa; Deshpande, M. D. (Technical Monitor)
2002-01-01
This is a user's manual for a FORTRAN code that performs electromagnetic analysis of arbitrarily shaped material cylinders using a hybrid method combining the finite element method (FEM) and the boundary element method (BEM). In this method, the material cylinder is enclosed by a fictitious boundary and Maxwell's equations are solved by FEM inside the boundary and by BEM outside the boundary. Electromagnetic scattering from several arbitrarily shaped material cylinders is computed with this FORTRAN code as examples.
1981-09-21
Acknowledgements thank A. R. Hislop and D. L. Saul, Code 9262, for their work on the mixer design, and D. L. Chappelle and K. S. Maynard, Code 8124. Recoverable reference: Saleh, A. A. M., "Planar Electrically Symmetric N-Way Hybrid Power Dividers/Combiners," IEEE Trans. MTT-28, pp. 555-563, June 1980.
NASA Technical Reports Server (NTRS)
Platt, M. E.; Lewis, E. E.; Boehm, F.
1991-01-01
A Monte Carlo Fortran computer program was developed that uses two variance reduction techniques for computing system reliability, applicable to solving very large, highly reliable fault-tolerant systems. The program is consistent with the hybrid automated reliability predictor (HARP) code, which employs behavioral decomposition and complex fault/error-handling models. This new capability, called MC-HARP, efficiently solves reliability models with non-constant failure rates (Weibull). Common-mode failure modeling is also supported.
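A minimal sketch of the underlying idea follows: sample Weibull component lifetimes and estimate system unreliability at a mission time for a hypothetical 2-out-of-3 redundant system. It omits the variance-reduction techniques and fault/error-handling models that MC-HARP adds, and the parameters are arbitrary assumptions.

```python
# Crude Monte Carlo reliability estimate for a hypothetical 2-out-of-3 system
# with Weibull component lifetimes (no variance reduction, unlike MC-HARP).
import numpy as np

rng = np.random.default_rng(0)
shape, scale = 1.5, 5000.0      # assumed Weibull parameters (hours)
mission_time = 1000.0           # hours
n_trials = 1_000_000

# lifetimes of 3 components per trial; the system works while >= 2 components work
lifetimes = scale * rng.weibull(shape, size=(n_trials, 3))
working_at_t = (lifetimes > mission_time).sum(axis=1)
unreliability = np.mean(working_at_t < 2)

print(f"estimated system unreliability at t={mission_time:g} h: {unreliability:.2e}")
```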
Lang, Jianying; Ku, S.; Chen, Y.; Parker, S. E.; Adams, M. F.
2017-01-01
As an alternative option to kinetic electrons, the gyrokinetic total-f particle-in-cell (PIC) code XGC1 has been extended to the MHD/fluid type electromagnetic regime by combining gyrokinetic PIC ions with massless drift-fluid electrons analogous to Chen and Parker [Phys. Plasmas 8, 441 (2001)]. Two representative long wavelength modes, shear Alfvén waves and resistive tearing modes, are verified in cylindrical and toroidal magnetic field geometries. PMID:29104419
Numerical simulation of MPD thruster flows with anomalous transport
NASA Technical Reports Server (NTRS)
Caldo, Giuliano; Choueiri, Edgar Y.; Kelly, Arnold J.; Jahn, Robert G.
1992-01-01
Anomalous transport effects in an Ar self-field coaxial MPD thruster are presently studied by means of a fully 2D two-fluid numerical code; its calculations are extended to a range of typical operating conditions. An effort is made to compare the spatial distribution of the steady state flow and field properties and thruster power-dissipation values for simulation runs with and without anomalous transport. A conductivity law based on the nonlinear saturation of lower hybrid current-driven instability is used for the calculations. Anomalous-transport simulation runs have indicated that the resistivity in specific areas of the discharge is significantly higher than that calculated in classical runs.
Analysis of typical fault-tolerant architectures using HARP
NASA Technical Reports Server (NTRS)
Bavuso, Salvatore J.; Bechta Dugan, Joanne; Trivedi, Kishor S.; Rothmann, Elizabeth M.; Smith, W. Earl
1987-01-01
Difficulties encountered in the modeling of fault-tolerant systems are discussed. The Hybrid Automated Reliability Predictor (HARP) approach to modeling fault-tolerant systems is described. The HARP is written in FORTRAN, consists of nearly 30,000 lines of code and comments, and is based on behavioral decomposition. Using the behavioral decomposition, the dependability model is divided into fault-occurrence/repair and fault/error-handling models; the characteristics and combining of these two models are examined. Examples in which the HARP is applied to the modeling of some typical fault-tolerant systems, including a local-area network, two fault-tolerant computer systems, and a flight control system, are presented.
Aspects of GPU performance in algorithms with random memory access
NASA Astrophysics Data System (ADS)
Kashkovsky, Alexander V.; Shershnev, Anton A.; Vashchenkov, Pavel V.
2017-10-01
The numerical code for solving the Boltzmann equation on a hybrid computational cluster using the Direct Simulation Monte Carlo (DSMC) method showed that on Tesla K40 accelerators the computational performance drops dramatically as the percentage of occupied GPU memory increases. Testing revealed that the memory access time increases by tens of times after a certain critical percentage of memory is occupied. Moreover, this appears to be a common problem of NVIDIA GPUs, arising from their architecture. A few modifications of the numerical algorithm were proposed to overcome this problem. One of them, based on splitting the memory into "virtual" blocks, resulted in a 2.5-times speed-up.
NASA Astrophysics Data System (ADS)
Amiraux, Mathieu
Rotorcraft Blade-Vortex Interaction (BVI) remains one of the most challenging flow phenomena to simulate numerically. Over the past decade, the HART-II rotor test and its extensive experimental dataset have been a major database for validation of CFD codes. Its strong BVI signature, with high levels of intrusive noise and vibrations, makes it a difficult test for computational methods. The main challenge is to accurately capture and preserve the vortices which interact with the rotor, while predicting correct blade deformations and loading. This doctoral dissertation presents the application of a coupled CFD/CSD methodology to the problem of helicopter BVI and compares three levels of fidelity for aerodynamic modeling: a hybrid lifting-line/free-wake (wake coupling) method, with a modified compressible unsteady model; a hybrid URANS/free-wake method; and a URANS-based wake capturing method, using multiple overset meshes to capture the entire flow field. To further increase numerical correlation, three helicopter fuselage models are implemented in the framework. The first is a high resolution 3D GPU panel code; the second is an immersed boundary based method, with 3D elliptic grid adaption; the last one uses a body-fitted, curvilinear fuselage mesh. The main contribution of this work is the implementation and systematic comparison of multiple numerical methods to perform BVI modeling. The trade-offs between solution accuracy and computational cost are highlighted for the different approaches. Various improvements have been made to each code to enhance physical fidelity, while advanced technologies, such as GPU computing, have been employed to increase efficiency. The resulting numerical setup covers all aspects of the simulation, creating a truly multi-fidelity and multi-physics framework. Overall, the wake capturing approach showed the best BVI phasing correlation and good blade deflection predictions, with slightly under-predicted aerodynamic loading magnitudes. However, it proved to be much more expensive than the other two methods. Wake coupling with the RANS solver had very good loading magnitude predictions, and therefore good acoustic intensities, with acceptable computational cost. The lifting-line based technique often had over-predicted aerodynamic levels, due to the degree of empiricism of the model, but its very short run-times, thanks to GPU technology, make it a very attractive approach.
Beam dynamic simulation and optimization of the CLIC positron source and the capture linac
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bayar, C., E-mail: cafer.bayar@cern.ch; CERN, Geneva; Doebert, S., E-mail: Steffen.Doebert@cern.ch
2016-03-25
The CLIC positron source is based on a hybrid target composed of a crystal and an amorphous target. Simulations have been performed from the exit of the amorphous target to the end of the pre-injector linac, which captures and accelerates the positrons to an energy of 200 MeV. The simulations are performed with the particle tracking code PARMELA. The magnetic field of the AMD is represented in PARMELA by simple coils. Two modes are considered in this study: an accelerating mode, based on acceleration after the AMD, and a decelerating mode, based on deceleration in the first accelerating structure. It is shown that the decelerating mode gives a higher yield for the e+ beam at the end of the pre-injector linac.
Design of neurophysiologically motivated structures of time-pulse coded neurons
NASA Astrophysics Data System (ADS)
Krasilenko, Vladimir G.; Nikolsky, Alexander I.; Lazarev, Alexander A.; Lobodzinska, Raisa F.
2009-04-01
The paper describes a general methodology, based on a biologically motivated concept, for building sensor processing systems with parallel input, picture-operand processing and time-pulse coding. The advantages of such coding for creating parallel programmable 2D-array structures for next-generation digital computers, which require non-traditional numerical systems for processing analog, digital, hybrid and neuro-fuzzy operands, are shown. Simulation results for optoelectronic time-pulse coded intelligent neural elements (OETPCINE) and implementation results for a wide set of neuro-fuzzy logic operations are considered. The simulation results confirm the engineering advantages, intelligence and circuit flexibility of OETPCINE for creating advanced 2D structures. The developed equivalentor/nonequivalentor neural element has a power consumption of 10 mW and a processing time of about 10-100 us.
Colour-barcoded magnetic microparticles for multiplexed bioassays.
Lee, Howon; Kim, Junhoi; Kim, Hyoki; Kim, Jiyun; Kwon, Sunghoon
2010-09-01
Encoded particles have a demonstrated value for multiplexed high-throughput bioassays such as drug discovery and clinical diagnostics. In diverse samples, the ability to use a large number of distinct identification codes on assay particles is important to increase throughput. Proper handling schemes are also needed to readout these codes on free-floating probe microparticles. Here we create vivid, free-floating structural coloured particles with multi-axis rotational control using a colour-tunable magnetic material and a new printing method. Our colour-barcoded magnetic microparticles offer a coding capacity easily into the billions with distinct magnetic handling capabilities including active positioning for code readouts and active stirring for improved reaction kinetics in microscale environments. A DNA hybridization assay is done using the colour-barcoded magnetic microparticles to demonstrate multiplexing capabilities.
WOMBAT: A Scalable and High-performance Astrophysical Magnetohydrodynamics Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendygral, P. J.; Radcliffe, N.; Kandalla, K.
2017-02-01
We present a new code for astrophysical magnetohydrodynamics specifically designed and optimized for high performance and scaling on modern and future supercomputers. We describe a novel hybrid OpenMP/MPI programming model that emerged from a collaboration between Cray, Inc. and the University of Minnesota. This design utilizes MPI-RMA optimized for thread scaling, which allows the code to run extremely efficiently at very high thread counts ideal for the latest generation of multi-core and many-core architectures. Such performance characteristics are needed in the era of “exascale” computing. We describe and demonstrate our high-performance design in detail with the intent that it may be used as a model for other, future astrophysical codes intended for applications demanding exceptional performance.
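The hybrid model, with MPI ranks across nodes and many threads within each rank, can be sketched in miniature as below; this uses mpi4py and Python threads as stand-ins for the MPI/OpenMP combination and is not WOMBAT's actual design or its MPI-RMA machinery.

```python
# Miniature hybrid parallelism: MPI ranks own sub-domains, threads update
# blocks within a rank (a stand-in for an MPI/OpenMP model, not WOMBAT itself).
# Run with e.g.: mpirun -n 4 python hybrid_demo.py
from concurrent.futures import ThreadPoolExecutor
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_global = 1_000_000
local = np.linspace(rank, rank + 1, n_global // size)   # this rank's sub-domain

def update(block):
    return np.sum(np.sin(block))          # stand-in for a per-block solver update

# thread-level parallelism inside the rank (NumPy releases the GIL in many kernels)
blocks = np.array_split(local, 8)
with ThreadPoolExecutor(max_workers=8) as pool:
    local_sum = sum(pool.map(update, blocks))

total = comm.allreduce(local_sum, op=MPI.SUM)            # node-level reduction over MPI
if rank == 0:
    print("global result:", total)
```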
Benchmarking Defmod, an open source FEM code for modeling episodic fault rupture
NASA Astrophysics Data System (ADS)
Meng, Chunfang
2017-03-01
We present Defmod, an open source (linear) finite element code that enables us to efficiently model the crustal deformation due to (quasi-)static and dynamic loadings, poroelastic flow, viscoelastic flow and frictional fault slip. Ali (2015) provides the original code, introducing an implicit solver for the (quasi-)static problem and an explicit solver for the dynamic problem. The fault constraint is implemented via Lagrange multipliers. Meng (2015) combines these two solvers into a hybrid solver that uses failure criteria and friction laws to adaptively switch between the (quasi-)static state and the dynamic state. The code is capable of modeling episodic fault rupture driven by quasi-static loadings, e.g. due to reservoir fluid withdrawal or injection. Here, we focus on benchmarking the Defmod results against some established results.
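The hybrid-solver idea (march quasi-statically until a failure criterion is met, run the dynamic solver through the rupture, then fall back) can be sketched as a control loop; the stress update, Coulomb-style criterion and step sizes below are hypothetical placeholders, not Defmod's finite element formulation.

```python
# Schematic control loop for adaptive quasi-static / dynamic switching
# (placeholder physics; not Defmod's formulation).
def quasi_static_step(stress, loading_rate, dt):
    return stress + loading_rate * dt                 # slow tectonic/reservoir loading

def dynamic_step(stress, dt, drop=0.8):
    return stress * (1.0 - drop * dt)                 # crude stress release during rupture

mu, cohesion, normal_stress = 0.6, 1.0e6, 50.0e6      # assumed friction-law parameters (Pa)
failure_stress = cohesion + mu * normal_stress

year = 3.15e7                                          # seconds
stress, t = 0.0, 0.0
dt_qs, dt_dyn = year, 1.0e-2                           # ~1 yr quasi-static vs 10 ms dynamic steps
while t < 100 * year:                                  # simulate ~100 years of loading
    if stress < failure_stress:                        # quasi-static regime
        stress = quasi_static_step(stress, loading_rate=1.0e6 / year, dt=dt_qs)
        t += dt_qs
    else:                                              # switch to the dynamic solver
        print(f"rupture triggered at t = {t / year:.1f} yr")
        while stress > 0.1 * failure_stress:           # run dynamics until stress is relieved
            stress = dynamic_step(stress, dt_dyn)
            t += dt_dyn
```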
User's manual for CBS3DS, version 1.0
NASA Astrophysics Data System (ADS)
Reddy, C. J.; Deshpande, M. D.
1995-10-01
CBS3DS is a computer code written in FORTRAN 77 to compute the backscattering radar cross section of cavity-backed apertures in an infinite ground plane and slots in a thick infinite ground plane. CBS3DS implements the hybrid Finite Element Method (FEM) and Method of Moments (MoM) techniques. The code uses tetrahedral elements with vector edge basis functions for the FEM in the volume of the cavity/slot, and triangular elements with the corresponding basis functions for the MoM at the apertures. By virtue of the FEM, the code can handle arbitrarily shaped three-dimensional cavities filled with inhomogeneous lossy materials; by virtue of the MoM, the apertures can be of arbitrary shape. The user's manual is written to acquaint the user with the operation of the code. The user is assumed to be familiar with the FORTRAN 77 language and the operating environment of the computer on which the code is intended to run.
Systems biology by the rules: hybrid intelligent systems for pathway modeling and discovery.
Bosl, William J
2007-02-15
Expert knowledge in journal articles is an important source of data for reconstructing biological pathways and creating new hypotheses. An important need for medical research is to integrate these data with high-throughput sources to build useful models that span several scales. Researchers traditionally use mental models of pathways to integrate information and develop new hypotheses. Unfortunately, the amount of information is often overwhelming and such mental models are inadequate for predicting the dynamic response of complex pathways. Hierarchical computational models that allow exploration of semi-quantitative dynamics are useful systems biology tools for theoreticians, experimentalists and clinicians and may provide a means for cross-communication. A novel approach for biological pathway modeling based on hybrid intelligent systems or soft computing technologies is presented here. Hybrid intelligent systems, a term that refers to several related computing methods such as fuzzy logic, neural nets, genetic algorithms, and statistical analysis, have become ubiquitous in engineering applications for complex control system modeling and design. Biological pathways may be considered to be complex control systems, which medicine tries to manipulate to achieve desired results. Thus, hybrid intelligent systems may provide a useful tool for modeling biological system dynamics and computational exploration of new drug targets. A new modeling approach based on these methods is presented in the context of hedgehog regulation of the cell cycle in granule cells. Code and input files can be found at the Bionet website: www.chip.ord/~wbosl/Software/Bionet. This paper presents the algorithmic methods needed for modeling complicated biochemical dynamics using rule-based models to represent expert knowledge in the context of cell cycle regulation and tumor growth. A notable feature of this modeling approach is that it allows biologists to build complex models from their knowledge base without the need to translate that knowledge into mathematical form. Dynamics on several levels, from molecular pathways to tissue growth, are seamlessly integrated. A number of common network motifs are examined and used to build a model of hedgehog regulation of the cell cycle in cerebellar neurons, which is believed to play a key role in the etiology of medulloblastoma, a devastating childhood brain cancer.
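The rule-based flavour of this approach can be illustrated with a tiny fuzzy-logic update for one pathway node; the membership functions and the rule ("if hedgehog signal is high and inhibitor is low, proliferation is high") are made-up placeholders, not the paper's hedgehog/cell-cycle model.

```python
# Tiny fuzzy-rule update for one pathway node (made-up rule and memberships;
# not the hedgehog/cell-cycle model described in the paper).
def high(x):             # membership of a normalized level in the fuzzy set "high"
    return max(0.0, min(1.0, (x - 0.3) / 0.4))

def low(x):              # membership in "low"
    return 1.0 - high(x)

def proliferation_drive(hedgehog, inhibitor):
    """Rule: IF hedgehog is high AND inhibitor is low THEN proliferation is high.
    AND is min(), and the rule strength is used directly as the output level."""
    return min(high(hedgehog), low(inhibitor))

for hh, inh in [(0.9, 0.1), (0.9, 0.8), (0.2, 0.1)]:
    print(f"hedgehog={hh:.1f} inhibitor={inh:.1f} -> proliferation={proliferation_drive(hh, inh):.2f}")
```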
Fusion of Escherichia coli heat-stable enterotoxin and heat-labile enterotoxin B subunit.
Guzman-Verduzco, L M; Kupersztoch, Y M
1987-11-01
The 3' terminus of the DNA coding for the extracellular Escherichia coli heat-stable enterotoxin (ST) devoid of transcription and translation stop signals was fused to the 5' terminus of the DNA coding for the periplasmic B subunit of the heat-labile enterotoxin (LTB) deleted of ribosomal binding sites and leader peptide. By RNA-DNA hybridization analysis, it was shown that the fused DNA was transcribed in vivo into an RNA species in close agreement with the expected molecular weight inferred from the nucleotide sequence. The translation products of the fused DNA resulted in a hybrid molecule recognized in Western blots (immunoblots) with antibodies directed against the heat-labile moiety. Anti-LTB antibodies coupled to a solid support bound ST and LTB simultaneously when incubated with ST-LTB cellular extracts. By [35S]cysteine pulse-chase experiments, it was shown that the fused ST-LTB polypeptide was converted from a precursor with an equivalent electrophoretic mobility of 20,800 daltons to an approximately 18,500-dalton species, which accumulated within the cell. The data suggest that wild-type ST undergoes at least two processing steps during its export to the culture supernatant. Blocking the natural carboxy terminus of ST inhibited the second proteolytic step and extracellular delivery of the hybrid molecule.
Ning, S B; Wang, L; Song, Y C
2000-01-01
Peroxidase plays a key role in plant disease resistance, cold stress and some developmental processes, and cold-regulated proteins are essential for the response of plants to cold or heat stress. Recent studies have shown that these processes in plant cells are involved in programmed cell death (PCD). Using a biotin-labelled in situ hybridization (ISH) technique, we physically mapped the genes px and cld, coding for peroxidase and a cold-regulated protein respectively, onto maize chromosomes. Both DAB and fluorescence detection systems gave identical results: the probe uaz235, corresponding to gene px, was localized onto the long arm of chromosome 2 (2L) and onto 7L, and csu19, corresponding to gene cld, hybridized onto 4L and 5L. The percentage distances (from the hybridization sites to the centromeres) of uaz235 in 2L and 7L were 45.4 +/- 1.3 and 67.4 +/- 3.7 respectively, and those of csu19 in 4L and 5L were 68.6 +/- 2.6 and 58.2 +/- 1.6 respectively. The physical positions of px in 2L and cld in 4L coincide with those in their genetic maps. The results also show that both genes have duplicated sites in the maize genome.
Green Energy for the Battlefield
2007-12-01
Keywords: Biodiesel, Ethanol, Natural Gas, Coal-Derived Liquid Fuels, Electricity, Greenhouse Gas Emissions, Battlefield, Hybrid Vehicles. Recoverable table-of-contents entries include: Electricity; Current DoD Research and Applications; Coal-Derived Liquid Fuels - Assured Fuels Initiative; Electricity - Luke Air Force.
Li, Haitao; Li, Juanjuan; Zhao, Bo; Wang, Jing; Yi, Licong; Liu, Chao; Wu, Jiangsheng; King, Graham J; Liu, Kede
2015-01-01
Identification and molecular analysis of four tribenuron-methyl resistant mutants in Brassica napus, which would be very useful in hybrid production using a chemically induced male sterility system. Chemically induced male sterility (CIMS) systems dependent on chemical hybridization agents (CHAs) like tribenuron-methyl (TBM) represent an important approach for practical utilization of heterosis in rapeseed. However, when the female parents are sprayed with TBM to induce male sterility, the male parents must be protected with a shield to avoid injury to the stamens, which complicates the seed production protocol and increases the cost of hybrid seed production. Here we report the first proposed application of a herbicide-resistant cultivar in hybrid production, using a CIMS system based on four TBM-resistant mutants identified in Brassica napus. Genetic analysis indicated that the TBM resistance was controlled by a single dominant nuclear gene. An in vitro enzyme activity assay for acetohydroxyacid synthase (AHAS) suggested that the herbicide resistance is caused by a gain-of-function mutation in one copy of the AHAS genes. Comparative sequencing of the mutant and wild-type BnaA.AHAS.a coding sequences identified a C-to-T transition at either position 535 or 536 from the translation start site, which resulted in a substitution of proline with serine or leucine at position 197 according to the Arabidopsis thaliana protein sequence. An allele-specific dCAPS marker developed from the C536T variation co-segregated with the herbicide resistance. Transgenic A. thaliana plants expressing BnaA.ahas3.a conferred herbicide resistance, which confirmed that the P197 substitution in BnaA.AHAS.a was responsible for the herbicide resistance. Moreover, the TBM-resistant lines maintain normal male fertility under TBM treatment and can be of practical value in hybrid seed production using CIMS.
Origins of the Human Genome Project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cook-Deegan, Robert
1993-07-01
The human genome project was born of technology, grew into a science bureaucracy in the US and throughout the world, and is now being transformed into a hybrid academic and commercial enterprise. The next phase of the project promises to veer more sharply toward commercial application, harnessing both the technical prowess of molecular biology and the rapidly growing body of knowledge about DNA structure to the pursuit of practical benefits. Faith that the systematic analysis of DNA structure will prove to be a powerful research tool underlies the rationale behind the genome project. The notion that most genetic information is embedded in the sequence of DNA base pairs comprising chromosomes is a central tenet. A rough analogy is to liken an organism's genetic code to computer code. The goal of the genome project, in this parlance, is to identify and catalog 75,000 or more files (genes) in the software that directs construction of a self-modifying and self-replicating system -- a living organism.
Avatar DNA Nanohybrid System in Chip-on-a-Phone
NASA Astrophysics Data System (ADS)
Park, Dae-Hwan; Han, Chang Jo; Shul, Yong-Gun; Choy, Jin-Ho
2014-05-01
Long admired for their informational role and recognition function in multidisciplinary science, DNA nanohybrids have been emerging as ideal materials for molecular nanotechnology and genetic information coding. Here, we designed an optical machine-readable DNA icon on a microarray, Avatar DNA, for automatic identification and data capture, such as Quick Response and ColorZip codes. The Avatar icon is made of telepathic DNA-DNA hybrids inscribed on chips, which can be identified by the camera of a smartphone with application software. Information encoded in the base sequences can be accessed by connecting the off-line icon to an on-line web-server network to provide a message, index, or URL from a database library. Avatar DNA thus converges nano-bio-info-cogno science: each building block stands for inorganic nanosheets, nucleotides, digits, and pixels. This convergence could address item-level identification that strengthens supply-chain security against drug counterfeits. It can, therefore, provide molecular-level vision through the mobile network to coordinate and integrate data management channels for visual detection and recording.
MHD and resonant instabilities in JT-60SA during current ramp-up with off-axis N-NB injection
NASA Astrophysics Data System (ADS)
Bierwage, A.; Toma, M.; Shinohara, K.
2017-12-01
The excitation of magnetohydrodynamic (MHD) and resonant instabilities and their effect on the plasma profiles during the current ramp-up phase of a beam-driven JT-60SA tokamak plasma is studied using the MHD-PIC hybrid code MEGA. In the simple scenario considered, the plasma is only driven by one negative-ion-based neutral beam, depositing 500 keV deuterons at 5 MW power off-axis at about mid-radius. The beam injection starts half-way in the ramp-up phase. Within 1 s, the beam-driven plasma current and fast ion pressure produce a configuration that is strongly unstable to rapidly growing MHD and resonant modes. Using MEGA, modes with low toroidal mode numbers in the range n = 1-4 are examined in detail and shown to cause substantial changes in the plasma profiles. The necessity to develop reduced models and incorporate the effects of such instabilities in integrated codes used to simulate the evolution of entire plasma discharges is discussed.
TOPLHA and ALOHA: comparison between Lower Hybrid wave coupling codes
NASA Astrophysics Data System (ADS)
Meneghini, Orso; Hillairet, J.; Goniche, M.; Bilato, R.; Voyer, D.; Parker, R.
2008-11-01
TOPLHA and ALOHA are wave coupling simulation tools for LH antennas. Both codes are able to account for realistic 3D antenna geometries and use a 1D plasma model. In the framework of a collaboration between MIT and CEA laboratories, the two codes have been extensively compared. In TOPLHA the EM problem is self consistently formulated by means of a set of multiple coupled integral equations having as domain the triangles of the meshed antenna surface. TOPLHA currently uses the FELHS code for modeling the plasma response. ALOHA instead uses a mode matching approach and its own plasma model. Comparisons have been done for several plasma scenarios on different antenna designs: an array of independent waveguides, a multi-junction antenna and a passive/active multi-junction antenna. When simulating the same geometry and plasma conditions the two codes compare remarkably well both for the reflection coefficients and for the launched spectra. The different approach of the two codes to solve the same problem strengthens the confidence in the final results.
Ojeda-May, Pedro; Nam, Kwangho
2017-08-08
The strategy and implementation of scalable and efficient semiempirical (SE) QM/MM methods in CHARMM are described. The serial version of the code was first profiled to identify routines that required parallelization. Afterward, the code was parallelized and accelerated with three approaches. The first approach was the parallelization of the entire QM/MM routines, including the Fock matrix diagonalization routines, using the CHARMM message passing interface (MPI) machinery. In the second approach, two different self-consistent field (SCF) energy convergence accelerators were implemented using density and Fock matrices as targets for their extrapolations in the SCF procedure. In the third approach, the entire QM/MM and MM energy routines were accelerated by implementing the hybrid MPI/open multiprocessing (OpenMP) model in which both the task- and loop-level parallelization strategies were adopted to balance loads between different OpenMP threads. The present implementation was tested on two solvated enzyme systems (including <100 QM atoms) and an SN2 symmetric reaction in water. The MPI version outperformed existing SE QM methods in CHARMM, which include the SCC-DFTB and SQUANTUM methods, by at least 4-fold. The use of SCF convergence accelerators further accelerated the code by ∼12-35% depending on the size of the QM region and the number of CPU cores used. Although the MPI version displayed good scalability, the performance was diminished for large numbers of MPI processes due to the overhead associated with MPI communications between nodes. This issue was partially overcome by the hybrid MPI/OpenMP approach which displayed a better scalability for a larger number of CPU cores (up to 64 CPUs in the tested systems).
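One common SCF convergence accelerator of the kind mentioned here is a DIIS-style extrapolation of the Fock (or density) matrix from stored iterates and error vectors; the sketch below shows only the extrapolation step, with made-up matrices, and is not the CHARMM implementation.

```python
# DIIS-style extrapolation step: given stored Fock-like matrices and error
# vectors from previous SCF iterations, build the extrapolated matrix.
# (Generic illustration with made-up data; not the CHARMM implementation.)
import numpy as np

def diis_extrapolate(matrices, errors):
    """matrices, errors: lists of ndarrays from the last few SCF iterations."""
    m = len(matrices)
    B = np.empty((m + 1, m + 1))
    B[:m, :m] = [[np.vdot(ei, ej) for ej in errors] for ei in errors]
    B[m, :], B[:, m], B[m, m] = -1.0, -1.0, 0.0
    rhs = np.zeros(m + 1)
    rhs[m] = -1.0
    coeffs = np.linalg.solve(B, rhs)[:m]          # mixing coefficients, sum to 1
    return sum(c * F for c, F in zip(coeffs, matrices))

rng = np.random.default_rng(0)
F_iters = [rng.standard_normal((4, 4)) for _ in range(3)]                # stand-in Fock matrices
E_iters = [rng.standard_normal((4, 4)) * 10.0**(-k) for k in range(3)]   # shrinking error vectors
F_new = diis_extrapolate(F_iters, E_iters)
print(F_new.shape)
```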
Modelling of the EAST lower-hybrid current drive experiment using GENRAY/CQL3D and TORLH/CQL3D
NASA Astrophysics Data System (ADS)
Yang, C.; Bonoli, P. T.; Wright, J. C.; Ding, B. J.; Parker, R.; Shiraiwa, S.; Li, M. H.
2014-12-01
The coupled GENRAY-CQL3D code has been used to do systematic ray-tracing and Fokker-Planck analysis for EAST Lower Hybrid wave Current Drive (LHCD) experiments. Despite being in the weak absorption regime, the experimental level of LH current drive is successfully simulated, by taking into account the variations in the parallel wavenumber due to the toroidal effect. The effect of radial transport of the fast LH electrons in EAST has also been studied, which shows that a modest amount of radial transport diffusion can redistribute the fast LH current significantly. Taking advantage of the new capability in GENRAY, the actual Scrape Off Layer (SOL) model with magnetic field, density, temperature, and geometry is included in the simulation for both the lower and the higher density cases, so that the collisional losses of Lower Hybrid Wave (LHW) power in the SOL has been accounted for, which together with fast electron losses can reproduce the LHCD experimental observations in different discharges of EAST. We have also analyzed EAST discharges where there is a significant ohmic contribution to the total current, and good agreement with experiment in terms of total current has been obtained. Also, the full-wave code TORLH has been used for the simulation of the LH physics in the EAST, including full-wave effects such as diffraction and focusing which may also play an important role in bridging the spectral gap. The comparisons between the GENRAY and the TORLH codes are done for both the Maxwellian and the quasi-linear electron Landau damping cases. These simulations represent an important addition to the validation studies of the GENRAY-CQL3D and TORLH models being used in weak absorption scenarios of tokamaks with large aspect ratio.
An Italian network to improve hybrid rocket performance: Strategy and results
NASA Astrophysics Data System (ADS)
Galfetti, L.; Nasuti, F.; Pastrone, D.; Russo, A. M.
2014-03-01
The new international attention to hybrid space propulsion points out the need for a deeper understanding of the physico-chemical phenomena controlling the combustion process and fluid dynamics inside the motor. This research project has been carried out by a network of four Italian universities, each of them responsible for a specific topic. The task of Politecnico di Milano is an experimental activity concerning the study, development, manufacturing and characterization of advanced hybrid solid fuels with a high regression rate. The University of Naples is responsible for experimental activities focused on rocket motor scale characterization of the solid fuels developed and characterized at laboratory scale by Politecnico di Milano. The University of Rome has been studying the combustion chamber and nozzle of the hybrid rocket, defined in the coordinated program by advanced physical-mathematical models and numerical methods. Politecnico di Torino has been working on a multidisciplinary optimization code for optimal design of hybrid rocket motors, strongly related to the mission to be performed. The overall research project aims to increase the scientific knowledge of the combustion processes in hybrid rockets, using a strongly linked experimental-numerical approach. Methods and obtained results will be applied to implement a potential upgrade for the current generation of hybrid rocket motors. This paper presents the overall strategy, the organization, and the first experimental and numerical results of this joint effort to contribute to the development of improved hybrid propulsion systems.
Modeling Charge Collection in Detector Arrays
NASA Technical Reports Server (NTRS)
Hardage, Donna (Technical Monitor); Pickel, J. C.
2003-01-01
A detector array charge collection model has been developed for use as an engineering tool to aid in the design of optical sensor missions for operation in the space radiation environment. This model is an enhancement of the prototype array charge collection model that was developed for the Next Generation Space Telescope (NGST) program. The primary enhancements were accounting for drift-assisted diffusion by Monte Carlo modeling techniques and implementing the modeling approaches in a Windows-based code. The modeling is concerned with integrated charge collection within discrete pixels in the focal plane array (FPA), with high-fidelity spatial resolution. It is applicable to all detector geometries, including monolithic charge-coupled devices (CCDs), Active Pixel Sensors (APS) and hybrid FPA geometries based on a detector array bump-bonded to a readout integrated circuit (ROIC).
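The drift-assisted-diffusion idea can be illustrated with a toy Monte Carlo: carriers generated at an interaction point spread laterally while drifting to the collection plane, and the collected charge is binned into discrete pixels. The geometry and transport parameters below are arbitrary assumptions, not those of the NGST-era or enhanced model.

```python
# Toy Monte Carlo of charge collection in a pixel array: carriers diffuse
# laterally while drifting to the collection plane, then are binned by pixel.
import numpy as np

rng = np.random.default_rng(42)
pixel_pitch = 18.0          # microns
n_pixels = 5                # 5x5 neighbourhood around the hit
depth = 10.0                # generation depth above the collection plane, microns
sigma_per_micron = 0.8      # lateral diffusion per micron of drift (assumed)
n_carriers = 100_000

# carriers start above the centre of the central pixel and spread laterally
sigma = sigma_per_micron * depth
x = rng.normal(0.0, sigma, n_carriers)
y = rng.normal(0.0, sigma, n_carriers)

# bin carrier landing positions into pixels
edges = (np.arange(n_pixels + 1) - n_pixels / 2) * pixel_pitch
charge_map, _, _ = np.histogram2d(x, y, bins=[edges, edges])

print("fraction collected in central pixel:",
      charge_map[n_pixels // 2, n_pixels // 2] / n_carriers)
```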
Overview of FAR-TECH's magnetic fusion energy research
NASA Astrophysics Data System (ADS)
Kim, Jin-Soo; Bogatu, I. N.; Galkin, S. A.; Spencer, J. Andrew; Svidzinski, V. A.; Zhao, L.
2017-10-01
FAR-TECH, Inc. has been working on magnetic fusion energy research for over two decades. Over the years, we have developed unique approaches to help understand the physics and resolve issues in magnetic fusion energy. The specific areas of work have been modeling of RF waves in plasmas, MHD modeling and mode identification, and the nano-particle plasma jet and its application to disruption mitigation. Our research highlights of recent years will be presented with examples, specifically the developments of FullWave (Full Wave RF code), PMARS (Parallelized MARS code), and HEM (Hybrid ElectroMagnetic code). In addition, the nano-particle plasma jet (NPPJ) and its application to disruption mitigation will be presented. This work is supported by the U.S. DOE SBIR program.
Utilization of TRISO Fuel with LWR Spent Fuel in Fusion-Fission Hybrid Reactor System
NASA Astrophysics Data System (ADS)
Acır, Adem; Altunok, Taner
2010-10-01
HTRs use high-performance particulate TRISO fuel with ceramic multi-layer coatings because of its high burn-up capability and very good neutronic performance. For the same reasons, TRISO fuel is considered here in a D-T fusion driven hybrid reactor. In this study, TRISO fuel particles are embedded in a graphite matrix in a body-centered cubic (BCC) arrangement with a volume fraction of 68%. The neutronic effect of TRISO-coated LWR spent fuel in the fuel rods of the hybrid reactor on fuel performance has been investigated for Flibe, Flinabe and Li20Sn80 coolants. The reactor operation time with the different first-wall neutron loads is 24 months. Neutron transport calculations are performed using the XSDRNPM/SCALE 5 codes with a 238-group cross-section library. The effect of TRISO-coated LWR spent fuel in the fuel rods of the hybrid reactor on tritium breeding (TBR), energy multiplication (M), fissile fuel breeding and average burn-up values is comparatively investigated. It is shown that high burn-up can be achieved with TRISO fuel in the hybrid reactor.
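The quoted 68% volume fraction is consistent with the ideal packing fraction of a body-centered cubic arrangement of equal spheres, shown below as a quick check (a geometric identity, independent of the specific TRISO design).

```latex
% Ideal BCC packing fraction of equal spheres: two spheres per cubic cell,
% spheres touching along the body diagonal, so 4r = \sqrt{3}\,a.
\eta_{\mathrm{BCC}}
  = \frac{2 \cdot \tfrac{4}{3}\pi r^{3}}{a^{3}}
  = \frac{2 \cdot \tfrac{4}{3}\pi \left(\tfrac{\sqrt{3}}{4}a\right)^{3}}{a^{3}}
  = \frac{\sqrt{3}\,\pi}{8} \approx 0.68
```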
Stable CoT-1 repeat RNA is abundant and associated with euchromatic interphase chromosomes
Hall, Lisa L.; Carone, Dawn M.; Gomez, Alvin; Kolpa, Heather J.; Byron, Meg; Mehta, Nitish; Fackelmayer, Frank O.; Lawrence, Jeanne B.
2014-01-01
Recent studies recognize a vast diversity of non-coding RNAs with largely unknown functions, but few have examined interspersed repeat sequences, which constitute almost half our genome. RNA hybridization in situ using CoT-1 (highly repeated) DNA probes detects surprisingly abundant euchromatin-associated RNA comprised predominantly of repeat sequences (“CoT-1 RNA”), including LINE-1. CoT-1-hybridizing RNA strictly localizes to the interphase chromosome territory in cis, and remains stably associated with the chromosome territory following prolonged transcriptional inhibition. The CoT-1 RNA territory resists mechanical disruption and fractionates with the non-chromatin scaffold, but can be experimentally released. Loss of repeat-rich, stable nuclear RNAs from euchromatin corresponds to aberrant chromatin distribution and condensation. CoT-1 RNA has several properties similar to XIST chromosomal RNA, but is excluded from chromatin condensed by XIST. These findings impact two “black boxes” of genome science: the poorly understood diversity of non-coding RNA and the unexplained abundance of repetitive elements. PMID:24581492
Magnet system optimization for segmented adaptive-gap in-vacuum undulator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kitegi, C., E-mail: ckitegi@bnl.gov; Chubar, O.; Eng, C.
2016-07-27
The Segmented Adaptive Gap in-vacuum Undulator (SAGU), in which different segments have different gaps and periods, promises a considerable spectral performance gain over a conventional undulator with uniform gap and period. According to calculations, this gain can be comparable to the gain achievable with a superior undulator technology (e.g. a room-temperature in-vacuum hybrid SAGU would perform as a cryo-cooled hybrid in-vacuum undulator with uniform gap and period). However, to reach this high spectral performance, the SAGU magnetic design has to include compensation of the kicks experienced by the electron beam at segment junctions because of the different deflection parameter values in the segments. We show that such compensation can to a large extent be accomplished by using a passive correction; however, simple correction coils are nevertheless required as well to reach perfect compensation over the whole SAGU tuning range. Magnetic optimizations performed with the Radia code, and the resulting undulator radiation spectra calculated using the SRW code, demonstrating the possibility of nearly perfect correction, are presented.
Fatigue Life Methodology for Tapered Hybrid Composite Flexbeams
NASA Technical Reports Server (NTRS)
urri, Gretchen B.; Schaff, Jeffery R.
2006-01-01
Nonlinear-tapered flexbeam specimens from a full-size composite helicopter rotor hub flexbeam were tested under combined constant axial tension and cyclic bending loads. Two different graphite/glass hybrid configurations tested under cyclic loading failed by delamination in the tapered region. A 2-D finite element model was developed which closely approximated the flexbeam geometry, boundary conditions, and loading. The analysis results from two geometrically nonlinear finite element codes, ANSYS and ABAQUS, are presented and compared. Strain energy release rates (G) associated with simulated delamination growth in the flexbeams are presented from both codes. These results compare well with each other and suggest that the initial delamination growth from the tip of the ply-drop toward the thick region of the flexbeam is strongly mode II. The peak calculated G values were used with material characterization data to calculate fatigue life curves for comparison with test data. A curve relating maximum surface strain to number of loading cycles at delamination onset compared well with the test results.
Gravitational-Wave and Neutrino Signals from Core-Collapse Supernovae with QCD Phase Transition
NASA Astrophysics Data System (ADS)
Zha, Shuai; Leung, Shing Chi; Lin, Lap Ming; Chu, Ming-Chung
Core-collapse supernovae (CCSNe) mark the catastrophic death of massive stars. We simulate CCSNe with a hybrid equation of state (EOS) containing a QCD (quantum chromodynamics) phase transition. The hybrid EOS incorporates the pure hadronic HShen EOS and the MIT Bag Model, with a Gibbs construction. Our two-dimensional hydrodynamics code includes a fifth-order shock-capturing WENO scheme and models neutrino transport with the isotropic diffusion source approximation (IDSA). As the proto-neutron-star accretes matter and the core enters the mixed phase, a second collapse takes place due to softening of the EOS. We calculate the gravitational-wave (GW) and neutrino signals for this kind of CCSN model. Future detection of these signals from CCSNe may help to constrain this scenario and the hybrid EOS.
Rethinking resources and hybridity
NASA Astrophysics Data System (ADS)
Gonsalves, Allison J.; Seiler, Gale; Salter, Dana E.
2011-06-01
This review explores Alfred Schademan's "What does playing cards have to do with science? A resource-rich view of African American young men" by examining how he uses two key concepts—hybridity and resources—to propose an approach to science education that counters enduring deficit notions associated with this population. Our response to Schademan's work expands upon his definition of hybridity and its purpose in the science classroom and highlights the tensions inherent in the appropriation of student resources in classroom spaces. This conversation points also to the need for research analyses and pedagogical approaches that simultaneously valorize student resources, allow student opportunities to learn the dominant codes, and provide teacher and student opportunities to transform them. Carol Lee's notion of "cultural modeling" is discussed as a possible framing device to facilitate this kind of research.
Hybridizing Gravitational Waveforms of Inspiralling Binary Neutron Star Systems
NASA Astrophysics Data System (ADS)
Cullen, Torrey; LIGO Collaboration
2016-03-01
Gravitational waves are ripples in space and time, predicted by Albert Einstein, that are produced by astrophysical systems such as binary neutron stars. These are key targets for the Laser Interferometer Gravitational-Wave Observatory (LIGO), which uses template waveforms to find weak signals. The simplified template models are known to break down at high frequency, so I wrote code that constructs hybrid waveforms from numerical simulations to accurately cover a large range of frequencies. These hybrid waveforms use Post-Newtonian template models at low frequencies and numerical data from simulations at high frequencies. They are constructed by reading in existing Post-Newtonian models with the same masses as the simulated stars, reading in the numerical data from simulations, and finding the ideal frequency and alignment at which to "stitch" these waveforms together.
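The stitching step can be sketched as follows: align the analytic and numerical waveforms over a matching window by a time/phase shift, then blend them with a smooth transition function. The chirping signals and window below are synthetic placeholders, not LIGO or Post-Newtonian template code.

```python
# Toy waveform hybridization: align a low-frequency analytic model with
# high-frequency "numerical" data over a matching window, then blend.
import numpy as np

t = np.linspace(0.0, 10.0, 4000)
phase = 2 * np.pi * (5.0 * t + 0.3 * t**2)        # chirping phase shared by both models
analytic = np.cos(phase)                           # stand-in for the Post-Newtonian waveform
numerical = np.cos(phase + 0.7)                    # stand-in NR data with an unknown phase offset

# 1) find the phase shift that best aligns the analytic model to the data over a window
window = (t > 4.0) & (t < 6.0)
shifts = np.linspace(0.0, 2 * np.pi, 2000)
mismatch = [np.sum((np.cos(phase[window] + s) - numerical[window])**2) for s in shifts]
best = shifts[int(np.argmin(mismatch))]
aligned = np.cos(phase + best)

# 2) blend with a smooth ramp across the window (analytic before, numerical after)
ramp = np.clip((t - 4.0) / 2.0, 0.0, 1.0)
hybrid = (1.0 - ramp) * aligned + ramp * numerical
print(f"recovered phase offset: {best:.3f} rad (true value 0.700); hybrid length {hybrid.size}")
```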
NASA Technical Reports Server (NTRS)
Habbal, Shadia R.; Gurman, Joseph (Technical Monitor)
2003-01-01
Investigations of the physical processes responsible for the acceleration of the solar wind were pursued with the development of two new solar wind codes: a hybrid code and a 2-D MHD code. Hybrid simulations were performed to investigate the interaction between ions and parallel propagating low frequency ion cyclotron waves in a homogeneous plasma. In a low-beta plasma such as the solar wind plasma in the inner corona, the proton thermal speed is much smaller than the Alfven speed. Vlasov linear theory predicts that protons are not in resonance with low frequency ion cyclotron waves. However, non-linear effects make it possible for these waves to strongly heat and accelerate protons. This study has important implications for studies of the corona and the solar wind. Low frequency ion cyclotron waves or Alfven waves are commonly observed in the solar wind. Until now, it has been believed that these waves are not able to heat the solar wind plasma unless cascading processes transfer their energy to the high-frequency part of the spectrum. However, this study shows that these waves may directly heat and accelerate protons non-linearly. This process may play an important role in coronal heating and solar wind acceleration, at least in some parameter space.
A study of power cycles using supercritical carbon dioxide as the working fluid
NASA Astrophysics Data System (ADS)
Schroder, Andrew Urban
A real fluid heat engine power cycle analysis code has been developed for analyzing the zero dimensional performance of a general recuperated, recompression, precompression supercritical carbon dioxide power cycle with reheat and a unique shaft configuration. With the proposed shaft configuration, several smaller compressor-turbine pairs could be placed inside of a pressure vessel in order to avoid high speed, high pressure rotating seals. The small compressor-turbine pairs would share some resemblance with a turbocharger assembly. Variation in fluid properties within the heat exchangers is taken into account by discretizing zero dimensional heat exchangers. The cycle analysis code allows for multiple reheat stages, as well as an option for the main compressor to be powered by a dedicated turbine or an electrical motor. Variation in performance with respect to design heat exchanger pressure drops and minimum temperature differences, precompressor pressure ratio, main compressor pressure ratio, recompression mass fraction, main compressor inlet pressure, and low temperature recuperator mass fraction have been explored throughout a range of each design parameter. Turbomachinery isentropic efficiencies are implemented and the sensitivity of the cycle performance and the optimal design parameters is explored. Sensitivity of the cycle performance and optimal design parameters is studied with respect to the minimum heat rejection temperature and the maximum heat addition temperature. A hybrid stochastic and gradient based optimization technique has been used to optimize critical design parameters for maximum engine thermal efficiency. A parallel design exploration mode was also developed in order to rapidly conduct the parameter sweeps in this design space exploration. A cycle thermal efficiency of 49.6% is predicted with a 320K [47°C] minimum temperature and 923K [650°C] maximum temperature. The real fluid heat engine power cycle analysis code was expanded to study a theoretical recuperated Lenoir cycle using supercritical carbon dioxide as the working fluid. The real fluid cycle analysis code was also enhanced to study a combined cycle engine cascade. Two engine cascade configurations were studied. The first consisted of a traditional open loop gas turbine, coupled with a series of recuperated, recompression, precompression supercritical carbon dioxide power cycles, with a predicted combined cycle thermal efficiency of 65.0% using a peak temperature of 1,890K [1,617°C]. The second configuration consisted of a hybrid natural gas powered solid oxide fuel cell and gas turbine, coupled with a series of recuperated, recompression, precompression supercritical carbon dioxide power cycles, with a predicted combined cycle thermal efficiency of 73.1%. Both configurations had a minimum temperature of 306K [33°C]. The hybrid stochastic and gradient based optimization technique was used to optimize all engine design parameters for each engine in the cascade such that the entire engine cascade achieved the maximum thermal efficiency. The parallel design exploration mode was also utilized in order to understand the impact of different design parameters on the overall engine cascade thermal efficiency. Two dimensional conjugate heat transfer (CHT) numerical simulations of a straight, equal height channel heat exchanger using supercritical carbon dioxide were conducted at various Reynolds numbers and channel lengths.
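The discretized zero-dimensional heat-exchanger idea can be sketched with a counterflow recuperator split into segments, checking the minimum temperature difference along its length. Constant heat capacities are assumed below purely for brevity; real use would evaluate sCO2 properties per segment (e.g. from a fluid-property library), which is precisely why the dissertation discretizes the exchangers.

```python
# Discretized counterflow recuperator with constant heat capacities (a deliberate
# simplification; strongly varying sCO2 properties are the real motivation for
# segmenting the exchanger, so a property library would be called per segment).
import numpy as np

n_seg = 50
m_hot, cp_hot = 1.0, 1.15e3      # kg/s, J/(kg K)  (assumed)
m_cold, cp_cold = 1.0, 1.40e3
T_hot_in, T_cold_in = 700.0, 350.0            # K
effectiveness = 0.95

C_hot, C_cold = m_hot * cp_hot, m_cold * cp_cold
q_max = min(C_hot, C_cold) * (T_hot_in - T_cold_in)
q_total = effectiveness * q_max
dq = q_total / n_seg

# march from the hot-inlet / cold-outlet end toward the hot-outlet / cold-inlet end
T_hot = np.empty(n_seg + 1)
T_cold = np.empty(n_seg + 1)
T_hot[0] = T_hot_in
T_cold[0] = T_cold_in + q_total / C_cold      # cold-side outlet temperature
for i in range(n_seg):
    T_hot[i + 1] = T_hot[i] - dq / C_hot
    T_cold[i + 1] = T_cold[i] - dq / C_cold

dT = T_hot - T_cold
print(f"duty = {q_total/1e3:.1f} kW, minimum temperature difference = {dT.min():.1f} K")
```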
Scheduling Operations for Massive Heterogeneous Clusters
NASA Technical Reports Server (NTRS)
Humphrey, John; Spagnoli, Kyle
2013-01-01
High-performance computing (HPC) programming has become increasingly difficult with the advent of hybrid supercomputers consisting of multicore CPUs and accelerator boards such as the GPU. Manual tuning of software to achieve high performance on this type of machine has been performed by programmers. This is needlessly difficult and prone to being invalidated by new hardware, new software, or changes in the underlying code. A system was developed for task-based representation of programs, which when coupled with a scheduler and runtime system, allows for many benefits, including higher performance and utilization of computational resources, easier programming and porting, and adaptations of code during runtime. The system consists of a method of representing computer algorithms as a series of data-dependent tasks. The series forms a graph, which can be scheduled for execution on many nodes of a supercomputer efficiently by a computer algorithm. The schedule is executed by a dispatch component, which is tailored to understand all of the hardware types that may be available within the system. The scheduler is informed by a cluster mapping tool, which generates a topology of available resources and their strengths and communication costs. Software is decoupled from its hardware, which aids in porting to future architectures. A computer algorithm schedules all operations, which for systems of high complexity (i.e., most NASA codes), cannot be performed optimally by a human. The system aids in reducing repetitive code, such as communication code, and aids in the reduction of redundant code across projects. It adds new features to code automatically, such as recovering from a lost node or the ability to modify the code while running. In this project, the innovators at the time of this reporting intend to develop two distinct technologies that build upon each other and both of which serve as building blocks for more efficient HPC usage. First is the scheduling and dynamic execution framework, and the second is scalable linear algebra libraries that are built directly on the former.
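The core idea, representing the program as a graph of data-dependent tasks and letting an algorithm assign them to heterogeneous resources, can be sketched with a greedy earliest-finish-time scheduler; the task graph, runtimes and two resource types below are invented for illustration and are not the project's scheduler.

```python
# Greedy list scheduling of a task DAG onto heterogeneous resources
# (invented toy graph and runtimes; not the actual scheduler described above).

# task -> (dependencies, runtime on CPU, runtime on GPU), all hypothetical
tasks = {
    "load":   ([],                 2.0, 2.0),
    "fft":    (["load"],           8.0, 1.0),
    "filter": (["load"],           4.0, 0.5),
    "solve":  (["fft", "filter"], 10.0, 2.5),
    "write":  (["solve"],          1.0, 1.0),
}
resources = {"cpu0": 0.0, "cpu1": 0.0, "gpu0": 0.0}   # next free time per resource

finish, schedule = {}, []
remaining = dict(tasks)
while remaining:
    # pick any task whose dependencies have all finished (topological readiness)
    name = next(t for t, (deps, _, _) in remaining.items() if all(d in finish for d in deps))
    deps, t_cpu, t_gpu = remaining.pop(name)
    ready = max((finish[d] for d in deps), default=0.0)
    # choose the resource that gives the earliest finish time for this task
    best = None
    for res, free in resources.items():
        run = t_gpu if res.startswith("gpu") else t_cpu
        f = max(ready, free) + run
        if best is None or f < best[1]:
            best = (res, f)
    res, f = best
    resources[res] = f
    finish[name] = f
    schedule.append((name, res, f - (t_gpu if res.startswith("gpu") else t_cpu), f))

for name, res, start, end in schedule:
    print(f"{name:>6} on {res}: {start:.1f} -> {end:.1f}")
print("makespan:", max(finish.values()))
```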
Savel'eva, I V; Khatsukov, K X; Savel'eva, E I; Moskvitina, S I; Kovalev, D A; Savel'ev, V N; Kulichenko, A N; Antonenko, A D; Babenyshev, B V
2015-01-01
The aim was to improve the laboratory diagnostics of cholera, taking into account the appearance of hybrid variants of the cholera vibrio El Tor biovar in the 1990s. Phenotypic and molecular-genetic properties of typical toxigenic (151 strains) and hybrid (102 strains) variants of El Tor biovar cholera vibrios, isolated in the Caucasus in 1970-1990 and 1993-1998, respectively, were studied. DNA fragments of toxigenicity genes characteristic of the El Tor or classical biovar were detected using the reagent kit "Genes of Vibrio cholerae variant ctxB-rstR-rstC, REF" developed by us. The reagent kit "Genes of V. cholerae variant ctxB-rstR-rstC, REF" is proposed for use in the laboratory diagnostics of cholera when studying material from humans or environmental objects, and for identification of V. cholerae O1 at the genome level by PCR analysis as a necessary addition to the classical scheme of bacteriological analysis. Laboratory diagnostics of cholera caused by genetically altered (hybrid) variants of the cholera vibrio El Tor biovar is based on a combined study of material from humans and environmental objects by routine bacteriological and PCR methods, aimed at detecting gene DNA fragments that determine the biovar (classical or El Tor), identifying V. cholerae O1 strains with differentiation of El Tor vibrios into typical and altered, and determining the enterotoxin produced by a specific cholera vibrio strain (by the presence of the ctxB(El) or ctxB(Cl) gene DNA fragment, coding for the biosynthesis of CT-2 or CT-1, respectively).
Post-acceleration of laser driven protons with a compact high field linac
NASA Astrophysics Data System (ADS)
Sinigardi, Stefano; Londrillo, Pasquale; Rossi, Francesco; Turchetti, Giorgio; Bolton, Paul R.
2013-05-01
We present a start-to-end 3D numerical simulation of a hybrid scheme for the acceleration of protons. The scheme is based on a first-stage laser acceleration, followed by a transport line with a solenoid or a multiplet of quadrupoles, and then a post-acceleration section in a compact linac. Our simulations show that from a laser-accelerated proton bunch with energy selection at ~30 MeV, it is possible to obtain a high quality monochromatic beam of 60 MeV with intensity at the threshold of interest for medical use. In present-day experiments using solid targets, the TNSA mechanism describes accelerated bunches with an exponential energy spectrum up to a cut-off value typically below ~60 MeV and a wide angular distribution. At the cut-off energy, the number of protons to be collimated and post-accelerated in a hybrid scheme is still too low. We investigate laser-plasma acceleration to improve the quality and number of the injected protons at ~30 MeV in order to ensure efficient post-acceleration in the hybrid scheme. The results are obtained with 3D PIC simulations using a code in which optical acceleration with over-dense targets, transport and post-acceleration in a linac can all be investigated in an integrated framework. The high intensity experiments at Nara are taken as reference benchmarks for our virtual laboratory. If experimentally confirmed, a hybrid scheme could be the core of a medium-sized infrastructure for medical research, capable of producing protons for therapy and x-rays for diagnosis, which complements the development of all-optical systems.
NASA Astrophysics Data System (ADS)
Petoussi-Henss, Nina; Becker, Janine; Greiter, Matthias; Schlattl, Helmut; Zankl, Maria; Hoeschen, Christoph
2014-03-01
In radiography there is generally a conflict between the best image quality and the lowest possible patient dose. A proven method of dosimetry is the simulation of radiation transport in virtual human models (i.e. phantoms). However, while the resolution of these voxel models is adequate for most dosimetric purposes, they cannot provide the fine organ structures necessary for the assessment of imaging quality. The aim of this work is to develop hybrid/dual-lattice voxel models (also called phantoms) as well as simulation methods by which patient dose and image quality for typical radiographic procedures can be determined. The results will provide a basis to investigate, by means of simulations, the relationships between patient dose and image quality for various imaging parameters and to develop methods for their optimization. A hybrid model, based on NURBS (Non-Uniform Rational B-Spline) and PM (Polygon Mesh) surfaces, was constructed from an existing voxel model of a female patient. The organs of the hybrid model can then be scaled and deformed in a non-uniform way, i.e. organ by organ; they can thus be adapted to patient characteristics without losing their anatomical realism. Furthermore, the left lobe of the lung was substituted by a high-resolution lung voxel model, resulting in a dual-lattice geometry model. "Dual lattice" means in this context the combination of voxel models with different resolutions. Monte Carlo simulations of radiographic imaging were performed with the code EGS4nrc, modified so as to perform dual-lattice transport. Results are presented for a thorax examination.
Sena-Esteves, Miguel; Saeki, Yoshinaga; Camp, Sara M.; Chiocca, E. Antonio; Breakefield, Xandra O.
1999-01-01
We report here on the development and characterization of a novel herpes simplex virus type 1 (HSV-1) amplicon-based vector system which takes advantage of the host range and retention properties of HSV–Epstein-Barr virus (EBV) hybrid amplicons to efficiently convert cells to retrovirus vector producer cells after single-step transduction. The retrovirus genes gag-pol and env (GPE) and retroviral vector sequences were modified to minimize sequence overlap and cloned into an HSV-EBV hybrid amplicon. Retrovirus expression cassettes were used to generate the HSV-EBV-retrovirus hybrid vectors, HERE and HERA, which code for the ecotropic and the amphotropic envelopes, respectively. Retrovirus vector sequences encoding lacZ were cloned downstream from the GPE expression unit. Transfection of 293T/17 cells with amplicon plasmids yielded retrovirus titers between 106 and 107 transducing units/ml, while infection of the same cells with amplicon vectors generated maximum titers 1 order of magnitude lower. Retrovirus titers were dependent on the extent of transduction by amplicon vectors for the same cell line, but different cell lines displayed varying capacities to produce retrovirus vectors even at the same transduction efficiencies. Infection of human and dog primary gliomas with this system resulted in the production of retrovirus vectors for more than 1 week and the long-term retention and increase in transgene activity over time in these cell populations. Although the efficiency of this system still has to be determined in vivo, many applications are foreseeable for this approach to gene delivery. PMID:10559361
Sundararajan, Vignesh; Civetta, Alberto
2011-01-01
Male sex genes have shown a pattern of rapid interspecies divergence at both the coding and the gene expression level. A common outcome of crosses between closely related species is hybrid male sterility. Phenotypic and genetic studies in Drosophila sterile hybrid males have shown that spermatogenesis arrest is postmeiotic with few exceptions, and that most misregulated genes are involved in late stages of spermatogenesis. Comparative studies of gene regulation in sterile hybrids and parental species have mainly used microarrays, providing a whole-genome representation of regulatory problems in sterile hybrids. Real-time PCR studies can reject or reveal differences not observed in microarray assays. Moreover, differences in gene expression between samples can be dependent on the source of RNA (e.g., whole body vs. tissue). Here we survey expression in D. simulans, D. mauritiana and both intra- and interspecies hybrids using a real-time PCR approach for eight genes expressed at the four main stages of sperm development. We find that all genes show a trend toward under-expression in the testes of sterile hybrids relative to parental species, with only the two proliferation genes (bam and bgcn) and the two meiotic class genes (can and sa) showing significant down regulation. The observed pattern of down regulation for the genes tested cannot fully explain hybrid male sterility. We discuss the down regulation of spermatogenesis genes in hybrids between closely related species within the context of the rapid divergence experienced by the male genome, hybrid sterility and possible allometric changes due to subtle testes-specific developmental abnormalities.
A Numerical Study of the Effects of Curvature and Convergence on Dilution Jet Mixing
NASA Technical Reports Server (NTRS)
Holdeman, J. D.; Reynolds, R.; White, C.
1987-01-01
An analytical program was conducted to assemble and assess a three-dimensional turbulent viscous flow computer code capable of analyzing the flow field in the transition liners of small gas turbine engines. This code is of the TEACH type with hybrid numerics, and uses the power law and SIMPLER algorithms, an orthogonal curvilinear coordinate system, and an algebraic Reynolds stress turbulence model. The assessments performed in this study, consistent with results in the literature, showed that in its present form this code is capable of predicting trends and qualitative results. The assembled code was used to perform a numerical experiment to investigate the effects of curvature and convergence in the transition liner on the mixing of single and opposed rows of cool dilution jets injected into a hot mainstream flow.
A hybrid LBG/lattice vector quantizer for high quality image coding
NASA Technical Reports Server (NTRS)
Ramamoorthy, V.; Sayood, K.; Arikan, E. (Editor)
1991-01-01
It is well known that a vector quantizer is an efficient coder offering a good trade-off between quantization distortion and bit rate. The performance of a vector quantizer asymptotically approaches the optimum bound with increasing dimensionality. A vector-quantized image suffers from the following types of degradation: (1) edge regions in the coded image contain staircase effects, (2) quasi-constant or slowly varying regions suffer from contouring effects, and (3) textured regions lose detail and suffer from granular noise. All three of these degradations are due to the finite size of the codebook, the distortion measures used in the design, and the finite training procedure involved in the construction of the codebook. In this paper, we present an adaptive technique which attempts to ameliorate the edge distortion and contouring effects.
The high-β{sub N} hybrid scenario for ITER and FNSF steady-state missions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turco, F.; Petty, C. C.; Luce, T. C.
2015-05-15
New experiments on DIII-D have demonstrated the steady-state potential of the hybrid scenario, with 1 MA of plasma current driven fully non-inductively and β{sub N} up to 3.7 sustained for ∼3 s (∼1.5 current diffusion time, τ{sub R}, in DIII-D), providing the basis for an attractive option for steady-state operation in ITER and FNSF. Excellent confinement is achieved (H{sub 98y2} ∼ 1.6) without performance limiting tearing modes. The hybrid regime overcomes the need for off-axis current drive efficiency, taking advantage of poloidal magnetic flux pumping that is believed to be the result of a saturated 3/2 tearing mode. This allows for efficient current drive close to the axis, without deleterious sawtooth instabilities. In these experiments, the edge surface loop voltage is driven down to zero for >1 τ{sub R} when the poloidal β is increased above 1.9 at a plasma current of 1.0 MA and the ECH power is increased to 3.2 MW. Stationary operation of hybrid plasmas with all on-axis current drive is sustained at pressures slightly above the ideal no-wall limit, while the calculated ideal with-wall MHD limit is β{sub N} ∼ 4–4.5. Off-axis Neutral Beam Injection (NBI) power has been used to broaden the pressure and current profiles in this scenario, seeking to take advantage of higher predicted kink stability limits and lower values of the tearing stability index Δ′, as calculated by the DCON and PEST3 codes. Results based on measured profiles predict ideal limits at β{sub N} > 4.5, 10% higher than the cases with on-axis NBI. A 0-D model, based on the present confinement, β{sub N} and shape values of the DIII-D hybrid scenario, shows that these plasmas are consistent with the ITER 9 MA, Q = 5 mission and the FNSF 6.7 MA scenario with Q = 3.5. With collisionality and edge safety factor values comparable to those envisioned for ITER and FNSF, the high-β{sub N} hybrid represents an attractive high performance option for the steady-state missions of these devices.
The gene coding for the B cell surface protein CD19 is localized on human chromosome 16p11.
Stapleton, P; Kozmik, Z; Weith, A; Busslinger, M
1995-02-01
The CD19 gene codes for one of the earliest markers of the human B cell lineage and is a target for the B lymphoid-specific transcription factor BSAP (Pax-5). The transmembrane protein CD19 has been implicated in controlling proliferation of mature B lymphocytes by modulating signal transduction through the antigen receptor. In this study, we have employed Southern blot and fluorescence in situ hybridization analyses to localize the CD19 gene to human chromosome 16p11.
Inverse Heat Conduction Methods in the CHAR Code for Aerothermal Flight Data Reconstruction
NASA Technical Reports Server (NTRS)
Oliver, A. Brandon; Amar, Adam J.
2016-01-01
Reconstruction of flight aerothermal environments often requires the solution of an inverse heat transfer problem, which is an ill-posed problem of determining boundary conditions from discrete measurements in the interior of the domain. This paper will present the algorithms implemented in the CHAR code for use in reconstruction of EFT-1 flight data and future testing activities. Implementation details will be discussed, and alternative hybrid-methods that are permitted by the implementation will be described. Results will be presented for a number of problems.
Geometrical-optics code for computing the optical properties of large dielectric spheres.
Zhou, Xiaobing; Li, Shusun; Stamnes, Knut
2003-07-20
Absorption of electromagnetic radiation by absorptive dielectric spheres such as snow grains in the near-infrared part of the solar spectrum cannot be neglected when radiative properties of snow are computed. Thus a new, to our knowledge, geometrical-optics code is developed to compute scattering and absorption cross sections of large dielectric particles of arbitrary complex refractive index. The number of internal reflections and transmissions is truncated on the basis of the ratio of the irradiance incident at the nth interface to the irradiance incident at the first interface for a specific optical ray; the truncation number is thus a function of the angle of incidence. Phase functions for both near- and far-field absorption and scattering of electromagnetic radiation are calculated directly at any desired scattering angle by using a hybrid algorithm based on the bisection and Newton-Raphson methods. With these methods the absorption and scattering properties of a large sphere can be calculated for any wavelength from the ultraviolet to the microwave regions. Assuming that large snow melt clusters (of order 1 cm), observed ubiquitously in the snow cover during summer, can be characterized as spheres, one may compute absorption and scattering efficiencies and the scattering phase function on the basis of this geometrical-optics method. A geometrical-optics method for spheres (GOMsphere) code is developed and tested against Wiscombe's Mie scattering code (MIE0) and a Monte Carlo code for a range of size parameters. GOMsphere can be combined with MIE0 to calculate the single-scattering properties of dielectric spheres of any size.
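The hybrid bisection/Newton-Raphson root finding mentioned above can be illustrated with a small safeguarded-Newton sketch: a Newton step is taken when it stays inside the bracket, and the method falls back to bisection otherwise. The ray-deflection equation in the usage example (a singly refracted ray through a sphere of index n, with no internal reflections) is an illustrative stand-in, not the exact GOMsphere formulation.

# Minimal sketch of a bisection/Newton-Raphson hybrid root finder.
import math

def hybrid_newton_bisect(f, dfdx, a, b, tol=1e-12, max_iter=100):
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("root must be bracketed")
    x = 0.5 * (a + b)
    for _ in range(max_iter):
        fx, dfx = f(x), dfdx(x)
        # Try a Newton step; fall back to bisection if it leaves the bracket
        # or if the derivative is too small to trust.
        x_new = x - fx / dfx if dfx != 0.0 else None
        if x_new is None or not (a < x_new < b):
            x_new = 0.5 * (a + b)
        f_new = f(x_new)
        if abs(f_new) < tol or abs(x_new - x) < tol:
            return x_new
        # Shrink the bracket so bisection stays valid.
        if fa * f_new < 0:
            b, fb = x_new, f_new
        else:
            a, fa = x_new, f_new
        x = x_new
    return x

# Example: find the incidence angle (radians) at which a ray refracted at entry
# and exit of a sphere of refractive index n = 1.31 is deflected by 0.4 rad.
n = 1.31
target = 0.4
f = lambda th: 2.0 * (th - math.asin(math.sin(th) / n)) - target
df = lambda th: 2.0 * (1.0 - math.cos(th) / (n * math.sqrt(1.0 - (math.sin(th) / n) ** 2)))
print(hybrid_newton_bisect(f, df, 1e-6, math.pi / 2 - 1e-6))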
Low-speed Aerodynamic Investigations of a Hybrid Wing Body Configuration
NASA Technical Reports Server (NTRS)
Vicroy, Dan D.; Gatlin, Gregory M.; Jenkins, Luther N.; Murphy, Patrick C.; Carter, Melissa B.
2014-01-01
Two low-speed static wind tunnel tests and a water tunnel static and dynamic forced-motion test have been conducted on a hybrid wing-body (HWB) twinjet configuration. These tests, in addition to computational fluid dynamics (CFD) analysis, have provided a comprehensive dataset of the low-speed aerodynamic characteristics of this nonproprietary configuration. In addition to force and moment measurements, the tests included surface pressures, flow visualization, and off-body particle image velocimetry measurements. This paper will summarize the results of these tests and highlight the data that is available for code comparison or additional analysis.
Zhuang, Yongbin; Tripp, Erin A
2017-01-17
New combinations of divergent genomes can give rise to novel genetic functions in resulting hybrid progeny. Such functions may yield opportunities for ecological divergence, contributing ultimately to reproductive isolation and evolutionary longevity of nascent hybrid lineages. In plants, the degree to which transgressive genotypes contribute to floral novelty remains a question of key interest. Here, we generated an F1 hybrid plant between the red-flowered Ruellia elegans and the yellow-flowered R. speciosa. RNA-seq technology was used to explore differential gene expression between the hybrid and its two parents, with emphasis on genetic elements involved in the production of floral anthocyanin pigments. The hybrid was purple flowered and produced novel floral delphinidin pigments not manufactured by either parent. We found that nearly a fifth of all 86,475 unigenes expressed were unique to the hybrid. The majority of hybrid unigenes (80.97%) showed a pattern of complete dominance to one parent or the other, although this ratio was uneven, suggesting an asymmetrical influence of parental genomes on the progeny transcriptome. However, 8.87% of all transcripts within the hybrid were expressed at significantly higher or lower mean levels than observed for either parent. A total of 28 unigenes coding putatively for eight core enzymes in the anthocyanin pathway were recovered, along with three candidate MYBs involved in anthocyanin regulation. Our results suggest that models of gene evolution that explain phenotypic novelty and hybrid establishment in plants may need to include transgressive effects. Additionally, our results lend insight into the potential for floral novelty that derives from unions of divergent genomes. These findings serve as a starting point to further investigate molecular mechanisms involved in flower color transitions in Ruellia.
Mesoscopic-microscopic spatial stochastic simulation with automatic system partitioning.
Hellander, Stefan; Hellander, Andreas; Petzold, Linda
2017-12-21
The reaction-diffusion master equation (RDME) is a model that allows for efficient on-lattice simulation of spatially resolved stochastic chemical kinetics. Compared to off-lattice hard-sphere simulations with Brownian dynamics or Green's function reaction dynamics, the RDME can be orders of magnitude faster if the lattice spacing can be chosen coarse enough. However, strongly diffusion-controlled reactions mandate a very fine mesh resolution for acceptable accuracy. It is common that reactions in the same model differ in their degree of diffusion control and therefore require different degrees of mesh resolution. This renders mesoscopic simulation inefficient for systems with multiscale properties. Mesoscopic-microscopic hybrid methods address this problem by resolving the most challenging reactions with a microscale, off-lattice simulation. However, all methods to date require manual partitioning of a system, effectively limiting their usefulness as "black-box" simulation codes. In this paper, we propose a hybrid simulation algorithm with automatic system partitioning based on indirect a priori error estimates. We demonstrate the accuracy and efficiency of the method on models of diffusion-controlled networks in 3D.
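A minimal sketch of the automatic-partitioning idea follows, assuming a simple a-priori rule: a bimolecular reaction is routed to the microscale (off-lattice) solver when a dimensionless measure of diffusion control at the chosen lattice spacing exceeds a threshold. The criterion ka/(D*h) and the threshold value are illustrative stand-ins for the error estimates used in the cited work.

# Sketch of automatic mesoscopic/microscopic partitioning of reactions.
def partition_reactions(reactions, h, threshold=1.0):
    """reactions: iterable of (name, ka, D) with ka an association rate
    [um^3/s] and D the summed diffusion constant of the reactants [um^2/s];
    h is the lattice spacing [um]."""
    micro, meso = [], []
    for name, ka, D in reactions:
        if ka / (D * h) > threshold:
            micro.append(name)   # resolve with off-lattice particle tracking
        else:
            meso.append(name)    # keep on the RDME lattice
    return micro, meso

# Example: at h = 0.05 um the fast association is handled microscopically.
reactions = [("A+B->C", 5.0, 1.0), ("C+D->E", 0.01, 10.0)]
print(partition_reactions(reactions, h=0.05))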
Hybrid simulations of weakly collisional plasmas
NASA Astrophysics Data System (ADS)
Xia, Qian; Reville, Brian; Tzoufras, Michail
2016-10-01
Laser-produced plasma experiments can be exploited to investigate phenomena of astrophysical relevance. The high densities and velocities that can be generated in the laboratory provide ideal conditions to investigate weakly collisional or collisionless plasma shock physics. In addition, the high temperatures permit magnetic and kinetic Reynolds numbers that are difficult to achieve in other plasma experiments, opening the possibility of studying plasma dynamo. Many of these experiments are based on a classic plasma physics problem, namely the interpenetration of two plasma flows. To investigate this phenomenon, we are constructing a novel multi-dimensional hybrid numerical scheme that solves the ion distribution kinetically via a Vlasov-Fokker-Planck equation, with electrons providing a charge-neutralizing fluid. This allows us to follow the evolution on hydrodynamic timescales, while permitting the inclusion of collisionless effects on small scales. It can also be used to study increasing collisional effects due to steep gradients and weakly anisotropic velocity distributions. We present some preliminary validation tests for the code, demonstrating its ability to accurately model key processes that are relevant to laboratory and astrophysical plasmas.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hollaway, W.R.
1991-08-01
If there is to be a next generation of nuclear power in the United States, then the four fundamental obstacles confronting nuclear power technology must be overcome: safety, cost, waste management, and proliferation resistance. The Combined Hybrid System (CHS) is proposed as a possible solution to the problems preventing a vigorous resurgence of nuclear power. The CHS combines Thermal Reactors (for operability, safety, and cost) and Integral Fast Reactors (for waste treatment and actinide burning) in a symbiotic large-scale system. The CHS addresses the safety and cost issues through the use of advanced reactor designs, the waste management issue through the use of actinide burning, and the proliferation resistance issue through the use of an integral fuel cycle with co-located components. There are nine major components in the Combined Hybrid System linked by nineteen nuclear material mass flow streams. A computer code, CHASM, is used to analyze the mass flow rates of the CHS and the reactor support ratio (the ratio of thermal reactor to IFR capacity) of the system. The primary advantages of the CHS are its essentially actinide-free high-level radioactive waste, plus improved reactor safety, uranium utilization, and widening of the option base. The primary disadvantages of the CHS are the large capacity of IFRs required (approximately one MW{sub e} of IFR capacity for every three MW{sub e} of Thermal Reactor capacity) and the novel radioactive waste streams produced by the CHS. The capability of the IFR to burn pure transuranic fuel, a primary assumption of this study, has yet to be proven. The Combined Hybrid System represents an attractive option for future nuclear power development; disposal of the essentially actinide-free radioactive waste produced by the CHS provides an excellent alternative to the disposal of intact actinide-bearing Light Water Reactor spent fuel (reducing the toxicity-based lifetime of the waste from roughly 360,000 years to about 510 years).
A tandem mirror plasma source for a hybrid plume plasma propulsion concept
NASA Technical Reports Server (NTRS)
Yang, T. F.; Miller, R. H.; Wenzel, K. W.; Krueger, W. A.; Chang, F. R.
1985-01-01
This paper describes a tandem mirror magnetic plasma confinement device to be considered as a hot plasma source for the hybrid plume rocket concept. The hot plasma from this device is injected into an exhaust duct, where it interacts with an annular layer of hypersonic neutral gas. Such a device can be used to study the dynamics of the hybrid plume and to experimentally verify the numerical predictions obtained with computer codes. The basic system design is also geared toward being lightweight and compact, as well as having high power density (i.e., several kW/sq cm) at the exhaust. This feature is aimed at the feasibility of 'space testing'. The plasma is heated by microwaves. A 50 percent heating efficiency can be obtained by using two half-circle antennas. The preliminary Monte Carlo modeling of test particles reported here indicates that interaction does take place in the exhaust duct. Neutrals gain energy from the ions, which confirms the hybrid plume concept.
NASA Astrophysics Data System (ADS)
Mann, Stephen
2009-10-01
Understanding how chemically derived processes control the construction and organization of matter across extended and multiple length scales is of growing interest in many areas of materials research. Here we review present equilibrium and non-equilibrium self-assembly approaches to the synthetic construction of discrete hybrid (inorganic-organic) nano-objects and higher-level nanostructured networks. We examine a range of synthetic modalities under equilibrium conditions that give rise to integrative self-assembly (supramolecular wrapping, nanoscale incarceration and nanostructure templating) or higher-order self-assembly (programmed/directed aggregation). We contrast these strategies with processes of transformative self-assembly that use self-organizing media, reaction-diffusion systems and coupled mesophases to produce higher-level hybrid structures under non-equilibrium conditions. Key elements of the constructional codes associated with these processes are identified with regard to existing theoretical knowledge, and presented as a heuristic guideline for the rational design of hybrid nano-objects and nanomaterials.
Performance of hybrid ball bearings in oil and jet fuel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schrader, S.M.; Pfaffenberger, E.E.
1992-07-01
A 308-size hybrid ball bearing, with ceramic balls and steel rings, was tested using a diester oil and gas turbine fuel as lubricants at several speeds and loads. Heat generation data from this test work was then correlated with the heat generation model from a widely used computer code. The ability of this hybrid split inner ring bearing design to endure thrust reversals, which are expected in many turbine applications, was demonstrated. Finally, the bearing was successfully endurance tested in JP-10 fuel for 25 hours at 7560 N axial load and 36,000 rpm. This work has successfully demonstrated the technology necessary to use fuel-lubricated hybrid bearings in limited-life gas turbine engine applications such as missiles, drones, and other unmanned air vehicles (UAVs). In addition, it has provided guidance for use in designing such bearing systems. As a result, the benefits of removing the conventional oil lubricant system, i.e., design simplification and reduced maintenance, can be realized.
2010-01-01
Background The development of new microarray technologies makes custom long oligonucleotide arrays affordable for many experimental applications, notably gene expression analyses. Reliable results depend on probe design quality and selection. Probe design strategy should cope with the limited accuracy of de novo gene prediction programs and with annotation updating. We present a novel in silico procedure which addresses these issues and includes experimental screening, as an empirical approach is the best strategy to identify optimal probes in the in silico outcome. Findings We used four criteria for in silico probe selection: cross-hybridization, hairpin stability, probe location relative to the coding sequence end, and intron position. This latter criterion is critical when exon-intron gene structure predictions for intron-rich genes are inaccurate. For each coding sequence (CDS), we selected a sub-set of four probes. These probes were included in a test microarray, which was used to evaluate the hybridization behavior of each probe. The best probe for each CDS was selected according to three experimental criteria: signal-to-noise ratio, signal reproducibility, and representative signal intensities. This procedure was applied for the development of a gene expression Agilent platform for the filamentous fungus Podospora anserina and the selection of a single 60-mer probe for each of the 10,556 P. anserina CDS. Conclusions A reliable gene expression microarray version based on the Agilent 44K platform was developed with four spot replicates of each probe to increase the statistical significance of analysis. PMID:20565839
Dong, Haifeng; Meng, Xiangdan; Dai, Wenhao; Cao, Yu; Lu, Huiting; Zhou, Shufeng; Zhang, Xueji
2015-04-21
Herein, a highly sensitive and selective microRNA (miRNA) detection strategy using DNA-bio-bar-code amplification (BCA) and an Nb·BbvCI nicking enzyme-assisted strand cycle for exponential signal amplification was designed. The DNA-BCA system contains a locked nucleic acid (LNA) modified DNA probe for improving hybridization efficiency, while a signal-reporting molecular beacon (MB) with an endonuclease recognition site was designed for strand cycle amplification. In the presence of target miRNA, the oligonucleotide-functionalized magnetic nanoprobe (MNP-DNA) and gold nanoprobe (AuNP-DNA) bearing numerous reporter probes (RP) hybridize with the target miRNA to form a sandwich structure. After the sandwich structures were separated from the solution by the magnetic field, the RP were released at high temperature to recognize the MB and cleave the hairpin DNA, inducing dissociation of the RP. The dissociated RP then triggered the next strand cycle to produce exponential fluorescent signal amplification for miRNA detection. Under optimized conditions, the exponential signal amplification system shows a good linear range of 6 orders of magnitude (from 0.3 pM to 3 aM) with a limit of detection (LOD) down to 52.5 zM, while the sandwich structure gives the system high selectivity. Meanwhile, the feasibility of the proposed strategy for cellular miRNA detection was confirmed by analyzing miRNA-21 in HeLa lysates. Given its high performance for miRNA analysis, the strategy has promising applications in biological detection and clinical diagnosis.
JP3D compressed-domain watermarking of volumetric medical data sets
NASA Astrophysics Data System (ADS)
Ouled Zaid, Azza; Makhloufi, Achraf; Olivier, Christian
2010-01-01
Increasing transmission of medical data across multiple user systems raises concerns for medical image watermarking. Additionally, the use of volumetric images triggers the need for efficient compression techniques in picture archiving and communication systems (PACS) and telemedicine applications. This paper describes a hybrid data hiding/compression system adapted to volumetric medical imaging. The central contribution is to integrate blind watermarking, based on turbo trellis-coded quantization (TCQ), into the JP3D encoder. Results of our method applied to Magnetic Resonance (MR) and Computed Tomography (CT) medical images have shown that our watermarking scheme is robust to JP3D compression attacks and can provide a relatively high data embedding rate while keeping distortion relatively low.
WholeCellSimDB: a hybrid relational/HDF database for whole-cell model predictions.
Karr, Jonathan R; Phillips, Nolan C; Covert, Markus W
2014-01-01
Mechanistic 'whole-cell' models are needed to develop a complete understanding of cell physiology. However, extracting biological insights from whole-cell models requires running and analyzing large numbers of simulations. We developed WholeCellSimDB, a database for organizing whole-cell simulations. WholeCellSimDB was designed to enable researchers to search simulation metadata to identify simulations for further analysis, and quickly slice and aggregate simulation results data. In addition, WholeCellSimDB enables users to share simulations with the broader research community. The database uses a hybrid relational/hierarchical data format architecture to efficiently store and retrieve both simulation setup metadata and results data. WholeCellSimDB provides a graphical Web-based interface to search, browse, plot and export simulations; a JavaScript Object Notation (JSON) Web service to retrieve data for Web-based visualizations; a command-line interface to deposit simulations; and a Python API to retrieve data for advanced analysis. Overall, we believe WholeCellSimDB will help researchers use whole-cell models to advance basic biological science and bioengineering. Availability: http://www.wholecellsimdb.org. Source code repository: http://github.com/CovertLab/WholeCellSimDB. © The Author(s) 2014. Published by Oxford University Press.
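The hybrid relational/HDF architecture described above can be sketched as follows: searchable scalar metadata goes into a relational table while bulk results arrays go into HDF5, so a metadata query narrows the set of simulations and only the requested slice of results is read. The schema, field names, and file layout here are illustrative assumptions, not WholeCellSimDB's actual implementation or API.

# Minimal sketch of a hybrid relational (SQLite) / hierarchical (HDF5) store.
import sqlite3
import h5py
import numpy as np

def deposit(db_path, h5_path, sim_id, metadata, results):
    """metadata: dict of scalar fields; results: dict name -> 2D array (time x state)."""
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS simulations "
                "(sim_id TEXT PRIMARY KEY, organism TEXT, length_s REAL)")
    con.execute("INSERT OR REPLACE INTO simulations VALUES (?, ?, ?)",
                (sim_id, metadata["organism"], metadata["length_s"]))
    con.commit()
    con.close()
    with h5py.File(h5_path, "a") as f:
        grp = f.require_group(sim_id)
        for name, arr in results.items():
            if name in grp:
                del grp[name]
            grp.create_dataset(name, data=arr, compression="gzip")

def slice_results(db_path, h5_path, min_length_s, dataset, t0, t1):
    """Find matching simulations via SQL, then read only rows t0:t1 from HDF5."""
    con = sqlite3.connect(db_path)
    ids = [r[0] for r in con.execute(
        "SELECT sim_id FROM simulations WHERE length_s >= ?", (min_length_s,))]
    con.close()
    with h5py.File(h5_path, "r") as f:
        return {i: f[i][dataset][t0:t1, :] for i in ids}

deposit("sims.db", "sims.h5", "run-001",
        {"organism": "M. genitalium", "length_s": 30000.0},
        {"growth": np.random.rand(100, 3)})
print(slice_results("sims.db", "sims.h5", 10000.0, "growth", 10, 20)["run-001"].shape)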
Matter effects on binary neutron star waveforms
NASA Astrophysics Data System (ADS)
Read, Jocelyn S.; Baiotti, Luca; Creighton, Jolien D. E.; Friedman, John L.; Giacomazzo, Bruno; Kyutoku, Koutarou; Markakis, Charalampos; Rezzolla, Luciano; Shibata, Masaru; Taniguchi, Keisuke
2013-08-01
Using an extended set of equations of state and a multiple-group multiple-code collaborative effort to generate waveforms, we improve numerical-relativity-based data-analysis estimates of the measurability of matter effects in neutron-star binaries. We vary two parameters of a parametrized piecewise-polytropic equation of state (EOS) to analyze the measurability of EOS properties, via a parameter Λ that characterizes the quadrupole deformability of an isolated neutron star. We find that, to within the accuracy of the simulations, the departure of the waveform from point-particle (or spinless double black-hole binary) inspiral increases monotonically with Λ and changes in the EOS that did not change Λ are not measurable. We estimate with two methods the minimal and expected measurability of Λ in second- and third-generation gravitational-wave detectors. The first estimate using numerical waveforms alone shows that two EOSs which vary in radius by 1.3 km are distinguishable in mergers at 100 Mpc. The second estimate relies on the construction of hybrid waveforms by matching to post-Newtonian inspiral and estimates that the same EOSs are distinguishable in mergers at 300 Mpc. We calculate systematic errors arising from numerical uncertainties and hybrid construction, and we estimate the frequency at which such effects would interfere with template-based searches.
Hermanns, Pia; Couch, Robert; Leonard, Norma; Klotz, Cherise; Pohlenz, Joachim
2014-01-01
Isolated central congenital hypothyroidism (ICCH) is rare but important. Most ICCH patients are diagnosed later, which results in severe growth failure and intellectual disability. We describe a boy with ICCH due to a large homozygous TSHβ gene deletion. A 51-day-old male Turkish infant, whose parents were first cousins, was admitted for evaluation of prolonged jaundice. His clinical appearance was compatible with hypothyroidism. Venous thyrotropin (TSH) was undetectably low, with a subsequent low free T4 and a low free T3, suggestive of central hypothyroidism. Using different PCR protocols, we could not amplify both coding exons of the boy's TSHβ gene, which suggested a deletion. An array comparative genomic hybridization (aCGH) using specific probes around the TSHβ gene locus showed him to be homozygous for a 6-kb deletion spanning all exons and parts of the 5' untranslated region of the gene. Infants who are clinically suspected of having hypothyroidism should be evaluated thoroughly, even if their TSH-based screening result is normal. In cases with ICCH and undetectably low TSH serum concentrations, a TSHβ gene deletion should be considered; aCGH should be performed when gene deletions are suspected. In such cases, PCR-based sequencing techniques give negative results.
Locating and classifying defects using a hybrid data base
NASA Astrophysics Data System (ADS)
Luna-Avilés, A.; Hernández-Gómez, L. H.; Durodola, J. F.; Urriolagoitia-Calderón, G.; Urriolagoitia-Sosa, G.; Beltrán Fernández, J. A.; Díaz Pineda, A.
2011-07-01
A computational inverse technique was used for the localization and classification of defects. Postulated voids of two different sizes (2 mm and 4 mm diameter) were introduced in PMMA bars with and without a notch. The bar dimensions are 200×20×5 mm. One half of the bars were plain and the other half had a notch (3 mm × 4 mm) close to the defect area (19 mm × 16 mm). This analysis was done with an Artificial Neural Network (ANN) and its optimization was done with an Adaptive Neuro Fuzzy Procedure (ANFIS). A hybrid data base was developed with numerical and experimental results. Synthetic data were generated with the finite element method using the SOLID95 element of the ANSYS code. A parametric analysis was carried out. Only one defect per bar was taken into account and the first five natural frequencies were calculated. 460 cases were evaluated; half of them were plain and the other half had a notch. All the input data were classified in two groups, each with 230 cases, corresponding to one of the two sorts of voids mentioned above. On the other hand, experimental analysis was carried out with PMMA specimens of the same size. The first two natural frequencies of 40 cases were obtained with one void; the other three frequencies were obtained numerically. 20 of these bars were plain and the others had a notch. These experimental results were introduced into the synthetic data base. 400 cases were taken randomly and, with this information, the ANN was trained with the backpropagation algorithm. The accuracy of the results was tested with the 100 cases that were left out. In the next stage of this work, the ANN output was optimized with ANFIS. Previous papers showed that the accuracy of localization and classification of defects was reduced when notches were introduced in such bars. In the case of this paper, improved results were obtained when a hybrid data base was used.
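The classification step described above, training a network on the first five natural frequencies to recognize the void size, can be sketched as follows. The synthetic frequencies below are random placeholders standing in for the finite-element and experimental results of the hybrid data base, and the network is a generic scikit-learn MLP rather than the original backpropagation/ANFIS setup.

# Sketch: classify void size from five natural frequencies (placeholder data).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_cases = 460
X = rng.normal(size=(n_cases, 5))       # five natural frequencies per bar (placeholder)
y = np.repeat([0, 1], n_cases // 2)     # 0: 2 mm void, 1: 4 mm void

X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=400, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
print("hold-out accuracy:", clf.score(X_test, y_test))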
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, C; Badal, A
Purpose: Computational voxel phantoms provide realistic anatomy, but the voxel structure may result in dosimetric error compared to real anatomy composed of perfect surfaces. We analyzed the dosimetric error caused by the voxel structure in hybrid computational phantoms by comparing voxel-based doses at different resolutions with triangle-mesh-based doses. Methods: We incorporated the existing adult male UF/NCI hybrid phantom in mesh format into a Monte Carlo transport code, penMesh, that supports triangle meshes. We calculated energy deposition to selected organs of interest for parallel photon beams with three mono energies (0.1, 1, and 10 MeV) in antero-posterior geometry. We also calculated organ energy deposition using three voxel phantoms with different voxel resolutions (1, 5, and 10 mm) using MCNPX2.7. Results: Comparison of organ energy deposition between the two methods showed that agreement overall improved at higher voxel resolution, but for many organs the differences were small. The difference in energy deposition for 1 MeV, for example, decreased from 11.5% to 1.7% in muscle but only from 0.6% to 0.3% in liver as voxel resolution increased from 10 mm to 1 mm. The differences were smaller at higher energies. The numbers of photon histories processed per second in voxels were 6.4×10{sup 4}, 3.3×10{sup 4}, and 1.3×10{sup 4} for 10, 5, and 1 mm resolutions at 10 MeV, respectively, while meshes ran at 4.0×10{sup 4} histories/sec. Conclusion: The combination of the hybrid mesh phantom and penMesh proved to be accurate and of similar speed compared to the voxel phantom and MCNPX. The lowest voxel resolution caused a maximum dosimetric error of 12.6% at 0.1 MeV and 6.8% at 10 MeV, but the error was insignificant in some organs. We will apply the tool to calculate dose to very thin tissue layers (e.g., the radiosensitive layer in the gastrointestinal tract) which cannot be modeled by voxel phantoms.
Manufacturing Methods and Technology Engineering for Tape Chip Carrier.
1981-08-01
Equipment and fixtures used in the manufacture of the Sync Counter hybrid microcircuit included a Continuous Tape Plater (Model No. STP, Microplate).
English in Political Discourse of Post-Suharto Indonesia.
ERIC Educational Resources Information Center
Bernsten, Suzanne
This paper illustrates increases in the use of English in political speeches in post-Suharto Indonesia by analyzing the phonological, morphological, and syntactic assimilation of loanwords (linguistic borrowing), as well as hybridization and code switching, and phenomena such as doubling and loan translations. The paper also examines the mixed…
Numerical Modeling of the Hall Thruster Discharge
2005-04-01
This collection of seven previously published papers, produced under Grant No. FA8655-04-1-3003, provides the background for the development of a new version of the HPHall hybrid code (HPHall v.2) for the numerical modeling of the Hall thruster discharge, along with new insights on discharge physics obtained during the development.
Strategy and gaps for modeling, simulation, and control of hybrid systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rabiti, Cristian; Garcia, Humberto E.; Hovsapian, Rob
2015-04-01
The purpose of this report is to establish a strategy for modeling and simulation of candidate hybrid energy systems. Modeling and simulation are necessary to design, evaluate, and optimize the system's technical and economic performance. Accordingly, this report first establishes the simulation requirements for analyzing candidate hybrid systems. Simulation fidelity levels are established based on the temporal scale, real and synthetic data availability or needs, solution accuracy, and the output parameters needed to evaluate case-specific figures of merit. The associated computational and co-simulation resources needed are then established, including physical models when needed, code assembly and integrated solution platforms, mathematical solvers, and data processing. The report describes the figures of merit, system requirements, and constraints that are necessary and sufficient to characterize grid and hybrid system behavior and market interactions. Loss of Load Probability (LOLP) and Effective Cost of Energy (ECE), as opposed to the standard Levelized Cost of Electricity (LCOE), are introduced as technical and economic indices for integrated energy system evaluations. Financial assessment methods are subsequently introduced for the evaluation of non-traditional, hybrid energy systems. Algorithms for coupled and iterative evaluation of technical and economic performance are then discussed. The report further defines the modeling objectives, computational tools, solution approaches, and real-time data collection and processing (in some cases using real test units) that will be required to model, co-simulate, and optimize (a) energy system components (e.g., power generation unit, chemical process, electricity management unit), (b) system domains (e.g., thermal, electrical or chemical energy generation, conversion, and transport), and (c) system control modules. Co-simulation of complex, tightly coupled, dynamic energy systems requires multiple simulation tools, potentially developed in several programming languages and resolved on separate time scales. Whereas further investigation and development of hybrid concepts will provide a more complete understanding of the joint computational and physical modeling needs, this report highlights areas in which co-simulation capabilities are warranted. The current development status, quality assurance, availability, and maintainability of simulation tools currently available for hybrid systems modeling are presented. Existing gaps in the modeling and simulation toolsets and development needs are subsequently discussed. This effort will feed into a broader roadmap activity for designing, developing, and demonstrating hybrid energy systems.
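As a small illustration of the Loss of Load Probability index introduced above, the sketch below computes LOLP as the fraction of time steps in which demand exceeds available generation; the hourly capacity and load series are synthetic placeholders, not data from the report.

# Minimal sketch of the LOLP reliability index on an hourly time grid.
import numpy as np

def lolp(available_capacity, demand):
    """Both arguments are arrays sampled on the same time grid (e.g. hourly)."""
    available_capacity = np.asarray(available_capacity, dtype=float)
    demand = np.asarray(demand, dtype=float)
    return float(np.mean(demand > available_capacity))

hours = 8760
rng = np.random.default_rng(1)
capacity = 900.0 + 100.0 * rng.random(hours)   # MW, placeholder for outages/renewables
load = 700.0 + 250.0 * rng.random(hours)       # MW, placeholder demand
print(f"LOLP = {lolp(capacity, load):.4f}")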
NASA Astrophysics Data System (ADS)
Pei, Youbin; Xiang, Nong; Shen, Wei; Hu, Youjun; Todo, Y.; Zhou, Deng; Huang, Juan
2018-05-01
Kinetic-MagnetoHydroDynamic (MHD) hybrid simulations are carried out to study fast-ion-driven toroidal Alfvén eigenmodes (TAEs) on the Experimental Advanced Superconducting Tokamak (EAST). The first part of this article presents the linear benchmark between two kinetic-MHD codes, namely MEGA and M3D-K, based on a realistic EAST equilibrium. Parameter scans show that the frequency and the growth rate of the TAE given by the two codes agree with each other. The second part of this article discusses the resonance interaction between the TAE and fast ions simulated by the MEGA code. The results show that the TAE exchanges energy with the co-current passing particles with parallel velocity |v∥| ≈ VA0/3 or |v∥| ≈ VA0/5, where VA0 is the Alfvén speed on the magnetic axis. The TAE destabilized by the counter-current passing ions is also analyzed and found to have a much smaller growth rate than the TAE driven by co-current ions. One reason for this is that the overlapping region of the TAE spatial location and the counter-current ion orbits is narrow, and thus the wave-particle energy exchange is not efficient.
RNAiFold 2.0: a web server and software to design custom and Rfam-based RNA molecules.
Garcia-Martin, Juan Antonio; Dotu, Ivan; Clote, Peter
2015-07-01
Several algorithms for RNA inverse folding have been used to design synthetic riboswitches, ribozymes and thermoswitches, whose activity has been experimentally validated. The RNAiFold software is unique among approaches for inverse folding in that (exhaustive) constraint programming is used instead of heuristic methods. For that reason, RNAiFold can generate all sequences that fold into the target structure or determine that there is no solution. RNAiFold 2.0 is a complete overhaul of RNAiFold 1.0, rewritten from the now defunct COMET language to C++. The new code properly extends the capabilities of its predecessor by providing a user-friendly pipeline to design synthetic constructs having the functionality of given Rfam families. In addition, the new software supports amino acid constraints, even for proteins translated in different reading frames from overlapping coding sequences; moreover, structure compatibility/incompatibility constraints have been expanded. With these features, RNAiFold 2.0 allows the user to design single RNA molecules as well as hybridization complexes of two RNA molecules. The web server, source code and Linux binaries are publicly accessible at http://bioinformatics.bc.edu/clotelab/RNAiFold2.0. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
Visual information processing; Proceedings of the Meeting, Orlando, FL, Apr. 20-22, 1992
NASA Technical Reports Server (NTRS)
Huck, Friedrich O. (Editor); Juday, Richard D. (Editor)
1992-01-01
Topics discussed in these proceedings include nonlinear processing and communications; feature extraction and recognition; image gathering, interpolation, and restoration; image coding; and wavelet transform. Papers are presented on noise reduction for signals from nonlinear systems; driving nonlinear systems with chaotic signals; edge detection and image segmentation of space scenes using fractal analyses; a vision system for telerobotic operation; a fidelity analysis of image gathering, interpolation, and restoration; restoration of images degraded by motion; and information, entropy, and fidelity in visual communication. Attention is also given to image coding methods and their assessment, hybrid JPEG/recursive block coding of images, modified wavelets that accommodate causality, modified wavelet transform for unbiased frequency representation, and continuous wavelet transform of one-dimensional signals by Fourier filtering.
GPU Computing in Bayesian Inference of Realized Stochastic Volatility Model
NASA Astrophysics Data System (ADS)
Takaishi, Tetsuya
2015-01-01
The realized stochastic volatility (RSV) model that utilizes the realized volatility as additional information has been proposed to infer the volatility of financial time series. We consider the Bayesian inference of the RSV model by the Hybrid Monte Carlo (HMC) algorithm. The HMC algorithm can be parallelized and thus performed on the GPU for speedup. The GPU code is developed with CUDA Fortran. We compare the computational time of performing the HMC algorithm on a GPU (GTX 760) and a CPU (Intel i7-4770, 3.4 GHz) and find that the GPU can be up to 17 times faster than the CPU. We also code the program with OpenACC and find that appropriate coding can achieve a speedup similar to that of CUDA Fortran.
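The Hybrid Monte Carlo update at the core of the study can be sketched as a leapfrog integration of fictitious Hamiltonian dynamics followed by a Metropolis accept/reject step. The sketch below uses a standard normal target in place of the RSV-model posterior and plain NumPy rather than CUDA Fortran or OpenACC, so it only illustrates the algorithmic structure.

# Minimal sketch of one Hybrid (Hamiltonian) Monte Carlo update.
import numpy as np

def hmc_step(q, log_prob, grad_log_prob, eps=0.1, n_leapfrog=20, rng=np.random):
    p = rng.normal(size=q.shape)                      # sample momentum
    q_new, p_new = q.copy(), p.copy()
    p_new += 0.5 * eps * grad_log_prob(q_new)         # leapfrog integration
    for _ in range(n_leapfrog - 1):
        q_new += eps * p_new
        p_new += eps * grad_log_prob(q_new)
    q_new += eps * p_new
    p_new += 0.5 * eps * grad_log_prob(q_new)
    # Metropolis accept/reject on the total "energy"
    h_old = -log_prob(q) + 0.5 * np.dot(p, p)
    h_new = -log_prob(q_new) + 0.5 * np.dot(p_new, p_new)
    return q_new if rng.random() < np.exp(h_old - h_new) else q

# Example target: a standard normal in 3 dimensions (stand-in for the RSV posterior).
log_prob = lambda q: -0.5 * np.dot(q, q)
grad_log_prob = lambda q: -q
q = np.zeros(3)
samples = []
for _ in range(1000):
    q = hmc_step(q, log_prob, grad_log_prob)
    samples.append(q.copy())
print(np.std(np.array(samples), axis=0))   # should approach 1 in each dimension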
NASA Technical Reports Server (NTRS)
Bobbitt, Percy J.
1992-01-01
A discussion is given of the many factors that affect sonic booms with particular emphasis on the application and development of improved computational fluid dynamics (CFD) codes. The benefits that accrue from interference (induced) lift, distributing lift using canard configurations, the use of wings with dihedral or anhedral and hybrid laminar flow control for drag reduction are detailed. The application of the most advanced codes to a wider variety of configurations along with improved ray-tracing codes to arrive at more accurate and, hopefully, lower sonic booms is advocated. Finally, it is speculated that when all of the latest technology is applied to the design of a supersonic transport it will be found environmentally acceptable.
A Lossless hybrid wavelet-fractal compression for welding radiographic images.
Mekhalfa, Faiza; Avanaki, Mohammad R N; Berkani, Daoud
2016-01-01
In this work a lossless wavelet-fractal image coder is proposed. The process starts by compressing and decompressing the original image using a wavelet transformation and a fractal coding algorithm. The decompressed image is subtracted from the original one to obtain a residual image, which is coded using the Huffman algorithm. Simulation results show that with the proposed scheme we achieve an infinite peak signal-to-noise ratio (PSNR) with a higher compression ratio compared to a typical lossless method. Moreover, the use of the wavelet transform speeds up the fractal compression algorithm by reducing the size of the domain pool. The compression results of several welding radiographic images using the proposed scheme are evaluated quantitatively and compared with the results of the Huffman coding algorithm.
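The lossless construction described above (a lossy stage, a residual, and entropy coding of the residual) can be sketched as follows; coarse quantization stands in for the wavelet-fractal stage and zlib stands in for the Huffman coder, so the sizes printed are purely illustrative.

# Minimal sketch of lossless coding via a lossy approximation plus residual.
import numpy as np
import zlib

def encode(img, step=16):
    lossy = (img // step) * step                              # stand-in for wavelet-fractal coding
    residual = img.astype(np.int16) - lossy.astype(np.int16)  # what the lossy stage missed
    packed = zlib.compress(residual.tobytes())                # stand-in for Huffman coding
    return lossy, packed

def decode(lossy, packed, shape):
    residual = np.frombuffer(zlib.decompress(packed), dtype=np.int16).reshape(shape)
    return (lossy.astype(np.int16) + residual).astype(np.uint8)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)     # placeholder radiograph
lossy, packed = encode(img)
restored = decode(lossy, packed, img.shape)
assert np.array_equal(restored, img)       # reconstruction is exact (infinite PSNR)
print("residual stream:", len(packed), "bytes")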
McSKY: A hybrid Monte-Carlo line-beam code for shielded gamma skyshine calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shultis, J.K.; Faw, R.E.; Stedry, M.H.
1994-07-01
McSKY evaluates the skyshine dose from an isotropic, monoenergetic, point photon source collimated into either a vertical cone or a vertical structure with an N-sided polygonal cross section. The code assumes an overhead shield of two materials, though the user can specify zero shield thickness for an unshielded calculation. The code uses a Monte Carlo algorithm to evaluate transport through the source shields and the integral line-beam source method to describe photon transport through the atmosphere. The source energy must be between 0.02 and 100 MeV. For heavily shielded sources with energies above 20 MeV, McSKY results must be used cautiously, especially at detector locations near the source.
Turbulence dissipation challenge: particle-in-cell simulations
NASA Astrophysics Data System (ADS)
Roytershteyn, V.; Karimabadi, H.; Omelchenko, Y.; Germaschewski, K.
2015-12-01
We discuss the application of three particle-in-cell (PIC) codes to problems relevant to the turbulence dissipation challenge. VPIC is a fully kinetic code extensively used to study a variety of problems ranging from laboratory plasmas to astrophysics. PSC is a flexible fully kinetic code offering a variety of algorithms that can be advantageous for turbulence simulations, including high-order particle shapes, dynamic load balancing, and the ability to run efficiently on Graphics Processing Units (GPUs). Finally, HYPERS is a novel hybrid (kinetic ions + fluid electrons) code, which utilizes asynchronous time advance and a number of other advanced algorithms. We present examples drawn both from large-scale turbulence simulations and from the test problems outlined by the turbulence dissipation challenge. Special attention is paid to such issues as the small-scale intermittency of inertial-range turbulence, the mode content of the sub-proton range of scales, the formation of electron-scale current sheets and the role of magnetic reconnection, as well as the numerical challenges of applying PIC codes to simulations of astrophysical turbulence.
Hybrid computational phantoms of the male and female newborn patient: NURBS-based whole-body models
NASA Astrophysics Data System (ADS)
Lee, Choonsik; Lodwick, Daniel; Hasenauer, Deanna; Williams, Jonathan L.; Lee, Choonik; Bolch, Wesley E.
2007-07-01
Anthropomorphic computational phantoms are computer models of the human body for use in the evaluation of dose distributions resulting from either internal or external radiation sources. Currently, two classes of computational phantoms have been developed and widely utilized for organ dose assessment: (1) stylized phantoms and (2) voxel phantoms which describe the human anatomy via mathematical surface equations or 3D voxel matrices, respectively. Although stylized phantoms based on mathematical equations can be very flexible in regard to making changes in organ position and geometrical shape, they are limited in their ability to fully capture the anatomic complexities of human internal anatomy. In turn, voxel phantoms have been developed through image-based segmentation and correspondingly provide much better anatomical realism in comparison to simpler stylized phantoms. However, they themselves are limited in defining organs presented in low contrast within either magnetic resonance or computed tomography images—the two major sources in voxel phantom construction. By definition, voxel phantoms are typically constructed via segmentation of transaxial images, and thus while fine anatomic features are seen in this viewing plane, slice-to-slice discontinuities become apparent in viewing the anatomy of voxel phantoms in the sagittal or coronal planes. This study introduces the concept of a hybrid computational newborn phantom that takes full advantage of the best features of both its stylized and voxel counterparts: flexibility in phantom alterations and anatomic realism. Non-uniform rational B-spline (NURBS) surfaces, a mathematical modeling tool traditionally applied to graphical animation studies, was adopted to replace the limited mathematical surface equations of stylized phantoms. A previously developed whole-body voxel phantom of the newborn female was utilized as a realistic anatomical framework for hybrid phantom construction. The construction of a hybrid phantom is performed in three steps: polygonization of the voxel phantom, organ modeling via NURBS surfaces and phantom voxelization. Two 3D graphic tools, 3D-DOCTOR™ and Rhinoceros™, were utilized to polygonize the newborn voxel phantom and generate NURBS surfaces, while an in-house MATLAB™ code was used to voxelize the resulting NURBS model into a final computational phantom ready for use in Monte Carlo radiation transport calculations. A total of 126 anatomical organ and tissue models, including 38 skeletal sites and 31 cartilage sites, were described within the hybrid phantom using either NURBS or polygon surfaces. A male hybrid newborn phantom was constructed following the development of the female phantom through the replacement of female-specific organs with male-specific organs. The outer body contour and internal anatomy of the NURBS-based phantoms were adjusted to match anthropometric and reference newborn data reported by the International Commission on Radiological Protection in their Publication 89. The voxelization process was designed to accurately convert NURBS models to a voxel phantom with minimum volumetric change. A sensitivity study was additionally performed to better understand how the meshing tolerance and voxel resolution would affect volumetric changes between the hybrid-NURBS and hybrid-voxel phantoms. 
The male and female hybrid-NURBS phantoms were constructed in a manner so that all internal organs approached their ICRP reference masses to within 1%, with the exception of the skin (-6.5% relative error) and brain (-15.4% relative error). Both hybrid-voxel phantoms were constructed with an isotropic voxel resolution of 0.663 mm—equivalent to the ICRP 89 reference thickness of the newborn skin (dermis and epidermis). Hybrid-NURBS phantoms used to create their voxel counterpart retain the non-uniform scalability of stylized phantoms, while maintaining the anatomic realism of segmented voxel phantoms with respect to organ shape, depth and inter-organ positioning. This work was supported by the National Cancer Institute.
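The final voxelization step and the accompanying sensitivity check on volumetric change can be illustrated with a small sketch in which a sphere stands in for a NURBS organ surface and is rasterized at several voxel sizes, including the 0.663 mm resolution quoted above. The organ size and the rasterization rule are illustrative assumptions, not the in-house MATLAB voxelizer described in the abstract.

# Sketch: rasterize an implicit organ surface onto grids of different voxel sizes
# and report the resulting volumetric change.
import numpy as np

def voxelize_sphere(radius_mm, voxel_mm):
    half = int(np.ceil(radius_mm / voxel_mm)) + 1
    coords = (np.arange(-half, half + 1) + 0.5) * voxel_mm    # voxel centres
    x, y, z = np.meshgrid(coords, coords, coords, indexing="ij")
    inside = x**2 + y**2 + z**2 <= radius_mm**2               # voxel counted if its centre is inside
    return inside.sum() * voxel_mm**3

radius = 10.0                                   # mm, placeholder organ size
true_volume = 4.0 / 3.0 * np.pi * radius**3
for voxel in (2.0, 1.0, 0.663):                 # 0.663 mm is the phantom's skin-thickness resolution
    v = voxelize_sphere(radius, voxel)
    print(f"voxel {voxel:5.3f} mm: volume error {100 * (v - true_volume) / true_volume:+.2f}%")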
Hybrid transport and diffusion modeling using electron thermal transport Monte Carlo SNB in DRACO
NASA Astrophysics Data System (ADS)
Chenhall, Jeffrey; Moses, Gregory
2017-10-01
The iSNB (implicit Schurtz Nicolai Busquet) multigroup diffusion electron thermal transport method is adapted into an Electron Thermal Transport Monte Carlo (ETTMC) transport method to better model angular and long mean free path non-local effects. Previously, the ETTMC model had been implemented in the 2D DRACO multiphysics code and found to produce consistent results with the iSNB method. Current work is focused on a hybridization of the computationally slower but higher fidelity ETTMC transport method with the computationally faster iSNB diffusion method in order to maximize computational efficiency. Furthermore, effects on the energy distribution of the heat flux divergence are studied. Work to date on the hybrid method will be presented. This work was supported by Sandia National Laboratories and the Univ. of Rochester Laboratory for Laser Energetics.
Cost-effective sequencing of full-length cDNA clones powered by a de novo-reference hybrid assembly.
Kuroshu, Reginaldo M; Watanabe, Junichi; Sugano, Sumio; Morishita, Shinichi; Suzuki, Yutaka; Kasahara, Masahiro
2010-05-07
Sequencing full-length cDNA clones is important to determine gene structures, including alternative splice forms, and provides valuable resources for experimental analyses to reveal the biological functions of coded proteins. However, previous approaches to sequencing cDNA clones were expensive or time-consuming, and a fast and efficient sequencing approach was therefore needed. We developed a program, MuSICA 2, that assembles millions of short (36-nucleotide) reads collected from a single flow cell lane of an Illumina Genome Analyzer to shotgun-sequence approximately 800 human full-length cDNA clones. MuSICA 2 performs a hybrid assembly in which an external de novo assembler is run first and the result is then improved by reference alignment of shotgun reads. We compared the MuSICA 2 assembly with 200 pooled full-length cDNA clones finished independently by conventional primer walking using Sanger sequencers. The exon-intron structure of the coding sequence was correct for more than 95% of the clones with coding sequence annotation when we excluded cDNA clones insufficiently represented in the shotgun library due to PCR failure (42 out of 200 clones excluded), and the nucleotide-level accuracy of coding sequences of those correct clones was over 99.99%. We also applied MuSICA 2 to full-length cDNA clones from Toxoplasma gondii, confirming that it is effective for non-human species as well. The entire sequencing and shotgun assembly takes less than 1 week, and the consumables cost only approximately US$3 per clone, demonstrating a significant advantage over previous approaches.
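As a simplified illustration of the reference-guided improvement step (aligning shotgun reads back to a de novo draft and correcting it), the sketch below performs a naive majority-vote polish of a draft contig from a toy read pileup. The data and the pileup representation are hypothetical and far smaller than real 36-nucleotide Illumina reads; this is not the MuSICA 2 algorithm itself.

```python
from collections import Counter

def polish_contig(draft, aligned_reads):
    """Toy consensus correction: given a draft contig and reads already aligned
    to it as (start, sequence) pairs, replace each base by the majority call
    from the read pileup, keeping the draft base where there is no coverage."""
    pileup = [Counter() for _ in draft]
    for start, seq in aligned_reads:
        for offset, base in enumerate(seq):
            pos = start + offset
            if 0 <= pos < len(draft):
                pileup[pos][base] += 1
    return "".join(
        column.most_common(1)[0][0] if column else draft_base
        for column, draft_base in zip(pileup, draft)
    )

draft = "ACGTTAGC"                                   # hypothetical de novo draft
reads = [(0, "ACGA"), (2, "GATAG"), (4, "TAGC")]     # hypothetical aligned reads
print(polish_contig(draft, reads))                   # -> "ACGATAGC"
```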
Dynamic wavefront creation for processing units using a hybrid compactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Puthoor, Sooraj; Beckmann, Bradford M.; Yudanov, Dmitri
A method, a non-transitory computer readable medium, and a processor for repacking dynamic wavefronts during program code execution on a processing unit, each dynamic wavefront including multiple threads, are presented. If a branch instruction is detected, a determination is made whether all wavefronts following a same control path in the program code have reached a compaction point, which is the branch instruction. If no branch instruction is detected in executing the program code, a determination is made whether all wavefronts following the same control path have reached a reconvergence point, which is a beginning of a program code segment to be executed by both a taken branch and a not taken branch from a previous branch instruction. The dynamic wavefronts are repacked with all threads that follow the same control path, if all wavefronts following the same control path have reached the branch instruction or the reconvergence point.
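A minimal Python model of the compaction idea is sketched below: threads are tagged with the control path they follow, and once the relevant wavefronts have reached the compaction or reconvergence point, threads on the same path are repacked into dense wavefronts. The wavefront size and data layout are hypothetical simplifications of the claimed hardware mechanism.

```python
from collections import defaultdict

WAVEFRONT_SIZE = 4  # hypothetical; real GPUs use 32 or 64 lanes per wavefront

def repack(threads):
    """Group threads by the control path they follow and repack each group into
    dense wavefronts, mimicking compaction at a branch or reconvergence point."""
    by_path = defaultdict(list)
    for thread_id, path in threads:
        by_path[path].append(thread_id)
    wavefronts = []
    for path, ids in by_path.items():
        for i in range(0, len(ids), WAVEFRONT_SIZE):
            wavefronts.append((path, ids[i:i + WAVEFRONT_SIZE]))
    return wavefronts

# Threads that diverged at a branch: odd thread IDs took it, even ones did not.
threads = [(i, "taken" if i % 2 else "not_taken") for i in range(10)]
for path, lanes in repack(threads):
    print(path, lanes)
```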
Angled injection: Hybrid fluid film bearings for cryogenic applications
NASA Technical Reports Server (NTRS)
SanAndres, Luis
1995-01-01
A computational bulk-flow analysis for prediction of the force coefficients of hybrid fluid film bearings with angled orifice injection is presented. Past measurements on water-lubricated hybrid bearings with angled orifice injection have demonstrated improved rotordynamic performance, with virtual elimination of cross-coupled stiffness coefficients and null or negative whirl frequency ratios. A simple analysis reveals that the fluid momentum exchange at the orifice discharge produces a pressure rise in the recess which retards the shear flow induced by journal rotation and, consequently, reduces cross-coupling forces. The predictions from the model correlate well with experimental measurements from radial and 45 deg angled orifice injection, five-recess water hybrid bearings (C = 125 microns) operating at 10.2, 17.4, and 24.6 krpm and with nominal supply pressures of 4, 5.5, and 7 MPa. An application example for a liquid oxygen six recess/pad hybrid journal bearing shows the advantages of tangential orifice injection on the rotordynamic force coefficients and stability indicator for forward whirl motions, without performance degradation of the direct stiffness and damping coefficients. The computer program developed, 'hydrojet,' extends and complements previously developed codes.
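As a back-of-the-envelope companion to the stability discussion, the snippet below evaluates the commonly used whirl frequency ratio approximation WFR = Kxy/(Cxx·Ω) for hypothetical force coefficients, showing how a null or negative cross-coupled stiffness drives the WFR toward zero or below. The numbers are illustrative and are not taken from the paper or from the 'hydrojet' code.

```python
import math

def whirl_frequency_ratio(k_xy, c_xx, omega):
    """Whirl frequency ratio under the usual symmetric-coefficient assumption:
    WFR = Kxy / (Cxx * Omega).  Values near zero (or negative) indicate the
    improved stability attributed to angled orifice injection."""
    return k_xy / (c_xx * omega)

omega = 2 * math.pi * (24600 / 60.0)   # 24.6 krpm expressed in rad/s
# Hypothetical coefficients: Kxy in N/m, Cxx in N*s/m.
print(whirl_frequency_ratio(k_xy=1.0e7, c_xx=8.0e3, omega=omega))    # conventional radial injection
print(whirl_frequency_ratio(k_xy=-1.0e6, c_xx=8.0e3, omega=omega))   # angled injection, negative Kxy
```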
PyNCS: a microkernel for high-level definition and configuration of neuromorphic electronic systems
Stefanini, Fabio; Neftci, Emre O.; Sheik, Sadique; Indiveri, Giacomo
2014-01-01
Neuromorphic hardware offers an electronic substrate for the realization of asynchronous event-based sensory-motor systems and large-scale spiking neural network architectures. In order to characterize these systems, configure them, and carry out modeling experiments, it is often necessary to interface them to workstations. The software used for this purpose typically consists of a large monolithic block of code which is highly specific to the hardware setup used. While this approach can lead to highly integrated hardware/software systems, it hampers the development of modular and reconfigurable infrastructures thus preventing a rapid evolution of such systems. To alleviate this problem, we propose PyNCS, an open-source front-end for the definition of neural network models that is interfaced to the hardware through a set of Python Application Programming Interfaces (APIs). The design of PyNCS promotes modularity, portability and expandability and separates implementation from hardware description. The high-level front-end that comes with PyNCS includes tools to define neural network models as well as to create, monitor and analyze spiking data. Here we report the design philosophy behind the PyNCS framework and describe its implementation. We demonstrate its functionality with two representative case studies, one using an event-based neuromorphic vision sensor, and one using a set of multi-neuron devices for carrying out a cognitive decision-making task involving state-dependent computation. PyNCS, already applicable to a wide range of existing spike-based neuromorphic setups, will accelerate the development of hybrid software/hardware neuromorphic systems, thanks to its code flexibility. The code is open-source and available online at https://github.com/inincs/pyNCS. PMID:25232314
Cobbin, Joanna C A; Ong, Chi; Verity, Erin; Gilbertson, Brad P; Rockman, Steven P; Brown, Lorena E
2014-08-01
Egg-grown influenza vaccine yields are maximized by infection with a seed virus produced by "classical reassortment" of a seasonal isolate with a highly egg-adapted strain. Seed viruses are selected based on a high-growth phenotype and the presence of the seasonal hemagglutinin (HA) and neuraminidase (NA) surface antigens. Retrospective analysis of H3N2 vaccine seed viruses indicated that, unlike other internal proteins that were predominantly derived from the high-growth parent A/Puerto Rico/8/34 (PR8), the polymerase subunit PB1 could be derived from either parent depending on the seasonal strain. We have recently shown that A/Udorn/307/72 (Udorn) models a seasonal isolate that yields reassortants bearing the seasonal PB1 gene. This is despite the fact that the reverse genetics-derived virus that includes Udorn PB1 with Udorn HA and NA on a PR8 background has inferior growth compared to the corresponding virus with PR8 PB1. Here we use competitive plasmid transfections to investigate the mechanisms driving selection of a less fit virus and show that the Udorn PB1 gene segment cosegregates with the Udorn NA gene segment. Analysis of chimeric PB1 genes revealed that the coselection of NA and PB1 segments was not directed through the previously identified packaging sequences but through interactions involving the internal coding region of the PB1 gene. This study identifies associations between viral genes that can direct selection in classical reassortment for vaccine production and which may also be of relevance to the gene constellations observed in past antigenic shift events where creation of a pandemic virus has involved reassortment. Influenza vaccine must be produced and administered in a timely manner in order to provide protection during the winter season, and poor-growing vaccine seed viruses can compromise this process. To maximize vaccine yields, manufacturers create hybrid influenza viruses with gene segments encoding the surface antigens from a seasonal virus isolate, important for immunity, and others from a virus with high growth properties. This involves coinfection of cells with both parent viruses and selection of dominant progeny bearing the seasonal antigens. We show that this method of creating hybrid viruses does not necessarily select for the best yielding virus because preferential pairing of gene segments when progeny viruses are produced determines the genetic makeup of the hybrids. This not only has implications for how hybrid viruses are selected for vaccine production but also sheds light on what drives and limits hybrid gene combinations that arise in nature, leading to pandemics. Copyright © 2014, American Society for Microbiology. All Rights Reserved.
NASA Astrophysics Data System (ADS)
Belen'kii, Mikhail S.; Rye, Vincent; Runyeon, Hope
2007-09-01
A concept of a Hybrid Wavefront-based Stochastic Parallel Gradient Descent (WSPGD) Adaptive Optics (AO) system for correcting the combined effects of Beacon Anisoplanatism and Thermal Blooming is introduced. This system integrates a conventional phase conjugate (PC) AO system with a WSPGD AO system. It uses on-axis wavefront measurements of a laser return from an extended beacon to generate initial deformable mirror (DM) commands. Since high frequency phase components are removed from the wavefront of a laser return by the low-pass filter effect of an extended beacon, the system also uses off-axis wavefront measurements to provide feedback for a multi-dithering beam control algorithm in order to generate additional DM commands that account for those missing high frequency phase components. Performance of the Hybrid WSPGD AO system was evaluated in simulation using a wave optics code. Numerical analysis was performed for two tactical scenarios that included ranges of L = 2 km and L = 20 km, a ratio of aperture diameter to Fried parameter, D/r0, of up to 15, a ratio of beam spot size at the target to isoplanatic angle, θB/θ0, of up to 40, and general distortion numbers characterizing the strength of Thermal Blooming of Nd = 50, 75, and 100. The line of sight of the corrected beam was stabilized using a target-plane tracker. The simulation results reveal that the Hybrid WSPGD AO system can efficiently correct the effects of Beacon Anisoplanatism and Thermal Blooming, providing improved compensation of Thermal Blooming in the presence of strong turbulence. Simulation results also indicate that the Hybrid WSPGD AO system outperforms a conventional PC AO system, increasing the Strehl ratio by up to 300% in fewer than 50 iterations. A follow-on laboratory demonstration performed under a separate program confirmed our theoretical predictions.
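The multi-dithering part of the controller is based on stochastic parallel gradient descent; a minimal sketch of a two-sided SPGD loop on a toy quadratic image-quality metric is given below. The metric, gain, perturbation amplitude, and actuator count are hypothetical, and the wavefront-based initialization from the phase-conjugate loop is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def metric(u, aberration):
    """Toy far-field quality metric: highest when DM commands cancel the aberration."""
    return -np.sum((u - aberration) ** 2)

def spgd(aberration, n_actuators=32, gain=1.0, perturb=0.1, iterations=300):
    """Two-sided SPGD update u <- u + gain * dJ * du, where du is a random
    bipolar dither applied simultaneously to all actuators."""
    u = np.zeros(n_actuators)
    for _ in range(iterations):
        du = perturb * rng.choice([-1.0, 1.0], size=n_actuators)
        dj = metric(u + du, aberration) - metric(u - du, aberration)
        u = u + gain * dj * du
    return u

aberration = rng.normal(size=32)                       # hypothetical static aberration
u = spgd(aberration)
print("initial RMS :", np.sqrt(np.mean(aberration ** 2)))
print("residual RMS:", np.sqrt(np.mean((u - aberration) ** 2)))
```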
Chang, Zhenyi; Chen, Zhufeng; Wang, Na; Xie, Gang; Lu, Jiawei; Yan, Wei; Zhou, Junli; Tang, Xiaoyan; Deng, Xing Wang
2016-01-01
The breeding and large-scale adoption of hybrid seeds is an important achievement in agriculture. Rice hybrid seed production uses cytoplasmic male sterile lines or photoperiod/thermo-sensitive genic male sterile lines (PTGMS) as the female parent. Cytoplasmic male sterile lines are propagated via cross-pollination by corresponding maintainer lines, whereas PTGMS lines are propagated via self-pollination under environmental conditions restoring male fertility. Despite huge successes, both systems have their intrinsic drawbacks. Here, we constructed a rice male sterility system using a nuclear gene named Oryza sativa No Pollen 1 (OsNP1). OsNP1 encodes a putative glucose–methanol–choline oxidoreductase regulating tapetum degeneration and pollen exine formation; it is specifically expressed in the tapetum and microspores. The osnp1 mutant plant displays normal vegetative growth but complete male sterility insensitive to environmental conditions. OsNP1 was coupled with an α-amylase gene to devitalize transgenic pollen and the red fluorescence protein (DsRed) gene to mark transgenic seed and transformed into the osnp1 mutant. Self-pollination of the transgenic plant carrying a single hemizygous transgene produced nontransgenic male sterile and transgenic fertile seeds in a 1:1 ratio that can be sorted out based on the red fluorescence coded by DsRed. Cross-pollination of the fertile transgenic plants to the nontransgenic male sterile plants propagated the male sterile seeds of high purity. The male sterile line was crossed with ∼1,200 individual rice germplasms available. Approximately 85% of the F1s outperformed their parents in per plant yield, and 10% out-yielded the best local cultivars, indicating that the technology is promising in hybrid rice breeding and production. PMID:27864513
SiSeRHMap v1.0: a simulator for mapped seismic response using a hybrid model
NASA Astrophysics Data System (ADS)
Grelle, Gerardo; Bonito, Laura; Lampasi, Alessandro; Revellino, Paola; Guerriero, Luigi; Sappa, Giuseppe; Guadagno, Francesco Maria
2016-04-01
The SiSeRHMap (simulator for mapped seismic response using a hybrid model) is a computerized methodology capable of elaborating prediction maps of seismic response in terms of acceleration spectra. It was realized on the basis of a hybrid model which combines different approaches and models in a new and non-conventional way. These approaches and models are organized in a code architecture composed of five interdependent modules. A GIS (geographic information system) cubic model (GCM), which is a layered computational structure based on the concept of lithodynamic units and zones, aims at reproducing a parameterized layered subsoil model. A meta-modelling process confers a hybrid nature to the methodology. In this process, the one-dimensional (1-D) linear equivalent analysis produces acceleration response spectra for a specified number of site profiles using one or more input motions. The shear wave velocity-thickness profiles, defined as trainers, are randomly selected in each zone. Subsequently, a numerical adaptive simulation model (Emul-spectra) is optimized on the above trainer acceleration response spectra by means of a dedicated evolutionary algorithm (EA) and the Levenberg-Marquardt algorithm (LMA) as the final optimizer. In the final step, the GCM maps executor module produces a serial map set of the stratigraphic seismic response at different periods by grid-solving the calibrated Emul-spectra model. In addition, the topographic amplification of the spectra is also computed by means of a 3-D validated numerical prediction model. This model is built to match the results of numerical simulations related to isolated reliefs using GIS morphometric data. In this way, different sets of seismic response maps are developed, on which maps of design acceleration response spectra are also defined by means of an enveloping technique.
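To illustrate the role of the Levenberg-Marquardt algorithm as the final optimizer, the sketch below fits a simple parametric spectral shape to one synthetic trainer acceleration response spectrum with SciPy's LM solver. The spectral model and the synthetic data are placeholders and do not reproduce the actual Emul-spectra formulation.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical synthetic trainer spectrum standing in for a 1-D linear-equivalent result.
periods = np.linspace(0.05, 2.0, 60)
trainer = 0.3 * (1.0 + 1.8 * np.exp(-((np.log(periods) - np.log(0.4)) ** 2) / (2 * 0.35 ** 2)))
trainer += 0.01 * np.random.default_rng(1).normal(size=periods.size)   # observation noise

def model(p, T):
    """Toy spectral shape: plateau plus a lognormal bump around a dominant period."""
    pga, amp, log_T0, width = p
    return pga * (1.0 + amp * np.exp(-((np.log(T) - log_T0) ** 2) / (2 * width ** 2)))

def residuals(p):
    return model(p, periods) - trainer

# Levenberg-Marquardt refinement of the spectral parameters.
fit = least_squares(residuals, x0=[0.2, 1.0, np.log(0.5), 0.3], method="lm")
print("fitted parameters:", fit.x)
```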
A Mutation of the Prdm9 Mouse Hybrid Sterility Gene Carried by a Transgene.
Mihola, O; Trachtulec, Z
2017-01-01
PRDM9 is a protein with histone-3-methyltransferase activity, which specifies the sites of meiotic recombination in mammals. Deficiency of the Prdm9 gene in the laboratory mouse results in complete arrest of the meiotic prophase of both sexes. Moreover, the combination of certain PRDM9 alleles from different mouse subspecies causes hybrid sterility, e.g., the male-specific meiotic arrest found in the (PWD/Ph × C57BL/6J)F1 animals. The fertility of all these mice can be rescued using a Prdm9-containing transgene. Here we characterized a transgene made from the clone RP24-346I22 that was expected to encompass the entire Prdm9 gene. Both (PWD/Ph × C57BL/6J)F1 intersubspecific hybrid males and Prdm9-deficient laboratory mice of both sexes carrying this transgene remained sterile, suggesting that Prdm9 inactivation occurred in the Tg(RP24-346I22) transgenics. Indeed, comparative qRT-PCR analysis of testicular RNAs from transgene-positive versus negative animals revealed similar expression levels of Prdm9 mRNAs from the exons encoding the C-terminal part of the protein but elevated expression from the regions coding for the N-terminus of PRDM9, indicating that the transgenic carries a new null Prdm9 allele. Two naturally occurring alternative Prdm9 mRNA isoforms were overexpressed in Tg(RP24-346I22), one formed via splicing to a 3'-terminal exon consisting of short interspersed element B2 and one isoform including an alternative internal exon of 28 base pairs. However, the overexpression of these alternative transcripts was apparently insufficient for Prdm9 function or for increasing the fertility of the hybrid males.
NASA Astrophysics Data System (ADS)
Yang, X.; Scheibe, T. D.; Chen, X.; Hammond, G. E.; Song, X.
2015-12-01
The zone in which river water and groundwater mix plays an important role in natural ecosystems as it regulates the mixing of nutrients that control biogeochemical transformations. Subsurface heterogeneity leads to local hotspots of microbial activity that are important to system function yet difficult to resolve computationally. To address this challenge, we are testing a hybrid multiscale approach that couples models at two distinct scales, based on field research at the U. S. Department of Energy's Hanford Site. The region of interest is a 400 x 400 x 20 m macroscale domain that intersects the aquifer and the river and contains a contaminant plume. However, biogeochemical activity is high in a thin zone (mud layer, <1 m thick) immediately adjacent to the river. This microscale domain is highly heterogeneous and requires fine spatial resolution to adequately represent the effects of local mixing on reactions. It is not computationally feasible to resolve the full macroscale domain at the fine resolution needed in the mud layer, and the reaction network needed in the mud layer is much more complex than that needed in the rest of the macroscale domain. Hence, a hybrid multiscale approach is used to efficiently and accurately predict flow and reactive transport at both scales. In our simulations, models at both scales are simulated using the PFLOTRAN code. Multiple microscale simulations in dynamically defined sub-domains (fine resolution, complex reaction network) are executed and coupled with a macroscale simulation over the entire domain (coarse resolution, simpler reaction network). The objectives of the research include: 1) comparing accuracy and computing cost of the hybrid multiscale simulation with a single-scale simulation; 2) identifying hot spots of microbial activity; and 3) defining macroscopic quantities such as fluxes, residence times and effective reaction rates.
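A conceptual one-dimensional toy of the two-way coupling is sketched below: a coarse macroscale diffusion grid spans the whole domain, while a refined microscale sub-grid with an extra first-order reaction stands in for the thin mud layer; at each macro step the micro model is driven by the neighbouring macro cell and its average is passed back. Grids, rates, and the exchange scheme are hypothetical; the actual study couples PFLOTRAN simulations at both scales.

```python
import numpy as np

L, n_macro, refine = 100.0, 20, 10
dx = L / n_macro                  # coarse spacing (m)
dxf = dx / refine                 # fine spacing inside the mud layer
D, dt, k_rxn = 0.1, 10.0, 0.02    # diffusivity (m^2/d), macro step (d), reaction rate (1/d)
n_sub = 16                        # micro sub-steps per macro step (explicit stability)

c = np.zeros(n_macro)             # macroscale concentration
c[-1] = 1.0                       # fixed concentration at the river boundary
cf = np.zeros(refine)             # microscale cells resolving macro cell 0 (the mud layer)

for step in range(2000):
    # 1) Macroscale diffusion (coarse resolution, no reaction network).
    lap = np.zeros_like(c)
    lap[1:-1] = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx ** 2
    c[1:-1] += dt * D * lap[1:-1]
    c[-1] = 1.0
    # 2) Microscale model in the mud layer: finer grid plus first-order consumption,
    #    driven by the neighbouring macro cell as its boundary condition.
    for _ in range(n_sub):
        cf_ext = np.concatenate(([cf[0]], cf, [c[1]]))      # no-flux left, macro BC right
        cf += (dt / n_sub) * (D * (cf_ext[2:] - 2 * cf + cf_ext[:-2]) / dxf ** 2 - k_rxn * cf)
    # 3) Upscale: the macro mud cell carries the micro-grid average back to the macro model.
    c[0] = cf.mean()

print("macro profile:", np.round(c, 3))
```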
Chang, Zhenyi; Chen, Zhufeng; Wang, Na; Xie, Gang; Lu, Jiawei; Yan, Wei; Zhou, Junli; Tang, Xiaoyan; Deng, Xing Wang
2016-12-06
The breeding and large-scale adoption of hybrid seeds is an important achievement in agriculture. Rice hybrid seed production uses cytoplasmic male sterile lines or photoperiod/thermo-sensitive genic male sterile lines (PTGMS) as female parent. Cytoplasmic male sterile lines are propagated via cross-pollination by corresponding maintainer lines, whereas PTGMS lines are propagated via self-pollination under environmental conditions restoring male fertility. Despite huge successes, both systems have their intrinsic drawbacks. Here, we constructed a rice male sterility system using a nuclear gene named Oryza sativa No Pollen 1 (OsNP1). OsNP1 encodes a putative glucose-methanol-choline oxidoreductase regulating tapetum degeneration and pollen exine formation; it is specifically expressed in the tapetum and miscrospores. The osnp1 mutant plant displays normal vegetative growth but complete male sterility insensitive to environmental conditions. OsNP1 was coupled with an α-amylase gene to devitalize transgenic pollen and the red fluorescence protein (DsRed) gene to mark transgenic seed and transformed into the osnp1 mutant. Self-pollination of the transgenic plant carrying a single hemizygous transgene produced nontransgenic male sterile and transgenic fertile seeds in 1:1 ratio that can be sorted out based on the red fluorescence coded by DsRed Cross-pollination of the fertile transgenic plants to the nontransgenic male sterile plants propagated the male sterile seeds of high purity. The male sterile line was crossed with ∼1,200 individual rice germplasms available. Approximately 85% of the F1s outperformed their parents in per plant yield, and 10% out-yielded the best local cultivars, indicating that the technology is promising in hybrid rice breeding and production.
Known-plaintext attack on a joint transform correlator encrypting system.
Barrera, John Fredy; Vargas, Carlos; Tebaldi, Myrian; Torroba, Roberto; Bolognini, Nestor
2010-11-01
We demonstrate in this Letter that a joint transform correlator shows vulnerability to known-plaintext attacks. An unauthorized user who intercepts both an object and its encrypted version can obtain the security key code mask. In this contribution, we apply a hybrid heuristic attack scheme merged with a Gerchberg-Saxton routine to estimate the encrypting key and to decode different ciphertexts encrypted with that same key. We also analyze the success of this attack for different pairs of plaintext-ciphertext used to obtain the encrypting code. We present simulation results for the decrypting procedure to demonstrate the validity of our analysis.
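The phase-retrieval core of such an attack is the Gerchberg-Saxton iteration; a generic NumPy sketch is given below, in which amplitude constraints are enforced alternately in the object and Fourier planes to estimate an unknown phase (standing in, very loosely, for the key code mask). The setup is a simplified stand-in and does not model the joint transform correlator geometry or the heuristic search described in the Letter.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 64
true_phase = rng.uniform(0, 2 * np.pi, (N, N))       # unknown phase to be estimated
object_amp = np.ones((N, N))                          # known amplitude in the object plane
fourier_amp = np.abs(np.fft.fft2(object_amp * np.exp(1j * true_phase)))  # known Fourier amplitude

phase = rng.uniform(0, 2 * np.pi, (N, N))             # initial phase guess
for _ in range(200):
    field = object_amp * np.exp(1j * phase)           # enforce object-plane amplitude
    F = np.fft.fft2(field)
    F = fourier_amp * np.exp(1j * np.angle(F))        # enforce Fourier-plane amplitude
    phase = np.angle(np.fft.ifft2(F))                 # keep only the retrieved phase

err = np.linalg.norm(fourier_amp - np.abs(np.fft.fft2(object_amp * np.exp(1j * phase))))
print(f"Fourier-amplitude mismatch after GS: {err / np.linalg.norm(fourier_amp):.3e}")
```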
Supporting Operational Data Assimilation Capabilities to the Research Community
NASA Astrophysics Data System (ADS)
Shao, H.; Hu, M.; Stark, D. R.; Zhou, C.; Beck, J.; Ge, G.
2017-12-01
The Developmental Testbed Center (DTC), in partnership with the National Centers for Environmental Prediction (NCEP) and other operational and research institutions, provides operational data assimilation capabilities to the research community and helps transition research advances to operations. The primary data assimilation systems currently supported by the DTC are the Gridpoint Statistical Interpolation (GSI) system and the National Oceanic and Atmospheric Administration (NOAA) Ensemble Kalman Filter (EnKF) system. GSI is a variational system used for daily operations at NOAA, NCEP, the National Aeronautics and Space Administration, and other operational agencies. Recently, GSI has evolved into a four-dimensional EnVar system. Since 2009, the DTC has been releasing the GSI code to the research community annually and providing user support. In addition to GSI, the DTC began supporting the ensemble-based EnKF data assimilation system in 2015. EnKF shares the observation operator with GSI and can therefore, like GSI, assimilate both conventional and non-conventional data (e.g., satellite radiance). Currently, EnKF is being implemented as part of the GSI-based hybrid EnVar system for NCEP Global Forecast System operations. This paper summarizes the current code management and support framework for these two systems, followed by a description of available community services and facilities. Also presented is the pathway for researchers to contribute their developments to the daily operations of these data assimilation systems.
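For readers new to the ensemble component, a minimal stochastic EnKF analysis step (perturbed observations, linear observation operator) is sketched below with NumPy. The state size, ensemble size, and observation operator are hypothetical, and this is not the GSI or operational EnKF code.

```python
import numpy as np

rng = np.random.default_rng(42)

def enkf_analysis(ensemble, obs, H, obs_err_std):
    """One stochastic EnKF analysis step with perturbed observations.
    `ensemble` is (n_state, n_members); H is a linear observation operator."""
    n_state, n_members = ensemble.shape
    x_mean = ensemble.mean(axis=1, keepdims=True)
    X = (ensemble - x_mean) / np.sqrt(n_members - 1)       # state perturbations
    HX = H @ X                                              # observation-space perturbations
    R = np.eye(len(obs)) * obs_err_std ** 2
    K = X @ HX.T @ np.linalg.inv(HX @ HX.T + R)             # Kalman gain from ensemble covariances
    perturbed_obs = obs[:, None] + obs_err_std * rng.normal(size=(len(obs), n_members))
    return ensemble + K @ (perturbed_obs - H @ ensemble)

# Tiny synthetic example: 3 state variables, the first two observed.
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
truth = np.array([1.0, -2.0, 0.5])
prior = truth[:, None] + rng.normal(scale=1.0, size=(3, 40))
obs = H @ truth + 0.1 * rng.normal(size=2)
posterior = enkf_analysis(prior, obs, H, obs_err_std=0.1)
print("prior mean    :", prior.mean(axis=1))
print("posterior mean:", posterior.mean(axis=1))
```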
Evaluation of the OpenCL AES Kernel using the Intel FPGA SDK for OpenCL
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Zheming; Yoshii, Kazutomo; Finkel, Hal
The OpenCL standard is an open programming model for accelerating algorithms on heterogeneous computing systems. OpenCL extends the C-based programming language for developing portable codes on different platforms such as CPUs, graphics processing units (GPUs), digital signal processors (DSPs), and field programmable gate arrays (FPGAs). The Intel FPGA SDK for OpenCL is a suite of tools that allows developers to abstract away the complex FPGA-based development flow for a high-level software development flow. Users can focus on the design of hardware-accelerated kernel functions in OpenCL and then direct the tools to generate the low-level FPGA implementations. The approach makes FPGA-based development more accessible to software users as the need for hybrid computing using CPUs and FPGAs increases. It can also significantly reduce the hardware development time, as users can evaluate different ideas with a high-level language without deep FPGA domain knowledge. In this report, we evaluate the performance of the OpenCL AES kernel using the Intel FPGA SDK for OpenCL and a Nallatech 385A FPGA board. Compared to the M506 module, the board provides more hardware resources for a larger design exploration space. The kernel performance is measured with the compute kernel throughput, an upper bound to the FPGA throughput. The report presents the experimental results in detail. The Appendix lists the kernel source code.
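The compute-kernel-throughput metric quoted in the report can be reproduced from two numbers, the amount of data the kernel processes and its execution time; the snippet below shows the arithmetic with hypothetical values rather than the measured results.

```python
# Hypothetical figures for illustrating the compute-kernel-throughput metric
# (the upper bound on FPGA throughput); these are not the report's measurements.
blocks = 1 << 24                 # number of 16-byte AES blocks processed by the kernel
kernel_time_s = 0.85             # kernel execution time reported by the OpenCL profiler
bytes_processed = blocks * 16
print(f"compute kernel throughput: {bytes_processed / kernel_time_s / 1e9:.2f} GB/s")
```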
Towards fault tolerant adiabatic quantum computation.
Lidar, Daniel A
2008-04-25
I show how to protect adiabatic quantum computation (AQC) against decoherence and certain control errors, using a hybrid methodology involving dynamical decoupling, subsystem and stabilizer codes, and energy gaps. Corresponding error bounds are derived. As an example, I show how to perform decoherence-protected AQC against local noise using at most two-body interactions.
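Of the three ingredients, the energy-gap one is easy to visualize numerically: the sketch below tracks the gap between the two lowest eigenvalues of a toy 3-qubit interpolating Hamiltonian H(s) = (1-s)H_B + sH_P. The Hamiltonians are arbitrary illustrative choices, and the dynamical-decoupling and stabilizer/subsystem-code ingredients are not modeled here.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=float)
Z = np.array([[1, 0], [0, -1]], dtype=float)
I2 = np.eye(2)

def kron_all(ops):
    """Tensor product of a list of single-qubit operators."""
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

n = 3
# Standard transverse-field driver Hamiltonian.
H_B = -sum(kron_all([X if i == j else I2 for i in range(n)]) for j in range(n))
# Toy problem Hamiltonian: ZZ chain plus a small field to lift the ground-state degeneracy.
H_P = sum(kron_all([Z if i in (j, j + 1) else I2 for i in range(n)]) for j in range(n - 1))
H_P = H_P + 0.3 * kron_all([Z, I2, I2])

for s in np.linspace(0.0, 1.0, 6):
    evals = np.linalg.eigvalsh((1 - s) * H_B + s * H_P)
    print(f"s = {s:.1f}  gap = {evals[1] - evals[0]:.4f}")
```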
What Four Skills? Redefining Language and Literacy Standards for ELT in the Digital Era
ERIC Educational Resources Information Center
Lotherington, Heather
2004-01-01
Over the last 15 years, the rapid development of information and communication technologies (ICT) has facilitated a revolution in how we use language. Online environments have facilitated creative and variable spelling using code hybridization and stylistic use of mechanical conventions such as punctuation and capitalization, lexical coinages, new…
ERIC Educational Resources Information Center
Martin, Nelly
2017-01-01
This study explores the relationship between language selection and identity construction in contemporary Indonesia through an examination of the function of English, a language that still receives stigma from many Indonesians and the government, particularly in Indonesian popular texts published after 1998. Utilizing hybrid critical approaches…
40 CFR 51.120 - Requirements for State Implementation Plan revisions relating to new motor vehicles.
Code of Federal Regulations, 2010 CFR
2010-07-01
.... (vii) The provisions for hybrid electric vehicles (HEVs), as defined in Title 13 California Code of... Plan revisions relating to new motor vehicles. 51.120 Section 51.120 Protection of Environment... revisions relating to new motor vehicles. (a) The EPA Administrator finds that the State Implementation...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-02
.... Monitoring Impact of FY 2012 Policy Changes and Certain SNF Practices A. RUG Distributions B. Group Therapy... Common Procedure Coding System HR-III Hybrid Resource Utilization Groups, Version 3 IHS IGI (Information... OCN OMB Control Number OMB Office of Management and Budget OMRA Other Medicare-Required Assessment PPS...