Jiang, Xiaoou; Yu, Han; Teo, Cui Rong; Tan, Genim Siu Xian; Goh, Sok Chin; Patel, Parasvi; Chua, Yiqiang Kevin; Hameed, Nasirah Banu Sahul; Bertoletti, Antonio; Patzel, Volker
2016-09-01
Dumbbell-shaped DNA minimal vectors lacking nontherapeutic genes and bacterial sequences are considered a stable, safe alternative to viral, nonviral, and naked plasmid-based gene-transfer systems. We investigated novel molecular features of dumbbell vectors, aiming to reduce vector size and to improve the expression of noncoding or coding RNA. We minimized small hairpin RNA (shRNA)- or microRNA (miRNA)-expressing dumbbell vectors in size down to 130 bp, generating the smallest genetic expression vectors reported. This was achieved by using a minimal H1 promoter with an integrated transcriptional terminator transcribing the RNA hairpin structure around the dumbbell loop. Such vectors were generated with high conversion yields using a novel protocol. Minimized shRNA-expressing dumbbells showed accelerated kinetics of delivery and transcription, leading to enhanced gene silencing in human tissue culture cells. In primary human T cells, minimized miRNA-expressing dumbbells revealed higher stability and triggered stronger target gene suppression as compared with plasmids and miRNA mimics. Dumbbell-driven gene expression was enhanced up to 56- or 160-fold by implementation of an intron and the SV40 enhancer compared with control dumbbells or plasmids. Advanced dumbbell vectors may represent one option to close the gap between the durable expression achievable with integrating viral vectors and the short-term effects triggered by naked RNA.
Moura, Felipe Arruda; van Emmerik, Richard E A; Santana, Juliana Exel; Martins, Luiz Eduardo Barreto; Barros, Ricardo Machado Leite de; Cunha, Sergio Augusto
2016-12-01
The purpose of this study was to investigate the coordination between the spreads of opposing teams during football matches using cross-correlation and vector coding techniques. Using a video-based tracking system, we obtained the trajectories of 257 players during 10 matches. Team spread was calculated as a function of time. For a general description of coordination, we calculated the cross-correlation between the signals. Vector coding was used to identify the coordination patterns between teams during offensive sequences that ended in shots on goal or defensive tackles. Cross-correlation showed that opposing teams have a tendency to present in-phase coordination, with a short time lag. During offensive sequences, vector coding results showed that, although in-phase coordination dominated, other patterns were observed. We verified that, during the early stages, offensive sequences ending in shots on goal present greater anti-phase and attacking-team-phase periods compared to sequences ending in tackles. The results suggest that the attacking team may seek to behave contrary to its opponent (or may lead the adversary's behaviour) at the beginning of the attacking play, with regard to the distribution strategy, to increase the chances of a shot on goal. The techniques allowed the detection of coordination patterns between teams, providing additional information about football dynamics and players' interaction.
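For readers unfamiliar with the two techniques named in this abstract, a minimal sketch in C follows; the series layout, the normalization, and the reading of the coupling angle are illustrative assumptions rather than the authors' implementation.

    #include <math.h>

    /* Normalized cross-correlation between two team-spread series at a given lag. */
    double xcorr(const double *a, const double *b, int n, int lag) {
        double ma = 0, mb = 0, num = 0, va = 0, vb = 0;
        for (int i = 0; i < n; i++) { ma += a[i]; mb += b[i]; }
        ma /= n; mb /= n;
        for (int i = 0; i < n; i++) {
            int j = i + lag;
            if (j >= 0 && j < n)
                num += (a[i] - ma) * (b[j] - mb);
            va += (a[i] - ma) * (a[i] - ma);
            vb += (b[i] - mb) * (b[i] - mb);
        }
        return num / sqrt(va * vb);
    }

    /* Vector-coding coupling angle (degrees, 0..360) between consecutive samples:
       ~45 deg means both spreads grow together (in-phase), ~225 deg both shrink;
       ~135 or ~315 deg indicate anti-phase coordination. */
    double coupling_angle(double a0, double a1, double b0, double b1) {
        double ang = atan2(b1 - b0, a1 - a0) * 180.0 / M_PI;
        return ang < 0 ? ang + 360.0 : ang;
    }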
The fourfold way of the genetic code.
Jiménez-Montaño, Miguel Angel
2009-11-01
We describe a compact representation of the genetic code that factorizes the table into quartets. It represents a "least grammar" for the genetic language. It is justified by the Klein-4 group structure of RNA bases and codon doublets. The matrix of the outer product between the column vector of bases and the corresponding row vector V^T = (C G U A), considered as signal vectors, has a block structure consisting of the four cosets of the K×K group of base transformations acting on doublet AA. This matrix, translated into weak/strong (W/S) and purine/pyrimidine (R/Y) nucleotide classes, leads to a code table with mixed and unmixed families in separate regions. A basic difference between them is the non-commuting (R/Y) doublets: AC/CA, GU/UG. We describe the degeneracy in the canonical code and the systematic changes in deviant codes in terms of the divisors of 24, employing modulo multiplication groups. We illustrate binary sub-codes characterizing mutations in the quartets. We introduce a decision tree to predict the mode of tRNA recognition corresponding to each codon, and compare our result with related findings by Jestin and Soulé [Jestin, J.-L., Soulé, C., 2007. Symmetries by base substitutions in the genetic code predict 2' or 3' aminoacylation of tRNAs. J. Theor. Biol. 247, 391-394], and the rearrangements of the table by Delarue [Delarue, M., 2007. An asymmetric underlying rule in the assignment of codons: possible clue to a quick early evolution of the genetic code via successive binary choices. RNA 13, 161-169] and Rodin and Rodin [Rodin, S.N., Rodin, A.S., 2008. On the origin of the genetic code: signatures of its primordial complementarity in tRNAs and aminoacyl-tRNA synthetases. Heredity 100, 341-355], respectively.
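The Klein-4 group structure invoked above can be made concrete in a few lines of C: encode each base in two bits, and the group's three nontrivial base transformations become XOR masks. The particular bit assignment below is an illustrative choice, not one fixed by the paper.

    #include <stdio.h>

    /* Two-bit encoding of RNA bases: C=00, G=01, U=10, A=11 (illustrative).
       The four XOR masks 00, 01, 10, 11 act as the identity and the three
       base-exchanging transformations; every mask is its own inverse and
       any two nontrivial masks compose to the third, which is exactly the
       Klein four-group structure. */
    static const char BASE[4] = {'C', 'G', 'U', 'A'};

    int main(void) {
        for (int mask = 0; mask < 4; mask++) {    /* group elements as XOR masks */
            printf("mask %d%d:", mask >> 1, mask & 1);
            for (int b = 0; b < 4; b++)
                printf("  %c->%c", BASE[b], BASE[b ^ mask]);
            printf("\n");
        }
        return 0;
    }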
Vector processing efficiency of plasma MHD codes by use of the FACOM 230-75 APU
NASA Astrophysics Data System (ADS)
Matsuura, T.; Tanaka, Y.; Naraoka, K.; Takizuka, T.; Tsunematsu, T.; Tokuda, S.; Azumi, M.; Kurita, G.; Takeda, T.
1982-06-01
In the framework of pipelined vector architecture, the efficiency of vector processing is assessed with respect to plasma MHD codes in nuclear fusion research. By using a vector processor, the FACOM 230-75 APU, the limit of the enhancement factor due to parallelism of current vector machines is examined for three numerical codes based on a fluid model. Reasonable speed-up factors of approximately 6, 6, and 4 times faster than the highly optimized scalar version are obtained for ERATO (linear stability code), AEOLUS-R1 (nonlinear stability code), and APOLLO (1-1/2D transport code), respectively. Problems of the pipelined vector processors are discussed from the viewpoint of restructuring, optimization, and choice of algorithms. In conclusion, the important concept of "concurrency within pipelined parallelism" is emphasized.
Accelerated Modeling and New Ferroelectric Materials for Naval SONAR
2004-06-01
...on other platforms was achieved. As expected, proper vectorization and optimal memory usage were... Once the code was fully vectorized, a speed-up of 9.2 times over the Pentium 4 Xeon and 6.6 times over the SGI O3K was achieved. We are currently using the X1 in production... ...into BZ leads to a development of small polarization... The polarization is due to a combination of large Ag off-centering and small displacements by the other cations. The large Ag displacements are due to...
Flavivirus RNAi suppression: decoding non-coding RNA.
Pijlman, Gorben P
2014-08-01
Flaviviruses are important human pathogens that are transmitted by invertebrate vectors, mostly mosquitoes and ticks. During replication in their vector, flaviviruses are subject to a potent innate immune response known as antiviral RNA interference (RNAi). This defense mechanism is associated with the production of small interfering (si)RNAs that lead to the degradation of viral RNA. To what extent flaviviruses would benefit from counteracting antiviral RNAi is a subject of debate. Here, the experimental evidence suggesting the existence of flavivirus RNAi suppressors is discussed. I will highlight the putative role of non-coding, subgenomic flavivirus RNA in suppression of RNAi in insect and mammalian cells. Novel insights from ongoing research will reveal how arthropod-borne viruses modulate innate immunity including antiviral RNAi. Copyright © 2014 Elsevier B.V. All rights reserved.
Liakhovetskiĭ, V A; Bobrova, E V; Skopin, G N
2012-01-01
Transposition errors during the reproduction of a hand movement sequence make it possible to obtain important information on the internal representation of this sequence in motor working memory. Analysis of such errors showed that learning to reproduce sequences of left-hand movements improves the system of positional coding (coding of positions), while learning of right-hand movements improves the system of vector coding (coding of movements). Learning of right-hand movements after left-hand performance involved the system of positional coding "imposed" by the left hand. Learning of left-hand movements after right-hand performance activated the system of vector coding. Transposition errors during learning to reproduce movement sequences can be explained by a neural network using either vector coding alone or both vector and positional coding.
Performance of a three-dimensional Navier-Stokes code on CYBER 205 for high-speed juncture flows
NASA Technical Reports Server (NTRS)
Lakshmanan, B.; Tiwari, S. N.
1987-01-01
A vectorized 3D Navier-Stokes code has been implemented on the CYBER 205 for solving the supersonic laminar flow over a swept fin/flat plate junction. The code extends MacCormack's predictor-corrector finite volume scheme to a generalized coordinate system in a locally one-dimensional time-split fashion. A systematic parametric study is conducted to examine the effect of fin sweep on the computed flow field. Calculated results for the pressure distribution on the flat plate and fin leading edge are compared with the experimental measurements of a right-angle blunt fin/flat plate junction. The decrease in the extent of the separated flow region and in the peak pressure on the fin leading edge, and the weakening of the two reversed supersonic zones with increasing fin sweep, have been clearly observed in the numerical simulation.
Constrained motion estimation-based error resilient coding for HEVC
NASA Astrophysics Data System (ADS)
Guo, Weihan; Zhang, Yongfei; Li, Bo
2018-04-01
Unreliable communication channels might lead to packet losses and bit errors in the videos transmitted through them, which will cause severe video quality degradation. This is even worse for HEVC, since more advanced and powerful motion estimation methods are introduced to further remove the inter-frame dependency and thus improve the coding efficiency. Once a Motion Vector (MV) is lost or corrupted, it will cause distortion in the decoded frame. More importantly, due to motion compensation, the error will propagate along the motion prediction path, accumulate over time, and significantly degrade the overall video presentation quality. To address this problem, we study the problem of encoder-side error resilient coding for HEVC and propose a constrained motion estimation scheme to mitigate the problem of error propagation to subsequent frames. The approach is achieved by cutting off MV dependencies and limiting the block regions which are predicted by the temporal motion vector. The experimental results show that the proposed method can effectively suppress the error propagation caused by bit errors of the motion vector and can improve the robustness of the stream in bit-error channels. When the bit error probability is 10^-5, an increase of the decoded video quality (PSNR) by up to 1.310 dB, and on average 0.762 dB, can be achieved, compared to the reference HEVC.
Interframe vector wavelet coding technique
NASA Astrophysics Data System (ADS)
Wus, John P.; Li, Weiping
1997-01-01
Wavelet coding is often used to divide an image into multi-resolution wavelet coefficients which are quantized and coded. By 'vectorizing' scalar wavelet coding and combining this with vector quantization (VQ), vector wavelet coding (VWC) can be implemented. Using a finite number of states, finite-state vector quantization (FSVQ) takes advantage of the similarity between frames by incorporating memory into the video coding system. Lattice VQ eliminates the potential mismatch that could occur using pre-trained VQ codebooks. It also eliminates the need for codebook storage in the VQ process, thereby creating a more robust coding system. Therefore, by using the VWC coding method in conjunction with the FSVQ system and lattice VQ, the formulation of a high-quality, very low bit rate coding system is proposed. A coding system using a simple FSVQ scheme, where the current state is determined by the previous channel symbol only, is developed. To achieve a higher degree of compression, a tree-like FSVQ system is implemented. The groupings are done in this tree-like structure from the lower subbands to the higher subbands in order to exploit the nature of subband analysis in terms of the parent-child relationship. Class A and Class B video sequences from the MPEG-4 testing evaluations are used in the evaluation of this coding method.
Experience with a vectorized general circulation weather model on Star-100
NASA Technical Reports Server (NTRS)
Soll, D. B.; Habra, N. R.; Russell, G. L.
1977-01-01
A version of an atmospheric general circulation model was vectorized to run on a CDC STAR 100. The numerical model was coded and run in two different vector languages, CDC and LRLTRAN. A factor of 10 speed improvement over an IBM 360/95 was realized. Efficient use of the STAR machine required some redesigning of algorithms and logic. This precludes the application of vectorizing compilers on the original scalar code to achieve the same results. Vector languages permit a more natural and efficient formulation for such numerical codes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharma, Vishal C.; Gopalakrishnan, Ganesh; Krishnamoorthy, Sriram
The systems resilience research community has developed methods to manually insert additional source-program level assertions to trap errors, and has also devised tools to conduct fault injection studies for scalar program codes. In this work, we contribute the first vector-oriented LLVM-level fault injector, VULFI, to help study the effects of faults in vector architectures that are of growing importance, especially for vectorizing loops. Using VULFI, we conduct a resiliency study of nine real-world vector benchmarks using Intel's AVX and SSE extensions as the target vector instruction sets, and offer the first reported understanding of how faults affect vector instruction sets. We take this work further toward automating the insertion of resilience assertions during compilation. This is based on our observation that, during intermediate (e.g., LLVM-level) code generation to handle full and partial vectorization, modern compilers exploit (and explicate in their code documentation) critical invariants. These invariants are turned into error-checking code. We confirm the efficacy of these automatically inserted low-overhead error detectors for vectorized for-loops.
Image Coding Based on Address Vector Quantization.
NASA Astrophysics Data System (ADS)
Feng, Yushu
Image coding is finding increased application in teleconferencing, archiving, and remote sensing. This thesis investigates the potential of Vector Quantization (VQ), a relatively new source coding technique, for compression of monochromatic and color images. Extensions of the Vector Quantization technique to the Address Vector Quantization method have been investigated. In Vector Quantization, the image data to be encoded are first processed to yield a set of vectors. A codeword from the codebook which best matches the input image vector is then selected. Compression is achieved by replacing the image vector with the index of the codeword which produced the best match; this index is sent to the channel. Reconstruction of the image is done by using a table lookup technique, where the label is simply used as an address for a table containing the representative vectors. A codebook of representative vectors (codewords) is generated using an iterative clustering algorithm such as K-means, or the generalized Lloyd algorithm. A review of different Vector Quantization techniques is given in chapter 1. Chapter 2 gives an overview of codebook design methods, including the use of the Kohonen neural network for codebook design. During the encoding process, the correlation of the addresses is considered, and Address Vector Quantization is developed for color-image and monochrome-image coding. Address VQ, which includes static and dynamic processes, is introduced in chapter 3. In order to overcome the problems in Hierarchical VQ, Multi-layer Address Vector Quantization is proposed in chapter 4. This approach gives the same performance as the normal VQ scheme, but at a bit rate about 1/2 to 1/3 that of the normal VQ method. In chapter 5, a Dynamic Finite State VQ, based on a probability transition matrix to select the best subcodebook to encode the image, is developed. In chapter 6, a new adaptive vector quantization scheme suitable for color video coding, called "A Self-Organizing Adaptive VQ Technique," is presented. In addition to chapters 2 through 6, which report on new work, this dissertation includes one chapter (chapter 1) and part of chapter 2 which review previous work on VQ and image coding, respectively. Finally, a short discussion of directions for further research is presented in conclusion.
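A bare-bones C sketch of the basic VQ encode/decode loop described in this abstract (exhaustive nearest-codeword search, index transmission, table-lookup reconstruction); the block dimension is a placeholder, and no address-prediction stage is shown.

    #include <float.h>

    #define DIM 16   /* e.g., 4x4 image blocks flattened to vectors */

    /* Exhaustive codebook search: return the index of the codeword with
       least squared-error distortion; this index is what gets transmitted. */
    int vq_encode(const double *x, const double codebook[][DIM], int nwords) {
        int best = 0;
        double best_d = DBL_MAX;
        for (int i = 0; i < nwords; i++) {
            double d = 0.0;
            for (int k = 0; k < DIM; k++) {
                double e = x[k] - codebook[i][k];
                d += e * e;
            }
            if (d < best_d) { best_d = d; best = i; }
        }
        return best;
    }

    /* Decoding is a pure table lookup: the label addresses the codebook. */
    void vq_decode(int index, const double codebook[][DIM], double *out) {
        for (int k = 0; k < DIM; k++)
            out[k] = codebook[index][k];
    }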
Vector Adaptive/Predictive Encoding Of Speech
NASA Technical Reports Server (NTRS)
Chen, Juin-Hwey; Gersho, Allen
1989-01-01
Vector adaptive/predictive technique for digital encoding of speech signals yields decoded speech of very good quality after transmission at a coding rate of 9.6 kb/s and of reasonably good quality at 4.8 kb/s. Requires only 3 to 4 million multiplications and additions per second. Combines advantages of adaptive/predictive coding and of code-excited linear prediction, which yields speech of high quality but requires 600 million multiplications and additions per second at an encoding rate of 4.8 kb/s. Vector adaptive/predictive coding technique thus bridges the gaps in performance and complexity between adaptive/predictive coding and code-excited linear prediction.
A Performance Evaluation of the Cray X1 for Scientific Applications
NASA Technical Reports Server (NTRS)
Oliker, Leonid; Biswas, Rupak; Borrill, Julian; Canning, Andrew; Carter, Jonathan; Djomehri, M. Jahed; Shan, Hongzhang; Skinner, David
2004-01-01
The last decade has witnessed a rapid proliferation of superscalar cache-based microprocessors to build high-end capability and capacity computers because of their generality, scalability, and cost effectiveness. However, the recent development of massively parallel vector systems is having a significant effect on the supercomputing landscape. In this paper, we compare the performance of the recently released Cray X1 vector system with that of the cacheless NEC SX-6 vector machine, and the superscalar cache-based IBM Power3 and Power4 architectures for scientific applications. Overall results demonstrate that the X1 is quite promising, but performance improvements are expected as the hardware, systems software, and numerical libraries mature. Code reengineering to effectively utilize the complex architecture may also lead to significant efficiency enhancements.
Vector and Raster Data Storage Based on Morton Code
NASA Astrophysics Data System (ADS)
Zhou, G.; Pan, Q.; Yue, T.; Wang, Q.; Sha, H.; Huang, S.; Liu, X.
2018-05-01
Even though geomatics is highly developed nowadays, the integration of spatial data in vector and raster formats is still a very tricky problem in geographic information system environments, and there is still no satisfactory way to solve it. This article proposes a method to integrate vector data and raster data. In this paper, we saved the image data and building vector data of Guilin University of Technology to an Oracle database, then used the ADO interface to connect the database to Visual C++ and converted the row and column numbers of the raster data and the X/Y coordinates of the vector data to Morton codes in the Visual C++ environment. This method stores vector and raster data in an Oracle database and uses the Morton code, instead of row/column numbers and X/Y coordinates, to mark the position information of vector and raster data. Using Morton codes to mark geographic information enables data storage to make full use of storage space, makes simultaneous analysis of vector and raster data more efficient, and makes visualization of vector and raster data more intuitive. This method is very helpful in situations that require analysing or displaying vector and raster data at the same time.
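The core operation the method relies on, mapping a (row, column) or (X, Y) pair to a single Morton key, is bit interleaving; a C sketch follows, assuming 16-bit coordinates packed into a 32-bit key.

    #include <stdint.h>

    /* Spread the low 16 bits of v so a zero bit separates each original bit. */
    static uint32_t part1by1(uint32_t v) {
        v &= 0x0000FFFFu;
        v = (v | (v << 8)) & 0x00FF00FFu;
        v = (v | (v << 4)) & 0x0F0F0F0Fu;
        v = (v | (v << 2)) & 0x33333333u;
        v = (v | (v << 1)) & 0x55555555u;
        return v;
    }

    /* Morton (Z-order) code: interleave the bits of x (column or X) and
       y (row or Y). Nearby cells in 2-D tend to receive nearby codes,
       which is what lets one key serve both raster cells and vector
       coordinates in the same table. */
    uint32_t morton2d(uint16_t x, uint16_t y) {
        return part1by1(x) | (part1by1(y) << 1);
    }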
Peng, Hui; Lan, Chaowang; Liu, Yuansheng; Liu, Tao; Blumenstein, Michael; Li, Jinyan
2017-10-03
Disease-related protein-coding genes have been widely studied, but disease-related non-coding genes remain largely unknown. This work introduces a new vector to represent diseases, and applies the newly vectorized data in a positive-unlabeled learning algorithm to predict and rank disease-related long non-coding RNA (lncRNA) genes. This novel vector representation for diseases consists of two sub-vectors: the first is composed of 45 elements, characterizing the information entropies of the disease-gene distribution over 45 chromosome substructures. This idea is supported by our observation that some substructures (e.g., the chromosome 6 p-arm) are highly preferred by disease-related protein-coding genes, while some (e.g., the 21 p-arm) are not favored at all. The second sub-vector is 30-dimensional, characterizing the distribution of disease-gene-enriched KEGG pathways in comparison with our manually created pathway groups. The second sub-vector complements the first one to differentiate between various diseases. Our prediction method outperforms the state-of-the-art methods on benchmark datasets for prioritizing disease-related lncRNA genes. The method also works well when only the sequence information of an lncRNA gene is known, or even when a given disease has no currently recognized long non-coding genes.
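As a sketch of how the 45-element entropy sub-vector could be assembled, the C fragment below computes one information-entropy term per chromosome substructure from disease-gene counts; treating each element as the term -p*log2(p) is our reading of the abstract, not a specification from the paper.

    #include <math.h>

    #define NSUB 45   /* chromosome substructures (arms, etc.) */

    /* Fill out[i] with the entropy term -p_i * log2(p_i), where p_i is the
       fraction of the disease's genes falling in substructure i. Empty
       substructures contribute 0 by convention, so strongly "preferred"
       substructures shape the vector while unused ones stay silent. */
    void entropy_subvector(const int counts[NSUB], double out[NSUB]) {
        long total = 0;
        for (int i = 0; i < NSUB; i++) total += counts[i];
        for (int i = 0; i < NSUB; i++) {
            if (counts[i] == 0 || total == 0) { out[i] = 0.0; continue; }
            double p = (double)counts[i] / (double)total;
            out[i] = -p * log2(p);
        }
    }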
Fast image interpolation for motion estimation using graphics hardware
NASA Astrophysics Data System (ADS)
Kelly, Francis; Kokaram, Anil
2004-05-01
Motion estimation and compensation is the key to high quality video coding. Block matching motion estimation is used in most video codecs, including MPEG-2, MPEG-4, H.263 and H.26L. Motion estimation is also a key component in the digital restoration of archived video and for post-production and special effects in the movie industry. Sub-pixel accurate motion vectors can improve the quality of the vector field and lead to more efficient video coding. However, sub-pixel accuracy requires interpolation of the image data. Image interpolation is a key requirement of many image processing algorithms. Often interpolation can be a bottleneck in these applications, especially in motion estimation, due to the large number of pixels involved. In this paper we propose using commodity computer graphics hardware for fast image interpolation. We use the full search block matching algorithm to illustrate the problems and limitations of using graphics hardware in this way.
Multiprocessing MCNP on an IBM RS/6000 cluster
DOE Office of Scientific and Technical Information (OSTI.GOV)
McKinney, G.W.; West, J.T.
1993-01-01
The advent of high-performance computer systems has brought to maturity programming concepts like vectorization, multiprocessing, and multitasking. While there are many schools of thought as to the most significant factor in obtaining order-of-magnitude increases in performance, such speedup can only be achieved by integrating the computer system and application code. Vectorization leads to faster manipulation of arrays by overlapping instruction CPU cycles. Discrete ordinates codes, which require the solving of large matrices, have proved to be major benefactors of vectorization. Monte Carlo transport, on the other hand, typically contains numerous logic statements and requires extensive redevelopment to benefit from vectorization. Multiprocessing and multitasking provide additional CPU cycles via multiple processors. Such systems are generally designed with either common memory access (multitasking) or distributed memory access. In both cases, theoretical speedup, as a function of the number of processors (P) and the fraction of task time that multiprocesses (f), can be formulated using Amdahl's Law: S(f, P) = 1/(1 - f + f/P). However, for most applications this theoretical limit cannot be achieved, due to additional terms (e.g., multitasking overhead, memory overlap) not included in Amdahl's Law. Monte Carlo transport is a natural candidate for multiprocessing, since the particle tracks are generally independent and the precision of the result increases as the square root of the number of particles tracked.
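Amdahl's Law as quoted is easy to tabulate; the short C program below evaluates S(f, P) and shows the saturation at 1/(1 - f) that limits multiprocessed Monte Carlo runs.

    #include <stdio.h>

    /* Amdahl's Law: speedup when a fraction f of the task time can use P
       processors and the remaining (1 - f) stays serial. */
    double amdahl(double f, int p) {
        return 1.0 / ((1.0 - f) + f / p);
    }

    int main(void) {
        /* e.g., 95% of a run multiprocesses: speedup saturates near
           1/(1 - f) = 20 no matter how many processors are added. */
        for (int p = 1; p <= 64; p *= 2)
            printf("P=%2d  S=%5.2f\n", p, amdahl(0.95, p));
        return 0;
    }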
Global MHD simulation of magnetosphere using HPF
NASA Astrophysics Data System (ADS)
Ogino, T.
We have translated a 3-dimensional magnetohydrodynamic (MHD) simulation code of the Earth's magnetosphere from VPP Fortran to HPF/JA on the Fujitsu VPP5000/56 vector-parallel supercomputer and the MHD code was fully vectorized and fully parallelized in VPP Fortran. The entire performance and capability of the HPF MHD code could be shown to be almost comparable to that of VPP Fortran. A 3-dimensional global MHD simulation of the earth's magnetosphere was performed at a speed of over 400 Gflops with an efficiency of 76.5% using 56 PEs of Fujitsu VPP5000/56 in vector and parallel computation that permitted comparison with catalog values. We have concluded that fluid and MHD codes that are fully vectorized and fully parallelized in VPP Fortran can be translated with relative ease to HPF/JA, and a code in HPF/JA may be expected to perform comparably to the same code written in VPP Fortran.
Neighboring block based disparity vector derivation for multiview compatible 3D-AVC
NASA Astrophysics Data System (ADS)
Kang, Jewon; Chen, Ying; Zhang, Li; Zhao, Xin; Karczewicz, Marta
2013-09-01
3D-AVC, being developed under the Joint Collaborative Team on 3D Video Coding (JCT-3V), significantly outperforms Multiview Video Coding plus Depth (MVC+D), which simultaneously encodes texture views and depth views with the multiview extension of H.264/AVC (MVC). However, when 3D-AVC is configured to support multiview compatibility, in which texture views are decoded without depth information, the coding performance becomes significantly degraded. The reason is that advanced coding tools incorporated into 3D-AVC do not perform well due to the lack of a disparity vector converted from the depth information. In this paper, we propose a disparity vector derivation method utilizing only the information of texture views. Motion information of neighboring blocks is used to determine a disparity vector for a macroblock, so that the derived disparity vector can be used efficiently by the coding tools in 3D-AVC. The proposed method significantly improves the coding gain of 3D-AVC in the multiview compatible mode, with about 20% BD-rate saving in the coded views and 26% BD-rate saving in the synthesized views on average.
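The derivation step can be pictured as a simple scan over neighboring blocks, as in the hedged C sketch below: the first neighbor carrying an inter-view (disparity) motion vector donates it to the current macroblock. The neighbor ordering and the zero fallback are illustrative assumptions, not the exact 3D-AVC procedure.

    #include <stdint.h>

    typedef struct {
        int16_t mvx, mvy;   /* motion vector of a neighboring block */
        int     interview;  /* nonzero if this MV points into another view */
    } BlockMV;

    /* Neighboring-block disparity vector derivation (texture-only): return
       the first disparity MV found among the spatial neighbors, scanned in
       a fixed order, else fall back to the zero vector. */
    void derive_disparity(const BlockMV *neighbors, int n,
                          int16_t *dvx, int16_t *dvy) {
        for (int i = 0; i < n; i++) {
            if (neighbors[i].interview) {
                *dvx = neighbors[i].mvx;
                *dvy = neighbors[i].mvy;
                return;
            }
        }
        *dvx = 0;  /* no neighbor carries a disparity vector */
        *dvy = 0;
    }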
Optimized scalar promotion with load and splat SIMD instructions
Eichenberger, Alexander E; Gschwind, Michael K; Gunnels, John A
2013-10-29
Mechanisms for optimizing scalar code executed on a single instruction multiple data (SIMD) engine are provided. Placement of vector operation-splat operations may be determined based on an identification of scalar and SIMD operations in an original code representation. The original code representation may be modified to insert the vector operation-splat operations based on the determined placement of vector operation-splat operations to generate a first modified code representation. Placement of separate splat operations may be determined based on identification of scalar and SIMD operations in the first modified code representation. The first modified code representation may be modified to insert or delete separate splat operations based on the determined placement of the separate splat operations to generate a second modified code representation. SIMD code may be output based on the second modified code representation for execution by the SIMD engine.
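To make the splat operation concrete: promoting a scalar into all lanes of a SIMD register is a single broadcast, as in the SSE sketch below. The loop merely illustrates one use of the splatted value; this is illustrative C, not the patented placement algorithm.

    #include <xmmintrin.h>

    /* Multiply an array by a scalar. The scalar 'a' is promoted ("splatted")
       once into all four lanes of an SSE register; the loop then processes
       four floats per iteration. Hoisting the splat out of the loop is the
       kind of placement decision the mechanism above reasons about. */
    void scale4(float *x, int n, float a) {
        __m128 va = _mm_set1_ps(a);              /* the splat */
        for (int i = 0; i + 4 <= n; i += 4) {
            __m128 vx = _mm_loadu_ps(x + i);
            _mm_storeu_ps(x + i, _mm_mul_ps(vx, va));
        }
        for (int i = n & ~3; i < n; i++)         /* scalar remainder */
            x[i] *= a;
    }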
Optimized scalar promotion with load and splat SIMD instructions
Eichenberger, Alexandre E [Chappaqua, NY; Gschwind, Michael K [Chappaqua, NY; Gunnels, John A [Yorktown Heights, NY
2012-08-28
Mechanisms for optimizing scalar code executed on a single instruction multiple data (SIMD) engine are provided. Placement of vector operation-splat operations may be determined based on an identification of scalar and SIMD operations in an original code representation. The original code representation may be modified to insert the vector operation-splat operations based on the determined placement of vector operation-splat operations to generate a first modified code representation. Placement of separate splat operations may be determined based on identification of scalar and SIMD operations in the first modified code representation. The first modified code representation may be modified to insert or delete separate splat operations based on the determined placement of the separate splat operations to generate a second modified code representation. SIMD code may be output based on the second modified code representation for execution by the SIMD engine.
Suerth, Julia D; Maetzig, Tobias; Brugman, Martijn H; Heinz, Niels; Appelt, Jens-Uwe; Kaufmann, Kerstin B; Schmidt, Manfred; Grez, Manuel; Modlich, Ute; Baum, Christopher; Schambach, Axel
2012-01-01
Comparative integrome analyses have highlighted alpharetroviral vectors with a relatively neutral, and thus favorable, integration spectrum. However, previous studies used alpharetroviral vectors harboring viral coding sequences and intact long-terminal repeats (LTRs). We recently developed self-inactivating (SIN) alpharetroviral vectors with an advanced split-packaging design. In a murine bone marrow (BM) transplantation model we now compared alpharetroviral, gammaretroviral, and lentiviral SIN vectors and showed that all vectors transduced hematopoietic stem cells (HSCs), leading to comparable, sustained multilineage transgene expression in primary and secondary transplanted mice. Alpharetroviral integrations were decreased near transcription start sites, CpG islands, and potential cancer genes compared with gammaretroviral, and decreased in genes compared with lentiviral integrations. Analyzing the transcriptome and intragenic integrations in engrafting cells, we observed stronger correlations between in-gene integration targeting and transcriptional activity for gammaretroviral and lentiviral vectors than for alpharetroviral vectors. Importantly, the relatively “extragenic” alpharetroviral integration pattern still supported long-term transgene expression upon serial transplantation. Furthermore, sensitive genotoxicity studies revealed a decreased immortalization incidence compared with gammaretroviral and lentiviral SIN vectors. We conclude that alpharetroviral SIN vectors have a favorable integration pattern which lowers the risk of insertional mutagenesis while supporting long-term transgene expression in the progeny of transplanted HSCs. PMID:22334016
A Performance Evaluation of the Cray X1 for Scientific Applications
NASA Technical Reports Server (NTRS)
Oliker, Leonid; Biswas, Rupak; Borrill, Julian; Canning, Andrew; Carter, Jonathan; Djomehri, M. Jahed; Shan, Hongzhang; Skinner, David
2003-01-01
The last decade has witnessed a rapid proliferation of superscalar cache-based microprocessors to build high-end capability and capacity computers because of their generality, scalability, and cost effectiveness. However, the recent development of massively parallel vector systems is having a significant effect on the supercomputing landscape. In this paper, we compare the performance of the recently released Cray X1 vector system with that of the cacheless NEC SX-6 vector machine, and the superscalar cache-based IBM Power3 and Power4 architectures for scientific applications. Overall results demonstrate that the X1 is quite promising, but performance improvements are expected as the hardware, systems software, and numerical libraries mature. Code reengineering to effectively utilize the complex architecture may also lead to significant efficiency enhancements.
Prediction task guided representation learning of medical codes in EHR.
Cui, Liwen; Xie, Xiaolei; Shen, Zuojun
2018-06-18
There have been rapidly growing applications using machine learning models for predictive analytics in Electronic Health Records (EHR) to improve the quality of hospital services and the efficiency of healthcare resource utilization. A fundamental and crucial step in developing such models is to convert medical codes in EHR to feature vectors. These medical codes are used to represent diagnoses or procedures. Their vector representations have a tremendous impact on the performance of machine learning models. Recently, some researchers have utilized representation learning methods from Natural Language Processing (NLP) to learn vector representations of medical codes. However, most previous approaches are unsupervised, i.e. the generation of medical code vectors is independent from prediction tasks. Thus, the obtained feature vectors may be inappropriate for a specific prediction task. Moreover, unsupervised methods often require a lot of samples to obtain reliable results, but most practical problems have very limited patient samples. In this paper, we develop a new method called Prediction Task Guided Health Record Aggregation (PTGHRA), which aggregates health records guided by prediction tasks, to construct training corpus for various representation learning models. Compared with unsupervised approaches, representation learning models integrated with PTGHRA yield a significant improvement in predictive capability of generated medical code vectors, especially for limited training samples. Copyright © 2018. Published by Elsevier Inc.
Low-rate image coding using vector quantization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makur, A.
1990-01-01
This thesis deals with the development and analysis of a computationally simple vector quantization image compression system for coding monochrome images at low bit rate. Vector quantization has been known to be an effective compression scheme when a low bit rate is desirable, but the intensive computation required in a vector quantization encoder has been a handicap in using it for low rate image coding. The present work shows that, without substantially increasing the coder complexity, it is indeed possible to achieve acceptable picture quality while attaining a high compression ratio. Several modifications to the conventional vector quantization coder are proposed in the thesis. These modifications are shown to offer better subjective quality when compared to the basic coder. Distributed blocks are used instead of spatial blocks to construct the input vectors. A class of input-dependent weighted distortion functions is used to incorporate psychovisual characteristics in the distortion measure. Computationally simple filtering techniques are applied to further improve the decoded image quality. Finally, unique designs of the vector quantization coder using electronic neural networks are described, so that the coding delay is reduced considerably.
NASA Technical Reports Server (NTRS)
Nguyen, D. T.; Al-Nasra, M.; Zhang, Y.; Baddourah, M. A.; Agarwal, T. K.; Storaasli, O. O.; Carmona, E. A.
1991-01-01
Several parallel-vector computational improvements to the unconstrained optimization procedure are described which speed up the structural analysis-synthesis process. A fast parallel-vector Choleski-based equation solver, pvsolve, is incorporated into the well-known SAP-4 general-purpose finite-element code. The new code, denoted PV-SAP, is tested for static structural analysis. Initial results on a four processor CRAY 2 show that using pvsolve reduces the equation solution time by a factor of 14-16 over the original SAP-4 code. In addition, parallel-vector procedures for the Golden Block Search technique and the BFGS method are developed and tested for nonlinear unconstrained optimization. A parallel version of an iterative solver and the pvsolve direct solver are incorporated into the BFGS method. Preliminary results on nonlinear unconstrained optimization test problems, using pvsolve in the analysis, show excellent parallel-vector performance indicating that these parallel-vector algorithms can be used in a new generation of finite-element based structural design/analysis-synthesis codes.
Singer product apertures-A coded aperture system with a fast decoding algorithm
NASA Astrophysics Data System (ADS)
Byard, Kevin; Shutler, Paul M. E.
2017-06-01
A new type of coded aperture configuration that enables fast decoding of the coded aperture shadowgram data is presented. Based on the products of incidence vectors generated from the Singer difference sets, we call these Singer product apertures. For a range of aperture dimensions, we compare experimentally the performance of three decoding methods: standard decoding, induction decoding and direct vector decoding. In all cases the induction and direct vector methods are several orders of magnitude faster than the standard method, with direct vector decoding being significantly faster than induction decoding. For apertures of the same dimensions the increase in speed offered by direct vector decoding over induction decoding is better for lower throughput apertures.
Global Magnetohydrodynamic Simulation Using High Performance FORTRAN on Parallel Computers
NASA Astrophysics Data System (ADS)
Ogino, T.
High Performance Fortran (HPF) is one of the modern and common techniques used to achieve high performance parallel computation. We have translated a 3-dimensional magnetohydrodynamic (MHD) simulation code of the Earth's magnetosphere from VPP Fortran to HPF/JA on the Fujitsu VPP5000/56 vector-parallel supercomputer; the MHD code was fully vectorized and fully parallelized in VPP Fortran. The entire performance and capability of the HPF MHD code could be shown to be almost comparable to that of VPP Fortran. A 3-dimensional global MHD simulation of the earth's magnetosphere was performed at a speed of over 400 Gflops with an efficiency of 76.5% using 56 PEs of the Fujitsu VPP5000/56 in vector and parallel computation that permitted comparison with catalog values. We have concluded that fluid and MHD codes that are fully vectorized and fully parallelized in VPP Fortran can be translated with relative ease to HPF/JA, and a code in HPF/JA may be expected to perform comparably to the same code written in VPP Fortran.
Vectorized Monte Carlo methods for reactor lattice analysis
NASA Technical Reports Server (NTRS)
Brown, F. B.
1984-01-01
Some of the new computational methods and equivalent mathematical representations of physics models used in the MCV code, a vectorized continuous-energy Monte Carlo code for use on the CYBER-205 computer, are discussed. While the principal application of MCV is the neutronics analysis of repeating reactor lattices, the new methods used in MCV should be generally useful for vectorizing Monte Carlo for other applications. For background, a brief overview of the vector processing features of the CYBER-205 is included, followed by a discussion of the fundamentals of Monte Carlo vectorization. The physics models used in the MCV vectorized Monte Carlo code are then summarized. The new methods used in scattering analysis are presented along with details of several key, highly specialized computational routines. Finally, speedups relative to CDC-7600 scalar Monte Carlo are discussed.
Vector Potential Generation for Numerical Relativity Simulations
NASA Astrophysics Data System (ADS)
Silberman, Zachary; Faber, Joshua; Adams, Thomas; Etienne, Zachariah; Ruchlin, Ian
2017-01-01
Many different numerical codes are employed in studies of highly relativistic magnetized accretion flows around black holes. Based on the formalisms each uses, some codes evolve the magnetic field vector B, while others evolve the magnetic vector potential A, the two being related by the curl: B=curl(A). Here, we discuss how to generate vector potentials corresponding to specified magnetic fields on staggered grids, a surprisingly difficult task on finite cubic domains. The code we have developed solves this problem in two ways: a brute-force method, whose scaling is nearly linear in the number of grid cells, and a direct linear algebra approach. We discuss the success both algorithms have in generating smooth vector potential configurations and how both may be extended to more complicated cases involving multiple mesh-refinement levels. NSF ACI-1550436
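The defining relation B = curl(A) takes the following finite-difference form on a staggered mesh, where A lives on cell edges and B on cell faces; the indexing convention in this C sketch is one common choice, not necessarily the one used by the code described.

    /* One component of B = curl(A) on a staggered mesh: the x-face field Bx
       is built from the edge-centered Ay and Az via centered differences;
       the other two components follow by cyclic permutation. Az and Ay are
       assumed accessors into edge arrays; dy and dz are the cell sizes. */
    double bx_from_potential(double (*Az)(int, int, int),
                             double (*Ay)(int, int, int),
                             int i, int j, int k, double dy, double dz) {
        return (Az(i, j + 1, k) - Az(i, j, k)) / dy
             - (Ay(i, j, k + 1) - Ay(i, j, k)) / dz;
    }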
Insertion of operation-and-indicate instructions for optimized SIMD code
Eichenberger, Alexander E; Gara, Alan; Gschwind, Michael K
2013-06-04
Mechanisms are provided for inserting indicated instructions for tracking and indicating exceptions in the execution of vectorized code. A portion of first code is received for compilation. The portion of first code is analyzed to identify non-speculative instructions performing designated non-speculative operations in the first code that are candidates for replacement by replacement operation-and-indicate instructions that perform the designated non-speculative operations and further perform an indication operation for indicating any exception conditions corresponding to special exception values present in vector register inputs to the replacement operation-and-indicate instructions. The replacement is performed and second code is generated based on the replacement of the at least one non-speculative instruction. The data processing system executing the compiled code is configured to store special exception values in vector output registers, in response to a speculative instruction generating an exception condition, without initiating exception handling.
A hybrid LBG/lattice vector quantizer for high quality image coding
NASA Technical Reports Server (NTRS)
Ramamoorthy, V.; Sayood, K.; Arikan, E. (Editor)
1991-01-01
It is well known that a vector quantizer is an efficient coder offering a good trade-off between quantization distortion and bit rate. The performance of a vector quantizer asymptotically approaches the optimum bound with increasing dimensionality. A vector quantized image suffers from the following types of degradations: (1) edge regions in the coded image contain staircase effects, (2) quasi-constant or slowly varying regions suffer from contouring effects, and (3) textured regions lose details and suffer from granular noise. All three of these degradations are due to the finite size of the code book, the distortion measures used in the design, and the finite training procedure involved in the construction of the code book. In this paper, we present an adaptive technique which attempts to ameliorate the edge distortion and contouring effects.
A finite element conjugate gradient FFT method for scattering
NASA Technical Reports Server (NTRS)
Collins, Jeffery D.; Ross, Dan; Jin, J.-M.; Chatterjee, A.; Volakis, John L.
1991-01-01
Validated results are presented for the new 3D body-of-revolution finite element boundary integral code. A Fourier series expansion of the vector electric and magnetic fields is employed to reduce the dimensionality of the system, and the exact boundary condition is employed to terminate the finite element mesh. The mesh termination boundary is chosen such that it leads to convolutional boundary operators of low O(n) memory demand. Improvements of this code are discussed along with the proposed formulation for a full 3D implementation of the finite element boundary integral method in conjunction with a conjugate gradient fast Fourier transform (CGFFT) solution.
Application of a VLSI vector quantization processor to real-time speech coding
NASA Technical Reports Server (NTRS)
Davidson, G.; Gersho, A.
1986-01-01
Attention is given to a working vector quantization processor for speech coding that is based on a first-generation VLSI chip which efficiently performs the pattern-matching operation needed for the codebook search process (CPS). Using this chip, the CPS architecture has been successfully incorporated into a compact, single-board Vector PCM implementation operating at 7-18 kbits/sec. A real time Adaptive Vector Predictive Coder system using the CPS has also been implemented.
Video data compression using artificial neural network differential vector quantization
NASA Technical Reports Server (NTRS)
Krishnamurthy, Ashok K.; Bibyk, Steven B.; Ahalt, Stanley C.
1991-01-01
An artificial neural network vector quantizer is developed for use in data compression applications such as Digital Video. Differential Vector Quantization is used to preserve edge features, and a new adaptive algorithm, known as Frequency-Sensitive Competitive Learning, is used to develop the vector quantizer codebook. To develop real time performance, a custom Very Large Scale Integration Application Specific Integrated Circuit (VLSI ASIC) is being developed to realize the associative memory functions needed in the vector quantization algorithm. By using vector quantization, the need for Huffman coding can be eliminated, resulting in superior performance against channel bit errors than methods that use variable length codes.
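A sketch of one Frequency-Sensitive Competitive Learning step as commonly formulated: the winning codeword is chosen by distortion scaled by its win count, which keeps rarely used codewords competitive. The fairness scaling and learning-rate handling in this C fragment are assumptions, not the authors' exact rule.

    #include <float.h>

    #define DIM 16

    typedef struct {
        double w[DIM];   /* codeword */
        long   wins;     /* how often this codeword has won so far */
    } Unit;

    /* One FSCL step: pick the unit minimizing distortion * win-count (the
       "frequency-sensitive" scaling), move it toward the input, and bump
       its counter so it competes less aggressively next time. */
    int fscl_update(Unit *units, int n, const double *x, double lr) {
        int best = 0;
        double best_score = DBL_MAX;
        for (int i = 0; i < n; i++) {
            double d = 0.0;
            for (int k = 0; k < DIM; k++) {
                double e = x[k] - units[i].w[k];
                d += e * e;
            }
            double score = d * (double)(units[i].wins + 1);
            if (score < best_score) { best_score = score; best = i; }
        }
        for (int k = 0; k < DIM; k++)
            units[best].w[k] += lr * (x[k] - units[best].w[k]);
        units[best].wins++;
        return best;
    }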
Effective Vectorization with OpenMP 4.5
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huber, Joseph N.; Hernandez, Oscar R.; Lopez, Matthew Graham
This paper describes how the Single Instruction Multiple Data (SIMD) model and its extensions in OpenMP work, and how these are implemented in different compilers. Modern processors are highly parallel computational machines which often include multiple processors capable of executing several instructions in parallel. Understanding SIMD and executing instructions in parallel allows the processor to achieve higher performance without increasing the power required to run it. SIMD instructions can significantly reduce the runtime of code by executing a single operation on large groups of data. The SIMD model is so integral to the processor's potential performance that, if SIMD is not utilized, less than half of the processor is ever actually used. Unfortunately, using SIMD instructions is a challenge in higher level languages because most programming languages do not have a way to describe them. Most compilers are capable of vectorizing code by using the SIMD instructions, but there are many code features important for SIMD vectorization that the compiler cannot determine at compile time. OpenMP attempts to solve this by extending the C++/C and Fortran programming languages with compiler directives that express SIMD parallelism. OpenMP is used to pass hints to the compiler about the code to be executed in SIMD. This is a key resource for making optimized code, but it does not change whether or not the code can use SIMD operations. However, in many cases critical functions are limited by a poor understanding of how SIMD instructions are actually implemented, as SIMD can be implemented through vector instructions or simultaneous multi-threading (SMT). We have found that it is often the case that code cannot be vectorized, or is vectorized poorly, because the programmer does not have sufficient knowledge of how SIMD instructions work.
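A minimal example of the OpenMP SIMD directive discussed in this paper: the pragma asserts that loop iterations are independent, so the compiler may vectorize even when its own analysis cannot prove safety.

    #include <stddef.h>

    /* SAXPY-style loop. Without help, a compiler may refuse to vectorize if
       it cannot rule out aliasing between x and y; '#pragma omp simd' states
       the programmer's guarantee that the lanes are independent. Compile
       with an OpenMP-enabled compiler, e.g. gcc -fopenmp -O2. */
    void saxpy(size_t n, float a, const float *x, float *y) {
        #pragma omp simd
        for (size_t i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }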
NASA Technical Reports Server (NTRS)
Gray, Robert M.
1989-01-01
During the past ten years Vector Quantization (VQ) has developed from a theoretical possibility promised by Shannon's source coding theorems into a powerful and competitive technique for speech and image coding and compression at medium to low bit rates. In this survey, the basic ideas behind the design of vector quantizers are sketched and some comments made on the state-of-the-art and current research efforts.
Bobrova, E V; Bogacheva, I N; Lyakhovetskii, V A; Fabinskaja, A A; Fomina, E V
2017-01-01
In order to test the hypothesis of hemisphere specialization for different types of information coding (the right hemisphere for positional coding; the left one for vector coding), we analyzed the errors of right- and left-handers during a task involving the memorization of sequences of movements by the left or the right hand, a task which activates vector coding by changing the order of movements in the memorized sequences. The task was first performed by the right or the left hand, then by the opposite hand. It was found that both right- and left-handers use the information about the previous movements of the dominant hand, but not of the non-dominant one. After changing the hand, right-handers use the information about previous movements of the second hand, while left-handers do not. We compared our results with the data of previous experiments, in which positional coding was activated, and concluded that both right- and left-handers use vector coding for memorizing the sequences of their dominant hand and positional coding for memorizing the sequences of the non-dominant hand. No similar patterns of errors were found between right- and left-handers after changing the hand, which suggests that in right- and left-handers the skills are transferred in different ways depending on the type of coding.
Kraus, Wayne A; Wagner, Albert F
1986-04-01
A triatomic classical trajectory code has been modified by extensive vectorization of the algorithms to achieve much improved performance on an FPS 164 attached processor. Extensive timings on both the FPS 164 and a VAX 11/780 with floating point accelerator are presented as a function of the number of trajectories simultaneously run. The timing tests involve a potential energy surface of the LEPS variety and trajectories with 1000 time steps. The results indicate that vectorization results in timing improvements on both the VAX and the FPS. For larger numbers of trajectories run simultaneously, up to a factor of 25 improvement in speed occurs between VAX and FPS vectorized code. Copyright © 1986 John Wiley & Sons, Inc.
Sakura, Midori; Lambrinos, Dimitrios; Labhart, Thomas
2008-02-01
Many insects exploit skylight polarization for visual compass orientation or course control. As found in crickets, the peripheral visual system (optic lobe) contains three types of polarization-sensitive neurons (POL neurons), which are tuned to different (approximately 60 degrees diverging) e-vector orientations. Thus each e-vector orientation elicits a specific combination of activities among the POL neurons, coding any e-vector orientation by just three neural signals. In this study, we hypothesize that in the presumed orientation center of the brain (central complex) e-vector orientation is population-coded by a set of "compass neurons." Using computer modeling, we present a neural network model transforming the signal triplet provided by the POL neurons into compass neuron activities coding e-vector orientation by a population code. Using intracellular electrophysiology and cell marking, we present evidence that neurons with the response profile of the presumed compass neurons do indeed exist in the insect brain: each of these compass-neuron-like (CNL) cells is activated by a specific e-vector orientation only and otherwise remains silent. Morphologically, CNL cells are tangential neurons extending from the lateral accessory lobe to the lower division of the central body. Surpassing the modeled compass neurons in performance, CNL cells are insensitive to the degree of polarization of the stimulus between 99% and at least down to 18% polarization and thus largely disregard variations of skylight polarization due to changing solar elevations or atmospheric conditions. This suggests that the polarization vision system includes a gain control circuit keeping the output activity at a constant level.
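The three-channel e-vector code can be mimicked numerically. POL-neuron activity is commonly idealized as sinusoidal in twice the e-vector angle (period 180 degrees) with preferred orientations roughly 60 degrees apart, and the triplet can be decoded with a population vector; the C sketch below uses that standard idealization, not a fit to the recordings.

    #include <math.h>

    /* Modeled POL-neuron activity for e-vector angle phi (radians): a
       polarization-opponent unit tuned to phi_pref responds as
       cos(2(phi - phi_pref)), giving the 180-degree period required for
       an axial (e-vector) stimulus. */
    double pol_response(double phi, double phi_pref) {
        return cos(2.0 * (phi - phi_pref));
    }

    /* Decode the e-vector angle from the three responses by summing unit
       vectors at the doubled preferred angles and halving the resultant
       angle (population-vector readout, modulo 180 degrees). */
    double decode_evector(const double r[3]) {
        const double pref[3] = {0.0, M_PI / 3.0, 2.0 * M_PI / 3.0}; /* 0, 60, 120 deg */
        double sx = 0.0, sy = 0.0;
        for (int i = 0; i < 3; i++) {
            sx += r[i] * cos(2.0 * pref[i]);
            sy += r[i] * sin(2.0 * pref[i]);
        }
        return 0.5 * atan2(sy, sx);
    }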
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.
1994-01-01
The unequal error protection capabilities of convolutional and trellis codes are studied. In certain environments, a discrepancy in the amount of error protection placed on different information bits is desirable. Examples of environments which have data of varying importance include a number of speech coding algorithms, packet switched networks, multi-user systems, embedded coding systems, and high definition television. Encoders which provide more than one level of error protection to information bits are called unequal error protection (UEP) codes. In this work, the effective free distance vector, d, is defined as an alternative to the free distance as a primary performance parameter for UEP convolutional and trellis encoders. For a given (n, k) convolutional encoder G, the effective free distance vector is defined as the k-dimensional vector d = (d_0, d_1, ..., d_{k-1}), where d_j, the j-th effective free distance, is the lowest Hamming weight among all code sequences that are generated by input sequences with at least one '1' in the j-th position. It is shown that, although the free distance for a code is unique to the code and independent of the encoder realization, the effective free distance vector is dependent on the encoder realization.
Tensor Sparse Coding for Positive Definite Matrices.
Sivalingam, Ravishankar; Boley, Daniel; Morellas, Vassilios; Papanikolopoulos, Nikos
2013-08-02
In recent years, there has been extensive research on sparse representation of vector-valued signals. In the matrix case, the data points are merely vectorized and treated as vectors thereafter (e.g., image patches). However, this approach cannot be used for all matrices, as it may destroy the inherent structure of the data. Symmetric positive definite (SPD) matrices constitute one such class of signals, where their implicit structure of positive eigenvalues is lost upon vectorization. This paper proposes a novel sparse coding technique for positive definite matrices, which respects the structure of the Riemannian manifold and preserves the positivity of their eigenvalues, without resorting to vectorization. Synthetic and real-world computer vision experiments with region covariance descriptors demonstrate the need for and the applicability of the new sparse coding model. This work serves to bridge the gap between the sparse modeling paradigm and the space of positive definite matrices.
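A hedged sketch of why vectorization needs care here: the snippet below sidesteps the paper's Riemannian formulation and instead uses the simpler log-Euclidean surrogate (matrix logarithm first, ordinary sparse coding in the flattened tangent space, matrix exponential to return to SPD matrices). The atoms and data are random SPD matrices, purely illustrative, and this is a stand-in for, not a reimplementation of, the authors' method.

```python
import numpy as np
from scipy.linalg import logm, expm
from sklearn.linear_model import orthogonal_mp

rng = np.random.default_rng(0)

def rand_spd(n):
    A = rng.standard_normal((n, n))
    return A @ A.T + n * np.eye(n)          # comfortably positive definite

n, n_atoms = 4, 20
atoms = [rand_spd(n) for _ in range(n_atoms)]

# Log-Euclidean surrogate: the matrix log maps SPD matrices into the
# flat space of symmetric matrices, where ordinary sparse coding applies.
D = np.column_stack([np.real(logm(A)).ravel() for A in atoms])
X = rand_spd(n)
x = np.real(logm(X)).ravel()

coef = orthogonal_mp(D, x, n_nonzero_coefs=3)
X_hat = expm((D @ coef).reshape(n, n))      # expm guarantees an SPD result
print("relative error:", np.linalg.norm(X - X_hat) / np.linalg.norm(X))
```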
Decoding and optimized implementation of SECDED codes over GF(q)
Ward, H. Lee; Ganti, Anand; Resnick, David R
2013-10-22
A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.
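A minimal sketch of the filter-then-select loop described above, specialized to binary distance-4 (SECDED-style) codes: candidates that would break the 3-column linear-independence requirement are filtered out before selection. The greedy candidate order is arbitrary here, not the patent's optimized procedure.

```python
from itertools import product

# Greedy population of a binary check matrix for a distance-4 code.
# Candidate pool: all nonzero length-r binary column vectors.
r, n_cols = 6, 16            # r check bits, n_cols total columns wanted
pool = [v for v in product([0, 1], repeat=r) if any(v)]

def keeps_independence(col, chosen):
    # distance-4 filter: the column must be new and must not equal the
    # XOR of any two already-chosen columns (3-column independence)
    if col in chosen:
        return False
    xor = lambda a, b: tuple(x ^ y for x, y in zip(a, b))
    return all(xor(a, b) != col for i, a in enumerate(chosen)
                                for b in chosen[:i])

H = []
for cand in pool:
    if len(H) == n_cols:
        break
    # filter step: discard candidates violating the independence rule
    if keeps_independence(cand, H):
        H.append(cand)

print(len(H), "columns selected")
```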
NASA Technical Reports Server (NTRS)
Kumar, A.; Graves, R. A., Jr.; Weilmuenster, K. J.
1980-01-01
A vectorized code, EQUIL, was developed for calculating the equilibrium chemistry of a reacting gas mixture on the Control Data STAR-100 computer. The code provides species mole fractions, mass fractions, and thermodynamic and transport properties of the mixture for a given temperature, pressure, and elemental mass fractions. The code is set up for a system of the elements H, He, C, O, and N, plus electrons. In all, 24 chemical species are included.
NASA Technical Reports Server (NTRS)
Rutishauser, David
2006-01-01
The motivation for this work comes from an observation that amidst the push for Massively Parallel (MP) solutions to high-end computing problems such as numerical physical simulations, large amounts of legacy code exist that are highly optimized for vector supercomputers. Because re-hosting legacy code often requires a complete re-write of the original code, which can be a very long and expensive effort, this work examines the potential to exploit reconfigurable computing machines in place of a vector supercomputer to implement an essentially unmodified legacy source code. Custom and reconfigurable computing resources could be used to emulate an original application's target platform to the extent required to achieve high performance. To arrive at an architecture that delivers the desired performance subject to limited resources involves solving a multi-variable optimization problem with constraints. Prior research in the area of reconfigurable computing has demonstrated that designing an optimum hardware implementation of a given application under hardware resource constraints is an NP-complete problem. The premise of the approach is that the general issue of applying reconfigurable computing resources to the implementation of an application, maximizing the performance of the computation subject to physical resource constraints, can be made a tractable problem by assuming a computational paradigm, such as vector processing. This research contributes a formulation of the problem and a methodology to design a reconfigurable vector processing implementation of a given application that satisfies a performance metric. A generic, parametric, architectural framework for vector processing implemented in reconfigurable logic is developed as a target for a scheduling/mapping algorithm that maps an input computation to a given instance of the architecture. This algorithm is integrated with an optimization framework to arrive at a specification of the architecture parameters that attempts to minimize execution time, while staying within resource constraints. The flexibility of using a custom reconfigurable implementation is exploited in a unique manner to leverage the lessons learned in vector supercomputer development. The vector processing framework is tailored to the application, with variable parameters that are fixed in traditional vector processing. Benchmark data that demonstrates the functionality and utility of the approach is presented. The benchmark data includes an identified bottleneck in a real case study example vector code, the NASA Langley Terminal Area Simulation System (TASS) application.
Design and construction of functional AAV vectors.
Gray, John T; Zolotukhin, Serge
2011-01-01
Using the basic principles of molecular biology and laboratory techniques presented in this chapter, researchers should be able to create a wide variety of AAV vectors for both clinical and basic research applications. Basic vector design concepts are covered for both protein-coding gene expression and small non-coding RNA gene expression cassettes. AAV plasmid vector backbones (available via AddGene) are described, along with critical sequence details for a variety of modular expression components that can be inserted as needed for specific applications. Protocols are provided for assembling the various DNA components into AAV vector plasmids in Escherichia coli, as well as for transferring these vector sequences into baculovirus genomes for large-scale production of AAV in the insect cell production system.
A geo-coded inventory of anophelines in the Afrotropical Region south of the Sahara: 1898-2016.
Kyalo, David; Amratia, Punam; Mundia, Clara W; Mbogo, Charles M; Coetzee, Maureen; Snow, Robert W
2017-01-01
Background: Understanding the distribution of anopheline vectors of malaria is an important prelude to the design of national malaria control and elimination programmes. A single, geo-coded continental inventory of anophelines using all available published and unpublished data has not been undertaken since the 1960s. Methods: We searched African, European and World Health Organization archives to identify unpublished reports on anopheline surveys in 48 sub-Saharan African countries. This search was supplemented by identification of reports that formed part of post-graduate theses, conference abstracts, regional insecticide resistance databases and more traditional bibliographic searches of peer-reviewed literature. Finally, a check was made against two recent repositories of dominant malaria vector species locations (circa 2,500). Each report was used to extract information on the survey dates, village locations (geo-coded to provide a longitude and latitude), sampling methods, species identification methods and all anopheline species found present during the survey. Survey records were collapsed to a single site over time. Results: The search strategy took years and resulted in 13,331 unique, geo-coded survey locations of anopheline vector occurrence between 1898 and 2016. A total of 12,204 (92%) sites reported the presence of 10 dominant vector species/sibling species; 4,473 (37%) of these sites were sampled since 2005. 4,442 (33%) sites reported at least one of 13 possible secondary vector species; 1,107 (25%) of these sites were sampled since 2005. Distributions of dominant and secondary vectors conform to previous descriptions of the ecological ranges of these vectors. Conclusion: We have assembled the largest ever geo-coded database of anophelines in Africa, representing a legacy dataset for future updating and identification of knowledge gaps at national levels. The geo-coded database is available on Harvard Dataverse as a reference source for African national malaria control programmes planning their future control and elimination strategies.
Adaptive error correction codes for face identification
NASA Astrophysics Data System (ADS)
Hussein, Wafaa R.; Sellahewa, Harin; Jassim, Sabah A.
2012-06-01
Face recognition in uncontrolled environments is greatly affected by fuzziness of face feature vectors as a result of extreme variation in recording conditions (e.g. illumination, poses, or expressions) in different sessions. Many techniques have been developed to deal with these variations, resulting in improved performance. This paper aims to model template fuzziness as errors and investigate the use of error detection/correction techniques for face recognition in uncontrolled environments. Error correction codes (ECCs) have recently been used for biometric key generation but not on biometric templates. We have investigated error patterns in binary face feature vectors extracted from different image windows of differing sizes and for different recording conditions. By estimating statistical parameters for the intra-class and inter-class distributions of Hamming distances in each window, we encode with appropriate ECCs. The proposed approach is tested for binarised wavelet templates using two face databases: Extended Yale-B and Yale. We demonstrate that using different combinations of BCH-based ECCs for different blocks and different recording conditions leads to different accuracy rates, and that using ECCs results in significantly improved recognition.
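The sketch below illustrates the statistical step: estimate intra- and inter-class Hamming distance distributions for a block and pick the weakest standard BCH code (length 127) whose correction radius t covers most intra-class variation. The templates are synthetic stand-ins, the 95% coverage target is an arbitrary choice, and the (k, t) table lists standard binary BCH codes of length 127.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 127-bit templates: one enrolled base vector per subject;
# intra-class samples are noisy copies, inter-class samples are random.
def noisy_copies(base, p_flip, m):
    flips = rng.random((m, base.size)) < p_flip
    return np.bitwise_xor(base, flips.astype(np.uint8))

base = rng.integers(0, 2, 127, dtype=np.uint8)
same = noisy_copies(base, 0.02, 200)
other = rng.integers(0, 2, (200, 127), dtype=np.uint8)

d_intra = np.count_nonzero(same != base, axis=1)
d_inter = np.count_nonzero(other != base, axis=1)

# Standard binary BCH codes of length 127, as (k, t) pairs.
bch_127 = [(120, 1), (113, 2), (106, 3), (99, 4), (92, 5), (85, 6), (78, 7)]

# Weakest code whose radius t covers 95% of intra-class distances;
# inter-class distances (~63 on average) stay far outside the radius.
target = np.percentile(d_intra, 95)
k, t = next(((k, t) for k, t in bch_127 if t >= target), bch_127[-1])
print("intra 95th pct:", target, "inter mean:", d_inter.mean(),
      "-> BCH(127, %d), t = %d" % (k, t))
```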
The ASC Sequoia Programming Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seager, M
2008-08-06
In the late 1980s and early 1990s, Lawrence Livermore National Laboratory was deeply engrossed in determining the next-generation programming model for the Integrated Design Codes (IDC) beyond vectorization for the Cray 1s series of computers. The vector model, developed in the mid-1970s first for the CDC 7600 and later extended from stack-based vector operations to memory-to-memory operations for the Cray 1s, lasted approximately 20 years (see Slide 5). The Cray vector era was deemed an extremely long-lived era as it allowed vector codes to be developed over time (the Cray 1s were faster in scalar mode than the CDC 7600) with vector unit utilization increasing incrementally over time. The other attributes of the Cray vector era at LLNL were that we developed, supported and maintained the operating system (LTSS and later NLTSS), communications protocols (LINCS), compilers (Civic Fortran77 and Model), operating system tools (e.g., batch system, job control scripting, loaders, debuggers, editors, graphics utilities, you name it) and math and highly machine-optimized libraries (e.g., SLATEC and STACKLIB). Although LTSS was adopted by Cray for early system generations, they later developed the COS and UNICOS operating systems and environment on their own. In the late 1970s and early 1980s two trends appeared that made the Cray vector programming model (described above, including both the hardware and system software aspects) seem potentially dated and slated for major revision. These trends were the appearance of low-cost CMOS microprocessors and their attendant departmental and mini-computers, and later workstations and personal computers. With the widespread adoption of Unix in the early 1980s, it appeared that LLNL (and the other DOE labs) would be left out of the mainstream of computing without a rapid transition to these 'Killer Micros' and modern OS and tools environments. The other interesting advance in the period is that systems were being developed with multiple 'cores' in them, called Symmetric Multi-Processor or Shared Memory Processor (SMP) systems. The parallel revolution had begun. The Laboratory started a small 'parallel processing project' in 1983 to study the new technology and its application to scientific computing with four people: Tim Axelrod, Pete Eltgroth, Paul Dubois and Mark Seager. Two years later, Eugene Brooks joined the team. This team focused on Unix and 'killer micro' SMPs. Indeed, Eugene Brooks was credited with coining the 'Killer Micro' term. After several generations of SMP platforms (e.g., the Sequent Balance 8000 with 8 NS32032s, the Alliant FX/8 with 8 MC68020s and FPGA-based vector units, and finally the BBN Butterfly with 128 cores), it became apparent to us that the killer-micro revolution would indeed overtake Crays and that we definitely needed a new programming and systems model. The model developed by Mark Seager and Dale Nielsen focused on both the system aspects (Slide 3) and the code development aspects (Slide 4). Although now succinctly captured in the two attached slides, at the time there was tremendous ferment in the research community as to which parallel programming model would emerge, dominate and survive. In addition, we wanted a model that would provide portability between platforms of a single generation but also longevity over multiple, and hopefully many, generations. Only after we developed the 'Livermore Model' and worked it out in considerable detail did it become obvious that what we came up with was the right approach.
In a nutshell, the applications programming model of the Livermore Model posited that SMP parallelism would ultimately not scale indefinitely and one would have to bite the bullet and implement MPI parallelism within the Integrated Design Codes (IDC). We also had a major emphasis on doing everything in a completely standards-based, portable methodology with POSIX/Unix as the target environment. We decided against specialized libraries like STACKLIB for performance, but kept as many general-purpose, portable math libraries as were needed by the codes. Third, we assumed that the SMPs in clusters would evolve in time to become more powerful, feature-rich and, in particular, offer more cores. Thus, we focused on OpenMP and POSIX Pthreads for programming SMP parallelism. These code porting efforts were led by Dale Nielsen, A-Division code group leader, and Randy Christensen, B-Division code group leader. Most of the porting effort revolved around removing 'Crayisms' in the codes: artifacts of LTSS/NLTSS, Civic compiler extensions beyond Fortran77, IO libraries, and dealing with new code control languages (we switched to Perl and later to Python). Adding MPI to the codes was initially problematic and error-prone because the programmers used MPI directly and sprinkled the calls throughout the code.
More About Vector Adaptive/Predictive Coding Of Speech
NASA Technical Reports Server (NTRS)
Jedrey, Thomas C.; Gersho, Allen
1992-01-01
Report presents additional information about digital speech-encoding and -decoding system described in "Vector Adaptive/Predictive Encoding of Speech" (NPO-17230). Summarizes development of vector adaptive/predictive coding (VAPC) system and describes basic functions of algorithm. Describes refinements introduced enabling receiver to cope with errors. VAPC algorithm implemented in integrated-circuit coding/decoding processors (codecs). VAPC and other codecs tested under variety of operating conditions. Tests designed to reveal effects of various quiet and noisy background environments and of poor telephone equipment. VAPC found competitive with, and in some respects superior to, other 4.8-kb/s codecs and other codecs of similar complexity.
Monte Carlo simulation of Ising models by multispin coding on a vector computer
NASA Astrophysics Data System (ADS)
Wansleben, Stephan; Zabolitzky, John G.; Kalle, Claus
1984-11-01
Rebbi's efficient multispin coding algorithm for Ising models is combined with the use of the vector computer CDC Cyber 205. A speed of 21.2 million updates per second is reached. This is comparable to that obtained by special-purpose computers.
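The core of multispin coding is storing one spin per bit and updating 64 systems with one stream of bitwise operations. The sketch below counts antiparallel neighbours bit-slice-wise with adder logic and applies a zero-temperature flip rule; a production Metropolis code would also need checkerboard sublattice updates and finite-temperature randomness, both omitted here.

```python
import numpy as np

rng = np.random.default_rng(2)
L = 32
# Bit b of each 64-bit word holds one spin (0/1) of replica b:
# 64 independent Ising lattices updated by the same bitwise stream.
lattice = rng.integers(0, 2**64, (L, L), dtype=np.uint64)

def neighbour_disagreements(s):
    """Bit-sliced count (0..4) of antiparallel neighbours per bit lane,
    returned as three bit-planes using half/full-adder logic."""
    n = [np.roll(s, 1, 0) ^ s, np.roll(s, -1, 0) ^ s,
         np.roll(s, 1, 1) ^ s, np.roll(s, -1, 1) ^ s]
    s0 = n[0] ^ n[1]; c0 = n[0] & n[1]
    s1 = s0 ^ n[2];   c1 = s0 & n[2]
    s2 = s1 ^ n[3];   c2 = s1 & n[3]
    ones = s2                                  # count bit 0
    twos = c0 ^ c1 ^ c2                        # count bit 1
    fours = (c0 & c1) | (c0 & c2) | (c1 & c2)  # count bit 2
    return ones, twos, fours

# Zero-temperature sweep: flip where >= 2 neighbours disagree
# (Delta E <= 0), simultaneously for all 64 replicas.
ones, twos, fours = neighbour_disagreements(lattice)
lattice ^= (twos | fours)
```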
Using a multifrontal sparse solver in a high performance, finite element code
NASA Technical Reports Server (NTRS)
King, Scott D.; Lucas, Robert; Raefsky, Arthur
1990-01-01
We consider the performance of the finite element method on a vector supercomputer. The computationally intensive parts of the finite element method are typically the individual element forms and the solution of the global stiffness matrix, both of which are vectorized in high performance codes. To further increase throughput, new algorithms are needed. We compare a multifrontal sparse solver to a traditional skyline solver in a finite element code on a vector supercomputer. The multifrontal solver uses the Multiple-Minimum Degree reordering heuristic to reduce the number of operations required to factor a sparse matrix and full matrix computational kernels (e.g., BLAS3) to enhance vector performance. The net result is an order-of-magnitude reduction in run time for a finite element application on one processor of a Cray X-MP.
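For a feel of the reordering idea, the sketch below factors a toy 2-D Laplacian "stiffness matrix" with SciPy's SuperLU using a multiple-minimum-degree column ordering. SuperLU is a supernodal rather than multifrontal factorization, so this is an analogue of the approach above, not the same solver.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

# Assemble a 2-D Laplacian on an n x n grid (a stand-in stiffness matrix)
n = 50
T = sp.diags([-1, 2, -1], [-1, 0, 1], (n, n))
K = sp.kron(sp.eye(n), T) + sp.kron(T, sp.eye(n))
b = np.ones(K.shape[0])

# MMD_AT_PLUS_A is SuperLU's multiple-minimum-degree ordering,
# the same fill-reducing heuristic named in the abstract.
lu = splu(K.tocsc(), permc_spec="MMD_AT_PLUS_A")
x = lu.solve(b)
print("residual:", np.linalg.norm(K @ x - b))
```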
Gain-adaptive vector quantization for medium-rate speech coding
NASA Technical Reports Server (NTRS)
Chen, J.-H.; Gersho, A.
1985-01-01
A class of adaptive vector quantizers (VQs) that can dynamically adjust the 'gain' of codevectors according to the input signal level is introduced. The encoder uses a gain estimator to determine a suitable normalization of each input vector prior to VQ coding. The normalized vectors have reduced dynamic range and can then be more efficiently coded. At the receiver, the VQ decoder output is multiplied by the estimated gain. Both forward and backward adaptation are considered and several different gain estimators are compared and evaluated. An approach to optimizing the design of gain estimators is introduced. Some of the more obvious techniques for achieving gain adaptation are substantially less effective than the use of optimized gain estimators. A novel design technique that is needed to generate the appropriate gain-normalized codebook for the vector quantizer is introduced. Experimental results show that a significant gain in segmental SNR can be obtained over nonadaptive VQ with a negligible increase in complexity.
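A minimal forward-adaptive sketch of the idea: estimate a gain per input vector, quantize the normalized vector against a gain-normalized codebook, and rescale at the decoder. The norm-based gain estimator and random codebook are simplifications; the paper's optimized estimators and backward adaptation are not modeled.

```python
import numpy as np

rng = np.random.default_rng(3)

dim, size = 8, 64
codebook = rng.standard_normal((size, dim))
codebook /= np.linalg.norm(codebook, axis=1, keepdims=True)  # unit gain

def encode(x, eps=1e-12):
    gain = np.linalg.norm(x) + eps           # simple gain estimator
    u = x / gain                             # normalized: small dynamic range
    idx = np.argmin(np.linalg.norm(codebook - u, axis=1))
    return gain, idx                         # gain sent as side information

def decode(gain, idx):
    return gain * codebook[idx]              # decoder rescales the codevector

x = 5.0 * rng.standard_normal(dim)           # high-level input frame
gain, idx = encode(x)
print("reconstruction error:", np.linalg.norm(x - decode(gain, idx)))
```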
Pulse Vector-Excitation Speech Encoder
NASA Technical Reports Server (NTRS)
Davidson, Grant; Gersho, Allen
1989-01-01
Proposed pulse vector-excitation speech encoder (PVXC) encodes analog speech signals into digital representation for transmission or storage at rates below 5 kilobits per second. Produces high-quality reconstructed speech, but with less computation than required by comparable speech-encoding systems. Has some characteristics of multipulse linear predictive coding (MPLPC) and of code-excited linear prediction (CELP). System uses mathematical model of vocal tract in conjunction with set of excitation vectors and perceptually-based error criterion to synthesize natural-sounding speech.
Niarchos, Athanasios; Siora, Anastasia; Konstantinou, Evangelia; Kalampoki, Vasiliki; Lagoumintzis, George; Poulas, Konstantinos
2017-01-01
Over the last few decades, recombinant protein expression has found more and more applications. The cloning of protein-coding genes into expression vectors is required to be directional for proper expression, and versatile in order to facilitate gene insertion into multiple different vectors for expression tests. In this study, the TA-GC cloning method is proposed as a new, simple and efficient method for the directional cloning of protein-coding genes in expression vectors. The presented method features several advantages over existing methods, which tend to be relatively more labour-intensive, inflexible or expensive. The proposed method relies on the complementarity between single A- and G-overhangs of the protein-coding gene, obtained after a short incubation with T4 DNA polymerase, and T- and C-overhangs of the novel vector pET-BccI, created after digestion with the restriction endonuclease BccI. The novel protein-expression vector pET-BccI also facilitates the screening of transformed colonies for recombinant transformants. Evaluation experiments of the proposed TA-GC cloning method showed that 81% of the transformed colonies contained recombinant pET-BccI plasmids, and 98% of the recombinant colonies expressed the desired protein. This demonstrates that TA-GC cloning could be a valuable method for cloning protein-coding genes in expression vectors.
Code Samples Used for Complexity and Control
NASA Astrophysics Data System (ADS)
Ivancevic, Vladimir G.; Reid, Darryn J.
2015-11-01
The following sections are included: * Mathematica® Code * Generic Chaotic Simulator * Vector Differential Operators * NLS Explorer * C++ Code * C++ Lambda Functions for Real Calculus * Accelerometer Data Processor * Simple Predictor-Corrector Integrator * Solving the BVP with the Shooting Method * Linear Hyperbolic PDE Solver * Linear Elliptic PDE Solver * Method of Lines for a Set of the NLS Equations * C# Code * Iterative Equation Solver * Simulated Annealing: A Function Minimum * Simple Nonlinear Dynamics * Nonlinear Pendulum Simulator * Lagrangian Dynamics Simulator * Complex-Valued Crowd Attractor Dynamics * Freeform Fortran Code * Lorenz Attractor Simulator * Complex Lorenz Attractor * Simple SGE Soliton * Complex Signal Presentation * Gaussian Wave Packet * Hermitian Matrices * Euclidean L2-Norm * Vector/Matrix Operations * Plain C-Code: Levenberg-Marquardt Optimizer * Free Basic Code: 2D Crowd Dynamics with 3000 Agents
Nance, Michael E; Duan, Dongsheng
2015-12-01
Duchenne muscular dystrophy (DMD) is an X-linked, progressive childhood myopathy caused by mutations in the dystrophin gene, one of the largest genes in the genome. It is characterized by skeletal and cardiac muscle degeneration and dysfunction leading to cardiac and/or respiratory failure. Adeno-associated virus (AAV) is a highly promising gene therapy vector. AAV gene therapy has resulted in unprecedented clinical success for treating several inherited diseases. However, AAV gene therapy for DMD remains a significant challenge. Hurdles for AAV-mediated DMD gene therapy include the difficulty of packaging the full-length dystrophin coding sequence in an AAV vector, the necessity for whole-body gene delivery, the immune response to dystrophin and the AAV capsid, and the species-specific barriers in translating from animal models to human patients. Capsid engineering aims at improving viral vector properties by rational design and/or forced evolution. In this review, we discuss how to use state-of-the-art AAV capsid engineering technologies to overcome hurdles in AAV-based DMD gene therapy.
Computing element evolution towards Exascale and its impact on legacy simulation codes
NASA Astrophysics Data System (ADS)
Colin de Verdière, Guillaume J. L.
2015-12-01
In the light of the current race towards the Exascale, this article highlights the main features of the forthcoming computing elements that will be at the core of the next generations of supercomputers. The market analysis underlying this work shows that computers are facing a major evolution in terms of architecture. As a consequence, it is important to understand the impact of those evolutions on legacy codes and programming methods. The problems of dissipated power and memory access are discussed, leading to a vision of what an exascale system should be. To survive, programming languages have had to respond to the hardware evolutions, either by evolving or through the creation of new ones. From the previous elements, we elaborate on why vectorization, multithreading, data locality awareness and hybrid programming will be the keys to reaching the exascale, implying that it is time to start rewriting codes.
Deploying electromagnetic particle-in-cell (EM-PIC) codes on Xeon Phi accelerators boards
NASA Astrophysics Data System (ADS)
Fonseca, Ricardo
2014-10-01
The complexity of the phenomena involved in several relevant plasma physics scenarios, where highly nonlinear and kinetic processes dominate, makes purely theoretical descriptions impossible. Further understanding of these scenarios requires detailed numerical modeling, but fully relativistic particle-in-cell codes such as OSIRIS are computationally intensive. The quest for Exaflop computer systems has led to the development of HPC systems based on add-on accelerator cards, such as GPGPUs and, more recently, the Xeon Phi accelerators that power the current number 1 system in the world. These cards, also referred to as the Intel Many Integrated Core (MIC) architecture, offer peak theoretical performances of >1 TFlop/s for general-purpose calculations in a single board, and are receiving significant attention as an attractive alternative to CPUs for plasma modeling. In this work we report on our efforts towards the deployment of an EM-PIC code on a Xeon Phi architecture system. We focus on the parallelization and vectorization strategies followed, and present a detailed evaluation of code performance in comparison with the CPU code.
On A Nonlinear Generalization of Sparse Coding and Dictionary Learning.
Xie, Yuchen; Ho, Jeffrey; Vemuri, Baba
2013-01-01
Existing dictionary learning algorithms are based on the assumption that the data are vectors in a Euclidean vector space ℝ^d, and the dictionary is learned from the training data using the vector space structure of ℝ^d and its Euclidean L2-metric. However, in many applications, features and data often originate from a Riemannian manifold that does not support a global linear (vector space) structure. Furthermore, the extrinsic viewpoint of existing dictionary learning algorithms becomes inappropriate for modeling and incorporating the intrinsic geometry of the manifold that is potentially important and critical to the application. This paper proposes a novel framework for sparse coding and dictionary learning for data on a Riemannian manifold, and it shows that the existing sparse coding and dictionary learning methods can be considered as special (Euclidean) cases of the more general framework proposed here. We show that both the dictionary and sparse coding can be effectively computed for several important classes of Riemannian manifolds, and we validate the proposed method using two well-known classification problems in computer vision and medical imaging analysis.
Guidelines for developing vectorizable computer programs
NASA Technical Reports Server (NTRS)
Miner, E. W.
1982-01-01
Some fundamental principles for developing computer programs which are compatible with array-oriented computers are presented. The emphasis is on basic techniques for structuring computer codes which are applicable in FORTRAN and do not require a special programming language or exact a significant penalty on a scalar computer. Researchers who are using numerical techniques to solve problems in engineering can apply these basic principles and thus develop transportable computer programs (in FORTRAN) which contain much vectorizable code. The vector architecture of the ASC is discussed so that the requirements of array processing can be better appreciated. The "vectorization" of a finite-difference viscous shock-layer code is used as an example to illustrate the benefits and some of the difficulties involved. Increases in computing speed with vectorization are illustrated with results from the viscous shock-layer code and from a finite-element shock tube code. The applicability of these principles was substantiated through running programs on other computers with array-associated computing characteristics, such as the Hewlett-Packard (H-P) 1000-F.
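The same structuring principle, long conditional-free inner loops, carries over to array languages. A tiny NumPy illustration (not from the report, which targets FORTRAN):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 1_000_000)

# Scalar-style loop with a branch (one element per trip, not vectorizable):
# y = np.empty_like(x)
# for i in range(x.size):
#     y[i] = x[i] * x[i] + 2.0 * x[i] if x[i] > 0.5 else x[i]

# Branch-free, array-oriented form: a mask replaces the conditional,
# and the whole computation proceeds as vector operations.
y = np.where(x > 0.5, x * x + 2.0 * x, x)
```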
Sajjad, Muhammad; Mehmood, Irfan; Baik, Sung Wook
2015-01-01
Image super-resolution (SR) plays a vital role in medical imaging, allowing a more efficient and effective diagnosis process. Diagnosis is usually difficult and inaccurate from low-resolution (LR), noisy images. Resolution enhancement through conventional interpolation methods strongly affects the precision of consequent processing steps, such as segmentation and registration. Therefore, we propose an efficient sparse coded image SR reconstruction technique using a trained dictionary. We apply a simple and efficient regularized version of orthogonal matching pursuit (ROMP) to seek the coefficients of sparse representation. ROMP has the transparency and greediness of OMP and the robustness of L1-minimization, which enhances the dictionary learning process to capture feature descriptors such as oriented edges and contours from complex images like brain MRIs. The sparse coding part of the K-SVD dictionary training procedure is modified by substituting OMP with ROMP. The dictionary update stage allows simultaneously updating an arbitrary number of atoms and vectors of sparse coefficients. In SR reconstruction, ROMP is used to determine the vector of sparse coefficients for the underlying patch. The recovered representations are then applied to the trained dictionary, and finally, an optimization leads to a high-quality, high-resolution output. Experimental results demonstrate that the super-resolution reconstruction quality of the proposed scheme is comparatively better than other state-of-the-art schemes.
Propagation and scattering of vector light beam in turbid scattering medium
NASA Astrophysics Data System (ADS)
Doronin, Alexander; Milione, Giovanni; Meglinski, Igor; Alfano, Robert R.
2014-03-01
Due to their high sensitivity to subtle alterations in medium morphology, vector light beams have recently gained much attention in the area of photonics. This has led to the development of new non-invasive optical techniques for tissue diagnostics. Conceptual design of particular experimental systems requires careful selection of various technical parameters, including beam structure, polarization, coherence, and wavelength of the incident optical radiation, as well as an estimation of how spatial and temporal structural alterations in biological tissues can be distinguished by variations of these parameters. Therefore, an accurate realistic description of vector light beam propagation within tissue-like media is required. To simulate the propagation of vector light beams within turbid scattering media, the stochastic Monte Carlo (MC) technique has been used. In the current report we present the developed MC model and the results of simulation of different vector light beams propagating in turbid tissue-like scattering media. The developed MC model takes into account the coherent properties of light, the influence of reflection and refraction at the medium boundary, helicity flips of vortices, and their mutual interference. Finally, similar to the concept of the higher-order Poincaré sphere (HOPS), to link the spatial distribution of the intensity of the backscattered vector light beam and its state of polarization on the medium surface we introduce the color-coded HOPS.
Fuzzy support vector machines for adaptive Morse code recognition.
Yang, Cheng-Hong; Jin, Li-Cheng; Chuang, Li-Yeh
2006-11-01
Morse code is now being harnessed for use in rehabilitation applications of augmentative-alternative communication and assistive technology, facilitating mobility, environmental control and adapted worksite access. In this paper, Morse code is selected as an adaptive communication device for persons who suffer from muscle atrophy, cerebral palsy or other severe handicaps. A stable typing rate is strictly required for Morse code to be effective as a communication tool. Therefore, an adaptive automatic recognition method with a high recognition rate is needed. The proposed system uses both fuzzy support vector machines and the variable-degree variable-step-size least-mean-square algorithm to achieve these objectives. We apply fuzzy memberships to each point, so that different points provide different contributions to the decision learning function of the support vector machines. Statistical analyses demonstrated that the proposed method elicited a higher recognition rate than other algorithms in the literature.
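One common way to get a fuzzy-SVM flavour with an off-the-shelf library is to pass per-sample membership values as sample weights, which scale each point's contribution to the SVM loss. The sketch below does this for toy dot/dash duration features; the distance-to-class-mean membership is a generic heuristic, not the paper's exact membership function.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(4)

# Toy stand-in for dot/dash press-duration features of typed Morse elements.
dots = rng.normal(0.10, 0.03, (100, 1))     # short presses
dashes = rng.normal(0.30, 0.06, (100, 1))   # long presses
X = np.vstack([dots, dashes])
y = np.array([0] * 100 + [1] * 100)

# Membership in (0, 1]: points far from their class mean are treated
# as less reliable and down-weighted in the SVM objective.
centers = np.array([dots.mean(), dashes.mean()])
membership = 1.0 / (1.0 + np.abs(X[:, 0] - centers[y]))

clf = SVC(kernel="rbf").fit(X, y, sample_weight=membership)
print("training accuracy:", clf.score(X, y))
```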
NASA Technical Reports Server (NTRS)
Rathjen, K. A.
1977-01-01
A digital computer code, CAVE (Conduction Analysis Via Eigenvalues), which finds application in the analysis of two-dimensional transient heating of hypersonic vehicles, is described. CAVE is written in FORTRAN IV and is operational on both IBM 360-67 and CDC 6600 computers. The method of solution is a hybrid analytical-numerical technique that is inherently stable, permitting large time steps even with the best of conductors having the finest of mesh sizes. The aerodynamic heating boundary conditions are calculated by the code based on the input flight trajectory, or can optionally be calculated external to the code and then entered as input data. The code computes the network conduction and convection links, as well as capacitance values, given basic geometrical and mesh sizes, for four geometries (leading edges, cooled panels, X-24C structure and slabs). Input and output formats are presented and explained. Sample problems are included. A brief summary of the hybrid analytical-numerical technique, which utilizes eigenvalues (thermal frequencies) and eigenvectors (thermal mode vectors), is given, along with the aerodynamic heating equations that have been incorporated in the code and flow charts.
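The eigenvalue idea can be shown in a few lines: for a linear conduction network dT/dt = A T, expanding in thermal modes gives an exact update for any step size, which is why the method stays stable with large time steps even on fine meshes. The sketch below uses a toy 1-D network, not a vehicle model.

```python
import numpy as np

n = 20
# Toy 1-D conduction network: dT/dt = A @ T (rate constants illustrative)
A = 100.0 * (np.diag(-2.0 * np.ones(n))
             + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1))

lam, V = np.linalg.eigh(A)          # thermal frequencies and mode vectors

def advance(T0, dt):
    # exact modal solution: stable for any dt, however fine the mesh
    return (V * np.exp(lam * dt)) @ (V.T @ T0)

T0 = np.zeros(n); T0[0] = 1000.0    # suddenly heated edge node
T = advance(T0, dt=10.0)            # one very large step, no instability
print(T.max(), T.min())
```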
NASA Technical Reports Server (NTRS)
Kalnay, E.; Balgovind, R.; Chao, W.; Edelmann, D.; Pfaendtner, J.; Takacs, L.; Takano, K.
1983-01-01
Volume 3 of a three-volume technical memorandum documenting the GLAS fourth-order general circulation model is presented. The volume contains the CYBER 205 scalar and vector codes of the model, a list of variables, and cross references. A dictionary of FORTRAN variables used in the scalar version, and listings of the FORTRAN code compiled with the C-option, are included. Cross-reference maps of local variables are included for each subroutine.
Speech coding, reconstruction and recognition using acoustics and electromagnetic waves
Holzrichter, J.F.; Ng, L.C.
1998-03-17
The use of EM radiation in conjunction with simultaneously recorded acoustic speech information enables a complete mathematical coding of acoustic speech. The methods include the forming of a feature vector for each pitch period of voiced speech and the forming of feature vectors for each time frame of unvoiced, as well as combined voiced and unvoiced, speech. The methods include how to deconvolve the speech excitation function from the acoustic speech output to describe the transfer function for each time frame. The formation of feature vectors defining all acoustic speech units over well-defined time frames can be used for purposes of speech coding, speech compression, speaker identification, language-of-speech identification, speech recognition, speech synthesis, speech translation, speech telephony, and speech teaching. 35 figs.
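A sketch of the per-frame deconvolution step: with the excitation known from the EM sensor, the frame's transfer function follows from a regularized frequency-domain division. This shows only the core idea, not the patent's full feature-vector pipeline; the regularization constant is arbitrary.

```python
import numpy as np

def frame_transfer_function(excitation, speech, eps=1e-6):
    """Estimate the vocal-tract transfer function for one frame as
    H = S / E, with Wiener-style regularization to avoid division by
    near-zero spectral bins."""
    E = np.fft.rfft(excitation)
    S = np.fft.rfft(speech)
    return S * np.conj(E) / (np.abs(E) ** 2 + eps)

# Synthetic check: "speech" is the excitation filtered by a short
# made-up impulse response, which the deconvolution should recover.
rng = np.random.default_rng(5)
e = rng.standard_normal(256)
h_true = np.array([1.0, 0.5, 0.25, 0.125])
s = np.convolve(e, h_true)[:256]

H = frame_transfer_function(e, s)
print(np.round(np.fft.irfft(H)[:4], 2))    # approximately h_true
```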
Horizontal vectorization of electron repulsion integrals.
Pritchard, Benjamin P; Chow, Edmond
2016-10-30
We present an efficient implementation of the Obara-Saika algorithm for the computation of electron repulsion integrals that utilizes vector intrinsics to calculate several primitive integrals concurrently in a SIMD vector. Initial benchmarks display a 2-4 times speedup with AVX instructions over comparable scalar code, depending on the basis set. Speedup over scalar code is found to be sensitive to the level of contraction of the basis set, and is best for (l_A l_B | l_C l_D) quartets when l_D = 0 or l_B = l_D = 0, which makes such a vectorization scheme particularly suitable for density fitting. The basic Obara-Saika algorithm, how it is vectorized, and the performance bottlenecks are analyzed and discussed.
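Horizontal vectorization can be imitated in NumPy by evaluating a whole batch of primitive integrals with array expressions, one lane per primitive quartet. The sketch below uses the textbook closed form for s-type (ss|ss) integrals rather than the paper's general Obara-Saika recursion; the exponents and centers are random stand-ins.

```python
import numpy as np
from scipy.special import erf

def boys0(x):
    # Boys function F0(x), with the x -> 0 limit handled
    x = np.maximum(x, 1e-15)
    return 0.5 * np.sqrt(np.pi / x) * erf(np.sqrt(x))

def eri_ssss(a, b, c, d, A, B, C, D):
    """Batch of primitive (ss|ss) integrals; every argument is an
    array with one entry per quartet (the 'horizontal' layout)."""
    p, q = a + b, c + d
    P = (a[:, None] * A + b[:, None] * B) / p[:, None]
    Q = (c[:, None] * C + d[:, None] * D) / q[:, None]
    K_ab = np.exp(-a * b / p * np.sum((A - B) ** 2, axis=1))
    K_cd = np.exp(-c * d / q * np.sum((C - D) ** 2, axis=1))
    pref = 2.0 * np.pi ** 2.5 / (p * q * np.sqrt(p + q))
    return pref * K_ab * K_cd * boys0(
        p * q / (p + q) * np.sum((P - Q) ** 2, axis=1))

n = 1024                                   # SIMD-style batch of quartets
rng = np.random.default_rng(8)
a, b, c, d = (rng.uniform(0.2, 2.0, n) for _ in range(4))
A, B, C, D = (rng.standard_normal((n, 3)) for _ in range(4))
print(eri_ssss(a, b, c, d, A, B, C, D)[:3])
```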
Vectors a Fortran 90 module for 3-dimensional vector and dyadic arithmetic
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brock, B.C.
1998-02-01
A major advance contained in the new Fortran 90 language standard is the ability to define new data types and the operators associated with them. Writing computer code to implement computations with real and complex three-dimensional vectors and dyadics is greatly simplified if the equations can be implemented directly, without the need to code the vector arithmetic explicitly. The Fortran 90 module described here defines new data types for real and complex 3-dimensional vectors and dyadics, along with the common operations needed to work with these objects. Routines to allow convenient initialization and output of the new types are also included. In keeping with the philosophy of data abstraction, the details of the implementation of the data types are kept private, and the functions and operators are made generic to simplify the combining of real, complex, single- and double-precision vectors and dyadics.
Casales, Erkuden; Aranda, Alejandro; Quetglas, Jose I; Ruiz-Guillen, Marta; Rodriguez-Madoz, Juan R; Prieto, Jesus; Smerdou, Cristian
2010-05-31
Semliki Forest virus (SFV) vectors lead to high protein expression in mammalian cells, but expression is transient due to vector cytopathic effects, inhibition of host cell proteins and RNA-based expression. We have used a noncytopathic SFV mutant (ncSFV) RNA vector to generate stable cell lines expressing two human therapeutic proteins: insulin-like growth factor I (IGF-I) and cardiotrophin-1 (CT-1). Therapeutic genes were fused at the carboxy-terminal end of the puromycin N-acetyl-transferase gene, using as a linker the sequence coding for the foot-and-mouth disease virus (FMDV) 2A autoprotease. These cassettes were cloned into the ncSFV vector. Recombinant ncSFV vectors allowed rapid and efficient selection of stable BHK cell lines with puromycin. These cells expressed IGF-I and CT-1 in supernatants at levels reaching 1.4 and 8.6 µg/10^6 cells/24 hours, respectively. Two cell lines generated with each vector were passaged ten times during 30 days, showing constant levels of protein expression. Recombinant proteins expressed at different passages were shown to be functional in in vitro signaling assays. Stability at the RNA level was unexpectedly high, showing a very low mutation rate in the CT-1 sequence, which did not increase at high passages. CT-1 was efficiently purified from supernatants of ncSFV cell lines, with a yield of approximately 2 mg/L/24 hours. These results indicate that the ncSFV vector has great potential for the production of recombinant proteins in mammalian cells.
Strategies for vectorizing the sparse matrix vector product on the CRAY XMP, CRAY 2, and CYBER 205
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Partridge, Harry
1987-01-01
Large, randomly sparse matrix-vector products are important in a number of applications in computational chemistry, such as matrix diagonalization and the solution of simultaneous equations. Vectorization of this process is considered for the CRAY XMP, CRAY 2, and CYBER 205, using a matrix of dimension 20,000 with from 1 to 6 percent nonzeros. Efficient scatter/gather capabilities add coding flexibility and yield significant improvements in performance. For the CYBER 205, it is shown that minor changes in the I/O can reduce the CPU time by a factor of 50. Similar changes in the CRAY codes make a far smaller improvement.
NASA Technical Reports Server (NTRS)
McGuire, Tim
1998-01-01
In this paper, we report the results of our recent research on the application of a multiprocessor Cray T916 supercomputer in modeling super-thermal electron transport in the earth's magnetic field. In general, this mathematical model requires numerical solution of a system of partial differential equations. The code we use for this model is moderately vectorized. By using Amdahl's Law for vector processors, it can be verified that the code is about 60% vectorized on a Cray computer. Speedup factors on the order of 2.5 were obtained compared to the unvectorized code. In the following sections, we discuss the methodology of improving the code. In addition to our goal of optimizing the code for solution on the Cray computer, we had the goal of scalability in mind. Scalability combines the concepts of portability with near-linear speedup. Specifically, a scalable program is one whose performance is portable across many different architectures with differing numbers of processors for many different problem sizes. Though we have access to a Cray at this time, the goal was to also have code which would run well on a variety of architectures.
Simulations of linear and Hamming codes using SageMath
NASA Astrophysics Data System (ADS)
Timur, Tahta D.; Adzkiya, Dieky; Soleha
2018-03-01
Digital data transmission over a noisy channel can distort the message being transmitted. The goal of coding theory is to ensure data integrity, that is, to find out if and where this noise has distorted the message and what the original message was. Data transmission consists of three stages: encoding, transmission, and decoding. Linear and Hamming codes are the codes discussed in this work, where the encoding algorithms are parity check and generator matrix, and the decoding algorithms are nearest neighbor and syndrome. We aim to show that we can simulate these processes using the SageMath software, which has built-in classes for coding theory in general and linear codes in particular. First we consider the message as a binary vector of size k. This message will then be encoded to a vector of size n using the given algorithms. Then a noisy channel with a particular error probability will be created, where the transmission will take place. The last task is decoding, which corrects and reverts the received message back to the original message whenever possible, that is, if the number of errors that occurred is smaller than or equal to the correcting radius of the code. In this paper we use two types of data for the simulations, namely vector and text data.
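A plain-Python analogue of the simulated pipeline (generator-matrix encoding, a single channel bit flip, syndrome decoding) for the [7,4] Hamming code; SageMath's built-in classes wrap the same steps.

```python
import numpy as np

# Systematic [7,4] Hamming code: G = [I | P], H = [P^T | I].
G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])            # generator matrix
H = np.array([[1,1,0,1,1,0,0],
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])            # parity-check matrix

msg = np.array([1, 0, 1, 1])
code = msg @ G % 2                          # encoding

received = code.copy()
received[2] ^= 1                            # noisy channel flips bit 2

syndrome = H @ received % 2                 # decoding: locate the error;
# each single-bit error yields a unique syndrome = matching column of H
err_pos = next(j for j in range(7) if np.array_equal(H[:, j], syndrome))
received[err_pos] ^= 1
print("decoded:", received[:4], "original:", msg)
```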
NASA Astrophysics Data System (ADS)
Gerber, Florian; Mösinger, Kaspar; Furrer, Reinhard
2017-07-01
Software packages for spatial data often implement a hybrid approach of interpreted and compiled programming languages. The compiled parts are usually written in C, C++, or Fortran, and are efficient in terms of computational speed and memory usage. Conversely, the interpreted part serves as a convenient user interface and calls the compiled code for computationally demanding operations. The price paid for the user friendliness of the interpreted component is, besides performance, the limited access to low-level and optimized code. An example of such a restriction is the 64-bit vector support of the widely used statistical language R. On the R side, users do not need to change existing code and may not even notice the extension. On the other hand, interfacing 64-bit compiled code efficiently is challenging. Since many R packages for spatial data could benefit from 64-bit vectors, we investigate strategies to efficiently pass 64-bit vectors to compiled languages. More precisely, we show how to simply extend existing R packages using the foreign function interface to seamlessly support 64-bit vectors. This extension is shown with the sparse matrix algebra R package spam. The new capabilities are illustrated with an example of GIMMS NDVI3g data featuring a parametric modeling approach for a non-stationary covariance matrix.
Image coding using entropy-constrained residual vector quantization
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Smith, Mark J. T.; Barnes, Christopher F.
1993-01-01
The residual vector quantization (RVQ) structure is exploited to produce a variable-length-codeword RVQ. Necessary conditions for the optimality of this RVQ are presented, and a new entropy-constrained RVQ (EC-RVQ) design algorithm is shown to be very effective in designing RVQ codebooks over a wide range of bit rates and vector sizes. The new EC-RVQ has several important advantages. It can outperform entropy-constrained VQ (ECVQ) in terms of peak signal-to-noise ratio (PSNR), memory, and computation requirements. It can also be used to design high-rate codebooks and codebooks with relatively large vector sizes. Experimental results indicate that when the new EC-RVQ is applied to image coding, very high quality is achieved at relatively low bit rates.
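The residual structure itself fits in a few lines: stage 2 quantizes the residual of stage 1, so two small codebooks act like one much larger codebook. The sketch below uses random codebooks and plain nearest-neighbour search; the entropy constraint of EC-RVQ (weighting distortion against codeword length) is not modeled.

```python
import numpy as np

rng = np.random.default_rng(6)

dim, m = 4, 16
cb1 = rng.standard_normal((m, dim))         # stage-1 codebook
cb2 = 0.3 * rng.standard_normal((m, dim))   # stage-2: residuals are smaller

def rvq_encode(x):
    i1 = np.argmin(np.linalg.norm(cb1 - x, axis=1))
    r = x - cb1[i1]                          # stage-1 residual
    i2 = np.argmin(np.linalg.norm(cb2 - r, axis=1))
    return i1, i2                            # two short indices transmitted

def rvq_decode(i1, i2):
    return cb1[i1] + cb2[i2]                 # direct-sum reconstruction

x = rng.standard_normal(dim)
print(np.linalg.norm(x - rvq_decode(*rvq_encode(x))))
```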
Joint Source-Channel Coding by Means of an Oversampled Filter Bank Code
NASA Astrophysics Data System (ADS)
Marinkovic, Slavica; Guillemot, Christine
2006-12-01
Quantized frame expansions based on block transforms and oversampled filter banks (OFBs) have been considered recently as joint source-channel codes (JSCCs) for erasure and error-resilient signal transmission over noisy channels. In this paper, we consider a coding chain involving an OFB-based signal decomposition followed by scalar quantization and a variable-length code (VLC) or a fixed-length code (FLC). This paper first examines the problem of channel error localization and correction in quantized OFB signal expansions. The error localization problem is treated as an M-ary hypothesis testing problem. The likelihood values are derived from the joint pdf of the syndrome vectors under various hypotheses of impulse noise positions, and in a number of consecutive windows of the received samples. The error amplitudes are then estimated by solving the syndrome equations in the least-squares sense. The message signal is reconstructed from the corrected received signal by a pseudoinverse receiver. We then improve the error localization procedure by introducing per-symbol reliability information in the hypothesis testing procedure of the OFB syndrome decoder. The per-symbol reliability information is produced by the soft-input soft-output (SISO) VLC/FLC decoders. This leads to the design of an iterative algorithm for joint decoding of an FLC and an OFB code. The performance of the algorithms developed is evaluated in a wavelet-based image coding system.
Zhou, Wen; Li, Xinying; Yu, Jianjun
2017-10-30
We propose QPSK millimeter-wave (mm-wave) vector signal generation for D-band based on balanced precoding-assisted photonic frequency-quadrupling technology employing a single intensity modulator without an optical filter. The intensity MZM is driven by a balanced pre-coded 37-GHz QPSK RF signal. The modulated optical subcarriers are directly sent into a single-ended photodiode to generate a 148-GHz QPSK vector signal. We experimentally demonstrate 1-Gbaud 148-GHz QPSK mm-wave vector signal generation, and investigate the bit-error-rate (BER) performance of the vector signals at 148 GHz. The experimental results show that a BER as low as 1.448 × 10^-3 can be achieved when the optical power into the photodiode is 8.8 dBm. To the best of our knowledge, this is the first realization of frequency-quadrupling vector mm-wave signal generation at D-band based on only one MZM without an optical filter.
Low bit rate coding of Earth science images
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Chung, Wilson C.; Smith, Mark J. T.
1993-01-01
In this paper, the authors discuss compression based on some new ideas in vector quantization and their incorporation in a sub-band coding framework. Several variations are considered, which collectively address many of the individual compression needs within the earth science community. The approach taken in this work is based on some recent advances in the area of variable-rate residual vector quantization (RVQ). This new RVQ method is considered separately and in conjunction with sub-band image decomposition. Very good results are achieved in coding a variety of earth science images. The last section of the paper provides some comparisons that illustrate the improvement in performance attributable to this approach relative to the JPEG coding standard.
An algebraic hypothesis about the primeval genetic code architecture.
Sánchez, Robersy; Grau, Ricardo
2009-09-01
A plausible architecture of an ancient genetic code is derived from an extended base triplet vector space over the Galois field of the extended base alphabet {D,A,C,G,U}, where symbol D represents one or more hypothetical bases with unspecific pairings. We hypothesized that the high degeneracy of a primeval genetic code with five bases and the gradual origin and improvement of a primeval DNA repair system could make possible the transition from ancient to modern genetic codes. Our results suggest that the Watson-Crick base pairings G≡C and A=U and the non-specific base pairing of the hypothetical ancestral base D, used to define the sum and product operations, are features sufficient to determine the coding constraints of the primeval and the modern genetic codes, as well as the transition from the former to the latter. Geometrical and algebraic properties of this vector space reveal that the present codon assignment of the standard genetic code could be induced from a primeval codon assignment. Moreover, the Fourier spectrum of the extended DNA genome sequences derived from the multiple sequence alignment suggests that the so-called period-3 property of present coding DNA sequences could also exist in ancient coding DNA sequences. The phylogenetic analyses performed with metrics defined in the N-dimensional vector space (B(3))(N) of DNA sequences, and with the new evolutionary model presented here, also suggest that an ancient DNA coding sequence with five or more bases does not contradict the expected evolutionary history.
An efficient and portable SIMD algorithm for charge/current deposition in Particle-In-Cell codes
Vincenti, H.; Lobet, M.; Lehe, R.; ...
2016-09-19
In current computer architectures, data movement (from die to network) is by far the most energy-consuming part of an algorithm (≈20 pJ/word on-die to ≈10,000 pJ/word on the network). To increase memory locality at the hardware level and reduce energy consumption related to data movement, future exascale computers tend to use many-core processors on each compute node that will have a reduced clock speed to allow for efficient cooling. To compensate for the frequency decrease, machine vendors are making use of long SIMD instruction registers that are able to process multiple data with one arithmetic operator in one clock cycle. SIMD register length is expected to double every four years. As a consequence, Particle-In-Cell (PIC) codes will have to achieve good vectorization to fully take advantage of these upcoming architectures. In this paper, we present a new algorithm that allows for efficient and portable SIMD vectorization of current/charge deposition routines that are, along with the field gathering routines, among the most time-consuming parts of the PIC algorithm. Our new algorithm uses a particular data structure that takes into account memory alignment constraints and avoids gather/scatter instructions that can significantly affect vectorization performance on current CPUs. The new algorithm was successfully implemented in the 3D skeleton PIC code PICSAR and tested on Haswell Xeon processors (AVX2, 256-bit-wide data registers). Results show a factor of ×2 to ×2.5 speed-up in double precision for particle shape factors of orders 1-3. The new algorithm can be applied as is on future KNL (Knights Landing) architectures that will include AVX-512 instruction sets with 512-bit register lengths (8 doubles/16 singles). Program summary: Program Title: vec_deposition. Program Files doi: http://dx.doi.org/10.17632/nh77fv9k8c.1. Licensing provisions: BSD 3-Clause. Programming language: Fortran 90. External routines/libraries: OpenMP > 4.0. Nature of problem: Exascale architectures will have many-core processors per node with long vector data registers capable of performing one single instruction on multiple data during one clock cycle. Data register lengths are expected to double every four years, and this pushes for new portable solutions for efficiently vectorizing Particle-In-Cell codes on these future many-core architectures. One of the main hotspot routines of the PIC algorithm is the current/charge deposition, for which there is no efficient and portable vector algorithm. Solution method: Here we provide an efficient and portable vector algorithm for current/charge deposition routines that uses a new data structure, which significantly reduces gather/scatter operations. Vectorization is controlled using OpenMP 4.0 compiler directives, which ensures portability across different architectures. Restrictions: Here we do not provide the full PIC algorithm with an executable but only vector routines for current/charge deposition. These scalar/vector routines can be used as library routines in your 3D Particle-In-Cell code. However, to get the best performance out of the vector routines you have to satisfy the two following requirements: (1) Your code should implement particle tiling (as explained in the manuscript) to allow for maximized cache reuse and reduce memory accesses that can hinder vector performance. The routines can be used directly on each particle tile.
(2) You should compile your code with a Fortran 90 compiler (e.g., Intel, GNU, or Cray) and provide proper alignment flags and compiler alignment directives (more details in the README file).
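The Fortran/OpenMP routines themselves accompany the paper; purely as a language-neutral illustration of the scatter conflict that deposition vectorization must work around, the numpy sketch below replaces the per-particle scatter loop with a vectorized reduction, using an order-0 (nearest-grid-point) shape factor and an illustrative 1D grid.

    import numpy as np

    def deposit_ngp(positions, weights, nx, dx):
        # A naive loop does rho[cell] += w per particle, a scatter with
        # write conflicts; bincount performs the same reduction in
        # vector form without explicit gather/scatter.
        cells = np.floor(positions / dx).astype(int) % nx
        return np.bincount(cells, weights=weights, minlength=nx)

    rng = np.random.default_rng(1)
    pos = rng.uniform(0.0, 1.0, 100_000)
    rho = deposit_ngp(pos, np.full(pos.size, 1e-3), nx=64, dx=1.0 / 64)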
Etiology of work-related electrical injuries: a narrative analysis of workers' compensation claims.
Lombardi, David A; Matz, Simon; Brennan, Melanye J; Smith, Gordon S; Courtney, Theodore K
2009-10-01
The purpose of this study was to provide new insight into the etiology of primarily nonfatal, work-related electrical injuries. We developed a multistage, case-selection algorithm to identify electrical-related injuries from workers' compensation claims and a customized coding taxonomy to identify pre-injury circumstances. Workers' compensation claims routinely collected over a 1-year period from a large U.S. insurance provider were used to identify electrical-related injuries using an algorithm that evaluated: coded injury cause information, nature of injury, "accident" description, and injury description narratives. Concurrently, a customized coding taxonomy for these narratives was developed to abstract the activity, source, initiating process, mechanism, vector, and voltage. Among the 586,567 reported claims during 2002, electrical-related injuries accounted for 1283 (0.22%) of nonfatal claims and 15 fatalities (1.2% of electrical). Most (72.3%) were male, with an average age of 36, working in services (33.4%), manufacturing (24.7%), retail trade (17.3%), and construction (7.2%). Body parts injured most often were the hands, fingers, or wrist (34.9%); multiple body parts/systems (25.0%); and the lower/upper arm, elbow, shoulder, and upper extremities (19.2%). The leading activities were conducting manual tasks (55.1%); working with machinery, appliances, or equipment; working with electrical wire; and operating powered or nonpowered hand tools. Primary injury sources were appliances and office equipment (24.4%); wires, cables/cords (18.0%); machines and other equipment (11.8%); fixtures, bulbs, and switches (10.4%); and lightning (4.3%). No vector was identified in 85% of cases, and the work process was initiated by others in less than 1% of cases. Injury narratives provide valuable information to overcome some of the limitations of precoded data, most notably for identifying additional injury cases and for supplementing traditional epidemiologic data toward a further understanding of the etiology of work-related electrical injuries, which may lead to further prevention opportunities.
A Code Generation Approach for Auto-Vectorization in the Spade Compiler
NASA Astrophysics Data System (ADS)
Wang, Huayong; Andrade, Henrique; Gedik, Buğra; Wu, Kun-Lung
We describe an auto-vectorization approach for the Spade stream processing programming language, comprising two ideas. First, we provide support for vectors as a primitive data type. Second, we provide a C++ library with architecture-specific implementations of a large number of pre-vectorized operations as the means to support language extensions. We evaluate our approach with several stream processing operators, contrasting Spade's auto-vectorization with the native auto-vectorization provided by the GNU gcc and Intel icc compilers.
Vector systems for prenatal gene therapy: principles of retrovirus vector design and production.
Howe, Steven J; Chandrashekran, Anil
2012-01-01
Vectors derived from the Retroviridae family have several attributes required for successful gene delivery. Retroviral vectors have an adequate payload size for the coding regions of most genes; they are safe to handle and simple to produce. These vectors can be manipulated to target different cell types with low immunogenicity and can permanently insert genetic information into the host cells' genome. Retroviral vectors have been used in gene therapy clinical trials and successfully applied experimentally in vitro, in vivo, and in utero.
Transferring ecosystem simulation codes to supercomputers
NASA Technical Reports Server (NTRS)
Skiles, J. W.; Schulbach, C. H.
1995-01-01
Many ecosystem simulation computer codes have been developed in the last twenty-five years. This development took place initially on main-frame computers, then mini-computers, and more recently, on micro-computers and workstations. Supercomputing platforms (both parallel and distributed systems) have been largely unused, however, because of the perceived difficulty in accessing and using the machines. Also, significant differences in the system architectures of sequential, scalar computers and parallel and/or vector supercomputers must be considered. We have transferred a grassland simulation model (developed on a VAX) to a Cray Y-MP/C90. We describe porting the model to the Cray and the changes we made to exploit the parallelism in the application and improve code execution. The Cray executed the model 30 times faster than the VAX and 10 times faster than a Unix workstation. We achieved an additional speedup of 30 percent by using the compiler's vectorizing and 'in-line' capabilities. The code runs at only about 5 percent of the Cray's peak speed because it ineffectively uses the vector and parallel processing capabilities of the Cray. We expect that by restructuring the code, it could execute an additional six to ten times faster.
NASA Astrophysics Data System (ADS)
Hecht-Nielsen, Robert
1997-04-01
A new universal one-chart smooth manifold model for vector information sources is introduced. Natural coordinates (a particular type of chart) for such data manifolds are then defined. Uniformly quantized natural coordinates form an optimal vector quantization code for a general vector source. Replicator neural networks (a specialized type of multilayer perceptron with three hidden layers) are then introduced. As properly configured examples of replicator networks approach minimum mean squared error (e.g., via training and architecture adjustment using randomly chosen vectors from the source), these networks automatically develop a mapping which, in the limit, produces natural coordinates for arbitrary source vectors. The new concept of removable noise (a noise model applicable to a wide variety of real-world noise processes) is then discussed. Replicator neural networks, when configured to approach minimum mean squared reconstruction error (e.g., via training and architecture adjustment on randomly chosen examples from a vector source, each with randomly chosen additive removable noise contamination), in the limit eliminate removable noise and produce natural coordinates for the data vector portions of the noise-corrupted source vectors. Considerations regarding the selection of the dimension of a data manifold source model and the training/configuration of replicator neural networks are discussed.
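A minimal numpy sketch of the replicator topology may help: a multilayer perceptron with three hidden layers whose narrow middle layer plays the role of the natural coordinates. The dimensions, random initialization, and absence of a training loop are all illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def layer(n_in, n_out):
        return rng.standard_normal((n_in, n_out)) * 0.1, np.zeros(n_out)

    # input -> hidden -> bottleneck -> hidden -> output; the bottleneck
    # activations play the role of natural coordinates.
    dims = [16, 32, 3, 32, 16]       # 16-D source, assumed 3-D data manifold
    params = [layer(a, b) for a, b in zip(dims[:-1], dims[1:])]

    def forward(x):
        h, coords = x, None
        for i, (W, b) in enumerate(params):
            h = h @ W + b
            if i < len(params) - 1:
                h = np.tanh(h)
            if i == 1:               # output of the bottleneck layer
                coords = h
        return h, coords             # reconstruction, natural coordinates

    x = rng.standard_normal((8, 16))
    x_hat, coords = forward(x)

Training would adjust the weights to minimize ((x_hat - x) ** 2).mean() over source samples, which is the minimum-mean-squared-error configuration under which the bottleneck tends toward natural coordinates in the sense described above.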
4800 B/S speech compression techniques for mobile satellite systems
NASA Technical Reports Server (NTRS)
Townes, S. A.; Barnwell, T. P., III; Rose, R. C.; Gersho, A.; Davidson, G.
1986-01-01
This paper discusses three 4800-bps digital speech compression techniques currently being investigated for application in the mobile satellite service. These three techniques, vector adaptive predictive coding, vector excitation coding, and the self-excited vocoder, are the most promising among a number of techniques being developed to provide near-toll-quality speech compression while keeping the bit rate low enough for a power- and bandwidth-limited satellite service.
NASA Technical Reports Server (NTRS)
Kumar, A.
1984-01-01
A computer program NASCRIN has been developed for analyzing two-dimensional flow fields in high-speed inlets. It solves the two-dimensional Euler or Navier-Stokes equations in conservation form by an explicit, two-step finite-difference method. An explicit-implicit method can also be used at the user's discretion for viscous flow calculations. For turbulent flow, an algebraic, two-layer eddy-viscosity model is used. The code is operational on the CDC CYBER 203 computer system and is highly vectorized to take full advantage of the vector-processing capability of the system. It is highly user oriented and is structured in such a way that for most supersonic flow problems, the user has to make only a few changes. Although the code is primarily written for supersonic internal flow, it can be used with suitable changes in the boundary conditions for a variety of other problems.
NASA Astrophysics Data System (ADS)
Zhao, Bei; Zhong, Yanfei; Zhang, Liangpei
2016-06-01
Land-use classification of very high spatial resolution remote sensing (VHSR) imagery is one of the most challenging tasks in the field of remote sensing image processing. However, land-use classification is difficult to address with land-cover classification techniques, due to the complexity of land-use scenes. Scene classification is considered one promising way to address the land-use classification problem. The commonly used scene classification methods for VHSR imagery are all derived from the computer vision community, which mainly deals with terrestrial image recognition. Differing from terrestrial images, VHSR images are taken by looking down with airborne and spaceborne sensors, which leads to distinct light conditions and spatial configurations of land cover in VHSR imagery. Considering these distinct characteristics, two questions should be answered: (1) Which type or combination of information is suitable for VHSR imagery scene classification? (2) Which scene classification algorithm is best for VHSR imagery? In this paper, an efficient spectral-structural bag-of-features scene classifier (SSBFC) is proposed to combine the spectral and structural information of VHSR imagery. SSBFC utilizes the first- and second-order statistics (the mean and standard deviation values, MeanStd) as the statistical spectral descriptor for the spectral information of the VHSR imagery, and uses dense scale-invariant feature transform (SIFT) as the structural feature descriptor. The experimental results show that the spectral information works better than the structural information, while the combination of the two is better than either single type of information. Taking the characteristics of the spatial configuration into consideration, SSBFC uses the whole image scene as the scope of the pooling operator, instead of the scope generated by a spatial pyramid (SP) commonly used in terrestrial image classification. The experimental results show that the whole image as the scope of the pooling operator performs better than the scope generated by SP. In addition, SSBFC codes and pools the spectral and structural features separately to avoid mutual interference between them. The coding vectors of spectral and structural features are then concatenated into a final coding vector. Finally, SSBFC classifies the final coding vector by support vector machine (SVM) with a histogram intersection kernel (HIK). Compared with the latest scene classification methods, the experimental results on three VHSR datasets demonstrate that the proposed SSBFC performs better than the other classification methods for VHSR image scenes.
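To make the descriptor and pooling choices concrete, the Python sketch below implements the MeanStd spectral descriptor, whole-image sum pooling, and the histogram intersection kernel; the dense-SIFT structural branch and the codebook coding step are omitted, and the patch data are random stand-ins.

    import numpy as np

    def meanstd_descriptor(patch):
        # First- and second-order spectral statistics per band (MeanStd).
        return np.concatenate([patch.mean(axis=(0, 1)), patch.std(axis=(0, 1))])

    def pool_whole_image(features):
        # Whole-image pooling: sum over all patches of the scene (no
        # spatial pyramid), then L1-normalize the pooled vector.
        h = features.sum(axis=0)
        return h / max(h.sum(), 1e-12)

    def hik(A, B):
        # Histogram intersection kernel between rows of A and rows of B.
        return np.minimum(A[:, None, :], B[None, :, :]).sum(axis=-1)

    rng = np.random.default_rng(0)
    patches = rng.random((50, 8, 8, 4))        # 50 patches, 4 spectral bands
    spec = np.array([meanstd_descriptor(p) for p in patches])
    pooled = pool_whole_image(spec)            # per-scene spectral vector
    K = hik(pooled[None, :], pooled[None, :])  # kernel entry for the SVM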
Optimizing fusion PIC code performance at scale on Cori Phase 2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koskela, T. S.; Deslippe, J.
In this paper we present the results of optimizing the performance of the gyrokinetic full-f fusion PIC code XGC1 on the Cori Phase Two Knights Landing system. The code has undergone substantial development to enable the use of vector instructions in its most expensive kernels within the NERSC Exascale Science Applications Program. We study the single-node performance of the code on an absolute scale using the roofline methodology to guide optimization efforts. We have obtained 2x speedups in single-node performance due to enabling vectorization and performing memory layout optimizations. On multiple nodes, the code is shown to scale well up to 4000 nodes, near half the size of the machine. We discuss some communication bottlenecks that were identified and resolved during the work.
van Herpen, Gerard
2014-01-01
Einthoven not only designed a high-quality instrument, the string galvanometer, for recording the ECG, he also shaped the conceptual framework to understand it. He reduced the body to an equilateral triangle and the cardiac electric activity to a dipole, represented by an arrow (i.e. a vector) in the triangle's center. Up to the present day the interpretation of the ECG is based on the model of a dipole vector being projected on the various leads. The model is practical but intuitive, not physically founded. Burger analysed the relation between heart vector and leads according to the principles of physics. It then follows that an ECG lead must be treated as a vector (lead vector) and that the lead voltage is not simply proportional to the projection of the heart vector on the lead, but must be multiplied by the value (length) of the lead vector, the lead strength. Anatomical lead axis and electrical lead axis are different entities, and the anatomical body space must be distinguished from electrical space. Appreciation of these underlying physical principles should contribute to a better understanding of the ECG. The development of these principles by Burger is described, together with some personal notes and a sketch of the personality of this pioneer of medical physics.
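A short worked example captures Burger's point that the lead voltage is the projection of the heart vector scaled by the lead strength, so two leads along the same axis but of different strengths record different voltages; the vectors below are arbitrary illustrative numbers.

    import numpy as np

    H = np.array([0.8, -0.3, 0.2])   # heart (dipole) vector, arbitrary units
    L = np.array([1.2, 0.1, -0.4])   # lead vector of some lead (illustrative)

    voltage = L @ H                  # lead voltage = L . H
    strength = np.linalg.norm(L)     # lead strength = |L|
    projection = voltage / strength  # projection of H on the lead axis

    L2 = 2.0 * L                     # same anatomical axis, doubled strength
    assert np.isclose(L2 @ H, 2 * voltage)  # doubled voltage, same projection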
Language Recognition via Sparse Coding
2016-09-08
a posteriori (MAP) adaptation scheme that further optimizes the discriminative quality of sparse-coded speech features. We empirically validate the... significantly improve the discriminative quality of sparse-coded speech features. In Section 4, we evaluate the proposed approaches against an i-vector
Wu, Howard G.
2013-01-01
The planning of goal-directed movements is highly adaptable; however, the basic mechanisms underlying this adaptability are not well understood. Even the features of movement that drive adaptation are hotly debated, with some studies suggesting remapping of goal locations and others suggesting remapping of the movement vectors leading to goal locations. However, several previous motor learning studies and the multiplicity of the neural coding underlying visually guided reaching movements stand in contrast to this either/or debate on the modes of motor planning and adaptation. Here we hypothesize that, during visuomotor learning, the target location and movement vector of trained movements are separately remapped, and we propose a novel computational model for how motor plans based on these remappings are combined during the control of visually guided reaching in humans. To test this hypothesis, we designed a set of experimental manipulations that effectively dissociated the effects of remapping goal location and movement vector by examining the transfer of visuomotor adaptation to untrained movements and movement sequences throughout the workspace. The results reveal that (1) motor adaptation differentially remaps goal locations and movement vectors, and (2) separate motor plans based on these features are effectively averaged during motor execution. We then show that, without any free parameters, the computational model we developed for combining movement-vector-based and goal-location-based planning predicts nearly 90% of the variance in novel movement sequences, even when multiple attributes are simultaneously adapted, demonstrating for the first time the ability to predict how motor adaptation affects movement sequence planning. PMID:23804099
Supporting the Virtual Soldier With a Physics-Based Software Architecture
2005-06-01
simple approach taken here). Rather, this paper demonstrates how existing solution schemes can rapidly expand; it embraces all theoretical solution... body j. In (5) the superscript 'T' accompanying a vector denotes the transposition of the vector. The constraint force and moment are defined as... as many FE codes as there are meshes, and the requested MD code. This is described next. Exactly how the PM instantiated each physics process became an issue
New syndrome decoder for (n, 1) convolutional codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1983-01-01
The letter presents a new syndrome decoding algorithm for the (n, 1) convolutional codes (CC) that is different from and simpler than the previous syndrome decoding algorithm of Schalkwijk and Vinck. The new technique uses the general solution of the polynomial linear Diophantine equation for the error polynomial vector E(D). A recursive, Viterbi-like algorithm is developed to find the minimum-weight error vector E(D). An example is given for the binary nonsystematic (2, 1) CC.
Learning Compact Binary Face Descriptor for Face Recognition.
Lu, Jiwen; Liong, Venice Erin; Zhou, Xiuzhuang; Zhou, Jie
2015-10-01
Binary feature descriptors such as local binary patterns (LBP) and its variations have been widely used in many face recognition systems due to their excellent robustness and strong discriminative power. However, most existing binary face descriptors are hand-crafted, which requires strong prior knowledge to engineer them by hand. In this paper, we propose a compact binary face descriptor (CBFD) feature learning method for face representation and recognition. Given each face image, we first extract pixel difference vectors (PDVs) in local patches by computing the difference between each pixel and its neighboring pixels. Then, we learn a feature mapping to project these pixel difference vectors into low-dimensional binary vectors in an unsupervised manner, where 1) the variance of all binary codes in the training set is maximized, 2) the loss between the original real-valued codes and the learned binary codes is minimized, and 3) binary codes distribute evenly across the learned bins, so that the redundant information in PDVs is removed and compact binary codes are obtained. Lastly, we cluster and pool these binary codes into a histogram feature as the final representation for each face image. Moreover, we propose a coupled CBFD (C-CBFD) method by reducing the modality gap of heterogeneous faces at the feature level to make our method applicable to heterogeneous face recognition. Extensive experimental results on five widely used face datasets show that our methods outperform state-of-the-art face descriptors.
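The front end of the pipeline can be sketched compactly: extract pixel difference vectors from 3x3 neighborhoods, project them, binarize by sign, and pool the codes into a histogram. The random projection below is a stand-in for the learned CBFD mapping, and the 4-bit code length is an illustrative choice.

    import numpy as np

    def pixel_difference_vectors(img):
        # 8-D PDVs: differences between each pixel and its 3x3 neighbours.
        H, W = img.shape
        c = img[1:-1, 1:-1]
        shifts = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                  (0, 1), (1, -1), (1, 0), (1, 1)]
        pdv = [img[1 + dy:H - 1 + dy, 1 + dx:W - 1 + dx] - c
               for dy, dx in shifts]
        return np.stack(pdv, axis=-1).reshape(-1, 8)

    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, (32, 32)).astype(float)
    Wmap = rng.standard_normal((8, 4))       # stand-in for the learned mapping
    bits = (pixel_difference_vectors(img) @ Wmap > 0).astype(np.uint8)
    codes = bits @ (1 << np.arange(4))       # one 4-bit code per pixel
    hist = np.bincount(codes, minlength=16)  # pooled histogram feature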
MHD thrust vectoring of a rocket engine
NASA Astrophysics Data System (ADS)
Labaune, Julien; Packan, Denis; Tholin, Fabien; Chemartin, Laurent; Stillace, Thierry; Masson, Frederic
2016-09-01
In this work, the possibility of using MagnetoHydroDynamics (MHD) to vector the thrust of a solid-propellant rocket engine exhaust is investigated. Using a magnetic field for vectoring offers a mass gain and a reusability advantage compared to standard gimbaled, elastomer-joint systems. Analytical and numerical models were used to evaluate the flow deviation with a 1 Tesla magnetic field inside the nozzle. The fluid flow in the resistive MHD approximation is calculated using the KRONOS code from ONERA, coupling the hypersonic CFD platform CEDRE and the electrical code SATURNE from EDF. A critical parameter of these simulations is the electrical conductivity, which was evaluated using a set of equilibrium calculations with 25 species. Two models were used: local thermodynamic equilibrium and frozen flow. In both cases, chlorine captures a large fraction of free electrons, limiting the electrical conductivity to a value inadequate for thrust vectoring applications. However, when using chlorine-free propellants with 1% alkali by mass, an MHD thrust vectoring of several degrees was obtained.
Some Problems and Solutions in Transferring Ecosystem Simulation Codes to Supercomputers
NASA Technical Reports Server (NTRS)
Skiles, J. W.; Schulbach, C. H.
1994-01-01
Many computer codes for the simulation of ecological systems have been developed in the last twenty-five years. This development took place initially on main-frame computers, then mini-computers, and more recently, on micro-computers and workstations. Recent recognition of ecosystem science as a High Performance Computing and Communications Program Grand Challenge area emphasizes supercomputers (both parallel and distributed systems) as the next set of tools for ecological simulation. Transferring ecosystem simulation codes to such systems is not a matter of simply compiling and executing existing code on the supercomputer since there are significant differences in the system architectures of sequential, scalar computers and parallel and/or vector supercomputers. To more appropriately match the application to the architecture (necessary to achieve reasonable performance), the parallelism (if it exists) of the original application must be exploited. We discuss our work in transferring a general grassland simulation model (developed on a VAX in the FORTRAN computer programming language) to a Cray Y-MP. We show the Cray shared-memory vector-architecture, and discuss our rationale for selecting the Cray. We describe porting the model to the Cray and executing and verifying a baseline version, and we discuss the changes we made to exploit the parallelism in the application and to improve code execution. As a result, the Cray executed the model 30 times faster than the VAX 11/785 and 10 times faster than a Sun 4 workstation. We achieved an additional speed-up of approximately 30 percent over the original Cray run by using the compiler's vectorizing capabilities and the machine's ability to put subroutines and functions "in-line" in the code. With the modifications, the code still runs at only about 5% of the Cray's peak speed because it makes ineffective use of the vector processing capabilities of the Cray. We conclude with a discussion and future plans.
Jenkins, Adam M; Waterhouse, Robert M; Muskavitch, Marc A T
2015-04-23
Long non-coding RNAs (lncRNAs) have been defined as mRNA-like transcripts longer than 200 nucleotides that lack significant protein-coding potential, and many of them constitute scaffolds for ribonucleoprotein complexes with critical roles in epigenetic regulation. Various lncRNAs have been implicated in the modulation of chromatin structure, transcriptional and post-transcriptional gene regulation, and regulation of genomic stability in mammals, Caenorhabditis elegans, and Drosophila melanogaster. The purpose of this study is to identify the lncRNA landscape in the malaria vector An. gambiae and assess the evolutionary conservation of lncRNAs and their secondary structures across the Anopheles genus. Using deep RNA sequencing of multiple Anopheles gambiae life stages, we have identified 2,949 lncRNAs and more than 300 previously unannotated putative protein-coding genes. The lncRNAs exhibit differential expression profiles across life stages and adult genders. We find that across the genus Anopheles, lncRNAs display much lower sequence conservation than protein-coding genes. Additionally, we find that lncRNA secondary structure is highly conserved within the Gambiae complex, but diverges rapidly across the rest of the genus Anopheles. This study offers one of the first lncRNA secondary structure analyses in vector insects. Our description of lncRNAs in An. gambiae offers the most comprehensive genome-wide insights to date into lncRNAs in this vector mosquito, and defines a set of potential targets for the development of vector-based interventions that may further curb the human malaria burden in disease-endemic countries.
Optimization of large matrix calculations for execution on the Cray X-MP vector supercomputer
NASA Technical Reports Server (NTRS)
Hornfeck, William A.
1988-01-01
A considerable volume of large computational computer codes was developed for NASA over the past twenty-five years. These codes represent algorithms developed for machines of an earlier generation. With the emergence of the vector supercomputer as a viable, commercially available machine, an opportunity exists to evaluate optimization strategies to improve the efficiency of existing software, primarily because of architectural differences between the latest generation of large-scale machines and the earlier, mostly uniprocessor, machines. A software package being used by NASA to perform computations on large matrices is described, and a strategy for its conversion to the Cray X-MP vector supercomputer is also described.
Thyra Abstract Interface Package
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bartlett, Roscoe A.
2005-09-01
Thyra primarily defines a set of abstract C++ class interfaces needed for the development of abstract numerical algorithms (ANAs) such as iterative linear solvers and transient solvers, all the way up to optimization. At the foundation of these interfaces are abstract C++ classes for vectors, vector spaces, linear operators, and multi-vectors. Also included in the Thyra package is C++ code for creating concrete vector, vector space, linear operator, and multi-vector subclasses, as well as other utilities to aid in the development of ANAs. Currently, very general and efficient concrete subclass implementations exist for serial and SPMD in-core vectors and multi-vectors. Code also currently exists for testing objects and providing composite objects such as product vectors.
Method and System for Temporal Filtering in Video Compression Systems
NASA Technical Reports Server (NTRS)
Lu, Ligang; He, Drake; Jagmohan, Ashish; Sheinin, Vadim
2011-01-01
Three related innovations combine improved non-linear motion estimation, video coding, and video compression. The first is a method in which side information is generated using an adaptive, non-linear motion model. This method enables extrapolating and interpolating a visual signal by determining a first motion vector between a first pixel position in a first image and a second pixel position in a second image; determining a second motion vector between the second pixel position in the second image and a third pixel position in a third image; determining a third motion vector from the first, second, and third pixel positions using a non-linear model; and determining the position of a fourth pixel in a fourth image based upon the third motion vector. For the video compression element, the video encoder has low computational complexity and high compression efficiency. The disclosed system comprises a video encoder and a decoder. The encoder converts the source frame into a space-frequency representation and estimates the conditional statistics of at least one vector of space-frequency coefficients with similar frequencies, conditioned on previously encoded data. It estimates an encoding rate based on the conditional statistics and applies a Slepian-Wolf code with the computed encoding rate. The method for decoding includes generating a side-information vector of frequency coefficients based on previously decoded source data, encoder statistics, and previous reconstructions of the source frequency vector. It also performs Slepian-Wolf decoding of a source frequency vector based on the generated side-information and the Slepian-Wolf code bits. The video coding element includes receiving a first reference frame having a first pixel value at a first pixel position, a second reference frame having a second pixel value at a second pixel position, and a third reference frame having a third pixel value at a third pixel position. It determines a first motion vector between the first pixel position and the second pixel position, a second motion vector between the second pixel position and the third pixel position, and a fourth pixel value for a fourth frame based upon a linear or nonlinear combination of the first, second, and third pixel values. A stationary filtering process determines the estimated pixel values; the parameters of the filter may be predetermined constants.
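The adaptive non-linear motion model of the first innovation can be illustrated by a quadratic extrapolation over three observed positions of a pixel; treating the change between successive motion vectors as a constant per-frame acceleration is a simplifying assumption made here for illustration.

    import numpy as np

    p1 = np.array([10.0, 12.0])   # pixel position in the first image
    p2 = np.array([14.0, 13.0])   # position in the second image
    p3 = np.array([19.0, 15.0])   # position in the third image

    v12 = p2 - p1                 # first motion vector
    v23 = p3 - p2                 # second motion vector
    a = v23 - v12                 # non-linear (acceleration) term

    # Quadratic model: the next displacement keeps growing by a, rather
    # than repeating v23 as a constant-velocity linear model would.
    p4 = p3 + v23 + a             # predicted position in the fourth image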
Entropic Lattice Boltzmann Simulations of Turbulence
NASA Astrophysics Data System (ADS)
Keating, Brian; Vahala, George; Vahala, Linda; Soe, Min; Yepez, Jeffrey
2006-10-01
Because of their simplicity and nearly perfect parallelization and vectorization on supercomputer platforms, lattice Boltzmann (LB) methods hold great promise for simulations of nonlinear physics. Indeed, our MHD-LB code has the best sustained performance/PE of any code on the Earth Simulator. By projecting into the higher-dimensional kinetic phase space, the solution trajectory becomes simpler and much easier to compute than in the standard CFD approach. However, simple LB, with its simple advection and local BGK collisional relaxation, does not impose positive definiteness of the distribution functions in the time evolution. This leads to numerical instabilities for very low transport coefficients. In entropic LB (ELB) one determines a discrete H-theorem and the equilibrium distribution functions subject to the collisional invariants. The ELB algorithm is unconditionally stable for arbitrarily small transport coefficients. Various choices of velocity discretization are examined: 15-, 19-, and 27-bit ELB models. The connection between the Tsallis and Boltzmann entropies is clarified.
Acoustic 3D modeling by the method of integral equations
NASA Astrophysics Data System (ADS)
Malovichko, M.; Khokhlov, N.; Yavich, N.; Zhdanov, M.
2018-02-01
This paper presents a parallel algorithm for frequency-domain acoustic modeling by the method of integral equations (IE). The algorithm is applied to seismic simulation. The IE method reduces the size of the problem but leads to a dense system matrix. A tolerable memory consumption and numerical complexity were achieved by applying an iterative solver, accompanied by an effective matrix-vector multiplication operation based on the fast Fourier transform (FFT). We demonstrate that the IE system matrix is better conditioned than that of the finite-difference (FD) method, and discuss its relation to a specially preconditioned FD matrix. We considered several methods of matrix-vector multiplication for the free-space and layered host models. The developed algorithm and computer code were benchmarked against the FD time-domain solution. It was demonstrated that the method can accurately calculate the seismic field for models with sharp material boundaries and a point source and receiver located close to the free surface. We used OpenMP to speed up the matrix-vector multiplication, while MPI was used to speed up the solution of the system equations and for parallelizing across multiple sources. Practical examples and efficiency tests are presented as well.
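The FFT trick that keeps the iterative IE solver tractable can be shown in the simplest translation-invariant setting, a circulant matrix, where the dense O(N^2) matrix-vector product collapses to O(N log N); the kernel and vector below are random stand-ins.

    import numpy as np

    def circulant_matvec(kernel, x):
        # y = C x, where C is circulant with first column `kernel`;
        # circular convolution via FFT costs O(N log N) instead of O(N^2).
        return np.real(np.fft.ifft(np.fft.fft(kernel) * np.fft.fft(x)))

    rng = np.random.default_rng(0)
    c = rng.standard_normal(8)
    x = rng.standard_normal(8)
    C = np.array([np.roll(c, j) for j in range(8)]).T   # dense circulant
    assert np.allclose(C @ x, circulant_matvec(c, x))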
Adaptive Hybrid Picture Coding. Volume 2.
1985-02-01
[Table-of-contents fragment recovered from OCR: V.a Measurement Vector; V.b Size-Variable Centroid Vector; V.c Shape Vector; the Program for the Adaptive Line of Sight Method; B. Details of the Feature Vector Formation Program.] Shape recognition is analogous to recognition of curves in space; therefore, well-known concepts and theorems from differential geometry can be applied.
Pulse Code Modulation (PCM) encoder handbook for Aydin Vector MMP-900 series system
NASA Technical Reports Server (NTRS)
Raphael, David
1995-01-01
This handbook explicates the hardware and software properties of a time-division multiplex system used to sample analog and digital data. The data are merged with frame synchronization information to produce a serial pulse code modulation (PCM) bit stream. The information in this handbook is needed by users to design a compatible interface and to ensure effective utilization of this encoder system. Aydin Vector provides all of the components for these systems to Goddard Space Flight Center/Wallops Flight Facility.
Documentation of the GLAS fourth order general circulation model. Volume 2: Scalar code
NASA Technical Reports Server (NTRS)
Kalnay, E.; Balgovind, R.; Chao, W.; Edelmann, D.; Pfaendtner, J.; Takacs, L.; Takano, K.
1983-01-01
Volume 2 of this three-volume technical memorandum contains a detailed documentation of the GLAS fourth-order general circulation model: the CYBER 205 scalar and vector codes of the model, a list of variables, and cross-references. A variable-name dictionary for the scalar code and code listings are also provided.
Conditional Entropy-Constrained Residual VQ with Application to Image Coding
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Chung, Wilson C.; Smith, Mark J. T.
1996-01-01
This paper introduces an extension of entropy-constrained residual vector quantization (VQ) where intervector dependencies are exploited. The method, which we call conditional entropy-constrained residual VQ, employs a high-order entropy conditioning strategy that captures local information in the neighboring vectors. When applied to coding images, the proposed method is shown to achieve better rate-distortion performance than that of entropy-constrained residual vector quantization with less computational complexity and lower memory requirements. Moreover, it can be designed to support progressive transmission in a natural way. It is also shown to outperform some of the best predictive and finite-state VQ techniques reported in the literature. This is due partly to the joint optimization between the residual vector quantizer and a high-order conditional entropy coder as well as the efficiency of the multistage residual VQ structure and the dynamic nature of the prediction.
A VLSI chip set for real time vector quantization of image sequences
NASA Technical Reports Server (NTRS)
Baker, Richard L.
1989-01-01
The architecture and implementation of a VLSI chip set that vector quantizes (VQ) image sequences in real time is described. The chip set forms a programmable Single-Instruction, Multiple-Data (SIMD) machine which can implement various vector quantization encoding structures. Its VQ codebook may contain an unlimited number of codevectors, N, having dimension up to K = 64. Under a weighted least-squared-error criterion, the engine locates at video rates the best codevector in full-search or large tree-search VQ codebooks. The ability to manipulate tree-structured codebooks, coupled with parallelism and pipelining, permits searches in as few as O(log N) cycles. A full codebook search results in O(N) performance, compared to O(KN) for a Single-Instruction, Single-Data (SISD) machine. With this VLSI chip set, an entire video coder can be built on a single board that permits real-time experimentation with very large codebooks.
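A tree-searched codebook trades a full scan for a logarithmic descent, which is the property the chip set exploits; the Python sketch below uses a tiny hand-built binary tree whose layout is purely illustrative, not the chip's codebook format.

    import numpy as np

    def tree_vq_encode(x, tree, root=0):
        # Descend the tree by comparing the two children at each level:
        # O(log N) distance tests instead of the O(N) of a full search.
        node = root
        while True:
            _, left, right = tree[node]
            if left is None:
                return node, tree[node][0]
            dl = np.sum((x - tree[left][0]) ** 2)
            dr = np.sum((x - tree[right][0]) ** 2)
            node = left if dl <= dr else right

    tree = {
        0: (np.array([0.0, 0.0]), 1, 2),
        1: (np.array([-1.0, 0.0]), 3, 4),
        2: (np.array([1.0, 0.0]), 5, 6),
        3: (np.array([-1.5, 0.5]), None, None),
        4: (np.array([-0.5, -0.5]), None, None),
        5: (np.array([0.5, 0.5]), None, None),
        6: (np.array([1.5, -0.5]), None, None),
    }
    leaf, codevector = tree_vq_encode(np.array([1.2, -0.2]), tree)

Note the trade-off the abstract implies: the descent is not guaranteed to find the globally nearest codevector, which is why the chip set also supports full-searched codebooks.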
Method and system for efficient video compression with low-complexity encoder
NASA Technical Reports Server (NTRS)
Chen, Jun (Inventor); He, Dake (Inventor); Sheinin, Vadim (Inventor); Jagmohan, Ashish (Inventor); Lu, Ligang (Inventor)
2012-01-01
Disclosed are a method and system for video compression, wherein the video encoder has low computational complexity and high compression efficiency. The disclosed system comprises a video encoder and a video decoder, wherein the method for encoding includes the steps of converting a source frame into a space-frequency representation; estimating conditional statistics of at least one vector of space-frequency coefficients; estimating encoding rates based on the said conditional statistics; and applying Slepian-Wolf codes with the said computed encoding rates. The preferred method for decoding includes the steps of generating a side-information vector of frequency coefficients based on previously decoded source data, encoder statistics, and previous reconstructions of the source frequency vector; and performing Slepian-Wolf decoding of at least one source frequency vector based on the generated side-information, the Slepian-Wolf code bits, and the encoder statistics.
Man, Sumche; Maan, Arie C; Schalij, Martin J; Swenne, Cees A
2015-01-01
In the course of time, electrocardiography has assumed several modalities with varying electrode numbers, electrode positions, and lead systems. 12-lead electrocardiography and 3-lead vectorcardiography have become particularly popular. These modalities developed in parallel through the mid-twentieth century. In the same time interval, the physical concepts underlying electrocardiography were defined and worked out. In particular, the vector concept (heart vector, lead vector, volume conductor) appeared to be essential to understanding the manifestations of electrical heart activity, both in the 12-lead electrocardiogram (ECG) and in the 3-lead vectorcardiogram (VCG). Not universally appreciated in the clinic, the vectorcardiogram, and with it the vector concept, went out of use. A revival of vectorcardiography started in the 1990s, when VCGs were mathematically synthesized from standard 12-lead ECGs. This facilitated combined electrocardiography and vectorcardiography without the need for a special recording system. This paper gives an overview of these historical developments, elaborates on the vector concept, and seeks to define where VCG analysis/interpretation can add diagnostic/prognostic value to conventional 12-lead ECG analysis.
Serratrice, Nicolas; Cubizolle, Aurelie; Ibanes, Sandy; Mestre-Francés, Nadine; Bayo-Puxan, Neus; Creyssels, Sophie; Gennetier, Aurelie; Bernex, Florence; Verdier, Jean-Michel; Haskins, Mark E.; Couderc, Guilhem; Malecaze, Francois; Kalatzis, Vasiliki; Kremer, Eric J.
2015-01-01
Corneal transparency is maintained, in part, by specialized fibroblasts called keratocytes, which reside in the fibrous lamellae of the stroma. Corneal clouding, a condition that impairs visual acuity, is associated with numerous diseases, including mucopolysaccharidosis (MPS) type VII. MPS VII is due to deficiency in β-glucuronidase (β-glu) enzymatic activity, which leads to accumulation of glycosaminoglycans (GAGs), and secondary accumulation of gangliosides. Here, we tested the efficacy of canine adenovirus type 2 (CAV-2) vectors to transduce keratocytes in vivo in mice and nonhuman primates, and ex vivo in dog and human corneal explants. Following efficacy studies, we asked whether we could treat corneal clouding by injecting a helper-dependent (HD) CAV-2 vector (HD-RIGIE) harboring the human cDNA coding for β-glu (GUSB) into the canine MPS VII cornea. β-Glu activity, GAG content, and lysosome morphology and physiopathology were analyzed. We found that HD-RIGIE injections efficiently transduced coxsackievirus adenovirus receptor-expressing keratocytes in the four species and, compared to mock-injected controls, improved the pathology in the canine MPS VII cornea. The key criterion to corrective therapy was the steady controlled release of β-glu and its diffusion throughout the collagen-dense stroma. These data support the continued evaluation of HD CAV-2 vectors to treat diseases affecting corneal keratocytes. PMID:24607662
NASA Astrophysics Data System (ADS)
Liu, Tianyu; Du, Xining; Ji, Wei; Xu, X. George; Brown, Forrest B.
2014-06-01
For nuclear reactor analysis such as neutron eigenvalue calculations, time-consuming Monte Carlo (MC) simulations can be accelerated by using graphics processing units (GPUs). However, traditional MC methods are often history-based, and their performance on GPUs is affected significantly by the thread-divergence problem. In this paper we describe the development of a newly designed event-based vectorized MC algorithm for solving the neutron eigenvalue problem. The code was implemented using NVIDIA's Compute Unified Device Architecture (CUDA) and tested on an NVIDIA Tesla M2090 GPU card. We found that although the vectorized MC algorithm greatly reduces the occurrence of thread divergence, thus enhancing the warp execution efficiency, the overall simulation speed is roughly ten times slower than the history-based MC code on GPUs. Profiling results suggest that the slow speed is probably due to the memory access latency caused by the large number of global memory transactions. Possible solutions to improve the code efficiency are discussed.
An adaptive vector quantization scheme
NASA Technical Reports Server (NTRS)
Cheung, K.-M.
1990-01-01
Vector quantization is known to be an effective compression scheme to achieve a low bit rate so as to minimize communication channel bandwidth and also to reduce digital memory storage while maintaining the necessary fidelity of the data. However, the large number of computations required in vector quantizers has been a handicap in using vector quantization for low-rate source coding. An adaptive vector quantization algorithm is introduced that is inherently suitable for simple hardware implementation because it has a simple architecture. It allows fast encoding and decoding because it requires only addition and subtraction operations.
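One way to read the addition/subtraction-only claim is to use the sum-of-absolute-differences distortion for the codebook search, which needs no multiplications; the sketch below is an illustrative reading of that property, not the paper's exact algorithm.

    import numpy as np

    def nearest_sad(x, codebook):
        # Sum of absolute differences: adds/subtracts only, per component,
        # unlike squared error, which needs one multiply per component.
        sad = np.abs(codebook - x).sum(axis=1)
        return int(np.argmin(sad))

    rng = np.random.default_rng(0)
    codebook = rng.standard_normal((256, 16))   # illustrative codebook
    best = nearest_sad(rng.standard_normal(16), codebook)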
NASA Astrophysics Data System (ADS)
Olson, Richard F.
2013-05-01
Rendering of point-scatterer-based radar scenes for millimeter wave (mmW) seeker tests in real-time hardware-in-the-loop (HWIL) scene generation requires efficient algorithms and vector-friendly computer architectures for complex signal synthesis. New processor technology from Intel implements an extended 256-bit vector SIMD instruction set (AVX, AVX2) in a multi-core CPU design providing peak execution rates of hundreds of GigaFLOPS (GFLOPS) on one chip. Real-world mmW scene generation code can approach peak SIMD execution rates only after careful algorithm and source code design. An effective software design will maintain high computing intensity, emphasizing register-to-register SIMD arithmetic operations over data movement between CPU caches or off-chip memories. Engineers at the U.S. Army Aviation and Missile Research, Development and Engineering Center (AMRDEC) applied two basic parallel coding methods to assess new 256-bit SIMD multi-core architectures for mmW scene generation in HWIL. These include use of POSIX threads built on vector library functions and more portable, high-level parallel code based on compiler technology (e.g., OpenMP pragmas and SIMD autovectorization). Since CPU technology is rapidly advancing toward high processor core counts and TeraFLOPS peak SIMD execution rates, it is imperative that coding methods be identified which produce efficient and maintainable parallel code. This paper describes the algorithms used in point scatterer target model rendering, the parallelization of those algorithms, and the execution performance achieved on an AVX multi-core machine using the two basic parallel coding methods. The paper concludes with estimates for scale-up performance on upcoming multi-core technology.
Speech coding at low to medium bit rates
NASA Astrophysics Data System (ADS)
Leblanc, Wilfred Paul
1992-09-01
Improved search techniques coupled with improved codebook design methodologies are proposed to improve the performance of conventional code-excited linear predictive coders for speech. Improved methods for quantizing the short-term filter are developed by applying a tree search algorithm and joint codebook design to multistage vector quantization. Joint codebook design procedures are developed to design locally optimal multistage codebooks. Weighting during centroid computation is introduced to improve the outlier performance of the multistage vector quantizer. Multistage vector quantization is shown to be robust both against input characteristics and in the presence of channel errors. Spectral distortions of about 1 dB are obtained at rates of 22-28 bits/frame. Structured codebook design procedures for excitation in code-excited linear predictive coders are compared to general codebook design procedures. Little is lost by using significant structure in the excitation codebooks while greatly reducing the search complexity. Sparse multistage configurations are proposed for reducing computational complexity and memory size. Improved search procedures are applied to code-excited linear prediction which attempt joint optimization of the short-term filter, the adaptive codebook, and the excitation. Improvements in signal-to-noise ratio of 1-2 dB are realized in practice.
Reflectance Prediction Modelling for Residual-Based Hyperspectral Image Coding
Xiao, Rui; Gao, Junbin; Bossomaier, Terry
2016-01-01
A Hyperspectral (HS) image provides observational powers beyond human vision capability but represents more than 100 times the data compared to a traditional image. To transmit and store the huge volume of an HS image, we argue that a fundamental shift is required from the existing “original pixel intensity”-based coding approaches using traditional image coders (e.g., JPEG2000) to “residual”-based approaches using a video coder, for better compression performance. A modified video coder is required to exploit spatial-spectral redundancy using pixel-level reflectance modelling, due to the different characteristics of HS images in their spectral domain and in the shape domain of their panchromatic imagery compared to traditional videos. In this paper a novel coding framework using Reflectance Prediction Modelling (RPM) in the latest video coding standard High Efficiency Video Coding (HEVC) for HS images is proposed. An HS image presents a wealth of data where every pixel is considered a vector for different spectral bands. By quantitative comparison and analysis of pixel vector distribution along spectral bands, we conclude that modelling can predict the distribution and correlation of the pixel vectors for different bands. To exploit the distribution of the known pixel vector, we estimate a predicted current spectral band from the previous bands using Gaussian mixture-based modelling. The predicted band is used as the additional reference band together with the immediate previous band when we apply the HEVC. Every spectral band of an HS image is treated as if it were an individual frame of a video. In this paper, we compare the proposed method with mainstream encoders. The experimental results are fully validated on three types of HS datasets with different wavelength ranges. The proposed method outperforms the existing mainstream HS encoders in terms of rate-distortion performance of HS image compression. PMID:27695102
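The prediction step can be mimicked with a least-squares linear predictor standing in for the paper's Gaussian-mixture reflectance model; the predicted band would then serve as the additional HEVC reference frame. The band data and the per-pixel linear model below are illustrative assumptions.

    import numpy as np

    def predict_next_band(bands):
        # Fit band[t-2], band[t-1] -> band[t] on the last known triple,
        # then apply the fitted coefficients to predict band t+1.
        b0, b1, b2 = (b.ravel() for b in bands[-3:])
        A = np.stack([b0, b1, np.ones_like(b0)], axis=1)
        coef, *_ = np.linalg.lstsq(A, b2, rcond=None)
        A_next = np.stack([b1, b2, np.ones_like(b1)], axis=1)
        return (A_next @ coef).reshape(bands[-1].shape)

    rng = np.random.default_rng(0)
    base = rng.random((16, 16))
    bands = [base * s + 0.01 * rng.random((16, 16)) for s in (0.9, 1.0, 1.1)]
    predicted = predict_next_band(bands)   # extra reference for the next band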
NASA Technical Reports Server (NTRS)
Colquitt, Walter
1989-01-01
The main objective is to improve the performance of a specific FORTRAN computer code from the Planetary Sciences Division of NASA/Johnson Space Center when used on a modern vectorizing supercomputer. The code is used to calculate orbits of dust grains that separate from comets and asteroids. This code accounts for the influences of the sun and 8 planets (neglecting Pluto), the solar wind, and solar light pressure, including Poynting-Robertson drag. Calculations allow one to study the motion of these particles as they are influenced by the Earth or one of the other planets. Some of these particles become trapped just beyond the Earth for long periods of time. These integer-period resonances range from 3 orbits of the Earth for every 2 orbits of the particle to ratios as high as 14 to 13.
Yang, Yang; Stanković, Vladimir; Xiong, Zixiang; Zhao, Wei
2009-03-01
Following recent works on the rate region of the quadratic Gaussian two-terminal source coding problem and limit-approaching code designs, this paper examines multiterminal source coding of two correlated, i.e., stereo, video sequences to save the sum rate over independent coding of both sequences. Two multiterminal video coding schemes are proposed. In the first scheme, the left sequence of the stereo pair is coded by H.264/AVC and used at the joint decoder to facilitate Wyner-Ziv coding of the right video sequence. The first I-frame of the right sequence is successively coded by H.264/AVC Intracoding and Wyner-Ziv coding. An efficient stereo matching algorithm based on loopy belief propagation is then adopted at the decoder to produce pixel-level disparity maps between the corresponding frames of the two decoded video sequences on the fly. Based on the disparity maps, side information for both motion vectors and motion-compensated residual frames of the right sequence is generated at the decoder before Wyner-Ziv encoding. In the second scheme, source splitting is employed on top of classic and Wyner-Ziv coding for compression of both I-frames to allow flexible rate allocation between the two sequences. Experiments with both schemes on stereo video sequences using H.264/AVC, LDPC codes for Slepian-Wolf coding of the motion vectors, and scalar quantization in conjunction with LDPC codes for Wyner-Ziv coding of the residual coefficients give a slightly lower sum rate than separate H.264/AVC coding of both sequences at the same video quality.
Novel Integration of Frame Rate Up Conversion and HEVC Coding Based on Rate-Distortion Optimization.
Guo Lu; Xiaoyun Zhang; Li Chen; Zhiyong Gao
2018-02-01
Frame rate up conversion (FRUC) can improve visual quality by interpolating new intermediate frames. However, high-frame-rate videos produced by FRUC either incur extra bitrate consumption or suffer annoying artifacts in the interpolated frames. In this paper, a novel integration framework of FRUC and high efficiency video coding (HEVC) is proposed based on rate-distortion optimization, so that the interpolated frames can be reconstructed at the encoder side with low bitrate cost and high visual quality. First, a joint motion estimation (JME) algorithm is proposed to obtain robust motion vectors, which are shared between FRUC and video coding. Moreover, JME is embedded into the coding loop and employs the original motion search strategy of HEVC coding. Then, frame interpolation is formulated as a rate-distortion optimization problem, where both the coding bitrate consumption and the visual quality are taken into account. Due to the absence of original frames, the distortion model for interpolated frames is established according to motion vector reliability and coding quantization error. Experimental results demonstrate that the proposed framework can achieve a 21%-42% reduction in BDBR, compared with the traditional approach of FRUC cascaded with coding.
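The rate-distortion decision at the heart of such a framework can be illustrated with a toy Python function that compares the Lagrangian cost J = D + lambda*R of coding a frame against reconstructing it by interpolation; the numbers and the distortion model here are placeholders, not the paper's.

```python
def rd_choose(dist_coded, rate_coded, dist_interp, rate_interp, lam):
    """Pick the cheaper option under the Lagrangian cost J = D + lam * R."""
    j_coded = dist_coded + lam * rate_coded
    j_interp = dist_interp + lam * rate_interp  # interpolation: tiny rate, model-based distortion
    return 'interpolate' if j_interp < j_coded else 'code'

# e.g. rd_choose(dist_coded=40.0, rate_coded=2000, dist_interp=55.0, rate_interp=16, lam=0.01)
```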
MHD Turbulence, div B = 0 and Lattice Boltzmann Simulations
NASA Astrophysics Data System (ADS)
Phillips, Nate; Keating, Brian; Vahala, George; Vahala, Linda
2006-10-01
The question of div B = 0 in MHD simulations is crucial. Here we consider lattice Boltzmann simulations for MHD (LB-MHD). One introduces a scalar distribution function for the velocity field and a vector distribution function for the magnetic field. This asymmetry is due to the different symmetries in the tensors arising in the time evolution of these fields. The simple algorithm of streaming and local collisional relaxation is ideally parallelized and vectorized -- leading to the best sustained performance/PE of any code run on the Earth Simulator. By reformulating the BGK collision term, a simple implicit algorithm can be immediately transformed into an explicit algorithm that permits simulations at quite low viscosity and resistivity. However, div B = 0 is not an imposed constraint. Currently we are examining new formulations of LB-MHD that impose the div B constraint -- either through an entropic-like formulation or by introducing forcing terms into the momentum equations and permitting simpler forms of relaxation distributions.
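As a point of reference for the stream-and-relax structure mentioned above, here is a minimal generic single-relaxation-time (BGK) lattice Boltzmann step on a D2Q9 lattice in Python with numpy; it handles only the scalar distribution for the velocity field and omits the paper's vector-valued magnetic distribution and any div B treatment.

```python
import numpy as np

c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, u):
    """Standard D2Q9 equilibrium: rho*w*(1 + 3 c.u + 4.5 (c.u)^2 - 1.5 u^2)."""
    cu = np.einsum('qd,dxy->qxy', c, u)
    usq = (u ** 2).sum(axis=0)
    return rho * w[:, None, None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def lbm_step(f, tau):
    """One collide-and-stream step; f has shape (9, nx, ny)."""
    rho = f.sum(axis=0)
    u = np.einsum('qd,qxy->dxy', c, f) / rho
    f += (equilibrium(rho, u) - f) / tau      # local BGK collisional relaxation
    for q in range(9):                        # streaming along the lattice links
        f[q] = np.roll(f[q], shift=tuple(c[q]), axis=(0, 1))
    return f
```

The locality of both steps is what makes the method parallelize and vectorize so well: collision touches only one site, streaming only nearest-neighbour links.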
Lim, Hyoun-Sub; Vaira, Anna Maria; Domier, Leslie L; Lee, Sung Chul; Kim, Hong Gi; Hammond, John
2010-06-20
We have developed plant virus-based vectors for virus-induced gene silencing (VIGS) and protein expression, based on Alternanthera mosaic virus (AltMV), for infection of a wide range of host plants including Nicotiana benthamiana and Arabidopsis thaliana by either mechanical inoculation of in vitro transcripts or via agroinfiltration. In vivo transcripts produced by co-agroinfiltration of bacteriophage T7 RNA polymerase resulted in T7-driven AltMV infection from a binary vector in the absence of the Cauliflower mosaic virus 35S promoter. An artificial bipartite viral vector delivery system was created by separating the AltMV RNA-dependent RNA polymerase and Triple Gene Block (TGB)123-Coat protein (CP) coding regions into two constructs each bearing the AltMV 5' and 3' non-coding regions, which recombined in planta to generate a full-length AltMV genome. Substitution of TGB1 L(88)P, and equivalent changes in other potexvirus TGB1 proteins, affected RNA silencing suppression efficacy and the suitability of the vectors for protein expression versus VIGS. Published by Elsevier Inc.
Pulse Code Modulation (PCM) encoder handbook for Aydin Vector MMP-600 series system
NASA Technical Reports Server (NTRS)
Currier, S. F.; Powell, W. R.
1986-01-01
The hardware and software characteristics of a time division multiplex system are described. The system is used to sample analog and digital data. The data is merged with synchronization information to produce a serial pulse coded modulation (PCM) bit stream. Information presented herein is required by users to design compatible interfaces and assure effective utilization of this encoder system. GSFC/Wallops Flight Facility has flown approximately 50 of these systems through 1984 on sounding rockets with no inflight failures. Aydin Vector manufactures all of the components for these systems.
2014-09-30
portability is difficult to achieve on future supercomputers that use various types of accelerators (GPUs, Xeon Phi, SIMD, etc.). All of these...bottlenecks of NUMA. For example, in the CG code the state vector was originally stored as q(1:Nvar, 1:Npoin), where Nvar is the number of...a Global Grid Point (GGP) storage. On the other hand, in the DG code the state vector is typically stored as q(1:Nvar, 1:Npts, 1:Nelem), where
Magnetic resonance image compression using scalar-vector quantization
NASA Astrophysics Data System (ADS)
Mohsenian, Nader; Shahri, Homayoun
1995-12-01
A new coding scheme based on the scalar-vector quantizer (SVQ) is developed for compression of medical images. SVQ is a fixed-rate encoder and its rate-distortion performance is close to that of optimal entropy-constrained scalar quantizers (ECSQs) for memoryless sources. The use of a fixed-rate quantizer is expected to eliminate some of the complexity issues of using variable-length scalar quantizers. When transmission of images over noisy channels is considered, our coding scheme does not suffer from error propagation which is typical of coding schemes which use variable-length codes. For a set of magnetic resonance (MR) images, coding results obtained from SVQ and ECSQ at low bit-rates are indistinguishable. Furthermore, our encoded images are perceptually indistinguishable from the original, when displayed on a monitor. This makes our SVQ based coder an attractive compression scheme for picture archiving and communication systems (PACS), currently under consideration for an all digital radiology environment in hospitals, where reliable transmission, storage, and high fidelity reconstruction of images are desired.
Marucci-Wellman, Helen R; Corns, Helen L; Lehto, Mark R
2017-01-01
Injury narratives are now available in real time and include useful information for injury surveillance and prevention. However, manual classification of the cause or events leading to injury found in large batches of narratives, such as workers compensation claims databases, can be prohibitive. In this study we compare the utility of four machine learning algorithms (Naïve Bayes single-word and bi-gram models, Support Vector Machine, and Logistic Regression) for classifying narratives into Bureau of Labor Statistics Occupational Injury and Illness event leading to injury classifications for a large workers compensation database. These algorithms are known to do well classifying narrative text and are fairly easy to implement with off-the-shelf software such as Python. We propose human-machine learning ensemble approaches which maximize the power and accuracy of the algorithms for machine-assigned codes and allow for strategic filtering of rare, emerging, or ambiguous narratives for manual review. We compare human-machine approaches based on filtering on the prediction strength of the classifier vs. agreement between algorithms. Regularized Logistic Regression (LR) was the best-performing algorithm alone. Using this algorithm and filtering out the bottom 30% of predictions for manual review resulted in high accuracy (overall sensitivity/positive predictive value of 0.89) for the final machine-human coded dataset. The best pairings of algorithms included Naïve Bayes with Support Vector Machine, whereby the triple ensemble NB_SW = NB_bi-gram = SVM (all three classifiers agreeing) had very high performance (0.93 overall sensitivity/positive predictive value) with high accuracy across both large and small categories, leaving 41% of the narratives for manual review. Integrating LR into this ensemble mix improved performance only slightly. For large administrative datasets we propose incorporating human-machine pairings such as those used here, utilizing readily available off-the-shelf machine learning techniques, so that only a fraction of narratives require manual review. Human-machine ensemble methods are likely to improve performance over total manual coding. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
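A sketch of the prediction-strength filtering idea in scikit-learn: auto-accept the codes the classifier is most confident about and route the weakest 30% to human coders. The TF-IDF feature pipeline and all names are assumptions for illustration, not the paper's implementation.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def code_with_review(train_texts, train_codes, new_texts, review_fraction=0.30):
    """Return machine-assigned codes and a mask of narratives to review manually."""
    vec = TfidfVectorizer(ngram_range=(1, 2))
    clf = LogisticRegression(max_iter=1000)
    clf.fit(vec.fit_transform(train_texts), train_codes)
    proba = clf.predict_proba(vec.transform(new_texts))
    strength = proba.max(axis=1)                 # prediction strength per narrative
    cutoff = np.quantile(strength, review_fraction)
    machine_codes = clf.classes_[proba.argmax(axis=1)]
    needs_review = strength < cutoff             # bottom 30% go to human coders
    return machine_codes, needs_review
```

Agreement-based filtering would instead train several classifiers and flag narratives on which they disagree; the paper compares both strategies.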
Gravitational and Magnetic Anomaly Inversion Using a Tree-Based Geometry Representation
2009-06-01
find successive minimized vectors. Throughout this paper, the term iteration refers to a single loop through a stage of the global scheme, not...
NASA Astrophysics Data System (ADS)
Kotchenova, Svetlana Y.; Vermote, Eric F.; Matarrese, Raffaella; Klemm, Frank J., Jr.
2006-09-01
A vector version of the 6S (Second Simulation of a Satellite Signal in the Solar Spectrum) radiative transfer code (6SV1), which enables accounting for radiation polarization, has been developed and validated against a Monte Carlo code, Coulson's tabulated values, and MOBY (Marine Optical Buoy System) water-leaving reflectance measurements. The developed code was also tested against the scalar codes SHARM, DISORT, and MODTRAN to evaluate its performance in scalar mode and the influence of polarization. The obtained results have shown a good agreement of 0.7% in comparison with the Monte Carlo code, 0.2% for Coulson's tabulated values, and 0.001-0.002 for the 400-550 nm region for the MOBY reflectances. Ignoring the effects of polarization led to large errors in calculated top-of-atmosphere reflectances: more than 10% for a molecular atmosphere and up to 5% for an aerosol atmosphere. This new version of 6S is intended to replace the previous scalar version used for calculation of lookup tables in the MODIS (Moderate Resolution Imaging Spectroradiometer) atmospheric correction algorithm.
NASA Technical Reports Server (NTRS)
Rarig, P. L.
1980-01-01
A program to calculate upwelling infrared radiation was modified to operate efficiently on the STAR-100. The modified software processes specific test cases significantly faster than the initial STAR-100 code. For example, a midlatitude summer atmospheric model is executed in less than 2% of the time originally required on the STAR-100. Furthermore, the optimized program performs extra operations to save the calculated absorption coefficients. Some of the advantages and pitfalls of virtual memory and vector processing are discussed, along with strategies used to avoid loss of accuracy and computing power. Results from the vectorized code, in terms of speed, cost, and relative error with respect to serial code solutions, are encouraging.
Transformable Rhodobacter strains, method for producing transformable Rhodobacter strains
Laible, Philip D.; Hanson, Deborah K.
2018-05-08
The invention provides an organism for expressing foreign DNA, the organism engineered to accept standard DNA carriers. The genome of the organism codes for intracytoplasmic membranes and features an interruption in at least one of the genes coding for restriction enzymes. Further provided is a system for producing biological materials comprising: selecting a vehicle to carry DNA which codes for the biological materials; determining sites on the vehicle's DNA sequence susceptible to restriction enzyme cleavage; choosing an organism to accept the vehicle based on that organism not acting upon at least one of said vehicle's sites; engineering said vehicle to contain said DNA, thereby creating a synthetic vector; and causing the synthetic vector to enter the organism so as to cause expression of said DNA.
NASA Astrophysics Data System (ADS)
Daneshgaran, Fred; Mondin, Marina; Olia, Khashayar
This paper is focused on the problem of Information Reconciliation (IR) for continuous variable Quantum Key Distribution (QKD). The main problem is quantization and assignment of labels to the samples of the Gaussian variables observed at Alice and Bob. The trouble is that most of the samples, assuming that the Gaussian variable is zero mean, which is de facto the case, tend to have small magnitudes and are easily disturbed by noise. Transmission over longer and longer distances increases the losses, corresponding to a lower effective Signal-to-Noise Ratio (SNR) and exacerbating the problem. Quantization over higher dimensions is advantageous since it allows for fractional bit-per-sample accuracy, which may be needed at very low SNR conditions whereby the achievable secret key rate is significantly less than one bit per sample. In this paper, we propose to use Permutation Modulation (PM) for quantization of Gaussian vectors potentially containing thousands of samples. PM is applied to the magnitudes of the Gaussian samples, and we explore the dependence of the sign error probability on the magnitude of the samples. At very low SNR, we may transmit the entire label of the PM code from Bob to Alice in Reverse Reconciliation (RR) over the public channel. The side information extracted from this label can then be used by Alice to characterize the sign error probability of her individual samples. Forward Error Correction (FEC) coding can be used by Bob on each subset of samples with similar sign error probability to aid Alice in error correction. This can be done for different subsets of samples with similar sign error probabilities, leading to an Unequal Error Protection (UEP) coding paradigm.
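A hedged sketch of the PM quantization step, assuming numpy: the codebook is the set of all permutations of one fixed template of magnitudes, so the nearest codeword is found by sorting rather than exhaustive search. The template choice and the label format (ranking permutation plus sign bits) are illustrative.

```python
import numpy as np

def pm_encode(x):
    """PM label for vector x: the permutation ranking samples by magnitude,
    plus the sign bits (the sign errors are what FEC must later clean up)."""
    order = np.argsort(-np.abs(x))   # positions sorted by decreasing magnitude
    signs = np.sign(x)
    return order, signs

def pm_decode(order, signs, template):
    """template: nonincreasing vector of representative magnitudes."""
    y = np.empty_like(template)
    y[order] = template              # largest template value -> largest |x| slot
    return signs * y
```

Because the permutation that matches sorted magnitudes to a sorted template minimizes squared error over all permutations, encoding costs only a sort even when the vector holds thousands of samples.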
Fusion PIC code performance analysis on the Cori KNL system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koskela, Tuomas S.; Deslippe, Jack; Friesen, Brian
We study the attainable performance of Particle-In-Cell codes on the Cori KNL system by analyzing a miniature particle push application based on the fusion PIC code XGC1. We start from the most basic building blocks of a PIC code and build up the complexity to identify the kernels that cost the most in performance and focus optimization efforts there. Particle push kernels operate at high arithmetic intensity (AI) and are not likely to be memory bandwidth or even cache bandwidth bound on KNL. Therefore, we see only minor benefits from the high bandwidth memory available on KNL, and achieving good vectorization is shown to be the most beneficial optimization path, with a theoretical yield of up to 8x speedup on KNL. In practice we are able to obtain up to a 4x gain from vectorization due to limitations set by the data layout and memory latency.
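A minimal illustration of why data layout governs vectorization in a particle push, assuming numpy; this generic leapfrog-style update is not the XGC1 kernel, and the field gather is omitted.

```python
import numpy as np

def push_soa(x, v, E, dt, qm):
    """x, v, E: arrays of shape (3, n_particles), one contiguous row per
    component (structure-of-arrays). Whole-array updates like these map
    directly onto hardware vector units; an array-of-structures layout
    (one particle per record) would scatter each component across memory
    and defeat vectorization."""
    v += qm * E * dt
    x += v * dt
    return x, v
```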
Vectorized schemes for conical potential flow using the artificial density method
NASA Technical Reports Server (NTRS)
Bradley, P. F.; Dwoyer, D. L.; South, J. C., Jr.; Keen, J. M.
1984-01-01
A method is developed to determine solutions to the full-potential equation for steady supersonic conical flow using the artificial density method. Various update schemes used generally for transonic potential solutions are investigated. The schemes are compared for speed and robustness. All versions of the computer code have been vectorized and are currently running on the CYBER-203 computer. The update schemes are vectorized, where possible, either fully (explicit schemes) or partially (implicit schemes). Since each version of the code differs only by the update scheme and elements other than the update scheme are completely vectorizable, comparisons of computational effort and convergence rate among schemes are a measure of the specific scheme's performance. Results are presented for circular and elliptical cones at angle of attack for subcritical and supercritical crossflows.
An efficient system for reliably transmitting image and video data over low bit rate noisy channels
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.; Huang, Y. F.; Stevenson, Robert L.
1994-01-01
This research project is intended to develop an efficient system for reliably transmitting image and video data over low bit rate noisy channels. The basic ideas behind the proposed approach are the following: employ statistical-based image modeling to facilitate pre- and post-processing and error detection, use spare redundancy that the source compression did not remove to add robustness, and implement coded modulation to improve bandwidth efficiency and noise rejection. Over the last six months, progress has been made on various aspects of the project. Through our studies of the integrated system, a list-based iterative Trellis decoder has been developed. The decoder accepts feedback from a post-processor which can detect channel errors in the reconstructed image. The error detection is based on the Huber Markov random field image model for the compressed image. The compression scheme used here is that of JPEG (Joint Photographic Experts Group). Experiments were performed and the results are quite encouraging. The principal ideas here are extendable to other compression techniques. In addition, research was also performed on unequal error protection channel coding, subband vector quantization as a means of source coding, and post processing for reducing coding artifacts. Our studies on unequal error protection (UEP) coding for image transmission focused on examining the properties of the UEP capabilities of convolutional codes. The investigation of subband vector quantization employed a wavelet transform with special emphasis on exploiting interband redundancy. The outcome of this investigation included the development of three algorithms for subband vector quantization. The reduction of transform coding artifacts was studied with the aid of a non-Gaussian Markov random field model. This results in improved image decompression. These studies are summarized and the technical papers included in the appendices.
Deep Hashing for Scalable Image Search.
Lu, Jiwen; Liong, Venice Erin; Zhou, Jie
2017-05-01
In this paper, we propose a new deep hashing (DH) approach to learn compact binary codes for scalable image search. Unlike most existing binary codes learning methods, which usually seek a single linear projection to map each sample into a binary feature vector, we develop a deep neural network to seek multiple hierarchical non-linear transformations to learn these binary codes, so that the non-linear relationship of samples can be well exploited. Our model is learned under three constraints at the top layer of the developed deep network: 1) the loss between the compact real-valued code and the learned binary vector is minimized, 2) the binary codes distribute evenly on each bit, and 3) different bits are as independent as possible. To further improve the discriminative power of the learned binary codes, we extend DH into supervised DH (SDH) and multi-label SDH by including a discriminative term into the objective function of DH, which simultaneously maximizes the inter-class variations and minimizes the intra-class variations of the learned binary codes with the single-label and multi-label settings, respectively. Extensive experimental results on eight widely used image search data sets show that our proposed methods achieve very competitive results with the state of the art.
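The three top-layer constraints can be written as penalty terms on a batch of real-valued codes; the numpy sketch below is illustrative and is not the authors' implementation.

```python
import numpy as np

def dh_penalties(H):
    """H: batch of real-valued codes, shape (batch, bits)."""
    B = np.sign(H)                                   # target binary codes in {-1, +1}
    quant_loss = ((H - B) ** 2).mean()               # 1) stay close to binary values
    balance_loss = (H.mean(axis=0) ** 2).sum()       # 2) each bit ~half +1 / half -1
    corr = H.T @ H / len(H)
    indep_loss = ((corr - np.eye(H.shape[1])) ** 2).sum()  # 3) decorrelated bits
    return quant_loss, balance_loss, indep_loss
```

In training, a weighted sum of these terms would be added to the network's loss and minimized by backpropagation.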
Kotchenova, Svetlana Y; Vermote, Eric F
2007-07-10
This is the second part of the validation effort of the recently developed vector version of the 6S (Second Simulation of a Satellite Signal in the Solar Spectrum) radiative transfer code (6SV1), primarily used for the calculation of look-up tables in the Moderate Resolution Imaging Spectroradiometer (MODIS) atmospheric correction algorithm. The 6SV1 code was tested against a Monte Carlo code and Coulson's tabulated values for molecular and aerosol atmospheres bounded by different Lambertian and anisotropic surfaces. The code was also tested in scalar mode against the scalar code SHARM to resolve the previous 6S accuracy issues in the case of an anisotropic surface. All test cases were characterized by good agreement between the 6SV1 and the other codes: The overall relative error did not exceed 0.8%. The study also showed that ignoring the effects of radiation polarization in the atmosphere led to large errors in the simulated top-of-atmosphere reflectances: The maximum observed error was approximately 7.2% for both Lambertian and anisotropic surfaces.
Parallel processors and nonlinear structural dynamics algorithms and software
NASA Technical Reports Server (NTRS)
Belytschko, Ted
1990-01-01
Techniques are discussed for the implementation and improvement of vectorization and concurrency in nonlinear explicit structural finite element codes. In explicit integration methods, the computation of the element internal force vector consumes the bulk of the computer time. The program can be efficiently vectorized by subdividing the elements into blocks and executing all computations in vector mode. The structuring of elements into blocks also provides a convenient way to implement concurrency by creating tasks which can be assigned to available processors for evaluation. The techniques were implemented in a 3-D nonlinear program with one-point quadrature shell elements. Concurrency and vectorization were first implemented in a single time step version of the program. Techniques were developed to minimize processor idle time and to select the optimal vector length. A comparison of run times between the program executed in scalar, serial mode and the fully vectorized code executed concurrently using eight processors shows speed-ups of over 25. Conjugate gradient methods for solving nonlinear algebraic equations are also readily adapted to a parallel environment. A new technique for improving convergence properties of conjugate gradients in nonlinear problems is developed in conjunction with other techniques such as diagonal scaling. A significant reduction in the number of iterations required for convergence is shown for a statically loaded rigid bar suspended by three equally spaced springs.
Drop-out phagemid vector for switching from phage displayed affinity reagents to expression formats.
Pershad, Kritika; Sullivan, Mark A; Kay, Brian K
2011-05-15
Affinity reagents that are generated by phage display are typically subcloned into an expression vector for further biochemical characterization. This insert transfer process is time consuming and laborious especially if many inserts are to be subcloned. To simplify the transfer process, we have constructed a "drop-out" phagemid vector that can be rapidly converted to an expression vector by a simple restriction enzyme digestion with MfeI (to "drop-out" the gene III coding sequence), which generates alkaline phosphatase (AP) fusions of the affinity reagents on religation. Subsequently, restriction digestion with AscI drops out the AP coding region and religation generates affinity reagents with a C-terminal six-histidine tag. To validate the usefulness of this vector, four different human single chain Fragments of variable regions (scFv) were tested, three of which show specific binding to three zebrafish (Danio rerio) proteins, namely suppression of tumorigenicity 13, recoverin, and Ppib and the fourth binds to human Lactoferrin protein. For each of the constructs tested, the gene III and AP drop-out efficiency was between 90% and 100%. This vector is especially useful in speeding up the downstream screening of affinity reagents and bypassing the time-consuming subcloning experiments. Copyright © 2011 Elsevier Inc. All rights reserved.
Balancing aggregation and smoothing errors in inverse models
NASA Astrophysics Data System (ADS)
Turner, A. J.; Jacob, D. J.
2015-01-01
Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.
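For concreteness, option (1) above can be sketched as an aggregation operator that block-averages a native-resolution state vector; the shapes and block size are illustrative.

```python
import numpy as np

def coarsen_operator(n_native, block):
    """G maps a native-resolution state vector to block averages: x_red = G @ x."""
    n_coarse = n_native // block
    G = np.zeros((n_coarse, n_native))
    for i in range(n_coarse):
        G[i, i*block:(i+1)*block] = 1.0 / block   # each coarse element averages one block
    return G

# x_reduced = coarsen_operator(1000, 10) @ x_native
```

The aggregation error then comes from fixing the within-block pattern a priori instead of letting the observations optimize it, which is exactly what the GMM/RBF option relaxes.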
AAV-mediated RLBP1 gene therapy improves the rate of dark adaptation in Rlbp1 knockout mice
Choi, Vivian W; Bigelow, Chad E; McGee, Terri L; Gujar, Akshata N; Li, Hui; Hanks, Shawn M; Vrouvlianis, Joanna; Maker, Michael; Leehy, Barrett; Zhang, Yiqin; Aranda, Jorge; Bounoutas, George; Demirs, John T; Yang, Junzheng; Ornberg, Richard; Wang, Yu; Martin, Wendy; Stout, Kelly R; Argentieri, Gregory; Grosenstein, Paul; Diaz, Danielle; Turner, Oliver; Jaffee, Bruce D; Police, Seshidhar R; Dryja, Thaddeus P
2015-01-01
Recessive mutations in RLBP1 cause a form of retinitis pigmentosa in which the retina, before its degeneration leads to blindness, abnormally slowly recovers sensitivity after exposure to light. To develop a potential gene therapy for this condition, we tested multiple recombinant adeno-associated vectors (rAAVs) composed of different promoters, capsid serotypes, and genome conformations. We generated rAAVs in which sequences from the promoters of the human RLBP1, RPE65, or BEST1 genes drove the expression of a reporter gene (green fluorescent protein). A promoter derived from the RLBP1 gene mediated expression in the retinal pigment epithelium and Müller cells (the intended target cell types) at qualitatively higher levels than in other retinal cell types in wild-type mice and monkeys. With this promoter upstream of the coding sequence of the human RLBP1 gene, we compared the potencies of vectors with an AAV2 versus an AAV8 capsid in transducing mouse retinas, and we compared vectors with a self-complementary versus a single-stranded genome. The optimal vector (scAAV8-pRLBP1-hRLBP1) had serotype 8 capsid and a self-complementary genome. Subretinal injection of scAAV8-pRLBP1-hRLBP1 in Rlbp1 nullizygous mice improved the rate of dark adaptation based on scotopic (rod-plus-cone) and photopic (cone) electroretinograms (ERGs). The effect was still present after 1 year. PMID:26199951
Piechaczek, C; Fetzer, C; Baiker, A; Bode, J; Lipps, H J
1999-01-01
We have developed an episomal replicating expression vector in which the SV40 gene coding for the large T-antigen was replaced by chromosomal scaffold/matrix attached regions. Southern analysis as well as vector rescue experiments in CHO cells and in Escherichia coli demonstrate that the vector replicates episomally in CHO cells. It occurs in a very low copy number in the cells and is stably maintained over more than 100 generations without selection pressure. PMID:9862961
Medical Image Compression Based on Vector Quantization with Variable Block Sizes in Wavelet Domain
Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo
2012-01-01
An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet-coefficient subband. Then an optimal quadtree method was employed to partition each wavelet-coefficient subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. Finally, vector quantization coding was applied to the different types of sub-blocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as comparison algorithms. Experimental results show that the proposed method can improve compression performance and can achieve a balance between the compression ratio and the image visual quality. PMID:23049544
User's and test case manual for FEMATS
NASA Technical Reports Server (NTRS)
Chatterjee, Arindam; Volakis, John; Nurnberger, Mike; Natzke, John
1995-01-01
The FEMATS program incorporates first-order edge-based finite elements and vector absorbing boundary conditions into the scattered field formulation for computation of the scattering from three-dimensional geometries. The code has been validated extensively for a large class of geometries containing inhomogeneities and satisfying transition conditions. For geometries that are too large for the workstation environment, the FEMATS code has been optimized to run on various supercomputers. Currently, FEMATS has been configured to run on the HP 9000 workstation, vectorized for the Cray Y-MP, and parallelized to run on the Kendall Square Research (KSR) architecture and the Intel Paragon.
High Order Entropy-Constrained Residual VQ for Lossless Compression of Images
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Smith, Mark J. T.; Scales, Allen
1995-01-01
High order entropy coding is a powerful technique for exploiting high order statistical dependencies. However, the exponentially high complexity associated with such a method often discourages its use. In this paper, an entropy-constrained residual vector quantization method is proposed for lossless compression of images. The method consists of first quantizing the input image using a high order entropy-constrained residual vector quantizer and then coding the residual image using a first order entropy coder. The distortion measure used in the entropy-constrained optimization is essentially the first order entropy of the residual image. Experimental results show very competitive performance.
New Syndrome Decoding Techniques for the (n, K) Convolutional Codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1983-01-01
This paper presents a new syndrome decoding algorithm for the (n,k) convolutional codes (CC) which differs completely from an earlier syndrome decoding algorithm of Schalkwijk and Vinck. The new algorithm is based on the general solution of the syndrome equation, a linear Diophantine equation for the error polynomial vector E(D). The set of Diophantine solutions is a coset of the CC. In this error coset a recursive, Viterbi-like algorithm is developed to find the minimum weight error vector Ê(D). An example, illustrating the new decoding algorithm, is given for the binary nonsystematic (3,1) CC.
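In outline, and with notation assumed here rather than taken from the paper, the solution structure that such a decoder searches can be sketched as follows.

```latex
% Hedged sketch of the syndrome-decoding setup (symbols assumed, not the
% paper's): G(D) and H(D) are the code's generator and parity-check
% matrices, R(D) the received polynomial vector, and S(D) its syndrome.
\[
  S(D) = R(D)\,H^{T}(D) = E(D)\,H^{T}(D),
\]
% since code words have zero syndrome, the error vectors consistent with
% S(D) form a coset of the code: a particular solution plus any code word,
\[
  \{\, E(D) \,\} \;=\; \{\, E_{0}(D) + V(D)\,G(D) \;:\; V(D)\ \text{arbitrary} \,\},
\]
% and the decoder searches this coset, via a Viterbi-like recursion, for
% the minimum-weight estimate \(\hat{E}(D)\).
```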
Simplified Syndrome Decoding of (n, 1) Convolutional Codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1983-01-01
A new syndrome decoding algorithm for the (n, 1) convolutional codes (CC) that is different from and simpler than the previous syndrome decoding algorithm of Schalkwijk and Vinck is presented. The new algorithm uses the general solution of the polynomial linear Diophantine equation for the error polynomial vector E(D). This set of Diophantine solutions is a coset of the CC space. A recursive or Viterbi-like algorithm is developed to find the minimum weight error vector Ê(D) in this error coset. An example illustrating the new decoding algorithm is given for the binary nonsystematic (2,1) CC.
Determination of coronal magnetic fields from vector magnetograms
NASA Technical Reports Server (NTRS)
Mikic, Zoran
1992-01-01
The determination of coronal magnetic fields from vector magnetograms is studied, including the development and application of algorithms to determine force-free coronal fields above selected observations of active regions. Two additional active regions were selected and analyzed. The restriction of periodicity in the 3-D code used to determine the coronal field was removed; the new code has variable mesh spacing and is thus able to provide a more realistic description of coronal fields. The NOAA active region AR5747 of 20 Oct. 1989 was studied. A brief account of progress during the research performed is reported.
Bounded-Angle Iterative Decoding of LDPC Codes
NASA Technical Reports Server (NTRS)
Dolinar, Samuel; Andrews, Kenneth; Pollara, Fabrizio; Divsalar, Dariush
2009-01-01
Bounded-angle iterative decoding is a modified version of conventional iterative decoding, conceived as a means of reducing undetected-error rates for short low-density parity-check (LDPC) codes. For a given code, bounded-angle iterative decoding can be implemented by means of a simple modification of the decoder algorithm, without redesigning the code. Bounded-angle iterative decoding is based on a representation of received words and code words as vectors in an n-dimensional Euclidean space (where n is an integer).
Vector coding of wavelet-transformed images
NASA Astrophysics Data System (ADS)
Zhou, Jun; Zhi, Cheng; Zhou, Yuanhua
1998-09-01
The wavelet, as a brand-new tool in signal processing, has gained broad recognition. Using the wavelet transform, we can obtain octave-divided frequency bands with specific orientations, which combine well with the properties of the human visual system. In this paper, we discuss a classified vector quantization method for multiresolution-represented images.
Vector quantizer based on brightness maps for image compression with the polynomial transform
NASA Astrophysics Data System (ADS)
Escalante-Ramirez, Boris; Moreno-Gutierrez, Mauricio; Silvan-Cardenas, Jose L.
2002-11-01
We present a vector quantization scheme acting on brightness fields, based on distance/distortion criteria that correspond to psycho-visual aspects. These criteria quantify sensorial distortion between vectors that represent either portions of a digital image or, alternatively, coefficients of a transform-based coding system. In the latter case, we use an image representation model, namely the Hermite transform, that is based on some of the main perceptual characteristics of the human vision system (HVS) and its response to light stimuli. Energy coding in the brightness domain, determination of local structure, codebook training, and local orientation analysis are all obtained by means of the Hermite transform. This paper, for thematic reasons, is divided into six sections. The first briefly highlights the importance of having newer and better compression algorithms; it also explains the most relevant characteristics of the HVS and the advantages and disadvantages related to the behavior of our vision under ocular stimuli. The second section gives a quick review of vector quantization techniques, focusing on their performance in image processing, as a preview of the image vector-quantizer compressor constructed in section five. The third section gathers the most important data on brightness models; the building of these so-called brightness maps (quantifications of human perception of the reflectance of visible objects) in a two-dimensional model is addressed there. The Hermite transform, a special case of polynomial transforms, and its usefulness are treated, in an applicable discrete form, in the fourth section. As we have learned from previous work [1], the Hermite transform has been shown to be a useful and practical solution for efficiently coding the energy within an image block and deciding which kind of quantization (scalar or vector) is to be used upon it. It is also a unique tool for structurally classifying the image block within a given lattice; this particular operation is intended to be one of the main contributions of this work. The fifth section fuses the proposals derived from the study of the three main topics addressed in the preceding sections in order to propose an image compression model that takes advantage of vector quantizers inside the brightness-transformed domain to determine the most important structures, finding the energy distribution inside the Hermite domain. The sixth and last section shows some results obtained while testing the coding-decoding model. The guidelines used to evaluate image compression performance were compression ratio, SNR, and psycho-visual quality. Some conclusions derived from the research, and possible unexplored paths, are presented in this section as well.
Achieving High Performance on the i860 Microprocessor
NASA Technical Reports Server (NTRS)
Lee, King; Kutler, Paul (Technical Monitor)
1998-01-01
The i860 is a high performance microprocessor used in the Intel Touchstone project. This paper proposes a paradigm for programming the i860 that is modelled on the vector instructions of the Cray computers. Fortran callable assembler subroutines were written that mimic the concurrent vector instructions of the Cray. Cache takes the place of vector registers. Using this paradigm we have achieved twice the performance of compiled code on a traditional solve.
Ideal form of optical plasma lenses
NASA Astrophysics Data System (ADS)
Gordon, D. F.; Stamm, A. B.; Hafizi, B.; Johnson, L. A.; Kaganovich, D.; Hubbard, R. F.; Richardson, A. S.; Zhigunov, D.
2018-06-01
The canonical form of an optical plasma lens is a parabolic density channel. This form suffers from spherical aberrations, among others. Spherical aberration is partially corrected by adding a quartic term to the radial density profile. Ideal forms which lead to perfect focusing or imaging are obtained. The fields at the focus of a strong lens are computed with high accuracy and efficiency using a combination of eikonal and full Maxwell descriptions of the radiation propagation. The calculations are performed using a new computer propagation code, SeaRay, which is designed to transition between various solution methods as the beam propagates through different spatial regions. The calculations produce the full Maxwell vector fields in the focal region.
Nakamura, Mikiko; Suzuki, Ayako; Akada, Junko; Tomiyoshi, Keisuke; Hoshida, Hisashi; Akada, Rinji
2015-12-01
Mammalian gene expression constructs are generally prepared in a plasmid vector, in which a promoter and terminator are located upstream and downstream of a protein-coding sequence, respectively. In this study, we found that front terminator constructs (DNA constructs containing a terminator upstream of a promoter rather than downstream of a coding region) could sufficiently express proteins as a result of end joining of the introduced DNA fragment. By taking advantage of front terminator constructs, FLAG substitutions and deletions were generated using mutagenesis primers to identify amino acids specifically recognized by commercial FLAG antibodies. A minimal epitope sequence for polyclonal FLAG antibody recognition was also identified. In addition, we analyzed the sequence of a C-terminal Ser-Lys-Leu peroxisome localization signal and identified the key residues necessary for peroxisome targeting. Moreover, front terminator constructs of hepatitis B surface antigen were used for deletion analysis, leading to the identification of regions required for particle formation. Collectively, these results indicate that front terminator constructs allow for easy manipulation of C-terminal protein-coding sequences, and suggest that direct gene expression with PCR-amplified DNA is useful for high-throughput protein analysis in mammalian cells.
NASA Technical Reports Server (NTRS)
Pratt, D. T.
1984-01-01
An interactive computer code for simulation of a high-intensity turbulent combustor as a single-point inhomogeneous stirred reactor was developed from an existing batch-processing computer code, CDPSR. The interactive CDPSR code was used as a guide for interpretation and direction of DOE-sponsored companion experiments utilizing a Xenon tracer with optical laser diagnostic techniques to experimentally determine the appropriate mixing frequency, and for validation of CDPSR as a mixing-chemistry model for a laboratory jet-stirred reactor. The coalescence-dispersion model for finite-rate mixing was incorporated into an existing interactive code, AVCO-MARK I, to enable simulation of a combustor as a modular array of stirred-flow and plug-flow elements, each having a prescribed finite mixing frequency, or axial distribution of mixing frequency, as appropriate. The speed and reliability of the batch kinetics integrator code CREKID were further increased by rewriting it in vectorized form for execution on a vector or parallel processor, and by incorporating numerical techniques that enhance execution speed by permitting specification of a very low accuracy tolerance.
Vector-based navigation using grid-like representations in artificial agents.
Banino, Andrea; Barry, Caswell; Uria, Benigno; Blundell, Charles; Lillicrap, Timothy; Mirowski, Piotr; Pritzel, Alexander; Chadwick, Martin J; Degris, Thomas; Modayil, Joseph; Wayne, Greg; Soyer, Hubert; Viola, Fabio; Zhang, Brian; Goroshin, Ross; Rabinowitz, Neil; Pascanu, Razvan; Beattie, Charlie; Petersen, Stig; Sadik, Amir; Gaffney, Stephen; King, Helen; Kavukcuoglu, Koray; Hassabis, Demis; Hadsell, Raia; Kumaran, Dharshan
2018-05-01
Deep neural networks have achieved impressive successes in fields ranging from object recognition to complex games such as Go [1,2]. Navigation, however, remains a substantial challenge for artificial agents, with deep neural networks trained by reinforcement learning [3-5] failing to rival the proficiency of mammalian spatial behaviour, which is underpinned by grid cells in the entorhinal cortex [6]. Grid cells are thought to provide a multi-scale periodic representation that functions as a metric for coding space [7,8] and is critical for integrating self-motion (path integration) [6,7,9] and planning direct trajectories to goals (vector-based navigation) [7,10,11]. Here we set out to leverage the computational functions of grid cells to develop a deep reinforcement learning agent with mammal-like navigational abilities. We first trained a recurrent network to perform path integration, leading to the emergence of representations resembling grid cells, as well as other entorhinal cell types [12]. We then showed that this representation provided an effective basis for an agent to locate goals in challenging, unfamiliar, and changeable environments, optimizing the primary objective of navigation through deep reinforcement learning. The performance of agents endowed with grid-like representations surpassed that of an expert human and comparison agents, with the metric quantities necessary for vector-based navigation derived from grid-like units within the network. Furthermore, grid-like representations enabled agents to conduct shortcut behaviours reminiscent of those performed by mammals. Our findings show that emergent grid-like representations furnish agents with a Euclidean spatial metric and associated vector operations, providing a foundation for proficient navigation. As such, our results support neuroscientific theories that see grid cells as critical for vector-based navigation [7,10,11], demonstrating that the latter can be combined with path-based strategies to support navigation in challenging environments.
COMPUTATION OF GLOBAL PHOTOCHEMISTRY WITH SMVGEAR II (R823186)
A computer model was developed to simulate global gas-phase photochemistry. The model solves chemical equations with SMVGEAR II, a sparse-matrix, vectorized Gear-type code. To obtain SMVGEAR II, the original SMVGEAR code was modified to allow computation of different sets of chem...
Development of an Optimum Interpolation Analysis Method for the CYBER 205
NASA Technical Reports Server (NTRS)
Nestler, M. S.; Woollen, J.; Brin, Y.
1985-01-01
A state-of-the-art technique to assimilate the diverse observational database obtained during FGGE, and thus create initial conditions for numerical forecasts is described. The GLA optimum interpolation (OI) analysis method analyzes pressure, winds, and temperature at sea level, mixing ratio at six mandatory pressure levels up to 300 mb, and heights and winds at twelve levels up to 50 mb. Conversion to the CYBER 205 required a major re-write of the Amdahl OI code to take advantage of the CYBER vector processing capabilities. Structured programming methods were used to write the programs and this has resulted in a modular, understandable code. Among the contributors to the increased speed of the CYBER code are a vectorized covariance-calculation routine, an extremely fast matrix equation solver, and an innovative data search and sort technique.
Evaluation of Cache-based Superscalar and Cacheless Vector Architectures for Scientific Computations
NASA Technical Reports Server (NTRS)
Oliker, Leonid; Carter, Jonathan; Shalf, John; Skinner, David; Ethier, Stephane; Biswas, Rupak; Djomehri, Jahed; VanderWijngaart, Rob
2003-01-01
The growing gap between sustained and peak performance for scientific applications has become a well-known problem in high performance computing. The recent development of parallel vector systems offers the potential to bridge this gap for a significant number of computational science codes and deliver a substantial increase in computing capabilities. This paper examines the intranode performance of the NEC SX6 vector processor and the cache-based IBM Power3/4 superscalar architectures across a number of key scientific computing areas. First, we present the performance of a microbenchmark suite that examines a full spectrum of low-level machine characteristics. Next, we study the behavior of the NAS Parallel Benchmarks using some simple optimizations. Finally, we evaluate the performance of several numerical codes from key scientific computing domains. Overall results demonstrate that the SX6 achieves high performance on a large fraction of our application suite and in many cases significantly outperforms the RISC-based architectures. However, certain classes of applications are not easily amenable to vectorization and would likely require extensive reengineering of both algorithm and implementation to utilize the SX6 effectively.
Clément, Nathalie; Avalosse, Bernard; El Bakkouri, Karim; Velu, Thierry; Brandenburger, Annick
2001-01-01
The production of wild-type-free stocks of recombinant parvovirus minute virus of mice [MVM(p)] is difficult due to the presence of homologous sequences in vector and helper genomes that cannot easily be eliminated from the overlapping coding sequences. We have therefore cloned and sequenced spontaneously occurring defective particles of MVM(p) with very small genomes to identify the minimal cis-acting sequences required for DNA amplification and virus production. One of them has lost all capsid-coding sequences but is still able to replicate in permissive cells when nonstructural proteins are provided in trans by a helper plasmid. Vectors derived from this particle produce stocks with no detectable wild-type MVM after cotransfection with new, matched, helper plasmids that present no homology downstream from the transgene. PMID:11152501
Necessary conditions for the optimality of variable rate residual vector quantizers
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Smith, Mark J. T.; Barnes, Christopher F.
1993-01-01
Residual vector quantization (RVQ), or multistage VQ, as it is also called, has recently been shown to be a competitive technique for data compression. The competitive performance reported for RVQ results from the joint optimization of variable-rate encoding and RVQ direct-sum code books. In this paper, necessary conditions for the optimality of variable rate RVQ's are derived, and an iterative descent algorithm based on a Lagrangian formulation is introduced for designing RVQ's having minimum average distortion subject to an entropy constraint. Simulation results for these entropy-constrained RVQ's (EC-RVQ's) are presented for memoryless Gaussian, Laplacian, and uniform sources. A Gauss-Markov source is also considered. The performance is superior to that of entropy-constrained scalar quantizers (EC-SQ's) and practical entropy-constrained vector quantizers (EC-VQ's), and is competitive with that of some of the best source coding techniques that have appeared in the literature.
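As a hedged illustration of the residual (multistage) VQ structure the paper optimizes, here is a minimal two-stage coder whose codebooks are trained with plain k-means; encoding is greedy stage by stage and there is no entropy constraint, so the paper's jointly optimized, entropy-constrained design is not reproduced. All sizes are illustrative.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def train_rvq(X, k1=8, k2=8):
    # stage-1 codebook on the data, stage-2 codebook on the stage-1 residuals
    C1, labels = kmeans2(X, k1, minit='points')
    C2, _ = kmeans2(X - C1[labels], k2, minit='points')
    return C1, C2

def encode(x, C1, C2):
    # greedy (stage-by-stage) nearest-neighbour search, not jointly optimal
    i1 = np.argmin(((C1 - x) ** 2).sum(axis=1))
    i2 = np.argmin(((C2 - (x - C1[i1])) ** 2).sum(axis=1))
    return i1, i2

def decode(i1, i2, C1, C2):
    return C1[i1] + C2[i2]          # direct-sum reconstruction

np.random.seed(0)
X = np.random.randn(2000, 4)        # memoryless Gaussian source, dimension 4
C1, C2 = train_rvq(X)
x_hat = decode(*encode(X[0], C1, C2), C1, C2)
```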
Spatial coding-based approach for partitioning big spatial data in Hadoop
NASA Astrophysics Data System (ADS)
Yao, Xiaochuang; Mokbel, Mohamed F.; Alarabi, Louai; Eldawy, Ahmed; Yang, Jianyu; Yun, Wenju; Li, Lin; Ye, Sijing; Zhu, Dehai
2017-09-01
Spatial data partitioning (SDP) plays a powerful role in distributed storage and parallel computing for spatial data. However, the skewed distribution of spatial data and the varying volume of spatial vector objects make it challenging to ensure both optimal performance of spatial operations and data balance in the cluster. To tackle this problem, we propose a spatial coding-based approach for partitioning big spatial data in Hadoop. This approach first compresses the whole dataset using a spatial coding matrix to create a sensing information set (SIS), including spatial code, size, count, and other information. The SIS is then employed to build a spatial partitioning matrix, which is finally used to split all spatial objects into different partitions in the cluster. With this approach, neighbouring spatial objects can be partitioned into the same block, while data skew in the Hadoop distributed file system (HDFS) is minimized. The approach is compared against random-sampling-based partitioning in a case study, using three measures: spatial index quality, data skew in HDFS, and range-query performance. The experimental results show that our spatial coding technique improves both the query performance of big spatial data and the data balance in HDFS. We implemented and deployed this approach in Hadoop, and it can also efficiently support other distributed big spatial data systems.
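The abstract does not give the spatial coding matrix itself. One common spatial coding choice that produces the behaviour described (neighbouring objects land in the same partition, partition sizes stay balanced) is a Z-order (Morton) code; the sketch below uses that assumption rather than the paper's exact method, and the grid depth and partition count are arbitrary.

```python
import numpy as np

def morton_code(ix, iy, bits=8):
    # interleave the bits of the x and y cell indices (Z-order curve)
    code = 0
    for b in range(bits):
        code |= ((ix >> b) & 1) << (2 * b)
        code |= ((iy >> b) & 1) << (2 * b + 1)
    return code

def partition(points, n_parts=4, bits=8):
    """Quantize 2-D points to a 2^bits x 2^bits grid, sort by Morton code,
    then cut into equal-count partitions to limit data skew."""
    mins, maxs = points.min(axis=0), points.max(axis=0)
    cells = ((points - mins) / (maxs - mins + 1e-12)
             * (2 ** bits - 1)).astype(int)
    codes = np.array([morton_code(x, y, bits) for x, y in cells])
    order = np.argsort(codes)                # spatial-locality order
    return np.array_split(order, n_parts)    # index sets, one per partition

pts = np.random.rand(1000, 2)
parts = partition(pts)   # ~250 spatially neighbouring points per partition
```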
The poultry red mite Dermanyssus gallinae as a potential carrier of vector-borne diseases.
De Luna, Carlos J; Arkle, Samuel; Harrington, David; George, David R; Guy, Jonathan H; Sparagano, Olivier A E
2008-12-01
The poultry red mite Dermanyssus gallinae is an obligatory blood-sucking parasite that is considered to be one of the most important ectoparasites in the poultry industry, mainly because it is responsible for important economic losses, leads to a reduction of welfare of laying hens, and may pose a disease risk to humans. As a result of these problems, much of the current research on this parasite targets new methods of control. Less attention has been paid to the importance of D. gallinae as a carrier of vector-borne diseases. Some authors have mentioned the possible involvement of D. gallinae in the transmission (both in vitro and directly isolated from the mites) of viral and bacterial agents. Our research group has demonstrated the presence of Mycobacterium spp. within D. gallinae. DNA coding for Mycobacterium spp. was successfully amplified from unfed adult D. gallinae, larvae, and eggs by using reverse transcription-polymerase chain reaction targeting the 16S rRNA gene. The results have suggested the possible transovarial and transstadial transmission of pathogens by D. gallinae.
Helper-dependent adenoviral vectors for liver-directed gene therapy
Brunetti-Pierri, Nicola; Ng, Philip
2011-01-01
Helper-dependent adenoviral (HDAd) vectors devoid of all viral-coding sequences are promising non-integrating vectors for liver-directed gene therapy because they have a large cloning capacity, can efficiently transduce a wide variety of cell types from various species independent of the cell cycle and can result in long-term transgene expression without chronic toxicity. The main obstacle preventing clinical applications of HDAd for liver-directed gene therapy is the host innate inflammatory response against the vector capsid proteins that occurs shortly after intravascular vector administration resulting in acute toxicity, the severity of which is dependent on vector dose. Intense efforts have been focused on elucidating the factors involved in this acute response and various strategies have been investigated to improve the therapeutic index of HDAd vectors. These strategies have yielded encouraging results with the potential for clinical translation. PMID:21470977
A CPU benchmark for protein crystallographic refinement.
Bourne, P E; Hendrickson, W A
1990-01-01
The CPU time required to complete a cycle of restrained least-squares refinement of a protein structure from X-ray crystallographic data using the FORTRAN codes PROTIN and PROLSQ is reported for 48 different processors, ranging from single-user workstations to supercomputers. Sequential, vector, VLIW, multiprocessor, and RISC hardware architectures are compared using both a small and a large protein structure. Representative compile times for each hardware type are also given, and the improvement in run time when coding for a specific hardware architecture is considered. The benchmarks involve scalar integer and vector floating-point arithmetic and are representative of the calculations performed in many scientific disciplines.
New syndrome decoding techniques for the (n, k) convolutional codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1984-01-01
This paper presents a new syndrome decoding algorithm for the (n, k) convolutional codes (CC) which differs completely from an earlier syndrome decoding algorithm of Schalkwijk and Vinck. The new algorithm is based on the general solution of the syndrome equation, a linear Diophantine equation for the error polynomial vector E(D). The set of Diophantine solutions is a coset of the CC. In this error coset a recursive, Viterbi-like algorithm is developed to find the minimum weight error vector Ê(D). An example, illustrating the new decoding algorithm, is given for the binary nonsystematic (3,1) CC. Previously announced in STAR as N83-34964
NASA Astrophysics Data System (ADS)
Mills, R. T.
2014-12-01
As the high performance computing (HPC) community pushes towards the exascale horizon, the importance and prevalence of fine-grained parallelism in new computer architectures is increasing. This is perhaps most apparent in the proliferation of so-called "accelerators" such as the Intel Xeon Phi or NVIDIA GPGPUs, but the trend also holds for CPUs, where serial performance has grown slowly and effective use of hardware threads and vector units is becoming increasingly important to realizing high performance. This has significant implications for weather, climate, and Earth system modeling codes, many of which display impressive scalability across MPI ranks but take relatively little advantage of threading and vector processing. In addition to increasing parallelism, next generation codes will also need to address increasingly deep hierarchies for data movement: NUMA/cache levels, on node vs. off node, local vs. wide neighborhoods on the interconnect, and even in the I/O system. We will discuss some approaches (grounded in experiences with the Intel Xeon Phi architecture) for restructuring Earth science codes to maximize concurrency across multiple levels (vectors, threads, MPI ranks), and also discuss some novel approaches for minimizing expensive data movement/communication.
NASA Astrophysics Data System (ADS)
Liu, Mei-Feng; Zhong, Guo-Yun; He, Xiao-Hai; Qing, Lin-Bo
2016-09-01
Currently, most online video resources are encoded in the H.264/AVC format. Smoother video transmission could be obtained if these resources were encoded with the newest international video coding standard, High Efficiency Video Coding (HEVC). To improve online video transmission and storage, a transcoding method from H.264/AVC to HEVC is proposed. The transcoder reuses the intraprediction, interprediction, and motion vector (MV) information in the H.264/AVC stream to accelerate HEVC encoding. Experiments show that the interpredicted regions in HEVC overlap those in H.264/AVC; therefore, intraprediction can be skipped in HEVC for regions that are interpredicted in H.264/AVC, reducing coding complexity. Several H.264/AVC macroblocks are combined into one HEVC prediction unit (PU) when the MV difference between macroblocks falls below a threshold; the method then selects only one coding unit depth and one PU mode, further reducing coding complexity. An MV interpolation method for the combined PU is proposed, based on the areas of the macroblocks and the distances between each macroblock center and the PU center; the predicted MV accelerates motion estimation in HEVC coding. Simulation results show that the proposed algorithm achieves a significant reduction in coding time with only a small rate-distortion loss, compared with existing transcoding algorithms and normal HEVC encoding.
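The exact interpolation formula is not given in the abstract. One plausible reading of "based on the areas and distances" is an average of the covered macroblock MVs weighted by overlap area and inverse center distance; the sketch below implements that assumption, and all names and numbers are hypothetical.

```python
import numpy as np

def interpolate_pu_mv(mb_mvs, mb_centers, mb_areas, pu_center):
    """Blend the MVs of the H.264/AVC macroblocks covered by one HEVC PU.

    mb_mvs     : (n, 2) macroblock motion vectors
    mb_centers : (n, 2) macroblock centers
    mb_areas   : (n,)   overlap areas between each macroblock and the PU
    pu_center  : (2,)   center of the merged PU
    """
    d = np.linalg.norm(mb_centers - pu_center, axis=1) + 1e-6
    w = mb_areas / d        # more overlap and a nearer center -> more weight
    w /= w.sum()
    return (w[:, None] * mb_mvs).sum(axis=0)

# four 16x16 macroblocks merged into one 32x32 PU
mvs = np.array([[4.0, 1.0], [4.5, 1.0], [3.5, 0.5], [4.0, 1.5]])
centers = np.array([[8, 8], [24, 8], [8, 24], [24, 24]], dtype=float)
mv_pu = interpolate_pu_mv(mvs, centers, np.full(4, 256.0),
                          np.array([16.0, 16.0]))
```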
Multitasking the INS3D-LU code on the Cray Y-MP
NASA Technical Reports Server (NTRS)
Fatoohi, Rod; Yoon, Seokkwan
1991-01-01
This paper presents the results of multitasking the INS3D-LU code on eight processors. The code is a full Navier-Stokes solver for incompressible fluid in three-dimensional generalized coordinates using a lower-upper symmetric-Gauss-Seidel implicit scheme. This code has been fully vectorized on oblique planes of sweep and parallelized using autotasking with some directives and minor modifications. The timing results for five grid sizes are presented and analyzed. The code has achieved a processing rate of over one Gflop/s.
Bowen, J K; Templeton, M D; Sharrock, K R; Crowhurst, R N; Rikkerink, E H
1995-01-20
The feasibility of performing routine transformation-mediated mutagenesis in Glomerella cingulata was analysed by adopting three one-step gene disruption strategies targeted at the pectin lyase gene pnlA. The efficiencies of disruption following transformation with gene replacement- or gene truncation-disruption vectors were compared. To effect replacement-disruption, G. cingulata was transformed with a vector carrying DNA from the pnlA locus in which the majority of the coding sequence had been replaced by the gene for hygromycin B resistance. Two of the five transformants investigated contained an inactivated pnlA gene (pnlA-); both also contained ectopically integrated vector sequences. The efficacy of gene disruption by transformation with two gene truncation-disruption vectors was also assessed. Both vectors carried a 5'- and 3'-truncated copy of the pnlA coding sequence, adjacent to the gene for hygromycin B resistance. The promoter sequences controlling the selectable marker differed in the two vectors. In one vector the homologous G. cingulata gpdA promoter controlled hygromycin B phosphotransferase expression (homologous truncation vector), whereas in the second vector promoter elements were from the Aspergillus nidulans gpdA gene (heterologous truncation vector). Following transformation with the homologous truncation vector, nine transformants were analysed by Southern hybridisation; no transformants contained a disrupted pnlA gene. Of nineteen heterologous truncation vector transformants, three contained a disrupted pnlA gene; Southern analysis revealed single integrations of vector sequence at pnlA in two of these transformants. pnlA mRNA was not detected by Northern hybridisation in pnlA- transformants. pnlA- transformants failed to produce a PNLA protein with a pI identical to one normally detected in wild-type isolates by silver and activity staining of isoelectric focussing gels. Pathogenesis on Capsicum and apple was unaffected by disruption of the pnlA gene, indicating that the corresponding gene product, PNLA, is not essential for pathogenicity. Gene disruption is a feasible method for selectively mutating defined loci in G. cingulata for functional analysis of the corresponding gene products.
Helper-Dependent Adenoviral Vectors.
Rosewell, Amanda; Vetrini, Francesco; Ng, Philip
2011-10-29
Helper-dependent adenoviral vectors are devoid of all viral coding sequences, possess a large cloning capacity, and can efficiently transduce a wide variety of cell types from various species independent of the cell cycle to mediate long-term transgene expression without chronic toxicity. These non-integrating vectors hold tremendous potential for a variety of gene transfer and gene therapy applications. Here, we review the production technologies, applications, obstacles to clinical translation and their potential resolutions, and the future challenges and unanswered questions regarding this promising gene transfer technology.
1991-09-01
Vector spherical harmonic expansions are... electric and magnetic field vectors from E · r and B · r alone. General expressions are given relating the scattered field expansion coefficients to the source... (NCSC TR 426-90; approved for public release, distribution unlimited.)
Recent update of the RPLUS2D/3D codes
NASA Technical Reports Server (NTRS)
Tsai, Y.-L. Peter
1991-01-01
The development of the RPLUS2D/3D codes is summarized. These codes utilize LU algorithms to solve chemical non-equilibrium flows in a body-fitted coordinate system. The motivation behind the development of these codes is the need to numerically predict chemical non-equilibrium flows for the National AeroSpace Plane Program. Recent improvements include a vectorization method, blocking algorithms for geometric flexibility, out-of-core storage for large problems, and an LU-SW/UP combination for CPU-time efficiency and solution quality.
Generating code adapted for interlinking legacy scalar code and extended vector code
Gschwind, Michael K
2013-06-04
Mechanisms for intermixing code are provided. Source code is received for compilation using an extended Application Binary Interface (ABI) that extends a legacy ABI and uses a different register configuration than the legacy ABI. First compiled code is generated based on the source code, the first compiled code comprising code for accommodating the difference in register configurations used by the extended ABI and the legacy ABI. The first compiled code and second compiled code are intermixed to generate intermixed code, the second compiled code being compiled code that uses the legacy ABI. The intermixed code comprises at least one call instruction that is one of a call from the first compiled code to the second compiled code or a call from the second compiled code to the first compiled code. The code for accommodating the difference in register configurations is associated with the at least one call instruction.
Multipath search coding of stationary signals with applications to speech
NASA Astrophysics Data System (ADS)
Fehn, H. G.; Noll, P.
1982-04-01
This paper deals with the application of multipath search coding (MSC) concepts to the coding of stationary memoryless and correlated sources, and of speech signals, at a rate of one bit per sample. Use is made of three MSC classes: (1) codebook coding, or vector quantization, (2) tree coding, and (3) trellis coding. The paper explains the performance of these coders and compares it both with that of conventional coders and with rate-distortion bounds. The potential of MSC coding strategies is demonstrated by illustrations. The paper also reports results of MSC coding of speech, in which both adaptive quantization and adaptive prediction were included in the coder design.
An emergence of coordinated communication in populations of agents.
Kvasnicka, V; Pospichal, J
1999-01-01
The purpose of this article is to demonstrate that coordinated communication spontaneously emerges in a population composed of agents that are capable of specific cognitive activities. Internal states of agents are characterized by meaning vectors. Simple neural networks composed of one layer of hidden neurons perform cognitive activities of agents. An elementary communication act consists of the following: (a) two agents are selected, where one of them is declared the speaker and the other the listener; (b) the speaker codes a selected meaning vector onto a sequence of symbols and sends it to the listener as a message; and finally, (c) the listener decodes this message into a meaning vector and adapts his or her neural network such that the differences between speaker and listener meaning vectors are decreased. A Darwinian evolution enlarged by ideas from the Baldwin effect and Dawkins' memes is simulated by a simple version of an evolutionary algorithm without crossover. The agent fitness is determined by success of the mutual pairwise communications. It is demonstrated that agents in the course of evolution gradually do a better job of decoding received messages (they are closer to meaning vectors of speakers) and all agents gradually start to use the same vocabulary for the common communication. Moreover, if agent meaning vectors contain regularities, then these regularities are manifested also in messages created by agent speakers, that is, similar parts of meaning vectors are coded by similar symbol substrings. This observation is considered a manifestation of the emergence of a grammar system in the common coordinated communication.
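The full model couples this communication act to an evolutionary algorithm. The hedged numpy sketch below implements only the elementary act itself: a speaker maps a meaning vector to a symbol sequence, a listener decodes it, and the listener's weights are nudged toward the speaker's meaning. The linear encoder/decoder, dimensions, and learning rate are simplifying assumptions; the paper's hidden layer and Darwinian selection are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
D, L, A = 8, 4, 10                  # meaning dim, message length, alphabet size

class Agent:
    def __init__(self):
        self.enc = rng.normal(size=(L, A, D)) * 0.1   # speaker weights
        self.dec = rng.normal(size=(D, L * A)) * 0.1  # listener weights

    def speak(self, m):
        # one symbol per message position: best-scoring alphabet entry
        return np.array([int(np.argmax(self.enc[p] @ m)) for p in range(L)])

    def listen(self, msg):
        onehot = np.zeros((L, A)); onehot[np.arange(L), msg] = 1.0
        return self.dec @ onehot.ravel()

    def adapt(self, msg, target, lr=0.05):
        # delta rule: nudge the decoder toward the speaker's meaning vector
        onehot = np.zeros((L, A)); onehot[np.arange(L), msg] = 1.0
        err = target - self.listen(msg)
        self.dec += lr * np.outer(err, onehot.ravel())

agents = [Agent() for _ in range(10)]
for step in range(2000):
    i, j = rng.choice(len(agents), size=2, replace=False)
    m = rng.normal(size=D)                    # speaker's meaning vector
    agents[j].adapt(agents[i].speak(m), m)    # one elementary communication act
```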
ERIC Educational Resources Information Center
Farag, Mark
2007-01-01
Hill ciphers are linear codes that use as input a "plaintext" vector p of size n, which is encrypted with an invertible n x n matrix E to produce a "ciphertext" vector c = E · p. Informally, a near-field is a triple ⟨N; +, *⟩ that…
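As a quick worked instance of the encryption step described above, here is a toy 2x2 Hill cipher over Z_26 (letters mapped a=0..z=25); the matrix E is a textbook-style example, not one from the article.

```python
import numpy as np

E = np.array([[3, 3],
              [2, 5]])      # invertible mod 26: det = 9, gcd(9, 26) = 1

def encrypt(p, E):
    return (E @ p) % 26     # c = E . p (mod 26)

def inv_mod26(E):
    # inverse of a 2x2 matrix mod 26: det^{-1} times the adjugate
    det = int(round(np.linalg.det(E))) % 26
    det_inv = pow(det, -1, 26)
    adj = np.array([[E[1, 1], -E[0, 1]],
                    [-E[1, 0], E[0, 0]]])
    return (det_inv * adj) % 26

p = np.array([7, 8])                             # plaintext "hi"
c = encrypt(p, E)
assert np.array_equal((inv_mod26(E) @ c) % 26, p)  # decryption recovers p
```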
NASA Astrophysics Data System (ADS)
Kondo, Yoshihisa; Yomo, Hiroyuki; Yamaguchi, Shinji; Davis, Peter; Miura, Ryu; Obana, Sadao; Sampei, Seiichi
This paper proposes multipoint-to-multipoint (MPtoMP) real-time broadcast transmission using network coding for ad-hoc networks like video game networks. We aim to achieve highly reliable MPtoMP broadcasting using IEEE 802.11 media access control (MAC) that does not include a retransmission mechanism. When each node detects packets from the other nodes in a sequence, the correctly detected packets are network-encoded, and the encoded packet is broadcasted in the next sequence as a piggy-back for its native packet. To prevent increase of overhead in each packet due to piggy-back packet transmission, network coding vector for each node is exchanged between all nodes in the negotiation phase. Each user keeps using the same coding vector generated in the negotiation phase, and only coding information that represents which user signal is included in the network coding process is transmitted along with the piggy-back packet. Our simulation results show that the proposed method can provide higher reliability than other schemes using multi point relay (MPR) or redundant transmissions such as forward error correction (FEC). We also implement the proposed method in a wireless testbed, and show that the proposed method achieves high reliability in a real-world environment with a practical degree of complexity when installed on current wireless devices.
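A minimal GF(2) sketch of the piggy-back idea follows: each node XORs the packets it detected correctly according to its fixed coding vector and broadcasts the combination alongside its native packet, and a receiver holding all but one of the combined packets can recover the missing one. Packet contents, sizes, and the coding vector are illustrative, not from the paper.

```python
import numpy as np

def nc_encode(coding_vector, packets):
    """XOR together the correctly detected packets selected by the node's
    fixed GF(2) coding vector (missing packets are skipped)."""
    out = np.zeros(8, dtype=np.uint8)        # 8-byte toy payloads
    for use, pkt in zip(coding_vector, packets):
        if use and pkt is not None:
            out = out ^ pkt
    return out

def nc_recover(combo, coding_vector, packets):
    """Recover a single missing native packet from the piggy-backed combo."""
    missing = [i for i, (use, pkt) in enumerate(zip(coding_vector, packets))
               if use and pkt is None]
    assert len(missing) == 1, "can repair exactly one loss per combination"
    out = combo.copy()
    for use, pkt in zip(coding_vector, packets):
        if use and pkt is not None:
            out = out ^ pkt
    return missing[0], out

cv = [1, 1, 1]                               # coding vector fixed in negotiation
p = [np.frombuffer(b"node%dpkt" % i, dtype=np.uint8) for i in range(3)]
combo = nc_encode(cv, p)                     # piggy-backed by the sender
idx, rec = nc_recover(combo, cv, [p[0], None, p[2]])  # receiver missed packet 1
assert idx == 1 and bytes(rec) == b"node1pkt"
```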
The Design of a Templated C++ Small Vector Class for Numerical Computing
NASA Technical Reports Server (NTRS)
Moran, Patrick J.
2000-01-01
We describe the design and implementation of a templated C++ class for vectors. The vector class is templated both for vector length and vector component type; the vector length is fixed at template instantiation time. The vector implementation is such that for a vector of N components of type T, the total number of bytes required by the vector is equal to N * sizeof(T), where sizeof is the built-in C operator. The property of having a size no bigger than that required by the components themselves is key in many numerical computing applications, where one may allocate very large arrays of small, fixed-length vectors. In addition to the design trade-offs motivating our fixed-length vector design choice, we review some of the C++ template features essential to an efficient, succinct implementation. In particular, we highlight some of the standard C++ features, such as partial template specialization, that are not supported by all compilers currently. This report provides an inventory listing the relevant support currently provided by some key compilers, as well as test code one can use to verify compiler capabilities.
Optimizing modelling in iterative image reconstruction for preclinical pinhole PET
NASA Astrophysics Data System (ADS)
Goorden, Marlies C.; van Roosmalen, Jarno; van der Have, Frans; Beekman, Freek J.
2016-05-01
The recently developed versatile emission computed tomography (VECTor) technology enables high-energy SPECT and simultaneous SPECT and PET of small animals at sub-mm resolutions. VECTor uses dedicated clustered pinhole collimators mounted in a scanner with three stationary large-area NaI(Tl) gamma detectors. Here, we develop and validate dedicated image reconstruction methods that compensate for image degradation by incorporating accurate models for the transport of high-energy annihilation gamma photons. Ray tracing software was used to calculate photon transport through the collimator structures and into the gamma detector. Input to this code are several geometric parameters estimated from system calibration with a scanning 99mTc point source. Effects on reconstructed images of (i) modelling variable depth-of-interaction (DOI) in the detector, (ii) incorporating photon paths that go through multiple pinholes (‘multiple-pinhole paths’ (MPP)), and (iii) including various amounts of point spread function (PSF) tail were evaluated. Imaging 18F in resolution and uniformity phantoms showed that including large parts of PSFs is essential to obtain good contrast-noise characteristics and that DOI modelling is highly effective in removing deformations of small structures, together leading to 0.75 mm resolution PET images of a hot-rod Derenzo phantom. Moreover, MPP modelling reduced the level of background noise. These improvements were also clearly visible in mouse images. Performance of VECTor can thus be significantly improved by accurately modelling annihilation gamma photon transport.
Mori, Keisuke; Ando, Akira; Gehlbach, Peter; Nesbitt, David; Takahashi, Kyoichi; Goldsteen, Donna; Penn, Michael; Chen, Cheauyan T.; Mori, Keiko; Melia, Michele; Phipps, Sandrina; Moffat, Diana; Brazzell, Kim; Liau, Gene; Dixon, Katharine H.; Campochiaro, Peter A.
2001-01-01
Endostatin is a cleavage product of collagen XVIII that inhibits tumor angiogenesis and growth. Interferon α2a blocks tumor angiogenesis and causes regression of hemangiomas, but has no effect on choroidal neovascularization (CNV). Therefore, inhibitors of tumor angiogenesis do not necessarily inhibit ocular neovascularization. In this study, we used an intravenous injection of adenoviral vectors containing a sig-mEndo transgene consisting of murine immunoglobulin κ-chain leader sequence coupled to sequence coding for murine endostatin to investigate the effect of high serum levels of endostatin on CNV in mice. Mice injected with a construct in which sig-mEndo expression was driven by the Rous sarcoma virus promoter had moderately high serum levels of endostatin and significantly smaller CNV lesions at sites of laser-induced rupture of Bruch’s membrane than mice injected with null vector. Mice injected with a construct in which sig-mEndo was driven by the simian cytomegalovirus promoter had ∼10-fold higher endostatin serum levels and had nearly complete prevention of CNV. There was a strong inverse correlation between endostatin serum level and area of CNV. This study provides proof of principle that gene therapy to increase levels of endostatin can prevent the development of CNV and may provide a new treatment for the leading cause of severe loss of vision in patients with age-related macular degeneration. PMID:11438478
The salivary gland transcriptome of the eastern tree hole mosquito, Ochlerotatus triseriatus.
Calvo, Eric; Sanchez-Vargas, Irma; Kotsyfakis, Michalis; Favreau, Amanda J; Barbian, Kent D; Pham, Van M; Olson, Kenneth E; Ribeiro, José M C
2010-05-01
Saliva of blood-sucking arthropods contains a complex mixture of peptides that affect their host's hemostasis, inflammation, and immunity. These activities can also modify the site of pathogen delivery and increase disease transmission. Saliva also induces hosts to mount an antisaliva immune response that can lead to skin allergies or even anaphylaxis. Accordingly, knowledge of the salivary repertoire, or sialome, of a mosquito is useful to provide a knowledge platform to mine for novel pharmacological activities, to develop novel vaccine targets for vector-borne diseases, and to develop epidemiological markers of vector exposure and candidate desensitization vaccines. The mosquito Ochlerotatus triseriatus is a vector of La Crosse virus and produces allergy in humans. In this work, a total of 1,575 clones randomly selected from an adult female O. triseriatus salivary gland cDNA library was sequenced and used to assemble a database that yielded 731 clusters of related sequences, 560 of which were singletons. Primer extension experiments were performed in selected clones to further extend sequence coverage, allowing for the identification of 159 protein sequences, 66 of which code for putative secreted proteins. Supplemental spreadsheets containing these data are available at http://exon.niaid.nih.gov/transcriptome/Ochlerotatus_triseriatus/S1/Ot-S1.xls and http://exon.niaid.nih.gov/transcriptome/Ochlerotatus_triseriatus/S2/Ot-S2.xls.
NASA Astrophysics Data System (ADS)
Zhai, Peng-Wang; Hu, Yongxiang; Josset, Damien B.; Trepte, Charles R.; Lucker, Patricia L.; Lin, Bing
2012-06-01
We have developed a Vector Radiative Transfer (VRT) code for coupled atmosphere and ocean systems based on the successive order of scattering (SOS) method. In order to achieve efficiency and maintain accuracy, the scattering matrix is expanded in terms of the Wigner d functions, and the delta-fit or delta-M technique is used to truncate the commonly present large forward scattering peak. To further improve the accuracy of the SOS code, we have implemented the analytical first order scattering treatment using the exact scattering matrix of the medium in the SOS code. The expansion and truncation techniques are kept for higher order scattering. The exact first order scattering correction was originally published by Nakajima and Tanaka [1]. A new contribution of this work is to account for the exact secondary light scattering caused by the light reflected by and transmitted through the rough air-sea interface.
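The abstract mentions truncating the forward-scattering peak with the delta-fit or delta-M technique. As a hedged illustration, the sketch below applies the standard delta-M scaling (Wiscombe-style) to Henyey-Greenstein moments; the delta-fit variant used in the code fits the truncated expansion by least squares instead, so this is the simpler of the two.

```python
import numpy as np

def delta_m(chi, omega, tau, M):
    """Delta-M scaling of a phase-function Legendre-moment series.

    chi   : moments chi_0..chi_L of the phase function (chi[0] == 1)
    omega : single-scattering albedo
    tau   : optical depth
    M     : number of moments retained after truncation
    """
    f = chi[M]                            # truncated forward-peak fraction
    chi_s = (chi[:M] - f) / (1.0 - f)     # scaled moments (chi_s[0] stays 1)
    omega_s = (1.0 - f) * omega / (1.0 - omega * f)
    tau_s = (1.0 - omega * f) * tau
    return chi_s, omega_s, tau_s

# Henyey-Greenstein moments chi_l = g**l model a strongly peaked phase function
g = 0.9
chi = g ** np.arange(64)
chi_s, omega_s, tau_s = delta_m(chi, omega=0.99, tau=1.0, M=16)
```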
First experience of vectorizing electromagnetic physics models for detector simulation
NASA Astrophysics Data System (ADS)
Amadio, G.; Apostolakis, J.; Bandieramonte, M.; Bianchini, C.; Bitzes, G.; Brun, R.; Canal, P.; Carminati, F.; de Fine Licht, J.; Duhem, L.; Elvira, D.; Gheata, A.; Jun, S. Y.; Lima, G.; Novak, M.; Presbyterian, M.; Shadura, O.; Seghal, R.; Wenzel, S.
2015-12-01
The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. The GeantV vector prototype for detector simulations has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth, parallelization needed to achieve optimal performance or memory access latency and speed. An additional challenge is to avoid the code duplication often inherent to supporting heterogeneous platforms. In this paper we present the first experience of vectorizing electromagnetic physics models developed for the GeantV project.
1992-02-01
Vector of thickness variables: V = [t1, t2, …, tN]. Vector of thickness changes: ΔV = [δt1, δt2, …, δtN]. Vector of strain derivatives: ∇F = [dF/dt1, dF/dt2, …, dF/dtN]. Vector of buckling derivatives: ∇λ = [dλ/dt1, dλ/dt2, …, dλ/dtN]. Then δF = ∇F · ΔV and δλ = ∇λ · ΔV. The linearised…
Progressive low-bitrate digital color/monochrome image coding by neuro-fuzzy clustering
NASA Astrophysics Data System (ADS)
Mitra, Sunanda; Meadows, Steven
1997-10-01
Color image coding at low bit rates is an area of research that has only recently been addressed in the literature, as the problems of storage and transmission of color images become more prominent in many applications. Current trends in image coding exploit the advantage of subband/wavelet decompositions in reducing the complexity of optimal scalar/vector quantizer (SQ/VQ) design. Compression ratios (CRs) of the order of 10:1 to 20:1 with high visual quality have been achieved by vector quantization of subband-decomposed color images in perceptually weighted color spaces. We report the performance of a recently developed adaptive vector quantizer, AFLC-VQ, for effective reduction in bit rates while maintaining high visual quality of reconstructed color as well as monochrome images. For 24-bit color images, excellent visual quality is maintained down to approximately 0.48 bpp (0.16 bpp per color plane, as for monochrome; CR 50:1) using the RGB color space. Further tuning of AFLC-VQ and the addition of an entropy coder module after the VQ stage result in extremely low bit rates (CR 80:1) for good-quality reconstructed images. Our recent study also reveals that, for similar visual quality, the RGB color space requires fewer bits per pixel than either the YIQ or HSI color space for storing the same information when entropy coding is applied. AFLC-VQ outperforms other standard VQ and adaptive SQ techniques in retaining visual fidelity at similar bit-rate reductions.
Gritz, L; Davies, J
1983-11-01
The plasmid-borne gene hph coding for hygromycin B phosphotransferase (HPH) in Escherichia coli has been identified and its nucleotide sequence determined. The hph gene is 1026 nucleotides long, coding for a protein with a predicted Mr of 39 000. The hph gene was placed in a shuttle plasmid vector, downstream from the promoter region of the cyc 1 gene of Saccharomyces cerevisiae, and an hph construction containing a single AUG in the 5' noncoding region allowed direct selection following transformation in yeast and in E. coli. Thus the hph gene can be used in cloning vectors for both pro- and eukaryotes.
Lafuente, M J; Petit, T; Gancedo, C
1997-12-22
We have constructed a series of plasmids to facilitate the fusion of promoters with or without coding regions of genes of Schizosaccharomyces pombe to the lacZ gene of Escherichia coli. These vectors carry a multiple cloning region in which fission yeast DNA may be inserted in three different reading frames with respect to the coding region of lacZ. The plasmids were constructed with the ura4+ or the his3+ marker of S. pombe. Functionality of the plasmids was tested measuring in parallel the expression of fructose 1,6-bisphosphatase and beta-galactosidase under the control of the fbp1+ promoter in different conditions.
Security authentication using phase-encoded nanoparticle structures and polarized light.
Carnicer, Artur; Hassanfiroozi, Amir; Latorre-Carmona, Pedro; Huang, Yi-Pai; Javidi, Bahram
2015-01-15
Phase-encoded nanostructures such as quick response (QR) codes made of metallic nanoparticles are suggested for use in security and authentication applications. We present a polarimetric optical method able to authenticate random phase-encoded QR codes. The system is illuminated using polarized light, and the QR code is encoded using a phase-only random mask. Using classification algorithms, it is possible to validate the QR code from the examination of the polarimetric signature of the speckle pattern. We used the Kolmogorov-Smirnov statistical test and Support Vector Machine algorithms to authenticate the phase-encoded QR codes using polarimetric signatures.
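A hedged sketch of the classification step: the Kolmogorov-Smirnov distance between a measured speckle sample and a stored reference serves as a feature, and an SVM separates genuine from counterfeit codes. The feature values and training set below are invented stand-ins, not measured polarimetric signatures.

```python
import numpy as np
from scipy.stats import ks_2samp
from sklearn.svm import SVC

def ks_feature(sample, reference):
    # Kolmogorov-Smirnov distance between a measured speckle-intensity
    # sample and a reference distribution
    return ks_2samp(sample, reference).statistic

# toy training set: one KS feature per polarimetric channel (2 channels here)
X_train = np.array([[0.05, 0.08], [0.06, 0.07],    # genuine QR codes
                    [0.30, 0.41], [0.28, 0.37]])   # counterfeit QR codes
y_train = np.array([1, 1, 0, 0])

clf = SVC(kernel='linear').fit(X_train, y_train)
print(clf.predict([[0.07, 0.09]]))                 # -> [1], authenticated
```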
Universal Decoder for PPM of any Order
NASA Technical Reports Server (NTRS)
Moision, Bruce E.
2010-01-01
A recently developed algorithm for demodulation and decoding of a pulse-position-modulation (PPM) signal is suitable as a basis for designing a single hardware decoding apparatus to be capable of handling any PPM order. Hence, this algorithm offers advantages of greater flexibility and lower cost, in comparison with prior such algorithms, which necessitate the use of a distinct hardware implementation for each PPM order. In addition, in comparison with the prior algorithms, the present algorithm entails less complexity in decoding at large orders. An unavoidably lengthy presentation of background information, including definitions of terms, is prerequisite to a meaningful summary of this development. As an aid to understanding, the figure illustrates the relevant processes of coding, modulation, propagation, demodulation, and decoding. An M-ary PPM signal has M time slots per symbol period. A pulse (signifying 1) is transmitted during one of the time slots; no pulse (signifying 0) is transmitted during the other time slots. The information intended to be conveyed from the transmitting end to the receiving end of a radio or optical communication channel is a K-bit vector u. This vector is encoded by an (N,K) binary error-correcting code, producing an N-bit vector a. In turn, the vector a is subdivided into blocks of m = log2(M) bits and each such block is mapped to an M-ary PPM symbol. The resultant coding/modulation scheme can be regarded as equivalent to a nonlinear binary code. The vector of PPM symbols, x, is transmitted over a Poisson channel, such that there is obtained, at the receiver, a Poisson-distributed photon count characterized by a mean background count nb during no-pulse time slots and a mean signal-plus-background count of ns+nb during a pulse time slot. In the receiver, demodulation of the signal is effected in an iterative soft decoding process that involves consideration of relationships among photon counts and conditional likelihoods of m-bit vectors of coded bits. Inasmuch as the likelihoods of all the m-bit vectors of coded bits mapping to the same PPM symbol are correlated, the best performance is obtained when the joint m-bit conditional likelihoods are utilized. Unfortunately, the complexity of decoding, measured in the number of operations per bit, grows exponentially with m, and can thus become prohibitively expensive for large PPM orders. For a system required to handle multiple PPM orders, the cost is even higher because it is necessary to have separate decoding hardware for each order. This concludes the prerequisite background information. In the present algorithm, the decoding process as described above is modified by, among other things, introduction of an l-bit marginalizer sub-algorithm. The term "l-bit marginalizer" signifies that instead of m-bit conditional likelihoods, the decoder computes l-bit conditional likelihoods, where l is fixed. Fixing l, regardless of the value of m, makes it possible to use a single hardware implementation for any PPM order. One could minimize the decoding complexity and obtain an especially simple design by fixing l at 1, but this would entail some loss of performance. An intermediate solution is to fix l at some value, greater than 1, that may be less than or greater than m. This solution makes it possible to obtain the desired flexibility to handle any PPM order while compromising between complexity and loss of performance.
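The iterative soft decoder itself is beyond a short example, but the Poisson slot-likelihood computation it starts from is compact. Below is a hedged numpy/scipy sketch computing per-slot log-likelihood ratios and the posterior over pulse positions for one M-ary symbol; the function names and the toy counts are illustrative, not from the decoder.

```python
import numpy as np
from scipy.stats import poisson

def slot_llrs(counts, ns, nb):
    """Per-slot log P(count | pulse) - log P(count | no pulse)
    for one M-ary PPM symbol on a Poisson channel."""
    return poisson.logpmf(counts, ns + nb) - poisson.logpmf(counts, nb)

def pulse_posterior(counts, ns, nb):
    # posterior over which slot holds the pulse (uniform prior over slots)
    llr = slot_llrs(counts, ns, nb)
    w = np.exp(llr - llr.max())
    return w / w.sum()

counts = np.array([0, 1, 5, 0])    # M = 4 slots; the pulse is likely slot 2
print(pulse_posterior(counts, ns=5.0, nb=0.2))
```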
Chagas Disease, Migration and Community Settlement Patterns in Arequipa, Peru
Gilman, Robert H.; Cornejo del Carpio, Juan G.; Naquira, Cesar; Bern, Caryn; Levy, Michael Z.
2009-01-01
Background Chagas disease is one of the most important neglected tropical diseases in the Americas. Vectorborne transmission of Chagas disease has been historically rare in urban settings. However, in marginal communities near the city of Arequipa, Peru, urban transmission cycles have become established. We examined the history of migration and settlement patterns in these communities, and their connections to Chagas disease transmission. Methodology/Principal Findings This was a qualitative study that employed focus group discussions and in-depth interviews. Five focus groups and 50 in-depth interviews were carried out with 94 community members from three shantytowns and two traditional towns near Arequipa, Peru. Focus groups utilized participatory methodologies to explore the community's mobility patterns and the historical and current presence of triatomine vectors. In-depth interviews based on event history calendars explored participants' migration patterns and experience with Chagas disease and vectors. Focus group data were analyzed using participatory analysis methodologies, and interview data were coded and analyzed using a grounded theory approach. Entomologic data were provided by an ongoing vector control campaign. We found that migrants to shantytowns in Arequipa were unlikely to have brought triatomines to the city upon arrival. Frequent seasonal moves, however, took shantytown residents to valleys surrounding Arequipa where vectors are prevalent. In addition, the pattern of settlement of shantytowns and the practice of raising domestic animals by residents creates a favorable environment for vector proliferation and dispersal. Finally, we uncovered a phenomenon of population loss and replacement by low-income migrants in one traditional town, which created the human settlement pattern of a new shantytown within this traditional community. Conclusions/Significance The pattern of human migration is therefore an important underlying determinant of Chagas disease risk in and around Arequipa. Frequent seasonal migration by residents of peri-urban shantytowns provides a path of entry of vectors into these communities. Changing demographic dynamics of traditional towns are also leading to favorable conditions for Chagas disease transmission. Control programs must include surveillance for infestation in communities assumed to be free of vectors. PMID:20016830
Vector quantization for efficient coding of upper subbands
NASA Technical Reports Server (NTRS)
Zeng, W. J.; Huang, Y. F.
1994-01-01
This paper examines the application of vector quantization (VQ) to exploit both intra-band and inter-band redundancy in subband coding. The focus here is on the exploitation of inter-band dependency. It is shown that VQ is particularly suitable and effective for coding the upper subbands. Three subband decomposition-based VQ coding schemes are proposed here to exploit the inter-band dependency by making full use of the extra flexibility of VQ approach over scalar quantization. A quadtree-based variable rate VQ (VRVQ) scheme which takes full advantage of the intra-band and inter-band redundancy is first proposed. Then, a more easily implementable alternative based on an efficient block-based edge estimation technique is employed to overcome the implementational barriers of the first scheme. Finally, a predictive VQ scheme formulated in the context of finite state VQ is proposed to further exploit the dependency among different subbands. A VRVQ scheme proposed elsewhere is extended to provide an efficient bit allocation procedure. Simulation results show that these three hybrid techniques have advantages, in terms of peak signal-to-noise ratio (PSNR) and complexity, over other existing subband-VQ approaches.
USDA-ARS?s Scientific Manuscript database
Foot-and-mouth disease virus (FMDV) causes a highly contagious disease of cloven-hoofed animals. We have previously demonstrated that a replication-defective human adenovirus 5 vector carrying the FMDV capsid coding region of serotype A24 Cruzeiro (Ad5-CI-A24-2B) protects swine and cattle against FM...
USDA-ARS?s Scientific Manuscript database
Newcastle disease virus (NDV), avian paramyxovirus type 1, has been developed as a vector to express foreign genes for vaccine and gene therapy purposes. The foreign genes are usually inserted into a non-coding region of the NDV genome as an independent transcription unit (ITU), which potentially a...
Dark Energy Survey Year 1 Results: Multi-Probe Methodology and Simulated Likelihood Analyses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krause, E.; et al.
We present the methodology for and detail the implementation of the Dark Energy Survey (DES) 3x2pt Year 1 (Y1) analysis, which combines configuration-space two-point statistics from three different cosmological probes: cosmic shear, galaxy-galaxy lensing, and galaxy clustering, using data from the first year of DES observations. We have developed two independent modeling pipelines and describe the code validation process. We derive expressions for analytical real-space multi-probe covariances, and describe their validation with numerical simulations. We stress-test the inference pipelines in simulated likelihood analyses that vary 6-7 cosmology parameters plus 20 nuisance parameters and precisely resemble the analysis to be presented in the DES 3x2pt analysis paper, using a variety of simulated input data vectors with varying assumptions. We find that any disagreement between pipelines leads to changes in assigned likelihood Δχ² ≤ 0.045 with respect to the statistical error of the DES Y1 data vector. We also find that angular binning and survey mask do not impact our analytic covariance at a significant level. We determine lower bounds on scales used for analysis of galaxy clustering (8 Mpc h⁻¹) and galaxy-galaxy lensing (12 Mpc h⁻¹) such that the impact of modeling uncertainties in the non-linear regime is well below statistical errors, and show that our analysis choices are robust against a variety of systematics. These tests demonstrate that we have a robust analysis pipeline that yields unbiased cosmological parameter inferences for the flagship 3x2pt DES Y1 analysis. We emphasize that the level of independent code development and subsequent code comparison as demonstrated in this paper is necessary to produce credible constraints from increasingly complex multi-probe analyses of current data.
NASA Astrophysics Data System (ADS)
Sun, Yankui; Li, Shan; Sun, Zhongyang
2017-01-01
We propose a framework for automated detection of dry age-related macular degeneration (AMD) and diabetic macular edema (DME) from retina optical coherence tomography (OCT) images, based on sparse coding and dictionary learning. The study aims to improve the classification performance of state-of-the-art methods. First, our method presents a general approach to automatically align and crop retina regions; then it obtains global representations of images by using sparse coding and a spatial pyramid; finally, a multiclass linear support vector machine classifier is employed for classification. We apply two datasets for validating our algorithm: the Duke spectral domain OCT (SD-OCT) dataset, consisting of volumetric scans acquired from 45 subjects: 15 normal subjects, 15 AMD patients, and 15 DME patients; and a clinical SD-OCT dataset, consisting of 678 OCT retina scans acquired from clinics in Beijing: 168, 297, and 213 OCT images for AMD, DME, and normal retinas, respectively. For the former dataset, our classifier correctly identifies 100%, 100%, and 93.33% of the volumes with DME, AMD, and normal subjects, respectively, and thus performs much better than the conventional method; for the latter dataset, our classifier leads to a correct classification rate of 99.67%, 99.67%, and 100.00% for DME, AMD, and normal images, respectively.
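As a hedged sketch of the pipeline described (sparse coding on a learned dictionary, pooled into a global descriptor, classified with a multiclass linear SVM), the code below uses scikit-learn; the spatial pyramid is collapsed to whole-image max pooling, and all data are random stand-ins for aligned, cropped retina patches.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# random stand-ins for flattened patches from aligned, cropped retina regions
X_patches = rng.normal(size=(5000, 64))
dico = MiniBatchDictionaryLearning(n_components=128,
                                   transform_algorithm='omp',
                                   transform_n_nonzero_coefs=5).fit(X_patches)

def describe(patches):
    codes = dico.transform(patches)      # sparse code for every patch
    return np.abs(codes).max(axis=0)     # max-pooled global descriptor

images = [rng.normal(size=(50, 64)) for _ in range(30)]   # 50 patches/image
X = np.array([describe(p) for p in images])
y = rng.integers(0, 3, size=30)          # labels: AMD / DME / normal
clf = LinearSVC().fit(X, y)              # multiclass linear SVM (one-vs-rest)
```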
Identification and Optimization of New Leads for Malaria Vector Control.
Hueter, Ottmar F; Hoppé, Mark; Wege, Philip; Maienfisch, Peter
2016-10-01
A significant proportion of the world's population remains at risk from malaria, and whilst great progress has been made in reducing the number of malaria cases globally through the use of vector control insecticides, these gains are under threat from the emergence of insecticide resistance. The spread of resistance in the vector populations, principally to pyrethroids, is driving the need for the development of new tools for malaria vector control. In order to identify new leads, 30,000 compounds from the Syngenta corporate chemical collection were tested in a newly developed screening platform. More than 3000 compounds (10%) showed activity at ≤200 mg active ingredient (AI) litre⁻¹ against Anopheles stephensi. Further evaluation resulted in the identification of 12 viable leads for the control of adult mosquitoes, most originating from current or former insecticide projects. Surprisingly, one of these leads emerged from a former PPO herbicide project and one from a former complex III fungicide project. This indicates that representatives of certain herbicide and fungicide projects and modes of action can also represent a valuable source of leads for malaria vector control. Optimization of the diphenyl ether lead 1 resulted in the identification of the cyano-pyridyl compound 31. Compound 31 exhibits good activity against mosquito species including rdl-resistant Anopheles. It is only slightly weaker than permethrin and does not show relevant levels of cross-resistance to the organochlorine insecticide dieldrin.
Spin Resonances for Stored Deuteron Beams in COSY. Vector Polarization. Tracking with Spink
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luccio,A.; Lehrach, A.
2008-04-01
Results of measurements of vector and tensor polarization of a deuteron beam in the storage ring COSY have been published by the SPIN@COSY collaboration. In this experiment an RF dipole was used that produced spin flips. The strength of the RFD-induced depolarizing resonance was calculated from the amount of spin flipping, and the results are shown in the figures of the cited paper. In this note we present the simulation of the experimental data (vector polarization) with the spin tracking code Spink.
Space shuttle main engine numerical modeling code modifications and analysis
NASA Technical Reports Server (NTRS)
Ziebarth, John P.
1988-01-01
The user of computational fluid dynamics (CFD) codes must be concerned with the accuracy and efficiency of the codes if they are to be used for timely design and analysis of complicated three-dimensional fluid flow configurations. A brief discussion of how accuracy and efficiency affect the CFD solution process is given. A more detailed discussion of how efficiency can be enhanced by using a few Cray Research Inc. utilities to address vectorization is presented, and these utilities are applied to a three-dimensional Navier-Stokes CFD code (INS3D).
Evaluation Method for Service Branding Using Word-of-Mouth Data
NASA Astrophysics Data System (ADS)
Shirahada, Kunio; Kosaka, Michitaka
The development and spread of Internet technology gives service firms a strong capability to transmit brand information and to collect related customer feedback data. In this paper, we propose a new evaluation method for service branding that uses firm and consumer data on the Internet. Based on the service marketing 7Ps (Product, Price, Place, Promotion, People, Physical evidence, Process), which are the key viewpoints for branding, we develop a brand evaluation system that includes coding methods for Word-of-Mouth (WoM) and corporate introductory information on the Internet, identifying both the customers' service value recognition vector and the firm's service value proposition vector. The system quantitatively clarifies customers' recognition of the firm's service value and the firm's strength in service value proposition, thereby revealing service brand communication gaps between the firm and its consumers. We applied this system to the Japanese ryokan hotel industry. Using data for six ryokan hotels from Jyaran-net and Rakuten Travel, we generated a total of 983 codes from WoM information and analyzed their service brand value across three price-based categories. We found that the characteristics of the customers' service value recognition vector differ across the price categories. In addition, the system identified a firm whose service value proposition vector differs from its customers' recognition vector. These results help to analyze corporate service brand strategy, and the system has significance as a technology supporting service management.
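The abstract does not specify how the two 7P vectors are compared. One simple, hedged reading is to count codes per 7P dimension for each side and measure the communication gap as one minus the cosine similarity; the counts below are hypothetical.

```python
import numpy as np

PS = ["Product", "Price", "Place", "Promotion",
      "People", "Physical evidence", "Process"]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# hypothetical code counts per 7P dimension
firm = np.array([12, 4, 6, 10, 3, 7, 5])        # firm's value proposition
customers = np.array([5, 9, 6, 2, 11, 8, 4])    # customers' WoM recognition

gap = 1.0 - cosine(firm, customers)             # 0 = aligned, larger = gap
for p, f, c in zip(PS, firm, customers):
    print(f"{p:18s} firm={f:3d} customers={c:3d}")
print(f"brand communication gap: {gap:.2f}")
```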
NASA Technical Reports Server (NTRS)
Edwards, C. L. W.; Meissner, F. T.; Hall, J. B.
1979-01-01
Color computer graphics techniques were investigated as a means of rapidly scanning and interpreting large sets of transient heating data. The data presented were generated to support the conceptual design of a heat-sink thermal protection system (TPS) for a hypersonic research airplane. Color-coded vector and raster displays of the numerical geometry used in the heating calculations were employed to analyze skin thicknesses and surface temperatures of the heat-sink TPS under a variety of trajectory flight profiles. Both vector and raster displays proved to be effective means for rapidly identifying heat-sink mass concentrations, regions of high heating, and potentially adverse thermal gradients. The color-coded (raster) surface displays are a very efficient means for displaying surface-temperature and heating histories, so the most stringent design requirements can be identified quickly. The related hardware and software developments required to implement both the vector and the raster displays for this application are also discussed.
A portable approach for PIC on emerging architectures
NASA Astrophysics Data System (ADS)
Decyk, Viktor
2016-03-01
A portable approach for designing Particle-in-Cell (PIC) algorithms on emerging exascale computers is based on the recognition that 3 distinct programming paradigms are needed. They are: low level vector (SIMD) processing, middle level shared memory parallel programming, and high level distributed memory programming. In addition, there is a memory hierarchy associated with each level. Such algorithms can be initially developed using vectorizing compilers, OpenMP, and MPI. This is the approach recommended by Intel for the Phi processor. These algorithms can then be translated and possibly specialized to other programming models and languages, as needed. For example, the vector processing and shared memory programming might be done with CUDA instead of vectorizing compilers and OpenMP, but generally the algorithm itself is not greatly changed. The UCLA PICKSC web site at http://www.idre.ucla.edu/ contains example open-source skeleton codes (mini-apps) illustrating each of these three programming models, individually and in combination. Fortran2003 now supports abstract data types, and design patterns can be used to support a variety of implementations within the same code base. Fortran2003 also supports interoperability with C so that implementations in C languages are also easy to use. Finally, main codes can be translated into dynamic environments such as Python, while still taking advantage of high-performing compiled languages. Parallel languages are still evolving, with interesting developments in Co-Array Fortran, UPC, and OpenACC, among others, and these can also be supported within the same software architecture. Work supported by NSF and DOE Grants.
Particle-in-Cell laser-plasma simulation on Xeon Phi coprocessors
NASA Astrophysics Data System (ADS)
Surmin, I. A.; Bastrakov, S. I.; Efimenko, E. S.; Gonoskov, A. A.; Korzhimanov, A. V.; Meyerov, I. B.
2016-05-01
This paper concerns the development of a high-performance implementation of the Particle-in-Cell method for plasma simulation on Intel Xeon Phi coprocessors. We discuss the suitability of the method for Xeon Phi architecture and present our experience in the porting and optimization of the existing parallel Particle-in-Cell code PICADOR. Direct porting without code modification gives performance on Xeon Phi close to that of an 8-core CPU on a benchmark problem with 50 particles per cell. We demonstrate step-by-step optimization techniques, such as improving data locality, enhancing parallelization efficiency and vectorization leading to an overall 4.2 × speedup on CPU and 7.5 × on Xeon Phi compared to the baseline version. The optimized version achieves 16.9 ns per particle update on an Intel Xeon E5-2660 CPU and 9.3 ns per particle update on an Intel Xeon Phi 5110P. For a real problem of laser ion acceleration in targets with surface grating, where a large number of macroparticles per cell is required, the speedup of Xeon Phi compared to CPU is 1.6 ×.
Ambulatory Monitoring of Congestive Heart Failure by Multiple Bioelectric Impedance Vectors
Khoury, Dirar S.; Naware, Mihir; Siou, Jeff; Blomqvist, Andreas; Mathuria, Nilesh S.; Wang, Jianwen; Shih, Hue-Teh; Nagueh, Sherif F.; Panescu, Dorin
2009-01-01
Objectives To investigate properties of multiple bioelectric impedance signals recorded during congestive heart failure (CHF) by utilizing various electrode configurations of an implanted cardiac resynchronization therapy (CRT) system. Background Monitoring of CHF has relied mainly on right-heart sensors. Methods Fifteen normal dogs underwent implantation of CRT systems using standard leads. An additional left atrial (LA) pressure lead-sensor was implanted in 5 dogs. Continuous rapid right ventricular (RV) pacing was applied over several weeks. Left ventricular (LV) catheterization and echocardiography were performed biweekly. Six steady-state impedance signals, utilizing intrathoracic and intracardiac vectors, were measured via ring (r), coil (c), and device Can electrodes. Results All animals developed CHF after 2–4 weeks of pacing. Impedance diminished gradually during CHF induction, but at varying rates for different vectors. Impedance during CHF decreased significantly in all measured vectors: LVr-Can, −17%; LVr-RVr, −15%; LVr-RAr, −11%; RVr-Can, −12%; RVc-Can, −7%; RAr-Can, −5%. The LVr-Can vector reflected both the fastest and largest change in impedance in comparison to vectors employing only right-heart electrodes, and was highly reflective of changes in LV end-diastolic volume and LA pressure. Conclusions Impedance signals, acquired via different lead-electrode configurations, have variable responses to CHF. Impedance vectors employing an LV lead are highly responsive to physiologic changes during CHF. Measuring multiple impedance signals could be useful for optimizing ambulatory monitoring in heart failure patients. PMID:19298923
Preparation for a first-in-man lentivirus trial in patients with cystic fibrosis
Alton, Eric W F W; Beekman, Jeffery M; Boyd, A Christopher; Brand, June; Carlon, Marianne S; Connolly, Mary M; Chan, Mario; Conlon, Sinead; Davidson, Heather E; Davies, Jane C; Davies, Lee A; Dekkers, Johanna F; Doherty, Ann; Gea-Sorli, Sabrina; Gill, Deborah R; Griesenbach, Uta; Hasegawa, Mamoru; Higgins, Tracy E; Hironaka, Takashi; Hyndman, Laura; McLachlan, Gerry; Inoue, Makoto; Hyde, Stephen C; Innes, J Alastair; Maher, Toby M; Moran, Caroline; Meng, Cuixiang; Paul-Smith, Michael C; Pringle, Ian A; Pytel, Kamila M; Rodriguez-Martinez, Andrea; Schmidt, Alexander C; Stevenson, Barbara J; Sumner-Jones, Stephanie G; Toshner, Richard; Tsugumine, Shu; Wasowicz, Marguerite W; Zhu, Jie
2017-01-01
We have recently shown that non-viral gene therapy can stabilise the decline of lung function in patients with cystic fibrosis (CF). However, the effect was modest, and more potent gene transfer agents are still required. Fusion protein (F)/Hemagglutinin/Neuraminidase protein (HN)-pseudotyped lentiviral vectors are more efficient for lung gene transfer than non-viral vectors in preclinical models. In preparation for a first-in-man CF trial using the lentiviral vector, we have undertaken key translational preclinical studies. Regulatory-compliant vectors carrying a range of promoter/enhancer elements were assessed in mice and human air–liquid interface (ALI) cultures to select the lead candidate; cystic fibrosis transmembrane conductance regulator (CFTR) expression and function were assessed in CF models using this lead candidate vector. Toxicity was assessed and ‘benchmarked’ against the leading non-viral formulation recently used in a Phase IIb clinical trial. Integration site profiles were mapped and transduction efficiency determined to inform clinical trial dose-ranging. The impact of pre-existing and acquired immunity against the vector and vector stability in several clinically relevant delivery devices was assessed. A hybrid, cytosine guanine dinucleotide (CpG)-free promoter (hCEF), consisting of the elongation factor 1α promoter and the cytomegalovirus enhancer, was most efficacious in both murine lungs and human ALI cultures (both at least 2-log orders above background). The efficacy (at least 14% of airway cells transduced), toxicity and integration site profile support further progression towards clinical trial, and pre-existing and acquired immune responses do not interfere with vector efficacy. The lead rSIV.F/HN candidate expresses functional CFTR and the vector retains 90–100% transduction efficiency in clinically relevant delivery devices. The data support the progression of the F/HN-pseudotyped lentiviral vector into a first-in-man CF trial in 2017. PMID:27852956
Large Electroweak Corrections to Vector-Boson Scattering at the Large Hadron Collider.
Biedermann, Benedikt; Denner, Ansgar; Pellen, Mathieu
2017-06-30
For the first time, full next-to-leading-order electroweak corrections to off-shell vector-boson scattering are presented. The computation features the complete matrix elements, including all nonresonant and off-shell contributions, to the electroweak process pp→μ^{+}ν_{μ}e^{+}ν_{e}jj and is fully differential. We find surprisingly large corrections, reaching -16% for the fiducial cross section, as an intrinsic feature of the vector-boson-scattering processes. We elucidate the origin of these large electroweak corrections by using the double-pole approximation and the effective vector-boson approximation along with leading-logarithmic corrections.
Computational Investigation of Fluidic Counterflow Thrust Vectoring
NASA Technical Reports Server (NTRS)
Hunter, Craig A.; Deere, Karen A.
1999-01-01
A computational study of fluidic counterflow thrust vectoring has been conducted. Two-dimensional numerical simulations were run using the computational fluid dynamics code PAB3D with two-equation turbulence closure and linear Reynolds stress modeling. For validation, computational results were compared to experimental data obtained at the NASA Langley Jet Exit Test Facility. In general, computational results were in good agreement with experimental performance data, indicating that efficient thrust vectoring can be obtained with low secondary flow requirements (less than 1% of the primary flow). An examination of the computational flowfield has revealed new details about the generation of a countercurrent shear layer, its relation to secondary suction, and its role in thrust vectoring. In addition to providing new information about the physics of counterflow thrust vectoring, this work appears to be the first documented attempt to simulate the counterflow thrust vectoring problem using computational fluid dynamics.
van Nierop, Pim; Vormer, Tinke L.; Foijer, Floris; Verheij, Joanne; Lodder, Johannes C.; Andersen, Jesper B.; Mansvelder, Huibert D.; te Riele, Hein
2018-01-01
To identify coding and non-coding suppressor genes of anchorage-independent proliferation by efficient loss-of-function screening, we have developed a method for enzymatic production of low complexity shRNA libraries from subtracted transcriptomes. We produced and screened two LEGO (Low-complexity by Enrichment for Genes shut Off) shRNA libraries that were enriched for shRNA vectors targeting coding and non-coding polyadenylated transcripts that were reduced in transformed Mouse Embryonic Fibroblasts (MEFs). The LEGO shRNA libraries included ~25 shRNA vectors per transcript which limited off-target artifacts. Our method identified 79 coding and non-coding suppressor transcripts. We found that taurine-responsive GABAA receptor subunits, including GABRA5 and GABRB3, were induced during the arrest of non-transformed anchor-deprived MEFs and prevented anchorless proliferation. We show that taurine activates chloride currents through GABAA receptors on MEFs, causing seclusion of cell volume in large membrane protrusions. Volume seclusion from cells by taurine correlated with reduced proliferation and, conversely, suppression of this pathway allowed anchorage-independent proliferation. In human cholangiocarcinomas, we found that several proteins involved in taurine signaling via GABAA receptors were repressed. Low GABRA5 expression typified hyperproliferative tumors, and loss of taurine signaling correlated with reduced patient survival, suggesting this tumor suppressive mechanism operates in vivo. PMID:29787571
Optimizing ATLAS code with different profilers
NASA Astrophysics Data System (ADS)
Kama, S.; Seuster, R.; Stewart, G. A.; Vitillo, R. A.
2014-06-01
After the current maintenance period, the LHC will provide higher energy collisions with increased luminosity. In order to keep up with these higher rates, ATLAS software needs to speed up substantially. However, ATLAS code is composed of approximately 6M lines, written by many different programmers with different backgrounds, which makes code optimisation a challenge. To help with this effort different profiling tools and techniques are being used. These include well-known tools, such as the Valgrind suite and Intel Amplifier; less common tools like Pin, PAPI, and GOoDA; as well as techniques such as library interposing. In this paper we will mainly focus on Pin tools and GOoDA. Pin is a dynamic binary instrumentation tool which can obtain statistics such as call counts and instruction counts and interrogate functions' arguments. It has been used to obtain CLHEP Matrix profiles, operations and vector sizes for linear algebra calculations, which has provided the insight necessary to achieve significant performance improvements. Complementing this, GOoDA, an in-house performance tool built in collaboration with Google, which is based on hardware performance monitoring unit events, is used to identify hot-spots in the code for different types of hardware limitations, such as CPU resources, caches, or memory bandwidth. GOoDA has been used in improving the performance of the new magnetic field code and in identifying potential vectorization targets in several places, such as the Runge-Kutta propagation code.
NASA Astrophysics Data System (ADS)
Lohe, M. A.
2018-06-01
We generalize the Watanabe–Strogatz (WS) transform, which acts on the Kuramoto model in d = 2 dimensions, to a higher-dimensional vector transform which operates on vector oscillator models of synchronization in any dimension d, for the case of identical frequency matrices. These models have conserved quantities constructed from the cross ratios of inner products of the vector variables, which are invariant under the vector transform, and have trajectories which lie on the unit sphere S^{d-1}. Application of the vector transform leads to a partial integration of the equations of motion, leaving a reduced number of independent equations to be solved, for any number of nodes N. We discuss properties of complete synchronization and use the reduced equations to derive a stability condition for completely synchronized trajectories on S^{d-1}. We further generalize the vector transform to a mapping which acts in R^d and in particular preserves the unit ball B^d, and leaves invariant the cross ratios constructed from inner products of vectors in B^d. This mapping can be used to partially integrate a system of vector oscillators with trajectories in B^d, and for d = 2 leads to an extension of the Kuramoto system to a system of oscillators with time-dependent amplitudes and trajectories in the unit disk. We find an inequivalent generalization of the Möbius map which also preserves B^d but leaves invariant a different set of cross ratios, this time constructed from the vector norms. This leads to a different extension of the Kuramoto model with trajectories in the complex plane that can be partially integrated by means of fractional linear transformations.
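A minimal numerical sketch of the vector oscillator dynamics discussed above may help fix ideas. It assumes the commonly quoted d-dimensional Kuramoto (Lohe) form dx_i/dt = Omega x_i + (K/N) sum_j [x_j - (x_i . x_j) x_i] with a common antisymmetric frequency matrix Omega; the WS-type transform itself is not implemented, and all names and parameters are illustrative.

import numpy as np

rng = np.random.default_rng(4)

def lohe_step(X, Omega, K, dt):
    # One Euler step of dx_i/dt = Omega x_i + (K/N) sum_j [x_j - (x_i . x_j) x_i].
    # Renormalization keeps trajectories on the unit sphere S^{d-1} despite
    # the explicit integrator.
    mean = X.mean(axis=0)                          # (1/N) sum_j x_j
    coupling = K * (mean - (X @ mean)[:, None] * X)
    X = X + dt * (X @ Omega.T + coupling)
    return X / np.linalg.norm(X, axis=1, keepdims=True)

d, N = 3, 20
A = rng.normal(size=(d, d))
Omega = A - A.T                                    # antisymmetric frequency matrix
X = rng.normal(size=(N, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)
for _ in range(2000):
    X = lohe_step(X, Omega, K=1.0, dt=0.01)
print(np.linalg.norm(X.mean(axis=0)))              # -> 1 at complete synchronization

With identical frequency matrices and K > 0, the order parameter tends to 1, consistent with the complete synchronization discussed above.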
Geographic Information Systems: A Primer
1990-10-01
...utilizing sophisticated integrated databases (usually vector-based), avoid the indirect value coding scheme by recognizing names or direct magnitudes... intricate involvement required by the operator in order to establish a functional coding scheme. A simple raster system, in which cell values indicate...
USDA-ARS?s Scientific Manuscript database
Objectives: Newcastle disease virus (NDV), a member of the Paramxoviridae family, has been developed as a vector to express foreign genes for vaccine and gene therapy purposes. The foreign genes are usually inserted into a non-coding region of the NDV genome as an independent transcription unit (ITU...
Modeling Interferometric Structures with Birefringent Elements: A Linear Vector-Space Formalism
2013-11-12
Frigo, Nicholas J.; Urick, Vincent J.; Bucholtz, Frank (Naval Research Laboratory, Code 5650, Photonics Technology Branch, Optical Sciences Division)
NASA Technical Reports Server (NTRS)
Greatorex, Scott (Editor); Beckman, Mark
1996-01-01
Several future, and some current missions, use an on-board computer (OBC) force model that is very limited. The OBC geopotential force model typically includes only the J(2), J(3), J(4), C(2,2) and S(2,2) terms to model non-spherical Earth gravitational effects. The Tropical Rainfall Measuring Mission (TRMM), Wide-field Infrared Explorer (WIRE), Transition Region and Coronal Explorer (TRACE), Submillimeter Wave Astronomy Satellite (SWAS), and X-ray Timing Explorer (XTE) all plan to use this geopotential force model on-board. The Solar, Anomalous, and Magnetospheric Particle Explorer (SAMPEX) is already flying this geopotential force model. Past analysis has shown that one of the leading sources of error in the OBC propagated ephemeris is the omission of the higher order geopotential terms. However, these same analyses have shown a wide range of accuracies for the OBC ephemerides. Analysis was performed using EUVE state vectors that showed the EUVE four-day OBC propagated ephemerides varied in accuracy from 200 m to 45 km depending on the initial vector used to start the propagation. The vectors used in the study were from a single EUVE orbit at one-minute intervals in the ephemeris. Since each vector propagated practically the same path as the others, the differences seen had to be due to differences in the initial state vector only. An algorithm was developed that will optimize the epoch of the uploaded state vector. Proper selection can reduce the previous errors of anywhere from 200 m to 45 km to generally less than 1 km over four days of propagation. This would enable flight projects to minimize state vector uploads to the spacecraft. Additionally, this method is superior to other methods in that no additional orbit estimates need be done. The definitive ephemeris generated on the ground can be used as long as the proper epoch is chosen. This algorithm can be easily coded in software that would pick the epoch within a specified time range that would minimize the OBC propagation error. This technique should greatly improve the accuracy of the OBC propagation on-board future spacecraft such as TRMM, WIRE, SWAS, and XTE without increasing complexity in the ground processing.
Computation of transonic potential flow about 3 dimensional inlets, ducts, and bodies
NASA Technical Reports Server (NTRS)
Reyhner, T. A.
1982-01-01
An analysis was developed and a computer code, P465 Version A, written for the prediction of transonic potential flow about three dimensional objects including inlet, duct, and body geometries. Finite differences and line relaxation are used to solve the complete potential flow equation. The coordinate system used for the calculations is independent of body geometry. Cylindrical coordinates are used for the computer code. The analysis is programmed in extended FORTRAN 4 for the CYBER 203 vector computer. The programming of the analysis is oriented toward taking advantage of the vector processing capabilities of this computer. Comparisons of computed results with experimental measurements are presented to verify the analysis. Descriptions of program input and output formats are also presented.
A Comprehensive C++ Controller for a Magnetically Supported Vertical Rotor. 1.0
NASA Technical Reports Server (NTRS)
Morrison, Carlos R.
2001-01-01
This manual describes the new FATMaCC (Five-Axis, Three-Magnetic-Bearing Control Code). The FATMaCC (pronounced "fat mak") is a versatile control code that possesses many desirable features that were not available in previous in-house controllers. The ultimate goal in designing this code was to achieve full rotor levitation and control at a loop time of 50 microsec. Using a 1-GHz processor, the code will control a five-axis system in either a decentralized or a more elegant centralized (modal control) mode at a loop time of 56 microsec. In addition, it will levitate and control (with only minor modification to the input/output wiring) a two-axis and/or a four-axis system. Stable rotor levitation and control of any of the systems mentioned above are accomplished through appropriate key presses to modify parameters, such as stiffness, damping, and bias. A signal generation block provides 11 excitation signals. An excitation signal is then superimposed on the radial bearing x- and y-control signals, thus producing a resultant force vector. By modulating the signals on the bearing x- and y-axes with a cosine and a sine function, respectively, a radial excitation force vector is made to rotate 360 deg. about the bearing geometric center. The rotation of the force vector is achieved manually by using key press or automatically by engaging the "one-per-revolution" feature. Rotor rigid body modes can be excited by using the excitation module. Depending on the polarities of the excitation signal in each radial bearing, the bounce or tilt mode will be excited.
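As a plain illustration of the excitation scheme described above (a sketch, not the FATMaCC source), modulating the bearing x- and y-axis excitation with a cosine and a sine yields a constant-magnitude radial force vector that rotates 360 deg. about the bearing geometric center; all names are hypothetical.

import numpy as np

def rotating_excitation(amplitude, revs, n_samples):
    # Split a fixed-amplitude excitation onto the x- and y-control signals
    # so the resultant force vector sweeps around the bearing center.
    theta = np.linspace(0.0, 2.0 * np.pi * revs, n_samples)
    fx = amplitude * np.cos(theta)    # superimposed on the x-control signal
    fy = amplitude * np.sin(theta)    # superimposed on the y-control signal
    # The resultant magnitude stays constant while its direction rotates.
    assert np.allclose(np.hypot(fx, fy), amplitude)
    return fx, fy

fx, fy = rotating_excitation(amplitude=1.0, revs=1, n_samples=360)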
Bearing performance degradation assessment based on time-frequency code features and SOM network
NASA Astrophysics Data System (ADS)
Zhang, Yan; Tang, Baoping; Han, Yan; Deng, Lei
2017-04-01
Bearing performance degradation assessment and prognostics are extremely important in supporting maintenance decision and guaranteeing the system’s reliability. To achieve this goal, this paper proposes a novel feature extraction method for the degradation assessment and prognostics of bearings. Features of time-frequency codes (TFCs) are extracted from the time-frequency distribution using a hybrid procedure based on short-time Fourier transform (STFT) and non-negative matrix factorization (NMF) theory. An alternative way to design the health indicator is investigated by quantifying the similarity between feature vectors using a self-organizing map (SOM) network. On the basis of this idea, a new health indicator called time-frequency code quantification error (TFCQE) is proposed to assess the performance degradation of the bearing. This indicator is constructed based on the bearing real-time behavior and the SOM model that is previously trained with only the TFC vectors under the normal condition. Vibration signals collected from the bearing run-to-failure tests are used to validate the developed method. The comparison results demonstrate the superiority of the proposed TFCQE indicator over many other traditional features in terms of feature quality metrics, incipient degradation identification and achieving accurate prediction.
Highlights:
• Time-frequency codes are extracted to reflect the signals’ characteristics.
• The SOM network serves as a tool to quantify the similarity between feature vectors.
• A new health indicator is proposed to demonstrate the whole stage of degradation development.
• The method is useful for extracting the degradation features and detecting the incipient degradation.
• The superiority of the proposed method is verified using experimental data.
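The health-indicator idea can be sketched with a minimal SOM in NumPy: train the map on normal-condition feature vectors only, then report each new vector's distance to its best-matching unit as the quantization error. This is a simplified stand-in for TFCQE, with synthetic data in place of the STFT/NMF time-frequency codes; all names are illustrative.

import numpy as np

rng = np.random.default_rng(0)

def train_som(data, grid=(8, 8), epochs=20, lr0=0.5, sigma0=3.0):
    # Minimal self-organizing map trained on normal-condition vectors only.
    h, w = grid
    weights = rng.normal(size=(h * w, data.shape[1]))
    coords = np.array([(i, j) for i in range(h) for j in range(w)], float)
    n_steps, step = epochs * len(data), 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            frac = step / n_steps
            lr, sigma = lr0 * (1.0 - frac), sigma0 * (1.0 - frac) + 1e-3
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))
            dist2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            nb = np.exp(-dist2 / (2.0 * sigma ** 2))   # neighborhood function
            weights += lr * nb[:, None] * (x - weights)
            step += 1
    return weights

def quantization_error(weights, x):
    # Distance to the best-matching unit; growth over time signals degradation.
    return np.sqrt(((weights - x) ** 2).sum(axis=1).min())

normal = rng.normal(0.0, 1.0, size=(500, 16))              # stand-in feature vectors
som = train_som(normal)
print(quantization_error(som, rng.normal(0.0, 1.0, 16)))   # small: healthy
print(quantization_error(som, rng.normal(4.0, 1.0, 16)))   # larger: degraded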
The Adenovirus Genome Contributes to the Structural Stability of the Virion
Saha, Bratati; Wong, Carmen M.; Parks, Robin J.
2014-01-01
Adenovirus (Ad) vectors are currently the most commonly used platform for therapeutic gene delivery in human gene therapy clinical trials. Although these vectors are effective, many researchers seek to further improve the safety and efficacy of Ad-based vectors through detailed characterization of basic Ad biology relevant to its function as a vector system. Most Ad vectors are deleted of key, or all, viral protein coding sequences, which functions to not only prevent virus replication but also increase the cloning capacity of the vector for foreign DNA. However, radical modifications to the genome size significantly decreases virion stability, suggesting that the virus genome plays a role in maintaining the physical stability of the Ad virion. Indeed, a similar relationship between genome size and virion stability has been noted for many viruses. This review discusses the impact of the genome size on Ad virion stability and emphasizes the need to consider this aspect of virus biology in Ad-based vector design. PMID:25254384
NASA Technical Reports Server (NTRS)
Kleb, W. L.
1994-01-01
Steady flow over the leading portion of a multicomponent airfoil section is studied using computational fluid dynamics (CFD) employing an unstructured grid. To simplify the problem, only the inviscid terms are retained from the Reynolds-averaged Navier-Stokes equations - leaving the Euler equations. The algorithm is derived using the finite-volume approach, incorporating explicit time-marching of the unsteady Euler equations to a time-asymptotic, steady-state solution. The inviscid fluxes are obtained through either of two approximate Riemann solvers: Roe's flux difference splitting or van Leer's flux vector splitting. Results are presented which contrast the solutions given by the two flux functions as a function of Mach number and grid resolution. Additional information is presented concerning code verification techniques, flow recirculation regions, convergence histories, and computational resources.
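The two flux functions above apply to the Euler equations; as a compact illustration of the underlying idea only, the following sketch shows the scalar-advection analogue of flux-vector splitting, where the flux f(u) = a u is split into right- and left-running parts and each cell face takes the upwind contribution. This is not the airfoil code's implementation.

import numpy as np

def split_flux(u, a):
    # f = f+ + f-: f+ carries right-running waves, f- left-running waves.
    f_plus = 0.5 * (a + abs(a)) * u
    f_minus = 0.5 * (a - abs(a)) * u
    return f_plus, f_minus

def upwind_residual(u, a, dx):
    # First-order upwind residual built from split fluxes at cell faces.
    fp, fm = split_flux(u, a)
    face = fp[:-1] + fm[1:]               # f+ from the left cell, f- from the right
    return -(face[1:] - face[:-1]) / dx   # interior cells only

u = np.exp(-((np.linspace(0.0, 1.0, 101) - 0.5) ** 2) / 0.01)
r = upwind_residual(u, a=1.0, dx=0.01)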
Implementation of a partitioned algorithm for simulation of large CSI problems
NASA Technical Reports Server (NTRS)
Alvin, Kenneth F.; Park, K. C.
1991-01-01
The implementation of a partitioned numerical algorithm for determining the dynamic response of coupled structure/controller/estimator finite-dimensional systems is reviewed. The partitioned approach leads to a set of coupled first and second-order linear differential equations which are numerically integrated with extrapolation and implicit step methods. The present software implementation, ACSIS, utilizes parallel processing techniques at various levels to optimize performance on a shared-memory concurrent/vector processing system. A general procedure for the design of controller and filter gains is also implemented, which utilizes the vibration characteristics of the structure to be solved. Also presented are: example problems; a user's guide to the software; the procedures and algorithm scripts; a stability analysis for the algorithm; and the source code for the parallel implementation.
The Helioseismic and Magnetic Imager (HMI) Vector Magnetic Field Pipeline: Overview and Performance
NASA Astrophysics Data System (ADS)
Hoeksema, J. Todd; Liu, Yang; Hayashi, Keiji; Sun, Xudong; Schou, Jesper; Couvidat, Sebastien; Norton, Aimee; Bobra, Monica; Centeno, Rebecca; Leka, K. D.; Barnes, Graham; Turmon, Michael
2014-09-01
The Helioseismic and Magnetic Imager (HMI) began near-continuous full-disk solar measurements on 1 May 2010 from the Solar Dynamics Observatory (SDO). An automated processing pipeline keeps pace with observations to produce observable quantities, including the photospheric vector magnetic field, from sequences of filtergrams. The basic vector-field frame list cadence is 135 seconds, but to reduce noise the filtergrams are combined to derive data products every 720 seconds. The primary 720 s observables were released in mid-2010, including Stokes polarization parameters measured at six wavelengths, as well as intensity, Doppler velocity, and the line-of-sight magnetic field. More advanced products, including the full vector magnetic field, are now available. Automatically identified HMI Active Region Patches (HARPs) track the location and shape of magnetic regions throughout their lifetime. The vector field is computed using the Very Fast Inversion of the Stokes Vector (VFISV) code optimized for the HMI pipeline; the remaining 180° azimuth ambiguity is resolved with the Minimum Energy (ME0) code. The Milne-Eddington inversion is performed on all full-disk HMI observations. The disambiguation, until recently run only on HARP regions, is now implemented for the full disk. Vector and scalar quantities in the patches are used to derive active region indices potentially useful for forecasting; the data maps and indices are collected in the SHARP data series, hmi.sharp_720s. Definitive SHARP processing is completed only after the region rotates off the visible disk; quick-look products are produced in near real time. Patches are provided in both CCD and heliographic coordinates. HMI provides continuous coverage of the vector field, but has modest spatial, spectral, and temporal resolution. Coupled with limitations of the analysis and interpretation techniques, effects of the orbital velocity, and instrument performance, the resulting measurements have a certain dynamic range and sensitivity and are subject to systematic errors and uncertainties that are characterized in this report.
Tesfazghi, Kemi; Hill, Jenny; Jones, Caroline; Ranson, Hilary; Worrall, Eve
2016-02-01
New vector control tools are needed to combat insecticide resistance and reduce malaria transmission. The World Health Organization (WHO) endorses larviciding as a supplementary vector control intervention using larvicides recommended by the WHO Pesticides Evaluation Scheme (WHOPES). The decision to scale-up larviciding in Nigeria provided an opportunity to investigate the factors influencing policy adoption and assess the role that actors and evidence play in the policymaking process, in order to draw lessons that help accelerate the uptake of new methods for vector control. A retrospective policy analysis was carried out using in-depth interviews with national level policy stakeholders to establish normative national vector control policy or strategy decision-making processes and compare these with the process that led to the decision to scale-up larviciding. The interviews were transcribed, then coded and analyzed using NVivo10. Data were coded according to pre-defined themes from an analytical policy framework developed a priori. Stakeholders reported that the larviciding decision-making process deviated from the normative vector control decision-making process. National malaria policy is normally strongly influenced by WHO recommendations, but the potential of larviciding to contribute to national economic development objectives through larvicide production in Nigeria was cited as a key factor shaping the decision. The larviciding decision involved a restricted range of policy actors, and notably excluded actors that usually play advisory, consultative and evidence generation roles. Powerful actors limited the access of some actors to the policy processes and content. This may have limited the influence of scientific evidence in this policy decision. This study demonstrates that national vector control policy change can be facilitated by linking malaria control objectives to wider socioeconomic considerations and through engaging powerful policy champions to drive policy change and thereby accelerate access to new vector control tools. © The Author 2015. Published by Oxford University Press in association with The London School of Hygiene and Tropical Medicine.
Takeda, Noriaki; Uno, Atsuhiko; Inohara, Hidenori; Shimada, Shoichi
2016-01-01
Background The mouse is the most commonly used animal model in biomedical research because of recent advances in molecular genetic techniques. Studies related to eye movement in mice are common in fields such as ophthalmology relating to vision, neuro-otology relating to the vestibulo-ocular reflex (VOR), neurology relating to the cerebellum’s role in movement, and psychology relating to attention. Recording eye movements in mice, however, is technically difficult. Methods We developed a new algorithm for analyzing the three-dimensional (3D) rotation vector of eye movement in mice using high-speed video-oculography (VOG). The algorithm made it possible to analyze the gain and phase of the VOR using the eye’s angular velocity around the axis of eye rotation. Results When mice were rotated at 0.5 Hz and 2.5 Hz around the earth’s vertical axis with their heads in a 30° nose-down position, the vertical components of their left eye movements were in phase with the horizontal components. The VOR gain was 0.42 at 0.5 Hz and 0.74 at 2.5 Hz, and the phase lead of the eye movement against the turntable was 16.1° at 0.5 Hz and 4.88° at 2.5 Hz. Conclusions To the best of our knowledge, this is the first report of this algorithm being used to calculate a 3D rotation vector of eye movement in mice using high-speed VOG. We developed a technique for analyzing the 3D rotation vector of eye movements in mice with a high-speed infrared CCD camera and concluded that the technique is suitable for analyzing eye movements in mice. We also append to this article C++ source code that calculates the 3D rotation vectors of the eye position from the two-dimensional coordinates of the pupil and the iris freckle in the image. PMID:27023859
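For readers unfamiliar with the representation, the standard eye-movement convention writes a rotation by angle theta about a unit axis n as the rotation vector r = tan(theta/2) n; for two unit gaze directions a and b this is r = (a x b) / (1 + a . b). The sketch below applies that textbook formula and is not the authors' C++ source.

import numpy as np

def rotation_vector(ref, cur):
    # Rotation vector taking the reference gaze direction to the current one.
    a = np.asarray(ref, float) / np.linalg.norm(ref)
    b = np.asarray(cur, float) / np.linalg.norm(cur)
    return np.cross(a, b) / (1.0 + np.dot(a, b))

# A 10-degree horizontal eye rotation from straight ahead:
r = rotation_vector([1.0, 0.0, 0.0],
                    [np.cos(np.radians(10)), np.sin(np.radians(10)), 0.0])
print(np.degrees(2.0 * np.arctan(np.linalg.norm(r))))   # ~10.0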
O'Donnell, David; Sperzel, Johannes; Thibault, Bernard; Rinaldi, Christopher A; Pappone, Carlo; Gutleben, Klaus-Jürgen; Leclercq, Christopher; Razavi, Hedi; Ryu, Kyungmoo; Mcspadden, Luke C; Fischer, Avi; Tomassoni, Gery
2017-04-01
The aim of this study was to evaluate any benefits to the number of viable pacing vectors and maximal spatial coverage with quadripolar left ventricular (LV) leads when compared with tripolar and bipolar equivalents in patients receiving cardiac resynchronization therapy (CRT). A meta-analysis of five previously published clinical trials involving the Quartet™ LV lead (St Jude Medical, St Paul, MN, USA) was performed to evaluate the number of viable pacing vectors defined as capture thresholds ≤2.5 V and no phrenic nerve stimulation and maximal spatial coverage of viable vectors in CRT patients at pre-discharge (n = 370) and first follow-up (n = 355). Bipolar and tripolar lead configurations were modelled by systematic elimination of two and one electrode(s), respectively, from the Quartet lead. The Quartet lead with its four pacing electrodes exhibited the greatest number of pacing vectors per patient when compared with the best bipolar and the best tripolar modelled equivalents. Similarly, the Quartet lead provided the highest spatial coverage in terms of the distance between two furthest viable pacing cathodes when compared with the best bipolar and the best tripolar configurations (P < 0.05). Among the three modelled bipolar configurations, the lead configuration with the two most distal electrodes resulted in the highest number of viable pacing vectors. Among the four modelled tripolar configurations, elimination of the second proximal electrode (M3) resulted in the highest number of viable pacing options per patient. There were no significant differences observed between pre-discharge and first follow-up analyses. The Quartet lead with its four electrodes and the capability to pace from four anatomical locations provided the highest number of viable pacing vectors at pre-discharge and first follow-up visits, providing more flexibility in device programming and enabling continuation of CRT in more patients when compared with bipolar and tripolar equivalents. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2016. For permissions please email: journals.permissions@oup.com.
Ritz method for transient response in systems having unsymmetric stiffness
NASA Technical Reports Server (NTRS)
Butler, Thomas G.
1989-01-01
The DMAP coding was automated to such an extent by using the device of bubble vectors that it is usable for analyses in its present form. This feasibility study demonstrates that the Ritz Method is so compelling as to warrant coding its modules in FORTRAN and organizing the resulting coding into a new Rigid Format. Even though this Ritz technique was developed for unsymmetric stiffness matrices, it offers advantages to problems with symmetric stiffnesses. If used for the symmetric case, the solution would be simplified to one set of modes, because the adjoint would be the same as the primary. Its advantage in either type of symmetry over a classical eigenvalue modal expansion is that the information density per Ritz mode is far richer than per eigenvalue mode; thus far fewer modes would be needed for the same accuracy, and every mode would actively participate in the response. Considerable economy can be realized in adapting Ritz vectors for modal solutions. This new Ritz capability now makes NASTRAN even more powerful than before.
Rigorous vector wave propagation for arbitrary flat media
NASA Astrophysics Data System (ADS)
Bos, Steven P.; Haffert, Sebastiaan Y.; Keller, Christoph U.
2017-08-01
Precise modelling of the (off-axis) point spread function (PSF) to identify geometrical and polarization aberrations is important for many optical systems. In order to characterise the PSF of the system in all Stokes parameters, an end-to-end simulation of the system has to be performed in which Maxwell's equations are rigorously solved. We present the first results of a Python code that we are developing to perform multiscale end-to-end wave propagation simulations that include all relevant physics. Currently we can handle plane-parallel near- and far-field vector diffraction effects of propagating waves in homogeneous isotropic and anisotropic materials, refraction and reflection of flat parallel surfaces, interference effects in thin films and unpolarized light. We show that the code has a numerical precision on the order of 10^-16 for non-absorbing isotropic and anisotropic materials. For absorbing materials the precision is on the order of 10^-8. The capabilities of the code are demonstrated by simulating a converging beam reflecting from a flat aluminium mirror at normal incidence.
Multitasking the three-dimensional shock wave code CTH on the Cray X-MP/416
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGlaun, J.M.; Thompson, S.L.
1988-01-01
CTH is a software system under development at Sandia National Laboratories, Albuquerque, that models multidimensional, multi-material, large-deformation, strong shock wave physics. CTH was carefully designed to both vectorize and multitask on the Cray X-MP/416. All of the physics routines are vectorized except the thermodynamics and the interface tracer. All of the physics routines are multitasked except the boundary conditions. The Los Alamos National Laboratory multitasking library was used for the multitasking. The resulting code is easy to maintain, easy to understand, gives the same answers as the unitasked code, and achieves a measured speedup of approximately 3.5 on the four-cpu Cray. This document discusses the design, prototyping, development, and debugging of CTH. It also covers the architectural features of CTH that enhance multitasking, the granularity of the tasks, and the synchronization of tasks. The utility of system software and tools such as simulators and interactive debuggers is also discussed. 5 refs., 7 tabs.
QCD next-to-leading-order predictions matched to parton showers for vector-like quark models.
Fuks, Benjamin; Shao, Hua-Sheng
2017-01-01
Vector-like quarks are featured by a wealth of beyond the Standard Model theories and are consequently an important goal of many LHC searches for new physics. Those searches, as well as most related phenomenological studies, however, rely on predictions evaluated at the leading-order accuracy in QCD and consider well-defined simplified benchmark scenarios. Adopting an effective bottom-up approach, we compute next-to-leading-order predictions for vector-like-quark pair production and single production in association with jets, with a weak or with a Higgs boson in a general new physics setup. We additionally compute vector-like-quark contributions to the production of a pair of Standard Model bosons at the same level of accuracy. For all processes under consideration, we focus both on total cross sections and on differential distributions, most of these calculations being performed for the first time in our field. As a result, our work paves the way to the precise extraction of experimental limits on vector-like quarks thanks to an accurate control of the shapes of the relevant observables, and emphasises the extra handles that could be provided by novel vector-like-quark probes never envisaged so far.
Observations on Polar Coding with CRC-Aided List Decoding
2016-09-01
1. INTRODUCTION Polar codes are a new type of forward error correction (FEC) code, introduced by Arikan in [1], in which he... error correction (FEC) currently used and planned for use in Navy wireless communication systems. The project's results from FY14 and FY15 are... good error-correction performance. We used the Tal/Vardy method of [5]. The polar encoder uses a row vector u of length N. Let uA be the subvector...
Analysis and Simulation of Narrowband GPS Jamming Using Digital Excision Temporal Filtering.
1994-12-01
...the sequence of stored values from the P-code sampled at a 20 MHz rate. When correlated with a reference vector of the same length to simulate a GPS... rate required for the GPS signals (20 MHz sampling rate for the P-code signal), the personal computer (PC) used to run the simulation could not perform... This subroutine is used to perform a fast FFT-based biased cross-correlation. Written by Capt Gerry Falen, USAF, 16 AUG 94 % start of code
Accelerating molecular property calculations with nonorthonormal Krylov space methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Furche, Filipp; Krull, Brandon T.; Nguyen, Brian D.
2016-05-03
Here, we formulate Krylov space methods for large eigenvalue problems and linear equation systems that take advantage of decreasing residual norms to reduce the cost of matrix-vector multiplication. The residuals are used as subspace basis without prior orthonormalization, which leads to generalized eigenvalue problems or linear equation systems on the Krylov space. These nonorthonormal Krylov space (nKs) algorithms are favorable for large matrices with irregular sparsity patterns whose elements are computed on the fly, because fewer operations are necessary as the residual norm decreases as compared to the conventional method, while errors in the desired eigenpairs and solution vectors remain small. We consider real symmetric and symplectic eigenvalue problems as well as linear equation systems and Sylvester equations as they appear in configuration interaction and response theory. The nKs method can be implemented in existing electronic structure codes with minor modifications and yields speed-ups of 1.2-1.8 in typical time-dependent Hartree-Fock and density functional applications without accuracy loss. The algorithm can compute entire linear subspaces simultaneously which benefits electronic spectra and force constant calculations requiring many eigenpairs or solution vectors. The nKs approach is related to difference density methods in electronic ground state calculations, and particularly efficient for integral direct computations of exchange-type contractions. By combination with resolution-of-the-identity methods for Coulomb contractions, three- to fivefold speed-ups of hybrid time-dependent density functional excited state and response calculations are achieved.
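A toy version of the nKs idea for a linear equation system, under stated assumptions: the matrix is symmetric positive definite, successive residuals are used directly as the subspace basis with no Gram-Schmidt step, and a Galerkin-projected system is solved on that nonorthonormal basis. Function and variable names are invented; the paper's algorithms also cover eigenvalue and response problems.

import numpy as np

def nks_solve(A, b, tol=1e-10, max_iter=50):
    # Residuals serve as the subspace basis without prior orthonormalization.
    x = np.zeros_like(b)
    basis = []
    for _ in range(max_iter):
        r = b - A @ x
        if np.linalg.norm(r) < tol:
            break
        basis.append(r)
        V = np.column_stack(basis)        # nonorthonormal Krylov basis
        # Galerkin condition on the subspace: V^T (A V y - b) = 0.
        y = np.linalg.lstsq(V.T @ A @ V, V.T @ b, rcond=None)[0]
        x = V @ y
    return x

rng = np.random.default_rng(1)
M = rng.normal(size=(50, 50))
A = M @ M.T + 50.0 * np.eye(50)           # SPD test matrix
b = rng.normal(size=50)
x = nks_solve(A, b)
print(np.linalg.norm(A @ x - b))          # ~1e-10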
Navier-Stokes Simulation of Homogeneous Turbulence on the CYBER 205
NASA Technical Reports Server (NTRS)
Wu, C. T.; Ferziger, J. H.; Chapman, D. R.; Rogallo, R. S.
1984-01-01
A computer code which solves the Navier-Stokes equations for three-dimensional, time-dependent, homogeneous turbulence has been written for the CYBER 205. The code has options for both 64-bit and 32-bit arithmetic. With 32-bit computation, mesh sizes up to 64^3 are contained within the core of a 2-million-word (64-bit) memory. Timing runs were made for various vector lengths up to 6144. With this code, speeds a little over 100 Mflops have been achieved on a 2-pipe CYBER 205. Several problems encountered in the coding are discussed.
NASA Technical Reports Server (NTRS)
Habiby, Sarry F.; Collins, Stuart A., Jr.
1987-01-01
The design and implementation of a digital (numerical) optical matrix-vector multiplier are presented. A Hughes liquid crystal light valve, the residue arithmetic representation, and a holographic optical memory are used to construct position coded optical look-up tables. All operations are performed in effectively one light valve response time with a potential for a high information density.
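The residue-arithmetic, table look-up idea translates directly into software: precompute addition and multiplication tables for each modulus, evaluate the matrix-vector product independently in every residue channel (no carries between channels), and reconstruct the conventional result with the Chinese remainder theorem. The moduli and names below are illustrative; the optical position coding itself is not modeled.

import numpy as np
from math import prod

MODULI = (7, 11, 13)                  # pairwise coprime; dynamic range = 1001
M = prod(MODULI)

# One add table and one mul table per modulus, the electronic analogue of
# the position-coded optical look-up tables.
ADD = [np.add.outer(range(m), range(m)) % m for m in MODULI]
MUL = [np.multiply.outer(range(m), range(m)) % m for m in MODULI]

def to_residue(x):
    return tuple(x % m for m in MODULI)

def from_residue(digits):
    # Chinese-remainder reconstruction of the conventional integer.
    x = 0
    for d, m in zip(digits, MODULI):
        Mi = M // m
        x += int(d) * Mi * pow(Mi, -1, m)
    return x % M

def rns_matvec(A, v):
    # Matrix-vector product evaluated channel by channel via table look-ups.
    out = []
    for row in A:
        acc = [0] * len(MODULI)
        for a, x in zip(row, v):
            ra, rx = to_residue(a), to_residue(x)
            for k in range(len(MODULI)):
                acc[k] = ADD[k][acc[k], MUL[k][ra[k], rx[k]]]
        out.append(from_residue(acc))
    return out

print(rns_matvec([[1, 2], [3, 4]], [5, 6]))   # [17, 39]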
Computational mechanics analysis tools for parallel-vector supercomputers
NASA Technical Reports Server (NTRS)
Storaasli, Olaf O.; Nguyen, Duc T.; Baddourah, Majdi; Qin, Jiangning
1993-01-01
Computational algorithms for structural analysis on parallel-vector supercomputers are reviewed. These parallel algorithms, developed by the authors, are for the assembly of structural equations, 'out-of-core' strategies for linear equation solution, massively distributed-memory equation solution, unsymmetric equation solution, general eigensolution, geometrically nonlinear finite element analysis, design sensitivity analysis for structural dynamics, optimization search analysis and domain decomposition. The source code for many of these algorithms is available.
NASA Technical Reports Server (NTRS)
Jaggi, S.
1993-01-01
A study is conducted to investigate the effects and advantages of data compression techniques on multispectral imagery data acquired by NASA's airborne scanners at the Stennis Space Center. The first technique used was vector quantization. The vector is defined in the multispectral imagery context as an array of pixels from the same location in each channel. The error obtained in substituting the reconstructed images for the original set is compared for different compression ratios. Also, the eigenvalues of the covariance matrix obtained from the reconstructed data set are compared with the eigenvalues of the original set. The effects of varying the size of the vector codebook on the quality of the compression and on subsequent classification are also presented. The output data from the vector quantization algorithm were further compressed by a lossless technique called Difference-mapped Shift-extended Huffman coding. The overall compression for 7 channels of data acquired by the Calibrated Airborne Multispectral Scanner (CAMS) was 195:1 (0.41 bpp) with an RMS error of 15.8 pixels, and 18:1 (0.447 bpp) with an RMS error of 3.6 pixels. The algorithms were implemented in software and interfaced with the help of dedicated image processing boards to an 80386-compatible PC. Modules were developed for the tasks of image compression and image analysis. Also, supporting software to perform image processing for visual display and interpretation of the compressed/classified images was developed.
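A compact sketch of the cross-channel vector quantization described above, using plain k-means for codebook design on synthetic data. The study's codebook sizes, scanner data, and the follow-on Huffman stage are not reproduced; all names and sizes are illustrative.

import numpy as np

rng = np.random.default_rng(0)

def train_codebook(vectors, size=64, iters=10):
    # Plain k-means codebook training for vector quantization.
    codebook = vectors[rng.choice(len(vectors), size, replace=False)]
    for _ in range(iters):
        d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        idx = d.argmin(axis=1)                # nearest codeword per vector
        for k in range(size):
            members = vectors[idx == k]
            if len(members):
                codebook[k] = members.mean(axis=0)
    return codebook

# A multispectral "vector" is the array of co-located pixels, one per channel.
channels, h, w = 7, 64, 64
cube = rng.integers(0, 256, size=(channels, h, w)).astype(float)
vectors = cube.reshape(channels, -1).T        # shape (h*w, channels)

codebook = train_codebook(vectors)
idx = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1).argmin(1)
recon = codebook[idx].T.reshape(channels, h, w)
rms = np.sqrt(((cube - recon) ** 2).mean())   # distortion at this codebook size

The stored codeword indices, like the study's VQ output, could then be entropy-coded losslessly (e.g., with Huffman coding) for further compression.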
Thrust vectoring for lateral-directional stability
NASA Technical Reports Server (NTRS)
Peron, Lee R.; Carpenter, Thomas
1992-01-01
The advantages and disadvantages of using thrust vectoring for lateral-directional control and the effects of reducing the tail size of a single-engine aircraft were investigated. The aerodynamic characteristics of the F-16 aircraft were generated by using the Aerodynamic Preliminary Analysis System II panel code. The resulting lateral-directional linear perturbation analysis of a modified F-16 aircraft with various tail sizes and yaw vectoring was performed at several speeds and altitudes to determine the stability and control trends for the aircraft compared to these trends for a baseline aircraft. A study of the paddle-type turning vane thrust vectoring control system as used on the National Aeronautics and Space Administration F/A-18 High Alpha Research Vehicle is also presented.
Alpert, Carl-Alfred; Crutz-Le Coq, Anne-Marie; Malleret, Christine; Zagorec, Monique
2003-01-01
The complete nucleotide sequence of the 13-kb plasmid pRV500, isolated from Lactobacillus sakei RV332, was determined. Sequence analysis enabled the identification of genes coding for a putative type I restriction-modification system, two genes coding for putative recombinases of the integrase family, and a region likely involved in replication. The structural features of this region, comprising a putative ori segment containing 11- and 22-bp repeats and a repA gene coding for a putative initiator protein, indicated that pRV500 belongs to the pUCL287 subfamily of theta-type replicons. A 3.7-kb fragment encompassing this region was fused to an Escherichia coli replicon to produce the shuttle vector pRV566 and was observed to be functional in L. sakei for plasmid replication. The L. sakei replicon alone could not support replication in E. coli. Plasmid pRV500 and its derivative pRV566 were determined to be at very low copy numbers in L. sakei. pRV566 was maintained at a reasonable rate over 20 generations in several lactobacilli, such as Lactobacillus curvatus, Lactobacillus casei, and Lactobacillus plantarum, in addition to L. sakei, making it an interesting basis for developing vectors. Sequence relationships with other plasmids are described and discussed. PMID:12957947
Li, Haichen; Yaron, David J
2016-11-08
A least-squares commutator in the iterative subspace (LCIIS) approach is explored for accelerating self-consistent field (SCF) calculations. LCIIS is similar to direct inversion of the iterative subspace (DIIS) methods in that the next iterate of the density matrix is obtained as a linear combination of past iterates. However, whereas DIIS methods find the linear combination by minimizing a sum of error vectors, LCIIS minimizes the Frobenius norm of the commutator between the density matrix and the Fock matrix. This minimization leads to a quartic problem that can be solved iteratively through a constrained Newton's method. The relationship between LCIIS and DIIS is discussed. Numerical experiments suggest that LCIIS leads to faster convergence than other SCF convergence accelerating methods in a statistically significant sense, and in a number of cases LCIIS leads to stable SCF solutions that are not found by other methods. The computational cost involved in solving the quartic minimization problem is small compared to the typical cost of SCF iterations and the approach is easily integrated into existing codes. LCIIS can therefore serve as a powerful addition to SCF convergence accelerating methods in computational quantum chemistry packages.
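A sketch of the LCIIS step under stated assumptions: an orthonormal basis, so the commutator reduces to FD - DF, and a generic constrained optimizer (SciPy's SLSQP) in place of the paper's constrained Newton iteration. The function name and test matrices are invented.

import numpy as np
from scipy.optimize import minimize

def lciis_coefficients(focks, densities):
    # Find mixing coefficients c (summing to 1) that minimize the squared
    # Frobenius norm of [F(c), D(c)], a quartic function of c.
    def objective(c):
        F = sum(ci * Fi for ci, Fi in zip(c, focks))
        D = sum(ci * Di for ci, Di in zip(c, densities))
        comm = F @ D - D @ F
        return (comm ** 2).sum()

    n = len(focks)
    res = minimize(objective, np.full(n, 1.0 / n), method="SLSQP",
                   constraints={"type": "eq", "fun": lambda c: c.sum() - 1.0})
    return res.x

rng = np.random.default_rng(2)
Fs = [0.5 * (F + F.T) for F in (rng.normal(size=(4, 4)) for _ in range(3))]
Ds = [0.5 * (D + D.T) for D in (rng.normal(size=(4, 4)) for _ in range(3))]
print(lciis_coefficients(Fs, Ds))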
The evolution of plant virus transmission pathways.
Hamelin, Frédéric M; Allen, Linda J S; Prendeville, Holly R; Hajimorad, M Reza; Jeger, Michael J
2016-05-07
The evolution of plant virus transmission pathways is studied through transmission via seed, pollen, or a vector. We address the questions: under what circumstances does vector transmission make pollen transmission redundant? Can evolution lead to the coexistence of multiple virus transmission pathways? We restrict the analysis to an annual plant population in which reproduction through seed is obligatory. A semi-discrete model with pollen, seed, and vector transmission is formulated to investigate these questions. We assume vector and pollen transmission rates are frequency-dependent and density-dependent, respectively. An ecological stability analysis is performed for the semi-discrete model and used to inform an evolutionary study of trade-offs between pollen and seed versus vector transmission. Evolutionary dynamics critically depend on the shape of the trade-off functions. Assuming a trade-off between pollen and vector transmission, evolution either leads to an evolutionarily stable mix of pollen and vector transmission (concave trade-off) or there is evolutionary bi-stability (convex trade-off); the presence of pollen transmission may prevent evolution of vector transmission. Considering a trade-off between seed and vector transmission, evolutionary branching and the subsequent coexistence of pollen-borne and vector-borne strains is possible. This study contributes to the theory behind the diversity of plant-virus transmission patterns observed in nature. Copyright © 2016 Elsevier Ltd. All rights reserved.
Radiative transfer codes for atmospheric correction and aerosol retrieval: intercomparison study.
Kotchenova, Svetlana Y; Vermote, Eric F; Levy, Robert; Lyapustin, Alexei
2008-05-01
Results are summarized for a scientific project devoted to the comparison of four atmospheric radiative transfer codes incorporated into different satellite data processing algorithms, namely, 6SV1.1 (second simulation of a satellite signal in the solar spectrum, vector, version 1.1), RT3 (radiative transfer), MODTRAN (moderate resolution atmospheric transmittance and radiance code), and SHARM (spherical harmonics). The performance of the codes is tested against well-known benchmarks, such as Coulson's tabulated values and a Monte Carlo code. The influence of revealed differences on aerosol optical thickness and surface reflectance retrieval is estimated theoretically by using a simple mathematical approach. All information about the project can be found at http://rtcodes.ltdri.org.
Kocher, Arthur; Gantier, Jean-Charles; Holota, Hélène; Jeziorski, Céline; Coissac, Eric; Bañuls, Anne-Laure; Girod, Romain; Gaborit, Pascal; Murienne, Jérôme
2016-11-01
The nearly complete mitochondrial genome of Lutzomyia umbratilis Ward & Fraiha, 1977 (Psychodidae: Phlebotominae), considered the main vector of Leishmania guyanensis, is presented. The sequencing was performed on an Illumina HiSeq 2500 platform with a genome-skimming strategy. The full nuclear ribosomal RNA segment was also assembled. The mitogenome of L. umbratilis was determined to be at least 15,717 bp long and presents an architecture found in many insect mitogenomes (13 protein-coding genes, 22 transfer RNAs, two ribosomal RNAs, and one non-coding region, also referred to as the control region). The control region contains a large repeated element of c. 370 bp and a poly-AT region of unknown length. This is the first mitogenome of Psychodidae to be described.
Combined group ECC protection and subgroup parity protection
Gara, Alan G.; Chen, Dong; Heidelberger, Philip; Ohmacht, Martin
2013-06-18
A method and system are disclosed for providing combined error code protection and subgroup parity protection for a given group of n bits. The method comprises the steps of identifying a number, m, of redundant bits for said error protection; and constructing a matrix P, wherein multiplying said given group of n bits with P produces m redundant error correction code (ECC) protection bits, and two columns of P provide parity protection for subgroups of said given group of n bits. In the preferred embodiment of the invention, the matrix P is constructed by generating permutations of m bit wide vectors with three or more, but an odd number of, elements with value one and the other elements with value zero; and assigning said vectors to rows of the matrix P.
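A sketch of the matrix construction described above: rows of P are distinct m-bit vectors with an odd number (three or more) of ones, and the m redundant bits are data @ P mod 2. The further requirement that two designated columns of P supply subgroup parity is not enforced in this toy version; names and sizes are illustrative.

import numpy as np
from itertools import combinations

def build_p_matrix(n, m):
    # Rows of P: distinct m-bit vectors with an odd number (>= 3) of ones.
    rows = []
    for weight in range(3, m + 1, 2):         # odd weights >= 3
        for ones in combinations(range(m), weight):
            row = np.zeros(m, dtype=np.uint8)
            row[list(ones)] = 1
            rows.append(row)
            if len(rows) == n:
                return np.array(rows)
    raise ValueError("m too small for n distinct odd-weight rows")

data = np.random.randint(0, 2, 16).astype(np.uint8)   # n = 16 data bits
P = build_p_matrix(n=16, m=8)
ecc = data @ P % 2                                     # m = 8 redundant ECC bits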
Experimental Validation of a Coupled Fluid-Multibody Dynamics Model for Tanker Trucks
2007-11-08
...order to accurately predict the dynamic response of tanker trucks, the model must accurately account for the following effects: • Incompressible... computational code which uses a time-accurate explicit solution procedure is used to solve both the solid and fluid equations of motion. Many commercial... position vector, τ is the deviatoric stress tensor, D is the rate-of-deformation tensor, f_r is the body force vector, r is the artificial...
Dryden/Edwards 1994 Thrust-Vectoring Aircraft Fleet - F-18 HARV, X-31, F-16 MATV
NASA Technical Reports Server (NTRS)
1994-01-01
The three thrust-vectoring aircraft at Edwards, California, each capable of flying at extreme angles of attack, cruise over the California desert in formation during flight in March 1994. They are, from left, NASA's F-18 High Alpha Research Vehicle (HARV), flown by the NASA Dryden Flight Research Center; the X-31, flown by the X-31 International Test Organization (ITO) at Dryden; and the Air Force F-16 Multi-Axis Thrust Vectoring (MATV) aircraft. All three aircraft were flown in different programs and were developed independently. The NASA F-18 HARV was a testbed to produce aerodynamic data at high angles of attack to validate computer codes and wind tunnel research. The X-31 was used to study thrust vectoring to enhance close-in air combat maneuvering, while the F-16 MATV was a demonstration of how thrust vectoring could be applied to operational aircraft.
Computational Study of Fluidic Thrust Vectoring using Separation Control in a Nozzle
NASA Technical Reports Server (NTRS)
Deere, Karen; Berrier, Bobby L.; Flamm, Jeffrey D.; Johnson, Stuart K.
2003-01-01
A computational investigation of a two-dimensional nozzle was completed to assess the use of fluidic injection to manipulate flow separation and cause thrust vectoring of the primary jet thrust. The nozzle was designed with a recessed cavity to enhance the throat shifting method of fluidic thrust vectoring. The structured-grid computational fluid dynamics code PAB3D was used to guide the design and analyze over 60 configurations. Nozzle design variables included cavity convergence angle, cavity length, fluidic injection angle, upstream minimum height, aft deck angle, and aft deck shape. All simulations were computed with a static freestream Mach number of 0.05, a nozzle pressure ratio of 3.858, and a fluidic injection flow rate equal to 6 percent of the primary flow rate. Results indicate that the recessed cavity enhances the throat shifting method of fluidic thrust vectoring and allows for greater thrust-vector angles without compromising thrust efficiency.
Extending the length and time scales of Gram-Schmidt Lyapunov vector computations
NASA Astrophysics Data System (ADS)
Costa, Anthony B.; Green, Jason R.
2013-08-01
Lyapunov vectors have found growing interest recently due to their ability to characterize systems out of thermodynamic equilibrium. The computation of orthogonal Gram-Schmidt vectors requires multiplication and QR decomposition of large matrices, which grow as N² with the particle count. This expense has limited such calculations to relatively small systems and short time scales. Here, we detail two implementations of an algorithm for computing Gram-Schmidt vectors. The first is a distributed-memory message-passing method using ScaLAPACK. The second uses the newly released MAGMA library for GPUs. We compare the performance of both codes for Lennard-Jones fluids from N=100 to 1300 between Intel Nehalem/InfiniBand DDR and NVIDIA C2050 architectures. To our best knowledge, these are the largest systems for which the Gram-Schmidt Lyapunov vectors have been computed, and the first time their calculation has been GPU-accelerated. We conclude that Lyapunov vector calculations can be significantly extended in length and time by leveraging the power of GPU-accelerated linear algebra.
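For orientation, the algorithm being scaled up here is the standard QR re-orthonormalization of tangent vectors. A minimal NumPy sketch, using the 2-D Hénon map as a stand-in for the Lennard-Jones dynamics (our illustration, not the authors' ScaLAPACK or MAGMA codes):

```python
# Gram-Schmidt/QR computation of Lyapunov exponents for a toy map.
import numpy as np

def lyapunov_spectrum(step, jacobian, x0, n_steps):
    x, Q = x0, np.eye(len(x0))
    sums = np.zeros(len(x0))
    for _ in range(n_steps):
        x = step(x)
        Q, R = np.linalg.qr(jacobian(x) @ Q)    # re-orthonormalize tangent vectors
        sums += np.log(np.abs(np.diag(R)))      # accumulate local growth rates
    return sums / n_steps

# Henon map; its largest exponent is ~0.42 and the exponents sum to ln(0.3).
a, b = 1.4, 0.3
step = lambda x: np.array([1.0 - a * x[0]**2 + x[1], b * x[0]])
jac  = lambda x: np.array([[-2.0 * a * x[0], 1.0], [b, 0.0]])
print(lyapunov_spectrum(step, jac, np.array([0.1, 0.1]), 50000))
```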
Hu, J H; Wang, Y; Cahill, P T
1997-01-01
This paper reports a multispectral code excited linear prediction (MCELP) method for the compression of multispectral images. Different linear prediction models and adaptation schemes have been compared. The method that uses a forward adaptive autoregressive (AR) model has been proven to achieve a good compromise between performance, complexity, and robustness. This approach is referred to as the MFCELP method. Given a set of multispectral images, the linear predictive coefficients are updated over nonoverlapping three-dimensional (3-D) macroblocks. Each macroblock is further divided into several 3-D microblocks, and the best excitation signal for each microblock is determined through an analysis-by-synthesis procedure. The MFCELP method has been applied to multispectral magnetic resonance (MR) images. To satisfy the high quality requirement for medical images, the error between the original image set and the synthesized one is further coded using a vector quantizer. This method has been applied to images from 26 clinical MR neuro studies (20 slices/study, three spectral bands/slice, 256x256 pixels/band, 12 b/pixel). The MFCELP method provides a significant visual improvement over the discrete cosine transform (DCT) based Joint Photographic Experts Group (JPEG) method, the wavelet transform based embedded zero-tree wavelet (EZW) coding method, and the vector tree (VT) coding method, as well as the multispectral segmented autoregressive moving average (MSARMA) method we developed previously.
NASA Astrophysics Data System (ADS)
Becciolini, Diego; Franzosi, Diogo Buarque; Foadi, Roshan; Frandsen, Mads T.; Hapola, Tuomas; Sannino, Francesco
2015-07-01
We analyze the Large Hadron Collider (LHC) phenomenology of heavy vector resonances with an SU(2)_L × SU(2)_R spectral global symmetry. This symmetry partially protects the electroweak S parameter from large contributions of the vector resonances. The resulting custodial vector model spectrum and interactions with the standard model fields lead to distinct signatures at the LHC in the diboson, dilepton, and associated Higgs channels.
NASA Astrophysics Data System (ADS)
Dean, Cleon E.; Braselton, James P.
2004-05-01
Color-coded and vector-arrow grid representations of the Poynting vector field are used to show the energy flow in and around a fluid-loaded elastic cylindrical shell for both forward- and backward-propagating waves. The present work uses a method adapted from a simpler technique due to Kaduchak and Marston [G. Kaduchak and P. L. Marston, ``Traveling-wave decomposition of surface displacements associated with scattering by a cylindrical shell: Numerical evaluation displaying guided forward and backward wave properties,'' J. Acoust. Soc. Am. 98, 3501-3507 (1995)] to isolate unidirectional energy flows.
Benchmarking GPU and CPU codes for Heisenberg spin glass over-relaxation
NASA Astrophysics Data System (ADS)
Bernaschi, M.; Parisi, G.; Parisi, L.
2011-06-01
We present a set of possible implementations for Graphics Processing Units (GPU) of the over-relaxation technique applied to the 3D Heisenberg spin glass model. The results show that a carefully tuned code can achieve more than 100 GFlops of sustained performance and update a single spin in about 0.6 nanoseconds. A multi-hit technique that exploits the GPU shared memory further reduces this time. These results are compared with those obtained by means of a highly tuned vector-parallel code on latest-generation multi-core CPUs.
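The over-relaxation move itself is short: each spin is reflected about its local molecular field, S' = 2(S·H)H/|H|² − S, an energy-conserving update that parallelizes naturally over a checkerboard of non-interacting sites. A minimal NumPy sketch of one sweep (our illustration of the technique, not the authors' GPU or CPU kernels):

```python
# One checkerboard over-relaxation sweep for 3D Heisenberg spins.
import numpy as np

L = 8                                            # lattice side (even, periodic)
rng = np.random.default_rng(0)
S = rng.normal(size=(L, L, L, 3))
S /= np.linalg.norm(S, axis=-1, keepdims=True)   # unit spins

def local_field(S):
    # Sum of the six nearest neighbours on the periodic cubic lattice.
    return sum(np.roll(S, d, axis=a) for a in range(3) for d in (1, -1))

parity = np.indices((L, L, L)).sum(axis=0) % 2   # checkerboard sublattices
for p in (0, 1):
    H = local_field(S)
    # Reflect S about H: S' = 2 (S.H) H / |H|^2 - S (leaves S.H unchanged).
    refl = 2 * (S * H).sum(-1, keepdims=True) / (H * H).sum(-1, keepdims=True) * H - S
    S = np.where((parity == p)[..., None], refl, S)
```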
Understanding the Cray X1 System
NASA Technical Reports Server (NTRS)
Cheung, Samson
2004-01-01
This paper helps the reader understand the characteristics of the Cray X1 vector supercomputer system, and provides hints and information to enable the reader to port codes to the system. It provides a comparison between the basic performance of the X1 platform and other platforms that are available at NASA Ames Research Center. A set of codes, solving the Laplacian equation with different parallel paradigms, is used to understand some features of the X1 compiler. An example code from the NAS Parallel Benchmarks is used to demonstrate performance optimization on the X1 platform.
Indovina, Paola; Collini, Maddalena; Chirico, Giuseppe; Santini, Maria Teresa
2007-02-20
Hypoxia, monitored through HRE (hypoxia-responsive element) activity, was investigated in MG-63 human osteosarcoma cells grown in monolayer and as very small, three-dimensional tumor spheroids using molecular imaging techniques. MG-63 cells were stably transfected with a vector constructed with multiple copies of the HRE sequence of the human vascular endothelial growth factor (VEGF) gene and with the enhanced green fluorescent protein (EGFP) coding sequence. During hypoxia, when HIF-1alpha (hypoxia-inducible factor-1alpha) is stabilized, the binding of HIF-1 to the HRE sequences of the vector allows the transcription of EGFP and the appearance of fluorescence. Transfected monolayer cells were characterized by flow cytometric analysis in response to various hypoxic conditions, and HIF-1alpha expression in these cells was assessed by Western blotting. Two-photon excitation (TPE) microscopy was then used to examine both MG-63-transfected monolayer cells and spheroids at 2 and 5 days of growth in normoxic conditions. Monolayer cells reveal almost no fluorescence, whereas even very small spheroids (<100 microm) after 2 days of growth contain regions of high fluorescence. For the first time in the literature, at least to our knowledge, it is demonstrated, using highly sensitive and non-perturbing molecular imaging techniques, that three-dimensional cell organization leads to almost immediate HRE activation. This activation of the HRE sequences, which control a wide variety of genes, suggests that monolayer cells and spheroids of the MG-63 cell line have different genes activated and thus diverse functional activities.
NASA Technical Reports Server (NTRS)
Rost, Martin C.; Sayood, Khalid
1991-01-01
A method for efficiently coding natural images using a vector-quantized variable-blocksize transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The selection of a coder for any given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incorporating extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.
Optical image encryption based on real-valued coding and subtracting with the help of QR code
NASA Astrophysics Data System (ADS)
Deng, Xiaopeng
2015-08-01
A novel optical image encryption based on real-valued coding and subtracting is proposed with the help of quick response (QR) code. In the encryption process, the original image to be encoded is first transformed into the corresponding QR code, and then the QR code is encoded into two phase-only masks (POMs) by using basic vector operations. Finally, the absolute values of the real or imaginary parts of the two POMs are chosen as the ciphertexts. In the decryption process, the QR code can be approximately restored by recording the intensity of the subtraction between the ciphertexts, and hence the original image can be retrieved without any quality loss by scanning the restored QR code with a smartphone. Simulation results and actual smartphone-collected results show that the method is feasible and has strong tolerance to noise, phase difference, and the ratio between the intensities of the two decryption light beams.
The Biermann catastrophe of numerical MHD
NASA Astrophysics Data System (ADS)
Graziani, C.; Tzeferacos, P.; Lee, D.; Lamb, D. Q.; Weide, K.; Fatenejad, M.; Miller, J.
2016-05-01
The Biermann battery effect is frequently invoked in cosmic magnetogenesis and studied in high-energy-density laboratory physics experiments. Unfortunately, direct implementation of the Biermann effect in MHD codes is known to produce unphysical magnetic fields at shocks whose value does not converge with resolution. We show that this convergence breakdown is due to naive discretization, which fails to account for the fact that discretized irrotational vector fields have spurious solenoidal components that grow without bound near a discontinuity. We show that careful consideration of the kinetics of ion viscous shocks leads to a formulation of the Biermann effect that gives rise to a convergent algorithm. We note a novel physical effect, a resistive magnetic precursor, in which Biermann-generated field in the shock "leaks" resistively upstream. The effect appears to be potentially observable in experiments at laser facilities.
Urbanization, land tenure security and vector-borne Chagas disease.
Levy, Michael Z; Barbu, Corentin M; Castillo-Neyra, Ricardo; Quispe-Machaca, Victor R; Ancca-Juarez, Jenny; Escalante-Mejia, Patricia; Borrini-Mayori, Katty; Niemierko, Malwina; Mabud, Tarub S; Behrman, Jere R; Naquira-Velarde, Cesar
2014-08-22
Modern cities represent one of the fastest growing ecosystems on the planet. Urbanization occurs in stages; each stage characterized by a distinct habitat that may be more or less susceptible to the establishment of disease vector populations and the transmission of vector-borne pathogens. We performed longitudinal entomological and epidemiological surveys in households along a 1900 × 125 m transect of Arequipa, Peru, a major city of nearly one million inhabitants, in which the transmission of Trypanosoma cruzi, the aetiological agent of Chagas disease, by the insect vector Triatoma infestans, is an ongoing problem. The transect spans a cline of urban development from established communities to land invasions. We find that the vector is tracking the development of the city, and the parasite, in turn, is tracking the dispersal of the vector. New urbanizations are free of vector infestation for decades. T. cruzi transmission is very recent and concentrated in more established communities. The increase in land tenure security during the course of urbanization, if not accompanied by reasonable and enforceable zoning codes, initiates an influx of construction materials, people and animals that creates fertile conditions for epidemics of some vector-borne diseases. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
Reconstruction of an 8-lead surface ECG from two subcutaneous ICD vectors.
Wilson, David G; Cronbach, Peter L; Panfilo, D; Greenhut, Saul E; Stegemann, Berthold P; Morgan, John M
2017-06-01
Techniques exist which allow surface ECGs to be reconstructed from reduced lead sets. We aimed to reconstruct an 8-lead ECG from two independent S-ICD sensing electrode vectors as proof of this principle. Participants with ICDs (N=61) underwent 3-minute ECGs using a TMSi Porti7 multi-channel signal recorder (TMS International, The Netherlands) with electrodes in the standard S-ICD and 12-lead positions. Participants were randomised to either a training (N=31) or validation (N=30) group. The transformation used was a linear combination of the 2 independent S-ICD vectors to each of the 8 independent leads of the 12-lead ECG, with coefficients selected that minimized the root mean square error (RMSE) between recorded and derived ECGs when applied to the training group. The transformation was then applied to the validation group and agreement between the recorded and derived lead pairs was measured by Pearson correlation coefficient (r) and normalised RMSE (NRMSE). In total, 27 patients with complete data sets were included in the validation set, consisting of 57,888 data points from 216 full lead sets. The distributions of r and NRMSE were skewed. Mean r=0.770 (SE 0.024), median r=0.925. NRMSE mean=0.233 (SE 0.015), median=0.171. We have demonstrated that the reconstruction of an 8-lead ECG from two S-ICD vectors is possible. If perfected, the ability to generate accurate multi-lead surface ECG data from an S-ICD would potentially allow recording and review of clinical arrhythmias at follow-up. Copyright © 2017 Elsevier B.V. All rights reserved.
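Since the coefficients are chosen to minimize RMSE on the training group, the transformation amounts to an ordinary least-squares fit per lead. A hedged sketch of that step (array names hypothetical; the intercept column is our assumption, not stated in the abstract):

```python
# Least-squares derivation of 8 surface leads from 2 S-ICD sensing vectors.
import numpy as np

def fit_transform(sicd_train, leads_train):
    # sicd_train: (samples, 2) S-ICD signals; leads_train: (samples, 8) ECG leads.
    X = np.column_stack([sicd_train, np.ones(len(sicd_train))])
    coef, *_ = np.linalg.lstsq(X, leads_train, rcond=None)  # minimizes RMSE
    return coef                                             # shape (3, 8)

def derive_leads(sicd, coef):
    return np.column_stack([sicd, np.ones(len(sicd))]) @ coef
```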
A CRISPR Path to Engineering New Genetic Mouse Models for Cardiovascular Research
Miano, Joseph M.; Zhu, Qiuyu Martin; Lowenstein, Charles J.
2016-01-01
Previous efforts to target the mouse genome for the addition, subtraction, or substitution of biologically informative sequences required complex vector design and a series of arduous steps only a handful of labs could master. The facile and inexpensive clustered regularly interspaced short palindromic repeats (CRISPR) method has now superseded traditional means of genome modification such that virtually any lab can quickly assemble reagents for developing new mouse models for cardiovascular research. Here we briefly review the history of CRISPR in prokaryotes, highlighting major discoveries leading to its formulation for genome modification in the animal kingdom. Core components of CRISPR technology are reviewed and updated. Practical pointers for two-component and three-component CRISPR editing are summarized with a number of applications in mice including frameshift mutations, deletion of enhancers and non-coding genes, nucleotide substitution of protein-coding and gene regulatory sequences, incorporation of loxP sites for conditional gene inactivation, and epitope tag integration. Genotyping strategies are presented and topics of genetic mosaicism and inadvertent targeting discussed. Finally, clinical applications and ethical considerations are addressed as the biomedical community eagerly embraces this astonishing innovation in genome editing to tackle previously intractable questions. PMID:27102963
Vector-Borne Bacterial Plant Pathogens: Interactions with Hemipteran Insects and Plants
Perilla-Henao, Laura M.; Casteel, Clare L.
2016-01-01
Hemipteran insects are devastating pests of crops due to their wide host range, rapid reproduction, and ability to transmit numerous plant-infecting pathogens as vectors. While the field of plant–virus–vector interactions has flourished in recent years, plant–bacteria–vector interactions remain poorly understood. Leafhoppers and psyllids are by far the most important vectors of bacterial pathogens, yet there are still significant gaps in our understanding of their feeding behavior, salivary secretions, and plant responses as compared to important viral vectors, such as whiteflies and aphids. Even with an incomplete understanding of plant–bacteria–vector interactions, some common themes have emerged: (1) all known vector-borne bacteria share the ability to propagate in the plant and insect host; (2) particular hemipteran families appear to be incapable of transmitting vector-borne bacteria; (3) all known vector-borne bacteria have highly reduced genomes and coding capacity, resulting in host-dependence; and (4) vector-borne bacteria encode proteins that are essential for colonization of specific hosts, though only a few types of proteins have been investigated. Here, we review the current knowledge on important vector-borne bacterial pathogens, including Xylella fastidiosa, Spiroplasma spp., Liberibacter spp., and ‘Candidatus Phytoplasma spp.’. We then highlight recent approaches used in the study of vector-borne bacteria. Finally, we discuss the application of this knowledge for control and future directions that will need to be addressed in the field of vector–plant–bacteria interactions. PMID:27555855
NASA Astrophysics Data System (ADS)
Hui, L.; Behr, F.-J.; Schröder, D.
2006-10-01
Digital geospatial data can now be disseminated to mobile devices such as PDAs (personal digital assistants) and smartphones. Mobile devices which support J2ME (Java 2 Micro Edition) offer users and developers an open interface that they can use to develop or download software according to their own demands. Currently, WMS (Web Map Service) can serve not only traditional raster images but also vector images. SVGT (Scalable Vector Graphics Tiny) is a subset of SVG (Scalable Vector Graphics); because of its precise vector information, original styling, and small file size, the SVGT format is well suited to geographic mapping, especially for mobile devices with limited network bandwidth. This paper describes the development of a cartographic client for mobile devices using SVGT and J2ME technology. A mobile device is simulated on a desktop computer for a series of tests with a WMS, for example, sending requests, receiving the response data, and then displaying both vector- and raster-format images. The analysis and design of the system structure, such as the user interface and code structure, are discussed; the limitations of mobile devices must be taken into consideration for such applications. The parsing of the XML document received from the WMS after the GetCapabilities request and the visual rendering of SVGT and PNG (Portable Network Graphics) images are important implementation issues. Finally, the client was tested successfully on Nokia S40/S60 mobile phones.
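For context, the requests such a client sends are fixed by the WMS standard; the sketch below composes a GetMap URL in Python (the paper's client is J2ME Java, and the server URL here is a placeholder).

```python
# Composing a WMS 1.1.1 GetMap request; FORMAT selects SVGT or raster output.
from urllib.parse import urlencode

def getmap_url(base, layer, bbox, size, fmt="image/svg+xml"):
    params = {
        "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
        "LAYERS": layer, "SRS": "EPSG:4326", "STYLES": "",
        "BBOX": ",".join(map(str, bbox)),
        "WIDTH": size[0], "HEIGHT": size[1],
        "FORMAT": fmt,                     # e.g. image/png for raster maps
    }
    return base + "?" + urlencode(params)

print(getmap_url("http://example.org/wms", "roads", (9.0, 48.7, 9.3, 48.9), (240, 320)))
```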
Comment on "Chiral gauge field and axial anomaly in a Weyl semimetal"
NASA Astrophysics Data System (ADS)
Zhang, Kai; Zhang, Erhu; Zhang, Shengli
2017-12-01
In Liu et al. [Phys. Rev. B 87, 235306 (2013), 10.1103/PhysRevB.87.235306], the authors obtain that the cross coupling between the vector gauge field and the chiral gauge field can lead to an anomaly of the vector current. We demonstrate that this anomaly is not a physical effect. On one hand, it can be removed by a proper regularization. On the other hand, it leads to unjustifiable results: the breaking of the vector gauge symmetry and an ambiguous boundary current. Moreover, the effects associated with the anomaly of the vector current are understood by the random phase approximation (RPA) in the paper we comment on. We point out that the RPA cannot describe the effects resulting from the quantum anomaly.
[Construction of the superantigen SEA transfected laryngocarcinoma cells].
Ji, Xiaobin; Jingli, J V; Liu, Qicai; Xie, Jinghua
2013-04-01
To construct a eukaryotic expression vector containing the superantigen staphylococcal enterotoxin A (SEA) gene, and to identify its expression in laryngeal squamous carcinoma cells. The full-length SEA gene fragment was obtained from the genome of Staphylococcus strain ATCC13565, a standard SEA-producing strain. The coding sequence of SEA was artificially synthesized and then subcloned into the eukaryotic expression vector pIRES-EGFP. The recombinant plasmid pSEA-IRES-EGFP was constructed and transfected into laryngocarcinoma Hep-2 cells. Resistant clones were screened with G418, and the expression of SEA in laryngocarcinoma cells was identified by ELISA and RT-PCR. Flanking-sequence analysis confirmed that the SEA sequence was fully identical to the coding sequence of the standard Staphylococcus strain ATCC13565 in GenBank. After transfection of the recombinant plasmid into laryngocarcinoma cells, resistant clones were obtained after two weeks of screening and selected. The specific gene fragment was obtained by RT-PCR amplification, and ELISA confirmed that the SEA protein content in the cell-culture supernatant reached approximately the picogram level. The recombinant eukaryotic expression vector containing the superantigen SEA gene was successfully constructed and is capable of effective expression and continued secretion of SEA protein in laryngocarcinoma Hep-2 cells.
A simple device to illustrate the Einthoven triangle.
Jin, Benjamin E; Wulff, Heike; Widdicombe, Jonathan H; Zheng, Jie; Bers, Donald M; Puglisi, Jose L
2012-12-01
The Einthoven triangle is central to the field of electrocardiography, but the concept of cardiac vectors is often a difficult notion for students to grasp. To illustrate this principle, we constructed a device that recreates the conditions of an ECG reading using a battery to simulate the electrical vector of the heart and three voltmeters for the main electrocardiographic leads. Requiring minimal construction with low cost, this device provides hands-on practice that enables students to rediscover the principles of the Einthoven triangle, namely, that the direction of the cardiac dipole can be predicted from the deflections in any two leads and that lead I + lead III = lead II independent of the position of the heart's electrical vector. We built a total of 6 devices for classes of 30 students and tested them in the first-year Human Physiology course at the University of California-Davis School of Medicine. Combined with traditional demonstrations with ECG machines, this equipment demonstrated its ability to help medical students obtain a solid foundation in the basic principles of electrocardiography.
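The relation the device makes tangible is plain vector projection: each limb lead reads the dipole's component along its axis, and the lead-II identity follows for any dipole. A short numeric check (our illustration, not part of the device):

```python
# Einthoven's law, lead I + lead III = lead II, for an arbitrary dipole.
import numpy as np

def lead_voltages(dipole):
    angles = np.deg2rad([0.0, 60.0, 120.0])           # axes of leads I, II, III
    axes = np.stack([np.cos(angles), np.sin(angles)], axis=1)
    return axes @ dipole                              # projections onto the axes

I, II, III = lead_voltages(np.array([0.7, -1.2]))     # any cardiac vector
assert np.isclose(I + III, II)
```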
lncRScan-SVM: A Tool for Predicting Long Non-Coding RNAs Using Support Vector Machine.
Sun, Lei; Liu, Hui; Zhang, Lin; Meng, Jia
2015-01-01
Functional long non-coding RNAs (lncRNAs) have been bringing novel insight into biological study; however, it is still not trivial to accurately distinguish lncRNA transcripts (LNCTs) from protein-coding ones (PCTs). As various information and data about lncRNAs have been preserved by previous studies, it is appealing to develop novel methods to identify lncRNAs more accurately. Our method lncRScan-SVM aims at classifying PCTs and LNCTs using a support vector machine (SVM). The gold-standard datasets for lncRScan-SVM model training, lncRNA prediction, and method comparison were constructed according to the GENCODE gene annotations of human and mouse, respectively. By integrating features derived from gene structure, transcript sequence, potential codon sequence, and conservation, lncRScan-SVM outperforms other approaches, as evaluated by several criteria such as sensitivity, specificity, accuracy, Matthews correlation coefficient (MCC), and area under curve (AUC). In addition, several known human lncRNA datasets were assessed using lncRScan-SVM. LncRScan-SVM is an efficient tool for predicting lncRNAs, and it is quite useful for current lncRNA study.
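Schematically, the classifier is a binary SVM over per-transcript features scored with the criteria listed above. The sketch below shows that shape with scikit-learn on random placeholder features and labels; it is not the published lncRScan-SVM tool.

```python
# Toy SVM classifier for protein-coding vs long non-coding transcripts.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import matthews_corrcoef, roc_auc_score

rng = np.random.default_rng(0)
X = rng.random((200, 4))            # stand-ins for structure/sequence features
y = rng.integers(0, 2, 200)         # 1 = PCT, 0 = LNCT (toy labels)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", probability=True).fit(Xtr, ytr)
print("MCC:", matthews_corrcoef(yte, clf.predict(Xte)))
print("AUC:", roc_auc_score(yte, clf.predict_proba(Xte)[:, 1]))
```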
Practical somewhat-secure quantum somewhat-homomorphic encryption with coherent states
NASA Astrophysics Data System (ADS)
Tan, Si-Hui; Ouyang, Yingkai; Rohde, Peter P.
2018-04-01
We present a scheme for implementing homomorphic encryption on coherent states encoded using phase-shift keys. The encryption operations require only rotations in phase space, which commute with computations in the code space performed via passive linear optics, and with generalized nonlinear phase operations that are polynomials of the photon-number operator in the code space. This encoding scheme can thus be applied to any computation with coherent-state inputs, and the computation proceeds via a combination of passive linear optics and generalized nonlinear phase operations. An example of such a computation is matrix multiplication, whereby a vector representing coherent-state amplitudes is multiplied by a matrix representing a linear optics network, yielding a new vector of coherent-state amplitudes. By finding an orthogonal partitioning of the support of our encoded states, we quantify the security of our scheme via the indistinguishability of the encrypted code words. While we focus on coherent-state encodings, we expect that this phase-key encoding technique could apply to any continuous-variable computation scheme where the phase-shift operator commutes with the computation.
Cohen, Aaron M
2008-01-01
We participated in the i2b2 smoking status classification challenge task. The purpose of this task was to evaluate the ability of systems to automatically identify patient smoking status from discharge summaries. Our submission included several techniques that we compared and studied, including hot-spot identification, zero-vector filtering, inverse class frequency weighting, error-correcting output codes, and post-processing rules. We evaluated our approaches using the same methods as the i2b2 task organizers, using micro- and macro-averaged F1 as the primary performance metric. Our best performing system achieved a micro-F1 of 0.9000 on the test collection, equivalent to the best performing system submitted to the i2b2 challenge. Hot-spot identification, zero-vector filtering, classifier weighting, and error-correcting output coding contributed additively to increased performance, with hot-spot identification having by far the largest positive effect. High performance on automatic identification of patient smoking status from discharge summaries is achievable with the efficient and straightforward machine learning techniques studied here.
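Of the techniques listed, error-correcting output coding is the easiest to show in isolation: each class is assigned a binary code word and one classifier is trained per code bit. A minimal scikit-learn sketch on synthetic data (not the authors' i2b2 system):

```python
# Error-correcting output codes wrapped around a linear SVM.
from sklearn.multiclass import OutputCodeClassifier
from sklearn.svm import LinearSVC
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=300, n_classes=3, n_informative=6,
                           random_state=0)
ecoc = OutputCodeClassifier(LinearSVC(), code_size=2, random_state=0)
print(ecoc.fit(X[:200], y[:200]).score(X[200:], y[200:]))
```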
Cloning of cellulase genes from acidothermus cellulolyticus
Lastick, deceased, Stanley M.; Tucker, Melvin P.; Grohmann, Karel
1996-01-01
A process is described for moving fragments that code for cellulase activity from the genome of A. cellulolyticus to several plasmid vectors and the subsequent expression of active cellulase activity in E. coli.
A parallel-vector algorithm for rapid structural analysis on high-performance computers
NASA Technical Reports Server (NTRS)
Storaasli, Olaf O.; Nguyen, Duc T.; Agarwal, Tarun K.
1990-01-01
A fast, accurate Choleski method for the solution of symmetric systems of linear equations is presented. This direct method is based on a variable-band storage scheme and takes advantage of column heights to reduce the number of operations in the Choleski factorization. The method employs parallel computation in the outermost DO-loop and vector computation via the 'loop unrolling' technique in the innermost DO-loop. The method avoids computations with zeros outside the column heights, and as an option, zeros inside the band. The close relationship between the Choleski and Gauss elimination methods is examined. The minor changes required to convert the Choleski code to a Gauss code to solve non-positive-definite symmetric systems of equations are identified. The results for two large-scale structural analyses performed on supercomputers demonstrate the accuracy and speed of the method.
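The solver itself is Fortran tuned for vector supercomputers, but the core saving of band-limited Cholesky is easy to show. As a small stand-in (ours, not the paper's code), SciPy's banded Cholesky likewise factors and solves without touching entries outside the band:

```python
# Banded Cholesky factor-and-solve for a tridiagonal SPD system.
import numpy as np
from scipy.linalg import cholesky_banded, cho_solve_banded

n = 6
ab = np.zeros((2, n))        # upper banded storage: row 0 superdiagonal,
ab[0, 1:] = -1.0             # row 1 the main diagonal
ab[1, :] = 2.0               # matrix is tridiag(-1, 2, -1), positive definite

c = cholesky_banded(ab)                       # factorization stays in the band
x = cho_solve_banded((c, False), np.ones(n))  # forward/back substitution
print(x)
```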
Automation of the guiding center expansion
NASA Astrophysics Data System (ADS)
Burby, J. W.; Squire, J.; Qin, H.
2013-07-01
We report on the use of the recently developed Mathematica package VEST (Vector Einstein Summation Tools) to automatically derive the guiding center transformation. Our Mathematica code employs a recursive procedure to derive the transformation order-by-order. This procedure has several novel features. (1) It is designed to allow the user to easily explore the guiding center transformation's numerous non-unique forms or representations. (2) The procedure proceeds entirely in Cartesian position and velocity coordinates, thereby producing manifestly gyrogauge-invariant results; the commonly used perpendicular unit vector fields e1, e2 are never even introduced. (3) It is easy to apply in the derivation of higher-order contributions to the guiding center transformation without fear of human error. Our code therefore stands as a useful tool for exploring subtle issues related to the physics of toroidal momentum conservation in tokamaks.
Vector Sum Excited Linear Prediction (VSELP) speech coding at 4.8 kbps
NASA Technical Reports Server (NTRS)
Gerson, Ira A.; Jasiuk, Mark A.
1990-01-01
Code Excited Linear Prediction (CELP) speech coders exhibit good performance at data rates as low as 4800 bps. The major drawback to CELP-type coders is their large computational requirements. The Vector Sum Excited Linear Prediction (VSELP) speech coder utilizes a codebook with a structure which allows for a very efficient search procedure. Other advantages of the VSELP codebook structure are discussed, and a detailed description of a 4.8 kbps VSELP coder is given. This coder is an improved version of the VSELP algorithm, which finished first in the NSA's evaluation of 4.8 kbps speech coders. The coder uses a subsample-resolution single-tap long-term predictor, a single VSELP excitation codebook, a novel gain quantizer which is robust to channel errors, and a new adaptive pre/postfilter arrangement.
Expression of short hairpin RNAs using the compact architecture of retroviral microRNA genes.
Burke, James M; Kincaid, Rodney P; Aloisio, Francesca; Welch, Nicole; Sullivan, Christopher S
2017-09-29
Short hairpin RNAs (shRNAs) are effective in generating stable repression of gene expression. RNA polymerase III (RNAP III) type III promoters (U6 or H1) are typically used to drive shRNA expression. While useful for some knockdown applications, the robust expression of U6/H1-driven shRNAs can induce toxicity and generate heterogeneous small RNAs with undesirable off-target effects. Additionally, typical U6/H1 promoters encompass the majority of the ∼270 base pairs (bp) of vector space required for shRNA expression. This can limit the efficacy and/or number of delivery vector options, particularly when delivery of multiple gene/shRNA combinations is required. Here, we develop a compact shRNA (cshRNA) expression system based on retroviral microRNA (miRNA) gene architecture that uses RNAP III type II promoters. We demonstrate that cshRNAs coded from as little as 100 bp of total coding space can precisely generate small interfering RNAs (siRNAs) that are active in the RNA-induced silencing complex (RISC). We provide an algorithm with a user-friendly interface to design cshRNAs for desired target genes. This cshRNA expression system reduces the coding space required for shRNA expression by >2-fold as compared to the typical U6/H1 promoters, which may facilitate therapeutic RNAi applications where delivery vector space is limiting. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
Application of Polarization to the MODIS Aerosol Retrieval Over Land
NASA Technical Reports Server (NTRS)
Levy, Robert C.; Remer, Lorraine R.; Kaufman, Yoram J.
2004-01-01
Reflectance measurements in the visible and infrared wavelengths, from the Moderate Resolution Imaging Spectroradiometer (MODIS), are used to derive aerosol optical thicknesses (AOT) and aerosol properties over land surfaces. The measured spectral reflectance is compared with lookup tables, containing theoretical reflectance calculated by radiative transfer (RT) code. Specifically, this RT code calculates top of the atmosphere (TOA) intensities based on a scalar treatment of radiation, neglecting the effects of polarization. In the red and near infrared (NIR) wavelengths the use of the scalar RT code is of sufficient accuracy to model TOA reflectance. However, in the blue, molecular and aerosol scattering dominate the TOA signal. Here, polarization effects can be large, and should be included in the lookup table derivation. Using a RT code that allows for both vector and scalar calculations, we examine the reflectance differences at the TOA, with and without polarization. We find that the differences in blue channel TOA reflectance (vector - scalar) may reach values of 0.01 or greater, depending on the sun/surface/sensor scattering geometry. Reflectance errors of this magnitude translate to AOT differences of 0.1, which is a very large error, especially when the actual AOT is low. As a result of this study, the next version of aerosol retrieval from MODIS over land will include polarization.
Development of a GPU Compatible Version of the Fast Radiation Code RRTMG
NASA Astrophysics Data System (ADS)
Iacono, M. J.; Mlawer, E. J.; Berthiaume, D.; Cady-Pereira, K. E.; Suarez, M.; Oreopoulos, L.; Lee, D.
2012-12-01
The absorption of solar radiation and emission/absorption of thermal radiation are crucial components of the physics that drive Earth's climate and weather. Therefore, accurate radiative transfer calculations are necessary for realistic climate and weather simulations. Efficient radiation codes have been developed for this purpose, but their accuracy requirements still necessitate that as much as 30% of the computational time of a GCM is spent computing radiative fluxes and heating rates. The overall computational expense constitutes a limitation on a GCM's predictive ability if it becomes an impediment to adding new physics to or increasing the spatial and/or vertical resolution of the model. The emergence of Graphics Processing Unit (GPU) technology, which will allow the parallel computation of multiple independent radiative calculations in a GCM, will lead to a fundamental change in the competition between accuracy and speed. Processing time previously consumed by radiative transfer will now be available for the modeling of other processes, such as physics parameterizations, without any sacrifice in the accuracy of the radiative transfer. Furthermore, fast radiation calculations can be performed much more frequently and will allow the modeling of radiative effects of rapid changes in the atmosphere. The fast radiation code RRTMG, developed at Atmospheric and Environmental Research (AER), is utilized operationally in many dynamical models throughout the world. We will present the results from the first stage of an effort to create a version of the RRTMG radiation code designed to run efficiently in a GPU environment. This effort will focus on the RRTMG implementation in GEOS-5. RRTMG has an internal pseudo-spectral vector of length of order 100 that, when combined with the much greater length of the global horizontal grid vector from which the radiation code is called in GEOS-5, makes RRTMG/GEOS-5 particularly suited to achieving a significant speed improvement through GPU technology. This large number of independent cases will allow us to take full advantage of the computational power of the latest GPUs, ensuring that all thread cores in the GPU remain active, a key criterion for obtaining significant speedup. The CUDA (Compute Unified Device Architecture) Fortran compiler developed by PGI and Nvidia will allow us to construct this parallel implementation on the GPU while remaining in the Fortran language. This implementation will scale very well across various CUDA-supported GPUs such as the recently released Fermi Nvidia cards. We will present the computational speed improvements of the GPU-compatible code relative to the standard CPU-based RRTMG with respect to a very large and diverse suite of atmospheric profiles. This suite will also be utilized to demonstrate the minimal impact of the code restructuring on the accuracy of radiation calculations. The GPU-compatible version of RRTMG will be directly applicable to future versions of GEOS-5, but it is also likely to provide significant associated benefits for other GCMs that employ RRTMG.
NASA Technical Reports Server (NTRS)
Chen, Y. S.; Farmer, R. C.
1992-01-01
A particulate two-phase flow CFD model was developed based on the FDNS code, which is a pressure-based predictor plus multi-corrector Navier-Stokes flow solver. Turbulence models with compressibility correction and wall function models were employed as submodels. A finite-rate chemistry model was used for reacting flow simulation. For particulate two-phase flow simulations, a Eulerian-Lagrangian solution method using an efficient implicit particle trajectory integration scheme was developed in this study. Effects of particle-gas reaction and particle size change due to agglomeration or fragmentation were not considered in this investigation. At the onset of the present study, a two-dimensional version of FDNS which had been modified to treat Lagrangian tracking of particles (FDNS-2DEL) had already been written and was operational. The FDNS-2DEL code was too slow for practical use, mainly because it had not been written in a form amenable to vectorization on the Cray, nor was the full three-dimensional form of FDNS utilized. The specific objective of this study was to reorder the calculations into long single arrays for automatic vectorization on the Cray and to implement the full three-dimensional version of FDNS to produce the FDNS-3DEL code. Since the FDNS-2DEL code was slow, a very limited number of test cases had been run with it. This study was also intended to increase the number of cases simulated to verify and improve, as necessary, the particle tracking methodology coded in FDNS.
Yoshida, Kimiko; Goto, Naoko; Ohnami, Shumpei; Aoki, Kazunori
2012-01-01
The targeting of gene transfer at the cell-entry level is one of the most attractive challenges in vector development. However, attempts to redirect adenovirus vectors to alternative receptors by engineering the capsid-coding region have shown limited success, because the proper targeting ligands on the cells of interest are generally unknown. To overcome this limitation, we have constructed a random peptide library displayed on the adenoviral fiber knob, and have successfully selected targeted vectors by screening the library on cancer cell lines in vitro. The infection of targeted vectors was considered to be mediated by specific receptors on target cells. However, the expression levels and kinds of cell surface receptors may be substantially different between in vitro culture and in vivo tumor tissue. Here, we screened the peptide display-adenovirus library in the peritoneal dissemination model of AsPC-1 pancreatic cancer cells. The vector displaying a selected peptide (PFWSGAV) showed higher infectivity in the AsPC-1 peritoneal tumors but not in organs and other peritoneal tumors as compared with a non-targeted vector. Furthermore, the infectivity of the PFWSGAV-displaying vector for AsPC-1 peritoneal tumors was significantly higher than that of a vector displaying a peptide selected by in vitro screening, indicating the usefulness of in vivo screening in exploring the targeting vectors. This vector-screening system can facilitate the development of targeted adenovirus vectors for a variety of applications in medicine. PMID:23029088
Non-coaxial superposition of vector vortex beams.
Aadhi, A; Vaity, Pravin; Chithrabhanu, P; Reddy, Salla Gangi; Prabakar, Shashi; Singh, R P
2016-02-10
Vector vortex beams are classified into four types depending upon spatial variation in their polarization vector. We have generated all four of these types of vector vortex beams by using a modified polarization Sagnac interferometer with a vortex lens. Further, we have studied the non-coaxial superposition of two vector vortex beams. It is observed that the superposition of two vector vortex beams with same polarization singularity leads to a beam with another kind of polarization singularity in their interaction region. The results may be of importance in ultrahigh security of the polarization-encrypted data that utilizes vector vortex beams and multiple optical trapping with non-coaxial superposition of vector vortex beams. We verified our experimental results with theory.
Vector and Axial-Vector Current Correlators Within the Instanton Model of QCD Vacuum
NASA Astrophysics Data System (ADS)
Dorokhov, A. E.
2005-08-01
The pion electric polarizability, α_E^{π±}, the leading-order hadronic contribution to the muon anomalous magnetic moment, a_μ^{hvp(1)}, and the ratio of the V − A and V + A correlators are found within the instanton model of the QCD vacuum. The results are compared with phenomenological estimates of these quantities from the ALEPH and OPAL data on vector and axial-vector spectral densities.
A vector scanning processing technique for pulsed laser velocimetry
NASA Technical Reports Server (NTRS)
Wernet, Mark P.; Edwards, Robert V.
1989-01-01
Pulsed-laser-sheet velocimetry yields two-dimensional velocity vectors across an extended planar region of a flow. Current processing techniques offer high-precision (1-percent) velocity estimates, but can require hours of processing time on specialized array processors. Sometimes, however, a less accurate (about 5 percent) data-reduction technique which also gives unambiguous velocity vector information is acceptable. Here, a direct space-domain processing technique is described and shown to be far superior to previous methods in achieving these objectives. It uses a novel data coding and reduction technique and has no 180-deg directional ambiguity. A complex convection vortex flow was recorded and completely processed in under 2 min on an 80386-based PC, producing a two-dimensional velocity-vector map of the flowfield. Pulsed-laser velocimetry data can thus be reduced quickly and reasonably accurately, without specialized array processing hardware.
CFD code evaluation for internal flow modeling
NASA Technical Reports Server (NTRS)
Chung, T. J.
1990-01-01
Research on computational fluid dynamics (CFD) code evaluation, with emphasis on supercomputing in reacting flows, is discussed. Advantages of unstructured grids, multigrids, adaptive methods, improved flow solvers, vector processing, parallel processing, and reduction of memory requirements are discussed. As examples, applications of supercomputing to the reacting-flow Navier-Stokes equations, including shock waves and turbulence, and to combustion instability problems associated with solid and liquid propellants are included. Evaluation of codes developed by other organizations is not included. Instead, the basic criteria for accuracy and efficiency have been established, and some applications to rocket combustion have been made. Research toward an ultimate goal, the most accurate and efficient CFD code, is in progress and will continue for years to come.
Equilibrium Spline Interface (ESI) for magnetic confinement codes
NASA Astrophysics Data System (ADS)
Li, Xujing; Zakharov, Leonid E.
2017-12-01
A compact and comprehensive interface between magneto-hydrodynamic (MHD) equilibrium codes and gyro-kinetic, particle orbit, MHD stability, and transport codes is presented. Its irreducible set of equilibrium data consists of three (in the 2-D case with occasionally one extra in the 3-D case) functions of coordinates and four 1-D radial profiles together with their first and mixed derivatives. The C reconstruction routines, accessible also from FORTRAN, allow the calculation of basis functions and their first derivatives at any position inside the plasma and in its vicinity. After this all vector fields and geometric coefficients, required for the above mentioned types of codes, can be calculated using only algebraic operations with no further interpolation or differentiation.
Forman, Michael A; Young, Derek
2012-09-18
Examples of methods for generating data based on a communications channel are described. In one such example, a processing unit may generate a first vector representation based in part on at least two characteristics of a communications channel. A constellation having at least two dimensions may be addressed with the first vector representation to identify a first symbol associated with the first vector representation. The constellation represents a plurality of regions, each region associated with a respective symbol. The symbol may be used to generate data, which may be stored in an electronic storage medium and used as a cryptographic key or as a spreading code or hopping sequence in a modulation technique.
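The addressing step can be pictured as quantizing the measured vector into one of the constellation's regions and emitting that region's symbol. A sketch under those assumptions (an illustration, not the patented method; the thresholds and symbol numbering are arbitrary):

```python
# Mapping a 2-D channel-measurement vector to the symbol of its region.
import numpy as np

def vector_to_symbol(v, edges_x, edges_y):
    i = np.searchsorted(edges_x, v[0])      # region index along dimension 1
    j = np.searchsorted(edges_y, v[1])      # region index along dimension 2
    return i * (len(edges_y) + 1) + j       # one symbol per region

edges = np.array([-1.0, 0.0, 1.0])          # 3 thresholds -> 4 regions per axis
symbol = vector_to_symbol(np.array([0.3, -1.4]), edges, edges)
# Symbols from successive measurements could be concatenated into key
# material or into a spreading code / hopping sequence.
print(symbol)
```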
A border-ownership model based on computational electromagnetism.
Zainal, Zaem Arif; Satoh, Shunji
2018-03-01
The mathematical relation between a vector electric field and its corresponding scalar potential field is useful to formulate computational problems of lower/middle-order visual processing, specifically related to the assignment of borders to the side of the object: so-called border ownership (BO). BO coding is a key process for extracting objects from the background, allowing one to organize a cluttered scene. We propose that the problem is solvable simultaneously by application of a theorem of electromagnetism, i.e., that conservative vector fields have zero rotation, or "curl." We hypothesize that (i) the BO signal is definable as a vector electric field with arrowheads pointing to the inner side of perceived objects, and (ii) its corresponding scalar field carries information related to the perceived order in depth of occluding/occluded objects. A simple model was developed based on this computational theory. Model results qualitatively agree with the object-side selectivity of BO-coding neurons and with perceptions of object order. The model update rule can be reproduced as a plausible neural network that presents new interpretations of existing physiological results. Results of this study also suggest that T-junction detectors are unnecessary to calculate depth order. Copyright © 2017 Elsevier Ltd. All rights reserved.
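The analogy can be made concrete in a few lines: build a scalar potential from an object mask, take its gradient, and the resulting field is curl-free by construction, with border vectors pointing toward the owning (interior) side. A toy sketch of that reading (ours, not the authors' model; SciPy's Gaussian filter is assumed for smoothing):

```python
# Curl-free "border ownership" field as the gradient of a scalar potential.
import numpy as np
from scipy.ndimage import gaussian_filter

yy, xx = np.mgrid[:64, :64]
img = ((yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2).astype(float)  # one object

phi = gaussian_filter(img, sigma=3.0)   # scalar potential (depth-order proxy)
gy, gx = np.gradient(phi)               # BO-like vectors point into the object

# curl(grad phi) vanishes up to discretization error:
curl = np.gradient(gx, axis=0) - np.gradient(gy, axis=1)
print(np.abs(curl).max())
```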
The organization of conspecific face space in nonhuman primates
Parr, Lisa A.; Taubert, Jessica; Little, Anthony C.; Hancock, Peter J. B.
2013-01-01
Humans and chimpanzees demonstrate numerous cognitive specializations for processing faces, but comparative studies with monkeys suggest that these may be the result of recent evolutionary adaptations. The present study utilized the novel approach of face space, a powerful theoretical framework used to understand the representation of face identity in humans, to further explore species differences in face processing. According to the theory, faces are represented by vectors in a multidimensional space, the centre of which is defined by an average face. Each dimension codes features important for describing a face’s identity, and vector length codes the feature’s distinctiveness. Chimpanzees and rhesus monkeys discriminated male and female conspecifics’ faces, rated by humans for their distinctiveness, using a computerized task. Multidimensional scaling analyses showed that the organization of face space was similar between humans and chimpanzees. Distinctive faces had the longest vectors and were the easiest for chimpanzees to discriminate. In contrast, distinctiveness did not correlate with the performance of rhesus monkeys. The feature dimensions for each species’ face space were visualized and described using morphing techniques. These results confirm species differences in the perceptual representation of conspecific faces, which are discussed within an evolutionary framework. PMID:22670823
NASA Astrophysics Data System (ADS)
Fulbright, Jon; Anderson, Samuel; Lei, Ning; Efremova, Boryana; Wang, Zhipeng; McIntire, Jeffrey; Chiang, Kwofu; Xiong, Xiaoxiong
2014-11-01
Due to a software error, the solar and lunar vectors reported in the on-board calibrator intermediate product (OBC-IP) files for SNPP VIIRS are incorrect. The magnitude of the error is about 0.2 degree and is increasing by about 0.01 degree per year. This error, although small, has an effect on the radiometric calibration of the reflective solar bands (RSB) because accurate solar angles are required for calculating the screen transmission functions and for calculating the illumination of the Solar Diffuser panel. In this paper, we describe the error in the Common GEO code and how it may be fixed. We present evidence for the error from within the OBC-IP data. We also describe the effects of the solar vector error on the RSB calibration and the Sensor Data Record (SDR). In order to perform this evaluation, we have reanalyzed the yaw-maneuver data to compute the vignetting functions required for the on-orbit SD RSB radiometric calibration. After the reanalysis, we find an effect of up to 0.5% on the shortwave infrared (SWIR) RSB calibration.
Acute evaluation of transthoracic impedance vectors using ICD leads.
Gottfridsson, Christer; Daum, Douglas; Kennergren, Charles; Ramuzat, Agnès; Willems, Roger; Edvardsson, Nils
2009-06-01
Minute ventilation (MV) has been proven to be very useful in rate-responsive pacing. The aim of this study was to evaluate the feasibility of using implantable cardioverter-defibrillator (ICD) leads as part of the MV detection system. At implant in 10 patients, the transthoracic impedance was measured from tripolar ICD, tetrapolar ICD, and atrial lead vectors during normal, deep, and shallow voluntary respiration. MV and respiration rate (RespR) were simultaneously measured through a facemask with a pneumotachometer (Korr), and the correlations with impedance-based measurements were calculated. Air sensitivity was the change in impedance per change in respiratory tidal volume, in ohms (Ω) per liter (L), and the signal-to-noise ratio (SNR) was the ratio of the respiratory and cardiac contraction components. The air sensitivity and SNR in the tripolar ICD vector were 2.70 +/- 2.73 ohm/L and 2.19 +/- 1.31, respectively, and were not different from the tetrapolar vector. The difference in RespR between the tripolar ICD vector and Korr was 0.2 +/- 1.91 breaths/minute. The regressed correlation coefficient between impedance MV and Korr MV was 0.86 +/- 0.07 in the tripolar ICD vector. The air sensitivity and SNR in tripolar and tetrapolar ICD lead vectors did not differ significantly and were in the range of the values in pacemaker leads currently used as MV sensors. The good correlations between impedance-based and Korr-based RespR and MV measurements imply that ICD leads may be used in MV sensor systems.
Spin Rotation Formalism for Spin Tracking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luccio, A.
The problem of which coefficients are adequate to correctly represent the spin rotation in vector spin tracking for polarized proton and deuteron beams in synchrotrons is here re-examined in the light of recent discussions. The main aim of this note is to show where some previous erroneous results originated and how to code spin rotation in a tracking code. Some analysis of a recent experiment is presented that confirms the correctness of the assumptions.
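As a generic illustration of the tracking step (not Luccio's specific coefficients), a spin tracking code advances the spin as a rotation of a unit 3-vector about a precession axis; a minimal sketch using Rodrigues' rotation formula, where the axis and per-step angle are assumed given by the lattice model:

    import numpy as np

    def rotate_spin(s, axis, angle):
        # Rodrigues' rotation: rotate spin vector s by `angle` about unit `axis`.
        k = axis / np.linalg.norm(axis)
        return (s * np.cos(angle)
                + np.cross(k, s) * np.sin(angle)
                + k * np.dot(k, s) * (1.0 - np.cos(angle)))

    s = np.array([0.0, 0.0, 1.0])      # initial vertical polarization
    spin_tune = 1.79                   # assumed value for illustration
    for _ in range(1000):              # one turn split into 1000 steps
        s = rotate_spin(s, np.array([0.0, 1.0, 0.0]), 2 * np.pi * spin_tune / 1000)
    print(np.linalg.norm(s))           # |s| = 1 is preserved, a key sanity check

Because each step is an exact rotation, the spin magnitude cannot drift, which is one of the consistency checks such notes use to diagnose erroneous coefficient choices.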
Carbon Nanotube Growth Rate Regression using Support Vector Machines and Artificial Neural Networks
2014-03-27
…intensity D peak. Reprinted with permission from [38]. The SVM classifier is trained using custom-written Java code leveraging the Sequential Minimal Optimization algorithm. Encog is a machine learning framework for Java, C++, and .NET applications that supports Bayesian Networks, Hidden Markov Models, SVMs, and ANNs [13]. SVM classifiers are trained using Weka libraries and leveraging custom-written Java code. The data set is created as an Attribute Relationship File Format (ARFF) file.
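Since only fragments of this abstract survive, the following is a generic sketch of the kind of pipeline it describes: support-vector regression of a growth-rate target from spectroscopic features, here with scikit-learn rather than the Java/Weka stack the thesis used; the features and data are synthetic stand-ins.

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)
    # Hypothetical features, e.g., Raman D/G ratio, temperature, gas flow
    X = rng.normal(size=(200, 3))
    y = 2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.1, size=200)  # synthetic growth rate

    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))
    model.fit(X[:150], y[:150])
    print("R^2 on held-out data:", model.score(X[150:], y[150:]))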
Vector radiative transfer code SORD: Performance analysis and quick start guide
NASA Astrophysics Data System (ADS)
Korkin, Sergey; Lyapustin, Alexei; Sinyuk, Alexander; Holben, Brent; Kokhanovsky, Alexander
2017-10-01
We present a new open source polarized radiative transfer code SORD written in Fortran 90/95. SORD numerically simulates propagation of monochromatic solar radiation in a plane-parallel atmosphere over a reflecting surface using the method of successive orders of scattering (hence the name). Thermal emission is ignored. We did not improve the method in any way, but report the accuracy and runtime in 52 benchmark scenarios. This paper also serves as a quick start user's guide for the code available from ftp://maiac.gsfc.nasa.gov/pub/skorkin, from the JQSRT website, or from the corresponding (first) author.
Kolehmainen, Christine; Brennan, Meghan; Filut, Amarette; Isaac, Carol; Carnes, Molly
2014-09-01
Ineffective leadership during cardiopulmonary resuscitation ("code") can negatively affect a patient's likelihood of survival. In most teaching hospitals, internal medicine residents lead codes. In this study, the authors explored internal medicine residents' experiences leading codes, with a particular focus on how gender influences the code leadership experience. The authors conducted individual, semistructured telephone or in-person interviews with 25 residents (May 2012 to February 2013) from 9 U.S. internal medicine residency programs. They audio recorded and transcribed the interviews and then thematically analyzed the transcribed text. Participants viewed a successful code as one with effective leadership. They agreed that the ideal code leader was an authoritative presence; spoke with a deep, loud voice; used clear, direct communication; and appeared calm. Although equally able to lead codes as their male colleagues, female participants described feeling stress from having to violate gender behavioral norms in the role of code leader. In response, some female participants adopted rituals to signal the suspension of gender norms while leading a code. Others apologized afterwards for their counternormative behavior. Ideal code leadership embodies highly agentic, stereotypical male behaviors. Female residents employed strategies to better integrate the competing identities of code leader and female gender. In the future, residency training should acknowledge how female gender stereotypes may conflict with the behaviors required to enact code leadership and offer some strategies, such as those used by the female residents in this study, to help women integrate these dual identities.
Lu, Jiamiao; Williams, James A.; Luke, Jeremy; Zhang, Feijie; Chu, Kirk; Kay, Mark A.
2017-01-01
We previously developed a mini-intronic plasmid (MIP) expression system in which the essential bacterial elements for plasmid replication and selection are placed within an engineered intron contained within a universal 5′ UTR noncoding exon. Like minicircle DNA plasmids (devoid of bacterial backbone sequences), MIP plasmids overcome transcriptional silencing of the transgene. In addition, however, MIP plasmids increase transgene expression 2-fold, and often more than 10-fold, over minicircle vectors in vivo and in vitro. Based on these findings, we examined the effects of the MIP intronic sequences in a recombinant adeno-associated virus (AAV) vector system. Recombinant AAV vectors containing an intron with a bacterial replication origin and bacterial selectable marker increased transgene expression by 40 to 100 times in vivo when compared with conventional AAV vectors. Therefore, inclusion of this noncoding exon/intron sequence upstream of the coding region can substantially enhance AAV-mediated gene expression in vivo.
Improved dense trajectories for action recognition based on random projection and Fisher vectors
NASA Astrophysics Data System (ADS)
Ai, Shihui; Lu, Tongwei; Xiong, Yudian
2018-03-01
As an important application of intelligent monitoring systems, action recognition in video has become a major research area in computer vision. To improve the accuracy of action recognition based on improved dense trajectories, an advanced encoding method is introduced that combines Fisher vectors with random projection. The high-dimensional trajectory descriptors are projected into a low-dimensional subspace by random projection, and a GMM-FV hybrid model, built on a Gaussian mixture model, encodes the reduced trajectory feature vectors. Random projection thus lowers the computational complexity of Fisher-vector coding. Finally, a linear SVM classifier predicts the labels. We tested the algorithm on the UCF101 and KTH datasets. Compared with existing algorithms, the results showed that the method not only reduces computational complexity but also improves the accuracy of action recognition.
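A minimal sketch of this encoding chain (random projection, GMM fitting, and the first-order mean-gradient part of the Fisher vector; second-order terms are omitted), using synthetic descriptors in place of real dense-trajectory features:

    import numpy as np
    from sklearn.random_projection import GaussianRandomProjection
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    descriptors = rng.normal(size=(5000, 426))   # stand-in for dense-trajectory descriptors

    rp = GaussianRandomProjection(n_components=64, random_state=0)
    x = rp.fit_transform(descriptors)            # reduce dimensionality first

    gmm = GaussianMixture(n_components=8, covariance_type="diag", random_state=0).fit(x)

    def fisher_vector_mean(x, gmm):
        # First-order (mean) Fisher vector component per GMM mode.
        gamma = gmm.predict_proba(x)                    # soft assignments, (N, K)
        diff = x[:, None, :] - gmm.means_[None, :, :]   # (N, K, D)
        fv = np.einsum("nk,nkd->kd", gamma,
                       diff / np.sqrt(gmm.covariances_)[None])
        fv /= (x.shape[0] * np.sqrt(gmm.weights_)[:, None])
        return fv.ravel()

    video_code = fisher_vector_mean(x, gmm)      # one fixed-length vector per video
    print(video_code.shape)                      # (8 * 64,)

The resulting fixed-length vector is what would be fed to the linear SVM; projecting before GMM fitting is what cuts the cost of the Fisher coding.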
Wavelet subband coding of computer simulation output using the A++ array class library
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradley, J.N.; Brislawn, C.M.; Quinlan, D.J.
1995-07-01
The goal of the project is to produce utility software for off-line compression of existing data and library code that can be called from a simulation program for on-line compression of data dumps as the simulation proceeds. Naturally, we would like the amount of CPU time required by the compression algorithm to be small in comparison to the requirements of typical simulation codes. We also want the algorithm to accommodate a wide variety of smooth, multidimensional data types. For these reasons, the subband vector quantization (VQ) approach employed previously has been replaced by a scalar quantization (SQ) strategy using a bank of almost-uniform scalar subband quantizers in a scheme similar to that used in the FBI fingerprint image compression standard. This eliminates the considerable computational burdens of training VQ codebooks for each new type of data and performing nearest-vector searches to encode the data. The comparison of subband VQ and SQ algorithms indicated that, in practice, there is relatively little additional gain from using vector as opposed to scalar quantization on DWT subbands, even when the source imagery is from a very homogeneous population, and our subjective experience with synthetic computer-generated data supports this stance. It appears that a careful study is needed of the tradeoffs involved in selecting scalar vs. vector subband quantization, but such an analysis is beyond the scope of this paper. Our present work is focused on the problem of generating wavelet transform/scalar quantization (WSQ) implementations that can be ported easily between different hardware environments. This is an extremely important consideration given the great profusion of different high-performance computing architectures available, the high cost associated with learning how to map algorithms effectively onto a new architecture, and the rapid rate of evolution in the world of high-performance computing.
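A minimal sketch of the wavelet transform/scalar quantization idea on a 2-D field, using PyWavelets and a single uniform quantizer step per subband (the actual scheme uses a bank of almost-uniform quantizers plus entropy coding; the wavelet and step size here are assumed for illustration):

    import numpy as np
    import pywt

    data = np.random.default_rng(0).normal(size=(256, 256))  # stand-in simulation dump

    coeffs = pywt.wavedec2(data, "bior4.4", level=3)          # DWT into subbands
    step = 0.5                                                # quantizer step (assumed)

    # Uniform scalar quantization of every subband
    q = [np.round(coeffs[0] / step)]
    q += [tuple(np.round(c / step) for c in detail) for detail in coeffs[1:]]

    # Dequantize and reconstruct
    dq = [q[0] * step] + [tuple(c * step for c in detail) for detail in q[1:]]
    rec = pywt.waverec2(dq, "bior4.4")[:256, :256]            # crop any padding
    print("max abs error:", np.abs(rec - data).max())

Note that each subband is quantized independently, which is what makes per-subband step sizes (the "bank" of quantizers) a natural refinement.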
A Radiation Chemistry Code Based on the Greens Functions of the Diffusion Equation
NASA Technical Reports Server (NTRS)
Plante, Ianik; Wu, Honglu
2014-01-01
Ionizing radiation produces several radiolytic species such as •OH, e-aq, and H• when interacting with biological matter. Following their creation, radiolytic species diffuse and chemically react with biological molecules such as DNA. Despite years of research, many questions on DNA damage by ionizing radiation remain, notably on the indirect effect, i.e., the damage resulting from the reactions of the radiolytic species with DNA. To simulate DNA damage by ionizing radiation, we are developing a step-by-step radiation chemistry code based on the Green's functions of the diffusion equation (GFDE), which is able to follow the trajectories of all particles and their reactions with time. In recent years, simulations based on the GFDE have been used extensively in biochemistry, notably to simulate biochemical networks in time and space, and are often used as the "gold standard" to validate diffusion-reaction theories. The exact GFDE for partially diffusion-controlled reactions is difficult to use because of its complex form. Therefore, the radial Green's function, which is much simpler, is often used, and much effort has been devoted to its sampling, for which we have developed a sampling algorithm. This algorithm only yields the inter-particle distance vector length after a time step; the sampling of the deviation angle of the inter-particle vector is not taken into consideration. In this work, we show that the radial distribution is predicted by the exact radial Green's function. We also use a technique developed by Clifford et al. to generate the inter-particle vector deviation angles, knowing the inter-particle vector length before and after a time step. The results are compared with those predicted by the exact GFDE and by the analytical angular functions for free diffusion. This first step in the creation of the radiation chemistry code should help the understanding of the contribution of the indirect effect to the formation of DNA damage and double-strand breaks.
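For the limiting case of free diffusion (no reaction), the Green's function of the diffusion equation is Gaussian, so propagating an inter-particle vector over a time step reduces to adding a normal displacement; a minimal sketch of this special case, which the exact partially diffusion-controlled sampling generalizes (the diffusion coefficient below is an assumed illustrative value):

    import numpy as np

    def propagate_free(r_vec, d_rel, dt, rng):
        # Free-diffusion Green's function: the relative position after dt is
        # Gaussian about the old one with variance 2*D_rel*dt per axis.
        sigma = np.sqrt(2.0 * d_rel * dt)
        return r_vec + rng.normal(scale=sigma, size=3)

    rng = np.random.default_rng(1)
    r = np.array([0.0, 0.0, 1.0e-9])   # initial separation, m
    d_rel = 5.0e-9                     # relative diffusion coefficient, m^2/s (assumed)
    samples = np.array([propagate_free(r, d_rel, 1e-12, rng) for _ in range(10000)])
    # Histogramming |r'| and the deviation angle of r' from r over these samples
    # is exactly the kind of check compared against the analytical functions.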
Higher-order vector beams produced by photonic-crystal lasers.
Iwahashi, Seita; Kurosaka, Yoshitaka; Sakai, Kyosuke; Kitamura, Kyoko; Takayama, Naoki; Noda, Susumu
2011-06-20
We have successfully generated vector beams with higher-order polarization states using photonic-crystal lasers. We have analyzed and designed lattice structures that provide cavity modes with different symmetries. Fabricated devices based on these lattice structures produced doughnut-shaped vector beams, with symmetries corresponding to the cavity modes. Our study enables the systematic analysis of vector beams, which we expect will lead to applications such as high-resolution microscopy, laser processing, and optical trapping.
Sub-block motion derivation for merge mode in HEVC
NASA Astrophysics Data System (ADS)
Chien, Wei-Jung; Chen, Ying; Chen, Jianle; Zhang, Li; Karczewicz, Marta; Li, Xiang
2016-09-01
The new state-of-the-art video coding standard, H.265/HEVC, was finalized in 2013 and achieves roughly 50% bit rate saving compared to its predecessor, H.264/MPEG-4 AVC. In this paper, two additional merge candidates, an advanced temporal motion vector predictor and a spatial-temporal motion vector predictor, are developed to improve the motion information prediction scheme under the HEVC structure. The proposed method allows each Prediction Unit (PU) to fetch multiple sets of motion information from multiple blocks smaller than the current PU. By splitting a large PU into sub-PUs and filling motion information for all the sub-PUs of the large PU, the signaling cost of motion information can be reduced. This paper describes the above-mentioned techniques in detail and evaluates their coding performance benefits based on the common test conditions used during HEVC development. Simulation results show that a 2.4% performance improvement over HEVC can be achieved.
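A toy sketch of the sub-PU idea (not the normative derivation in the proposal): split a PU into sub-blocks and let each one fetch its own motion vector from a co-located temporal motion field, so finer-grained motion comes at no extra signaling cost; the motion-field layout below is hypothetical.

    import numpy as np

    def sub_pu_motion(temporal_mv_field, pu_x, pu_y, pu_w, pu_h, sub=4):
        # temporal_mv_field: (H/4, W/4, 2) array of motion vectors on a 4x4 grid
        # from the co-located reference picture (hypothetical layout).
        mvs = np.empty((pu_h // sub, pu_w // sub, 2))
        for j in range(pu_h // sub):
            for i in range(pu_w // sub):
                gy = (pu_y + j * sub) // 4     # co-located 4x4 grid position
                gx = (pu_x + i * sub) // 4
                mvs[j, i] = temporal_mv_field[gy, gx]
        return mvs                              # one MV per sub-PU, no extra signaling

    field = np.random.default_rng(0).integers(-8, 8, size=(64, 64, 2)).astype(float)
    print(sub_pu_motion(field, pu_x=32, pu_y=16, pu_w=16, pu_h=16).shape)  # (4, 4, 2)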
Locally adaptive vector quantization: Data compression with feature preservation
NASA Technical Reports Server (NTRS)
Cheung, K. M.; Sayano, M.
1992-01-01
A study of a locally adaptive vector quantization (LAVQ) algorithm for data compression is presented. This algorithm provides high-speed one-pass compression, is fully adaptable to any data source, and does not require a priori knowledge of the source statistics; LAVQ is therefore a universal data compression algorithm. The basic algorithm and several modifications to improve performance are discussed. These modifications are nonlinear quantization, coarse quantization of the codebook, and lossless compression of the output. Performance of LAVQ on various images using irreversible (lossy) coding is comparable to that of the Linde-Buzo-Gray algorithm, but LAVQ has a much higher speed; thus this algorithm has potential for real-time video compression. Unlike most other image compression algorithms, LAVQ preserves fine detail in images. LAVQ's performance as a lossless data compression algorithm is comparable to that of Lempel-Ziv-based algorithms, but LAVQ uses far less memory during the coding process.
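A toy sketch of one-pass, locally adaptive VQ (a simplification, not the exact LAVQ algorithm): grow the codebook on the fly whenever the best match is too far away, otherwise emit the index and nudge the matched codeword toward the input; threshold and learning rate are assumed tuning parameters.

    import numpy as np

    def lavq_encode(vectors, threshold=0.5, lr=0.1):
        codebook, indices = [], []
        for v in vectors:
            if codebook:
                dists = np.linalg.norm(np.array(codebook) - v, axis=1)
                best = int(np.argmin(dists))
            if not codebook or dists[best] > threshold:
                codebook.append(v.copy())                     # new codeword for novel feature
                indices.append(len(codebook) - 1)
            else:
                codebook[best] += lr * (v - codebook[best])   # local adaptation
                indices.append(best)
        return indices, codebook

    data = np.random.default_rng(2).normal(size=(1000, 4))
    idx, cb = lavq_encode(data)
    print(len(cb), "codewords for", len(idx), "vectors")

Keeping novel inputs as new codewords rather than forcing them onto distant existing ones is the mechanism by which such schemes preserve fine detail.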
SAPNEW: Parallel finite element code for thin shell structures on the Alliant FX/80
NASA Astrophysics Data System (ADS)
Kamat, Manohar P.; Watson, Brian C.
1992-02-01
The results of a research activity aimed at providing a finite element capability for analyzing turbo-machinery bladed-disk assemblies in a vector/parallel processing environment are summarized. Analysis of aircraft turbofan engines is very computationally intensive. The performance limit of modern day computers with a single processing unit was estimated at 3 billion floating point operations per second (3 gigaflops). In view of this limit of a sequential unit, performance rates higher than 3 gigaflops can be achieved only through vectorization and/or parallelization as on the Alliant FX/80. Accordingly, the efforts of this critically needed research were geared towards developing and evaluating parallel finite element methods for static and vibration analysis. A special purpose code, named with the acronym SAPNEW, performs static and eigen analysis of multi-degree-of-freedom blade models built up from flat thin shell elements.
Vectorization, threading, and cache-blocking considerations for hydrocodes on emerging architectures
Fung, J.; Aulwes, R. T.; Bement, M. T.; ...
2015-07-14
This work reports on considerations for improving computational performance in preparation for current and expected changes to computer architecture. The algorithms studied include increasingly complex prototypes for radiation hydrodynamics codes, such as gradient routines and diffusion matrix assembly (e.g., in [1-6]). The meshes considered for the algorithms are structured or unstructured meshes. The considerations applied for performance improvements are meant to be general in terms of architecture (not specific to graphical processing units (GPUs) or multi-core machines, for example) and include techniques for vectorization, threading, tiling, and cache blocking. Out of a survey of optimization techniques on applications such as diffusion and hydrodynamics, we make general recommendations with a view toward making these techniques conceptually accessible to the applications code developer. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
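As a language-agnostic illustration of the tiling/cache-blocking pattern discussed (a production hydrocode would apply it in C or Fortran), here is a blocked matrix product sketch in Python, where the block size is an assumed tuning parameter chosen so each tile stays cache-resident:

    import numpy as np

    def blocked_matmul(a, b, block=64):
        # Process block-sized tiles so each tile of a, b, and c is reused
        # from cache before being evicted.
        n = a.shape[0]
        c = np.zeros((n, n))
        for i0 in range(0, n, block):
            for k0 in range(0, n, block):
                for j0 in range(0, n, block):
                    c[i0:i0+block, j0:j0+block] += (
                        a[i0:i0+block, k0:k0+block] @ b[k0:k0+block, j0:j0+block]
                    )
        return c

    a = np.random.default_rng(0).normal(size=(256, 256))
    b = np.random.default_rng(1).normal(size=(256, 256))
    assert np.allclose(blocked_matmul(a, b), a @ b)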
Automation of the guiding center expansion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burby, J. W.; Squire, J.; Qin, H.
2013-07-15
We report on the use of the recently developed Mathematica package VEST (Vector Einstein Summation Tools) to automatically derive the guiding center transformation. Our Mathematica code employs a recursive procedure to derive the transformation order-by-order. This procedure has several novel features. (1) It is designed to allow the user to easily explore the guiding center transformation's numerous non-unique forms or representations. (2) The procedure proceeds entirely in Cartesian position and velocity coordinates, thereby producing manifestly gyrogauge invariant results; the commonly used perpendicular unit vector fields e₁, e₂ are never even introduced. (3) It is easy to apply in the derivation of higher-order contributions to the guiding center transformation without fear of human error. Our code therefore stands as a useful tool for exploring subtle issues related to the physics of toroidal momentum conservation in tokamaks.
The Advanced Software Development and Commercialization Project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gallopoulos, E.; Canfield, T.R.; Minkoff, M.
1990-09-01
This is the first of a series of reports pertaining to progress in the Advanced Software Development and Commercialization Project, a joint collaborative effort between the Center for Supercomputing Research and Development of the University of Illinois and the Computing and Telecommunications Division of Argonne National Laboratory. The purpose of this work is to apply techniques of parallel computing that were pioneered by University of Illinois researchers to mature computational fluid dynamics (CFD) and structural dynamics (SD) computer codes developed at Argonne. The collaboration in this project will bring this unique combination of expertise to bear, for the first time, on industrially important problems. By so doing, it will expose the strengths and weaknesses of existing techniques for parallelizing programs and will identify those problems that need to be solved in order to enable widespread production use of parallel computers. Secondly, the increased efficiency of the CFD and SD codes themselves will enable the simulation of larger, more accurate engineering models that involve fluid and structural dynamics. In order to realize the above two goals, we are considering two production codes that have been developed at ANL and are widely used by both industry and universities: COMMIX and WHAMS-3D. The first is a computational fluid dynamics code that is used for both nuclear reactor design and safety and as a design tool for the casting industry. The second is a three-dimensional structural dynamics code used in nuclear reactor safety as well as crashworthiness studies. These codes are currently available for sequential and vector computers only. Our main goal is to port and optimize these two codes on shared memory multiprocessors. In so doing, we shall establish a process that can be followed in optimizing other sequential or vector engineering codes for parallel processors.
Gene transfer to promote cardiac regeneration.
Collesi, Chiara; Giacca, Mauro
2016-12-01
There is an impelling need to develop new therapeutic strategies for patients with myocardial infarction and heart failure. Drawing on the large quantity of new information gathered over the last few years on the mechanisms controlling cardiomyocyte proliferation during embryonic and fetal life, it is now possible to devise innovative therapies based on cardiac gene transfer. Different protein-coding genes controlling cell cycle progression or cardiomyocyte specification and differentiation, along with microRNA mimics and inhibitors regulating pre-natal and early post-natal cell proliferation, are amenable to transformation into potential therapeutics for cardiac regeneration. These gene therapy approaches are conceptually revolutionary, since they are aimed at stimulating the intrinsic potential of differentiated cardiac cells to proliferate, rather than relying on the implantation of exogenously expanded cells to achieve tissue regeneration. For efficient and prolonged cardiac gene transfer, vectors based on the Adeno-Associated Virus stand as safe, efficient and reliable tools for cardiac gene therapy applications.
Recent Advances in Preclinical Developments Using Adenovirus Hybrid Vectors.
Ehrke-Schulz, Eric; Zhang, Wenli; Gao, Jian; Ehrhardt, Anja
2017-10-01
Adenovirus (Ad)-based vectors are efficient gene-transfer vehicles to deliver foreign DNA into living organisms, offering large cargo capacity and low immunogenicity and genotoxicity. As Ad shows low integration rates of its genome into host chromosomes, vector-derived gene expression decreases due to continuous cell cycling in regenerating tissues and dividing cell populations. To overcome this hurdle, adenoviral delivery can be combined with mechanisms leading to maintenance of therapeutic DNA and long-term effects of the desired treatment. Several hybrid Ad vectors (AdV) exploiting various strategies for long-term treatment have been developed and characterized. This review summarizes recent developments of preclinical approaches using hybrid AdVs utilizing either the Sleeping Beauty transposase system for somatic integration into host chromosomes or designer nucleases, including transcription activator-like effector nucleases and clustered regularly interspaced short palindromic repeats/CRISPR-associated protein-9 nuclease, for permanent gene editing. Options for optimizing these vectors further are discussed, which may lead to future clinical applications of these versatile gene-therapy tools.
A simple device to illustrate the Einthoven triangle
Jin, Benjamin E.; Wulff, Heike; Widdicombe, Jonathan H.; Zheng, Jie; Bers, Donald M.
2012-01-01
The Einthoven triangle is central to the field of electrocardiography, but the concept of cardiac vectors is often a difficult notion for students to grasp. To illustrate this principle, we constructed a device that recreates the conditions of an ECG reading using a battery to simulate the electrical vector of the heart and three voltmeters for the main electrocardiographic leads. Requiring minimal construction with low cost, this device provides hands-on practice that enables students to rediscover the principles of the Einthoven triangle, namely, that the direction of the cardiac dipole can be predicted from the deflections in any two leads and that lead I + lead III = lead II independent of the position of the heart's electrical vector. We built a total of 6 devices for classes of 30 students and tested them in the first-year Human Physiology course at the University of California-Davis School of Medicine. Combined with traditional demonstrations with ECG machines, this equipment demonstrated its ability to help medical students obtain a solid foundation in the basic principles of electrocardiography.
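A small sketch of the principle the device demonstrates: project an assumed cardiac dipole onto the three frontal-plane limb-lead axes and check Einthoven's law (lead I + lead III = lead II); the 40-degree electrical axis is just an example value.

    import numpy as np

    # Lead axes in the frontal plane (degrees, by convention): I = 0, II = 60, III = 120
    angles = np.radians([0.0, 60.0, 120.0])
    axes = np.stack([np.cos(angles), np.sin(angles)], axis=1)

    heart_vector = np.array([np.cos(np.radians(40.0)),
                             np.sin(np.radians(40.0))])  # assumed electrical axis of 40 deg

    lead_i, lead_ii, lead_iii = axes @ heart_vector      # deflection in each lead
    assert np.isclose(lead_i + lead_iii, lead_ii)        # Einthoven's law
    print(lead_i, lead_ii, lead_iii)

Because the law is just linearity of the dot product, it holds for any dipole orientation, which is exactly what students verify by rotating the battery in the device.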
Charge-reversal Lipids, Peptide-based Lipids, and Nucleoside-based Lipids for Gene Delivery
LaManna, Caroline M.; Lusic, Hrvoje; Camplo, Michel; McIntosh, Thomas J.; Barthélémy, Philippe; Grinstaff, Mark W.
2013-01-01
Conspectus Twenty years after gene therapy was introduced in the clinic, advances in the technique continue to garner headlines as successes pique the interest of clinicians, researchers, and the public. Gene therapy's appeal stems from its potential to revolutionize modern medical therapeutics by offering solutions to a myriad of diseases by tailoring the treatment to a specific individual's genetic code. Both viral and non-viral vectors have been used in the clinic, but the low transfection efficiencies when utilizing non-viral vectors have led to an increased focus on engineering new gene delivery vectors. To address the challenges facing non-viral or synthetic vectors, specifically lipid-based carriers, we have focused on three main themes throughout our research: 1) that releasing the nucleic acid from the carrier will increase gene transfection; 2) that utilizing biologically inspired designs, such as DNA binding proteins, to create lipids with peptide-based headgroups will improve delivery; and 3) that mimicking the natural binding patterns observed within DNA, by using lipids having a nucleoside headgroup, will give unique supramolecular assemblies with high transfection efficiency. The results presented in this Account demonstrate that cellular uptake and transfection efficacy can be improved by engineering the chemical components of the lipid vectors to enhance nucleic acid binding and release kinetics. Specifically, our research has shown that the incorporation of a charge-reversal moiety to initiate change of the lipid from positive to negative net charge during the transfection process improves transfection. In addition, by varying the composition of the spacer (rigid, flexible, short, long, and aromatic) between the cationic headgroup and the hydrophobic chains, lipids can be tailored to interact with different nucleic acids (DNA, RNA, siRNA) and accordingly affect delivery, uptake outcomes, and transfection efficiency. Introduction of a peptide headgroup into the lipid provides a mechanism to affect the binding of the lipid to the nucleic acid, to influence the supramolecular lipoplex structure, and to enhance gene transfection activity. Lastly, we discuss the in vitro successes we have had when using lipids possessing a nucleoside headgroup to create unique self-assembled structures and to deliver DNA to cells. In this Account, we state our hypotheses and design elements as well as describe the techniques that we have utilized in our research, in order to provide readers with the tools to characterize and engineer new vectors.
Production of SV40-derived vectors.
Strayer, David S; Mitchell, Christine; Maier, Dawn A; Nichols, Carmen N
2010-06-01
Recombinant simian virus 40 (rSV40)-derived vectors are particularly useful for gene delivery to bone marrow progenitor cells and their differentiated derivatives, certain types of epithelial cells (e.g., hepatocytes), and central nervous system neurons and microglia. They integrate rapidly into cellular DNA to provide long-term gene expression in vitro and in vivo in both resting and dividing cells. Here we describe a protocol for production and purification of these vectors. These procedures require only packaging cells (e.g., COS-7) and circular vector genome DNA. Amplification involves repeated infection of packaging cells with vector produced by transfection. Cotransfection is not required in any step. Viruses are purified by centrifugation using discontinuous sucrose or cesium chloride (CsCl) gradients, and resulting vectors are replication-incompetent and contain no detectable wild-type SV40 revertants. These approaches are simple, give reproducible results, and may be used to generate vectors that are deleted either only for large T antigen (Tag) or for all SV40 coding sequences; the latter can carry up to 5 kb of foreign DNA. These vectors are best applied to long-term expression of proteins normally encoded by mammalian cells or by viruses that infect mammalian cells, or of untranslated RNAs (e.g., RNA interference). The preparative approaches described facilitate application of these vectors and allow almost any laboratory to exploit their strengths for diverse gene delivery applications.
Naval Observatory Vector Astrometry Software (NOVAS) Version 3.1, Introducing a Python Edition
NASA Astrophysics Data System (ADS)
Barron, Eric G.; Kaplan, G. H.; Bangert, J.; Bartlett, J. L.; Puatua, W.; Harris, W.; Barrett, P.
2011-01-01
The Naval Observatory Vector Astrometry Software (NOVAS) is a source-code library that provides common astrometric quantities and transformations. NOVAS calculations are accurate at the sub-milliarcsecond level. The library can supply, in one or two subroutine or function calls, the instantaneous celestial position of any star or planet in a variety of coordinate systems. NOVAS also provides access to all of the building blocks that go into such computations. NOVAS Version 3.1 introduces a Python edition alongside the Fortran and C editions. The Python edition uses the computational code from the C edition and, currently, mimics the function calls of the C edition. Future versions will expand the functionality of the Python edition to harness the object-oriented nature of the Python language, and will implement the ability to handle large quantities of objects or observers using the array functionality in NumPy (a third-party scientific package for Python). NOVAS 3.1 also adds a module to transform GCRS vectors to the ITRS; the ITRS to GCRS transformation was already provided in NOVAS 3.0. The module that corrects an ITRS vector for polar motion has been modified to undo that correction upon demand. In the C edition, the ephemeris-access functions have been revised for use on 64-bit systems and for improved performance in general. NOVAS, including documentation, is available from the USNO website (http://www.usno.navy.mil/USNO/astronomical-applications/software-products/novas).
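As the abstract notes, the Python edition currently mirrors the C edition's function calls; a hypothetical usage sketch under that assumption follows (the module path, the eph_manager helper, and the ephemeris setup are assumptions about the shipped package and may differ; a planetary ephemeris data file such as DE405 is assumed installed).

    # Hypothetical sketch; assumes the Python edition exposes the C edition's
    # function names (e.g., julian_date, make_object, app_planet) under novas.compat.
    from novas import compat as novas
    from novas.compat import eph_manager

    jd_start, jd_end, number = eph_manager.ephem_open()   # open the planetary ephemeris
    jd_tt = novas.julian_date(2011, 1, 1, 12.0)           # TT Julian date

    mars = novas.make_object(0, 4, 'Mars', None)          # type 0 = major planet
    ra, dec, dis = novas.app_planet(jd_tt, mars)          # apparent geocentric place
    print(ra, dec, dis)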
Anthropogenic disturbance and the risk of flea-borne disease transmission
Megan M. Friggens; Paul Beier
2010-01-01
Anthropogenic disturbance may lead to the spread of vector-borne diseases through effects on pathogens, vectors, and hosts. Identifying the type and extent of vector response to habitat change will enable better and more accurate management strategies for anthropogenic disease spread. We compiled and analyzed data from published empirical studies to test for patterns...
Exclusive photoproduction of vector mesons in proton-lead ultraperipheral collisions at the LHC
NASA Astrophysics Data System (ADS)
Xie, Ya-Ping; Chen, Xurong
2018-02-01
Rapidity distributions of vector mesons are computed in the dipole model for proton-lead ultraperipheral collisions (UPCs) at the CERN Large Hadron Collider (LHC). The dipole model framework is implemented in the calculations of cross sections in the photon-hadron interaction. The bCGC model and Boosted Gaussian wave functions are employed in the scattering amplitude. We obtain predictions for the rapidity distributions of the J/ψ meson in proton-lead ultraperipheral collisions, and these predictions give a good description of the experimental data of ALICE. The rapidity distributions of the ϕ, ω and ψ(2S) mesons in proton-lead ultraperipheral collisions are also presented in this paper.
The small non-coding RNA response to virus infection in the Leishmania vector Lutzomyia longipalpis.
Ferreira, Flávia Viana; Aguiar, Eric Roberto Guimarães Rocha; Olmo, Roenick Proveti; de Oliveira, Karla Pollyanna Vieira; Silva, Emanuele Guimarães; Sant'Anna, Maurício Roberto Viana; Gontijo, Nelder de Figueiredo; Kroon, Erna Geessien; Imler, Jean Luc; Marques, João Trindade
2018-06-01
Sandflies are well known vectors for Leishmania but also transmit a number of arthropod-borne viruses (arboviruses). Few studies have addressed the interaction between sandflies and arboviruses. RNA interference (RNAi) mechanisms utilize small non-coding RNAs to regulate different aspects of host-pathogen interactions. The small interfering RNA (siRNA) pathway is a broad antiviral mechanism in insects. In addition, at least in mosquitoes, another RNAi mechanism mediated by PIWI-interacting RNAs (piRNAs) is activated by viral infection. Finally, endogenous microRNAs (miRNAs) may also regulate host immune responses. Here, we analyzed the small non-coding RNA response to Vesicular stomatitis virus (VSV) infection in the sandfly Lutzomyia longipalpis. We detected abundant production of virus-derived siRNAs after VSV infection in adult sandflies. However, there was no production of virus-derived piRNAs, and only mild changes in the expression of vector miRNAs in response to infection. We also observed abundant production of virus-derived siRNAs against two other viruses in Lutzomyia Lulo cells. Together, our results suggest that the siRNA but not the piRNA pathway mediates an antiviral response in sandflies. In agreement with this hypothesis, pre-treatment of cells with dsRNA against VSV was able to inhibit viral replication, while knock-down of the central siRNA component, Argonaute-2, led to increased virus levels. Our work begins to elucidate the role of RNAi mechanisms in the interaction between L. longipalpis and viruses and should also open the way for studies with other sandfly-borne pathogens.
User's Manual for FEMOM3DR. Version 1.0
NASA Technical Reports Server (NTRS)
Reddy, C. J.
1998-01-01
FEMOM3DR is a computer code written in FORTRAN 77 to compute the radiation characteristics of antennas on a 3D body using a combined Finite Element Method (FEM)/Method of Moments (MoM) technique. The code is written to handle different feeding structures such as coaxial line, rectangular waveguide, and circular waveguide. It uses tetrahedral elements with vector edge basis functions for the FEM and triangular elements with roof-top basis functions for the MoM. By virtue of the FEM, this code can handle arbitrarily shaped three-dimensional bodies with inhomogeneous lossy materials, and thanks to the MoM the computational domain can be terminated in any arbitrary shape. The User's Manual is written to acquaint the user with the operation of the code. The user is assumed to be familiar with the FORTRAN 77 language and the operating environment of the computers on which the code is intended to run.
Video streaming with SHVC to HEVC transcoding
NASA Astrophysics Data System (ADS)
Gudumasu, Srinivas; He, Yuwen; Ye, Yan; Xiu, Xiaoyu
2015-09-01
This paper proposes an efficient Scalable High efficiency Video Coding (SHVC) to High Efficiency Video Coding (HEVC) transcoder, which can reduce the transcoding complexity significantly and provide a desired trade-off between the transcoding complexity and the transcoded video quality. To reduce the transcoding complexity, some of the coding information in the SHVC bitstream, such as coding unit (CU) depth, prediction mode, merge mode, motion vector information, intra direction information and transform unit (TU) depth information, is mapped and transcoded to a single layer HEVC bitstream. One major difficulty in transcoding arises when trying to reuse the motion information from the SHVC bitstream, since motion vectors referring to inter-layer reference (ILR) pictures cannot be reused directly in transcoding. Reusing motion information obtained from ILR pictures for those prediction units (PUs) greatly reduces the complexity of the SHVC transcoder, but a significant reduction in picture quality is observed. Pictures corresponding to the intra refresh pictures in the base layer (BL) are coded as P pictures in the enhancement layer (EL) of the SHVC bitstream, and directly reusing the intra information from the BL for transcoding does not achieve good coding efficiency. To solve these problems, various transcoding technologies are proposed, offering different trade-offs between transcoding speed and transcoding quality. They are implemented on the basis of the reference software SHM-6.0 and HM-14.0 for the two-layer spatial scalability configuration. Simulations show that the proposed SHVC software transcoder reduces the transcoding complexity by up to 98-99% using the low complexity transcoding mode when compared with the cascaded re-encoding method. The transcoder performance at various bitrates with different transcoding modes is compared in terms of transcoding speed and transcoded video quality.
PIC codes for plasma accelerators on emerging computer architectures (GPUS, Multicore/Manycore CPUS)
NASA Astrophysics Data System (ADS)
Vincenti, Henri
2016-03-01
The advent of exascale computers will enable 3D simulations of new laser-plasma interaction regimes that were previously out of reach of current petascale computers. However, the paradigm used to write current PIC codes will have to change in order to fully exploit the potential of these new computing architectures. Indeed, achieving exascale computing facilities in the next decade will be a great challenge in terms of energy consumption and will imply hardware developments directly impacting our way of implementing PIC codes. As data movement (from die to network) is by far the most energy-consuming part of an algorithm, future computers will tend to increase memory locality at the hardware level and reduce energy consumption related to data movement by using more and more cores on each compute node (''fat nodes'') with a reduced clock speed to allow for efficient cooling. To compensate for the frequency decrease, CPU vendors are making use of long SIMD instruction registers that are able to process multiple data with one arithmetic operator in one clock cycle; SIMD register length is expected to double every four years. GPUs also have a reduced clock speed per core and can process Multiple Instructions on Multiple Data (MIMD). At the software level, Particle-In-Cell (PIC) codes will thus have to achieve both good memory locality and vectorization (for multicore/manycore CPUs) to fully take advantage of these upcoming architectures. In this talk, we present the portable solutions we implemented in our high performance skeleton PIC code PICSAR to achieve good memory locality and cache reuse as well as good vectorization on SIMD architectures. We also present the portable solutions used to parallelize the pseudo-spectral quasi-cylindrical code FBPIC on GPUs using the Numba Python compiler.
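A small illustration of the data-layout point: storing particles as a structure of arrays keeps each component contiguous in memory, so the update becomes streaming, vectorizable array arithmetic; a NumPy sketch of a field-free push step (a stand-in for a full Boris push) follows.

    import numpy as np

    n = 1_000_000
    rng = np.random.default_rng(0)
    # Structure-of-arrays layout: each component contiguous in memory,
    # which is what SIMD units and GPUs want to stream over.
    x, y, z = (rng.random(n) for _ in range(3))
    vx, vy, vz = (rng.normal(size=n) for _ in range(3))

    def push(x, y, z, vx, vy, vz, dt):
        # Whole-array updates vectorize; no per-particle Python loop.
        x += vx * dt
        y += vy * dt
        z += vz * dt

    push(x, y, z, vx, vy, vz, dt=1e-3)

The array-of-structures alternative (one record per particle) interleaves components and defeats both SIMD streaming and cache-line reuse, which is the locality argument made above.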
International standards: the World Organisation for Animal Health Terrestrial Animal Health Code.
Thiermann, A B
2015-04-01
This paper provides a description of the international standards contained in the Terrestrial Animal Health Code of the World Organisation for Animal Health (OIE) that relate to the prevention and control of vector-borne diseases. It identifies the rights and obligations of OIE Member Countries regarding the notification of animal disease occurrences, as well as the recommendations to be followed for safe and efficient international trade in animals and their products.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wasserman, H.J.
1996-02-01
The second generation of the Digital Equipment Corp. (DEC) DECchip Alpha AXP microprocessor is referred to as the 21164. From the viewpoint of numerically-intensive computing, the primary difference between it and its predecessor, the 21064, is that the 21164 has twice the multiply/add throughput per clock period (CP): a maximum of two floating point operations (FLOPS) per CP vs. one for the 21064. The AlphaServer 8400 is a shared-memory multiprocessor server system that can accommodate up to 12 CPUs and up to 14 GB of memory. In this report we compare single processor performance of the 8400 system with that of the International Business Machines Corp. (IBM) RISC System/6000 POWER-2 microprocessor running at 66 MHz, the Silicon Graphics, Inc. (SGI) MIPS R8000 microprocessor running at 75 MHz, and the Cray Research, Inc. CRAY J90. The performance comparison is based on a set of Fortran benchmark codes that represent a portion of the Los Alamos National Laboratory supercomputer workload. The advantage of using these codes is that they span a wide range of computational characteristics, such as vectorizability, problem size, and memory access pattern. The primary disadvantage of using them is that detailed, quantitative analysis of performance behavior of all codes on all machines is difficult. One important addition to the benchmark set appears for the first time in this report: whereas the older version of one code was written for a vector processor, the newer version is more optimized for microprocessor architectures. Therefore, we have for the first time an opportunity to measure performance on a single application using implementations that expose the respective strengths of vector and superscalar architectures. All results in this report are from single processors. A subsequent article will explore shared-memory multiprocessing performance of the 8400 system.
Kato, Hirotomo; Jochim, Ryan C.; Gomez, Eduardo A.; Sakoda, Ryo; Iwata, Hiroyuki; Valenzuela, Jesus G.; Hashiguchi, Yoshihisa
2010-01-01
Triatoma (T.) dimidiata is a hematophagous Hemiptera and a main vector of Chagas disease. The saliva of this and other blood-sucking insects contains potent pharmacologically active components that assist them in counteracting the host hemostatic and inflammatory systems during blood feeding. To describe the repertoire of potential bioactive salivary molecules from this insect, a number of randomly selected transcripts from the salivary gland cDNA library of T. dimidiata were sequenced and analyzed. This analysis showed that 77.5% of the isolated transcripts coded for putative secreted proteins, and 89.9% of these coded for variants of the lipocalin family proteins. The most abundant transcript was a homologue of procalin, the major allergen of T. protracta saliva, and contributed more than 50% of the transcripts coding for putative secreted proteins, suggesting that it may play an important role in the blood-feeding process. Other salivary transcripts encoding lipocalin family proteins had homology to triabin (a thrombin inhibitor), triafestin (an inhibitor of kallikrein–kinin system), pallidipin (an inhibitor of collagen-induced platelet aggregation) and others with unknown function.
Improved image decompression for reduced transform coding artifacts
NASA Technical Reports Server (NTRS)
O'Rourke, Thomas P.; Stevenson, Robert L.
1994-01-01
The perceived quality of images reconstructed from low bit rate compression is severely degraded by the appearance of transform coding artifacts. This paper proposes a method for producing higher quality reconstructed images based on a stochastic model for the image data. Quantization (scalar or vector) partitions the transform coefficient space and maps all points in a partition cell to a representative reconstruction point, usually taken as the centroid of the cell. The proposed image estimation technique selects the reconstruction point within the quantization partition cell which results in a reconstructed image which best fits a non-Gaussian Markov random field (MRF) image model. This approach results in a convex constrained optimization problem which can be solved iteratively. At each iteration, the gradient projection method is used to update the estimate based on the image model. In the transform domain, the resulting coefficient reconstruction points are projected to the particular quantization partition cells defined by the compressed image. Experimental results will be shown for images compressed using scalar quantization of block DCT and using vector quantization of subband wavelet transform. The proposed image decompression provides a reconstructed image with reduced visibility of transform coding artifacts and superior perceived quality.
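A 1-D toy sketch of the constrained estimation idea: minimize a smoothness penalty by gradient descent while projecting the signal back into its known quantization cells each iteration (a quadratic prior stands in for the paper's non-Gaussian MRF, and block-DCT/wavelet coefficients are replaced by direct samples):

    import numpy as np

    rng = np.random.default_rng(0)
    true = np.cumsum(rng.normal(scale=0.2, size=128))      # smooth test signal
    step = 1.0
    q = np.round(true / step)                               # quantization cell indices
    x = q * step                                            # centroid reconstruction

    for _ in range(200):
        # Gradient of a quadratic smoothness prior sum((x[i+1]-x[i])^2)
        d = x[1:] - x[:-1]
        g = np.zeros_like(x)
        g[1:] += 2 * d
        g[:-1] -= 2 * d
        x -= 0.1 * g                                        # gradient step
        x = np.clip(x, (q - 0.5) * step, (q + 0.5) * step)  # project into cells

    print("centroid MSE:", np.mean((q * step - true) ** 2),
          "estimate MSE:", np.mean((x - true) ** 2))

The projection guarantees the estimate stays consistent with the compressed bitstream (it would quantize to the same indices), while the prior pulls it away from the blocky centroid solution.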
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pingenot, J; Rieben, R; White, D
2004-12-06
We present a computational study of signal propagation and attenuation of a 200 MHz dipole antenna in a cave environment. The cave is modeled as a straight tunnel with lossy, randomly rough walls. To simulate a broad frequency band, the full wave Maxwell equations are solved directly in the time domain via a high order vector finite element discretization using the massively parallel CEM code EMSolve. The simulation is performed for a series of random meshes in order to generate statistical data for the propagation and attenuation properties of the cave environment. Results for the power spectral density and phase of the electric field vector components are presented and discussed.
Discrete Data Transfer Technique for Fluid-Structure Interaction
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.
2007-01-01
This paper presents a general three-dimensional algorithm for data transfer between dissimilar meshes. The algorithm is suitable for applications of fluid-structure interaction and other high-fidelity multidisciplinary analysis and optimization. Because the algorithm is independent of the mesh topology, we can treat structured and unstructured meshes in the same manner. The algorithm is fast and accurate for transfer of scalar or vector fields between dissimilar surface meshes. The algorithm is also applicable for the integration of a scalar field (e.g., coefficients of pressure) on one mesh and injection of the resulting vectors (e.g., force vectors) onto another mesh. The author has implemented the algorithm in a C++ computer code. This paper contains a complete formulation of the algorithm with a few selected results.
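A minimal sketch of the consistent/conservative transfer pattern such algorithms implement (illustrative only, with a toy inverse-distance scheme rather than the paper's algorithm): interpolate displacements with a matrix H, and transfer loads with its transpose so virtual work is conserved.

    import numpy as np

    def interp_matrix(src_pts, dst_pts, k=3):
        # Inverse-distance weights over the k nearest source points (toy scheme).
        h = np.zeros((len(dst_pts), len(src_pts)))
        for i, p in enumerate(dst_pts):
            d = np.linalg.norm(src_pts - p, axis=1)
            nn = np.argsort(d)[:k]
            w = 1.0 / (d[nn] + 1e-12)
            h[i, nn] = w / w.sum()
        return h

    rng = np.random.default_rng(0)
    struct = rng.random((50, 3))          # structural surface nodes
    fluid = rng.random((200, 3))          # fluid surface nodes

    H = interp_matrix(struct, fluid)
    u_fluid = H @ rng.random(50)          # displacements: structure -> fluid
    f_struct = H.T @ rng.random(200)      # forces: fluid -> structure (conservative)
    print(u_fluid.shape, f_struct.shape)

Using H for one direction and H.T for the other makes the pair energy-consistent by construction, the usual requirement in fluid-structure coupling.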
NASA Technical Reports Server (NTRS)
Bacon, Barton J.; Carzoo, Susan W.; Davidson, John B.; Hoffler, Keith D.; Lallman, Frederick J.; Messina, Michael D.; Murphy, Patrick C.; Ostroff, Aaron J.; Proffitt, Melissa S.; Yeager, Jessie C.;
1996-01-01
Specifications for a flight control law are delineated in sufficient detail to support coding the control law in flight software. This control law was designed for implementation and flight test on the High-Alpha Research Vehicle (HARV), which is an F/A-18 aircraft modified to include an experimental multi-axis thrust-vectoring system and actuated nose strakes for enhanced rolling (ANSER). The control law, known as the HARV ANSER Control Law, was designed to utilize a blend of conventional aerodynamic control effectors, thrust vectoring, and actuated nose strakes to provide increased agility and good handling qualities throughout the HARV flight envelope, including angles of attack up to 70 degrees.
Morral, Núria; O’Neal, Wanda; Rice, Karen; Leland, Michele; Kaplan, Johanne; Piedra, Pedro A.; Zhou, Heshan; Parks, Robin J.; Velji, Rizwan; Aguilar-Córdova, Estuardo; Wadsworth, Samuel; Graham, Frank L.; Kochanek, Stefan; Carey, K. Dee; Beaudet, Arthur L.
1999-01-01
The efficiency of first-generation adenoviral vectors as gene delivery tools is often limited by the short duration of transgene expression, which can be related to immune responses and to toxic effects of viral proteins. In addition, readministration is usually ineffective unless the animals are immunocompromised or a different adenovirus serotype is used. Recently, adenoviral vectors devoid of all viral coding sequences (helper-dependent or gutless vectors) have been developed to avoid expression of viral proteins. In mice, liver-directed gene transfer with AdSTK109, a helper-dependent adenoviral (Ad) vector containing the human α1-antitrypsin (hAAT) gene, resulted in sustained expression for longer than 10 months with negligible toxicity to the liver. In the present report, we have examined the duration of expression of AdSTK109 in the liver of baboons and compared it to first-generation vectors expressing hAAT. Transgene expression was limited to approximately 3–5 months with the first-generation vectors. In contrast, administration of AdSTK109 resulted in transgene expression for longer than a year in two of three baboons. We have also investigated the feasibility of circumventing the humoral response to the virus by sequential administration of vectors of different serotypes. We found that the ineffectiveness of readministration due to the humoral response to an Ad5 first-generation vector was overcome by use of an Ad2-based vector expressing hAAT. These data suggest that long-term expression of transgenes should be possible by combining the reduced immunogenicity and toxicity of helper-dependent vectors with sequential delivery of vectors of different serotypes.
User's Manual for FEMOM3DS. Version 1.0
NASA Technical Reports Server (NTRS)
Reddy, C.J.; Deshpande, M. D.
1997-01-01
FEMOM3DS is a computer code written in FORTRAN 77 to compute the electromagnetic (EM) scattering characteristics of a three dimensional object with complex materials using a combined Finite Element Method (FEM)/Method of Moments (MoM) technique. This code uses tetrahedral elements with vector edge basis functions for the FEM in the volume of the cavity, and triangular elements with basis functions similar to those described for the MoM at the outer boundary. By virtue of the FEM, this code can handle arbitrarily shaped three-dimensional cavities filled with inhomogeneous lossy materials. The User's Manual is written to acquaint the user with the operation of the code. The user is assumed to be familiar with the FORTRAN 77 language and the operating environment of the computers on which the code is intended to run.
Applications of Signal Processing in Digital Communications. Appendix.
1991-04-01
We get a separable alphabet with 128 vectors (see Fig. 3). Among them, 32 have energy 3c² and 96 have energy b² + 2c²; the average energy is 1. It is seen that the effect of R on a two-dimensional vector is to rotate it by an angle 2π/M, and the effect of T is to exchange its components. The matrices generating the code are those associated to plane rotations. If Sh, Si are two elements, the intradistance set associated with the coset Si ...
A highly optimized vectorized code for Monte Carlo simulations of SU(3) lattice gauge theories
NASA Technical Reports Server (NTRS)
Barkai, D.; Moriarty, K. J. M.; Rebbi, C.
1984-01-01
New methods are introduced for improving the performance of the vectorized Monte Carlo SU(3) lattice gauge theory algorithm using the CDC CYBER 205. Structure, algorithm and programming considerations are discussed. The performance achieved for a 16⁴ lattice on a 2-pipe system may be phrased in terms of the link update time or overall MFLOPS rates. For 32-bit arithmetic, it is 36.3 microseconds/link for 8 hits per iteration (40.9 microseconds for 10 hits) or 101.5 MFLOPS.
Lattice QCD calculation using VPP500
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Seyong; Ohta, Shigemi
1995-02-01
A new vector parallel supercomputer, Fujitsu VPP500, was installed at RIKEN earlier this year. It consists of 30 vector computers, each with 1.6 GFLOPS peak speed and 256 MB memory, connected by a crossbar switch with 400 MB/s peak data transfer rate each way between any pair of nodes. The authors developed a Fortran lattice QCD simulation code for it. It runs at about 1.1 GFLOPS sustained per node for Metropolis pure-gauge update, and about 0.8 GFLOPS sustained per node for conjugate gradient inversion of staggered fermion matrix.
Accuracy comparison among different machine learning techniques for detecting malicious codes
NASA Astrophysics Data System (ADS)
Narang, Komal
2016-03-01
In this paper, a machine learning based model for malware detection is proposed. It can detect newly released malware, i.e., zero-day attacks, by analyzing operation codes on the Android operating system. The accuracy of Naïve Bayes, Support Vector Machine (SVM) and Neural Network classifiers for detecting malicious code has been compared for the proposed model. In the experiment, 400 benign files, 100 system files and 500 malicious files were used to construct the model. The model yields its best accuracy, 88.9%, when a neural network is used as the classifier, achieving 95% sensitivity and 82.8% specificity.
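A sketch of this kind of three-classifier comparison with scikit-learn, using random stand-in opcode-frequency features (the paper's actual features, file counts, and results are not reproduced here):

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB
    from sklearn.svm import SVC
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X = rng.random((1000, 50))                       # stand-in opcode-frequency vectors
    y = (X[:, :5].sum(axis=1) > 2.5).astype(int)     # synthetic benign/malicious labels

    for name, clf in [("Naive Bayes", GaussianNB()),
                      ("SVM", SVC(kernel="rbf", C=1.0)),
                      ("Neural Net", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500))]:
        acc = cross_val_score(clf, X, y, cv=5).mean()   # 5-fold cross-validated accuracy
        print(f"{name}: {acc:.3f}")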
Low-complexity video encoding method for wireless image transmission in capsule endoscope.
Takizawa, Kenichi; Hamaguchi, Kiyoshi
2010-01-01
This paper presents a low-complexity video encoding method applicable for wireless image transmission in capsule endoscopes. The encoding method is based on Wyner-Ziv theory, in which correlated information available at the receiver is exploited as side information, so that complex processes in video encoding, such as motion vector estimation, can be moved to the receiver side, which has a larger-capacity battery. As a result, the encoding process reduces to decimating the coded original data through channel coding. We provide a performance evaluation for a low-density parity check (LDPC) coding method in the AWGN channel.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cox, Jonathan A.
2015-12-02
This code implements the GloVe algorithm for learning word vectors from a text corpus. It uses a modern C++ approach. This algorithm is described in the open literature in the referenced paper by Pennington, Jeffrey, Richard Socher, and Christopher D. Manning.
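For reference, the algorithm fits word and context vectors to the log co-occurrence matrix with a weighted least-squares objective; a tiny NumPy sketch of one training pass follows (simplified to plain SGD on dense synthetic counts, whereas the reference implementation uses AdaGrad over sparse data):

    import numpy as np

    rng = np.random.default_rng(0)
    V, D = 100, 16                              # vocabulary size, vector dimension
    X = rng.poisson(1.0, size=(V, V))           # stand-in co-occurrence counts

    W = rng.normal(scale=0.1, size=(V, D))      # word vectors
    Wt = rng.normal(scale=0.1, size=(V, D))     # context vectors
    b = np.zeros(V); bt = np.zeros(V)
    x_max, alpha, lr = 100.0, 0.75, 0.05

    for i in range(V):
        for j in range(V):
            if X[i, j] < 1:
                continue                                  # GloVe only fits nonzero counts
            f = min(1.0, (X[i, j] / x_max) ** alpha)      # weighting function
            err = W[i] @ Wt[j] + b[i] + bt[j] - np.log(X[i, j])
            g = f * err
            dW, dWt = g * Wt[j], g * W[i]                 # gradients before updating
            W[i] -= lr * dW; Wt[j] -= lr * dWt
            b[i] -= lr * g;  bt[j] -= lr * g

The weighting function caps the influence of very frequent pairs, which is the key design choice distinguishing GloVe from a plain least-squares factorization of log counts.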
Applications of Support Vector Machine (SVM) Learning in Cancer Genomics
HUANG, SHUJUN; CAI, NIANGUANG; PACHECO, PEDRO PENZUTI; NARANDES, SHAVIRA; WANG, YANG; XU, WAYNE
2017-01-01
Machine learning with maximization (support) of the separating margin (vector), called support vector machine (SVM) learning, is a powerful classification tool that has been used for cancer genomic classification or subtyping. Today, as advancements in high-throughput technologies lead to production of large amounts of genomic and epigenomic data, the classification feature of SVMs is expanding its use in cancer genomics, leading to the discovery of new biomarkers, new drug targets, and a better understanding of cancer driver genes. Herein we review recent progress of SVMs in cancer genomic studies. We intend to comprehend the strength of SVM learning and its future perspective in cancer genomic applications.
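A typical sketch of the workflow such studies use: a linear SVM on high-dimensional expression data with cross-validation, shown here on synthetic data (the gene count, sample count, and class structure are assumptions for illustration):

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import LinearSVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 2000))        # 120 samples x 2000 genes (synthetic)
    y = np.repeat([0, 1], 60)               # two hypothetical tumor subtypes
    X[y == 1, :20] += 1.0                   # 20 informative genes separate the classes

    clf = make_pipeline(StandardScaler(), LinearSVC(C=0.01, max_iter=5000))
    print(cross_val_score(clf, X, y, cv=5).mean())

A linear kernel is the common default in this setting because samples are few and features many, and the learned weights double as a crude gene ranking for biomarker discovery.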
Kratzer, Markus; Lasnik, Michael; Röhrig, Sören; Teichert, Christian; Deluca, Marco
2018-01-11
Lead zirconate titanate (PZT) is one of the prominent materials used in polycrystalline piezoelectric devices. Since the ferroelectric domain orientation is the most important parameter affecting the electromechanical performance, analyzing the domain orientation distribution is of great importance for the development and understanding of improved piezoceramic devices. Here, vector piezoresponse force microscopy (vector-PFM) has been applied in order to reconstruct the ferroelectric domain orientation distribution function of polished sections of device-ready polycrystalline PZT material. A measurement procedure and a computer program based on the software Mathematica have been developed to automatically evaluate the vector-PFM data for reconstructing the domain orientation function. The method is tested on differently in-plane and out-of-plane poled PZT samples, and the results reveal the expected domain patterns and allow determination of the polarization orientation distribution function with high accuracy.
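A simplified sketch of the reconstruction step in vector-PFM (not the authors' Mathematica program): combine one out-of-plane and two orthogonal in-plane piezoresponse signals, assumed to be signed amplitudes on a common calibrated scale, into polar and azimuthal angles of the local polarization.

    import numpy as np

    def polarization_angles(v_oop, v_ip_x, v_ip_y):
        # v_oop: out-of-plane signal; v_ip_x / v_ip_y: in-plane signals from two
        # orthogonal cantilever orientations (all assumed calibrated to one scale).
        theta = np.arctan2(np.hypot(v_ip_x, v_ip_y), v_oop)  # polar angle from surface normal
        phi = np.arctan2(v_ip_y, v_ip_x)                     # azimuth in the sample plane
        return theta, phi

    # Example: a domain tilted 45 degrees toward +x
    theta, phi = polarization_angles(1.0, 1.0, 0.0)
    print(np.degrees(theta), np.degrees(phi))   # 45.0, 0.0

Histogramming (theta, phi) over many grains is then what yields the orientation distribution function described above.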
Compute Server Performance Results
NASA Technical Reports Server (NTRS)
Stockdale, I. E.; Barton, John; Woodrow, Thomas (Technical Monitor)
1994-01-01
Parallel-vector supercomputers have been the workhorses of high-performance computing. As expectations of future computing needs have risen faster than projected vector supercomputer performance, much work has been done investigating the feasibility of using Massively Parallel Processor systems as supercomputers. An even more recent development is the availability of high-performance workstations which have the potential, when clustered together, to replace parallel-vector systems. We present a systematic comparison of floating-point performance and price-performance for various compute server systems. A suite of highly vectorized programs was run on systems including traditional vector systems such as the Cray C90, and RISC workstations such as the IBM RS/6000 590 and the SGI R8000. The C90 system delivers 460 million floating point operations per second (FLOPS), the highest single-processor rate of any vendor. However, if the price-performance ratio (PPR) is considered to be most important, then the IBM and SGI processors are superior to the C90 processors. Even without code tuning, the IBM and SGI PPRs of 260 and 220 FLOPS per dollar exceed the C90 PPR of 160 FLOPS per dollar when running our highly vectorized suite.
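As a back-of-the-envelope reading of these figures (an inference from the numbers above, not a price quoted in the report), dividing delivered FLOPS by the price-performance ratio recovers an implied per-processor price:

```python
# Implied per-processor price from delivered FLOPS and FLOPS-per-dollar (PPR).
c90_flops, c90_ppr = 460e6, 160           # 460 MFLOPS at 160 FLOPS per dollar
print(f"implied C90 processor price: ${c90_flops / c90_ppr:,.0f}")  # roughly $2.9M
```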
Adenoviral Vector Immunity: Its Implications and circumvention strategies
Ahi, Yadvinder S.; Bangari, Dinesh S.; Mittal, Suresh K.
2014-01-01
Adenoviral (Ad) vectors have emerged as a promising gene delivery platform for a variety of therapeutic and vaccine purposes during the last two decades. However, the presence of preexisting Ad immunity and the rapid development of Ad vector immunity still pose significant challenges to the clinical use of these vectors. The innate inflammatory response following Ad vector administration may lead to systemic toxicity, drastically limit vector transduction efficiency, and significantly abbreviate the duration of transgene expression. Currently, a number of approaches are being extensively pursued to overcome these drawbacks by strategies that target either the host or the Ad vector. In addition, significant progress has been made in the development of novel Ad vectors based on less prevalent human Ad serotypes and nonhuman Ad. This review provides an update on our current understanding of immune responses to Ad vectors and delineates various approaches for eluding Ad vector immunity. Approaches targeting the host and those targeting the vector are discussed in light of their promises and limitations. PMID:21453277
An integrated vector system for cellular studies of phage display-derived peptides.
Voss, Stephan D; DeGrand, Alec M; Romeo, Giulio R; Cantley, Lewis C; Frangioni, John V
2002-09-15
Peptide phage display is a method by which large numbers of diverse peptides can be screened for binding to a target of interest. Even when successful, the rate-limiting step is usually validation of peptide bioactivity using living cells. In this paper, we describe an integrated system of vectors that expedites both the screening and the characterization processes. Library construction and screening is performed using an optimized type 3 phage display vector, mJ(1), which is shown to accept peptide libraries of at least 23 amino acids in length. Peptide coding sequences are shuttled from mJ(1) into one of three families of mammalian expression vectors for cell physiological studies. The vector pAL(1) expresses phage display-derived peptides as Gal4 DNA binding domain fusion proteins for transcriptional activation studies. The vectors pG(1), pG(1)N, and pG(1)C express phage display-derived peptides as green fluorescent protein fusions targeted to the entire cell, nucleus, or cytoplasm, respectively. The vector pAP(1) expresses phage display-derived peptides as fusions to secreted placental alkaline phosphatase. Such enzyme fusions can be used as highly sensitive affinity reagents for high-throughput assays and for cloning of peptide-binding cell surface receptors. Taken together, this system of vectors should facilitate the development of phage display-derived peptides into useful biomolecules.
Teaching Vectors Through an Interactive Game Based Laboratory
NASA Astrophysics Data System (ADS)
O'Brien, James; Sirokman, Gergely
2014-03-01
In recent years, science and particularly physics education has been furthered by the use of project-based interactive learning [1]. There is a tremendous amount of evidence [2] that use of these techniques in a college learning environment leads to a deeper appreciation and understanding of fundamental concepts. Since vectors are the basis for any advancement in physics and engineering courses, the cornerstone of any physics regimen is a concrete and comprehensive introduction to vectors. Here, we introduce a new turn-based vector game that we have developed to help supplement traditional vector learning practices, which allows students to be creative, work together as a team, and accomplish a goal through the understanding of basic vector concepts.
Range Sidelobe Response from the Use of Polyphase Signals in Spotlight Synthetic Aperture Radar
2015-12-01
...to describe the poly-phase signals at baseband. IQ notation is preferred for complex waveforms because it allows for an easy mathematical... variables. Once the Frank-coded phase vector is created, the IQ signal generation discussed in Chapter II was used to generate a Frank-coded phase...
1981-01-01
...channel, and study permutation codes as a special case. Such a code is generated by an initial vector x, a group G of orthogonal n-by-n matrices, and a... A scheme with random-access components is introduced and studied; under this scheme, the network stations are divided into groups, each of which is assigned a...
Particle-gas dynamics in the protoplanetary nebula
NASA Technical Reports Server (NTRS)
Cuzzi, Jeffrey N.; Champney, Joelle M.; Dobrovolskis, Anthony R.
1991-01-01
In the past year we made significant progress in improving our fundamental understanding of the physics of particle-gas dynamics in the protoplanetary nebula. Having brought our code to a state of fairly robust functionality, we devoted significant effort to optimizing it for running long cases. We optimized the code for vectorization to the extent that it now runs eight times faster than before. The following subject areas are covered: physical improvements to the model; numerical results; Reynolds averaging of fluid equations; and modeling of turbulence and viscosity.
RRTMGP: A High-Performance Broadband Radiation Code for the Next Decade
2014-09-30
Hardware counters were used to measure several performance metrics, including the number of double-precision (DP) floating-point operations (FLOPs)... 0.2 DP FLOPs per CPU cycle. Experience with production science codes is that it is possible to achieve execution rates in the range of 0.5 to 1.0... DP FLOPs per cycle. Looking at the ratio of vectorized DP FLOPs to total DP FLOPs, we see (Figure PROF) that for most of the execution time the...
[Construction and expression of recombinant lentiviral vectors of AKT2, PDK1 and BAD].
Zhu, Jing; Chen, Bo-Jiang; Huang, Na; Li, Wei-Min
2014-03-01
To construct human protein kinase B (AKT2), phosphoinositide-dependent kinase 1 (PDK1), and bcl-2-associated death protein (BAD) lentiviral expression vectors, and to determine their expression in 293T cells. Total RNA was extracted from lung cancer tissues. The full-length coding regions of human AKT2, BAD, and PDK1 cDNA were amplified via RT-PCR using specific primers, subcloned into pGEM-T Easy, and then sequenced for confirmation. Each full-length coding sequence was cut out with a specific restriction enzyme digest and subcloned into pCDF1-MCS2-EF1-copGFP. The plasmids were transfected into 293T cells using the calcium phosphate method. The overexpression of AKT2, BAD, and PDK1 was detected by Western blot. AKT2, PDK1, and BAD were subcloned into pCDF1-MCS2-EF1-copGFP, with transfection efficiencies of 100%, 95%, and 90%, respectively. The virus titers were 6.7 x 10(6) PFU/mL in the supernatant. After infection, the AKT2, PDK1, and BAD proteins were detected by Western blot. The lentiviral vectors pCDF1-MCS2-EF1-copGFP containing AKT2, BAD, and PDK1 were successfully constructed and expressed in 293T cells.
Generalized vector calculus on convex domain
NASA Astrophysics Data System (ADS)
Agrawal, Om P.; Xu, Yufeng
2015-06-01
In this paper, we apply recently proposed generalized integral and differential operators to develop generalized vector calculus and generalized variational calculus for problems defined over a convex domain. In particular, we present some generalization of Green's and Gauss divergence theorems involving some new operators, and apply these theorems to generalized variational calculus. For fractional power kernels, the formulation leads to fractional vector calculus and fractional variational calculus for problems defined over a convex domain. In special cases, when certain parameters take integer values, we obtain formulations for integer order problems. Two examples are presented to demonstrate applications of the generalized variational calculus which utilize the generalized vector calculus developed in the paper. The first example leads to a generalized partial differential equation and the second example leads to a generalized eigenvalue problem, both in two dimensional convex domains. We solve the generalized partial differential equation by using polynomial approximation. A special case of the second example is a generalized isoperimetric problem. We find an approximate solution to this problem. Many physical problems containing integer order integrals and derivatives are defined over arbitrary domains. We speculate that future problems containing fractional and generalized integrals and derivatives in fractional mechanics will be defined over arbitrary domains, and therefore, a general variational calculus incorporating a general vector calculus will be needed for these problems. This research is our first attempt in that direction.
Ono, Motoharu; Yamada, Kayo; Avolio, Fabio; Afzal, Vackar; Bensaddek, Dalila; Lamond, Angus I
2015-01-01
We have previously reported an antisense technology, 'snoMEN vectors', for targeted knock-down of protein coding mRNAs using human snoRNAs manipulated to contain short regions of sequence complementarity with the mRNA target. Here we characterise the use of snoMEN vectors to target the knock-down of micro RNA primary transcripts. We document the specific knock-down of miR21 in HeLa cells using plasmid vectors expressing miR21-targeted snoMEN RNAs and show this induces apoptosis. Knock-down is dependent on the presence of complementary sequences in the snoMEN vector and the induction of apoptosis can be suppressed by over-expression of miR21. Furthermore, we have also developed lentiviral vectors for delivery of snoMEN RNAs and show this increases the efficiency of vector transduction in many human cell lines that are difficult to transfect with plasmid vectors. Transduction of lentiviral vectors expressing snoMEN targeted to pri-miR21 induces apoptosis in human lung adenocarcinoma cells, which express high levels of miR21, but not in human primary cells. We show that snoMEN-mediated suppression of miRNA expression is prevented by siRNA knock-down of Ago2, but not by knock-down of Ago1 or Upf1. snoMEN RNAs colocalise with Ago2 in cell nuclei and nucleoli and can be co-immunoprecipitated from nuclear extracts by antibodies specific for Ago2.
Identification of spilled oils by NIR spectroscopy technology based on KPCA and LSSVM
NASA Astrophysics Data System (ADS)
Tan, Ailing; Bi, Weihong
2011-08-01
Oil spills on the sea surface occur relatively often with the growth of petroleum exploitation and marine transportation. Oil spills are a great threat to the marine environment and the ecosystem, so oil pollution in the ocean has become an urgent topic in environmental protection. To support oil-spill accident treatment programs and trace the source of spilled oils, a novel qualitative identification method combining Kernel Principal Component Analysis (KPCA) and Least Squares Support Vector Machine (LSSVM) is proposed. The method uses a Fourier-transform NIR spectrophotometer to collect spectral data from simulated gasoline, diesel fuel, and kerosene oil-spill samples, and applies several pretreatments to the original spectra. The KPCA algorithm, an extension of Principal Component Analysis (PCA) using kernel methods, is used to extract nonlinear features from the preprocessed spectra. Support Vector Machines (SVM) are a powerful methodology for solving spectral classification tasks in chemometrics. LSSVMs are reformulations of standard SVMs that lead to solving a system of linear equations. An LSSVM multiclass classification model was therefore designed using the Error-Correcting Output Code (ECOC) method, which borrows the idea of error-correcting codes used to correct bit errors in transmission channels. The most common and reliable approach to parameter selection is to decide on parameter ranges and then do a grid search over the parameter space to find the optimal model parameters. To test the proposed method, 375 spilled-oil samples of unknown type were selected for study. The optimal model has the best identification capability, with an accuracy of 97.8%. Experimental results show that the proposed KPCA-plus-LSSVM qualitative analysis method for near-infrared spectroscopy achieves good recognition results and could serve as a new method for rapid identification of spilled oils.
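A sketch of the pipeline in scikit-learn terms, with the standard SVC standing in for the paper's LSSVM (which replaces the SVM's quadratic program with a linear system) and hypothetical variable names for the spectra:

```python
# Pipeline sketch: KPCA feature extraction followed by an ECOC multiclass SVM,
# with a grid search over kernel parameters. X_spectra / oil_type are
# hypothetical names for the preprocessed NIR spectra and oil labels.
from sklearn.decomposition import KernelPCA
from sklearn.model_selection import GridSearchCV
from sklearn.multiclass import OutputCodeClassifier
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

pipe = Pipeline([
    ("kpca", KernelPCA(n_components=10, kernel="rbf")),          # nonlinear features
    ("ecoc", OutputCodeClassifier(SVC(kernel="rbf"), code_size=2, random_state=0)),
])
grid = GridSearchCV(pipe, {
    "kpca__gamma": [1e-3, 1e-2, 1e-1],
    "ecoc__estimator__C": [1.0, 10.0, 100.0],
    "ecoc__estimator__gamma": [1e-2, 1e-1, 1.0],
}, cv=3)
# grid.fit(X_spectra, oil_type)   # fit on spectra; grid search picks the model
```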
A vector scanning processing technique for pulsed laser velocimetry
NASA Technical Reports Server (NTRS)
Wernet, Mark P.; Edwards, Robert V.
1989-01-01
Pulsed laser sheet velocimetry yields nonintrusive measurements of two-dimensional velocity vectors across an extended planar region of a flow. Current processing techniques offer high-precision (1 pct) velocity estimates, but can require several hours of processing time on specialized array processors. Under some circumstances, a simple, fast, less accurate (approx. 5 pct) data reduction technique which also gives unambiguous velocity vector information is acceptable. A direct space domain processing technique was examined and was found to be far superior to the other known techniques in achieving the objectives listed above. It employs a new data coding and reduction technique in which the particle time history information is used directly. Further, it has no 180 deg directional ambiguity. A complex convection vortex flow was recorded and completely processed in under 2 minutes on an 80386-based PC, producing a 2-D velocity vector map of the flow field. Hence, using this new space domain vector scanning (VS) technique, pulsed laser velocimetry data can be reduced quickly and reasonably accurately, without specialized array processing hardware.
Automatically Preparing Safe SQL Queries
NASA Astrophysics Data System (ADS)
Bisht, Prithvi; Sistla, A. Prasad; Venkatakrishnan, V. N.
We present the first sound program source transformation approach for automatically transforming the code of a legacy web application to employ PREPARE statements in place of unsafe SQL queries. Our approach therefore opens the way for eradicating the SQL injection threat vector from legacy web applications.
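The before/after of such a transformation can be illustrated with Python's DB-API (a generic example of the prepared-statement idea, not the authors' transformation tool):

```python
# Unsafe string-built SQL versus a prepared/parameterized statement.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
name = "alice' OR '1'='1"  # attacker-controlled input

# Unsafe: the input is concatenated into the SQL text and can inject syntax.
unsafe = "SELECT role FROM users WHERE name = '" + name + "'"

# Safe: the placeholder keeps data out of the SQL syntax entirely.
rows = conn.execute("SELECT role FROM users WHERE name = ?", (name,)).fetchall()
```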
3D face recognition based on multiple keypoint descriptors and sparse representation.
Zhang, Lin; Ding, Zhixuan; Li, Hongyu; Shen, Ying; Lu, Jianwei
2014-01-01
Recent years have witnessed a growing interest in developing methods for 3D face recognition. However, 3D scans often suffer from the problems of missing parts, large facial expressions, and occlusions. To be useful in real-world applications, a 3D face recognition approach should be able to handle these challenges. In this paper, we propose a novel general approach to deal with the 3D face recognition problem by making use of multiple keypoint descriptors (MKD) and the sparse representation-based classification (SRC). We call the proposed method 3DMKDSRC for short. Specifically, with 3DMKDSRC, each 3D face scan is represented as a set of descriptor vectors extracted from keypoints by meshSIFT. Descriptor vectors of gallery samples form the gallery dictionary. Given a probe 3D face scan, its descriptors are extracted at first and then its identity can be determined by using a multitask SRC. The proposed 3DMKDSRC approach does not require the pre-alignment between two face scans and is quite robust to the problems of missing data, occlusions and expressions. Its superiority over the other leading 3D face recognition schemes has been corroborated by extensive experiments conducted on three benchmark databases, Bosphorus, GavabDB, and FRGC2.0. The Matlab source code for 3DMKDSRC and the related evaluation results are publicly available at http://sse.tongji.edu.cn/linzhang/3dmkdsrcface/3dmkdsrc.htm.
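The sparse-representation step can be sketched with orthogonal matching pursuit: a probe descriptor is coded over the gallery dictionary and assigned to the class with the smallest reconstruction residual (a schematic of SRC in Python, not the authors' Matlab code):

```python
# Schematic of the sparse-representation classification (SRC) step, with
# orthogonal matching pursuit as the sparse coder.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def src_classify(D, classes, probe, n_nonzero=10):
    """D: (d, n) gallery descriptor dictionary; classes: (n,) labels; probe: (d,)."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero).fit(D, probe)
    x = omp.coef_                                  # sparse code over the gallery
    residuals = {c: np.linalg.norm(probe - D @ np.where(classes == c, x, 0.0))
                 for c in np.unique(classes)}      # per-class reconstruction error
    return min(residuals, key=residuals.get)       # class with smallest residual
```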
SSM/I and ECMWF Wind Vector Comparison
NASA Technical Reports Server (NTRS)
Wentz, Frank J.; Ashcroft, Peter D.
1996-01-01
Wentz was the first to convincingly show that satellite microwave radiometers have the potential to measure the oceanic wind vector. The most compelling evidence for this conclusion was the monthly wind vector maps derived solely from a statistical analysis of Special Sensor Microwave Imager (SSM/I) observations. In a qualitative sense, these maps clearly showed the general circulation over the world's oceans. In this report we take a closer look at the SSM/I monthly wind vector maps and compare them to European Center for Medium-Range Weather Forecasts (ECMWF) wind fields. This investigation leads both to an empirical comparison of SSM/I calculated wind vectors with ECMWF wind vectors, and to an examination of possible reasons that the SSM/I calculated wind vector direction would be inherently more reliable at some locations than others.
Optical simulation of a Popescu-Rohrlich Box
Chu, Wen-Jing; Zong, Xiao-Lan; Yang, Ming; Pan, Guo-Zhu; Cao, Zhuo-Liang
2016-01-01
It is well known that the fair-sampling loophole in Bell tests, opened by the selection of the state to be measured, can lead to post-quantum correlations. In this paper, we make the selection of the results after measurement, which opens the fair-sampling loophole too and thus can also lead to post-quantum correlations. This kind of result-selection loophole can be realized by pre- and post-selection processes within the “two-state vector formalism”, and a physical simulation of the Popescu-Rohrlich (PR) box is designed in a linear optical system. The probability distribution of the PR box has the maximal CHSH value of 4, i.e., it maximally violates the CHSH inequality. Because the “two-state vector formalism” violates information causality, it opens the locality loophole too, which means that this kind of result selection within the “two-state vector formalism” leads to both the fair-sampling loophole and the locality loophole; we therefore call it a comprehensive loophole in Bell tests. The comprehensive loophole opened by result selection within the “two-state vector formalism” may be another possible explanation of why post-quantum correlations are incompatible with quantum mechanics and seem not to exist in nature. PMID:27329203
Development of apple latent spherical virus-based vaccines against three tospoviruses.
Taki, Ayano; Yamagishi, Noriko; Yoshikawa, Nobuyuki
2013-09-01
Apple latent spherical virus (ALSV) is characterized by its relatively broad host range, latency in most host plants, and ability to induce gene silencing in host plants. Herein, we focus on these characteristics of ALSV and describe our development of ALSV vector vaccines against three tospoviruses, namely, Impatiens necrotic spot virus (INSV), Iris yellow spot virus (IYSV), and Tomato spotted wilt virus (TSWV). DNA fragments of 201 nt of the three tospovirus S-RNAs (the silencing suppressor (NSs) and nucleocapsid protein (N) coding regions for each tospovirus) were inserted into an ALSV-RNA2 vector to obtain six types of ALSV vector vaccines. Nicotiana benthamiana plants at the five-leaf stage were inoculated with each ALSV vector vaccine and challenged with the corresponding tospovirus species. Tospovirus-induced symptoms and tospovirus replication after challenge were significantly suppressed in plants preinoculated with all ALSV vector vaccines having the N region fragment, indicating that strong resistance was acquired after infection with ALSV vector vaccines. On the other hand, cross protection was not significant in plants preinoculated with ALSV vectors having the NSs region fragment. Similarly, inoculation with an ALSV-RNA1 vector having the N region fragment in the 3'-noncoding region, but not the NSs region fragment, induced cross protection, indicating that cross protection operates via RNA silencing, not via the function of the protein derived from the N region fragment. Our approach, wherein ALSV vectors and selected target inserts are used, enables rapid establishment of ALSV vector vaccines against many pathogenic RNA viruses with known sequences. Copyright © 2013 Elsevier B.V. All rights reserved.
Sena-Esteves, Miguel; Saeki, Yoshinaga; Camp, Sara M.; Chiocca, E. Antonio; Breakefield, Xandra O.
1999-01-01
We report here on the development and characterization of a novel herpes simplex virus type 1 (HSV-1) amplicon-based vector system which takes advantage of the host range and retention properties of HSV–Epstein-Barr virus (EBV) hybrid amplicons to efficiently convert cells to retrovirus vector producer cells after single-step transduction. The retrovirus genes gag-pol and env (GPE) and retroviral vector sequences were modified to minimize sequence overlap and cloned into an HSV-EBV hybrid amplicon. Retrovirus expression cassettes were used to generate the HSV-EBV-retrovirus hybrid vectors, HERE and HERA, which code for the ecotropic and the amphotropic envelopes, respectively. Retrovirus vector sequences encoding lacZ were cloned downstream from the GPE expression unit. Transfection of 293T/17 cells with amplicon plasmids yielded retrovirus titers between 106 and 107 transducing units/ml, while infection of the same cells with amplicon vectors generated maximum titers 1 order of magnitude lower. Retrovirus titers were dependent on the extent of transduction by amplicon vectors for the same cell line, but different cell lines displayed varying capacities to produce retrovirus vectors even at the same transduction efficiencies. Infection of human and dog primary gliomas with this system resulted in the production of retrovirus vectors for more than 1 week and the long-term retention and increase in transgene activity over time in these cell populations. Although the efficiency of this system still has to be determined in vivo, many applications are foreseeable for this approach to gene delivery. PMID:10559361
NASA Astrophysics Data System (ADS)
Hauth, T.; Innocente, V.; Piparo, D.
2012-12-01
The processing of data acquired by the CMS detector at the LHC is carried out with an object-oriented C++ software framework: CMSSW. With the increasing luminosity delivered by the LHC, the treatment of recorded data requires extraordinarily large computing resources, also in terms of CPU usage. A possible solution to cope with this task is the exploitation of the features offered by the latest microprocessor architectures. Modern CPUs present several vector units, the capacity of which is growing steadily with the introduction of new processor generations. Moreover, an increasing number of cores per die is offered by the main vendors, even on consumer hardware. Most recent C++ compilers provide facilities to take advantage of such innovations, either by explicit statements in the program sources or by automatically adapting the generated machine instructions to the available hardware, without the need to modify the existing code base. Programming techniques to implement reconstruction algorithms and optimised data structures are presented that aim at scalable vectorization and parallelization of the calculations. One of their features is the usage of new language features of the C++11 standard. Portions of the CMSSW framework are illustrated which have been found to be especially profitable for the application of vectorization and multi-threading techniques. Specific utility components have been developed to help vectorization and parallelization. They can easily become part of a larger common library. To conclude, careful measurements are described, which show the execution speedups achieved via vectorised and multi-threaded code in the context of CMSSW.
NASA Technical Reports Server (NTRS)
Nguyen, Duc T.
1990-01-01
Practical engineering applications can often be formulated as constrained optimization problems. There are several solution algorithms for solving a constrained optimization problem. One approach is to convert a constrained problem into a series of unconstrained problems; furthermore, unconstrained solution algorithms can be used as part of the constrained solution algorithms. Structural optimization is an iterative process where one starts with an initial design, and a finite element structural analysis is then performed to calculate the response of the system (such as displacements, stresses, eigenvalues, etc.). Based upon the sensitivity information on the objective and constraint functions, an optimizer, such as ADS or IDESIGN, can be used to find the new, improved design. For the structural analysis phase, the equation solver for the system of simultaneous linear equations plays a key role since it is needed for static, eigenvalue, or dynamic analysis. For practical, large-scale structural analysis-synthesis applications, computational time can be excessively large. Thus, it is necessary to have a new structural analysis-synthesis code which employs new solution algorithms to exploit both the parallel and vector capabilities offered by modern, high-performance computers such as the Convex, Cray-2, and Cray-YMP computers. The objective of this research project is, therefore, to incorporate the latest developments in the parallel-vector equation solver PVSOLVE into the widely popular finite-element production code SAP-4. Furthermore, several nonlinear unconstrained optimization subroutines have also been developed and tested under a parallel computer environment. The unconstrained optimization subroutines are not only useful in their own right, but they can also be incorporated into a more popular constrained optimization code, such as ADS.
An efficient and portable SIMD algorithm for charge/current deposition in Particle-In-Cell codes
NASA Astrophysics Data System (ADS)
Vincenti, H.; Lobet, M.; Lehe, R.; Sasanka, R.; Vay, J.-L.
2017-01-01
In current computer architectures, data movement (from die to network) is by far the most energy-consuming part of an algorithm (≈20 pJ/word on-die to ≈10,000 pJ/word on the network). To increase memory locality at the hardware level and reduce energy consumption related to data movement, future exascale computers tend to use many-core processors on each compute node that will have a reduced clock speed to allow for efficient cooling. To compensate for the frequency decrease, machine vendors are making use of long SIMD instruction registers that are able to process multiple data with one arithmetic operation in one clock cycle. SIMD register length is expected to double every four years. As a consequence, Particle-In-Cell (PIC) codes will have to achieve good vectorization to fully take advantage of these upcoming architectures. In this paper, we present a new algorithm that allows for efficient and portable SIMD vectorization of current/charge deposition routines that are, along with the field gathering routines, among the most time-consuming parts of the PIC algorithm. Our new algorithm uses a particular data structure that takes into account memory alignment constraints and avoids gather/scatter instructions that can significantly affect vectorization performance on current CPUs. The new algorithm was successfully implemented in the 3D skeleton PIC code PICSAR and tested on Haswell Xeon processors (AVX2, 256-bit-wide data registers). Results show a factor of 2 to 2.5 speed-up in double precision for particle shape factors of orders 1-3. The new algorithm can be applied as is on future KNL (Knights Landing) architectures that will include AVX-512 instruction sets with 512-bit register lengths (8 doubles/16 singles).
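The gather/scatter hazard the algorithm avoids is easy to reproduce: when several particles deposit into the same cell, a naively vectorized indexed write silently drops the colliding contributions. A 1-D schematic with the lowest-order shape factor, not the PICSAR implementation:

```python
# 1-D deposition schematic: a naive vectorized indexed write loses colliding
# contributions; np.add.at performs the correct scatter-add.
import numpy as np

rng = np.random.default_rng(1)
nx, n_part, dx = 64, 10_000, 1.0
x = rng.uniform(0, nx * dx, n_part)       # particle positions
q = np.full(n_part, 1.0)                  # particle charges
idx = (x / dx).astype(int)                # index of the cell containing each particle

rho_wrong = np.zeros(nx)
rho_wrong[idx] += q                       # WRONG: repeated indices overwrite
rho = np.zeros(nx)
np.add.at(rho, idx, q)                    # correct: accumulates all collisions
print(rho_wrong.sum(), rho.sum())         # only the second equals the total charge
```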
User's manual for CBS3DS, version 1.0
NASA Astrophysics Data System (ADS)
Reddy, C. J.; Deshpande, M. D.
1995-10-01
CBS3DS is a computer code written in FORTRAN 77 to compute the backscattering radar cross section of cavity-backed apertures in an infinite ground plane and slots in a thick infinite ground plane. CBS3DS implements hybrid Finite Element Method (FEM) and Method of Moments (MoM) techniques. The code uses tetrahedral elements with vector edge basis functions for the FEM in the volume of the cavity/slot, and triangular elements with corresponding basis functions for the MoM at the apertures. By virtue of the FEM, the code can handle arbitrarily shaped three-dimensional cavities filled with inhomogeneous lossy materials; by virtue of the MoM, the apertures can be of any arbitrary shape. The User's Manual is written to acquaint the user with the operation of the code. The user is assumed to be familiar with the FORTRAN 77 language and the operating environment of the computer on which the code is intended to run.
Ribeiro, José M. C.; Schwarz, Alexandra; Francischetti, Ivo M. B.
2015-01-01
Saliva of blood-sucking arthropods contains a complex cocktail of pharmacologically active compounds that assists feeding by counteracting their hosts’ hemostatic and inflammatory reactions. Panstrongylus megistus (Burmeister) is an important vector of Chagas disease in South America, but despite its importance there is only one salivary protein sequence publicly deposited in GenBank. In the present work, we used Illumina technology to disclose and publicly deposit 3,703 coding sequences obtained from the assembly of >70 million reads. These sequences should assist proteomic experiments aimed at identifying pharmacologically active proteins and immunological markers of vector exposure. A supplemental file of the transcriptome and deducted protein sequences can be obtained from http://exon.niaid.nih.gov/transcriptome/P_megistus/Pmeg-web.xlsx. PMID:26334808
Object recognition of real targets using modelled SAR images
NASA Astrophysics Data System (ADS)
Zherdev, D. A.
2017-12-01
In this work, the problem of object recognition in SAR images is studied. The recognition algorithm is based on the computation of conjugation indices with class vectors. Support subspaces for each class are constructed by excluding the most and the least correlated vectors in a class. In the study, we examine the possibility of a significant reduction of the feature vector size, which leads to a decrease in recognition time. The images of targets form the feature vectors, which are transformed using a pre-trained convolutional neural network (CNN).
Assessment of Climate Change and Vector-borne Diseases in the United States
NASA Astrophysics Data System (ADS)
Monaghan, A. J.; Beard, C. B.; Eisen, R. J.; Barker, C. M.; Garofalo, J.; Hahn, M.; Hayden, M.; Ogden, N.; Schramm, P.
2016-12-01
Vector-borne diseases are illnesses that are transmitted by vectors, which include mosquitoes, ticks, and fleas. The seasonality, distribution, and prevalence of vector-borne diseases are influenced significantly by climate factors, primarily high and low temperature extremes and precipitation patterns. In this presentation we summarize key findings from Chapter 5 ("Vector-borne Diseases") of the recently published USGCRP Scientific Assessment of the Impacts of Climate Change on Human Health in the United States. Climate change is expected to alter geographic and seasonal distributions of vectors and vector-borne diseases, leading to earlier activity and northward range expansion of ticks capable of carrying the bacteria that cause Lyme disease and other pathogens, and influencing the distribution, abundance and prevalence of infection in mosquitoes that transmit West Nile virus and other pathogens. The emergence or reemergence of vector-borne pathogens is also likely.
Applications of Support Vector Machine (SVM) Learning in Cancer Genomics.
Huang, Shujun; Cai, Nianguang; Pacheco, Pedro Penzuti; Narrandes, Shavira; Wang, Yang; Xu, Wayne
2018-01-01
Machine learning with maximization (support) of separating margin (vector), called support vector machine (SVM) learning, is a powerful classification tool that has been used for cancer genomic classification or subtyping. Today, as advancements in high-throughput technologies lead to production of large amounts of genomic and epigenomic data, the classification feature of SVMs is expanding its use in cancer genomics, leading to the discovery of new biomarkers, new drug targets, and a better understanding of cancer driver genes. Herein we reviewed the recent progress of SVMs in cancer genomic studies. We intend to comprehend the strength of the SVM learning and its future perspective in cancer genomic applications. Copyright© 2018, International Institute of Anticancer Research (Dr. George J. Delinasios), All rights reserved.
NASA Technical Reports Server (NTRS)
Chen, D. W.; Sengupta, S. K.; Welch, R. M.
1989-01-01
This paper compares the results of cloud-field classification derived from two simplified vector approaches, the Sum and Difference Histogram (SADH) and the Gray Level Difference Vector (GLDV), with the results produced by the Gray Level Cooccurrence Matrix (GLCM) approach described by Welch et al. (1988). It is shown that the SADH method produces accuracies equivalent to those obtained using the GLCM method, while the GLDV method fails to resolve error clusters. Compared to the GLCM method, the SADH method leads to a 31 percent saving in run time and a 50 percent saving in storage requirements, while the GLDV approach leads to a 40 percent saving in run time and an 87 percent saving in storage requirements.
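The storage saving comes from replacing the levels x levels co-occurrence matrix with two one-dimensional histograms of pixel sums and differences; a compact sketch of the SADH computation (the displacement and gray-level parameters here are illustrative):

```python
# Sum and difference histograms for pixel pairs at displacement (dx, dy); these
# two 1-D histograms replace the (levels x levels) co-occurrence matrix.
import numpy as np

def sum_difference_histograms(img, dx=1, dy=0, levels=256):
    a = img[:img.shape[0] - dy, :img.shape[1] - dx].astype(int)
    b = img[dy:, dx:].astype(int)
    h_sum = np.bincount((a + b).ravel(), minlength=2 * levels - 1)   # sums 0..2L-2
    h_diff = np.bincount((a - b).ravel() + levels - 1, minlength=2 * levels - 1)
    return h_sum, h_diff

img = np.random.default_rng(0).integers(0, 256, size=(32, 32))
h_sum, h_diff = sum_difference_histograms(img)   # texture features derive from these
```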
Towers of generalized divisible quantum codes
NASA Astrophysics Data System (ADS)
Haah, Jeongwan
2018-04-01
A divisible binary classical code is one in which every code word has weight divisible by a fixed integer. If the divisor is 2^ν for a positive integer ν, then one can construct a Calderbank-Shor-Steane (CSS) code, where the X-stabilizer space is the divisible classical code, that admits a transversal gate in the νth level of the Clifford hierarchy. We consider a generalization of the divisibility by allowing a coefficient vector of odd integers with which every code word has zero dot product modulo the divisor. In this generalized sense, we construct a CSS code with divisor 2^(ν+1) and code distance d from any CSS code of code distance d and divisor 2^ν where the transversal X is a nontrivial logical operator. The encoding rate of the new code is approximately d times smaller than that of the old code. In particular, for large d and ν ≥ 2, our construction yields a CSS code of parameters [[O(d^(ν-1)), Ω(d), d]] admitting a transversal gate at the νth level of the Clifford hierarchy. For our construction we introduce a conversion from magic state distillation protocols based on Clifford measurements to those based on codes with transversal T gates. Our tower contains, as a subclass, generalized triply even CSS codes that have appeared in so-called gauge fixing or code switching methods.
Holographic implementation of a binary associative memory for improved recognition
NASA Astrophysics Data System (ADS)
Bandyopadhyay, Somnath; Ghosh, Ajay; Datta, Asit K.
1998-03-01
Neural network associative memory has found wide application in pattern recognition techniques. We propose an associative memory model for binary character recognition. The interconnection strengths of the memory are binary valued. The concept of sparse coding is used to enhance the storage efficiency of the model. The imposed preconditioning of pattern vectors, which is inherent in a sparsely coded conventional memory, is eliminated by using a multistep correlation technique, and the ability of correct association is enhanced in real-time applications. A potential optoelectronic implementation of the proposed associative memory is also described. Learning and recall are possible using digital optical matrix-vector multiplication, making full use of the parallelism and connectivity of optics. A hologram is used in the experiment as a long-term memory (LTM) for storing all input information. The short-term memory, or the interconnection weight matrix required during the recall process, is configured by retrieving the necessary information from the holographic LTM.
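The recall step described here is, at bottom, a thresholded matrix-vector product between a binary weight matrix and a probe pattern. A minimal sketch of a sparsely coded binary correlation memory (a Willshaw-style model used for illustration, not the authors' optical design):

```python
# Sparsely coded binary correlation memory: storage is a clipped Hebbian outer
# product; recall is a thresholded matrix-vector multiplication.
import numpy as np

rng = np.random.default_rng(3)
n, k, n_pat = 64, 4, 5
patterns = np.zeros((n_pat, n), dtype=np.uint8)
for p in patterns:                        # sparse coding: k active bits per pattern
    p[rng.choice(n, size=k, replace=False)] = 1

W = np.zeros((n, n), dtype=np.uint8)
for p in patterns:                        # binary (clipped) Hebbian storage
    W |= np.outer(p, p)

probe = patterns[0].copy()
probe[np.flatnonzero(probe)[0]] = 0       # damage one active bit
recall = (W @ probe >= probe.sum()).astype(np.uint8)   # thresholded recall
print("recovered stored pattern:", bool((recall == patterns[0]).all()))  # typically True
```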
NASA Technical Reports Server (NTRS)
Orzechowski, J. A.
1982-01-01
The CMC fluid mechanics program system was developed to translate the theoretical evolution of finite element numerical solution methodology, applied to nonlinear field problems, into a versatile computer code for comprehensive flow field analysis. A detailed view of the code from the standpoint of a computer programmer's use is presented. A system macroflow chart and detailed flow charts of several routines necessary for a theoretician/user to modify the operation of this program are presented. All subroutines and details of usage, primarily for input and output routines, are described. Integer and real scalars, and a cross-reference list denoting subroutine usage for these scalars, are outlined. Entry points in the dynamic storage vector IZ and the lengths of each vector accompanying the scalar definitions are described. A listing of the routines peculiar to the standard test case and a listing of the input deck and printout for this case are included.
The solution of linear systems of equations with a structural analysis code on the NAS CRAY-2
NASA Technical Reports Server (NTRS)
Poole, Eugene L.; Overman, Andrea L.
1988-01-01
Two methods for solving linear systems of equations on the NAS Cray-2 are described. One is a direct method; the other is an iterative method. Both methods exploit the architecture of the Cray-2, particularly the vectorization, and are aimed at structural analysis applications. To demonstrate and evaluate the methods, they were installed in a finite element structural analysis code denoted the Computational Structural Mechanics (CSM) Testbed. A description of the techniques used to integrate the two solvers into the Testbed is given. Storage schemes, memory requirements, operation counts, and reformatting procedures are discussed. Finally, results from the new methods are compared with results from the initial Testbed sparse Choleski equation solver for three structural analysis problems. The new direct solvers described achieve the highest computational rates of the methods compared. The new iterative methods are not able to achieve as high computation rates as the vectorized direct solvers but are best for well conditioned problems which require fewer iterations to converge to the solution.
Portable implementation model for CFD simulations. Application to hybrid CPU/GPU supercomputers
NASA Astrophysics Data System (ADS)
Oyarzun, Guillermo; Borrell, Ricard; Gorobets, Andrey; Oliva, Assensi
2017-10-01
Nowadays, high performance computing (HPC) systems experience a disruptive moment with a variety of novel architectures and frameworks, without any clarity of which one is going to prevail. In this context, the portability of codes across different architectures is of major importance. This paper presents a portable implementation model based on an algebraic operational approach for direct numerical simulation (DNS) and large eddy simulation (LES) of incompressible turbulent flows using unstructured hybrid meshes. The strategy proposed consists in representing the whole time-integration algorithm using only three basic algebraic operations: sparse matrix-vector product, a linear combination of vectors and dot product. The main idea is based on decomposing the nonlinear operators into a concatenation of two SpMV operations. This provides high modularity and portability. An exhaustive analysis of the proposed implementation for hybrid CPU/GPU supercomputers has been conducted with tests using up to 128 GPUs. The main objective consists in understanding the challenges of implementing CFD codes on new architectures.
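The decomposition can be illustrated with a conjugate-gradient solve written using only the three operations named above: SpMV, linear combinations of vectors (axpy), and dot products (an illustration of the idea, not the unstructured LES solver itself):

```python
# Conjugate gradient expressed with only SpMV, axpy, and dot operations.
import numpy as np
from scipy.sparse import diags

A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(100, 100)).tocsr()
b = np.ones(100)

x = np.zeros_like(b)
r = b - A @ x                      # SpMV
p = r.copy()
rs = r @ r                         # dot
for _ in range(1000):
    Ap = A @ p                     # SpMV
    alpha = rs / (p @ Ap)          # dot
    x += alpha * p                 # axpy
    r -= alpha * Ap                # axpy
    rs_new = r @ r                 # dot
    if np.sqrt(rs_new) < 1e-10:
        break
    p = r + (rs_new / rs) * p      # axpy
    rs = rs_new
```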
Simulating The Prompt Electromagnetic Pulse In 3D Using Vector Spherical Harmonics
NASA Astrophysics Data System (ADS)
Friedman, Alex; Cohen, Bruce I.; Eng, Chester D.; Farmer, William A.; Grote, David P.; Kruger, Hans W.; Larson, David J.
2017-10-01
We describe a new, efficient code for simulating the prompt electromagnetic pulse. In SHEMP ("Spherical Harmonic EMP"), we extend to 3-D the methods pioneered in C. Longmire's CHAP code. The geomagnetic field and air density are consistent with CHAP's assumed spherical symmetry only for narrow domains of influence about the line of sight, limiting validity to very early times. Also, we seek to model inherently 3-D situations. In CHAP and our own CHAP-lite, the independent coordinates are r (the distance from the source) and τ = t - r/c; the pulse varies slowly with r at fixed τ, so a coarse radial grid suffices. We add non-spherically-symmetric physics via a vector spherical harmonic decomposition. For each (l,m) harmonic, the radial equation is similar to that in CHAP and CHAP-lite. We present our methodology and results on model problems. This work was performed under the auspices of the U.S. DOE by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Development of Plant Gene Vectors for Tissue-Specific Expression Using GFP as a Reporter Gene
NASA Technical Reports Server (NTRS)
Jackson, Jacquelyn; Egnin, Marceline; Xue, Qi-Han; Prakash, C. S.
1997-01-01
Reporter genes are widely employed in plant molecular biology research to analyze gene expression and to identify promoters. Gus (UidA) is currently the most popular reporter gene, but its detection requires a destructive assay. The use of the jellyfish green fluorescent protein (GFP) gene from Aequorea victoria holds promise for noninvasive detection of in vivo gene expression. To study how various plant promoters are expressed in sweet potato (Ipomoea batatas), we are transcriptionally fusing the intron-modified (mGFP) or synthetic (modified for codon usage) GFP coding regions to these promoters: double cauliflower mosaic virus 35S (CaMV 35S) with the AMV translational enhancer, ubiquitin7-intron-ubiquitin coding region (ubi7-intron-UQ), and sporaminA. A few of these vectors have been constructed and introduced into E. coli DH5a and Agrobacterium tumefaciens EHA105. Transient expression studies are underway using protoplast electroporation and particle bombardment of leaf tissues.
Swiercz, Miroslaw; Kochanowicz, Jan; Weigele, John; Hurst, Robert; Liebeskind, David S; Mariak, Zenon; Melhem, Elias R; Krejza, Jaroslaw
2008-01-01
To determine the performance of an artificial neural network in transcranial color-coded duplex sonography (TCCS) diagnosis of middle cerebral artery (MCA) spasm. TCCS was prospectively acquired within 2 h prior to routine cerebral angiography in 100 consecutive patients (54M:46F, median age 50 years). Angiographic MCA vasospasm was classified as mild (<25% of vessel caliber reduction), moderate (25-50%), or severe (>50%). A Learning Vector Quantization neural network classified MCA spasm based on TCCS peak-systolic, mean, and end-diastolic velocity data. During a four-class discrimination task, accurate classification by the network ranged from 64.9% to 72.3%, depending on the number of neurons in the Kohonen layer. Accurate classification of vasospasm ranged from 79.6% to 87.6%, with an accuracy of 84.7% to 92.1% for the detection of moderate-to-severe vasospasm. An artificial neural network may increase the accuracy of TCCS in diagnosis of MCA spasm.
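For reference, the LVQ1 rule such a network uses is simple: the nearest prototype is attracted to a correctly classified sample and repelled by a misclassified one. A generic sketch (the study's Kohonen-layer sizing and clinical data are not reproduced here; the feature names follow the abstract):

```python
# Generic LVQ1 training step: move the winning prototype toward a correctly
# classified sample and away from a misclassified one.
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, lr=0.05, epochs=30):
    """X: (m, 3) samples; prototypes: (p, 3) float array; labels are class ids."""
    P = prototypes.copy()
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            w = np.argmin(np.linalg.norm(P - xi, axis=1))   # winning prototype
            step = lr * (xi - P[w])
            P[w] += step if proto_labels[w] == yi else -step
    return P

# Columns of X would be peak-systolic, mean, and end-diastolic velocities.
```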
Advanced Techniques for Scene Analysis
2010-06-01
...robustness prefers a bigger integration window to handle larger motions. The advantage of the pyramidal implementation is that, while each motion vector dL... labeled SAR images. The previous algorithm leads to a more dedicated classifier for the particular target; however, our algorithm trades generality for... accuracy is traded for generality. 7.3.2 I-RELIEF: Feature weighting transforms the original feature vector x into a new feature vector x′ by assigning each...
Cereal transformation through particle bombardment
NASA Technical Reports Server (NTRS)
Casas, A. M.; Kononowicz, A. K.; Bressan, R. A.; Hasegawa, P. M.; Mitchell, C. A. (Principal Investigator)
1995-01-01
The review focuses on experiments that lead to stable transformation in cereals using microprojectile bombardment. The discussion of biological factors that affect transformation examines target tissues and vector systems for gene transfer. The vector systems include reporter genes, selectable markers, genes of agronomic interest, and vector constructions. Other topics include physical parameters that affect DNA delivery, selection of stably transformed cells and plant regeneration, and analysis of gene expression and transmission to the progeny.
NASA Astrophysics Data System (ADS)
Carvalho, F.; Gonçalves, V. P.; Navarra, F. S.; Spiering, D.
2018-04-01
Exclusive vector meson photoproduction associated with a leading baryon (B = n, Δ+, Δ0) in pp and pA collisions at RHIC and LHC energies is investigated using the color dipole formalism and taking into account nonlinear effects in the QCD dynamics. In particular, we compute the cross sections for ρ, ϕ and J/Ψ production together with a Δ and compare the predictions with those obtained for a leading neutron. Our results show that the V+Δ cross section is almost 30% of the V+n one. Our results also show that a future experimental analysis of these processes is, in principle, feasible and can be useful to study leading particle production.
Deployment of the OSIRIS EM-PIC code on the Intel Knights Landing architecture
NASA Astrophysics Data System (ADS)
Fonseca, Ricardo
2017-10-01
Electromagnetic particle-in-cell (EM-PIC) codes such as OSIRIS have found widespread use in modelling the highly nonlinear and kinetic processes that occur in several relevant plasma physics scenarios, ranging from astrophysical settings to high-intensity laser plasma interaction. Being computationally intensive, these codes require large-scale HPC systems and a continuous effort in adapting the algorithm to new hardware and computing paradigms. In this work, we report on our efforts on deploying the OSIRIS code on the new Intel Knights Landing (KNL) architecture. Unlike the previous generation (Knights Corner), these boards are standalone systems and introduce several new features, including the new AVX-512 instructions and on-package MCDRAM. We will focus on the parallelization and vectorization strategies followed, as well as memory management, and present a detailed performance evaluation of code performance in comparison with the CPU code. This work was partially supported by Fundação para a Ciência e Tecnologia (FCT), Portugal, through Grant No. PTDC/FIS-PLA/2940/2014.
NASA Technical Reports Server (NTRS)
OKeefe, Matthew (Editor); Kerr, Christopher L. (Editor)
1998-01-01
This report contains the abstracts and technical papers from the Second International Workshop on Software Engineering and Code Design in Parallel Meteorological and Oceanographic Applications, held June 15-18, 1998, in Scottsdale, Arizona. The purpose of the workshop is to bring together software developers in meteorology and oceanography to discuss software engineering and code design issues for parallel architectures, including Massively Parallel Processors (MPP's), Parallel Vector Processors (PVP's), Symmetric Multi-Processors (SMP's), Distributed Shared Memory (DSM) multi-processors, and clusters. Issues to be discussed include: (1) code architectures for current parallel models, including basic data structures, storage allocation, variable naming conventions, coding rules and styles, i/o and pre/post-processing of data; (2) designing modular code; (3) load balancing and domain decomposition; (4) techniques that exploit parallelism efficiently yet hide the machine-related details from the programmer; (5) tools for making the programmer more productive; and (6) the proliferation of programming models (F--, OpenMP, MPI, and HPF).
O'keefe, Matthew; Parr, Terence; Edgar, B. Kevin; ...
1995-01-01
Massively parallel processors (MPPs) hold the promise of extremely high performance that, if realized, could be used to study problems of unprecedented size and complexity. One of the primary stumbling blocks to this promise has been the lack of tools to translate application codes to MPP form. In this article we show how applications codes written in a subset of Fortran 77, called Fortran-P, can be translated to achieve good performance on several massively parallel machines. This subset can express codes that are self-similar, where the algorithm applied to the global data domain is also applied to each subdomain. We have found many codes that match the Fortran-P programming style and have converted them using our tools. We believe a self-similar coding style will accomplish what a vectorizable style has accomplished for vector machines by allowing the construction of robust, user-friendly, automatic translation systems that increase programmer productivity and generate fast, efficient code for MPPs.
Extensions and improvements on XTRAN3S
NASA Technical Reports Server (NTRS)
Borland, C. J.
1989-01-01
Improvements to the XTRAN3S computer program are summarized. Work on this code, for steady and unsteady aerodynamic and aeroelastic analysis in the transonic flow regime, has concentrated on the following areas: (1) maintenance of the XTRAN3S code, including correction of errors, enhancement of operational capability, and installation on the Cray X-MP system; (2) extension of the vectorization concepts in XTRAN3S to additional areas of the code for improved execution speed; (3) modification of the XTRAN3S algorithm for improved numerical stability for swept, tapered wing cases and improved computational efficiency; and (4) extension of the wing-only version of XTRAN3S to include pylon and nacelle or external store capability.
Minimal Increase Network Coding for Dynamic Networks.
Zhang, Guoyin; Fan, Xu; Wu, Yanxia
2016-01-01
Because of the mobility, computing power, and changeable topology of dynamic networks, it is difficult for random linear network coding (RLNC) designed for static networks to satisfy the requirements of dynamic networks. To alleviate this problem, a minimal increase network coding (MINC) algorithm is proposed. By identifying the nonzero elements of an encoding vector, it selects the blocks to be encoded on the basis of the relationship between the nonzero elements, which controls changes in the degrees of the blocks; the encoding time in a dynamic network is thereby shortened. The results of simulations show that, compared with existing encoding algorithms, the MINC algorithm provides reduced computational complexity of encoding and an increased probability of delivery.
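The encoding vector the algorithm inspects can be seen in a minimal random-linear-network-coding sketch over GF(2), where each coded packet records which source blocks were XORed into it (an illustration of the ingredients MINC builds on, not the MINC algorithm itself):

```python
# RLNC over GF(2): the encoding vector's nonzero elements record which source
# blocks participate in a coded packet; its popcount is the packet's degree.
import numpy as np

rng = np.random.default_rng(2)
blocks = rng.integers(0, 256, size=(4, 8), dtype=np.uint8)   # 4 source blocks

def encode(blocks):
    g = np.zeros(len(blocks), dtype=np.uint8)
    while not g.any():                   # avoid the useless all-zero vector
        g = rng.integers(0, 2, size=len(blocks), dtype=np.uint8)
    coded = np.zeros(blocks.shape[1], dtype=np.uint8)
    for gi, blk in zip(g, blocks):
        if gi:                           # nonzero element: block participates
            coded ^= blk                 # GF(2) addition is XOR
    return g, coded

g, pkt = encode(blocks)
print("encoding vector:", g, "degree:", int(g.sum()))
```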
Parallelization of Lower-Upper Symmetric Gauss-Seidel Method for Chemically Reacting Flow
NASA Technical Reports Server (NTRS)
Yoon, Seokkwan; Jost, Gabriele; Chang, Sherry
2005-01-01
Development of technologies for exploration of the solar system has revived an interest in computational simulation of chemically reacting flows, since planetary probe vehicles exhibit non-equilibrium phenomena during the atmospheric entry of a planet or a moon as well as during reentry to the Earth. Stability in combustion is essential for new propulsion systems. Numerical solution of real-gas flows often increases computational work by an order of magnitude compared to perfect-gas flow, partly because of the increased complexity of the equations to solve. Recently, as part of Project Columbia, NASA has integrated a cluster of interconnected SGI Altix systems to provide a ten-fold increase in current supercomputing capacity that includes an SGI Origin system. Both the new and existing machines are based on cache-coherent non-uniform memory access architecture. The Lower-Upper Symmetric Gauss-Seidel (LU-SGS) relaxation method has been implemented into both perfect- and real-gas flow codes, including the Real-Gas Aerodynamic Simulator (RGAS). However, the vectorized RGAS code runs inefficiently on cache-based shared-memory machines such as the SGI systems. Parallelization of a Gauss-Seidel method is nontrivial due to its sequential nature. The LU-SGS method has been vectorized on an oblique plane in the INS3D-LU code, which has been one of the base codes for the NAS Parallel Benchmarks. The oblique plane has been called a hyperplane by computer scientists. It is straightforward to parallelize a Gauss-Seidel method by partitioning the hyperplanes once they are formed. Another way of parallelizing is to schedule processors like a pipeline using software. Both hyperplane and pipeline methods have been implemented using OpenMP directives. The present paper reports the performance of the parallelized RGAS code on SGI Origin and Altix systems.
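The hyperplane idea is easy to make concrete: on a structured grid, all cells with the same index sum i + j + k lie on one oblique plane, and a lower-triangular sweep couples a cell only to neighbors on the previous plane, so every cell within a plane can be updated in parallel (a schematic of the ordering, not the RGAS code):

```python
# Group cells of an (ni, nj, nk) grid into hyperplanes i + j + k = const.
# Lower-neighbor dependencies (i-1, j-1, k-1) always lie on the previous plane,
# so the cells of one plane may be updated concurrently.
from collections import defaultdict

ni = nj = nk = 4
planes = defaultdict(list)
for i in range(ni):
    for j in range(nj):
        for k in range(nk):
            planes[i + j + k].append((i, j, k))

for level in sorted(planes):          # sequential over planes,
    cells = planes[level]             # parallel within each plane
    print(level, len(cells))
```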
Using Intel Xeon Phi to accelerate the WRF TEMF planetary boundary layer scheme
NASA Astrophysics Data System (ADS)
Mielikainen, Jarno; Huang, Bormin; Huang, Allen
2014-05-01
The Weather Research and Forecasting (WRF) model is designed for numerical weather prediction and atmospheric research. The WRF software infrastructure consists of several components such as dynamic solvers and physics schemes. Numerical models are used to resolve the large-scale flow. However, subgrid-scale parameterizations are for an estimation of small-scale properties (e.g., boundary layer turbulence and convection, clouds, radiation). Those have a significant influence on the resolved scale due to the complex nonlinear nature of the atmosphere. For the cloudy planetary boundary layer (PBL), it is fundamental to parameterize vertical turbulent fluxes and subgrid-scale condensation in a realistic manner. A parameterization based on the Total Energy - Mass Flux (TEMF) approach that unifies turbulence and moist convection components produces better results than the other PBL schemes. For that reason, the TEMF scheme was chosen as the PBL scheme to optimize for the Intel Many Integrated Core (MIC) architecture, which ushers in a new era of supercomputing speed, performance, and compatibility. It allows developers to run code at trillions of calculations per second using the familiar programming model. In this paper, we present our optimization results for the TEMF planetary boundary layer scheme. The optimizations that were performed were quite generic in nature. They included vectorization of the code to utilize the vector units inside each CPU. Furthermore, memory access was improved by scalarizing some of the intermediate arrays. The results show that the optimization improved MIC performance by 14.8x. Furthermore, the optimizations increased CPU performance by 2.6x compared to the original multi-threaded code on a quad-core Intel Xeon E5-2603 running at 1.8 GHz. Compared to the optimized code running on a single CPU socket, the optimized MIC code is 6.2x faster.
Liu, Shan; Jackson, Andrew; Beloor, Jagadish; Kumar, Priti; Sutton, Richard E
2015-09-01
Despite nearly three decades of research, a safe and effective vaccine against human immunodeficiency virus type 1 (HIV-1) has yet to be achieved. More recently, the discovery of highly potent anti-gp160 broadly neutralizing antibodies (bNAbs) has garnered renewed interest in using antibody-based prophylactic and therapeutic approaches. Here, we encoded bNAbs in first-generation adenoviral (ADV) vectors, which have the distinctive features of a large coding capacity and ease of propagation. A single intramuscular injection of ADV-vectorized bNAbs in humanized mice generated high serum levels of bNAbs that provided protection against multiple repeated challenges with a high dose of HIV-1, prevented depletion of peripheral CD4(+) T cells, and reduced plasma viral loads to below detection limits. Our results suggest that ADV vectors may be a viable option for the prophylactic and perhaps therapeutic use of bNAbs in humans.
NASA Technical Reports Server (NTRS)
Charlesworth, Arthur
1990-01-01
The nondeterministic divide partitions a vector into two non-empty slices by allowing the point of division to be chosen nondeterministically. Support for high-level divide-and-conquer programming provided by the nondeterministic divide is investigated. A diva algorithm is a recursive divide-and-conquer sequential algorithm on one or more vectors of the same range, whose division point for a new pair of recursive calls is chosen nondeterministically before any computation is performed and whose recursive calls are made immediately after the choice of division point; also, access to vector components is only permitted during activations in which the vector parameters have unit length. The notion of diva algorithm is formulated precisely as a diva call, a restricted call on a sequential procedure. Diva calls are proven to be intimately related to associativity. Numerous applications of diva calls are given and strategies are described for translating a diva call into code for a variety of parallel computers. Thus diva algorithms separate logical correctness concerns from implementation concerns.
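A minimal sketch of the pattern in C (a toy reduction, not the paper's formal definition, which additionally restricts component access to unit-length activations): the division point of each recursive call is an arbitrary interior index, and the result is independent of that choice precisely because the combining operation is associative.

```c
#include <stdlib.h>

/* Divide-and-conquer sum of v[lo..hi] (inclusive, non-empty). The split
   point is chosen arbitrarily -- here pseudo-randomly -- and every choice
   yields the same result because addition is associative. */
double diva_sum(const double *v, int lo, int hi)
{
    if (lo == hi)
        return v[lo];                       /* unit-length slice */
    int split = lo + rand() % (hi - lo);    /* nondeterministic divide */
    return diva_sum(v, lo, split) + diva_sum(v, split + 1, hi);
}
```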
Medical and Transmission Vector Vocabulary Alignment with Schema.org
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, William P.; Chappell, Alan R.; Corley, Courtney D.
Available biomedical ontologies and knowledge bases currently lack formal, standards-based interconnections between disease, disease vector, and drug treatment vocabularies. The PNNL Medical Linked Dataset (PNNL-MLD) addresses this gap. This paper describes the PNNL-MLD, which provides a unified vocabulary and dataset of drug, disease, side effect, and vector transmission background information. Currently, the PNNL-MLD combines and curates data from the following research projects: DrugBank, DailyMed, Diseasome, DisGeNet, Wikipedia Infobox, Sider, and PharmGKB. The main outcomes of this effort are a dataset aligned to Schema.org, including a parsing framework, and extensible hooks ready for integration with selected medical ontologies. The PNNL-MLD enables researchers to query distinct datasets more quickly and easily. Future extensions to the PNNL-MLD will include Traditional Chinese Medicine, broader interlinks across genetic structures, a larger thesaurus of synonyms and hypernyms, explicit coding of diseases and drugs across research systems, and incorporation of vector-borne transmission vocabularies.
NASA Astrophysics Data System (ADS)
Kepner, J. V.; Janka, R. S.; Lebak, J.; Richards, M. A.
1999-12-01
The Vector/Signal/Image Processing Library (VSIPL) is a DARPA-initiated effort by industry, government, and academic representatives who have defined an industry-standard API for vector, signal, and image processing primitives for real-time signal processing on high-performance systems. VSIPL supports a wide range of data types (int, float, complex, ...) and layouts (vectors, matrices, and tensors) and is well suited to astronomical data processing. The VSIPL API is intended to serve as an open, vendor-neutral, industry-standard interface. The object-based VSIPL API abstracts the memory architecture of the underlying machine by using the concept of memory blocks and views. Early experiments with VSIPL code conversions have been carried out by the High Performance Computing Program team at UCSD. Commercially, several major vendors of signal processors are actively developing implementations. VSIPL has also been explicitly required as part of a recent Rome Labs teraflop procurement. This poster presents the VSIPL API, its functionality, and the status of various implementations.
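The block/view separation can be illustrated with a toy C sketch (these structs and names are invented for illustration and are not the actual VSIPL API): a block owns raw storage, while a view is a lightweight (offset, stride, length) window onto a block, so a primitive written against views is independent of the underlying memory layout.

```c
#include <stddef.h>

typedef struct { double *data; size_t size; } Block;             /* owns memory */
typedef struct { Block *blk; size_t offset, stride, len; } View; /* a window   */

static double view_get(const View *v, size_t i)
{
    return v->blk->data[v->offset + i * v->stride];
}

/* Example primitive written purely against views: a dot product that
   works equally for contiguous vectors, matrix rows, or matrix columns. */
double view_dot(const View *a, const View *b)
{
    double s = 0.0;
    for (size_t i = 0; i < a->len; ++i)
        s += view_get(a, i) * view_get(b, i);
    return s;
}
```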
Enhancing speech recognition using improved particle swarm optimization based hidden Markov model.
Selvaraj, Lokesh; Ganesan, Balakrishnan
2014-01-01
Enhancing speech recognition is the primary intention of this work. In this paper, a novel speech recognition method based on vector quantization and improved particle swarm optimization (IPSO) is suggested. The suggested methodology contains four stages, namely, (i) denoising, (ii) feature extraction, (iii) vector quantization, and (iv) an IPSO-based hidden Markov model (HMM) technique (IP-HMM). At first, the speech signals are denoised using a median filter. Next, characteristics such as peak, pitch spectrum, Mel-frequency cepstral coefficients (MFCC), mean, standard deviation, and minimum and maximum of the signal are extracted from the denoised signal. Following that, to accomplish the training process, the extracted characteristics are supplied to genetic-algorithm-based codebook generation in vector quantization: the initial populations for the genetic algorithm are created by selecting random code vectors from the training set, new candidate code vectors are generated through the genetic crossover operation, and IP-HMM performs the recognition. The proposed speech recognition technique offers 97.14% accuracy.
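The vector quantization stage maps each extracted feature vector to the index of its nearest codebook entry; a minimal nearest-codevector search in C (squared Euclidean distance; the array layout is an assumption) looks like this:

```c
#include <float.h>

/* Return the index of the codebook vector closest to x.
   codebook holds ncode vectors of length dim, row-major. */
int vq_encode(const double *x, int dim, const double *codebook, int ncode)
{
    int best = 0;
    double best_d = DBL_MAX;
    for (int c = 0; c < ncode; ++c) {
        double d = 0.0;
        for (int i = 0; i < dim; ++i) {
            double diff = x[i] - codebook[c * dim + i];
            d += diff * diff;
        }
        if (d < best_d) { best_d = d; best = c; }
    }
    return best;
}
```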
Deep Learning for Automated Extraction of Primary Sites From Cancer Pathology Reports.
Qiu, John X; Yoon, Hong-Jun; Fearn, Paul A; Tourassi, Georgia D
2018-01-01
Pathology reports are a primary source of information for cancer registries, which process high volumes of free-text reports annually. Information extraction and coding is a manual, labor-intensive process. In this study, we investigated deep learning, specifically a convolutional neural network (CNN), for extracting ICD-O-3 topographic codes from a corpus of breast and lung cancer pathology reports. We performed two experiments, comparing the CNN against a more conventional term frequency vector approach, to assess the effects of class prevalence and inter-class transfer learning. The experiments were based on a set of 942 pathology reports with human expert annotations as the gold standard. We observed that the deep learning models consistently outperformed the conventional approaches in the class prevalence experiment, resulting in micro- and macro-F score increases of up to 0.132 and 0.226, respectively, when class labels were well populated. Specifically, the best-performing CNN achieved a micro-F score of 0.722 over 12 ICD-O-3 topography codes. Transfer learning provided a consistent but modest performance boost for the deep learning methods, though trends were contingent on the CNN method and cancer site. These encouraging results demonstrate the potential of deep learning for automated abstraction of pathology reports.
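The term frequency baseline represents each report as a vector of vocabulary counts; a toy sketch in C (tiny vocabulary and whitespace tokenization for illustration; production systems normalize text and use far larger vocabularies):

```c
#include <stdio.h>
#include <string.h>

/* Fill tf[i] with the number of times vocab[i] occurs in report. */
void tf_vector(const char *report, const char *vocab[], int nvocab, int *tf)
{
    char buf[1024];
    strncpy(buf, report, sizeof buf - 1);
    buf[sizeof buf - 1] = '\0';
    for (int i = 0; i < nvocab; ++i) tf[i] = 0;
    for (char *tok = strtok(buf, " \t\n"); tok; tok = strtok(NULL, " \t\n"))
        for (int i = 0; i < nvocab; ++i)
            if (strcmp(tok, vocab[i]) == 0) tf[i]++;
}

int main(void)
{
    const char *vocab[] = { "lung", "breast", "carcinoma" };
    int tf[3];
    tf_vector("invasive carcinoma of lung upper lobe", vocab, 3, tf);
    printf("%d %d %d\n", tf[0], tf[1], tf[2]);   /* prints: 1 0 1 */
    return 0;
}
```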
Towards a next generation open-source video codec
NASA Astrophysics Data System (ADS)
Bankoski, Jim; Bultje, Ronald S.; Grange, Adrian; Gu, Qunshan; Han, Jingning; Koleszar, John; Mukherjee, Debargha; Wilkins, Paul; Xu, Yaowu
2013-02-01
Google has recently been developing a next-generation open-source video codec called VP9, as part of the experimental branch of the libvpx repository included in the WebM project (http://www.webmproject.org/). Starting from the VP8 video codec released by Google in 2010 as the baseline, a number of enhancements and new tools have been added to improve the coding efficiency. This paper provides a technical overview of the current status of the project, along with comparisons against other state-of-the-art video codecs, H.264/AVC and HEVC. The new tools that have been added so far include: larger prediction block sizes up to 64x64, various forms of compound INTER prediction, more modes for INTRA prediction, 1/8-pel motion vectors and 8-tap switchable sub-pel interpolation filters, improved motion reference generation and motion vector coding, improved entropy coding and frame-level entropy adaptation for various symbols, improved loop filtering, incorporation of Asymmetric Discrete Sine Transforms and larger 16x16 and 32x32 DCTs, frame-level segmentation to group similar areas together, etc. Other tools and various bitstream features are being actively worked on as well. The VP9 bitstream is expected to be finalized by early to mid-2013. Results show VP9 to be quite competitive in performance with mainstream state-of-the-art codecs.
NASA Technical Reports Server (NTRS)
1975-01-01
The NASA structural analysis (NASTRAN) computer program is operational on three series of third-generation computers. The problems and difficulties involved in adapting NASTRAN to a fourth-generation computer, namely the Control Data STAR-100, are discussed. The salient features that distinguish the Control Data STAR-100 from third-generation computers are hardware vector processing capability and virtual memory. A feasible method is presented for transferring NASTRAN to the Control Data STAR-100 system while retaining much of the machine-independent code. Basic matrix operations are identified for optimization for vector processing.
Support vector machine multiuser receiver for DS-CDMA signals in multipath channels.
Chen, S; Samingan, A K; Hanzo, L
2001-01-01
The problem of constructing an adaptive multiuser detector (MUD) is considered for direct sequence code division multiple access (DS-CDMA) signals transmitted through multipath channels. The emerging learning technique, called support vector machines (SVM), is proposed as a method of obtaining a nonlinear MUD from a relatively small training data block. Computer simulation is used to study this SVM MUD, and the results show that it can closely match the performance of the optimal Bayesian one-shot detector. Comparisons with an adaptive radial basis function (RBF) MUD trained by an unsupervised clustering algorithm are discussed.
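Once trained, such an SVM detector classifies each received observation vector with a kernel expansion over the support vectors; a sketch of the decision function in C (Gaussian/RBF kernel; the array layout and parameter names are assumptions, and training itself is not shown):

```c
#include <math.h>

/* f(x) = sum_i alpha[i] * y[i] * exp(-gamma * ||x - sv_i||^2) + b.
   The detected bit is the sign of f. sv holds nsv vectors, row-major. */
double svm_decision(const double *x, int dim, const double *sv,
                    const double *alpha, const int *y,
                    int nsv, double gamma, double b)
{
    double f = b;
    for (int i = 0; i < nsv; ++i) {
        double d2 = 0.0;
        for (int j = 0; j < dim; ++j) {
            double diff = x[j] - sv[i * dim + j];
            d2 += diff * diff;
        }
        f += alpha[i] * y[i] * exp(-gamma * d2);
    }
    return f;   /* caller takes sign(f) for the bit decision */
}
```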
1984-12-01
Octol explosive. The experimental charges were lightly confined with aluminum bodies and had cone diameters of 84 mm. The charges were modelled using HEMP. The final velocity components were solved from relationships involving the axial and radial components of the final velocity vector, where the final velocity vector is equal to the vector addition of the flow velocity (cf. Miles L. Lampson, "The Influence of Convergence - Velocity Gradients on the Formation...").
Ultra-low background DNA cloning system.
Goto, Kenta; Nagano, Yukio
2013-01-01
Yeast-based in vivo cloning is useful for cloning DNA fragments into plasmid vectors and is based on the ability of yeast to recombine the DNA fragments by homologous recombination. Although this method is efficient, it produces some by-products. We have developed an "ultra-low background DNA cloning system" on the basis of yeast-based in vivo cloning, by almost completely eliminating the generation of by-products and applying the method to commonly used Escherichia coli vectors, particularly those lacking yeast replication origins and carrying an ampicillin resistance gene (Amp(r)). First, we constructed a conversion cassette containing the DNA sequences in the following order: an Amp(r) 5' UTR (untranslated region) and coding region, an autonomous replication sequence and a centromere sequence from yeast, a TRP1 yeast selectable marker, and an Amp(r) 3' UTR. This cassette allowed conversion of the Amp(r)-containing vector into the yeast/E. coli shuttle vector through use of the Amp(r) sequence by homologous recombination. Furthermore, simultaneous transformation of the desired DNA fragment into yeast allowed cloning of this DNA fragment into the same vector. We rescued the plasmid vectors from all yeast transformants, and by-products containing the E. coli replication origin disappeared. Next, the rescued vectors were transformed into E. coli and the by-products containing the yeast replication origin disappeared. Thus, our method used yeast- and E. coli-specific "origins of replication" to eliminate the generation of by-products. Finally, we successfully cloned the DNA fragment into the vector with almost 100% efficiency.
Vector platforms for gene therapy of inherited retinopathies
Trapani, Ivana; Puppo, Agostina; Auricchio, Alberto
2014-01-01
Inherited retinopathies (IR) are common untreatable blinding conditions. Most of them are inherited as monogenic disorders, due to mutations in genes expressed in retinal photoreceptors (PR) and in retinal pigment epithelium (RPE). The retina’s compatibility with gene transfer has made transduction of different retinal cell layers in small and large animal models via viral and non-viral vectors possible. The ongoing identification of novel viruses as well as modifications of existing ones based either on rational design or directed evolution have generated vector variants with improved transduction properties. Dozens of promising proofs of concept have been obtained in IR animal models with both viral and non-viral vectors, and some of them have been relayed to clinical trials. To date, recombinant vectors based on the adeno-associated virus (AAV) represent the most promising tool for retinal gene therapy, given their ability to efficiently deliver therapeutic genes to both PR and RPE and their excellent safety and efficacy profiles in humans. However, AAVs’ limited cargo capacity has prevented application of the viral vector to treatments requiring transfer of genes with a coding sequence larger than 5 kb. Vectors with larger capacity, i.e. nanoparticles, adenoviral and lentiviral vectors are being exploited for gene transfer to the retina in animal models and, more recently, in humans. This review focuses on the available platforms for retinal gene therapy to fight inherited blindness, highlights their main strengths and examines the efforts to overcome some of their limitations. PMID:25124745
A novel protocol for the production of recombinant LL-37 expressed as a thioredoxin fusion protein.
Li, Yifeng
2012-02-01
LL-37 is the only cathelicidin-derived antimicrobial peptide found in humans and it has a multifunctional role in host defense. The peptide has been shown to possess immunomodulatory functions in addition to antimicrobial activity. To provide sufficient material for biological and structural characterization of this important peptide, various systems were developed to produce recombinant LL-37 in Escherichia coli. In one previous approach, the LL-37 coding sequence was cloned into vector pET-32a, allowing the peptide to be expressed as a thioredoxin fusion. The fusion protein contains two thrombin cleavage sites: a vector-encoded one that is 30 residues upstream of the insert and an engineered one that is immediately adjacent to LL-37. Cleavage at these two sites generates three fragments, one of which is the target peptide. However, when the fusion protein was treated with thrombin, cleavage only occurred at the remote upstream site. A plausible explanation is that the thrombin site adjacent to LL-37 is less accessible due to the peptide's aggregation tendency, and cleavage at the remote site generates a fragment which forms a large aggregate that buries the intended site. In this study, I deleted the vector-encoded thrombin site and S tag in pET-32a, and then inserted the coding sequence for LL-37 plus a thrombin site into the modified vector. Although removing the S tag did not change the oligomeric state of the fusion protein, deletion of the vector-encoded thrombin site allowed the fusion to be cleaved at the engineered site to release LL-37. The released peptide was separated from the carrier and cleavage enzyme by size-exclusion chromatography. This new approach enables quick production of high-quality active LL-37 in good yield. Copyright © 2011 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Rogowski, Steve
1982-01-01
A problem is detailed which has a solution that embodies geometry, trigonometry, ballistics, projectile mechanics, vector analysis, and elementary computer graphics. It is felt that the information and sample computer programs can be a useful starting point for a user written code that involves missiles and other projectiles. (MP)
Bayesian Analogy with Relational Transformations
ERIC Educational Resources Information Center
Lu, Hongjing; Chen, Dawn; Holyoak, Keith J.
2012-01-01
How can humans acquire relational representations that enable analogical inference and other forms of high-level reasoning? Using comparative relations as a model domain, we explore the possibility that bottom-up learning mechanisms applied to objects coded as feature vectors can yield representations of relations sufficient to solve analogy…
A Semi-Vectorization Algorithm to Synthesis of Gravitational Anomaly Quantities on the Earth
NASA Astrophysics Data System (ADS)
Abdollahzadeh, M.; Eshagh, M.; Najafi Alamdari, M.
2009-04-01
The Earth's gravitational potential can be expressed by the well-known spherical harmonic expansion. The computational time of summing up this expansion is an important practical issue, which can be reduced by an efficient numerical algorithm. This paper proposes such a method for block-wise synthesis of the anomaly quantities on the Earth's surface using vectorization. Full vectorization means transforming the summations into simple matrix and vector products, which is not practical for matrices with large dimensions. Here a semi-vectorization algorithm is proposed to avoid working with large vectors and matrices: it speeds up the computations by using one loop for the summation, either on degrees or on orders. The former is a good option for synthesizing the anomaly quantities on the Earth's surface considering a digital elevation model (DEM). This approach is more efficient than the two-step method, which computes the quantities on the reference ellipsoid and continues them upward to the Earth's surface. The algorithm has been coded in MATLAB and synthesizes a global 5′ × 5′ grid (about 9 million points) of gravity anomaly or geoid height using a geopotential model to degree 360 in 10,000 seconds on an ordinary computer with 2 GB of RAM.
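To make the single-loop structure concrete, here is a toy per-latitude synthesis kernel in C (the paper's implementation is in MATLAB; here the degree summation is assumed already folded into per-order coefficients a[m] and b[m] for the current latitude band, leaving one explicit loop over orders whose inner longitude loop is simple and vectorizable):

```c
#include <math.h>

/* f(lambda_i) = sum_m ( a[m]*cos(m*lambda_i) + b[m]*sin(m*lambda_i) ),
   where a[m] and b[m] absorb the degree sums over C_nm*Pnm and S_nm*Pnm
   for one latitude. One outer loop over orders; the inner loop vectorizes. */
void synth_latitude(int mmax, int nlon, const double *a, const double *b,
                    const double *lon, double *f)
{
    for (int i = 0; i < nlon; ++i) f[i] = 0.0;
    for (int m = 0; m <= mmax; ++m)
        for (int i = 0; i < nlon; ++i)
            f[i] += a[m] * cos(m * lon[i]) + b[m] * sin(m * lon[i]);
}
```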
Wilson, Mandy L; Okumoto, Sakiko; Adam, Laura; Peccoud, Jean
2014-01-15
Expression vectors used in different biotechnology applications are designed with domain-specific rules. For instance, promoters, origins of replication or homologous recombination sites are host-specific. Similarly, chromosomal integration or viral delivery of an expression cassette imposes specific structural constraints. As de novo gene synthesis and synthetic biology methods permeate many biotechnology specialties, the design of application-specific expression vectors becomes the new norm. In this context, it is desirable to formalize vector design strategies applicable in different domains. Using the design of constructs to express genes in the chloroplast of Chlamydomonas reinhardtii as an example, we show that a vector design strategy can be formalized as a domain-specific language. We have developed a graphical editor of context-free grammars usable by biologists without prior exposure to language theory. This environment makes it possible for biologists to iteratively improve their design strategies throughout the course of a project. It is also possible to ensure that vectors designed with early iterations of the language are consistent with the latest iteration of the language. The context-free grammar editor is part of the GenoCAD application. A public instance of GenoCAD is available at http://www.genocad.org. GenoCAD source code is available from SourceForge and licensed under the Apache v2.0 open source license.
Navier-Stokes calculations for DFVLR F5-wing in wind tunnel using Runge-Kutta time-stepping scheme
NASA Technical Reports Server (NTRS)
Vatsa, V. N.; Wedan, B. W.
1988-01-01
A three-dimensional Navier-Stokes code using an explicit multistage Runge-Kutta type of time-stepping scheme is used for solving the transonic flow past a finite wing mounted inside a wind tunnel. Flow past the same wing in free air was also computed to assess the effect of wind-tunnel walls on such flows. Numerical efficiency is enhanced through vectorization of the computer code. A Cyber 205 computer with 32 million words of internal memory was used for these computations.
A computer program for estimation from incomplete multinomial data
NASA Technical Reports Server (NTRS)
Credeur, K. R.
1978-01-01
Coding is given for maximum likelihood and Bayesian estimation of the vector p of multinomial cell probabilities from incomplete data. Also included is coding to calculate and approximate elements of the posterior mean and covariance matrices. The program is written in FORTRAN 4 language for the Control Data CYBER 170 series digital computer system with network operating system (NOS) 1.1. The program requires approximately 44000 octal locations of core storage. A typical case requires from 72 seconds to 92 seconds on CYBER 175 depending on the value of the prior parameter.
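For complete data the two estimators have closed forms: the maximum-likelihood estimate of cell probability p_i is n_i/N, and with a Dirichlet(alpha) prior the posterior mean is (n_i + alpha_i)/(N + sum_j alpha_j). A sketch in C of the complete-data case (the incomplete-data machinery of the report, which must account for partially classified counts, is not reproduced here):

```c
/* ML and Dirichlet posterior-mean estimates of multinomial cell
   probabilities from complete counts n[0..k-1]. */
void multinomial_estimates(const int *n, const double *alpha, int k,
                           double *p_ml, double *p_bayes)
{
    double N = 0.0, A = 0.0;
    for (int i = 0; i < k; ++i) { N += n[i]; A += alpha[i]; }
    for (int i = 0; i < k; ++i) {
        p_ml[i]    = n[i] / N;                    /* maximum likelihood */
        p_bayes[i] = (n[i] + alpha[i]) / (N + A); /* posterior mean     */
    }
}
```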
Additional development of the XTRAN3S computer program
NASA Technical Reports Server (NTRS)
Borland, C. J.
1989-01-01
Additional developments and enhancements to the XTRAN3S computer program, a code for calculation of steady and unsteady aerodynamics, and associated aeroelastic solutions, for 3-D wings in the transonic flow regime are described. Algorithm improvements for the XTRAN3S program were provided including an implicit finite difference scheme to enhance the allowable time step and vectorization for improved computational efficiency. The code was modified to treat configurations with a fuselage, multiple stores/nacelles/pylons, and winglets. Computer program changes (updates) for error corrections and updates for version control are provided.
NASA Technical Reports Server (NTRS)
Kumar, A.; Graves, R. A., Jr.
1980-01-01
A user's guide is provided for a computer code which calculates the laminar and turbulent hypersonic flows about blunt axisymmetric bodies, such as spherically blunted cones, hyperboloids, etc., at zero and small angles of attack. The code is written in STAR FORTRAN language for the CDC-STAR-100 computer. Time-dependent, viscous-shock-layer-type equations are used to describe the flow field. These equations are solved by an explicit, two-step, time asymptotic, finite-difference method. For the turbulent flow, a two-layer, eddy-viscosity model is used. The code provides complete flow-field properties including shock location, surface pressure distribution, surface heating rates, and skin-friction coefficients. This report contains descriptions of the input and output, the listing of the program, and a sample flow-field solution.
NASA Astrophysics Data System (ADS)
Lahaye, S.; Huynh, T. D.; Tsilanizara, A.
2016-03-01
Uncertainty quantification of outputs of interest in the nuclear fuel cycle is an important issue for nuclear safety, from nuclear facilities to long-term deposits. Most of those outputs are functions of the isotopic density vector, which is estimated by fuel cycle codes such as DARWIN/PEPIN2, MENDEL, ORIGEN or FISPACT. The CEA code systems DARWIN/PEPIN2 and MENDEL propagate the uncertainty from nuclear data inputs to isotopic concentrations and decay heat by two different methods. This paper shows comparisons between those two codes on a Uranium-235 thermal fission pulse. The effect of the choice of nuclear data evaluation (ENDF/B-VII.1, JEFF-3.1.1 and JENDL-2011) is also examined. All results show good agreement between the two codes and methods, supporting the reliability of both approaches for a given evaluation.
Bobrova, E V; Liakhovetskiĭ, V A; Borshchevskaia, E R
2011-01-01
The dependence of errors during reproduction of a sequence of hand movements without visual feedback on the previous right- and left-hand performance ("prehistory") and on positions in space of sequence elements (random or ordered by the explicit rule) was analyzed. It was shown that the preceding information about the ordered positions of the sequence elements was used during right-hand movements, whereas left-hand movements were performed with involvement of the information about the random sequence. The data testify to a central mechanism of the analysis of spatial structure of sequence elements. This mechanism activates movement coding specific for the left hemisphere (vector coding) in case of an ordered sequence structure and positional coding specific for the right hemisphere in case of a random sequence structure.
Attitude Control for an Aero-Vehicle Using Vector Thrusting and Variable Speed Control Moment Gyros
NASA Technical Reports Server (NTRS)
Shin, Jong-Yeob; Lim, K. B.; Moerder, D. D.
2005-01-01
Stabilization of passively unstable thrust-levitated vehicles can require significant control inputs. Although thrust vectoring is a straightforward choice for realizing these inputs, it may lead to difficulties discussed in the paper. This paper examines supplementing thrust vectoring with Variable-Speed Control Moment Gyroscopes (VSCMGs). The paper describes how to allocate VSCMGs and the vectored-thrust mechanism for attitude stabilization in the frequency domain and also shows the trade-off between vectored thrust and VSCMGs. Using an H2 control synthesis methodology based on LMI optimization, a feedback control law is designed for a thrust-levitated research vehicle and is simulated with the full nonlinear model. It is demonstrated that VSCMGs can reduce the use of vectored-thrust variation for stabilizing the hovering platform in the presence of strong wind gusts.
Artificial Lighting as a Vector Attractant and Cause of Disease Diffusion
Barghini, Alessandro; de Medeiros, Bruno A. S.
2010-01-01
Background Traditionally, epidemiologists have considered electrification to be a positive factor. In fact, electrification and plumbing are typical initiatives that represent the integration of an isolated population into modern society, ensuring the control of pathogens and promoting public health. Nonetheless, electrification is always accompanied by night lighting that attracts insect vectors and changes people’s behavior. Although this may lead to new modes of infection and increased transmission of insect-borne diseases, epidemiologists rarely consider the role of night lighting in their surveys. Objective We reviewed the epidemiological evidence concerning the role of lighting in the spread of vector-borne diseases to encourage other researchers to consider it in future studies. Discussion We present three infectious vector-borne diseases—Chagas, leishmaniasis, and malaria—and discuss evidence that suggests that the use of artificial lighting results in behavioral changes among human populations and changes in the prevalence of vector species and in the modes of transmission. Conclusion Despite a surprising lack of studies, existing evidence supports our hypothesis that artificial lighting leads to a higher risk of infection from vector-borne diseases. We believe that this is related not only to the simple attraction of traditional vectors to light sources but also to changes in the behavior of both humans and insects that result in new modes of disease transmission. Considering the ongoing expansion of night lighting in developing countries, additional research on this subject is urgently needed. PMID:20675268
NASA Technical Reports Server (NTRS)
Koppenhoefer, Kyle C.; Gullerud, Arne S.; Ruggieri, Claudio; Dodds, Robert H., Jr.; Healy, Brian E.
1998-01-01
This report describes theoretical background material and commands necessary to use the WARP3D finite element code. WARP3D is under continuing development as a research code for the solution of very large-scale, 3-D solid models subjected to static and dynamic loads. Specific features in the code oriented toward the investigation of ductile fracture in metals include a robust finite strain formulation, a general J-integral computation facility (with inertia, face loading), an element extinction facility to model crack growth, nonlinear material models including viscoplastic effects, and the Gurson-Tvergaard dilatant plasticity model for void growth. The nonlinear, dynamic equilibrium equations are solved using an incremental-iterative, implicit formulation with full Newton iterations to eliminate residual nodal forces. The history integration of the nonlinear equations of motion is accomplished with Newmark's beta method. A central feature of WARP3D involves the use of a linear-preconditioned conjugate gradient (LPCG) solver implemented in an element-by-element format to replace a conventional direct linear equation solver. This software architecture dramatically reduces both the memory requirements and CPU time for very large, nonlinear solid models since formation of the assembled (dynamic) stiffness matrix is avoided. Analyses thus exhibit the numerical stability for large time (load) steps provided by the implicit formulation coupled with the low memory requirements characteristic of an explicit code. In addition to the much lower memory requirements of the LPCG solver, the CPU time required for solution of the linear equations during each Newton iteration is generally one-half or less of the CPU time required for a traditional direct solver. All other computational aspects of the code (element stiffnesses, element strains, stress updating, element internal forces) are implemented in the element-by-element, blocked architecture. This greatly improves vectorization of the code on uni-processor hardware and enables straightforward parallel-vector processing of element blocks on multi-processor hardware.
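The essential point of the LPCG strategy is that conjugate gradient needs only the action of the stiffness matrix on a vector, which can be accumulated element block by element block without ever assembling the matrix. A generic matrix-free, diagonally preconditioned CG sketch in C (the callback and names are illustrative, not WARP3D's internals; real codes allocate the workspace rather than using stack arrays):

```c
#include <math.h>
#include <string.h>

typedef void (*matvec_fn)(const double *v, double *out, int n); /* out = K*v */

/* Solve K x = b with Jacobi-preconditioned CG; dinv[i] = 1/diag(K)[i]. */
void pcg(matvec_fn Kv, const double *dinv, const double *b,
         double *x, int n, int maxit, double tol)
{
    double r[n], z[n], p[n], q[n];                 /* C99 VLAs, for brevity */
    memset(x, 0, n * sizeof *x);
    memcpy(r, b, n * sizeof *r);
    double rz = 0.0;
    for (int i = 0; i < n; ++i) { z[i] = dinv[i] * r[i]; p[i] = z[i]; rz += r[i] * z[i]; }
    for (int it = 0; it < maxit; ++it) {
        Kv(p, q, n);               /* matrix action; element-by-element in WARP3D */
        double pq = 0.0;
        for (int i = 0; i < n; ++i) pq += p[i] * q[i];
        double alpha = rz / pq, rnorm = 0.0;
        for (int i = 0; i < n; ++i) {
            x[i] += alpha * p[i];
            r[i] -= alpha * q[i];
            rnorm += r[i] * r[i];
        }
        if (sqrt(rnorm) < tol) return;
        double rz_new = 0.0;
        for (int i = 0; i < n; ++i) { z[i] = dinv[i] * r[i]; rz_new += r[i] * z[i]; }
        double beta = rz_new / rz;
        rz = rz_new;
        for (int i = 0; i < n; ++i) p[i] = z[i] + beta * p[i];
    }
}
```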
The path toward HEP High Performance Computing
NASA Astrophysics Data System (ADS)
Apostolakis, John; Brun, René; Carminati, Federico; Gheata, Andrei; Wenzel, Sandro
2014-06-01
High Energy Physics code has been known for making poor use of high-performance computing architectures. Efforts in optimising HEP code on vector and RISC architectures have yielded limited results, and recent studies have shown that, on modern architectures, it achieves between 10% and 50% of peak performance. Although several successful attempts have been made to port selected codes to GPUs, no major HEP code suite has a "High Performance" implementation. With LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and has to try to make the best use of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the Root and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where the experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on the development of a high-performance prototype for particle transport. Achieving a good concurrency level on the emerging parallel architectures without a complete redesign of the framework can only be done by parallelizing at event level, or with a much larger effort at track level. Apart from the shareable data structures, this typically implies a multiplication factor in memory consumption compared to the single-threaded version, together with sub-optimal handling of event-processing tails. Besides this, the low-level instruction pipelining of modern processors cannot be used efficiently to speed up the program. We have implemented a framework that allows scheduling vectors of particles to an arbitrary number of computing resources in a fine-grain parallel approach. The talk reviews the current optimisation activities within the SFT group, with a particular emphasis on the development perspectives towards a simulation framework able to profit best from the recent technology evolution in computing.
NASA Astrophysics Data System (ADS)
Bruni, Marco; Thomas, Daniel B.; Wands, David
2014-02-01
We present the first calculation of an intrinsically relativistic quantity, the leading-order correction to Newtonian theory, in fully nonlinear cosmological large-scale structure studies. Traditionally, nonlinear structure formation in standard ΛCDM cosmology is studied using N-body simulations, based on Newtonian gravitational dynamics on an expanding background. When one derives the Newtonian regime in a way that is a consistent approximation to the Einstein equations, the first relativistic correction to the usual Newtonian scalar potential is a gravitomagnetic vector potential, giving rise to frame dragging. At leading order, this vector potential does not affect the matter dynamics, thus it can be computed from Newtonian N-body simulations. We explain how we compute the vector potential from simulations in ΛCDM and examine its magnitude relative to the scalar potential, finding that the power spectrum of the vector potential is of order 10^-5 times the scalar power spectrum over the range of nonlinear scales we consider. On these scales the vector potential is up to two orders of magnitude larger than the value predicted by second-order perturbation theory extrapolated to the same scales. We also discuss some possible observable effects and future developments.
Photon and vector meson exchanges in the production of light meson pairs and elementary atoms
NASA Astrophysics Data System (ADS)
Gevorkyan, S. R.; Kuraev, E. A.; Volkov, M. K.
2013-01-01
The production of pseudoscalar and scalar meson pairs ππ, ηη, η′η′, σσ, as well as bound states, in high-energy γγ collisions is considered. The exchange of a vector particle in the binary process γ + γ → ha + hb, with hadronic states ha, hb in the fragmentation regions of the initial particles, leads to cross sections that do not decrease with increasing energy, a characteristic property of peripheral kinematics. Unlike the photon exchange, the vector meson exchange requires reggeization, which leads to a falloff with growing energy. Nevertheless, due to the peripheral kinematics, beyond very forward production angles the vector meson exchanges dominate over all other possible exchanges. The proposed approach allows one to express the matrix elements of the considered processes through impact factors, which can be calculated in perturbation models like chiral perturbation theory (ChPT) or the Nambu-Jona-Lasinio (NJL) model. In particular cases the impact factors can be determined from relevant γγ sub-processes or the vector meson radiative decay width. The production of pionium atoms in collisions of high-energy electrons and pions with protons is considered, and the relevant cross sections are estimated.
Gain in computational efficiency by vectorization in the dynamic simulation of multi-body systems
NASA Technical Reports Server (NTRS)
Amirouche, F. M. L.; Shareef, N. H.
1991-01-01
An improved technique for the identification and extraction of the exact quantities associated with the degrees of freedom at the element as well as the flexible-body level is presented. It is implemented in the dynamic equations of motion based on the recursive formulation of Kane et al. (1987) and presented in matrix form, integrating the concepts of strain energy, the finite-element approach, modal analysis, and reduction of equations. This technique eliminates the CPU-intensive matrix multiplication operations in the code's hot spots for the dynamic simulation of interconnected rigid and flexible bodies. A study of a simple robot with flexible links is presented, comparing execution times on a scalar machine and on a vector processor with and without vector options. Performance figures demonstrating the substantial gains achieved by the technique are plotted.
Levy, Michael Z.; Tustin, Aaron; Castillo-Neyra, Ricardo; Mabud, Tarub S.; Levy, Katelyn; Barbu, Corentin M.; Quispe-Machaca, Victor R.; Ancca-Juarez, Jenny; Borrini-Mayori, Katty; Naquira-Velarde, Cesar; Ostfeld, Richard S.
2015-01-01
Faeces-mediated transmission of Trypanosoma cruzi (the aetiological agent of Chagas disease) by triatomine insects is extremely inefficient. Still, the parasite emerges frequently, and has infected millions of people and domestic animals. We synthesize here the results of field and laboratory studies of T. cruzi transmission conducted in and around Arequipa, Peru. We document the repeated occurrence of large colonies of triatomine bugs (more than 1000) with very high infection prevalence (more than 85%). By inoculating guinea pigs, an important reservoir of T. cruzi in Peru, and feeding triatomine bugs on them weekly, we demonstrate that, while most animals quickly control parasitaemia, a subset of animals remains highly infectious to vectors for many months. However, we argue that the presence of these persistently infectious hosts is insufficient to explain the observed prevalence of T. cruzi in vector colonies. We posit that seasonal rains, leading to a fluctuation in the price of guinea pig food (alfalfa), leading to annual guinea pig roasts, leading to a concentration of vectors on a small subpopulation of animals maintained for reproduction, can propel T. cruzi through vector colonies and create a considerable force of infection for a pathogen whose transmission might otherwise fizzle out. PMID:26085582
Context-Aware Local Binary Feature Learning for Face Recognition.
Duan, Yueqi; Lu, Jiwen; Feng, Jianjiang; Zhou, Jie
2018-05-01
In this paper, we propose a context-aware local binary feature learning (CA-LBFL) method for face recognition. Unlike existing learning-based local face descriptors such as discriminant face descriptor (DFD) and compact binary face descriptor (CBFD) which learn each feature code individually, our CA-LBFL exploits the contextual information of adjacent bits by constraining the number of shifts from different binary bits, so that more robust information can be exploited for face representation. Given a face image, we first extract pixel difference vectors (PDV) in local patches, and learn a discriminative mapping in an unsupervised manner to project each pixel difference vector into a context-aware binary vector. Then, we perform clustering on the learned binary codes to construct a codebook, and extract a histogram feature for each face image with the learned codebook as the final representation. In order to exploit local information from different scales, we propose a context-aware local binary multi-scale feature learning (CA-LBMFL) method to jointly learn multiple projection matrices for face representation. To make the proposed methods applicable for heterogeneous face recognition, we present a coupled CA-LBFL (C-CA-LBFL) method and a coupled CA-LBMFL (C-CA-LBMFL) method to reduce the modality gap of corresponding heterogeneous faces in the feature level, respectively. Extensive experimental results on four widely used face datasets clearly show that our methods outperform most state-of-the-art face descriptors.
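The first step, pixel difference vector (PDV) extraction, can be sketched directly: for each pixel, stack the differences between its neighbors and the center value. A minimal 3x3 version in C (the paper's patch size and ordering may differ):

```c
/* 8-dimensional pixel difference vector at interior pixel (r, c) of a
   grayscale image stored row-major with the given width. */
void pdv_3x3(const unsigned char *img, int width, int r, int c, double pdv[8])
{
    double center = img[r * width + c];
    int k = 0;
    for (int dr = -1; dr <= 1; ++dr)
        for (int dc = -1; dc <= 1; ++dc) {
            if (dr == 0 && dc == 0) continue;     /* skip the center */
            pdv[k++] = img[(r + dr) * width + (c + dc)] - center;
        }
}
```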
A constrained joint source/channel coder design and vector quantization of nonstationary sources
NASA Technical Reports Server (NTRS)
Sayood, Khalid; Chen, Y. C.; Nori, S.; Araj, A.
1993-01-01
The emergence of broadband ISDN as the network for the future brings with it the promise of integration of all proposed services in a flexible environment. In order to achieve this flexibility, asynchronous transfer mode (ATM) has been proposed as the transfer technique. During this period a study was conducted on the bridging of network transmission performance and video coding. The successful transmission of variable bit rate video over ATM networks relies on the interaction between the video coding algorithm and the ATM networks. Two aspects of networks that determine the efficiency of video transmission are the resource allocation algorithm and the congestion control algorithm. These are explained in this report. Vector quantization (VQ) is one of the more popular compression techniques to appear in the last twenty years. Numerous compression techniques, which incorporate VQ, have been proposed. While the LBG VQ provides excellent compression, there are also several drawbacks to the use of the LBG quantizers including search complexity and memory requirements, and a mismatch between the codebook and the inputs. The latter mainly stems from the fact that the VQ is generally designed for a specific rate and a specific class of inputs. In this work, an adaptive technique is proposed for vector quantization of images and video sequences. This technique is an extension of the recursively indexed scalar quantization (RISQ) algorithm.
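The recursively indexed scalar quantizer (RISQ) mentioned at the end handles inputs that fall outside a quantizer's limited range by emitting the extreme index and re-quantizing the remainder. A sketch of an encoder along those lines in C (uniform, symmetric level layout; the parameterization is illustrative):

```c
/* Encode x as a run of indices in {0, ..., K-1}. Reconstruction levels
   are lo + i*step with lo = -(K-1)*step/2. Inputs beyond the top (bottom)
   level emit index K-1 (0) and recurse on the remainder. Returns the
   number of indices written (at most max_idx). */
int risq_encode(double x, double step, int K, int *idx, int max_idx)
{
    double hi = 0.5 * (K - 1) * step, lo = -hi;
    int n = 0;
    while (n < max_idx) {
        if (x > hi)      { idx[n++] = K - 1; x -= hi; }
        else if (x < lo) { idx[n++] = 0;     x -= lo; }
        else { idx[n++] = (int)((x - lo) / step + 0.5); break; }
    }
    return n;
}
```

A matching decoder simply sums the reconstruction levels of the emitted indices.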
NASA Astrophysics Data System (ADS)
Solano-Altamirano, J. M.; Hernández-Pérez, Julio M.
2015-11-01
DensToolKit is a suite of cross-platform, optionally parallelized programs for analyzing the molecular electron density (ρ) and several fields derived from it. Scalar and vector fields, such as the gradient of the electron density (∇ρ), the electron localization function (ELF) and its gradient, the localized orbital locator (LOL), the region of slow electrons (RoSE), the reduced density gradient, the localized electrons detector (LED), information entropy, the molecular electrostatic potential, and the kinetic energy densities K and G, among others, can be evaluated on zero-, one-, two-, and three-dimensional grids. The suite includes a program for searching critical points and bond paths of the electron density, under the framework of the Quantum Theory of Atoms in Molecules. DensToolKit also evaluates the momentum-space electron density on spatial grids, and the reduced density matrix of order one along lines joining two arbitrary atoms of a molecule. The source code is distributed under the GNU-GPLv3 license, and we release the code with the intent of establishing an open-source collaborative project. The style of DensToolKit's code follows some of the guidelines of object-oriented programming, which gives users a simple way to implement new scalar or vector fields, provided they are derived from any of the fields already implemented in the code. In this paper, we present some of the most salient features of the programs contained in the suite, some examples of how to run them, and the mathematical definitions of the implemented fields, along with hints on how we optimized their evaluation. We benchmarked our suite against both a freely available program and a commercial package. Speed-ups of ~2x, and up to 12x, were obtained using a non-parallel compilation of DensToolKit for the evaluation of fields. DensToolKit takes similar times for finding critical points, compared to a commercial package. Finally, we present some perspectives for the future development and growth of the suite.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Durrer, Ruth; Tansella, Vittorio, E-mail: ruth.durrer@unige.ch, E-mail: vittorio.tansella@unige.ch
We derive the contribution to relativistic galaxy number count fluctuations from vector and tensor perturbations within linear perturbation theory. Our result is consistent with the relativistic corrections to number counts due to scalar perturbations, with the Bardeen potentials replaced by line-of-sight projections of vector and tensor quantities. Since vector and tensor perturbations do not lead to density fluctuations, the standard density term in the number counts is absent. We apply our results to vector perturbations which are induced from scalar perturbations at second order, and give numerical estimates of their contributions to the power spectrum of relativistic galaxy number counts.
Vectoring of parallel synthetic jets
NASA Astrophysics Data System (ADS)
Berk, Tim; Ganapathisubramani, Bharathram; Gomit, Guillaume
2015-11-01
A pair of parallel synthetic jets can be vectored by applying a phase difference between the two driving signals. The resulting jet can be merged or bifurcated and either vectored towards the actuator leading in phase or the actuator lagging in phase. In the present study, the influence of phase difference and Strouhal number on the vectoring behaviour is examined experimentally. Phase-locked vorticity fields, measured using Particle Image Velocimetry (PIV), are used to track vortex pairs. The physical mechanisms that explain the diversity in vectoring behaviour are observed based on the vortex trajectories. For a fixed phase difference, the vectoring behaviour is shown to be primarily influenced by pinch-off time of vortex rings generated by the synthetic jets. Beyond a certain formation number, the pinch-off timescale becomes invariant. In this region, the vectoring behaviour is determined by the distance between subsequent vortex rings. We acknowledge the financial support from the European Research Council (ERC grant agreement no. 277472).
Ma, Wenqin; Li, Baozheng; Ling, Chen; Jayandharan, Giridhara R.; Byrne, Barry J.
2011-01-01
We have recently shown that co-administration of conventional single-stranded adeno-associated virus 2 (ssAAV2) vectors with self-complementary (sc) AAV2-protein phosphatase 5 (PP5) vectors leads to a significant increase in the transduction efficiency of ssAAV2 vectors in human cells in vitro as well as in murine hepatocytes in vivo. In the present study, this strategy has been further optimized by generating a mixed population of ssAAV2-EGFP and scAAV2-PP5 vectors at a 10:1 ratio to achieve enhanced green fluorescent protein (EGFP) transgene expression at approximately 5- to 10-fold higher efficiency, both in vitro and in vivo. This simple coproduction method should be adaptable to any ssAAV serotype vector containing transgene cassettes that are too large to be encapsidated in scAAV vectors. PMID:21219084
Structuring Stokes correlation functions using vector-vortex beam
NASA Astrophysics Data System (ADS)
Kumar, Vijay; Anwar, Ali; Singh, R. P.
2018-01-01
Higher-order statistical correlations of the optical vector speckle field, formed due to scattering of a vector-vortex beam, are explored. Here, we report the experimental construction of the Stokes parameter covariance matrix, consisting of all possible spatial Stokes parameter correlation functions. We also propose and experimentally realize new Stokes correlation functions, called Stokes field autocorrelation functions. It is observed that the Stokes correlation functions of the vector-vortex beam are reflected in the respective Stokes correlation functions of the corresponding vector speckle field. The major advantage of the proposed Stokes correlation functions is that they can be easily tuned by manipulating the polarization of the vector-vortex beam used to generate the vector speckle field, and that phase information can be obtained directly from intensity measurements. Moreover, this approach leads to a complete experimental Stokes characterization of a broad range of random fields.
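The building blocks of these correlation functions are the pointwise Stokes parameters of the field; computing them from the two transverse field components is straightforward (a sketch in C under one common sign convention; conventions for S2 and S3 vary across the literature):

```c
#include <complex.h>

/* Stokes parameters from the transverse components (Ex, Ey) at a point. */
void stokes_params(double complex Ex, double complex Ey, double S[4])
{
    S[0] = creal(Ex * conj(Ex) + Ey * conj(Ey));  /* total intensity  */
    S[1] = creal(Ex * conj(Ex) - Ey * conj(Ey));  /* H/V balance      */
    S[2] = 2.0 * creal(conj(Ex) * Ey);            /* +45/-45 balance  */
    S[3] = 2.0 * cimag(conj(Ex) * Ey);            /* circular balance */
}
```

Spatial Stokes correlation functions then follow by correlating these quantities between pairs of points, e.g. C_ij(Δr) = <S_i(r) S_j(r+Δr)> − <S_i><S_j>.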
Agrobacterium-mediated transformation of Easter lily (Lilium longiflorum cv. Nellie White)
USDA-ARS?s Scientific Manuscript database
Conditions were optimized for transient transformation of Lilium longiflorum cv. Nellie White using Agrobacterium tumefaciens. Bulb scale and basal meristem explants were inoculated with A. tumefaciens strain AGL1 containing the binary vector pCAMBIA 2301 which has the uidA gene that codes for ß-gl...
Dataflow Integration and Simulation Techniques for DSP System Design Tools
2007-01-01
Planned Contrasts: An Overview of Comparison Methods.
ERIC Educational Resources Information Center
Chatham, Kathy
Contrasts or comparisons can be used to investigate specific differences between means. Contrasts, as explained by B. Thompson (1985, 1994) are coding vectors that mathematically express hypotheses. The most basic categories of contrasts are planned and unplanned. The purpose of this paper is to explain the relative advantages of using planned…
Attempts to Simulate Anisotropies of Solar Wind Fluctuations Using MHD with a Turning Magnetic Field
NASA Technical Reports Server (NTRS)
Ghosh, Sanjoy; Roberts, D. Aaron
2010-01-01
We examine a "two-component" model of the solar wind to see if any of the observed anisotropies of the fields can be explained in light of the need for various quantities, such as the magnetic minimum variance direction, to turn along with the Parker spiral. Previous results used a 3-D MHD spectral code to show that neither Q2D nor slab-wave components will turn their wave vectors in a turning Parker-like field, and that nonlinear interactions between the components are required to reproduce observations. In these new simulations we use higher resolution in both decaying and driven cases, and with and without a turning background field, to see what, if any, conditions lead to variance anisotropies similar to observations. We focus especially on the middle spectral range, and not the energy-containing scales, of the simulation for comparison with the solar wind. Preliminary results have shown that it is very difficult to produce the required variances with a turbulent cascade.
Acute olfactory response of Culex mosquitoes to a human- and bird-derived attractant
Syed, Zainulabeuddin; Leal, Walter S.
2009-01-01
West Nile virus, which is transmitted by Culex mosquitoes while feeding on birds and humans, has emerged as the dominant vector-borne disease in North America. We have identified natural compounds from humans and birds which are detected with extreme sensitivity by olfactory receptor neurons (ORNs) on the antennae of Culex pipiens quinquefasciatus (Cx. quinquefasciatus). One of these semiochemicals, nonanal, dominates the odorant spectrum of pigeons, chickens, and humans from various ethnic backgrounds. We determined the specificity and sensitivity of all ORN types housed in different sensilla types on Cx. quinquefasciatus antennae. Here, we present a comprehensive map of all antennal ORNs coding natural ligands and their dose-response functions. Nonanal is detected by a large array of sensilla and is by far the most potent stimulus, supporting the assumption that Cx. quinquefasciatus can smell humans and birds. Nonanal and CO2 synergize, leading to significantly higher catches of Culex mosquitoes in traps baited with binary lures than in those with individual lures. PMID:19858490
The design and implementation of a parallel unstructured Euler solver using software primitives
NASA Technical Reports Server (NTRS)
Das, R.; Mavriplis, D. J.; Saltz, J.; Gupta, S.; Ponnusamy, R.
1992-01-01
This paper is concerned with the implementation of a three-dimensional unstructured-grid Euler solver on massively parallel distributed-memory computer architectures. The goal is to minimize solution time by achieving high computational rates with a numerically efficient algorithm. An unstructured multigrid algorithm with an edge-based data structure has been adopted, and a number of optimizations have been devised and implemented in order to accelerate the parallel communication rates. The implementation is carried out by creating a set of software tools, which provide an interface between the parallelization issues and the sequential code, while providing a basis for future automatic run-time compilation support. Large practical unstructured-grid problems are solved on the Intel iPSC/860 hypercube and the Intel Touchstone Delta machine. The quantitative effects of the various optimizations are demonstrated, and we show that their combined effect leads to roughly a factor-of-three performance improvement. The overall solution efficiency is compared with that obtained on the CRAY Y-MP vector supercomputer.
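An edge-based data structure stores, for each edge, the two vertices it joins plus an edge weight (e.g., the projected face area), and residuals are accumulated in a single sweep over edges, which is also the natural unit for partitioning across processors. A minimal serial sketch in C, with a toy scalar flux standing in for the actual Euler flux:

```c
/* Accumulate nodal residuals by looping over edges. Edge e joins
   vertices n1[e] and n2[e]; w[e] is its weight. */
void edge_residual(int nedges, const int *n1, const int *n2,
                   const double *w, const double *u, double *res)
{
    for (int e = 0; e < nedges; ++e) {
        double flux = 0.5 * w[e] * (u[n1[e]] + u[n2[e]]);  /* toy flux */
        res[n1[e]] += flux;
        res[n2[e]] -= flux;
    }
}
```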
A hadoop-based method to predict potential effective drug combination.
Sun, Yifan; Xiong, Yi; Xu, Qian; Wei, Dongqing
2014-01-01
Combination drugs that impact multiple targets simultaneously are promising candidates for combating complex diseases due to their improved efficacy and reduced side effects. However, exhaustive screening of all possible drug combinations is extremely time-consuming and impractical. Here, we present a novel Hadoop-based approach to predicting drug combinations that takes advantage of the MapReduce programming model, which improves the scalability of the prediction algorithm. By integrating the gene expression data of multiple drugs, we constructed data preprocessing and the support vector machine and naïve Bayesian classifiers on Hadoop for prediction of drug combinations. The experimental results suggest that our Hadoop-based model achieves much higher efficiency in the big-data processing steps with satisfactory performance. We believe that our proposed approach can help accelerate the prediction of potentially effective drugs as the number of candidate combinations grows exponentially. The source code and datasets are available upon request.
A path model for Whittaker vectors
NASA Astrophysics Data System (ADS)
Di Francesco, Philippe; Kedem, Rinat; Turmunkh, Bolor
2017-06-01
In this paper we construct weighted path models to compute Whittaker vectors in the completion of Verma modules, as well as Whittaker functions of fundamental type, for all finite-dimensional simple Lie algebras, affine Lie algebras, and the quantum algebra U_q(sl_{r+1}). This leads to series expressions for the Whittaker functions. We show how this construction leads directly to the quantum Toda equations satisfied by these functions, and to the q-difference equations in the quantum case. We investigate the critical limit of affine Whittaker functions computed in this way.
Healthy, functioning aquatic ecosystems provide the ecosystem service of mosquito population control. Nutrient and pesticide pollution, along with destruction and filling of wetlands, lead to impaired waterbodies that are less effective in vector regulation due to reduction or re...
A performance comparison of the Cray-2 and the Cray X-MP
NASA Technical Reports Server (NTRS)
Schmickley, Ronald; Bailey, David H.
1986-01-01
A suite of thirteen large Fortran benchmark codes was run on Cray-2 and Cray X-MP supercomputers. These codes were a mix of compute-intensive scientific application programs (mostly Computational Fluid Dynamics) and some special vectorized computation exercise programs. For the general class of programs tested on the Cray-2, most of which were not specially tuned for speed, the floating point operation rates varied under a variety of system load configurations from 40 percent up to 125 percent of X-MP performance rates. It is concluded that the Cray-2, in the original system configuration studied (without memory pseudo-banking), will run untuned Fortran code, on average, at about 70 percent of X-MP speed.
User's Manual for PCSMS (Parallel Complex Sparse Matrix Solver). Version 1.
NASA Technical Reports Server (NTRS)
Reddy, C. J.
2000-01-01
PCSMS (Parallel Complex Sparse Matrix Solver) is a computer code written to make use of existing real sparse direct solvers to solve complex, sparse matrix linear equations. PCSMS converts complex matrices into real matrices and uses real, sparse direct matrix solvers to factor and solve the real matrices. The solution vector is reconverted to complex numbers. Though this utility is written for Silicon Graphics (SGI) real sparse matrix solution routines, it is general in nature and can be easily modified to work with any real sparse matrix solver. The User's Manual is written to acquaint the user with the installation and operation of the code. Driver routines are given to help users integrate PCSMS routines into their own codes.
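The real-equivalent formulation that PCSMS is described as using can be sketched directly: (A + iB)(x + iy) = b + ic becomes a real 2x2 block system. The sketch below, assuming scipy is available, shows the conversion and the reassembly of the complex solution vector; PCSMS's actual storage layout and solver interfaces are not reproduced here.

    import numpy as np
    from scipy.sparse import csc_matrix, bmat
    from scipy.sparse.linalg import spsolve

    def solve_complex_via_real(M, rhs):
        """Solve (A + iB)(x + iy) = b + ic through a real 2x2 block system."""
        A, B = csc_matrix(M.real), csc_matrix(M.imag)
        K = bmat([[A, -B], [B, A]], format="csc")  # real-equivalent matrix
        s = spsolve(K, np.concatenate([rhs.real, rhs.imag]))
        n = M.shape[0]
        return s[:n] + 1j * s[n:]  # reassemble the complex solution vector

    M = csc_matrix(np.array([[2 + 1j, 0], [1j, 3 - 2j]]))
    rhs = np.array([1 + 0j, 2 - 1j])
    x = solve_complex_via_real(M, rhs)
    print(np.allclose(M @ x, rhs))  # True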
MATIN: a random network coding based framework for high quality peer-to-peer live video streaming.
Barekatain, Behrang; Khezrimotlagh, Dariush; Aizaini Maarof, Mohd; Ghaeini, Hamid Reza; Salleh, Shaharuddin; Quintana, Alfonso Ariza; Akbari, Behzad; Cabrera, Alicia Triviño
2013-01-01
In recent years, Random Network Coding (RNC) has emerged as a promising solution for efficient Peer-to-Peer (P2P) video multicasting over the Internet, largely because RNC noticeably increases the error resiliency and throughput of the network. However, the high transmission overhead arising from sending a large coefficient vector as a header has been the most important challenge of RNC. Moreover, due to employing the Gauss-Jordan elimination method, considerable computational complexity can be imposed on peers in decoding the encoded blocks and checking linear dependency among the coefficient vectors. In order to address these challenges, this study introduces MATIN, a random network coding based framework for efficient P2P video streaming. MATIN includes a novel coefficient matrix generation method such that there is no linear dependency in the generated coefficient matrix. Using the proposed framework, each peer encapsulates one coefficient entry instead of n into the generated encoded packet, which results in very low transmission overhead. It is also possible to obtain the inverted coefficient matrix using a small number of simple arithmetic operations. In this regard, peers sustain very low computational complexity. As a result, MATIN permits random network coding to be more efficient in P2P video streaming systems. The results obtained from simulation using OMNET++ show that it substantially outperforms RNC with the Gauss-Jordan elimination method by providing better video quality on peers in terms of four important performance metrics: video distortion, dependency distortion, end-to-end delay, and initial startup delay.
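For readers unfamiliar with RNC, the sketch below encodes source blocks as random linear combinations and decodes them by Gauss-Jordan elimination, the step MATIN is designed to avoid. It works over the prime field GF(257) for simplicity (practical systems typically use GF(2^8)), and it assumes the random coefficient matrix is invertible, which holds with high probability.

    import numpy as np

    P = 257  # small prime field for clarity; real RNC typically uses GF(2^8)

    def encode(blocks, rng):
        """Each coded packet is a random linear combination of all blocks."""
        coeffs = rng.integers(0, P, size=(blocks.shape[0], blocks.shape[0]))
        return coeffs, (coeffs @ blocks) % P

    def decode(coeffs, coded):
        """Gauss-Jordan elimination mod P; assumes coeffs is invertible."""
        A, n = np.hstack([coeffs, coded]), coeffs.shape[1]
        for col in range(n):
            piv = next(r for r in range(col, len(A)) if A[r, col] != 0)
            A[[col, piv]] = A[[piv, col]]                     # pivot row swap
            A[col] = (A[col] * pow(int(A[col, col]), -1, P)) % P
            for r in range(len(A)):
                if r != col and A[r, col]:
                    A[r] = (A[r] - A[r, col] * A[col]) % P
        return A[:n, n:]

    rng = np.random.default_rng(0)
    blocks = rng.integers(0, P, size=(4, 8))  # 4 source blocks, 8 symbols each
    coeffs, coded = encode(blocks, rng)
    print(np.array_equal(decode(coeffs, coded), blocks))  # True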
David, Marion; Lécorché, Pascaline; Masse, Maxime; Faucon, Aude; Abouzid, Karima; Gaudin, Nicolas; Varini, Karine; Gassiot, Fanny; Ferracci, Géraldine; Jacquot, Guillaume; Vlieghe, Patrick
2018-01-01
Insufficient membrane penetration of drugs, in particular biotherapeutics, and/or low target specificity remain a major drawback in their efficacy. We propose here the rational characterization and optimization of peptides to be developed as vectors that target cells expressing specific receptors involved in endocytosis or transcytosis. Among the receptors involved in receptor-mediated transport is the LDL receptor. Screening complex phage-displayed peptide libraries on the human LDLR (hLDLR) stably expressed in cell lines led to the characterization of a family of cyclic and linear peptides that specifically bind the hLDLR. The VH411 lead cyclic peptide allowed endocytosis of payloads such as the S-Tag peptide or antibodies into cells expressing the hLDLR. Size reduction and chemical optimization of this lead peptide-vector led to improved receptor affinity. The optimized peptide-vectors were successfully conjugated to cargos of different nature and size, including small organic molecules, siRNAs, peptides, or a protein moiety such as an Fc fragment. We show that in all cases the peptide-vectors retain their binding affinity to the hLDLR and potential for endocytosis. Following i.v. administration in wild type or ldlr-/- mice, an Fc fragment chemically conjugated or fused in C-terminal to peptide-vectors showed significant biodistribution in LDLR-enriched organs. We have thus developed highly versatile peptide-vectors endowed with good affinity for the LDLR as a target receptor. These peptide-vectors have the potential to be further developed for efficient transport of therapeutic or imaging agents into cells (including pathological cells) or organs that express the LDLR. PMID:29485998
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dale, R.; Sáez, D., E-mail: rdale@umh.es, E-mail: diego.saez@uv.es
The vector-tensor (VT) theory of gravitation revisited in this article was studied in previous papers, where it was proved that VT works and deserves attention. New observational data and numerical codes have motivated further development, which is presented here. New research has been planned with the essential aim of proving that current cosmological observations, including Planck data, baryon acoustic oscillations (BAO), and so on, may be explained with VT, a theory which accounts for a kind of dark energy which has the same equation of state as vacuum. New versions of the codes CAMB and COSMOMC have been designed for applications to VT, and the resulting versions have been used to get the cosmological parameters of the VT model at suitable confidence levels. The parameters to be estimated are the same as in general relativity (GR), plus a new parameter D. For D = 0, VT linear cosmological perturbations reduce to those of GR, but the VT background may explain dark energy. The fits between observations and VT predictions lead to nonvanishing |D| upper limits at the 1σ confidence level. The value D = 0 is admissible at this level, but it is not the best-fit value in any case. Results strongly suggest that VT may explain current observations at least as well as GR, with the advantage that, as proved in this paper, VT has an additional parameter which facilitates adjustments to current observational data.
A Newton method for the magnetohydrodynamic equilibrium equations
NASA Astrophysics Data System (ADS)
Oliver, Hilary James
We have developed and implemented a (J, B) space Newton method to solve the full nonlinear three-dimensional magnetohydrodynamic equilibrium equations in toroidal geometry. Various cases have been run successfully, demonstrating significant improvement over Picard iteration, including a 3D stellarator equilibrium at β = 2%. The algorithm first solves the equilibrium force balance equation for the current density J, given a guess for the magnetic field B. This step is taken from the Picard-iterative PIES 3D equilibrium code. Next, we apply Newton's method to Ampere's law by expansion of the functional J(B), which is defined by the first step. An analytic calculation in magnetic coordinates, of how the Pfirsch-Schlüter currents vary in the plasma in response to a small change in the magnetic field, yields the Newton gradient term (analogous to ∇f · δx in Newton's method for f(x) = 0). The algorithm is computationally feasible because we do this analytically, and because the gradient term is flux-surface local when expressed in terms of a vector potential in an A_r = 0 gauge. The equations are discretized by a hybrid spectral/offset-grid finite difference technique, and the leading-order radial dependence is factored from the Fourier coefficients to improve finite-difference accuracy near the polar-like origin. After calculating the Newton gradient term we transfer the equation from the magnetic grid to a fixed background grid, which greatly improves the code's performance.
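A toy contrast between Picard (fixed-point) iteration and Newton's method on a scalar equation illustrates why the Newton formulation pays off; this is purely illustrative and bears no relation to the actual MHD equilibrium equations solved in the thesis.

    import numpy as np

    # Toy only: solve x = cos(x) by Picard iteration vs Newton's method.
    f = lambda x: x - np.cos(x)
    df = lambda x: 1 + np.sin(x)

    x_p = x_n = 1.0
    for k in range(6):
        x_p = np.cos(x_p)             # Picard: linear convergence
        x_n = x_n - f(x_n) / df(x_n)  # Newton: quadratic convergence
        print(k, abs(f(x_p)), abs(f(x_n)))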
Schuller, D J; Fetter, C H; Banaszak, L J; Grant, G A
1989-02-15
The serA gene of Escherichia coli strain K-12, which codes for the cooperative allosteric enzyme D-3-phosphoglycerate dehydrogenase, was inserted into an inducible expression vector which produced phosphoglycerate dehydrogenase as 8% of the soluble protein of E. coli. The purified protein was used to grow several different single crystal forms. One of these, with space group P2(1), appears to contain all four subunits of the tetrameric enzyme in the asymmetric unit and diffracts to sufficient resolution to allow determination of the structure of phosphoglycerate dehydrogenase.
Using domain decomposition in the multigrid NAS parallel benchmark on the Fujitsu VPP500
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, J.C.H.; Lung, H.; Katsumata, Y.
1995-12-01
In this paper, we demonstrate how domain decomposition can be applied to the multigrid algorithm to convert the code for MPP architectures. We also discuss the performance and scalability of this implementation on the new product line of Fujitsu's vector parallel computer, the VPP500. This computer uses Fujitsu's well-known vector processor as the processing element (PE), each rated at 1.6 GFLOPS. The high-speed crossbar network, rated at 800 MB/s, provides the inter-PE communication. The results show that physical domain decomposition is the best way to solve multigrid problems on the VPP500.
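A serial sketch of the domain-decomposition idea: the grid is split into chunks that would each live on a PE, each chunk is smoothed locally, and boundary values are shared between sweeps. The Jacobi smoother, the 1D model problem, and all names are illustrative assumptions, not the paper's multigrid kernels or the VPP500's communication calls.

    import numpy as np

    def jacobi_dd(u, f, h, n_dom, sweeps):
        """Jacobi sweeps for u'' = f, interior split into n_dom chunks."""
        parts = np.array_split(np.arange(1, len(u) - 1), n_dom)
        for _ in range(sweeps):
            new = u.copy()
            for idx in parts:  # each chunk would live on its own PE
                new[idx] = 0.5 * (u[idx - 1] + u[idx + 1] - h * h * f[idx])
            u = new  # the copy plays the role of the one-cell halo exchange
        return u

    n = 65
    x = np.linspace(0.0, 1.0, n)
    u = jacobi_dd(np.zeros(n), -np.ones(n), x[1] - x[0], n_dom=4, sweeps=5000)
    print(np.abs(u - 0.5 * x * (1 - x)).max())  # small; shrinks with sweeps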
Exact simulation of polarized light reflectance by particle deposits
NASA Astrophysics Data System (ADS)
Ramezan Pour, B.; Mackowski, D. W.
2015-12-01
The use of polarimetric light reflection measurements as a means of identifying the physical and chemical characteristics of particulate materials obviously relies on an accurate model of predicting the effects of particle size, shape, concentration, and refractive index on polarized reflection. The research examines two methods for prediction of reflection from plane-parallel layers of wavelength-sized particles. The first method is based on an exact superposition solution to Maxwell's time harmonic wave equations for a deposit of spherical particles that are exposed to a plane incident wave. We use a FORTRAN-90 implementation of this solution (the Multiple Sphere T Matrix (MSTM) code), coupled with parallel computational platforms, to directly simulate the reflection from particle layers. The second method examined is based upon the vector radiative transport equation (RTE). Mie theory is used in our RTE model to predict the extinction coefficient, albedo, and scattering phase function of the particles, and the solution of the RTE is obtained from the adding-doubling method applied to a plane-parallel configuration. Our results show that the MSTM and RTE predictions of the Mueller matrix elements converge when the particle volume fraction in the particle layer decreases below around five percent. At higher volume fractions the RTE can yield results that, depending on the particle size and refractive index, significantly depart from the exact predictions. The particle regimes which lead to dependent scattering effects, and the application of methods to correct the vector RTE for particle interaction, will be discussed.
3D Face Recognition Based on Multiple Keypoint Descriptors and Sparse Representation
Zhang, Lin; Ding, Zhixuan; Li, Hongyu; Shen, Ying; Lu, Jianwei
2014-01-01
Recent years have witnessed a growing interest in developing methods for 3D face recognition. However, 3D scans often suffer from the problems of missing parts, large facial expressions, and occlusions. To be useful in real-world applications, a 3D face recognition approach should be able to handle these challenges. In this paper, we propose a novel general approach to deal with the 3D face recognition problem by making use of multiple keypoint descriptors (MKD) and the sparse representation-based classification (SRC). We call the proposed method 3DMKDSRC for short. Specifically, with 3DMKDSRC, each 3D face scan is represented as a set of descriptor vectors extracted from keypoints by meshSIFT. Descriptor vectors of gallery samples form the gallery dictionary. Given a probe 3D face scan, its descriptors are extracted at first and then its identity can be determined by using a multitask SRC. The proposed 3DMKDSRC approach does not require the pre-alignment between two face scans and is quite robust to the problems of missing data, occlusions and expressions. Its superiority over the other leading 3D face recognition schemes has been corroborated by extensive experiments conducted on three benchmark databases, Bosphorus, GavabDB, and FRGC2.0. The Matlab source code for 3DMKDSRC and the related evaluation results are publicly available at http://sse.tongji.edu.cn/linzhang/3dmkdsrcface/3dmkdsrc.htm. PMID:24940876
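A bare-bones sketch of sparse-representation-based classification in the spirit the abstract describes: gallery descriptors form a dictionary, and a probe is assigned to the class whose atoms best reconstruct it. A greedy orthogonal matching pursuit stands in for the paper's multitask SRC, and the random "descriptors" are toy data.

    import numpy as np

    def omp(D, y, k):
        """Greedily pick k dictionary columns; return indices and coefs."""
        idx, r = [], y.copy()
        for _ in range(k):
            idx.append(int(np.argmax(np.abs(D.T @ r))))
            coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
            r = y - D[:, idx] @ coef
        return idx, coef

    def classify(D, labels, y, k=3):
        idx, coef = omp(D, y, k)
        best, best_res = None, np.inf
        for c in set(labels):  # residual using only each class's atoms
            keep = [i for i, j in enumerate(idx) if labels[j] == c]
            res = np.linalg.norm(y - D[:, [idx[i] for i in keep]] @ coef[keep])
            if res < best_res:
                best, best_res = c, res
        return best

    rng = np.random.default_rng(1)
    D = rng.normal(size=(16, 20))
    D /= np.linalg.norm(D, axis=0)            # unit-norm dictionary atoms
    labels = [i // 5 for i in range(20)]      # 4 classes, 5 atoms each
    y = D[:, 7] + 0.05 * rng.normal(size=16)  # probe close to a class-1 atom
    print(classify(D, labels, y))             # 1, with high probability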
[Probable speciations by "host-vector 'fidelity'": 14 species of Plasmodium from magpies].
Chavatte, J M; Chiron, F; Chabaud, A; Landau, I
2007-03-01
Thirty-three magpies resident in two parks close to Paris were investigated for the presence of Plasmodium parasites. The majority of the birds were found to be infected with multiple parasite species. A total of 14 species were observed; of these, 10 were novel and consequently described, and two could not be assigned with confidence. It is hypothesized that the unexpected abundance of species is due to a phenomenon which we term "host-vector 'fidelisation'". Indeed, the combination of the eco-biological characteristics of the host (mating pairs in contiguous, but strictly defined, territories) with those of the vector (numerous Aedes species with distinct behavior) would generate fragmentation of the niches. This type of isolation overlays others known for parasitic populations (geographical, circadian, microlocalisations), leading to the formation of independent host-parasite niches which in turn lead to speciation.
Simple modification of Oja rule limits L1-norm of weight vector and leads to sparse connectivity.
Aparin, Vladimir
2012-03-01
This letter describes a simple modification of the Oja learning rule, which asymptotically constrains the L1-norm of an input weight vector instead of the L2-norm as in the original rule. This constraining is local as opposed to commonly used instant normalizations, which require the knowledge of all input weights of a neuron to update each one of them individually. The proposed rule converges to a weight vector that is sparser (has more zero weights) than the vector learned by the original Oja rule with or without the zero bound, which could explain the developmental synaptic pruning.
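The abstract does not spell out the modified update, so the sketch below is only a plausible illustration: it runs the original Oja rule alongside a hypothetical variant in which the decay term's w is replaced by sign(w), one simple local change that drives the L1-norm, rather than the L2-norm, of the weights toward a constant.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(5000, 8)) * np.array([3, 2, 1, 1, 1, 1, 1, 1.0])

    w2 = 0.1 * rng.normal(size=8)  # original Oja rule
    w1 = w2.copy()                 # hypothetical L1-constraining variant
    eta = 0.005
    for x in X:
        y2, y1 = x @ w2, x @ w1
        w2 += eta * y2 * (x - y2 * w2)           # decay ~ y^2 w: L2 norm -> 1
        w1 += eta * y1 * (x - y1 * np.sign(w1))  # decay ~ y^2 sign(w): L1 analog

    print(np.linalg.norm(w2), np.abs(w1).sum())  # both settle near 1 here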
NASA Technical Reports Server (NTRS)
1975-01-01
Two digital video data compression systems directly applicable to the Space Shuttle TV Communication System were described: (1) For the uplink, a low-rate monochrome data compressor is used. The compression is achieved by using a motion detection technique in the Hadamard domain. To transform the variable source rate into a fixed rate, an adaptive rate buffer is provided. (2) For the downlink, a color data compressor is considered. The compression is achieved first by intra-color transformation of the original signal vector into a vector which has lower information entropy. Then two-dimensional data compression techniques are applied to the Hadamard-transformed components of this last vector. Mathematical models and data reliability analyses were also provided for the above video data compression techniques transmitted over a channel-coded Gaussian channel. It was shown that substantial gains can be achieved by the combination of video source and channel coding.
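To illustrate the Hadamard-domain compression step (not the Shuttle system's motion detection or rate buffering), the sketch below transforms a smooth 8x8 block, keeps only the significant coefficients, and reconstructs the block from them; scipy's hadamard helper is assumed.

    import numpy as np
    from scipy.linalg import hadamard

    H = hadamard(8) / np.sqrt(8)    # orthonormal 8x8 Hadamard matrix
    block = np.add.outer(np.arange(8.0), np.arange(8.0))  # smooth test block
    coeffs = H @ block @ H.T        # 2D Hadamard transform
    mask = np.abs(coeffs) >= 1.0    # keep only the significant coefficients
    approx = H.T @ (coeffs * mask) @ H
    print(mask.sum(), np.abs(block - approx).max())  # 7 of 64 kept, ~0 error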
Using Grid Cells for Navigation
Bush, Daniel; Barry, Caswell; Manson, Daniel; Burgess, Neil
2015-01-01
Mammals are able to navigate to hidden goal locations by direct routes that may traverse previously unvisited terrain. Empirical evidence suggests that this “vector navigation” relies on an internal representation of space provided by the hippocampal formation. The periodic spatial firing patterns of grid cells in the hippocampal formation offer a compact combinatorial code for location within large-scale space. Here, we consider the computational problem of how to determine the vector between start and goal locations encoded by the firing of grid cells when this vector may be much longer than the largest grid scale. First, we present an algorithmic solution to the problem, inspired by the Fourier shift theorem. Second, we describe several potential neural network implementations of this solution that combine efficiency of search and biological plausibility. Finally, we discuss the empirical predictions of these implementations and their relationship to the anatomy and electrophysiology of the hippocampal formation. PMID:26247860
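The Fourier-shift idea behind the algorithmic solution can be sketched in one dimension: the displacement between "current" and "goal" firing patterns of a periodic code shows up as a phase shift, so the peak of an FFT-based cross-correlation recovers the goal vector. The two grid scales, sizes, and shift below are illustrative assumptions, and this is not the paper's neural implementation.

    import numpy as np

    n = 260
    x = np.arange(n)
    pattern = lambda shift: (np.cos(2 * np.pi * (x - shift) / 20)
                             + np.cos(2 * np.pi * (x - shift) / 13))
    current, goal = pattern(0), pattern(57)  # goal displaced by 57 units

    # Circular cross-correlation via FFT; its peak recovers the displacement,
    # though each single scale alone would be ambiguous beyond its period.
    corr = np.fft.ifft(np.fft.fft(goal) * np.conj(np.fft.fft(current))).real
    print(int(np.argmax(corr)))  # -> 57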
NASA Technical Reports Server (NTRS)
Reisine, H.; Simpson, J. I.; Henn, V.
1988-01-01
Experiments were carried out to determine anatomically the planes of the semicircular canals of two juvenile rhesus monkeys, using plastic casts of the semicircular canals, and the anatomical measurements were related to the directional coding of neural signals transmitted by primary afferents innervating the same semicircular canals. In the experiments, animals were prepared for monitoring eye position by the implantation of silver-silver chloride electrodes into the bony orbit. Following the recording of semicircular canal afferent activity, the animals were sacrificed; plastic casting resin was injected into the bony canals; and, when the temporal bone was demineralized and removed, the coordinates of points spaced along the circumference of the canal casts were measured. A comparison of the sensitivity vectors determined in these experiments and the anatomical measures showed that the average difference between a sensitivity vector and its respective normal vector was 6.3 deg.
Automated Vectorization of Decision-Based Algorithms
NASA Technical Reports Server (NTRS)
James, Mark
2006-01-01
Virtually all existing vectorization algorithms are designed to only analyze the numeric properties of an algorithm and distribute those elements across multiple processors. This advances the state of the practice because it is the only known system, at the time of this reporting, that takes high-level statements and analyzes them for their decision properties and converts them to a form that allows them to automatically be executed in parallel. The software takes a high-level source program that describes a complex decision- based condition and rewrites it as a disjunctive set of component Boolean relations that can then be executed in parallel. This is important because parallel architectures are becoming more commonplace in conventional systems and they have always been present in NASA flight systems. This technology allows one to take existing condition-based code and automatically vectorize it so it naturally decomposes across parallel architectures.
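The general idea, rewriting branchy decision logic as a disjunctive set of Boolean relations that evaluate element-wise in parallel, can be sketched by hand; the report's tool performs this rewriting automatically from high-level source, so the manual translation below is only an illustration.

    import numpy as np

    def scalar_rule(t, p):  # original branchy decision logic
        if t > 100 and p < 0.5:
            return 2
        elif t > 100 or p > 0.9:
            return 1
        return 0

    t = np.array([120.0, 120.0, 50.0, 80.0])
    p = np.array([0.2, 0.7, 0.95, 0.4])

    c2 = (t > 100) & (p < 0.5)           # each relation is a parallel mask
    c1 = ~c2 & ((t > 100) | (p > 0.9))
    out = np.select([c2, c1], [2, 1], default=0)
    print(out, [scalar_rule(*z) for z in zip(t, p)])  # identical results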
Tomo3D 2.0--exploitation of advanced vector extensions (AVX) for 3D reconstruction.
Agulleiro, Jose-Ignacio; Fernandez, Jose-Jesus
2015-02-01
Tomo3D is a program for fast tomographic reconstruction on multicore computers. Its high speed stems from code optimization, vectorization with Streaming SIMD Extensions (SSE), multithreading, and optimization of disk access. Recently, Advanced Vector eXtensions (AVX) have been introduced in the x86 processor architecture. Compared to SSE, AVX doubles the number of simultaneous operations, thus pointing to a potential twofold gain in speed. However, in practice, achieving this potential is extremely difficult. Here, we provide a technical description and an assessment of the optimizations included in Tomo3D to take advantage of AVX instructions. Tomo3D 2.0 allows huge reconstructions to be calculated on standard computers in a matter of minutes. Thus, it will be a valuable tool for electron tomography studies with increasing resolution needs. Copyright © 2014 Elsevier Inc. All rights reserved.
Matrix-vector multiplication using digital partitioning for more accurate optical computing
NASA Technical Reports Server (NTRS)
Gary, C. K.
1992-01-01
Digital partitioning offers a flexible means of increasing the accuracy of an optical matrix-vector processor. This algorithm can be implemented with the same architecture required for a purely analog processor, which gives optical matrix-vector processors the ability to perform high-accuracy calculations at speeds comparable with or greater than electronic computers, as well as the ability to perform analog operations at a much greater speed. Digital partitioning is compared with digital multiplication by analog convolution, residue number systems, and redundant number representation in terms of the size and the speed required for an equivalent throughput, as well as in terms of the hardware requirements. Digital partitioning and digital multiplication by analog convolution are found to be the most efficient algorithms if coding time and hardware are considered, and the architecture for digital partitioning permits the use of analog computations to provide the greatest throughput for a single processor.
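Digital partitioning itself is easy to sketch: the input vector is split into low-precision digit planes, each plane is multiplied by the matrix in what would be a separate analog pass, and the partial products are recombined with digit weights. The base, digit count, and integer data below are illustrative choices, not parameters from the paper.

    import numpy as np

    def partitioned_matvec(M, v, base=4, n_digits=4):
        """Exact integer M @ v from low-precision digit-plane passes."""
        digits = [(v // base**k) % base for k in range(n_digits)]
        partial = [M @ d for d in digits]   # each would be one analog pass
        return sum(base**k * p for k, p in enumerate(partial))

    rng = np.random.default_rng(0)
    M = rng.integers(0, 10, size=(3, 5))
    v = rng.integers(0, 4**4, size=5)       # entries fit in 4 base-4 digits
    print(np.array_equal(partitioned_matvec(M, v), M @ v))  # True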
Deep Learning for Automated Extraction of Primary Sites from Cancer Pathology Reports
Qiu, John; Yoon, Hong-Jun; Fearn, Paul A.; ...
2017-05-03
Pathology reports are a primary source of information for cancer registries, which process high volumes of free-text reports annually. Information extraction and coding is a manual, labor-intensive process. In this study we investigated deep learning, specifically a convolutional neural network (CNN), for extracting ICD-O-3 topographic codes from a corpus of breast and lung cancer pathology reports. We performed two experiments, using a CNN and a more conventional term frequency vector approach, to assess the effects of class prevalence and inter-class transfer learning. The experiments were based on a set of 942 pathology reports with human expert annotations as the gold standard. CNN performance was compared against a more conventional term frequency vector space approach. We observed that the deep learning models consistently outperformed the conventional approaches in the class prevalence experiment, resulting in micro- and macro-F score increases of up to 0.132 and 0.226, respectively, when class labels were well populated. Specifically, the best performing CNN achieved a micro-F score of 0.722 over 12 ICD-O-3 topography codes. Transfer learning provided a consistent but modest performance boost for the deep learning methods, but trends were contingent on the CNN method and cancer site. Finally, these encouraging results demonstrate the potential of deep learning for automated abstraction of pathology reports.
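Below is a minimal version of the conventional term-frequency-vector baseline the study compares its CNN against, assuming scikit-learn is available; the tiny corpus and the topography labels are invented for illustration, while the real task uses 942 annotated reports and ICD-O-3 codes.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import f1_score

    reports = ["infiltrating ductal carcinoma of the left breast",
               "adenocarcinoma involving the upper lobe of the right lung",
               "lobular carcinoma in situ, breast",
               "squamous cell carcinoma, lung, lower lobe"]
    codes = ["C50", "C34", "C50", "C34"]  # invented topography labels

    X = TfidfVectorizer().fit_transform(reports)
    clf = LogisticRegression().fit(X, codes)
    print(f1_score(codes, clf.predict(X), average="micro"))  # training fit only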
A Computational Study of a New Dual Throat Fluidic Thrust Vectoring Nozzle Concept
NASA Technical Reports Server (NTRS)
Deere, Karen A.; Berrier, Bobby L.; Flamm, Jeffrey D.; Johnson, Stuart K.
2005-01-01
A computational investigation of a two-dimensional nozzle was completed to assess the use of fluidic injection to manipulate flow separation and cause thrust vectoring of the primary jet thrust. The nozzle was designed with a recessed cavity to enhance the throat-shifting method of fluidic thrust vectoring. Several design cycles with the structured-grid computational fluid dynamics code PAB3D and with experiments in the NASA Langley Research Center Jet Exit Test Facility have been completed to guide the nozzle design and analyze performance. This paper presents computational results on potential design improvements for the best experimental configuration tested to date. Nozzle design variables included cavity divergence angle, cavity convergence angle, and upstream throat height. Pulsed fluidic injection was also investigated for its ability to decrease mass flow requirements. Internal nozzle performance (wind-off conditions) and thrust vector angles were computed for several configurations over a range of nozzle pressure ratios from 2 to 7, with the fluidic injection flow rate equal to 3 percent of the primary flow rate. Computational results indicate that increasing the cavity divergence angle beyond 10° is detrimental to thrust vectoring efficiency, while increasing the cavity convergence angle from 20° to 30° improves thrust vectoring efficiency at nozzle pressure ratios greater than 2, albeit at the expense of discharge coefficient. Pulsed injection was no more efficient than steady injection for the Dual Throat Nozzle concept.
Ice Shape Characterization Using Self-Organizing Maps
NASA Technical Reports Server (NTRS)
McClain, Stephen T.; Tino, Peter; Kreeger, Richard E.
2011-01-01
A method for characterizing ice shapes using a self-organizing map (SOM) technique is presented. Self-organizing maps are neural-network techniques for representing noisy, multi-dimensional data aligned along a lower-dimensional and possibly nonlinear manifold. For a large set of noisy data, each element of a finite set of codebook vectors is iteratively moved in the direction of the data closest to the winner codebook vector. Through successive iterations, the codebook vectors begin to align with the trends of the higher-dimensional data. In information processing, the intent of SOM methods is to transmit the codebook vectors, which contain far fewer elements and require much less memory or bandwidth than the original noisy data set. When applied to airfoil ice accretion shapes, the properties of the codebook vectors and the statistical nature of the SOM methods allow for a quantitative comparison of experimentally measured mean or average ice shapes to ice shapes predicted using computer codes such as LEWICE. The nature of the codebook vectors also enables grid generation and surface roughness descriptions for use with the discrete-element roughness approach. In the present study, SOM characterizations are applied to a rime ice shape, a glaze ice shape at an angle of attack, a bi-modal glaze ice shape, and a multi-horn glaze ice shape. Improvements and future explorations will be discussed.
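A one-dimensional SOM sketch of the codebook update the abstract describes: the winner, and early on its neighbors, moves toward each data sample, so a handful of codebook vectors comes to summarize a noisy closed curve. The ring topology, learning rates, and synthetic data are illustrative assumptions, not the study's ice-shape processing.

    import numpy as np

    rng = np.random.default_rng(0)
    t = rng.uniform(0, 2 * np.pi, 400)  # noisy closed curve as stand-in data
    data = np.c_[np.cos(t), 0.6 * np.sin(t)] + 0.05 * rng.normal(size=(400, 2))

    n_cb = 12
    codebook = 0.1 * rng.normal(size=(n_cb, 2))
    for epoch in range(60):
        lr, sigma = 0.3 * 0.95**epoch, max(2.0 * 0.9**epoch, 0.5)
        for s in data:
            win = np.argmin(((codebook - s) ** 2).sum(axis=1))  # winner
            d = np.abs(np.arange(n_cb) - win)
            d = np.minimum(d, n_cb - d)          # ring distance to the winner
            h = np.exp(-((d / sigma) ** 2))      # neighborhood weights
            codebook += lr * h[:, None] * (s - codebook)
    print(codebook.round(2))  # 12 vectors summarizing 400 noisy points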
NASA Marshall Space Flight Center solar observatory report, January - June 1993
NASA Technical Reports Server (NTRS)
Smith, J. E.
1993-01-01
This report provides a description of the NASA Marshall Space Flight Center's Solar Vector Magnetograph Facility and gives a summary of its observations and data reduction during January-June 1993. The systems that make up the facility are a magnetograph telescope, an H-alpha telescope, a Questar telescope, and a computer code.
NASA Marshall Space Flight Center Solar Observatory report, July - October 1993
NASA Technical Reports Server (NTRS)
Smith, J. E.
1994-01-01
This report provides a description of the NASA Marshall Space Flight Center's Solar Vector Magnetograph Facility and gives a summary of its observations and data reduction during June-October 1993. The systems that make up the facility are a magnetograph telescope, an H-alpha telescope, a Questar telescope, and a computer code.
NASA Marshall Space Flight Center Solar Observatory report, March - May 1994
NASA Technical Reports Server (NTRS)
Smith, J. E.
1994-01-01
This report provides a description of the NASA Marshall Space Flight Center's Solar Vector Magnetograph Facility and gives a summary of its observations and data reduction during March-May 1994. The systems that make up the facility are a magnetograph telescope, an H-alpha telescope, a Questar telescope, and a computer code.
Steens, Jennifer; Zuk, Melanie; Benchellal, Mohamed; Bornemann, Lea; Teichweyde, Nadine; Hess, Julia; Unger, Kristian; Görgens, André; Klump, Hannes; Klein, Diana
2017-04-11
The vascular wall (VW) serves as a niche for mesenchymal stem cells (MSCs). In general, tissue-specific stem cells differentiate mainly to the tissue type from which they derive, indicating that there is a certain code or priming within the cells as determined by the tissue of origin. Here we report the in vitro generation of VW-typical MSCs from induced pluripotent stem cells (iPSCs), based on a VW-MSC-specific gene code. Using a lentiviral vector expressing the so-called Yamanaka factors, we reprogrammed tail dermal fibroblasts from transgenic mice containing the GFP gene integrated into the Nestin-locus (NEST-iPSCs) to facilitate lineage tracing after subsequent MSC differentiation. A lentiviral vector expressing a small set of recently identified human VW-MSC-specific HOX genes then induced MSC differentiation. This direct programming approach successfully mediated the generation of VW-typical MSCs with classical MSC characteristics, both in vitro and in vivo. Copyright © 2017 The Author(s). Published by Elsevier Inc. All rights reserved.
Comparison of l₁-Norm SVR and Sparse Coding Algorithms for Linear Regression.
Zhang, Qingtian; Hu, Xiaolin; Zhang, Bo
2015-08-01
Support vector regression (SVR) is a popular function estimation technique based on Vapnik's concept of support vector machine. Among many variants, the l1-norm SVR is known to be good at selecting useful features when the features are redundant. Sparse coding (SC) is a technique widely used in many areas and a number of efficient algorithms are available. Both l1-norm SVR and SC can be used for linear regression. In this brief, the close connection between the l1-norm SVR and SC is revealed and some typical algorithms are compared for linear regression. The results show that the SC algorithms outperform the Newton linear programming algorithm, an efficient l1-norm SVR algorithm, in efficiency. The algorithms are then used to design the radial basis function (RBF) neural networks. Experiments on some benchmark data sets demonstrate the high efficiency of the SC algorithms. In particular, one of the SC algorithms, the orthogonal matching pursuit is two orders of magnitude faster than a well-known RBF network designing algorithm, the orthogonal least squares algorithm.
Anti-Epidermal Growth Factor Receptor Gene Therapy for Glioblastoma
Hicks, Martin J.; Chiuchiolo, Maria J.; Ballon, Douglas; Dyke, Jonathan P.; Aronowitz, Eric; Funato, Kosuke; Tabar, Viviane; Havlicek, David; Fan, Fan; Sondhi, Dolan; Kaminsky, Stephen M.; Crystal, Ronald G.
2016-01-01
Glioblastoma multiforme (GBM) is the most common and aggressive primary intracranial brain tumor in adults with a mean survival of 14 to 15 months. Aberrant activation of the epidermal growth factor receptor (EGFR) plays a significant role in GBM progression, with amplification or overexpression of EGFR in 60% of GBM tumors. To target EGFR expressed by GBM, we have developed a strategy to deliver the coding sequence for cetuximab, an anti-EGFR antibody, directly to the CNS using an adeno-associated virus serotype rh.10 gene transfer vector. The data demonstrate that single, local delivery of an anti-EGFR antibody by an AAVrh.10 vector coding for cetuximab (AAVrh.10Cetmab) reduces GBM tumor growth and increases survival in xenograft mouse models of a human GBM EGFR-expressing cell line and patient-derived GBM. AAVrh10.CetMab-treated mice displayed a reduction in cachexia, a significant decrease in tumor volume and a prolonged survival following therapy. Adeno-associated virus-directed delivery of a gene encoding a therapeutic anti-EGFR monoclonal antibody may be an effective strategy to treat GBM. PMID:27711187
Embedded 3D shape measurement system based on a novel spatio-temporal coding method
NASA Astrophysics Data System (ADS)
Xu, Bin; Tian, Jindong; Tian, Yong; Li, Dong
2016-11-01
Structured light measurement has been widely used since the 1970s in industrial component detection, reverse engineering, 3D molding, robot navigation, medicine, and many other fields. In order to satisfy the demand for high-speed, high-precision, and high-resolution 3D measurement on embedded systems, new patterns combining binary and Gray coding principles in space are designed and projected onto the object surface in sequence. Each pixel corresponds to a designed sequence of gray values in the time domain, which is treated as a feature vector. The unique gray vector is then dimensionally reduced to a scalar which can be used as characteristic information for binocular matching. In this method, the number of projected structured light patterns is reduced, and the time-consuming phase unwrapping of traditional phase-shift methods is avoided. The algorithm is implemented on a DM3730 embedded system for 3D measurement, which consists of an ARM and a DSP core and has strong digital signal processing capability. Experimental results demonstrated the feasibility of the proposed method.
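The binary/Gray-code indexing that such patterns build on can be sketched compactly: each projector column receives a bit sequence across the projected patterns, and the sequence observed at a camera pixel decodes back to a unique column index. The pattern count and width are illustrative, and the paper's specific spatio-temporal code and DM3730 pipeline are not reproduced.

    import numpy as np

    n_bits, width = 6, 64
    cols = np.arange(width)
    gray = cols ^ (cols >> 1)  # binary-reflected Gray code per column
    patterns = (gray[None, :] >> np.arange(n_bits)[:, None]) & 1  # one row each

    def decode(bit_sequence):
        """Recover a column index from the bit sequence a pixel observes."""
        g = sum(int(b) << k for k, b in enumerate(bit_sequence))
        n = 0
        while g:  # Gray-to-binary conversion by XOR-folding
            n ^= g
            g >>= 1
        return n

    print(decode(patterns[:, 37]))  # -> 37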
T-cell receptor transfer into human T cells with ecotropic retroviral vectors.
Koste, L; Beissert, T; Hoff, H; Pretsch, L; Türeci, Ö; Sahin, U
2014-05-01
Adoptive T-cell transfer for cancer immunotherapy requires genetic modification of T cells with recombinant T-cell receptors (TCRs). Amphotropic retroviral vectors (RVs) used for TCR transduction for this purpose are considered safe in principle. Despite this, TCR-coding and packaging vectors could theoretically recombine to produce replication competent vectors (RCVs), and transduced T-cell preparations must be proven free of RCV. To eliminate the need for RCV testing, we transduced human T cells with ecotropic RVs so that potential RCVs would be non-infectious for human cells. We show that transfection of synthetic messenger RNA encoding murine cationic amino-acid transporter 1 (mCAT-1), the receptor for murine retroviruses, enables efficient transient ecotropic transduction of human T cells. mCAT-1-dependent transduction was more efficient than amphotropic transduction performed in parallel, and preferentially targeted naive T cells. Moreover, we demonstrate that ecotropic TCR transduction results in antigen-specific restimulation of primary human T cells. Thus, ecotropic RVs represent a versatile, safe and potent tool to prepare T cells for adoptive transfer.
Solforosi, Laura; Mancini, Nicasio; Canducci, Filippo; Clementi, Nicola; Sautto, Giuseppe Andrea; Diotti, Roberta Antonia; Clementi, Massimo; Burioni, Roberto
2012-07-01
A novel phagemid vector, named pCM, was optimized for the cloning and display of antibody fragment (Fab) libraries on the surface of filamentous phage. This vector contains two long DNA "stuffer" fragments for easier differentiation of the correctly cut forms of the vector. Moreover, in pCM the fragment at the heavy-chain cloning site contains an acid phosphatase-encoding gene allowing an easy distinction of the Escherichia coli cells containing the unmodified form of the phagemid versus the heavy-chain fragment coding cDNA. In pCM transcription of heavy-chain Fd/gene III and light chain is driven by a single lacZ promoter. The light chain is directed to the periplasm by the ompA signal peptide, whereas the heavy-chain Fd/coat protein III is trafficked by the pelB signal peptide. The phagemid pCM was used to generate a human combinatorial phage display antibody library that allowed the selection of a monoclonal Fab fragment antibody directed against the nucleoprotein (NP) of Influenza A virus.
Modeling of Interactions of Ablated Plumes
2008-02-01
The code was tested and verified using the Sedov-Taylor explosion problem. A 300 x 300 grid is used so that a single code run takes 30 minutes. [OCR residue from figures removed; recoverable captions: (a) pressure contours in still air and (b) temperature contours with the vector field at 20 km; Figure 9: formation of secondary shock waves.]
NASA Technical Reports Server (NTRS)
Rhodes, J. A.; Tiwari, S. N.; Vonlavante, E.
1988-01-01
A comparison of flow separation in transonic flows is made using various computational schemes which solve the Euler and the Navier-Stokes equations of fluid mechanics. The flows examined are computed using several simple two-dimensional configurations, including a backward-facing step and a bump in a channel. Comparisons of the results obtained using shock-fitting and flux-vector-splitting methods are presented, and the results obtained using the Euler codes are compared to results on the same configurations using a code which solves the Navier-Stokes equations.
Development of a CRAY 1 version of the SINDA program. [thermo-structural analyzer program
NASA Technical Reports Server (NTRS)
Juba, S. M.; Fogerson, P. E.
1982-01-01
The SINDA thermal analyzer program was transferred from the UNIVAC 1110 computer to a CYBER and then to a CRAY 1. Significant changes to the code of the program were required in order to execute efficiently on the CYBER and CRAY. The program was tested on the CRAY using a thermal math model of the Shuttle which was too large to run on either the UNIVAC or the CYBER. An effort was then begun to further modify the code of SINDA in order to make effective use of the vector capabilities of the CRAY.
NASA Astrophysics Data System (ADS)
Amaral, J. T.; Becker, V. M.
2018-05-01
We investigate ρ vector meson production in e p collisions at HERA with leading neutrons in the dipole formalism. The interaction of the dipole and the pion is described in a mixed-space approach, in which the dipole-pion scattering amplitude is given by the Marquet-Peschanski-Soyez saturation model, which is based on the traveling wave solutions of the nonlinear Balitsky-Kovchegov equation. We estimate the magnitude of the absorption effects and compare our results with a previous analysis of the same process in full coordinate space. In contrast with this approach, the present study leads to absorption K factors in the range of those predicted by previous theoretical studies on semi-inclusive processes.
Parallel Semi-Implicit Spectral Element Atmospheric Model
NASA Astrophysics Data System (ADS)
Fournier, A.; Thomas, S.; Loft, R.
2001-05-01
The shallow-water equations (SWE) have long been used to test atmospheric-modeling numerical methods. The SWE contain essential wave-propagation and nonlinear effects of more complete models. We present a semi-implicit (SI) improvement of the Spectral Element Atmospheric Model to solve the SWE (SEAM, Taylor et al. 1997, Fournier et al. 2000, Thomas & Loft 2000). SE methods are h-p finite element methods combining the geometric flexibility of size-h finite elements with the accuracy of degree-p spectral methods. Our work suggests that exceptional parallel-computation performance is achievable by a General-Circulation-Model (GCM) dynamical core, even at modest climate-simulation resolutions (>1°). The code derivation involves weak variational formulation of the SWE, Gauss(-Lobatto) quadrature over the collocation points, and Legendre cardinal interpolators. Appropriate weak variation yields a symmetric positive-definite Helmholtz operator. To meet the Ladyzhenskaya-Babuska-Brezzi inf-sup condition and avoid spurious modes, we use a staggered grid. The SI scheme combines leapfrog and Crank-Nicolson schemes for the nonlinear and linear terms, respectively. The localization of operations to elements ideally fits the method to cache-based microprocessor computer architectures: derivatives are computed as collections of small (8x8), naturally cache-blocked matrix-vector products. SEAM also has desirable boundary-exchange communication, like finite-difference models. Timings on the IBM SP and Compaq ES40 supercomputers indicate that the SI code (20-min timestep) requires 1/3 the CPU time of the explicit code (2-min timestep) for T42 resolutions. Both codes scale nearly linearly out to 400 processors. We achieved single-processor performance up to 30% of peak for both codes on the 375-MHz IBM Power-3 processors. Fast computation and linear scaling lead to a useful climate-simulation dycore only if enough model time is computed per unit wall-clock time. An efficient SI solver is essential to substantially increase this rate. Parallel preconditioning for an iterative conjugate-gradient elliptic solver is described. We are building a GCM dycore capable of 200 GFLOPS sustained performance on clustered RISC/cache architectures using hybrid MPI/OpenMP programming.
Fang, Jiansong; Yang, Ranyao; Gao, Li; Zhou, Dan; Yang, Shengqian; Liu, Ai-Lin; Du, Guan-hua
2013-11-25
Butyrylcholinesterase (BuChE, EC 3.1.1.8) is an important pharmacological target for Alzheimer's disease (AD) treatment. However, the currently available BuChE inhibitor screening assays are expensive, labor-intensive, and compound-dependent. It is necessary to develop robust in silico methods to predict the activities of BuChE inhibitors for lead identification. In this investigation, support vector machine (SVM) models and naive Bayesian models were built to discriminate BuChE inhibitors (BuChEIs) from noninhibitors. Each molecule was initially represented by 1870 structural descriptors (1235 from ADRIANA.Code, 334 from MOE, and 301 from Discovery Studio). Correlation analysis and a stepwise variable selection method were applied to identify activity-related descriptors for the prediction models. Additionally, structural fingerprint descriptors were added to improve the predictive ability of the models, which was measured by cross-validation, a test set validation with 1001 compounds, and an external test set validation with 317 diverse chemicals. The best two models gave Matthews correlation coefficients of 0.9551 and 0.9550 for the test set and 0.9132 and 0.9221 for the external test set. To demonstrate the practical applicability of the models in virtual screening, we screened an in-house data set with 3601 compounds, and 30 compounds were selected for further bioactivity assay. The assay results showed that 10 out of 30 compounds exerted significant BuChE inhibitory activities with IC50 values ranging from 0.32 to 22.22 μM, among which three new scaffolds as BuChE inhibitors were identified for the first time. To the best of our knowledge, this is the first report on BuChE inhibitors using machine learning approaches. The models generated from the SVM and naive Bayesian approaches successfully predicted BuChE inhibitors. The study proved the feasibility of a new method for predicting the bioactivities of ligands and discovering novel lead compounds.
Demonstration of a terahertz pure vector beam by tailoring geometric phase.
Wakayama, Toshitaka; Higashiguchi, Takeshi; Sakaue, Kazuyuki; Washio, Masakazu; Otani, Yukitoshi
2018-06-06
We demonstrate the creation of a vector beam by tailoring the geometric phase of left- and right-circularly polarized beams. Such a vector beam with a uniform phase has not been demonstrated before because a vortex phase remains in the beam. We focus on vortex phase cancellation to generate vector beams in terahertz regions, and measure the geometric phase of the beam and its spatial distribution of polarization. We conduct proof-of-principle experiments for producing a vector beam with radial polarization and uniform phase at 0.36 THz. We determine the vortex phase of the vector beam to be below 4%, thus highlighting the extendibility and availability of the proposed concept to the super broadband spectral region from ultraviolet to terahertz. The extended range of our proposed techniques could lead to breakthroughs in the fields of microscopy, chiral nano-materials, and quantum information science.
Rate determination from vector observations
NASA Technical Reports Server (NTRS)
Weiss, Jerold L.
1993-01-01
Vector observations are a common class of attitude data provided by a wide variety of attitude sensors. Attitude determination from vector observations is a well-understood process, and numerous algorithms such as the TRIAD algorithm exist. These algorithms require measurement of the line of sight (LOS) vector to reference objects and knowledge of the LOS directions in some predetermined reference frame. Once attitude is determined, it is a simple matter to synthesize vehicle rate using some form of lead-lag filter and then use it for vehicle stabilization. Many situations arise, however, in which rate knowledge is required but knowledge of the nominal LOS directions is not available. This paper presents two methods for determining spacecraft angular rates from vector observations without a priori knowledge of the vector directions. The first approach uses an extended Kalman filter with a spacecraft dynamic model and a kinematic model representing the motion of the observed LOS vectors. The second approach uses a 'differential' TRIAD algorithm to compute the incremental direction cosine matrix, from which vehicle rate is then derived.
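For reference, the classical TRIAD construction the paper builds on is compact enough to sketch: two measured unit vectors in the body frame plus their known reference directions determine the attitude matrix. The paper's actual contribution, rate estimation without the reference directions, is not shown here.

    import numpy as np

    def triad(b1, b2, r1, r2):
        """Attitude matrix A with b = A r, from two vector observations."""
        tb = [b1, np.cross(b1, b2) / np.linalg.norm(np.cross(b1, b2))]
        tb.append(np.cross(tb[0], tb[1]))
        tr = [r1, np.cross(r1, r2) / np.linalg.norm(np.cross(r1, r2))]
        tr.append(np.cross(tr[0], tr[1]))
        return np.column_stack(tb) @ np.column_stack(tr).T

    r1, r2 = np.array([1.0, 0, 0]), np.array([0, 1.0, 0])
    A_true = np.array([[0, 1.0, 0], [-1.0, 0, 0], [0, 0, 1.0]])  # 90 deg yaw
    A = triad(A_true @ r1, A_true @ r2, r1, r2)
    print(np.allclose(A, A_true))  # True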
Multiclass Reduced-Set Support Vector Machines
NASA Technical Reports Server (NTRS)
Tang, Benyang; Mazzoni, Dominic
2006-01-01
There are well-established methods for reducing the number of support vectors in a trained binary support vector machine, often with minimal impact on accuracy. We show how reduced-set methods can be applied to multiclass SVMs made up of several binary SVMs, with significantly better results than reducing each binary SVM independently. Our approach is based on Burges' approach that constructs each reduced-set vector as the pre-image of a vector in kernel space, but we extend this by recomputing the SVM weights and bias optimally using the original SVM objective function. This leads to greater accuracy for a binary reduced-set SVM, and also allows vectors to be 'shared' between multiple binary SVMs for greater multiclass accuracy with fewer reduced-set vectors. We also propose computing pre-images using differential evolution, which we have found to be more robust than gradient descent alone. We show experimental results on a variety of problems and find that this new approach is consistently better than previous multiclass reduced-set methods, sometimes with a dramatic difference.
Can different quantum state vectors correspond to the same physical state? An experimental test
NASA Astrophysics Data System (ADS)
Nigg, Daniel; Monz, Thomas; Schindler, Philipp; Martinez, Esteban A.; Hennrich, Markus; Blatt, Rainer; Pusey, Matthew F.; Rudolph, Terry; Barrett, Jonathan
2016-01-01
A century after the development of quantum theory, the interpretation of a quantum state is still discussed. If a physicist claims to have produced a system with a particular quantum state vector, does this represent directly a physical property of the system, or is the state vector merely a summary of the physicist’s information about the system? Assume that a state vector corresponds to a probability distribution over possible values of an unknown physical or ‘ontic’ state. Then, a recent no-go theorem shows that distinct state vectors with overlapping distributions lead to predictions different from quantum theory. We report an experimental test of these predictions using trapped ions. Within experimental error, the results confirm quantum theory. We analyse which kinds of models are ruled out.
Basáñez, María-Gloria; Razali, Karina; Renz, Alfons; Kelly, David
2007-03-01
The proportion of vector blood meals taken on humans (the human blood index, h) appears as a squared term in classical expressions of the basic reproduction ratio (R(0)) for vector-borne infections. Consequently, R(0) varies non-linearly with h. Estimates of h, however, constitute mere snapshots of a parameter that is predicted, from evolutionary theory, to vary with vector and host abundance. We test this prediction using a population dynamics model of river blindness assuming that, before initiation of vector control or chemotherapy, recorded measures of vector density and human infection accurately represent endemic equilibrium. We obtain values of h that satisfy the condition that the effective reproduction ratio (R(e)) must equal 1 at equilibrium. Values of h thus obtained decrease with vector density, decrease with the vector:human ratio and make R(0) respond non-linearly rather than increase linearly with vector density. We conclude that if vectors are less able to obtain human blood meals as their density increases, antivectorial measures may not lead to proportional reductions in R(0) until very low vector levels are achieved. Density dependence in the contact rate of infectious diseases transmitted by insects may be an important non-linear process with implications for their epidemiology and control.
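For context, one standard Ross-Macdonald-type expression in which h enters squared is the following; the parameter names are generic rather than those of the onchocerciasis model used in the paper.

    \[
      R_0 \;=\; \frac{m\,a^{2}\,b\,c\,e^{-\mu\tau}}{r\,\mu}
          \;=\; \frac{m\,f^{2}h^{2}\,b\,c\,e^{-\mu\tau}}{r\,\mu},
      \qquad a = f\,h,
    \]

where $m$ is the vector:human ratio, $a$ the human-biting rate ($f$ bites per vector per unit time, a fraction $h$ of them on humans), $b$ and $c$ the transmission probabilities per bite, $\mu$ the vector mortality rate, $\tau$ the extrinsic incubation period, and $r$ the human recovery rate. Halving $h$ thus quarters $R_0$, which is the non-linearity the abstract refers to.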
Weak mixing below the weak scale in dark-matter direct detection
NASA Astrophysics Data System (ADS)
Brod, Joachim; Grinstein, Benjamin; Stamou, Emmanuel; Zupan, Jure
2018-02-01
If dark matter couples predominantly to the axial-vector currents with heavy quarks, the leading contribution to dark-matter scattering on nuclei is either due to one-loop weak corrections or due to the heavy-quark axial charges of the nucleons. We calculate the effects of Higgs and weak gauge-boson exchanges for dark matter coupling to heavy-quark axial-vector currents in an effective theory below the weak scale. By explicit computation, we show that the leading-logarithmic QCD corrections are important, and thus resum them to all orders using the renormalization group.