Sample records for linear prediction coding

  1. Vector Adaptive/Predictive Encoding Of Speech

    NASA Technical Reports Server (NTRS)

    Chen, Juin-Hwey; Gersho, Allen

    1989-01-01

    Vector adaptive/predictive technique for digital encoding of speech signals yields decoded speech of very good quality after transmission at a coding rate of 9.6 kb/s, and of reasonably good quality at 4.8 kb/s, while requiring only 3 to 4 million multiplications and additions per second. It combines the advantages of adaptive/predictive coding with those of code-excited linear prediction, which yields speech of high quality but requires 600 million multiplications and additions per second at an encoding rate of 4.8 kb/s. The vector adaptive/predictive coding technique thus bridges the gaps in performance and complexity between adaptive/predictive coding and code-excited linear prediction.

  2. Multispectral code excited linear prediction coding and its application in magnetic resonance images.

    PubMed

    Hu, J H; Wang, Y; Cahill, P T

    1997-01-01

    This paper reports a multispectral code excited linear prediction (MCELP) method for the compression of multispectral images. Different linear prediction models and adaptation schemes have been compared. The method that uses a forward adaptive autoregressive (AR) model has been proven to achieve a good compromise between performance, complexity, and robustness. This approach is referred to as the MFCELP method. Given a set of multispectral images, the linear predictive coefficients are updated over nonoverlapping three-dimensional (3-D) macroblocks. Each macroblock is further divided into several 3-D microblocks, and the best excitation signal for each microblock is determined through an analysis-by-synthesis procedure. The MFCELP method has been applied to multispectral magnetic resonance (MR) images. To satisfy the high quality requirement for medical images, the error between the original image set and the synthesized one is further encoded using a vector quantizer. This method has been applied to images from 26 clinical MR neuro studies (20 slices/study, three spectral bands/slice, 256x256 pixels/band, 12 b/pixel). The MFCELP method provides a significant visual improvement over the discrete cosine transform (DCT) based Joint Photographic Experts Group (JPEG) method, the wavelet transform based embedded zerotree wavelet (EZW) coding method, and the vector tree (VT) coding method, as well as the multispectral segmented autoregressive moving average (MSARMA) method we developed previously.
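
    The excitation search described above is an analysis-by-synthesis loop: each candidate excitation is passed through the synthesis filter, and the codebook entry (with its optimal gain) whose synthetic output best matches the target block is kept. A minimal Python sketch of such a loop; the codebook, filter, and block sizes are illustrative assumptions, not the paper's actual MFCELP structures:

      import numpy as np
      from scipy.signal import lfilter

      def search_excitation(target, codebook, a):
          # a: LPC polynomial [1, a1, ..., ap]; synthesis filter is 1/A(z)
          best_idx, best_gain, best_err = -1, 0.0, np.inf
          for idx, c in enumerate(codebook):
              y = lfilter([1.0], a, c)        # synthesized candidate block
              gain = (target @ y) / (y @ y)   # least-squares optimal gain
              err = np.sum((target - gain * y) ** 2)
              if err < best_err:
                  best_idx, best_gain, best_err = idx, gain, err
          return best_idx, best_gain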

  3. Signal Prediction With Input Identification

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Chen, Ya-Chin

    1999-01-01

    A novel coding technique is presented for signal prediction with applications including speech coding, system identification, and estimation of input excitation. The approach is based on the blind equalization method for speech signal processing in conjunction with the geometric subspace projection theory to formulate the basic prediction equation. The speech-coding problem is often divided into two parts, a linear prediction model and excitation input. The parameter coefficients of the linear predictor and the input excitation are solved simultaneously and recursively by a conventional recursive least-squares algorithm. The excitation input is computed by coding all possible outcomes into a binary codebook. The coefficients of the linear predictor and excitation, and the index of the codebook can then be used to represent the signal. In addition, a variable-frame concept is proposed to group repeated excitation signals in sequence into blocks in order to reduce the storage size and increase the transmission rate. The results of this work can be easily extended to the problem of disturbance identification. The basic principles are outlined in this report and differences from other existing methods are discussed. Simulations are included to demonstrate the proposed method.
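
    For reference, a conventional recursive least-squares (RLS) update of the kind the abstract invokes adapts the predictor coefficients sample by sample. A generic sketch, assuming an arbitrary predictor order, forgetting factor, and initialization constant (none of these values come from the report):

      import numpy as np

      def rls_predictor(x, order, lam=0.99, delta=100.0):
          w = np.zeros(order)              # predictor coefficients
          P = delta * np.eye(order)        # inverse-correlation estimate
          for n in range(order, len(x)):
              u = x[n - order:n][::-1]     # past samples, newest first
              k = P @ u / (lam + u @ P @ u)
              e = x[n] - w @ u             # a-priori prediction error
              w = w + k * e
              P = (P - np.outer(k, u @ P)) / lam
          return w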

  4. Construction of Protograph LDPC Codes with Linear Minimum Distance

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Sam; Jones, Christopher

    2006-01-01

    A construction method for protograph-based LDPC codes that simultaneously achieve low iterative decoding threshold and linear minimum distance is proposed. We start with a high-rate protograph LDPC code with variable node degrees of at least 3. Lower rate codes are obtained by splitting check nodes and connecting them by degree-2 nodes. This guarantees the linear minimum distance property for the lower-rate codes. Excluding checks connected to degree-1 nodes, we show that the number of degree-2 nodes should be at most one less than the number of checks for the protograph LDPC code to have linear minimum distance. Iterative decoding thresholds are obtained by using the reciprocal channel approximation. Thresholds are lowered by using either precoding or at least one very high-degree node in the base protograph. A family of high- to low-rate codes with minimum distance linearly increasing in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.
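
    As a schematic of the check-splitting step described above (my own illustration, not the authors' code), the sketch below splits one row of a protograph base matrix into two checks joined by a new degree-2 variable node; the edge-partitioning mask is an arbitrary choice of the caller:

      import numpy as np

      def split_check(B, row, mask):
          # B: base matrix (checks x variables); mask: which edges of
          # `row` move to the second half of the split check
          r1 = B[row] * (~mask)
          r2 = B[row] * mask
          B2 = np.vstack([np.delete(B, row, axis=0), r1, r2])
          deg2 = np.zeros((B2.shape[0], 1), dtype=B.dtype)
          deg2[-2:, 0] = 1       # new degree-2 node connects both halves
          return np.hstack([B2, deg2])   # one more check, one more variable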

  5. A review of predictive coding algorithms.

    PubMed

    Spratling, M W

    2017-03-01

    Predictive coding is a leading theory of how the brain performs probabilistic inference. However, there are a number of distinct algorithms which are described by the term "predictive coding". This article provides a concise review of these different predictive coding algorithms, highlighting their similarities and differences. Five algorithms are covered: linear predictive coding, which has a long and influential history in the signal processing literature; the first neuroscience-related application of predictive coding to explaining the function of the retina; and three versions of predictive coding that have been proposed to model cortical function. While all these algorithms aim to fit a generative model to sensory data, they differ in the type of generative model they employ, in the process used to optimise the fit between the model and sensory data, and in the way that they are related to neurobiology. Copyright © 2016 Elsevier Inc. All rights reserved.
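
    For concreteness, the linear predictive coding covered first in this review fits an all-pole model to a signal by solving the autocorrelation normal equations. A minimal sketch using the Levinson-Durbin recursion (my own illustration, not taken from the article):

      import numpy as np

      def lpc(x, order):
          # Autocorrelation method: r[k] = sum_n x[n] * x[n+k]
          r = np.array([x[:len(x) - k] @ x[k:] for k in range(order + 1)])
          a = np.zeros(order + 1)
          a[0], err = 1.0, r[0]
          for i in range(1, order + 1):
              k = -(r[i] + a[1:i] @ r[i - 1:0:-1]) / err   # reflection coeff.
              a[1:i] = a[1:i] + k * a[i - 1:0:-1]
              a[i] = k
              err *= 1.0 - k * k
          return a, err   # A(z) coefficients and residual energy

      # The prediction of x[n] is then -sum_{k>=1} a[k] * x[n-k].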

  6. Techniques for the Enhancement of Linear Predictive Speech Coding in Adverse Conditions

    NASA Astrophysics Data System (ADS)

    Wrench, Alan A.

    Available from UMI in association with The British Library. Requires signed TDF. The Linear Prediction model was first applied to speech two and a half decades ago. Since then it has been the subject of intense research and continues to be one of the principal tools in the analysis of speech. Its mathematical tractability makes it a suitable subject for study and its proven success in practical applications makes the study worthwhile. The model is known to be unsuited to speech corrupted by background noise. This has led many researchers to investigate ways of enhancing the speech signal prior to Linear Predictive analysis. In this thesis, this body of work is extended. The chosen application is low bit-rate (2.4 kbits/sec) speech coding. For this task the performance of the Linear Prediction algorithm is crucial because there is insufficient bandwidth to encode the error between the modelled speech and the original input. A review of the fundamentals of Linear Prediction and an independent assessment of the relative performance of methods of Linear Prediction modelling are presented. A new method is proposed which is fast and facilitates stability checking; however, its stability is shown to be unacceptably poorer than that of existing methods. A novel supposition governing the positioning of the analysis frame relative to a voiced speech signal is proposed and supported by observation. The problem of coding noisy speech is examined. Four frequency domain speech processing techniques are developed and tested. These are: (i) Combined Order Linear Prediction Spectral Estimation; (ii) Frequency Scaling According to an Aural Model; (iii) Amplitude Weighting Based on Perceived Loudness; (iv) Power Spectrum Squaring. These methods are compared with the Recursive Linearised Maximum a Posteriori method. Following on from work done in the frequency domain, a time domain implementation of spectrum squaring is developed. In addition, a new method of power spectrum estimation is developed based on the Minimum Variance approach. This new algorithm is shown to be closely related to Linear Prediction but produces slightly broader spectral peaks. Spectrum squaring is applied to both the new algorithm and standard Linear Prediction and their relative performance is assessed. (Abstract shortened by UMI.)

  7. Pulse Vector-Excitation Speech Encoder

    NASA Technical Reports Server (NTRS)

    Davidson, Grant; Gersho, Allen

    1989-01-01

    Proposed pulse vector-excitation speech encoder (PVXC) encodes analog speech signals into digital representation for transmission or storage at rates below 5 kilobits per second. Produces reconstructed speech of high quality, with less computation than required by comparable speech-encoding systems. Has some characteristics of multipulse linear predictive coding (MPLPC) and of code-excited linear prediction (CELP). System uses mathematical model of vocal tract in conjunction with set of excitation vectors and perceptually-based error criterion to synthesize natural-sounding speech.

  8. Real-time speech encoding based on Code-Excited Linear Prediction (CELP)

    NASA Technical Reports Server (NTRS)

    Leblanc, Wilfrid P.; Mahmoud, S. A.

    1988-01-01

    This paper reports on ongoing work toward the development of a real-time voice codec for the terrestrial and satellite mobile radio environments. The codec is based on a complexity-reduced version of code-excited linear prediction (CELP). The codebook search complexity was reduced to only 0.5 million floating point operations per second (MFLOPS) while maintaining excellent speech quality. Novel methods to quantize the residual and the long and short term model filters are presented.

  9. Extension of a nonlinear systems theory to general-frequency unsteady transonic aerodynamic responses

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.

    1993-01-01

    A methodology for modeling nonlinear unsteady aerodynamic responses, for subsequent use in aeroservoelastic analysis and design, using the Volterra-Wiener theory of nonlinear systems is presented. The methodology is extended to predict nonlinear unsteady aerodynamic responses of arbitrary frequency. The Volterra-Wiener theory uses multidimensional convolution integrals to predict the response of nonlinear systems to arbitrary inputs. The CAP-TSD (Computational Aeroelasticity Program - Transonic Small Disturbance) code is used to generate linear and nonlinear unit impulse responses that correspond to each of the integrals for a rectangular wing with a NACA 0012 section with pitch and plunge degrees of freedom. The computed kernels then are used to predict linear and nonlinear unsteady aerodynamic responses via convolution and compared to responses obtained using the CAP-TSD code directly. The results indicate that the approach can be used to predict linear unsteady aerodynamic responses exactly for any input amplitude or frequency at a significant cost savings. Convolution of the nonlinear terms results in nonlinear unsteady aerodynamic responses that compare reasonably well with those computed using the CAP-TSD code directly but at significant computational cost savings.
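
    In discrete time and truncated to second order, the multidimensional convolution integrals mentioned above reduce to a linear convolution with the first kernel plus a bilinear sum over the second kernel. A toy sketch (kernel contents and memory lengths are placeholders, not CAP-TSD outputs):

      import numpy as np

      def volterra_response(u, h1, h2):
          # u: input sequence; h1: (M,) first kernel; h2: (M, M) second kernel
          N, M = len(u), len(h1)
          y = np.zeros(N)
          for n in range(N):
              for k1 in range(min(M, n + 1)):
                  y[n] += h1[k1] * u[n - k1]
                  for k2 in range(min(M, n + 1)):
                      y[n] += h2[k1, k2] * u[n - k1] * u[n - k2]
          return y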

  10. Modified linear predictive coding approach for moving target tracking by Doppler radar

    NASA Astrophysics Data System (ADS)

    Ding, Yipeng; Lin, Xiaoyi; Sun, Ke-Hui; Xu, Xue-Mei; Liu, Xi-Yao

    2016-07-01

    Doppler radar is a cost-effective tool for moving target tracking, which can support a large range of civilian and military applications. A modified linear predictive coding (LPC) approach is proposed to increase the target localization accuracy of the Doppler radar. Based on the time-frequency analysis of the received echo, the proposed approach first estimates the noise statistics in real time and constructs an adaptive filter to suppress the noise interference intelligently. Then, a linear predictive model is applied to extend the available data, which can help improve the resolution of the target localization result. Compared with the traditional LPC method, which empirically decides the extension data length, the proposed approach develops an error array to evaluate the prediction accuracy and thus adjust the optimum extension data length intelligently. Finally, the prediction error array is superimposed with the predictor output to correct the prediction error. A series of experiments are conducted to illustrate the validity and performance of the proposed techniques.
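
    Extending the data record with a linear predictive model, as the approach above does, amounts to running a fitted AR predictor forward past the last observed sample. A bare-bones sketch, assuming coefficients a (with a[0] = 1) from some prior LPC fit and a record at least as long as the predictor order:

      import numpy as np

      def extend_signal(x, a, n_extra):
          # Append n_extra predicted samples: x[n] = -sum_k a[k] * x[n-k]
          x, p = list(x), len(a) - 1
          for _ in range(n_extra):
              x.append(-np.dot(a[1:], x[-1:-p - 1:-1]))
          return np.asarray(x)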

  11. Speech coding at low to medium bit rates

    NASA Astrophysics Data System (ADS)

    Leblanc, Wilfred Paul

    1992-09-01

    Improved search techniques coupled with improved codebook design methodologies are proposed to improve the performance of conventional code-excited linear predictive coders for speech. Improved methods for quantizing the short term filter are developed by applying a tree search algorithm and joint codebook design to multistage vector quantization. Joint codebook design procedures are developed to design locally optimal multistage codebooks. Weighting during centroid computation is introduced to improve the outlier performance of the multistage vector quantizer. Multistage vector quantization is shown to be robust both to varying input characteristics and to channel errors. Spectral distortions of about 1 dB are obtained at rates of 22-28 bits/frame. Structured codebook design procedures for excitation in code-excited linear predictive coders are compared to general codebook design procedures. Little is lost by imposing significant structure on the excitation codebooks, while the search complexity is greatly reduced. Sparse multistage configurations are proposed for reducing computational complexity and memory size. Improved search procedures are applied to code-excited linear prediction, which attempt joint optimization of the short term filter, the adaptive codebook, and the excitation. Improvements in signal to noise ratio of 1-2 dB are realized in practice.
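
    Multistage vector quantization, as used above, encodes a vector as a sum of one codevector per stage, each stage quantizing the residual left by the previous ones. A greedy sequential sketch (the thesis uses tree search and jointly designed codebooks; the codebooks here are arbitrary placeholders):

      import numpy as np

      def msvq_encode(x, stages):
          # stages: list of (K, dim) codebooks applied in sequence
          residual, idx = x.copy(), []
          for cb in stages:
              d = ((residual - cb) ** 2).sum(axis=1)   # distortion per entry
              j = int(np.argmin(d))
              idx.append(j)
              residual = residual - cb[j]
          return idx, residual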

  12. Aeroelastic loads prediction for an arrow wing. Task 3: Evaluation of the Boeing three-dimensional leading-edge vortex code

    NASA Technical Reports Server (NTRS)

    Manro, M. E.

    1983-01-01

    Two separated flow computer programs and a semiempirical method for incorporating the experimentally measured separated flow effects into a linear aeroelastic analysis were evaluated. The three dimensional leading edge vortex (LEV) code is evaluated. This code is an improved panel method for three dimensional inviscid flow over a wing with leading edge vortex separation. The governing equations are the linear flow differential equation with nonlinear boundary conditions. The solution is iterative; the position as well as the strength of the vortex is determined. Cases for both full and partial span vortices were executed. The predicted pressures are good and adequately reflect changes in configuration.

  13. ASTROP2-LE: A Mistuned Aeroelastic Analysis System Based on a Two Dimensional Linearized Euler Solver

    NASA Technical Reports Server (NTRS)

    Reddy, T. S. R.; Srivastava, R.; Mehmed, Oral

    2002-01-01

    An aeroelastic analysis system for flutter and forced response analysis of turbomachines based on a two-dimensional linearized unsteady Euler solver has been developed. The ASTROP2 code, an aeroelastic stability analysis program for turbomachinery, was used as a basis for this development. The ASTROP2 code uses strip theory to couple a two dimensional aerodynamic model with a three dimensional structural model. The code was modified to include forced response capability. The formulation was also modified to include aeroelastic analysis with mistuning. A linearized unsteady Euler solver, LINFLX2D, is added to model the unsteady aerodynamics in ASTROP2. By calculating the unsteady aerodynamic loads using LINFLX2D, it is possible to include the effects of transonic flow on flutter and forced response in the analysis. The stability is inferred from an eigenvalue analysis. The revised code, ASTROP2-LE (ASTROP2 using Linearized Euler aerodynamics), is validated by comparing its predictions with those obtained using linear unsteady aerodynamic solutions.

  14. A Three-Dimensional Linearized Unsteady Euler Analysis for Turbomachinery Blade Rows

    NASA Technical Reports Server (NTRS)

    Montgomery, Matthew D.; Verdon, Joseph M.

    1997-01-01

    A three-dimensional, linearized, Euler analysis is being developed to provide an efficient unsteady aerodynamic analysis that can be used to predict the aeroelastic and aeroacoustic responses of axial-flow turbomachinery blading. The field equations and boundary conditions needed to describe nonlinear and linearized inviscid unsteady flows through a blade row operating within a cylindrical annular duct are presented. A numerical model for linearized inviscid unsteady flows, which couples a near-field, implicit, wave-split, finite volume analysis to a far-field eigenanalysis, is also described. The linearized aerodynamic and numerical models have been implemented into a three-dimensional linearized unsteady flow code, called LINFLUX. This code has been applied to selected, benchmark, unsteady, subsonic flows to establish its accuracy and to demonstrate its current capabilities. The unsteady flows considered have been chosen to allow convenient comparisons between the LINFLUX results and those of well-known, two-dimensional, unsteady flow codes. Detailed numerical results for a helical fan and a three-dimensional version of the 10th Standard Cascade indicate that important progress has been made towards the development of a reliable and useful, three-dimensional, prediction capability that can be used in aeroelastic and aeroacoustic design studies.

  15. Vector Sum Excited Linear Prediction (VSELP) speech coding at 4.8 kbps

    NASA Technical Reports Server (NTRS)

    Gerson, Ira A.; Jasiuk, Mark A.

    1990-01-01

    Code Excited Linear Prediction (CELP) speech coders exhibit good performance at data rates as low as 4800 bps. The major drawback to CELP type coders is their large computational requirements. The Vector Sum Excited Linear Prediction (VSELP) speech coder utilizes a codebook with a structure which allows for a very efficient search procedure. Other advantages of the VSELP codebook structure are discussed, and a detailed description of a 4.8 kbps VSELP coder is given. This coder is an improved version of the VSELP algorithm, which finished first in the NSA's evaluation of the 4.8 kbps speech coders. The coder uses a subsample resolution single tap long term predictor, a single VSELP excitation codebook, a novel gain quantizer which is robust to channel errors, and a new adaptive pre/postfilter arrangement.
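
    The VSELP codebook structure referred to above forms each excitation as a +/-1-weighted sum of a small set of basis vectors, so an M-bit index addresses 2^M codevectors and the search can be organized very efficiently. A toy construction (dimensions and basis values are arbitrary illustrations):

      import numpy as np
      from itertools import product

      def vselp_codevector(basis, bits):
          # Each code bit selects the sign of one basis vector
          signs = np.where(np.array(bits) == 1, 1.0, -1.0)
          return signs @ basis

      M, subframe = 3, 40
      basis = np.random.randn(M, subframe)
      codebook = [vselp_codevector(basis, b) for b in product((0, 1), repeat=M)]
      assert len(codebook) == 2 ** M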

  16. Automatic Adaptation to Fast Input Changes in a Time-Invariant Neural Circuit

    PubMed Central

    Bharioke, Arjun; Chklovskii, Dmitri B.

    2015-01-01

    Neurons must faithfully encode signals that can vary over many orders of magnitude despite having only limited dynamic ranges. For a correlated signal, this dynamic range constraint can be relieved by subtracting away components of the signal that can be predicted from the past, a strategy known as predictive coding that relies on learning the input statistics. However, the statistics of input natural signals can also vary over very short time scales, e.g., following saccades across a visual scene. To maintain a reduced transmission cost to signals with rapidly varying statistics, neuronal circuits implementing predictive coding must also rapidly adapt their properties. Experimentally, in different sensory modalities, sensory neurons have shown such adaptations within 100 ms of an input change. Here, we show first that linear neurons connected in a feedback inhibitory circuit can implement predictive coding. We then show that adding a rectification nonlinearity to such a feedback inhibitory circuit allows it to automatically adapt and approximate the performance of an optimal linear predictive coding network, over a wide range of inputs, while keeping its underlying temporal and synaptic properties unchanged. We demonstrate that the resulting changes to the linearized temporal filters of this nonlinear network match the fast adaptations observed experimentally in different sensory modalities, in different vertebrate species. Therefore, the nonlinear feedback inhibitory network can provide automatic adaptation to fast varying signals, maintaining the dynamic range necessary for accurate neuronal transmission of natural inputs. PMID:26247884
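
    The linear predictive coding strategy described here transmits only what the past fails to predict. A toy steady-state version of such a computation, with an assumed fixed prediction filter w (the paper's circuits realize and adapt this through feedback inhibition, which is not modeled here):

      import numpy as np

      def transmit_prediction_error(x, w):
          # Emit e[n] = x[n] - prediction from the last len(w) samples,
          # shrinking the dynamic range needed for correlated inputs
          p, e = len(w), np.zeros(len(x))
          for n in range(len(x)):
              past = x[max(0, n - p):n][::-1]   # most recent sample first
              e[n] = x[n] - np.dot(w[:len(past)], past)
          return e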

  17. Can mutational GC-pressure create new linear B-cell epitopes in herpes simplex virus type 1 glycoprotein B?

    PubMed

    Khrustalev, Vladislav Victorovich

    2009-01-01

    We showed that the GC-content of nucleotide sequences coding for linear B-cell epitopes of herpes simplex virus type 1 (HSV1) glycoprotein B (gB) is higher than the GC-content of sequences coding for epitope-free regions of this glycoprotein (G + C = 73 and 64%, respectively). Linear B-cell epitopes have been predicted in HSV1 gB by the BepiPred algorithm ( www.cbs.dtu.dk/services/BepiPred ). Proline is an acrophilic amino acid residue (it is usually situated on the surface of protein globules, and so is included in linear B-cell epitopes). Indeed, the level of proline is much higher in predicted epitopes of gB than in epitope-free regions (17.8% versus 1.8%). This amino acid is coded by GC-rich codons (CCX) that can be produced by nucleotide substitutions caused by mutational GC-pressure. GC-pressure will also lead to the disappearance of the acrophobic phenylalanine, isoleucine, methionine and tyrosine coded by GC-poor codons. Results of our "in-silico directed mutagenesis" showed that single nonsynonymous substitutions in the AT-to-GC direction in two long epitope-free regions of gB will cause formation of new linear epitopes or elongation of previously existing epitopes flanking these regions in 25% of 539 possible cases. The calculations of GC-content and amino acid content have been performed with the CodonChanges algorithm ( www.barkovsky.hotmail.ru ).
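
    The GC-content comparison underlying this study is a simple base count; a small sketch (the BepiPred prediction and CodonChanges steps are external tools and are not reproduced here):

      def gc_content(seq):
          # Fraction of G and C bases in a nucleotide sequence
          seq = seq.upper()
          return (seq.count('G') + seq.count('C')) / len(seq)

      print(gc_content('CCGCCA'))   # proline codons (CCX) are GC-rich: ~0.83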

  18. Protograph LDPC Codes with Node Degrees at Least 3

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Jones, Christopher

    2006-01-01

    In this paper we present protograph codes with a small number of degree-3 nodes and one high degree node. The iterative decoding thresholds for the proposed rate-1/2 codes are lower, by about 0.2 dB, than those of the best known irregular LDPC codes with degree at least 3. The main motivation is to gain linear minimum distance to achieve a low error floor, and to construct rate-compatible protograph-based LDPC codes of fixed block length that simultaneously achieve low iterative decoding threshold and linear minimum distance. We start with a rate 1/2 protograph LDPC code with degree-3 nodes and one high degree node. Higher rate codes are obtained by connecting check nodes with degree-2 non-transmitted nodes. This is equivalent to constraint combining in the protograph. The condition where all constraints are combined corresponds to the highest rate code. This constraint must be connected to nodes of degree at least three for the graph to have linear minimum distance. Thus, having node degree at least 3 at rate 1/2 guarantees that the linear minimum distance property is preserved for higher rates. Through examples we show that iterative decoding thresholds as low as 0.544 dB can be achieved for small protographs with node degrees at least three. A family of low- to high-rate codes with minimum distance linearly increasing in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.

  19. Extending the Coyote emulator to dark energy models with standard w_0-w_a parametrization of the equation of state

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Casarini, L.; Bonometto, S.A.; Tessarotto, E.

    2016-08-01

    We discuss an extension of the Coyote emulator to predict non-linear matter power spectra of dark energy (DE) models with a scale-factor-dependent equation of state of the form w = w_0 + (1 - a) w_a. The extension is based on the mapping rule between non-linear spectra of DE models with constant equation of state and those with a time-varying one, originally introduced in ref. [40]. Using a series of N-body simulations we show that the spectral equivalence is accurate to sub-percent level across the same range of modes and redshift covered by the Coyote suite. Thus, the extended emulator provides a very efficient and accurate tool to predict non-linear power spectra for DE models with w_0-w_a parametrization. According to the same criteria we have developed a numerical code that we have implemented in a dedicated module for the CAMB code, which can be used in combination with the Coyote Emulator in likelihood analyses of non-linear matter power spectrum measurements. All codes can be found at https://github.com/luciano-casarini/pkequal.

  20. Improved Speech Coding Based on Open-Loop Parameter Estimation

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Chen, Ya-Chin; Longman, Richard W.

    2000-01-01

    A nonlinear optimization algorithm for linear predictive speech coding was developed earlier that not only optimizes the linear model coefficients for the open loop predictor, but does the optimization including the effects of quantization of the transmitted residual. It also simultaneously optimizes the quantization levels used for each speech segment. In this paper, we present an improved method for initialization of this nonlinear algorithm, and demonstrate substantial improvements in performance. In addition, the new procedure produces monotonically improving speech quality with increasing numbers of bits used in the transmitted error residual. Examples of speech encoding and decoding are given for 8 speech segments, and signal-to-noise levels as high as 47 dB are produced. As in typical linear predictive coding, the optimization is done on the open loop speech analysis model. Here we demonstrate that minimizing the error of the closed loop speech reconstruction, instead of the simpler open loop optimization, is likely to produce negligible improvement in speech quality. The examples suggest that the algorithm here is close to giving the best performance obtainable from a linear model, for the chosen order with the chosen number of bits for the codebook.

  1. Sensory Information Processing

    DTIC Science & Technology

    1975-12-31

    [Garbled OCR excerpt; only table-of-contents entries are recoverable:] Synthetic Speech Quality Using Binaural Reverberation (Boll); Section 4, Noise Suppression with Linear Prediction Filtering (Peterson); Section 5, Speech Processing to Reduce Noise and Improve Intelligibility (Callahan); Section 6, Linear Predictive Coding with a Glottal ...; Section 7 ...

  2. Coding tools investigation for next generation video coding based on HEVC

    NASA Astrophysics Data System (ADS)

    Chen, Jianle; Chen, Ying; Karczewicz, Marta; Li, Xiang; Liu, Hongbin; Zhang, Li; Zhao, Xin

    2015-09-01

    The new state-of-the-art video coding standard, H.265/HEVC, was finalized in 2013 and achieves roughly 50% bit rate saving compared to its predecessor, H.264/MPEG-4 AVC. This paper provides evidence that there is still potential for further coding efficiency improvements. A brief overview of HEVC is first given in the paper. Then, our improvements on each main module of HEVC are presented. For instance, the recursive quadtree block structure is extended to support larger coding units and transform units. The motion information prediction scheme is improved by advanced temporal motion vector prediction, which inherits the motion information of each small block within a large block from a temporal reference picture. Cross component prediction with a linear prediction model improves intra prediction, and overlapped block motion compensation improves the efficiency of inter prediction. Furthermore, coding of both intra and inter prediction residuals is improved by an adaptive multiple transform technique. Finally, in addition to the deblocking filter and SAO, an adaptive loop filter is applied to further enhance the reconstructed picture quality. This paper describes the above-mentioned techniques in detail and evaluates their coding performance benefits based on the common test conditions used during HEVC development. The simulation results show that significant performance improvement over the HEVC standard can be achieved, especially for high resolution video materials.
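
    Among the tools listed, cross component prediction with a linear model predicts one component (e.g., chroma) from reconstructed samples of another (luma) as pred = alpha * rec + beta. A generic least-squares sketch of the parameter derivation from neighbouring samples; this illustrates the idea only and is not the proposal's actual syntax or derivation rule:

      import numpy as np

      def linear_model_params(rec_luma_nbr, rec_chroma_nbr):
          # Fit chroma ~ alpha * luma + beta over already-decoded neighbours,
          # so encoder and decoder derive (alpha, beta) without side info
          A = np.vstack([rec_luma_nbr, np.ones_like(rec_luma_nbr)]).T
          (alpha, beta), *_ = np.linalg.lstsq(A, rec_chroma_nbr, rcond=None)
          return alpha, beta

      def predict_chroma(rec_luma_block, alpha, beta):
          return alpha * rec_luma_block + beta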

  3. Development of a Linearized Unsteady Euler Analysis with Application to Wake/Blade-Row Interactions

    NASA Technical Reports Server (NTRS)

    Verdon, Joseph M.; Montgomery, Matthew D.; Chuang, H. Andrew

    1999-01-01

    A three-dimensional, linearized, Euler analysis is being developed to provide a comprehensive and efficient unsteady aerodynamic analysis for predicting the aeroacoustic and aeroelastic responses of axial-flow turbomachinery blading. The mathematical models needed to describe nonlinear and linearized, inviscid, unsteady flows through a blade row operating within a cylindrical annular duct are presented in this report. A numerical model for linearized inviscid unsteady flows, which couples a near-field, implicit, wave-split, finite volume analysis to far-field eigen analyses, is also described. The linearized aerodynamic and numerical models have been implemented into the three-dimensional unsteady flow code, LINFLUX. This code is applied herein to predict unsteady subsonic flows driven by wake or vortical excitations. The intent is to validate the LINFLUX analysis via numerical results for simple benchmark unsteady flows and to demonstrate this analysis via application to a realistic wake/blade-row interaction. Detailed numerical results for a three-dimensional version of the 10th Standard Cascade and a fan exit guide vane indicate that LINFLUX is becoming a reliable and useful unsteady aerodynamic prediction capability that can be applied, in the future, to assess the three-dimensional flow physics important to blade-row, aeroacoustic and aeroelastic responses.

  4. Predictors of Quality Verbal Engagement in Third-Grade Literature Discussions

    ERIC Educational Resources Information Center

    Young, Chase

    2014-01-01

    This study investigates how reading ability and personality traits predict the quality of verbal discussions in peer-led literature circles. Third grade literature discussions were recorded, transcribed, and coded. The coded statements and questions were quantified into a quality of engagement score. Through multiple linear regression, the…

  5. Analysis of the faster-than-Nyquist optimal linear multicarrier system

    NASA Astrophysics Data System (ADS)

    Marquet, Alexandre; Siclet, Cyrille; Roque, Damien

    2017-02-01

    Faster-than-Nyquist signalization enables a better spectral efficiency at the expense of an increased computational complexity. Regarding multicarrier communications, previous work mainly relied on the study of non-linear systems exploiting coding and/or equalization techniques, with no particular optimization of the linear part of the system. In this article, we analyze the performance of the optimal linear multicarrier system when used together with non-linear receiving structures (iterative decoding and direct feedback equalization), or in a standalone fashion. We also investigate the limits of the normality assumption of the interference, used for implementing such non-linear systems. The use of this optimal linear system leads to a closed-form expression of the bit-error probability that can be used to predict the performance and help the design of coded systems. Our work also highlights the great performance/complexity trade-off offered by decision feedback equalization in a faster-than-Nyquist context.

  6. Are V1 Simple Cells Optimized for Visual Occlusions? A Comparative Study

    PubMed Central

    Bornschein, Jörg; Henniges, Marc; Lücke, Jörg

    2013-01-01

    Simple cells in primary visual cortex were famously found to respond to low-level image components such as edges. Sparse coding and independent component analysis (ICA) emerged as the standard computational models for simple cell coding because they linked their receptive fields to the statistics of visual stimuli. However, a salient feature of image statistics, occlusions of image components, is not considered by these models. Here we ask if occlusions have an effect on the predicted shapes of simple cell receptive fields. We use a comparative approach to answer this question and investigate two models for simple cells: a standard linear model and an occlusive model. For both models we simultaneously estimate optimal receptive fields, sparsity and stimulus noise. The two models are identical except for their component superposition assumption. We find the image encoding and receptive fields predicted by the models to differ significantly. While both models predict many Gabor-like fields, the occlusive model predicts a much sparser encoding and high percentages of ‘globular’ receptive fields. This relatively new center-surround type of simple cell response has been observed since reverse correlation came into use in experimental studies. While high percentages of ‘globular’ fields can be obtained using specific choices of sparsity and overcompleteness in linear sparse coding, no or only low proportions are reported in the vast majority of studies on linear models (including all ICA models). Likewise, for the here investigated linear model and optimal sparsity, only low proportions of ‘globular’ fields are observed. In comparison, the occlusive model robustly infers high proportions and can match the experimentally observed high proportions of ‘globular’ fields well. Our computational study, therefore, suggests that ‘globular’ fields may be evidence for an optimal encoding of visual occlusions in primary visual cortex. PMID:23754938

  7. The Langley Stability and Transition Analysis Code (LASTRAC) : LST, Linear and Nonlinear PSE for 2-D, Axisymmetric, and Infinite Swept Wing Boundary Layers

    NASA Technical Reports Server (NTRS)

    Chang, Chau-Lyan

    2003-01-01

    During the past two decades, our understanding of laminar-turbulent transition flow physics has advanced significantly owing, in large part, to NASA program support such as the National Aerospace Plane (NASP), High-Speed Civil Transport (HSCT), and Advanced Subsonic Technology (AST) programs. Experimental, theoretical, and computational efforts on various issues, such as receptivity and the linear and nonlinear evolution of instability waves, have broadened our knowledge base for this intricate flow phenomenon. Despite all these advances, transition prediction remains a nontrivial task for engineers due to the lack of a widely available, robust, and efficient prediction tool. The design and development of the LASTRAC code is aimed at providing one such engineering tool that is easy to use and yet capable of dealing with a broad range of transition related issues. LASTRAC was written from scratch based on state-of-the-art numerical methods for stability analysis and modern software technologies. At low fidelity, it allows users to perform linear stability analysis and N-factor transition correlation for a broad range of flow regimes and configurations by using either the linear stability theory (LST) or linear parabolized stability equations (LPSE) method. At high fidelity, users may use nonlinear PSE to track finite-amplitude disturbances until the skin friction rise. Coupled with the built-in receptivity model that is currently under development, the nonlinear PSE method offers a synergistic approach to predict transition onset for a given disturbance environment based on first principles. This paper describes the governing equations, numerical methods, code development, and case studies for the current release of LASTRAC. Practical applications of LASTRAC are demonstrated for linear stability calculations, N-factor transition correlation, nonlinear breakdown simulations, and control of stationary crossflow instability in supersonic swept wing boundary layers.

  8. A review of lossless audio compression standards and algorithms

    NASA Astrophysics Data System (ADS)

    Muin, Fathiah Abdul; Gunawan, Teddy Surya; Kartiwi, Mira; Elsheikh, Elsheikh M. A.

    2017-09-01

    Over the years, lossless audio compression has gained popularity as researchers and businesses have become more aware of the need for better quality and of growing storage demands. This paper analyses the various lossless audio coding algorithms and standards that are used and available in the market, focusing on Linear Predictive Coding (LPC) specifically due to its popularity and robustness in audio compression; nevertheless, other prediction methods are compared to verify this. Advanced representations of LPC such as LSP decomposition techniques are also discussed within this paper.

  9. Langley Stability and Transition Analysis Code (LASTRAC) Version 1.2 User Manual

    NASA Technical Reports Server (NTRS)

    Chang, Chau-Lyan

    2004-01-01

    LASTRAC is a general-purpose, physics-based transition prediction code released by NASA for Laminar Flow Control studies and transition research. The design and development of the LASTRAC code is aimed at providing an engineering tool that is easy to use and yet capable of dealing with a broad range of transition related issues. It was written from scratch based on state-of-the-art numerical methods for stability analysis and modern software technologies. At low fidelity, it allows users to perform linear stability analysis and N-factor transition correlation for a broad range of flow regimes and configurations by using either the linear stability theory or linear parabolized stability equations method. At high fidelity, users may use nonlinear PSE to track finite-amplitude disturbances until the skin friction rise. This document describes the governing equations, numerical methods, code development, detailed description of input/output parameters, and case studies for the current release of LASTRAC.

  10. An evaluation of a computer code based on linear acoustic theory for predicting helicopter main rotor noise

    NASA Astrophysics Data System (ADS)

    Davis, S. J.; Egolf, T. A.

    1980-07-01

    Acoustic characteristics predicted using a recently developed computer code were correlated with measured acoustic data for two helicopter rotors. The analysis is based on a solution of the Ffowcs-Williams-Hawkings (FW-H) equation and includes terms accounting for both the thickness and loading components of the rotational noise. Computations are carried out in the time domain and assume free field conditions. Results of the correlation show that the Farrassat/Nystrom analysis, when using predicted airload data as input, yields fair but encouraging correlation for the first 6 harmonics of blade passage. It also suggests that although the analysis represents a valuable first step towards developing a truly comprehensive helicopter rotor noise prediction capability, further work remains to be done in identifying and incorporating additional noise mechanisms into the code.

  11. Linearized Aeroelastic Solver Applied to the Flutter Prediction of Real Configurations

    NASA Technical Reports Server (NTRS)

    Reddy, Tondapu S.; Bakhle, Milind A.

    2004-01-01

    A fast-running unsteady aerodynamics code, LINFLUX, was previously developed for predicting turbomachinery flutter. This linearized code, based on a frequency domain method, models the effects of steady blade loading through a nonlinear steady flow field. The LINFLUX code, which is 6 to 7 times faster than the corresponding nonlinear time domain code, is suitable for use in the initial design phase. Earlier, this code was verified through application to a research fan, and it was shown that the predictions of work per cycle and flutter compared well with those from a nonlinear time-marching aeroelastic code, TURBO-AE. Now, the LINFLUX code has been applied to real configurations: fans developed under the Energy Efficient Engine (E-cubed) Program and the Quiet Aircraft Technology (QAT) project. The LINFLUX code starts with a steady nonlinear aerodynamic flow field and solves the unsteady linearized Euler equations to calculate the unsteady aerodynamic forces on the turbomachinery blades. First, a steady aerodynamic solution is computed for given operating conditions using the nonlinear unsteady aerodynamic code TURBO-AE. A blade vibration analysis is done to determine the frequencies and mode shapes of the vibrating blades, and an interface code is used to convert the steady aerodynamic solution to a form required by LINFLUX. A preprocessor is used to interpolate the mode shapes from the structural dynamics mesh onto the computational fluid dynamics mesh. Then, LINFLUX is used to calculate the unsteady aerodynamic pressure distribution for a given vibration mode, frequency, and interblade phase angle. Finally, a post-processor uses the unsteady pressures to calculate the generalized aerodynamic forces, eigenvalues, and response amplitudes. The eigenvalues determine the flutter frequency and damping. Results of flutter calculations from the LINFLUX code are presented for (1) the E-cubed fan developed under the E-cubed program and (2) the Quiet High Speed Fan (QHSF) developed under the Quiet Aircraft Technology project. The results are compared with those obtained from the TURBO-AE code. A graph of the work done per vibration cycle for the first vibration mode of the E-cubed fan is shown. It can be seen that the LINFLUX results show a very good comparison with TURBO-AE results over the entire range of interblade phase angles. The work done per vibration cycle for the first vibration mode of the QHSF fan is shown. Once again, the LINFLUX results compare very well with the results from the TURBO-AE code.

  12. Three-dimensional Navier-Stokes analysis of turbine passage heat transfer

    NASA Technical Reports Server (NTRS)

    Ameri, Ali A.; Arnone, Andrea

    1991-01-01

    The three-dimensional Reynolds-averaged Navier-Stokes equations are numerically solved to obtain the pressure distribution and heat transfer rates on the endwalls and the blades of two linear turbine cascades. The TRAF3D code, which has recently been developed in a joint project between researchers from the University of Florence and NASA Lewis Research Center, is used. The effect of turbulence is taken into account by using the eddy viscosity hypothesis and the two-layer mixing length model of Baldwin and Lomax. Predictions of surface heat transfer are made for Langston's cascade and compared with the data obtained for that cascade by Graziani. The comparison was found to be favorable. The code is also applied to a linear transonic rotor cascade to predict the pressure distributions and heat transfer rates.

  13. Aerodynamics of a linear oscillating cascade

    NASA Technical Reports Server (NTRS)

    Buffum, Daniel H.; Fleeter, Sanford

    1990-01-01

    The steady and unsteady aerodynamics of a linear oscillating cascade are investigated using experimental and computational methods. Experiments are performed to quantify the torsion mode oscillating cascade aerodynamics of the NASA Lewis Transonic Oscillating Cascade for subsonic inlet flowfields using two methods: simultaneous oscillation of all the cascaded airfoils at various values of interblade phase angle, and the unsteady aerodynamic influence coefficient technique. Analysis of these data and correlation with classical linearized unsteady aerodynamic analysis predictions indicate that the wind tunnel walls enclosing the cascade have, in some cases, a detrimental effect on the cascade unsteady aerodynamics. An Euler code for oscillating cascade aerodynamics is modified to incorporate improved upstream and downstream boundary conditions and also the unsteady aerodynamic influence coefficient technique. The new boundary conditions are shown to improve the unsteady aerodynamic predictions of the code, and the computational unsteady aerodynamic influence coefficient technique is shown to be a viable alternative for calculation of oscillating cascade aerodynamics.

  14. Anisotropic connectivity implements motion-based prediction in a spiking neural network.

    PubMed

    Kaplan, Bernhard A; Lansner, Anders; Masson, Guillaume S; Perrinet, Laurent U

    2013-01-01

    Predictive coding hypothesizes that the brain explicitly infers upcoming sensory input to establish a coherent representation of the world. Although it is becoming generally accepted, it is not clear on which level spiking neural networks may implement predictive coding and what function their connectivity may have. We present a network model of conductance-based integrate-and-fire neurons inspired by the architecture of retinotopic cortical areas that assumes predictive coding is implemented through network connectivity, namely in the connection delays and in selectiveness for the tuning properties of source and target cells. We show that the applied connection pattern leads to motion-based prediction in an experiment tracking a moving dot. In contrast to our proposed model, a network with random or isotropic connectivity fails to predict the path when the moving dot disappears. Furthermore, we show that a simple linear decoding approach is sufficient to transform neuronal spiking activity into a probabilistic estimate for reading out the target trajectory.

  15. An evaluation of a computer code based on linear acoustic theory for predicting helicopter main rotor noise. [CH-53A and S-76 helicopters

    NASA Technical Reports Server (NTRS)

    Davis, S. J.; Egolf, T. A.

    1980-01-01

    Acoustic characteristics predicted using a recently developed computer code were correlated with measured acoustic data for two helicopter rotors. The analysis is based on a solution of the Ffowcs-Williams-Hawkings (FW-H) equation and includes terms accounting for both the thickness and loading components of the rotational noise. Computations are carried out in the time domain and assume free field conditions. Results of the correlation show that the Farrassat/Nystrom analysis, when using predicted airload data as input, yields fair but encouraging correlation for the first 6 harmonics of blade passage. It also suggests that although the analysis represents a valuable first step towards developing a truly comprehensive helicopter rotor noise prediction capability, further work remains to be done in identifying and incorporating additional noise mechanisms into the code.

  16. Analysis of the possibility of using G.729 codec for steganographic transmission

    NASA Astrophysics Data System (ADS)

    Piotrowski, Zbigniew; Ciołek, Michał; Dołowski, Jerzy; Wojtuń, Jarosław

    2017-04-01

    Network steganography is dedicated in particular to those communication services for which there are no bridges or nodes carrying out unintentional attacks on the steganographic sequence. To set up a hidden communication channel, a method of data encoding and decoding was implemented using the codebooks of the G.729 codec. The G.729 codec is built around the CS-ACELP (Conjugate Structure Algebraic Code Excited Linear Prediction) vocoder, and by modifying the binary content of the codebook it is easy to change the binary output stream. The article describes the results of research on selecting those bits of the G.729 codebook whose negation has the least impact on the quality and fidelity of the output signal. The study was performed with the use of subjective and objective listening tests.
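
    In outline, such an embedding overwrites a few perceptually least significant codebook bits of each coded frame with payload bits, and the receiver reads them back from the same positions. A generic sketch; the bit positions here are hypothetical, the paper's contribution being precisely which G.729 bits tolerate negation:

      def embed(frame_bits, positions, payload):
          # Overwrite selected low-impact bit positions with hidden data
          bits = list(frame_bits)
          for pos, b in zip(positions, payload):
              bits[pos] = b
          return bits

      def extract(frame_bits, positions):
          return [frame_bits[pos] for pos in positions]

      frame = [0, 1, 1, 0, 1, 0, 0, 1]   # toy coded frame
      stego = embed(frame, positions=(5, 6), payload=(1, 1))
      assert extract(stego, positions=(5, 6)) == [1, 1]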

  17. Frequency- and Time-Domain Methods in Soil-Structure Interaction Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bolisetti, Chandrakanth; Whittaker, Andrew S.; Coleman, Justin L.

    2015-06-01

    Soil-structure interaction (SSI) analysis in the nuclear industry is currently performed using linear codes that function in the frequency domain. There is a consensus that these frequency-domain codes give reasonably accurate results for low-intensity ground motions that result in almost linear response. For higher intensity ground motions, which may result in nonlinear response in the soil, the structure, or the vicinity of the foundation, the adequacy of frequency-domain codes is unproven. Nonlinear analysis, which is only possible in the time domain, is theoretically more appropriate in such cases. These methods are available but are rarely used due to the large computational requirements and a lack of experience among analysts and regulators. This paper presents an assessment of the linear frequency-domain code SASSI, which is widely used in the nuclear industry, and the time-domain commercial finite-element code LS-DYNA, for SSI analysis. The assessment involves benchmarking the SSI analysis procedure in LS-DYNA against SASSI for linearly elastic models. After affirming that SASSI and LS-DYNA result in almost identical responses for these models, they are used to perform nonlinear SSI analyses of two structures founded on soft soil. An examination of the results shows that, in spite of using identical material properties, the predictions of frequency- and time-domain codes are significantly different in the presence of nonlinear behavior such as gapping and sliding of the foundation.

  18. Application of Exactly Linearized Error Transport Equations to AIAA CFD Prediction Workshops

    NASA Technical Reports Server (NTRS)

    Derlaga, Joseph M.; Park, Michael A.; Rallabhandi, Sriram

    2017-01-01

    The computational fluid dynamics (CFD) prediction workshops sponsored by the AIAA have created invaluable opportunities in which to discuss the predictive capabilities of CFD in areas in which it has struggled, e.g., cruise drag, high-lift, and sonic boom prediction. While there are many factors that contribute to disagreement between simulated and experimental results, such as modeling or discretization error, quantifying the errors contained in a simulation is important for those who make decisions based on the computational results. The linearized error transport equations (ETE) combined with a truncation error estimate is a method to quantify one source of errors. The ETE are implemented with a complex-step method to provide an exact linearization with minimal source code modifications to CFD and multidisciplinary analysis methods. The equivalency of adjoint and linearized ETE functional error correction is demonstrated. Uniformly refined grids from a series of AIAA prediction workshops demonstrate the utility of ETE for multidisciplinary analysis with a connection between estimated discretization error and (resolved or under-resolved) flow features.
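
    The complex-step method mentioned above obtains an exact linearization because no difference of nearly equal quantities is formed: the input is perturbed along the imaginary axis and the derivative is read from the imaginary part of the output. A one-function demonstration:

      import numpy as np

      def complex_step(f, x, h=1e-30):
          # f'(x) ~ Im[f(x + i*h)] / h, accurate to machine precision
          return np.imag(f(x + 1j * h)) / h

      print(complex_step(np.sin, 1.0), np.cos(1.0))   # agree to ~16 digits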

  19. A Three-Dimensional Linearized Unsteady Euler Analysis for Turbomachinery Blade Rows

    NASA Technical Reports Server (NTRS)

    Montgomery, Matthew D.; Verdon, Joseph M.

    1996-01-01

    A three-dimensional, linearized, Euler analysis is being developed to provide an efficient unsteady aerodynamic analysis that can be used to predict the aeroelastic and aeroacoustic response characteristics of axial-flow turbomachinery blading. The field equations and boundary conditions needed to describe nonlinear and linearized inviscid unsteady flows through a blade row operating within a cylindrical annular duct are presented. In addition, a numerical model for linearized inviscid unsteady flow, which is based upon an existing nonlinear, implicit, wave-split, finite volume analysis, is described. These aerodynamic and numerical models have been implemented into an unsteady flow code, called LINFLUX. A preliminary version of the LINFLUX code is applied herein to selected, benchmark three-dimensional, subsonic, unsteady flows, to illustrate its current capabilities and to uncover existing problems and deficiencies. The numerical results indicate that good progress has been made toward developing a reliable and useful three-dimensional prediction capability. However, some problems, associated with the implementation of an unsteady displacement field and numerical errors near solid boundaries, still exist. Also, accurate far-field conditions must be incorporated into the LINFLUX analysis, so that this analysis can be applied to unsteady flows driven by external aerodynamic excitations.

  20. hi-class: Horndeski in the Cosmic Linear Anisotropy Solving System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zumalacárregui, Miguel; Bellini, Emilio; Sawicki, Ignacy

    We present the public version of hi-class (www.hiclass-code.net), an extension of the Boltzmann code CLASS to a broad ensemble of modifications to general relativity. In particular, hi-class can calculate predictions for models based on Horndeski's theory, which is the most general scalar-tensor theory described by second-order equations of motion and encompasses any perfect-fluid dark energy, quintessence, Brans-Dicke, f(R) and covariant Galileon models. hi-class has been thoroughly tested and can be readily used to understand the impact of alternative theories of gravity on linear structure formation as well as for cosmological parameter extraction.

  1. Investigation of charge weight and shock factor effect on non-linear transient structural response of rectangular plates subjected to underwater explosion (UNDEX) shock loading

    NASA Astrophysics Data System (ADS)

    Demir, Ozgur; Sahin, Abdurrahman; Yilmaz, Tamer

    2012-09-01

    Underwater explosion induced shock loads are capable of causing considerable structural damage. Investigations of underwater explosion (UNDEX) effects on structures have seen continuous development because of security risks. Most of the earlier experimental investigations were performed by the military since World War I. Subsequently, Cole [1] established mathematical relations for modeling underwater explosion shock loading, which were the outcome of many experimental investigations. This study predicts and establishes the transient responses of a panel structure to underwater explosion shock loads using the non-linear finite element code LS-DYNA. Accordingly, a new MATLAB code has been developed for predicting the shock loading profile for different weights of explosive and different shock factors. Numerical analysis was performed for various test conditions and the results are compared with Ramajeyathilagam's experimental study [8].
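
    Cole's relations referenced above model the free-field shock wave as an exponentially decaying pressure pulse whose peak and decay constant scale with the charge weight W and standoff R. A sketch with representative TNT similitude constants; the constants vary between sources, so treat these numbers as placeholders rather than the values used in this study:

      import numpy as np

      def undex_pressure(t, W, R, K1=52.4e6, A1=1.13, K2=0.084e-3, A2=-0.23):
          # P(t) = Pmax * exp(-t/theta); W in kg TNT, R in m, t in s
          w3 = W ** (1.0 / 3.0)
          p_max = K1 * (w3 / R) ** A1           # peak pressure [Pa]
          theta = K2 * w3 * (w3 / R) ** A2      # decay constant [s]
          return p_max * np.exp(-t / theta)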

  2. Learning to Estimate Dynamical State with Probabilistic Population Codes.

    PubMed

    Makin, Joseph G; Dichter, Benjamin K; Sabes, Philip N

    2015-11-01

    Tracking moving objects, including one's own body, is a fundamental ability of higher organisms, playing a central role in many perceptual and motor tasks. While it is unknown how the brain learns to follow and predict the dynamics of objects, it is known that this process of state estimation can be learned purely from the statistics of noisy observations. When the dynamics are simply linear with additive Gaussian noise, the optimal solution is the well known Kalman filter (KF), the parameters of which can be learned via latent-variable density estimation (the EM algorithm). The brain does not, however, directly manipulate matrices and vectors, but instead appears to represent probability distributions with the firing rates of populations of neurons, "probabilistic population codes." We show that a recurrent neural network, a modified form of an exponential family harmonium (EFH), that takes a linear probabilistic population code as input can learn, without supervision, to estimate the state of a linear dynamical system. After observing a series of population responses (spike counts) to the position of a moving object, the network learns to represent the velocity of the object and forms nearly optimal predictions about the position at the next time-step. This result builds on our previous work showing that a similar network can learn to perform multisensory integration and coordinate transformations for static stimuli. The receptive fields of the trained network also make qualitative predictions about the developing and learning brain: tuning gradually emerges for higher-order dynamical states not explicitly present in the inputs, appearing as delayed tuning for the lower-order states.
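
    For context, the Kalman filter that the network is shown to approximate alternates a predict and an update step. A compact sketch for a known linear-Gaussian model (learning the matrices via EM, as the abstract notes, is a separate step not shown):

      import numpy as np

      def kalman_step(x, P, z, A, C, Q, R):
          # Model: x' = A x + w (cov Q);  z = C x + v (cov R)
          x_pred = A @ x
          P_pred = A @ P @ A.T + Q
          S = C @ P_pred @ C.T + R              # innovation covariance
          K = P_pred @ C.T @ np.linalg.inv(S)   # Kalman gain
          x_new = x_pred + K @ (z - C @ x_pred)
          P_new = (np.eye(len(x)) - K @ C) @ P_pred
          return x_new, P_new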

  3. Learning to Estimate Dynamical State with Probabilistic Population Codes

    PubMed Central

    Sabes, Philip N.

    2015-01-01

    Tracking moving objects, including one’s own body, is a fundamental ability of higher organisms, playing a central role in many perceptual and motor tasks. While it is unknown how the brain learns to follow and predict the dynamics of objects, it is known that this process of state estimation can be learned purely from the statistics of noisy observations. When the dynamics are simply linear with additive Gaussian noise, the optimal solution is the well-known Kalman filter (KF), the parameters of which can be learned via latent-variable density estimation (the EM algorithm). The brain does not, however, directly manipulate matrices and vectors, but instead appears to represent probability distributions with the firing rates of populations of neurons, “probabilistic population codes.” We show that a recurrent neural network—a modified form of an exponential family harmonium (EFH)—that takes a linear probabilistic population code as input can learn, without supervision, to estimate the state of a linear dynamical system. After observing a series of population responses (spike counts) to the position of a moving object, the network learns to represent the velocity of the object and forms nearly optimal predictions about the position at the next time-step. This result builds on our previous work showing that a similar network can learn to perform multisensory integration and coordinate transformations for static stimuli. The receptive fields of the trained network also make qualitative predictions about the developing and learning brain: tuning gradually emerges for higher-order dynamical states not explicitly present in the inputs, appearing as delayed tuning for the lower-order states. PMID:26540152

  4. A spatially adaptive spectral re-ordering technique for lossless coding of hyper-spectral images

    NASA Technical Reports Server (NTRS)

    Memon, Nasir D.; Galatsanos, Nikolas

    1995-01-01

    In this paper, we propose a new approach, applicable to lossless compression of hyper-spectral images, that alleviates some limitations of linear prediction as applied to this problem. According to this approach, an adaptive re-ordering of the spectral components of each pixel is performed prior to prediction and encoding. This re-ordering adaptively exploits, on a pixel-by-pixel basis, the presence of inter-band correlations for prediction. Furthermore, the proposed approach takes advantage of spatial correlations, and does not introduce any coding overhead to transmit the order of the spectral bands. This is accomplished by using the assumption that two spatially adjacent pixels are expected to have similar spectral relationships. We thus have a simple technique to exploit spectral and spatial correlations in hyper-spectral data sets, leading to compression performance improvements as compared to our previously reported techniques for lossless compression. We also look at some simple error modeling techniques for further exploiting any structure that remains in the prediction residuals prior to entropy coding.

  5. Lossless Compression of Data into Fixed-Length Packets

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron B.; Klimesh, Matthew A.

    2009-01-01

    A computer program effects lossless compression of data samples from a one-dimensional source into fixed-length data packets. The software makes use of adaptive prediction: it exploits the data structure in such a way as to increase the efficiency of compression beyond that otherwise achievable. Adaptive linear filtering is used to predict each sample value based on past sample values. The difference between predicted and actual sample values is encoded using a Golomb code.
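
    A minimal Python sketch of the two ingredients the program combines: a linear predictor whose coefficient adapts by a sign-sign LMS rule (one plausible form of adaptive linear filtering; the actual adaptation scheme is not specified here) and a Golomb code for the prediction residuals after mapping them to non-negative integers:

      def golomb_encode(n, m):
          """Golomb code of non-negative n: unary quotient, then binary remainder.
          For simplicity m is taken to be a power of two (the Rice special case)."""
          q, r = divmod(n, m)
          b = m.bit_length() - 1
          return "1" * q + "0" + format(r, f"0{b}b")

      def zigzag(e):
          """Interleave signed residuals: 0, 1, -1, 2, -2 -> 0, 1, 2, 3, 4."""
          return 2 * e - 1 if e > 0 else -2 * e

      samples = [10, 12, 13, 15, 14, 16]
      w, prev, bits = 1.0, samples[0], []
      for s in samples[1:]:
          pred = int(round(w * prev))               # one-tap linear prediction
          e = s - pred
          bits.append(golomb_encode(zigzag(e), m=4))
          w += 0.001 * (1 if e > 0 else -1) * prev  # crude sign-sign LMS step
          prev = s
      print("".join(bits))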

  6. Method for transition prediction in high-speed boundary layers, phase 2

    NASA Astrophysics Data System (ADS)

    Herbert, T.; Stuckert, G. K.; Lin, N.

    1993-09-01

    The parabolized stability equations (PSE) are a new and more reliable approach to analyzing the stability of streamwise varying flows such as boundary layers. This approach has been previously validated for idealized incompressible flows. Here, the PSE are formulated for highly compressible flows in general curvilinear coordinates to permit the analysis of high-speed boundary-layer flows over fairly general bodies. Rigorous numerical studies are carried out to study the convergence and accuracy of the linear-stability code LSH and the linear/nonlinear PSE code PSH. Physical interfaces are set up to analyze the M = 8 boundary layer over a blunt cone calculated using a thin-layer Navier-Stokes (TLNS) code and the flow over a sharp cone at angle of attack calculated using the AFWAL parabolized Navier-Stokes (PNS) code. While stability and transition studies at high speeds are far from routine, the method developed here is the best tool available for researching the physical processes in high-speed boundary layers.

  7. A User's Guide to the Zwikker-Kosten Transmission Line Code (ZKTL)

    NASA Technical Reports Server (NTRS)

    Kelly, J. J.; Abu-Khajeel, H.

    1997-01-01

    This user's guide documents updates to the Zwikker-Kosten Transmission Line Code (ZKTL). This code was developed for analyzing new liner concepts developed to provide increased sound absorption. Contiguous arrays of multi-degree-of-freedom (MDOF) liner elements serve as the model for these liner configurations, and Zwikker and Kosten's theory of sound propagation in channels is used to predict the surface impedance. Transmission matrices for the various liner elements incorporate both analytical and semi-empirical methods. This allows standard matrix techniques to be employed in the code to systematically calculate the composite impedance due to the individual liner elements. The ZKTL code consists of four independent subroutines:

      1. Single channel impedance calculation - linear version (SCIC)
      2. Single channel impedance calculation - nonlinear version (SCICNL)
      3. Multi-channel, multi-segment, multi-layer impedance calculation - linear version (MCMSML)
      4. Multi-channel, multi-segment, multi-layer impedance calculation - nonlinear version (MCMSMLNL)

    Detailed examples, comments, and explanations for each liner impedance computation module are included. Also contained in the guide are depictions of the interactive execution, input files, and output files.
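
    A minimal Python sketch of the matrix technique the code is built on: each liner element contributes a 2x2 transmission (transfer) matrix relating pressure and volume velocity at its two faces, and cascaded elements multiply. The element matrix below is a generic lossless-channel placeholder, not the Zwikker-Kosten expression:

      import numpy as np

      def channel_matrix(k, L, Z0):
          """Transfer matrix of a uniform channel: [p, U]_in = T @ [p, U]_out.
          Generic lossless form; ZKTL instead uses Zwikker-Kosten propagation
          constants and characteristic impedances for each channel."""
          return np.array([[np.cos(k * L), 1j * Z0 * np.sin(k * L)],
                           [1j * np.sin(k * L) / Z0, np.cos(k * L)]])

      # Cascade two segments (e.g., a facesheet channel over a backing channel).
      T = channel_matrix(k=50.0, L=0.01, Z0=415.0) @ channel_matrix(k=50.0, L=0.02, Z0=600.0)

      # A rigid backing (U_out = 0) gives surface impedance Z = p_in/U_in = T00/T10.
      Z_surface = T[0, 0] / T[1, 0]
      print("normalized surface impedance:", Z_surface / 415.0)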

  8. Simultaneous learning and filtering without delusions: a Bayes-optimal combination of Predictive Inference and Adaptive Filtering.

    PubMed

    Kneissler, Jan; Drugowitsch, Jan; Friston, Karl; Butz, Martin V

    2015-01-01

    Predictive coding appears to be one of the fundamental working principles of brain processing. Amongst other aspects, brains often predict the sensory consequences of their own actions. Predictive coding resembles Kalman filtering, where incoming sensory information is filtered to produce prediction errors for subsequent adaptation and learning. However, to generate prediction errors given motor commands, a suitable temporal forward model is required to generate predictions. While in engineering applications it is usually assumed that this forward model is known, the brain has to learn it. When filtering sensory input and learning from the residual signal in parallel, a fundamental problem arises: the system can enter a delusional loop when filtering the sensory information using an overly trusted forward model. In this case, learning stalls before accurate convergence because uncertainty about the forward model is not properly accommodated. We present a Bayes-optimal solution to this generic and pernicious problem for the case of linear forward models, which we call Predictive Inference and Adaptive Filtering (PIAF). PIAF filters incoming sensory information and learns the forward model simultaneously. We show that PIAF is formally related to Kalman filtering and to the Recursive Least Squares linear approximation method, but combines these procedures in a Bayes-optimal fashion. Numerical evaluations confirm that the delusional loop is precluded and that the learning of the forward model is more than 10 times faster when compared to a naive combination of Kalman filtering and Recursive Least Squares.
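
    For reference, a minimal Python sketch of the recursive least squares (RLS) building block that PIAF is compared against; PIAF itself augments this with explicit uncertainty about the forward model, which plain RLS lacks:

      import numpy as np

      def rls_step(w, P, x, y, lam=0.99):
          """One RLS update: refine weights w so that w @ x tracks y.
          P approximates the inverse input covariance; lam is the forgetting factor."""
          Px = P @ x
          k = Px / (lam + x @ Px)   # gain vector
          e = y - w @ x             # prediction error (the learning signal)
          w = w + k * e
          P = (P - np.outer(k, Px)) / lam
          return w, P

      rng = np.random.default_rng(0)
      true_w = np.array([0.5, -0.3])            # "true" forward model (illustrative)
      w, P = np.zeros(2), 100.0 * np.eye(2)
      for _ in range(200):
          x = rng.normal(size=2)                # motor command / regressor
          y = true_w @ x + 0.01 * rng.normal()  # noisy sensory consequence
          w, P = rls_step(w, P, x, y)
      print("learned forward model:", w)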

  9. Statistical and Biophysical Models for Predicting Total and Outdoor Water Use in Los Angeles

    NASA Astrophysics Data System (ADS)

    Mini, C.; Hogue, T. S.; Pincetl, S.

    2012-04-01

    Modeling water demand is a complex exercise in the choice of the functional form, techniques, and variables to integrate in the model. The goal of the current research is to identify the determinants that control total and outdoor residential water use in semi-arid cities and to utilize that information in the development of statistical and biophysical models that can forecast spatial and temporal urban water use. The City of Los Angeles is unique in its highly diverse socio-demographic, economic, and cultural characteristics across neighborhoods, which introduces significant challenges in modeling water use. Increasing climate variability also contributes to uncertainties in water use predictions in urban areas. Monthly individual water use records were acquired from the Los Angeles Department of Water and Power (LADWP) for the 2000 to 2010 period. Study predictors of residential water use include socio-demographic, economic, climate, and landscaping variables at the zip code level collected from the US Census database. Climate variables are estimated from ground-based observations and calculated at the centroid of each zip code by the inverse-distance weighting method. Remotely sensed products of vegetation biomass and landscape land cover are also utilized. Two linear regression models were developed based on the panel data and variables described: a pooled-OLS regression model and a linear mixed effects model. Both models show income per capita and the percentage of landscaped area in each zip code as being statistically significant predictors. The pooled-OLS model tends to over-estimate higher water use zip codes, and both models provide similar RMSE values. Outdoor water use was estimated at the census tract level as the residual between total water use and indoor use. This residual is being compared with the output from a biophysical model including tree and grass cover areas, climate variables, and estimates of evapotranspiration at very high spatial resolution. A genetic algorithm based model (Shuffled Complex Evolution-UA; SCE-UA) is also being developed to provide estimates of the prediction and parameter uncertainties and to compare against the linear regression models. Ultimately, models will be selected to undertake predictions for a range of climate change and landscape scenarios. Finally, project results will contribute to a better understanding of water demand to help predict future water use and implement targeted landscaping conservation programs to maintain sustainable water needs for a growing population under uncertain climate variability.
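
    A minimal sketch of the two regression models described, using statsmodels on a synthetic panel with hypothetical column names (water_use, income_pc, pct_landscape, temp, zip_code); the study's actual implementation and variable set are not specified here:

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(0)
      n = 300
      df = pd.DataFrame({
          "zip_code": rng.integers(0, 20, n),  # hypothetical panel units
          "income_pc": rng.normal(50, 10, n),
          "pct_landscape": rng.uniform(0, 60, n),
          "temp": rng.normal(20, 5, n),
      })
      df["water_use"] = (2.0 + 0.03 * df.income_pc + 0.05 * df.pct_landscape
                         + 0.10 * df.temp + rng.normal(0, 1, n))

      # Pooled OLS over the whole panel.
      pooled = smf.ols("water_use ~ income_pc + pct_landscape + temp", data=df).fit()

      # Linear mixed-effects model with a random intercept per zip code.
      mixed = smf.mixedlm("water_use ~ income_pc + pct_landscape + temp",
                          data=df, groups=df["zip_code"]).fit()
      print(pooled.params, mixed.params, sep="\n")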

  10. Is phonology bypassed in normal or dyslexic development?

    PubMed

    Pennington, B F; Lefly, D L; Van Orden, G C; Bookman, M O; Smith, S D

    1987-01-01

    A pervasive assumption in most accounts of normal reading and spelling development is that phonological coding is important early in development but is subsequently superseded by faster, orthographic coding which bypasses phonology. We call this assumption, which derives from dual process theory, the developmental bypass hypothesis. The present study tests four specific predictions of the developmental bypass hypothesis by comparing dyslexics and nondyslexics from the same families in a cross-sectional design. The four predictions are: 1) that phonological coding skill develops early in normal readers and soon reaches asymptote, whereas orthographic coding skill has a protracted course of development; 2) that the correlation of adult reading or spelling performance with phonological coding skill is considerably less than the correlation with orthographic coding skill; 3) that dyslexics who are mainly deficient in phonological coding skill should be able to bypass this deficit and eventually close the gap in reading and spelling performance; and 4) that the greatest differences between dyslexics and developmental controls on measures of phonological coding skill should be observed early rather than late in development. None of the four predictions of the developmental bypass hypothesis were upheld. Phonological coding skill continued to develop in nondyslexics until adulthood. It accounted for a substantial (32-53 percent) portion of the variance in reading and spelling performance in adult nondyslexics, whereas orthographic coding skill did not account for a statistically reliable portion of this variance. The dyslexics differed little across age in phonological coding skill, but made linear progress in orthographic coding skill, surpassing spelling-age (SA) controls by adulthood. Nonetheless, they did not close the gap in reading and spelling performance. Finally, dyslexics were significantly worse than SA (and Reading Age [RA]) controls in phonological coding skill only in adulthood.

  11. Transient Vibration Prediction for Rotors on Ball Bearings Using Load-dependent Non-linear Bearing Stiffness

    NASA Technical Reports Server (NTRS)

    Fleming, David P.; Poplawski, J. V.

    2002-01-01

    Rolling-element bearing forces vary nonlinearly with bearing deflection. Thus, an accurate rotordynamic transient analysis requires bearing forces to be determined at each step of the transient solution. Analyses have been carried out to show the effect of accurate bearing transient forces (accounting for non-linear speed- and load-dependent bearing stiffness) as compared to the conventional use of average rolling-element bearing stiffness. Bearing forces were calculated by COBRA-AHS (Computer Optimized Ball and Roller Bearing Analysis - Advanced High Speed) and supplied to the rotordynamics code ARDS (Analysis of Rotor Dynamic Systems) for accurate simulation of rotor transient behavior. COBRA-AHS is a fast-running 5 degree-of-freedom computer code able to calculate high speed rolling-element bearing load-displacement data for radial and angular contact ball bearings and also for cylindrical and tapered roller bearings. Results show that use of nonlinear bearing characteristics is essential for accurate prediction of rotordynamic behavior.
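
    A minimal illustration of why load-dependent stiffness matters, assuming a Hertzian-type load-deflection law F = c * delta^(3/2), which is representative of ball bearings (COBRA-AHS computes the full five-degree-of-freedom version):

      # Hertzian-type contact: F = c * delta**1.5, so the tangent stiffness
      # k = dF/d(delta) grows with load rather than being one averaged constant.
      c = 1.0e9  # contact constant, N/m**1.5 (illustrative value)

      def force(delta):
          return c * delta**1.5

      def tangent_stiffness(delta):
          return 1.5 * c * delta**0.5  # analytic dF/d(delta)

      for delta in (1e-6, 1e-5, 1e-4):  # deflection in meters
          print(f"delta={delta:.0e} m  F={force(delta):8.1f} N  "
                f"k={tangent_stiffness(delta):.2e} N/m")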

  12. A 4.8 kbps code-excited linear predictive coder

    NASA Technical Reports Server (NTRS)

    Tremain, Thomas E.; Campbell, Joseph P., Jr.; Welch, Vanoy C.

    1988-01-01

    A secure voice system (STU-3) capable of providing end-to-end secure voice communications was developed in 1984. The terminal for the new system will be built around the standard LPC-10 voice processor algorithm. While the performance of the present STU-3 processor is considered to be good, its response to nonspeech sounds such as whistles, coughs, and impulse-like noises may not be completely acceptable. Speech in noisy environments also causes problems with the LPC-10 voice algorithm. In addition, there is always a demand for something better. It is hoped that LPC-10's 2.4 kbps voice performance will be complemented with a very high quality speech coder operating at a higher data rate. This new coder is one of a number of candidate algorithms being considered for an upgraded version of the STU-3 in late 1989. The problems of designing a code-excited linear predictive (CELP) coder that provides very high quality speech at a 4.8 kbps data rate and can be implemented on today's hardware are considered.
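
    A toy Python illustration of the CELP principle (analysis-by-synthesis codebook search) with a random codebook and a fixed all-pole synthesis filter; a real 4.8 kbps CELP coder adds gain terms, an adaptive codebook, and perceptual weighting:

      import numpy as np
      from scipy.signal import lfilter

      rng = np.random.default_rng(1)
      lpc = np.array([1.0, -0.9])           # synthesis filter 1/A(z), fixed here
      codebook = rng.normal(size=(64, 40))  # 64 candidate excitation vectors

      def encode_frame(target):
          """Pick the codebook entry whose synthesized output best fits the frame."""
          best_i, best_err = -1, np.inf
          for i, exc in enumerate(codebook):
              synth = lfilter([1.0], lpc, exc)     # excitation through 1/A(z)
              err = np.sum((target - synth) ** 2)  # analysis-by-synthesis criterion
              if err < best_err:
                  best_i, best_err = i, err
          return best_i

      frame = lfilter([1.0], lpc, rng.normal(size=40))  # toy speech-like frame
      print("selected codebook index:", encode_frame(frame))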

  13. 3D-MHD Simulations of the Madison Dynamo Experiment

    NASA Astrophysics Data System (ADS)

    Bayliss, R. A.; Forest, C. B.; Wright, J. C.; O'Connell, R.

    2003-10-01

    Growth, saturation, and turbulent evolution of the Madison dynamo experiment are investigated numerically using a 3-D pseudo-spectral simulation of the MHD equations; results of the simulations are used to predict the behavior of the experiment. The code solves the self-consistent full evolution of the magnetic and velocity fields. The code uses a spectral representation via spherical harmonic basis functions of the vector fields in longitude and latitude, and fourth order finite differences in the radial direction. The magnetic field evolution has been benchmarked against the laminar kinematic dynamo predicted by M. L. Dudley and R. W. James [Proc. R. Soc. Lond. A 425, 407-429 (1989)]. Initial results indicate that saturation of the magnetic field occurs such that the perturbed backreaction of the induced magnetic field changes the velocity field so that it is no longer linearly unstable, suggesting non-linear terms are necessary for explaining the resulting state. Saturation and self-excitation depend in detail upon the magnetic Prandtl number.

  14. FPGA implementation of predictive degradation model for engine oil lifetime

    NASA Astrophysics Data System (ADS)

    Idros, M. F. M.; Razak, A. H. A.; Junid, S. A. M. Al; Suliman, S. I.; Halim, A. K.

    2018-03-01

    This paper presents the implementation of a linear regression model for degradation prediction at the Register Transfer Level (RTL) using Quartus II. A stationary model had been identified in the degradation trend for the engine oil in a vehicle using a time series method. For the RTL implementation, the degradation model is written in Verilog HDL and the input data are sampled at fixed times. A clock divider was designed to support the timing sequence of the input data. For every five data points, a regression analysis is applied to determine the slope variation and compute the prediction. Here, only negative slopes are considered for prediction purposes, in order to reduce the number of logic gates. The least-squares method is applied to obtain the best linear model based on the mean values of the time series data. The coded algorithm has been implemented on an FPGA for validation purposes. The result shows the predicted time to change the engine oil.
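
    A minimal software model of the described datapath, assuming evenly spaced samples: every five data points, a least-squares slope is computed, and only a negative slope triggers a lifetime prediction (threshold and readings below are illustrative):

      import numpy as np

      def ls_slope(y):
          """Least-squares slope of evenly spaced samples (t = 0, 1, ..., n-1)."""
          t = np.arange(len(y))
          return np.polyfit(t, y, 1)[0]

      def predict_change_time(y, threshold):
          """If quality is degrading (negative slope), extrapolate to the threshold."""
          m = ls_slope(y)
          if m >= 0:
              return None                 # only negative slopes are considered
          return (threshold - y[-1]) / m  # time steps until threshold crossing

      oil_quality = [0.95, 0.93, 0.92, 0.90, 0.89]  # illustrative sensor readings
      print(predict_change_time(oil_quality, threshold=0.70))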

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lao, Lang L.; St John, Holger; Staebler, Gary M.

    This report describes the work done under U.S. Department of Energy grant number DE-FG02-07ER54935 for the period ending July 31, 2010. The goal of this project was to provide predictive transport analysis to the PTRANSP code. Our contribution to this effort consisted of three parts: (a) a predictive solver suitable for use with highly non-linear transport models and installation of the turbulent confinement models GLF23 and TGLF, (b) an interface of this solver with the PTRANSP code, and (c) initial development of an EPED1 edge pedestal model interface with PTRANSP. PTRANSP has been installed locally on this cluster by importing a complete PTRANSP build environment that always contains the proper version of the libraries and other object files that PTRANSP requires. The GCNMP package and its interface code have been added to the SVN repository at PPPL.

  16. Broadband Polarization Conversion Metasurface Based on Metal Cut-Wire Structure for Radar Cross Section Reduction.

    PubMed

    Yang, Jia Ji; Cheng, Yong Zhi; Ge, Chen Chen; Gong, Rong Zhou

    2018-04-19

    A class of linear polarization conversion coding metasurfaces (MSs) based on a metal cut-wire structure is proposed, which can be applied to radar cross section (RCS) reduction. We first present a hypothesis based on the principle of planar array theory, and then verify the RCS reduction characteristics of the linear polarization conversion coding MSs by simulations and experiments. The simulated results show that in the frequency range of 6-14 GHz, the linear polarization conversion ratio reaches a maximum value of 90%, which is in good agreement with the theoretical predictions. For normally incident x- and y-polarized waves, the RCS reduction of the designed coding MSs 01/01 and 01/10 is essentially more than 10 dB in the above-mentioned frequency range. We prepared and measured the 01/10 coding MS sample, and found that the experimental results in terms of reflectance and RCS reduction are in good agreement with the simulated ones under normal incidence. In addition, under oblique incidence, the RCS reduction is suppressed as the angle of incidence increases, but RCS reduction effects still appear in a certain frequency range. The designed MS is expected to have valuable potential in applications for stealth technology.

  17. On the linear programming bound for linear Lee codes.

    PubMed

    Astola, Helena; Tabus, Ioan

    2016-01-01

    Based on an invariance-type property of the Lee-compositions of a linear Lee code, additional equality constraints can be introduced to the linear programming problem of linear Lee codes. In this paper, we formulate this property in terms of an action of the multiplicative group of the field [Formula: see text] on the set of Lee-compositions. We show some useful properties of certain sums of Lee-numbers, which are the eigenvalues of the Lee association scheme, appearing in the linear programming problem of linear Lee codes. Using the additional equality constraints, we formulate the linear programming problem of linear Lee codes in a very compact form, leading to fast execution, which allows the bounds to be computed efficiently for large parameter values of the linear codes.

  18. Performance optimization for rotors in hover and axial flight

    NASA Technical Reports Server (NTRS)

    Quackenbush, T. R.; Wachspress, D. A.; Kaufman, A. E.; Bliss, D. B.

    1989-01-01

    Performance optimization for rotors in hover and axial flight is a topic of continuing importance to rotorcraft designers. The aim of this Phase 1 effort has been to demonstrate that a linear optimization algorithm could be coupled to an existing influence coefficient hover performance code. This code, dubbed EHPIC (Evaluation of Hover Performance using Influence Coefficients), uses a quasi-linear wake relaxation to solve for the rotor performance. The coupling was accomplished by expanding the matrix of linearized influence coefficients in EHPIC to accommodate design variables and deriving new coefficients for linearized equations governing perturbations in power and thrust. These coefficients formed the input to a linear optimization analysis, which used the flow tangency conditions on the blade and in the wake to impose equality constraints on the expanded system of equations; user-specified inequality constraints were also employed to bound the changes in the design. It was found that this locally linearized analysis could be invoked to predict a design change that would produce a reduction in the power required by the rotor at constant thrust. Thus, an efficient search for improved versions of the baseline design can be carried out while retaining the accuracy inherent in a free wake/lifting surface performance analysis.

  19. Development of a linearized unsteady Euler analysis for turbomachinery blade rows

    NASA Technical Reports Server (NTRS)

    Verdon, Joseph M.; Montgomery, Matthew D.; Kousen, Kenneth A.

    1995-01-01

    A linearized unsteady aerodynamic analysis for axial-flow turbomachinery blading is described in this report. The linearization is based on the Euler equations of fluid motion and is motivated by the need for an efficient aerodynamic analysis that can be used in predicting the aeroelastic and aeroacoustic responses of blade rows. The field equations and surface conditions required for inviscid, nonlinear and linearized, unsteady aerodynamic analyses of three-dimensional flow through a single blade row operating within a cylindrical duct are derived. An existing numerical algorithm for determining time-accurate solutions of the nonlinear unsteady flow problem is described, and a numerical model, based upon this nonlinear flow solver, is formulated for the first-harmonic linear unsteady problem. The linearized aerodynamic and numerical models have been implemented into a first-harmonic unsteady flow code, called LINFLUX. At present this code applies only to two-dimensional flows, but an extension to three dimensions is planned as future work. The three-dimensional aerodynamic and numerical formulations are described in this report. Numerical results for two-dimensional unsteady cascade flows, excited by prescribed blade motions and prescribed aerodynamic disturbances at inlet and exit, are also provided to illustrate the present capabilities of the LINFLUX analysis.

  20. Predictive Coding: A Possible Explanation of Filling-In at the Blind Spot

    PubMed Central

    Raman, Rajani; Sarkar, Sandip

    2016-01-01

    Filling-in at the blind spot is a perceptual phenomenon in which the visual system fills the informational void, which arises due to the absence of retinal input corresponding to the optic disc, with surrounding visual attributes. It is known that during filling-in, nonlinear neural responses that correlate with the percept are observed in the early visual areas, but knowledge of the underlying neural mechanism for filling-in at the blind spot is far from complete. In this work, we attempt to present a fresh perspective on the computational mechanism of the filling-in process in the framework of hierarchical predictive coding, which provides a functional explanation for a range of neural responses in the cortex. We simulated a three-level hierarchical network and observed its response while stimulating the network with different bar stimuli across the blind spot. We find that the predictive-estimator neurons that represent the blind spot in primary visual cortex exhibit elevated non-linear responses when the bar stimulates both sides of the blind spot. Using a generative model, we also show that these responses represent filling-in completion. All these results are consistent with the findings of psychophysical and physiological studies. In this study, we also demonstrate that the tolerance of filling-in qualitatively matches the experimental findings related to non-aligned bars. We discuss this phenomenon in the predictive coding paradigm and show that all our results can be explained by taking into account the efficient coding of natural images along with feedback and feed-forward connections that allow priors and predictions to co-evolve to arrive at the best prediction. These results suggest that the filling-in process could be a manifestation of the general computational principle of hierarchical predictive coding of natural images. PMID:26959812
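
    For intuition, a minimal sketch of the estimate-settling step in a Rao-Ballard-style predictive coding layer, the general kind of update such a simulated network uses; the matrix sizes and rates are illustrative, not the paper's three-level model:

      import numpy as np

      rng = np.random.default_rng(2)
      U = rng.normal(scale=0.1, size=(16, 8))  # generative weights: r -> predicted input

      def settle(x, steps=200, lr=0.05):
          """Gradient settling of estimate r to minimize the prediction error x - U r."""
          r = np.zeros(8)
          for _ in range(steps):
              err = x - U @ r        # bottom-up prediction error
              r += lr * (U.T @ err)  # top-down estimate update
          return r, err

      x = rng.normal(size=16)  # stand-in for retinal input around the blind spot
      r, err = settle(x)
      print("residual error norm:", np.linalg.norm(err))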

  1. Experimental Analysis of Steel Beams Subjected to Fire Enhanced by Brillouin Scattering-Based Fiber Optic Sensor Data.

    PubMed

    Bao, Yi; Chen, Yizheng; Hoehler, Matthew S; Smith, Christopher M; Bundy, Matthew; Chen, Genda

    2017-01-01

    This paper presents high temperature measurements using a Brillouin scattering-based fiber optic sensor and the application of the measured temperatures and building-code-recommended material parameters in an enhanced thermomechanical analysis of simply supported steel beams subjected to combined thermal and mechanical loading. The distributed temperature sensor captures detailed, nonuniform temperature distributions that are compared locally with thermocouple measurements, with less than 4.7% average difference at the 95% confidence level. The simulated strains and deflections are validated using measurements from a second distributed fiber optic (strain) sensor and two linear potentiometers, respectively. The results demonstrate that the temperature-dependent material properties specified in the four investigated building codes lead to strain predictions with less than 13% average error at the 95% confidence level and that the European building code provided the best predictions. However, the implicit consideration of creep in the European code is insufficient when the beam temperature exceeds 800°C.

  2. Towards a better understanding of critical gradients and near-marginal turbulence in burning plasma conditions

    NASA Astrophysics Data System (ADS)

    Holland, C.; Candy, J.; Howard, N. T.

    2017-10-01

    Developing accurate predictive transport models of burning plasma conditions is essential for confident prediction and optimization of next step experiments such as ITER and DEMO. Core transport in these plasmas is expected to be very small in gyroBohm-normalized units, such that the plasma should lie close to the critical gradients for onset of microturbulence instabilities. We present recent results investigating the scaling of linear critical gradients of ITG, TEM, and ETG modes as a function of parameters such as safety factor, magnetic shear, and collisionality for nominal conditions and geometry expected in ITER H-mode plasmas. A subset of these results is then compared against predictions from nonlinear gyrokinetic simulations, to quantify differences between linear and nonlinear thresholds. As part of this study, linear and nonlinear results from both GYRO and CGYRO codes will be compared against each other, as well as to predictions from the quasilinear TGLF model. Challenges arising from near-marginal turbulence dynamics are addressed. This work was supported by the US Department of Energy under US DE-SC0006957.

  3. The Increased Sensitivity of Irregular Peripheral Canal and Otolith Vestibular Afferents Optimizes their Encoding of Natural Stimuli

    PubMed Central

    Schneider, Adam D.; Jamali, Mohsen; Carriot, Jerome; Chacron, Maurice J.

    2015-01-01

    Efficient processing of incoming sensory input is essential for an organism's survival. A growing body of evidence suggests that sensory systems have developed coding strategies that are constrained by the statistics of the natural environment. Consequently, it is necessary to first characterize neural responses to natural stimuli to uncover the coding strategies used by a given sensory system. Here we report for the first time the statistics of vestibular rotational and translational stimuli experienced by rhesus monkeys during natural (e.g., walking, grooming) behaviors. We find that these stimuli can reach intensities as high as 1500 deg/s and 8 G. Recordings from afferents during naturalistic rotational and linear motion further revealed strongly nonlinear responses in the form of rectification and saturation, which could not be accurately predicted by traditional linear models of vestibular processing. Accordingly, we used linear–nonlinear cascade models and found that these could accurately predict responses to naturalistic stimuli. Finally, we tested whether the statistics of natural vestibular signals constrain the neural coding strategies used by peripheral afferents. We found that both irregular otolith and semicircular canal afferents, because of their higher sensitivities, were more optimized for processing natural vestibular stimuli as compared with their regular counterparts. Our results therefore provide the first evidence supporting the hypothesis that the neural coding strategies used by the vestibular system are matched to the statistics of natural stimuli. PMID:25855169

  4. PubMed

    Trinker, Horst

    2011-10-28

    We study the distribution of triples of codewords of codes and ordered codes. Schrijver [A. Schrijver, New code upper bounds from the Terwilliger algebra and semidefinite programming, IEEE Trans. Inform. Theory 51 (8) (2005) 2859-2866] used the triple distribution of a code to establish a bound on the number of codewords based on semidefinite programming. In the first part of this work, we generalize this approach for ordered codes. In the second part, we consider linear codes and linear ordered codes and present a MacWilliams-type identity for the triple distribution of their dual code. Based on the non-negativity of this linear transform, we establish a linear programming bound and conclude with a table of parameters for which this bound yields better results than the standard linear programming bound.

  5. Evaluation of Linear, Inviscid, Viscous, and Reduced-Order Modeling Aeroelastic Solutions of the AGARD 445.6 Wing Using Root Locus Analysis

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.; Perry, Boyd III; Chwalowski, Pawel

    2014-01-01

    Reduced-order modeling (ROM) methods are applied to the CFD-based aeroelastic analysis of the AGARD 445.6 wing in order to gain insight regarding well-known discrepancies between the aeroelastic analyses and the experimental results. The results presented include aeroelastic solutions using the inviscid CAP-TSD code and the FUN3D code (Euler and Navier-Stokes). Full CFD aeroelastic solutions and ROM aeroelastic solutions, computed at several Mach numbers, are presented in the form of root locus plots in order to better reveal the aeroelastic root migrations with increasing dynamic pressure. Important conclusions are drawn from these results including the ability of the linear CAP-TSD code to accurately predict the entire experimental flutter boundary (repeat of analyses performed in the 1980's), that the Euler solutions at supersonic conditions indicate that the third mode is always unstable, and that the FUN3D Navier-Stokes solutions stabilize the unstable third mode seen in the Euler solutions.

  6. Burner liner thermal/structural load modeling: TRANCITS program user's manual

    NASA Technical Reports Server (NTRS)

    Maffeo, R.

    1985-01-01

    The Transfer Analysis Code to Interface Thermal/Structural Problems (TRANCITS) is discussed. The TRANCITS code satisfies all the objectives for transferring thermal data between heat transfer and structural models of combustor liners, and it can be used as a generic thermal translator between heat transfer and stress models of any component, regardless of the geometry. TRANCITS can accurately and efficiently convert the temperature distributions predicted by the heat transfer programs to those required by the stress codes. It can be used for both linear and nonlinear structural codes and can produce nodal temperatures, elemental centroid temperatures, or elemental Gauss point temperatures. The thermal output of both the MARC and SINDA heat transfer codes can be interfaced directly with TRANCITS, and it will automatically produce stress model input formatted for NASTRAN and MARC. Any thermal program and structural program can be interfaced by using the neutral input and output forms supported by TRANCITS.

  7. Simulation of cryogenic turbopump annular seals

    NASA Astrophysics Data System (ADS)

    Palazzolo, Alan B.

    1992-12-01

    The goal of the current work is to develop software that can accurately predict the dynamic coefficients, forces, leakage, and horsepower loss for annular seals which have a potential for affecting the rotordynamic behavior of pumps. The fruit of last year's research was the computer code SEALPAL, which included capabilities for linear tapered geometry, Moody friction factor, and inlet pre-swirl. This code produced results which in most cases compared very well with check cases presented in the literature. The TAMUSEAL I code, which was written to improve SEALPAL by correcting a bug and by adding more accurate integration algorithms and additional capabilities, was then used to predict dynamic coefficients and leakage for the NASA/Pratt and Whitney Alternate Turbopump Development (ATD) LOX Pump's seal.

  8. Simulation of Shear Alfvén Waves in LAPD using the BOUT++ code

    NASA Astrophysics Data System (ADS)

    Wei, Di; Friedman, B.; Carter, T. A.; Umansky, M. V.

    2011-10-01

    The linear and nonlinear physics of shear Alfvén waves is investigated using the 3D Braginskii fluid code BOUT++. The code has been verified against analytical calculations for the dispersion of kinetic and inertial Alfvén waves. Various mechanisms for forcing Alfvén waves in the code are explored, including introducing localized current sources similar to physical antennas used in experiments. Using this foundation, the code is used to model nonlinear interactions among shear Alfvén waves in a cylindrical magnetized plasma, such as that found in the Large Plasma Device (LAPD) at UCLA. In the future this investigation will allow for examination of the nonlinear interactions between shear Alfvén waves in both laboratory and space plasmas in order to compare to predictions of MHD turbulence.

  9. Linear stability theory and three-dimensional boundary layer transition

    NASA Technical Reports Server (NTRS)

    Spall, Robert E.; Malik, Mujeeb R.

    1992-01-01

    The viewgraphs and discussion of linear stability theory and three-dimensional boundary layer transition are provided. The ability to predict, using analytical tools, the location of boundary layer transition over aircraft-type configurations is of great importance to designers interested in laminar flow control (LFC). The e(sup N) method has proven to be fairly effective in predicting, in a consistent manner, the location of the onset of transition for simple geometries in low disturbance environments. This method provides a correlation between the most amplified single normal mode and the experimental location of the onset of transition. Studies indicate that values of N between 8 and 10 correlate well with the onset of transition. For most previous calculations, the mean flows were restricted to two-dimensional or axisymmetric cases, or have employed simple three-dimensional mean flows (e.g., rotating disk, infinite swept wing, or tapered swept wing with straight isobars). Unfortunately, for flows over general wing configurations, and for nearly all flows over fuselage-type bodies at incidence, the analysis of fully three-dimensional flow fields is required. Results obtained for the linear stability of fully three-dimensional boundary layers formed over both wing and fuselage-type geometries, and for both high and low speed flows, are discussed. When possible, transition estimates from the e(sup N) method are compared to experimentally determined locations. The stability calculations are made using a modified version of the linear stability code COSAL. Mean flows were computed using both Navier-Stokes and boundary-layer codes.
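
    A minimal sketch of the e(sup N) correlation itself: the local growth rate of the most amplified mode is integrated streamwise into an N-factor, and transition is flagged where N reaches the correlating value. The growth-rate curve below is synthetic; in practice it comes from a stability code such as COSAL:

      import numpy as np

      x = np.linspace(0.0, 1.0, 200)                # streamwise coordinate
      sigma = np.clip(30.0 * (x - 0.2), 0.0, None)  # synthetic growth rate -alpha_i(x)

      # N(x) = integral of the growth rate from the neutral point (trapezoid rule).
      N = np.concatenate(([0.0], np.cumsum(0.5 * (sigma[1:] + sigma[:-1]) * np.diff(x))))

      N_crit = 9.0                                  # typical correlating value (8-10)
      idx = np.argmax(N >= N_crit)
      print("predicted transition onset at x =", x[idx] if N[idx] >= N_crit else None)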

  10. PC programs for the prediction of the linear stability behavior of liquid propellant propulsion systems and application to current MSFC rocket engine test programs, volume 1

    NASA Technical Reports Server (NTRS)

    Doane, George B., III; Armstrong, W. C.

    1990-01-01

    Research on propulsion stability (chugging and acoustic modes) and propellant valve control was performed. As part of the activation of the new liquid propulsion test facilities, it is necessary to analyze total propulsion system stability. To accomplish this, several codes were built to run on desktop 386 machines. These codes enable one to analyze the stability questions associated with propellant feed systems. In addition, further work was adapted to this computing environment and furnished along with the other codes; this provides those interested in high-frequency oscillatory combustion behavior (which does not couple to the feed system) a set of codes for studying proposed liquid rocket engines.

  11. Isometries and binary images of linear block codes over ℤ4 + uℤ4 and ℤ8 + uℤ8

    NASA Astrophysics Data System (ADS)

    Sison, Virgilio; Remillion, Monica

    2017-10-01

    Let F_2 be the binary field and ℤ_{2^r} the residue class ring of integers modulo 2^r, where r is a positive integer. For the finite 16-element commutative local Frobenius non-chain ring ℤ4 + uℤ4, where u is nilpotent of index 2, two weight functions are considered, namely the Lee weight and the homogeneous weight. With the appropriate application of these weights, isometric maps from ℤ4 + uℤ4 to the binary spaces F_2^4 and F_2^8, respectively, are established via the composition of other weight-based isometries. The classical Hamming weight is used on the binary space. The resulting isometries are then applied to linear block codes over ℤ4 + uℤ4 whose images are binary codes of predicted length, which may or may not be linear. Certain lower and upper bounds on the minimum distances of the binary images are also derived in terms of the parameters of the ℤ4 + uℤ4 codes. Several new codes and their images are constructed as illustrative examples. An analogous procedure is performed successfully on the ring ℤ8 + uℤ8, where u^2 = 0, which is a commutative local Frobenius non-chain ring of order 64. It turns out that the method is possible in general for the class of rings ℤ_{2^r} + uℤ_{2^r}, where u^2 = 0, for any positive integer r, using the generalized Gray map from ℤ_{2^r} to F_2^{2^(r-1)}.
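
    For concreteness, a small Python sketch of the classical ingredient these isometries build on: the Gray map from ℤ4 to F_2^2, which carries the Lee weight on ℤ4 to the Hamming weight on bit pairs. The paper's maps on ℤ4 + uℤ4 compose further weight-based isometries, which are not reproduced here:

      GRAY = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}  # Z4 -> F_2^2
      LEE = {0: 0, 1: 1, 2: 2, 3: 1}                       # Lee weight on Z4

      def gray_image(codeword):
          """Concatenate Gray images of Z4 symbols; an isometry (Lee -> Hamming)."""
          return tuple(b for c in codeword for b in GRAY[c])

      cw = (1, 2, 3, 0)
      lee_wt = sum(LEE[c] for c in cw)
      ham_wt = sum(gray_image(cw))
      assert lee_wt == ham_wt  # the defining property of the isometry
      print(gray_image(cw), lee_wt)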

  12. Three-Dimensional Nacelle Aeroacoustics Code With Application to Impedance Eduction

    NASA Technical Reports Server (NTRS)

    Watson, Willie R.

    2000-01-01

    A three-dimensional nacelle acoustics code that accounts for uniform mean flow and variable surface impedance liners is developed. The code is linked to a commercial version of the NASA-developed General Purpose Solver (for solution of linear systems of equations) in order to obtain the capability to study high frequency waves that may require millions of grid points for resolution. Detailed, single-processor statistics for the performance of the solver in rigid and soft-wall ducts are presented. Over the range of frequencies of current interest in nacelle liner research, noise attenuation levels predicted from the code were in excellent agreement with those predicted from mode theory. The equation solver is memory efficient, requiring only a small fraction of the memory available on modern computers. As an application, the code is combined with an optimization algorithm and used to educe the impedance spectrum of a ceramic liner. The primary problem with using the code to perform optimization studies at frequencies above 1 kHz is the excessive CPU time (a major portion of which is matrix assembly). It is recommended that research be directed toward development of a rapid sparse assembler and exploitation of the multiprocessor capability of the solver to further reduce CPU time.

  13. Simulation of ion-temperature-gradient turbulence in tokamaks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cohen, B I; Dimits, A M; Kim, C

    Results are presented from nonlinear gyrokinetic simulations of toroidal ion temperature gradient (ITG) turbulence and transport. The gyrokinetic simulations are found to yield values of the thermal diffusivity significantly lower than gyrofluid or IFS-PPPL-model predictions. A new phenomenon of nonlinear effective critical gradients larger than the linear instability threshold gradients is observed, and is associated with undamped flux-surface-averaged shear flows. The nonlinear gyrokinetic codes have passed extensive validity tests which include comparison against independent linear calculations, a series of nonlinear convergence tests, and a comparison between two independent nonlinear gyrokinetic codes. Our most realistic simulations to date have actual reconstructed equilibria from experiments and a model for dilution by impurity and beam ions. These simulations highlight the need for still more physics to be included in the simulations.

  14. Integrated Strategy Improves the Prediction Accuracy of miRNA in Large Dataset

    PubMed Central

    Lipps, David; Devineni, Sree

    2016-01-01

    MiRNAs are short non-coding RNAs of about 22 nucleotides, which play critical roles in gene expression regulation. The biogenesis of miRNAs is largely determined by the sequence and structural features of their parental RNA molecules. Based on these features, multiple computational tools have been developed to predict whether RNA transcripts contain miRNAs or not. Although very successful, these predictors have started to face multiple challenges in recent years. Many predictors were optimized using datasets of hundreds of miRNA samples. The sizes of these datasets are much smaller than the number of known miRNAs. Consequently, the prediction accuracy of these predictors on large datasets is unknown and needs to be re-tested. In addition, many predictors were optimized for either high sensitivity or high specificity. These optimization strategies may introduce serious limitations in applications. Moreover, to meet continuously rising expectations on these computational tools, improving the prediction accuracy becomes extremely important. In this study, a meta-predictor, mirMeta, was developed by integrating a set of non-linear transformations with a meta-strategy. More specifically, the outputs of five individual predictors were first preprocessed using non-linear transformations, and then fed into an artificial neural network to make the meta-prediction. The prediction accuracy of the meta-predictor was validated using both multi-fold cross-validation and an independent dataset. The final accuracy of the meta-predictor on a newly designed large dataset is improved by 7%, to 93%. The meta-predictor is also shown to be less dependent on the datasets, and to have a refined balance between sensitivity and specificity. This study is important in two ways: first, it shows that the combination of non-linear transformations and artificial neural networks improves the prediction accuracy of individual predictors; second, a new miRNA predictor with significantly improved prediction accuracy is developed for the community for identifying novel miRNAs and the complete set of miRNAs. Source code is available at: https://github.com/xueLab/mirMeta PMID:28002428
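
    A minimal sketch of the meta-strategy described, with synthetic stand-ins for the five base-predictor scores: each output is passed through a non-linear transformation and the transformed vector is fed to a small neural network (scikit-learn is used here for brevity; the original implementation is at the linked repository):

      import numpy as np
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(3)
      n = 500
      scores = rng.uniform(size=(n, 5))                 # five base-predictor outputs
      labels = (scores.mean(axis=1) > 0.5).astype(int)  # toy ground truth

      def transform(s):
          """Example non-linear preprocessing of base scores (a logit squash)."""
          s = np.clip(s, 1e-6, 1 - 1e-6)
          return np.log(s / (1 - s))

      meta = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
      meta.fit(transform(scores), labels)
      print("training accuracy:", meta.score(transform(scores), labels))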

  15. A New Stochastic Equivalent Linearization Implementation for Prediction of Geometrically Nonlinear Vibrations

    NASA Technical Reports Server (NTRS)

    Muravyov, Alexander A.; Turner, Travis L.; Robinson, Jay H.; Rizzi, Stephen A.

    1999-01-01

    In this paper, the problem of random vibration of geometrically nonlinear MDOF structures is considered. The solutions obtained by application of two different versions of a stochastic linearization method are compared with exact (F-P-K) solutions. The formulation of a relatively new version of the stochastic linearization method (energy-based version) is generalized to the MDOF system case. Also, a new method for determination of nonlinear stiffness coefficients for MDOF structures is demonstrated. This method, in combination with the equivalent linearization technique, is implemented in a new computer program. Results in terms of root-mean-square (RMS) displacements obtained by using the new program and an existing in-house code are compared for two examples of beam-like structures.

  16. Effective gene prediction by high resolution frequency estimator based on least-norm solution technique

    PubMed Central

    2014-01-01

    The linear algebraic concept of subspace plays a significant role in recent techniques of spectrum estimation. In this article, the authors have utilized the noise subspace concept for finding hidden periodicities in DNA sequences. With the vast growth of genomic sequences, the demand to accurately identify protein-coding regions in DNA is rising. Several techniques of DNA feature extraction, drawing on various fields, have emerged in the recent past, among which the application of digital signal processing tools is of prime importance. It is known that coding segments have a 3-base periodicity, while non-coding regions do not have this unique feature. One of the most important spectrum analysis techniques based on the concept of subspace is the least-norm method. The least-norm estimator developed in this paper shows sharp period-3 peaks in coding regions, completely eliminating background noise. Comparison of the proposed method with the existing sliding discrete Fourier transform (SDFT) method, popularly known as the modified periodogram method, has been drawn on several genes from various organisms, and the results show that the proposed method is a better and more effective approach to gene prediction. Resolution, quality factor, sensitivity, specificity, miss rate, and wrong rate are used to establish the superiority of the least-norm gene prediction method over the existing method. PMID:24386895
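
    For context, a minimal Python sketch of the baseline that least-norm estimation improves on: binary-indicator (Voss) mapping of a DNA string followed by a periodogram, in which coding regions show a peak at frequency 1/3 (the sequence below is synthetic):

      import numpy as np

      seq = "ATGGCC" * 60  # synthetic, strongly 3-periodic "exon"
      bases = "ACGT"
      # Voss mapping: one binary indicator sequence per base.
      u = np.array([[1.0 if ch == b else 0.0 for ch in seq] for b in bases])

      spectrum = sum(np.abs(np.fft.fft(ui)) ** 2 for ui in u)
      N = len(seq)
      print("power at k = N/3:        ", spectrum[N // 3])
      print("median background power: ", np.median(spectrum[1:N // 2]))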

  17. Rapid Aeroelastic Analysis of Blade Flutter in Turbomachines

    NASA Technical Reports Server (NTRS)

    Trudell, J. J.; Mehmed, O.; Stefko, G. L.; Bakhle, M. A.; Reddy, T. S. R.; Montgomery, M.; Verdon, J.

    2006-01-01

    The LINFLUX-AE computer code predicts flutter and forced responses of blades and vanes in turbomachines under subsonic, transonic, and supersonic flow conditions. The code solves the Euler equations of unsteady flow in a blade passage under the assumption that the blades vibrate harmonically at small amplitudes. The steady-state nonlinear Euler equations are solved by a separate program, then equations for unsteady flow components are obtained through linearization around the steady-state solution. A structural-dynamics analysis is performed to determine the frequencies and mode shapes of blade vibrations, a preprocessor interpolates mode shapes from the structural-dynamics mesh onto the LINFLUX computational-fluid-dynamics mesh, and an interface code is used to convert the steady-state flow solution to a form required by LINFLUX. Then LINFLUX solves the linearized equations in the frequency domain to calculate the unsteady aerodynamic pressure distribution for a given vibration mode, frequency, and interblade phase angle. A post-processor uses the unsteady pressures to calculate generalized aerodynamic forces, response amplitudes, and eigenvalues (which determine the flutter frequency and damping). In comparison with the TURBO-AE aeroelastic-analysis code, which solves the equations in the time domain, LINFLUX-AE is 6 to 7 times faster.

  18. Prediction of bead area contact load at the tire-wheel interface using NASTRAN

    NASA Technical Reports Server (NTRS)

    Chen, C. H. S.

    1982-01-01

    The theoretical prediction of the bead area contact load at the tire-wheel interface using NASTRAN is reported. The application of the linear code to a basically nonlinear problem results in excessive deformation of the structure, and the tire-wheel contact conditions become impossible to achieve. A pseudo-nonlinear approach was adopted in which the moduli of the cord-reinforced composite are increased so that the computed key deformations matched those of the experiment. The numerical results presented are discussed.

  19. A new hybrid code (CHIEF) implementing the inertial electron fluid equation without approximation

    NASA Astrophysics Data System (ADS)

    Muñoz, P. A.; Jain, N.; Kilian, P.; Büchner, J.

    2018-03-01

    We present a new hybrid algorithm implemented in the code CHIEF (Code Hybrid with Inertial Electron Fluid) for simulations of electron-ion plasmas. The algorithm treats the ions kinetically, modeled by the Particle-in-Cell (PiC) method, and electrons as an inertial fluid, modeled by electron fluid equations without any of the approximations used in most of the other hybrid codes with an inertial electron fluid. This kind of code is appropriate to model a large variety of quasineutral plasma phenomena where the electron inertia and/or ion kinetic effects are relevant. We present here the governing equations of the model, how these are discretized and implemented numerically, as well as six test problems to validate our numerical approach. Our chosen test problems, where the electron inertia and ion kinetic effects play the essential role, are: 0) Excitation of parallel eigenmodes to check numerical convergence and stability, 1) parallel (to a background magnetic field) propagating electromagnetic waves, 2) perpendicular propagating electrostatic waves (ion Bernstein modes), 3) ion beam right-hand instability (resonant and non-resonant), 4) ion Landau damping, 5) ion firehose instability, and 6) 2D oblique ion firehose instability. Our results reproduce successfully the predictions of linear and non-linear theory for all these problems, validating our code. All properties of this hybrid code make it ideal to study multi-scale phenomena between electron and ion scales such as collisionless shocks, magnetic reconnection and kinetic plasma turbulence in the dissipation range above the electron scales.

  20. Coding stimulus amplitude by correlated neural activity

    NASA Astrophysics Data System (ADS)

    Metzen, Michael G.; Ávila-Åkerberg, Oscar; Chacron, Maurice J.

    2015-04-01

    While correlated activity is observed ubiquitously in the brain, its role in neural coding has remained controversial. Recent experimental results have demonstrated that correlated but not single-neuron activity can encode the detailed time course of the instantaneous amplitude (i.e., envelope) of a stimulus. These have furthermore demonstrated that such coding required and was optimal for a nonzero level of neural variability. However, a theoretical understanding of these results is still lacking. Here we provide a comprehensive theoretical framework explaining these experimental findings. Specifically, we use linear response theory to derive an expression relating the correlation coefficient to the instantaneous stimulus amplitude, which takes into account key single-neuron properties such as firing rate and variability as quantified by the coefficient of variation. The theoretical prediction was in excellent agreement with numerical simulations of various integrate-and-fire type neuron models for various parameter values. Further, we demonstrate a form of stochastic resonance as optimal coding of stimulus variance by correlated activity occurs for a nonzero value of noise intensity. Thus, our results provide a theoretical explanation of the phenomenon by which correlated but not single-neuron activity can code for stimulus amplitude and how key single-neuron properties such as firing rate and variability influence such coding. Correlation coding by correlated but not single-neuron activity is thus predicted to be a ubiquitous feature of sensory processing for neurons responding to weak input.
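
    A toy simulation in the spirit of these results (not the linear-response derivation): two leaky integrate-and-fire neurons receive the same signal plus independent noise, and the correlation of their spike counts is examined as the signal amplitude grows; all parameters are illustrative:

      import numpy as np

      rng = np.random.default_rng(4)

      def lif_spikes(common, noise_scale, dt=1e-3, tau=0.02, v_th=1.0):
          """Leaky integrate-and-fire neuron driven by a shared signal plus noise."""
          v, spikes = 0.0, np.zeros(common.size, dtype=bool)
          for i, s in enumerate(common):
              noise = noise_scale * rng.normal() / np.sqrt(dt)
              v += dt * (-v / tau + 25.0 * (1.0 + s) + noise)
              if v >= v_th:
                  spikes[i], v = True, 0.0
          return spikes

      t = np.arange(0, 5.0, 1e-3)
      for amp in (0.1, 0.4, 0.8):  # instantaneous stimulus amplitude
          signal = amp * np.sin(2 * np.pi * 2.0 * t)
          a = lif_spikes(signal, noise_scale=5.0)
          b = lif_spikes(signal, noise_scale=5.0)  # same signal, private noise
          ca = a.reshape(-1, 100).sum(axis=1)      # spike counts, 100 ms windows
          cb = b.reshape(-1, 100).sum(axis=1)
          print(f"amplitude {amp}: count correlation "
                f"{np.corrcoef(ca, cb)[0, 1]:.2f}")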

  1. Linear and nonlinear verification of gyrokinetic microstability codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bravenec, R. V.; Candy, J.; Barnes, M.

    2011-12-15

    Verification of nonlinear microstability codes is a necessary step before comparisons or predictions of turbulent transport in toroidal devices can be justified. By verification we mean demonstrating that a code correctly solves the mathematical model upon which it is based. Some degree of verification can be accomplished indirectly from analytical instability threshold conditions, nonlinear saturation estimates, etc., for relatively simple plasmas. However, verification for experimentally relevant plasma conditions and physics is beyond the realm of analytical treatment and must rely on code-to-code comparisons, i.e., benchmarking. The premise is that the codes are verified for a given problem or set of parameters if they all agree within a specified tolerance. True verification requires comparisons for a number of plasma conditions, e.g., different devices, discharges, times, and radii. Running the codes and keeping track of linear and nonlinear inputs and results for all conditions could be prohibitive unless there was some degree of automation. We have written software to do just this and have formulated a metric for assessing agreement of nonlinear simulations. We present comparisons, both linear and nonlinear, between the gyrokinetic codes GYRO [J. Candy and R. E. Waltz, J. Comput. Phys. 186, 545 (2003)] and GS2 [W. Dorland, F. Jenko, M. Kotschenreuther, and B. N. Rogers, Phys. Rev. Lett. 85, 5579 (2000)]. We do so at the mid-radius for the same discharge as in earlier work [C. Holland, A. E. White, G. R. McKee, M. W. Shafer, J. Candy, R. E. Waltz, L. Schmitz, and G. R. Tynan, Phys. Plasmas 16, 052301 (2009)]. The comparisons include electromagnetic fluctuations, passing and trapped electrons, plasma shaping, one kinetic impurity, and finite Debye-length effects. Results neglecting and including electron collisions (Lorentz model) are presented. We find that the linear frequencies with or without collisions agree well between codes, as do the time averages of the nonlinear fluxes without collisions. With collisions, the differences between the time-averaged fluxes are larger than the uncertainties defined as the oscillations of the fluxes, with the GS2 fluxes consistently larger (or more positive) than those from GYRO. However, the electrostatic fluxes are much smaller than those without collisions (the electromagnetic energy flux is negligible in both cases). In fact, except for the electron energy fluxes, the absolute magnitudes of the differences in fluxes with collisions are the same or smaller than those without. None of the fluxes exhibit large absolute differences between codes. Beyond these results, the specific linear and nonlinear benchmarks proposed here, as well as the underlying methodology, provide the basis for a wide variety of future verification efforts.

  2. Initial conditions for accurate N-body simulations of massive neutrino cosmologies

    NASA Astrophysics Data System (ADS)

    Zennaro, M.; Bel, J.; Villaescusa-Navarro, F.; Carbone, C.; Sefusatti, E.; Guzzo, L.

    2017-04-01

    The set-up of the initial conditions in cosmological N-body simulations is usually implemented by rescaling the desired low-redshift linear power spectrum to the required starting redshift consistently with the Newtonian evolution of the simulation. The implementation of this practical solution requires more care in the context of massive neutrino cosmologies, mainly because of the non-trivial scale-dependence of the linear growth that characterizes these models. In this work, we consider a simple two-fluid, Newtonian approximation for cold dark matter and massive neutrino perturbations that can reproduce the cold matter linear evolution predicted by Boltzmann codes such as CAMB or CLASS with 0.1 per cent accuracy or better for all redshifts relevant to non-linear structure formation. We use this description, in the first place, to quantify the systematic errors induced by several approximations often assumed in numerical simulations, including the typical set-up of the initial conditions for massive neutrino cosmologies adopted in previous works. We then take advantage of the flexibility of this approach to rescale the late-time linear power spectra to the simulation initial redshift, in order to be as consistent as possible with the dynamics of the N-body code and the approximations it assumes. We implement our method in a public code (REPS, "rescaled power spectra for initial conditions with massive neutrinos", https://github.com/matteozennaro/reps) providing the initial displacements and velocities for cold dark matter and neutrino particles that will allow accurate, i.e. 1 per cent level, numerical simulations for this cosmological scenario.
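
    The bookkeeping step at the heart of the method, rescaling a late-time linear spectrum back to the starting redshift with a scale-dependent growth factor, can be sketched as follows. The growth callable and the toy spectrum are placeholders; REPS obtains the growth from its two-fluid solution calibrated against CAMB/CLASS.

      import numpy as np

      def rescale_power(k, p_target, growth, z_target, z_start):
          """Rescale P(k) from z_target back to z_start using a (generally
          scale-dependent) growth factor D(k, z); P scales as D^2 in linear
          theory. 'growth' is a hypothetical callable D(k, z)."""
          ratio = growth(k, z_start) / growth(k, z_target)
          return p_target * ratio**2

      D = lambda k, z: 1.0 / (1.0 + z)          # toy scale-independent growth
      k = np.logspace(-3, 1, 5)
      p0 = k / (1.0 + (k / 0.02) ** 3)          # toy target spectrum, arbitrary units
      print(rescale_power(k, p0, D, z_target=0.0, z_start=99.0))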

  3. Microphysics of Waves and Instabilities in the Solar Wind and their Macro Manifestations in the Corona and Interplanetary Space

    NASA Technical Reports Server (NTRS)

    Habbal, Shadia R.; Gurman, Joseph (Technical Monitor)

    2003-01-01

    Investigations of the physical processes responsible for the acceleration of the solar wind were pursued with the development of two new solar wind codes: a hybrid code and a 2-D MHD code. Hybrid simulations were performed to investigate the interaction between ions and parallel-propagating low frequency ion cyclotron waves in a homogeneous plasma. In a low-beta plasma such as the solar wind plasma in the inner corona, the proton thermal speed is much smaller than the Alfven speed. Vlasov linear theory predicts that protons are not in resonance with low frequency ion cyclotron waves. However, non-linear effects make it possible for these waves to strongly heat and accelerate protons. This study has important implications for studies of the corona and the solar wind. Low frequency ion cyclotron waves or Alfven waves are commonly observed in the solar wind. Until now, it has been believed that these waves cannot heat the solar wind plasma unless cascading processes transfer their energy to higher frequencies. However, this study shows that these waves may directly heat and accelerate protons non-linearly. This process may play an important role in coronal heating and solar wind acceleration, at least in some parameter regimes.

  4. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3; An Iterative Decoding Algorithm for Linear Block Codes Based on a Low-Weight Trellis Search

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Fossorier, Marc

    1998-01-01

    For long linear block codes, maximum likelihood decoding based on full code trellises would be very hard to implement if not impossible. In this case, we may wish to trade error performance for the reduction in decoding complexity. Sub-optimum soft-decision decoding of a linear block code based on a low-weight sub-trellis can be devised to provide an effective trade-off between error performance and decoding complexity. This chapter presents such a suboptimal decoding algorithm for linear block codes. This decoding algorithm is iterative in nature and based on an optimality test. It has the following important features: (1) a simple method to generate a sequence of candidate code-words, one at a time, for test; (2) a sufficient condition for testing a candidate code-word for optimality; and (3) a low-weight sub-trellis search for finding the most likely (ML) code-word.
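
    For orientation, brute-force soft-decision ML decoding of a small code can be written directly; the chapter's algorithm obtains the same decision for long codes by testing a short sequence of candidates from a low-weight sub-trellis, with an optimality test to stop early. The [7,4] Hamming generator below is standard; the received vector is made up.

      import numpy as np
      from itertools import product

      # systematic generator matrix of the [7,4] Hamming code
      G = np.array([[1, 0, 0, 0, 1, 1, 0],
                    [0, 1, 0, 0, 1, 0, 1],
                    [0, 0, 1, 0, 0, 1, 1],
                    [0, 0, 0, 1, 1, 1, 1]])
      codebook = np.array([(np.array(m) @ G) % 2 for m in product([0, 1], repeat=4)])

      def ml_decode(r):
          """Soft-decision ML decoding over BPSK (bit 0 -> +1, bit 1 -> -1):
          return the codeword whose modulated form correlates best with r."""
          return codebook[np.argmax((1 - 2 * codebook) @ r)]

      r = np.array([0.9, 1.1, -0.8, 1.2, 0.1, -0.9, -1.0])   # noisy channel output
      print(ml_decode(r))    # -> [0 0 1 0 0 1 1]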

  5. Experimental Analysis of Steel Beams Subjected to Fire Enhanced by Brillouin Scattering-Based Fiber Optic Sensor Data

    PubMed Central

    Bao, Yi; Chen, Yizheng; Hoehler, Matthew S.; Smith, Christopher M.; Bundy, Matthew; Chen, Genda

    2016-01-01

    This paper presents high temperature measurements using a Brillouin scattering-based fiber optic sensor and the application of the measured temperatures and building code recommended material parameters into enhanced thermomechanical analysis of simply supported steel beams subjected to combined thermal and mechanical loading. The distributed temperature sensor captures detailed, nonuniform temperature distributions that are compared locally with thermocouple measurements with less than 4.7% average difference at 95% confidence level. The simulated strains and deflections are validated using measurements from a second distributed fiber optic (strain) sensor and two linear potentiometers, respectively. The results demonstrate that the temperature-dependent material properties specified in the four investigated building codes lead to strain predictions with less than 13% average error at 95% confidence level and that the European building code provided the best predictions. However, its implicit consideration of creep is insufficient when the beam temperature exceeds 800 °C. PMID:28239230

  6. Characteristics of a Ka-band third-harmonic peniotron driven by a high-quality linear axis-encircling electron beam

    NASA Astrophysics Data System (ADS)

    Zhao, Xiaoyun; Tuo, Xianguo; Ge, Qing; Peng, Ying

    2017-12-01

    We employ a high-quality linear axis-encircling electron beam generated by a Cuccia coupler to drive a Ka-band third-harmonic peniotron and develop a self-consistent nonlinear calculation code to numerically analyze the characteristics of the designed peniotron. It is demonstrated that, through a Cuccia coupler, a 6 kV, 0.5 A pencil beam and an input microwave power of 16 kW at 10 GHz can generate a 37 kV, 0.5 A linear axis-encircling beam characterized by a very low velocity spread. Moreover, the electron beam guiding center deviation can be adjusted easily. Driven by such a beam, a 30 GHz, Ka-band third-harmonic peniotron is predicted to achieve a conversion efficiency of 51.0% and a microwave output power of 9.44 kW; the results are in good agreement with the Magic3D simulation. Using this code, we studied the factors influencing the peniotron performance; the results can provide some guidelines for the design of a Ka-band third-harmonic peniotron driven by a linear electron beam and can promote the practical application of high-harmonic peniotrons.

  7. MHD Simulations of Plasma Dynamics with Non-Axisymmetric Boundaries

    NASA Astrophysics Data System (ADS)

    Hansen, Chris; Levesque, Jeffrey; Morgan, Kyle; Jarboe, Thomas

    2015-11-01

    The arbitrary geometry, 3D extended MHD code PSI-TET is applied to linear and non-linear simulations of MCF plasmas with non-axisymmetric boundaries. Progress and results from simulations on two experiments will be presented: 1) Detailed validation studies of the HIT-SI experiment with self-consistent modeling of plasma dynamics in the helicity injectors. Results will be compared to experimental data and NIMROD simulations that model the effect of the helicity injectors through boundary conditions on an axisymmetric domain. 2) Linear studies of HBT-EP with different wall configurations focusing on toroidal asymmetries in the adjustable conducting wall. HBT-EP studies the effect of active/passive stabilization with an adjustable ferritic wall. Results from linear verification and benchmark studies of ideal mode growth with and without toroidal asymmetries will be presented and compared to DCON predictions. Simulations of detailed experimental geometries are enabled by use of the PSI-TET code, which employs a high order finite element method on unstructured tetrahedral grids that are generated directly from CAD models. Further development of PSI-TET will also be presented including work to support resistive wall regions within extended MHD simulations. Work supported by DoE.

  8. Maintaining a Critical Spectra within Monteburns for a Gas-Cooled Reactor Array by Way of Control Rod Manipulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adigun, Babatunde John; Fensin, Michael Lorne; Galloway, Jack D.

    Our burnup study examined the effect of a predicted critical control rod position on the nuclide predictability of several axial and radial locations within a 4×4 graphite moderated gas cooled reactor fuel cluster geometry. To achieve this, a control rod position estimator (CRPE) tool was developed within the framework of the linkage code Monteburns between the transport code MCNP and depletion code CINDER90, and four methodologies were proposed within the tool for maintaining criticality. Two of the proposed methods used an inverse multiplication approach, where the amount of fissile material in a set configuration is slowly altered until criticality is attained, to estimate the critical control rod position. Another method carried out several MCNP criticality calculations at different control rod positions, then used a linear fit to estimate the critical rod position. The final method used a second-order polynomial fit of several MCNP criticality calculations at different control rod positions to estimate the critical rod position. The results showed that the methods within the CRPE tool that predicted the critical position consistently well also agreed with one another in their predictions of power densities as well as uranium and plutonium isotopics. Finally, while the CRPE tool is currently limited to manipulating a single control rod, future work could be geared toward implementing additional criticality search methodologies along with additional features.
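
    A sketch of the two fit-based methods just described, with made-up k_eff values standing in for the MCNP criticality results:

      import numpy as np

      positions = np.array([0.0, 25.0, 50.0, 75.0])     # rod withdrawal, cm (hypothetical)
      k_eff = np.array([0.985, 0.995, 1.004, 1.012])    # MCNP results (hypothetical)

      # linear-fit method: k(z) = a*z + b, solve for k = 1
      a, b = np.polyfit(positions, k_eff, 1)
      print("linear-fit critical position:", (1.0 - b) / a, "cm")

      # second-order polynomial method: pick the physically sensible root
      c2, c1, c0 = np.polyfit(positions, k_eff, 2)
      print("quadratic-fit candidates:", np.roots([c2, c1, c0 - 1.0]))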

  9. Property Changes in Aqueous Solutions due to Surfactant Treatment of PCE: Implications to Geophysical Measurements

    NASA Astrophysics Data System (ADS)

    Werkema, D. D.

    2007-12-01

    Select physicochemical properties of aqueous solutions composed of surfactants, dye, and perchloroethylene (PCE) were evaluated through a response surface quadratic design model of experiment. Nine surfactants, which are conventionally used in the remediation of PCE, were evaluated with varying concentrations of PCE and indicator dyes in aqueous solutions. Two hundred forty experiments were performed using PCE as a numerical factor (coded A) from 0 to 200 parts per million (ppm), dye type (coded B) as a 3-level categorical factor, and surfactant type (coded C) as a 10-level categorical factor. Five responses were measured: temperature (°C), pH, conductivity (μS/cm), dissolved oxygen (DO, mg/L), and density (g/mL). Diagnostics proved a normally distributed predictable response for all measured responses except pH. The Box-Cox plot for transforms recommended a power transform for the conductivity response with lambda (λ) = 0.50, and for the DO response, λ =2.2. The overall mean of the temperature response proved to be a better predictor than the linear model. The conductivity response is best fitted with a linear model using significant coded terms B and C. Both DO and density also showed a linear model with coded terms A, B, and C for DO; and terms A and C for density. Some of the surfactant treatments of PCE significantly alter the conductivity, DO, and density of the aqueous solution. However, the magnitude of the density response is so small that it does not exceed the instrument tolerance. Results for the conductivity and DO responses provide predictive models for the surfactant treatment of PCE and may be useful in determining the potential for geophysically monitoring surfactant enhanced aquifer remediation (SEAR) of PCE. As the aqueous physicochemical properties change due to surfactant remediation efforts, so will the properties of the subsurface pore water which are influential factors in geophysical measurements. Geoelectrical methods are potentially the best suited to measure SEAR alterations in the subsurface because the conductivity of the pore fluid has the largest relative change. This research has provided predictive models for alterations in the physicochemical properties of the pore fluid to SEAR of PCE. Future investigations should address the contribution of the solid matrix in the subsurface and the solid-fluid interaction during SEAR of PCE contamination. Notice: Although this work was reviewed by EPA and approved for publication, it may not necessarily reflect official Agency policy. Mention of trade names or commercial products does not constitute endorsement or recommendation by EPA for use.

  10. Development and validation of a low-frequency modeling code for high-moment transmitter rod antennas

    NASA Astrophysics Data System (ADS)

    Jordan, Jared Williams; Sternberg, Ben K.; Dvorak, Steven L.

    2009-12-01

    The goal of this research is to develop and validate a low-frequency modeling code for high-moment transmitter rod antennas to aid in the design of future low-frequency TX antennas with high magnetic moments. To accomplish this goal, a quasi-static modeling algorithm was developed to simulate finite-length, permeable-core, rod antennas. This quasi-static analysis is applicable for low frequencies where eddy currents are negligible, and it can handle solid or hollow cores with winding insulation thickness between the antenna's windings and its core. The theory was programmed in Matlab, and the modeling code has the ability to predict the TX antenna's gain, maximum magnetic moment, saturation current, series inductance, and core series loss resistance, provided the user enters the corresponding complex permeability for the desired core magnetic flux density. In order to utilize the linear modeling code to model the effects of nonlinear core materials, it is necessary to use the correct complex permeability for a specific core magnetic flux density. In order to test the modeling code, we demonstrated that it can accurately predict changes in the electrical parameters associated with variations in the rod length and the core thickness for antennas made out of low carbon steel wire. These tests demonstrate that the modeling code was successful in predicting the changes in the rod antenna characteristics under high-current nonlinear conditions due to changes in the physical dimensions of the rod provided that the flux density in the core was held constant in order to keep the complex permeability from changing.

  11. Field Validation of the Stability Limit of a Multi MW Turbine

    NASA Astrophysics Data System (ADS)

    Kallesøe, Bjarne S.; Kragh, Knud A.

    2016-09-01

    Long slender blades of modern multi-megawatt turbines exhibit a flutter-like instability at rotor speeds above a critical rotor speed. Knowing the critical rotor speed is crucial to a safe turbine design. The flutter-like instability can only be estimated using geometrically non-linear aeroelastic codes. In this study, the estimated rotor speed stability limit of a 7 MW state-of-the-art wind turbine is validated experimentally. The stability limit is estimated using Siemens Wind Power's in-house aeroelastic code, and the results show that the predicted stability limit is within 5% of the experimentally observed limit.

  12. Review and Implementation of the Emerging CCSDS Recommended Standard for Multispectral and Hyperspectral Lossless Image Coding

    NASA Technical Reports Server (NTRS)

    Sanchez, Jose Enrique; Auge, Estanislau; Santalo, Josep; Blanes, Ian; Serra-Sagrista, Joan; Kiely, Aaron

    2011-01-01

    A new standard for image coding is being developed by the MHDC working group of the CCSDS, targeting onboard compression of multi- and hyper-spectral imagery captured by aircraft and satellites. The proposed standard is based on the "Fast Lossless" adaptive linear predictive compressor, and is adapted to better overcome issues of onboard scenarios. In this paper, we present a review of the state of the art in this field, and provide an experimental comparison of the coding performance of the emerging standard in relation to other state-of-the-art coding techniques. Our own independent implementation of the MHDC Recommended Standard, as well as of some of the other techniques, has been used to provide extensive results over the vast corpus of test images from the CCSDS-MHDC.
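
    The "Fast Lossless" predictor itself is specified in the CCSDS documents; as a toy illustration of the adaptive-linear-prediction idea only, the following sign-sign LMS predictor produces integer residuals from which the signal is exactly recoverable (an entropy coder would then compress the residuals). The 1-D setting and all parameters are simplifications; the standard adapts 3-D per-band predictors.

      import numpy as np

      def adaptive_residuals(x, order=3, mu=0.01):
          """Sign-sign LMS adaptive linear prediction of a 1-D integer signal.
          The decoder can rerun the identical recursion on the residuals,
          so the mapping is lossless."""
          w = np.zeros(order)                 # adaptive predictor weights
          hist = np.zeros(order)              # most recent samples, newest first
          res = np.empty(len(x), dtype=np.int64)
          for n, sample in enumerate(x):
              pred = int(round(w @ hist))
              res[n] = sample - pred
              w += mu * np.sign(res[n]) * np.sign(hist)   # sign-sign LMS update
              hist = np.roll(hist, 1); hist[0] = sample
          return res

      rng = np.random.default_rng(0)
      x = np.cumsum(rng.integers(-3, 4, size=1000))       # toy correlated signal
      print("signal var:", x.var(), " residual var:", adaptive_residuals(x).var())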

  13. Higher-order harmonics coupling in different free-electron laser codes

    NASA Astrophysics Data System (ADS)

    Giannessi, L.; Freund, H. P.; Musumeci, P.; Reiche, S.

    2008-08-01

    The capability for simulating the dynamics of a free-electron laser, including the higher-order harmonics in linear undulators, exists in several codes such as MEDUSA [H.P. Freund, S.G. Biedron, and S.V. Milton, IEEE J. Quantum Electron. 27 (2000) 243; H.P. Freund, Phys. Rev. ST-AB 8 (2005) 110701] and PERSEO [L. Giannessi, Overview of Perseo, a system for simulating FEL dynamics in Mathcad, <http://www.perseo.enea.it>, in: Proceedings of the FEL 2006 Conference, BESSY, Berlin, Germany, 2006, p. 91, <http://www.jacow.org>], and has recently been implemented in GENESIS 1.3. MEDUSA and GENESIS also include the dynamics of even harmonics induced by the coupling through the betatron motion. In addition, MEDUSA, which is based on a non-wiggler-averaged model, is capable of simulating the generation of even harmonics in the transversally cold beam regime, i.e. when the even-harmonic coupling arises from non-linear effects associated with longitudinal particle dynamics rather than from a finite beam emittance. In this paper a comparison between the predictions of the codes in different conditions is given.

  14. Application of Fast Multipole Methods to the NASA Fast Scattering Code

    NASA Technical Reports Server (NTRS)

    Dunn, Mark H.; Tinetti, Ana F.

    2008-01-01

    The NASA Fast Scattering Code (FSC) is a versatile noise prediction program designed to conduct aeroacoustic noise reduction studies. The equivalent source method is used to solve an exterior Helmholtz boundary value problem with an impedance type boundary condition. The solution process in FSC v2.0 requires direct manipulation of a large, dense system of linear equations, limiting the applicability of the code to small scales and/or moderate excitation frequencies. Recent advances in the use of Fast Multipole Methods (FMM) for solving scattering problems, coupled with sparse linear algebra techniques, suggest that a substantial reduction in computer resource utilization over conventional solution approaches can be obtained. Implementation of the single level FMM (SLFMM) and a variant of the Conjugate Gradient Method (CGM) into the FSC is discussed in this paper. The culmination of this effort, FSC v3.0, was used to generate solutions for three configurations of interest. Benchmarking against previously obtained simulations indicates that a twenty-fold reduction in computational memory and up to a four-fold reduction in computer time have been achieved on a single processor.
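
    The computational pattern the abstract describes, replacing the direct dense solve with an iterative Krylov method wrapped around a fast matrix-vector product, looks schematically like this (a GMRES solve and a dense stand-in matvec here; in the FSC the matvec would be the single-level FMM evaluation and the solver a conjugate-gradient variant):

      import numpy as np
      from scipy.sparse.linalg import LinearOperator, gmres

      n = 2000
      A = np.eye(n) + 0.01 * np.random.default_rng(1).standard_normal((n, n))

      def matvec(x):
          # stand-in for an O(n log n) FMM far-field evaluation
          return A @ x

      op = LinearOperator((n, n), matvec=matvec, dtype=float)
      x, info = gmres(op, np.ones(n))     # no factorization of A is ever formed
      print("converged" if info == 0 else f"gmres info = {info}")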

  15. Validation of a computer code for analysis of subsonic aerodynamic performance of wings with flaps in combination with a canard or horizontal tail and an application to optimization

    NASA Technical Reports Server (NTRS)

    Carlson, Harry W.; Darden, Christine M.; Mann, Michael J.

    1990-01-01

    Extensive correlations of computer code results with experimental data are employed to illustrate the use of a linearized theory, attached flow method for the estimation and optimization of the longitudinal aerodynamic performance of wing-canard and wing-horizontal tail configurations which may employ simple hinged flap systems. Use of an attached flow method is based on the premise that high levels of aerodynamic efficiency require a flow that is as nearly attached as circumstances permit. The results indicate that linearized theory, attached flow, computer code methods (modified to include estimated attainable leading-edge thrust and an approximate representation of vortex forces) provide a rational basis for the estimation and optimization of aerodynamic performance at subsonic speeds below the drag rise Mach number. Generally, good prediction of aerodynamic performance, as measured by the suction parameter, can be expected for near optimum combinations of canard or horizontal tail incidence and leading- and trailing-edge flap deflections at a given lift coefficient (conditions which tend to produce a predominantly attached flow).

  16. VLF Trimpi modelling on the path NWC-Dunedin using both finite element and 3D Born modelling

    NASA Astrophysics Data System (ADS)

    Nunn, D.; Hayakawa, K. B. M.

    1998-10-01

    This paper investigates the numerical modelling of VLF Trimpis, produced by a D region inhomogeneity on the great circle path. Two different codes are used to model Trimpis on the path NWC-Dunedin. The first is a 2D Finite Element Method Code (FEM), whose solutions are rigorous and valid in the strong scattering or non-Born limit. The second code is a 3D model that invokes the Born approximation. The predicted Trimpis from these codes compare very closely, thus confirming the validity of both models. The modal scattering matrices for both codes are analysed in some detail and are found to have a comparable structure. They indicate strong scattering between the dominant TM modes. Analysis of the scattering matrix from the FEM code shows that departure from linear Born behaviour occurs when the inhomogeneity has a horizontal scale size of about 100 km and a maximum electron density enhancement at 75 km altitude of about 6 electrons.

  17. X-33 XRS-2200 Linear Aerospike Engine Sea Level Plume Radiation

    NASA Technical Reports Server (NTRS)

    D'Agostino, Mark G.; Lee, Young C.; Wang, Ten-See; Turner, Jim (Technical Monitor)

    2001-01-01

    Wide band plume radiation data were collected during ten sea level tests of a single XRS-2200 engine at the NASA Stennis Space Center in 1999 and 2000. The XRS-2200 is a liquid hydrogen/liquid oxygen fueled, gas generator cycle linear aerospike engine which develops 204,420 lbf thrust at sea level. Instrumentation consisted of six hemispherical radiometers and one narrow view radiometer. Test conditions varied from 100% to 57% power level (PL) and 6.0 to 4.5 oxidizer to fuel (O/F) ratio. Measured radiation rates generally increased with engine chamber pressure and mixture ratio. One hundred percent power level radiation data were compared to predictions made with the FDNS and GASRAD codes. Predicted levels ranged from 42% over to 7% under average test values.

  18. Getting more from accuracy and response time data: methods for fitting the linear ballistic accumulator.

    PubMed

    Donkin, Chris; Averell, Lee; Brown, Scott; Heathcote, Andrew

    2009-11-01

    Cognitive models of the decision process provide greater insight into response time and accuracy than do standard ANOVA techniques. However, such models can be mathematically and computationally difficult to apply. We provide instructions and computer code for three methods for estimating the parameters of the linear ballistic accumulator (LBA), a new and computationally tractable model of decisions between two or more choices. These methods (a Microsoft Excel worksheet, scripts for the statistical program R, and code for implementation of the LBA in the Bayesian sampling software WinBUGS) vary in their flexibility and user accessibility. We also provide scripts in R that produce a graphical summary of the data and model predictions. In a simulation study, we explored the effect of sample size on parameter recovery for each method. The materials discussed in this article may be downloaded as a supplement from http://brm.psychonomic-journals.org/content/supplemental.
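
    The article's fitting code is distributed in R, Excel, and WinBUGS form; as a language-neutral illustration of the model itself, here is a short Python simulation of the LBA under its standard parameterization (uniform start points on [0, A], trial-to-trial normal drift rates, threshold b, nondecision time t0). All parameter values are illustrative.

      import numpy as np

      def simulate_lba(n, v, A=0.5, b=1.0, s=0.3, t0=0.2, seed=0):
          """Simulate n trials of an LBA race; v holds one mean drift rate per
          response accumulator. Trials where no accumulator finishes
          (all sampled drifts negative) are dropped."""
          rng = np.random.default_rng(seed)
          v = np.asarray(v, dtype=float)
          start = rng.uniform(0.0, A, size=(n, v.size))
          drift = rng.normal(v, s, size=(n, v.size))
          t = np.where(drift > 0, (b - start) / drift, np.nan)
          ok = ~np.isnan(t).all(axis=1)       # drop rare never-finishing trials
          t = t[ok]
          return np.nanargmin(t, axis=1), t0 + np.nanmin(t, axis=1)

      choice, rt = simulate_lba(10_000, v=[1.0, 0.7])
      print("P(response 0):", (choice == 0).mean(), " mean RT:", rt.mean())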

  19. Effects of Changing Jaw Height on F1 during Babble: A Case Study at 9 Months

    ERIC Educational Resources Information Center

    Steeve, Roger W.

    2012-01-01

    An empirical gap exists in our understanding of the extent to which mandibular kinematics modulate acoustic changes in natural babble productions of infants. Data were recorded from a normally developing 9-month-old infant. Mandibular position was tracked from the infant during vowel and canonical babble. Linear predictive coding analysis was used to…
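
    The record above is truncated, but the analysis technique it names is standard; the sketch below estimates formant frequencies from a windowed speech frame via autocorrelation-method LPC (Levinson-Durbin recursion) and the roots of the prediction polynomial. The frame, sampling rate, and bandwidth threshold are assumptions of the illustration.

      import numpy as np

      def lpc(frame, order):
          """Autocorrelation-method LPC via the Levinson-Durbin recursion;
          returns the prediction-error filter A(z) = 1 + a1 z^-1 + ... + ap z^-p."""
          r = np.correlate(frame, frame, "full")[len(frame) - 1:][:order + 1]
          a = np.zeros(order + 1); a[0] = 1.0
          err = r[0]
          for i in range(1, order + 1):
              k = -(r[i] + a[1:i] @ r[i - 1:0:-1]) / err
              a[1:i + 1] = a[1:i + 1] + k * a[i - 1::-1]
              err *= 1.0 - k * k
          return a

      def formants(frame, fs, order=10):
          """Formants as the angles of the sharp complex roots of A(z)."""
          roots = np.roots(lpc(frame * np.hamming(len(frame)), order))
          roots = roots[roots.imag > 1e-6]
          freqs = np.angle(roots) * fs / (2 * np.pi)
          bw = -np.log(np.abs(roots)) * fs / np.pi      # root bandwidths
          return np.sort(freqs[bw < 400.0])             # keep sharp resonances

      # e.g. formants(frame, fs=10_000) on a ~25 ms voiced frame yields F1, F2, ...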

  20. Local Laplacian Coding From Theoretical Analysis of Local Coding Schemes for Locally Linear Classification.

    PubMed

    Pang, Junbiao; Qin, Lei; Zhang, Chunjie; Zhang, Weigang; Huang, Qingming; Yin, Baocai

    2015-12-01

    Local coordinate coding (LCC) is a framework to approximate a Lipschitz smooth function by combining linear functions into a nonlinear one. For locally linear classification, LCC requires a coding scheme that heavily determines the nonlinear approximation ability, posing two main challenges: 1) locality, making faraway anchors have smaller influence on the current data point, and 2) flexibility, balancing between the reconstruction of the current data point and the locality. In this paper, we address the problem through a theoretical analysis of the simplest local coding schemes, i.e., local Gaussian coding and local student coding, and propose local Laplacian coding (LPC) to achieve both locality and flexibility. We apply LPC to locally linear classifiers to solve diverse classification tasks. Performance comparable to or exceeding that of state-of-the-art methods demonstrates the effectiveness of the proposed method.
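
    A sketch of the simplest scheme the paper analyzes, local Gaussian coding: weights decay with a Gaussian of the distance to each anchor, so faraway anchors get little mass. Swapping the Gaussian for exp(-d/sigma) would give a Laplacian-kernel variant in the spirit of the proposed LPC (the paper's exact form may differ). The anchors, sigma, and test point are illustrative.

      import numpy as np

      def local_gaussian_code(x, anchors, sigma=0.5):
          """Normalized Gaussian-kernel weights of x against the anchors."""
          d2 = ((anchors - x) ** 2).sum(axis=1)
          w = np.exp(-d2 / (2.0 * sigma ** 2))
          return w / w.sum()

      anchors = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
      x = np.array([0.2, 0.1])
      w = local_gaussian_code(x, anchors)
      print("weights:", np.round(w, 3), " reconstruction:", w @ anchors)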

  1. Palindromic Genes in the Linear Mitochondrial Genome of the Nonphotosynthetic Green Alga Polytomella magna

    PubMed Central

    Smith, David Roy; Hua, Jimeng; Archibald, John M.; Lee, Robert W.

    2013-01-01

    Organelle DNA is no stranger to palindromic repeats. But never has a mitochondrial or plastid genome been described in which every coding region is part of a distinct palindromic unit. While sequencing the mitochondrial DNA of the nonphotosynthetic green alga Polytomella magna, we uncovered precisely this type of genic arrangement. The P. magna mitochondrial genome is linear and made up entirely of palindromes, each containing 1–7 unique coding regions. Consequently, every gene in the genome is duplicated and in an inverted orientation relative to its partner. And when these palindromic genes are folded into putative stem-loops, their predicted translational start sites are often positioned in the apex of the loop. Gel electrophoresis results support the linear, 28-kb monomeric conformation of the P. magna mitochondrial genome. Analyses of other Polytomella taxa suggest that palindromic mitochondrial genes were present in the ancestor of the Polytomella lineage and lost or retained to various degrees in extant species. The possible origins and consequences of this bizarre genomic architecture are discussed. PMID:23940100

  2. Molecular cancer classification using a meta-sample-based regularized robust coding method.

    PubMed

    Wang, Shu-Lin; Sun, Liuchao; Fang, Jianwen

    2014-01-01

    Previous studies have demonstrated that machine learning based molecular cancer classification using gene expression profiling (GEP) data is promising for the clinical diagnosis and treatment of cancer. Novel classification methods with high efficiency and prediction accuracy are still needed to deal with the high dimensionality and small sample size of typical GEP data. Recently the sparse representation (SR) method has been successfully applied to cancer classification. Nevertheless, its efficiency needs to be improved when analyzing large-scale GEP data. In this paper we present meta-sample-based regularized robust coding classification (MRRCC), a novel effective cancer classification technique that combines the idea of the meta-sample-based cluster method with the regularized robust coding (RRC) method. It assumes that the coding residual and the coding coefficient are respectively independent and identically distributed. Similar to meta-sample-based SR classification (MSRC), MRRCC extracts a set of meta-samples from the training samples, and then encodes a testing sample as a sparse linear combination of these meta-samples. The representation fidelity is measured by the l2-norm or l1-norm of the coding residual. Extensive experiments on publicly available GEP datasets demonstrate that the proposed method is more efficient while its prediction accuracy is equivalent to existing MSRC-based methods and better than other state-of-the-art dimension-reduction based methods.
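
    A much-simplified sketch of the classification scheme: extract meta-samples per class (here via truncated SVD rather than the clustering the authors use), code the test sample against each class's meta-samples with an l2-regularized fit, and assign the class with the smallest coding residual. The shapes, regularizer, and SVD shortcut are assumptions of the illustration.

      import numpy as np

      def metasamples(X, k):
          """Top-k left singular vectors of a (genes x samples) class matrix."""
          u, _, _ = np.linalg.svd(X, full_matrices=False)
          return u[:, :k]

      def classify(y, class_mats, k=5, lam=0.1):
          """Assign y to the class whose meta-samples code it with the
          smallest l2 residual (ridge-regularized coding)."""
          best, label = np.inf, None
          for c, X in class_mats.items():
              M = metasamples(X, k)
              a = np.linalg.solve(M.T @ M + lam * np.eye(M.shape[1]), M.T @ y)
              resid = np.linalg.norm(y - M @ a)     # l2 representation fidelity
              if resid < best:
                  best, label = resid, c
          return label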

  3. Discrete coding of stimulus value, reward expectation, and reward prediction error in the dorsal striatum.

    PubMed

    Oyama, Kei; Tateyama, Yukina; Hernádi, István; Tobler, Philippe N; Iijima, Toshio; Tsutsui, Ken-Ichiro

    2015-11-01

    To investigate how the striatum integrates sensory information with reward information for behavioral guidance, we recorded single-unit activity in the dorsal striatum of head-fixed rats participating in a probabilistic Pavlovian conditioning task with auditory conditioned stimuli (CSs) in which reward probability was fixed for each CS but parametrically varied across CSs. We found that the activity of many neurons was linearly correlated with the reward probability indicated by the CSs. The recorded neurons could be classified according to their firing patterns into functional subtypes coding reward probability in different forms such as stimulus value, reward expectation, and reward prediction error. These results suggest that several functional subgroups of dorsal striatal neurons represent different kinds of information formed through extensive prior exposure to CS-reward contingencies. Copyright © 2015 the American Physiological Society.

  4. Discrete coding of stimulus value, reward expectation, and reward prediction error in the dorsal striatum

    PubMed Central

    Oyama, Kei; Tateyama, Yukina; Hernádi, István; Tobler, Philippe N.; Iijima, Toshio

    2015-01-01

    To investigate how the striatum integrates sensory information with reward information for behavioral guidance, we recorded single-unit activity in the dorsal striatum of head-fixed rats participating in a probabilistic Pavlovian conditioning task with auditory conditioned stimuli (CSs) in which reward probability was fixed for each CS but parametrically varied across CSs. We found that the activity of many neurons was linearly correlated with the reward probability indicated by the CSs. The recorded neurons could be classified according to their firing patterns into functional subtypes coding reward probability in different forms such as stimulus value, reward expectation, and reward prediction error. These results suggest that several functional subgroups of dorsal striatal neurons represent different kinds of information formed through extensive prior exposure to CS-reward contingencies. PMID:26378201

  5. SU-G-JeP3-09: Tumor Location Prediction Using Natural Respiratory Volume for Respiratory Gated Radiation Therapy (RGRT): System Verification Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, M; Jung, J; Yoon, D

    Purpose: Respiratory gated radiation therapy (RGRT) gives accurate results when a patient's breathing is stable and regular. Thus, the patient should be fully aware during respiratory pattern training before undergoing the RGRT treatment. In order to bypass the process of respiratory pattern training, we propose a target location prediction system for RGRT that uses only natural respiratory volume, and confirm its application. Methods: In order to verify the proposed target location prediction system, an in-house phantom set was used. This set involves a chest phantom including target, external markers, and motion generator. Natural respiratory volume signals were generated using the random function in MATLAB. In the chest phantom, the target takes a linear motion based on the respiratory signal. After a four-dimensional computed tomography (4DCT) scan of the in-house phantom, the motion trajectory was derived as a linear equation. The accuracy of the linear equation was compared with that of the motion algorithm used by the operating motion generator. In addition, we attempted target location prediction using random respiratory volume values. Results: The correspondence rate of the linear equation derived from the 4DCT images with the motion algorithm of the motion generator was 99.41%. In addition, the average error rate of target location prediction was 1.23% for 26 cases. Conclusion: We confirmed the applicability of our proposed target location prediction system for RGRT using natural respiratory volume. If additional clinical studies can be conducted, a more accurate prediction system can be realized without requiring respiratory pattern training.

  6. Development of a Solid Rocket Propellant Nonlinear Constitutive Theory

    DTIC Science & Technology

    1975-05-01

    [Garbled OCR front matter; recoverable fragments: "Section 6 - Task IV - Finite Element Code Demonstration" and figure captions "Comparison of Linear Viscoelastic Predictions and Experimental Data for Solithane 113".]

  7. Verification of the ideal magnetohydrodynamic response at rational surfaces in the VMEC code

    DOE PAGES

    Lazerson, Samuel A.; Loizu, Joaquim; Hirshman, Steven; ...

    2016-01-13

    The VMEC nonlinear ideal MHD equilibrium code [S. P. Hirshman and J. C. Whitson, Phys. Fluids 26, 3553 (1983)] is compared against analytic linear ideal MHD theory in a screw-pinch-like configuration. The focus of such analysis is to verify the ideal MHD response at magnetic surfaces which possess rotational transform (ι) resonant with spectral values of the perturbed boundary harmonics. A large aspect ratio circular cross section zero-beta equilibrium is considered. This equilibrium possesses a rational surface with safety factor q = 2 at a normalized flux value of 0.5. A small resonant boundary perturbation is introduced, exciting a response at the resonant rational surface. The code is found to capture the plasma response as predicted by a newly developed analytic theory that ensures the existence of nested flux surfaces by allowing for a jump in rotational transform (ι = 1/q). The VMEC code satisfactorily reproduces these theoretical results without the necessity of an explicit transform discontinuity (Δι) at the rational surface. It is found that the response across the rational surfaces depends upon both radial grid resolution and local shear (dι/dΦ, where ι is the rotational transform and Φ the enclosed toroidal flux). Calculations of an implicit Δι suggest that it does not arise due to numerical artifacts (attributed to radial finite differences in VMEC) or existence conditions for flux surfaces as predicted by linear theory (minimum values of Δι). Scans of the rotational transform profile indicate that for experimentally relevant levels of transform shear the response becomes increasingly localised. Furthermore, careful examination of a large experimental tokamak equilibrium, with applied resonant fields, indicates that this shielding response is present, suggesting the phenomenon is not limited to this verification exercise.

  8. Verification of GENE and GYRO with L-mode and I-mode plasmas in Alcator C-Mod

    NASA Astrophysics Data System (ADS)

    Mikkelsen, D. R.; Howard, N. T.; White, A. E.; Creely, A. J.

    2018-04-01

    Verification comparisons are carried out for L-mode and I-mode plasma conditions in Alcator C-Mod. We compare linear and nonlinear ion-scale calculations by the gyrokinetic codes GENE and GYRO to each other and to the experimental power balance analysis. The two gyrokinetic codes' linear growth rates and real frequencies are in good agreement throughout all the ion temperature gradient mode branches and most of the trapped electron mode branches of the kyρs spectra at r/a = 0.65, 0.7, and 0.8. The shapes of the toroidal mode spectra of heat fluxes in nonlinear simulations are very similar for kyρs ≤ 0.5, but in most cases GENE has a relatively higher heat flux than GYRO at higher mode numbers. The ratio of ion to electron heat flux is similar in the two codes' simulations, but the heat fluxes themselves do not agree in almost all cases. In the I-mode regime, GENE's heat fluxes are ~3 times those from GYRO, and they are ~60%-100% higher than GYRO in the L-mode conditions. The GYRO under-prediction of Qe is much reduced in GENE's L-mode simulations, and it is eliminated in the I-mode simulations. This largely improved agreement with the experimental electron heat flux is offset, however, by the large overshoot of GENE's ion heat fluxes, which are 2-3 times the experimental level, and its electron heat flux overshoot at r/a = 0.80 in the I-mode. Rotation effects can explain part of the difference between the two codes' predictions, but very significant differences remain in simulations without any rotation effects.

  9. Survey and analysis of research on supersonic drag-due-to-lift minimization with recommendations for wing design

    NASA Technical Reports Server (NTRS)

    Carlson, Harry W.; Mann, Michael J.

    1992-01-01

    A survey of research on drag-due-to-lift minimization at supersonic speeds, including a study of the effectiveness of current design and analysis methods was conducted. The results show that a linearized theory analysis with estimated attainable thrust and vortex force effects can predict with reasonable accuracy the lifting efficiency of flat wings. Significantly better wing performance can be achieved through the use of twist and camber. Although linearized theory methods tend to overestimate the amount of twist and camber required for a given application and provide an overly optimistic performance prediction, these deficiencies can be overcome by implementation of recently developed empirical corrections. Numerous examples of the correlation of experiment and theory are presented to demonstrate the applicability and limitations of linearized theory methods with and without empirical corrections. The use of an Euler code for the estimation of aerodynamic characteristics of a twisted and cambered wing and its application to design by iteration are discussed.

  10. Nonlinear to Linear Elastic Code Coupling in 2-D Axisymmetric Media.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Preston, Leiph

    Explosions within the earth nonlinearly deform the local media, but at typical seismological observation distances, the seismic waves can be considered linear. Although nonlinear algorithms can simulate explosions in the very near field well, these codes are computationally expensive and inaccurate at propagating these signals to great distances. A linearized wave propagation code, coupled to a nonlinear code, provides an efficient mechanism to both accurately simulate the explosion itself and to propagate these signals to distant receivers. To this end we have coupled Sandia's nonlinear simulation algorithm CTH to a linearized elastic wave propagation code for 2-D axisymmetric media (axiElasti) by passing information from the nonlinear to the linear code via time-varying boundary conditions. In this report, we first develop the 2-D axisymmetric elastic wave equations in cylindrical coordinates. Next we show how we design the time-varying boundary conditions passing information from CTH to axiElasti, and finally we demonstrate the coupling code via a simple study of the elastic radius.

  11. On entanglement-assisted quantum codes achieving the entanglement-assisted Griesmer bound

    NASA Astrophysics Data System (ADS)

    Li, Ruihu; Li, Xueliang; Guo, Luobin

    2015-12-01

    The theory of entanglement-assisted quantum error-correcting codes (EAQECCs) is a generalization of the standard stabilizer formalism. Any quaternary (or binary) linear code can be used to construct EAQECCs under the entanglement-assisted (EA) formalism. We derive an EA-Griesmer bound for linear EAQECCs, which is a quantum analog of the Griesmer bound for classical codes. This EA-Griesmer bound is tighter than known bounds for EAQECCs in the literature. For a given quaternary linear code C, we show that the parameters of the EAQECC that is EA-stabilized by the dual of C can be determined by a zero-radical quaternary code induced from C, and a necessary condition under which a linear EAQECC may achieve the EA-Griesmer bound is also presented. We construct four families of optimal EAQECCs and then show that the necessary condition for existence of EAQECCs is also sufficient for some low-dimensional linear EAQECCs. The four families of optimal EAQECCs are degenerate codes and go beyond earlier constructions. What is more, except for four codes, our [[n,k,d_{ea};c

  12. Investigation on the Capability of a Non Linear CFD Code to Simulate Wave Propagation

    DTIC Science & Technology

    2003-02-01

    de la Calzada, Pedro; Quintana, Pablo; Burgos, Manuel Antonio (ITP, S.A.)

    [Abstract fragment] "...mechanisms above presented, simulation of unsteady aerodynamics with linear and nonlinear CFD codes is an ongoing activity within the turbomachinery industry."

  13. Soft-decision decoding techniques for linear block codes and their error performance analysis

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1996-01-01

    The first paper presents a new minimum-weight trellis-based soft-decision iterative decoding algorithm for binary linear block codes. The second paper derives an upper bound on the probability of block error for multilevel concatenated codes (MLCC). The bound evaluates the difference in performance for different decompositions of some codes. The third paper investigates the bit error probability for maximum likelihood decoding of binary linear codes. The fourth and final paper included in this report concerns the construction of multilevel concatenated block modulation codes using a multilevel concatenation scheme for the frequency non-selective Rayleigh fading channel.

  14. Calculations of Helium Bubble Evolution in the PISCES Experiments with Cluster Dynamics

    NASA Astrophysics Data System (ADS)

    Blondel, Sophie; Younkin, Timothy; Wirth, Brian; Lasa, Ane; Green, David; Canik, John; Drobny, Jon; Curreli, Davide

    2017-10-01

    Plasma surface interactions in fusion tokamak reactors involve an inherently multiscale, highly non-equilibrium set of phenomena, for which current models are inadequate to predict the divertor response to and feedback on the plasma. In this presentation, we describe the latest code developments of Xolotl, a spatially-dependent reaction diffusion cluster dynamics code to simulate the divertor surface response to fusion-relevant plasma exposure. Xolotl is part of a code-coupling effort to model both plasma and material simultaneously; the first benchmark for this effort is the series of PISCES linear device experiments. We will discuss the processes leading to surface morphology changes, which further affect erosion, as well as how Xolotl has been updated in order to communicate with other codes. Furthermore, we will show results of the sub-surface evolution of helium bubbles in tungsten as well as the material surface displacement under these conditions.

  15. Computation of Large-Scale Structure Jet Noise Sources With Weak Nonlinear Effects Using Linear Euler

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D.; Hixon, Ray; Mankbadi, Reda R.

    2003-01-01

    An approximate technique is presented for the prediction of the large-scale turbulent structure sound source in a supersonic jet. A linearized Euler equations code is used to solve for the flow disturbances within and near a jet with a given mean flow. Assuming a normal mode composition for the wave-like disturbances, the linear radial profiles are used in an integration of the Navier-Stokes equations. This results in a set of ordinary differential equations representing the weakly nonlinear self-interactions of the modes along with their interaction with the mean flow. Solutions are then used to correct the amplitude of the disturbances that represent the source of large-scale turbulent structure sound in the jet.

  16. Guidelines for VCCT-Based Interlaminar Fatigue and Progressive Failure Finite Element Analysis

    NASA Technical Reports Server (NTRS)

    Deobald, Lyle R.; Mabson, Gerald E.; Engelstad, Steve; Prabhakar, M.; Gurvich, Mark; Seneviratne, Waruna; Perera, Shenal; O'Brien, T. Kevin; Murri, Gretchen; Ratcliffe, James; et al.

    2017-01-01

    This document is intended to detail the theoretical basis, equations, references and data that are necessary to enhance the functionality of commercially available Finite Element codes, with the objective of having functionality better suited for the aerospace industry in the area of composite structural analysis. The specific area of focus will be improvements to composite interlaminar fatigue and progressive interlaminar failure. Suggestions are biased towards codes that perform interlaminar Linear Elastic Fracture Mechanics (LEFM) using Virtual Crack Closure Technique (VCCT)-based algorithms [1,2]. All aspects of the science associated with composite interlaminar crack growth are not fully developed and the codes developed to predict this mode of failure must be programmed with sufficient flexibility to accommodate new functional relationships as the science matures.

  17. A recursive linear predictive vocoder

    NASA Astrophysics Data System (ADS)

    Janssen, W. A.

    1983-12-01

    A non-real-time 10-pole recursive autocorrelation linear predictive coding vocoder was created for use in studying effects of recursive autocorrelation on speech. The vocoder is composed of two interchangeable pitch detectors, a speech analyzer, and a speech synthesizer. The time between updates of the filter coefficients is allowed to vary from .125 msec to 20 msec. The best quality was found using .125 msec between each update. The greatest change in quality was noted when changing from 20 msec/update to 10 msec/update. Pitch period plots for the center-clipping autocorrelation pitch detector and the simplified inverse filtering technique are provided. Plots of speech into and out of the vocoder are given. Formant versus time three-dimensional plots are shown. Effects of noise on pitch detection and formants are shown. Noise affects the voiced/unvoiced decision process, causing voiced speech to be reconstructed as unvoiced.
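
    A block-wise sketch of the analysis half of such a vocoder, exposing the coefficient-update interval the study varied. This is not the report's sample-recursive autocorrelation scheme (which updates running correlations every sample, as .125 msec per update implies); it refits a 10-pole Yule-Walker model per block and measures the prediction-residual energy.

      import numpy as np
      from scipy.linalg import solve_toeplitz
      from scipy.signal import lfilter

      def lpc(frame, order=10):
          """Yule-Walker LPC: returns A(z) = 1 - sum a_k z^-k."""
          r = np.correlate(frame, frame, "full")[len(frame) - 1:][:order + 1]
          a = solve_toeplitz((r[:-1], r[:-1]), r[1:])
          return np.r_[1.0, -a]

      def residual_energy(x, fs, order=10, update_ms=10.0):
          """Refit the 10-pole model every update_ms and accumulate the
          prediction-residual energy; shorter update intervals track the
          vocal tract more closely and typically leave less residual."""
          hop = max(order + 1, int(fs * update_ms / 1000.0))
          e = 0.0
          for s0 in range(0, len(x) - hop, hop):
              frame = x[s0:s0 + hop]
              e += float(np.sum(lfilter(lpc(frame, order), [1.0], frame) ** 2))
          return e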

  18. Testing and Life Prediction for Composite Rotor Hub Flexbeams

    NASA Technical Reports Server (NTRS)

    Murri, Gretchen B.

    2004-01-01

    A summary of several studies of delamination in tapered composite laminates with internal ply-drops is presented. Initial studies used 2D FE models to calculate interlaminar stresses at the ply-ending locations in linear tapered laminates under tension loading. Strain energy release rates for delamination in these laminates indicated that delamination would likely start at the juncture of the tapered and thin regions and grow unstably in both directions. Tests of glass/epoxy and graphite/epoxy linear tapered laminates under axial tension delaminated as predicted. Nonlinear tapered specimens were cut from a full-size helicopter rotor hub and were tested under combined constant axial tension and cyclic transverse bending loading to simulate the loading experienced by a rotor hub flexbeam in flight. For all the tested specimens, delamination began at the tip of the outermost dropped ply group and grew first toward the tapered region. A 2D FE model was created that duplicated the test flexbeam layup, geometry, and loading. Surface strains calculated by the model agreed very closely with the measured surface strains in the specimens. The delamination patterns observed in the tests were simulated in the model by releasing pairs of MPCs along those interfaces. Strain energy release rates associated with the delamination growth were calculated for several configurations and using two different FE analysis codes. Calculations from the codes agreed very closely. The strain energy release rate results were used with material characterization data to predict fatigue delamination onset lives for nonlinear tapered flexbeams with two different ply-dropping schemes. The predicted curves agreed well with the test data for each case studied.

  19. Maintaining a Critical Spectra within Monteburns for a Gas-Cooled Reactor Array by Way of Control Rod Manipulation

    DOE PAGES

    Adigun, Babatunde John; Fensin, Michael Lorne; Galloway, Jack D.; ...

    2016-10-01

    Our burnup study examined the effect of a predicted critical control rod position on the nuclide predictability of several axial and radial locations within a 4×4 graphite moderated gas cooled reactor fuel cluster geometry. To achieve this, a control rod position estimator (CRPE) tool was developed within the framework of the linkage code Monteburns between the transport code MCNP and depletion code CINDER90, and four methodologies were proposed within the tool for maintaining criticality. Two of the proposed methods used an inverse multiplication approach, where the amount of fissile material in a set configuration is slowly altered until criticality is attained, to estimate the critical control rod position. Another method carried out several MCNP criticality calculations at different control rod positions, then used a linear fit to estimate the critical rod position. The final method used a second-order polynomial fit of several MCNP criticality calculations at different control rod positions to estimate the critical rod position. The results showed that the methods within the CRPE tool that predicted the critical position consistently well also agreed with one another in their predictions of power densities as well as uranium and plutonium isotopics. Finally, while the CRPE tool is currently limited to manipulating a single control rod, future work could be geared toward implementing additional criticality search methodologies along with additional features.

  20. Prediction of U-Mo dispersion nuclear fuels with Al-Si alloy using artificial neural network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Susmikanti, Mike, E-mail: mike@batan.go.id; Sulistyo, Jos, E-mail: soj@batan.go.id

    2014-09-30

    Dispersion nuclear fuels, consisting of U-Mo particles dispersed in an Al-Si matrix, are being developed as fuel for research reactors. The equilibrium relationship for a mixture component can be expressed in the phase diagram, and it is important to determine whether a mixture component is in the equilibrium phase or in another phase. For this purpose, a model of the phase diagram is needed, so that it can be determined whether the mixture component is in a stable or molten condition. The artificial neural network (ANN) is a modeling tool for processes involving multivariable non-linear relationships. The objective of the present work is to develop a code, based on artificial neural network models, for the equilibrium relationship of U-Mo in an Al-Si matrix. This model can be used to predict the type of resulting mixture, and whether a point lies in the equilibrium phase or in another phase region. The equilibrium data used for prediction and modeling were generated from experiments. An artificial neural network with the resilient backpropagation method was chosen to predict the dispersion of the nuclear fuel U-Mo in the Al-Si matrix. The code was built with several functions in MATLAB. For simulations using the ANN, the Levenberg-Marquardt method was also used for optimization. The artificial neural network is able to predict whether a point lies in the equilibrium phase or in another phase region, and the developed code was used to analyze the equilibrium relationship of U-Mo in the Al-Si matrix.
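
    As a schematic stand-in only (the work used MATLAB with resilient backpropagation and Levenberg-Marquardt optimization, and the real training data come from the experimental phase diagram), a small feed-forward classifier of stable vs. molten condition from composition and temperature might look like:

      import numpy as np
      from sklearn.neural_network import MLPClassifier

      # hypothetical (Mo fraction, Si fraction, temperature K) -> phase label
      X = np.array([[0.07, 0.02,  800.0], [0.07, 0.02, 1400.0],
                    [0.10, 0.05,  900.0], [0.10, 0.05, 1500.0]])
      y = np.array([0, 1, 0, 1])           # 0 = stable solid, 1 = melt

      clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
      clf.fit(X, y)
      print(clf.predict([[0.08, 0.03, 1200.0]]))   # phase region for a new mixture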

  1. NASA Lewis Stirling SPRE testing and analysis with reduced number of cooler tubes

    NASA Technical Reports Server (NTRS)

    Wong, Wayne A.; Cairelli, James E.; Swec, Diane M.; Doeberling, Thomas J.; Lakatos, Thomas F.; Madi, Frank J.

    1992-01-01

    Free-piston Stirling power converters are candidates for high capacity space power applications. The Space Power Research Engine (SPRE), a free-piston Stirling engine coupled with a linear alternator, is being tested at the NASA Lewis Research Center in support of the Civil Space Technology Initiative. The SPRE is used as a test bed for evaluating converter modifications which have the potential to improve the converter performance and for validating computer code predictions. Reducing the number of cooler tubes on the SPRE has been identified as a modification with the potential to significantly improve power and efficiency. Experimental tests designed to investigate the effects of reducing the number of cooler tubes on converter power, efficiency and dynamics are described. Presented are test results from the converter operating with a reduced number of cooler tubes and comparisons between this data and both baseline test data and computer code predictions.

  2. Protograph based LDPC codes with minimum distance linearly growing with block size

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Sam; Thorpe, Jeremy

    2005-01-01

    We propose several LDPC code constructions that simultaneously achieve good threshold and error floor performance. Minimum distance is shown to grow linearly with block size (similar to regular codes of variable degree at least 3) by considering ensemble average weight enumerators. Our constructions are based on projected graph, or protograph, structures that support high-speed decoder implementations. As with irregular ensembles, our constructions are sensitive to the proportion of degree-2 variable nodes. A code with too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code with too many such nodes tends to not exhibit a minimum distance that grows linearly in block length. In this paper we also show that precoding can be used to lower the threshold of regular LDPC codes. The decoding thresholds of the proposed codes, which have minimum distance increasing linearly in block size, outperform those of regular LDPC codes. Furthermore, a family of low- to high-rate codes, with thresholds that adhere closely to their respective channel capacity thresholds, is presented. Simulation results for a few example codes show that the proposed codes have low error floors as well as good threshold SNR performance.
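
    A sketch of the protograph construction itself: each entry of a small base matrix is lifted to a Z x Z circulant permutation (entries greater than one become sums of distinct circulants), yielding a quasi-cyclic parity-check matrix whose node degrees follow the protograph. The base matrix and lift size below are made up, not the paper's designs.

      import numpy as np

      def lift(base, Z, seed=0):
          """Expand a protograph base matrix into a QC-LDPC parity-check
          matrix using random circulant-permutation shifts."""
          rng = np.random.default_rng(seed)
          m, n = base.shape
          H = np.zeros((m * Z, n * Z), dtype=np.uint8)
          I = np.eye(Z, dtype=np.uint8)
          for i in range(m):
              for j in range(n):
                  cnt = int(base[i, j])            # parallel edges allowed
                  for s in rng.choice(Z, size=cnt, replace=False):
                      H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] ^= np.roll(I, int(s), axis=1)
          return H

      base = np.array([[1, 2, 1, 1],       # hypothetical protograph; columns 0 and 3
                       [1, 1, 2, 1]])      # are degree-2 variable nodes
      H = lift(base, Z=8)
      print(H.shape, "column weights:", H.sum(axis=0)[::8])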

  3. Experimental characterization and modelling of non-linear coupling of the lower hybrid current drive power on Tore Supra

    NASA Astrophysics Data System (ADS)

    Preynas, M.; Goniche, M.; Hillairet, J.; Litaudon, X.; Ekedahl, A.; Colas, L.

    2013-01-01

    To achieve steady-state operation on future fusion devices, in particular on ITER, the coupling of the lower hybrid wave must be optimized over a wide range of edge conditions. However, under some specific conditions, deleterious effects on lower hybrid current drive (LHCD) coupling are sometimes observed on Tore Supra. Accordingly, dedicated LHCD experiments have been performed using the LHCD system of Tore Supra, composed of two launchers of different conceptual design: the fully active multi-junction (FAM) and the new passive active multi-junction (PAM) antennas. A non-linear interaction between the electron density and the electric field has been characterized in a thin plasma layer in front of the two LHCD antennas. The resulting dependence of the power reflection coefficient (RC) on the LHCD power is not predicted by the standard linear theory of LH wave coupling. A theoretical model is suggested to describe the non-linear wave-plasma interaction induced by the ponderomotive effect and implemented in a new full wave LHCD code, PICCOLO-2D (ponderomotive effect in a coupling code of lower hybrid wave-2D). The code self-consistently treats the wave propagation in the antenna vicinity and its interaction with the local edge plasma density. The simulation reproduces very well the occurrence of non-linear behaviour in the coupling observed in the LHCD experiments. The important differences and trends between the FAM and the PAM antennas, especially a larger increase in RC for the FAM, are also reproduced by the PICCOLO-2D simulation. The working hypothesis that the ponderomotive effect contributes to the non-linear observations of LHCD coupling is therefore validated through this comprehensive modelling, for the first time on the FAM and PAM antennas on Tore Supra.

  4. Air oxidation of Zircaloy-4 in the 600-1000 °C temperature range: Modeling for ASTEC code application

    NASA Astrophysics Data System (ADS)

    Coindreau, O.; Duriez, C.; Ederli, S.

    2010-10-01

    Progress in the treatment of air oxidation of zirconium in severe accident (SA) codes is required for a reliable analysis of severe accidents involving air ingress. Air oxidation of zirconium can actually lead to accelerated core degradation and increased fission product release, especially of the highly-radiotoxic ruthenium. This paper presents a model to simulate the air oxidation kinetics of Zircaloy-4 in the 600-1000 °C temperature range. It is based on available experimental data, including separate-effect experiments performed at IRSN and at Forschungszentrum Karlsruhe. The kinetic transition, named "breakaway", from a diffusion-controlled regime to an accelerated oxidation is taken into account in the modeling via a critical mass gain parameter. The progressive propagation of the locally initiated breakaway is modeled by a linear increase in oxidation rate with time. Finally, when breakaway propagation is completed, the oxidation rate stabilizes and the kinetics is modeled by a linear law. This new modeling is integrated in the severe accident code ASTEC, jointly developed by IRSN and GRS. Model predictions and experimental data from thermogravimetric results show good agreement for different air flow rates and for slow temperature transient conditions.

  5. Comparisons of 'Identical' Simulations by the Eulerian Gyrokinetic Codes GS2 and GYRO

    NASA Astrophysics Data System (ADS)

    Bravenec, R. V.; Ross, D. W.; Candy, J.; Dorland, W.; McKee, G. R.

    2003-10-01

    A major goal of the fusion program is to be able to predict tokamak transport from first-principles theory. To this end, the Eulerian gyrokinetic code GS2 was developed years ago and continues to be improved [1]. Recently, the Eulerian code GYRO was developed [2]. These codes are not subject to the statistical noise inherent to particle-in-cell (PIC) codes, and have been very successful in treating electromagnetic fluctuations. GS2 is fully spectral in the radial coordinate while GYRO uses finite-differences and "banded" spectral schemes. To gain confidence in nonlinear simulations of experiment with these codes, "apples-to-apples" comparisons (identical profile inputs, flux-tube geometry, two species, etc.) are first performed. We report on a series of linear and nonlinear comparisons (with overall agreement) including kinetic electrons, collisions, and shaped flux surfaces. We also compare nonlinear simulations of a DIII-D discharge to measurements of not only the fluxes but also the turbulence parameters. [1] F. Jenko, et al., Phys. Plasmas 7, 1904 (2000) and refs. therein. [2] J. Candy, J. Comput. Phys. 186, 545 (2003).

  6. Autonomous orientation predicts longevity: New findings from the Nun Study.

    PubMed

    Weinstein, Netta; Legate, Nicole; Ryan, William S; Hemmy, Laura

    2018-03-10

    Work on longevity has found protective social, cognitive, and emotional factors, but to date we have little understanding of the impact of motivational dynamics. Autonomy orientation, or stable patterns of self-regulation, is theorized to be a protective factor for long-term mental and physical health (Ryan & Deci, 2017), and it is therefore a prime candidate for examining how stable psychosocial factors are linked to longevity, or life expectancy. Essays written in the 1930s by participants in the Nun Study were coded for indicators of an autonomy orientation. These were selected in line with an extensive theoretical literature based in self-determination theory (Deci & Ryan, 1985). Essays were coded for the propensity for choice in action, susceptibility to pressure, self-reflection, integration of experiences, and parental support for autonomy. These coded variables were used to predict age of death. Using 176 codable essays provided by now-deceased participants, linear regression analyses revealed that choiceful behavior, self-reflection, and parent autonomy support predicted age of death. Participants who demonstrated these stable and beneficial motivational characteristics lived longer. Personality constructs reflecting a healthy form of self-regulation are associated with long-term health. Implications for health interventions are discussed.

  7. Chroma intra prediction based on inter-channel correlation for HEVC.

    PubMed

    Zhang, Xingyu; Gisquet, Christophe; François, Edouard; Zou, Feng; Au, Oscar C

    2014-01-01

    In this paper, we investigate a new inter-channel coding mode called LM mode proposed for the next generation video coding standard called high efficiency video coding. This mode exploits inter-channel correlation, using reconstructed luma to predict chroma linearly, with parameters derived from neighboring reconstructed luma and chroma pixels at both encoder and decoder to avoid overhead signaling. In this paper, we analyze the LM mode and prove that the LM parameters for predicting original chroma and reconstructed chroma are statistically the same. We also analyze the error sensitivity of the LM parameters. We identify some situations in which the LM mode is problematic and propose three novel LM-like modes, called LMA, LML, and LMO, to address them. To limit the increase in complexity due to the LM-like modes, we propose some fast algorithms with the help of some new cost functions. We further identify some potentially problematic conditions in the parameter estimation (including the regression dilution problem) and introduce a novel model correction technique to detect and correct those conditions. Simulation results suggest that considerable BD-rate reduction can be achieved by the proposed LM-like modes and model correction technique. In addition, the performance gains of the two techniques appear to be essentially additive when combined.
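
    For orientation, the LM mode's core operation is a linear fit of chroma against co-located reconstructed luma, with the two parameters derived from already-decoded neighboring pixels so that nothing needs to be signaled. The sketch below is a floating-point least-squares illustration of that idea, not the bit-exact integer derivation used in the HEVC specification.

      import numpy as np

      def lm_parameters(neigh_luma, neigh_chroma):
          """Least-squares fit of chroma = alpha * luma + beta from the
          reconstructed neighboring samples (floating point, illustrative)."""
          n = len(neigh_luma)
          sx, sy = neigh_luma.sum(), neigh_chroma.sum()
          sxx = (neigh_luma * neigh_luma).sum()
          sxy = (neigh_luma * neigh_chroma).sum()
          alpha = (n * sxy - sx * sy) / max(n * sxx - sx * sx, 1e-9)
          beta = (sy - alpha * sx) / n
          return alpha, beta

      def lm_predict(rec_luma_block, alpha, beta):
          # Chroma prediction from the co-located reconstructed luma samples.
          return alpha * rec_luma_block + beta

      # Toy neighboring row/column and a 2x2 block to predict.
      luma_n = np.array([100., 104., 110., 118.])
      chroma_n = np.array([60., 62., 65., 69.])
      a, b = lm_parameters(luma_n, chroma_n)
      pred = lm_predict(np.array([[102., 115.], [108., 120.]]), a, b)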

  8. Visual communication with retinex coding.

    PubMed

    Huck, F O; Fales, C L; Davis, R E; Alter-Gartenberg, R

    2000-04-10

    Visual communication with retinex coding seeks to suppress the spatial variation of the irradiance (e.g., shadows) across natural scenes and preserve only the spatial detail and the reflectance (or the lightness) of the surface itself. The separation of reflectance from irradiance begins with nonlinear retinex coding that sharply and clearly enhances edges and preserves their contrast, and it ends with a Wiener filter that restores images from this edge and contrast information. An approximate small-signal model of image gathering with retinex coding is found to consist of the familiar difference-of-Gaussian bandpass filter and a locally adaptive automatic-gain control. A linear representation of this model is used to develop expressions within the small-signal constraint for the information rate and the theoretical minimum data rate of the retinex-coded signal and for the maximum-realizable fidelity of the images restored from this signal. Extensive computations and simulations demonstrate that predictions based on these figures of merit correlate closely with perceptual and measured performance. Hence these predictions can serve as a general guide for the design of visual communication channels that produce images with a visual quality that consistently approaches the best possible sharpness, clarity, and reflectance constancy, even for nonuniform irradiances. The suppression of shadows in the restored image is found to be constrained inherently more by the sharpness of their penumbra than by their depth.

  9. Visual Communication with Retinex Coding

    NASA Astrophysics Data System (ADS)

    Huck, Friedrich O.; Fales, Carl L.; Davis, Richard E.; Alter-Gartenberg, Rachel

    2000-04-01

    Visual communication with retinex coding seeks to suppress the spatial variation of the irradiance (e.g., shadows) across natural scenes and preserve only the spatial detail and the reflectance (or the lightness) of the surface itself. The separation of reflectance from irradiance begins with nonlinear retinex coding that sharply and clearly enhances edges and preserves their contrast, and it ends with a Wiener filter that restores images from this edge and contrast information. An approximate small-signal model of image gathering with retinex coding is found to consist of the familiar difference-of-Gaussian bandpass filter and a locally adaptive automatic-gain control. A linear representation of this model is used to develop expressions within the small-signal constraint for the information rate and the theoretical minimum data rate of the retinex-coded signal and for the maximum-realizable fidelity of the images restored from this signal. Extensive computations and simulations demonstrate that predictions based on these figures of merit correlate closely with perceptual and measured performance. Hence these predictions can serve as a general guide for the design of visual communication channels that produce images with a visual quality that consistently approaches the best possible sharpness, clarity, and reflectance constancy, even for nonuniform irradiances. The suppression of shadows in the restored image is found to be constrained inherently more by the sharpness of their penumbra than by their depth.

  10. Evidence for Natural Selection in Nucleotide Content Relationships Based on Complete Mitochondrial Genomes: Strong Effect of Guanine Content on Separation between Terrestrial and Aquatic Vertebrates.

    PubMed

    Sorimachi, Kenji; Okayasu, Teiji

    2015-01-01

    The complete vertebrate mitochondrial genome consists of 13 coding genes. We used this genome to investigate the existence of natural selection in vertebrate evolution. From the complete mitochondrial genomes, we predicted nucleotide contents and then separated these values into coding and non-coding regions. When nucleotide contents of a coding or non-coding region were plotted against the nucleotide content of the complete mitochondrial genomes, we obtained linear regression lines only between homonucleotides and their analogs. On every plot using G or A (purine) content, G content in aquatic vertebrates was higher than that in terrestrial vertebrates, while A content in aquatic vertebrates was lower than that in terrestrial vertebrates. Based on these relationships, vertebrates were separated into two groups, terrestrial and aquatic. However, using C or T (pyrimidine) content, clear separation between these two groups was not obtained. The hagfish (Eptatretus burgeri) was further separated from both terrestrial and aquatic vertebrates. Based on these results, nucleotide content relationships predicted from the complete vertebrate mitochondrial genomes reveal the existence of natural selection based on evolutionary separation between terrestrial and aquatic vertebrate groups. In addition, we propose that separation of the two groups might be linked to ammonia detoxification based on high G and low A contents, which encode Glu-rich and Lys-poor proteins.

  11. Computational Assessment of Aft-Body Closure for the HSR Reference H Configuration

    NASA Technical Reports Server (NTRS)

    Londenberg, W. Kelly

    1999-01-01

    A study has been conducted to determine how well the USM3D unstructured Euler solver can be utilized to predict the flow over the High Speed Research (HSR) Reference H configuration, with the ultimate goal of predicting sting interference so that aft-body closure effects may be evaluated. This study has shown that the code can be used to predict the interference effects of a lower-mounted blade sting with a high degree of confidence. It has been shown that wing and fuselage pressures, both levels and trends, can be predicted well. Force and moment levels are not predicted well, but experimental trends are predicted. Based upon this, force and moment increments are assumed to be predicted accurately. Deflection of the horizontal tail was found to cause a non-linear increment relative to the non-deflected sting interference effects.

  12. Comparison between measured and predicted turbulence frequency spectra in ITG and TEM regimes

    NASA Astrophysics Data System (ADS)

    Citrin, J.; Arnichand, H.; Bernardo, J.; Bourdelle, C.; Garbet, X.; Jenko, F.; Hacquin, S.; Pueschel, M. J.; Sabot, R.

    2017-06-01

    Distinct peaks observed in tokamak core reflectometry measurements—named quasi-coherent modes (QCMs)—are identified as a signature of trapped-electron-mode (TEM) turbulence (Arnichand et al 2016 Plasma Phys. Control. Fusion 58 014037). This phenomenon is investigated with detailed linear and nonlinear gyrokinetic simulations using the Gene code. A Tore Supra density scan is studied, which traverses a transition from linear ohmic confinement (LOC) to saturated ohmic confinement (SOC). The LOC and SOC phases are both simulated separately. In the LOC phase, where QCMs are observed, TEMs are robustly predicted unstable in linear studies. In the later SOC phase, where QCMs are no longer observed, ion-temperature-gradient (ITG) modes are identified. In nonlinear simulations, in the ITG (SOC) phase, a broadband spectrum is seen. In the TEM (LOC) phase, a clear emergence of a peak at the TEM frequencies is seen. This is due to reduced nonlinear frequency broadening of the underlying linear modes in the TEM regime compared with the ITG regime. A synthetic diagnostic of the nonlinearly simulated frequency spectra reproduces the features observed in the reflectometry measurements. These results support the identification of core QCMs as an experimental marker for TEM turbulence.

  13. Representing high-dimensional data to intelligent prostheses and other wearable assistive robots: A first comparison of tile coding and selective Kanerva coding.

    PubMed

    Travnik, Jaden B; Pilarski, Patrick M

    2017-07-01

    Prosthetic devices have advanced in their capabilities and in the number and type of sensors included in their design. As the space of sensorimotor data available to a conventional or machine learning prosthetic control system increases in dimensionality and complexity, it becomes increasingly important that this data be represented in a useful and computationally efficient way. Well-structured sensory data allows prosthetic control systems to make informed, appropriate control decisions. In this study, we explore the impact that increased sensorimotor information has on current machine learning prosthetic control approaches. Specifically, we examine the effect that high-dimensional sensory data has on the computation time and prediction performance of a true-online temporal-difference learning prediction method as embedded within a resource-limited upper-limb prosthesis control system. We present results comparing tile coding, the dominant linear representation for real-time prosthetic machine learning, with a newly proposed modification to Kanerva coding that we call selective Kanerva coding. In addition to showing promising results for selective Kanerva coding, our results confirm potential limitations to tile coding as the number of sensory input dimensions increases. To our knowledge, this study is the first to explicitly examine representations for real-time machine learning prosthetic devices in general terms. This work therefore provides an important step towards forming an efficient prosthesis-eye view of the world, wherein prompt and accurate representations of high-dimensional data may be provided to machine learning control systems within artificial limbs and other assistive rehabilitation technologies.
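
    A minimal sketch of tile coding, the baseline representation compared above: the continuous input is overlaid with several offset grids, and the active tile in each grid yields a sparse binary feature vector. Grid counts and offsets here are illustrative choices, not those of the study.

      import numpy as np

      def tile_code(x, lows, highs, n_tilings=8, tiles_per_dim=6):
          """Return the active feature index for each of `n_tilings`
          offset grids (a minimal tile-coding sketch)."""
          x = np.asarray(x, dtype=float)
          lows, highs = np.asarray(lows, float), np.asarray(highs, float)
          d = len(x)
          scaled = (x - lows) / (highs - lows) * tiles_per_dim
          tiles_total = tiles_per_dim + 1      # one extra tile to cover offsets
          per_tiling = tiles_total ** d
          active = []
          for t in range(n_tilings):
              offset = t / n_tilings           # each tiling shifted by a fraction of a tile
              coords = np.floor(scaled + offset).astype(int)
              coords = np.clip(coords, 0, tiles_total - 1)
              idx = t * per_tiling + int(np.ravel_multi_index(coords, (tiles_total,) * d))
              active.append(idx)
          return active                        # exactly one active tile per tiling

      features = tile_code([0.3, -1.2], lows=[0, -2], highs=[1, 2])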

  14. Analysis of ELM stability with extended MHD models in JET, JT-60U and future JT-60SA tokamak plasmas

    NASA Astrophysics Data System (ADS)

    Aiba, N.; Pamela, S.; Honda, M.; Urano, H.; Giroud, C.; Delabie, E.; Frassinetti, L.; Lupelli, I.; Hayashi, N.; Huijsmans, G.; JET Contributors, the; Research Unit, JT-60SA

    2018-01-01

    The stability with respect to a peeling-ballooning mode (PBM) was investigated numerically with extended MHD simulation codes in JET, JT-60U and future JT-60SA plasmas. The MINERVA-DI code was used to analyze the linear stability, including the effects of rotation and ion diamagnetic drift (ω*i), in JET-ILW and JT-60SA plasmas, and the JOREK code was used to simulate nonlinear dynamics with rotation, viscosity and resistivity in JT-60U plasmas. It was validated quantitatively that the ELM trigger condition in JET-ILW plasmas can be reasonably explained by taking into account both the rotation and ω*i effects in the numerical analysis. When deuterium poloidal rotation is evaluated based on neoclassical theory, an increase in the effective charge of plasma destabilizes the PBM because of an acceleration of rotation and a decrease in ω*i. The difference in the amount of ELM energy loss in JT-60U plasmas rotating in opposite directions was reproduced qualitatively with JOREK. By comparing the ELM affected areas with linear eigenfunctions, it was confirmed that the difference in the linear stability property, due not to the rotation direction but to the plasma density profile, is thought to be responsible for changing the ELM energy loss just after the ELM crash. A predictive study to determine the pedestal profiles in JT-60SA was performed by updating the EPED1 model to include the rotation and ω*i effects in the PBM stability analysis. It was shown that the plasma rotation predicted with the neoclassical toroidal viscosity degrades the pedestal performance by about 10% by destabilizing the PBM, but the pressure pedestal height will be high enough to achieve the target parameters required for the ITER-like shape inductive scenario in JT-60SA.

  15. Combustion-acoustic stability analysis for premixed gas turbine combustors

    NASA Technical Reports Server (NTRS)

    Darling, Douglas; Radhakrishnan, Krishnan; Oyediran, Ayo; Cowan, Lizabeth

    1995-01-01

    Lean, prevaporized, premixed combustors are susceptible to combustion-acoustic instabilities. A model was developed to predict eigenvalues of axial modes for combustion-acoustic interactions in a premixed combustor. This work extends previous work by including variable area and detailed chemical kinetics mechanisms, using the code LSENS. Thus the acoustic equations could be integrated through the flame zone. Linear perturbations were made of the continuity, momentum, energy, chemical species, and state equations. The qualitative accuracy of our approach was checked by examining its predictions for various unsteady heat release rate models. Perturbations in fuel flow rate are currently being added to the model.

  16. Error-Rate Bounds for Coded PPM on a Poisson Channel

    NASA Technical Reports Server (NTRS)

    Moision, Bruce; Hamkins, Jon

    2009-01-01

    Equations for computing tight bounds on error rates for coded pulse-position modulation (PPM) on a Poisson channel at high signal-to-noise ratio have been derived. These equations and elements of the underlying theory are expected to be especially useful in designing codes for PPM optical communication systems. The equations and the underlying theory apply, more specifically, to a case in which a) At the transmitter, a linear outer code is concatenated with an inner code that includes an accumulator and a bit-to-PPM-symbol mapping (see figure) [this concatenation is known in the art as "accumulate-PPM" (abbreviated "APPM")]; b) The transmitted signal propagates on a memoryless binary-input Poisson channel; and c) At the receiver, near-maximum-likelihood (ML) decoding is effected through an iterative process. Such a coding/modulation/decoding scheme is a variation on the concept of turbo codes, which have complex structures, such that an exact analytical expression for the performance of a particular code is intractable. However, techniques for accurately estimating the performances of turbo codes have been developed. The performance of a typical turbo code includes (1) a "waterfall" region consisting of a steep decrease of error rate with increasing signal-to-noise ratio (SNR) at low to moderate SNR, and (2) an "error floor" region with a less steep decrease of error rate with increasing SNR at moderate to high SNR. The techniques used heretofore for estimating performance in the waterfall region have differed from those used for estimating performance in the error-floor region. For coded PPM, prior to the present derivations, equations for accurate prediction of the performance of coded PPM at high SNR did not exist, so that it was necessary to resort to time-consuming simulations in order to make such predictions. The present derivation makes it unnecessary to perform such time-consuming simulations.

  17. Design and Processing of a Novel Chaos-Based Stepped Frequency Synthesized Wideband Radar Signal.

    PubMed

    Zeng, Tao; Chang, Shaoqiang; Fan, Huayu; Liu, Quanhua

    2018-03-26

    Linear stepped frequency and linear frequency shift keying (FSK) signals have been widely used in radar systems. However, such linear modulation signals suffer from range-Doppler coupling that degrades radar multi-target resolution. Moreover, a fixed frequency-hopping or frequency-coded sequence can be easily predicted by an interception receiver in electronic countermeasures (ECM) environments, which limits radar anti-jamming performance. In addition, single FSK modulation reduces the radar's low probability of intercept (LPI) performance, for it cannot achieve a large time-bandwidth product. To solve these problems, we propose a novel chaos-based stepped frequency (CSF) synthesized wideband signal in this paper. The signal introduces chaotic frequency hopping between the coherent stepped frequency pulses, and adopts a chaotic frequency shift keying (CFSK) and phase shift keying (PSK) composite coded modulation in a subpulse, called CSF-CFSK/PSK. Correspondingly, a processing method for the signal has been proposed. According to our theoretical analyses and simulations, the proposed signal and processing method achieve better multi-target resolution and LPI performance. Furthermore, the flexible modulation increases robustness against identification by the interception receiver and improves the anti-jamming performance of the radar.
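
    The chaotic hopping idea can be illustrated with a logistic map driving the frequency index, as in the hedged sketch below; the paper's actual CSF-CFSK/PSK waveform combines this kind of hopping with PSK subpulse coding and is not reproduced here.

      def chaotic_hop_sequence(n_pulses, n_freqs, x0=0.37, r=3.99):
          """Generate a frequency-hopping index sequence from the logistic map
          x <- r * x * (1 - x); chaotic, hence hard for an interceptor
          to predict from past observations."""
          x, seq = x0, []
          for _ in range(n_pulses):
              x = r * x * (1.0 - x)
              seq.append(int(x * n_freqs) % n_freqs)   # quantize state to a frequency bin
          return seq

      hops = chaotic_hop_sequence(n_pulses=16, n_freqs=8)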

  18. Variable frame rate transmission - A review of methodology and application to narrow-band LPC speech coding

    NASA Astrophysics Data System (ADS)

    Viswanathan, V. R.; Makhoul, J.; Schwartz, R. M.; Huggins, A. W. F.

    1982-04-01

    The variable frame rate (VFR) transmission methodology developed, implemented, and tested in the years 1973-1978 for efficiently transmitting linear predictive coding (LPC) vocoder parameters extracted from the input speech at a fixed frame rate is reviewed. With the VFR method, parameters are transmitted only when their values have changed sufficiently over the interval since their preceding transmission. Two distinct approaches to automatic implementation of the VFR method are discussed. The first bases the transmission decisions on comparisons between the parameter values of the present frame and the last transmitted frame. The second, which is based on a functional perceptual model of speech, compares the parameter values of all the frames that lie in the interval between the present frame and the last transmitted frame against a linear model of parameter variation over that interval. Also considered is the application of VFR transmission to the design of narrow-band LPC speech coders with average bit rates of 2000-2400 bits/s.
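
    The first decision approach lends itself to a very short sketch: transmit a frame only when its parameter vector has moved far enough from the last transmitted one. The distance measure and threshold below are illustrative assumptions, not the criteria used in the reviewed systems.

      import numpy as np

      def vfr_select(frames, threshold):
          """Variable-frame-rate selection: keep a frame only when its
          parameters differ sufficiently from the last transmitted frame
          (the first of the two decision approaches described above)."""
          sent = [0]                    # always transmit the first frame
          last = frames[0]
          for i, f in enumerate(frames[1:], start=1):
              if np.linalg.norm(f - last) > threshold:   # illustrative distance measure
                  sent.append(i)
                  last = f
          return sent

      # Toy LPC parameter tracks: 100 frames x 10 coefficients.
      frames = np.cumsum(np.random.randn(100, 10) * 0.05, axis=0)
      kept = vfr_select(frames, threshold=0.5)
      print(f"transmitting {len(kept)} of {len(frames)} frames")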

  19. Channel-capacity gain in entanglement-assisted communication protocols based exclusively on linear optics, single-photon inputs, and coincidence photon counting

    DOE PAGES

    Lougovski, P.; Uskov, D. B.

    2015-08-04

    Entanglement can effectively increase communication channel capacity as evidenced by dense coding that predicts a capacity gain of 1 bit when compared to entanglement-free protocols. However, dense coding relies on Bell states and when implemented using photons the capacity gain is bounded by 0.585 bits due to one's inability to discriminate between the four optically encoded Bell states. In this research we study the following question: Are there alternative entanglement-assisted protocols that rely only on linear optics, coincidence photon counting, and separable single-photon input states and at the same time provide a greater capacity gain than 0.585 bits? In this study, we show that besides the Bell states there is a class of bipartite four-mode two-photon entangled states that facilitate an increase in channel capacity. We also discuss how the proposed scheme can be generalized to the case of two-photon N-mode entangled states for N=6,8.
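
    The 0.585-bit figure quoted above follows from a short calculation: linear-optics Bell measurements can separate only three of the four Bell states (two outcomes merge), so the usable alphabet carries log2(3) ≈ 1.585 bits, against 1 bit without entanglement.

      import math

      ideal_gain = 2.0 - 1.0                      # full Bell measurement: 2 bits vs 1 bit
      linear_optics_gain = math.log2(3) - 1.0     # only 3 distinguishable Bell states
      print(ideal_gain, round(linear_optics_gain, 3))   # 1.0 0.585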

  20. Onboard Image Processing System for Hyperspectral Sensor

    PubMed Central

    Hihara, Hiroki; Moritani, Kotaro; Inoue, Masao; Hoshi, Yoshihiro; Iwasaki, Akira; Takada, Jun; Inada, Hitomi; Suzuki, Makoto; Seki, Taeko; Ichikawa, Satoshi; Tanii, Jun

    2015-01-01

    Onboard image processing systems for a hyperspectral sensor have been developed in order to maximize image data transmission efficiency for large volume and high speed data downlink capacity. Since more than 100 channels are required for hyperspectral sensors on Earth observation satellites, fast and small-footprint lossless image compression capability is essential for reducing the size and weight of a sensor system. A fast lossless image compression algorithm has been developed, and is implemented in the onboard circuitry that corrects the sensitivity and linearity of Complementary Metal Oxide Semiconductor (CMOS) sensors, in order to maximize the compression ratio. The employed image compression method is based on the Fast, Efficient, Lossless Image compression System (FELICS), which is a hierarchical predictive coding method with resolution scaling. To improve FELICS's performance in image decorrelation and entropy coding, we apply a two-dimensional interpolation prediction and adaptive Golomb-Rice coding. It supports progressive decompression using resolution scaling while still maintaining superior performance measured as speed and complexity. Coding efficiency and compression speed enlarge the effective capacity of signal transmission channels, which leads to reduced onboard hardware by multiplexing sensor signals into a smaller number of compression circuits. The circuitry is embedded into the data formatter of the sensor system without adding size, weight, power consumption, and fabrication cost. PMID:26404281
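
    Golomb-Rice coding of prediction residuals, one ingredient named above, is compact enough to sketch: signed residuals are zigzag-mapped to non-negative integers and coded with a unary quotient plus k remainder bits. The parameter rule and mapping below are common textbook choices, not necessarily those used in this sensor system.

      def rice_encode(value, k):
          """Golomb-Rice code of a non-negative integer: unary quotient,
          then k binary remainder bits."""
          q, r = value >> k, value & ((1 << k) - 1)
          return "1" * q + "0" + format(r, f"0{k}b") if k else "1" * q + "0"

      def zigzag(residual):
          # Map signed residuals to non-negative integers: 0, -1, 1, -2, ...
          return (residual << 1) if residual >= 0 else ((-residual << 1) - 1)

      def estimate_k(residuals):
          # Rule of thumb: pick k so that 2^k is near the mean magnitude.
          mean = max(sum(abs(r) for r in residuals) / len(residuals), 1)
          k = 0
          while (1 << k) < mean:
              k += 1
          return k

      residuals = [0, -1, 2, 3, -2, 0, 1, -4]
      k = estimate_k(residuals)
      bitstream = "".join(rice_encode(zigzag(r), k) for r in residuals)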

  1. Hybrid digital-analog coding with bandwidth expansion for correlated Gaussian sources under Rayleigh fading

    NASA Astrophysics Data System (ADS)

    Yahampath, Pradeepa

    2017-12-01

    Consider communicating a correlated Gaussian source over a Rayleigh fading channel with no knowledge of the channel signal-to-noise ratio (CSNR) at the transmitter. In this case, a digital system cannot be optimal for a range of CSNRs. Analog transmission, however, is optimal at all CSNRs, if the source and channel are memoryless and bandwidth matched. This paper presents new hybrid digital-analog (HDA) systems for sources with memory and channels with bandwidth expansion, which outperform both digital-only and analog-only systems over a wide range of CSNRs. The digital part is either a predictive quantizer or a transform code, used to achieve a coding gain. The analog part uses linear encoding to transmit the quantization error, which improves the performance under CSNR variations. The hybrid encoder is optimized to achieve the minimum AMMSE (average minimum mean square error) over the CSNR distribution. To this end, analytical expressions are derived for the AMMSE of asymptotically optimal systems. It is shown that the outage CSNR of the channel code and the analog-digital power allocation must be jointly optimized to achieve the minimum AMMSE. In the case of HDA predictive quantization, a simple algorithm is presented to solve the optimization problem. Experimental results are presented for both Gauss-Markov sources and speech signals.

  2. Ensemble Weight Enumerators for Protograph LDPC Codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush

    2006-01-01

    Recently, LDPC codes with projected-graph, or protograph, structures have been proposed. In this paper, finite length ensemble weight enumerators for LDPC codes with protograph structures are obtained. Asymptotic results are derived as the block size goes to infinity. In particular, we are interested in obtaining ensemble average weight enumerators for protograph LDPC codes which have minimum distance that grows linearly with block size. As with irregular ensembles, the linear minimum distance property is sensitive to the proportion of degree-2 variable nodes. In this paper the derived results on ensemble weight enumerators show that the linear minimum distance condition on the degree distribution of unstructured irregular LDPC codes is a sufficient but not a necessary condition for protograph LDPC codes.

  3. Pole-placement Predictive Functional Control for under-damped systems with real numbers algebra.

    PubMed

    Zabet, K; Rossiter, J A; Haber, R; Abdullah, M

    2017-11-01

    This paper presents the new algorithm of PP-PFC (Pole-placement Predictive Functional Control) for stable, linear, under-damped higher-order processes. It is shown that while conventional PFC aims to get first-order exponential behavior, this is not always straightforward with significant under-damped modes, and hence a pole-placement PFC algorithm is proposed which can be tuned more precisely to achieve the desired dynamics, but exploits complex number algebra and linear combinations in order to deliver guarantees of stability and performance. Nevertheless, practical implementation is easier by avoiding complex number algebra, and hence a modified formulation of the PP-PFC algorithm is also presented which utilises just real numbers while retaining the key attributes of simple algebra, coding and tuning. The potential advantages are demonstrated with numerical examples and real-time control of a laboratory plant.

  4. Generalized Bezout's Theorem and its applications in coding theory

    NASA Technical Reports Server (NTRS)

    Berg, Gene A.; Feng, Gui-Liang; Rao, T. R. N.

    1996-01-01

    This paper presents a generalized Bezout theorem which can be used to determine a tighter lower bound of the number of distinct points of intersection of two or more curves for a large class of plane curves. A new approach to determine a lower bound on the minimum distance (and also the generalized Hamming weights) for algebraic-geometric codes defined from a class of plane curves is introduced, based on the generalized Bezout theorem. Examples of more efficient linear codes are constructed using the generalized Bezout theorem and the new approach. For d = 4, the linear codes constructed by the new construction are better than or equal to the known linear codes. For d greater than 5, these new codes are better than the known codes. The Klein code over GF(2^3) is also constructed.
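
    For context on the quantity these bounds target: the minimum distance of a small binary linear code can be computed exactly by enumerating all 2^k - 1 nonzero codewords from a generator matrix, as in this sketch (the [7,4] Hamming example is illustrative and unrelated to the algebraic-geometric constructions of the paper).

      import itertools
      import numpy as np

      def minimum_distance(G):
          """Exact minimum Hamming distance of a small binary linear code:
          enumerate all nonzero codewords u @ G over GF(2). Exponential
          in k, so only practical for small dimensions."""
          k = G.shape[0]
          best = G.shape[1]
          for msg in itertools.product([0, 1], repeat=k):
              if any(msg):
                  w = int(((np.array(msg) @ G) % 2).sum())   # codeword weight
                  best = min(best, w)
          return best

      # Generator matrix of the [7,4] Hamming code (minimum distance 3).
      G = np.array([[1, 0, 0, 0, 1, 1, 0],
                    [0, 1, 0, 0, 1, 0, 1],
                    [0, 0, 1, 0, 0, 1, 1],
                    [0, 0, 0, 1, 1, 1, 1]])
      print(minimum_distance(G))   # 3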

  5. Three dimensional nonlinear simulations of edge localized modes on the EAST tokamak using BOUT++ code

    NASA Astrophysics Data System (ADS)

    Liu, Z. X.; Xu, X. Q.; Gao, X.; Xia, T. Y.; Joseph, I.; Meyer, W. H.; Liu, S. C.; Xu, G. S.; Shao, L. M.; Ding, S. Y.; Li, G. Q.; Li, J. G.

    2014-09-01

    Experimental measurements of edge localized modes (ELMs) observed on the EAST experiment are compared to linear and nonlinear theoretical simulations of peeling-ballooning modes using the BOUT++ code. Simulations predict that the dominant toroidal mode number of the ELM instability becomes larger for lower current, which is consistent with the mode structure captured with visible light using an optical CCD camera. The poloidal mode number of the simulated pressure perturbation shows good agreement with the filamentary structure observed by the camera. The nonlinear simulation is also consistent with the experimentally measured energy loss during an ELM crash and with the radial speed of ELM effluxes measured using a gas puffing imaging diagnostic.

  6. A Linear Viscoelastic Model Calibration of Sylgard 184.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Long, Kevin Nicholas; Brown, Judith Alice

    2017-04-01

    We calibrate a linear thermoviscoelastic model for solid Sylgard 184 (90-10 formulation), a lightly cross-linked, highly flexible isotropic elastomer, for use both in Sierra / Solid Mechanics via the Universal Polymer Model as well as in Sierra / Structural Dynamics (Salinas) as an isotropic viscoelastic material. Material inputs for the calibration in both codes are provided. The frequency domain master curve of oscillatory shear was obtained from a report from Los Alamos National Laboratory (LANL). However, because the form of that data is different from the constitutive models in Sierra, we also present the mapping of the LANL data onto Sandia's constitutive models. Finally, blind predictions of cyclic tension and compression out to moderate strains of 40 and 20% respectively are compared with Sandia's legacy cure schedule material. Although the strain rate of the data is unknown, the linear thermoviscoelastic model accurately predicts the experiments out to moderate strains for the slower strain rates, which is consistent with the expectation that quasistatic test procedures were likely followed. This good agreement comes despite the different cure schedules between the Sandia and LANL data.
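
    Linear viscoelastic models of this kind are typically expressed as a Prony series relaxation modulus; the sketch below evaluates that standard form with made-up coefficients, not the calibrated Sylgard 184 parameters reported here.

      import numpy as np

      def relaxation_modulus(t, g_inf, g_i, tau_i):
          """Prony-series shear relaxation modulus
          G(t) = G_inf + sum_i G_i * exp(-t / tau_i),
          the standard form behind linear viscoelastic material models."""
          t = np.atleast_1d(np.asarray(t, dtype=float))
          return g_inf + sum(g * np.exp(-t / tau) for g, tau in zip(g_i, tau_i))

      # Illustrative (made-up) coefficients in MPa and seconds -- not the
      # calibrated material inputs from the report.
      t = np.logspace(-3, 3, 7)
      G = relaxation_modulus(t, g_inf=0.3, g_i=[0.5, 0.2], tau_i=[0.01, 10.0])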

  7. Semilinear programming: applications and implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohan, S.

    Semilinear programming is a method of solving optimization problems with linear constraints where the non-negativity restrictions on the variables are dropped and the objective function coefficients can take on different values depending on whether the variable is positive or negative. The simplex method for linear programming is modified in this thesis to solve general semilinear and piecewise linear programs efficiently without having to transform them into equivalent standard linear programs. Several models in widely different areas of optimization such as production smoothing, facility location, goal programming and L1 estimation are presented first to demonstrate the compact formulation that arises when such problems are formulated as semilinear programs. A code SLP is constructed using the semilinear programming techniques. Problems in aggregate planning and L1 estimation are solved using SLP, and equivalent linear programs are solved using a linear programming simplex code. Comparisons of CPU times and numbers of iterations indicate SLP to be far superior. The semilinear programming techniques are extended to piecewise linear programming in the implementation of the code PLP. Piecewise linear models in aggregate planning are solved using PLP and equivalent standard linear programs using a simple upper-bounded linear programming code SUBLP.
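
    The equivalent-standard-LP route that semilinear programming avoids can be shown on the L1 estimation application mentioned above: split each residual into positive and negative parts and minimize their sum with an ordinary LP solver. A hedged sketch using scipy:

      import numpy as np
      from scipy.optimize import linprog

      def l1_regression(A, y):
          """L1 (least absolute deviations) fit via the equivalent standard LP:
          write each residual as r = u - v with u, v >= 0 and minimize
          sum(u + v). This is the variable-splitting transformation that
          the specialized semilinear simplex makes unnecessary."""
          n, p = A.shape
          c = np.concatenate([np.zeros(p), np.ones(2 * n)])
          A_eq = np.hstack([A, np.eye(n), -np.eye(n)])   # A @ beta + u - v = y
          bounds = [(None, None)] * p + [(0, None)] * (2 * n)
          res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
          return res.x[:p]

      rng = np.random.default_rng(0)
      A = np.column_stack([np.ones(30), rng.normal(size=30)])
      y = A @ np.array([1.0, 2.0]) + rng.laplace(scale=0.3, size=30)
      beta = l1_regression(A, y)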

  8. 3D Progressive Damage Modeling for Laminated Composite Based on Crack Band Theory and Continuum Damage Mechanics

    NASA Technical Reports Server (NTRS)

    Wang, John T.; Pineda, Evan J.; Ranatunga, Vipul; Smeltzer, Stanley S.

    2015-01-01

    A simple continuum damage mechanics (CDM) based 3D progressive damage analysis (PDA) tool for laminated composites was developed and implemented as a user defined material subroutine to link with a commercially available explicit finite element code. This PDA tool uses linear lamina properties from standard tests, predicts damage initiation with an easy-to-implement Hashin-Rotem failure criterion, and, in the damage evolution phase, evaluates the degradation of material properties based on the crack band theory and traction-separation cohesive laws. It follows Matzenmiller et al.'s formulation to incorporate the degrading material properties into the damaged stiffness matrix. Since nonlinear shear and matrix stress-strain relations are not implemented, correction factors are used for slowing the reduction of the damaged shear stiffness terms to reflect the effect of these nonlinearities on the laminate strength predictions. This CDM based PDA tool is implemented as a user defined material (VUMAT) to link with the Abaqus/Explicit code. Strength predictions obtained using this VUMAT are correlated with test data for a set of notched specimens under tension and compression loads.
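
    The Hashin-Rotem initiation check is simple to sketch in its plane-stress form, shown below with illustrative lamina strengths; the tool described above applies a 3D version and couples it with crack-band degradation, which this sketch omits.

      def hashin_rotem_2d(s11, s22, s12, Xt, Xc, Yt, Yc, S):
          """Plane-stress Hashin-Rotem-style damage-initiation indices
          (illustrative form). Failure in a mode is flagged when its
          index reaches 1."""
          # Fiber mode: tension or compression depending on the sign of s11.
          fiber = (s11 / Xt) ** 2 if s11 >= 0 else (s11 / Xc) ** 2
          # Matrix mode: transverse stress interacting with in-plane shear.
          if s22 >= 0:
              matrix = (s22 / Yt) ** 2 + (s12 / S) ** 2
          else:
              matrix = (s22 / Yc) ** 2 + (s12 / S) ** 2
          return {"fiber": fiber, "matrix": matrix}

      # Illustrative unidirectional lamina strengths (MPa).
      idx = hashin_rotem_2d(s11=1200.0, s22=30.0, s12=40.0,
                            Xt=2000.0, Xc=1100.0, Yt=50.0, Yc=150.0, S=70.0)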

  9. The prediction of human exons by oligonucleotide composition and discriminant analysis of spliceable open reading frames

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Solovyev, V.V.; Salamov, A.A.; Lawrence, C.B.

    1994-12-31

    Discriminant analysis is applied to the problem of recognition of 5'-, internal, and 3'-exons in human DNA sequences. Specific recognition functions were developed for revealing exons of particular types. The method is based on a splice site prediction algorithm that uses the linear Fisher discriminant to combine information about significant triplet frequencies of various functional parts of splice site regions and oligonucleotide preferences in protein coding and intron regions. The accuracy of our splice site recognition function is about 97%. A discriminant function for 5'-exon prediction includes the hexanucleotide composition of the upstream region, the triplet composition around the ATG codon, ORF coding potential, donor splice site potential, and the composition of the downstream intron region. For internal exon prediction, we combine in a discriminant function the characteristics describing the 5'-intron region, donor splice site, coding region, acceptor splice site, and 3'-intron region for each open reading frame flanked by GT and AG base pairs. The accuracy of precise internal exon recognition on a test set of 451 exon and 246693 pseudoexon sequences is 77%, with a specificity of 79% and a level of pseudoexon ORF prediction of 99.96%. The recognition quality computed at the level of individual nucleotides is 89% for exon sequences and 98% for intron sequences. A discriminant function for 3'-exon prediction includes the octanucleotide composition of the upstream intron region, the triplet composition around the stop codon, ORF coding potential, acceptor splice site potential, and the hexanucleotide composition of the downstream region. We unite these three discriminant functions in the exon-predicting program FEX (find exons). FEX exactly predicts 70% of 1016 exons from a test set of 181 complete genes with a specificity of 73%, and 89% of exons are exactly or partially predicted. On average, 85% of nucleotides were predicted accurately, with a specificity of 91%.
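
    At the core of each recognizer is the linear Fisher discriminant, which for two classes reduces to a weight vector w = Sw^-1 (mu1 - mu0) and a scalar threshold. A minimal sketch on synthetic feature vectors (the ridge term and midpoint threshold are common conveniences, not details from the paper):

      import numpy as np

      def fisher_discriminant(X0, X1):
          """Two-class linear Fisher discriminant: weight vector
          w = Sw^-1 (mu1 - mu0), with a small ridge term for stability."""
          mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
          Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
          w = np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), mu1 - mu0)
          threshold = w @ (mu0 + mu1) / 2.0     # midpoint decision threshold
          return w, threshold

      # Toy feature vectors (e.g., oligonucleotide composition scores).
      rng = np.random.default_rng(1)
      X_pseudo = rng.normal(0.0, 1.0, size=(200, 5))   # non-exon class
      X_exon = rng.normal(0.8, 1.0, size=(200, 5))     # exon class
      w, b = fisher_discriminant(X_pseudo, X_exon)
      is_exon = (X_exon @ w) > b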

  10. The Deterministic Mine Burial Prediction System

    DTIC Science & Technology

    2009-01-12

    or below the water-line, initial linear and angular velocities, and fall angle relative to the mine’s axis of symmetry. Other input data needed...c. Run_DMBP.m: start-up MATLAB script for the program 2. C:\\DMBP\\DMBP_src: This directory contains source code, geotechnical databases, and...approved for public release). b. \\Impact_35: The IMPACT35 model c. \\MakeTPARfiles: scripts for creating wave height and wave period input data from

  11. Sensory Information Processing and Symbolic Computation

    DTIC Science & Technology

    1973-12-31

    plague all image deblurring methods when working with high signal to noise ratios, is that of a ringing or ghost image phenomenon which surrounds high... of automatic deblurring of images, linear predictive coding of speech and the refinement and application of mathematical models of human vision and

  12. Alfvén eigenmode evolution computed with the VENUS and KINX codes for the ITER baseline scenario

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Isaev, M. Yu., E-mail: isaev-my@nrcki.ru; Medvedev, S. Yu.; Cooper, W. A.

    A new application of the VENUS code is described, which computes alpha particle orbits in the perturbed electromagnetic fields and their resonant interaction with the toroidal Alfvén eigenmodes (TAEs) for the ITER device. The ITER baseline scenario with Q = 10 and a plasma toroidal current of 15 MA is considered as the most important and relevant for the International Tokamak Physics Activity group on energetic particles (ITPA-EP). For this scenario, typical unstable TAE-modes with the toroidal index n = 20 have been predicted that are localized in the plasma core near the surface with safety factor q = 1. The spatial structure of ballooning and antiballooning modes has been computed with the ideal MHD code KINX. The linear growth rates and the saturation levels, taking into account the damping effects and the different mode frequencies, have been calculated with the VENUS code for both ballooning and antiballooning TAE-modes.

  13. Classification of breast tissue in mammograms using efficient coding.

    PubMed

    Costa, Daniel D; Campos, Lúcio F; Barros, Allan K

    2011-06-24

    Female breast cancer is the major cause of death by cancer in western countries. Efforts in Computer Vision have been made in order to improve the diagnostic accuracy by radiologists. Some methods of lesion diagnosis in mammogram images were developed based on the technique of principal component analysis, which has been used for efficient coding of signals, and on 2D Gabor wavelets, used for computer vision applications and modeling biological vision. In this work, we present a methodology that uses efficient coding along with linear discriminant analysis to distinguish between mass and non-mass in 5090 regions of interest from mammograms. The results show that the best rates of success reached with Gabor wavelets and principal component analysis were 85.28% and 87.28%, respectively. In comparison, the model of efficient coding presented here reached up to 90.07%. Altogether, the results presented demonstrate that independent component analysis performed the efficient coding successfully in order to discriminate mass from non-mass tissues. In addition, we have observed that LDA with ICA bases showed high predictive performance for some datasets and thus provides significant support for a more detailed clinical investigation.
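
    One of the compared pipelines, a linear-basis front end followed by linear discriminant analysis, can be sketched with standard tools; PCA stands in for the efficient-coding stage here (the paper also evaluates ICA and Gabor bases), and the data are random stand-ins, not mammogram ROIs.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline

      # Toy stand-in data: flattened ROI patches and mass/non-mass labels.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(500, 32 * 32))   # 500 ROIs, 32x32 pixels each
      y = rng.integers(0, 2, size=500)      # 1 = mass, 0 = non-mass

      # Linear-basis front end (PCA here) followed by a linear discriminant.
      model = make_pipeline(PCA(n_components=40), LinearDiscriminantAnalysis())
      scores = cross_val_score(model, X, y, cv=5)
      print(scores.mean())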

  14. Depth assisted compression of full parallax light fields

    NASA Astrophysics Data System (ADS)

    Graziosi, Danillo B.; Alpaslan, Zahir Y.; El-Ghoroury, Hussein S.

    2015-03-01

    Full parallax light field displays require high pixel density and huge amounts of data. Compression is a necessary tool used by 3D display systems to cope with the high bandwidth requirements. One of the formats adopted by MPEG for 3D video coding standards is the use of multiple views with associated depth maps. Depth maps enable the coding of a reduced number of views, and are used by compression and synthesis software to reconstruct the light field. However, most of the developed coding and synthesis tools target linearly arranged cameras with small baselines. Here we propose to use the 3D video coding format for full parallax light field coding. We introduce a view selection method inspired by plenoptic sampling followed by transform-based view coding and view synthesis prediction to code residual views. We determine the minimal requirements for view sub-sampling and present the rate-distortion performance of our proposal. We also compare our method with established video compression techniques, such as H.264/AVC, H.264/MVC, and the new 3D video coding algorithm, 3DV-ATM. Our results show that our method not only has an improved rate-distortion performance, it also preserves the structure of the perceived light fields better.

  15. Unified aeroacoustics analysis for high speed turboprop aerodynamics and noise. Volume 3: Application of theory for blade loading, wakes, noise, and wing shielding

    NASA Technical Reports Server (NTRS)

    Hanson, D. B.; Mccolgan, C. J.; Ladden, R. M.; Klatte, R. J.

    1991-01-01

    Results of the program for the generation of a computer prediction code for noise of advanced single-rotation turboprops (prop-fans) such as the SR3 model are presented. The code is based on a linearized theory developed at Hamilton Standard in which aerodynamics and acoustics are treated as a unified process. Both steady and unsteady blade loading are treated. Capabilities include prediction of steady airload distributions and associated aerodynamic performance, unsteady blade pressure response to gust interaction or blade vibration, noise fields associated with thickness and steady and unsteady loading, and wake velocity fields associated with steady loading. The code was developed on the Hamilton Standard IBM computer and has now been installed on the Cray XMP at NASA-Lewis. The work had its genesis in the frequency domain acoustic theory developed at Hamilton Standard in the late 1970s. It was found that the method used for near field noise predictions could be adapted as a lifting surface theory for aerodynamic work via the pressure potential technique that was used for both wings and ducted turbomachinery. In the first realization of the theory for propellers, the blade loading was represented in a quasi-vortex lattice form. This was upgraded to true lifting surface loading. Originally, it was believed that a purely linear approach for both aerodynamics and noise would be adequate. However, two sources of nonlinearity in the steady aerodynamics became apparent and were found to be a significant factor at takeoff conditions. The first is related to the fact that the steady axial induced velocity may be of the same order of magnitude as the flight speed, and the second is the formation of leading edge vortices which increase lift and redistribute loading. Discovery and properties of prop-fan leading edge vortices were reported in two papers. The Unified AeroAcoustic Program (UAAP) capabilities are demonstrated and the theory verified by comparison of the predictions with data from tests at NASA-Lewis. Steady aerodynamic performance, unsteady blade loading, wakes, noise, and wing and boundary layer shielding are examined.

  16. An adaptive technique to maximize lossless image data compression of satellite images

    NASA Technical Reports Server (NTRS)

    Stewart, Robert J.; Lure, Y. M. Fleming; Liou, C. S. Joe

    1994-01-01

    Data compression will play an increasingly important role in the storage and transmission of image data within NASA science programs as the Earth Observing System comes into operation. It is important that the science data be preserved at the fidelity the instrument and the satellite communication systems were designed to produce. Lossless compression must therefore be applied, at least, to archive the processed instrument data. In this paper, we present an analysis of the performance of lossless compression techniques and develop an adaptive approach which applies image remapping, feature-based image segmentation to determine regions of similar entropy, and high-order arithmetic coding to obtain significant improvements over the use of conventional compression techniques alone. Image remapping is used to transform the original image into a lower entropy state. Several techniques were tested on satellite images, including differential pulse code modulation, bi-linear interpolation, and block-based linear predictive coding. The results of these experiments are discussed, and trade-offs between computation requirements and entropy reductions are used to identify the optimum approach for a variety of satellite images. Further entropy reduction can be achieved by segmenting the image based on local entropy properties and then applying a coding technique which maximizes compression for the region. Experimental results are presented showing the effect of different coding techniques for regions of different entropy. A rule-base is developed through which the technique giving the best compression is selected. The paper concludes that maximum compression can be achieved cost effectively and at acceptable performance rates with a combination of techniques which are selected based on image contextual information.
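
    The selection logic rests on comparing first-order entropy before and after a candidate remapping; the sketch below does this for a simple previous-pixel (DPCM-style) predictor, one of the remappings tested above.

      import numpy as np

      def first_order_entropy(values):
          """Empirical first-order entropy in bits/symbol of a symbol stream."""
          _, counts = np.unique(values, return_counts=True)
          p = counts / counts.sum()
          return float(-(p * np.log2(p)).sum())

      def dpcm_remap(img):
          # Horizontal previous-pixel predictor; on natural images the
          # residuals typically have much lower entropy than raw pixels.
          res = img.astype(np.int32).copy()
          res[:, 1:] -= img[:, :-1].astype(np.int32)
          return res

      img = np.tile(np.arange(64, dtype=np.uint8), (64, 1)) + \
            np.random.default_rng(0).integers(0, 4, (64, 64)).astype(np.uint8)
      print(first_order_entropy(img.ravel()),
            first_order_entropy(dpcm_remap(img).ravel()))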

  17. Computation of Turbulent Wake Flows in Variable Pressure Gradient

    NASA Technical Reports Server (NTRS)

    Duquesne, N.; Carlson, J. R.; Rumsey, C. L.; Gatski, T. B.

    1999-01-01

    Transport aircraft performance is strongly influenced by the effectiveness of high-lift systems. Developing wakes generated by the airfoil elements are subjected to strong pressure gradients and can thicken very rapidly, limiting maximum lift. This paper focuses on the effects of various pressure gradients on developing symmetric wakes and on the ability of a linear eddy viscosity model and a non-linear explicit algebraic stress model to accurately predict their downstream evolution. In order to reduce the uncertainties arising from numerical issues when assessing the performance of turbulence models, three different numerical codes with the same turbulence models are used. Results are compared to available experimental data to assess the accuracy of the computational results.

  18. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1998-01-01

    A code trellis is a graphical representation of a code, block or convolutional, in which every path represents a codeword (or a code sequence for a convolutional code). This representation makes it possible to implement Maximum Likelihood Decoding (MLD) of a code with reduced decoding complexity. The most well known trellis-based MLD algorithm is the Viterbi algorithm. The trellis representation was first introduced and used for convolutional codes [23]. This representation, together with the Viterbi decoding algorithm, has resulted in a wide range of applications of convolutional codes for error control in digital communications over the last two decades. For a long time, however, the trellis structure of block codes went largely unexplored, and there are two major reasons for this inactive period of research. First, most coding theorists at that time believed that block codes did not have simple trellis structure like convolutional codes, and maximum likelihood decoding of linear block codes using the Viterbi algorithm was practically impossible, except for very short block codes. Second, since almost all of the linear block codes are constructed algebraically or based on finite geometries, it was the belief of many coding theorists that algebraic decoding was the only way to decode these codes. These two reasons seriously hindered the development of efficient soft-decision decoding methods for linear block codes and their applications to error control in digital communications. This led to a general belief that block codes are inferior to convolutional codes and hence, that they were not useful. Chapter 2 gives a brief review of linear block codes. The goal is to provide the essential background material for the development of trellis structure and trellis-based decoding algorithms for linear block codes in the later chapters. Chapters 3 through 6 present the fundamental concepts, finite-state machine model, state space formulation, basic structural properties, state labeling, construction procedures, complexity, minimality, and sectionalization of trellises. Chapter 7 discusses trellis decomposition and subtrellises for low-weight codewords. Chapter 8 first presents well known methods for constructing long powerful codes from short component codes or component codes of smaller dimensions, and then provides methods for constructing their trellises, which include Shannon and Cartesian product techniques. Chapter 9 deals with convolutional codes, puncturing, zero-tail termination and tail-biting. Chapters 10 through 13 present various trellis-based decoding algorithms, old and new. Chapter 10 first discusses the application of the well known Viterbi decoding algorithm to linear block codes, optimum sectionalization of a code trellis to minimize computation complexity, and design issues for IC (integrated circuit) implementation of a Viterbi decoder. Then it presents a new decoding algorithm for convolutional codes, named the Differential Trellis Decoding (DTD) algorithm. Chapter 12 presents a suboptimum reliability-based iterative decoding algorithm with a low-weight trellis search for the most likely codeword. This decoding algorithm provides a good trade-off between error performance and decoding complexity. All the decoding algorithms presented in Chapters 10 through 12 are devised to minimize word error probability. Chapter 13 presents decoding algorithms that minimize bit error probability and provide the corresponding soft (reliability) information at the output of the decoder.
Decoding algorithms presented are the MAP (maximum a posteriori probability) decoding algorithm and the Soft-Output Viterbi Algorithm (SOVA) algorithm. Finally, the minimization of bit error probability in trellis-based MLD is discussed.
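
    Since the Viterbi algorithm anchors most of these chapters, a compact hard-decision example may help: the sketch below decodes a rate-1/2, 4-state convolutional code (generators 7 and 5 octal) by keeping the minimum-Hamming-metric survivor into each trellis state. Block-code trellises are traversed the same way, only with time-varying state spaces.

      import numpy as np

      # Rate-1/2 convolutional code, constraint length 3 (generators 7, 5 octal).
      G = [0b111, 0b101]

      def encode(bits, state=0):
          out = []
          for b in bits:
              state = ((state << 1) | b) & 0b111        # 3-bit shift register
              out += [bin(state & g).count("1") & 1 for g in G]
          return out

      def viterbi(received):
          """Hard-decision Viterbi decoding on the 4-state trellis:
          keep the minimum-Hamming-metric survivor into each state."""
          n_states, INF = 4, float("inf")
          metric = [0.0] + [INF] * (n_states - 1)       # start in the zero state
          paths = [[] for _ in range(n_states)]
          for t in range(0, len(received), 2):
              r = received[t:t + 2]
              new_metric = [INF] * n_states
              new_paths = [None] * n_states
              for s in range(n_states):
                  if metric[s] == INF:
                      continue
                  for b in (0, 1):
                      full = ((s << 1) | b) & 0b111     # register contents
                      ns = full & 0b11                  # next state (memory bits)
                      expect = [bin(full & g).count("1") & 1 for g in G]
                      m = metric[s] + sum(x != y for x, y in zip(expect, r))
                      if m < new_metric[ns]:
                          new_metric[ns] = m
                          new_paths[ns] = paths[s] + [b]
              metric, paths = new_metric, new_paths
          return paths[int(np.argmin(metric))]

      msg = [1, 0, 1, 1, 0, 0, 1]
      rx = encode(msg)
      rx[3] ^= 1                    # inject one channel error
      assert viterbi(rx) == msg     # the single error is corrected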

  19. Stirling cryocooler test results and design model verification

    NASA Astrophysics Data System (ADS)

    Shimko, Martin A.; Stacy, W. D.; McCormick, John A.

    A long-life Stirling cycle cryocooler being developed for spaceborne applications is described. The results from tests on a preliminary breadboard version of the cryocooler used to demonstrate the feasibility of the technology and to validate the generator design code used in its development are presented. This machine achieved a cold-end temperature of 65 K while carrying a 1/2-W cooling load. The basic machine is a double-acting, flexure-bearing, split Stirling design with linear electromagnetic drives for the expander and compressors. Flat metal diaphragms replace pistons for sweeping and sealing the machine working volumes. The double-acting expander couples to a laminar-channel counterflow recuperative heat exchanger for regeneration. The PC-compatible design code developed for this design approach calculates regenerator loss, including heat transfer irreversibilities, pressure drop, and axial conduction in the regenerator walls. The code accurately predicted cooler performance and assisted in diagnosing breadboard machine flaws during shakedown and development testing.

  20. Asymptotic/numerical analysis of supersonic propeller noise

    NASA Technical Reports Server (NTRS)

    Myers, M. K.; Wydeven, R.

    1989-01-01

    An asymptotic analysis based on the Mach surface structure of the field of a supersonic helical source distribution is applied to predict thickness and loading noise radiated by high speed propeller blades. The theory utilizes an integral representation of the Ffowcs Williams-Hawkings equation in a fully linearized form. The asymptotic results are used for chordwise strips of the blade, while required spanwise integrations are performed numerically. The form of the analysis enables predicted waveforms to be interpreted in terms of Mach surface propagation. A computer code developed to implement the theory is described and found to yield results in close agreement with more exact computations.

  1. Sonic boom predictions using a modified Euler code

    NASA Technical Reports Server (NTRS)

    Siclari, Michael J.

    1992-01-01

    The environmental impact of a next generation fleet of high-speed civil transports (HSCT) is of great concern in the evaluation of the commercial development of such a transport. One of the potential environmental impacts of a high speed civilian transport is the sonic boom generated by the aircraft and its effects on the population, wildlife, and structures in the vicinity of its flight path. If an HSCT aircraft is restricted from flying overland routes due to excessive booms, the commercial feasibility of such a venture may be questionable. NASA has taken the lead in evaluating and resolving the issues surrounding the development of a high speed civilian transport through its High-Speed Research Program (HSRP). The present paper discusses the usage of a Computational Fluid Dynamics (CFD) nonlinear code in predicting the pressure signature and ultimately the sonic boom generated by a high speed civilian transport. NASA had designed, built, and wind tunnel tested two low boom configurations for flight at Mach 2 and Mach 3. Experimental data was taken at several distances from these models up to a body length from the axis of the aircraft. The near field experimental data serves as a test bed for computational fluid dynamic codes in evaluating their accuracy and reliability for predicting the behavior of future HSCT designs. Sonic boom prediction methodology exists which is based on modified linear theory. These methods can be used reliably if near field signatures are available at distances from the aircraft where nonlinear and three dimensional effects have diminished in importance. Up to the present time, the only reliable method to obtain this data was via the wind tunnel with costly model construction and testing. It is the intent of the present paper to apply a modified three dimensional Euler code to predict the near field signatures of the two low boom configurations recently tested by NASA.

  2. DEMNUni: ISW, Rees-Sciama, and weak-lensing in the presence of massive neutrinos

    NASA Astrophysics Data System (ADS)

    Carbone, Carmelita; Petkova, Margarita; Dolag, Klaus

    2016-07-01

    We present, for the first time in the literature, a full reconstruction of the total (linear and non-linear) ISW/Rees-Sciama effect in the presence of massive neutrinos, together with its cross-correlations with CMB-lensing and weak-lensing signals. The present analyses make use of all-sky maps extracted via ray-tracing across the gravitational potential distribution provided by the ``Dark Energy and Massive Neutrino Universe'' (DEMNUni) project, a set of large-volume, high-resolution cosmological N-body simulations, where neutrinos are treated as separate collisionless particles. We correctly recover, at 1-2% accuracy, the linear predictions from CAMB. Concerning the CMB-lensing and weak-lensing signals, we also recover, with similar accuracy, the signal predicted by Boltzmann codes, once non-linear neutrino corrections to HALOFIT are accounted for. Interestingly, in the ISW/Rees-Sciama signal, and its cross correlation with lensing, we find an excess of power with respect to the massless case, due to free streaming neutrinos, roughly at the transition scale between the linear and non-linear regimes. The excess is ~ 5 - 10% at l ~ 100 for the ISW/Rees-Sciama auto power spectrum, depending on the total neutrino mass Mν, and becomes a factor of ~ 4 for Mν = 0.3 eV, at l ~ 600, for the ISW/Rees-Sciama cross power with CMB-lensing. This effect should be taken into account for the correct estimation of the CMB temperature bispectrum in the presence of massive neutrinos.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lazerson, Samuel A.; Loizu, Joaquim; Hirshman, Steven

    The VMEC nonlinear ideal MHD equilibrium code [S. P. Hirshman and J. C. Whitson, Phys. Fluids 26, 3553 (1983)] is compared against analytic linear ideal MHD theory in a screw-pinch-like configuration. The focus of such analysis is to verify the ideal MHD response at magnetic surfaces which possess magnetic transform (ι) which is resonant with spectral values of the perturbed boundary harmonics. A large aspect ratio circular cross section zero-beta equilibrium is considered. This equilibrium possesses a rational surface with safety factor q = 2 at a normalized flux value of 0.5. A small resonant boundary perturbation is introduced, exciting a response at the resonant rational surface. The code is found to capture the plasma response as predicted by a newly developed analytic theory that ensures the existence of nested flux surfaces by allowing for a jump in rotational transform (ι=1/q). The VMEC code satisfactorily reproduces these theoretical results without the necessity of an explicit transform discontinuity (Δι) at the rational surface. It is found that the response across the rational surfaces depends upon both radial grid resolution and local shear (dι/dΦ, where ι is the rotational transform and Φ the enclosed toroidal flux). Calculations of an implicit Δι suggest that it does not arise due to numerical artifacts (attributed to radial finite differences in VMEC) or existence conditions for flux surfaces as predicted by linear theory (minimum values of Δι). Scans of the rotational transform profile indicate that for experimentally relevant levels of transform shear the response becomes increasingly localised. Furthermore, careful examination of a large experimental tokamak equilibrium, with applied resonant fields, indicates that this shielding response is present, suggesting the phenomenon is not limited to this verification exercise.

  4. Advanced composites structural concepts and materials technologies for primary aircraft structures: Structural response and failure analysis

    NASA Technical Reports Server (NTRS)

    Dorris, William J.; Hairr, John W.; Huang, Jui-Tien; Ingram, J. Edward; Shah, Bharat M.

    1992-01-01

    Non-linear analysis methods were adapted and incorporated in a finite element based DIAL code. These methods are necessary to evaluate the global response of a stiffened structure under combined in-plane and out-of-plane loading. They include the Arc Length method and a target point analysis procedure. A new interface material model was implemented that can model elastic-plastic behavior of the bond adhesive. Direct application of this method is in skin/stiffener interface failure assessment. Addition of the AML (angle minus longitudinal or load) failure procedure and Hashin's failure criteria provides added capability in the failure predictions. Interactive Stiffened Panel Analysis modules were developed as interactive pre- and post-processors. Each module provides the means of performing self-initiated finite element based analysis of primary structures such as a flat or curved stiffened panel, a corrugated flat sandwich panel, and a curved geodesic fuselage panel. These modules bring finite element analysis into the design of composite structures without requiring the user to know much about the techniques and procedures needed to actually perform a finite element analysis from scratch. An interactive finite element code was developed to predict bolted joint strength considering material and geometrical non-linearity. The developed method conducts an ultimate strength failure analysis using a set of material degradation models.

  5. Effect of a Diffusion Zone on Fatigue Crack Propagation in Layered FGMs

    NASA Astrophysics Data System (ADS)

    Hauber, Brett; Brockman, Robert; Paulino, Glaucio

    2008-02-01

    Research into functionally graded materials (FGMs) has led to advances in our ability to analyze cracks. However, two prominent aspects remain relatively unexplored: 1) development and validation of modeling methods for fatigue crack propagation in FGMs, and 2) experimental validation of stress intensity models in engineered materials such as two-phase monolithic and graded materials. This work addresses some of these problems for a limited set of conditions, material systems (e.g., Ti/TiB), and material gradients. Numerical analyses are conducted for single edge notch bend (SENB) specimens. Stress intensity factors are computed using the specialized finite element code I-Franc (Illinois Fracture Analysis Code), which is tailored for both homogeneous and graded materials, as well as Franc2DL and ABAQUS. Crack extension is considered by means of specified crack increments, together with fatigue evaluations to predict crack propagation life. Results will be used to determine linear material gradient parameters that are significant for prediction of fatigue crack growth behavior.

  6. Geographical variation of cerebrovascular disease in New York State: the correlation with income

    PubMed Central

    Han, Daikwon; Carrow, Shannon S; Rogerson, Peter A; Munschauer, Frederick E

    2005-01-01

    Background Income is known to be associated with cerebrovascular disease; however, little is known about the more detailed relationship between cerebrovascular disease and income. We examined the hypothesis that the geographical distribution of cerebrovascular disease in New York State may be predicted by a nonlinear model using income as a surrogate socioeconomic risk factor. Results We used spatial clustering methods to identify areas with high and low prevalence of cerebrovascular disease at the ZIP code level after smoothing rates and correcting for edge effects; geographic locations of high and low clusters of cerebrovascular disease in New York State were identified with and without income adjustment. To examine effects of income, we calculated the excess number of cases using a non-linear regression with cerebrovascular disease rates taken as the dependent variable and income and income squared taken as independent variables. The resulting regression equation was: excess rate = 32.075 - 1.22 × 10^-4 (income) + 8.068 × 10^-10 (income^2), and both the income and income-squared variables were significant at the 0.01 level. When income was included as a covariate in the non-linear regression, the number and size of clusters of high cerebrovascular disease prevalence decreased. Some 87 ZIP codes exceeded the critical value of the local statistic yielding a relative risk of 1.2. The majority of low cerebrovascular disease prevalence geographic clusters disappeared when the non-linear income effect was included. For linear regression, the excess rate of cerebrovascular disease falls with income; each $10,000 increase in median income of each ZIP code resulted in an average reduction of 3.83 observed cases. The significant nonlinear effect indicates a lessening of this income effect with increasing income. Conclusion Income is a non-linear predictor of excess cerebrovascular disease rates, with both low and high observed cerebrovascular disease rate areas associated with higher income. Income alone explains a significant amount of the geographical variance in cerebrovascular disease across New York State since both high and low clusters of cerebrovascular disease dissipate or disappear with income adjustment. Geographical modeling, including non-linear effects of income, may allow for better identification of other non-traditional risk factors. PMID:16242043
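
    As a rough illustration of the quadratic form reported above, the sketch below fits the same income and income-squared model to synthetic data (the data, noise level, and variable names are invented for illustration; only the functional form follows the paper):

    import numpy as np

    rng = np.random.default_rng(0)
    income = rng.uniform(20_000, 120_000, size=500)        # median income per ZIP code
    rate = 32.075 - 1.22e-4 * income + 8.068e-10 * income**2
    excess = rate + rng.normal(0.0, 0.5, size=500)         # noisy synthetic observations

    # Design matrix with income and income squared, as in the paper's model.
    X = np.column_stack([np.ones_like(income), income, income**2])
    beta, *_ = np.linalg.lstsq(X, excess, rcond=None)
    print("intercept, income, income^2:", beta)

    # The fitted parabola bottoms out where d(rate)/d(income) = 0, matching the
    # reported lessening of the income effect at higher incomes.
    print("income minimizing excess rate:", -beta[1] / (2 * beta[2]))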

  7. Patient-specific non-linear finite element modelling for predicting soft organ deformation in real-time: application to non-rigid neuroimage registration.

    PubMed

    Wittek, Adam; Joldes, Grand; Couton, Mathieu; Warfield, Simon K; Miller, Karol

    2010-12-01

    Long computation times of non-linear (i.e. accounting for geometric and material non-linearity) biomechanical models have been regarded as one of the key factors preventing application of such models in predicting organ deformation for image-guided surgery. This contribution presents real-time patient-specific computation of the deformation field within the brain for six cases of brain shift induced by craniotomy (i.e. surgical opening of the skull) using specialised non-linear finite element procedures implemented on a graphics processing unit (GPU). In contrast to commercial finite element codes that rely on an updated Lagrangian formulation and implicit integration in time domain for steady state solutions, our procedures utilise the total Lagrangian formulation with explicit time stepping and dynamic relaxation. We used patient-specific finite element meshes consisting of hexahedral and non-locking tetrahedral elements, together with realistic material properties for the brain tissue and appropriate contact conditions at the boundaries. The loading was defined by prescribing deformations on the brain surface under the craniotomy. Application of the computed deformation fields to register (i.e. align) the preoperative and intraoperative images indicated that the models very accurately predict the intraoperative deformations within the brain. For each case, computing the brain deformation field took less than 4 s using an NVIDIA Tesla C870 GPU, which is two orders of magnitude reduction in computation time in comparison to our previous study in which the brain deformation was predicted using a commercial finite element solver executed on a personal computer. Copyright © 2010 Elsevier Ltd. All rights reserved.

  8. CREME96 and Related Error Rate Prediction Methods

    NASA Technical Reports Server (NTRS)

    Adams, James H., Jr.

    2012-01-01

    Predicting the rate of occurrence of single event effects (SEEs) in space requires knowledge of the radiation environment and the response of electronic devices to that environment. Several analytical models have been developed over the past 36 years to predict SEE rates. The first error rate calculations were performed by Binder, Smith and Holman. Bradford, and Pickel and Blandford in their CRIER (Cosmic-Ray-Induced-Error-Rate) analysis code, introduced the basic Rectangular Parallelepiped (RPP) method for error rate calculations. For the radiation environment at the part, both made use of the Cosmic Ray LET (Linear Energy Transfer) spectra calculated by Heinrich for various absorber depths. A more detailed model for the space radiation environment within spacecraft was developed by Adams and co-workers. This model, together with a reformulation of the RPP method published by Pickel and Blandford, was used to create the CREME (Cosmic Ray Effects on Micro-Electronics) code. About the same time Shapiro wrote the CRUP (Cosmic Ray Upset Program) based on the RPP method published by Bradford. It was the first code to specifically take into account charge collection from outside the depletion region due to deformation of the electric field caused by the incident cosmic ray. Other early rate prediction methods and codes include the Single Event Figure of Merit, NOVICE, the Space Radiation code, and the effective flux method of Binder, which is the basis of the SEFA (Scott Effective Flux Approximation) model. By the early 1990s it was becoming clear that CREME and the other early models needed revision. This revision, CREME96, was completed and released as a WWW-based tool, one of the first of its kind. The revisions in CREME96 included improved environmental models and improved models for calculating single event effects. The need for a revision of CREME also stimulated the development of the CHIME (CRRES/SPACERAD Heavy Ion Model of the Environment) and MACREE (Modeling and Analysis of Cosmic Ray Effects in Electronics). The Single Event Figure of Merit method was also revised to use the solar minimum galactic cosmic ray spectrum and extended to circular orbits down to 200 km at any inclination. More recently a series of commercial codes was developed by TRAD (Test & Radiations), which includes the OMERE code for calculating single event effects. There are other error rate prediction methods which use Monte Carlo techniques. In this chapter the analytic methods for estimating the environment within spacecraft will be discussed.

  9. Genome Sequence of the Bacterium Streptomyces davawensis JCM 4913 and Heterologous Production of the Unique Antibiotic Roseoflavin

    PubMed Central

    Jankowitsch, Frank; Schwarz, Julia; Rückert, Christian; Gust, Bertolt; Szczepanowski, Rafael; Blom, Jochen; Pelzer, Stefan; Kalinowski, Jörn

    2012-01-01

    Streptomyces davawensis JCM 4913 synthesizes the antibiotic roseoflavin, a structural riboflavin (vitamin B2) analog. Here, we report the 9,466,619-bp linear chromosome of S. davawensis JCM 4913 and an 89,331-bp linear plasmid. The sequence has an average G+C content of 70.58% and contains six rRNA operons (16S-23S-5S) and 69 tRNA genes. The 8,616 predicted protein-coding sequences include 32 clusters coding for secondary metabolites, several of which are unique to S. davawensis. The chromosome contains long terminal inverted repeats of 33,255 bp each and atypical telomeres. Sequence analysis with regard to riboflavin biosynthesis revealed three different patterns of gene organization in Streptomyces species. Heterologous expression of a set of genes present on a subgenomic fragment of S. davawensis resulted in the production of roseoflavin by the host Streptomyces coelicolor M1152. Phylogenetic analysis revealed that S. davawensis is a close relative of Streptomyces cinnabarinus, and much to our surprise, we found that the latter bacterium is a roseoflavin producer as well. PMID:23043000

  10. A perturbative approach to the redshift space correlation function: beyond the Standard Model

    NASA Astrophysics Data System (ADS)

    Bose, Benjamin; Koyama, Kazuya

    2017-08-01

    We extend our previous redshift space power spectrum code to the redshift space correlation function. Here we focus on the Gaussian Streaming Model (GSM). Again, the code accommodates a wide range of modified gravity and dark energy models. For the non-linear real space correlation function used in the GSM, we use the Fourier transform of the RegPT 1-loop matter power spectrum. We compare predictions of the GSM for a Vainshtein screened and Chameleon screened model as well as GR. These predictions are compared to the Fourier transform of the Taruya, Nishimichi and Saito (TNS) redshift space power spectrum model, which is fit to N-body data. We find very good agreement between the Fourier transform of the TNS model and the GSM predictions, with <= 6% deviations in the first two correlation function multipoles for all models for redshift space separations 50 Mpc/h <= s <= 180 Mpc/h. Excellent agreement is found in the differences between the modified gravity and GR multipole predictions for both approaches to the redshift space correlation function, highlighting their matched ability in picking up deviations from GR. We elucidate the timeliness of such non-standard templates at the dawn of stage-IV surveys and discuss necessary preparations and extensions needed for upcoming high quality data.
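
    The power-spectrum-to-correlation-function step mentioned above is the standard spherical Bessel transform, xi(r) = (1/2 pi^2) Int dk k^2 P(k) j0(kr). The sketch below evaluates it numerically for a toy P(k); the toy spectrum, cutoff, and grid are placeholders, not the RegPT 1-loop spectrum used in the paper:

    import numpy as np
    from scipy.integrate import simpson

    k = np.linspace(1e-4, 5.0, 20000)                                   # wavenumbers, h/Mpc
    P = 2.5e4 * (k / 0.02) / (1 + (k / 0.02)**3) * np.exp(-(k / 2)**2)  # toy P(k)

    def xi(r):
        """xi(r) = (1 / 2 pi^2) * Integral dk k^2 P(k) j0(k r)."""
        j0 = np.sinc(k * r / np.pi)          # np.sinc(x) = sin(pi x)/(pi x), so this is j0(kr)
        return simpson(k**2 * P * j0, x=k) / (2.0 * np.pi**2)

    for r in (50.0, 100.0, 180.0):           # separations in Mpc/h, as quoted in the abstract
        print(f"xi({r:5.1f}) = {xi(r):+.4e}")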

  11. Linear dependence of surface expansion speed on initial plasma temperature in warm dense matter

    DOE PAGES

    Bang, Woosuk; Albright, Brian James; Bradley, Paul Andrew; ...

    2016-07-12

    Recent progress in laser-driven quasi-monoenergetic ion beams enabled the production of uniformly heated warm dense matter. Matter heated rapidly with this technique is under extreme temperatures and pressures, and promptly expands outward. While the expansion speed of an ideal plasma is known to have a square-root dependence on temperature, computer simulations presented here show a linear dependence of expansion speed on initial plasma temperature in the warm dense matter regime. The expansion of uniformly heated 1–100 eV solid density gold foils was modeled with the RAGE radiation-hydrodynamics code, and the average surface expansion speed was found to increase linearly with temperature. The origin of this linear dependence is explained by comparing predictions from the SESAME equation-of-state tables with those from the ideal gas equation-of-state. In conclusion, these simulations offer useful insight into the expansion of warm dense matter and motivate the application of optical shadowgraphy for temperature measurement.
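
    For context, the square-root scaling quoted above comes from the ion-acoustic sound speed that sets the rarefaction (expansion) front speed of an ideal plasma; in a simple single-temperature form (generic notation, not the paper's),

    c_s = \sqrt{\frac{\gamma Z k_B T}{m_i}} \;\propto\; \sqrt{T},

    so an expansion speed that instead grows linearly with initial temperature points to equation-of-state effects beyond the ideal gas, which is what the SESAME-versus-ideal-gas comparison in the paper isolates.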

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Qiang, E-mail: cq0405@126.com; Luoyang Electronic Equipment Testing Center, Luoyang 471000; Chen, Bin, E-mail: emcchen@163.com

    The Rayleigh-Taylor (R-T) instabilities are important hydrodynamics and magnetohydrodynamics (MHD) phenomena found in high energy density physics systems and in normal fluids. The formation and evolution of the R-T instability at the channel boundary during back-flow of the lightning return stroke are analyzed using linear perturbation theory and normal mode analysis methods, and the linear growth rate of the R-T instability in typical conditions for the lightning return stroke channel is obtained. Then, the R-T instability phenomena of the lightning return stroke are simulated using a two-dimensional Eulerian finite volume resistive radiation MHD code. The numerical results show that the evolution characteristics of the R-T instability in the early stage of back-flow are consistent with theoretical predictions obtained by linear analysis. The simulation also yields more evolution characteristics for the R-T instability beyond the linear theory. The results of this work apply to some observed features of the return stroke channel and further advance previous theoretical and experimental work.

  13. Rate-compatible protograph LDPC code families with linear minimum distance

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush (Inventor); Dolinar, Jr., Samuel J (Inventor); Jones, Christopher R. (Inventor)

    2012-01-01

    Digital communication coding methods are shown, which generate certain types of low-density parity-check (LDPC) codes built from protographs. A first method creates protographs having the linear minimum distance property and comprising at least one variable node with degree less than 3. A second method creates families of protographs of different rates, all having the linear minimum distance property, and structurally identical for all rates except for a rate-dependent designation of certain variable nodes as transmitted or non-transmitted. A third method creates families of protographs of different rates, all having the linear minimum distance property, and structurally identical for all rates except for a rate-dependent designation of the status of certain variable nodes as non-transmitted or set to zero. LDPC codes built from the protographs created by these methods can simultaneously have low error floors and low iterative decoding thresholds, and families of such codes of different rates can be decoded efficiently using a common decoding architecture.

  14. Non linear predictive control of a LEGO mobile robot

    NASA Astrophysics Data System (ADS)

    Merabti, H.; Bouchemal, B.; Belarbi, K.; Boucherma, D.; Amouri, A.

    2014-10-01

    Metaheuristics are general-purpose heuristics which have shown great potential for the solution of difficult optimization problems. In this work, we apply a metaheuristic, namely particle swarm optimization (PSO), to the optimization problem arising in non-linear model predictive control (NLMPC). The algorithm is easy to code and may be considered an alternative to more classical solution procedures. The PSO-NLMPC is applied to control a mobile robot for trajectory tracking and obstacle avoidance. Experimental results show the strength of this approach.
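
    A minimal sketch of the idea, assuming a simple unicycle model and a plain PSO with inertia and cognitive/social terms (the model, gains, and bounds are illustrative, not the authors' implementation): each particle encodes a candidate control sequence over the horizon, its fitness is the simulated tracking cost, and only the first control of the best sequence is applied, as in receding-horizon control.

    import numpy as np

    rng = np.random.default_rng(1)

    def rollout_cost(u_seq, state, ref, dt=0.1):
        """Simulate a unicycle (x' = v cos th, y' = v sin th, th' = w); sum tracking cost."""
        x, y, th = state
        cost = 0.0
        for v, w in u_seq.reshape(-1, 2):
            x, y, th = x + dt * v * np.cos(th), y + dt * v * np.sin(th), th + dt * w
            cost += (x - ref[0])**2 + (y - ref[1])**2
        return cost

    def pso_nmpc_step(state, ref, horizon=10, n_particles=40, iters=60):
        dim = 2 * horizon                                     # one (v, w) pair per step
        pos = rng.uniform(-1.0, 1.0, (n_particles, dim))      # candidate control sequences
        vel = np.zeros_like(pos)
        pbest = pos.copy()
        pbest_f = np.array([rollout_cost(p, state, ref) for p in pos])
        gbest = pbest[pbest_f.argmin()].copy()
        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, dim))
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, -1.0, 1.0)               # enforce input bounds
            f = np.array([rollout_cost(p, state, ref) for p in pos])
            improved = f < pbest_f
            pbest[improved], pbest_f[improved] = pos[improved], f[improved]
            gbest = pbest[pbest_f.argmin()].copy()
        return gbest[:2]                                      # apply only the first control

    print("first control (v, w):", pso_nmpc_step(state=(0.0, 0.0, 0.0), ref=(1.0, 0.5)))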

  15. Developing Information Power Grid Based Algorithms and Software

    NASA Technical Reports Server (NTRS)

    Dongarra, Jack

    1998-01-01

    This was an exploratory study to enhance our understanding of problems involved in developing large scale applications in a heterogeneous distributed environment. It is likely that the large scale applications of the future will be built by coupling specialized computational modules together. For example, efforts now exist to couple ocean and atmospheric prediction codes to simulate a more complete climate system. These two applications differ in many respects. They have different grids, the data is in different unit systems, and the algorithms for integrating in time are different. In addition the code for each application is likely to have been developed on different architectures and tends to have poor performance when run on an architecture for which the code was not designed, if it runs at all. Architectural differences may also induce differences in data representation which affect precision and convergence criteria as well as data transfer issues. In order to couple such dissimilar codes some form of translation must be present. This translation should be able to handle interpolation from one grid to another as well as construction of the correct data field in the correct units from available data. Even if a code is to be developed from scratch, a modular approach will likely be followed in that standard scientific packages will be used to do the more mundane tasks such as linear algebra or Fourier transform operations. This approach allows the developers to concentrate on their science rather than becoming experts in linear algebra or signal processing. Problems associated with this development approach include difficulties associated with data extraction and translation from one module to another, module performance on different nodal architectures, and others. In addition to these data and software issues there exist operational issues such as platform stability and resource management.

  16. Experimental and numerical investigation of development of disturbances in the boundary layer on sharp and blunted cone

    NASA Astrophysics Data System (ADS)

    Borisov, S. P.; Bountin, D. A.; Gromyko, Yu. V.; Khotyanovsky, D. V.; Kudryavtsev, A. N.

    2016-10-01

    Development of disturbances in the supersonic boundary layer on sharp and blunted cones is studied both experimentally and theoretically. The experiments were conducted at the Transit-M hypersonic wind tunnel of the Institute of Theoretical and Applied Mechanics. Linear stability calculations use the basic flow profiles provided by numerical simulations performed by solving the Navier-Stokes equations with ANSYS Fluent and the in-house CFS3D code. Both the global pseudospectral Chebyshev method and the local iteration procedure are employed to solve the eigenvalue problem and determine linear stability characteristics. The calculated amplification factors for disturbances of various frequencies are compared with the experimentally measured pressure fluctuation spectra at different streamwise positions. It is shown that the linear stability calculations predict quite accurately the frequency of the most amplified disturbances and enable us to estimate reasonably well their relative amplitudes.

  17. AX-GADGET: a new code for cosmological simulations of Fuzzy Dark Matter and Axion models

    NASA Astrophysics Data System (ADS)

    Nori, Matteo; Baldi, Marco

    2018-05-01

    We present a new module of the parallel N-Body code P-GADGET3 for cosmological simulations of light bosonic non-thermal dark matter, often referred to as Fuzzy Dark Matter (FDM). The dynamics of the FDM features a highly non-linear Quantum Potential (QP) that suppresses the growth of structures at small scales. Most of the previous attempts at FDM simulations either evolved suppressed initial conditions, completely neglecting the dynamical effects of the QP throughout cosmic evolution, or resorted to numerically challenging full-wave solvers. The code provides an interesting alternative, following the FDM evolution without impairing the overall performance. This is done by computing the QP acceleration through the Smoothed Particle Hydrodynamics (SPH) routines, with improved schemes to ensure precise and stable derivatives. As an extension of the P-GADGET3 code, it inherits all the additional physics modules implemented up to date, opening a wide range of possibilities to constrain FDM models and explore their degeneracies with other physical phenomena. Simulations are compared with analytical predictions and results of other codes, validating the QP as a crucial player in structure formation at small scales.

  18. Slow Crack Growth and Fatigue Life Prediction of Ceramic Components Subjected to Variable Load History

    NASA Technical Reports Server (NTRS)

    Jadaan, Osama

    2001-01-01

    Present capabilities of the NASA CARES/Life (Ceramic Analysis and Reliability Evaluation of Structures/Life) code include probabilistic life prediction of ceramic components subjected to fast fracture, slow crack growth (stress corrosion), and cyclic fatigue failure modes. Currently, this code has the capability to compute the time-dependent reliability of ceramic structures subjected to simple time-dependent loading. For example, in slow crack growth (SCG) type failure conditions CARES/Life can handle the cases of sustained and linearly increasing time-dependent loads, while for cyclic fatigue applications various types of repetitive constant amplitude loads can be accounted for. In real applications, applied loads are rarely that simple, but rather vary with time in more complex ways, such as engine start-up, shut-down, and dynamic and vibrational loads. In addition, when a given component is subjected to transient environmental and/or thermal conditions, the material properties also vary with time. The objective of this paper is to demonstrate a methodology capable of predicting the time-dependent reliability of components subjected to transient thermomechanical loads that takes into account the change in material response with time. In this paper, the dominant delayed failure mechanism is assumed to be SCG. This capability has been added to the NASA CARES/Life code, which has also been modified to interface with commercially available FEA codes executed for transient load histories. An example involving a ceramic exhaust valve subjected to combustion cycle loads is presented to demonstrate the viability of this methodology and the CARES/Life program.
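
    For background, SCG life prediction of this kind is usually built on a power-law crack-velocity model; in generic notation (not necessarily the exact CARES/Life formulation),

    \frac{da}{dt} = A\left(\frac{K_I}{K_{IC}}\right)^{N},
    \qquad
    \sigma_{eq} = \left[\frac{1}{t_f}\int_{0}^{t_f}\sigma(t)^{N}\,dt\right]^{1/N},

    where the second expression is the standard reduction of a transient stress history sigma(t) to an equivalent static stress over the failure time t_f, which is what makes arbitrary load histories tractable.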

  19. SAC: Sheffield Advanced Code

    NASA Astrophysics Data System (ADS)

    Griffiths, Mike; Fedun, Viktor; Mumford, Stuart; Gent, Frederick

    2013-06-01

    The Sheffield Advanced Code (SAC) is a fully non-linear MHD code designed for simulations of linear and non-linear wave propagation in gravitationally strongly stratified magnetized plasma. It was developed primarily for the forward modelling of helioseismological processes and for the coupling processes in the solar interior, photosphere, and corona; it is built on the well-known VAC platform that allows robust simulation of the macroscopic processes in gravitationally stratified (non-)magnetized plasmas. The code has no limitations of simulation length in time imposed by complications originating from the upper boundary, nor does it require implementation of special procedures to treat the upper boundaries. SAC inherited its modular structure from VAC, thereby allowing modification to easily add new physics.

  20. Stress concentration factors for circular, reinforced penetrations in pressurized cylindrical shells. Ph.D. Thesis - Virginia Univ.

    NASA Technical Reports Server (NTRS)

    Ramsey, J. W., Jr.

    1975-01-01

    The effect of a circular penetration on stresses in thin, shallow, linearly elastic cylindrical shells subject to internal pressure was investigated. Results provide numerical predictions of peak stress concentration factors around nonreinforced and reinforced penetrations in pressurized cylindrical shells. Analytical results were correlated with published formulas, as well as theoretical and experimental results. An accuracy study was made of the finite element program for each of the configurations considered important in pressure vessel technology. A formula is developed to predict the peak stress concentration factor for analysis and/or design in conjunction with the ASME Boiler and Pressure Vessel Code.

  1. Binary recursive partitioning: background, methods, and application to psychology.

    PubMed

    Merkle, Edgar C; Shaffer, Victoria A

    2011-02-01

    Binary recursive partitioning (BRP) is a computationally intensive statistical method that can be used in situations where linear models are often used. Instead of imposing many assumptions to arrive at a tractable statistical model, BRP simply seeks to accurately predict a response variable based on values of predictor variables. The method outputs a decision tree depicting the predictor variables that were related to the response variable, along with the nature of the variables' relationships. No significance tests are involved, and the tree's 'goodness' is judged based on its predictive accuracy. In this paper, we describe BRP methods in a detailed manner and illustrate their use in psychological research. We also provide R code for carrying out the methods.
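
    As a rough illustration in Python (the paper's own examples use R; the dataset and settings below are arbitrary), a depth-limited decision tree shows the two ingredients described above: recursive binary splits on predictor values, and evaluation by predictive accuracy rather than significance tests.

    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Each internal node splits the sample in two on a single predictor.
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)

    # 'Goodness' is judged by held-out predictive accuracy.
    print("held-out accuracy:", tree.score(X_te, y_te))
    print(export_text(tree, feature_names=list(X.columns), max_depth=2))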

  2. Advanced stability analysis for laminar flow control

    NASA Technical Reports Server (NTRS)

    Orszag, S. A.

    1981-01-01

    Five classes of problems are addressed: (1) the extension of the SALLY stability analysis code to the full eighth order compressible stability equations for three dimensional boundary layer; (2) a comparison of methods for prediction of transition using SALLY for incompressible flows; (3) a study of instability and transition in rotating disk flows in which the effects of Coriolis forces and streamline curvature are included; (4) a new linear three dimensional instability mechanism that predicts Reynolds numbers for transition to turbulence in planar shear flows in good agreement with experiment; and (5) a study of the stability of finite amplitude disturbances in axisymmetric pipe flow showing the stability of this flow to all nonlinear axisymmetric disturbances.

  3. UCODE_2005 and six other computer codes for universal sensitivity analysis, calibration, and uncertainty evaluation constructed using the JUPITER API

    USGS Publications Warehouse

    Poeter, Eileen E.; Hill, Mary C.; Banta, Edward R.; Mehl, Steffen; Christensen, Steen

    2006-01-01

    This report documents the computer codes UCODE_2005 and six post-processors. Together the codes can be used with existing process models to perform sensitivity analysis, data needs assessment, calibration, prediction, and uncertainty analysis. Any process model or set of models can be used; the only requirements are that models have numerical (ASCII or text only) input and output files, that the numbers in these files have sufficient significant digits, that all required models can be run from a single batch file or script, and that simulated values are continuous functions of the parameter values. Process models can include pre-processors and post-processors as well as one or more models related to the processes of interest (physical, chemical, and so on), making UCODE_2005 extremely powerful. An estimated parameter can be a quantity that appears in the input files of the process model(s), or a quantity used in an equation that produces a value that appears in the input files. In the latter situation, the equation is user-defined. UCODE_2005 can compare observations and simulated equivalents. The simulated equivalents can be any simulated value written in the process-model output files or can be calculated from simulated values with user-defined equations. The quantities can be model results, or dependent variables. For example, for ground-water models they can be heads, flows, concentrations, and so on. Prior, or direct, information on estimated parameters also can be considered. Statistics are calculated to quantify the comparison of observations and simulated equivalents, including a weighted least-squares objective function. In addition, data-exchange files are produced that facilitate graphical analysis. UCODE_2005 can be used fruitfully in model calibration through its sensitivity analysis capabilities and its ability to estimate parameter values that result in the best possible fit to the observations. Parameters are estimated using nonlinear regression: a weighted least-squares objective function is minimized with respect to the parameter values using a modified Gauss-Newton method or a double-dogleg technique. Sensitivities needed for the method can be read from files produced by process models that can calculate sensitivities, such as MODFLOW-2000, or can be calculated by UCODE_2005 using a more general, but less accurate, forward- or central-difference perturbation technique. Problems resulting from inaccurate sensitivities and solutions related to the perturbation techniques are discussed in the report. Statistics are calculated and printed for use in (1) diagnosing inadequate data and identifying parameters that probably cannot be estimated; (2) evaluating estimated parameter values; and (3) evaluating how well the model represents the simulated processes. Results from UCODE_2005 and codes RESIDUAL_ANALYSIS and RESIDUAL_ANALYSIS_ADV can be used to evaluate how accurately the model represents the processes it simulates. Results from LINEAR_UNCERTAINTY can be used to quantify the uncertainty of model simulated values if the model is sufficiently linear. Results from MODEL_LINEARITY and MODEL_LINEARITY_ADV can be used to evaluate model linearity and, thereby, the accuracy of the LINEAR_UNCERTAINTY results. UCODE_2005 can also be used to calculate nonlinear confidence and prediction intervals, which quantify the uncertainty of model simulated values when the model is not linear.
CORFAC_PLUS can be used to produce factors that allow intervals to account for model intrinsic nonlinearity and small-scale variations in system characteristics that are not explicitly accounted for in the model or the observation weighting. The six post-processing programs are independent of UCODE_2005 and can use the results of other programs that produce the required data-exchange files. UCODE_2005 and the other six codes are intended for use on any computer operating system. The programs con
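
    The core iteration described above, a weighted least-squares objective minimized by modified Gauss-Newton with sensitivities from forward- or central-difference perturbation, can be sketched as follows; the exponential "process model" is a stand-in for an external simulation, and all names and tolerances are illustrative:

    import numpy as np

    def model(p, t):                        # toy process model: y = p0 * exp(-p1 * t)
        return p[0] * np.exp(-p[1] * t)

    t = np.linspace(0.0, 5.0, 12)
    obs = model(np.array([10.0, 0.7]), t) + np.random.default_rng(2).normal(0, 0.1, t.size)
    W = np.eye(t.size) / 0.1**2             # weights = 1 / observation variance

    p = np.array([5.0, 0.2])                # starting parameter values
    for _ in range(20):
        r = obs - model(p, t)               # residuals
        J = np.empty((t.size, p.size))      # forward-difference sensitivities dy/dp
        for j in range(p.size):
            dp = 1e-6 * max(abs(p[j]), 1e-8)
            pj = p.copy()
            pj[j] += dp
            J[:, j] = (model(pj, t) - model(p, t)) / dp
        step = np.linalg.solve(J.T @ W @ J, J.T @ W @ r)   # Gauss-Newton step
        p = p + step
        if np.linalg.norm(step) < 1e-10 * (1.0 + np.linalg.norm(p)):
            break

    r = obs - model(p, t)
    print("estimated parameters:", p, "| weighted SSE:", float(r @ W @ r))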

  4. Low sidelobe level and high time resolution for metallic ultrasonic testing with linear-chirp-Golay coded excitation

    NASA Astrophysics Data System (ADS)

    Zhang, Jiaying; Gang, Tie; Ye, Chaofeng; Cong, Sen

    2018-04-01

    Linear-chirp-Golay (LCG)-coded excitation combined with pulse compression is proposed in this paper to improve the time resolution and suppress sidelobes in ultrasonic testing. The LCG-coded excitation is a binary complementary Golay pair with a linear-chirp signal applied to every sub-pulse. Compared with conventional excitation, a common ultrasonic testing method that uses a brief narrow pulse as the exciting signal, the performance of LCG-coded excitation, in terms of time resolution improvement and sidelobe suppression, is studied via numerical and experimental investigations. The numerical simulations are implemented using the Matlab k-Wave toolbox. The simulation results show that the time resolution of LCG excitation is 35.5% higher and the peak sidelobe level (PSL) is 57.6 dB lower than linear-chirp excitation with 2.4 MHz chirp bandwidth and 3 μs time duration. In the B-scan experiment, the time resolution of LCG excitation is higher and the PSL is lower than for conventional brief pulse excitation and chirp excitation. In terms of time resolution, the LCG-coded signal performs better than the chirp signal. Moreover, the impact of chirp bandwidth on the LCG-coded signal is less than that on the chirp signal. In addition, the sidelobe of the LCG-coded signal is lower than that of the chirp signal with pulse compression.
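
    The sidelobe cancellation that the Golay part of the excitation provides can be checked in a few lines: the autocorrelations of a complementary pair sum to 2N at zero lag and exactly zero elsewhere. The sketch below uses the standard doubling construction and omits the chirp applied to each sub-pulse:

    import numpy as np

    def golay_pair(n_doublings):
        a, b = np.array([1]), np.array([1])
        for _ in range(n_doublings):            # length doubles each step
            a, b = np.concatenate([a, b]), np.concatenate([a, -b])
        return a, b

    a, b = golay_pair(4)                        # length-16 complementary pair
    corr = np.correlate(a, a, "full") + np.correlate(b, b, "full")
    print(corr)                                 # zeros everywhere except 32 at zero lag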

  5. Predictions of one-group interfacial area transport in TRACE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Worosz, T.; Talley, J. D.; Kim, S.

    In current nuclear reactor system analysis codes utilizing the two-fluid model, flow regime dependent correlations are used to specify the interfacial area concentration (a_i). This approach does not capture the continuous evolution of the interfacial structures, and thus it can pose issues near the transition boundaries. Consequently, a pilot version of the system analysis code TRACE is being developed that employs the interfacial area transport equation (IATE). In this approach, dynamic estimation of a_i is provided through mechanistic models for bubble coalescence and breakup. The implementation of the adiabatic, one-group IATE into TRACE is assessed against experimental data from 50 air-water, two-phase flow conditions in pipes ranging in inner diameter from 2.54 to 20.32 cm for both vertical co-current upward and downward flows. Predictions of pressure, void fraction, bubble velocity, and a_i data are made. TRACE employing the conventional flow regime-based approach is found to underestimate a_i and can only predict linear trends, since the calculation is governed by the pressure. Furthermore, trends opposite to that of the data are predicted for some conditions. In contrast, TRACE with the one-group IATE demonstrates a significant improvement in predicting the experimental data, with an average disagreement of ±13%. Additionally, TRACE with the one-group IATE is capable of predicting nonlinear axial development of a_i by accounting for various bubble interaction mechanisms, such as coalescence and disintegration.
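
    For reference, the one-group transport equation being solved has the schematic form below (generic notation; the phi_j are the mechanistic bubble coalescence and breakup source/sink terms, and the bracketed term accounts for void-fraction change such as expansion along the pipe):

    \frac{\partial a_i}{\partial t} + \nabla\cdot(a_i \mathbf{v}_i)
      = \frac{2}{3}\,\frac{a_i}{\alpha}\left[\frac{\partial \alpha}{\partial t}
        + \nabla\cdot(\alpha\,\mathbf{v}_g)\right] + \sum_j \phi_j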

  6. General linear codes for fault-tolerant matrix operations on processor arrays

    NASA Technical Reports Server (NTRS)

    Nair, V. S. S.; Abraham, J. A.

    1988-01-01

    Various checksum codes have been suggested for fault-tolerant matrix computations on processor arrays. Use of these codes is limited due to potential roundoff and overflow errors. Numerical errors may also be misconstrued as errors due to physical faults in the system. In this paper, a set of linear codes is identified which can be used for fault-tolerant matrix operations such as matrix addition, multiplication, transposition, and LU-decomposition, with minimum numerical error. Encoding schemes are given for some of the example codes which fall under the general set of codes. With the help of experiments, a rule of thumb for the selection of a particular code for a given application is derived.
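
    The classic construction behind such schemes appends checksum rows and columns so that matrix operations preserve the encoding; a single corrupted product entry is then located by the intersection of the failing row and column sums. A minimal sketch (floating-point, with a tolerance-based check, since the roundoff issues discussed above are exactly what makes code selection nontrivial):

    import numpy as np

    rng = np.random.default_rng(3)
    A = rng.integers(0, 10, (4, 4)).astype(float)
    B = rng.integers(0, 10, (4, 4)).astype(float)

    Ac = np.vstack([A, A.sum(axis=0)])                  # column-checksum encoding of A
    Br = np.hstack([B, B.sum(axis=1, keepdims=True)])   # row-checksum encoding of B
    C = Ac @ Br                                         # product is a full-checksum matrix

    C[1, 2] += 5.0                                      # inject a fault in one "processor"
    bad_row = np.flatnonzero(~np.isclose(C[:-1, :-1].sum(axis=1), C[:-1, -1]))
    bad_col = np.flatnonzero(~np.isclose(C[:-1, :-1].sum(axis=0), C[-1, :-1]))
    print("fault located at row(s)", bad_row, "and column(s)", bad_col)   # -> [1] and [2]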

  7. DEMNUni: ISW, Rees-Sciama, and weak-lensing in the presence of massive neutrinos

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carbone, Carmelita; Petkova, Margarita; Dolag, Klaus, E-mail: carmelita.carbone@brera.inaf.it, E-mail: mpetkova@usm.lmu.de, E-mail: kdolag@mpa-garching.mpg.de

    2016-07-01

    We present, for the first time in the literature, a full reconstruction of the total (linear and non-linear) ISW/Rees-Sciama effect in the presence of massive neutrinos, together with its cross-correlations with CMB-lensing and weak-lensing signals. The present analyses make use of all-sky maps extracted via ray-tracing across the gravitational potential distribution provided by the ''Dark Energy and Massive Neutrino Universe'' (DEMNUni) project, a set of large-volume, high-resolution cosmological N-body simulations, where neutrinos are treated as separate collisionless particles. We correctly recover, at 1–2% accuracy, the linear predictions from CAMB. Concerning the CMB-lensing and weak-lensing signals, we also recover, with similar accuracy, the signal predicted by Boltzmann codes, once non-linear neutrino corrections to HALOFIT are accounted for. Interestingly, in the ISW/Rees-Sciama signal, and its cross correlation with lensing, we find an excess of power with respect to the massless case, due to free streaming neutrinos, roughly at the transition scale between the linear and non-linear regimes. The excess is ∼ 5 – 10% at l ∼ 100 for the ISW/Rees-Sciama auto power spectrum, depending on the total neutrino mass Mν, and becomes a factor of ∼ 4 for Mν = 0.3 eV, at l ∼ 600, for the ISW/Rees-Sciama cross power with CMB-lensing. This effect should be taken into account for the correct estimation of the CMB temperature bispectrum in the presence of massive neutrinos.

  8. New quantum codes constructed from quaternary BCH codes

    NASA Astrophysics Data System (ADS)

    Xu, Gen; Li, Ruihu; Guo, Luobin; Ma, Yuena

    2016-10-01

    In this paper, we first study the construction of new quantum error-correcting codes (QECCs) from three classes of quaternary imprimitive BCH codes. As a result, the improved maximal designed distance of these narrow-sense imprimitive Hermitian dual-containing quaternary BCH codes is determined to be much larger than the result given by Aly et al. (IEEE Trans Inf Theory 53:1183-1188, 2007) for each code length. Families of new QECCs are thus obtained, and the constructed QECCs have larger distance than those in the previous literature. Second, we apply a combinatorial construction to the imprimitive BCH codes with their corresponding primitive counterparts and construct many new linear quantum codes with good parameters, some of which have parameters exceeding the finite Gilbert-Varshamov bound for linear quantum codes.

  9. Discrete Spring Model for Predicting Delamination Growth in Z-Fiber Reinforced DCB Specimens

    NASA Technical Reports Server (NTRS)

    Ratcliffe, James G.; O'Brien, T. Kevin

    2004-01-01

    Beam theory analysis was applied to predict delamination growth in Double Cantilever Beam (DCB) specimens reinforced in the thickness direction with pultruded pins, known as Z-fibers. The specimen arms were modeled as cantilever beams supported by discrete springs, which were included to represent the pins. A bi-linear, irreversible damage law was used to represent Z-fiber damage, the parameters of which were obtained from previous experiments. Closed-form solutions were developed for specimen compliance and displacements corresponding to Z-fiber row locations. A solution strategy was formulated to predict delamination growth, in which the parent laminate mode I critical strain energy release rate was used as the criterion for delamination growth. The solution procedure was coded into FORTRAN 90, giving a dedicated software tool for performing the delamination prediction. Comparison of analysis results with previous analysis and experiment showed good agreement, yielding an initial verification for the analytical procedure.
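
    For orientation, the unpinned-DCB baseline that the discrete-spring model generalizes follows from simple beam theory; with crack length a, arm bending stiffness EI, specimen width b, and applied load P (standard textbook results, not the paper's pinned solution),

    C = \frac{2a^{3}}{3EI},
    \qquad
    G_I = \frac{P^{2}}{2b}\,\frac{dC}{da} = \frac{P^{2}a^{2}}{b\,EI},

    and the Z-fiber springs enter by supporting the arms behind the crack tip, reducing the compliance and hence the energy release rate at a given load.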

  10. Discrete Spring Model for Predicting Delamination Growth in Z-Fiber Reinforced DCB Specimens

    NASA Technical Reports Server (NTRS)

    Ratcliffe, James G.; O'Brien, T. Kevin

    2004-01-01

    Beam theory analysis was applied to predict delamination growth in DCB specimens reinforced in the thickness direction with pultruded pins, known as Z-fibers. The specimen arms were modeled as cantilever beams supported by discrete springs, which were included to represent the pins. A bi-linear, irreversible damage law was used to represent Z-fiber damage, the parameters of which were obtained from previous experiments. Closed-form solutions were developed for specimen compliance and displacements corresponding to Z-fiber row locations. A solution strategy was formulated to predict delamination growth, in which the parent laminate mode I fracture toughness was used as the criterion for delamination growth. The solution procedure was coded into FORTRAN 90, giving a dedicated software tool for performing the delamination prediction. Comparison of analysis results with previous analysis and experiment showed good agreement, yielding an initial verification for the analytical procedure.

  11. Linear chirp phase perturbing approach for finding binary phased codes

    NASA Astrophysics Data System (ADS)

    Li, Bing C.

    2017-05-01

    Binary phased codes have many applications in communication and radar systems. These applications require binary phased codes with low sidelobes in order to reduce interference and false detections. Barker codes satisfy these requirements and have the lowest maximum sidelobes. However, Barker codes have very limited code lengths (at most 13), while many applications, including low probability of intercept radar and spread spectrum communication, require much greater code lengths. The conventional techniques for finding binary phased codes in the literature include exhaustive search, neural networks, and evolutionary methods, and they all require very expensive computation for large code lengths. Therefore these techniques are limited to finding binary phased codes with small code lengths (less than 100). In this paper, by analyzing Barker codes, linear chirp, and P3 phases, we propose a new approach to find binary codes. Experiments show that the proposed method is able to find long low-sidelobe binary phased codes (code length >500) with reasonable computational cost.
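
    The defining property being exploited is easy to verify numerically: for the length-13 Barker code, the aperiodic autocorrelation peaks at 13 while every sidelobe has magnitude at most 1.

    import numpy as np

    barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])
    r = np.correlate(barker13, barker13, mode="full")
    peak = r.argmax()
    print("peak:", r[peak], "| max sidelobe:", np.abs(np.delete(r, peak)).max())  # 13 | 1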

  12. Number of minimum-weight code words in a product code

    NASA Technical Reports Server (NTRS)

    Miller, R. L.

    1978-01-01

    The number of minimum-weight code words in a product code is considered, where the code is viewed as a tensor product of linear codes over a finite field. Complete theorems and proofs are presented.
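
    One classical statement of this kind, checkable by brute force for small codes, is that the product of codes with minimum distances d1 and d2 has minimum distance d1*d2, with the count of minimum-weight words multiplying as well. The sketch below verifies this for the product of two [3,2,2] single-parity-check codes (an illustrative check, not the paper's proof):

    import itertools
    import numpy as np

    G = np.array([[1, 0, 1], [0, 1, 1]])       # generator of the [3,2,2] parity-check code
    C = [tuple(m @ G % 2) for m in itertools.product([0, 1], repeat=2)]

    # Generators of the product (tensor) code: outer products of codewords.
    gens = {tuple((np.outer(u, v) % 2).ravel()) for u in C for v in C}
    words = {(0,) * 9}                          # close the generator set under GF(2) addition
    for g in gens:
        words |= {tuple((np.array(w) + np.array(g)) % 2) for w in words}

    weights = sorted(int(sum(w)) for w in words if any(w))
    print("code size:", len(words))                              # 2^(2*2) = 16
    print("min weight:", weights[0])                             # 2 * 2 = 4
    print("# minimum-weight words:", weights.count(weights[0]))  # 3 * 3 = 9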

  13. ANNIT - An Efficient Inversion Algorithm based on Prediction Principles

    NASA Astrophysics Data System (ADS)

    Růžek, B.; Kolář, P.

    2009-04-01

    The solution of inverse problems is a meaningful task in geophysics. The amount of data is continuously increasing, methods of modeling are being improved, and computing facilities are advancing as well. The development of new and efficient algorithms and computer codes for both forward and inverse modeling therefore remains topical. ANNIT contributes to this stream, since it is a tool for the efficient solution of a set of non-linear equations. Typical geophysical problems are based on a parametric approach. The system is characterized by a vector of parameters p, and the response of the system is characterized by a vector of data d. The forward problem is usually represented by a unique mapping F(p)=d. The inverse problem is much more complex: the inverse mapping p=G(d) is available in an analytical or closed form only exceptionally, and generally it may not exist at all. Technically, both the forward and inverse mappings F and G are sets of non-linear equations. ANNIT handles this situation as follows: (i) joint subspaces {pD, pM} of the original data and model spaces D, M, respectively, are searched for, within which the forward mapping F is sufficiently smooth that the inverse mapping G exists; (ii) a numerical approximation of G in the subspaces {pD, pM} is found; (iii) a candidate solution is predicted by using this numerical approximation. ANNIT works iteratively, in cycles. The subspaces {pD, pM} are searched for by generating suitable populations of individuals (models) covering the data and model spaces. The approximation of the inverse mapping is made using three methods: (a) linear regression, (b) the Radial Basis Function Network technique, and (c) linear prediction (also known as "kriging"). The ANNIT algorithm also has a built-in archive of already evaluated models. Archive models are re-used in a suitable way, and thus the number of forward evaluations is minimized. ANNIT is implemented in both MATLAB and SCILAB. Numerical tests show good performance of the algorithm. Both versions and documentation are available on the Internet and anybody can download them. The goal of this presentation is to offer the algorithm and computer codes to anybody interested in the solution of inverse problems.
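
    A toy version of steps (i)-(iii), assuming a made-up two-parameter forward model and using plain linear regression as the inverse-map approximator (ANNIT also offers RBF networks and kriging):

    import numpy as np

    rng = np.random.default_rng(4)

    def forward(p):                          # toy forward mapping F(p) = d
        return np.array([p[0] + p[1]**2, np.exp(0.3 * p[0]) - p[1]])

    d_obs = forward(np.array([1.5, 0.8]))    # data generated by the "true" model

    p_pop = rng.uniform(0.0, 2.0, (200, 2))  # population of models covering model space
    d_pop = np.array([forward(p) for p in p_pop])

    # Fit the approximate inverse mapping G: d -> p on the sampled pairs.
    D = np.column_stack([np.ones(len(d_pop)), d_pop])
    W, *_ = np.linalg.lstsq(D, p_pop, rcond=None)

    p_candidate = np.concatenate([[1.0], d_obs]) @ W      # predicted candidate solution
    print("candidate:", p_candidate,
          "| misfit:", np.linalg.norm(forward(p_candidate) - d_obs))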

  14. Predictive Coding of Dynamical Variables in Balanced Spiking Networks

    PubMed Central

    Boerlin, Martin; Machens, Christian K.; Denève, Sophie

    2013-01-01

    Two observations about the cortex have puzzled neuroscientists for a long time. First, neural responses are highly variable. Second, the level of excitation and inhibition received by each neuron is tightly balanced at all times. Here, we demonstrate that both properties are necessary consequences of neural networks that represent information efficiently in their spikes. We illustrate this insight with spiking networks that represent dynamical variables. Our approach is based on two assumptions: We assume that information about dynamical variables can be read out linearly from neural spike trains, and we assume that neurons only fire a spike if that improves the representation of the dynamical variables. Based on these assumptions, we derive a network of leaky integrate-and-fire neurons that is able to implement arbitrary linear dynamical systems. We show that the membrane voltage of the neurons is equivalent to a prediction error about a common population-level signal. Among other things, our approach allows us to construct an integrator network of spiking neurons that is robust against many perturbations. Most importantly, neural variability in our networks cannot be equated to noise. Despite exhibiting the same single unit properties as widely used population code models (e.g. tuning curves, Poisson distributed spike trains), balanced networks are orders of magnitude more reliable. Our approach suggests that spikes do matter when considering how the brain computes, and that the reliability of cortical representations could have been strongly underestimated. PMID:24244113

  15. Non-linear hydrodynamical evolution of rotating relativistic stars: numerical methods and code tests

    NASA Astrophysics Data System (ADS)

    Font, José A.; Stergioulas, Nikolaos; Kokkotas, Kostas D.

    2000-04-01

    We present numerical hydrodynamical evolutions of rapidly rotating relativistic stars, using an axisymmetric, non-linear relativistic hydrodynamics code. We use four different high-resolution shock-capturing (HRSC) finite-difference schemes (based on approximate Riemann solvers) and compare their accuracy in preserving uniformly rotating stationary initial configurations in long-term evolutions. Among these four schemes, we find that the third-order piecewise parabolic method scheme is superior in maintaining the initial rotation law in long-term evolutions, especially near the surface of the star. It is further shown that HRSC schemes are suitable for the evolution of perturbed neutron stars and for the accurate identification (via Fourier transforms) of normal modes of oscillation. This is demonstrated for radial and quadrupolar pulsations in the non-rotating limit, where we find good agreement with frequencies obtained with a linear perturbation code. The code can be used for studying small-amplitude or non-linear pulsations of differentially rotating neutron stars, while our present results serve as testbed computations for three-dimensional general-relativistic evolution codes.

  16. Incorporating Non-Linear Sorption into High Fidelity Subsurface Reactive Transport Models

    NASA Astrophysics Data System (ADS)

    Matott, L. S.; Rabideau, A. J.; Allen-King, R. M.

    2014-12-01

    A variety of studies, including multiple NRC (National Research Council) reports, have stressed the need for simulation models that can provide realistic predictions of contaminant behavior during the groundwater remediation process, most recently highlighting the specific technical challenges of "back diffusion and desorption in plume models". For a typically sized remediation site, a minimum of about 70 million grid cells is required to resolve the desired cm-level thickness of the low-permeability lenses responsible for driving the back-diffusion phenomena. Such discretization is nearly three orders of magnitude more cells than are typically seen in modeling practice using public domain codes like RT3D (Reactive Transport in Three Dimensions). Consequently, various extensions have been made to the RT3D code to support efficient modeling of recently proposed dual-mode non-linear sorption processes (e.g. Polanyi with linear partitioning) at high-fidelity scales of grid resolution. These extensions have facilitated development of exploratory models in which contaminants are introduced into an aquifer via an extended multi-decade "release period" and allowed to migrate under natural conditions for centuries. These realistic simulations of contaminant loading and migration provide high fidelity representation of the underlying diffusion and sorption processes that control remediation. Coupling such models with decision support processes is expected to facilitate improved long-term management of complex remediation sites that have proven intractable to conventional remediation strategies.
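
    A generic dual-mode isotherm of the kind referred to above couples a linear partitioning term with a saturating nonlinear term, schematically (the paper's extensions use a Polanyi-type expression for the nonlinear part; the Langmuir form below is only a common stand-in),

    q(C) = K_d\,C + \frac{Q_0\,b\,C}{1 + b\,C},

    which reduces to linear sorption at low concentration and saturates at high concentration, producing the concentration-dependent retardation that high-fidelity back-diffusion simulations must resolve.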

  17. Code Samples Used for Complexity and Control

    NASA Astrophysics Data System (ADS)

    Ivancevic, Vladimir G.; Reid, Darryn J.

    2015-11-01

    The following sections are included: * MathematicaⓇ Code * Generic Chaotic Simulator * Vector Differential Operators * NLS Explorer * 2C++ Code * C++ Lambda Functions for Real Calculus * Accelerometer Data Processor * Simple Predictor-Corrector Integrator * Solving the BVP with the Shooting Method * Linear Hyperbolic PDE Solver * Linear Elliptic PDE Solver * Method of Lines for a Set of the NLS Equations * C# Code * Iterative Equation Solver * Simulated Annealing: A Function Minimum * Simple Nonlinear Dynamics * Nonlinear Pendulum Simulator * Lagrangian Dynamics Simulator * Complex-Valued Crowd Attractor Dynamics * Freeform Fortran Code * Lorenz Attractor Simulator * Complex Lorenz Attractor * Simple SGE Soliton * Complex Signal Presentation * Gaussian Wave Packet * Hermitian Matrices * Euclidean L2-Norm * Vector/Matrix Operations * Plain C-Code: Levenberg-Marquardt Optimizer * Free Basic Code: 2D Crowd Dynamics with 3000 Agents

  18. Construction of self-dual codes in the Rosenbloom-Tsfasman metric

    NASA Astrophysics Data System (ADS)

    Krisnawati, Vira Hari; Nisa, Anzi Lina Ukhtin

    2017-12-01

    A linear code is a very basic code and very useful in coding theory. Generally, a linear code is a code over a finite field in the Hamming metric. Among the most interesting families of codes, the family of self-dual codes is a very important one, because it includes some of the best-known error-correcting codes. The concept of the Hamming metric has been developed into the Rosenbloom-Tsfasman metric (RT-metric). The inner product in the RT-metric is different from the Euclidean inner product used to define duality in the Hamming metric. Most of the codes which are self-dual in the Hamming metric are not so in the RT-metric. Moreover, the generator matrix is very important for constructing a code, because it contains a basis of the code. Therefore, in this paper, we give some theorems and methods to construct self-dual codes in the RT-metric by considering properties of the inner product and of generator matrices. We also illustrate examples for each kind of construction.
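
    A quick Hamming-metric baseline check (the RT-metric duality studied in the paper uses a different inner product): a binary [n, n/2] code is self-dual exactly when its generator matrix satisfies G G^T = 0 over GF(2), as the [8,4] extended Hamming code does.

    import numpy as np

    G = np.array([[1, 0, 0, 0, 0, 1, 1, 1],
                  [0, 1, 0, 0, 1, 0, 1, 1],
                  [0, 0, 1, 0, 1, 1, 0, 1],
                  [0, 0, 0, 1, 1, 1, 1, 0]])

    self_orthogonal = not ((G @ G.T) % 2).any()
    print("self-dual (k = n/2 and G G^T = 0 mod 2):", self_orthogonal)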

  19. Developing small-area predictions for smoking and obesity prevalence in the United States for use in Environmental Public Health Tracking.

    PubMed

    Ortega Hinojosa, Alberto M; Davies, Molly M; Jarjour, Sarah; Burnett, Richard T; Mann, Jennifer K; Hughes, Edward; Balmes, John R; Turner, Michelle C; Jerrett, Michael

    2014-10-01

    Globally and in the United States, smoking and obesity are leading causes of death and disability. Reliable estimates of prevalence for these risk factors are often missing variables in public health surveillance programs. This may limit the capacity of public health surveillance to target interventions or to assess associations between other environmental risk factors (e.g., air pollution) and health, because smoking and obesity are often important confounders. Our objective was to generate prevalence estimates of smoking and obesity over small areas of the United States (i.e., at the ZIP code and census tract levels). We predicted smoking and obesity prevalence using a combined approach, first using a lasso-based variable selection procedure followed by a two-level random effects regression with a Poisson link clustered on state and county. We used data from the Behavioral Risk Factor Surveillance System (BRFSS) from 1991 to 2010 to estimate the model. We used 10-fold cross-validated mean squared errors and the variance of the residuals to test our model. To downscale the estimates we combined the prediction equations with 1990 and 2000 U.S. Census data for each of the four five-year time periods in this time range at the ZIP code and census tract levels. Several sensitivity analyses were conducted using models that included only basic terms, that accounted for spatial autocorrelation, and that used Generalized Linear Models without random effects. The two-level random effects model produced improved estimates compared to the fixed effects-only models. Estimates were particularly improved for the two-thirds of the conterminous U.S. where BRFSS data were available to estimate the county level random effects. We downscaled the smoking and obesity rate predictions to derive ZIP code and census tract estimates. To our knowledge these smoking and obesity predictions are the first to be developed for the entire conterminous U.S. for census tracts and ZIP codes. Our estimates could have significant utility for public health surveillance.

  20. Experimental characterization of an ultra-fast Thomson scattering x-ray source with three-dimensional time and frequency-domain analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuba, J; Slaughter, D R; Fittinghoff, D N

    We present a detailed comparison of the measured characteristics of Thomson backscattered x-rays produced at the PLEIADES (Picosecond Laser-Electron Interaction for the Dynamic Evaluation of Structures) facility at Lawrence Livermore National Laboratory to predicted results from a newly developed, fully three-dimensional time and frequency-domain code. Based on the relativistic differential cross section, this code has the capability to calculate time and space dependent spectra of the x-ray photons produced from linear Thomson scattering for both bandwidth-limited and chirped incident laser pulses. Spectral broadening of the scattered x-ray pulse resulting from the incident laser bandwidth, perpendicular wave vector components in the laser focus, and the transverse and longitudinal phase space of the electron beam are included. Electron beam energy, energy spread, and transverse phase space measurements of the electron beam at the interaction point are presented, and the corresponding predicted x-ray characteristics are determined. In addition, time-integrated measurements of the x-rays produced from the interaction are presented, and shown to agree well with the simulations.

  1. Analysis of Ninety Degree Flexure Tests for Characterization of Composite Transverse Tensile Strength

    NASA Technical Reports Server (NTRS)

    OBrien, T. Kevin; Krueger, Ronald

    2001-01-01

    Finite element (FE) analysis was performed on 3-point and 4-point bending test configurations of ninety-degree-oriented glass-epoxy and graphite-epoxy composite beams to identify deviations from beam theory predictions. Both linear and geometrically non-linear analyses were performed using the ABAQUS finite element code. The 3-point and 4-point bending specimens were first modeled with two-dimensional elements. Three-dimensional finite element analyses were then performed for selected 4-point bending configurations to study the stress distribution across the width of the specimens and to compare the results to the stresses computed from two-dimensional plane strain and plane stress analyses and to the stresses from beam theory. Stresses for all configurations were analyzed at load levels corresponding to the measured transverse tensile strength of the material.

  2. Error-Detecting Identification Codes for Algebra Students.

    ERIC Educational Resources Information Center

    Sutherland, David C.

    1990-01-01

    Discusses common error-detecting identification codes using linear algebra terminology to provide an interesting application of algebra. Presents examples from the International Standard Book Number, the Universal Product Code, bank identification numbers, and the ZIP code bar code. (YP)
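
    As a concrete instance of the kind of check described in the article, here is a minimal Python sketch of ISBN-10 validation; the weighted-sum rule is the standard one, and the helper name is ours.

      def isbn10_valid(isbn):
          """Check an ISBN-10 string: the weighted sum
          10*d1 + 9*d2 + ... + 1*d10 must be divisible by 11.
          The final digit may be 'X', standing for the value 10."""
          digits = [10 if c in 'xX' else int(c) for c in isbn if c not in '- ']
          if len(digits) != 10:
              return False
          return sum(w * d for w, d in zip(range(10, 0, -1), digits)) % 11 == 0

      print(isbn10_valid('0-306-40615-2'))  # True: a well-known valid example

    This scheme detects every single-digit error and every transposition of adjacent digits, which is why a modulus of 11 (a prime) is used rather than 10.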

  3. Algorithm for Lossless Compression of Calibrated Hyperspectral Imagery

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron B.; Klimesh, Matthew A.

    2010-01-01

    A two-stage predictive method was developed for lossless compression of calibrated hyperspectral imagery. The first prediction stage uses a conventional linear predictor intended to exploit spatial and/or spectral dependencies in the data. The compressor tabulates counts of the past values of the difference between this initial prediction and the actual sample value. To form the ultimate predicted value, in the second stage, these counts are combined with an adaptively updated weight function intended to capture information about data regularities introduced by the calibration process. Finally, prediction residuals are losslessly encoded using adaptive arithmetic coding. Algorithms of this type are commonly tested on a readily available collection of images from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) hyperspectral imager. On the standard calibrated AVIRIS hyperspectral images that are most widely used for compression benchmarking, the new compressor provides more than 0.5 bits/sample improvement over the previous best compression results. The algorithm has been implemented in Mathematica. The compression algorithm was demonstrated as beneficial on 12-bit calibrated AVIRIS images.
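
    A much-simplified Python sketch of the two-stage idea (illustrative only, not the flight algorithm): stage 1 is a trivial previous-sample linear predictor, and stage 2 corrects it using tabulated counts of past residuals, standing in for the adaptively weighted count table described above.

      from collections import Counter

      def two_stage_predict(samples):
          """Minimal sketch of a two-stage predictor: stage 1 is a trivial
          linear predictor (previous sample); stage 2 corrects it with the
          most frequently observed past residual, mimicking the idea of
          tabulating counts of prediction differences."""
          counts = Counter()
          residuals = []
          prev = 0
          for x in samples:
              stage1 = prev                                  # linear prediction
              bias = counts.most_common(1)[0][0] if counts else 0
              prediction = stage1 + bias                     # stage-2 correction
              residuals.append(x - prediction)               # value to entropy-code
              counts[x - stage1] += 1                        # update residual table
              prev = x
          return residuals

      print(two_stage_predict([10, 12, 14, 16, 18, 20]))

    In the real compressor the residual stream would then be fed to an adaptive arithmetic coder; here the point is only that the second stage learns a systematic bias the first stage misses.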

  4. A perturbative approach to the redshift space correlation function: beyond the Standard Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bose, Benjamin; Koyama, Kazuya, E-mail: benjamin.bose@port.ac.uk, E-mail: kazuya.koyama@port.ac.uk

    We extend our previous redshift space power spectrum code to the redshift space correlation function. Here we focus on the Gaussian Streaming Model (GSM). Again, the code accommodates a wide range of modified gravity and dark energy models. For the non-linear real space correlation function used in the GSM we use the Fourier transform of the RegPT 1-loop matter power spectrum. We compare predictions of the GSM for a Vainshtein screened and Chameleon screened model as well as GR. These predictions are compared to the Fourier transform of the Taruya, Nishimichi and Saito (TNS) redshift space power spectrum model which is fit to N-body data. We find very good agreement between the Fourier transform of the TNS model and the GSM predictions, with ≤ 6% deviations in the first two correlation function multipoles for all models for redshift space separations in 50 Mpc/h ≤ s ≤ 180 Mpc/h. Excellent agreement is found in the differences between the modified gravity and GR multipole predictions for both approaches to the redshift space correlation function, highlighting their matched ability in picking up deviations from GR. We elucidate the timeliness of such non-standard templates at the dawn of stage-IV surveys and discuss necessary preparations and extensions needed for upcoming high quality data.

  5. Non-linear blend coding in the moth antennal lobe emerges from random glomerular networks

    PubMed Central

    Capurro, Alberto; Baroni, Fabiano; Olsson, Shannon B.; Kuebler, Linda S.; Karout, Salah; Hansson, Bill S.; Pearce, Timothy C.

    2012-01-01

    Neural responses to odor blends often exhibit non-linear interactions to blend components. The first olfactory processing center in insects, the antennal lobe (AL), exhibits a complex network connectivity. We attempt to determine if non-linear blend interactions can arise purely as a function of the AL network connectivity itself, without necessitating additional factors such as competitive ligand binding at the periphery or intrinsic cellular properties. To assess this, we compared blend interactions among responses from single neurons recorded intracellularly in the AL of the moth Manduca sexta with those generated using a population-based computational model constructed from the morphologically based connectivity pattern of projection neurons (PNs) and local interneurons (LNs) with randomized connection probabilities from which we excluded detailed intrinsic neuronal properties. The model accurately predicted most of the proportions of blend interaction types observed in the physiological data. Our simulations also indicate that input from LNs is important in establishing both the type of blend interaction and the nature of the neuronal response (excitation or inhibition) exhibited by AL neurons. For LNs, the only input that significantly impacted the blend interaction type was received from other LNs, while for PNs the input from olfactory sensory neurons and other PNs contributed agonistically with the LN input to shape the AL output. Our results demonstrate that non-linear blend interactions can be a natural consequence of AL connectivity, and highlight the importance of lateral inhibition as a key feature of blend coding to be addressed in future experimental and computational studies. PMID:22529799

  6. With or without you: predictive coding and Bayesian inference in the brain

    PubMed Central

    Aitchison, Laurence; Lengyel, Máté

    2018-01-01

    Two theoretical ideas have emerged recently with the ambition to provide a unifying functional explanation of neural population coding and dynamics: predictive coding and Bayesian inference. Here, we describe the two theories and their combination into a single framework: Bayesian predictive coding. We clarify how the two theories can be distinguished, despite sharing core computational concepts and addressing an overlapping set of empirical phenomena. We argue that predictive coding is an algorithmic / representational motif that can serve several different computational goals of which Bayesian inference is but one. Conversely, while Bayesian inference can utilize predictive coding, it can also be realized by a variety of other representations. We critically evaluate the experimental evidence supporting Bayesian predictive coding and discuss how to test it more directly. PMID:28942084

  7. Permanent draft genome sequence of Comamonas testosteroni KF-1

    PubMed Central

    Weiss, Michael; Kesberg, Anna I.; LaButti, Kurt M.; Pitluck, Sam; Bruce, David; Hauser, Loren; Copeland, Alex; Woyke, Tanja; Lowry, Stephen; Lucas, Susan; Land, Miriam; Goodwin, Lynne; Kjelleberg, Staffan; Cook, Alasdair M.; Buhmann, Matthias; Thomas, Torsten; Schleheck, David

    2013-01-01

    Comamonas testosteroni KF-1 is a model organism for the elucidation of the novel biochemical degradation pathways for xenobiotic 4-sulfophenylcarboxylates (SPC) formed during biodegradation of synthetic 4-sulfophenylalkane surfactants (linear alkylbenzenesulfonates, LAS) by bacterial communities. Here we describe the features of this organism, together with the complete genome sequence and annotation. The 6,026,527 bp long chromosome (one sequencing gap) exhibits an average G+C content of 61.79% and is predicted to encode 5,492 protein-coding genes and 114 RNA genes. PMID:23991256

  8. Nonlinear Transient Problems Using Structure Compatible Heat Transfer Code

    NASA Technical Reports Server (NTRS)

    Hou, Gene

    2000-01-01

    The report documents a recent effort to enhance a transient linear heat transfer code so that it can solve nonlinear problems. The linear heat transfer code was originally developed by Dr. Kim Bey of NASA Langley and is called the Structure-Compatible Heat Transfer (SCHT) code. The report includes four parts. The first part outlines the formulation of the heat transfer problem of concern. The second and third parts give detailed procedures for constructing the nonlinear finite element equations and the Jacobian matrices required by the nonlinear iterative method, the Newton-Raphson method. The final part summarizes the results of numerical experiments on the newly enhanced SCHT code.
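
    The Newton-Raphson loop at the heart of such a nonlinear enhancement can be sketched generically in Python; the toy scalar "heat balance" below stands in for the assembled finite element residual and Jacobian, and all names and values are illustrative.

      import numpy as np

      def newton_raphson(residual, jacobian, u0, tol=1e-10, max_iter=50):
          """Generic Newton-Raphson loop of the kind used to drive a
          nonlinear finite element solve: linearize R(u) = 0 about the
          current iterate and solve J(u) du = -R(u) until convergence."""
          u = np.asarray(u0, dtype=float)
          for _ in range(max_iter):
              r = residual(u)
              if np.linalg.norm(r) < tol:
                  break
              u = u + np.linalg.solve(jacobian(u), -r)
          return u

      # Toy nonlinear 'heat balance': conductivity grows with T (illustrative)
      R = lambda T: np.array([(1 + 0.1 * T[0]) * T[0] - 2.0])
      J = lambda T: np.array([[1 + 0.2 * T[0]]])
      print(newton_raphson(R, J, [1.0]))   # converges to ~1.708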

  9. Simulations of initial MHD experiments on the Madison Dynamo Experiment

    NASA Astrophysics Data System (ADS)

    O'Connell, R.; Forest, C. B.; Goldwin, J. M.; Kendrick, R. D.; Canary, H. W.; Nornberg, M. D.; Jaun, A.

    1999-11-01

    Initial experiments for a liquid metal MHD device have been modelled using measurements from geometrically similar water experiments. In the low B limit the water flows are the same as sodium flows. Two codes have been written to predict 1) linear stability of the system and 2) the response of the system to an externally applied vertical magnetic field, using measured velocity profiles. Predictions are made for a first set of MHD experiments, including: a) demonstration of the distortion and amplification of externally applied magnetic fields by sheared flows, b) demonstration of the β-effect by measurement of the turbulent conductivity, c) demonstration of a turbulent α effect and d) characterization of magnetic eigenmodes.

  10. Investigations of flowfields found in typical combustor geometries

    NASA Technical Reports Server (NTRS)

    Lilley, D. G.

    1984-01-01

    Studies are concerned with experimental and theoretical research on 2-D axisymmetric geometries under low speed, nonreacting, turbulent, swirling flow conditions. The flow enters the test section and proceeds into a larger chamber (the linear expansion ratio D/d = 2, 1.5 and 1) via a sudden or gradual expansion (side wall angle alpha = 90 and 45 degrees). A weak or strong nozzle (of area ratio A/a = 2 and 4) may be positioned downstream at x/D = 2 to form a contraction exit to the test section. Inlet swirl vanes are adjustable to a variety of vane angles, with values of theta = 0, 38, 45, 60 and 70 degrees being emphasized. The objective is to determine the effect of these parameters on isothermal flow field patterns, time-mean velocities and turbulence quantities, and to establish an improved simulation in the form of a computer prediction code equipped with a suitable turbulence model. The goal of the ongoing research is to perform experiments and complementary computations that will yield improved calculation capability. This involves performing experiments in which time-mean and turbulence quantities are measured, and taking input conditions and running an existing prediction code for a variety of test cases so as to compare predictions against experiment.

  11. Dynamical heterogeneities and mechanical non-linearities: Modeling the onset of plasticity in polymer in the glass transition.

    PubMed

    Masurel, R J; Gelineau, P; Lequeux, F; Cantournet, S; Montes, H

    2017-12-27

    In this paper we focus on the role of dynamical heterogeneities in the non-linear response of polymers in the glass transition domain. We start from a simple coarse-grained model that assumes a random distribution of the initial local relaxation times and that quantitatively describes the linear viscoelasticity of a polymer in the glass transition regime. We extend this model to non-linear mechanics by assuming a local Eyring stress dependence of the relaxation times. Implementing the model in a finite element mechanics code, we derive the mechanical properties and the local mechanical fields at the beginning of the non-linear regime. The model predicts a narrowing of the distribution of relaxation times and the storage of a part of the mechanical energy (internal stress) transferred to the material during stretching in this temperature range. We show that the stress field is not spatially correlated under and after loading and follows a Gaussian distribution. In addition, the strain field exhibits shear bands, but the strain distribution is narrow. Hence, most of the mechanical quantities can be calculated analytically, to a very good approximation, with the simple assumption that the strain rate is constant.

  12. Hybrid finite element/waveguide mode analysis of passive RF devices

    NASA Astrophysics Data System (ADS)

    McGrath, Daniel T.

    1993-07-01

    A numerical solution for time-harmonic electromagnetic fields in two-port passive radio frequency (RF) devices has been developed, implemented in a computer code, and validated. Vector finite elements are used to represent the fields in the device interior, and field continuity across waveguide apertures is enforced by matching the interior solution to a sum of waveguide modes. Consequently, the mesh may end at the aperture instead of extending into the waveguide. The report discusses the variational formulation and its reduction to a linear system using Galerkin's method. It describes the computer code, including its interface to commercial CAD software used for geometry generation. It presents validation results for waveguide discontinuities, coaxial transitions, and microstrip circuits. They demonstrate that the method is an effective and versatile tool for predicting the performance of passive RF devices.

  13. Time domain simulation of nonlinear acoustic beams generated by rectangular pistons with application to harmonic imaging

    NASA Astrophysics Data System (ADS)

    Yang, Xinmai; Cleveland, Robin O.

    2005-01-01

    A time-domain numerical code (the so-called Texas code) that solves the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation has been extended from an axis-symmetric coordinate system to a three-dimensional (3D) Cartesian coordinate system. The code accounts for diffraction (in the parabolic approximation), nonlinearity and absorption and dispersion associated with thermoviscous and relaxation processes. The 3D time domain code was shown to be in agreement with benchmark solutions for circular and rectangular sources, focused and unfocused beams, and linear and nonlinear propagation. The 3D code was used to model the nonlinear propagation of diagnostic ultrasound pulses through tissue. The prediction of the second-harmonic field was sensitive to the choice of frequency-dependent absorption: a frequency squared f^2 dependence produced a second-harmonic field which peaked closer to the transducer and had a lower amplitude than that computed for an f^1.1 dependence. In comparing spatial maps of the harmonics we found that the second harmonic had dramatically reduced amplitude in the near field and also lower amplitude side lobes in the focal region than the fundamental. These findings were consistent for both uniform and apodized sources and could be contributing factors in the improved imaging reported with clinical scanners using tissue harmonic imaging.

  14. Time domain simulation of nonlinear acoustic beams generated by rectangular pistons with application to harmonic imaging.

    PubMed

    Yang, Xinmai; Cleveland, Robin O

    2005-01-01

    A time-domain numerical code (the so-called Texas code) that solves the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation has been extended from an axis-symmetric coordinate system to a three-dimensional (3D) Cartesian coordinate system. The code accounts for diffraction (in the parabolic approximation), nonlinearity and absorption and dispersion associated with thermoviscous and relaxation processes. The 3D time domain code was shown to be in agreement with benchmark solutions for circular and rectangular sources, focused and unfocused beams, and linear and nonlinear propagation. The 3D code was used to model the nonlinear propagation of diagnostic ultrasound pulses through tissue. The prediction of the second-harmonic field was sensitive to the choice of frequency-dependent absorption: a frequency squared f^2 dependence produced a second-harmonic field which peaked closer to the transducer and had a lower amplitude than that computed for an f^1.1 dependence. In comparing spatial maps of the harmonics we found that the second harmonic had dramatically reduced amplitude in the near field and also lower amplitude side lobes in the focal region than the fundamental. These findings were consistent for both uniform and apodized sources and could be contributing factors in the improved imaging reported with clinical scanners using tissue harmonic imaging.

  15. Self-Scheduling Parallel Methods for Multiple Serial Codes with Application to WOPWOP

    NASA Technical Reports Server (NTRS)

    Long, Lyle N.; Brentner, Kenneth S.

    2000-01-01

    This paper presents a scheme for efficiently running a large number of serial jobs on parallel computers. Two examples are given of computer programs that run relatively quickly, but often they must be run numerous times to obtain all the results needed. It is very common in science and engineering to have codes that are not massive computing challenges in themselves, but due to the number of instances that must be run, they do become large-scale computing problems. The two examples given here represent common problems in aerospace engineering: aerodynamic panel methods and aeroacoustic integral methods. The first example simply solves many systems of linear equations. This is representative of an aerodynamic panel code where someone would like to solve for numerous angles of attack. The complete code for this first example is included in the appendix so that it can be readily used by others as a template. The second example is an aeroacoustics code (WOPWOP) that solves the Ffowcs Williams Hawkings equation to predict the far-field sound due to rotating blades. In this example, one quite often needs to compute the sound at numerous observer locations, hence parallelization is utilized to automate the noise computation for a large number of observers.
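
    The self-scheduling pattern itself is easy to sketch in Python: idle workers simply pull the next case from a shared task queue. The job function below is a hypothetical stand-in (the paper's actual jobs are a panel-method solve and a WOPWOP noise computation).

      from multiprocessing import Pool
      import numpy as np

      def solve_case(angle_of_attack):
          """Stand-in for one serial job, e.g. a panel-code solve at one
          angle of attack: here we just solve a random linear system."""
          rng = np.random.default_rng(int(angle_of_attack * 100))
          A = rng.standard_normal((200, 200)) + 200 * np.eye(200)
          b = rng.standard_normal(200)
          return angle_of_attack, np.linalg.solve(A, b)[0]

      if __name__ == '__main__':
          cases = [a / 2 for a in range(21)]   # 0 .. 10 degrees
          with Pool() as pool:
              # imap_unordered hands each idle worker the next case, which
              # is exactly the self-scheduling pattern described above
              for angle, x0 in pool.imap_unordered(solve_case, cases):
                  print(f'alpha = {angle:4.1f}  first unknown = {x0:+.3e}')

    Because faster jobs free their workers sooner, load balances itself without any static partitioning of the case list.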

  16. Deep Hashing for Scalable Image Search.

    PubMed

    Lu, Jiwen; Liong, Venice Erin; Zhou, Jie

    2017-05-01

    In this paper, we propose a new deep hashing (DH) approach to learn compact binary codes for scalable image search. Unlike most existing binary code learning methods, which usually seek a single linear projection to map each sample into a binary feature vector, we develop a deep neural network to seek multiple hierarchical non-linear transformations to learn these binary codes, so that the non-linear relationship of samples can be well exploited. Our model is learned under three constraints at the top layer of the developed deep network: 1) the loss between the compact real-valued code and the learned binary vector is minimized, 2) the binary codes distribute evenly on each bit, and 3) different bits are as independent as possible. To further improve the discriminative power of the learned binary codes, we extend DH into supervised DH (SDH) and multi-label SDH by including a discriminative term in the objective function of DH, which simultaneously maximizes the inter-class variations and minimizes the intra-class variations of the learned binary codes in the single-label and multi-label settings, respectively. Extensive experimental results on eight widely used image search data sets show that our proposed methods achieve very competitive results compared with the state of the art.

  17. Estimation of Sonic Fatigue by Reduced-Order Finite Element Based Analyses

    NASA Technical Reports Server (NTRS)

    Rizzi, Stephen A.; Przekop, Adam

    2006-01-01

    A computationally efficient, reduced-order method is presented for prediction of sonic fatigue of structures exhibiting geometrically nonlinear response. A procedure to determine the nonlinear modal stiffness using commercial finite element codes allows the coupled nonlinear equations of motion in physical degrees of freedom to be transformed to a smaller coupled system of equations in modal coordinates. The nonlinear modal system is first solved using a computationally light equivalent linearization solution to determine if the structure responds to the applied loading in a nonlinear fashion. If so, a higher fidelity numerical simulation in modal coordinates is undertaken to more accurately determine the nonlinear response. Comparisons of displacement and stress response obtained from the reduced-order analyses are made with results obtained from numerical simulation in physical degrees-of-freedom. Fatigue life predictions from nonlinear modal and physical simulations are made using the rainflow cycle counting method in a linear cumulative damage analysis. Results computed for a simple beam structure under a random acoustic loading demonstrate the effectiveness of the approach and compare favorably with results obtained from the solution in physical degrees-of-freedom.

  18. Gstat: a program for geostatistical modelling, prediction and simulation

    NASA Astrophysics Data System (ADS)

    Pebesma, Edzer J.; Wesseling, Cees G.

    1998-01-01

    Gstat is a computer program for variogram modelling, and geostatistical prediction and simulation. It provides a generic implementation of the multivariable linear model with trends modelled as a linear function of coordinate polynomials or of user-defined base functions, and independent or dependent, geostatistically modelled, residuals. Simulation in gstat comprises conditional or unconditional (multi-) Gaussian sequential simulation of point values or block averages, or (multi-) indicator sequential simulation. Besides many of the popular options found in other geostatistical software packages, gstat offers the unique combination of (i) an interactive user interface for modelling variograms and generalized covariances (residual variograms), that uses the device-independent plotting program gnuplot for graphical display, (ii) support for several ascii and binary data and map file formats for input and output, (iii) a concise, intuitive and flexible command language, (iv) user customization of program defaults, (v) no built-in limits, and (vi) free, portable ANSI-C source code. This paper describes the class of problems gstat can solve, and addresses aspects of efficiency and implementation, managing geostatistical projects, and relevant technical details.

  19. Implementing a Bayes Filter in a Neural Circuit: The Case of Unknown Stimulus Dynamics.

    PubMed

    Sokoloski, Sacha

    2017-09-01

    In order to interact intelligently with objects in the world, animals must first transform neural population responses into estimates of the dynamic, unknown stimuli that caused them. The Bayesian solution to this problem is known as a Bayes filter, which applies Bayes' rule to combine population responses with the predictions of an internal model. The internal model of the Bayes filter is based on the true stimulus dynamics, and in this note, we present a method for training a theoretical neural circuit to approximately implement a Bayes filter when the stimulus dynamics are unknown. To do this we use the inferential properties of linear probabilistic population codes to compute Bayes' rule and train a neural network to compute approximate predictions by the method of maximum likelihood. In particular, we perform stochastic gradient descent on the negative log-likelihood of the neural network parameters with a novel approximation of the gradient. We demonstrate our methods on a finite-state, a linear, and a nonlinear filtering problem and show how the hidden layer of the neural network develops tuning curves consistent with findings in experimental neuroscience.
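
    For reference, the predict/update cycle of a textbook discrete Bayes filter, which the trained network approximates, can be sketched in a few lines of Python (toy two-state dynamics; all values illustrative).

      import numpy as np

      def bayes_filter(prior, transition, likelihoods):
          """Textbook discrete Bayes filter: alternate a prediction step
          through the stimulus dynamics (transition matrix) with a
          Bayes-rule update against each incoming response likelihood."""
          belief = np.asarray(prior, dtype=float)
          for like in likelihoods:
              belief = transition.T @ belief    # predict: p(x_t | y_{1:t-1})
              belief *= like                    # update: multiply in likelihood
              belief /= belief.sum()            # renormalize
          return belief

      T = np.array([[0.9, 0.1],                 # two-state toy dynamics
                    [0.2, 0.8]])
      obs = [np.array([0.7, 0.3]), np.array([0.6, 0.4])]
      print(bayes_filter([0.5, 0.5], T, obs))

    The paper's contribution is to learn the prediction step (the role played here by the known transition matrix) when the true dynamics are unavailable.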

  20. Investigation of Fluctuation-Induced Electron Transport in Hall Thrusters with a 2D Hybrid Code in the Azimuthal and Axial Coordinates

    NASA Astrophysics Data System (ADS)

    Fernandez, Eduardo; Borelli, Noah; Cappelli, Mark; Gascon, Nicolas

    2003-10-01

    Most current Hall thruster simulation efforts employ either 1D (axial), or 2D (axial and radial) codes. These descriptions crucially depend on the use of an ad-hoc perpendicular electron mobility. Several models for the mobility are typically invoked: classical, Bohm, empirically based, wall-induced, as well as combinations of the above. Experimentally, it is observed that fluctuations and electron transport depend on axial distance and operating parameters. Theoretically, linear stability analyses have predicted a number of unstable modes; yet the nonlinear character of the fluctuations and/or their contribution to electron transport remains poorly understood. Motivated by these observations, a 2D code in the azimuthal and axial coordinates has been written. In particular, the simulation self-consistently calculates the azimuthal disturbances resulting in fluctuating drifts, which in turn (if properly correlated with plasma density disturbances) result in fluctuation-driven electron transport. The characterization of the turbulence at various operating parameters and across the channel length is also the object of this study. A description of the hybrid code used in the simulation as well as the initial results will be presented.

  1. Performance analysis of a cascaded coding scheme with interleaved outer code

    NASA Technical Reports Server (NTRS)

    Lin, S.

    1986-01-01

    A cascaded coding scheme for a random error channel with a given bit-error rate is analyzed. In this scheme, the inner code C sub 1 is an (n sub 1, m sub 1 l) binary linear block code which is designed for simultaneous error correction and detection. The outer code C sub 2 is a linear block code with symbols from the Galois field GF (2 sup l) which is designed for correcting both symbol errors and erasures, and is interleaved with degree m sub 1. A procedure for computing the probability of a correct decoding is presented and an upper bound on the probability of a decoding error is derived. The bound provides much better results than the previous bound for a cascaded coding scheme with an interleaved outer code. Example schemes with inner codes ranging from high rates to very low rates are evaluated. Several schemes provide extremely high reliability even for very high bit-error rates, say 10(exp -1) to 10(exp -2).
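
    As a rough illustration of one ingredient of such an analysis, the probability that a t-error-correcting inner block code decodes a block correctly on a memoryless channel can be computed directly. This is a simplification: the paper's full procedure also tracks detected-but-uncorrected patterns that are handed to the outer code as erasures.

      from math import comb

      def p_block_correct(n, t, p):
          """Probability that a t-error-correcting length-n inner block
          code decodes correctly when each bit flips independently with
          probability p (simplified; ignores the erasure mechanism)."""
          return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(t + 1))

      for p in (1e-1, 1e-2):
          print(f'p = {p:.0e}:  P(correct inner block) = {p_block_correct(63, 3, p):.6f}')

    At p = 10(exp -1) the inner code alone fails badly, which is exactly why the interleaved outer code is needed to mop up residual symbol errors and erasures.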

  2. Effects of Target Fragmentation on Evaluation of LET Spectra From Space Radiation in Low-Earth Orbit (LEO) Environment: Impact on SEU Predictions

    NASA Technical Reports Server (NTRS)

    Shinn, J. L.; Cucinotta, F. A.; Badhwar, G. D.; ONeill, P. M.; Badavi, F. F.

    1995-01-01

    Recent improvements in the radiation transport code HZETRN/BRYNTRN and galactic cosmic ray environmental model have provided an opportunity to investigate the effects of target fragmentation on estimates of single event upset (SEU) rates for spacecraft memory devices. Since target fragments are mostly of very low energy, an SEU prediction model has been derived in terms of particle energy rather than linear energy transfer (LET) to account for nonlinear relationship between range and energy. Predictions are made for SEU rates observed on two Shuttle flights, each at low and high inclination orbit. Corrections due to track structure effects are made for both high energy ions with track structure larger than device sensitive volume and for low energy ions with dense track where charge recombination is important. Results indicate contributions from target fragments are relatively important at large shield depths (or any thick structure material) and at low inclination orbit. Consequently, a more consistent set of predictions for upset rates observed in these two flights is reached when compared to an earlier analysis with CREME model. It is also observed that the errors produced by assuming linear relationship in range and energy in the earlier analysis have fortuitously canceled out the errors for not considering target fragmentation and track structure effects.

  3. The complete mitochondrial genome of Hydra vulgaris (Hydroida: Hydridae).

    PubMed

    Pan, Hong-Chun; Fang, Hong-Yan; Li, Shi-Wei; Liu, Jun-Hong; Wang, Ying; Wang, An-Tai

    2014-12-01

    The complete mitochondrial genome of Hydra vulgaris (Hydroida: Hydridae) is composed of two linear DNA molecules. The mitochondrial DNA (mtDNA) molecule 1 is 8010 bp long and contains six protein-coding genes, the large subunit rRNA, methionine and tryptophan tRNAs, two pseudogenes each consisting of a partial copy of COI, and terminal sequences at the two ends of the linear mtDNA, while the mtDNA molecule 2 is 7576 bp long and contains seven protein-coding genes, the small subunit rRNA, a methionine tRNA, a pseudogene consisting of a partial copy of COI, and terminal sequences at the two ends of the linear mtDNA. The COI gene begins with GTG as its start codon, whereas the other 12 protein-coding genes start with the typical ATG initiation codon. In addition, all protein-coding genes terminate with TAA as the stop codon.

  4. The weight hierarchies and chain condition of a class of codes from varieties over finite fields

    NASA Technical Reports Server (NTRS)

    Wu, Xinen; Feng, Gui-Liang; Rao, T. R. N.

    1996-01-01

    The generalized Hamming weights of linear codes were first introduced by Wei. These are fundamental parameters related to the minimal overlap structures of the subcodes and very useful in several fields. It was found that the chain condition of a linear code is convenient in studying the generalized Hamming weights of the product codes. In this paper we consider a class of codes defined over some varieties in projective spaces over finite fields, whose generalized Hamming weights can be determined by studying the orbits of subspaces of the projective spaces under the actions of classical groups over finite fields, i.e., the symplectic groups, the unitary groups and orthogonal groups. We give the weight hierarchies and generalized weight spectra of the codes from Hermitian varieties and prove that the codes satisfy the chain condition.

  5. Transport modeling of convection dominated helicon discharges in Proto-MPEX with the B2.5-Eirene code

    NASA Astrophysics Data System (ADS)

    Owen, L. W.; Rapp, J.; Canik, J.; Lore, J. D.

    2017-11-01

    Data-constrained interpretative analyses of plasma transport in convection dominated helicon discharges in the Proto-MPEX linear device, and predictive calculations with additional Electron Cyclotron Heating/Electron Bernstein Wave (ECH/EBW) heating, are reported. The B2.5-Eirene code, in which the multi-fluid plasma code B2.5 is coupled to the kinetic Monte Carlo neutrals code Eirene, is used to fit double Langmuir probe measurements and fast camera data in front of a stainless-steel target. The absorbed helicon and ECH power (11 kW) and spatially constant anomalous transport coefficients that are deduced from fitting of the probe and optical data are additionally used for predictive simulations of complete axial distributions of the densities, temperatures, plasma flow velocities, particle and energy fluxes, and possible effects of alternate fueling and pumping scenarios. The somewhat hollow electron density and temperature radial profiles from the probe data suggest that Trivelpiece-Gould wave absorption is the dominant helicon electron heating source in the discharges analyzed here. There is no external ion heating, but the corresponding calculated ion temperature radial profile is not hollow. Rather it reflects ion heating by the electron-ion equilibration terms in the energy balance equations and ion radial transport resulting from the hollow density profile. With the absorbed power and the transport model deduced from fitting the sheath limited discharge data, calculated conduction limited higher recycling conditions were produced by reducing the pumping and increasing the gas fueling rate, resulting in an approximate doubling of the target ion flux and reduction of the target heat flux.

  6. Tail Biting Trellis Representation of Codes: Decoding and Construction

    NASA Technical Reports Server (NTRS)

    Shao, Rose Y.; Lin, Shu; Fossorier, Marc

    1999-01-01

    This paper presents two new iterative algorithms for decoding linear codes based on their tail biting trellises, one unidirectional and the other bidirectional. Both algorithms are computationally efficient and achieve virtually optimum error performance with a small number of decoding iterations. They outperform all previous suboptimal decoding algorithms. The bidirectional algorithm also reduces decoding delay. Also presented in the paper is a method for constructing tail biting trellises for linear block codes.

  7. Turbine blade forced response prediction using FREPS

    NASA Technical Reports Server (NTRS)

    Murthy, Durbha V.; Morel, Michael R.

    1993-01-01

    This paper describes a software system called FREPS (Forced REsponse Prediction System) that integrates structural dynamic, steady and unsteady aerodynamic analyses to efficiently predict the forced response dynamic stresses in axial flow turbomachinery blades due to aerodynamic and mechanical excitations. A flutter analysis capability is also incorporated into the system. The FREPS system performs aeroelastic analysis by modeling the motion of the blade in terms of its normal modes. The structural dynamic analysis is performed by a finite element code such as MSC/NASTRAN. The steady aerodynamic analysis is based on nonlinear potential theory, and the unsteady aerodynamic analysis is based on a linearization about the non-uniform potential mean flow. The program description and a presentation of its capabilities are reported herein. The effectiveness of the FREPS package is demonstrated on the High Pressure Oxygen Turbopump turbine of the Space Shuttle Main Engine. Both flutter and forced response analyses are performed and typical results are illustrated.

  8. Fast, scalable prediction of deleterious noncoding variants from functional and population genomic data.

    PubMed

    Huang, Yi-Fei; Gulko, Brad; Siepel, Adam

    2017-04-01

    Many genetic variants that influence phenotypes of interest are located outside of protein-coding genes, yet existing methods for identifying such variants have poor predictive power. Here we introduce a new computational method, called LINSIGHT, that substantially improves the prediction of noncoding nucleotide sites at which mutations are likely to have deleterious fitness consequences, and which, therefore, are likely to be phenotypically important. LINSIGHT combines a generalized linear model for functional genomic data with a probabilistic model of molecular evolution. The method is fast and highly scalable, enabling it to exploit the 'big data' available in modern genomics. We show that LINSIGHT outperforms the best available methods in identifying human noncoding variants associated with inherited diseases. In addition, we apply LINSIGHT to an atlas of human enhancers and show that the fitness consequences at enhancers depend on cell type, tissue specificity, and constraints at associated promoters.

  9. Fast Exact Search in Hamming Space With Multi-Index Hashing.

    PubMed

    Norouzi, Mohammad; Punjani, Ali; Fleet, David J

    2014-06-01

    There is growing interest in representing image data and feature descriptors using compact binary codes for fast near neighbor search. Although binary codes are motivated by their use as direct indices (addresses) into a hash table, codes longer than 32 bits are not being used as such, as doing so was thought to be ineffective. We introduce a rigorous way to build multiple hash tables on binary code substrings that enables exact k-nearest neighbor search in Hamming space. The approach is storage efficient and straightforward to implement. Theoretical analysis shows that the algorithm exhibits sub-linear run-time behavior for uniformly distributed codes. Empirical results show dramatic speedups over a linear scan baseline for datasets of up to one billion codes of 64, 128, or 256 bits.
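
    A minimal Python sketch of the substring-table idea (illustrative parameters and function names; the published algorithm additionally enumerates nearby substring buckets to support larger search radii, and verifies candidates by full Hamming distance):

      from collections import defaultdict

      def build_tables(codes, m):
          """Split each binary code into m disjoint substrings and index
          the database once per substring (assumes m divides the length)."""
          chunk = len(codes[0]) // m
          tables = [defaultdict(list) for _ in range(m)]
          for idx, c in enumerate(codes):
              for t in range(m):
                  tables[t][c[t*chunk:(t+1)*chunk]].append(idx)
          return tables, chunk

      def candidates(query, tables, chunk, r):
          """By the pigeonhole principle, any code within Hamming distance
          r of the query matches at least one substring exactly when
          r < m, so probing each substring table gives a complete
          candidate set for later exact verification."""
          assert r < len(tables), 'exact substring probes cover only r < m'
          found = set()
          for t, table in enumerate(tables):
              found.update(table.get(query[t*chunk:(t+1)*chunk], []))
          return found

      codes = ['00001111', '00001010', '11110000']
      tables, chunk = build_tables(codes, m=2)
      print(candidates('00001110', tables, chunk, r=1))   # {0, 1}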

  10. Sparse generalized linear model with L0 approximation for feature selection and prediction with big omics data.

    PubMed

    Liu, Zhenqiu; Sun, Fengzhu; McGovern, Dermot P

    2017-01-01

    Feature selection and prediction are the most important tasks for big data mining. The common strategies for feature selection in big data mining are L1, SCAD, and MC+. However, none of the existing algorithms optimizes L0, which penalizes the number of nonzero features directly. In this paper, we develop a novel sparse generalized linear model (GLM) with L0 approximation for feature selection and prediction with big omics data. The proposed approach approximates the L0 optimization directly. Even though the original L0 problem is non-convex, it is approximated by sequential convex optimizations with the proposed algorithm. The proposed method is easy to implement with only several lines of code. Novel adaptive ridge algorithms (L0ADRIDGE) for L0-penalized GLM with ultrahigh-dimensional big data are developed. The proposed approach outperforms other cutting-edge regularization methods, including SCAD and MC+, in simulations. When applied to integrated analysis of mRNA, microRNA, and methylation data from TCGA ovarian cancer, multilevel gene signatures associated with suboptimal debulking are identified simultaneously. The biological significance and potential clinical importance of those genes are further explored. The developed software, L0ADRIDGE, in MATLAB is available at https://github.com/liuzqx/L0adridge.

  11. Collisionless damping of flows in the TJ-II stellarator

    NASA Astrophysics Data System (ADS)

    Sánchez, E.; Kleiber, R.; Hatzky, R.; Borchardt, M.; Monreal, P.; Castejón, F.; López-Fraguas, A.; Sáez, X.; Velasco, J. L.; Calvo, I.; Alonso, A.; López-Bruna, D.

    2013-01-01

    The results of global linear gyrokinetic simulations of residual flows carried out with the code EUTERPE in the TJ-II three-dimensional geometry are reported. The linear response of the plasma to potential perturbations homogeneous on a magnetic surface shows several oscillation frequencies: a geodesic-acoustic-mode-like frequency, in qualitative agreement with the formula given by Sugama and Watanabe (2006 J. Plasma Phys. 72 825), and a much lower frequency oscillation in agreement with the predictions of Mishchenko et al (2008 Phys. Plasmas 15 072309) and Helander et al (2011 Plasma Phys. Control. Fusion 53 054006) for stellarators. The dependence of both oscillations on ion and electron temperatures and the magnetic configuration is studied. The low-frequency oscillations are in the frequency range supporting the long-range correlations between potential signals experimentally observed in TJ-II.

  12. Emittance Growth in the DARHT-II Linear Induction Accelerator

    DOE PAGES

    Ekdahl, Carl; Carlson, Carl A.; Frayer, Daniel K.; ...

    2017-10-03

    The dual-axis radiographic hydrodynamic test (DARHT) facility uses bremsstrahlung radiation source spots produced by the focused electron beams from two linear induction accelerators (LIAs) to radiograph large hydrodynamic experiments driven by high explosives. Radiographic resolution is determined by the size of the source spot, and beam emittance is the ultimate limitation to spot size. On the DARHT-II LIA, we measure an emittance higher than predicted by theoretical simulations, and even though this accelerator produces submillimeter source spots, we are exploring ways to improve the emittance. Some of the possible causes for the discrepancy have been investigated using particle-in-cell codes. Finally, the simulations establish that the most likely source of emittance growth is a mismatch of the beam to the magnetic transport, which can cause beam halo.

  13. Emittance Growth in the DARHT-II Linear Induction Accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ekdahl, Carl; Carlson, Carl A.; Frayer, Daniel K.

    The dual-axis radiographic hydrodynamic test (DARHT) facility uses bremsstrahlung radiation source spots produced by the focused electron beams from two linear induction accelerators (LIAs) to radiograph large hydrodynamic experiments driven by high explosives. Radiographic resolution is determined by the size of the source spot, and beam emittance is the ultimate limitation to spot size. On the DARHT-II LIA, we measure an emittance higher than predicted by theoretical simulations, and even though this accelerator produces submillimeter source spots, we are exploring ways to improve the emittance. Some of the possible causes for the discrepancy have been investigated using particle-in-cell codes. Finally, the simulations establish that the most likely source of emittance growth is a mismatch of the beam to the magnetic transport, which can cause beam halo.

  14. Assessment of Current Jet Noise Prediction Capabilities

    NASA Technical Reports Server (NTRS)

    Hunter, Craig A.; Bridges, James E.; Khavaran, Abbas

    2008-01-01

    An assessment was made of the capability of jet noise prediction codes over a broad range of jet flows, with the objective of quantifying current capabilities and identifying areas requiring future research investment. Three separate codes in NASA's possession, representative of two classes of jet noise prediction codes, were evaluated: one empirical and two statistical. The empirical code is the Stone Jet Noise Module (ST2JET) contained within the ANOPP aircraft noise prediction code. It is well documented and represents the state of the art in semi-empirical acoustic prediction codes, where virtual sources are attributed to various aspects of noise generation in each jet. These sources, in combination, predict the spectral directivity of a jet plume. A total of 258 jet noise cases were examined with the ST2JET code, each run requiring only fractions of a second to complete. Two statistical jet noise prediction codes were also evaluated, JeNo v1 and Jet3D. Fewer cases were run for the statistical prediction methods because they require substantially more resources, typically a Reynolds-Averaged Navier-Stokes solution of the jet, volume integration of the source statistical models over the entire plume, and a numerical solution of the governing propagation equation within the jet. In the evaluation process, substantial justification of the experimental datasets used in the evaluations was made. In the end, none of the current codes can predict jet noise within experimental uncertainty. The empirical code came within 2 dB on a 1/3-octave spectral basis for a wide range of flows. The statistical code Jet3D was within experimental uncertainty at broadside angles for hot supersonic jets, but errors in peak frequency and amplitude put it out of experimental uncertainty at cooler, lower speed conditions. Jet3D did not predict changes in directivity in the downstream angles. The statistical code JeNo v1 was within experimental uncertainty in predicting noise from cold subsonic jets at all angles, but did not predict changes with heating of the jet and did not account for directivity changes at supersonic conditions. Shortcomings addressed here give direction for future work relevant to the statistical-based prediction methods. A full report will be released as a chapter in a NASA publication assessing the state of the art in aircraft noise prediction.

  15. Matrix-Free Polynomial-Based Nonlinear Least Squares Optimized Preconditioning and its Application to Discontinuous Galerkin Discretizations of the Euler Equations

    DTIC Science & Technology

    2015-06-01

    Our method constructs a polynomial preconditioner using a nonlinear least squares (NLLS) algorithm. Such a preconditioner can be very attractive in scenarios where one must repeatedly solve a large system of linear equations and has an extremely fast parallel code for applying the underlying fixed linear operator.
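
    A sketch of the fitting step in Python, using only black-box matvecs. Plain linear least squares stands in here for the paper's nonlinear least squares formulation, and all function names (apply_A, poly_precond_coeffs) are ours.

      import numpy as np

      def poly_precond_coeffs(apply_A, n, degree, n_probes=20, seed=0):
          """Fit coefficients c so that p(A) = sum_k c_k A^k acts like an
          approximate inverse, using only black-box matvecs with A: we
          minimize ||A p(A) v - v|| over random probe vectors v."""
          rng = np.random.default_rng(seed)
          rows, rhs = [], []
          for _ in range(n_probes):
              v = rng.standard_normal(n)
              powers = [v]
              for _ in range(degree):
                  powers.append(apply_A(powers[-1]))  # A^k v via repeated matvec
              cols = [apply_A(p) for p in powers]     # A * (A^k v)
              rows.append(np.column_stack(cols))
              rhs.append(v)
          M = np.vstack(rows)
          c, *_ = np.linalg.lstsq(M, np.concatenate(rhs), rcond=None)
          return c

      A = np.diag([1.0, 2.0, 3.0, 4.0])               # toy SPD operator
      c = poly_precond_coeffs(lambda x: A @ x, n=4, degree=3)
      P = sum(ci * np.linalg.matrix_power(A, k) for k, ci in enumerate(c))
      print(np.round(A @ P, 3))                       # should approach identity

    The appeal is that both the fitting and the eventual application of p(A) reduce entirely to matvecs, so the preconditioner inherits whatever parallel efficiency the operator code already has.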

  16. A Very Fast and Angular Momentum Conserving Tree Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marcello, Dominic C., E-mail: dmarce504@gmail.com

    There are many methods used to compute the classical gravitational field in astrophysical simulation codes. With the exception of the typically impractical method of direct computation, none ensure conservation of angular momentum to machine precision. Under uniform time-stepping, the Cartesian fast multipole method of Dehnen (also known as the very fast tree code) conserves linear momentum to machine precision. We show that it is possible to modify this method in a way that conserves both angular and linear momenta.

  17. Comparison of GLIMPS and HFAST Stirling engine code predictions with experimental data

    NASA Technical Reports Server (NTRS)

    Geng, Steven M.; Tew, Roy C.

    1992-01-01

    Predictions from GLIMPS and HFAST design codes are compared with experimental data for the RE-1000 and SPRE free piston Stirling engines. Engine performance and available power loss predictions are compared. Differences exist between GLIMPS and HFAST loss predictions. Both codes require engine specific calibration to bring predictions and experimental data into agreement.

  18. Toward Optimal Manifold Hashing via Discrete Locally Linear Embedding.

    PubMed

    Ji, Rongrong; Liu, Hong; Cao, Liujuan; Liu, Di; Wu, Yongjian; Huang, Feiyue

    2017-11-01

    Binary code learning, also known as hashing, has received increasing attention in large-scale visual search. By transforming high-dimensional features into binary codes, the original Euclidean distance is approximated via the Hamming distance. More recently, it has been advocated that it is the manifold distance, rather than the Euclidean distance, that should be preserved in the Hamming space. However, it remains an open problem to directly preserve the manifold structure by hashing. In particular, one first needs to build the local linear embedding in the original feature space and then quantize that embedding to binary codes. Such two-step coding is problematic and suboptimal. Moreover, the off-line learning is extremely time and memory consuming, since it needs to calculate the similarity matrix of the original data. In this paper, we propose a novel hashing algorithm, termed discrete locally linear embedding hashing (DLLH), which well addresses the above challenges. DLLH directly reconstructs the manifold structure in the Hamming space, learning optimal hash codes that maintain the local linear relationships of the data points. To learn discrete locally linear embedding codes, we further propose a discrete optimization algorithm with an iterative parameter-updating scheme. Moreover, an anchor-based acceleration scheme, termed Anchor-DLLH, is introduced, which approximates the large similarity matrix by the product of two low-rank matrices. Experimental results on three widely used benchmark data sets, i.e., CIFAR10, NUS-WIDE, and YouTube Face, show the superior performance of the proposed DLLH over state-of-the-art approaches.

  19. Nonlinear dynamics of laser systems with elements of a chaos: Advanced computational code

    NASA Astrophysics Data System (ADS)

    Buyadzhi, V. V.; Glushkov, A. V.; Khetselius, O. Yu; Kuznetsova, A. A.; Buyadzhi, A. A.; Prepelitsa, G. P.; Ternovsky, V. B.

    2017-10-01

    A general, uniform chaos-geometric computational approach to the analysis, modelling, and prediction of the non-linear dynamics of quantum and laser systems (laser and quantum generator systems, etc.) with elements of deterministic chaos is briefly presented. The approach is based on advanced generalized techniques such as wavelet analysis, multi-fractal formalism, the mutual information approach, correlation integral analysis, the false nearest neighbour algorithm, Lyapunov exponent analysis, the surrogate data method, and prediction models. Numerical data are presented for the topological and dynamical invariants (in particular, the correlation, embedding, and Kaplan-Yorke dimensions, the Lyapunov exponents, the Kolmogorov entropy, and other parameters) of the dynamics of a laser system (a semiconductor GaAs/GaAlAs laser with retarded feedback) in chaotic and hyperchaotic regimes.

  20. Simulations of linear and Hamming codes using SageMath

    NASA Astrophysics Data System (ADS)

    Timur, Tahta D.; Adzkiya, Dieky; Soleha

    2018-03-01

    Digital data transmission over a noisy channel can distort the message being transmitted. The goal of coding theory is to ensure data integrity, that is, to find out whether and where noise has distorted the message and what the original message was. Data transmission consists of three stages: encoding, transmission, and decoding. Linear and Hamming codes are the codes discussed in this work; the encoding algorithms are based on the parity check and generator matrices, and the decoding algorithms are nearest neighbor and syndrome decoding. We aim to show that these processes can be simulated using SageMath software, which has a built-in class for coding theory in general and linear codes in particular. First we consider the message as a binary vector of size k. This message is then encoded to a vector of size n using the given algorithms. A noisy channel with a particular error probability is then created, over which the transmission takes place. The last task is decoding, which corrects and reverts the received message back to the original message whenever possible, that is, whenever the number of errors that occurred is smaller than or equal to the correcting radius of the code. In this paper we use two types of data for the simulations, namely vector and text data.
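
    For readers without SageMath at hand, here is an equivalent pure-Python/NumPy sketch of the Hamming(7,4) encode/decode cycle the paper simulates, using standard-form generator and parity-check matrices; syndrome decoding corrects any single bit error.

      import numpy as np

      # Hamming(7,4): generator and parity-check matrices in standard form
      G = np.array([[1, 0, 0, 0, 0, 1, 1],
                    [0, 1, 0, 0, 1, 0, 1],
                    [0, 0, 1, 0, 1, 1, 0],
                    [0, 0, 0, 1, 1, 1, 1]])
      H = np.array([[0, 1, 1, 1, 1, 0, 0],
                    [1, 0, 1, 1, 0, 1, 0],
                    [1, 1, 0, 1, 0, 0, 1]])

      def encode(msg):
          return (np.array(msg) @ G) % 2

      def decode(word):
          """Syndrome decoding: the syndrome of a single-bit error equals
          the corresponding column of H, which locates the flipped bit."""
          word = np.array(word)
          s = (H @ word) % 2
          if s.any():
              err = next(i for i in range(7) if (H[:, i] == s).all())
              word[err] ^= 1                   # correct the single error
          return word[:4]                      # first 4 bits are the message

      code = encode([1, 0, 1, 1])
      code[2] ^= 1                             # channel flips one bit
      print(decode(code))                      # recovers [1 0 1 1]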

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lougovski, P.; Uskov, D. B.

    Entanglement can effectively increase communication channel capacity as evidenced by dense coding that predicts a capacity gain of 1 bit when compared to entanglement-free protocols. However, dense coding relies on Bell states and when implemented using photons the capacity gain is bounded by 0.585 bits due to one's inability to discriminate between the four optically encoded Bell states. In this research we study the following question: Are there alternative entanglement-assisted protocols that rely only on linear optics, coincidence photon counting, and separable single-photon input states and at the same time provide a greater capacity gain than 0.585 bits? In this study, we show that besides the Bell states there is a class of bipartite four-mode two-photon entangled states that facilitate an increase in channel capacity. We also discuss how the proposed scheme can be generalized to the case of two-photon N-mode entangled states for N=6,8.

  2. Aeronautical audio broadcasting via satellite

    NASA Technical Reports Server (NTRS)

    Tzeng, Forrest F.

    1993-01-01

    A system design for aeronautical audio broadcasting, with C-band uplink and L-band downlink, via Inmarsat space segments is presented. Near-transparent-quality compression of 5-kHz bandwidth audio at 20.5 kbit/s is achieved based on a hybrid technique employing linear predictive modeling and transform-domain residual quantization. Concatenated Reed-Solomon/convolutional codes with quadrature phase shift keying are selected for bandwidth and power efficiency. RF bandwidth at 25 kHz per channel, and a decoded bit error rate at 10(exp -6) with E(sub b)/N(sub o) at 3.75 dB are obtained. An interleaver, scrambler, modem synchronization, and frame format were designed, and frequency-division multiple access was selected over code-division multiple access. A link budget computation based on a worst-case scenario indicates sufficient system power margins. Transponder occupancy analysis for 72 audio channels demonstrates ample remaining capacity to accommodate emerging aeronautical services.

  3. Computational fluid dynamic modelling of cavitation

    NASA Technical Reports Server (NTRS)

    Deshpande, Manish; Feng, Jinzhang; Merkle, Charles L.

    1993-01-01

    Models of sheet cavitation in cryogenic fluids are developed for use in Euler and Navier-Stokes codes. The models are based upon earlier potential-flow models but enable the cavity inception point, length, and shape to be determined as part of the computation. In the present paper, numerical solutions are compared with experimental measurements for both pressure distribution and cavity length. Comparisons between models are also presented. The CFD model provides a relatively simple modification to an existing code to enable cavitation performance predictions to be included. The analysis also has the added ability of incorporating thermodynamic effects of cryogenic fluids. Extensions of the current two-dimensional steady-state analysis to three dimensions and/or time-dependent flows are, in principle, straightforward, although geometrical issues become more complicated. Linearized models, however, offer promise of providing effective cavitation modeling in three dimensions. This analysis presents good potential for improved understanding of many phenomena associated with cavity flows.

  4. Design of Linear Accelerator (LINAC) tanks for proton therapy via Particle Swarm Optimization (PSO) and Genetic Algorithm (GA) approaches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Castellano, T.; De Palma, L.; Laneve, D.

    2015-07-01

    A homemade computer code for designing a Side-Coupled Linear Accelerator (SCL) has been written. It integrates a simplified model of SCL tanks with the Particle Swarm Optimization (PSO) algorithm. The code's main aim is to obtain useful guidelines for the design of Linear Accelerator (LINAC) resonant cavities. The design procedure, assisted by this approach, seems very promising, allowing future improvements towards the optimization of actual accelerating geometries. (authors)
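
    A bare-bones version of the PSO loop that such a design code might wrap around its cavity model could look as follows; all parameter names and values are illustrative, and `cost` stands in for the tank figure of merit:

    ```python
    import numpy as np

    def pso(cost, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, bounds=(-1, 1)):
        lo, hi = bounds
        x = np.random.uniform(lo, hi, (n_particles, dim))    # particle positions
        v = np.zeros_like(x)                                 # particle velocities
        pbest = x.copy()                                     # personal bests
        pcost = np.apply_along_axis(cost, 1, x)
        gbest = pbest[pcost.argmin()].copy()                 # global best
        for _ in range(iters):
            r1, r2 = np.random.rand(2, n_particles, dim)
            v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)  # inertia + pulls
            x = np.clip(x + v, lo, hi)
            c = np.apply_along_axis(cost, 1, x)
            better = c < pcost
            pbest[better], pcost[better] = x[better], c[better]
            gbest = pbest[pcost.argmin()].copy()
        return gbest, pcost.min()

    # Toy usage: minimize a shifted quadratic standing in for the cavity model.
    best_x, best_f = pso(lambda z: ((z - 0.3) ** 2).sum(), dim=5)
    ```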

  5. Monte Carlo Simulation of Nonlinear Radiation Induced Plasmas. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Wang, B. S.

    1972-01-01

    A Monte Carlo simulation model for radiation-induced plasmas with nonlinear properties due to recombination was developed, employing a piecewise-linearized predict-correct iterative technique. Several important variance reduction techniques were developed and incorporated into the model, including an antithetic-variates technique. This approach is especially efficient for plasma systems with inhomogeneous media, multiple dimensions, and irregular boundaries. The Monte Carlo code developed has been applied to the determination of the electron energy distribution function and related parameters for a noble gas plasma created by alpha-particle irradiation. The characteristics of the radiation-induced plasma involved are given.
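
    The antithetic-variates idea mentioned above is easy to demonstrate on a toy integrand (not the plasma model): pairing each sample u with 1 - u cancels much of the variance of a monotone integrand at the same sample budget.

    ```python
    import numpy as np
    rng = np.random.default_rng(0)

    f = lambda u: np.exp(u)        # toy integrand on [0, 1]; true mean = e - 1
    n = 100_000

    u = rng.random(n)
    plain = f(u)                                          # standard Monte Carlo
    anti = 0.5 * (f(u[: n // 2]) + f(1 - u[: n // 2]))    # antithetic pairs, same budget

    print(plain.mean(), plain.var() / n)
    print(anti.mean(),  anti.var() / (n // 2))            # much smaller variance
    ```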

  6. Low-delay predictive audio coding for the HIVITS HDTV codec

    NASA Astrophysics Data System (ADS)

    McParland, A. K.; Gilchrist, N. H. C.

    1995-01-01

    The status of work relating to predictive audio coding, as part of the European project on High Quality Video Telephone and HD(TV) Systems (HIVITS), is reported. The predictive coding algorithm is developed, along with six-channel audio coding and decoding hardware. Demonstrations of the audio codec operating in conjunction with the video codec are given.

  7. Overview of Recent Radiation Transport Code Comparisons for Space Applications

    NASA Astrophysics Data System (ADS)

    Townsend, Lawrence

    Recent advances in radiation transport code development for space applications have resulted in various comparisons of code predictions for a variety of scenarios and codes. Comparisons among both Monte Carlo and deterministic codes have been made and published by various groups and collaborations, including comparisons involving, but not limited to, HZETRN, HETC-HEDS, FLUKA, GEANT, PHITS, and MCNPX. In this work, an overview of recent code prediction inter-comparisons, including comparisons to available experimental data, is presented and discussed, with emphasis on the areas of agreement and disagreement among the various code predictions and published data.

  8. A simplified building airflow model for agent concentration prediction.

    PubMed

    Jacques, David R; Smith, David A

    2010-11-01

    A simplified building airflow model is presented that can be used to predict the spread of a contaminant agent from a chemical or biological attack. If the dominant means of agent transport throughout the building is an air-handling system operating at steady-state, a linear time-invariant (LTI) model can be constructed to predict the concentration in any room of the building as a result of either an internal or external release. While the model does not capture weather-driven and other temperature-driven effects, it is suitable for concentration predictions under average daily conditions. The model is easily constructed using information that should be accessible to a building manager, supplemented with assumptions based on building codes and standard air-handling system design practices. The results of the model are compared with a popular multi-zone model for a simple building and are demonstrated for building examples containing one or more air-handling systems. The model can be used for rapid concentration prediction to support low-cost placement strategies for chemical and biological detection sensors.
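
    A minimal sketch of such an LTI zone model, with two well-mixed zones coupled by a recirculating air handler; all volumes, flow rates, and the release profile are invented for illustration, not taken from the paper:

    ```python
    import numpy as np

    # Two well-mixed zones coupled by an air handler: c' = A c + B u, where c holds
    # the agent concentrations (g/m^3) and u is the release rate (g/s).
    V1, V2, q = 100.0, 150.0, 0.5           # zone volumes (m^3), recirculation flow (m^3/s)
    A = np.array([[-q / V1,  q / V1],
                  [ q / V2, -q / V2]])
    B = np.array([1.0 / V1, 0.0])           # release into zone 1

    c, dt = np.zeros(2), 1.0
    for step in range(600):                 # 10 minutes of forward-Euler integration
        u = 1.0 if step < 60 else 0.0       # 1 g/s release during the first minute
        c = c + dt * (A @ c + B * u)
    print(c)                                # concentration in each zone
    ```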

  9. Mapping chemical structure-activity information of HAART-drug cocktails over complex networks of AIDS epidemiology and socioeconomic data of U.S. counties.

    PubMed

    Herrera-Ibatá, Diana María; Pazos, Alejandro; Orbegozo-Medina, Ricardo Alfredo; Romero-Durán, Francisco Javier; González-Díaz, Humberto

    2015-06-01

    Using computational algorithms to design tailored drug cocktails for highly active antiretroviral therapy (HAART) for specific populations is a goal of major importance for both the pharmaceutical industry and public health policy institutions. New combinations of compounds need to be predicted in order to design HAART cocktails. On the one hand, there are the biomolecular factors related to the drugs in the cocktail (experimental measure, chemical structure, drug target, assay organisms, etc.); on the other hand, there are the socioeconomic factors of the specific population (income inequalities, employment levels, fiscal pressure, education, migration, population structure, etc.) needed to study the relationship between socioeconomic status and the disease. In this context, machine learning algorithms, able to build models for problems with multi-source data, have to be used. In this work, the first artificial neural network (ANN) model is proposed for the prediction of HAART cocktails, to halt AIDS on epidemic networks of U.S. counties, using information indices that codify both biomolecular and several socioeconomic factors. The data were obtained from three major sources. The first dataset included assays of anti-HIV chemical compounds released to ChEMBL. The second dataset is the AIDSVu database of Emory University, which compiles AIDS prevalence for >2300 U.S. counties. The third dataset included socioeconomic data from the U.S. Census Bureau. Three scales or levels were employed to group the counties according to location or population structure codes: state, rural-urban continuum code (RUCC), and urban influence code (UIC). An analysis of >130,000 pairs (network links) was performed, corresponding to AIDS prevalence in 2310 U.S. counties vs. drug cocktails made up of combinations of ChEMBL results for 21,582 unique drugs, 9 viral or human protein targets, 4856 protocols, and 10 possible experimental measures. The best model found with the original data was a linear neural network (LNN) with AUROC > 0.80 and accuracy, specificity, and sensitivity ≈ 77% in training and external validation series. The change of the spatial and population structure scale (state, UIC, or RUCC codes) does not affect the quality of the model. Imbalance was detected in all the models found, in both positive/negative case counts and linear/non-linear model accuracy ratios. Using the synthetic minority over-sampling technique (SMOTE) for data pre-processing and machine-learning algorithms implemented in the WEKA software, more balanced models were found. In particular, a multilayer perceptron (MLP) with AUROC = 97.4% and precision, recall, and F-measure > 90% was found. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
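
    A minimal sketch of the SMOTE rebalancing step, using the imbalanced-learn package (the study ran SMOTE inside WEKA; the arrays here are random stand-ins for the multi-source descriptors):

    ```python
    import numpy as np
    from imblearn.over_sampling import SMOTE
    from sklearn.neural_network import MLPClassifier

    X = np.random.rand(1000, 20)                   # stand-in descriptors
    y = (np.random.rand(1000) < 0.1).astype(int)   # ~10% positives: imbalanced

    # SMOTE synthesizes minority-class samples by interpolating between neighbors.
    X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
    clf = MLPClassifier(hidden_layer_sizes=(50,), max_iter=300).fit(X_res, y_res)
    ```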

  10. Full Wave Parallel Code for Modeling RF Fields in Hot Plasmas

    NASA Astrophysics Data System (ADS)

    Spencer, Joseph; Svidzinski, Vladimir; Evstatiev, Evstati; Galkin, Sergei; Kim, Jin-Soo

    2015-11-01

    FAR-TECH, Inc. is developing a suite of full wave RF codes in hot plasmas. It is based on a formulation in configuration space with grid adaptation capability. The conductivity kernel (which includes a nonlocal dielectric response) is calculated by integrating the linearized Vlasov equation along unperturbed test particle orbits. For Tokamak applications a 2-D version of the code is being developed. Progress of this work will be reported. This suite of codes has the following advantages over existing spectral codes: 1) It utilizes the localized nature of plasma dielectric response to the RF field and calculates this response numerically without approximations. 2) It uses an adaptive grid to better resolve resonances in plasma and antenna structures. 3) It uses an efficient sparse matrix solver to solve the formulated linear equations. The linear wave equation is formulated using two approaches: for cold plasmas the local cold plasma dielectric tensor is used (resolving resonances by particle collisions), while for hot plasmas the conductivity kernel is calculated. Work is supported by the U.S. DOE SBIR program.

  11. Gyrokinetic modeling of impurity peaking in JET H-mode plasmas

    NASA Astrophysics Data System (ADS)

    Manas, P.; Camenen, Y.; Benkadda, S.; Weisen, H.; Angioni, C.; Casson, F. J.; Giroud, C.; Gelfusa, M.; Maslov, M.

    2017-06-01

    Quantitative comparisons are presented between gyrokinetic simulations and experimental values of the carbon impurity peaking factor in a database of JET H-modes during the carbon wall era. These plasmas feature strong NBI heating and hence high values of toroidal rotation and corresponding gradient. Furthermore, the carbon profiles present particularly interesting shapes for fusion devices, i.e., hollow in the core and peaked near the edge. Dependencies of the experimental carbon peaking factor (R/L_nC) on plasma parameters are investigated via multilinear regressions. A marked correlation between R/L_nC and the normalised toroidal rotation gradient is observed in the core, which suggests an important role of the rotation in establishing hollow carbon profiles. The carbon peaking factor is then computed with the gyrokinetic code GKW, using a quasi-linear approach, supported by a few non-linear simulations. The comparison of the quasi-linear predictions to the experimental values at mid-radius reveals two main regimes. At low normalised collisionality ν* and Te/Ti < 1, the gyrokinetic simulations quantitatively recover experimental carbon density profiles, provided that rotodiffusion is taken into account. In contrast, at higher ν* and Te/Ti > 1, the very hollow experimental carbon density profiles are never predicted by the simulations and the carbon density peaking is systematically overestimated. This points to a possible missing ingredient in this regime.

  12. Dispersion interactions in Density Functional Theory

    NASA Astrophysics Data System (ADS)

    Andrinopoulos, Lampros; Hine, Nicholas; Mostofi, Arash

    2012-02-01

    Semilocal functionals in Density Functional Theory (DFT) achieve high accuracy simulating a wide range of systems, but miss the effect of dispersion (vdW) interactions, which are important in weakly bound systems. We study two different methods to include vdW in DFT: First, we investigate a recent approach [1] to evaluate the vdW contribution to the total energy using maximally-localized Wannier functions. Using a set of simple dimers, we show that it has a number of shortcomings that hamper its predictive power; we then develop and implement a series of improvements [2] and obtain binding energies and equilibrium geometries in closer agreement with quantum-chemical coupled-cluster calculations. Second, we implement the vdW-DF functional [3], using Soler's method [4], within ONETEP [5], a linear-scaling DFT code, and apply it to a range of systems. This method within a linear-scaling DFT code allows the simulation of weakly bound systems of larger scale, such as organic/inorganic interfaces, biological systems, and implicit solvation models. [1] P. Silvestrelli, JPC A 113, 5224 (2009). [2] L. Andrinopoulos et al, JCP 135, 154105 (2011). [3] M. Dion et al, PRL 92, 246401 (2004). [4] G. Román-Pérez, J.M. Soler, PRL 103, 096102 (2009). [5] C. Skylaris et al, JCP 122, 084119 (2005).

  13. Supercomputing with TOUGH2 family codes for coupled multi-physics simulations of geologic carbon sequestration

    NASA Astrophysics Data System (ADS)

    Yamamoto, H.; Nakajima, K.; Zhang, K.; Nanai, S.

    2015-12-01

    Powerful numerical codes that are capable of modeling complex coupled processes of physics and chemistry have been developed for predicting the fate of CO2 in reservoirs as well as its potential impacts on groundwater and subsurface environments. However, they are often computationally demanding for solving highly non-linear models at sufficient spatial and temporal resolution. Geological heterogeneity and uncertainties further increase the challenges in modeling work. Two-phase flow simulations in heterogeneous media usually require much longer computational time than those in homogeneous media, and uncertainties in reservoir properties may necessitate stochastic simulations with multiple realizations. Recently, massively parallel supercomputers with more than thousands of processors have become available in scientific and engineering communities. Such supercomputers may attract attention from geoscientists and reservoir engineers for solving large, non-linear models at higher resolution within a reasonable time. However, to make them a useful tool, it is essential to tackle several practical obstacles so that large numbers of processors can be utilized effectively by general-purpose reservoir simulators. We have implemented massively parallel versions of two TOUGH2 family codes (the multi-phase flow simulator TOUGH2 and the chemically reactive transport simulator TOUGHREACT) on two different types (vector and scalar) of supercomputers with a thousand to tens of thousands of processors. After completing implementation and extensive tune-up on the supercomputers, the computational performance was measured for three simulations with multi-million-cell grid models, including a simulation of the dissolution-diffusion-convection process that requires high spatial and temporal resolution to simulate the growth of small convective fingers of CO2-dissolved water into larger ones at reservoir scale. The performance measurements confirmed that both simulators exhibit excellent scalability, showing almost linear speedup with the number of processors up to over ten thousand cores. Generally this allows us to perform coupled multi-physics (THC) simulations on high-resolution geologic models with multi-million-cell grids in a practical time (e.g., less than a second per time step).

  14. Comparison of lossless compression techniques for prepress color images

    NASA Astrophysics Data System (ADS)

    Van Assche, Steven; Denecker, Koen N.; Philips, Wilfried R.; Lemahieu, Ignace L.

    1998-12-01

    In the pre-press industry, color images have both a high spatial and a high color resolution. Such images require a considerable amount of storage space and impose long transmission times. Data compression is desired to reduce these storage and transmission problems. Because of the high quality requirements in the pre-press industry, only lossless compression is acceptable. Most existing lossless compression schemes operate on gray-scale images; in this case the color components of color images must be compressed independently. However, higher compression ratios can be achieved by exploiting inter-color redundancies. In this paper we present a comparison of three state-of-the-art lossless compression techniques which exploit such color redundancies: IEP (Inter-color Error Prediction) and a KLT-based technique, which are both linear color decorrelation techniques, and Interframe CALIC, which uses a non-linear approach to color decorrelation. It is shown that these techniques are able to exploit color redundancies and that color decorrelation can be done effectively and efficiently. The linear color decorrelators provide a considerable coding gain (about 2 bpp) on some typical prepress images. The non-linear interframe CALIC predictor does not yield better results, but the full interframe CALIC technique does.
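
    The linear decorrelation idea is easy to sketch: fit a least-squares predictor of one color plane from another and code only the residual, whose spread is much smaller for correlated planes. A toy numpy version (synthetic planes, not prepress data, and not the IEP algorithm itself):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    R = rng.integers(0, 256, (64, 64)).astype(float)   # stand-in red plane
    G = 0.8 * R + 20 + rng.normal(0, 4, R.shape)       # correlated green plane

    a, b = np.polyfit(R.ravel(), G.ravel(), 1)         # least-squares inter-color predictor
    residual = G - (a * R + b)                         # what a lossless coder would encode

    print(G.std(), residual.std())   # smaller spread -> fewer bits per pixel
    ```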

  15. Learning-Based Just-Noticeable-Quantization- Distortion Modeling for Perceptual Video Coding.

    PubMed

    Ki, Sehwan; Bae, Sung-Ho; Kim, Munchurl; Ko, Hyunsuk

    2018-07-01

    Conventional predictive video coding-based approaches are reaching the limit of their potential coding efficiency improvements because of severely increasing computational complexity. As an alternative approach, perceptual video coding (PVC) has attempted to achieve high coding efficiency by eliminating perceptual redundancy, using just-noticeable-distortion (JND) directed PVC. Previous JNDs were modeled by adding white Gaussian noise or specific signal patterns into the original images, which is not appropriate for finding JND thresholds due to distortion with energy reduction. In this paper, we present a novel discrete cosine transform-based energy-reduced JND model, called ERJND, that is more suitable for JND-based PVC schemes. The proposed ERJND model is then extended to two learning-based just-noticeable-quantization-distortion (JNQD) models that can be applied as preprocessing for perceptual video coding. The two JNQD models can automatically adjust JND levels based on given quantization step sizes. One of the two JNQD models, called LR-JNQD, is based on linear regression and determines the model parameter for JNQD from extracted handcrafted features. The other JNQD model, called CNN-JNQD, is based on a convolutional neural network (CNN). To the best of our knowledge, this is the first approach to automatically adjust JND levels according to quantization step sizes for preprocessing the input to video encoders. In experiments, both the LR-JNQD and CNN-JNQD models were applied to high efficiency video coding (HEVC) and yielded maximum (average) bitrate reductions of 38.51% (10.38%) and 67.88% (24.91%), respectively, with little subjective video quality degradation, compared with the input without preprocessing applied.

  16. Reynolds-averaged Navier-Stokes computation on tip clearance flow in a compressor cascade using an unstructured grid

    NASA Astrophysics Data System (ADS)

    Shin, Sangmook

    2001-07-01

    A three-dimensional unstructured incompressible RANS code has been developed using artificial compressibility and the Spalart-Allmaras eddy viscosity model. A node-based finite volume method is used in which all flow variables are defined at the vertices of tetrahedrons in an unstructured grid. The inviscid fluxes are computed using Roe's flux difference splitting method, and higher-order accuracy is attained by data reconstruction based on Taylor series expansion. Gauss's theorem is used to formulate the necessary gradients. For time integration, an implicit scheme based on the linearized backward Euler method is used. A tetrahedral unstructured grid generation code has also been developed and applied to the tip clearance flow in a highly staggered cascade. Surface grids are first generated in the flow passage and blade tip by using several triangulation methods, including Delaunay triangulation, the advancing front method, and the advancing layer method. Then the whole computational domain, including the tip gap region, is filled with prisms using the surface grids. The code has been validated by comparisons with available computational and experimental results for several test cases: inviscid flow around a NACA section, laminar and turbulent flow over a flat plate, turbulent flow through a double-circular arc cascade, and laminar flow through a square duct with a 90° bend. Finally the code is applied to a linear cascade that has a GE rotor B section with tip clearance and a high stagger angle of 56.9°. The overall structure of the tip clearance flow is well predicted. Loss of loading due to tip leakage flow and reloading due to the tip leakage vortex are presented. On the end wall, the separation line of the tip leakage vortex and the reattachment line of the passage vortex are identified. Prediction of such an interaction presents a challenge to RANS computations. The effects of blade span on the flow structure have also been investigated. Two cascades with blades of aspect ratios of 0.5 and 1.0 are considered. By comparing pressure distributions on the blade, it is shown that the aspect ratio has strong effects on the loading distribution on the blade although the tip gap height is very small (0.016 chord). A grid convergence study has been carried out with three different grids for pressure distributions and limiting streamlines on the end wall. (Abstract shortened by UMI.)

  17. Bit selection using field drilling data and mathematical investigation

    NASA Astrophysics Data System (ADS)

    Momeni, M. S.; Ridha, S.; Hosseini, S. J.; Meyghani, B.; Emamian, S. S.

    2018-03-01

    No drilling process is complete without a drill bit, so bit selection is an important task in drilling optimization and in planning and designing a well, chiefly because the bit accounts for a large share of the total drilling cost. To perform this task, a back-propagation ANN model is developed, trained on drilling bit records from several offset wells. Two ANN models are developed: one predicts the IADC bit code and the other predicts the rate of penetration (ROP). In Stage 1, the IADC bit code is predicted from all of the given field data; the output is the targeted IADC bit code. In Stage 2, the predicted ROP values are found using the IADC bit code obtained in Stage 1. In Stage 3, the predicted ROP value is fed back into the data set to obtain the predicted IADC bit code. The end result is two models that give predicted ROP values and predicted IADC bit codes.

  18. Simulating the effects of stellarator geometry on gyrokinetic drift-wave turbulence

    NASA Astrophysics Data System (ADS)

    Baumgaertel, Jessica Ann

    Nuclear fusion is a clean, safe form of energy with abundant fuel. In magnetic fusion energy (MFE) experiments, the plasma fuel is confined by magnetic fields at very high temperatures and densities. One fusion reactor design is the non-axisymmetric, torus-shaped stellarator. Its fully 3D fields have advantages over the simpler, better-understood axisymmetric tokamak, including the ability to optimize magnetic configurations for desired properties, such as lower transport (longer confinement time). Turbulence in the plasma can degrade MFE confinement. While turbulent transport is known to cause a significant amount of heat loss in tokamaks, it is a new area of research in stellarators. Gyrokinetics is a good mathematical model of the drift-wave instabilities that cause turbulence. Multiple gyrokinetic turbulence codes that have had great success in comparisons with tokamak experiments are being converted for use with stellarator geometry. This thesis describes such adaptations of the gyrokinetic turbulence code GS2. Herein a new computational grid generator and upgrades to GS2 itself are described, tested, and benchmarked against three other gyrokinetic codes. Using GS2, detailed linear studies using the National Compact Stellarator Experiment (NCSX) geometry were conducted. The first compares stability in two equilibria with different β = (plasma pressure)/(magnetic pressure). Overall, the higher-β case was more stable than the lower-β case. As high β is important for MFE experiments, this is encouraging. The second compares NCSX linear stability to a tokamak case. NCSX was more stable, with a 20% higher critical temperature gradient normalized by the minor radius, suggesting that the fusion power might be enhanced by ~50%. In addition, the first nonlinear, non-axisymmetric GS2 simulations are presented. Finally, linear stability at two locations in a W7-AS plasma was compared. The experimentally measured parameters used were from a W7-AS shot in which measured heat fluxes match neoclassical theory predictions at inner radii but are too large for neoclassical predictions at outer radii. Results from GS2 linear simulations show that the outer location has higher gyrokinetic instability growth rates than the inner one. Mixing-length estimates of the heat flux are within a factor of 3 of the experimental measurements, indicating that gyrokinetic turbulence may be responsible for the higher transport measured by the experiment in the outer regions. Future nonlinear simulations can explore this question in more detail. This work is supported by the Princeton Plasma Physics Laboratory, which is operated by Princeton University for the U.S. Department of Energy under Contract No. DE-AC02-09CH11466, and the SciDAC Center for the Study of Plasma Microturbulence.

  19. Modelling multi-phase liquid-sediment scour and resuspension induced by rapid flows using Smoothed Particle Hydrodynamics (SPH) accelerated with a Graphics Processing Unit (GPU)

    NASA Astrophysics Data System (ADS)

    Fourtakas, G.; Rogers, B. D.

    2016-06-01

    A two-phase numerical model using Smoothed Particle Hydrodynamics (SPH) is applied to two-phase liquid-sediment flows. The absence of a mesh in SPH is ideal for interfacial and highly non-linear flows with changing fragmentation of the interface, mixing, and resuspension. The rheology of sediment induced under rapid flows passes through several states which are only partially described by previous research in SPH. This paper attempts to bridge the gap between geotechnics, non-Newtonian, and Newtonian flows by proposing a model that combines the yielding, shear, and suspension layers which are needed to predict accurately the global erosion phenomena from a hydrodynamics perspective. The numerical SPH scheme is based on the explicit treatment of both phases using Newtonian and non-Newtonian Bingham-type Herschel-Bulkley-Papanastasiou constitutive models. This is supplemented by the Drucker-Prager yield criterion to predict the onset of yielding of the sediment surface and a concentration suspension model. The multi-phase model has been compared with experimental and 2-D reference numerical models for scour following a dry-bed dam break, yielding satisfactory results and improvements over well-known SPH multi-phase models. With 3-D simulations requiring a large number of particles, the code is accelerated with a graphics processing unit (GPU) in the open-source DualSPHysics code. The implementation and optimisation of the code achieved a speedup of 58× over an optimised single-thread serial code. A 3-D dam break over a non-cohesive erodible bed simulation with over 4 million particles yields close agreement with experimental scour and water surface profiles.

  20. Newtonian CAFE: a new ideal MHD code to study the solar atmosphere

    NASA Astrophysics Data System (ADS)

    González, J. J.; Guzmán, F.

    2015-12-01

    In this work we present a new independent code designed to solve the equations of classical ideal magnetohydrodynamics (MHD) in three dimensions, subject to a constant gravitational field. The purpose of the code centers on the analysis of solar phenomena within the photosphere-corona region. In particular, the code can simulate the propagation of impulsively generated linear and non-linear MHD waves in the non-isothermal solar atmosphere. We present standard 1D and 2D tests to demonstrate the quality of the numerical results obtained with our code. As 3D tests we present the propagation of MHD-gravity waves and vortices in the solar atmosphere. The code is based on high-resolution shock-capturing methods and uses the HLLE flux formula combined with the minmod, MC, and WENO5 reconstructors. The divergence-free magnetic field constraint is controlled using the Flux Constrained Transport method.
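
    As an illustration of one ingredient named above, the minmod reconstructor reduces to a one-line slope limiter; this sketch is ours, not CAFE's implementation:

    ```python
    import numpy as np

    def minmod(a, b):
        """Minmod limiter: the smaller-magnitude slope when signs agree, else 0."""
        return np.where(a * b > 0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

    # Limited slopes for a piecewise-linear reconstruction of cell averages u:
    u = np.array([0.0, 0.0, 1.0, 1.0, 0.5])
    slopes = minmod(np.diff(u)[:-1], np.diff(u)[1:])   # one slope per interior cell
    print(slopes)   # zero at extrema, avoiding spurious oscillations at the jump
    ```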

  1. Prediction of Turbulence-Generated Noise in Unheated Jets. Part 1; JeNo Technical Manual (Version 1.0)

    NASA Technical Reports Server (NTRS)

    Khavaran, Abbas; Bridges, James; Georgiadis, Nicholas

    2005-01-01

    The model-based approach used by the JeNo code to predict jet noise spectral directivity is described. A linearized form of Lilley's equation governs the non-causal Green's function of interest, with the non-linear terms on the right-hand side identified as the source. A Reynolds-averaged Navier-Stokes (RANS) solution yields the required mean flow for the solution of the propagation Green's function in a locally parallel flow. The RANS solution also produces the time- and length-scales needed to model the non-compact source, the turbulent velocity correlation tensor, with exponential temporal and spatial functions. It is shown that while an exact non-causal Green's function accurately predicts the observed shift in the location of the spectrum peak with angle, as well as the angularity of sound at low to moderate Mach numbers, the polar directivity of radiated sound is not entirely captured by this Green's function at high subsonic and supersonic acoustic Mach numbers. Results presented for unheated jets in the Mach number range of 0.51 to 1.8 suggest that near the peak radiation angle of high-speed jets, a different source/Green's function convolution integral may be required in order to capture the peak observed directivity of jet noise. A sample Mach 0.90 heated jet is also discussed that highlights the requirements for a comprehensive jet noise prediction model.

  2. Exploring the Effects of Congruence and Holland's Personality Codes on Job Satisfaction: An Application of Hierarchical Linear Modeling Techniques

    ERIC Educational Resources Information Center

    Ishitani, Terry T.

    2010-01-01

    This study applied hierarchical linear modeling to investigate the effect of congruence on intrinsic and extrinsic aspects of job satisfaction. Particular focus was given to differences in job satisfaction by gender and by Holland's first-letter codes. The study sample included nationally represented 1462 female and 1280 male college graduates who…

  3. Binary encoding of multiplexed images in mixed noise.

    PubMed

    Lalush, David S

    2008-09-01

    Binary coding of multiplexed signals and images has been studied in the context of spectroscopy with models of either purely constant or purely proportional noise, and has been shown to result in improved noise performance under certain conditions. We consider the case of mixed noise in an imaging system consisting of multiple individually-controllable sources (X-ray or near-infrared, for example) shining on a single detector. We develop a mathematical model for the noise in such a system and show that the noise is dependent on the properties of the binary coding matrix and on the average number of sources used for each code. Each binary matrix has a characteristic linear relationship between the ratio of proportional-to-constant noise and the noise level in the decoded image. We introduce a criterion for noise level, which is minimized via a genetic algorithm search. The search procedure results in the discovery of matrices that outperform the Hadamard S-matrices at certain levels of mixed noise. Simulation of a seven-source radiography system demonstrates that the noise model predicts trends and rank order of performance in regions of nonuniform images and in a simple tomosynthesis reconstruction. We conclude that the model developed provides a simple framework for analysis, discovery, and optimization of binary coding patterns used in multiplexed imaging systems.
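
    The multiplex advantage under constant noise can be sketched in a few lines: measure four sources at a time according to a binary matrix, then decode with a linear solve. The cyclic S-matrix below is a standard order-7 construction, not necessarily one of the paper's GA-discovered matrices:

    ```python
    import numpy as np
    rng = np.random.default_rng(0)

    # Rows are cyclic shifts of a length-7 binary sequence; each measurement
    # sums four of the seven sources.
    row = np.array([1, 1, 1, 0, 1, 0, 0])
    S = np.array([np.roll(row, k) for k in range(7)])

    x = rng.uniform(1, 10, 7)          # true source intensities
    sigma = 1.0                        # constant (detector) noise per measurement

    direct = x + rng.normal(0, sigma, 7)    # one source at a time
    y = S @ x + rng.normal(0, sigma, 7)     # multiplexed measurements
    demux = np.linalg.solve(S, y)           # decoded estimate

    print(np.abs(direct - x).mean(), np.abs(demux - x).mean())
    ```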

  4. Structural Effects on the Slamming Pressures of High-Speed Planing Craft

    NASA Astrophysics Data System (ADS)

    Ikeda, Christine; Taravella, Brandon; Judge, Carolyn

    2015-11-01

    High-speed planing craft are subjected to repeated slamming events in waves that can be very extreme depending on the wave topography, impact angle of the ship, forward speed of the ship, encounter angle, and height out of the water. The current work examines this fluid-structure interaction problem through the use of wedge drop experiments and a CFD code. In the first set of experiments, a rigid 20-degree deadrise angle wedge was dropped from a range of heights (0 ≤ H ≤ 0.6 m) while the pressures and accelerations of the slam event were measured. The second set of experiments involved a flexible-bottom 15-degree deadrise angle wedge that was dropped from the same range of heights; in these experiments the pressures, accelerations, and strain field were measured. Both experiments are compared with a non-linear boundary-value flat cylinder theory code in order to compare the pressure loading. The code assumes a rigid structure; therefore, the results from the code and the first experiment are in good agreement. The second experiment shows pressure magnitudes that are lower than the predictions due to the energy required to deform the structure. Funding from University of New Orleans Office of Research and Sponsored Programs and the Office of Naval Research.

  5. Comparison of Code Predictions to Test Measurements for Two Orifice Compensated Hydrostatic Bearings at High Reynolds Numbers

    NASA Technical Reports Server (NTRS)

    Keba, John E.

    1996-01-01

    Rotordynamic coefficients obtained from testing two different hydrostatic bearings are compared to values predicted by two different computer programs. The first set of test data is from a relatively long (L/D = 1) orifice-compensated hydrostatic bearing tested in water by Texas A&M University (TAMU Bearing No. 9). The second bearing is shorter (L/D = 0.37) and was tested in a lower-viscosity fluid by the Rocketdyne Division of Rockwell (Rocketdyne 'Generic' Bearing) at similar rotating speeds and pressures. Computed predictions of bearing rotordynamic coefficients were obtained from the cylindrical seal code 'ICYL', one of the industrial seal codes developed for NASA-LeRC by Mechanical Technology Inc., and from the hydrodynamic bearing code 'HYDROPAD'. The comparison highlights the effect the bearing has on the accuracy of the predictions. The TAMU Bearing No. 9 test data are closely matched by the predictions obtained from the HYDROPAD code (except for the added mass terms), whereas significant differences exist between the data from the Rocketdyne 'Generic' bearing and the code predictions. The results suggest that some aspects of the fluid behavior in the shorter, higher-Reynolds-number 'Generic' bearing may not be modeled accurately in the codes. The ICYL code predictions for flowrate and direct stiffness approximately equal those of HYDROPAD. Significant differences in cross-coupled stiffness and the damping terms were obtained relative to HYDROPAD and both sets of test data. Several observations are included concerning application of the ICYL code.

  6. Theory of Mind: A Neural Prediction Problem

    PubMed Central

    Koster-Hale, Jorie; Saxe, Rebecca

    2014-01-01

    Predictive coding posits that neural systems make forward-looking predictions about incoming information. Neural signals contain information not about the currently perceived stimulus, but about the difference between the observed and the predicted stimulus. We propose to extend the predictive coding framework from high-level sensory processing to the more abstract domain of theory of mind; that is, to inferences about others’ goals, thoughts, and personalities. We review evidence that, across brain regions, neural responses to depictions of human behavior, from biological motion to trait descriptions, exhibit a key signature of predictive coding: reduced activity to predictable stimuli. We discuss how future experiments could distinguish predictive coding from alternative explanations of this response profile. This framework may provide an important new window on the neural computations underlying theory of mind. PMID:24012000

  7. Toroidal Rotation and 3D Nonlinear Dynamics in the Peeling-Ballooning Model of ELMs

    NASA Astrophysics Data System (ADS)

    Snyder, P. B.

    2004-11-01

    Maximizing the height of the edge transport barrier (or "pedestal") while maintaining acceptably small edge localized modes (ELMs) is a critical issue for tokamak performance. The peeling-ballooning model proposes that intermediate wavelength MHD instabilities are responsible for ELMs and impose constraints on the pedestal. Recent studies of linear peeling-ballooning stability have found encouraging agreement with observations [e.g. 1]. To allow more detailed prediction of mode characteristics, including eventually predictions of the ELM energy loss and its deposition, we consider effects of sheared toroidal rotation, as well as 3D nonlinear dynamics. An eigenmode formulation for toroidal rotation shear is developed and incorporated into the framework of the ELITE stability code [2], resolving the low rotation discontinuity in previous high-n results. Rotation shear is found to impact the structure of peeling-ballooning modes, causing radial narrowing and mode shearing. The calculated mode frequency is found to agree with observed rotation in the edge region in the early stages of the ELM crash. Nonlinear studies with the 3D BOUT and NIMROD codes reveal detailed characteristics of the early evolution of these edge instabilities, including the impact of non-ideal effects. The expected linear growth phase is followed by a fast crash event in which poloidally narrow, filamentary structures propagate radially outward from the pedestal region, closely resembling observed ELM events. Comparisons with ELM observations will be discussed. [1] P.B. Snyder et al., Nucl. Fusion 44, 320 (2004); P.B. Snyder et al., Phys. Plasmas 9, 2037 (2002). [2] H.R. Wilson et al., Phys. Plasmas 9, 1277 (2002).

  8. Prediction of plant lncRNA by ensemble machine learning classifiers.

    PubMed

    Simopoulos, Caitlin M A; Weretilnyk, Elizabeth A; Golding, G Brian

    2018-05-02

    In plants, long non-protein coding RNAs are believed to have essential roles in development and stress responses. However, relative to advances in discerning biological roles for long non-protein coding RNAs in animal systems, this RNA class in plants is largely understudied. With comparatively few validated plant long non-coding RNAs, research on this potentially critical class of RNA is hindered by a lack of appropriate prediction tools and databases. Supervised learning models trained on data sets of mostly non-validated, non-coding transcripts have previously been used to identify this enigmatic RNA class, with applications largely focused on animal systems. Our approach uses a training set comprised only of empirically validated long non-protein coding RNAs from plant, animal, and viral sources to predict and rank candidate long non-protein coding gene products for future functional validation. Individual stochastic gradient boosting and random forest classifiers trained only on empirically validated long non-protein coding RNAs were constructed. In order to use the strengths of multiple classifiers, we combined multiple models into a single stacking meta-learner. This ensemble approach benefits from the diversity of several learners to effectively identify putative plant long non-coding RNAs from transcript sequence features. When the predicted genes identified by the ensemble classifier were compared to those listed in GreeNC, an established plant long non-coding RNA database, overlap for predicted genes from Arabidopsis thaliana, Oryza sativa and Eutrema salsugineum ranged from 51 to 83%, with the highest agreement in Eutrema salsugineum. Most of the highest-ranking predictions from Arabidopsis thaliana were annotated as potential natural antisense genes, pseudogenes, transposable elements, or simply computationally predicted hypothetical proteins. Due to the nature of this tool, the model can be updated as new long non-protein coding transcripts are identified and functionally verified. This ensemble classifier is an accurate tool that can be used to rank long non-protein coding RNA predictions for use in conjunction with gene expression studies. Selection of plant transcripts with a high potential for regulatory roles as long non-protein coding RNAs will advance research in the elucidation of long non-protein coding RNA function.
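
    A minimal sketch of such a stacking meta-learner, with scikit-learn standing in for the authors' implementation and random arrays standing in for transcript sequence features:

    ```python
    import numpy as np
    from sklearn.ensemble import (GradientBoostingClassifier, RandomForestClassifier,
                                  StackingClassifier)
    from sklearn.linear_model import LogisticRegression

    X = np.random.rand(500, 30)            # stand-in sequence features
    y = np.random.randint(0, 2, 500)       # 1 = validated lncRNA, 0 = coding

    # Gradient boosting and random forest base learners, combined by a
    # logistic-regression meta-learner trained on their predictions.
    stack = StackingClassifier(
        estimators=[("gb", GradientBoostingClassifier()),
                    ("rf", RandomForestClassifier())],
        final_estimator=LogisticRegression())
    stack.fit(X, y)
    ranked = stack.predict_proba(X)[:, 1]  # scores used to rank candidates
    ```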

  9. A novel encoding scheme for effective biometric discretization: Linearly Separable Subcode.

    PubMed

    Lim, Meng-Hui; Teoh, Andrew Beng Jin

    2013-02-01

    Separability in a code is crucial in guaranteeing a decent Hamming-distance separation among the codewords. In multibit biometric discretization where a code is used for quantization-intervals labeling, separability is necessary for preserving distance dissimilarity when feature components are mapped from a discrete space to a Hamming space. In this paper, we examine separability of Binary Reflected Gray Code (BRGC) encoding and reveal its inadequacy in tackling interclass variation during the discrete-to-binary mapping, leading to a tradeoff between classification performance and entropy of binary output. To overcome this drawback, we put forward two encoding schemes exhibiting full-ideal and near-ideal separability capabilities, known as Linearly Separable Subcode (LSSC) and Partially Linearly Separable Subcode (PLSSC), respectively. These encoding schemes convert the conventional entropy-performance tradeoff into an entropy-redundancy tradeoff in the increase of code length. Extensive experimental results vindicate the superiority of our schemes over the existing encoding schemes in discretization performance. This opens up possibilities of achieving much greater classification performance with high output entropy.
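
    The BRGC shortfall described above is visible in a few lines: adjacent labels always differ in one bit, but Hamming distance does not track index separation, so distant quantization intervals can map to nearby codewords. A small 3-bit illustration of ours:

    ```python
    # Binary Reflected Gray Code: adjacent labels differ by exactly one bit.
    def brgc(i):
        return i ^ (i >> 1)

    def hamming(a, b):
        return bin(a ^ b).count("1")

    codes = [brgc(i) for i in range(8)]
    print([f"{c:03b}" for c in codes])
    # Distance from label 0 to the others: [0, 1, 2, 1, 2, 3, 2, 1] -- label 7
    # is at Hamming distance 1 from label 0, the separability problem LSSC fixes.
    print([hamming(codes[0], c) for c in codes])
    ```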

  10. Efficient temporal and interlayer parameter prediction for weighted prediction in scalable high efficiency video coding

    NASA Astrophysics Data System (ADS)

    Tsang, Sik-Ho; Chan, Yui-Lam; Siu, Wan-Chi

    2017-01-01

    Weighted prediction (WP) is an efficient video coding tool, introduced with the establishment of the H.264/AVC video coding standard, for compensating temporal illumination changes in motion estimation and compensation. WP parameters, comprising a multiplicative weight and an additive offset for each reference frame, must be estimated and transmitted to the decoder in the slice header. These parameters cost extra bits in the coded video bitstream. High efficiency video coding (HEVC) provides WP parameter prediction to reduce this overhead. WP parameter prediction is therefore crucial to research and applications related to WP. Prior work has improved WP parameter prediction through implicit prediction of image characteristics and derivation of parameters. By exploiting both temporal and interlayer redundancies, we propose three WP parameter prediction algorithms (enhanced implicit WP parameter prediction, enhanced direct WP parameter derivation, and interlayer WP parameter prediction) to further improve the coding efficiency of HEVC. Results show that our proposed algorithms can achieve up to 5.83% and 5.23% bitrate reduction compared to conventional scalable HEVC in the base layer for SNR scalability and 2× spatial scalability, respectively.
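
    The WP model itself is a per-reference affine map, cur ≈ w·ref + o. A least-squares estimate of the (weight, offset) pair on synthetic luma data, illustrative only and not the HEVC estimation or prediction procedure:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    ref = rng.uniform(0, 255, (16, 16))                  # reference block
    cur = 0.9 * ref + 12 + rng.normal(0, 1, ref.shape)   # illumination change

    w, o = np.polyfit(ref.ravel(), cur.ravel(), 1)       # fit cur ≈ w*ref + o
    print(round(w, 3), round(o, 2))                      # recovered weight and offset
    ```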

  11. Molecular dynamics simulations of high energy cascade in ordered alloys: Defect production and subcascade division

    NASA Astrophysics Data System (ADS)

    Crocombette, Jean-Paul; Van Brutzel, Laurent; Simeone, David; Luneville, Laurence

    2016-06-01

    Displacement cascades have been calculated in two ordered alloys (Ni3Al and UO2) in the molecular dynamics framework using the CMDC (Cell Molecular Dynamics for Cascade) code (J.-P. Crocombette and T. Jourdan, Nucl. Instrum. Meth. B 352, 9 (2015)) for energies ranging from 0.1 to 580 keV. The defect production has been compared to the prediction of the NRT (Norgett, Robinson and Torrens) standard. One observes a decrease with energy of the number of defects compared to the NRT prediction at intermediate energies but, unlike what is commonly observed in elemental solids, the number of produced defects does not always settle into a linear variation with ballistic energy at high energies. The fragmentation of the cascade into subcascades has been studied through the analysis of surviving defect pockets. It appears that the commonly assumed equivalence between linear defect production and subcascade division does not hold in general for alloys. We calculate the average number of subcascades and the average number of defects per subcascade as a function of ballistic energy. We find an unexpected variety of behaviors for these two average quantities above the threshold for subcascade formation.

  12. Accessible, almost ab initio multi-scale modeling of entangled polymers via slip-links

    NASA Astrophysics Data System (ADS)

    Andreev, Marat

    It is widely accepted that the dynamics of entangled polymers can be described by the tube model. Here we advocate for an alternative approach to entanglement modeling known as slip-links. Recently, slip-links were shown to possess important advantages over tube models: they have strong connections to atomistic, multichain levels of description, agree with non-equilibrium thermodynamics, are applicable to any chain architecture, and can be used in linear or non-linear rheology. We present a hierarchy of slip-link models that are connected to each other through successive coarse graining. Models in the hierarchy are consistent in their overlapping domains of applicability in order to allow a straightforward mapping of parameters. In particular, the most detailed level of description has four parameters, three of which can be determined directly from atomistic simulations. On the other hand, the least detailed member of the hierarchy is numerically accessible and allows for non-equilibrium flow predictions of complex chain architectures. Using a GPU implementation, these predictions can be obtained in minutes of computational time on a single desktop equipped with a mainstream gaming GPU. The GPU code is available online for free download.

  13. DRA/NASA/ONERA Collaboration on Icing Research. Part 2; Prediction of Airfoil Ice Accretion

    NASA Technical Reports Server (NTRS)

    Wright, William B.; Gent, R. W.; Guffond, Didier

    1997-01-01

    This report presents results from a joint study by DRA, NASA, and ONERA for the purpose of comparing, improving, and validating the aircraft icing computer codes developed by each agency. These codes are of three kinds: (1) water droplet trajectory prediction, (2) ice accretion modeling, and (3) transient electrothermal deicer analysis. In this joint study, the agencies compared their code predictions with each other and with experimental results. These comparison exercises were published in three technical reports, each with joint authorship. DRA published and had first authorship of Part 1 - Droplet Trajectory Calculations, NASA of Part 2 - Ice Accretion Prediction, and ONERA of Part 3 - Electrothermal Deicer Analysis. The results cover work done during the period from August 1986 to late 1991. As a result, all of the information in this report is dated. Where necessary, current information is provided to show the direction of current research. In this present report on ice accretion, each agency predicted ice shapes on two-dimensional airfoils under icing conditions for which experimental ice shapes were available. In general, all three codes did a reasonable job of predicting the measured ice shapes. For any given experimental condition, one of the three codes predicted the general ice features (i.e., shape, impingement limits, mass of ice) somewhat better than did the other two. However, no single code consistently did better than the other two over the full range of conditions examined, which included rime, mixed, and glaze ice conditions. In several of the cases, DRA showed that the user's knowledge of icing can significantly improve the accuracy of the code prediction. Rime ice predictions were reasonably accurate and consistent among the codes, because droplets freeze on impact and the freezing model is simple. Glaze ice predictions were less accurate and less consistent among the codes, because the freezing model is more complex and is critically dependent upon unsubstantiated heat transfer and surface roughness models. Thus, the heat transfer prediction methods used in the codes became the subject of a separate study in this report, comparing predicted heat transfer coefficients with a limited experimental database of heat transfer coefficients for cylinders with simulated glaze and rime ice shapes. The codes did a good job of predicting heat transfer coefficients near the stagnation region of the ice shapes. But in the region of the ice horns, all three codes predicted heat transfer coefficients considerably higher than the measured values. An important conclusion of this study is that further research is needed to understand the finer detail of the glaze ice accretion process and to develop improved glaze ice accretion models.

  14. Recognizing short coding sequences of prokaryotic genome using a novel iteratively adaptive sparse partial least squares algorithm

    PubMed Central

    2013-01-01

    Background Significant efforts have been made to address the problem of identifying short genes in prokaryotic genomes. However, most known methods are not effective in detecting short genes. Because of the limited information contained in short DNA sequences, it is very difficult to accurately distinguish between protein-coding and non-coding sequences in prokaryotic genomes. We have developed a new Iteratively Adaptive Sparse Partial Least Squares (IASPLS) algorithm as the classifier to improve the accuracy of the identification process. Results For testing, we chose the short coding and non-coding sequences from seven prokaryotic organisms. We used seven feature sets (including GC content, Z-curve, etc.) of short genes. In comparison with the GeneMarkS, Metagene, Orphelia, and heuristic-approach methods, our model achieved the best prediction performance in identification of short prokaryotic genes. Even when we focused on the very short length group ([60–100 nt)), our model provided sensitivity as high as 83.44% and specificity as high as 92.8%. These values are two or three times higher than those of three of the other methods, while Metagene fails to recognize genes in this length range. The experiments also proved that IASPLS can improve the identification accuracy in comparison with other widely used classifiers, i.e. logistic regression, Random Forest (RF), and K nearest neighbors (KNN). The accuracy using IASPLS was improved by 5.90% or more in comparison with the other methods. In addition to the improvements in accuracy, IASPLS required ten times less computer time than KNN or RF. Conclusions We conclude that our method is preferable for application as an automated method of short gene classification. Its linearity and easily optimized parameters make it practicable for predicting short genes of newly-sequenced or under-studied species. Reviewers This article was reviewed by Alexey Kondrashov, Rajeev Azad (nominated by Dr J. Peter Gogarten) and Yuriy Fofanov (nominated by Dr Janet Siefert). PMID:24067167

  15. Acoustic Power Transmission Through a Ducted Fan

    NASA Technical Reports Server (NTRS)

    Envia, Ed

    2016-01-01

    For high-speed ducted fans, when the rotor flowfield is shock-free, the main contribution to the inlet-radiated acoustic power comes from the portion of the rotor-stator interaction sound field that is transmitted upstream through the rotor. As such, inclusion of the acoustic transmission is an essential ingredient in the prediction of fan inlet noise when the fan tip relative speed is subsonic. This paper describes a linearized-Euler-based approach to computing the acoustic transmission of fan tones through the rotor. The approach is embodied in a code called LINFLUX, which was applied to a candidate subsonic fan called the Advanced Ducted Propulsor (ADP). The results from this study suggest that it is possible to make such predictions with sufficient fidelity to provide an indication of the acoustic transmission trends with fan tip speed.

  16. Model-Based Battery Management Systems: From Theory to Practice

    NASA Astrophysics Data System (ADS)

    Pathak, Manan

    Lithium-ion batteries are now extensively used as the primary storage source. Capacity and power fade and slow recharging times are key issues that restrict their use in many applications. Battery management systems are critical to address these issues, along with ensuring safety. This dissertation focuses on exploring various control strategies using detailed physics-based electrochemical models developed previously for lithium-ion batteries, which could be used in advanced battery management systems. Optimal charging profiles for minimizing capacity fade based on SEI-layer formation are derived, and the benefits of using such control strategies are shown by experimentally testing them on a 16 Ah NMC-based pouch cell. This dissertation also explores different time-discretization strategies for non-linear models, which give an improved order of convergence for optimal control problems. Lastly, this dissertation presents a physics-based model for predicting the linear impedance of a battery and develops freeware that is extremely robust and computationally fast. Such a code could be used for estimating transport, kinetic, and material properties of the battery from linear impedance spectra.
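
    As a sketch of what a linear-impedance calculation produces, here is the spectrum of a simple resistor plus RC equivalent circuit, a common stand-in for the dissertation's physics-based model; all parameter values are illustrative:

    ```python
    import numpy as np

    # Z(w) = R0 + R_ct / (1 + j*w*R_ct*C_dl): ohmic resistance in series with a
    # charge-transfer resistance shunted by a double-layer capacitance.
    R0, Rct, Cdl = 0.02, 0.05, 2.0          # ohms, ohms, farads (illustrative)
    f = np.logspace(-2, 4, 200)             # 10 mHz to 10 kHz
    w = 2 * np.pi * f
    Z = R0 + Rct / (1 + 1j * w * Rct * Cdl)

    print(Z.real.min(), Z.real.max())       # Nyquist arc spans R0 .. R0 + Rct
    ```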

  17. Unsteady-flow-field predictions for oscillating cascades

    NASA Technical Reports Server (NTRS)

    Huff, Dennis L.

    1991-01-01

    The unsteady flow field around an oscillating cascade of flat plates with zero stagger was studied using a time-marching Euler code. This case has an exact solution based on linear theory and served as a model problem for studying pressure wave propagation in the numerical solution. The importance of using proper unsteady boundary conditions, grid resolution, and time step size is shown for a moderate reduced frequency. Results show that an approximate nonreflecting boundary condition based on linear theory does a good job of minimizing reflections from the inflow and outflow boundaries and allows the boundaries to be placed closer to the airfoils than when reflective boundaries are used. Stretching the grid toward the boundary to damp the unsteady waves is another way to minimize reflections. Grid clustering near the plates captures the unsteady flow field better than uniform grids as long as the Courant-Friedrichs-Lewy (CFL) number is less than 1 for a sufficient portion of the grid. Finally, a solution based on an optimization of grid, CFL number, and boundary conditions shows good agreement with linear theory.

  18. Linear Regression Links Transcriptomic Data and Cellular Raman Spectra.

    PubMed

    Kobayashi-Kirschvink, Koseki J; Nakaoka, Hidenori; Oda, Arisa; Kamei, Ken-Ichiro F; Nosho, Kazuki; Fukushima, Hiroko; Kanesaki, Yu; Yajima, Shunsuke; Masaki, Haruhiko; Ohta, Kunihiro; Wakamoto, Yuichi

    2018-06-08

    Raman microscopy is an imaging technique that has been applied to assess molecular compositions of living cells to characterize cell types and states. However, owing to the diverse molecular species in cells and challenges of assigning peaks to specific molecules, it has not been clear how to interpret cellular Raman spectra. Here, we provide firm evidence that cellular Raman spectra and transcriptomic profiles of Schizosaccharomyces pombe and Escherichia coli can be computationally connected and thus interpreted. We find that the dimensions of high-dimensional Raman spectra and transcriptomes measured by RNA sequencing can be reduced and connected linearly through a shared low-dimensional subspace. Accordingly, we were able to predict global gene expression profiles by applying the calculated transformation matrix to Raman spectra, and vice versa. Highly expressed non-coding RNAs contributed to the Raman-transcriptome linear correspondence more significantly than mRNAs in S. pombe. This demonstration of correspondence between cellular Raman spectra and transcriptomes is a promising step toward establishing spectroscopic live-cell omics studies. Copyright © 2018 Elsevier Inc. All rights reserved.
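
    The core computation is a linear map between two reduced representations, fitted by least squares; a toy numpy sketch with random matrices standing in for the spectral and transcriptomic components (not the paper's data or dimensions):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    raman = rng.normal(size=(20, 5))    # 20 conditions x 5 reduced spectral components
    trans = raman @ rng.normal(size=(5, 8)) + 0.1 * rng.normal(size=(20, 8))

    # Transformation matrix mapping Raman components to expression components.
    M, *_ = np.linalg.lstsq(raman, trans, rcond=None)
    pred = raman @ M                    # predicted expression profiles
    print(np.corrcoef(pred.ravel(), trans.ravel())[0, 1])
    ```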

  19. Linear gyrokinetic simulations of microinstabilities within the pedestal region of H-mode NSTX discharges in a highly shaped geometry

    DOE PAGES

    Coury, M.; Guttenfelder, W.; Mikkelsen, D. R.; ...

    2016-06-30

    Linear (local) gyrokinetic predictions of edge microinstabilities in highly shaped, lithiated and non-lithiated NSTX discharges are reported using the gyrokinetic code GS2. Microtearing modes dominate the non-lithiated pedestal top. The stabilization of these modes at the lithiated pedestal top enables the electron temperature pedestal to extend further inwards, as observed experimentally. Kinetic ballooning modes are found to be unstable mainly at the mid-pedestal of both types of discharges, with unstable trapped electron modes nearer the separatrix region. At electron wavelengths, ETG modes are found to be unstable from mid-pedestal outwards for η_e,exp ~ 2.2, with higher growth rates for the lithiated discharge. Near the separatrix, the critical temperature gradient for driving ETG modes is reduced in the presence of lithium, reflecting the reduction of the lithiated density gradients observed experimentally. A preliminary linear study in the edge of non-lithiated discharges shows that the equilibrium shaping alters the stability of the electrostatic modes, which are found to be more unstable at high plasma shaping.

  20. Step-response of a torsional device with multiple discontinuous non-linearities: Formulation of a vibratory experiment

    NASA Astrophysics Data System (ADS)

    Krak, Michael D.; Dreyer, Jason T.; Singh, Rajendra

    2016-03-01

    A vehicle clutch damper is intentionally designed to contain multiple discontinuous non-linearities, such as multi-staged springs, clearances, pre-loads, and multi-staged friction elements. The main purpose of this practical torsional device is to transmit a wide range of torque while isolating torsional vibration between an engine and transmission. Improved understanding of the dynamic behavior of the device could be facilitated by laboratory measurement, and thus a refined vibratory experiment is proposed. The experiment is conceptually described as a single degree of freedom non-linear torsional system that is excited by an external step torque. The single torsional inertia (consisting of a shaft and torsion arm) is coupled to ground through parallel production clutch dampers, which are characterized by quasi-static measurements provided by the manufacturer. Other experimental objectives address physical dimensions, system actuation, flexural modes, instrumentation, and signal processing issues. Typical measurements show that the step response of the device is characterized by three distinct non-linear regimes (double-sided impact, single-sided impact, and no-impact). Each regime is directly related to the non-linear features of the device and can be described by peak angular acceleration values. Predictions of a simplified single degree of freedom non-linear model verify that the experiment performs well and as designed. Accordingly, the benchmark measurements could be utilized to validate non-linear models and simulation codes, as well as characterize dynamic parameters of the device including its dissipative properties.
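
    The step response described above can be sketched with a minimal single degree of freedom model: an inertia coupled to ground through a spring with clearance (one of the discontinuous non-linearities mentioned), driven by a step torque. Parameter values below are invented for illustration, not taken from the paper.

      import numpy as np
      from scipy.integrate import solve_ivp

      J = 0.01      # torsional inertia, kg m^2
      k = 50.0      # stage stiffness, N m / rad
      c = 0.05      # viscous damping, N m s / rad
      delta = 0.02  # half-width of the clearance (backlash), rad
      T_step = 2.0  # applied step torque, N m

      def spring_torque(theta):
          # Zero restoring torque inside the clearance, linear outside it
          if abs(theta) <= delta:
              return 0.0
          return k * (theta - np.sign(theta) * delta)

      def rhs(t, y):
          theta, omega = y
          return [omega, (T_step - c * omega - spring_torque(theta)) / J]

      sol = solve_ivp(rhs, (0.0, 0.5), [0.0, 0.0], max_step=1e-4)
      alpha = np.gradient(sol.y[1], sol.t)        # angular acceleration history
      print(f"peak angular acceleration: {np.max(np.abs(alpha)):.1f} rad/s^2")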

  1. Flowfield Comparisons from Three Navier-Stokes Solvers for an Axisymmetric Separate Flow Jet

    NASA Technical Reports Server (NTRS)

    Koch, L. Danielle; Bridges, James; Khavaran, Abbas

    2002-01-01

    To meet new noise reduction goals, many concepts to enhance mixing in the exhaust jets of turbofan engines are being studied. Accurate steady state flowfield predictions from state-of-the-art computational fluid dynamics (CFD) solvers are needed as input to the latest noise prediction codes. The main intent of this paper was to ascertain that similar Navier-Stokes solvers run at different sites would yield comparable results for an axisymmetric two-stream nozzle case. Predictions from the WIND and the NPARC codes are compared to previously reported experimental data and results from the CRAFT Navier-Stokes solver. Similar k-epsilon turbulence models were employed in each solver, and identical computational grids were used. Agreement between experimental data and predictions from each code was generally good for mean values. All three codes underpredict the maximum value of turbulent kinetic energy. The predicted locations of the maximum turbulent kinetic energy were farther downstream than seen in the data. A grid study was conducted using the WIND code, and comments about convergence criteria and grid requirements for CFD solutions to be used as input for noise prediction computations are given. Additionally, noise predictions from the MGBK code, using the CFD results from the CRAFT code, NPARC, and WIND as input are compared to data.

  2. Development and verification of NRC's single-rod fuel performance codes FRAPCON-3 and FRAPTRAN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beyer, C.E.; Cunningham, M.E.; Lanning, D.D.

    1998-03-01

    The FRAPCON and FRAP-T code series, developed in the 1970s and early 1980s, are used by the US Nuclear Regulatory Commission (NRC) to predict fuel performance during steady-state and transient power conditions, respectively. Both code series are now being updated by Pacific Northwest National Laboratory to improve their predictive capabilities at high burnup levels. The newest versions of the codes are called FRAPCON-3 and FRAPTRAN. The updates to fuel property and behavior models are focusing on providing best estimate predictions under steady-state and fast transient power conditions up to extended fuel burnups (> 55 GWd/MTU). Both codes will be assessed against a data base independent of the data base used for code benchmarking, and an estimate of code predictive uncertainties will be made based on comparisons to the benchmark and independent data bases.

  3. Towards accurate cosmological predictions for rapidly oscillating scalar fields as dark matter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ureña-López, L. Arturo; Gonzalez-Morales, Alma X., E-mail: lurena@ugto.mx, E-mail: alma.gonzalez@fisica.ugto.mx

    2016-07-01

    As we are entering the era of precision cosmology, it is necessary to count on accurate cosmological predictions from any proposed model of dark matter. In this paper we present a novel approach to the cosmological evolution of scalar fields that eases their analytic and numerical analysis at the background and at the linear order of perturbations. The new method makes use of appropriate angular variables that simplify the writing of the equations of motion, and which also show that the usual field variables play a secondary role in the cosmological dynamics. We apply the method to a scalar field endowed with a quadratic potential and revisit its properties as dark matter. Some of the results known in the literature are recovered, and a better understanding of the physical properties of the model is provided. It is confirmed that there exists a Jeans wavenumber k_J, directly related to the suppression of linear perturbations at wavenumbers k > k_J, and which is verified to be k_J = a√(mH). We also discuss some semi-analytical results that are well satisfied by the full numerical solutions obtained from an amended version of the CMB code CLASS. Finally we draw some of the implications that this new treatment of the equations of motion may have in the prediction of cosmological observables from scalar field dark matter models.
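
    For reference, the quoted Jeans scale can be written out explicitly (a restatement of the abstract's result under the standard convention for a quadratic potential, not a new derivation):

      % Jeans wavenumber for a scalar field with quadratic potential
      % V(phi) = m^2 phi^2 / 2; modes with k > k_J have suppressed linear growth.
      \[
        k_J \;=\; a\,\sqrt{m\,H}\,, \qquad
        \delta_k \ \text{suppressed for } k > k_J ,
      \]
      where $a$ is the scale factor, $m$ the scalar-field mass, and $H$ the
      Hubble parameter.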

  4. A purely Lagrangian method for computing linearly-perturbed flows in spherical geometry

    NASA Astrophysics Data System (ADS)

    Jaouen, Stéphane

    2007-07-01

    In many physical applications, one wishes to control the development of multi-dimensional instabilities around a one-dimensional (1D) complex flow. For predicting the growth rates of these perturbations, a viable general numerical approach consists in simultaneously solving the one-dimensional equations and their linearized form for three-dimensional perturbations. In Clarisse et al. [J.-M. Clarisse, S. Jaouen, P.-A. Raviart, A Godunov-type method in Lagrangian coordinates for computing linearly-perturbed planar-symmetric flows of gas dynamics, J. Comp. Phys. 198 (2004) 80-105], a class of Godunov-type schemes for planar-symmetric flows of gas dynamics has been proposed. Pursuing this effort, we extend these results to spherically symmetric flows. A new method to derive the Lagrangian perturbation equations, based on the canonical form of systems of conservation laws with zero entropy flux [B. Després, Lagrangian systems of conservation laws. Invariance properties of Lagrangian systems of conservation laws, approximate Riemann solvers and the entropy condition, Numer. Math. 89 (2001) 99-134; B. Després, C. Mazeran, Lagrangian gas dynamics in two dimensions and Lagrangian systems, Arch. Rational Mech. Anal. 178 (2005) 327-372] is also described. It leads to many advantages. First of all, many physical problems we are interested in enter this formalism (gas dynamics, two-temperature plasma equations, ideal magnetohydrodynamics, etc.) whatever the geometry. Secondly, a class of numerical entropic schemes is available for the basic flow [11]. Lastly, linearizing and devising numerical schemes for the perturbed flow is straightforward. The numerical capabilities of these methods are illustrated on three test cases of increasing difficulty and we show that - due to its simplicity and its low computational cost - the Linear Perturbations Code (LPC) is a powerful tool to understand and predict the development of hydrodynamic instabilities in the linear regime.

  5. Influence of flowfield and vehicle parameters on engineering aerothermal methods

    NASA Technical Reports Server (NTRS)

    Wurster, Kathryn E.; Zoby, E. Vincent; Thompson, Richard A.

    1989-01-01

    The reliability and flexibility of three engineering codes used in the aerospace industry (AEROHEAT, INCHES, and MINIVER) were investigated by comparing the results of these codes with Reentry F flight data and ground-test heat-transfer data for a range of cone angles, and with the predictions obtained using the detailed VSL3D code; the engineering solutions were also compared. In particular, the impact of several vehicle and flow-field parameters on the heat transfer, and the capability of the engineering codes to predict these results, were determined. It was found that entropy, pressure gradient, nose bluntness, gas chemistry, and angle of attack all affect heating levels. A comparison of the results of the three engineering codes with Reentry F flight data and with the predictions of the VSL3D code showed very good agreement in the regions of applicability of the codes. It is emphasized that the parameters used in this study can significantly influence the actual heating levels and the prediction capability of a code.

  6. The Cortical Organization of Speech Processing: Feedback Control and Predictive Coding in the Context of a Dual-Stream Model

    ERIC Educational Resources Information Center

    Hickok, Gregory

    2012-01-01

    Speech recognition is an active process that involves some form of predictive coding. This statement is relatively uncontroversial. What is less clear is the source of the prediction. The dual-stream model of speech processing suggests that there are two possible sources of predictive coding in speech perception: the motor speech system and the…

  7. A high temperature fatigue life prediction computer code based on the total strain version of StrainRange Partitioning (SRP)

    NASA Technical Reports Server (NTRS)

    Mcgaw, Michael A.; Saltsman, James F.

    1993-01-01

    A recently developed high-temperature fatigue life prediction computer code is presented and an example of its usage given. The code discussed is based on the Total Strain version of Strainrange Partitioning (TS-SRP). Included in this code are procedures for characterizing the creep-fatigue durability behavior of an alloy according to TS-SRP guidelines and predicting cyclic life for complex cycle types for both isothermal and thermomechanical conditions. A reasonably extensive materials properties database is included with the code.

  8. Risk adjustment for health care financing in chronic disease: what are we missing by failing to account for disease severity?

    PubMed

    Omachi, Theodore A; Gregorich, Steven E; Eisner, Mark D; Penaloza, Renee A; Tolstykh, Irina V; Yelin, Edward H; Iribarren, Carlos; Dudley, R Adams; Blanc, Paul D

    2013-08-01

    Adjustment for differing risks among patients is usually incorporated into newer payment approaches, and current risk models rely on age, sex, and diagnosis codes. The extent to which additionally controlling for disease severity improves cost prediction is unknown. Failure to adjust for within-disease variation may create incentives to avoid sicker patients. We address this issue among patients with chronic obstructive pulmonary disease (COPD). Cost and clinical data were collected prospectively from 1202 COPD patients at Kaiser Permanente. Baseline analysis included age, sex, and diagnosis codes (using the Diagnostic Cost Group Relative Risk Score) in a general linear model predicting total medical costs in the following year. We determined whether adding COPD severity measures-forced expiratory volume in 1 second, 6-Minute Walk Test, dyspnea score, body mass index, and BODE Index (composite of the other 4 measures)-improved predictions. Separately, we examined household income as a cost predictor. Mean costs were $12,334/y. Controlling for Relative Risk Score, each ½ SD worsening in COPD severity factor was associated with $629 to $1135 in increased annual costs (all P<0.01). The lowest stratum of forced expiratory volume in 1 second (<30% normal) predicted $4098 (95% confidence interval, $576-$8773) additional costs. Household income predicted excess costs when added to the baseline model (P=0.038), but this became nonsignificant when also incorporating the BODE Index. Disease severity measures explain significant cost variations beyond current risk models, and adding them to such models appears important to fairly compensate organizations that accept responsibility for sicker COPD patients. Appropriately controlling for disease severity also accounts for costs otherwise associated with lower socioeconomic status.
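
    The comparison at the heart of the abstract, whether a severity covariate adds predictive power beyond a diagnosis-based risk score, can be sketched with an ordinary linear model on synthetic data. Variable names and effect sizes below are invented; only the modeling pattern mirrors the study.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 1202
      risk_score = rng.gamma(2.0, 1.0, n)         # stand-in for Relative Risk Score
      severity = rng.normal(0.0, 1.0, n)          # stand-in for, e.g., a BODE-like index
      cost = 12000 + 3000 * risk_score + 1000 * severity + rng.normal(0, 4000, n)

      def r2(X, y):
          # Ordinary least squares with intercept; report explained variance
          X1 = np.column_stack([np.ones(len(y)), X])
          beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
          resid = y - X1 @ beta
          return 1 - resid.var() / y.var()

      print(f"R^2, risk score only:       {r2(risk_score[:, None], cost):.3f}")
      print(f"R^2, risk score + severity: {r2(np.column_stack([risk_score, severity]), cost):.3f}")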

  9. Predicting rotation for ITER via studies of intrinsic torque and momentum transport in DIII-D

    DOE PAGES

    Chrystal, C.; Grierson, B. A.; Staebler, G. M.; ...

    2017-03-30

    Here, experiments at the DIII-D tokamak have used dimensionless parameter scans to investigate the dependencies of intrinsic torque and momentum transport in order to inform a prediction of the rotation profile in ITER. Measurements of intrinsic torque profiles and momentum confinement time in dimensionless parameter scans of normalized gyroradius and collisionality are used to predict the amount of intrinsic rotation in the pedestal of ITER. Additional scans of T_e/T_i and safety factor are used to determine the accuracy of momentum flux predictions of the quasi-linear gyrokinetic code TGLF. In these scans, applications of modulated torque are used to measure the incremental momentum diffusivity, and results are consistent with the E x B shear suppression of turbulent transport. These incremental transport measurements are also compared with the TGLF results. In order to form a prediction of the rotation profile for ITER, the pedestal prediction is used as a boundary condition to a simulation that uses TGLF to determine the transport in the core of the plasma. The predicted rotation is ≈20 krad/s in the core, lower than in many current tokamak operating scenarios. TGLF predictions show that this rotation is still significant enough to have a strong effect on confinement via E x B shear.

  10. Statistical Analysis of CFD Solutions from the Third AIAA Drag Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Morrison, Joseph H.; Hemsch, Michael J.

    2007-01-01

    The first AIAA Drag Prediction Workshop, held in June 2001, evaluated the results from an extensive N-version test of a collection of Reynolds-Averaged Navier-Stokes CFD codes. The code-to-code scatter was more than an order of magnitude larger than desired for design and experimental validation of cruise conditions for a subsonic transport configuration. The second AIAA Drag Prediction Workshop, held in June 2003, emphasized the determination of installed pylon-nacelle drag increments and grid refinement studies. The code-to-code scatter was significantly reduced compared to the first DPW, but still larger than desired. However, grid refinement studies showed no significant improvement in code-to-code scatter with increasing grid refinement. The third Drag Prediction Workshop focused on the determination of installed side-of-body fairing drag increments and grid refinement studies for clean attached flow on wing alone configurations and for separated flow on the DLR-F6 subsonic transport model. This work evaluated the effect of grid refinement on the code-to-code scatter for the clean attached flow test cases and the separated flow test cases.

  11. HSCT Ref-H Transonic Flap Data Base: Wind-Tunnel Test and Comparison with Theory

    NASA Technical Reports Server (NTRS)

    Vijgen, Paul M.

    1999-01-01

    In cooperation with personnel from the Boeing ANP Laboratory and NASA Langley, a performance test was conducted using the Reference-H 1.675% model ("NASA Modular Model") without nacelles at the NASA Langley 16-Ft Transonic Tunnel. The main objective of the test was to determine the drag reduction achievable with leading-edge and trailing-edge flaps deflected along the outboard wing span at transonic Mach numbers (M = 0.9 to 1.2) for purposes of preliminary design and for comparison with computational predictions. The obtained drag data with flap deflections for Mach numbers of 1.07 to 1.20 are unique for the Reference H wing. Four leading-edge and two trailing-edge flap deflection angles were tested at a mean-wing chord-Reynolds number of about 5.7 million. An outboard-wing leading-edge flap deflection of 8° provides a 4.5 percent drag reduction at M = 1.2 (A = 0.2), and much larger values at lower Mach numbers with larger flap deflections. The present results for the baseline (no flaps deflected) compare reasonably well with previous Boeing and NASA Ref-H tunnel tests, including high-Reynolds number NTF results. Viscous CFD simulations using the OVERFLOW thin-layer N.S. method properly predict the observed trend in drag reduction at M = 1.2 as a function of leading-edge flap deflection. Modified linear theory properly predicts the flap effects on drag at subsonic conditions (Aero2S code), and properly predicts the absolute drag for the 4° and 8° leading-edge deflections at M = 1.2 (A389 code).

  12. Predicting the Where and the How Big of Solar Flares

    NASA Astrophysics Data System (ADS)

    Barnes, Graham; Leka, K. D.; Gilchrist, Stuart

    2017-08-01

    The approach to predicting solar flares generally characterizes global properties of a solar active region, for example the total magnetic flux or the total length of a sheared magnetic neutral line, and compares new data (from which to make a prediction) to similar observations of active regions and their associated propensity for flare production. We take here a different tack, examining solar active regions in the context of their energy storage capacity. Specifically, we characterize not the region as a whole, but summarize the energy-release prospects of different sub-regions within, using a sub-area analysis of the photospheric boundary, the CFIT non-linear force-free extrapolation code, and the Minimum Current Corona model. We present here early results from this approach whose objective is to understand the different pathways available for regions to release stored energy, thus eventually providing better estimates of the where (what sub-areas are storing how much energy) and the how big (how much energy is stored, and how much is available for release) of solar flares.

  13. LIGKA: A linear gyrokinetic code for the description of background kinetic and fast particle effects on the MHD stability in tokamaks

    NASA Astrophysics Data System (ADS)

    Lauber, Ph.; Günter, S.; Könies, A.; Pinches, S. D.

    2007-09-01

    In a plasma with a population of super-thermal particles generated by heating or fusion processes, kinetic effects can lead to the additional destabilisation of MHD modes or even to additional energetic particle modes. In order to describe these modes, a new linear gyrokinetic MHD code has been developed and tested, LIGKA (linear gyrokinetic shear Alfvén physics) [Ph. Lauber, Linear gyrokinetic description of fast particle effects on the MHD stability in tokamaks, Ph.D. Thesis, TU München, 2003; Ph. Lauber, S. Günter, S.D. Pinches, Phys. Plasmas 12 (2005) 122501], based on a gyrokinetic model [H. Qin, Gyrokinetic theory and computational methods for electromagnetic perturbations in tokamaks, Ph.D. Thesis, Princeton University, 1998]. A finite Larmor radius expansion together with the construction of some fluid moments and specification to the shear Alfvén regime results in a self-consistent, electromagnetic, non-perturbative model, that allows not only for growing or damped eigenvalues but also for a change in mode-structure of the magnetic perturbation due to the energetic particles and background kinetic effects. Compared to previous implementations [H. Qin, mentioned above], this model is coded in a more general and comprehensive way. LIGKA uses a Fourier decomposition in the poloidal coordinate and a finite element discretisation in the radial direction. Both analytical and numerical equilibria can be treated. Integration over the unperturbed particle orbits is performed with the drift-kinetic HAGIS code [S.D. Pinches, Ph.D. Thesis, The University of Nottingham, 1996; S.D. Pinches et al., CPC 111 (1998) 131] which accurately describes the particles' trajectories. This allows finite-banana-width effects to be implemented in a rigorous way since the linear formulation of the model allows the exchange of the unperturbed orbit integration and the discretisation of the perturbed potentials in the radial direction. Successful benchmarks for toroidal Alfvén eigenmodes (TAEs) and kinetic Alfvén waves (KAWs) with analytical results, ideal MHD codes, drift-kinetic codes and other codes based on kinetic models are reported.

  14. Statistical Analysis of the AIAA Drag Prediction Workshop CFD Solutions

    NASA Technical Reports Server (NTRS)

    Morrison, Joseph H.; Hemsch, Michael J.

    2007-01-01

    The first AIAA Drag Prediction Workshop (DPW), held in June 2001, evaluated the results from an extensive N-version test of a collection of Reynolds-Averaged Navier-Stokes CFD codes. The code-to-code scatter was more than an order of magnitude larger than desired for design and experimental validation of cruise conditions for a subsonic transport configuration. The second AIAA Drag Prediction Workshop, held in June 2003, emphasized the determination of installed pylon-nacelle drag increments and grid refinement studies. The code-to-code scatter was significantly reduced compared to the first DPW, but still larger than desired. However, grid refinement studies showed no significant improvement in code-to-code scatter with increasing grid refinement. The third AIAA Drag Prediction Workshop, held in June 2006, focused on the determination of installed side-of-body fairing drag increments and grid refinement studies for clean attached flow on wing alone configurations and for separated flow on the DLR-F6 subsonic transport model. This report compares the transonic cruise prediction results of the second and third workshops using statistical analysis.

  15. Analysis of the runoff generation mechanism for the investigation of the SCS-CN method applicability to a partial area experimental watershed

    NASA Astrophysics Data System (ADS)

    Soulis, K. X.; Valiantzas, J. D.; Dercas, N.; Londra, P. A.

    2009-01-01

    The Soil Conservation Service Curve Number (SCS-CN) method is widely used for predicting direct runoff volume for a given rainfall event. The applicability of the SCS-CN method and the runoff generation mechanism were thoroughly analysed in a Mediterranean experimental watershed in Greece. The region is characterized by a Mediterranean semi-arid climate. A detailed land cover and soil survey using remote sensing and GIS techniques showed that the watershed is dominated by coarse soils with high hydraulic conductivities, whereas a smaller part is covered with medium textured soils and impervious surfaces. The analysis indicated that the SCS-CN method fails to predict runoff for the storm events studied, and that there is a strong correlation between the CN values obtained from measured runoff and the rainfall depth. The hypothesis that this correlation could be attributed to the existence of an impermeable part in a very permeable watershed was examined in depth, by developing a numerical simulation water flow model for predicting surface runoff generated from each of the three soil types of the watershed. Numerical runs were performed using the HYDRUS-1D code. The results support the validity of this hypothesis for most of the events examined where the linear runoff formula provides better results than the SCS-CN method. The runoff coefficient of this formula can be taken equal to the percentage of the impervious area. However, the linear formula should be applied with caution in case of extreme events with very high rainfall intensities. In this case, the medium textured soils may significantly contribute to the total runoff and the linear formula may significantly underestimate the runoff produced.

  16. Investigation of the direct runoff generation mechanism for the analysis of the SCS-CN method applicability to a partial area experimental watershed

    NASA Astrophysics Data System (ADS)

    Soulis, K. X.; Valiantzas, J. D.; Dercas, N.; Londra, P. A.

    2009-05-01

    The Soil Conservation Service Curve Number (SCS-CN) method is widely used for predicting direct runoff volume for a given rainfall event. The applicability of the SCS-CN method and the direct runoff generation mechanism were thoroughly analysed in a Mediterranean experimental watershed in Greece. The region is characterized by a Mediterranean semi-arid climate. A detailed land cover and soil survey using remote sensing and GIS techniques showed that the watershed is dominated by coarse soils with high hydraulic conductivities, whereas a smaller part is covered with medium textured soils and impervious surfaces. The analysis indicated that the SCS-CN method fails to predict runoff for the storm events studied, and that there is a strong correlation between the CN values obtained from measured runoff and the rainfall depth. The hypothesis that this correlation could be attributed to the existence of an impermeable part in a very permeable watershed was examined in depth, by developing a numerical simulation water flow model for predicting surface runoff generated from each of the three soil types of the watershed. Numerical runs were performed using the HYDRUS-1D code. The results support the validity of this hypothesis for most of the events examined where the linear runoff formula provides better results than the SCS-CN method. The runoff coefficient of this formula can be taken equal to the percentage of the impervious area. However, the linear formula should be applied with caution in case of extreme events with very high rainfall intensities. In this case, the medium textured soils may significantly contribute to the total runoff and the linear formula may significantly underestimate the runoff produced.
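
    The two formulas being compared are compact enough to sketch. The SCS-CN runoff equation below is the standard form (retention from the curve number, initial abstraction 0.2S); the linear formula takes the runoff coefficient equal to the impervious fraction, as the abstract suggests. The CN and impervious fraction are illustrative values, not the study's.

      def scs_cn_runoff(p_mm, cn):
          """Direct runoff depth (mm) for storm rainfall p_mm under curve number cn."""
          s = 25400.0 / cn - 254.0   # potential maximum retention, mm
          ia = 0.2 * s               # initial abstraction, mm
          if p_mm <= ia:
              return 0.0
          return (p_mm - ia) ** 2 / (p_mm - ia + s)

      def linear_runoff(p_mm, impervious_fraction):
          """Linear formula: runoff coefficient = impervious share of the watershed."""
          return impervious_fraction * p_mm

      for p in (10.0, 30.0, 60.0):
          print(f"P={p:5.1f} mm  SCS-CN(CN=75): {scs_cn_runoff(p, 75):6.2f} mm"
                f"  linear(10% impervious): {linear_runoff(p, 0.10):5.2f} mm")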

  17. Inter-view prediction of intra mode decision for high-efficiency video coding-based multiview video coding

    NASA Astrophysics Data System (ADS)

    da Silva, Thaísa Leal; Agostini, Luciano Volcan; da Silva Cruz, Luis A.

    2014-05-01

    Intra prediction is a very important tool in current video coding standards. High-efficiency video coding (HEVC) intra prediction presents relevant gains in encoding efficiency when compared to previous standards, but with a very important increase in the computational complexity since 33 directional angular modes must be evaluated. Motivated by this high complexity, this article presents a complexity reduction algorithm developed to reduce the HEVC intra mode decision complexity targeting multiview videos. The proposed algorithm presents an efficient fast intra prediction compliant with singleview and multiview video encoding. This fast solution defines a reduced subset of intra directions according to the video texture and it exploits the relationship between prediction units (PUs) of neighbor depth levels of the coding tree. This fast intra coding procedure is used to develop an inter-view prediction method, which exploits the relationship between the intra mode directions of adjacent views to further accelerate the intra prediction process in multiview video encoding applications. When compared to HEVC simulcast, our method achieves a complexity reduction of up to 47.77%, at the cost of an average BD-PSNR loss of 0.08 dB.
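
    A toy sketch of the mode-subset idea: instead of exhaustively evaluating all 33 angular modes, test only modes near those chosen by already-coded neighbour PUs and by the co-located PU in the adjacent view. The rate-distortion cost here is a random stand-in for an encoder's real cost, so the whole block is illustrative only, not the authors' algorithm.

      import random

      ALL_ANGULAR_MODES = range(2, 35)          # HEVC angular intra modes 2..34

      def rd_cost(mode, true_best=20):
          # Stand-in cost: cheapest near an assumed "true" direction, plus noise
          return abs(mode - true_best) + random.random()

      def fast_intra_decision(neighbour_modes, interview_mode, spread=2):
          # Candidate set: neighbour/adjacent-view predictions +/- a small spread
          seeds = set(neighbour_modes) | {interview_mode}
          candidates = {m for s in seeds for m in range(s - spread, s + spread + 1)
                        if 2 <= m <= 34}
          return min(candidates, key=rd_cost)

      random.seed(0)
      best = fast_intra_decision(neighbour_modes=[18, 21], interview_mode=19)
      print(f"selected mode {best} after testing a reduced candidate set")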

  18. Muon catalyzed fusion beam window mechanical strength testing and analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ware, A.G.; Zabriskie, J.M.

    A thin aluminum window (0.127 mm (0.005-inch) thick x 146 mm (5 3/4-inch) diameter) of 2024-T6 alloy was modeled and analyzed using the ABAQUS non-linear finite element analysis code. A group of windows was fabricated, heat-treated and subsequently tested. Testing included both ultimate burst pressure and fatigue. Fatigue testing cycles involved ''oil-canning'' behavior representing vacuum purge and reversal to pressure. Test results are compared to predictions and the mode of failure is discussed. Operational requirements, based on the above analysis and correlational testing, for the actual beam windows are discussed. 1 ref., 3 figs.

  19. Component-specific modeling

    NASA Technical Reports Server (NTRS)

    Mcknight, R. L.

    1985-01-01

    A series of interdisciplinary modeling and analysis techniques that were specialized to address three specific hot section components are presented. These techniques will incorporate data as well as theoretical methods from many diverse areas including cycle and performance analysis, heat transfer analysis, linear and nonlinear stress analysis, and mission analysis. Building on the proven techniques already available in these fields, the new methods developed will be integrated into computer codes to provide an accurate and unified approach to analyzing combustor burner liners, hollow air cooled turbine blades, and air cooled turbine vanes. For these components, the methods developed will predict temperature, deformation, stress and strain histories throughout a complete flight mission.

  20. Quasilinear Line Broadened Model for Energetic Particle Transport

    NASA Astrophysics Data System (ADS)

    Ghantous, Katy; Gorelenkov, Nikolai; Berk, Herbert

    2011-10-01

    We present a self-consistent quasi-linear model that describes wave-particle interaction in toroidal geometry and computes fast ion transport during TAE mode evolution. The model bridges the gap between single mode resonances, where it predicts the analytically expected saturation levels, and the case of multiple modes overlapping, where particles diffuse across phase space. Results are presented in the large aspect ratio limit, where analytic expressions are used for the Fourier harmonics of the power exchange between waves and particles. Implementation of a more realistic mode structure calculated by the NOVA-K code is also presented. This work is funded by DOE contract DE-AC02-09CH11466.

  1. Transonic Drag Prediction on a DLR-F6 Transport Configuration Using Unstructured Grid Solvers

    NASA Technical Reports Server (NTRS)

    Lee-Rausch, E. M.; Frink, N. T.; Mavriplis, D. J.; Rausch, R. D.; Milholen, W. E.

    2004-01-01

    A second international AIAA Drag Prediction Workshop (DPW-II) was organized and held in Orlando, Florida on June 21-22, 2003. The primary purpose was to investigate the code-to-code uncertainty, address the sensitivity of the drag prediction to grid size, and quantify the uncertainty in predicting nacelle/pylon drag increments at a transonic cruise condition. This paper presents an in-depth analysis of the DPW-II computational results from three state-of-the-art unstructured grid Navier-Stokes flow solvers exercised on similar families of tetrahedral grids. The flow solvers are USM3D, a tetrahedral cell-centered upwind solver; FUN3D, a tetrahedral node-centered upwind solver; and NSU3D, a general element node-centered central-differenced solver. For the wingbody, the total drag predicted for a constant-lift transonic cruise condition showed a decrease in code-to-code variation with grid refinement, as expected. For the same flight condition, the wing/body/nacelle/pylon total drag and the nacelle/pylon drag increment predicted showed an increase in code-to-code variation with grid refinement. Although the range in total drag for the wingbody fine grids was only 5 counts, a code-to-code comparison of surface pressures and surface restricted streamlines indicated that the three solvers were not all converging to the same flow solutions: different shock locations and separation patterns were evident. Similarly, the wing/body/nacelle/pylon solutions did not appear to be converging to the same flow solutions. Overall, grid refinement did not consistently improve the correlation with experimental data for either the wingbody or the wing/body/nacelle/pylon configuration. Although the absolute values of total drag predicted by two of the solvers for the medium and fine grids did not compare well with the experiment, the incremental drag predictions were within plus or minus 3 counts of the experimental data. The correlation with experimental incremental drag was not significantly changed by specifying transition. Although the sources of code-to-code variation in force and moment predictions for the three unstructured grid codes have not yet been identified, the current study reinforces the necessity of applying multiple codes to the same application to assess uncertainty.

  2. Linear calculations of edge current driven kink modes with BOUT++ code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, G. Q., E-mail: ligq@ipp.ac.cn; Xia, T. Y.; Lawrence Livermore National Laboratory, Livermore, California 94550

    This work extends previous BOUT++ work to systematically study the impact of edge current density on edge localized modes, and to benchmark with the GATO and ELITE codes. Using the CORSICA code, a set of equilibria was generated with different edge current densities by keeping total current and pressure profile fixed. Based on these equilibria, the effects of the edge current density on the MHD instabilities were studied with the 3-field BOUT++ code. For the linear calculations, with increasing edge current density, the dominant modes are changed from intermediate-n and high-n ballooning modes to low-n kink modes, and the linear growth rate becomes smaller. The edge current provides stabilizing effects on ballooning modes due to the increase of local shear at the outer mid-plane with the edge current. For edge kink modes, however, the edge current does not always provide a destabilizing effect; with increasing edge current, the linear growth rate first increases, and then decreases. In benchmark calculations for BOUT++ against the linear results with the GATO and ELITE codes, the vacuum model has important effects on the edge kink mode calculations. By setting a realistic density profile and Spitzer resistivity profile in the vacuum region, the resistivity was found to have a destabilizing effect on both the kink mode and on the ballooning mode. With diamagnetic effects included, the intermediate-n and high-n ballooning modes can be totally stabilized for finite edge current density.

  3. Linear microbunching analysis for recirculation machines

    DOE PAGES

    Tsai, C. -Y.; Douglas, D.; Li, R.; ...

    2016-11-28

    Microbunching instability (MBI) has been one of the most challenging issues in designs of magnetic chicanes for short-wavelength free-electron lasers or linear colliders, as well as those of transport lines for recirculating or energy-recovery-linac machines. To quantify MBI for a recirculating machine and for more systematic analyses, we have recently developed a linear Vlasov solver and incorporated relevant collective effects into the code, including the longitudinal space charge, coherent synchrotron radiation, and linac geometric impedances, with extension of the existing formulation to include beam acceleration. In our code, we semianalytically solve the linearized Vlasov equation for the microbunching amplification factor for an arbitrary linear lattice. In this study we apply our code to beam line lattices of two comparative isochronous recirculation arcs and one arc lattice preceded by a linac section. The resultant microbunching gain functions and spectral responses are presented, with some results compared to particle tracking simulation by elegant (M. Borland, APS Light Source Note No. LS-287, 2002). These results demonstrate clearly the impact of arc lattice design on the microbunching development. Lastly, the underlying physics with inclusion of those collective effects is elucidated and the limitation of the existing formulation is also discussed.

  4. Bit Error Probability for Maximum Likelihood Decoding of Linear Block Codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Fossorier, Marc P. C.; Rhee, Dojun

    1996-01-01

    In this paper, the bit error probability P_b for maximum likelihood decoding of binary linear codes is investigated. The contribution of each information bit to P_b is considered. For randomly generated codes, it is shown that the conventional approximation at high SNR, P_b ≈ (d_H/N)P_s, where P_s represents the block error probability, holds for systematic encoding only. Also, systematic encoding provides the minimum P_b when the inverse mapping corresponding to the generator matrix of the code is used to retrieve the information sequence. The bit error performances corresponding to other generator matrix forms are also evaluated. Although derived for codes with a randomly generated generator matrix, these results are shown to provide good approximations for codes used in practice. Finally, for decoding methods which require a generator matrix with a particular structure, such as trellis decoding or algebraic-based soft decision decoding, equivalent schemes that reduce the bit error probability are discussed.
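
    The high-SNR approximation can be checked numerically. The sketch below uses a systematic Hamming(7,4) code (d_H = 3) on a binary symmetric channel with syndrome decoding, which is maximum likelihood for this code; the abstract derives the approximation for randomly generated codes, so the Hamming code is only a convenient stand-in.

      import numpy as np

      rng = np.random.default_rng(2)
      P = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]])
      G = np.hstack([np.eye(4, dtype=int), P])      # systematic generator [I | P]
      H = np.hstack([P.T, np.eye(3, dtype=int)])    # parity-check matrix [P^T | I]
      col_to_pos = {tuple(H[:, j]): j for j in range(7)}

      p, trials = 0.02, 100_000                     # BSC crossover prob., blocks
      info = rng.integers(0, 2, size=(trials, 4))
      code = info @ G % 2
      recv = (code + (rng.random((trials, 7)) < p)) % 2

      bit_errs = blk_errs = 0
      for u, r in zip(info, recv):
          s = tuple(H @ r % 2)                      # syndrome
          if any(s):                                # single-error correction
              r = r.copy()
              r[col_to_pos[s]] ^= 1
          wrong = int(np.sum(r[:4] != u))           # info bits are systematic
          bit_errs += wrong
          blk_errs += wrong > 0

      Pb, Ps = bit_errs / (4 * trials), blk_errs / trials
      print(f"measured P_b = {Pb:.2e}, approximation (d_H/N)*P_s = {3 / 7 * Ps:.2e}")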

  5. Correlated variability modifies working memory fidelity in primate prefrontal neuronal ensembles

    PubMed Central

    Leavitt, Matthew L.; Pieper, Florian; Sachs, Adam J.; Martinez-Trujillo, Julio C.

    2017-01-01

    Neurons in the primate lateral prefrontal cortex (LPFC) encode working memory (WM) representations via sustained firing, a phenomenon hypothesized to arise from recurrent dynamics within ensembles of interconnected neurons. Here, we tested this hypothesis by using microelectrode arrays to examine spike count correlations (r_sc) in LPFC neuronal ensembles during a spatial WM task. We found a pattern of pairwise r_sc during WM maintenance indicative of stronger coupling between similarly tuned neurons and increased inhibition between dissimilarly tuned neurons. We then used a linear decoder to quantify the effects of the high-dimensional r_sc structure on information coding in the neuronal ensembles. We found that the r_sc structure could facilitate or impair coding, depending on the size of the ensemble and tuning properties of its constituent neurons. A simple optimization procedure demonstrated that near-maximum decoding performance could be achieved using a relatively small number of neurons. These WM-optimized subensembles were more signal-correlation (r_signal) diverse and anatomically dispersed than predicted by the statistics of the full recorded population of neurons, and they often contained neurons that were poorly WM-selective, yet enhanced coding fidelity by shaping the ensemble's r_sc structure. We observed a pattern of r_sc between LPFC neurons indicative of recurrent dynamics as a mechanism for WM-related activity and that the r_sc structure can increase the fidelity of WM representations. Thus, WM coding in LPFC neuronal ensembles arises from a complex synergy between single neuron coding properties and multidimensional, ensemble-level phenomena. PMID:28275096
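
    A toy version of the decoding analysis, on synthetic data: spike counts with a shared noise source are decoded with a Fisher linear discriminant, then the correlations are destroyed by shuffling trials within each condition and the decoder is rerun to see how accuracy changes. Population sizes and noise levels are invented; only the comparison pattern mirrors the study.

      import numpy as np

      rng = np.random.default_rng(5)
      n_trials, n_neurons = 400, 20
      tuning = rng.normal(0, 1, n_neurons)                 # per-neuron selectivity
      labels = rng.integers(0, 2, n_trials)                # remembered location (binary)
      shared = rng.normal(0, 1, (n_trials, 1))             # common noise -> r_sc
      counts = (labels[:, None] * tuning + 0.8 * shared
                + rng.normal(0, 1, (n_trials, n_neurons)))

      def lda_accuracy(X, y):
          # Fisher linear discriminant with a pooled (ridge-regularized) covariance
          m0, m1 = X[y == 0].mean(0), X[y == 1].mean(0)
          Xc = np.where(y[:, None] == 0, X - m0, X - m1)
          w = np.linalg.solve(np.cov(Xc.T) + 1e-6 * np.eye(X.shape[1]), m1 - m0)
          pred = (X @ w > (m0 + m1) @ w / 2).astype(int)
          return np.mean(pred == y)

      shuffled = counts.copy()
      for j in range(n_neurons):                           # shuffle trials within class
          for c in (0, 1):
              idx = np.where(labels == c)[0]
              shuffled[idx, j] = shuffled[rng.permutation(idx), j]

      print(f"accuracy, intact r_sc:   {lda_accuracy(counts, labels):.2f}")
      print(f"accuracy, shuffled r_sc: {lda_accuracy(shuffled, labels):.2f}")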

  6. Advanced turboprop noise prediction: Development of a code at NASA Langley based on recent theoretical results

    NASA Technical Reports Server (NTRS)

    Farassat, F.; Dunn, M. H.; Padula, S. L.

    1986-01-01

    The development of a high speed propeller noise prediction code at Langley Research Center is described. The code utilizes two recent acoustic formulations in the time domain for subsonic and supersonic sources. The structure and capabilities of the code are discussed. A grid-size study for accuracy and speed of execution on a computer is also presented. The code is tested against an earlier Langley code. Considerable increases in accuracy and speed of execution are observed. Some examples of noise prediction for a high speed propeller for which acoustic test data are available are given. A brief derivation of the formulations used is given in an appendix.

  7. Product diffusion through on-demand information-seeking behaviour.

    PubMed

    Riedl, Christoph; Bjelland, Johannes; Canright, Geoffrey; Iqbal, Asif; Engø-Monsen, Kenth; Qureshi, Taimur; Sundsøy, Pål Roe; Lazer, David

    2018-02-01

    Most models of product adoption predict S-shaped adoption curves. Here we report results from two country-scale experiments in which we find linear adoption curves. We show evidence that the observed linear pattern is the result of active information-seeking behaviour: individuals actively pulling information from several central sources facilitated by modern Internet searches. Thus, a constant baseline rate of interest sustains product diffusion, resulting in a linear diffusion process instead of the S-shaped curve of adoption predicted by many diffusion models. The main experiment seeded 70 000 (48 000 in Experiment 2) unique voucher codes for the same product with randomly sampled nodes in a social network of approximately 43 million individuals with about 567 million ties. We find that the experiment reached over 800 000 individuals with 80% of adopters adopting the same product, a winner-take-all dynamic consistent with search engine driven rankings that would not have emerged had the products spread only through a network of social contacts. We provide evidence for (and characterization of) this diffusion process driven by active information-seeking behaviour through analyses investigating (a) patterns of geographical spreading; (b) the branching process; and (c) diffusion heterogeneity. Using data on adopters' geolocation we show that social spreading is highly localized, while on-demand diffusion is geographically independent. We also show that cascades started by individuals who actively pull information from central sources are more effective at spreading the product among their peers. © 2018 The Authors.
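
    The contrast the authors draw can be reproduced with two toy hazard models: a Bass-style process whose adoption rate grows with the installed base (S-shaped), and a constant per-capita seeking rate (near-linear while adopters remain a small fraction of the population). The rates below are invented for illustration.

      import numpy as np

      N, T = 1_000_000, 200                # population size, days
      bass = np.zeros(T)
      ondemand = np.zeros(T)
      p_imit, p_base = 0.05, 0.002         # imitation vs constant seeking rates

      for t in range(1, T):
          # Bass-style: hazard grows with the installed base (social spreading)
          f = bass[t - 1] / N
          bass[t] = bass[t - 1] + (0.0005 + p_imit * f) * (N - bass[t - 1])
          # On-demand: constant hazard, independent of who has already adopted
          ondemand[t] = ondemand[t - 1] + p_base * (N - ondemand[t - 1])

      print("day  50:", int(bass[50]), "(bass) vs", int(ondemand[50]), "(on-demand)")
      print("day 150:", int(bass[150]), "(bass) vs", int(ondemand[150]), "(on-demand)")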

  8. Product diffusion through on-demand information-seeking behaviour

    PubMed Central

    Bjelland, Johannes; Canright, Geoffrey; Iqbal, Asif; Qureshi, Taimur; Sundsøy, Pål Roe

    2018-01-01

    Most models of product adoption predict S-shaped adoption curves. Here we report results from two country-scale experiments in which we find linear adoption curves. We show evidence that the observed linear pattern is the result of active information-seeking behaviour: individuals actively pulling information from several central sources facilitated by modern Internet searches. Thus, a constant baseline rate of interest sustains product diffusion, resulting in a linear diffusion process instead of the S-shaped curve of adoption predicted by many diffusion models. The main experiment seeded 70 000 (48 000 in Experiment 2) unique voucher codes for the same product with randomly sampled nodes in a social network of approximately 43 million individuals with about 567 million ties. We find that the experiment reached over 800 000 individuals with 80% of adopters adopting the same product—a winner-take-all dynamic consistent with search engine driven rankings that would not have emerged had the products spread only through a network of social contacts. We provide evidence for (and characterization of) this diffusion process driven by active information-seeking behaviour through analyses investigating (a) patterns of geographical spreading; (b) the branching process; and (c) diffusion heterogeneity. Using data on adopters' geolocation we show that social spreading is highly localized, while on-demand diffusion is geographically independent. We also show that cascades started by individuals who actively pull information from central sources are more effective at spreading the product among their peers. PMID:29467257

  9. Enhancing Scalability and Efficiency of the TOUGH2_MP for Linux Clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Keni; Wu, Yu-Shu

    2006-04-17

    TOUGH2_MP, the parallel version of the TOUGH2 code, has been enhanced by implementing more efficient communication schemes. This enhancement is achieved through reducing the number of small-size messages and the volume of large messages. The message exchange speed is further improved by using non-blocking communications for both linear and nonlinear iterations. In addition, we have modified the AZTEC parallel linear-equation solver to use non-blocking communication. Through improved code structuring and bug fixing, the new version of the code is now more stable, while demonstrating similar or even better nonlinear iteration convergence speed than the original TOUGH2 code. As a result, the new version of TOUGH2_MP is improved significantly in its efficiency. In this paper, the scalability and efficiency of the parallel code are demonstrated by solving two large-scale problems. The testing results indicate that the speedup of the code may depend on both problem size and complexity. In general, the code has excellent scalability in memory requirement as well as computing time.
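
    In outline, the non-blocking exchange pattern the abstract credits for the speedup looks like the sketch below, written with mpi4py (assumed available). This is a generic halo-exchange pattern that overlaps communication with local work, not TOUGH2_MP source.

      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()
      left, right = (rank - 1) % size, (rank + 1) % size

      send_r = np.full(8, float(rank))     # boundary data for the right neighbour
      recv_l = np.empty(8)                 # halo buffer from the left neighbour

      # Post the receive and send without blocking...
      reqs = [comm.Irecv(recv_l, source=left, tag=0),
              comm.Isend(send_r, dest=right, tag=0)]

      # ...do interior work while messages are in flight, then wait
      interior = np.sum(np.arange(1000))
      MPI.Request.Waitall(reqs)
      print(f"rank {rank}: halo from {left} = {recv_l[0]:.0f}, interior = {interior}")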

  11. CELFE/NASTRAN Code for the Analysis of Structures Subjected to High Velocity Impact

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.

    1978-01-01

    The CELFE (Coupled Eulerian Lagrangian Finite Element)/NASTRAN code is a three-dimensional finite element code with the capability for analyzing structures subjected to high velocity impact. The local response is predicted by CELFE and, for large problems, the far-field impact response is predicted by NASTRAN. The coupling of the CELFE code with NASTRAN (the CELFE/NASTRAN code) and the application of the code to selected three-dimensional high velocity impact problems are described.

  12. Indexing sensory plasticity: Evidence for distinct Predictive Coding and Hebbian learning mechanisms in the cerebral cortex.

    PubMed

    Spriggs, M J; Sumner, R L; McMillan, R L; Moran, R J; Kirk, I J; Muthukumaraswamy, S D

    2018-04-30

    The Roving Mismatch Negativity (MMN), and Visual LTP paradigms are widely used as independent measures of sensory plasticity. However, the paradigms are built upon fundamentally different (and seemingly opposing) models of perceptual learning; namely, Predictive Coding (MMN) and Hebbian plasticity (LTP). The aim of the current study was to compare the generative mechanisms of the MMN and visual LTP, therefore assessing whether Predictive Coding and Hebbian mechanisms co-occur in the brain. Forty participants were presented with both paradigms during EEG recording. Consistent with Predictive Coding and Hebbian predictions, Dynamic Causal Modelling revealed that the generation of the MMN modulates forward and backward connections in the underlying network, while visual LTP only modulates forward connections. These results suggest that both Predictive Coding and Hebbian mechanisms are utilized by the brain under different task demands. This therefore indicates that both tasks provide unique insight into plasticity mechanisms, which has important implications for future studies of aberrant plasticity in clinical populations. Copyright © 2018 Elsevier Inc. All rights reserved.

  13. Progress on the DPASS project

    NASA Astrophysics Data System (ADS)

    Galkin, Sergei A.; Bogatu, I. N.; Svidzinski, V. A.

    2015-11-01

    A novel project to develop Disruption Prediction And Simulation Suite (DPASS) of comprehensive computational tools to predict, model, and analyze disruption events in tokamaks has been recently started at FAR-TECH Inc. DPASS will eventually address the following aspects of the disruption problem: MHD, plasma edge dynamics, plasma-wall interaction, generation and losses of runaway electrons. DPASS uses the 3-D Disruption Simulation Code (DSC-3D) as a core tool and will have a modular structure. DSC is a one-fluid, non-linear, time-dependent 3D MHD code to simulate dynamics of tokamak plasma surrounded by a pure vacuum B-field in the real geometry of a conducting tokamak vessel. DSC utilizes the adaptive meshless technique with adaptation to the moving plasma boundary, with accurate magnetic flux conservation and resolution of the plasma surface current. DSC also has an option to neglect the plasma inertia to eliminate the fast magnetosonic scale. This option can be turned on/off as needed. During Phase I of the project, two modules will be developed: the computational module for modeling the massive gas injection and the main plasma response; and the module for nanoparticle plasma jet injection as an innovative disruption mitigation scheme. We report on the progress of this development. Work is supported by the US DOE SBIR grant # DE-SC0013727.

  14. Can evaluation of a dental procedure at the outset of learning predict later performance at the preclinical level? A pilot study.

    PubMed

    Polyzois, Ioannis; Claffey, Noel; McDonald, Albhe; Hussey, David; Quinn, Frank

    2011-05-01

    The purpose of this study was to examine the effectiveness of conventional pre-clinical training in dentistry and to determine if evaluation of a dental procedure at the beginning of dental training can be a predictor for future performance. A group of second year dental students with no previous experience in operative dentistry were asked to prepare a conventional class I cavity on a lower first molar typodont. Their first preparation was carried out after an introductory lecture and a demonstration and their second at the end of conventional training. The prepared typodonts were coded and blindly scored for the traditional assessment criteria of outline form, retention form, smoothness, cavity depth and cavity margin angulation. Once the codes were broken, a paired t-test was used to compare the difference between the means of before and after scores (P<0.0001) and a Pearson's linear correlation to test the association (r=0.4). From the results of this study, we could conclude that conventional preclinical training results in a significant improvement in the manual skills of the dental students and that the dental procedure used had only a limited predictive value for later performance at the preclinical level. © 2011 John Wiley & Sons A/S.
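
    The statistics reported above (a paired t-test on before/after scores and a Pearson correlation between them) are straightforward to reproduce on synthetic data; the score distributions below are invented to roughly echo the reported effect, not taken from the study.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)
      before = rng.normal(55, 10, 30)                 # initial cavity-prep scores
      after = before * 0.4 + rng.normal(45, 8, 30)    # improved, weakly coupled

      t, p = stats.ttest_rel(after, before)           # paired t-test
      r, _ = stats.pearsonr(before, after)            # before/after association
      print(f"paired t-test: t={t:.2f}, p={p:.2g};  Pearson r={r:.2f}")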

  15. Development of Computational Aeroacoustics Code for Jet Noise and Flow Prediction

    NASA Astrophysics Data System (ADS)

    Keith, Theo G., Jr.; Hixon, Duane R.

    2002-07-01

    Accurate prediction of jet fan and exhaust plume flow and noise generation and propagation is very important in developing advanced aircraft engines that will pass current and future noise regulations. In jet fan flows as well as exhaust plumes, two major sources of noise are present: large-scale, coherent instabilities and small-scale turbulent eddies. In previous work for the NASA Glenn Research Center, three strategies have been explored in an effort to computationally predict the noise radiation from supersonic jet exhaust plumes. In order from the least expensive computationally to the most expensive computationally, these are: 1) Linearized Euler equations (LEE). 2) Very Large Eddy Simulations (VLES). 3) Large Eddy Simulations (LES). The first method solves the linearized Euler equations (LEE). These equations are obtained by linearizing about a given mean flow and neglecting viscous effects. In this way, the noise from large-scale instabilities can be found for a given mean flow. The linearized Euler equations are computationally inexpensive, and have produced good noise results for supersonic jets where the large-scale instability noise dominates, as well as for the tone noise from a jet engine blade row. However, these linear equations do not predict the absolute magnitude of the noise; instead, only the relative magnitude is predicted. Also, the predicted disturbances do not modify the mean flow, removing a physical mechanism by which the amplitude of the disturbance may be controlled. Recent research for isolated airfoils indicates that this may not affect the solution greatly at low frequencies. The second method addresses some of the concerns raised by the LEE method. In this approach, called Very Large Eddy Simulation (VLES), the unsteady Reynolds averaged Navier-Stokes equations are solved directly using a high-accuracy computational aeroacoustics numerical scheme. With the addition of a two-equation turbulence model and the use of a relatively coarse grid, the numerical solution is effectively filtered into a directly calculated mean flow with the small-scale turbulence being modeled, and an unsteady large-scale component that is also being directly calculated. In this way, the unsteady disturbances are calculated in a nonlinear way, with a direct effect on the mean flow. This method is not as fast as the LEE approach, but does have many advantages to recommend it; however, like the LEE approach, only the effect of the largest unsteady structures will be captured. An initial calculation was performed on a supersonic jet exhaust plume, with promising results, but the calculation was hampered by the explicit time marching scheme that was employed. This explicit scheme required a very small time step to resolve the nozzle boundary layer, which caused a long run time. Current work is focused on testing a lower-order implicit time marching method to combat this problem.
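
    The linearization step behind the LEE strategy can be stated schematically (a generic one-dimensional form, not the specific formulation used in the code described above):

      % Split the state into a steady mean flow and a small disturbance and keep
      % first-order terms; viscous effects are neglected.
      \[
        U = \bar{U}(x) + U'(x,t), \qquad
        \frac{\partial U'}{\partial t}
        + \frac{\partial}{\partial x}\!\left( A(\bar{U})\, U' \right) = 0,
        \qquad A = \frac{\partial F}{\partial U},
      \]
      where $F(U)$ is the Euler flux and the steady mean flow satisfies
      $\partial F(\bar{U})/\partial x = 0$.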

  16. Correlation approach to identify coding regions in DNA sequences

    NASA Technical Reports Server (NTRS)

    Ossadnik, S. M.; Buldyrev, S. V.; Goldberger, A. L.; Havlin, S.; Mantegna, R. N.; Peng, C. K.; Simons, M.; Stanley, H. E.

    1994-01-01

    Recently, it was observed that noncoding regions of DNA sequences possess long-range power-law correlations, whereas coding regions typically display only short-range correlations. We develop an algorithm based on this finding that enables investigators to perform a statistical analysis on long DNA sequences to locate possible coding regions. The algorithm is particularly successful in predicting the location of lengthy coding regions. For example, for the complete genome of yeast chromosome III (315,344 nucleotides), at least 82% of the predictions correspond to putative coding regions; the algorithm correctly identified all coding regions larger than 3000 nucleotides, 92% of coding regions between 2000 and 3000 nucleotides long, and 79% of coding regions between 1000 and 2000 nucleotides. The predictive ability of this new algorithm supports the claim that there is a fundamental difference in the correlation property between coding and noncoding sequences. This algorithm, which is not species-dependent, can be implemented with other techniques for rapidly and accurately locating relatively long coding regions in genomic sequences.
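
    The idea of separating coding from noncoding DNA by its correlation properties can be illustrated with a toy detrended fluctuation analysis of a purine/pyrimidine random walk; this is a schematic sketch of the general approach, not the published algorithm's exact windowing scheme:

        # Toy DFA of a DNA walk: long-range correlated (noncoding-like) DNA gives
        # a scaling exponent alpha > 0.5; uncorrelated (coding-like) DNA gives
        # alpha ~ 0.5. The sequence below is random, so alpha should be ~0.5.
        import numpy as np

        def dfa_exponent(seq, scales=(4, 8, 16, 32, 64)):
            walk = np.cumsum([1 if b in "AG" else -1 for b in seq])  # purine = +1
            fluct = []
            for n in scales:
                m = len(walk) // n
                segs = walk[:m * n].reshape(m, n)
                x = np.arange(n)
                f2 = [np.mean((s - np.polyval(np.polyfit(x, s, 1), x)) ** 2)
                      for s in segs]                    # detrend each window
                fluct.append(np.sqrt(np.mean(f2)))
            # slope of log F(n) vs log n estimates the correlation exponent
            return np.polyfit(np.log(scales), np.log(fluct), 1)[0]

        rng = np.random.default_rng(0)
        random_seq = "".join(rng.choice(list("ACGT"), size=4096))
        print(f"alpha = {dfa_exponent(random_seq):.2f}")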

  17. Calibration and comparison of the NASA Lewis free-piston Stirling engine model predictions with RE-1000 test data

    NASA Technical Reports Server (NTRS)

    Geng, Steven M.

    1987-01-01

    A free-piston Stirling engine performance code is being upgraded and validated at the NASA Lewis Research Center under an interagency agreement between the Department of Energy's Oak Ridge National Laboratory and NASA Lewis. Many modifications were made to the free-piston code in an attempt to decrease the calibration effort. A procedure was developed that made the code calibration process more systematic. Engine-specific calibration parameters are often used to bring predictions and experimental data into better agreement. The code was calibrated to a matrix of six experimental data points. Predictions of the calibrated free-piston code are compared with RE-1000 free-piston Stirling engine sensitivity test data taken at NASA Lewis. Reasonable agreement was obtained between the code prediction and the experimental data over a wide range of engine operating conditions.

  19. Early post-stroke cognition in stroke rehabilitation patients predicts functional outcome at 13 months.

    PubMed

    Wagle, Jørgen; Farner, Lasse; Flekkøy, Kjell; Bruun Wyller, Torgeir; Sandvik, Leiv; Fure, Brynjar; Stensrød, Brynhild; Engedal, Knut

    2011-01-01

    To identify prognostic factors associated with functional outcome at 13 months in a sample of stroke rehabilitation patients. Specifically, we hypothesized that cognitive functioning early after stroke would predict long-term functional outcome independently of other factors. 163 stroke rehabilitation patients underwent a structured neuropsychological examination 2-3 weeks after hospital admittance, and their functional status was subsequently evaluated 13 months later with the modified Rankin Scale (mRS) as outcome measure. Three predictive models were built using linear regression analyses: a biological model (sociodemographics, apolipoprotein E genotype, prestroke vascular factors, lesion characteristics and neurological stroke-related impairment); a functional model (pre- and early post-stroke cognitive functioning, personal and instrumental activities of daily living (ADL), and depressive symptoms); and a combined model (including significant variables, with p value <0.05, from the biological and functional models). A combined model of 4 variables best predicted long-term functional outcome with an explained variance of 49%: neurological impairment (National Institute of Health Stroke Scale; β = 0.402, p < 0.001), age (β = 0.233, p = 0.001), post-stroke cognitive functioning (Repeatable Battery of Neuropsychological Status, RBANS; β = -0.248, p = 0.001) and prestroke personal ADL (Barthel Index; β = -0.217, p = 0.002). Further linear regression analyses of which RBANS indexes and subtests best predicted long-term functional outcome showed that Coding (β = -0.484, p < 0.001) and Figure Copy (β = -0.233, p = 0.002) raw scores at baseline explained 42% of the variance in mRS scores at follow-up. Early post-stroke cognitive functioning as measured by the RBANS is a significant and independent predictor of long-term functional post-stroke outcome. Copyright © 2011 S. Karger AG, Basel.
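
    The combined-model regression has the familiar multivariable linear form; a sketch with simulated stand-ins for the four predictors (none of the study's data are used):

        # Combined linear model mRS ~ NIHSS + age + RBANS + Barthel, mirroring
        # the regression structure above; all data here are simulated.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        n = 163
        X = np.column_stack([
            rng.normal(8, 5, n),    # NIHSS (neurological impairment)
            rng.normal(70, 10, n),  # age
            rng.normal(85, 12, n),  # RBANS total (post-stroke cognition)
            rng.normal(95, 8, n),   # prestroke Barthel Index
        ])
        beta = np.array([0.12, 0.03, -0.02, -0.02])   # invented effect sizes
        mrs = 1.5 + X @ beta + rng.normal(0, 0.8, n)

        fit = sm.OLS(mrs, sm.add_constant(X)).fit()
        print(fit.rsquared)   # cf. the 49% explained variance reported above
        print(fit.params)     # signed coefficients, cf. the standardized betas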

  20. Applications of Coding in Network Communications

    ERIC Educational Resources Information Center

    Chang, Christopher SungWook

    2012-01-01

    This thesis uses the tool of network coding to investigate fast peer-to-peer file distribution, anonymous communication, robust network construction under uncertainty, and prioritized transmission. In a peer-to-peer file distribution system, we use a linear optimization approach to show that the network coding framework significantly simplifies…

  1. Soil amplification with a strong impedance contrast: Boston, Massachusetts

    USGS Publications Warehouse

    Baise, Laurie G.; Kaklamanos, James; Berry, Bradford M; Thompson, Eric M.

    2016-01-01

    In this study, we evaluate the effect of strong sediment/bedrock impedance contrasts on soil amplification in Boston, Massachusetts, for typical sites along the Charles and Mystic Rivers. These sites can be characterized by artificial fill overlying marine sediments overlying glacial till and bedrock, where the depth to bedrock ranges from 20 to 80 m. The marine sediments generally consist of organic silts, sand, and Boston Blue Clay. We chose these sites because they represent typical foundation conditions in the city of Boston, and the soil conditions are similar to other high impedance contrast environments. The sediment/bedrock interface in this region results in an impedance ratio on the order of ten, which in turn results in a significant amplification of the ground motion. Using stratigraphic information derived from numerous boreholes across the region paired with geologic and geomorphologic constraints, we develop a depth-to-bedrock model for the greater Boston region. Using shear-wave velocity profiles from 30 locations, we develop average velocity profiles for sites mapped as artificial fill, glaciofluvial deposits, and bedrock. By pairing the depth-to-bedrock model with the surficial geology and the average shear-wave velocity profiles, we can predict soil amplification in Boston. We compare linear and equivalent-linear site response predictions for a soil layer of varying thickness over bedrock, and assess the effects of varying the bedrock shear-wave velocity (VSb) and quality factor (Q). In a moderate seismicity region like Boston, many earthquakes will result in ground motions that can be modeled with linear site response methods. We also assess the effect of bedrock depth on soil amplification for a generic soil profile in artificial fill, using both linear and equivalent-linear site response models. Finally, we assess the accuracy of the model results by comparing the predicted (linear site response) and observed site response at the Northeastern University (NEU) vertical seismometer array during the 2011 M 5.8 Mineral, Virginia, earthquake. Site response at the NEU vertical array results in amplification on the order of 10 times at periods between 0.7 and 0.8 s. The results from this study provide evidence that the mean short-period and mean intermediate-period amplification used in design codes (i.e., from the Fa and Fv site coefficients) may underpredict soil amplification in strong impedance contrast environments such as Boston.
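
    For a single uniform soil layer over bedrock, the linear site response invoked above reduces to the classical SH-wave transfer function for vertical incidence; a sketch with illustrative (not Boston-specific) parameters:

        # Elastic SH-wave amplification for a soil layer of thickness H over a
        # halfspace: a strong sediment/bedrock impedance contrast (small alpha)
        # produces large resonant peaks. All parameter values are illustrative.
        import numpy as np

        def sh_amplification(f, H=40.0, vs_soil=200.0, rho_soil=1800.0,
                             vs_rock=2500.0, rho_rock=2600.0):
            alpha = (rho_soil * vs_soil) / (rho_rock * vs_rock)  # ~0.06 here
            k = 2 * np.pi * f * H / vs_soil
            return 1.0 / np.sqrt(np.cos(k) ** 2 + (alpha * np.sin(k)) ** 2)

        f = np.linspace(0.1, 5.0, 500)
        amp = sh_amplification(f)
        print(f"fundamental resonance ~ {f[np.argmax(amp)]:.2f} Hz "
              f"(Vs/4H = 1.25 Hz), peak amplification ~ {amp.max():.1f}")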

  2. Group delay variations of GPS transmitting and receiving antennas

    NASA Astrophysics Data System (ADS)

    Wanninger, Lambert; Sumaya, Hael; Beer, Susanne

    2017-09-01

    GPS code pseudorange measurements exhibit group delay variations at the transmitting and the receiving antenna. We calibrated C1 and P2 delay variations with respect to dual-frequency carrier phase observations and obtained nadir-dependent corrections for 32 satellites of the GPS constellation in early 2015 as well as elevation-dependent corrections for 13 receiving antenna models. The combined delay variations reach up to 1.0 m (3.3 ns) in the ionosphere-free linear combination for specific pairs of satellite and receiving antennas. Applying these corrections to the code measurements improves code/carrier single-frequency precise point positioning, ambiguity fixing based on the Melbourne-Wübbena linear combination, and determination of ionospheric total electron content. It also affects fractional cycle biases and differential code biases.
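
    For reference, the ionosphere-free linear combination mentioned above is formed from the dual-frequency pseudoranges as P_IF = (f1^2*P1 - f2^2*P2)/(f1^2 - f2^2); a short numerical sketch (the pseudorange values are made up):

        # Ionosphere-free combination of GPS L1/L2 code pseudoranges. Delay
        # variations that differ between C1 and P2 are amplified here by the
        # factors ~2.55 and ~1.55, hence combined effects reaching ~1.0 m.
        F1, F2 = 1575.42e6, 1227.60e6        # GPS L1 and L2 frequencies (Hz)

        def iono_free(p1, p2):
            g = F1**2 / (F1**2 - F2**2)      # ~2.546
            return g * p1 - (g - 1.0) * p2

        p1, p2 = 21_345_678.12, 21_345_680.05   # hypothetical pseudoranges (m)
        print(f"P_IF = {iono_free(p1, p2):.2f} m")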

  3. Linear energy transfer in water phantom within SHIELD-HIT transport code

    NASA Astrophysics Data System (ADS)

    Ergun, A.; Sobolevsky, N.; Botvina, A. S.; Buyukcizmeci, N.; Latysheva, L.; Ogul, R.

    2017-02-01

    The effect of irradiation in tissue is important in hadron therapy for dose measurement and treatment planning. This biological effect is defined by an equivalent dose H, which depends on the Linear Energy Transfer (LET). Usually, H can be expressed in terms of the absorbed dose D and the quality factor K of the radiation under consideration. In the literature, various types of transport codes have been used for modeling and simulation of the interaction of beams of protons and heavier ions with tissue-equivalent materials. In this presentation, we used the SHIELD-HIT code to simulate the decomposition of the absorbed dose by LET in water for 16O beams. A more detailed description of the capabilities of the SHIELD-HIT code can be found in the literature.
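
    The dose-equivalent relation referred to above can be sketched with the ICRP-60 quality factor as a function of unrestricted LET; the LET binning below is invented for illustration and is not SHIELD-HIT output:

        # Dose equivalent H = sum_L Q(L) * D(L) with the ICRP-60 quality factor
        # Q(L) for LET L in keV/um; the absorbed-dose decomposition is a toy one.
        import math

        def q_icrp60(L):
            if L < 10.0:
                return 1.0
            if L <= 100.0:
                return 0.32 * L - 2.2
            return 300.0 / math.sqrt(L)

        let_bins = [5.0, 20.0, 80.0, 150.0]   # keV/um
        dose = [0.50, 0.30, 0.15, 0.05]       # Gy per LET bin (illustrative)
        H = sum(q_icrp60(L) * D for L, D in zip(let_bins, dose))
        print(f"H = {H:.2f} Sv for D = {sum(dose):.2f} Gy")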

  4. Rate-compatible protograph LDPC code families with linear minimum distance

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush (Inventor); Dolinar, Jr., Samuel J. (Inventor); Jones, Christopher R. (Inventor)

    2012-01-01

    Digital communication coding methods are shown, which generate certain types of low-density parity-check (LDPC) codes built from protographs. A first method creates protographs having the linear minimum distance property and comprising at least one variable node with degree less than 3. A second method creates families of protographs of different rates, all structurally identical for all rates except for a rate-dependent designation of certain variable nodes as transmitted or non-transmitted. A third method creates families of protographs of different rates, all structurally identical for all rates except for a rate-dependent designation of the status of certain variable nodes as non-transmitted or set to zero. LDPC codes built from the protographs created by these methods can simultaneously have low error floors and low iterative decoding thresholds.
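
    A protograph is a small Tanner graph that is lifted by a copy-and-permute operation into a full parity-check matrix; a minimal quasi-cyclic lifting sketch (the base matrix, shifts, and lift size are arbitrary examples, not the patented constructions):

        # Lift a protograph base matrix into a quasi-cyclic LDPC parity-check
        # matrix: each 1 becomes a Z x Z circulant permutation, each 0 a zero
        # block. Column weights of H follow the protograph's node degrees.
        import numpy as np

        def lift(base, shifts, Z):
            m, n = base.shape
            H = np.zeros((m * Z, n * Z), dtype=np.uint8)
            I = np.eye(Z, dtype=np.uint8)
            for i in range(m):
                for j in range(n):
                    if base[i, j]:
                        H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] = np.roll(I, shifts[i][j], axis=1)
            return H

        base = np.array([[1, 1, 1, 0],
                         [1, 1, 0, 1]])
        shifts = [[0, 1, 3, 0],
                  [2, 0, 0, 1]]
        H = lift(base, shifts, Z=8)
        print(H.shape)        # (16, 32): a rate-1/2 code from a 2 x 4 protograph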

  5. Effect of doping on electronic properties of HgSe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nag, Abhinav, E-mail: abhinavn76@gmail.com; Sastri, O. S. K. S., E-mail: sastri.osks@gmail.com; Kumar, Jagdish, E-mail: jagdishphysicist@gmail.com

    2016-05-23

    First-principles study of the electronic properties of pure and doped HgSe has been performed using the all-electron Full Potential Linearized Augmented Plane Wave (FP-LAPW) method as implemented in the ELK code. The electronic exchange and correlation effects are treated using the Generalized Gradient Approximation (GGA). Lattice parameter, Density of States (DOS) and band structure calculations have been performed. The total energy curve (energy vs lattice parameter), DOS and band structure calculations are in good agreement with the experimental values and those obtained using other DFT codes. The doped material is studied within the Virtual Crystal Approximation (VCA) with doping levels of 10% to 25% of electrons (holes) per unit cell. Results predict zero band gap in undoped HgSe, with bands meeting at the Fermi level near the symmetry point Γ. For doped HgSe, we found that by electron (hole) doping, the point where conduction and valence bands meet can be shifted below (above) the Fermi level.

  6. Environmental enrichment normalizes hippocampal timing coding in a malformed hippocampus.

    PubMed

    Hernan, Amanda E; Mahoney, J Matthew; Curry, Willie; Richard, Greg; Lucas, Marcella M; Massey, Andrew; Holmes, Gregory L; Scott, Rod C

    2018-01-01

    Neurodevelopmental insults leading to malformations of cortical development (MCD) are a common cause of psychiatric disorders, learning impairments and epilepsy. In the methylazoxymethanol (MAM) model of MCDs, animals have impairments in spatial cognition that, remarkably, are improved by post-weaning environmental enrichment (EE). To establish how EE impacts network-level mechanisms of spatial cognition, hippocampal in vivo single unit recordings were performed in freely moving animals in an open arena. We took a generalized linear modeling approach to extract fine spike timing (FST) characteristics and related these to place cell fidelity used as a surrogate of spatial cognition. We find that MAM disrupts FST and place-modulated rate coding in hippocampal CA1 and that EE improves many FST parameters towards normal. Moreover, FST parameters predict spatial coherence of neurons, suggesting that mechanisms determining altered FST are responsible for impaired cognition in MCDs. This suggests that FST parameters could represent a therapeutic target to improve cognition even in the context of a brain that develops with a structural abnormality.
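
    The generalized-linear-model step can be illustrated with a Poisson regression of spike counts on behavioural covariates; the covariates and weights below are simulated, not the study's fine-spike-timing parameters:

        # Poisson GLM relating binned spike counts to covariates, the model
        # family used above to extract spike-timing characteristics.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(4)
        n = 2000
        speed = rng.uniform(0, 30, n)            # running speed (cm/s)
        theta = rng.uniform(-np.pi, np.pi, n)    # theta phase
        X = sm.add_constant(np.column_stack([speed, np.cos(theta), np.sin(theta)]))
        rate = np.exp(-2.0 + 0.05 * speed + 0.6 * np.cos(theta))
        y = rng.poisson(rate)                    # simulated spike counts per bin

        fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
        print(fit.params)                        # recovers the generative weights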

  7. Predictive codes of familiarity and context during the perceptual learning of facial identities

    NASA Astrophysics Data System (ADS)

    Apps, Matthew A. J.; Tsakiris, Manos

    2013-11-01

    Face recognition is a key component of successful social behaviour. However, the computational processes that underpin perceptual learning and recognition as faces transition from unfamiliar to familiar are poorly understood. In predictive coding, learning occurs through prediction errors that update stimulus familiarity, but recognition is a function of both stimulus and contextual familiarity. Here we show that behavioural responses on a two-option face recognition task can be predicted by the level of contextual and facial familiarity in a computational model derived from predictive-coding principles. Using fMRI, we show that activity in the superior temporal sulcus varies with the contextual familiarity in the model, whereas activity in the fusiform face area covaries with the prediction error parameter that updated facial familiarity. Our results characterize the key computations underpinning the perceptual learning of faces, highlighting that the functional properties of face-processing areas conform to the principles of predictive coding.
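
    In such models, familiarity is typically updated by a delta rule driven by the prediction error; a toy version (learning rates and trial structure invented for illustration):

        # Delta-rule updating of facial and contextual familiarity: the
        # prediction error (PE) shrinks as familiarity grows, cf. the FFA
        # activity covarying with the facial-familiarity PE term above.
        def update(familiarity, observed, lr):
            pe = observed - familiarity
            return familiarity + lr * pe, pe

        face_fam, ctx_fam = 0.0, 0.0
        for trial in range(8):          # repeated exposure to one face/context
            face_fam, pe_face = update(face_fam, 1.0, lr=0.3)
            ctx_fam, _ = update(ctx_fam, 1.0, lr=0.1)
            print(f"trial {trial}: face={face_fam:.2f} (PE={pe_face:.2f}), "
                  f"context={ctx_fam:.2f}")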

  8. Prediction task guided representation learning of medical codes in EHR.

    PubMed

    Cui, Liwen; Xie, Xiaolei; Shen, Zuojun

    2018-06-18

    There have been rapidly growing applications using machine learning models for predictive analytics in Electronic Health Records (EHR) to improve the quality of hospital services and the efficiency of healthcare resource utilization. A fundamental and crucial step in developing such models is to convert medical codes in EHR to feature vectors. These medical codes are used to represent diagnoses or procedures. Their vector representations have a tremendous impact on the performance of machine learning models. Recently, some researchers have utilized representation learning methods from Natural Language Processing (NLP) to learn vector representations of medical codes. However, most previous approaches are unsupervised, i.e. the generation of medical code vectors is independent of prediction tasks. Thus, the obtained feature vectors may be inappropriate for a specific prediction task. Moreover, unsupervised methods often require a lot of samples to obtain reliable results, but most practical problems have very limited patient samples. In this paper, we develop a new method called Prediction Task Guided Health Record Aggregation (PTGHRA), which aggregates health records guided by prediction tasks, to construct the training corpus for various representation learning models. Compared with unsupervised approaches, representation learning models integrated with PTGHRA yield a significant improvement in predictive capability of generated medical code vectors, especially for limited training samples. Copyright © 2018. Published by Elsevier Inc.

  9. Comparisons of time explicit hybrid kinetic-fluid code Architect for Plasma Wakefield Acceleration with a full PIC code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Massimo, F., E-mail: francesco.massimo@ensta-paristech.fr; Dipartimento SBAI, Università di Roma “La Sapienza“, Via A. Scarpa 14, 00161 Roma; Atzeni, S.

    Architect, a time explicit hybrid code designed to perform quick simulations for electron driven plasma wakefield acceleration, is described. In order to obtain beam quality acceptable for applications, control of the beam-plasma dynamics is necessary. Particle in Cell (PIC) codes represent the state-of-the-art technique to investigate the underlying physics and possible experimental scenarios; however, PIC codes demand heavy computational resources. The Architect code substantially reduces the need for computational resources by using a hybrid approach: relativistic electron bunches are treated kinetically as in a PIC code and the background plasma as a fluid. Cylindrical symmetry is assumed for the solution of the electromagnetic fields and fluid equations. In this paper both the underlying algorithms as well as a comparison with a fully three dimensional particle in cell code are reported. The comparison highlights the good agreement between the two models up to the weakly non-linear regimes. In highly non-linear regimes the two models only disagree in a localized region, where the plasma electrons expelled by the bunch close up at the end of the first plasma oscillation.

  10. GPU implementation of the linear scaling three dimensional fragment method for large scale electronic structure calculations

    NASA Astrophysics Data System (ADS)

    Jia, Weile; Wang, Jue; Chi, Xuebin; Wang, Lin-Wang

    2017-02-01

    LS3DF, namely the linear scaling three-dimensional fragment method, is an efficient linear scaling ab initio total energy electronic structure calculation code based on a divide-and-conquer strategy. In this paper, we present our GPU implementation of the LS3DF code. Our test results show that the GPU code can calculate systems with about ten thousand atoms fully self-consistently on the order of 10 min using thousands of computing nodes. This makes the electronic structure calculations of 10,000-atom nanosystems routine work. This speed is 4.5-6 times faster than the CPU calculations using the same number of nodes on the Titan machine in the Oak Ridge Leadership Computing Facility (OLCF). Such speedup is achieved by (a) careful redesign of the computationally heavy kernels and (b) redesign of the communication pattern for heterogeneous supercomputers.

  11. Decoding and optimized implementation of SECDED codes over GF(q)

    DOEpatents

    Ward, H. Lee; Ganti, Anand; Resnick, David R

    2013-10-22

    A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.
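
    For the binary distance-4 (SEC-DED) case, the filter has a concrete form: every 3 columns must be linearly independent, so a candidate is rejected if it is zero, duplicates an existing column, or equals the XOR of two existing columns. A toy GF(2) sketch of this greedy population (the patent covers general GF(q) and the logic optimization not shown here):

        # Greedy population of a distance-4 (SEC-DED) check matrix over GF(2):
        # keep a candidate column only if no 3 chosen columns become dependent.
        from itertools import combinations

        def populate(r, n):
            cols = []
            for v in range(1, 2 ** r):                    # nonzero r-bit columns
                forbidden = {a ^ b for a, b in combinations(cols, 2)}
                if v not in cols and v not in forbidden:  # the filter operation
                    cols.append(v)
                if len(cols) == n:
                    return cols
            return None                                   # too few vectors survive

        cols = populate(r=6, n=16)        # 16 columns using 6 check bits
        print([format(c, "06b") for c in cols])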

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bose, Benjamin; Koyama, Kazuya, E-mail: benjamin.bose@port.ac.uk, E-mail: kazuya.koyama@port.ac.uk

    We develop a code to produce the power spectrum in redshift space based on standard perturbation theory (SPT) at 1-loop order. The code can be applied to a wide range of modified gravity and dark energy models using a recently proposed numerical method by A. Taruya to find the SPT kernels. This includes Horndeski's theory with a general potential, which accommodates both chameleon and Vainshtein screening mechanisms and provides a non-linear extension of the effective theory of dark energy up to the third order. Focus is on a recent non-linear model of the redshift space power spectrum which has been shown to model the anisotropy very well at relevant scales for the SPT framework, as well as capturing relevant non-linear effects typical of modified gravity theories. We provide consistency checks of the code against established results and elucidate its application in the light of upcoming high precision RSD data.

  13. Design, decoding and optimized implementation of SECDED codes over GF(q)

    DOEpatents

    Ward, H Lee; Ganti, Anand; Resnick, David R

    2014-06-17

    A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.

  14. Decoding and optimized implementation of SECDED codes over GF(q)

    DOEpatents

    Ward, H Lee; Ganti, Anand; Resnick, David R

    2014-11-18

    A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.

  15. Effect of initial phase on error in electron energy obtained using paraxial approximation for a focused laser pulse in vacuum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, Kunwar Pal, E-mail: k-psingh@yahoo.com; Department of Physics, Shri Venkateshwara University, Gajraula, Amroha, Uttar Pradesh 244236; Arya, Rashmi

    2015-09-14

    We have investigated the effect of initial phase on the error in electron energy obtained using the paraxial approximation to study electron acceleration by a focused laser pulse in vacuum, using a three dimensional test-particle simulation code. The error is obtained by comparing the energy of the electron for the paraxial approximation and a seventh-order correction description of the fields of a Gaussian laser. The paraxial approximation predicts wrong laser divergence and wrong electron escape time from the pulse, which leads to prediction of higher energy. The error shows strong phase dependence for electrons lying along the axis of the laser for a linearly polarized laser pulse. The relative error may be significant for some specific values of initial phase even at moderate values of laser spot size. The error does not show initial phase dependence for a circularly polarized laser pulse.

  16. Hybrid density-functional calculations of phonons in LaCoO3

    NASA Astrophysics Data System (ADS)

    Gryaznov, Denis; Evarestov, Robert A.; Maier, Joachim

    2010-12-01

    Phonon frequencies at the Γ point in the nonmagnetic rhombohedral phase of LaCoO3 were calculated using density-functional theory with the hybrid exchange-correlation functional PBE0. The calculations involved a comparison of results for two types of basis functions commonly used in ab initio calculations, namely, the plane-wave approach and the linear combination of atomic orbitals, as implemented in the VASP and CRYSTAL computer codes, respectively. Good qualitative agreement, and quantitative agreement within an error margin of less than 30%, was observed not only between the two formalisms but also between theoretical and experimental phonon frequency predictions. Moreover, the correlation between the phonon symmetries in the cubic and rhombohedral phases is discussed in detail on the basis of group-theoretical analysis. It is concluded that the hybrid PBE0 functional is able to predict correctly the phonon properties of LaCoO3.

  17. Empirical modeling of environment-enhanced fatigue crack propagation in structural alloys for component life prediction

    NASA Technical Reports Server (NTRS)

    Richey, Edward, III

    1995-01-01

    This research aims to develop the methods and understanding needed to incorporate time- and loading-variable-dependent environmental effects on fatigue crack propagation (FCP) into computerized fatigue life prediction codes such as NASA FLAGRO (NASGRO). In particular, the effect of loading frequency on FCP rates in alpha + beta titanium alloys exposed to an aqueous chloride solution is investigated. The approach couples empirical modeling of environmental FCP with corrosion fatigue experiments. Three different computer models have been developed and incorporated into the DOS executable program UVAFAS. A multiple power law model is available that can fit a set of fatigue data to a multiple power law equation. A model has also been developed which implements the Wei and Landes linear superposition model, as well as an interpolative model which can be utilized to interpolate trends in fatigue behavior based on changes in loading characteristics (stress ratio, frequency, and hold times).
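
    The power-law fitting step can be sketched with a single Paris-type segment da/dN = C*(dK)^m, recovered by linear regression in log-log space (synthetic data; the actual UVAFAS models chain several such segments and add environmental terms):

        # Fit da/dN = C * dK**m by linear least squares on log-transformed data.
        import numpy as np

        rng = np.random.default_rng(2)
        dK = np.linspace(8, 40, 25)                    # MPa*sqrt(m)
        dadn = 1e-11 * dK**3.0 * rng.lognormal(0, 0.1, dK.size)  # m/cycle

        m, logC = np.polyfit(np.log(dK), np.log(dadn), 1)
        print(f"m = {m:.2f}, C = {np.exp(logC):.2e}")  # ~3.0 and ~1e-11 here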

  18. A new hyper-elastic model for predicting multi-axial behaviour of rubber-like materials: formulation and computational aspects

    NASA Astrophysics Data System (ADS)

    Yaya, Kamel; Bechir, Hocine

    2018-05-01

    We propose a new hyper-elastic model that is based on the standard invariants of Green-Cauchy. Experimental data reported by Treloar (Trans. Faraday Soc. 40:59, 1944) are used to identify the model parameters. To this end, the data of uni-axial tension and equi-bi-axial tension are used simultaneously. The new model has four material parameters; their identification leads to a linear optimisation problem, and the model is able to predict the multi-axial behaviour of rubber-like materials. We show that the response quality of the new model is equivalent to that of the well-known six-parameter Ogden model. Thereafter, the new model is implemented in an FE code. Then, we investigate the inflation of a rubber balloon with the new model and the Ogden model. We compare both the analytic and numerical solutions derived from these models.
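
    That identification reduces to a linear optimisation can be seen with the simplest one-parameter example: for an incompressible neo-Hookean solid in uniaxial tension, the nominal stress P = mu*(lam - lam**-2) is linear in mu. The data below are synthetic, and this is not the four-parameter model of the paper:

        # Linear least-squares identification of a hyperelastic parameter from
        # uniaxial data; multi-parameter invariant-based models simply stack
        # more columns into the basis matrix.
        import numpy as np

        lam = np.linspace(1.1, 4.0, 20)                  # stretch
        basis = lam - lam**-2
        rng = np.random.default_rng(5)
        P = 0.4 * basis + rng.normal(0, 0.02, lam.size)  # nominal stress (MPa)

        mu, *_ = np.linalg.lstsq(basis[:, None], P, rcond=None)
        print(f"mu = {mu[0]:.3f} MPa")                   # recovers ~0.4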

  19. Advanced Subsonic Technology (AST) Area of Interest (AOI) 6: Develop and Validate Aeroelastic Codes for Turbomachinery

    NASA Technical Reports Server (NTRS)

    Gardner, Kevin D.; Liu, Jong-Shang; Murthy, Durbha V.; Kruse, Marlin J.; James, Darrell

    1999-01-01

    AlliedSignal Engines, in cooperation with NASA GRC (National Aeronautics and Space Administration Glenn Research Center), completed an evaluation of recently-developed aeroelastic computer codes using test cases from the AlliedSignal Engines fan blisk and turbine databases. Test data included strain gage, performance, and steady-state pressure information obtained for conditions where synchronous or flutter vibratory conditions were found to occur. Aeroelastic codes evaluated included quasi 3-D UNSFLO (MIT Developed/AE Modified, Quasi 3-D Aeroelastic Computer Code), 2-D FREPS (NASA-Developed Forced Response Prediction System Aeroelastic Computer Code), and 3-D TURBO-AE (NASA/Mississippi State University Developed 3-D Aeroelastic Computer Code). Unsteady pressure predictions for the turbine test case were used to evaluate the forced response prediction capabilities of each of the three aeroelastic codes. Additionally, one of the fan flutter cases was evaluated using TURBO-AE. The UNSFLO and FREPS evaluation predictions showed good agreement with the experimental test data trends, but quantitative improvements are needed. UNSFLO over-predicted turbine blade response reductions, while FREPS under-predicted them. The inviscid TURBO-AE turbine analysis predicted no discernible blade response reduction, indicating the necessity of including viscous effects for this test case. For the TURBO-AE fan blisk test case, significant effort was expended getting the viscous version of the code to give converged steady flow solutions for the transonic flow conditions. Once converged, the steady solutions provided an excellent match with test data and the calibrated DAWES (AlliedSignal 3-D Viscous Steady Flow CFD Solver). However, efforts expended establishing quality steady-state solutions prevented exercising the unsteady portion of the TURBO-AE code during the present program. AlliedSignal recommends that unsteady pressure measurement data be obtained for both test cases examined for use in aeroelastic code validation.

  20. Benchmark problems in computational aeroacoustics

    NASA Technical Reports Server (NTRS)

    Porter-Locklear, Freda

    1994-01-01

    A recent directive at NASA Langley is aimed at numerically predicting principal noise sources. During my summer stay, I worked with a high-order ENO code, developed by Dr. Harold Atkins, for solving the unsteady compressible Navier-Stokes equations as it applies to computational aeroacoustics (CAA). A CAA workshop, composed of six categories of benchmark problems, has been organized to test various numerical properties of codes. My task was to determine the robustness of Atkins' code for these test problems. In one category, we tested the nonlinear wave propagation of the code for the one-dimensional Euler equations, with initial pressure, density, and velocity conditions. Using freestream boundary conditions, our results were plausible. In another category, we solved the linearized two-dimensional Euler equations to test the effectiveness of radiation boundary conditions. Here we utilized MAPLE to compute eigenvalues and eigenvectors of the Jacobian, given the variable and flux vectors. We experienced a minor problem with inflow and outflow boundary conditions. Next, we solved the quasi one-dimensional unsteady flow equations with an incoming acoustic wave of amplitude 10(exp -6). The small-amplitude sound wave was incident on a convergent-divergent nozzle. After finding a steady-state solution and then marching forward, our solution indicated that after 30 periods the acoustic wave had dissipated (a period is the time required for a sound wave to travel from one end of the nozzle to the other).
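
    The symbolic eigenvalue computation mentioned above (done there in MAPLE) is easy to reproduce; a one-dimensional sketch in primitive variables using sympy:

        # Eigenvalues of the 1-D Euler equations' Jacobian in primitive
        # variables (rho, u, p): the characteristic speeds u and u +/- c.
        import sympy as sp

        rho, u, p, gamma = sp.symbols("rho u p gamma", positive=True)
        A = sp.Matrix([[u, rho,     0],
                       [0, u,       1/rho],
                       [0, gamma*p, u]])
        print(A.eigenvals())  # u, u - sqrt(gamma*p/rho), u + sqrt(gamma*p/rho)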

  1. Voxel-Based Lesion Symptom Mapping of Coarse Coding and Suppression Deficits in Patients With Right Hemisphere Damage

    PubMed Central

    Tompkins, Connie A.; Meigh, Kimberly M.; Prat, Chantel S.

    2015-01-01

    Purpose This study examined right hemisphere (RH) neuroanatomical correlates of lexical–semantic deficits that predict narrative comprehension in adults with RH brain damage. Coarse semantic coding and suppression deficits were related to lesions by voxel-based lesion symptom mapping. Method Participants were 20 adults with RH cerebrovascular accidents. Measures of coarse coding and suppression deficits were computed from lexical decision reaction times at short (175 ms) and long (1000 ms) prime-target intervals. Lesions were drawn on magnetic resonance imaging images and through normalization were registered on an age-matched brain template. Voxel-based lesion symptom mapping analysis was applied to build a general linear model at each voxel. Z score maps were generated for each deficit, and results were interpreted using automated anatomical labeling procedures. Results A deficit in coarse semantic activation was associated with lesions to the RH posterior middle temporal gyrus, dorsolateral prefrontal cortex, and lenticular nuclei. A maintenance deficit for coarsely coded representations involved the RH temporal pole and dorsolateral prefrontal cortex more medially. Ineffective suppression implicated lesions to the RH inferior frontal gyrus and subcortical regions, as hypothesized, along with the rostral temporal pole. Conclusion Beyond their scientific implications, these lesion–deficit correspondences may help inform the clinical diagnosis and enhance decisions about candidacy for deficit-focused treatment to improve narrative comprehension in individuals with RH damage. PMID:26425785

  2. Voxel-Based Lesion Symptom Mapping of Coarse Coding and Suppression Deficits in Patients With Right Hemisphere Damage.

    PubMed

    Yang, Ying; Tompkins, Connie A; Meigh, Kimberly M; Prat, Chantel S

    2015-11-01

    This study examined right hemisphere (RH) neuroanatomical correlates of lexical-semantic deficits that predict narrative comprehension in adults with RH brain damage. Coarse semantic coding and suppression deficits were related to lesions by voxel-based lesion symptom mapping. Participants were 20 adults with RH cerebrovascular accidents. Measures of coarse coding and suppression deficits were computed from lexical decision reaction times at short (175 ms) and long (1000 ms) prime-target intervals. Lesions were drawn on magnetic resonance imaging images and through normalization were registered on an age-matched brain template. Voxel-based lesion symptom mapping analysis was applied to build a general linear model at each voxel. Z score maps were generated for each deficit, and results were interpreted using automated anatomical labeling procedures. A deficit in coarse semantic activation was associated with lesions to the RH posterior middle temporal gyrus, dorsolateral prefrontal cortex, and lenticular nuclei. A maintenance deficit for coarsely coded representations involved the RH temporal pole and dorsolateral prefrontal cortex more medially. Ineffective suppression implicated lesions to the RH inferior frontal gyrus and subcortical regions, as hypothesized, along with the rostral temporal pole. Beyond their scientific implications, these lesion-deficit correspondences may help inform the clinical diagnosis and enhance decisions about candidacy for deficit-focused treatment to improve narrative comprehension in individuals with RH damage.

  3. Background-Modeling-Based Adaptive Prediction for Surveillance Video Coding.

    PubMed

    Zhang, Xianguo; Huang, Tiejun; Tian, Yonghong; Gao, Wen

    2014-02-01

    The exponential growth of surveillance videos presents an unprecedented challenge for high-efficiency surveillance video coding technology. Compared with the existing coding standards that were basically developed for generic videos, surveillance video coding should be designed to make the best use of the special characteristics of surveillance videos (e.g., relative static background). To do so, this paper first conducts two analyses on how to improve the background and foreground prediction efficiencies in surveillance video coding. Following the analysis results, we propose a background-modeling-based adaptive prediction (BMAP) method. In this method, all blocks to be encoded are firstly classified into three categories. Then, according to the category of each block, two novel inter predictions are selectively utilized, namely, the background reference prediction (BRP) that uses the background modeled from the original input frames as the long-term reference and the background difference prediction (BDP) that predicts the current data in the background difference domain. For background blocks, the BRP can effectively improve the prediction efficiency using the higher quality background as the reference; whereas for foreground-background-hybrid blocks, the BDP can provide a better reference after subtracting its background pixels. Experimental results show that the BMAP can achieve at least twice the compression ratio on surveillance videos as AVC (MPEG-4 Advanced Video Coding) high profile, yet with a slightly additional encoding complexity. Moreover, for the foreground coding performance, which is crucial to the subjective quality of moving objects in surveillance videos, BMAP also obtains remarkable gains over several state-of-the-art methods.
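
    The block-classification idea can be caricatured with a per-block foreground fraction against a modeled background; the thresholds and the background model below are simplified stand-ins for the paper's method:

        # Schematic BMAP-style block classification: background blocks use the
        # modeled background as reference (BRP), hybrid blocks are coded in the
        # background-difference domain (BDP), foreground blocks use ordinary
        # inter prediction. Thresholds are invented.
        import numpy as np

        def classify_block(block, bg_block, tol=10, fg_thresh=0.5):
            fg = np.abs(block.astype(int) - bg_block.astype(int)) > tol
            if fg.mean() < 0.05:
                return "background (BRP)"
            if fg.mean() < fg_thresh:
                return "hybrid (BDP)"
            return "foreground (inter)"

        rng = np.random.default_rng(3)
        bg = rng.integers(0, 256, (16, 16), dtype=np.uint8)
        frame = bg.copy()
        frame[4:12, 4:12] = 255                 # an object enters the block
        print(classify_block(frame, bg))        # -> "hybrid (BDP)"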

  4. Predicting the Performance of an Axial-Flow Compressor

    NASA Technical Reports Server (NTRS)

    Steinke, R. J.

    1986-01-01

    Stage-stacking computer code (STGSTK) developed for predicting off-design performance of multistage axial-flow compressors. Code uses meanline stage-stacking method. Stage and cumulative compressor performance calculated from representative meanline velocity diagrams located at rotor inlet and outlet meanline radii. Numerous options available within code. Code developed so users can modify correlations to suit their needs.

  5. The Modified Cognitive Constructions Coding System: Reliability and Validity Assessments

    ERIC Educational Resources Information Center

    Moran, Galia S.; Diamond, Gary M.

    2006-01-01

    The cognitive constructions coding system (CCCS) was designed for coding client's expressed problem constructions on four dimensions: intrapersonal-interpersonal, internal-external, responsible-not responsible, and linear-circular. This study introduces, and examines the reliability and validity of, a modified version of the CCCS--a version that…

  6. Advanced turboprop noise prediction based on recent theoretical results

    NASA Technical Reports Server (NTRS)

    Farassat, F.; Padula, S. L.; Dunn, M. H.

    1987-01-01

    The development of a high speed propeller noise prediction code at Langley Research Center is described. The code utilizes two recent acoustic formulations in the time domain for subsonic and supersonic sources. The structure and capabilities of the code are discussed. A grid-size study for accuracy and speed of execution is also presented. The code is tested against an earlier Langley code; considerable increases in accuracy and speed of execution are observed. Some examples of noise prediction for a high speed propeller for which acoustic test data are available are given. A brief derivation of the formulations used is given in an appendix.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fu, Guoyong; Budny, Robert; Gorelenkov, Nikolai

    We report here the work done for the FY14 OFES Theory Performance Target as given below: "Understanding alpha particle confinement in ITER, the world's first burning plasma experiment, is a key priority for the fusion program. In FY 2014, determine linear instability trends and thresholds of energetic particle-driven shear Alfven eigenmodes in ITER for a range of parameters and profiles using a set of complementary simulation models (gyrokinetic, hybrid, and gyrofluid). Carry out initial nonlinear simulations to assess the effects of the unstable modes on energetic particle transport". In the past year (FY14), a systematic study of the alpha-driven Alfven modes in ITER has been carried out jointly by researchers from six institutions involving seven codes including the transport simulation code TRANSP (R. Budny and F. Poli, PPPL), three gyrokinetic codes: GEM (Y. Chen, Univ. of Colorado), GTC (J. McClenaghan, Z. Lin, UCI), and GYRO (E. Bass, R. Waltz, UCSD/GA), the hybrid code M3D-K (G.Y. Fu, PPPL), the gyro-fluid code TAEFL (D. Spong, ORNL), and the linear kinetic stability code NOVA-K (N. Gorelenkov, PPPL). A range of ITER parameters and profiles are specified by TRANSP simulation of a hybrid scenario case and a steady-state scenario case. Based on the specified ITER equilibria, linear stability calculations are done to determine the stability boundary of alpha-driven high-n TAEs using the five initial value codes (GEM, GTC, GYRO, M3D-K, and TAEFL) and the kinetic stability code (NOVA-K). Both the effects of alpha particles and beam ions have been considered. Finally, the effects of the unstable modes on energetic particle transport have been explored using GEM and M3D-K.

  8. Design of convolutional tornado code

    NASA Astrophysics Data System (ADS)

    Zhou, Hui; Yang, Yao; Gao, Hongmin; Tan, Lu

    2017-09-01

    As a linear block code, the traditional tornado (tTN) code is inefficient in burst-erasure environment and its multi-level structure may lead to high encoding/decoding complexity. This paper presents a convolutional tornado (cTN) code which is able to improve the burst-erasure protection capability by applying the convolution property to the tTN code, and reduce computational complexity by abrogating the multi-level structure. The simulation results show that cTN code can provide a better packet loss protection performance with lower computation complexity than tTN code.

  9. Comparison of Space Shuttle Hot Gas Manifold analysis to air flow data

    NASA Technical Reports Server (NTRS)

    Mcconnaughey, P. K.

    1988-01-01

    This paper summarizes several recent analyses of the Space Shuttle Main Engine Hot Gas Manifold and compares predicted flow environments to air flow data. Codes used in these analyses include INS3D, PAGE, PHOENICS, and VAST. Both laminar (Re = 250, M = 0.30) and turbulent (Re = 1.9 million, M = 0.30) results are discussed, with the latter being compared to data for system losses, outer wall static pressures, and manifold exit Mach number profiles. Comparison of predicted results for the turbulent case to air flow data shows that the analysis using INS3D predicted system losses within 1 percent error, while the PHOENICS, PAGE, and VAST codes erred by 31, 35, and 47 percent, respectively. The INS3D, PHOENICS, and PAGE codes did a reasonable job of predicting outer wall static pressure, while the PHOENICS code predicted exit Mach number profiles with acceptable accuracy. INS3D was approximately an order of magnitude more efficient than the other codes in terms of code speed and memory requirements. In general, it is seen that complex internal flows in manifold-like geometries can be predicted with a limited degree of confidence, and further development is necessary to improve both efficiency and accuracy of codes if they are to be used as design tools for complex three-dimensional geometries.

  10. Linearized T-Matrix and Mie Scattering Computations

    NASA Technical Reports Server (NTRS)

    Spurr, R.; Wang, J.; Zeng, J.; Mishchenko, M. I.

    2011-01-01

    We present a new linearization of T-Matrix and Mie computations for light scattering by non-spherical and spherical particles, respectively. In addition to the usual extinction and scattering cross-sections and the scattering matrix outputs, the linearized models will generate analytical derivatives of these optical properties with respect to the real and imaginary parts of the particle refractive index, and (for non-spherical scatterers) with respect to the "shape" parameter (the spheroid aspect ratio, cylinder diameter/height ratio, Chebyshev particle deformation factor). These derivatives are based on the essential linearity of Maxwell's theory. Analytical derivatives are also available for polydisperse particle size distribution parameters such as the mode radius. The T-matrix formulation is based on the NASA Goddard Institute for Space Studies FORTRAN 77 code developed in the 1990s. The linearized scattering codes presented here are in FORTRAN 90 and will be made publicly available.

  11. Light transport feature for SCINFUL.

    PubMed

    Etaati, G R; Ghal-Eh, N

    2008-03-01

    An extended version of the scintillator response function prediction code SCINFUL has been developed by incorporating PHOTRACK, a Monte Carlo light transport code. Comparisons of calculated and experimental results for organic scintillators exposed to neutrons show that the extended code improves the predictive capability of SCINFUL.

  12. Comparing TCV experimental VDE responses with DINA code simulations

    NASA Astrophysics Data System (ADS)

    Favez, J.-Y.; Khayrutdinov, R. R.; Lister, J. B.; Lukash, V. E.

    2002-02-01

    The DINA free-boundary equilibrium simulation code has been implemented for TCV, including the full TCV feedback and diagnostic systems. First results showed good agreement with control coil perturbations and correctly reproduced certain non-linear features in the experimental measurements. The latest DINA code simulations, presented in this paper, exploit discharges with different cross-sectional shapes and different vertical instability growth rates which were subjected to controlled vertical displacement events (VDEs), extending previous work with the DINA code on the DIII-D tokamak. The height of the TCV vessel allows observation of the non-linear evolution of the VDE growth rate as regions of different vertical field decay index are crossed. The vertical movement of the plasma is found to be well modelled. For most experiments, DINA reproduces the S-shape of the vertical displacement in TCV with excellent precision. This behaviour cannot be modelled using linear time-independent models because of the predominant exponential shape due to the unstable pole of any linear time-independent model. The other most common equilibrium parameters like the plasma current Ip, the elongation κ, the triangularity δ, the safety factor q, the ratio between the averaged plasma kinetic pressure and the pressure of the poloidal magnetic field at the edge of the plasma βp, and the internal self inductance li also show acceptable agreement. The evolution of the growth rate γ is estimated and compared with the evolution of the closed-loop growth rate calculated with the RZIP linear model, confirming the origin of the observed behaviour.

  13. Global linear gyrokinetic particle-in-cell simulations including electromagnetic effects in shaped plasmas

    NASA Astrophysics Data System (ADS)

    Mishchenko, A.; Borchardt, M.; Cole, M.; Hatzky, R.; Fehér, T.; Kleiber, R.; Könies, A.; Zocco, A.

    2015-05-01

    We give an overview of recent developments in electromagnetic simulations based on the gyrokinetic particle-in-cell codes GYGLES and EUTERPE. We present the gyrokinetic electromagnetic models implemented in the codes and discuss further improvements of the numerical algorithm, in particular the so-called pullback mitigation of the cancellation problem. The improved algorithm is employed to simulate linear electromagnetic instabilities in shaped tokamak and stellarator plasmas, which was previously impossible for the parameters considered.

  14. Multiuser receiver for DS-CDMA signals in multipath channels: an enhanced multisurface method.

    PubMed

    Mahendra, Chetan; Puthusserypady, Sadasivan

    2006-11-01

    This paper deals with the problem of multiuser detection in direct-sequence code-division multiple-access (DS-CDMA) systems in multipath environments. The existing multiuser detectors can be divided into two categories: (1) low-complexity poor-performance linear detectors and (2) high-complexity good-performance nonlinear detectors. In particular, in channels where the orthogonality of the code sequences is destroyed by multipath, detectors with linear complexity perform much worse than the nonlinear detectors. In this paper, we propose an enhanced multisurface method (EMSM) for multiuser detection in multipath channels. EMSM is an intermediate piecewise linear detection scheme with a run-time complexity linear in the number of users. Its bit error rate performance is compared with existing linear detectors, a nonlinear radial basis function detector trained by the new support vector learning algorithm, and Verdu's optimal detector. Simulations in multipath channels, for both synchronous and asynchronous cases, indicate that it always outperforms all other linear detectors, performing nearly as well as nonlinear detectors.

  15. GPU Linear Algebra Libraries and GPGPU Programming for Accelerating MOPAC Semiempirical Quantum Chemistry Calculations.

    PubMed

    Maia, Julio Daniel Carvalho; Urquiza Carvalho, Gabriel Aires; Mangueira, Carlos Peixoto; Santana, Sidney Ramos; Cabral, Lucidio Anjos Formiga; Rocha, Gerd B

    2012-09-11

    In this study, we present some modifications in the semiempirical quantum chemistry MOPAC2009 code that accelerate single-point energy calculations (1SCF) of medium-size (up to 2500 atoms) molecular systems using GPU coprocessors and multithreaded shared-memory CPUs. Our modifications consisted of using a combination of highly optimized linear algebra libraries for both CPU (LAPACK and BLAS from Intel MKL) and GPU (MAGMA and CUBLAS) to hasten time-consuming parts of MOPAC such as the pseudodiagonalization, full diagonalization, and density matrix assembling. We have shown that it is possible to obtain large speedups just by using CPU serial linear algebra libraries in the MOPAC code. As a special case, we show a speedup of up to 14 times for a methanol simulation box containing 2400 atoms and 4800 basis functions, with even greater gains in performance when using multithreaded CPUs (2.1 times in relation to the single-threaded CPU code using linear algebra libraries) and GPUs (3.8 times). This degree of acceleration opens new perspectives for modeling larger structures which appear in inorganic chemistry (such as zeolites and MOFs), biochemistry (such as polysaccharides, small proteins, and DNA fragments), and materials science (such as nanotubes and fullerenes). In addition, we believe that this parallel (GPU-GPU) MOPAC code will make it feasible to use semiempirical methods in lengthy molecular simulations using both hybrid QM/MM and QM/QM potentials.

  16. Dopamine reward prediction error coding.

    PubMed

    Schultz, Wolfram

    2016-03-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards-an evolutionary beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware.
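
    The signed error described here is the delta of reinforcement-learning models; a toy simulation of value learning and reward omission (parameters arbitrary):

        # Reward prediction error delta = r - V with update V += lr * delta:
        # positive when reward exceeds prediction, near zero once the reward
        # is fully predicted, negative when an expected reward is omitted.
        V, lr = 0.0, 0.2
        for trial in range(12):
            r = 1.0 if trial < 10 else 0.0     # reward delivered, then omitted
            delta = r - V
            V += lr * delta
            print(f"trial {trial:2d}: delta = {delta:+.2f}, V = {V:.2f}")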

  17. Dopamine reward prediction error coding

    PubMed Central

    Schultz, Wolfram

    2016-01-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards—an evolutionary beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware. PMID:27069377

  18. Linear-Algebra Programs

    NASA Technical Reports Server (NTRS)

    Lawson, C. L.; Krogh, F. T.; Gold, S. S.; Kincaid, D. R.; Sullivan, J.; Williams, E.; Hanson, R. J.; Haskell, K.; Dongarra, J.; Moler, C. B.

    1982-01-01

    The Basic Linear Algebra Subprograms (BLAS) library is a collection of 38 FORTRAN-callable routines for performing basic operations of numerical linear algebra. BLAS library is portable and efficient source of basic operations for designers of programs involving linear algebriac computations. BLAS library is supplied in portable FORTRAN and Assembler code versions for IBM 370, UNIVAC 1100 and CDC 6000 series computers.
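
    BLAS-style routines remain the substrate of most numerical linear algebra; from Python, for example, the same level-3 operations are reachable through SciPy's BLAS wrappers (a modern illustration, not the original FORTRAN-callable interface):

        # Calling DGEMM (C = alpha * A * B) through SciPy's BLAS wrappers.
        import numpy as np
        from scipy.linalg.blas import dgemm

        a = np.array([[1.0, 2.0], [3.0, 4.0]])
        b = np.array([[5.0, 6.0], [7.0, 8.0]])
        print(dgemm(alpha=1.0, a=a, b=b))   # same result as a @ b, via BLAS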

  19. Characterizing the Properties of a Woven SiC/SiC Composite Using W-CEMCAN Computer Code

    NASA Technical Reports Server (NTRS)

    Murthy, Pappu L. N.; Mital, Subodh K.; DiCarlo, James A.

    1999-01-01

    A micromechanics based computer code to predict the thermal and mechanical properties of woven ceramic matrix composites (CMC) is developed. This computer code, W-CEMCAN (Woven CEramic Matrix Composites ANalyzer), predicts the properties of two-dimensional woven CMC at any temperature and takes into account various constituent geometries and volume fractions. This computer code is used to predict the thermal and mechanical properties of an advanced CMC composed of 0/90 five-harness (5 HS) Sylramic fiber which had been chemically vapor infiltrated (CVI) with boron nitride (BN) and SiC interphase coatings and melt-infiltrated (MI) with SiC. The predictions, based on bulk constituent properties from the literature, are compared with measured experimental data. Based on the comparison, improved or calibrated properties for the constituent materials are then developed for use by material developers/designers. The computer code is then used to predict the properties of a composite with the same constituents but with different fiber volume fractions. The predictions are compared with measured data and good agreement is achieved.
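
    The flavor of such constituent-based property prediction can be shown with the simplest micromechanics averaging rules; this generic rule-of-mixtures sketch is only an illustration (W-CEMCAN's actual equations and the constituent values are not reproduced here):

        # Generic rule-of-mixtures estimates for a fiber/matrix ply: parallel
        # (Voigt) averaging along the fibers, series (Reuss) averaging across
        # them. Constituent moduli and fiber fraction are placeholders.
        def rom_longitudinal(Ef, Em, Vf):
            return Vf * Ef + (1.0 - Vf) * Em

        def rom_transverse(Ef, Em, Vf):
            return 1.0 / (Vf / Ef + (1.0 - Vf) / Em)

        Ef, Em, Vf = 380.0, 270.0, 0.36   # GPa, GPa, fiber volume fraction
        print(f"E11 ~ {rom_longitudinal(Ef, Em, Vf):.0f} GPa")
        print(f"E22 ~ {rom_transverse(Ef, Em, Vf):.0f} GPa")
        # a 0/90 woven laminate would then average ply properties through
        # the thickness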

  20. Influence of thickness and camber on the aeroelastic stability of supersonic throughflow fans: An engineering approach

    NASA Technical Reports Server (NTRS)

    Ramsey, John K.

    1989-01-01

    An engineering approach was used to include the nonlinear effects of thickness and camber in an analytical aeroelastic analysis of cascades in supersonic axial flow (supersonic leading-edge locus). A hybrid code using Lighthill's nonlinear piston theory and Lane's linear potential theory was developed to include these nonlinear effects. Lighthill's theory was used to calculate the unsteady pressures on the noninterference surface regions of the airfoils in cascade. Lane's theory was used to calculate the unsteady pressures on the remaining interference surface regions. Two airfoil profiles were investigated (a supersonic throughflow fan design and a NACA 66-206 airfoil with a sharp leading edge). Results show that compared with predictions of Lane's potential theory for flat plates, the inclusion of thickness (with or without camber) may increase or decrease the aeroelastic stability, depending on the airfoil geometry and operating conditions. When thickness effects are included in the aeroelastic analysis, inclusion of camber will influence the predicted stability in proportion to the magnitude of the added camber. The critical interblade phase angle, depending on the airfoil profile and operating conditions, may also be influenced by thickness and camber. Compared with predictions of Lane's linear potential theory, the inclusion of thickness and camber decreased the aerodynamic stiffness and increased the aerodynamic damping at Mach 2 and 2.95 for a cascade of supersonic throughflow fan airfoils oscillating 180 degrees out of phase at a reduced frequency of 0.1.

  1. Development of probabilistic design method for annular fuels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ozawa, Takayuki

    2007-07-01

    The increase of linear power and burn-up during reactor operation is considered as one measure to ensure the utility of fast reactors in the future; for this, the application of annular oxide fuels is under consideration. The annular fuel design code CEPTAR was developed in the Japan Atomic Energy Agency (JAEA) and verified by using many irradiation experiences with oxide fuels. In addition, the probabilistic fuel design code BORNFREE was also developed to provide a safe and reasonable fuel design and to evaluate the design margins quantitatively. This study aimed at the development of a probabilistic design method for annular oxide fuels; this was implemented in the developed BORNFREE-CEPTAR code, and the code was used to make a probabilistic evaluation with regard to the permissive linear power. (author)

  2. Modeling Seismoacoustic Propagation from the Nonlinear to Linear Regimes

    NASA Astrophysics Data System (ADS)

    Chael, E. P.; Preston, L. A.

    2015-12-01

    Explosions at shallow depth-of-burial can cause nonlinear material response, such as fracturing and spalling, up to the ground surface above the shot point. These motions at the surface affect the generation of acoustic waves into the atmosphere, as well as the surface-reflected compressional and shear waves. Standard source scaling models for explosions do not account for such nonlinear interactions above the shot, while some recent studies introduce a non-isotropic addition to the moment tensor to represent them (e.g., Patton and Taylor, 2011). We are using Sandia's CTH shock physics code to model the material response in the vicinity of underground explosions, up to the overlying ground surface. Across a boundary where the motions have decayed to nearly linear behavior, we couple the signals from CTH into a linear finite-difference (FD) seismoacoustic code to efficiently propagate the wavefields to greater distances. If we assume only one-way transmission of energy through the boundary, then the particle velocities there suffice as inputs for the FD code, simplifying the specification of the boundary condition. The FD algorithm we use applies the wave equations for velocity in an elastic medium and pressure in an acoustic one, and matches the normal traction and displacement across the interface. Initially we are developing and testing a 2D, axisymmetric seismoacoustic routine; CTH can use this geometry in the source region as well. The Source Physics Experiment (SPE) in Nevada has collected seismic and acoustic data on numerous explosions at different scaled depths, providing an excellent testbed for investigating explosion phenomena (Snelson et al., 2013). We present simulations for shots SPE-4' and SPE-5, illustrating the importance of nonlinear behavior up to the ground surface. Our goal is to develop the capability for accurately predicting the relative signal strengths in the air and ground for a given combination of source yield and depth. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  3. T-matrix modeling of linear depolarization by morphologically complex soot and soot-containing aerosols

    NASA Astrophysics Data System (ADS)

    Mishchenko, Michael I.; Liu, Li; Mackowski, Daniel W.

    2013-07-01

    We use state-of-the-art public-domain Fortran codes based on the T-matrix method to calculate orientation and ensemble averaged scattering matrix elements for a variety of morphologically complex black carbon (BC) and BC-containing aerosol particles, with a special emphasis on the linear depolarization ratio (LDR). We explain theoretically the quasi-Rayleigh LDR peak at side-scattering angles typical of low-density soot fractals and conclude that the measurement of this feature enables one to evaluate the compactness state of BC clusters and trace the evolution of low-density fluffy fractals into densely packed aggregates. We show that small backscattering LDRs measured with ground-based, airborne, and spaceborne lidars for fresh smoke generally agree with the values predicted theoretically for fluffy BC fractals and densely packed near-spheroidal BC aggregates. To reproduce higher lidar LDRs observed for aged smoke, one needs alternative particle models such as shape mixtures of BC spheroids or cylinders.

  4. T-Matrix Modeling of Linear Depolarization by Morphologically Complex Soot and Soot-Containing Aerosols

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Liu, Li; Mackowski, Daniel W.

    2013-01-01

    We use state-of-the-art public-domain Fortran codes based on the T-matrix method to calculate orientation and ensemble averaged scattering matrix elements for a variety of morphologically complex black carbon (BC) and BC-containing aerosol particles, with a special emphasis on the linear depolarization ratio (LDR). We explain theoretically the quasi-Rayleigh LDR peak at side-scattering angles typical of low-density soot fractals and conclude that the measurement of this feature enables one to evaluate the compactness state of BC clusters and trace the evolution of low-density fluffy fractals into densely packed aggregates. We show that small backscattering LDRs measured with ground-based, airborne, and spaceborne lidars for fresh smoke generally agree with the values predicted theoretically for fluffy BC fractals and densely packed near-spheroidal BC aggregates. To reproduce higher lidar LDRs observed for aged smoke, one needs alternative particle models such as shape mixtures of BC spheroids or cylinders.

  5. Nonlinear modeling of forced magnetic reconnection in slab geometry with NIMROD

    NASA Astrophysics Data System (ADS)

    Beidler, M. T.; Callen, J. D.; Hegna, C. C.; Sovinec, C. R.

    2017-05-01

    The nonlinear, extended-magnetohydrodynamic (MHD) code NIMROD is benchmarked with the theory of time-dependent forced magnetic reconnection induced by small resonant fields in slab geometry in the context of visco-resistive MHD modeling. Linear computations agree with time-asymptotic, linear theory of flow screening of externally applied fields. The inclusion of flow in nonlinear computations can result in mode penetration due to the balance between electromagnetic and viscous forces in the time-asymptotic state, which produces bifurcations from a high-slip state to a low-slip state as the external field is slowly increased. We reproduce mode penetration and unlocking transitions by employing time-dependent externally applied magnetic fields. Mode penetration and unlocking exhibit hysteresis and occur at different magnitudes of applied field. We also establish how nonlinearly determined flow screening of the resonant field is affected by the square of the magnitude of the externally applied field. These results emphasize that the inclusion of nonlinear physics is essential for accurate prediction of the reconnected field in a flowing plasma.

  6. Use of the International Classification of Diseases, 9th revision, coding in identifying chronic hepatitis B virus infection in health system data: implications for national surveillance.

    PubMed

    Mahajan, Reena; Moorman, Anne C; Liu, Stephen J; Rupp, Loralee; Klevens, R Monina

    2013-05-01

    With increasing use of electronic health records (EHRs) in the USA, we examined the predictive values of the International Classification of Diseases, 9th revision (ICD-9) coding system for surveillance of chronic hepatitis B virus (HBV) infection. The chronic HBV cohort from the Chronic Hepatitis Cohort Study was created based on the EHRs of adult patients who accessed services from 2006 to 2008 from four healthcare systems in the USA. Using the gold standard of abstractor review to confirm HBV cases, we calculated the sensitivity, specificity, and positive and negative predictive values of using one qualifying ICD-9 code versus two qualifying ICD-9 codes separated by 6 months or greater. Of 1 652 055 adult patients, 2202 (0.1%) were confirmed as having chronic HBV. Use of one ICD-9 code had a sensitivity of 83.9%, a positive predictive value of 61.0%, and specificity and negative predictive values greater than 99%. Use of two hepatitis B-specific ICD-9 codes resulted in a sensitivity of 58.4% and a positive predictive value of 89.9%. Use of one or two hepatitis B ICD-9 codes can identify cases of chronic HBV infection with varying sensitivity and positive predictive values. As the USA increases the use of EHRs, surveillance using ICD-9 codes may be reliable for determining the burden of chronic HBV infection and would be useful for improving reporting by state and local health departments.
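
    The predictive values above follow from a standard 2x2 screening table, which can be made concrete with a worked example. The counts below are approximate values back-calculated from the reported one-code results (2202 confirmed cases among 1,652,055 patients), not the study's raw table.

        def screening_stats(tp, fp, fn, tn):
            sensitivity = tp / (tp + fn)   # confirmed cases that were flagged
            specificity = tn / (tn + fp)   # non-cases that were not flagged
            ppv = tp / (tp + fp)           # flagged patients who are true cases
            npv = tn / (tn + fn)           # unflagged patients who are non-cases
            return sensitivity, specificity, ppv, npv

        sens, spec, ppv, npv = screening_stats(tp=1848, fp=1181, fn=354,
                                               tn=1648672)
        print(f"sens={sens:.3f}  spec={spec:.5f}  ppv={ppv:.3f}  npv={npv:.5f}")
        # -> sens=0.839, ppv=0.610, matching the one-ICD-9-code figures above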

  7. A Computational/Experimental Study of Two Optimized Supersonic Transport Designs and the Reference H Baseline

    NASA Technical Reports Server (NTRS)

    Cliff, Susan E.; Baker, Timothy J.; Hicks, Raymond M.; Reuther, James J.

    1999-01-01

    Two supersonic transport configurations designed by use of non-linear aerodynamic optimization methods are compared with a linearly designed baseline configuration. One optimized configuration, designated Ames 7-04, was designed at NASA Ames Research Center using an Euler flow solver, and the other, designated Boeing W27, was designed at Boeing using a full-potential method. The two optimized configurations and the baseline were tested in the NASA Langley Unitary Plan Supersonic Wind Tunnel to evaluate the non-linear design optimization methodologies. In addition, the experimental results are compared with computational predictions for each of the three configurations from the Euler flow solver, AIRPLANE. The computational and experimental results both indicate moderate to substantial performance gains for the optimized configurations over the baseline configuration. The computed performance changes with and without diverters and nacelles were in excellent agreement with experiment for all three models. Comparisons of the computational and experimental cruise drag increments for the optimized configurations relative to the baseline show excellent agreement for the model designed by the Euler method, but poorer comparisons were found for the configuration designed by the full-potential code.

  8. Amesos2 and Belos: Direct and Iterative Solvers for Large Sparse Linear Systems

    DOE PAGES

    Bavier, Eric; Hoemmen, Mark; Rajamanickam, Sivasankaran; ...

    2012-01-01

    Solvers for large sparse linear systems come in two categories: direct and iterative. Amesos2, a package in the Trilinos software project, provides direct methods, and Belos, another Trilinos package, provides iterative methods. Amesos2 offers a common interface to many different sparse matrix factorization codes, and can handle any implementation of sparse matrices and vectors, via an easy-to-extend C++ traits interface. It can also factor matrices whose entries have arbitrary “Scalar” type, enabling extended-precision and mixed-precision algorithms. Belos includes many different iterative methods for solving large sparse linear systems and least-squares problems. Unlike competing iterative solver libraries, Belos completely decouples the algorithms from the implementations of the underlying linear algebra objects. This lets Belos exploit the latest hardware without changes to the code. Belos favors algorithms that solve higher-level problems, such as multiple simultaneous linear systems and sequences of related linear systems, faster than standard algorithms. The package also supports extended-precision and mixed-precision algorithms. Together, Amesos2 and Belos form a complete suite of sparse linear solvers.

  9. Equivalent Linearization Analysis of Geometrically Nonlinear Random Vibrations Using Commercial Finite Element Codes

    NASA Technical Reports Server (NTRS)

    Rizzi, Stephen A.; Muravyov, Alexander A.

    2002-01-01

    Two new equivalent linearization implementations for geometrically nonlinear random vibrations are presented. Both implementations are based upon a novel approach for evaluating the nonlinear stiffness within commercial finite element codes and are suitable for use with any finite element code having geometrically nonlinear static analysis capabilities. The formulation includes a traditional force-error minimization approach and a relatively new version of a potential energy-error minimization approach, which has been generalized for multiple degree-of-freedom systems. Results for a simply supported plate under random acoustic excitation are presented and comparisons of the displacement root-mean-square values and power spectral densities are made with results from a nonlinear time domain numerical simulation.
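
    Because the record describes the force-error minimization only abstractly, a minimal self-contained sketch of equivalent (statistical) linearization may help. This is my illustration for a unit-mass Duffing-type oscillator under white noise, with assumed constants; it is not the paper's finite-element formulation.

        # Equivalent linearization of  x'' + c x' + k (x + eps x^3) = w(t).
        # For a Gaussian response, minimizing E[(f_nl - k_eq x)^2] gives
        #   k_eq = k (1 + 3 eps sigma2),
        # and the linear SDOF variance under white noise of intensity S0 is
        #   sigma2 = pi S0 / (c k_eq)   (constant depends on the PSD convention).
        import math

        c, k, eps, S0 = 0.05, 1.0, 0.5, 1e-3

        sigma2 = math.pi * S0 / (c * k)       # start from the linear system
        for _ in range(100):                  # fixed-point iteration
            k_eq = k * (1.0 + 3.0 * eps * sigma2)
            sigma2_new = math.pi * S0 / (c * k_eq)
            if abs(sigma2_new - sigma2) < 1e-12:
                break
            sigma2 = sigma2_new

        print(f"k_eq = {k_eq:.4f}, response variance = {sigma2:.6f}")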

  10. Full-Process Computer Model of Magnetron Sputter, Part I: Test Existing State-of-Art Components

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walton, C C; Gilmer, G H; Wemhoff, A P

    2007-09-26

    This work is part of a larger project to develop a modeling capability for magnetron sputter deposition. The process is divided into four steps: plasma transport, target sputter, neutral gas and sputtered atom transport, and film growth, shown schematically in Fig. 1. Each of these is simulated separately in this Part 1 of the project, which is jointly funded between CMLS and Engineering. The Engineering portion is the plasma modeling, in step 1. The plasma modeling was performed using the Object-Oriented Particle-In-Cell code (OOPIC) from UC Berkeley [1]. Figure 2 shows the electron density in the simulated region, using magnetic field strength input from experiments by Bohlmark [2], where a scale of 1% is used. Figures 3 and 4 depict the magnetic field components that were generated using two-dimensional linear interpolation of Bohlmark's experimental data. The goal of the overall modeling tool is to understand, and later predict, relationships between parameters of film deposition we can change (such as gas pressure, gun voltage, and target-substrate distance) and key properties of the results (such as film stress, density, and stoichiometry). The simulation must use existing codes, either open-source or low-cost, not develop new codes. In part 1 (FY07) we identified and tested the best available code for each process step, then determined if it can cover the size and time scales we need in reasonable computation times. We also had to determine if the process steps are sufficiently decoupled that they can be treated separately, and identify any research-level issues preventing practical use of these codes. Part 2 will consider whether the codes can be (or need to be) made to talk to each other and integrated into a whole.

  11. Multigrid calculation of three-dimensional viscous cascade flows

    NASA Technical Reports Server (NTRS)

    Arnone, A.; Liou, M.-S.; Povinelli, L. A.

    1991-01-01

    A 3-D code for viscous cascade flow prediction was developed. The space discretization uses a cell-centered scheme with eigenvalue scaling to weigh the artificial dissipation terms. Computational efficiency of a four stage Runge-Kutta scheme is enhanced by using variable coefficients, implicit residual smoothing, and a full multigrid method. The Baldwin-Lomax eddy viscosity model is used for turbulence closure. A zonal, nonperiodic grid is used to minimize mesh distortion in and downstream of the throat region. Applications are presented for an annular vane with and without end wall contouring, and for a large scale linear cascade. The calculation is validated by comparing with experiments and by studying grid dependency.

  12. Implementing LPC (Linear Predictive Coding) Algorithms in the Study of Speech Processing.

    DTIC Science & Technology

    1983-12-01

    (No abstract is recoverable for this record: the indexed excerpt consists of OCR-garbled FORTRAN source, apparently a portable integer random-number routine and a glottal-excitation subroutine from the report's LPC speech-processing software.)

  13. Simulating of the measurement-device independent quantum key distribution with phase randomized general sources

    PubMed Central

    Wang, Qin; Wang, Xiang-Bin

    2014-01-01

    We present a model on the simulation of the measurement-device independent quantum key distribution (MDI-QKD) with phase randomized general sources. It can be used to predict experimental observations of a MDI-QKD with linear channel loss, simulating corresponding values for the gains, the error rates in different bases, and also the final key rates. Our model is applicable to MDI-QKDs with an arbitrary probabilistic mixture of different photon states or using any coding schemes. Therefore, it is useful in characterizing and evaluating the performance of the MDI-QKD protocol, making it a valuable tool in studying quantum key distribution. PMID:24728000

  14. Multigrid calculation of three-dimensional viscous cascade flows

    NASA Technical Reports Server (NTRS)

    Arnone, A.; Liou, M.-S.; Povinelli, L. A.

    1991-01-01

    A three-dimensional code for viscous cascade flow prediction has been developed. The space discretization uses a cell-centered scheme with eigenvalue scaling to weigh the artificial dissipation terms. Computational efficiency of a four-stage Runge-Kutta scheme is enhanced by using variable coefficients, implicit residual smoothing, and a full-multigrid method. The Baldwin-Lomax eddy-viscosity model is used for turbulence closure. A zonal, nonperiodic grid is used to minimize mesh distortion in and downstream of the throat region. Applications are presented for an annular vane with and without end wall contouring, and for a large-scale linear cascade. The calculation is validated by comparing with experiments and by studying grid dependency.

  15. Winglets on low aspect ratio wings

    NASA Technical Reports Server (NTRS)

    Kuhlman, John M.; Liaw, Paul

    1987-01-01

    The drag reduction potentially available from the use of winglets at the tips of low aspect ratio (1.75-2.67) wings with pronounced (45-60 deg) leading edge sweep is assessed numerically for the case of a cruise design point at a Mach number of 0.8 and a lift coefficient of 0.3. Both wing-winglet and wing-alone design geometries are derived from a linear-theory, minimum induced drag design methodology. Relative performance is evaluated with a nonlinear extended small disturbance potential flow analysis code. Predicted lift coefficient/pressure drag coefficient increases at equal lift for the wing-winglet configurations over the wing-alone planform are of the order of 14.6-15.8, when boundary layer interaction is included.

  16. A parallel and modular deformable cell Car-Parrinello code

    NASA Astrophysics Data System (ADS)

    Cavazzoni, Carlo; Chiarotti, Guido L.

    1999-12-01

    We have developed a modular parallel code implementing the Car-Parrinello [Phys. Rev. Lett. 55 (1985) 2471] algorithm including the variable cell dynamics [Europhys. Lett. 36 (1994) 345; J. Phys. Chem. Solids 56 (1995) 510]. Our code is written in Fortran 90, and makes use of some new programming concepts like encapsulation, data abstraction and data hiding. The code has a multi-layer hierarchical structure with tree-like dependences among modules. The modules include not only the variables but also the methods acting on them, in an object-oriented fashion. The modular structure allows easier code maintenance, development, and debugging, and is suitable for a developer team. The layer structure permits high portability. The code displays an almost linear speed-up over a wide range of numbers of processors, independently of the architecture. Super-linear speed-up is obtained with a "smart" Fast Fourier Transform (FFT) that uses the available memory on the single node (increasing for a fixed problem with the number of processing elements) as a temporary buffer to store wave function transforms. This code has been used to simulate water and ammonia at giant planet conditions for systems as large as 64 molecules for ~50 ps.

  17. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1998-01-01

    Decoding algorithms based on the trellis representation of a code (block or convolutional) drastically reduce decoding complexity. The best known and most commonly used trellis-based decoding algorithm is the Viterbi algorithm. It is a maximum likelihood decoding algorithm. Convolutional codes with the Viterbi decoding have been widely used for error control in digital communications over the last two decades. This chapter is concerned with the application of the Viterbi decoding algorithm to linear block codes. First, the Viterbi algorithm is presented. Then, optimum sectionalization of a trellis to minimize the computational complexity of a Viterbi decoder is discussed and an algorithm is presented. Some design issues for IC (integrated circuit) implementation of a Viterbi decoder are considered and discussed. Finally, a new decoding algorithm based on the principle of compare-select-add is presented. This new algorithm can be applied to both block and convolutional codes and is more efficient than the conventional Viterbi algorithm based on the add-compare-select principle. This algorithm is particularly efficient for rate 1/n antipodal convolutional codes and their high-rate punctured codes. It reduces computational complexity by one-third compared with the Viterbi algorithm.
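
    Since the paragraph above walks through the add-compare-select recursion, a compact runnable illustration may be useful. The sketch below is my textbook-style hard-decision Viterbi decoder for the common rate-1/2, constraint-length-3 convolutional code with octal generators (7, 5); it illustrates the recursion itself, not the chapter's sectionalized block-code trellis or its compare-select-add variant.

        # Hard-decision Viterbi decoding of the (7, 5) rate-1/2 convolutional code.
        G = [0b111, 0b101]                # octal generators 7 and 5

        def encode(bits):
            state, out = 0, []
            for b in bits:
                reg = (b << 2) | state    # [newest bit, two state bits]
                out += [bin(reg & g).count("1") & 1 for g in G]
                state = reg >> 1          # newest bit shifts into the state
            return out

        def viterbi(received):
            INF = float("inf")
            metric = [0, INF, INF, INF]   # start in the all-zero state
            paths = [[], [], [], []]
            for i in range(0, len(received), 2):
                r = received[i:i + 2]
                new_metric = [INF] * 4
                new_paths = [None] * 4
                for s in range(4):
                    if metric[s] == INF:
                        continue
                    for b in (0, 1):
                        reg = (b << 2) | s
                        expect = [bin(reg & g).count("1") & 1 for g in G]
                        m = metric[s] + sum(x != y for x, y in zip(r, expect))  # add
                        ns = reg >> 1
                        if m < new_metric[ns]:            # compare, select
                            new_metric[ns] = m
                            new_paths[ns] = paths[s] + [b]
                metric, paths = new_metric, new_paths
            return paths[metric.index(min(metric))]

        msg = [1, 0, 1, 1, 0, 0]          # trailing zeros flush the encoder
        coded = encode(msg)
        coded[3] ^= 1                     # one channel bit error
        print(viterbi(coded) == msg)      # True: the error is corrected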

  18. Polar codes for achieving the classical capacity of a quantum channel

    NASA Astrophysics Data System (ADS)

    Guha, Saikat; Wilde, Mark

    2012-02-01

    We construct the first near-explicit, linear, polar codes that achieve the capacity for classical communication over quantum channels. The codes exploit the channel polarization phenomenon observed by Arikan for classical channels. Channel polarization is an effect in which one can synthesize a set of channels, by ``channel combining'' and ``channel splitting,'' in which a fraction of the synthesized channels is perfect for data transmission while the other fraction is completely useless for data transmission, with the good fraction equal to the capacity of the channel. Our main technical contributions are threefold. First, we demonstrate that the channel polarization effect occurs for channels with classical inputs and quantum outputs. We then construct linear polar codes based on this effect, and the encoding complexity is O(N log N), where N is the blocklength of the code. We also demonstrate that a quantum successive cancellation decoder works well, i.e., the word error rate decays exponentially with the blocklength of the code. For a quantum channel with binary pure-state outputs, such as a binary-phase-shift-keyed coherent-state optical communication alphabet, the symmetric Holevo information rate is in fact the ultimate channel capacity, which is achieved by our polar code.
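
    As a concrete companion to the encoding-complexity claim, the following is a minimal sketch (my illustration) of the classical polar-transform butterfly over GF(2). The bit-reversal permutation and the frozen-bit selection that complete a polar code, as well as all quantum aspects, are omitted.

        def polar_transform(u):
            """Recursively apply the kernel F = [[1, 0], [1, 1]] over GF(2)."""
            n = len(u)                    # n must be a power of two
            if n == 1:
                return u[:]
            half = n // 2
            top = polar_transform([u[i] ^ u[i + half] for i in range(half)])
            bottom = polar_transform(u[half:])
            return top + bottom

        print(polar_transform([1, 0, 1, 1]))   # N = 4; O(N log N) XORs overall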

  19. Data integration of structured and unstructured sources for assigning clinical codes to patient stays

    PubMed Central

    Luyckx, Kim; Luyten, Léon; Daelemans, Walter; Van den Bulcke, Tim

    2016-01-01

    Objective Enormous amounts of healthcare data are becoming increasingly accessible through the large-scale adoption of electronic health records. In this work, structured and unstructured (textual) data are combined to assign clinical diagnostic and procedural codes (specifically ICD-9-CM) to patient stays. We investigate whether integrating these heterogeneous data types improves prediction strength compared to using the data types in isolation. Methods Two separate data integration approaches were evaluated. Early data integration combines features of several sources within a single model, and late data integration learns a separate model per data source and combines these predictions with a meta-learner. This is evaluated on data sources and clinical codes from a broad set of medical specialties. Results When compared with the best individual prediction source, late data integration leads to improvements in predictive power (eg, overall F-measure increased from 30.6% to 38.3% for International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) diagnostic codes), while early data integration is less consistent. The predictive strength strongly differs between medical specialties, both for ICD-9-CM diagnostic and procedural codes. Discussion Structured data provides complementary information to unstructured data (and vice versa) for predicting ICD-9-CM codes. This can be captured most effectively by the proposed late data integration approach. Conclusions We demonstrated that models using multiple electronic health record data sources systematically outperform models using data sources in isolation in the task of predicting ICD-9-CM codes over a broad range of medical specialties. PMID:26316458
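
    The late-integration scheme described above lends itself to a short sketch. The following is my illustration of the idea with scikit-learn, not the paper's system; the arrays standing in for structured features, clinical-note texts, and labels for one ICD-9-CM code are synthetic.

        import numpy as np
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_predict

        # Synthetic stand-ins for the two data sources and the target code.
        rng = np.random.default_rng(0)
        X_struct = rng.normal(size=(200, 10))                 # structured features
        X_notes = ["fever cough" if i % 2 else "chest pain dyspnea"
                   for i in range(200)]                       # note texts
        y = rng.integers(0, 2, size=200)                      # one ICD-9-CM label

        X_text = TfidfVectorizer().fit_transform(X_notes)

        # One model per source; out-of-fold probabilities keep the labels from
        # leaking into the meta-learner.
        p_struct = cross_val_predict(LogisticRegression(max_iter=1000), X_struct,
                                     y, cv=5, method="predict_proba")[:, 1]
        p_text = cross_val_predict(LogisticRegression(max_iter=1000), X_text,
                                   y, cv=5, method="predict_proba")[:, 1]

        meta = LogisticRegression().fit(np.column_stack([p_struct, p_text]), y)
        print(meta.coef_)        # weight the meta-learner gives each source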

  20. Turbulence Modeling for Shock Wave/Turbulent Boundary Layer Interactions

    NASA Technical Reports Server (NTRS)

    Lillard, Randolph P.

    2011-01-01

    Accurate aerodynamic computational predictions are essential for the safety of space vehicles, but these computations are of limited accuracy when large pressure gradients are present in the flow. The goal of the current project is to improve the state of compressible turbulence modeling for high speed flows with shock wave / turbulent boundary layer interactions (SWTBLI). Emphasis will be placed on models that can accurately predict the separated region caused by the SWTBLI. These flows are classified as nonequilibrium boundary layers because of the very large and variable adverse pressure gradients caused by the shock waves. The lag model was designed to model these nonequilibrium flows by incorporating history effects. Standard one- and two-equation models (Spalart-Allmaras and SST) and the lag model will be run and compared to a new lag model. This new model, the Reynolds stress tensor lag model (lagRST), will be assessed against multiple wind tunnel tests and correlations. The basis of the lag and lagRST models is to preserve the accuracy of the standard turbulence models in equilibrium turbulence, when the Reynolds stresses are linearly related to the mean strain rates, but create a lag between mean strain rate effects and turbulence when nonequilibrium effects become important, such as in large pressure gradients. The effect this lag has on the results for SWTBLI and massively separated flows will be determined. These computations will be done with a modified version of the OVERFLOW code. This code solves the RANS equations on overset grids. It was used for this study for its ability to input very complex geometries into the flow solver, such as the Space Shuttle in the full stack configuration. The model was successfully implemented within two versions of the OVERFLOW code. Results show a substantial improvement over the baseline models for transonic separated flows. The results are mixed for the SWTBLI cases assessed. Separation predictions are not as good as the baseline models, but the overprediction of the peak heat flux downstream of the reattachment shock that plagues many models is reduced.

  1. Low-Density Parity-Check (LDPC) Codes Constructed from Protographs

    NASA Astrophysics Data System (ADS)

    Thorpe, J.

    2003-08-01

    We introduce a new class of low-density parity-check (LDPC) codes constructed from a template called a protograph. The protograph serves as a blueprint for constructing LDPC codes of arbitrary size whose performance can be predicted by analyzing the protograph. We apply standard density evolution techniques to predict the performance of large protograph codes. Finally, we use a randomized search algorithm to find good protographs.
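
    The lifting construction that turns a protograph into an arbitrarily large code can be sketched in a few lines. This is my generic illustration of protograph expansion with circulant permutations, not the randomized search described in the record.

        import numpy as np

        def lift(base, Z, rng):
            """Expand a protograph base matrix into a parity-check matrix H."""
            m, n = base.shape
            H = np.zeros((m * Z, n * Z), dtype=np.uint8)
            I = np.eye(Z, dtype=np.uint8)
            for i in range(m):
                for j in range(n):
                    # base[i, j] parallel edges -> XOR of distinct circulant shifts
                    for s in rng.choice(Z, size=base[i, j], replace=False):
                        H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] ^= np.roll(I, s, axis=1)
            return H

        base = np.array([[1, 1, 1, 0],    # a toy 2x4 protograph
                         [1, 2, 0, 1]])   # "2" = two parallel edges
        H = lift(base, Z=8, rng=np.random.default_rng(0))
        print(H.shape)                    # (16, 32): code size scales with Z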

  2. Benchmark studies of the gyro-Landau-fluid code and gyro-kinetic codes on kinetic ballooning modes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, T. F.; Lawrence Livermore National Laboratory, Livermore, California 94550; Xu, X. Q.

    2016-03-15

    A Gyro-Landau-Fluid (GLF) 3 + 1 model has been recently implemented in the BOUT++ framework, which contains full Finite-Larmor-Radius effects, Landau damping, and toroidal resonance [Ma et al., Phys. Plasmas 22, 055903 (2015)]. A linear global beta scan has been conducted using the JET-like circular equilibria (cbm18 series), showing that the unstable modes are kinetic ballooning modes (KBMs). In this work, we use the GYRO code, which is a gyrokinetic continuum code widely used for simulation of plasma microturbulence, to benchmark the GLF 3 + 1 code on KBMs. To verify our code on the KBM case, we first perform the beta scan based on the “Cyclone base case parameter set.” We find that the growth rate is almost the same for the two codes, and the KBM mode is further destabilized as beta increases. For the JET-like global circular equilibria, as the modes localize in the peak pressure gradient region, a linear local beta scan using the same set of equilibria has been performed at this position for comparison. With the drift kinetic electron module in the GYRO code, including a small electron-electron collision to damp electron modes, the GYRO-generated mode structures and parity suggest that they are kinetic ballooning modes, and the growth rate is comparable to the GLF results. However, a radial scan of the pedestal for a particular set of cbm18 equilibria, using the GYRO code, shows different trends for the low-n and high-n modes. The low-n modes show that the linear growth rate peaks at the peak pressure gradient position, as in the GLF results. However, for high-n modes, the growth rate of the most unstable mode shifts outward to the bottom of the pedestal, and the real frequency of what were originally KBMs in the ion diamagnetic drift direction steadily approaches and crosses over to the electron diamagnetic drift direction.

  3. Nonlinear Analysis of Airfoil High-Intensity Gust Response Using a High-Order Prefactored Compact Code

    NASA Technical Reports Server (NTRS)

    Crivellini, A.; Golubev, V.; Mankbadi, R.; Scott, J. R.; Hixon, R.; Povinelli, L.; Kiraly, L. James (Technical Monitor)

    2002-01-01

    The nonlinear response of symmetric and loaded airfoils to an impinging vortical gust is investigated in the parametric space of gust dimension, intensity, and frequency. The study, which was designed to investigate the validity limits for a linear analysis, is implemented by applying a nonlinear high-order prefactored compact code and comparing results with linear solutions from the GUST3D frequency-domain solver. Both the unsteady aerodynamic and acoustic gust responses are examined.

  4. Superdense Coding over Optical Fiber Links with Complete Bell-State Measurements

    NASA Astrophysics Data System (ADS)

    Williams, Brian P.; Sadlier, Ronald J.; Humble, Travis S.

    2017-02-01

    Adopting quantum communication to modern networking requires transmitting quantum information through a fiber-based infrastructure. We report the first demonstration of superdense coding over optical fiber links, taking advantage of a complete Bell-state measurement enabled by time-polarization hyperentanglement, linear optics, and common single-photon detectors. We demonstrate the highest single-qubit channel capacity to date utilizing linear optics, 1.665 ±0.018 , and we provide a full experimental implementation of a hybrid, quantum-classical communication protocol for image transfer.

  5. Correcting quantum errors with entanglement.

    PubMed

    Brun, Todd; Devetak, Igor; Hsieh, Min-Hsiu

    2006-10-20

    We show how entanglement shared between encoder and decoder can simplify the theory of quantum error correction. The entanglement-assisted quantum codes we describe do not require the dual-containing constraint necessary for standard quantum error-correcting codes, thus allowing us to "quantize" all of classical linear coding theory. In particular, efficient modern classical codes that attain the Shannon capacity can be made into entanglement-assisted quantum codes attaining the hashing bound (closely related to the quantum capacity). For systems without large amounts of shared entanglement, these codes can also be used as catalytic codes, in which a small amount of initial entanglement enables quantum communication.

  6. High Speed Research Noise Prediction Code (HSRNOISE) User's and Theoretical Manual

    NASA Technical Reports Server (NTRS)

    Golub, Robert (Technical Monitor); Rawls, John W., Jr.; Yeager, Jessie C.

    2004-01-01

    This report describes a computer program, HSRNOISE, that predicts noise levels for a supersonic aircraft powered by mixed flow turbofan engines with rectangular mixer-ejector nozzles. It fully documents the noise prediction algorithms, provides instructions for executing the HSRNOISE code, and provides predicted noise levels for the High Speed Research (HSR) program Technology Concept (TC) aircraft. The component source noise prediction algorithms were developed jointly by Boeing, General Electric Aircraft Engines (GEAE), NASA and Pratt & Whitney during the course of the NASA HSR program. Modern Technologies Corporation developed an alternative mixer ejector jet noise prediction method under contract to GEAE that has also been incorporated into the HSRNOISE prediction code. Algorithms for determining propagation effects and calculating noise metrics were taken from the NASA Aircraft Noise Prediction Program.

  7. Visual Tracking via Sparse and Local Linear Coding.

    PubMed

    Wang, Guofeng; Qin, Xueying; Zhong, Fan; Liu, Yue; Li, Hongbo; Peng, Qunsheng; Yang, Ming-Hsuan

    2015-11-01

    The state search is an important component of any object tracking algorithm. Numerous algorithms have been proposed, but stochastic sampling methods (e.g., particle filters) are arguably one of the most effective approaches. However, the discretization of the state space complicates the search for the precise object location. In this paper, we propose a novel tracking algorithm that extends the state space of particle observations from discrete to continuous. The solution is determined accurately via iterative linear coding between two convex hulls. The algorithm is modeled by an optimal function, which can be efficiently solved by either convex sparse coding or locality constrained linear coding. The algorithm is also very flexible and can be combined with many generic object representations. Thus, we first use sparse representation to achieve an efficient searching mechanism of the algorithm and demonstrate its accuracy. Next, two other object representation models, i.e., least soft-threshold squares and adaptive structural local sparse appearance, are implemented with improved accuracy to demonstrate the flexibility of our algorithm. Qualitative and quantitative experimental results demonstrate that the proposed tracking algorithm performs favorably against the state-of-the-art methods in dynamic scenes.
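
    For readers unfamiliar with locality-constrained linear coding, the following is a minimal sketch of its widely used closed-form solution (in the style of Wang et al., CVPR 2010); it is my illustration of the coding step only, not the authors' tracker.

        import numpy as np

        def llc_code(x, B, lam=1e-4):
            """Code sample x (d,) over its k nearest bases B (k, d); sum(c) = 1."""
            k = B.shape[0]
            centered = B - x                      # shift bases to the sample
            C = centered @ centered.T             # local covariance (k, k)
            C += lam * np.trace(C) * np.eye(k)    # regularize for stability
            c = np.linalg.solve(C, np.ones(k))
            return c / c.sum()                    # enforce the sum-to-one constraint

        rng = np.random.default_rng(1)
        B = rng.normal(size=(5, 16))              # 5 nearest codebook atoms
        x = rng.normal(size=16)
        print(llc_code(x, B).round(3))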

  8. Vibration Response Models of a Stiffened Aluminum Plate Excited by a Shaker

    NASA Technical Reports Server (NTRS)

    Cabell, Randolph H.

    2008-01-01

    Numerical models of structural-acoustic interactions are of interest to aircraft designers and the space program. This paper describes a comparison between two energy finite element codes, a statistical energy analysis code, a structural finite element code, and the experimentally measured response of a stiffened aluminum plate excited by a shaker. Different methods for modeling the stiffeners and the power input from the shaker are discussed. The results show that the energy codes (energy finite element and statistical energy analysis) accurately predicted the measured mean square velocity of the plate. In addition, predictions from an energy finite element code had the best spatial correlation with measured velocities. However, predictions from a considerably simpler, single subsystem, statistical energy analysis model also correlated well with the spatial velocity distribution. The results highlight a need for further work to understand the relationship between modeling assumptions and the prediction results.

  9. Read-Write-Codes: An Erasure Resilient Encoding System for Flexible Reading and Writing in Storage Networks

    NASA Astrophysics Data System (ADS)

    Mense, Mario; Schindelhauer, Christian

    We introduce the Read-Write-Coding-System (RWC) - a very flexible class of linear block codes that generate efficient and flexible erasure codes for storage networks. In particular, given a message x of k symbols and a codeword y of n symbols, an RW code defines additional parameters k ≤ r, w ≤ n that offer enhanced possibilities to adjust the fault-tolerance capability of the code. More precisely, an RWC provides linear (n, k, d)-codes that have (a) minimum distance d = n - r + 1 for any two codewords, and (b) for each codeword there exists a codeword for each other message with distance of at most w. Furthermore, depending on the values r, w and the code alphabet, different block codes such as parity codes (e.g., RAID 4/5) or Reed-Solomon (RS) codes (if r = k and thus w = n) can be generated. In storage networks in which I/O accesses are very costly and redundancy is crucial, this flexibility has considerable advantages, as r and w can be optimally adapted to read- or write-intensive applications; only w symbols must be updated if the message x changes completely, which is different from other codes that always need to rewrite y completely as x changes. In this paper, we first state a tight lower bound and basic conditions for all RW codes. Furthermore, we introduce special RW codes in which all mentioned parameters are adjustable even online, that is, those RW codes are adaptive to changing demands. Finally, we point out some useful properties regarding safety and security of the stored data.
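
    A worked special case may make the read/write parameters concrete. The snippet below is my illustration of the parity-code (RAID-4/5-style) end of the design space, where changing one data symbol requires rewriting only two codeword symbols (the symbol itself and the parity) rather than re-encoding the whole stripe; the paper's general RW construction interpolates between this and RS-style codes.

        data = [0x12, 0x34, 0x56]                 # k = 3 data symbols
        parity = data[0] ^ data[1] ^ data[2]      # one parity symbol, n = 4

        new_d1 = 0x99
        parity ^= data[1] ^ new_d1                # incremental parity update
        data[1] = new_d1

        assert parity == data[0] ^ data[1] ^ data[2]   # stripe still consistent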

  10. Engineering Overview of a Multidisciplinary HSCT Design Framework Using Medium-Fidelity Analysis Codes

    NASA Technical Reports Server (NTRS)

    Weston, R. P.; Green, L. L.; Salas, A. O.; Samareh, J. A.; Townsend, J. C.; Walsh, J. L.

    1999-01-01

    An objective of the HPCC Program at NASA Langley has been to promote the use of advanced computing techniques to more rapidly solve the problem of multidisciplinary optimization of a supersonic transport configuration. As a result, a software system has been designed and is being implemented to integrate a set of existing discipline analysis codes, some of them CPU-intensive, into a distributed computational framework for the design of a High Speed Civil Transport (HSCT) configuration. The proposed paper will describe the engineering aspects of integrating these analysis codes and additional interface codes into an automated design system. The objective of the design problem is to optimize the aircraft weight for given mission conditions, range, and payload requirements, subject to aerodynamic, structural, and performance constraints. The design variables include both thicknesses of structural elements and geometric parameters that define the external aircraft shape. An optimization model has been adopted that uses the multidisciplinary analysis results and the derivatives of the solution with respect to the design variables to formulate a linearized model that provides input to the CONMIN optimization code, which outputs new values for the design variables. The analysis process begins by deriving the updated geometries and grids from the baseline geometries and grids using the new values for the design variables. This free-form deformation approach provides internal FEM (finite element method) grids that are consistent with aerodynamic surface grids. The next step involves using the derived FEM and section properties in a weights process to calculate detailed weights and the center of gravity location for specified flight conditions. The weights process computes the as-built weight, weight distribution, and weight sensitivities for given aircraft configurations at various mass cases. Currently, two mass cases are considered: cruise and gross take-off weight (GTOW). Weights information is obtained from correlations of data from three sources: 1) as-built initial structural and non-structural weights from an existing database, 2) theoretical FEM structural weights and sensitivities from Genesis, and 3) empirical as-built weight increments, non-structural weights, and weight sensitivities from FLOPS. For the aeroelastic analysis, a variable-fidelity aerodynamic analysis has been adopted. This approach uses infrequent CPU-intensive non-linear CFD to calculate a non-linear correction relative to a linear aero calculation for the same aerodynamic surface at an angle of attack that results in the same configuration lift. For efficiency, this nonlinear correction is applied after each subsequent linear aero solution during the iterations between the aerodynamic and structural analyses. Convergence is achieved when the vehicle shape being used for the aerodynamic calculations is consistent with the structural deformations caused by the aerodynamic loads. To make the structural analyses more efficient, a linearized structural deformation model has been adopted, in which a single stiffness matrix can be used to solve for the deformations under all the load conditions. Using the converged aerodynamic loads, a final set of structural analyses are performed to determine the stress distributions and the buckling conditions for constraint calculation. 
Performance constraints are obtained by running FLOPS using drag polars that are computed using results from non-linear corrections to the linear aero code plus several codes to provide drag increments due to skin friction, wave drag, and other miscellaneous drag contributions. The status of the integration effort will be presented in the proposed paper, and results will be provided that illustrate the degree of accuracy in the linearizations that have been employed.
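
    The linearize-then-optimize loop described in this record can be made concrete with a small sketch. This is my conceptual illustration only: the multidisciplinary analysis is replaced by a toy analyze() function, and the CONMIN optimizer by SciPy's LP solver applied to the linearized subproblem under simple move limits.

        import numpy as np
        from scipy.optimize import linprog

        def analyze(x):
            """Stand-in for the analysis codes: objective (weight), constraints
            g(x) <= 0, and their gradients with respect to the design variables."""
            f, grad_f = x @ x, 2 * x
            g, grad_g = np.array([1.0 - x.sum()]), -np.ones((1, x.size))
            return f, grad_f, g, grad_g

        x = np.array([2.0, 2.0])
        for it in range(25):
            f, grad_f, g, grad_g = analyze(x)
            # Linearized subproblem: min grad_f . dx  s.t.  g + grad_g . dx <= 0,
            # with move limits keeping the step where the linearization is valid.
            res = linprog(grad_f, A_ub=grad_g, b_ub=-g,
                          bounds=[(-0.1, 0.1)] * x.size)
            x = x + res.x
            if np.linalg.norm(res.x) < 1e-9:
                break
        print(x)     # settles near the constrained optimum (0.5, 0.5)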

  11. Comparison of the performance of mental health, drug and alcohol comorbidities based on ICD-10-AM and medical records for predicting 12-month outcomes in trauma patients.

    PubMed

    Nguyen, Tu Q; Simpson, Pamela M; Braaf, Sandra C; Cameron, Peter A; Judson, Rodney; Gabbe, Belinda J

    2018-06-05

    Many outcome studies capture the presence of mental health, drug and alcohol comorbidities from administrative datasets and medical records. How these sources compare as predictors of patient outcomes has not been determined. The purpose of the present study was to compare mental health, drug and alcohol comorbidities based on ICD-10-AM coding and medical record documentation for predicting longer-term outcomes in injured patients. A random sample of patients (n = 500) captured by the Victorian State Trauma Registry was selected for the study. Retrospective medical record reviews were conducted to collect data about documented mental health, drug and alcohol comorbidities, while ICD-10-AM codes were obtained from routinely collected hospital data. Outcomes at 12-months post-injury were the Glasgow Outcome Scale - Extended (GOS-E), European Quality of Life Five Dimensions (EQ-5D-3L), and return to work. Linear and logistic regression models, adjusted for age and gender, using medical record derived comorbidity and ICD-10-AM were compared using measures of calibration (Hosmer-Lemeshow statistic) and discrimination (C-statistic and R^2). There was no demonstrable difference in predictive performance between the medical record and ICD-10-AM models for predicting the GOS-E, EQ-5D-3L utility score, and EQ-5D-3L mobility, self-care, usual activities and pain/discomfort items. The area under the receiver operating characteristic curve (AUC) for models using medical record derived comorbidity (AUC 0.68, 95% CI: 0.63, 0.73) was higher than for the model using ICD-10-AM data (AUC 0.62, 95% CI: 0.57, 0.67) for predicting the EQ-5D-3L anxiety/depression item. The discrimination of the model for predicting return to work was higher with inclusion of the medical record data (AUC 0.69, 95% CI: 0.63, 0.76) than the ICD-10-AM data (AUC 0.59, 95% CI: 0.52, 0.65). Mental health, drug and alcohol comorbidity information derived from medical record review was not clearly superior for predicting the majority of the outcomes assessed when compared to ICD-10-AM. While information available in medical records may be more comprehensive than in ICD-10-AM coding, there appears to be little difference in the discriminative capacity of comorbidities coded in the two sources.

  12. Analysis of view synthesis prediction architectures in modern coding standards

    NASA Astrophysics Data System (ADS)

    Tian, Dong; Zou, Feng; Lee, Chris; Vetro, Anthony; Sun, Huifang

    2013-09-01

    Depth-based 3D formats are currently being developed as extensions to both AVC and HEVC standards. The availability of depth information facilitates the generation of intermediate views for advanced 3D applications and displays, and also enables more efficient coding of the multiview input data through view synthesis prediction techniques. This paper outlines several approaches that have been explored to realize view synthesis prediction in modern video coding standards such as AVC and HEVC. The benefits and drawbacks of various architectures are analyzed in terms of performance, complexity, and other design considerations. It is hence concluded that block-based VSP prediction for multiview video signals provides attractive coding gains with comparable complexity as traditional motion/disparity compensation.

  13. Visual pattern image sequence coding

    NASA Technical Reports Server (NTRS)

    Silsbee, Peter; Bovik, Alan C.; Chen, Dapang

    1990-01-01

    The visual pattern image coding (VPIC) configurable digital image-coding process is capable of coding with visual fidelity comparable to the best available techniques, at compressions which (at 30-40:1) exceed all other technologies. These capabilities are associated with unprecedented coding efficiencies; coding and decoding operations are entirely linear with respect to image size and entail a complexity that is 1-2 orders of magnitude faster than any previous high-compression technique. The visual pattern image sequence coding to which attention is presently given exploits all the advantages of the static VPIC in the reduction of information from an additional, temporal dimension, to achieve unprecedented image sequence coding performance.

  14. Numerical Calculations of 3-D High-Lift Flows and Comparison with Experiment

    NASA Technical Reports Server (NTRS)

    Compton, William B, III

    2015-01-01

    Solutions were obtained with the Navier-Stokes CFD code TLNS3D to predict the flow about the NASA Trapezoidal Wing, a high-lift wing composed of three elements: the main-wing element, a deployed leading-edge slat, and a deployed trailing-edge flap. Turbulence was modeled by the Spalart-Allmaras one-equation turbulence model. One case with massive separation was repeated using Menter's two-equation SST (Shear Stress Transport) k-omega turbulence model in an attempt to improve the agreement with experiment. The investigation was conducted at a free stream Mach number of 0.2, and at angles of attack ranging from 10.004 degrees to 34.858 degrees. The Reynolds number based on the mean aerodynamic chord of the wing was 4.3 × 10^6. Compared to experiment, the numerical procedure predicted the surface pressures very well at angles of attack in the linear range of the lift. However, computed maximum lift was 5% low. Drag was mainly underpredicted. The procedure correctly predicted several well-known trends and features of high-lift flows, such as off-body separation. The two turbulence models yielded significantly different solutions for the repeated case.

  15. Power prediction in mobile communication systems using an optimal neural-network structure.

    PubMed

    Gao, X M; Gao, X Z; Tanskanen, J A; Ovaska, S J

    1997-01-01

    Presents a novel neural-network-based predictor for received power level prediction in direct sequence code division multiple access (DS/CDMA) systems. The predictor consists of an adaptive linear element (Adaline) followed by a multilayer perceptron (MLP). An important but difficult problem in designing such a cascade predictor is to determine the complexity of the networks. We solve this problem by using the predictive minimum description length (PMDL) principle to select the optimal numbers of input and hidden nodes. This approach results in a predictor with both good noise attenuation and excellent generalization capability. The optimized neural networks are used for predictive filtering of very noisy Rayleigh fading signals with 1.8 GHz carrier frequency. Our results show that the optimal neural predictor can provide smoothed in-phase and quadrature signals with signal-to-noise ratio (SNR) gains of about 12 and 7 dB at the urban mobile speeds of 5 and 50 km/h, respectively. The corresponding power signal SNR gains are about 11 and 5 dB. Therefore, the neural predictor is well suited for power control applications where "delayless" noise attenuation and efficient reduction of fast fading are required.
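
    The first stage of the cascade can be sketched compactly. Below is my minimal illustration of an adaptive linear element trained with the LMS rule to predict the next sample of a noisy signal from the previous p samples; the paper's MLP second stage and PMDL-based order selection are omitted.

        import numpy as np

        rng = np.random.default_rng(0)
        t = np.arange(2000)
        signal = np.sin(2 * np.pi * t / 50) + 0.3 * rng.normal(size=t.size)

        p, mu = 8, 0.01                   # filter order and LMS step size
        w = np.zeros(p)
        for n in range(p, t.size - 1):
            x = signal[n - p:n][::-1]     # most recent samples first
            e = signal[n] - w @ x         # one-step prediction error
            w += mu * e * x               # LMS weight update

        print("final one-step prediction error:", abs(e))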

  16. Ex-Vessel Core Melt Modeling Comparison between MELTSPREAD-CORQUENCH and MELCOR 2.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robb, Kevin R.; Farmer, Mitchell; Francis, Matthew W.

    System-level code analyses by both United States and international researchers predict major core melting, bottom head failure, and corium-concrete interaction for Fukushima Daiichi Unit 1 (1F1). Although system codes such as MELCOR and MAAP are capable of capturing a wide range of accident phenomena, they currently do not contain detailed models for evaluating some ex-vessel core melt behavior. However, specialized codes containing more detailed modeling are available for melt spreading, such as MELTSPREAD, as well as for long-term molten corium-concrete interaction (MCCI) and debris coolability, such as CORQUENCH. In a preceding study, Enhanced Ex-Vessel Analysis for Fukushima Daiichi Unit 1: Melt Spreading and Core-Concrete Interaction Analyses with MELTSPREAD and CORQUENCH, the MELTSPREAD-CORQUENCH codes predicted the 1F1 core melt readily cooled, in contrast to predictions by MELCOR. The user community has taken notice and is in the process of updating their system codes, specifically MAAP and MELCOR, to improve and reduce conservatism in their ex-vessel core melt models. This report investigates why the MELCOR v2.1 code, compared to the MELTSPREAD and CORQUENCH 3.03 codes, yields differing predictions of ex-vessel melt progression. To accomplish this, the differences in the treatment of the ex-vessel melt with respect to melt spreading and long-term coolability are examined. The differences in modeling approaches are summarized, and a comparison of example code predictions is provided.

  17. NAS Experiences of Porting CM Fortran Codes to HPF on IBM SP2 and SGI Power Challenge

    NASA Technical Reports Server (NTRS)

    Saini, Subhash

    1995-01-01

    Current Connection Machine (CM) Fortran codes developed for the CM-2 and the CM-5 represent an important class of parallel applications. Several users have employed CM Fortran codes in production mode on the CM-2 and the CM-5 for the last five to six years, constituting a heavy investment in terms of cost and time. With Thinking Machines Corporation's decision to withdraw from the hardware business and with the decommissioning of many CM-2 and CM-5 machines, the best way to protect the substantial investment in CM Fortran codes is to port the codes to High Performance Fortran (HPF) on highly parallel systems. HPF is very similar to CM Fortran and thus represents a natural transition. Conversion issues involved in porting CM Fortran codes on the CM-5 to HPF are presented. In particular, the differences between data distribution directives and the CM Fortran Utility Routines Library, as well as the equivalent functionality in the HPF Library are discussed. Several CM Fortran codes (Cannon algorithm for matrix-matrix multiplication, Linear solver Ax=b, 1-D convolution for 2-D datasets, Laplace's Equation solver, and Direct Simulation Monte Carlo (DSMC) codes have been ported to Subset HPF on the IBM SP2 and the SGI Power Challenge. Speedup ratios versus number of processors for the Linear solver and DSMC code are presented.

  18. Tiltrotor Aeroacoustic Code (TRAC) Prediction Assessment and Initial Comparisons with Tram Test Data

    NASA Technical Reports Server (NTRS)

    Burley, Casey L.; Brooks, Thomas F.; Charles, Bruce D.; McCluer, Megan

    1999-01-01

    A prediction sensitivity assessment to inputs and blade modeling is presented for the TiltRotor Aeroacoustic Code (TRAC). For this study, the non-CFD prediction system option in TRAC is used. Here, the comprehensive rotorcraft code, CAMRAD.Mod1, coupled with the high-resolution sectional loads code HIRES, predicts unsteady blade loads to be used in the noise prediction code WOPWOP. The sensitivity of the predicted blade motions, blade airloads, wake geometry, and acoustics is examined with respect to rotor rpm, blade twist and chord, and to blade dynamic modeling. To accomplish this assessment, an interim input-deck for the TRAM test model and an input-deck for a reference test model are utilized in both rigid and elastic modes. Both of these test models are regarded as near scale models of the V-22 proprotor (tiltrotor). With basic TRAC sensitivities established, initial TRAC predictions are compared to results of an extensive test of an isolated model proprotor. The test was that of the TiltRotor Aeroacoustic Model (TRAM) conducted in the Duits-Nederlandse Windtunnel (DNW). Predictions are compared to measured noise for the proprotor operating over an extensive range of conditions. The variation of predictions demonstrates the great care that must be taken in defining the blade motion. However, even with this variability, the predictions using the different blade modeling successfully capture (bracket) the levels and trends of the noise for conditions ranging from descent to ascent.

  19. Tiltrotor Aeroacoustic Code (TRAC) Prediction Assessment and Initial Comparisons With TRAM Test Data

    NASA Technical Reports Server (NTRS)

    Burley, Casey L.; Brooks, Thomas F.; Charles, Bruce D.; McCluer, Megan

    1999-01-01

    A prediction sensitivity assessment to inputs and blade modeling is presented for the TiltRotor Aeroacoustic Code (TRAC). For this study, the non-CFD prediction system option in TRAC is used. Here, the comprehensive rotorcraft code, CAMRAD.Mod1, coupled with the high-resolution sectional loads code HIRES, predicts unsteady blade loads to be used in the noise prediction code WOPWOP. The sensitivity of the predicted blade motions, blade airloads, wake geometry, and acoustics is examined with respect to rotor rpm, blade twist and chord, and to blade dynamic modeling. To accomplish this assessment, an interim input-deck for the TRAM test model and an input-deck for a reference test model are utilized in both rigid and elastic modes. Both of these test models are regarded as near scale models of the V-22 proprotor (tiltrotor). With basic TRAC sensitivities established, initial TRAC predictions are compared to results of an extensive test of an isolated model proprotor. The test was that of the TiltRotor Aeroacoustic Model (TRAM) conducted in the Duits-Nederlandse Windtunnel (DNW). Predictions are compared to measured noise for the proprotor operating over an extensive range of conditions. The variation of predictions demonstrates the great care that must be taken in defining the blade motion. However, even with this variability, the predictions using the different blade modeling successfully capture (bracket) the levels and trends of the noise for conditions ranging from descent to ascent.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsai, C. -Y.; Douglas, D.; Li, R.

    Microbunching instability (MBI) has been one of the most challenging issues in designs of magnetic chicanes for short-wavelength free-electron lasers or linear colliders, as well as those of transport lines for recirculating or energy-recovery-linac machines. To quantify MBI for a recirculating machine and for more systematic analyses, we have recently developed a linear Vlasov solver and incorporated relevant collective effects into the code, including the longitudinal space charge, coherent synchrotron radiation, and linac geometric impedances, with extension of the existing formulation to include beam acceleration. In our code, we semianalytically solve the linearized Vlasov equation for the microbunching amplification factor for an arbitrary linear lattice. In this study we apply our code to beam line lattices of two comparative isochronous recirculation arcs and one arc lattice preceded by a linac section. The resultant microbunching gain functions and spectral responses are presented, with some results compared to particle tracking simulation by elegant (M. Borland, APS Light Source Note No. LS-287, 2002). These results demonstrate clearly the impact of arc lattice design on the microbunching development. Lastly, the underlying physics with inclusion of those collective effects is elucidated and the limitation of the existing formulation is also discussed.

  1. Control Law Design in a Computational Aeroelasticity Environment

    NASA Technical Reports Server (NTRS)

    Newsom, Jerry R.; Robertshaw, Harry H.; Kapania, Rakesh K.

    2003-01-01

    A methodology for designing active control laws in a computational aeroelasticity environment is given. The methodology involves employing a systems identification technique to develop an explicit state-space model for control law design from the output of a computational aeroelasticity code. The particular computational aeroelasticity code employed in this paper solves the transonic small disturbance aerodynamic equation using a time-accurate, finite-difference scheme. Linear structural dynamics equations are integrated simultaneously with the computational fluid dynamics equations to determine the time responses of the structure. These structural responses are employed as the input to a modern systems identification technique that determines the Markov parameters of an "equivalent linear system". The Eigensystem Realization Algorithm is then employed to develop an explicit state-space model of the equivalent linear system. The Linear Quadratic Gaussian control law design technique is employed to design a control law. The computational aeroelasticity code is modified to accept control laws and perform closed-loop simulations. Flutter control of a rectangular wing model is chosen to demonstrate the methodology. Various cases are used to illustrate the usefulness of the methodology as the nonlinearity of the aeroelastic system is increased through increased angle-of-attack changes.
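
    The realization step described above, building a state-space model from identified Markov parameters, can be illustrated with a minimal Eigensystem Realization Algorithm sketch; the Hankel dimensions, model order, and test signal below are illustrative assumptions, not values from the paper.

        import numpy as np

        def era(markov, n_states, rows=10, cols=10):
            """Eigensystem Realization Algorithm: identify (A, B, C) from Markov
            parameters h[k] = C A^(k-1) B via an SVD of the block Hankel matrix."""
            H0 = np.array([[markov[i + j + 1] for j in range(cols)] for i in range(rows)])
            H1 = np.array([[markov[i + j + 2] for j in range(cols)] for i in range(rows)])
            U, s, Vt = np.linalg.svd(H0)
            U, s, Vt = U[:, :n_states], s[:n_states], Vt[:n_states, :]
            Si, Sq = np.diag(s ** -0.5), np.diag(s ** 0.5)
            A = Si @ U.T @ H1 @ Vt.T @ Si  # identified state matrix
            B = (Sq @ Vt)[:, :1]           # input matrix (single input)
            C = (U @ Sq)[:1, :]            # output matrix (single output)
            return A, B, C

        # Impulse response of a lightly damped oscillatory mode, then verify the fit.
        pole = 0.9 * np.exp(1j * 0.3)
        h = [0.0] + [2.0 * (pole ** k).real for k in range(1, 40)]
        A, B, C = era(h, n_states=2)
        h_hat = [(C @ np.linalg.matrix_power(A, k - 1) @ B).item() for k in range(1, 10)]
        print(np.allclose(h_hat, h[1:10]))

    The SVD truncation fixes the model order, and the identified triple reproduces the impulse response used to build the Hankel matrices.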

  2. Monitoring Cosmic Radiation Risk: Comparisons between Observations and Predictive Codes for Naval Aviation

    DTIC Science & Technology

    2009-01-01

    [Fragments from the report's acronym list and body text] PARMA: PHITS-based Analytical Radiation Model in the Atmosphere; PCAIRE: Predictive Code for Aircrew Radiation Exposure; PHITS: Particle and... radiation transport code utilized is called PARMA (PHITS-based Analytical Radiation Model in the Atmosphere) [36]. The particle fluxes calculated from the... same dose equivalent coefficient regulations from the ICRP-60 regulations. As a result, the transport codes utilized by EXPACS (PHITS) and CARI-6

  3. Monitoring Cosmic Radiation Risk: Comparisons Between Observations and Predictive Codes for Naval Aviation

    DTIC Science & Technology

    2009-07-05

    [Fragments from the report's acronym list and body text] PARMA: PHITS-based Analytical Radiation Model in the Atmosphere; PCAIRE: Predictive Code for Aircrew Radiation Exposure; PHITS: Particle and Heavy... transport code utilized is called PARMA (PHITS-based Analytical Radiation Model in the Atmosphere) [36]. The particle fluxes calculated from the input... dose equivalent coefficient regulations from the ICRP-60 regulations. As a result, the transport codes utilized by EXPACS (PHITS) and CARI-6 (PARMA

  4. High fidelity CFD-CSD aeroelastic analysis of slender bladed horizontal-axis wind turbine

    NASA Astrophysics Data System (ADS)

    Sayed, M.; Lutz, Th.; Krämer, E.; Shayegan, Sh.; Ghantasala, A.; Wüchner, R.; Bletzinger, K.-U.

    2016-09-01

    The aeroelastic response of large multi-megawatt slender horizontal-axis wind turbine blades is investigated by means of a time-accurate CFD-CSD coupling approach. A loose coupling approach is implemented and used to perform the simulations. The block-structured CFD solver FLOWer is utilized to obtain the aerodynamic blade loads based on the time-accurate solution of the unsteady Reynolds-averaged Navier-Stokes equations. The CSD solver Carat++ is applied to acquire the blade elastic deformations based on non-linear beam elements. In this contribution, the presented coupling approach is utilized to study the aeroelastic response of the generic DTU 10MW wind turbine. Moreover, the effect of the coupled results on the wind turbine performance is discussed. The results are compared to the aeroelastic response predicted by FLOWer coupled to the MBS tool SIMPACK as well as the response predicted by SIMPACK coupled to a Blade Element Momentum code for aerodynamic predictions. A comparative study among the different modelling approaches for this coupled problem is discussed to quantify the coupling effects of the structural models on the aeroelastic response.

  5. Summary of recent NASA propeller research

    NASA Technical Reports Server (NTRS)

    Mikkelson, D. C.; Mitchell, G. A.; Bober, L. J.

    1984-01-01

    Advanced high-speed propellers offer large performance improvements for aircraft that cruise in the Mach 0.7 to 0.8 speed regime. At these speeds, studies indicate that there is a 15 to near 40 percent block fuel savings and associated operating cost benefits for advanced turboprops compared to equivalent technology turbofan powered aircraft. Recent wind tunnel results for five advanced models with eight to ten blades are compared with analytical predictions. Test results show that blade sweep was important in achieving net efficiencies near 80 percent at Mach 0.8 and reducing near-field cruise noise by about 6 dB. Lifting line and lifting surface aerodynamic analysis codes are under development and some results are compared with propeller force and probe data. Also, analytical predictions are compared with some initial laser velocimeter measurements of the flow field velocities of an eight-bladed 45° swept propeller. Experimental aeroelastic results indicate that cascade effects and blade sweep strongly affect propeller aeroelastic characteristics. Comparisons of propeller near-field noise data with linear acoustic theory indicate that the theory adequately predicts near-field noise for subsonic tip speeds but overpredicts the noise for supersonic tip speeds.

  6. Summary of recent NASA propeller research

    NASA Technical Reports Server (NTRS)

    Mikkelson, D. C.; Mitchell, G. A.; Bober, L. J.

    1985-01-01

    Advanced high-speed propellers offer large performance improvements for aircraft that cruise in the Mach 0.7 to 0.8 speed regime. At these speeds, studies indicate that there is a 15 to near 40 percent block fuel savings and associated operating cost benefits for advanced turboprops compared to equivalent technology turbofan powered aircraft. Recent wind tunnel results for five advanced models with eight to ten blades are compared with analytical predictions. Test results show that blade sweep was important in achieving net efficiencies near 80 percent at Mach 0.8 and reducing nearfield cruise noise about 6 dB. Lifting line and lifting surface aerodynamic analysis codes are under development and some results are compared with propeller force and probe data. Also, analytical predictions are compared with some initial laser velocimeter measurements of the flow field velocities of an eight-bladed 45° swept propeller. Experimental aeroelastic results indicate that cascade effects and blade sweep strongly affect propeller aeroelastic characteristics. Comparisons of propeller nearfield noise data with linear acoustic theory indicate that the theory adequately predicts nearfield noise for subsonic tip speeds, but overpredicts the noise for supersonic tip speeds.

  7. Effects of resistivity and rotation on the linear plasma response to non-axisymmetric magnetic perturbations on DIII-D

    DOE PAGES

    Haskey, Shaun R.; Lanctot, Matthew J.; Liu, Y. Q.; ...

    2015-01-05

    Parameter scans show the strong dependence of the plasma response on the poloidal structure of the applied field, highlighting the importance of being able to control this parameter using non-axisymmetric coil sets. An extensive examination of the linear single fluid plasma response to n = 3 magnetic perturbations in L-mode DIII-D lower single null plasmas is presented. The effects of plasma resistivity, toroidal rotation and applied field structure are calculated using the linear single fluid MHD code, MARS-F. Measures which separate the response into a pitch-resonant and resonant field amplification (RFA) component are used to demonstrate the extent to which resonant screening and RFA occur. The ability to control the ratio of pitch-resonant fields to RFA by varying the phasing between upper and lower resonant magnetic perturbation coil sets is shown. The predicted magnetic probe outputs and displacement at the x-point are also calculated for comparison with experiments. Additionally, modelling of the linear plasma response using experimental toroidal rotation profiles and Spitzer-like resistivity profiles is compared with results which provide experimental evidence of a direct link between the decay of the resonant screening response and the formation of a 3D boundary. Good agreement is found during the initial application of the MP; however, later in the shot a sudden drop in the poloidal magnetic probe output occurs which is not captured in the linear single fluid modelling.

  8. Numerical predictions of EML (electromagnetic launcher) system performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schnurr, N.M.; Kerrisk, J.F.; Davidson, R.F.

    1987-01-01

    The performance of an electromagnetic launcher (EML) depends on a large number of parameters, including the characteristics of the power supply, rail geometry, rail and insulator material properties, injection velocity, and projectile mass. EML system performance is frequently limited by structural or thermal effects in the launcher (railgun). A series of computer codes has been developed at the Los Alamos National Laboratory to predict EML system performance and to determine the structural and thermal constraints on barrel design. These codes include FLD, a two-dimensional electrostatic code used to calculate the high-frequency inductance gradient and surface current density distribution for the rails; TOPAZRG, a two-dimensional finite-element code that simultaneously analyzes thermal and electromagnetic diffusion in the rails; and LARGE, a code that predicts the performance of the entire EML system. The NIKE2D code, developed at the Lawrence Livermore National Laboratory, is used to perform structural analyses of the rails. These codes have been instrumental in the design of the Lethality Test System (LTS) at Los Alamos, which has an ultimate goal of accelerating a 30-g projectile to a velocity of 15 km/s. The capabilities of the individual codes and the coupling of these codes to perform a comprehensive analysis are discussed in relation to the LTS design. Numerical predictions are compared with experimental data and presented for the LTS prototype tests.
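
    The system-level predictions described in this record rest on the standard railgun force law F = (1/2)L'I². A toy Python integration under assumed values (the inductance gradient and drive current below are illustrative placeholders, not LTS parameters) shows how the 30-g, 15 km/s goal translates into rail length:

        # Toy railgun estimate from the force law F = 0.5 * Lp * I**2, a = F / m.
        Lp = 0.45e-6        # assumed rail inductance gradient, H/m
        m = 0.030           # projectile mass, kg (the 30-g LTS goal)
        I = 2.0e6           # assumed constant rail current, A
        dt, v, x = 1e-6, 0.0, 0.0
        while v < 15.0e3:   # integrate until the 15 km/s goal velocity
            a = 0.5 * Lp * I**2 / m
            v += a * dt
            x += v * dt
        print(f"{v/1e3:.1f} km/s reached after {x:.2f} m of rail")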

  9. Bayesian decision support for coding occupational injury data.

    PubMed

    Nanda, Gaurav; Grattan, Kathleen M; Chu, MyDzung T; Davis, Letitia K; Lehto, Mark R

    2016-06-01

    Studies on autocoding injury data have found that machine learning algorithms perform well for categories that occur frequently but often struggle with rare categories. Therefore, manual coding, although resource-intensive, cannot be eliminated. We propose a Bayesian decision support system to autocode a large portion of the data, filter cases for manual review, and assist human coders by presenting them the top k prediction choices and a confusion matrix of predictions from Bayesian models. We studied the prediction performance of Single-Word (SW) and Two-Word-Sequence (TW) Naïve Bayes models on a sample of data from the 2011 Survey of Occupational Injury and Illness (SOII). We used the agreement in prediction results of SW and TW models, and various prediction strength thresholds for autocoding and filtering cases for manual review. We also studied the sensitivity of the top k predictions of the SW model, TW model, and SW-TW combination, and then compared the accuracy of the manually assigned codes to SOII data with that of the proposed system. The accuracy of the proposed system, assuming well-trained coders reviewing a subset of only 26% of cases flagged for review, was estimated to be comparable (86.5%) to the accuracy of the original coding of the data set (range: 73%-86.8%). Overall, the TW model had higher sensitivity than the SW model, and the accuracy of the prediction results increased when the two models agreed, and for higher prediction strength thresholds. The sensitivity of the top five predictions was 93%. The proposed system seems promising for coding injury data as it offers comparable accuracy and less manual coding. Accurate and timely coded occupational injury data is useful for surveillance as well as prevention activities that aim to make workplaces safer. Copyright © 2016 Elsevier Ltd and National Safety Council. All rights reserved.
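
    A minimal sketch of the agreement-and-threshold idea described above, using scikit-learn Naïve Bayes models over unigrams and bigrams; the narratives, codes, and threshold are toy placeholders rather than SOII data:

        import numpy as np
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.naive_bayes import MultinomialNB

        # Toy stand-ins for injury narratives and event codes.
        texts = ["fell from ladder", "cut hand on saw", "slipped on wet floor",
                 "fell off ladder onto floor", "hand caught in saw blade"]
        codes = ["fall", "cut", "fall", "fall", "cut"]

        # Single-word (unigram) and two-word-sequence (bigram) models.
        sw_vec = CountVectorizer(ngram_range=(1, 1)).fit(texts)
        tw_vec = CountVectorizer(ngram_range=(2, 2)).fit(texts)
        sw = MultinomialNB().fit(sw_vec.transform(texts), codes)
        tw = MultinomialNB().fit(tw_vec.transform(texts), codes)

        def autocode(narrative, threshold=0.9):
            """Autocode when both models agree with high confidence, else flag."""
            p_sw = sw.predict_proba(sw_vec.transform([narrative]))[0]
            p_tw = tw.predict_proba(tw_vec.transform([narrative]))[0]
            c_sw, c_tw = sw.classes_[p_sw.argmax()], tw.classes_[p_tw.argmax()]
            if c_sw == c_tw and min(p_sw.max(), p_tw.max()) >= threshold:
                return c_sw      # autocoded
            return None          # route to a human coder

        print(autocode("fell from scaffold", threshold=0.5))

    Cases returning None would be routed to the manual-review queue along with the models' top k choices.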

  10. Hyperbolic/parabolic development for the GIM-STAR code. [flow fields in supersonic inlets

    NASA Technical Reports Server (NTRS)

    Spradley, L. W.; Stalnaker, J. F.; Ratliff, A. W.

    1980-01-01

    Flow fields in supersonic inlet configurations were computed using the elliptic GIM code on the STAR computer. Spillage flow under the lower cowl was calculated to be 33% of the incoming stream. The shock/boundary layer interaction on the upper propulsive surface was computed including separation. All shocks produced by the flow system were captured. Linearized block implicit (LBI) schemes were examined to determine their application to the GIM code. Pure explicit methods have stability limitations and fully implicit schemes are inherently inefficient; however, LBI schemes show promise as an effective compromise. A quasiparabolic version of the GIM code was developed using classical parabolized Navier-Stokes methods combined with quasi-time relaxation. This scheme is referred to as quasiparabolic although it applies equally well to hyperbolic supersonic inviscid flows. Second order windward differences are used in the marching coordinate and either explicit or linear block implicit time relaxation can be incorporated.

  11. User's manual: Subsonic/supersonic advanced panel pilot code

    NASA Technical Reports Server (NTRS)

    Moran, J.; Tinoco, E. N.; Johnson, F. T.

    1978-01-01

    Sufficient instructions for running the subsonic/supersonic advanced panel pilot code were developed. This software was developed as a vehicle for numerical experimentation and it should not be construed to represent a finished production program. The pilot code is based on a higher order panel method using linearly varying source and quadratically varying doublet distributions for computing both linearized supersonic and subsonic flow over arbitrary wings and bodies. This user's manual contains complete input and output descriptions. A brief description of the method is given as well as practical instructions for proper configuration modeling. Computed results are also included to demonstrate some of the capabilities of the pilot code. The computer program is written in FORTRAN IV for the SCOPE 3.4.4 operating system of the Ames CDC 7600 computer. The program uses overlay structure and thirteen disk files, and it requires approximately 132000 (octal) central memory words.

  12. Coherent-state constellations and polar codes for thermal Gaussian channels

    NASA Astrophysics Data System (ADS)

    Lacerda, Felipe; Renes, Joseph M.; Scholz, Volkher B.

    2017-06-01

    Optical communication channels are ultimately quantum mechanical in nature, and we must therefore look beyond classical information theory to determine their communication capacity as well as to find efficient encoding and decoding schemes of the highest rates. Thermal channels, which arise from linear coupling of the field to a thermal environment, are of particular practical relevance; their classical capacity has been recently established, but their quantum capacity remains unknown. While the capacity sets the ultimate limit on reliable communication rates, it does not promise that such rates are achievable by practical means. Here we construct efficiently encodable codes for thermal channels which achieve the classical capacity and the so-called Gaussian coherent information for transmission of classical and quantum information, respectively. Our codes are based on combining polar codes with a discretization of the channel input into a finite "constellation" of coherent states. Encoding of classical information can be done using linear optics.
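
    The polar-code ingredient of the scheme described above can be illustrated with a minimal encoder; the Arikan butterfly transform below is standard, but the block length and frozen-bit pattern are illustrative assumptions, and the mapping onto a coherent-state constellation is omitted:

        import numpy as np

        def polar_encode(u):
            """Arikan polar transform over GF(2): x = u G_N with G_N the n-fold
            Kronecker power of F = [[1, 0], [1, 1]] (bit-reversal omitted)."""
            x = np.array(u, dtype=np.uint8)
            n = len(x)
            step = 1
            while step < n:
                for i in range(0, n, 2 * step):
                    # Butterfly: (a, b) -> (a xor b, b) on adjacent half-blocks.
                    x[i:i + step] ^= x[i + step:i + 2 * step]
                step *= 2
            return x

        # Put information bits on "good" synthesized channels, freeze the rest to 0.
        N, frozen = 8, {0, 1, 2, 4}       # illustrative frozen set for N = 8
        info_bits = [1, 0, 1, 1]
        u = np.zeros(N, dtype=np.uint8)
        u[[i for i in range(N) if i not in frozen]] = info_bits
        print(polar_encode(u))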

  13. Deep Learning Methods for Improved Decoding of Linear Codes

    NASA Astrophysics Data System (ADS)

    Nachmani, Eliya; Marciano, Elad; Lugosch, Loren; Gross, Warren J.; Burshtein, David; Be'ery, Yair

    2018-02-01

    The problem of low complexity, close to optimal, channel decoding of linear codes with short to moderate block length is considered. It is shown that deep learning methods can be used to improve a standard belief propagation decoder, despite the large example space. Similar improvements are obtained for the min-sum algorithm. It is also shown that tying the parameters of the decoders across iterations, so as to form a recurrent neural network architecture, can be implemented with comparable results. The advantage is that significantly fewer parameters are required. We also introduce a recurrent neural decoder architecture based on the method of successive relaxation. Improvements over standard belief propagation are also observed on sparser Tanner graph representations of the codes. Furthermore, we demonstrate that the neural belief propagation decoder can be used to improve the performance, or alternatively reduce the computational complexity, of a close to optimal decoder of short BCH codes.
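
    As context for the decoders being improved, here is a minimal normalized min-sum decoder in Python; the single scaling factor alpha stands in for the per-edge weights that a neural decoder would learn, and the parity-check matrix and channel LLRs are toy examples:

        import numpy as np

        def min_sum_decode(H, llr, iters=10, alpha=0.8):
            """Normalized min-sum decoding on the Tanner graph of H."""
            m, n = H.shape
            msg = np.zeros((m, n))                   # check-to-variable messages
            for _ in range(iters):
                total = llr + msg.sum(axis=0)        # posterior LLR per variable
                v2c = np.where(H, total - msg, 0.0)  # variable-to-check messages
                for c in range(m):                   # check-node update (min-sum)
                    idx = np.flatnonzero(H[c])
                    mag, sgn = np.abs(v2c[c, idx]), np.sign(v2c[c, idx])
                    for k, j in enumerate(idx):
                        others = np.delete(np.arange(len(idx)), k)
                        msg[c, j] = alpha * np.prod(sgn[others]) * mag[others].min()
            return ((llr + msg.sum(axis=0)) < 0).astype(int)

        # (7,4) Hamming code parity-check matrix; decode a noisy all-zero codeword.
        H = np.array([[1, 1, 0, 1, 1, 0, 0],
                      [1, 0, 1, 1, 0, 1, 0],
                      [0, 1, 1, 1, 0, 0, 1]])
        llr = np.array([2.1, -0.4, 1.8, 2.5, 1.2, 0.9, 1.7])  # one unreliable bit
        print(min_sum_decode(H, llr))

    In the neural versions studied in this line of work, such iterations are unrolled into network layers and scaling weights like alpha are learned rather than fixed.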

  14. Large deformation image classification using generalized locality-constrained linear coding.

    PubMed

    Zhang, Pei; Wee, Chong-Yaw; Niethammer, Marc; Shen, Dinggang; Yap, Pew-Thian

    2013-01-01

    Magnetic resonance (MR) imaging has been demonstrated to be very useful for clinical diagnosis of Alzheimer's disease (AD). A common approach to using MR images for AD detection is to spatially normalize the images by non-rigid image registration, and then perform statistical analysis on the resulting deformation fields. Due to the high nonlinearity of the deformation field, recent studies suggest using the initial momentum instead, as it lies in a linear space and fully encodes the deformation field. In this paper we explore the use of initial momentum for image classification by focusing on the problem of AD detection. Experiments on the public ADNI dataset show that the initial momentum, together with a simple sparse coding technique, locality-constrained linear coding (LLC), can achieve a classification accuracy that is comparable to or even better than the state of the art. We also show that the performance of LLC can be greatly improved by introducing proper weights to the codebook.
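
    For reference, LLC has a simple closed-form approximation (k nearest codebook atoms plus a constrained least-squares fit); a minimal NumPy sketch follows, with the codebook size, descriptor dimension, and regularization chosen for illustration only:

        import numpy as np

        def llc_encode(x, codebook, k=5, lam=1e-4):
            """Locality-constrained linear coding of one descriptor x: fit a
            sum-to-one code over the k nearest codebook atoms."""
            d2 = ((codebook - x) ** 2).sum(axis=1)   # squared distances to atoms
            idx = np.argsort(d2)[:k]                 # k nearest neighbors
            B = codebook[idx] - x                    # shifted local basis
            C = B @ B.T                              # local covariance
            w = np.linalg.solve(C + lam * np.trace(C) * np.eye(k), np.ones(k))
            w /= w.sum()                             # enforce sum-to-one constraint
            code = np.zeros(len(codebook))
            code[idx] = w
            return code

        rng = np.random.default_rng(0)
        codebook = rng.standard_normal((64, 16))     # 64 atoms, 16-dim descriptors
        x = rng.standard_normal(16)
        code = llc_encode(x, codebook)
        print(code.nonzero()[0], code.sum())         # k active atoms, sums to 1

    The sum-to-one constraint makes the code shift-invariant, and restricting the fit to the k nearest atoms is what enforces locality.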

  15. On complexity of trellis structure of linear block codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1990-01-01

    The trellis structure of linear block codes (LBCs) is discussed. The state and branch complexities of a trellis diagram (TD) for an LBC are investigated. The TD with the minimum number of states is said to be minimal. The branch complexity of a minimal TD for an LBC is expressed in terms of the dimensions of specific subcodes of the given code. Then upper and lower bounds are derived on the number of states of a minimal TD for an LBC, and it is shown that a cyclic (or shortened cyclic) code is the worst in terms of the state complexity among the LBCs of the same length and dimension. Furthermore, it is shown that the structural complexity of a minimal TD for an LBC depends on the order of its bit positions. This fact suggests that an appropriate permutation of the bit positions of a code may result in an equivalent code with a much simpler minimal TD. Boolean polynomial representation of codewords of an LBC is also considered. This representation helps in the study of the trellis structure of the code. Boolean polynomial representation of a code is applied to construct its minimal TD. Particularly, the construction of minimal trellises for Reed-Muller codes and the extended and permuted binary primitive BCH codes which contain Reed-Muller codes as subcodes is emphasized. Finally, the structural complexity of minimal trellises for the extended and permuted double-error-correcting BCH codes is analyzed and presented. It is shown that these codes have relatively simple trellis structure and hence can be decoded with the Viterbi decoding algorithm.
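
    The state complexity discussed here can be read directly off a generator matrix once it is in trellis-oriented (minimal span) form; a small GF(2) sketch follows, using the (7,4) Hamming code as a worked example (the greedy span-reduction loop is one standard way to reach that form, not necessarily the paper's construction):

        import numpy as np

        def span(row):
            nz = np.flatnonzero(row)
            return nz[0], nz[-1]

        def minimal_span_form(G):
            """Greedy reduction of a binary generator matrix to trellis-oriented
            (minimal span) form: make all row starts and all row ends distinct."""
            G = G.copy()
            changed = True
            while changed:
                changed = False
                for a in range(len(G)):
                    for b in range(len(G)):
                        if a == b:
                            continue
                        sa, ea = span(G[a])
                        sb, eb = span(G[b])
                        if sa == sb or ea == eb:
                            # Replace the longer-span row by the XOR; its span shrinks.
                            keep = a if (ea - sa) >= (eb - sb) else b
                            G[keep] = G[a] ^ G[b]
                            changed = True
            return G

        def state_profile(G):
            """log2 of the state-space size at each trellis depth boundary."""
            n = G.shape[1]
            spans = [span(r) for r in G]
            return [sum(1 for s, e in spans if s < i <= e) for i in range(n + 1)]

        # (7,4) Hamming code generator matrix (cyclic form).
        G = np.array([[1, 1, 0, 1, 0, 0, 0],
                      [0, 1, 1, 0, 1, 0, 0],
                      [0, 0, 1, 1, 0, 1, 0],
                      [0, 0, 0, 1, 1, 0, 1]], dtype=np.uint8)
        print(state_profile(minimal_span_form(G)))  # -> [0, 1, 2, 3, 3, 2, 1, 0]

    For this cyclic code the profile peaks at 3, i.e., a maximum of 2^3 = 8 trellis states.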

  16. Weighted bi-prediction for light field image coding

    NASA Astrophysics Data System (ADS)

    Conti, Caroline; Nunes, Paulo; Ducla Soares, Luís.

    2017-09-01

    Light field imaging based on a single-tier camera equipped with a microlens array - also known as integral, holoscopic, and plenoptic imaging - has recently risen as a practical and prospective approach for future visual applications and services. However, successfully deploying actual light field imaging applications and services will require developing adequate coding solutions to efficiently handle the massive amount of data involved in these systems. In this context, self-similarity compensated prediction is a non-local spatial prediction scheme based on block matching that has been shown to achieve high efficiency for light field image coding based on the High Efficiency Video Coding (HEVC) standard. As previously shown by the authors, this is possible by simply averaging two predictor blocks that are jointly estimated from a causal search window in the current frame itself, referred to as self-similarity bi-prediction. However, theoretical analyses of motion-compensated bi-prediction have suggested that it is still possible to achieve further rate-distortion performance improvements by adaptively estimating the weighting coefficients of the two predictor blocks. Therefore, this paper presents a comprehensive study of the rate-distortion performance for HEVC-based light field image coding when using different sets of weighting coefficients for self-similarity bi-prediction. Experimental results demonstrate that it is possible to extend the previous theoretical conclusions to light field image coding and show that the proposed adaptive weighting coefficient selection leads to up to 5% bit savings compared to the previous self-similarity bi-prediction scheme.
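
    At its core, weighted bi-prediction amounts to picking, per block, the weight pair that minimizes the residual; a toy NumPy sketch follows (the weight set, block size, and data are illustrative, and a real encoder would signal the chosen index and use a rate-distortion cost rather than raw SSD):

        import numpy as np

        def best_biprediction(block, p0, p1,
                              weights=((0.5, 0.5), (0.25, 0.75), (0.75, 0.25))):
            """Pick the weighting of two predictor blocks that minimizes SSD."""
            best = None
            for w0, w1 in weights:
                pred = w0 * p0 + w1 * p1
                ssd = float(((block - pred) ** 2).sum())
                if best is None or ssd < best[0]:
                    best = (ssd, (w0, w1), pred)
            return best[1], best[2]

        rng = np.random.default_rng(1)
        p0, p1 = rng.random((8, 8)), rng.random((8, 8))
        block = 0.7 * p0 + 0.3 * p1 + 0.01 * rng.random((8, 8))  # toy current block
        (w0, w1), pred = best_biprediction(block, p0, p1)
        print(w0, w1)  # expect the (0.75, 0.25) candidate to win here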

  17. ADAPTION OF NONSTANDARD PIPING COMPONENTS INTO PRESENT DAY SEISMIC CODES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D. T. Clark; M. J. Russell; R. E. Spears

    2009-07-01

    With spiraling energy demand and flat energy supply, there is a need to extend the life of older nuclear reactors. This sometimes requires that existing systems be evaluated to present day seismic codes. Older reactors built in the 1960s and early 1970s often used fabricated piping components that were code compliant during their initial construction time period, but are outside the standard parameters of present-day piping codes. There are several approaches available to the analyst in evaluating these non-standard components to modern codes. The simplest approach is to use the flexibility factors and stress indices for similar standard components with the assumption that the non-standard component’s flexibility factors and stress indices will be very similar. This approach can require significant engineering judgment. A more rational approach available in Section III of the ASME Boiler and Pressure Vessel Code, which is the subject of this paper, involves calculation of flexibility factors using finite element analysis of the non-standard component. Such analysis allows modeling of geometric and material nonlinearities. Flexibility factors based on these analyses are sensitive to the load magnitudes used in their calculation, load magnitudes that need to be consistent with those produced by the linear system analyses where the flexibility factors are applied. This can lead to iteration, since the magnitude of the loads produced by the linear system analysis depend on the magnitude of the flexibility factors. After the loading applied to the nonstandard component finite element model has been matched to loads produced by the associated linear system model, the component finite element model can then be used to evaluate the performance of the component under the loads with the nonlinear analysis provisions of the Code, should the load levels lead to calculated stresses in excess of Allowable stresses. This paper details the application of component-level finite element modeling to account for geometric and material nonlinear component behavior in a linear elastic piping system model. Note that this technique can be applied to the analysis of B31 piping systems.

  18. Reactor Dosimetry Applications Using RAPTOR-M3G:. a New Parallel 3-D Radiation Transport Code

    NASA Astrophysics Data System (ADS)

    Longoni, Gianluca; Anderson, Stanwood L.

    2009-08-01

    The numerical solution of the Linearized Boltzmann Equation (LBE) via the Discrete Ordinates method (SN) requires extensive computational resources for large 3-D neutron and gamma transport applications due to the concurrent discretization of the angular, spatial, and energy domains. This paper will discuss the development of RAPTOR-M3G (RApid Parallel Transport Of Radiation - Multiple 3D Geometries), a new 3-D parallel radiation transport code, and its application to the calculation of ex-vessel neutron dosimetry responses in the cavity of a commercial 2-loop Pressurized Water Reactor (PWR). RAPTOR-M3G is based on domain decomposition algorithms, where the spatial and angular domains are allocated and processed on multi-processor computer architectures. As compared to traditional single-processor applications, this approach reduces the computational load as well as the memory requirement per processor, yielding an efficient solution methodology for large 3-D problems. Measured neutron dosimetry responses in the reactor cavity air gap will be compared to the RAPTOR-M3G predictions. This paper is organized as follows: Section 1 discusses the RAPTOR-M3G methodology; Section 2 describes the 2-loop PWR model and the numerical results obtained. Section 3 addresses the parallel performance of the code, and Section 4 concludes this paper with final remarks and future work.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simunovic, Srdjan; Piro, Markus H.A.

    Thermochimica is a software library that determines a unique combination of phases and their compositions at thermochemical equilibrium. Thermochimica can be used for stand-alone calculations or it can be directly coupled to other codes. This release of the software does not have a graphical user interface (GUI) and it can be executed from the command line or from an Application Programming Interface (API). Also, it is not intended for thermodynamic model development or for constructing phase diagrams. The main purpose of the software is to be directly coupled with a multi-physics code to provide material properties and boundary conditions for various physical phenomena. Significant research efforts have been dedicated to enhance computational performance through advanced algorithm development, such as improved estimation techniques and non-linear solvers. Various useful parameters can be provided as output from Thermochimica, such as: determination of which phases are stable at equilibrium, the mass of solution species and phases at equilibrium, mole fractions of solution phase constituents, thermochemical activities (which are related to partial pressures for gaseous species), chemical potentials of solution species and phases, and integral Gibbs energy (referenced relative to standard state). The overall goal is to provide an open source computational tool to enhance the predictive capability of multi-physics codes without significantly impeding computational performance.

  20. Effects of target fragmentation on evaluation of LET spectra from space radiations: implications for space radiation protection studies

    NASA Technical Reports Server (NTRS)

    Cucinotta, F. A.; Wilson, J. W.; Shinn, J. L.; Badavi, F. F.; Badhwar, G. D.

    1996-01-01

    We present calculations of linear energy transfer (LET) spectra in low earth orbit from galactic cosmic rays and trapped protons using the HZETRN/BRYNTRN computer code. The emphasis of our calculations is on the analysis of the effects of secondary nuclei produced through target fragmentation in the spacecraft shield or detectors. Recent improvements in the HZETRN/BRYNTRN radiation transport computer code are described. Calculations show that at large values of LET (> 100 keV/micrometer) the LET spectra seen in free space and low earth orbit (LEO) are dominated by target fragments and not the primary nuclei. Although the evaluation of microdosimetric spectra is not considered here, calculations of LET spectra support that the large lineal energy (y) events are dominated by the target fragments. Finally, we discuss the situation for interplanetary exposures to galactic cosmic rays and show that current radiation transport codes predict that in the region of high LET values the LET spectra at significant shield depths (> 10 g/cm2 of Al) are greatly modified by target fragments. These results suggest that studies of track structure and biological response of space radiation should place emphasis on short tracks of medium charge fragments produced in the human body by high energy protons and neutrons.

  1. TFaNS Tone Fan Noise Design/Prediction System. Volume 2; User's Manual; 1.4

    NASA Technical Reports Server (NTRS)

    Topol, David A.; Eversman, Walter

    1999-01-01

    TFaNS is the Tone Fan Noise Design/Prediction System developed by Pratt & Whitney under contract to NASA Lewis (presently NASA Glenn). The purpose of this system is to predict tone noise emanating from a fan stage including the effects of reflection and transmission by the rotor and stator and by the duct inlet and nozzle. These effects have been added to an existing annular duct/isolated stator noise prediction capability. TFaNS consists of: the codes that compute the acoustic properties (reflection and transmission coefficients) of the various elements and write them to files. CUP3D: Fan Noise Coupling Code that reads these files, solves the coupling problem, and outputs the desired noise predictions. AWAKEN: CFD/Measured Wake Postprocessor which reformats CFD wake predictions and/or measured wake data so it can be used by the system. This volume of the report provides information on code input and file structure essential for potential users of TFaNS. This report is divided into three volumes: Volume 1. System Description, CUP3D Technical Documentation, and Manual for Code Developers; Volume 2. User's Manual, TFaNS Vers. 1.4; Volume 3. Evaluation of System Codes.

  2. TFaNS Tone Fan Noise Design/Prediction System. Volume 3; Evaluation of System Codes

    NASA Technical Reports Server (NTRS)

    Topol, David A.

    1999-01-01

    TFaNS is the Tone Fan Noise Design/Prediction System developed by Pratt & Whitney under contract to NASA Lewis (presently NASA Glenn). The purpose of this system is to predict tone noise emanating from a fan stage including the effects of reflection and transmission by the rotor and stator and by the duct inlet and nozzle. These effects have been added to an existing annular duct/isolated stator noise prediction capability. TFaNS consists of: The codes that compute the acoustic properties (reflection and transmission coefficients) of the various elements and write them to files. Cup3D: Fan Noise Coupling Code that reads these files, solves the coupling problem, and outputs the desired noise predictions. AWAKEN: CFD/Measured Wake Postprocessor which reformats CFD wake predictions and/or measured wake data so it can be used by the system. This volume of the report evaluates TFaNS versus full-scale and ADP 22" fan rig data using the semi-empirical wake modelling in the system. This report is divided into three volumes: Volume I: System Description, CUP3D Technical Documentation, and Manual for Code Developers; Volume II: User's Manual, TFaNS Version 1.4; Volume III: Evaluation of System Codes.

  3. Role of N-Methyl-D-Aspartate Receptors in Action-Based Predictive Coding Deficits in Schizophrenia.

    PubMed

    Kort, Naomi S; Ford, Judith M; Roach, Brian J; Gunduz-Bruce, Handan; Krystal, John H; Jaeger, Judith; Reinhart, Robert M G; Mathalon, Daniel H

    2017-03-15

    Recent theoretical models of schizophrenia posit that dysfunction of the neural mechanisms subserving predictive coding contributes to symptoms and cognitive deficits, and this dysfunction is further posited to result from N-methyl-D-aspartate glutamate receptor (NMDAR) hypofunction. Previously, by examining auditory cortical responses to self-generated speech sounds, we demonstrated that predictive coding during vocalization is disrupted in schizophrenia. To test the hypothesized contribution of NMDAR hypofunction to this disruption, we examined the effects of the NMDAR antagonist, ketamine, on predictive coding during vocalization in healthy volunteers and compared them with the effects of schizophrenia. In two separate studies, the N1 component of the event-related potential elicited by speech sounds during vocalization (talk) and passive playback (listen) were compared to assess the degree of N1 suppression during vocalization, a putative measure of auditory predictive coding. In the crossover study, 31 healthy volunteers completed two randomly ordered test days, a saline day and a ketamine day. Event-related potentials during the talk/listen task were obtained before infusion and during infusion on both days, and N1 amplitudes were compared across days. In the case-control study, N1 amplitudes from 34 schizophrenia patients and 33 healthy control volunteers were compared. N1 suppression to self-produced vocalizations was significantly and similarly diminished by ketamine (Cohen's d = 1.14) and schizophrenia (Cohen's d = .85). Disruption of NMDARs causes dysfunction in predictive coding during vocalization in a manner similar to the dysfunction observed in schizophrenia patients, consistent with the theorized contribution of NMDAR hypofunction to predictive coding deficits in schizophrenia. Copyright © 2016 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.

  4. PNNL Technical Support to The Implementation of EMTA and EMTA-NLA Models in Autodesk® Moldflow® Packages

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, Ba Nghiep; Wang, Jin

    2012-12-01

    Under the Predictive Engineering effort, PNNL developed linear and nonlinear property prediction models for long-fiber thermoplastics (LFTs). These models were implemented in PNNL’s EMTA and EMTA-NLA codes. While EMTA is standalone software for the computation of the composites' thermoelastic properties, EMTA-NLA presents a series of nonlinear models implemented in ABAQUS® via user subroutines for structural analyses. In all these models, it is assumed that the fibers are linear elastic while the matrix material can exhibit a linear or typical nonlinear behavior depending on the loading prescribed to the composite. The key idea is to model the constitutive behavior of the matrix material and then to use an Eshelby-Mori-Tanaka approach (EMTA) combined with numerical techniques for fiber length and orientation distributions to determine the behavior of the as-formed composite. The basic property prediction models of EMTA and EMTA-NLA have been subject to implementation in the Autodesk® Moldflow® software packages. These models are the elastic stiffness model accounting for fiber length and orientation distributions, the fiber/matrix interface debonding model, and the elastic-plastic models. The PNNL elastic-plastic models for LFTs describe the composite nonlinear stress-strain response up to failure by an elastic-plastic formulation associated with either a micromechanical criterion to predict failure or a continuum damage mechanics formulation coupling damage to plasticity. All the models account for fiber length and orientation distributions as well as fiber/matrix debonding that can occur at any stage of loading. In an effort to transfer the technologies developed under the Predictive Engineering project to the American automotive and plastics industries, PNNL has obtained the approval of the DOE Office of Vehicle Technologies to provide Autodesk, Inc. with the technical support for the implementation of the basic property prediction models of EMTA and EMTA-NLA in the Autodesk® Moldflow® packages. This report summarizes the recent results from Autodesk Simulation Moldflow Insight (ASMI) analyses using the EMTA models and EMTA-NLA/ABAQUS® analyses for further assessment of the EMTA-NLA models to support their implementation in Autodesk Moldflow Structural Alliance (AMSA). PNNL’s technical support to Autodesk, Inc. included (i) providing the theoretical property prediction models as described in published journal articles and reports, (ii) providing explanations of these models and computational procedure, (iii) providing the necessary LFT data for process simulations and property predictions, and (iv) performing ABAQUS/EMTA-NLA analyses to further assess and illustrate the models for selected LFT materials.

  5. TFaNS Tone Fan Noise Design/Prediction System. Volume 1; System Description, CUP3D Technical Documentation and Manual for Code Developers

    NASA Technical Reports Server (NTRS)

    Topol, David A.

    1999-01-01

    TFaNS is the Tone Fan Noise Design/Prediction System developed by Pratt & Whitney under contract to NASA Lewis (presently NASA Glenn). The purpose of this system is to predict tone noise emanating from a fan stage including the effects of reflection and transmission by the rotor and stator and by the duct inlet and nozzle. These effects have been added to an existing annular duct/isolated stator noise prediction capability. TFaNS consists of: The codes that compute the acoustic properties (reflection and transmission coefficients) of the various elements and write them to files. Cup3D: Fan Noise Coupling Code that reads these files, solves the coupling problem, and outputs the desired noise predictions. AWAKEN: CFD/Measured Wake Postprocessor which reformats CFD wake predictions and/or measured wake data so it can be used by the system. This volume of the report provides technical background for TFaNS including the organization of the system and CUP3D technical documentation. This document also provides information for code developers who must write Acoustic Property Files in the CUP3D format. This report is divided into three volumes: Volume I: System Description, CUP3D Technical Documentation, and Manual for Code Developers; Volume II: User's Manual, TFaNS Vers. 1.4; Volume III: Evaluation of System Codes.

  6. Structured Set Intra Prediction With Discriminative Learning in a Max-Margin Markov Network for High Efficiency Video Coding

    PubMed Central

    Dai, Wenrui; Xiong, Hongkai; Jiang, Xiaoqian; Chen, Chang Wen

    2014-01-01

    This paper proposes a novel model on intra coding for High Efficiency Video Coding (HEVC), which simultaneously predicts blocks of pixels with optimal rate distortion. It utilizes the spatial statistical correlation for the optimal prediction based on 2-D contexts, in addition to formulating the data-driven structural interdependences to make the prediction error coherent with the probability distribution, which is desirable for successful transform and coding. The structured set prediction model incorporates a max-margin Markov network (M3N) to regulate and optimize multiple block predictions. The model parameters are learned by discriminating the actual pixel value from other possible estimates to maximize the margin (i.e., decision boundary bandwidth). Compared to existing methods that focus on minimizing prediction error, the M3N-based model adaptively maintains the coherence for a set of predictions. Specifically, the proposed model concurrently optimizes a set of predictions by associating the loss for individual blocks to the joint distribution of succeeding discrete cosine transform coefficients. When the sample size grows, the prediction error is asymptotically upper bounded by the training error under the decomposable loss function. As an internal step, we optimize the underlying Markov network structure to find states that achieve the maximal energy using expectation propagation. For validation, we integrate the proposed model into HEVC for optimal mode selection on rate-distortion optimization. The proposed prediction model obtains up to 2.85% bit rate reduction and achieves better visual quality in comparison to the HEVC intra coding. PMID:25505829

  7. Genomic prediction based on data from three layer lines using non-linear regression models.

    PubMed

    Huang, Heyun; Windig, Jack J; Vereijken, Addie; Calus, Mario P L

    2014-11-06

    Most studies on genomic prediction with reference populations that include multiple lines or breeds have used linear models. Data heterogeneity due to using multiple populations may conflict with model assumptions used in linear regression methods. In an attempt to alleviate potential discrepancies between assumptions of linear models and multi-population data, two types of alternative models were used: (1) a multi-trait genomic best linear unbiased prediction (GBLUP) model that modelled trait by line combinations as separate but correlated traits and (2) non-linear models based on kernel learning. These models were compared to conventional linear models for genomic prediction for two lines of brown layer hens (B1 and B2) and one line of white hens (W1). The three lines each had 1004 to 1023 training and 238 to 240 validation animals. Prediction accuracy was evaluated by estimating the correlation between observed phenotypes and predicted breeding values. When the training dataset included only data from the evaluated line, non-linear models yielded at best a similar accuracy as linear models. In some cases, when adding a distantly related line, the linear models showed a slight decrease in performance, while non-linear models generally showed no change in accuracy. When only information from a closely related line was used for training, linear models and non-linear radial basis function (RBF) kernel models performed similarly. The multi-trait GBLUP model took advantage of the estimated genetic correlations between the lines. Combining linear and non-linear models improved the accuracy of multi-line genomic prediction. Linear models and non-linear RBF models performed very similarly for genomic prediction, despite the expectation that non-linear models could deal better with the heterogeneous multi-population data. This heterogeneity of the data can be overcome by modelling trait by line combinations as separate but correlated traits, which avoids the occasional occurrence of large negative accuracies when the evaluated line was not included in the training dataset. Furthermore, when using a multi-line training dataset, non-linear models provided information on the genotype data that was complementary to the linear models, which indicates that the underlying data distributions of the three studied lines were indeed heterogeneous.
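
    A minimal contrast between the linear and kernel-based approaches compared above can be sketched with scikit-learn; ridge regression stands in for GBLUP (the two are equivalent under standard assumptions) and kernel ridge with an RBF kernel stands in for the non-linear models, with all genotype data synthetic:

        import numpy as np
        from sklearn.kernel_ridge import KernelRidge
        from sklearn.linear_model import Ridge

        rng = np.random.default_rng(2)
        # Toy SNP matrix (0/1/2 genotype calls) and simulated phenotypes.
        X = rng.integers(0, 3, size=(400, 200)).astype(float)
        beta = rng.normal(0, 0.1, 200)
        y = X @ beta + rng.normal(0, 1.0, 400)
        Xtr, Xte, ytr, yte = X[:300], X[300:], y[:300], y[300:]

        # Linear (GBLUP-like ridge) vs. non-linear RBF kernel model.
        lin = Ridge(alpha=100.0).fit(Xtr, ytr)
        rbf = KernelRidge(kernel="rbf", alpha=1.0, gamma=1e-3).fit(Xtr, ytr)
        for name, model in [("linear", lin), ("rbf", rbf)]:
            # "Accuracy" as in the paper: correlation of observed and predicted.
            acc = np.corrcoef(yte, model.predict(Xte))[0, 1]
            print(name, round(acc, 3))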

  8. Predictions of Speech Chimaera Intelligibility Using Auditory Nerve Mean-Rate and Spike-Timing Neural Cues.

    PubMed

    Wirtzfeld, Michael R; Ibrahim, Rasha A; Bruce, Ian C

    2017-10-01

    Perceptual studies of speech intelligibility have shown that slow variations of acoustic envelope (ENV) in a small set of frequency bands provides adequate information for good perceptual performance in quiet, whereas acoustic temporal fine-structure (TFS) cues play a supporting role in background noise. However, the implications for neural coding are prone to misinterpretation because the mean-rate neural representation can contain recovered ENV cues from cochlear filtering of TFS. We investigated ENV recovery and spike-time TFS coding using objective measures of simulated mean-rate and spike-timing neural representations of chimaeric speech, in which either the ENV or the TFS is replaced by another signal. We (a) evaluated the levels of mean-rate and spike-timing neural information for two categories of chimaeric speech, one retaining ENV cues and the other TFS; (b) examined the level of recovered ENV from cochlear filtering of TFS speech; (c) examined and quantified the contribution to recovered ENV from spike-timing cues using a lateral inhibition network (LIN); and (d) constructed linear regression models with objective measures of mean-rate and spike-timing neural cues and subjective phoneme perception scores from normal-hearing listeners. The mean-rate neural cues from the original ENV and recovered ENV partially accounted for perceptual score variability, with additional variability explained by the recovered ENV from the LIN-processed TFS speech. The best model predictions of chimaeric speech intelligibility were found when both the mean-rate and spike-timing neural cues were included, providing further evidence that spike-time coding of TFS cues is important for intelligibility when the speech envelope is degraded.

  9. A first principles study of the electronic structure, elastic and thermal properties of UB2

    NASA Astrophysics Data System (ADS)

    Jossou, Ericmoore; Malakkal, Linu; Szpunar, Barbara; Oladimeji, Dotun; Szpunar, Jerzy A.

    2017-07-01

    Uranium diboride (UB2) has been widely deployed for refractory use and is a proposed material for Accident Tolerant Fuel (ATF) due to its high thermal conductivity. However, the applicability of UB2 to high-temperature use in a nuclear reactor requires investigation of its thermomechanical properties, which recent studies have not adequately characterized. In this work, we present an in-depth theoretical outlook on the structural and thermophysical properties of UB2, including but not limited to elastic, electronic and thermal transport properties. These calculations were performed within the framework of the Density Functional Theory (DFT) + U approach, using the Quantum ESPRESSO (QE) code and considering the addition of Coulomb correlations on the uranium atom. The phonon spectra and elastic constant analysis show the dynamic and mechanical stability of the UB2 structure, respectively. The electronic structure of UB2 was investigated using the full potential linear augmented plane waves plus local orbitals method (FP-LAPW+lo) as implemented in the WIEN2k code. The absence of a band gap in the total and partial density of states confirms the metallic nature, while the valence electron density plot reveals the presence of a covalent bond between adjacent B-B atoms. We predicted the lattice thermal conductivity (kL) by solving the Boltzmann Transport Equation (BTE) using ShengBTE. The second-order harmonic and third-order anharmonic interatomic force constants required as input to ShengBTE were calculated using density-functional perturbation theory (DFPT). However, we predicted the electronic thermal conductivity (kel) using the Wiedemann-Franz law as implemented in the BoltzTraP code. We also show that the sound velocity along the 'a' and 'c' axes exhibits high anisotropy, which accounts for the anisotropic thermal conductivity of UB2.
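
    The Wiedemann-Franz step mentioned above is a one-line estimate, k_el = L * sigma * T; a tiny sketch with the Sommerfeld Lorenz number follows (the conductivity value is an assumed placeholder, not a UB2 result):

        # Electronic thermal conductivity via the Wiedemann-Franz law: k_el = L * sigma * T.
        L = 2.44e-8                      # Sommerfeld Lorenz number, W*Ohm/K^2
        sigma = 1.0e6                    # assumed electrical conductivity, S/m
        for T in (300, 600, 900, 1200):  # temperature, K
            print(f"T = {T:4d} K:  k_el = {L * sigma * T:5.1f} W/(m*K)")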

  10. Prediction of plasma-facing ICRH antenna behavior via a Finite-Element solution of coupled Integral Equations

    NASA Astrophysics Data System (ADS)

    Lancellotti, V.; Milanesio, D.; Maggiora, R.; Vecchi, G.; Kyrytsya, V.

    2005-09-01

    The demand for a predictive tool to help designing ICRH antennas for fusion experiments has driven the development of codes like ICANT, RANT3D, and the early developments and further upgrades of TOPICA code. Currently, TOPICA handles the actual geometry of ICRH antennas (with their housing, etc.) as well as a realistic plasma model, including density and temperature profiles and FLR effects. Both goals have been attained by formally splitting the problem into two parts: the vacuum region around the antenna, and the plasma region inside the toroidal chamber. Field continuity and boundary conditions allow writing a set of coupled integral equations for the unknown equivalent (current) sources; finite elements are used on a triangular-cell mesh and a linear system is obtained on application of the weighted-residual solution scheme. In the vacuum region calculations are done in the spatial domain, whereas in the plasma region a spectral (wavenumber) representation of fields and currents is adopted, thus allowing a description of the plasma by a surface impedance matrix. Thanks to this approach, any plasma model can be used in principle, and at present Brambilla's FELICE code has been employed. The natural outputs of TOPICA are the induced currents on the conductors and the electric field in front of the plasma, whence the antenna circuit parameters (impedance/scattering matrices), the radiated power and the fields (at locations other than the chamber aperture) are then obtained. An accurate model of the feeding coaxial lines is also included. This paper is precisely devoted to the description of TOPICA, whereas examples of results for real-life antennas are reported in a companion paper [1] in this proceedings.

  11. Prediction of plasma-facing ICRH antenna behavior via a Finite-Element solution of coupled Integral Equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lancellotti, V.; Milanesio, D.; Maggiora, R.

    2005-09-26

    The demand for a predictive tool to help designing ICRH antennas for fusion experiments has driven the development of codes like ICANT, RANT3D, and the early developments and further upgrades of TOPICA code. Currently, TOPICA handles the actual geometry of ICRH antennas (with their housing, etc.) as well as a realistic plasma model, including density and temperature profiles and FLR effects. Both goals have been attained by formally splitting the problem into two parts: the vacuum region around the antenna, and the plasma region inside the toroidal chamber. Field continuity and boundary conditions allow writing a set of coupled integral equations for the unknown equivalent (current) sources; finite elements are used on a triangular-cell mesh and a linear system is obtained on application of the weighted-residual solution scheme. In the vacuum region calculations are done in the spatial domain, whereas in the plasma region a spectral (wavenumber) representation of fields and currents is adopted, thus allowing a description of the plasma by a surface impedance matrix. Thanks to this approach, any plasma model can be used in principle, and at present Brambilla's FELICE code has been employed. The natural outputs of TOPICA are the induced currents on the conductors and the electric field in front of the plasma, whence the antenna circuit parameters (impedance/scattering matrices), the radiated power and the fields (at locations other than the chamber aperture) are then obtained. An accurate model of the feeding coaxial lines is also included. This paper is precisely devoted to the description of TOPICA, whereas examples of results for real-life antennas are reported in a companion paper in this proceedings.

  12. User's manual for the ALS base heating prediction code, volume 2

    NASA Technical Reports Server (NTRS)

    Reardon, John E.; Fulton, Michael S.

    1992-01-01

    The Advanced Launch System (ALS) Base Heating Prediction Code is based on a generalization of first principles in the prediction of plume induced base convective heating and plume radiation. It should be considered to be an approximate method for evaluating trends as a function of configuration variables because the processes being modeled are too complex to allow an accurate generalization. The convective methodology is based upon generalizing trends from four nozzle configurations, so an extension to use the code with strap-on boosters, multiple nozzle sizes, and variations in the propellants and chamber pressure histories cannot be precisely treated. The plume radiation is more amenable to precise computer prediction, but simplified assumptions are required to model the various aspects of the candidate configurations. Perhaps the most difficult area to characterize is the variation of radiation with altitude. The theory in the radiation predictions is described in more detail. This report is intended to familiarize a user with the interface operation and options, to summarize the limitations and restrictions of the code, and to provide information to assist in installing the code.

  13. Content Coding of Psychotherapy Transcripts Using Labeled Topic Models.

    PubMed

    Gaut, Garren; Steyvers, Mark; Imel, Zac E; Atkins, David C; Smyth, Padhraic

    2017-03-01

    Psychotherapy represents a broad class of medical interventions received by millions of patients each year. Unlike most medical treatments, its primary mechanisms are linguistic; i.e., the treatment relies directly on a conversation between a patient and provider. However, the evaluation of patient-provider conversation suffers from critical shortcomings, including intensive labor requirements, coder error, nonstandardized coding systems, and inability to scale up to larger data sets. To overcome these shortcomings, psychotherapy analysis needs a reliable and scalable method for summarizing the content of treatment encounters. We used a publicly available psychotherapy corpus from Alexander Street press comprising a large collection of transcripts of patient-provider conversations to compare coding performance for two machine learning methods. We used the labeled latent Dirichlet allocation (L-LDA) model to learn associations between text and codes, to predict codes in psychotherapy sessions, and to localize specific passages of within-session text representative of a session code. We compared the L-LDA model to a baseline lasso regression model using predictive accuracy and model generalizability (measured by calculating the area under the curve (AUC) from the receiver operating characteristic curve). The L-LDA model outperforms the lasso logistic regression model at predicting session-level codes with average AUC scores of 0.79 and 0.70, respectively. For fine-grained level coding, L-LDA and logistic regression are able to identify specific talk-turns representative of symptom codes. However, model performance for talk-turn identification is not yet as reliable as human coders. We conclude that the L-LDA model has the potential to be an objective, scalable method for accurate automated coding of psychotherapy sessions that performs better than comparable discriminative methods at session-level coding and can also predict fine-grained codes.
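
    The lasso logistic regression baseline used for comparison is straightforward to sketch with scikit-learn; the transcripts, labels, and in-sample AUC below are toy placeholders rather than the Alexander Street corpus:

        import numpy as np
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        # Toy stand-ins for session transcripts and a binary session-level code.
        sessions = ["we talked about drinking and urges", "mood was low all week",
                    "urges to drink after work", "sleep and mood problems",
                    "relapse prevention and drinking triggers", "feeling sad and tired"]
        labels = [1, 0, 1, 0, 1, 0]      # 1 = alcohol-related code present

        X = CountVectorizer().fit_transform(sessions)
        model = LogisticRegression(penalty="l1", solver="liblinear", C=1.0).fit(X, labels)
        auc = roc_auc_score(labels, model.predict_proba(X)[:, 1])  # in-sample AUC
        print(round(auc, 2))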

  14. Content Coding of Psychotherapy Transcripts Using Labeled Topic Models

    PubMed Central

    Gaut, Garren; Steyvers, Mark; Imel, Zac E; Atkins, David C; Smyth, Padhraic

    2016-01-01

    Psychotherapy represents a broad class of medical interventions received by millions of patients each year. Unlike most medical treatments, its primary mechanisms are linguistic; i.e., the treatment relies directly on a conversation between a patient and provider. However, the evaluation of patient-provider conversation suffers from critical shortcomings, including intensive labor requirements, coder error, non-standardized coding systems, and inability to scale up to larger data sets. To overcome these shortcomings, psychotherapy analysis needs a reliable and scalable method for summarizing the content of treatment encounters. We used a publicly available psychotherapy corpus from Alexander Street press comprising a large collection of transcripts of patient-provider conversations to compare coding performance for two machine learning methods. We used the Labeled Latent Dirichlet Allocation (L-LDA) model to learn associations between text and codes, to predict codes in psychotherapy sessions, and to localize specific passages of within-session text representative of a session code. We compared the L-LDA model to a baseline lasso regression model using predictive accuracy and model generalizability (measured by calculating the area under the curve (AUC) from the receiver operating characteristic (ROC) curve). The L-LDA model outperforms the lasso logistic regression model at predicting session-level codes, with average AUC scores of .79 and .70, respectively. For fine-grained level coding, L-LDA and logistic regression are able to identify specific talk-turns representative of symptom codes. However, model performance for talk-turn identification is not yet as reliable as human coders. We conclude that the L-LDA model has the potential to be an objective, scalable method for accurate automated coding of psychotherapy sessions that performs better than comparable discriminative methods at session-level coding and can also predict fine-grained codes. PMID:26625437

  15. Assessing the role of the Kelvin-Helmholtz instability at the QCD cosmological transition

    NASA Astrophysics Data System (ADS)

    Mourão Roque, V. R. C.; Lugones, G.

    2018-03-01

    We performed numerical simulations with the PLUTO code in order to analyze the non-linear behavior of the Kelvin-Helmholtz instability in non-magnetized relativistic fluids. The relevance of the instability at the cosmological QCD phase transition was explored using an equation of state based on lattice QCD results with the addition of leptons. The results of the simulations were compared with the predictions of the linearized theory. For small Mach numbers, up to Ms ~ 0.1, we find that both results are in good agreement. However, for higher Mach numbers, non-linear effects are significant. In particular, many initial conditions that look stable according to the linear analysis are shown to be unstable according to the full calculation. Since according to lattice calculations the cosmological QCD transition is a smooth crossover, violent fluid motions are not expected. Thus, in order to assess the role of the Kelvin-Helmholtz instability at the QCD epoch, we focus on simulations with low shear velocity and use monochromatic as well as random perturbations to trigger the instability. We find that the Kelvin-Helmholtz instability can strongly amplify turbulence in the primordial plasma and as a consequence it may increase the amount of primordial gravitational radiation. Such turbulence may be relevant for the evolution of the Universe at later stages and may have an impact on the stochastic gravitational wave background.
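
    For orientation, in the classical limit (non-relativistic, incompressible, unmagnetized) the linearized theory referred to above reduces to the textbook growth rate for a tangential velocity jump ΔU between fluids of densities ρ1 and ρ2; this simplified form is quoted only to fix ideas, since the paper works with the relativistic generalization:

      % Classical Kelvin-Helmholtz growth rate (illustrative limit only)
      \[
        \sigma \;=\; k \,\frac{\sqrt{\rho_1 \rho_2}}{\rho_1 + \rho_2}\,\lvert\Delta U\rvert
      \]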

  16. Nonlinear study of the parallel velocity/tearing instability using an implicit, nonlinear resistive MHD solver

    NASA Astrophysics Data System (ADS)

    Chacon, L.; Finn, J. M.; Knoll, D. A.

    2000-10-01

    Recently, a new parallel velocity instability has been found [J. M. Finn, Phys. Plasmas 2, 12 (1995)]. This mode is a tearing mode driven unstable by curvature effects and sound wave coupling in the presence of parallel velocity shear. Under such conditions, linear theory predicts that tearing instabilities will grow even in situations in which the classical tearing mode is stable. This could then be a viable seed mechanism for the neoclassical tearing mode, and hence a non-linear study is of interest. Here, the linear and non-linear stages of this instability are explored using a fully implicit, fully nonlinear 2D reduced resistive MHD code [L. Chacon et al., "Implicit, Jacobian-free Newton-Krylov 2D reduced resistive MHD nonlinear solver," submitted to J. Comput. Phys. (2000)], including viscosity and particle transport effects. The nonlinear implicit time integration is performed using the Newton-Raphson iterative algorithm. Krylov iterative techniques are employed for the required algebraic matrix inversions, implemented Jacobian-free (i.e., without ever forming and storing the Jacobian matrix) and preconditioned with a "physics-based" preconditioner. Nonlinear results indicate that, for large total plasma beta and large parallel velocity shear, the instability results in the generation of large poloidal shear flows and large magnetic islands even in regimes where the classical tearing mode is absolutely stable. For small viscosity, the time-asymptotic state can be turbulent.

  17. PARC Navier-Stokes code upgrade and validation for high speed aeroheating predictions

    NASA Technical Reports Server (NTRS)

    Liver, Peter A.; Praharaj, Sarat C.; Seaford, C. Mark

    1990-01-01

    Applications of the PARC full Navier-Stokes code for hypersonic flowfield and aeroheating predictions around blunt bodies such as the Aeroassist Flight Experiment (AFE) and Aeroassisted Orbital Transfer Vehicle (AOTV) are evaluated. Two-dimensional/axisymmetric and three-dimensional perfect-gas versions of the code were upgraded and tested against benchmark wind tunnel cases of hemisphere-cylinder, three-dimensional AFE forebody, and axisymmetric AFE and AOTV aerobrake/wake flowfields. PARC calculations are in good agreement with experimental data and with the results of similar computer codes. Difficulties encountered in flowfield and heat transfer predictions due to the effects of grid density, boundary conditions (such as the singular stagnation-line axis), and artificial dissipation terms are presented, together with subsequent improvements made to the code. The experience gained with the perfect-gas code is currently being utilized in applications of an equilibrium-air real-gas PARC version developed at REMTECH.

  18. SIMD Optimization of Linear Expressions for Programmable Graphics Hardware

    PubMed Central

    Bajaj, Chandrajit; Ihm, Insung; Min, Jungki; Oh, Jinsang

    2009-01-01

    The increased programmability of graphics hardware allows efficient graphics processing unit (GPU) implementations of a wide range of general computations on commodity PCs. An important factor in such implementations is how to fully exploit the SIMD computing capacities offered by modern graphics processors. Linear expressions in the form of ȳ = Ax̄ + b̄, where A is a matrix, and x̄, ȳ and b̄ are vectors, constitute one of the most basic operations in many scientific computations. In this paper, we propose a SIMD code optimization technique that enables efficient shader codes to be generated for evaluating linear expressions. It is shown that performance can be improved considerably by efficiently packing arithmetic operations into four-wide SIMD instructions through reordering of the operations in linear expressions. We demonstrate that the presented technique can be used effectively for programming both vertex and pixel shaders for a variety of mathematical applications, including integrating differential equations and solving a sparse linear system of equations using iterative methods. PMID:19946569
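
    The packing idea can be illustrated outside shader code. In the toy sketch below (numpy standing in for four-wide SIMD registers; not the paper's shader generator), the 16 scalar multiply-adds of a 4x4 product ȳ = Ax̄ + b̄ are regrouped into four vector multiply-adds over columns:

      import numpy as np

      A = np.arange(16.0).reshape(4, 4)
      x = np.array([1.0, 2.0, 3.0, 4.0])
      b = np.ones(4)

      # Unpacked evaluation: 16 multiplies and 12 adds, one scalar at a time.
      y_scalar = np.array([sum(A[i, j] * x[j] for j in range(4)) + b[i]
                           for i in range(4)])

      # Packed evaluation: each column times the broadcast scalar x[j] is one
      # four-wide multiply-add, so the product costs 4 vector ops plus the bias.
      y_packed = b.copy()
      for j in range(4):
          y_packed += A[:, j] * x[j]           # one four-wide fused multiply-add

      assert np.allclose(y_scalar, y_packed)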

  19. Computer programs to predict induced effects of jets exhausting into a crossflow

    NASA Technical Reports Server (NTRS)

    Perkins, S. C., Jr.; Mendenhall, M. R.

    1984-01-01

    This document is a user's manual for two computer programs developed to predict the induced effects of jets exhausting into a crossflow. Program JETPLT predicts pressures induced on an infinite flat plate by a jet exhausting at angles to the plate, and Program JETBOD, in conjunction with a panel code, predicts pressures induced on a body of revolution by a jet exhausting normal to the surface. Both codes use a potential model of the jet and adjacent surface with empirical corrections for viscous or nonpotential effects. This manual contains a description of the use of both programs, instructions for the preparation of input, descriptions of the output, limitations of the codes, and sample cases. In addition, procedures to extend both codes to include additional empirical correlations are described.

  20. Implicit Coupling Approach for Simulation of Charring Carbon Ablators

    NASA Technical Reports Server (NTRS)

    Chen, Yih-Kanq; Gokcen, Tahir

    2013-01-01

    This study demonstrates that coupling of a material thermal response code and a flow solver with nonequilibrium gas/surface interaction for simulation of charring carbon ablators can be performed using an implicit approach. The material thermal response code used in this study is the three-dimensional version of the Fully Implicit Ablation and Thermal response program, which predicts charring-material thermal response and shape change on hypersonic space vehicles. The flow code solves the reacting Navier-Stokes equations using the Data Parallel Line Relaxation method. Coupling between the material response and flow codes is performed by solving the surface mass balance in the flow solver and the surface energy balance in the material response code. Thus, the material surface recession is predicted in the flow code, and the surface temperature and pyrolysis gas injection rate are computed in the material response code. It is demonstrated that the time-lagged explicit approach is sufficient for simulations at low surface-heating conditions, in which the surface ablation rate is not a strong function of the surface temperature. At elevated surface-heating conditions, the implicit approach has to be taken, because the carbon ablation rate becomes a stiff function of the surface temperature, and the explicit approach then proves inappropriate, resulting in severe numerical oscillations of the predicted surface temperature. Implicit coupling for simulation of arc-jet models is performed, and the predictions are compared with measured data. Implicit coupling for trajectory-based simulation of the Stardust forebody heat shield is also conducted. The predicted stagnation-point total recession is compared with that predicted using the chemical-equilibrium surface assumption.

  1. Analysis and recognition of 5′ UTR intron splice sites in human pre-mRNA

    PubMed Central

    Eden, E.; Brunak, S.

    2004-01-01

    Prediction of splice sites in non-coding regions of genes is one of the most challenging aspects of gene structure recognition. We perform a rigorous analysis of such splice sites embedded in human 5′ untranslated regions (UTRs), and investigate correlations between this class of splice sites and other features found in the adjacent exons and introns. By restricting the training of neural network algorithms to ‘pure’ UTRs (not extending partially into protein coding regions), we for the first time investigate the predictive power of the splicing signal proper, in contrast to conventional splice site prediction, which typically relies on the change in sequence at the transition from protein coding to non-coding. By doing so, the algorithms were able to pick up subtler splicing signals that were otherwise masked by ‘coding’ noise, thus enhancing significantly the prediction of 5′ UTR splice sites. For example, the non-coding splice site predicting networks pick up compositional and positional bias in the 3′ ends of non-coding exons and 5′ non-coding intron ends, where cytosine and guanine are over-represented. This compositional bias at the true UTR donor sites is also visible in the synaptic weights of the neural networks trained to identify UTR donor sites. Conventional splice site prediction methods perform poorly in UTRs because the reading frame pattern is absent. The NetUTR method presented here performs 2–3-fold better compared with NetGene2 and GenScan in 5′ UTRs. We also tested the 5′ UTR trained method on protein coding regions, and discovered, surprisingly, that it works quite well (although it cannot compete with NetGene2). This indicates that the local splicing pattern in UTRs and coding regions is largely the same. The NetUTR method is made publicly available at www.cbs.dtu.dk/services/NetUTR. PMID:14960723

  2. Aeroacoustic Prediction Codes

    NASA Technical Reports Server (NTRS)

    Gliebe, P; Mani, R.; Shin, H.; Mitchell, B.; Ashford, G.; Salamah, S.; Connell, S.; Huff, Dennis (Technical Monitor)

    2000-01-01

    This report describes work performed on Contract NAS3-27720AoI 13 as part of the NASA Advanced Subsonic Transport (AST) Noise Reduction Technology effort. Computer codes were developed to provide quantitative prediction, design, and analysis capability for several aircraft engine noise sources. The objective was to provide improved, physics-based tools for exploration of noise-reduction concepts and understanding of experimental results. Methods and codes focused on fan broadband and 'buzz saw' noise and on low-emissions combustor noise, and complement work done by other contractors under the NASA AST program to develop methods and codes for fan harmonic tone noise and jet noise. The methods and codes developed and reported herein employ a wide range of approaches, from the strictly empirical to the completely computational, with some being semiempirical, analytical, and/or analytical/computational. Emphasis was on capturing the essential physics while still considering method or code utility as a practical design and analysis tool for everyday engineering use. Codes and prediction models were developed for: (1) an improved empirical correlation model for fan rotor exit flow mean and turbulence properties, for use in predicting broadband noise generated by rotor exit flow turbulence interaction with downstream stator vanes; (2) fan broadband noise models for rotor and stator/turbulence interaction sources, including 3D effects, noncompact-source effects, directivity modeling, and extensions to the rotor supersonic tip-speed regime; (3) fan multiple-pure-tone in-duct sound pressure prediction methodology based on computational fluid dynamics (CFD) analysis; and (4) low-emissions combustor prediction methodology and computer code based on CFD and actuator disk theory. In addition, the relative importance of dipole and quadrupole source mechanisms was studied using direct CFD source computation for a simple cascade/gust interaction problem, and an empirical combustor-noise correlation model was developed from engine acoustic test results. This work provided several insights on potential approaches to reducing aircraft engine noise. Code development is described in this report, and those insights are discussed.

  3. Quantum error correcting codes and 4-dimensional arithmetic hyperbolic manifolds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guth, Larry, E-mail: lguth@math.mit.edu; Lubotzky, Alexander, E-mail: alex.lubotzky@mail.huji.ac.il

    2014-08-15

    Using 4-dimensional arithmetic hyperbolic manifolds, we construct some new homological quantum error correcting codes. They are low density parity check codes with linear rate and distance n^ε. Their rate is evaluated via Euler characteristic arguments and their distance using Z_2-systolic geometry. This construction answers a question of Zémor ["On Cayley graphs, surface codes, and the limits of homological coding for quantum error correction," in Proceedings of Second International Workshop on Coding and Cryptology (IWCC), Lecture Notes in Computer Science Vol. 5557 (2009), pp. 259-273], who asked whether homological codes with such parameters could exist at all.

  4. Terahertz wave manipulation based on multi-bit coding artificial electromagnetic surfaces

    NASA Astrophysics Data System (ADS)

    Li, Jiu-Sheng; Zhao, Ze-Jiang; Yao, Jian-Quan

    2018-05-01

    A polarization-insensitive multi-bit coding artificial electromagnetic surface is proposed for terahertz wave manipulation. The coding artificial electromagnetic surfaces, composed of four-arrow-shaped particles arranged in certain coding sequences, can realize multi-bit coding at terahertz frequencies and steer the reflected terahertz waves into numerous directions through different coding distributions. Furthermore, we demonstrate that our coding artificial electromagnetic surfaces have a strong ability to reduce the radar cross section, with polarization insensitivity, for TE and TM incident terahertz waves as well as for linearly and circularly polarized terahertz waves. This work offers an effective strategy for more powerful manipulation of terahertz waves.

  5. Efficient depth intraprediction method for H.264/AVC-based three-dimensional video coding

    NASA Astrophysics Data System (ADS)

    Oh, Kwan-Jung; Oh, Byung Tae

    2015-04-01

    We present an intracoding method that is applicable to depth map coding in multiview-plus-depth systems. Our approach combines skip prediction and plane-segmentation-based prediction. The proposed depth intraskip prediction uses the estimated direction at both the encoder and decoder, and does not need to encode residual data. Our plane-segmentation-based intraprediction divides the current block into two regions and applies a different prediction scheme to each segmented region. This method avoids incorrect estimations across different regions, resulting in higher prediction accuracy. Simulation results demonstrate that the proposed scheme is superior to H.264/advanced video coding intraprediction and has the ability to improve the subjective rendering quality.

  6. Numerical simulation of experiments in the Giant Planet Facility

    NASA Technical Reports Server (NTRS)

    Green, M. J.; Davy, W. C.

    1979-01-01

    A series of existing computer codes is used to numerically simulate ablation experiments in the Giant Planet Facility. Of primary importance is the simulation of the low-Mach-number shock layer that envelops the test model. The RASLE shock-layer code, used in the Jupiter entry probe heat-shield design, is adapted to the experimental conditions. RASLE predictions for radiative and convective heat fluxes are in good agreement with calorimeter measurements. In simulating carbonaceous ablation experiments, the RASLE code is coupled directly with the CMA material response code. For the graphite models, predicted and measured recessions agree very well. Predicted recession for the carbon phenolic models is 50% higher than that measured. This is the first time codes used for the Jupiter probe design have been compared with experiments.

  7. Minimizing embedding impact in steganography using trellis-coded quantization

    NASA Astrophysics Data System (ADS)

    Filler, Tomáš; Judas, Jan; Fridrich, Jessica

    2010-01-01

    In this paper, we propose a practical approach to minimizing embedding impact in steganography based on syndrome coding and trellis-coded quantization, and contrast its performance with appropriate rate-distortion bounds. We assume that each cover element can be assigned a positive scalar expressing the impact of making an embedding change at that element (single-letter distortion). The problem is to embed a given payload with the minimal possible average embedding impact. This task, which can be viewed as a generalization of matrix embedding or writing on wet paper, has been approached using heuristic and suboptimal tools in the past. Here, we propose a fast and very versatile solution to this problem that can theoretically achieve performance arbitrarily close to the bound. It is based on syndrome coding using linear convolutional codes with the optimal binary quantizer implemented using the Viterbi algorithm run in the dual domain. The complexity and memory requirements of the embedding algorithm are linear w.r.t. the number of cover elements. For practitioners, we include detailed algorithms for finding good codes and their implementation. Finally, we report extensive experimental results for a large set of relative payloads and for different distortion profiles, including the wet paper channel.
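
    To make the syndrome-coding idea concrete, here is a deliberately simplified sketch (our illustration: matrix embedding with the binary [7,4] Hamming parity-check matrix in place of the paper's convolutional/Viterbi construction, with uniform distortion). Three payload bits are embedded into seven cover bits by flipping at most one bit, and the recipient recovers them as the syndrome:

      import numpy as np

      H = np.array([[0, 0, 0, 1, 1, 1, 1],
                    [0, 1, 1, 0, 0, 1, 1],
                    [1, 0, 1, 0, 1, 0, 1]])   # column j is j written in binary

      def embed(cover, message):
          """Return stego bits with (H @ stego) % 2 == message, flipping <= 1 bit."""
          s = (H @ cover + message) % 2        # XOR mismatch between syndrome and payload
          idx = 4 * s[0] + 2 * s[1] + s[2]     # index of the column equal to s (0 if none)
          stego = cover.copy()
          if idx:
              stego[idx - 1] ^= 1              # the unique single-bit fix
          return stego

      def extract(stego):
          return (H @ stego) % 2               # recipient only computes the syndrome

      cover = np.array([1, 0, 1, 1, 0, 0, 1])
      message = np.array([1, 0, 1])
      assert np.array_equal(extract(embed(cover, message)), message)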

  8. Fast bi-directional prediction selection in H.264/MPEG-4 AVC temporal scalable video coding.

    PubMed

    Lin, Hung-Chih; Hang, Hsueh-Ming; Peng, Wen-Hsiao

    2011-12-01

    In this paper, we propose a fast algorithm that efficiently selects the temporal prediction type for the dyadic hierarchical-B prediction structure in H.264/MPEG-4 temporal scalable video coding (SVC). We make use of the strong correlations in prediction-type inheritance to eliminate the superfluous computations for the bi-directional (BI) prediction in the finer partitions, 16×8/8×16/8×8, by referring to the best temporal prediction type of the 16×16 partition. In addition, we carefully examine the relationship in motion bit-rate costs and distortions between the BI and the uni-directional temporal prediction types. As a result, we construct a set of adaptive thresholds to remove the unnecessary BI calculations. Moreover, for block partitions smaller than 8×8, either the forward prediction (FW) or the backward prediction (BW) is skipped based upon the information of their 8×8 partitions. Hence, the proposed schemes can efficiently reduce the extensive computational burden of calculating the BI prediction. As compared to the JSVM 9.11 software, our method saves encoding time by 48% to 67% for a large variety of test videos over a wide range of coding bit-rates, with only a minor coding performance loss.

  9. LDRD final report on massively-parallel linear programming : the parPCx system.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parekh, Ojas; Phillips, Cynthia Ann; Boman, Erik Gunnar

    2005-02-01

    This report summarizes the research and development performed from October 2002 to September 2004 at Sandia National Laboratories under the Laboratory-Directed Research and Development (LDRD) project ''Massively-Parallel Linear Programming''. We developed a linear programming (LP) solver designed to use a large number of processors. LP is the optimization of a linear objective function subject to linear constraints. Companies and universities have expended huge efforts over decades to produce fast, stable serial LP solvers. Previous parallel codes run on shared-memory systems and have little or no distribution of the constraint matrix. We have seen no reports of general LP solver runs on large numbers of processors. Our parallel LP code is based on an efficient serial implementation of Mehrotra's interior-point predictor-corrector algorithm (PCx). The computational core of this algorithm is the assembly and solution of a sparse linear system. We have substantially rewritten the PCx code and based it on Trilinos, the parallel linear algebra library developed at Sandia. Our interior-point method can use either direct or iterative solvers for the linear system. To achieve a good parallel data distribution of the constraint matrix, we use a (pre-release) version of a hypergraph partitioner from the Zoltan partitioning library. We describe the design and implementation of our new LP solver called parPCx and give preliminary computational results. We summarize a number of issues related to efficient parallel solution of LPs with interior-point methods, including data distribution, numerical stability, and solving the core linear system using both direct and iterative methods. We describe a number of applications of LP specific to US Department of Energy mission areas, and we summarize our efforts to integrate parPCx (and parallel LP solvers in general) into Sandia's massively-parallel integer programming solver PICO (Parallel Integer and Combinatorial Optimizer). We conclude with directions for long-term future algorithmic research and for near-term development that could improve the performance of parPCx.
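
    The computational core named above can be sketched in a few lines (a toy stand-in, with scipy in place of the Trilinos/parallel machinery and random data): each interior-point iteration assembles the sparse normal-equations matrix A D A^T, with D = diag(x/s) built from the current primal iterate x and dual slacks s, and solves it either directly or iteratively.

      import numpy as np
      import scipy.sparse as sp
      import scipy.sparse.linalg as spla

      rng = np.random.default_rng(1)
      m, n = 50, 200
      A = sp.random(m, n, density=0.2, format="csr", random_state=1)
      x = rng.uniform(0.5, 2.0, n)            # current primal iterate (x > 0)
      s = rng.uniform(0.5, 2.0, n)            # current dual slacks   (s > 0)
      r = rng.standard_normal(m)              # right-hand side for this step

      D = sp.diags(x / s)
      M = (A @ D @ A.T).tocsc()               # normal equations; SPD if A has full row rank

      dy_direct = spla.spsolve(M, r)          # direct sparse factorization
      dy_iter, info = spla.cg(M, r)           # iterative alternative (conjugate gradients)
      print(np.linalg.norm(M @ dy_direct - r), info)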

  10. A 3D-CFD code for accurate prediction of fluid flows and fluid forces in seals

    NASA Technical Reports Server (NTRS)

    Athavale, M. M.; Przekwas, A. J.; Hendricks, R. C.

    1994-01-01

    Current and future turbomachinery requires advanced seal configurations to control leakage, inhibit mixing of incompatible fluids, and control the rotordynamic response. In recognition of a deficiency in the existing predictive methodology for seals, a seven-year effort was established in 1990 by NASA's Office of Aeronautics, Exploration and Technology, under the Earth-to-Orbit Propulsion program, to develop validated Computational Fluid Dynamics (CFD) concepts, codes, and analyses for seals. The effort will provide NASA and the U.S. aerospace industry with advanced CFD scientific codes and industrial codes for analyzing and designing turbomachinery seals. An advanced 3D CFD cylindrical seal code has been developed, incorporating state-of-the-art computational methodology for flow analysis in straight, tapered, and stepped seals. Relevant computational features of the code include: stationary/rotating coordinates, cylindrical and general Body-Fitted Coordinate (BFC) systems, high-order differencing schemes, colocated variable arrangement, advanced turbulence models, incompressible/compressible flows, and moving grids. This paper presents the current status of code development, code demonstration for predicting rotordynamic coefficients, a numerical parametric study of entrance loss coefficients for generic annular seals, and plans for code extensions to labyrinth, damping, and other seal configurations.

  11. 3-D inelastic analysis methods for hot section components (base program) [turbine blades, turbine vanes, and combustor liners]

    NASA Technical Reports Server (NTRS)

    Wilson, R. B.; Bak, M. J.; Nakazawa, S.; Banerjee, P. K.

    1984-01-01

    A 3-D inelastic analysis methods program consists of a series of computer codes embodying a progression of mathematical models (mechanics of materials, special finite element, boundary element) for streamlined analysis of combustor liners, turbine blades, and turbine vanes. These models address the effects of high temperatures and thermal/mechanical loadings on the local (stress/strain) and global (dynamics, buckling) structural behavior of the three selected components. These models are used to solve 3-D inelastic problems using linear approximations in the sense that stresses/strains and temperatures in generic modeling regions are linear functions of the spatial coordinates, and solution increments for load, temperature and/or time are extrapolated linearly from previous information. Three linear formulation computer codes, referred to as MOMM (Mechanics of Materials Model), MHOST (MARC-Hot Section Technology), and BEST (Boundary Element Stress Technology), were developed and are described.

  12. Rate-Compatible Protograph LDPC Codes

    NASA Technical Reports Server (NTRS)

    Nguyen, Thuy V. (Inventor); Nosratinia, Aria (Inventor); Divsalar, Dariush (Inventor)

    2014-01-01

    Digital communication coding methods resulting in rate-compatible low-density parity-check (LDPC) codes built from protographs. The described coding methods start with a desired code rate and a selection of the numbers of variable nodes and check nodes to be used in the protograph. Constraints are set to satisfy a linear minimum-distance growth property for the protograph. All possible edges in the graph are searched for the minimum iterative decoding threshold, and the protograph with the lowest iterative decoding threshold is selected. Protographs designed in this manner are used in decode-and-forward relay channels.
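
    The lifting step that turns a protograph into a full parity-check matrix can be sketched generically (our illustration with a toy base graph and hypothetical circulant offsets; parallel edges and the threshold search described above are omitted): every 1 in the base matrix becomes a Z x Z circulant permutation and every 0 a Z x Z zero block.

      import numpy as np

      def lift(base, shifts, Z):
          """Expand a protograph base matrix into H using circulant permutations."""
          m, n = base.shape
          H = np.zeros((m * Z, n * Z), dtype=int)
          I = np.eye(Z, dtype=int)
          for i in range(m):
              for j in range(n):
                  if base[i, j]:
                      H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] = np.roll(I, shifts[i, j], axis=1)
          return H

      base = np.array([[1, 1, 1, 0],          # toy protograph: 2 checks, 4 variables
                       [0, 1, 1, 1]])
      shifts = np.array([[0, 1, 2, 0],        # hypothetical circulant offsets
                         [0, 3, 1, 2]])
      H = lift(base, shifts, Z=4)
      print(H.shape)                          # (8, 16): a rate-1/2 LDPC code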

  13. A hybrid gyrokinetic ion and isothermal electron fluid code for astrophysical plasma

    NASA Astrophysics Data System (ADS)

    Kawazura, Y.; Barnes, M.

    2018-05-01

    This paper describes a new code for simulating astrophysical plasmas that solves a hybrid model composed of gyrokinetic ions (GKI) and an isothermal electron fluid (ITEF) [Schekochihin et al. (2009)]. This model captures ion kinetic effects that are important near the ion gyro-radius scale, while electron kinetic effects are ordered out by an electron-ion mass-ratio expansion. The code is developed by incorporating the ITEF approximation into AstroGK, an Eulerian δf gyrokinetics code specialized to a slab geometry [Numata et al. (2010)]. The new code treats the linear terms in the ITEF equations implicitly, while the nonlinear terms are treated explicitly. We show linear and nonlinear benchmark tests to prove the validity and applicability of the simulation code. Since the fast electron timescale is eliminated by the mass-ratio expansion, the Courant-Friedrichs-Lewy condition is much less restrictive than in full gyrokinetic codes; the present hybrid code runs ∼2√(m_i/m_e) ∼ 100 times faster than AstroGK with a single ion species and kinetic electrons, where m_i/m_e is the ion-electron mass ratio. The improvement in computational time makes it feasible to execute ion-scale gyrokinetic simulations with high velocity-space resolution and to run multiple simulations to determine the dependence of turbulent dynamics on parameters such as the electron-ion temperature ratio and plasma beta.
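
    As a quick check of the quoted factor (our arithmetic, for a hydrogen plasma with m_i/m_e ≈ 1836):

      \[
        2\sqrt{m_i/m_e} \;=\; 2\sqrt{1836} \;\approx\; 2 \times 42.8 \;\approx\; 86 \;\sim\; 100
      \]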

  14. Conceptual-driven classification for coding advise in health insurance reimbursement.

    PubMed

    Li, Sheng-Tun; Chen, Chih-Chuan; Huang, Fernando

    2011-01-01

    With the non-stop increases in medical treatment fees, the economic survival of a hospital in Taiwan relies on the reimbursements received from the Bureau of National Health Insurance, which in turn depend on the accuracy and completeness of the content of the discharge summaries as well as the correctness of their International Classification of Diseases (ICD) codes. The purpose of this research is to reinforce the entire disease classification framework by supporting disease classification specialists in the coding process. This study developed an ICD code advisory system (ICD-AS) that performed knowledge discovery from discharge summaries and suggested ICD codes. Natural language processing and information retrieval techniques based on Zipf's law were applied to process the content of discharge summaries, and fuzzy formal concept analysis was used to analyze and represent the relationships between the medical terms identified by MeSH. In addition, a certainty factor used as a reference during the coding process was calculated to account for uncertainty and strengthen the credibility of the outcome. Two sets of 360 and 2579 textual discharge summaries of patients suffering from cerebrovascular disease were processed to build up ICD-AS and to evaluate the prediction performance. A number of experiments were conducted to investigate the impact of system parameters on accuracy and to compare the proposed model to traditional classification techniques, including linear-kernel support vector machines. The comparison results showed that the proposed system achieves better overall performance in terms of several measures. In addition, some useful implication rules were obtained, which improve comprehension of the field of cerebrovascular disease and give insights into the relationships between relevant medical terms. Our system contributes valuable guidance to disease classification specialists in the process of coding discharge summaries, which consequently brings benefits in aspects of the patient, hospital, and healthcare system.

  15. On Asymptotically Good Ramp Secret Sharing Schemes

    NASA Astrophysics Data System (ADS)

    Geil, Olav; Martin, Stefano; Martínez-Peñas, Umberto; Matsumoto, Ryutaroh; Ruano, Diego

    Asymptotically good sequences of linear ramp secret sharing schemes have been intensively studied by Cramer et al. in terms of sequences of pairs of nested algebraic geometric codes. In those works the focus is on full privacy and full reconstruction. In this paper we analyze additional parameters describing the asymptotic behavior of partial information leakage and possibly also partial reconstruction giving a more complete picture of the access structure for sequences of linear ramp secret sharing schemes. Our study involves a detailed treatment of the (relative) generalized Hamming weights of the considered codes.

  16. Superdense Coding over Optical Fiber Links with Complete Bell-State Measurements

    DOE PAGES

    Williams, Brian P.; Sadlier, Ronald J.; Humble, Travis S.

    2017-02-01

    Adopting quantum communication to modern networking requires transmitting quantum information through a fiber-based infrastructure. In this paper, we report the first demonstration of superdense coding over optical fiber links, taking advantage of a complete Bell-state measurement enabled by time-polarization hyperentanglement, linear optics, and common single-photon detectors. In addition, we demonstrate the highest single-qubit channel capacity to date utilizing linear optics, 1.665 ± 0.018 bits, and we provide a full experimental implementation of a hybrid quantum-classical communication protocol for image transfer.

  17. TAS: A Transonic Aircraft/Store flow field prediction code

    NASA Technical Reports Server (NTRS)

    Thompson, D. S.

    1983-01-01

    A numerical procedure has been developed that has the capability to predict the transonic flow field around an aircraft with an arbitrarily located, separated store. The TAS code, the product of a joint General Dynamics/NASA ARC/AFWAL research and development program, will serve as the basis for a comprehensive predictive method for aircraft with arbitrary store loadings. This report describes the numerical procedures employed to simulate the flow field around a configuration of this type. The validity of TAS code predictions is established by comparison with existing experimental data. In addition, future areas of development of the code are outlined. A brief description of code utilization is also given in the Appendix. The aircraft/store configuration is simulated using a mesh-embedding approach. The computational domain is discretized by three meshes: (1) a planform-oriented wing/body fine mesh, (2) a cylindrical store mesh, and (3) a global Cartesian crude mesh. This embedded-mesh scheme enables simulation of stores with fins of arbitrary angular orientation.

  18. Modification of codes NUALGAM and BREMRAD, Volume 1

    NASA Technical Reports Server (NTRS)

    Steyn, J. J.; Huang, R.; Firstenberg, H.

    1971-01-01

    The NUGAM2 code predicts forward and backward angular energy differential and integrated distributions for gamma photons and fluorescent radiation emerging from finite laminar transport media. It determines buildup and albedo data for scientific research and engineering purposes; it also predicts the emission characteristics of finite radioisotope sources. The results are shown to be in very good agreement with available published data. The code predicts data for many situations in which no published data is available in the energy range up to 5 MeV. The NUGAM3 code predicts the pulse height response of inorganic (NaI and CsI) scintillation detectors to gamma photons. Because it allows the scintillator to be clad and mounted on a photomultiplier as in the experimental or industrial application, it is a more practical and thus useful code than others previously reported. Results are in excellent agreement with published Monte Carlo and experimental data in the energy range up to 4.5 MeV.

  19. Modeling time-dependent corrosion fatigue crack propagation in 7000 series aluminum alloys

    NASA Technical Reports Server (NTRS)

    Mason, Mark E.; Gangloff, Richard P.

    1994-01-01

    Stress corrosion cracking and corrosion fatigue experiments were conducted with the susceptible S-L orientation of AA7075-T651, immersed in acidified and inhibited NaCl solution, to provide a basis for incorporating environmental effects into fatigue crack propagation life prediction codes such as NASA FLAGRO. This environment enhances da/dN five- to ten-fold compared to fatigue in moist air. Time-based crack growth rates from quasi-static load experiments are an order of magnitude too small for accurate linear superposition prediction of da/dN for loading frequencies above 0.001 Hz. Alternate methods of establishing da/dt, based on rising-load or ripple-load-enhanced crack tip strain rate, do not increase da/dt and do not improve linear superposition. Corrosion fatigue is characterized by two regimes of frequency dependence; da/dN is proportional to f^(-1) below 0.001 Hz and to f^0 to f^(-0.1) at higher frequencies. da/dN increases mildly both with increasing hold time at K_max and with increasing rise time for a range of loading waveforms. The mild time dependence is due to cycle-time-dependent corrosion fatigue growth. This behavior is identical for S-L and L-T crack orientations. The frequency response of environmental fatigue in several 7000 series alloys is variable and depends on undefined compositional or microstructural variables. Speculative explanations are based on the effect of Mg on occluded crack chemistry and embrittling hydrogen uptake, or on variable hydrogen diffusion in the crack tip process zone. Cracking in the 7075/NaCl system is adequately described for life prediction by linear superposition for prolonged load-cycle periods, and by a time-dependent upper-bound relationship between da/dN and ΔK for moderate loading times.
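
    The linear superposition model referred to above is, in its usual form, the sum of an inert-environment mechanical fatigue rate and the sustained-load crack growth rate integrated over one load cycle of period 1/f (a standard expression, not quoted from the report):

      \[
        \left(\frac{da}{dN}\right)_{\mathrm{total}}
          \;=\; \left(\frac{da}{dN}\right)_{\mathrm{mechanical}}
          \;+\; \int_{0}^{1/f} \frac{da}{dt}\bigl(K(t)\bigr)\,dt
      \]

    When the second term dominates, da/dN scales as f^(-1), which matches the low-frequency regime reported above; the failure of the prediction above 0.001 Hz reflects measured da/dt values an order of magnitude too small to supply the observed environmental enhancement.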

  20. Dream to Predict? REM Dreaming as Prospective Coding

    PubMed Central

    Llewellyn, Sue

    2016-01-01

    The dream as prediction seems inherently improbable. The bizarre occurrences in dreams never characterize everyday life. Dreams do not come true! But assuming that bizarreness negates expectations may rest on a misunderstanding of how the predictive brain works. In evolutionary terms, the ability to rapidly predict what sensory input implies—through expectations derived from discerning patterns in associated past experiences—would have enhanced fitness and survival. For example, food and water are essential for survival, associating past experiences (to identify location patterns) predicts where they can be found. Similarly, prediction may enable predator identification from what would have been only a fleeting and ambiguous stimulus—without prior expectations. To confront the many challenges associated with natural settings, visual perception is vital for humans (and most mammals) and often responses must be rapid. Predictive coding during wake may, therefore, be based on unconscious imagery so that visual perception is maintained and appropriate motor actions triggered quickly. Speed may also dictate the form of the imagery. Bizarreness, during REM dreaming, may result from a prospective code fusing phenomena with the same meaning—within a particular context. For example, if the context is possible predation, from the perspective of the prey two different predators can both mean the same (i.e., immediate danger) and require the same response (e.g., flight). Prospective coding may also prune redundancy from memories, to focus the image on the contextually-relevant elements only, thus, rendering the non-relevant phenomena indeterminate—another aspect of bizarreness. In sum, this paper offers an evolutionary take on REM dreaming as a form of prospective coding which identifies a probabilistic pattern in past events. This pattern is portrayed in an unconscious, associative, sensorimotor image which may support cognition in wake through being mobilized as a predictive code. A particular dream illustrates. PMID:26779078

  1. Advanced propeller noise prediction in the time domain

    NASA Technical Reports Server (NTRS)

    Farassat, F.; Dunn, M. H.; Spence, P. L.

    1992-01-01

    The time-domain code ASSPIN gives acousticians a powerful technique for advanced propeller noise prediction. Except for nonlinear effects, the code uses exact solutions of the Ffowcs Williams-Hawkings equation with exact blade geometry and kinematics. The inclusion of nonaxial inflow, periodic loading noise, and adaptive time steps to accelerate execution completes the development of the code.

  2. Perceptual scale expansion: an efficient angular coding strategy for locomotor space.

    PubMed

    Durgin, Frank H; Li, Zhi

    2011-08-01

    Whereas most sensory information is coded on a logarithmic scale, linear expansion of a limited range may provide a more efficient coding for the angular variables important to precise motor control. In four experiments, we show that the perceived declination of gaze, like the perceived orientation of surfaces, is coded on a distorted scale. The distortion seems to arise from a nearly linear expansion of the angular range close to horizontal/straight ahead and is evident in explicit verbal and nonverbal measures (Experiments 1 and 2), as well as in implicit measures of perceived gaze direction (Experiment 4). The theory is advanced that this scale expansion (by a factor of about 1.5) may serve a functional goal of coding efficiency for angular perceptual variables. The scale expansion of perceived gaze declination is accompanied by a corresponding expansion of perceived optical slants in the same range (Experiments 3 and 4). These dual distortions can account for the explicit misperception of distance typically obtained by direct report and exocentric matching, while allowing for accurate spatial action to be understood as the result of calibration.
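
    One way to see how a 1.5x expansion of perceived gaze declination produces the explicit distance underestimation mentioned above (a worked example consistent with, but not taken from, the paper): a ground target viewed from eye height h at true declination γ lies at distance d = h/tan γ, so an expanded perceived declination of 1.5γ implies a shorter perceived distance,

      \[
        d' \;=\; \frac{h}{\tan(1.5\,\gamma)} \;<\; \frac{h}{\tan\gamma} \;=\; d
        \qquad (0 < \gamma < 60^{\circ})
      \]

    For example, with h = 1.6 m and γ = 10°, d ≈ 9.1 m while d' ≈ 6.0 m, roughly a one-third underestimate.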

  3. Perceptual Scale Expansion: An Efficient Angular Coding Strategy for Locomotor Space

    PubMed Central

    Durgin, Frank H.; Li, Zhi

    2011-01-01

    Whereas most sensory information is coded on a logarithmic scale, linear expansion of a limited range may provide a more efficient coding for angular variables important to precise motor control. In four experiments it is shown that the perceived declination of gaze, like the perceived orientation of surfaces, is coded on a distorted scale. The distortion seems to arise from a nearly linear expansion of the angular range close to horizontal/straight ahead and is evident in explicit verbal and non-verbal measures (Experiments 1 and 2) and in implicit measures of perceived gaze direction (Experiment 4). The theory is advanced that this scale expansion (by a factor of about 1.5) may serve a functional goal of coding efficiency for angular perceptual variables. The scale expansion of perceived gaze declination is accompanied by a corresponding expansion of perceived optical slants in the same range (Experiments 3 and 4). These dual distortions can account for the explicit misperception of distance typically obtained by direct report and exocentric matching while allowing accurate spatial action to be understood as the result of calibration. PMID:21594732

  4. Noise Analysis of Spatial Phase coding in analog Acoustooptic Processors

    NASA Technical Reports Server (NTRS)

    Gary, Charles K.; Lum, Henry, Jr. (Technical Monitor)

    1994-01-01

    Optical beams can carry information in their amplitude and phase; however, optical analog numerical calculators such as optical matrix processors use incoherent light to achieve linear operation. Thus, the phase information is lost and only the magnitude can be used. This limits such processors to the representation of positive real numbers. Many systems have been devised to overcome this deficit through the use of digital number representations, but they all operate at greatly reduced efficiency compared to analog systems. The most widely accepted method of achieving sign coding in analog optical systems has been the use of an offset for the zero level. Unfortunately, this results in increased noise sensitivity for small numbers. In this paper, we examine the use of spatially coherent sign coding in acoustooptic processors, a method first developed for digital calculations by D. V. Tigin. This coding technique uses spatial coherence for the representation of signed numbers, while temporal incoherence allows for linear analog processing of the optical information. We show how spatial phase coding reduces noise sensitivity for signed analog calculations.

  5. Optimized nonorthogonal transforms for image compression.

    PubMed

    Guleryuz, O G; Orchard, M T

    1997-01-01

    The transform coding of images is analyzed from a common standpoint in order to generate a framework for the design of optimal transforms. It is argued that all transform coders are alike in the way they manipulate the data structure formed by transform coefficients. A general energy compaction measure is proposed to generate optimized transforms with desirable characteristics particularly suited to the simple transform coding operations of scalar quantization and entropy coding. It is shown that the optimal linear decoder (inverse transform) must be an optimal linear estimator, independent of the structure of the transform generating the coefficients. A formulation that sequentially optimizes the transforms is presented, and design equations and algorithms for its computation are provided. The properties of the resulting transform systems are investigated. In particular, it is shown that the resulting bases are nonorthogonal and complete, producing energy-compaction-optimized, decorrelated transform coefficients. Quantization issues related to nonorthogonal expansion coefficients are addressed with a simple, efficient algorithm. Two implementations are discussed, and image coding examples are given. It is shown that the proposed design framework results in systems with superior energy compaction properties and excellent coding results.

  6. Dynamics of Magnetopause Reconnection in Response to Variable Solar Wind Conditions

    NASA Astrophysics Data System (ADS)

    Berchem, J.; Richard, R. L.; Escoubet, C. P.; Pitout, F.

    2017-12-01

    Quantifying the dynamics of magnetopause reconnection in response to variable solar wind driving is essential to advancing our predictive understanding of the interaction of the solar wind/IMF with the magnetosphere. To this end we have carried out numerical studies that combine global magnetohydrodynamic (MHD) and Large-Scale Kinetic (LSK) simulations to identify and understand the effects of solar wind/IMF variations. The use of the low dissipation, high resolution UCLA MHD code incorporating a non-linear local resistivity allows the representation of the global configuration of the dayside magnetosphere while the use of LSK ion test particle codes with distributed particle detectors allows us to compare the simulation results with spacecraft observations such as ion dispersion signatures observed by the Cluster spacecraft. We present the results of simulations that focus on the impacts of relatively simple solar wind discontinuities on the magnetopause and examine how the recent history of the interaction of the magnetospheric boundary with solar wind discontinuities can modify the dynamics of magnetopause reconnection in response to the solar wind input.

  7. Development of a Stirling System Dynamic Model With Enhanced Thermodynamics

    NASA Technical Reports Server (NTRS)

    Regan, Timothy F.; Lewandowski, Edward J.

    2005-01-01

    The Stirling Convertor System Dynamic Model developed at NASA Glenn Research Center is a software model developed from first principles that includes the mechanical and mounting dynamics, the thermodynamics, the linear alternator, and the controller of a free-piston Stirling power convertor, along with the end user load. As such it represents the first detailed modeling tool for fully integrated Stirling convertor-based power systems. The thermodynamics of the model were originally a form of the isothermal Stirling cycle. In some situations it may be desirable to improve the accuracy of the Stirling cycle portion of the model. An option under consideration is to enhance the SDM thermodynamics by coupling the model with Gedeon Associates' Sage simulation code. The result will be a model that gives a more accurate prediction of the performance and dynamics of the free-piston Stirling convertor. A method of integrating the Sage simulation code with the System Dynamic Model is described. Results of SDM and Sage simulation are compared to test data. Model parameter estimation and model validation are discussed.

  8. Nmrglue: an open source Python package for the analysis of multidimensional NMR data.

    PubMed

    Helmus, Jonathan J; Jaroniec, Christopher P

    2013-04-01

    Nmrglue, an open source Python package for working with multidimensional NMR data, is described. When used in combination with other Python scientific libraries, nmrglue provides a highly flexible and robust environment for spectral processing, analysis and visualization and includes a number of common utilities such as linear prediction, peak picking and lineshape fitting. The package also enables existing NMR software programs to be readily tied together, currently facilitating the reading, writing and conversion of data stored in Bruker, Agilent/Varian, NMRPipe, Sparky, SIMPSON, and Rowland NMR Toolkit file formats. In addition to standard applications, the versatility offered by nmrglue makes the package particularly suitable for tasks that include manipulating raw spectrometer data files, automated quantitative analysis of multidimensional NMR spectra with irregular lineshapes such as those frequently encountered in the context of biomacromolecular solid-state NMR, and rapid implementation and development of unconventional data processing methods such as covariance NMR and other non-Fourier approaches. Detailed documentation, install files and source code for nmrglue are freely available at http://nmrglue.com. The source code can be redistributed and modified under the New BSD license.
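
    A brief usage sketch (file names hypothetical; the functions shown are nmrglue's pipe reader/writer and its pipe_proc wrappers around standard NMRPipe processing steps):

      import nmrglue as ng

      dic, data = ng.pipe.read("test.fid")                    # time-domain data + header
      dic, data = ng.pipe_proc.zf(dic, data)                  # zero fill
      dic, data = ng.pipe_proc.ft(dic, data)                  # complex Fourier transform
      dic, data = ng.pipe_proc.ps(dic, data, p0=0.0, p1=0.0)  # phase (values hypothetical)
      dic, data = ng.pipe_proc.di(dic, data)                  # discard imaginaries
      ng.pipe.write("test.ft", dic, data, overwrite=True)     # write the spectrum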

  9. AEROELASTIC SIMULATION TOOL FOR INFLATABLE BALLUTE AEROCAPTURE

    NASA Technical Reports Server (NTRS)

    Liever, P. A.; Sheta, E. F.; Habchi, S. D.

    2006-01-01

    A multidisciplinary analysis tool is under development for predicting the impact of aeroelastic effects on the functionality of inflatable ballute aeroassist vehicles in both the continuum and rarefied flow regimes. High-fidelity modules for continuum and rarefied aerodynamics, structural dynamics, heat transfer, and computational grid deformation are coupled in an integrated multi-physics, multi-disciplinary computing environment. This flexible and extensible approach allows the integration of state-of-the-art, stand-alone NASA and industry leading continuum and rarefied flow solvers and structural analysis codes into a computing environment in which the modules can run concurrently with synchronized data transfer. Coupled fluid-structure continuum flow demonstrations were conducted on a clamped ballute configuration. The feasibility of implementing a DSMC flow solver in the simulation framework was demonstrated, and loosely coupled rarefied flow aeroelastic demonstrations were performed. A NASA and industry technology survey identified CFD, DSMC and structural analysis codes capable of modeling non-linear shape and material response of thin-film inflated aeroshells. The simulation technology will find direct and immediate applications with NASA and industry in ongoing aerocapture technology development programs.

  10. Nmrglue: An Open Source Python Package for the Analysis of Multidimensional NMR Data

    PubMed Central

    Helmus, Jonathan J.; Jaroniec, Christopher P.

    2013-01-01

    Nmrglue, an open source Python package for working with multidimensional NMR data, is described. When used in combination with other Python scientific libraries, nmrglue provides a highly flexible and robust environment for spectral processing, analysis and visualization and includes a number of common utilities such as linear prediction, peak picking and lineshape fitting. The package also enables existing NMR software programs to be readily tied together, currently facilitating the reading, writing and conversion of data stored in Bruker, Agilent/Varian, NMRPipe, Sparky, SIMPSON, and Rowland NMR Toolkit file formats. In addition to standard applications, the versatility offered by nmrglue makes the package particularly suitable for tasks that include manipulating raw spectrometer data files, automated quantitative analysis of multidimensional NMR spectra with irregular lineshapes such as those frequently encountered in the context of biomacromolecular solid-state NMR, and rapid implementation and development of unconventional data processing methods such as covariance NMR and other non-Fourier approaches. Detailed documentation, install files and source code for nmrglue are freely available at http://nmrglue.com. The source code can be redistributed and modified under the New BSD license. PMID:23456039

  11. Design of a robust baseband LPC coder for speech transmission over 9.6 kbit/s noisy channels

    NASA Astrophysics Data System (ADS)

    Viswanathan, V. R.; Russell, W. H.; Higgins, A. L.

    1982-04-01

    This paper describes the design of a baseband Linear Predictive Coder (LPC) which transmits speech over 9.6 kbit/sec synchronous channels with random bit errors of up to 1%. Presented are the results of our investigation of a number of aspects of the baseband LPC coder with the goal of maximizing the quality of the transmitted speech. Important among these aspects are: bandwidth of the baseband, coding of the baseband residual, high-frequency regeneration, and error protection of important transmission parameters. The paper discusses these and other issues, presents the results of speech-quality tests conducted during the various stages of optimization, and describes the details of the optimized speech coder. This optimized speech coding algorithm has been implemented as a real-time full-duplex system on an array processor. Informal listening tests of the real-time coder have shown that the coder produces good speech quality in the absence of channel bit errors and introduces only a slight degradation in quality for channel bit error rates of up to 1%.
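
    The analysis step underlying any LPC coder can be sketched generically (the autocorrelation method on one frame; our illustration, not the paper's full baseband system): fit order-p predictor coefficients by solving the Yule-Walker equations, then form the residual that the coder would quantize and transmit.

      import numpy as np

      def lpc(frame, p=10):
          """Order-p coefficients a[1..p] for s[n] ~ sum_k a[k] * s[n-k]."""
          r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
          R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
          R += 1e-9 * r[0] * np.eye(p)           # tiny diagonal loading for safety
          return np.linalg.solve(R, r[1:p + 1])  # Yule-Walker normal equations

      fs = 8000                                  # 8 kHz sampling, one 30 ms frame
      n = np.arange(240)
      frame = np.sin(2 * np.pi * 200 * n / fs)
      frame += 0.01 * np.random.default_rng(0).standard_normal(n.size)

      a = lpc(frame, p=10)
      pred = np.convolve(frame, np.concatenate(([0.0], a)))[:frame.size]
      residual = frame - pred                    # excitation the coder would encode
      print(np.var(residual) / np.var(frame))    # prediction gain: ratio << 1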

  12. Generating Health Estimates by Zip Code: A Semiparametric Small Area Estimation Approach Using the California Health Interview Survey.

    PubMed

    Wang, Yueyan; Ponce, Ninez A; Wang, Pan; Opsomer, Jean D; Yu, Hongjian

    2015-12-01

    We propose a method to meet the challenges of generating health estimates for granular geographic areas in which the survey sample size is extremely small. Our generalized linear mixed model predicts health outcomes using both individual-level and neighborhood-level predictors. The model's nonparametric smoothing function on neighborhood-level variables better captures the association between neighborhood environment and the outcome. Using 2011 to 2012 data from the California Health Interview Survey, we demonstrate an empirical application of this method to estimate the fraction of residents without health insurance for Zip Code Tabulation Areas (ZCTAs). Our method generated stable estimates of uninsurance for 1519 of 1765 ZCTAs (86%) in California. For some areas with great socioeconomic diversity across adjacent neighborhoods, such as Los Angeles County, the modeled uninsured estimates revealed much heterogeneity among geographically adjacent ZCTAs. The proposed method can increase the value of health surveys by providing modeled estimates for health data at a granular geographic level. It can account for variations in health outcomes at the neighborhood level as a result of both socioeconomic characteristics and geographic locations.

  13. Resistive Wall Modes Identification and Control in RFX-mod low qedge tokamak discharges

    NASA Astrophysics Data System (ADS)

    Baruzzo, Matteo; Bolzonella, Tommaso; Cavazzana, Roberto; Marchiori, Giuseppe; Marrelli, Lionello; Martin, Piero; Paccagnella, Roberto; Piovesan, Paolo; Piron, Lidia; Soppelsa, Anton; Zanca, Paolo; in, Yongkyoon; Liu, Yueqiang; Okabayashi, Michio; Takechi, Manabu; Villone, Fabio

    2011-10-01

    In this work, the MHD stability of RFX-mod tokamak discharges with qedge < 3 is studied. The target plasma scenario is characterized by a plasma current of 100 kA

  14. MATLAB Stability and Control Toolbox Trim and Static Stability Module

    NASA Technical Reports Server (NTRS)

    Kenny, Sean P.; Crespo, Luis

    2012-01-01

    MATLAB Stability and Control Toolbox (MASCOT) utilizes geometric, aerodynamic, and inertial inputs to calculate air vehicle stability in a variety of critical flight conditions. The code is based on fundamental, nonlinear equations of motion and is able to translate results into a qualitative, graphical scale useful to the non-expert. MASCOT was created to provide the conceptual aircraft designer with accurate predictions of air vehicle stability and control characteristics. The code takes as input mass property data in the form of an inertia tensor, aerodynamic loading data, and propulsion (i.e. thrust) loading data. Using these fundamental nonlinear equations of motion, MASCOT then calculates vehicle trim and static stability data for the desired flight condition(s). Available flight conditions include six horizontal and six landing rotation conditions with varying options for engine out, crosswind, and sideslip, plus three take-off rotation conditions. Results are displayed through a unique graphical interface developed to give the conceptual design engineer who is not a stability and control expert a qualitative scale indicating whether the vehicle has acceptable, marginal, or unacceptable static stability characteristics. If desired, the user can also examine the detailed, quantitative results.

  15. Development of a Stirling System Dynamic Model with Enhanced Thermodynamics

    NASA Astrophysics Data System (ADS)

    Regan, Timothy F.; Lewandowski, Edward J.

    2005-02-01

    The Stirling Convertor System Dynamic Model developed at NASA Glenn Research Center is a software model developed from first principles that includes the mechanical and mounting dynamics, the thermodynamics, the linear alternator, and the controller of a free-piston Stirling power convertor, along with the end user load. As such it represents the first detailed modeling tool for fully integrated Stirling convertor-based power systems. The thermodynamics of the model were originally a form of the isothermal Stirling cycle. In some situations it may be desirable to improve the accuracy of the Stirling cycle portion of the model. An option under consideration is to enhance the SDM thermodynamics by coupling the model with Gedeon Associates' Sage simulation code. The result will be a model that gives a more accurate prediction of the performance and dynamics of the free-piston Stirling convertor. A method of integrating the Sage simulation code with the System Dynamic Model is described. Results of SDM and Sage simulation are compared to test data. Model parameter estimation and model validation are discussed.

  16. Study of coherent synchrotron radiation effects by means of a new simulation code based on the non-linear extension of the operator splitting method

    NASA Astrophysics Data System (ADS)

    Dattoli, G.; Migliorati, M.; Schiavi, A.

    2007-05-01

    Coherent synchrotron radiation (CSR) is one of the main problems limiting the performance of high-intensity electron accelerators. The complexity of the physical mechanisms underlying the onset of CSR-driven instabilities demands accurate descriptions, capable of including the large number of features of an actual accelerating device. A code devoted to the analysis of these problems should be both fast and reliable, conditions that are rarely achieved at the same time. In the past, codes based on Lie algebraic techniques have been very efficient for treating transport problems in accelerators, and the extension of these methods to the non-linear case is ideally suited to CSR instability problems. We report on the development of a numerical code, based on the solution of the Vlasov equation, that includes the non-linear contributions due to wake-field effects. The proposed solution method exploits an algebraic technique based on exponential operators. We show that the integration procedure is capable of reproducing the onset of the instability and the effects associated with the bunching mechanisms that drive its growth. In addition, considerations on the instability threshold are developed.
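
    The splitting strategy can be illustrated on a toy one-dimensional problem (a sketch under simplifying assumptions, not the code described above): an exact exponential transport step in Fourier space alternated with an exactly solvable local nonlinear step, composed in second-order Strang order.

      import numpy as np

      L, N, c, dt, steps = 2 * np.pi, 256, 1.0, 1e-3, 500
      x = np.linspace(0, L, N, endpoint=False)
      k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)     # angular wavenumbers
      u = 0.5 * np.exp(-10 * (x - np.pi) ** 2)       # initial bunch-like profile

      def transport(u, tau):
          # Exact exponential-operator solution of u_t + c u_x = 0.
          return np.real(np.fft.ifft(np.exp(-1j * c * k * tau) * np.fft.fft(u)))

      def nonlinear(u, tau):
          # Exact solution of the local step u_t = u**2 (a toy "wake" term).
          return u / (1.0 - tau * u)

      for _ in range(steps):                         # Strang composition
          u = transport(u, dt / 2)
          u = nonlinear(u, dt)
          u = transport(u, dt / 2)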

  17. The Matrix Pencil and its Applications to Speech Processing

    DTIC Science & Technology

    2007-03-01

    Matrix Pencils facilitate the study of differential equations resulting from oscillating systems. Certain problems in linear ordinary …

  18. Great Expectations: Is there Evidence for Predictive Coding in Auditory Cortex?

    PubMed

    Heilbron, Micha; Chait, Maria

    2017-08-04

    Predictive coding is possibly one of the most influential, comprehensive, and controversial theories of neural function. While proponents praise its explanatory potential, critics object that key tenets of the theory are untested or even untestable. The present article critically examines existing evidence for predictive coding in the auditory modality. Specifically, we identify five key assumptions of the theory and evaluate each in the light of animal, human and modeling studies of auditory pattern processing. For the first two assumptions - that neural responses are shaped by expectations and that these expectations are hierarchically organized - animal and human studies provide compelling evidence. The anticipatory, predictive nature of these expectations also enjoys empirical support, especially from studies on unexpected stimulus omission. However, for the existence of separate error and prediction neurons, a key assumption of the theory, evidence is lacking. More work exists on the proposed oscillatory signatures of predictive coding, and on the relation between attention and precision, but results on these latter two assumptions are mixed or contradictory. Looking to the future, more collaboration between human and animal studies, aided by model-based analyses, will be needed to test specific assumptions and implementations of predictive coding - and, as such, to help determine whether this popular grand theory can fulfill its expectations. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.

  19. Modeling positional effects of regulatory sequences with spline transformations increases prediction accuracy of deep neural networks

    PubMed Central

    Avsec, Žiga; Cheng, Jun; Gagneur, Julien

    2018-01-01

    Abstract Motivation Regulatory sequences are not solely defined by their nucleic acid sequence but also by their relative distances to genomic landmarks such as the transcription start site, exon boundaries or the polyadenylation site. Deep learning has become the approach of choice for modeling regulatory sequences because of its ability to learn complex sequence features. However, modeling relative distances to genomic landmarks in deep neural networks has not been addressed. Results Here we developed spline transformation, a neural network module based on splines to flexibly and robustly model distances. Modeling distances to various genomic landmarks with spline transformations significantly increased state-of-the-art prediction accuracy of in vivo RNA-binding protein binding sites for 120 out of 123 proteins. We also developed a deep neural network for human splice branchpoint prediction based on spline transformations that outperformed the current best, already distance-based, machine learning model. Compared to piecewise linear transformation, as obtained by composition of rectified linear units, spline transformation yields higher prediction accuracy as well as faster and more robust training. As spline transformation can be applied to quantities beyond distances, such as methylation or conservation, we foresee it as a versatile component in the genomics deep learning toolbox. Availability and implementation Spline transformation is implemented as a Keras layer in the CONCISE python package: https://github.com/gagneurlab/concise. Analysis code is available at https://github.com/gagneurlab/Manuscript_Avsec_Bioinformatics_2017. Contact avsec@in.tum.de or gagneur@in.tum.de Supplementary information Supplementary data are available at Bioinformatics online. PMID:29155928
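
    The idea can be demonstrated numerically in a few lines (toy data, numpy and patsy rather than the CONCISE Keras layer): expanding a raw distance feature in a B-spline basis lets an ordinary linear model capture a smooth positional effect that the scalar distance alone cannot.

      import numpy as np
      from patsy import dmatrix

      rng = np.random.default_rng(1)
      dist = rng.uniform(0, 5000, 1000)          # distance to a landmark (toy)
      y = np.sin(dist / 500.0) + 0.1 * rng.standard_normal(1000)  # toy target

      # Spline basis (6 degrees of freedom) versus the raw scalar feature.
      X_spline = np.asarray(dmatrix("bs(d, df=6, degree=3) - 1", {"d": dist}))
      X_linear = dist[:, None]

      for X in (X_linear, X_spline):
          X1 = np.hstack([X, np.ones((len(X), 1))])       # add intercept
          beta, *_ = np.linalg.lstsq(X1, y, rcond=None)   # least-squares fit
          resid = y - X1 @ beta
          print(X.shape[1], "features, RMSE:", np.sqrt(np.mean(resid ** 2)))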

  20. Comparative Study on Code-based Linear Evaluation of an Existing RC Building Damaged during 1998 Adana-Ceyhan Earthquake

    NASA Astrophysics Data System (ADS)

    Toprak, A. Emre; Gülay, F. Gülten; Ruge, Peter

    2008-07-01

    Determination of the seismic performance of existing buildings has become a key topic in structural analysis after recent earthquakes (e.g. the Izmit and Duzce Earthquakes in 1999, the Kobe Earthquake in 1995 and the Northridge Earthquake in 1994). Given the need for precise assessment tools to determine seismic performance levels, most earthquake-prone countries try to include performance-based assessment in their seismic codes. Turkish Earthquake Code 2007 (TEC'07), which was put into effect in March 2007, likewise introduced linear and non-linear assessment procedures to be applied prior to building retrofitting. In this paper, a comparative study of the code-based seismic assessment of RC buildings with linear static methods of analysis is performed on an existing RC building. The basic principles governing seismic performance evaluations of existing RC buildings according to Eurocode 8 and TEC'07 are outlined and compared. The procedure is then applied to a real case-study building that was exposed to the 1998 Adana-Ceyhan Earthquake in Turkey, a seismic action of Ms = 6.3 with a maximum ground acceleration of 0.28 g. It is a six-storey RC residential building with a total height of 14.65 m, composed of orthogonal frames, symmetrical in the y direction, with no significant structural irregularities. The rectangular plan dimensions are 16.40 m × 7.80 m = 127.92 m², with five spans in the x and two spans in the y direction. The building was reported to have been moderately damaged during the 1998 earthquake, and the authorities recommended retrofitting by adding shear walls to the system. The computations show that the linear methods of analysis of Eurocode 8 and TEC'07 independently produce similar performance levels (collapse) for the critical storey of the structure. The base shear computed according to Eurocode 8 is much higher than that required by the Turkish Earthquake Code, even though the selected ground conditions have the same characteristics; the main reason is that the ordinate of the horizontal elastic response spectrum in Eurocode 8 is increased by the soil factor. In the TEC'07 force-based linear assessment, the seismic demands at cross-sections are checked against residual moment capacities, whereas for the Eurocode safety verifications the chord rotations of primary ductile elements must be checked. Nevertheless, the demand curvatures from the linear methods of analysis of Eurocode 8 and TEC'07 are very similar.

  1. Collisional dependence of Alfvén mode saturation in tokamaks

    DOE PAGES

    Zhou, Muni; White, Roscoe

    2016-10-26

    Saturation of Alfvén modes driven unstable by a distribution of high energy particles as a function of collisionality is investigated with a guiding center code, using numerical eigenfunctions produced by linear theory and numerical high energy particle distributions. The most important resonance is found and it is shown that when the resonance domain is bounded, not allowing particles to collisionlessly escape, the saturation amplitude is given by the balance of the resonance mixing time with the time for nearby particles to collisionally diffuse across the resonance width. Finally, saturation amplitudes are in agreement with theoretical predictions as long as the mode amplitude is not so large that it produces stochastic loss from the resonance domain.

  3. Noise normalization and windowing functions for VALIDAR in wind parameter estimation

    NASA Astrophysics Data System (ADS)

    Beyon, Jeffrey Y.; Koch, Grady J.; Li, Zhiwen

    2006-05-01

    The wind parameter estimates from a state-of-the-art 2-μm coherent lidar system located at NASA Langley, Virginia, named VALIDAR (validation lidar), were compared after normalizing the noise by its estimated power spectra via the periodogram and the linear predictive coding (LPC) scheme. The power spectra and the Doppler shift estimates were the main parameter estimates compared. Different types of windowing functions were implemented in the VALIDAR data-processing algorithm and their impact on the wind parameter estimates was observed. Time- and frequency-independent windowing functions, such as rectangular, Hanning and Kaiser-Bessel, were compared with a time- and frequency-dependent apodized windowing function. A brief account of ongoing nonlinear-algorithm development for Doppler shift correction follows.
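
    The windowing comparison can be reproduced in miniature with scipy (an illustrative sketch; the sample rate and test tone are arbitrary rather than VALIDAR parameters):

      import numpy as np
      from scipy.signal import periodogram

      fs = 1.0e6                                   # sample rate (illustrative)
      t = np.arange(4096) / fs
      rng = np.random.default_rng(2)
      x = np.sin(2 * np.pi * 1.0e5 * t) + 0.1 * rng.standard_normal(t.size)

      # Rectangular, Hann, and Kaiser-Bessel windows for the same record.
      for win in ("boxcar", "hann", ("kaiser", 8.6)):
          f, pxx = periodogram(x, fs=fs, window=win)
          print(win, "-> spectral peak at", f[np.argmax(pxx)], "Hz")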

  4. The NASA-LeRC wind turbine sound prediction code

    NASA Technical Reports Server (NTRS)

    Viterna, L. A.

    1981-01-01

    Development of the wind turbine sound prediction code began as part of an effort to understand and reduce the noise generated by Mod-1. Tone sound levels predicted with this code are in good agreement with measured data taken in the vicinity of the Mod-1 wind turbine (less than 2 rotor diameters). Comparison in the far field indicates that propagation effects due to terrain and atmospheric conditions may amplify the actual sound levels by 6 dB. Parametric analysis using the code shows that the predominant contributors to Mod-1 rotor noise are (1) the velocity deficit in the wake of the support tower, (2) the high rotor speed, and (3) off-optimum operation.

  5. Navier-Stokes and Comprehensive Analysis Performance Predictions of the NREL Phase VI Experiment

    NASA Technical Reports Server (NTRS)

    Duque, Earl P. N.; Burklund, Michael D.; Johnson, Wayne

    2003-01-01

    A vortex lattice code, CAMRAD II, and a Reynolds-averaged Navier-Stokes code, OVERFLOW-D2, were used to predict the aerodynamic performance of a two-bladed horizontal-axis wind turbine. All computations were compared with experimental data collected at the NASA Ames Research Center 80- by 120-Foot Wind Tunnel. Computations were performed for both axial and yawed operating conditions. Various stall delay models and dynamic stall models were used by the CAMRAD II code. Comparisons between the experimental data and computed aerodynamic loads show that the OVERFLOW-D2 code can accurately predict the power and spanwise loading of a wind turbine rotor.

  6. Monte Carlo charged-particle tracking and energy deposition on a Lagrangian mesh.

    PubMed

    Yuan, J; Moses, G A; McKenty, P W

    2005-10-01

    A Monte Carlo algorithm for alpha particle tracking and energy deposition on a cylindrical computational mesh in a Lagrangian hydrodynamics code used for inertial confinement fusion (ICF) simulations is presented. The straight-line approximation is used to follow the propagation of "Monte Carlo particles", which represent collections of alpha particles generated from thermonuclear deuterium-tritium (DT) reactions. Energy deposition in the plasma is modeled by the continuous slowing down approximation. The scheme addresses various aspects arising in the coupling of Monte Carlo tracking with Lagrangian hydrodynamics, such as non-orthogonal, severely distorted mesh cells, particle relocation on the moving mesh, and particle relocation after rezoning. A comparison with the flux-limited multigroup diffusion transport method is presented for a polar direct drive target design for the National Ignition Facility. Simulations show that the Monte Carlo transport method predicts earlier ignition than the diffusion method and generates a higher hot-spot temperature. Nearly linear speed-up is achieved for multi-processor parallel simulations.
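
    The following toy 1-D sketch (with invented material constants, far simpler than the authors' cylindrical Lagrangian implementation) shows the two ingredients named above: straight-line tracking between cell boundaries and energy deposition by the continuous slowing down approximation.

      import numpy as np

      rng = np.random.default_rng(3)
      nz, dz = 100, 0.01            # 1-D mesh: 100 cells of width dz
      edep = np.zeros(nz)           # energy deposited in each cell
      dEds = 5.0                    # toy constant stopping power (MeV / length)

      for _ in range(10_000):       # "Monte Carlo particles"
          mu = rng.uniform(0.1, 1.0)        # direction cosine toward +z
          E, z, cell = 3.5, 0.0, 0          # 3.5 MeV: DT alpha birth energy
          while E > 0.0 and cell < nz:
              s = ((cell + 1) * dz - z) / mu    # straight-line path to next face
              dE = min(E, dEds * s)             # continuous slowing down
              edep[cell] += dE
              E -= dE
              z = (cell + 1) * dz
              cell += 1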

  7. Dependence of neoclassical toroidal viscosity on the poloidal spectrum of applied nonaxisymmetric fields

    DOE PAGES

    Logan, Nikolas C.; Park, Jong -Kyu; Paz-Soldan, Carloa; ...

    2016-02-05

    This paper presents a single mode model that accurately predicts the coupling of applied nonaxisymmetric fields to the plasma response that induces neoclassical toroidal viscosity (NTV) torque in DIII-D H-mode plasmas. The torque is measured and modeled to have a sinusoidal dependence on the relative phase of multiple nonaxisymmetric field sources, including a minimum in which large amounts of nonaxisymmetric drive is decoupled from the NTV torque. This corresponds to the coupling and decoupling of the applied field to a NTV-driving mode spectrum. Modeling using the perturbed equilibrium nonambipolar transport (PENT) code confirms an effective single mode coupling between the applied field and the resultant torque, despite its inherent nonlinearity. Lastly, the coupling to the NTV mode is shown to have a similar dependence on the relative phasing as that of the IPEC dominant mode, providing a physical basis for the efficacy of this linear metric in predicting error field correction optima in NTV dominated regimes.

  8. Dependence of neoclassical toroidal viscosity on the poloidal spectrum of applied nonaxisymmetric fields

    NASA Astrophysics Data System (ADS)

    Logan, N. C.; Park, J.-K.; Paz-Soldan, C.; Lanctot, M. J.; Smith, S. P.; Burrell, K. H.

    2016-03-01

    This paper presents a single mode model that accurately predicts the coupling of applied nonaxisymmetric fields to the plasma response that induces neoclassical toroidal viscosity (NTV) torque in DIII-D H-mode plasmas. The torque is measured and modeled to have a sinusoidal dependence on the relative phase of multiple nonaxisymmetric field sources, including a minimum in which large amounts of nonaxisymmetric drive is decoupled from the NTV torque. This corresponds to the coupling and decoupling of the applied field to a NTV-driving mode spectrum. Modeling using the perturbed equilibrium nonambipolar transport (PENT) code confirms an effective single mode coupling between the applied field and the resultant torque, despite its inherent nonlinearity. The coupling to the NTV mode is shown to have a similar dependence on the relative phasing as that of the IPEC dominant mode, providing a physical basis for the efficacy of this linear metric in predicting error field correction optima in NTV dominated regimes.

  9. Prediction of another semimetallic silicene allotrope with Dirac fermions

    NASA Astrophysics Data System (ADS)

    Wu, Haiping; Qian, Yan; Du, Zhengwei; Zhu, Renzhu; Kan, Erjun; Deng, Kaiming

    2017-11-01

    Materials with a Dirac point are remarkable because their charge carriers are massless and move with an effective speed of light. However, among the predicted two-dimensional silicon allotropes with a Dirac point, none has been directly confirmed by experiment. This fact motivates us to search for other two-dimensional silicon allotropes. As a result, another stable single-atomic-layer silicon allotrope is found in this work with the help of the CALYPSO code. This silicene allotrope is composed of eight-membered rings linked by Si-Si bonds in a buckled formation. The electronic-structure calculations reveal that it behaves as a nodal-line semimetal with a linear energy dispersion relation near the Fermi surface. Notably, ab initio molecular dynamics simulations show that the original atomic configuration is retained even at an extremely high temperature of 1000 K. Additionally, hydrogenation can induce a semimetal-semiconductor transition in this silicene allotrope. We hope this work can expand the family of single-atomic-layer silicon allotropes with special applications.

  10. Blade Heat Transfer Measurements and Prediction in a Transonic Turbine Cascade

    NASA Technical Reports Server (NTRS)

    Giel, P. W.; VanFossen, G. J.; Boyle, R. J.; Thurman, D. R.; Civinskas, K. C.

    1999-01-01

    Detailed heat transfer measurements and predictions are given for a turbine rotor with 136 deg of turning and an axial chord of 12.7 cm. Data were obtained for inlet Reynolds numbers of 0.5 and 1.0 × 10^6, for isentropic exit Mach numbers of 1.0 and 1.3, and for inlet turbulence intensities of 0.25% and 7.0%. Measurements were made in a linear cascade having a highly three-dimensional flow field resulting from thick inlet boundary layers. The purpose of the work is to provide benchmark quality data for three-dimensional CFD code and model verification. Data were obtained by a steady-state technique using a heated, isothermal blade. Heat fluxes were determined from a calibrated resistance layer in conjunction with a surface temperature measured by calibrated liquid crystals. The results show the effects of strong secondary vortical flows, laminar-to-turbulent transition, shock impingement, and increased inlet turbulence on the surface heat transfer.

  11. Preliminary analyses of space radiation protection for lunar base surface systems

    NASA Technical Reports Server (NTRS)

    Nealy, John E.; Wilson, John W.; Townsend, Lawrence W.

    1989-01-01

    Radiation shielding analyses are performed for candidate lunar base habitation modules. The study primarily addresses potential hazards due to contributions from the galactic cosmic rays. The NASA Langley Research Center's high energy nucleon and heavy ion transport codes are used to compute the propagation of radiation through conventional and regolith shield materials. Computed values of linear energy transfer are converted to biological dose-equivalent using quality factors established by the International Commission on Radiological Protection. Special fluxes of heavy charged particles and corresponding dosimetric quantities are computed for a series of thicknesses in various shield media and are used as an input data base for algorithms pertaining to specific shielded geometries. Dosimetric results are presented as isodose contour maps of shielded configuration interiors. The dose predictions indicate that shielding requirements are substantial, and an abbreviated uncertainty analysis shows that better definition of the space radiation environment as well as improvement in nuclear interaction cross-section data can greatly increase the accuracy of shield requirement predictions.

  12. Gear crack propagation investigations

    NASA Technical Reports Server (NTRS)

    Lewicki, David G.; Ballarini, Roberto

    1996-01-01

    Analytical and experimental studies were performed to investigate the effect of gear rim thickness on crack propagation life. The FRANC (FRacture ANalysis Code) computer program was used to simulate crack propagation. The FRANC program used principles of linear elastic fracture mechanics, finite element modeling, and a unique re-meshing scheme to determine crack tip stress distributions, estimate stress intensity factors, and model crack propagation. Various fatigue crack growth models were used to estimate crack propagation life based on the calculated stress intensity factors. Experimental tests were performed in a gear fatigue rig to validate predicted crack propagation results. Test gears were installed with special crack propagation gages in the tooth fillet region to measure bending fatigue crack growth. Good correlation between predicted and measured crack growth was achieved when the fatigue crack closure concept was introduced into the analysis. As the gear rim thickness decreased, the compressive cyclic stress in the gear tooth fillet region increased. This retarded crack growth and increased the number of crack propagation cycles to failure.
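
    The abstract does not name the specific fatigue crack growth models used; as an illustration of the life-estimation step, the sketch below integrates the widely used Paris law, da/dN = C(ΔK)^m, with generic placeholder constants rather than values from the study.

      import numpy as np

      C, m = 6.9e-12, 3.0        # Paris constants (illustrative; da/dN in m/cycle)
      Y, dS = 1.12, 200.0        # geometry factor and stress range (MPa)
      a, a_final, da = 1.0e-3, 1.0e-2, 1.0e-5    # crack lengths and step (m)

      cycles = 0.0
      while a < a_final:
          dK = Y * dS * np.sqrt(np.pi * a)       # stress intensity range (MPa*sqrt(m))
          cycles += da / (C * dK ** m)           # dN = da / (C * dK**m)
          a += da
      print(f"estimated propagation life: {cycles:.3g} cycles")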

  13. Numerical simulation of mushrooms during freezing using the FEM and an enthalpy-Kirchhoff formulation

    NASA Astrophysics Data System (ADS)

    Santos, M. V.; Lespinard, A. R.

    2011-12-01

    The shelf life of mushrooms is very limited since they are susceptible to physical and microbial attack; they are therefore usually blanched and immediately frozen for commercial purposes. The aim of this work was to develop a numerical model using the finite element technique to predict freezing times of mushrooms, considering the actual shape of the product. The original heat transfer equation was reformulated using a combined enthalpy-Kirchhoff formulation, and an in-house computational program was developed in Matlab 6.5 (MathWorks, Natick, Massachusetts), given the difficulties encountered when simulating this non-linear problem in commercial software. Digital images were used to generate the irregular contour and the domain discretization. The numerical predictions agreed with the experimental time-temperature curves during freezing of mushrooms (maximum absolute error <3.2°C), providing accurate results with minimal computer processing times. The codes were then applied to determine the required processing times for different operating conditions (external fluid temperatures and surface heat transfer coefficients).
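
    A compact 1-D analogue of the enthalpy formulation may help fix ideas (a sketch with invented properties and boundary conditions, far simpler than the paper's 2-D finite-element model): the volumetric enthalpy is marched explicitly, and temperature is recovered through an enthalpy-temperature relation with a single freezing point.

      import numpy as np

      nx, dx, dt = 50, 1.0e-3, 0.1          # 50 nodes, 1 mm spacing, 0.1 s step
      rho, cp, kth = 1000.0, 3500.0, 0.5    # toy density, heat capacity, conductivity
      Lf, Tf = 3.0e5, -1.0                  # latent heat (J/kg), freezing point (C)

      def temperature(H):
          """Invert the enthalpy-temperature relation (H = 0: fully frozen at Tf).

          One cp is used for both phases to keep the sketch short.
          """
          T = np.where(H < 0.0, Tf + H / (rho * cp), Tf)        # frozen / mushy
          return np.where(H > rho * Lf, Tf + (H - rho * Lf) / (rho * cp), T)

      T0 = 20.0                                      # initial product temperature (C)
      H = np.full(nx, rho * Lf + rho * cp * (T0 - Tf))
      for _ in range(50_000):                        # ~83 min of process time
          T = temperature(H)
          T[0] = T[-1] = -30.0                       # freezing-medium boundaries
          lap = (np.roll(T, -1) - 2.0 * T + np.roll(T, 1)) / dx**2
          H[1:-1] += dt * kth * lap[1:-1]            # dH/dt = div(k grad T)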

  14. Vector Potential Generation for Numerical Relativity Simulations

    NASA Astrophysics Data System (ADS)

    Silberman, Zachary; Faber, Joshua; Adams, Thomas; Etienne, Zachariah; Ruchlin, Ian

    2017-01-01

    Many different numerical codes are employed in studies of highly relativistic magnetized accretion flows around black holes. Based on the formalisms each uses, some codes evolve the magnetic field vector B, while others evolve the magnetic vector potential A, the two being related by the curl: B=curl(A). Here, we discuss how to generate vector potentials corresponding to specified magnetic fields on staggered grids, a surprisingly difficult task on finite cubic domains. The code we have developed solves this problem in two ways: a brute-force method, whose scaling is nearly linear in the number of grid cells, and a direct linear algebra approach. We discuss the success both algorithms have in generating smooth vector potential configurations and how both may be extended to more complicated cases involving multiple mesh-refinement levels. NSF ACI-1550436
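
    For orientation, the forward operation is easy to state in numpy (a schematic with random edge data, not the paper's inversion code): differencing edge-centered A components onto cell faces yields a face-centered B = curl(A) whose discrete divergence vanishes to round-off - precisely the constraint that makes the inverse problem delicate.

      import numpy as np

      n, h = 32, 1.0 / 32
      rng = np.random.default_rng(7)
      Ax = rng.standard_normal((n, n + 1, n + 1))   # edge-centered components:
      Ay = rng.standard_normal((n + 1, n, n + 1))   # cell count along own axis,
      Az = rng.standard_normal((n + 1, n + 1, n))   # node counts transverse

      # Face-centered B = curl(A), one finite difference per term.
      Bx = (Az[:, 1:, :] - Az[:, :-1, :]) / h - (Ay[:, :, 1:] - Ay[:, :, :-1]) / h
      By = (Ax[:, :, 1:] - Ax[:, :, :-1]) / h - (Az[1:, :, :] - Az[:-1, :, :]) / h
      Bz = (Ay[1:, :, :] - Ay[:-1, :, :]) / h - (Ax[:, 1:, :] - Ax[:, :-1, :]) / h

      # The discrete divergence over every cell telescopes to zero.
      div = ((Bx[1:] - Bx[:-1]) + (By[:, 1:] - By[:, :-1])
             + (Bz[:, :, 1:] - Bz[:, :, :-1])) / h
      print("max |div B| =", np.abs(div).max())      # round-off only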

  15. Numerical analysis of the angular motion of a neutrally buoyant spheroid in shear flow at small Reynolds numbers.

    PubMed

    Rosén, T; Einarsson, J; Nordmark, A; Aidun, C K; Lundell, F; Mehlig, B

    2015-12-01

    We numerically analyze the rotation of a neutrally buoyant spheroid in a shear flow at small shear Reynolds number. Using direct numerical stability analysis of the coupled nonlinear particle-flow problem, we compute the linear stability of the log-rolling orbit at small shear Reynolds number Re(a). As Re(a)→0 and as the box size of the system tends to infinity, we find good agreement between the numerical results and earlier analytical predictions valid to linear order in Re(a) for the case of an unbounded shear. The numerical stability analysis indicates that there are substantial finite-size corrections to the analytical results obtained for the unbounded system. We also compare the analytical results to results of lattice Boltzmann simulations to analyze the stability of the tumbling orbit at shear Reynolds numbers of order unity. Theory for an unbounded system at infinitesimal shear Reynolds number predicts a bifurcation of the tumbling orbit at aspect ratio λ(c)≈0.137 below which tumbling is stable (as well as log rolling). The simulation results show a bifurcation line in the λ-Re(a) plane that reaches λ≈0.1275 at the smallest shear Reynolds number (Re(a)=1) at which we could simulate with the lattice Boltzmann code, in qualitative agreement with the analytical results.

  16. Unsteady Aerodynamic Models for Turbomachinery Aeroelastic and Aeroacoustic Applications

    NASA Technical Reports Server (NTRS)

    Verdon, Joseph M.; Barnett, Mark; Ayer, Timothy C.

    1995-01-01

    Theoretical analyses and computer codes are being developed for predicting compressible unsteady inviscid and viscous flows through blade rows of axial-flow turbomachines. Such analyses are needed to determine the impact of unsteady flow phenomena on the structural durability and noise generation characteristics of the blading. The emphasis has been placed on developing analyses based on asymptotic representations of unsteady flow phenomena. Thus, high Reynolds number flows driven by small amplitude unsteady excitations have been considered. The resulting analyses should apply in many practical situations and lead to a better understanding of the relevant flow physics. In addition, they will be efficient computationally, and therefore, appropriate for use in aeroelastic and aeroacoustic design studies. Under the present effort, inviscid interaction and linearized inviscid unsteady flow models have been formulated, and inviscid and viscid prediction capabilities for subsonic steady and unsteady cascade flows have been developed. In this report, we describe the linearized inviscid unsteady analysis, LINFLO, the steady inviscid/viscid interaction analysis, SFLOW-IVI, and the unsteady viscous layer analysis, UNSVIS. These analyses are demonstrated via application to unsteady flows through compressor and turbine cascades that are excited by prescribed vortical and acoustic excitations and by prescribed blade vibrations. Recommendations are also given for the future research needed for extending and improving the foregoing asymptotic analyses, and to meet the goal of providing efficient inviscid/viscid interaction capabilities for subsonic and transonic unsteady cascade flows.

  17. Ab initio calculations of the structural, electronic, thermodynamic and thermal properties of BaSe1-x Te x alloys

    NASA Astrophysics Data System (ADS)

    Drablia, S.; Boukhris, N.; Boulechfar, R.; Meradji, H.; Ghemid, S.; Ahmed, R.; Omran, S. Bin; El Haj Hassan, F.; Khenata, R.

    2017-10-01

    The alkaline earth metal chalcogenides are being intensively investigated because of their advanced technological applications, for example in photoluminescent devices. In this study, the structural, electronic, thermodynamic and thermal properties of the BaSe1-x Te x alloys at compositions x = 0, 0.25, 0.50, 0.75 and 1 are investigated. The full-potential linearized augmented plane wave plus local orbital method, formulated within density functional theory, was used to perform the total energy calculations. The effect of composition on the lattice parameters and bulk modulus, as well as on the band gap energy, is analyzed. We find that the calculated lattice constants deviate from Vegard's law and that the bulk modulus deviates from a linear concentration dependence. We also carried out a microscopic analysis of the origin of the band gap bowing parameter. Furthermore, the thermodynamic stability of the considered alloys was explored through calculation of the critical miscibility temperature. The quasi-harmonic Debye model, as implemented in the Gibbs code, was used to predict the thermal properties of the BaSe1-x Te x alloys; these investigations constitute our first theoretical predictions for these alloys.
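
    For reference, deviations from Vegard's law are conventionally quantified with a quadratic bowing term (standard textbook form, not an equation quoted from the paper):

      % Lattice constant of the pseudobinary alloy; b = 0 is the Vegard limit.
      a(x) = (1 - x)\,a_{\mathrm{BaSe}} + x\,a_{\mathrm{BaTe}} - b\,x(1 - x)

    An analogous expression, with its own bowing parameter, is commonly fitted to the band gap energy Eg(x).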

  18. Simulating the performance of a distance-3 surface code in a linear ion trap

    NASA Astrophysics Data System (ADS)

    Trout, Colin J.; Li, Muyuan; Gutiérrez, Mauricio; Wu, Yukai; Wang, Sheng-Tao; Duan, Luming; Brown, Kenneth R.

    2018-04-01

    We explore the feasibility of implementing a small surface code with 9 data qubits and 8 ancilla qubits, commonly referred to as surface-17, using a linear chain of 171Yb+ ions. Two-qubit gates can be performed between any two ions in the chain with gate time increasing linearly with ion distance. Measurement of the ion state by fluorescence requires that the ancilla qubits be physically separated from the data qubits to avoid errors on the data due to scattered photons. We minimize the time required to measure one round of stabilizers by optimizing the mapping of the two-dimensional surface code to the linear chain of ions. We develop a physically motivated Pauli error model that allows for fast simulation and captures the key sources of noise in an ion trap quantum computer including gate imperfections and ion heating. Our simulations showed a consistent requirement of a two-qubit gate fidelity of ≥99.9% for the logical memory to have a better fidelity than physical two-qubit operations. Finally, we perform an analysis of the error subsets from the importance sampling method used to bound the logical error rates to gain insight into which error sources are particularly detrimental to error correction.

  19. Integrated system for production of neutronics and photonics calculational constants. Program SIGMA1 (Version 77-1): Doppler broaden evaluated cross sections in the Evaluated Nuclear Data File/Version B (ENDF/B) format

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cullen, D.E.

    1977-01-12

    A code, SIGMA1, has been designed to Doppler broaden evaluated cross sections in the ENDF/B format. The code can only be applied to tabulated data that vary linearly in energy and cross section between tabulated points. This report describes the methods used in the code and serves as a user's guide to the code.

  20. Simulation of nonlinear propagation of biomedical ultrasound using PZFlex and the KZK Texas code

    NASA Astrophysics Data System (ADS)

    Qiao, Shan; Jackson, Edward; Coussios, Constantin-C.; Cleveland, Robin

    2015-10-01

    In biomedical ultrasound, nonlinear acoustics can be important in both diagnostic and therapeutic applications, and robust simulation tools are needed both in the design process and for day-to-day use such as treatment planning. For most biomedical applications the ultrasound sources generate focused sound beams of finite amplitude. The KZK equation is a common model, as it accounts for nonlinearity, absorption and paraxial diffraction, and a number of solvers are available, primarily developed by research groups. We compare the predictions of the KZK Texas code (a finite-difference time-domain algorithm) to an FEM-based commercial software package, PZFlex. PZFlex solves the continuity and momentum conservation equations, with nonlinearity incorporated as a correction to the equation of state, using an incrementally linear, second-order-accurate, explicit time-domain algorithm. Nonlinear ultrasound beams from two transducers, driven at 1 MHz and 3.3 MHz respectively, were simulated by both the KZK Texas code and PZFlex, and the pressure field was also measured by a fibre-optic hydrophone to validate the models. Further simulations were carried out over a wide range of frequencies. The comparisons showed good agreement in the fundamental frequency for PZFlex, the KZK Texas code and the experiments. For the harmonic components, the KZK Texas code was in good agreement with measurements, but PZFlex underestimated the amplitudes: by 32% for the 2nd harmonic and 66% for the 3rd harmonic. The underestimation of harmonics by PZFlex became more significant as the fundamental frequency increased. Furthermore, non-physical oscillations in the axial profiles of the harmonics occurred in the PZFlex results when the amplitudes were relatively low. These results suggest that careful benchmarking of nonlinear simulations is important.
