Science.gov

Sample records for distributed coding model

  1. Application of the TEMPEST computer code for simulating hydrogen distribution in model containment structures. [PWR; BWR

    SciTech Connect

    Trent, D.S.; Eyler, L.L.

    1982-09-01

    In this study several aspects of simulating hydrogen distribution in geometric configurations relevant to reactor containment structures were investigated using the TEMPEST computer code. Of particular interest was the performance of the TEMPEST turbulence model in a density-stratified environment. Computed results illustrated that the TEMPEST numerical procedures predicted the measured phenomena with good accuracy under a variety of conditions and that the turbulence model used is a viable approach in complex turbulent flow simulation.

  2. Performance Modeling and Measurement of Parallelized Code for Distributed Shared Memory Multiprocessors

    NASA Technical Reports Server (NTRS)

    Waheed, Abdul; Yan, Jerry

    1998-01-01

    This paper presents a model to evaluate the performance and overhead of parallelizing sequential code using compiler directives for multiprocessing on distributed shared memory (DSM) systems. With the increasing popularity of shared-address-space architectures, it is essential to understand their performance impact on programs that benefit from shared memory multiprocessing. We present a simple model to characterize the performance of programs that are parallelized using compiler directives for shared memory multiprocessing. We parallelized the sequential implementation of the NAS benchmarks using native Fortran77 compiler directives for an Origin2000, which is a DSM system based on a cache-coherent Non-Uniform Memory Access (ccNUMA) architecture. We report measurement-based performance of these parallelized benchmarks from four perspectives: efficacy of the parallelization process; scalability; parallelization overhead; and comparison with hand-parallelized and -optimized versions of the same benchmarks. Our results indicate that sequential programs can conveniently be parallelized for DSM systems using compiler directives, but realizing performance gains as predicted by the performance model depends primarily on minimizing architecture-specific data locality overhead.
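
    For illustration only (the paper's exact model is not reproduced in this abstract), a minimal sketch of such a performance model might charge a serial term, an ideally divided parallel term, and an architecture-specific locality overhead that grows with the processor count; all names and numbers below are assumptions, not values from the paper.

      # Hypothetical sketch of a directive-parallelization performance model.
      def predicted_time(t_serial, t_parallel, p, overhead_per_proc=0.0):
          """Predicted wall-clock time on p processors of a DSM system."""
          return t_serial + t_parallel / p + overhead_per_proc * p

      def speedup(t_serial, t_parallel, p, overhead_per_proc=0.0):
          """Speedup relative to the single-processor execution time."""
          t1 = t_serial + t_parallel
          return t1 / predicted_time(t_serial, t_parallel, p, overhead_per_proc)

      print(speedup(5.0, 95.0, 32))        # ~12.5 with no locality overhead
      print(speedup(5.0, 95.0, 32, 0.05))  # ~10.5 once a modest overhead is charged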

  3. Model Children's Code.

    ERIC Educational Resources Information Center

    New Mexico Univ., Albuquerque. American Indian Law Center.

    The Model Children's Code was developed to provide a legally correct model code that American Indian tribes can use to enact children's codes that fulfill their legal, cultural and economic needs. Code sections cover the court system, jurisdiction, juvenile offender procedures, minor-in-need-of-care, and termination. Almost every Code section is…

  4. 3-D model-based frame interpolation for distributed video coding of static scenes.

    PubMed

    Maitre, Matthieu; Guillemot, Christine; Morin, Luce

    2007-05-01

    This paper addresses the problem of side information extraction for distributed coding of videos captured by a camera moving in a 3-D static environment. Examples of targeted applications are augmented reality, remote-controlled robots operating in hazardous environments, or remote exploration by drones. It explores the benefits of the structure-from-motion paradigm for distributed coding of this type of video content. Two interpolation methods constrained by the scene geometry, based either on block matching along epipolar lines or on 3-D mesh fitting, are first developed. These techniques are based on a robust algorithm for sub-pel matching of feature points, which leads to semi-dense correspondences between key frames. However, their rate-distortion (RD) performances are limited by misalignments between the side information and the actual Wyner-Ziv (WZ) frames due to the assumption of linear motion between key frames. To cope with this problem, two feature point tracking techniques are introduced, which recover the camera parameters of the WZ frames. A first technique, in which the frames remain encoded separately, performs tracking at the decoder and leads to significant RD performance gains. A second technique further improves the RD performances by allowing a limited tracking at the encoder. As an additional benefit, statistics on tracks allow the encoder to adapt the key frame frequency to the video motion content.

  5. Distributed transform coding via source-splitting

    NASA Astrophysics Data System (ADS)

    Yahampath, Pradeepa

    2012-12-01

    Transform coding (TC) is one of the best known practical methods for quantizing high-dimensional vectors. In this article, a practical approach to distributed TC of jointly Gaussian vectors is presented. This approach, referred to as source-split distributed transform coding (SP-DTC), can be used to easily implement two terminal transform codes for any given rate-pair. The main idea is to apply source-splitting using orthogonal-transforms, so that only Wyner-Ziv (WZ) quantizers are required for compression of transform coefficients. This approach however requires optimizing the bit allocation among dependent sets of WZ quantizers. In order to solve this problem, a low-complexity tree-search algorithm based on analytical models for transform coefficient quantization is developed. A rate-distortion (RD) analysis of SP-DTCs for jointly Gaussian sources is presented, which indicates that these codes can significantly outperform the practical alternative of independent TC of each source, whenever there is a strong correlation between the sources. For practical implementation of SP-DTCs, the idea of using conditional entropy constrained (CEC) quantizers followed by Slepian-Wolf coding is explored. Experimental results obtained with SP-DTC designs based on both CEC scalar quantizers and CEC trellis-coded quantizers demonstrate that actual implementations of SP-DTCs can achieve RD performance close to the analytically predicted limits.

  6. A distributed particle simulation code in C++

    SciTech Connect

    Forslund, D.W.; Wingate, C.A.; Ford, P.S.; Junkins, J.S.; Pope, S.C.

    1992-03-01

    Although C++ has been successfully used in a variety of computer science applications, it has just recently begun to be used in scientific applications. We have found that the object-oriented properties of C++ lend themselves well to scientific computations by making maintenance of the code easier, by making the code easier to understand, and by providing a better paradigm for distributed memory parallel codes. We describe here aspects of developing a particle plasma simulation code using object-oriented techniques for use in a distributed computing environment. We initially designed and implemented the code for serial computation and then used the distributed programming toolkit ISIS to run it in parallel. In this connection we describe some of the difficulties presented by using C++ for doing parallel and scientific computation.

  7. Implementation of a double Gaussian source model for the BEAMnrc Monte Carlo code and its influence on small fields dose distributions.

    PubMed

    Doerner, Edgardo; Caprile, Paola

    2016-01-01

    The shape of the radiation source of a linac has a direct impact on the delivered dose distributions, especially in the case of small radiation fields. Traditionally, a single Gaussian source model is used to describe the electron beam hitting the target, although different studies have shown that the shape of the electron source can be better described by a mixed distribution consisting of two Gaussian components. Therefore, this study presents the implementation of a double Gaussian source model into the BEAMnrc Monte Carlo code. The impact of the double Gaussian source model for a 6 MV beam is assessed through the comparison of different dosimetric parameters calculated using a single Gaussian source, previously commissioned, the new double Gaussian source model and measurements, performed with a diode detector in a water phantom. It was found that the new source can be easily implemented into the BEAMnrc code and that it improves the agreement between measurements and simulations for small radiation fields. The impact of the change in source shape becomes less important as the field size increases and for increasing distance of the collimators to the source, as expected. In particular, for radiation fields delivered using stereotactic collimators located at a distance of 59 cm from the source, it was found that the effect of the double Gaussian source on the calculated dose distributions is negligible, even for radiation fields smaller than 5 mm in diameter. Accurate determination of the shape of the radiation source allows us to improve the Monte Carlo modeling of the linac, especially for treatment modalities such as IMRT, where the radiation beams used can be very narrow, becoming more sensitive to the shape of the source. PMID:27685141
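
    As an illustration of the source description discussed above, the following sketch samples the lateral offset of primary electrons from a mixture of two Gaussians; the weights and widths are placeholder values, not the commissioned parameters, and this is not BEAMnrc input syntax.

      import random

      def sample_source_offset(w1=0.7, sigma1=0.06, sigma2=0.25):
          """Sample an (x, y) offset [cm] of a primary electron from a two-Gaussian
          mixture: weight w1 with width sigma1, weight (1 - w1) with width sigma2."""
          sigma = sigma1 if random.random() < w1 else sigma2
          return random.gauss(0.0, sigma), random.gauss(0.0, sigma)

      offsets = [sample_source_offset() for _ in range(100000)]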

  8. Comparison of depth-dose distributions of proton therapeutic beams calculated by means of logical detectors and ionization chamber modeled in Monte Carlo codes

    NASA Astrophysics Data System (ADS)

    Pietrzak, Robert; Konefał, Adam; Sokół, Maria; Orlef, Andrzej

    2016-08-01

    The success of proton therapy depends strongly on the precision of treatment planning. Dose distribution in biological tissue may be obtained from Monte Carlo simulations using various scientific codes, making it possible to perform very accurate calculations. However, there are many factors affecting the accuracy of modeling. One of them is the structure of the objects, called bins, that register the dose. In this work the influence of bin structure on the dose distributions was examined. The MCNPX code calculations of the Bragg curve for the 60 MeV proton beam were done in two ways: using simple logical detectors, i.e. volumes defined in water, and using a precise model of an ionization chamber used in clinical dosimetry. The results of the simulations were verified experimentally in the water phantom with a Marcus ionization chamber. The average local dose difference between the measured relative doses in the water phantom and those calculated by means of the logical detectors was 1.4% over the first 25 mm, whereas over the full depth range this difference was 1.6%, with a maximum calculation uncertainty below 2.4% and a maximum measurement error of 1%. For the relative doses calculated with the ionization chamber model this average difference was somewhat greater: 2.3% at depths up to 25 mm and 2.4% over the full range of depths, with a maximum calculation uncertainty of 3%. In the dose calculations the ionization chamber model does not offer any additional advantages over the logical detectors. The results provided by both models are similar and in good agreement with the measurements; however, the logical detector approach is the more time-effective method.

  9. Cheetah: Starspot modeling code

    NASA Astrophysics Data System (ADS)

    Walkowicz, Lucianne; Thomas, Michael; Finkestein, Adam

    2014-12-01

    Cheetah models starspots in photometric data (lightcurves) by calculating the modulation of a light curve due to starspots. The main parameters of the program are the linear and quadratic limb darkening coefficients, stellar inclination, spot locations and sizes, and the intensity ratio of the spots to the stellar photosphere. Cheetah uses uniform spot contrast and the minimum number of spots needed to produce a good fit and ignores bright regions for the sake of simplicity.

  10. The weight distribution and randomness of linear codes

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.

    1989-01-01

    Finding the weight distributions of block codes is a problem of theoretical and practical interest. Yet the weight distributions of most block codes are still unknown except for a few classes of block codes. Here, by using the inclusion and exclusion principle, an explicit formula is derived which enumerates the complete weight distribution of an (n,k,d) linear code using a partially known weight distribution. This expression is analogous to the Pless power-moment identities - a system of equations relating the weight distribution of a linear code to the weight distribution of its dual code. Also, an approximate formula for the weight distribution of most linear (n,k,d) codes is derived. It is shown that for a given linear (n,k,d) code over GF(q), the ratio of the number of codewords of weight u to the number of words of weight u approaches the constant Q = q^(-(n-k)) as u becomes large. A relationship between the randomness of a linear block code and the minimum distance of its dual code is given, and it is shown that most linear block codes with rigid algebraic and combinatorial structure also display certain random properties which make them similar to random codes with no structure at all.
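
    In LaTeX form, the limiting ratio quoted above (with A_u the number of codewords of weight u and binom(n,u)(q-1)^u the number of weight-u words over GF(q)) gives the approximate weight distribution:

        \frac{A_u}{\binom{n}{u}(q-1)^u} \;\to\; q^{-(n-k)} \quad (u \text{ large}),
        \qquad\text{so}\qquad
        A_u \;\approx\; q^{-(n-k)} \binom{n}{u} (q-1)^u .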

  11. Adaptive Source Coding Schemes for Geometrically Distributed Integer Alphabets

    NASA Technical Reports Server (NTRS)

    Cheung, K-M.; Smyth, P.

    1993-01-01

    We revisit the Gallager and van Voorhis optimal source coding scheme for geometrically distributed non-negative integer alphabets and show that the various subcodes in the popular Rice algorithm can be derived from the Gallager and van Voorhis code.
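
    For concreteness, the Gallager and van Voorhis codes are Golomb codes, and the Rice subcodes correspond to power-of-two Golomb parameters; the following is a minimal encoder sketch (illustrative, not the paper's notation).

      def golomb_encode(n: int, m: int) -> str:
          """Golomb code of a non-negative integer n with parameter m: a unary
          quotient followed by a (truncated) binary remainder.  With m = 2**k
          this reduces to the Rice subcode of order k."""
          q, r = divmod(n, m)
          out = "1" * q + "0"                      # unary quotient
          b = m.bit_length() - 1                   # floor(log2(m))
          if (1 << b) == m:                        # power of two: plain binary remainder
              return out + format(r, f"0{b}b") if b else out
          cutoff = (1 << (b + 1)) - m              # truncated-binary threshold
          if r < cutoff:
              return out + format(r, f"0{b}b")
          return out + format(r + cutoff, f"0{b + 1}b")

      print(golomb_encode(9, 4))   # Rice order 2: quotient '110' + remainder '01'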

  12. Sparsey™: event recognition via deep hierarchical sparse distributed codes

    PubMed Central

    Rinkus, Gerard J.

    2014-01-01

    The visual cortex's hierarchical, multi-level organization is captured in many biologically inspired computational vision models, the general idea being that progressively larger scale (spatially/temporally) and more complex visual features are represented in progressively higher areas. However, most earlier models use localist representations (codes) in each representational field (which we equate with the cortical macrocolumn, “mac”), at each level. In localism, each represented feature/concept/event (hereinafter “item”) is coded by a single unit. The model we describe, Sparsey, is hierarchical as well but crucially, it uses sparse distributed coding (SDC) in every mac in all levels. In SDC, each represented item is coded by a small subset of the mac's units. The SDCs of different items can overlap and the size of overlap between items can be used to represent their similarity. The difference between localism and SDC is crucial because SDC allows the two essential operations of associative memory, storing a new item and retrieving the best-matching stored item, to be done in fixed time for the life of the model. Since the model's core algorithm, which does both storage and retrieval (inference), makes a single pass over all macs on each time step, the overall model's storage/retrieval operation is also fixed-time, a criterion we consider essential for scalability to the huge (“Big Data”) problems. A 2010 paper described a nonhierarchical version of this model in the context of purely spatial pattern processing. Here, we elaborate a fully hierarchical model (arbitrary numbers of levels and macs per level), describing novel model principles like progressive critical periods, dynamic modulation of principal cells' activation functions based on a mac-level familiarity measure, representation of multiple simultaneously active hypotheses, a novel method of time warp invariant recognition, and we report results showing learning/recognition of spatiotemporal
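
    As a toy illustration of the SDC idea described above (not Sparsey's actual learning or inference algorithm), each item is coded by a small subset of a mac's units and similarity is measured by code overlap:

      import random

      MAC_UNITS = 200    # units in one hypothetical mac
      CODE_SIZE = 10     # each item is coded by a small subset of units

      def random_sdc():
          return frozenset(random.sample(range(MAC_UNITS), CODE_SIZE))

      def similarity(code_a, code_b):
          """Overlap of two sparse distributed codes, normalized to [0, 1]."""
          return len(code_a & code_b) / CODE_SIZE

      stored = {name: random_sdc() for name in ("item0", "item1", "item2")}
      probe = stored["item1"]
      best = max(stored, key=lambda name: similarity(stored[name], probe))
      print(best)   # retrieval returns the best-matching stored item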

  13. Optimal source codes for geometrically distributed integer alphabets

    NASA Technical Reports Server (NTRS)

    Gallager, R. G.; Van Voorhis, D. C.

    1975-01-01

    An approach is shown for using the Huffman algorithm indirectly to prove the optimality of a code for an infinite alphabet if an estimate concerning the nature of the code can be made. Attention is given to nonnegative integers with a geometric probability assignment. The particular distribution considered arises in run-length coding and in encoding protocol information in data networks. Questions of redundancy of the optimal code are also investigated.
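
    As the Gallager and van Voorhis result is usually stated, for the geometric distribution P(i) = (1 - \theta)\theta^i on the non-negative integers the optimal code is the Golomb code whose parameter m is the unique integer satisfying

        \theta^{m} + \theta^{m+1} \;\le\; 1 \;<\; \theta^{m-1} + \theta^{m},
        \qquad\text{equivalently}\qquad
        m = \left\lceil \frac{\log(1+\theta)}{\log(1/\theta)} \right\rceil .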

  14. Random aggregation models for the formation and evolution of coding and non-coding DNA

    NASA Astrophysics Data System (ADS)

    Provata, A.

    A random aggregation model with influx is proposed for the formation of the non-coding DNA regions via random co-aggregation and influx of biological macromolecules such as viruses, parasite DNA, and replication segments. The constant mixing (transpositions) and influx drives the system into an out-of-equilibrium steady state characterised by a power law size distribution. The model predicts the long range distributions found in the non-coding eucaryotic DNA and explains the observed correlations. For the formation of coding DNA a random closed aggregation model is proposed which predicts short range coding size distributions. The closed aggregation process drives the system into an almost “frozen” stable state which is robust to external perturbations and which is characterised by well defined space and time scales, as observed in coding sequences.
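
    A toy simulation in the spirit of this model (not the paper's exact dynamics; the rates and sizes are arbitrary) shows how steady influx combined with random co-aggregation broadens the segment-size distribution into a heavy tail:

      import random
      from collections import Counter

      def aggregate_with_influx(steps=100000, p_merge=0.5):
          """At every step a unit segment flows in; with probability p_merge two
          randomly chosen segments co-aggregate into one."""
          segments = [1, 1]
          for _ in range(steps):
              segments.append(1)                                 # influx
              if len(segments) > 2 and random.random() < p_merge:
                  i, j = random.sample(range(len(segments)), 2)  # co-aggregation
                  segments[i] += segments[j]
                  segments.pop(j)
          return Counter(segments)    # maps segment size -> number of segments

      sizes = aggregate_with_influx()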

  15. Modeling of Dose Distribution for a Proton Beam Delivering System with the use of the Multi-Particle Transport Code 'Fluka'

    SciTech Connect

    Mumot, Marta; Agapov, Alexey

    2007-11-26

    We have developed a new beam delivery system for hadron therapy which uses a multileaf collimator and a range shifter. We simulated our beam delivery system with the multi-particle transport code 'Fluka'. From these simulations we obtained information about the dose distributions, about stars generated in the delivery system elements, and about the neutron flux. All the information obtained was analyzed from the point of view of radiation protection and of the homogeneity of the beam delivery to the patient body, and also in order to improve some of the beam modifiers used.

  16. Dynamic alignment models for neural coding.

    PubMed

    Kollmorgen, Sepp; Hahnloser, Richard H R

    2014-03-01

    Recently, there have been remarkable advances in modeling the relationships between the sensory environment, neuronal responses, and behavior. However, most models cannot encompass variable stimulus-response relationships such as varying response latencies and state or context dependence of the neural code. Here, we consider response modeling as a dynamic alignment problem and model stimulus and response jointly by a mixed pair hidden Markov model (MPH). In MPHs, multiple stimulus-response relationships (e.g., receptive fields) are represented by different states or groups of states in a Markov chain. Each stimulus-response relationship features temporal flexibility, allowing modeling of variable response latencies, including noisy ones. We derive algorithms for learning of MPH parameters and for inference of spike response probabilities. We show that some linear-nonlinear Poisson cascade (LNP) models are a special case of MPHs. We demonstrate the efficiency and usefulness of MPHs in simulations of both jittered and switching spike responses to white noise and natural stimuli. Furthermore, we apply MPHs to extracellular single and multi-unit data recorded in cortical brain areas of singing birds to showcase a novel method for estimating response lag distributions. MPHs allow simultaneous estimation of receptive fields, latency statistics, and hidden state dynamics and so can help to uncover complex stimulus response relationships that are subject to variable timing and involve diverse neural codes. PMID:24625448

  18. MEMOPS: data modelling and automatic code generation.

    PubMed

    Fogh, Rasmus H; Boucher, Wayne; Ionides, John M C; Vranken, Wim F; Stevens, Tim J; Laue, Ernest D

    2010-03-25

    In recent years the amount of biological data has exploded to the point where much useful information can only be extracted by complex computational analyses. Such analyses are greatly facilitated by metadata standards, both in terms of the ability to compare data originating from different sources, and in terms of exchanging data in standard forms, e.g. when running processes on a distributed computing infrastructure. However, standards thrive on stability whereas science tends to constantly move, with new methods being developed and old ones modified. Therefore maintaining both metadata standards, and all the code that is required to make them useful, is a non-trivial problem. Memops is a framework that uses an abstract definition of the metadata (described in UML) to generate internal data structures and subroutine libraries for data access (application programming interfaces--APIs--currently in Python, C and Java) and data storage (in XML files or databases). For the individual project these libraries obviate the need for writing code for input parsing, validity checking or output. Memops also ensures that the code is always internally consistent, massively reducing the need for code reorganisation. Across a scientific domain a Memops-supported data model makes it easier to support complex standards that can capture all the data produced in a scientific area, share them among all programs in a complex software pipeline, and carry them forward to deposition in an archive. The principles behind the Memops generation code will be presented, along with example applications in Nuclear Magnetic Resonance (NMR) spectroscopy and structural biology.
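
    A toy analogue of the model-driven code-generation idea (purely illustrative: MEMOPS itself works from a UML model and also emits C and Java APIs) builds a class with type-checked construction from an abstract description of the data model:

      MODEL = {"Peak": {"position": float, "height": float, "assignment": str}}

      def generate_class(name, fields):
          """Generate a class whose constructor validates field types against
          the abstract model description."""
          def __init__(self, **kwargs):
              for field, ftype in fields.items():
                  value = kwargs.get(field)
                  if value is not None and not isinstance(value, ftype):
                      raise TypeError(f"{field} must be {ftype.__name__}")
                  setattr(self, field, value)
          return type(name, (object,), {"__init__": __init__})

      Peak = generate_class("Peak", MODEL["Peak"])
      p = Peak(position=7.2, height=1500000.0, assignment="HN-13")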

  19. Binary weight distributions of some Reed-Solomon codes

    NASA Technical Reports Server (NTRS)

    Pollara, F.; Arnold, S.

    1992-01-01

    The binary weight distributions of the (7,5) and (15,9) Reed-Solomon (RS) codes and their duals are computed using the MacWilliams identities. Several mappings of symbols to bits are considered and those offering the largest binary minimum distance are found. These results are then used to compute bounds on the soft-decoding performance of these codes in the presence of additive Gaussian noise. These bounds are useful for finding large binary block codes with good performance and for verifying the performance obtained by specific soft-coding algorithms presently under development.
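
    The MacWilliams identity referred to above relates the weight enumerator of a code C to that of its dual (standard statement, not specific to these Reed-Solomon codes):

        W_{C^\perp}(x, y) = \frac{1}{|C|}\, W_C(x + y,\; x - y),
        \qquad
        W_C(x, y) = \sum_{w=0}^{n} A_w\, x^{n-w} y^{w},

    where A_w is the number of codewords of weight w.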

  20. Update and inclusion of resuspension model codes

    SciTech Connect

    Porch, W.M.; Greenly, G.D.; Mitchell, C.S.

    1983-12-01

    Model codes for estimating radiation doses from plutonium particles associated with resuspended dust were improved. Only one new code (RSUS) is required in addition to the MATHEW/ADPIC set of codes. The advantage is that it estimates resuspension based on wind blown dust fluxes derived for different soil types. 2 references. (ACR)

  1. Codon Distribution in Error-Detecting Circular Codes.

    PubMed

    Fimmel, Elena; Strüngmann, Lutz

    2016-03-15

    In 1957, Francis Crick et al. suggested an ingenious explanation for the process of frame maintenance. The idea was based on the notion of comma-free codes. Although Crick's hypothesis proved to be wrong, in 1996, Arquès and Michel discovered the existence of a weaker version of such codes in eukaryote and prokaryote genomes, namely the so-called circular codes. Since then, circular code theory has invariably evoked great interest and made significant progress. In this article, the codon distributions in maximal comma-free, maximal self-complementary C³ and maximal self-complementary circular codes are discussed, i.e., we investigate in how many of such codes a given codon participates. As the main (and surprising) result, it is shown that the codons can be separated into very few classes (three, or five, or six) with respect to their frequency. Moreover, the distribution classes can be hierarchically ordered as refinements from maximal comma-free codes via maximal self-complementary C³ codes to maximal self-complementary circular codes.

  4. Material model library for explicit numerical codes

    SciTech Connect

    Hofmann, R.; Dial, B.W.

    1982-08-01

    A material model logic structure has been developed which is useful for most explicit finite-difference and explicit finite-element Lagrange computer codes. This structure has been implemented and tested in the STEALTH codes to provide an example for researchers who wish to implement it in generically similar codes. In parallel with these models, material parameter libraries have been created for the implemented models for materials which are often needed in DoD applications.

  5. Evaluation of help model replacement codes

    SciTech Connect

    Whiteside, Tad; Hang, Thong; Flach, Gregory

    2009-07-01

    This work evaluates the computer codes that are proposed to be used to predict percolation of water through the closure-cap and into the waste containment zone at the Department of Energy closure sites. This work compares the currently used water-balance code (HELP) with newly developed computer codes that use unsaturated flow (Richards’ equation). It provides a literature review of the HELP model and the proposed codes, which results in two recommended codes for further evaluation: HYDRUS-2D3D and VADOSE/W. This further evaluation involved performing actual simulations on a simple model and comparing the results of those simulations to those obtained with the HELP code and the field data. From the results of this work, we conclude that the new codes perform nearly the same, although moving forward, we recommend HYDRUS-2D3D.
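
    For reference, the unsaturated-flow codes mentioned here solve Richards' equation rather than performing HELP-style water-balance bookkeeping; in its common one-dimensional vertical form (standard statement, not a formula from this report) it reads

        \frac{\partial \theta(h)}{\partial t}
        = \frac{\partial}{\partial z}\!\left[ K(h) \left( \frac{\partial h}{\partial z} + 1 \right) \right],

    where \theta is the volumetric water content, h the pressure head, K(h) the unsaturated hydraulic conductivity, and z the elevation (positive upward).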

  6. TEMPEST code simulations of hydrogen distribution in reactor containment structures. Final report

    SciTech Connect

    Trent, D.S.; Eyler, L.L.

    1985-03-01

    The mass transport version of the TEMPEST computer code was used to simulate hydrogen distribution in geometric configurations relevant to reactor containment structures. Predicted results of Battelle-Frankfurt hydrogen distribution tests 1 to 6, and 12 are presented. Agreement between predictions and experimental data is good. Best agreement is obtained using the k-epsilon turbulence model in TEMPEST in flow cases where turbulent diffusion and stable stratification are dominant mechanisms affecting transport. The code's general analysis capabilities are summarized.

  7. From Verified Models to Verifiable Code

    NASA Technical Reports Server (NTRS)

    Lensink, Leonard; Munoz, Cesar A.; Goodloe, Alwyn E.

    2009-01-01

    Declarative specifications of digital systems often contain parts that can be automatically translated into executable code. Automated code generation may reduce or eliminate the kinds of errors typically introduced through manual code writing. For this approach to be effective, the generated code should be reasonably efficient and, more importantly, verifiable. This paper presents a prototype code generator for the Prototype Verification System (PVS) that translates a subset of PVS functional specifications into an intermediate language and subsequently to multiple target programming languages. Several case studies are presented to illustrate the tool's functionality. The generated code can be analyzed by software verification tools such as verification condition generators, static analyzers, and software model-checkers to increase the confidence that the generated code is correct.

  8. Code CUGEL: A code to unfold Ge(Li) spectrometer polyenergetic gamma photon experimental distributions

    NASA Technical Reports Server (NTRS)

    Steyn, J. J.; Born, U.

    1970-01-01

    A FORTRAN code was developed for the Univac 1108 digital computer to unfold lithium-drifted germanium semiconductor spectrometers, polyenergetic gamma photon experimental distributions. It was designed to analyze the combination continuous and monoenergetic gamma radiation field of radioisotope volumetric sources. The code generates the detector system response matrix function and applies it to monoenergetic spectral components discretely and to the continuum iteratively. It corrects for system drift, source decay, background, and detection efficiency. Results are presented in digital form for differential and integrated photon number and energy distributions, and for exposure dose.
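
    A minimal sketch of the generic response-matrix idea (illustrative only; CUGEL's actual treatment handles monoenergetic components discretely, iterates on the continuum, and applies drift, decay, background, and efficiency corrections) is an iterative multiplicative unfolding:

      import numpy as np

      def unfold(measured, response, iterations=50):
          """Iteratively unfold a measured pulse-height spectrum given a detector
          response matrix R with R @ true ~= measured (every true-energy bin is
          assumed to contribute to at least one measured bin)."""
          estimate = np.full(response.shape[1], measured.sum() / response.shape[1])
          for _ in range(iterations):
              folded = response @ estimate
              folded[folded == 0] = 1e-12            # guard against division by zero
              estimate *= (response.T @ (measured / folded)) / response.sum(axis=0)
          return estimate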

  9. Genetic coding and gene expression - new Quadruplet genetic coding model

    NASA Astrophysics Data System (ADS)

    Shankar Singh, Rama

    2012-07-01

    The successful demonstration of the human genome project has opened the door not only to developing personalized medicine and cures for genetic diseases, but it may also answer the complex and difficult question of the origin of life. It may make the 21st century a century of the biological sciences as well. Based on the central dogma of biology, genetic codons in conjunction with tRNA play a key role in translating the RNA bases into a sequence of amino acids, leading to a synthesized protein. This is the most critical step in synthesizing the right protein needed for personalized medicine and for curing genetic diseases. So far, only triplet codons, involving three bases of RNA transcribed from DNA bases, have been used. Since this approach has several inconsistencies and limitations, even the promise of personalized medicine has not been realized. The new Quadruplet genetic coding model proposed and developed here involves all four RNA bases, which in conjunction with tRNA will synthesize the right protein. The transcription and translation processes used will be the same, but the Quadruplet codons will help overcome most of the inconsistencies and limitations of the triplet codes. Details of this new Quadruplet genetic coding model and its subsequent potential applications, including its relevance to the origin of life, will be presented.

  10. Non-coding RNAs and complex distributed genetic networks

    NASA Astrophysics Data System (ADS)

    Zhdanov, Vladimir

    2011-08-01

    In eukaryotic cells, the mRNA-protein interplay can be dramatically influenced by non-coding RNAs (ncRNAs). Although this new paradigm is now widely accepted, an understanding of the effect of ncRNAs on complex genetic networks is lacking. To clarify what may happen in this case, we propose a mean-field kinetic model describing the influence of ncRNA on a complex genetic network with a distributed architecture including mutual protein-mediated regulation of many genes transcribed into mRNAs. ncRNA is considered to associate with mRNAs and inhibit their translation and/or facilitate degradation. Our results are indicative of the richness of the kinetics under consideration. The main complex features are found to be bistability and oscillations. One could expect to find kinetic chaos as well. The latter feature has however not been observed in our calculations. In addition, we illustrate the difference in the regulation of distributed networks by mRNA and ncRNA.
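
    A minimal mean-field sketch of ncRNA action on a single mRNA-protein pair (the paper couples many such genes into a distributed network; the symbols here are generic, not the paper's notation) is

        \frac{dm}{dt} = k_m - \gamma_m m - \delta\, m\, s, \qquad
        \frac{ds}{dt} = k_s - \gamma_s s - \delta\, m\, s, \qquad
        \frac{dp}{dt} = k_p m - \gamma_p p,

    where m, s and p are the mRNA, ncRNA and protein levels, the k's are production rates, the \gamma's degradation rates, and \delta the rate of ncRNA-mRNA association leading to translational inhibition and/or co-degradation.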

  11. COLD-SAT Dynamic Model Computer Code

    NASA Technical Reports Server (NTRS)

    Bollenbacher, G.; Adams, N. S.

    1995-01-01

    The COLD-SAT Dynamic Model (CSDM) computer code implements a six-degree-of-freedom, rigid-body mathematical model for simulation of a spacecraft in orbit around Earth. It investigates the flow dynamics and thermodynamics of subcritical cryogenic fluids in microgravity. The code consists of three parts: a translation model, a rotation model, and a slosh model. It is written in FORTRAN 77.

  12. Model Policy on Student Publications Code.

    ERIC Educational Resources Information Center

    Iowa State Dept. of Education, Des Moines.

    In 1989, the Iowa Legislature created a new code section that defines and regulates student exercise of free expression in "official school publications." Also, the Iowa State Department of Education was directed to develop a model publication code that includes reasonable provisions for regulating the time, place, and manner of student…

  13. Streamlined Genome Sequence Compression using Distributed Source Coding

    PubMed Central

    Wang, Shuang; Jiang, Xiaoqian; Chen, Feng; Cui, Lijuan; Cheng, Samuel

    2014-01-01

    We aim at developing a streamlined genome sequence compression algorithm to support alternative miniaturized sequencing devices, which have limited communication, storage, and computation power. Existing techniques that require heavy client (encoder side) cannot be applied. To tackle this challenge, we carefully examined distributed source coding theory and developed a customized reference-based genome compression protocol to meet the low-complexity need at the client side. Based on the variation between source and reference, our protocol will pick adaptively either syndrome coding or hash coding to compress subsequences of changing code length. Our experimental results showed promising performance of the proposed method when compared with the state-of-the-art algorithm (GRS). PMID:25520552

  14. Distributed quantum dense coding with two receivers in noisy environments

    NASA Astrophysics Data System (ADS)

    Das, Tamoghna; Prabhu, R.; SenDe, Aditi; Sen, Ujjwal

    2015-11-01

    We investigate the effect of noisy channels in a classical information transfer through a multipartite state which acts as a substrate for the distributed quantum dense coding protocol between several senders and two receivers. The situation is qualitatively different from the case with one or more senders and a single receiver. We obtain an upper bound on the multipartite capacity which is tightened in the case of the covariant noisy channel. We also establish a relation between the genuine multipartite entanglement of the shared state and the capacity of distributed dense coding using that state, both in the noiseless and the noisy scenarios. Specifically, we find that, in the case of multiple senders and two receivers, the corresponding generalized Greenberger-Horne-Zeilinger states possess higher dense coding capacities as compared to a significant fraction of pure states having the same multipartite entanglement.
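
    For orientation, the standard bipartite dense-coding capacity (one sender A and one receiver B sharing a state \rho_{AB}), which the multipartite bounds discussed here generalize, is

        C = \log_2 d_A + \max\{0,\; S(\rho_B) - S(\rho_{AB})\},

    where d_A is the sender's local dimension and S is the von Neumann entropy; dense coding beats the classical value \log_2 d_A exactly when the coherent information S(\rho_B) - S(\rho_{AB}) is positive.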

  15. Generation of Java code from Alvis model

    NASA Astrophysics Data System (ADS)

    Matyasik, Piotr; Szpyrka, Marcin; Wypych, Michał

    2015-12-01

    Alvis is a formal language that combines graphical modelling of interconnections between system entities (called agents) and a high level programming language to describe behaviour of any individual agent. An Alvis model can be verified formally with model checking techniques applied to the model LTS graph that represents the model state space. This paper presents transformation of an Alvis model into executable Java code. Thus, the approach provides a method of automatic generation of a Java application from formally verified Alvis model.

  16. Distributed generation systems model

    SciTech Connect

    Barklund, C.R.

    1994-12-31

    A slide presentation is given on a distributed generation systems model developed at the Idaho National Engineering Laboratory, and its application to a situation within the Idaho Power Company's service territory. The objectives of the work were to develop a screening model for distributed generation alternatives, to develop a better understanding of distributed generation as a utility resource, and to further INEL's understanding of utility concerns in implementing technological change.

  17. 24 CFR 200.926c - Model code provisions for use in partially accepted code jurisdictions.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    Minimum Property Standards, § 200.926c, Model code provisions for use in partially accepted code jurisdictions: …those portions of one of the model codes with which the property must comply. Schedule for Model…

  18. Mathematical models for the EPIC code

    SciTech Connect

    Buchanan, H.L.

    1981-06-03

    EPIC is a fluid/envelope type computer code designed to study the energetics and dynamics of a high energy, high current electron beam passing through a gas. The code is essentially two dimensional (x, r, t) and assumes an axisymmetric beam whose r.m.s. radius is governed by an envelope model. Electromagnetic fields, background gas chemistry, and gas hydrodynamics (density channel evolution) are all calculated self-consistently as functions of r, x, and t. The code is a collection of five major subroutines, each of which is described in some detail in this report.

  19. Predictive coding as a model of cognition.

    PubMed

    Spratling, M W

    2016-08-01

    Previous work has shown that predictive coding can provide a detailed explanation of a very wide range of low-level perceptual processes. It is also widely believed that predictive coding can account for high-level, cognitive, abilities. This article provides support for this view by showing that predictive coding can simulate phenomena such as categorisation, the influence of abstract knowledge on perception, recall and reasoning about conceptual knowledge, context-dependent behavioural control, and naive physics. The particular implementation of predictive coding used here (PC/BC-DIM) has previously been used to simulate low-level perceptual behaviour and the neural mechanisms that underlie them. This algorithm thus provides a single framework for modelling both perceptual and cognitive brain function. PMID:27118562

  20. Distributed fuzzy system modeling

    SciTech Connect

    Pedrycz, W.; Chi Fung Lam, P.; Rocha, A.F.

    1995-05-01

    The paper introduces and studies the idea of distributed modeling, treating it as a new paradigm of fuzzy system modeling and analysis. This form of modeling is oriented towards developing individual (local) fuzzy models for specific modeling landmarks (expressed as fuzzy sets) and determining the essential logical relationships between these local models. The models themselves are implemented in the form of logic processors regarded as specialized fuzzy neural networks. The interaction between the processors is developed in either an inhibitory or an excitatory way. More descriptively, the distributed model can be thought of as a collection of fuzzy finite state machines with their individual local first- or higher-order memories. It is also clarified how the concept of distributed modeling narrows the gap between purely numerical (quantitative) models and the qualitative ones originating within the realm of Artificial Intelligence. The overall architecture of distributed modeling is discussed along with the detailed learning schemes. The results of extensive simulation experiments are provided as well. 17 refs.

  1. Complex phylogenetic distribution of a non-canonical genetic code in green algae

    PubMed Central

    2010-01-01

    Background: A non-canonical nuclear genetic code, in which TAG and TAA have been reassigned from stop codons to glutamine, has evolved independently in several eukaryotic lineages, including the ulvophycean green algal orders Dasycladales and Cladophorales. To study the phylogenetic distribution of the standard and non-canonical genetic codes, we generated sequence data of a representative set of ulvophycean green algae and used a robust green algal phylogeny to evaluate different evolutionary scenarios that may account for the origin of the non-canonical code. Results: This study demonstrates that the Dasycladales and Cladophorales share this alternative genetic code with the related order Trentepohliales and the genus Blastophysa, but not with the Bryopsidales, which is sister to the Dasycladales. This complex phylogenetic distribution whereby all but one representative of a single natural lineage possesses an identical deviant genetic code is unique. Conclusions: We compare different evolutionary scenarios for the complex phylogenetic distribution of this non-canonical genetic code. A single transition to the non-canonical code followed by a reversal to the canonical code in the Bryopsidales is highly improbable due to the profound genetic changes that coincide with codon reassignment. Multiple independent gains of the non-canonical code, as hypothesized for ciliates, are also unlikely because the same deviant code has evolved in all lineages. Instead we favor a stepwise acquisition model, congruent with the ambiguous intermediate model, whereby the non-canonical code observed in these green algal orders has a single origin. We suggest that the final steps from an ambiguous intermediate situation to a non-canonical code have been completed in the Trentepohliales, Dasycladales, Cladophorales and Blastophysa but not in the Bryopsidales. We hypothesize that in the latter lineage an initial stage characterized by translational ambiguity was not followed by final…

  2. Specifications of a Plasmasphere Modeling Code for GGCM

    NASA Technical Reports Server (NTRS)

    Gallagher, D. L.; Ober, D.

    2000-01-01

    The Dynamic Global Core Plasma Model (DGCPM) is a parameterized model for core or thermal plasma in the magnetosphere. The model accounts for dayside ionospheric outflow and nightside inflow. It accounts for the global pattern of convection and corotation. The model is capable of being coupled to ring current and superthermal electron models for the purpose of providing thermal plasma spatial distributions and for the purpose of accepting the dynamic influences of these plasma populations back upon the thermal plasma. The DGCPM is designed to operate alone or to operate as part of a larger integrated package. The convection electric field and magnetic field used within the DGCPM can be shared with models of other plasma populations, in addition to the exchange of parameters important to the collective modeling of whole plasma systems in the inner magnetosphere. This talk will present the features of the DGCPM model code and the various forms of information that can be exchanged with other cooperating codes.

  3. Bounding species distribution models

    USGS Publications Warehouse

    Stohlgren, T.J.; Jarnevich, C.S.; Esaias, W.E.; Morisette, J.T.

    2011-01-01

    Species distribution models are increasing in popularity for mapping suitable habitat for species of management concern. Many investigators now recognize that extrapolations of these models with geographic information systems (GIS) might be sensitive to the environmental bounds of the data used in their development, yet there is no recommended best practice for "clamping" model extrapolations. We relied on two commonly used modeling approaches: classification and regression tree (CART) and maximum entropy (Maxent) models, and we tested a simple alteration of the model extrapolations, bounding extrapolations to the maximum and minimum values of primary environmental predictors, to provide a more realistic map of suitable habitat of hybridized Africanized honey bees in the southwestern United States. Findings suggest that multiple models of bounding, and the most conservative bounding of species distribution models, like those presented here, should probably replace the unbounded or loosely bounded techniques currently used. © 2011 Current Zoology.

  4. Bounding Species Distribution Models

    NASA Technical Reports Server (NTRS)

    Stohlgren, Thomas J.; Jarnevich, Catherine S.; Morisette, Jeffrey T.; Esaias, Wayne E.

    2011-01-01

    Species distribution models are increasing in popularity for mapping suitable habitat for species of management concern. Many investigators now recognize that extrapolations of these models with geographic information systems (GIS) might be sensitive to the environmental bounds of the data used in their development, yet there is no recommended best practice for "clamping" model extrapolations. We relied on two commonly used modeling approaches: classification and regression tree (CART) and maximum entropy (Maxent) models, and we tested a simple alteration of the model extrapolations, bounding extrapolations to the maximum and minimum values of primary environmental predictors, to provide a more realistic map of suitable habitat of hybridized Africanized honey bees in the southwestern United States. Findings suggest that multiple models of bounding, and the most conservative bounding of species distribution models, like those presented here, should probably replace the unbounded or loosely bounded techniques currently used [Current Zoology 57 (5): 642-647, 2011].
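
    The bounding ("clamping") step described above amounts to restricting each environmental predictor to the range seen in the training data before the fitted model is projected; a minimal sketch follows (the numpy workflow and column names are illustrative, not the authors' GIS procedure).

      import numpy as np

      def bound_predictors(train, projection):
          """Clamp each predictor column in the projection grid to the min/max
          observed in the training data, so the SDM is never evaluated outside
          its environmental bounds."""
          return np.clip(projection, train.min(axis=0), train.max(axis=0))

      # columns: e.g. mean temperature (C), annual precipitation (mm)
      train = np.array([[12.0, 300.0], [31.0, 900.0], [25.0, 650.0]])
      grid  = np.array([[35.0, 150.0], [20.0, 700.0]])
      print(bound_predictors(train, grid))   # first cell clamped to (31, 300); second unchanged

    A more conservative variant would instead flag any grid cell that falls outside the training bounds as unsuitable rather than clamping it.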

  5. Internal Dosimetry Code System Using Biokinetics Models

    2003-11-12

    Version 00. InDose is an internal dosimetry code that calculates dose estimates using biokinetic models (presented in ICRP-56 to ICRP-71) as well as older ones. The code uses the ICRP-66 respiratory tract model and the ICRP-30 gastrointestinal tract model as well as the new and old biokinetic models. The code was written in such a way that the user can change any parameter of any one of the models without recompiling the code. All parameters are given in well-annotated parameter files that the user may change. By default, these files contain the values listed in ICRP publications. The full InDose code was planned to have three parts: 1) the main part, which includes the uptake and systemic models and is used to calculate the activities in the body tissues and excretion as a function of time for a given intake; 2) an optimization module for automatic estimation of the intake for a specific exposure case; and 3) a module to calculate the dose due to the estimated intake. Currently, the code is able to perform only its main task (part 1), while the other two have to be done externally using other tools. In the future, the developers would like to add these modules in order to provide a complete solution. The code was tested extensively to verify the accuracy of its results. The verification procedure was divided into three parts: 1) verification of the implementation of each model, 2) verification of the integrity of the whole code, and 3) a usability test. The first two parts consisted of comparing results obtained with InDose to published results for the same cases, for example ICRP-78 monitoring data. The last part consisted of participating in the 3rd EIE-IDA exercise and assessing some of the scenarios provided in it. These tests were presented in a few publications. Good agreement was found between the results of InDose and published data.

  6. Parallelization of Finite Element Analysis Codes Using Heterogeneous Distributed Computing

    NASA Technical Reports Server (NTRS)

    Ozguner, Fusun

    1996-01-01

    Performance gains in computer design are quickly consumed as users seek to analyze larger problems to a higher degree of accuracy. Innovative computational methods, such as parallel and distributed computing, seek to multiply the power of existing hardware technology to satisfy the computational demands of large applications. In the early stages of this project, experiments were performed using two large, coarse-grained applications, CSTEM and METCAN. These applications were parallelized on an Intel iPSC/860 hypercube. It was found that the overall speedup was very low, due to large, inherently sequential code segments present in the applications. The overall parallel execution time, T_par, of the application is dependent on these sequential segments. If these segments make up a significant fraction of the overall code, the application will have a poor speedup measure.
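
    The limitation described here is Amdahl's law: if only a fraction f of the execution time can be parallelized, the speedup on p processors is bounded by

        S(p) = \frac{1}{(1 - f) + f/p} \;\le\; \frac{1}{1 - f},

    so that, for example, a code that is 80% parallelizable can never exceed a speedup of 5, no matter how many hypercube nodes are used.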

  7. Spherical-code key-distribution protocols for qubits

    SciTech Connect

    Renes, Joseph M.

    2004-11-01

    Recently spherical codes were introduced as potentially more capable ensembles for quantum key distribution. Here we develop specific key-creation protocols for the two qubit-based spherical codes, the trine and tetrahedron, and analyze them in the context of a suitably tailored intercept/resend attack, both in standard form, and in a 'gentler' version whose back action on the quantum state is weaker. When compared to the standard unbiased basis protocols, Bennett-Brassard 1984 (BB84) and six-state, two distinct advantages are found. First, they offer improved tolerance of eavesdropping, the trine besting its counterpart BB84 and the tetrahedron the six-state protocol. Second, the key error rate may be computed from the sift rate of the protocol itself, removing the need to sacrifice key bits for this purpose. This simplifies the protocol and improves the overall key rate.

  8. Distributed magnetic field positioning system using code division multiple access

    NASA Technical Reports Server (NTRS)

    Prigge, Eric A. (Inventor)

    2003-01-01

    An apparatus and methods for a magnetic field positioning system use a fundamentally different, and advantageous, signal structure and multiple access method, known as Code Division Multiple Access (CDMA). This signal architecture, when combined with processing methods, leads to advantages over the existing technologies, especially when applied to a system with a large number of magnetic field generators (beacons). Beacons at known positions generate coded magnetic fields, and a magnetic sensor measures a sum field and decomposes it into component fields to determine the sensor position and orientation. The apparatus and methods can have a large 'building-sized' coverage area. The system allows for numerous beacons to be distributed throughout an area at a number of different locations. A method to estimate position and attitude, with no prior knowledge, uses dipole fields produced by these beacons in different locations.
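
    A toy sketch of the CDMA idea (one sensor axis, ideal orthogonal spreading codes; not the patent's estimator): each beacon modulates its field with its own code, and correlating the summed measurement against each code separates the per-beacon components.

      import numpy as np

      # Toy orthogonal spreading codes: rows of a 4x4 Hadamard matrix (+/-1 chips).
      codes = np.array([[1,  1,  1,  1],
                        [1, -1,  1, -1],
                        [1,  1, -1, -1],
                        [1, -1, -1,  1]], dtype=float)

      amplitudes = np.array([0.8, 0.1, 0.4, 0.0])    # field strength from each beacon
      measured = amplitudes @ codes                  # the sensor sees the sum field
      measured += np.random.normal(0.0, 0.01, measured.shape)

      recovered = (codes @ measured) / codes.shape[1]  # despread: correlate per code
      print(recovered)   # approximately the per-beacon field amplitudes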

  9. Weight distributions for turbo codes using random and nonrandom permutations

    NASA Technical Reports Server (NTRS)

    Dolinar, S.; Divsalar, D.

    1995-01-01

    This article takes a preliminary look at the weight distributions achievable for turbo codes using random, nonrandom, and semirandom permutations. Due to the recursiveness of the encoders, it is important to distinguish between self-terminating and non-self-terminating input sequences. The non-self-terminating sequences have little effect on decoder performance, because they accumulate high encoded weight until they are artificially terminated at the end of the block. From probabilistic arguments based on selecting the permutations randomly, it is concluded that the self-terminating weight-2 data sequences are the most important consideration in the design of constituent codes; higher-weight self-terminating sequences have successively decreasing importance. Also, increasing the number of codes and, correspondingly, the number of permutations makes it more and more likely that the bad input sequences will be broken up by one or more of the permuters. It is possible to design nonrandom permutations that ensure that the minimum distance due to weight-2 input sequences grows roughly as the square root of (2N), where N is the block length. However, these nonrandom permutations amplify the bad effects of higher-weight inputs, and as a result they are inferior in performance to randomly selected permutations. But there are 'semirandom' permutations that perform nearly as well as the designed nonrandom permutations with respect to weight-2 input sequences and are not as susceptible to being foiled by higher-weight inputs.

  10. Robust video transmission with distributed source coded auxiliary channel.

    PubMed

    Wang, Jiajun; Majumdar, Abhik; Ramchandran, Kannan

    2009-12-01

    We propose a novel solution to the problem of robust, low-latency video transmission over lossy channels. Predictive video codecs, such as MPEG and H.26x, are very susceptible to prediction mismatch between encoder and decoder or "drift" when there are packet losses. These mismatches lead to a significant degradation in the decoded quality. To address this problem, we propose an auxiliary codec system that sends additional information alongside an MPEG or H.26x compressed video stream to correct for errors in decoded frames and mitigate drift. The proposed system is based on the principles of distributed source coding and uses the (possibly erroneous) MPEG/H.26x decoder reconstruction as side information at the auxiliary decoder. The distributed source coding framework depends upon knowing the statistical dependency (or correlation) between the source and the side information. We propose a recursive algorithm to analytically track the correlation between the original source frame and the erroneous MPEG/H.26x decoded frame. Finally, we propose a rate-distortion optimization scheme to allocate the rate used by the auxiliary encoder among the encoding blocks within a video frame. We implement the proposed system and present extensive simulation results that demonstrate significant gains in performance both visually and objectively (on the order of 2 dB in PSNR over forward error correction based solutions and 1.5 dB in PSNR over intrarefresh based solutions for typical scenarios) under tight latency constraints.

  11. Behavioral correlates of the distributed coding of spatial context.

    PubMed

    Anderson, Michael I; Killing, Sarah; Morris, Caitlin; O'Donoghue, Alan; Onyiagha, Dikennam; Stevenson, Rosemary; Verriotis, Madeleine; Jeffery, Kathryn J

    2006-01-01

    Hippocampal place cells respond heterogeneously to elemental changes of a compound spatial context, suggesting that they form a distributed code of context, whereby context information is shared across a population of neurons. The question arises as to what this distributed code might be useful for. The present study explored two possibilities: one, that it allows contexts with common elements to be disambiguated, and the other, that it allows a given context to be associated with more than one outcome. We used two naturalistic measures of context processing in rats, rearing and thigmotaxis (boundary-hugging), to explore how rats responded to contextual novelty and to relate this to the behavior of place cells. In experiment 1, rats showed dishabituation of rearing to a novel reconfiguration of familiar context elements, suggesting that they perceived the reconfiguration as novel, a behavior that parallels that of place cells in a similar situation. In experiment 2, rats were trained in a place preference task on an open-field arena. A change in the arena context triggered renewed thigmotaxis, and yet navigation continued unimpaired, indicating simultaneous representation of both the altered contextual and constant spatial cues. Place cells similarly exhibited a dual population of responses, consistent with the hypothesis that their activity underlies spatial behavior. Together, these experiments suggest that heterogeneous context encoding (or "partial remapping") by place cells may function to allow the flexible assignment of associations to contexts, a faculty that could be useful in episodic memory encoding. PMID:16921500

  12. Non-extensive trends in the size distribution of coding and non-coding DNA sequences in the human genome

    NASA Astrophysics Data System (ADS)

    Oikonomou, Th.; Provata, A.

    2006-03-01

    We study the primary DNA structure of four of the most completely sequenced human chromosomes (including chromosome 19, which is the densest in coding), using non-extensive statistics. We show that the exponents governing the spatial decay of the coding size distributions vary between 5.2 ≤ r ≤ 5.7 for the short scales and 1.45 ≤ q ≤ 1.50 for the large scales. In contrast, the exponents governing the spatial decay of the non-coding size distributions in these four chromosomes take the values 2.4 ≤ r ≤ 3.2 for the short scales and 1.50 ≤ q ≤ 1.72 for the large scales. These results, in particular the values of the tail exponent q, indicate the existence of correlations in the coding and non-coding size distributions, with a tendency for higher correlations in the non-coding DNA.
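    In non-extensive (Tsallis) statistics, tails such as those described above are commonly summarized by a q-exponential, and the tail exponent q can be estimated by fitting that form to a size histogram. The sketch below does this on synthetic Lomax-distributed sizes (for which the true q is about 1.33); it illustrates the fitting procedure only and uses neither the authors' data nor their analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def q_exponential(x, amp, q, kappa):
    """Tsallis q-exponential decay; reduces to amp*exp(-x/kappa) as q -> 1."""
    return amp * np.power(1.0 + (q - 1.0) * x / kappa, -1.0 / (q - 1.0))

# synthetic segment sizes: Lomax(a=2) scaled by 50 is a q-exponential with
# q = 1 + 1/(a+1) ~ 1.33 and kappa = 50*(q-1) ~ 16.7
rng = np.random.default_rng(1)
sizes = rng.pareto(2.0, size=100_000) * 50.0

counts, edges = np.histogram(sizes, bins=np.logspace(0, 4, 60), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
mask = counts > 0

popt, _ = curve_fit(q_exponential, centers[mask], counts[mask],
                    p0=(counts[mask][0], 1.5, 20.0),
                    bounds=([0.0, 1.0001, 1e-3], [np.inf, 3.0, 1e4]))
print("fitted tail exponent q = %.3f, kappa = %.1f" % (popt[1], popt[2]))
```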

  13. 28 CFR 36.607 - Guidance concerning model codes.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 28 Judicial Administration 1 2013-07-01 2013-07-01 false Guidance concerning model codes. 36.607... Codes § 36.607 Guidance concerning model codes. Upon application by an authorized representative of a private entity responsible for developing a model code, the Assistant Attorney General may review...

  14. 28 CFR 36.607 - Guidance concerning model codes.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 28 Judicial Administration 1 2014-07-01 2014-07-01 false Guidance concerning model codes. 36.607... Codes § 36.607 Guidance concerning model codes. Upon application by an authorized representative of a private entity responsible for developing a model code, the Assistant Attorney General may review...

  15. 28 CFR 36.607 - Guidance concerning model codes.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 28 Judicial Administration 1 2012-07-01 2012-07-01 false Guidance concerning model codes. 36.607... Codes § 36.607 Guidance concerning model codes. Upon application by an authorized representative of a private entity responsible for developing a model code, the Assistant Attorney General may review...

  16. 28 CFR 36.607 - Guidance concerning model codes.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 28 Judicial Administration 1 2011-07-01 2011-07-01 false Guidance concerning model codes. 36.607... Codes § 36.607 Guidance concerning model codes. Upon application by an authorized representative of a private entity responsible for developing a model code, the Assistant Attorney General may review...

  17. Pressure distribution based optimization of phase-coded acoustical vortices

    SciTech Connect

    Zheng, Haixiang; Gao, Lu; Dai, Yafei; Ma, Qingyu; Zhang, Dong

    2014-02-28

    Based on the acoustic radiation of a point source, the physical mechanism of phase-coded acoustical vortices is investigated through derivations of the acoustic pressure and vibration velocity formulae. Various factors that affect the optimization of acoustical vortices are analyzed. Numerical simulations of the axial, radial, and circular pressure distributions are performed with different source numbers, frequencies, and axial distances. The results show that the acoustic pressure of acoustical vortices is linearly proportional to the source number, and that lower fluctuations of the circular pressure distribution can be produced with more sources. With increasing source frequency, the acoustic pressure of acoustical vortices increases while the vortex radius decreases. Meanwhile, a larger vortex radius with reduced acoustic pressure is obtained at longer axial distances. With the 6-source experimental system, circular and radial pressure distributions at various frequencies and axial distances have been measured, and they are in good agreement with the numerical simulations. The favorable results for the acoustic pressure distributions provide a theoretical basis for further studies of acoustical vortices.
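    As a minimal numerical illustration of the superposition picture described above (not the authors' formulation), the sketch below sums the fields of N monopole point sources placed on a ring, with phases stepped by 2π·l/N, and evaluates the transverse pressure distribution at a chosen axial distance; the ring radius, frequency, and source number are arbitrary assumptions.

```python
import numpy as np

def vortex_pressure(xg, yg, z, n_src, freq, charge=1, ring_radius=0.05, c=343.0):
    """Complex pressure (arbitrary units) from n_src monopole sources equally
    spaced on a ring, with phases stepped by 2*pi*charge/n_src, evaluated on
    the transverse plane at axial distance z."""
    k = 2.0 * np.pi * freq / c
    p = np.zeros_like(xg, dtype=complex)
    for n in range(n_src):
        theta = 2.0 * np.pi * n / n_src
        xs, ys = ring_radius * np.cos(theta), ring_radius * np.sin(theta)
        r = np.sqrt((xg - xs) ** 2 + (yg - ys) ** 2 + z ** 2)
        p += np.exp(1j * (k * r + charge * theta)) / r     # 1/r spherical spreading
    return p

x = np.linspace(-0.05, 0.05, 201)
xg, yg = np.meshgrid(x, x)
p = vortex_pressure(xg, yg, z=0.1, n_src=6, freq=40_000.0)
# the on-axis null and the ring-shaped maximum are the vortex signature
print("on-axis |p| = %.3f, peak |p| = %.3f" % (abs(p[100, 100]), np.abs(p).max()))
```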

  18. Rapid installation of numerical models in multiple parent codes

    SciTech Connect

    Brannon, R.M.; Wong, M.K.

    1996-10-01

    A set of "model interface guidelines", called MIG, is offered as a means to more rapidly install numerical models (such as stress-strain laws) into any parent code (hydrocode, finite element code, etc.) without having to modify the model subroutines. The model developer (who creates the model package in compliance with the guidelines) specifies the model's input and storage requirements in a standardized way. For portability, database management (such as saving user inputs and field variables) is handled by the parent code. To date, MIG has proved viable in beta installations of several diverse models in vectorized and parallel codes written in different computer languages. A MIG-compliant model can be installed in different codes without modifying the model's subroutines. By maintaining one model for many codes, MIG facilitates code-to-code comparisons and reduces duplication of effort, potentially reducing the cost of installing and sharing models.

  19. Code Differentiation for Hydrodynamic Model Optimization

    SciTech Connect

    Henninger, R.J.; Maudlin, P.J.

    1999-06-27

    Use of a hydrodynamics code for experimental data fitting purposes (an optimization problem) requires information about how a computed result changes when the model parameters change. These so-called sensitivities provide the gradient that determines the search direction for modifying the parameters to find an optimal result. Here, the authors apply code-based automatic differentiation (AD) techniques in the forward and adjoint modes to two problems with 12 parameters to obtain these gradients and compare the computational efficiency and accuracy of the various methods. They fit the pressure trace from a one-dimensional flyer-plate experiment and examine the accuracy for a two-dimensional jet-formation problem. For the flyer-plate experiment, the adjoint mode requires similar or less computer time than the forward methods. Additional parameters will not change the adjoint mode run time appreciably, which is a distinct advantage for this method. Obtaining 'accurate' sensitivities for the jet problem parameters remains problematic.

  20. Reflectance Prediction Modelling for Residual-Based Hyperspectral Image Coding

    PubMed Central

    Xiao, Rui; Gao, Junbin; Bossomaier, Terry

    2016-01-01

    A Hyperspectral (HS) image provides observational powers beyond human vision capability but represents more than 100 times the data of a traditional image. To transmit and store the huge volume of an HS image, we argue that a fundamental shift is required from the existing “original pixel intensity”-based coding approaches using traditional image coders (e.g., JPEG2000) to “residual”-based approaches using a video coder, for better compression performance. A modified video coder is required to exploit spatial-spectral redundancy using pixel-level reflectance modelling, because the spectral and spatial characteristics of HS images differ from those of traditional videos. In this paper, a novel coding framework for HS images using Reflectance Prediction Modelling (RPM) within the latest video coding standard, High Efficiency Video Coding (HEVC), is proposed. An HS image presents a wealth of data in which every pixel is considered a vector across the spectral bands. By quantitative comparison and analysis of the pixel vector distribution along spectral bands, we conclude that modelling can predict the distribution and correlation of the pixel vectors for different bands. To exploit the distribution of the known pixel vectors, we estimate a prediction of the current spectral band from the previous bands using Gaussian mixture-based modelling. The predicted band is used as an additional reference band, together with the immediately previous band, when we apply HEVC. Every spectral band of an HS image is treated as an individual frame of a video. We compare the proposed method with mainstream encoders. The experimental results are validated on three types of HS datasets with different wavelength ranges. The proposed method outperforms the existing mainstream HS encoders in terms of rate-distortion performance of HS image compression. PMID:27695102
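    The following sketch is a simplified stand-in for the RPM idea: it fits a Gaussian mixture to joint (previous-band, current-band) pixel intensities and predicts the current band as the conditional mean, leaving a small residual to be coded. It omits the HEVC integration entirely, and the synthetic bands and component count are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_band_predictor(prev_band, cur_band, n_components=3):
    """Fit a GMM to the joint (previous-band, current-band) pixel intensities."""
    xy = np.column_stack([prev_band.ravel(), cur_band.ravel()]).astype(float)
    return GaussianMixture(n_components=n_components, covariance_type="full",
                           random_state=0).fit(xy)

def predict_band(gmm, prev_band):
    """Conditional mean E[current | previous] under the fitted 2-D mixture."""
    x = prev_band.ravel().astype(float)
    num, den = np.zeros_like(x), np.zeros_like(x)
    for w, mu, cov in zip(gmm.weights_, gmm.means_, gmm.covariances_):
        sxx, sxy = cov[0, 0], cov[0, 1]
        # unnormalized Gaussian density in the conditioning variable
        # (the common 1/sqrt(2*pi) factor cancels in the num/den ratio)
        resp = w * np.exp(-0.5 * (x - mu[0]) ** 2 / sxx) / np.sqrt(sxx)
        num += resp * (mu[1] + sxy / sxx * (x - mu[0]))
        den += resp
    return (num / den).reshape(prev_band.shape)

# synthetic stand-in for two adjacent spectral bands of a hyperspectral cube
rng = np.random.default_rng(3)
band_prev = rng.uniform(0.0, 1.0, size=(64, 64))
band_cur = 0.8 * band_prev + 0.1 + rng.normal(0.0, 0.02, size=band_prev.shape)

gmm = fit_band_predictor(band_prev, band_cur)
residual = band_cur - predict_band(gmm, band_prev)
print("residual energy / original energy = %.4f"
      % (np.sum(residual ** 2) / np.sum(band_cur ** 2)))
```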

  1. Adaptive distributed video coding with correlation estimation using expectation propagation

    NASA Astrophysics Data System (ADS)

    Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel

    2012-10-01

    Distributed video coding (DVC) is rapidly increasing in popularity because it shifts complexity from the encoder to the decoder while, at least in theory, incurring no loss in compression performance. In contrast with conventional video codecs, the inter-frame correlation in DVC is exploited at the decoder, based on the received syndromes of the Wyner-Ziv (WZ) frame and a side information (SI) frame generated from other frames available only at the decoder. However, the ultimate decoding performance of DVC rests on the assumption that perfect knowledge of the correlation statistics between the WZ and SI frames is available at the decoder. Therefore, the ability to obtain a good statistical correlation estimate is becoming increasingly important in practical DVC implementations. Generally, the existing correlation estimation methods in DVC can be classified into two main types: pre-estimation, where estimation starts before decoding, and on-the-fly (OTF) estimation, where estimation is refined iteratively during decoding. As potential changes between frames might be unpredictable or dynamic, OTF estimation methods usually outperform pre-estimation techniques, at the cost of increased decoding complexity (e.g., for sampling methods). In this paper, we propose a low-complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed on the fly, jointly with decoding of the factor-graph-based DVC code. Among approximate inference methods, EP generally offers a better tradeoff between accuracy and complexity. Experimental results show that our proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other schemes without correlation tracking, and achieves comparable decoding performance with significantly lower complexity compared with sampling methods.

  2. Adaptive Distributed Video Coding with Correlation Estimation using Expectation Propagation.

    PubMed

    Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel

    2012-10-15

    Distributed video coding (DVC) is rapidly increasing in popularity because it shifts complexity from the encoder to the decoder while, at least in theory, incurring no loss in compression performance. In contrast with conventional video codecs, the inter-frame correlation in DVC is exploited at the decoder, based on the received syndromes of the Wyner-Ziv (WZ) frame and a side information (SI) frame generated from other frames available only at the decoder. However, the ultimate decoding performance of DVC rests on the assumption that perfect knowledge of the correlation statistics between the WZ and SI frames is available at the decoder. Therefore, the ability to obtain a good statistical correlation estimate is becoming increasingly important in practical DVC implementations. Generally, the existing correlation estimation methods in DVC can be classified into two main types: pre-estimation, where estimation starts before decoding, and on-the-fly (OTF) estimation, where estimation is refined iteratively during decoding. As potential changes between frames might be unpredictable or dynamic, OTF estimation methods usually outperform pre-estimation techniques, at the cost of increased decoding complexity (e.g., for sampling methods). In this paper, we propose a low-complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed on the fly, jointly with decoding of the factor-graph-based DVC code. Among approximate inference methods, EP generally offers a better tradeoff between accuracy and complexity. Experimental results show that our proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other schemes without correlation tracking, and achieves comparable decoding performance with significantly lower complexity compared with sampling methods.

  3. Models of distributive justice.

    PubMed

    Wolff, Jonathan

    2007-01-01

    Philosophical disagreement about justice rages over at least two questions. The most immediate is a substantial question, concerning the conditions under which particular distributive arrangements can be said to be just or unjust. The second, deeper, question concerns the nature of justice itself. What is justice? Here we can distinguish three views. First, justice as mutual advantage sees justice as essentially a matter of the outcome of a bargain. There are times when two parties can both be better off by making some sort of agreement. Justice, on this view, concerns the distribution of the benefits and burdens of the agreement. Second, justice as reciprocity takes a different approach, looking not at bargaining but at the idea of a fair return or just price, attempting to capture the idea of justice as equal exchange. Finally justice as impartiality sees justice as 'taking the other person's point of view' asking 'how would you like it if it happened to you?' Each model has significantly different consequences for the question of when issues of justice arise and how they should be settled. It is interesting to consider whether any of these models of justice could regulate behaviour between non-human animals.

  4. FPGA based digital phase-coding quantum key distribution system

    NASA Astrophysics Data System (ADS)

    Lu, XiaoMing; Zhang, LiJun; Wang, YongGang; Chen, Wei; Huang, DaJun; Li, Deng; Wang, Shuang; He, DeYong; Yin, ZhenQiang; Zhou, Yu; Hui, Cong; Han, ZhengFu

    2015-12-01

    Quantum key distribution (QKD) is a technology with the potential capability to achieve information-theoretic security. Phase coding is an important approach to developing practical QKD systems over fiber channels. In order to improve the phase-coding modulation rate, we proposed a new digital modulation method in this paper and constructed a compact and robust QKD prototype, using components currently available in our lab, to demonstrate the effectiveness of the method. The system was deployed in a laboratory environment over a 50 km fiber and operated continuously for 87 h without manual intervention. The quantum bit error rate (QBER) of the system was stable, with an average value of 3.22%, and the secure key generation rate was 8.91 kbps. Although the modulation rate of the photons in the demo system was only 200 MHz, limited by the Faraday-Michelson interferometer (FMI) structure, the proposed method and the field programmable gate array (FPGA) based electronics scheme have great potential for high-speed QKD systems with modulation rates of gigabits per second.

  5. Galactic Cosmic Ray Event-Based Risk Model (GERM) Code

    NASA Technical Reports Server (NTRS)

    Cucinotta, Francis A.; Plante, Ianik; Ponomarev, Artem L.; Kim, Myung-Hee Y.

    2013-01-01

    This software describes the transport and energy deposition of galactic cosmic rays passing through astronaut tissues during space travel, or of heavy ion beams in patients in cancer therapy. Space radiation risk is a probability distribution, and time-dependent biological events must be accounted for in the physical description of space radiation transport in tissues and cells. A stochastic model can calculate the probability density directly, without unverified assumptions about the shape of the probability density function. The prior art of transport codes calculates the average flux and dose of particles behind spacecraft and tissue shielding. Because of the signaling times for activation and relaxation in the cell and tissue, a transport code must describe the temporal and microspatial density functions needed to correlate DNA and oxidative damage with non-targeted effects such as signaling and bystander effects. These are ignored by, or impossible in, the prior art. The GERM code provides scientists with data interpretation of experiments; modeling of the beam line, shielding of target samples, and sample holders; and estimation of basic physical and biological outputs of their experiments. For mono-energetic ion beams, basic physical and biological properties are calculated for a selected ion type, such as kinetic energy, mass, charge number, absorbed dose, or fluence. Evaluated quantities are linear energy transfer (LET), range (R), absorption and fragmentation cross-sections, and the probability of nuclear interactions after 1 or 5 cm of water-equivalent material. In addition, a set of biophysical properties is evaluated, such as the Poisson distribution for a specified cellular area, cell survival curves, and DNA damage yields per cell. Also, the GERM code calculates the radiation transport of the beam line for either a fixed number of user-specified depths or at multiple positions along the Bragg curve of the particle in a selected material. The GERM code makes the numerical estimates of basic

  6. Measurement error and outcome distributions: Methodological issues in regression analyses of behavioral coding data.

    PubMed

    Holsclaw, Tracy; Hallgren, Kevin A; Steyvers, Mark; Smyth, Padhraic; Atkins, David C

    2015-12-01

    Behavioral coding is increasingly used for studying mechanisms of change in psychosocial treatments for substance use disorders (SUDs). However, behavioral coding data typically include features that can be problematic in regression analyses, including measurement error in independent variables, non-normal distributions of count outcome variables, and conflation of predictor and outcome variables with third variables, such as session length. Methodological research in econometrics has shown that these issues can lead to biased parameter estimates, inaccurate standard errors, and increased Type I and Type II error rates, yet these statistical issues are not widely known within SUD treatment research or, more generally, within psychotherapy coding research. Using minimally technical language intended for a broad audience of SUD treatment researchers, the present paper illustrates the ways in which these data issues are problematic. We draw on real-world data and simulation-based examples to illustrate how these data features can bias estimation of parameters and interpretation of models. A weighted negative binomial regression is introduced as an alternative to ordinary linear regression that appropriately addresses the data characteristics common to SUD treatment behavioral coding data. We conclude by demonstrating how to use and interpret these models with data from a study of motivational interviewing. SPSS and R syntax for weighted negative binomial regression models is included in online supplemental materials.
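    The authors supply SPSS and R syntax; purely as an illustration of the modeling idea (not the authors' code), the sketch below fits a weighted negative binomial GLM in Python with statsmodels on simulated over-dispersed count data. The predictors, weights, and dispersion value are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 500

# hypothetical predictors: a coded therapist-behavior score and session length (min)
behavior_score = rng.normal(0.0, 1.0, n)
session_length = rng.uniform(30.0, 90.0, n)

# over-dispersed count outcome (e.g., client change-talk utterances)
mu = np.exp(0.5 + 0.4 * behavior_score + 0.01 * session_length)
y = rng.negative_binomial(n=2.0, p=2.0 / (2.0 + mu))

# analytic weights, e.g., reflecting coder reliability (placeholder values)
weights = rng.uniform(0.5, 1.5, n)

X = sm.add_constant(np.column_stack([behavior_score, session_length]))
model = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=1.0),
               var_weights=weights)
print(model.fit().summary())
```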

  7. Measurement error and outcome distributions: Methodological issues in regression analyses of behavioral coding data.

    PubMed

    Holsclaw, Tracy; Hallgren, Kevin A; Steyvers, Mark; Smyth, Padhraic; Atkins, David C

    2015-12-01

    Behavioral coding is increasingly used for studying mechanisms of change in psychosocial treatments for substance use disorders (SUDs). However, behavioral coding data typically include features that can be problematic in regression analyses, including measurement error in independent variables, non-normal distributions of count outcome variables, and conflation of predictor and outcome variables with third variables, such as session length. Methodological research in econometrics has shown that these issues can lead to biased parameter estimates, inaccurate standard errors, and increased Type I and Type II error rates, yet these statistical issues are not widely known within SUD treatment research or, more generally, within psychotherapy coding research. Using minimally technical language intended for a broad audience of SUD treatment researchers, the present paper illustrates the ways in which these data issues are problematic. We draw on real-world data and simulation-based examples to illustrate how these data features can bias estimation of parameters and interpretation of models. A weighted negative binomial regression is introduced as an alternative to ordinary linear regression that appropriately addresses the data characteristics common to SUD treatment behavioral coding data. We conclude by demonstrating how to use and interpret these models with data from a study of motivational interviewing. SPSS and R syntax for weighted negative binomial regression models is included in online supplemental materials. PMID:26098126

  8. Plutonium explosive dispersal modeling using the MACCS2 computer code

    SciTech Connect

    Steele, C.M.; Wald, T.L.; Chanin, D.I.

    1998-11-01

    The purpose of this paper is to derive the necessary parameters to establish a defensible methodology for explosive dispersal modeling of respirable plutonium using Gaussian methods. A particular code, MACCS2, has been chosen for this modeling effort due to its application of sophisticated meteorological statistical sampling in accordance with the philosophy of Nuclear Regulatory Commission (NRC) Regulatory Guide 1.145, "Atmospheric Dispersion Models for Potential Accident Consequence Assessments at Nuclear Power Plants". A second advantage supporting the selection of the MACCS2 code for modeling purposes is that meteorological data sets are readily available at most Department of Energy (DOE) and NRC sites. This particular MACCS2 modeling effort focuses on the calculation of respirable doses and not ground deposition. Once the necessary parameters for the MACCS2 modeling are developed and presented, the model is benchmarked against empirical test data from the Double Tracks shot of Project Roller Coaster (Shreve 1965) and applied to a hypothetical plutonium explosive dispersal scenario. Further modeling with the MACCS2 code is performed to determine a defensible method of treating the effects of building-structure interaction on the respirable fraction distribution as a function of height. These results are related to the Clean Slate 2 and Clean Slate 3 bunkered shots of Project Roller Coaster. Lastly, a method is presented to determine the peak 99.5% sector doses on an irregular site boundary in the manner specified in NRC Regulatory Guide 1.145 (1983). Parametric analyses are performed on the major analytic assumptions in the MACCS2 model to define the potential errors that are possible in using this methodology.
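    MACCS2 itself couples Gaussian dispersion with statistical meteorological sampling and dose conversion; none of that is reproduced here. The sketch below only shows the basic reflected Gaussian plume relation that underlies such respirable-concentration estimates, with placeholder dispersion-parameter curves rather than any code-specific values.

```python
import numpy as np

def gaussian_plume(y, z, Q, u, H, sigma_y, sigma_z):
    """Steady-state Gaussian plume concentration at crosswind offset y and height z
    for source strength Q, wind speed u and effective release height H, including
    the ground-reflection (image source) term."""
    return (Q / (2.0 * np.pi * u * sigma_y * sigma_z)
            * np.exp(-y ** 2 / (2.0 * sigma_y ** 2))
            * (np.exp(-(z - H) ** 2 / (2.0 * sigma_z ** 2))
               + np.exp(-(z + H) ** 2 / (2.0 * sigma_z ** 2))))

# illustrative power-law dispersion parameters (placeholders, not MACCS2 values)
x = np.array([100.0, 300.0, 1000.0, 3000.0])            # downwind distance, m
sigma_y, sigma_z = 0.22 * x ** 0.9, 0.20 * x ** 0.85

chi = gaussian_plume(y=0.0, z=1.5, Q=1e-3, u=3.0, H=2.0,
                     sigma_y=sigma_y, sigma_z=sigma_z)   # kg/s source, kg/m^3 out
for xi, ci in zip(x, chi):
    print(f"x = {xi:6.0f} m   centerline near-ground concentration = {ci:.3e} kg/m^3")
```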

  9. Software Model Checking Without Source Code

    NASA Technical Reports Server (NTRS)

    Chaki, Sagar; Ivers, James

    2009-01-01

    We present a framework, called AIR, for verifying safety properties of assembly language programs via software model checking. AIR extends the applicability of predicate abstraction and counterexample guided abstraction refinement to the automated verification of low-level software. By working at the assembly level, AIR allows verification of programs for which source code is unavailable (such as legacy and COTS software) and programs that use features (such as pointers, structures, and object-orientation) that are problematic for source-level software verification tools. In addition, AIR makes no assumptions about the underlying compiler technology. We have implemented a prototype of AIR and present encouraging results on several non-trivial examples.

  10. Modeling Proton- and Light Ion-Induced Reactions at Low Energies in the MARS15 Code

    SciTech Connect

    Rakhno, I. L.; Mokhov, N. V.; Gudima, K. K.

    2015-04-25

    An implementation of both the ALICE code and the TENDL evaluated nuclear data library for describing nuclear reactions induced by low-energy projectiles in the Monte Carlo code MARS15 is presented. Comparisons between modeling results and experimental data on reaction cross sections and secondary-particle distributions are shown.

  11. Simulink Code Generation: Tutorial for Generating C Code from Simulink Models using Simulink Coder

    NASA Technical Reports Server (NTRS)

    MolinaFraticelli, Jose Carlos

    2012-01-01

    This document explains all the necessary steps in order to generate optimized C code from Simulink Models. This document also covers some general information on good programming practices, selection of variable types, how to organize models and subsystems, and finally how to test the generated C code and compare it with data from MATLAB.

  12. On the binary weight distribution of some Reed-Solomon codes

    NASA Technical Reports Server (NTRS)

    Lin, S.

    1985-01-01

    Consider an (n,k) linear code with symbols from GF(2^m). If each code symbol is represented by an m-tuple over GF(2) using a certain basis for GF(2^m), a binary (nm,km) linear code is obtained. The weight distribution of a binary linear code obtained in this manner is investigated. Weight enumerators are presented for binary linear codes obtained from Reed-Solomon codes over GF(2^m) generated by the polynomials (X-alpha), (X-1)(X-alpha), (X-alpha)(X-alpha^2) and (X-1)(X-alpha)(X-alpha^2), and for their extended codes, where alpha is a primitive element of GF(2^m). Binary codes derived from Reed-Solomon codes are often used for correcting multiple bursts of errors.
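    Computing the binary weight enumerators of the specific Reed-Solomon-derived codes requires GF(2^m) arithmetic and is not attempted here. As a generic illustration of what a weight distribution is, the sketch below enumerates all codewords of a small binary linear code from its generator matrix; the (7,4) Hamming code is used only as a convenient example.

```python
import itertools
import numpy as np

def weight_distribution(G):
    """Exact weight distribution of the binary linear code generated by G
    (a k x n matrix over GF(2)), by enumerating all 2^k codewords."""
    G = np.asarray(G) % 2
    k, n = G.shape
    counts = [0] * (n + 1)
    for msg in itertools.product((0, 1), repeat=k):
        codeword = np.dot(msg, G) % 2
        counts[int(codeword.sum())] += 1
    return counts

# generator matrix of the (7,4) Hamming code as a small worked example
G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
for w, a in enumerate(weight_distribution(G)):
    if a:
        print(f"A_{w} = {a}")        # expected: A_0=1, A_3=7, A_4=7, A_7=1
```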

  13. Simple models for reading neuronal population codes.

    PubMed Central

    Seung, H S; Sompolinsky, H

    1993-01-01

    In many neural systems, sensory information is distributed throughout a population of neurons. We study simple neural network models for extracting this information. The inputs to the networks are the stochastic responses of a population of sensory neurons tuned to directional stimuli. The performance of each network model in psychophysical tasks is compared with that of the optimal maximum likelihood procedure. As a model of direction estimation in two dimensions, we consider a linear network that computes a population vector. Its performance depends on the width of the population tuning curves and is maximal at an optimal width, which increases with the level of background activity. Although for narrowly tuned neurons the performance of the population vector is significantly inferior to that of maximum likelihood estimation, the difference between the two is small when the tuning is broad. For direction discrimination, we consider two models: a perceptron with fully adaptive weights and a network made by adding an adaptive second layer to the population vector network. We calculate the error rates of these networks after exhaustive training to a particular direction. By testing on the full range of possible directions, the extent of transfer of training to novel stimuli can be calculated. It is found that for threshold linear networks the transfer of perceptual learning is nonmonotonic. Although performance deteriorates away from the training stimulus, it peaks again at an intermediate angle. This nonmonotonicity provides an important psychophysical test of these models. PMID:8248166
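    A minimal simulation in the spirit of the comparison described above (with assumed cosine tuning, Poisson spiking, and parameter values that are not taken from the paper): it decodes direction with a population vector and with grid-search maximum likelihood and reports the RMS error of each.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, r_max, r_bg = 64, 20.0, 2.0                  # peak and background rates
preferred = np.linspace(0.0, 2.0 * np.pi, n_neurons, endpoint=False)

def tuning(theta):
    """Rectified-cosine tuning curves on top of a background rate."""
    return r_bg + r_max * np.maximum(np.cos(theta - preferred), 0.0)

def population_vector(spikes):
    """Direction of the spike-count-weighted sum of preferred-direction vectors."""
    return np.arctan2(np.sum(spikes * np.sin(preferred)),
                      np.sum(spikes * np.cos(preferred))) % (2.0 * np.pi)

grid = np.linspace(0.0, 2.0 * np.pi, 720, endpoint=False)
grid_rates = np.array([tuning(g) for g in grid])

def max_likelihood(spikes):
    """Grid-search Poisson maximum-likelihood estimate of the direction."""
    loglik = (spikes * np.log(grid_rates) - grid_rates).sum(axis=1)
    return grid[np.argmax(loglik)]

true_theta, err_pv, err_ml = 1.3, [], []
for _ in range(500):
    spikes = rng.poisson(tuning(true_theta))            # one decoding window
    for est, bucket in ((population_vector(spikes), err_pv),
                        (max_likelihood(spikes), err_ml)):
        d = (est - true_theta + np.pi) % (2.0 * np.pi) - np.pi
        bucket.append(d ** 2)
print("RMS error  population vector: %.3f rad   maximum likelihood: %.3f rad"
      % (np.sqrt(np.mean(err_pv)), np.sqrt(np.mean(err_ml))))
```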

  14. Astrophysical Plasmas: Codes, Models, and Observations

    NASA Astrophysics Data System (ADS)

    Canto, Jorge; Rodriguez, Luis F.

    2000-05-01

    The conference Astrophysical Plasmas: Codes, Models, and Observations was aimed at discussing the most recent advances, and some of the avenues for future work, in the field of cosmical plasmas. It was held during the week of October 25th to 29th 1999, at the Centro Nacional de las Artes (CNA) in Mexico City, Mexico, a modern and impressive center of theaters and schools devoted to the performing arts. This was an excellent setting for reviewing the present status of observational (both on earth and in space) and theoretical research, as well as some of the recent advances of laboratory research that are relevant to astrophysics. The demography of the meeting was impressive: 128 participants from 12 countries on 4 continents; a large fraction of them, 29%, were women, and most were young researchers (either recent Ph.D.s or graduate students). This created a very lively and friendly atmosphere that made it easy to move from the ionization of the Universe and high-redshift absorbers, to Active Galactic Nuclei (AGNs) and X-rays from galaxies, to the gas in the Magellanic Clouds and our Galaxy, to the evolution of H II regions and Planetary Nebulae (PNe), and to the details of plasmas in the Solar System and the lab. All these topics were well covered with 23 invited talks, 43 contributed talks, and 22 posters. Most of them are contained in these proceedings, in the same order as the presentations.

  15. 24 CFR 200.926c - Model code provisions for use in partially accepted code jurisdictions.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false Model code provisions for use in partially accepted code jurisdictions. 200.926c Section 200.926c Housing and Urban Development Regulations Relating to Housing and Urban Development (Continued) OFFICE OF ASSISTANT SECRETARY FOR HOUSING-FEDERAL HOUSING COMMISSIONER, DEPARTMENT...

  16. 24 CFR 200.926c - Model code provisions for use in partially accepted code jurisdictions.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 24 Housing and Urban Development 2 2012-04-01 2012-04-01 false Model code provisions for use in... Portions of the CABO One and Two Family Dwelling Code, 1992 Edition, including the 1993 amendments, with... Chapter 3. (e) Materials standards Chapter 26. (f) Construction components Part III. (g) Glass Chapter...

  17. Review and verification of CARE 3 mathematical model and code

    NASA Technical Reports Server (NTRS)

    Rose, D. M.; Altschul, R. E.; Manke, J. W.; Nelson, D. L.

    1983-01-01

    The CARE-III mathematical model and code verification performed by Boeing Computer Services were documented. The mathematical model was verified for permanent and intermittent faults. The transient fault model was not addressed. The code verification was performed on CARE-III, Version 3. A CARE III Version 4, which corrects deficiencies identified in Version 3, is being developed.

  18. Distributed Estimation, Coding, and Scheduling in Wireless Visual Sensor Networks

    ERIC Educational Resources Information Center

    Yu, Chao

    2013-01-01

    In this thesis, we consider estimation, coding, and sensor scheduling for energy efficient operation of wireless visual sensor networks (VSN), which consist of battery-powered wireless sensors with sensing (imaging), computation, and communication capabilities. The competing requirements for applications of these wireless sensor networks (WSN)…

  19. Numerical MHD codes for modeling astrophysical flows

    NASA Astrophysics Data System (ADS)

    Koldoba, A. V.; Ustyugova, G. V.; Lii, P. S.; Comins, M. L.; Dyda, S.; Romanova, M. M.; Lovelace, R. V. E.

    2016-05-01

    We describe a Godunov-type magnetohydrodynamic (MHD) code based on the Miyoshi and Kusano (2005) solver which can be used to solve various astrophysical hydrodynamic and MHD problems. The energy equation is in the form of entropy conservation. The code has been implemented on several different coordinate systems: 2.5D axisymmetric cylindrical coordinates, 2D Cartesian coordinates, 2D plane polar coordinates, and fully 3D cylindrical coordinates. Viscosity and diffusivity are implemented in the code to control the accretion rate in the disk and the rate of penetration of the disk matter through the magnetic field lines. The code has been utilized for the numerical investigations of a number of different astrophysical problems, several examples of which are shown.

  20. Status report on the THROHPUT transient heat pipe modeling code

    SciTech Connect

    Hall, M.L.; Merrigan, M.A.; Reid, R.S.

    1993-11-01

    Heat pipes are structures which transport heat by the evaporation and condensation of a working fluid, giving them a high effective thermal conductivity. Many space-based uses for heat pipes have been suggested, and high temperature heat pipes using liquid metals as working fluids are especially attractive for these purposes. These heat pipes are modeled by the THROHPUT code (THROHPUT is an acronym for Thermal Hydraulic Response Of Heat Pipes Under Transients and is pronounced like "throughput"). Improvements have been made to the THROHPUT code which models transient thermohydraulic heat pipe behavior. The original code was developed as a doctoral thesis research code by Hall. The current emphasis has been shifted from research into the numerical modeling to the development of a robust production code. Several modeling obstacles that were present in the original code have been eliminated, and several additional features have been added.

  1. Energy standards and model codes development, adoption, implementation, and enforcement

    SciTech Connect

    Conover, D.R.

    1994-08-01

    This report provides an overview of the energy standards and model codes process for the voluntary sector within the United States. The report was prepared by Pacific Northwest Laboratory (PNL) for the Building Energy Standards Program and is intended to be used as a primer or reference on this process. Building standards and model codes that address energy have been developed by organizations in the voluntary sector since the early 1970s. These standards and model codes provide minimum energy-efficient design and construction requirements for new buildings and, in some instances, existing buildings. The first step in the process is developing new or revising existing standards or codes. There are two overall differences between standards and codes. Energy standards are developed by a consensus process and are revised as needed. Model codes are revised on a regular annual cycle through a public hearing process. In addition to these overall differences, the specific steps in developing/revising energy standards differ from model codes. These energy standards or model codes are then available for adoption by states and local governments. Typically, energy standards are adopted by or adopted into model codes. Model codes are in turn adopted by states through either legislation or regulation. Enforcement is essential to the implementation of energy standards and model codes. Low-rise residential construction is generally evaluated for compliance at the local level, whereas state agencies tend to be more involved with other types of buildings. Low-rise residential buildings also may be more easily evaluated for compliance because the governing requirements tend to be less complex than for commercial buildings.

  2. 40 CFR 194.23 - Models and computer codes.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... COMPLIANCE WITH THE 40 CFR PART 191 DISPOSAL REGULATIONS Compliance Certification and Re-certification General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... 40 Protection of Environment 25 2014-07-01 2014-07-01 false Models and computer codes....

  3. 40 CFR 194.23 - Models and computer codes.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... COMPLIANCE WITH THE 40 CFR PART 191 DISPOSAL REGULATIONS Compliance Certification and Re-certification General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... 40 Protection of Environment 25 2011-07-01 2011-07-01 false Models and computer codes....

  4. 40 CFR 194.23 - Models and computer codes.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... COMPLIANCE WITH THE 40 CFR PART 191 DISPOSAL REGULATIONS Compliance Certification and Re-certification General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... 40 Protection of Environment 26 2013-07-01 2013-07-01 false Models and computer codes....

  5. 40 CFR 194.23 - Models and computer codes.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... COMPLIANCE WITH THE 40 CFR PART 191 DISPOSAL REGULATIONS Compliance Certification and Re-certification General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... 40 Protection of Environment 26 2012-07-01 2011-07-01 true Models and computer codes....

  6. 40 CFR 194.23 - Models and computer codes.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... COMPLIANCE WITH THE 40 CFR PART 191 DISPOSAL REGULATIONS Compliance Certification and Re-certification General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Models and computer codes....

  7. CFD code evaluation for internal flow modeling

    NASA Technical Reports Server (NTRS)

    Chung, T. J.

    1990-01-01

    Research on computational fluid dynamics (CFD) code evaluation, with emphasis on supercomputing in reacting flows, is discussed. Advantages of unstructured grids, multigrids, adaptive methods, improved flow solvers, vector processing, parallel processing, and reduction of memory requirements are discussed. As examples, applications of supercomputing to the reacting-flow Navier-Stokes equations, including shock waves and turbulence, and to combustion instability problems associated with solid and liquid propellants are included. Evaluation of codes developed by other organizations is not included. Instead, the basic criteria for accuracy and efficiency have been established, and some applications to rocket combustion have been made. Research toward an ultimate goal, the most accurate and efficient CFD code, is in progress and will continue for years to come.

  8. Utilities for master source code distribution: MAX and Friends

    NASA Technical Reports Server (NTRS)

    Felippa, Carlos A.

    1988-01-01

    MAX is a program for the manipulation of FORTRAN master source code (MSC). This is a technique by which one maintains one and only one master copy of a FORTRAN program under a program development system, which for MAX is assumed to be VAX/VMS. The master copy is not intended to be directly compiled. Instead, it must be pre-processed by MAX to produce compilable instances. These instances may correspond to different code versions (for example, double precision versus single precision), different machines (for example, IBM, CDC, Cray) or different operating systems (for example, VAX/VMS versus VAX/UNIX). The advantage of using a master source is more pronounced in complex application programs that are developed and maintained over many years and are to be transported and executed on several computer environments. The version lag problem that plagues many such programs is avoided by this approach. MAX is complemented by several auxiliary programs that perform nonessential functions. The ensemble is collectively known as MAX and Friends. All of these programs, including MAX, are executed as foreign VAX/VMS commands and can easily be hidden in customized VMS command procedures.
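    MAX's actual directive syntax is not reproduced here. The sketch below illustrates the general master-source idea with an invented *IF/*ENDIF notation: a single master text is filtered against a set of active configuration keys to produce a compilable instance.

```python
def preprocess(master_lines, active_keys):
    """Produce a compilable instance from master source text: lines between
    '*IF KEY' and '*ENDIF' markers are kept only when KEY is active.
    (Invented directive syntax for illustration, not MAX's notation.)"""
    out, keep = [], [True]
    for line in master_lines:
        stripped = line.strip()
        if stripped.startswith("*IF "):
            keep.append(keep[-1] and stripped.split()[1] in active_keys)
        elif stripped == "*ENDIF":
            keep.pop()
        elif keep[-1]:
            out.append(line)
    return out

master = """\
      PROGRAM DEMO
*IF DOUBLE
      DOUBLE PRECISION X
*ENDIF
*IF SINGLE
      REAL X
*ENDIF
      X = 1.0
      END
""".splitlines()

# one master copy, two possible compilable instances; select one configuration
print("\n".join(preprocess(master, active_keys={"DOUBLE", "VMS"})))
```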

  9. Quantum circuit for optimal eavesdropping in quantum key distribution using phase-time coding

    SciTech Connect

    Kronberg, D. A.; Molotkov, S. N.

    2010-07-15

    A quantum circuit is constructed for optimal eavesdropping on quantum key distribution protocols using phase-time coding, and its physical implementation based on linear and nonlinear fiber-optic components is proposed.

  10. The GNASH preequilibrium-statistical nuclear model code

    SciTech Connect

    Arthur, E. D.

    1988-01-01

    The following report is based on materials presented in a series of lectures at the International Center for Theoretical Physics, Trieste, which were designed to describe the GNASH preequilibrium statistical model code and its use. An overview of the code is provided, with emphasis upon the code's calculational capabilities and the theoretical models that have been implemented in it. Two sample problems are discussed: the first deals with neutron reactions on ⁵⁸Ni; the second illustrates the fission model capabilities implemented in the code and involves n + ²³⁵U reactions. Finally, a description is provided of current theoretical model and code development underway. Examples of calculated results using these new capabilities are also given. 19 refs., 17 figs., 3 tabs.

  11. Automatic code generation from the OMT-based dynamic model

    SciTech Connect

    Ali, J.; Tanaka, J.

    1996-12-31

    The OMT object-oriented software development methodology suggests creating three models of the system, i.e., object model, dynamic model and functional model. We have developed a system that automatically generates implementation code from the dynamic model. The system first represents the dynamic model as a table and then generates executable Java language code from it. We used inheritance for super-substate relationships. We considered that transitions relate to states in a state diagram exactly as operations relate to classes in an object diagram. In the generated code, each state in the state diagram becomes a class and each event on a state becomes an operation on the corresponding class. The system is implemented and can generate executable code for any state diagram. This makes the role of the dynamic model more significant and the job of designers even simpler.
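    The cited system emits Java; as a language-shifted illustration of the same mapping (state to class, event to operation), the sketch below generates Python classes from a toy transition table and executes the generated source. The table and naming are hypothetical.

```python
# toy dynamic model as a transition table: (state, event) -> next state
TRANSITIONS = {
    ("Idle", "insert_coin"): "Ready",
    ("Ready", "press_start"): "Running",
    ("Running", "finish"): "Idle",
}

def generate_state_classes(transitions):
    """Emit source text with one class per state and one method per event,
    mirroring the state-to-class / event-to-operation mapping described above."""
    states = {s for s, _ in transitions} | set(transitions.values())
    lines = []
    for state in sorted(states):
        lines.append(f"class {state}:")
        events = [(e, nxt) for (s, e), nxt in transitions.items() if s == state]
        if not events:
            lines.append("    pass")
        for event, nxt in events:
            lines.append(f"    def {event}(self, machine):")
            lines.append(f"        machine.state = {nxt}()   # transition to {nxt}")
        lines.append("")
    return "\n".join(lines)

generated = generate_state_classes(TRANSITIONS)
print(generated)
exec(generated)        # the generated classes are immediately usable afterwards
```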

  12. Aerosol kinetic code "AERFORM": Model, validation and simulation results

    NASA Astrophysics Data System (ADS)

    Gainullin, K. G.; Golubev, A. I.; Petrov, A. M.; Piskunov, V. N.

    2016-06-01

    The aerosol kinetic code "AERFORM" is modified to simulate droplet and ice particle formation in mixed clouds. The splitting method is used to calculate condensation and coagulation simultaneously. The method is calibrated with analytic solutions of the kinetic equations. The condensation kinetic model is based on the cloud particle growth equation and on mass and heat balance equations. The coagulation kinetic model includes Brownian, turbulent and precipitation effects. Realistic values are used for the condensation and coagulation growth of water droplets and ice particles. The model and the simulation results for two full-scale cloud experiments are presented. The simulation model and code may be used autonomously or as an element of another code.

  13. A velocity-dependent anomalous radial transport model for (2-D, 2-V) kinetic transport codes

    NASA Astrophysics Data System (ADS)

    Bodi, Kowsik; Krasheninnikov, Sergei; Cohen, Ron; Rognlien, Tom

    2008-11-01

    Plasma turbulence constitutes a significant part of radial plasma transport in magnetically confined plasmas. This turbulent transport is modeled in the form of anomalous convection and diffusion coefficients in fluid transport codes. There is a need to model the same in continuum kinetic edge codes [such as the (2-D, 2-V) transport version of TEMPEST, NEO, and the code being developed by the Edge Simulation Laboratory] with non-Maxwellian distributions. We present an anomalous transport model with velocity-dependent convection and diffusion coefficients leading to a diagonal transport matrix similar to that used in contemporary fluid transport models (e.g., UEDGE). Also presented are results of simulations corresponding to radial transport due to long-wavelength ExB turbulence using a velocity-independent diffusion coefficient. A BGK collision model is used to enable comparison with fluid transport codes.

  14. RELAP5/MOD3 code manual: Code structure, system models, and solution methods. Volume 1

    SciTech Connect

    1995-08-01

    The RELAP5 code has been developed for best-estimate transient simulation of light water reactor coolant systems during postulated accidents. The code models the coupled behavior of the reactor coolant system and the core for loss-of-coolant accidents and operational transients, such as anticipated transient without scram, loss of offsite power, loss of feedwater, and loss of flow. A generic modeling approach is used that permits simulating a variety of thermal hydraulic systems. Control system and secondary system components are included to permit modeling of plant controls, turbines, condensers, and secondary feedwater systems. RELAP5/MOD3 code documentation is divided into seven volumes: Volume I provides modeling theory and associated numerical schemes.

  15. Modeling Planet-Building Stellar Disks with Radiative Transfer Code

    NASA Astrophysics Data System (ADS)

    Swearingen, Jeremy R.; Sitko, Michael L.; Whitney, Barbara; Grady, Carol A.; Wagner, Kevin Robert; Champney, Elizabeth H.; Johnson, Alexa N.; Warren, Chelsea C.; Russell, Ray W.; Hammel, Heidi B.; Lisse, Casey M.; Cure, Michel; Kraus, Stefan; Fukagawa, Misato; Calvet, Nuria; Espaillat, Catherine; Monnier, John D.; Millan-Gabet, Rafael; Wilner, David J.

    2015-01-01

    Understanding the nature of the many planetary systems found outside of our own solar system cannot be complete without knowledge of the beginnings of these systems. By detecting planets in very young systems and modeling the disks of material around stars from which they form, we can gain a better understanding of planetary origin and evolution. The efforts presented here have been in modeling two pre-transitional disk systems using a radiative transfer code. For the first of these systems, V1247 Ori, a model has been achieved that fits the spectral energy distribution (SED) well and whose parameters are consistent with existing interferometry data (Kraus et al. 2013). The second of these two systems, SAO 206462, has presented a different set of challenges, but encouraging SED agreement between the model and known data gives hope that the model can produce images that can be used in future interferometry work. This work was supported by NASA ADAP grant NNX09AC73G, and the IR&D program at The Aerospace Corporation.

  16. Adaptive Zero-Coefficient Distribution Scan for Inter Block Mode Coding of H.264/AVC

    NASA Astrophysics Data System (ADS)

    Wang, Jing-Xin; Su, Alvin W. Y.

    Scanning quantized transform coefficients is an important tool for video coding. For example, the MPEG-4 video coder adopts three different scans to get better coding efficiency. This paper proposes an adaptive zero-coefficient distribution scan in inter block coding. The proposed method attempts to improve H.264/AVC zero coefficient coding by modifying the scan operation. Since the zero-coefficient distribution is changed by the proposed scan method, new VLC tables for syntax elements used in context-adaptive variable length coding (CAVLC) are also provided. The savings in bit-rate range from 2.2% to 5.1% in the high bit-rate cases, depending on different test sequences.
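    The adaptive scan and its new VLC tables are not reproduced here. For background, the sketch below implements the conventional 4x4 zig-zag scan that orders quantized coefficients by anti-diagonals, which is the baseline ordering that such adaptive scans modify; the example block values are arbitrary.

```python
import numpy as np

def zigzag_order(n=4):
    """(row, col) positions of an n x n block visited by anti-diagonals with
    alternating direction - the conventional scan that pushes the typically
    zero high-frequency coefficients to the end of the list."""
    order = []
    for s in range(2 * n - 1):
        diag = [(i, s - i) for i in range(n) if 0 <= s - i < n]
        order.extend(diag if s % 2 else diag[::-1])
    return order

block = np.array([[12, 6, 0, 0],
                  [ 4, 0, 0, 0],
                  [ 1, 0, 0, 0],
                  [ 0, 0, 0, 0]])          # arbitrary quantized 4x4 residual block
scanned = [int(block[r, c]) for r, c in zigzag_order(4)]
print(scanned)   # nonzero coefficients first, then a run of trailing zeros
```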

  17. 28 CFR 36.608 - Guidance concerning model codes.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 28 Judicial Administration 1 2010-07-01 2010-07-01 false Guidance concerning model codes. 36.608 Section 36.608 Judicial Administration DEPARTMENT OF JUSTICE NONDISCRIMINATION ON THE BASIS OF DISABILITY BY PUBLIC ACCOMMODATIONS AND IN COMMERCIAL FACILITIES Certification of State Laws or Local Building Codes § 36.608 Guidance concerning...

  18. Computer code for the calculation of the temperature distribution of cooled turbine blades

    NASA Astrophysics Data System (ADS)

    Tietz, Thomas A.; Koschel, Wolfgang W.

    A generalized computer code for the calculation of the temperature distribution in a cooled turbine blade is presented. Using an iterative procedure, this program especially allows the coupling of the aerothermodynamic values of the internal flow with the corresponding temperature distribution of the blade material. The temperature distribution of the turbine blade is calculated using a fully three-dimensional finite element computer code, so that the radial heat flux is taken into account. This code was extended to 4-node tetrahedral elements enabling an adaptive grid generation. To facilitate the mesh generation of the usually complex blade geometries, a computer program was developed, which performs the grid generation of blades having basically arbitrary shape on the basis of two-dimensional cuts. The performance of the code is demonstrated with reference to a typical cooling configuration of a modern turbine blade.

  19. A realistic model under which the genetic code is optimal.

    PubMed

    Buhrman, Harry; van der Gulik, Peter T S; Klau, Gunnar W; Schaffner, Christian; Speijer, Dave; Stougie, Leen

    2013-10-01

    The genetic code has a high level of error robustness. Using values of hydrophobicity scales as a proxy for amino acid character, and the mean square measure as a function quantifying error robustness, a value can be obtained for a genetic code which reflects the error robustness of that code. By comparing this value with a distribution of values belonging to codes generated by random permutations of amino acid assignments, the level of error robustness of a genetic code can be quantified. We present a calculation in which the standard genetic code is shown to be optimal. We obtain this result by (1) using recently updated values of polar requirement as input; (2) fixing seven assignments (Ile, Trp, His, Phe, Tyr, Arg, and Leu) based on aptamer considerations; and (3) using known biosynthetic relations of the 20 amino acids. This last point is reflected in an approach of subdivision (restricting the random reallocation of assignments to amino acid subgroups, the set of 20 being divided into four such subgroups). The three approaches to explaining robustness of the code (specific selection for robustness, amino acid-RNA interactions leading to assignments, or a slow growth process of assignment patterns) are reexamined in light of our findings. We offer a comprehensive hypothesis, stressing the importance of biosynthetic relations, with the code evolving from an early stage with just glycine and alanine, via intermediate stages, towards 64 codons carrying today's meanings.
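    As an illustration of the mean-square robustness comparison described above (using the Kyte-Doolittle hydrophobicity scale rather than the updated polar-requirement values, and unconstrained random permutations rather than the authors' fixed assignments and subgroup subdivision), the sketch below scores the standard code against randomly relabeled codes.

```python
import random
from itertools import product

BASES = "TCAG"
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODE = {"".join(c): a for c, a in zip(product(BASES, repeat=3), AA) if a != "*"}

# Kyte-Doolittle hydrophobicity as the amino acid property; any scale
# (e.g., the updated polar requirement used by the authors) can be substituted
KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5, "E": -3.5,
      "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9, "M": 1.9, "F": 2.8,
      "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9, "Y": -1.3, "V": 4.2}

def ms_error(value_of):
    """Mean squared property change over all single-nucleotide substitutions
    that turn one sense codon into another sense codon."""
    diffs = []
    for codon, aa in CODE.items():
        for pos in range(3):
            for b in BASES:
                if b == codon[pos]:
                    continue
                mutant = codon[:pos] + b + codon[pos + 1:]
                if mutant in CODE:
                    diffs.append((value_of[aa] - value_of[CODE[mutant]]) ** 2)
    return sum(diffs) / len(diffs)

standard = ms_error(KD)
amino_acids, values = list(KD), list(KD.values())
trials, at_least_as_robust = 2000, 0
for _ in range(trials):
    shuffled = dict(zip(amino_acids, random.sample(values, len(values))))
    if ms_error(shuffled) <= standard:
        at_least_as_robust += 1
print(f"standard-code MS error = {standard:.3f}; "
      f"{at_least_as_robust} of {trials} random codes are at least as robust")
```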

  20. Cavitation Modeling in Euler and Navier-Stokes Codes

    NASA Technical Reports Server (NTRS)

    Deshpande, Manish; Feng, Jinzhang; Merkle, Charles L.

    1993-01-01

    Many previous researchers have modeled sheet cavitation by means of a constant pressure solution in the cavity region coupled with a velocity potential formulation for the outer flow. The present paper discusses the issues involved in extending these cavitation models to Euler or Navier-Stokes codes. The approach taken is to start from a velocity potential model to ensure our results are compatible with those of previous researchers and available experimental data, and then to implement this model in both Euler and Navier-Stokes codes. The model is then augmented in the Navier-Stokes code by the inclusion of the energy equation, which allows the effect of subcooling in the vicinity of the cavity interface to be modeled, taking into account the experimentally observed reduction in cavity pressures that occurs in cryogenic fluids such as liquid hydrogen. Although our goal is to assess the practicality of implementing these cavitation models in existing three-dimensional turbomachinery codes, the emphasis in the present paper will center on two-dimensional computations, most specifically isolated airfoils and cascades. Comparisons between velocity potential, Euler and Navier-Stokes implementations indicate they all produce consistent predictions. Comparisons with experimental results also indicate that the predictions are qualitatively correct and give a reasonable first estimate of sheet cavitation effects in both cryogenic and non-cryogenic fluids. The impact on CPU time and the code modifications required suggests that these models are appropriate for incorporation in current generation turbomachinery codes.

  1. SAMICS marketing and distribution model

    NASA Technical Reports Server (NTRS)

    1978-01-01

    SAMICS (Solar Array Manufacturing Industry Costing Standards) was formulated as a computer simulation model. Given a proper description of the manufacturing technology as input, this model computes the manufacturing price of solar arrays for a broad range of production levels. This report presents a model for computing the associated marketing and distribution costs, the end point of the model being the loading dock of the final manufacturer.

  2. Modeling anomalous radial transport in kinetic transport codes

    NASA Astrophysics Data System (ADS)

    Bodi, K.; Krasheninnikov, S. I.; Cohen, R. H.; Rognlien, T. D.

    2009-11-01

    Anomalous transport is typically the dominant component of the radial transport in magnetically confined plasmas, where the physical origin of this transport is believed to be plasma turbulence. A model is presented for anomalous transport that can be used in continuum kinetic edge codes like TEMPEST, NEO and the next-generation code being developed by the Edge Simulation Laboratory. The model can also be adapted to particle-based codes. It is demonstrated that the model, with velocity-dependent diffusion and convection terms, can match a diagonal gradient-driven transport matrix as found in contemporary fluid codes, but can also include off-diagonal effects. The anomalous transport model is also combined with particle drifts and a particle/energy-conserving Krook collision operator to study possible synergistic effects with neoclassical transport. For the latter study, a velocity-independent anomalous diffusion coefficient is used to mimic the effect of long-wavelength ExB turbulence.

  3. Monte Carlo N Particle code - Dose distribution of clinical electron beams in inhomogeneous phantoms

    PubMed Central

    Nedaie, H. A.; Mosleh-Shirazi, M. A.; Allahverdi, M.

    2013-01-01

    Electron dose distributions calculated using the currently available analytical methods can be associated with large uncertainties. The Monte Carlo method is the most accurate method for dose calculation in electron beams. Most of the clinical electron beam simulation studies have been performed using non-MCNP [Monte Carlo N-Particle] codes. Given the differences between Monte Carlo codes, this work aims to evaluate the accuracy of MCNP4C-simulated electron dose distributions in a homogeneous phantom and around inhomogeneities. Different types of phantoms ranging in complexity were used; namely, a homogeneous water phantom and phantoms made of polymethyl methacrylate slabs containing different-sized, low- and high-density inserts of heterogeneous materials. Electron beams with 8 and 15 MeV nominal energy generated by an Elekta Synergy linear accelerator were investigated. Measurements were performed for a 10 cm × 10 cm applicator at a source-to-surface distance of 100 cm. Individual parts of the beam-defining system were introduced into the simulation one at a time in order to show their effect on depth doses. In contrast to the first scattering foil, the secondary scattering foil, the X and Y jaws and the applicator provide up to 5% of the dose. A 2%/2 mm agreement between MCNP and measurements was found in the homogeneous phantom, and agreement in the presence of heterogeneities was in the range of 1-3%, being generally within 2% of the measurements for both energies in a "complex" phantom. A full-component simulation is necessary in order to obtain a realistic model of the beam. The MCNP4C results agree well with the measured electron dose distributions. PMID:23533162

  4. Modeling Natural Variation through Distribution

    ERIC Educational Resources Information Center

    Lehrer, Richard; Schauble, Leona

    2004-01-01

    This design study tracks the development of student thinking about natural variation as late elementary grade students learned about distribution in the context of modeling plant growth at the population level. The data-modeling approach assisted children in coordinating their understanding of particular cases with an evolving notion of data as an…

  5. Modeled ground water age distributions

    USGS Publications Warehouse

    Woolfenden, Linda R.; Ginn, Timothy R.

    2009-01-01

    The age of ground water in any given sample is a distributed quantity representing distributed provenance (in space and time) of the water. Conventional analysis of tracers such as unstable isotopes or anthropogenic chemical species gives discrete or binary measures of the presence of water of a given age. Modeled ground water age distributions provide a continuous measure of contributions from different recharge sources to aquifers. A numerical solution of the ground water age equation of Ginn (1999) was tested both on a hypothetical simplified one-dimensional flow system and under real world conditions. Results from these simulations yield the first continuous distributions of ground water age using this model. Complete age distributions as a function of one and two space dimensions were obtained from both numerical experiments. Simulations in the test problem produced mean ages that were consistent with the expected value at the end of the model domain for all dispersivity values tested, although the mean ages for the two highest dispersivity values deviated slightly from the expected value. Mean ages in the dispersionless case also were consistent with the expected mean ages throughout the physical model domain. Simulations under real world conditions for three dispersivity values resulted in decreasing mean age with increasing dispersivity. This is likely a consequence of an edge effect. However, simulations for all three dispersivity values tested were mass balanced and stable, demonstrating that the solution of the ground water age equation can provide estimates of water mass density distributions over age under real world conditions.
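
    The following is a deliberately simplified sketch (not the authors' code) of the advective part of such an age-distribution calculation: the age mass density rho(x, a) is transported in space at an assumed pore velocity and advected in age at unit rate, with dispersion omitted; the grid, velocity, and boundary treatment are all invented for illustration.

```python
import numpy as np

nx, na = 100, 200            # space and age bins
dx, da = 5.0, 1.0            # m, years
u = 5.0                      # assumed pore-water velocity, m/yr
dt = 0.5                     # yr (satisfies both CFL limits)

rho = np.zeros((nx, na))     # age mass density rho(x, a)
for _ in range(2000):
    rho[0, :] = 0.0
    rho[0, 0] = 1.0                                         # recharge boundary: fresh (age-zero) water
    rho[1:, :] -= u * dt / dx * (rho[1:, :] - rho[:-1, :])  # upwind transport in space
    rho[:, 1:] -= dt / da * (rho[:, 1:] - rho[:, :-1])      # aging at unit rate da/dt = 1
    rho[1:, 0] -= dt / da * rho[1:, 0]                      # age-zero bin loses mass as it ages

ages = (np.arange(na) + 0.5) * da
outflow = rho[-1, :]
print("mean age at the outflow boundary: %.1f years" % ((outflow * ages).sum() / outflow.sum()))
```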

  6. Verification of thermal analysis codes for modeling solid rocket nozzles

    NASA Technical Reports Server (NTRS)

    Keyhani, M.

    1993-01-01

    One of the objectives of the Solid Propulsion Integrity Program (SPIP) at Marshall Space Flight Center (MSFC) is development of thermal analysis codes capable of accurately predicting the temperature field, pore pressure field and the surface recession experienced by decomposing polymers which are used as thermal barriers in solid rocket nozzles. The objective of this study is to provide a means of verification for thermal analysis codes developed for modeling of flow and heat transfer in solid rocket nozzles. In order to meet the stated objective, a test facility was designed and constructed for measurement of the transient temperature field in a sample composite subjected to a constant heat flux boundary condition. The heating was provided via a thin steel foil with a thickness of 0.025 mm. The designed electrical circuit can provide a heating rate of 1800 W. The heater was sandwiched between two identical samples, thus ensuring equal power distribution between them. The samples were fitted with Type K thermocouples, and the exact locations of the thermocouples were determined via X-rays. The experiments were modeled via a one-dimensional code (UT1D) as a conduction and phase change heat transfer process. Since the pyrolysis gas flow was in the direction normal to the heat flow, the numerical model could not account for the convection cooling effect of the pyrolysis gas flow. Therefore, the predicted values in the decomposition zone are considered to be an upper estimate of the temperature. From the analysis of the experimental and the numerical results the following are concluded: (1) The virgin and char specific heat data for FM 5055 as reported by SoRI cannot be used to obtain any reasonable agreement between the measured temperatures and the predictions. However, use of virgin and char specific heat data given in the Acurex report produced good agreement for most of the measured temperatures. (2) A constant heat flux heating process can produce a much higher

  7. Development of a Model and Computer Code to Describe Solar Grade Silicon Production Processes

    NASA Technical Reports Server (NTRS)

    Srivastava, R.; Gould, R. K.

    1979-01-01

    Mathematical models, and computer codes based on these models, were developed that allow prediction of the product distribution in chemical reactors for converting gaseous silicon compounds to condensed-phase silicon. The following tasks were accomplished: (1) formulation of a model for silicon vapor separation/collection from the developing turbulent flow stream within reactors of the Westinghouse type, (2) modification of an available general parabolic code to achieve solutions to the governing partial differential equations (boundary layer type) which describe migration of the vapor to the reactor walls, (3) a parametric study using the boundary layer code to optimize the performance characteristics of the Westinghouse reactor, (4) calculations relating to the collection efficiency of the new AeroChem reactor, and (5) final testing of the modified LAPP code for use as a method of predicting Si(l) droplet sizes in these reactors.

  8. Modeling Nucleon Generalized Parton Distributions

    SciTech Connect

    Radyushkin, Anatoly V.

    2013-05-01

    We discuss building models for nucleon generalized parton distributions (GPDs) H and E that are based on the formalism of double distributions (DDs). We find that the usual "DD+D-term" construction should be amended by an extra term generated by GPD E(x, ξ). Unlike the D-term, this function has support in the whole -1 < x < 1 region, and in general does not vanish at the border points |x| = ξ.

  9. ADVANCED ELECTRIC AND MAGNETIC MATERIAL MODELS FOR FDTD ELECTROMAGNETIC CODES

    SciTech Connect

    Poole, B R; Nelson, S D; Langdon, S

    2005-05-05

    The modeling of dielectric and magnetic materials in the time domain is required for pulse power applications, pulsed induction accelerators, and advanced transmission lines. For example, most induction accelerator modules require the use of magnetic materials to provide adequate Volt-sec during the acceleration pulse. These models require hysteresis and saturation to simulate the saturation wavefront in a multipulse environment. In high voltage transmission line applications, such as shock or soliton lines, the dielectric operates in a highly nonlinear regime, which requires nonlinear models. Simple 1-D models are developed for fast parameterization of transmission line structures. In the case of nonlinear dielectrics, a simple analytic model describing the permittivity in terms of electric field is used in a 3-D finite-difference time-domain (FDTD) code. In the case of magnetic materials, both rate-independent and rate-dependent Hodgdon magnetic material models have been implemented into 3-D FDTD codes and 1-D codes.

  10. Modeling Guidelines for Code Generation in the Railway Signaling Context

    NASA Technical Reports Server (NTRS)

    Ferrari, Alessio; Bacherini, Stefano; Fantechi, Alessandro; Zingoni, Niccolo

    2009-01-01

    Modeling guidelines constitute one of the fundamental cornerstones for Model-Based Development. Their relevance is essential when dealing with code generation in the safety-critical domain. This article presents the experience of a railway signaling systems manufacturer on this issue. Introduction of Model-Based Development (MBD) and code generation in the industrial safety-critical sector created a crucial paradigm shift in the development process of dependable systems. While traditional software development focuses on the code, with MBD practices the focus shifts to model abstractions. The change has fundamental implications for safety-critical systems, which still need to guarantee a high degree of confidence also at code level. Usage of the Simulink/Stateflow platform for modeling, which is a de facto standard in control software development, does not ensure by itself production of high-quality dependable code. This issue has been addressed by companies through the definition of modeling rules imposing restrictions on the usage of design tool components, in order to enable production of qualified code. The MAAB Control Algorithm Modeling Guidelines (MathWorks Automotive Advisory Board) [3] is a well-established set of publicly available rules for modeling with Simulink/Stateflow. This set of recommendations has been developed by a group of OEMs and suppliers of the automotive sector with the objective of enforcing and easing the usage of the MathWorks tools within the automotive industry. The guidelines were published in 2001 and afterwards revised in 2007 in order to integrate some additional rules developed by the Japanese division of MAAB [5]. The scope of the current edition of the guidelines ranges from model maintainability and readability to code generation issues. The rules are conceived as a reference baseline and therefore they need to be tailored to comply with the characteristics of each industrial context. Customization of these

  11. Fast Coding Unit Encoding Mechanism for Low Complexity Video Coding

    PubMed Central

    Wu, Yueying; Jia, Kebin; Gao, Guandong

    2016-01-01

    In high efficiency video coding (HEVC), the coding tree contributes to excellent compression performance. However, the coding tree brings extremely high computational complexity. This paper presents innovative work on improving the coding tree to further reduce encoding time. A novel low complexity coding tree mechanism is proposed for HEVC fast coding unit (CU) encoding. Firstly, this paper makes an in-depth study of the relationship among CU distribution, quantization parameter (QP) and content change (CC). Secondly, a CU coding tree probability model is proposed for modeling and predicting CU distribution. Eventually, a CU coding tree probability update is proposed, aiming to address probabilistic model distortion problems caused by CC. Experimental results show that the proposed low complexity CU coding tree mechanism significantly reduces encoding time by 27% for lossy coding and 42% for visually lossless coding and lossless coding. The proposed low complexity CU coding tree mechanism is devoted to improving coding performance under various application conditions. PMID:26999741
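
    A toy sketch of the general idea (not the paper's exact probability model): keep a depth-probability table per QP, skip rate-distortion evaluation of depths predicted to be unlikely, and reset the table when a content change is detected; the threshold and update rule below are invented.

```python
from collections import defaultdict

class CUDepthPredictor:
    """Toy CU-depth probability model for fast coding-tree pruning."""
    def __init__(self, depths=(0, 1, 2, 3), skip_threshold=0.1):
        self.depths = depths
        self.skip_threshold = skip_threshold
        self.counts = defaultdict(lambda: {d: 1 for d in depths})  # Laplace prior per QP

    def candidate_depths(self, qp):
        # depths whose estimated probability justifies a full RD check
        counts = self.counts[qp]
        total = sum(counts.values())
        return [d for d in self.depths if counts[d] / total >= self.skip_threshold]

    def update(self, qp, chosen_depth, content_change=False):
        if content_change:                      # refresh statistics on a content change
            self.counts[qp] = {d: 1 for d in self.depths}
        self.counts[qp][chosen_depth] += 1

predictor = CUDepthPredictor()
for depth in [2, 2, 3, 2, 1, 2, 2]:             # depths chosen for previously coded CUs
    predictor.update(qp=32, chosen_depth=depth)
print("depths worth a full RD check at QP 32:", predictor.candidate_depths(32))
```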

  12. Fluid-Rock Interaction Models: Code Release and Results

    NASA Astrophysics Data System (ADS)

    Bolton, E. W.

    2006-12-01

    Numerical models our group has developed for understanding the role of kinetic processes during fluid-rock interaction will be released free to the public. We will also present results that highlight the importance of kinetic processes. The author is preparing manuals describing the numerical methods used, as well as "how-to" guides for using the models. The release will include input files, full in-line code documentation of the FORTRAN source code, and instructions for use of model output for visualization and analysis. The aqueous phase (weathering) and supercritical (mixed-volatile metamorphic) fluid flow and reaction models for porous media will be released separately. These codes will be useful as teaching and research tools. The codes may be run on current generation personal computers. Although other codes are available for attacking some of the problems we address, unique aspects of our codes include sub-grid-scale grain models to track grain size changes, as well as dynamic porosity and permeability. Also, as the flow field can change significantly over the course of the simulation, efficient solution methods have been developed for the repeated solution of Poisson-type equations that arise from Darcy's law. These include sparse-matrix methods as well as the even more efficient spectral-transform technique. Results will be presented for kinetic control of reaction pathways and for heterogeneous media. Codes and documentation for modeling intra-grain diffusion of trace elements and isotopes, and exchange of these between grains and moving fluids will also be released. The unique aspect of this model is that it includes concurrent diffusion and grain growth or dissolution for multiple mineral types (low-diffusion regridding has been developed to deal with the moving-boundary problem at the fluid/mineral interface). Results for finite diffusion rates will be compared to batch and fractional melting models. Additional code and documentation will be released

  13. The overlap model: a model of letter position coding.

    PubMed

    Gomez, Pablo; Ratcliff, Roger; Perea, Manuel

    2008-07-01

    Recent research has shown that letter identity and letter position are not integral perceptual dimensions (e.g., jugde primes judge in word-recognition experiments). Most comprehensive computational models of visual word recognition (e.g., the interactive activation model, J. L. McClelland & D. E. Rumelhart, 1981, and its successors) assume that the position of each letter within a word is perfectly encoded. Thus, these models are unable to explain the presence of effects of letter transposition (trial-trail), letter migration (beard-bread), repeated letters (moose-mouse), or subset/superset effects (faulty-faculty). The authors extend R. Ratcliff's (1981) theory of order relations for encoding of letter positions and show that the model can successfully deal with these effects. The basic assumption is that letters in the visual stimulus have distributions over positions so that the representation of one letter will extend into adjacent letter positions. To test the model, the authors conducted a series of forced-choice perceptual identification experiments. The overlap model produced very good fits to the empirical data, and even a simplified 2-parameter model was capable of producing fits for 104 observed data points with a correlation coefficient of .91.
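
    A small illustrative sketch of the core assumption (parameters invented, not the fitted model): each letter's position is encoded as a normal distribution centred on its slot, so identical letters in nearby positions still contribute to the match between a prime and a target.

```python
import math

def position_weight(letter_pos, slot, sd=0.75):
    # Gaussian overlap of a letter's position distribution with a target slot
    return math.exp(-0.5 * ((letter_pos - slot) / sd) ** 2)

def raw_overlap(prime, target, sd=0.75):
    score = 0.0
    for i, p in enumerate(prime):
        for j, t in enumerate(target):
            if p == t:
                score += position_weight(i, j, sd)
    return score

def match(prime, target, sd=0.75):
    # normalise by the target matched against itself
    return raw_overlap(prime, target, sd) / raw_overlap(target, target, sd)

# transposed-letter prime ("jugde") scores higher than a substitution control
for prime in ("judge", "jugde", "junpe"):
    print(prime, "->", round(match(prime, "judge"), 3))
```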

  14. The Overlap Model: A Model of Letter Position Coding

    PubMed Central

    Ratcliff, Roger; Perea, Manuel

    2008-01-01

    Recent research has shown that letter identity and letter position are not integral perceptual dimensions (e.g., jugde primes judge in word-recognition experiments). Most comprehensive computational models of visual word recognition (e.g., the interactive activation model, J. L. McClelland & D. E. Rumelhart, 1981, and its successors) assume that the position of each letter within a word is perfectly encoded. Thus, these models are unable to explain the presence of effects of letter transposition (trial-trail), letter migration (beard-bread), repeated letters (moose-mouse), or subset/superset effects (faulty-faculty). The authors extend R. Ratcliff's (1981) theory of order relations for encoding of letter positions and show that the model can successfully deal with these effects. The basic assumption is that letters in the visual stimulus have distributions over positions so that the representation of one letter will extend into adjacent letter positions. To test the model, the authors conducted a series of forced-choice perceptual identification experiments. The overlap model produced very good fits to the empirical data, and even a simplified 2-parameter model was capable of producing fits for 104 observed data points with a correlation coefficient of .91. PMID:18729592

  15. Code-to-Code Comparison, and Material Response Modeling of Stardust and MSL using PATO and FIAT

    NASA Technical Reports Server (NTRS)

    Omidy, Ali D.; Panerai, Francesco; Martin, Alexandre; Lachaud, Jean R.; Cozmuta, Ioana; Mansour, Nagi N.

    2015-01-01

    This report provides a code-to-code comparison between PATO, a recently developed high fidelity material response code, and FIAT, NASA's legacy code for ablation response modeling. The goal is to demonstrate that FIAT and PATO generate the same results when using the same models. Test cases of increasing complexity are used, from both arc-jet testing and flight experiments. When using the exact same physical models, material properties, and boundary conditions, the two codes give results that agree to within 2%. The minor discrepancy is attributed to the inclusion of the gas-phase heat capacity (cp) in the energy equation in PATO, and not in FIAT.

  16. SRVAL. Stock-Recruitment Model VALidation Code

    SciTech Connect

    Christensen, S.W.

    1989-12-07

    SRVAL is a computer simulation model of the Hudson River striped bass population. It was designed to aid in assessing the validity of curve-fits of the linearized Ricker stock-recruitment model, modified to incorporate multiple-age spawners and to include an environmental variable, to variously processed annual catch-per-unit-effort (CPUE) statistics for a fish population. It is sometimes asserted that curve-fits of this kind can be used to determine the sensitivity of fish populations to such man-induced stresses as entrainment and impingement at power plants. SRVAL was developed to test such assertions and was utilized in testimony written in connection with the Hudson River Power Case (U. S. Environmental Protection Agency, Region II).

  17. Code System to Model Aqueous Geochemical Equilibria.

    2001-08-23

    Version: 00 MINTEQ is a geochemical program to model aqueous solutions and the interactions of aqueous solutions with hypothesized assemblages of solid phases. It was developed for the Environmental Protection Agency to perform the calculations necessary to simulate the contact of waste solutions with heterogeneous sediments or the interaction of ground water with solidified wastes. MINTEQ can calculate ion speciation/solubility, adsorption, oxidation-reduction, gas phase equilibria, and precipitation/dissolution of solid phases. MINTEQ can accept a finite mass for any solid considered for dissolution and will dissolve the specified solid phase only until its initial mass is exhausted. This ability enables MINTEQ to model flow-through systems. In these systems the masses of solid phases that precipitate at earlier pore volumes can be dissolved at later pore volumes according to thermodynamic constraints imposed by the solution composition and solid phases present. The ability to model these systems permits evaluation of the geochemistry of dissolved trace metals, such as those from low-level waste in shallow land burial sites. MINTEQ was designed to solve geochemical equilibria for systems composed of one kilogram of water, various amounts of material dissolved in solution, and any solid materials that are present. Systems modeled using MINTEQ can exchange energy and material (open systems) or just energy (closed systems) with the surrounding environment. Each system is composed of a number of phases. Every phase is a region with distinct composition and physically definable boundaries. All of the material in the aqueous solution forms one phase. The gas phase is composed of any gaseous material present, and each compositionally and structurally distinct solid forms a separate phase.

  18. Video distribution system cost model

    NASA Technical Reports Server (NTRS)

    Gershkoff, I.; Haspert, J. K.; Morgenstern, B.

    1980-01-01

    A cost model that can be used to systematically identify the costs of procuring and operating satellite-linked communications systems is described. The user defines a network configuration by specifying the location of each participating site, the interconnection requirements, and the transmission paths available for the uplink (studio to satellite), downlink (satellite to audience), and voice talkback (between audience and studio) segments of the network. The model uses this information to calculate the least expensive signal distribution path for each participating site. Cost estimates are broken down by capital, installation, lease, and operations and maintenance. The design of the model permits flexibility in specifying network and cost structure.
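
    A hypothetical sketch of the bookkeeping such a model performs (path names, costs, and lifetime are invented): each site selects its least expensive eligible distribution path, and costs are accumulated by category.

```python
# Invented path options with per-category costs (capital and installation are
# one-time; lease and O&M are annual).
PATHS = {
    "direct_downlink":  {"capital": 12000, "installation": 2000, "lease": 0,    "o_and_m": 800},
    "cable_headend":    {"capital": 3000,  "installation": 500,  "lease": 1200, "o_and_m": 300},
    "terrestrial_link": {"capital": 8000,  "installation": 1500, "lease": 400,  "o_and_m": 600},
}

def annualized(cost, years=7):
    """Spread one-time costs over an assumed system lifetime (straight line)."""
    return (cost["capital"] + cost["installation"]) / years + cost["lease"] + cost["o_and_m"]

def cheapest_path(eligible):
    return min(eligible, key=lambda p: annualized(PATHS[p]))

sites = {"site_A": ["direct_downlink", "cable_headend"],
         "site_B": ["direct_downlink", "terrestrial_link"]}

totals = {k: 0.0 for k in ("capital", "installation", "lease", "o_and_m")}
for site, eligible in sites.items():
    path = cheapest_path(eligible)
    for k in totals:
        totals[k] += PATHS[path][k]
    print(site, "uses", path)
print("network cost by category:", totals)
```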

  19. Modeling Nucleon Generalized Parton Distributions

    SciTech Connect

    Radyushkin, Anatoly V.

    2013-05-01

    We discuss building models for nucleon generalized parton distributions (GPDs) H and E that are based on the formalism of double distributions (DDs). We found that the usual "DD+D-term" construction should be amended by an extra term, ξ E^1_+(x, ξ), built from the α/β moment of the DD e(β, α) that generates GPD E(x, ξ). Unlike the D-term, this function has support in the whole -1 < x < 1 region, and in general does not vanish at the border points |x| = ξ.

  20. Reduced Fast Ion Transport Model For The Tokamak Transport Code TRANSP

    SciTech Connect

    Podesta, Mario; Gorelenkova, Marina; White, Roscoe

    2014-02-28

    Fast ion transport models presently implemented in the tokamak transport code TRANSP [R. J. Hawryluk, in Physics of Plasmas Close to Thermonuclear Conditions, CEC Brussels, 1, 19 (1980)] do not capture important aspects of the physics associated with resonant transport caused by instabilities such as Toroidal Alfvén Eigenmodes (TAEs). This work describes the implementation of a fast ion transport model consistent with the basic mechanisms of resonant mode-particle interaction. The model is formulated in terms of a probability distribution function for the particle's steps in phase space, which is consistent with the Monte Carlo approach used in TRANSP. The proposed model is based on the analysis of fast ion response to TAE modes through the ORBIT code [R. B. White et al., Phys. Fluids 27, 2455 (1984)], but it can be generalized to higher frequency modes (e.g. Compressional and Global Alfvén Eigenmodes) and to other numerical codes or theories.
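
    The sketch below illustrates only the general idea (the coefficients and the resonance weighting are invented, not the TRANSP/ORBIT values): each Monte Carlo marker receives a step in a radial phase-space variable drawn from a probability distribution whose mean and width act as convection and diffusion, weighted by how close the marker is to resonance with the mode.

```python
import numpy as np

rng = np.random.default_rng(0)

def resonant_kick(P, omega_orbit, dt, mode_freq=1.0, width=0.05, D=0.02, V=-0.01):
    """Advance a radial phase-space variable of Monte Carlo markers."""
    res = np.exp(-((omega_orbit - mode_freq) / width) ** 2)        # resonance weight
    # mean V*dt*res plays the role of convection, variance 2*D*dt*res of diffusion
    return P + V * dt * res + np.sqrt(2.0 * D * dt * res) * rng.standard_normal(P.shape)

P = rng.uniform(0.3, 0.7, size=50_000)        # initial marker positions (arbitrary units)
omega = rng.normal(1.0, 0.2, size=P.size)     # assumed orbit frequencies of the markers
for _ in range(200):
    P = resonant_kick(P, omega, dt=1e-3)

print("mean drift: %+.4f, spread: %.4f" % (P.mean() - 0.5, P.std()))
```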

  1. Model-building codes for membrane proteins.

    SciTech Connect

    Shirley, David Noyes; Hunt, Thomas W.; Brown, W. Michael; Schoeniger, Joseph S.; Slepoy, Alexander; Sale, Kenneth L.; Young, Malin M.; Faulon, Jean-Loup Michel; Gray, Genetha Anne

    2005-01-01

    We have developed a novel approach to modeling the transmembrane spanning helical bundles of integral membrane proteins using only a sparse set of distance constraints, such as those derived from MS3-D, dipolar-EPR and FRET experiments. Algorithms have been written for searching the conformational space of membrane protein folds matching the set of distance constraints, which provides initial structures for local conformational searches. Local conformation search is achieved by optimizing these candidates against a custom penalty function that incorporates both measures derived from statistical analysis of solved membrane protein structures and distance constraints obtained from experiments. This results in refined helical bundles to which the interhelical loops and amino acid side-chains are added. Using a set of only 27 distance constraints extracted from the literature, our methods successfully recover the structure of dark-adapted rhodopsin to within 3.2 Å of the crystal structure.

  2. A code switching technique for distributed spread spectrum packet radio networks

    NASA Astrophysics Data System (ADS)

    Sousa, E. S.; Silvester, J. A.

    A protocol for the use of spreading codes in a spread spectrum packet radio network is presented. Throughput results for a single-hop homogeneous network in heavy traffic are given. With the protocol, each terminal is assigned two unique spreading codes: one that the terminal uses to monitor the channel when it is idle, and a different code that the terminal switches to after transmitting an initial addressing header, which is transmitted on the destination's monitoring code. Limiting throughput results are obtained. Under the assumption of exponentially distributed packet lengths, a limiting throughput per terminal pair corresponding to a utilization of 0.3431 is obtained for a system with an infinite number of users and infinite bandwidth.

  3. Hybrid decode-amplify-forward (HDAF) scheme in distributed Alamouti-coded cooperative network

    NASA Astrophysics Data System (ADS)

    Gurrala, Kiran Kumar; Das, Susmita

    2015-05-01

    In this article, a signal-to-noise ratio (SNR)-based hybrid decode-amplify-forward scheme in a distributed Alamouti-coded cooperative network is proposed. Considering a flat Rayleigh fading channel environment, MATLAB simulation and analysis are carried out. In the cooperative scheme, two relays are employed, where each relay transmits one row of the Alamouti code. The selection of the SNR threshold depends on the target rate information. Closed-form expressions for the symbol error rate (SER), the outage probability and the average channel capacity with tight upper bounds are derived and compared with the simulation carried out in the MATLAB environment. Furthermore, the impact of relay location on the SER performance is analysed. It is observed that the proposed hybrid relaying technique outperforms the individual amplify-and-forward and decode-and-forward schemes in the distributed Alamouti-coded cooperative network.
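
    A minimal sketch of the threshold rule described above (using the common convention SNR_th = 2^R - 1 and ignoring half-duplex rate penalties, which may differ from the article's exact expression): the relay decodes and forwards when the source-relay SNR supports the target rate, and amplifies otherwise.

```python
def snr_threshold(target_rate_bps_hz):
    # Rate R is achievable when log2(1 + SNR) >= R  =>  SNR >= 2**R - 1
    return 2.0 ** target_rate_bps_hz - 1.0

def relay_mode(snr_source_relay, target_rate_bps_hz=1.0):
    gamma_th = snr_threshold(target_rate_bps_hz)
    return "decode-and-forward" if snr_source_relay >= gamma_th else "amplify-and-forward"

for snr_db in (-3.0, 0.0, 3.0, 10.0):
    snr = 10.0 ** (snr_db / 10.0)           # convert dB to linear SNR
    print(f"{snr_db:5.1f} dB -> {relay_mode(snr)}")
```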

  4. Model-Driven Engineering of Machine Executable Code

    NASA Astrophysics Data System (ADS)

    Eichberg, Michael; Monperrus, Martin; Kloppenburg, Sven; Mezini, Mira

    Implementing static analyses of machine-level executable code is labor intensive and complex. We show how to leverage model-driven engineering to facilitate the design and implementation of programs doing static analyses. Further, we report on important lessons learned on the benefits and drawbacks while using the following technologies: using the Scala programming language as the target of code generation, using XML-Schema to express a metamodel, and using XSLT to implement (a) transformations and (b) a lint-like tool. Finally, we report on the use of Prolog for writing model transformations.

  5. Anthropomorphic Coding of Speech and Audio: A Model Inversion Approach

    NASA Astrophysics Data System (ADS)

    Feldbauer, Christian; Kubin, Gernot; Kleijn, W. Bastiaan

    2005-12-01

    Auditory modeling is a well-established methodology that provides insight into human perception and that facilitates the extraction of signal features that are most relevant to the listener. The aim of this paper is to provide a tutorial on perceptual speech and audio coding using an invertible auditory model. In this approach, the audio signal is converted into an auditory representation using an invertible auditory model. The auditory representation is quantized and coded. Upon decoding, it is then transformed back into the acoustic domain. This transformation converts a complex distortion criterion into a simple one, thus facilitating quantization with low complexity. We briefly review past work on auditory models and describe in more detail the components of our invertible model and its inversion procedure, that is, the method to reconstruct the signal from the output of the auditory model. We summarize attempts to use the auditory representation for low-bit-rate coding. Our approach also allows the exploitation of the inherent redundancy of the human auditory system for the purpose of multiple description (joint source-channel) coding.

  6. Data model description for the DESCARTES and CIDER codes

    SciTech Connect

    Miley, T.B.; Ouderkirk, S.J.; Nichols, W.E.; Eslinger, P.W.

    1993-01-01

    The primary objective of the Hanford Environmental Dose Reconstruction (HEDR) Project is to estimate the radiation dose that individuals could have received as a result of emissions since 1944 from the US Department of Energy's (DOE) Hanford Site near Richland, Washington. One of the major objectives of the HEDR Project is to develop several computer codes to model the airborne releases, transport and environmental accumulation of radionuclides resulting from Hanford operations from 1944 through 1972. In July 1992, the HEDR Project Manager determined that the computer codes being developed (DESCARTES, calculation of environmental accumulation from airborne releases, and CIDER, dose calculations from environmental accumulation) were not sufficient to create accurate models. A team of HEDR staff members developed a plan to assure that computer codes would meet HEDR Project goals. The plan consists of five tasks: (1) code requirements definition, (2) scoping studies, (3) design specifications, (4) benchmarking, and (5) data modeling. This report defines the data requirements for the DESCARTES and CIDER codes.

  7. Radiation transport phenomena and modeling - part A: Codes

    SciTech Connect

    Lorence, L.J.

    1997-06-01

    The need to understand how particle radiation (high-energy photons and electrons) from a variety of sources affects materials and electronics has motivated the development of sophisticated computer codes that describe how radiation with energies from 1.0 keV to 100.0 GeV propagates through matter. Predicting radiation transport is the necessary first step in predicting radiation effects. The radiation transport codes that are described here are general-purpose codes capable of analyzing a variety of radiation environments including those produced by nuclear weapons (x-rays, gamma rays, and neutrons), by sources in space (electrons and ions) and by accelerators (x-rays, gamma rays, and electrons). Applications of these codes include the study of radiation effects on electronics, nuclear medicine (imaging and cancer treatment), and industrial processes (food disinfestation, waste sterilization, manufacturing). The primary focus will be on coupled electron-photon transport codes, with some brief discussion of proton transport. These codes model a radiation cascade in which electrons produce photons and vice versa. This coupling between particles of different types is important for radiation effects. For instance, in an x-ray environment, electrons are produced that drive the response in electronics. In an electron environment, dose due to bremsstrahlung photons can be significant once the source electrons have been stopped.

  8. Cost effectiveness of the 1995 model energy code in Massachusetts

    SciTech Connect

    Lucas, R.G.

    1996-02-01

    This report documents an analysis of the cost effectiveness of the Council of American Building Officials' 1995 Model Energy Code (MEC) building thermal-envelope requirements for single-family houses and multifamily housing units in Massachusetts. The goal was to compare the cost effectiveness of the 1995 MEC to the energy conservation requirements of the Massachusetts State Building Code, based on a comparison of the costs and benefits associated with complying with each. This comparison was performed for three cities representing three geographical regions of Massachusetts: Boston, Worcester, and Pittsfield. The analysis was done for two different scenarios: a "move-up" home buyer purchasing a single-family house and a "first-time" financially limited home buyer purchasing a multifamily condominium unit. Natural gas, oil, and electric resistance heating were examined. The Massachusetts state code has much more stringent requirements if electric resistance heating is used rather than other heating fuels and/or equipment types. The MEC requirements do not vary by fuel type. For single-family homes, the 1995 MEC has requirements that are more energy-efficient than the non-electric resistance requirements of the current state code. For multifamily housing, the 1995 MEC has requirements that are approximately equally energy-efficient to the non-electric resistance requirements of the current state code. The 1995 MEC is generally not more stringent than the electric resistance requirements of the state code; in fact, for multifamily buildings the 1995 MEC is much less stringent.

  9. Subgraphs Matching-Based Side Information Generation for Distributed Multiview Video Coding

    NASA Astrophysics Data System (ADS)

    Xiong, Hongkai; Lv, Hui; Zhang, Yongsheng; Song, Li; He, Zhihai; Chen, Tsuhan

    2010-12-01

    We adopt constrained relaxation for distributed multiview video coding (DMVC). The novel framework integrates graph-based segmentation and matching to generate interview correlated side information without knowing the camera parameters, inspired by subgraph semantics and sparse decomposition of high-dimensional scale invariant feature data. The sparse data, as a good hypothesis space, aim at a best-matching optimization of interview side information with compact syndromes from the inferred relaxed coset. The plausible filling-in from a priori feature constraints between neighboring views could reinforce a promising compensation to interview side-information generation for joint multiview decoding. The graph-based representations of multiview images are adopted as constrained relaxation, which assists the interview correlation matching for subgraph semantics of the original Wyner-Ziv image by means of graph-based image segmentation and the associated scale invariant feature detector MSER (maximally stable extremal regions) and descriptor SIFT (scale-invariant feature transform). In order to find a distinctive feature matching with a more stable approximation, linear (PCA-SIFT) and nonlinear projections (locally linear embedding) are adopted to reduce the dimension of SIFT descriptors, and a TPS (thin plate spline) warping model is used to capture a more accurate interview motion model. The experimental results validate the high estimation precision and the rate-distortion improvements.

  10. Software Model Checking of ARINC-653 Flight Code with MCP

    NASA Technical Reports Server (NTRS)

    Thompson, Sarah J.; Brat, Guillaume; Venet, Arnaud

    2010-01-01

    The ARINC-653 standard defines a common interface for Integrated Modular Avionics (IMA) code. In particular, ARINC-653 Part 1 specifies a process- and partition-management API that is analogous to POSIX threads, but with certain extensions and restrictions intended to support the implementation of high reliability flight code. MCP is a software model checker, developed at NASA Ames, that provides capabilities for model checking C and C++ source code. In this paper, we present recent work aimed at implementing extensions to MCP that support ARINC-653, and we discuss the challenges and opportunities that consequentially arise. Providing support for ARINC-653's time and space partitioning is nontrivial, though there are implicit benefits for partial order reduction possible as a consequence of the API's strict interprocess communication policy.

  11. NPARC Code Upgraded with Two-Equation Turbulence Models

    NASA Technical Reports Server (NTRS)

    1996-01-01

    The National PARC (NPARC) Alliance was established by the NASA Lewis Research Center and the Air Force Arnold Engineering Development Center to provide the U.S. aeropropulsion community with a reliable Navier-Stokes code for simulating the nonrotating components of propulsion systems. Recent improvements to the turbulence model capabilities of the NPARC code have significantly improved its capability to simulate turbulent flows. Specifically, the Chien k-epsilon and Wilcox k-omega turbulence models were implemented at Lewis. Lewis researchers installed the Chien k-epsilon model into NPARC to improve the code's ability to calculate turbulent flows with attached wall boundary layers and free shear layers. Calculations with NPARC have demonstrated that the Chien k-epsilon model provides more accurate calculations than those obtained with algebraic models previously available in the code. Grid sensitivity investigations have shown that computational grids must be packed against the solid walls such that the first point off of the wall is placed in the laminar sublayer. In addition, matching the boundary layer and momentum thicknesses entering mixing regions is necessary for an accurate prediction of the free shear-layer growth.

  12. The lognormal and gamma distribution models for estimating molecular weight distributions of polymers using PGSE NMR

    NASA Astrophysics Data System (ADS)

    Williamson, Nathan H.; Nydén, Magnus; Röding, Magnus

    2016-06-01

    We present comprehensive derivations for the statistical models and methods for the use of pulsed gradient spin echo (PGSE) NMR to characterize the molecular weight distribution of polymers via the well-known scaling law relating diffusion coefficients and molecular weights. We cover the lognormal and gamma distribution models and linear combinations of these distributions. Although the focus is on methodology, we illustrate the use experimentally with three polystyrene samples, comparing the NMR results to gel permeation chromatography (GPC) measurements, test the accuracy and noise-sensitivity on simulated data, and provide code for implementation.
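
    An illustrative sketch of the lognormal case (not the authors' code; the scaling-law constants, b-value range, and noise level are assumed, order-of-magnitude values): the PGSE attenuation is a molecular-weight-distribution-weighted sum of exponentials with D = K·M^(-beta), and the lognormal parameters are recovered by least-squares fitting. Requires NumPy and SciPy.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.optimize import curve_fit

K, beta = 3.0e-9, 0.55                    # assumed scaling-law parameters D = K * M**-beta
M = np.logspace(3, 7, 400)                # molecular weight grid, g/mol

def lognormal_weights(mu, sigma):
    w = np.exp(-(np.log(M) - mu) ** 2 / (2.0 * sigma ** 2)) / M
    return w / trapezoid(w, M)            # normalised number-weighted distribution

def attenuation(b, mu, sigma):
    # distribution-weighted sum of single-component exponential decays
    w = lognormal_weights(mu, sigma)
    D = K * M ** (-beta)
    return trapezoid(w * np.exp(-np.outer(b, D)), M, axis=1)

# Synthetic "measured" echo attenuation for mu = 11, sigma = 0.5, plus noise
b = np.linspace(0.0, 2e11, 30)            # gradient-dependent b values, s/m^2
E_obs = attenuation(b, 11.0, 0.5) + 0.005 * np.random.default_rng(1).normal(size=b.size)

(mu_fit, sigma_fit), _ = curve_fit(attenuation, b, E_obs, p0=(10.0, 0.3),
                                   bounds=([5.0, 0.05], [16.0, 2.0]))
print("fitted lognormal parameters: mu = %.2f, sigma = %.2f" % (mu_fit, sigma_fit))
```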

  13. Error control in the GCF: An information-theoretic model for error analysis and coding

    NASA Technical Reports Server (NTRS)

    Adeyemi, O.

    1974-01-01

    The structure of data-transmission errors within the Ground Communications Facility is analyzed in order to provide error control (both forward error correction and feedback retransmission) for improved communication. Emphasis is placed on constructing a theoretical model of errors and obtaining from it all the relevant statistics for error control. No specific coding strategy is analyzed, but references to the significance of certain error pattern distributions, as predicted by the model, to error correction are made.

  14. Multisynaptic activity in a pyramidal neuron model and neural code.

    PubMed

    Ventriglia, Francesco; Di Maio, Vito

    2006-01-01

    The highly irregular firing of mammalian cortical pyramidal neurons is one of the most striking observations of brain activity. This result greatly affects the discussion on the neural code, i.e., how the brain codes information transmitted along the different cortical stages. In fact, it seems to favor one of the two main hypotheses about this issue, the rate code. But the supporters of the contrasting hypothesis, the temporal code, consider this evidence inconclusive. We discuss here a leaky integrate-and-fire model of a hippocampal pyramidal neuron, intended to be biologically sound, to investigate the genesis of the irregular pyramidal firing and to give useful information about the coding problem. To this aim, the complete set of excitatory and inhibitory synapses impinging on such a neuron has been taken into account. The firing activity of the neuron model has been studied by computer simulation both in basic conditions and allowing brief periods of over-stimulation in specific regions of its synaptic constellation. Our results show neuronal firing conditions similar to those observed in experimental investigations on pyramidal cortical neurons. In particular, the variation coefficient (CV) computed from the inter-spike intervals (ISIs) in our simulations for basic conditions is close to unity, as is that computed from experimental data. Our simulations also show different behaviors in firing sequences for different frequencies of stimulation. PMID:16870323
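
    A minimal leaky integrate-and-fire sketch in the same spirit (synapse counts, rates, and weights are invented and far cruder than the model discussed): near-balanced excitatory and inhibitory Poisson input drives irregular firing, and the CV of the inter-spike intervals is computed from the resulting spike train.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T = 1e-4, 10.0                                  # time step and duration, s
tau_m, v_rest, v_th, v_reset = 0.02, -0.070, -0.050, -0.065   # s, V
n_exc, rate_exc, w_exc = 4000, 5.0, 0.2e-3          # synapses, Hz each, V per event (assumed)
n_inh, rate_inh, w_inh = 1000, 10.0, -0.3e-3

v, t_last, isis = v_rest, None, []
for step in range(int(T / dt)):
    # total synaptic drive this step from excitatory and inhibitory Poisson inputs
    drive = (w_exc * rng.poisson(n_exc * rate_exc * dt)
             + w_inh * rng.poisson(n_inh * rate_inh * dt))
    v += dt / tau_m * (v_rest - v) + drive          # leaky integration plus synaptic input
    if v >= v_th:                                   # threshold crossing: spike and reset
        t = step * dt
        if t_last is not None:
            isis.append(t - t_last)
        t_last, v = t, v_reset

if len(isis) > 1:
    isis = np.array(isis)
    print("rate %.1f Hz, CV of ISIs %.2f" % (1.0 / isis.mean(), isis.std() / isis.mean()))
```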

  15. Water Distribution and Removal Model

    SciTech Connect

    Y. Deng; N. Chipman; E.L. Hardin

    2005-08-26

    The design of the Yucca Mountain high level radioactive waste repository depends on the performance of the engineered barrier system (EBS). To support the total system performance assessment (TSPA), the Engineered Barrier System Degradation, Flow, and Transport Process Model Report (EBS PMR) is developed to describe the thermal, mechanical, chemical, hydrological, biological, and radionuclide transport processes within the emplacement drifts, which includes the following major analysis/model reports (AMRs): (1) EBS Water Distribution and Removal (WD&R) Model; (2) EBS Physical and Chemical Environment (P&CE) Model; (3) EBS Radionuclide Transport (EBS RNT) Model; and (4) EBS Multiscale Thermohydrologic (TH) Model. Technical information, including data, analyses, models, software, and supporting documents, will be provided to defend the applicability of these models for their intended purpose of evaluating the postclosure performance of the Yucca Mountain repository system. The WD&R AMR is important to the site recommendation. Water distribution and removal represents one component of the overall EBS. Under some conditions, liquid water will seep into emplacement drifts through fractures in the host rock and move generally downward, potentially contacting waste packages. After waste packages are breached by corrosion, some of this seepage water will contact the waste, dissolve or suspend radionuclides, and ultimately carry radionuclides through the EBS to the near-field host rock. Lateral diversion of liquid water within the drift will occur at the inner drift surface, and more significantly from the operation of engineered structures such as drip shields and the outer surface of waste packages. If most of the seepage flux can be diverted laterally and removed from the drifts before contacting the wastes, the release of radionuclides from the EBS can be controlled, resulting in a proportional reduction in dose release at the accessible environment. The purposes

  16. General Description of Fission Observables: GEF Model Code

    NASA Astrophysics Data System (ADS)

    Schmidt, K.-H.; Jurado, B.; Amouroux, C.; Schmitt, C.

    2016-01-01

    The GEF ("GEneral description of Fission observables") model code is documented. It describes the observables for spontaneous fission, neutron-induced fission and, more generally, for fission of a compound nucleus from any other entrance channel, with given excitation energy and angular momentum. The GEF model is applicable for a wide range of isotopes from Z = 80 to Z = 112 and beyond, up to excitation energies of about 100 MeV. The results of the GEF model are compared with fission barriers, fission probabilities, fission-fragment mass- and nuclide distributions, isomeric ratios, total kinetic energies, and prompt-neutron and prompt-gamma yields and energy spectra from neutron-induced and spontaneous fission. Derived properties of delayed neutrons and decay heat are also considered. The GEF model is based on a general approach to nuclear fission that explains a great part of the complex appearance of fission observables on the basis of fundamental laws of physics and general properties of microscopic systems and mathematical objects. The topographic theorem is used to estimate the fission-barrier heights from theoretical macroscopic saddle-point and ground-state masses and experimental ground-state masses. Motivated by the theoretically predicted early localisation of nucleonic wave functions in a necked-in shape, the properties of the relevant fragment shells are extracted. These are used to determine the depths and the widths of the fission valleys corresponding to the different fission channels and to describe the fission-fragment distributions and deformations at scission by a statistical approach. A modified composite nuclear-level-density formula is proposed. It respects some features in the superfluid regime that are in accordance with new experimental findings and with theoretical expectations. These are a constant-temperature behaviour that is consistent with a considerably increased heat capacity and an increased pairing condensation energy that is

  17. Performance and Application of Parallel OVERFLOW Codes on Distributed and Shared Memory Platforms

    NASA Technical Reports Server (NTRS)

    Djomehri, M. Jahed; Rizk, Yehia M.

    1999-01-01

    The presentation discusses recent studies on the performance of the two parallel versions of the aerodynamics CFD code, OVERFLOW_MPI and _MLP. Developed at NASA Ames, the serial version, OVERFLOW, is a multidimensional Navier-Stokes flow solver based on overset (Chimera) grid technology. The code has recently been parallelized in two ways. One is based on the explicit message-passing interface (MPI) across processors and uses the _MPI communication package. This approach is primarily suited for distributed memory systems and workstation clusters. The second, termed the multi-level parallel (MLP) method, is simple and uses shared memory for all communications. The _MLP code is suitable on distributed-shared memory systems. For both methods, the message passing takes place across the processors or processes at the advancement of each time step. This procedure is, in effect, the Chimera boundary conditions update, which is done in an explicit "Jacobi" style. In contrast, the update in the serial code is done in more of a "Gauss-Seidel" fashion. The programming effort for the _MPI code is more complicated than for the _MLP code; the former requires modification of the outer and some inner shells of the serial code, whereas the latter focuses only on the outer shell of the code. The _MPI version offers a great deal of flexibility in distributing grid zones across a specified number of processors in order to achieve load balancing. The approach is capable of partitioning zones across multiple processors or sending each zone and/or cluster of several zones into a single processor. The message passing across the processors consists of Chimera boundary and/or an overlap of "halo" boundary points for each partitioned zone. The MLP version is a new coarse-grain parallel concept at the zonal and intra-zonal levels. A grouping strategy is used to distribute zones into several groups forming sub-processes which will run in parallel. The total volume of grid points in each

  18. ABAREX: A neutron spherical optical-statistical model code

    SciTech Connect

    Lawson, R.D.

    1992-06-01

    The spherical optical-statistical model is briefly reviewed and the capabilities of the neutron scattering code, ABAREX, are presented. Input files for ten examples, in which neutrons are scattered by various nuclei, are given and the output of each run is discussed in detail.

  19. Development of a model and computer code to describe solar grade silicon production processes

    NASA Technical Reports Server (NTRS)

    Gould, R. K.; Srivastava, R.

    1979-01-01

    Two computer codes were developed for describing flow reactors in which high purity, solar grade silicon is produced via reduction of gaseous silicon halides. The first is the CHEMPART code, an axisymmetric, marching code which treats two phase flows with models describing detailed gas-phase chemical kinetics, particle formation, and particle growth. It can be used to describe flow reactors in which reactants mix, react, and form a particulate phase. Detailed radial gas-phase composition, temperature, velocity, and particle size distribution profiles are computed. Also, deposition of heat, momentum, and mass (either particulate or vapor) on reactor walls is described. The second code is a modified version of the GENMIX boundary layer code which is used to compute rates of heat, momentum, and mass transfer to the reactor walls. This code lacks the detailed chemical kinetics and particle handling features of the CHEMPART code but has the virtue of running much more rapidly than CHEMPART, while treating the phenomena occurring in the boundary layer in more detail.

  20. Self-shielding models of MICROX-2 code

    SciTech Connect

    Hou, J.; Ivanov, K.; Choi, H.

    2013-07-01

    The MICROX-2 is a transport theory code that solves for the neutron slowing-down and thermalization equations of a two-region lattice cell. In the previous study, a new fine-group cross section library of the MICROX-2 was generated and tested against reference calculations and measurement data. In this study, existing physics models of the MICROX-2 are reviewed and updated to improve the physics calculation performance of the MICROX-2 code, including the resonance self-shielding model and spatial self-shielding factor. The updated self-shielding models have been verified through a series of benchmark calculations against the Monte Carlo code, using homogeneous and pin cell models selected for this study. The results have shown that the updates of the self-shielding factor calculation model are correct and improve the physics calculation accuracy even though the magnitude of error reduction is relatively small. Compared to the existing models, the updates reduced the prediction error of the infinite multiplication factor by approximately 0.1% and 0.2% for the homogeneous and pin cell models, respectively, considered in this study. (authors)

  1. Modeling of Anomalous Transport in Tokamaks with FACETS code

    NASA Astrophysics Data System (ADS)

    Pankin, A. Y.; Batemann, G.; Kritz, A.; Rafiq, T.; Vadlamani, S.; Hakim, A.; Kruger, S.; Miah, M.; Rognlien, T.

    2009-05-01

    The FACETS code, a whole-device integrated modeling code that self-consistently computes plasma profiles for the plasma core and edge in tokamaks, has been recently developed as a part of the SciDAC project for core-edge simulations. A choice of transport models is available in FACETS through the FMCFM interface [1]. Transport models included in FMCFM have specific ranges of applicability, which can limit their use to parts of the plasma. In particular, the GLF23 transport model does not include the resistive ballooning effects that can be important in the tokamak pedestal region and GLF23 typically under-predicts the anomalous fluxes near the magnetic axis [2]. The TGLF and GYRO transport models have similar limitations [3]. A combination of transport models that covers the entire discharge domain is studied using FACETS in a realistic tokamak geometry. Effective diffusivities computed with the FMCFM transport models are extended to the region near the separatrix to be used in the UEDGE code within FACETS. 1. S. Vadlamani et al. (2009) First time-dependent transport simulations using GYRO and NCLASS within FACETS (this meeting). 2. T. Rafiq et al. (2009) Simulation of electron thermal transport in H-mode discharges, submitted to Phys. Plasmas. 3. C. Holland et al. (2008) Validation of gyrokinetic transport simulations using DIII-D core turbulence measurements, Proc. of IAEA FEC (Switzerland, 2008).

  2. Modeling of the EAST ICRF antenna with ICANT Code

    SciTech Connect

    Qin Chengming; Zhao Yanping; Colas, L.; Heuraux, S.

    2007-09-28

    A Resonant Double Loop (RDL) antenna for the ion-cyclotron range of frequencies (ICRF) on the Experimental Advanced Superconducting Tokamak (EAST) is under construction. The new antenna is analyzed using the antenna coupling code ICANT, which self-consistently determines the surface currents on all antenna parts. In this work, the new ICRF antenna is modeled with this code to assess the near fields in front of the antenna and to analyze its coupling capabilities. Moreover, the antenna reactive radiated power computed by ICANT shows good agreement with that deduced from transmission line (TL) theory.

  3. Building a generalized distributed system model

    NASA Technical Reports Server (NTRS)

    Mukkamala, Ravi

    1991-01-01

    A number of topics related to building a generalized distributed system model are discussed. The effects of distributed database modeling on evaluation of transaction rollbacks, the measurement of effects of distributed database models on transaction availability measures, and a performance analysis of static locking in replicated distributed database systems are covered.

  4. A compressible Navier-Stokes code for turbulent flow modeling

    NASA Technical Reports Server (NTRS)

    Coakley, T. J.

    1984-01-01

    An implicit, finite volume code for solving two dimensional, compressible turbulent flows is described. Second order upwind differencing of the inviscid terms of the equations is used to enhance stability and accuracy. A diagonal form of the implicit algorithm is used to improve efficiency. Several zero and two equation turbulence models are incorporated to study their impact on overall flow modeling accuracy. Applications to external and internal flows are discussed.

  5. Thermohydraulic modeling of nuclear thermal rockets: The KLAXON code

    SciTech Connect

    Hall, M.L.; Rider, W.J.; Cappiello, M.W.

    1992-07-01

    The hydrogen flow from the storage tanks, through the reactor core, and out the nozzle of a Nuclear Thermal Rocket is an integral design consideration. To provide an analysis and design tool for this phenomenon, the KLAXON code is being developed. A shock-capturing numerical methodology is used to model the gas flow (the Harten, Lax, and van Leer method, as implemented by Einfeldt). Preliminary results of modeling the flow through the reactor core and nozzle are given in this paper.

  6. A semianalytic Monte Carlo code for modelling LIDAR measurements

    NASA Astrophysics Data System (ADS)

    Palazzi, Elisa; Kostadinov, Ivan; Petritoli, Andrea; Ravegnani, Fabrizio; Bortoli, Daniele; Masieri, Samuele; Premuda, Margherita; Giovanelli, Giorgio

    2007-10-01

    LIDAR (LIght Detection and Ranging) is an active optical remote sensing technology with many applications in atmospheric physics. Modelling of LIDAR measurements is a useful approach for evaluating the effects of various environmental variables and scenarios as well as of different measurement geometries and instrumental characteristics. In this regard, a Monte Carlo simulation model can provide a reliable answer to these important requirements. A semianalytic Monte Carlo code for modelling LIDAR measurements has been developed at ISAC-CNR. The backscattered laser signal detected by the LIDAR system is calculated in the code taking into account the contributions due to the main atmospheric molecular constituents and aerosol particles through processes of single and multiple scattering. The contributions of molecular absorption and of ground and cloud reflection are evaluated too. The code can perform simulations of both monostatic and bistatic LIDAR systems. To enhance the efficiency of the Monte Carlo simulation, analytical estimates and expected value calculations are performed. Artificial devices (such as forced collision, local forced collision, splitting and Russian roulette) are also provided by the code, enabling the user to drastically reduce the variance of the calculation.

  7. Extending NEC to model wire objects in infinite chiral media. [Numerical electromagnetic code (NEC)

    SciTech Connect

    Burke, G.J.; Miller, E.K.; Bhattachryya, A.K. (Physical Science Lab.)

    1992-01-01

    The development of a moment-method model for wire objects in an infinite chiral medium is described. In this work, the Numerical Electromagnetics Code (NEC) was extended by including a new integral-equation kernel obtained from the dyadic Green's function for an infinite chiral medium. The NEC moment-method treatment using point matching and a three-term sinusoidal current expansion was adapted to the case of a chiral medium. Examples of current distributions and radiation patterns for simple antennas are presented, and the validation of the code is discussed. 15 refs.

  8. Exciton Model Code System for Calculating Preequilibrium and Direct Double Differential Cross Sections.

    2007-07-09

    Version 02 PRECO-2006 is a two-component exciton model code for the calculation of double differential cross sections of light particle nuclear reactions. PRECO calculates the emission of light particles (A = 1 to 4) from nuclear reactions induced by light particles on a wide variety of target nuclei. Their distribution in both energy and angle is calculated. Since it currently only considers the emission of up to two particles in any given reaction, it is most useful for incident energies of 14 to 30 MeV when used as a stand-alone code. However, the preequilibrium calculations are valid up to at least around 100 MeV, and these can be used as input for more complete evaporation calculations, such as are performed in a Hauser-Feshbach model code. Finally, the production cross sections for specific product nuclides can be obtained.

  9. Non-contact assessment of melanin distribution via multispectral temporal illumination coding

    NASA Astrophysics Data System (ADS)

    Amelard, Robert; Scharfenberger, Christian; Wong, Alexander; Clausi, David A.

    2015-03-01

    Melanin is a pigment that is highly absorptive in the UV and visible electromagnetic spectra. It is responsible for perceived skin tone and protects against harmful UV effects. Abnormal melanin distribution is often an indicator of melanoma. We propose a novel approach for non-contact assessment of melanin distribution via multispectral temporal illumination coding, estimating the two-dimensional melanin distribution based on its absorptive characteristics. In the proposed system, a novel multispectral, cross-polarized, temporally-coded illumination sequence is synchronized with a camera to measure reflectance under both multispectral and ambient illumination. This allows us to eliminate the ambient illumination contribution from the acquired reflectance measurements, and also to determine the melanin distribution in an observed region based on the spectral properties of melanin using the Beer-Lambert law. Using this information, melanin distribution maps can be generated for objective, quantitative assessment of an individual's skin type. We show that the melanin distribution map correctly identifies areas with high melanin densities (e.g., nevi).
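
    As a rough illustration of how the Beer-Lambert law can turn two-wavelength, ambient-corrected reflectance into a relative melanin map, consider the sketch below. The wavelengths, extinction values and the simple two-wavelength difference are illustrative assumptions, not the calibration used by the authors.

    ```python
    import numpy as np

    # Illustrative (assumed) melanin extinction coefficients at two wavelengths.
    EXT_MELANIN = {660: 0.30, 880: 0.12}   # hypothetical values, arbitrary units

    def relative_melanin_map(refl_660, refl_880, path_length=1.0):
        """Estimate a relative melanin density map from reflectance images at
        two wavelengths using Beer-Lambert: A(lambda) = -log10(R) ~ eps(lambda)*c*L.
        The two-wavelength difference crudely removes wavelength-independent loss."""
        a1 = -np.log10(np.clip(refl_660, 1e-6, 1.0))
        a2 = -np.log10(np.clip(refl_880, 1e-6, 1.0))
        c = (a1 - a2) / ((EXT_MELANIN[660] - EXT_MELANIN[880]) * path_length)
        return c   # relative concentration map (arbitrary units)
    ```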

  10. Enhancements to the SSME transfer function modeling code

    NASA Technical Reports Server (NTRS)

    Irwin, R. Dennis; Mitchell, Jerrel R.; Bartholomew, David L.; Glenn, Russell D.

    1995-01-01

    This report details the results of a one year effort by Ohio University to apply the transfer function modeling and analysis tools developed under NASA Grant NAG8-167 (Irwin, 1992), (Bartholomew, 1992) to the generation of Space Shuttle Main Engine High Pressure Turbopump transfer functions from time domain data. In addition, new enhancements to the transfer function modeling codes are presented, along with some ideas for improved modeling methods and future work. Section 2 contains a review of the analytical background used to generate transfer functions with the SSME transfer function modeling software. Section 2.1 presents the 'ratio method' developed for obtaining models of systems that are subject to single unmeasured excitation sources and have two or more measured output signals. Since most of the models developed during the investigation use the Eigensystem Realization Algorithm (ERA) for model generation, Section 2.2 presents an introduction to ERA, and Section 2.3 describes how it can be used to model spectral quantities. Section 2.4 details the Residue Identification Algorithm (RID), including the use of Constrained Least Squares (CLS) and Total Least Squares (TLS). Most of this information can be found in the report (and is repeated for convenience). Section 3 chronicles the effort of applying the SSME transfer function modeling codes to the a51p394.dat and a51p1294.dat time data files to generate transfer functions from the unmeasured input to the 129.4 degree sensor output. Included are transfer function modeling attempts using five methods. The first method is a direct application of the SSME codes to the data files and the second method uses the underlying trends in the spectral density estimates to form transfer function models with less clustering of poles and zeros than the models obtained by the direct method. In the third approach, the time data is low pass filtered prior to the modeling process in an
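
    Since Section 2.2 of the report centers on the Eigensystem Realization Algorithm, a compact textbook-style sketch of the basic ERA steps may help orient the reader. This is a generic single-input/single-output illustration built from impulse-response (Markov parameter) data, not the SSME code itself; the model order and data layout are assumptions.

    ```python
    import numpy as np

    def era(markov, r):
        """Basic Eigensystem Realization Algorithm (SISO sketch).
        markov: 1-D array of impulse-response samples h[1], h[2], ...
        r: desired model order. Returns a discrete-time realization (A, B, C)."""
        n = (len(markov) - 1) // 2
        # Hankel matrix H0 and its one-step time shift H1.
        H0 = np.array([[markov[i + j] for j in range(n)] for i in range(n)])
        H1 = np.array([[markov[i + j + 1] for j in range(n)] for i in range(n)])
        U, s, Vt = np.linalg.svd(H0)
        Ur, Vr = U[:, :r], Vt[:r, :]
        Sr = np.diag(np.sqrt(s[:r]))
        Sr_inv = np.linalg.inv(Sr)
        A = Sr_inv @ Ur.T @ H1 @ Vr.T @ Sr_inv   # state matrix
        B = (Sr @ Vr)[:, :1]                     # input matrix (first column block)
        C = (Ur @ Sr)[:1, :]                     # output matrix (first row block)
        return A, B, C
    ```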

  11. 25 CFR 18.104 - May a tribe include provisions in its tribal probate code regarding the distribution and descent...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    25 CFR § 18.104 (Indians, Bureau of Indian Affairs): May a tribe include provisions in its tribal probate code regarding the distribution and descent of trust personalty?

  12. Using cryptology models for protecting PHP source code

    NASA Astrophysics Data System (ADS)

    Jevremović, Aleksandar; Ristić, Nenad; Veinović, Mladen

    2013-10-01

    Protecting PHP scripts from unwanted use, copying and modification is a big issue today. Existing solutions at the source code level mostly work as obfuscators; they are free, but they do not provide any serious protection. Solutions that encode opcode are more secure, but they are commercial and require a closed-source, proprietary PHP interpreter extension. Additionally, encoded opcode is not compatible with future versions of interpreters, which implies re-purchasing encoders from the authors. Finally, if the extension source code is compromised, all scripts encoded with that solution are compromised too. In this paper, we present a new model for a free and open-source PHP script protection solution. The protection level provided by the proposed solution is equal to that of commercial solutions. The model is based on conclusions drawn from applying standard cryptology models to analyse the strengths and weaknesses of existing solutions, where script protection is viewed as a secure communication channel in the cryptological sense.

  13. Examination of nanoparticle dispersion using a novel GPU based radial distribution function code

    NASA Astrophysics Data System (ADS)

    Rosch, Thomas; Wade, Matthew; Phelan, Frederick

    We have developed a novel GPU-based code that rapidly calculates the radial distribution function (RDF) for an entire system, with no cutoff, ensuring accuracy. Built on top of this code, we have developed tools to calculate the second virial coefficient (B2) and the structure factor from the RDF, two properties that are directly related to the dispersion of nanoparticles in nanocomposite systems. We validate the RDF calculations by comparison with previously published results, and also show how our code, which takes into account bonding in polymeric systems, enables more accurate predictions of g(r) than the state-of-the-art GPU-based RDF codes currently available for these systems. In addition, our code reduces the computational time by approximately an order of magnitude compared to CPU-based calculations. We demonstrate the application of our toolset by examining a coarse-grained nanocomposite system and show how different surface energies between particle and polymer lead to different dispersion states and affect properties such as viscosity, yield strength, elasticity, and thermal conductivity.
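
    For readers unfamiliar with the quantity being accelerated, a minimal CPU reference implementation of g(r) for a cubic periodic box is sketched below; it ignores the bonded-pair exclusion and GPU aspects discussed in the abstract and is an illustration only.

    ```python
    import numpy as np

    def radial_distribution(positions, box_length, n_bins=200):
        """Brute-force g(r) for N particles in a cubic periodic box.
        positions: (N, 3) array; box_length: box edge. Returns (r_centers, g)."""
        n = len(positions)
        r_max = box_length / 2.0
        edges = np.linspace(0.0, r_max, n_bins + 1)
        hist = np.zeros(n_bins)
        for i in range(n - 1):
            d = positions[i + 1:] - positions[i]
            d -= box_length * np.round(d / box_length)   # minimum-image convention
            r = np.linalg.norm(d, axis=1)
            hist += np.histogram(r[r < r_max], bins=edges)[0]
        rho = n / box_length**3
        shell_vol = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
        # Normalize unique-pair counts by the ideal-gas expectation.
        g = 2.0 * hist / (n * rho * shell_vol)
        return 0.5 * (edges[1:] + edges[:-1]), g
    ```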

  14. The modelling of wall condensation with noncondensable gases for the containment codes

    SciTech Connect

    Leduc, C.; Coste, P.; Barthel, V.; Deslandes, H.

    1995-09-01

    This paper presents several approaches to the modelling of wall condensation in the presence of noncondensable gases for containment codes. Lumped-parameter modelling and local modelling by 3-D codes are discussed. Containment analysis codes should be able to predict the spatial distributions of steam, air, and hydrogen as well as the efficiency of cooling by wall condensation in both natural convection and forced convection situations. 3-D calculations with turbulent diffusion modelling are necessary, since diffusion controls the local condensation whereas wall condensation may redistribute the air and hydrogen mass in the containment. A fine-mesh model of film condensation in forced convection has been developed, taking into account the influence of the suction velocity at the liquid-gas interface. It is associated with the 3-D model of the TRIO code for the gas mixture, where a k-ξ turbulence model is used. The predictions are compared to Huhtiniemi's experimental data. The modelling of condensation in natural or mixed convection is more complex: as no universal velocity and temperature profiles exist for such boundary layers, a very fine nodalization is necessary. Simpler models integrate the equations over the boundary layer thickness, using the heat and mass transfer analogy; their predictions are compared with an MIT experiment. For the containment compartments a two-node model is proposed using the lumped-parameter approach. Heat and mass transfer coefficients are tested on separate-effect tests and containment experiments. The CATHARE code has been adapted to perform such calculations and shows reasonable agreement with the data.

  15. Hierarchical model for distributed seismicity

    SciTech Connect

    Tejedor, Alejandro; Gomez, Javier B.; Pacheco, Amalio F.

    2010-07-15

    A cellular automata model for the interaction between seismic faults in an extended region is presented. Faults are represented by boxes formed by a different number of sites and located in the nodes of a fractal tree. Both the distribution of box sizes and the interaction between them are assumed to be hierarchical. Load particles are randomly added to the system, simulating the action of external tectonic forces. These particles fill the sites of the boxes progressively. When a box is full it topples, some of the particles are redistributed to other boxes and some of them are lost. A box relaxation simulates the occurrence of an earthquake in the region. The particle redistributions mostly occur upwards (to larger faults) and downwards (to smaller faults) in the hierarchy, producing new relaxations. Simple and efficient bookkeeping of the information allows the running of systems with more than fifty million faults. This model is consistent with the definition of magnitude, i.e., earthquakes of magnitude m take place in boxes with a number of sites ten times larger than the boxes responsible for earthquakes of magnitude m-1, which are placed in the immediately lower level of the hierarchy. The three parameters of the model have a geometrical nature: the height or number of levels of the fractal tree, the coordination of the tree and the ratio of areas between boxes in two consecutive levels. Besides reproducing several seismicity properties and regularities, this model is used to test the performance of some precursory patterns.

  16. Development of a model and computer code to describe solar grade silicon production processes

    NASA Technical Reports Server (NTRS)

    Srivastava, R.; Gould, R. K.

    1979-01-01

    Mathematical models, and computer codes based on these models, were developed which allow prediction of the product distribution in chemical reactors in which gaseous silicon compounds are converted to condensed-phase silicon. The reactors to be modeled are flow reactors in which silane or one of the halogenated silanes is thermally decomposed or reacted with an alkali metal, H2 or H atoms. Because the product of interest is particulate silicon, the processes which must be modeled, in addition to mixing and reaction of gas-phase reactants, include the nucleation and growth of condensed Si via coagulation, condensation, and heterogeneous reaction.

  17. New Mechanical Model for the Transmutation Fuel Performance Code

    SciTech Connect

    Gregory K. Miller

    2008-04-01

    A new mechanical model has been developed for implementation into the TRU fuel performance code. The new model differs from the existing FRAPCON 3 model, which it is intended to replace, in that it will include structural deformations (elasticity, plasticity, and creep) of the fuel. Also, the plasticity algorithm is based on the “plastic strain–total strain” approach, which should allow for more rapid and assured convergence. The model treats three situations relative to interaction between the fuel and cladding: (1) an open gap between the fuel and cladding, such that there is no contact, (2) contact between the fuel and cladding where the contact pressure is below a threshold value, such that axial slippage occurs at the interface, and (3) contact between the fuel and cladding where the contact pressure is above a threshold value, such that axial slippage is prevented at the interface. The first stage of development of the model included only the fuel. In this stage, results obtained from the model were compared with those obtained from finite element analysis using ABAQUS on a problem involving elastic, plastic, and thermal strains. Results from the two analyses showed essentially exact agreement through both loading and unloading of the fuel. After the cladding and fuel/clad contact were added, the model demonstrated expected behavior through all potential phases of fuel/clad interaction, and convergence was achieved without difficulty in all plastic analyses performed. The code is currently in stand-alone form. Prior to implementation into the TRU fuel performance code, creep strains will have to be added to the model. The model will also have to be verified against an ABAQUS analysis that involves contact between the fuel and cladding.

  18. The WARP Code: Modeling High Intensity Ion Beams

    SciTech Connect

    Grote, D P; Friedman, A; Vay, J L; Haber, I

    2004-12-09

    The Warp code, developed for heavy-ion driven inertial fusion energy studies, is used to model high intensity ion (and electron) beams. Significant capability has been incorporated in Warp, allowing nearly all sections of an accelerator to be modeled, beginning with the source. Warp has as its core an explicit, three-dimensional, particle-in-cell model. Alongside this is a rich set of tools for describing the applied fields of the accelerator lattice, and embedded conducting surfaces (which are captured at sub-grid resolution). Also incorporated are models with reduced dimensionality: an axisymmetric model and a transverse "slice" model. The code takes advantage of modern programming techniques, including object orientation, parallelism, and scripting (via Python). It is at the forefront in the use of the computational technique of adaptive mesh refinement, which has been particularly successful in the area of diode and injector modeling, both steady-state and time-dependent. In the presentation, some of the major aspects of Warp will be overviewed, especially those that could be useful in modeling ECR sources. Warp has been benchmarked against both theory and experiment. Recent results will be presented showing good agreement of Warp with experimental results from the STS500 injector test stand. Additional information can be found on the web page http://hif.lbl.gov/theory/WARP_summary.html.

  19. Universal regularizers for robust sparse coding and modeling.

    PubMed

    Ramírez, Ignacio; Sapiro, Guillermo

    2012-09-01

    Sparse data models, where data is assumed to be well represented as a linear combination of a few elements from a dictionary, have gained considerable attention in recent years, and their use has led to state-of-the-art results in many signal and image processing tasks. It is now well understood that the choice of the sparsity regularization term is critical in the success of such models. Based on a codelength minimization interpretation of sparse coding, and using tools from universal coding theory, we propose a framework for designing sparsity regularization terms which have theoretical and practical advantages when compared with the more standard l(0) or l(1) ones. The presentation of the framework and theoretical foundations is complemented with examples that show its practical advantages in image denoising, zooming and classification.
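
    The l1-regularized sparse coding baseline against which such universal regularizers are compared can be written as a few lines of iterative soft-thresholding. The sketch below is a generic ISTA solver for min_a 0.5*||x - D a||^2 + lam*||a||_1, shown for context; it is not the authors' universal-regularizer method.

    ```python
    import numpy as np

    def ista_sparse_code(x, D, lam=0.1, n_iter=200):
        """Iterative soft-thresholding (ISTA) for l1 sparse coding.
        x: signal vector; D: dictionary with atoms as columns; lam: sparsity weight."""
        L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
        a = np.zeros(D.shape[1])
        for _ in range(n_iter):
            grad = D.T @ (D @ a - x)           # gradient of the data-fit term
            z = a - grad / L
            a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
        return a
    ```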

  20. Description of the FORTRAN implementation of the spring small grains planting date distribution model

    NASA Technical Reports Server (NTRS)

    Artley, J. A. (Principal Investigator)

    1981-01-01

    The Hodges-Artley spring small grains planting date distribution model was coded in FORTRAN. The PLDRVR program, which implements the model, is described and a copy of the code is provided. The purpose, calling procedure, local variables, and input/output devices for each subroutine are explained to supplement the user's guide.

  1. Current Capabilities of the Fuel Performance Modeling Code PARFUME

    SciTech Connect

    G. K. Miller; D. A. Petti; J. T. Maki; D. L. Knudson

    2004-09-01

    The success of gas reactors depends upon the safety and quality of the coated particle fuel. A fuel performance modeling code (called PARFUME), which simulates the mechanical and physico-chemical behavior of fuel particles during irradiation, is under development at the Idaho National Engineering and Environmental Laboratory. Among current capabilities in the code are: 1) various options for calculating CO production and fission product gas release, 2) a thermal model that calculates a time-dependent temperature profile through a pebble bed sphere or a prismatic block core, as well as through the layers of each analyzed particle, 3) simulation of multi-dimensional particle behavior associated with cracking in the IPyC layer, partial debonding of the IPyC from the SiC, particle asphericity, kernel migration, and thinning of the SiC caused by interaction of fission products with the SiC, 4) two independent methods for determining particle failure probabilities, 5) a model for calculating release-to-birth (R/B) ratios of gaseous fission products, that accounts for particle failures and uranium contamination in the fuel matrix, and 6) the evaluation of an accident condition, where a particle experiences a sudden change in temperature following a period of normal irradiation. This paper presents an overview of the code.

  2. Spatial information outflow from the hippocampal circuit: distributed spatial coding and phase precession in the subiculum.

    PubMed

    Kim, Steve M; Ganguli, Surya; Frank, Loren M

    2012-08-22

    Hippocampal place cells convey spatial information through a combination of spatially selective firing and theta phase precession. The way in which this information influences regions like the subiculum that receive input from the hippocampus remains unclear. The subiculum receives direct inputs from area CA1 of the hippocampus and sends divergent output projections to many other parts of the brain, so we examined the firing patterns of rat subicular neurons. We found a substantial transformation in the subicular code for space from sparse to dense firing rate representations along a proximal-distal anatomical gradient: neurons in the proximal subiculum are more similar to canonical, sparsely firing hippocampal place cells, whereas neurons in the distal subiculum have higher firing rates and more distributed spatial firing patterns. Using information theory, we found that the more distributed spatial representation in the subiculum carries, on average, more information about spatial location and context than the sparse spatial representation in CA1. Remarkably, despite the disparate firing rate properties of subicular neurons, we found that neurons at all proximal-distal locations exhibit robust theta phase precession, with similar spiking oscillation frequencies as neurons in area CA1. Our findings suggest that the subiculum is specialized to compress sparse hippocampal spatial codes into highly informative distributed codes suitable for efficient communication to other brain regions. Moreover, despite this substantial compression, the subiculum maintains finer scale temporal properties that may allow it to participate in oscillatory phase coding and spike timing-dependent plasticity in coordination with other regions of the hippocampal circuit.
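
    One common way to quantify how much "information about spatial location" a firing-rate map carries is the Skaggs spatial-information measure. The sketch below computes bits per spike from an occupancy map and a rate map; it is a generic illustration, not the specific information-theoretic analysis used in the paper.

    ```python
    import numpy as np

    def spatial_information(rate_map, occupancy):
        """Skaggs spatial information (bits/spike).
        rate_map: mean firing rate per spatial bin; occupancy: time spent per bin."""
        p = occupancy / occupancy.sum()          # occupancy probability per bin
        mean_rate = np.sum(p * rate_map)
        valid = (rate_map > 0) & (p > 0)
        return np.sum(p[valid] * rate_map[valid] / mean_rate
                      * np.log2(rate_map[valid] / mean_rate))
    ```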

  3. Spatial information outflow from the hippocampal circuit: distributed spatial coding and phase precession in the subiculum

    PubMed Central

    Kim, Steve M.; Ganguli, Surya; Frank, Loren M.

    2012-01-01

    Hippocampal place cells convey spatial information through a combination of spatially-selective firing and theta phase precession. The way in which this information influences regions like the subiculum that receive input from the hippocampus remains unclear. The subiculum receives direct inputs from area CA1 of the hippocampus and sends divergent output projections to many other parts of the brain, so we examined the firing patterns of rat subicular neurons. We found a substantial transformation in the subicular code for space from sparse to dense firing rate representations along a proximal-distal anatomical gradient: neurons in the proximal subiculum are more similar to canonical, sparsely firing hippocampal place cells, whereas neurons in the distal subiculum have higher firing rates and more distributed spatial firing patterns. Using information theory, we found that the more distributed spatial representation in the subiculum carries, on average, more information about spatial location and context than the sparse spatial representation in CA1. Remarkably, despite the disparate firing rate properties of subicular neurons, we found that neurons at all proximal-distal locations exhibit robust theta phase precession, with similar spiking oscillation frequencies as neurons in area CA1. Our findings suggest that the subiculum is specialized to compress sparse hippocampal spatial codes into highly informative distributed codes suitable for efficient communication to other brain regions. Moreover, despite this substantial compression, the subiculum maintains finer scale temporal properties that may allow it to participate in oscillatory phase coding and spike timing-dependent plasticity in coordination with other regions of the hippocampal circuit. PMID:22915100

  4. Development Of Sputtering Models For Fluids-Based Plasma Simulation Codes

    NASA Astrophysics Data System (ADS)

    Veitzer, Seth; Beckwith, Kristian; Stoltz, Peter

    2015-09-01

    RF-driven plasma devices, such as ion sources and plasma processing devices for many industrial and research applications, benefit from detailed numerical modeling. Simulation of these devices using explicit PIC codes is difficult due to inherent separations of time and spatial scales. An alternative is fluid-based codes coupled with electromagnetics, which are applicable to modeling higher-density plasmas in the time domain while relaxing time-step requirements. To accurately model plasma-surface processes, such as physical sputtering and secondary electron emission, kinetic particle models have been developed, where particles are emitted from a material surface due to plasma ion bombardment. In fluid models plasma properties are defined on a cell-by-cell basis, and distributions for individual particle properties are assumed. This adds complexity to surface-process modeling, which we describe here. We describe the implementation of sputtering models into the hydrodynamic plasma simulation code USim, as well as methods to improve the accuracy of fluids-based simulation of plasma-surface interactions by better modeling of heat fluxes. This work was performed under the auspices of the Department of Energy, Office of Basic Energy Sciences Award #DE-SC0009585.

  5. Toward a Probabilistic Automata Model of Some Aspects of Code-Switching.

    ERIC Educational Resources Information Center

    Dearholt, D. W.; Valdes-Fallis, G.

    1978-01-01

    The purpose of the model is to select either Spanish or English as the language to be used; its goals at this stage of development include modeling code-switching for lexical need, apparently random code-switching, dependency of code-switching upon sociolinguistic context, and code-switching within syntactic constraints. (EJS)

  6. Assessment of uncertainties of the models used in thermal-hydraulic computer codes

    NASA Astrophysics Data System (ADS)

    Gricay, A. S.; Migrov, Yu. A.

    2015-09-01

    The article addresses the problem of determining the statistical characteristics of variable parameters (the variation range and distribution law) when analyzing the uncertainty and sensitivity of calculation results to uncertainty in input data. A comparative analysis of modern approaches to uncertainty in input data is presented. The need to develop an alternative method for estimating the uncertainty of model parameters used in thermal-hydraulic computer codes, in particular, in the closing correlations of the loop thermal hydraulics block, is shown. Such a method should involve a minimal degree of subjectivity and be based on objective quantitative assessment criteria. The method includes three sequential stages: selecting experimental data satisfying the specified criteria, identifying the key closing correlation using a sensitivity analysis, and carrying out case calculations followed by statistical processing of the results. By using the method, one can estimate the uncertainty range of a variable parameter and establish its distribution law in the above-mentioned range provided that the experimental information is sufficiently representative. Practical application of the method is demonstrated taking as an example the problem of estimating the uncertainty of a parameter appearing in the model describing transition to post-burnout heat transfer that is used in the thermal-hydraulic computer code KORSAR. The performed study revealed the need to narrow the previously established uncertainty range of this parameter and to replace the uniform distribution law in the above-mentioned range by the Gaussian distribution law. The proposed method can be applied to different thermal-hydraulic computer codes. In some cases, application of the method can make it possible to achieve a smaller degree of conservatism in the expert estimates of uncertainties pertinent to the model parameters used in computer codes.

  7. Direct containment heating models in the CONTAIN code

    SciTech Connect

    Washington, K.E.; Williams, D.C.

    1995-08-01

    The potential exists in a nuclear reactor core melt severe accident for molten core debris to be dispersed under high pressure into the containment building. If this occurs, the set of phenomena that result in the transfer of energy to the containment atmosphere and its surroundings is referred to as direct containment heating (DCH). Because of the potential for DCH to lead to early containment failure, the U.S. Nuclear Regulatory Commission (USNRC) has sponsored an extensive research program consisting of experimental, analytical, and risk integration components. An important element of the analytical research has been the development and assessment of direct containment heating models in the CONTAIN code. This report documents the DCH models in the CONTAIN code. DCH models in CONTAIN for representing debris transport, trapping, chemical reactions, and heat transfer from debris to the containment atmosphere and surroundings are described. The descriptions include the governing equations and input instructions in CONTAIN unique to performing DCH calculations. Modifications made to the combustion models in CONTAIN for representing the combustion of DCH-produced and pre-existing hydrogen under DCH conditions are also described. Input table options for representing the discharge of debris from the RPV and the entrainment phase of the DCH process are also described. A sample calculation is presented to demonstrate the functionality of the models. The results show that reasonable behavior is obtained when the models are used to predict the sixth Zion geometry integral effects test at 1/10th scale.

  8. Finite element code development for modeling detonation of HMX composites

    NASA Astrophysics Data System (ADS)

    Duran, Adam; Sundararaghavan, Veera

    2015-06-01

    In this talk, we present a hydrodynamics code for modeling shock and detonation waves in HMX. A stable efficient solution strategy based on a Taylor-Galerkin finite element (FE) discretization was developed to solve the reactive Euler equations. In our code, well calibrated equations of state for the solid unreacted material and gaseous reaction products have been implemented, along with a chemical reaction scheme and a mixing rule to define the properties of partially reacted states. A linear Gruneisen equation of state was employed for the unreacted HMX calibrated from experiments. The JWL form was used to model the EOS of gaseous reaction products. It is assumed that the unreacted explosive and reaction products are in both pressure and temperature equilibrium. The overall specific volume and internal energy was computed using the rule of mixtures. Arrhenius kinetics scheme was integrated to model the chemical reactions. A locally controlled dissipation was introduced that induces a non-oscillatory stabilized scheme for the shock front. The FE model was validated using analytical solutions for sod shock and ZND strong detonation models and then used to perform 2D and 3D shock simulations. We will present benchmark problems for geometries in which a single HMX crystal is subjected to a shock condition. Our current progress towards developing microstructural models of HMX/binder composite will also be discussed.
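
    The JWL product equation of state mentioned above has a standard closed form; a small sketch is given below. The coefficient values are illustrative placeholders, not the calibration used for HMX in the work.

    ```python
    import numpy as np

    def jwl_pressure(v_rel, e, A=778.3, B=7.07, R1=4.2, R2=1.0, omega=0.30):
        """Jones-Wilkins-Lee EOS for detonation products:
            p = A(1 - w/(R1 v)) exp(-R1 v) + B(1 - w/(R2 v)) exp(-R2 v) + w e / v
        v_rel: relative volume v/v0; e: internal energy per unit initial volume.
        Coefficients here are placeholder values in consistent (GPa-based) units."""
        v = v_rel
        return (A * (1.0 - omega / (R1 * v)) * np.exp(-R1 * v)
                + B * (1.0 - omega / (R2 * v)) * np.exp(-R2 * v)
                + omega * e / v)
    ```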

  9. Development of Parallel Code for the Alaska Tsunami Forecast Model

    NASA Astrophysics Data System (ADS)

    Bahng, B.; Knight, W. R.; Whitmore, P.

    2014-12-01

    The Alaska Tsunami Forecast Model (ATFM) is a numerical model used to forecast propagation and inundation of tsunamis generated by earthquakes and other means in both the Pacific and Atlantic Oceans. At the U.S. National Tsunami Warning Center (NTWC), the model is mainly used in a pre-computed fashion. That is, results for hundreds of hypothetical events are computed before alerts, and are accessed and calibrated with observations during tsunamis to immediately produce forecasts. ATFM uses the non-linear, depth-averaged, shallow-water equations of motion with multiply nested grids in two-way communications between domains of each parent-child pair as waves get closer to coastal waters. Even with the pre-computation the task becomes non-trivial as sub-grid resolution gets finer. Currently, the finest resolution Digital Elevation Models (DEM) used by ATFM are 1/3 arc-seconds. With a serial code, large or multiple areas of very high resolution can produce run-times that are unrealistic even in a pre-computed approach. One way to increase the model performance is code parallelization used in conjunction with a multi-processor computing environment. NTWC developers have undertaken an ATFM code-parallelization effort to streamline the creation of the pre-computed database of results with the long term aim of tsunami forecasts from source to high resolution shoreline grids in real time. Parallelization will also permit timely regeneration of the forecast model database with new DEMs; and, will make possible future inclusion of new physics such as the non-hydrostatic treatment of tsunami propagation. The purpose of our presentation is to elaborate on the parallelization approach and to show the compute speed increase on various multi-processor systems.

  10. Systematic effects in CALOR simulation code to model experimental configurations

    SciTech Connect

    Job, P.K.; Proudfoot, J. ); Handler, T. . Dept. of Physics and Astronomy); Gabriel, T.A. )

    1991-03-27

    The CALOR89 code system is being used to simulate test beam results and the design parameters of several calorimeter configurations. It has been benchmarked against the ZEUS, D0 and HELIOS data. This study identifies the systematic effects in CALOR simulation used to model the experimental configurations. Five major systematic effects are identified: the choice of high energy nuclear collision model, material composition, scintillator saturation, shower integration time, and shower containment. Quantitative estimates of these systematic effects are presented. 23 refs., 6 figs., 7 tabs.

  11. A model of PSF estimation for coded mask infrared imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Ao; Jin, Jie; Wang, Qing; Yang, Jingyu; Sun, Yi

    2014-11-01

    The point spread function (PSF) of an imaging system with a coded mask is generally acquired by practical measurement with a calibration light source. As the thermal radiation of coded masks is more severe than in visible imaging systems, which buries the modulation effects of the mask pattern, it is difficult to estimate and evaluate the performance of a mask pattern from measured results. To tackle this problem, a model for infrared imaging systems with masks is presented in this paper. The model is composed of two functional components: coded-mask imaging with ideal focused lenses and imperfect imaging with practical lenses. Ignoring the thermal radiation, the system's PSF can then be represented by a convolution of the diffraction pattern of the mask with the PSF of the practical lenses. To evaluate the performance of different mask patterns, a set of criteria is designed according to different imaging and recovery methods. Furthermore, imaging results with inclined plane waves are analyzed to obtain the variation of the PSF across the field of view. The influence of mask cell size is also analyzed to control the diffraction pattern. Numerical results show that mask patterns for direct imaging systems should have more random structures, while more periodic structures are needed in systems with image reconstruction. By adjusting the combination of random and periodic arrangement, a desired diffraction pattern can be achieved.
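
    The decomposition described in the abstract (system PSF as the mask diffraction pattern convolved with the lens PSF) can be expressed directly. The sketch below assumes both components are available as sampled 2-D arrays on the same grid and uses FFT convolution; it illustrates only the model structure.

    ```python
    import numpy as np

    def system_psf(mask_diffraction, lens_psf):
        """Model PSF of a coded-mask imager as the convolution of the mask's
        diffraction pattern with the PSF of the practical lens (2-D arrays)."""
        ny = mask_diffraction.shape[0] + lens_psf.shape[0] - 1
        nx = mask_diffraction.shape[1] + lens_psf.shape[1] - 1
        psf = np.fft.irfft2(np.fft.rfft2(mask_diffraction, (ny, nx))
                            * np.fft.rfft2(lens_psf, (ny, nx)), (ny, nx))
        return psf / psf.sum()   # normalize to unit energy
    ```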

  12. Benchmarking of computer codes and approaches for modeling exposure scenarios

    SciTech Connect

    Seitz, R.R.; Rittmann, P.D.; Wood, M.I.; Cook, J.R.

    1994-08-01

    The US Department of Energy Headquarters established a performance assessment task team (PATT) to integrate the activities of DOE sites that are preparing performance assessments for the disposal of newly generated low-level waste. The PATT chartered a subteam with the task of comparing computer codes and exposure scenarios used for dose calculations in performance assessments. This report documents the efforts of the subteam. Computer codes considered in the comparison include GENII, PATHRAE-EPA, MICROSHIELD, and ISOSHLD. Calculations were also conducted using spreadsheets to provide a comparison at the most fundamental level. Calculations and modeling approaches are compared for unit radionuclide concentrations in water and soil for the ingestion, inhalation, and external dose pathways. Over 30 tables comparing inputs and results are provided.

  13. Partially Key Distribution with Public Key Cryptosystem Based on Error Control Codes

    NASA Astrophysics Data System (ADS)

    Tavallaei, Saeed Ebadi; Falahati, Abolfazl

    Due to the low level of security in public key cryptosystems based on number theory, and fundamental difficulties such as key escrow in Public Key Infrastructure (PKI) and the need for a secure channel in ID-based cryptography, a new key distribution cryptosystem based on Error Control Codes (ECC) is proposed. This is achieved by modifying the McEliece cryptosystem. The security of the ECC cryptosystem derives from the NP-completeness of general block-code decoding. Using ECC also provides the capability of generating public keys with variable lengths, which is suitable for different applications. Given the declining security of number-theoretic cryptosystems and the growing lengths of their keys, the use of such code-based cryptosystems seems likely to become unavoidable in the future.

  14. Shared and Distributed Memory Parallel Security Analysis of Large-Scale Source Code and Binary Applications

    SciTech Connect

    Quinlan, D; Barany, G; Panas, T

    2007-08-30

    Many forms of security analysis on large scale applications can be substantially automated but the size and complexity can exceed the time and memory available on conventional desktop computers. Most commercial tools are understandably focused on such conventional desktop resources. This paper presents research work on the parallelization of security analysis of both source code and binaries within our Compass tool, which is implemented using the ROSE source-to-source open compiler infrastructure. We have focused on both shared and distributed memory parallelization of the evaluation of rules implemented as checkers for a wide range of secure programming rules, applicable to desktop machines, networks of workstations and dedicated clusters. While Compass as a tool focuses on source code analysis and reports violations of an extensible set of rules, the binary analysis work uses the exact same infrastructure but is less well developed into an equivalent final tool.

  15. High-capacity quantum key distribution using Chebyshev-map values corresponding to Lucas numbers coding

    NASA Astrophysics Data System (ADS)

    Lai, Hong; Orgun, Mehmet A.; Pieprzyk, Josef; Li, Jing; Luo, Mingxing; Xiao, Jinghua; Xiao, Fuyuan

    2016-08-01

    We propose an approach that achieves high-capacity quantum key distribution using Chebyshev-map values corresponding to Lucas numbers coding. In particular, we encode a key with the Chebyshev-map values corresponding to Lucas numbers and then use k-Chebyshev maps to achieve consecutive and flexible key expansion and apply the pre-shared classical information between Alice and Bob and fountain codes for privacy amplification to solve the security of the exchange of classical information via the classical channel. Consequently, our high-capacity protocol does not have the limitations imposed by orbital angular momentum and down-conversion bandwidths, and it meets the requirements for longer distances and lower error rates simultaneously.
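
    The k-Chebyshev maps used for key expansion have the simple closed form T_k(x) = cos(k·arccos x), whose semigroup property allows both parties to expand a shared seed consistently. The sketch below illustrates only this classical map iteration under assumed seed and orders; the Lucas-number encoding and the quantum part of the protocol are not modeled.

    ```python
    import math

    def chebyshev(k, x):
        """k-th Chebyshev polynomial on [-1, 1]: T_k(x) = cos(k * arccos(x))."""
        return math.cos(k * math.acos(x))

    def expand_key(seed, orders):
        """Expand a seed in (-1, 1) into a value stream by iterating Chebyshev
        maps of the given orders; T_m(T_n(x)) = T_{mn}(x) keeps both parties
        consistent when they agree on the orders."""
        values, x = [], seed
        for k in orders:
            x = chebyshev(k, x)
            values.append(x)
        return values

    # Example: both parties derive the same stream from a shared seed.
    print(expand_key(0.3, [2, 3, 5]))
    ```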

  16. External exposure model in the RESRAD computer code.

    SciTech Connect

    Kamboj, S.; Yu, C.; Environmental Assessment

    2002-06-01

    An external exposure model has been developed for the RESRAD computer code that provides flexibility in modeling soil contamination configurations for calculating external doses to exposed individuals. This model is based on the dose coefficients given in the U.S. Environmental Protection Agency's Federal Guidance Report No. 12 (FGR-12) and the point kernel method. It extends the applicability of FGR-12 data to include the effects of different source geometries, such as cover thickness, source thickness, source area, and shape of contaminated area of a specific site. A depth factor function was developed to express the dependence of the dose on the source thickness. A cover-and-depth factor function, derived from this depth factor function, takes into account the dependence of dose on the thickness of the source region and the thickness of the cover above the source region. To further extend the model for realistic geometries, area and shape factors were derived that depend not only on the lateral extent of the contamination, but also on source thickness, cover thickness, and radionuclides present. Results obtained with the model generally compare well with those from the Monte Carlo N-Particle transport code.

  17. Model-Based Least Squares Reconstruction of Coded Source Neutron Radiographs: Integrating the ORNL HFIR CG1D Source Model

    SciTech Connect

    Santos-Villalobos, Hector J; Gregor, Jens; Bingham, Philip R

    2014-01-01

    At present, neutron sources cannot be fabricated small and powerful enough to achieve high-resolution radiography while maintaining an adequate flux. One solution is to employ computational imaging techniques such as a Magnified Coded Source Imaging (CSI) system. A coded mask is placed between the neutron source and the object. The system resolution is increased by reducing the size of the mask holes, and the flux is increased by increasing the size of the coded mask and/or the number of holes. One limitation of such a system is that the resolution of current state-of-the-art scintillator-based detectors caps out at around 50 µm. To overcome this challenge, the coded mask and object are magnified by making the distance from the coded mask to the object much smaller than the distance from the object to the detector. In previous work, we have shown via synthetic experiments that our least squares method outperforms other methods in image quality and reconstruction precision because of the modeling of the CSI system components. However, the validation experiments were limited to simplistic neutron sources. In this work, we aim to model the flux distribution of a real neutron source and incorporate such a model in our least squares computational system. We provide a full description of the methodology used to characterize the neutron source and validate the method with synthetic experiments.

  18. LineCast: line-based distributed coding and transmission for broadcasting satellite images.

    PubMed

    Wu, Feng; Peng, Xiulian; Xu, Jizheng

    2014-03-01

    In this paper, we propose a novel coding and transmission scheme, called LineCast, for broadcasting satellite images to a large number of receivers. The proposed LineCast matches perfectly with the line scanning cameras that are widely adopted in orbit satellites to capture high-resolution images. On the sender side, each captured line is immediately compressed by a transform-domain scalar modulo quantization. Without syndrome coding, the transmission power is directly allocated to quantized coefficients by scaling the coefficients according to their distributions. Finally, the scaled coefficients are transmitted over a dense constellation. This line-based distributed scheme features low delay, low memory cost, and low complexity. On the receiver side, our proposed line-based prediction is used to generate side information from previously decoded lines, which fully utilizes the correlation among lines. The quantized coefficients are decoded by the linear least square estimator from the received data. The image line is then reconstructed by the scalar modulo dequantization using the generated side information. Since there is neither syndrome coding nor channel coding, the proposed LineCast can make a large number of receivers reach the qualities matching their channel conditions. Our theoretical analysis shows that the proposed LineCast can achieve Shannon's optimum performance by using a high-dimensional modulo-lattice quantization. Experiments on satellite images demonstrate that it achieves up to 1.9-dB gain over the state-of-the-art 2D broadcasting scheme and a gain of more than 5 dB over JPEG 2000 with forward error correction.
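
    A transform-domain scalar modulo quantizer of the kind described above can be sketched in a few lines. The step size and modulus are illustrative, and the side-information-based dequantization is shown only in its simplest form; this is not the LineCast implementation.

    ```python
    import numpy as np

    def modulo_quantize(coeffs, step, modulus):
        """Quantize coefficients and keep only their residue modulo `modulus`,
        so the transmitted symbol range stays bounded regardless of magnitude."""
        q = np.round(coeffs / step)
        return np.mod(q, modulus)

    def modulo_dequantize(residues, side_info, step, modulus):
        """Recover coefficients from modulo residues using side information
        (a prediction from previously decoded lines): pick the candidate in the
        residue's congruence class that is closest to the prediction."""
        pred_q = np.round(side_info / step)
        k = np.round((pred_q - residues) / modulus)
        return (residues + k * modulus) * step
    ```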

  19. Mitigating BeiDou Satellite-Induced Code Bias: Taking into Account the Stochastic Model of Corrections.

    PubMed

    Guo, Fei; Li, Xin; Liu, Wanke

    2016-01-01

    The BeiDou satellite-induced code biases have been confirmed to be orbit type-, frequency-, and elevation-dependent. Such code-phase divergences (code bias variations) severely affect absolute precise applications which use code measurements. To reduce their adverse effects, an improved correction model is proposed in this paper. Different from the model proposed by Wanninger and Beer (2015), more datasets (a time span of almost two years) were used to produce the correction values. More importantly, the stochastic information, i.e., the precision indexes, were given together with correction values in the improved model. However, only correction values were given while the precision indexes were completely missing in the traditional model. With the improved correction model, users may have a better understanding of their corrections, especially the uncertainty of corrections. Thus, it is helpful for refining the stochastic model of code observations. Validation tests in precise point positioning (PPP) reveal that a proper stochastic model is critical. The actual precision of the corrected code observations can be reflected in a more objective manner if the stochastic model of the corrections is taken into account. As a consequence, PPP solutions with the improved model outperform the traditional one in terms of positioning accuracy, as well as convergence speed. In addition, the Melbourne-Wübbena (MW) combination, which serves for ambiguity fixing, was verified as well. The uncorrected MW values show strong systematic variations with an amplitude of half a wide-lane cycle, which prevents precise ambiguity determination and successful ambiguity resolution. After application of the code bias correction models, the systematic variations can be greatly removed, and the resulting wide lane ambiguities are more likely to be fixed. Moreover, the code residuals show more reasonable distributions after code bias corrections with either the traditional or the improved model.
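
    In practice an elevation-dependent correction model of this kind is applied by interpolating tabulated correction values (and, in the improved model, their precisions) at the observed satellite elevation. The sketch below is a generic linear-interpolation illustration with made-up node values, not the published correction tables.

    ```python
    import numpy as np

    # Hypothetical nodes: elevation (deg) -> code bias correction (m) and sigma (m).
    ELEV_NODES = np.array([0, 10, 20, 30, 40, 50, 60, 70, 80, 90])
    CORR_NODES = np.array([-0.60, -0.45, -0.30, -0.15, 0.0, 0.10, 0.20, 0.30, 0.35, 0.40])
    SIGMA_NODES = np.array([0.10, 0.08, 0.07, 0.06, 0.05, 0.05, 0.05, 0.06, 0.07, 0.08])

    def correct_code_observation(pseudorange, elevation_deg):
        """Apply an elevation-dependent code bias correction and return the
        corrected observation plus the correction's standard deviation, which
        can be folded into the stochastic model when weighting the observation."""
        corr = np.interp(elevation_deg, ELEV_NODES, CORR_NODES)
        sigma = np.interp(elevation_deg, ELEV_NODES, SIGMA_NODES)
        return pseudorange - corr, sigma
    ```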

  20. Mitigating BeiDou Satellite-Induced Code Bias: Taking into Account the Stochastic Model of Corrections

    PubMed Central

    Guo, Fei; Li, Xin; Liu, Wanke

    2016-01-01

    The BeiDou satellite-induced code biases have been confirmed to be orbit type-, frequency-, and elevation-dependent. Such code-phase divergences (code bias variations) severely affect absolute precise applications which use code measurements. To reduce their adverse effects, an improved correction model is proposed in this paper. Different from the model proposed by Wanninger and Beer (2015), more datasets (a time span of almost two years) were used to produce the correction values. More importantly, the stochastic information, i.e., the precision indexes, were given together with correction values in the improved model. However, only correction values were given while the precision indexes were completely missing in the traditional model. With the improved correction model, users may have a better understanding of their corrections, especially the uncertainty of corrections. Thus, it is helpful for refining the stochastic model of code observations. Validation tests in precise point positioning (PPP) reveal that a proper stochastic model is critical. The actual precision of the corrected code observations can be reflected in a more objective manner if the stochastic model of the corrections is taken into account. As a consequence, PPP solutions with the improved model outperform the traditional one in terms of positioning accuracy, as well as convergence speed. In addition, the Melbourne-Wübbena (MW) combination, which serves for ambiguity fixing, was verified as well. The uncorrected MW values show strong systematic variations with an amplitude of half a wide-lane cycle, which prevents precise ambiguity determination and successful ambiguity resolution. After application of the code bias correction models, the systematic variations can be greatly removed, and the resulting wide lane ambiguities are more likely to be fixed. Moreover, the code residuals show more reasonable distributions after code bias corrections with either the traditional or the improved model. PMID not shown in the original record.

  1. Interim storage of spent and disused sealed sources: optimisation of external dose distribution in waste grids using the MCNPX code.

    PubMed

    Paiva, I; Oliveira, C; Trindade, R; Portugal, L

    2005-01-01

    Radioactive sealed sources are in use worldwide in different fields of application. When no further use is foreseen for these sources, they become spent or disused sealed sources and are subject to a specific waste management scheme. Portugal does have a Radioactive Waste Interim Storage Facility where spent or disused sealed sources are conditioned in a cement matrix inside concrete drums and following the geometrical disposition of a grid. The gamma dose values around each grid depend on the drum's enclosed activity and radionuclides considered, as well as on the drums distribution in the various layers of the grid. This work proposes a method based on the Monte Carlo simulation using the MCNPX code to estimate the best drum arrangement through the optimisation of dose distribution in a grid. Measured dose rate values at 1 m distance from the surface of the chosen optimised grid were used to validate the corresponding computational grid model. PMID:16604671

  2. MMA, A Computer Code for Multi-Model Analysis

    SciTech Connect

    Eileen P. Poeter and Mary C. Hill

    2007-08-20

    This report documents the Multi-Model Analysis (MMA) computer code. MMA can be used to evaluate results from alternative models of a single system using the same set of observations for all models. As long as the observations, the observation weighting, and system being represented are the same, the models can differ in nearly any way imaginable. For example, they may include different processes, different simulation software, different temporal definitions (for example, steady-state and transient models could be considered), and so on. The multiple models need to be calibrated by nonlinear regression. Calibration of the individual models needs to be completed before application of MMA. MMA can be used to rank models and calculate posterior model probabilities. These can be used to (1) determine the relative importance of the characteristics embodied in the alternative models, (2) calculate model-averaged parameter estimates and predictions, and (3) quantify the uncertainty of parameter estimates and predictions in a way that integrates the variations represented by the alternative models. There is a lack of consensus on what model analysis methods are best, so MMA provides four default methods. Two are based on Kullback-Leibler information, and use the AIC (Akaike Information Criterion) or AICc (second-order-bias-corrected AIC) model discrimination criteria. The other two default methods are the BIC (Bayesian Information Criterion) and the KIC (Kashyap Information Criterion) model discrimination criteria. Use of the KIC criterion is equivalent to using the maximum-likelihood Bayesian model averaging (MLBMA) method. AIC, AICc, and BIC can be derived from Frequentist or Bayesian arguments. The default methods based on Kullback-Leibler information have a number of theoretical advantages, including that they tend to favor more complicated models as more data become available than do the other methods, which makes sense in many situations.
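
    The posterior model probabilities that MMA derives from a discrimination criterion follow the standard information-criterion weighting; the sketch below shows that calculation for AICc values computed from least-squares fits. It is a generic illustration of the formulas, not MMA's implementation.

    ```python
    import numpy as np

    def aicc(n_obs, n_params, sum_sq_weighted_residuals):
        """Second-order-bias-corrected Akaike criterion for a least-squares model."""
        k = n_params + 1                       # +1 for the error variance
        aic = n_obs * np.log(sum_sq_weighted_residuals / n_obs) + 2 * k
        return aic + 2 * k * (k + 1) / (n_obs - k - 1)

    def model_weights(criterion_values):
        """Turn criterion values (smaller = better) into posterior model probabilities."""
        c = np.asarray(criterion_values, dtype=float)
        delta = c - c.min()
        w = np.exp(-0.5 * delta)
        return w / w.sum()

    # Example: three alternative models calibrated to the same observations.
    print(model_weights([aicc(40, 4, 12.3), aicc(40, 6, 10.1), aicc(40, 9, 9.8)]))
    ```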

  3. Physics models in the toroidal transport code PROCTR

    SciTech Connect

    Howe, H.C.

    1990-08-01

    The physics models that are contained in the toroidal transport code PROCTR are described in detail. Time- and space-dependent models are included for the plasma hydrogenic-ion, helium, and impurity densities, the electron and ion temperatures, the toroidal rotation velocity, and the toroidal current profile. Time- and depth-dependent models for the trapped and mobile hydrogenic particle concentrations in the wall and a time-dependent point model for the number of particles in the limiter are also included. Time-dependent models for neutral particle transport, neutral beam deposition and thermalization, fusion heating, impurity radiation, pellet injection, and the radial electric potential are included and recalculated periodically as the time-dependent models evolve. The plasma solution is obtained either in simple flux coordinates, where the radial shift of each elliptical, toroidal flux surface is included to maintain an approximate pressure equilibrium, or in general three-dimensional torsatron coordinates represented by series of helical harmonics. The detailed coupling of the plasma, scrape-off layer, limiter, and wall models through the neutral transport model makes PROCTR especially suited for modeling of recycling and particle control in toroidal plasmas. The model may also be used in a steady-state profile analysis mode for studying energy and particle balances starting with measured plasma profiles.

  4. MMA, A Computer Code for Multi-Model Analysis

    USGS Publications Warehouse

    Poeter, Eileen P.; Hill, Mary C.

    2007-01-01

    This report documents the Multi-Model Analysis (MMA) computer code. MMA can be used to evaluate results from alternative models of a single system using the same set of observations for all models. As long as the observations, the observation weighting, and system being represented are the same, the models can differ in nearly any way imaginable. For example, they may include different processes, different simulation software, different temporal definitions (for example, steady-state and transient models could be considered), and so on. The multiple models need to be calibrated by nonlinear regression. Calibration of the individual models needs to be completed before application of MMA. MMA can be used to rank models and calculate posterior model probabilities. These can be used to (1) determine the relative importance of the characteristics embodied in the alternative models, (2) calculate model-averaged parameter estimates and predictions, and (3) quantify the uncertainty of parameter estimates and predictions in a way that integrates the variations represented by the alternative models. There is a lack of consensus on what model analysis methods are best, so MMA provides four default methods. Two are based on Kullback-Leibler information, and use the AIC (Akaike Information Criterion) or AICc (second-order-bias-corrected AIC) model discrimination criteria. The other two default methods are the BIC (Bayesian Information Criterion) and the KIC (Kashyap Information Criterion) model discrimination criteria. Use of the KIC criterion is equivalent to using the maximum-likelihood Bayesian model averaging (MLBMA) method. AIC, AICc, and BIC can be derived from Frequentist or Bayesian arguments. The default methods based on Kullback-Leibler information have a number of theoretical advantages, including that they tend to favor more complicated models as more data become available than do the other methods, which makes sense in many situations. Many applications of MMA will

  5. Complete Distributed Hyper-Entangled-Bell-State Analysis and Quantum Super Dense Coding

    NASA Astrophysics Data System (ADS)

    Zheng, Chunhong; Gu, Yongjian; Li, Wendong; Wang, Zhaoming; Zhang, Jiying

    2016-02-01

    We propose a protocol to implement the distributed hyper-entangled-Bell-state analysis (HBSA) for photonic qubits with weak cross-Kerr nonlinearities, QND photon-number-resolving detection, and some linear optical elements. The distinct feature of our scheme is that the BSA for two different degrees of freedom can be implemented deterministically and nondestructively. Based on the present HBSA, we achieve quantum super dense coding with double information capacity, which makes our scheme more significant for long-distance quantum communication.

  6. The data redundancy method for distributed Storage based on erasure code

    NASA Astrophysics Data System (ADS)

    Li, Zhi; Pan, Chao

    2015-12-01

    This paper presents a data redundancy method for distributed storage that applies erasure codes to the storage system. The method involves key technologies such as data reading and writing, failure detection and node redirection, and restoration algorithms. According to the theoretical analysis, this method can efficiently improve the utilization of storage space as well as enhance the reliability and availability of a storage system. It can also achieve the same data availability at a lower redundancy degree than many other storage methods. A quantitative analysis of the method's performance is also given in the paper.
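
    A full Reed-Solomon style erasure code is beyond a short sketch, but the core idea, storing parity so that a lost fragment can be rebuilt from the survivors, can be shown with simple XOR parity. This is a single-failure illustration under assumed equal-length fragments, not the method of the paper.

    ```python
    from functools import reduce

    def encode_with_parity(fragments):
        """fragments: list of equal-length byte strings stored on different nodes.
        Returns the parity fragment (bytewise XOR of all data fragments)."""
        return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), fragments)

    def rebuild_missing(surviving_fragments, parity):
        """Recover one lost data fragment from the survivors plus the parity."""
        return encode_with_parity(list(surviving_fragments) + [parity])

    # Example: three data fragments on three nodes, parity on a fourth node.
    data = [b"alpha_bl", b"bravo_bl", b"charlie_"]
    parity = encode_with_parity(data)
    assert rebuild_missing([data[0], data[2]], parity) == data[1]
    ```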

  7. The Overlap Model: A Model of Letter Position Coding

    ERIC Educational Resources Information Center

    Gomez, Pablo; Ratcliff, Roger; Perea, Manuel

    2008-01-01

    Recent research has shown that letter identity and letter position are not integral perceptual dimensions (e.g., jugde primes judge in word-recognition experiments). Most comprehensive computational models of visual word recognition (e.g., the interactive activation model, J. L. McClelland & D. E. Rumelhart, 1981, and its successors) assume that…

  8. Code System for the Analysis of Component Failure Data with a Compound Statistical Model.

    2000-08-22

    Version 00 Two separate but similar Fortran computer codes have been developed for the analysis of component failure data with a compound statistical model: SAFE-D and SAFE-R. The SAFE-D code (Statistical Analysis for Failure Estimation-failure-on-Demand) analyzes data which give the observed number of failures (failure to respond properly) in a specified number of demands for several similar components that should change their condition upon demand. The second program, SAFE-R (Statistical Analysis for Failure Estimation-failure Rate), is to be used to analyze normally operating components for which the observed number of failures in a specified operating time is given. In both these codes the failure parameter (failure probability per demand for SAFE-D or failure rate for SAFE-R) may be assumed equal for all similar components (the homogeneous failure model) or may be assumed to be a random variable distributed among similar components according to a prior distribution (the heterogeneous or compound failure model). Related information can be found at the developer's web site: http://www.mne.ksu.edu/~jks/.
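
    The difference between the homogeneous and the compound failure model can be illustrated with the failure-on-demand case: the homogeneous model pools all components into one estimate, while the compound model lets the per-component failure probability vary according to a prior distribution (a beta prior is an assumed, common choice here). The sketch below is a generic illustration, not the SAFE-D algorithm.

    ```python
    import numpy as np

    def homogeneous_estimate(failures, demands):
        """Pooled failure probability per demand, assuming all components share one p."""
        return sum(failures) / sum(demands)

    def compound_moment_fit(failures, demands):
        """Rough method-of-moments fit of a beta(a, b) prior describing how the
        failure probability varies across components (the compound model).
        Valid only when the sample variance is smaller than mean*(1-mean)."""
        p = np.array(failures, dtype=float) / np.array(demands, dtype=float)
        mean, var = p.mean(), p.var(ddof=1)
        common = mean * (1.0 - mean) / var - 1.0
        return mean * common, (1.0 - mean) * common   # (a, b)

    # Example: observed failures/demands for four similar components.
    fails, dems = [1, 0, 3, 2], [120, 150, 130, 140]
    print(homogeneous_estimate(fails, dems), compound_moment_fit(fails, dems))
    ```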

  9. EMPIRE: Nuclear Reaction Model Code System for Data Evaluation

    NASA Astrophysics Data System (ADS)

    Herman, M.; Capote, R.; Carlson, B. V.; Obložinský, P.; Sin, M.; Trkov, A.; Wienke, H.; Zerkin, V.

    2007-12-01

    EMPIRE is a modular system of nuclear reaction codes, comprising various nuclear models, and designed for calculations over a broad range of energies and incident particles. A projectile can be a neutron, proton, any ion (including heavy-ions) or a photon. The energy range extends from the beginning of the unresolved resonance region for neutron-induced reactions (∽ keV) and goes up to several hundred MeV for heavy-ion induced reactions. The code accounts for the major nuclear reaction mechanisms, including direct, pre-equilibrium and compound nucleus ones. Direct reactions are described by a generalized optical model (ECIS03) or by the simplified coupled-channels approach (CCFUS). The pre-equilibrium mechanism can be treated by a deformation dependent multi-step direct (ORION + TRISTAN) model, by a NVWY multi-step compound one or by either a pre-equilibrium exciton model with cluster emission (PCROSS) or by another with full angular momentum coupling (DEGAS). Finally, the compound nucleus decay is described by the full featured Hauser-Feshbach model with γ-cascade and width-fluctuations. Advanced treatment of the fission channel takes into account transmission through a multiple-humped fission barrier with absorption in the wells. The fission probability is derived in the WKB approximation within the optical model of fission. Several options for nuclear level densities include the EMPIRE-specific approach, which accounts for the effects of the dynamic deformation of a fast rotating nucleus, the classical Gilbert-Cameron approach and pre-calculated tables obtained with a microscopic model based on HFB single-particle level schemes with collective enhancement. A comprehensive library of input parameters covers nuclear masses, optical model parameters, ground state deformations, discrete levels and decay schemes, level densities, fission barriers, moments of inertia and γ-ray strength functions. The results can be converted into ENDF-6 formatted files using the

  10. First Transport Code Simulations using the TGLF Model

    NASA Astrophysics Data System (ADS)

    Kinsey, J. E.

    2007-11-01

    The first transport code simulations using the newly developed TGLF theory-based transport model [1,2] are presented. TGLF has comprehensive physics to approximate the turbulent transport due to drift-ballooning modes in tokamaks. The TGLF model is a next generation gyro-Landau-fluid model that includes several recent advances that remove the limitations of its predecessor, GLF23. The model solves for the linear eigenmodes of trapped ion and electron modes (TIM, TEM), ion and electron temperature gradient (ITG, ETG) modes and finite beta kinetic ballooning (KB) modes in either shifted circle or shaped geometry [1]. A database of over 400 nonlinear GYRO gyrokinetic simulations has been created [3]. A subset of 140 simulations including Miller shaped geometry has been used to find a model for the saturation levels. Using a simple quasilinear (QL) saturation rule, we find remarkable agreement with the energy and particle fluxes from a wide variety of GYRO simulations for both shaped or circular geometry and also for low aspect ratio. Using this new QL saturation rule along with a new ExB shear quench rule for shaped geometry, we predict the density, temperature, and toroidal rotation profiles in a transport code and compare the results against experimental data in the ITPA Profile Database. We examine the impact of the improved electron physics in the model and the role of elongation and triangularity on the predicted profiles and compare to the results previously obtained using the GLF23 model. [1] G.M. Staebler, J.E. Kinsey, and R.E. Waltz, Phys. Plasmas 12, 102508 (2005). [2] G.M. Staebler, J.E. Kinsey, and R.E. Waltz, to appear in Phys. Plasmas, May(2007). [3] The GYRO database is documented at fusion.gat.com/theory/gyro.

  11. Comprehensive Nuclear Model Code, Nucleons, Ions, Induced Cross-Sections

    2002-09-27

    EMPIRE-II is a flexible code for the calculation of nuclear reactions in the framework of combined optical, Multistep Direct (TUL), Multistep Compound (NVWY) and statistical (Hauser-Feshbach) models. The incident particle can be a nucleon or any nucleus (Heavy Ion). Isomer ratios, residue production cross sections and emission spectra for neutrons, protons, alpha-particles, gamma-rays, and one type of Light Ion can be calculated. The energy range starts just above the resonance region for neutron-induced reactions and extends up to several hundreds of MeV for Heavy Ion induced reactions.

  12. Overview of the Graphical User Interface for the GERM Code (GCR Event-Based Risk Model

    NASA Technical Reports Server (NTRS)

    Kim, Myung-Hee; Cucinotta, Francis A.

    2010-01-01

    The descriptions of biophysical events from heavy ions are of interest in radiobiology, cancer therapy, and space exploration. The biophysical description of the passage of heavy ions in tissue and shielding materials is best described by a stochastic approach that includes both ion track structure and nuclear interactions. A new computer model called the GCR Event-based Risk Model (GERM) code was developed for the description of biophysical events from heavy ion beams at the NASA Space Radiation Laboratory (NSRL). The GERM code calculates basic physical and biophysical quantities of high-energy protons and heavy ions that have been studied at NSRL for the purpose of simulating space radiobiological effects. For mono-energetic beams, the code evaluates the linear-energy transfer (LET), range (R), and absorption in tissue equivalent material for a given Charge (Z), Mass Number (A) and kinetic energy (E) of an ion. In addition, a set of biophysical properties are evaluated such as the Poisson distribution of ion or delta-ray hits for a specified cellular area, cell survival curves, and mutation and tumor probabilities. The GERM code also calculates the radiation transport of the beam line for either a fixed number of user-specified depths or at multiple positions along the Bragg curve of the particle. The contributions from primary ion and nuclear secondaries are evaluated. The GERM code accounts for the major nuclear interaction processes of importance for describing heavy ion beams, including nuclear fragmentation, elastic scattering, and knockout-cascade processes by using the quantum multiple scattering fragmentation (QMSFRG) model. The QMSFRG model has been shown to be in excellent agreement with available experimental data for nuclear fragmentation cross sections, and has been used by the GERM code for application to thick target experiments. The GERM code provides scientists participating in NSRL experiments with the data needed for the interpretation of their
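
    One of the biophysical quantities mentioned above, the Poisson distribution of ion hits for a specified cellular area, is easy to reproduce: if a beam delivers a fluence F (particles per square micron) and a cell presents an area A (square microns), the expected number of direct traversals is lambda = F*A and the hit count follows a Poisson law. The Python sketch below uses made-up numbers, not NSRL data:

```python
import math

def hit_probabilities(fluence_per_um2, area_um2, k_max=5):
    """P(exactly k ion traversals) for k = 0..k_max under Poisson statistics."""
    lam = fluence_per_um2 * area_um2                 # mean number of traversals
    return [math.exp(-lam) * lam**k / math.factorial(k) for k in range(k_max + 1)]

probs = hit_probabilities(fluence_per_um2=0.02, area_um2=100.0)   # lambda = 2
for k, p in enumerate(probs):
    print(f"P({k} hits) = {p:.3f}")
```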

  13. The implementation of an aeronautical CFD flow code onto distributed memory parallel systems

    NASA Astrophysics Data System (ADS)

    Ierotheou, C. S.; Forsey, C. R.; Leatham, M.

    2000-04-01

    The parallelization of an industrially important in-house computational fluid dynamics (CFD) code for calculating the airflow over complex aircraft configurations using the Euler or Navier-Stokes equations is presented. The code discussed is the flow solver module of the SAUNA CFD suite. This suite uses a novel grid system that may include block-structured hexahedral or pyramidal grids, unstructured tetrahedral grids or a hybrid combination of both. To assist in the rapid convergence to a solution, a number of convergence acceleration techniques are employed including implicit residual smoothing and a multigrid full approximation storage scheme (FAS). Key features of the parallelization approach are the use of domain decomposition and encapsulated message passing to enable the execution in parallel using a single programme multiple data (SPMD) paradigm. In the case where a hybrid grid is used, a unified grid partitioning scheme is employed to define the decomposition of the mesh. The parallel code has been tested using both structured and hybrid grids on a number of different distributed memory parallel systems and is now routinely used to perform industrial scale aeronautical simulations. Copyright
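
    The domain-decomposition-plus-message-passing pattern described above can be sketched in a few lines. The Python toy below mimics, serially, what each SPMD rank would do for a 1-D grid: own a contiguous block of cells padded with one halo (ghost) cell on each side, exchange halo values with its neighbours, and then apply a purely local update. It is a schematic of the pattern only, not the SAUNA flow solver:

```python
import numpy as np

n_cells, n_ranks = 16, 4
global_u = np.linspace(0.0, 1.0, n_cells)
block_len = n_cells // n_ranks

# Each "rank" owns a block of cells, padded with one ghost cell on each side.
blocks = [np.empty(block_len + 2) for _ in range(n_ranks)]
for r, blk in enumerate(blocks):
    blk[1:-1] = global_u[r * block_len:(r + 1) * block_len]

# Halo exchange: with MPI each rank would send/receive edge values with its
# neighbours; here we simply copy them between the serially stored blocks.
for r, blk in enumerate(blocks):
    blk[0] = blocks[r - 1][-2] if r > 0 else blk[1]               # left ghost
    blk[-1] = blocks[r + 1][1] if r < n_ranks - 1 else blk[-2]    # right ghost

# Local update (3-point smoothing) now needs only owned cells plus ghosts.
smoothed = [0.5 * blk[1:-1] + 0.25 * (blk[:-2] + blk[2:]) for blk in blocks]
print(np.concatenate(smoothed))
```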

  14. A novel method involving Matlab coding to determine the distribution of a collimated ionizing radiation beam

    NASA Astrophysics Data System (ADS)

    Ioan, M.-R.

    2016-08-01

    In experiments involving ionizing radiation, precise knowledge of the parameters involved is a very important task. Some of these experiments use electromagnetic ionizing radiation such as gamma rays and X rays; others use energetic charged or uncharged particles of small dimensions such as protons, electrons and neutrons, and in other cases larger accelerated particles such as helium or deuterium nuclei. In all these cases the beam used to hit the exposed target must first be collimated and precisely characterized. In this paper, a novel method involving Matlab coding is proposed to determine the distribution of the collimated beam. The method was implemented by placing Pyrex glass test samples in the beam whose distribution and dimensions are to be determined, taking high-quality pictures of them, and then digitally processing the resulting images. The method also yields information regarding the doses absorbed in the exposed sample volume.

  15. A numerical code for a three-dimensional magnetospheric MHD equilibrium model

    NASA Technical Reports Server (NTRS)

    Voigt, G.-H.

    1992-01-01

    Two dimensional and three dimensional MHD equilibrium models were begun for Earth's magnetosphere. The original proposal was motivated by realizing that global, purely data based models of Earth's magnetosphere are inadequate for studying the underlying plasma physical principles according to which the magnetosphere evolves on the quasi-static convection time scale. Complex numerical grid generation schemes were established for a 3-D Poisson solver, and a robust Grad-Shafranov solver was coded for high beta MHD equilibria. Thus, the effects were calculated of both the magnetopause geometry and boundary conditions on the magnetotail current distribution.

  16. Comparison of different methods used in integral codes to model coagulation of aerosols

    NASA Astrophysics Data System (ADS)

    Beketov, A. I.; Sorokin, A. A.; Alipchenkov, V. M.; Mosunova, N. A.

    2013-09-01

    The methods for calculating coagulation of particles in the carrying phase that are used in the integral codes SOCRAT, ASTEC, and MELCOR, as well as the Hounslow and Jacobson methods used to model aerosol processes in the chemical industry and in atmospheric investigations are compared on test problems and against experimental results in terms of their effectiveness and accuracy. It is shown that all methods are characterized by a significant error in modeling the distribution function for micrometer particles if calculations are performed using rather "coarse" spectra of particle sizes, namely, when the ratio of the volumes of particles from neighboring fractions is equal to or greater than two. With reference to the problems considered, the Hounslow method and the method applied in the aerosol module used in the ASTEC code are the most efficient ones for carrying out calculations.

  17. Development of a Model and Computer Code to Describe Solar Grade Silicon Production Processes

    NASA Technical Reports Server (NTRS)

    Srivastava, R.; Gould, R. K.

    1979-01-01

    The program aims at developing mathematical models and computer codes based on these models, which allow prediction of the product distribution in chemical reactors for converting gaseous silicon compounds to condensed-phase silicon. The major interest is in collecting silicon as a liquid on the reactor walls and other collection surfaces. Two reactor systems are of major interest, a SiCl4/Na reactor in which Si(l) is collected on the flow tube reactor walls and a reactor in which Si(l) droplets formed by the SiCl4/Na reaction are collected by a jet impingement method. During this quarter the following tasks were accomplished: (1) particle deposition routines were added to the boundary layer code; and (2) Si droplet sizes in SiCl4/Na reactors at temperatures below the dew point of Si are being calculated.

  18. Sequence Prediction With Sparse Distributed Hyperdimensional Coding Applied to the Analysis of Mobile Phone Use Patterns.

    PubMed

    Rasanen, Okko J; Saarinen, Jukka P

    2016-09-01

    Modeling and prediction of temporal sequences is central to many signal processing and machine learning applications. Prediction based on sequence history is typically performed using parametric models, such as fixed-order Markov chains ( n -grams), approximations of high-order Markov processes, such as mixed-order Markov models or mixtures of lagged bigram models, or with other machine learning techniques. This paper presents a method for sequence prediction based on sparse hyperdimensional coding of the sequence structure and describes how higher order temporal structures can be utilized in sparse coding in a balanced manner. The method is purely incremental, allowing real-time online learning and prediction with limited computational resources. Experiments with prediction of mobile phone use patterns, including the prediction of the next launched application, the next GPS location of the user, and the next artist played with the phone media player, reveal that the proposed method is able to capture the relevant variable-order structure from the sequences. In comparison with the n -grams and the mixed-order Markov models, the sparse hyperdimensional predictor clearly outperforms its peers in terms of unweighted average recall and achieves an equal level of weighted average recall as the mixed-order Markov chain but without the batch training of the mixed-order model.
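
    A minimal sketch of the hyperdimensional idea (illustrative only; it does not reproduce the authors' variable-order method): each symbol is assigned a fixed random bipolar hypervector, the recent history is encoded by permuting (rolling) each symbol's vector according to its lag and summing, and prediction picks the symbol whose accumulated context memory is most similar to the current context vector.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 2000                                          # hypervector dimensionality
symbols = list("ABC")
item = {s: rng.choice([-1, 1], size=D) for s in symbols}   # random bipolar codes

def context(history, order=2):
    """Encode the last `order` symbols; the lag-j symbol is rolled by j+1."""
    recent = list(reversed(history[-order:]))
    return np.sum([np.roll(item[s], j + 1) for j, s in enumerate(recent)], axis=0)

# "Training": for each observed next symbol, accumulate the preceding context.
memory = {s: np.zeros(D) for s in symbols}
sequence = list("ABCABCABCAB")
for t in range(2, len(sequence)):
    memory[sequence[t]] += context(sequence[:t])

# Predict the symbol that follows "...A B" by cosine similarity to the memories.
q = context(list("AB"))
score = {s: float(q @ m) / (np.linalg.norm(q) * np.linalg.norm(m) + 1e-12)
         for s, m in memory.items()}
print(max(score, key=score.get))                  # expected: "C"
```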

  19. Kinetic models of gene expression including non-coding RNAs

    NASA Astrophysics Data System (ADS)

    Zhdanov, Vladimir P.

    2011-03-01

    In cells, genes are transcribed into mRNAs, and the latter are translated into proteins. Due to the feedbacks between these processes, the kinetics of gene expression may be complex even in the simplest genetic networks. The corresponding models have already been reviewed in the literature. A new avenue in this field is related to the recognition that the conventional scenario of gene expression is fully applicable only to prokaryotes whose genomes consist of tightly packed protein-coding sequences. In eukaryotic cells, in contrast, such sequences are relatively rare, and the rest of the genome includes numerous transcript units representing non-coding RNAs (ncRNAs). During the past decade, it has become clear that such RNAs play a crucial role in gene expression and accordingly influence a multitude of cellular processes both in the normal state and during diseases. The numerous biological functions of ncRNAs are based primarily on their abilities to silence genes via pairing with a target mRNA and subsequently preventing its translation or facilitating degradation of the mRNA-ncRNA complex. Many other abilities of ncRNAs have been discovered as well. Our review is focused on the available kinetic models describing the mRNA, ncRNA and protein interplay. In particular, we systematically present the simplest models without kinetic feedbacks, models containing feedbacks and predicting bistability and oscillations in simple genetic networks, and models describing the effect of ncRNAs on complex genetic networks. Mathematically, the presentation is based primarily on temporal mean-field kinetic equations. The stochastic and spatio-temporal effects are also briefly discussed.
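
    As a concrete illustration of the simplest class of models discussed (mean-field, no feedback), the Python sketch below integrates a generic mRNA-ncRNA-protein scheme in which the ncRNA pairs with the mRNA and the duplex is degraded. The rate constants are arbitrary and the equations are a generic textbook form, not a specific model from the review:

```python
# dm/dt = k_m - d_m*m - k_x*m*s    (mRNA: synthesis, decay, pairing with ncRNA)
# ds/dt = k_s - d_s*s - k_x*m*s    (ncRNA: synthesis, decay, pairing with mRNA)
# dp/dt = k_t*m - d_p*p            (protein translated from free mRNA)
k_m, k_s, k_t = 1.0, 0.8, 2.0
d_m, d_s, d_p, k_x = 0.1, 0.1, 0.05, 0.5

m = s = p = 0.0
dt, t_end = 0.01, 200.0
for _ in range(int(t_end / dt)):                  # simple forward-Euler integration
    dm = k_m - d_m * m - k_x * m * s
    ds = k_s - d_s * s - k_x * m * s
    dp = k_t * m - d_p * p
    m, s, p = m + dt * dm, s + dt * ds, p + dt * dp

print(f"late-time levels: mRNA={m:.2f}, ncRNA={s:.2f}, protein={p:.2f}")
```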

  20. Physicochemical analog for modeling superimposed and coded memories

    NASA Astrophysics Data System (ADS)

    Ensanian, Minas

    1992-07-01

    The mammalian brain is distinguished by a life-time of memories being stored within the same general region of physicochemical space, and having two extraordinary features. First, memories to varying degrees are superimposed, as well as coded. Second, instantaneous recall of past events can often be affected by relatively simple, and seemingly unrelated sensory clues. For the purposes of attempting to mathematically model such complex behavior, and for gaining additional insights, it would be highly advantageous to be able to simulate or mimic similar behavior in a nonbiological entity where some analogical parameters of interest can reasonably be controlled. It has recently been discovered that in nonlinear accumulative metal fatigue memories (related to mechanical deformation) can be superimposed and coded in the crystal lattice, and that memory, that is, the total number of stress cycles can be recalled (determined) by scanning not the surfaces but the `edges' of the objects. The new scanning technique known as electrotopography (ETG) now makes the state space modeling of metallic networks possible. The author provides an overview of the new field and outlines the areas that are of immediate interest to the science of artificial neural networks.

  1. New high burnup fuel models for NRC's licensing audit code, FRAPCON

    SciTech Connect

    Lanning, D.D.; Beyer, C.E.; Painter, C.L.

    1996-03-01

    Fuel behavior models have recently been updated within the U.S. Nuclear Regulatory Commission steady-state FRAPCON code used for auditing of fuel vendor/utility codes and analyses. These modeling updates have concentrated on providing a best estimate prediction of steady-state fuel behavior up to the maximum burnup levels of current data (60 to 65 GWd/MTU rod-average). A decade has passed since these models were last updated. Currently, some U.S. utilities and fuel vendors are requesting approval for rod-average burnups greater than 60 GWd/MTU; however, until these recent updates the NRC did not have valid fuel performance models at these higher burnup levels. Pacific Northwest Laboratory (PNL) has reviewed 15 separate effects models within the FRAPCON fuel performance code (References 1 and 2) and identified nine models that needed updating for improved prediction of fuel behavior at high burnup levels. The six separate effects models not updated were the cladding thermal properties, cladding thermal expansion, cladding creepdown, fuel specific heat, fuel thermal expansion and open gap conductance. Comparison of these models to the currently available data indicates that these models still adequately predict the data within data uncertainties. The nine models identified as needing improvement for predicting high-burnup behavior are fission gas release (FGR), fuel thermal conductivity (accounting for both high burnup effects and burnable poison additions), fuel swelling, fuel relocation, radial power distribution, fuel-cladding contact gap conductance, cladding corrosion, cladding mechanical properties and cladding axial growth. Each of the updated models will be described in the following sections and the model predictions will be compared to currently available high burnup data.

  2. Modelling Documents with Multiple Poisson Distributions.

    ERIC Educational Resources Information Center

    Margulis, Eugene L.

    1993-01-01

    Reports on the validity of the Multiple Poisson (nP) model of word distribution in full-text document collections. A practical algorithm for determining whether a certain word is distributed according to an nP distribution and the results of a test of this algorithm in three different document collections are described. (14 references) (KRN)
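
    For reference, the nP model treats the within-document frequency of a word as a finite mixture of Poisson distributions; a minimal Python sketch of the mixture probability mass function, with made-up parameters, is:

```python
import math

def np_model_pmf(x, weights, lambdas):
    """P(a word occurs x times in a document) under a mixture of Poissons."""
    return sum(w * math.exp(-lam) * lam**x / math.factorial(x)
               for w, lam in zip(weights, lambdas))

# Two-Poisson example: most documents barely use the word, a few use it heavily.
weights, lambdas = [0.9, 0.1], [0.1, 5.0]
for x in range(6):
    print(x, round(np_model_pmf(x, weights, lambdas), 4))
```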

  3. Modeling Constituent Redistribution in U-Pu-Zr Metallic Fuel Using the Advanced Fuel Performance Code BISON

    SciTech Connect

    Douglas Porter; Steve Hayes; Various

    2014-06-01

    The Advanced Fuels Campaign (AFC) metallic fuels currently being tested have higher zirconium and plutonium concentrations than those tested in the past in EBR reactors. Current metal fuel performance codes have limitations and deficiencies in predicting AFC fuel performance, particularly in the modeling of constituent distribution. No fully validated code exists due to sparse data and unknown modeling parameters. Our primary objective is to develop an initial analysis tool by incorporating state-of-the-art knowledge, constitutive models and properties of AFC metal fuels into the MOOSE/BISON (1) framework in order to analyze AFC metallic fuel tests.

  4. Surface and aerosol models for use in radiative transfer codes

    NASA Astrophysics Data System (ADS)

    Hart, Quinn J.

    1991-08-01

    Absolute reflectance-based radiometric calibrations of Landsat-5 Thematic Mapper (TM) are improved with the inclusion of a method to invert optical-depth measurements to obtain aerosol-particle size distributions, and a non-Lambertian surface reflectance model. The inverted size distributions can predict radiances varying from the previously assumed Junge distributions by as much as 5 percent, though the reduction in the estimated error is less than one percent. Comparison with measured diffuse-to-global ratios shows that neither distribution consistently predicts the ratio accurately, and this is shown to be a large contributor to calibration uncertainties. An empirical model for the surface reflectance of White Sands, using a two-degree polynomial fit as a function of scattering angle, was employed. The model reduced estimated errors in radiance predictions by up to one percent. Satellite calibrations dating from October 1984 were reprocessed using the improved methods, and linear estimates of satellite counts per unit radiance versus time since launch were determined, which showed a decrease over time for the first four bands.

  5. A new computer code for discrete fracture network modelling

    NASA Astrophysics Data System (ADS)

    Xu, Chaoshui; Dowd, Peter

    2010-03-01

    The authors describe a comprehensive software package for two- and three-dimensional stochastic rock fracture simulation using marked point processes. Fracture locations can be modelled by a Poisson, a non-homogeneous, a cluster or a Cox point process; fracture geometries and properties are modelled by their respective probability distributions. Virtual sampling tools such as plane, window and scanline sampling are included in the software together with a comprehensive set of statistical tools including histogram analysis, probability plots, rose diagrams and hemispherical projections. The paper describes in detail the theoretical basis of the implementation and provides a case study in rock fracture modelling to demonstrate the application of the software.
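
    A stripped-down two-dimensional version of the marked point process idea (purely illustrative; the package described above supports far richer processes and 3-D geometry): fracture centres follow a homogeneous Poisson process, and each fracture is "marked" with a trace length and an orientation drawn from chosen probability distributions.

```python
import numpy as np

rng = np.random.default_rng(42)
side, intensity = 10.0, 0.5                   # 10 x 10 domain, fractures per unit area

# Homogeneous Poisson point process for the fracture centres.
n = rng.poisson(intensity * side * side)
centres = rng.uniform(0.0, side, size=(n, 2))

# Marks: lognormal trace lengths, von Mises orientations clustered around 30 deg.
lengths = rng.lognormal(mean=0.0, sigma=0.5, size=n)
angles = rng.vonmises(mu=np.deg2rad(30.0), kappa=4.0, size=n)

# Represent each fracture as a segment (x1, y1, x2, y2) centred on its point.
half = 0.5 * np.column_stack([lengths * np.cos(angles), lengths * np.sin(angles)])
segments = np.column_stack([centres - half, centres + half])
print(f"{n} fractures generated; first segment: {segments[0]}")
```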

  6. New TVD Hydro Code for Modeling Disk-Planet Interactions

    NASA Astrophysics Data System (ADS)

    Mudryk, Lawrence; Murray, Norman

    2004-06-01

    We present test simulations of a TVD hydrodynamical code designed to require very few calculations per time step. The code is to be used to perform simulations of proto-planet interactions within gas disks in early solar systems.

  7. Distance distribution in configuration-model networks

    NASA Astrophysics Data System (ADS)

    Nitzan, Mor; Katzav, Eytan; Kühn, Reimer; Biham, Ofer

    2016-06-01

    We present analytical results for the distribution of shortest path lengths between random pairs of nodes in configuration model networks. The results, which are based on recursion equations, are shown to be in good agreement with numerical simulations for networks with degenerate, binomial, and power-law degree distributions. The mean, mode, and variance of the distribution of shortest path lengths are also evaluated. These results provide expressions for central measures and dispersion measures of the distribution of shortest path lengths in terms of moments of the degree distribution, illuminating the connection between the two distributions.
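
    The quantity analysed above is also easy to estimate numerically. The Python sketch below (assuming the networkx package is available) samples a configuration-model network with a binomial-like degree sequence and tallies the empirical distribution of shortest path lengths over random node pairs:

```python
import collections
import random

import networkx as nx

random.seed(0)
n, p = 1000, 0.004                            # nodes and per-edge probability
degrees = [sum(random.random() < p for _ in range(n)) for _ in range(n)]
if sum(degrees) % 2:                          # configuration model needs an even degree sum
    degrees[0] += 1

G = nx.configuration_model(degrees, seed=0)
G = nx.Graph(G)                               # collapse parallel edges
G.remove_edges_from(nx.selfloop_edges(G))

# Empirical distribution of shortest path lengths over random reachable pairs.
hist = collections.Counter()
nodes = list(G.nodes)
for _ in range(2000):
    u, v = random.sample(nodes, 2)
    if nx.has_path(G, u, v):
        hist[nx.shortest_path_length(G, u, v)] += 1

for length in sorted(hist):
    print(length, hist[length])
```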

  8. Distance distribution in configuration-model networks.

    PubMed

    Nitzan, Mor; Katzav, Eytan; Kühn, Reimer; Biham, Ofer

    2016-06-01

    We present analytical results for the distribution of shortest path lengths between random pairs of nodes in configuration model networks. The results, which are based on recursion equations, are shown to be in good agreement with numerical simulations for networks with degenerate, binomial, and power-law degree distributions. The mean, mode, and variance of the distribution of shortest path lengths are also evaluated. These results provide expressions for central measures and dispersion measures of the distribution of shortest path lengths in terms of moments of the degree distribution, illuminating the connection between the two distributions. PMID:27415282

  9. Torus mapper: a code for dynamical models of galaxies

    NASA Astrophysics Data System (ADS)

    Binney, James; McMillan, Paul J.

    2016-02-01

    We present a freely downloadable software package for modelling the dynamics of galaxies, which we call the Torus Mapper (TM). The package is based around `torus mapping', which is a non-perturbative technique for creating orbital tori for specified values of the action integrals. Given an orbital torus and a star's position at a reference time, one can compute its position at any other time, no matter how remote. One can also compute the velocities with which the star will pass through any given point and the contribution it will make to the time-averaged density there. A system of angle-action coordinates for the given potential can be created by foliating phase space with orbital tori. Such a foliation is facilitated by the ability of TM to create tori by interpolating on a grid of tori. We summarize the advantages of using TM rather than a standard time-stepper to create orbits, and give segments of code that illustrate applications of TM in several contexts, including setting up initial conditions for an N-body simulation. We examine the precision of the orbital tori created by TM and the behaviour of the code when orbits become trapped by a resonance.

  10. On-the-fly generation of differential resonance scattering probability distribution functions for Monte Carlo codes

    SciTech Connect

    Sunny, E. E.; Martin, W. R.

    2013-07-01

    Current Monte Carlo codes use one of three models to model neutron scattering in the epithermal energy range: (1) the asymptotic scattering model, (2) the free gas scattering model, or (3) the S(α,β) model, depending on the neutron energy and the specific Monte Carlo code. The free gas scattering model assumes the scattering cross section is constant over the neutron energy range, which is usually a good approximation for light nuclei, but not for heavy nuclei where the scattering cross section may have several resonances in the epithermal region. Several researchers in the field have shown that using the free gas scattering model in the vicinity of the resonances in the lower epithermal range can under-predict resonance absorption due to the up-scattering phenomenon. Existing methods all involve performing the collision analysis in the center-of-mass frame, followed by a conversion back to the laboratory frame. In this paper, we will present a new sampling methodology that (1) accounts for the energy-dependent scattering cross sections in the collision analysis and (2) acts in the laboratory frame, avoiding the conversion to the center-of-mass frame. The energy dependence of the scattering cross section was modeled with even-ordered polynomials to approximate the scattering cross section in Blackshaw's equations for the moments of the differential scattering PDFs. These moments were used to sample the outgoing neutron speed and angle in the laboratory frame on-the-fly during the random walk of the neutron. Results for criticality studies on fuel pin and fuel assembly calculations using these methods showed very close comparison to results using the reference Doppler-broadened rejection correction (DBRC) scheme. (authors)

  11. A MODEL BUILDING CODE ARTICLE ON FALLOUT SHELTERS WITH RECOMMENDATIONS FOR INCLUSION OF REQUIREMENTS FOR FALLOUT SHELTER CONSTRUCTION IN FOUR NATIONAL MODEL BUILDING CODES.

    ERIC Educational Resources Information Center

    American Inst. of Architects, Washington, DC.

    A MODEL BUILDING CODE FOR FALLOUT SHELTERS WAS DRAWN UP FOR INCLUSION IN FOUR NATIONAL MODEL BUILDING CODES. DISCUSSION IS GIVEN OF FALLOUT SHELTERS WITH RESPECT TO--(1) NUCLEAR RADIATION, (2) NATIONAL POLICIES, AND (3) COMMUNITY PLANNING. FALLOUT SHELTER REQUIREMENTS FOR SHIELDING, SPACE, VENTILATION, CONSTRUCTION, AND SERVICES SUCH AS ELECTRICAL…

  12. CODE's new solar radiation pressure model for GNSS orbit determination

    NASA Astrophysics Data System (ADS)

    Arnold, D.; Meindl, M.; Beutler, G.; Dach, R.; Schaer, S.; Lutz, S.; Prange, L.; Sośnica, K.; Mervart, L.; Jäggi, A.

    2015-08-01

    The Empirical CODE Orbit Model (ECOM) of the Center for Orbit Determination in Europe (CODE), which was developed in the early 1990s, is widely used in the International GNSS Service (IGS) community. For a rather long time, spurious spectral lines are known to exist in geophysical parameters, in particular in the Earth Rotation Parameters (ERPs) and in the estimated geocenter coordinates, which could recently be attributed to the ECOM. These effects grew creepingly with the increasing influence of the GLONASS system in recent years in the CODE analysis, which is based on a rigorous combination of GPS and GLONASS since May 2003. In a first step we show that the problems associated with the ECOM are to the largest extent caused by the GLONASS, which was reaching full deployment by the end of 2011. GPS-only, GLONASS-only, and combined GPS/GLONASS solutions using the observations in the years 2009-2011 of a global network of 92 combined GPS/GLONASS receivers were analyzed for this purpose. In a second step we review direct solar radiation pressure (SRP) models for GNSS satellites. We demonstrate that only even-order short-period harmonic perturbations acting along the direction Sun-satellite occur for GPS and GLONASS satellites, and only odd-order perturbations acting along the direction perpendicular to both, the vector Sun-satellite and the spacecraft's solar panel axis. Based on this insight we assess in the third step the performance of four candidate orbit models for the future ECOM. The geocenter coordinates, the ERP differences w. r. t. the IERS 08 C04 series of ERPs, the misclosures for the midnight epochs of the daily orbital arcs, and scale parameters of Helmert transformations for station coordinates serve as quality criteria. The old and updated ECOM are validated in addition with satellite laser ranging (SLR) observations and by comparing the orbits to those of the IGS and other analysis centers. Based on all tests, we present a new extended ECOM which

  13. Documentation of the GLAS fourth order general circulation model. Volume 2: Scalar code

    NASA Technical Reports Server (NTRS)

    Kalnay, E.; Balgovind, R.; Chao, W.; Edelmann, D.; Pfaendtner, J.; Takacs, L.; Takano, K.

    1983-01-01

    Volume 2, of a 3 volume technical memoranda contains a detailed documentation of the GLAS fourth order general circulation model. Volume 2 contains the CYBER 205 scalar and vector codes of the model, list of variables, and cross references. A variable name dictionary for the scalar code, and code listings are outlined.

  14. Inter-bit prediction based on maximum likelihood estimate for distributed video coding

    NASA Astrophysics Data System (ADS)

    Klepko, Robert; Wang, Demin; Huchet, Grégory

    2010-01-01

    Distributed Video Coding (DVC) is an emerging video coding paradigm for the systems that require low complexity encoders supported by high complexity decoders. A typical real world application for a DVC system is mobile phones with video capture hardware that have a limited encoding capability supported by base-stations with a high decoding capability. Generally speaking, a DVC system operates by dividing a source image sequence into two streams, key frames and Wyner-Ziv (W) frames, with the key frames being used to represent the source plus an approximation to the W frames called S frames (where S stands for side information), while the W frames are used to correct the bit errors in the S frames. This paper presents an effective algorithm to reduce the bit errors in the side information of a DVC system. The algorithm is based on the maximum likelihood estimation to help predict future bits to be decoded. The reduction in bit errors in turn reduces the number of parity bits needed for error correction. Thus, a higher coding efficiency is achieved since fewer parity bits need to be transmitted from the encoder to the decoder. The algorithm is called inter-bit prediction because it predicts the bit-plane to be decoded from previously decoded bit-planes, one bitplane at a time, starting from the most significant bit-plane. Results provided from experiments using real-world image sequences show that the inter-bit prediction algorithm does indeed reduce the bit rate by up to 13% for our test sequences. This bit rate reduction corresponds to a PSNR gain of about 1.6 dB for the W frames.
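
    The core of the idea, stripped of all coding details, is a maximum-likelihood estimate of the next bit from information already available at the decoder. The Python toy below (illustrative only, not the authors' algorithm) counts, over previously decoded data, how often the current bit-plane bit equals 1 for each combination of the co-located more-significant bit and side-information bit, and predicts by the larger smoothed count:

```python
from collections import Counter

def train_counts(msb_plane, side_plane, true_plane):
    """Count the target bit for every (more-significant bit, side-info bit) context."""
    counts = Counter()
    for msb, side, bit in zip(msb_plane, side_plane, true_plane):
        counts[(msb, side, bit)] += 1
    return counts

def predict(counts, msb, side):
    """Maximum-likelihood prediction of the bit, with Laplace smoothing."""
    n1 = counts[(msb, side, 1)] + 1
    n0 = counts[(msb, side, 0)] + 1
    return 1 if n1 > n0 else 0

# Tiny made-up "previously decoded" data standing in for earlier bit-planes.
msb  = [1, 1, 0, 0, 1, 0, 1, 0]
side = [1, 0, 0, 1, 1, 0, 1, 1]
true = [1, 0, 0, 0, 1, 0, 1, 1]
counts = train_counts(msb, side, true)
print(predict(counts, msb=1, side=1))         # -> 1 for this toy data
```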

  15. Statistical Model Code System to Calculate Particle Spectra from HMS Precompound Nucleus Decay.

    SciTech Connect

    Blann, Marshall

    2014-11-01

    Version 05 The HMS-ALICE/ALICE codes address the question: What happens when photons, nucleons or clusters/heavy ions of a few hundred keV to several hundred MeV interact with nuclei? The ALICE codes (as they have evolved over 50 years) use several nuclear reaction models to answer this question, predicting the energies and angles of particles emitted (n,p,2H,3H,3He,4He,6Li) in the reaction, and the residues, the spallation and fission products. Models used are principally Monte-Carlo formulations of the Hybrid/Geometry Dependent Hybrid precompound, Weisskopf-Ewing evaporation, Bohr-Wheeler fission, and recently a Fermi statistics break-up model (for light nuclei). Angular distribution calculation relies on the Chadwick-Oblozinsky linear momentum conservation model. Output gives residual product yields, and single and double differential cross sections for ejectiles in lab and CM frames. An option allows 1-3 particle out exclusive (ENDF format) for all combinations of n,p,alpha channels. Product yields include estimates of isomer yields where isomers exist. Earlier versions included the ability to compute coincident particle emission correlations, and much of this coding is still in place. Recoil product double-differential cross sections (ddcs) are computed, but not presently written to output files. Code execution begins with an on-screen interrogation for input, with defaults available for many aspects. A menu of model options is available within the input interrogation screen. The input is saved to hard drive. Subsequent runs may use this file, use the file with line editor changes, or begin again with the on-line interrogation.

  17. Modelling RF sources using 2-D PIC codes

    SciTech Connect

    Eppley, K.R.

    1993-03-01

    In recent years, many types of RF sources have been successfully modelled using 2-D PIC codes. Both cross field devices (magnetrons, cross field amplifiers, etc.) and pencil beam devices (klystrons, gyrotrons, TWTs, lasertrons, etc.) have been simulated. All these devices involve the interaction of an electron beam with an RF circuit. For many applications, the RF structure may be approximated by an equivalent circuit, which appears in the simulation as a boundary condition on the electric field (the "port approximation"). The drive term for the circuit is calculated from the energy transfer between beam and field in the drift space. For some applications it may be necessary to model the actual geometry of the structure, although this is more expensive. One problem not entirely solved is how to accurately model in 2-D the coupling to an external waveguide. Frequently this is approximated by a radial transmission line, but this sometimes yields incorrect results. We also discuss issues in modelling the cathode and injecting the beam into the PIC simulation.

  19. Modeling Vortex Generators in a Navier-Stokes Code

    NASA Technical Reports Server (NTRS)

    Dudek, Julianne C.

    2011-01-01

    A source-term model that simulates the effects of vortex generators was implemented into the Wind-US Navier-Stokes code. The source term added to the Navier-Stokes equations simulates the lift force that would result from a vane-type vortex generator in the flowfield. The implementation is user-friendly, requiring the user to specify only three quantities for each desired vortex generator: the range of grid points over which the force is to be applied and the planform area and angle of incidence of the physical vane. The model behavior was evaluated for subsonic flow in a rectangular duct with a single vane vortex generator, subsonic flow in an S-duct with 22 corotating vortex generators, and supersonic flow in a rectangular duct with a counter-rotating vortex-generator pair. The model was also used to successfully simulate microramps in supersonic flow by treating each microramp as a pair of vanes with opposite angles of incidence. The validation results indicate that the source-term vortex-generator model provides a useful tool for screening vortex-generator configurations and gives comparable results to solutions computed using gridded vanes.

  20. Modelling Radiative Stellar Winds with the SIMECA Code

    NASA Astrophysics Data System (ADS)

    Stee, Ph.

    Using the SIMECA code developed by Stee & Araùjo ([CITE]), we report theoretical HI visible and near-IR line profiles, i.e. Hα (6562 Å), Hβ (4861 Å) and Brγ (21 656 Å), and intensity maps for a large set of parameters representative of early to late Be spectral types. We have computed the size of the emitting region in the Brγ line and its nearby continuum, which both originate from a very extended region, i.e. at least 40 stellar radii, which is twice the size of the Hα emitting region. We predict the relative fluxes from the central star and the envelope contribution in the given lines and in the continuum for a wide range of parameters characterizing the disk models. Finally, we have also studied the effect of changing the spectral type on our results and we obtain a clear correlation between the luminosity in Hα and in the infrared.

  1. Thermohydraulic modeling of the nuclear thermal rocket: The KLAXON code

    SciTech Connect

    Hall, M.L.; Rider, W.J.; Cappiello, M.W. )

    1992-01-01

    Nuclear thermal rockets (NTRs) have been proposed as a means of propulsion for the Space Exploration Initiative (SEI, the manned mission to Mars). The NTR derives its thrust from the expulsion of hot supersonic hydrogen gas. A large tank on the rocket stores hydrogen in liquid or slush form, which is pumped by a turbopump through a nuclear reactor to provide the necessary heat. The path that the hydrogen takes is most circuitous, making several passes through the reactor and the nozzle itself (to provide cooling), as well as two passes through the turbopump (to transfer momentum). The proposed fuel elements for the reactor have two different configurations: solid prismatic fuel and particle-bed fuel. There are different design concerns for the two types of fuel, but there are also many fluid flow aspects that they share. The KLAXON code was used to model a generic NTR design from the inlet of the reactor core to the exit from the nozzle.

  2. Marked renewal model of smoothed VBR MPEG coded traffic

    NASA Astrophysics Data System (ADS)

    Hui, Xiaoshi; Li, Jiaoyang; Liu, Xiande

    1998-08-01

    In this paper, a method of smoothing variable bit-rate (VBR) MPEG traffic is proposed. A buffer, whose capacity exceeds the peak bandwidth of the group-of-pictures (GOP) sequence of an MPEG traffic stream and whose output rate is controlled by the distribution of the GOP sequence, is connected to a source. The burstiness of the output stream from the buffer is decreased, and the stream's autocorrelation function becomes non-increasing and non-convex. For the smoothed MPEG traffic stream, the GOP sequence is the target source element used for modeling. We apply a marked renewal process to model GOP-smoothed VBR MPEG traffic. A numerical study of simulating a target VBR MPEG video source with a marked renewal model shows that not only can the model's bandwidth distribution accurately match that of the target source sequence, but its leading autocorrelation can also approximate the long-range as well as the short-range dependence of VBR MPEG traffic. In addition, the model's parameter estimation is very easy. We conclude that GOP-smoothed VBR MPEG video traffic can be not only transferred more efficiently but also analyzed well with a marked renewal traffic model.

  3. Proof-of-principle experiment of reference-frame-independent quantum key distribution with phase coding

    PubMed Central

    Liang, Wen-Ye; Wang, Shuang; Li, Hong-Wei; Yin, Zhen-Qiang; Chen, Wei; Yao, Yao; Huang, Jing-Zheng; Guo, Guang-Can; Han, Zheng-Fu

    2014-01-01

    We have demonstrated a proof-of-principle experiment of reference-frame-independent phase coding quantum key distribution (RFI-QKD) over an 80-km optical fiber. After considering the finite-key bound, we still achieve a distance of 50 km. In this scenario, the phases of the basis states are related by a slowly time-varying transformation. Furthermore, we developed and realized a new decoy state method for RFI-QKD systems with weak coherent sources to counteract the photon-number-splitting attack. With the help of a reference-frame-independent protocol and a Michelson interferometer with Faraday rotator mirrors, our system is rendered immune to the slow phase changes of the interferometer and the polarization disturbances of the channel, making the procedure very robust. PMID:24402550

  4. Simulation of charge breeding of rubidium using Monte Carlo charge breeding code and generalized ECRIS model

    SciTech Connect

    Zhao, L.; Cluggish, B.; Kim, J. S.; Pardo, R.; Vondrasek, R.

    2010-02-15

    A Monte Carlo charge breeding code (MCBC) is being developed by FAR-TECH, Inc. to model the capture and charge breeding of 1+ ion beam in an electron cyclotron resonance ion source (ECRIS) device. The ECRIS plasma is simulated using the generalized ECRIS model which has two choices of boundary settings, free boundary condition and Bohm condition. The charge state distribution of the extracted beam ions is calculated by solving the steady state ion continuity equations where the profiles of the captured ions are used as source terms. MCBC simulations of the charge breeding of Rb+ showed good agreement with recent charge breeding experiments at Argonne National Laboratory (ANL). MCBC correctly predicted the peak of highly charged ion state outputs under free boundary condition and similar charge state distribution width but a lower peak charge state under the Bohm condition. The comparisons between the simulation results and ANL experimental measurements are presented and discussed.

  5. Design and simulation of material-integrated distributed sensor processing with a code-based agent platform and mobile multi-agent systems.

    PubMed

    Bosse, Stefan

    2015-02-16

    Multi-agent systems (MAS) can be used for decentralized and self-organizing data processing in a distributed system, like a resource-constrained sensor network, enabling distributed information extraction, for example, based on pattern recognition and self-organization, by decomposing complex tasks in simpler cooperative agents. Reliable MAS-based data processing approaches can aid the material-integration of structural-monitoring applications, with agent processing platforms scaled to the microchip level. The agent behavior, based on a dynamic activity-transition graph (ATG) model, is implemented with program code storing the control and the data state of an agent, which is novel. The program code can be modified by the agent itself using code morphing techniques and is capable of migrating in the network between nodes. The program code is a self-contained unit (a container) and embeds the agent data, the initialization instructions and the ATG behavior implementation. The microchip agent processing platform used for the execution of the agent code is a standalone multi-core stack machine with a zero-operand instruction format, leading to a small-sized agent program code, low system complexity and high system performance. The agent processing is token-queue-based, similar to Petri-nets. The agent platform can be implemented in software, too, offering compatibility at the operational and code level, supporting agent processing in strong heterogeneous networks. In this work, the agent platform embedded in a large-scale distributed sensor network is simulated at the architectural level by using agent-based simulation techniques.
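
    A highly simplified sketch of the activity-transition-graph idea (a toy, not the authors' platform): the agent is a self-contained record of data state plus a current activity, each activity is a small function that updates the state and returns the name of the next activity, and the whole record can be serialized and handed to another node as a unit.

```python
import json

def make_agent(threshold):
    """The agent is plain data: its state and the name of its current activity."""
    return {"state": {"reading": None, "threshold": threshold, "alarm": False},
            "activity": "sense"}

# Activity-transition graph: each activity returns the next activity's name.
ACTIVITIES = {
    "sense":  lambda st, value: (st.update(reading=value) or "check"),
    "check":  lambda st, _: ("report" if st["reading"] > st["threshold"] else "sense"),
    "report": lambda st, _: (st.update(alarm=True) or "sense"),
}

def step(agent, sensor_value):
    """Execute the current activity and move to the next node of the ATG."""
    agent["activity"] = ACTIVITIES[agent["activity"]](agent["state"], sensor_value)

agent = make_agent(threshold=10)
for value in [3, 12, 12, 5, 0]:
    step(agent, value)
print(agent["state"]["alarm"])                # True: the threshold was exceeded

# "Migration": the agent's data and control state travel as one serialized unit.
payload = json.dumps(agent)
```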

  6. Design and Simulation of Material-Integrated Distributed Sensor Processing with a Code-Based Agent Platform and Mobile Multi-Agent Systems

    PubMed Central

    Bosse, Stefan

    2015-01-01

    Multi-agent systems (MAS) can be used for decentralized and self-organizing data processing in a distributed system, like a resource-constrained sensor network, enabling distributed information extraction, for example, based on pattern recognition and self-organization, by decomposing complex tasks in simpler cooperative agents. Reliable MAS-based data processing approaches can aid the material-integration of structural-monitoring applications, with agent processing platforms scaled to the microchip level. The agent behavior, based on a dynamic activity-transition graph (ATG) model, is implemented with program code storing the control and the data state of an agent, which is novel. The program code can be modified by the agent itself using code morphing techniques and is capable of migrating in the network between nodes. The program code is a self-contained unit (a container) and embeds the agent data, the initialization instructions and the ATG behavior implementation. The microchip agent processing platform used for the execution of the agent code is a standalone multi-core stack machine with a zero-operand instruction format, leading to a small-sized agent program code, low system complexity and high system performance. The agent processing is token-queue-based, similar to Petri-nets. The agent platform can be implemented in software, too, offering compatibility at the operational and code level, supporting agent processing in strong heterogeneous networks. In this work, the agent platform embedded in a large-scale distributed sensor network is simulated at the architectural level by using agent-based simulation techniques. PMID:25690550

  8. Graphical Models via Univariate Exponential Family Distributions

    PubMed Central

    Yang, Eunho; Ravikumar, Pradeep; Allen, Genevera I.; Liu, Zhandong

    2016-01-01

    Undirected graphical models, or Markov networks, are a popular class of statistical models, used in a wide variety of applications. Popular instances of this class include Gaussian graphical models and Ising models. In many settings, however, it might not be clear which subclass of graphical models to use, particularly for non-Gaussian and non-categorical data. In this paper, we consider a general sub-class of graphical models where the node-wise conditional distributions arise from exponential families. This allows us to derive multivariate graphical model distributions from univariate exponential family distributions, such as the Poisson, negative binomial, and exponential distributions. Our key contributions include a class of M-estimators to fit these graphical model distributions; and rigorous statistical analysis showing that these M-estimators recover the true graphical model structure exactly, with high probability. We provide examples of genomic and proteomic networks learned via instances of our class of graphical models derived from Poisson and exponential distributions. PMID:27570498
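
    The construction hinges on specifying each node-wise conditional distribution as a univariate exponential family whose natural parameter is a linear function of (the sufficient statistics of) the neighbouring nodes. Schematically, with notation simplified relative to the paper, for a node s with neighbours N(s):

```latex
P\bigl(X_s \mid X_{\setminus s}\bigr) \;\propto\;
\exp\!\Bigl\{ \Bigl(\theta_s + \sum_{t \in N(s)} \theta_{st}\, B(X_t)\Bigr) B(X_s) + C(X_s) \Bigr\},
```

    where B(.) is the sufficient statistic and C(.) the base measure of the chosen univariate family (for instance, B(x) = x and C(x) = -log x! in the Poisson case).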

  9. Modeling Vortex Generators in the Wind-US Code

    NASA Technical Reports Server (NTRS)

    Dudek, Julianne C.

    2010-01-01

    A source term model which simulates the effects of vortex generators was implemented into the Wind-US Navier Stokes code. The source term added to the Navier-Stokes equations simulates the lift force which would result from a vane-type vortex generator in the flowfield. The implementation is user-friendly, requiring the user to specify only three quantities for each desired vortex generator: the range of grid points over which the force is to be applied and the planform area and angle of incidence of the physical vane. The model behavior was evaluated for subsonic flow in a rectangular duct with a single vane vortex generator, supersonic flow in a rectangular duct with a counterrotating vortex generator pair, and subsonic flow in an S-duct with 22 co-rotating vortex generators. The validation results indicate that the source term vortex generator model provides a useful tool for screening vortex generator configurations and gives comparable results to solutions computed using a gridded vane.

  10. Modeling Fluid Instabilities in Inertial Confinement Fusion Hydrodynamics Codes

    NASA Astrophysics Data System (ADS)

    Zalesak, Steven

    2004-11-01

    When attempting to numerically model a physical phenomenon of any kind, we typically formulate the numerical requirements in terms of the range of spatial and temporal scales of interest. We then construct numerical software that adequately resolves those scales in each of the spatial and temporal dimensions. This software may use adaptive mesh refinement or other techniques to adequately resolve those scales of interest, and may use front-capturing algorithms or other techniques to avoid having to resolve scales that are not of interest to us. Knowing what constitutes the scales of interest is sometimes a difficult question. Harder still is knowing what constitutes adequate resolution. For many physical phenomena, adequate resolution may be obtained, for example, by simply demanding that the spatial and temporal derivatives of all scales of interest have errors less than some specified tolerance. But for other phenomena, in particular those in which physical instabilities are active, one must be much more precise in the specification of adequate resolution. In such situations one must ask detailed questions about the nature of the numerical errors, not just their size. The problem we have in mind is that of accurately modeling the evolution of small amplitude perturbations to a time-dependent flow, where the unperturbed flow itself exhibits large amplitude temporal and spatial variations. Any errors that we make in numerically modeling the unperturbed flow, if they have a projection onto the space of the perturbations of interest, can easily compromise the accuracy of those perturbations, even if the errors are small in terms of the unperturbed solution. Here we will discuss the progress that we have made over the past year in attempting to improve the ability of our radiation hydrodynamics code FASTRAD3D to accurately model the evolution of small-amplitude perturbations to an imploding ICF pellet, which is subject to both Richtmyer-Meshkov and Rayleigh

  11. PHASE-OTI: A pre-equilibrium model code for nuclear reactions calculations

    NASA Astrophysics Data System (ADS)

    Elmaghraby, Elsayed K.

    2009-09-01

    The present work focuses on a pre-equilibrium nuclear reaction code (based on the one, two and infinity hypothesis of pre-equilibrium nuclear reactions). In the PHASE-OTI code, pre-equilibrium decays are assumed to be single nucleon emissions, and the statistical probabilities come from the independence of nuclei decay. The code has proved to be a good tool to provide predictions of energy-differential cross sections. The probability of emission was calculated statistically using bases of hybrid model and exciton model. However, more precise depletion factors were used in the calculations. The present calculations were restricted to nucleon-nucleon interactions and one nucleon emission. Program summaryProgram title: PHASE-OTI Catalogue identifier: AEDN_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEDN_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 5858 No. of bytes in distributed program, including test data, etc.: 149 405 Distribution format: tar.gz Programming language: Fortran 77 Computer: Pentium 4 and Centrino Duo Operating system: MS Windows RAM: 128 MB Classification: 17.12 Nature of problem: Calculation of the differential cross section for nucleon induced nuclear reaction in the framework of pre-equilibrium emission model. Solution method: Single neutron emission was treated by assuming occurrence of the reaction in successive steps. Each step is called phase because of the phase transition nature of the theory. The probability of emission was calculated statistically using bases of hybrid model [1] and exciton model [2]. However, more precise depletion factor was used in the calculations. Exciton configuration used in the code is that described in earlier work [3]. Restrictions: The program is restricted to single nucleon emission and nucleon

  12. Modeling of MHD edge containment in strip casting with ELEKTRA and CaPS-EM codes

    SciTech Connect

    Chang, F. C.

    2000-01-12

    This paper presents modeling studies of magnetohydrodynamics analysis in twin-roll casting. Argonne National Laboratory (ANL) and ISPAT Inland Inc. (Inland), formerly Inland Steel Co., have worked together to develop a three-dimensional (3-D) computer model that can predict eddy currents, fluid flows, and liquid metal containment of an electromagnetic (EM) edge containment device. The model was verified by comparing predictions with experimental results of liquid metal containment and fluid flow in EM edge dams (EMDs) that were designed at Inland for twin-roll casting. This mathematical model can significantly shorten casting research on the use of EM fields for liquid metal containment and control. The model can optimize the EMD design so it is suitable for application, and minimize expensive time-consuming full-scale testing. Numerical simulation was performed by coupling a 3-D finite-element EM code (ELEKTRA) and a 3-D finite-difference fluids code (CaPS-EM) to solve heat transfer, fluid flow, and turbulence transport in a casting process that involves EM fields. ELEKTRA can predict the eddy-current distribution and the EM forces in complex geometries. CaPS-EM can model fluid flows with free surfaces. The computed 3-D magnetic fields and induced eddy currents in ELEKTRA are used as input to temperature- and flow-field computations in CaPS-EM. Results of the numerical simulation compared well with measurements obtained from both static and dynamic tests.

  13. Mutation-selection models of coding sequence evolution with site-heterogeneous amino acid fitness profiles.

    PubMed

    Rodrigue, Nicolas; Philippe, Hervé; Lartillot, Nicolas

    2010-03-01

    Modeling the interplay between mutation and selection at the molecular level is key to evolutionary studies. To this end, codon-based evolutionary models have been proposed as pertinent means of studying long-range evolutionary patterns and are widely used. However, these approaches have not yet consolidated results from amino acid level phylogenetic studies showing that selection acting on proteins displays strong site-specific effects, which translate into heterogeneous amino acid propensities across the columns of alignments; related codon-level studies have instead focused on either modeling a single selective context for all codon columns, or a separate selective context for each codon column, with the former strategy deemed too simplistic and the latter deemed overparameterized. Here, we integrate recent developments in nonparametric statistical approaches to propose a probabilistic model that accounts for the heterogeneity of amino acid fitness profiles across the coding positions of a gene. We apply the model to a dozen real protein-coding gene alignments and find it to produce biologically plausible inferences, for instance, as pertaining to site-specific amino acid constraints, as well as distributions of scaled selection coefficients. In their account of mutational features as well as the heterogeneous regimes of selection at the amino acid level, the modeling approaches studied here can form a backdrop for several extensions, accounting for other selective features, for variable population size, or for subtleties of mutational features, all with parameterizations couched within population-genetic theory. PMID:20176949
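    For readers unfamiliar with the mutation-selection framework this model builds on, the sketch below evaluates the standard substitution rate obtained by scaling a mutation rate with the fixation factor implied by a site-specific amino acid fitness profile; it is a didactic illustration with made-up numbers, not the authors' nonparametric (site-heterogeneous) implementation.

```python
import numpy as np

def mutsel_rate(mu_ij, fitness_from, fitness_to):
    """Mutation-selection substitution rate: mutation rate times the fixation factor.

    S is the scaled selection coefficient between the amino acids encoded by the
    two codons at this site; S -> 0 recovers the neutral rate mu_ij.
    """
    S = fitness_to - fitness_from
    if abs(S) < 1e-12:
        return mu_ij
    return mu_ij * S / (1.0 - np.exp(-S))

# Hypothetical site-specific amino acid fitness profile for one alignment column
profile = {"A": 0.0, "S": -1.5, "T": -3.0}
print(mutsel_rate(mu_ij=1.0, fitness_from=profile["A"], fitness_to=profile["S"]))
```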

  14. Subgrid Combustion Modeling for the Next Generation National Combustion Code

    NASA Technical Reports Server (NTRS)

    Menon, Suresh; Sankaran, Vaidyanathan; Stone, Christopher

    2003-01-01

    In the first year of this research, a subgrid turbulent mixing and combustion methodology developed earlier at Georgia Tech has been provided to researchers at NASA/GRC for incorporation into the next generation National Combustion Code (called NCCLES hereafter). A key feature of this approach is that scalar mixing and combustion processes are simulated within the LES grid using a stochastic 1D model. The subgrid simulation approach recovers molecular diffusion and reaction kinetics locally and exactly without requiring closure and thus provides an attractive means to simulate complex, highly turbulent reacting flows of interest. Data acquisition algorithms and statistical analysis strategies and routines to analyze NCCLES results have also been provided to NASA/GRC. The overall goal of this research is to systematically develop and implement LES capability into the current NCC. For this purpose, issues regarding initialization and running LES are also addressed in the collaborative effort. In parallel to this ongoing technology transfer effort, research has also been underway at Georgia Tech to enhance the LES capability to tackle more complex flows. In particular, the subgrid scalar mixing and combustion method has been evaluated in three distinctly different flow fields in order to demonstrate its generality: (a) Flame-Turbulence Interactions using premixed combustion, (b) Spatially evolving supersonic mixing layers, and (c) Temporal single and two-phase mixing layers. The configurations chosen are such that they can be implemented in NCCLES and used to evaluate the ability of the new code. Future development and validation will address spray combustion in gas turbine engines and supersonic scalar mixing.

  15. Evaluation of turbulence models in the PARC code for transonic diffuser flows

    NASA Technical Reports Server (NTRS)

    Georgiadis, N. J.; Drummond, J. E.; Leonard, B. P.

    1994-01-01

    Flows through a transonic diffuser were investigated with the PARC code using five turbulence models to determine the effects of turbulence model selection on flow prediction. Three of the turbulence models were algebraic models: Thomas (the standard algebraic turbulence model in PARC), Baldwin-Lomax, and Modified Mixing Length-Thomas (MMLT). The other two models were the low Reynolds number k-epsilon models of Chien and Speziale. Three diffuser flows, referred to as the no-shock, weak-shock, and strong-shock cases, were calculated with each model to conduct the evaluation. Pressure distributions, velocity profiles, locations of shocks, and maximum Mach numbers in the duct were the flow quantities compared. Overall, the Chien k-epsilon model was the most accurate of the five models when considering results obtained for all three cases. However, the MMLT model provided solutions as accurate as the Chien model for the no-shock and the weak-shock cases, at a substantially lower computational cost (measured in CPU time required to obtain converged solutions). The strong shock flow, which included a region of shock-induced flow separation, was only predicted well by the two k-epsilon models.

  16. New trends in species distribution modelling

    USGS Publications Warehouse

    Zimmermann, Niklaus E.; Edwards, Thomas C.; Graham, Catherine H.; Pearman, Peter B.; Svenning, Jens-Christian

    2010-01-01

    Species distribution modelling has its origin in the late 1970s when computing capacity was limited. Early work in the field concentrated mostly on the development of methods to model effectively the shape of a species' response to environmental gradients (Austin 1987, Austin et al. 1990). The methodology and its framework were summarized in reviews 10–15 yr ago (Franklin 1995, Guisan and Zimmermann 2000), and these syntheses are still widely used as reference landmarks in the current distribution modelling literature. However, enormous advancements have occurred over the last decade, with hundreds – if not thousands – of publications on species distribution model (SDM) methodologies and their application to a broad set of conservation, ecological and evolutionary questions. With this special issue, originating from the third of a set of specialized SDM workshops (2008 Riederalp) entitled 'The Utility of Species Distribution Models as Tools for Conservation Ecology', we reflect on current trends and the progress achieved over the last decade.

  17. Source Term Model for Steady Micro Jets in a Navier-Stokes Computer Code

    NASA Technical Reports Server (NTRS)

    Waithe, Kenrick A.

    2005-01-01

    A source term model for steady micro jets was implemented into a non-proprietary Navier-Stokes computer code, OVERFLOW. The source term models the mass flow and momentum created by a steady blowing micro jet. The model is obtained by adding the momentum and mass flow created by the jet to the Navier-Stokes equations. The model was tested by comparing with data from numerical simulations of a single, steady micro jet on a flat plate in two and three dimensions. The source term model predicted the velocity distribution well compared to the two-dimensional plate using a steady mass flow boundary condition, which was used to simulate a steady micro jet. The model was also compared to two three-dimensional flat plate cases using a steady mass flow boundary condition to simulate a steady micro jet. The three-dimensional comparison included a case with a grid generated to capture the circular shape of the jet and a case without a grid generated for the micro jet. The case without the jet grid mimics the application of the source term. The source term model compared well with both of the three-dimensional cases. Comparisons of velocity distribution were made before and after the jet and Mach and vorticity contours were examined. The source term model allows a researcher to quickly investigate different locations of individual or several steady micro jets. The researcher is able to conduct a preliminary investigation with minimal grid generation and computational time.
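    The mass and momentum bookkeeping behind such a jet source term can be sketched as follows; this is a schematic illustration under simple assumptions (uniform jet velocity, a single host cell), not the OVERFLOW implementation itself.

```python
def micro_jet_sources(rho_jet, u_jet, jet_area, cell_volume):
    """Mass and momentum sources representing a steady blowing micro jet.

    The jet's mass flow and momentum flux are converted to per-unit-volume sources
    for the continuity and momentum equations of the cell containing the
    (unresolved) jet orifice.
    """
    mdot = rho_jet * u_jet * jet_area           # jet mass flow rate [kg/s]
    s_mass = mdot / cell_volume                 # continuity source [kg/(m^3 s)]
    s_momentum = mdot * u_jet / cell_volume     # momentum source [N/m^3], along the jet axis
    return s_mass, s_momentum

print(micro_jet_sources(rho_jet=1.2, u_jet=150.0, jet_area=3.0e-7, cell_volume=1.0e-6))
```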

  18. Impact of the hard-coded parameters on the hydrologic fluxes of the land surface model Noah-MP

    NASA Astrophysics Data System (ADS)

    Cuntz, Matthias; Mai, Juliane; Samaniego, Luis; Clark, Martyn; Wulfmeyer, Volker; Attinger, Sabine; Thober, Stephan

    2016-04-01

    Land surface models incorporate a large number of processes, described by physical, chemical and empirical equations. The process descriptions contain a number of parameters that can be soil or plant type dependent and are typically read from tabulated input files. Land surface models may have, however, process descriptions that contain fixed, hard-coded numbers in the computer code, which are not identified as model parameters. Here we searched for hard-coded parameters in the computer code of the land surface model Noah with multiple process options (Noah-MP) to assess the importance of the fixed values on restricting the model's agility during parameter estimation. We found 139 hard-coded values in all Noah-MP process options, which are mostly spatially constant values. This is in addition to the 71 standard parameters of Noah-MP, which mostly get distributed spatially by given vegetation and soil input maps. We performed a Sobol' global sensitivity analysis of Noah-MP to variations of the standard and hard-coded parameters for a specific set of process options. 42 standard parameters and 75 hard-coded parameters were active with the chosen process options. The sensitivities of the hydrologic output fluxes latent heat and total runoff as well as their component fluxes were evaluated. These sensitivities were evaluated at twelve catchments of the Eastern United States with very different hydro-meteorological regimes. Noah-MP's hydrologic output fluxes are sensitive to two thirds of its standard parameters. The most sensitive parameter is, however, a hard-coded value in the formulation of soil surface resistance for evaporation, which proved to be oversensitive in other land surface models as well. Surface runoff is sensitive to almost all hard-coded parameters of the snow processes and the meteorological inputs. These parameter sensitivities diminish in total runoff. Assessing these parameters in model calibration would require detailed snow observations or the
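    A minimal example of the Sobol' sensitivity workflow described above is sketched below using the SALib package and a toy stand-in for the land surface model; the package choice, parameter names, and bounds are assumptions for illustration, not those of the Noah-MP study.

```python
# Minimal Sobol' sensitivity sketch with the SALib package (an assumption here; the
# Noah-MP study used its own sampling of standard plus hard-coded parameters).
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["soil_resistance_coeff", "snow_albedo_param", "runoff_exponent"],  # hypothetical
    "bounds": [[0.5, 2.0], [0.4, 0.9], [1.0, 6.0]],
}

X = saltelli.sample(problem, 1024)              # Saltelli cross-sampling design

def toy_flux_model(x):
    """Stand-in for a latent-heat or runoff output of the land surface model."""
    return x[0] ** 2 + 0.5 * x[1] + 0.1 * x[0] * x[2]

Y = np.array([toy_flux_model(row) for row in X])
Si = sobol.analyze(problem, Y)                  # first-order and total-order indices
print(dict(zip(problem["names"], Si["ST"])))
```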

  19. Methodology Using MELCOR Code to Model Proposed Hazard Scenario

    SciTech Connect

    Gavin Hawkley

    2010-07-01

    This study demonstrates a methodology for using the MELCOR code to model a proposed hazard scenario within a building containing radioactive powder, and the subsequent evaluation of the leak path factor (LPF), the fraction of respirable material that escapes the facility into the outside environment, implicit in the scenario. The LPF evaluation analyzes the basis and applicability of an assumed standard multiplication of 0.5 × 0.5 (in which 0.5 represents the fraction of material assumed to leave one area and enter another) for calculating an LPF value. The outside release depends on the ventilation/filtration system, both filtered and unfiltered, and on other pathways from the building, such as doorways, both open and closed. The study shows how the multiple LPFs within the building interior can be evaluated in a combinatory process in which a total LPF is calculated, thus addressing the assumed multiplication and allowing for the designation and assessment of a respirable source term (ST) for later consequence analysis, in which the propagation of material released into the atmosphere can be modeled, the dose received by a receptor placed downwind can be estimated, and the distance adjusted to maintain such exposures as low as reasonably achievable (ALARA). The study also briefly addresses particle characteristics that affect atmospheric particle dispersion and compares this dispersion with the LPF methodology.
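    The combinatory evaluation of leak path factors described above can be illustrated with a short sketch in which per-stage factors multiply along each pathway and independent pathways add; the stage values are purely illustrative, not MELCOR results.

```python
from math import prod

def total_leak_path_factor(pathways):
    """Combine per-stage leak path factors into a building total.

    Each pathway is a list of stage LPFs (fraction of airborne material that passes
    each barrier: room-to-corridor, filter, doorway, ...). Stage factors multiply
    along a pathway, and independent pathways add. Values are illustrative only.
    """
    return sum(prod(stages) for stages in pathways)

pathways = [
    [0.5, 0.5],      # the assumed 0.5 x 0.5 room-to-room multiplication
    [0.5, 0.001],    # room to filtered ventilation exhaust (HEPA-like stage)
]
print(total_leak_path_factor(pathways))
```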

  20. Caveats for correlative species distribution modeling

    USGS Publications Warehouse

    Jarnevich, Catherine S.; Stohlgren, Thomas J.; Kumar, Sunil; Morisette, Jeffrey T.; Holcombe, Tracy R.

    2015-01-01

    Correlative species distribution models are becoming commonplace in the scientific literature and public outreach products, displaying locations, abundance, or suitable environmental conditions for harmful invasive species, threatened and endangered species, or species of special concern. Accurate species distribution models are useful for efficient and adaptive management and conservation, research, and ecological forecasting. Yet, these models are often presented without fully examining or explaining the caveats for their proper use and interpretation and are often implemented without understanding the limitations and assumptions of the model being used. We describe common pitfalls, assumptions, and caveats of correlative species distribution models to help novice users and end users better interpret these models. Four primary caveats corresponding to different phases of the modeling process, each with supporting documentation and examples, include: (1) all sampling data are incomplete and potentially biased; (2) predictor variables must capture distribution constraints; (3) no single model works best for all species, in all areas, at all spatial scales, and over time; and (4) the results of species distribution models should be treated like a hypothesis to be tested and validated with additional sampling and modeling in an iterative process.

  1. 3D scene reconstruction based on multi-view distributed video coding in the Zernike domain for mobile applications

    NASA Astrophysics Data System (ADS)

    Palma, V.; Carli, M.; Neri, A.

    2011-02-01

    In this paper a Multi-view Distributed Video Coding scheme for mobile applications is presented. Specifically, a new fusion technique between temporal and spatial side information in the Zernike moments domain is proposed. Distributed video coding introduces a flexible architecture that enables the design of very low-complexity video encoders compared to their traditional counterparts. The main goal of our work is to generate at the decoder the side information that optimally blends temporal and inter-view data. Multi-view distributed coding performance strongly depends on the quality of the side information built at the decoder. To this end, a spatial view compensation/prediction in the Zernike moments domain is applied to improve its quality. Spatial and temporal motion activity are fused together to obtain the overall side information. The proposed method has been evaluated in terms of rate-distortion performance for different inter-view and temporal estimation quality conditions.

  2. Coding coarse grained polymer model for LAMMPS and its application to polymer crystallization

    NASA Astrophysics Data System (ADS)

    Luo, Chuanfu; Sommer, Jens-Uwe

    2009-08-01

    We present a patch code for LAMMPS to implement a coarse grained (CG) model of poly(vinyl alcohol) (PVA). LAMMPS is a powerful molecular dynamics (MD) simulator developed at Sandia National Laboratories. Our patch code implements tabulated angular potential and Lennard-Jones-9-6 (LJ96) style interaction for PVA. Benefited from the excellent parallel efficiency of LAMMPS, our patch code is suitable for large-scale simulations. This CG-PVA code is used to study polymer crystallization, which is a long-standing unsolved problem in polymer physics. By using parallel computing, cooling and heating processes for long chains are simulated. The results show that chain-folded structures resembling the lamellae of polymer crystals are formed during the cooling process. The evolution of the static structure factor during the crystallization transition indicates that long-range density order appears before local crystalline packing. This is consistent with some experimental observations by small/wide angle X-ray scattering (SAXS/WAXS). During the heating process, it is found that the crystalline regions are still growing until they are fully melted, which can be confirmed by the evolution both of the static structure factor and average stem length formed by the chains. This two-stage behavior indicates that melting of polymer crystals is far from thermodynamic equilibrium. Our results concur with various experiments. It is the first time that such growth/reorganization behavior is clearly observed by MD simulations. Our code can be easily used to model other type of polymers by providing a file containing the tabulated angle potential data and a set of appropriate parameters. Program summaryProgram title: lammps-cgpva Catalogue identifier: AEDE_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEDE_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU's GPL No. of lines in distributed program

  3. Environmental PCR survey to determine the distribution of a non-canonical genetic code in uncultivable oxymonads.

    PubMed

    de Koning, Audrey P; Noble, Geoffrey P; Heiss, Aaron A; Wong, Jensen; Keeling, Patrick J

    2008-01-01

    The universal genetic code is conserved throughout most living systems, but a non-canonical code where TAA and TAG encode glutamine has evolved in several eukaryotes, including oxymonad protists. Most oxymonads are uncultivable, so environmental RT-PCR and PCR was used to examine the distribution of this rare character. A total of 253 unique isolates of four protein-coding genes were sampled from the hindgut community of the cockroach, Cryptocercus punctulatus, an environment rich in diversity from two of the five subgroups of oxymonad, saccinobaculids and polymastigids. Four alpha-tubulins were found with non-canonical glutamine codons. Environmental RACE confirmed that these and related genes used only TGA as stop codons, as expected for the non-canonical code, whereas other genes used TAA or TAG as stop codons, as expected for the universal code. We characterized alpha-tubulin from manually isolated Saccinobaculus ambloaxostylus, confirming it uses the universal code and suggesting, by elimination, that the non-canonical code is used by a polymastigid. HSP90 and EF-1alpha phylogenies also showed environmental sequences falling into two distinct groups, and are generally consistent with previous hypotheses that polymastigids and Streblomastix are closely related. Overall, we propose that the non-canonical genetic code arose once in a common ancestor of Streblomastix and a subgroup of polymastigids. PMID:18211267

  5. Modeling Gas Distribution in Protoplanetary Accretion Disks

    NASA Astrophysics Data System (ADS)

    Kronberg, Martin; Lewis, Josiah; Brittain, Sean

    2010-07-01

    Protoplanetary accretion disks are disks of dust and gas which surround and feed material onto a forming star in the earliest stages of its evolution. One of the most useful methods for studying these disks is near-infrared spectroscopy of rovibrational CO emission. This paper presents the methods by which synthetically generated spectra are modeled and fit to spectral data gathered from protoplanetary disks. It also discusses how the code can be improved by modifying it to run a Monte Carlo analysis of best fit across the CONDOR cluster at Clemson University, thereby allowing for the creation of a catalog of protoplanetary disks with detailed information about them as gathered from the model.

  6. A Search for Core Values: Towards a Model Code of Ethics for Information Professionals.

    ERIC Educational Resources Information Center

    Koehler, Wallace C.; Pemberton, J. Michael

    2000-01-01

    Examines ethical codes and standards of professional practice promulgated by diverse associations of information professionals from varied national outlooks to identify a core set of ethical principles. Offers a model code based on a textual consensus of those ethical codes and standards examined. Three appendices provide information on…

  7. A simple way to model nebulae with distributed ionizing stars

    NASA Astrophysics Data System (ADS)

    Jamet, L.; Morisset, C.

    2008-04-01

    Aims: This work is a follow-up to a recent article by Ercolano et al. that shows that, in some cases, the spatial dispersion of the ionizing stars in a given nebula may significantly affect its emission spectrum. The authors found that the dispersion of the ionizing stars is accompanied by a decrease in the ionization parameter, which at least partly explains the variations in the nebular spectrum. However, they did not investigate how other effects associated with the dispersion of the stars may contribute to those variations. Furthermore, they made use of a single, simplified set of stellar populations. The scope of the present article is to assess whether the variation in the ionization parameter is the dominant effect in the dependence of the nebular spectrum on the distribution of its ionizing stars. We examined this possibility for various regimes of metallicity and age. We also investigated a way to model the distribution of the ionizing sources so as to bypass expensive calculations. Methods: We wrote a code able to generate random stellar populations and to compute the emission spectra of their associated nebulae through the widespread photoionization code cloudy. This code can process two kinds of spatial distributions of the stars: one where all the stars are concentrated at one point, and one where their separation is such that their Strömgren spheres do not overlap. Results: We found that, in most regimes of stellar population ages and gas metallicities, the dependence of the ionization parameter on the distribution of the stars is the dominant factor in the variation of the main nebular diagnostics with this distribution. We derived a method to mimic those effects with a single calculation that makes use of the common assumptions of a central source and a spherical nebula, in the case of constant-density objects. This represents a computation-time saving of at least a factor of several dozen in the case of H ii regions ionized by massive clusters.

  8. Indiana Distributive Education Competency Based Model.

    ERIC Educational Resources Information Center

    Davis, Rod; And Others

    This Indiana distributive education competency-based curriculum model is designed to help teachers and local administrators plan and conduct a comprehensive marketing and distributive education program. It is divided into three levels--one level for each year of a three-year program. The competencies common to a variety of marketing and…

  9. Semantic-preload video model based on VOP coding

    NASA Astrophysics Data System (ADS)

    Yang, Jianping; Zhang, Jie; Chen, Xiangjun

    2013-03-01

    In recent years, in order to reduce the semantic gap that exists between the high-level semantics and the low-level features of video when humans interpret images or video, most work has tried the method of video annotation downstream of the signal, namely further (again) attaching labels to the content already stored in a video database. Few have pursued the idea explored here: use limited interaction and comprehensive segmentation (including optical technologies) at the front end of video information collection (i.e. the video camera), together with video semantics analysis technology, concept sets (i.e. ontologies) belonging to a certain domain, story shooting scripts, and task descriptions of scene shooting; then apply semantic descriptions at different levels to enrich the attributes of video objects and image regions, thereby forming a new video model based on Video Object Plane (VOP) coding. This model has potentially intelligent features, carries a large amount of metadata, and embeds intermediate-level semantic concepts into every object. This paper focuses on the latter approach and presents a framework for the new video model, at present temporarily named the "Semantic-Preloaded Video Model" or "Semantic-Preload Video Model" (abbreviated VMoSP or SPVM). This model mainly investigates how to label video objects and image regions in real time, where video objects and image regions usually receive intermediate-level semantic labels, and this work is placed upstream of the signal (i.e. the video capture and production stage). Because of the research needs, this paper also analyses the hierarchical structure of video and divides it into nine semantic levels; these nine levels are involved only in the video production process. In addition, the paper also points out that the semantic-level tagging work (i.e. semantic preloading) refers only to the four middle-level semantics. All in

  10. 24 CFR 200.925c - Model codes.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ..., Administration, for the Building, Plumbing and Mechanical Codes and the references to fire retardant treated wood... each standard code and the phrase “or fire retardant treated wood” in reference note (a) of table 600... Part I—Administrative, and the reference to fire retardant treated plywood in section 2504(c)3 and...

  11. 24 CFR 200.925c - Model codes.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ..., Administration, for the Building, Plumbing and Mechanical Codes and the references to fire retardant treated wood... each standard code and the phrase “or fire retardant treated wood” in reference note (a) of table 600... Part I—Administrative, and the reference to fire retardant treated plywood in section 2504(c)3 and...

  12. 24 CFR 200.925c - Model codes.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ..., Administration, for the Building, Plumbing and Mechanical Codes and the references to fire retardant treated wood... each standard code and the phrase “or fire retardant treated wood” in reference note (a) of table 600... Part I—Administrative, and the reference to fire retardant treated plywood in section 2504(c)3 and...

  13. 24 CFR 200.925c - Model codes.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ..., Administration, for the Building, Plumbing and Mechanical Codes and the references to fire retardant treated wood... each standard code and the phrase “or fire retardant treated wood” in reference note (a) of table 600... Part I—Administrative, and the reference to fire retardant treated plywood in section 2504(c)3 and...

  14. Incorporating uncertainty in predictive species distribution modelling

    PubMed Central

    Beale, Colin M.; Lennon, Jack J.

    2012-01-01

    Motivated by the need to solve ecological problems (climate change, habitat fragmentation and biological invasions), there has been increasing interest in species distribution models (SDMs). Predictions from these models inform conservation policy, invasive species management and disease-control measures. However, predictions are subject to uncertainty, the degree and source of which is often unrecognized. Here, we review the SDM literature in the context of uncertainty, focusing on three main classes of SDM: niche-based models, demographic models and process-based models. We identify sources of uncertainty for each class and discuss how uncertainty can be minimized or included in the modelling process to give realistic measures of confidence around predictions. Because this has typically not been performed, we conclude that uncertainty in SDMs has often been underestimated and a false precision assigned to predictions of geographical distribution. We identify areas where development of new statistical tools will improve predictions from distribution models, notably the development of hierarchical models that link different types of distribution model and their attendant uncertainties across spatial scales. Finally, we discuss the need to develop more defensible methods for assessing predictive performance, quantifying model goodness-of-fit and for assessing the significance of model covariates. PMID:22144387

  15. Applications of Transport/Reaction Codes to Problems in Cell Modeling

    SciTech Connect

    MEANS, SHAWN A.; RINTOUL, MARK DANIEL; SHADID, JOHN N.

    2001-11-01

    We demonstrate two specific examples that show how our existing capabilities in solving large systems of partial differential equations associated with transport/reaction systems can be easily applied to outstanding problems in computational biology. First, we examine a three-dimensional model for calcium wave propagation in a Xenopus laevis frog egg and verify that a proposed model for the distribution of calcium release sites agrees with experimental results as a function of both space and time. Next, we create a model of the neuron's terminus based on experimental observations and show that the sodium-calcium exchanger is not the route of sodium's modulation of neurotransmitter release. These state-of-the-art simulations were performed on massively parallel platforms and required almost no modification of existing Sandia codes.

  16. Modelling the Galactic distribution of free electrons

    NASA Astrophysics Data System (ADS)

    Schnitzeler, D. H. F. M.

    2012-11-01

    An accurate picture of how free electrons are distributed throughout the Milky Way leads to more reliable distances for pulsars and more accurate maps of the magnetic field distribution in the Milky Way. In this paper we test eight models of the free electron distribution in the Milky Way that have been published previously, and we introduce four additional models that explore the parameter space of possible models further. These new models consist of a simple exponential thick-disc model, and updated versions of the models by Taylor & Cordes and Cordes & Lazio with more extended thick discs. The final model we introduce uses the observed Hα intensity as a proxy for the total electron column density, also known as the dispersion measure (DM). Since accurate maps of Hα intensity are now available, this final model can in theory outperform the other models. We use the latest available data sets of pulsars with accurate distances (through parallax measurements or association with globular clusters) to optimize the parameters in these models. In the process of fitting a new scale height for the thick disc in the model by Cordes & Lazio, we discuss why this thick disc cannot be replaced by the thick disc that Gaensler et al. advocated in a recent paper. In the second part of our paper we test how well the different models can predict the DMs of these pulsars at known distances. We base our test on the ratios between the modelled and observed DMs, rather than on absolute deviations, and we identify systematic deviations between the modelled and observed DMs for the different models. For almost all models the ratio between the predicted and the observed DM cannot be described very well by a Gaussian distribution. We therefore calculate the deviations N between the modelled and observed DMs instead, and compare the cumulative distributions of N for the different models. Almost all models perform well, in that they predict DMs within a factor of 1.5-2 of the observed DMs
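    As a reminder of the quantity being modelled, the dispersion measure is the line-of-sight integral of the free electron density; the sketch below evaluates it for a simple plane-parallel exponential thick disc with placeholder parameters, not the fitted values from this work.

```python
import numpy as np

def dispersion_measure(distance_kpc, gal_latitude_deg, n0=0.02, scale_height_kpc=1.5):
    """DM = line-of-sight integral of the electron density, in pc cm^-3.

    Plane-parallel exponential thick disc n_e(z) = n0 * exp(-|z|/H); the values of
    n0 [cm^-3] and H [kpc] are placeholders, not the fitted ones from the paper.
    """
    s = np.linspace(0.0, distance_kpc, 2000)            # path length samples [kpc]
    z = s * np.sin(np.radians(gal_latitude_deg))        # height above the Galactic plane
    n_e = n0 * np.exp(-np.abs(z) / scale_height_kpc)
    ds_pc = np.diff(s) * 1000.0                         # kpc -> pc
    return np.sum(0.5 * (n_e[1:] + n_e[:-1]) * ds_pc)   # trapezoidal integration

print(dispersion_measure(distance_kpc=2.0, gal_latitude_deg=30.0))
```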

  17. The Physical Models and Statistical Procedures Used in the RACER Monte Carlo Code

    SciTech Connect

    Sutton, T.M.; Brown, F.B.; Bischoff, F.G.; MacMillan, D.B.; Ellis, C.L.; Ward, J.T.; Ballinger, C.T.; Kelly, D.J.; Schindler, L.

    1999-07-01

    This report describes the MCV (Monte Carlo - Vectorized)Monte Carlo neutron transport code [Brown, 1982, 1983; Brown and Mendelson, 1984a]. MCV is a module in the RACER system of codes that is used for Monte Carlo reactor physics analysis. The MCV module contains all of the neutron transport and statistical analysis functions of the system, while other modules perform various input-related functions such as geometry description, material assignment, output edit specification, etc. MCV is very closely related to the 05R neutron Monte Carlo code [Irving et al., 1965] developed at Oak Ridge National Laboratory. 05R evolved into the 05RR module of the STEMB system, which was the forerunner of the RACER system. Much of the overall logic and physics treatment of 05RR has been retained and, indeed, the original verification of MCV was achieved through comparison with STEMB results. MCV has been designed to be very computationally efficient [Brown, 1981, Brown and Martin, 1984b; Brown, 1986]. It was originally programmed to make use of vector-computing architectures such as those of the CDC Cyber- 205 and Cray X-MP. MCV was the first full-scale production Monte Carlo code to effectively utilize vector-processing capabilities. Subsequently, MCV was modified to utilize both distributed-memory [Sutton and Brown, 1994] and shared memory parallelism. The code has been compiled and run on platforms ranging from 32-bit UNIX workstations to clusters of 64-bit vector-parallel supercomputers. The computational efficiency of the code allows the analyst to perform calculations using many more neutron histories than is practical with most other Monte Carlo codes, thereby yielding results with smaller statistical uncertainties. MCV also utilizes variance reduction techniques such as survival biasing, splitting, and rouletting to permit additional reduction in uncertainties. While a general-purpose neutron Monte Carlo code, MCV is optimized for reactor physics calculations. It has the

  18. Applying various algorithms for species distribution modelling.

    PubMed

    Li, Xinhai; Wang, Yuan

    2013-06-01

    Species distribution models have been used extensively in many fields, including climate change biology, landscape ecology and conservation biology. In the past 3 decades, a number of new models have been proposed, yet researchers still find it difficult to select appropriate models for data and objectives. In this review, we aim to provide insight into the prevailing species distribution models for newcomers in the field of modelling. We compared 11 popular models, including regression models (the generalized linear model, the generalized additive model, the multivariate adaptive regression splines model and hierarchical modelling), classification models (mixture discriminant analysis, the generalized boosting model, and classification and regression tree analysis) and complex models (artificial neural network, random forest, genetic algorithm for rule set production and maximum entropy approaches). Our objectives are: (i) to compare the strengths and weaknesses of the models, their characteristics and identify suitable situations for their use (in terms of data type and species-environment relationships) and (ii) to provide guidelines for model application, including 3 steps: model selection, model formulation and parameter estimation. PMID:23731809
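    As a minimal illustration of such a comparison, the sketch below fits two of the reviewed model classes (a GLM and a random forest) to synthetic presence/absence data and scores them with AUC; real species distribution modelling would use occurrence records and environmental layers instead.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
env = rng.normal(size=(500, 4))                  # fake environmental predictors (temperature, rainfall, ...)
presence = (env[:, 0] - 0.5 * env[:, 1] ** 2 + rng.normal(scale=0.5, size=500)) > 0

X_tr, X_te, y_tr, y_te = train_test_split(env, presence, test_size=0.3, random_state=0)
for name, model in [("GLM", LogisticRegression(max_iter=1000)),
                    ("Random forest", RandomForestClassifier(n_estimators=200, random_state=0))]:
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])   # discrimination on held-out data
    print(f"{name}: AUC = {auc:.2f}")
```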

  20. Spectral and Structure Modeling of Low and High Mass Young Stars Using a Radiative Transfer Code

    NASA Astrophysics Data System (ADS)

    Robson Rocha, Will; Pilling, Sergio

    The spectroscopy data from space telescopes (ISO, Spitzer, Herschel) show that, in addition to dust grains (e.g. silicates), frozen molecular species (astrophysical ices, such as H₂O, CO, CO₂, CH₃OH) are also present in circumstellar environments. In this work we present a study of the modeling of low- and high-mass young stellar objects (YSOs), where we highlight the importance of using astrophysical ices processed by the radiation (UV, cosmic rays) coming from stars in the process of formation. This is important to characterize the physicochemical evolution of the ices distributed through the protostellar disk and its envelope in some situations. To perform this analysis, we gathered (i) observational data from the Infrared Space Observatory (ISO) related to the low-mass protostar Elias29 and the high-mass protostar W33A, (ii) experimental absorbance data in the infrared spectral range used to determine the optical constants of the materials observed around these objects, and (iii) a powerful radiative transfer code to simulate the astrophysical environment (RADMC-3D, Dullemond et al., 2012). Briefly, the radiative transfer calculation of the YSOs was done employing the RADMC-3D code. The model outputs were the spectral energy distribution and theoretical images of the studied objects at different wavelengths. The functionality of this code is based on the Monte Carlo methodology in addition to Mie theory for the interaction between radiation and matter. The observational data from different space telescopes were used as a reference for comparison with the modeled data. The optical constants in the infrared, used as input in the models, were calculated directly from absorbance data obtained in the laboratory for both unprocessed and processed simulated interstellar samples by using the NKABS code (Rocha & Pilling 2014). We show from this study that some absorption bands in the infrared, observed in the spectra of Elias29 and W33A, can arise after the ices

  1. Evolution and models for skewed parton distribution

    SciTech Connect

    Musatov, I.C.; Radyushkin, A.V.

    1999-05-17

    The authors discuss the structure of the "forward visible" (FW) parts of double and skewed distributions related to usual distributions through reduction relations. They use factorized models for double distributions (DDs) f̃(x,α) in which one factor coincides with the usual (forward) parton distribution and another specifies the profile characterizing the spread of the longitudinal momentum transfer. The model DDs are used to construct skewed parton distributions (SPDs). For small skewedness, the FW parts of SPDs H(x̃,ξ) can be obtained by averaging forward parton densities f(x̃-ξα) with the weight ρ(α) coinciding with the profile function of the double distribution f̃(x,α) at small x. They show that if the x^n moments f̃_n(α) of DDs have the asymptotic (1-α²)^(n+1) profile, then the α-profile of f̃(x,α) for small x is completely determined by the small-x behavior of the usual parton distribution. They demonstrate that, for small ξ, the model with asymptotic profiles for f̃_n(α) is equivalent to that proposed recently by Shuvaev et al., in which the Gegenbauer moments of SPDs do not depend on ξ. They perform a numerical investigation of the evolution patterns of SPDs and give an interpretation of the results of these studies within the formalism of double distributions.
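    Written out, the averaging relation described above, together with the asymptotic moment profile, reads as follows (a LaTeX transcription of the text, not an addition to the model):

```latex
% Small-skewedness reduction: the FW part of the SPD is an alpha-average of the
% forward density, weighted by the small-x profile rho(alpha) of the DD.
H(\tilde{x},\xi) \;\simeq\; \int_{-1}^{1} d\alpha\, \rho(\alpha)\, f(\tilde{x}-\xi\alpha),
\qquad
\tilde{f}_{n}(\alpha) \;\propto\; (1-\alpha^{2})^{\,n+1} \quad \text{(asymptotic profile of the } x^{n} \text{ moments).}
```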

  2. Fast-coding robust motion estimation model in a GPU

    NASA Astrophysics Data System (ADS)

    García, Carlos; Botella, Guillermo; de Sande, Francisco; Prieto-Matias, Manuel

    2015-02-01

    Nowadays vision systems are used for countless purposes. Moreover, motion estimation is a discipline that allows extracting relevant information such as pattern segmentation, 3D structure or object tracking. However, the real-time requirements of most applications have limited its consolidation, requiring the adoption of high-performance systems to meet response times. With the emergence of so-called highly parallel devices known as accelerators, this gap has narrowed. Two extreme endpoints in the spectrum of the most common accelerators are Field Programmable Gate Arrays (FPGAs) and Graphics Processing Units (GPUs), which usually offer higher performance rates than general-purpose processors. Moreover, the use of GPUs as accelerators requires the efficient exploitation of any parallelism in the target application. This task is not easy, because performance rates are affected by many aspects that programmers must overcome. In this paper, we evaluate the OpenACC standard, a directive-based programming model that favors porting any code to a GPU, in the context of a motion estimation application. The results confirm that this programming paradigm is suitable for such image processing applications, achieving very satisfactory acceleration in convolution-based problems such as the well-known Lucas & Kanade method.

  3. Code and Solution Verification of 3D Numerical Modeling of Flow in the Gust Erosion Chamber

    NASA Astrophysics Data System (ADS)

    Yuen, A.; Bombardelli, F. A.

    2014-12-01

    Erosion microcosms are devices commonly used to investigate the erosion and transport characteristics of sediments at the bed of rivers, lakes, or estuaries. In order to understand the results these devices provide, the bed shear stress and flow field need to be accurately described. In this research, the UMCES Gust Erosion Microcosm System (U-GEMS) is numerically modeled using the Finite Volume Method. The primary aims are to simulate the bed shear stress distribution at the surface of the sediment core/bottom of the microcosm, and to validate that the U-GEMS produces uniform bed shear stress at the bottom of the microcosm. The mathematical model equations are solved on a Cartesian non-uniform grid. Multiple numerical runs were developed with different input conditions and configurations. Prior to developing the U-GEMS model, the General Moving Objects (GMO) model and different momentum algorithms in the code were verified. Code verification of these solvers was done by simulating the flow inside the top-wall-driven square cavity on different mesh sizes to obtain the order of convergence. The GMO model was used to simulate the top wall in the top-wall-driven square cavity as well as the rotating disk in the U-GEMS. Components simulated with the GMO model were rigid bodies that could have any type of motion. In addition, cross-verification was conducted as results were compared with the numerical results of Ghia et al. (1982), and good agreement was found. Next, CFD results were validated by simulating the flow within the conventional microcosm system without suction and injection. Good agreement was found when compared with the experimental results of Khalili et al. (2008). After the ability of the CFD solver was proven through the above code verification steps, the model was used to simulate the U-GEMS. The solution was verified via a classic mesh convergence study on four consecutive mesh sizes; in addition, the Grid Convergence Index (GCI) was calculated and based on
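    For reference, the observed order of accuracy and the Grid Convergence Index mentioned above are commonly computed from three systematically refined grids as in the sketch below; the solution values are made up for illustration and are not results of this study.

```python
import math

def grid_convergence_index(f_fine, f_medium, f_coarse, refinement_ratio=2.0, safety=1.25):
    """Observed order of accuracy and GCI from three systematically refined grids.

    Standard Roache-style estimate for a constant refinement ratio r; the safety
    factor 1.25 is the usual choice for three-grid studies.
    """
    r = refinement_ratio
    p = math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)
    rel_err = abs((f_medium - f_fine) / f_fine)
    gci_fine = safety * rel_err / (r ** p - 1.0)
    return p, gci_fine

# Illustrative bed-shear-stress values from coarse, medium and fine meshes (made up)
p, gci = grid_convergence_index(f_fine=0.412, f_medium=0.405, f_coarse=0.380)
print(f"observed order p = {p:.2f}, GCI_fine = {100 * gci:.2f}%")
```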

  4. Challenges and perspectives for species distribution modelling in the neotropics.

    PubMed

    Kamino, Luciana H Y; Stehmann, João Renato; Amaral, Silvana; De Marco, Paulo; Rangel, Thiago F; de Siqueira, Marinez F; De Giovanni, Renato; Hortal, Joaquín

    2012-06-23

    The workshop 'Species distribution models: applications, challenges and perspectives' held at Belo Horizonte (Brazil), 29-30 August 2011, aimed to review the state-of-the-art in species distribution modelling (SDM) in the neotropical realm. It brought together researchers in ecology, evolution, biogeography and conservation, with different backgrounds and research interests. The application of SDM in the megadiverse neotropics-where data on species occurrences are scarce-presents several challenges, involving acknowledging the limitations imposed by data quality, including surveys as an integral part of SDM studies, and designing the analyses in accordance with the question investigated. Specific solutions were discussed, and a code of good practice in SDM studies and related field surveys was drafted.

  5. A dynamic p53-mdm2 model with distributed delay

    NASA Astrophysics Data System (ADS)

    Horhat, Raluca; Horhat, Raul Florin

    2014-12-01

    Specific activator and repressor transcription factors, which bind to specific regulatory DNA sequences, play an important role in the control of gene activity. Interactions between genes coding for such transcription factors should explain the different stable or sometimes oscillatory gene activities characteristic of different tissues. In this paper, the dynamic p53-Mdm2 interaction model with distributed delays is investigated. Both weak and Dirac kernels are taken into consideration. For the Dirac case, the Hopf bifurcation is investigated. Some numerical examples are finally given to justify the theoretical results.
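    Numerically, a weak (exponential) kernel can be handled with the standard linear chain trick, which replaces the distributed delay by an auxiliary ordinary differential equation; the sketch below applies it to a generic negative-feedback pair with made-up rates and is not the paper's p53-Mdm2 system.

```python
# Linear chain trick for a weak (exponential) delay kernel: the distributed delay
# integral_0^inf a*exp(-a*s)*x(t-s) ds is replaced by a variable z obeying
# dz/dt = a*(x - z). Generic negative-feedback pair, NOT the paper's equations;
# all rates are made-up illustrative values.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, state, a=0.5, k=2.0, d=1.0):
    x, y, z = state                      # y is driven by a distributed lag z of x
    dx = k / (1.0 + y ** 2) - d * x      # y represses x (Hill-type feedback)
    dy = z - d * y                       # y produced from the delayed signal z
    dz = a * (x - z)                     # weak-kernel memory variable
    return [dx, dy, dz]

sol = solve_ivp(rhs, (0.0, 100.0), [0.1, 0.0, 0.0], max_step=0.1)
print(sol.y[:, -1])                      # final state of (x, y, z)
```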

  6. Population distribution models: species distributions are better modeled using biologically relevant data partitions

    PubMed Central

    2011-01-01

    Background: Predicting the geographic distribution of widespread species through modeling is problematic for several reasons including high rates of omission errors. One potential source of error for modeling widespread species is that subspecies and/or races of species are frequently pooled for analyses, which may mask biologically relevant spatial variation within the distribution of a single widespread species. We contrast a presence-only maximum entropy model for the widely distributed oldfield mouse (Peromyscus polionotus) that includes all available presence locations for this species, with two composite maximum entropy models. The composite models either subdivided the total species distribution into four geographic quadrants or by fifteen subspecies to capture spatially relevant variation in P. polionotus distributions. Results: Despite high Area Under the ROC Curve (AUC) values for all models, the composite species distribution model of P. polionotus generated from individual subspecies models represented the known distribution of the species much better than did the models produced by partitioning data into geographic quadrants or modeling the whole species as a single unit. Conclusions: Because the AUC values failed to describe the differences in the predictability of the three modeling strategies, we suggest using omission curves in addition to AUC values to assess model performance. Dividing the data of a widespread species into biologically relevant partitions greatly increased the performance of our distribution model; therefore, this approach may prove to be quite practical and informative for a wide range of modeling applications. PMID:21929792

  7. A Combinatorial Geometry Code System with Model Testing Routines.

    1982-10-08

    GIFT, Geometric Information For Targets code system, is used to mathematically describe the geometry of a three-dimensional vehicle such as a tank, truck, or helicopter. The geometric data generated is merged in vulnerability computer codes with the energy effects data of a selected munition to simulate the probabilities of malfunction or destruction of components when it is attacked by the selected munition. GIFT options include those which graphically display the vehicle, those which check the correctness of the geometry data, those which compute physical characteristics of the vehicle, and those which generate the geometry data used by vulnerability codes.

  8. Statistical model with a standard Γ distribution

    NASA Astrophysics Data System (ADS)

    Patriarca, Marco; Chakraborti, Anirban; Kaski, Kimmo

    2004-07-01

    We study a statistical model consisting of N basic units which interact with each other by exchanging a physical entity, according to a given microscopic random law, depending on a parameter λ . We focus on the equilibrium or stationary distribution of the entity exchanged and verify through numerical fitting of the simulation data that the final form of the equilibrium distribution is that of a standard Gamma distribution. The model can be interpreted as a simple closed economy in which economic agents trade money and a saving criterion is fixed by the saving propensity λ . Alternatively, from the nature of the equilibrium distribution, we show that the model can also be interpreted as a perfect gas at an effective temperature T(λ) , where particles exchange energy in a space with an effective dimension D(λ) .
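    A minimal simulation of the exchange rule described above (each agent keeps a fixed fraction λ of its holdings and the remainder of the pair's pool is split at random) is sketched below; parameter values are illustrative.

```python
# Minimal simulation of the fixed-saving-propensity exchange model described above:
# random pairs conserve their combined "money", each keeping a fraction lam and
# redistributing the rest at random. The equilibrium histogram is well fitted by a
# Gamma distribution whose shape grows with lam.
import numpy as np

def simulate_exchange(n_agents=1000, lam=0.5, steps=200_000, seed=0):
    rng = np.random.default_rng(seed)
    m = np.ones(n_agents)                               # everyone starts with 1 unit
    for _ in range(steps):
        i, j = rng.integers(n_agents, size=2)
        if i == j:
            continue
        eps = rng.random()
        pool = (1.0 - lam) * (m[i] + m[j])              # amount put up for exchange
        m[i], m[j] = lam * m[i] + eps * pool, lam * m[j] + (1.0 - eps) * pool
    return m

money = simulate_exchange()
print(money.mean(), money.var())                        # mean stays 1; variance shrinks as lam grows
```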

  9. Improving Distributed Diagnosis Through Structural Model Decomposition

    NASA Technical Reports Server (NTRS)

    Bregon, Anibal; Daigle, Matthew John; Roychoudhury, Indranil; Biswas, Gautam; Koutsoukos, Xenofon; Pulido, Belarmino

    2011-01-01

    Complex engineering systems require efficient fault diagnosis methodologies, but centralized approaches do not scale well, and this motivates the development of distributed solutions. This work presents an event-based approach for distributed diagnosis of abrupt parametric faults in continuous systems, by using the structural model decomposition capabilities provided by Possible Conflicts. We develop a distributed diagnosis algorithm that uses residuals computed by extending Possible Conflicts to build local event-based diagnosers based on global diagnosability analysis. The proposed approach is applied to a multitank system, and results demonstrate an improvement in the design of local diagnosers. Since local diagnosers use only a subset of the residuals, and use subsystem models to compute residuals (instead of the global system model), the local diagnosers are more efficient than previously developed distributed approaches.

  10. Scaling properties and fractality in the distribution of coding segments in eukaryotic genomes revealed through a block entropy approach

    NASA Astrophysics Data System (ADS)

    Athanasopoulou, Labrini; Athanasopoulos, Stavros; Karamanos, Kostas; Almirantis, Yannis

    2010-11-01

    Statistical methods, including block entropy based approaches, have already been used in the study of long-range features of genomic sequences seen as symbol series, either considering the full alphabet of the four nucleotides or the binary purine or pyrimidine character set. Here we explore the alternation of short protein-coding segments with long noncoding spacers in entire chromosomes, focusing on the scaling properties of block entropy. In previous studies, it has been shown that the sizes of noncoding spacers follow power-law-like distributions in most chromosomes of eukaryotic organisms from distant taxa. We have developed a simple evolutionary model based on well-known molecular events (segmental duplications followed by elimination of most of the duplicated genes) which reproduces the observed linearity in log-log plots. The scaling properties of block entropy H(n) have been studied in several works. Their findings suggest that linearity in semilogarithmic scale characterizes symbol sequences which exhibit fractal properties and long-range order, while this linearity has been shown in the case of the logistic map at the Feigenbaum accumulation point. The present work starts with the observation that the block entropy of the Cantor-like binary symbol series scales in a similar way. Then, we perform the same analysis for the full set of human chromosomes and for several chromosomes of other eukaryotes. A similar but less extended linearity in semilogarithmic scale, indicating fractality, is observed, while randomly formed surrogate sequences clearly lack this type of scaling. Genomic sequences always present entropy values much lower than their random surrogates. Symbol sequences produced by the aforementioned evolutionary model follow the scaling found in genomic sequences, thus corroborating the conjecture that “segmental duplication-gene elimination” dynamics may have contributed to the observed long rangeness in the coding or noncoding alternation in
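    The block entropy H(n) used above can be computed directly from the block frequencies of a binary (coding/noncoding) symbol series, as in the sketch below; the random example sequence is only a placeholder for a real chromosome annotation.

```python
# Block entropy H(n) of a binary symbol series (e.g. coding = "1" / noncoding = "0"),
# as used above to probe fractal-like ordering; for the random example sequence H(n)
# grows linearly in n rather than logarithmically.
import math
import random
from collections import Counter

def block_entropy(seq, n):
    """Shannon entropy (in bits) of the distribution of length-n blocks."""
    blocks = Counter(seq[i:i + n] for i in range(len(seq) - n + 1))
    total = sum(blocks.values())
    return -sum((c / total) * math.log2(c / total) for c in blocks.values())

random.seed(0)
seq = "".join(random.choice("01") for _ in range(20000))
for n in range(1, 8):
    print(n, round(block_entropy(seq, n), 3))
```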

  11. MIG version 0.0 model interface guidelines: Rules to accelerate installation of numerical models into any compliant parent code

    SciTech Connect

    Brannon, R.M.; Wong, M.K.

    1996-08-01

    A set of model interface guidelines, called MIG, is presented as a means by which any compliant numerical material model can be rapidly installed into any parent code without having to modify the model subroutines. Here, "model" usually means a material model such as one that computes stress as a function of strain, though the term may be extended to any numerical operation. "Parent code" means a hydrocode, finite element code, etc. which uses the model and enforces, say, the fundamental laws of motion and thermodynamics. MIG requires the model developer (who creates the model package) to specify model needs in a standardized but flexible way. MIG includes a dictionary of technical terms that allows developers and parent code architects to share a common vocabulary when specifying field variables. For portability, database management is the responsibility of the parent code. Input/output occurs via structured calling arguments. As much model information as possible (such as the lists of required inputs, as well as lists of precharacterized material data and special needs) is supplied by the model developer in an ASCII text file. Every MIG-compliant model also has three required subroutines to check data, to request extra field variables, and to perform model physics. To date, the MIG scheme has proven flexible in beta installations of a simple yield model, plus a more complicated viscodamage yield model, three electromechanical models, and a complicated anisotropic microcrack constitutive model. The MIG yield model has been successfully installed using identical subroutines in three vectorized parent codes and one parallel C++ code, all predicting comparable results. By maintaining one model for many codes, MIG facilitates code-to-code comparisons and reduces duplication of effort, thereby reducing the cost of installing and sharing models in diverse new codes.

  12. On distributed memory MPI-based parallelization of SPH codes in massive HPC context

    NASA Astrophysics Data System (ADS)

    Oger, G.; Le Touzé, D.; Guibert, D.; de Leffe, M.; Biddiscombe, J.; Soumagne, J.; Piccinali, J.-G.

    2016-03-01

    Most particle methods share the problem of high computational cost, and in order to satisfy the demands of solvers, currently available hardware technologies must be fully exploited. Two complementary technologies are now accessible. On the one hand, CPUs can be structured into a multi-node framework, allowing massive data exchanges through a high-speed network; in this case, each node usually comprises several cores available for multithreaded computations. On the other hand, GPUs, derived from graphics computing technologies, are able to perform highly multithreaded calculations with hundreds of independent threads connected through a common shared memory. This paper is primarily dedicated to the distributed memory parallelization of particle methods, targeting several thousands of CPU cores. The experience gained clearly shows that parallelizing a particle-based code on a moderate number of cores can easily lead to acceptable scalability, whilst a scalable speedup on thousands of cores is much more difficult to obtain. The discussion revolves around speeding up particle methods as a whole, in a massive HPC context, by making use of the MPI library. We focus on one particular particle method, Smoothed Particle Hydrodynamics (SPH), one of the most widespread today in the literature as well as in engineering.

  13. Entanglement distribution over quantum code-division multiple-access networks

    NASA Astrophysics Data System (ADS)

    Zhu, Chang-long; Yang, Nan; Liu, Yu-xi; Nori, Franco; Zhang, Jing

    2015-10-01

    We present a method for quantum entanglement distribution over a so-called code-division multiple-access network, in which two pairs of users share the same quantum channel to transmit information. The main idea of this method is to use different broadband chaotic phase shifts, generated by electro-optic modulators and chaotic Colpitts circuits, to encode the information-bearing quantum signals coming from different users and then recover the masked quantum signals at the receiver side by imposing opposite chaotic phase shifts. The chaotic phase shifts given to different pairs of users are almost uncorrelated due to the randomness of chaos, and thus the quantum signals from different pairs of users can be distinguished even when they are sent via the same quantum channel. It is shown that two maximally entangled states can be generated between the two pairs of users by our method, mediated by bright coherent lights, which can be more easily implemented in experiments compared with single-photon light sources. Our method is robust to channel noise provided that the decay rates of the information-bearing fields induced by the noise are not too high. Our study opens up new perspectives for addressing and transmitting quantum information in future quantum networks.

  14. The spatial distribution of fixed mutations within genes coding for proteins

    NASA Technical Reports Server (NTRS)

    Holmquist, R.; Goodman, M.; Conroy, T.; Czelusniak, J.

    1983-01-01

    An examination has been conducted of the extensive amino acid sequence data now available for five protein families - the alpha crystallin A chain, myoglobin, alpha and beta hemoglobin, and the cytochromes c - with the goal of estimating the true spatial distribution of base substitutions within genes that code for proteins. In every case the commonly used Poisson density failed to even approximate the experimental pattern of base substitution. For the 87 species of beta hemoglobin examined, for example, the probability that the observed results came from a Poisson process was a minuscule 10^(-44). Analogous results were obtained for the other functional families. All the data were reasonably, but not perfectly, described by the negative binomial density. In particular, most of the data were described by one of the very simple limiting forms of this density, the geometric density. The implications of this for evolutionary inference are discussed. It is evident that most estimates of total base substitutions between genes are badly in need of revision.
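
    As a toy illustration of the kind of comparison described above, the sketch below fits both a Poisson and a geometric density to a vector of per-site substitution counts and compares log-likelihoods; the counts are synthetic and overdispersed by construction, not the hemoglobin data analyzed in the paper.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(2)
      # Synthetic per-site substitution counts with more spread than a Poisson allows
      counts = rng.negative_binomial(n=1, p=0.3, size=500)

      lam = counts.mean()                                     # Poisson MLE
      ll_poisson = stats.poisson.logpmf(counts, lam).sum()

      p_geom = 1.0 / (1.0 + counts.mean())                    # geometric MLE (support 0, 1, 2, ...)
      ll_geom = stats.geom.logpmf(counts + 1, p_geom).sum()   # scipy's geom starts at 1, so shift

      print(f"log-likelihood Poisson:   {ll_poisson:.1f}")
      print(f"log-likelihood geometric: {ll_geom:.1f}")       # typically much higher for overdispersed counts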

  15. Reconstruction for distributed video coding: a Markov random field approach with context-adaptive smoothness prior

    NASA Astrophysics Data System (ADS)

    Zhang, Yongsheng; Xiong, Hongkai; He, Zhihai; Yu, Songyu

    2010-07-01

    An important issue in Wyner-Ziv video coding is the reconstruction of Wyner-Ziv frames from decoded bit-planes. So far, there are two major approaches: the maximum a posteriori (MAP) reconstruction and the minimum mean square error (MMSE) reconstruction algorithms. However, these approaches do not exploit smoothness constraints in natural images. In this paper, we model a Wyner-Ziv frame by a Markov random field (MRF) and produce reconstruction results by finding a MAP estimate of the MRF model. In the MRF model, the energy function consists of two terms: a data term (an MSE distortion metric in this paper) measuring the statistical correlation between the side information and the source, and a smoothness term enforcing spatial coherence. In order to better describe the spatial constraints of images, we propose a context-adaptive smoothness term by analyzing the correspondence between the output of Slepian-Wolf decoding and successive frames available at the decoder. The significance of the smoothness term varies in accordance with the spatial variation within different regions. To some extent, the proposed approach is an extension of the MAP and MMSE approaches that exploits the intrinsic smoothness characteristic of natural images. Experimental results demonstrate a considerable performance gain compared with the MAP and MMSE approaches.
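
    The energy formulation sketched above (a quadratic data term plus a spatial smoothness term) can be minimized in a simplified grayscale setting by coordinate descent, as in the sketch below. The uniform smoothness weight, the quadratic (non-context-adaptive) prior, and the update rule are illustrative assumptions and do not reproduce the paper's Slepian-Wolf bin constraints or its context-adaptive term.

      import numpy as np

      def map_reconstruct(side_info, beta=0.3, iters=50):
          """Minimize E(x) = sum (x - side_info)^2 + beta * sum over 4-neighbors (x_p - x_q)^2
          by per-pixel coordinate descent (a quadratic-MRF stand-in for MAP reconstruction)."""
          x = side_info.astype(float).copy()
          H, W = x.shape
          for _ in range(iters):
              for i in range(H):
                  for j in range(W):
                      nbrs = []
                      if i > 0:     nbrs.append(x[i - 1, j])
                      if i < H - 1: nbrs.append(x[i + 1, j])
                      if j > 0:     nbrs.append(x[i, j - 1])
                      if j < W - 1: nbrs.append(x[i, j + 1])
                      # closed-form minimizer of the local quadratic energy
                      x[i, j] = (side_info[i, j] + beta * sum(nbrs)) / (1.0 + beta * len(nbrs))
          return x

      noisy = np.random.default_rng(3).normal(128, 20, size=(32, 32))
      print(map_reconstruct(noisy).std(), noisy.std())   # smoothed field has lower variance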

  16. A Robust Model-Based Coding Technique for Ultrasound Video

    NASA Technical Reports Server (NTRS)

    Docef, Alen; Smith, Mark J. T.

    1995-01-01

    This paper introduces a new approach to coding ultrasound video, the intended application being very low bit rate coding for transmission over low cost phone lines. The method exploits both the characteristic noise and the quasi-periodic nature of the signal. Data compression ratios between 250:1 and 1000:1 are shown to be possible, which is sufficient for transmission over ISDN and conventional phone lines. Preliminary results show this approach to be promising for remote ultrasound examinations.

  17. Modelling 2001 lahars at Popocatépetl volcano using FLO2D numerical code

    NASA Astrophysics Data System (ADS)

    Caballero, L.; Capra, L.

    2013-12-01

    Popocatépetl volcano is located in the central part of the Transmexican Volcanic Belt. It is one of the most active volcanoes in Mexico and endangers more than 25 million people who live in its surroundings. In recent months, the renewal of its volcanic activity has put the scientific community on alert. One of the possible scenarios is a repeat of the 2001 explosive activity, which was characterized by an 8 km eruptive column and the subsequent formation of pumice flows up to 4 km from the crater. Lahars were generated a few hours later, remobilizing the new deposits on the NE flank of the volcano, along Huiloac Gorge, almost reaching the town of Santiago Xalitzintla (Capra et al., 2004). The possibility of a similar scenario makes it very important to reproduce this event in order to delimit lahar hazard zones accurately. In this work, the 2001 lahar deposit is modeled using the FLO2D numerical code. Geophone data are used to reconstruct the initial hydrograph and sediment concentration. A sensitivity study of the most important parameters used by this code, such as the Manning coefficient and the α and β coefficients, was conducted in order to achieve a good simulation. The results obtained were compared with field data and show good agreement in thickness and flow distribution. A comparison with data previously published using the laharZ program (Muñoz-Salinas, 2009) is also made. Additionally, lahars with fluctuating sediment concentrations but similar volumes are simulated to observe the influence of rheological behavior on lahar distribution.

  18. Distributed lag models for hydrological data.

    PubMed

    Rushworth, Alastair M; Bowman, Adrian W; Brewer, Mark J; Langan, Simon J

    2013-06-01

    The distributed lag model (DLM), used most prominently in air pollution studies, finds application wherever the effect of a covariate is delayed and distributed through time. We specify modified formulations of DLMs to provide computationally attractive, flexible varying-coefficient models that are applicable in any setting in which lagged covariates are regressed on a time-dependent response. We investigate the application of such models to rainfall and river flow and, in particular, their role in understanding the impact of hidden variables at work in river systems. We apply two models to data from a Scottish mountain river, and we fit the models to simulated data to check the efficacy of our modeling approach. During heavy rainfall conditions, changes in the influence of rainfall on flow arise through a complex interaction between antecedent ground wetness and a time delay in rainfall. The models identify subtle changes in responsiveness to rainfall, particularly in the location of peak influence in the lag structure.
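
    A minimal distributed-lag fit of flow on lagged rainfall can be written as an ordinary least-squares problem on a lag matrix, as sketched below; the synthetic data, the maximum lag of 4, and the unpenalized OLS estimator are illustrative simplifications of the flexible varying-coefficient formulation used in the paper.

      import numpy as np

      def lag_matrix(x, max_lag):
          """Columns are x lagged by 0..max_lag (rows with incomplete history are dropped)."""
          n = len(x)
          return np.column_stack([x[max_lag - k : n - k] for k in range(max_lag + 1)])

      rng = np.random.default_rng(4)
      rain = rng.gamma(2.0, 1.0, size=500)
      true_lag_effects = np.array([0.5, 0.9, 0.6, 0.3, 0.1])            # peak influence at lag 1
      flow = lag_matrix(rain, 4) @ true_lag_effects + rng.normal(0, 0.2, size=500 - 4)

      X = lag_matrix(rain, 4)
      beta_hat, *_ = np.linalg.lstsq(X, flow, rcond=None)
      print(np.round(beta_hat, 2))    # recovered lag structure, peak near lag 1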

  19. Distributed Prognostics based on Structural Model Decomposition

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew J.; Bregon, Anibal; Roychoudhury, I.

    2014-01-01

    Within systems health management, prognostics focuses on predicting the remaining useful life of a system. In the model-based prognostics paradigm, physics-based models are constructed that describe the operation of a system and how it fails. Such approaches consist of an estimation phase, in which the health state of the system is first identified, and a prediction phase, in which the health state is projected forward in time to determine the end of life. Centralized solutions to these problems are often computationally expensive, do not scale well as the size of the system grows, and introduce a single point of failure. In this paper, we propose a novel distributed model-based prognostics scheme that formally describes how to decompose both the estimation and prediction problems into independent local subproblems whose solutions may be easily composed into a global solution. The decomposition of the prognostics problem is achieved through structural decomposition of the underlying models. The decomposition algorithm creates from the global system model a set of local submodels suitable for prognostics. Independent local estimation and prediction problems are formed based on these local submodels, resulting in a scalable distributed prognostics approach that allows the local subproblems to be solved in parallel, thus offering increases in computational efficiency. Using a centrifugal pump as a case study, we perform a number of simulation-based experiments to demonstrate the distributed approach, compare the performance with a centralized approach, and establish its scalability. Index Terms: model-based prognostics, distributed prognostics, structural model decomposition.

  20. Multiple-source models for electron beams of a medical linear accelerator using BEAMDP computer code

    PubMed Central

    Jabbari, Nasrollah; Barati, Amir Hoshang; Rahmatnezhad, Leili

    2012-01-01

    Aim The aim of this work was to develop multiple-source models for electron beams of the NEPTUN 10PC medical linear accelerator using the BEAMDP computer code. Background One of the most accurate techniques of radiotherapy dose calculation is Monte Carlo (MC) simulation of radiation transport, which requires detailed information on the beam in the form of a phase-space file. The computing time required to simulate the beam data and obtain phase-space files from a clinical accelerator is significant. Calculation of dose distributions using multiple-source models is an alternative to using phase-space data as direct input to the dose calculation system. Materials and methods Monte Carlo simulation of the accelerator head was performed, keeping a record of the particle phase space with the details of each particle's history. Multiple-source models were built from the phase-space files of the Monte Carlo simulations. These simplified beam models were used to generate Monte Carlo dose calculations and to compare those calculations with phase-space data for electron beams. Results Comparison of the measured and calculated dose distributions using the phase-space files and multiple-source models for three electron beam energies showed that the measured and calculated values match each other well throughout the curves. Conclusion It was found that dose distributions calculated using both the multiple-source models and the phase-space data agree within 1.3%, demonstrating that the models can be used for dosimetry research purposes and dose calculations in radiotherapy. PMID:24377026

  1. Analytic modeling of aerosol size distributions

    NASA Technical Reports Server (NTRS)

    Deepack, A.; Box, G. P.

    1979-01-01

    Mathematical functions commonly used for representing aerosol size distributions are studied parametrically. Methods for obtaining best fit estimates of the parameters are described. A catalog of graphical plots depicting the parametric behavior of the functions is presented along with procedures for obtaining analytical representations of size distribution data by visual matching of the data with one of the plots. Examples of fitting the same data with equal accuracy by more than one analytic model are also given.
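
    As a small illustration of fitting one commonly used analytic form, the sketch below estimates lognormal parameters for a synthetic sample of particle diameters by maximum likelihood; the data and parameter values are invented for demonstration and are unrelated to the catalog and visual-matching procedure described in the report.

      import numpy as np

      rng = np.random.default_rng(5)
      diameters = rng.lognormal(mean=np.log(0.3), sigma=0.6, size=2000)   # microns, synthetic

      # Maximum-likelihood estimates of the lognormal parameters
      mu_hat = np.mean(np.log(diameters))
      sigma_hat = np.std(np.log(diameters))
      print(f"median diameter ~ {np.exp(mu_hat):.3f} um, geometric std ~ {np.exp(sigma_hat):.2f}")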

  2. Modeling the Pion Generalized Parton Distribution

    NASA Astrophysics Data System (ADS)

    Mezrag, C.

    2016-02-01

    We compute the pion Generalized Parton Distribution (GPD) in a valence dressed quarks approach. We model the Mellin moments of the GPD using Ansätze for Green functions inspired by the numerical solutions of the Dyson-Schwinger Equations (DSE) and the Bethe-Salpeter Equation (BSE). Then, the GPD is reconstructed from its Mellin moment using the Double Distribution (DD) formalism. The agreement with available experimental data is very good.

  3. Application distribution model and related security attacks in VANET

    NASA Astrophysics Data System (ADS)

    Nikaein, Navid; Kanti Datta, Soumya; Marecar, Irshad; Bonnet, Christian

    2013-03-01

    In this paper, we present a model for application distribution and related security attacks in dense vehicular ad hoc networks (VANET) and in sparse VANET, which forms a delay tolerant network (DTN). We study the vulnerabilities of VANET to evaluate attack scenarios and introduce a new attacker's model as an extension of the work done in [6]. A VANET model is then proposed that supports application distribution through proxy app stores on top of mobile platforms installed in vehicles. The steps of application distribution are studied in detail. We identify key attacks (e.g., malware, spamming and phishing, software attacks, and threats to location privacy) for dense VANET and two attack scenarios for sparse VANET. It is shown that attacks can be launched by distributing malicious applications and injecting malicious code into the On-Board Unit (OBU) by exploiting OBU software security holes. The consequences of such security attacks are described. Finally, countermeasures, including the concept of a sandbox, are also presented in depth.

  4. Evolutionary model of the personal income distribution

    NASA Astrophysics Data System (ADS)

    Kaldasch, Joachim

    2012-11-01

    The aim of this work is to develop a qualitative picture of the personal income distribution. Treating an economy as a self-organized system the key idea of the model is that the income distribution contains competitive and non-competitive contributions. The presented model distinguishes between three main income classes. 1. Capital income from private firms is shown to be the result of an evolutionary competition between products. A direct consequence of this competition is Gibrat’s law suggesting a lognormal income distribution for small private firms. Taking into account an additional preferential attachment mechanism for large private firms the income distribution is supplemented by a power law (Pareto) tail. 2. Due to the division of labor a diversified labor market is seen as a non-competitive market. In this case wage income exhibits an exponential distribution. 3. Also included is income from a social insurance system. It can be approximated by a Gaussian peak. A consequence of this theory is that for short time intervals a fixed ratio of total labor (total capital) to net income exists (Cobb-Douglas relation). A comparison with empirical high resolution income data confirms this pattern of the total income distribution. The theory suggests that competition is the ultimate origin of the uneven income distribution.

  5. Species Distribution Modeling of Deep Pelagic Eels.

    PubMed

    DeVaney, Shannon C

    2016-10-01

    The ocean's midwaters (the mesopelagic and bathypelagic zones) make up the largest living space on the planet, but are undersampled and relatively poorly understood. The true distribution of many midwater species, let alone the abiotic factors most important in determining that distribution, is not well known. Because collecting specimens and data from the deep ocean is expensive and logistically difficult, it would be useful to be able to predict where species of interest are likely to occur so that sampling effort can be concentrated in appropriate areas. The distribution of two representative midwater fishes, the gulper eel Eurypharynx pelecanoides and the bobtail eel Cyema atrum (Teleostei: Saccopharyngiformes), were modeled with MaxEnt software to examine the viability of species distribution modeling (SDM) for globally distributed midwater fishes using currently available environmental data from the ocean surface and bottom. These species were chosen because they are relatively abundant, easily recognized, and unlikely to have been misidentified in database records, and are true midwater fishes, not known to undertake significant vertical diurnal migration. Models for both species show a generally worldwide distribution with some exceptions, including the Southern Ocean and Bering Sea. Variable contributions show that surface and bottom environmental variables correlate with species presence. Both species are more likely to be found in areas with low levels of silicate. SDM is a promising method for better understanding the ecology of midwater organisms.

  6. The non-power model of the genetic code: a paradigm for interpreting genomic information.

    PubMed

    Gonzalez, Diego Luis; Giannerini, Simone; Rosa, Rodolfo

    2016-03-13

    In this article, we present a mathematical framework based on redundant (non-power) representations of integer numbers as a paradigm for the interpretation of genomic information. The core of the approach relies on modelling the degeneracy of the genetic code. The model allows one to explain many features and symmetries of the genetic code and to uncover hidden symmetries. Also, it provides us with new tools for the analysis of genomic sequences. We review briefly three main areas: (i) the Euplotid nuclear code, (ii) the vertebrate mitochondrial code, and (iii) the main coding/decoding strategies used in the three domains of life. In every case, we show how the non-power model is a natural unified framework for describing degeneracy and deriving sound biological hypotheses on protein coding. The approach is rooted on number theory and group theory; nevertheless, we have kept the technical level to a minimum by focusing on key concepts and on the biological implications. PMID:26857679

  7. Physical Model for the Evolution of the Genetic Code

    NASA Astrophysics Data System (ADS)

    Yamashita, Tatsuro; Narikiyo, Osamu

    2011-12-01

    Using the shape space of codons and tRNAs we give a physical description of the genetic code evolution on the basis of the codon capture and ambiguous intermediate scenarios in a consistent manner. In the lowest dimensional version of our description, a physical quantity, codon level is introduced. In terms of the codon levels two scenarios are typically classified into two different routes of the evolutional process. In the case of the ambiguous intermediate scenario we perform an evolutional simulation implemented cost selection of amino acids and confirm a rapid transition of the code change. Such rapidness reduces uncomfortableness of the non-unique translation of the code at intermediate state that is the weakness of the scenario. In the case of the codon capture scenario the survival against mutations under the mutational pressure minimizing GC content in genomes is simulated and it is demonstrated that cells which experience only neutral mutations survive.

  8. Distributed Wind Diffusion Model Overview (Presentation)

    SciTech Connect

    Preus, R.; Drury, E.; Sigrin, B.; Gleason, M.

    2014-07-01

    Distributed wind market demand is driven by current and future wind price and performance, along with several non-price market factors like financing terms, retail electricity rates and rate structures, future wind incentives, and others. We developed a new distributed wind technology diffusion model for the contiguous United States that combines hourly wind speed data at 200m resolution with high resolution electricity load data for various consumer segments (e.g., residential, commercial, industrial), electricity rates and rate structures for utility service territories, incentive data, and high resolution tree cover. The model first calculates the economics of distributed wind at high spatial resolution for each market segment, and then uses a Bass diffusion framework to estimate the evolution of market demand over time. The model provides a fundamental new tool for characterizing how distributed wind market potential could be impacted by a range of future conditions, such as electricity price escalations, improvements in wind generator performance and installed cost, and new financing structures. This paper describes model methodology and presents sample results for distributed wind market potential in the contiguous U.S. through 2050.
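
    The Bass diffusion stage of the workflow described above can be sketched as a simple yearly recursion for the adopted fraction of an addressable market; the coefficients of innovation and imitation and the market size below are illustrative placeholders, not values from the NREL model.

      def bass_adoption(p=0.003, q=0.4, market_size=100000, years=35):
          """Cumulative adopters per year under the Bass diffusion model:
          dF/dt = (p + q*F) * (1 - F), with F the adopted fraction of the addressable market."""
          F, path = 0.0, []
          for _ in range(years):
              F += (p + q * F) * (1.0 - F)       # yearly time step
              path.append(F * market_size)
          return path

      for year, adopters in enumerate(bass_adoption(), start=1):
          if year % 5 == 0:
              print(year, int(adopters))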

  9. Evolution and models for skewed parton distributions

    SciTech Connect

    Musatov, I. V.; Radyushkin, A. V.

    2000-04-01

    We discuss the structure of the "forward visible" (FV) parts of double and skewed distributions related to the usual distributions through reduction relations. We use factorized models for double distributions (DDs) f̃(x,α) in which one factor coincides with the usual (forward) parton distribution and another specifies the profile characterizing the spread of the longitudinal momentum transfer. The model DDs are used to construct skewed parton distributions (SPDs). For small skewedness, the FV parts of SPDs H(x̃,ξ) can be obtained by averaging forward parton densities f(x̃ - ξα) with the weight ρ(α) coinciding with the profile function of the double distribution f̃(x,α) at small x. We show that if the xⁿ moments f̃ₙ(α) of DDs have the asymptotic (1-α²)ⁿ⁺¹ profile, then the α profile of f̃(x,α) for small x is completely determined by the small-x behavior of the usual parton distribution. We demonstrate that, for small ξ, the model with asymptotic profiles for f̃ₙ(α) is equivalent to that proposed recently by Shuvaev et al., in which the Gegenbauer moments of SPDs do not depend on ξ. We perform a numerical investigation of the evolution patterns of SPDs and give an interpretation of the results of these studies within the formalism of double distributions.

  10. On the validation of a code and a turbulence model appropriate to circulation control airfoils

    NASA Technical Reports Server (NTRS)

    Viegas, J. R.; Rubesin, M. W.; Maccormack, R. W.

    1988-01-01

    A computer code for calculating flow about a circulation control airfoil within a wind tunnel test section has been developed. This code is being validated for eventual use as an aid to design such airfoils. The concept of code validation being used is explained. The initial stages of the process have been accomplished. The present code has been applied to a low-subsonic, 2-D flow about a circulation control airfoil for which extensive data exist. Two basic turbulence models and variants thereof have been successfully introduced into the algorithm, the Baldwin-Lomax algebraic and the Jones-Launder two-equation models of turbulence. The variants include adding a history of the jet development for the algebraic model and adding streamwise curvature effects for both models. Numerical difficulties and difficulties in the validation process are discussed. Turbulence model and code improvements to proceed with the validation process are also discussed.

  11. EXTENSION OF THE NUCLEAR REACTION MODEL CODE EMPIRE TO ACTINIDES NUCLEAR DATA EVALUATION.

    SciTech Connect

    CAPOTE,R.; SIN, M.; TRKOV, A.; HERMAN, M.; CARLSON, B.V.; OBLOZINSKY, P.

    2007-04-22

    Recent extensions and improvements of the EMPIRE code system are outlined. They add new capabilities to the code, such as prompt fission neutron spectra calculations using Hauser-Feshbach plus pre-equilibrium pre-fission spectra, cross section covariance matrix calculations by Monte Carlo method, fitting of optical model parameters, extended set of optical model potentials including new dispersive coupled channel potentials, parity-dependent level densities and transmission through numerically defined fission barriers. These features, along with improved and validated ENDF formatting, exclusive/inclusive spectra, and recoils make the current EMPIRE release a complete and well validated tool for evaluation of nuclear data at incident energies above the resonance region. The current EMPIRE release has been used in evaluations of neutron induced reaction files for ²³²Th and ²³¹,²³³Pa nuclei in the fast neutron region at IAEA. Triple-humped fission barriers and exclusive pre-fission neutron spectra were considered for the fission data evaluation. Total, fission, capture and neutron emission cross section, average resonance parameters and angular distributions of neutron scattering are in excellent agreement with the available experimental data.

  12. Coding of odors by temporal binding within a model network of the locust antennal lobe.

    PubMed

    Patel, Mainak J; Rangan, Aaditya V; Cai, David

    2013-01-01

    The locust olfactory system interfaces with the external world through antennal receptor neurons (ORNs), which represent odors in a distributed, combinatorial manner. ORN axons bundle together to form the antennal nerve, which relays sensory information centrally to the antennal lobe (AL). Within the AL, an odor generates a dynamically evolving ensemble of active cells, leading to a stimulus-specific temporal progression of neuronal spiking. This experimental observation has led to the hypothesis that an odor is encoded within the AL by a dynamically evolving trajectory of projection neuron (PN) activity that can be decoded piecewise to ascertain odor identity. In order to study information coding within the locust AL, we developed a scaled-down model of the locust AL using Hodgkin-Huxley-type neurons and biologically realistic connectivity parameters and current components. Using our model, we examined correlations in the precise timing of spikes across multiple neurons, and our results suggest an alternative to the dynamic trajectory hypothesis. We propose that the dynamical interplay of fast and slow inhibition within the locust AL induces temporally stable correlations in the spiking activity of an odor-dependent neural subset, giving rise to a temporal binding code that allows rapid stimulus detection by downstream elements.

  13. Coding of odors by temporal binding within a model network of the locust antennal lobe

    PubMed Central

    Patel, Mainak J.; Rangan, Aaditya V.; Cai, David

    2013-01-01

    The locust olfactory system interfaces with the external world through antennal receptor neurons (ORNs), which represent odors in a distributed, combinatorial manner. ORN axons bundle together to form the antennal nerve, which relays sensory information centrally to the antennal lobe (AL). Within the AL, an odor generates a dynamically evolving ensemble of active cells, leading to a stimulus-specific temporal progression of neuronal spiking. This experimental observation has led to the hypothesis that an odor is encoded within the AL by a dynamically evolving trajectory of projection neuron (PN) activity that can be decoded piecewise to ascertain odor identity. In order to study information coding within the locust AL, we developed a scaled-down model of the locust AL using Hodgkin–Huxley-type neurons and biologically realistic connectivity parameters and current components. Using our model, we examined correlations in the precise timing of spikes across multiple neurons, and our results suggest an alternative to the dynamic trajectory hypothesis. We propose that the dynamical interplay of fast and slow inhibition within the locust AL induces temporally stable correlations in the spiking activity of an odor-dependent neural subset, giving rise to a temporal binding code that allows rapid stimulus detection by downstream elements. PMID:23630495

  14. The APS SASE FEL : modeling and code comparison.

    SciTech Connect

    Biedron, S. G.

    1999-04-20

    A self-amplified spontaneous emission (SASE) free-electron laser (FEL) is under construction at the Advanced Photon Source (APS). Five FEL simulation codes were used in the design phase: GENESIS, GINGER, MEDUSA, RON, and TDA3D. Initial comparisons between each of these independent formulations show good agreement for the parameters of the APS SASE FEL.

  15. Modeling diffuse pollution with a distributed approach.

    PubMed

    León, L F; Soulis, E D; Kouwen, N; Farquhar, G J

    2002-01-01

    The transferability of parameters for non-point source pollution models to other watersheds, especially those in remote areas without enough data for calibration, is a major problem in diffuse pollution modeling. A water quality component was developed for WATFLOOD (a flood forecast hydrological model) to deal with sediment and nutrient transport. The model uses a distributed group response unit approach for water quantity and quality modeling. Runoff, sediment yield and soluble nutrient concentrations are calculated separately for each land cover class, weighted by area and then routed downstream. The distributed approach for the water quality model for diffuse pollution in agricultural watersheds is described in this paper. Integrating the model with data extracted using GIS technology (Geographical Information Systems) for a local watershed, the model is calibrated for the hydrologic response and validated for the water quality component. With the connection to GIS and the group response unit approach used in this paper, model portability increases substantially, which will improve non-point source modeling at the watershed scale.

  16. Hot Water Distribution System Model Enhancements

    SciTech Connect

    Hoeschele, M.; Weitzel, E.

    2012-11-01

    This project involves enhancement of the HWSIM distribution system model to more accurately model pipe heat transfer. Recent laboratory testing efforts have indicated that the modeling of radiant heat transfer effects is needed to accurately characterize piping heat loss. An analytical methodology for integrating radiant heat transfer was implemented with HWSIM. Laboratory test data collected in another project was then used to validate the model for a variety of uninsulated and insulated pipe cases (copper, PEX, and CPVC). Results appear favorable, with typical deviations from lab results less than 8%.

  17. Implementation of a simple model for linear and nonlinear mixing at unstable fluid interfaces in hydrodynamics codes

    SciTech Connect

    Ramshaw, J D

    2000-10-01

    A simple model was recently described for predicting the time evolution of the width of the mixing layer at an unstable fluid interface [J. D. Ramshaw, Phys. Rev. E 58, 5834 (1998); ibid. 61, 5339 (2000)]. The ordinary differential equations of this model have been heuristically generalized into partial differential equations suitable for implementation in multicomponent hydrodynamics codes. The central ingredient in this generalization is a non-diffusional expression for the species mass fluxes. These fluxes describe the relative motion of the species, and thereby determine the local mixing rate and spatial distribution of mixed fluid as a function of time. The generalized model has been implemented in a two-dimensional hydrodynamics code. The model equations and implementation procedure are summarized, and comparisons with experimental mixing data are presented.

  18. A Zonal Climate Model for the 1-D Mars Evolution Code: Explaining Meridiani Planum.

    NASA Astrophysics Data System (ADS)

    Manning, C. V.; McKay, C. P.; Zahnle, K. J.

    2005-12-01

    Recent MER Opportunity observations suggest that an extensive body of shallow water existed in present-day Meridiani Planum during the late Noachian [1]. Observations of roughly contemporaneous valley networks show little net erosion [2]. Hypsometric analysis [3] finds that martian drainage basins are similar to terrestrial drainage basins in very arid regions. The immaturity of martian drainage basins suggests they were formed by infrequent fluvial action. If similar fluvial discharges are responsible for the laminations in the salt-bearing outcrops of Meridiani Planum, their explanation may require a climate model based on surface thermal equilibrium with diurnally averaged temperatures greater than freezing. In the context of Mars' chaotic obliquity, invoking a moderately thick atmosphere with seasonal insolation patterns may uncover the conditions under which the outcrops formed. We have developed a 1-D model of the evolution of Mars' inventories of CO2 over its lifetime, called the Mars Evolution Code (MEC) [4]. We are assembling a zonal climate model that includes meridional heat transport, heat conduction to/from the regolith, latent heat deposition, and an albedo distribution based on the depositional patterns of ices. Since water vapor is an important greenhouse gas whose ice affects the albedo, we must incorporate a full hydrological cycle. This requires a thermal model of the regolith to model the diffusion of water vapor to/from a permafrost layer. Our model carries obliquity and eccentricity distributions consistent with Laskar et al. [5], so we will be able to model the movement of the ice cap with changes in obliquity. The climate model will be used to investigate the conditions under which ponded water could have occurred in the late Noachian, thus supplying a constraint on the free inventory of CO2 at that time. Our evolution code can then investigate Hesperian and Amazonian climates. The model could also be used to understand evidence of recent climate change.

  19. Coding techniques for secure digital communications for unit protection of distribution feeders

    SciTech Connect

    Redfern, M.A.; McGuinness, D.P.; Ormondroyd, R.F.

    1996-04-01

    The dramatic growth in new designs of microprocessor relays has led to a growth in the use of digital communications for protection. Unfortunately, in any communication system there will always be some corruption of the received data. Part of the art and science of relay design is therefore to take this into account. This paper examines coding techniques designed to minimize the probability of corrupted data being declared as healthy. Message size, coding techniques and interleaving are examined with respect to the choice of a coding strategy for a secure data communication system for unit protection.

  20. The modeling of core melting and in-vessel corium relocation in the APRIL code

    SciTech Connect

    Kim, S.W.; Podowski, M.Z.; Lahey, R.T.

    1995-09-01

    This paper is concerned with the modeling of severe accident phenomena in boiling water reactors (BWR). New models of core melting and in-vessel corium debris relocation are presented, developed for implementation in the APRIL computer code. The results of model testing and validations are given, including comparisons against available experimental data and parametric/sensitivity studies. Also, the application of these models, as parts of the APRIL code, is presented to simulate accident progression in a typical BWR reactor.

  1. Modeling utilization distributions in space and time

    USGS Publications Warehouse

    Keating, K.A.; Cherry, S.

    2009-01-01

    W. Van Winkle defined the utilization distribution (UD) as a probability density that gives an animal's relative frequency of occurrence in a two-dimensional (x, y) plane. We extend Van Winkle's work by redefining the UD as the relative frequency distribution of an animal's occurrence in all four dimensions of space and time. We then describe a product kernel model estimation method, devising a novel kernel from the wrapped Cauchy distribution to handle circularly distributed temporal covariates, such as day of year. Using Monte Carlo simulations of animal movements in space and time, we assess estimator performance. Although not unbiased, the product kernel method yields models highly correlated (Pearson's r = 0.975) with true probabilities of occurrence and successfully captures temporal variations in density of occurrence. In an empirical example, we estimate the expected UD in three dimensions (x, y, and t) for animals belonging to each of two distinct bighorn sheep (Ovis canadensis) social groups in Glacier National Park, Montana, USA. Results show the method can yield ecologically informative models that successfully depict temporal variations in density of occurrence for a seasonally migratory species. Some implications of this new approach to UD modeling are discussed.
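
    The product-kernel idea with a wrapped Cauchy kernel for a circular day-of-year covariate can be sketched as below; the Gaussian spatial kernels, bandwidth, and concentration parameter are assumptions chosen for illustration, not the estimator settings used in the paper.

      import numpy as np

      def wrapped_cauchy(theta, mu, rho):
          """Wrapped Cauchy density on the circle (theta, mu in radians, 0 <= rho < 1)."""
          return (1.0 - rho**2) / (2.0 * np.pi * (1.0 + rho**2 - 2.0 * rho * np.cos(theta - mu)))

      def ud_estimate(x, y, doy, grid_xy, grid_doy, h=500.0, rho=0.9):
          """Product-kernel UD estimate at (grid_xy, grid_doy): Gaussian kernels in x and y,
          wrapped Cauchy kernel in day of year (treated as a circular covariate)."""
          gauss = lambda u: np.exp(-0.5 * u**2) / (np.sqrt(2.0 * np.pi) * h)
          kx = gauss((grid_xy[0] - x) / h)
          ky = gauss((grid_xy[1] - y) / h)
          kt = wrapped_cauchy(2.0 * np.pi * grid_doy / 365.0, 2.0 * np.pi * doy / 365.0, rho)
          return np.mean(kx * ky * kt)

      rng = np.random.default_rng(6)
      x, y = rng.normal(0, 1000, 200), rng.normal(0, 1000, 200)   # relocations (metres), synthetic
      doy = rng.integers(120, 180, 200)                           # days of year, synthetic
      print(ud_estimate(x, y, doy, grid_xy=(0.0, 0.0), grid_doy=150))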

  2. Evaluation of a parallel FDTD code and application to modeling of light scattering by deformed red blood cells.

    PubMed

    Brock, R Scott; Hu, Xin-Hua; Yang, Ping; Lu, Jun

    2005-07-11

    A parallel Finite-Difference-Time-Domain (FDTD) code has been developed to numerically model the elastic light scattering by biological cells. Extensive validation and evaluation on various computing clusters demonstrated the high performance of the parallel code and its significant potential for reducing the computational cost of the FDTD method with low-cost computer clusters. The parallel FDTD code has been used to study the problem of light scattering by a human red blood cell (RBC) of a deformed shape in terms of the angular distributions of the Mueller matrix elements. The dependence of the Mueller matrix elements on the shape and orientation of the deformed RBC has been investigated. Analysis of these data provides valuable insight on determination of the RBC shapes using the method of elastic light scattering measurements.

  3. Generalized rate-code model for neuron ensembles with finite populations

    SciTech Connect

    Hasegawa, Hideo

    2007-05-15

    We have proposed a generalized Langevin-type rate-code model subjected to multiplicative noise, in order to study stationary and dynamical properties of an ensemble containing a finite number N of neurons. Calculations using the Fokker-Planck equation have shown that, owing to the multiplicative noise, our rate model yields various kinds of stationary non-Gaussian distributions such as gamma, inverse-Gaussian-like, and log-normal-like distributions, which have been experimentally observed. The dynamical properties of the rate model have been studied with the use of the augmented moment method (AMM), which was previously proposed by the author from a macroscopic point of view for finite-unit stochastic systems. In the AMM, the original N-dimensional stochastic differential equations (DEs) are transformed into three-dimensional deterministic DEs for the means and fluctuations of local and global variables. The dynamical responses of the neuron ensemble to pulse and sinusoidal inputs calculated by the AMM are in good agreement with those obtained by direct simulation. The synchronization in the neuronal ensemble is discussed. The variabilities of the firing rate and of the interspike interval are shown to increase with increasing magnitude of multiplicative noise, which may be a conceivable origin of the observed large variability in cortical neurons.

  4. General closed-form bit-error rate expressions for coded M-distributed atmospheric optical communications.

    PubMed

    Balsells, José M Garrido; López-González, Francisco J; Jurado-Navas, Antonio; Castillo-Vázquez, Miguel; Notario, Antonio Puerta

    2015-07-01

    In this Letter, general closed-form expressions for the average bit error rate in atmospheric optical links employing rate-adaptive channel coding are derived. To characterize the irradiance fluctuations caused by atmospheric turbulence, the Málaga or M distribution is employed. The proposed expressions allow us to evaluate the performance of atmospheric optical links employing channel coding schemes such as OOK-GSc, OOK-GScc, HHH(1,13), or vw-MPPM with different coding rates and under all regimes of turbulence strength. A hyper-exponential fitting technique applied to the conditional bit error rate is used in all cases. The proposed closed-form expressions are validated by Monte-Carlo simulations.

  5. General closed-form bit-error rate expressions for coded M-distributed atmospheric optical communications.

    PubMed

    Balsells, José M Garrido; López-González, Francisco J; Jurado-Navas, Antonio; Castillo-Vázquez, Miguel; Notario, Antonio Puerta

    2015-07-01

    In this Letter, general closed-form expressions for the average bit error rate in atmospheric optical links employing rate-adaptive channel coding are derived. To characterize the irradiance fluctuations caused by atmospheric turbulence, the Málaga or M distribution is employed. The proposed expressions allow us to evaluate the performance of atmospheric optical links employing channel coding schemes such as OOK-GSc, OOK-GScc, HHH(1,13), or vw-MPPM with different coding rates and under all regimes of turbulence strength. A hyper-exponential fitting technique applied to the conditional bit error rate is used in all cases. The proposed closed-form expressions are validated by Monte-Carlo simulations. PMID:26125336

  6. Aerosol Behavior Log-Normal Distribution Model.

    2001-10-22

    HAARM3, an acronym for Heterogeneous Aerosol Agglomeration Revised Model 3, is the third program in the HAARM series developed to predict the time-dependent behavior of radioactive aerosols under postulated LMFBR accident conditions. HAARM3 was developed to include mechanisms of aerosol growth and removal which had not been accounted for in the earlier models. In addition, experimental measurements obtained on sodium oxide aerosols have been incorporated in the code. As in HAARM2, containment gas temperature, pressure, and temperature gradients normal to interior surfaces are permitted to vary with time. The effects of reduced density on sodium oxide agglomerate behavior and of nonspherical shape of particles on aerosol behavior mechanisms are taken into account, and aerosol agglomeration due to turbulent air motion is considered. Also included is a capability to calculate aerosol concentration attenuation factors and to restart problems requiring long computing times.

  7. A predictive transport modeling code for ICRF-heated tokamaks

    SciTech Connect

    Phillips, C.K.; Hwang, D.Q.; Houlberg, W.; Attenberger, S.; Tolliver, J.; Hively, L.

    1992-02-01

    In this report, a detailed description of the physics included in the WHIST/RAZE package as well as a few illustrative examples of the capabilities of the package are presented. An in-depth analysis of ICRF heating experiments using WHIST/RAZE will be discussed in a forthcoming report. A general overview of the philosophy behind the structure of the WHIST/RAZE package, a summary of the features of the WHIST code, and a description of the interface to the RAZE subroutines are presented in section 2 of this report. Details of the physics contained in the RAZE code are examined in section 3. Sample results from the package follow in section 4, with concluding remarks and a discussion of possible improvements to the package in section 5.

  8. Final Technical Report for SBIR entitled Four-Dimensional Finite-Orbit-Width Fokker-Planck Code with Sources, for Neoclassical/Anomalous Transport Simulation of Ion and Electron Distributions

    SciTech Connect

    Harvey, R. W.; Petrov, Yu. V.

    2013-12-03

    Within the US Department of Energy/Office of Fusion Energy magnetic fusion research program, there is an important whole-plasma-modeling need for a radio-frequency/neutral-beam-injection (RF/NBI) transport-oriented finite-difference Fokker-Planck (FP) code with combined capabilities for 4D (2R2V) geometry near the fusion plasma periphery, and computationally less demanding 3D (1R2V) bounce-averaged capabilities for plasma in the core of fusion devices. Demonstration of proof-of-principle achievement of this goal was carried out in research performed under Phase I of the SBIR award. Two DOE-sponsored codes, the CQL3D bounce-averaged Fokker-Planck code in which CompX has specialized, and the COGENT 4D, plasma edge-oriented Fokker-Planck code constructed by Lawrence Livermore National Laboratory and Lawrence Berkeley Laboratory scientists, were coupled. Coupling was achieved by using CQL3D-calculated velocity distributions, including an energetic tail resulting from NBI, as boundary conditions for the COGENT code over the two-dimensional velocity space on a spatial interface (flux) surface at a given radius near the plasma periphery. The finite-orbit-width fast ions from the CQL3D distributions penetrated into the peripheral plasma modeled by the COGENT code. This combined code demonstrates the feasibility of the proposed 3D/4D code. By combining these codes, the greatest computational efficiency is achieved subject to present modeling needs in toroidally symmetric magnetic fusion devices. The more efficient 3D code can be used in its regions of applicability, coupled to the more computationally demanding 4D code in the higher-collisionality edge plasma regions where that extended capability is necessary for accurate representation of the plasma. More efficient code leads to greater use and utility of the model. An ancillary aim of the project is to make the combined 3D/4D code user friendly. Achievement of full coupling of these two Fokker-Planck codes

  9. Recent developments in DYNSUB: New models, code optimization and parallelization

    SciTech Connect

    Daeubler, M.; Trost, N.; Jimenez, J.; Sanchez, V.

    2013-07-01

    DYNSUB is a high-fidelity coupled code system consisting of the reactor simulator DYN3D and the sub-channel code SUBCHANFLOW. It describes nuclear reactor core behavior with pin-by-pin resolution for both steady-state and transient scenarios. In the course of the coupled code system's active development, super-homogenization (SPH) and generalized equivalence theory (GET) discontinuity factors may be computed with and employed in DYNSUB to compensate for pin-level homogenization errors. Because of the greatly increased numerical problem size for pin-by-pin simulations, DYNSUB has benefited from HPC techniques to improve its numerical performance. DYNSUB's coupling scheme has been structurally revised. Computational bottlenecks have been identified and parallelized for shared memory systems using OpenMP. Comparing the elapsed time for simulating a PWR core with one-eighth symmetry under hot zero power conditions with the original and the optimized DYNSUB using 8 cores, overall speedup factors greater than 10 have been observed. The corresponding reduction in execution time enables routine application of DYNSUB to study pin-level safety parameters for engineering-sized cases in a scientific environment.

  10. The dynamic neural filter: a binary model of spatiotemporal coding.

    PubMed

    Quenet, Brigitte; Horn, David

    2003-02-01

    We describe and discuss the properties of a binary neural network that can serve as a dynamic neural filter (DNF), which maps regions of input space into spatiotemporal sequences of neuronal activity. Both deterministic and stochastic dynamics are studied, allowing the investigation of the stability of spatiotemporal sequences under noisy conditions. We define a measure of the coding capacity of a DNF and develop an algorithm for constructing a DNF that can serve as a source of given codes. On the basis of this algorithm, we suggest using a minimal DNF capable of generating observed sequences as a measure of complexity of spatiotemporal data. This measure is applied to experimental observations in the locust olfactory system, whose reverberating local field potential provides a natural temporal scale allowing the use of a binary DNF. For random synaptic matrices, a DNF can generate very large cycles, thus becoming an efficient tool for producing spatiotemporal codes. The latter can be stabilized by applying to the parameters of the DNF a learning algorithm with suitable margins.
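
    A minimal sketch of a deterministic binary dynamic neural filter: a threshold network whose state trajectory depends on a static external input, so different input vectors map to different spatiotemporal sequences of activity. The random synaptic matrix, zero thresholds, and network size below are illustrative choices, not parameters from the paper.

      import numpy as np

      def dnf_trajectory(W, theta, external_input, steps=10, s0=None):
          """Deterministic binary DNF update: s_i(t+1) = H(sum_j W_ij s_j(t) + R_i - theta_i)."""
          n = len(theta)
          s = np.zeros(n, dtype=int) if s0 is None else s0.copy()
          traj = [s.copy()]
          for _ in range(steps):
              s = (W @ s + external_input - theta > 0).astype(int)
              traj.append(s.copy())
          return traj

      rng = np.random.default_rng(7)
      n = 8
      W = rng.normal(0, 1, (n, n))          # random synaptic matrix
      theta = np.zeros(n)
      for R in (rng.normal(0, 1, n), rng.normal(0, 1, n)):   # two different static inputs
          print([''.join(map(str, s)) for s in dnf_trajectory(W, theta, R)])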

  11. Joint physical and numerical modeling of water distribution networks.

    SciTech Connect

    Zimmerman, Adam; O'Hern, Timothy John; Orear, Leslie Jr.; Kajder, Karen C.; Webb, Stephen Walter; Cappelle, Malynda A.; Khalsa, Siri Sahib; Wright, Jerome L.; Sun, Amy Cha-Tien; Chwirka, J. Benjamin; Hartenberger, Joel David; McKenna, Sean Andrew; van Bloemen Waanders, Bart Gustaaf; McGrath, Lucas K.; Ho, Clifford Kuofei

    2009-01-01

    This report summarizes the experimental and modeling effort, conducted during the last year of a 3-year project, undertaken to understand solute mixing in a water distribution network. The experimental effort involves measurement of the extent of mixing within different configurations of pipe networks, measurement of dynamic mixing in a single mixing tank, and measurement of dynamic solute mixing in a combined network-tank configuration. High resolution analysis of turbulence mixing is carried out via high speed photography as well as 3D finite-volume based Large Eddy Simulation turbulence models. Macroscopic mixing rules based on flow momentum balance are also explored, and in some cases, implemented in EPANET. A new version of the EPANET code was developed to yield better mixing predictions. The impact of a storage tank on pipe mixing in a combined pipe-tank network during diurnal fill-and-drain cycles is assessed. Preliminary comparison between dynamic pilot data and EPANET-BAM is also reported.

  12. Meson distribution amplitudes in holographic models

    NASA Astrophysics Data System (ADS)

    Hwang, Chien-Wen

    2012-07-01

    We study the wave functions of light and heavy mesons in both hard-wall (HW) and soft-wall (SW) holographic models which use the AdS/CFT correspondence. In the case of massless constituents, the asymptotic behaviors of the electromagnetic form factor, the distribution amplitudes, and the decay constants for the two models are the same, provided the dilaton scale parameter is inversely proportional to the size of the meson. On the other hand, by introducing a quark mass dependence in the wave function, the differences in the distribution amplitudes between the two models become obvious. In addition, for the SW model, the dependences of the meson decay constants on the dilaton scale parameter κ differ; in particular, f_Qq ∼ κ³/m_Q² is consistent with the prediction of heavy quark effective theory if κ ∼ m_Q^(1/2). Thus the parameters of the two models are fit by the decay constants of distinct mesons; the distribution amplitudes and the ξ-moments are calculated and compared.

  13. A void distribution model-flashing flow

    SciTech Connect

    Riznic, J.; Ishii, M.; Afgan, N.

    1987-01-01

    A new model for flashing flow based on wall nucleation is proposed here, and the model predictions are compared with experimental data. In order to calculate the bubble number density, the bubble number transport equation with a distributed source from the wall nucleation sites was used. Thus it was possible to avoid the usual assumption of a constant bubble number density. Comparisons of the model with the data show that the model based on the nucleation site density correlation appears acceptable for describing vapor generation in flashing flow. For the limited data examined, the comparisons show rather satisfactory agreement without using a floating parameter to adjust the model. This result indicates that, at least for the experimental conditions considered here, mechanistic prediction of the flashing phenomenon is possible with the present wall-nucleation-based model.

  14. Modeling Mosquito Distribution. Impact of the Landscape

    NASA Astrophysics Data System (ADS)

    Dumont, Y.

    2011-09-01

    In order to use vector control tools, such as insecticides and mechanical control, efficiently, it is necessary to provide estimates of mosquito density and distribution, taking into account the environment and entomological knowledge. Mosquito dispersal modeling, together with a compartmental approach, leads to a quasilinear parabolic system. Using a time-splitting approach and appropriate numerical methods for each operator, we construct a reliable numerical scheme. Considering various landscapes, we show that the environment can have a strong influence on mosquito distribution and, thus, on the efficiency of vector control.

  15. A convolutional code-based sequence analysis model and its application.

    PubMed

    Liu, Xiao; Geng, Xiaoli

    2013-04-16

    A new approach for encoding DNA sequences as input for DNA sequence analysis is proposed using the error correction coding theory of communication engineering. The encoder was designed as a convolutional code model whose generator matrix is designed based on the degeneracy of codons, with a codon treated in the model as an informational unit. The utility of the proposed model was demonstrated through the analysis of twelve prokaryote and nine eukaryote DNA sequences having different GC contents. Distinct differences in code distances were observed near the initiation and termination sites in the open reading frame, which provided a well-regulated characterization of the DNA sequences. Clearly distinguished period-3 features appeared in the coding regions, and the characteristic average code distances of the analyzed sequences were approximately proportional to their GC contents, particularly in the selected prokaryotic organisms, presenting the potential utility as an added taxonomic characteristic for use in studying the relationships of living organisms.
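
    A generic illustration of the machinery described above: a rate-1/2 binary convolutional encoder and a Hamming-type "code distance" between the encoded forms of two sequence windows. The (7,5) generator polynomials, the base-to-bit mapping, and the toy windows are illustrative assumptions; the paper's generator matrix is built from codon degeneracy and is not reproduced here.

```python
# Generic sketch, not the paper's codon-degeneracy-based generator matrix:
# a rate-1/2 binary convolutional encoder (standard polynomials 7,5 octal) and a
# Hamming "code distance" between the encoded forms of two sequence windows.

G = [0b111, 0b101]  # generator polynomials, constraint length 3

def conv_encode(bits):
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0b111           # shift register holds last 3 bits
        for g in G:
            out.append(bin(state & g).count("1") % 2)  # parity of tapped bits
    return out

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

# Toy DNA windows mapped to bits (A=00, C=01, G=10, T=11); purely illustrative.
base2bits = {"A": (0, 0), "C": (0, 1), "G": (1, 0), "T": (1, 1)}
w1, w2 = "ATGGCT", "ATGGAT"
b1 = [b for base in w1 for b in base2bits[base]]
b2 = [b for base in w2 for b in base2bits[base]]
print(hamming(conv_encode(b1), conv_encode(b2)))     # code distance between the two windows
```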

  16. Stark effect modeling in the detailed opacity code SCO-RCG

    NASA Astrophysics Data System (ADS)

    Pain, J.-C.; Gilleron, F.; Gilles, D.

    2016-05-01

    The broadening of lines by the Stark effect is an important tool for inferring electron density and temperature in plasmas. Stark-effect calculations often rely on atomic data (transition rates, energy levels, ...) that are not always exhaustive and/or valid for isolated atoms. We present a recent development in the detailed opacity code SCO-RCG for K-shell spectroscopy (hydrogen- and helium-like ions). This approach is adapted from the work of Gilles and Peyrusse. Neglecting non-diagonal terms in the dipolar and collision operators, the line profile is expressed as a sum of Voigt functions associated with the Stark components. The formalism relies on the use of parabolic coordinates within SO(4) symmetry. The relativistic fine structure of Lyman lines is included by diagonalizing the Hamiltonian matrix associated with quantum states having the same principal quantum number n. The resulting code enables one to investigate plasma environment effects, the impact of the microfield distribution, the decoupling between electron and ion temperatures, and the role of satellite lines (such as Li-like 1s nℓ n'ℓ' - 1s² nℓ, Be-like, etc.). Comparisons with simpler and widely used semi-empirical models are presented.
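
    The "sum of Voigt functions" step can be sketched with the standard Faddeeva-function evaluation of the Voigt profile. The component positions, weights, and widths below are placeholders; in the actual code they come from the Stark and fine-structure calculation described above.

```python
import numpy as np
from scipy.special import wofz  # Faddeeva function

def voigt(x, sigma, gamma):
    """Area-normalised Voigt profile: Gaussian width sigma, Lorentzian HWHM gamma."""
    z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
    return np.real(wofz(z)) / (sigma * np.sqrt(2.0 * np.pi))

# Line profile built as a weighted sum of Voigt components, as in the text.
# Positions, weights, and widths below are placeholders, not SCO-RCG output.
components = [(-0.5, 0.3), (0.0, 1.0), (0.7, 0.5)]   # (position in eV, weight)
sigma, gamma = 0.2, 0.05                             # Doppler and electron-impact widths (eV)

energy = np.linspace(-3, 3, 601)
profile = sum(w * voigt(energy - e0, sigma, gamma) for e0, w in components)
print(profile.sum() * (energy[1] - energy[0]))       # ~ sum of weights (1.8), each Voigt has unit area
```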

  17. Addressing Hate Speech and Hate Behaviors in Codes of Conduct: A Model for Public Institutions.

    ERIC Educational Resources Information Center

    Neiger, Jan Alan; Palmer, Carolyn; Penney, Sophie; Gehring, Donald D.

    1998-01-01

    As part of a larger study, researchers collected campus codes prohibiting hate crimes, which were then reviewed to determine whether the codes presented constitutional problems. Based on this review, the authors develop and present a model policy that is content neutral and does not use language that could be viewed as unconstitutionally vague or…

  18. Stochastic model of homogeneous coding and latent periodicity in DNA sequences.

    PubMed

    Chaley, Maria; Kutyrkin, Vladimir

    2016-02-01

    The concept of latent triplet periodicity in coding DNA sequences, which has been extensively discussed earlier, is confirmed by the analysis of a number of eukaryotic genomes, where latent periodicity of a new type, called profile periodicity, is recognized in the CDSs. An original model of Stochastic Homogeneous Organization of Coding (SHOC model) in a textual string is proposed. This model explains the existence of latent profile periodicity and regularity in DNA sequences. PMID:26656186

  19. PEBBLES: A COMPUTER CODE FOR MODELING PACKING, FLOW AND RECIRCULATION OF PEBBLES IN A PEBBLE BED REACTOR

    SciTech Connect

    Joshua J. Cogliati; Abderrafi M. Ougouag

    2006-10-01

    A comprehensive, high fidelity model for pebble flow has been developed and embodied in the PEBBLES computer code. In this paper, a description of the physical artifacts included in the model is presented and some results from using the computer code for predicting the features of pebble flow and packing in a realistic pebble bed reactor design are shown. The sensitivity of models to various physical parameters is also discussed.

  20. Distributed static linear Gaussian models using consensus.

    PubMed

    Belanovic, Pavle; Valcarcel Macua, Sergio; Zazo, Santiago

    2012-10-01

    Algorithms for distributed agreement are a powerful means for formulating distributed versions of existing centralized algorithms. We present a toolkit for this task and show how it can be used systematically to design fully distributed algorithms for static linear Gaussian models, including principal component analysis, factor analysis, and probabilistic principal component analysis. These algorithms do not rely on a fusion center, require only low-volume local (1-hop neighborhood) communications, and are thus efficient, scalable, and robust. We show how they are also guaranteed to asymptotically converge to the same solution as the corresponding existing centralized algorithms. Finally, we illustrate the functioning of our algorithms on two examples, and examine the inherent cost-performance trade-off.
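
    The distributed-averaging primitive that underlies such algorithms can be sketched in a few lines: each node repeatedly mixes its value with its 1-hop neighbours using Metropolis weights and converges to the network-wide mean without a fusion center. The network, measurements, and iteration count below are illustrative; the paper builds full distributed estimation algorithms on top of primitives of this kind.

```python
import numpy as np

# Sketch of distributed averaging with Metropolis weights: only neighbour-to-neighbour
# exchanges, yet every node converges to the global average (no fusion center).
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]      # illustrative 4-node network
n = 4
deg = np.zeros(n, dtype=int)
for i, j in edges:
    deg[i] += 1
    deg[j] += 1

W = np.zeros((n, n))
for i, j in edges:
    W[i, j] = W[j, i] = 1.0 / (1 + max(deg[i], deg[j]))  # Metropolis weight per link
W += np.diag(1.0 - W.sum(axis=1))                        # self-weights make rows sum to 1

x = np.array([3.0, -1.0, 7.0, 5.0])                      # local measurements
for _ in range(100):
    x = W @ x                                            # one round of 1-hop exchanges

print(x, x.mean())                                       # all entries approach the global mean 3.5
```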

  1. Building a generalized distributed system model

    NASA Technical Reports Server (NTRS)

    Mukkamala, R.

    1992-01-01

    The key elements in the second year (1991-92) of our project are: (1) implementation of the distributed system prototype; (2) successful passing of the candidacy examination and a PhD proposal acceptance by the funded student; (3) design of storage efficient schemes for replicated distributed systems; and (4) modeling of gracefully degrading reliable computing systems. In the third year of the project (1992-93), we propose to: (1) complete the testing of the prototype; (2) enhance the functionality of the modules by enabling the experimentation with more complex protocols; (3) use the prototype to verify the theoretically predicted performance of locking protocols, etc.; and (4) work on issues related to real-time distributed systems. This should result in efficient protocols for these systems.

  2. Distributed earth model/orbiter simulation

    NASA Technical Reports Server (NTRS)

    Geisler, Erik; Mcclanahan, Scott; Smith, Gary

    1989-01-01

    Distributed Earth Model/Orbiter Simulation (DEMOS) is a network based application developed for the UNIX environment that visually monitors or simulates the Earth and any number of orbiting vehicles. Its purpose is to provide Mission Control Center (MCC) flight controllers with a visually accurate three-dimensional (3D) model of the Earth, Sun, Moon and orbiters, driven by real time or simulated data. The project incorporates a graphical user interface, 3D modelling employing state-of-the-art hardware, and simulation of orbital mechanics in a networked/distributed environment. The user interface is based on the X Window System and the X Ray toolbox. The 3D modelling utilizes the Programmer's Hierarchical Interactive Graphics System (PHIGS) standard and Raster Technologies hardware for rendering/display performance. The simulation of orbiting vehicles uses two methods of vector propagation implemented with standard UNIX/C for portability. Each part is a distinct process that can run on separate nodes of a network, exploiting each node's unique hardware capabilities. The client/server communication architecture of the application can be reused for a variety of distributed applications.

  3. Implementation of a kappa-epsilon turbulence model to RPLUS3D code

    NASA Technical Reports Server (NTRS)

    Chitsomboon, Tawit

    1992-01-01

    The RPLUS3D code has been developed at the NASA Lewis Research Center to support the National Aerospace Plane (NASP) project. The code has the ability to solve three-dimensional flowfields with finite rate combustion of hydrogen and air. The combustion process of the hydrogen-air system is simulated by an 18-reaction-path, 8-species chemical kinetic mechanism. The code uses a Lower-Upper (LU) decomposition numerical algorithm as its basis, making it a very efficient and robust code. Except for the Jacobian matrix for the implicit chemistry source terms, there is no inversion of a matrix even though a fully implicit numerical algorithm is used. A k-epsilon turbulence model has recently been incorporated into the code. Initial validations have been conducted for a flow over a flat plate. Results of the validation studies are shown. Some difficulties in implementing the k-epsilon equations into the code are also discussed.

  4. Modeling of α-particle redistribution by sawteeth in TFTR using FPPT code

    SciTech Connect

    Gorelenkov, N.N.; Budny, R.V.; Duong, H.H.

    1996-06-01

    Results from recent DT experiments on TFTR to measure the radial density profiles of fast confined well-trapped α-particles using the Pellet Charge eXchange (PCX) diagnostic [PETROV M. P., et al., Nucl. Fusion, 35 (1995) 1437] indicate that sawtooth oscillations produce a significant broadening of the trapped alpha radial density profiles. Conventional models consistent with measured sawtooth effects on passing particles do not provide satisfactory simulations of the trapped-particle mixing measured by the PCX diagnostic. We propose a different mechanism for fast particle mixing during the sawtooth crash to explain the trapped α-particle density profile broadening after the crash. The model is based on the fast-particle orbit-averaged toroidal drift in a perturbed helical electric field with an adjustable absolute value. Such a drift of the fast particles results in a change of their energy and a redistribution in phase space. The energy redistribution is shown to obey the diffusion equation, while the redistribution in toroidal momentum P_φ (or in minor radius) is assumed stochastic with a large diffusion coefficient and was taken flat. The distribution function in a pre-sawtooth plasma and its evolution in a post-sawtooth-crash plasma is simulated using the Fokker-Planck Post-TRANSP (FPPT) processor code. It is shown that FPPT-calculated α-particle distributions are consistent with TRANSP Monte Carlo calculations. Comparison of FPPT results with Pellet Charge eXchange (PCX) measurements shows good agreement for both sawtooth-free and sawtoothing plasmas.

  5. Subgroup A : nuclear model codes report to the Sixteenth Meeting of the WPEC

    SciTech Connect

    Talou, P.; Chadwick, M. B.; Dietrich, F. S.; Herman, M.; Kawano, T.; Konig, A.; Obložinský, P.

    2004-01-01

    The Subgroup A activities focus on the development of nuclear reaction models and codes, used in evaluation work for nuclear reactions from the unresolved energy region up to the pion production threshold, and for target nuclides from the low teens and heavier. Much of the effort is devoted by each participant to the continuing development of their own institution's codes. Progress in this arena is reported in detail for each code in the present document. EMPIRE-II is of public access. The release of the TALYS code has been announced for the ND2004 Conference in Santa Fe, NM, October 2004. McGNASH is still under development and is not expected to be released in the very near future. In addition, Subgroup A members have demonstrated a growing interest in working on common modeling and code capabilities, which would significantly reduce the amount of duplicate work, help manage efficiently the growing lines of existing codes, and render code inter-comparison much easier. A recent and important activity of Subgroup A has therefore been to develop the framework and the first bricks of the ModLib library, which consists of mostly independent pieces of code written in Fortran 90 (and above) to be used in existing and future nuclear reaction codes. Significant progress in the development of ModLib has been made during the past year. Several physics modules have been added to the library, and a few more have been planned in detail for the coming year.

  6. Code System for Calculating Ion Track Condensed Collision Model.

    1997-05-21

    Version 00 ICOM calculates the transport characteristics of ion radiation for application to radiation protection, dosimetry and microdosimetry, and radiation physics of solids. Ions in the range Z=1-92 are handled. The energy range for protons is 0.001-10,000 MeV. For other ions the energy range is 0.001-100 MeV/nucleon. Computed quantities include stopping powers and ranges; spatial, angular and energy distributions of particle current and fluence; spatial distributions of the absorbed dose; and spatial distributions of thermalized ions.

  7. Modeling heavy ion ionization loss in the MARS15 code

    SciTech Connect

    Rakhno, I.L.; Mokhov, N.V.; Striganov, S.I.; /Fermilab

    2005-05-01

    The needs of various accelerator and space projects stimulated recent developments to the MARS Monte Carlo code. One of the essential parts of those developments is heavy-ion ionization energy loss. This paper describes an implementation of several corrections to dE/dx in order to take into account the deviations from the Bethe theory at low and high energies, as well as the effect of a finite nuclear size at ultrarelativistic energies. Special attention is paid to the transition energy region where the onset of the effect of a finite nuclear size is observed. Comparisons with experimental data and NIST data are presented.
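
    For reference, the baseline that such corrections modify is the uncorrected Bethe stopping power; a minimal sketch is given below using standard constants. The example projectile, medium, and mean excitation energy are placeholders, and the low/high-energy and finite-nuclear-size corrections discussed in the paper are not included.

```python
import numpy as np

# Baseline (uncorrected) Bethe mass stopping power for a heavy charged projectile,
# in MeV cm^2/g. MARS15 adds low/high-energy and finite-nuclear-size corrections
# on top of a baseline of this kind; none of those corrections appear here.
K = 0.307075          # MeV cm^2 / mol  (4*pi*N_A*r_e^2*m_e*c^2)
ME_C2 = 0.510999      # electron rest energy, MeV

def bethe_dedx(z_proj, beta, Z, A, I_eV):
    """dE/dx (MeV cm^2/g) for a projectile of charge z_proj at velocity beta in a medium (Z, A, I)."""
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    I = I_eV * 1e-6                                   # mean excitation energy in MeV
    arg = 2.0 * ME_C2 * beta**2 * gamma**2 / I
    return K * z_proj**2 * (Z / A) / beta**2 * (np.log(arg) - beta**2)

# Example: carbon ion (z=6) at beta = 0.3 in aluminium (Z=13, A=26.98, I=166 eV)
print(bethe_dedx(6, 0.3, 13, 26.98, 166.0))
```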

  8. Modeling Emergent Macrophyte Distributions: Including Sub-dominant Species

    EPA Science Inventory

    Mixed stands of emergent vegetation are often present following drawdowns but models of wetland plant distributions fail to include subdominant species when predicting distributions. Three variations of a spatial plant distribution cellular automaton model were developed to explo...

  9. Grid-Xinanjiang Distributed Hydrologic Model

    NASA Astrophysics Data System (ADS)

    Li, Z.; Yao, C.; Yu, Z.

    2009-12-01

    The grid-based distributed Xinanjiang (Grid-Xinanjiang) model, which combines a well-tested conceptual rainfall-runoff model and a physically based flow routing model, has been developed for hydrologic process simulation and flood forecasting. The DEM is utilized to derive the flow direction, routing sequence, and hillslope and channel slopes. The developed model includes canopy interception, direct channel precipitation, evapotranspiration, as well as runoff generation via the saturation excess mechanism. A diffusion wave considering the influence of upstream inflow, direct channel precipitation and flow partition to the channels is developed to route the hillslope and channel flow on a cell basis. The Grid-Xinanjiang model is applied at a 1-km grid scale in a nested basin located in the Huaihe basin, China. The basin, with a drainage area of 2692.7 km2, contains five internal points where observed streamflow data are available, and is used to evaluate the developed model's ability to simulate hydrologic processes within the basin. Calibration and verification of the Grid-Xinanjiang model are carried out at both daily and hourly time steps. The model is assessed by comparing streamflow and water stage simulations to observations at the basin outlet and gauging stations within the basin, and the results are also compared with those simulated with the original Xinanjiang model. The results indicate that the parameter estimation approach is efficient and the developed model can forecast the streamflow and stage hydrograph well.

  10. A model for non-monotonic intensity coding

    PubMed Central

    Nehrkorn, Johannes; Tanimoto, Hiromu; Herz, Andreas V. M.; Yarali, Ayse

    2015-01-01

    Peripheral neurons of most sensory systems increase their response with increasing stimulus intensity. Behavioural responses, however, can be specific to some intermediate intensity level whose particular value might be innate or associatively learned. Learning such a preference requires an adjustable transformation from a monotonic stimulus representation at the sensory periphery to a non-monotonic representation for the motor command. How do neural systems accomplish this task? We tackle this general question focusing on odour-intensity learning in the fruit fly, whose first- and second-order olfactory neurons show monotonic stimulus-response curves. Nevertheless, flies form associative memories specific to particular trained odour intensities. Thus, downstream of the first two olfactory processing layers, odour intensity must be re-coded to enable intensity-specific associative learning. We present a minimal, feed-forward, three-layer circuit, which implements the required transformation by combining excitation, inhibition, and, as a decisive third element, homeostatic plasticity. Key features of this circuit motif are consistent with the known architecture and physiology of the fly olfactory system, whereas alternative mechanisms are either not composed of simple, scalable building blocks or not compatible with physiological observations. The simplicity of the circuit and the robustness of its function under parameter changes make this computational motif an attractive candidate for tuneable non-monotonic intensity coding. PMID:26064666
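
    A generic illustration (not the paper's circuit, which additionally relies on homeostatic plasticity and matches fly olfactory physiology): subtracting two monotonic sigmoidal responses with different thresholds yields an output peaked at an intermediate intensity, and shifting the inhibitory threshold moves the peak, which is the kind of adjustability intensity-specific learning requires. All parameter values are illustrative.

```python
import numpy as np

# Generic sketch: the difference of two monotonic sigmoids with different thresholds
# gives a non-monotonic, intermediate-intensity-tuned response.
def sigmoid(x, threshold, slope=2.0):
    return 1.0 / (1.0 + np.exp(-slope * (x - threshold)))

intensity = np.linspace(0, 10, 201)              # stimulus intensity axis (arbitrary units)
excitation = sigmoid(intensity, threshold=3.0)
inhibition = sigmoid(intensity, threshold=6.0)   # higher threshold: inhibition kicks in later
output = np.maximum(excitation - inhibition, 0.0)

peak = intensity[np.argmax(output)]
print(f"output peaks near intensity {peak:.1f}") # lies between the two thresholds
```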

  11. Higher-order ionosphere modeling for CODE's next reprocessing activities

    NASA Astrophysics Data System (ADS)

    Lutz, S.; Schaer, S.; Meindl, M.; Dach, R.; Steigenberger, P.

    2009-12-01

    CODE (the Center for Orbit Determination in Europe) is a joint venture between the Astronomical Institute of the University of Bern (AIUB, Bern, Switzerland), the Federal Office of Topography (swisstopo, Wabern, Switzerland), the Federal Agency for Cartography and Geodesy (BKG, Frankfurt am Main, Germany), and the Institut für Astronomische und Physikalische Geodäsie of the Technische Universität München (IAPG/TUM, Munich, Germany). It acts as one of the global analysis centers of the International GNSS Service (IGS) and participates in the first IGS reprocessing campaign, a full reanalysis of GPS data collected since 1994. For a future reanalysis of the IGS data it is planned to consider not only first-order but also higher-order ionosphere terms in the space geodetic observations. Several works (e.g. Fritsche et al. 2005) have shown a significant and systematic influence of these effects on the analysis results. The development version of the Bernese Software used at CODE is expanded by the ability to assign additional (scaling) parameters to each considered higher-order ionosphere term. By this, each correction term can be switched on and off on the normal-equation level and, moreover, the significance of each correction term may be verified on the observation level for different ionosphere conditions.

  12. Code interoperability and standard data formats in quantum chemistry and quantum dynamics: The Q5/D5Cost data model.

    PubMed

    Rossi, Elda; Evangelisti, Stefano; Laganà, Antonio; Monari, Antonio; Rampino, Sergio; Verdicchio, Marco; Baldridge, Kim K; Bendazzoli, Gian Luigi; Borini, Stefano; Cimiraglia, Renzo; Angeli, Celestino; Kallay, Peter; Lüthi, Hans P; Ruud, Kenneth; Sanchez-Marin, José; Scemama, Anthony; Szalay, Peter G; Tajti, Attila

    2014-03-30

    Code interoperability and the search for domain-specific standard data formats represent critical issues in many areas of computational science. The advent of novel computing infrastructures such as computational grids and clouds makes these issues even more urgent. The design and implementation of a common data format for quantum chemistry (QC) and quantum dynamics (QD) computer programs is discussed with reference to the research performed in the course of two Collaboration in Science and Technology Actions. The specific data models adopted, Q5Cost and D5Cost, are shown to work for a number of interoperating codes, regardless of the type and amount of information (small or large datasets) to be exchanged. The codes are either interfaced directly, or transfer data by means of wrappers; both types of data exchange are supported by the Q5/D5Cost library. Further, the exchange of data between QC and QD codes is addressed. As a proof of concept, the H + H2 reaction is discussed. The proposed scheme is shown to provide an excellent basis for cooperative code development, even across domain boundaries. Moreover, the scheme presented is found to be useful also as a production tool in the grid distributed computing environment.

  13. Turbine Internal and Film Cooling Modeling For 3D Navier-Stokes Codes

    NASA Technical Reports Server (NTRS)

    DeWitt, Kenneth; Garg, Vijay; Ameri, Ali

    2005-01-01

    The aim of this research project is to make use of NASA Glenn on-site computational facilities in order to develop, validate and apply aerodynamic, heat transfer, and turbine cooling models for use in advanced 3D Navier-Stokes Computational Fluid Dynamics (CFD) codes such as the Glenn-HT code. Specific areas of effort include: application of the Glenn-HT code to specific configurations made available under the Turbine Based Combined Cycle (TBCC) and Ultra Efficient Engine Technology (UEET) projects; and validating the use of a multi-block code for the time-accurate computation of the detailed flow and heat transfer of cooled turbine airfoils. The goal of the current research is to improve the predictive ability of the Glenn-HT code. This will enable one to design more efficient turbine components for both aviation and power generation. The models will be tested against specific configurations provided by NASA Glenn.

  14. Spatio-temporal Modeling of Mosquito Distribution

    NASA Astrophysics Data System (ADS)

    Dumont, Y.; Dufourd, C.

    2011-11-01

    We consider a quasilinear parabolic system to model mosquito displacement. In order to use vector control tools, such as insecticides, and mechanical control efficiently, it is necessary to provide density estimates of mosquito populations, taking into account the environment and entomological knowledge. After a brief introduction to mosquito dispersal modeling, we present some theoretical results. Then, considering a compartmental approach, we get a quasilinear system of PDEs. Using the time-splitting approach and appropriate numerical methods for each operator, we construct a reliable numerical scheme. Considering vector control scenarios, we show that the environment can have a strong influence on mosquito distribution and on the efficiency of vector control tools.

  15. Quantitative mass distribution models for Mare Orientale

    NASA Technical Reports Server (NTRS)

    Sjogren, W. L.; Smith, J. C.

    1976-01-01

    Six theoretical models for the mass distribution of Mare Orientale were tested using five gravity profiles extracted from radio-tracking data of orbiting spacecraft. The models with surface mass and moho relief produced the best results. Although there is a mascon-type anomaly in the central maria region, Mare Orientale is a large negative gravity anomaly. This is produced primarily by the empty ring basin. Had the basin filled with maria material it seems likely that it would have produced a mascon such as those presently existing in flooded frontside circular basins.

  16. Surveying Multidisciplinary Aspects in Real-Time Distributed Coding for Wireless Sensor Networks

    PubMed Central

    Braccini, Carlo; Davoli, Franco; Marchese, Mario; Mongelli, Maurizio

    2015-01-01

    Wireless Sensor Networks (WSNs), where a multiplicity of sensors observe a physical phenomenon and transmit their measurements to one or more sinks, pertain to the class of multi-terminal source and channel coding problems of Information Theory. In this category, “real-time” coding is often encountered for WSNs, referring to the problem of finding the minimum distortion (according to a given measure), under transmission power constraints, attainable by encoding and decoding functions, with stringent limits on delay and complexity. On the other hand, the Decision Theory approach seeks to determine the optimal coding/decoding strategies or some of their structural properties. Since encoder(s) and decoder(s) possess different information, though sharing a common goal, the setting here is that of Team Decision Theory. A more pragmatic vision rooted in Signal Processing consists of fixing the form of the coding strategies (e.g., to linear functions) and, consequently, finding the corresponding optimal decoding strategies and the achievable distortion, generally by applying parametric optimization techniques. All approaches have a long history of past investigations and recent results. The goal of the present paper is to provide the taxonomy of the various formulations, a survey of the vast related literature, examples from the authors' own research, and some highlights on the inter-play of the different theories. PMID:25633597

  17. Purifying selection shapes the coincident SNP distribution of primate coding sequences.

    PubMed

    Chen, Chia-Ying; Hung, Li-Yuan; Wu, Chan-Shuo; Chuang, Trees-Juen

    2016-01-01

    Genome-wide analysis has observed an excess of coincident single nucleotide polymorphisms (coSNPs) at human-chimpanzee orthologous positions, and suggested that this is due to cryptic variation in the mutation rate. While this phenomenon primarily corresponds with non-coding coSNPs, the situation in coding sequences remains unclear. Here we calculate the observed-to-expected ratio of coSNPs (coSNPO/E) to estimate the prevalence of human-chimpanzee coSNPs, and show that the excess of coSNPs is also present in coding regions. Intriguingly, coSNPO/E is much higher at zero-fold than at nonzero-fold degenerate sites; such a difference is due to an elevation of coSNPO/E at zero-fold degenerate sites, rather than a reduction at nonzero-fold degenerate ones. These trends are independent of chimpanzee subpopulation, population size, or sequencing techniques, and hold in broad generality across primates. We find that this discrepancy cannot be fully explained by sequence contexts, shared ancestral polymorphisms, SNP density, and recombination rate, and that coSNPO/E in coding sequences is significantly influenced by purifying selection. We also show that selection and mutation rate affect coSNPO/E independently, and coSNPs tend to be less damaging and more correlated with human diseases than non-coSNPs. These results suggest that coSNPs may represent a "signature" during primate protein evolution. PMID:27255481

  18. Purifying selection shapes the coincident SNP distribution of primate coding sequences

    PubMed Central

    Chen, Chia-Ying; Hung, Li-Yuan; Wu, Chan-Shuo; Chuang, Trees-Juen

    2016-01-01

    Genome-wide analysis has observed an excess of coincident single nucleotide polymorphisms (coSNPs) at human-chimpanzee orthologous positions, and suggested that this is due to cryptic variation in the mutation rate. While this phenomenon primarily corresponds with non-coding coSNPs, the situation in coding sequences remains unclear. Here we calculate the observed-to-expected ratio of coSNPs (coSNPO/E) to estimate the prevalence of human-chimpanzee coSNPs, and show that the excess of coSNPs is also present in coding regions. Intriguingly, coSNPO/E is much higher at zero-fold than at nonzero-fold degenerate sites; such a difference is due to an elevation of coSNPO/E at zero-fold degenerate sites, rather than a reduction at nonzero-fold degenerate ones. These trends are independent of chimpanzee subpopulation, population size, or sequencing techniques, and hold in broad generality across primates. We find that this discrepancy cannot be fully explained by sequence contexts, shared ancestral polymorphisms, SNP density, and recombination rate, and that coSNPO/E in coding sequences is significantly influenced by purifying selection. We also show that selection and mutation rate affect coSNPO/E independently, and coSNPs tend to be less damaging and more correlated with human diseases than non-coSNPs. These results suggest that coSNPs may represent a “signature” during primate protein evolution. PMID:27255481

  19. Modeling wealth distribution in growing markets

    NASA Astrophysics Data System (ADS)

    Basu, Urna; Mohanty, P. K.

    2008-10-01

    We introduce an auto-regressive model which captures the growing nature of realistic markets. In our model agents do not trade with other agents; they interact indirectly only through a market. The change of their wealth depends linearly on how much they invest, and stochastically on how much they gain from the noisy market. The average wealth of the market could be fixed or growing. We show that in a market where the investment capacities of agents differ, the average wealth of agents generically follows the Pareto law. In a few cases, the individual distribution of wealth of every agent could also be obtained exactly. We also show that the underlying dynamics of other well-studied kinetic models of markets can be mapped to the dynamics of our auto-regressive model.
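
    An illustrative stochastic wealth-update simulation in the spirit of the description above: each agent invests a fixed fraction of its wealth and the invested amount earns a noisy market return common to all agents. The update rule, the return distribution, and all parameter values are placeholders, not the authors' exact auto-regressive equations.

```python
import numpy as np

# Illustrative simulation (not the authors' exact model): wealth change is linear in
# the invested amount and stochastic through a common noisy market return; agents
# differ in their investment capacity p_i.
rng = np.random.default_rng(0)
n_agents, n_steps = 5000, 2000
p = rng.uniform(0.01, 0.5, size=n_agents)        # heterogeneous investment capacities
w = np.ones(n_agents)

for _ in range(n_steps):
    r = rng.normal(loc=0.002, scale=0.05)        # noisy market return this step
    w += p * w * r                               # wealth change linear in invested amount
    w = np.maximum(w, 1e-12)                     # keep wealth non-negative

# Inspect the upper tail: heavier-investing agents end up holding most of the wealth.
top1 = np.sort(w)[-n_agents // 100:].sum() / w.sum()
print(f"share of wealth held by the top 1%: {top1:.2f}")
```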

  20. XSOR codes users manual

    SciTech Connect

    Jow, Hong-Nian; Murfin, W.B.; Johnson, J.D.

    1993-11-01

    This report describes the source term estimation codes, XSORs. The codes are written for three pressurized water reactors (Surry, Sequoyah, and Zion) and two boiling water reactors (Peach Bottom and Grand Gulf). The ensemble of codes has been named ``XSOR``. The purpose of the XSOR codes is to estimate the source terms which would be released to the atmosphere in severe accidents. A source term includes the release fractions of several radionuclide groups, the timing and duration of releases, the rates of energy release, and the elevation of releases. The codes have been developed by Sandia National Laboratories for the US Nuclear Regulatory Commission (NRC) in support of the NUREG-1150 program. The XSOR codes are fast-running parametric codes and are used as surrogates for detailed mechanistic codes. The XSOR codes also provide the capability to explore the phenomena and their uncertainty which are not currently modeled by the mechanistic codes. The uncertainty distributions of input parameters may be used by an XSOR code to estimate the uncertainty of source terms.

  1. Inverse distributed hydrological modelling of Alpine catchments

    NASA Astrophysics Data System (ADS)

    Kunstmann, H.; Krause, J.; Mayr, S.

    2006-06-01

    Even in physically based distributed hydrological models, various remaining parameters must be estimated for each sub-catchment. This can involve tremendous effort, especially when the number of sub-catchments is large and the applied hydrological model is computationally expensive. Automatic parameter estimation tools can significantly facilitate the calibration process. Hence, we combined the nonlinear parameter estimation tool PEST with the distributed hydrological model WaSiM. PEST is based on the Gauss-Marquardt-Levenberg method, a gradient-based nonlinear parameter estimation algorithm. WaSiM is a fully distributed hydrological model using physically based algorithms for most of the process descriptions. WaSiM was applied to the alpine/prealpine Ammer River catchment (southern Germany, 710 km2) at a 100×100 m2 horizontal resolution. The catchment is heterogeneous in terms of geology, pedology and land use and shows a complex orography (the difference in elevation is around 1600 m). Using the developed PEST-WaSiM interface, the hydrological model was calibrated by comparing simulated and observed runoff at eight gauges for the hydrologic year 1997 and validated for the hydrologic year 1993. For each sub-catchment four parameters had to be calibrated: the recession constants of direct runoff and interflow, the drainage density, and the hydraulic conductivity of the uppermost aquifer. Additionally, five snowmelt-specific parameters were adjusted for the entire catchment. Altogether, 37 parameters had to be calibrated. Additional a priori information (e.g. from flood hydrograph analysis) narrowed the parameter space of the solutions and reduced the non-uniqueness of the fitted values. A reasonable quality of fit was achieved. Discrepancies between modelled and observed runoff were also due to the small number of meteorological stations and corresponding interpolation artefacts in the orographically complex terrain. Application of a 2-dimensional numerical
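
    A toy illustration of the gradient-based calibration idea described above: a single linear-reservoir runoff model is calibrated to synthetic "observed" discharge with scipy's Levenberg-Marquardt least-squares solver, which belongs to the same Gauss-Marquardt-Levenberg family that PEST uses. The reservoir model, data, and starting value are assumptions for the sketch; this is not the PEST-WaSiM interface itself.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
precip = rng.gamma(shape=0.5, scale=4.0, size=365)        # synthetic daily precipitation (mm)

def simulate(k, precip, s0=10.0):
    """Single linear reservoir: outflow Q = k * S, storage S' = P - Q."""
    s, q = s0, []
    for p_day in precip:
        out = k * s
        s += p_day - out
        q.append(out)
    return np.array(q)

k_true = 0.15
q_obs = simulate(k_true, precip) + rng.normal(0, 0.05, size=precip.size)  # "observed" discharge

def residuals(theta):
    return simulate(theta[0], precip) - q_obs

fit = least_squares(residuals, x0=[0.05], method="lm")    # Levenberg-Marquardt estimation
print(fit.x[0], k_true)                                   # recovered vs. true recession constant
```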

  2. Measurement and modeling of strength distributions associated with grinding damage

    SciTech Connect

    Salem, J.A.; Nemeth, N.N.; Powers, L.M.

    1995-08-01

    The strength of a ceramic material is typically measured in accordance with ASTM C1161, which specifies that the machined specimens be ground uniaxially in the longitudinal direction and tested so that the maximum principal stress is longitudinal. Such a grinding process typically induces minimal damage in the transverse direction, but significant damage in the longitudinal direction, resulting in an anisotropic flaw distribution on the surface of the specimen. Additionally, investigations of the strength anisotropy due to grinding may provide a means to measure a material's strength response under mixed-mode (I and II) conditions, thereby providing information that can be applied to isotropic cases (e.g. polished or as-processed material). The objective of this work was to measure and model the effects of a typical uniaxial grinding process on the strength distribution of a ceramic material under various loading conditions. The fast-fracture strength of a sintered alpha silicon carbide was measured in four-point flexure with the principal stress oriented at angles between 0 and 90° relative to the grinding direction. Also, uniaxially ground plate specimens were loaded in biaxial flexure. Finally, flexure specimens were tested in an annealed condition to determine if the machining damage could be healed. Modeling of the strength distributions was done with two- and three-parameter Weibull models and shear-sensitive and shear-insensitive models. Alpha silicon carbide was chosen because it exhibits a very low fracture toughness, no crack growth resistance, high elastic modulus and a very low susceptibility to slow crack growth (static fatigue). Such properties should make this an ideal ceramic for the verification of fast-fracture reliability models and codes.
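
    A minimal example of the simplest of the strength-distribution models mentioned above: a two-parameter Weibull fit to a set of flexure strengths using scipy. The strength values are synthetic and purely illustrative.

```python
import numpy as np
from scipy import stats

# Two-parameter Weibull fit to flexure strengths (synthetic numbers, MPa).
strengths = np.array([295, 310, 322, 330, 341, 350, 355, 362, 370, 378,
                      385, 392, 401, 410, 425, 433, 447, 460, 472, 495], float)

# Fixing the location at zero gives the usual two-parameter form.
m, loc, sigma0 = stats.weibull_min.fit(strengths, floc=0)
print(f"Weibull modulus m = {m:.1f}, characteristic strength sigma_0 = {sigma0:.0f} MPa")

# Probability of failure at a given applied stress under this fit:
print(stats.weibull_min.cdf(400.0, m, loc=0, scale=sigma0))
```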

  3. Viscosity distribution in the mantle convection models

    NASA Astrophysics Data System (ADS)

    Trubitsyn, V. P.

    2016-09-01

    Viscosity is a fundamental property of the mantle which determines the global geodynamical processes. According to the microscopic theory of defects and laboratory experiments, viscosity depends exponentially on temperature and pressure, with the activation energy and activation volume as parameters. The existing laboratory measurements are conducted at much higher strain rates than in the mantle and have significant uncertainty. The data on postglacial rebound only allow the depth distribution of viscosity to be reconstructed. Therefore, spatial distributions (in depth and laterally) are as of now determined from models of mantle convection, which are calculated by the numerical solution of the convection equations together with the viscosity dependences on pressure and temperature (P-T dependences). The P-T dependences of viscosity which are presently used in the numerical modeling of convection give a large scatter in the estimates for the lower mantle, which reaches several orders of magnitude. In this paper, it is shown that it is possible to achieve agreement between the calculated depth distributions of viscosity throughout the entire mantle and the postglacial rebound data. For this purpose, the values of the activation volume and energy for the upper mantle can be taken from the laboratory experiments, and for the lower mantle the activation volume should be reduced by a factor of two at the 660-km phase transition boundary. Next, the reduction in viscosity by an order of magnitude revealed at depths below 2000 km by the postglacial rebound data can be accounted for by the presence of heavy hot material at the mantle bottom in the LLSVP zones. Models of the spatial distribution of viscosity throughout the entire mantle with the lithospheric plates are presented.
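
    For reference, the standard Arrhenius-type P-T dependence referred to above is η ∝ exp[(E* + p V*)/(R T)]; the sketch below evaluates it relative to a reference state and applies the abstract's halving of the activation volume below 660 km only schematically. The activation parameters, reference state, and example points are placeholders, not the paper's values.

```python
import numpy as np

# Arrhenius-type P-T dependence of mantle viscosity relative to a reference state:
# eta = eta_ref * exp[(E + p*V)/(R*T) - (E + p_ref*V)/(R*T_ref)]. Parameter values
# are placeholders; the halving of V below 660 km follows the abstract schematically.
R = 8.314  # J / (mol K)

def viscosity(T, p, depth_km, eta_ref=1e21, T_ref=1600.0, p_ref=0.0,
              E=3.0e5, V_upper=6e-6):
    V = V_upper if depth_km < 660 else 0.5 * V_upper   # reduce activation volume below 660 km
    return eta_ref * np.exp((E + p * V) / (R * T) - (E + p_ref * V) / (R * T_ref))

# Rough example points (temperature in K, pressure in Pa, depth in km)
print(viscosity(T=1600.0, p=1.0e10, depth_km=300))
print(viscosity(T=2500.0, p=8.0e10, depth_km=1500))
```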

  4. An Advanced simulation Code for Modeling Inductive Output Tubes

    SciTech Connect

    Thuc Bui; R. Lawrence Ives

    2012-04-27

    During the Phase I program, CCR completed several major building blocks for a 3D large signal, inductive output tube (IOT) code using modern computer language and programming techniques. These included a 3D, Helmholtz, time-harmonic, field solver with a fully functional graphical user interface (GUI), automeshing and adaptivity. Other building blocks included the improved electrostatic Poisson solver with temporal boundary conditions to provide temporal fields for the time-stepping particle pusher as well as the self electric field caused by time-varying space charge. The magnetostatic field solver was also updated to solve for the self magnetic field caused by time changing current density in the output cavity gap. The goal function to optimize an IOT cavity was also formulated, and the optimization methodologies were investigated.

  5. Modeling of Ionization Physics with the PIC Code OSIRIS

    SciTech Connect

    Deng, S.; Tsung, F.; Lee, S.; Lu, W.; Mori, W.B.; Katsouleas, T.; Muggli, P.; Blue, B.E.; Clayton, C.E.; O'Connell, C.; Dodd, E.; Decker, F.J.; Huang, C.; Hogan, M.J.; Hemker, R.; Iverson, R.H.; Joshi, C.; Ren, C.; Raimondi, P.; Wang, S.; Walz, D.; /Southern California U. /UCLA /SLAC

    2005-09-27

    When considering intense particle or laser beams propagating in dense plasma or gas, ionization plays an important role. Impact ionization and tunnel ionization may create new plasma electrons, altering the physics of wakefield accelerators, causing blue shifts in laser spectra, creating and modifying instabilities, etc. Here we describe the addition of an impact ionization package into the 3-D, object-oriented, fully parallel PIC code OSIRIS. We apply the simulation tool to simulate the parameters of the upcoming E164 Plasma Wakefield Accelerator experiment at the Stanford Linear Accelerator Center (SLAC). We find that impact ionization is dominated by the plasma electrons moving in the wake rather than the 30 GeV drive beam electrons. Impact ionization leads to a significant number of trapped electrons accelerated from rest in the wake.

  6. Comparison between fully distributed model and semi-distributed model in urban hydrology modeling

    NASA Astrophysics Data System (ADS)

    Ichiba, Abdellah; Gires, Auguste; Giangola-Murzyn, Agathe; Tchiguirinskaia, Ioulia; Schertzer, Daniel; Bompard, Philippe

    2013-04-01

    Water management in urban areas is becoming more and more complex, especially because of a rapid increase of impervious areas. There will also possibly be an increase of extreme precipitation due to climate change. The aims of the devices implemented to handle the large amount of water generated by urban areas, such as storm water retention basins, are usually twofold: ensure pluvial flood protection and water depollution. These two aims imply opposite management strategies. To optimize the use of these devices there is a need to implement urban hydrological models and improve fine-scale rainfall estimation, which is the most significant input. In this paper we compare two models and their sensitivity to small-scale rainfall variability on a 2.15 km2 urban area located in the County of Val-de-Marne (South-East of Paris, France). The average impervious coefficient is approximately 34%. In this work two types of models are used. The first one is CANOE, which is semi-distributed. Such models are widely used by practitioners for urban hydrology modeling and urban water management. Indeed, they are easily configurable and the computation time is reduced, but these models do not take fully into account either the variability of the physical properties or the variability of the precipitation. An alternative is to use distributed models that are harder to configure and require a greater computation time, but they enable a deeper analysis (especially at small scales and upstream) of the processes at stake. We used the Multi-Hydro fully distributed model developed at the Ecole des Ponts ParisTech. It is an interacting core between open source software packages, each of them representing a portion of the water cycle in urban environment. Four heavy rainfall events that occurred between 2009 and 2011 are analyzed. The data comes from the Météo-France radar mosaic and the resolution is 1 km in space and 5 min in time. The closest radar of the Météo-France network is

  7. Thrust Chamber Modeling Using Navier-Stokes Equations: Code Documentation and Listings. Volume 2

    NASA Technical Reports Server (NTRS)

    Daley, P. L.; Owens, S. F.

    1988-01-01

    A copy of the PHOENICS input files and FORTRAN code developed for the modeling of thrust chambers is given. These copies are contained in the Appendices. The listings are contained in Appendices A through E. Appendix A describes the input statements relevant to thrust chamber modeling as well as the FORTRAN code developed for the Satellite program. Appendix B describes the FORTRAN code developed for the Ground program. Appendices C through E contain copies of the Q1 (input) file, the Satellite program, and the Ground program respectively.

  8. CURRENT - A Computer Code for Modeling Two-Dimensional, Chemically Reacting, Low Mach Number Flows

    SciTech Connect

    Winters, W.S.; Evans, G.H.; Moen, C.D.

    1996-10-01

    This report documents CURRENT, a computer code for modeling two-dimensional, chemically reacting, low Mach number flows including the effects of surface chemistry. CURRENT is a finite volume code based on the SIMPLER algorithm. Additional convergence acceleration for low Peclet number flows is provided using improved boundary condition coupling and preconditioned gradient methods. Gas-phase and surface chemistry is modeled using the CHEMKIN software libraries. The CURRENT user interface has been designed to be compatible with the Sandia-developed mesh generator and post-processor ANTIPASTO and the post-processor TECPLOT. This report describes the theory behind the code and also serves as a user's manual.

  9. FREYA-a new Monte Carlo code for improved modeling of fission chains

    SciTech Connect

    Hagmann, C A; Randrup, J; Vogt, R L

    2012-06-12

    A new simulation capability for modeling individual fission events and chains and the transport of fission products in materials is presented. FREYA (Fission Reaction Event Yield Algorithm) is a Monte Carlo code for generating fission events that provides correlated kinematic information for prompt neutrons, gammas, and fragments. As a standalone code, FREYA calculates quantities such as multiplicity-energy, angular, and gamma-neutron energy-sharing correlations. To study materials with multiplication, shielding effects, and detectors, we have integrated FREYA into the general-purpose Monte Carlo code MCNP. This new tool will allow more accurate modeling of detector responses, including correlations, and the development of SNM detectors with increased sensitivity.

  10. Pseudoabsence Generation Strategies for Species Distribution Models

    PubMed Central

    Hanberry, Brice B.; He, Hong S.; Palik, Brian J.

    2012-01-01

    Background: Species distribution models require selection of species, study extent and spatial unit, statistical methods, variables, and assessment metrics. If absence data are not available, another important consideration is pseudoabsence generation. Different strategies for pseudoabsence generation can produce varying spatial representation of species. Methodology: We considered model outcomes from four different strategies for generating pseudoabsences. We generated pseudoabsences randomly by 1) selection from the entire study extent, 2) a two-step process of selection first from the entire study extent, followed by selection of pseudoabsences from areas with predicted probability <25%, 3) selection from plots surveyed without detection of species presence, and 4) a two-step process of selection first for pseudoabsences from plots surveyed without detection of species presence, followed by selection of pseudoabsences from areas with predicted probability <25%. We used Random Forests as our statistical method and sixteen predictor variables to model tree species with at least 150 records from Forest Inventory and Analysis surveys in the Laurentian Mixed Forest province of Minnesota. Conclusions: Pseudoabsence generation strategy completely affected the area predicted as present for species distribution models and may be one of the most influential determinants of models. All the pseudoabsence strategies produced mean AUC values of at least 0.87. More importantly than accuracy metrics, the two-step strategies over-predicted species presence, due to too much environmental distance between the pseudoabsences and recorded presences, whereas models based on random pseudoabsences under-predicted species presence, due to too little environmental distance between the pseudoabsences and recorded presences. Models using pseudoabsences from surveyed plots produced a balance between areas with high and low predicted probabilities and the strongest relationship between

  11. Climate Model Evaluation in Distributed Environments.

    NASA Astrophysics Data System (ADS)

    Braverman, A. J.

    2014-12-01

    As the volume of climate-model-generated and observational data increases, it has become infeasible to perform large-scale comparisons of model output against observations by moving the data to a central location. Data reduction techniques, such as gridding or subsetting, can reduce data volume, but also sacrifice information about spatial and temporal variability that may be important for the comparison. Alternatively, it is generally recognized that "moving the computation to the data" is more efficient for leveraging large data sets. In the spirit of the latter approach, we describe a new methodology for comparing time series structure in model-generated and observational time series when those data are stored on different computers. The method involves simulating the sampling distribution of the difference between a statistic computed from the model output and the same statistic computed from the observed data. This is accomplished with separate wavelet decompositions of the two time series on their respective local machines, and the transmission of only a very small set of information computed from the wavelet coefficients. The smaller that set is, the cheaper it is to transmit, but also the less accurate the result will be. From the standpoint of the analysis of distributed data, the main question concerns the nature of that trade-off. In this talk, we describe the comparison methodology and the results of some preliminary studies on the cost-accuracy trade-off.
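
    A sketch of the "move the computation to the data" idea: each site computes a small per-scale wavelet summary of its own series (here, detail-coefficient variances via PyWavelets) and only those few numbers are exchanged. The synthetic series, wavelet choice, and summary statistic are assumptions; the method described above additionally simulates the sampling distribution of the difference statistic, which this sketch omits.

```python
import numpy as np
import pywt  # PyWavelets

def local_summary(series, wavelet="db4", level=5):
    """Small per-scale summary computed locally: variance of each detail-coefficient band."""
    coeffs = pywt.wavedec(series, wavelet, level=level)
    return np.array([np.var(c) for c in coeffs[1:]])   # one number per detail scale

rng = np.random.default_rng(42)
model_series = np.cumsum(rng.normal(size=2048))         # stands in for model output (machine A)
obs_series = np.cumsum(rng.normal(size=2048))           # stands in for observations (machine B)

summary_model = local_summary(model_series)              # computed where the model data live
summary_obs = local_summary(obs_series)                  # computed where the observations live
print(summary_model - summary_obs)                       # only these few numbers are transmitted
```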

  12. Modelling the Distribution of Globular Cluster Masses

    NASA Astrophysics Data System (ADS)

    McLaughlin, Dean E.; Pudritz, Ralph E.

    1994-12-01

    On the basis of various observational evidence, we argue that the overall present-day distribution of mass in globular cluster systems around galaxies as diverse as M87 and the Milky Way may be in large part reflective of robust formation processes, and little influenced by subsequent dynamical evolution of the globulars. With this in mind, Harris & Pudritz (1994, ApJ, 429, 177) have recently suggested that globular clusters with a range of masses are formed in pregalactic ``supergiant molecular clouds'' which grow by (coalescent) binary collisions with other clouds. We develop this idea more fully by solving for the steady-state mass distributions resulting from such coalescent encounters, with provisions made for the disruption of high-mass clouds due to star formation. Agglomeration models have been proposed in various guises to explain the mass spectra of planetesimals, stars, giant molecular clouds and their cores, and galaxies. The present theory generalizes aspects of these models, and appears able to account for the distribution of globular cluster masses at least above the so-called ``turnover'' of the globular cluster luminosity function.

  13. SPEEDES for distributed information enterprise modeling

    NASA Astrophysics Data System (ADS)

    Hanna, James P.; Hillman, Robert G.

    2002-07-01

    The Air Force is developing a Distributed Information Enterprise Modeling and Simulation (DIEMS) framework under sponsorship of the High Performance Computer Modernization Office Common High Performance Computing Software Support Initiative (HPCMO/CHSSI). The DIEMS framework provides a design analysis environment for deployable distributed information management systems. DIEMS establishes the necessary analysis capability allowing developers to identify and mitigate programmatic risk early within the development cycle to allow successful deployment of the associated systems. The enterprise-modeling framework builds upon the Synchronous Parallel Environment for Emulation and Discrete-Event Simulation (SPEEDES) foundation. This simulation framework will utilize 'Challenge Problem' class resources to address more than five million information objects and hundreds of thousands of clients comprising the future information based force structure. The simulation framework will be capable of assessing deployment aspects such as security, quality of service, and fault tolerance. SPEEDES provides an ideal foundation to support simulation of distributed information systems on a multiprocessor platform. SPEEDES allows the simulation builder to perform optimistic parallel processing on high performance computers, networks of workstations, or combinations of networked computers and HPC platforms.

  14. An object-oriented framework for magnetic-fusion modeling and analysis codes

    SciTech Connect

    Cohen, R H; Yang, T Y Brian

    1999-03-04

    The magnetic-fusion energy (MFE) program, like many other scientific and engineering activities, has a need to efficiently develop complex modeling codes which combine detailed models of components to make an integrated model of a device, as well as a rich supply of legacy code that could provide the component models. There is also growing recognition in many technical fields of the desirability of steerable software: computer programs whose functionality can be changed by the user as they are run. This project had as its goals the development of two key pieces of infrastructure that are needed to combine existing code modules, written mainly in Fortran, into flexible, steerable, object-oriented integrated modeling codes for magnetic-fusion applications. These two pieces are (1) a set of tools to facilitate the interfacing of Fortran code with a steerable object-oriented framework (which we have chosen to be based on Python, an object-oriented interpreted language), and (2) a skeleton for the integrated modeling code which defines the relationships between the modules. The first of these activities obviously has immediate applicability to a spectrum of projects; the second is more focussed on the MFE application, but may be of value as an example for other applications.

  15. RELAP5/MOD3 code manual. Volume 4, Models and correlations

    SciTech Connect

    1995-08-01

    The RELAP5 code has been developed for best-estimate transient simulation of light water reactor coolant systems during postulated accidents. The code models the coupled behavior of the reactor coolant system and the core for loss-of-coolant accidents and operational transients such as anticipated transient without scram, loss of offsite power, loss of feedwater, and loss of flow. A generic modeling approach is used that permits simulating a variety of thermal hydraulic systems. Control system and secondary system components are included to permit modeling of plant controls, turbines, condensers, and secondary feedwater systems. RELAP5/MOD3 code documentation is divided into seven volumes: Volume I presents modeling theory and associated numerical schemes; Volume II details instructions for code application and input data preparation; Volume III presents the results of developmental assessment cases that demonstrate and verify the models used in the code; Volume IV discusses in detail RELAP5 models and correlations; Volume V presents guidelines that have evolved over the past several years through the use of the RELAP5 code; Volume VI discusses the numerical scheme used in RELAP5; and Volume VII presents a collection of independent assessment calculations.

  16. Hypervelocity Impact Test Fragment Modeling: Modifications to the Fragment Rotation Analysis and Lightcurve Code

    NASA Technical Reports Server (NTRS)

    Gouge, Michael F.

    2011-01-01

    Hypervelocity impact tests on test satellites are performed by members of the orbital debris scientific community in order to understand and typify the on-orbit collision breakup process. By analysis of these test satellite fragments, the fragment size and mass distributions are derived and incorporated into various orbital debris models. These same fragments are currently being put to new use using emerging technologies. Digital models of these fragments are created using a laser scanner. A group of computer programs referred to as the Fragment Rotation Analysis and Lightcurve code uses these digital representations in a multitude of ways that describe, measure, and model on-orbit fragments and fragment behavior. The Dynamic Rotation subroutine generates all of the possible reflected intensities from a scanned fragment as if it were observed to rotate dynamically while in orbit about the Earth. This calls an additional subroutine that graphically displays the intensities and the resulting frequency of those intensities as a range of solar phase angles in a Probability Density Function plot. This document reports the additions and modifications to the subset of the Fragment Rotation Analysis and Lightcurve concerned with the Dynamic Rotation and Probability Density Function plotting subroutines.

  17. Implementation of a 3D halo neutral model in the TRANSP code and application to projected NSTX-U plasmas

    NASA Astrophysics Data System (ADS)

    Medley, S. S.; Liu, D.; Gorelenkova, M. V.; Heidbrink, W. W.; Stagner, L.

    2016-02-01

    A 3D halo neutral code developed at the Princeton Plasma Physics Laboratory and implemented for analysis using the TRANSP code is applied to projected National Spherical Torus eXperiment-Upgrade (NSTX-U) plasmas. The legacy TRANSP code did not handle halo neutrals properly, since they were distributed over the plasma volume rather than remaining in the vicinity of the neutral beam footprint as is actually the case. The 3D halo neutral code uses a ‘beam-in-a-box’ model that encompasses both injected beam neutrals and resulting halo neutrals. Upon deposition by charge exchange, a subset of the full, one-half and one-third beam energy components produce first-generation halo neutrals that are tracked through successive generations until an ionization event occurs or the descendant halos exit the box. The 3D halo neutral model and neutral particle analyzer (NPA) simulator in the TRANSP code have been benchmarked with the Fast-Ion D-Alpha simulation (FIDAsim) code, which provides Monte Carlo simulations of beam neutral injection, attenuation, halo generation, halo spatial diffusion, and photoemission processes. When using the same atomic physics database, TRANSP and FIDAsim simulations achieve excellent agreement on the spatial profile and magnitude of beam and halo neutral densities and the NPA energy spectrum. The simulations show that the halo neutral density can be comparable to the beam neutral density. These halo neutrals can double the NPA flux, but they have minor effects on the NPA energy spectrum shape. The TRANSP and FIDAsim simulations also suggest that the magnitudes of beam and halo neutral densities are relatively sensitive to the choice of the atomic physics databases.

  18. A conceptual, distributed snow redistribution model

    NASA Astrophysics Data System (ADS)

    Frey, S.; Holzmann, H.

    2015-11-01

    When conceptual hydrological models using a temperature-index approach for snowmelt are applied to high alpine areas, accumulation of snow over several years can often be observed. Some of the reasons why these "snow towers" do not exist in nature are vertical and lateral transport processes. While snow transport models have been developed using grid cell sizes of tens to hundreds of square metres and have been applied in several catchments, no model exists for coarser cell sizes of 1 km2, which is a common resolution for meso- and large-scale hydrologic modelling (hundreds to thousands of square kilometres). In this paper we present an approach that uses only gravity, snow density as a proxy for the age of the snow cover, and land-use information to redistribute snow in alpine basins. The results are based on the hydrological modelling of the Austrian Inn Basin in Tyrol, Austria, more specifically the Ötztaler Ache catchment, but the findings hold for other tributaries of the river Inn. This transport model is implemented in the distributed rainfall-runoff model COSERO (Continuous Semi-distributed Runoff). The results of both model concepts, with and without consideration of lateral snow redistribution, are compared against observed discharge and snow-covered areas derived from MODIS satellite images. By means of the snow redistribution concept, snow accumulation over several years can be prevented, and the snow depletion curve compared with MODIS (Moderate Resolution Imaging Spectroradiometer) data could be improved as well. Over a 7-year period the standard model would lead to snow accumulation of approximately 2900 mm SWE (snow water equivalent) in high-elevation regions, whereas the updated version of the model does not show accumulation and also predicts discharge more accurately, leading to a Kling-Gupta efficiency of 0.93 instead of 0.9. A further improvement can be shown in the comparison of MODIS snow cover data and the calculated depletion curve, where
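
    For illustration only, the sketch below shows how a gravity-driven, density-gated redistribution step of the kind described above might look on a 1 km grid. The function name, threshold values and the simple lowest-neighbour rule are assumptions for this example, not the actual COSERO formulation.

      import numpy as np

      def redistribute_snow(swe, elev, density, forest, slide_frac=0.2,
                            drop_threshold=200.0, max_density=400.0):
          """One conceptual redistribution step: cells whose drop to the lowest
          neighbour exceeds `drop_threshold` (m) move a fraction of their fresh,
          low-density snow downhill, unless the cell is forested. Illustrative
          sketch only; the COSERO implementation differs in detail."""
          out = swe.copy()
          ny, nx = swe.shape
          for i in range(1, ny - 1):
              for j in range(1, nx - 1):
                  if forest[i, j] or density[i, j] > max_density:
                      continue                      # settled snow or forest: keep in place
                  window = elev[i-1:i+2, j-1:j+2]
                  k = np.unravel_index(np.argmin(window), window.shape)
                  if elev[i, j] - window[k] > drop_threshold:
                      moved = slide_frac * swe[i, j]
                      out[i, j] -= moved
                      out[i - 1 + k[0], j - 1 + k[1]] += moved
          return out

      # toy usage on a synthetic 50 x 50 cell digital elevation model
      rng = np.random.default_rng(0)
      elev = np.cumsum(rng.normal(0.0, 150.0, size=(50, 50)), axis=0)
      swe_new = redistribute_snow(np.full((50, 50), 500.0), elev,
                                  np.full((50, 50), 250.0), np.zeros((50, 50), bool))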

  19. New model for nucleon generalized parton distributions

    SciTech Connect

    Radyushkin, Anatoly V.

    2014-01-01

    We describe a new type of model for nucleon generalized parton distributions (GPDs) H and E. The models are based on the fact that nucleon GPDs require the use of two forms of double distribution (DD) representations. The outcome of the new treatment is that the usual DD+D-term construction should be amended by an extra term, ξ E₊¹(x, ξ), which has the DD structure (α/β) e(β, α), with e(β, α) being the DD that generates the GPD E(x, ξ). We found that this function, unlike the D-term, has support in the whole −1 ≤ x ≤ 1 region. Furthermore, it does not vanish at the border points |x| = ξ.
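
    For orientation, double distributions generate GPDs through the standard reduction formula; the schematic expressions below are a reconstruction under common conventions (which vary between papers) of how a DD f(β, α) and the extra term described in the abstract produce functions of (x, ξ):

      \[
        H(x,\xi) \;=\; \int_{-1}^{1} d\beta \int_{-1+|\beta|}^{1-|\beta|} d\alpha \,
          \delta(x-\beta-\alpha\xi)\, f(\beta,\alpha) \;+\; \text{D-term}, \qquad
        \xi E_{+}^{1}(x,\xi) \;=\; \xi \int d\beta\, d\alpha\,
          \delta(x-\beta-\alpha\xi)\, \frac{\alpha}{\beta}\, e(\beta,\alpha).
      \]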

  20. The MiRa/THESIS3D-code package for resonator design and modeling of millimeter-wave material processing

    SciTech Connect

    Feher, L.; Link, G.; Thumm, M.

    1996-12-31

    Precise knowledge of millimeter-wave oven properties and design studies have to be obtained by 3D numerical field calculations. A simulation code solving the electromagnetic field problem based on a covariant raytracing scheme (MiRa-Code) has been developed. Time-dependent electromagnetic field-material interactions during sintering, as well as the heat transfer processes within the samples, have been investigated. A numerical code solving the nonlinear heat transfer problem due to millimeter-wave heating has been developed (THESIS3D-Code). For a self-consistent sintering simulation, a zip interface between both codes exchanging the time-advancing fields and material parameters is implemented. Recent results and progress on calculations of field distributions in large overmoded resonators, as well as results on modeling heating of materials with millimeter waves, are presented in this paper. The calculations are compared to experiments.

  1. Relativistic modeling capabilities in PERSEUS extended MHD simulation code for HED plasmas

    NASA Astrophysics Data System (ADS)

    Hamlin, Nathaniel D.; Seyler, Charles E.

    2014-12-01

    We discuss the incorporation of relativistic modeling capabilities into the PERSEUS extended MHD simulation code for high-energy-density (HED) plasmas, and present the latest hybrid X-pinch simulation results. The use of fully relativistic equations enables the model to remain self-consistent in simulations of such relativistic phenomena as X-pinches and laser-plasma interactions. By suitable formulation of the relativistic generalized Ohm's law as an evolution equation, we have reduced the recovery of primitive variables, a major technical challenge in relativistic codes, to a straightforward algebraic computation. Our code recovers expected results in the non-relativistic limit, and reveals new physics in the modeling of electron beam acceleration following an X-pinch. Through the use of a relaxation scheme, relativistic PERSEUS is able to handle nine orders of magnitude in density variation, making it the first fluid code, to our knowledge, that can simulate relativistic HED plasmas.

  2. LWR codes capability to address SFR BDBA scenarios: Modeling of the ABCOVE tests

    SciTech Connect

    Herranz, L. E.; Garcia, M.; Morandi, S.

    2012-07-01

    The sound background built up in LWR source-term analysis for severe accidents makes it worthwhile to check the capability of LWR safety analysis codes to model SFR accident scenarios, at least in some areas. This paper gives a snapshot of such predictability in the area of aerosol behavior in containment. To do so, the AB-5 test of the ABCOVE program has been modeled with three LWR codes: ASTEC, ECART and MELCOR. Through the search for a best-estimate scenario and its comparison to data, it is concluded that even in the specific case of in-containment aerosol behavior, some enhancements would be needed in the LWR codes and/or their application, particularly with respect to consideration of particle shape. Nonetheless, much of the modeling presently embodied in LWR codes might be applicable to SFR scenarios. These conclusions should be seen as preliminary as long as comparisons are not extended to more experimental scenarios. (authors)

  3. Relativistic modeling capabilities in PERSEUS extended MHD simulation code for HED plasmas

    SciTech Connect

    Hamlin, Nathaniel D.; Seyler, Charles E.

    2014-12-15

    We discuss the incorporation of relativistic modeling capabilities into the PERSEUS extended MHD simulation code for high-energy-density (HED) plasmas, and present the latest hybrid X-pinch simulation results. The use of fully relativistic equations enables the model to remain self-consistent in simulations of such relativistic phenomena as X-pinches and laser-plasma interactions. By suitable formulation of the relativistic generalized Ohm’s law as an evolution equation, we have reduced the recovery of primitive variables, a major technical challenge in relativistic codes, to a straightforward algebraic computation. Our code recovers expected results in the non-relativistic limit, and reveals new physics in the modeling of electron beam acceleration following an X-pinch. Through the use of a relaxation scheme, relativistic PERSEUS is able to handle nine orders of magnitude in density variation, making it the first fluid code, to our knowledge, that can simulate relativistic HED plasmas.

  4. Approaches in highly parameterized inversion - PEST++, a Parameter ESTimation code optimized for large environmental models

    USGS Publications Warehouse

    Welter, David E.; Doherty, John E.; Hunt, Randall J.; Muffels, Christopher T.; Tonkin, Matthew J.; Schreuder, Willem A.

    2012-01-01

    An object-oriented parameter estimation code was developed to incorporate benefits of object-oriented programming techniques for solving large parameter estimation modeling problems. The code is written in C++ and is a formulation and expansion of the algorithms included in PEST, a widely used parameter estimation code written in Fortran. The new code is called PEST++ and is designed to lower the barriers of entry for users and developers while providing efficient algorithms that can accommodate large, highly parameterized problems. This effort has focused on (1) implementing the most popular features of PEST in a fashion that is easy for novice or experienced modelers to use and (2) creating a software design that is easy to extend; that is, this effort provides a documented object-oriented framework designed from the ground up to be modular and extensible. In addition, all PEST++ source code and its associated libraries, as well as the general run manager source code, have been integrated in the Microsoft Visual Studio® 2010 integrated development environment. The PEST++ code is designed to provide a foundation for an open-source development environment capable of producing robust and efficient parameter estimation tools for the environmental modeling community into the future.

  5. Parallel Spectral Transform Shallow Water Model: A runtime-tunable parallel benchmark code

    SciTech Connect

    Worley, P.H.; Foster, I.T.

    1994-05-01

    Fairness is an important issue when benchmarking parallel computers using application codes. The best parallel algorithm on one platform may not be the best on another. While it is not feasible to reevaluate parallel algorithms and reimplement large codes whenever new machines become available, it is possible to embed algorithmic options into codes that allow them to be "tuned" for a particular machine without requiring code modifications. In this paper, we describe a code in which such an approach was taken. PSTSWM was developed for evaluating parallel algorithms for the spectral transform method in atmospheric circulation models. Many levels of runtime-selectable algorithmic options are supported. We discuss these options and our evaluation methodology. We also provide empirical results from a number of parallel machines, indicating the importance of tuning for each platform before making a comparison.

  6. Electrical Circuit Simulation Code

    SciTech Connect

    Wix, Steven D.; Waters, Arlon J.; Shirley, David

    2001-08-09

    CHILESPICE is a massively-parallel, distributed-memory electrical circuit simulation tool that contains many enhanced radiation, time-based, and thermal features and models. It targets large-scale electronic circuit simulation and provides shared-memory parallel processing, enhanced convergence, and Sandia-specific device models.

  7. A computer code for calculations in the algebraic collective model of the atomic nucleus

    NASA Astrophysics Data System (ADS)

    Welsh, T. A.; Rowe, D. J.

    2016-03-01

    A Maple code is presented for algebraic collective model (ACM) calculations. The ACM is an algebraic version of the Bohr model of the atomic nucleus, in which all required matrix elements are derived by exploiting the model's SU(1,1) × SO(5) dynamical group. This paper reviews the mathematical formulation of the ACM, and serves as a manual for the code. The code enables a wide range of model Hamiltonians to be analysed. This range includes essentially all Hamiltonians that are rational functions of the model's quadrupole moments q̂_M and are at most quadratic in the corresponding conjugate momenta π̂_N (−2 ≤ M, N ≤ 2). The code makes use of expressions for matrix elements derived elsewhere and newly derived matrix elements of the operators [π̂ ⊗ q̂ ⊗ π̂]_0 and [π̂ ⊗ π̂]_LM. The code is made efficient by use of an analytical expression for the needed SO(5)-reduced matrix elements, and use of SO(5) ⊃ SO(3) Clebsch-Gordan coefficients obtained from precomputed data files provided with the code.

  8. A computer code for calculations in the algebraic collective model of the atomic nucleus

    NASA Astrophysics Data System (ADS)

    Welsh, T. A.; Rowe, D. J.

    2016-03-01

    A Maple code is presented for algebraic collective model (ACM) calculations. The ACM is an algebraic version of the Bohr model of the atomic nucleus, in which all required matrix elements are derived by exploiting the model's SU(1,1) × SO(5) dynamical group. This paper reviews the mathematical formulation of the ACM, and serves as a manual for the code. The code enables a wide range of model Hamiltonians to be analysed. This range includes essentially all Hamiltonians that are rational functions of the model's quadrupole moments q̂_M and are at most quadratic in the corresponding conjugate momenta π̂_N (−2 ≤ M, N ≤ 2). The code makes use of expressions for matrix elements derived elsewhere and newly derived matrix elements of the operators [π̂ ⊗ q̂ ⊗ π̂]_0 and [π̂ ⊗ π̂]_LM. The code is made efficient by use of an analytical expression for the needed SO(5)-reduced matrix elements, and use of SO(5) ⊃ SO(3) Clebsch-Gordan coefficients obtained from precomputed data files provided with the code.

  9. Quantum key distribution networks layer model

    NASA Astrophysics Data System (ADS)

    Wen, Hao; Han, Zheng-fu; Hong, Pei-lin; Guo, Guang-can

    2008-03-01

    Quantum Key Distribution (QKD) networks allow multiple users to generate and share secret quantum keys with unconditional security. Although many schemes for QKD networks have been presented, they concentrate only on system realization and physical implementation. For a complete practical quantum network, a succinct theoretical model that systematically describes the working processes, from physical schemes to key-processing protocols, from network topology to key management, and from quantum communication to classical communication, is still absent. Research and experience with classical communication networks have shown the value of a succinct layered model in network design. With demonstration of the different QKD links and the four primary types of quantum networks, including probability multiplexing, wavelength multiplexing, time multiplexing and quantum multiplexing, we suggest a layer model for QKD networks that is compatible with different implementations and protocols. We divide it into four main layers by their functional independence, while defining each layer's services and responsibilities in detail; in order, these are the quantum links layer, the quantum networks layer, the quantum key distribution protocols process layer, and the keys management layer. This model should be helpful for the systematic design and construction of real QKD networks.

  10. Phonological coding during reading

    PubMed Central

    Leinenger, Mallorie

    2014-01-01

    The exact role that phonological coding (the recoding of written, orthographic information into a sound-based code) plays during silent reading has been extensively studied for more than a century. Despite the large body of research surrounding the topic, varying theories as to the time course and function of this recoding still exist. The present review synthesizes this body of research, addressing the topics of time course and function in tandem. The varying theories surrounding the function of phonological coding (e.g., that phonological codes aid lexical access, that phonological codes aid comprehension and bolster short-term memory, or that phonological codes are largely epiphenomenal in skilled readers) are first outlined, and the time courses that each maps onto (e.g., that phonological codes come online early (pre-lexical) or that phonological codes come online late (post-lexical)) are discussed. Next, the research relevant to each of these proposed functions is reviewed, discussing the varying methodologies that have been used to investigate phonological coding (e.g., response time methods, reading while eyetracking or recording EEG and MEG, concurrent articulation) and highlighting the advantages and limitations of each with respect to the study of phonological coding. In response to the view that phonological coding is largely epiphenomenal in skilled readers, research on the use of phonological codes in prelingually, profoundly deaf readers is reviewed. Finally, implications for current models of word identification (activation-verification model (Van Orden, 1987), dual-route model (e.g., Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001), parallel distributed processing model (Seidenberg & McClelland, 1989)) are discussed. PMID:25150679

  11. Recommended requirements to code officials for solar heating, cooling, and hot water systems. Model document for code officials on solar heating and cooling of buildings

    SciTech Connect

    1980-06-01

    These recommended requirements include provisions for electrical, building, mechanical, and plumbing installations for active and passive solar energy systems used for space or process heating and cooling, and domestic water heating. The provisions in these recommended requirements are intended to be used in conjunction with the existing building codes in each jurisdiction. Where a solar relevant provision is adequately covered in an existing model code, the section is referenced in the Appendix. Where a provision has been drafted because there is no counterpart in the existing model code, it is found in the body of these recommended requirements. Commentaries are included in the text explaining the coverage and intent of present model code requirements and suggesting alternatives that may, at the discretion of the building official, be considered as providing reasonable protection to the public health and safety. Also included is an Appendix which is divided into a model code cross reference section and a reference standards section. The model code cross references are a compilation of the sections in the text and their equivalent requirements in the applicable model codes. (MHR)

  12. Distributed Slip Model for Simulating Virtual Earthquakes

    NASA Astrophysics Data System (ADS)

    Shani-Kadmiel, S.; Tsesarsky, M.; Gvirtzman, Z.

    2014-12-01

    We develop a physics-based, generic finite fault source, which we call the Distributed Slip Model (DSM), for simulating large virtual earthquakes. This task is a necessary step towards ground motion prediction in earthquake-prone areas with limited instrumental coverage. A reliable ground motion prediction based on virtual earthquakes must account for site, path, and source effects. Assessment of the site effect mainly depends on near-surface material properties, which are relatively well constrained using geotechnical site data and borehole measurements. Assessment of the path effect depends on the deeper geological structure, which is also typically known to an acceptable resolution. In contrast to these two effects, which remain constant for a given area of interest, the earthquake rupture process and geometry vary from one earthquake to another. In this study we focus on a finite fault source representation which is both generic and physics-based, for simulating large earthquakes where limited knowledge is available. Thirteen geometric and kinematic parameters are used to describe the smooth "pseudo-Gaussian" slip distribution, such that slip decays from a point of peak slip within an elliptical rupture patch to zero at the borders of the patch. The radiation pattern and spectral characteristics of our DSM are compared to those of commonly used finite fault models, i.e., the classical Haskell model (HM), the modified HM with radial rupture propagation (HM-RRP), and the point source model (PSM). Ground motion prediction based on our DSM benefits from the symmetry of the PSM and the directivity of the HM, while overcoming the former's inadequacy for modeling large earthquakes and the latter's non-physical uniform slip.
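
    As a rough illustration of such a slip shape (not the authors' 13-parameter formulation; the functional form, taper and parameter names below are assumptions), a smooth slip distribution that peaks inside an elliptical patch and decays to zero at its border can be sketched as:

      import numpy as np

      def pseudo_gaussian_slip(x, y, x0, y0, a, b, peak_slip, strike_deg=0.0):
          """Smooth slip peaking at (x0, y0) and tapering to zero on the border of
          an elliptical patch with semi-axes a and b (metres). Illustrative only."""
          th = np.radians(strike_deg)
          dx, dy = x - x0, y - y0
          u = dx * np.cos(th) + dy * np.sin(th)        # along-strike coordinate
          v = -dx * np.sin(th) + dy * np.cos(th)       # across-strike coordinate
          r = np.sqrt((u / a) ** 2 + (v / b) ** 2)     # 0 at the peak, 1 on the border
          # Gaussian-like decay multiplied by a taper that is exactly zero outside the patch
          return peak_slip * np.exp(-4.0 * r ** 2) * np.clip(1.0 - r, 0.0, None)

      # slip (m) sampled on a 1 km grid over a 30 km x 15 km rupture patch
      xg, yg = np.meshgrid(np.arange(-20, 21), np.arange(-10, 11))
      slip = pseudo_gaussian_slip(xg * 1e3, yg * 1e3, 0.0, 0.0, 15e3, 7.5e3, 2.0)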

  13. Code modernization and modularization of APEX and SWAT watershed simulation models

    Technology Transfer Automated Retrieval System (TEKTRAN)

    SWAT (Soil and Water Assessment Tool) and APEX (Agricultural Policy / Environmental eXtender) are respectively large and small watershed simulation models derived from EPIC (Environmental Policy Integrated Climate), a field-scale agroecology simulation model. All three models are coded in FORTRAN an...

  14. UCODE, a computer code for universal inverse modeling

    USGS Publications Warehouse

    Poeter, E.P.; Hill, M.C.

    1999-01-01

    This article presents the US Geological Survey computer program UCODE, which was developed in collaboration with the US Army Corps of Engineers Waterways Experiment Station and the International Ground Water Modeling Center of the Colorado School of Mines. UCODE performs inverse modeling, posed as a parameter-estimation problem, using nonlinear regression. Any application model or set of models can be used; the only requirement is that they have numerical (ASCII or text only) input and output files and that the numbers in these files have sufficient significant digits. Application models can include preprocessors and postprocessors as well as models related to the processes of interest (physical, chemical and so on), making UCODE extremely powerful for model calibration. Estimated parameters can be defined flexibly with user-specified functions. Observations to be matched in the regression can be any quantity for which a simulated equivalent value can be produced; simulated equivalent values are calculated using values that appear in the application model output files and can be manipulated with additive and multiplicative functions, if necessary. Prior, or direct, information on estimated parameters also can be included in the regression. The nonlinear regression problem is solved by minimizing a weighted least-squares objective function with respect to the parameter values using a modified Gauss-Newton method. Sensitivities needed for the method are calculated approximately by forward or central differences, and problems and solutions related to this approximation are discussed. Statistics are calculated and printed for use in (1) diagnosing inadequate data or identifying parameters that probably cannot be estimated with the available data, (2) evaluating estimated parameter values, (3) evaluating the model representation of the actual processes and (4) quantifying the uncertainty of model simulated values. UCODE is intended for use on any computer operating
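
    As a minimal illustration of the weighted Gauss-Newton iteration with finite-difference sensitivities described above (UCODE's actual implementation adds Marquardt-type damping, convergence testing and the printed statistics, all omitted here; every name and value below is an assumption for the example):

      import numpy as np

      def gauss_newton(simulate, p0, obs, weights, n_iter=20, fd_step=1e-4):
          """Weighted least-squares estimation: forward-difference sensitivities
          and an undamped Gauss-Newton update of the parameter vector."""
          p = np.asarray(p0, dtype=float)
          W = np.diag(weights)
          for _ in range(n_iter):
              r = obs - simulate(p)                       # residual vector
              J = np.empty((len(obs), len(p)))            # sensitivity (Jacobian) matrix
              for j in range(len(p)):
                  dp = np.zeros_like(p)
                  dp[j] = fd_step * max(1.0, abs(p[j]))
                  J[:, j] = (simulate(p + dp) - simulate(p)) / dp[j]
              # normal equations of the Gauss-Newton step: (J^T W J) s = J^T W r
              p = p + np.linalg.solve(J.T @ W @ J, J.T @ W @ r)
          return p

      # toy usage: recover the two parameters of an exponential decay
      t = np.linspace(0.0, 5.0, 30)
      obs = 3.0 * np.exp(-0.7 * t)
      model = lambda p: p[0] * np.exp(-p[1] * t)
      p_hat = gauss_newton(model, [1.0, 0.1], obs, np.ones_like(t))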

  15. Inverse distributed hydrological modelling of alpine catchments

    NASA Astrophysics Data System (ADS)

    Kunstmann, H.; Krause, J.; Mayr, S.

    2005-12-01

    Even in physically based distributed hydrological models, various remaining parameters must be estimated for each sub-catchment. This can involve tremendous effort, especially when the number of sub-catchments is large and the applied hydrological model is computationally expensive. Automatic parameter estimation tools can significantly facilitate the calibration process. Hence, we combined the nonlinear parameter estimation tool PEST with the distributed hydrological model WaSiM. PEST is based on the Gauss-Marquardt-Levenberg method, a gradient-based nonlinear parameter estimation algorithm. WaSiM is a fully distributed hydrological model using physically based algorithms for most of the process descriptions. WaSiM was applied to the alpine/prealpine Ammer River catchment (southern Germany, 710 km2) at a 100 m × 100 m horizontal resolution. The catchment is heterogeneous in terms of geology, pedology and land use and shows a complex orography (the elevation range is around 1600 m). Using the developed PEST-WaSiM interface, the hydrological model was calibrated by comparing simulated and observed runoff at eight gauges for the hydrologic year 1997 and validated for the hydrologic year 1993. For each sub-catchment four parameters had to be calibrated: the recession constants of direct runoff and interflow, the drainage density, and the hydraulic conductivity of the uppermost aquifer. Additionally, five snowmelt-specific parameters were adjusted for the entire catchment. Altogether, 37 parameters had to be calibrated. Additional a priori information (e.g. from flood hydrograph analysis) narrowed the parameter space of the solutions and reduced the non-uniqueness of the fitted values. A reasonable quality of fit was achieved. Discrepancies between modelled and observed runoff were also due to the small number of meteorological stations and corresponding interpolation artefacts in the orographically complex terrain. A detailed covariance analysis was performed

  16. Development, Verification and Use of Gust Modeling in the NASA Computational Fluid Dynamics Code FUN3D

    NASA Technical Reports Server (NTRS)

    Bartels, Robert E.

    2012-01-01

    This paper presents the implementation of gust modeling capability in the CFD code FUN3D. The gust capability is verified by computing the response of an airfoil to a sharp-edged gust; this result is compared with the theoretical result. The present simulations will be compared with other CFD gust simulations. This paper also serves as a user's manual for FUN3D gust analyses using a variety of gust profiles. Finally, the development of an Auto-Regressive Moving-Average (ARMA) reduced-order gust model using a gust with a Gaussian profile in the FUN3D code is presented. ARMA-simulated results for a sequence of one-minus-cosine gusts are shown to compare well with the same gust profile computed with FUN3D. Proper Orthogonal Decomposition (POD) is combined with the ARMA modeling technique to predict the time-varying pressure coefficient increment distribution due to a novel gust profile. The aeroelastic response of a pitch/plunge airfoil to a gust environment is computed with a reduced-order model, and compared with a direct simulation of the system in the FUN3D code. The two results are found to agree very well.
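
    To make the reduced-order idea concrete, the sketch below identifies a simple ARX-type input/output model from one gust response and reuses it for a different gust profile. It is a simplified stand-in for the ARMA procedure in the paper; the model orders, the synthetic training data and all names are assumptions.

      import numpy as np

      def fit_arx(u, y, na=4, nb=4):
          """Least-squares fit of y[k] = sum_i a_i*y[k-i] + sum_j b_j*u[k-j]."""
          n0 = max(na, nb)
          rows = [np.concatenate([y[k-na:k][::-1], u[k-nb:k][::-1]])
                  for k in range(n0, len(y))]
          coef, *_ = np.linalg.lstsq(np.array(rows), y[n0:], rcond=None)
          return coef[:na], coef[na:]

      def simulate_arx(a, b, u, y_init):
          """Propagate the identified model for a new input sequence u."""
          na, nb = len(a), len(b)
          y = list(y_init)
          for k in range(len(y), len(u)):
              y.append(a @ np.array(y[k-na:k][::-1]) + b @ np.array(u[k-nb:k][::-1]))
          return np.array(y)

      # train on a Gaussian gust, then predict the response to a one-minus-cosine gust
      t = np.arange(400)
      u_train = np.exp(-((t - 80) / 20.0) ** 2)
      y_train = 0.05 * np.convolve(u_train, np.exp(-t / 30.0), mode="full")[:len(t)]
      a, b = fit_arx(u_train, y_train)
      u_new = 0.5 * (1.0 - np.cos(2.0 * np.pi * np.clip((t - 50) / 120.0, 0.0, 1.0)))
      y_pred = simulate_arx(a, b, u_new, y_train[:max(len(a), len(b))])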

  17. Propel: Tools and Methods for Practical Source Code Model Checking

    NASA Technical Reports Server (NTRS)

    Mansouri-Samani, Massoud; Mehlitz, Peter; Markosian, Lawrence; OMalley, Owen; Martin, Dale; Moore, Lantz; Penix, John; Visser, Willem

    2003-01-01

    The work reported here is an overview and snapshot of a project to develop practical model checking tools for in-the-loop verification of NASA's mission-critical, multithreaded programs in Java and C++. Our strategy is to develop and evaluate both a design concept that enables the application of model checking technology to C++ and Java, and a model checking toolset for C++ and Java. The design concept and the associated model checking toolset is called Propel. It builds upon the Java PathFinder (JPF) tool, an explicit-state model checker for Java applications developed by the Automated Software Engineering group at NASA Ames Research Center. The design concept that we are developing is Design for Verification (D4V). This is an adaptation of existing best design practices that has the desired side effect of enhancing verifiability by improving modularity and decreasing accidental complexity. D4V, we believe, enhances the applicability of a variety of V&V approaches; we are developing the concept in the context of model checking. The model checking toolset, Propel, is based on extending JPF to handle C++. Our principal tasks in developing the toolset are to build a translator from C++ to Java, productize JPF, and evaluate the toolset in the context of D4V. Through all these tasks we are testing Propel capabilities on customer applications.

  18. User's guide for waste tank corrosion data model code

    SciTech Connect

    Mackey, D.B.; Divine, J.R.

    1986-12-01

    Corrosion tests were conducted on A-516 and A-537 carbon steel in simulated Double Shell Slurry, Future PUREX, and Hanford Facilities wastes. The corrosion rate data, gathered between 25 and 180°C, were statistically "modeled" for each waste; a fourth model was developed that utilized the combined data. The report briefly describes the modeling procedure and gives details on how to access information through a computerized data system. Copies of the report and operating information may be obtained from the author (DB Mackey) at 509-376-9844 or FTS 444-9844.

  19. Applications of species distribution modeling to paleobiology

    NASA Astrophysics Data System (ADS)

    Svenning, Jens-Christian; Fløjgaard, Camilla; Marske, Katharine A.; Nógues-Bravo, David; Normand, Signe

    2011-10-01

    Species distribution modeling (SDM: statistical and/or mechanistic approaches to the assessment of range determinants and prediction of species occurrence) offers new possibilities for estimating and studying past organism distributions. SDM complements fossil and genetic evidence by providing (i) quantitative and potentially high-resolution predictions of past organism distributions, (ii) statistically formulated, testable ecological hypotheses regarding past distributions and communities, and (iii) statistical assessment of range determinants. In this article, we provide an overview of applications of SDM to paleobiology, outlining the methodology, reviewing SDM-based studies in paleobiology or at the interface of paleo- and neobiology, discussing assumptions and uncertainties as well as how to handle them, and providing a synthesis and outlook. Key methodological issues for SDM applications to paleobiology include predictor variables (types and properties; special emphasis is given to paleoclimate), model validation (particularly important given the emphasis on cross-temporal predictions in paleobiological applications), and the integration of SDM and genetics approaches. Over the last few years the number of studies using SDM to address paleobiology-related questions has increased considerably. While some of these studies only use SDM (23%), most combine them with genetically inferred patterns (49%), paleoecological records (22%), or both (6%). A large number of SDM-based studies have addressed the role of Pleistocene glacial refugia in biogeography and evolution, especially in Europe, but also in many other regions. SDM-based approaches are also beginning to contribute to a suite of other research questions, such as historical constraints on current distributions and diversity patterns, the end-Pleistocene megafaunal extinctions, past community assembly, human paleobiogeography, Holocene paleoecology, and even deep-time biogeography (notably, providing

  20. DANA: distributed numerical and adaptive modelling framework.

    PubMed

    Rougier, Nicolas P; Fix, Jérémy

    2012-01-01

    DANA is a python framework ( http://dana.loria.fr ) whose computational paradigm is grounded on the notion of a unit, which is essentially a set of time-dependent values varying under the influence of other units via adaptive weighted connections. The evolution of a unit's values is defined by a set of differential equations expressed in standard mathematical notation, which greatly eases their definition. The units are organized into groups that form a model. Each unit can be connected to any other unit (including itself) using a weighted connection. The DANA framework offers a set of core objects needed to design and run such models. The modeler only has to define the equations of a unit as well as the equations governing the training of the connections. The simulation is completely transparent to the modeler and is handled by DANA. This allows DANA to be used for a wide range of numerical and distributed models as long as they fit the proposed framework (e.g. cellular automata, reaction-diffusion systems, decentralized neural networks, recurrent neural networks, kernel-based image processing, etc.).
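
    A toy illustration of this unit/group/connection paradigm, written in plain NumPy; it deliberately does not use DANA's actual API, and the class names, time constant and Euler update are assumptions for the example:

      import numpy as np

      class Group:
          """A group of units whose values evolve under weighted input from other groups."""
          def __init__(self, n, tau=10.0):
              self.V = np.zeros(n)        # unit values
              self.tau = tau              # time constant of the unit equation
              self.connections = []       # list of (source_group, weight_matrix)

          def connect(self, source, weights):
              self.connections.append((source, weights))

          def step(self, dt=1.0, external=0.0):
              # dV/dt = (-V + weighted input + external) / tau, integrated with Euler
              drive = sum(W @ src.V for src, W in self.connections)
              self.V += dt * (-self.V + drive + external) / self.tau

      # two groups with random feed-forward weights, driven by a constant input
      rng = np.random.default_rng(0)
      inp, out = Group(16), Group(8)
      out.connect(inp, 0.1 * rng.standard_normal((8, 16)))
      for _ in range(100):
          inp.step(external=1.0)
          out.step()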

  1. Comparing the line broadened quasilinear model to Vlasov code

    SciTech Connect

    Ghantous, K.; Berk, H. L.; Gorelenkov, N. N.

    2014-03-15

    The Line Broadened Quasilinear (LBQ) model is revisited to study its predicted saturation level as compared with predictions of the Vlasov solver BOT [Lilley et al., Phys. Rev. Lett. 102, 195003 (2009) and M. Lilley, BOT Manual]. The parametric dependencies of the model are modified to achieve better accuracy with respect to the results of the Vlasov solver, both as regards a mode amplitude's time evolution to a saturated state and its final steady-state amplitude, within the parameter space of the model's applicability. However, the regions of stability as predicted by the LBQ model and BOT are found to differ significantly from each other. The solutions of the BOT simulations are found to have a larger region of instability than the LBQ simulations.

  2. Comparing the line broadened quasilinear model to Vlasov code

    NASA Astrophysics Data System (ADS)

    Ghantous, K.; Berk, H. L.; Gorelenkov, N. N.

    2014-03-01

    The Line Broadened Quasilinear (LBQ) model is revisited to study its predicted saturation level as compared with predictions of the Vlasov solver BOT [Lilley et al., Phys. Rev. Lett. 102, 195003 (2009) and M. Lilley, BOT Manual]. The parametric dependencies of the model are modified to achieve better accuracy with respect to the results of the Vlasov solver, both as regards a mode amplitude's time evolution to a saturated state and its final steady-state amplitude, within the parameter space of the model's applicability. However, the regions of stability as predicted by the LBQ model and BOT are found to differ significantly from each other. The solutions of the BOT simulations are found to have a larger region of instability than the LBQ simulations.

  3. Engine structures modeling software system: Computer code. User's manual

    NASA Technical Reports Server (NTRS)

    1992-01-01

    ESMOSS is a specialized software system for the construction of geometric descriptive and discrete analytical models of engine parts, components and substructures which can be transferred to finite element analysis programs such as NASTRAN. The software architecture of ESMOSS is designed in modular form with a central executive module through which the user controls and directs the development of the analytical model. Modules consist of a geometric shape generator, a library of discretization procedures, interfacing modules to join both geometric and discrete models, a deck generator to produce input for NASTRAN and a 'recipe' processor which generates geometric models from parametric definitions. ESMOSS can be executed both in interactive and batch modes. Interactive mode is considered to be the default mode and that mode will be assumed in the discussion in this document unless stated otherwise.

  4. New higher-order Godunov code for modelling performance of two-stage light gas guns

    NASA Technical Reports Server (NTRS)

    Bogdanoff, D. W.; Miller, R. J.

    1995-01-01

    A new quasi-one-dimensional Godunov code for modeling two-stage light gas guns is described. The code is third-order accurate in space and second-order accurate in time. A very accurate Riemann solver is used. Friction and heat transfer to the tube wall for gases and dense media are modeled and a simple nonequilibrium turbulence model is used for gas flows. The code also models gunpowder burn in the first-stage breech. Realistic equations of state (EOS) are used for all media. The code was validated against exact solutions of Riemann's shock-tube problem, impact of dense media slabs at velocities up to 20 km/sec, flow through a supersonic convergent-divergent nozzle and burning of gunpowder in a closed bomb. Excellent validation results were obtained. The code was then used to predict the performance of two light gas guns (1.5 in. and 0.28 in.) in service at the Ames Research Center. The code predictions were compared with measured pressure histories in the powder chamber and pump tube and with measured piston and projectile velocities. Very good agreement between computational fluid dynamics (CFD) predictions and measurements was obtained. Actual powder-burn rates in the gun were found to be considerably higher (60-90 percent) than predicted by the manufacturer and the behavior of the piston upon yielding appears to differ greatly from that suggested by low-strain rate tests.

  5. A smooth particle hydrodynamics code to model collisions between solid, self-gravitating objects

    NASA Astrophysics Data System (ADS)

    Schäfer, C.; Riecker, S.; Maindl, T. I.; Speith, R.; Scherrer, S.; Kley, W.

    2016-05-01

    Context. Modern graphics processing units (GPUs) lead to a major increase in the performance of the computation of astrophysical simulations. Owing to the different nature of the GPU architecture compared to traditional central processing units (CPUs) such as the x86 architecture, existing numerical codes cannot be easily migrated to run on GPUs. Here, we present a new implementation of the numerical method smooth particle hydrodynamics (SPH) using CUDA and the first astrophysical application of the new code: the collision between Ceres-sized objects. Aims: The new code allows for a tremendous increase in the speed of astrophysical simulations with SPH and self-gravity at low cost for new hardware. Methods: We have implemented the SPH equations to model gas, liquids and elastic, and plastic solid bodies and added a fragmentation model for brittle materials. Self-gravity may be optionally included in the simulations and is treated by the use of a Barnes-Hut tree. Results: We find an impressive performance gain using NVIDIA consumer devices compared to our existing OpenMP code. The new code is freely available to the community upon request. If you are interested in our CUDA SPH code miluphCUDA, please write an email to Christoph Schäfer. miluphCUDA is the CUDA port of miluph. We do not support the use of the code for military purposes.
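
    As a minimal, CPU-side illustration of the SPH formalism underlying such a code (the kernel choice and the brute-force O(N^2) summation are assumptions for this sketch; miluphCUDA itself is written in CUDA, uses neighbour search, and adds solid-body and fragmentation physics):

      import numpy as np

      def cubic_spline_kernel(r, h):
          """Standard 3D cubic spline SPH kernel W(r, h)."""
          q = r / h
          sigma = 1.0 / (np.pi * h ** 3)
          return sigma * np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                                  np.where(q < 2.0, 0.25 * (2.0 - q) ** 3, 0.0))

      def sph_density(positions, masses, h):
          """rho_i = sum_j m_j W(|x_i - x_j|, h), summed over all particle pairs."""
          r = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
          return (masses[None, :] * cubic_spline_kernel(r, h)).sum(axis=1)

      # example: density of 1000 equal-mass particles in a unit cube
      rng = np.random.default_rng(1)
      pos = rng.random((1000, 3))
      rho = sph_density(pos, np.full(1000, 1.0 / 1000), h=0.1)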

  6. Recommendations for computer modeling codes to support the UMTRA groundwater restoration project

    SciTech Connect

    Tucker, M.D.; Khan, M.A.

    1996-04-01

    The Uranium Mill Tailings Remediation Action (UMTRA) Project is responsible for the assessment and remedial action at the 24 former uranium mill tailings sites located in the US. The surface restoration phase, which includes containment and stabilization of the abandoned uranium mill tailings piles, has a specific termination date and is nearing completion. Therefore, attention has now turned to the groundwater restoration phase, which began in 1991. Regulated constituents in groundwater whose concentrations or activities exceed maximum contaminant levels (MCLs) or background levels at one or more sites include, but are not limited to, uranium, selenium, arsenic, molybdenum, nitrate, gross alpha, radium-226 and radium-228. The purpose of this report is to recommend computer codes that can be used to assist the UMTRA groundwater restoration effort. The report includes a survey of applicable codes in each of the following areas: (1) groundwater flow and contaminant transport modeling codes, (2) hydrogeochemical modeling codes, (3) pump and treat optimization codes, and (4) decision support tools. Following the survey of the applicable codes, specific codes that can best meet the needs of the UMTRA groundwater restoration program in each of the four areas are recommended.

  7. MUFITS Code for Modeling Geological Storage of Carbon Dioxide at Sub- and Supercritical Conditions

    NASA Astrophysics Data System (ADS)

    Afanasyev, A.

    2012-12-01

    Two-phase models are widely used for simulation of CO2 storage in saline aquifers. These models support a gaseous phase consisting mainly of CO2 and a liquid phase consisting mainly of H2O (e.g. the TOUGH2 code). The models can be applied to analysis of CO2 storage only in relatively deeply-buried reservoirs where pressure exceeds the CO2 critical pressure. At these supercritical reservoir conditions only one supercritical CO2-rich phase appears in the aquifer due to CO2 injection. In shallow aquifers where reservoir pressure is less than the critical pressure, CO2 can split into two different liquid-like and gas-like phases (e.g. Spycher et al., 2003). Thus a region of three-phase flow of water, liquid and gaseous CO2 can appear near the CO2 injection point. Today there is no widely used and generally accepted numerical model capable of handling three-phase flows with two CO2-rich phases. In this work we propose a new hydrodynamic simulator, MUFITS (Multiphase Filtration Transport Simulator), for multiphase compositional modeling of CO2-H2O mixture flows in porous media at conditions of interest for carbon sequestration. The simulator is effective both for supercritical flows in a wide range of pressure and temperature and for subcritical three-phase flows of water, liquid CO2 and gaseous CO2 in shallow reservoirs. The distinctive feature of the proposed code lies in the methodology for determining mixture properties. The transport equations and the Darcy correlation are solved together with calculation of the entropy maximum that is reached in thermodynamic equilibrium and determines the mixture composition. To define and solve the problem only one function, the mixture thermodynamic potential, is required. The potential is determined using a three-parametric generalization of the Peng-Robinson equation of state fitted to experimental data (Todheide, Takenouchi, Altunin etc.). We apply MUFITS to simple 1D and 2D test problems of CO2 injection in shallow reservoirs subjected to phase changes between

  8. Sparse distributed memory and related models

    NASA Technical Reports Server (NTRS)

    Kanerva, Pentti

    1992-01-01

    Described here is sparse distributed memory (SDM) as a neural-net associative memory. It is characterized by two weight matrices and by a large internal dimension - the number of hidden units is much larger than the number of input or output units. The first matrix, A, is fixed and possibly random, and the second matrix, C, is modifiable. The SDM is compared and contrasted to (1) computer memory, (2) correlation-matrix memory, (3) feed-forward artificial neural network, (4) cortex of the cerebellum, (5) Marr and Albus models of the cerebellum, and (6) Albus' cerebellar model arithmetic computer (CMAC). Several variations of the basic SDM design are discussed: the selected-coordinate and hyperplane designs of Jaeckel, the pseudorandom associative neural memory of Hassoun, and SDM with real-valued input variables by Prager and Fallside. SDM research conducted mainly at the Research Institute for Advanced Computer Science (RIACS) in 1986-1991 is highlighted.
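
    For concreteness, a minimal binary SDM with a fixed random address matrix A and a modifiable counter matrix C can be sketched as follows; the dimensions, Hamming activation radius and thresholding rule are illustrative assumptions rather than Kanerva's exact parameters:

      import numpy as np

      rng = np.random.default_rng(0)
      n_addr, n_hard, n_data = 256, 2000, 256          # address bits, hard locations, data bits
      A = rng.integers(0, 2, size=(n_hard, n_addr))    # fixed, random hard-location addresses
      C = np.zeros((n_hard, n_data), dtype=int)        # modifiable counters
      radius = 111                                     # Hamming activation radius

      def activate(addr):
          """Hard locations whose address lies within the Hamming radius of addr."""
          return np.count_nonzero(A != addr, axis=1) <= radius

      def write(addr, data):
          C[activate(addr)] += 2 * data - 1            # add +1/-1 per bit to activated rows

      def read(addr):
          pooled = C[activate(addr)].sum(axis=0)       # pool counters of activated rows
          return (pooled > 0).astype(int)              # threshold back to bits

      word = rng.integers(0, 2, size=n_data)
      write(word, word)                                # autoassociative store
      recalled = read(word)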

  9. How can model comparison help improving species distribution models?

    PubMed

    Gritti, Emmanuel Stephan; Gaucherel, Cédric; Crespo-Perez, Maria-Veronica; Chuine, Isabelle

    2013-01-01

    Today, more than ever, robust projections of potential species range shifts are needed to anticipate and mitigate the impacts of climate change on biodiversity and ecosystem services. Such projections are so far provided almost exclusively by correlative species distribution models (correlative SDMs). However, concerns regarding the reliability of their predictive power are growing, and several authors call for the development of process-based SDMs. Still, each of these methods presents strengths and weaknesses which have to be estimated if they are to be reliably used by decision makers. In this study we compare projections of three different SDMs (STASH, LPJ and PHENOFIT) that lie on the continuum between correlative models and process-based models for the current distribution of three major European tree species, Fagus sylvatica L., Quercus robur L. and Pinus sylvestris L. We compare the consistency of the model simulations using an innovative comparison map profile method, integrating local and multi-scale comparisons. The three models simulate relatively accurately the current distribution of the three species. The process-based model performs almost as well as the correlative model, although parameters of the former are not fitted to the observed species distributions. According to our simulations, species range limits are triggered, at the European scale, by establishment and survival through processes primarily related to phenology and resistance to abiotic stress rather than to growth efficiency. The accuracy of projections of the hybrid and process-based models could however be improved by integrating a more realistic representation of the species' resistance to water stress, for instance, advocating for pursuing efforts to understand and formulate explicitly the impact of climatic conditions and variations on these processes. PMID:23874779

  10. Nuclear Energy Advanced Modeling and Simulation (NEAMS) waste Integrated Performance and Safety Codes (IPSC) : gap analysis for high fidelity and performance assessment code development.

    SciTech Connect

    Lee, Joon H.; Siegel, Malcolm Dean; Arguello, Jose Guadalupe, Jr.; Webb, Stephen Walter; Dewers, Thomas A.; Mariner, Paul E.; Edwards, Harold Carter; Fuller, Timothy J.; Freeze, Geoffrey A.; Jove-Colon, Carlos F.; Wang, Yifeng

    2011-03-01

    This report describes a gap analysis performed in the process of developing the Waste Integrated Performance and Safety Codes (IPSC) in support of the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The goal of the Waste IPSC is to develop an integrated suite of computational modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with rigorous verification, validation, and software quality requirements. The gap analyses documented in this report were performed during an initial gap analysis to identify candidate codes and tools to support the development and integration of the Waste IPSC, and during follow-on activities that delved into more detailed assessments of the various codes that were acquired, studied, and tested. The current Waste IPSC strategy is to acquire and integrate the necessary Waste IPSC capabilities wherever feasible, and develop only those capabilities that cannot be acquired or suitably integrated, verified, or validated. The gap analysis indicates that significant capabilities may already exist in the existing THC codes, although there is no single code able to fully account for all physical and chemical processes involved in a waste disposal system. Large gaps exist in modeling chemical processes and their couplings with other processes. The coupling of chemical processes with flow transport and mechanical deformation remains challenging. The data for extreme environments (e.g., for elevated temperature and high ionic strength media) that are

  11. Alternative conceptual models and codes for unsaturated flow in fractured tuff: Preliminary assessments for GWTT-95

    SciTech Connect

    Ho, C.K.; Altman, S.J.; Arnold, B.W.

    1995-09-01

    Groundwater travel time (GWTT) calculations will play an important role in addressing site-suitability criteria for the potential high-level nuclear waste repository at Yucca Mountain, Nevada. In support of these calculations, preliminary assessments of the candidate codes and models are presented in this report. A series of benchmark studies have been designed to address important aspects of modeling flow through fractured media representative of flow at Yucca Mountain. Three codes (DUAL, FEHMN, and TOUGH2) are compared in these benchmark studies. DUAL is a single-phase, isothermal, two-dimensional flow simulator based on the dual mixed finite element method. FEHMN is a nonisothermal, multiphase, multidimensional simulator based primarily on the finite element method. TOUGH2 is a nonisothermal, multiphase, multidimensional simulator based on the integral finite difference method. Alternative conceptual models of fracture flow consisting of the equivalent continuum model (ECM) and the dual permeability (DK) model are used in the different codes.

  12. ABAREX -- A neutron spherical optical-statistical-model code -- A user's manual

    SciTech Connect

    Smith, A.B.; Lawson, R.D.

    1998-06-01

    The contemporary version of the neutron spherical optical-statistical-model code ABAREX is summarized with the objective of providing detailed operational guidance for the user. The physical concepts involved are very briefly outlined. The code is described in some detail and a number of explicit examples are given. With this document one should very quickly become fluent with the use of ABAREX. While the code has operated on a number of computing systems, this version is specifically tailored for the VAX/VMS workstation and/or the IBM-compatible personal computer.

  13. Modeling Distributed Electricity Generation in the NEMS Buildings Models

    EIA Publications

    2011-01-01

    This paper presents the modeling methodology, projected market penetration, and impact of distributed generation with respect to offsetting future electricity needs and carbon dioxide emissions in the residential and commercial buildings sector in the Annual Energy Outlook 2000 (AEO2000) reference case.

  14. Description of codes and models to be used in risk assessment

    SciTech Connect

    Not Available

    1991-09-01

    Human health and environmental risk assessments will be performed as part of the Comprehensive Environmental Response, Compensation, and Liability Act of 1980 (CERCLA) remedial investigation/feasibility study (RI/FS) activities at the Hanford Site. Analytical and computer-encoded numerical models are commonly used during both the remedial investigation (RI) and feasibility study (FS) to predict or estimate the concentration of contaminants at the point of exposure to humans and/or the environment. This document has been prepared to identify the computer codes that will be used in support of RI/FS human health and environmental risk assessments at the Hanford Site. In addition to the CERCLA RI/FS process, it is recommended that these computer codes be used when fate and transport analyses are required for other activities. Additional computer codes may be used for other purposes (e.g., design of tracer tests, location of observation wells, etc.). This document provides guidance for unit managers in charge of RI/FS activities. Use of the same computer codes for all analytical activities at the Hanford Site will promote consistency, reduce the effort required to develop, validate, and implement models to simulate Hanford Site conditions, and expedite regulatory review. The discussion provides a description of how models will likely be developed and utilized at the Hanford Site. It is intended to summarize previous environment-related modeling at the Hanford Site and provide background for future model development. Also described are the modeling capabilities that are desirable for the Hanford Site and the codes that were evaluated. The recommendations include the codes proposed to support future risk assessment modeling at the Hanford Site, and provide the rationale for the codes selected. 27 refs., 3 figs., 1 tab.

  15. Scrape-off layer modeling using coupled plasma and neutral transport codes

    SciTech Connect

    Stotler, D.P.; Coster, D.P.; Ehrdardt, A.B.; Karney, C.F.F.; Petravic, M.; Braams, B.J.

    1992-05-01

    An effort is made to refine the neutral transport model used in the B2 edge plasma code by coupling it to the DEGAS Monte Carlo code. Results are discussed for a simulation of a high recycling divertor. It appears that on the order of 100 iterations between the two codes are required to achieve a converged solution. However, the amount of computer time used in the DEGAS simulations is large, making complete runs impractical for design purposes. On the other hand, the differences in the resulting plasma parameters when compared to the B2 analytic neutrals model indicate that it would be worthwhile to explore techniques for speeding up the control system of codes.

  16. A Dual Coding Theoretical Model of Decoding in Reading: Subsuming the LaBerge and Samuels Model

    ERIC Educational Resources Information Center

    Sadoski, Mark; McTigue, Erin M.; Paivio, Allan

    2012-01-01

    In this article we present a detailed Dual Coding Theory (DCT) model of decoding. The DCT model reinterprets and subsumes The LaBerge and Samuels (1974) model of the reading process which has served well to account for decoding behaviors and the processes that underlie them. However, the LaBerge and Samuels model has had little to say about…

  17. Information-Theoretic Modeling of Trichromacy Coding of Light Spectrum

    NASA Astrophysics Data System (ADS)

    Benoit, Landry; Belin, Étienne; Rousseau, David; Chapeau-Blondeau, François

    2014-07-01

    Trichromacy is the representation of a light spectrum by three scalar coordinates. Such representation is universally implemented by the human visual system and by RGB (Red Green Blue) cameras. We propose here an informational model for trichromacy. Based on a statistical analysis of the dynamics of individual photons, the model demonstrates a possibility for describing trichromacy as an information channel, for which the input-output mutual information can be computed to serve as a measure of performance. The capabilities and significance of the informational model are illustrated and motivated in various situations. The model especially enables an assessment of the influence of the spectral sensitivities of the three types of photodetectors realizing the trichromatic representation. It provides a criterion to optimize possibly adjustable parameters of the spectral sensitivities, such as their center wavelength, spectral width or magnitude. The model shows, for instance, the usefulness of some overlap between smooth, graded spectral sensitivities, as observed for instance in the human retina. Starting from hyperspectral images with high spectral resolution measured in the laboratory, the approach can also be used to devise low-cost trichromatic imaging systems optimized for observation of specific spectral signatures. This is illustrated with an example from plant science, and demonstrates a potential for application, especially to the life sciences. The approach particularizes connections between physics, biophysics and information theory.
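
    As a minimal sketch of the front end of such a model, the trichromatic code of a discretized spectrum is simply three inner products with the channel sensitivities; the Gaussian sensitivity shapes, centre wavelengths and widths below are assumptions, and they are exactly the kind of parameters the informational criterion could be used to optimize:

      import numpy as np

      wavelengths = np.arange(400, 701)        # nm, visible band sampled at 1 nm

      def sensitivity(center, width):
          """Gaussian spectral sensitivity curve (illustrative shape only)."""
          return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

      # three overlapping channels, loosely "blue", "green", "red"
      S = np.stack([sensitivity(c, 40.0) for c in (450.0, 540.0, 600.0)])

      def trichromatic_code(spectrum):
          """Project a spectral power distribution onto the three channel responses."""
          return S @ spectrum

      flat_white = np.ones_like(wavelengths, dtype=float)
      print(trichromatic_code(flat_white))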

  18. Spatial and temporal distribution of visual information coding in lateral prefrontal cortex

    PubMed Central

    Kadohisa, Mikiko; Kusunoki, Makoto; Petrov, Philippe; Sigala, Natasha; Buckley, Mark J; Gaffan, David; Duncan, John

    2015-01-01

    Prefrontal neurons code many kinds of behaviourally relevant visual information. In behaving monkeys, we used a cued target detection task to address coding of objects, behavioural categories and spatial locations, examining the temporal evolution of neural activity across dorsal and ventral regions of the lateral prefrontal cortex (encompassing parts of areas 9, 46, 45A and 8A), and across the two cerebral hemispheres. Within each hemisphere there was little evidence for regional specialisation, with neurons in dorsal and ventral regions showing closely similar patterns of selectivity for objects, categories and locations. For a stimulus in either visual field, however, there was a strong and temporally specific difference in response in the two cerebral hemispheres. In the first part of the visual response (50–250 ms from stimulus onset), processing in each hemisphere was largely restricted to contralateral stimuli, with strong responses to such stimuli, and selectivity for both object and category. Later (300–500 ms), responses to ipsilateral stimuli also appeared, many cells now responding more strongly to ipsilateral than to contralateral stimuli, and many showing selectivity for category. Activity on error trials showed that late activity in both hemispheres reflected the animal's final decision. As information is processed towards a behavioural decision, its encoding spreads to encompass large, bilateral regions of prefrontal cortex. PMID:25307044

  19. Distributed image coding for digital image recovery from the print-scan channel.

    PubMed

    Samadani, Ramin; Mukherjee, Debargha

    2010-03-01

    A printed digital photograph is difficult to reuse because the digital information that generated the print may no longer be available. This paper describes a method for approximating the original digital image by combining a scan of the printed photograph with digital auxiliary information kept together with the print. We formulate and solve the approximation problem using a Wyner-Ziv coding framework. During encoding, the Wyner-Ziv auxiliary information consists of a small amount of digital data composed of a number of sampled luminance pixel blocks and a number of sampled color pixel values to enable subsequent accurate registration and color-reproduction during decoding. The registration and color information is augmented by an additional amount of digital data encoded using Wyner-Ziv coding techniques that recovers residual errors and lost high spatial frequencies. The decoding process consists of scanning the printed photograph, together with a two step decoding process. The first decoding step, using the registration and color auxiliary information, generates a side-information image which registers and color corrects the scanned image. The second decoding step uses the additional Wyner-Ziv layer together with the side-information image to provide a closer approximation of the original, reducing residual errors and restoring the lost high spatial frequencies. The experimental results confirm the reduced digital storage needs when the scanned print assists in the digital reconstruction.

  20. Parallelizing serial code for a distributed processing environment with an application to high frequency electromagnetic scattering

    NASA Astrophysics Data System (ADS)

    Work, Paul R.

    1991-12-01

    This thesis investigates the parallelization of existing serial programs in computational electromagnetics for use in a parallel environment. Existing algorithms for calculating the radar cross section of an object are covered, and a ray-tracing code is chosen for implementation on a parallel machine. Current parallel architectures are introduced and a suitable parallel machine is selected for the implementation of the chosen ray-tracing algorithm. The standard techniques for the parallelization of serial codes are discussed, including load balancing and decomposition considerations, and appropriate methods for the parallelization effort are selected. A load balancing algorithm is modified to increase the efficiency of the application, and a high level design of the structure of the serial program is presented. A detailed design of the modifications for the parallel implementation is also included, with both the high level and the detailed design specified in a high level design language called UNITY. The correctness of the design is proven using UNITY and standard logic operations. The theoretical and empirical results show that it is possible to achieve an efficient parallel application for a serial computational electromagnetic program where the characteristics of the algorithm and the target architecture critically influence the development of such an implementation.

  1. Radiation transport phenomena and modeling. Part A: Codes; Part B: Applications with examples

    SciTech Connect

    Lorence, L.J. Jr.; Beutler, D.E.

    1997-09-01

    This report contains the notes from the second session of the 1997 IEEE Nuclear and Space Radiation Effects Conference Short Course on Applying Computer Simulation Tools to Radiation Effects Problems. Part A discusses the physical phenomena modeled in radiation transport codes and various types of algorithmic implementations. Part B gives examples of how these codes can be used to design experiments whose results can be easily analyzed and describes how to calculate quantities of interest for electronic devices.

  2. Sodium fast reactor gaps analysis of computer codes and models for accident analysis and reactor safety.

    SciTech Connect

    Carbajo, Juan; Jeong, Hae-Yong; Wigeland, Roald; Corradini, Michael; Schmidt, Rodney Cannon; Thomas, Justin; Wei, Tom; Sofu, Tanju; Ludewig, Hans; Tobita, Yoshiharu; Ohshima, Hiroyuki; Serre, Frederic

    2011-06-01

    This report summarizes the results of an expert-opinion elicitation activity designed to qualitatively assess the status and capabilities of currently available computer codes and models for accident analysis and reactor safety calculations of advanced sodium fast reactors, and identify important gaps. The twelve-member panel consisted of representatives from five U.S. National Laboratories (SNL, ANL, INL, ORNL, and BNL), the University of Wisconsin, the KAERI, the JAEA, and the CEA. The major portion of this elicitation activity occurred during a two-day meeting held on Aug. 10-11, 2010 at Argonne National Laboratory. There were two primary objectives of this work: (1) Identify computer codes currently available for SFR accident analysis and reactor safety calculations; and (2) Assess the status and capability of current US computer codes to adequately model the required accident scenarios and associated phenomena, and identify important gaps. During the review, panel members identified over 60 computer codes that are currently available in the international community to perform different aspects of SFR safety analysis for various event scenarios and accident categories. A brief description of each of these codes together with references (when available) is provided. An adaptation of the Predictive Capability Maturity Model (PCMM) for computational modeling and simulation is described for use in this work. The panel's assessment of the available US codes is presented in the form of nine tables, organized into groups of three for each of three risk categories considered: anticipated operational occurrences (AOOs), design basis accidents (DBA), and beyond design basis accidents (BDBA). A set of summary conclusions are drawn from the results obtained. At the highest level, the panel judged that current US code capabilities are adequate for licensing given reasonable margins, but expressed concern that US code development activities had stagnated and that the

  3. Implementation of Lumped Plasticity Models and Developments in an Object Oriented Nonlinear Finite Element Code

    NASA Astrophysics Data System (ADS)

    Segura, Christopher L.

    Numerical simulation tools capable of modeling nonlinear material and geometric behavior are important to structural engineers concerned with approximating the strength and deformation capacity of a structure. While structures are typically designed to behave linear elastic when subjected to building code design loads, exceedance of the linear elastic range is often an important consideration, especially with regards to structural response during hazard level events (i.e. earthquakes, hurricanes, floods), where collapse prevention is the primary goal. This thesis addresses developments made to Mercury, a nonlinear finite element program developed in MATLAB for numerical simulation and in C++ for real time hybrid simulation. Developments include the addition of three new constitutive models to extend Mercury's lumped plasticity modeling capabilities, a constitutive driver tool for testing and implementing Mercury constitutive models, and Mercury pre and post-processing tools. Mercury has been developed as a tool for transient analysis of distributed plasticity models, offering accurate nonlinear results on the material level, element level, and structural level. When only structural level response is desired (collapse prevention), obtaining material level results leads to unnecessarily lengthy computational time. To address this issue in Mercury, lumped plasticity capabilities are developed by implementing two lumped plasticity flexural response constitutive models and a column shear failure constitutive model. The models are chosen for implementation to address two critical issues evident in structural testing: column shear failure and strength and stiffness degradation under reverse cyclic loading. These tools make it possible to model post-peak behavior, capture strength and stiffness degradation, and predict global collapse. During the implementation process, a need was identified to create a simple program, separate from Mercury, to simplify the process of

  4. Improvements of the Radiation Code "MstrnX" in AORI/NIES/JAMSTEC Models

    NASA Astrophysics Data System (ADS)

    Sekiguchi, M.; Suzuki, K.; Takemura, T.; Watanabe, M.; Ogura, T.

    2015-12-01

    There is a large demand for a radiative transfer scheme that is both accurate and fast enough for general climate models. The broadband radiative transfer code "mstrnX" was developed by the Atmosphere and Ocean Research Institute (AORI) and has been implemented in several global and regional climate models developed cooperatively in the Japanese research community, for example MIROC (the Model for Interdisciplinary Research on Climate) [Watanabe et al., 2010], NICAM (Non-hydrostatic Icosahedral Atmospheric Model) [Satoh et al., 2008], and CReSS (Cloud Resolving Storm Simulator) [Tsuboki and Sakakibara, 2002]. In this study, we improve the gas absorption process and the scattering process of ice particles. For the gas absorption update, the absorption line database is replaced by the latest version of the HITRAN database compiled at the Harvard-Smithsonian Center for Astrophysics, HITRAN2012. An optimization method is adopted in mstrnX to decrease the number of integration points used for the wavenumber integration with the correlated k-distribution method and to increase the computational efficiency in each band. The integration points and weights of the correlated k-distribution are optimized for accurate calculation of the heating rate up to an altitude of 70 km. For this purpose we adopted a new non-linear optimization method for the correlated k-distribution and studied an optimal initial condition and cost function for the non-linear optimization. It is known that mstrnX has a considerable bias in the case of quadrupled carbon dioxide concentrations [Pincus et al., 2015]; this bias is decreased by the present improvement. For the ice-scattering update, we adopt a solid column as the ice crystal habit [Yang et al., 2013]. The single scattering properties are calculated and tabulated in advance. The size parameter of this table ranges from 0.1 to 1000 in mstrnX; we extend the maximum to 50000 in order to represent large particles such as fog and rain drops. These updates will be introduced to
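
    The efficiency gain of the correlated k-distribution comes from replacing a line-by-line wavenumber integration with a few quadrature points over the cumulative distribution of the absorption coefficient. The sketch below illustrates the idea on a synthetic absorption spectrum; the lognormal stand-in spectrum, the absorber amount and the eight-point uniform quadrature are assumptions for illustration, not the optimized points and weights described above.

        import numpy as np

        rng = np.random.default_rng(0)
        # Synthetic stand-in for a line-by-line absorption spectrum within one band
        # (a real application would use HITRAN-based monochromatic coefficients).
        k_nu = np.exp(rng.normal(-1.0, 2.0, size=20000))
        u = 0.5                                   # absorber amount, arbitrary units

        t_exact = np.mean(np.exp(-k_nu * u))      # brute-force wavenumber integration

        # Correlated-k: sort k into its cumulative distribution g and integrate over
        # a handful of representative points instead of every wavenumber.
        n_points = 8
        k_sorted = np.sort(k_nu)
        g_edges = np.linspace(0.0, 1.0, n_points + 1)
        weights = np.diff(g_edges)
        mid_idx = (((g_edges[:-1] + g_edges[1:]) / 2) * k_nu.size).astype(int)
        t_ck = np.sum(weights * np.exp(-k_sorted[mid_idx] * u))

        print(f"band transmittance: exact {t_exact:.4f}, correlated-k {t_ck:.4f}")

    In practice the points and weights are tuned (here, by non-linear optimization) so that heating rates stay accurate up to high altitudes with as few quadrature points as possible.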

  5. GASFLOW: A Computational Fluid Dynamics Code for Gases, Aerosols, and Combustion, Volume 1: Theory and Computational Model

    SciTech Connect

    Nichols, B.D.; Mueller, C.; Necker, G.A.; Travis, J.R.; Spore, J.W.; Lam, K.L.; Royl, P.; Redlinger, R.; Wilson, T.L.

    1998-10-01

    Los Alamos National Laboratory (LANL) and Forschungszentrum Karlsruhe (FzK) are developing GASFLOW, a three-dimensional (3D) fluid dynamics field code as a best-estimate tool to characterize local phenomena within a flow field. Examples of 3D phenomena include circulation patterns; flow stratification; hydrogen distribution mixing and stratification; combustion and flame propagation; effects of noncondensable gas distribution on local condensation and evaporation; and aerosol entrainment, transport, and deposition. An analysis with GASFLOW will result in a prediction of the gas composition and discrete particle distribution in space and time throughout the facility and the resulting pressure and temperature loadings on the walls and internal structures with or without combustion. A major application of GASFLOW is for predicting the transport, mixing, and combustion of hydrogen and other gases in nuclear reactor containments and other facilities. It has been applied to situations involving transporting and distributing combustible gas mixtures. It has been used to study gas dynamic behavior (1) in low-speed, buoyancy-driven flows, as well as sonic flows or diffusion dominated flows; and (2) during chemically reacting flows, including deflagrations. The effects of controlling such mixtures by safety systems can be analyzed. The code version described in this manual is designated GASFLOW 2.1, which combines previous versions of the United States Nuclear Regulatory Commission code HMS (for Hydrogen Mixing Studies) and the Department of Energy and FzK versions of GASFLOW. The code was written in standard Fortran 90. This manual comprises three volumes. Volume I describes the governing physical equations and computational model. Volume II describes how to use the code to set up a model geometry, specify gas species and material properties, define initial and boundary conditions, and specify different outputs, especially graphical displays. Sample problems are included

  6. CAST2D: A finite element computer code for casting process modeling

    SciTech Connect

    Shapiro, A.B.; Hallquist, J.O.

    1991-10-01

    CAST2D is a coupled thermal-stress finite element computer code for casting process modeling. This code can be used to predict the final shape and stress state of cast parts. CAST2D couples the heat transfer code TOPAZ2D and solid mechanics code NIKE2D. CAST2D has the following features in addition to all the features contained in the TOPAZ2D and NIKE2D codes: (1) a general purpose thermal-mechanical interface algorithm (i.e., slide line) that calculates the thermal contact resistance across the part-mold interface as a function of interface pressure and gap opening; (2) a new phase change algorithm, the delta function method, that is a robust method for materials undergoing isothermal phase change; (3) a constitutive model that transitions between fluid behavior and solid behavior, and accounts for material volume change on phase change; and (4) a modified plot file data base that allows plotting of thermal variables (e.g., temperature, heat flux) on the deformed geometry. Although the code is specialized for casting modeling, it can be used for other thermal stress problems (e.g., metal forming).

  7. Pre-engineering Spaceflight Validation of Environmental Models and the 2005 HZETRN Simulation Code

    NASA Technical Reports Server (NTRS)

    Nealy, John E.; Cucinotta, Francis A.; Wilson, John W.; Badavi, Francis F.; Dachev, Ts. P.; Tomov, B. T.; Walker, Steven A.; DeAngelis, Giovanni; Blattnig, Steve R.; Atwell, William

    2006-01-01

    The HZETRN code has been identified by NASA for engineering design in the next phase of space exploration, highlighting a return to the Moon in preparation for a Mars mission. In response, a new series of algorithms, beginning with 2005 HZETRN, will be issued that corrects some prior limitations and improves control of propagated errors, along with established code verification processes. Code validation processes will use new/improved low Earth orbit (LEO) environmental models with a recently improved International Space Station (ISS) shield model to validate computational models and procedures using measured data aboard ISS. These validated models will provide a basis for flight-testing the designs of future space vehicles and systems of the Constellation program in the LEO environment.

  8. The Nuremberg Code subverts human health and safety by requiring animal modeling

    PubMed Central

    2012-01-01

    Background The requirement that animals be used in research and testing in order to protect humans was formalized in the Nuremberg Code and subsequent national and international laws, codes, and declarations. Discussion We review the history of these requirements and contrast what was known via science about animal models then with what is known now. We further analyze the predictive value of animal models when used as test subjects for human response to drugs and disease. We explore the use of animals for models in toxicity testing as an example of the problem with using animal models. Summary We conclude that the requirements for animal testing found in the Nuremberg Code were based on scientifically outdated principles, compromised by people with a vested interest in animal experimentation, serve no useful function, increase the cost of drug development, and prevent otherwise safe and efficacious drugs and therapies from being implemented. PMID:22769234

  9. Modeling of BWR core meltdown accidents - for application in the MELRPI. MOD2 computer code

    SciTech Connect

    Koh, B R; Kim, S H; Taleyarkhan, R P; Podowski, M Z; Lahey, Jr, R T

    1985-04-01

    This report summarizes improvements and modifications made in the MELRPI computer code. A major difference between this new, updated version of the code, called MELRPI.MOD2, and the one reported previously, concerns the inclusion of a model for the BWR emergency core cooling systems (ECCS). This model and its computer implementation, the ECCRPI subroutine, account for various emergency injection modes, for both intact and rubblized geometries. Other changes to MELRPI deal with an improved model for canister wall oxidation, rubble bed modeling, and numerical integration of system equations. A complete documentation of the entire MELRPI.MOD2 code is also given, including an input guide, list of subroutines, sample input/output and program listing.

  10. Transfer function modeling of damping mechanisms in distributed parameter models

    NASA Technical Reports Server (NTRS)

    Slater, J. C.; Inman, D. J.

    1994-01-01

    This work formulates a method for the modeling of material damping characteristics in distributed parameter models which may be easily applied to models such as rod, plate, and beam equations. The general linear boundary value vibration equation is modified to incorporate hysteresis effects represented by complex stiffness using the transfer function approach proposed by Golla and Hughes. The governing characteristic equations are decoupled through separation of variables yielding solutions similar to those of undamped classical theory, allowing solution of the steady state as well as transient response. Example problems and solutions are provided demonstrating the similarity of the solutions to those of the classical theories and transient responses of nonviscous systems.
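
    The abstract does not reproduce the governing equation; as an illustration only, hysteretic (complex-stiffness) damping is commonly written in the frequency domain by replacing the elastic stiffness operator L with a complex one, which is the kind of modification described above (the Golla-Hughes transfer-function form generalizes this frequency dependence):

        -\omega^{2}\,\rho(x)\,W(x,\omega) + (1 + i\eta)\,L\big[W(x,\omega)\big] = F(x,\omega)

    where η is a loss factor. Separation of variables on the associated eigenproblem then proceeds much as in the undamped classical theories, with the damping carried by complex eigenvalues; this sketch is a standard textbook form, not necessarily the exact transfer-function formulation used in the paper.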

  11. APA's model law: a commitment code by and for psychiatrists.

    PubMed

    Wexler, D B

    1985-09-01

    The author argues that the APA model law is seriously flawed because it lacks sufficient mechanisms for questioning the judgment of psychiatrists throughout the commitment process and for ensuring the best disposition of patients. By failing to provide for independent screening of commitment petitions, to mandate multiple psychiatric evaluations of respondents, to provide indigent respondents a free psychiatric examination to help them prepare for the commitment hearing, and to address the shortcomings of legal advocacy, the model law sets the stage for improper or unwarranted commitments. In addition, the law circumvents the rights of patients admitted on emergency status to refuse treatment throughout the entire evaluation period, which can last up to 14 days.

  12. Modeling Code Is Helping Cleveland Develop New Products

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Master Builders, Inc., is a 350-person company in Cleveland, Ohio, that develops and markets specialty chemicals for the construction industry. Developing new products involves creating many potential samples and running numerous tests to characterize the samples' performance. Company engineers enlisted NASA's help to replace cumbersome physical testing with computer modeling of the samples' behavior. Since the NASA Lewis Research Center's Structures Division develops mathematical models and associated computation tools to analyze the deformation and failure of composite materials, its researchers began a two-phase effort to modify Lewis' Integrated Composite Analyzer (ICAN) software for Master Builders' use. Phase I has been completed, and Master Builders is pleased with the results. The company is now working to begin implementation of Phase II.

  13. Large Scale Earth's Bow Shock with Northern IMF as Simulated by PIC Code in Parallel with MHD Model

    NASA Astrophysics Data System (ADS)

    Baraka, Suleiman

    2016-06-01

    In this paper, we propose a 3D kinetic (particle-in-cell, PIC) model for the description of the large-scale Earth's bow shock. The proposed version is stable and does not require huge or extensive computer resources. Because PIC simulations work with scaled plasma and field parameters, we also propose to validate our code by comparing its results with available MHD simulations under the same scaled solar wind (SW) and interplanetary magnetic field (IMF) conditions. We report new results from the two models. In both codes the Earth's bow shock position is found to be ≈14.8 R_E along the Sun-Earth line and ≈29 R_E on the dusk side. These findings are consistent with past in situ observations. Both simulations reproduce the theoretical jump conditions at the shock. However, the PIC-code density and temperature distributions are inflated and slightly shifted sunward when compared to the MHD results. Kinetic electron motions and reflected ions upstream may cause this sunward shift. Species distributions in the foreshock region are depicted within the transition of the shock (measured ≈2 c/ω_pi for Θ_Bn = 90° and M_MS = 4.7) and in the downstream. The size of the foot jump in the magnetic field at the shock is measured to be 1.7 c/ω_pi. In the foreshock region, the thermal velocity is found to be 213 km s^-1 at 15 R_E and 63 km s^-1 at 12 R_E (magnetosheath region). Despite the large cell size of the current version of the PIC code, it can retain the macrostructure of planetary magnetospheres in a very short time, and thus it can be used for pedagogical test purposes. It is also likely to complement MHD models in deepening our understanding of the large-scale magnetosphere.

  14. Potential capabilities of Reynolds stress turbulence model in the COMMIX-RSM code

    NASA Technical Reports Server (NTRS)

    Chang, F. C.; Bottoni, M.

    1994-01-01

    A Reynolds stress turbulence model has been implemented in the COMMIX code, together with transport equations describing turbulent heat fluxes, variance of temperature fluctuations, and dissipation of turbulence kinetic energy. The model has been verified partially by simulating homogeneous turbulent shear flow, and stable and unstable stratified shear flows with strong buoyancy-suppressing or enhancing turbulence. This article outlines the model, explains the verifications performed thus far, and discusses potential applications of the COMMIX-RSM code in several domains, including, but not limited to, analysis of thermal striping in engineering systems, simulation of turbulence in combustors, and predictions of bubbly and particulate flows.

  15. Metabolic Free Energy and Biological Codes: A 'Data Rate Theorem' Aging Model.

    PubMed

    Wallace, Rodrick

    2015-06-01

    A famous argument by Maturana and Varela (Autopoiesis and cognition. Reidel, Dordrecht, 1980) holds that the living state is cognitive at every scale and level of organization. Since it is possible to associate many cognitive processes with 'dual' information sources, pathologies can sometimes be addressed using statistical models based on the Shannon Coding, the Shannon-McMillan Source Coding, the Rate Distortion, and the Data Rate Theorems, which impose necessary conditions on information transmission and system control. Deterministic-but-for-error biological codes do not directly invoke cognition, but may be essential subcomponents within larger cognitive processes. A formal argument, however, places such codes within a similar framework, with metabolic free energy serving as a 'control signal' stabilizing biochemical code-and-translator dynamics in the presence of noise. Demand beyond available energy supply triggers punctuated destabilization of the coding channel, affecting essential biological functions. Aging, normal or prematurely driven by psychosocial or environmental stressors, must interfere with the routine operation of such mechanisms, initiating the chronic diseases associated with senescence. Amyloid fibril formation, intrinsically disordered protein logic gates, and cell surface glycan/lectin 'kelp bed' logic gates are reviewed from this perspective. The results generalize beyond coding machineries having easily recognizable symmetry modes, and strip a layer of mathematical complication from the study of phase transitions in nonequilibrium biological systems. PMID:25185747

  16. Open Source assimilation tool for distributed hydrological model

    NASA Astrophysics Data System (ADS)

    Richard, Julien; Giangola-Murzyn, Agathe; Tchiguirinskaia, Ioulia; Schertzer, Daniel

    2013-04-01

    An advanced GIS data assimilation interface is a prerequisite for a distributed hydrological model that is both transportable from catchment to catchment and easily adaptable to the data resolution. This tool handles the cartographic data as well as the linked information data. In the case of the Multi-Hydro-Version2 model (A. Giangola-Murzyn et al. 2012), several types of information are distributed on a regular grid. The grid cell size has to be chosen by the user and each cell has to be filled with information. In order to be as realistic as possible, the Multi-Hydro model takes several kinds of data into account, so the assimilation tool (MH-AssimTool) has to be able to import all of them. The required flexibility in the studied area and grid size means that the GIS interface must be easy to learn and practical to use. The chosen solution is a main window for geographical visualisation together with hierarchical menus coupled with checkboxes. For example, geographical information such as the topography or the land use can be visualized in the main window. The other data, such as the soil conductivity, the geology or the initial moisture, are requested through several pop-up windows. Once the needed information is imported, MH-AssimTool prepares the data automatically. For the topography data conversion, if the resolution is too low, an interpolation is performed during the processing. As a result, all the converted data are at a resolution suitable for the modelling. Like Multi-Hydro, MH-AssimTool is open source. It is coded in Visual Basic coupled with a GIS library. The interface is built in such a way that it can be used by a non-specialist. We will illustrate the efficiency of the tool with case studies of peri-urban catchments of widely different sizes and characteristics. We will also explain some parts of the coding of the interface.

  17. Transmission Probability Code System for Calculating Neutron Flux Distributions in Hexagonal Geometry.

    1991-01-25

    Version 00 TPHEX calculates the multigroup neutron flux distribution in an assembly of hexagonal cells using a transmission probability (interface current) method. It is primarily intended for calculations on hexagonal LWR fuel assemblies but can be used for other purposes subject to the qualifications mentioned in Restrictions/Limitations.

  18. A Multimodal Approach to Coding Discourse: Collaboration, Distributed Cognition, and Geometric Reasoning

    ERIC Educational Resources Information Center

    Evans, Michael A.; Feenstra, Eliot; Ryon, Emily; McNeill, David

    2011-01-01

    Our research aims to identify children's communicative strategies when faced with the task of solving a geometric puzzle in CSCL contexts. We investigated how to identify and trace "distributed cognition" in problem-solving interactions based on discursive cohesion to objects, participants, and prior discursive content, and geometric and…

  19. The Role of Coding Time in Estimating and Interpreting Growth Curve Models.

    ERIC Educational Resources Information Center

    Biesanz, Jeremy C.; Deeb-Sossa, Natalia; Papadakis, Alison A.; Bollen, Kenneth A.; Curran, Patrick J.

    2004-01-01

    The coding of time in growth curve models has important implications for the interpretation of the resulting model that are sometimes not transparent. The authors develop a general framework that includes predictors of growth curve components to illustrate how parameter estimates and their standard errors are exactly determined as a function of…
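
    A small worked example of the point: in a linear growth model, recoding time (for example, centring it) leaves the slope unchanged but redefines the intercept as expected status at the new zero point, so its estimate and interpretation change. The sketch below is a generic illustration with simulated data, not the authors' framework.

        import numpy as np

        rng = np.random.default_rng(1)
        t = np.array([0.0, 1.0, 2.0, 3.0])                              # four measurement occasions
        y = 10.0 + 2.0 * t + rng.normal(0.0, 1.0, size=(500, t.size))   # simulated growth data

        def fixed_effects(time_codes):
            X = np.column_stack([np.ones_like(time_codes), time_codes])
            coef, *_ = np.linalg.lstsq(X, y.mean(axis=0), rcond=None)
            return coef                                                  # [intercept, slope]

        print("time coded 0,1,2,3  ->", fixed_effects(t))        # intercept = status at the first wave
        print("time centred at 1.5 ->", fixed_effects(t - 1.5))  # same slope, intercept = mid-study status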

  1. Anisotropic distributions in a multiphase transport model

    NASA Astrophysics Data System (ADS)

    Zhou, You; Xiao, Kai; Feng, Zhao; Liu, Feng; Snellings, Raimond

    2016-03-01

    With a multiphase transport (AMPT) model we investigate the relation between the magnitude, fluctuations, and correlations of the initial-state spatial anisotropy ε_n and the final-state anisotropic flow coefficients v_n in Au+Au collisions at √(s_NN) = 200 GeV. It is found that the relative eccentricity fluctuations in AMPT account for the observed elliptic flow fluctuations; both are in agreement with the elliptic flow fluctuation measurements from the STAR collaboration. In addition, studies based on two- and multiparticle correlations and event-by-event distributions of the anisotropies suggest that the elliptic-power function is a promising candidate for the underlying probability density function of the event-by-event distributions of ε_n as well as v_n. Furthermore, the correlations between different order symmetry planes and harmonics in the initial coordinate space and final-state momentum space are presented. Nonzero values of these correlations have been observed. The comparison between our calculations and data will, in the future, shed new insight into the nature of the fluctuations of the quark-gluon plasma produced in heavy ion collisions.
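
    For reference, a common definition of the initial-state anisotropy used in this kind of study computes ε_n from the recentred transverse positions of the participating partons or nucleons. The sketch below uses that standard definition on toy positions; it is not extracted from the AMPT code itself.

        import numpy as np

        def eccentricity(x, y, n):
            """Participant-plane eps_n from transverse positions (one common definition)."""
            x = x - x.mean()
            y = y - y.mean()                                  # recentre the distribution
            r, phi = np.hypot(x, y), np.arctan2(y, x)
            num = np.hypot(np.mean(r**n * np.cos(n * phi)), np.mean(r**n * np.sin(n * phi)))
            return num / np.mean(r**n)

        rng = np.random.default_rng(2)
        # Toy almond-shaped initial condition standing in for one event's participants
        x, y = rng.normal(0.0, 2.0, 400), rng.normal(0.0, 3.0, 400)
        print(f"eps_2 = {eccentricity(x, y, 2):.3f}, eps_3 = {eccentricity(x, y, 3):.3f}")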

  2. Performance Measurement, Visualization and Modeling of Parallel and Distributed Programs

    NASA Technical Reports Server (NTRS)

    Yan, Jerry C.; Sarukkai, Sekhar R.; Mehra, Pankaj; Lum, Henry, Jr. (Technical Monitor)

    1994-01-01

    This paper presents a methodology for debugging the performance of message-passing programs on both tightly coupled and loosely coupled distributed-memory machines. The AIMS (Automated Instrumentation and Monitoring System) toolkit, a suite of software tools for measurement and analysis of performance, is introduced and its application illustrated using several benchmark programs drawn from the field of computational fluid dynamics. AIMS includes (i) Xinstrument, a powerful source-code instrumentor, which supports both Fortran77 and C as well as a number of different message-passing libraries including Intel's NX, Thinking Machines' CMMD, and PVM; (ii) Monitor, a library of timestamping and trace-collection routines that run on supercomputers (such as Intel's iPSC/860, Delta, and Paragon and Thinking Machines' CM5) as well as on networks of workstations (including Convex Cluster and SparcStations connected by a LAN); (iii) Visualization Kernel, a trace-animation facility that supports source-code clickback, simultaneous visualization of computation and communication patterns, as well as analysis of data movements; (iv) Statistics Kernel, an advanced profiling facility that associates a variety of performance data with various syntactic components of a parallel program; (v) Index Kernel, a diagnostic tool that helps pinpoint performance bottlenecks through the use of abstract indices; (vi) Modeling Kernel, a facility for automated modeling of message-passing programs that supports both simulation-based and analytical approaches to performance prediction and scalability analysis; (vii) Intrusion Compensator, a utility for recovering true performance from observed performance by removing the overheads of monitoring and their effects on the communication pattern of the program; and (viii) Compatibility Tools, which convert AIMS-generated traces into formats used by other performance-visualization tools, such as ParaGraph, Pablo, and certain AVS/Explorer modules.

  3. Assessment of Turbulence-Chemistry Interaction Models in the National Combustion Code (NCC) - Part I

    NASA Technical Reports Server (NTRS)

    Wey, Thomas Changju; Liu, Nan-suey

    2011-01-01

    This paper describes the implementations of the linear-eddy model (LEM) and an Eulerian FDF/PDF model in the National Combustion Code (NCC) for the simulation of turbulent combustion. The impacts of these two models, along with the so-called laminar chemistry model, are then illustrated via preliminary results from two combustion systems: a nine-element gas-fueled combustor and a single-element liquid-fueled combustor.

  4. Distributed Energy Resources Market Diffusion Model

    SciTech Connect

    Maribu, Karl Magnus; Firestone, Ryan; Marnay, Chris; Siddiqui,Afzal S.

    2006-06-16

    Distributed generation (DG) technologies, such as gas-fired reciprocating engines and microturbines, have been found to be economically beneficial in meeting commercial-sector electrical, heating, and cooling loads. Even though the electric-only efficiency of DG is lower than that offered by traditional central stations, combined heat and power (CHP) applications using recovered heat can make the overall system energy efficiency of distributed energy resources (DER) greater. From a policy perspective, however, it would be useful to have good estimates of penetration rates of DER under various economic and regulatory scenarios. In order to examine the extent to which DER systems may be adopted at a national level, we model the diffusion of DER in the US commercial building sector under different technical research and technology outreach scenarios. In this context, technology market diffusion is assumed to depend on the system's economic attractiveness and the developer's knowledge about the technology. The latter can be spread both by word-of-mouth and by public outreach programs. To account for regional differences in energy markets and climates, as well as the economic potential for different building types, optimal DER systems are found for several building types and regions. Technology diffusion is then predicted via two scenarios: a baseline scenario and a program scenario, in which more research improves DER performance and stronger technology outreach programs increase DER knowledge. The results depict a large and diverse market where both optimal installed capacity and profitability vary significantly across regions and building types. According to the technology diffusion model, the West region will take the lead in DER installations mainly due to high electricity prices, followed by a later adoption in the Northeast and Midwest regions. Since the DER market is in an early stage, both technology research and outreach programs have the potential to increase
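
    The abstract does not give the diffusion equations, but the mechanisms it names (adoption driven by economic attractiveness, knowledge spread by word of mouth and by outreach programs) map naturally onto a Bass-style diffusion sketch like the one below; the market size, adoption coefficients and scenario values are placeholders, not results from the study.

        import numpy as np

        def diffuse(market_size, p_outreach, q_word_of_mouth, years):
            """Cumulative adopters with external influence (outreach) and internal influence (word of mouth)."""
            adopters = np.zeros(years + 1)
            for t in range(years):
                remaining = market_size - adopters[t]
                hazard = p_outreach + q_word_of_mouth * adopters[t] / market_size
                adopters[t + 1] = adopters[t] + hazard * remaining
            return adopters

        baseline = diffuse(10_000, p_outreach=0.01, q_word_of_mouth=0.30, years=20)
        program  = diffuse(10_000, p_outreach=0.03, q_word_of_mouth=0.30, years=20)  # stronger outreach
        print(f"adopters after 20 years: baseline {baseline[-1]:.0f}, program scenario {program[-1]:.0f}")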

  5. Analysis of similarity/dissimilarity of DNA sequences based on convolutional code model.

    PubMed

    Liu, Xiao; Tian, Feng Chun; Wang, Shi Yuan

    2010-02-01

    Based on the convolutional code model of error-correction coding theory, we propose an approach to characterize and compare DNA sequences that takes the effect of codon context into account. We construct an 8-component vector whose components are the normalized leading eigenvalues of the L/L and M/M matrices associated with the original DNA sequences and the transformed sequences. The utility of our approach is illustrated by examining the similarities/dissimilarities among the coding sequences of the first exon of the beta-globin gene of 11 species, demonstrating the efficiency of error-correction coding theory in the analysis of similarity/dissimilarity of DNA sequences.

  6. Implementation of an anomalous radial transport model for continuum kinetic edge codes

    NASA Astrophysics Data System (ADS)

    Bodi, K.; Krasheninnikov, S. I.; Cohen, R. H.; Rognlien, T. D.

    2007-11-01

    Radial plasma transport in magnetic fusion devices is often dominated by plasma turbulence compared to neoclassical collisional transport. Continuum kinetic edge codes [such as the (2d,2v) transport version of TEMPEST and also EGK] compute the collisional transport directly, but there is a need to model the anomalous transport from turbulence for long-time transport simulations. Such a model is presented and results are shown for its implementation in the TEMPEST gyrokinetic edge code. The model includes velocity-dependent convection and diffusion coefficients expressed as Hermite polynomials in velocity. The Hermite coefficients can be set, e.g., by specifying the ratio of particle and energy transport as in fluid transport codes. The anomalous transport terms preserve the property of no particle flux into unphysical regions of velocity space. TEMPEST simulations are presented showing the separate control of particle and energy anomalous transport, and comparisons are made with neoclassical transport also included.
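
    As an illustration of the velocity dependence, a diffusion coefficient built from a short Hermite expansion can be evaluated and weighted against a Maxwellian to see how it partitions particle versus energy transport. The coefficients and thermal velocity below are placeholders, not values from TEMPEST, and the moment-based weighting is only a schematic stand-in for the fluid-code-style specification mentioned above.

        import numpy as np
        from numpy.polynomial.hermite import hermval

        v_thermal = 1.0
        coeffs = [1.0, 0.0, 0.15]                       # D(v) = c0*H0 + c2*H2(v/v_t); odd terms omitted
        v = np.linspace(-4.0, 4.0, 201) * v_thermal
        D_anom = hermval(v / v_thermal, coeffs)         # velocity-dependent anomalous diffusivity

        maxwellian = np.exp(-0.5 * (v / v_thermal) ** 2)
        particle_weight = np.trapz(D_anom * maxwellian, v)
        energy_weight = np.trapz(D_anom * maxwellian * (v / v_thermal) ** 2, v)
        print(f"energy-to-particle weighting of this D(v): {energy_weight / particle_weight:.2f}")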

  7. Comparison of current state residential energy codes with the 1992 model energy code for one- and two-family dwellings; 1994

    SciTech Connect

    Klevgard, L.A.; Taylor, Z.T.; Lucas, R.G.

    1995-01-01

    This report is one in a series of documents describing research activities in support of the US Department of Energy (DOE) Building Energy Codes Program. The Pacific Northwest Laboratory (PNL) leads the program for DOE. The goal of the program is to develop and support the adoption, implementation, and enforcement of Federal, State, and local energy codes for new buildings. The program approach to meeting the goal is to initiate and manage individual research and standards and guidelines development efforts that are planned and conducted in cooperation with representatives from throughout the buildings community. Projects under way involve practicing architects and engineers, professional societies and code organizations, industry representatives, and researchers from the private sector and national laboratories. Research results and technical justifications for standards criteria are provided to standards development and model code organizations and to Federal, State, and local jurisdictions as a basis to update their codes and standards. This effort helps to ensure that building standards incorporate the latest research results to achieve maximum energy savings in new buildings, yet remain responsive to the needs of the affected professions, organizations, and jurisdictions. Also supported are the implementation, deployment, and use of energy-efficient codes and standards. This report documents findings from an analysis conducted by PNL of the States' building codes to determine if the codes meet or exceed the 1992 MEC energy efficiency requirements (CABO 1992a).

  8. A distributed clients/distributed servers model for STARCAT

    NASA Technical Reports Server (NTRS)

    Pirenne, B.; Albrecht, M. A.; Durand, D.; Gaudet, S.

    1992-01-01

    STARCAT, the Space Telescope ARchive and CATalogue user interface, has been around for a number of years already. During this time it has been enhanced and augmented in a number of different fields. This time, we would like to dwell on a new capability allowing geographically distributed user interfaces to connect to geographically distributed data servers. This new concept permits users anywhere on the internet running STARCAT on their local hardware to access, e.g., whichever of the three existing HST archive sites is available, to get information on the CFHT archive through a transparent connection to the CADC in BC, or to get the La Silla weather by connecting to the ESO database in Munich, all during the same session. Similarly, PreView (or quick-look) images and spectra will also flow directly to the user from wherever they are available. Moving towards an 'X'-based STARCAT is another goal being pursued: a graphic/image server and a help/doc server are currently being added to it. They should further enhance user independence and access transparency.

  9. Diverse and pervasive subcellular distributions for both coding and long noncoding RNAs

    PubMed Central

    Wilk, Ronit; Hu, Jack; Blotsky, Dmitry; Krause, Henry M.

    2016-01-01

    In a previous analysis of 2300 mRNAs via whole-mount fluorescent in situ hybridization in cellularizing Drosophila embryos, we found that 70% of the transcripts exhibited some form of subcellular localization. To see whether this prevalence is unique to early Drosophila embryos, we examined ∼8000 transcripts over the full course of embryogenesis and ∼800 transcripts in late third instar larval tissues. The numbers and varieties of new subcellular localization patterns are both striking and revealing. In the much larger cells of the third instar larva, virtually all transcripts observed showed subcellular localization in at least one tissue. We also examined the prevalence and variety of localization mechanisms for >100 long noncoding RNAs. All of these were also found to be expressed and subcellularly localized. Thus, subcellular RNA localization appears to be the norm rather than the exception for both coding and noncoding RNAs. These results, which have been annotated and made available on a recompiled database, provide a rich and unique resource for functional gene analyses, some examples of which are provided. PMID:26944682

  10. Distribution and chemical coding of intramural neurons in the porcine ileum during proliferative enteropathy.

    PubMed

    Pidsudko, Z; Kaleczyc, J; Wasowicz, K; Sienkiewicz, W; Majewski, M; Zajac, W; Lakomy, M

    2008-01-01

    Enteric neurons are highly adaptive in their response to various pathological processes including inflammation, so the aim of this study was to describe the chemical coding of neurons in the ileal intramural ganglia in porcine proliferative enteropathy (PPE). Accordingly, juvenile Large White Polish pigs with clinically diagnosed Lawsonia intracellularis infection (PPE; n=3) and a group of uninfected controls (C; n=3) were studied. Ileal tissue from each animal was processed for dual-labelling immunofluorescence using antiserum specific for protein gene product 9.5 (PGP 9.5) in combination with antiserum to one of: vasoactive intestinal polypeptide (VIP), substance P (SP), calcitonin gene-related peptide (CGRP), somatostatin (SOM), neuropeptide Y (NPY) or galanin (GAL). In infected pigs, enteric neurons were found in ganglia located within three intramural plexuses: inner submucosal (ISP), outer submucosal (OSP) and myenteric (MP). Immunofluorescence labelling revealed increases in the number of neurons containing GAL, SOM, VIP and CGRP in pigs with PPE. Neuropeptides may therefore have an important role in the function of porcine enteric local nerve circuits under pathological conditions, when the nervous system is stressed, challenged or afflicted by disease such as PPE. However, further studies are required to determine the exact physiological relevance of the observed adaptive changes. PMID:18061202

  11. Numeral series hidden in the distribution of atomic mass of amino acids to codon domains in the genetic code.

    PubMed

    Wohlin, Åsa

    2015-03-21

    The distribution of codons in the nearly universal genetic code is a long discussed issue. At the atomic level, the numeral series 2x² (x=5-0) lies behind electron shells and orbitals. Numeral series appear in formulas for spectral lines of hydrogen. The question here was if some similar scheme could be found in the genetic code. A table of 24 codons was constructed (synonyms counted as one) for 20 amino acids, four of which have two different codons. An atomic mass analysis was performed, built on common isotopes. It was found that a numeral series 5 to 0 with exponent 2/3 times 10² revealed detailed congruency with codon-grouped amino acid side-chains, simultaneously with the division on atom kinds, further with main 3rd base groups, backbone chains and with codon-grouped amino acids in relation to their origin from glycolysis or the citrate cycle. Hence, it is proposed that this series in a dynamic way may have guided the selection of amino acids into codon domains. Series with simpler exponents also showed noteworthy correlations with the atomic mass distribution on main codon domains; especially the 2x²-series times a factor 16 appeared as a conceivable underlying level, both for the atomic mass and charge distribution. Furthermore, it was found that atomic mass transformations between numeral systems, possibly interpretable as dimension degree steps, connected the atomic mass of codon bases with codon-grouped amino acids and with the exponent 2/3-series in several astonishing ways. Thus, it is suggested that they may be part of a deeper reference system.

  12. A comparison of natural-image-based models of simple-cell coding.

    PubMed

    Willmore, B; Watters, P A; Tolhurst, D J

    2000-01-01

    Models such as that of Olshausen and Field (O&F, 1997 Vision Research 37 3311-3325) and principal components analysis (PCA) have been used to model simple-cell receptive fields, and to try to elucidate the statistical principles underlying visual coding in area V1. They connect the statistical structure of natural images with the statistical structure of the coding used in V1. The O&F model has created particular interest because the basis functions it produces resemble the receptive fields of simple cells. We evaluate these models in terms of their sparseness and dispersal, both of which have been suggested as desirable for efficient visual coding. However, both attributes have been defined ambiguously in the literature, and we have been obliged to formulate specific definitions in order to allow any comparison between models at all. We find that both attributes are strongly affected by any preprocessing (e.g. spectral pseudo-whitening or a logarithmic transformation) which is often applied to images before they are analysed by PCA or the O&F model. We also find that measures of sparseness are affected by the size of the filters--PCA filters with small receptive fields appear sparser than PCA filters with larger spatial extent. Finally, normalisation of the means and variances of filters influences measures of dispersal. It is necessary to control for all of these factors before making any comparisons between different models. Having taken these factors into account, we find that the code produced by the O&F model is somewhat sparser than the code produced by PCA. However, the difference is rather smaller than might have been expected, and a measure of dispersal is required to distinguish clearly between the two models. PMID:11144817
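
    Because the paper stresses that "sparseness" has been defined ambiguously, any comparison needs an explicit measure. One commonly used proxy is the excess kurtosis of a filter's response distribution, shown below on synthetic responses; this is a generic illustration of such a measure, not necessarily the definition the authors adopt.

        import numpy as np

        def response_kurtosis(responses):
            """Excess kurtosis of a filter's response distribution: a common sparseness proxy."""
            r = responses - responses.mean()
            return np.mean(r**4) / np.mean(r**2) ** 2 - 3.0

        rng = np.random.default_rng(3)
        dense_code = rng.normal(size=100_000)       # Gaussian-like responses: not sparse
        sparse_code = rng.laplace(size=100_000)     # heavier-tailed responses: sparser
        print("kurtosis, dense code :", round(response_kurtosis(dense_code), 2))
        print("kurtosis, sparse code:", round(response_kurtosis(sparse_code), 2))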

  13. Development and Testing of a Chemical Sputtering Model for the Monte Carlo Impurity (MCI) Code

    NASA Astrophysics Data System (ADS)

    Loh, Y. S.; Evans, T. E.; West, W. P.; Finkenthal, D. F.; Fenstermacher, M. E.; Porter, G. D.

    1997-11-01

    Fluid code calculations indicate that chemical sputtering may be an important process in high density, radiatively detached, tokamak divertor operations. A chemical sputtering model has been designed and installed into the DIII--D Monte Carlo Impurity (MCI) transport code. We will discuss how the model was constructed and the sources of atomic data used. Comparisons between chemical and physical sputtering yields will be presented for differing plasma conditions. Preliminary comparisons with DIII--D experimental data and a discussion of the benchmarking process will be presented.

  14. Pulsed Inductive Thruster (PIT): Modeling and Validation Using the MACH2 Code

    NASA Technical Reports Server (NTRS)

    Schneider, Steven (Technical Monitor); Mikellides, Pavlos G.

    2003-01-01

    Numerical modeling of the Pulsed Inductive Thruster exercising the magnetohydrodynamics code MACH2 aims to provide bilateral validation of the thruster's measured performance and the code's capability of capturing the pertinent physical processes. Computed impulse values for helium and argon propellants demonstrate excellent correlation to the experimental data for a range of energy levels and propellant-mass values. The effects of the vacuum tank wall and mass-injection scheme were investigated and found to produce only trivial changes in the overall performance. An idealized model for these energy levels and propellants deduces that the energy expended on the internal energy modes and plasma dissipation processes is independent of the propellant type, mass, and energy level.

  15. NMSDECAY: A Fortran code for supersymmetric particle decays in the Next-to-Minimal Supersymmetric Standard Model

    NASA Astrophysics Data System (ADS)

    Das, Debottam; Ellwanger, Ulrich; Teixeira, Ana M.

    2012-03-01

    The code NMSDECAY allows one to compute widths and branching ratios of sparticle decays in the Next-to-Minimal Supersymmetric Standard Model. It is based on a generalization of SDECAY to include the extended Higgs and neutralino sectors of the NMSSM. Slepton 3-body decays, possibly relevant in the case of a singlino-like lightest supersymmetric particle, have been added. NMSDECAY will be part of the NMSSMTools package, which computes Higgs and sparticle masses and Higgs decays in the NMSSM. Program summary: Program title: NMSDECAY Catalogue identifier: AELC_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AELC_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 188 177 No. of bytes in distributed program, including test data, etc.: 1 896 478 Distribution format: tar.gz Programming language: FORTRAN77 Computer: All supporting g77, gfortran, ifort Operating system: All supporting g77, gfortran, ifort Classification: 11.1 External routines: Routines in the NMSSMTools package: At least one of the routines in the directory main (e.g. nmhdecay.f), all routines in the directory sources. (All software is included in the distribution package.) Nature of problem: Calculation of all decay widths and decay branching fractions of all particles in the Next-to-Minimal Supersymmetric Standard Model. Solution method: Suitable generalization of the code SDECAY [1] including the extended Higgs and neutralino sector of the Next-to-Minimal Supersymmetric Standard Model, and slepton 3-body decays. Additional comments: NMSDECAY is interfaced with NMSSMTools, available on the web page http://www.th.u-psud.fr/NMHDECAY/nmssmtools.html. Running time: On an Intel Core i7 with 2.8 GHz: about 2 seconds per point in parameter space, if all flags flagqcd, flagmulti and flagloop are switched on.

  16. Modeling IrisCode and its variants as convex polyhedral cones and its security implications.

    PubMed

    Kong, Adams Wai-Kin

    2013-03-01

    IrisCode, developed by Daugman, in 1993, is the most influential iris recognition algorithm. A thorough understanding of IrisCode is essential, because over 100 million persons have been enrolled by this algorithm and many biometric personal identification and template protection methods have been developed based on IrisCode. This paper indicates that a template produced by IrisCode or its variants is a convex polyhedral cone in a hyperspace. Its central ray, being a rough representation of the original biometric signal, can be computed by a simple algorithm, which can often be implemented in one Matlab command line. The central ray is an expected ray and also an optimal ray of an objective function on a group of distributions. This algorithm is derived from geometric properties of a convex polyhedral cone but does not rely on any prior knowledge (e.g., iris images). The experimental results show that biometric templates, including iris and palmprint templates, produced by different recognition methods can be matched through the central rays in their convex polyhedral cones and that templates protected by a method extended from IrisCode can be broken into. These experimental results indicate that, without a thorough security analysis, convex polyhedral cone templates cannot be assumed secure. Additionally, the simplicity of the algorithm implies that even junior hackers without knowledge of advanced image processing and biometric databases can still break into protected templates and reveal relationships among templates produced by different recognition methods.

  17. Data model description for the DESCARTES and CIDER codes. Hanford Environmental Dose Reconstruction Project

    SciTech Connect

    Miley, T.B.; Ouderkirk, S.J.; Nichols, W.E.; Eslinger, P.W.

    1993-01-01

    The primary objective of the Hanford Environmental Dose Reconstruction (HEDR) Project is to estimate the radiation dose that individuals could have received as a result of emissions since 1944 from the US Department of Energy's (DOE) Hanford Site near Richland, Washington. One of the major objectives of the HEDR Project is to develop several computer codes to model the airborne releases, transport, and environmental accumulation of radionuclides resulting from Hanford operations from 1944 through 1972. In July 1992, the HEDR Project Manager determined that the computer codes being developed (DESCARTES, calculation of environmental accumulation from airborne releases, and CIDER, dose calculations from environmental accumulation) were not sufficient to create accurate models. A team of HEDR staff members developed a plan to assure that computer codes would meet HEDR Project goals. The plan consists of five tasks: (1) code requirements definition, (2) scoping studies, (3) design specifications, (4) benchmarking, and (5) data modeling. This report defines the data requirements for the DESCARTES and CIDER codes.

  20. Modified-Gravity-GADGET: a new code for cosmological hydrodynamical simulations of modified gravity models

    NASA Astrophysics Data System (ADS)

    Puchwein, Ewald; Baldi, Marco; Springel, Volker

    2013-11-01

    We present a new massively parallel code for N-body and cosmological hydrodynamical simulations of modified gravity models. The code employs a multigrid-accelerated Newton-Gauss-Seidel relaxation solver on an adaptive mesh to efficiently solve for perturbations in the scalar degree of freedom of the modified gravity model. As this new algorithm is implemented as a module for the P-GADGET3 code, it can at the same time follow the baryonic physics included in P-GADGET3, such as hydrodynamics, radiative cooling and star formation. We demonstrate that the code works reliably by applying it to simple test problems that can be solved analytically, as well as by comparing cosmological simulations to results from the literature. Using the new code, we perform the first non-radiative and radiative cosmological hydrodynamical simulations of an f (R)-gravity model. We also discuss the impact of active galactic nucleus feedback on the matter power spectrum, as well as degeneracies between the influence of baryonic processes and modifications of gravity.

  1. MCNP(TM) Release 6.1.1 beta: Creating and Testing the Code Distribution

    SciTech Connect

    Cox, Lawrence J.; Casswell, Laura

    2014-06-12

    This report documents the preparations for and testing of the production release of MCNP6™1.1 beta through RSICC at ORNL. It addresses tests on supported operating systems (Linux, MacOSX, Windows) with the supported compilers (Intel, Portland Group and gfortran). Verification and Validation test results are documented elsewhere. This report does not address in detail the overall packaging of the distribution. Specifically, it does not address the nuclear and atomic data collection, the other included software packages (MCNP5, MCNPX and MCNP6) and the collection of reference documents.

  2. Time domain analysis of the weighted distributed order rheological model

    NASA Astrophysics Data System (ADS)

    Cao, Lili; Pu, Hai; Li, Yan; Li, Ming

    2016-05-01

    This paper presents the fundamental solution and relevant properties of the weighted distributed order rheological model in the time domain. Based on the construction of distributed order damper and the idea of distributed order element networks, this paper studies the weighted distributed order operator of the rheological model, a generalization of distributed order linear rheological model. The inverse Laplace transform on weighted distributed order operators of rheological model has been obtained by cutting the complex plane and computing the complex path integral along the Hankel path, which leads to the asymptotic property and boundary discussions. The relaxation response to weighted distributed order rheological model is analyzed, and it is closely related to many physical phenomena. A number of novel characteristics of weighted distributed order rheological model, such as power-law decay and intermediate phenomenon, have been discovered as well. And meanwhile several illustrated examples play important role in validating these results.

  3. A Perceptual Model for Sinusoidal Audio Coding Based on Spectral Integration

    NASA Astrophysics Data System (ADS)

    van de Par, Steven; Kohlrausch, Armin; Heusdens, Richard; Jensen, Jesper; Jensen, Søren Holdt

    2005-12-01

    Psychoacoustical models have been used extensively within audio coding applications over the past decades. Recently, parametric coding techniques have been applied to general audio and this has created the need for a psychoacoustical model that is specifically suited for sinusoidal modelling of audio signals. In this paper, we present a new perceptual model that predicts masked thresholds for sinusoidal distortions. The model relies on signal detection theory and incorporates more recent insights about spectral and temporal integration in auditory masking. As a consequence, the model is able to predict the distortion detectability. In fact, the distortion detectability defines a (perceptually relevant) norm on the underlying signal space which is beneficial for optimisation algorithms such as rate-distortion optimisation or linear predictive coding. We evaluate the merits of the model by combining it with a sinusoidal extraction method and compare the results with those obtained with the ISO MPEG-1 Layer I-II recommended model. Listening tests show a clear preference for the new model. More specifically, the model presented here leads to a reduction of more than 20% in terms of number of sinusoids needed to represent signals at a given quality level.

  4. Models of sporadic meteor body distributions

    NASA Technical Reports Server (NTRS)

    Andreev, V. V.; Belkovich, O. I.

    1987-01-01

    The distribution of orbital elements and the flux density over the celestial sphere are the most common forms of representation of the meteor body distribution in the vicinity of the Earth's orbit. A method for determining the flux density distribution of sporadic meteor bodies was worked out, and the method and its results are discussed.

  5. A simple model for induction core voltage distributions

    SciTech Connect

    Briggs, Richard J.; Fawley, William M.

    2004-07-01

    In fall 2003 T. Hughes of MRC used a full EM simulation code (LSP) to show that the electric field stress distribution near the outer radius of the longitudinal gaps between the four Metglas induction cores is very nonuniform in the original design of the DARHT-2 accelerator cells. In this note we derive a simple model of the electric field distribution in the induction core region to provide physical insights into this result. The starting point in formulating our model is to recognize that the electromagnetic fields in the induction core region of the DARHT-2 accelerator cells should be accurately represented within a quasi-static approximation because the timescale for the fields to change is much longer than the EM wave propagation time. The difficulty one faces is the fact that the electric field is a mixture of both a "quasi-magnetostatic field" (having a nonzero curl, with Bdot the source) and a "quasi-electrostatic field" (the source being electric charges on the various metal surfaces). We first discuss the EM field structure on the "micro-scale" of individual tape windings in Section 2. The insights from that discussion are then used to formulate a "macroscopic" description of the fields inside an "equivalent homogeneous tape wound core region" in Section 3. This formulation explicitly separates the nonlinear core magnetics from the quasi-electrostatic components of the electric field. In Section 4 a physical interpretation of the radial dependence of the electrostatic component of the electric field derived from this model is presented in terms of distributed capacitances, and the voltage distribution from gap to gap is related to various "equivalent" lumped capacitances. Analytic solutions of several simple multi-core cases are presented in Sections 5 and 6 to help provide physical insight into the effect of various proposed changes in the geometrical parameters of the DARHT-2 accelerator cell. Our results show that over most of the gap
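
    As a worked illustration of the lumped-capacitance interpretation mentioned above, the sketch below divides a stack voltage across series "equivalent" capacitances, one per inter-core gap; the numerical values are placeholders, not DARHT-2 parameters.

      # Series capacitors carry the same charge Q, so the gap voltages divide
      # inversely with capacitance: V_i = Q / C_i with Q = V_total / sum(1/C_i).
      C_gaps = [120e-12, 95e-12, 95e-12, 140e-12]   # farads (illustrative values)
      V_total = 200e3                               # volts across the whole stack

      Q = V_total / sum(1.0 / c for c in C_gaps)
      for i, c in enumerate(C_gaps, start=1):
          print(f"gap {i}: {Q / c / 1e3:.1f} kV")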

  6. THATCH: A computer code for modelling thermal networks of high- temperature gas-cooled nuclear reactors

    SciTech Connect

    Kroeger, P.G.; Kennett, R.J.; Colman, J.; Ginsberg, T. )

    1991-10-01

    This report documents the THATCH code, which can be used to model general thermal and flow networks of solids and coolant channels in two-dimensional r-z geometries. The main application of THATCH is to model reactor thermo-hydraulic transients in High-Temperature Gas-Cooled Reactors (HTGRs). The available modules simulate pressurized or depressurized core heatup transients, heat transfer to general exterior sinks or to specific passive Reactor Cavity Cooling Systems, which can be air or water-cooled. Graphite oxidation during air or water ingress can be modelled, including the effects of added combustion products to the gas flow and the additional chemical energy release. A point kinetics model is available for analyzing reactivity excursions, for instance due to water ingress, and also for hypothetical no-scram scenarios. For most HTGR transients, which generally range over hours, a user-selected nodalization of the core in r-z geometry is used. However, a separate model of heat transfer in the symmetry element of each fuel element is also available for very rapid transients. This model can be applied coupled to the traditional coarser r-z nodalization. This report describes the mathematical models used in the code and the method of solution. It describes the code and its various sub-elements. Details of the input data and file usage, with file formats, are given for the code, as well as for several preprocessing and postprocessing options. The THATCH model of the currently applicable 350 MW(th) reactor is described. Input data for four sample cases are given with output available in fiche form. Installation requirements and code limitations, as well as the most common error indications, are listed. 31 refs., 23 figs., 32 tabs.

  7. A distribution model for the aerial application of granular agricultural particles

    NASA Technical Reports Server (NTRS)

    Fernandes, S. T.; Ormsbee, A. I.

    1978-01-01

    A model is developed to predict the shape of the distribution of granular agricultural particles applied by aircraft. The particle is assumed to have a random size and shape and the model includes the effect of air resistance, distributor geometry and aircraft wake. General requirements for the maintenance of similarity of the distribution for scale model tests are derived and are addressed to the problem of a nongeneral drag law. It is shown that if the mean and variance of the particle diameter and density are scaled according to the scaling laws governing the system, the shape of the distribution will be preserved. Distributions are calculated numerically and show the effect of a random initial lateral position, particle size and drag coefficient. A listing of the computer code is included.
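
    A small Monte Carlo sketch in the spirit of the model: particles with random diameter, drag coefficient, and release offset are integrated through quadratic air drag to the ground, and the spread of landing positions approximates the swath distribution. All physical values below are illustrative assumptions, and aircraft-wake and distributor-geometry effects are omitted.

      import numpy as np

      rng = np.random.default_rng(1)
      g, rho_air, rho_p = 9.81, 1.2, 1300.0     # m/s^2, kg/m^3, kg/m^3 (assumed)
      V_air, H = 50.0, 10.0                     # aircraft speed (m/s), release height (m)

      def landing_x(d, Cd, x0, dt=2e-3):
          """Integrate one particle with quadratic drag until it reaches the ground."""
          m = rho_p * np.pi * d**3 / 6.0
          A = np.pi * d**2 / 4.0
          x, z, vx, vz = x0, H, V_air, 0.0
          while z > 0.0:
              v = np.hypot(vx, vz)
              k = 0.5 * rho_air * Cd * A * v / m   # drag acceleration per unit velocity
              vx += -k * vx * dt
              vz += (-g - k * vz) * dt
              x += vx * dt
              z += vz * dt
          return x

      d = rng.normal(3e-3, 0.5e-3, 500).clip(0.5e-3)   # particle diameter (m)
      Cd = rng.uniform(0.4, 0.6, 500)                  # drag coefficient
      x0 = rng.normal(0.0, 0.3, 500)                   # release offset (m)
      xs = [landing_x(*p) for p in zip(d, Cd, x0)]
      print(f"mean landing distance {np.mean(xs):.1f} m, spread (std) {np.std(xs):.1f} m")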

  8. Present capabilities and new developments in antenna modeling with the numerical electromagnetics code NEC

    SciTech Connect

    Burke, G.J.

    1988-04-08

    Computer modeling of antennas, since its start in the late 1960's, has become a powerful and widely used tool for antenna design. Computer codes have been developed based on the Method-of-Moments, Geometrical Theory of Diffraction, or integration of Maxwell's equations. Of such tools, the Numerical Electromagnetics Code-Method of Moments (NEC) has become one of the most widely used codes for modeling resonant sized antennas. There are several reasons for this including the systematic updating and extension of its capabilities, extensive user-oriented documentation and accessibility of its developers for user assistance. The result is that there are estimated to be several hundred users of various versions of NEC world wide. 23 refs., 10 figs.

  9. Users manual and modeling improvements for axial turbine design and performance computer code TD2-2

    NASA Technical Reports Server (NTRS)

    Glassman, Arthur J.

    1992-01-01

    Computer code TD2 computes design point velocity diagrams and performance for multistage, multishaft, cooled or uncooled, axial flow turbines. This streamline analysis code was recently modified to upgrade modeling related to turbine cooling and to the internal loss correlation. These modifications are presented in this report along with descriptions of the code's expanded input and output. This report serves as the users manual for the upgraded code, which is named TD2-2.

  10. Development of a numerical computer code and circuit element models for simulation of firing systems

    SciTech Connect

    Carpenter, K.H. . Dept. of Electrical and Computer Engineering)

    1990-07-02

    Numerical simulation of firing systems requires both the appropriate circuit analysis framework and the special element models required by the application. We have modified the SPICE circuit analysis code (version 2G.6), developed originally at the Electronic Research Laboratory of the University of California, Berkeley, to allow it to be used on MSDOS-based personal computers and to give it two additional circuit elements needed by firing systems--fuses and saturating inductances. An interactive editor and a batch driver have been written to ease the use of the SPICE program by system designers, and the interactive graphical post processor, NUTMEG, supplied by U. C. Berkeley with SPICE version 3B1, has been interfaced to the output from the modified SPICE. Documentation and installation aids have been provided to make the total software system accessible to PC users. Sample problems show that the resulting code is in agreement with the FIRESET code on which the fuse model was based (with some modifications to the dynamics of scaling fuse parameters). In order to allow for more complex simulations of firing systems, studies have been made of additional special circuit elements--switches and ferrite cored inductances. A simple switch model has been investigated which promises to give at least a first approximation to the physical effects of a non-ideal switch, and which can be added to the existing SPICE circuits without changing the SPICE code itself. The effect of fast rise time pulses on ferrites has been studied experimentally in order to provide a base for future modeling and incorporation of the dynamic effects of changes in core magnetization into the SPICE code. This report contains detailed accounts of the work on these topics performed during the period it covers, and has appendices listing all source code written and documentation produced.

  11. Motion-compensated coding and frame rate up-conversion: models and analysis.

    PubMed

    Dar, Yehuda; Bruckstein, Alfred M

    2015-07-01

    Block-based motion estimation (ME) and motion compensation (MC) techniques are widely used in modern video processing algorithms and compression systems. The great variety of video applications and devices results in diverse compression specifications, such as frame rates and bit rates. In this paper, we study the effect of frame rate and compression bit rate on block-based ME and MC as commonly utilized in inter-frame coding and frame rate up-conversion (FRUC). This joint examination yields a theoretical foundation for comparing MC procedures in coding and FRUC. First, the video signal is locally modeled as a noisy translational motion of an image. Then, we theoretically model the motion-compensated prediction of available and absent frames as in coding and FRUC applications, respectively. The theoretic MC-prediction error is studied further and its autocorrelation function is calculated, yielding useful separable-simplifications for the coding application. We argue that a linear relation exists between the variance of the MC-prediction error and temporal distance. While the relevant distance in MC coding is between the predicted and reference frames, MC-FRUC is affected by the distance between the frames available for interpolation. We compare our estimates with experimental results and show that the theory explains qualitatively the empirical behavior. Then, we use the models proposed to analyze a system for improving of video coding at low bit rates, using a spatio-temporal scaling. Although this concept is practically employed in various forms, so far it lacked a theoretical justification. We here harness the proposed MC models and present a comprehensive analysis of the system, to qualitatively predict the experimental results.
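
    The sketch below illustrates, on a synthetic "noisy translation" sequence, the kind of relation the paper analyzes: the variance of the block-matching motion-compensated prediction residual grows with the temporal distance between the reference and the predicted frame. The sequence model (one pixel per frame shift plus an independent innovation added at each frame) and all parameters are illustrative assumptions, not the paper's exact setup.

      import numpy as np

      rng = np.random.default_rng(2)

      # Build a synthetic sequence: a smoothed random image shifted by one pixel
      # per frame, with a small signal innovation added at every step.
      base = rng.standard_normal((128, 128))
      for _ in range(3):
          base = (base + np.roll(base, 1, 0) + np.roll(base, 1, 1)) / 3.0
      frames, s = [base.copy()], base.copy()
      for _ in range(6):
          s = np.roll(s, 1, axis=1) + 0.05 * rng.standard_normal(s.shape)
          frames.append(s.copy())

      def mc_residual_var(ref, cur, block=16, search=6):
          """Mean variance of the block-matching MC prediction residual."""
          h, w = cur.shape
          res = []
          for by in range(0, h - block + 1, block):
              for bx in range(search, w - block - search, block):
                  blk = cur[by:by + block, bx:bx + block]
                  sse = min(np.sum((blk - ref[by:by + block, bx + dx:bx + dx + block]) ** 2)
                            for dx in range(-search, search + 1))
                  res.append(sse / block**2)
          return float(np.mean(res))

      for k in range(1, 6):
          print(f"temporal distance {k}: residual variance {mc_residual_var(frames[0], frames[k]):.4f}")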

  12. ICRCCM (InterComparison of Radiation Codes used in Climate Models) Phase 2: Verification and calibration of radiation codes in climate models

    SciTech Connect

    Ellingson, R.G.; Wiscombe, W.J.; Murcray, D.; Smith, W.; Strauch, R.

    1990-01-01

    Following the finding by the InterComparison of Radiation Codes used in Climate Models (ICRCCM) of large differences among fluxes predicted by sophisticated radiation models that could not be sorted out because of the lack of a set of accurate atmospheric spectral radiation data measured simultaneously with the important radiative properties of the atmosphere, our team of scientists proposed to remedy the situation by carrying out a comprehensive program of measurement and analysis called SPECTRE (Spectral Radiance Experiment). SPECTRE will establish an absolute standard against which to compare models, and will aim to remove the "hidden variables" (unknown humidities, aerosols, etc.) which radiation modelers have invoked to excuse disagreements with observation. The data to be collected during SPECTRE will form the test bed for the second phase of ICRCCM, namely verification and calibration of radiation codes used in climate models. This should lead to more accurate radiation models for use in parameterizing climate models, which in turn play a key role in the prediction of trace-gas greenhouse effects. Overall, the project is proceeding much as had been anticipated in the original proposal. The most significant accomplishments to date include the completion of the analysis of the original ICRCCM calculations, the completion of the initial sensitivity analysis of the radiation calculations for the effects of uncertainties in the measurement of water vapor and temperature, and the acquisition and testing of the inexpensive spectrometers for use in the field experiment. The sensitivity analysis and the spectrometer tests have given us much more confidence that the field experiment will yield the quality of data necessary to make significant tests of, and improvements to, radiative transfer models used in climate studies.

  13. Stochastic Models for the Distribution of Index Terms.

    ERIC Educational Resources Information Center

    Nelson, Michael J.

    1989-01-01

    Presents a probability model of the occurrence of index terms used to derive discrete distributions which are mixtures of Poisson and negative binomial distributions. These distributions give better fits than the simpler Zipf distribution, have the advantage of being more explanatory, and can incorporate a time parameter if necessary. (25…

  14. A Probabilistic Model for the Distribution of Authorships.

    ERIC Educational Resources Information Center

    Ajiferuke, Isola

    1991-01-01

    Discusses bibliometric studies of research collaboration and describes the development of a theoretical model for the distribution of authorship. The shifted Waring distribution model and 15 other probability models are tested for goodness-of-fit, and results are reported that indicate the shifted inverse Gaussian-Poisson model provides the best…

  15. A random distribution reacting mixing layer model

    NASA Technical Reports Server (NTRS)

    Jones, Richard A.

    1994-01-01

    A methodology for simulation of molecular mixing and the resulting velocity and temperature fields has been developed. The ideas are applied to the flow conditions present in the NASA Lewis Planar Reacting Shear Layer (PRSL) facility, and results compared to experimental data. A gaussian transverse turbulent velocity distribution is used in conjunction with a linearly increasing time scale to describe the mixing of different regions of the flow. Equilibrium reaction calculations are then performed on the mix to arrive at a new species composition and temperature. Velocities are determined through summation of momentum contributions. The analysis indicates a combustion efficiency of the order of 80 percent for the reacting mixing layer, and a turbulent Schmidt number of 2/3. The success of the model is attributed to the simulation of large-scale transport of fluid. The favorable comparison shows that a relatively quick and simple PC calculation is capable of simulating the basic flow structure in the reacting and non-reacting shear layer present in the facility given basic assumptions about turbulence properties.

  16. A random distribution reacting mixing layer model

    NASA Technical Reports Server (NTRS)

    Jones, Richard A.; Marek, C. John; Myrabo, Leik N.; Nagamatsu, Henry T.

    1994-01-01

    A methodology for simulation of molecular mixing, and the resulting velocity and temperature fields has been developed. The ideas are applied to the flow conditions present in the NASA Lewis Research Center Planar Reacting Shear Layer (PRSL) facility, and results compared to experimental data. A gaussian transverse turbulent velocity distribution is used in conjunction with a linearly increasing time scale to describe the mixing of different regions of the flow. Equilibrium reaction calculations are then performed on the mix to arrive at a new species composition and temperature. Velocities are determined through summation of momentum contributions. The analysis indicates a combustion efficiency of the order of 80 percent for the reacting mixing layer, and a turbulent Schmidt number of 2/3. The success of the model is attributed to the simulation of large-scale transport of fluid. The favorable comparison shows that a relatively quick and simple PC calculation is capable of simulating the basic flow structure in the reacting and nonreacting shear layer present in the facility given basic assumptions about turbulence properties.
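
    A very small sketch of the mixing description used in the two records above: fluid arriving at transverse position y is assumed to have been displaced by a zero-mean Gaussian random distance whose standard deviation grows linearly with convective time, so the local fraction of upper-stream fluid is the probability that the parcel originated above the splitter plate. The growth rate and times are illustrative, and the equilibrium-chemistry step of the model is not reproduced here.

      import numpy as np
      from math import erf, sqrt

      def upper_stream_fraction(y, t, sigma_rate=2.0):
          """Fraction of fluid at (y, t) that originated from the y > 0 stream."""
          sigma = max(sigma_rate * t, 1e-9)        # linearly growing mixing scale
          return 0.5 * (1.0 + erf(y / (sqrt(2.0) * sigma)))

      for t in (0.01, 0.05, 0.10):                  # seconds downstream (illustrative)
          prof = [upper_stream_fraction(y, t) for y in np.linspace(-0.5, 0.5, 5)]
          print(f"t = {t:4.2f} s:", " ".join(f"{f:.2f}" for f in prof))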

  17. Solar optical codes evaluation for modeling and analyzing complex solar receiver geometries

    NASA Astrophysics Data System (ADS)

    Yellowhair, Julius; Ortega, Jesus D.; Christian, Joshua M.; Ho, Clifford K.

    2014-09-01

    Solar optical modeling tools are valuable for modeling and predicting the performance of solar technology systems. Four optical modeling tools were evaluated using the National Solar Thermal Test Facility heliostat field combined with flat plate receiver geometry as a benchmark. The four optical modeling tools evaluated were DELSOL, HELIOS, SolTrace, and Tonatiuh. All are available for free from their respective developers. DELSOL and HELIOS both use a convolution of the sunshape and optical errors for rapid calculation of the incident irradiance profiles on the receiver surfaces. SolTrace and Tonatiuh use ray-tracing methods to intersect the reflected solar rays with the receiver surfaces and construct irradiance profiles. We found the ray-tracing tools, although slower in computation speed, to be more flexible for modeling complex receiver geometries, whereas DELSOL and HELIOS were limited to standard receiver geometries such as flat plate, cylinder, and cavity receivers. We also list the strengths and deficiencies of the tools to show tool preference depending on the modeling and design needs. We provide an example of using SolTrace for modeling nonconventional receiver geometries. The goal is to transfer the irradiance profiles on the receiver surfaces calculated in an optical code to a computational fluid dynamics code such as ANSYS Fluent. This approach eliminates the need for using discrete ordinates or discrete radiation transfer models, which are computationally intensive, within the CFD code. The irradiance profiles on the receiver surfaces then allow for thermal and fluid analysis on the receiver.

  18. Stimulation at Desert Peak -modeling with the coupled THM code FEHM

    DOE Data Explorer

    kelkar, sharad

    2013-04-30

    Numerical modeling of the 2011 shear stimulation at the Desert Peak well 27-15. This submission contains the FEHM executable code for a 64-bit PC Windows-7 machine, and the input and output files for the results presented in the included paper from ARMA-213 meeting.

  19. Assessment of Programming Language Learning Based on Peer Code Review Model: Implementation and Experience Report

    ERIC Educational Resources Information Center

    Wang, Yanqing; Li, Hang; Feng, Yuqiang; Jiang, Yu; Liu, Ying

    2012-01-01

    The traditional assessment approach, in which one single written examination counts toward a student's total score, no longer meets new demands of programming language education. Based on a peer code review process model, we developed an online assessment system called "EduPCR" and used a novel approach to assess the learning of computer…

  20. Turbomachinery Heat Transfer and Loss Modeling for 3D Navier-Stokes Codes

    NASA Technical Reports Server (NTRS)

    DeWitt, Kenneth; Ameri, Ali

    2005-01-01

    This report focuses on the use of NASA Glenn on-site computational facilities to develop, validate, and apply models for use in advanced 3D Navier-Stokes Computational Fluid Dynamics (CFD) codes, enhancing the capability to compute heat transfer and losses in turbomachinery.

  1. 7 CFR Exhibit E to Subpart A of... - Voluntary National Model Building Codes

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 12 2010-01-01 2010-01-01 false Voluntary National Model Building Codes E Exhibit E..., DEPARTMENT OF AGRICULTURE PROGRAM REGULATIONS CONSTRUCTION AND REPAIR Planning and Performing Construction and Other Development Pt. 1924, Subpt. A, Exh. E Exhibit E to Subpart A of Part...

  2. 7 CFR Exhibit E to Subpart A of... - Voluntary National Model Building Codes

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 12 2011-01-01 2011-01-01 false Voluntary National Model Building Codes E Exhibit E to Subpart A of Part 1924 Agriculture Regulations of the Department of Agriculture (Continued) RURAL... and Other Development Pt. 1924, Subpt. A, Exh. E Exhibit E to Subpart A of Part...

  3. Atomic hydrogen distribution. [in Titan atmospheric model

    NASA Technical Reports Server (NTRS)

    Tabarie, N.

    1974-01-01

    Several possible H2 vertical distributions in Titan's atmosphere are considered under the constraint of a 5 km-A total quantity. Approximate calculations show that the hydrogen distribution is quite sensitive to two other parameters of Titan's atmosphere: the temperature and the presence of other constituents. The escape fluxes of H and H2 are also estimated as well as the consequent distributions trapped in the Saturnian system.

  4. Variable continental distribution of polymorphisms in the coding regions of DNA-repair genes.

    PubMed

    Mathonnet, Géraldine; Labuda, Damian; Meloche, Caroline; Wambach, Tina; Krajinovic, Maja; Sinnett, Daniel

    2003-01-01

    DNA-repair pathways are critical for maintaining the integrity of the genetic material by protecting against mutations due to exposure-induced damages or replication errors. Polymorphisms in the corresponding genes may be relevant in genetic epidemiology by modifying individual cancer susceptibility or therapeutic response. We report data on the population distribution of potentially functional variants in XRCC1, APEX1, ERCC2, ERCC4, hMLH1, and hMSH3 genes among groups representing individuals of European, Middle Eastern, African, Southeast Asian and North American descent. The data indicate little interpopulation differentiation in some of these polymorphisms and typical FST values ranging from 10 to 17% at others. Low FST was observed in APEX1 and hMSH3 exon 23 in spite of their relatively high minor allele frequencies, which could suggest the effect of balancing selection. In XRCC1, hMSH3 exon 21 and hMLH1 Africa clusters either with Middle East and Europe or with Southeast Asia, which could be related to the demographic history of human populations, whereby human migrations and genetic drift rather than selection would account for the observed differences.

  5. Modeling Soil Moisture Fields Using the Distributed Hydrologic Model MOBIDIC

    NASA Astrophysics Data System (ADS)

    Castillo, A. E.; Entekhabi, D.; Castelli, F.

    2011-12-01

    The Modello Bilancio Idrologico DIstributo e Continuo (MOBIDIC) is a fully-distributed physically-based basin hydrologic model [Castelli et al., 2009]. MOBIDIC represents watersheds using a system of reservoirs that interact through both mass and energy fluxes. The model uses a single-layered soil on a grid. For each grid element, soil moisture is conceptually partitioned into gravitational (free) and capillary-bound water. For computational parsimony, linear parameterization is used for infiltration rather than solving it using the nonlinear Richards equation. Previous applications of MOBIDIC assessed model performance based on streamflow, which is a flux. In this study, the MOBIDIC simulated soil moisture, a state variable, is compared against observed values as well as values simulated by the legacy Simultaneous Heat and Water (SHAW) model [Flerchinger, 2000], which was chosen as the benchmark. Results of initial simulations with the original version of MOBIDIC prompted several model modifications, such as changing the parameterization of evapotranspiration and adding capillary rise, to make the model more robust in simulating the dynamics of soil moisture. In order to test the performance of the modified MOBIDIC, both short-term (a few weeks) and extended (multi-year) simulations were performed for 3 well-studied sites in the US: two sites are mountainous with deep groundwater table and semiarid climate, while the third site is fluvial with shallow groundwater table and temperate climate. For the multi-year simulations, both MOBIDIC and SHAW performed well in modeling the daily observed soil moisture. The simulations also illustrated the benefits of adding the capillary rise module and the other modifications introduced. Moreover, it was successfully demonstrated that MOBIDIC, with some conceptual approaches and some simplified parameterizations, can perform as well as, if not better than, the more sophisticated SHAW model. References Castelli, F., G. Menduni, and B
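
    A single-cell sketch of the two-reservoir soil description summarized above: gravitational (free) water and capillary-bound water exchanged through simple linear coefficients instead of the Richards equation. The capacities, coefficients, and forcing below are illustrative assumptions, not MOBIDIC parameters or equations.

      def soil_step(Wg, Wc, rain, pet, dt=1.0,
                    Wg_max=40.0, Wc_max=60.0,        # storage capacities (mm)
                    k_perc=0.10, k_abs=0.20, k_cap=0.02):
          """Advance gravitational (Wg) and capillary (Wc) storages by one step."""
          infil = min(rain, (Wg_max - Wg) / dt)      # linear infiltration limit
          runoff = rain - infil
          Wg += infil * dt
          absorb = k_abs * Wg * (1.0 - Wc / Wc_max)  # free -> capillary water
          cap_rise = k_cap * (1.0 - Wc / Wc_max)     # capillary rise from below
          perc = k_perc * Wg                         # deep percolation
          et = pet * (Wc / Wc_max)                   # ET limited by capillary store
          Wg = max(Wg - (absorb + perc) * dt, 0.0)
          Wc = min(max(Wc + (absorb + cap_rise - et) * dt, 0.0), Wc_max)
          return Wg, Wc, runoff

      Wg, Wc = 5.0, 30.0
      for day, rain in enumerate([0, 12, 25, 0, 0, 3, 0]):   # mm/day (illustrative)
          Wg, Wc, q = soil_step(Wg, Wc, rain, pet=4.0)
          print(f"day {day}: Wg={Wg:5.1f} mm  Wc={Wc:5.1f} mm  runoff={q:4.1f} mm")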

  6. Nuclear Energy Advanced Modeling and Simulation Waste Integrated Performance and Safety Codes (NEAMS Waste IPSC).

    SciTech Connect

    Schultz, Peter Andrew

    2011-12-01

    The objective of the U.S. Department of Energy Office of Nuclear Energy Advanced Modeling and Simulation Waste Integrated Performance and Safety Codes (NEAMS Waste IPSC) is to provide an integrated suite of computational modeling and simulation (M&S) capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive-waste storage facility or disposal repository. Achieving the objective of modeling the performance of a disposal scenario requires describing processes involved in waste form degradation and radionuclide release at the subcontinuum scale, beginning with mechanistic descriptions of chemical reactions and chemical kinetics at the atomic scale, and upscaling into effective, validated constitutive models for input to high-fidelity continuum scale codes for coupled multiphysics simulations of release and transport. Verification and validation (V&V) is required throughout the system to establish evidence-based metrics for the level of confidence in M&S codes and capabilities, including at the subcontinuum scale and the constitutive models they inform or generate. This report outlines the nature of the V&V challenge at the subcontinuum scale, an approach to incorporate V&V concepts into subcontinuum scale modeling and simulation (M&S), and a plan to incrementally incorporate effective V&V into subcontinuum scale M&S destined for use in the NEAMS Waste IPSC work flow to meet requirements of quantitative confidence in the constitutive models informed by subcontinuum scale phenomena.

  7. Modeling Improvements and Users Manual for Axial-flow Turbine Off-design Computer Code AXOD

    NASA Technical Reports Server (NTRS)

    Glassman, Arthur J.

    1994-01-01

    An axial-flow turbine off-design performance computer code used for preliminary studies of gas turbine systems was modified and calibrated based on the experimental performance of large aircraft-type turbines. The flow- and loss-model modifications and calibrations are presented in this report. Comparisons are made between computed performances and experimental data for seven turbines over wide ranges of speed and pressure ratio. This report also serves as the users manual for the revised code, which is named AXOD.

  8. Distortion-rate models for entropy-coded lattice vector quantization.

    PubMed

    Raffy, P; Antonini, M; Barlaud, M

    2000-01-01

    The increasing demand for real-time applications requires the use of variable-rate quantizers having good performance in the low bit rate domain. In order to minimize the complexity of quantization, while maintaining a reasonably high PSNR, we propose to use an entropy-coded lattice vector quantizer (ECLVQ). These quantizers have been shown to outperform the well-known EZW algorithm in terms of rate-distortion tradeoff. In this paper, we focus our attention on the modeling of the mean squared error (MSE) distortion and the prefix code rate for ECLVQ. First, we generalize the distortion model of Jeong and Gibson (1993) on fixed-rate cubic quantizers to lattices under a high rate assumption. Second, we derive new rate models for ECLVQ, efficient at low bit rates without any high rate assumptions. Simulation results prove the precision of our models. PMID:18262939
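
    A toy entropy-coded lattice vector quantizer, using the cubic lattice Z^n (the simplest lattice; the paper treats more general lattices): samples are rounded to the scaled lattice, the MSE distortion is measured directly, and the first-order entropy of the lattice indices stands in for the prefix-code rate. The source, dimension, and step sizes are illustrative.

      import numpy as np

      rng = np.random.default_rng(3)
      x = rng.laplace(scale=1.0, size=(100_000, 4))      # memoryless Laplacian source

      for step in (0.5, 1.0, 2.0):
          idx = np.round(x / step).astype(int)           # Z^n lattice index per component
          mse = float(np.mean((x - idx * step) ** 2))    # quantization distortion
          _, counts = np.unique(idx, return_counts=True)
          p = counts / counts.sum()
          rate = float(-(p * np.log2(p)).sum())          # index entropy, bits per component
          print(f"step {step:3.1f}: rate {rate:4.2f} bit/sample, MSE {mse:6.4f}")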

  9. A simple modelling of mass diffusion effects on condensation with noncondensable gases for the CATHARE Code

    SciTech Connect

    Coste, P.; Bestion, D.

    1995-09-01

    This paper presents a simple modelling of mass diffusion effects on condensation. In presence of noncondensable gases, the mass diffusion near the interface is modelled using the heat and mass transfer analogy and normally requires an iterative procedure to calculate the interface temperature. Simplifications of the model and of the solution procedure are used without important degradation of the predictions. The model is assessed on experimental data for both film condensation in vertical tubes and direct contact condensation in horizontal tubes, including air-steam, nitrogen-steam and helium-steam data. It is implemented in the CATHARE code, a French system code for nuclear reactor thermal hydraulics developed by CEA, EDF, and FRAMATOME.

  10. Flash flood modeling with the MARINE hydrological distributed model

    NASA Astrophysics Data System (ADS)

    Estupina-Borrell, V.; Dartus, D.; Ababou, R.

    2006-11-01

    Flash floods are characterized by their violence and the rapidity of their occurrence. Because these events are rare and unpredictable, but also fast and intense, their anticipation with sufficient lead time for warning and broadcasting is a primary subject of research. Because of the heterogeneities of the rain and of the behavior of the surface, spatially distributed hydrological models can lead to a better understanding of the processes and can thus contribute to better forecasting of flash floods. Our main goal here is to develop an operational and robust methodology for flash flood forecasting. This methodology should provide relevant data (information) about flood evolution on short time scales, and should be applicable even in locations where direct observations are sparse (e.g. absence of historical and modern rainfalls and streamflows in small mountainous watersheds). The flash flood forecast is obtained by the physically based, space-time distributed hydrological model "MARINE" (Model of Anticipation of Runoff and INondations for Extreme events). This model is presented and tested in this paper for a real flash flood event. The model consists of two steps, or two components: the first component is a "basin" flood module which generates flood runoff in the upstream part of the watershed, and the second component is the "stream network" module, which propagates the flood in the main river and its tributaries. The basin flash flood generation model is a rainfall-runoff model that can integrate remotely sensed data. Surface hydraulics equations are solved with enough simplifying hypotheses to allow real time exploitation. The minimum data required by the model are: (i) the Digital Elevation Model, used to calculate slopes that generate runoff, which can be obtained from satellite imagery (SPOT) or from the French Geographical Institute (IGN); (ii) the rainfall data from meteorological radar, observed or anticipated by the French Meteorological Service (M

  11. A Hierarchical Model for Distributed Seismicity

    NASA Astrophysics Data System (ADS)

    Tejedor, A.; Gomez, J. B.; Pacheco, A. F.

    2009-04-01

    maximum earthquake magnitude expected in the simulated zone. The model has two parameters, c and u. Parameter c, called the coordination number, is a geometric parameter. It represents the number of boxes in a level m connected to a box in level m + 1; parameter u is the fraction of load that rises in the hierarchy due to a relaxation process. Therefore, the fraction 1 - u corresponds to the load that descends in the same process. The only two parameters of the model are fixed taking into account three characteristics of natural seismicity: (i) the power-law relationship between the size of an earthquake and the area of the displaced fault; (ii) the fact, observed in Geology, that the time of recurrence of large faults is shorter than that of small faults; and (iii) the percentages of aftershocks and mainshocks observed in earthquake catalogs. The model shows a self-organized critical behavior. This becomes manifest both in the observation of a steady state around which the load fluctuates and in the power-law behavior of some of the properties of the system, such as the size-frequency distribution of relaxations (earthquakes). The exponent of this power law is around -1 for values of the parameters consistent with the three previous phenomenological observations. Two different strategies for the forecasting of the largest earthquakes in the model have been analyzed. The first one only takes into account the average recurrence time of the target earthquakes, whereas the second utilizes a known precursory pattern, the burst of aftershocks, which has been used for real earthquake prediction. The application of the latter strategy significantly improves the results obtained with the former. In summary, a conceptually simple model of the cellular automaton type with only two parameters can reproduce simultaneously several characteristics of real seismicity, like the Gutenberg-Richter law, shorter recurrence times for big faults compared to small ones, and percentages of aftershocks

  12. A finite-temperature Hartree-Fock code for shell-model Hamiltonians

    NASA Astrophysics Data System (ADS)

    Bertsch, G. F.; Mehlhaff, J. M.

    2016-10-01

    The codes HFgradZ.py and HFgradT.py find axially symmetric minima of a Hartree-Fock energy functional for a Hamiltonian supplied in a shell model basis. The functional to be minimized is the Hartree-Fock energy for zero-temperature properties or the Hartree-Fock grand potential for finite-temperature properties (thermal energy, entropy). The minimization may be subjected to additional constraints besides axial symmetry and nucleon numbers. A single-particle operator can be used to constrain the minimization by adding it to the single-particle Hamiltonian with a Lagrange multiplier. One can also constrain its expectation value in the zero-temperature code. Also the orbital filling can be constrained in the zero-temperature code, fixing the number of nucleons having given Kπ quantum numbers. This is particularly useful to resolve near-degeneracies among distinct minima.

  13. Physics Based Model for Cryogenic Chilldown and Loading. Part IV: Code Structure

    NASA Technical Reports Server (NTRS)

    Luchinsky, D. G.; Smelyanskiy, V. N.; Brown, B.

    2014-01-01

    This is the fourth report in a series of technical reports that describe separated two-phase flow model application to the cryogenic loading operation. In this report we present the structure of the code. The code consists of five major modules: (1) geometry module; (2) solver; (3) material properties; (4) correlations; and finally (5) stability control module. The two key modules - solver and correlations - are further divided into a number of submodules. Most of the physics and knowledge databases related to the properties of cryogenic two-phase flow are included in the cryogenic correlations module. The functional form of those correlations is not well established and is a subject of extensive research. Multiple parametric forms for various correlations are currently available. Some of them are included in the correlations module, as will be described in detail in a separate technical report. Here we describe the overall structure of the code and focus on the details of the solver and stability control modules.

  14. A users manual for the method of moments Aircraft Modeling Code (AMC), version 2

    NASA Technical Reports Server (NTRS)

    Peters, M. E.; Newman, E. H.

    1994-01-01

    This report serves as a user's manual for Version 2 of the 'Aircraft Modeling Code' or AMC. AMC is a user-oriented computer code, based on the method of moments (MM), for the analysis of the radiation and/or scattering from geometries consisting of a main body or fuselage shape with attached wings and fins. The shape of the main body is described by defining its cross section at several stations along its length. Wings, fins, rotor blades, and radiating monopoles can then be attached to the main body. Although AMC was specifically designed for aircraft or helicopter shapes, it can also be applied to missiles, ships, submarines, jet inlets, automobiles, spacecraft, etc. The problem geometry and run control parameters are specified via a two character command language input format. This report describes the input command language and also includes several examples which illustrate typical code inputs and outputs.

  15. A user's manual for the method of moments Aircraft Modeling Code (AMC)

    NASA Technical Reports Server (NTRS)

    Peters, M. E.; Newman, E. H.

    1989-01-01

    This report serves as a user's manual for the Aircraft Modeling Code or AMC. AMC is a user-oriented computer code, based on the method of moments (MM), for the analysis of the radiation and/or scattering from geometries consisting of a main body or fuselage shape with attached wings and fins. The shape of the main body is described by defining its cross section at several stations along its length. Wings, fins, rotor blades, and radiating monopoles can then be attached to the main body. Although AMC was specifically designed for aircraft or helicopter shapes, it can also be applied to missiles, ships, submarines, jet inlets, automobiles, spacecraft, etc. The problem geometry and run control parameters are specified via a two character command language input format. The input command language is described and several examples which illustrate typical code inputs and outputs are also included.

  16. MELMRK 2.0: A description of computer models and results of code testing

    SciTech Connect

    Wittman, R.S. ); Denny, V.; Mertol, A. )

    1992-05-31

    An advanced version of the MELMRK computer code has been developed that provides detailed models for conservation of mass, momentum, and thermal energy within relocating streams of molten metallics during meltdown of Savannah River Site (SRS) reactor assemblies. In addition to a mechanistic treatment of transport phenomena within a relocating stream, MELMRK 2.0 retains the MOD1 capability for real-time coupling of the in-depth thermal response of participating assembly heat structure and, further, augments this capability with models for self-heating of relocating melt owing to steam oxidation of metallics and fission product decay power. As was the case for MELMRK 1.0, the MOD2 version offers state-of-the-art numerics for solving coupled sets of nonlinear differential equations. Principal features include application of multi-dimensional Newton-Raphson techniques to accelerate convergence behavior and direct matrix inversion to advance primitive variables from one iterate to the next. Additionally, MELMRK 2.0 provides logical event flags for managing the broad range of code options available for treating such features as (1) coexisting flow regimes, (2) dynamic transitions between flow regimes, and (3) linkages between heatup and relocation code modules. The purpose of this report is to provide a detailed description of the MELMRK 2.0 computer models for melt relocation. Also included are illustrative results for code testing, as well as an integrated calculation for meltdown of a Mark 31a assembly.

  17. MELMRK 2.0: A description of computer models and results of code testing

    SciTech Connect

    Wittman, R.S.; Denny, V.; Mertol, A.

    1992-05-31

    An advanced version of the MELMRK computer code has been developed that provides detailed models for conservation of mass, momentum, and thermal energy within relocating streams of molten metallics during meltdown of Savannah River Site (SRS) reactor assemblies. In addition to a mechanistic treatment of transport phenomena within a relocating stream, MELMRK 2.0 retains the MOD1 capability for real-time coupling of the in-depth thermal response of participating assembly heat structure and, further, augments this capability with models for self-heating of relocating melt owing to steam oxidation of metallics and fission product decay power. As was the case for MELMRK 1.0, the MOD2 version offers state-of-the-art numerics for solving coupled sets of nonlinear differential equations. Principal features include application of multi-dimensional Newton-Raphson techniques to accelerate convergence behavior and direct matrix inversion to advance primitive variables from one iterate to the next. Additionally, MELMRK 2.0 provides logical event flags for managing the broad range of code options available for treating such features as (1) coexisting flow regimes, (2) dynamic transitions between flow regimes, and (3) linkages between heatup and relocation code modules. The purpose of this report is to provide a detailed description of the MELMRK 2.0 computer models for melt relocation. Also included are illustrative results for code testing, as well as an integrated calculation for meltdown of a Mark 31a assembly.

  18. 3D modeling of the electron energy distribution function in negative hydrogen ion sources.

    PubMed

    Terasaki, R; Fujino, I; Hatayama, A; Mizuno, T; Inoue, T

    2010-02-01

    For optimization and accurate prediction of the amount of H-ion production in negative ion sources, analysis of the electron energy distribution function (EEDF) is necessary. We are developing a numerical code which analyzes the EEDF in the tandem-type arc-discharge source. It is a three-dimensional Monte Carlo simulation code with realistic geometry and magnetic configuration. Coulomb collisions between electrons are treated with the "binary collision" model, and collisions with hydrogen species are treated with the "null-collision" method. We applied this code to the analysis of the JAEA 10 A negative ion source. The numerical results show that the obtained EEDF is in good agreement with experimental results.
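
    The "null-collision" idea referred to above can be illustrated in a few lines: free-flight times are drawn using a constant upper-bound collision frequency, and each tentative event is kept as a real collision only with probability nu(E)/nu_max, otherwise it is a null event and the electron flies on unchanged. The collision-frequency curve and energies below are toy assumptions, not source-plasma data.

      import numpy as np

      rng = np.random.default_rng(4)

      def nu(E):                       # toy energy-dependent collision frequency (1/s)
          return 1.0e8 * E / (1.0 + E)

      NU_MAX = 1.0e8                   # must bound nu(E) over the energy range used

      def time_to_real_collision(E):
          t = 0.0
          while True:
              t += -np.log(rng.random()) / NU_MAX    # exponential free flight
              if rng.random() < nu(E) / NU_MAX:      # accept as a real collision?
                  return t                           # null collisions only add time

      for E in (0.5, 2.0, 10.0):                     # electron energy (eV)
          times = [time_to_real_collision(E) for _ in range(20_000)]
          print(f"E = {E:4.1f} eV: mean free time {np.mean(times) * 1e9:.1f} ns "
                f"(expected {1e9 / nu(E):.1f} ns)")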

  19. The Modeling of Advanced BWR Fuel Designs with the NRC Fuel Depletion Codes PARCS/PATHS

    SciTech Connect

    Ward, Andrew; Downar, Thomas J.; Xu, Y.; March-Leuba, Jose A; Thurston, Carl; Hudson, Nathanael H.; Ireland, A.; Wysocki, A.

    2015-04-22

    The PATHS (PARCS Advanced Thermal Hydraulic Solver) code was developed at the University of Michigan in support of U.S. Nuclear Regulatory Commission research to solve the steady-state, two-phase, thermal-hydraulic equations for a boiling water reactor (BWR) and to provide thermal-hydraulic feedback for BWR depletion calculations with the neutronics code PARCS (Purdue Advanced Reactor Core Simulator). The simplified solution methodology, including a three-equation drift flux formulation and an optimized iteration scheme, yields very fast run times in comparison to conventional thermal-hydraulic systems codes used in the industry, while still retaining sufficient accuracy for applications such as BWR depletion calculations. Lastly, the capability to model advanced BWR fuel designs with part-length fuel rods and heterogeneous axial channel flow geometry has been implemented in PATHS, and the code has been validated against previously benchmarked advanced core simulators as well as BWR plant and experimental data. We describe the modifications to the codes and the results of the validation in this paper.

  20. The Modeling of Advanced BWR Fuel Designs with the NRC Fuel Depletion Codes PARCS/PATHS

    DOE PAGES

    Ward, Andrew; Downar, Thomas J.; Xu, Y.; March-Leuba, Jose A; Thurston, Carl; Hudson, Nathanael H.; Ireland, A.; Wysocki, A.

    2015-04-22

    The PATHS (PARCS Advanced Thermal Hydraulic Solver) code was developed at the University of Michigan in support of U.S. Nuclear Regulatory Commission research to solve the steady-state, two-phase, thermal-hydraulic equations for a boiling water reactor (BWR) and to provide thermal-hydraulic feedback for BWR depletion calculations with the neutronics code PARCS (Purdue Advanced Reactor Core Simulator). The simplified solution methodology, including a three-equation drift flux formulation and an optimized iteration scheme, yields very fast run times in comparison to conventional thermal-hydraulic systems codes used in the industry, while still retaining sufficient accuracy for applications such as BWR depletion calculations. Lastly, the capability to model advanced BWR fuel designs with part-length fuel rods and heterogeneous axial channel flow geometry has been implemented in PATHS, and the code has been validated against previously benchmarked advanced core simulators as well as BWR plant and experimental data. We describe the modifications to the codes and the results of the validation in this paper.

  1. Gendist: An R Package for Generated Probability Distribution Models

    PubMed Central

    Abu Bakar, Shaiful Anuar; Nadarajah, Saralees; ABSL Kamarul Adzhar, Zahrul Azmir; Mohamed, Ibrahim

    2016-01-01

    In this paper, we introduce the R package gendist that computes the probability density function, the cumulative distribution function, the quantile function and generates random values for several generated probability distribution models including the mixture model, the composite model, the folded model, the skewed symmetric model and the arc tan model. These models are extensively used in the literature and the R functions provided here are flexible enough to accommodate various univariate distributions found in other R packages. We also show its applications in graphing, estimation, simulation and risk measurements. PMID:27272043
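
    For readers working outside R, the sketch below reproduces the "mixture model" case in Python (an analogue, not the gendist API): a two-component mixture with the four operations the package exposes, namely density, distribution function, quantile, and random generation. The component distributions and mixing weight are arbitrary examples.

      import numpy as np
      from scipy import stats, optimize

      w = 0.4                                              # mixing weight (illustrative)
      comp = [stats.gamma(a=2.0, scale=1.5), stats.lognorm(s=0.6, scale=4.0)]

      def pdf(x): return w * comp[0].pdf(x) + (1 - w) * comp[1].pdf(x)
      def cdf(x): return w * comp[0].cdf(x) + (1 - w) * comp[1].cdf(x)

      def ppf(q):                                          # quantile via root finding
          return optimize.brentq(lambda x: cdf(x) - q, 1e-9, 1e3)

      def rvs(size, rng=np.random.default_rng(5)):         # random generation
          pick = rng.random(size) < w
          out = comp[1].rvs(size=size, random_state=rng)
          out[pick] = comp[0].rvs(size=int(pick.sum()), random_state=rng)
          return out

      x = rvs(10_000)
      print(f"sample mean {x.mean():.2f}, model median {ppf(0.5):.2f}, P(X < 5) = {cdf(5.0):.3f}")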

  2. Helioseismic Constraints on New Solar Models from the MoSEC Code

    NASA Technical Reports Server (NTRS)

    Elliott, J. R.

    1998-01-01

    Evolutionary solar models are computed using a new stellar evolution code, MOSEC (Modular Stellar Evolution Code). This code has been designed with carefully controlled truncation errors in order to achieve a precision which reflects the increasingly accurate determination of solar interior structure by helioseismology. A series of models is constructed to investigate the effects of the choice of equation of state (OPAL or MHD-E, the latter being a version of the MHD equation of state recalculated by the author), the inclusion of helium and heavy-element settling and diffusion, and the inclusion of a simple model of mixing associated with the solar tachocline. The neutrino flux predictions are discussed, while the sound speed of the computed models is compared to that of the sun via the latest inversion of SOI-NMI p-mode frequency data. The comparison between models calculated with the OPAL and MHD-E equations of state is particularly interesting because the MHD-E equation of state includes relativistic effects for the electrons, whereas neither MHD nor OPAL do. This has a significant effect on the sound speed of the computed model, worsening the agreement with the solar sound speed. Using the OPAL equation of state and including the settling and diffusion of helium and heavy elements produces agreement in sound speed with the helioseismic results to within about ±0.2%; the inclusion of mixing slightly improves the agreement.

  3. Modelling and interpreting spectral energy distributions of galaxies with BEAGLE

    NASA Astrophysics Data System (ADS)

    Chevallard, Jacopo; Charlot, Stéphane

    2016-10-01

    We present a new-generation tool to model and interpret spectral energy distributions (SEDs) of galaxies, which incorporates in a consistent way the production of radiation and its transfer through the interstellar and intergalactic media. This flexible tool, named BEAGLE (for BayEsian Analysis of GaLaxy sEds), allows one to build mock galaxy catalogues as well as to interpret any combination of photometric and spectroscopic galaxy observations in terms of physical parameters. The current version of the tool includes versatile modelling of the emission from stars and photoionized gas, attenuation by dust and accounting for different instrumental effects, such as spectroscopic flux calibration and line spread function. We show a first application of the BEAGLE tool to the interpretation of broad-band SEDs of a published sample of ˜ 10^4 galaxies at redshifts 0.1 ≲ z ≲ 8. We find that the constraints derived on photometric redshifts using this multipurpose tool are comparable to those obtained using public, dedicated photometric-redshift codes and quantify this result in a rigorous statistical way. We also show how the post-processing of BEAGLE output data with the PYTHON extension PYP-BEAGLE allows the characterization of systematic deviations between models and observations, in particular through posterior predictive checks. The modular design of the BEAGLE tool allows easy extensions to incorporate, for example, the absorption by neutral galactic and circumgalactic gas, and the emission from an active galactic nucleus, dust and shock-ionized gas. Information about public releases of the BEAGLE tool will be maintained on http://www.jacopochevallard.org/beagle.

  4. Modeling the Delivery Physiology of Distributed Learning Systems.

    ERIC Educational Resources Information Center

    Paquette, Gilbert; Rosca, Ioan

    2003-01-01

    Discusses instructional delivery models and their physiology in distributed learning systems. Highlights include building delivery models; types of delivery models, including distributed classroom, self-training on the Web, online training, communities of practice, and performance support systems; and actors (users) involved, including experts,…

  5. Analysis Model for Domestic Hot Water Distribution Systems: Preprint

    SciTech Connect

    Maguire, J.; Krarti, M.; Fang, X.

    2011-11-01

    A thermal model was developed to estimate the energy losses from prototypical domestic hot water (DHW) distribution systems for homes. The developed model, using the TRNSYS simulation software, allows researchers and designers to better evaluate the performance of hot water distribution systems in homes. Modeling results were compared with past experimental study results and showed good agreement.
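
    A back-of-envelope version of the kind of loss estimate such a model produces, using the standard steady-state pipe relation T_out = T_amb + (T_in - T_amb) * exp(-U*A / (m_dot*cp)); every input value below is an illustrative assumption, not one of the study's prototype systems.

      import math

      T_in, T_amb = 55.0, 20.0        # deg C: supply and surrounding temperature
      L, D = 15.0, 0.019              # pipe length (m) and outer diameter (m)
      U = 5.0                         # overall loss coefficient, W/(m^2 K), insulated pipe
      m_dot = 1000.0 * 6.0 / 60.0 / 1000.0   # 6 L/min draw converted to kg/s
      cp = 4186.0                     # J/(kg K)

      A = math.pi * D * L
      T_out = T_amb + (T_in - T_amb) * math.exp(-U * A / (m_dot * cp))
      print(f"delivered temperature: {T_out:.1f} C "
            f"(loss about {(T_in - T_out) * m_dot * cp:.0f} W during the draw)")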

  6. Implementation of an anisotropic turbulence model in the COMMIX- 1C/ATM computer code

    SciTech Connect

    Bottoni, M.; Chang, F.C.

    1993-06-01

    The computer code COMMIX-1C/ATM, which describes single-phase, three-dimensional transient thermofluiddynamic problems, has provided the framework for the extension of the standard k-ε turbulence model to a six-equation model with additional transport equations for the turbulence heat fluxes and the variance of temperature fluctuations. The new model, which allows simulation of anisotropic turbulence in stratified shear flows, is referred to as the Anisotropic Turbulence Model (ATM) and has been verified with numerical computations of stable and unstable stratified shear flow between parallel plates.

  7. Rate quantization modeling for rate control of MPEG video coding and recording

    NASA Astrophysics Data System (ADS)

    Ding, Wei; Liu, Bede

    1995-04-01

    For MPEG video coding and recording applications, it is important to select quantization parameters at the slice and macroblock levels to produce images of nearly constant quality for a given bit budget. A well-designed rate control strategy can improve overall image quality for video transmission over a constant-bit-rate channel and fulfill the editing requirements of video recording, where a certain number of new pictures are encoded to replace consecutive frames on the storage media using at most the same number of bits. In this paper, we develop a feedback method based on a rate-quantization model that adapts to changes in picture activity. The model is used for quantization parameter selection at the frame and slice levels. The extra computation needed is modest. Experiments show the accuracy of the model and the effectiveness of the proposed rate control method. A new bit allocation algorithm is then proposed for MPEG video coding.
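
    The record does not give the model itself; as a hedged illustration of the general idea, the sketch below uses a common first-order rate-quantization assumption, R(Q) ≈ X/Q, inside a feedback loop that re-estimates the complexity X from the bits actually produced and picks the next quantizer from a per-slice bit budget.

    ```python
    # Illustrative feedback rate control with a first-order rate-quantization
    # model R(Q) ~ X / Q; this is a common textbook assumption, not the paper's
    # exact model.  X is re-estimated after each coded slice.
    def choose_q(target_bits, x_model, q_min=1, q_max=31):
        q = x_model / max(target_bits, 1)
        return min(max(int(round(q)), q_min), q_max)

    def update_model(actual_bits, q_used):
        return actual_bits * q_used       # new complexity estimate X = R * Q

    # Hypothetical encoding loop over slices with a fixed per-slice budget.
    x = 400_000                            # initial complexity estimate
    for actual in [19_500, 22_000, 18_200, 25_000]:   # bits produced per slice
        q = choose_q(target_bits=20_000, x_model=x)
        x = update_model(actual, q)
        print(f"Q={q:2d}, updated X={x}")
    ```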

  8. A Mathematical Model and MATLAB Code for Muscle-Fluid-Structure Simulations.

    PubMed

    Battista, Nicholas A; Baird, Austin J; Miller, Laura A

    2015-11-01

    This article provides models and code for numerically simulating muscle-fluid-structure interactions (FSIs). This work was presented as part of the symposium on Leading Students and Faculty to Quantitative Biology through Active Learning at the society-wide meeting of the Society for Integrative and Comparative Biology in 2015. Muscle mechanics and simple mathematical models to describe the forces generated by muscular contractions are introduced in most biomechanics and physiology courses. Often, however, the models are derived for simplifying cases such as isometric or isotonic contractions. In this article, we present a simple model of the force generated through active contraction of muscles. The muscles' forces are then used to drive the motion of flexible structures immersed in a viscous fluid. An example of an elastic band immersed in a fluid is first presented to illustrate a fully-coupled FSI in the absence of any external driving forces. In the second example, we present a valveless tube with model muscles that drive the contraction of the tube. We provide a brief overview of the numerical method used to generate these results. We also include as Supplementary Material a MATLAB code to generate these results. The code was written for flexibility so as to be easily modified to many other biological applications for educational purposes.
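
    As a minimal illustration of the kind of muscle force model the article introduces (a Python sketch, not the authors' MATLAB code), the function below combines an activation level with simple length-tension and force-velocity factors; in the article, forces of this form drive flexible structures immersed in a viscous fluid.

    ```python
    import numpy as np

    def muscle_force(activation, length, velocity,
                     f_max=1.0, l_opt=1.0, v_max=10.0):
        """Hill-type active muscle force sketch:
        F = a * F_max * f_L(length) * f_V(velocity).
        Parameter values are illustrative placeholders."""
        f_length = np.exp(-((length - l_opt) / (0.45 * l_opt)) ** 2)   # length-tension
        f_velocity = max(0.0, 1.0 - velocity / v_max)                  # force-velocity
        return activation * f_max * f_length * f_velocity

    # Example: 80% activation, slightly stretched fiber, moderate shortening speed.
    print(muscle_force(activation=0.8, length=1.05, velocity=2.0))
    ```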

  10. Coding theory based models for protein translation initiation in prokaryotic organisms.

    SciTech Connect

    May, Elebeoba Eni; Bitzer, Donald L. (North Carolina State University, Raleigh, NC); Rosnick, David I. (North Carolina State University, Raleigh, NC); Vouk, Mladen A.

    2003-03-01

    Our research explores the feasibility of using communication theory, specifically error control (EC) coding theory, for quantitatively modeling the protein translation initiation mechanism. The messenger RNA (mRNA) of Escherichia coli K-12 is modeled as a noisy (errored) encoded signal and the ribosome as a minimum Hamming distance decoder, where the 16S ribosomal RNA (rRNA) serves as a template for generating a set of valid codewords (the codebook). We tested the E. coli based coding models on 5' untranslated leader sequences of prokaryotic organisms of varying taxonomical relation to E. coli, including Salmonella typhimurium LT2, Bacillus subtilis, and Staphylococcus aureus Mu50. The model identified regions on the 5' untranslated leader where the minimum Hamming distance values of translated mRNA sub-sequences and non-translated genomic sequences differ the most. These regions correspond to the Shine-Dalgarno domain and the non-random domain. Applying the EC coding-based models to B. subtilis and S. aureus Mu50 yielded results similar to those for E. coli K-12. Contrary to our expectations, the behavior of S. typhimurium LT2, the organism most closely related taxonomically to E. coli, resembled that of the non-translated sequence group.
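
    A toy version of the decoding step is sketched below: each window of a 5' leader sequence is compared against a codebook by minimum Hamming distance. The codebook here is built around a Shine-Dalgarno-like motif purely for illustration; the actual model derives its valid codewords from the 16S rRNA.

    ```python
    from itertools import product

    def hamming(a, b):
        """Number of positions at which two equal-length strings differ."""
        return sum(x != y for x, y in zip(a, b))

    def min_distance_profile(leader, codebook, window=11):
        """Minimum Hamming distance between each leader window and the codebook;
        low values flag candidate ribosome-binding regions (illustrative only)."""
        return [min(hamming(leader[i:i + window], cw) for cw in codebook)
                for i in range(len(leader) - window + 1)]

    # Hypothetical codebook: a Shine-Dalgarno-like prefix followed by any 5 bases.
    codebook = ["AGGAGG" + "".join(p) for p in product("ACGU", repeat=5)]
    leader = "UUAACUAGGAGGAAUUACCAUGGCUAAA"
    print(min_distance_profile(leader, codebook))
    ```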

  11. SCDAP/RELAP5/MOD 3.1 code manual: Damage progression model theory. Volume 2

    SciTech Connect

    Davis, K.L.; Allison, C.M.; Berna, G.A.

    1995-06-01

    The SCDAP/RELAP5 code has been developed for best-estimate transient simulation of light water reactor coolant systems during a severe accident. The code models the coupled behavior of the reactor coolant system, the core, and the fission products released during a severe accident transient, as well as large- and small-break loss-of-coolant accidents and operational transients such as anticipated transient without SCRAM, loss of offsite power, loss of feedwater, and loss of flow. A generic modeling approach is used that permits as much of a particular system to be modeled as necessary. Control system and secondary system components are included to permit modeling of plant controls, turbines, condensers, and secondary feedwater conditioning systems. This volume contains detailed descriptions of the severe accident models and correlations. It provides the user with the underlying assumptions and simplifications used to generate and implement the basic equations into the code, so that an intelligent assessment of the applicability and accuracy of the resulting calculation can be made.

  12. Wind turbine control systems: Dynamic model development using system identification and the fast structural dynamics code

    SciTech Connect

    Stuart, J.G.; Wright, A.D.; Butterfield, C.P.

    1996-10-01

    Mitigating the effects of damaging wind turbine loads and responses extends the lifetime of the turbine and, consequently, reduces the associated Cost of Energy (COE). Active control of aerodynamic devices is one option for achieving wind turbine load mitigation. Generally speaking, control system design and analysis requires a reasonable dynamic model of the "plant" (i.e., the system being controlled). This paper extends the wind turbine aileron control research, previously conducted at the National Wind Technology Center (NWTC), by presenting a more detailed development of the wind turbine dynamic model. In prior research, active aileron control designs were implemented in an existing wind turbine structural dynamics code, FAST (Fatigue, Aerodynamics, Structures, and Turbulence). In this paper, the FAST code is used, in conjunction with system identification, to generate a wind turbine dynamic model for use in active aileron control system design. The FAST code is described and an overview of the system identification technique is presented. An aileron control case study is used to demonstrate this modeling technique. The results of the case study are then used to propose ideas for generalizing this technique for creating dynamic models for other wind turbine control applications.
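
    System identification of the kind described boils down to fitting a low-order dynamic model to simulated input-output data. The sketch below is a hedged, generic illustration (a first-order ARX fit by least squares on synthetic data), not the FAST-based procedure or model structure used in the paper.

    ```python
    import numpy as np

    # Fit a first-order ARX model  y[k] = a*y[k-1] + b*u[k-1]  by least squares.
    # In the paper the data would come from FAST simulations (aileron command vs.
    # turbine response); here synthetic data from a known system is a stand-in.
    rng = np.random.default_rng(0)
    a_true, b_true = 0.85, 0.3
    u = rng.normal(size=500)                       # input signal
    y = np.zeros_like(u)
    for k in range(1, len(u)):
        y[k] = a_true * y[k - 1] + b_true * u[k - 1] + 0.01 * rng.normal()

    # Least-squares estimate of (a, b) from the regressor matrix [y[k-1], u[k-1]].
    phi = np.column_stack([y[:-1], u[:-1]])
    theta, *_ = np.linalg.lstsq(phi, y[1:], rcond=None)
    print(f"estimated a={theta[0]:.3f}, b={theta[1]:.3f}")
    ```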

  13. Numerical modeling of immiscible two-phase flow in micro-models using a commercial CFD code

    SciTech Connect

    Crandall, Dustin; Ahmadia, Goodarz; Smith, Duane H.

    2009-01-01

    Off-the-shelf CFD software is being used to analyze everything from flow over airplanes to lab-on-a-chip designs. So, how accurately can two-phase immiscible flow be modeled flowing through some small-scale models of porous media? We evaluate the capability of the CFD code FLUENT™ to model immiscible flow in micro-scale, bench-top stereolithography models. By comparing the flow results to experimental models, we show that accurate 3D modeling is possible.

  14. Prediction of Parameters Distribution of Upward Boiling Two-Phase Flow With Two-Fluid Models

    SciTech Connect

    Yao, Wei; Morel, Christophe

    2002-07-01

    In this paper, a multidimensional two-fluid model with additional turbulence k-ε equations is used to predict the two-phase parameter distributions in Freon R12 boiling flow. The 3D module of the CATHARE code is used for numerical calculation. The DEBORA experiment has been chosen to evaluate our models. The radial profiles of the outlet parameters were measured by means of an optical probe. The comparison of the radial profiles of void fraction, liquid temperature, gas velocity and volumetric interfacial area at the end of the heated section shows that the multidimensional two-fluid model with proper constitutive relations can yield reasonably predicted results in boiling conditions. Sensitivity tests show that the turbulent dispersion force, which involves the void fraction gradient, plays an important role in determining the void fraction distribution, and the turbulence eddy viscosity is a significant factor influencing the liquid temperature distribution. (authors)
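
    The turbulent dispersion force highlighted in the sensitivity tests is commonly written as proportional to the void-fraction gradient. The sketch below evaluates a force of that generic form on an imposed radial void profile; the coefficient and profile are placeholders, not the CATHARE closure.

    ```python
    import numpy as np

    # Illustrative turbulent dispersion force  F_TD = -C_TD * rho_l * k * d(alpha)/dr,
    # which acts down the void-fraction gradient and smooths the radial profile.
    C_TD = 0.1
    rho_l = 1300.0                            # liquid density (R12-like), kg/m^3
    k = 0.02                                  # turbulent kinetic energy, m^2/s^2
    r = np.linspace(0.0, 0.0096, 25)          # radial positions, m
    alpha = 0.05 + 0.4 * (r / r[-1]) ** 4     # imposed wall-peaked void profile
    dalpha_dr = np.gradient(alpha, r)
    f_td = -C_TD * rho_l * k * dalpha_dr      # force per unit volume, N/m^3
    print(f_td[:5])
    ```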

  15. Determination of the statistical distributions of model parameters for probabilistic risk assessment

    SciTech Connect

    Fields, D.E.; Glandon, S.R.

    1981-01-01

    Successful probabilistic risk assessment depends heavily on knowledge of the distributions of model parameters. We have developed the TERPED computer code, a versatile methodology for determining with what confidence a parameter set may be considered to have a normal or lognormal frequency distribution. Several measures of central tendency are computed. Other options include computation of the chi-square statistic, the Kolmogorov-Smirnov non-parametric statistic, and Pearson's correlation coefficient. Cumulative probability plots are produced either in high resolution (pen-and-ink or film) or in printer-plot form.
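
    The kind of test TERPED performs can be illustrated with standard library calls (a sketch only; TERPED's internals are not reproduced here): check whether a sample of parameter values is better described by a normal or a lognormal distribution using the Kolmogorov-Smirnov statistic.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    sample = rng.lognormal(mean=0.5, sigma=0.4, size=200)   # hypothetical parameter data

    # KS test against a normal fit of the raw data, and against a normal fit of
    # the log-transformed data (i.e., a lognormal fit of the raw data).
    ks_norm = stats.kstest(sample, "norm", args=(sample.mean(), sample.std(ddof=1)))
    log_s = np.log(sample)
    ks_lognorm = stats.kstest(log_s, "norm", args=(log_s.mean(), log_s.std(ddof=1)))

    print(f"KS vs normal:    D={ks_norm.statistic:.3f}, p={ks_norm.pvalue:.3f}")
    print(f"KS vs lognormal: D={ks_lognorm.statistic:.3f}, p={ks_lognorm.pvalue:.3f}")
    ```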

  16. Distributed generation capabilities of the national energy modeling system

    SciTech Connect

    LaCommare, Kristina Hamachi; Edwards, Jennifer L.; Marnay, Chris

    2003-01-01

    This report describes Berkeley Lab's exploration of how the National Energy Modeling System (NEMS) models distributed generation (DG) and presents possible approaches for improving how DG is modeled. The on-site electric generation capability has been available since the AEO2000 version of NEMS. Berkeley Lab has previously completed research on distributed energy resources (DER) adoption at individual sites and has developed a DER Customer Adoption Model called DER-CAM. Given interest in this area, Berkeley Lab set out to understand how NEMS models small-scale on-site generation to assess how adequately DG is treated in NEMS, and to propose improvements or alternatives. The goal is to determine how well NEMS models the factors influencing DG adoption and to consider alternatives to the current approach. Most small-scale DG adoption takes place in the residential and commercial modules of NEMS. Investment in DG ultimately offsets purchases of electricity, which also eliminates the losses associated with transmission and distribution (T&D). If the DG technology that is chosen is photovoltaics (PV), NEMS assumes renewable energy consumption replaces the energy input to electric generators. If the DG technology is fuel consuming, consumption of fuel in the electric utility sector is replaced by residential or commercial fuel consumption. The waste heat generated from thermal technologies can be used to offset the water heating and space heating energy uses, but there is no thermally activated cooling capability. This study consists of a review of model documentation and a paper by EIA staff, a series of sensitivity runs performed by Berkeley Lab that exercise selected DG parameters in the AEO2002 version of NEMS, and a scoping effort of possible enhancements and alternatives to NEMS current DG capabilities. In general, the treatment of DG in NEMS is rudimentary. The penetration of DG is determined by an economic cash-flow analysis that determines adoption based on the
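
    As a generic, hypothetical illustration of a cash-flow adoption screen of the sort described (placeholder numbers, not NEMS inputs or logic), the sketch below estimates the simple payback of a DG installation from avoided electricity purchases, including avoided T&D losses.

    ```python
    def payback_years(installed_cost, annual_kwh, retail_price, td_loss_factor=0.07,
                      annual_om=0.0, horizon=30):
        """Years until cumulative savings (avoided purchases plus avoided T&D
        losses, minus O&M) cover the installed cost; None if never within horizon."""
        annual_savings = annual_kwh * retail_price * (1 + td_loss_factor) - annual_om
        cumulative = 0.0
        for year in range(1, horizon + 1):
            cumulative += annual_savings
            if cumulative >= installed_cost:
                return year
        return None

    # Example with placeholder values for a small PV-like system.
    print(payback_years(installed_cost=12_000, annual_kwh=5_000, retail_price=0.12))
    ```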

  17. A unified model for the spatial and mass distribution of subhaloes

    NASA Astrophysics Data System (ADS)

    Han, Jiaxin; Cole, Shaun; Frenk, Carlos S.; Jing, Yipeng

    2016-04-01

    N-body simulations suggest that the substructures that survive inside dark matter haloes follow universal distributions in mass and radial number density. We demonstrate that a simple analytical model can explain these subhalo distributions as resulting from tidal stripping which increasingly reduces the mass of subhaloes with decreasing halocentric distance. As a starting point, the spatial distribution of subhaloes of any given infall mass is shown to be largely indistinguishable from the overall mass distribution of the host halo. Using a physically motivated statistical description of the amount of mass stripped from individual subhaloes, the model fully describes the joint distribution of subhaloes in final mass, infall mass and radius. As a result, it can be used to predict several derived distributions involving combinations of these quantities including, but not limited to, the universal subhalo mass function, the subhalo spatial distribution, the gravitational lensing profile, the dark matter annihilation radiation profile and boost factor. This model clarifies a common confusion when comparing the spatial distributions of galaxies and subhaloes, the so-called anti-bias, as a simple selection effect. We provide a PYTHON code SUBGEN for populating haloes with subhaloes at http://icc.dur.ac.uk/data/.
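
    A toy rendering of the model's ingredients (not the SUBGEN code) is sketched below: subhalo infall masses are drawn from a power-law mass function, positions follow an assumed host profile, and a larger fraction of mass is stripped at smaller halocentric radii.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_sub = 1000

    # Infall masses from dN/dm ~ m^slope between m_lo and m_hi (inverse-CDF sampling).
    slope, m_lo, m_hi = -1.9, 1e8, 1e11
    u = rng.uniform(size=n_sub)
    m_infall = (m_lo**(slope + 1)
                + u * (m_hi**(slope + 1) - m_lo**(slope + 1)))**(1.0 / (slope + 1))

    # Radii (in units of R_vir) drawn from an assumed host mass profile.
    r = rng.uniform(size=n_sub) ** (1 / 2.5)

    # Tidal stripping: the retained mass fraction grows with halocentric distance,
    # with scatter; constants are illustrative, not the paper's fitted values.
    retained = np.clip(r, 0.01, 1.0) * rng.uniform(0.3, 1.0, size=n_sub)
    m_final = m_infall * retained
    print(f"median stripped fraction: {1 - np.median(retained):.2f}")
    ```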

  18. Motion-adaptive model-assisted compatible coding with spatiotemporal scalability

    NASA Astrophysics Data System (ADS)

    Lee, JaeBeom; Eleftheriadis, Alexandros

    1997-01-01

    We introduce the concept of motion-adaptive spatio-temporal model-assisted compatible (MA-STMAC) coding, a technique to selectively encode areas of different importance to the human eye in terms of space and time in moving images, taking object motion into account. The previous STMAC approach was proposed based on the fact that human 'eye contact' and 'lip synchronization' are very important in person-to-person communication. Several areas, including the eyes and lips, need different types of quality, since different areas have different perceptual significance to human observers. The approach provides a better rate-distortion tradeoff than conventional image coding techniques based on MPEG-1, MPEG-2, H.261, as well as H.263. STMAC coding is applied on top of an encoder, taking full advantage of its core design. Model motion tracking in our previous STMAC approach was not automatic. The proposed MA-STMAC coding considers the motion of the human face within the STMAC concept using automatic area detection. Experimental results are given using ITU-T H.263, addressing very low bit-rate compression.
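
    The idea of spending bits where the viewer looks can be illustrated with a toy quantizer map (hypothetical values, not the H.263-based MA-STMAC allocation): macroblocks inside an automatically tracked face region receive a finer quantization parameter than the background.

    ```python
    import numpy as np

    # QCIF-sized macroblock grid (176x144 pixels -> 11x9 macroblocks of 16x16).
    mb_rows, mb_cols = 9, 11
    face_mask = np.zeros((mb_rows, mb_cols), dtype=bool)
    face_mask[2:6, 4:8] = True            # hypothetical tracked face region

    qp = np.full((mb_rows, mb_cols), 28)  # coarse quantizer for the background
    qp[face_mask] = 14                    # finer quantizer for face macroblocks
    print(qp)
    ```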

  19. Beyond the Business Model: Incentives for Organizations to Publish Software Source Code

    NASA Astrophysics Data System (ADS)

    Lindman, Juho; Juutilainen, Juha-Pekka; Rossi, Matti

    The software stack opened under Open Source Software (OSS) licenses is growing rapidly. Commercial actors have released considerable amounts of previously proprietary source code. These actions raise the question of why companies choose a strategy based on giving away software assets. Research on the outbound OSS approach has tried to answer this question with the concept of the “OSS business model”. When studying the reasons for code release, we have observed that the business model concept is too generic to capture the many incentives organizations have. Conversely, in this paper we investigate empirically what the companies’ incentives are by means of an exploratory case study of three organizations in different stages of their code release. Our results indicate that the companies aim to promote standardization, obtain development resources, gain cost savings, improve the quality of software, increase the trustworthiness of software, or steer OSS communities. We conclude that future research on outbound OSS could benefit from focusing on the heterogeneous incentives for code release rather than on revenue models.

  20. SOCIAL ADVERSITY, GENETIC VARIATION, STREET CODE, AND AGGRESSION: A GENETICALLY INFORMED MODEL OF VIOLENT BEHAVIOR

    PubMed Central

    Simons, Ronald L.; Lei, Man Kit; Stewart, Eric A.; Brody, Gene H.; Beach, Steven R. H.; Philibert, Robert A.; Gibbons, Frederick X.

    2011-01-01

    Elijah Anderson (1997, 1999) argues that exposure to extreme community disadvantage, residing in “street” families, and persistent discrimination encourage many African Americans to develop an oppositional culture that he labels the “code of the street.” Importantly, while the adverse conditions described by Anderson increase the probability of adopting the code of the street, most of those exposed to these adverse conditions do not do so. The present study examines the extent to which genetic variation accounts for these differences. Although the diathesis-stress model guides most genetically informed behavior science, the present study investigates hypotheses derived from the differential susceptibility perspective (Belsky & Pluess, 2009). This model posits that some people are genetically predisposed to be more susceptible to environmental influence than others. An important implication of the model is that those persons most vulnerable to adverse social environments are the same ones who reap the most benefit from environmental support. Using longitudinal data from a sample of several hundred African American males, we examined the manner in which variants in three genes - 5-HTT, DRD4, and MAOA - modulate the effect of community and family adversity on adoption of the street code and aggression. We found strong support for the differential susceptibility perspective. When the social environment was adverse, individuals with these genetic variants manifested more commitment to the street code and aggression than those with other genotypes, whereas when adversity was low they demonstrated less commitment to the street code and aggression than those with other genotypes. PMID:23785260