Science.gov

Sample records for distributed coding model

  1. RHOCUBE: 3D density distributions modeling code

    NASA Astrophysics Data System (ADS)

    Nikutta, Robert; Agliozzo, Claudia

    2016-11-01

    RHOCUBE models 3D density distributions on a discrete Cartesian grid and their integrated 2D maps. It can be used for a range of applications, including modeling the electron number density in LBV shells and computing the emission measure. The RHOCUBE Python package provides several 3D density distributions, including a powerlaw shell, truncated Gaussian shell, constant-density torus, dual cones, and spiralling helical tubes, and can accept additional distributions. RHOCUBE provides convenient methods for shifts and rotations in 3D, and if necessary, an arbitrary number of density distributions can be combined into the same model cube and the integration ∫ dz performed through the joint density field.
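
    As a rough illustration of the kind of computation described (not RHOCUBE's actual API; the grid size, shell radius and width below are invented), the following Python sketch samples a Gaussian shell on a Cartesian grid and integrates it along z to produce a 2D map:

        import numpy as np

        # Sample a Gaussian shell rho(r) on a Cartesian grid (illustrative
        # parameters, not RHOCUBE defaults).
        n = 65
        ax = np.linspace(-1.0, 1.0, n)
        x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
        r = np.sqrt(x**2 + y**2 + z**2)
        r0, sigma = 0.6, 0.1                        # shell radius and width
        rho = np.exp(-0.5 * ((r - r0) / sigma)**2)

        # Integrate along the line of sight: the "integration through the
        # joint density field" step, here a simple Riemann sum over z.
        dz = ax[1] - ax[0]
        image = rho.sum(axis=2) * dz                # 2D map, shape (n, n)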

  2. Distributed container failure models for the DUST-MS computer code.

    SciTech Connect

    Sullivan, T.; De Lemos, F.

    2001-02-24

    Improvements to the DUST-MS computer code have been made that permit simulation of distributed container failure rates. The new models permit instant failure of all containers within a computational volume, uniform failure of these containers over time, or a normal distribution in container failures. Incorporation of a distributed failure model requires wasteform releases to be calculated using a convolution integral. In addition, the models permit a unique time of emplacement for each modeled container and allow a fraction of the containers to fail at emplacement. Implementation of these models, verification testing, and an example problem comparing releases from a wasteform with a two-species decay chain as a function of failure distribution are presented in the paper.
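
    The convolution-integral step can be sketched in a few lines of Python: the total release rate is the container failure-time distribution f(tau) convolved with a per-container release function g(t - tau). All distributions and rate constants below are invented for illustration and are not DUST-MS parameters.

        import numpy as np

        t = np.linspace(0.0, 100.0, 1001)   # time grid (years)
        dt = t[1] - t[0]

        # Normally distributed container failures (illustrative mean/width).
        mu, sd = 40.0, 8.0
        f = np.exp(-0.5 * ((t - mu) / sd)**2) / (sd * np.sqrt(2.0 * np.pi))

        # Per-container release rate after failure (illustrative constant).
        lam = 0.05
        g = lam * np.exp(-lam * t)

        # R(t) = integral of f(tau) * g(t - tau) d tau
        release = np.convolve(f, g)[: t.size] * dt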

  3. Performance Modeling and Measurement of Parallelized Code for Distributed Shared Memory Multiprocessors

    NASA Technical Reports Server (NTRS)

    Waheed, Abdul; Yan, Jerry

    1998-01-01

    This paper presents a model to evaluate the performance and overhead of parallelizing sequential code using compiler directives for multiprocessing on distributed shared memory (DSM) systems. With the increasing popularity of shared-address-space architectures, it is essential to understand their performance impact on programs that benefit from shared-memory multiprocessing. We present a simple model to characterize the performance of programs that are parallelized using compiler directives for shared-memory multiprocessing. We parallelized the sequential implementation of the NAS benchmarks using native Fortran77 compiler directives for an Origin2000, which is a DSM system based on a cache-coherent Non-Uniform Memory Access (ccNUMA) architecture. We report measurement-based performance of these parallelized benchmarks from four perspectives: efficacy of the parallelization process; scalability; parallelization overhead; and comparison with hand-parallelized and -optimized versions of the same benchmarks. Our results indicate that sequential programs can conveniently be parallelized for DSM systems using compiler directives, but realizing performance gains as predicted by the performance model depends primarily on minimizing architecture-specific data locality overhead.
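
    The flavor of such a performance model can be conveyed with an Amdahl-style estimate extended by a per-processor overhead term; this is a generic textbook sketch, not the paper's actual model, and the fractions below are invented.

        def predicted_speedup(p, serial_frac, overhead_frac=0.0):
            """Amdahl-style speedup with an additive parallelization
            overhead that grows with the processor count p."""
            t_parallel = serial_frac + (1.0 - serial_frac) / p + overhead_frac * p
            return 1.0 / t_parallel

        for p in (1, 2, 4, 8, 16, 32):
            print(p, round(predicted_speedup(p, 0.05, 0.001), 2))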

  4. UNIX code management and distribution

    SciTech Connect

    Hung, T.; Kunz, P.F.

    1992-09-01

    We describe a code management and distribution system based on tools freely available for UNIX systems. At the master site, version control is managed with CVS, which is a layer on top of RCS, and distribution is done via NFS-mounted file systems. At remote sites, small modifications to CVS provide for interactive transactions with the CVS system at the master site, such that remote developers are true peers in the code development process.

  5. Model Children's Code.

    ERIC Educational Resources Information Center

    New Mexico Univ., Albuquerque. American Indian Law Center.

    The Model Children's Code was developed to provide a legally correct model code that American Indian tribes can use to enact children's codes that fulfill their legal, cultural and economic needs. Code sections cover the court system, jurisdiction, juvenile offender procedures, minor-in-need-of-care, and termination. Almost every Code section is…

  6. Distribution Coding in the Visual Pathway

    PubMed Central

    Sanderson, A. C.; Kozak, W. M.; Calvert, T. W.

    1973-01-01

    Although a variety of types of spike interval histograms have been reported, little attention has been given to the spike interval distribution as a neural code and to how different distributions are transmitted through neural networks. In this paper we present experimental results showing spike interval histograms recorded from retinal ganglion cells of the cat. These results exhibit a clear correlation between spike interval distribution and stimulus condition at the retinal ganglion cell level. The averaged mean rates of the cells studied were nearly the same in light as in darkness whereas the spike interval histograms were much more regular in light than in darkness. We present theoretical models which illustrate how such a distribution coding at the retinal level could be “interpreted” or recorded at some higher level of the nervous system such as the lateral geniculate nucleus. Interpretation is an essential requirement of a neural code which has often been overlooked in modeling studies. Analytical expressions are derived describing the role of distribution coding in determining the transfer characteristics of a simple interaction model and of a lateral inhibition network. Our work suggests that distribution coding might be interpreted by simply interconnected neural networks such as relay cell networks, in general, and the primary thalamic sensory nuclei in particular. PMID:4697235
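
    A toy simulation illustrates the core observation: two spike trains with the same mean rate can have very different interval statistics. The gamma/Poisson choice and all numbers below are illustrative, not the paper's data.

        import numpy as np

        rng = np.random.default_rng(0)
        rate = 20.0                                 # spikes per second

        # Regular (gamma, order 5) and irregular (Poisson) trains with the
        # same mean interspike interval 1/rate.
        isi_regular = rng.gamma(5.0, 1.0 / (5.0 * rate), size=5000)
        isi_irregular = rng.exponential(1.0 / rate, size=5000)

        bins = np.linspace(0.0, 0.25, 51)
        h_reg, _ = np.histogram(isi_regular, bins)
        h_irr, _ = np.histogram(isi_irregular, bins)
        # h_reg is peaked near 1/rate; h_irr decays monotonically.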

  7. The triple distribution of codes and ordered codes

    PubMed Central

    Trinker, Horst

    2011-01-01

    We study the distribution of triples of codewords of codes and ordered codes. Schrijver [A. Schrijver, New code upper bounds from the Terwilliger algebra and semidefinite programming, IEEE Trans. Inform. Theory 51 (8) (2005) 2859–2866] used the triple distribution of a code to establish a bound on the number of codewords based on semidefinite programming. In the first part of this work, we generalize this approach for ordered codes. In the second part, we consider linear codes and linear ordered codes and present a MacWilliams-type identity for the triple distribution of their dual code. Based on the non-negativity of this linear transform, we establish a linear programming bound and conclude with a table of parameters for which this bound yields better results than the standard linear programming bound. PMID:22505770
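
    For intuition, the triple distribution of a small binary code can be computed by brute force: count ordered triples of codewords by their three pairwise Hamming distances. The toy code below is invented; the paper's definitions (and the ordered-code generalization) are more refined.

        from itertools import product

        def hamming(a, b):
            return sum(x != y for x, y in zip(a, b))

        code = ["0000", "0111", "1011", "1101"]     # toy code
        triples = {}
        for u, v, w in product(code, repeat=3):
            key = (hamming(u, v), hamming(u, w), hamming(v, w))
            triples[key] = triples.get(key, 0) + 1
        print(triples)   # counts of distance triples (i, j, k)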

  8. Distributed transform coding via source-splitting

    NASA Astrophysics Data System (ADS)

    Yahampath, Pradeepa

    2012-12-01

    Transform coding (TC) is one of the best known practical methods for quantizing high-dimensional vectors. In this article, a practical approach to distributed TC of jointly Gaussian vectors is presented. This approach, referred to as source-split distributed transform coding (SP-DTC), can be used to easily implement two-terminal transform codes for any given rate-pair. The main idea is to apply source-splitting using orthogonal transforms, so that only Wyner-Ziv (WZ) quantizers are required for compression of transform coefficients. This approach, however, requires optimizing the bit allocation among dependent sets of WZ quantizers. In order to solve this problem, a low-complexity tree-search algorithm based on analytical models for transform coefficient quantization is developed. A rate-distortion (RD) analysis of SP-DTCs for jointly Gaussian sources is presented, which indicates that these codes can significantly outperform the practical alternative of independent TC of each source, whenever there is a strong correlation between the sources. For practical implementation of SP-DTCs, the idea of using conditional entropy constrained (CEC) quantizers followed by Slepian-Wolf coding is explored. Experimental results obtained with SP-DTC designs based on both CEC scalar quantizers and CEC trellis-coded quantizers demonstrate that actual implementations of SP-DTCs can achieve RD performance close to the analytically predicted limits.
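
    The bit-allocation subproblem can be illustrated with the standard high-rate rule for transform coefficients (bits proportional to the log of each coefficient's variance); this is a textbook baseline, not the paper's tree-search algorithm for dependent WZ quantizers, and the variances below are invented.

        import numpy as np

        def allocate_bits(variances, total_bits):
            """High-rate rule: b_k = B/N + 0.5*log2(var_k / geometric_mean)."""
            v = np.asarray(variances, dtype=float)
            geo_mean = np.exp(np.log(v).mean())
            b = total_bits / v.size + 0.5 * np.log2(v / geo_mean)
            return np.maximum(b, 0.0)   # clip negative allocations

        print(allocate_bits([10.0, 4.0, 1.0, 0.25], total_bits=8))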

  9. Efficient context-dependent model building based on clustering posterior distributions for non-coding sequences

    PubMed Central

    Baele, Guy; Van de Peer, Yves; Vansteelandt, Stijn

    2009-01-01

    Background: Many recent studies that relax the assumption of independent evolution of sites have done so at the expense of a drastic increase in the number of substitution parameters. While additional parameters cannot be avoided to model context-dependent evolution, a large increase in model dimensionality is only justified when accompanied by careful model-building strategies that guard against overfitting. An increased dimensionality leads to increased numerical computation, longer convergence times in Bayesian Markov chain Monte Carlo algorithms, and even more tedious Bayes Factor calculations. Results: We have developed two model-search algorithms which reduce the number of Bayes Factor calculations by clustering posterior densities to decide on the equality of substitution behavior in different contexts. The selected model's fit is evaluated using a Bayes Factor, which we calculate via model-switch thermodynamic integration. To reduce computation time and to increase the precision of this integration, we propose to split the calculations over different computers and to appropriately calibrate the individual runs. Using the proposed strategies, we find, in a dataset of primate Ancestral Repeats, that careful modeling of context-dependent evolution may increase model fit considerably and that the combination of a context-dependent model with the assumption of varying rates across sites offers even larger improvements in terms of model fit. Using a smaller nuclear SSU rRNA dataset, we show that context-dependence may only become detectable upon applying model-building strategies. Conclusion: While context-dependent evolutionary models can increase the model fit over traditional independent evolutionary models, such complex models will often contain too many parameters. Justification for the added parameters is thus required so that only those parameters that model evolutionary processes previously unaccounted for are added to the evolutionary model.
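
    The model-switch thermodynamic integration step reduces to a one-dimensional integral, log BF = integral over beta in [0, 1] of E_beta[log L1 - log L0], where each expectation is estimated by MCMC at inverse temperature beta. The sketch below fakes those expectations with a toy linear function, just to show the quadrature; distributing the beta grid over different computers is the parallelization the authors propose.

        import numpy as np

        betas = np.linspace(0.0, 1.0, 11)     # could be split across machines
        expectations = 2.0 - 1.5 * betas      # stand-in for MCMC estimates

        # Trapezoidal quadrature of the expectations over beta.
        db = betas[1] - betas[0]
        log_bayes_factor = 0.5 * db * (expectations[:-1] + expectations[1:]).sum()
        print(log_bayes_factor)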

  10. Colour cyclic code for Brillouin distributed sensors

    NASA Astrophysics Data System (ADS)

    Le Floch, Sébastien; Sauser, Florian; Llera, Miguel; Rochat, Etienne

    2015-09-01

    For the first time, a colour cyclic coding (CCC) is theoretically and experimentally demonstrated for Brillouin optical time-domain analysis (BOTDA) distributed sensors. Compared to traditional intensity-modulated cyclic codes, the code presents an additional gain of √2 while keeping the same number of sequences as for a colour coding. A comparison with a standard BOTDA sensor is realized and validates the theoretical coding gain.

  11. Modeling Cometary Coma with a Three Dimensional, Anisotropic Multiple Scattering Distributed Processing Code

    NASA Technical Reports Server (NTRS)

    Luchini, Chris B.

    1997-01-01

    Development of camera and instrument simulations for space exploration requires the development of scientifically accurate models of the objects to be studied. Several planned cometary missions have prompted the development of a three dimensional, multi-spectral, anisotropic multiple scattering model of cometary coma.

  12. A distributed particle simulation code in C++

    SciTech Connect

    Forslund, D.W.; Wingate, C.A.; Ford, P.S.; Junkins, J.S.; Pope, S.C.

    1992-03-01

    Although C++ has been successfully used in a variety of computer science applications, it has just recently begun to be used in scientific applications. We have found that the object-oriented properties of C++ lend themselves well to scientific computations by making maintenance of the code easier, by making the code easier to understand, and by providing a better paradigm for distributed memory parallel codes. We describe here aspects of developing a particle plasma simulation code using object-oriented techniques for use in a distributed computing environment. We initially designed and implemented the code for serial computation and then used the distributed programming toolkit ISIS to run it in parallel. In this connection we describe some of the difficulties presented by using C++ for doing parallel and scientific computation.

  13. Development of a computer code to calculate the distribution of radionuclides within the human body by the biokinetic models of the ICRP.

    PubMed

    Matsumoto, Masaki; Yamanaka, Tsuneyasu; Hayakawa, Nobuhiro; Iwai, Satoshi; Sugiura, Nobuyuki

    2015-03-01

    This paper describes the Basic Radionuclide vAlue for Internal Dosimetry (BRAID) code, which was developed to calculate the time-dependent activity distribution in each organ and tissue characterised by the biokinetic compartmental models provided by the International Commission on Radiological Protection (ICRP). Translocation from one compartment to the next is taken to be governed by first-order kinetics, formulated as a system of first-order differential equations. In the source program of this code, the conservation equations are solved for the mass balance that describes the transfer of a radionuclide between compartments. This code is applicable to the evaluation of the radioactivity of nuclides in an organ or tissue without modification of the source program. It is also possible to easily handle a revision of the biokinetic model or the application of a user-defined model, because this code is designed so that all information on the biokinetic model structure is imported from an input file. Sample calculations are performed with the ICRP model, and the results are compared with analytic solutions of simple models. It is suggested that this code provides sufficient results for dose estimation and the interpretation of monitoring data.
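
    The underlying mathematics is a linear system of first-order ODEs, dA/dt = M A, with M built from the model's transfer coefficients and the decay constant. A minimal two-compartment sketch (all rate values invented, not ICRP parameters):

        import numpy as np
        from scipy.integrate import solve_ivp

        decay = 1e-3                        # physical decay constant (1/d)
        k12, k21, k1out = 0.5, 0.1, 0.02    # transfer rates (illustrative)

        # Mass-balance matrix: off-diagonals feed compartments, diagonals
        # drain them (transfers out plus radioactive decay).
        M = np.array([[-(k12 + k1out + decay), k21],
                      [k12, -(k21 + decay)]])

        sol = solve_ivp(lambda t, a: M @ a, (0.0, 100.0), [1.0, 0.0])
        print(sol.y[:, -1])                 # activities at t = 100 d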

  14. A numerical model for the computation of radiance distributions in natural waters with wind-roughened surfaces. Part 2: User's guide and code listing

    NASA Astrophysics Data System (ADS)

    Mobley, Curtis D.

    1988-07-01

    This report is a users' guide for and listing of the FORTRAN V computer code that implements a numerical procedure for computing radiance distributions in natural waters. The mathematical details of the numerical radiance model are described in a companion report (A Numerical Model for the Computation of Radiance Distributions in Natural Waters with Wind-Roughened Surfaces, by Curtis D. Mobley and Rudolph W. Preisendorfer, NOAA Technical Memorandum ERL PMEL-75). The present report describes how to run the computer model and therefore addresses questions such as which routines perform which calculations, what input is required by the various programs, and what is the file structure of the overall program.

  15. Robust entanglement distribution via quantum network coding

    NASA Astrophysics Data System (ADS)

    Epping, Michael; Kampermann, Hermann; Bruß, Dagmar

    2016-10-01

    Many protocols of quantum information processing, like quantum key distribution or measurement-based quantum computation, ‘consume’ entangled quantum states during their execution. When participants are located at distant sites, these resource states need to be distributed. Due to transmission losses, quantum repeaters become necessary for large distances (e.g. ≳300 km). Here we generalize the concept of the graph state repeater to D-dimensional graph states and to repeaters that can perform basic measurement-based quantum computations, which we call quantum routers. This processing of data at intermediate network nodes is called quantum network coding. We describe how a scheme to distribute general two-colourable graph states via quantum routers with network coding can be constructed from classical linear network codes. The robustness of the distribution of graph states against outages of network nodes is analysed by establishing a link to stabilizer error correction codes. Furthermore, we show that for any stabilizer error correction code there exists a corresponding quantum network code with similar error-correcting capabilities.

  16. Frequency-coded quantum key distribution.

    PubMed

    Bloch, Matthieu; McLaughlin, Steven W; Merolla, Jean-Marc; Patois, Frédéric

    2007-02-01

    We report an intrinsically stable quantum key distribution scheme based on genuine frequency-coded quantum states. The qubits are efficiently processed without fiber interferometers by fully exploiting the nonlinear interaction occurring in electro-optic phase modulators. The system requires only integrated off-the-shelf devices and could be used with a true single-photon source. Preliminary experiments have been performed with weak laser pulses and have demonstrated the feasibility of this new setup.

  17. Implementation of a double Gaussian source model for the BEAMnrc Monte Carlo code and its influence on small fields dose distributions.

    PubMed

    Doerner, Edgardo; Caprile, Paola

    2016-09-01

    The shape of the radiation source of a linac has a direct impact on the delivered dose distributions, especially in the case of small radiation fields. Traditionally, a single Gaussian source model is used to describe the electron beam hitting the target, although different studies have shown that the shape of the electron source can be better described by a mixed distribution consisting of two Gaussian components. Therefore, this study presents the implementation of a double Gaussian source model into the BEAMnrc Monte Carlo code. The impact of the double Gaussian source model for a 6 MV beam is assessed through the comparison of different dosimetric parameters calculated using a single Gaussian source, previously commissioned, the new double Gaussian source model and measurements, performed with a diode detector in a water phantom. It was found that the new source can be easily implemented into the BEAMnrc code and that it improves the agreement between measurements and simulations for small radiation fields. The impact of the change in source shape becomes less important as the field size increases and for increasing distance of the collimators to the source, as expected. In particular, for radiation fields delivered using stereotactic collimators located at a distance of 59 cm from the source, it was found that the effect of the double Gaussian source on the calculated dose distributions is negligible, even for radiation fields smaller than 5 mm in diameter. Accurate determination of the shape of the radiation source allows us to improve the Monte Carlo modeling of the linac, especially for treatment modalities such as IMRT, where the radiation beams used can be very narrow and thus more sensitive to the shape of the source. PACS number(s): 87.53.Bn, 87.55.K, 87.56.B-, 87.56.jf.
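
    Conceptually, the new source is a two-component Gaussian mixture for the electron spot. A sampling sketch (the mixture weight and widths below are invented, not the commissioned values):

        import numpy as np

        rng = np.random.default_rng(1)
        w1, sigma1, sigma2 = 0.8, 0.7, 2.0   # mixture weight; widths in mm

        n = 100_000
        narrow = rng.random(n) < w1          # pick a component per electron
        sigma = np.where(narrow, sigma1, sigma2)
        x = rng.normal(0.0, sigma)
        y = rng.normal(0.0, sigma)
        print(np.hypot(x, y).mean())         # mean radial offset of the spot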

  18. FIBWR: a steady-state core flow distribution code for boiling water reactors code verification and qualification report. Final report

    SciTech Connect

    Ansari, A.F.; Gay, R.R.; Gitnick, B.J.

    1981-07-01

    A steady-state core flow distribution code (FIBWR) is described. The ability of the recommended models to predict various pressure drop components and void distribution is shown by comparison to the experimental data. Application of the FIBWR code to the Vermont Yankee Nuclear Power Station is shown by comparison to the plant measured data.

  19. Comparison of depth-dose distributions of proton therapeutic beams calculated by means of logical detectors and ionization chamber modeled in Monte Carlo codes

    NASA Astrophysics Data System (ADS)

    Pietrzak, Robert; Konefał, Adam; Sokół, Maria; Orlef, Andrzej

    2016-08-01

    The success of proton therapy depends strongly on the precision of treatment planning. Dose distribution in biological tissue may be obtained from Monte Carlo simulations using various scientific codes, making it possible to perform very accurate calculations. However, there are many factors affecting the accuracy of modeling. One of them is the structure of the objects, called bins, that register the dose. In this work the influence of bin structure on the dose distributions was examined. The MCNPX code calculations of the Bragg curve for a 60 MeV proton beam were done in two ways: using simple logical detectors, i.e. volumes defined in water, and using a precise model of an ionization chamber used in clinical dosimetry. The results of the simulations were verified experimentally in a water phantom with a Marcus ionization chamber. The average local difference between the relative doses measured in the water phantom and those calculated by means of the logical detectors was 1.4% in the first 25 mm and 1.6% over the full depth range, with a maximum calculation uncertainty of less than 2.4% and a maximum measurement error of 1%. In the case of the relative doses calculated with the ionization chamber model, this average difference was somewhat greater: 2.3% at depths up to 25 mm and 2.4% over the full range of depths, with a maximum calculation uncertainty of 3%. In the dose calculations the ionization chamber model does not offer any additional advantages over the logical detectors. The results provided by both models are similar and in good agreement with the measurements; however, the logical detector approach is a more time-effective method.

  20. Cheetah: Starspot modeling code

    NASA Astrophysics Data System (ADS)

    Walkowicz, Lucianne; Thomas, Michael; Finkestein, Adam

    2014-12-01

    Cheetah models starspots in photometric data (lightcurves) by calculating the modulation of a light curve due to starspots. The main parameters of the program are the linear and quadratic limb darkening coefficients, stellar inclination, spot locations and sizes, and the intensity ratio of the spots to the stellar photosphere. Cheetah uses uniform spot contrast and the minimum number of spots needed to produce a good fit and ignores bright regions for the sake of simplicity.

  21. Distributed single source coding with side information

    NASA Astrophysics Data System (ADS)

    Vila-Forcen, Jose E.; Koval, Oleksiy; Voloshynovskiy, Sviatoslav V.

    2004-01-01

    In this paper we advocate an image compression technique within the distributed source coding framework. The novelty of the proposed approach is twofold: classical image compression is considered from the standpoint of source coding with side information and, contrary to existing scenarios where side information is given explicitly, side information is created based on a deterministic approximation of local image features. We consider an image in the transform domain as a realization of a source with a bounded codebook of symbols where each symbol represents a particular edge shape. The codebook is image-independent and plays the role of an auxiliary source. Due to the partial availability of side information at both encoder and decoder, we treat our problem as a modification of the Berger-Flynn-Gray problem and investigate the possible gain over solutions where side information is either unavailable or available only at the decoder. Finally, we present a practical compression algorithm for passport photo images based on our concept that demonstrates superior performance in the very low bit rate regime.

  22. Time coded distribution via broadcasting stations

    NASA Technical Reports Server (NTRS)

    Leschiutta, S.; Pettiti, V.; Detoma, E.

    1979-01-01

    The distribution of standard time signals via AM and FM broadcasting stations offers the distinct advantages of wide-area coverage and inexpensive receivers, but the signals are radiated only a limited number of times per day, are usually not available during the night, and no fully automatic synchronization of a remote clock is possible. As an attempt to overcome some of these problems, a time-coded signal with complete date information is diffused by the IEN via the national broadcasting networks in Italy. These signals are radiated by some 120 AM and about 3000 FM and TV transmitters around the country. In this way, a time-ordered system with an accuracy of a couple of milliseconds is easily achieved.

  23. The weight distribution and randomness of linear codes

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.

    1989-01-01

    Finding the weight distributions of block codes is a problem of theoretical and practical interest. Yet the weight distributions of most block codes are still unknown except for a few classes of block codes. Here, by using the inclusion and exclusion principle, an explicit formula is derived which enumerates the complete weight distribution of an (n,k,d) linear code using a partially known weight distribution. This expression is analogous to the Pless power-moment identities - a system of equations relating the weight distribution of a linear code to the weight distribution of its dual code. Also, an approximate formula for the weight distribution of most linear (n,k,d) codes is derived. It is shown that for a given linear (n,k,d) code over GF(q), the ratio of the number of codewords of weight u to the number of words of weight u approaches the constant Q = q^{-(n-k)} as u becomes large. A relationship between the randomness of a linear block code and the minimum distance of its dual code is given, and it is shown that most linear block codes with rigid algebraic and combinatorial structure also display certain random properties which make them similar to random codes with no structure at all.
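
    For small codes the complete weight distribution can be checked by direct enumeration from a generator matrix, a useful sanity check on such formulas. The (5,2) binary code below is a toy example, not one from the paper.

        import numpy as np
        from itertools import product

        G = np.array([[1, 0, 1, 1, 0],      # generator matrix of a toy
                      [0, 1, 0, 1, 1]])     # (5,2) binary code

        n = G.shape[1]
        A = [0] * (n + 1)                   # A[u] = codewords of weight u
        for msg in product([0, 1], repeat=G.shape[0]):
            codeword = np.mod(np.array(msg) @ G, 2)
            A[int(codeword.sum())] += 1
        print(A)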

  24. A MCTF video coding scheme based on distributed source coding principles

    NASA Astrophysics Data System (ADS)

    Tagliasacchi, Marco; Tubaro, Stefano

    2005-07-01

    Motion Compensated Temporal Filtering (MCTF) has proved to be an efficient coding tool in the design of open-loop scalable video codecs. In this paper we propose an MCTF video coding scheme based on lifting where the prediction step is implemented using PRISM (Power efficient, Robust, hIgh compression Syndrome-based Multimedia coding), a video coding framework built on distributed source coding principles. We study the effect of integrating the update step at the encoder or at the decoder side. We show that the latter approach improves the quality of the side information exploited during decoding. We present the analytical results obtained by modeling the video signal along the motion trajectories as a first-order auto-regressive process. We show that the update step at the decoder halves the contribution of the quantization noise. We also include experimental results with real video data that demonstrate the potential of this approach when the video sequences are coded at low bitrates.

  25. Distributed source coding using chaos-based cryptosystem

    NASA Astrophysics Data System (ADS)

    Zhou, Junwei; Wong, Kwok-Wo; Chen, Jianyong

    2012-12-01

    A distributed source coding scheme is proposed by incorporating a chaos-based cryptosystem in the Slepian-Wolf coding. The punctured codeword generated by the chaos-based cryptosystem results in ambiguity at the decoder side. This ambiguity can be removed by the maximum a posteriori decoding with the help of side information. In this way, encryption and source coding are performed simultaneously. This leads to a simple encoder structure with low implementation complexity. Simulation results show that the encoder complexity is lower than that of existing distributed source coding schemes. Moreover, at small block size, the proposed scheme has a performance comparable to existing distributed source coding schemes.

  26. Achieving H.264-like compression efficiency with distributed video coding

    NASA Astrophysics Data System (ADS)

    Milani, Simone; Wang, Jiajun; Ramchandran, Kannan

    2007-01-01

    Recently, a new class of distributed source coding (DSC) based video coders has been proposed to enable low-complexity encoding. However, to date, these low-complexity DSC-based video encoders have been unable to compress as efficiently as motion-compensated predictive coding based video codecs, such as H.264/AVC, due to insufficiently accurate modeling of video data. In this work, we examine achieving H.264-like high compression efficiency with a DSC-based approach without the encoding complexity constraint. The success of H.264/AVC highlights the importance of accurately modeling the highly non-stationary video data through fine-granularity motion estimation. This motivates us to deviate from the popular approach of approaching the Wyner-Ziv bound with sophisticated capacity-achieving channel codes that require long block lengths and high decoding complexity, and instead focus on accurately modeling video data. Such a DSC-based, compression-centric encoder is an important first step towards building a robust DSC-based video coding framework.

  27. Sparsey™: event recognition via deep hierarchical sparse distributed codes.

    PubMed

    Rinkus, Gerard J

    2014-01-01

    The visual cortex's hierarchical, multi-level organization is captured in many biologically inspired computational vision models, the general idea being that progressively larger scale (spatially/temporally) and more complex visual features are represented in progressively higher areas. However, most earlier models use localist representations (codes) in each representational field (which we equate with the cortical macrocolumn, "mac"), at each level. In localism, each represented feature/concept/event (hereinafter "item") is coded by a single unit. The model we describe, Sparsey, is hierarchical as well but crucially, it uses sparse distributed coding (SDC) in every mac in all levels. In SDC, each represented item is coded by a small subset of the mac's units. The SDCs of different items can overlap and the size of overlap between items can be used to represent their similarity. The difference between localism and SDC is crucial because SDC allows the two essential operations of associative memory, storing a new item and retrieving the best-matching stored item, to be done in fixed time for the life of the model. Since the model's core algorithm, which does both storage and retrieval (inference), makes a single pass over all macs on each time step, the overall model's storage/retrieval operation is also fixed-time, a criterion we consider essential for scalability to the huge ("Big Data") problems. A 2010 paper described a nonhierarchical version of this model in the context of purely spatial pattern processing. Here, we elaborate a fully hierarchical model (arbitrary numbers of levels and macs per level), describing novel model principles like progressive critical periods, dynamic modulation of principal cells' activation functions based on a mac-level familiarity measure, representation of multiple simultaneously active hypotheses, a novel method of time warp invariant recognition, and we report results showing learning/recognition of spatiotemporal patterns.
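
    The similarity-as-overlap idea is easy to demonstrate: code each item by a small random subset of a mac's units and compare intersection sizes. The unit counts below are invented, not Sparsey's parameters.

        import numpy as np

        rng = np.random.default_rng(2)
        n_units, k_active = 1000, 20        # mac size and code sparsity

        def random_code():
            return set(rng.choice(n_units, size=k_active, replace=False))

        a, b = random_code(), random_code()                   # unrelated items
        c = set(list(a)[:15]) | set(list(random_code())[:5])  # similar to a

        print(len(a & b), len(a & c))       # small overlap vs large overlap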

  28. Use of bar codes in inpatient drug distribution.

    PubMed

    Meyer, G E; Brandell, R; Smith, J E; Milewski, F J; Brucker, P; Coniglio, M

    1991-05-01

    The development and operation of a prototype inpatient drug distribution system that uses bar codes is described, and the impact of bar coding on the cassette-filling and verification process is summarized. A prototype pharmacy dispensing site was created to function in parallel with an existing satellite dispensing site that served 78 general medical-care beds. Supplemental labels encoded with an 11-digit unique product identification number, a 5-digit expiration date, and a 6-character lot number were generated and affixed to all unit dose packages dispensed from the prototype pharmacy site. The unit doses were labeled with Code 49 symbology; each label measured 0.8 x 1.25 inches. Each patient cassette was labeled using Code 39 symbology. A cost-benefit model was developed, and the two dispensing systems were compared with respect to (1) time to fill patient cassettes, (2) time to verify patient cassettes, (3) time to process patient charges and credits, (4) time to correct dispensing errors, (5) accuracy of the cassette-filling process, and (6) accuracy of the cassette verification process. Bar-code dispensing and verification saved 1.52 seconds per dose. Additionally, the cassette verification function was shifted from pharmacists to technicians. Estimated per-dose cost of the bar-code system was 2.73 cents. A measurable improvement in the accuracy of filling patient cassettes was documented. The feasibility of using bar codes in unit dose dispensing was demonstrated, and the prototype system was shown to produce cost efficiencies and patient-care benefits.

  29. Impacts of Model Building Energy Codes

    SciTech Connect

    Athalye, Rahul A.; Sivaraman, Deepak; Elliott, Douglas B.; Liu, Bing; Bartlett, Rosemarie

    2016-10-31

    The U.S. Department of Energy (DOE) Building Energy Codes Program (BECP) periodically evaluates national and state-level impacts associated with energy codes in residential and commercial buildings. Pacific Northwest National Laboratory (PNNL), funded by DOE, conducted an assessment of the prospective impacts of national model building energy codes from 2010 through 2040. A previous PNNL study evaluated the impact of the Building Energy Codes Program; this study looked more broadly at overall code impacts. This report describes the methodology used for the assessment and presents the impacts in terms of energy savings, consumer cost savings, and reduced CO2 emissions at the state level and at aggregated levels. This analysis does not represent all potential savings from energy codes in the U.S. because it excludes several states whose codes are fundamentally different from the national model energy codes, as well as states without state-wide codes. Energy codes follow a three-phase cycle that starts with the development of a new model code, proceeds with the adoption of the new code by states and local jurisdictions, and finishes when buildings comply with the code. The development of new model code editions creates the potential for increased energy savings. After a new model code is adopted, potential savings are realized in the field when new buildings (or additions and alterations) are constructed to comply with the new code. Delayed adoption of a model code and incomplete compliance with the code’s requirements erode potential savings. The contributions of all three phases are crucial to the overall impact of codes, and are considered in this assessment.

  30. The SEL macroscopic modeling code

    NASA Astrophysics Data System (ADS)

    Glasser, A. H.; Tang, X. Z.

    2004-12-01

    The SEL (Spectral ELement) macroscopic modeling code for magnetically confined plasma combines adaptive spectral element spatial discretization and nonlinearly implicit time stepping via Newton's method on massively parallel computers. Static condensation is implemented to construct the Shur complement of the Jacobian matrix, which greatly accelerates the linear system solution and distinguishes itself from conventional Newton-Krylov schemes. Grid alignment with the evolving magnetic field, implemented with a variational principle, is a key component of grid adaptation in SEL, and is critical to toroidal plasma applications. Results of 2D magnetic reconnection are shown to illustrate the accuracy and efficiency of the parallel algorithms built on the Portable, Extensible Toolkits for Scientific Computing (PETSC) framework.
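
    Static condensation amounts to eliminating the interior unknowns of a block system and solving only for the remaining ones via the Schur complement S = D - C A^{-1} B of the Jacobian. A dense toy sketch with random stand-in blocks (the real code works with sparse, element-local blocks):

        import numpy as np

        rng = np.random.default_rng(3)
        A = rng.random((4, 4)) + 4.0 * np.eye(4)   # interior-interior block
        B = rng.random((4, 2))                     # interior-interface
        C = rng.random((2, 4))                     # interface-interior
        D = rng.random((2, 2)) + 4.0 * np.eye(2)   # interface-interface

        S = D - C @ np.linalg.solve(A, B)          # Schur complement
        print(S)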

  31. Modeling of Dose Distribution for a Proton Beam Delivering System with the use of the Multi-Particle Transport Code 'Fluka'

    SciTech Connect

    Mumot, Marta; Agapov, Alexey

    2007-11-26

    We have developed a new delivery system for hadron therapy which uses a multileaf collimator and a range shifter. We simulate our beam delivery system with the multi-particle transport code 'Fluka'. From these simulations we obtained information about the dose distributions, about stars generated in the delivery-system elements, and about the neutron flux. All the information obtained was analyzed from the point of view of radiation protection and homogeneity of beam delivery to the patient body, and also in order to improve some of the beam modifiers used.

  32. Joint distributed source-channel coding for 3D videos

    NASA Astrophysics Data System (ADS)

    Palma, Veronica; Cancellaro, Michela; Neri, Alessandro

    2011-03-01

    This paper presents a distributed joint source-channel 3D video coding system. Our aim is the design of an efficient coding scheme for stereoscopic video communication over noisy channels that preserves the perceived visual quality while guaranteeing a low computational complexity. The drawback of using stereo sequences is the increased amount of data to be transmitted. Several methods are used in the literature for encoding stereoscopic video. A significantly different approach with respect to traditional video coding is represented by Distributed Video Coding (DVC), which introduces a flexible architecture with low-complexity video encoders. In this paper we propose a novel method for joint source-channel coding in a distributed approach. We choose turbo codes for our application and study the new setting of distributed joint source-channel coding of a video. Turbo codes allow sending the minimum amount of data while guaranteeing near-channel-capacity error-correcting performance. In this contribution, the mathematical framework is fully detailed, and the tradeoff among redundancy, perceived quality, and quality of experience is analyzed with the aid of numerical experiments.

  33. Dynamic Alignment Models for Neural Coding

    PubMed Central

    Kollmorgen, Sepp; Hahnloser, Richard H. R.

    2014-01-01

    Recently, there have been remarkable advances in modeling the relationships between the sensory environment, neuronal responses, and behavior. However, most models cannot encompass variable stimulus-response relationships such as varying response latencies and state or context dependence of the neural code. Here, we consider response modeling as a dynamic alignment problem and model stimulus and response jointly by a mixed pair hidden Markov model (MPH). In MPHs, multiple stimulus-response relationships (e.g., receptive fields) are represented by different states or groups of states in a Markov chain. Each stimulus-response relationship features temporal flexibility, allowing modeling of variable response latencies, including noisy ones. We derive algorithms for learning of MPH parameters and for inference of spike response probabilities. We show that some linear-nonlinear Poisson cascade (LNP) models are a special case of MPHs. We demonstrate the efficiency and usefulness of MPHs in simulations of both jittered and switching spike responses to white noise and natural stimuli. Furthermore, we apply MPHs to extracellular single and multi-unit data recorded in cortical brain areas of singing birds to showcase a novel method for estimating response lag distributions. MPHs allow simultaneous estimation of receptive fields, latency statistics, and hidden state dynamics and so can help to uncover complex stimulus response relationships that are subject to variable timing and involve diverse neural codes. PMID:24625448
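
    The dynamic-programming machinery behind such models is the classic HMM forward recursion, which pair HMMs like the MPH generalize to joint stimulus-response sequences. A minimal single-sequence sketch with invented transition and emission matrices:

        import numpy as np

        A = np.array([[0.9, 0.1],            # state-transition probabilities
                      [0.2, 0.8]])
        B = np.array([[0.7, 0.3],            # emission probabilities
                      [0.1, 0.9]])
        pi = np.array([0.5, 0.5])            # initial state distribution

        obs = [0, 1, 1, 0]                   # observed symbol indices
        alpha = pi * B[:, obs[0]]            # forward variables at t = 0
        for o in obs[1:]:
            alpha = (alpha @ A) * B[:, o]
        print(alpha.sum())                   # P(observations | model)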

  34. Dual-code quantum computation model

    NASA Astrophysics Data System (ADS)

    Choi, Byung-Soo

    2015-08-01

    In this work, we propose the dual-code quantum computation model—a fault-tolerant quantum computation scheme which alternates between two different quantum error-correction codes. Since the chosen two codes have different sets of transversal gates, we can implement a universal set of gates transversally, thereby reducing the overall cost. We use code teleportation to convert between quantum states in different codes. The overall cost is decreased if code teleportation requires fewer resources than the fault-tolerant implementation of the non-transversal gate in a specific code. To analyze the cost reduction, we investigate two cases with different base codes, namely the Steane and Bacon-Shor codes. For the Steane code, neither the proposed dual-code model nor another variation of it achieves any cost reduction since the conventional approach is simple. For the Bacon-Shor code, the three proposed variations of the dual-code model reduce the overall cost. However, as the encoding level increases, the cost reduction decreases and becomes negative. Therefore, the proposed dual-code model is advantageous only when the encoding level is low and the cost of the non-transversal gate is relatively high.

  35. DUCS—A fully automated code and documentation distribution system

    NASA Astrophysics Data System (ADS)

    Johnson, A. S.; Saitta, B.; Gervasi, O.; Bower, G. R.; Rothenberg, A.; Waite, A. P.

    1990-08-01

    The Distributed Update Control System (DUCS) is a code distribution system developed for the SLD collaboration to distribute code, documentation and news items between remote collaborators and SLAC. The system runs on both VM and VMS systems and is currently running at a total of 18 sites on two different continents, using both BITNET and DECNET connections. Software updates and news items can be submitted from any site where DUCS is installed and are distributed to all other sites. When an update arrives at a remote site it is installed appropriately without any manual intervention. The details of the installation depend on the type of file, but for source code, installation includes compilation and the insertion of the resulting object module into the appropriate library. Whenever an error occurs the error log is returned to the originator of the update. DUCS maintains both development and production code, subdivided into an arbitrary number of sections. A mechanism is provided to move code from the development area to the production area. DUCS also contains many utilities which enable the status of each node to be ascertained and any manual intervention necessary to correct unanticipated conditions to be performed. The system has been running now for nearly three years and has distributed over 20,000 code updates. It is proving a valuable tool for remote collaborators, who are now able to participate in code development as easily as if they were at SLAC.

  36. Distributed joint source-channel coding in wireless sensor networks.

    PubMed

    Zhu, Xuqi; Liu, Yu; Zhang, Lin

    2009-01-01

    Considering the energy limitations of sensors and the wireless channel conditions in wireless sensor networks, there is an urgent need for a low-complexity coding method with a high compression ratio and noise-resistant features. This paper reviews the progress made in distributed joint source-channel coding, which can address this issue. The main existing deployments, from theory to practice, of distributed joint source-channel coding over independent channels, multiple access channels and broadcast channels are introduced, respectively. To this end, we also present a practical scheme for compressing multiple correlated sources over independent channels. The simulation results demonstrate the desired efficiency.

  37. Codon Distribution in Error-Detecting Circular Codes

    PubMed Central

    Fimmel, Elena; Strüngmann, Lutz

    2016-01-01

    In 1957, Francis Crick et al. suggested an ingenious explanation for the process of frame maintenance. The idea was based on the notion of comma-free codes. Although Crick’s hypothesis proved to be wrong, in 1996, Arquès and Michel discovered the existence of a weaker version of such codes in eukaryote and prokaryote genomes, namely the so-called circular codes. Since then, circular code theory has invariably evoked great interest and made significant progress. In this article, the codon distributions in maximal comma-free, maximal self-complementary C³ and maximal self-complementary circular codes are discussed, i.e., we investigate in how many of such codes a given codon participates. As the main (and surprising) result, it is shown that the codons can be separated into very few classes (three, or five, or six) with respect to their frequency. Moreover, the distribution classes can be hierarchically ordered as refinements from maximal comma-free codes via maximal self-complementary C³ codes to maximal self-complementary circular codes. PMID:26999215
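
    The comma-free condition itself is simple to state and to test: no codeword may occur straddling the junction of any two concatenated codewords. A toy check for trinucleotide codes (the example code is invented, not one from the paper):

        from itertools import product

        def is_comma_free(code):
            """True if no codeword appears in reading frames 1 or 2 of any
            concatenation of two codewords."""
            for u, v in product(code, repeat=2):
                w = u + v
                if w[1:4] in code or w[2:5] in code:
                    return False
            return True

        print(is_comma_free({"AAC", "ACG", "CGT"}))   # True for this toy set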

  38. Link-Adaptive Distributed Coding for Multisource Cooperation

    NASA Astrophysics Data System (ADS)

    Cano, Alfonso; Wang, Tairan; Ribeiro, Alejandro; Giannakis, Georgios B.

    2007-12-01

    Combining multisource cooperation and link-adaptive regenerative techniques, a novel protocol is developed capable of achieving diversity order up to the number of cooperating users and large coding gains. The approach relies on a two-phase protocol. In Phase 1, cooperating sources exchange information-bearing blocks, while in Phase 2, they transmit reencoded versions of the original blocks. Different from existing approaches, participation in the second phase does not require correct decoding of Phase 1 packets. This allows relaying of soft information to the destination, thus increasing coding gains while retaining diversity properties. For any reencoding function the diversity order is expressed as a function of the rank properties of the distributed coding strategy employed. This result is analogous to the diversity properties of colocated multi-antenna systems. Particular cases include repetition coding, distributed complex field coding (DCFC), distributed space-time coding, and distributed error-control coding. Rate, diversity, complexity and synchronization issues are elaborated. DCFC emerges as an attractive choice because it offers high-rate, full spatial diversity, and relaxed synchronization requirements. Simulations confirm analytically established assessments.

  39. MEMOPS: data modelling and automatic code generation.

    PubMed

    Fogh, Rasmus H; Boucher, Wayne; Ionides, John M C; Vranken, Wim F; Stevens, Tim J; Laue, Ernest D

    2010-03-25

    In recent years the amount of biological data has exploded to the point where much useful information can only be extracted by complex computational analyses. Such analyses are greatly facilitated by metadata standards, both in terms of the ability to compare data originating from different sources, and in terms of exchanging data in standard forms, e.g. when running processes on a distributed computing infrastructure. However, standards thrive on stability whereas science tends to constantly move, with new methods being developed and old ones modified. Therefore maintaining both metadata standards, and all the code that is required to make them useful, is a non-trivial problem. Memops is a framework that uses an abstract definition of the metadata (described in UML) to generate internal data structures and subroutine libraries for data access (application programming interfaces--APIs--currently in Python, C and Java) and data storage (in XML files or databases). For the individual project these libraries obviate the need for writing code for input parsing, validity checking or output. Memops also ensures that the code is always internally consistent, massively reducing the need for code reorganisation. Across a scientific domain a Memops-supported data model makes it easier to support complex standards that can capture all the data produced in a scientific area, share them among all programs in a complex software pipeline, and carry them forward to deposition in an archive. The principles behind the Memops generation code will be presented, along with example applications in Nuclear Magnetic Resonance (NMR) spectroscopy and structural biology.

  40. Evaluation of HELP model replacement codes

    SciTech Connect

    Whiteside, Tad; Hang, Thong; Flach, Gregory

    2009-07-01

    This work evaluates the computer codes that are proposed to be used to predict percolation of water through the closure cap and into the waste containment zone at Department of Energy closure sites. It compares the currently used water-balance code (HELP) with newly developed computer codes that use unsaturated flow (Richards’ equation), and provides a literature review of the HELP model and the proposed codes, resulting in two codes recommended for further evaluation: HYDRUS-2D3D and VADOSE/W. This further evaluation involved performing actual simulations on a simple model and comparing the results of those simulations to those obtained with the HELP code and to the field data. From the results of this work, we conclude that the new codes perform nearly the same, and moving forward we recommend HYDRUS-2D3D.

  41. Error resiliency of distributed video coding in wireless video communication

    NASA Astrophysics Data System (ADS)

    Ye, Shuiming; Ouaret, Mourad; Dufaux, Frederic; Ansorge, Michael; Ebrahimi, Touradj

    2008-08-01

    Distributed Video Coding (DVC) is a new paradigm in video coding, based on the Slepian-Wolf and Wyner-Ziv theorems. DVC offers a number of potential advantages: flexible partitioning of the complexity between the encoder and decoder, robustness to channel errors due to intrinsic joint source-channel coding, codec independent scalability, and multi-view coding without communications between the cameras. In this paper, we evaluate the performance of DVC in an error-prone wireless communication environment. We also present a hybrid spatial and temporal error concealment approach for DVC. Finally, we perform a comparison with a state-of-the-art AVC/H.264 video coding scheme in the presence of transmission errors.

  42. Probability Distribution Estimation for Autoregressive Pixel-Predictive Image Coding.

    PubMed

    Weinlich, Andreas; Amon, Peter; Hutter, Andreas; Kaup, André

    2016-03-01

    Pixelwise linear prediction using backward-adaptive least-squares or weighted least-squares estimation of prediction coefficients is currently among the state-of-the-art methods for lossless image compression. While current research is focused on mean intensity prediction of the pixel to be transmitted, best compression requires occurrence probability estimates for all possible intensity values. Apart from common heuristic approaches, we show how prediction error variance estimates can be derived from the (weighted) least-squares training region and how a complete probability distribution can be built based on an autoregressive image model. The analysis of image stationarity properties further allows deriving a novel formula for weight computation in weighted least-squares, proving and generalizing ad hoc equations from the literature. For sparse intensity distributions in non-natural images, a modified image model is presented. Evaluations were done in the newly developed C++ framework volumetric, artificial, and natural image lossless coder (Vanilc), which can compress a wide range of images, including 16-bit medical 3D volumes or multichannel data. A comparison with several of the best available lossless image codecs proves that the method can achieve very competitive compression ratios. In terms of reproducible research, the source code of Vanilc has been made public.
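
    The core of backward-adaptive least-squares prediction is: fit coefficients for a causal neighborhood on a training window of already-decoded pixels, then apply them to the current pixel. The sketch below (function name, three-pixel neighborhood, window size and synthetic data are all invented, not Vanilc's configuration) shows the idea:

        import numpy as np

        rng = np.random.default_rng(4)
        img = rng.random((64, 64))
        img[1:, :] += 0.5 * img[:-1, :]     # add some vertical correlation

        def ls_predict(img, r, c, win=8):
            # Train on causal pixels near (r, c): features are the left,
            # top and top-left neighbors; target is the pixel itself.
            rows, targets = [], []
            for i in range(max(1, r - win), r):
                for j in range(max(1, c - win), min(img.shape[1] - 1, c + win)):
                    rows.append([img[i, j - 1], img[i - 1, j], img[i - 1, j - 1]])
                    targets.append(img[i, j])
            coeff, *_ = np.linalg.lstsq(np.array(rows), np.array(targets),
                                        rcond=None)
            neigh = [img[r, c - 1], img[r - 1, c], img[r - 1, c - 1]]
            return float(np.dot(neigh, coeff))

        print(ls_predict(img, 32, 32), img[32, 32])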

  7. From Verified Models to Verifiable Code

    NASA Technical Reports Server (NTRS)

    Lensink, Leonard; Munoz, Cesar A.; Goodloe, Alwyn E.

    2009-01-01

    Declarative specifications of digital systems often contain parts that can be automatically translated into executable code. Automated code generation may reduce or eliminate the kinds of errors typically introduced through manual code writing. For this approach to be effective, the generated code should be reasonably efficient and, more importantly, verifiable. This paper presents a prototype code generator for the Prototype Verification System (PVS) that translates a subset of PVS functional specifications into an intermediate language and subsequently to multiple target programming languages. Several case studies are presented to illustrate the tool's functionality. The generated code can be analyzed by software verification tools such as verification condition generators, static analyzers, and software model-checkers to increase the confidence that the generated code is correct.

  8. Efficiency of a model human image code

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.

    1987-01-01

    Hypothetical schemes for neural representation of visual information can be expressed as explicit image codes. Here, a code modeled on the simple cells of the primate striate cortex is explored. The Cortex transform maps a digital image into a set of subimages (layers) that are bandpass in spatial frequency and orientation. The layers are sampled so as to minimize the number of samples and still avoid aliasing. Samples are quantized in a manner that exploits the bandpass contrast-masking properties of human vision. The entropy of the samples is computed to provide a lower bound on the code size. Finally, the image is reconstructed from the code. Psychophysical methods are derived for comparing the original and reconstructed images to evaluate the sufficiency of the code. When each resolution is coded at the threshold for detection of artifacts, the image-code size is about 1 bit/pixel.
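
    The entropy lower bound mentioned above is simply the first-order entropy of the quantized sample histogram; a minimal sketch of the computation (the toy Gaussian layer and step size are illustrative):

        import numpy as np

        def entropy_bits_per_sample(quantized):
            """First-order entropy of quantized samples: a lower bound on
            the achievable code size in bits per sample."""
            _, counts = np.unique(quantized, return_counts=True)
            p = counts / counts.sum()
            return float(-(p * np.log2(p)).sum())

        # Toy example: one bandpass layer's samples, coarsely quantized.
        rng = np.random.default_rng(0)
        layer = rng.normal(0.0, 1.0, 100000)
        print(entropy_bits_per_sample(np.round(layer / 0.5)))  # ~3.05 bits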

  9. Code CUGEL: A code to unfold Ge(Li) spectrometer polyenergetic gamma photon experimental distributions

    NASA Technical Reports Server (NTRS)

    Steyn, J. J.; Born, U.

    1970-01-01

    A FORTRAN code was developed for the Univac 1108 digital computer to unfold polyenergetic gamma photon experimental distributions from lithium-drifted germanium [Ge(Li)] semiconductor spectrometers. It was designed to analyze the combined continuous and monoenergetic gamma radiation field of radioisotope volumetric sources. The code generates the detector system response matrix function and applies it to monoenergetic spectral components discretely and to the continuum iteratively. It corrects for system drift, source decay, background, and detection efficiency. Results are presented in digital form for differential and integrated photon number and energy distributions, and for exposure dose.

  10. 24 CFR 200.926c - Model code provisions for use in partially accepted code jurisdictions.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false Model code provisions for use in... Minimum Property Standards § 200.926c Model code provisions for use in partially accepted code..., those portions of one of the model codes with which the property must comply. Schedule for Model Code...

  11. Genetic coding and gene expression - new Quadruplet genetic coding model

    NASA Astrophysics Data System (ADS)

    Shankar Singh, Rama

    2012-07-01

    The successful completion of the Human Genome Project has opened the door not only to developing personalized medicine and cures for genetic diseases; it may also answer the complex and difficult question of the origin of life, and it may make the 21st century a century of the biological sciences as well. According to the central dogma of biology, genetic codons, in conjunction with tRNA, play a key role in translating RNA bases into the sequence of amino acids that forms a synthesized protein. This is the most critical step in synthesizing the right protein needed for personalized medicine and for curing genetic diseases. So far, only triplet codons, involving three bases of RNA transcribed from DNA bases, have been used. Since this approach has several inconsistencies and limitations, even the promise of personalized medicine has not been realized. The new quadruplet genetic coding model proposed and developed here involves all four RNA bases, which in conjunction with tRNA will synthesize the right protein. The transcription and translation processes remain the same, but the quadruplet codons help overcome most of the inconsistencies and limitations of the triplet codes. Details of this new quadruplet genetic coding model and its potential applications, including its relevance to the origin of life, will be presented.

  12. Non-coding RNAs and complex distributed genetic networks

    NASA Astrophysics Data System (ADS)

    Zhdanov, Vladimir P.

    2011-08-01

    In eukaryotic cells, the mRNA-protein interplay can be dramatically influenced by non-coding RNAs (ncRNAs). Although this new paradigm is now widely accepted, an understanding of the effect of ncRNAs on complex genetic networks is lacking. To clarify what may happen in this case, we propose a mean-field kinetic model describing the influence of an ncRNA on a complex genetic network with a distributed architecture, including mutual protein-mediated regulation of many genes transcribed into mRNAs. The ncRNA is considered to associate with mRNAs and to inhibit their translation and/or facilitate their degradation. Our results are indicative of the richness of the kinetics under consideration. The main complex features found are bistability and oscillations. One might expect kinetic chaos as well; however, this feature was not observed in our calculations. In addition, we illustrate the difference in the regulation of distributed networks by mRNA and ncRNA.
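
    The qualitative behaviour described above can be explored with a generic mean-field sketch; the equations and rate constants below are illustrative stand-ins, not the authors' exact model. An ncRNA n associates with an mRNA m (enhancing its removal), while the encoded protein p represses transcription.

        import numpy as np
        from scipy.integrate import solve_ivp

        def rhs(t, state, k_m=1.0, k_p=1.0, k_n=0.8, k_a=2.0,
                d_m=0.1, d_p=0.1, d_n=0.1):
            """Illustrative mean-field kinetics of ncRNA-mRNA-protein interplay."""
            m, p, n = state
            dm = k_m / (1.0 + p ** 2) - d_m * m - k_a * m * n  # repressed
            dp = k_p * m - d_p * p                             # transcription and
            dn = k_n - d_n * n - k_a * m * n                   # ncRNA-driven decay
            return [dm, dp, dn]

        sol = solve_ivp(rhs, (0.0, 200.0), [0.0, 0.0, 0.0])
        print(sol.y[:, -1])  # steady state; other rate choices can yield the
                             # bistable or oscillatory regimes discussed above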

  13. Streamlined Genome Sequence Compression using Distributed Source Coding

    PubMed Central

    Wang, Shuang; Jiang, Xiaoqian; Chen, Feng; Cui, Lijuan; Cheng, Samuel

    2014-01-01

    We aim at developing a streamlined genome sequence compression algorithm to support alternative miniaturized sequencing devices, which have limited communication, storage, and computation power. Existing techniques that require a heavy client (encoder side) cannot be applied. To tackle this challenge, we carefully examined distributed source coding theory and developed a customized reference-based genome compression protocol to meet the low-complexity need at the client side. Based on the variation between the source and the reference, our protocol adaptively picks either syndrome coding or hash coding to compress subsequences of varying code length. Our experimental results showed promising performance of the proposed method when compared with the state-of-the-art algorithm (GRS). PMID:25520552
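
    In syndrome coding, the client transmits only the syndrome s = Hx (mod 2) of a subsequence x under a parity-check matrix H, and the decoder recovers x using its correlated reference y. A toy GF(2) sketch; the 4x8 matrix and exhaustive search below are illustrative, whereas practical codecs use long LDPC codes with belief-propagation decoding:

        import numpy as np

        # Toy 4x8 parity-check matrix whose columns (the 4-bit codes of 1..8)
        # are distinct, so any single mismatch has a unique syndrome.
        H = np.array([[(c >> k) & 1 for c in range(1, 9)] for k in range(4)])

        rng = np.random.default_rng(1)
        x = rng.integers(0, 2, size=8)    # source subsequence at the client
        s = H @ x % 2                     # transmit 4 syndrome bits, not 8

        y = x.copy()
        y[3] ^= 1                         # decoder's reference, one mismatch
        for flip in range(8):             # toy search for the matching pattern
            cand = y.copy()
            cand[flip] ^= 1
            if np.array_equal(H @ cand % 2, s):
                print("decoded:", cand, "ok:", np.array_equal(cand, x))  # True
                break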

  14. Distributed Joint Source-Channel Coding in Wireless Sensor Networks

    PubMed Central

    Zhu, Xuqi; Liu, Yu; Zhang, Lin

    2009-01-01

    Considering that sensors in wireless sensor networks are energy-limited and that wireless channel conditions are harsh, there is an urgent need for a low-complexity coding method with a high compression ratio and noise-resistant features. This paper reviews the progress made in distributed joint source-channel coding, which can address this issue. The main existing deployments, from theory to practice, of distributed joint source-channel coding over independent channels, multiple access channels, and broadcast channels are introduced, respectively. To this end, we also present a practical scheme for compressing multiple correlated sources over independent channels. The simulation results demonstrate the desired efficiency. PMID:22408560

  15. Model Policy on Student Publications Code.

    ERIC Educational Resources Information Center

    Iowa State Dept. of Education, Des Moines.

    In 1989, the Iowa Legislature created a new code section that defines and regulates student exercise of free expression in "official school publications." Also, the Iowa State Department of Education was directed to develop a model publication code that includes reasonable provisions for regulating the time, place, and manner of student…

  16. Transmutation Fuel Performance Code Thermal Model Verification

    SciTech Connect

    Gregory K. Miller; Pavel G. Medvedev

    2007-09-01

    The FRAPCON fuel performance code is being modified to model the performance of the nuclear fuels of interest to the Global Nuclear Energy Partnership (GNEP). The present report documents the verification of the FRAPCON thermal model. It was found that, with minor modifications, the FRAPCON thermal model temperature calculation agrees with that of the commercial software ABAQUS (Version 6.4-4). This report outlines the verification methodology, code input, and calculation results.

  17. Two-dimensional MHD generator model. [GEN code

    SciTech Connect

    Geyer, H. K.; Ahluwalia, R. K.; Doss, E. D.

    1980-09-01

    A steady state, two-dimensional MHD generator code, GEN, is presented. The code solves the equations of conservation of mass, momentum, and energy, using a Von Mises transformation and a local linearization of the equations. By splitting the source terms into a part proportional to the axial pressure gradient and a part independent of the gradient, the pressure distribution along the channel is easily obtained to satisfy various criteria. Thus, the code can run effectively in both design mode, where the channel geometry is determined, and analysis mode, where the geometry is known in advance. The code also employs a mixing-length concept for turbulent flows, Cebeci and Chang's wall roughness model, and an extension of that model to the effective thermal diffusivities. Results on code validation, as well as comparisons of skin friction and Stanton number calculations with experimental results, are presented.

  18. Diagnosis code assignment: models and evaluation metrics.

    PubMed

    Perotte, Adler; Pivovarov, Rimma; Natarajan, Karthik; Weiskopf, Nicole; Wood, Frank; Elhadad, Noémie

    2014-01-01

    The volume of healthcare data is growing rapidly with the adoption of health information technology. We focus on automated ICD9 code assignment from discharge summary content and methods for evaluating such assignments. We study ICD9 diagnosis codes and discharge summaries from the publicly available Multiparameter Intelligent Monitoring in Intensive Care II (MIMIC II) repository. We experiment with two coding approaches: one that treats each ICD9 code independently of the others (flat classifier), and one that incorporates the hierarchical nature of ICD9 codes into its modeling (hierarchy-based classifier). We propose novel evaluation metrics, which reflect the distances between gold-standard and predicted codes and their locations in the ICD9 tree. Experimental setup, code for modeling, and evaluation scripts are made available to the research community. The hierarchy-based classifier outperforms the flat classifier with F-measures of 39.5% and 27.6%, respectively, when trained on 20,533 documents and tested on 2282 documents. While recall is improved at the expense of precision, our novel evaluation metrics show a more refined assessment: for instance, the hierarchy-based classifier identifies the correct sub-tree of gold-standard codes more often than the flat classifier. Error analysis reveals that gold-standard codes are not perfect, and as such the recall and precision are likely underestimated. Hierarchy-based classification yields better ICD9 coding than flat classification for MIMIC patients. Automated ICD9 coding is an example of a task for which data and tools can be shared and for which the research community can work together to build on shared models and advance the state of the art.
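
    The intuition behind such tree-aware metrics can be shown with a toy distance that treats ICD9 codes as paths in a prefix tree and counts the edges between them; this is a simplification for illustration, not the exact metrics proposed in the paper.

        def icd9_tree_distance(a, b):
            """Toy hierarchical distance: codes are paths in a prefix tree
            ('428' -> '428.0'); distance = edges from a up to the deepest
            common ancestor and down to b. Illustrative only."""
            a, b = a.replace(".", ""), b.replace(".", "")
            common = 0
            for ca, cb in zip(a, b):
                if ca != cb:
                    break
                common += 1
            return (len(a) - common) + (len(b) - common)

        print(icd9_tree_distance("428.0", "428.1"))  # siblings: 2
        print(icd9_tree_distance("428.0", "410.9"))  # distant sub-trees: 6

    Under such a metric, predicting a sibling of the gold-standard code is penalized far less than predicting a code in an unrelated chapter, which is the refinement the hierarchy-based evaluation captures.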

  19. Diagnosis code assignment: models and evaluation metrics

    PubMed Central

    Perotte, Adler; Pivovarov, Rimma; Natarajan, Karthik; Weiskopf, Nicole; Wood, Frank; Elhadad, Noémie

    2014-01-01

    Background and objective The volume of healthcare data is growing rapidly with the adoption of health information technology. We focus on automated ICD9 code assignment from discharge summary content and methods for evaluating such assignments. Methods We study ICD9 diagnosis codes and discharge summaries from the publicly available Multiparameter Intelligent Monitoring in Intensive Care II (MIMIC II) repository. We experiment with two coding approaches: one that treats each ICD9 code independently of the others (flat classifier), and one that incorporates the hierarchical nature of ICD9 codes into its modeling (hierarchy-based classifier). We propose novel evaluation metrics, which reflect the distances between gold-standard and predicted codes and their locations in the ICD9 tree. Experimental setup, code for modeling, and evaluation scripts are made available to the research community. Results The hierarchy-based classifier outperforms the flat classifier with F-measures of 39.5% and 27.6%, respectively, when trained on 20,533 documents and tested on 2282 documents. While recall is improved at the expense of precision, our novel evaluation metrics show a more refined assessment: for instance, the hierarchy-based classifier identifies the correct sub-tree of gold-standard codes more often than the flat classifier. Error analysis reveals that gold-standard codes are not perfect, and as such the recall and precision are likely underestimated. Conclusions Hierarchy-based classification yields better ICD9 coding than flat classification for MIMIC patients. Automated ICD9 coding is an example of a task for which data and tools can be shared and for which the research community can work together to build on shared models and advance the state of the art. PMID:24296907

  20. Implementation of a Two-Phase Boiling Model into the RELAP5/MOD2 Computer Code to Predict Void Distribution in Low-Pressure Subcooled Boiling Flows

    SciTech Connect

    Yeoh, G.H.; Tu, J.Y.

    2002-02-15

    This paper demonstrates that the empirical models developed for subcooled flow boiling in RELAP5/MOD2 at high pressures are not valid for applications at low pressures. Modifications carried out in RELAP5/MOD2 to include better correlations of the interphase heat transfer and mean bubble diameter, and the wall heat flux partition model are shown to yield substantial improvements in the predictions of the axial void fraction distribution. When compared against experimental data covering a wide range of heat fluxes and flow rates, predicted axial void fraction profiles follow closely the measured data. Predictions made by the default subcooled boiling model show, however, an unacceptable margin of error with the experimental data.

  1. Energy distribution property and energy coding of a structural neural network

    PubMed Central

    Wang, Ziyin; Wang, Rubin

    2014-01-01

    Studying neural coding through neural energy is a novel approach. In this paper, based on a previously proposed single-neuron model, the correlation between energy consumption and the parameters of cortical networks (number of neurons, coupling strength, and transform delay) under an oscillatory condition was investigated. We found that the energy distribution varies in an orderly manner as these parameters change and that it is closely related to the synchronous oscillation of the neural network. In addition, we compared this method with the traditional relative-coefficient method, showing that the energy method works as well as or better than the traditional one. It is novel that synchronous activity and neural network parameters can be studied by assessing energy distribution and consumption. Therefore, the conclusions of this paper will refine the framework of neural coding theory and contribute to our understanding of the coding mechanism of the cerebral cortex. It provides a strong theoretical foundation for a novel neural coding theory—energy coding. PMID:24600382

  2. Energy distribution property and energy coding of a structural neural network.

    PubMed

    Wang, Ziyin; Wang, Rubin

    2014-01-01

    Studying neural coding through neural energy is a novel approach. In this paper, based on a previously proposed single-neuron model, the correlation between energy consumption and the parameters of cortical networks (number of neurons, coupling strength, and transform delay) under an oscillatory condition was investigated. We found that the energy distribution varies in an orderly manner as these parameters change and that it is closely related to the synchronous oscillation of the neural network. In addition, we compared this method with the traditional relative-coefficient method, showing that the energy method works as well as or better than the traditional one. It is novel that synchronous activity and neural network parameters can be studied by assessing energy distribution and consumption. Therefore, the conclusions of this paper will refine the framework of neural coding theory and contribute to our understanding of the coding mechanism of the cerebral cortex. It provides a strong theoretical foundation for a novel neural coding theory: energy coding.

  3. Generation of Java code from Alvis model

    NASA Astrophysics Data System (ADS)

    Matyasik, Piotr; Szpyrka, Marcin; Wypych, Michał

    2015-12-01

    Alvis is a formal language that combines graphical modelling of interconnections between system entities (called agents) with a high-level programming language for describing the behaviour of each individual agent. An Alvis model can be formally verified with model-checking techniques applied to the model's LTS graph, which represents the model state space. This paper presents the transformation of an Alvis model into executable Java code. Thus, the approach provides a method for the automatic generation of a Java application from a formally verified Alvis model.

  4. Complex phylogenetic distribution of a non-canonical genetic code in green algae

    PubMed Central

    2010-01-01

    Background A non-canonical nuclear genetic code, in which TAG and TAA have been reassigned from stop codons to glutamine, has evolved independently in several eukaryotic lineages, including the ulvophycean green algal orders Dasycladales and Cladophorales. To study the phylogenetic distribution of the standard and non-canonical genetic codes, we generated sequence data of a representative set of ulvophycean green algae and used a robust green algal phylogeny to evaluate different evolutionary scenarios that may account for the origin of the non-canonical code. Results This study demonstrates that the Dasycladales and Cladophorales share this alternative genetic code with the related order Trentepohliales and the genus Blastophysa, but not with the Bryopsidales, which is sister to the Dasycladales. This complex phylogenetic distribution whereby all but one representative of a single natural lineage possesses an identical deviant genetic code is unique. Conclusions We compare different evolutionary scenarios for the complex phylogenetic distribution of this non-canonical genetic code. A single transition to the non-canonical code followed by a reversal to the canonical code in the Bryopsidales is highly improbable due to the profound genetic changes that coincide with codon reassignment. Multiple independent gains of the non-canonical code, as hypothesized for ciliates, are also unlikely because the same deviant code has evolved in all lineages. Instead we favor a stepwise acquisition model, congruent with the ambiguous intermediate model, whereby the non-canonical code observed in these green algal orders has a single origin. We suggest that the final steps from an ambiguous intermediate situation to a non-canonical code have been completed in the Trentepohliales, Dasycladales, Cladophorales and Blastophysa but not in the Bryopsidales. We hypothesize that in the latter lineage an initial stage characterized by translational ambiguity was not followed by final

  5. Distributed generation systems model

    SciTech Connect

    Barklund, C.R.

    1994-12-31

    A slide presentation is given on a distributed generation systems model developed at the Idaho National Engineering Laboratory, and its application to a situation within the Idaho Power Company's service territory. The objectives of the work were to develop a screening model for distributed generation alternatives, to develop a better understanding of distributed generation as a utility resource, and to further INEL's understanding of utility concerns in implementing technological change.

  6. Parallelization of Finite Element Analysis Codes Using Heterogeneous Distributed Computing

    NASA Technical Reports Server (NTRS)

    Ozguner, Fusun

    1996-01-01

    Performance gains in computer design are quickly consumed as users seek to analyze larger problems to a higher degree of accuracy. Innovative computational methods, such as parallel and distributed computing, seek to multiply the power of existing hardware technology to satisfy the computational demands of large applications. In the early stages of this project, experiments were performed using two large, coarse-grained applications, CSTEM and METCAN. These applications were parallelized on an Intel iPSC/860 hypercube. It was found that the overall speedup was very low, due to large, inherently sequential code segments present in the applications. The overall execution time T_par of the application is dependent on these sequential segments. If these segments make up a significant fraction of the overall code, the application will have a poor speedup measure, as quantified by Amdahl's law below.
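
    This bound is Amdahl's law: if a fraction f of the work is inherently sequential, the speedup on N processors is limited to

        S(N) = \frac{T_{\mathrm{seq}}}{T_{\mathrm{par}}} = \frac{1}{f + (1 - f)/N} \le \frac{1}{f},

    so, for example, f = 0.2 caps the speedup at 5 regardless of the number of hypercube nodes.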

  7. Distributed magnetic field positioning system using code division multiple access

    NASA Technical Reports Server (NTRS)

    Prigge, Eric A. (Inventor)

    2003-01-01

    An apparatus and methods for a magnetic field positioning system use a fundamentally different, and advantageous, signal structure and multiple access method, known as Code Division Multiple Access (CDMA). This signal architecture, when combined with processing methods, leads to advantages over the existing technologies, especially when applied to a system with a large number of magnetic field generators (beacons). Beacons at known positions generate coded magnetic fields, and a magnetic sensor measures a sum field and decomposes it into component fields to determine the sensor position and orientation. The apparatus and methods can have a large "building-sized" coverage area. The system allows for numerous beacons to be distributed throughout an area at a number of different locations. A method to estimate position and attitude, with no prior knowledge, uses dipole fields produced by these beacons in different locations.
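
    A minimal numeric sketch of the despreading step described above: each beacon modulates its field with a distinct pseudo-noise code, the sensor measures the sum, and correlating against each known code recovers the per-beacon field strengths that feed the position solver. Code length, amplitudes, and noise level are illustrative.

        import numpy as np

        rng = np.random.default_rng(2)
        n_beacons, chips = 3, 256
        codes = rng.choice([-1.0, 1.0], size=(n_beacons, chips))  # PN codes

        # Position-dependent field strength of each beacon at the sensor.
        amplitudes = np.array([0.7, -1.2, 0.4])
        measured = amplitudes @ codes + 0.05 * rng.normal(size=chips)  # sum

        recovered = measured @ codes.T / chips  # correlate with each code
        print(recovered)                        # ~ [0.7, -1.2, 0.4], up to
                                                # noise and code cross-talk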

  8. 24 CFR 200.926c - Model code provisions for use in partially accepted code jurisdictions.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 24 Housing and Urban Development 2 2014-04-01 2014-04-01 false Model code provisions for use in partially accepted code jurisdictions. 200.926c Section 200.926c Housing and Urban Development Regulations... Minimum Property Standards § 200.926c Model code provisions for use in partially accepted code...

  9. 24 CFR 200.926c - Model code provisions for use in partially accepted code jurisdictions.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 24 Housing and Urban Development 2 2013-04-01 2013-04-01 false Model code provisions for use in partially accepted code jurisdictions. 200.926c Section 200.926c Housing and Urban Development Regulations... Minimum Property Standards § 200.926c Model code provisions for use in partially accepted code...

  10. Practical distributed video coding in packet lossy channels

    NASA Astrophysics Data System (ADS)

    Qing, Linbo; Masala, Enrico; He, Xiaohai

    2013-07-01

    Improving the error resilience of video communications over packet lossy channels is an important and difficult task. We present a framework to optimize the quality of video communications based on distributed video coding (DVC) in practical packet lossy network scenarios. The peculiar characteristics of DVC indeed require a number of adaptations to take full advantage of its intrinsic robustness when dealing with data losses typical of real packet networks. This work proposes a new packetization scheme, an investigation of the best error-correcting codes to use in a noisy environment, a practical rate-allocation mechanism that minimizes decoder feedback, and an improved side-information generation and reconstruction function. Performance comparisons are presented with respect to a conventional packet video communication using H.264/advanced video coding (AVC). Although the rate-distortion performance of H.264/AVC in the absence of losses is currently better than that of state-of-the-art DVC schemes, under practical packet-loss conditions the proposed techniques provide better performance than an H.264/AVC-based system, especially at high packet-loss rates. Thus the error resilience of the proposed DVC scheme is superior to that of H.264/AVC, especially for transmission over packet lossy networks.

  11. Distributed coding/decoding complexity in video sensor networks.

    PubMed

    Cordeiro, Paulo J; Assunção, Pedro

    2012-01-01

    Video Sensor Networks (VSNs) are recent communication infrastructures used to capture and transmit dense visual information from an application context. In such large scale environments which include video coding, transmission and display/storage, there are several open problems to overcome in practical implementations. This paper addresses the most relevant challenges posed by VSNs, namely stringent bandwidth usage and processing time/power constraints. In particular, the paper proposes a novel VSN architecture where large sets of visual sensors with embedded processors are used for compression and transmission of coded streams to gateways, which in turn transrate the incoming streams and adapt them to the variable complexity requirements of both the sensor encoders and end-user decoder terminals. Such gateways provide real-time transcoding functionalities for bandwidth adaptation and coding/decoding complexity distribution by transferring the most complex video encoding/decoding tasks to the transcoding gateway at the expense of a limited increase in bit rate. Then, a method to reduce the decoding complexity, suitable for system-on-chip implementation, is proposed to operate at the transcoding gateway whenever decoders with constrained resources are targeted. The results show that the proposed method achieves good performance and its inclusion into the VSN infrastructure provides an additional level of complexity control functionality.

  12. Distributed Coding/Decoding Complexity in Video Sensor Networks

    PubMed Central

    Cordeiro, Paulo J.; Assunção, Pedro

    2012-01-01

    Video Sensor Networks (VSNs) are recent communication infrastructures used to capture and transmit dense visual information from an application context. In such large scale environments which include video coding, transmission and display/storage, there are several open problems to overcome in practical implementations. This paper addresses the most relevant challenges posed by VSNs, namely stringent bandwidth usage and processing time/power constraints. In particular, the paper proposes a novel VSN architecture where large sets of visual sensors with embedded processors are used for compression and transmission of coded streams to gateways, which in turn transrate the incoming streams and adapt them to the variable complexity requirements of both the sensor encoders and end-user decoder terminals. Such gateways provide real-time transcoding functionalities for bandwidth adaptation and coding/decoding complexity distribution by transferring the most complex video encoding/decoding tasks to the transcoding gateway at the expense of a limited increase in bit rate. Then, a method to reduce the decoding complexity, suitable for system-on-chip implementation, is proposed to operate at the transcoding gateway whenever decoders with constrained resources are targeted. The results show that the proposed method achieves good performance and its inclusion into the VSN infrastructure provides an additional level of complexity control functionality. PMID:22736972

  13. Weight distributions for turbo codes using random and nonrandom permutations

    NASA Technical Reports Server (NTRS)

    Dolinar, S.; Divsalar, D.

    1995-01-01

    This article takes a preliminary look at the weight distributions achievable for turbo codes using random, nonrandom, and semirandom permutations. Due to the recursiveness of the encoders, it is important to distinguish between self-terminating and non-self-terminating input sequences. The non-self-terminating sequences have little effect on decoder performance, because they accumulate high encoded weight until they are artificially terminated at the end of the block. From probabilistic arguments based on selecting the permutations randomly, it is concluded that the self-terminating weight-2 data sequences are the most important consideration in the design of constituent codes; higher-weight self-terminating sequences have successively decreasing importance. Also, increasing the number of codes and, correspondingly, the number of permutations makes it more and more likely that the bad input sequences will be broken up by one or more of the permuters. It is possible to design nonrandom permutations that ensure that the minimum distance due to weight-2 input sequences grows roughly as √(2N), where N is the block length. However, these nonrandom permutations amplify the bad effects of higher-weight inputs, and as a result they are inferior in performance to randomly selected permutations. But there are 'semirandom' permutations that perform nearly as well as the designed nonrandom permutations with respect to weight-2 input sequences and are not as susceptible to being foiled by higher-weight inputs.

  14. Non-extensive trends in the size distribution of coding and non-coding DNA sequences in the human genome

    NASA Astrophysics Data System (ADS)

    Oikonomou, Th.; Provata, A.

    2006-03-01

    We study the primary DNA structure of four of the most completely sequenced human chromosomes (including chromosome 19, which is the most dense in coding), using non-extensive statistics. We show that the exponents governing the spatial decay of the coding size distributions vary between 5.2 ≤ r ≤ 5.7 for short scales and 1.45 ≤ q ≤ 1.50 for large scales. On the contrary, the exponents governing the spatial decay of the non-coding size distributions in these four chromosomes take the values 2.4 ≤ r ≤ 3.2 for short scales and 1.50 ≤ q ≤ 1.72 for large scales. These results, in particular the values of the tail exponent q, indicate the existence of correlations in the coding and non-coding size distributions, with a tendency for higher correlations in the non-coding DNA.

  15. Distributed fuzzy system modeling

    SciTech Connect

    Pedrycz, W.; Chi Fung Lam, P.; Rocha, A.F.

    1995-05-01

    The paper introduces and studies the idea of distributed modeling, treating it as a new paradigm of fuzzy system modeling and analysis. This form of modeling is oriented towards developing individual (local) fuzzy models for specific modeling landmarks (expressed as fuzzy sets) and determining the essential logical relationships between these local models. The models themselves are implemented in the form of logic processors, regarded as specialized fuzzy neural networks. The interaction between the processors is developed in either an inhibitory or an excitatory way. More descriptively, the distributed model can be viewed as a collection of fuzzy finite-state machines with their individual local first- or higher-order memories. It is also clarified how the concept of distributed modeling narrows the gap between purely numerical (quantitative) models and the qualitative ones originating within the realm of Artificial Intelligence. The overall architecture of distributed modeling is discussed along with detailed learning schemes. The results of extensive simulation experiments are provided as well. 17 refs.

  16. Dual Cauchy rate-distortion model for video coding

    NASA Astrophysics Data System (ADS)

    Zeng, Huanqiang; Chen, Jing; Cai, Canhui

    2014-07-01

    A dual Cauchy rate-distortion model is proposed for video coding. In our approach, the coefficient distribution of the integer transform is first studied. Then, based on the observation that the rate-distortion model of the luminance and that of the chrominance can be well expressed by separate Cauchy functions, a dual Cauchy rate-distortion model is presented. Furthermore, the simplified rate-distortion formulas are deduced to reduce the computational complexity of the proposed model without losing the accuracy. Experimental results have shown that the proposed model is better able to approximate the actual rate-distortion curve for various sequences with different motion activities.
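
    For context, the Cauchy assumption means the transform coefficients are modelled by a zero-mean Cauchy density, from which rate and distortion are commonly approximated as power functions of the quantization step Q; in the usual Cauchy-based form (the paper's simplified formulas may differ in detail),

        p(x) = \frac{1}{\pi}\,\frac{\mu}{\mu^2 + x^2}, \qquad R(Q) \approx a\,Q^{-\alpha}, \qquad D(Q) \approx b\,Q^{\beta},

    with separate parameter sets fitted for the luminance and chrominance components, hence the "dual" model.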

  17. Bounding Species Distribution Models

    NASA Technical Reports Server (NTRS)

    Stohlgren, Thomas J.; Jarnevich, Cahterine S.; Morisette, Jeffrey T.; Esaias, Wayne E.

    2011-01-01

    Species distribution models are increasing in popularity for mapping suitable habitat for species of management concern. Many investigators now recognize that extrapolations of these models with geographic information systems (GIS) might be sensitive to the environmental bounds of the data used in their development, yet there is no recommended best practice for "clamping" model extrapolations. We relied on two commonly used modeling approaches: classification and regression tree (CART) and maximum entropy (Maxent) models, and we tested a simple alteration of the model extrapolations, bounding extrapolations to the maximum and minimum values of primary environmental predictors, to provide a more realistic map of suitable habitat of hybridized Africanized honey bees in the southwestern United States. Findings suggest that multiple models of bounding, and the most conservative bounding of species distribution models, like those presented here, should probably replace the unbounded or loosely bounded techniques currently used [Current Zoology 57 (5): 642-647, 2011].
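
    The bounding alteration itself is simple: each environmental predictor in the extrapolation grid is clipped to the [min, max] range seen in the training data before the fitted model is applied. A minimal sketch with hypothetical predictor arrays (the names model, train_env, and grid_env are illustrative):

        import numpy as np

        def bound_predictors(train_env, grid_env):
            """Clamp extrapolation-grid predictors (n_cells x n_vars) to the
            per-variable min/max of the training data (n_obs x n_vars), so the
            fitted SDM is never evaluated outside the environmental bounds of
            the data used to build it."""
            lo, hi = train_env.min(axis=0), train_env.max(axis=0)
            return np.clip(grid_env, lo, hi)

        # model.predict(bound_predictors(train_env, grid_env)) would then
        # yield the bounded habitat-suitability map.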

  18. Bounding species distribution models

    USGS Publications Warehouse

    Stohlgren, T.J.; Jarnevich, C.S.; Esaias, W.E.; Morisette, J.T.

    2011-01-01

    Species distribution models are increasing in popularity for mapping suitable habitat for species of management concern. Many investigators now recognize that extrapolations of these models with geographic information systems (GIS) might be sensitive to the environmental bounds of the data used in their development, yet there is no recommended best practice for "clamping" model extrapolations. We relied on two commonly used modeling approaches: classification and regression tree (CART) and maximum entropy (Maxent) models, and we tested a simple alteration of the model extrapolations, bounding extrapolations to the maximum and minimum values of primary environmental predictors, to provide a more realistic map of suitable habitat of hybridized Africanized honey bees in the southwestern United States. Findings suggest that multiple models of bounding, and the most conservative bounding of species distribution models, like those presented here, should probably replace the unbounded or loosely bounded techniques currently used. ?? 2011 Current Zoology.

  19. Bounding Species Distribution Models

    NASA Technical Reports Server (NTRS)

    Stohlgren, Thomas J.; Jarnevich, Cahterine S.; Morisette, Jeffrey T.; Esaias, Wayne E.

    2011-01-01

    Species distribution models are increasing in popularity for mapping suitable habitat for species of management concern. Many investigators now recognize that extrapolations of these models with geographic information systems (GIS) might be sensitive to the environmental bounds of the data used in their development, yet there is no recommended best practice for "clamping" model extrapolations. We relied on two commonly used modeling approaches: classification and regression tree (CART) and maximum entropy (Maxent) models, and we tested a simple alteration of the model extrapolations, bounding extrapolations to the maximum and minimum values of primary environmental predictors, to provide a more realistic map of suitable habitat of hybridized Africanized honey bees in the southwestern United States. Findings suggest that multiple models of bounding, and the most conservative bounding of species distribution models, like those presented here, should probably replace the unbounded or loosely bounded techniques currently used [Current Zoology 57 (5): 642-647, 2011].

  20. Development of MCNPX-ESUT computer code for simulation of neutron/gamma pulse height distribution

    NASA Astrophysics Data System (ADS)

    Abolfazl Hosseini, Seyed; Vosoughi, Naser; Zangian, Mehdi

    2015-05-01

    In this paper, the development of the MCNPX-ESUT (MCNPX-Energy Engineering of Sharif University of Technology) computer code for the simulation of neutron/gamma pulse height distributions is reported. Since liquid organic scintillators like NE-213 are well suited and routinely used for spectrometry in mixed neutron/gamma fields, this type of detector was selected for simulation in the present study. The proposed simulation algorithm includes four main steps. The first step is the modeling of neutron/gamma particle transport and their interactions with the materials in the environment and the detector volume. In the second step, the number of scintillation photons due to charged particles such as electrons, alphas, protons, and carbon nuclei in the scintillator material is calculated. In the third step, the transport of scintillation photons in the scintillator and lightguide is simulated. Finally, the resolution corresponding to the experiment is applied in the last step of the simulation. Unlike similar computer codes such as SCINFUL, NRESP7, and PHRESP, the developed computer code is applicable to both neutron and gamma sources; hence, the discrimination of neutrons and gammas in mixed fields may be performed using the MCNPX-ESUT computer code. The main feature of the MCNPX-ESUT computer code is that the neutron/gamma pulse height simulation may be performed without any post-processing. In the present study, the pulse height distributions due to a monoenergetic neutron/gamma source in an NE-213 detector are simulated using the MCNPX-ESUT computer code. The simulated neutron pulse height distributions are validated by comparison with experimental data (Gohil et al., Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 664 (2012) 304-309) and with the results obtained from similar computer codes like SCINFUL, NRESP7, and Geant4. The simulated gamma pulse height distribution for a 137Cs

  1. 28 CFR 36.607 - Guidance concerning model codes.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 28 Judicial Administration 1 2012-07-01 2012-07-01 false Guidance concerning model codes. 36.607... Codes § 36.607 Guidance concerning model codes. Upon application by an authorized representative of a private entity responsible for developing a model code, the Assistant Attorney General may review the...

  2. 28 CFR 36.607 - Guidance concerning model codes.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 28 Judicial Administration 1 2013-07-01 2013-07-01 false Guidance concerning model codes. 36.607... Codes § 36.607 Guidance concerning model codes. Upon application by an authorized representative of a private entity responsible for developing a model code, the Assistant Attorney General may review the...

  3. 28 CFR 36.607 - Guidance concerning model codes.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 28 Judicial Administration 1 2014-07-01 2014-07-01 false Guidance concerning model codes. 36.607... Codes § 36.607 Guidance concerning model codes. Upon application by an authorized representative of a private entity responsible for developing a model code, the Assistant Attorney General may review the...

  4. 28 CFR 36.607 - Guidance concerning model codes.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 28 Judicial Administration 1 2011-07-01 2011-07-01 false Guidance concerning model codes. 36.607... Codes § 36.607 Guidance concerning model codes. Upon application by an authorized representative of a private entity responsible for developing a model code, the Assistant Attorney General may review the...

  5. 28 CFR 36.608 - Guidance concerning model codes.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 28 Judicial Administration 1 2010-07-01 2010-07-01 false Guidance concerning model codes. 36.608... Codes § 36.608 Guidance concerning model codes. Upon application by an authorized representative of a private entity responsible for developing a model code, the Assistant Attorney General may review the...

  6. Rapid installation of numerical models in multiple parent codes

    SciTech Connect

    Brannon, R.M.; Wong, M.K.

    1996-10-01

    A set of "model interface guidelines", called MIG, is offered as a means to more rapidly install numerical models (such as stress-strain laws) into any parent code (hydrocode, finite element code, etc.) without having to modify the model subroutines. The model developer (who creates the model package in compliance with the guidelines) specifies the model's input and storage requirements in a standardized way. For portability, database management (such as saving user inputs and field variables) is handled by the parent code. To date, MIG has proved viable in beta installations of several diverse models in vectorized and parallel codes written in different computer languages. A MIG-compliant model can be installed in different codes without modifying the model's subroutines. By maintaining one model for many codes, MIG facilitates code-to-code comparisons and reduces duplication of effort, potentially reducing the cost of installing and sharing models.

  7. Distributed reservation-based code division multiple access

    NASA Astrophysics Data System (ADS)

    Wieselthier, J. E.; Ephremides, A.

    1984-11-01

    The use of spread spectrum signaling, motivated primarily by its antijamming capabilities in military applications, leads naturally to the use of Code Division Multiple Access (CDMA) techniques that permit the successful simultaneous transmission by a number of users over a wideband channel. In this paper we address some of the major issues that are associated with the design of multiple access protocols for spread spectrum networks. We then propose, analyze, and evaluate a distributed reservation-based multiple access protocol that does in fact exploit CDMA properties. Especially significant is the fact that no acknowledgment or feedback information from the destination is required (thus facilitating communication with a radio-silent mode), nor is any form of coordination among the users necessary.

  8. Visualization of scattering angular distributions with the SAP code

    NASA Astrophysics Data System (ADS)

    Fernandez, J. E.; Scot, V.; Basile, S.

    2010-07-01

    SAP (Scattering Angular distribution Plot) is a graphical tool developed at the University of Bologna to compute and plot Rayleigh and Compton differential cross-sections (atomic and electronic), form factors (FFs), and incoherent scattering functions (SFs) for single elements, compounds, and mixtures of compounds, for monochromatic excitation in the range of 1-1000 keV. The computation of FFs and SFs may be performed in two ways: (a) by interpolating Hubbell's data from the EPDL97 library and (b) by using semi-empirical formulas as described in the text. Two kinds of normalization allow the plots of different magnitudes to be compared by imposing a common scale. The characteristics of the SAP code are illustrated with an example.

  9. Modelling adipocytes size distribution.

    PubMed

    Soula, H A; Julienne, H; Soulage, C O; Géloën, A

    2013-09-07

    Adipocytes are cells whose task is to store excess energy as lipid droplets in their cytoplasm. Adipocytes can adapt their size according to the amount of lipid to be stored. Adipocyte size variation can reach one order of magnitude within the same organism, which is unique among cells. A striking feature of adipocyte size distributions is the lack of a characteristic size, since typical size distributions are bimodal. Since energy can be stored and retrieved, and adipocytes are responsible for these lipid fluxes, we propose a simple model of size-dependent lipid fluxes that is able to predict typical adipocyte size distributions. Copyright © 2013 Elsevier Ltd. All rights reserved.

  10. Model representation in the PANCOR wall interference assessment code

    NASA Technical Reports Server (NTRS)

    Al-Saadi, Jassim A.

    1991-01-01

    An investigation into the aircraft model description requirements of a wall interference assessment and correction code known as PANCOR was conducted. The accuracy necessary in specifying various elements of the model description was defined. It was found that the specified lift coefficient is the most important model parameter in the wind tunnel simulation. An accurate specification of the model volume was also found to be important. Also developed was a partially automated technique for generating the wing lift distributions required as input to PANCOR. An existing three-dimensional transonic small-disturbance code was modified to provide the necessary information. A group of auxiliary computer programs and procedures was developed to help generate the required input for PANCOR.

  11. A study of oligonucleotide occurrence distributions in DNA coding segments.

    PubMed

    Castrignanò, T; Colosimo, A; Morante, S; Parisi, V; Rossi, G C

    1997-02-21

    In this paper we present a general strategy designed to study the occurrence frequency distributions of oligonucleotides in DNA coding segments and to deal with the problem of detecting possible patterns of genomic compositional inhomogeneities and disuniformities. Identifying specific tendencies or peculiar deviations in the distributions of the effective occurrence frequencies of oligonucleotides, with respect to what can be expected a priori, is of the greatest importance in biology. Differences between expected and actual distributions may in fact suggest or confirm the existence of specific biological mechanisms related to them. Similarly, a marked deviation in the occurrence frequency of an oligonucleotide may suggest that it belongs to the class of so-called "DNA signal (target) sequences". The approach we have elaborated is innovative in various aspects. Firstly, the analysis of the genomic data is carried out in the light of the observation that the distribution of the four nucleotides along the coding regions of the genome is biased by the existence of a well-defined "reading frame". Secondly, the "experimental" numbers found by counting the occurrences of the various oligonucleotide sequences are appropriately corrected for the many kinds of mistakes and redundancies present in the available genetic databases. A methodologically significant further improvement of our approach over existing search strategies is that, in order to decide whether or not the (corrected) "experimental" value of the occurrence frequency of a given oligonucleotide is within statistical expectations, a measure of the strength of the selective pressure having acted on it in the course of evolution is assigned to the sequence, in a way that takes into account both the value of the "experimental" occurrence frequency of the sequence and the magnitude of the probability that this number might be the result of statistical fluctuations. If the

  12. 24 CFR 200.926c - Model code provisions for use in partially accepted code jurisdictions.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Minimum Property Standards § 200.926c Model code provisions for use in partially accepted code... Chapter 3. (e) Materials standards Chapter 26. (f) Construction components Part III. (g) Glass Chapter 2... dwellings (NFPA 70A-1990)....

  13. Tokamak Simulation Code modeling of NSTX

    SciTech Connect

    S.C. Jardin; S. Kaye; J. Menard; C. Kessel; A.H. Glasser

    2000-07-20

    The Tokamak Simulation Code [TSC] is widely used for the design of new axisymmetric toroidal experiments. In particular, TSC was used extensively in the design of the National Spherical Torus eXperiment [NSTX]. The authors have now benchmarked TSC with initial NSTX results and find excellent agreement for plasma and vessel currents and magnetic flux loops when the experimental coil currents are used in the simulations. TSC has also been coupled with a ballooning stability code and with DCON to provide stability predictions for NSTX operation. TSC has also been used to model initial CHI experiments where a large poloidal voltage is applied to the NSTX vacuum vessel, causing a force-free current to appear in the plasma. This is a phenomenon that is similar to the plasma halo current that sometimes develops during a plasma disruption.

  14. Code Differentiation for Hydrodynamic Model Optimization

    SciTech Connect

    Henninger, R.J.; Maudlin, P.J.

    1999-06-27

    Use of a hydrodynamics code for experimental data fitting purposes (an optimization problem) requires information about how a computed result changes when the model parameters change. These so-called sensitivities provide the gradient that determines the search direction for modifying the parameters to find an optimal result. Here, the authors apply code-based automatic differentiation (AD) techniques in the forward and adjoint modes to two problems with 12 parameters to obtain these gradients, and they compare the computational efficiency and accuracy of the various methods. They fit the pressure trace from a one-dimensional flyer-plate experiment and examine the accuracy for a two-dimensional jet-formation problem. For the flyer-plate experiment, the adjoint mode requires similar or less computer time than the forward methods. Additional parameters will not change the adjoint-mode run time appreciably, which is a distinct advantage for this method. Obtaining "accurate" sensitivities for the jet problem parameters remains problematic.

  15. Population Coding of Visual Space: Modeling

    PubMed Central

    Lehky, Sidney R.; Sereno, Anne B.

    2011-01-01

    We examine how the representation of space is affected by receptive field (RF) characteristics of the encoding population. Spatial responses were defined by overlapping Gaussian RFs. These responses were analyzed using multidimensional scaling to extract the representation of global space implicit in population activity. Spatial representations were based purely on firing rates, which were not labeled with RF characteristics (tuning curve peak location, for example), differentiating this approach from many other population coding models. Because responses were unlabeled, this model represents space using intrinsic coding, extracting relative positions amongst stimuli, rather than extrinsic coding where known RF characteristics provide a reference frame for extracting absolute positions. Two parameters were particularly important: RF diameter and RF dispersion, where dispersion indicates how broadly RF centers are spread out from the fovea. For large RFs, the model was able to form metrically accurate representations of physical space on low-dimensional manifolds embedded within the high-dimensional neural population response space, suggesting that in some cases the neural representation of space may be dimensionally isomorphic with 3D physical space. Smaller RF sizes degraded and distorted the spatial representation, with the smallest RF sizes (present in early visual areas) being unable to recover even a topologically consistent rendition of space on low-dimensional manifolds. Finally, although positional invariance of stimulus responses has long been associated with large RFs in object recognition models, we found RF dispersion rather than RF diameter to be the critical parameter. In fact, at a population level, the modeling suggests that higher ventral stream areas with highly restricted RF dispersion would be unable to achieve positionally-invariant representations beyond this narrow region around fixation. PMID:21344012
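
    A compact sketch of the modelling pipeline described above: unlabeled Gaussian-RF population responses to 2-D stimulus positions are passed to classical multidimensional scaling, which recovers the relative (intrinsic) spatial layout. RF diameter and dispersion are the knobs discussed in the abstract; all sizes and values here are illustrative.

        import numpy as np

        rng = np.random.default_rng(3)
        stims = rng.uniform(-1, 1, size=(40, 2))     # 2-D stimulus positions
        centers = rng.uniform(-1, 1, size=(200, 2))  # RF centers (dispersion)
        sigma = 0.5                                  # RF diameter parameter

        # Unlabeled population responses: Gaussian tuning, firing rates only.
        d2 = ((stims[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        rates = np.exp(-d2 / (2.0 * sigma ** 2))

        # Classical MDS on pairwise response distances (intrinsic coding: no
        # RF labels, only relative dissimilarities between stimuli are used).
        D2 = ((rates[:, None, :] - rates[None, :, :]) ** 2).sum(axis=-1)
        n = len(stims)
        J = np.eye(n) - 1.0 / n                      # double-centering matrix
        B = -0.5 * J @ D2 @ J
        w, V = np.linalg.eigh(B)
        coords = V[:, -2:] * np.sqrt(np.maximum(w[-2:], 0.0))
        # 'coords' recovers the stimulus geometry up to rotation/reflection/
        # scale; shrinking sigma or the RF dispersion degrades the recovery.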

  16. Creating Models for the ORIGEN Codes

    NASA Astrophysics Data System (ADS)

    Louden, G. D.; Mathews, K. A.

    1997-10-01

    Our research focused on the development of a methodology for creating reactor-specific cross-section libraries for the nuclear reactor and nuclear fuel cycle analysis codes available from the Radiation Safety Information Computational Center. The creation of problem-specific models allows more detailed analysis than is possible using the generic models provided with ORIGEN2 and ORIGEN-S. A model of the Ohio State University Research Reactor was created using the Coupled 1-D Shielding Analysis (SAS2H) module of the Modular Code System for Performing Standardized Computer Analysis for Licensing Evaluation (SCALE4.3). Six different reactor core models were compared to identify the effect of changing the SAS2H Larger Unit Cell on the predicted isotopic composition of spent fuel. Seven different power histories were then applied to a Core-Average model to determine the ability of ORIGEN-S to distinguish spent fuel produced under varying operating conditions. Several actinide and fission product concentrations were identified that were sensitive to the power history; however, the majority of the isotope concentrations were not dependent on operating history.

  17. Reflectance Prediction Modelling for Residual-Based Hyperspectral Image Coding

    PubMed Central

    Xiao, Rui; Gao, Junbin; Bossomaier, Terry

    2016-01-01

    A Hyperspectral (HS) image provides observational powers beyond human vision capability but represents more than 100 times the data of a traditional image. To transmit and store the huge volume of an HS image, we argue that a fundamental shift is required from the existing “original pixel intensity”-based coding approaches using traditional image coders (e.g., JPEG2000) to “residual”-based approaches using a video coder, for better compression performance. A modified video coder is required to exploit spatial-spectral redundancy using pixel-level reflectance modelling, because HS images differ from traditional videos in their spectral characteristics and in the shape domain of their panchromatic imagery. In this paper, a novel coding framework using Reflectance Prediction Modelling (RPM) within the latest video coding standard, High Efficiency Video Coding (HEVC), is proposed for HS images. An HS image presents a wealth of data in which every pixel is considered a vector across the spectral bands. By quantitative comparison and analysis of the pixel vector distribution along spectral bands, we conclude that modelling can predict the distribution and correlation of the pixel vectors across bands. To exploit the distribution of the known pixel vectors, we estimate a predicted current spectral band from the previous bands using Gaussian mixture-based modelling. The predicted band is used as an additional reference band, together with the immediately previous band, when we apply HEVC. Every spectral band of an HS image is treated like an individual frame of a video. In this paper, we compare the proposed method with mainstream encoders. The experimental results are validated on three types of HS datasets with different wavelength ranges. The proposed method outperforms the existing mainstream HS encoders in terms of the rate-distortion performance of HS image compression. PMID:27695102

  18. Side information and noise learning for distributed video coding using optical flow and clustering.

    PubMed

    Luong, Huynh Van; Rakêt, Lars Lau; Huang, Xin; Forchhammer, Søren

    2012-12-01

    Distributed video coding (DVC) is a coding paradigm that exploits the source statistics at the decoder side to reduce the complexity at the encoder. The coding efficiency of DVC critically depends on the quality of side information generation and the accuracy of noise modeling. This paper considers transform domain Wyner-Ziv (TDWZ) coding and proposes using optical flow to improve side information generation and clustering to improve the noise modeling. The optical flow technique is exploited at the decoder side to compensate for weaknesses of block-based methods when using motion compensation to generate side information frames. Clustering is introduced to capture cross-band correlation and increase local adaptivity in the noise modeling. This paper also proposes techniques to learn from previously decoded WZ frames. The different techniques are combined by calculating a number of candidate soft side information streams for low-density parity-check accumulate decoding. The proposed decoder-side techniques for side information and noise learning (SING) are integrated in a TDWZ scheme. On test sequences, the proposed SING codec robustly improves the coding efficiency of TDWZ DVC. For WZ frames using a GOP size of 2, up to 4-dB improvement or an average (Bjøntegaard) bit-rate savings of 37% is achieved compared with DISCOVER.
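
    The decoder-side flow step can be pictured with the following sketch, which uses OpenCV's Farnebäck dense flow and warps halfway between two key frames to approximate the intermediate Wyner-Ziv frame. The SING paper's flow method and interpolation details differ, so this is only a conceptual stand-in.

      # Conceptual side-information generation via dense optical flow.
      import cv2
      import numpy as np

      def side_information(key_prev, key_next):
          # Dense flow from the previous to the next key frame (grayscale uint8).
          flow = cv2.calcOpticalFlowFarneback(key_prev, key_next, None,
                                              0.5, 3, 15, 3, 5, 1.2, 0)
          h, w = key_prev.shape
          grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
          # Warp halfway along the flow field to estimate the intermediate frame
          # (a common approximation; true interpolation anchors flow at the mid frame).
          map_x = (grid_x + 0.5 * flow[..., 0]).astype(np.float32)
          map_y = (grid_y + 0.5 * flow[..., 1]).astype(np.float32)
          return cv2.remap(key_next, map_x, map_y, cv2.INTER_LINEAR)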

  19. Adaptive Distributed Video Coding with Correlation Estimation using Expectation Propagation.

    PubMed

    Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel

    2012-10-15

    Distributed video coding (DVC) is rapidly increasing in popularity by way of shifting the complexity from the encoder to the decoder while, at least in theory, incurring no loss of compression performance. In contrast with conventional video codecs, the inter-frame correlation in DVC is exploited at the decoder, based on the received syndromes of the Wyner-Ziv (WZ) frame and a side information (SI) frame generated from other frames available only at the decoder. However, the ultimate decoding performance of DVC rests on the assumption that perfect knowledge of the correlation statistics between the WZ and SI frames is available at the decoder. Therefore, the ability to obtain a good statistical correlation estimate is becoming increasingly important in practical DVC implementations. Generally, the existing correlation estimation methods in DVC can be classified into two main types: pre-estimation, where estimation starts before decoding, and on-the-fly (OTF) estimation, where estimation can be refined iteratively during decoding. As potential changes between frames might be unpredictable or dynamical, OTF estimation methods usually outperform pre-estimation techniques at the cost of increased decoding complexity (e.g., sampling methods). In this paper, we propose a low-complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed OTF as it is carried out jointly with decoding of the factor graph-based DVC code. Among different approximate inference methods, EP generally offers a better tradeoff between accuracy and complexity. Experimental results show that our proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other cases without correlation tracking, and achieves comparable decoding performance with significantly lower complexity compared with sampling methods.

  20. Adaptive Distributed Video Coding with Correlation Estimation using Expectation Propagation

    PubMed Central

    Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel

    2013-01-01

    Distributed video coding (DVC) is rapidly increasing in popularity by way of shifting the complexity from the encoder to the decoder while, at least in theory, incurring no loss of compression performance. In contrast with conventional video codecs, the inter-frame correlation in DVC is exploited at the decoder, based on the received syndromes of the Wyner-Ziv (WZ) frame and a side information (SI) frame generated from other frames available only at the decoder. However, the ultimate decoding performance of DVC rests on the assumption that perfect knowledge of the correlation statistics between the WZ and SI frames is available at the decoder. Therefore, the ability to obtain a good statistical correlation estimate is becoming increasingly important in practical DVC implementations. Generally, the existing correlation estimation methods in DVC can be classified into two main types: pre-estimation, where estimation starts before decoding, and on-the-fly (OTF) estimation, where estimation can be refined iteratively during decoding. As potential changes between frames might be unpredictable or dynamical, OTF estimation methods usually outperform pre-estimation techniques at the cost of increased decoding complexity (e.g., sampling methods). In this paper, we propose a low-complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed OTF as it is carried out jointly with decoding of the factor graph-based DVC code. Among different approximate inference methods, EP generally offers a better tradeoff between accuracy and complexity. Experimental results show that our proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other cases without correlation tracking, and achieves comparable decoding performance with significantly lower complexity compared with sampling methods. PMID:23750314

  1. A Mutation Model from First Principles of the Genetic Code.

    PubMed

    Thorvaldsen, Steinar

    2016-01-01

    The paper presents a neutral Codons Probability Mutations (CPM) model of molecular evolution and genetic decay of an organism. The CPM model uses a Markov process with a 20-dimensional state space of probability distributions over amino acids. The transition matrix of the Markov process includes the mutation rate and those single point mutations compatible with the genetic code. This is an alternative to the standard Point Accepted Mutation (PAM) and BLOcks of amino acid SUbstitution Matrix (BLOSUM). Genetic decay is quantified as a similarity between the amino acid distribution of proteins from a (group of) species on one hand, and the equilibrium distribution of the Markov chain on the other. Amino acid data for the eukaryote, bacterium, and archaea families are used to illustrate how both the CPM and PAM models predict their genetic decay towards the equilibrium value of 1. A family of bacteria is studied in more detail. It is found that warm environment organisms on average have a higher degree of genetic decay compared to those species that live in cold environments. The paper addresses a new codon-based approach to quantify genetic decay due to single point mutations compatible with the genetic code. The present work may be seen as a first approach to use codon-based Markov models to study how genetic entropy increases with time in an effectively neutral biological regime. Various extensions of the model are also discussed.
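
    The equilibrium the abstract refers to is the stationary distribution of the mutation Markov chain. A minimal numerical sketch follows, with a hypothetical 20x20 transition matrix and cosine similarity standing in for the paper's exact similarity measure.

      import numpy as np

      def stationary_distribution(T):
          # Left eigenvector of the transition matrix T for eigenvalue 1,
          # normalized to a probability distribution (rows of T sum to 1).
          vals, vecs = np.linalg.eig(T.T)
          v = np.real(vecs[:, np.argmax(np.real(vals))])
          return v / v.sum()

      def decay_similarity(p, pi):
          # Genetic decay proxy: similarity between an observed amino acid
          # distribution p and the equilibrium pi, approaching 1 over time.
          return float(p @ pi / (np.linalg.norm(p) * np.linalg.norm(pi)))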

  2. Suppressing feedback in a distributed video coding system by employing real field codes

    NASA Astrophysics Data System (ADS)

    Louw, Daniel J.; Kaneko, Haruhiko

    2013-12-01

    Single-view distributed video coding (DVC) is a video compression method that allows for the computational complexity of the system to be shifted from the encoder to the decoder. The reduced encoding complexity makes DVC attractive for use in systems where processing power or energy use at the encoder is constrained, for example, in wireless devices and surveillance systems. One of the biggest challenges in implementing DVC systems is that the required rate must be known at the encoder. The conventional approach is to use a feedback channel from the decoder to control the rate. Feedback channels introduce their own difficulties such as increased latency and buffering requirements, which makes the resultant system unsuitable for some applications. Alternative approaches, which do not employ feedback, suffer from either increased encoder complexity due to performing motion estimation at the encoder, or an inaccurate rate estimate. Inaccurate rate estimates can result in a reduced average rate-distortion performance, as well as unpleasant visual artifacts. In this paper, the authors propose a single-view DVC system that does not require a feedback channel. The consequences of inaccuracies in the rate estimate are addressed by using codes defined over the real field and a decoder employing successive refinement. The result is a codec with performance that is comparable to that of a feedback-based system at low rates without the use of motion estimation at the encoder or a feedback path. The disadvantage of the approach is a reduction in average rate-distortion performance in the high-rate regime for sequences with significant motion.

  3. The Spatial Coding Model of Visual Word Identification

    ERIC Educational Resources Information Center

    Davis, Colin J.

    2010-01-01

    Visual word identification requires readers to code the identity and order of the letters in a word and match this code against previously learned codes. Current models of this lexical matching process posit context-specific letter codes in which letter representations are tied to either specific serial positions or specific local contexts (e.g.,…

  4. Projectile Two-dimensional Coordinate Measurement Method Based on Optical Fiber Coding Fire and its Coordinate Distribution Probability

    NASA Astrophysics Data System (ADS)

    Li, Hanshan; Lei, Zhiyong

    2013-01-01

    To improve projectile coordinate measurement precision in a fire measurement system, this paper introduces the optical fiber coding fire measurement method and principle, sets up the corresponding measurement model, and analyzes coordinate errors using the differential method. To study the projectile coordinate position distribution, the mathematical-statistics hypothesis-testing method was used to analyze the distribution law, the firing dispersion, and the probability of the projectile hitting the object center. The results show that an exponential distribution is a reasonable fit for the projectile position distribution at the given significance level. Experimentation and calculation show that the optical fiber coding fire measurement method is scientific and feasible and can yield accurate projectile coordinate positions.

  5. FPGA based digital phase-coding quantum key distribution system

    NASA Astrophysics Data System (ADS)

    Lu, XiaoMing; Zhang, LiJun; Wang, YongGang; Chen, Wei; Huang, DaJun; Li, Deng; Wang, Shuang; He, DeYong; Yin, ZhenQiang; Zhou, Yu; Hui, Cong; Han, ZhengFu

    2015-12-01

    Quantum key distribution (QKD) is a technology with the potential capability to achieve information-theoretic security. Phase coding is an important approach to developing practical QKD systems in fiber channels. In order to improve the phase-coding modulation rate, we proposed a new digital-modulation method in this paper and constructed a compact and robust prototype QKD system using currently available components in our lab to demonstrate the effectiveness of the method. The system was deployed in a laboratory environment over a 50 km fiber and continuously operated for 87 h without manual interaction. The quantum bit error rate (QBER) of the system was stable, with an average value of 3.22%, and the secure key generation rate was 8.91 kbps. Although the modulation rate of the photons in the demo system was only 200 MHz, limited by the Faraday-Michelson interferometer (FMI) structure, the proposed method and the field programmable gate array (FPGA) based electronics scheme have great potential for high-speed QKD systems with gigabit-per-second modulation rates.
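
    For orientation, the sifting and QBER logic common to BB84-style systems can be sketched as below. This is a generic simulation with an assumed 3% channel error rate, not a model of the paper's FPGA/FMI hardware.

      # Generic BB84-style basis sifting and QBER estimate (conceptual only).
      import numpy as np

      rng = np.random.default_rng(1)
      n = 100_000
      alice_bits  = rng.integers(0, 2, n)
      alice_basis = rng.integers(0, 2, n)          # 0/1 = two phase bases
      bob_basis   = rng.integers(0, 2, n)
      channel_flip = rng.random(n) < 0.03          # assumed intrinsic error rate

      # Bob's result is meaningful only when bases match; noise flips some bits.
      bob_bits = np.where(channel_flip, 1 - alice_bits, alice_bits)
      sift = alice_basis == bob_basis              # basis reconciliation keeps ~half
      key_a, key_b = alice_bits[sift], bob_bits[sift]
      qber = np.mean(key_a != key_b)               # compare to the reported 3.22% average
      print(f"sifted bits: {key_a.size}, QBER: {qber:.2%}")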

  6. 24 CFR 200.926b - Model codes.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 24 Housing and Urban Development 2 2013-04-01 2013-04-01 false Model codes. 200.926b Section 200... DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Minimum Property Standards § 200.926b Model codes. (a) Incorporation by reference. The following model code publications are incorporated by reference in accordance...

  7. 24 CFR 200.925c - Model codes.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 24 Housing and Urban Development 2 2013-04-01 2013-04-01 false Model codes. 200.925c Section 200... DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Minimum Property Standards § 200.925c Model codes. (a... below. (1) Model Building Codes—(i) The BOCA National Building Code, 1993 Edition, The BOCA National...

  8. 24 CFR 200.926b - Model codes.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 24 Housing and Urban Development 2 2014-04-01 2014-04-01 false Model codes. 200.926b Section 200... DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Minimum Property Standards § 200.926b Model codes. (a) Incorporation by reference. The following model code publications are incorporated by reference in accordance...

  9. 24 CFR 200.925c - Model codes.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 24 Housing and Urban Development 2 2014-04-01 2014-04-01 false Model codes. 200.925c Section 200... DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Minimum Property Standards § 200.925c Model codes. (a... below. (1) Model Building Codes—(i) The BOCA National Building Code, 1993 Edition, The BOCA National...

  10. 24 CFR 200.926b - Model codes.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 24 Housing and Urban Development 2 2011-04-01 2011-04-01 false Model codes. 200.926b Section 200... DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Minimum Property Standards § 200.926b Model codes. (a) Incorporation by reference. The following model code publications are incorporated by reference in accordance...

  11. 24 CFR 200.926b - Model codes.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false Model codes. 200.926b Section 200... DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Minimum Property Standards § 200.926b Model codes. (a) Incorporation by reference. The following model code publications are incorporated by reference in accordance...

  12. Measurement error and outcome distributions: Methodological issues in regression analyses of behavioral coding data.

    PubMed

    Holsclaw, Tracy; Hallgren, Kevin A; Steyvers, Mark; Smyth, Padhraic; Atkins, David C

    2015-12-01

    Behavioral coding is increasingly used for studying mechanisms of change in psychosocial treatments for substance use disorders (SUDs). However, behavioral coding data typically include features that can be problematic in regression analyses, including measurement error in independent variables, non-normal distributions of count outcome variables, and conflation of predictor and outcome variables with third variables, such as session length. Methodological research in econometrics has shown that these issues can lead to biased parameter estimates, inaccurate standard errors, and increased Type I and Type II error rates, yet these statistical issues are not widely known within SUD treatment research, or more generally, within psychotherapy coding research. Using minimally technical language intended for a broad audience of SUD treatment researchers, the present paper illustrates the nature in which these data issues are problematic. We draw on real-world data and simulation-based examples to illustrate how these data features can bias estimation of parameters and interpretation of models. A weighted negative binomial regression is introduced as an alternative to ordinary linear regression that appropriately addresses the data characteristics common to SUD treatment behavioral coding data. We conclude by demonstrating how to use and interpret these models with data from a study of motivational interviewing. SPSS and R syntax for weighted negative binomial regression models is included in online supplemental materials.
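
    A Python analogue of the recommended model (the paper itself supplies SPSS and R syntax) might look like the following. The data and variable names are hypothetical, and the weights illustrate adjusting for session length.

      # Illustrative weighted negative binomial regression on simulated counts.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n = 300
      therapist_skill = rng.normal(size=n)                  # hypothetical predictor
      session_length = rng.uniform(20, 60, size=n)          # weighting variable
      mu = np.exp(0.5 + 0.4 * therapist_skill)
      counts = rng.negative_binomial(n=2, p=2 / (2 + mu))   # overdispersed count outcome

      X = sm.add_constant(therapist_skill)
      # Weight sessions by length so exposure does not conflate with the outcome.
      model = sm.GLM(counts, X,
                     family=sm.families.NegativeBinomial(alpha=0.5),
                     freq_weights=session_length / session_length.mean())
      print(model.fit().summary())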

  13. Measurement error and outcome distributions: Methodological issues in regression analyses of behavioral coding data

    PubMed Central

    Holsclaw, Tracy; Hallgren, Kevin A.; Steyvers, Mark; Smyth, Padhraic; Atkins, David C.

    2015-01-01

    Behavioral coding is increasingly used for studying mechanisms of change in psychosocial treatments for substance use disorders (SUDs). However, behavioral coding data typically include features that can be problematic in regression analyses, including measurement error in independent variables, non-normal distributions of count outcome variables, and conflation of predictor and outcome variables with third variables, such as session length. Methodological research in econometrics has shown that these issues can lead to biased parameter estimates, inaccurate standard errors, and increased type-I and type-II error rates, yet these statistical issues are not widely known within SUD treatment research, or more generally, within psychotherapy coding research. Using minimally-technical language intended for a broad audience of SUD treatment researchers, the present paper illustrates the nature in which these data issues are problematic. We draw on real-world data and simulation-based examples to illustrate how these data features can bias estimation of parameters and interpretation of models. A weighted negative binomial regression is introduced as an alternative to ordinary linear regression that appropriately addresses the data characteristics common to SUD treatment behavioral coding data. We conclude by demonstrating how to use and interpret these models with data from a study of motivational interviewing. SPSS and R syntax for weighted negative binomial regression models is included in supplementary materials. PMID:26098126

  14. Galactic Cosmic Ray Event-Based Risk Model (GERM) Code

    NASA Technical Reports Server (NTRS)

    Cucinotta, Francis A.; Plante, Ianik; Ponomarev, Artem L.; Kim, Myung-Hee Y.

    2013-01-01

    This software describes the transport and energy deposition of galactic cosmic rays passing through astronaut tissues during space travel, or of heavy ion beams in patients in cancer therapy. Space radiation risk is a probability distribution, and time-dependent biological events must be accounted for in the physical description of space radiation transport in tissues and cells. A stochastic model can calculate the probability density directly, without unverified assumptions about the shape of the probability density function. The prior art of transport codes calculates the average flux and dose of particles behind spacecraft and tissue shielding. Because of the signaling times for activation and relaxation in the cell and tissue, the transport code must describe the temporal and microspatial density functions needed to correlate DNA and oxidative damage with non-targeted effects (signaling, bystander effects, etc.), which are ignored or impossible in the prior art. The GERM code provides scientists with data interpretation of experiments; modeling of the beam line, shielding of target samples, and sample holders; and estimation of the basic physical and biological outputs of their experiments. For mono-energetic ion beams, basic physical and biological properties are calculated for a selected ion type, such as kinetic energy, mass, charge number, absorbed dose, or fluence. Evaluated quantities are linear energy transfer (LET), range (R), absorption and fragmentation cross-sections, and the probability of nuclear interactions after 1 or 5 cm of water equivalent material. In addition, a set of biophysical properties is evaluated, such as the Poisson distribution for a specified cellular area, cell survival curves, and DNA damage yields per cell. Also, the GERM code calculates the radiation transport of the beam line for either a fixed number of user-specified depths or at multiple positions along the Bragg curve of the particle in a selected material. The GERM code makes the numerical estimates of basic
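
    Two of the basic quantities such a code evaluates can be sketched from standard radiobiology relations. This is a back-of-envelope illustration, not GERM's internal algorithm; the cell area and beam parameters are assumed.

      from math import exp, factorial

      LET = 100.0        # keV/um, linear energy transfer of the ion
      fluence = 1.0e7    # particles/cm^2
      area_um2 = 100.0   # assumed cell-nucleus cross-sectional area, um^2
      rho = 1.0          # g/cm^3, water

      # Absorbed dose for a uniform beam: D[Gy] = 1.602e-9 * LET * fluence / rho
      dose_gy = 1.602e-9 * LET * fluence / rho

      # Poisson distribution of particle traversals for the specified cellular area
      mean_hits = fluence * area_um2 * 1e-8        # um^2 -> cm^2
      p_k = [exp(-mean_hits) * mean_hits**k / factorial(k) for k in range(5)]
      print(f"dose = {dose_gy:.3f} Gy, mean hits/nucleus = {mean_hits:.2f}")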

  15. Software Model Checking Without Source Code

    NASA Technical Reports Server (NTRS)

    Chaki, Sagar; Ivers, James

    2009-01-01

    We present a framework, called AIR, for verifying safety properties of assembly language programs via software model checking. AIR extends the applicability of predicate abstraction and counterexample guided abstraction refinement to the automated verification of low-level software. By working at the assembly level, AIR allows verification of programs for which source code is unavailable, such as legacy and COTS software, and programs that use features, such as pointers, structures, and object-orientation, that are problematic for source-level software verification tools. In addition, AIR makes no assumptions about the underlying compiler technology. We have implemented a prototype of AIR and present encouraging results on several non-trivial examples.

  16. ER@CEBAF: Modeling code developments

    SciTech Connect

    Meot, F.; Roblin, Y.

    2016-04-13

    A proposal for a multiple-pass, high-energy, energy-recovery experiment using CEBAF is under preparation in the frame of a JLab-BNL collaboration. In view of beam dynamics investigations regarding this project, in addition to the existing model in use in Elegant, a version of CEBAF is being developed in the stepwise ray-tracing code Zgoubi. Beyond the ER experiment, it is also planned to use the latter for the study of polarization transport in the presence of synchrotron radiation, down to the Hall D line, where a 12 GeV polarized beam can be delivered. This Note briefly reports on the preliminary steps, and preliminary outcomes, based on an Elegant-to-Zgoubi translation.

  17. Plutonium explosive dispersal modeling using the MACCS2 computer code

    SciTech Connect

    Steele, C.M.; Wald, T.L.; Chanin, D.I.

    1998-11-01

    The purpose of this paper is to derive the necessary parameters to be used to establish a defensible methodology to perform explosive dispersal modeling of respirable plutonium using Gaussian methods. A particular code, MACCS2, has been chosen for this modeling effort due to its application of sophisticated meteorological statistical sampling in accordance with the philosophy of Nuclear Regulatory Commission (NRC) Regulatory Guide 1.145, "Atmospheric Dispersion Models for Potential Accident Consequence Assessments at Nuclear Power Plants". A second advantage supporting the selection of the MACCS2 code for modeling purposes is that meteorological data sets are readily available at most Department of Energy (DOE) and NRC sites. This particular MACCS2 modeling effort focuses on the calculation of respirable doses and not ground deposition. Once the necessary parameters for the MACCS2 modeling are developed and presented, the model is benchmarked against empirical test data from the Double Tracks shot of Project Roller Coaster (Shreve 1965) and applied to a hypothetical plutonium explosive dispersal scenario. Further modeling with the MACCS2 code is performed to determine a defensible method of treating the effects of building structure interaction on the respirable fraction distribution as a function of height. These results are related to the Clean Slate 2 and Clean Slate 3 bunkered shots of Project Roller Coaster. Lastly, a method is presented to determine the peak 99.5% sector doses on an irregular site boundary in the manner specified in NRC Regulatory Guide 1.145 (1983). Parametric analyses are performed on the major analytic assumptions in the MACCS2 model to define the potential errors that are possible in using this methodology.

  18. Simulink Code Generation: Tutorial for Generating C Code from Simulink Models using Simulink Coder

    NASA Technical Reports Server (NTRS)

    MolinaFraticelli, Jose Carlos

    2012-01-01

    This document explains all the necessary steps in order to generate optimized C code from Simulink Models. This document also covers some general information on good programming practices, selection of variable types, how to organize models and subsystems, and finally how to test the generated C code and compare it with data from MATLAB.

  19. Modeling Proton- and Light Ion-Induced Reactions at Low Energies in the MARS15 Code

    SciTech Connect

    Rakhno, I. L.; Mokhov, N. V.; Gudima, K. K.

    2015-04-25

    An implementation of both the ALICE code and the TENDL evaluated nuclear data library to describe nuclear reactions induced by low-energy projectiles in the Monte Carlo code MARS15 is presented. Comparisons between modeling results and experimental data on reaction cross sections and secondary-particle distributions are shown.

  20. Statistical analysis of the distribution of amino acids in Borrelia burgdorferi genome under different genetic codes

    NASA Astrophysics Data System (ADS)

    García, José A.; Alvarez, Samantha; Flores, Alejandro; Govezensky, Tzipe; Bobadilla, Juan R.; José, Marco V.

    2004-10-01

    The genetic code is considered to be universal. In order to test if some statistical properties of the coding bacterial genome were due to inherent properties of the genetic code, we compared the autocorrelation function, the scaling properties and the maximum entropy of the distribution of distances of amino acids in sequences obtained by translating protein-coding regions from the genome of Borrelia burgdorferi under different genetic codes. Overall our results indicate that these properties are very stable to perturbations made by altering the genetic code. We also discuss the likely evolutionary implications of the present results.

  1. Distributed Explosive Performance Model

    DTIC Science & Technology

    1998-01-01

    Analytic Code (DEPAC). DEPAC is a restructured and upgraded one-stop code of the previous version of the Linear Explosive Array Performance...findings. 3. Developed the initial version of DEPAC (LEAP and LAM). 4. Released three Technical Results (TRs). 5. Established the methodology for quick...the input files for each run for CTH, process the data generated by CTH, and create the input database files for DEPAC. The line charge is composed of

  2. 40 CFR 194.23 - Models and computer codes.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 25 2011-07-01 2011-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer...

  3. 40 CFR 194.23 - Models and computer codes.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer...

  4. 40 CFR 194.23 - Models and computer codes.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 26 2013-07-01 2013-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer...

  5. 40 CFR 194.23 - Models and computer codes.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 25 2014-07-01 2014-07-01 false Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer...

  6. 40 CFR 194.23 - Models and computer codes.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 26 2012-07-01 2011-07-01 true Models and computer codes. 194.23... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer...

  7. 24 CFR 200.926c - Model code provisions for use in partially accepted code jurisdictions.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 24 Housing and Urban Development 2 2012-04-01 2012-04-01 false Model code provisions for use in partially accepted code jurisdictions. 200.926c Section 200.926c Housing and Urban Development Regulations Relating to Housing and Urban Development (Continued) OFFICE OF ASSISTANT SECRETARY FOR...

  8. A secure arithmetic coding based on Markov model

    NASA Astrophysics Data System (ADS)

    Duan, Lili; Liao, Xiaofeng; Xiang, Tao

    2011-06-01

    We propose a modification of the standard arithmetic coding that can be applied to multimedia coding standards at the entropy coding stage. In particular, we introduce a randomized arithmetic coding scheme based on an order-1 Markov model that achieves encryption by scrambling the symbols' order in the model and choosing the relevant order's probability randomly, while maintaining high compression efficiency and good security. Experimental results and security analyses indicate that the algorithm can not only resist existing attacks based on arithmetic coding, but is also immune to other cryptanalysis.

  9. Minimum cost model energy code envelope requirements

    SciTech Connect

    Connor, C.C.; Lucas, R.G.; Turchen, S.J.

    1994-08-01

    This paper describes the analysis underlying development of the U.S. Department of Energy's proposed revisions of the Council of American Building Officials (CABO) 1993 Model Energy Code (MEC) building thermal envelope requirements for single-family and low-rise multifamily residences. This analysis resulted in revised MEC envelope conservation levels based on an objective methodology that determined the minimum-cost combination of energy efficiency measures (EEMs) for residences in different locations around the United States. The proposed MEC revision resulted from a cost-benefit analysis from the consumer's perspective. In this analysis, the costs of the EEMs were balanced against the benefit of energy savings. Detailed construction, financial, economic, and fuel cost data were compiled, described in a technical support document, and incorporated in the analysis. A cost minimization analysis was used to compare the present value of the total long-run costs for several alternative EEMs and to select the EEMs that achieved the lowest cost for each location studied. This cost minimization was performed for 881 cities in the United States, and the results were put into the format used by the MEC. This paper describes the methodology for determining minimum-cost energy efficiency measures for ceilings, walls, windows, and floors and presents the results in the form of proposed revisions to the MEC. The proposed MEC revisions would, on average, increase the stringency of the MEC by about 10%.

  10. Genetic code: an alternative model of translation.

    PubMed

    Damjanović, Zvonimir M; Rakocević, Miloje M

    2005-06-01

    Our earlier studies of translation have led us to a specific numeric coding of nucleotides (A = 0, C = 1, G = 2, and U = 3), that is, a quaternary numeric system; to an ordering of digrams and codons (read right to left: .yx and Z.yx) as ordinal numbers from 000 to 111; and to seek a hypothetic transformation of mRNA to the 20 canonic amino acids. In this work, we show that amino acids match the ordinal number, that is, follow as transforms of their respective digrams and/or mRNA codons. Sixteen digrams and their respective amino acids appear as a parallel (discrete) array. A first approximation of translation in this view is demonstrated by a "twisted" spiral on the side of "phantom" codons and by ordering amino acids in the form of a cross on the other side, whereby the transformation of digrams and/or phantom codons to amino acids appears to be one-to-one! Classification of canonical amino acids derived from our dynamic model clarifies physicochemical criteria, such as purinity, pyrimidinity, and particularly codon rules. The system implies both the rules of Siemion and Siemion and of Davidov, as well as balances of atomic and nucleon numbers within groups of amino acids. Formalization in this system offers the possibility of extrapolating backward to the initial organization of heredity.
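
    The quaternary coding itself is easy to make explicit. In the sketch below each codon is read as three base-4 digits; the reading direction is one possible convention, since the paper orders digrams right to left.

      # A=0, C=1, G=2, U=3: a codon read as three quaternary digits gives an
      # ordinal number in 0..63.
      DIGIT = {'A': 0, 'C': 1, 'G': 2, 'U': 3}

      def codon_ordinal(codon):
          x, y, z = (DIGIT[b] for b in codon)
          return 16 * x + 4 * y + z      # base-4 place values

      print(codon_ordinal('AAA'), codon_ordinal('UUU'))   # 0 and 63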

  11. Astrophysical Plasmas: Codes, Models, and Observations

    NASA Astrophysics Data System (ADS)

    Canto, Jorge; Rodriguez, Luis F.

    2000-05-01

    The conference Astrophysical Plasmas: Codes, Models, and Observations was aimed at discussing the most recent advances, and some of the avenues for future work, in the field of cosmical plasmas. It was held during the week of October 25th to 29th, 1999, at the Centro Nacional de las Artes (CNA) in Mexico City, Mexico, a modern and impressive center of theaters and schools devoted to the performing arts. This was an excellent setting for reviewing the present status of observational (both on earth and in space) and theoretical research, as well as some of the recent advances of laboratory research that are relevant to astrophysics. The demography of the meeting was impressive: 128 participants from 12 countries in 4 continents; a large fraction of them (29%) were women, and most of them were young persons (either recent Ph.D.s or graduate students). This created a very lively and friendly atmosphere that made it easy to move from the ionization of the Universe and high-redshift absorbers, to Active Galactic Nuclei (AGNs) and X-rays from galaxies, to the gas in the Magellanic Clouds and our Galaxy, to the evolution of H II regions and Planetary Nebulae (PNe), and to the details of plasmas in the Solar System and the lab. All these topics were well covered with 23 invited talks, 43 contributed talks, and 22 posters. Most of them are contained in these proceedings, in the same order as the presentations.

  12. Simple models for reading neuronal population codes.

    PubMed Central

    Seung, H S; Sompolinsky, H

    1993-01-01

    In many neural systems, sensory information is distributed throughout a population of neurons. We study simple neural network models for extracting this information. The inputs to the networks are the stochastic responses of a population of sensory neurons tuned to directional stimuli. The performance of each network model in psychophysical tasks is compared with that of the optimal maximum likelihood procedure. As a model of direction estimation in two dimensions, we consider a linear network that computes a population vector. Its performance depends on the width of the population tuning curves and is maximal for a width that increases with the level of background activity. Although for narrowly tuned neurons the performance of the population vector is significantly inferior to that of maximum likelihood estimation, the difference between the two is small when the tuning is broad. For direction discrimination, we consider two models: a perceptron with fully adaptive weights and a network made by adding an adaptive second layer to the population vector network. We calculate the error rates of these networks after exhaustive training to a particular direction. By testing on the full range of possible directions, the extent of transfer of training to novel stimuli can be calculated. It is found that for threshold linear networks the transfer of perceptual learning is nonmonotonic. Although performance deteriorates away from the training stimulus, it peaks again at an intermediate angle. This nonmonotonicity provides an important psychophysical test of these models. PMID:8248166
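
    The population vector readout at the heart of the estimation model is short enough to state directly. The tuning curve below is a schematic von Mises-like choice with assumed width and rate parameters, not the paper's exact model.

      # Minimal population-vector readout for direction estimation in 2D.
      import numpy as np

      rng = np.random.default_rng(0)
      n_cells = 64
      preferred = np.linspace(0, 2 * np.pi, n_cells, endpoint=False)

      def responses(theta, width=0.5, rate_max=30.0, background=5.0):
          # Stochastic (Poisson) responses of direction-tuned cells.
          tuning = rate_max * np.exp((np.cos(theta - preferred) - 1) / width**2) + background
          return rng.poisson(tuning)

      def population_vector(r):
          # Sum of preferred-direction unit vectors weighted by firing rates.
          return np.arctan2(np.sum(r * np.sin(preferred)), np.sum(r * np.cos(preferred)))

      theta_true = 1.2
      estimates = [population_vector(responses(theta_true)) for _ in range(1000)]
      print(np.mean(estimates), np.std(estimates))  # accuracy depends on tuning width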

  13. Environmental durability diagnostic for printed identification codes of polymer insulation for distribution pipelines

    NASA Astrophysics Data System (ADS)

    Zhuravleva, G. N.; Nagornova, I. V.; Kondratov, A. P.; Bablyuk, E. B.; Varepo, L. G.

    2017-08-01

    A study and modelling of the weatherability and environmental durability of multilayer polymer insulation of both cables and pipelines with printed barcodes or color identification information were performed. It was proved that interlayer printing of identification codes in distribution pipeline insulation coatings provides high marking stability to light and atmospheric condensation. This allows distant damage control to be carried out. However, microbiological fouling of the upper polymer layer hampers distant identification of pipeline damage. The color difference values and density changes of PE and PVC printed insulation due to weather and biological factors were defined.

  14. Distributed Estimation, Coding, and Scheduling in Wireless Visual Sensor Networks

    ERIC Educational Resources Information Center

    Yu, Chao

    2013-01-01

    In this thesis, we consider estimation, coding, and sensor scheduling for energy efficient operation of wireless visual sensor networks (VSN), which consist of battery-powered wireless sensors with sensing (imaging), computation, and communication capabilities. The competing requirements for applications of these wireless sensor networks (WSN)…

  15. Distributed Estimation, Coding, and Scheduling in Wireless Visual Sensor Networks

    ERIC Educational Resources Information Center

    Yu, Chao

    2013-01-01

    In this thesis, we consider estimation, coding, and sensor scheduling for energy efficient operation of wireless visual sensor networks (VSN), which consist of battery-powered wireless sensors with sensing (imaging), computation, and communication capabilities. The competing requirements for applications of these wireless sensor networks (WSN)…

  16. Distributed System Modeling Environment (DSME)

    DTIC Science & Technology

    1990-07-01

    Simulation tools, such as the Internetted System Modeling (ISM) system; distributed operating systems, such as Cronus and Alpha; distributed...RADC/COTD in this area is the Cronus distributed operating system. Cronus provides an architecture and tools for building and operating distributed applications on a diverse set of machines. Cronus is more accurately identified as a distributed computing environment, since its role as a distributed

  17. SAMDIST: A Computer Code for Calculating Statistical Distributions for R-Matrix Resonance Parameters

    SciTech Connect

    Leal, L.C.

    1995-01-01

    The SAMDIST computer code has been developed to calculate distributions of resonance parameters of the Reich-Moore R-matrix type. The program assumes the parameters are in a format compatible with that of the multilevel R-matrix code SAMMY. SAMDIST calculates the energy-level spacing distribution, the resonance width distribution, and the long-range correlation of the energy levels. Results of these calculations are presented in both graphic and tabular forms.

  18. Review and verification of CARE 3 mathematical model and code

    NASA Technical Reports Server (NTRS)

    Rose, D. M.; Altschul, R. E.; Manke, J. W.; Nelson, D. L.

    1983-01-01

    The verification of the CARE III mathematical model and code performed by Boeing Computer Services is documented. The mathematical model was verified for permanent and intermittent faults; the transient fault model was not addressed. The code verification was performed on CARE III, Version 3. A CARE III Version 4, which corrects deficiencies identified in Version 3, is being developed.

  19. A New Solution of Distributed Disaster Recovery Based on Raptor Code

    NASA Astrophysics Data System (ADS)

    Deng, Kai; Wang, Kaiyun; Ma, Danyang

    Traditional disaster recovery based on simple replication suffers from high cost, low data availability under multi-node storage, and poor intrusion tolerance; this paper therefore puts forward a distributed disaster recovery scheme based on Raptor codes. The article introduces the principle of Raptor codes, analyses their coding advantages, and compares the proposed solution with traditional solutions in terms of redundancy, data availability, and intrusion tolerance. The results show that the distributed disaster recovery solution based on Raptor codes can achieve higher data availability as well as better intrusion tolerance under the premise of lower redundancy.
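
    The core fountain-code idea can be illustrated with an LT-style sketch; Raptor codes add a precode and a carefully designed degree distribution on top of this, and the toy degree distribution below is not the robust soliton used in practice.

      # Schematic LT-style fountain encoding and peeling decoder.
      import numpy as np

      rng = np.random.default_rng(0)

      def encode(blocks, n_packets):
          # Each packet is the XOR of a random subset (degree d) of source blocks.
          k = len(blocks)
          packets = []
          for _ in range(n_packets):
              d = rng.integers(1, 4)                      # toy degree distribution
              idx = set(rng.choice(k, size=d, replace=False).tolist())
              pkt = np.bitwise_xor.reduce([blocks[i] for i in idx])
              packets.append((idx, pkt))
          return packets

      def peel_decode(packets, k):
          # Resolve degree-1 packets, substitute back, repeat until no progress.
          recovered = {}
          changed = True
          while changed:
              changed = False
              for idx, pkt in packets:
                  unknown = idx - recovered.keys()
                  if len(unknown) == 1:
                      i = unknown.pop()
                      recovered[i] = np.bitwise_xor.reduce(
                          [recovered[j] for j in idx if j in recovered] + [pkt])
                      changed = True
          return recovered if len(recovered) == k else None   # None: need more packets

    Any sufficiently large set of received packets, slightly more than the number of source blocks, then suffices to rebuild the data regardless of which packets were lost; this is what gives fountain-based storage its availability and intrusion-tolerance margin at modest redundancy.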

  20. Numerical MHD codes for modeling astrophysical flows

    NASA Astrophysics Data System (ADS)

    Koldoba, A. V.; Ustyugova, G. V.; Lii, P. S.; Comins, M. L.; Dyda, S.; Romanova, M. M.; Lovelace, R. V. E.

    2016-05-01

    We describe a Godunov-type magnetohydrodynamic (MHD) code based on the Miyoshi and Kusano (2005) solver which can be used to solve various astrophysical hydrodynamic and MHD problems. The energy equation is in the form of entropy conservation. The code has been implemented on several different coordinate systems: 2.5D axisymmetric cylindrical coordinates, 2D Cartesian coordinates, 2D plane polar coordinates, and fully 3D cylindrical coordinates. Viscosity and diffusivity are implemented in the code to control the accretion rate in the disk and the rate of penetration of the disk matter through the magnetic field lines. The code has been utilized for the numerical investigations of a number of different astrophysical problems, several examples of which are shown.

  1. A draft model aggregated code of ethics for bioethicists.

    PubMed

    Baker, Robert

    2005-01-01

    Bioethicists function in an environment in which their peers (healthcare executives, lawyers, nurses, physicians) assert the integrity of their fields through codes of professional ethics. Is it time for bioethics to assert its integrity by developing a code of ethics? Answering in the affirmative, this paper lays out a case by reviewing the historical nature and function of professional codes of ethics. Arguing that professional codes are aggregative enterprises growing in response to a field's historical experiences, it asserts that bioethics now needs to assert its integrity and independence and has already developed a body of formal statements that could be aggregated to create a comprehensive code of ethics for bioethics. A Draft Model Aggregated Code of Ethics for Bioethicists is offered in the hope that analysis and criticism of this draft code will promote further discussion of the nature and content of a code of ethics for bioethicists.

  2. Publicly Available Numerical Codes for Modeling the X-ray and Microwave Emissions from Solar and Stellar Activity

    NASA Technical Reports Server (NTRS)

    Holman, Gordon D.; Mariska, John T.; McTiernan, James M.; Ofman, Leon; Petrosian, Vahe; Ramaty, Reuven; Fisher, Richard R. (Technical Monitor)

    2001-01-01

    We have posted numerical codes on the Web for modeling the bremsstrahlung x-ray emission and the gyrosynchrotron radio emission from solar and stellar activity. In addition to radiation codes, steady-state and time-dependent Fokker-Planck codes are provided for computing the distribution and evolution of accelerated electrons. A 1-D hydrodynamics code computes the response of the stellar atmosphere (chromospheric evaporation). A code for modeling gamma-ray line spectra is also available. On-line documentation is provided for each code. These codes have been developed for modeling results from the High Energy Solar Spectroscopic Imager (HESSI) along with related microwave observations of solar flares. Comprehensive codes for modeling images and spectra of solar flares are under development. The posted codes can be obtained on NASA/Goddard's HESSI Web Site at http://hesperia.gsfc.nasa.gov/hessi/modelware.htm. This work is supported in part by the NASA Sun-Earth Connection Program.

  3. Publicly Available Numerical Codes for Modeling the X-ray and Microwave Emissions from Solar and Stellar Activity

    NASA Technical Reports Server (NTRS)

    Holman, Gordon D.; Mariska, John T.; McTiernan, James M.; Ofman, Leon; Petrosian, Vahe; Ramaty, Reuven; Fisher, Richard R. (Technical Monitor)

    2001-01-01

    We have posted numerical codes on the Web for modeling the bremsstrahlung x-ray emission and the gyrosynchrotron radio emission from solar and stellar activity. In addition to radiation codes, steady-state and time-dependent Fokker-Planck codes are provided for computing the distribution and evolution of accelerated electrons. A 1-D hydrodynamics code computes the response of the stellar atmosphere (chromospheric evaporation). A code for modeling gamma-ray line spectra is also available. On-line documentation is provided for each code. These codes have been developed for modeling results from the High Energy Solar Spectroscopic Imager (HESSI) along with related microwave observations of solar flares. Comprehensive codes for modeling images and spectra of solar flares are under development. The posted codes can be obtained on NASA/Goddard's HESSI Web Site at http://hesperia.gsfc.nasa.gov/hessi/modelware.htm. This work is supported in part by the NASA Sun-Earth Connection Program.

  4. A Cooperative Downloading Method for VANET Using Distributed Fountain Code

    PubMed Central

    Liu, Jianhang; Zhang, Wenbin; Wang, Qi; Li, Shibao; Chen, Haihua; Cui, Xuerong; Sun, Yi

    2016-01-01

    Cooperative downloading is one of the effective methods to improve the amount of downloaded data in vehicular ad hoc networking (VANET). However, poor channel quality and short encounter times bring about a high packet loss rate, which decreases transmission efficiency and fails to satisfy the requirement of high quality of service (QoS) for some applications. Digital fountain code (DFC) can be utilized in the field of wireless communication to increase transmission efficiency. For cooperative forwarding, however, the processing delay from frequent coding and decoding as well as the single feedback mechanism of DFC cannot adapt to the environment of VANET. In this paper, a cooperative downloading method for VANET using concatenated DFC is proposed to solve the problems above. The source vehicle and cooperative vehicles encode the raw data using hierarchical fountain code before sending it to the client directly or indirectly. Although some packets may be lost, the client can recover the raw data as long as it receives enough encoded packets. The method avoids data retransmission due to packet loss. Furthermore, the concatenated feedback mechanism in the method reduces the transmission delay effectively. Simulation results indicate the benefits of the proposed scheme in terms of increased amount of downloaded data and data receiving rate. PMID:27754339

  5. A Cooperative Downloading Method for VANET Using Distributed Fountain Code.

    PubMed

    Liu, Jianhang; Zhang, Wenbin; Wang, Qi; Li, Shibao; Chen, Haihua; Cui, Xuerong; Sun, Yi

    2016-10-12

    Cooperative downloading is one of the effective methods to improve the amount of downloaded data in vehicular ad hoc networking (VANET). However, poor channel quality and short encounter times bring about a high packet loss rate, which decreases transmission efficiency and fails to satisfy the requirement of high quality of service (QoS) for some applications. Digital fountain code (DFC) can be utilized in the field of wireless communication to increase transmission efficiency. For cooperative forwarding, however, the processing delay from frequent coding and decoding as well as the single feedback mechanism of DFC cannot adapt to the environment of VANET. In this paper, a cooperative downloading method for VANET using concatenated DFC is proposed to solve the problems above. The source vehicle and cooperative vehicles encode the raw data using hierarchical fountain code before sending it to the client directly or indirectly. Although some packets may be lost, the client can recover the raw data as long as it receives enough encoded packets. The method avoids data retransmission due to packet loss. Furthermore, the concatenated feedback mechanism in the method reduces the transmission delay effectively. Simulation results indicate the benefits of the proposed scheme in terms of increased amount of downloaded data and data receiving rate.

  6. Status report on the THROHPUT transient heat pipe modeling code

    SciTech Connect

    Hall, M.L.; Merrigan, M.A.; Reid, R.S.

    1993-11-01

    Heat pipes are structures which transport heat by the evaporation and condensation of a working fluid, giving them a high effective thermal conductivity. Many space-based uses for heat pipes have been suggested, and high temperature heat pipes using liquid metals as working fluids are especially attractive for these purposes. These heat pipes are modeled by the THROHPUT code (THROHPUT is an acronym for Thermal Hydraulic Response Of Heat Pipes Under Transients and is pronounced like "throughput"). Improvements have been made to the THROHPUT code which models transient thermohydraulic heat pipe behavior. The original code was developed as a doctoral thesis research code by Hall. The current emphasis has been shifted from research into the numerical modeling to the development of a robust production code. Several modeling obstacles that were present in the original code have been eliminated, and several additional features have been added.

  7. SADDE (Scaled Absorbed Dose Distribution Evaluator): A code to generate input for VARSKIN

    SciTech Connect

    Reece, W.D.; Miller, S.D.; Durham, J.S.

    1989-01-01

    The VARSKIN computer code has been limited to the isotopes for which the scaled absorbed dose distributions were provided by the Medical Internal Radiation Dose (MIRD) Committee or to data that could be interpolated from isotopes that had similar spectra. This document describes the methodology to calculate the scaled absorbed dose distribution data for any isotope (including emissions by the daughter isotopes) and its implementation by a computer code called SADDE (Scaled Absorbed Dose Distribution Evaluator). The SADDE source code is provided along with input examples and verification calculations. 10 refs., 4 figs.

  8. Code and model extensions of the THATCH code for modular high temperature gas-cooled reactors

    SciTech Connect

    Kroger, P.G.; Kennett, R.J. )

    1993-05-01

    This report documents several model extensions and improvements of the THATCH code, a code to model thermal and fluid flow transients in High Temperature Gas-Cooled Reactors. A heat exchanger model was added, which can be used to represent the steam generator of the main Heat Transport System or the auxiliary Shutdown Cooling System. This addition permits the modeling of forced flow cooldown transients with the THATCH code. An enhanced upper head model, considering the actual conical and spherical shape of the upper plenum and reactor upper head, was added, permitting more accurate modeling of the heat transfer in this region. The revised models are described, and the changes and additions to the input records are documented.

  9. CFD code evaluation for internal flow modeling

    NASA Technical Reports Server (NTRS)

    Chung, T. J.

    1990-01-01

    Research on computational fluid dynamics (CFD) code evaluation, with emphasis on supercomputing in reacting flows, is discussed. Advantages of unstructured grids, multigrids, adaptive methods, improved flow solvers, vector processing, parallel processing, and reduction of memory requirements are discussed. As examples, applications of supercomputing to reacting-flow Navier-Stokes equations, including shock waves and turbulence, and to combustion instability problems associated with solid and liquid propellants are included. Evaluations of codes developed by other organizations are not included. Instead, the basic criteria for accuracy and efficiency have been established, and some applications to rocket combustion have been made. Research toward an ultimate goal, the most accurate and efficient CFD code, is in progress and will continue for years to come.

  10. Energy standards and model codes development, adoption, implementation, and enforcement

    SciTech Connect

    Conover, D.R.

    1994-08-01

    This report provides an overview of the energy standards and model codes process for the voluntary sector within the United States. The report was prepared by Pacific Northwest Laboratory (PNL) for the Building Energy Standards Program and is intended to be used as a primer or reference on this process. Building standards and model codes that address energy have been developed by organizations in the voluntary sector since the early 1970s. These standards and model codes provide minimum energy-efficient design and construction requirements for new buildings and, in some instances, existing buildings. The first step in the process is developing new or revising existing standards or codes. There are two overall differences between standards and codes. Energy standards are developed by a consensus process and are revised as needed. Model codes are revised on a regular annual cycle through a public hearing process. In addition to these overall differences, the specific steps in developing/revising energy standards differ from model codes. These energy standards or model codes are then available for adoption by states and local governments. Typically, energy standards are adopted by or adopted into model codes. Model codes are in turn adopted by states through either legislation or regulation. Enforcement is essential to the implementation of energy standards and model codes. Low-rise residential construction is generally evaluated for compliance at the local level, whereas state agencies tend to be more involved with other types of buildings. Low-rise residential buildings also may be more easily evaluated for compliance because the governing requirements tend to be less complex than for commercial buildings.

  11. Utilities for master source code distribution: MAX and Friends

    NASA Technical Reports Server (NTRS)

    Felippa, Carlos A.

    1988-01-01

    MAX is a program for the manipulation of FORTRAN master source code (MSC). This is a technique by which one maintains one and only one master copy of a FORTRAN program under a program development system, which for MAX is assumed to be VAX/VMS. The master copy is not intended to be directly compiled. Instead it must be pre-processed by MAX to produce compilable instances. These instances may correspond to different code versions (for example, double precision versus single precision), different machines (for example, IBM, CDC, Cray) or different operating systems (i.e., VAX/VMS versus VAX/UNIX). The advantage of using a master source is more pronounced in complex application programs that are developed and maintained over many years and are to be transported and executed on several computer environments. The version lag problem that plagues many such programs is avoided by this approach. MAX is complemented by several auxiliary programs that perform nonessential functions. The ensemble is collectively known as MAX and Friends. All of these programs, including MAX, are executed as foreign VAX/VMS commands and can easily be hidden in customized VMS command procedures.
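
    The master-source idea is essentially a guarded-line preprocessor. A toy version is sketched below, with the caveat that the directive syntax shown is invented for illustration and is not MAX's actual syntax.

      # Toy master-source extractor: keep unconditional lines, and keep guarded
      # lines (guard stripped) only when their tag is active.
      import sys

      def extract(master_lines, tags):
          # A guarded line looks like:  C@VAX      CALL VMSINIT   (hypothetical)
          # and is emitted only when 'VAX' is in `tags`.
          out = []
          for line in master_lines:
              if line.startswith('C@'):
                  guard, _, rest = line[2:].partition(' ')
                  if guard in tags:
                      out.append(rest.lstrip())
              else:
                  out.append(line)
          return out

      if __name__ == '__main__':
          tags = set(sys.argv[2:])            # e.g. extract.py prog.msc VAX DOUBLE
          with open(sys.argv[1]) as f:
              print('\n'.join(extract(f.read().splitlines(), tags)))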

  12. Source coding with escort distributions and Rényi entropy bounds

    NASA Astrophysics Data System (ADS)

    Bercher, J.-F.

    2009-08-01

    We discuss the interest of escort distributions and Rényi entropy in the context of source coding. We first recall a source coding theorem by Campbell relating a generalized measure of length to the Rényi-Tsallis entropy. We show that the associated optimal codes can be obtained using considerations on escort distributions. We propose a new family of measures of length involving escort distributions and we show that these generalized lengths are also bounded below by the Rényi entropy. Furthermore, we obtain that the standard Shannon code lengths are optimum for the new generalized length measures, whatever the entropic index. Finally, we show that there exists in this setting an interplay between standard and escort distributions.
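
    For reference, the standard definitions consistent with this abstract (the paper's notation may differ) are as follows. For source probabilities $p_i$ and codeword lengths $\ell_i$ over a $D$-ary alphabet, Campbell's exponentially weighted mean length is
      \[
        C_\beta = \frac{1}{\beta}\,\log_D\Bigl(\sum_i p_i\,D^{\beta \ell_i}\Bigr), \qquad \beta > 0,
      \]
    and Campbell's theorem bounds it below by the Rényi entropy $H_\alpha(p)$ of order $\alpha = 1/(1+\beta)$, while the escort distribution of order $q$ associated with $p$ is
      \[
        P_i = \frac{p_i^{\,q}}{\sum_j p_j^{\,q}}.
      \]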

  13. Quantum circuit for optimal eavesdropping in quantum key distribution using phase-time coding

    SciTech Connect

    Kronberg, D. A.; Molotkov, S. N.

    2010-07-15

    A quantum circuit is constructed for optimal eavesdropping on quantum key distribution protocols using phase-time coding, and its physical implementation based on linear and nonlinear fiber-optic components is proposed.

  14. Automatic code generation from the OMT-based dynamic model

    SciTech Connect

    Ali, J.; Tanaka, J.

    1996-12-31

    The OMT object-oriented software development methodology suggests creating three models of the system, i.e., object model, dynamic model and functional model. We have developed a system that automatically generates implementation code from the dynamic model. The system first represents the dynamic model as a table and then generates executable Java language code from it. We used inheritance for super-substate relationships. We considered that transitions relate to states in a state diagram exactly as operations relate to classes in an object diagram. In the generated code, each state in the state diagram becomes a class and each event on a state becomes an operation on the corresponding class. The system is implemented and can generate executable code for any state diagram. This makes the role of the dynamic model more significant and the job of designers even simpler.
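
    The state-to-class mapping described above can be sketched directly from a transition table. The table format and all names are illustrative, and the sketch is in Python for brevity even though the system described emits Java.

        # Each state becomes a class; each event becomes a method returning
        # the next state. The transition table below is a hypothetical example.
        transitions = {                 # (state, event) -> next state
            ('Idle', 'coin'): 'Active',
            ('Active', 'push'): 'Idle',
        }

        def make_state_class(state, table):
            """Build a class whose event methods perform the table's transitions."""
            events = {ev: nxt for (st, ev), nxt in table.items() if st == state}
            methods = {ev: (lambda self, _nxt=nxt: _nxt) for ev, nxt in events.items()}
            return type(state, (object,), methods)

        Idle = make_state_class('Idle', transitions)
        print(Idle().coin())            # -> 'Active'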

  15. Aerosol kinetic code "AERFORM": Model, validation and simulation results

    NASA Astrophysics Data System (ADS)

    Gainullin, K. G.; Golubev, A. I.; Petrov, A. M.; Piskunov, V. N.

    2016-06-01

    The aerosol kinetic code "AERFORM" is modified to simulate droplet and ice particle formation in mixed clouds. The splitting method is used to calculate condensation and coagulation simultaneously. The method is calibrated with analytic solutions of kinetic equations. Condensation kinetic model is based on cloud particle growth equation, mass and heat balance equations. The coagulation kinetic model includes Brownian, turbulent and precipitation effects. The real values are used for condensation and coagulation growth of water droplets and ice particles. The model and the simulation results for two full-scale cloud experiments are presented. The simulation model and code may be used autonomously or as an element of another code.
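
    The splitting method mentioned above advances the two processes sequentially within each time step. A minimal sketch, with a placeholder growth law and a constant coagulation kernel rather than AERFORM's actual models:

        import numpy as np

        def condensation_step(r, dt, G=1e-10):
            """Diffusional growth r dr/dt = G, so r(t+dt) = sqrt(r^2 + 2 G dt)."""
            return np.sqrt(r**2 + 2.0 * G * dt)

        def coagulation_step(n, dt, K=1e-9):
            """Constant-kernel coagulation loss: dn/dt = -K n^2 / 2."""
            return n / (1.0 + 0.5 * K * n * dt)

        r, n = 1e-6, 1e8              # droplet radius (m), number density (1/m^3)
        for _ in range(100):          # operator splitting: one process after the other
            r = condensation_step(r, dt=0.1)
            n = coagulation_step(n, dt=0.1)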

  16. RELAP5/MOD3 code manual: Code structure, system models, and solution methods. Volume 1

    SciTech Connect

    1995-08-01

    The RELAP5 code has been developed for best estimate transient simulation of light water reactor coolant systems during postulated accidents. The code models the coupled behavior of the reactor coolant system and the core for loss-of-coolant accidents and operational transients, such as anticipated transient without scram, loss of offsite power, loss of feedwater, and loss of flow. A generic modeling approach is used that permits simulating a variety of thermal hydraulic systems. Control system and secondary system components are included to permit modeling of plant controls, turbines, condensers, and secondary feedwater systems. RELAP5/MOD3 code documentation is divided into seven volumes: Volume I provides modeling theory and associated numerical schemes.

  17. Modelling the distribution of salaries

    NASA Astrophysics Data System (ADS)

    Rawal, S.; Rodgers, G. J.; Yap, Y. J.

    2005-11-01

    In this paper, we study analytically a simple model of salary distributions where two individuals (employees) who both work for the same organisation compare salaries. The higher paid individual does nothing but the lower paid individual leaves the organisation and is replaced by another, whose salary is picked from a power law distribution. We find that the resulting distribution is also power law, but with a different exponent. We also introduce variations to this simple model and find that the resulting distribution depends on the distribution from which the new individual's salary is chosen, and that the exponent of the resulting distribution depends on the total number of individuals comparing salaries. Finally, we compare the mean field version and a finite-dimension 1-d version of the model by carrying out numerical simulations.
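
    A minimal Monte Carlo sketch of one reading of the comparison rule (parameter values are illustrative):

        import random

        def new_salary(alpha=2.0):
            """Newcomer's salary drawn from a power-law (Pareto) distribution."""
            return random.paretovariate(alpha)

        N, steps = 1000, 100_000
        salaries = [new_salary() for _ in range(N)]
        for _ in range(steps):
            i, j = random.sample(range(N), 2)       # two employees compare salaries
            loser = i if salaries[i] < salaries[j] else j
            salaries[loser] = new_salary()          # the lower paid one is replaced
        # The tail of `salaries` can now be fitted to check the new exponent.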

  18. Generic Distributed Systems Model

    DTIC Science & Technology

    1989-03-01

    networking of microcomputers or workstations with a distributed system and a clear distinction between the two needs to be made. What is expected in a... Information pertaining to locations and policy can be combined with the initial diagram to produce a partitioned DFD. The bold lines represent services which... [PRA85] D.K. Pradhan, "Fault-tolerant multiprocessor link and bus network architectures," IEEE Trans. on Computers, Vol. 34, No. 1, Jan. 1985, pp. 33

  19. A realistic model under which the genetic code is optimal.

    PubMed

    Buhrman, Harry; van der Gulik, Peter T S; Klau, Gunnar W; Schaffner, Christian; Speijer, Dave; Stougie, Leen

    2013-10-01

    The genetic code has a high level of error robustness. Using values of hydrophobicity scales as a proxy for amino acid character, and the mean square measure as a function quantifying error robustness, a value can be obtained for a genetic code which reflects the error robustness of that code. By comparing this value with a distribution of values belonging to codes generated by random permutations of amino acid assignments, the level of error robustness of a genetic code can be quantified. We present a calculation in which the standard genetic code is shown to be optimal. We obtain this result by (1) using recently updated values of polar requirement as input; (2) fixing seven assignments (Ile, Trp, His, Phe, Tyr, Arg, and Leu) based on aptamer considerations; and (3) using known biosynthetic relations of the 20 amino acids. This last point is reflected in an approach of subdivision (restricting the random reallocation of assignments to amino acid subgroups, the set of 20 being divided into four such subgroups). The three approaches to explain robustness of the code (specific selection for robustness, amino acid-RNA interactions leading to assignments, or a slow growth process of assignment patterns) are reexamined in light of our findings. We offer a comprehensive hypothesis, stressing the importance of biosynthetic relations, with the code evolving from an early stage with just glycine and alanine, via intermediate stages, towards 64 codons carrying today's meaning.
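
    The permutation test described above can be sketched as follows. The three-amino-acid toy code and the direct per-codon shuffle are simplifications (the paper permutes amino acids among codon blocks, within biosynthetically motivated subgroups); the polar requirement values shown are Woese's for Gly, Ala and Val.

        import itertools, random

        BASES = 'UCAG'
        code = {b1 + b2 + b3: random.choice('GAV')     # toy codon -> amino acid map
                for b1, b2, b3 in itertools.product(BASES, repeat=3)}
        value = {'G': 7.9, 'A': 7.0, 'V': 5.6}         # polar requirement values

        def ms_error(c):
            """Mean square value change over all single-base codon changes."""
            diffs = []
            for codon, aa in c.items():
                for pos in range(3):
                    for b in BASES:
                        if b != codon[pos]:
                            mutant = codon[:pos] + b + codon[pos + 1:]
                            diffs.append((value[aa] - value[c[mutant]]) ** 2)
            return sum(diffs) / len(diffs)

        observed = ms_error(code)
        aas, better = list(code.values()), 0
        for _ in range(1000):                          # random reassignments
            random.shuffle(aas)
            if ms_error(dict(zip(code.keys(), aas))) <= observed:
                better += 1
        print('fraction of random codes at least as robust:', better / 1000)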

  20. Cavitation Modeling in Euler and Navier-Stokes Codes

    NASA Technical Reports Server (NTRS)

    Deshpande, Manish; Feng, Jinzhang; Merkle, Charles L.

    1993-01-01

    Many previous researchers have modeled sheet cavitation by means of a constant pressure solution in the cavity region coupled with a velocity potential formulation for the outer flow. The present paper discusses the issues involved in extending these cavitation models to Euler or Navier-Stokes codes. The approach taken is to start from a velocity potential model, to ensure our results are compatible with those of previous researchers and available experimental data, and then to implement this model in both Euler and Navier-Stokes codes. The model is then augmented in the Navier-Stokes code by the inclusion of the energy equation, which allows the effect of subcooling in the vicinity of the cavity interface to be modeled and thereby captures the experimentally observed reduction in cavity pressures that occurs in cryogenic fluids such as liquid hydrogen. Although our goal is to assess the practicality of implementing these cavitation models in existing three-dimensional turbomachinery codes, the emphasis in the present paper centers on two-dimensional computations, most specifically isolated airfoils and cascades. Comparisons between the velocity potential, Euler and Navier-Stokes implementations indicate that they all produce consistent predictions. Comparisons with experimental results also indicate that the predictions are qualitatively correct and give a reasonable first estimate of sheet cavitation effects in both cryogenic and non-cryogenic fluids. The impact on CPU time and the code modifications required suggest that these models are appropriate for incorporation in current generation turbomachinery codes.

  1. MOMDIS: a Glauber model computer code for knockout reactions

    NASA Astrophysics Data System (ADS)

    Bertulani, C. A.; Gade, A.

    2006-09-01

    A computer program is described to calculate momentum distributions in stripping and diffraction dissociation reactions. A Glauber model is used with the scattering wavefunctions calculated in the eikonal approximation. The program is appropriate for knockout reactions at intermediate energy collisions (30 MeV ⩽ E/nucleon ⩽ 2000 MeV). It is particularly useful for reactions involving unstable nuclear beams, or exotic nuclei (e.g., neutron-rich nuclei), and studies of single-particle occupancy probabilities (spectroscopic factors) and other related physical observables. Such studies are an essential part of the scientific program of radioactive beam facilities, as, for instance, the proposed RIA (Rare Isotope Accelerator) facility in the US.

    Program summary
    Title of program: MOMDIS (MOMentum DIStributions)
    Catalogue identifier: ADXZ_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXZ_v1_0
    Computers: the code has been created on an IBM-PC, but also runs on UNIX or LINUX machines
    Operating systems: WINDOWS or UNIX
    Program language used: Fortran-77
    Memory required to execute with typical data: 16 Mbytes of RAM memory and 2 MB of hard disk space
    No. of lines in distributed program, including test data, etc.: 6255
    No. of bytes in distributed program, including test data, etc.: 63 568
    Distribution format: tar.gz
    Nature of physical problem: the program calculates bound wavefunctions, eikonal S-matrices, total cross-sections and momentum distributions of interest in nuclear knockout reactions at intermediate energies.
    Method of solution: solves the radial Schrödinger equation for bound states. A Numerov integration is used outwardly and inwardly and a matching at the nuclear surface is done to obtain the energy and the bound state wavefunction with good accuracy. The S-matrices are obtained using eikonal wavefunctions and the "t-ρρ" method to obtain the eikonal phase-shifts. The momentum distributions are obtained by means of a Gaussian expansion of

  2. 24 CFR 200.925c - Model codes.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Workman Mill Road, Whittier, California 90601. (2) National Electrical Code, NFPA 70, 1993 Edition... energy requirements for multifamily or care-type structures; and (iii) Those provisions of the model...

  3. High-capacity quantum Fibonacci coding for key distribution

    NASA Astrophysics Data System (ADS)

    Simon, David S.; Lawrence, Nate; Trevino, Jacob; Dal Negro, Luca; Sergienko, Alexander V.

    2013-03-01

    Quantum cryptography and quantum key distribution (QKD) have been the most successful applications of quantum information processing, highlighting the unique capability of quantum mechanics, through the no-cloning theorem, to securely share encryption keys between two parties. Here, we present an approach to high-capacity, high-efficiency QKD by exploiting cross-disciplinary ideas from quantum information theory and the theory of light scattering of aperiodic photonic media. We propose a unique type of entangled-photon source, as well as a physical mechanism for efficiently sharing keys. The key-sharing protocol combines entanglement with the mathematical properties of a recursive sequence to allow a realization of the physical conditions necessary for implementation of the no-cloning principle for QKD, while the source produces entangled photons whose orbital angular momenta (OAM) are in a superposition of Fibonacci numbers. The source is used to implement a particular physical realization of the protocol by randomly encoding the Fibonacci sequence onto entangled OAM states, allowing secure generation of long keys from few photons. Unlike in polarization-based protocols, reference frame alignment is unnecessary, while the required experimental setup is simpler than that of other OAM-based protocols capable of achieving the same capacity, and its complexity grows less rapidly with increasing range of OAM used.

  4. EM modeling for GPIR using 3D FDTD modeling codes

    SciTech Connect

    Nelson, S.D.

    1994-10-01

    An analysis of the one-, two-, and three-dimensional electrical characteristics of structural cement and concrete is presented. This work connects experimental efforts in characterizing cement and concrete in the frequency and time domains with the Finite Difference Time Domain (FDTD) modeling efforts for these substances. These efforts include electromagnetic (EM) modeling of simple lossless homogeneous materials with aggregate and targets, and the modeling of dispersive and lossy materials with aggregate and complex target geometries for Ground Penetrating Imaging Radar (GPIR). Two- and three-dimensional FDTD codes (developed at LLNL) were used for the modeling efforts. The purpose of the experimental and modeling efforts is to gain knowledge about the electrical properties of concrete typically used in the construction industry for bridges and other load bearing structures. The goal is to optimize the performance of a high-sample-rate impulse radar and data acquisition system and to design an antenna system to match the characteristics of this material. Results show agreement to within 2 dB between the amplitudes of the experimental and modeled data, while the frequency peaks correlate to within 10%, the differences being due to the unknown exact nature of the aggregate placement.
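
    For orientation, a minimal one-dimensional FDTD update loop of the kind extended to 2-D/3-D lossy, dispersive media in this work; the grid, source, and concrete-like permittivity are illustrative:

        import numpy as np

        c0, eps0, mu0 = 3e8, 8.854e-12, 4e-7 * np.pi
        nz, nt, dz = 400, 1000, 1e-3          # cells, time steps, cell size (m)
        dt = dz / (2 * c0)                    # time step within the Courant limit
        eps_r = 6.0                           # relative permittivity (concrete-like)

        Ex, Hy = np.zeros(nz), np.zeros(nz - 1)
        for n in range(nt):
            Hy += dt / (mu0 * dz) * np.diff(Ex)                  # update H from curl E
            Ex[1:-1] += dt / (eps0 * eps_r * dz) * np.diff(Hy)   # update E from curl H
            Ex[50] += np.exp(-((n - 60) / 20.0) ** 2)            # soft Gaussian source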

  5. Fire modeling code comparisons. Final report

    SciTech Connect

    Mowrer, F.W.; Gautier, B.

    1998-09-01

    There is a significant effort taking place in the US nuclear power industry to provide an option for risk-informed/performance-based fire protection programs. One of the requirements for such a program is the ability to deterministically model the characteristics and consequences of a postulated fire in terms of initiation, growth and propagation. There are two general classifications of methods to accomplish this, namely computational fluid dynamics models and the more simplified zone models. For many applications, zone models will provide adequate results. Zone models have been used for probabilistic risk assessments and fire hazard analyses. However, a lack of comparative verification and established confidence in the results has limited their application. This report compares the features and capabilities of four zone-type fire models: FIVE, CFAST, COMPBRNIIIe, and MAGIC. The main features of the models are documented in matrix form. The models are benchmarked against three series of existing large-scale fire tests.

  6. Verification of thermal analysis codes for modeling solid rocket nozzles

    NASA Technical Reports Server (NTRS)

    Keyhani, M.

    1993-01-01

    One of the objectives of the Solid Propulsion Integrity Program (SPIP) at Marshall Space Flight Center (MSFC) is development of thermal analysis codes capable of accurately predicting the temperature field, pore pressure field and the surface recession experienced by decomposing polymers which are used as thermal barriers in solid rocket nozzles. The objective of this study is to provide means for verification of thermal analysis codes developed for modeling of flow and heat transfer in solid rocket nozzles. In order to meet the stated objective, a test facility was designed and constructed for measurement of the transient temperature field in a sample composite subjected to a constant heat flux boundary condition. The heating was provided via a steel thin-foil with a thickness of 0.025 mm. The designed electrical circuit can provide a heating rate of 1800 W. The heater was sandwiched between two identical samples, thus ensuring equal power distribution between them. The samples were fitted with Type K thermocouples, and the exact locations of the thermocouples were determined via X-rays. The experiments were modeled via a one-dimensional code (UT1D) as a conduction and phase change heat transfer process. Since the pyrolysis gas flow was in the direction normal to the heat flow, the numerical model could not account for the convection cooling effect of the pyrolysis gas flow. Therefore, the predicted values in the decomposition zone are considered to be an upper estimate of the temperature. From the analysis of the experimental and the numerical results the following are concluded: (1) The virgin and char specific heat data for FM 5055 as reported by SoRI cannot be used to obtain any reasonable agreement between the measured temperatures and the predictions. However, use of virgin and char specific heat data given in the Acurex report produced good agreement for most of the measured temperatures. (2) A constant heat flux heating process can produce a much higher

  7. Quantization and psychoacoustic model in audio coding in advanced audio coding

    NASA Astrophysics Data System (ADS)

    Brzuchalski, Grzegorz

    2011-10-01

    This paper presents a complete optimized architecture for Advanced Audio Coding (AAC) quantization with Huffman coding. Psychoacoustic model theory is then presented and a few algorithms are described: the standard Two-Loop Search, its modifications, Genetic, Just Noticeable Level Difference, Trellis-Based, and its modification, the Cascaded Trellis-Based Algorithm.
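
    A sketch of the standard Two-Loop Search named above: an inner (rate) loop raises the global gain until the coded bits fit the budget, and an outer (distortion) loop amplifies spectral lines whose quantization noise exceeds the masking threshold. The bit estimate is a stand-in for Huffman counting, amplification is per line rather than per scalefactor band, and all values are illustrative.

        import numpy as np

        def quantize(spec, gain):
            """AAC-style nonuniform quantizer (power 3/4) under a global gain."""
            return np.sign(spec) * np.floor((np.abs(spec) * 2.0 ** (-gain / 4)) ** 0.75 + 0.4054)

        def estimate_bits(q):
            """Stand-in for Huffman bit counting."""
            return float(np.sum(np.log2(np.abs(q) + 1) + 1))

        def two_loop_search(spec, masking, bit_budget, max_outer=30):
            amp = np.zeros_like(spec)              # per-line amplification
            for _ in range(max_outer):             # outer loop: distortion control
                gain = 0
                while estimate_bits(quantize(spec * 2.0 ** amp, gain)) > bit_budget:
                    gain += 1                      # inner loop: rate control
                q = quantize(spec * 2.0 ** amp, gain)
                recon = np.sign(q) * np.abs(q) ** (4 / 3) * 2.0 ** (gain / 4) / 2.0 ** amp
                noise = (spec - recon) ** 2
                if not (noise > masking).any():
                    break                          # all noise below masking threshold
                amp[noise > masking] += 0.25       # amplify offending lines, retry
            return q, gain

        q, gain = two_loop_search(np.random.randn(64) * 10,
                                  masking=np.full(64, 0.5), bit_budget=256)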

  8. Development of a Model and Computer Code to Describe Solar Grade Silicon Production Processes

    NASA Technical Reports Server (NTRS)

    Srivastava, R.; Gould, R. K.

    1979-01-01

    Mathematical models, and computer codes based on these models, were developed to allow prediction of the product distribution in chemical reactors that convert gaseous silicon compounds to condensed-phase silicon. The following tasks were accomplished: (1) formulation of a model for silicon vapor separation/collection from the developing turbulent flow stream within reactors of the Westinghouse type; (2) modification of an available general parabolic code to achieve solutions to the governing partial differential equations (boundary layer type) which describe migration of the vapor to the reactor walls; (3) a parametric study using the boundary layer code to optimize the performance characteristics of the Westinghouse reactor; (4) calculations relating to the collection efficiency of the new AeroChem reactor; and (5) final testing of the modified LAPP code for use as a method of predicting Si(l) droplet sizes in these reactors.

  9. Effect of error distribution in channel coding failure on MPEG wireless transmission

    NASA Astrophysics Data System (ADS)

    Robert, P. M.; Darwish, Ahmed M.; Reed, Jeffrey H.

    1998-12-01

    This paper examines the interaction between digital video and channel coding in a wireless communication system. Digital video is a high-bandwidth, computationally intensive application. The recent allocation of large tracts of spectrum by the FCC has made possible the design and implementation of personal wireless digital video devices for several applications, from personal communications to surveillance. A simulation tool was developed to explore the video/channel coding relationship. This tool simulates a packet-based digital wireless transmission in various noise and interference environments. The basic communications system models the DAVIC (Digital Audio-Visual Council) layout for the LMDS (Local Multipoint Distribution Service) system and includes several error control algorithms and a packetizing algorithm that is MPEG-compliant. The Bit-Error-Rate (BER) is a basic metric used in digital communications system design. This work presents simulation results showing that BER is not a sufficient metric to predict video quality based on channel parameters. Evidence is presented to show that the relative positioning of bit errors, regardless of absolute positioning, and the relative occurrence of these bit error bursts are the main factors that must be observed in a physical layer to design a digital video wireless system.
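
    The burst positioning the abstract emphasizes can be reproduced with the standard Gilbert-Elliott two-state channel model (used here purely as an illustration; the paper's simulator is not necessarily built on it, and the transition probabilities are illustrative):

        import random

        def gilbert_elliott(n, p_gb, p_bg, ber_good=1e-6, ber_bad=0.1):
            """Yield 1 for a bit error, 0 otherwise, from a two-state Markov chain."""
            errors, bad = [], False
            for _ in range(n):
                bad = (random.random() < p_gb) if not bad else (random.random() >= p_bg)
                errors.append(1 if random.random() < (ber_bad if bad else ber_good) else 0)
            return errors

        bursty = gilbert_elliott(10**6, p_gb=1e-4, p_bg=1e-2)   # long error bursts
        # At the same average BER, independent errors would corrupt far more
        # MPEG packets than these clustered ones -- the effect studied above.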

  10. The random energy model in a magnetic field and joint source channel coding

    NASA Astrophysics Data System (ADS)

    Merhav, Neri

    2008-09-01

    We demonstrate that there is an intimate relationship between the magnetic properties of Derrida’s random energy model (REM) of spin glasses and the problem of joint source-channel coding in Information Theory. In particular, typical patterns of erroneously decoded messages in the coding problem have “magnetization” properties that are analogous to those of the REM in certain phases, where the non-uniformity of the distribution of the source in the coding problem plays the role of an external magnetic field applied to the REM. We also relate the ensemble performance (random coding exponents) of joint source-channel codes to the free energy of the REM in its different phases.

  11. ADVANCED ELECTRIC AND MAGNETIC MATERIAL MODELS FOR FDTD ELECTROMAGNETIC CODES

    SciTech Connect

    Poole, B R; Nelson, S D; Langdon, S

    2005-05-05

    The modeling of dielectric and magnetic materials in the time domain is required for pulse power applications, pulsed induction accelerators, and advanced transmission lines. For example, most induction accelerator modules require the use of magnetic materials to provide adequate Volt-sec during the acceleration pulse. These models require hysteresis and saturation to simulate the saturation wavefront in a multipulse environment. In high voltage transmission line applications such as shock or soliton lines the dielectric is operating in a highly nonlinear regime, which requires nonlinear models. Simple 1-D models are developed for fast parameterization of transmission line structures. In the case of nonlinear dielectrics, a simple analytic model describing the permittivity in terms of electric field is used in a 3-D finite-difference time-domain (FDTD) code. In the case of magnetic materials, both rate independent and rate dependent Hodgdon magnetic material models have been implemented into 3-D FDTD codes and 1-D codes.

  12. A unified model of the standard genetic code

    PubMed Central

    Morgado, Eberto R.

    2017-01-01

    The Rodin–Ohno (RO) and the Delarue models divide the table of the genetic code into two classes of aminoacyl-tRNA synthetases (aaRSs I and II) with recognition from the minor or major groove sides of the tRNA acceptor stem, respectively. These models are asymmetric but they are biologically meaningful. On the other hand, the standard genetic code (SGC) can be derived from the primeval RNY code (R stands for purines, Y for pyrimidines and N any of them). In this work, the RO-model is derived by means of group actions, namely, symmetries represented by automorphisms, assuming that the SGC originated from a primeval RNY code. It turns out that the RO-model is symmetric in a six-dimensional (6D) hypercube. Conversely, using the same automorphisms, we show that the RO-model can lead to the SGC. In addition, the asymmetric Delarue model becomes symmetric by means of quotient group operations. We formulate isometric functions that convert the class aaRS I into the class aaRS II and vice versa. We show that the four polar requirement categories display a symmetrical arrangement in our 6D hypercube. Altogether, these results cannot be attained in either two or three dimensions. We discuss the present unified 6D algebraic model, which is compatible with both the SGC (based upon the primeval RNY code) and the RO-model. PMID:28405378

  13. A unified model of the standard genetic code.

    PubMed

    José, Marco V; Zamudio, Gabriel S; Morgado, Eberto R

    2017-03-01

    The Rodin-Ohno (RO) and the Delarue models divide the table of the genetic code into two classes of aminoacyl-tRNA synthetases (aaRSs I and II) with recognition from the minor or major groove sides of the tRNA acceptor stem, respectively. These models are asymmetric but they are biologically meaningful. On the other hand, the standard genetic code (SGC) can be derived from the primeval RNY code (R stands for purines, Y for pyrimidines and N any of them). In this work, the RO-model is derived by means of group actions, namely, symmetries represented by automorphisms, assuming that the SGC originated from a primeval RNY code. It turns out that the RO-model is symmetric in a six-dimensional (6D) hypercube. Conversely, using the same automorphisms, we show that the RO-model can lead to the SGC. In addition, the asymmetric Delarue model becomes symmetric by means of quotient group operations. We formulate isometric functions that convert the class aaRS I into the class aaRS II and vice versa. We show that the four polar requirement categories display a symmetrical arrangement in our 6D hypercube. Altogether, these results cannot be attained in either two or three dimensions. We discuss the present unified 6D algebraic model, which is compatible with both the SGC (based upon the primeval RNY code) and the RO-model.

  14. Molecular Code Division Multiple Access: Gaussian Mixture Modeling

    NASA Astrophysics Data System (ADS)

    Zamiri-Jafarian, Yeganeh

    Communication between nano-devices is an emerging research field in nanotechnology. Molecular Communication (MC), which is a bio-inspired paradigm, is a promising technique for communication in nano-networks. In MC, molecules are administered to exchange information among nano-devices. Due to the nature of molecular signals, traditional communication methods cannot be directly applied to the MC framework. The objective of this thesis is to present novel diffusion-based MC methods for scenarios where multiple nano-devices communicate with each other in the same environment. A new channel model and detection technique, along with a molecular-based access method, are proposed here for communication between asynchronous users. In this work, the received molecular signal is modeled as a Gaussian mixture distribution when the MC system undergoes Brownian noise and inter-symbol interference (ISI). This novel approach provides a suitable model for the diffusion-based MC system. Using the proposed Gaussian mixture model, a simple receiver is designed by minimizing the error probability. To determine an optimum detection threshold, an iterative algorithm is derived which minimizes a linear approximation of the error probability function. Also, a memory-based receiver is proposed to improve the performance of the MC system by considering previously detected symbols in obtaining the threshold value. Numerical evaluations reveal that theoretical analysis of the bit error rate (BER) performance based on the Gaussian mixture model matches simulation results very closely. Furthermore, in this thesis, molecular code division multiple access (MCDMA) is proposed to overcome the inter-user interference (IUI) caused by asynchronous users communicating in a shared propagation environment. Based on the selected molecular codes, a chip detection scheme with an adaptable threshold value is developed for the MCDMA system when the proposed Gaussian mixture model is considered. Results indicate that the
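
    The threshold choice described above can be sketched by modeling the received signal for each bit as a Gaussian (one mixture component per bit here, for brevity; ISI makes each conditional density a mixture in the thesis) and scanning for the threshold that minimizes the error probability. All parameter values are illustrative.

        import numpy as np
        from scipy.stats import norm

        mu0, sd0 = 20.0, 5.0      # received molecule count statistics given bit 0
        mu1, sd1 = 50.0, 9.0      # received molecule count statistics given bit 1
        p0 = 0.5                  # prior probability of bit 0

        def error_prob(th):
            """P(error) = p0 P(x > th | 0) + (1 - p0) P(x < th | 1)."""
            return p0 * norm.sf(th, mu0, sd0) + (1 - p0) * norm.cdf(th, mu1, sd1)

        ths = np.linspace(mu0, mu1, 2001)
        best = ths[np.argmin([error_prob(t) for t in ths])]
        print('optimal threshold:', best, 'P(error):', error_prob(best))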

  15. Spin-glass models as error-correcting codes

    NASA Astrophysics Data System (ADS)

    Sourlas, Nicolas

    1989-06-01

    DURING the transmission of information, errors may occur because of the presence of noise, such as thermal noise in electronic signals or interference with other sources of radiation. One wants to recover the information with the minimum error possible. In theory this is possible by increasing the power of the emitter source. But as the cost is proportional to the energy fed into the channel, it costs less to code the message before sending it, thus including redundant 'coding' bits, and to decode at the end. Coding theory provides rigorous bounds on the cost-effectiveness of any code. The explicit codes proposed so far for practical applications do not saturate these bounds; that is, they do not achieve optimal cost-efficiency. Here we show that theoretical models of magnetically disordered materials (spin glasses) provide a new class of error-correction codes. Their cost performance can be calculated using the methods of statistical mechanics, and is found to be excellent. These models can, under certain circumstances, constitute the first known codes to saturate Shannon's well-known cost-performance bounds.
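
    The construction, stated here from the general literature on Sourlas codes rather than quoted from the paper: message bits are mapped to Ising spins and the transmitted symbols are p-spin products, so that decoding the noisy received couplings amounts to finding the ground state of a spin-glass Hamiltonian,

        % message bits \sigma_i = \pm 1; transmitted symbols are the products
        % J^0_{i_1 \dots i_p} = \sigma_{i_1} \cdots \sigma_{i_p}; decoding the
        % noisy received couplings J means minimizing
        H(\{s\}) = - \sum_{\langle i_1 \cdots i_p \rangle}
                   J_{i_1 \cdots i_p}\, s_{i_1} s_{i_2} \cdots s_{i_p},

    which for random couplings is exactly the energy function of a p-spin spin glass.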

  16. LMFBR models for the ORIGEN2 computer code

    SciTech Connect

    Croff, A.G.; McAdoo, J.W.; Bjerke, M.A.

    1981-10-01

    Reactor physics calculations have led to the development of nine liquid-metal fast breeder reactor (LMFBR) models for the ORIGEN2 computer code. Four of the models are based on the U-Pu fuel cycle, two are based on the Th-U-Pu fuel cycle, and three are based on the Th-²³⁸U fuel cycle. The reactor models are based on cross sections taken directly from the reactor physics codes. Descriptions of the reactor models as well as values for the ORIGEN2 flux parameters THERM, RES, and FAST are given.

  17. LMFBR models for the ORIGEN2 computer code

    SciTech Connect

    Croff, A.G.; McAdoo, J.W.; Bjerke, M.A.

    1983-06-01

    Reactor physics calculations have led to the development of nine liquid-metal fast breeder reactor (LMFBR) models for the ORIGEN2 computer code. Four of the models are based on the U-Pu fuel cycle, two are based on the Th-U-Pu fuel cycle, and three are based on the Th-²³³U fuel cycle. The reactor models are based on cross sections taken directly from the reactor physics codes. Descriptions of the reactor models as well as values for the ORIGEN2 flux parameters THERM, RES, and FAST are given.

  18. Fast Coding Unit Encoding Mechanism for Low Complexity Video Coding.

    PubMed

    Gao, Yuan; Liu, Pengyu; Wu, Yueying; Jia, Kebin; Gao, Guandong

    2016-01-01

    In high efficiency video coding (HEVC), the coding tree contributes to excellent compression performance. However, the coding tree brings extremely high computational complexity. This paper presents innovative work on improving the coding tree to further reduce encoding time. A novel low complexity coding tree mechanism is proposed for HEVC fast coding unit (CU) encoding. Firstly, this paper makes an in-depth study of the relationship among CU distribution, quantization parameter (QP) and content change (CC). Secondly, a CU coding tree probability model is proposed for modeling and predicting CU distribution. Eventually, a CU coding tree probability update is proposed, aiming to address probabilistic model distortion problems caused by CC. Experimental results show that the proposed low complexity CU coding tree mechanism significantly reduces encoding time by 27% for lossy coding and 42% for visually lossless coding and lossless coding. The proposed low complexity CU coding tree mechanism is devoted to improving coding performance under various application conditions.

  19. Fast Coding Unit Encoding Mechanism for Low Complexity Video Coding

    PubMed Central

    Wu, Yueying; Jia, Kebin; Gao, Guandong

    2016-01-01

    In high efficiency video coding (HEVC), the coding tree contributes to excellent compression performance. However, the coding tree brings extremely high computational complexity. This paper presents innovative work on improving the coding tree to further reduce encoding time. A novel low complexity coding tree mechanism is proposed for HEVC fast coding unit (CU) encoding. Firstly, this paper makes an in-depth study of the relationship among CU distribution, quantization parameter (QP) and content change (CC). Secondly, a CU coding tree probability model is proposed for modeling and predicting CU distribution. Eventually, a CU coding tree probability update is proposed, aiming to address probabilistic model distortion problems caused by CC. Experimental results show that the proposed low complexity CU coding tree mechanism significantly reduces encoding time by 27% for lossy coding and 42% for visually lossless coding and lossless coding. The proposed low complexity CU coding tree mechanism is devoted to improving coding performance under various application conditions. PMID:26999741

  20. SAMICS marketing and distribution model

    NASA Technical Reports Server (NTRS)

    1978-01-01

    SAMICS (Solar Array Manufacturing Industry Costing Standards) was formulated as a computer simulation model. Given a proper description of the manufacturing technology as input, this model computes the manufacturing price of solar arrays for a broad range of production levels. This report presents a model for computing the associated marketing and distribution costs, the end point of the model being the loading dock of the final manufacturer.

  1. A grid-based coulomb collision model for PIC codes

    SciTech Connect

    Jones, M.E.; Lemons, D.S.; Mason, R.J.; Thomas, V.A.; Winske, D.

    1996-01-01

    A new method is presented to model the intermediate regime between collisionless and Coulomb collision dominated plasmas in particle-in-cell codes. Collisional processes between particles of different species are treated through the concept of a grid-based "collision field," which can be particularly efficient for multi-dimensional applications. In this method, particles are scattered using a force which is determined from the moments of the distribution functions accumulated on the grid. The form of the force is such as to reproduce the multi-fluid transport equations through the second (energy) moment. Collisions between particles of the same species require a separate treatment. For this, a Monte Carlo-like scattering method based on the Langevin equation is used. The details of both methods are presented, and their implementation in a new hybrid (particle ion, massless fluid electron) algorithm is described. Aspects of the collision model are illustrated through several one- and two-dimensional test problems as well as examples involving laser-produced colliding plasmas.
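
    A minimal sketch of Langevin-equation scattering of the kind used for like particles, for one velocity component: a drag toward the species mean velocity plus a matched random kick, which preserves the mean and temperature on average. The collision frequency and all values are illustrative, not the paper's implementation.

        import numpy as np

        rng = np.random.default_rng(1)
        v = rng.normal(0.0, 2.0, size=10_000)   # particle velocities, one component
        nu, dt = 0.5, 0.01                      # collision frequency, time step
        T = v.var()                             # temperature (mass = 1 units)

        for _ in range(1000):
            drag = -nu * (v - v.mean()) * dt                       # relax toward mean
            kick = np.sqrt(2 * nu * T * dt) * rng.normal(size=v.size)
            v += drag + kick                                       # Langevin update
        # The first two moments (mean, temperature) are statistically unchanged.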

  2. Modeling Guidelines for Code Generation in the Railway Signaling Context

    NASA Technical Reports Server (NTRS)

    Ferrari, Alessio; Bacherini, Stefano; Fantechi, Alessandro; Zingoni, Niccolo

    2009-01-01

    Modeling guidelines constitute one of the fundamental cornerstones for Model Based Development. Their relevance is essential when dealing with code generation in the safety-critical domain. This article presents the experience of a railway signaling systems manufacturer on this issue. Introduction of Model-Based Development (MBD) and code generation in the industrial safety-critical sector created a crucial paradigm shift in the development process of dependable systems. While traditional software development focuses on the code, with MBD practices the focus shifts to model abstractions. The change has fundamental implications for safety-critical systems, which still need to guarantee a high degree of confidence also at code level. Usage of the Simulink/Stateflow platform for modeling, which is a de facto standard in control software development, does not ensure by itself production of high-quality dependable code. This issue has been addressed by companies through the definition of modeling rules imposing restrictions on the usage of design tool components, in order to enable production of qualified code. The MAAB Control Algorithm Modeling Guidelines (MathWorks Automotive Advisory Board) [3] is a well-established set of publicly available rules for modeling with Simulink/Stateflow. This set of recommendations has been developed by a group of OEMs and suppliers of the automotive sector with the objective of enforcing and easing the usage of the MathWorks tools within the automotive industry. The guidelines were published in 2001 and afterwards revised in 2007 to integrate some additional rules developed by the Japanese division of MAAB [5]. The scope of the current edition of the guidelines ranges from model maintainability and readability to code generation issues. The rules are conceived as a reference baseline and therefore they need to be tailored to comply with the characteristics of each industrial context. Customization of these

  3. DSD - A Particle Simulation Code for Modeling Dusty Plasmas

    NASA Astrophysics Data System (ADS)

    Joyce, Glenn; Lampe, Martin; Ganguli, Gurudas

    1999-11-01

    The NRL Dynamically Shielded Dust code (DSD) is a particle simulation code developed to study the behavior of strongly coupled, dusty plasmas. The model includes the electrostatic wake effects of plasma ions flowing through plasma electrons, collisions of dust and plasma particles with each other and with neutrals. The simulation model contains the short-range strong forces of a shielded Coulomb system, and the long-range forces that are caused by the wake. It also includes other effects of a flowing plasma such as drag forces. In order to model strongly coupled dust in plasmas, we make use of the techniques of molecular dynamics simulation, PIC simulation, and the "particle-particle/particle-mesh" (P3M) technique of Hockney and Eastwood. We also make use of the dressed test particle representation of Rostoker and Rosenbluth. Many of the techniques we use in the model are common to all PIC plasma simulation codes. The unique properties of the code follow from the accurate representation of both the short-range aspects of the interaction between dust grains, and long-range forces mediated by the complete plasma dielectric response. If the streaming velocity is zero, the potential used in the model reduces to the Debye-Hückel potential, and the simulation is identical to molecular dynamics models of the Yukawa potential. The plasma appears only implicitly through the plasma dispersion function, so it is not necessary in the code to resolve the fast plasma time scales.
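
    With zero streaming velocity, the grain-grain interaction mentioned above reduces to the standard screened Coulomb (Yukawa / Debye-Hückel) form

        % screened Coulomb potential between grains of charge Q at
        % separation r, with Debye length \lambda_D
        \phi(r) = \frac{Q}{4\pi\varepsilon_0\, r}\, e^{-r/\lambda_D},

    which is the limit in which the simulation coincides with molecular dynamics of the Yukawa system.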

  4. Fluid-Rock Interaction Models: Code Release and Results

    NASA Astrophysics Data System (ADS)

    Bolton, E. W.

    2006-12-01

    Numerical models our group has developed for understanding the role of kinetic processes during fluid-rock interaction will be released free to the public. We will also present results that highlight the importance of kinetic processes. The author is preparing manuals describing the numerical methods used, as well as "how-to" guides for using the models. The release will include input files, full in-line code documentation of the FORTRAN source code, and instructions for use of model output for visualization and analysis. The aqueous phase (weathering) and supercritical (mixed-volatile metamorphic) fluid flow and reaction models for porous media will be released separately. These codes will be useful as teaching and research tools. The codes may be run on current generation personal computers. Although other codes are available for attacking some of the problems we address, unique aspects of our codes include sub-grid-scale grain models to track grain size changes, as well as dynamic porosity and permeability. Also, as the flow field can change significantly over the course of the simulation, efficient solution methods have been developed for the repeated solution of Poisson-type equations that arise from Darcy's law. These include sparse-matrix methods as well as the even more efficient spectral-transform technique. Results will be presented for kinetic control of reaction pathways and for heterogeneous media. Codes and documentation for modeling intra-grain diffusion of trace elements and isotopes, and exchange of these between grains and moving fluids will also be released. The unique aspect of this model is that it includes concurrent diffusion and grain growth or dissolution for multiple mineral types (low-diffusion regridding has been developed to deal with the moving-boundary problem at the fluid/mineral interface). Results for finite diffusion rates will be compared to batch and fractional melting models. Additional code and documentation will be released

  5. Implementation of a new model for gravitational collision cross sections in nuclear aerosol codes

    SciTech Connect

    Buckley, R.L.; Loyalka, S.K.

    1995-03-01

    Models currently used in aerosol source codes for the gravitational collision efficiency are deficient in not accounting fully for two-particle hydrodynamics (interception and inertia), which becomes important for larger particles. A computer code that accounts for these effects in calculating particle trajectories is used to find values of the efficiency for a range of particle sizes. Simple fits to these data as a function of large-particle diameter for a given particle diameter ratio are then obtained using standard linear regression, and a new model is constructed. This model is then implemented into two computer codes, AEROMECH and CONTAIN, Version 1.2. For a test problem, concentration distributions obtained with the new model and the standard model for efficiency are found to be markedly different.

  6. Modeling Natural Variation through Distribution

    ERIC Educational Resources Information Center

    Lehrer, Richard; Schauble, Leona

    2004-01-01

    This design study tracks the development of student thinking about natural variation as late elementary grade students learned about distribution in the context of modeling plant growth at the population level. The data-modeling approach assisted children in coordinating their understanding of particular cases with an evolving notion of data as an…

  8. A model of a code of ethics for tissue banks operating in developing countries.

    PubMed

    Morales Pedraza, Jorge

    2012-12-01

    Ethical practice in the field of tissue banking requires the setting of principles, the identification of possible deviations and the establishment of mechanisms that will detect and hinder abuses that may occur during the procurement, processing and distribution of tissues for transplantation. This model of a Code of Ethics has been prepared with the purpose of being used for the elaboration of a Code of Ethics for tissue banks operating in the Latin American and Caribbean, Asian and Pacific, and African regions, in order to guide the day-to-day operation of these banks. The purpose of this model Code of Ethics is to assist interested tissue banks in the preparation of their own Code of Ethics, towards ensuring that the tissue bank staff support with their actions the mission and values associated with tissue banking.

  9. Modeled ground water age distributions

    USGS Publications Warehouse

    Woolfenden, Linda R.; Ginn, Timothy R.

    2009-01-01

    The age of ground water in any given sample is a distributed quantity representing distributed provenance (in space and time) of the water. Conventional analysis of tracers such as unstable isotopes or anthropogenic chemical species gives discrete or binary measures of the presence of water of a given age. Modeled ground water age distributions provide a continuous measure of contributions from different recharge sources to aquifers. A numerical solution of the ground water age equation of Ginn (1999) was tested both on a hypothetical simplified one-dimensional flow system and under real world conditions. Results from these simulations yield the first continuous distributions of ground water age using this model. Complete age distributions as a function of one and two space dimensions were obtained from both numerical experiments. Simulations in the test problem produced mean ages that were consistent with the expected value at the end of the model domain for all dispersivity values tested, although the mean ages for the two highest dispersivity values deviated slightly from the expected value. Mean ages in the dispersionless case also were consistent with the expected mean ages throughout the physical model domain. Simulations under real world conditions for three dispersivity values resulted in decreasing mean age with increasing dispersivity. This likely is a consequence of an edge effect. However, simulations for all three dispersivity values tested were mass balanced and stable demonstrating that the solution of the ground water age equation can provide estimates of water mass density distributions over age under real world conditions.
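
    The age equation of Ginn (1999) referenced above treats age as an additional coordinate, so that the water mass density ρ(x, a, t) is advected and dispersed in space while aging at unit rate. Schematically (our rendering of the standard form, not quoted from the paper):

        % ground water age equation: mass density \rho(x, a, t) of water of
        % age a, velocity v, dispersion tensor D, aging at unit rate
        \frac{\partial \rho}{\partial t}
          + \nabla \cdot \left( \mathbf{v}\, \rho - \mathbf{D}\, \nabla \rho \right)
          + \frac{\partial \rho}{\partial a} = 0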

  10. Modeled ground water age distributions.

    PubMed

    Woolfenden, Linda R; Ginn, Timothy R

    2009-01-01

    The age of ground water in any given sample is a distributed quantity representing distributed provenance (in space and time) of the water. Conventional analysis of tracers such as unstable isotopes or anthropogenic chemical species gives discrete or binary measures of the presence of water of a given age. Modeled ground water age distributions provide a continuous measure of contributions from different recharge sources to aquifers. A numerical solution of the ground water age equation of Ginn (1999) was tested both on a hypothetical simplified one-dimensional flow system and under real world conditions. Results from these simulations yield the first continuous distributions of ground water age using this model. Complete age distributions as a function of one and two space dimensions were obtained from both numerical experiments. Simulations in the test problem produced mean ages that were consistent with the expected value at the end of the model domain for all dispersivity values tested, although the mean ages for the two highest dispersivity values deviated slightly from the expected value. Mean ages in the dispersionless case also were consistent with the expected mean ages throughout the physical model domain. Simulations under real world conditions for three dispersivity values resulted in decreasing mean age with increasing dispersivity. This likely is a consequence of an edge effect. However, simulations for all three dispersivity values tested were mass balanced and stable demonstrating that the solution of the ground water age equation can provide estimates of water mass density distributions over age under real world conditions.

  11. Code-to-Code Comparison, and Material Response Modeling of Stardust and MSL using PATO and FIAT

    NASA Technical Reports Server (NTRS)

    Omidy, Ali D.; Panerai, Francesco; Martin, Alexandre; Lachaud, Jean R.; Cozmuta, Ioana; Mansour, Nagi N.

    2015-01-01

    This report provides a code-to-code comparison between PATO, a recently developed high fidelity material response code, and FIAT, NASA's legacy code for ablation response modeling. The goal is to demonstrate that FIAT and PATO generate the same results when using the same models. Test cases of increasing complexity are used, from both arc-jet testing and flight experiments. When using the exact same physical models, material properties and boundary conditions, the two codes give results that are within 2% of each other. The minor discrepancy is attributed to the inclusion of the gas phase heat capacity (cp) in the energy equation in PATO, and not in FIAT.

  12. A versatile palindromic amphipathic repeat coding sequence horizontally distributed among diverse bacterial and eucaryotic microbes.

    PubMed

    Röske, Kerstin; Foecking, Mark F; Yooseph, Shibu; Glass, John I; Calcutt, Michael J; Wise, Kim S

    2010-07-13

    Intragenic tandem repeats occur throughout all domains of life and impart functional and structural variability to diverse translation products. Repeat proteins confer distinctive surface phenotypes to many unicellular organisms, including those with minimal genomes such as the wall-less bacterial monoderms, Mollicutes. One such repeat pattern in this clade is distributed in a manner suggesting its exchange by horizontal gene transfer (HGT). Expanding genome sequence databases reveal the pattern in a widening range of bacteria, and recently among eucaryotic microbes. We examined the genomic flux and consequences of the motif by determining its distribution, predicted structural features and association with membrane-targeted proteins. Using a refined hidden Markov model, we document a 25-residue protein sequence motif tandemly arrayed in variable-number repeats in ORFs lacking assigned functions. It appears sporadically in unicellular microbes from disparate bacterial and eucaryotic clades, representing diverse lifestyles and ecological niches that include host parasitic, marine and extreme environments. Tracts of the repeats predict a malleable configuration of recurring domains, with conserved hydrophobic residues forming an amphipathic secondary structure in which hydrophilic residues endow extensive sequence variation. Many ORFs with these domains also have membrane-targeting sequences that predict assorted topologies; others may comprise reservoirs of sequence variants. We demonstrate expressed variants among surface lipoproteins that distinguish closely related animal pathogens belonging to a subgroup of the Mollicutes. DNA sequences encoding the tandem domains display dyad symmetry. Moreover, in some taxa the domains occur in ORFs selectively associated with mobile elements. These features, a punctate phylogenetic distribution, and different patterns of dispersal in genomes of related taxa, suggest that the repeat may be disseminated by HGT and intra

  13. Code System to Model Aqueous Geochemical Equilibria.

    SciTech Connect

    PETERSON, S. R.

    2001-08-23

    Version: 00 MINTEQ is a geochemical program to model aqueous solutions and the interactions of aqueous solutions with hypothesized assemblages of solid phases. It was developed for the Environmental Protection Agency to perform the calculations necessary to simulate the contact of waste solutions with heterogeneous sediments or the interaction of ground water with solidified wastes. MINTEQ can calculate ion speciation/solubility, adsorption, oxidation-reduction, gas phase equilibria, and precipitation/dissolution of solid phases. MINTEQ can accept a finite mass for any solid considered for dissolution and will dissolve the specified solid phase only until its initial mass is exhausted. This ability enables MINTEQ to model flow-through systems. In these systems the masses of solid phases that precipitate at earlier pore volumes can be dissolved at later pore volumes according to thermodynamic constraints imposed by the solution composition and solid phases present. The ability to model these systems permits evaluation of the geochemistry of dissolved trace metals, such as low-level waste in shallow land burial sites. MINTEQ was designed to solve geochemical equilibria for systems composed of one kilogram of water, various amounts of material dissolved in solution, and any solid materials that are present. Systems modeled using MINTEQ can exchange energy and material (open systems) or just energy (closed systems) with the surrounding environment. Each system is composed of a number of phases. Every phase is a region with distinct composition and physically definable boundaries. All of the material in the aqueous solution forms one phase. The gas phase is composed of any gaseous material present, and each compositionally and structurally distinct solid forms a separate phase.

  14. The overlap model: a model of letter position coding.

    PubMed

    Gomez, Pablo; Ratcliff, Roger; Perea, Manuel

    2008-07-01

    Recent research has shown that letter identity and letter position are not integral perceptual dimensions (e.g., jugde primes judge in word-recognition experiments). Most comprehensive computational models of visual word recognition (e.g., the interactive activation model, J. L. McClelland & D. E. Rumelhart, 1981, and its successors) assume that the position of each letter within a word is perfectly encoded. Thus, these models are unable to explain the presence of effects of letter transposition (trial-trail), letter migration (beard-bread), repeated letters (moose-mouse), or subset/superset effects (faulty-faculty). The authors extend R. Ratcliff's (1981) theory of order relations for encoding of letter positions and show that the model can successfully deal with these effects. The basic assumption is that letters in the visual stimulus have distributions over positions so that the representation of one letter will extend into adjacent letter positions. To test the model, the authors conducted a series of forced-choice perceptual identification experiments. The overlap model produced very good fits to the empirical data, and even a simplified 2-parameter model was capable of producing fits for 104 observed data points with a correlation coefficient of .91. Copyright (c) 2008 APA, all rights reserved.
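
    The core overlap idea can be sketched with a toy similarity measure in which each letter's position is a Gaussian distribution over slots, so transposed-letter strings stay more similar than substituted ones. The measure and the width parameter are illustrative, not the paper's fitted model.

        import math

        def overlap(a, b, sigma=0.75):
            """Letter matches weighted by positional proximity."""
            total = 0.0
            for i, ch_a in enumerate(a):
                for j, ch_b in enumerate(b):
                    if ch_a == ch_b:
                        total += math.exp(-((i - j) ** 2) / (2 * sigma ** 2))
            return total / max(len(a), len(b))

        print(overlap('judge', 'jugde'))   # transposition: high similarity
        print(overlap('judge', 'junpe'))   # substitutions: lower similarity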

  15. On the problem of non-zero word error rates for fixed-rate error correction codes in continuous variable quantum key distribution

    NASA Astrophysics Data System (ADS)

    Johnson, Sarah J.; Lance, Andrew M.; Ong, Lawrence; Shirvanimoghaddam, Mahyar; Ralph, T. C.; Symul, Thomas

    2017-02-01

    The maximum operational range of continuous variable quantum key distribution protocols has been shown to improve when high-efficiency forward error correction codes are employed. Typically, the secret key rate model for such protocols is modified to account for the non-zero word error rate of such codes. In this paper, we demonstrate that this model is incorrect: firstly, we show by example that fixed-rate error correction codes, as currently defined, can exhibit efficiencies greater than unity. Secondly, we show that using this secret key model combined with greater-than-unity efficiency codes implies that it is possible to achieve a positive secret key over an entanglement-breaking channel—an impossible scenario. We then consider the secret key model from a post-selection perspective, and examine the implications for the key rate if we constrain the forward error correction codes to operate at low word error rates.
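
    The secret key rate model at issue is commonly written as follows (our rendering of the standard form, with reconciliation efficiency β, Alice-Bob mutual information I_AB, Holevo bound χ_BE on the eavesdropper's information, and word error rate WER):

        % commonly used key rate model with non-zero word error rate
        K = (1 - \mathrm{WER}) \left( \beta\, I_{AB} - \chi_{BE} \right)

    The paper's first observation is that β, as currently defined for fixed-rate codes, can exceed unity, which breaks this expression.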

  16. Synonymous Substitutions in the Xdh Gene of Drosophila: Heterogeneous Distribution along the Coding Region

    PubMed Central

    Comeron, J. M.; Aguade, M.

    1996-01-01

    The Xdh (rosy) region of Drosophila subobscura has been sequenced and compared to the homologous region of D. pseudoobscura and D. melanogaster. Estimates of the numbers of synonymous substitutions per site (Ks) confirm that Xdh has a high synonymous substitution rate. The distributions of both nonsynonymous and synonymous substitutions along the coding region were found to be heterogeneous. Also, no relationship has been detected between Ks estimates and codon usage bias along the gene, in contrast with the generally observed relationship among genes. This heterogeneous distribution of synonymous substitutions along the Xdh gene, which is expression-level independent, could be explained by a differential selection pressure on synonymous sites along the coding region acting on mRNA secondary structure. The synonymous rate in the Xdh coding region is lower in the D. subobscura than in the D. pseudoobscura lineage, whereas the reverse is true for the Adh gene. PMID:8913749

  17. Hybrid decode-amplify-forward (HDAF) scheme in distributed Alamouti-coded cooperative network

    NASA Astrophysics Data System (ADS)

    Gurrala, Kiran Kumar; Das, Susmita

    2015-05-01

    In this article, a signal-to-noise ratio (SNR)-based hybrid decode-amplify-forward scheme in a distributed Alamouti-coded cooperative network is proposed. Considering a flat Rayleigh fading channel environment, the MATLAB simulation and analysis are carried out. In the cooperative scheme, two relays are employed, where each relay transmits one row of the Alamouti code. The selection of the SNR threshold depends on the target rate information. Closed-form expressions for the symbol error rate (SER), the outage probability and the average channel capacity with tight upper bounds are derived and compared with simulations carried out in the MATLAB environment. Furthermore, the impact of relay location on the SER performance is analysed. It is observed that the proposed hybrid relaying technique outperforms the individual amplify-and-forward and decode-and-forward ones in the distributed Alamouti-coded cooperative network.
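
    For reference, the Alamouti space-time block code the relays share is the standard matrix (rows indexed by relay, columns by time slot)

        % each relay transmits one row over two symbol periods
        \mathbf{S} = \begin{pmatrix} s_1 & -s_2^{*} \\ s_2 & s_1^{*} \end{pmatrix},

    which gives full transmit diversity with a simple linear receiver.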

  18. Coupling extended magnetohydrodynamic fluid codes with radiofrequency ray tracing codes for fusion modeling

    NASA Astrophysics Data System (ADS)

    Jenkins, Thomas G.; Held, Eric D.

    2015-09-01

    Neoclassical tearing modes are macroscopic (L ∼ 1 m) instabilities in magnetic fusion experiments; if unchecked, these modes degrade plasma performance and may catastrophically destroy plasma confinement by inducing a disruption. Fortunately, the use of properly tuned and directed radiofrequency waves (λ ∼ 1 mm) can eliminate these modes. Numerical modeling of this difficult multiscale problem requires the integration of separate mathematical models for each length and time scale (Jenkins and Kruger, 2012 [21]); the extended MHD model captures macroscopic plasma evolution while the RF model tracks the flow and deposition of injected RF power through the evolving plasma profiles. The scale separation enables use of the eikonal (ray-tracing) approximation to model the RF wave propagation. In this work we demonstrate a technique, based on methods of computational geometry, for mapping the ensuing RF data (associated with discrete ray trajectories) onto the finite-element/pseudospectral grid that is used to model the extended MHD physics. In the new representation, the RF data can then be used to construct source terms in the equations of the extended MHD model, enabling quantitative modeling of RF-induced tearing mode stabilization. Though our specific implementation uses the NIMROD extended MHD (Sovinec et al., 2004 [22]) and GENRAY RF (Smirnov et al., 1994 [23]) codes, the approach presented can be applied more generally to any code coupling requiring the mapping of ray tracing data onto Eulerian grids.

  19. Model-building codes for membrane proteins.

    SciTech Connect

    Shirley, David Noyes; Hunt, Thomas W.; Brown, W. Michael; Schoeniger, Joseph S.; Slepoy, Alexander; Sale, Kenneth L.; Young, Malin M.; Faulon, Jean-Loup Michel; Gray, Genetha Anne

    2005-01-01

    We have developed a novel approach to modeling the transmembrane spanning helical bundles of integral membrane proteins using only a sparse set of distance constraints, such as those derived from MS3-D, dipolar-EPR and FRET experiments. Algorithms have been written for searching the conformational space of membrane protein folds matching the set of distance constraints, which provides initial structures for local conformational searches. Local conformation search is achieved by optimizing these candidates against a custom penalty function that incorporates both measures derived from statistical analysis of solved membrane protein structures and distance constraints obtained from experiments. This results in refined helical bundles to which the interhelical loops and amino acid side-chains are added. Using a set of only 27 distance constraints extracted from the literature, our methods successfully recover the structure of dark-adapted rhodopsin to within 3.2 Å of the crystal structure.

  20. Offset Manchester coding for Rayleigh noise suppression in carrier-distributed WDM-PONs

    NASA Astrophysics Data System (ADS)

    Xu, Jing; Yu, Xiangyu; Lu, Weichao; Qu, Fengzhong; Deng, Ning

    2015-07-01

    We propose a novel offset Manchester coding in upstream to simultaneously realize Rayleigh noise suppression and differential detection in a carrier-distributed wavelength division multiplexed passive optical network. Error-free transmission of 2.5-Gb/s upstream signals over 50-km standard single mode fiber is experimentally demonstrated, with a 7-dB enhanced tolerance to Rayleigh noise.

  1. Reduced Fast Ion Transport Model For The Tokamak Transport Code TRANSP

    SciTech Connect

    Podesta, Mario; Gorelenkova, Marina; White, Roscoe

    2014-02-28

    Fast ion transport models presently implemented in the tokamak transport code TRANSP [R. J. Hawryluk, in Physics of Plasmas Close to Thermonuclear Conditions, CEC Brussels, 1, 19 (1980)] are not capturing important aspects of the physics associated with resonant transport caused by instabilities such as Toroidal Alfvén Eigenmodes (TAEs). This work describes the implementation of a fast ion transport model consistent with the basic mechanisms of resonant mode-particle interaction. The model is formulated in terms of a probability distribution function for the particle's steps in phase space, which is consistent with the Monte Carlo approach used in TRANSP. The proposed model is based on the analysis of fast ion response to TAE modes through the ORBIT code [R. B. White et al., Phys. Fluids 27, 2455 (1984)], but it can be generalized to higher frequency modes (e.g. Compressional and Global Alfvén Eigenmodes) and to other numerical codes or theories.

  2. Multiview coding mode decision with hybrid optimal stopping model.

    PubMed

    Zhao, Tiesong; Kwong, Sam; Wang, Hanli; Wang, Zhou; Pan, Zhaoqing; Kuo, C-C Jay

    2013-04-01

    In a generic decision process, optimal stopping theory aims to achieve a good tradeoff between decision performance and time consumed, with the advantages of theoretical decision-making and predictable decision performance. In this paper, optimal stopping theory is employed to develop an effective hybrid model for the mode decision problem, which aims to theoretically achieve a good tradeoff between the two interrelated measurements in mode decision, namely computational complexity reduction and rate-distortion degradation. The proposed hybrid model is implemented and examined with a multiview encoder. To support the model and further promote coding performance, the multiview coding mode characteristics, including predicted mode probability and estimated coding time, are jointly investigated with inter-view correlations. Exhaustive experimental results with a wide range of video resolutions reveal the efficiency and robustness of our method, with high decision accuracy, negligible computational overhead, and almost intact rate-distortion performance compared to the original encoder.

  3. Anisotropic Resistivity Forward Modelling Using Automatic Generated Higher-order Finite Element Codes

    NASA Astrophysics Data System (ADS)

    Wang, W.; Liu, J.

    2016-12-01

    Forward modelling is the general way to obtain responses of geoelectrical structures. Field investigators might find it useful for planning surveys and choosing optimal electrode configurations with respect to their targets. During the past few decades much effort has been put into the development of numerical forward codes, such as the integral equation method, the finite difference method and the finite element method. Nowadays, most researchers prefer the finite element method (FEM) for its flexible meshing scheme, which can handle models with complex geometry. Resistivity modelling with commercial software such as ANSYS and COMSOL is convenient, but amounts to working with a black box, and modifying existing codes or developing new ones is a lengthy process. We present a new way to obtain resistivity forward modelling codes quickly, based on the commercial software FEPG (Finite element Program Generator). With just several scripts, FEPG can generate a FORTRAN program framework that can easily be altered to suit our targets. By assuming that the electric potential is quadratic in each element of a two-layer model, we obtain quite accurate results with errors below 1%, whereas errors above 5% can appear with linear FE codes. The anisotropic half-space model is intended to represent vertically distributed fractures. The apparent resistivities measured along the fractures are larger than those from the orthogonal direction, which is the opposite of the true resistivities. Interpretations can be erroneous if this anisotropic paradox is ignored. The technique we use can produce scientific codes in a short time. The generated FORTRAN codes reach accurate results through the higher-order assumption and can handle anisotropy, enabling better interpretations. The method can easily be extended to other domains where FE codes are needed.

  4. Model-Driven Engineering of Machine Executable Code

    NASA Astrophysics Data System (ADS)

    Eichberg, Michael; Monperrus, Martin; Kloppenburg, Sven; Mezini, Mira

    Implementing static analyses of machine-level executable code is labor intensive and complex. We show how to leverage model-driven engineering to facilitate the design and implementation of programs doing static analyses. Further, we report on important lessons learned on the benefits and drawbacks while using the following technologies: using the Scala programming language as target of code generation, using XML-Schema to express a metamodel, and using XSLT to implement (a) transformations and (b) a lint-like tool. Finally, we report on the use of Prolog for writing model transformations.

  5. Data model description for the DESCARTES and CIDER codes

    SciTech Connect

    Miley, T.B.; Ouderkirk, S.J.; Nichols, W.E.; Eslinger, P.W.

    1993-01-01

    The primary objective of the Hanford Environmental Dose Reconstruction (HEDR) Project is to estimate the radiation dose that individuals could have received as a result of emissions since 1944 from the US Department of Energy's (DOE) Hanford Site near Richland, Washington. One of the major objectives of the HEDR Project is to develop several computer codes to model the airborne releases, transport and environmental accumulation of radionuclides resulting from Hanford operations from 1944 through 1972. In July 1992, the HEDR Project Manager determined that the computer codes being developed (DESCARTES, calculation of environmental accumulation from airborne releases, and CIDER, dose calculations from environmental accumulation) were not sufficient to create accurate models. A team of HEDR staff members developed a plan to ensure that the computer codes would meet HEDR Project goals. The plan consists of five tasks: (1) code requirements definition, (2) scoping studies, (3) design specifications, (4) benchmarking, and (5) data modeling. This report defines the data requirements for the DESCARTES and CIDER codes.

  6. Quasispecies distribution of Eigen model

    NASA Astrophysics Data System (ADS)

    Chen, Jia; Li, Sheng; Ma, Hong-Ru

    2007-09-01

    We have studied sharp peak landscapes of the Eigen model from a new perspective about how the quasispecies are distributed in the sequence space. To analyse the distribution more carefully, we bring in two tools. One tool is the variance of the Hamming distance of the sequences at a given generation. It not only offers a different avenue for accurately locating the error threshold and illustrates how the configuration of the distribution varies with copying fidelity q in the sequence space, but also divides the copying fidelity into three distinct regimes. The other tool is the similarity network of a certain Hamming distance d0, by which we can gain a visual and in-depth result about how the sequences are distributed. We find that there are several local similarity optima around the centre (global similarity optimum) in the distribution of the sequences reproduced near the threshold. Furthermore, it is interesting that the distribution of the clustering coefficient C(k) follows a lognormal distribution and the curve of the clustering coefficient C of the network versus d0 appears to be linear near the threshold.
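
    The first diagnostic, the variance of the Hamming distance to the master sequence, is easy to reproduce in a toy sharp-peak Eigen simulation; the population size, genome length, selective advantage and fidelity values below are illustrative, not the paper's.

        import numpy as np

        rng = np.random.default_rng(0)

        def hamming_variance(pop, master):
            """Variance of the Hamming distance to the master sequence across
            a population: pop is an (N, L) 0/1 array, master an (L,) array."""
            d = (pop != master).sum(axis=1)
            return d.var()

        L, N, generations = 20, 500, 200
        master = np.zeros(L, dtype=int)
        pop0 = np.tile(master, (N, 1))                 # start at the peak
        for q in (0.999, 0.85):                        # per-site copying fidelity
            p = pop0.copy()
            for _ in range(generations):
                # Sharp-peak landscape: master sequence has 10x fitness.
                fitness = np.where((p == master).all(axis=1), 10.0, 1.0)
                parents = rng.choice(N, size=N, p=fitness / fitness.sum())
                p = p[parents]
                p = p ^ (rng.random(p.shape) > q)      # mutation with fidelity q
            # Small variance while localized; large once past the threshold.
            print(q, hamming_variance(p, master))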

  7. Cost effectiveness of the 1995 model energy code in Massachusetts

    SciTech Connect

    Lucas, R.G.

    1996-02-01

    This report documents an analysis of the cost effectiveness of the Council of American Building Officials' 1995 Model Energy Code (MEC) building thermal-envelope requirements for single-family houses and multifamily housing units in Massachusetts. The goal was to compare the cost effectiveness of the 1995 MEC to the energy conservation requirements of the Massachusetts State Building Code, based on a comparison of the costs and benefits associated with complying with each. This comparison was performed for three cities representing three geographical regions of Massachusetts--Boston, Worcester, and Pittsfield. The analysis was done for two different scenarios: a "move-up" home buyer purchasing a single-family house and a "first-time" financially limited home buyer purchasing a multifamily condominium unit. Natural gas, oil, and electric resistance heating were examined. The Massachusetts state code has much more stringent requirements if electric resistance heating is used rather than other heating fuels and/or equipment types. The MEC requirements do not vary by fuel type. For single-family homes, the 1995 MEC has requirements that are more energy-efficient than the non-electric resistance requirements of the current state code. For multifamily housing, the 1995 MEC has requirements that are approximately as energy-efficient as the non-electric resistance requirements of the current state code. The 1995 MEC is generally not more stringent than the electric resistance requirements of the state code; in fact, for multifamily buildings the 1995 MEC is much less stringent.

  8. Software Model Checking of ARINC-653 Flight Code with MCP

    NASA Technical Reports Server (NTRS)

    Thompson, Sarah J.; Brat, Guillaume; Venet, Arnaud

    2010-01-01

    The ARINC-653 standard defines a common interface for Integrated Modular Avionics (IMA) code. In particular, ARINC-653 Part 1 specifies a process- and partition-management API that is analogous to POSIX threads, but with certain extensions and restrictions intended to support the implementation of high reliability flight code. MCP is a software model checker, developed at NASA Ames, that provides capabilities for model checking C and C++ source code. In this paper, we present recent work aimed at implementing extensions to MCP that support ARINC-653, and we discuss the challenges and opportunities that consequently arise. Providing support for ARINC-653's time and space partitioning is nontrivial, though there are implicit benefits for partial order reduction possible as a consequence of the API's strict interprocess communication policy.

  9. A combinatorial model for dentate gyrus sparse coding

    SciTech Connect

    Severa, William; Parekh, Ojas; James, Conrad D.; Aimone, James B.

    2016-12-29

    The dentate gyrus forms a critical link between the entorhinal cortex and CA3 by providing a sparse version of the signal. Concurrent with this increase in sparsity, a widely accepted theory suggests the dentate gyrus performs pattern separation—similar inputs yield decorrelated outputs. Although an active region of study and theory, few logically rigorous arguments detail the dentate gyrus’s (DG) coding. We suggest a theoretically tractable, combinatorial model for this action. The model provides formal methods for a highly redundant, arbitrarily sparse, and decorrelated output signal. To explore the value of this model framework, we assess how suitable it is for two notable aspects of DG coding: how it can handle the highly structured grid cell representation in the input entorhinal cortex region and the presence of adult neurogenesis, which has been proposed to produce a heterogeneous code in the DG. We find tailoring the model to grid cell input yields expansion parameters consistent with the literature. In addition, the heterogeneous coding reflects activity gradation observed experimentally. Lastly, we connect this approach with more conventional binary threshold neural circuit models via a formal embedding.
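
    A conventional binary-threshold circuit of the kind the authors formally embed their model into can be sketched as follows; the random projection, expansion ratio and sparsity level are illustrative assumptions, not the combinatorial construction itself.

        import numpy as np

        rng = np.random.default_rng(11)

        def dg_sparse_code(x, W, k):
            """Binary-threshold sketch of a DG-style expansion: project the EC
            input through a wide random weight matrix and keep the top-k units.
            Parameters are illustrative, not the paper's model."""
            activation = W @ x
            code = np.zeros(W.shape[0], dtype=int)
            code[np.argsort(activation)[-k:]] = 1     # enforce sparsity level k
            return code

        n_ec, n_dg, k = 100, 1000, 20                  # 10x expansion, 2% activity
        W = rng.standard_normal((n_dg, n_ec))
        a = rng.standard_normal(n_ec)
        b = a + 0.1 * rng.standard_normal(n_ec)        # two similar EC inputs
        ca, cb = dg_sparse_code(a, W, k), dg_sparse_code(b, W, k)
        # Pattern separation: the overlap of the sparse output codes typically
        # falls below the (near-unity) similarity of the inputs.
        print((ca & cb).sum() / k)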

  10. A combinatorial model for dentate gyrus sparse coding

    DOE PAGES

    Severa, William; Parekh, Ojas; James, Conrad D.; ...

    2016-12-29

    The dentate gyrus forms a critical link between the entorhinal cortex and CA3 by providing a sparse version of the signal. Concurrent with this increase in sparsity, a widely accepted theory suggests the dentate gyrus performs pattern separation—similar inputs yield decorrelated outputs. Although an active region of study and theory, few logically rigorous arguments detail the dentate gyrus’s (DG) coding. We suggest a theoretically tractable, combinatorial model for this action. The model provides formal methods for a highly redundant, arbitrarily sparse, and decorrelated output signal. To explore the value of this model framework, we assess how suitable it is for two notable aspects of DG coding: how it can handle the highly structured grid cell representation in the input entorhinal cortex region and the presence of adult neurogenesis, which has been proposed to produce a heterogeneous code in the DG. We find tailoring the model to grid cell input yields expansion parameters consistent with the literature. In addition, the heterogeneous coding reflects activity gradation observed experimentally. Lastly, we connect this approach with more conventional binary threshold neural circuit models via a formal embedding.

  11. Performance and Application of Parallel OVERFLOW Codes on Distributed and Shared Memory Platforms

    NASA Technical Reports Server (NTRS)

    Djomehri, M. Jahed; Rizk, Yehia M.

    1999-01-01

    The presentation discusses recent studies on the performance of the two parallel versions of the aerodynamics CFD code, OVERFLOW_MPI and _MLP. Developed at NASA Ames, the serial version, OVERFLOW, is a multidimensional Navier-Stokes flow solver based on overset (Chimera) grid technology. The code has recently been parallelized in two ways. One is based on the explicit message-passing interface (MPI) across processors and uses the _MPI communication package. This approach is primarily suited for distributed memory systems and workstation clusters. The second, termed the multi-level parallel (MLP) method, is simple and uses shared memory for all communications. The _MLP code is suitable on distributed-shared memory systems. For both methods, the message passing takes place across the processors or processes at the advancement of each time step. This procedure is, in effect, the Chimera boundary conditions update, which is done in an explicit "Jacobi" style. In contrast, the update in the serial code is done in more of the "Gauss-Seidel" fashion. The programming effort for the _MPI code is greater than for the _MLP code; the former requires modification of the outer and some inner shells of the serial code, whereas the latter focuses only on the outer shell of the code. The _MPI version offers a great deal of flexibility in distributing grid zones across a specified number of processors in order to achieve load balancing. The approach is capable of partitioning zones across multiple processors or sending each zone and/or cluster of several zones into a single processor. The message passing across the processors consists of Chimera boundary and/or an overlap of "halo" boundary points for each partitioned zone. The MLP version is a new coarse-grain parallel concept at the zonal and intra-zonal levels. A grouping strategy is used to distribute zones into several groups forming sub-processes which will run in parallel. The total volume of grid points in each

  13. Multiple description distributed image coding with side information for mobile wireless transmission

    NASA Astrophysics Data System (ADS)

    Wu, Min; Song, Daewon; Chen, Chang Wen

    2005-03-01

    Multiple description coding (MDC) is a source coding technique that involves coding the source information into multiple descriptions, and then transmitting them over different channels in a packet network or error-prone wireless environment to achieve graceful degradation if parts of the descriptions are lost at the receiver. In this paper, we propose a multiple description distributed wavelet zero tree image coding system for mobile wireless transmission. We provide two innovations to achieve an excellent error resilient capability. First, when MDC is applied to wavelet subband based image coding, it is possible to introduce correlation between the descriptions in each subband. We consider using such correlation, as well as a potentially error-corrupted description, as side information in the decoding, which formulates MDC decoding as a Wyner-Ziv decoding problem. If part of a description is lost but its correlation information is still available, the proposed Wyner-Ziv decoder can recover the description by using the correlation information and the error-corrupted description as side information. Secondly, in each description, single-bitstream wavelet zero tree coding is very vulnerable to channel errors. The first bit error may cause the decoder to discard all subsequent bits whether or not the subsequent bits are correctly received. Therefore, we integrate multiple description scalar quantization (MDSQ) with the multiple wavelet tree image coding method to reduce error propagation. We first group wavelet coefficients into multiple trees according to the parent-child relationship and then code them separately by the SPIHT algorithm to form multiple bitstreams. Such decomposition is able to reduce error propagation and therefore improve the error correcting capability of the Wyner-Ziv decoder. Experimental results show that the proposed scheme not only exhibits an excellent error resilient performance but also demonstrates graceful degradation over the packet

  14. Multisynaptic activity in a pyramidal neuron model and neural code.

    PubMed

    Ventriglia, Francesco; Di Maio, Vito

    2006-01-01

    The highly irregular firing of mammalian cortical pyramidal neurons is one of the most striking observations of brain activity. This result greatly affects the discussion on the neural code, i.e. how the brain codes information transmitted along the different cortical stages. In fact, it seems to favor one of the two main hypotheses about this issue, the rate code. But the supporters of the contrasting hypothesis, the temporal code, consider this evidence inconclusive. We discuss here a leaky integrate-and-fire model of a hippocampal pyramidal neuron, intended to be biologically sound, to investigate the genesis of the irregular pyramidal firing and to give useful information about the coding problem. To this aim, the complete set of excitatory and inhibitory synapses impinging on such a neuron has been taken into account. The firing activity of the neuron model has been studied by computer simulation both in basic conditions and allowing brief periods of over-stimulation in specific regions of its synaptic constellation. Our results show neuronal firing conditions similar to those observed in experimental investigations on pyramidal cortical neurons. In particular, the coefficient of variation (CV) computed from the inter-spike intervals (ISIs) in our simulations for basic conditions is close to unity, as is that computed from experimental data. Our simulation also shows different behaviors in firing sequences for different frequencies of stimulation.
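
    A generic leaky integrate-and-fire simulation of this kind (textbook parameter values, not the hippocampal model's full synaptic constellation) reproduces the headline observation: near-balanced excitatory and inhibitory bombardment keeps the membrane subthreshold and fluctuation-driven, pushing the CV of the ISIs toward unity.

        import numpy as np

        rng = np.random.default_rng(42)

        def lif_cv(rate_exc, rate_inh, w_exc=0.5, w_inh=-0.5, tau=20.0,
                   v_th=20.0, v_reset=0.0, dt=0.1, t_max=20_000.0):
            """Leaky integrate-and-fire neuron driven by Poisson excitatory and
            inhibitory input; returns the CV of its inter-spike intervals.
            Units: mV, ms, events/ms. Parameters are generic illustrative
            values chosen so firing is fluctuation-driven."""
            v, last_spike, isis = 0.0, None, []
            for step in range(int(t_max / dt)):
                n_e = rng.poisson(rate_exc * dt)       # excitatory events in bin
                n_i = rng.poisson(rate_inh * dt)       # inhibitory events in bin
                v += dt * (-v / tau) + w_exc * n_e + w_inh * n_i
                if v >= v_th:
                    t = step * dt
                    if last_spike is not None:
                        isis.append(t - last_spike)
                    last_spike, v = t, v_reset
            isis = np.asarray(isis)
            return isis.std() / isis.mean()

        # Near-balanced input keeps the mean depolarization subthreshold,
        # so spikes are fluctuation-driven and the CV approaches 1:
        print(lif_cv(rate_exc=5.0, rate_inh=4.0))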

  15. Error control in the GCF: An information-theoretic model for error analysis and coding

    NASA Technical Reports Server (NTRS)

    Adeyemi, O.

    1974-01-01

    The structure of data-transmission errors within the Ground Communications Facility is analyzed in order to provide error control (both forward error correction and feedback retransmission) for improved communication. Emphasis is placed on constructing a theoretical model of errors and obtaining from it all the relevant statistics for error control. No specific coding strategy is analyzed, but the significance of certain error pattern distributions predicted by the model for error correction is noted.

  16. 24 CFR 200.926b - Model codes.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 24 Housing and Urban Development 2 2012-04-01 2012-04-01 false Model codes. 200.926b Section 200.926b Housing and Urban Development Regulations Relating to Housing and Urban Development (Continued) OFFICE OF ASSISTANT SECRETARY FOR HOUSING-FEDERAL HOUSING COMMISSIONER, DEPARTMENT OF HOUSING AND...

  17. 24 CFR 200.925c - Model codes.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 24 Housing and Urban Development 2 2012-04-01 2012-04-01 false Model codes. 200.925c Section 200.925c Housing and Urban Development Regulations Relating to Housing and Urban Development (Continued) OFFICE OF ASSISTANT SECRETARY FOR HOUSING-FEDERAL HOUSING COMMISSIONER, DEPARTMENT OF HOUSING AND...

  18. Testing geochemical modeling codes using New Zealand hydrothermal systems

    SciTech Connect

    Bruton, C.J.; Glassley, W.E.; Bourcier, W.L.

    1993-12-01

    Hydrothermal systems in the Taupo Volcanic Zone, North Island, New Zealand are being used as field-based modeling exercises for the EQ3/6 geochemical modeling code package. Comparisons of the observed state and evolution of selected portions of the hydrothermal systems with predictions of fluid-solid equilibria made using geochemical modeling codes will: (1) ensure that we are providing adequately for all significant processes occurring in natural systems; (2) determine the adequacy of the mathematical descriptions of the processes; (3) check the adequacy and completeness of thermodynamic data as a function of temperature for solids, aqueous species and gases; and (4) determine the sensitivity of model results to the manner in which the problem is conceptualized by the user and then translated into constraints in the code input. Preliminary predictions of mineral assemblages in equilibrium with fluids sampled from wells in the Wairakei geothermal field suggest that affinity-temperature diagrams must be used in conjunction with EQ6 to minimize the effect of uncertainties in thermodynamic and kinetic data on code predictions. The kinetics of silica precipitation in EQ6 will be tested using field data from silica-lined drain channels carrying hot water away from the Wairakei borefield.

  19. Video distribution system cost model

    NASA Technical Reports Server (NTRS)

    Gershkoff, I.; Haspert, J. K.; Morgenstern, B.

    1980-01-01

    A cost model that can be used to systematically identify the costs of procuring and operating satellite linked communications systems is described. The user defines a network configuration by specifying the location of each participating site, the interconnection requirements, and the transmission paths available for the uplink (studio to satellite), downlink (satellite to audience), and voice talkback (between audience and studio) segments of the network. The model uses this information to calculate the least expensive signal distribution path for each participating site. Cost estimates are broken down by capital, installation, lease, operations and maintenance. The design of the model permits flexibility in specifying network and cost structure.

  20. General Description of Fission Observables: GEF Model Code

    NASA Astrophysics Data System (ADS)

    Schmidt, K.-H.; Jurado, B.; Amouroux, C.; Schmitt, C.

    2016-01-01

    The GEF ("GEneral description of Fission observables") model code is documented. It describes the observables for spontaneous fission, neutron-induced fission and, more generally, for fission of a compound nucleus from any other entrance channel, with given excitation energy and angular momentum. The GEF model is applicable for a wide range of isotopes from Z = 80 to Z = 112 and beyond, up to excitation energies of about 100 MeV. The results of the GEF model are compared with fission barriers, fission probabilities, fission-fragment mass- and nuclide distributions, isomeric ratios, total kinetic energies, and prompt-neutron and prompt-gamma yields and energy spectra from neutron-induced and spontaneous fission. Derived properties of delayed neutrons and decay heat are also considered. The GEF model is based on a general approach to nuclear fission that explains a great part of the complex appearance of fission observables on the basis of fundamental laws of physics and general properties of microscopic systems and mathematical objects. The topographic theorem is used to estimate the fission-barrier heights from theoretical macroscopic saddle-point and ground-state masses and experimental ground-state masses. Motivated by the theoretically predicted early localisation of nucleonic wave functions in a necked-in shape, the properties of the relevant fragment shells are extracted. These are used to determine the depths and the widths of the fission valleys corresponding to the different fission channels and to describe the fission-fragment distributions and deformations at scission by a statistical approach. A modified composite nuclear-level-density formula is proposed. It respects some features in the superfluid regime that are in accordance with new experimental findings and with theoretical expectations. These are a constant-temperature behaviour that is consistent with a considerably increased heat capacity and an increased pairing condensation energy that is

  1. General Description of Fission Observables: GEF Model Code

    SciTech Connect

    Schmidt, K.-H.; Schmitt, C.

    2016-01-15

    The GEF (“GEneral description of Fission observables”) model code is documented. It describes the observables for spontaneous fission, neutron-induced fission and, more generally, for fission of a compound nucleus from any other entrance channel, with given excitation energy and angular momentum. The GEF model is applicable for a wide range of isotopes from Z = 80 to Z = 112 and beyond, up to excitation energies of about 100 MeV. The results of the GEF model are compared with fission barriers, fission probabilities, fission-fragment mass- and nuclide distributions, isomeric ratios, total kinetic energies, and prompt-neutron and prompt-gamma yields and energy spectra from neutron-induced and spontaneous fission. Derived properties of delayed neutrons and decay heat are also considered. The GEF model is based on a general approach to nuclear fission that explains a great part of the complex appearance of fission observables on the basis of fundamental laws of physics and general properties of microscopic systems and mathematical objects. The topographic theorem is used to estimate the fission-barrier heights from theoretical macroscopic saddle-point and ground-state masses and experimental ground-state masses. Motivated by the theoretically predicted early localisation of nucleonic wave functions in a necked-in shape, the properties of the relevant fragment shells are extracted. These are used to determine the depths and the widths of the fission valleys corresponding to the different fission channels and to describe the fission-fragment distributions and deformations at scission by a statistical approach. A modified composite nuclear-level-density formula is proposed. It respects some features in the superfluid regime that are in accordance with new experimental findings and with theoretical expectations. These are a constant-temperature behaviour that is consistent with a considerably increased heat capacity and an increased pairing condensation energy that is

  2. Development of a model and computer code to describe solar grade silicon production processes

    NASA Technical Reports Server (NTRS)

    Gould, R. K.; Srivastava, R.

    1979-01-01

    Two computer codes were developed for describing flow reactors in which high purity, solar grade silicon is produced via reduction of gaseous silicon halides. The first is the CHEMPART code, an axisymmetric, marching code which treats two phase flows with models describing detailed gas-phase chemical kinetics, particle formation, and particle growth. It can be used to describe flow reactors in which reactants mix, react, and form a particulate phase. Detailed radial gas-phase composition, temperature, velocity, and particle size distribution profiles are computed. Also, deposition of heat, momentum, and mass (either particulate or vapor) on reactor walls is described. The second code is a modified version of the GENMIX boundary layer code which is used to compute rates of heat, momentum, and mass transfer to the reactor walls. This code lacks the detailed chemical kinetics and particle handling features of the CHEMPART code but has the virtue of running much more rapidly than CHEMPART, while treating the phenomena occurring in the boundary layer in more detail.

  3. Modeling of the EAST ICRF antenna with ICANT Code

    SciTech Connect

    Qin Chengming; Zhao Yanping; Colas, L.; Heuraux, S.

    2007-09-28

    A Resonant Double Loop (RDL) antenna for the ion-cyclotron range of frequencies (ICRF) on the Experimental Advanced Superconducting Tokamak (EAST) is under construction. The new antenna is analyzed using the antenna coupling code ICANT, which self-consistently determines the surface currents on all antenna parts. In this work, the new ICRF antenna is modeled with this code to assess the near fields in front of the antenna and to analyse its coupling capabilities. Moreover, the antenna reactive radiated power computed by ICANT shows good agreement with that deduced from Transmission Line (TL) theory.

  4. Modelling binary rotating stars by new population synthesis code bonnfires

    NASA Astrophysics Data System (ADS)

    Lau, H. H. B.; Izzard, R. G.; Schneider, F. R. N.

    2013-02-01

    bonnfires, a new-generation population synthesis code, can calculate nuclear reactions, various mixing processes and binary interactions in a timely fashion. We use this new population synthesis code to study the interplay between binary mass transfer and rotation. We aim to compare theoretical models with observations, in particular the surface nitrogen abundance and rotational velocity. Preliminary results show that binary interactions may explain the formation of nitrogen-rich slow rotators and nitrogen-poor fast rotators, but more work needs to be done to estimate whether the observed frequencies of those stars can be matched.

  5. Modeling of the EAST ICRF antenna with ICANT Code

    NASA Astrophysics Data System (ADS)

    Qin, Chengming; Zhao, Yanping; Colas, L.; Heuraux, S.

    2007-09-01

    A Resonant Double Loop (RDL) antenna for the ion-cyclotron range of frequencies (ICRF) on the Experimental Advanced Superconducting Tokamak (EAST) is under construction. The new antenna is analyzed using the antenna coupling code ICANT, which self-consistently determines the surface currents on all antenna parts. In this work, the new ICRF antenna is modeled with this code to assess the near fields in front of the antenna and to analyse its coupling capabilities. Moreover, the antenna reactive radiated power computed by ICANT shows good agreement with that deduced from Transmission Line (TL) theory.

  6. Development of a fan model for the CONTAIN code

    SciTech Connect

    Pevey, R.E.

    1987-01-08

    A fan model has been added to the CONTAIN code with a minimum of disruption of the standard CONTAIN calculation sequence. The user is required to supply a simple pressure vs. flow rate curve for each fan in his model configuration. Inclusion of the fan model required modification to two CONTAIN subroutines, IFLOW and EXEQNX. The two modified routines and the resulting executable module are located on the LANL mass storage system as /560007/iflow, /560007/exeqnx, and /560007/cont01, respectively. The model has been initially validated using a very simple sample problem and is ready for a more complete workout using the SRP reactor models from the RSRD probabilistic risk analysis.

  7. Self-shielding models of MICROX-2 code

    SciTech Connect

    Hou, J.; Ivanov, K.; Choi, H.

    2013-07-01

    The MICROX-2 is a transport theory code that solves for the neutron slowing-down and thermalization equations of a two-region lattice cell. In the previous study, a new fine-group cross section library of the MICROX-2 was generated and tested against reference calculations and measurement data. In this study, existing physics models of the MICROX-2 are reviewed and updated to improve the physics calculation performance of the MICROX-2 code, including the resonance self-shielding model and spatial self-shielding factor. The updated self-shielding models have been verified through a series of benchmark calculations against the Monte Carlo code, using homogeneous and pin cell models selected for this study. The results have shown that the updates of the self-shielding factor calculation model are correct and improve the physics calculation accuracy, even though the magnitude of error reduction is relatively small. Compared to the existing models, the updates reduced the prediction error of the infinite multiplication factor by approximately 0.1% and 0.2% for the homogeneous and pin cell models considered in this study, respectively. (authors)

  8. An analytical model of gene evolution with 9 mutation parameters: an application to the amino acids coded by the common circular code.

    PubMed

    Michel, Christian J

    2007-02-01

    We develop here an analytical evolutionary model based on a trinucleotide mutation matrix 64 x 64 with nine substitution parameters associated with the three types of substitutions in the three trinucleotide sites. It generalizes the previous models based on the nucleotide mutation matrices 4 x 4 and the trinucleotide mutation matrix 64 x 64 with three and six parameters. It determines at some time t the exact occurrence probabilities of trinucleotides mutating randomly according to these nine substitution parameters. An application of this model allows an evolutionary study of the common circular code [Formula: see text] of eukaryotes and prokaryotes and its 12 coded amino acids. The main property of this code [Formula: see text] is the retrieval of the reading frames in genes, both locally, i.e. anywhere in genes and in particular without a start codon, and automatically with a window of a few nucleotides. However, since its identification in 1996, amino acid information coded by [Formula: see text] has never been studied. Very unexpectedly, this evolutionary model demonstrates that random substitutions in this code [Formula: see text], with particular values for the nine substitution parameters, retrieve after a certain time of evolution a frequency distribution of these 12 amino acids very close to the one coded by the actual genes.
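
    The model's core object, a 64 x 64 trinucleotide mutation matrix built from nine site-specific substitution parameters, can be sketched as follows. Reading "three types of substitutions" as one transition class and two transversion classes per site is my assumption; with independent sites, the 64 x 64 generator is the Kronecker sum of three 4 x 4 site matrices.

        import numpy as np
        from scipy.linalg import expm

        def site_rate_matrix(a, b, c):
            """4x4 substitution rate matrix for one codon site with rates for
            three substitution types (transition a, transversion classes b, c).
            Base order A, C, G, T; transitions are A<->G and C<->T.
            This parameterisation is an assumption, not the paper's exact one."""
            Q = np.array([
                #    A    C    G    T
                [0.0,   b,   a,   c],   # from A
                [  b, 0.0,   c,   a],   # from C
                [  a,   c, 0.0,   b],   # from G
                [  c,   a,   b, 0.0],   # from T
            ])
            np.fill_diagonal(Q, -Q.sum(axis=1))
            return Q

        def trinucleotide_probs(p0, params, t):
            """Occurrence probabilities of the 64 trinucleotides at time t,
            given three (a, b, c) tuples, one per site; site independence
            makes the 64x64 generator a Kronecker sum of the site matrices."""
            Q1, Q2, Q3 = (site_rate_matrix(*p) for p in params)
            I = np.eye(4)
            Q = (np.kron(np.kron(Q1, I), I)
                 + np.kron(np.kron(I, Q2), I)
                 + np.kron(np.kron(I, I), Q3))
            return p0 @ expm(Q * t)

        p0 = np.full(64, 1 / 64)                   # uniform start, illustrative
        pt = trinucleotide_probs(p0, [(0.2, 0.05, 0.05)] * 3, t=1.0)
        print(pt.sum(), pt[:4])                    # probabilities still sum to 1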

  9. Non-contact assessment of melanin distribution via multispectral temporal illumination coding

    NASA Astrophysics Data System (ADS)

    Amelard, Robert; Scharfenberger, Christian; Wong, Alexander; Clausi, David A.

    2015-03-01

    Melanin is a pigment that is highly absorptive in the UV and visible electromagnetic spectra. It is responsible for perceived skin tone, and protects against harmful UV effects. Abnormal melanin distribution is often an indicator for melanoma. We propose a novel approach for non-contact assessment of melanin distribution via multispectral temporal illumination coding, estimating the two-dimensional melanin distribution based on its absorptive characteristics. In the proposed system, a novel multispectral, cross-polarized, temporally-coded illumination sequence is synchronized with a camera to measure reflectance under both multispectral and ambient illumination. This allows us to eliminate the ambient illumination contribution from the acquired reflectance measurements, and also to determine the melanin distribution in an observed region based on the spectral properties of melanin using the Beer-Lambert law. Using this information, melanin distribution maps can be generated for objective, quantitative assessment of skin type of individuals. We show that the melanin distribution map correctly identifies areas with high melanin densities (e.g., nevi).
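
    The two steps described, ambient elimination via the temporal illumination code and Beer-Lambert inversion, can be sketched as below; the frame handling and the extinction coefficients are illustrative assumptions, not the calibrated system.

        import numpy as np

        def melanin_map(frames, illum_on, eps_melanin, path_len=1.0):
            """Relative melanin-density map from temporally coded multispectral
            reflectance frames: subtract ambient-only frames, then invert
            Beer-Lambert per band and solve least-squares across bands.
            frames: (n_frames, H, W, n_bands) reflectance stack
            illum_on: boolean per-frame flag of the coded illumination
            eps_melanin: (n_bands,) assumed melanin extinction coefficients."""
            active = frames[illum_on].mean(axis=0)     # illumination + ambient
            ambient = frames[~illum_on].mean(axis=0)   # ambient only
            refl = np.clip(active - ambient, 1e-6, None)
            absorb = -np.log(refl)                     # Beer-Lambert absorbance
            eps = np.asarray(eps_melanin)
            # Per-pixel least-squares melanin density across spectral bands.
            return absorb @ eps / (path_len * eps @ eps)

        rng = np.random.default_rng(0)
        frames = rng.uniform(0.2, 0.9, size=(8, 4, 4, 3))        # toy stack
        illum_on = np.array([1, 0, 1, 0, 1, 0, 1, 0], dtype=bool)
        print(melanin_map(frames, illum_on, eps_melanin=[0.9, 0.6, 0.3]).shape)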

  10. A compressible Navier-Stokes code for turbulent flow modeling

    NASA Technical Reports Server (NTRS)

    Coakley, T. J.

    1984-01-01

    An implicit, finite volume code for solving two dimensional, compressible turbulent flows is described. Second order upwind differencing of the inviscid terms of the equations is used to enhance stability and accuracy. A diagonal form of the implicit algorithm is used to improve efficiency. Several zero and two equation turbulence models are incorporated to study their impact on overall flow modeling accuracy. Applications to external and internal flows are discussed.

  11. Examination of nanoparticle dispersion using a novel GPU based radial distribution function code

    NASA Astrophysics Data System (ADS)

    Rosch, Thomas; Wade, Matthew; Phelan, Frederick

    We have developed a novel GPU-based code that rapidly calculates the radial distribution function (RDF) for an entire system, with no cutoff, ensuring accuracy. Built on top of this code, we have developed tools to calculate the second virial coefficient (B2) and the structure factor from the RDF, two properties that are directly related to the dispersion of nanoparticles in nanocomposite systems. We validate the RDF calculations by comparison with previously published results, and also show how our code, which takes into account bonding in polymeric systems, enables more accurate predictions of g(r) than current state of the art GPU-based RDF codes available for these systems. In addition, our code reduces the computational time by approximately an order of magnitude compared to CPU-based calculations. We demonstrate the application of our toolset by the examination of a coarse-grained nanocomposite system and show how different surface energies between particle and polymer lead to different dispersion states, and affect properties such as viscosity, yield strength, elasticity, and thermal conductivity.
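
    For reference, the quantity being accelerated is the standard pair-histogram g(r); a plain NumPy version with no cutoff (the accuracy point made above) looks like this. The bin count, the minimum-image box handling, and the ideal-gas normalisation are the usual textbook choices, not the authors' GPU implementation.

        import numpy as np

        def radial_distribution(positions, box, n_bins=100):
            """Radial distribution function g(r) for a periodic cubic box,
            computed over all pairs up to half the box length."""
            n = len(positions)
            r_max = box / 2.0
            # Minimum-image pair separations.
            diff = positions[:, None, :] - positions[None, :, :]
            diff -= box * np.round(diff / box)
            r = np.sqrt((diff ** 2).sum(-1))[np.triu_indices(n, k=1)]
            hist, edges = np.histogram(r[r < r_max], bins=n_bins, range=(0, r_max))
            # Normalise by the ideal-gas pair count expected in each shell.
            rho = n / box ** 3
            shell_vol = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
            ideal = rho * shell_vol * n / 2.0
            centers = 0.5 * (edges[1:] + edges[:-1])
            return centers, hist / ideal

        rng = np.random.default_rng(3)
        pos = rng.uniform(0.0, 10.0, size=(500, 3))   # ideal gas: g(r) ~ 1
        r, g = radial_distribution(pos, box=10.0)
        print(g[10:14].round(2))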

  12. A New Model for the Error Detection Delay of Finite Precision Binary Arithmetic Codes with a Forbidden Symbol

    NASA Astrophysics Data System (ADS)

    Pang, Yuye; Sun, Jun; Wang, Jia; Wang, Peng

    In this paper, the statistical characteristic of the Error Detection Delay (EDD) of Finite Precision Binary Arithmetic Codes (FPBAC) is discussed. It is observed that, apart from the probability of the Forbidden Symbol (FS) inserted into the list of the source symbols, the probability of the source sequence and the operation precision, as well as the position of the FS in the coding interval, can affect the statistical characteristic of the EDD. Experiments demonstrate that the actual distribution of the EDD of FPBAC is quite different from the geometric distribution of infinite precision arithmetic codes. This phenomenon is investigated in depth, and a new statistical model (gamma distribution) for the actual distribution of the EDD is proposed, which makes a more precise prediction of the EDD possible. Finally, the relation expressions between the parameters of the gamma distribution and the related factors affecting the distribution are given.
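
    The proposed model-fitting step can be illustrated by fitting a gamma law to a sample of delays and comparing its likelihood against a moment-matched geometric law; the synthetic delays below merely stand in for measured EDDs, and the parameter values are arbitrary.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(7)
        # Synthetic stand-in for measured error-detection delays.
        delays = rng.gamma(shape=2.5, scale=8.0, size=5000)

        # Gamma fit (location pinned at zero) vs. a moment-matched geometric.
        shape, loc, scale = stats.gamma.fit(delays, floc=0.0)
        geom_p = 1.0 / delays.mean()

        print(f"gamma fit: shape={shape:.2f}, scale={scale:.2f}")
        print("gamma logL:", stats.gamma.logpdf(delays, shape, 0.0, scale).sum())
        print("geom  logL:", stats.geom.logpmf(np.ceil(delays), geom_p).sum())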

  13. A high burnup model developed for the DIONISIO code

    NASA Astrophysics Data System (ADS)

    Soba, A.; Denis, A.; Romero, L.; Villarino, E.; Sardella, F.

    2013-02-01

    A group of subroutines, designed to extend the application range of the fuel performance code DIONISIO to high burnup, has recently been included in the code. The new calculation tools, which are tuned for UO2 fuels in LWR conditions, predict the radial distribution of power density, burnup, and concentration of diverse nuclides within the pellet. The balance equations of all the isotopes involved in the fission process are solved in a simplified manner, and the one-group effective cross sections of all of them are obtained as functions of the radial position in the pellet, burnup, and enrichment in 235U. In this work, the subroutines are described and the results of the simulations performed with DIONISIO are presented. The results show good agreement with the data provided in the FUMEX II/III NEA data bank.

  14. Enhancements to the SSME transfer function modeling code

    NASA Technical Reports Server (NTRS)

    Irwin, R. Dennis; Mitchell, Jerrel R.; Bartholomew, David L.; Glenn, Russell D.

    1995-01-01

    This report details the results of a one year effort by Ohio University to apply the transfer function modeling and analysis tools developed under NASA Grant NAG8-167 (Irwin, 1992), (Bartholomew, 1992) to attempt the generation of Space Shuttle Main Engine High Pressure Turbopump transfer functions from time domain data. In addition, new enhancements to the transfer function modeling codes which enhance the code functionality are presented, along with some ideas for improved modeling methods and future work. Section 2 contains a review of the analytical background used to generate transfer functions with the SSME transfer function modeling software. Section 2.1 presents the 'ratio method' developed for obtaining models of systems that are subject to single unmeasured excitation sources and have two or more measured output signals. Since most of the models developed during the investigation use the Eigensystem Realization Algorithm (ERA) for model generation, Section 2.2 presents an introduction of ERA, and Section 2.3 describes how it can be used to model spectral quantities. Section 2.4 details the Residue Identification Algorithm (RID) including the use of Constrained Least Squares (CLS) and Total Least Squares (TLS). Most of this information can be found in the report (and is repeated for convenience). Section 3 chronicles the effort of applying the SSME transfer function modeling codes to the a51p394.dat and a51p1294.dat time data files to generate transfer functions from the unmeasured input to the 129.4 degree sensor output. Included are transfer function modeling attempts using five methods. The first method is a direct application of the SSME codes to the data files and the second method uses the underlying trends in the spectral density estimates to form transfer function models with less clustering of poles and zeros than the models obtained by the direct method. In the third approach, the time data is low pass filtered prior to the modeling process in an

  15. A distributed coding approach for stereo sequences in the tree structured Haar transform domain

    NASA Astrophysics Data System (ADS)

    Cancellaro, M.; Carli, M.; Neri, A.

    2009-02-01

    In this contribution, a novel method for distributed video coding for stereo sequences is proposed. The system encodes independently the left and right frames of the stereoscopic sequence. The decoder exploits the side information to achieve the best reconstruction of the correlated video streams. In particular, a syndrome coder approach based on a lifted Tree Structured Haar wavelet scheme has been adopted. The experimental results show the effectiveness of the proposed scheme.

  16. Delayed photo-emission model for beam optics codes

    DOE PAGES

    Jensen, Kevin L.; Petillo, John J.; Panagos, Dimitrios N.; ...

    2016-11-22

    Future advanced light sources and x-ray Free Electron Lasers require fast response from the photocathode to enable short electron pulse durations as well as pulse shaping, and so the ability to model delays in emission is needed for beam optics codes. The development of a time-dependent emission model accounting for delayed photoemission due to transport and scattering is given, and its inclusion in the Particle-in-Cell code MICHELLE results in changes to the pulse shape that are described. Furthermore, the model is applied to pulse elongation of a bunch traversing an rf injector, and to the smoothing of laser jitter on a short pulse.

  17. Using cryptology models for protecting PHP source code

    NASA Astrophysics Data System (ADS)

    Jevremović, Aleksandar; Ristić, Nenad; Veinović, Mladen

    2013-10-01

    Protecting PHP scripts from unwanted use, copying and modification is a big issue today. Existing solutions at the source code level mostly work as obfuscators; they are free, but they do not provide any serious protection. Solutions that encode the opcode are more secure, but they are commercial and require a closed-source proprietary extension of the PHP interpreter. Additionally, encoded opcode is not compatible with future versions of interpreters, which implies re-buying encoders from the authors. Finally, if the extension source code is compromised, all scripts encoded with that solution are compromised too. In this paper, we present a new model for a free and open-source PHP script protection solution. The protection level provided by the proposed solution is equal to that of commercial solutions. The model is based on conclusions drawn from applying standard cryptology models to analyse the strengths and weaknesses of the existing solutions, with script protection viewed as a secure communication channel in the cryptology sense.

  18. Discovering binary codes for documents by learning deep generative models.

    PubMed

    Hinton, Geoffrey; Salakhutdinov, Ruslan

    2011-01-01

    We describe a deep generative model in which the lowest layer represents the word-count vector of a document and the top layer represents a learned binary code for that document. The top two layers of the generative model form an undirected associative memory and the remaining layers form a belief net with directed, top-down connections. We present efficient learning and inference procedures for this type of generative model and show that it allows more accurate and much faster retrieval than latent semantic analysis. By using our method as a filter for a much slower method called TF-IDF we achieve higher accuracy than TF-IDF alone and save several orders of magnitude in retrieval time. By using short binary codes as addresses, we can perform retrieval on very large document sets in a time that is independent of the size of the document set using only one word of memory to describe each document.
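
    The retrieval trick in the last sentence, using short binary codes as memory addresses so lookup time is independent of corpus size, can be sketched with a hash table probed over a small Hamming ball; the random codes below stand in for the learned ones, which this sketch does not train.

        import numpy as np
        from itertools import combinations

        def hamming_ball_lookup(table, code, radius=1):
            """Retrieve document ids stored at the query code's address and at
            every address within the given Hamming radius. Lookup cost depends
            on the code length and radius, not on the corpus size."""
            n_bits = len(code)
            hits = list(table.get(code, []))
            for r in range(1, radius + 1):
                for flip in combinations(range(n_bits), r):
                    probe = list(code)
                    for i in flip:
                        probe[i] = 1 - probe[i]       # flip r bits
                    hits.extend(table.get(tuple(probe), []))
            return hits

        rng = np.random.default_rng(5)
        codes = rng.integers(0, 2, size=(1000, 12))   # 12-bit codes, 1000 docs
        table = {}
        for doc_id, c in enumerate(codes):
            table.setdefault(tuple(c), []).append(doc_id)

        print(hamming_ball_lookup(table, tuple(codes[0]), radius=1)[:10])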

  19. Code development for ITER edge modelling - SOLPS5.1

    NASA Astrophysics Data System (ADS)

    Bonnin, X.; Kukushkin, A. S.; Coster, D. P.

    2009-06-01

    Most ITER divertor modelling work to date used the B2-EIRENE (SOLPS4) code package, coupling a 2D fluid description of the charged plasma species (B2) to a Monte-Carlo kinetic description of the neutrals (EIRENE). In recent years, the emphasis at ITER has been on completing the neutral model, including neutral-neutral collisions, opacity effects, radiation transport, etc. Elsewhere, new physics, numerics, and algorithmic improvements, such as E × B and diamagnetic drifts, electric currents, ion and neutral heat and particle flux limits, wall material mixing and surface temperature evolution, and bundling of heavy ions species, as well as switching to cell-centred velocities and using an internal energy instead of a total energy equation, gave birth to the B2.5 code, combined with EIRENE as SOLPS5. We report on work in progress to merge these advances with the ITER-specific model of the edge and divertor.

  20. Dual coding: a cognitive model for psychoanalytic research.

    PubMed

    Bucci, W

    1985-01-01

    Four theories of mental representation derived from current experimental work in cognitive psychology have been discussed in relation to psychoanalytic theory. These are: verbal mediation theory, in which language determines or mediates thought; perceptual dominance theory, in which imagistic structures are dominant; common code or propositional models, in which all information, perceptual or linguistic, is represented in an abstract, amodal code; and dual coding, in which nonverbal and verbal information are each encoded, in symbolic form, in separate systems specialized for such representation, and connected by a complex system of referential relations. The weight of current empirical evidence supports the dual code theory. However, psychoanalysis has implicitly accepted a mixed model: perceptual dominance theory applying to unconscious representation, and verbal mediation characterizing mature conscious waking thought. The characterization of psychoanalysis, by Schafer, Spence, and others, as a domain in which reality is constructed rather than discovered, reflects the application of this incomplete mixed model. The representations of experience in the patient's mind are seen as without structure of their own, needing to be organized by words, thus vulnerable to distortion or dissolution by the language of the analyst or the patient himself. In these terms, hypothesis testing becomes a meaningless pursuit; the propositions of the theory are no longer falsifiable; the analyst is always more or less "right." This paper suggests that the integrated dual code formulation provides a more coherent theoretical framework for psychoanalysis than the mixed model, with important implications for theory and technique. In terms of dual coding, the problem is not that the nonverbal representations are vulnerable to distortion by words, but that the words that pass back and forth between analyst and patient will not affect the nonverbal schemata at all. Using the dual code

  1. A spectral synthesis code for rapid modelling of supernovae

    NASA Astrophysics Data System (ADS)

    Kerzendorf, Wolfgang E.; Sim, Stuart A.

    2014-05-01

    We present TARDIS - an open-source code for rapid spectral modelling of supernovae (SNe). Our goal is to develop a tool that is sufficiently fast to allow exploration of the complex parameter spaces of models for SN ejecta. This can be used to analyse the growing number of high-quality SN spectra being obtained by transient surveys. The code uses Monte Carlo methods to obtain a self-consistent description of the plasma state and to compute a synthetic spectrum. It has a modular design to facilitate the implementation of a range of physical approximations that can be compared to assess both accuracy and computational expediency. This will allow users to choose a level of sophistication appropriate for their application. Here, we describe the operation of the code and make comparisons with alternative radiative transfer codes of differing levels of complexity (SYN++, PYTHON and ARTIS). We then explore the consequence of adopting simple prescriptions for the calculation of atomic excitation, focusing on four species of relevance to Type Ia SN spectra - Si II, S II, Mg II and Ca II. We also investigate the influence of three methods for treating line interactions on our synthetic spectra and the need for accurate radiative rate estimates in our scheme.

  2. Transform Coding for Point Clouds Using a Gaussian Process Model.

    PubMed

    De Queiroz, Ricardo; Chou, Philip A

    2017-04-28

    We propose using stationary Gaussian Processes (GPs) to model the statistics of the signal on points in a point cloud, which can be considered samples of a GP at the positions of the points. Further, we propose using Gaussian Process Transforms (GPTs), which are Karhunen-Loève transforms of the GP, as the basis of transform coding of the signal. Focusing on colored 3D point clouds, we propose a transform coder that breaks the point cloud into blocks, transforms the blocks using GPTs, and entropy codes the quantized coefficients. The GPT for each block is derived from both the covariance function of the GP and the locations of the points in the block, which are separately encoded. The covariance function of the GP is parameterized, and its parameters are sent as side information. The quantized coefficients are sorted by eigenvalues of the GPTs, binned, and encoded using an arithmetic coder with bin-dependent Laplacian models whose parameters are also sent as side information. Results indicate that transform coding of 3D point cloud colors using the proposed GPT and entropy coding achieves superior compression performance on most of our data sets.
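
    As a concrete illustration of the transform step, the sketch below derives a block's GPT as the eigenbasis of a GP covariance evaluated at the point locations and uses it to transform-code one color channel. The RBF covariance and all parameter values here are assumptions for the sketch, not the parameterization used in the paper.

      # Sketch of a Gaussian Process Transform (GPT) for one block of a colored
      # point cloud, assuming an RBF covariance; parameters are illustrative.
      import numpy as np

      def gpt_basis(points, variance=1.0, length_scale=0.5):
          """Eigenvectors of the GP covariance at the point locations
          (the Karhunen-Loeve basis), sorted by decreasing eigenvalue."""
          d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
          cov = variance * np.exp(-d2 / (2.0 * length_scale ** 2))
          eigvals, eigvecs = np.linalg.eigh(cov)
          order = np.argsort(eigvals)[::-1]
          return eigvals[order], eigvecs[:, order]

      rng = np.random.default_rng(0)
      points = rng.uniform(0, 1, size=(64, 3))   # point locations in one block
      colors = rng.uniform(0, 255, size=64)      # one color channel

      eigvals, basis = gpt_basis(points)
      coeffs = basis.T @ colors                  # forward GPT
      quantized = np.round(coeffs / 8.0)         # uniform scalar quantization
      recon = basis @ (8.0 * quantized)          # inverse GPT at the decoder
      print("max abs error:", np.abs(recon - colors).max())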

  3. The modelling of wall condensation with noncondensable gases for the containment codes

    SciTech Connect

    Leduc, C.; Coste, P.; Barthel, V.; Deslandes, H.

    1995-09-01

    This paper presents several approaches to the modelling of wall condensation in the presence of noncondensable gases for containment codes. Lumped-parameter modelling and local modelling by 3-D codes are discussed. Containment analysis codes should be able to predict the spatial distributions of steam, air, and hydrogen as well as the efficiency of cooling by wall condensation in both natural convection and forced convection situations. 3-D calculations with turbulent diffusion modelling are necessary since diffusion controls the local condensation, whereas wall condensation may redistribute the air and hydrogen mass in the containment. A fine-mesh model of film condensation in forced convection has been developed, taking into account the influence of the suction velocity at the liquid-gas interface. It is associated with the 3-D model of the TRIO code for the gas mixture, where a k-ξ turbulence model is used. The predictions are compared to Huhtiniemi's experimental data. The modelling of condensation in natural or mixed convection is more complex. As no universal velocity and temperature profiles exist for such boundary layers, a very fine nodalization is necessary. Simpler models integrate the equations over the boundary layer thickness, using the heat and mass transfer analogy. The model predictions are compared with an MIT experiment. For the containment compartments, a two-node model is proposed using the lumped-parameter approach. Heat and mass transfer coefficients are tested on separate effect tests and containment experiments. The CATHARE code has been adapted to perform such calculations and shows reasonable agreement with data.

  4. Water Distribution and Removal Model

    SciTech Connect

    Y. Deng; N. Chipman; E.L. Hardin

    2005-08-26

    The design of the Yucca Mountain high level radioactive waste repository depends on the performance of the engineered barrier system (EBS). To support the total system performance assessment (TSPA), the Engineered Barrier System Degradation, Flow, and Transport Process Model Report (EBS PMR) is developed to describe the thermal, mechanical, chemical, hydrological, biological, and radionuclide transport processes within the emplacement drifts, which includes the following major analysis/model reports (AMRs): (1) EBS Water Distribution and Removal (WD&R) Model; (2) EBS Physical and Chemical Environment (P&CE) Model; (3) EBS Radionuclide Transport (EBS RNT) Model; and (4) EBS Multiscale Thermohydrologic (TH) Model. Technical information, including data, analyses, models, software, and supporting documents will be provided to defend the applicability of these models for their intended purpose of evaluating the postclosure performance of the Yucca Mountain repository system. The WD&R AMR is important to the site recommendation. Water distribution and removal represents one component of the overall EBS. Under some conditions, liquid water will seep into emplacement drifts through fractures in the host rock and move generally downward, potentially contacting waste packages. After waste packages are breached by corrosion, some of this seepage water will contact the waste, dissolve or suspend radionuclides, and ultimately carry radionuclides through the EBS to the near-field host rock. Lateral diversion of liquid water within the drift will occur at the inner drift surface, and more significantly from the operation of engineered structures such as drip shields and the outer surface of waste packages. If most of the seepage flux can be diverted laterally and removed from the drifts before contacting the wastes, the release of radionuclides from the EBS can be controlled, resulting in a proportional reduction in dose release at the accessible environment. The purposes

  5. Improvement of Basic Fluid Dynamics Models for the COMPASS Code

    NASA Astrophysics Data System (ADS)

    Zhang, Shuai; Morita, Koji; Shirakawa, Noriyuki; Yamamoto, Yuichi

    The COMPASS code is a new next-generation safety analysis code, based on the moving particle semi-implicit (MPS) method, that provides local information for various key phenomena in core disruptive accidents of sodium-cooled fast reactors. In this study, improvements to the basic fluid dynamics models for the COMPASS code were carried out and verified with fundamental verification calculations. A fully implicit pressure solution algorithm was introduced to improve the numerical stability of MPS simulations. With a newly developed free surface model, the numerical difficulty caused by poor pressure solutions is overcome by involving free surface particles in the pressure Poisson equation. In addition, the applicability of the MPS method to interactions between fluid and multiple solid bodies was investigated in comparison with dam-break experiments with solid balls. It was found that the PISO algorithm and free surface model make simulations with the passively moving solid model numerically stable. The characteristic behavior of the solid balls was successfully reproduced by the present numerical simulations.

  6. SENR, A Super-Efficient Code for Gravitational Wave Source Modeling: Latest Results

    NASA Astrophysics Data System (ADS)

    Ruchlin, Ian; Etienne, Zachariah; Baumgarte, Thomas

    2017-01-01

    The science we extract from gravitational wave observations will be limited by our theoretical understanding, so with the recent breakthroughs by LIGO, reliable gravitational wave source modeling has never been more critical. Due to efficiency considerations, current numerical relativity codes are very limited in their applicability to direct LIGO source modeling, so it is important to develop new strategies for making our codes more efficient. We introduce SENR, a Super-Efficient, open-development numerical relativity (NR) code aimed at improving the efficiency of moving-puncture-based LIGO gravitational wave source modeling by 100x. SENR builds upon recent work, in which the BSSN equations are evolved in static spherical coordinates, to allow dynamical coordinates with arbitrary spatial distributions. The physical domain is mapped to a uniform-resolution grid on which derivative operations are approximated using standard central finite difference stencils. The source code is designed to be human-readable, efficient, parallelized, and readily extensible. We present the latest results from the SENR code.

  7. New Mechanical Model for the Transmutation Fuel Performance Code

    SciTech Connect

    Gregory K. Miller

    2008-04-01

    A new mechanical model has been developed for implementation into the TRU fuel performance code. The new model differs from the existing FRAPCON 3 model, which it is intended to replace, in that it will include structural deformations (elasticity, plasticity, and creep) of the fuel. Also, the plasticity algorithm is based on the "plastic strain-total strain" approach, which should allow for more rapid and assured convergence. The model treats three situations relative to interaction between the fuel and cladding: (1) an open gap between the fuel and cladding, such that there is no contact, (2) contact between the fuel and cladding where the contact pressure is below a threshold value, such that axial slippage occurs at the interface, and (3) contact between the fuel and cladding where the contact pressure is above a threshold value, such that axial slippage is prevented at the interface. The first stage of development of the model included only the fuel. In this stage, results obtained from the model were compared with those obtained from finite element analysis using ABAQUS on a problem involving elastic, plastic, and thermal strains. Results from the two analyses showed essentially exact agreement through both loading and unloading of the fuel. After the cladding and fuel/clad contact were added, the model demonstrated expected behavior through all potential phases of fuel/clad interaction, and convergence was achieved without difficulty in all plastic analyses performed. The code is currently in stand-alone form. Prior to implementation into the TRU fuel performance code, creep strains will have to be added to the model. The model will also have to be verified against an ABAQUS analysis that involves contact between the fuel and cladding.

  8. Multi-Code Ab Initio Calculation of Ionization Distributions and Radiation Losses for Tungsten in Tokamak Plasmas

    SciTech Connect

    Ralchenko, Yu.; Abdallah, J. Jr.; Colgan, J.; Fontes, C. J.; Foster, M.; Zhang, H. L.; Bar-Shalom, A.; Oreg, J.; Bauche, J.; Bauche-Arnoult, C.; Bowen, C.; Faussurier, G.; Chung, H.-K.; Hansen, S. B.; Lee, R. W.; Scott, H.; Gaufridy de Dortan, F. de; Poirier, M.; Golovkin, I.; Novikov, V.

    2009-09-10

    We present calculations of ionization balance and radiative power losses for tungsten in magnetic fusion plasmas. The simulations were performed within the framework of Non-Local Thermodynamic Equilibrium (NLTE) Code Comparison Workshops utilizing several independent collisional-radiative models. The calculations generally agree with each other; however, a clear disagreement with experimental ionization distributions is found at low temperatures (below 2 keV).

  9. Development of a model and computer code to describe solar grade silicon production processes

    NASA Technical Reports Server (NTRS)

    Srivastava, R.; Gould, R. K.

    1979-01-01

    Mathematical models, and computer codes based on these models, were developed which allow prediction of the product distribution in chemical reactors in which gaseous silicon compounds are converted to condensed-phase silicon. The reactors to be modeled are flow reactors in which silane or one of the halogenated silanes is thermally decomposed or reacted with an alkali metal, H2, or H atoms. Because the product of interest is particulate silicon, the processes which must be modeled, in addition to mixing and reaction of gas-phase reactants, include the nucleation and growth of condensed Si via coagulation, condensation, and heterogeneous reaction.

  10. Building a generalized distributed system model

    NASA Technical Reports Server (NTRS)

    Mukkamala, Ravi

    1991-01-01

    A number of topics related to building a generalized distributed system model are discussed. The effects of distributed database modeling on evaluation of transaction rollbacks, the measurement of effects of distributed database models on transaction availability measures, and a performance analysis of static locking in replicated distributed database systems are covered.

  11. Exact energy conservation in hybrid meshless model/code

    NASA Astrophysics Data System (ADS)

    Galkin, Sergei A.

    2008-11-01

    Energy conservation is an important issue for both PIC and hybrid models. In hybrid codes the ions are treated kinetically and the electrons are described as a massless charge-neutralizing fluid. Our recently developed Particle-In-Cloud-Of-Points (PICOP) approach [1], which uses an adaptive meshless technique to compute electromagnetic fields on a cloud of computational points, is applied to a hybrid model. An exactly energy-conserving numerical scheme, which describes the interaction between geometrical space, where the electromagnetic fields are computed, and particle/velocity space, is presented. Having been utilized in a new PICOP hybrid code, the algorithm has demonstrated accurate energy conservation in the numerical simulation of the instability of two counter-streaming plasma beams. [1] S. A. Galkin, B. P. Cluggish, J. S. Kim, S. Yu. Medvedev, "Advanced PICOP Algorithm with Adaptive Meshless Field Solver", published in the IEEE PPPS/ICOP 2007 Conference proceedings, pp. 1445-1448, Albuquerque, New Mexico, June 17-22, 2007.

  12. Universal regularizers for robust sparse coding and modeling.

    PubMed

    Ramírez, Ignacio; Sapiro, Guillermo

    2012-09-01

    Sparse data models, where data is assumed to be well represented as a linear combination of a few elements from a dictionary, have gained considerable attention in recent years, and their use has led to state-of-the-art results in many signal and image processing tasks. It is now well understood that the choice of the sparsity regularization term is critical in the success of such models. Based on a codelength minimization interpretation of sparse coding, and using tools from universal coding theory, we propose a framework for designing sparsity regularization terms which have theoretical and practical advantages when compared with the more standard ℓ0 or ℓ1 ones. The presentation of the framework and theoretical foundations is complemented with examples that show its practical advantages in image denoising, zooming and classification.
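
    For context, the sketch below implements the standard ℓ1-regularized sparse coding baseline (via iterative soft-thresholding) that such universal regularizers are compared against; the paper's codelength-based regularizers themselves are not reproduced here, and the dictionary and data are toy examples.

      # Baseline l1 sparse coding via ISTA, the standard scheme universal
      # regularizers are benchmarked against; dictionary and data are toy.
      import numpy as np

      def ista(D, x, lam=0.1, n_iter=200):
          """Minimize 0.5*||x - D a||^2 + lam*||a||_1 by soft-thresholding."""
          L = np.linalg.norm(D, 2) ** 2     # Lipschitz constant of the gradient
          a = np.zeros(D.shape[1])
          for _ in range(n_iter):
              z = a - D.T @ (D @ a - x) / L
              a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
          return a

      rng = np.random.default_rng(1)
      D = rng.standard_normal((32, 64))
      D /= np.linalg.norm(D, axis=0)        # unit-norm dictionary atoms
      a_true = np.zeros(64)
      a_true[[3, 17, 40]] = [1.5, -2.0, 0.7]
      x = D @ a_true + 0.01 * rng.standard_normal(32)
      a_hat = ista(D, x)
      print("nonzeros recovered:", np.flatnonzero(np.abs(a_hat) > 0.1))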

  13. The WARP Code: Modeling High Intensity Ion Beams

    SciTech Connect

    Grote, D P; Friedman, A; Vay, J L; Haber, I

    2004-12-09

    The Warp code, developed for heavy-ion driven inertial fusion energy studies, is used to model high intensity ion (and electron) beams. Significant capability has been incorporated in Warp, allowing nearly all sections of an accelerator to be modeled, beginning with the source. Warp has as its core an explicit, three-dimensional, particle-in-cell model. Alongside this is a rich set of tools for describing the applied fields of the accelerator lattice, and embedded conducting surfaces (which are captured at sub-grid resolution). Also incorporated are models with reduced dimensionality: an axisymmetric model and a transverse 'slice' model. The code takes advantage of modern programming techniques, including object orientation, parallelism, and scripting (via Python). It is at the forefront in the use of the computational technique of adaptive mesh refinement, which has been particularly successful in the area of diode and injector modeling, both steady-state and time-dependent. In the presentation, some of the major aspects of Warp will be overviewed, especially those that could be useful in modeling ECR sources. Warp has been benchmarked against both theory and experiment. Recent results will be presented showing good agreement of Warp with experimental results from the STS500 injector test stand. Additional information can be found on the web page http://hif.lbl.gov/theory/WARP_summary.html.

  15. Time-dependent recycling modeling with edge plasma transport codes

    NASA Astrophysics Data System (ADS)

    Pigarov, A.; Krasheninnikov, S.; Rognlien, T.; Taverniers, S.; Hollmann, E.

    2013-10-01

    First, we discuss extensions to the macroblob approach which allow more accurate simulation of the dynamics of ELMs, the pedestal, and edge transport with the UEDGE code. Second, we present UEDGE modeling results for an H-mode discharge on DIII-D with infrequent ELMs and large pedestal losses. In the modeled sequence of ELMs, this discharge attains a dynamic equilibrium. The temporal evolution of pedestal plasma profiles, spectral line emission, and surface temperature matching experimental data over the ELM cycle is discussed. Analysis of the dynamic gas balance highlights the important role of material surfaces: we quantified the wall outgassing between ELMs as 3X the NBI fueling and the recycling coefficient as 0.8 for wall pumping via macroblob-wall interactions. Third, we present results from a multiphysics version of UEDGE with built-in, reduced, 1-D wall models and analyze the role of various PMI processes. Progress on the framework-coupled UEDGE/WALLPSI code is discussed. Finally, implicit coupling schemes are an important feature of multiphysics codes; we report the results of a parametric analysis of convergence and performance for Picard and Newton iterations in a system of coupled deterministic-stochastic ODEs, and propose modifications enhancing convergence.

  16. Implementing Subduction Models in the New Mantle Convection Code Aspect

    NASA Astrophysics Data System (ADS)

    Arredondo, Katrina; Billen, Magali

    2014-05-01

    The geodynamic community has utilized various numerical modeling codes as scientific questions arise and computer processing power increases. Citcom, a widely used mantle convection code, has limitations and vulnerabilities such as temperature overshoots of hundreds or thousands of kelvin (e.g., Kommu et al., 2013). Recently, Aspect, intended as a more powerful cousin, has been in active development with additions such as adaptive mesh refinement (AMR) and improved solvers (Kronbichler et al., 2012). The validity and ease of use of Aspect are important to its survival and its role as a possible upgrade and replacement for Citcom. Development of publishable models illustrates the capacity of Aspect. We present work on the addition of non-linear solvers and stress-dependent rheology to Aspect. With a solid foundational knowledge of C++, these additions were easily incorporated into Aspect and tested against CitcomS. Time-dependent subduction models akin to those in Billen and Hirth (2007) are built and compared in CitcomS and Aspect. Comparison with CitcomS assists in Aspect development and showcases its flexibility, usability, and capabilities. References: Billen, M. I., and G. Hirth, 2007. Rheologic controls on slab dynamics. Geochemistry, Geophysics, Geosystems. Kommu, R., E. Heien, L. H. Kellogg, W. Bangerth, T. Heister, E. Studley, 2013. The Overshoot Phenomenon in Geodynamics Codes. American Geophysical Union Fall Meeting. M. Kronbichler, T. Heister, W. Bangerth, 2012, High Accuracy Mantle Convection Simulation through Modern Numerical Methods, Geophys. J. Int.

  17. Current Capabilities of the Fuel Performance Modeling Code PARFUME

    SciTech Connect

    G. K. Miller; D. A. Petti; J. T. Maki; D. L. Knudson

    2004-09-01

    The success of gas reactors depends upon the safety and quality of the coated particle fuel. A fuel performance modeling code (called PARFUME), which simulates the mechanical and physico-chemical behavior of fuel particles during irradiation, is under development at the Idaho National Engineering and Environmental Laboratory. Among current capabilities in the code are: 1) various options for calculating CO production and fission product gas release, 2) a thermal model that calculates a time-dependent temperature profile through a pebble bed sphere or a prismatic block core, as well as through the layers of each analyzed particle, 3) simulation of multi-dimensional particle behavior associated with cracking in the IPyC layer, partial debonding of the IPyC from the SiC, particle asphericity, kernel migration, and thinning of the SiC caused by interaction of fission products with the SiC, 4) two independent methods for determining particle failure probabilities, 5) a model for calculating release-to-birth (R/B) ratios of gaseous fission products, that accounts for particle failures and uranium contamination in the fuel matrix, and 6) the evaluation of an accident condition, where a particle experiences a sudden change in temperature following a period of normal irradiation. This paper presents an overview of the code.

  18. Film grain noise modeling in advanced video coding

    NASA Astrophysics Data System (ADS)

    Oh, Byung Tae; Kuo, C.-C. Jay; Sun, Shijun; Lei, Shawmin

    2007-01-01

    A new technique for film grain noise extraction, modeling and synthesis is proposed and applied to the coding of high definition video in this work. The film grain noise is viewed as a part of artistic presentation by people in the movie industry. On one hand, since the film grain noise can boost the natural appearance of pictures in high definition video, it should be preserved in high-fidelity video processing systems. On the other hand, video coding with film grain noise is expensive. It is desirable to extract film grain noise from the input video as a pre-processing step at the encoder and re-synthesize the film grain noise and add it back to the decoded video as a post-processing step at the decoder. Under this framework, the coding gain of the denoised video is higher while the quality of the final reconstructed video can still be well preserved. Following this idea, we present a method to remove film grain noise from image/video without distorting its original content. Besides, we describe a parametric model containing a small set of parameters to represent the extracted film grain noise. The proposed model generates the film grain noise that is close to the real one in terms of power spectral density and cross-channel spectral correlation. Experimental results are shown to demonstrate the efficiency of the proposed scheme.

  19. An Interoceptive Predictive Coding Model of Conscious Presence

    PubMed Central

    Seth, Anil K.; Suzuki, Keisuke; Critchley, Hugo D.

    2011-01-01

    We describe a theoretical model of the neurocognitive mechanisms underlying conscious presence and its disturbances. The model is based on interoceptive prediction error and is informed by predictive models of agency, general models of hierarchical predictive coding and dopaminergic signaling in cortex, the role of the anterior insular cortex (AIC) in interoception and emotion, and cognitive neuroscience evidence from studies of virtual reality and of psychiatric disorders of presence, specifically depersonalization/derealization disorder. The model associates presence with successful suppression by top-down predictions of informative interoceptive signals evoked by autonomic control signals and, indirectly, by visceral responses to afferent sensory signals. The model connects presence to agency by allowing that predicted interoceptive signals will depend on whether afferent sensory signals are determined, by a parallel predictive-coding mechanism, to be self-generated or externally caused. Anatomically, we identify the AIC as the likely locus of key neural comparator mechanisms. Our model integrates a broad range of previously disparate evidence, makes predictions for conjoint manipulations of agency and presence, offers a new view of emotion as interoceptive inference, and represents a step toward a mechanistic account of a fundamental phenomenological property of consciousness. PMID:22291673

  20. Spatial correlation-based side information refinement for distributed video coding

    NASA Astrophysics Data System (ADS)

    Taieb, Mohamed Haj; Chouinard, Jean-Yves; Wang, Demin

    2013-12-01

    Distributed video coding (DVC) architecture designs, based on distributed source coding principles, have benefited from significant progress lately, notably in terms of achievable rate-distortion performance. However, a significant performance gap still remains when compared to prediction-based video coding schemes such as H.264/AVC. This is mainly due to the non-ideal exploitation of the temporal correlation properties of the video sequence during the generation of side information (SI). In fact, the decoder-side motion estimation provides only an approximation of the true motion. In this paper, a progressive DVC architecture is proposed, which exploits the spatial correlation of the video frames to improve the motion-compensated temporal interpolation (MCTI). Specifically, Wyner-Ziv (WZ) frames are divided into several spatially correlated groups that are then sent progressively to the receiver. SI refinement (SIR) is performed as these groups are decoded, thus providing more accurate SI for the next groups. It is shown that the proposed progressive SIR method leads to significant improvements over the DISCOVER DVC codec as well as other SIR schemes recently introduced in the literature.

  1. ETRANS: an energy transport system optimization code for distributed networks of solar collectors

    SciTech Connect

    Barnhart, J.S.

    1980-09-01

    The optimization code ETRANS was developed at the Pacific Northwest Laboratory to design and estimate the costs associated with energy transport systems for distributed fields of solar collectors. The code uses frequently cited layouts for dish and trough collectors and optimizes them on a section-by-section basis. The optimal section design is that combination of pipe diameter and insulation thickness that yields the minimum annualized system-resultant cost. Among the quantities included in the costing algorithm are (1) labor and materials costs associated with initial plant construction, (2) operating expenses due to daytime and nighttime heat losses, and (3) operating expenses due to pumping power requirements. Two preliminary series of simulations were conducted to exercise the code. The results indicate that transport system costs for both dish and trough collector fields increase with field size and receiver exit temperature. Furthermore, dish collector transport systems were found to be much more expensive to build and operate than trough transport systems. ETRANS itself is stable and fast-running and shows promise of being a highly effective tool for the analysis of distributed solar thermal systems.
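
    A minimal sketch of the section-by-section optimization idea follows: enumerate candidate pipe diameters and insulation thicknesses and keep the combination with the lowest annualized cost, summing capital charges, heat-loss expenses, and pumping expenses. Every coefficient in the cost function below is an invented placeholder, not ETRANS data.

      # Hypothetical section-level search in the spirit of ETRANS: choose the
      # pipe diameter and insulation thickness minimizing annualized cost.
      # All cost coefficients are invented placeholders, not ETRANS data.
      import itertools

      DIAMETERS = [0.05, 0.08, 0.10, 0.15]     # pipe inner diameters, m
      THICKNESSES = [0.02, 0.05, 0.10, 0.15]   # insulation thicknesses, m

      def annualized_cost(d, t, length=100.0):
          capital = length * (400.0 * d + 900.0 * t)     # materials + labor
          heat_loss = length * 50.0 / (1.0 + 30.0 * t)   # thin insulation -> loss
          pumping = length * 2.0e-4 / d ** 5             # friction ~ 1/d^5
          return 0.1 * capital + heat_loss + pumping     # 10%/yr capital charge

      best = min(itertools.product(DIAMETERS, THICKNESSES),
                 key=lambda dt: annualized_cost(*dt))
      print("optimal (diameter, insulation):", best)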

  2. Complementarity between entanglement-assisted and quantum distributed random access code

    NASA Astrophysics Data System (ADS)

    Hameedi, Alley; Saha, Debashis; Mironowicz, Piotr; Pawłowski, Marcin; Bourennane, Mohamed

    2017-05-01

    Collaborative communication tasks such as random access codes (RACs) employing quantum resources have manifested great potential in enhancing information processing capabilities beyond the classical limitations. The two quantum variants of RACs, namely, quantum random access code (QRAC) and the entanglement-assisted random access code (EARAC), have demonstrated equal prowess for a number of tasks. However, there do exist specific cases where one outperforms the other. In this article, we study a family of 3→1 distributed RACs [J. Bowles, N. Brunner, and M. Pawłowski, Phys. Rev. A 92, 022351 (2015), 10.1103/PhysRevA.92.022351] and present its general construction of both the QRAC and the EARAC. We demonstrate that, depending on the function of inputs that is sought, if QRAC achieves the maximal success probability then EARAC fails to do so and vice versa. Moreover, a tripartite Bell-type inequality associated with the EARAC variants reveals the genuine multipartite nonlocality exhibited by our protocol. We conclude with an experimental realization of the 3→1 distributed QRAC that achieves higher success probabilities than the maximum possible with EARACs for a number of tasks.

  3. Power Allocation Strategies for Distributed Space-Time Codes in Amplify-and-Forward Mode

    NASA Astrophysics Data System (ADS)

    Maham, Behrouz; Hjørungnes, Are

    2009-12-01

    We consider a wireless relay network with Rayleigh fading channels and apply distributed space-time coding (DSTC) in amplify-and-forward (AF) mode. It is assumed that the relays have statistical channel state information (CSI) of the local source-relay channels, while the destination has full instantaneous CSI of the channels. It turns out that, combined with the minimum SNR based power allocation in the relays, AF DSTC results in a new opportunistic relaying scheme, in which the best relay is selected to retransmit the source's signal. Furthermore, we have derived the optimum power allocation between two cooperative transmission phases by maximizing the average received SNR at the destination. Next, assuming M-PSK and M-QAM modulations, we analyze the performance of cooperative diversity wireless networks using AF opportunistic relaying. We also derive an approximate formula for the symbol error rate (SER) of AF DSTC. Assuming the use of full-diversity space-time codes, we derive two power allocation strategies minimizing the approximate SER expressions, for constrained transmit power. Our analytical results have been confirmed by simulation results, using full-rate, full-diversity distributed space-time codes.
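
    The sketch below illustrates the flavor of the power-allocation step: split a total power budget between the two transmission phases and pick the fraction maximizing a Monte Carlo estimate of the average end-to-end amplify-and-forward SNR. The Rayleigh channel statistics and the single-relay SNR expression are textbook assumptions, not the paper's multi-relay derivation.

      # Hedged sketch: grid search over the power split between the source
      # phase and the relay phase of a two-hop AF link; channels are toy.
      import numpy as np

      rng = np.random.default_rng(6)
      P, N0 = 1.0, 0.1
      h1 = rng.rayleigh(1.0, 10_000) ** 2    # source-relay power gains
      h2 = rng.rayleigh(1.0, 10_000) ** 2    # relay-destination power gains

      def avg_snr(alpha):
          g1 = alpha * P * h1 / N0           # first-hop SNR samples
          g2 = (1 - alpha) * P * h2 / N0     # second-hop SNR samples
          return np.mean(g1 * g2 / (g1 + g2 + 1.0))   # AF end-to-end SNR

      alphas = np.linspace(0.05, 0.95, 19)
      best = max(alphas, key=avg_snr)
      print("best fraction of power in phase 1:", round(best, 2))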

  4. Development Of Sputtering Models For Fluids-Based Plasma Simulation Codes

    NASA Astrophysics Data System (ADS)

    Veitzer, Seth; Beckwith, Kristian; Stoltz, Peter

    2015-09-01

    RF-driven plasma devices such as ion sources and plasma processing devices for many industrial and research applications benefit from detailed numerical modeling. Simulation of these devices using explicit PIC codes is difficult due to inherent separations of time and spatial scales. An alternative is fluid-based codes coupled with electromagnetics, which are applicable to modeling higher-density plasmas in the time domain and can relax time step requirements. To accurately model plasma-surface processes, such as physical sputtering and secondary electron emission, kinetic particle models have been developed, where particles are emitted from a material surface due to plasma ion bombardment. In fluid models, plasma properties are defined on a cell-by-cell basis, and distributions for individual particle properties are assumed. This adds complexity to surface process modeling, which we describe here. We describe the implementation of sputtering models into the hydrodynamic plasma simulation code USim, as well as methods to improve the accuracy of fluid-based simulation of plasma-surface interactions through better modeling of heat fluxes. This work was performed under the auspices of the Department of Energy, Office of Basic Energy Sciences Award #DE-SC0009585.

  5. A method for modeling co-occurrence propensity of clinical codes with application to ICD-10-PCS auto-coding.

    PubMed

    Subotin, Michael; Davis, Anthony R

    2016-09-01

    Natural language processing methods for medical auto-coding, or automatic generation of medical billing codes from electronic health records, generally assign each code independently of the others. They may thus assign codes for closely related procedures or diagnoses to the same document, even when they do not tend to occur together in practice, simply because the right choice can be difficult to infer from the clinical narrative. We propose a method that injects awareness of the propensities for code co-occurrence into this process. First, a model is trained to estimate the conditional probability that one code is assigned by a human coder, given that another code is known to have been assigned to the same document. Then, at runtime, an iterative algorithm is used to apply this model to the output of an existing statistical auto-coder to modify the confidence scores of the codes. We tested this method in combination with a primary auto-coder for International Statistical Classification of Diseases-10 procedure codes, achieving a 12% relative improvement in F-score over the primary auto-coder baseline. The proposed method can be used, with appropriate features, in combination with any auto-coder that generates codes with different levels of confidence. The promising results obtained for International Statistical Classification of Diseases-10 procedure codes suggest that the proposed method may have wider applications in auto-coding.
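
    A minimal sketch of the runtime rescoring idea: blend each code's primary confidence with the co-occurrence-conditioned expectation implied by the other candidate codes, iterating until the scores settle. The mixing weight, iteration count, default for unseen pairs, and the example ICD-10-PCS codes are all illustrative assumptions, not the paper's trained model.

      # Sketch of co-occurrence-aware rescoring of auto-coder confidences.
      # Mixing weight, iteration count, and example codes are illustrative.
      def rescore(confidences, cooccur, alpha=0.5, n_iter=10):
          """confidences: {code: P(code | document)} from the primary coder.
          cooccur: {(a, b): P(a assigned | b assigned to same document)}."""
          scores = dict(confidences)
          for _ in range(n_iter):
              updated = {}
              for c, base in confidences.items():
                  # Expectation of P(c | other) under current score weights;
                  # missing pairs default to the code's own base confidence.
                  num = sum(cooccur.get((c, o), base) * s
                            for o, s in scores.items() if o != c)
                  den = sum(s for o, s in scores.items() if o != c)
                  context = num / den if den else base
                  updated[c] = (1 - alpha) * base + alpha * context
              scores = updated
          return scores

      primary = {"0DB64Z3": 0.8, "0DB68Z3": 0.7}   # near-duplicate procedures
      cooccur = {("0DB64Z3", "0DB68Z3"): 0.05,     # rarely assigned together
                 ("0DB68Z3", "0DB64Z3"): 0.05}
      print(rescore(primary, cooccur))             # both scores are suppressed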

  6. Development of Parallel Code for the Alaska Tsunami Forecast Model

    NASA Astrophysics Data System (ADS)

    Bahng, B.; Knight, W. R.; Whitmore, P.

    2014-12-01

    The Alaska Tsunami Forecast Model (ATFM) is a numerical model used to forecast propagation and inundation of tsunamis generated by earthquakes and other means in both the Pacific and Atlantic Oceans. At the U.S. National Tsunami Warning Center (NTWC), the model is mainly used in a pre-computed fashion. That is, results for hundreds of hypothetical events are computed before alerts, and are accessed and calibrated with observations during tsunamis to immediately produce forecasts. ATFM uses the non-linear, depth-averaged, shallow-water equations of motion with multiply nested grids in two-way communications between domains of each parent-child pair as waves get closer to coastal waters. Even with the pre-computation the task becomes non-trivial as sub-grid resolution gets finer. Currently, the finest resolution Digital Elevation Models (DEM) used by ATFM are 1/3 arc-seconds. With a serial code, large or multiple areas of very high resolution can produce run-times that are unrealistic even in a pre-computed approach. One way to increase the model performance is code parallelization used in conjunction with a multi-processor computing environment. NTWC developers have undertaken an ATFM code-parallelization effort to streamline the creation of the pre-computed database of results with the long term aim of tsunami forecasts from source to high resolution shoreline grids in real time. Parallelization will also permit timely regeneration of the forecast model database with new DEMs; and, will make possible future inclusion of new physics such as the non-hydrostatic treatment of tsunami propagation. The purpose of our presentation is to elaborate on the parallelization approach and to show the compute speed increase on various multi-processor systems.

  7. Finite element code development for modeling detonation of HMX composites

    NASA Astrophysics Data System (ADS)

    Duran, Adam; Sundararaghavan, Veera

    2015-06-01

    In this talk, we present a hydrodynamics code for modeling shock and detonation waves in HMX. A stable efficient solution strategy based on a Taylor-Galerkin finite element (FE) discretization was developed to solve the reactive Euler equations. In our code, well calibrated equations of state for the solid unreacted material and gaseous reaction products have been implemented, along with a chemical reaction scheme and a mixing rule to define the properties of partially reacted states. A linear Gruneisen equation of state was employed for the unreacted HMX calibrated from experiments. The JWL form was used to model the EOS of gaseous reaction products. It is assumed that the unreacted explosive and reaction products are in both pressure and temperature equilibrium. The overall specific volume and internal energy was computed using the rule of mixtures. An Arrhenius kinetics scheme was integrated to model the chemical reactions. A locally controlled dissipation was introduced that induces a non-oscillatory stabilized scheme for the shock front. The FE model was validated using analytical solutions for the Sod shock and ZND strong detonation models and then used to perform 2D and 3D shock simulations. We will present benchmark problems for geometries in which a single HMX crystal is subjected to a shock condition. Our current progress towards developing microstructural models of HMX/binder composites will also be discussed.

  8. Direct containment heating models in the CONTAIN code

    SciTech Connect

    Washington, K.E.; Williams, D.C.

    1995-08-01

    The potential exists in a nuclear reactor core melt severe accident for molten core debris to be dispersed under high pressure into the containment building. If this occurs, the set of phenomena that result in the transfer of energy to the containment atmosphere and its surroundings is referred to as direct containment heating (DCH). Because of the potential for DCH to lead to early containment failure, the U.S. Nuclear Regulatory Commission (USNRC) has sponsored an extensive research program consisting of experimental, analytical, and risk integration components. An important element of the analytical research has been the development and assessment of direct containment heating models in the CONTAIN code. This report documents the DCH models in the CONTAIN code. DCH models in CONTAIN for representing debris transport, trapping, chemical reactions, and heat transfer from debris to the containment atmosphere and surroundings are described. The descriptions include the governing equations and input instructions in CONTAIN unique to performing DCH calculations. Modifications made to the combustion models in CONTAIN for representing the combustion of DCH-produced and pre-existing hydrogen under DCH conditions are also described. Input table options for representing the discharge of debris from the RPV and the entrainment phase of the DCH process are also described. A sample calculation is presented to demonstrate the functionality of the models. The results show that reasonable behavior is obtained when the models are used to predict the sixth Zion geometry integral effects test at 1/10th scale.

  9. Shared and Distributed Memory Parallel Security Analysis of Large-Scale Source Code and Binary Applications

    SciTech Connect

    Quinlan, D; Barany, G; Panas, T

    2007-08-30

    Many forms of security analysis on large scale applications can be substantially automated but the size and complexity can exceed the time and memory available on conventional desktop computers. Most commercial tools are understandably focused on such conventional desktop resources. This paper presents research work on the parallelization of security analysis of both source code and binaries within our Compass tool, which is implemented using the ROSE source-to-source open compiler infrastructure. We have focused on both shared and distributed memory parallelization of the evaluation of rules implemented as checkers for a wide range of secure programming rules, applicable to desktop machines, networks of workstations and dedicated clusters. While Compass as a tool focuses on source code analysis and reports violations of an extensible set of rules, the binary analysis work uses the exact same infrastructure but is less well developed into an equivalent final tool.

  10. Partially Key Distribution with Public Key Cryptosystem Based on Error Control Codes

    NASA Astrophysics Data System (ADS)

    Tavallaei, Saeed Ebadi; Falahati, Abolfazl

    Due to the low level of security of public key cryptosystems based on number theory, and fundamental difficulties such as "key escrow" in Public Key Infrastructure (PKI) and the need for a secure channel in ID-based cryptography, a new key distribution cryptosystem based on Error Control Codes (ECC) is proposed. The idea is realized through some modifications of the McEliece cryptosystem. The security of the ECC cryptosystem derives from the NP-completeness of decoding general block codes. Using ECC also provides the capability of generating public keys with variable lengths, suitable for different applications. Given the decreasing security of cryptosystems based on number theory and the increasing lengths of their keys, the use of such code-based cryptosystems seems unavoidable in the future.

  11. Inferential multi-spectral image compression based on distributed source coding

    NASA Astrophysics Data System (ADS)

    Wu, Xian-yun; Li, Yun-song; Wu, Cheng-ke; Kong, Fan-qiang

    2008-08-01

    Based on analyses of interferential multispectral imagery (IMI), a new compression algorithm based on distributed source coding is proposed. There are apparent push motions between the IMI sequences; the relative shift between two images is detected by a block-matching algorithm at the encoder. Our algorithm estimates the rate of each bitplane with the estimated side-information frame and then adopts an ROI coding algorithm, in which a rate-distortion lifting procedure is carried out in the rate-allocation stage. Using our algorithm, the FBC can be removed from the traditional scheme. The compression algorithm developed in the paper obtains a gain of up to 3 dB compared with JPEG2000, and significantly reduces complexity and storage consumption compared with 3D-SPIHT, at the cost of a slight degradation in PSNR.

  12. High-capacity quantum key distribution using Chebyshev-map values corresponding to Lucas numbers coding

    NASA Astrophysics Data System (ADS)

    Lai, Hong; Orgun, Mehmet A.; Pieprzyk, Josef; Li, Jing; Luo, Mingxing; Xiao, Jinghua; Xiao, Fuyuan

    2016-11-01

    We propose an approach that achieves high-capacity quantum key distribution using Chebyshev-map values corresponding to Lucas numbers coding. In particular, we encode a key with the Chebyshev-map values corresponding to Lucas numbers and then use k-Chebyshev maps to achieve consecutive and flexible key expansion and apply the pre-shared classical information between Alice and Bob and fountain codes for privacy amplification to solve the security of the exchange of classical information via the classical channel. Consequently, our high-capacity protocol does not have the limitations imposed by orbital angular momentum and down-conversion bandwidths, and it meets the requirements for longer distances and lower error rates simultaneously.
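
    The two mathematical ingredients of the encoding can be shown in a few lines: Lucas numbers as the index sequence, and Chebyshev-map values T_k(x) = cos(k arccos x), whose semigroup property underlies consecutive key expansion. The seed value below is arbitrary, and the protocol's quantum and privacy-amplification layers are not sketched.

      # Chebyshev-map values at Lucas-number indices, the two ingredients the
      # proposed encoding combines; the seed x0 is arbitrary here.
      import math

      def lucas(n):
          """Lucas numbers: L0 = 2, L1 = 1, L_n = L_{n-1} + L_{n-2}."""
          a, b = 2, 1
          for _ in range(n):
              a, b = b, a + b
          return a

      def chebyshev(k, x):
          """T_k(x) = cos(k * arccos(x)) on [-1, 1]; its semigroup property
          T_j(T_k(x)) = T_{jk}(x) is what Chebyshev-map schemes exploit."""
          return math.cos(k * math.acos(x))

      x0 = 0.3
      for n in range(2, 7):
          k = lucas(n)
          print(f"L_{n} = {k:2d}  T_{{L_{n}}}(x0) = {chebyshev(k, x0):+.6f}")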

  13. Assessment of uncertainties of the models used in thermal-hydraulic computer codes

    NASA Astrophysics Data System (ADS)

    Gricay, A. S.; Migrov, Yu. A.

    2015-09-01

    The article addresses the problem of determining the statistical characteristics of variable parameters (the variation range and distribution law) when analyzing the uncertainty and sensitivity of calculation results to uncertainty in input data. A comparative analysis of modern approaches to uncertainty in input data is presented. The need to develop an alternative method for estimating the uncertainty of model parameters used in thermal-hydraulic computer codes, in particular in the closing correlations of the loop thermal-hydraulics block, is shown. Such a method should involve a minimal degree of subjectivity and must be based on objective quantitative assessment criteria. The method includes three sequential stages: selecting experimental data satisfying the specified criteria, identifying the key closing correlation using a sensitivity analysis, and carrying out case calculations followed by statistical processing of the results. By using the method, one can estimate the uncertainty range of a variable parameter and establish its distribution law in that range, provided that the experimental information is sufficiently representative. Practical application of the method is demonstrated using, as an example, the problem of estimating the uncertainty of a parameter appearing in the model describing the transition to post-burnout heat transfer that is used in the thermal-hydraulic computer code KORSAR. The study revealed the need to narrow the previously established uncertainty range of this parameter and to replace the uniform distribution law in that range with a Gaussian distribution law. The proposed method can be applied to different thermal-hydraulic computer codes. In some cases, its application can make it possible to achieve a smaller degree of conservatism in the expert estimates of uncertainties pertinent to the model parameters used in computer codes.

  14. A model of PSF estimation for coded mask infrared imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Ao; Jin, Jie; Wang, Qing; Yang, Jingyu; Sun, Yi

    2014-11-01

    The point spread function (PSF) of an imaging system with a coded mask is generally acquired by practical measurement with a calibration light source. Because the thermal radiation of coded masks is much more severe than in visible imaging systems, burying the modulation effects of the mask pattern, it is difficult to estimate and evaluate the performance of a mask pattern from measured results. To tackle this problem, a model for infrared imaging systems with masks is presented in this paper. The model is composed of two functional components: coded mask imaging with ideal focused lenses, and the imperfect imaging of practical lenses. Ignoring the thermal radiation, the system's PSF can then be represented by a convolution of the diffraction pattern of the mask with the PSF of the practical lenses. To evaluate the performance of different mask patterns, a set of criteria is designed according to different imaging and recovery methods. Furthermore, imaging results with inclined plane waves are analyzed to obtain the variation of the PSF within the field of view. The influence of mask cell size is also analyzed to control the diffraction pattern. Numerical results show that mask patterns for direct imaging systems should have more random structure, while more periodic structure is needed in systems with image reconstruction. By adjusting the combination of random and periodic arrangement, the desired diffraction pattern can be achieved.
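
    The model's central decomposition, the system PSF as the mask pattern convolved with the lens PSF, can be prototyped directly, as in the sketch below. A random binary mask stands in for the coded mask (with its geometric shadow in place of a computed diffraction pattern) and a Gaussian blur for the practical lens; both are assumptions of the sketch, not the paper's calibrated components.

      # System PSF modeled as a mask pattern convolved with a lens PSF,
      # ignoring mask thermal radiation as the model does. The random mask
      # and Gaussian lens blur are stand-ins for illustration.
      import numpy as np
      from scipy.signal import convolve2d

      rng = np.random.default_rng(2)
      mask = rng.integers(0, 2, size=(31, 31)).astype(float)  # coded mask

      y, x = np.mgrid[-7:8, -7:8]
      lens_psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2))        # Gaussian blur
      lens_psf /= lens_psf.sum()

      system_psf = convolve2d(mask, lens_psf, mode="same")    # combined PSF
      print("PSF energy:", system_psf.sum())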

  15. Modelling of LOCA Tests with the BISON Fuel Performance Code

    SciTech Connect

    Williamson, Richard L; Pastore, Giovanni; Novascone, Stephen Rhead; Spencer, Benjamin Whiting; Hales, Jason Dean

    2016-05-01

    BISON is a modern finite-element based, multidimensional nuclear fuel performance code that is under development at Idaho National Laboratory (USA). Recent advances of BISON include the extension of the code to the analysis of LWR fuel rod behaviour during loss-of-coolant accidents (LOCAs). In this work, BISON models for the phenomena relevant to LWR cladding behaviour during LOCAs are described, followed by presentation of code results for the simulation of LOCA tests. Analysed experiments include separate effects tests of cladding ballooning and burst, as well as the Halden IFA-650.2 fuel rod test. Two-dimensional modelling of the experiments is performed, and calculations are compared to available experimental data. Comparisons include cladding burst pressure and temperature in separate effects tests, as well as the evolution of fuel rod inner pressure during ballooning and time to cladding burst. Furthermore, BISON three-dimensional simulations of separate effects tests are performed, which demonstrate the capability to reproduce the effect of azimuthal temperature variations in the cladding. The work has been carried out in the frame of the collaboration between Idaho National Laboratory and Halden Reactor Project, and the IAEA Coordinated Research Project FUMAC.

  16. LineCast: line-based distributed coding and transmission for broadcasting satellite images.

    PubMed

    Wu, Feng; Peng, Xiulian; Xu, Jizheng

    2014-03-01

    In this paper, we propose a novel coding and transmission scheme, called LineCast, for broadcasting satellite images to a large number of receivers. The proposed LineCast matches perfectly with the line scanning cameras that are widely adopted in orbit satellites to capture high-resolution images. On the sender side, each captured line is immediately compressed by a transform-domain scalar modulo quantization. Without syndrome coding, the transmission power is directly allocated to quantized coefficients by scaling the coefficients according to their distributions. Finally, the scaled coefficients are transmitted over a dense constellation. This line-based distributed scheme features low delay, low memory cost, and low complexity. On the receiver side, our proposed line-based prediction is used to generate side information from previously decoded lines, which fully utilizes the correlation among lines. The quantized coefficients are decoded by the linear least square estimator from the received data. The image line is then reconstructed by the scalar modulo dequantization using the generated side information. Since there is neither syndrome coding nor channel coding, the proposed LineCast can make a large number of receivers reach the qualities matching their channel conditions. Our theoretical analysis shows that the proposed LineCast can achieve Shannon's optimum performance by using a high-dimensional modulo-lattice quantization. Experiments on satellite images demonstrate that it achieves up to 1.9-dB gain over the state-of-the-art 2D broadcasting scheme and a gain of more than 5 dB over JPEG 2000 with forward error correction.
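
    A one-dimensional toy version of the scalar modulo quantization and side-information decoding is sketched below: the sender transmits only a centered modulo of each coefficient, and the receiver resolves the ambiguity with its line-based prediction, recovering the value exactly whenever the prediction error stays below half the modulo step. The step size and numbers are arbitrary; power scaling, the transform, and the high-dimensional lattice are omitted.

      # Toy 1-D analog of LineCast's modulo quantization: transmit x mod delta
      # and recover x at the receiver from side information s, provided
      # |x - s| < delta/2. The step size is illustrative.
      import numpy as np

      def cmod(v, delta):
          """Centered modulo: map v into [-delta/2, delta/2)."""
          return v - delta * np.round(v / delta)

      delta = 4.0
      x = np.array([10.3, -7.8, 2.1, 15.6])      # coefficients of one line
      s = x + np.array([0.9, -1.2, 0.4, -1.5])   # prediction from prior lines

      tx = cmod(x, delta)                        # all that goes on the channel
      x_hat = s + cmod(tx - s, delta)            # modulo dequantization
      print(np.allclose(x_hat, x))               # True: side info resolves it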

  17. An algebraic model of an associative noise-like coding memory.

    PubMed

    Bottini, S

    1980-01-01

    A mathematical model of an associative memory is presented, sharing with the optical holography memory systems the properties which establish an analogy with biological memory. This memory system--developed from Gabor's model of memory--is based on a noise-like coding of the information by which it realizes a distributed, damage-tolerant, "equipotential" storage through simultaneous state changes of discrete substratum elements. Each two associated items being stored are coded by each other by means of two noise-like patterns obtained from them through a randomizing preprocessing. The algebraic transformations operating the information storage and retrieval are matrix-vector products involving Toeplitz type matrices. Several noise-like coded memory traces are superimposed on a common substratum without crosstalk interference; moreover, extraneous noise added to these memory traces does not injure the stored information. The main performances shown by this memory model are: i) the selective, complete recovering of stored information from incomplete keys, both mixed with extraneous information and translated from the position learnt; ii) a dynamic recollection where the information just recovered acts as a new key for a sequential retrieval process; iii) context-dependent responses. The hypothesis that the information is stored in the nervous system through a noise-like coding is suggested. The model has been simulated on a digital computer using bidimensional images.
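
    A miniature of the storage/retrieval algebra is sketched below using circular convolution and correlation (circulant matrices, a special case of the Toeplitz products in the model): noise-like codes for associated items are bound together, superimposed on one common trace, and recovered selectively by the key. The dimension and the hash-based code generation are conveniences of the sketch, not the paper's construction.

      # Miniature noise-like distributed memory: pairs are bound by circular
      # convolution, superimposed on one trace, and recalled by correlation.
      import hashlib
      import numpy as np

      N = 2048

      def code(name):
          """Noise-like code for an item (the 'randomizing preprocessing')."""
          seed = int.from_bytes(hashlib.sha256(name.encode()).digest()[:4],
                                "little")
          return np.random.default_rng(seed).standard_normal(N) / np.sqrt(N)

      def store(trace, key, value):
          """Bind key and value by circular convolution and superimpose."""
          return trace + np.real(np.fft.ifft(np.fft.fft(key) *
                                             np.fft.fft(value)))

      def recall(trace, key):
          """Unbind by circular correlation with the key."""
          return np.real(np.fft.ifft(np.fft.fft(trace) *
                                     np.conj(np.fft.fft(key))))

      trace = np.zeros(N)
      trace = store(trace, code("salt"), code("pepper"))
      trace = store(trace, code("cat"), code("dog"))  # traces share one substratum

      out = recall(trace, code("salt"))               # cue with 'salt'
      for item in ("pepper", "dog"):
          print(item, round(float(out @ code(item)), 3))   # ~1.0 vs ~0.0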

  18. 24 CFR 200.925c - Model codes.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Workman Mill Road, Whittier, California 90601. (2) National Electrical Code, NFPA 70, 1993 Edition... Building Officials and Code Administrators International, Inc., 4051 West Flossmoor Road, Country Club... the Southern Building Code Congress International, Inc., 900 Montclair Road, Birmingham, Alabama 35213...

  19. Hierarchical model for distributed seismicity

    SciTech Connect

    Tejedor, Alejandro; Gomez, Javier B.; Pacheco, Amalio F.

    2010-07-15

    A cellular automata model for the interaction between seismic faults in an extended region is presented. Faults are represented by boxes formed by a different number of sites and located in the nodes of a fractal tree. Both the distribution of box sizes and the interaction between them is assumed to be hierarchical. Load particles are randomly added to the system, simulating the action of external tectonic forces. These particles fill the sites of the boxes progressively. When a box is full it topples, some of the particles are redistributed to other boxes and some of them are lost. A box relaxation simulates the occurrence of an earthquake in the region. The particle redistributions mostly occur upwards (to larger faults) and downwards (to smaller faults) in the hierarchy producing new relaxations. A simple and efficient bookkeeping of the information allows the running of systems with more than fifty million faults. This model is consistent with the definition of magnitude, i.e., earthquakes of magnitude m take place in boxes with a number of sites ten times bigger than those boxes responsible for earthquakes with a magnitude m-1 which are placed in the immediate lower level of the hierarchy. The three parameters of the model have a geometrical nature: the height or number of levels of the fractal tree, the coordination of the tree and the ratio of areas between boxes in two consecutive levels. Besides reproducing several seismicity properties and regularities, this model is used to test the performance of some precursory patterns.
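
    The sketch below is a drastically reduced version of the automaton on a three-level binary tree: boxes one level up hold ten times more sites, random loading fills them, and a full box topples, sending fixed fractions of its load up and down the hierarchy while the rest is lost. The redistribution fractions and tree parameters are invented for the sketch, not the paper's values.

      # Miniature hierarchical fault automaton: random loading, toppling, and
      # up/down redistribution on a 3-level binary tree. Fractions are invented.
      import random
      from collections import Counter

      random.seed(4)
      LEVELS = 3
      CAP = {lvl: 10 ** (LEVELS - lvl) for lvl in range(LEVELS)}  # 1000,100,10
      boxes = {(lvl, i): 0 for lvl in range(LEVELS) for i in range(2 ** lvl)}
      events = []

      def topple(lvl, i):
          events.append(LEVELS - lvl)            # bigger box -> bigger magnitude
          load, boxes[(lvl, i)] = boxes[(lvl, i)], 0
          if lvl > 0:                            # ~1/3 of the load moves up
              boxes[(lvl - 1, i // 2)] += load // 3
          if lvl < LEVELS - 1:                   # ~1/3 moves down, rest is lost
              for child in (2 * i, 2 * i + 1):
                  boxes[(lvl + 1, child)] += load // 6
          for (l, j) in list(boxes):             # cascade induced relaxations
              if boxes[(l, j)] >= CAP[l]:
                  topple(l, j)

      for _ in range(20000):                     # external tectonic loading
          lvl = random.randrange(LEVELS)
          box = (lvl, random.randrange(2 ** lvl))
          boxes[box] += 1
          if boxes[box] >= CAP[lvl]:
              topple(*box)

      print(sorted(Counter(events).items()))     # magnitude-frequency counts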

  20. Finite element code development for modeling detonation of HMX composites

    NASA Astrophysics Data System (ADS)

    Duran, Adam V.; Sundararaghavan, Veera

    2017-01-01

    In this work, we present a hydrodynamics code for modeling shock and detonation waves in HMX. A stable efficient solution strategy based on a Taylor-Galerkin finite element (FE) discretization was developed to solve the reactive Euler equations. In our code, well calibrated equations of state for the solid unreacted material and gaseous reaction products have been implemented, along with a chemical reaction scheme and a mixing rule to define the properties of partially reacted states. A linear Gruneisen equation of state was employed for the unreacted HMX calibrated from experiments. The JWL form was used to model the EOS of gaseous reaction products. It is assumed that the unreacted explosive and reaction products are in both pressure and temperature equilibrium. The overall specific volume and internal energy was computed using the rule of mixtures. An Arrhenius kinetics scheme was integrated to model the chemical reactions. A locally controlled dissipation was introduced that induces a non-oscillatory stabilized scheme for the shock front. The FE model was validated using analytical solutions for the Sod shock and ZND strong detonation models. Benchmark problems are presented for geometries in which a single HMX crystal is subjected to a shock condition.

  1. Description of the FORTRAN implementation of the spring small grains planting date distribution model

    NASA Technical Reports Server (NTRS)

    Artley, J. A. (Principal Investigator)

    1981-01-01

    The Hodges-Artley spring small grains planting date distribution model was coded in FORTRAN. The PLDRVR program, which implements the model, is described and a copy of the code is provided. The purpose, calling procedure, local variables, and input/output devices for each subroutine are explained to supplement the user's guide.

  2. Interim storage of spent and disused sealed sources: optimisation of external dose distribution in waste grids using the MCNPX code.

    PubMed

    Paiva, I; Oliveira, C; Trindade, R; Portugal, L

    2005-01-01

    Radioactive sealed sources are used worldwide in different fields of application. When no further use is foreseen for these sources, they become spent or disused sealed sources and are subject to a specific waste management scheme. Portugal has a Radioactive Waste Interim Storage Facility where spent or disused sealed sources are conditioned in a cement matrix inside concrete drums arranged in a grid. The gamma dose values around each grid depend on the enclosed activity and the radionuclides in each drum, as well as on the distribution of the drums in the various layers of the grid. This work proposes a method, based on Monte Carlo simulation with the MCNPX code, to estimate the best drum arrangement through optimisation of the dose distribution in a grid. Dose rate values measured at 1 m from the surface of the chosen optimised grid were used to validate the corresponding computational grid model.

  3. Model-Based Least Squares Reconstruction of Coded Source Neutron Radiographs: Integrating the ORNL HFIR CG1D Source Model

    SciTech Connect

    Santos-Villalobos, Hector J; Gregor, Jens; Bingham, Philip R

    2014-01-01

    At present, neutron sources cannot be fabricated small and powerful enough to achieve high resolution radiography while maintaining an adequate flux. One solution is to employ computational imaging techniques such as a Magnified Coded Source Imaging (CSI) system. A coded mask is placed between the neutron source and the object. The system resolution is increased by reducing the size of the mask holes, and the flux is increased by increasing the size of the coded mask and/or the number of holes. One limitation of such a system is that the resolution of current state-of-the-art scintillator-based detectors is limited to roughly 50 μm. To overcome this challenge, the coded mask and object are magnified by making the distance from the coded mask to the object much smaller than the distance from the object to the detector. In previous work, we showed via synthetic experiments that our least squares method outperforms other methods in image quality and reconstruction precision because it models the CSI system components. However, those validation experiments were limited to simplistic neutron sources. In this work, we aim to model the flux distribution of a real neutron source and incorporate that model into our least squares computational system. We provide a full description of the methodology used to characterize the neutron source and validate the method with synthetic experiments.
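
    The reconstruction idea, modeling the coded-mask system as a matrix and solving in the least squares sense, can be sketched in one dimension. This is a toy analogue, not the authors' implementation; the paper's contribution is to fold a measured source flux model into the system matrix, which is idealized as uniform here.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 64
      mask = rng.integers(0, 2, 9)                 # a 9-hole coded aperture
      x = np.zeros(n); x[25:35] = 1.0              # toy 1-D object

      # Forward model: each detector pixel views the object through the mask
      # (a cyclic convolution); A stands in for the modeled CSI system matrix.
      A = np.zeros((n, n))
      for i in range(n):
          for k, m in enumerate(mask):
              A[i, (i + k - 4) % n] = m
      y = A @ x + 0.01 * rng.standard_normal(n)    # noisy coded radiograph

      # Model-based least-squares estimate of the object.
      x_hat = np.linalg.lstsq(A, y, rcond=None)[0]
      print(np.round(x_hat[23:37], 2))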

  4. Mitigating BeiDou Satellite-Induced Code Bias: Taking into Account the Stochastic Model of Corrections

    PubMed Central

    Guo, Fei; Li, Xin; Liu, Wanke

    2016-01-01

    The BeiDou satellite-induced code biases have been confirmed to be orbit type-, frequency-, and elevation-dependent. Such code-phase divergences (code bias variations) severely affect absolute precise applications that use code measurements. To reduce their adverse effects, an improved correction model is proposed in this paper. Unlike the model proposed by Wanninger and Beer (2015), more data (a time span of almost two years) were used to produce the correction values. More importantly, the stochastic information, i.e., the precision indexes, is given together with the correction values in the improved model, whereas only correction values were given and the precision indexes were completely missing in the traditional model. With the improved correction model, users can better understand their corrections, especially the uncertainty of the corrections. This is helpful for refining the stochastic model of code observations. Validation tests in precise point positioning (PPP) reveal that a proper stochastic model is critical. The actual precision of the corrected code observations is reflected more objectively if the stochastic model of the corrections is taken into account. As a consequence, PPP solutions with the improved model outperform those with the traditional one in terms of positioning accuracy as well as convergence speed. In addition, the Melbourne-Wübbena (MW) combination, which serves for ambiguity fixing, was verified as well. The uncorrected MW values show strong systematic variations with an amplitude of half a wide-lane cycle, which prevents precise ambiguity determination and successful ambiguity resolution. After application of the code bias correction models, the systematic variations are largely removed, and the resulting wide-lane ambiguities are more likely to be fixed. Moreover, the code residuals show more reasonable distributions after code bias corrections with either the traditional or the improved model.
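
    The practical use of the improved model, subtracting an elevation-interpolated correction and folding the correction's precision into the observation weight, can be sketched as follows. The node values and sigmas below are invented placeholders, not the published corrections.

      import numpy as np

      elev_nodes = np.arange(0, 91, 10)             # elevation grid (deg)
      corr = np.array([-0.55, -0.45, -0.30, -0.20, -0.10,
                        0.00,  0.10,  0.20,  0.25,  0.30])    # corrections (m), made up
      sig_corr = np.full(10, 0.05)                  # their 1-sigma precisions (m), made up

      def correct_code(p_raw, elev, sigma_code=0.3):
          """Return the corrected pseudorange and its refined sigma."""
          p = p_raw - np.interp(elev, elev_nodes, corr)
          sigma = np.hypot(sigma_code, np.interp(elev, elev_nodes, sig_corr))
          return p, sigma                           # sigma feeds the PPP weight matrix

      print(correct_code(21_000_000.0, 35.0))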

  5. Mitigating BeiDou Satellite-Induced Code Bias: Taking into Account the Stochastic Model of Corrections.

    PubMed

    Guo, Fei; Li, Xin; Liu, Wanke

    2016-06-18

    The BeiDou satellite-induced code biases have been confirmed to be orbit type-, frequency-, and elevation-dependent. Such code-phase divergences (code bias variations) severely affect absolute precise applications that use code measurements. To reduce their adverse effects, an improved correction model is proposed in this paper. Unlike the model proposed by Wanninger and Beer (2015), more data (a time span of almost two years) were used to produce the correction values. More importantly, the stochastic information, i.e., the precision indexes, is given together with the correction values in the improved model, whereas only correction values were given and the precision indexes were completely missing in the traditional model. With the improved correction model, users can better understand their corrections, especially the uncertainty of the corrections. This is helpful for refining the stochastic model of code observations. Validation tests in precise point positioning (PPP) reveal that a proper stochastic model is critical. The actual precision of the corrected code observations is reflected more objectively if the stochastic model of the corrections is taken into account. As a consequence, PPP solutions with the improved model outperform those with the traditional one in terms of positioning accuracy as well as convergence speed. In addition, the Melbourne-Wübbena (MW) combination, which serves for ambiguity fixing, was verified as well. The uncorrected MW values show strong systematic variations with an amplitude of half a wide-lane cycle, which prevents precise ambiguity determination and successful ambiguity resolution. After application of the code bias correction models, the systematic variations are largely removed, and the resulting wide-lane ambiguities are more likely to be fixed. Moreover, the code residuals show more reasonable distributions after code bias corrections with either the traditional or the improved model.

  6. A model code for the radiative theta pinch

    SciTech Connect

    Lee, S.; Saw, S. H.; Lee, P. C. K.; Akel, M.; Damideh, V.; Khattak, N. A. D.; Mongkolnavin, R.; Paosawatyanyong, B.

    2014-07-15

    A model for the theta pinch is presented with three modelled phases: a radial inward shock phase, a reflected shock phase, and a final pinch phase. The governing equations for the phases are derived incorporating thermodynamics, radiation, and radiation-coupled dynamics in the pinch phase. A code is written incorporating corrections for the effects of the transit delay of small disturbing speeds and the effects of plasma self-absorption on the radiation. Two model parameters are incorporated: the coupling coefficient f between the primary loop current and the induced plasma current, and the mass swept-up factor f_m. Their values are taken from experiments carried out in the Chulalongkorn theta pinch.

  7. A model code for the radiative theta pinch

    NASA Astrophysics Data System (ADS)

    Lee, S.; Saw, S. H.; Lee, P. C. K.; Akel, M.; Damideh, V.; Khattak, N. A. D.; Mongkolnavin, R.; Paosawatyanyong, B.

    2014-07-01

    A model for the theta pinch is presented with three modelled phases: a radial inward shock phase, a reflected shock phase, and a final pinch phase. The governing equations for the phases are derived incorporating thermodynamics, radiation, and radiation-coupled dynamics in the pinch phase. A code is written incorporating corrections for the effects of the transit delay of small disturbing speeds and the effects of plasma self-absorption on the radiation. Two model parameters are incorporated: the coupling coefficient f between the primary loop current and the induced plasma current, and the mass swept-up factor f_m. Their values are taken from experiments carried out in the Chulalongkorn theta pinch.

  8. Improved Flow Modeling in Transient Reactor Safety Analysis Computer Codes

    SciTech Connect

    Holowach, M.J.; Hochreiter, L.E.; Cheung, F.B.

    2002-07-01

    A method of accounting for fluid-to-fluid shear between calculational cells over the wide range of flow conditions envisioned in reactor safety studies has been developed such that it may be easily implemented into a computer code such as COBRA-TF for more detailed subchannel analysis. At a given nodal height in the calculational model, equivalent hydraulic diameters are determined for each calculational cell using either laminar or turbulent velocity profiles. The velocity profile may be determined from a separate CFD (Computational Fluid Dynamics) analysis, experimental data, or existing semi-empirical relationships. The equivalent hydraulic diameter is then applied to the wall drag force calculation to determine the appropriate equivalent fluid-to-fluid shear caused by the wall for each cell, based on the input velocity profile. This means of assigning the shear to a specific cell is independent of the actual wetted perimeter and flow area of the calculational cell. The use of this equivalent hydraulic diameter for each cell within a calculational subchannel results in a representative velocity profile, which can further increase the accuracy and detail of heat transfer and fluid flow modeling within the subchannel when using a thermal-hydraulics systems analysis computer code such as COBRA-TF. Utilizing COBRA-TF with this flow modeling enhancement yields increased accuracy for a coarse-mesh model without the significantly greater computational and time requirements of a full-scale 3D (three-dimensional) transient CFD calculation. (authors)
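
    The role of the equivalent hydraulic diameter can be seen in a standard single-phase wall drag evaluation. The sketch below uses textbook friction relations (laminar and Blasius) and is only a schematic stand-in for COBRA-TF's actual closure relations.

      def wall_drag_per_length(u, area, dh_eq, rho=750.0, mu=1.0e-4):
          """Wall drag force per unit axial length for one calculational cell,
          given an equivalent hydraulic diameter dh_eq obtained from a velocity
          profile (CFD, data, or a semi-empirical relationship)."""
          re = rho * abs(u) * dh_eq / mu
          f = 64.0 / re if re < 2300.0 else 0.316 * re ** -0.25   # Darcy friction factor
          tau_w = 0.125 * f * rho * u * abs(u)      # wall shear stress, (f/4)(rho u^2 / 2)
          perimeter = 4.0 * area / dh_eq            # perimeter implied by dh_eq, not the real one
          return tau_w * perimeter

      print(wall_drag_per_length(u=3.0, area=1.0e-4, dh_eq=0.012))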

  9. MMA, A Computer Code for Multi-Model Analysis

    SciTech Connect

    Eileen P. Poeter and Mary C. Hill

    2007-08-20

    This report documents the Multi-Model Analysis (MMA) computer code. MMA can be used to evaluate results from alternative models of a single system using the same set of observations for all models. As long as the observations, the observation weighting, and system being represented are the same, the models can differ in nearly any way imaginable. For example, they may include different processes, different simulation software, different temporal definitions (for example, steady-state and transient models could be considered), and so on. The multiple models need to be calibrated by nonlinear regression. Calibration of the individual models needs to be completed before application of MMA. MMA can be used to rank models and calculate posterior model probabilities. These can be used to (1) determine the relative importance of the characteristics embodied in the alternative models, (2) calculate model-averaged parameter estimates and predictions, and (3) quantify the uncertainty of parameter estimates and predictions in a way that integrates the variations represented by the alternative models. There is a lack of consensus on what model analysis methods are best, so MMA provides four default methods. Two are based on Kullback-Leibler information, and use the AIC (Akaike Information Criterion) or AICc (second-order-bias-corrected AIC) model discrimination criteria. The other two default methods are the BIC (Bayesian Information Criterion) and the KIC (Kashyap Information Criterion) model discrimination criteria. Use of the KIC criterion is equivalent to using the maximum-likelihood Bayesian model averaging (MLBMA) method. AIC, AICc, and BIC can be derived from Frequentist or Bayesian arguments. The default methods based on Kullback-Leibler information have a number of theoretical advantages, including that they tend to favor more complicated models as more data become available than do the other methods, which makes sense in many situations.
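
    For least-squares-calibrated models, three of the four default criteria reduce to simple formulas, and posterior model probabilities follow from the usual exp(-delta/2) weighting. The sketch below shows the arithmetic for three hypothetical models; it mirrors the standard definitions, not MMA's source code, and KIC is omitted because it additionally requires the Fisher information term.

      import math

      def criteria(rss, n, k):
          """AIC, AICc and BIC for a model with k parameters, n observations
          and residual sum of squares rss (Gaussian-error, least-squares form)."""
          aic = n * math.log(rss / n) + 2 * k
          aicc = aic + 2 * k * (k + 1) / (n - k - 1)
          bic = n * math.log(rss / n) + k * math.log(n)
          return aic, aicc, bic

      models = {"M1": (12.0, 50, 3), "M2": (10.5, 50, 5), "M3": (10.4, 50, 9)}
      aicc = {m: criteria(*v)[1] for m, v in models.items()}
      best = min(aicc.values())
      w = {m: math.exp(-(a - best) / 2.0) for m, a in aicc.items()}
      z = sum(w.values())
      for m in sorted(models):                  # rank and posterior model probability
          print(m, round(aicc[m], 2), round(w[m] / z, 3))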

  10. Auditory information coding by modeled cochlear nucleus neurons.

    PubMed

    Wang, Huan; Isik, Michael; Borst, Alexander; Hemmert, Werner

    2011-06-01

    In this paper we use information theory to quantify the information in the output spike trains of modeled cochlear nucleus globular bushy cells (GBCs). GBCs are part of the sound localization pathway. They are known for their precise temporal processing, and they code amplitude modulations with high fidelity. Here we investigated information transmission for a natural sound, a recorded vowel. We conclude that the maximum information transmission rate for a single neuron was close to 1,050 bits/s, which corresponds to approximately 5.8 bits per spike. For quasi-periodic signals like voiced speech, the transmitted information saturated as word duration increased. In general, approximately 80% of the available information from the spike trains was transmitted within about 20 ms. Transmitted information for speech signals concentrated around formant frequency regions. The efficiency of neural coding was above 60% up to the highest temporal resolution we investigated (20 μs). The increase in transmitted information at that precision indicates that these neurons are able to code information with extremely high fidelity, which is required for sound localization. On the other hand, only 20% of the information was captured when the temporal resolution was reduced to 4 ms. As the temporal resolution of most speech recognition systems is limited to less than 10 ms, this massive information loss might be one of the reasons for the lack of noise robustness of these systems.

  11. Modeling of magnitude distributions by the generalized truncated exponential distribution

    NASA Astrophysics Data System (ADS)

    Raschke, Mathias

    2015-01-01

    The probability distribution of the magnitude can be modeled by an exponential distribution according to the Gutenberg-Richter relation. Two alternatives are the truncated exponential distribution (TED) and the cutoff exponential distribution (CED). The TED is frequently used in seismic hazard analysis although it has a weak point: when two TEDs with equal parameters except the upper bound magnitude are mixed, then the resulting distribution is not a TED. Inversely, it is also not possible to split a TED of a seismic region into TEDs of subregions with equal parameters except the upper bound magnitude. This weakness is a principal problem as seismic regions are constructed scientific objects and not natural units. We overcome it by the generalization of the abovementioned exponential distributions: the generalized truncated exponential distribution (GTED). Therein, identical exponential distributions are mixed by the probability distribution of the correct cutoff points. This distribution model is flexible in the vicinity of the upper bound magnitude and is equal to the exponential distribution for smaller magnitudes. Additionally, the exponential distributions TED and CED are special cases of the GTED. We discuss the possible ways of estimating its parameters and introduce the normalized spacing for this purpose. Furthermore, we present methods for geographic aggregation and differentiation of the GTED and demonstrate the potential and universality of our simple approach by applying it to empirical data. The considerable improvement by the GTED in contrast to the TED is indicated by a large difference between the corresponding values of the Akaike information criterion.
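
    The construction can be made concrete in a few lines: a TED density for the magnitudes, and a GTED formed by mixing TEDs over a discrete distribution of cutoff points. This is a simplified reading of the definition above, with invented parameter values.

      import numpy as np

      def ted_pdf(m, beta, m0, mmax):
          """Truncated exponential (Gutenberg-Richter) magnitude density."""
          norm = 1.0 - np.exp(-beta * (mmax - m0))
          return np.where((m >= m0) & (m <= mmax),
                          beta * np.exp(-beta * (m - m0)) / norm, 0.0)

      def gted_pdf(m, beta, m0, cutoffs, probs):
          """Mix identical exponentials over a distribution of cutoff points."""
          return sum(p * ted_pdf(m, beta, m0, mc) for mc, p in zip(cutoffs, probs))

      m = np.linspace(4.0, 8.0, 9)
      print(np.round(gted_pdf(m, beta=2.0, m0=4.0,
                              cutoffs=[7.0, 7.5, 8.0], probs=[0.2, 0.5, 0.3]), 4))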

  12. Physics models in the toroidal transport code PROCTR

    SciTech Connect

    Howe, H.C.

    1990-08-01

    The physics models that are contained in the toroidal transport code PROCTR are described in detail. Time- and space-dependent models are included for the plasma hydrogenic-ion, helium, and impurity densities, the electron and ion temperatures, the toroidal rotation velocity, and the toroidal current profile. Time- and depth-dependent models for the trapped and mobile hydrogenic particle concentrations in the wall and a time-dependent point model for the number of particles in the limiter are also included. Time-dependent models for neutral particle transport, neutral beam deposition and thermalization, fusion heating, impurity radiation, pellet injection, and the radial electric potential are included and recalculated periodically as the time-dependent models evolve. The plasma solution is obtained either in simple flux coordinates, where the radial shift of each elliptical, toroidal flux surface is included to maintain an approximate pressure equilibrium, or in general three-dimensional torsatron coordinates represented by series of helical harmonics. The detailed coupling of the plasma, scrape-off layer, limiter, and wall models through the neutral transport model makes PROCTR especially suited for modeling of recycling and particle control in toroidal plasmas. The model may also be used in a steady-state profile analysis mode for studying energy and particle balances starting with measured plasma profiles.

  13. Large Discriminative Structured Set Prediction Modeling With Max-Margin Markov Network for Lossless Image Coding.

    PubMed

    Dai, Wenrui; Xiong, Hongkai; Wang, Jia; Zheng, Yuan F

    2014-02-01

    Inherent statistical correlations for context-based prediction and structural interdependencies for local coherence are not fully exploited in existing lossless image coding schemes. This paper proposes a novel prediction model in which the optimal correlated prediction for a set of pixels is obtained in the sense of least code length. It not only exploits spatial statistical correlations for optimal prediction directly based on 2D contexts, but also formulates data-driven structural interdependencies to make the prediction error coherent with the underlying probability distribution for coding. Under joint constraints for local coherence, max-margin Markov networks are incorporated to combine support vector machines structurally and make a max-margin estimation for a correlated region. Specifically, the aim is to produce multiple predictions in the blocks, with the model parameters learned in such a way that the distinction between the actual pixel and all possible estimations is maximized. It is proved that, as the sample size grows, the prediction error is asymptotically upper bounded by the training error under a decomposable loss function. Incorporated into a lossless image coding framework, the proposed model outperforms most reported prediction schemes.

  14. MMA, A Computer Code for Multi-Model Analysis

    USGS Publications Warehouse

    Poeter, Eileen P.; Hill, Mary C.

    2007-01-01

    This report documents the Multi-Model Analysis (MMA) computer code. MMA can be used to evaluate results from alternative models of a single system using the same set of observations for all models. As long as the observations, the observation weighting, and system being represented are the same, the models can differ in nearly any way imaginable. For example, they may include different processes, different simulation software, different temporal definitions (for example, steady-state and transient models could be considered), and so on. The multiple models need to be calibrated by nonlinear regression. Calibration of the individual models needs to be completed before application of MMA. MMA can be used to rank models and calculate posterior model probabilities. These can be used to (1) determine the relative importance of the characteristics embodied in the alternative models, (2) calculate model-averaged parameter estimates and predictions, and (3) quantify the uncertainty of parameter estimates and predictions in a way that integrates the variations represented by the alternative models. There is a lack of consensus on what model analysis methods are best, so MMA provides four default methods. Two are based on Kullback-Leibler information, and use the AIC (Akaike Information Criterion) or AICc (second-order-bias-corrected AIC) model discrimination criteria. The other two default methods are the BIC (Bayesian Information Criterion) and the KIC (Kashyap Information Criterion) model discrimination criteria. Use of the KIC criterion is equivalent to using the maximum-likelihood Bayesian model averaging (MLBMA) method. AIC, AICc, and BIC can be derived from Frequentist or Bayesian arguments. The default methods based on Kullback-Leibler information have a number of theoretical advantages, including that they tend to favor more complicated models as more data become available than do the other methods, which makes sense in many situations.

  15. EMPIRE: Nuclear Reaction Model Code System for Data Evaluation

    NASA Astrophysics Data System (ADS)

    Herman, M.; Capote, R.; Carlson, B. V.; Obložinský, P.; Sin, M.; Trkov, A.; Wienke, H.; Zerkin, V.

    2007-12-01

    EMPIRE is a modular system of nuclear reaction codes, comprising various nuclear models, and designed for calculations over a broad range of energies and incident particles. A projectile can be a neutron, proton, any ion (including heavy ions) or a photon. The energy range extends from the beginning of the unresolved resonance region for neutron-induced reactions (~keV) and goes up to several hundred MeV for heavy-ion induced reactions. The code accounts for the major nuclear reaction mechanisms, including direct, pre-equilibrium and compound nucleus ones. Direct reactions are described by a generalized optical model (ECIS03) or by the simplified coupled-channels approach (CCFUS). The pre-equilibrium mechanism can be treated by a deformation dependent multi-step direct (ORION + TRISTAN) model, by a NVWY multi-step compound one, by a pre-equilibrium exciton model with cluster emission (PCROSS), or by another with full angular momentum coupling (DEGAS). Finally, the compound nucleus decay is described by the full featured Hauser-Feshbach model with γ-cascade and width fluctuations. Advanced treatment of the fission channel takes into account transmission through a multiple-humped fission barrier with absorption in the wells. The fission probability is derived in the WKB approximation within the optical model of fission. Several options for nuclear level densities include the EMPIRE-specific approach, which accounts for the effects of the dynamic deformation of a fast rotating nucleus, the classical Gilbert-Cameron approach, and pre-calculated tables obtained with a microscopic model based on HFB single-particle level schemes with collective enhancement. A comprehensive library of input parameters covers nuclear masses, optical model parameters, ground state deformations, discrete levels and decay schemes, level densities, fission barriers, moments of inertia and γ-ray strength functions. The results can be converted into ENDF-6 formatted files.

  16. Code-to-code benchmark tests for 3D simulation models dedicated to the extraction region in negative ion sources

    NASA Astrophysics Data System (ADS)

    Nishioka, S.; Mochalskyy, S.; Taccogna, F.; Hatayama, A.; Fantz, U.; Minelli, P.

    2017-08-01

    The development of kinetic particle models for the extraction region in negative hydrogen ion sources is indispensable and helpful for clarifying the physics of H- beam extraction. Recently, various 3D kinetic particle codes have been developed to study the extraction mechanism, but they had not previously been compared directly with one another. We have therefore carried out a code-to-code benchmark activity to validate our codes. In the present study, the progress of this benchmark activity is summarized. So far, reasonable agreement among the codes has been obtained using realistic plasma parameters, at least for the following items: (1) the potential profile under vacuum conditions; (2) the temporal evolution of the extracted current densities and the electric potential profiles for a plasma consisting of only electrons and positive ions.

  17. A novel method involving Matlab coding to determine the distribution of a collimated ionizing radiation beam

    NASA Astrophysics Data System (ADS)

    Ioan, M.-R.

    2016-08-01

    In experiments involving ionizing radiation, precise knowledge of the relevant parameters is a very important task. Some of these experiments involve electromagnetic ionizing radiation such as gamma rays and X-rays; others make use of energetic charged or uncharged particles such as protons, electrons and neutrons, and in other cases larger accelerated particles such as helium or deuterium nuclei are used. In all these cases, the beam used to irradiate an exposed target must first be collimated and precisely characterized. In this paper, a novel method involving Matlab coding is proposed to determine the distribution of the collimated beam. The method was implemented by placing Pyrex glass test samples in the beam whose distribution and dimensions are to be determined, taking high-quality pictures of the exposed samples, and then digitally processing the resulting images. The method also yields information about the dose absorbed in the volume of the exposed samples.
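
    The image-processing step can be sketched with a Python analogue of the described Matlab workflow: take the image of the exposed sample (synthesized below so the sketch runs standalone; a real workflow would load the photograph and invert it so darkened glass reads high), integrate over rows and columns, and measure the profile widths.

      import numpy as np

      # Synthetic stand-in for a photograph of an exposed Pyrex sample:
      # an elliptical exposure spot on a uniform background.
      yy, xx = np.mgrid[0:200, 0:300]
      img = 200.0 * np.exp(-(((xx - 150) / 40.0) ** 2 + ((yy - 100) / 25.0) ** 2))

      profile_x = img.sum(axis=0)       # column sums: horizontal beam profile
      profile_y = img.sum(axis=1)       # row sums: vertical beam profile

      def fwhm(profile):
          """Full width at half maximum of a 1-D profile, in pixels."""
          p = profile - profile.min()
          above = np.nonzero(p >= 0.5 * p.max())[0]
          return int(above[-1] - above[0] + 1)

      print("beam size (px):", fwhm(profile_x), "x", fwhm(profile_y))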

  18. Acoustic Gravity Wave Chemistry Model for the RAYTRACE Code.

    DTIC Science & Technology

    2014-09-26

    [Scanned-report OCR residue removed. Recoverable details: report DNA-TR-84-127, Mission Research Corp., Santa Barbara, CA, 1984, contract DNA001-80-C-0022; subject terms: high frequency radio propagation, acoustic gravity waves. The report describes an acoustic gravity wave chemistry model for the RAYTRACE code.]

  19. EMPIRE: A Reaction Model Code for Nuclear Astrophysics

    NASA Astrophysics Data System (ADS)

    Palumbo, A.; Herman, M.; Capote, R.

    2014-06-01

    The correct modeling of abundances requires knowledge of nuclear cross sections for a variety of neutron-, charged-particle- and γ-induced reactions. These involve targets far from stability and are therefore difficult (or currently impossible) to measure. Nuclear reaction theory provides the only way to estimate the values of such cross sections. In this paper we present an application of the EMPIRE reaction code to nuclear astrophysics. Recent measurements are compared to the calculated cross sections, showing consistent agreement for n-, p- and α-induced reactions of astrophysical relevance.

  20. The Overlap Model: A Model of Letter Position Coding

    ERIC Educational Resources Information Center

    Gomez, Pablo; Ratcliff, Roger; Perea, Manuel

    2008-01-01

    Recent research has shown that letter identity and letter position are not integral perceptual dimensions (e.g., jugde primes judge in word-recognition experiments). Most comprehensive computational models of visual word recognition (e.g., the interactive activation model, J. L. McClelland & D. E. Rumelhart, 1981, and its successors) assume that…

  2. Overview of the Graphical User Interface for the GERM Code (GCR Event-Based Risk Model

    NASA Technical Reports Server (NTRS)

    Kim, Myung-Hee; Cucinotta, Francis A.

    2010-01-01

    The descriptions of biophysical events from heavy ions are of interest in radiobiology, cancer therapy, and space exploration. The biophysical description of the passage of heavy ions in tissue and shielding materials is best described by a stochastic approach that includes both ion track structure and nuclear interactions. A new computer model called the GCR Event-based Risk Model (GERM) code was developed for the description of biophysical events from heavy ion beams at the NASA Space Radiation Laboratory (NSRL). The GERM code calculates basic physical and biophysical quantities of high-energy protons and heavy ions that have been studied at NSRL for the purpose of simulating space radiobiological effects. For mono-energetic beams, the code evaluates the linear-energy transfer (LET), range (R), and absorption in tissue equivalent material for a given Charge (Z), Mass Number (A) and kinetic energy (E) of an ion. In addition, a set of biophysical properties are evaluated such as the Poisson distribution of ion or delta-ray hits for a specified cellular area, cell survival curves, and mutation and tumor probabilities. The GERM code also calculates the radiation transport of the beam line for either a fixed number of user-specified depths or at multiple positions along the Bragg curve of the particle. The contributions from primary ion and nuclear secondaries are evaluated. The GERM code accounts for the major nuclear interaction processes of importance for describing heavy ion beams, including nuclear fragmentation, elastic scattering, and knockout-cascade processes by using the quantum multiple scattering fragmentation (QMSFRG) model. The QMSFRG model has been shown to be in excellent agreement with available experimental data for nuclear fragmentation cross sections, and has been used by the GERM code for application to thick target experiments. The GERM code provides scientists participating in NSRL experiments with the data needed for the interpretation of their

  3. Distribution of SR protein exonic splicing enhancer motifs in human protein-coding genes.

    PubMed

    Wang, Jinhua; Smith, Philip J; Krainer, Adrian R; Zhang, Michael Q

    2005-01-01

    Exonic splicing enhancers (ESEs) are pre-mRNA cis-acting elements required for splice-site recognition. We previously developed a web-based program called ESEfinder that scores any sequence for the presence of ESE motifs recognized by the human SR proteins SF2/ASF, SRp40, SRp55 and SC35 (http://rulai.cshl.edu/tools/ESE/). Using ESEfinder, we have undertaken a large-scale analysis of ESE motif distribution in human protein-coding genes. Significantly higher frequencies of ESE motifs were observed in constitutive internal protein-coding exons, compared with both their flanking intronic regions and with pseudo exons. Statistical analysis of ESE motif frequency distributions revealed a complex relationship between splice-site strength and increased or decreased frequencies of particular SR protein motifs. Comparison of constitutively and alternatively spliced exons demonstrated slightly weaker splice-site scores, as well as significantly fewer ESE motifs, in the alternatively spliced group. Our results underline the importance of ESE-mediated SR protein function in the process of exon definition, in the context of both constitutive splicing and regulated alternative splicing.
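
    Motif scanning of the kind ESEfinder performs reduces to sliding a score matrix along the sequence and keeping windows that score above a matrix-specific threshold. The sketch below uses a toy 7-position weight matrix; the real SR-protein matrices and thresholds are those distributed with ESEfinder, so the numbers here are illustrative only.

      # Toy log-odds position weight matrix (a 7-mer, like the SF2/ASF motif).
      PWM = {"A": [1.2, -0.5, 0.3, -1.0, 0.8, -0.2, 0.1],
             "C": [-0.8, 1.0, -0.4, 0.9, -1.1, 0.5, -0.3],
             "G": [-0.2, -0.6, 1.1, -0.7, 0.4, -0.9, 1.0],
             "T": [-1.0, 0.2, -0.8, 0.6, -0.5, 0.7, -0.6]}
      THRESHOLD = 1.956   # illustrative; each ESEfinder matrix has its own threshold

      def scan(seq):
          """Yield (position, window, score) for putative ESE hits."""
          w = len(PWM["A"])
          for i in range(len(seq) - w + 1):
              s = sum(PWM[b][j] for j, b in enumerate(seq[i:i + w]))
              if s >= THRESHOLD:
                  yield i, seq[i:i + w], round(s, 3)

      print(list(scan("CACACGAAGGACCGAAGACTT")))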

  4. Prioritized Degree Distribution in Wireless Sensor Networks with a Network Coded Data Collection Method

    PubMed Central

    Wan, Jan; Xiong, Naixue; Zhang, Wei; Zhang, Qinchao; Wan, Zheng

    2012-01-01

    The reliability of wireless sensor networks (WSNs) can be greatly affected by failures of sensor nodes due to energy exhaustion or the influence of brutal external environmental conditions. Such failures seriously affect data persistence and collection efficiency. Strategies based on network coding technology for WSNs, such as LTCDS, can improve data persistence without massive redundancy. However, due to the poor intermediate performance of LTCDS, a serious 'cliff effect' may appear during the decoding period, and source data are hard to recover from sink nodes before sufficient encoded packets are collected. In this paper, the influence of the coding degree distribution strategy on the 'cliff effect' is observed, and the prioritized data storage and dissemination algorithm PLTD-ALPHA is presented to achieve better data persistence and recovery performance. With PLTD-ALPHA, the degree distribution of the data stored at sensor nodes increases with the predefined degree level, and persistent data packets can be delivered to the sink node in order of their degree. Finally, the performance of PLTD-ALPHA is evaluated, and experimental results show that PLTD-ALPHA can greatly improve data collection performance and decoding efficiency, while data persistence is not notably affected. PMID:23235451
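
    LT-code-based storage schemes such as LTCDS draw each encoded packet's degree from a soliton-type distribution; PLTD-ALPHA's contribution is to reshape that distribution by priority, which is not reproduced here. The sketch below builds the standard robust soliton distribution that such schemes start from.

      import math, random

      def robust_soliton(k, c=0.1, delta=0.5):
          """Robust soliton degree distribution over degrees 1..k."""
          s = c * math.log(k / delta) * math.sqrt(k)
          rho = [1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]
          spike = int(k / s)
          tau = [s / (k * d) for d in range(1, spike)] \
              + [s * math.log(s / delta) / k] \
              + [0.0] * (k - spike)
          mu = [r + t for r, t in zip(rho, tau)]
          z = sum(mu)
          return [v / z for v in mu]

      dist = robust_soliton(100)
      print(random.choices(range(1, 101), weights=dist, k=8))   # sample packet degrees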

  5. Integrated Codes Model for Erosion-Deposition in Long Discharges

    SciTech Connect

    Hogan, John T

    2006-08-01

    There is increasing interest in understanding the mechanisms causing the deuterium retention rates observed in the longest high power tokamak discharges, and their possible relation to near term choices which must be made for plasma-facing components in next generation devices [1]. Both co-deposition and bulk diffusion models are regarded as potentially relevant. This contribution describes a global model for the co-deposition axis of this dilemma, which includes as many of the relevant processes as is computationally feasible, following the 'maximal ordering / minimal simplification' strategy described in Kruskal's "Asymptotology" [2]. The global model is interpretative, meaning that some key information describing the bulk plasma is provided by experimental measurement, and the models for the impurity processes relevant to retention, given this measured background, are simulated and compared with other data. In particular, the model describes the carbon balance in near steady-state systems, to be able to understand the relation between retention in present devices and the level which might be expected in fusion reactors, or precursor experiments such as ITER. The key modules of the global system describe impurity generation, impurity transport in and through the SOL, and core impurity transport. The codes IMPFLU, BBQ, and ITC/MIST, in order of the appearance of the processes they describe, are used to calculate the balance: IMPFLU is an adaptation of the TOKAFLU module of CAST3M [3], developed by CEA, which is a 3-D, time-dependent finite-element code that determines the thermal and mechanical properties of plasma-facing components. BBQ [4, 5] is a Monte Carlo guiding center code which describes trace impurity transport in a 3-D defined-plasma background, to calculate observables (line emission) for comparison with spectroscopy. ITC [6] and MIST [7] are radial core multi-species impurity transport codes. The modules are linked

  6. Comparison of different methods used in integral codes to model coagulation of aerosols

    NASA Astrophysics Data System (ADS)

    Beketov, A. I.; Sorokin, A. A.; Alipchenkov, V. M.; Mosunova, N. A.

    2013-09-01

    The methods for calculating coagulation of particles in the carrying phase that are used in the integral codes SOCRAT, ASTEC, and MELCOR, as well as the Hounslow and Jacobson methods used to model aerosol processes in the chemical industry and in atmospheric investigations are compared on test problems and against experimental results in terms of their effectiveness and accuracy. It is shown that all methods are characterized by a significant error in modeling the distribution function for micrometer particles if calculations are performed using rather "coarse" spectra of particle sizes, namely, when the ratio of the volumes of particles from neighboring fractions is equal to or greater than two. With reference to the problems considered, the Hounslow method and the method applied in the aerosol module used in the ASTEC code are the most efficient ones for carrying out calculations.

  7. Development of a Model and Computer Code to Describe Solar Grade Silicon Production Processes

    NASA Technical Reports Server (NTRS)

    Srivastava, R.; Gould, R. K.

    1979-01-01

    The program aims at developing mathematical models and computer codes based on these models, which allow prediction of the product distribution in chemical reactors for converting gaseous silicon compounds to condensed-phase silicon. The major interest is in collecting silicon as a liquid on the reactor walls and other collection surfaces. Two reactor systems are of major interest, a SiCl4/Na reactor in which Si(l) is collected on the flow tube reactor walls and a reactor in which Si(l) droplets formed by the SiCl4/Na reaction are collected by a jet impingement method. During this quarter the following tasks were accomplished: (1) particle deposition routines were added to the boundary layer code; and (2) Si droplet sizes in SiCl4/Na reactors at temperatures below the dew point of Si are being calculated.

  8. A numerical code for a three-dimensional magnetospheric MHD equilibrium model

    NASA Technical Reports Server (NTRS)

    Voigt, G.-H.

    1992-01-01

    Development of two-dimensional and three-dimensional MHD equilibrium models for Earth's magnetosphere was begun. The original proposal was motivated by the realization that global, purely data-based models of Earth's magnetosphere are inadequate for studying the underlying plasma-physical principles according to which the magnetosphere evolves on the quasi-static convection time scale. Complex numerical grid generation schemes were established for a 3-D Poisson solver, and a robust Grad-Shafranov solver was coded for high-beta MHD equilibria. The effects of both the magnetopause geometry and the boundary conditions on the magnetotail current distribution were then calculated.

  9. 7 CFR Exhibit E to Subpart A of... - Voluntary National Model Building Codes

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    The following documents address the health and safety aspects of buildings and related structures and are voluntary national model building codes as defined in § 1924.4(h)(2)...

  10. A simple model of optimal population coding for sensory systems.

    PubMed

    Doi, Eizaburo; Lewicki, Michael S

    2014-08-01

    A fundamental task of a sensory system is to infer information about the environment. It has long been suggested that an important goal of the first stage of this process is to encode the raw sensory signal efficiently by reducing its redundancy in the neural representation. Some redundancy, however, would be expected because it can provide robustness to noise inherent in the system. Encoding the raw sensory signal itself is also problematic, because it contains distortion and noise. The optimal solution would be constrained further by limited biological resources. Here, we analyze a simple theoretical model that incorporates these key aspects of sensory coding, and apply it to conditions in the retina. The model specifies the optimal way to incorporate redundancy in a population of noisy neurons, while also optimally compensating for sensory distortion and noise. Importantly, it allows an arbitrary input-to-output cell ratio between sensory units (photoreceptors) and encoding units (retinal ganglion cells), providing predictions of retinal codes at different eccentricities. Compared to earlier models based on redundancy reduction, the proposed model conveys more information about the original signal. Interestingly, redundancy reduction can be near-optimal when the number of encoding units is limited, such as in the peripheral retina. We show that there exist multiple, equally-optimal solutions whose receptive field structure and organization vary significantly. Among these, the one which maximizes the spatial locality of the computation, but not the sparsity of either synaptic weights or neural responses, is consistent with known basic properties of retinal receptive fields. The model further predicts that receptive field structure changes less with light adaptation at higher input-to-output cell ratios, such as in the periphery.

  11. Kinetic models of gene expression including non-coding RNAs

    NASA Astrophysics Data System (ADS)

    Zhdanov, Vladimir P.

    2011-03-01

    In cells, genes are transcribed into mRNAs, and the latter are translated into proteins. Due to the feedbacks between these processes, the kinetics of gene expression may be complex even in the simplest genetic networks. The corresponding models have already been reviewed in the literature. A new avenue in this field is related to the recognition that the conventional scenario of gene expression is fully applicable only to prokaryotes whose genomes consist of tightly packed protein-coding sequences. In eukaryotic cells, in contrast, such sequences are relatively rare, and the rest of the genome includes numerous transcript units representing non-coding RNAs (ncRNAs). During the past decade, it has become clear that such RNAs play a crucial role in gene expression and accordingly influence a multitude of cellular processes both in the normal state and during diseases. The numerous biological functions of ncRNAs are based primarily on their abilities to silence genes via pairing with a target mRNA and subsequently preventing its translation or facilitating degradation of the mRNA-ncRNA complex. Many other abilities of ncRNAs have been discovered as well. Our review is focused on the available kinetic models describing the mRNA, ncRNA and protein interplay. In particular, we systematically present the simplest models without kinetic feedbacks, models containing feedbacks and predicting bistability and oscillations in simple genetic networks, and models describing the effect of ncRNAs on complex genetic networks. Mathematically, the presentation is based primarily on temporal mean-field kinetic equations. The stochastic and spatio-temporal effects are also briefly discussed.
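
    The simplest member of the model family reviewed here, one gene silenced by one ncRNA through pairing followed by degradation of the complex, is a three-variable mean-field ODE system. The rate constants below are arbitrary illustrative values, not taken from the review.

      import numpy as np
      from scipy.integrate import solve_ivp

      k_m, k_s, k_p = 1.0, 0.8, 5.0     # mRNA, ncRNA and protein synthesis rates
      g_m, g_s, g_p = 0.1, 0.1, 0.05    # first-order degradation rates
      k_pair = 0.5                       # mRNA-ncRNA pairing (silencing) rate

      def rhs(t, y):
          m, s, p = y                    # mRNA, ncRNA, protein levels
          return [k_m - g_m * m - k_pair * m * s,
                  k_s - g_s * s - k_pair * m * s,
                  k_p * m - g_p * p]

      sol = solve_ivp(rhs, (0.0, 100.0), [0.0, 0.0, 0.0], t_eval=[25, 50, 100])
      print(np.round(sol.y, 2))          # columns correspond to t = 25, 50, 100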

  12. Mixing models for the two-way-coupling of CFD codes and zero-dimensional multi-zone codes to model HCCI combustion

    SciTech Connect

    Barths, H.; Felsch, C.; Peters, N.

    2009-01-15

    The objective of this work is the development of a consistent mixing model for the two-way-coupling of a CFD code and a multi-zone code based on multiple zero-dimensional reactors. The two-way-coupling allows for a computationally efficient modeling of HCCI combustion. The physical domain in the CFD code is subdivided into multiple zones based on three phase variables (fuel mixture fraction, dilution, and total enthalpy). Those phase variables are sufficient for the description of the thermodynamic state of each zone, assuming that each zone is at the same pressure. Each zone in the CFD code is represented by a corresponding zone in the zero-dimensional code. The zero-dimensional code solves the chemistry for each zone, and the heat release is fed back into the CFD code. The difficulty in facing this kind of methodology is to keep the thermodynamic state of each zone consistent between the CFD code and the zero-dimensional code after the initialization of the zones in the multi-zone code has taken place. The thermodynamic state of each zone (and thereby the phase variables) will change in time due to mixing and source terms (e.g., vaporization of fuel, wall heat transfer). The focus of this work lies on a consistent description of the mixing between the zones in phase space in the zero-dimensional code, based on the solution of the CFD code. Two mixing models with different degrees of accuracy, complexity, and numerical effort are described. The most elaborate mixing model (and an appropriate treatment of the source terms) keeps the thermodynamic state of the zones in the CFD code and the zero-dimensional code identical. The models are applied to a test case of HCCI combustion in an engine. (author)
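
    The zone assignment itself, binning CFD cells in the three-dimensional phase space spanned by mixture fraction, dilution and enthalpy, can be sketched as follows. The bin counts and synthetic cell data are arbitrary, and the mixing and source-term treatment that is the paper's actual focus is not reproduced.

      import numpy as np

      rng = np.random.default_rng(0)
      ncell = 10000                                   # synthetic CFD cells
      Z, X = rng.random(ncell), rng.random(ncell)     # mixture fraction, dilution
      h = 1.0e6 * (1.0 + 0.2 * rng.random(ncell))     # total enthalpy (J/kg)
      mass = rng.random(ncell)

      # Each occupied phase-space bin becomes one zero-dimensional reactor (zone).
      iz = np.digitize(Z, np.linspace(0.0, 1.0, 6))
      ix = np.digitize(X, np.linspace(0.0, 1.0, 4))
      ih = np.digitize(h, np.linspace(h.min(), h.max(), 4))
      zone = iz * 100 + ix * 10 + ih                  # composite zone label

      # Mass-weighted means define each reactor's initial thermodynamic state.
      for zid in np.unique(zone)[:5]:
          sel = zone == zid
          w = mass[sel] / mass[sel].sum()
          print(zid, sel.sum(), round(float((Z[sel] * w).sum()), 3),
                round(float((h[sel] * w).sum()), 0))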

  13. Mixing models for the two-way-coupling of CFD codes and zero-dimensional multi-zone codes to model HCCI combustion

    SciTech Connect

    Barths, H.; Felsch, C.; Peters, N.

    2008-11-15

    The objective of this work is the development of a consistent mixing model for the two-way-coupling of a CFD code and a multi-zone code based on multiple zero-dimensional reactors. The two-way-coupling allows for a computationally efficient modeling of HCCI combustion. The physical domain in the CFD code is subdivided into multiple zones based on three phase variables (fuel mixture fraction, dilution, and total enthalpy). Those phase variables are sufficient for the description of the thermodynamic state of each zone, assuming that each zone is at the same pressure. Each zone in the CFD code is represented by a corresponding zone in the zero-dimensional code. The zero-dimensional code solves the chemistry for each zone, and the heat release is fed back into the CFD code. The difficulty in facing this kind of methodology is to keep the thermodynamic state of each zone consistent between the CFD code and the zero-dimensional code after the initialization of the zones in the multi-zone code has taken place. The thermodynamic state of each zone (and thereby the phase variables) will change in time due to mixing and source terms (e.g., vaporization of fuel, wall heat transfer). The focus of this work lies on a consistent description of the mixing between the zones in phase space in the zero-dimensional code, based on the solution of the CFD code. Two mixing models with different degrees of accuracy, complexity, and numerical effort are described. The most elaborate mixing model (and an appropriate treatment of the source terms) keeps the thermodynamic state of the zones in the CFD code and the zero-dimensional code identical. The models are applied to a test case of HCCI combustion in an engine. (author)

  14. New high burnup fuel models for NRC's licensing audit code, FRAPCON

    SciTech Connect

    Lanning, D.D.; Beyer, C.E.; Painter, C.L.

    1996-03-01

    Fuel behavior models have recently been updated within the U.S. Nuclear Regulatory Commission steady-state FRAPCON code used for auditing of fuel vendor/utility codes and analyses. These modeling updates have concentrated on providing a best-estimate prediction of steady-state fuel behavior up to the maximum burnup levels of current data (60 to 65 GWd/MTU rod-average). A decade has passed since these models were last updated. Currently, some U.S. utilities and fuel vendors are requesting approval for rod-average burnups greater than 60 GWd/MTU; however, until these recent updates the NRC did not have valid fuel performance models at these higher burnup levels. Pacific Northwest Laboratory (PNL) has reviewed 15 separate effects models within the FRAPCON fuel performance code (References 1 and 2) and identified nine models that needed updating for improved prediction of fuel behavior at high burnup levels. The six separate effects models not updated were the cladding thermal properties, cladding thermal expansion, cladding creepdown, fuel specific heat, fuel thermal expansion and open gap conductance. Comparison of these models to the currently available data indicates that they still adequately predict the data within data uncertainties. The nine models identified as needing improvement for predicting high-burnup behavior are fission gas release (FGR), fuel thermal conductivity (accounting for both high burnup effects and burnable poison additions), fuel swelling, fuel relocation, radial power distribution, fuel-cladding contact gap conductance, cladding corrosion, cladding mechanical properties and cladding axial growth. Each of the updated models will be described in the following sections and the model predictions will be compared to currently available high burnup data.

  15. A MATLAB based 3D modeling and inversion code for MT data

    NASA Astrophysics Data System (ADS)

    Singh, Arun; Dehiya, Rahul; Gupta, Pravin K.; Israil, M.

    2017-07-01

    The development of a MATLAB based computer code, AP3DMT, for modeling and inversion of 3D Magnetotelluric (MT) data is presented. The code comprises two independent components: grid generator code and modeling/inversion code. The grid generator code performs model discretization and acts as an interface by generating various I/O files. The inversion code performs core computations in modular form - forward modeling, data functionals, sensitivity computations and regularization. These modules can be readily extended to other similar inverse problems like Controlled-Source EM (CSEM). The modular structure of the code provides a framework useful for implementation of new applications and inversion algorithms. The use of MATLAB and its libraries makes it more compact and user friendly. The code has been validated on several published models. To demonstrate its versatility and capabilities the results of inversion for two complex models are presented.

  16. mGrid: A load-balanced distributed computing environment for the remote execution of the user-defined Matlab code

    PubMed Central

    Karpievitch, Yuliya V; Almeida, Jonas S

    2006-01-01

    Background Matlab, a powerful and productive language that allows for rapid prototyping, modeling and simulation, is widely used in computational biology. Modeling and simulation of large biological systems often require more computational resources than are available on a single computer. Existing distributed computing environments like the Distributed Computing Toolbox, MatlabMPI, Matlab*G and others allow for the remote (and possibly parallel) execution of Matlab commands with varying support for features like an easy-to-use application programming interface, load-balanced utilization of resources, extensibility over the wide area network, and minimal system administration skill requirements. However, all of these environments require some level of access to participating machines to manually distribute the user-defined libraries that the remote call may invoke. Results mGrid augments the usual process distribution seen in other similar distributed systems by adding facilities for user code distribution. mGrid's client-side interface is an easy-to-use native Matlab toolbox that transparently executes user-defined code on remote machines (i.e. the user is unaware that the code is executing somewhere else). Run-time variables are automatically packed and distributed with the user-defined code and automated load-balancing of remote resources enables smooth concurrent execution. mGrid is an open source environment. Apart from the programming language itself, all other components are also open source, freely available tools: light-weight PHP scripts and the Apache web server. Conclusion Transparent, load-balanced distribution of user-defined Matlab toolboxes and rapid prototyping of many simple parallel applications can now be done with a single easy-to-use Matlab command. Because mGrid utilizes only Matlab, light-weight PHP scripts and the Apache web server, installation and configuration are very simple. Moreover, the web-based infrastructure of mGrid allows for it to be easily extensible over the wide area network.

  17. mGrid: a load-balanced distributed computing environment for the remote execution of the user-defined Matlab code.

    PubMed

    Karpievitch, Yuliya V; Almeida, Jonas S

    2006-03-15

    Matlab, a powerful and productive language that allows for rapid prototyping, modeling and simulation, is widely used in computational biology. Modeling and simulation of large biological systems often require more computational resources than are available on a single computer. Existing distributed computing environments like the Distributed Computing Toolbox, MatlabMPI, Matlab*G and others allow for the remote (and possibly parallel) execution of Matlab commands with varying support for features like an easy-to-use application programming interface, load-balanced utilization of resources, extensibility over the wide area network, and minimal system administration skill requirements. However, all of these environments require some level of access to participating machines to manually distribute the user-defined libraries that the remote call may invoke. mGrid augments the usual process distribution seen in other similar distributed systems by adding facilities for user code distribution. mGrid's client-side interface is an easy-to-use native Matlab toolbox that transparently executes user-defined code on remote machines (i.e. the user is unaware that the code is executing somewhere else). Run-time variables are automatically packed and distributed with the user-defined code and automated load-balancing of remote resources enables smooth concurrent execution. mGrid is an open source environment. Apart from the programming language itself, all other components are also open source, freely available tools: light-weight PHP scripts and the Apache web server. Transparent, load-balanced distribution of user-defined Matlab toolboxes and rapid prototyping of many simple parallel applications can now be done with a single easy-to-use Matlab command. Because mGrid utilizes only Matlab, light-weight PHP scripts and the Apache web server, installation and configuration are very simple. Moreover, the web-based infrastructure of mGrid allows for it to be easily extensible over the wide area network.

  18. Stimulus-dependent Maximum Entropy Models of Neural Population Codes

    PubMed Central

    Segev, Ronen; Schneidman, Elad

    2013-01-01

    Neural populations encode information about their stimulus in a collective fashion, by joint activity patterns of spiking and silence. A full account of this mapping from stimulus to neural activity is given by the conditional probability distribution over neural codewords given the sensory input. For large populations, direct sampling of these distributions is impossible, and so we must rely on constructing appropriate models. We show here that in a population of 100 retinal ganglion cells in the salamander retina responding to temporal white-noise stimuli, dependencies between cells play an important encoding role. We introduce the stimulus-dependent maximum entropy (SDME) model—a minimal extension of the canonical linear-nonlinear model of a single neuron, to a pairwise-coupled neural population. We find that the SDME model gives a more accurate account of single cell responses and in particular significantly outperforms uncoupled models in reproducing the distributions of population codewords emitted in response to a stimulus. We show how the SDME model, in conjunction with static maximum entropy models of population vocabulary, can be used to estimate information-theoretic quantities like average surprise and information transmission in a neural population. PMID:23516339
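
    For a toy population the SDME construction can be written out exactly: stimulus-independent pairwise couplings plus stimulus-driven fields, normalized over all codewords. This enumeration is feasible only for small N and uses invented parameters, not fitted retinal data.

      import itertools
      import numpy as np

      rng = np.random.default_rng(0)
      N = 5                                        # a small population of 5 cells
      J = np.triu(0.2 * rng.standard_normal((N, N)), 1)    # pairwise couplings

      def codeword_distribution(h):
          """P(word | stimulus), with the stimulus entering through the fields h."""
          words = np.array(list(itertools.product([0, 1], repeat=N)))
          energy = words @ h + np.einsum('wi,ij,wj->w', words, J, words)
          p = np.exp(energy)
          return words, p / p.sum()

      h_t = rng.standard_normal(N)                 # e.g. a linear filter of the stimulus
      words, p = codeword_distribution(h_t)
      print(words[np.argmax(p)], round(float(p.max()), 3))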

  19. Secondary neutron source modelling using MCNPX and ALEPH codes

    NASA Astrophysics Data System (ADS)

    Trakas, Christos; Kerkar, Nordine

    2014-06-01

    Monitoring the subcritical state and divergence of reactors requires the presence of neutron sources. But mainly secondary neutrons from these sources feed the ex-core detectors (SRD, Source Range Detector) whose counting rate is correlated with the level of the subcriticality of reactor. In cycle 1, primary neutrons are provided by sources activated outside of the reactor (e.g. Cf252); part of this source can be used for the divergence of cycle 2 (not systematic). A second family of neutron sources is used for the second cycle: the spontaneous neutrons of actinides produced after irradiation of fuel in the first cycle. Both families of sources are not sufficient to efficiently monitor the divergence of the second cycles and following ones, in most reactors. Secondary sources cluster (SSC) fulfil this role. In the present case, the SSC [Sb, Be], after activation in the first cycle (production of Sb124, unstable), produces in subsequent cycles a photo-neutron source by gamma (from Sb124)-neutron (on Be9) reaction. This paper presents the model of the process between irradiation in cycle 1 and cycle 2 results for SRD counting rate at the beginning of cycle 2, using the MCNPX code and the depletion chain ALEPH-V1 (coupling of MCNPX and ORIGEN codes). The results of this simulation are compared with two experimental results of the PWR 1450 MWe-N4 reactors. A good agreement is observed between these results and the simulations. The subcriticality of the reactors is about at -15,000 pcm. Discrepancies on the SRD counting rate between calculations and measurements are in the order of 10%, lower than the combined uncertainty of measurements and code simulation. This comparison validates the AREVA methodology, which allows having an SRD counting rate best-estimate for cycles 2 and next ones and optimizing the position of the SSC, depending on the geographic location of sources, main parameter for optimal monitoring of subcritical states.

  20. Debris Flow Distributed Propagation Model

    NASA Astrophysics Data System (ADS)

    Gregoretti, C.

    The debris flow distributed propagation model is a DEM-based model. The fan is discretized by square cells, and each cell is assigned an altitude above sea level. The cells of the catchment are distinguished in two categories: the source cells and the stripe cells. The source cells receive the input hydrograph: the cells close to the torrent which are flooded by the debris flow overflowing the torrent embankment are source cells. The stripe cells are the cells flooded by debris flow coming from the surrounding cells. At the first time step only the source cells are flooded by debris flow coming from the torrent. At the second time step a certain number of cells are flooded by debris flow coming from the source cells; these cells constitute a stripe of cells and are assigned order two. At the third time step another group of cells is flooded by the debris flow coming from the cells whose order is two; these cells constitute another stripe and are assigned order three. The cell order of a stripe is the time step number corresponding to the transition from dry to flooded state. The mass transfer or momentum exchange between cells is governed by two different mechanisms. The mass transfer is allowed only by a positive or zero flow-level difference between the drained cell and the receiving cell, and it is limited by a non-negative final flow-level difference between the drained cell and the receiving cell; this limitation excludes possible oscillations in the mass transfer. Another limitation is that the mass drained by a cell must be less than the available mass in that cell; this last condition ensures mass conservation. The first mechanism of mass transfer is gravity: the mass in a cell is transferred to the neighbouring cells with lower altitude and flow level according to a uniform flow law. The second mechanism of mass transfer is the broad-crested weir. The mass in a cell is transferred to the
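
    A minimal sketch of this kind of cell-to-cell transfer (illustrative only: a single drainage fraction stands in for the uniform-flow and weir laws, and periodic array shifts are used for brevity), enforcing the two limitations stated in the abstract:

      import numpy as np

      def spread_step(z, h, frac=0.25):
          # z: bed altitude per cell, h: flow depth per cell (same 2-D shape).
          # Mass drains to each 4-neighbour with a lower free surface z+h,
          # capped so the final surface difference stays non-negative (no
          # oscillations) and a cell never drains more than it holds.
          for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
              surf = z + h
              nb = np.roll(surf, shift=(-di, -dj), axis=(0, 1))   # neighbour surface
              diff = surf - nb
              q = np.where(diff > 0.0,
                           np.minimum.reduce([frac * diff, diff / 2.0, h]),
                           0.0)
              h = h - q + np.roll(q, shift=(di, dj), axis=(0, 1))  # drain, then fill neighbour
          return h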

  1. Can mechanism inform species' distribution models?

    PubMed

    Buckley, Lauren B; Urban, Mark C; Angilletta, Michael J; Crozier, Lisa G; Rissler, Leslie J; Sears, Michael W

    2010-08-01

    Two major approaches address the need to predict species distributions in response to environmental changes. Correlative models estimate parameters phenomenologically by relating current distributions to environmental conditions. By contrast, mechanistic models incorporate explicit relationships between environmental conditions and organismal performance, estimated independently of current distributions. Mechanistic approaches include models that translate environmental conditions into biologically relevant metrics (e.g. potential duration of activity), models that capture environmental sensitivities of survivorship and fecundity, and models that use energetics to link environmental conditions and demography. We compared how two correlative and three mechanistic models predicted the ranges of two species: a skipper butterfly (Atalopedes campestris) and a fence lizard (Sceloporus undulatus). Correlative and mechanistic models performed similarly in predicting current distributions, but mechanistic models predicted larger range shifts in response to climate change. Although mechanistic models theoretically should provide more accurate distribution predictions, there is much potential for improving their flexibility and performance.

  2. Unraveling the distributed neural code of facial identity through spatiotemporal pattern analysis.

    PubMed

    Nestor, Adrian; Plaut, David C; Behrmann, Marlene

    2011-06-14

    Face individuation is one of the most impressive achievements of our visual system, and yet uncovering the neural mechanisms subserving this feat appears to elude traditional approaches to functional brain data analysis. The present study investigates the neural code of facial identity perception with the aim of ascertaining its distributed nature and informational basis. To this end, we use a sequence of multivariate pattern analyses applied to functional magnetic resonance imaging (fMRI) data. First, we combine information-based brain mapping and dynamic discrimination analysis to locate spatiotemporal patterns that support face classification at the individual level. This analysis reveals a network of fusiform and anterior temporal areas that carry information about facial identity and provides evidence that the fusiform face area responds with distinct patterns of activation to different face identities. Second, we assess the information structure of the network using recursive feature elimination. We find that diagnostic information is distributed evenly among anterior regions of the mapped network and that a right anterior region of the fusiform gyrus plays a central role within the information network mediating face individuation. These findings serve to map out and characterize a cortical system responsible for individuation. More generally, in the context of functionally defined networks, they provide an account of distributed processing grounded in information-based architectures.

  3. Modeling Constituent Redistribution in U-Pu-Zr Metallic Fuel Using the Advanced Fuel Performance Code BISON

    SciTech Connect

    Douglas Porter; Steve Hayes; Various

    2014-06-01

    The Advanced Fuels Campaign (AFC) metallic fuels currently being tested have higher zirconium and plutonium concentrations than those tested in the past in EBR reactors. Current metal fuel performance codes have limitations and deficiencies in predicting AFC fuel performance, particularly in the modeling of constituent distribution. No fully validated code exists due to sparse data and unknown modeling parameters. Our primary objective is to develop an initial analysis tool by incorporating state-of-the-art knowledge, constitutive models and properties of AFC metal fuels into the MOOSE/BISON (1) framework in order to analyze AFC metallic fuel tests.

  4. CODE's new solar radiation pressure model for GNSS orbit determination

    NASA Astrophysics Data System (ADS)

    Arnold, D.; Meindl, M.; Beutler, G.; Dach, R.; Schaer, S.; Lutz, S.; Prange, L.; Sośnica, K.; Mervart, L.; Jäggi, A.

    2015-08-01

    The Empirical CODE Orbit Model (ECOM) of the Center for Orbit Determination in Europe (CODE), which was developed in the early 1990s, is widely used in the International GNSS Service (IGS) community. For a rather long time, spurious spectral lines have been known to exist in geophysical parameters, in particular in the Earth Rotation Parameters (ERPs) and in the estimated geocenter coordinates, and these could recently be attributed to the ECOM. The effects grew gradually with the increasing influence of the GLONASS system in recent years in the CODE analysis, which has been based on a rigorous combination of GPS and GLONASS since May 2003. In a first step we show that the problems associated with the ECOM are to the largest extent caused by GLONASS, which reached full deployment by the end of 2011. GPS-only, GLONASS-only, and combined GPS/GLONASS solutions using the observations in the years 2009-2011 of a global network of 92 combined GPS/GLONASS receivers were analyzed for this purpose. In a second step we review direct solar radiation pressure (SRP) models for GNSS satellites. We demonstrate that only even-order short-period harmonic perturbations occur along the Sun-satellite direction for GPS and GLONASS satellites, and only odd-order perturbations along the direction perpendicular to both the Sun-satellite vector and the spacecraft's solar panel axis. Based on this insight we assess in the third step the performance of four candidate orbit models for the future ECOM. The geocenter coordinates, the ERP differences w.r.t. the IERS 08 C04 series of ERPs, the misclosures for the midnight epochs of the daily orbital arcs, and scale parameters of Helmert transformations for station coordinates serve as quality criteria. The old and updated ECOM are additionally validated with satellite laser ranging (SLR) observations and by comparing the orbits to those of the IGS and other analysis centers. Based on all tests, we present a new extended ECOM which
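
    The empirical accelerations in models of this family are truncated Fourier series in the satellite's argument of latitude relative to the Sun, evaluated along the D (Sun-satellite), Y (solar-panel axis), and B axes. A minimal sketch with the harmonic content the abstract describes (even orders along D, odd orders along B); the coefficient names are illustrative, not CODE's official parameter set:

      import numpy as np

      def ecom_accel(du, p):
          # du: argument of latitude relative to the Sun (rad);
          # p: dict of estimated coefficients (m/s^2).
          D = p['D0'] + sum(p['D%dc' % (2 * k)] * np.cos(2 * k * du) +
                            p['D%ds' % (2 * k)] * np.sin(2 * k * du) for k in (1, 2))
          Y = p['Y0']                          # constant along the solar-panel axis
          B = p['B0'] + p['B1c'] * np.cos(du) + p['B1s'] * np.sin(du)
          return np.array([D, Y, B])           # acceleration in the D/Y/B frame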

  5. 25 CFR 18.104 - May a tribe include provisions in its tribal probate code regarding the distribution and descent...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... trust personalty? No. All trust personalty will be distributed in accordance with the American Indian... 25 Indians 1 2010-04-01 2010-04-01 false May a tribe include provisions in its tribal probate code regarding the distribution and descent of trust personalty? 18.104 Section 18.104 Indians BUREAU OF...

  6. 25 CFR 18.104 - May a tribe include provisions in its tribal probate code regarding the distribution and descent...

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... trust personalty? No. All trust personalty will be distributed in accordance with the American Indian... 25 Indians 1 2011-04-01 2011-04-01 false May a tribe include provisions in its tribal probate code regarding the distribution and descent of trust personalty? 18.104 Section 18.104 Indians BUREAU OF...

  7. 25 CFR 18.104 - May a tribe include provisions in its tribal probate code regarding the distribution and descent...

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... trust personalty? No. All trust personalty will be distributed in accordance with the American Indian... 25 Indians 1 2014-04-01 2014-04-01 false May a tribe include provisions in its tribal probate code regarding the distribution and descent of trust personalty? 18.104 Section 18.104 Indians BUREAU OF INDIAN...

  8. 25 CFR 18.104 - May a tribe include provisions in its tribal probate code regarding the distribution and descent...

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... trust personalty? No. All trust personalty will be distributed in accordance with the American Indian... 25 Indians 1 2012-04-01 2011-04-01 true May a tribe include provisions in its tribal probate code regarding the distribution and descent of trust personalty? 18.104 Section 18.104 Indians BUREAU OF INDIAN...

  9. 25 CFR 18.104 - May a tribe include provisions in its tribal probate code regarding the distribution and descent...

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... trust personalty? No. All trust personalty will be distributed in accordance with the American Indian... 25 Indians 1 2013-04-01 2013-04-01 false May a tribe include provisions in its tribal probate code regarding the distribution and descent of trust personalty? 18.104 Section 18.104 Indians BUREAU OF INDIAN...

  10. A MODEL BUILDING CODE ARTICLE ON FALLOUT SHELTERS WITH RECOMMENDATIONS FOR INCLUSION OF REQUIREMENTS FOR FALLOUT SHELTER CONSTRUCTION IN FOUR NATIONAL MODEL BUILDING CODES.

    ERIC Educational Resources Information Center

    American Inst. of Architects, Washington, DC.

    A MODEL BUILDING CODE FOR FALLOUT SHELTERS WAS DRAWN UP FOR INCLUSION IN FOUR NATIONAL MODEL BUILDING CODES. DISCUSSION IS GIVEN OF FALLOUT SHELTERS WITH RESPECT TO--(1) NUCLEAR RADIATION, (2) NATIONAL POLICIES, AND (3) COMMUNITY PLANNING. FALLOUT SHELTER REQUIREMENTS FOR SHIELDING, SPACE, VENTILATION, CONSTRUCTION, AND SERVICES SUCH AS ELECTRICAL…

  11. Proof-of-principle experiment of reference-frame-independent quantum key distribution with phase coding

    PubMed Central

    Liang, Wen-Ye; Wang, Shuang; Li, Hong-Wei; Yin, Zhen-Qiang; Chen, Wei; Yao, Yao; Huang, Jing-Zheng; Guo, Guang-Can; Han, Zheng-Fu

    2014-01-01

    We have demonstrated a proof-of-principle experiment of reference-frame-independent phase coding quantum key distribution (RFI-QKD) over an 80-km optical fiber. After considering the finite-key bound, we still achieve a distance of 50 km. In this scenario, the phases of the basis states are related by a slowly time-varying transformation. Furthermore, we developed and realized a new decoy state method for RFI-QKD systems with weak coherent sources to counteract the photon-number-splitting attack. With the help of a reference-frame-independent protocol and a Michelson interferometer with Faraday rotator mirrors, our system is rendered immune to the slow phase changes of the interferometer and the polarization disturbances of the channel, making the procedure very robust. PMID:24402550

  12. Proof-of-principle experiment of reference-frame-independent quantum key distribution with phase coding.

    PubMed

    Liang, Wen-Ye; Wang, Shuang; Li, Hong-Wei; Yin, Zhen-Qiang; Chen, Wei; Yao, Yao; Huang, Jing-Zheng; Guo, Guang-Can; Han, Zheng-Fu

    2014-01-09

    We have demonstrated a proof-of-principle experiment of reference-frame-independent phase coding quantum key distribution (RFI-QKD) over an 80-km optical fiber. After considering the finite-key bound, we still achieve a distance of 50 km. In this scenario, the phases of the basis states are related by a slowly time-varying transformation. Furthermore, we developed and realized a new decoy state method for RFI-QKD systems with weak coherent sources to counteract the photon-number-splitting attack. With the help of a reference-frame-independent protocol and a Michelson interferometer with Faraday rotator mirrors, our system is rendered immune to the slow phase changes of the interferometer and the polarization disturbances of the channel, making the procedure very robust.

  13. Proof-of-principle experiment of reference-frame-independent quantum key distribution with phase coding

    NASA Astrophysics Data System (ADS)

    Liang, Wen-Ye; Wang, Shuang; Li, Hong-Wei; Yin, Zhen-Qiang; Chen, Wei; Yao, Yao; Huang, Jing-Zheng; Guo, Guang-Can; Han, Zheng-Fu

    2014-01-01

    We have demonstrated a proof-of-principle experiment of reference-frame-independent phase coding quantum key distribution (RFI-QKD) over an 80-km optical fiber. After considering the finite-key bound, we still achieve a distance of 50 km. In this scenario, the phases of the basis states are related by a slowly time-varying transformation. Furthermore, we developed and realized a new decoy state method for RFI-QKD systems with weak coherent sources to counteract the photon-number-splitting attack. With the help of a reference-frame-independent protocol and a Michelson interferometer with Faraday rotator mirrors, our system is rendered immune to the slow phase changes of the interferometer and the polarization disturbances of the channel, making the procedure very robust.

  14. The distribution and mutagenesis of short coding INDELs from 1,128 whole exomes.

    PubMed

    Challis, Danny; Antunes, Lilian; Garrison, Erik; Banks, Eric; Evani, Uday S; Muzny, Donna; Poplin, Ryan; Gibbs, Richard A; Marth, Gabor; Yu, Fuli

    2015-02-28

    Identifying insertion/deletion polymorphisms (INDELs) with high confidence has been intrinsically challenging in short-read sequencing data. Here we report our approach for improving INDEL calling accuracy by using a machine learning algorithm to combine call sets generated with three independent methods, and by leveraging the strengths of each individual pipeline. Utilizing this approach, we generated a consensus exome INDEL call set from a large dataset generated by the 1000 Genomes Project (1000G), maximizing both the sensitivity and the specificity of the calls. This consensus exome INDEL call set features 7,210 INDELs, from 1,128 individuals across 13 populations included in the 1000 Genomes Phase 1 dataset, with a false discovery rate (FDR) of about 7.0%. In our study we further characterize the patterns and distributions of these exonic INDELs with respect to density, allele length, and site frequency spectrum, as well as the potential mutagenic mechanisms of coding INDELs in humans.

  15. Photoplus: auxiliary information for printed images based on distributed source coding

    NASA Astrophysics Data System (ADS)

    Samadani, Ramin; Mukherjee, Debargha

    2008-01-01

    A printed photograph is difficult to reuse because the digital information that generated the print may no longer be available. This paper describes a mechanism for approximating the original digital image by combining a scan of the printed photograph with small amounts of digital auxiliary information kept together with the print. The auxiliary information consists of a small amount of digital data to enable accurate registration and color-reproduction, followed by a larger amount of digital data to recover residual errors and lost frequencies by distributed Wyner-Ziv coding techniques. Approximating the original digital image enables many uses, including making good quality reprints from the original print, even when they are faded many years later. In essence, the print itself becomes the currency for archiving and repurposing digital images, without requiring computer infrastructure.

  16. Documentation of the GLAS fourth order general circulation model. Volume 2: Scalar code

    NASA Technical Reports Server (NTRS)

    Kalnay, E.; Balgovind, R.; Chao, W.; Edelmann, D.; Pfaendtner, J.; Takacs, L.; Takano, K.

    1983-01-01

    Volume 2 of a 3-volume technical memorandum contains a detailed documentation of the GLAS fourth-order general circulation model. Volume 2 contains the CYBER 205 scalar and vector codes of the model, a list of variables, and cross references. A variable-name dictionary for the scalar code and code listings are outlined.

  17. Calculation of the pressure distribution on a pitching airfoil with application to the Darrieus Rotor. [Computer code DARIUS

    SciTech Connect

    Ghodoosian, N.

    1984-05-01

    An analytical model leading to the pressure distribution on the cross section of a Darrieus Rotor blade (airfoil) has been constructed. The model is based on inviscid flow theory, and the contribution of the nonsteady wake vortices was neglected. The analytical model was translated into a computer code in order to study a variety of boundary conditions encountered by the rotating blades of the Darrieus Rotor. Results indicate that, for a pitching airfoil, lift can be adequately approximated by the Kutta-Joukowski forces, despite notable deviations in the pressure distribution on the airfoil. These deviations are most significant in the upwind half of the Darrieus Rotor, where higher lift is accompanied by increased adverse pressure gradients. The effect of pitching on lift can be approximated by a linear shift in the angle of attack proportional to the blade angular velocity. Tabulation of the fluid velocity about the pitching-only NACA 0015 allowed the principle of superposition to be used to determine the fluid velocity about a translating and pitching airfoil.

  18. Statistical Model Code System to Calculate Particle Spectra from HMS Precompound Nucleus Decay.

    SciTech Connect

    Blann, Marshall

    2014-11-01

    Version 05. The HMS-ALICE/ALICE codes address the question: what happens when photons, nucleons, or clusters/heavy ions of a few hundred keV to several hundred MeV interact with nuclei? The ALICE codes (as they have evolved over 50 years) use several nuclear reaction models to answer this question, predicting the energies and angles of particles emitted (n, p, 2H, 3H, 3He, 4He, 6Li) in the reaction, and the residues, the spallation and fission products. The models used are principally Monte Carlo formulations of the Hybrid/Geometry Dependent Hybrid precompound, Weisskopf-Ewing evaporation, and Bohr-Wheeler fission models, and recently a Fermi-statistics break-up model (for light nuclei). The angular distribution calculation relies on the Chadwick-Oblozinsky linear momentum conservation model. Output gives residual product yields, and single and double differential cross sections for ejectiles in lab and CM frames. An option allows 1-3 particle out exclusive (ENDF format) output for all combinations of n, p, alpha channels. Product yields include estimates of isomer yields where isomers exist. Earlier versions included the ability to compute coincident particle emission correlations, and much of this coding is still in place. Recoil product double-differential cross sections are computed, but not presently written to output files. Code execution begins with an on-screen interrogation for input, with defaults available for many aspects. A menu of model options is available within the input interrogation screen. The input is saved to hard drive. Subsequent runs may use this file, use the file with line-editor changes, or begin again with the on-line interrogation.

  19. Modeling Vortex Generators in a Navier-Stokes Code

    NASA Technical Reports Server (NTRS)

    Dudek, Julianne C.

    2011-01-01

    A source-term model that simulates the effects of vortex generators was implemented into the Wind-US Navier-Stokes code. The source term added to the Navier-Stokes equations simulates the lift force that would result from a vane-type vortex generator in the flowfield. The implementation is user-friendly, requiring the user to specify only three quantities for each desired vortex generator: the range of grid points over which the force is to be applied and the planform area and angle of incidence of the physical vane. The model behavior was evaluated for subsonic flow in a rectangular duct with a single vane vortex generator, subsonic flow in an S-duct with 22 corotating vortex generators, and supersonic flow in a rectangular duct with a counter-rotating vortex-generator pair. The model was also used to successfully simulate microramps in supersonic flow by treating each microramp as a pair of vanes with opposite angles of incidence. The validation results indicate that the source-term vortex-generator model provides a useful tool for screening vortex-generator configurations and gives comparable results to solutions computed using gridded vanes.
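
    A sketch of the idea behind such a source term (not the Wind-US implementation itself): compute the lift force the physical vane would produce from the local flow state, with a thin-airfoil lift slope as an assumed closure, and distribute it as a momentum source over the cells covering the vane:

      import numpy as np

      def vg_lift_force(rho, u, vane_area, alpha, n_hat):
          # rho: local density, u: local velocity vector (m/s),
          # vane_area: vane planform area (m^2), alpha: incidence (rad),
          # n_hat: unit vector along which the lift acts (normal to the flow).
          cl = 2.0 * np.pi * np.sin(alpha)     # thin-airfoil lift coefficient (assumption)
          lift = 0.5 * rho * np.dot(u, u) * vane_area * cl
          return lift * np.asarray(n_hat)      # force vector added to the momentum equations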

  20. The Local Planner's Role Under the Proposed Model Land Development Code

    ERIC Educational Resources Information Center

    Bosselman, Fred P.

    1975-01-01

    The American Law Institute's Proposed Model Land Development Code would revise basic enabling legislation for local land development planning. The code would contain guidelines for local plans that would include both long-range and short-range elements. (Author)

  1. Modeling of Flow Blockage in a Liquid Metal-Cooled Reactor Subassembly with a Subchannel Analysis Code

    SciTech Connect

    Jeong, Hae-Yong; Ha, Kwi-Seok; Chang, Won-Pyo; Kwon, Young-Min; Lee, Yong-Bum

    2005-01-15

    The local blockage in a subassembly of a liquid metal-cooled reactor (LMR) is of importance to the plant safety because of the compact design and the high power density of the core. To analyze the thermal-hydraulic parameters in a subassembly of a liquid metal-cooled reactor with a flow blockage, the Korea Atomic Energy Research Institute has developed the MATRA-LMR-FB code. This code uses the distributed resistance model to describe the sweeping flow formed by the wire wrap around the fuel rods and to model the recirculation flow after a blockage. The hybrid difference scheme is also adopted for the description of the convective terms in the recirculating wake region of low velocity. Some state-of-the-art turbulent mixing models were implemented in the code, and the models suggested by Rehme and by Zhukov are analyzed and found to be appropriate for the description of the flow blockage in an LMR subassembly. The MATRA-LMR-FB code predicts accurately the experimental data of the Oak Ridge National Laboratory 19-pin bundle with a blockage for both the high-flow and low-flow conditions. The influences of the distributed resistance model, the hybrid difference method, and the turbulent mixing models are evaluated step by step with the experimental data. The appropriateness of the models also has been evaluated through a comparison with the results from the COMMIX code calculation. The flow blockage for the KALIMER design has been analyzed with the MATRA-LMR-FB code and is compared with the SABRE code to guarantee the design safety for the flow blockage.

  2. Design and Simulation of Material-Integrated Distributed Sensor Processing with a Code-Based Agent Platform and Mobile Multi-Agent Systems

    PubMed Central

    Bosse, Stefan

    2015-01-01

    Multi-agent systems (MAS) can be used for decentralized and self-organizing data processing in a distributed system, like a resource-constrained sensor network, enabling distributed information extraction, for example, based on pattern recognition and self-organization, by decomposing complex tasks into simpler cooperative agents. Reliable MAS-based data processing approaches can aid the material integration of structural-monitoring applications, with agent processing platforms scaled to the microchip level. The agent behavior, based on a dynamic activity-transition graph (ATG) model, is implemented with program code storing the control and the data state of an agent, which is novel. The program code can be modified by the agent itself using code morphing techniques and is capable of migrating in the network between nodes. The program code is a self-contained unit (a container) and embeds the agent data, the initialization instructions and the ATG behavior implementation. The microchip agent processing platform used for the execution of the agent code is a standalone multi-core stack machine with a zero-operand instruction format, leading to a small-sized agent program code, low system complexity and high system performance. The agent processing is token-queue-based, similar to Petri-nets. The agent platform can be implemented in software, too, offering compatibility at the operational and code level, supporting agent processing in strongly heterogeneous networks. In this work, the agent platform embedded in a large-scale distributed sensor network is simulated at the architectural level by using agent-based simulation techniques. PMID:25690550

  3. Design and simulation of material-integrated distributed sensor processing with a code-based agent platform and mobile multi-agent systems.

    PubMed

    Bosse, Stefan

    2015-02-16

    Multi-agent systems (MAS) can be used for decentralized and self-organizing data processing in a distributed system, like a resource-constrained sensor network, enabling distributed information extraction, for example, based on pattern recognition and self-organization, by decomposing complex tasks into simpler cooperative agents. Reliable MAS-based data processing approaches can aid the material integration of structural-monitoring applications, with agent processing platforms scaled to the microchip level. The agent behavior, based on a dynamic activity-transition graph (ATG) model, is implemented with program code storing the control and the data state of an agent, which is novel. The program code can be modified by the agent itself using code morphing techniques and is capable of migrating in the network between nodes. The program code is a self-contained unit (a container) and embeds the agent data, the initialization instructions and the ATG behavior implementation. The microchip agent processing platform used for the execution of the agent code is a standalone multi-core stack machine with a zero-operand instruction format, leading to a small-sized agent program code, low system complexity and high system performance. The agent processing is token-queue-based, similar to Petri-nets. The agent platform can be implemented in software, too, offering compatibility at the operational and code level, supporting agent processing in strongly heterogeneous networks. In this work, the agent platform embedded in a large-scale distributed sensor network is simulated at the architectural level by using agent-based simulation techniques.

  4. Lyo code generator: A model-based code generator for the development of OSLC-compliant tool interfaces

    NASA Astrophysics Data System (ADS)

    El-khoury, Jad

    To promote the newly emerging OSLC (Open Services for Lifecycle Collaboration) tool interoperability standard, an open source code generator has been developed that allows for the specification of OSLC-compliant tool interfaces and from which almost complete Java code of the interface can be generated. The software takes a model-based development approach to tool interoperability, with the aim of providing modeling support for the complete development cycle of a tool interface. The software targets both OSLC developers and the interoperability research community, with proven capabilities to be extended to support their corresponding needs.

  5. Strontium Adsorption and Desorption Reactions in Model Drinking Water Distribution Systems

    DTIC Science & Technology

    2014-02-04

    Strontium (Sr2+) adsorption to and desorption from iron corrosion products were examined in two model drinking water distribution systems (DWDS) ... used to control Sr2+ desorption. Keywords: calcium carbonate; drinking water distribution system; α-FeOOH; iron; strontium; XANES.

  6. The Modeling of Boattail Intrusion in a Lumped Parameter Interior Ballistic Code

    DTIC Science & Technology

    1993-08-01

    AD-A270 702. Army Research Laboratory. The Modeling of Boattail Intrusion in a Lumped Parameter Interior Ballistic Code. Frederick W. Robbins, Robert T. Puhalla, Taquan S. Stewart. ARL-TR-181, August 1993. Approved for public release; distribution is unlimited.

  7. Simulation of charge breeding of rubidium using Monte Carlo charge breeding code and generalized ECRIS model

    SciTech Connect

    Zhao, L.; Cluggish, B.; Kim, J. S.; Pardo, R.; Vondrasek, R.

    2010-02-15

    A Monte Carlo charge breeding code (MCBC) is being developed by FAR-TECH, Inc. to model the capture and charge breeding of a 1+ ion beam in an electron cyclotron resonance ion source (ECRIS) device. The ECRIS plasma is simulated using the generalized ECRIS model, which has two choices of boundary settings: the free boundary condition and the Bohm condition. The charge state distribution of the extracted beam ions is calculated by solving the steady state ion continuity equations, where the profiles of the captured ions are used as source terms. MCBC simulations of the charge breeding of Rb+ showed good agreement with recent charge breeding experiments at Argonne National Laboratory (ANL). MCBC correctly predicted the peak of the highly charged ion output under the free boundary condition; under the Bohm condition it predicted a similar charge state distribution width but a lower peak charge state. The comparisons between the simulation results and the ANL experimental measurements are presented and discussed.

  8. Atomic processes modeling of X-ray free electron laser produced plasmas using SCFLY code

    NASA Astrophysics Data System (ADS)

    Chung, H.-K.; Cho, B. I.; Ciricosta, O.; Vinko, S. M.; Wark, J. S.; Lee, R. W.

    2017-03-01

    With the development of X-ray free electron lasers (XFEL), a novel state of matter of highly transient and non-equilibrium plasma has been created in laboratories. As high intensity X-ray laser beams interact with a solid density target, electrons are ionized from inner-shell orbitals and these electrons and XFEL photons create dense and finite temperature plasmas. In order to study atomic processes in XFEL driven plasmas, the atomic kinetics model SCFLY containing an extensive set of configurations needed for solid density plasmas was applied to study atomic processes of XFEL driven systems. The code accepts the time-dependent conditions of the XFEL as input parameters, and computes time-dependent population distributions and ionization distributions self-consistently with electron temperatures and densities assuming an instantaneous equilibration of electron energies. The methods and assumptions in the atomic kinetics model and unique aspects of atomic processes in XFEL driven plasmas are described.

  9. Neutrons Flux Distributions of the Pu-Be Source and its Simulation by the MCNP-4B Code

    NASA Astrophysics Data System (ADS)

    Faghihi, F.; Mehdizadeh, S.; Hadad, K.

    The neutron fluence rate of a low-intensity Pu-Be source is measured by neutron activation analysis (NAA) of 197Au foils. In addition, the neutron fluence rate distribution versus energy is calculated using the MCNP-4B code based on the ENDF/B-V library. The theoretical simulation, together with the experimental work, is a first such exercise for the authors' group, establishing confidence in the code for further research. In the theoretical investigation, an isotropic Pu-Be source with a cylindrical volume distribution is simulated and the relative neutron fluence rate versus energy is calculated using the MCNP-4B code. The fast and thermal neutron fluence rates, measured by the NAA method and calculated with the MCNP code, are compared.

  10. Direct distribution model for regional aquatic acidification

    SciTech Connect

    Small, M.J.; Sutton, M.C.

    1986-12-01

    A model is developed to predict the regional distribution of lake acidification and its effect on fish survival. The model predicts the effect of changes in acid deposition rates on the mean and variance of the regional distribution of lake alkalinity using empirical weathering models with variable weathering factors. The regional distribution of lake alkalinity is represented by a three-parameter lognormal distribution. The regional pH distribution is derived using an explicit pH-alkalinity relationship. The predicted pH distribution is combined with a fish presence-absence relationship to predict the fraction of lakes in a region able to support fish. The model is illustrated with a set of 1014 lakes in the Adirondack Park region of New York State. Significant needs for future research on the regional aggregation of aquatic acidification models are identified.
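
    A minimal sketch of the model chain, with all parameter values purely illustrative: draw regional alkalinity from a three-parameter (shifted) lognormal, convert alkalinity to pH with an explicit CO2/bicarbonate relation, and report the fraction of lakes above a pH threshold for fish survival:

      import numpy as np
      from scipy import stats

      K_C = 10 ** -11.3     # K1 * KH * pCO2: representative carbonate constants (assumption)

      def ph_from_alkalinity(alk):
          # Explicit pH-alkalinity relation for a CO2/bicarbonate system
          # (OH- and CO3-- neglected); alk in eq/L, may be negative.
          h = 0.5 * (-alk + np.sqrt(alk ** 2 + 4.0 * K_C))
          return -np.log10(h)

      # Shifted lognormal alkalinity distribution; deposition changes would shift it.
      alk = stats.lognorm.rvs(s=1.0, loc=-50e-6, scale=100e-6, size=100_000, random_state=0)
      fraction_fishable = np.mean(ph_from_alkalinity(alk) > 5.0)  # example pH threshold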

  11. Image sequence coding using 3D scene models

    NASA Astrophysics Data System (ADS)

    Girod, Bernd

    1994-09-01

    The implicit and explicit use of 3D models for image sequence coding is discussed. For implicit use, a 3D model can be incorporated into motion compensating prediction. A scheme that estimates the displacement vector field with a rigid body motion constraint by recovering epipolar lines from an unconstrained displacement estimate and then repeating block matching along the epipolar line is proposed. Experimental results show that an improved displacement vector field can be obtained with a rigid body motion constraint. As an example for explicit use, various results with a facial animation model for videotelephony are discussed. A 13 X 16 B-spline mask can be adapted automatically to individual faces and is used to generate facial expressions based on FACS. A depth-from-defocus range camera suitable for real-time facial motion tracking is described. Finally, the real-time facial animation system `Traugott' is presented that has been used to generate several hours of broadcast video. Experiments suggest that a videophone system based on facial animation might require a transmission bitrate of 1 kbit/s or below.

  12. Modeling Vortex Generators in the Wind-US Code

    NASA Technical Reports Server (NTRS)

    Dudek, Julianne C.

    2010-01-01

    A source term model which simulates the effects of vortex generators was implemented into the Wind-US Navier Stokes code. The source term added to the Navier-Stokes equations simulates the lift force which would result from a vane-type vortex generator in the flowfield. The implementation is user-friendly, requiring the user to specify only three quantities for each desired vortex generator: the range of grid points over which the force is to be applied and the planform area and angle of incidence of the physical vane. The model behavior was evaluated for subsonic flow in a rectangular duct with a single vane vortex generator, supersonic flow in a rectangular duct with a counterrotating vortex generator pair, and subsonic flow in an S-duct with 22 co-rotating vortex generators. The validation results indicate that the source term vortex generator model provides a useful tool for screening vortex generator configurations and gives comparable results to solutions computed using a gridded vane.

  13. An improved coding method of quantum key distribution protocols based on Fibonacci-valued OAM entangled states

    NASA Astrophysics Data System (ADS)

    Lai, Hong; Luo, Ming-Xing; Zhan, Cheng; Pieprzyk, Josef; Orgun, Mehmet A.

    2017-09-01

    We propose an improved coding method for quantum key distribution (QKD) protocols, based on a recently proposed QKD protocol using Fibonacci-valued OAM entangled states. Specifically, we define a new class of Fibonacci-matrix coding and a Fibonacci-matrix representation and show how they can be used to extend and improve the original protocols. Compared with the original protocols, our protocol not only greatly improves the encoding efficiency but also provides verifiability.
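
    For context, the classical Fibonacci Q-matrix coding that Fibonacci-matrix representations build on (a generic sketch, not the protocol's actual encoding): a 2x2 message matrix is multiplied by Q^n, and because det(Q^n) = (-1)^n the code matrix can be decoded, and errors detected, with exact integer arithmetic:

      import numpy as np

      Q = np.array([[1, 1], [1, 0]], dtype=object)   # Q^n holds F(n+1), F(n), F(n-1)

      def mat_pow(A, n):
          R = np.eye(2, dtype=object)
          for _ in range(n):
              R = R @ A
          return R

      def fib_encode(M, n):
          return M @ mat_pow(Q, n)                   # code matrix E = M * Q^n

      def fib_decode(E, n):
          Qn = mat_pow(Q, n)
          det = Qn[0, 0] * Qn[1, 1] - Qn[0, 1] * Qn[1, 0]      # equals (-1)^n
          inv = det * np.array([[Qn[1, 1], -Qn[0, 1]],
                                [-Qn[1, 0], Qn[0, 0]]], dtype=object)
          return E @ inv                             # exact inverse since det = +/-1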

  14. Modeling steady sea water intrusion with single-density groundwater codes.

    PubMed

    Bakker, Mark; Schaars, Frans

    2013-01-01

    Steady interface flow in heterogeneous aquifer systems is simulated with single-density groundwater codes by using transformed values for the hydraulic conductivity and thickness of the aquifers and aquitards. For example, unconfined interface flow may be simulated with a transformed model by setting the base of the aquifer to sea level and by multiplying the hydraulic conductivity by 41 (for a sea water density of 1025 kg/m^3). Similar transformations are derived for unconfined interface flow with a finite aquifer base and for confined multi-aquifer interface flow. The head and flow distribution are identical in the transformed and original model domains. The location of the interface is obtained through application of the Ghyben-Herzberg formula. The transformed problem may be solved with a single-density code that is able to simulate unconfined flow where the saturated thickness is a linear function of the head and, depending on the boundary conditions, the code needs to be able to simulate dry cells where the saturated thickness is zero. For multi-aquifer interface flow, an additional requirement is that the code must be able to handle vertical leakage in situations where flow in an aquifer is unconfined while there is also flow in the aquifer directly above it. Specific examples and limitations are discussed for the application of the approach with MODFLOW. Comparisons between exact interface flow solutions and MODFLOW solutions of the transformed model domain show good agreement. The presented approach is an efficient alternative to running transient sea water intrusion models until steady state is reached.
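
    A minimal sketch of the transformation for the unconfined case described above (sea water density 1025 kg/m^3):

      # Freshwater and sea water densities (kg/m^3).
      rho_f, rho_s = 1000.0, 1025.0

      nu = rho_f / (rho_s - rho_f)         # = 40: Ghyben-Herzberg ratio
      k_factor = rho_s / (rho_s - rho_f)   # = 41: multiply hydraulic conductivity by this
                                           #       after setting the aquifer base to sea level

      def interface_depth(head):
          # Ghyben-Herzberg: depth of the fresh/salt interface below sea
          # level for a simulated freshwater head (m above sea level).
          return nu * head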

  15. Clinical CT-based calculations of dose and positron emitter distributions in proton therapy using the FLUKA Monte Carlo code

    NASA Astrophysics Data System (ADS)

    Parodi, K.; Ferrari, A.; Sommerer, F.; Paganetti, H.

    2007-07-01

    Clinical investigations on post-irradiation PET/CT (positron emission tomography/computed tomography) imaging for in vivo verification of treatment delivery and, in particular, beam range in proton therapy are underway at Massachusetts General Hospital (MGH). Within this project, we have developed a Monte Carlo framework for CT-based calculation of dose and irradiation-induced positron emitter distributions. Initial proton beam information is provided by a separate Geant4 Monte Carlo simulation modelling the treatment head. Particle transport in the patient is performed in the CT voxel geometry using the FLUKA Monte Carlo code. The implementation uses a discrete number of different tissue types with composition and mean density deduced from the CT scan. Scaling factors are introduced to account for the continuous Hounsfield unit dependence of the mass density and of the relative stopping power ratio to water used by the treatment planning system (XiO (Computerized Medical Systems Inc.)). Resulting Monte Carlo dose distributions are generally found in good correspondence with calculations of the treatment planning program, except in a few cases (e.g. in the presence of air/tissue interfaces). Whereas dose is computed using standard FLUKA utilities, positron emitter distributions are calculated by internally combining proton fluence with experimental and evaluated cross-sections yielding 11C, 15O, 14O, 13N, 38K and 30P. Simulated positron emitter distributions yield PET images in good agreement with measurements. In this paper, we describe in detail the specific implementation of the FLUKA calculation framework, which may be easily adapted to handle arbitrary phase spaces of proton beams delivered by other facilities or include more reaction channels based on additional cross-section data. Further, we demonstrate the effects of different acquisition time regimes (e.g., PET imaging during or after irradiation) on the intensity and spatial distribution of the irradiation

  16. Parallel execution of a three-dimensional, chemically reacting, Navier-Stokes code on distributed-memory machines

    NASA Technical Reports Server (NTRS)

    Otto, John C.

    1993-01-01

    This paper describes the parallel version of the three-dimensional, chemically reacting, computational fluid dynamics (CFD) code, SPARK. This work was performed on the Intel iPSC/860-based parallel computers. The SPARK code utilizes relatively simple explicit numerical algorithms, but models complex chemical reactions. The code solves the equations over a regular structured mesh, so a simple domain decomposition is used to assign work to the individual processors. The explicit nature of the algorithm, combined with the computational intensity of the chemistry calculations, results in a very low communication-to-computation ratio when compared to typical CFD codes. The efficiency of the parallel code is examined and shown to be about 65 percent when the problem size is scaled with the number of processors. Two low-angle wall-jet injection cases are solved to demonstrate the capability of the parallel code for solving large problems efficiently.

  17. Subgrid Combustion Modeling for the Next Generation National Combustion Code

    NASA Technical Reports Server (NTRS)

    Menon, Suresh; Sankaran, Vaidyanathan; Stone, Christopher

    2003-01-01

    In the first year of this research, a subgrid turbulent mixing and combustion methodology developed earlier at Georgia Tech was provided to researchers at NASA/GRC for incorporation into the next generation National Combustion Code (called NCCLES hereafter). A key feature of this approach is that scalar mixing and combustion processes are simulated within the LES grid using a stochastic 1D model. The subgrid simulation approach recovers locally molecular diffusion and reaction kinetics exactly without requiring closure and thus provides an attractive feature for simulating complex, highly turbulent reacting flows of interest. Data acquisition algorithms and statistical analysis strategies and routines to analyze NCCLES results have also been provided to NASA/GRC. The overall goal of this research is to systematically develop and implement LES capability into the current NCC. For this purpose, issues regarding initialization and running LES are also addressed in the collaborative effort. In parallel to this technology transfer effort (which is continuously ongoing), research has also been underway at Georgia Tech to enhance the LES capability to tackle more complex flows. In particular, the subgrid scalar mixing and combustion method has been evaluated in three distinctly different flow fields in order to demonstrate its generality: (a) flame-turbulence interactions using premixed combustion, (b) spatially evolving supersonic mixing layers, and (c) temporal single- and two-phase mixing layers. The configurations chosen are such that they can be implemented in NCCLES and used to evaluate the ability of the new code. Future development and validation will be in spray combustion in gas turbine engines and supersonic scalar mixing.

  18. Supersonic propulsion simulation by incorporating component models in the large perturbation inlet (LAPIN) computer code

    NASA Technical Reports Server (NTRS)

    Cole, Gary L.; Richard, Jacques C.

    1991-01-01

    An approach to simulating the internal flows of supersonic propulsion systems is presented. The approach is based on a fairly simple modification of the Large Perturbation Inlet (LAPIN) computer code. LAPIN uses a quasi-one dimensional, inviscid, unsteady formulation of the continuity, momentum, and energy equations. The equations are solved using a shock capturing, finite difference algorithm. The original code, developed for simulating supersonic inlets, includes engineering models of unstart/restart, bleed, bypass, and variable duct geometry, by means of source terms in the equations. The source terms also provide a mechanism for incorporating, with the inlet, propulsion system components such as compressor stages, combustors, and turbine stages. This requires each component to be distributed axially over a number of grid points. Because of the distributed nature of such components, this representation should be more accurate than a lumped parameter model. Components can be modeled by performance map(s), which in turn are used to compute the source terms. The general approach is described. Then, simulation of a compressor/fan stage is discussed to show the approach in detail.

  19. PHASE-OTI: A pre-equilibrium model code for nuclear reactions calculations

    NASA Astrophysics Data System (ADS)

    Elmaghraby, Elsayed K.

    2009-09-01

    The present work focuses on a pre-equilibrium nuclear reaction code (based on the one, two and infinity hypothesis of pre-equilibrium nuclear reactions). In the PHASE-OTI code, pre-equilibrium decays are assumed to be single nucleon emissions, and the statistical probabilities come from the independence of nuclei decay. The code has proved to be a good tool to provide predictions of energy-differential cross sections. The probability of emission was calculated statistically using the bases of the hybrid model and the exciton model; however, more precise depletion factors were used in the calculations. The present calculations were restricted to nucleon-nucleon interactions and one-nucleon emission.
    Program summary
    Program title: PHASE-OTI
    Catalogue identifier: AEDN_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDN_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 5858
    No. of bytes in distributed program, including test data, etc.: 149 405
    Distribution format: tar.gz
    Programming language: Fortran 77
    Computer: Pentium 4 and Centrino Duo
    Operating system: MS Windows
    RAM: 128 MB
    Classification: 17.12
    Nature of problem: Calculation of the differential cross section for nucleon-induced nuclear reactions in the framework of a pre-equilibrium emission model.
    Solution method: Single neutron emission was treated by assuming the reaction occurs in successive steps; each step is called a phase because of the phase-transition nature of the theory. The probability of emission was calculated statistically using the bases of the hybrid model [1] and the exciton model [2]; however, a more precise depletion factor was used in the calculations. The exciton configuration used in the code is that described in earlier work [3].
    Restrictions: The program is restricted to single nucleon emission and nucleon

  20. Validated modeling of distributed energy resources at distribution voltages : LDRD project 38672.

    SciTech Connect

    Ralph, Mark E.; Ginn, Jerry W.

    2004-03-01

    A significant barrier to the deployment of distributed energy resources (DER) onto the power grid is uncertainty on the part of utility engineers regarding impacts of DER on their distribution systems. Because of the many possible combinations of DER and local power system characteristics, these impacts can most effectively be studied by computer simulation. The goal of this LDRD project was to develop and experimentally validate models of transient and steady state source behavior for incorporation into utility distribution analysis tools. Development of these models had not been prioritized either by the distributed-generation industry or by the inverter industry. A functioning model of a selected inverter-based DER was developed in collaboration with both the manufacturer and industrial power systems analysts. The model was written in the PSCAD simulation language, a variant of the ElectroMagnetic Transients Program (EMTP), a code that is widely used and accepted by utilities. A stakeholder team was formed and a methodology was established to address the problem. A list of detailed DER/utility interaction concerns was developed and prioritized. The list indicated that the scope of the problem significantly exceeded resources available for this LDRD project. As this work progresses under separate funding, the model will be refined and experimentally validated. It will then be incorporated in utility distribution analysis tools and used to study a variety of DER issues. The key next step will be design of the validation experiments.

  1. Distance distribution in configuration-model networks

    NASA Astrophysics Data System (ADS)

    Nitzan, Mor; Katzav, Eytan; Kühn, Reimer; Biham, Ofer

    2016-06-01

    We present analytical results for the distribution of shortest path lengths between random pairs of nodes in configuration model networks. The results, which are based on recursion equations, are shown to be in good agreement with numerical simulations for networks with degenerate, binomial, and power-law degree distributions. The mean, mode, and variance of the distribution of shortest path lengths are also evaluated. These results provide expressions for central measures and dispersion measures of the distribution of shortest path lengths in terms of moments of the degree distribution, illuminating the connection between the two distributions.
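
    The analytical distribution can be checked numerically; a brief sketch using networkx, with an illustrative Poisson degree sequence:

      import collections
      import numpy as np
      import networkx as nx

      rng = np.random.default_rng(0)
      degrees = rng.poisson(4, size=2000)
      if degrees.sum() % 2:
          degrees[0] += 1                       # the degree sum must be even
      G = nx.Graph(nx.configuration_model(degrees.tolist(), seed=0))  # collapse multi-edges
      G.remove_edges_from(nx.selfloop_edges(G))

      hist = collections.Counter()
      for _, dists in nx.all_pairs_shortest_path_length(G):
          hist.update(dists.values())
      del hist[0]                               # drop zero-length self distances
      total = sum(hist.values())
      dspl = {d: c / total for d, c in sorted(hist.items())}  # P(shortest path length = d)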

  2. Mutation-selection models of coding sequence evolution with site-heterogeneous amino acid fitness profiles

    PubMed Central

    Rodrigue, Nicolas; Philippe, Hervé; Lartillot, Nicolas

    2010-01-01

    Modeling the interplay between mutation and selection at the molecular level is key to evolutionary studies. To this end, codon-based evolutionary models have been proposed as pertinent means of studying long-range evolutionary patterns and are widely used. However, these approaches have not yet consolidated results from amino acid level phylogenetic studies showing that selection acting on proteins displays strong site-specific effects, which translate into heterogeneous amino acid propensities across the columns of alignments; related codon-level studies have instead focused on either modeling a single selective context for all codon columns, or a separate selective context for each codon column, with the former strategy deemed too simplistic and the latter deemed overparameterized. Here, we integrate recent developments in nonparametric statistical approaches to propose a probabilistic model that accounts for the heterogeneity of amino acid fitness profiles across the coding positions of a gene. We apply the model to a dozen real protein-coding gene alignments and find it to produce biologically plausible inferences, for instance, as pertaining to site-specific amino acid constraints, as well as distributions of scaled selection coefficients. In their account of mutational features as well as the heterogeneous regimes of selection at the amino acid level, the modeling approaches studied here can form a backdrop for several extensions, accounting for other selective features, for variable population size, or for subtleties of mutational features, all with parameterizations couched within population-genetic theory. PMID:20176949
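
    The ingredient this family of models shares is the mutation-selection substitution rate: a neutral mutation rate modulated by the relative fixation probability of the scaled selection coefficient (the Halpern-Bruno form). A generic sketch, not the paper's full site-heterogeneous machinery:

      import numpy as np

      def mutsel_rate(mu_ij, log_fitness_i, log_fitness_j):
          # Rate of substitution from codon i to codon j: the neutral rate
          # mu_ij times S / (1 - exp(-S)), with S the scaled selection
          # coefficient of the exchange.
          S = log_fitness_j - log_fitness_i
          if abs(S) < 1e-8:
              return mu_ij                      # neutral limit: the factor tends to 1
          return mu_ij * S / (1.0 - np.exp(-S))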

  3. Modeling of MHD edge containment in strip casting with ELEKTRA and CaPS-EM codes

    SciTech Connect

    Chang, F. C.

    2000-01-12

    This paper presents modeling studies of magnetohydrodynamics analysis in twin-roll casting. Argonne National Laboratory (ANL) and ISPAT Inland Inc. (Inland), formerly Inland Steel Co., have worked together to develop a three-dimensional (3-D) computer model that can predict eddy currents, fluid flows, and liquid metal containment of an electromagnetic (EM) edge containment device. The model was verified by comparing predictions with experimental results of liquid metal containment and fluid flow in EM edge dams (EMDs) that were designed at Inland for twin-roll casting. This mathematical model can significantly shorten casting research on the use of EM fields for liquid metal containment and control. The model can optimize the EMD design so it is suitable for application, and minimize expensive time-consuming full-scale testing. Numerical simulation was performed by coupling a 3-D finite-element EM code (ELEKTRA) and a 3-D finite-difference fluids code (CaPS-EM) to solve heat transfer, fluid flow, and turbulence transport in a casting process that involves EM fields. ELEKTRA can predict the eddy-current distribution and the EM forces in complex geometries. CaPS-EM can model fluid flows with free surfaces. The computed 3-D magnetic fields and induced eddy currents in ELEKTRA are used as input to temperature- and flow-field computations in CaPS-EM. Results of the numerical simulation compared well with measurements obtained from both static and dynamic tests.

  4. Methodology Using MELCOR Code to Model Proposed Hazard Scenario

    SciTech Connect

    Gavin Hawkley

    2010-07-01

    This study demonstrates a methodology for using the MELCOR code to model a proposed hazard scenario within a building containing radioactive powder, and the subsequent evaluation of a leak path factor (LPF), the amount of respirable material that escapes a facility into the outside environment, implicit in the scenario. This LPF evaluation analyzes the basis and applicability of an assumed standard multiplication of 0.5 × 0.5 (in which 0.5 represents the amount of material assumed to leave one area and enter another) for calculating an LPF value. The outside release depends upon the ventilation/filtration system, both filtered and unfiltered, and upon other pathways from the building, such as doorways (both open and closed). The study shows how the multiple LPFs from the building interior can be evaluated in a combinatory process in which a total LPF is calculated, thus addressing the assumed multiplication and allowing for the designation and assessment of a respirable source term (ST) for later consequence analysis, in which the propagation of material released into the atmosphere can be modeled, the dose received by a receptor placed downwind can be estimated, and the distance adjusted to maintain such exposures as low as reasonably achievable (ALARA). The study also briefly addresses particle characteristics that affect atmospheric particle dispersion and compares this dispersion with the LPF methodology.
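
    The combinatory evaluation amounts to multiplying the LPF of each leakage stage along every release pathway and summing the pathway contributions; a toy sketch with invented numbers (not values from the study):

      # Each pathway: fraction of airborne material taking it, and the
      # per-stage leak path factors along it (all values illustrative).
      pathways = {
          "hvac_filtered": {"fraction": 0.8, "stage_lpfs": [0.5, 0.001]},
          "doorway_open":  {"fraction": 0.2, "stage_lpfs": [0.5, 0.5]},
      }

      total_lpf = 0.0
      for path in pathways.values():
          lpf = 1.0
          for stage_lpf in path["stage_lpfs"]:
              lpf *= stage_lpf                  # stages along one pathway multiply
          total_lpf += path["fraction"] * lpf   # pathways add, weighted by their share
      # respirable release = material made airborne indoors * total_lpf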

  5. Evaluation of turbulence models in the PARC code for transonic diffuser flows

    NASA Technical Reports Server (NTRS)

    Georgiadis, N. J.; Drummond, J. E.; Leonard, B. P.

    1994-01-01

    Flows through a transonic diffuser were investigated with the PARC code using five turbulence models to determine the effects of turbulence model selection on flow prediction. Three of the turbulence models were algebraic models: Thomas (the standard algebraic turbulence model in PARC), Baldwin-Lomax, and Modified Mixing Length-Thomas (MMLT). The other two models were the low Reynolds number k-epsilon models of Chien and Speziale. Three diffuser flows, referred to as the no-shock, weak-shock, and strong-shock cases, were calculated with each model to conduct the evaluation. Pressure distributions, velocity profiles, locations of shocks, and maximum Mach numbers in the duct were the flow quantities compared. Overall, the Chien k-epsilon model was the most accurate of the five models when considering results obtained for all three cases. However, the MMLT model provided solutions as accurate as the Chien model for the no-shock and the weak-shock cases, at a substantially lower computational cost (measured in CPU time required to obtain converged solutions). The strong shock flow, which included a region of shock-induced flow separation, was only predicted well by the two k-epsilon models.

  6. Coding conventions and principles for a National Land-Change Modeling Framework

    USGS Publications Warehouse

    Donato, David I.

    2017-07-14

    This report establishes specific rules for writing computer source code for use with the National Land-Change Modeling Framework (NLCMF). These specific rules consist of conventions and principles for writing code primarily in the C and C++ programming languages. Collectively, these coding conventions and coding principles create an NLCMF programming style. In addition to detailed naming conventions, this report provides general coding conventions and principles intended to facilitate the development of high-performance software implemented with code that is extensible, flexible, and interoperable. Conventions for developing modular code are explained in general terms and also enabled and demonstrated through the appended templates for C++ base source-code and header files. The NLCMF limited-extern approach to module structure, code inclusion, and cross-module access to data is both explained in the text and then illustrated through the module templates. Advice on the use of global variables is provided.

  7. A New Approach to Model Pitch Perception Using Sparse Coding

    PubMed Central

    Furst, Miriam; Barak, Omri

    2017-01-01

    Our acoustical environment abounds with repetitive sounds, some of which are related to pitch perception. It is still unknown how the auditory system, in processing these sounds, relates a physical stimulus and its percept. Since, in mammals, all auditory stimuli are conveyed into the nervous system through the auditory nerve (AN) fibers, a model should explain the perception of pitch as a function of this particular input. However, pitch perception is invariant to certain features of the physical stimulus. For example, a missing fundamental stimulus with resolved or unresolved harmonics, or low- and high-level amplitude stimuli with the same spectral content: these all give rise to the same percept of pitch. In contrast, the AN representations for these different stimuli are not invariant to these effects. In fact, due to saturation and non-linearity of both cochlear and inner hair cell responses, these differences are enhanced by the AN fibers. Thus there is a difficulty in explaining how the pitch percept arises from the activity of the AN fibers. We introduce a novel approach for extracting pitch cues from the AN population activity for a given arbitrary stimulus. The method is based on a technique known as sparse coding (SC). It is the representation of pitch cues by a few spatiotemporal atoms (templates) from among a large set of possible ones (a dictionary). The amount of activity of each atom is represented by a non-zero coefficient, analogous to an active neuron. Such a technique has been successfully applied to other modalities, particularly vision. The model is composed of a cochlear model, an SC processing unit, and a harmonic sieve. We show that the model copes with different pitch phenomena: extracting resolved and non-resolved harmonics, missing fundamental pitches, stimuli with both high and low amplitudes, iterated rippled noises, and recorded musical instruments. PMID:28099436
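
    The sparse coding step itself is easy to illustrate: a greedy matching-pursuit sketch, with a random dictionary standing in for the spatiotemporal atoms. This shows only the generic SC idea, not the authors' cochlear model, AN stage, or harmonic sieve.

      # Minimal matching-pursuit sketch of sparse coding over a dictionary.
      import numpy as np

      def matching_pursuit(x, D, n_atoms=5):
          """Greedy SC: D has unit-norm atoms in its columns."""
          residual = x.copy()
          coeffs = np.zeros(D.shape[1])
          for _ in range(n_atoms):
              scores = D.T @ residual          # correlation with every atom
              k = np.argmax(np.abs(scores))    # best-matching atom
              coeffs[k] += scores[k]
              residual -= scores[k] * D[:, k]  # remove its contribution
          return coeffs, residual

      rng = np.random.default_rng(0)
      D = rng.standard_normal((256, 1024))
      D /= np.linalg.norm(D, axis=0)           # unit-norm atoms
      x = 2.0 * D[:, 3] - 1.5 * D[:, 700]      # signal built from two atoms
      c, r = matching_pursuit(x, D, n_atoms=2)
      print(np.nonzero(c)[0], np.linalg.norm(r))   # recovers atoms 3 and 700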

  8. A New Approach to Model Pitch Perception Using Sparse Coding.

    PubMed

    Barzelay, Oded; Furst, Miriam; Barak, Omri

    2017-01-01

    Our acoustical environment abounds with repetitive sounds, some of which are related to pitch perception. It is still unknown how the auditory system, in processing these sounds, relates a physical stimulus and its percept. Since, in mammals, all auditory stimuli are conveyed into the nervous system through the auditory nerve (AN) fibers, a model should explain the perception of pitch as a function of this particular input. However, pitch perception is invariant to certain features of the physical stimulus. For example, a missing fundamental stimulus with resolved or unresolved harmonics, or low- and high-level amplitude stimuli with the same spectral content: these all give rise to the same percept of pitch. In contrast, the AN representations for these different stimuli are not invariant to these effects. In fact, due to saturation and non-linearity of both cochlear and inner hair cell responses, these differences are enhanced by the AN fibers. Thus there is a difficulty in explaining how the pitch percept arises from the activity of the AN fibers. We introduce a novel approach for extracting pitch cues from the AN population activity for a given arbitrary stimulus. The method is based on a technique known as sparse coding (SC). It is the representation of pitch cues by a few spatiotemporal atoms (templates) from among a large set of possible ones (a dictionary). The amount of activity of each atom is represented by a non-zero coefficient, analogous to an active neuron. Such a technique has been successfully applied to other modalities, particularly vision. The model is composed of a cochlear model, an SC processing unit, and a harmonic sieve. We show that the model copes with different pitch phenomena: extracting resolved and non-resolved harmonics, missing fundamental pitches, stimuli with both high and low amplitudes, iterated rippled noises, and recorded musical instruments.

  9. Impact of the hard-coded parameters on the hydrologic fluxes of the land surface model Noah-MP

    NASA Astrophysics Data System (ADS)

    Cuntz, Matthias; Mai, Juliane; Samaniego, Luis; Clark, Martyn; Wulfmeyer, Volker; Attinger, Sabine; Thober, Stephan

    2016-04-01

    Land surface models incorporate a large number of processes, described by physical, chemical and empirical equations. The process descriptions contain a number of parameters that can be soil or plant type dependent and are typically read from tabulated input files. Land surface models may have, however, process descriptions that contain fixed, hard-coded numbers in the computer code, which are not identified as model parameters. Here we searched for hard-coded parameters in the computer code of the land surface model Noah with multiple process options (Noah-MP) to assess the importance of the fixed values on restricting the model's agility during parameter estimation. We found 139 hard-coded values in all Noah-MP process options, which are mostly spatially constant values. This is in addition to the 71 standard parameters of Noah-MP, which mostly get distributed spatially by given vegetation and soil input maps. We performed a Sobol' global sensitivity analysis of Noah-MP to variations of the standard and hard-coded parameters for a specific set of process options. 42 standard parameters and 75 hard-coded parameters were active with the chosen process options. The sensitivities of the hydrologic output fluxes latent heat and total runoff as well as their component fluxes were evaluated. These sensitivities were evaluated at twelve catchments of the Eastern United States with very different hydro-meteorological regimes. Noah-MP's hydrologic output fluxes are sensitive to two thirds of its standard parameters. The most sensitive parameter is, however, a hard-coded value in the formulation of soil surface resistance for evaporation, which proved to be oversensitive in other land surface models as well. Surface runoff is sensitive to almost all hard-coded parameters of the snow processes and the meteorological inputs. These parameter sensitivities diminish in total runoff. Assessing these parameters in model calibration would require detailed snow observations or the
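
    A Sobol' analysis of this kind can be sketched with the SALib package; the toy model, parameter names, and bounds below are invented stand-ins for Noah-MP and its parameter tables.

      # Sobol' sensitivity sketch with SALib; toy model in place of Noah-MP.
      import numpy as np
      from SALib.sample import saltelli
      from SALib.analyze import sobol

      problem = {
          "num_vars": 3,
          "names": ["soil_resistance", "snow_albedo", "leaf_area_scale"],
          "bounds": [[1.0, 100.0], [0.4, 0.9], [0.5, 2.0]],
      }

      X = saltelli.sample(problem, 1024)        # Saltelli cross-sampling scheme

      def toy_flux(p):
          # Stand-in for a latent-heat flux; strongly nonlinear in parameter 0.
          return np.exp(-p[:, 0] / 20.0) + 0.3 * p[:, 1] + 0.1 * p[:, 1] * p[:, 2]

      Y = toy_flux(X)
      Si = sobol.analyze(problem, Y)            # first-order and total indices
      print(dict(zip(problem["names"], Si["S1"].round(3))))
      print(dict(zip(problem["names"], Si["ST"].round(3))))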

  10. Source Term Model for Steady Micro Jets in a Navier-Stokes Computer Code

    NASA Technical Reports Server (NTRS)

    Waithe, Kenrick A.

    2005-01-01

    A source term model for steady micro jets was implemented into a non-proprietary Navier-Stokes computer code, OVERFLOW. The source term models the mass flow and momentum created by a steady blowing micro jet. The model is obtained by adding the momentum and mass flow created by the jet to the Navier-Stokes equations. The model was tested by comparing with data from numerical simulations of a single, steady micro jet on a flat plate in two and three dimensions. The source term model predicted the velocity distribution well compared to the two-dimensional plate using a steady mass flow boundary condition, which was used to simulate a steady micro jet. The model was also compared to two three-dimensional flat plate cases using a steady mass flow boundary condition to simulate a steady micro jet. The three-dimensional comparison included a case with a grid generated to capture the circular shape of the jet and a case without a grid generated for the micro jet. The case without the jet grid mimics the application of the source term. The source term model compared well with both of the three-dimensional cases. Comparisons of velocity distribution were made before and after the jet and Mach and vorticity contours were examined. The source term model allows a researcher to quickly investigate different locations of individual or several steady micro jets. The researcher is able to conduct a preliminary investigation with minimal grid generation and computational time.
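
    The idea of a source term model can be illustrated simply: the jet's mass flow and momentum are added to the discretized conservation equations in the cells containing the orifice. The sketch below is a conceptual illustration with hypothetical fields and values, not the OVERFLOW implementation.

      # Conceptual micro-jet source term: add mass and momentum to orifice cells.
      import numpy as np

      def add_jet_source(res_mass, res_mom, cells, mdot, v_jet, cell_vol):
          """Add jet mass flow mdot [kg/s] and one momentum component,
          spread uniformly over the orifice cells, as volumetric sources."""
          n = len(cells)
          for (i, j) in cells:
              res_mass[i, j] += mdot / (n * cell_vol[i, j])          # kg/(m^3 s)
              res_mom[i, j] += mdot * v_jet / (n * cell_vol[i, j])   # N/m^3
          return res_mass, res_mom

      res_mass = np.zeros((10, 10))            # hypothetical residual fields
      res_mom = np.zeros((10, 10))
      vol = np.full((10, 10), 1.0e-6)          # m^3 per cell
      add_jet_source(res_mass, res_mom, [(5, 0)], mdot=1.0e-5, v_jet=50.0,
                     cell_vol=vol)
      print(res_mass[5, 0], res_mom[5, 0])     # 10.0 kg/(m^3 s), 500.0 N/m^3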

  11. Coding coarse grained polymer model for LAMMPS and its application to polymer crystallization

    NASA Astrophysics Data System (ADS)

    Luo, Chuanfu; Sommer, Jens-Uwe

    2009-08-01

    We present a patch code for LAMMPS to implement a coarse grained (CG) model of poly(vinyl alcohol) (PVA). LAMMPS is a powerful molecular dynamics (MD) simulator developed at Sandia National Laboratories. Our patch code implements a tabulated angular potential and a Lennard-Jones 9-6 (LJ96) style interaction for PVA. Benefiting from the excellent parallel efficiency of LAMMPS, our patch code is suitable for large-scale simulations. This CG-PVA code is used to study polymer crystallization, which is a long-standing unsolved problem in polymer physics. By using parallel computing, cooling and heating processes for long chains are simulated. The results show that chain-folded structures resembling the lamellae of polymer crystals are formed during the cooling process. The evolution of the static structure factor during the crystallization transition indicates that long-range density order appears before local crystalline packing. This is consistent with some experimental observations by small/wide angle X-ray scattering (SAXS/WAXS). During the heating process, it is found that the crystalline regions keep growing until they are fully melted, which is confirmed by the evolution both of the static structure factor and of the average stem length formed by the chains. This two-stage behavior indicates that melting of polymer crystals is far from thermodynamic equilibrium. Our results concur with various experiments. It is the first time that such growth/reorganization behavior has been clearly observed by MD simulations. Our code can easily be used to model other types of polymers by providing a file containing the tabulated angle potential data and a set of appropriate parameters.
    Program summary
    Program title: lammps-cgpva
    Catalogue identifier: AEDE_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDE_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GNU's GPL
    No. of lines in distributed program
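
    Generating the tabulated angle-potential file that such a patch consumes might look like the sketch below. The section layout (keyword, "N <points>", then rows of index, angle, energy, force) follows the LAMMPS angle_style table format as commonly documented, and the double-well potential is a toy; verify the format against the LAMMPS manual before use.

      # Write a toy tabulated angular potential in a LAMMPS-style table layout.
      import numpy as np

      theta = np.linspace(0.0, 180.0, 181)              # degrees
      energy = 0.001 * (theta - 95.0) ** 2 \
               - 2.0 * np.exp(-((theta - 180.0) / 15.0) ** 2)   # toy double well
      force = -np.gradient(energy, theta)               # -dE/dtheta

      with open("cg_pva_angle.table", "w") as f:
          f.write("# toy CG angular potential\nCG_PVA\nN 181\n\n")
          for i, (t, e, fo) in enumerate(zip(theta, energy, force), start=1):
              f.write(f"{i} {t:.2f} {e:.6f} {fo:.6f}\n")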

  12. Models for the hotspot distribution

    NASA Technical Reports Server (NTRS)

    Jurdy, Donna M.; Stefanick, Michael

    1990-01-01

    Published hotspot catalogs all show a hemispheric concentration beyond what can be expected by chance. Cumulative distributions about the center of concentration are described by a power law with a fractal dimension closer to 1 than 2. Random sets of the corresponding sizes do not show this effect. A simple shift of the random sets away from a point would produce distributions similar to those of hotspot sets. The possible relation of the hotspots to the locations of ridges and subduction zones is tested using large sets of randomly-generated points to estimate areas within given distances of the plate boundaries. The probability of finding the observed number of hotspots within 10 deg of the ridges is about what is expected.
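
    The Monte Carlo area estimate behind such a test can be sketched in a few lines: sample uniform points on the sphere, measure the fraction lying within 10 degrees of the boundary set, and compare the observed count with a binomial tail. The ridge geometry and the counts below are invented for illustration.

      # Monte Carlo estimate of the sphere fraction near a boundary set.
      import numpy as np
      from scipy.stats import binom

      rng = np.random.default_rng(1)

      def random_points(n):
          """Uniformly distributed points on the unit sphere."""
          v = rng.standard_normal((n, 3))
          return v / np.linalg.norm(v, axis=1, keepdims=True)

      ridge = random_points(200)                 # stand-in boundary sample
      pts = random_points(20000)
      cos_nearest = np.clip(pts @ ridge.T, -1.0, 1.0).max(axis=1)
      p_near = np.mean(np.degrees(np.arccos(cos_nearest)) <= 10.0)

      n_hot, n_near = 50, 30                     # hypothetical counts
      p_value = binom.sf(n_near - 1, n_hot, p_near)   # P(X >= n_near)
      print(f"area fraction = {p_near:.3f}, tail probability = {p_value:.2e}")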

  13. Thermal-hydraulic characteristics of a Westinghouse Model 51 steam generator. Volume 2. Appendix A, numerical results. Interim report. [CALIPSOS code numerical data

    SciTech Connect

    Fanselau, R.W.; Thakkar, J.G.; Hiestand, J.W.; Cassell, D.

    1981-03-01

    The Comparative Thermal-Hydraulic Evaluation of Steam Generators program represents an analytical investigation of the thermal-hydraulic characteristics of four PWR steam generators. The analytical tool utilized in this investigation is the CALIPSOS code, a three-dimensional flow distribution code. This report presents the steady state thermal-hydraulic characteristics on the secondary side of a Westinghouse Model 51 steam generator. Details of the CALIPSOS model with accompanying assumptions, operating parameters, and transport correlations are identified. Comprehensive graphical and numerical results are presented to facilitate the desired comparison with other steam generators analyzed by the same flow distribution code.

  14. Semantic-preload video model based on VOP coding

    NASA Astrophysics Data System (ADS)

    Yang, Jianping; Zhang, Jie; Chen, Xiangjun

    2013-03-01

    In recent years, to reduce the semantic gap that exists between the high-level semantics and the low-level features of video as humans understand images or video, most efforts have tried video annotation downstream of the signal, i.e., attaching labels to content already in a video database. Few people have pursued the alternative idea: use limited interaction and comprehensive segmentation (including optical technologies) at the front end of video-information collection (i.e., the video camera), together with video semantic-analysis technology and corresponding concept sets (i.e., an ontology) belonging to a certain domain, as well as the story shooting script and the task description of scene shooting; then apply semantic descriptions at different levels to enrich the attributes of video objects and image regions, thereby forming a new video model based on Video Object Plane (VOP) coding. This model has potentially intelligent features, carries a large amount of metadata, and embeds intermediate-level semantic concepts into every object. This paper focuses on the latter approach and presents a framework for the new video model, provisionally named the Semantic-Preload Video Model (SPVM or VMoSP). The model mainly investigates how to label video objects and image regions in real time, usually with intermediate-level semantic labels, and this work is placed upstream of the signal (i.e., at the video capture and production stage). Because of the research needs, the paper also analyzes the hierarchical structure of video, dividing it into nine semantic levels that apply only to the video production process, and points out that the semantic-level tagging (i.e., semantic preloading) refers only to the four middle levels. All in

  15. Applications of Transport/Reaction Codes to Problems in Cell Modeling

    SciTech Connect

    MEANS, SHAWN A.; RINTOUL, MARK DANIEL; SHADID, JOHN N.

    2001-11-01

    We demonstrate two specific examples that show how our existing capabilities in solving large systems of partial differential equations associated with transport/reaction systems can be easily applied to outstanding problems in computational biology. First, we examine a three-dimensional model for calcium wave propagation in a Xenopus Laevis frog egg and verify that a proposed model for the distribution of calcium release sites agrees with experimental results as a function of both space and time. Next, we create a model of the neuron's terminus based on experimental observations and show that the sodium-calcium exchanger is not the route of sodium's modulation of neurotransmitter release. These state-of-the-art simulations were performed on massively parallel platforms and required almost no modification of existing Sandia codes.
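
    A minimal fire-diffuse-fire sketch conveys the transport/reaction flavor of the calcium-wave problem (one spatial dimension, invented units and thresholds; not the Sandia 3-D Xenopus model).

      # 1-D fire-diffuse-fire toy: diffusion plus threshold-triggered release.
      import numpy as np

      nx, dx, dt, D, k = 400, 1.0, 0.2, 1.0, 0.01   # arbitrary units
      c = np.zeros(nx)                          # Ca2+ above baseline (a.u.)
      sites = np.arange(10, nx, 10)             # discrete release-site positions
      fired = np.zeros(len(sites), dtype=bool)
      c[:10] = 2.0                              # trigger at the left edge

      for _ in range(10000):
          lap = (np.roll(c, 1) - 2 * c + np.roll(c, -1)) / dx**2
          lap[0] = lap[-1] = 0.0                # crude no-flux boundaries
          c += dt * (D * lap - k * c)           # diffusion plus linear removal
          ready = (~fired) & (c[sites] > 0.2)   # threshold-triggered release
          c[sites[ready]] += 20.0
          fired |= ready

      print("wave reached x =", sites[fired].max() if fired.any() else "nowhere")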

  16. Graphical Models via Univariate Exponential Family Distributions

    PubMed Central

    Yang, Eunho; Ravikumar, Pradeep; Allen, Genevera I.; Liu, Zhandong

    2016-01-01

    Undirected graphical models, or Markov networks, are a popular class of statistical models, used in a wide variety of applications. Popular instances of this class include Gaussian graphical models and Ising models. In many settings, however, it might not be clear which subclass of graphical models to use, particularly for non-Gaussian and non-categorical data. In this paper, we consider a general sub-class of graphical models where the node-wise conditional distributions arise from exponential families. This allows us to derive multivariate graphical model distributions from univariate exponential family distributions, such as the Poisson, negative binomial, and exponential distributions. Our key contributions include a class of M-estimators to fit these graphical model distributions; and rigorous statistical analysis showing that these M-estimators recover the true graphical model structure exactly, with high probability. We provide examples of genomic and proteomic networks learned via instances of our class of graphical models derived from Poisson and exponential distributions. PMID:27570498
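
    The node-wise idea can be sketched by fitting each node with an l1-regularized Poisson regression on the remaining nodes and reading nonzero coefficients as edges; the toy data and penalty below parallel, but are not, the paper's M-estimator analysis.

      # Neighborhood selection sketch for a Poisson graphical model.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(2)
      n, p = 500, 5
      z = rng.poisson(3.0, size=n)
      X = rng.poisson(1.0, size=(n, p)).astype(float)
      X[:, 0] += z; X[:, 1] += z                 # nodes 0 and 1 share structure

      edges = set()
      for j in range(p):
          others = [k for k in range(p) if k != j]
          design = sm.add_constant(X[:, others])
          fit = sm.GLM(X[:, j], design, family=sm.families.Poisson()).fit_regularized(
              alpha=0.05, L1_wt=1.0)             # pure l1 penalty
          for coef, k in zip(fit.params[1:], others):
              if abs(coef) > 1e-6:
                  edges.add(tuple(sorted((j, k))))
      print(edges)                               # the edge (0, 1) should dominate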

  17. Cost effectiveness of the 1993 Model Energy Code in Colorado

    SciTech Connect

    Lucas, R.G.

    1995-06-01

    This report documents an analysis of the cost effectiveness of the Council of American Building Officials' 1993 Model Energy Code (MEC) building thermal-envelope requirements for single-family homes in Colorado. The goal of this analysis was to compare the cost effectiveness of the 1993 MEC to current construction practice in Colorado based on an objective methodology that determined the total life-cycle cost associated with complying with the 1993 MEC. This analysis was performed for the range of Colorado climates. The costs and benefits of complying with the 1993 MEC were estimated from the consumer's perspective. The time when the homeowner realizes net cash savings (net positive cash flow) for homes built in accordance with the 1993 MEC was estimated to vary from 0.9 year in Steamboat Springs to 2.4 years in Denver. Compliance with the 1993 MEC was estimated to increase first costs by $1190 to $2274, resulting in an incremental down payment increase of $119 to $227 (at 10% down). The net present value of all costs and benefits to the home buyer, accounting for the mortgage and taxes, varied from a savings of $1772 in Springfield to a savings of $6614 in Steamboat Springs. The ratio of benefits to costs ranged from 2.3 in Denver to 3.8 in Steamboat Springs.

  18. Spectral and Structure Modeling of Low and High Mass Young Stars Using a Radiative Transfer Code

    NASA Astrophysics Data System (ADS)

    Robson Rocha, Will; Pilling, Sergio

    The spectroscopy data from space telescopes (ISO, Spitzer, Herschel) show that in addition to dust grains (e.g. silicates), frozen molecular species (astrophysical ices, such as H2O, CO, CO2, CH3OH) are also present in circumstellar environments. In this work we present a study of the modeling of low and high mass young stellar objects (YSOs), in which we highlight the importance of using astrophysical ices processed by the radiation (UV, cosmic rays) coming from stars in the formation process. This is important for characterizing the physicochemical evolution of the ices distributed through the protostellar disk and its envelope in some situations. To perform this analysis, we gathered (i) observational data from the Infrared Space Observatory (ISO) for the low mass protostar Elias29 and the high mass protostar W33A, (ii) experimental absorbance data in the infrared spectral range, used to determine the optical constants of the materials observed around these objects, and (iii) a powerful radiative transfer code to simulate the astrophysical environment (RADMC-3D, Dullemond et al., 2012). Briefly, the radiative transfer calculation of the YSOs was done employing the RADMC-3D code. The model outputs were the spectral energy distribution and theoretical images of the studied objects at different wavelengths. The functionality of this code is based on the Monte Carlo methodology, in addition to Mie theory for the interaction between radiation and matter. The observational data from different space telescopes were used as reference for comparison with the modeled data. The optical constants in the infrared, used as input in the models, were calculated directly from absorbance data obtained in the laboratory for both unprocessed and processed simulated interstellar samples by using the NKABS code (Rocha & Pilling 2014). We show from this study that some absorption bands in the infrared, observed in the spectra of Elias29 and W33A, can arise after the ices

  19. On distributed memory MPI-based parallelization of SPH codes in massive HPC context

    NASA Astrophysics Data System (ADS)

    Oger, G.; Le Touzé, D.; Guibert, D.; de Leffe, M.; Biddiscombe, J.; Soumagne, J.; Piccinali, J.-G.

    2016-03-01

    Most particle methods share the problem of high computational cost, and in order to satisfy the demands of solvers, currently available hardware technologies must be fully exploited. Two complementary technologies are now accessible. On the one hand, CPUs, which can be structured into a multi-node framework, allowing massive data exchanges through a high-speed network. In this case, each node usually comprises several cores available to perform multithreaded computations. On the other hand, GPUs, which are derived from graphics computing technologies and are able to perform highly multi-threaded calculations with hundreds of independent threads connected through a common shared memory. This paper is primarily dedicated to the distributed-memory parallelization of particle methods, targeting several thousands of CPU cores. The experience gained clearly shows that parallelizing a particle-based code on moderate numbers of cores can easily lead to acceptable scalability, whilst a scalable speedup on thousands of cores is much more difficult to obtain. The discussion revolves around speeding up particle methods as a whole, in a massive HPC context, by making use of the MPI library. We focus on one particular particle method, Smoothed Particle Hydrodynamics (SPH), one of the most widespread today in the literature as well as in engineering.
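
    The distributed-memory pattern at the heart of such a parallelization can be sketched with mpi4py: spatial domain decomposition with halo (ghost-particle) exchange between neighbouring ranks. The 1-D decomposition and cutoff below are illustrative simplifications, not the paper's SPH decomposition.

      # Halo-exchange sketch; run with: mpiexec -n 4 python halo_sketch.py
      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      # Each rank owns a slab [rank, rank + 1) of a periodic domain [0, size)
      rng = np.random.default_rng(rank)
      x = rank + rng.random(1000)               # local particle positions
      h = 0.1                                   # smoothing length / cutoff

      left, right = (rank - 1) % size, (rank + 1) % size
      send_right = x[x > rank + 1 - h]          # particles near my right edge
      send_left = x[x < rank + h]               # particles near my left edge

      # Exchange halo particles with both neighbours (periodic wrap)
      ghosts_from_left = comm.sendrecv(send_right, dest=right, source=left)
      ghosts_from_right = comm.sendrecv(send_left, dest=left, source=right)

      n_ghost = len(ghosts_from_left) + len(ghosts_from_right)
      print(f"rank {rank}: {len(x)} local, {n_ghost} ghost particles")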

  20. Real-time distributed video coding for 1K-pixel visual sensor networks

    NASA Astrophysics Data System (ADS)

    Hanca, Jan; Deligiannis, Nikos; Munteanu, Adrian

    2016-07-01

    Many applications in visual sensor networks (VSNs) demand the low-cost wireless transmission of video data. In this context, distributed video coding (DVC) has proven its potential to achieve state-of-the-art compression performance while maintaining low computational complexity of the encoder. Despite their proven capabilities, current DVC solutions overlook hardware constraints, and this renders them unsuitable for practical implementations. This paper introduces a DVC architecture that offers highly efficient wireless communication in real-world VSNs. The design takes into account the severe computational and memory constraints imposed by practical implementations on low-resolution visual sensors. We study performance-complexity trade-offs for feedback-channel removal, propose learning-based techniques for rate allocation, and investigate various simplifications of side information generation yielding real-time decoding. The proposed system is evaluated against H.264/AVC intra, Motion-JPEG, and our previously designed DVC prototype for low-resolution visual sensors. Extensive experimental results on various data show significant improvements in multiple configurations. The proposed encoder achieves real-time performance on a 1k-pixel visual sensor mote. Real-time decoding is performed on a Raspberry Pi single-board computer or a low-end notebook PC. To the best of our knowledge, the proposed codec is the first practical DVC deployment on low-resolution VSNs.

  1. The spatial distribution of fixed mutations within genes coding for proteins

    NASA Technical Reports Server (NTRS)

    Holmquist, R.; Goodman, M.; Conroy, T.; Czelusniak, J.

    1983-01-01

    An examination has been conducted of the extensive amino acid sequence data now available for five protein families - the alpha crystallin A chain, myoglobin, alpha and beta hemoglobin, and the cytochromes c - with the goal of estimating the true spatial distribution of base substitutions within genes that code for proteins. In every case the commonly used Poisson density failed to even approximate the experimental pattern of base substitution. For the 87 species of beta hemoglobin examined, for example, the probability that the observed results were from a Poisson process was the minuscule 10 to the -44th. Analogous results were obtained for the other functional families. All the data were reasonably, but not perfectly, described by the negative binomial density. In particular, most of the data were described by one of the very simple limiting forms of this density, the geometric density. The implications of this for evolutionary inference are discussed. It is evident that most estimates of total base substitutions between genes are badly in need of revision.
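
    The model comparison at the heart of this result is easy to reproduce on toy data: fit a Poisson and a geometric density to per-site substitution counts and compare log-likelihoods. The counts below are simulated, not the hemoglobin alignments.

      # Poisson versus geometric fit to overdispersed count data.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)
      counts = rng.geometric(p=0.35, size=500) - 1     # toy counts on {0, 1, ...}

      lam = counts.mean()                              # Poisson MLE
      p_geo = 1.0 / (1.0 + counts.mean())              # geometric (on 0,1,...) MLE

      ll_pois = stats.poisson.logpmf(counts, lam).sum()
      ll_geom = stats.geom.logpmf(counts + 1, p_geo).sum()  # scipy geom starts at 1
      print(f"logL Poisson = {ll_pois:.1f}, geometric = {ll_geom:.1f}")
      # The geometric fit dominates whenever variance >> mean, as in the paper.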

  2. Scaling properties and fractality in the distribution of coding segments in eukaryotic genomes revealed through a block entropy approach

    NASA Astrophysics Data System (ADS)

    Athanasopoulou, Labrini; Athanasopoulos, Stavros; Karamanos, Kostas; Almirantis, Yannis

    2010-11-01

    Statistical methods, including block entropy based approaches, have already been used in the study of long-range features of genomic sequences seen as symbol series, either considering the full alphabet of the four nucleotides or the binary purine or pyrimidine character set. Here we explore the alternation of short protein-coding segments with long noncoding spacers in entire chromosomes, focusing on the scaling properties of block entropy. In previous studies, it has been shown that the sizes of noncoding spacers follow power-law-like distributions in most chromosomes of eukaryotic organisms from distant taxa. We have developed a simple evolutionary model based on well-known molecular events (segmental duplications followed by elimination of most of the duplicated genes) which reproduces the observed linearity in log-log plots. The scaling properties of block entropy H(n) have been studied in several works. Their findings suggest that linearity in semilogarithmic scale characterizes symbol sequences which exhibit fractal properties and long-range order, while this linearity has been shown in the case of the logistic map at the Feigenbaum accumulation point. The present work starts with the observation that the block entropy of the Cantor-like binary symbol series scales in a similar way. Then, we perform the same analysis for the full set of human chromosomes and for several chromosomes of other eukaryotes. A similar but less extended linearity in semilogarithmic scale, indicating fractality, is observed, while randomly formed surrogate sequences clearly lack this type of scaling. Genomic sequences always present entropy values much lower than their random surrogates. Symbol sequences produced by the aforementioned evolutionary model follow the scaling found in genomic sequences, thus corroborating the conjecture that “segmental duplication-gene elimination” dynamics may have contributed to the observed long rangeness in the coding or noncoding alternation in
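
    Block entropy itself is straightforward to compute; the sketch below evaluates H(n) for a Cantor-like binary substitution sequence and a shuffled surrogate, with the toy string standing in for a coding/noncoding chromosome sequence.

      # Block entropy H(n) of a Cantor-like string versus its shuffled surrogate.
      import numpy as np
      from collections import Counter

      def block_entropy(s, n):
          """Shannon entropy (bits) of the n-word distribution of string s."""
          words = Counter(s[i:i + n] for i in range(len(s) - n + 1))
          p = np.array(list(words.values()), dtype=float)
          p /= p.sum()
          return float(-(p * np.log2(p)).sum())

      seq = "1"
      for _ in range(9):                        # Cantor-like substitution rule
          seq = "".join("101" if ch == "1" else "000" for ch in seq)
      rng = np.random.default_rng(4)
      surrogate = "".join(rng.permutation(list(seq)))

      for n in (1, 2, 4, 8):
          print(n, round(block_entropy(seq, n), 3),
                round(block_entropy(surrogate, n), 3))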

  3. An Analytical Model for BDS B1 Spreading Code Self-Interference Evaluation Considering NH Code Effects

    PubMed Central

    Zhang, Xin; Zhan, Xingqun; Feng, Shaojun; Ochieng, Washington

    2017-01-01

    The short spreading code used by the BeiDou Navigation Satellite System (BDS) B1-I or GPS Coarse/Acquisition (C/A) can cause aggregately undesirable cross-correlation between signals within each single constellation. This GPS-to-GPS or BDS-to-BDS correlation is referred to as self-interference. A GPS C/A code self-interference model is extended to propose a self-interference model for BDS B1, taking into account the unique feature of the B1-I signal transmitted by BDS medium Earth orbit (MEO) and inclined geosynchronous orbit (IGSO) satellites—an extra Neumann-Hoffmann (NH) code. Currently there is no analytical model for BDS self-interference, and a simple three-parameter analytical model is proposed. The model is developed by calculating the spectral separation coefficient (SSC), converting the SSC to an equivalent white noise power level, and then using this to calculate the effective carrier-to-noise density ratio. Cyclostationarity embedded in the signal offers the proposed model additional accuracy in predicting B1-I self-interference. Hardware simulator data are used to validate the model. Software simulator data are used to show the impact of self-interference on a typical BDS receiver, including the finding that the self-interference effect is most significant when the differential Doppler between the desired and undesired signal is zero. Simulation results show the aggregate noise caused by just two undesirable spreading codes on a single desirable signal could lift the receiver noise floor by 3.83 dB under extreme C/N0 (carrier to noise density ratio) conditions (around 20 dB-Hz). This aggregate noise has the potential to increase the code tracking standard deviation by 11.65 m under low C/N0 (15–19 dB-Hz) conditions and should, therefore, be avoided for high-sensitivity applications. Although the findings refer to the BeiDou system, the principal weakness of the short codes illuminated here is valid for other satellite navigation systems. PMID:28333120
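
    The three-step chain of the model (SSC, equivalent noise level, effective carrier-to-noise density, i.e. (C/N0)_eff = C / (N0 + sum_i P_i * SSC_i)) can be sketched numerically; the PSD, powers, and band limits below are illustrative values, not the paper's fitted parameters, and the NH-code cyclostationarity refinement is omitted.

      # SSC -> equivalent noise -> effective C/N0, with illustrative values.
      import numpy as np

      def ssc(f, G_d, G_i):
          """Spectral separation coefficient: integral of G_d * G_i df."""
          return np.trapz(G_d * G_i, f)

      f = np.linspace(-4.092e6, 4.092e6, 8001)   # Hz, front-end band
      Tc = 1.0 / 2.046e6                          # B1-I-like chip period, s
      G = Tc * np.sinc(f * Tc) ** 2               # BPSK power spectral density
      G /= np.trapz(G, f)                         # renormalise over the band

      N0 = 10.0 ** (-204.0 / 10.0)                # thermal noise floor, W/Hz
      C = 10.0 ** (-158.5 / 10.0)                 # desired signal power, W
      P_int = [10.0 ** (-155.0 / 10.0)] * 2       # two interferers, W

      k_self = ssc(f, G, G)                       # self-SSC, same code family
      cn0_eff = C / (N0 + sum(P * k_self for P in P_int))
      print("effective C/N0 =", round(10 * np.log10(cn0_eff), 2), "dB-Hz")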

  4. An Analytical Model for BDS B1 Spreading Code Self-Interference Evaluation Considering NH Code Effects.

    PubMed

    Zhang, Xin; Zhan, Xingqun; Feng, Shaojun; Ochieng, Washington

    2017-03-23

    The short spreading code used by the BeiDou Navigation Satellite System (BDS) B1-I or GPS Coarse/Acquisition (C/A) can cause aggregately undesirable cross-correlation between signals within each single constellation. This GPS-to-GPS or BDS-to-BDS correlation is referred to as self-interference. A GPS C/A code self-interference model is extended to propose a self-interference model for BDS B1, taking into account the unique feature of the B1-I signal transmitted by BDS medium Earth orbit (MEO) and inclined geosynchronous orbit (IGSO) satellites: an extra Neumann-Hoffmann (NH) code. Currently there is no analytical model for BDS self-interference, and a simple three-parameter analytical model is proposed. The model is developed by calculating the spectral separation coefficient (SSC), converting the SSC to an equivalent white noise power level, and then using this to calculate the effective carrier-to-noise density ratio. Cyclostationarity embedded in the signal offers the proposed model additional accuracy in predicting B1-I self-interference. Hardware simulator data are used to validate the model. Software simulator data are used to show the impact of self-interference on a typical BDS receiver, including the finding that the self-interference effect is most significant when the differential Doppler between the desired and undesired signal is zero. Simulation results show the aggregate noise caused by just two undesirable spreading codes on a single desirable signal could lift the receiver noise floor by 3.83 dB under extreme C/N₀ (carrier to noise density ratio) conditions (around 20 dB-Hz). This aggregate noise has the potential to increase the code tracking standard deviation by 11.65 m under low C/N₀ (15-19 dB-Hz) conditions and should, therefore, be avoided for high-sensitivity applications. Although the findings refer to the BeiDou system, the principal weakness of the short codes illuminated here is valid for other satellite navigation systems.

  5. Code and Solution Verification of 3D Numerical Modeling of Flow in the Gust Erosion Chamber

    NASA Astrophysics Data System (ADS)

    Yuen, A.; Bombardelli, F. A.

    2014-12-01

    Erosion microcosms are devices commonly used to investigate the erosion and transport characteristics of sediments at the bed of rivers, lakes, or estuaries. In order to understand the results these devices provide, the bed shear stress and flow field need to be accurately described. In this research, the UMCES Gust Erosion Microcosm System (U-GEMS) is numerically modeled using the Finite Volume Method. The primary aims are to simulate the bed shear stress distribution at the surface of the sediment core/bottom of the microcosm, and to validate that the U-GEMS produces uniform bed shear stress at the bottom of the microcosm. The mathematical model equations are solved on a Cartesian non-uniform grid. Multiple numerical runs were developed with different input conditions and configurations. Prior to developing the U-GEMS model, the General Moving Objects (GMO) model and different momentum algorithms in the code were verified. Code verification of these solvers was done by simulating the flow inside a square cavity driven by its top wall on different mesh sizes to obtain the order of convergence. The GMO model was used to simulate the moving top wall of the driven square cavity as well as the rotating disk in the U-GEMS. Components simulated with the GMO model were rigid bodies that could have any type of motion. In addition, cross-verification was conducted by comparing with the numerical results of Ghia et al. (1982), and good agreement was found. Next, the CFD results were validated by simulating the flow within the conventional microcosm system without suction and injection, and good agreement was found with the experimental results of Khalili et al. (2008). After the ability of the CFD solver was established through the above code-verification steps, the model was utilized to simulate the U-GEMS. The solution was verified via a classic mesh-convergence study on four consecutive mesh sizes; in addition, the Grid Convergence Index (GCI) was calculated and based on
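
    The convergence metrics named here follow standard formulas: the observed order of convergence p from three systematically refined grids, and Roache's Grid Convergence Index with a safety factor. A sketch with invented sample values:

      # Observed order of convergence and GCI from three grid solutions.
      import math

      def observed_order(f_coarse, f_medium, f_fine, r):
          """p from three solutions on grids refined by a constant ratio r."""
          return math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)

      def gci_fine(f_medium, f_fine, r, p, Fs=1.25):
          """Roache's GCI on the fine grid, with safety factor Fs."""
          rel_err = abs((f_medium - f_fine) / f_fine)
          return Fs * rel_err / (r ** p - 1.0)

      r = 2.0                                   # refinement ratio
      f1, f2, f3 = 0.9713, 0.9700, 0.9697       # coarse, medium, fine results
      p = observed_order(f1, f2, f3, r)
      print(f"p = {p:.2f}, GCI_fine = {gci_fine(f2, f3, r, p):.3e}")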

  6. Sodium/water pool-deposit bed model of the CONACS code. [LMFBR

    SciTech Connect

    Peak, R.D.

    1983-12-17

    A new Pool-Bed model of the CONACS (Containment Analysis Code System) code represents a major advance over the pool models of other containment analysis codes (the NABE code of France, the CEDAN code of Japan, and the CACECO and CONTAIN codes of the United States). This new model advances pool-bed modeling because of the number of significant materials and processes which are included with appropriate rigor. This CONACS pool-bed model maintains material balances for eight chemical species (C, H2O, Na, NaH, Na2O, Na2O2, Na2CO3 and NaOH) that collect in the stationary liquid pool on the floor and in the deposit bed on the elevated shelf of the standard CONACS analysis cell.

  7. New trends in species distribution modelling

    USGS Publications Warehouse

    Zimmermann, Niklaus E.; Edwards, Thomas C.; Graham, Catherine H.; Pearman, Peter B.; Svenning, Jens-Christian

    2010-01-01

    Species distribution modelling has its origin in the late 1970s when computing capacity was limited. Early work in the field concentrated mostly on the development of methods to model effectively the shape of a species' response to environmental gradients (Austin 1987, Austin et al. 1990). The methodology and its framework were summarized in reviews 10–15 yr ago (Franklin 1995, Guisan and Zimmermann 2000), and these syntheses are still widely used as reference landmarks in the current distribution modelling literature. However, enormous advancements have occurred over the last decade, with hundreds – if not thousands – of publications on species distribution model (SDM) methodologies and their application to a broad set of conservation, ecological and evolutionary questions. With this special issue, originating from the third of a set of specialized SDM workshops (2008 Riederalp) entitled 'The Utility of Species Distribution Models as Tools for Conservation Ecology', we reflect on current trends and the progress achieved over the last decade.

  8. A Robust Model-Based Coding Technique for Ultrasound Video

    NASA Technical Reports Server (NTRS)

    Docef, Alen; Smith, Mark J. T.

    1995-01-01

    This paper introduces a new approach to coding ultrasound video, the intended application being very low bit rate coding for transmission over low cost phone lines. The method exploits both the characteristic noise and the quasi-periodic nature of the signal. Data compression ratios between 250:1 and 1000:1 are shown to be possible, which is sufficient for transmission over ISDN and conventional phone lines. Preliminary results show this approach to be promising for remote ultrasound examinations.

  9. Caveats for correlative species distribution modeling

    USGS Publications Warehouse

    Jarnevich, Catherine S.; Stohlgren, Thomas J.; Kumar, Sunil; Morisette, Jeffrey T.; Holcombe, Tracy R.

    2015-01-01

    Correlative species distribution models are becoming commonplace in the scientific literature and public outreach products, displaying locations, abundance, or suitable environmental conditions for harmful invasive species, threatened and endangered species, or species of special concern. Accurate species distribution models are useful for efficient and adaptive management and conservation, research, and ecological forecasting. Yet, these models are often presented without fully examining or explaining the caveats for their proper use and interpretation and are often implemented without understanding the limitations and assumptions of the model being used. We describe common pitfalls, assumptions, and caveats of correlative species distribution models to help novice users and end users better interpret these models. Four primary caveats corresponding to different phases of the modeling process, each with supporting documentation and examples, include: (1) all sampling data are incomplete and potentially biased; (2) predictor variables must capture distribution constraints; (3) no single model works best for all species, in all areas, at all spatial scales, and over time; and (4) the results of species distribution models should be treated like a hypothesis to be tested and validated with additional sampling and modeling in an iterative process.

  10. Modeling Gas Distribution in Protoplanetary Accretion Disks

    NASA Astrophysics Data System (ADS)

    Kronberg, Martin; Lewis, Josiah; Brittain, Sean

    2010-07-01

    Protoplanetary accretion disks are disks of dust and gas which surround and feed material onto a forming star in the earliest stages of its evolution. One of the most useful methods for studying these disks is near-infrared spectroscopy of rovibrational CO emission. This paper presents the methods by which synthetically generated spectra are modeled and fit to spectral data gathered from protoplanetary disks. It also discusses how the code can be improved by modifying it to run a Monte Carlo best-fit analysis across the CONDOR cluster at Clemson University, thereby allowing the creation of a catalog of protoplanetary disks with detailed model-derived information about each.

  11. MIG version 0.0 model interface guidelines: Rules to accelerate installation of numerical models into any compliant parent code

    SciTech Connect

    Brannon, R.M.; Wong, M.K.

    1996-08-01

    A set of model interface guidelines, called MIG, is presented as a means by which any compliant numerical material model can be rapidly installed into any parent code without having to modify the model subroutines. Here, "model" usually means a material model such as one that computes stress as a function of strain, though the term may be extended to any numerical operation. "Parent code" means a hydrocode, finite element code, etc. which uses the model and enforces, say, the fundamental laws of motion and thermodynamics. MIG requires the model developer (who creates the model package) to specify model needs in a standardized but flexible way. MIG includes a dictionary of technical terms that allows developers and parent code architects to share a common vocabulary when specifying field variables. For portability, database management is the responsibility of the parent code. Input/output occurs via structured calling arguments. As much model information as possible (such as the lists of required inputs, as well as lists of precharacterized material data and special needs) is supplied by the model developer in an ASCII text file. Every MIG-compliant model also has three required subroutines to check data, to request extra field variables, and to perform model physics. To date, the MIG scheme has proven flexible in beta installations of a simple yield model, plus a more complicated viscodamage yield model, three electromechanical models, and a complicated anisotropic microcrack constitutive model. The MIG yield model has been successfully installed using identical subroutines in three vectorized parent codes and one parallel C++ code, all predicting comparable results. By maintaining one model for many codes, MIG facilitates code-to-code comparisons and reduces duplication of effort, thereby reducing the cost of installing and sharing models in diverse new codes.

  12. Soft-edged magnet models for higher-order beam-optics map codes

    NASA Astrophysics Data System (ADS)

    Walstrom, P. L.

    2004-02-01

    Continuously varying surface and volume source-density distributions are used to model magnetic fields inside of cylindrical volumes. From these distributions, a package of subroutines computes on-axis generalized gradients and their derivatives at arbitrary points on the magnet axis for input to the numerical map-generating subroutines of the Lie-algebraic map code Marylie. In the present version of the package, the magnet menu includes: (1) cylindrical current-sheet or radially thick current distributions with either open boundaries or with a surrounding cylindrical boundary with normal field lines (which models high-permeability iron), (2) Halbach-type permanent multipole magnets, either as sheet magnets or as radially thick magnets, (3) modeling of arbitrary fields inside a cylinder by use of a fictitious current sheet. The subroutines provide on-axis gradients and their z derivatives to essentially arbitrary order, although in the present third- and fifth-order Marylie only the zeroth through sixth derivatives are needed. The formalism is especially useful in beam-optics applications, such as magnetic lenses, where realistic treatment of fringe-field effects is needed.

  13. Software Model Checking for Verifying Distributed Algorithms

    DTIC Science & Technology

    2014-10-28

    Model checking is a verification procedure that performs an intelligent exhaustive search of the state space of the design; here it is applied to verifying synchronous distributed applications. Tool usage is documented on the project webpage (http://mcda.googlecode.com), including a tutorial. Sagar Chaki, Carnegie Mellon University, June 11, 2014.

  14. Indiana Distributive Education Competency Based Model.

    ERIC Educational Resources Information Center

    Davis, Rod; And Others

    This Indiana distributive education competency-based curriculum model is designed to help teachers and local administrators plan and conduct a comprehensive marketing and distributive education program. It is divided into three levels--one level for each year of a three-year program. The competencies common to a variety of marketing and…

  15. Dynamic causal modelling of distributed electromagnetic responses

    PubMed Central

    Daunizeau, Jean; Kiebel, Stefan J.; Friston, Karl J.

    2009-01-01

    In this note, we describe a variant of dynamic causal modelling for evoked responses as measured with electroencephalography or magnetoencephalography (EEG and MEG). We depart from equivalent current dipole formulations of DCM, and extend it to provide spatiotemporal source estimates that are spatially distributed. The spatial model is based upon neural-field equations that model neuronal activity on the cortical manifold. We approximate this description of electrocortical activity with a set of local standing-waves that are coupled through their temporal dynamics. The ensuing distributed DCM models sources as a mixture of overlapping patches on the cortical mesh. Time-varying activity in this mixture, caused by activity in other sources and exogenous inputs, is propagated through appropriate lead-field or gain-matrices to generate observed sensor data. This spatial model has three key advantages. First, it is more appropriate than equivalent current dipole models, when real source activity is distributed locally within a cortical area. Second, the spatial degrees of freedom of the model can be specified and therefore optimised using model selection. Finally, the model is linear in the spatial parameters, which finesses model inversion. Here, we describe the distributed spatial model and present a comparative evaluation with conventional equivalent current dipole (ECD) models of auditory processing, as measured with EEG. PMID:19398015

  16. Modelling 2001 lahars at Popocatépetl volcano using FLO2D numerical code

    NASA Astrophysics Data System (ADS)

    Caballero, L.; Capra, L.

    2013-12-01

    Popocatépetl volcano is located in the central part of the Trans-Mexican Volcanic Belt. It is one of the most active volcanoes in Mexico and endangers more than 25 million people living in its surroundings. In recent months, the renewal of its volcanic activity has put the scientific community on alert. One of the possible scenarios is a repeat of the 2001 explosive activity, which was characterized by an 8 km eruptive column and the subsequent formation of pumice flows up to 4 km from the crater. Lahars were generated a few hours later, remobilizing the new deposits along the Huiloac Gorge on the NE flank of the volcano, almost reaching the town of Santiago Xalitzintla (Capra et al., 2004). The possibility of a similar scenario makes it very important to reproduce this event in order to delimit lahar hazard zones accurately. In this work, the 2001 lahar deposit is modeled using the FLO2D numerical code. Geophone data are used to reconstruct the initial hydrograph and sediment concentration. A sensitivity study of the most important parameters used by this code, such as the Manning coefficient and the α and β coefficients, was conducted in order to achieve a good simulation. The results were compared with field data and showed good agreement in thickness and flow distribution. A comparison with previously published results from the laharZ program (Muñoz-Salinas, 2009) is also made. Additionally, lahars with fluctuating sediment concentrations but similar volumes are simulated to observe the influence of rheological behavior on lahar distribution.

  17. Modeling neural activity with cumulative damage distributions.

    PubMed

    Leiva, Víctor; Tejo, Mauricio; Guiraud, Pierre; Schmachtenberg, Oliver; Orio, Patricio; Marmolejo-Ramos, Fernando

    2015-10-01

    Neurons transmit information as action potentials or spikes. Due to the inherent randomness of the inter-spike intervals (ISIs), probabilistic models are often used for their description. Cumulative damage (CD) distributions are a family of probabilistic models that has been widely considered for describing time-related cumulative processes. This family allows us to consider certain deterministic principles for modeling ISIs from a probabilistic viewpoint and to link its parameters to values with biological interpretation. The CD family includes the Birnbaum-Saunders and inverse Gaussian distributions, which possess distinctive properties and theoretical arguments useful for ISI description. We expand the use of CD distributions to the modeling of neural spiking behavior, mainly by testing the suitability of the Birnbaum-Saunders distribution, which has not been studied in the setting of neural activity. We validate this expansion with original experimental and simulated electrophysiological data.
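
    SciPy exposes both distributions (the Birnbaum-Saunders as fatiguelife), so the fitting exercise can be sketched directly; the ISIs below are simulated rather than recorded data.

      # Fit two cumulative-damage distributions to synthetic inter-spike intervals.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(5)
      isi = stats.fatiguelife.rvs(0.8, loc=0.0, scale=12.0, size=2000,
                                  random_state=rng)     # synthetic ISIs, ms

      for name, dist in [("Birnbaum-Saunders", stats.fatiguelife),
                         ("inverse Gaussian", stats.invgauss)]:
          params = dist.fit(isi, floc=0.0)               # location fixed at 0
          ll = dist.logpdf(isi, *params).sum()
          n_free = len(params) - 1                       # loc was not fitted
          print(f"{name}: params={np.round(params, 3)}, "
                f"AIC={2 * n_free - 2 * ll:.1f}")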

  18. New Source Term Model for the RESRAD-OFFSITE Code Version 3

    SciTech Connect

    Yu, Charley; Gnanapragasam, Emmanuel; Cheng, Jing-Jy; Kamboj, Sunita; Chen, Shih-Yew

    2013-06-01

    This report documents the new source term model developed and implemented in Version 3 of the RESRAD-OFFSITE code. This new source term model includes: (1) "first order release with transport" option, in which the release of the radionuclide is proportional to the inventory in the primary contamination and the user-specified leach rate is the proportionality constant, (2) "equilibrium desorption release" option, in which the user specifies the distribution coefficient which quantifies the partitioning of the radionuclide between the solid and aqueous phases, and (3) "uniform release" option, in which the radionuclides are released from a constant fraction of the initially contaminated material during each time interval and the user specifies the duration over which the radionuclides are released.
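
    The three release options can be caricatured in a single inventory balance; the sketch below is a forward-Euler illustration with invented rates, not the RESRAD-OFFSITE implementation (in particular, the Kd-controlled desorption release is reduced here to an assumed aqueous fraction).

      # Toy inventory balance for the three release options described above.
      import numpy as np

      def release_series(option, I0=1.0, lam_d=0.01, years=50, dt=1.0,
                         leach=0.05, duration=20.0, aq_frac=0.02):
          inv, out = I0, []
          for step in range(int(years / dt)):
              t = step * dt
              if option == "first_order":
                  rate = leach * inv                  # proportional to inventory
              elif option == "uniform":
                  rate = (I0 * np.exp(-lam_d * t) / duration) if t < duration else 0.0
              else:                                   # "equilibrium_desorption"
                  rate = aq_frac * inv                # Kd reduced to a fixed fraction
              rate = min(rate, inv / dt)              # cannot release more than exists
              inv = max(inv - (rate + lam_d * inv) * dt, 0.0)
              out.append(rate)
          return np.array(out)

      for opt in ("first_order", "uniform", "equilibrium_desorption"):
          print(opt, "fraction released:", round(release_series(opt).sum(), 3))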

  19. Modelling Root Systems Using Oriented Density Distributions

    NASA Astrophysics Data System (ADS)

    Dupuy, Lionel X.

    2011-09-01

    Root architectural models are essential tools to understand how plants access and utilize soil resources during their development. However, root architectural models use complex geometrical descriptions of the root system, and this limits their ability to model interactions with the soil. This paper presents the development of continuous models based on the concept of the oriented density distribution function. The growth of the root system is built as a hierarchical system of partial differential equations (PDEs) that incorporate single-root growth parameters such as elongation rate, gravitropism and branching rate, which appear explicitly as coefficients of the PDE. Acquisition and transport of nutrients are then modelled by extending Darcy's law to oriented density distribution functions. This framework was applied to build a model of the growth and water uptake of a barley root system. This study shows that simplified and computationally efficient continuous models of root system development can be constructed. Such models will allow the application of root growth models at field scale.

  1. Advancing Underwater Acoustic Communication for Autonomous Distributed Networks via Sparse Channel Sensing, Coding, and Navigation Support

    DTIC Science & Technology

    2011-09-30

    ...channel interference mitigation for underwater acoustic MIMO-OFDM. 3) Turbo equalization for OFDM-modulated physical-layer network coding. 4) Blind CFO... Localization and tracking of underwater physical systems. 7) NAMS: a networked acoustic modem system for underwater applications. 8) OFDM receiver design in... On turbo equalization for OFDM-modulated physical-layer network coding, we have investigated a practical orthogonal frequency division multiplexing

  2. Incorporating uncertainty in predictive species distribution modelling

    PubMed Central

    Beale, Colin M.; Lennon, Jack J.

    2012-01-01

    Motivated by the need to solve ecological problems (climate change, habitat fragmentation and biological invasions), there has been increasing interest in species distribution models (SDMs). Predictions from these models inform conservation policy, invasive species management and disease-control measures. However, predictions are subject to uncertainty, the degree and source of which is often unrecognized. Here, we review the SDM literature in the context of uncertainty, focusing on three main classes of SDM: niche-based models, demographic models and process-based models. We identify sources of uncertainty for each class and discuss how uncertainty can be minimized or included in the modelling process to give realistic measures of confidence around predictions. Because this has typically not been performed, we conclude that uncertainty in SDMs has often been underestimated and a false precision assigned to predictions of geographical distribution. We identify areas where development of new statistical tools will improve predictions from distribution models, notably the development of hierarchical models that link different types of distribution model and their attendant uncertainties across spatial scales. Finally, we discuss the need to develop more defensible methods for assessing predictive performance, quantifying model goodness-of-fit and for assessing the significance of model covariates. PMID:22144387

  3. Incorporating uncertainty in predictive species distribution modelling.

    PubMed

    Beale, Colin M; Lennon, Jack J

    2012-01-19

    Motivated by the need to solve ecological problems (climate change, habitat fragmentation and biological invasions), there has been increasing interest in species distribution models (SDMs). Predictions from these models inform conservation policy, invasive species management and disease-control measures. However, predictions are subject to uncertainty, the degree and source of which is often unrecognized. Here, we review the SDM literature in the context of uncertainty, focusing on three main classes of SDM: niche-based models, demographic models and process-based models. We identify sources of uncertainty for each class and discuss how uncertainty can be minimized or included in the modelling process to give realistic measures of confidence around predictions. Because this has typically not been performed, we conclude that uncertainty in SDMs has often been underestimated and a false precision assigned to predictions of geographical distribution. We identify areas where development of new statistical tools will improve predictions from distribution models, notably the development of hierarchical models that link different types of distribution model and their attendant uncertainties across spatial scales. Finally, we discuss the need to develop more defensible methods for assessing predictive performance, quantifying model goodness-of-fit and for assessing the significance of model covariates.

  4. Simulated evolution applied to study the genetic code optimality using a model of codon reassignments

    PubMed Central

    2011-01-01

    Background As the canonical code is not universal, different theories about its origin and organization have appeared. The optimization or level of adaptation of the canonical genetic code was measured taking into account the harmful consequences resulting from point mutations leading to the replacement of one amino acid for another. There are two basic theories to measure the level of optimization: the statistical approach, which compares the canonical genetic code with many randomly generated alternative ones, and the engineering approach, which compares the canonical code with the best possible alternative. Results Here we used a genetic algorithm to search for better adapted hypothetical codes and as a way to gauge the difficulty of finding such alternative codes, allowing us to clearly situate the canonical code in the fitness landscape. This novel use of evolutionary computing provides a new perspective in the open debate between the statistical approach, which postulates that the genetic code conserves amino acid properties far better than expected from a random code, and the engineering approach, which tends to indicate that the canonical genetic code is still far from optimal. We used two models of hypothetical codes: one that reflects the known examples of codon reassignment, and the model most used in the two approaches, which reflects the current genetic code translation table. Although the standard code is far from a possible optimum considering both models, when the more realistic model of the codon reassignments was used, the evolutionary algorithm had more difficulty overcoming the efficiency of the canonical genetic code. Conclusions Simulated evolution clearly reveals that the canonical genetic code is far from fully optimized. Nevertheless, the efficiency of the canonical code increases when mistranslations are taken into account with the two models, as indicated by the fact that the best possible
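
    A genetic-algorithm search of this kind can be sketched on a drastically simplified code space (16 "codons" over 4 "amino acids", with an invented property in place of real amino-acid polarity); this toy illustrates the search procedure only, not the paper's codon-reassignment model.

      # Toy GA over codon-to-amino-acid assignments, minimizing mutation damage.
      import numpy as np

      rng = np.random.default_rng(7)
      n_codons, n_aa, pop_size = 16, 4, 60
      polarity = rng.random(n_aa)               # invented amino-acid property

      def cost(code):
          """Mean squared property change over all single-point 'mutations'."""
          c = code.reshape(4, 4)                 # codon = (base1, base2)
          d = 0.0
          for axis in (0, 1):                    # mutate either base (cyclically)
              d += ((polarity[c] - polarity[np.roll(c, 1, axis=axis)]) ** 2).mean()
          return d

      pop = [rng.integers(0, n_aa, n_codons) for _ in range(pop_size)]
      for gen in range(200):
          pop.sort(key=cost)
          pop = pop[: pop_size // 2]             # truncation selection
          children = [p.copy() for p in pop]
          for ch in children:
              ch[rng.integers(n_codons)] = rng.integers(n_aa)   # point mutation
          pop += children
      print("best cost found:", round(cost(min(pop, key=cost)), 4))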

  5. Sample sizes and model comparison metrics for species distribution models

    Treesearch

    B.B. Hanberry; H.S. He; D.C. Dey

    2012-01-01

    Species distribution models use small samples to produce continuous distribution maps. The question of how small a sample can be while still producing an accurate model generally has been answered by comparison against maximum sample sizes of 200 observations or fewer. In addition, model comparisons often are made with the kappa statistic, which has become controversial....

  6. Challenges and perspectives for species distribution modelling in the neotropics

    PubMed Central

    Kamino, Luciana H. Y.; Stehmann, João Renato; Amaral, Silvana; De Marco, Paulo; Rangel, Thiago F.; de Siqueira, Marinez F.; De Giovanni, Renato; Hortal, Joaquín

    2012-01-01

    The workshop ‘Species distribution models: applications, challenges and perspectives’ held at Belo Horizonte (Brazil), 29–30 August 2011, aimed to review the state-of-the-art in species distribution modelling (SDM) in the neotropical realm. It brought together researchers in ecology, evolution, biogeography and conservation, with different backgrounds and research interests. The application of SDM in the megadiverse neotropics—where data on species occurrences are scarce—presents several challenges, involving acknowledging the limitations imposed by data quality, including surveys as an integral part of SDM studies, and designing the analyses in accordance with the question investigated. Specific solutions were discussed, and a code of good practice in SDM studies and related field surveys was drafted. PMID:22031720

  7. Model-based image coding using deformable 3D model for face-to-face communications

    NASA Astrophysics Data System (ADS)

    Cai, Defu; Liang, Huiying; Wang, Xiangwen

    1994-09-01

    Model-based image coding is a promising method for very-low and ultra-low bit rate visual communications. However, several problems remain before it is practical for video, such as the need for a finer 3-D wireframe model, precise rules for analyzing facial expressions, and automatic feature-point extraction for real-time applications. This paper proposes a feasible model-based image coding scheme built on a deformable model that is suitable for very-low/ultra-low bit rate transmission. Some key techniques are also given, such as automatic face feature-point extraction based on a priori knowledge for real-time applications and a method for separating the action units (AUs) of a face across various expressions.

  8. A probability distribution model for rain rate

    NASA Technical Reports Server (NTRS)

    Kedem, Benjamin; Pavlopoulos, Harry; Guan, Xiaodong; Short, David A.

    1994-01-01

    A systematic approach is suggested for modeling the probability distribution of rain rate. Rain rate, conditional on rain and averaged over a region, is modeled as a temporally homogeneous diffusion process with appropriate boundary conditions. The approach requires a drift coefficient (the conditional average instantaneous rate of change of rain intensity) as well as a diffusion coefficient (the conditional average magnitude of the rate of growth and decay of rain rate about its drift). Under certain assumptions on the drift and diffusion coefficients compatible with rain rate, a new parametric family, containing the lognormal distribution, is obtained for the continuous part of the stationary limit probability distribution. The family is fitted to tropical rainfall from Darwin and Florida, and it is found that the lognormal distribution provides adequate fits as compared with other members of the family and also with the gamma distribution.
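
    The construction is easy to illustrate numerically. The sketch below simulates one diffusion whose stationary law is lognormal (the exponential of an Ornstein-Uhlenbeck process, integrated with the Euler-Maruyama scheme) and then compares lognormal and gamma fits, mirroring the comparison reported above; the drift and diffusion parameters are illustrative and are not taken from the paper.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      theta, mu, sigma = 1.0, 0.0, 0.7        # illustrative OU parameters, not the paper's
      dt, n = 0.01, 200_000

      # Euler-Maruyama integration of an Ornstein-Uhlenbeck process Y; R = exp(Y)
      # is then a diffusion whose stationary distribution is lognormal.
      y = np.empty(n)
      y[0] = mu
      noise = rng.standard_normal(n - 1) * np.sqrt(dt)
      for k in range(n - 1):
          y[k + 1] = y[k] - theta * (y[k] - mu) * dt + sigma * noise[k]
      rain = np.exp(y[n // 2:])               # discard the transient half

      for name, dist in [("lognormal", stats.lognorm), ("gamma", stats.gamma)]:
          params = dist.fit(rain, floc=0)
          print(f"{name:9s} log-likelihood: {np.sum(dist.logpdf(rain, *params)):.1f}")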

  9. Modelling lifetime data with multivariate Tweedie distribution

    NASA Astrophysics Data System (ADS)

    Nor, Siti Rohani Mohd; Yusof, Fadhilah; Bahar, Arifah

    2017-05-01

    This study aims to measure the dependence between individual lifetimes by applying the multivariate Tweedie distribution to lifetime data. Incorporating dependence between lifetimes into the mortality model is a relatively new idea that has a significant impact on the risk of an annuity portfolio, in contrast to standard actuarial methods, which assume independence between lifetimes. Hence, this paper applies the Tweedie family of distributions to a portfolio of lifetimes to induce dependence between lives. The Tweedie distribution is chosen since the family contains symmetric and non-symmetric, as well as light-tailed and heavy-tailed, distributions. Parameter estimation is modified in order to fit the Tweedie distribution to the data; the procedure is developed using the method of moments. In addition, observed and expected mortality are compared to check model adequacy. Finally, the importance of including systematic mortality risk in the model is justified by Pearson's chi-squared test.
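
    As a toy illustration of moment matching, the gamma distribution (the Tweedie member with power parameter p = 2) can be fitted by equating the sample mean and variance to k*theta and k*theta**2. The multivariate Tweedie construction in the paper is considerably more involved, so the snippet below only shows the univariate idea on synthetic data.

      import numpy as np

      rng = np.random.default_rng(7)
      lifetimes = rng.gamma(shape=3.0, scale=2.0, size=10_000)   # stand-in lifetime data

      # Method of moments for a gamma law: mean = k*theta, variance = k*theta**2.
      m, v = lifetimes.mean(), lifetimes.var()
      k_hat, theta_hat = m * m / v, v / m
      print(f"shape ~ {k_hat:.2f} (true 3.0), scale ~ {theta_hat:.2f} (true 2.0)")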

  10. Population distribution models: species distributions are better modeled using biologically relevant data partitions

    PubMed Central

    2011-01-01

    Background Predicting the geographic distribution of widespread species through modeling is problematic for several reasons, including high rates of omission errors. One potential source of error for modeling widespread species is that subspecies and/or races of species are frequently pooled for analyses, which may mask biologically relevant spatial variation within the distribution of a single widespread species. We contrast a presence-only maximum entropy model for the widely distributed oldfield mouse (Peromyscus polionotus) that includes all available presence locations for this species with two composite maximum entropy models. The composite models subdivided the total species distribution either into four geographic quadrants or into fifteen subspecies, to capture spatially relevant variation in P. polionotus distributions. Results Despite high Area Under the ROC Curve (AUC) values for all models, the composite species distribution model of P. polionotus generated from individual subspecies models represented the known distribution of the species much better than did the models produced by partitioning data into geographic quadrants or modeling the whole species as a single unit. Conclusions Because the AUC values failed to describe the differences in the predictability of the three modeling strategies, we suggest using omission curves in addition to AUC values to assess model performance. Dividing the data of a widespread species into biologically relevant partitions greatly increased the performance of our distribution model; therefore, this approach may prove to be quite practical and informative for a wide range of modeling applications. PMID:21929792

  11. Population distribution models: species distributions are better modeled using biologically relevant data partitions.

    PubMed

    Gonzalez, Sergio C; Soto-Centeno, J Angel; Reed, David L

    2011-09-19

    Predicting the geographic distribution of widespread species through modeling is problematic for several reasons, including high rates of omission errors. One potential source of error for modeling widespread species is that subspecies and/or races of species are frequently pooled for analyses, which may mask biologically relevant spatial variation within the distribution of a single widespread species. We contrast a presence-only maximum entropy model for the widely distributed oldfield mouse (Peromyscus polionotus) that includes all available presence locations for this species with two composite maximum entropy models. The composite models subdivided the total species distribution either into four geographic quadrants or into fifteen subspecies, to capture spatially relevant variation in P. polionotus distributions. Despite high Area Under the ROC Curve (AUC) values for all models, the composite species distribution model of P. polionotus generated from individual subspecies models represented the known distribution of the species much better than did the models produced by partitioning data into geographic quadrants or modeling the whole species as a single unit. Because the AUC values failed to describe the differences in the predictability of the three modeling strategies, we suggest using omission curves in addition to AUC values to assess model performance. Dividing the data of a widespread species into biologically relevant partitions greatly increased the performance of our distribution model; therefore, this approach may prove to be quite practical and informative for a wide range of modeling applications.
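
    The suggestion to report omission curves alongside AUC is straightforward to implement. The sketch below computes both for synthetic presence/background scores; the data and thresholds are made up purely for illustration.

      import numpy as np
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(42)
      # Synthetic presence (1) / background (0) labels and model suitability scores.
      y = np.concatenate([np.ones(200), np.zeros(2000)])
      scores = np.concatenate([rng.beta(4, 2, 200), rng.beta(2, 3, 2000)])
      print("AUC:", round(roc_auc_score(y, scores), 3))

      # Omission curve: fraction of known presences predicted absent at each threshold.
      for t in np.linspace(0.1, 0.9, 9):
          print(f"threshold {t:.1f}: omission rate {np.mean(scores[y == 1] < t):.2f}")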

  12. Statistical model with a standard Gamma distribution.

    PubMed

    Patriarca, Marco; Chakraborti, Anirban; Kaski, Kimmo

    2004-01-01

    We study a statistical model consisting of N basic units which interact with each other by exchanging a physical entity, according to a given microscopic random law, depending on a parameter lambda. We focus on the equilibrium or stationary distribution of the entity exchanged and verify through numerical fitting of the simulation data that the final form of the equilibrium distribution is that of a standard Gamma distribution. The model can be interpreted as a simple closed economy in which economic agents trade money and a saving criterion is fixed by the saving propensity lambda. Alternatively, from the nature of the equilibrium distribution, we show that the model can also be interpreted as a perfect gas at an effective temperature T(lambda), where particles exchange energy in a space with an effective dimension D(lambda).
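
    The exchange rule of this model is simple enough to reproduce directly. In the sketch below, randomly chosen pairs of agents trade the non-saved fraction of their combined money, and the resulting distribution is fitted with a gamma law; the effective shape parameter n(lambda) = (1 + 2*lambda)/(1 - lambda) quoted in the comparison is the value reported in the authors' papers on this model, so treat the snippet as an illustration rather than their exact implementation.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      N, lam, steps = 1000, 0.5, 400_000
      x = np.ones(N)                          # every agent starts with one unit of money

      for _ in range(steps):
          i, j = rng.integers(N), rng.integers(N)
          if i == j:
              continue
          eps = rng.random()
          pool = (1 - lam) * (x[i] + x[j])    # the non-saved fraction is redistributed
          x[i], x[j] = lam * x[i] + eps * pool, lam * x[j] + (1 - eps) * pool

      shape, _, _ = stats.gamma.fit(x, floc=0)
      print(f"fitted Gamma shape: {shape:.2f}; "
            f"reported n(lambda) = (1+2*lam)/(1-lam) = {(1 + 2 * lam) / (1 - lam):.2f}")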

  13. Improving Distributed Diagnosis Through Structural Model Decomposition

    NASA Technical Reports Server (NTRS)

    Bregon, Anibal; Daigle, Matthew John; Roychoudhury, Indranil; Biswas, Gautam; Koutsoukos, Xenofon; Pulido, Belarmino

    2011-01-01

    Complex engineering systems require efficient fault diagnosis methodologies, but centralized approaches do not scale well, and this motivates the development of distributed solutions. This work presents an event-based approach for distributed diagnosis of abrupt parametric faults in continuous systems, by using the structural model decomposition capabilities provided by Possible Conflicts. We develop a distributed diagnosis algorithm that uses residuals computed by extending Possible Conflicts to build local event-based diagnosers based on global diagnosability analysis. The proposed approach is applied to a multitank system, and results demonstrate an improvement in the design of local diagnosers. Since local diagnosers use only a subset of the residuals, and use subsystem models to compute residuals (instead of the global system model), the local diagnosers are more efficient than previously developed distributed approaches.

  14. Statistical model with a standard Gamma distribution

    NASA Astrophysics Data System (ADS)

    Chakraborti, Anirban; Patriarca, Marco

    2005-03-01

    We study a statistical model consisting of N basic units which interact with each other by exchanging a physical entity, according to a given microscopic random law, depending on a parameter λ. We focus on the equilibrium or stationary distribution of the entity exchanged and verify through numerical fitting of the simulation data that the final form of the equilibrium distribution is that of a standard Gamma distribution. The model can be interpreted as a simple closed economy in which economic agents trade money and a saving criterion is fixed by the saving propensity λ. Alternatively, from the nature of the equilibrium distribution, we show that the model can also be interpreted as a perfect gas at an effective temperature T (λ), where particles exchange energy in a space with an effective dimension D (λ).

  15. Statistical model with a standard Γ distribution

    NASA Astrophysics Data System (ADS)

    Patriarca, Marco; Chakraborti, Anirban; Kaski, Kimmo

    2004-07-01

    We study a statistical model consisting of N basic units which interact with each other by exchanging a physical entity, according to a given microscopic random law, depending on a parameter λ . We focus on the equilibrium or stationary distribution of the entity exchanged and verify through numerical fitting of the simulation data that the final form of the equilibrium distribution is that of a standard Gamma distribution. The model can be interpreted as a simple closed economy in which economic agents trade money and a saving criterion is fixed by the saving propensity λ . Alternatively, from the nature of the equilibrium distribution, we show that the model can also be interpreted as a perfect gas at an effective temperature T(λ) , where particles exchange energy in a space with an effective dimension D(λ) .

  16. The APS SASE FEL: modeling and code comparison.

    SciTech Connect

    Biedron, S. G.

    1999-04-20

    A self-amplified spontaneous emission (SASE) free-electron laser (FEL) is under construction at the Advanced Photon Source (APS). Five FEL simulation codes were used in the design phase: GENESIS, GINGER, MEDUSA, RON, and TDA3D. Initial comparisons between each of these independent formulations show good agreement for the parameters of the APS SASE FEL.

  17. Coding of odors by temporal binding within a model network of the locust antennal lobe

    PubMed Central

    Patel, Mainak J.; Rangan, Aaditya V.; Cai, David

    2013-01-01

    The locust olfactory system interfaces with the external world through antennal olfactory receptor neurons (ORNs), which represent odors in a distributed, combinatorial manner. ORN axons bundle together to form the antennal nerve, which relays sensory information centrally to the antennal lobe (AL). Within the AL, an odor generates a dynamically evolving ensemble of active cells, leading to a stimulus-specific temporal progression of neuronal spiking. This experimental observation has led to the hypothesis that an odor is encoded within the AL by a dynamically evolving trajectory of projection neuron (PN) activity that can be decoded piecewise to ascertain odor identity. In order to study information coding within the locust AL, we developed a scaled-down model of the locust AL using Hodgkin–Huxley-type neurons and biologically realistic connectivity parameters and current components. Using our model, we examined correlations in the precise timing of spikes across multiple neurons, and our results suggest an alternative to the dynamic trajectory hypothesis. We propose that the dynamical interplay of fast and slow inhibition within the locust AL induces temporally stable correlations in the spiking activity of an odor-dependent neural subset, giving rise to a temporal binding code that allows rapid stimulus detection by downstream elements. PMID:23630495

  18. Comparison of experimental pulse-height distributions in germanium detectors with integrated-tiger-series-code predictions

    SciTech Connect

    Beutler, D.E.; Halbleib, J.A.; Knott, D.P.

    1989-12-01

    This paper reports pulse-height distributions in two different types of Ge detectors measured for a variety of medium-energy x-ray bremsstrahlung spectra. These measurements have been compared to predictions using the integrated tiger series (ITS) Monte Carlo electron/photon transport code. In general, the authors find excellent agreement between experiments and predictions using no free parameters. These results demonstrate that the ITS codes can predict the combined bremsstrahlung production and energy deposition with good precision (within measurement uncertainties). The one region of disagreement observed occurs for low-energy (<50 keV) photons using low-energy bremsstrahlung spectra. In this case the ITS codes appear to underestimate the produced and/or absorbed radiation by almost an order of magnitude.

  19. Implementation of a simple model for linear and nonlinear mixing at unstable fluid interfaces in hydrodynamics codes

    SciTech Connect

    Ramshaw, J D

    2000-10-01

    A simple model was recently described for predicting the time evolution of the width of the mixing layer at an unstable fluid interface [J. D. Ramshaw, Phys. Rev. E 58, 5834 (1998); ibid. 61, 5339 (2000)]. The ordinary differential equations of this model have been heuristically generalized into partial differential equations suitable for implementation in multicomponent hydrodynamics codes. The central ingredient in this generalization is a non-diffusional expression for the species mass fluxes. These fluxes describe the relative motion of the species, and thereby determine the local mixing rate and spatial distribution of mixed fluid as a function of time. The generalized model has been implemented in a two-dimensional hydrodynamics code. The model equations and implementation procedure are summarized, and comparisons with experimental mixing data are presented.

  20. The modeling of core melting and in-vessel corium relocation in the APRIL code

    SciTech Connect

    Kim, S.W.; Podowski, M.Z.; Lahey, R.T.

    1995-09-01

    This paper is concerned with the modeling of severe accident phenomena in boiling water reactors (BWR). New models of core melting and in-vessel corium debris relocation, developed for implementation in the APRIL computer code, are presented. The results of model testing and validation are given, including comparisons against available experimental data and parametric/sensitivity studies. The application of these models, as part of the APRIL code, to simulate accident progression in a typical BWR is also presented.

  1. Application distribution model and related security attacks in VANET

    NASA Astrophysics Data System (ADS)

    Nikaein, Navid; Kanti Datta, Soumya; Marecar, Irshad; Bonnet, Christian

    2013-03-01

    In this paper, we present a model for application distribution and related security attacks in dense vehicular ad hoc networks (VANET) and in sparse VANET, which forms a delay tolerant network (DTN). We study the vulnerabilities of VANET to evaluate the attack scenarios and introduce a new attacker's model as an extension to the work done in [6]. A VANET model is then proposed that supports application distribution through proxy app stores on top of mobile platforms installed in vehicles. The steps of application distribution are studied in detail. We identify key attacks (e.g., malware, spamming and phishing, software attacks and threats to location privacy) for dense VANET and two attack scenarios for sparse VANET. It is shown that attacks can be launched by distributing malicious applications and injecting malicious code into the On Board Unit (OBU) by exploiting OBU software security holes. Consequences of such security attacks are described. Finally, countermeasures, including the concept of a sandbox, are also presented in depth.

  2. Evaluation of a parallel FDTD code and application to modeling of light scattering by deformed red blood cells.

    PubMed

    Brock, R Scott; Hu, Xin-Hua; Yang, Ping; Lu, Jun

    2005-07-11

    A parallel Finite-Difference-Time-Domain (FDTD) code has been developed to numerically model the elastic light scattering by biological cells. Extensive validation and evaluation on various computing clusters demonstrated the high performance of the parallel code and its significant potential for reducing the computational cost of the FDTD method with low-cost computer clusters. The parallel FDTD code has been used to study the problem of light scattering by a human red blood cell (RBC) of a deformed shape in terms of the angular distributions of the Mueller matrix elements. The dependence of the Mueller matrix elements on the shape and orientation of the deformed RBC has been investigated. Analysis of these data provides valuable insight into the determination of RBC shapes using elastic light scattering measurements.

  3. Uncertainty Quantification and Learning in Geophysical Modeling: How Information is Coded into Dynamical Models

    NASA Astrophysics Data System (ADS)

    Gupta, H. V.

    2014-12-01

    There is a clear need for comprehensive quantification of simulation uncertainty when using geophysical models to support and inform decision-making. Further, it is clear that the nature of such uncertainty depends on the quality of information in (a) the forcing data (driver information), (b) the model code (prior information), and (c) the specific values of inferred model components that localize the model to the system of interest (inferred information). Of course, the relative quality of each varies with geophysical discipline and specific application. In this talk I will discuss a structured approach to characterizing how 'Information', and hence 'Uncertainty', is coded into the structures of physics-based geophysical models. I propose that a better understanding of what is meant by "Information", and how it is embodied in models and data, can offer a structured (less ad-hoc), robust and insightful basis for diagnostic learning through the model-data juxtaposition. In some fields, a natural consequence may be to emphasize the a priori role of System Architecture (Process Modeling) over that of the selection of System Parameterization, thereby emphasizing the more creative aspect of scientific investigation - the use of models for Discovery and Learning.

  4. Distributed Prognostics based on Structural Model Decomposition

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew J.; Bregon, Anibal; Roychoudhury, I.

    2014-01-01

    Within systems health management, prognostics focuses on predicting the remaining useful life of a system. In the model-based prognostics paradigm, physics-based models are constructed that describe the operation of a system and how it fails. Such approaches consist of an estimation phase, in which the health state of the system is first identified, and a prediction phase, in which the health state is projected forward in time to determine the end of life. Centralized solutions to these problems are often computationally expensive, do not scale well as the size of the system grows, and introduce a single point of failure. In this paper, we propose a novel distributed model-based prognostics scheme that formally describes how to decompose both the estimation and prediction problems into independent local subproblems whose solutions may be easily composed into a global solution. The decomposition of the prognostics problem is achieved through structural decomposition of the underlying models. The decomposition algorithm creates from the global system model a set of local submodels suitable for prognostics. Independent local estimation and prediction problems are formed based on these local submodels, resulting in a scalable distributed prognostics approach that allows the local subproblems to be solved in parallel, thus offering increases in computational efficiency. Using a centrifugal pump as a case study, we perform a number of simulation-based experiments to demonstrate the distributed approach, compare the performance with a centralized approach, and establish its scalability.

  5. Analytic modeling of aerosol size distributions

    NASA Technical Reports Server (NTRS)

    Deepack, A.; Box, G. P.

    1979-01-01

    Mathematical functions commonly used for representing aerosol size distributions are studied parametrically. Methods for obtaining best fit estimates of the parameters are described. A catalog of graphical plots depicting the parametric behavior of the functions is presented along with procedures for obtaining analytical representations of size distribution data by visual matching of the data with one of the plots. Examples of fitting the same data with equal accuracy by more than one analytic model are also given.

  6. A predictive transport modeling code for ICRF-heated tokamaks

    SciTech Connect

    Phillips, C.K.; Hwang, D.Q.; Houlberg, W.; Attenberger, S.; Tolliver, J.; Hively, L.

    1992-02-01

    In this report, a detailed description of the physics included in the WHIST/RAZE package, as well as a few illustrative examples of the capabilities of the package, is presented. An in-depth analysis of ICRF heating experiments using WHIST/RAZE will be discussed in a forthcoming report. A general overview of the philosophy behind the structure of the WHIST/RAZE package, a summary of the features of the WHIST code, and a description of the interface to the RAZE subroutines are presented in section 2 of this report. Details of the physics contained in the RAZE code are examined in section 3. Sample results from the package follow in section 4, with concluding remarks and a discussion of possible improvements to the package in section 5.

  7. Test code for the assessment and improvement of Reynolds stress models

    NASA Technical Reports Server (NTRS)

    Rubesin, M. W.; Viegas, J. R.; Vandromme, D.; Minh, H. HA

    1987-01-01

    An existing two-dimensional, compressible flow, Navier-Stokes computer code, containing a full Reynolds stress turbulence model, was adapted for use as a test bed for assessing and improving turbulence models based on turbulence simulation experiments. To date, the results of using the code in comparison with simulated channel flow and flow over an oscillating flat plate have shown that the turbulence model used in the code needs improvement for these flows. It is also shown that direct simulations of turbulent flows over a range of Reynolds numbers are needed to guide subsequent improvement of turbulence models.

  8. Evolutionary model of the personal income distribution

    NASA Astrophysics Data System (ADS)

    Kaldasch, Joachim

    2012-11-01

    The aim of this work is to develop a qualitative picture of the personal income distribution. Treating an economy as a self-organized system, the key idea of the model is that the income distribution contains competitive and non-competitive contributions. The presented model distinguishes between three main income classes. 1. Capital income from private firms is shown to be the result of an evolutionary competition between products. A direct consequence of this competition is Gibrat's law, suggesting a lognormal income distribution for small private firms. Taking into account an additional preferential attachment mechanism for large private firms, the income distribution is supplemented by a power-law (Pareto) tail. 2. Due to the division of labor, a diversified labor market is seen as a non-competitive market. In this case wage income exhibits an exponential distribution. 3. Also included is income from a social insurance system. It can be approximated by a Gaussian peak. A consequence of this theory is that for short time intervals a fixed ratio of total labor (total capital) to net income exists (Cobb-Douglas relation). A comparison with empirical high-resolution income data confirms this pattern of the total income distribution. The theory suggests that competition is the ultimate origin of the uneven income distribution.
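
    The three contributions can be combined in a quick numerical sketch: a lognormal body with a power-law (Pareto) tail for capital income, an exponential law for wages, and a Gaussian peak for social-insurance income. All mixture weights and parameters below are hypothetical and chosen only to make the shape of the composite distribution visible; they are not taken from the paper.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 100_000
      n_cap, n_wage, n_soc = int(0.10 * n), int(0.70 * n), int(0.20 * n)

      # 1. Capital income: lognormal body (Gibrat) plus a Pareto tail for large firms.
      cap_body = rng.lognormal(mean=10.5, sigma=0.8, size=int(0.9 * n_cap))
      cap_tail = 1.0e5 * (1.0 + rng.pareto(1.5, size=n_cap - int(0.9 * n_cap)))
      # 2. Wage income: exponential distribution.
      wages = rng.exponential(scale=30_000.0, size=n_wage)
      # 3. Social-insurance income: Gaussian peak.
      social = rng.normal(loc=12_000.0, scale=2_000.0, size=n_soc)

      income = np.concatenate([cap_body, cap_tail, wages, social])
      top1 = np.sort(income)[-len(income) // 100:]
      print(f"mean {income.mean():,.0f}, median {np.median(income):,.0f}, "
            f"top-1% share {top1.sum() / income.sum():.1%}")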

  9. Final Technical Report for SBIR entitled Four-Dimensional Finite-Orbit-Width Fokker-Planck Code with Sources, for Neoclassical/Anomalous Transport Simulation of Ion and Electron Distributions

    SciTech Connect

    Harvey, R. W.; Petrov, Yu. V.

    2013-12-03

    Within the US Department of Energy/Office of Fusion Energy magnetic fusion research program, there is an important whole-plasma-modeling need for a radio-frequency/neutral-beam-injection (RF/NBI) transport-oriented finite-difference Fokker-Planck (FP) code with combined capabilities for 4D (2R2V) geometry near the fusion plasma periphery, and computationally less demanding 3D (1R2V) bounce-averaged capabilities for plasma in the core of fusion devices. Proof-of-principle achievement of this goal was demonstrated in research carried out under Phase I of the SBIR award. Two DOE-sponsored codes were coupled: the CQL3D bounce-averaged Fokker-Planck code, in which CompX has specialized, and the COGENT 4D, plasma-edge-oriented Fokker-Planck code constructed by Lawrence Livermore National Laboratory and Lawrence Berkeley Laboratory scientists. Coupling was achieved by using CQL3D-calculated velocity distributions, including an energetic tail resulting from NBI, as boundary conditions for the COGENT code over the two-dimensional velocity space on a spatial interface (flux) surface at a given radius near the plasma periphery. The finite-orbit-width fast ions from the CQL3D distributions penetrated into the peripheral plasma modeled by the COGENT code. This combined code demonstrates the feasibility of the proposed 3D/4D code. By combining these codes, the greatest computational efficiency is achieved subject to present modeling needs in toroidally symmetric magnetic fusion devices. The more efficient 3D code can be used in its regions of applicability, coupled to the more computationally demanding 4D code in higher-collisionality edge plasma regions where that extended capability is necessary for accurate representation of the plasma. A more efficient code leads to greater use and utility of the model. An ancillary aim of the project is to make the combined 3D/4D code user friendly. Achievement of full coupling of these two Fokker

  10. Secret information reconciliation based on punctured low-density parity-check codes for continuous-variable quantum key distribution

    NASA Astrophysics Data System (ADS)

    Jiang, Xue-Qin; Huang, Peng; Huang, Duan; Lin, Dakai; Zeng, Guihua

    2017-02-01

    Achieving information theoretic security with practical complexity is of great interest to continuous-variable quantum key distribution in the postprocessing procedure. In this paper, we propose a reconciliation scheme based on punctured low-density parity-check (LDPC) codes. Compared to the well-known multidimensional reconciliation scheme, the present scheme has lower time complexity. Especially when the chosen punctured LDPC code achieves the Shannon capacity, the proposed reconciliation scheme can remove the information that has been leaked to an eavesdropper in the quantum transmission phase. Therefore, no information is leaked to the eavesdropper after the reconciliation stage. This indicates that the privacy amplification algorithm of the postprocessing procedure is no longer needed after the reconciliation process. These features lead to a higher secret key rate, optimal performance, and availability for the involved quantum key distribution scheme.

  11. Modeling Laboratory Astrophysics Experiments using the CRASH code

    NASA Astrophysics Data System (ADS)

    Trantham, Matthew; Drake, R. P.; Grosskopf, Michael; Bauerle, Matthew; Kruanz, Carolyn; Keiter, Paul; Malamud, Guy; Crash Team

    2013-10-01

    The understanding of high energy density systems can be advanced by laboratory astrophysics experiments. Computer simulations can assist in the design and analysis of these experiments. The Center for Radiative Shock Hydrodynamics (CRASH) at the University of Michigan developed a code that has been used to design and analyze high-energy-density experiments on OMEGA, NIF, and other large laser facilities. This Eulerian code uses block-adaptive mesh refinement (AMR) with implicit multigroup radiation transport and electron heat conduction. This poster/talk will demonstrate some of the experiments the CRASH code has helped design or analyze, including radiative shock experiments, Kelvin-Helmholtz experiments, Rayleigh-Taylor experiments, plasma sheets, and interacting jets. This work is funded by the Predictive Sciences Academic Alliances Program in NNSA-ASC via grant DE-FC52-08NA28616, by the NNSA-DS and SC-OFES Joint Program in High-Energy-Density Laboratory Plasmas, grant number DE-FG52-09NA29548, and by the National Laser User Facility Program, grant number DE-NA0000850.

  12. Recent developments in DYNSUB: New models, code optimization and parallelization

    SciTech Connect

    Daeubler, M.; Trost, N.; Jimenez, J.; Sanchez, V.

    2013-07-01

    DYNSUB is a high-fidelity coupled code system consisting of the reactor simulator DYN3D and the sub-channel code SUBCHANFLOW. It describes nuclear reactor core behavior with pin-by-pin resolution for both steady-state and transient scenarios. In the course of the coupled code system's active development, super-homogenization (SPH) and generalized equivalence theory (GET) discontinuity factors may be computed with and employed in DYNSUB to compensate for pin-level homogenization errors. Because of the greatly increased numerical problem size for pin-by-pin simulations, DYNSUB has benefitted from HPC techniques to improve its numerical performance. DYNSUB's coupling scheme has been structurally revised. Computational bottlenecks have been identified and parallelized for shared memory systems using OpenMP. Comparing the elapsed time for simulating a PWR core with one-eighth symmetry under hot zero power conditions with the original and the optimized DYNSUB using 8 cores, overall speed-up factors greater than 10 have been observed. The corresponding reduction in execution time enables routine application of DYNSUB to study pin-level safety parameters for engineering-sized cases in a scientific environment. (authors)

  13. Advanced Distribution Network Modelling with Distributed Energy Resources

    NASA Astrophysics Data System (ADS)

    O'Connell, Alison

    The addition of new distributed energy resources, such as electric vehicles, photovoltaics, and storage, to low voltage distribution networks means that these networks will undergo major changes in the future. Traditionally, distribution systems would have been a passive part of the wider power system, delivering electricity to the customer and not needing much control or management. However, the introduction of these new technologies may cause unforeseen issues for distribution networks, due to the fact that they were not considered when the networks were originally designed. This thesis examines different types of technologies that may begin to emerge on distribution systems, as well as the resulting challenges that they may impose. Three-phase models of distribution networks are developed and subsequently utilised as test cases. Various management strategies are devised for the purposes of controlling distributed resources from a distribution network perspective. The aim of the management strategies is to mitigate those issues that distributed resources may cause, while also keeping customers' preferences in mind. A rolling optimisation formulation is proposed as an operational tool which can manage distributed resources, while also accounting for the uncertainties that these resources may present. Network sensitivities for a particular feeder are extracted from a three-phase load flow methodology and incorporated into an optimisation. Electric vehicles are the focus of the work, although the method could be applied to other types of resources. The aim is to minimise the cost of electric vehicle charging over a 24-hour time horizon by controlling the charge rates and timings of the vehicles. The results demonstrate the advantage that controlled EV charging can have over an uncontrolled case, as well as the benefits provided by the rolling formulation and updated inputs in terms of cost and energy delivered to customers. Building upon the rolling optimisation, a
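
    The core of such a cost-minimising charge schedule reduces to a small linear program. The sketch below schedules a single vehicle over 24 hours against an hourly price curve, subject to an energy requirement, a charger rating, and an availability window; the network sensitivities and three-phase constraints described in the thesis are omitted, and every number is hypothetical.

      import numpy as np
      from scipy.optimize import linprog

      price = 0.10 + 0.08 * np.sin(np.linspace(0, 2 * np.pi, 24))   # $/kWh, hypothetical
      need_kwh, max_kw = 20.0, 7.0            # required energy and charger rating
      avail = np.zeros(24)                    # plugged in from 18:00 to 08:00
      avail[18:] = 1.0
      avail[:8] = 1.0

      # Decision variables: energy charged in each hour. Minimise total cost subject
      # to meeting the energy need and respecting the rating/availability bounds.
      res = linprog(c=price,
                    A_eq=[np.ones(24)], b_eq=[need_kwh],
                    bounds=[(0.0, max_kw * a) for a in avail])
      print(f"optimal cost: ${res.fun:.2f}")
      print("charge profile (kWh per hour):", np.round(res.x, 1))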

  14. A vectorized Monte Carlo code for modeling photon transport in SPECT

    SciTech Connect

    Smith, M.F. ); Floyd, C.E. Jr.; Jaszczak, R.J. Department of Radiology, Duke University Medical Center, Durham, North Carolina 27710 )

    1993-07-01

    A vectorized Monte Carlo computer code has been developed for modeling photon transport in single photon emission computed tomography (SPECT). The code models photon transport in a uniform attenuating region and photon detection by a gamma camera. It is adapted from a history-based Monte Carlo code in which photon history data are stored in scalar variables and photon histories are computed sequentially. The vectorized code is written in FORTRAN77 and uses an event-based algorithm in which photon history data are stored in arrays and photon history computations are performed within DO loops. The indices of the DO loops range over the number of photon histories, and these loops may take advantage of the vector processing unit of our Stellar GS1000 computer for pipelined computations. Without the use of the vector processor the event-based code is faster than the history-based code because of numerical optimization performed during conversion to the event-based algorithm. When only the detection of unscattered photons is modeled, the event-based code executes 5.1 times faster with the use of the vector processor than without; when the detection of scattered and unscattered photons is modeled the speed increase is a factor of 2.9. Vectorization is a valuable way to increase the performance of Monte Carlo code for modeling photon transport in SPECT.
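
    The contrast between the history-based and event-based styles can be seen in a toy problem: estimating the fraction of photons that cross a uniform attenuating slab without interacting. The sketch below (NumPy standing in for the vector processor) is not the SPECT code itself, just an illustration of why replacing the per-photon loop with array operations pays off.

      import numpy as np

      rng = np.random.default_rng(0)
      mu, thickness = 0.15, 10.0              # attenuation coefficient (1/cm), slab (cm)

      # History-based style: one photon at a time, scalar variables.
      n_hist, count = 100_000, 0
      for _ in range(n_hist):
          if rng.exponential(1.0 / mu) > thickness:   # free path carries it through
              count += 1
      print("history-based:", count / n_hist)

      # Event-based ("vectorized") style: all photon histories in one array operation.
      free_paths = rng.exponential(1.0 / mu, size=1_000_000)
      print("event-based  :", np.mean(free_paths > thickness))
      print("analytic     :", np.exp(-mu * thickness))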

  15. Hybrid Raman/Brillouin-optical-time-domain-analysis-distributed optical fiber sensors based on cyclic pulse coding.

    PubMed

    Taki, M; Signorini, A; Oton, C J; Nannipieri, T; Di Pasquale, F

    2013-10-15

    We experimentally demonstrate the use of cyclic pulse coding for distributed strain and temperature measurements in hybrid Raman/Brillouin optical time-domain analysis (BOTDA) optical fiber sensors. The highly integrated proposed solution effectively addresses the strain/temperature cross-sensitivity issue affecting standard BOTDA sensors, allowing for simultaneous meter-scale strain and temperature measurements over 10 km of standard single mode fiber using a single narrowband laser source only.

  16. Distributed Wind Diffusion Model Overview (Presentation)

    SciTech Connect

    Preus, R.; Drury, E.; Sigrin, B.; Gleason, M.

    2014-07-01

    Distributed wind market demand is driven by current and future wind price and performance, along with several non-price market factors like financing terms, retail electricity rates and rate structures, future wind incentives, and others. We developed a new distributed wind technology diffusion model for the contiguous United States that combines hourly wind speed data at 200m resolution with high resolution electricity load data for various consumer segments (e.g., residential, commercial, industrial), electricity rates and rate structures for utility service territories, incentive data, and high resolution tree cover. The model first calculates the economics of distributed wind at high spatial resolution for each market segment, and then uses a Bass diffusion framework to estimate the evolution of market demand over time. The model provides a fundamental new tool for characterizing how distributed wind market potential could be impacted by a range of future conditions, such as electricity price escalations, improvements in wind generator performance and installed cost, and new financing structures. This paper describes model methodology and presents sample results for distributed wind market potential in the contiguous U.S. through 2050.
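
    The Bass diffusion framework mentioned above has a convenient closed form for cumulative adoption, F(t) = (1 - e^(-(p+q)t)) / (1 + (q/p) e^(-(p+q)t)), with innovation coefficient p and imitation coefficient q. The sketch below evaluates it for hypothetical coefficients and market size; the report's model layers economics, geography, and market segmentation on top of this curve.

      import numpy as np

      def bass_cumulative(t, p, q, m):
          """Cumulative adopters at time t (Bass 1969): innovation coefficient p,
          imitation coefficient q, market potential m."""
          e = np.exp(-(p + q) * t)
          return m * (1 - e) / (1 + (q / p) * e)

      years = np.arange(0, 36)
      adopters = bass_cumulative(years, p=0.003, q=0.38, m=100_000)  # hypothetical values
      print(np.round(adopters[::5]).astype(int))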

  17. A convolutional code-based sequence analysis model and its application.

    PubMed

    Liu, Xiao; Geng, Xiaoli

    2013-04-16

    A new approach for encoding DNA sequences as input for DNA sequence analysis is proposed using the error correction coding theory of communication engineering. The encoder was designed as a convolutional code model whose generator matrix is designed based on the degeneracy of codons, with a codon treated in the model as an informational unit. The utility of the proposed model was demonstrated through the analysis of twelve prokaryote and nine eukaryote DNA sequences having different GC contents. Distinct differences in code distances were observed near the initiation and termination sites in the open reading frame, which provided a well-regulated characterization of the DNA sequences. Clearly distinguished period-3 features appeared in the coding regions, and the characteristic average code distances of the analyzed sequences were approximately proportional to their GC contents, particularly in the selected prokaryotic organisms, presenting the potential utility as an added taxonomic characteristic for use in studying the relationships of living organisms.
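
    For readers unfamiliar with the machinery, the sketch below shows a generic rate-1/2 binary convolutional encoder applied to a 2-bit-per-base mapping of a DNA string. The standard (7, 5) octal generators are used here only as a stand-in; the paper instead derives its generator matrix from codon degeneracy and treats a codon as the informational unit.

      # Map each base to two bits, then run a rate-1/2 convolutional encoder.
      BASE_BITS = {"A": (0, 0), "C": (0, 1), "G": (1, 0), "T": (1, 1)}
      G = [(1, 1, 1), (1, 0, 1)]              # generator taps (7 and 5 in octal)

      def conv_encode(bits):
          """Rate-1/2 binary convolutional encoder with a 2-bit shift register."""
          state = (0, 0)
          out = []
          for b in bits:
              window = (b,) + state           # current bit plus register contents
              for g in G:
                  out.append(sum(w & t for w, t in zip(window, g)) % 2)
              state = (b, state[0])           # shift the register
          return out

      seq = "ATGGCT"
      bits = [bit for base in seq for bit in BASE_BITS[base]]
      print(conv_encode(bits))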

  18. Stark effect modeling in the detailed opacity code SCO-RCG

    NASA Astrophysics Data System (ADS)

    Pain, J.-C.; Gilleron, F.; Gilles, D.

    2016-05-01

    The broadening of lines by the Stark effect is an important tool for inferring electron density and temperature in plasmas. Stark-effect calculations often rely on atomic data (transition rates, energy levels, ...) that are not always exhaustive and/or valid for isolated atoms. We present a recent development in the detailed opacity code SCO-RCG for K-shell spectroscopy (hydrogen- and helium-like ions). This approach is adapted from the work of Gilles and Peyrusse. Neglecting non-diagonal terms in the dipolar and collision operators, the line profile is expressed as a sum of Voigt functions associated with the Stark components. The formalism relies on the use of parabolic coordinates within SO(4) symmetry. The relativistic fine structure of Lyman lines is included by diagonalizing the Hamiltonian matrix associated with quantum states having the same principal quantum number n. The resulting code enables one to investigate plasma environment effects, the impact of the microfield distribution, the decoupling between electron and ion temperatures, and the role of satellite lines (such as the Li-like 1snℓn'ℓ' - 1s²nℓ transitions, Be-like lines, etc.). Comparisons with simpler and widely used semi-empirical models are presented.
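
    The profile construction described above (a weighted sum of Voigt functions over the Stark components) is easy to sketch with SciPy's voigt_profile. The component shifts, strengths, and widths below are hypothetical placeholders, not SCO-RCG output.

      import numpy as np
      from scipy.special import voigt_profile

      # Hypothetical Stark components: (shift from line centre, relative strength).
      components = [(-2.0, 0.25), (0.0, 0.50), (2.0, 0.25)]
      sigma, gamma = 0.4, 0.2                 # Gaussian and Lorentzian widths

      x = np.linspace(-8.0, 8.0, 401)
      profile = sum(s * voigt_profile(x - x0, sigma, gamma) for x0, s in components)
      dx = x[1] - x[0]
      print(f"peak {profile.max():.3f}, integrated area {profile.sum() * dx:.3f}")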

  19. Addressing Hate Speech and Hate Behaviors in Codes of Conduct: A Model for Public Institutions.

    ERIC Educational Resources Information Center

    Neiger, Jan Alan; Palmer, Carolyn; Penney, Sophie; Gehring, Donald D.

    1998-01-01

    As part of a larger study, researchers collected campus codes prohibiting hate crimes, which were then reviewed to determine whether the codes presented constitutional problems. Based on this review, the authors develop and present a model policy that is content neutral and does not use language that could be viewed as unconstitutionally vague or…

  1. Mathematical models and illustrative results for the RINGBEARER II monopole/dipole beam-propagation code

    SciTech Connect

    Chambers, F.W.; Masamitsu, J.A.; Lee, E.P.

    1982-05-24

    RINGBEARER II is a linearized monopole/dipole particle simulation code for studying intense relativistic electron beam propagation in gas. In this report the mathematical models utilized for beam particle dynamics and pinch field computation are delineated. Difficulties encountered in code operations and some remedies are discussed. Sample output is presented detailing the diagnostics and the methods of display and analysis utilized.

  2. Implementation of a kappa-epsilon turbulence model to RPLUS3D code

    NASA Astrophysics Data System (ADS)

    Chitsomboon, Tawit

    1992-02-01

    The RPLUS3D code has been developed at the NASA Lewis Research Center to support the National Aerospace Plane (NASP) project. The code has the ability to solve three-dimensional flowfields with finite-rate combustion of hydrogen and air. The combustion process of the hydrogen-air system is simulated by an 18-reaction-path, 8-species chemical kinetic mechanism. The code uses a Lower-Upper (LU) decomposition numerical algorithm as its basis, making it a very efficient and robust code. Except for the Jacobian matrix for the implicit chemistry source terms, there is no inversion of a matrix even though a fully implicit numerical algorithm is used. A k-epsilon turbulence model has recently been incorporated into the code. Initial validations have been conducted for a flow over a flat plate. Results of the validation studies are shown. Some difficulties in implementing the k-epsilon equations into the code are also discussed.

  3. Implementation of a kappa-epsilon turbulence model to RPLUS3D code

    NASA Technical Reports Server (NTRS)

    Chitsomboon, Tawit

    1992-01-01

    The RPLUS3D code has been developed at the NASA Lewis Research Center to support the National Aerospace Plane (NASP) project. The code has the ability to solve three-dimensional flowfields with finite-rate combustion of hydrogen and air. The combustion process of the hydrogen-air system is simulated by an 18-reaction-path, 8-species chemical kinetic mechanism. The code uses a Lower-Upper (LU) decomposition numerical algorithm as its basis, making it a very efficient and robust code. Except for the Jacobian matrix for the implicit chemistry source terms, there is no inversion of a matrix even though a fully implicit numerical algorithm is used. A k-epsilon turbulence model has recently been incorporated into the code. Initial validations have been conducted for a flow over a flat plate. Results of the validation studies are shown. Some difficulties in implementing the k-epsilon equations into the code are also discussed.

  4. PEBBLES: A COMPUTER CODE FOR MODELING PACKING, FLOW AND RECIRCULATION OF PEBBLES IN A PEBBLE BED REACTOR

    SciTech Connect

    Joshua J. Cogliati; Abderrafi M. Ougouag

    2006-10-01

    A comprehensive, high fidelity model for pebble flow has been developed and embodied in the PEBBLES computer code. In this paper, a description of the physical artifacts included in the model is presented and some results from using the computer code for predicting the features of pebble flow and packing in a realistic pebble bed reactor design are shown. The sensitivity of models to various physical parameters is also discussed.

  5. Real-time C Code Generation in Ptolemy II for the Giotto Model of Computation

    DTIC Science & Technology

    2009-05-20

    Shanna-Shaye Forbes, Electrical Engineering and Computer Sciences. ...periodic and there are multiple modes of operation. Ptolemy II is a university-based open source modeling and simulation framework that supports model

  6. Evaluation of Computational Codes for Underwater Hull Analysis Model Applications

    DTIC Science & Technology

    2014-02-05

    ...the file is text and could be created by the user, but the format is very exacting and difficult to get correct; this makes BEASY GiD very useful. Rhino3D... The user can manually write the text material data file, but it is exceedingly difficult to get the format precisely right. ...code that is an add-on to the software SolidWorks [8]. It runs on Windows on a laptop, desktop, or workstation. It is not portable to Macintosh or

  7. Modeling and planning distributed energy systems online

    NASA Astrophysics Data System (ADS)

    Wieler, Susana

    Sustainable energy is a core concern worldwide for the foreseeable future. Technologically, its key trends are distributed and renewable energy resources and smart grid capabilities. At the same time, a global need for sustainable energy is meeting increasingly diverse energy policy and economics. To plan with such complex contexts and systems, a novel distributed energy software tool and its initial implementation is presented: the Energy Systems Evaluator Online (ESEO). Its contributions include: (1) A flexible model framework that can simulate current and expected distributed energy systems; (2) An architecture specifying the modular design needed for distributed energy planning software in general; (3) A working implementation as the first general energy planning tool deployed via the Internet with collaborative capabilities.

  8. Income distribution: An adaptive heterogeneous model

    NASA Astrophysics Data System (ADS)

    da Silva, L. C.; de Figueirêdo, P. H.

    2014-02-01

    In this communication an adaptive process is introduced into a many-agent model of a closed economic system in order to establish general features of income distribution. In this new version, agents are able to modify their exchange parameter ωi of resources through an adaptive process. The conclusions indicate that assuming an instantaneous learning behavior of all agents reproduces a Γ-distribution for income, while a frozen behavior establishes a Pareto distribution for income with an exponent ν=0.94±0.02. A third case occurs when a heterogeneous “inertia” behavior is introduced, leading to a Γ-distribution in the low-income regime and a power-law decay for large income values with an exponent ν=2.05±0.05. This method enables investigation of the flux of resources in the economic environment and also produces bounding values for the Gini index comparable with empirical data.

  9. XSOR codes users manual

    SciTech Connect

    Jow, Hong-Nian; Murfin, W.B.; Johnson, J.D.

    1993-11-01

    This report describes the source term estimation codes, XSORs. The codes are written for three pressurized water reactors (Surry, Sequoyah, and Zion) and two boiling water reactors (Peach Bottom and Grand Gulf). The ensemble of codes has been named "XSOR". The purpose of the XSOR codes is to estimate the source terms which would be released to the atmosphere in severe accidents. A source term includes the release fractions of several radionuclide groups, the timing and duration of releases, the rates of energy release, and the elevation of releases. The codes have been developed by Sandia National Laboratories for the US Nuclear Regulatory Commission (NRC) in support of the NUREG-1150 program. The XSOR codes are fast-running parametric codes and are used as surrogates for detailed mechanistic codes. The XSOR codes also provide the capability to explore phenomena and their uncertainty which are not currently modeled by the mechanistic codes. The uncertainty distributions of input parameters may be used by an XSOR code to estimate the uncertainty of source terms.

  10. Hot Water Distribution System Model Enhancements

    SciTech Connect

    Hoeschele, M.; Weitzel, E.

    2012-11-01

    This project involves enhancement of the HWSIM distribution system model to more accurately model pipe heat transfer. Recent laboratory testing efforts have indicated that the modeling of radiant heat transfer effects is needed to accurately characterize piping heat loss. An analytical methodology for integrating radiant heat transfer was implemented with HWSIM. Laboratory test data collected in another project was then used to validate the model for a variety of uninsulated and insulated pipe cases (copper, PEX, and CPVC). Results appear favorable, with typical deviations from lab results less than 8%.

  12. Regional TEC model under quiet geomagnetic conditions and low-to-moderate solar activity based on CODE GIMs

    NASA Astrophysics Data System (ADS)

    Feng, Jiandi; Jiang, Weiping; Wang, Zhengtao; Zhao, Zhenzhen; Nie, Linjuan

    2017-08-01

    Global empirical total electron content (TEC) models based on TEC maps effectively describe the average behavior of the ionosphere. However, the accuracy of these global models for a given region may not be ideal. Because of the number and distribution of the International GNSS Service (IGS) stations, the accuracy of TEC maps varies geographically, and a modeling database derived from global TEC maps of varying accuracy is likely one of the main reasons the accuracy of new models is limited. Moreover, many anomalies in the ionosphere are geographically or geomagnetically dependent, so the accuracy of global models can deteriorate if these anomalies are not fully incorporated into the modeling approach. For regional models built over small areas, these influences on modeling are greatly weakened, so regional TEC models may better reflect the temporal and spatial variations of TEC. In our previous work (Feng et al., 2016), a regional TEC model, TECM-NEC, was proposed for northeast China. However, that model is directed only at the typical region of Mid-latitude Summer Nighttime Anomaly (MSNA) occurrence and is not meaningful in regions without MSNA. Following the technique of the TECM-NEC model, this study proposes another regional empirical TEC model for other mid-latitude regions. Taking the small Beijing-Tianjin-Tangshan (JJT) region (37.5°-42.5° N, 115°-120° E) in China as an example, a regional empirical TEC model (TECM-JJT) is proposed using TEC grid data from January 1, 1999 to June 30, 2015 provided by the Center for Orbit Determination in Europe (CODE) under quiet geomagnetic conditions. The TECM-JJT model fits the input CODE TEC data with a bias of 0.11 TECU and a root-mean-square error of 3.26 TECU. The results show that the regional model TECM-JJT is consistent with CODE TEC data and GPS-TEC data.

  13. A Computer Code for the Calculation of NLTE Model Atmospheres Using ALI

    NASA Astrophysics Data System (ADS)

    Kubát, J.

    2003-01-01

    A code for the calculation of NLTE model atmospheres in hydrostatic and radiative equilibrium, in either spherically symmetric or plane-parallel geometry, is described. The method of accelerated lambda iteration is used for the treatment of radiative transfer. The other equations (hydrostatic equilibrium, radiative equilibrium, statistical equilibrium, optical depth) are solved using the Newton-Raphson method (linearization). In addition to the standard output of the model atmosphere (dependence of temperature, density, radius, and population numbers on column mass depth), the code enables optional additional outputs for better understanding of processes in the atmosphere. The code is able to calculate model atmospheres of plane-parallel and spherically symmetric semi-infinite atmospheres as well as models of plane-parallel and spherical shells. There is also an option for solving a restricted problem of NLTE line formation (solution of radiative transfer and statistical equilibrium for a given model atmosphere). The overall scheme of the code is presented.

  14. Surveying Multidisciplinary Aspects in Real-Time Distributed Coding for Wireless Sensor Networks

    PubMed Central

    Braccini, Carlo; Davoli, Franco; Marchese, Mario; Mongelli, Maurizio

    2015-01-01

    Wireless Sensor Networks (WSNs), where a multiplicity of sensors observe a physical phenomenon and transmit their measurements to one or more sinks, pertain to the class of multi-terminal source and channel coding problems of Information Theory. In this category, “real-time” coding is often encountered for WSNs, referring to the problem of finding the minimum distortion (according to a given measure), under transmission power constraints, attainable by encoding and decoding functions, with stringent limits on delay and complexity. On the other hand, the Decision Theory approach seeks to determine the optimal coding/decoding strategies or some of their structural properties. Since encoder(s) and decoder(s) possess different information, though sharing a common goal, the setting here is that of Team Decision Theory. A more pragmatic vision rooted in Signal Processing consists of fixing the form of the coding strategies (e.g., to linear functions) and, consequently, finding the corresponding optimal decoding strategies and the achievable distortion, generally by applying parametric optimization techniques. All approaches have a long history of past investigations and recent results. The goal of the present paper is to provide the taxonomy of the various formulations, a survey of the vast related literature, examples from the authors' own research, and some highlights on the inter-play of the different theories. PMID:25633597

  15. Surveying multidisciplinary aspects in real-time distributed coding for Wireless Sensor Networks.

    PubMed

    Braccini, Carlo; Davoli, Franco; Marchese, Mario; Mongelli, Maurizio

    2015-01-27

    Wireless Sensor Networks (WSNs), where a multiplicity of sensors observe a physical phenomenon and transmit their measurements to one or more sinks, pertain to the class of multi-terminal source and channel coding problems of Information Theory. In this category, "real-time" coding is often encountered for WSNs, referring to the problem of finding the minimum distortion (according to a given measure), under transmission power constraints, attainable by encoding and decoding functions, with stringent limits on delay and complexity. On the other hand, the Decision Theory approach seeks to determine the optimal coding/decoding strategies or some of their structural properties. Since encoder(s) and decoder(s) possess different information, though sharing a common goal, the setting here is that of Team Decision Theory. A more pragmatic vision rooted in Signal Processing consists of fixing the form of the coding strategies (e.g., to linear functions) and, consequently, finding the corresponding optimal decoding strategies and the achievable distortion, generally by applying parametric optimization techniques. All approaches have a long history of past investigations and recent results. The goal of the present paper is to provide the taxonomy of the various formulations, a survey of the vast related literature, examples from the authors' own research, and some highlights on the inter-play of the different theories.
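
    The pragmatic Signal Processing approach described above can be made concrete in its simplest instance: a scalar Gaussian source sent over an additive white Gaussian noise channel through a linear encoder under a power constraint, with the matching linear MMSE decoder. All parameter values below are illustrative:

    ```python
    import numpy as np

    sx2, sn2, P = 1.0, 0.5, 2.0            # source/noise variances, power budget
    a = np.sqrt(P / sx2)                   # linear encoder y = a*x at full power
    g = a * sx2 / (a**2 * sx2 + sn2)       # linear MMSE decoder xhat = g*(a*x + n)
    D = sx2 - g * a * sx2                  # resulting mean-square distortion
    print(f"distortion D = {D:.4f}")       # D = sx2*sn2 / (a**2*sx2 + sn2)
    ```

    In this scalar Gaussian case the linear (uncoded) strategy happens to be optimal; in vector and multi-sensor settings it is generally only a tractable approximation, which is what makes the parametric optimization viewpoint attractive.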

  16. A model for non-monotonic intensity coding

    PubMed Central

    Nehrkorn, Johannes; Tanimoto, Hiromu; Herz, Andreas V. M.; Yarali, Ayse

    2015-01-01

    Peripheral neurons of most sensory systems increase their response with increasing stimulus intensity. Behavioural responses, however, can be specific to some intermediate intensity level whose particular value might be innate or associatively learned. Learning such a preference requires an adjustable transformation from a monotonic stimulus representation at the sensory periphery to a non-monotonic representation for the motor command. How do neural systems accomplish this task? We tackle this general question focusing on odour-intensity learning in the fruit fly, whose first- and second-order olfactory neurons show monotonic stimulus–response curves. Nevertheless, flies form associative memories specific to particular trained odour intensities. Thus, downstream of the first two olfactory processing layers, odour intensity must be re-coded to enable intensity-specific associative learning. We present a minimal, feed-forward, three-layer circuit, which implements the required transformation by combining excitation, inhibition, and, as a decisive third element, homeostatic plasticity. Key features of this circuit motif are consistent with the known architecture and physiology of the fly olfactory system, whereas alternative mechanisms are either not composed of simple, scalable building blocks or not compatible with physiological observations. The simplicity of the circuit and the robustness of its function under parameter changes make this computational motif an attractive candidate for tuneable non-monotonic intensity coding. PMID:26064666
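
    The combination of the first two elements — monotonic excitation and a later-rising monotonic inhibition whose difference is bell-shaped in intensity — can be sketched in a few lines (parameter values are illustrative, and the homeostatic plasticity that makes the preferred intensity adjustable is omitted):

    ```python
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    intensity = np.linspace(0, 10, 200)            # stimulus intensity axis
    excitation = sigmoid(2.0 * (intensity - 3.0))  # earlier-rising excitatory drive
    inhibition = sigmoid(2.0 * (intensity - 5.0))  # later-rising inhibitory drive
    response = np.clip(excitation - inhibition, 0.0, None)  # non-monotonic output

    print(f"preferred intensity ~ {intensity[np.argmax(response)]:.1f}")
    ```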

  17. Pattern-based video coding with dynamic background modeling

    NASA Astrophysics Data System (ADS)

    Paul, Manoranjan; Lin, Weisi; Lau, Chiew Tong; Lee, Bu-Sung

    2013-12-01

    The existing video coding standard H.264 does not provide the expected rate-distortion (RD) performance for macroblocks (MBs) that contain both moving objects and static background, or for MBs with uncovered background (previously occluded regions). The pattern-based video coding (PVC) technique partially addresses the first problem by separating and encoding the moving area and skipping the background area at block level using binary pattern templates. However, existing PVC schemes cannot outperform H.264 by a significant margin at high bit rates because only a small number of MBs are classified into the pattern mode. Moreover, neither H.264 nor the PVC scheme provides the expected RD performance for uncovered background areas, because the reference areas are unavailable in the existing approaches. In this paper, we propose a new PVC technique that uses the most common frame in a scene (McFIS) as a reference frame to overcome these problems. Apart from using the McFIS as a reference frame, we also introduce a content-dependent pattern generation strategy for better RD performance. The experimental results confirm the superiority of the proposed schemes over the existing PVC and McFIS-based methods, achieving significant image quality gains over a wide range of bit rates.

  18. Higher-order ionosphere modeling for CODE's next reprocessing activities

    NASA Astrophysics Data System (ADS)

    Lutz, S.; Schaer, S.; Meindl, M.; Dach, R.; Steigenberger, P.

    2009-12-01

    CODE (the Center for Orbit Determination in Europe) is a joint venture between the Astronomical Institute of the University of Bern (AIUB, Bern, Switzerland), the Federal Office of Topography (swisstopo, Wabern, Switzerland), the Federal Agency for Cartography and Geodesy (BKG, Frankfurt am Main, Germany), and the Institut für Astronomische und Physikalische Geodäsie of the Technische Universität München (IAPG/TUM, Munich, Germany). It acts as one of the global analysis centers of the International GNSS Service (IGS) and participates in the first IGS reprocessing campaign, a full reanalysis of GPS data collected since 1994. For a future reanalysis of the IGS data it is planned to consider not only first-order but also higher-order ionosphere terms in the space geodetic observations. Several studies (e.g., Fritsche et al. 2005) have shown a significant and systematic influence of these effects on the analysis results. The development version of the Bernese Software used at CODE has been extended with the ability to assign additional (scaling) parameters to each considered higher-order ionosphere term. In this way, each correction term can be switched on and off at normal-equation level and, moreover, the significance of each correction term may be verified at observation level for different ionosphere conditions.

  19. Programming model for distributed intelligent systems

    NASA Technical Reports Server (NTRS)

    Sztipanovits, J.; Biegl, C.; Karsai, G.; Bogunovic, N.; Purves, B.; Williams, R.; Christiansen, T.

    1988-01-01

    A programming model and architecture which was developed for the design and implementation of complex, heterogeneous measurement and control systems is described. The Multigraph Architecture integrates artificial intelligence techniques with conventional software technologies, offers a unified framework for distributed and shared memory based parallel computational models and supports multiple programming paradigms. The system can be implemented on different hardware architectures and can be adapted to strongly different applications.

  20. Modeling risk in distributed healthcare information systems.

    PubMed

    Maglogiannis, Ilias; Zafiropoulos, Elias

    2006-01-01

    This paper presents a modeling approach for performing a risk analysis study of networked healthcare information systems. The proposed method is based on CRAMM for studying the assets, threats and vulnerabilities of the distributed information system, and models their interrelationships using Bayesian networks. The most critical events are identified and prioritized, based on "what-if" studies of system operation. The proposed risk analysis framework has been applied to a healthcare information network operating in the North Aegean Region in Greece.
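
    The flavour of such a "what-if" study can be conveyed with a deliberately tiny network (two nodes and made-up probabilities, not figures from the paper):

    ```python
    # Toy risk model: a threat event T and a system failure F with a
    # conditional probability table P(F | T). All numbers are illustrative.
    p_threat = 0.05
    p_fail_given = {True: 0.60, False: 0.02}

    # Marginal probability of failure, enumerating over the parent node
    p_fail = sum((p_threat if t else 1 - p_threat) * p_fail_given[t]
                 for t in (True, False))

    # "What-if" query via Bayes' rule: how likely was the threat, given a failure?
    p_threat_given_fail = p_threat * p_fail_given[True] / p_fail
    print(f"P(fail) = {p_fail:.3f}, P(threat | fail) = {p_threat_given_fail:.3f}")
    ```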

  1. Generative models for discovering sparse distributed representations.

    PubMed

    Hinton, G E; Ghahramani, Z

    1997-08-29

    We describe a hierarchical, generative model that can be viewed as a nonlinear generalization of factor analysis and can be implemented in a neural network. The model uses bottom-up, top-down and lateral connections to perform Bayesian perceptual inference correctly. Once perceptual inference has been performed the connection strengths can be updated using a very simple learning rule that only requires locally available information. We demonstrate that the network learns to extract sparse, distributed, hierarchical representations.

  2. Modeling utilization distributions in space and time

    USGS Publications Warehouse

    Keating, K.A.; Cherry, S.

    2009-01-01

    W. Van Winkle defined the utilization distribution (UD) as a probability density that gives an animal's relative frequency of occurrence in a two-dimensional (x, y) plane. We extend Van Winkle's work by redefining the UD as the relative frequency distribution of an animal's occurrence in all four dimensions of space and time. We then describe a product kernel model estimation method, devising a novel kernel from the wrapped Cauchy distribution to handle circularly distributed temporal covariates, such as day of year. Using Monte Carlo simulations of animal movements in space and time, we assess estimator performance. Although not unbiased, the product kernel method yields models highly correlated (Pearson's r = 0.975) with true probabilities of occurrence and successfully captures temporal variations in density of occurrence. In an empirical example, we estimate the expected UD in three dimensions (x, y, and t) for animals belonging to each of two distinct bighorn sheep (Ovis canadensis) social groups in Glacier National Park, Montana, USA. Results show the method can yield ecologically informative models that successfully depict temporal variations in density of occurrence for a seasonally migratory species. Some implications of this new approach to UD modeling are discussed. © 2009 by the Ecological Society of America.

  3. Modeling utilization distributions in space and time.

    PubMed

    Keating, Kim A; Cherry, Steve

    2009-07-01

    W. Van Winkle defined the utilization distribution (UD) as a probability density that gives an animal's relative frequency of occurrence in a two-dimensional (x, y) plane. We extend Van Winkle's work by redefining the UD as the relative frequency distribution of an animal's occurrence in all four dimensions of space and time. We then describe a product kernel model estimation method, devising a novel kernel from the wrapped Cauchy distribution to handle circularly distributed temporal covariates, such as day of year. Using Monte Carlo simulations of animal movements in space and time, we assess estimator performance. Although not unbiased, the product kernel method yields models highly correlated (Pearson's r = 0.975) with true probabilities of occurrence and successfully captures temporal variations in density of occurrence. In an empirical example, we estimate the expected UD in three dimensions (x, y, and t) for animals belonging to each of two distinct bighorn sheep (Ovis canadensis) social groups in Glacier National Park, Montana, USA. Results show the method can yield ecologically informative models that successfully depict temporal variations in density of occurrence for a seasonally migratory species. Some implications of this new approach to UD modeling are discussed.
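
    A minimal sketch of such a product kernel estimator, assuming Gaussian kernels for the planar coordinates and a wrapped Cauchy kernel for the circular day-of-year covariate (the bandwidths hx, hy and concentration rho are hypothetical tuning choices):

    ```python
    import numpy as np

    def wrapped_cauchy(theta, mu, rho):
        """Wrapped Cauchy density on the circle (angles in radians)."""
        return (1 - rho**2) / (2 * np.pi * (1 + rho**2 - 2 * rho * np.cos(theta - mu)))

    def ud_estimate(x, y, t, xs, ys, ts, hx, hy, rho, period=365.25):
        """Product-kernel UD at (x, y, day-of-year t) from locations (xs, ys, ts)."""
        ang, angs = 2 * np.pi * t / period, 2 * np.pi * ts / period
        kx = np.exp(-0.5 * ((x - xs) / hx) ** 2) / (hx * np.sqrt(2 * np.pi))
        ky = np.exp(-0.5 * ((y - ys) / hy) ** 2) / (hy * np.sqrt(2 * np.pi))
        kt = wrapped_cauchy(ang, angs, rho)          # circular kernel in time
        return np.mean(kx * ky * kt)                 # average over observations
    ```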

  4. Dysregulation of REST-regulated coding and non-coding RNAs in a cellular model of Huntington's disease.

    PubMed

    Soldati, Chiara; Bithell, Angela; Johnston, Caroline; Wong, Kee-Yew; Stanton, Lawrence W; Buckley, Noel J

    2013-02-01

    Huntingtin (Htt) protein interacts with many transcriptional regulators, with widespread disruption to the transcriptome in Huntington's disease (HD) brought about by altered interactions with the mutant Htt (muHtt) protein. Repressor Element-1 Silencing Transcription Factor (REST) is a repressor whose association with Htt in the cytoplasm is disrupted in HD, leading to increased nuclear REST and concomitant repression of several neuronal-specific genes, including brain-derived neurotrophic factor (Bdnf). Here, we explored a wide set of HD dysregulated genes to identify direct REST targets whose expression is altered in a cellular model of HD but that can be rescued by knock-down of REST activity. We found many direct REST target genes encoding proteins important for nervous system development, including a cohort involved in synaptic transmission, at least two of which can be rescued at the protein level by REST knock-down. We also identified several microRNAs (miRNAs) whose aberrant repression is directly mediated by REST, including miR-137, which has not previously been shown to be a direct REST target in mouse. These data provide evidence of the contribution of inappropriate REST-mediated transcriptional repression to the widespread changes in coding and non-coding gene expression in a cellular model of HD that may affect normal neuronal function and survival.

  5. Aerosol Behavior Log-Normal Distribution Model.

    SciTech Connect

    GIESEKE, J. A.

    2001-10-22

    HAARM3, an acronym for Heterogeneous Aerosol Agglomeration Revised Model 3, is the third program in the HAARM series developed to predict the time-dependent behavior of radioactive aerosols under postulated LMFBR accident conditions. HAARM3 was developed to include mechanisms of aerosol growth and removal which had not been accounted for in the earlier models. In addition, experimental measurements obtained on sodium oxide aerosols have been incorporated in the code. As in HAARM2, containment gas temperature, pressure, and temperature gradients normal to interior surfaces are permitted to vary with time. The effects of reduced density on sodium oxide agglomerate behavior and of nonspherical shape of particles on aerosol behavior mechanisms are taken into account, and aerosol agglomeration due to turbulent air motion is considered. Also included is a capability to calculate aerosol concentration attenuation factors and to restart problems requiring long computing times.
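
    A log-normal size distribution of the kind such models evolve is fully specified by a count median diameter (CMD) and a geometric standard deviation (GSD); a small sketch with illustrative values, including the Hatch-Choate conversion to the mass median diameter:

    ```python
    import numpy as np

    cmd, gsd = 0.3e-6, 1.8        # count median diameter (m) and GSD, illustrative

    def dn_dlnd(d):
        """Number distribution dN/dlnD for unit total number concentration."""
        return (np.exp(-0.5 * (np.log(d / cmd) / np.log(gsd)) ** 2)
                / (np.sqrt(2.0 * np.pi) * np.log(gsd)))

    # Hatch-Choate: mass median diameter from the count median diameter
    mmd = cmd * np.exp(3.0 * np.log(gsd) ** 2)
    print(f"MMD = {mmd * 1e6:.3f} um")
    ```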

  6. Code interoperability and standard data formats in quantum chemistry and quantum dynamics: The Q5/D5Cost data model.

    PubMed

    Rossi, Elda; Evangelisti, Stefano; Laganà, Antonio; Monari, Antonio; Rampino, Sergio; Verdicchio, Marco; Baldridge, Kim K; Bendazzoli, Gian Luigi; Borini, Stefano; Cimiraglia, Renzo; Angeli, Celestino; Kallay, Peter; Lüthi, Hans P; Ruud, Kenneth; Sanchez-Marin, José; Scemama, Anthony; Szalay, Peter G; Tajti, Attila

    2014-03-30

    Code interoperability and the search for domain-specific standard data formats represent critical issues in many areas of computational science. The advent of novel computing infrastructures such as computational grids and clouds make these issues even more urgent. The design and implementation of a common data format for quantum chemistry (QC) and quantum dynamics (QD) computer programs is discussed with reference to the research performed in the course of two Collaboration in Science and Technology Actions. The specific data models adopted, Q5Cost and D5Cost, are shown to work for a number of interoperating codes, regardless of the type and amount of information (small or large datasets) to be exchanged. The codes are either interfaced directly, or transfer data by means of wrappers; both types of data exchange are supported by the Q5/D5Cost library. Further, the exchange of data between QC and QD codes is addressed. As a proof of concept, the H + H2 reaction is discussed. The proposed scheme is shown to provide an excellent basis for cooperative code development, even across domain boundaries. Moreover, the scheme presented is found to be useful also as a production tool in the grid distributed computing environment. Copyright © 2013 Wiley Periodicals, Inc.

  7. Turbine Internal and Film Cooling Modeling For 3D Navier-Stokes Codes

    NASA Technical Reports Server (NTRS)

    DeWitt, Kenneth; Garg, Vijay; Ameri, Ali

    2005-01-01

    The aim of this research project is to make use of NASA Glenn on-site computational facilities in order to develop, validate and apply aerodynamic, heat transfer, and turbine cooling models for use in advanced 3D Navier-Stokes Computational Fluid Dynamics (CFD) codes such as the Glenn-HT code. Specific areas of effort include: Application of the Glenn-HT code to specific configurations made available under Turbine Based Combined Cycle (TBCC), and Ultra Efficient Engine Technology (UEET) projects. Validating the use of a multi-block code for the time accurate computation of the detailed flow and heat transfer of cooled turbine airfoils. The goal of the current research is to improve the predictive ability of the Glenn-HT code. This will enable one to design more efficient turbine components for both aviation and power generation. The models will be tested against specific configurations provided by NASA Glenn.

  8. Joint physical and numerical modeling of water distribution networks.

    SciTech Connect

    Zimmerman, Adam; O'Hern, Timothy John; Orear, Leslie Jr.; Kajder, Karen C.; Webb, Stephen Walter; Cappelle, Malynda A.; Khalsa, Siri Sahib; Wright, Jerome L.; Sun, Amy Cha-Tien; Chwirka, J. Benjamin; Hartenberger, Joel David; McKenna, Sean Andrew; van Bloemen Waanders, Bart Gustaaf; McGrath, Lucas K.; Ho, Clifford Kuofei

    2009-01-01

    This report summarizes the experimental and modeling effort undertaken to understand solute mixing in a water distribution network conducted during the last year of a 3-year project. The experimental effort involves measurement of extent of mixing within different configurations of pipe networks, measurement of dynamic mixing in a single mixing tank, and measurement of dynamic solute mixing in a combined network-tank configuration. High resolution analysis of turbulence mixing is carried out via high speed photography as well as 3D finite-volume based Large Eddy Simulation turbulence models. Macroscopic mixing rules based on flow momentum balance are also explored, and in some cases, implemented in EPANET. A new version of the EPANET code was developed to yield better mixing predictions. The impact of a storage tank on pipe mixing in a combined pipe-tank network during diurnal fill-and-drain cycles is assessed. Preliminary comparison between dynamic pilot data and EPANET-BAM is also reported.

  9. Emergence of Visual Saliency from Natural Scenes via Context-Mediated Probability Distributions Coding

    PubMed Central

    Xu, Jinhua; Yang, Zhiyong; Tsien, Joe Z.

    2010-01-01

    Visual saliency is the perceptual quality that makes some items in visual scenes stand out from their immediate contexts. Visual saliency plays important roles in natural vision in that saliency can direct eye movements, deploy attention, and facilitate tasks like object detection and scene understanding. A central unsolved issue is: What features should be encoded in the early visual cortex for detecting salient features in natural scenes? To explore this important issue, we propose a hypothesis that visual saliency is based on efficient encoding of the probability distributions (PDs) of visual variables in specific contexts in natural scenes, referred to as context-mediated PDs in natural scenes. In this concept, computational units in the model of the early visual system do not act as feature detectors but rather as estimators of the context-mediated PDs of a full range of visual variables in natural scenes, which directly give rise to a measure of visual saliency of any input stimulus. To test this hypothesis, we developed a model of the context-mediated PDs in natural scenes using a modified algorithm for independent component analysis (ICA) and derived a measure of visual saliency based on these PDs estimated from a set of natural scenes. We demonstrated that visual saliency based on the context-mediated PDs in natural scenes effectively predicts human gaze in free-viewing of both static and dynamic natural scenes. This study suggests that the computation based on the context-mediated PDs of visual variables in natural scenes may underlie the neural mechanism in the early visual cortex for detecting salient features in natural scenes. PMID:21209963
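
    A rough sketch of the underlying idea — using off-the-shelf FastICA rather than the authors' modified ICA algorithm, and random stand-ins for natural-image patches — is to estimate each component's marginal distribution from a patch ensemble and score a stimulus by its improbability under those distributions:

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    patches = rng.laplace(size=(5000, 64))        # stand-in for 8x8 image patches

    ica = FastICA(n_components=32, random_state=0, max_iter=1000)
    S = ica.fit_transform(patches)                # component responses per patch

    # Histogram estimates of each component's probability distribution
    hists = [np.histogram(S[:, i], bins=50, density=True) for i in range(S.shape[1])]

    def saliency(patch):
        s = ica.transform(patch.reshape(1, -1))[0]
        logp = 0.0
        for (h, edges), si in zip(hists, s):      # assume independent components
            j = int(np.clip(np.searchsorted(edges, si) - 1, 0, len(h) - 1))
            logp += np.log(h[j] + 1e-12)
        return -logp                              # improbable patches are salient

    print(saliency(patches[0]))
    ```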

  10. An Advanced simulation Code for Modeling Inductive Output Tubes

    SciTech Connect

    Thuc Bui; R. Lawrence Ives

    2012-04-27

    During the Phase I program, CCR completed several major building blocks for a 3D large signal, inductive output tube (IOT) code using modern computer language and programming techniques. These included a 3D, Helmholtz, time-harmonic field solver with a fully functional graphical user interface (GUI), automeshing, and adaptivity. Other building blocks included the improved electrostatic Poisson solver with temporal boundary conditions to provide temporal fields for the time-stepping particle pusher, as well as the self electric field caused by time-varying space charge. The magnetostatic field solver was also updated to solve for the self magnetic field caused by time-changing current density in the output cavity gap. The goal function to optimize an IOT cavity was also formulated, and the optimization methodologies were investigated.

  11. Modeling of Ionization Physics with the PIC Code OSIRIS

    SciTech Connect

    Deng, S.; Tsung, F.; Lee, S.; Lu, W.; Mori, W.B.; Katsouleas, T.; Muggli, P.; Blue, B.E.; Clayton, C.E.; O'Connell, C.; Dodd, E.; Decker, F.J.; Huang, C.; Hogan, M.J.; Hemker, R.; Iverson, R.H.; Joshi, C.; Ren, C.; Raimondi, P.; Wang, S.; Walz, D.; /Southern California U. /UCLA /SLAC

    2005-09-27

    When considering intense particle or laser beams propagating in dense plasma or gas, ionization plays an important role. Impact ionization and tunnel ionization may create new plasma electrons, altering the physics of wakefield accelerators, causing blue shifts in laser spectra, creating and modifying instabilities, etc. Here we describe the addition of an impact ionization package into the 3-D, object-oriented, fully parallel PIC code OSIRIS. We apply the simulation tool to simulate the parameters of the upcoming E164 Plasma Wakefield Accelerator experiment at the Stanford Linear Accelerator Center (SLAC). We find that impact ionization is dominated by the plasma electrons moving in the wake rather than the 30 GeV drive beam electrons. Impact ionization leads to a significant number of trapped electrons accelerated from rest in the wake.

  12. Field-based tests of geochemical modeling codes: New Zealand hydrothermal systems

    SciTech Connect

    Bruton, C.J.; Glassley, W.E.; Bourcier, W.L.

    1993-12-01

    Hydrothermal systems in the Taupo Volcanic Zone, North Island, New Zealand are being used as field-based modeling exercises for the EQ3/6 geochemical modeling code package. Comparisons of the observed state and evolution of the hydrothermal systems with predictions of fluid-solid equilibria made using geochemical modeling codes will determine how the codes can be used to predict the chemical and mineralogical response of the environment to nuclear waste emplacement. Field-based exercises allow us to test the models on time scales unattainable in the laboratory. Preliminary predictions of mineral assemblages in equilibrium with fluids sampled from wells in the Wairakei and Kawerau geothermal fields suggest that affinity-temperature diagrams must be used in conjunction with EQ6 to minimize the effect of uncertainties in thermodynamic and kinetic data on code predictions.

  13. Field-based tests of geochemical modeling codes using New Zealand hydrothermal systems

    SciTech Connect

    Bruton, C.J.; Glassley, W.E.; Bourcier, W.L.

    1994-06-01

    Hydrothermal systems in the Taupo Volcanic Zone, North Island, New Zealand are being used as field-based modeling exercises for the EQ3/6 geochemical modeling code package. Comparisons of the observed state and evolution of the hydrothermal systems with predictions of fluid-solid equilibria made using geochemical modeling codes will determine how the codes can be used to predict the chemical and mineralogical response of the environment to nuclear waste emplacement. Field-based exercises allow us to test the models on time scales unattainable in the laboratory. Preliminary predictions of mineral assemblages in equilibrium with fluids sampled from wells in the Wairakei and Kawerau geothermal fields suggest that affinity-temperature diagrams must be used in conjunction with EQ6 to minimize the effect of uncertainties in thermodynamic and kinetic data on code predictions.

  14. Documentation for grants equal to tax model: Volume 3, Source code

    SciTech Connect

    Boryczka, M.K.

    1986-01-01

    The GETT model is capable of forecasting the amount of tax liability associated with all property owned and all activities undertaken by the US Department of Energy (DOE) in site characterization and repository development. The GETT program is a user-friendly, menu-driven model developed using dBASE III™, a relational data base management system. The data base for GETT consists primarily of eight separate dBASE III™ files corresponding to each of the eight taxes (real property, personal property, corporate income, franchise, sales, use, severance, and excise) levied by State and local jurisdictions on business property and activity. Additional smaller files help to control model inputs and reporting options. Volume 3 of the GETT model documentation is the source code. The code is arranged primarily by the eight tax types. Other code files include those for JURISDICTION, SIMULATION, VALIDATION, TAXES, CHANGES, REPORTS, GILOT, and GETT. The code has been verified through hand calculations.

  15. Modeling Mosquito Distribution. Impact of the Landscape

    NASA Astrophysics Data System (ADS)

    Dumont, Y.

    2011-09-01

    In order to use vector control tools, like insecticides and mechanical control, efficiently, it is necessary to provide estimates of mosquito density and distribution, taking into account the environment and entomological knowledge. Mosquito dispersal modeling, together with a compartmental approach, leads to a quasilinear parabolic system. Using the time splitting approach and appropriate numerical methods for each operator, we construct a reliable numerical scheme. Considering various landscapes, we show that the environment can have a strong influence on mosquito distribution and, thus, on the efficiency of vector control.
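
    A minimal sketch of the time-splitting idea for a single compartment: alternate an explicit diffusion step with a local logistic-growth step whose carrying capacity encodes the landscape. The grid, coefficients, and habitat map below are hypothetical, and periodic boundaries (via np.roll) are used purely for brevity:

    ```python
    import numpy as np

    nx = ny = 100
    dx, dt, D, r = 1.0, 0.1, 0.5, 0.2                  # grid step, time step, diffusion, growth
    K = np.ones((nx, ny)); K[:, :ny // 2] = 0.2        # landscape: poor habitat on one side
    u = np.zeros((nx, ny)); u[nx // 2, ny // 2] = 1.0  # initial mosquito density

    for _ in range(1000):
        # Step 1: diffusion operator (5-point Laplacian, explicit Euler)
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u) / dx**2
        u = u + dt * D * lap
        # Step 2: reaction operator (logistic growth toward capacity K)
        u = u + dt * r * u * (1 - u / K)
    ```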

  16. Thrust Chamber Modeling Using Navier-Stokes Equations: Code Documentation and Listings. Volume 2

    NASA Technical Reports Server (NTRS)

    Daley, P. L.; Owens, S. F.

    1988-01-01

    A copy of the PHOENICS input files and FORTRAN code developed for the modeling of thrust chambers is given. These copies are contained in the Appendices. The listings are contained in Appendices A through E. Appendix A describes the input statements relevant to thrust chamber modeling as well as the FORTRAN code developed for the Satellite program. Appendix B describes the FORTRAN code developed for the Ground program. Appendices C through E contain copies of the Q1 (input) file, the Satellite program, and the Ground program respectively.

  17. CURRENT - A Computer Code for Modeling Two-Dimensional, Chemically Reacting, Low Mach Number Flows

    SciTech Connect

    Winters, W.S.; Evans, G.H.; Moen, C.D.

    1996-10-01

    This report documents CURRENT, a computer code for modeling two-dimensional, chemically reacting, low Mach number flows including the effects of surface chemistry. CURRENT is a finite volume code based on the SIMPLER algorithm. Additional convergence acceleration for low Peclet number flows is provided using improved boundary condition coupling and preconditioned gradient methods. Gas-phase and surface chemistry is modeled using the CHEMKIN software libraries. The CURRENT user-interface has been designed to be compatible with the Sandia-developed mesh generator and post processor ANTIPASTO and the post processor TECPLOT. This report describes the theory behind the code and also serves as a user's manual.

  18. GAMMA: a code for the analysis of component failure rates with a compound Poisson-gamma model. Final technical report

    SciTech Connect

    Shultis, J.K.; Johnson, D.E.; Milliken, G.A.; Eckhoff, N.D.

    1981-12-01

    The theory is summarized for the homogeneous Poisson and the compound gamma-Poisson probability models which can be used to analyze failure rate attribute data consisting of the number of failures in specified test times for normally operating components or systems. A computer code based on this theory is described, and instructions for its use together with a sample problem and a complete code listing are presented. For the compound model, used in a Bayesian analysis of failure rate data, values of the parameters for the prior gamma distribution, chosen a priori, are estimated from observed failure data by three methods: (1) matching the data moments to those of the prior distribution, (2) matching the data moments to those of the marginal distribution, and (3) the marginal maximum likelihood method. Many program options are available including variance estimates for the prior parameter estimators, a posteriori analyses for each component, various statistical comparisons between the homogeneous and compound models, and generalized chi-square and Kolmogorov-Smirnov goodness-of-fit tests for determining how well the failure models describe the observed data.
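
    The first of the three methods, matching the data moments to those of the prior distribution, can be sketched in a few lines with hypothetical failure counts and test times; note that this simple variant ignores the Poisson sampling noise, which is what the marginal-moment and marginal maximum likelihood alternatives account for:

    ```python
    import numpy as np

    k = np.array([2, 0, 5, 1, 3, 4])                       # failures, hypothetical
    t = np.array([1000., 800., 1200., 500., 900., 1100.])  # test times (h)

    rates = k / t                      # crude per-component failure rates
    m, v = rates.mean(), rates.var(ddof=1)
    beta = m / v                       # gamma(alpha, beta): mean m = alpha/beta,
    alpha = m * beta                   # variance v = alpha/beta**2

    # Conjugacy: each component's posterior is gamma(alpha + k_i, beta + t_i),
    # so its mean gives a Bayesian point estimate of that component's rate.
    post_mean = (alpha + k) / (beta + t)
    print(alpha, beta, post_mean)
    ```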

  19. Phonological coding during reading

    PubMed Central

    Leinenger, Mallorie

    2014-01-01

    The exact role that phonological coding (the recoding of written, orthographic information into a sound based code) plays during silent reading has been extensively studied for more than a century. Despite the large body of research surrounding the topic, varying theories as to the time course and function of this recoding still exist. The present review synthesizes this body of research, addressing the topics of time course and function in tandem. The varying theories surrounding the function of phonological coding (e.g., that phonological codes aid lexical access, that phonological codes aid comprehension and bolster short-term memory, or that phonological codes are largely epiphenomenal in skilled readers) are first outlined, and the time courses that each maps onto (e.g., that phonological codes come online early (pre-lexical) or that phonological codes come online late (post-lexical)) are discussed. Next the research relevant to each of these proposed functions is reviewed, discussing the varying methodologies that have been used to investigate phonological coding (e.g., response time methods, reading while eyetracking or recording EEG and MEG, concurrent articulation) and highlighting the advantages and limitations of each with respect to the study of phonological coding. In response to the view that phonological coding is largely epiphenomenal in skilled readers, research on the use of phonological codes in prelingually, profoundly deaf readers is reviewed. Finally, implications for current models of word identification (activation-verification model (Van Orden, 1987), dual-route model (e.g., Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001), parallel distributed processing model (Seidenberg & McClelland, 1989)) are discussed. PMID:25150679

  20. Lumpy - an interactive Lumped Parameter Modeling code based on MS Access and MS Excel.

    NASA Astrophysics Data System (ADS)

    Suckow, A.

    2012-04-01

    Several tracers for dating groundwater (18O/2H, 3H, CFCs, SF6, 85Kr) need lumped parameter modeling (LPM) to convert measured values into numbers with unit time. Other tracers (T/3He, 39Ar, 14C, 81Kr) allow the computation of apparent ages with a mathematical formula using radioactive decay without defining the age mixture that any groundwater sample represents. Interpretation of the latter also profits significantly from LPM tools that allow forward modeling of input time series to measurable output values assuming different age distributions and mixtures in the sample. This talk presents a Lumped Parameter Modeling code, Lumpy, combining up to two LPMs in parallel. The code is standalone and freeware. It is based on MS Access and Access Basic (AB) and allows using any number of measurements for both input time series and output measurements, with any, not necessarily constant, time resolution. Several tracers, also comprising very different timescales like e.g. the combination of 18O, CFCs and 14C, can be modeled, displayed and fitted simultaneously. Lumpy allows for each of the two parallel models the choice of the following age distributions: Exponential Piston flow Model (EPM), Linear Piston flow Model (LPM), Dispersion Model (DM), Piston flow Model (PM) and Gamma Model (GM). Concerning input functions, Lumpy allows delaying (passage through the unsaturated zone), shifting by a constant value (converting 18O data from a GNIP station to a different altitude), multiplying by a constant value (geochemical reduction of initial 14C) and the definition of a constant input value prior to the input time series (pre-bomb tritium). Lumpy also allows underground tracer production (4He or 39Ar) and the computation of a daughter product (tritiogenic 3He) as well as partial loss of the daughter product (partial re-equilibration of 3He). These additional parameters and the input functions can be defined independently for the two sub-LPMs to represent two different recharge
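
    The forward-modeling step a code like this performs is, at heart, a convolution of the tracer input time series with the chosen age distribution, weighted by radioactive decay. A minimal sketch for the exponential model with an invented tritium input curve (all values hypothetical):

    ```python
    import numpy as np

    years = np.arange(1950, 2013)
    c_in = np.interp(years, [1950, 1963, 2012], [5.0, 1000.0, 10.0])  # 3H input (TU)

    T, half_life = 15.0, 12.32         # mean residence time and 3H half-life (yr)
    lam = np.log(2.0) / half_life

    def output_concentration(sample_year):
        tau = sample_year - years          # transit time of each input year
        ok = tau >= 0                      # only water recharged in the past
        g = np.exp(-tau[ok] / T) / T       # exponential model age distribution
        w = g * np.exp(-lam * tau[ok])     # decay the input over the transit time
        return np.sum(c_in[ok] * w) / np.sum(g)  # normalize the truncated g

    print(f"modelled 3H in 2012: {output_concentration(2012):.1f} TU")
    ```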

  1. Numerical modelling of spallation in 2D hydrodynamics codes

    NASA Astrophysics Data System (ADS)

    Maw, J. R.; Giles, A. R.

    1996-05-01

    A model for spallation based on the void growth model of Johnson has been implemented in 2D Lagrangian and Eulerian hydrocodes. The model has been extended to treat complete separation of material when voids coalesce and to describe the effects of elevated temperatures and melting. The capabilities of the model are illustrated by comparison with data from explosively generated spall experiments. Particular emphasis is placed on the prediction of multiple spall effects in weak, low melting point, materials such as lead. The correlation between the model predictions and observations on the strain rate dependence of spall strength is discussed.

  2. RELAP5/MOD3 code manual. Volume 4, Models and correlations

    SciTech Connect

    1995-08-01

    The RELAP5 code has been developed for best-estimate transient simulation of light water reactor coolant systems during postulated accidents. The code models the coupled behavior of the reactor coolant system and the core for loss-of-coolant accidents and operational transients such as anticipated transient without scram, loss of offsite power, loss of feedwater, and loss of flow. A generic modeling approach is used that permits simulating a variety of thermal hydraulic systems. Control system and secondary system components are included to permit modeling of plant controls, turbines, condensers, and secondary feedwater systems. RELAP5/MOD3 code documentation is divided into seven volumes: Volume I presents modeling theory and associated numerical schemes; Volume II details instructions for code application and input data preparation; Volume III presents the results of developmental assessment cases that demonstrate and verify the models used in the code; Volume IV discusses in detail RELAP5 models and correlations; Volume V presents guidelines that have evolved over the past several years through the use of the RELAP5 code; Volume VI discusses the numerical scheme used in RELAP5; and Volume VII presents a collection of independent assessment calculations.

  3. Electrical Circuit Simulation Code

    SciTech Connect

    Wix, Steven D.; Waters, Arlon J.; Shirley, David

    2001-08-09

    Massively-Parallel Electrical Circuit Simulation Code. CHILESPICE is a massively-parallel distributed-memory electrical circuit simulation tool that contains many enhanced radiation, time-based, and thermal features and models. It supports large-scale electronic circuit simulation with shared-memory parallel processing, enhanced convergence, and Sandia-specific device models.

  4. Implementation of a 3D halo neutral model in the TRANSP code and application to projected NSTX-U plasmas

    NASA Astrophysics Data System (ADS)

    Medley, S. S.; Liu, D.; Gorelenkova, M. V.; Heidbrink, W. W.; Stagner, L.

    2016-02-01

    A 3D halo neutral code developed at the Princeton Plasma Physics Laboratory and implemented for analysis using the TRANSP code is applied to projected National Spherical Torus eXperiment-Upgrade (NSTX-U) plasmas. The legacy TRANSP code did not handle halo neutrals properly since they were distributed over the plasma volume rather than remaining in the vicinity of the neutral beam footprint as is actually the case. The 3D halo neutral code uses a ‘beam-in-a-box’ model that encompasses both injected beam neutrals and resulting halo neutrals. Upon deposition by charge exchange, a subset of the full, one-half and one-third beam energy components produce first generation halo neutrals that are tracked through successive generations until an ionization event occurs or the descendant halos exit the box. The 3D halo neutral model and neutral particle analyzer (NPA) simulator in the TRANSP code have been benchmarked with the Fast-Ion D-Alpha simulation (FIDAsim) code, which provides Monte Carlo simulations of beam neutral injection, attenuation, halo generation, halo spatial diffusion, and photoemission processes. When using the same atomic physics database, TRANSP and FIDAsim simulations achieve excellent agreement on the spatial profile and magnitude of beam and halo neutral densities and the NPA energy spectrum. The simulations show that the halo neutral density can be comparable to the beam neutral density. These halo neutrals can double the NPA flux, but they have minor effects on the NPA energy spectrum shape. The TRANSP and FIDAsim simulations also suggest that the magnitudes of beam and halo neutral densities are relatively sensitive to the choice of the atomic physics databases.

  5. Implementation of a 3D halo neutral model in the TRANSP code and application to projected NSTX-U plasmas

    SciTech Connect

    Medley, S. S.; Liu, D.; Gorelenkova, M. V.; Heidbrink, W. W.; Stagner, L.

    2016-01-12

    A 3D halo neutral code developed at the Princeton Plasma Physics Laboratory and implemented for analysis using the TRANSP code is applied to projected National Spherical Torus eXperiment-Upgrade (NSTX-U) plasmas. The legacy TRANSP code did not handle halo neutrals properly since they were distributed over the plasma volume rather than remaining in the vicinity of the neutral beam footprint as is actually the case. The 3D halo neutral code uses a 'beam-in-a-box' model that encompasses both injected beam neutrals and resulting halo neutrals. Upon deposition by charge exchange, a subset of the full, one-half and one-third beam energy components produce first generation halo neutrals that are tracked through successive generations until an ionization event occurs or the descendant halos exit the box. The 3D halo neutral model and neutral particle analyzer (NPA) simulator in the TRANSP code have been benchmarked with the Fast-Ion D-Alpha simulation (FIDAsim) code, which provides Monte Carlo simulations of beam neutral injection, attenuation, halo generation, halo spatial diffusion, and photoemission processes. When using the same atomic physics database, TRANSP and FIDAsim simulations achieve excellent agreement on the spatial profile and magnitude of beam and halo neutral densities and the NPA energy spectrum. The simulations show that the halo neutral density can be comparable to the beam neutral density. These halo neutrals can double the NPA flux, but they have minor effects on the NPA energy spectrum shape. The TRANSP and FIDAsim simulations also suggest that the magnitudes of beam and halo neutral densities are relatively sensitive to the choice of the atomic physics databases.

  6. Building a generalized distributed system model

    NASA Technical Reports Server (NTRS)

    Mukkamala, R.

    1992-01-01

    The key elements in the second year (1991-92) of our project are: (1) implementation of the distributed system prototype; (2) successful passing of the candidacy examination and a PhD proposal acceptance by the funded student; (3) design of storage efficient schemes for replicated distributed systems; and (4) modeling of gracefully degrading reliable computing systems. In the third year of the project (1992-93), we propose to: (1) complete the testing of the prototype; (2) enhance the functionality of the modules by enabling the experimentation with more complex protocols; (3) use the prototype to verify the theoretically predicted performance of locking protocols, etc.; and (4) work on issues related to real-time distributed systems. This should result in efficient protocols for these systems.

  7. Validating kinetic models in a fluid code using data from high-Knudsen-number capsule implosions

    NASA Astrophysics Data System (ADS)

    Hoffman, N.; Molvig, K.; Dodd, E.; Albright, B.; Simakov, A.; Zimmerman, G.; Rosenberg, M.; Rinderknecht, H.; Sio, H.; Zylstra, A.; Sinenian, N.; Gatu Johnson, M.; Seguin, F.; Frenje, J.; Li, C. K.; Petrasso, R.; Glebov, V.; Stoeckl, C.; Seka, W.; Sangster, C.

    2013-10-01

    We validate models of (a) ion diffusion and (b) fusion reactivity decrease from modified ion-distribution tails, implemented in a rad-hydro code, using data for five quantities (DD-n yield, D3He-p yield, DD burn temperature, bang time, and absorbed energy) from recent thin-shell D3He-filled capsules at OMEGA. Four inputs (laser source fraction, electron thermal flux limiter, Knudsen number multiplier, and ion flux multiplier) are varied to find the best fit to the ten observables from two implosions (8-atm fill and 23-atm fill). The calibrated input values can explain the data from a set of other D3He implosions with fill pressures from 1 atm to 17 atm (Knudsen numbers from 0.5 to ~6). Using a new transport model for ion loss, we will develop a model of wide validity for OMEGA direct-drive implosions. Funded by USDOE under contract DE-AC52-06NA25396.

  8. Hypervelocity Impact Test Fragment Modeling: Modifications to the Fragment Rotation Analysis and Lightcurve Code

    NASA Technical Reports Server (NTRS)

    Gouge, Michael F.

    2011-01-01

    Hypervelocity impact tests on test satellites are performed by members of the orbital debris scientific community in order to understand and typify the on-orbit collision breakup process. By analysis of these test satellite fragments, the fragment size and mass distributions are derived and incorporated into various orbital debris models. These same fragments are now being put to new uses with emerging technologies. Digital models of these fragments are created using a laser scanner. A group of computer programs referred to as the Fragment Rotation Analysis and Lightcurve code uses these digital representations in a multitude of ways that describe, measure, and model on-orbit fragments and fragment behavior. The Dynamic Rotation subroutine generates all of the possible reflected intensities from a scanned fragment as if it were observed to rotate dynamically while in orbit about the Earth. This calls an additional subroutine that graphically displays the intensities and the resulting frequency of those intensities over a range of solar phase angles in a Probability Density Function plot. This document reports the additions and modifications to the subset of the Fragment Rotation Analysis and Lightcurve code concerned with the Dynamic Rotation and Probability Density Function plotting subroutines.

  9. Distributed earth model/orbiter simulation

    NASA Technical Reports Server (NTRS)

    Geisler, Erik; Mcclanahan, Scott; Smith, Gary

    1989-01-01

    Distributed Earth Model/Orbiter Simulation (DEMOS) is a network based application developed for the UNIX environment that visually monitors or simulates the Earth and any number of orbiting vehicles. Its purpose is to provide Mission Control Center (MCC) flight controllers with a visually accurate three dimensional (3D) model of the Earth, Sun, Moon and orbiters, driven by real time or simulated data. The project incorporates a graphical user interface, 3D modelling employing state-of-the-art hardware, and simulation of orbital mechanics in a networked/distributed environment. The user interface is based on the X Window System and the X Ray toolbox. The 3D modelling utilizes the Programmer's Hierarchical Interactive Graphics System (PHIGS) standard and Raster Technologies hardware for rendering/display performance. The simulation of orbiting vehicles uses two methods of vector propagation implemented with standard UNIX/C for portability. Each part is a distinct process that can run on separate nodes of a network, exploiting each node's unique hardware capabilities. The client/server communication architecture of the application can be reused for a variety of distributed applications.

  10. A predictive coding account of bistable perception - a model-based fMRI study.

    PubMed

    Weilnhammer, Veith; Stuke, Heiner; Hesselmann, Guido; Sterzer, Philipp; Schmack, Katharina

    2017-05-01

    In bistable vision, subjective perception wavers between two interpretations of a constant ambiguous stimulus. This dissociation between conscious perception and sensory stimulation has motivated various empirical studies on the neural correlates of bistable perception, but the neurocomputational mechanism behind endogenous perceptual transitions has remained elusive. Here, we drew on a generic Bayesian framework of predictive coding and devised a model that casts endogenous perceptual transitions as a consequence of prediction errors emerging from residual evidence for the suppressed percept. Data simulations revealed close similarities between the model's predictions and key temporal characteristics of perceptual bistability, indicating that the model was able to reproduce bistable perception. Fitting the predictive coding model to behavioural data from an fMRI-experiment on bistable perception, we found a correlation across participants between the model parameter encoding perceptual stabilization and the behaviourally measured frequency of perceptual transitions, corroborating that the model successfully accounted for participants' perception. Formal model comparison with established models of bistable perception based on mutual inhibition and adaptation, noise or a combination of adaptation and noise was used for the validation of the predictive coding model against the established models. Most importantly, model-based analyses of the fMRI data revealed that prediction error time-courses derived from the predictive coding model correlated with neural signal time-courses in bilateral inferior frontal gyri and anterior insulae. Voxel-wise model selection indicated a superiority of the predictive coding model over conventional analysis approaches in explaining neural activity in these frontal areas, suggesting that frontal cortex encodes prediction errors that mediate endogenous perceptual transitions in bistable perception. Taken together, our current work
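
    The mechanism the model formalizes — residual evidence for the suppressed percept accruing as prediction error until it forces a transition — can be caricatured in a few lines (an illustrative toy, not the fitted model from the study):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    percept, err, t = 0, 0.0, 0
    thresh, gain = 1.0, 0.02
    durations = []

    for _ in range(200000):
        t += 1
        residual = 0.5 + 0.1 * rng.standard_normal()  # evidence for suppressed percept
        err += gain * residual                        # accumulating prediction error
        if err >= thresh:                             # error wins: perceptual switch
            durations.append(t)
            percept, err, t = 1 - percept, 0.0, 0

    print(f"mean dominance duration: {np.mean(durations):.1f} steps")
    ```

    Even this caricature yields stochastic, unimodal, right-skewed dominance-duration distributions of the kind that characterize bistable perception.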

  11. Approaches in highly parameterized inversion - PEST++, a Parameter ESTimation code optimized for large environmental models

    USGS Publications Warehouse

    Welter, David E.; Doherty, John E.; Hunt, Randall J.; Muffels, Christopher T.; Tonkin, Matthew J.; Schreuder, Willem A.

    2012-01-01

    An object-oriented parameter estimation code was developed to incorporate benefits of object-oriented programming techniques for solving large parameter estimation modeling problems. The code is written in C++ and is a formulation and expansion of the algorithms included in PEST, a widely used parameter estimation code written in Fortran. The new code is called PEST++ and is designed to lower the barriers of entry for users and developers while providing efficient algorithms that can accommodate large, highly parameterized problems. This effort has focused on (1) implementing the most popular features of PEST in a fashion that is easy for novice or experienced modelers to use and (2) creating a software design that is easy to extend; that is, this effort provides a documented object-oriented framework designed from the ground up to be modular and extensible. In addition, all PEST++ source code and its associated libraries, as well as the general run manager source code, have been integrated in the Microsoft Visual Studio® 2010 integrated development environment. The PEST++ code is designed to provide a foundation for an open-source development environment capable of producing robust and efficient parameter estimation tools for the environmental modeling community into the future.

  12. LWR codes capability to address SFR BDBA scenarios: Modeling of the ABCOVE tests

    SciTech Connect

    Herranz, L. E.; Garcia, M.; Morandi, S.

    2012-07-01

    The sound background built up in LWR source term analysis for severe accidents makes it worthwhile to check the capability of LWR safety analysis codes to model SFR accident scenarios, at least in some areas. This paper gives a snapshot of such predictability in the area of aerosol behavior in containment. To do so, the AB-5 test of the ABCOVE program has been modeled with three LWR codes: ASTEC, ECART and MELCOR. Through the search for a best-estimate scenario and its comparison with data, it is concluded that even in the specific case of in-containment aerosol behavior, some enhancements would be needed in the LWR codes and/or their application, particularly with respect to consideration of particle shape. Nonetheless, much of the modeling presently embodied in LWR codes might be applicable to SFR scenarios. These conclusions should be seen as preliminary as long as the comparisons are not extended to more experimental scenarios. (authors)

  13. Modeling Emergent Macrophyte Distributions: Including Sub-dominant Species

    EPA Science Inventory

    Mixed stands of emergent vegetation are often present following drawdowns but models of wetland plant distributions fail to include subdominant species when predicting distributions. Three variations of a spatial plant distribution cellular automaton model were developed to explo...

  14. Modeling Emergent Macrophyte Distributions: Including Sub-dominant Species

    EPA Science Inventory

    Mixed stands of emergent vegetation are often present following drawdowns but models of wetland plant distributions fail to include subdominant species when predicting distributions. Three variations of a spatial plant distribution cellular automaton model were developed to explo...

  15. Parallel Spectral Transform Shallow Water Model: A runtime-tunable parallel benchmark code

    SciTech Connect

    Worley, P.H.; Foster, I.T.

    1994-05-01

    Fairness is an important issue when benchmarking parallel computers using application codes. The best parallel algorithm on one platform may not be the best on another. While it is not feasible to reevaluate parallel algorithms and reimplement large codes whenever new machines become available, it is possible to embed algorithmic options into codes that allow them to be "tuned" for a particular machine without requiring code modifications. In this paper, we describe a code in which such an approach was taken. PSTSWM was developed for evaluating parallel algorithms for the spectral transform method in atmospheric circulation models. Many levels of runtime-selectable algorithmic options are supported. We discuss these options and our evaluation methodology. We also provide empirical results from a number of parallel machines, indicating the importance of tuning for each platform before making a comparison.

  16. Improved carbon migration modelling with the ERO code

    NASA Astrophysics Data System (ADS)

    Van Hoey, Olivier; Kirschner, Andreas; Björkas, Carolina; Borodin, Dmitry; Matveev, Dmitry; Uytdenhouwen, Inge; Van Oost, Guido

    2013-07-01

    Material migration is a crucial issue in thermonuclear fusion devices. To study carbon migration, 13CH4 has been injected through a polished graphite roof-like test limiter in the TEXTOR scrape-off layer. The interpretation of the experimental 13C deposition patterns on the roof limiter surface has been done with the ERO impurity transport code. To reproduce the very low experimental 13C deposition efficiencies with ERO, an enhanced re-erosion mechanism for re-deposited carbon had to be assumed in previous studies. However, erosion by hydrogenic species produced during dissociation of injected 13CH4 was not taken into account by ERO in these studies. This additional erosion could maybe explain the very low experimental 13C deposition efficiencies. Therefore, it is now taken into account in ERO. Also more realistic physical sputtering yields and hydrocarbon reflection probabilities have been implemented in ERO. The simulations with these improvements included clearly confirm the need for enhanced re-erosion of re-deposited carbon.

  17. Recommended requirements to code officials for solar heating, cooling, and hot water systems. Model document for code officials on solar heating and cooling of buildings

    SciTech Connect

    1980-06-01

    These recommended requirements include provisions for electrical, building, mechanical, and plumbing installations for active and passive solar energy systems used for space or process heating and cooling, and domestic water heating. The provisions in these recommended requirements are intended to be used in conjunction with the existing building codes in each jurisdiction. Where a solar relevant provision is adequately covered in an existing model code, the section is referenced in the Appendix. Where a provision has been drafted because there is no counterpart in the existing model code, it is found in the body of these recommended requirements. Commentaries are included in the text explaining the coverage and intent of present model code requirements and suggesting alternatives that may, at the discretion of the building official, be considered as providing reasonable protection to the public health and safety. Also included is an Appendix which is divided into a model code cross reference section and a reference standards section. The model code cross references are a compilation of the sections in the text and their equivalent requirements in the applicable model codes. (MHR)

  18. Void fraction distribution in a boiling water reactor fuel assembly and the evaluation of subchannel analysis codes

    SciTech Connect

    Inoue, Akira; Futakuchi, Masanobu; Yagi, Makoto; Mitsutake, Toru; Morooka, Shinichi

    1995-12-01

    Void fraction measurement tests for boiling water reactor (BWR) simulated nuclear fuel assemblies have been conducted using an X-ray computed tomography scanner. There are two types of fuel assemblies with regard to water rods: one fuel assembly has two water rods; the other has one large water rod. The effects of the water rods on radial void fraction distributions are measured within the fuel assemblies. The results show that the water rods do not make a large difference in the void fraction distribution. The subchannel analysis codes COBRA/BWR and THERMIT-2 were compared with subchannel-averaged void fractions. The prediction accuracy of COBRA/BWR and THERMIT-2 for the subchannel-averaged void fraction was Δα = −3.6%, σ = 4.8% and Δα = −4.1%, σ = 4.5%, respectively, where Δα is the average of the difference between measured and calculated values. The subchannel analysis codes are highly applicable for the prediction of two-phase flow distribution within BWR fuel assemblies.
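
    The quoted accuracy figures are just the mean and the scatter of the calculated-minus-measured subchannel void fractions; with stand-in arrays in place of the experimental data:

    ```python
    import numpy as np

    measured = np.array([55.0, 62.1, 70.4, 48.3, 66.7])    # X-ray CT void fractions (%)
    calculated = np.array([51.2, 58.9, 66.1, 44.0, 61.9])  # subchannel code output (%)

    diff = calculated - measured
    delta_alpha = diff.mean()    # average difference, the quoted Δα
    sigma = diff.std(ddof=1)     # scatter about that average, the quoted σ
    print(f"Δα = {delta_alpha:.1f}%, σ = {sigma:.1f}%")
    ```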

  19. Computer-modeling codes to improve exploration nuclear-logging methods. National Uranium Resource Evaluation

    SciTech Connect

    Wilson, R.D.; Price, R.K.; Kosanke, K.L.

    1983-03-01

    As part of the Department of Energy's National Uranium Resource Evaluation (NURE) project's Technology Development effort, a number of computer codes and accompanying data bases were assembled for use in modeling responses of nuclear borehole logging sondes. The logging methods include fission neutron, active and passive gamma-ray, and gamma-gamma. These CDC-compatible computer codes and data bases are available on magnetic tape from the DOE Technical Library at its Grand Junction Area Office. Some of the computer codes are standard radiation-transport programs that have been available to the radiation shielding community for several years. Other codes were specifically written to model the response of borehole radiation detectors or are specialized borehole modeling versions of existing Monte Carlo transport programs. Results from several radiation modeling studies are available as two large data bases (neutron and gamma-ray). These data bases are accompanied by appropriate processing programs that permit the user to model a wide range of borehole and formation-parameter combinations for fission-neutron, neutron-activation, and gamma-gamma logs. The first part of this report consists of a brief abstract for each code or data base. The abstract gives the code name and title, short description, auxiliary requirements, typical running time (CDC 6600), and a list of references. The next section gives format specifications and/or directory for the tapes. The final section of the report presents listings for programs used to convert data bases between machine floating-point and EBCDIC.

  20. A computer code for calculations in the algebraic collective model of the atomic nucleus

    NASA Astrophysics Data System (ADS)

    Welsh, T. A.; Rowe, D. J.

    2016-03-01

    A Maple code is presented for algebraic collective model (ACM) calculations. The ACM is an algebraic version of the Bohr model of the atomic nucleus, in which all required matrix elements are derived by exploiting the model's SU(1,1) × SO(5) dynamical group. This paper reviews the mathematical formulation of the ACM, and serves as a manual for the code. The code enables a wide range of model Hamiltonians to be analysed. This range includes essentially all Hamiltonians that are rational functions of the model's quadrupole moments q̂_M and are at most quadratic in the corresponding conjugate momenta π̂_N (−2 ≤ M, N ≤ 2). The code makes use of expressions for matrix elements derived elsewhere and newly derived matrix elements of the operators [π̂ ⊗ q̂ ⊗ π̂]_0 and [π̂ ⊗ π̂]_LM. The code is made efficient by use of an analytical expression for the needed SO(5)-reduced matrix elements, and use of SO(5) ⊃ SO(3) Clebsch-Gordan coefficients obtained from precomputed data files provided with the code.