NASA Astrophysics Data System (ADS)
Li, Li; Hu, Xiao; Zeng, Rui
2007-11-01
The development of practical distributed video coding schemes builds on information-theoretic bounds established in the 1970s by Slepian and Wolf for distributed lossless coding, and by Wyner and Ziv for lossy coding with decoder side information. In distributed video compression applications, the non-stationary behavior of the virtual correlation channel between the source X and the side information Y is hard to describe accurately, although it plays a very important role in overall system performance. In this paper, we implement a practical asymmetric Slepian-Wolf distributed video compression system using irregular LDPC codes. Moreover, by exploiting the dependencies between previously decoded bit planes of the video frame X and the side information Y, we present improved schemes that partition the bit planes into regions of different reliability. Our simulation results show that the schemes exploiting dependencies between previously decoded bit planes achieve better overall encoding-rate performance as the BER approaches zero. We also show that, compared with the BSC model, the BC channel model is better suited to the distributed video compression scenario because of the non-stationary properties of the virtual correlation channel, and that adaptively estimating the channel-model parameters from previously decoded adjacent bit planes provides more accurate initial belief messages from the channel at the LDPC decoder.
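As an illustration of the abstract's closing point, a minimal sketch (not the paper's implementation; the function name and the plain-BSC assumption are ours) of how an estimated crossover probability p translates into initial belief messages (LLRs) at an LDPC decoder:

```python
import math

def initial_llrs(side_info_bits, p):
    """Initial LLRs for an LDPC decoder when the virtual correlation
    channel between source X and side information Y is modeled as a
    BSC with crossover probability p (0 < p < 0.5).

    LLR = log P(x=0 | y) / P(x=1 | y): positive when y = 0,
    negative when y = 1, with magnitude log((1 - p) / p).
    """
    mag = math.log((1.0 - p) / p)
    return [mag if y == 0 else -mag for y in side_info_bits]
```

A more accurate estimate of p from previously decoded bit planes sharpens these magnitudes, which is the adaptivity the abstract argues for.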
Modelling dose distribution in tubing and cable using CYLTRAN and ACCEPT Monte Carlo simulation code
Weiss, D.E.; Kensek, R.P.
1993-12-31
One of the difficulties in the irradiation of non-slab geometries, such as a tube, is the uneven penetration of the electrons. A simple model of the distribution of dose in a tube or cable in relation to voltage, composition, wall thickness and diameter can be mapped using the cylinder geometry provided in the ITS/CYLTRAN code, complete with automatic subzoning. More complex 3-D effects, including those of the window foil, backscattering fixtures and beam scanning angles, can be accounted for more completely by using the ITS/ACCEPT code with a line-source update and a system of intersecting wedges to define input zones for mapping dose distributions in a tube. Thus, all of the variables that affect dose distribution can be modelled without the need to run time-consuming and costly factory experiments. The effects of composition changes on dose distribution can also be anticipated.
Trent, D.S.; Eyler, L.L.
1982-09-01
In this study several aspects of simulating hydrogen distribution in geometric configurations relevant to reactor containment structures were investigated using the TEMPEST computer code. Of particular interest was the performance of the TEMPEST turbulence model in a density-stratified environment. Computed results illustrated that the TEMPEST numerical procedures predicted the measured phenomena with good accuracy under a variety of conditions and that the turbulence model used is a viable approach in complex turbulent flow simulation.
NASA Technical Reports Server (NTRS)
Waheed, Abdul; Yan, Jerry
1998-01-01
This paper presents a model to evaluate the performance and overhead of parallelizing sequential code using compiler directives for multiprocessing on distributed shared memory (DSM) systems. With increasing popularity of shared address space architectures, it is essential to understand their performance impact on programs that benefit from shared memory multiprocessing. We present a simple model to characterize the performance of programs that are parallelized using compiler directives for shared memory multiprocessing. We parallelized the sequential implementation of NAS benchmarks using native Fortran77 compiler directives for an Origin2000, which is a DSM system based on a cache-coherent Non Uniform Memory Access (ccNUMA) architecture. We report measurement based performance of these parallelized benchmarks from four perspectives: efficacy of parallelization process; scalability; parallelization overhead; and comparison with hand-parallelized and -optimized version of the same benchmarks. Our results indicate that sequential programs can conveniently be parallelized for DSM systems using compiler directives but realizing performance gains as predicted by the performance model depends primarily on minimizing architecture-specific data locality overhead.
UNIX code management and distribution
Hung, T.; Kunz, P.F.
1992-09-01
We describe a code management and distribution system based on tools freely available for the UNIX systems. At the master site, version control is managed with CVS, which is a layer on top of RCS, and distribution is done via NFS mounted file systems. At remote sites, small modifications to CVS provide for interactive transactions with the CVS system at the master site such that remote developers are true peers in the code development process.
ERIC Educational Resources Information Center
New Mexico Univ., Albuquerque. American Indian Law Center.
The Model Children's Code was developed to provide a legally correct model code that American Indian tribes can use to enact children's codes that fulfill their legal, cultural and economic needs. Code sections cover the court system, jurisdiction, juvenile offender procedures, minor-in-need-of-care, and termination. Almost every Code section is…
3-D model-based frame interpolation for distributed video coding of static scenes.
Maitre, Matthieu; Guillemot, Christine; Morin, Luce
2007-05-01
This paper addresses the problem of side information extraction for distributed coding of videos captured by a camera moving in a 3-D static environment. Examples of targeted applications are augmented reality, remote-controlled robots operating in hazardous environments, or remote exploration by drones. It explores the benefits of the structure-from-motion paradigm for distributed coding of this type of video content. Two interpolation methods constrained by the scene geometry, based either on block matching along epipolar lines or on 3-D mesh fitting, are first developed. These techniques are based on a robust algorithm for sub-pel matching of feature points, which leads to semi-dense correspondences between key frames. However, their rate-distortion (RD) performances are limited by misalignments between the side information and the actual Wyner-Ziv (WZ) frames due to the assumption of linear motion between key frames. To cope with this problem, two feature point tracking techniques are introduced, which recover the camera parameters of the WZ frames. A first technique, in which the frames remain encoded separately, performs tracking at the decoder and leads to significant RD performance gains. A second technique further improves the RD performances by allowing a limited tracking at the encoder. As an additional benefit, statistics on tracks allow the encoder to adapt the key frame frequency to the video motion content. PMID:17491456
Distribution Coding in the Visual Pathway
Sanderson, A. C.; Kozak, W. M.; Calvert, T. W.
1973-01-01
Although a variety of types of spike interval histograms have been reported, little attention has been given to the spike interval distribution as a neural code and to how different distributions are transmitted through neural networks. In this paper we present experimental results showing spike interval histograms recorded from retinal ganglion cells of the cat. These results exhibit a clear correlation between spike interval distribution and stimulus condition at the retinal ganglion cell level. The averaged mean rates of the cells studied were nearly the same in light as in darkness whereas the spike interval histograms were much more regular in light than in darkness. We present theoretical models which illustrate how such a distribution coding at the retinal level could be “interpreted” or recorded at some higher level of the nervous system such as the lateral geniculate nucleus. Interpretation is an essential requirement of a neural code which has often been overlooked in modeling studies. Analytical expressions are derived describing the role of distribution coding in determining the transfer characteristics of a simple interaction model and of a lateral inhibition network. Our work suggests that distribution coding might be interpreted by simply interconnected neural networks such as relay cell networks, in general, and the primary thalamic sensory nuclei in particular. PMID:4697235
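The reported contrast — similar mean rates but more regular intervals in light — can be mimicked with a toy simulation (our illustration, not the authors' data): exponential intervals for irregular firing versus gamma-distributed intervals for regular firing, compared via the coefficient of variation (CV) of the interval distribution:

```python
import random
import statistics

def interval_cv(intervals):
    """Coefficient of variation of a spike-interval distribution:
    about 1 for Poisson-like (irregular) firing, smaller for regular firing."""
    return statistics.pstdev(intervals) / statistics.fmean(intervals)

random.seed(0)
# Same 10 Hz mean rate in both conditions, different interval distributions.
irregular = [random.expovariate(10) for _ in range(5000)]      # "darkness"
regular = [sum(random.expovariate(40) for _ in range(4))       # "light":
           for _ in range(5000)]                               # gamma, shape 4
```

Both trains have a 0.1 s mean interval, but CV is roughly 1 versus roughly 0.5, so a pure rate code sees no difference while an interval-distribution code does.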
Distributed transform coding via source-splitting
NASA Astrophysics Data System (ADS)
Yahampath, Pradeepa
2012-12-01
Transform coding (TC) is one of the best known practical methods for quantizing high-dimensional vectors. In this article, a practical approach to distributed TC of jointly Gaussian vectors is presented. This approach, referred to as source-split distributed transform coding (SP-DTC), can be used to easily implement two terminal transform codes for any given rate-pair. The main idea is to apply source-splitting using orthogonal-transforms, so that only Wyner-Ziv (WZ) quantizers are required for compression of transform coefficients. This approach however requires optimizing the bit allocation among dependent sets of WZ quantizers. In order to solve this problem, a low-complexity tree-search algorithm based on analytical models for transform coefficient quantization is developed. A rate-distortion (RD) analysis of SP-DTCs for jointly Gaussian sources is presented, which indicates that these codes can significantly outperform the practical alternative of independent TC of each source, whenever there is a strong correlation between the sources. For practical implementation of SP-DTCs, the idea of using conditional entropy constrained (CEC) quantizers followed by Slepian-Wolf coding is explored. Experimental results obtained with SP-DTC designs based on both CEC scalar quantizers and CEC trellis-coded quantizers demonstrate that actual implementations of SP-DTCs can achieve RD performance close to the analytically predicted limits.
Baele, Guy; Van de Peer, Yves; Vansteelandt, Stijn
2009-01-01
Background Many recent studies that relax the assumption of independent evolution of sites have done so at the expense of a drastic increase in the number of substitution parameters. While additional parameters cannot be avoided to model context-dependent evolution, a large increase in model dimensionality is only justified when accompanied by careful model-building strategies that guard against overfitting. An increased dimensionality leads to increased computational cost, longer convergence times in Bayesian Markov chain Monte Carlo algorithms and even more tedious Bayes Factor calculations. Results We have developed two model-search algorithms which reduce the number of Bayes Factor calculations by clustering posterior densities to decide on the equality of substitution behavior in different contexts. The selected model's fit is evaluated using a Bayes Factor, which we calculate via model-switch thermodynamic integration. To reduce computation time and to increase the precision of this integration, we propose to split the calculations over different computers and to appropriately calibrate the individual runs. Using the proposed strategies, we find, in a dataset of primate Ancestral Repeats, that careful modeling of context-dependent evolution may increase model fit considerably and that the combination of a context-dependent model with the assumption of varying rates across sites offers even larger improvements in terms of model fit. Using a smaller nuclear SSU rRNA dataset, we show that context-dependence may only become detectable upon applying model-building strategies. Conclusion While context-dependent evolutionary models can increase the model fit over traditional independent evolutionary models, such complex models will often contain too many parameters. Justification for the added parameters is thus required so that only those parameters that model evolutionary processes previously unaccounted for are added to the evolutionary model.
NASA Technical Reports Server (NTRS)
Luchini, Chris B.
1997-01-01
Development of camera and instrument simulations for space exploration requires the development of scientifically accurate models of the objects to be studied. Several planned cometary missions have prompted the development of a three dimensional, multi-spectral, anisotropic multiple scattering model of cometary coma.
A distributed particle simulation code in C++
Forslund, D.W.; Wingate, C.A.; Ford, P.S.; Junkins, J.S.; Pope, S.C.
1992-01-01
Although C++ has been successfully used in a variety of computer science applications, it has just recently begun to be used in scientific applications. We have found that the object-oriented properties of C++ lend themselves well to scientific computations by making maintenance of the code easier, by making the code easier to understand, and by providing a better paradigm for distributed memory parallel codes. We describe here aspects of developing a particle plasma simulation code using object-oriented techniques for use in a distributed computing environment. We initially designed and implemented the code for serial computation and then used the distributed programming toolkit ISIS to run it in parallel. In this connection we describe some of the difficulties presented by using C++ for doing parallel and scientific computation.
NASA Astrophysics Data System (ADS)
Pietrzak, Robert; Konefał, Adam; Sokół, Maria; Orlef, Andrzej
2016-08-01
The success of proton therapy depends strongly on the precision of treatment planning. Dose distribution in biological tissue may be obtained from Monte Carlo simulations using various scientific codes, making it possible to perform very accurate calculations. However, there are many factors affecting the accuracy of modeling. One of them is the structure of the objects, called bins, that register the dose. In this work the influence of bin structure on the dose distributions was examined. The MCNPX code calculations of the Bragg curve for a 60 MeV proton beam were done in two ways: using simple logical detectors, i.e., volumes delineated in water, and using a precise model of the ionization chamber used in clinical dosimetry. The results of the simulations were verified experimentally in a water phantom with a Marcus ionization chamber. The average local difference between the relative doses measured in the water phantom and those calculated with the logical detectors was 1.4% in the first 25 mm, and 1.6% over the full depth range, for a maximum calculation uncertainty below 2.4% and a maximum measurement error of 1%. For the relative doses calculated with the ionization chamber model this average difference was somewhat greater: 2.3% at depths up to 25 mm and 2.4% over the full range of depths, for a maximum calculation uncertainty of 3%. In the dose calculations the ionization chamber model does not offer any additional advantages over the logical detectors. The results provided by both models are similar and in good agreement with the measurements; the logical detector approach, however, is the more time-effective method.
Cheetah: Starspot modeling code
NASA Astrophysics Data System (ADS)
Walkowicz, Lucianne; Thomas, Michael; Finkestein, Adam
2014-12-01
Cheetah models starspots in photometric data (lightcurves) by calculating the modulation of a light curve due to starspots. The main parameters of the program are the linear and quadratic limb darkening coefficients, stellar inclination, spot locations and sizes, and the intensity ratio of the spots to the stellar photosphere. Cheetah uses uniform spot contrast and the minimum number of spots needed to produce a good fit and ignores bright regions for the sake of simplicity.
Time coded distribution via broadcasting stations
NASA Technical Reports Server (NTRS)
Leschiutta, S.; Pettiti, V.; Detoma, E.
1979-01-01
The distribution of standard time signals via AM and FM broadcasting stations offers the distinct advantages of wide-area coverage and inexpensive receivers, but the signals are radiated only a limited number of times per day, are not usually available during the night, and do not permit full, automatic synchronization of a remote clock. In an attempt to overcome some of these problems, a time-coded signal carrying complete date information is broadcast by the IEN via the national broadcasting networks in Italy. These signals are radiated by some 120 AM and about 3000 FM and TV transmitters around the country. In this way, a time-ordered system with an accuracy of a couple of milliseconds is easily achieved.
The weight distribution and randomness of linear codes
NASA Technical Reports Server (NTRS)
Cheung, K.-M.
1989-01-01
Finding the weight distributions of block codes is a problem of theoretical and practical interest. Yet the weight distributions of most block codes are still unknown except for a few classes of block codes. Here, by using the inclusion and exclusion principle, an explicit formula is derived which enumerates the complete weight distribution of an (n,k,d) linear code using a partially known weight distribution. This expression is analogous to the Pless power-moment identities - a system of equations relating the weight distribution of a linear code to the weight distribution of its dual code. Also, an approximate formula for the weight distribution of most linear (n,k,d) codes is derived. It is shown that for a given linear (n,k,d) code over GF(q), the ratio of the number of codewords of weight u to the number of words of weight u approaches the constant Q = q^(-(n-k)) as u becomes large. A relationship between the randomness of a linear block code and the minimum distance of its dual code is given, and it is shown that most linear block codes with rigid algebraic and combinatorial structure also display certain random properties which make them similar to random codes with no structure at all.
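For small codes the complete weight distribution can simply be enumerated; a brute-force sketch (ours, not the paper's formula) for the (7,4,3) Hamming code, whose distribution is known to be A_0 = 1, A_3 = 7, A_4 = 7, A_7 = 1:

```python
from collections import Counter
from itertools import product

# Generator matrix of the (7,4,3) binary Hamming code, systematic form.
G = [
    [1, 0, 0, 0, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 1, 1, 0],
    [0, 0, 0, 1, 1, 1, 1],
]

def weight_distribution(G):
    """Brute-force enumeration of all 2^k codewords, counting Hamming weights."""
    dist = Counter()
    for msg in product((0, 1), repeat=len(G)):
        codeword = [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]
        dist[sum(codeword)] += 1
    return dict(sorted(dist.items()))
```

For a code this short the ratio A_u / C(7,u) is still far from Q = 2^(-3); the convergence the abstract describes is a large-u, long-code phenomenon.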
Adaptive Source Coding Schemes for Geometrically Distributed Integer Alphabets
NASA Technical Reports Server (NTRS)
Cheung, K-M.; Smyth, P.
1993-01-01
We revisit the Gallager and van Voorhis optimal source coding scheme for geometrically distributed non-negative integer alphabets and show that the various subcodes in the popular Rice algorithm can be derived from the Gallager and van Voorhis code.
Sparsey™: event recognition via deep hierarchical sparse distributed codes
Rinkus, Gerard J.
2014-01-01
The visual cortex's hierarchical, multi-level organization is captured in many biologically inspired computational vision models, the general idea being that progressively larger scale (spatially/temporally) and more complex visual features are represented in progressively higher areas. However, most earlier models use localist representations (codes) in each representational field (which we equate with the cortical macrocolumn, “mac”), at each level. In localism, each represented feature/concept/event (hereinafter “item”) is coded by a single unit. The model we describe, Sparsey, is hierarchical as well but crucially, it uses sparse distributed coding (SDC) in every mac in all levels. In SDC, each represented item is coded by a small subset of the mac's units. The SDCs of different items can overlap and the size of overlap between items can be used to represent their similarity. The difference between localism and SDC is crucial because SDC allows the two essential operations of associative memory, storing a new item and retrieving the best-matching stored item, to be done in fixed time for the life of the model. Since the model's core algorithm, which does both storage and retrieval (inference), makes a single pass over all macs on each time step, the overall model's storage/retrieval operation is also fixed-time, a criterion we consider essential for scalability to the huge (“Big Data”) problems. A 2010 paper described a nonhierarchical version of this model in the context of purely spatial pattern processing. Here, we elaborate a fully hierarchical model (arbitrary numbers of levels and macs per level), describing novel model principles like progressive critical periods, dynamic modulation of principal cells' activation functions based on a mac-level familiarity measure, representation of multiple simultaneously active hypotheses, a novel method of time warp invariant recognition, and we report results showing learning/recognition of spatiotemporal
Optimal source codes for geometrically distributed integer alphabets
NASA Technical Reports Server (NTRS)
Gallager, R. G.; Van Voorhis, D. C.
1975-01-01
An approach is shown for using the Huffman algorithm indirectly to prove the optimality of a code for an infinite alphabet if an estimate concerning the nature of the code can be made. Attention is given to nonnegative integers with a geometric probability assignment. The particular distribution considered arises in run-length coding and in encoding protocol information in data networks. Questions of redundancy of the optimal code are also investigated.
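The optimal codes in question are Golomb codes; a sketch (our simplified rendering of the Gallager-Van Voorhis construction, not the paper's proof) of choosing the parameter m for P(i) = (1-θ)θ^i and producing a codeword:

```python
import math

def golomb_parameter(theta):
    """Smallest m with theta^m + theta^(m+1) <= 1, the Gallager-Van Voorhis
    optimality condition for P(i) = (1 - theta) * theta**i."""
    m = 1
    while theta ** m + theta ** (m + 1) > 1:
        m += 1
    return m

def golomb_encode(i, m):
    """Golomb codeword for nonnegative integer i: unary quotient
    (q ones and a terminating zero) plus truncated-binary remainder."""
    q, r = divmod(i, m)
    prefix = "1" * q + "0"
    if m == 1:
        return prefix                   # pure unary code
    b = math.ceil(math.log2(m))
    cutoff = (1 << b) - m               # this many remainders fit in b-1 bits
    if r < cutoff:
        return prefix + format(r, f"0{b - 1}b")
    return prefix + format(r + cutoff, f"0{b}b")
```

For θ = 0.5 the optimal code degenerates to unary; for θ = 0.9 the condition gives m = 7. The m = 2^s cases are the Rice subcodes mentioned in the preceding record.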
Energy efficient wireless sensor networks using asymmetric distributed source coding
NASA Astrophysics Data System (ADS)
Rao, Abhishek; Kulkarni, Murlidhar
2013-01-01
Wireless Sensor Networks (WSNs) are networks of sensor nodes deployed over a geographical area to perform a specific task. WSNs pose many design challenges. Energy conservation is one such design issue. In the literature a wide range of solutions addressing this issue have been proposed. Generally WSNs are densely deployed, so nodes in close proximity are likely to observe the same data. Transmission of such non-aggregated data may lead to inefficient energy management. Hence data fusion has to be performed at the nodes so as to combine the redundant information into a single data unit. Distributed Source Coding is an efficient approach to achieving this task. In this paper an attempt has been made at modeling such a system. Various energy-efficient codes were considered for the analysis, and system performance has been analyzed in terms of energy efficiency.
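Asymmetric distributed source coding of the kind discussed can be sketched with syndrome coding (a standard textbook construction, not necessarily the authors' code choice): the sensor sends only the 3-bit Hamming syndrome of its 7-bit reading, and the sink recovers the reading from a correlated neighbour's data:

```python
import itertools

# Parity-check matrix of the (7,4) Hamming code: columns are the
# seven distinct nonzero 3-bit vectors.
H = [
    [1, 1, 1, 0, 1, 0, 0],
    [0, 1, 1, 1, 0, 1, 0],
    [1, 1, 0, 1, 0, 0, 1],
]

def syndrome(bits):
    return tuple(sum(h * b for h, b in zip(row, bits)) % 2 for row in H)

def dsc_decode(synd, y):
    """Recover x from its 3-bit syndrome and side information y, assuming
    x and y differ in at most one bit (the correlation this code handles)."""
    for flips in itertools.chain([()], ((i,) for i in range(7))):
        cand = list(y)
        for i in flips:
            cand[i] ^= 1
        if syndrome(cand) == synd:
            return cand
    return None
```

The node transmits 3 bits instead of 7 whenever neighbouring readings differ in at most one position, which is the energy saving the abstract targets.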
Mumot, Marta; Agapov, Alexey
2007-11-26
We have developed a new delivery system for hadron therapy which uses a multileaf collimator and a range shifter. We simulated the beam delivery system with the multi-particle transport code 'Fluka'. From these simulations we obtained information about the dose distributions, about stars generated in the delivery system elements, and about the neutron flux. All the information obtained was analyzed from the point of view of radiation protection and of the homogeneity of the dose delivered to the patient's body, and also in order to improve some of the beam modifiers used.
Dynamic alignment models for neural coding.
Kollmorgen, Sepp; Hahnloser, Richard H R
2014-03-01
Recently, there have been remarkable advances in modeling the relationships between the sensory environment, neuronal responses, and behavior. However, most models cannot encompass variable stimulus-response relationships such as varying response latencies and state or context dependence of the neural code. Here, we consider response modeling as a dynamic alignment problem and model stimulus and response jointly by a mixed pair hidden Markov model (MPH). In MPHs, multiple stimulus-response relationships (e.g., receptive fields) are represented by different states or groups of states in a Markov chain. Each stimulus-response relationship features temporal flexibility, allowing modeling of variable response latencies, including noisy ones. We derive algorithms for learning of MPH parameters and for inference of spike response probabilities. We show that some linear-nonlinear Poisson cascade (LNP) models are a special case of MPHs. We demonstrate the efficiency and usefulness of MPHs in simulations of both jittered and switching spike responses to white noise and natural stimuli. Furthermore, we apply MPHs to extracellular single and multi-unit data recorded in cortical brain areas of singing birds to showcase a novel method for estimating response lag distributions. MPHs allow simultaneous estimation of receptive fields, latency statistics, and hidden state dynamics and so can help to uncover complex stimulus response relationships that are subject to variable timing and involve diverse neural codes. PMID:24625448
Hydronic distribution system computer model
Andrews, J.W.; Strasser, J.J.
1994-10-01
A computer model of a hot-water boiler and its associated hydronic thermal distribution loop has been developed at Brookhaven National Laboratory (BNL). It is intended to be incorporated as a submodel in a comprehensive model of residential-scale thermal distribution systems developed at Lawrence Berkeley National Laboratory (LBL). This will give the combined model the capability of modeling forced-air and hydronic distribution systems in the same house using the same supporting software. This report describes the development of the BNL hydronics model, initial results and internal consistency checks, and its intended relationship to the LBL model. A method of interacting with the LBL model that does not require physical integration of the two codes is described. This will provide capability now, with reduced up-front cost, as long as the number of runs required is not large.
Binary weight distributions of some Reed-Solomon codes
NASA Technical Reports Server (NTRS)
Pollara, F.; Arnold, S.
1992-01-01
The binary weight distributions of the (7,5) and (15,9) Reed-Solomon (RS) codes and their duals are computed using the MacWilliams identities. Several mappings of symbols to bits are considered and those offering the largest binary minimum distance are found. These results are then used to compute bounds on the soft-decoding performance of these codes in the presence of additive Gaussian noise. These bounds are useful for finding large binary block codes with good performance and for verifying the performance obtained by specific soft-decoding algorithms presently under development.
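The MacWilliams identities used here admit a compact implementation; a sketch (ours) computing the dual code's weight distribution via binary Krawtchouk polynomials, checked against the (7,4) Hamming code, whose dual is the (7,3) simplex code with all nonzero codewords of weight 4:

```python
from math import comb

def macwilliams(A, n):
    """Weight distribution of the dual code from that of the code, via the
    binary MacWilliams transform: B_j = (1/|C|) * sum_i A_i * K_j(i)."""
    size = sum(A.values())              # |C| = 2^k

    def kraw(j, i):                     # Krawtchouk polynomial K_j(i) over GF(2)
        return sum((-1) ** s * comb(i, s) * comb(n - i, j - s)
                   for s in range(min(i, j) + 1))

    B = {}
    for j in range(n + 1):
        bj = sum(a * kraw(j, i) for i, a in A.items()) // size
        if bj:
            B[j] = bj
    return B
```

Applying the transform twice returns the original distribution, a quick self-check on any computed enumerator.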
Codon Distribution in Error-Detecting Circular Codes
Fimmel, Elena; Strüngmann, Lutz
2016-01-01
In 1957, Francis Crick et al. suggested an ingenious explanation for the process of frame maintenance. The idea was based on the notion of comma-free codes. Although Crick’s hypothesis proved to be wrong, in 1996, Arquès and Michel discovered the existence of a weaker version of such codes in eukaryote and prokaryote genomes, namely the so-called circular codes. Since then, circular code theory has invariably evoked great interest and made significant progress. In this article, the codon distributions in maximal comma-free, maximal self-complementary C3 and maximal self-complementary circular codes are discussed, i.e., we investigate in how many of such codes a given codon participates. As the main (and surprising) result, it is shown that the codons can be separated into very few classes (three, or five, or six) with respect to their frequency. Moreover, the distribution classes can be hierarchically ordered as refinements from maximal comma-free codes via maximal self-complementary C3 codes to maximal self-complementary circular codes. PMID:26999215
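Comma-freeness — the stronger property Crick's hypothesis relied on — is easy to test mechanically; a small sketch (ours, covering only the comma-free case, not full circularity, which requires a graph-based test):

```python
def is_comma_free(code):
    """A trinucleotide code is comma-free if no codon of the code appears
    in a shifted (frame 1 or 2) position of any two concatenated codewords."""
    code = set(code)
    for u in code:
        for v in code:
            w = u + v
            if w[1:4] in code or w[2:5] in code:
                return False
    return True
```

For example, a periodic codon such as AAA can never belong to a comma-free code, since AAAAAA reads as AAA in every frame.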
MEMOPS: data modelling and automatic code generation.
Fogh, Rasmus H; Boucher, Wayne; Ionides, John M C; Vranken, Wim F; Stevens, Tim J; Laue, Ernest D
2010-01-01
In recent years the amount of biological data has exploded to the point where much useful information can only be extracted by complex computational analyses. Such analyses are greatly facilitated by metadata standards, both in terms of the ability to compare data originating from different sources, and in terms of exchanging data in standard forms, e.g. when running processes on a distributed computing infrastructure. However, standards thrive on stability whereas science tends to constantly move, with new methods being developed and old ones modified. Therefore maintaining both metadata standards, and all the code that is required to make them useful, is a non-trivial problem. Memops is a framework that uses an abstract definition of the metadata (described in UML) to generate internal data structures and subroutine libraries for data access (application programming interfaces--APIs--currently in Python, C and Java) and data storage (in XML files or databases). For the individual project these libraries obviate the need for writing code for input parsing, validity checking or output. Memops also ensures that the code is always internally consistent, massively reducing the need for code reorganisation. Across a scientific domain a Memops-supported data model makes it easier to support complex standards that can capture all the data produced in a scientific area, share them among all programs in a complex software pipeline, and carry them forward to deposition in an archive. The principles behind the Memops generation code will be presented, along with example applications in Nuclear Magnetic Resonance (NMR) spectroscopy and structural biology. PMID:20375445
Material model library for explicit numerical codes
Hofmann, R.; Dial, B.W.
1982-08-01
A material model logic structure has been developed which is useful for most explicit finite-difference and explicit finite-element Lagrange computer codes. This structure has been implemented and tested in the STEALTH codes to provide an example for researchers who wish to implement it in generically similar codes. In parallel with these models, material parameter libraries have been created for the implemented models for materials which are often needed in DoD applications.
Dynamic algorithm for correlation noise estimation in distributed video coding
NASA Astrophysics Data System (ADS)
Thambu, Kuganeswaran; Fernando, Xavier; Guan, Ling
2010-01-01
Low-complexity encoders at the expense of high-complexity decoders are advantageous in wireless video sensor networks. Distributed video coding (DVC) achieves this complexity balance: the receiver computes side information (SI) by interpolating the key frames, and the side information is modeled as a noisy version of the input video frame. In practice, correlation noise estimation at the receiver is a difficult problem, and currently the noise is estimated from the residual variance between pixels of the key frames; this fixed variance estimate is then used to calculate the bit-metric values. In this paper, we introduce a new variance estimation technique that relies on the bit pattern of each pixel and is computed dynamically over the entire motion environment, which helps to calculate the soft-value information required by the decoder. Our results show that the proposed bit-based dynamic variance estimation significantly improves peak signal-to-noise ratio (PSNR) performance.
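A sketch of the baseline being improved upon (our illustration; the paper's bit-pattern-based estimator is more elaborate): estimate a Laplacian scale parameter from a key-frame residual, then turn it into per-bit-plane soft values for the decoder:

```python
import math

def laplacian_alpha(residual):
    """Estimate the Laplacian scale from a key-frame residual:
    alpha = sqrt(2 / variance) for a zero-mean Laplacian model."""
    n = len(residual)
    mean = sum(residual) / n
    var = sum((r - mean) ** 2 for r in residual) / n
    return math.sqrt(2.0 / var)

def bitplane_llr(y_pixel, plane, alpha, bits=8):
    """Soft input (LLR) for one bit of a Wyner-Ziv pixel X given side
    information pixel y, under X ~ y + Laplacian(alpha): sum the
    conditional pdf over pixel values whose bit `plane` is 0 vs 1."""
    p0 = p1 = 0.0
    for x in range(2 ** bits):
        p = math.exp(-alpha * abs(x - y_pixel))
        if (x >> plane) & 1:
            p1 += p
        else:
            p0 += p
    return math.log(p0 / p1)
```

Re-estimating alpha per region or per bit plane, rather than fixing it from the key-frame residual once, is the direction of the dynamic scheme the abstract proposes.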
Censored Distributed Space-Time Coding for Wireless Sensor Networks
NASA Astrophysics Data System (ADS)
Yiu, S.; Schober, R.
2007-12-01
We consider the application of distributed space-time coding in wireless sensor networks (WSNs). In particular, sensors use a common noncoherent distributed space-time block code (DSTBC) to forward their local decisions to the fusion center (FC) which makes the final decision. We show that the performance of distributed space-time coding is negatively affected by erroneous sensor decisions caused by observation noise. To overcome this problem of error propagation, we introduce censored distributed space-time coding where only reliable decisions are forwarded to the FC. The optimum noncoherent maximum-likelihood and a low-complexity, suboptimum generalized likelihood ratio test (GLRT) FC decision rules are derived and the performance of the GLRT decision rule is analyzed. Based on this performance analysis we derive a gradient algorithm for optimization of the local decision/censoring threshold. Numerical and simulation results show the effectiveness of the proposed censoring scheme making distributed space-time coding a prime candidate for signaling in WSNs.
Evaluation of help model replacement codes
Whiteside, Tad; Hang, Thong; Flach, Gregory
2009-07-01
This work evaluates the computer codes that are proposed to be used to predict percolation of water through the closure-cap and into the waste containment zone at the Department of Energy closure sites. This work compares the currently used water-balance code (HELP) with newly developed computer codes that use unsaturated flow (Richards’ equation). It provides a literature review of the HELP model and the proposed codes, which result in two recommended codes for further evaluation: HYDRUS-2D3D and VADOSE/W. This further evaluation involved performing actual simulations on a simple model and comparing the results of those simulations to those obtained with the HELP code and the field data. From the results of this work, we conclude that the new codes perform nearly the same, although moving forward, we recommend HYDRUS-2D3D.
TEMPEST code simulations of hydrogen distribution in reactor containment structures. Final report
Trent, D.S.; Eyler, L.L.
1985-03-01
The mass transport version of the TEMPEST computer code was used to simulate hydrogen distribution in geometric configurations relevant to reactor containment structures. Predicted results of Battelle-Frankfurt hydrogen distribution tests 1 to 6, and 12 are presented. Agreement between predictions and experimental data is good. Best agreement is obtained using the k-epsilon turbulence model in TEMPEST in flow cases where turbulent diffusion and stable stratification are dominant mechanisms affecting transport. The code's general analysis capabilities are summarized.
Error resiliency of distributed video coding in wireless video communication
NASA Astrophysics Data System (ADS)
Ye, Shuiming; Ouaret, Mourad; Dufaux, Frederic; Ansorge, Michael; Ebrahimi, Touradj
2008-08-01
Distributed Video Coding (DVC) is a new paradigm in video coding, based on the Slepian-Wolf and Wyner-Ziv theorems. DVC offers a number of potential advantages: flexible partitioning of the complexity between the encoder and decoder, robustness to channel errors due to intrinsic joint source-channel coding, codec independent scalability, and multi-view coding without communications between the cameras. In this paper, we evaluate the performance of DVC in an error-prone wireless communication environment. We also present a hybrid spatial and temporal error concealment approach for DVC. Finally, we perform a comparison with a state-of-the-art AVC/H.264 video coding scheme in the presence of transmission errors.
Distributed Turbo Product Codes with Multiple Vertical Parities
NASA Astrophysics Data System (ADS)
Obiedat, Esam A.; Chen, Guotai; Cao, Lei
2009-12-01
We propose a Multiple Vertical Parities Distributed Turbo Product Code (MVP-DTPC) over a cooperative network, using block Bose Chaudhuri Hocquenghem (BCH) codes as component codes. The source broadcasts extended BCH coded frames to the destination and nearby relays. After decoding the received sequences, each relay constructs a product code by arranging the corrected bit sequences in rows and re-encoding them vertically using BCH component codes to obtain Incremental Redundancy (IR) for the source's data. To obtain independent vertical parities from each relay in the same code space, we propose a new circular interleaver for the source's data; different circular interleavers are used to interleave the BCH rows before re-encoding vertically. Maximum A Posteriori (MAP) decoding is achieved by applying maximum transfer of extrinsic information between the multiple decoding stages. This is employed in the modified turbo product decoder, which is proposed to cope with multiple parities. The a posteriori output from a vertical decoding stage is used to derive the soft extrinsic information, which is used as a priori input for the next horizontal decoding stage. Simulation results in an Additive White Gaussian Noise (AWGN) channel using network scenarios show a 0.3-0.5 dB gain in Bit Error Rate (BER) performance over non-cooperative Turbo Product Codes (TPC).
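A toy sketch of the circular-interleaving step, with a single-parity-check column code standing in for the vertical BCH component code (an illustrative simplification of the scheme):

```python
def circular_interleave(row, shift):
    """Circularly rotate one coded row by `shift` positions; each relay
    uses a different shift so its vertical parity is computed on an
    independently permuted arrangement of the source bits."""
    shift %= len(row)
    return row[shift:] + row[:shift]

def vertical_parities(rows, shift_step):
    """Toy 'vertical re-encoding': single-parity-check column parities of
    the circularly interleaved rows (a stand-in for a BCH column code)."""
    shifted = [circular_interleave(r, shift_step * i) for i, r in enumerate(rows)]
    return [sum(col) % 2 for col in zip(*shifted)]
```

Two relays using different shift steps produce different (hence incremental) parity sets for the same source rows.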
From Verified Models to Verifiable Code
NASA Technical Reports Server (NTRS)
Lensink, Leonard; Munoz, Cesar A.; Goodloe, Alwyn E.
2009-01-01
Declarative specifications of digital systems often contain parts that can be automatically translated into executable code. Automated code generation may reduce or eliminate the kinds of errors typically introduced through manual code writing. For this approach to be effective, the generated code should be reasonably efficient and, more importantly, verifiable. This paper presents a prototype code generator for the Prototype Verification System (PVS) that translates a subset of PVS functional specifications into an intermediate language and subsequently to multiple target programming languages. Several case studies are presented to illustrate the tool's functionality. The generated code can be analyzed by software verification tools such as verification condition generators, static analyzers, and software model-checkers to increase the confidence that the generated code is correct.
Distributed Inference in Tree Networks Using Coding Theory
NASA Astrophysics Data System (ADS)
Kailkhura, Bhavya; Vempaty, Aditya; Varshney, Pramod K.
2015-07-01
In this paper, we consider the problem of distributed inference in tree based networks. In the framework considered in this paper, distributed nodes make a 1-bit local decision regarding a phenomenon before sending it to the fusion center (FC) via intermediate nodes. We propose the use of coding theory based techniques to solve this distributed inference problem in such structures. Data is progressively compressed as it moves towards the FC. The FC makes the global inference after receiving data from intermediate nodes. Data fusion at nodes as well as at the FC is implemented via error correcting codes. In this context, we analyze the performance for a given code matrix and also design the optimal code matrices at every level of the tree. We address the problems of distributed classification and distributed estimation separately and develop schemes to perform these tasks in tree networks. The proposed schemes are of practical significance due to their simple structure. We study the asymptotic inference performance of our schemes for two different classes of tree networks: fixed height tree networks, and fixed degree tree networks. We show that the proposed schemes are asymptotically optimal under certain conditions.
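A minimal sketch of code-based fusion at the FC, assuming minimum-Hamming-distance decoding over the rows of a code matrix (an illustrative simplification; the paper's schemes also compress data at intermediate tree levels):

```python
def fuse(received_bits, code_matrix):
    """Minimum-Hamming-distance fusion: declare the hypothesis whose
    code-matrix row is closest to the received (possibly corrupted)
    vector of local decisions."""
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))
    distances = [hamming(received_bits, row) for row in code_matrix]
    return distances.index(min(distances))
```

With well-separated rows, a single flipped local decision is still mapped back to the correct hypothesis.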
Non-coding RNAs and complex distributed genetic networks
NASA Astrophysics Data System (ADS)
Zhdanov, Vladimir P.
2011-08-01
In eukaryotic cells, the mRNA-protein interplay can be dramatically influenced by non-coding RNAs (ncRNAs). Although this new paradigm is now widely accepted, an understanding of the effect of ncRNAs on complex genetic networks is lacking. To clarify what may happen in this case, we propose a mean-field kinetic model describing the influence of ncRNA on a complex genetic network with a distributed architecture, including mutual protein-mediated regulation of many genes transcribed into mRNAs. The ncRNA is considered to associate with mRNAs and inhibit their translation and/or facilitate their degradation. Our results are indicative of the richness of the kinetics under consideration. The main complex features found are bistability and oscillations. One might expect kinetic chaos as well; this feature, however, was not observed in our calculations. In addition, we illustrate the difference between the regulation of distributed networks by mRNA and by ncRNA.
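A minimal sketch of such mean-field kinetics, with ncRNA associating with mRNA and removing it from the translatable pool; the rate constants and the single-gene reduction are illustrative, not the paper's full distributed network:

```python
def ncrna_step(m, p, s, dt, k_m=1.0, k_p=1.0, k_s=0.5, k_assoc=2.0,
               d_m=0.1, d_p=0.1, d_s=0.1):
    """One Euler step of a toy mean-field model: mRNA (m) is translated
    into protein (p); ncRNA (s) associates with mRNA at rate k_assoc*m*s
    and thereby removes it from the translatable pool."""
    dm = k_m - d_m * m - k_assoc * m * s
    dp = k_p * m - d_p * p
    ds = k_s - d_s * s - k_assoc * m * s
    return m + dt * dm, p + dt * dp, s + dt * ds
```

With association switched on, the mRNA level is driven down relative to the ncRNA-free case, which in turn suppresses protein output.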
Distributed generation systems model
Barklund, C.R.
1994-12-31
A slide presentation is given on a distributed generation systems model developed at the Idaho National Engineering Laboratory, and its application to a situation within the Idaho Power Company's service territory. The objectives of the work were to develop a screening model for distributed generation alternatives, to develop a better understanding of distributed generation as a utility resource, and to further INEL's understanding of utility concerns in implementing technological change.
NASA Technical Reports Server (NTRS)
Steyn, J. J.; Born, U.
1970-01-01
A FORTRAN code was developed for the Univac 1108 digital computer to unfold polyenergetic gamma photon experimental distributions from lithium-drifted germanium semiconductor spectrometers. It was designed to analyze the combined continuous and monoenergetic gamma radiation field of radioisotope volumetric sources. The code generates the detector system response matrix function and applies it to monoenergetic spectral components discretely and to the continuum iteratively. It corrects for system drift, source decay, background, and detection efficiency. Results are presented in digital form for differential and integrated photon number and energy distributions, and for exposure dose.
Streamlined Genome Sequence Compression using Distributed Source Coding
Wang, Shuang; Jiang, Xiaoqian; Chen, Feng; Cui, Lijuan; Cheng, Samuel
2014-01-01
We aim at developing a streamlined genome sequence compression algorithm to support alternative miniaturized sequencing devices, which have limited communication, storage, and computation power. Existing techniques that require a heavy client (encoder side) cannot be applied. To tackle this challenge, we carefully examined distributed source coding theory and developed a customized reference-based genome compression protocol to meet the low-complexity need at the client side. Based on the variation between source and reference, our protocol adaptively picks either syndrome coding or hash coding to compress subsequences of changing code length. Our experimental results showed promising performance of the proposed method compared with the state-of-the-art algorithm (GRS). PMID:25520552
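An illustrative sketch of the adaptive mode choice, assuming a per-subsequence mismatch count against the reference decides between syndrome and hash coding; the threshold and function name are hypothetical, not the protocol's actual rule:

```python
def choose_mode(subseq, ref_subseq, max_syndrome_mismatch=2):
    """Pick a compression mode per subsequence: syndrome coding when the
    source is close to the reference (few mismatches, within the
    correcting power of the channel code), hash coding otherwise.
    The threshold here is purely illustrative."""
    mismatches = sum(s != r for s, r in zip(subseq, ref_subseq))
    return "syndrome" if mismatches <= max_syndrome_mismatch else "hash"
```

A nearly identical subsequence keeps the cheap syndrome path; a heavily varied one falls back to hashing.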
Distributed wavefront coding for wide angle imaging system
NASA Astrophysics Data System (ADS)
Larivière-Bastien, Martin; Zhang, Hu; Thibault, Simon
2011-10-01
The emerging imaging-system paradigm known as wavefront coding, which employs joint optimization of both the optical system and the digital post-processing system, has not only increased the degrees of design freedom but also brought several significant system-level benefits. The effectiveness of wavefront coding has been demonstrated by several proof-of-concept systems in the reduction of focus-related aberrations and the extension of depth of focus. While previous research on wavefront coding was mainly targeted at imaging systems having a small or modest field of view (FOV), we present a preliminary study on wavefront coding applied to panoramic optical systems. Unlike traditional wavefront coding systems, which only require the constancy of the modulation transfer function (MTF) over an extended focus range, wavefront-coded panoramic systems particularly emphasize the mitigation of significant off-axis aberrations such as field curvature, coma, and astigmatism. The restrictions of using a traditional generalized cubic polynomial pupil phase mask for wide-angle systems are studied in this paper. It is shown that a traditional approach can be used when the variation of the off-axis aberrations remains modest. Consequently, we propose to study how a distributed wavefront coding approach, where two surfaces are used for encoding the wavefront, can be applied to wide-angle lenses. A few cases designed using Zemax are presented and discussed.
COLD-SAT Dynamic Model Computer Code
NASA Technical Reports Server (NTRS)
Bollenbacher, G.; Adams, N. S.
1995-01-01
COLD-SAT Dynamic Model (CSDM) computer code implements six-degree-of-freedom, rigid-body mathematical model for simulation of spacecraft in orbit around Earth. Investigates flow dynamics and thermodynamics of subcritical cryogenic fluids in microgravity. Consists of three parts: translation model, rotation model, and slosh model. Written in FORTRAN 77.
Two-dimensional MHD generator model. [GEN code
Geyer, H. K.; Ahluwalia, R. K.; Doss, E. D.
1980-09-01
A steady-state, two-dimensional MHD generator code, GEN, is presented. The code solves the equations of conservation of mass, momentum, and energy, using a Von Mises transformation and a local linearization of the equations. By splitting the source terms into a part proportional to the axial pressure gradient and a part independent of the gradient, the pressure distribution along the channel is easily obtained to satisfy various criteria. Thus, the code can run effectively in both design mode, where the channel geometry is determined, and analysis mode, where the geometry is known in advance. The code also employs a mixing-length concept for turbulent flows, Cebeci and Chang's wall roughness model, and an extension of that model to the effective thermal diffusivities. Results on code validation, as well as comparisons of skin friction and Stanton number calculations with experimental results, are presented.
Genetic coding and gene expression - new Quadruplet genetic coding model
NASA Astrophysics Data System (ADS)
Shankar Singh, Rama
2012-07-01
The successful demonstration of the human genome project has opened the door not only to developing personalized medicine and cures for genetic diseases, but may also answer the complex and difficult question of the origin of life; it may make the 21st century a century of the biological sciences as well. According to the central dogma of biology, genetic codons, in conjunction with tRNA, play a key role in translating the RNA bases into a sequence of amino acids, leading to a synthesized protein. This is the most critical step in synthesizing the right protein needed for personalized medicine and curing genetic diseases. So far, only triplet codons, involving three bases of RNA transcribed from DNA bases, have been used. Since this approach has several inconsistencies and limitations, even the promise of personalized medicine has not been realized. The new quadruplet genetic coding model proposed and developed here involves all four RNA bases, which in conjunction with tRNA will synthesize the right protein. The transcription and translation processes remain the same, but the quadruplet codons will help overcome most of the inconsistencies and limitations of the triplet codes. Details of this new quadruplet genetic coding model and its potential applications, including its relevance to the origin of life, will be presented.
Transmutation Fuel Performance Code Thermal Model Verification
Gregory K. Miller; Pavel G. Medvedev
2007-09-01
The FRAPCON fuel performance code is being modified to model the performance of the nuclear fuels of interest to the Global Nuclear Energy Partnership (GNEP). The present report documents the effort to verify the FRAPCON thermal model. It was found that, with minor modifications, the FRAPCON thermal model's temperature calculation agrees with that of the commercial software ABAQUS (Version 6.4-4). This report outlines the verification methodology, the code input, and the calculation results.
Context-based lossless image compression with optimal codes for discretized Laplacian distributions
NASA Astrophysics Data System (ADS)
Giurcaneanu, Ciprian Doru; Tabus, Ioan; Stanciu, Cosmin
2003-05-01
Lossless image compression has become an important research topic, especially in relation to the JPEG-LS standard. Recently, the techniques known for designing optimal codes for sources with infinite alphabets have been applied to quantized Laplacian sources, which have probability mass functions with two geometrically decaying tails. Due to the simple parametric model of the source distribution, the Huffman iterations can be carried out analytically, using the concept of a reduced source, and the final codes are obtained as a sequence of very simple arithmetic operations, avoiding the need to store coding tables. We propose the use of these (optimal) codes in conjunction with context-based prediction for noiseless compression of images. To further reduce the average code length, we design escape sequences to be employed when the estimation of the distribution parameter is unreliable. Results on standard test files show improvements in compression ratio when compared with JPEG-LS.
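As a sketch of table-free codes for geometrically decaying tails, the following folds the two-sided distribution with a zigzag map and emits a Golomb-Rice codeword. This is illustrative of the general idea only, not the paper's exact construction (which performs Huffman iterations on reduced sources); parameter k >= 1 is assumed.

```python
def zigzag(e):
    """Map a signed prediction error to a non-negative index, folding the
    two-sided geometric distribution into a one-sided one:
    0, -1, 1, -2, 2, ...  ->  0, 1, 2, 3, 4, ..."""
    return 2 * e if e >= 0 else -2 * e - 1

def golomb_rice(n, k):
    """Golomb-Rice codeword for n >= 0 with parameter 2**k (k >= 1):
    unary quotient, a '0' terminator, then a k-bit binary remainder.
    Needs only arithmetic, no stored coding tables."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")
```

Small errors get short codewords; the codeword length grows linearly with the folded index, matching a geometric tail.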
Generation of Java code from Alvis model
NASA Astrophysics Data System (ADS)
Matyasik, Piotr; Szpyrka, Marcin; Wypych, Michał
2015-12-01
Alvis is a formal language that combines graphical modelling of interconnections between system entities (called agents) and a high level programming language to describe behaviour of any individual agent. An Alvis model can be verified formally with model checking techniques applied to the model LTS graph that represents the model state space. This paper presents transformation of an Alvis model into executable Java code. Thus, the approach provides a method of automatic generation of a Java application from formally verified Alvis model.
Bounding Species Distribution Models
NASA Technical Reports Server (NTRS)
Stohlgren, Thomas J.; Jarnevich, Cahterine S.; Morisette, Jeffrey T.; Esaias, Wayne E.
2011-01-01
Species distribution models are increasing in popularity for mapping suitable habitat for species of management concern. Many investigators now recognize that extrapolations of these models with geographic information systems (GIS) might be sensitive to the environmental bounds of the data used in their development, yet there is no recommended best practice for "clamping" model extrapolations. We relied on two commonly used modeling approaches: classification and regression tree (CART) and maximum entropy (Maxent) models, and we tested a simple alteration of the model extrapolations, bounding extrapolations to the maximum and minimum values of primary environmental predictors, to provide a more realistic map of suitable habitat of hybridized Africanized honey bees in the southwestern United States. Findings suggest that multiple models of bounding, and the most conservative bounding of species distribution models, like those presented here, should probably replace the unbounded or loosely bounded techniques currently used [Current Zoology 57 (5): 642-647, 2011].
Bounding species distribution models
Stohlgren, T.J.; Jarnevich, C.S.; Esaias, W.E.; Morisette, J.T.
2011-01-01
Species distribution models are increasing in popularity for mapping suitable habitat for species of management concern. Many investigators now recognize that extrapolations of these models with geographic information systems (GIS) might be sensitive to the environmental bounds of the data used in their development, yet there is no recommended best practice for "clamping" model extrapolations. We relied on two commonly used modeling approaches: classification and regression tree (CART) and maximum entropy (Maxent) models, and we tested a simple alteration of the model extrapolations, bounding extrapolations to the maximum and minimum values of primary environmental predictors, to provide a more realistic map of suitable habitat of hybridized Africanized honey bees in the southwestern United States. Findings suggest that multiple models of bounding, and the most conservative bounding of species distribution models, like those presented here, should probably replace the unbounded or loosely bounded techniques currently used. ?? 2011 Current Zoology.
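A minimal sketch of the "clamping" alteration described above: each environmental predictor layer of the projection grid is bounded to the minimum and maximum values seen in the training data before the fitted model is evaluated. Names are illustrative.

```python
def bound_predictors(grid, train_min, train_max):
    """Clamp each environmental predictor layer of a projection grid to
    the min/max observed in the training data, so the fitted species
    distribution model is never extrapolated outside the environmental
    bounds it was built on."""
    return [
        [min(max(v, lo), hi) for v in layer]
        for layer, lo, hi in zip(grid, train_min, train_max)
    ]
```

Values inside the training envelope pass through unchanged; values outside are pulled back to the envelope edge.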
Vaginal drug distribution modeling.
Katz, David F; Yuan, Andrew; Gao, Yajing
2015-09-15
This review presents and applies fundamental mass transport theory describing the diffusion and convection driven mass transport of drugs to the vaginal environment. It considers sources of variability in the predictions of the models. It illustrates use of model predictions of microbicide drug concentration distribution (pharmacokinetics) to gain insights about drug effectiveness in preventing HIV infection (pharmacodynamics). The modeling compares vaginal drug distributions after different gel dosage regimens, and it evaluates consequences of changes in gel viscosity due to aging. It compares vaginal mucosal concentration distributions of drugs delivered by gels vs. intravaginal rings. Finally, the modeling approach is used to compare vaginal drug distributions across species with differing vaginal dimensions. Deterministic models of drug mass transport into and throughout the vaginal environment can provide critical insights about the mechanisms and determinants of such transport. This knowledge, and the methodology that obtains it, can be applied and translated to multiple applications, involving the scientific underpinnings of vaginal drug distribution and the performance evaluation and design of products, and their dosage regimens, that achieve it. PMID:25933938
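A minimal illustration of the diffusion component of such transport models: one explicit finite-difference scheme for Fick's second law in one dimension, with zero-flux boundaries and illustrative (not physiological) parameters.

```python
def diffuse_1d(c, D, dx, dt, steps):
    """Explicit finite-difference update for Fick's second law,
    dc/dt = D * d2c/dx2, with zero-flux boundaries.  The scheme is
    stable only when r = D*dt/dx**2 <= 0.5."""
    r = D * dt / dx**2
    assert r <= 0.5, "explicit scheme unstable for this r"
    for _ in range(steps):
        new = c[:]
        for i in range(len(c)):
            left = c[i - 1] if i > 0 else c[i]
            right = c[i + 1] if i < len(c) - 1 else c[i]
            new[i] = c[i] + r * (left - 2 * c[i] + right)
        c = new
    return c
```

Starting from drug concentrated in the first cell (a gel layer against tissue), the profile spreads while total mass is conserved by the zero-flux boundaries.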
Complex phylogenetic distribution of a non-canonical genetic code in green algae
2010-01-01
Background A non-canonical nuclear genetic code, in which TAG and TAA have been reassigned from stop codons to glutamine, has evolved independently in several eukaryotic lineages, including the ulvophycean green algal orders Dasycladales and Cladophorales. To study the phylogenetic distribution of the standard and non-canonical genetic codes, we generated sequence data of a representative set of ulvophycean green algae and used a robust green algal phylogeny to evaluate different evolutionary scenarios that may account for the origin of the non-canonical code. Results This study demonstrates that the Dasycladales and Cladophorales share this alternative genetic code with the related order Trentepohliales and the genus Blastophysa, but not with the Bryopsidales, which is sister to the Dasycladales. This complex phylogenetic distribution whereby all but one representative of a single natural lineage possesses an identical deviant genetic code is unique. Conclusions We compare different evolutionary scenarios for the complex phylogenetic distribution of this non-canonical genetic code. A single transition to the non-canonical code followed by a reversal to the canonical code in the Bryopsidales is highly improbable due to the profound genetic changes that coincide with codon reassignment. Multiple independent gains of the non-canonical code, as hypothesized for ciliates, are also unlikely because the same deviant code has evolved in all lineages. Instead we favor a stepwise acquisition model, congruent with the ambiguous intermediate model, whereby the non-canonical code observed in these green algal orders has a single origin. We suggest that the final steps from an ambiguous intermediate situation to a non-canonical code have been completed in the Trentepohliales, Dasycladales, Cladophorales and Blastophysa but not in the Bryopsidales. We hypothesize that in the latter lineage an initial stage characterized by translational ambiguity was not followed by final
Predictive coding as a model of cognition.
Spratling, M W
2016-08-01
Previous work has shown that predictive coding can provide a detailed explanation of a very wide range of low-level perceptual processes. It is also widely believed that predictive coding can account for high-level, cognitive, abilities. This article provides support for this view by showing that predictive coding can simulate phenomena such as categorisation, the influence of abstract knowledge on perception, recall and reasoning about conceptual knowledge, context-dependent behavioural control, and naive physics. The particular implementation of predictive coding used here (PC/BC-DIM) has previously been used to simulate low-level perceptual behaviour and the neural mechanisms that underlie them. This algorithm thus provides a single framework for modelling both perceptual and cognitive brain function. PMID:27118562
Distributed magnetic field positioning system using code division multiple access
NASA Technical Reports Server (NTRS)
Prigge, Eric A. (Inventor)
2003-01-01
An apparatus and methods for a magnetic field positioning system use a fundamentally different, and advantageous, signal structure and multiple access method, known as Code Division Multiple Access (CDMA). This signal architecture, when combined with processing methods, leads to advantages over the existing technologies, especially when applied to a system with a large number of magnetic field generators (beacons). Beacons at known positions generate coded magnetic fields, and a magnetic sensor measures a sum field and decomposes it into component fields to determine the sensor position and orientation. The apparatus and methods can have a large, 'building-sized' coverage area. The system allows for numerous beacons to be distributed throughout an area at a number of different locations. A method to estimate position and attitude, with no prior knowledge, uses dipole fields produced by these beacons in different locations.
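A toy sketch of the CDMA decomposition step, using orthogonal Walsh codes: correlating the measured sum field against each beacon's spreading code recovers that beacon's component amplitude, since the cross terms cancel. The length-4 codes and amplitudes are illustrative.

```python
# Four orthogonal length-4 Walsh codes (rows of a Hadamard matrix).
H4 = [
    [1,  1,  1,  1],
    [1, -1,  1, -1],
    [1,  1, -1, -1],
    [1, -1, -1,  1],
]

def decompose(sum_field, codes=H4):
    """Recover each beacon's field amplitude from the measured sum field
    by correlating against its code: orthogonality makes every other
    beacon's contribution vanish in the correlation."""
    n = len(codes[0])
    return [sum(s * c for s, c in zip(sum_field, code)) / n for code in codes]
```

Summing four coded components and decomposing returns the original per-beacon amplitudes exactly (in the noiseless case).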
Parallelization of Finite Element Analysis Codes Using Heterogeneous Distributed Computing
NASA Technical Reports Server (NTRS)
Ozguner, Fusun
1996-01-01
Performance gains in computer design are quickly consumed as users seek to analyze larger problems to a higher degree of accuracy. Innovative computational methods, such as parallel and distributed computing, seek to multiply the power of existing hardware technology to satisfy the computational demands of large applications. In the early stages of this project, experiments were performed using two large, coarse-grained applications, CSTEM and METCAN, which were parallelized on an Intel iPSC/860 hypercube. It was found that the overall speedup was very low, due to large, inherently sequential code segments present in the applications. The overall parallel execution time T_par of an application depends on these sequential segments: if they make up a significant fraction of the overall code, the application will have a poor speedup measure.
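The speedup limit described above is Amdahl's law; a one-line sketch, with the sequential fraction as an assumed input:

```python
def amdahl_speedup(sequential_fraction, processors):
    """Amdahl's law: the speedup achievable when a fraction f of the code
    is inherently sequential and the remainder parallelizes perfectly."""
    f = sequential_fraction
    return 1.0 / (f + (1.0 - f) / processors)
```

Even with a thousand processors, a code that is half sequential cannot reach a speedup of 2, which matches the poor speedups observed for CSTEM and METCAN.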
Selective video encryption of a distributed coded bitstream using LDPC codes
NASA Astrophysics Data System (ADS)
Um, Hwayoung; Delp, Edward J.
2006-02-01
Selective encryption is a technique used to minimize computational complexity or enable system functionality by encrypting only a portion of a compressed bitstream while still achieving reasonable security. For selective encryption to work, we need to rely not only on the beneficial effects of redundancy reduction, but also on the characteristics of the compression algorithm, which concentrate the important data representing the source in a relatively small fraction of the compressed bitstream. These important elements of the compressed data become candidates for selective encryption. In this paper, we combine encryption and distributed video source coding to consider which types of bits are most effective for selective encryption of a video sequence that has been compressed using a distributed source coding method based on LDPC codes. Instead of encrypting the entire video stream bit by bit, we encrypt only the highly sensitive bits. By combining the compression and encryption tasks and thus reducing the number of bits encrypted, we can achieve a reduction in system complexity.
Internal Dosimetry Code System Using Biokinetics Models
2003-11-12
Version 00 InDose is an internal dosimetry code that calculates dose estimates using the biokinetic models presented in ICRP-56 through ICRP-71, as well as older ones. The code uses the ICRP-66 respiratory tract model and the ICRP-30 gastrointestinal tract model together with the new and old biokinetic models. The code was written so that the user can change any parameter of any of the models without recompiling the code. All parameters are given in well-annotated parameter files that the user may change; by default, these files contain the values listed in the ICRP publications. The full InDose code was planned to have three parts: (1) the main part, which includes the uptake and systemic models and is used to calculate the activities in the body tissues and the excretion as a function of time for a given intake; (2) an optimization module for automatic estimation of the intake for a specific exposure case; and (3) a module to calculate the dose due to the estimated intake. Currently, the code performs only its main task (part 1), while the other two must be done externally using other tools; the developers would like to add these modules in the future to provide a complete solution. The code was tested extensively to verify the accuracy of its results. The verification procedure was divided into three parts: (1) verification of the implementation of each model, (2) verification of the integrity of the whole code, and (3) a usability test. The first two parts consisted of comparing results obtained with InDose to published results for the same cases, for example the ICRP-78 monitoring data. The last part consisted of participating in the 3rd EIE-IDA exercise and assessing some of the scenarios provided there. These tests were presented in a few publications. Good agreement was found between the results of InDose and the published data.
Weight distributions for turbo codes using random and nonrandom permutations
NASA Technical Reports Server (NTRS)
Dolinar, S.; Divsalar, D.
1995-01-01
This article takes a preliminary look at the weight distributions achievable for turbo codes using random, nonrandom, and semirandom permutations. Due to the recursiveness of the encoders, it is important to distinguish between self-terminating and non-self-terminating input sequences. The non-self-terminating sequences have little effect on decoder performance, because they accumulate high encoded weight until they are artificially terminated at the end of the block. From probabilistic arguments based on selecting the permutations randomly, it is concluded that the self-terminating weight-2 data sequences are the most important consideration in the design of constituent codes; higher-weight self-terminating sequences have successively decreasing importance. Also, increasing the number of codes and, correspondingly, the number of permutations makes it more and more likely that the bad input sequences will be broken up by one or more of the permuters. It is possible to design nonrandom permutations that ensure that the minimum distance due to weight-2 input sequences grows roughly as the square root of (2N), where N is the block length. However, these nonrandom permutations amplify the bad effects of higher-weight inputs, and as a result they are inferior in performance to randomly selected permutations. But there are 'semirandom' permutations that perform nearly as well as the designed nonrandom permutations with respect to weight-2 input sequences and are not as susceptible to being foiled by higher-weight inputs.
Distributed Coding/Decoding Complexity in Video Sensor Networks
Cordeiro, Paulo J.; Assunção, Pedro
2012-01-01
Video Sensor Networks (VSNs) are recent communication infrastructures used to capture and transmit dense visual information from an application context. In such large-scale environments, which include video coding, transmission, and display/storage, there are several open problems to overcome in practical implementations. This paper addresses the most relevant challenges posed by VSNs, namely stringent bandwidth usage and processing time/power constraints. In particular, the paper proposes a novel VSN architecture where large sets of visual sensors with embedded processors are used for compression and transmission of coded streams to gateways, which in turn transrate the incoming streams and adapt them to the variable complexity requirements of both the sensor encoders and the end-user decoder terminals. Such gateways provide real-time transcoding functionality for bandwidth adaptation and coding/decoding complexity distribution by transferring the most complex video encoding/decoding tasks to the transcoding gateway at the expense of a limited increase in bit rate. Then, a method to reduce the decoding complexity, suitable for system-on-chip implementation, is proposed to operate at the transcoding gateway whenever decoders with constrained resources are targeted. The results show that the proposed method achieves good performance, and its inclusion in the VSN infrastructure provides an additional level of complexity-control functionality. PMID:22736972
Practical distributed video coding in packet lossy channels
NASA Astrophysics Data System (ADS)
Qing, Linbo; Masala, Enrico; He, Xiaohai
2013-07-01
Improving error resilience of video communications over packet lossy channels is an important and tough task. We present a framework to optimize the quality of video communications based on distributed video coding (DVC) in practical packet lossy network scenarios. The peculiar characteristics of DVC indeed require a number of adaptations to take full advantage of its intrinsic robustness when dealing with data losses of typical real packet networks. This work proposes a new packetization scheme, an investigation of the best error-correcting codes to use in a noisy environment, a practical rate-allocation mechanism, which minimizes decoder feedback, and an improved side-information generation and reconstruction function. Performance comparisons are presented with respect to a conventional packet video communication using H.264/advanced video coding (AVC). Although currently the H.264/AVC rate-distortion performance in case of no loss is better than state-of-the-art DVC schemes, under practical packet lossy conditions, the proposed techniques provide better performance with respect to an H.264/AVC-based system, especially at high packet loss rates. Thus the error resilience of the proposed DVC scheme is superior to the one provided by H.264/AVC, especially in the case of transmission over packet lossy networks.
Robust video transmission with distributed source coded auxiliary channel.
Wang, Jiajun; Majumdar, Abhik; Ramchandran, Kannan
2009-12-01
We propose a novel solution to the problem of robust, low-latency video transmission over lossy channels. Predictive video codecs, such as MPEG and H.26x, are very susceptible to prediction mismatch between encoder and decoder or "drift" when there are packet losses. These mismatches lead to a significant degradation in the decoded quality. To address this problem, we propose an auxiliary codec system that sends additional information alongside an MPEG or H.26x compressed video stream to correct for errors in decoded frames and mitigate drift. The proposed system is based on the principles of distributed source coding and uses the (possibly erroneous) MPEG/H.26x decoder reconstruction as side information at the auxiliary decoder. The distributed source coding framework depends upon knowing the statistical dependency (or correlation) between the source and the side information. We propose a recursive algorithm to analytically track the correlation between the original source frame and the erroneous MPEG/H.26x decoded frame. Finally, we propose a rate-distortion optimization scheme to allocate the rate used by the auxiliary encoder among the encoding blocks within a video frame. We implement the proposed system and present extensive simulation results that demonstrate significant gains in performance both visually and objectively (on the order of 2 dB in PSNR over forward error correction based solutions and 1.5 dB in PSNR over intrarefresh based solutions for typical scenarios) under tight latency constraints. PMID:19703801
Behavioral correlates of the distributed coding of spatial context.
Anderson, Michael I; Killing, Sarah; Morris, Caitlin; O'Donoghue, Alan; Onyiagha, Dikennam; Stevenson, Rosemary; Verriotis, Madeleine; Jeffery, Kathryn J
2006-01-01
Hippocampal place cells respond heterogeneously to elemental changes of a compound spatial context, suggesting that they form a distributed code of context, whereby context information is shared across a population of neurons. The question arises as to what this distributed code might be useful for. The present study explored two possibilities: one, that it allows contexts with common elements to be disambiguated, and the other, that it allows a given context to be associated with more than one outcome. We used two naturalistic measures of context processing in rats, rearing and thigmotaxis (boundary-hugging), to explore how rats responded to contextual novelty and to relate this to the behavior of place cells. In experiment 1, rats showed dishabituation of rearing to a novel reconfiguration of familiar context elements, suggesting that they perceived the reconfiguration as novel, a behavior that parallels that of place cells in a similar situation. In experiment 2, rats were trained in a place preference task on an open-field arena. A change in the arena context triggered renewed thigmotaxis, and yet navigation continued unimpaired, indicating simultaneous representation of both the altered contextual and constant spatial cues. Place cells similarly exhibited a dual population of responses, consistent with the hypothesis that their activity underlies spatial behavior. Together, these experiments suggest that heterogeneous context encoding (or "partial remapping") by place cells may function to allow the flexible assignment of associations to contexts, a faculty that could be useful in episodic memory encoding. PMID:16921500
Pressure distribution based optimization of phase-coded acoustical vortices
Zheng, Haixiang; Gao, Lu; Dai, Yafei; Ma, Qingyu; Zhang, Dong
2014-02-28
Based on the acoustic radiation of a point source, the physical mechanism of phase-coded acoustical vortices is investigated with derivations of the acoustic pressure and vibration velocity. Various factors that affect the optimization of acoustical vortices are analyzed. Numerical simulations of the axial, radial, and circular pressure distributions are performed for different source numbers, frequencies, and axial distances. The results prove that the acoustic pressure of acoustical vortices is linearly proportional to the source number, and that lower fluctuations of the circular pressure distribution can be produced with more sources. As the source frequency increases, the acoustic pressure of acoustical vortices increases accordingly with a decreased vortex radius. Meanwhile, an increased vortex radius with reduced acoustic pressure is achieved at longer axial distances. With the 6-source experimental system, circular and radial pressure distributions at various frequencies and axial distances have been measured, showing good agreement with the numerical simulations. These favorable results for the acoustic pressure distributions provide a theoretical basis for further studies of acoustical vortices.
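The point-source superposition underlying such phase-coded vortices can be sketched as follows: source n on a ring of N sources carries a coding phase 2πln/N for topological charge l, and the field is the sum of the spherical waves. The geometry, wavenumber, and function name below are illustrative assumptions, not the paper's setup:

```python
import cmath
import math

def vortex_pressure(field_point, n_sources, charge, ring_radius, k):
    """Complex pressure from n_sources phase-coded point sources evenly
    spaced on a ring in the z=0 plane; source n carries the coding phase
    2*pi*charge*n/n_sources. Monopole amplitude falls off as 1/r."""
    x, y, z = field_point
    p = 0j
    for n in range(n_sources):
        theta = 2 * math.pi * n / n_sources
        sx = ring_radius * math.cos(theta)
        sy = ring_radius * math.sin(theta)
        r = math.sqrt((x - sx) ** 2 + (y - sy) ** 2 + z ** 2)
        phase = 2 * math.pi * charge * n / n_sources
        p += cmath.exp(1j * (k * r + phase)) / r
    return p
```

On the ring axis all path lengths are equal, so for a nonzero charge the coding phases sum to zero and the characteristic central null of the vortex appears, while for charge 0 the contributions add in phase.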
SORD: A New Rupture Dynamics Modeling Code
NASA Astrophysics Data System (ADS)
Ely, G.; Minster, B.; Day, S.
2005-12-01
We report on our progress in validating our rupture dynamics modeling code, which is capable of dealing with nonplanar faults and surface topography. The method uses a "mimetic" approach to model spontaneous rupture on a fault within a 3D isotropic anelastic solid, wherein the equations of motion are approximated with a second-order Support-Operator method on a logically rectangular mesh. Grid cells are not required to be parallelepipeds, however, so that non-rectangular meshes can be supported to model complex regions. However, for areas of the mesh that are in fact rectangular, the code uses a streamlined version of the algorithm that takes advantage of the simplifications of the operators in such areas. The fault itself is modeled using a double-node technique, and the rheology on the fault surface is modeled through a slip-weakening, frictional, internal boundary condition. The Support Operator Rupture Dynamics (SORD) code was prototyped in MATLAB, and all algorithms have been validated against known solutions (including analytical solutions, e.g., Kostrov, 1964) or previously validated solutions. This validation effort is conducted in the context of the SCEC Dynamic Rupture model validation effort led by R. Archuleta and R. Harris. Absorbing boundaries at the model edges are handled using the perfectly matched layers (PML) method (Olsen & Marcinkovich, 2003). PML is shown to work extremely well on rectangular meshes. We show that our implementation is also effective on non-rectangular meshes under the restriction that the boundary be planar. For validation of the model we use a variety of test cases on two types of meshes: a rectangular mesh and a skewed mesh. The skewed mesh amplifies any biases caused by the Support-Operator method on non-rectangular elements. Wave propagation and absorbing boundaries are tested with a spherical wave source. Rupture dynamics on a planar fault are tested against (1) a Kostrov analytical solution, (2) data from foam rubber scale models
28 CFR 36.607 - Guidance concerning model codes.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 28 Judicial Administration 1 2013-07-01 2013-07-01 false Guidance concerning model codes. 36.607... Codes § 36.607 Guidance concerning model codes. Upon application by an authorized representative of a private entity responsible for developing a model code, the Assistant Attorney General may review...
28 CFR 36.607 - Guidance concerning model codes.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 28 Judicial Administration 1 2014-07-01 2014-07-01 false Guidance concerning model codes. 36.607... Codes § 36.607 Guidance concerning model codes. Upon application by an authorized representative of a private entity responsible for developing a model code, the Assistant Attorney General may review...
28 CFR 36.607 - Guidance concerning model codes.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 28 Judicial Administration 1 2012-07-01 2012-07-01 false Guidance concerning model codes. 36.607... Codes § 36.607 Guidance concerning model codes. Upon application by an authorized representative of a private entity responsible for developing a model code, the Assistant Attorney General may review...
28 CFR 36.607 - Guidance concerning model codes.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 28 Judicial Administration 1 2011-07-01 2011-07-01 false Guidance concerning model codes. 36.607... Codes § 36.607 Guidance concerning model codes. Upon application by an authorized representative of a private entity responsible for developing a model code, the Assistant Attorney General may review...
Rapid installation of numerical models in multiple parent codes
Brannon, R.M.; Wong, M.K.
1996-10-01
A set of "model interface guidelines," called MIG, is offered as a means to more rapidly install numerical models (such as stress-strain laws) into any parent code (hydrocode, finite element code, etc.) without having to modify the model subroutines. The model developer (who creates the model package in compliance with the guidelines) specifies the model's input and storage requirements in a standardized way. For portability, database management (such as saving user inputs and field variables) is handled by the parent code. To date, MIG has proved viable in beta installations of several diverse models in vectorized and parallel codes written in different computer languages. A MIG-compliant model can be installed in different codes without modifying the model's subroutines. By maintaining one model for many codes, MIG facilitates code-to-code comparisons and reduces duplication of effort, potentially reducing the cost of installing and sharing models.
Adaptive distributed video coding with correlation estimation using expectation propagation
NASA Astrophysics Data System (ADS)
Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel
2012-10-01
Distributed video coding (DVC) is rapidly gaining popularity by shifting complexity from the encoder to the decoder with, at least in theory, no degradation in compression performance. In contrast with conventional video codecs, the inter-frame correlation in DVC is exploited at the decoder, based on the received syndromes of the Wyner-Ziv (WZ) frame and a side information (SI) frame generated from other frames available only at the decoder. However, the ultimate decoding performance of DVC rests on the assumption that perfect knowledge of the correlation statistics between the WZ and SI frames is available at the decoder. Therefore, the ability to obtain a good statistical correlation estimate is becoming increasingly important in practical DVC implementations. Generally, existing correlation estimation methods in DVC fall into two main types: pre-estimation, where estimation is completed before decoding, and on-the-fly (OTF) estimation, where the estimate is refined iteratively during decoding. As changes between frames may be unpredictable or dynamic, OTF estimation methods usually outperform pre-estimation techniques at the cost of increased decoding complexity (e.g., sampling methods). In this paper, we propose a low-complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed OTF, jointly with decoding of the factor graph-based DVC code. Among approximate inference methods, EP generally offers a better tradeoff between accuracy and complexity. Experimental results show that our proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other cases without correlation tracking, and achieves comparable decoding performance with significantly lower complexity than sampling methods.
Myers, B.F.; Montgomery, F.C.; Morris, R.N.
1993-08-01
The equivalent sphere model, which is widely used in calculating the release of fission gases from nuclear fuel, is idealized. The model is based on the diffusion of fission products in, and their escape from, a homogeneous sphere of fuel; the fission products are generated at a constant rate and undergo radiodecay. The fuel is assumed to be a set of spherical particles with a common radius. The value of the radius is such that the surface-to-volume ratio, S/V, of the set of spherical particles is the same as the S/V of the fuel mass of interest. The release rate depends on the dimensionless quantity λa²/D, where λ is the radiodecay constant, a the equivalent sphere radius, and D the diffusion coefficient. In the limit λt ≫ 1, the steady-state fractional release for isotopes with half-lives less than about 5 d is given by the familiar relation R/B = 3√(D/(λa²)) (1). For the spherical particles, S/V = 3/a. However, in important cases, the assumption of a single value of a is inappropriate. Examples of configurations for which multiple values of a are appropriate include powders, hydrolyzed fuel kernels, normally configured HTR fuel particles and, perhaps, fuel kernels alone. In the latter case, one can imagine a distribution of values of a whose mean yields the value appropriate for agreement of Eq. (1) with measurement.
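As a numerical illustration of Eq. (1), the equivalent radius follows from S/V = 3/a and the steady-state fractional release can then be evaluated directly. The values below are arbitrary illustrative numbers, not data from the report:

```python
import math

def equivalent_radius(s_over_v):
    """Equivalent sphere radius a chosen so that S/V = 3/a."""
    return 3.0 / s_over_v

def fractional_release(D, lam, a):
    """Steady-state R/B = 3*sqrt(D/(lam*a**2)) from Eq. (1),
    valid in the limit lam*t >> 1 for short-lived isotopes."""
    return 3.0 * math.sqrt(D / (lam * a ** 2))

# illustrative numbers: D in m^2/s, lam in 1/s, S/V in 1/m
a = equivalent_radius(1.0e4)                # a = 3e-4 m
rb = fractional_release(1.0e-13, 1.0e-5, a)
```

Note that R/B depends on a only through λa²/D, so halving the equivalent radius doubles the predicted fractional release.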
FPGA based digital phase-coding quantum key distribution system
NASA Astrophysics Data System (ADS)
Lu, XiaoMing; Zhang, LiJun; Wang, YongGang; Chen, Wei; Huang, DaJun; Li, Deng; Wang, Shuang; He, DeYong; Yin, ZhenQiang; Zhou, Yu; Hui, Cong; Han, ZhengFu
2015-12-01
Quantum key distribution (QKD) is a technology with the potential to achieve information-theoretic security. Phase coding is an important approach to developing practical QKD systems in fiber channels. In order to improve the phase-coding modulation rate, we propose a new digital modulation method in this paper and construct a compact and robust prototype QKD system, using currently available components in our lab, to demonstrate the effectiveness of the method. The system was deployed in a laboratory environment over a 50 km fiber and operated continuously for 87 h without manual interaction. The quantum bit error rate (QBER) of the system was stable, with an average value of 3.22%, and the secure key generation rate was 8.91 kbps. Although the modulation rate of the photons in the demo system was only 200 MHz, limited by the Faraday-Michelson interferometer (FMI) structure, the proposed method and the field-programmable gate array (FPGA) based electronics scheme have great potential for high-speed QKD systems with gigabit-per-second modulation rates.
24 CFR 200.926b - Model codes.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false Model codes. 200.926b Section 200... DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Minimum Property Standards § 200.926b Model codes. (a) Incorporation by reference. The following model code publications are incorporated by reference in...
24 CFR 200.926b - Model codes.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 24 Housing and Urban Development 2 2014-04-01 2014-04-01 false Model codes. 200.926b Section 200... DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Minimum Property Standards § 200.926b Model codes. (a) Incorporation by reference. The following model code publications are incorporated by reference in...
49 CFR 41.120 - Acceptable model codes.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 49 Transportation 1 2014-10-01 2014-10-01 false Acceptable model codes. 41.120 Section 41.120 Transportation Office of the Secretary of Transportation SEISMIC SAFETY § 41.120 Acceptable model codes. (a) This... of this part. (b)(1) The following are model codes which have been found to provide a level...
24 CFR 200.926b - Model codes.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 24 Housing and Urban Development 2 2013-04-01 2013-04-01 false Model codes. 200.926b Section 200... DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Minimum Property Standards § 200.926b Model codes. (a) Incorporation by reference. The following model code publications are incorporated by reference in...
49 CFR 41.120 - Acceptable model codes.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 49 Transportation 1 2010-10-01 2010-10-01 false Acceptable model codes. 41.120 Section 41.120 Transportation Office of the Secretary of Transportation SEISMIC SAFETY § 41.120 Acceptable model codes. (a) This... of this part. (b)(1) The following are model codes which have been found to provide a level...
24 CFR 200.926b - Model codes.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 24 Housing and Urban Development 2 2012-04-01 2012-04-01 false Model codes. 200.926b Section 200... DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Minimum Property Standards § 200.926b Model codes. (a) Incorporation by reference. The following model code publications are incorporated by reference in...
24 CFR 200.926b - Model codes.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 24 Housing and Urban Development 2 2011-04-01 2011-04-01 false Model codes. 200.926b Section 200... DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Minimum Property Standards § 200.926b Model codes. (a) Incorporation by reference. The following model code publications are incorporated by reference in...
Modeling Inhibitory Interneurons in Efficient Sensory Coding Models
Zhu, Mengchen; Rozell, Christopher J.
2015-01-01
There is still much unknown regarding the computational role of inhibitory cells in the sensory cortex. While modeling studies could potentially shed light on the critical role played by inhibition in cortical computation, there is a gap between the simplicity of many models of sensory coding and the biological complexity of the inhibitory subpopulation. In particular, many models do not respect that inhibition must be implemented in a separate subpopulation, with those inhibitory interneurons having a diversity of tuning properties and characteristic E/I cell ratios. In this study we demonstrate a computational framework for implementing inhibition in dynamical systems models that better respects these biophysical observations about inhibitory interneurons. The main approach leverages recent work related to decomposing matrices into low-rank and sparse components via convex optimization, and explicitly exploits the fact that models and input statistics often have low-dimensional structure that can be exploited for efficient implementations. While this approach is applicable to a wide range of sensory coding models (including a family of models based on Bayesian inference in a linear generative model), for concreteness we demonstrate the approach on a network implementing sparse coding. We show that the resulting implementation stays faithful to the original coding goals while using inhibitory interneurons that are much more biophysically plausible. PMID:26172289
Holsclaw, Tracy; Hallgren, Kevin A.; Steyvers, Mark; Smyth, Padhraic; Atkins, David C.
2015-01-01
Behavioral coding is increasingly used for studying mechanisms of change in psychosocial treatments for substance use disorders (SUDs). However, behavioral coding data typically include features that can be problematic in regression analyses, including measurement error in independent variables, non-normal distributions of count outcome variables, and conflation of predictor and outcome variables with third variables, such as session length. Methodological research in econometrics has shown that these issues can lead to biased parameter estimates, inaccurate standard errors, and increased type-I and type-II error rates, yet these statistical issues are not widely known within SUD treatment research, or more generally, within psychotherapy coding research. Using minimally-technical language intended for a broad audience of SUD treatment researchers, the present paper illustrates the nature in which these data issues are problematic. We draw on real-world data and simulation-based examples to illustrate how these data features can bias estimation of parameters and interpretation of models. A weighted negative binomial regression is introduced as an alternative to ordinary linear regression that appropriately addresses the data characteristics common to SUD treatment behavioral coding data. We conclude by demonstrating how to use and interpret these models with data from a study of motivational interviewing. SPSS and R syntax for weighted negative binomial regression models is included in supplementary materials. PMID:26098126
Galactic Cosmic Ray Event-Based Risk Model (GERM) Code
NASA Technical Reports Server (NTRS)
Cucinotta, Francis A.; Plante, Ianik; Ponomarev, Artem L.; Kim, Myung-Hee Y.
2013-01-01
This software describes the transport and energy deposition of galactic cosmic rays passing through astronaut tissues during space travel, or of heavy-ion beams in patients undergoing cancer therapy. Space radiation risk is a probability distribution, and time-dependent biological events must be accounted for in the physical description of space radiation transport in tissues and cells. A stochastic model can calculate the probability density directly, without unverified assumptions about the shape of the probability density function. Prior transport codes calculate the average flux and dose of particles behind spacecraft and tissue shielding. Because of the signaling times for activation and relaxation in cells and tissue, a transport code must describe the temporal and microspatial density functions needed to correlate DNA and oxidative damage with non-targeted effects such as signaling and bystander effects; these are ignored by, or impossible in, the prior art. The GERM code provides scientists with data interpretation of experiments; modeling of the beam line, shielding of target samples, and sample holders; and estimation of the basic physical and biological outputs of their experiments. For mono-energetic ion beams, basic physical and biological properties are calculated for a selected ion type, such as kinetic energy, mass, charge number, absorbed dose, or fluence. Evaluated quantities include linear energy transfer (LET), range (R), absorption and fragmentation cross sections, and the probability of nuclear interactions after 1 or 5 cm of water-equivalent material. In addition, a set of biophysical properties is evaluated, such as the Poisson distribution for a specified cellular area, cell survival curves, and DNA damage yields per cell. The GERM code also calculates the radiation transport of the beam line for either a fixed number of user-specified depths or at multiple positions along the Bragg curve of the particle in a selected material. The GERM code makes the numerical estimates of basic
NASA Astrophysics Data System (ADS)
Reale, F.; Barbera, M.; Sciortino, S.
1992-11-01
We illustrate a general and straightforward approach to developing FORTRAN parallel two-dimensional data-domain applications on distributed-memory systems, such as those based on transputers. We have aimed at achieving flexibility for different processor topologies and processor numbers, non-homogeneous processor configurations, and coarse load-balancing. We have assumed a master-slave architecture as the basic programming model in the framework of a domain decomposition approach. After developing a library of high-level general network and communication routines, based on low-level system-dependent libraries, we used it to parallelize some specific applications: an elementary 2-D code, useful as a pattern and guide for other more complex applications, and a 2-D hydrodynamic code for astrophysical studies. Code parallelization is achieved by splitting the original code into two independent codes, one for the master and the other for the slaves, and then by adding coordinated calls to network-setting and message-passing routines into the programs. The parallel applications have been implemented on a Meiko Computing Surface hosted by a SUN 4 workstation and running the CSTools software package. After the basic network and communication routines were developed, the task of parallelizing the 2-D hydrodynamic code took approximately 12 man-hours. The parallel efficiency of the code ranges between 98% and 58% on arrays of between 2 and 20 T800 transputers, on a relatively small computational mesh (≈3000 cells). Arrays consisting of a limited number of faster Intel i860 processors achieve a high parallel efficiency on large computational grids (>10000 grid points), with performance in the class of minisupercomputers.
Modeling Proton- and Light Ion-Induced Reactions at Low Energies in the MARS15 Code
Rakhno, I. L.; Mokhov, N. V.; Gudima, K. K.
2015-04-25
An implementation of both the ALICE code and the TENDL evaluated nuclear data library for describing nuclear reactions induced by low-energy projectiles in the Monte Carlo code MARS15 is presented. Comparisons between modeling results and experimental data on reaction cross sections and secondary-particle distributions are shown.
Software Model Checking Without Source Code
NASA Technical Reports Server (NTRS)
Chaki, Sagar; Ivers, James
2009-01-01
We present a framework, called AIR, for verifying safety properties of assembly language programs via software model checking. AIR extends the applicability of predicate abstraction and counterexample-guided abstraction refinement to the automated verification of low-level software. By working at the assembly level, AIR allows verification of programs for which source code is unavailable (such as legacy and COTS software) and of programs that use features (such as pointers, structures, and object orientation) that are problematic for source-level software verification tools. In addition, AIR makes no assumptions about the underlying compiler technology. We have implemented a prototype of AIR and present encouraging results on several non-trivial examples.
NASA Astrophysics Data System (ADS)
García, José A.; Alvarez, Samantha; Flores, Alejandro; Govezensky, Tzipe; Bobadilla, Juan R.; José, Marco V.
2004-10-01
The genetic code is considered to be universal. In order to test whether some statistical properties of the coding bacterial genome were due to inherent properties of the genetic code, we compared the autocorrelation function, the scaling properties, and the maximum entropy of the distribution of distances of amino acids in sequences obtained by translating protein-coding regions from the genome of Borrelia burgdorferi under different genetic codes. Overall, our results indicate that these properties are very stable to perturbations made by altering the genetic code. We also discuss the likely evolutionary implications of the present results.
Simulink Code Generation: Tutorial for Generating C Code from Simulink Models using Simulink Coder
NASA Technical Reports Server (NTRS)
MolinaFraticelli, Jose Carlos
2012-01-01
This document explains all the necessary steps in order to generate optimized C code from Simulink Models. This document also covers some general information on good programming practices, selection of variable types, how to organize models and subsystems, and finally how to test the generated C code and compare it with data from MATLAB.
On the binary weight distribution of some Reed-Solomon codes
NASA Technical Reports Server (NTRS)
Lin, S.
1985-01-01
Consider an (n,k) linear code with symbols from GF(2^m). If each code symbol is represented by an m-tuple over GF(2) using a certain basis for GF(2^m), a binary (nm,km) linear code is obtained. The weight distribution of a binary linear code obtained in this manner is investigated. Weight enumerators are presented for binary linear codes obtained from Reed-Solomon codes over GF(2^m) generated by the polynomials (X - alpha), (X - 1)(X - alpha), (X - alpha)(X - alpha^2), and (X - 1)(X - alpha)(X - alpha^2), and from their extended codes, where alpha is a primitive element of GF(2^m). Binary codes derived from Reed-Solomon codes are often used for correcting multiple bursts of errors.
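The symbol-to-bits expansion described above can be demonstrated end to end on a field small enough to enumerate. The sketch below uses GF(4) (m = 2) rather than the paper's larger fields, and the length-3 code generated by (X - alpha); it is an illustration of the binary-image construction, not the paper's enumerator computation.

```python
# Sketch: the binary image of a short code over GF(2^m), using GF(4)
# (m = 2) so the whole codebook can be enumerated. The length-3 cyclic
# code generated by (X - alpha) over GF(4) is expanded symbol-by-symbol
# into bits, and its binary weight distribution is tallied.
from collections import Counter

def gf4_mul(a, b):
    """Multiply in GF(4) = GF(2)[x]/(x^2 + x + 1); elements are 0..3."""
    p = 0
    for i in range(2):
        if (b >> i) & 1:
            p ^= a << i
    if p & 4:          # reduce x^2 -> x + 1
        p ^= 0b111
    return p

def poly_mul(m, g):
    """Multiply polynomials with GF(4) coefficients (low degree first)."""
    out = [0] * (len(m) + len(g) - 1)
    for i, a in enumerate(m):
        for j, b in enumerate(g):
            out[i + j] ^= gf4_mul(a, b)
    return out

ALPHA = 2                  # a primitive element of GF(4)
g = [ALPHA, 1]             # generator polynomial (X - alpha) = X + alpha in char 2

wd = Counter()
for m0 in range(4):
    for m1 in range(4):
        c = poly_mul([m0, m1], g)              # codeword: 3 GF(4) symbols
        w = sum(bin(s).count("1") for s in c)  # weight of its 6-bit image
        wd[w] += 1
print(sorted(wd.items()))  # [(0, 1), (2, 3), (3, 8), (4, 3), (6, 1)]
```

The same enumeration works for true Reed-Solomon codes over GF(2^m) once the field arithmetic is generalized; only the basis choice for the symbol-to-bits map changes the resulting binary weight distribution.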
Simple models for reading neuronal population codes.
Seung, H S; Sompolinsky, H
1993-01-01
In many neural systems, sensory information is distributed throughout a population of neurons. We study simple neural network models for extracting this information. The inputs to the networks are the stochastic responses of a population of sensory neurons tuned to directional stimuli. The performance of each network model in psychophysical tasks is compared with that of the optimal maximum likelihood procedure. As a model of direction estimation in two dimensions, we consider a linear network that computes a population vector. Its performance depends on the width of the population tuning curves and is maximal at an optimal width, which increases with the level of background activity. Although for narrowly tuned neurons the performance of the population vector is significantly inferior to that of maximum likelihood estimation, the difference between the two is small when the tuning is broad. For direction discrimination, we consider two models: a perceptron with fully adaptive weights and a network made by adding an adaptive second layer to the population vector network. We calculate the error rates of these networks after exhaustive training to a particular direction. By testing on the full range of possible directions, the extent of transfer of training to novel stimuli can be calculated. It is found that for threshold linear networks the transfer of perceptual learning is nonmonotonic. Although performance deteriorates away from the training stimulus, it peaks again at an intermediate angle. This nonmonotonicity provides an important psychophysical test of these models. PMID:8248166
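The population-vector readout discussed above can be sketched with a toy model: cosine-tuned neurons whose preferred directions tile the circle, with the decoded direction taken as the angle of the response-weighted sum of preferred-direction unit vectors. The tuning curve, baseline, and population size below are illustrative assumptions, not the paper's parameters.

```python
# Sketch: a population-vector readout from threshold-linear cosine-tuned
# neurons (an illustrative toy, not the paper's exact network). Each
# neuron has a preferred direction; the decoded direction is the angle of
# the response-weighted vector sum of preferred directions.
import math

N = 64  # number of neurons, preferred directions tiling the circle
prefs = [2 * math.pi * i / N for i in range(N)]

def response(theta, pref, baseline=0.1):
    """Threshold-linear (rectified) cosine tuning curve, noise-free."""
    return max(0.0, baseline + math.cos(theta - pref))

def population_vector(theta):
    x = sum(response(theta, p) * math.cos(p) for p in prefs)
    y = sum(response(theta, p) * math.sin(p) for p in prefs)
    return math.atan2(y, x) % (2 * math.pi)

true_theta = 1.0
est = population_vector(true_theta)
print(abs(est - true_theta))  # small decoding error in the noise-free case
```

Adding stochastic responses and comparing the resulting estimator variance against the maximum likelihood estimate is the comparison the paper carries out analytically.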
Modeling error distributions of growth curve models through Bayesian methods.
Zhang, Zhiyong
2016-06-01
Growth curve models are widely used in social and behavioral sciences. However, typical growth curve models often assume that the errors are normally distributed, although non-normal data may be even more common than normal data. In order to avoid possible statistical inference problems in blindly assuming normality, a general Bayesian framework is proposed to flexibly model normal and non-normal data through the explicit specification of the error distributions. A simulation study shows that when the distribution of the error is correctly specified, one can avoid the loss in the efficiency of standard error estimates. A real example on the analysis of mathematical ability growth data from the Early Childhood Longitudinal Study, Kindergarten Class of 1998-99 is used to show the application of the proposed methods. Instructions and code on how to conduct growth curve analysis with both normal and non-normal error distributions using the MCMC procedure of SAS are provided. PMID:26019004
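Why the error-distribution choice matters can be shown with a small numerical check: with an outlying residual, a heavy-tailed error model attains a higher maximized log-likelihood than the normal model. This is an illustrative sketch with made-up residuals; the paper itself uses Bayesian estimation via the SAS MCMC procedure, not this code.

```python
# Sketch: comparing error-distribution assumptions for growth-model
# residuals (illustrative only; residuals below are hypothetical).
# With one outlier, the heavy-tailed Laplace model fits better than the
# normal model at their respective maximum-likelihood scales.
import math

# residuals from a hypothetical fitted growth curve; one child is an outlier
res = [0.1, -0.2, 0.15, -0.1, 0.05, 4.0]
n = len(res)

sigma = math.sqrt(sum(r * r for r in res) / n)   # normal ML scale
b = sum(abs(r) for r in res) / n                 # Laplace ML scale

ll_normal = sum(-0.5 * math.log(2 * math.pi) - math.log(sigma)
                - r * r / (2 * sigma * sigma) for r in res)
ll_laplace = sum(-math.log(2 * b) - abs(r) / b for r in res)

print(ll_laplace > ll_normal)  # True: heavy tails accommodate the outlier
```

In a Bayesian fit the same idea appears as a likelihood with Laplace or Student-t errors instead of normal ones, which is what the proposed framework makes explicit.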
Distributed Estimation, Coding, and Scheduling in Wireless Visual Sensor Networks
ERIC Educational Resources Information Center
Yu, Chao
2013-01-01
In this thesis, we consider estimation, coding, and sensor scheduling for energy efficient operation of wireless visual sensor networks (VSN), which consist of battery-powered wireless sensors with sensing (imaging), computation, and communication capabilities. The competing requirements for applications of these wireless sensor networks (WSN)…
Astrophysical Plasmas: Codes, Models, and Observations
NASA Astrophysics Data System (ADS)
Canto, Jorge; Rodriguez, Luis F.
2000-05-01
The conference Astrophysical Plasmas: Codes, Models, and Observations was aimed at discussing the most recent advances, and some of the avenues for future work, in the field of cosmical plasmas. It was held during the week of October 25th to 29th, 1999, at the Centro Nacional de las Artes (CNA) in Mexico City, Mexico, a modern and impressive center of theaters and schools devoted to the performing arts. This was an excellent setting for reviewing the present status of observational (both on earth and in space) and theoretical research, as well as some of the recent advances of laboratory research that are relevant to astrophysics. The demography of the meeting was impressive: 128 participants from 12 countries in 4 continents; a large fraction of them (29%) were women, and most of them were young persons (either recent Ph.D.s or graduate students). This created a very lively and friendly atmosphere that made it easy to move from the ionization of the Universe and high-redshift absorbers, to Active Galactic Nuclei (AGNs) and X-rays from galaxies, to the gas in the Magellanic Clouds and our Galaxy, to the evolution of H II regions and Planetary Nebulae (PNe), and to the details of plasmas in the Solar System and the lab. All these topics were well covered with 23 invited talks, 43 contributed talks, and 22 posters. Most of them are contained in these proceedings, in the same order of the presentations.
SAMDIST: A Computer Code for Calculating Statistical Distributions for R-Matrix Resonance Parameters
Leal, L.C.
1995-01-01
The SAMDIST computer code has been developed to calculate distributions of resonance parameters of the Reich-Moore R-matrix type. The program assumes the parameters are in a format compatible with that of the multilevel R-matrix code SAMMY. SAMDIST calculates the energy-level spacing distribution, the resonance width distribution, and the long-range correlation of the energy levels. Results of these calculations are presented in both graphic and tabular forms.
SAMDIST: A computer code for calculating statistical distributions for R-matrix resonance parameters
Leal, L.C.; Larson, N.M.
1995-09-01
The SAMDIST computer code has been developed to calculate distributions of resonance parameters of the Reich-Moore R-matrix type. The program assumes the parameters are in a format compatible with that of the multilevel R-matrix code SAMMY. SAMDIST calculates the energy-level spacing distribution, the resonance width distribution, and the long-range correlation of the energy levels. Results of these calculations are presented in both graphic and tabular forms.
Review and verification of CARE 3 mathematical model and code
NASA Technical Reports Server (NTRS)
Rose, D. M.; Altschul, R. E.; Manke, J. W.; Nelson, D. L.
1983-01-01
The CARE-III mathematical model and code verification performed by Boeing Computer Services are documented. The mathematical model was verified for permanent and intermittent faults; the transient fault model was not addressed. The code verification was performed on CARE-III Version 3. A CARE-III Version 4, which corrects deficiencies identified in Version 3, is being developed.
A New Solution of Distributed Disaster Recovery Based on Raptor Code
NASA Astrophysics Data System (ADS)
Deng, Kai; Wang, Kaiyun; Ma, Danyang
Given the large cost and low data availability of multi-node storage, and the poor intrusion tolerance of traditional disaster recovery based on simple copying, this paper puts forward a distributed disaster recovery scheme based on raptor codes. The article introduces the principle of raptor codes, analyses their coding advantages, and gives a comparative analysis between this solution and traditional solutions in terms of redundancy, data availability, and intrusion tolerance. The results show that the distributed disaster recovery solution based on raptor codes can achieve higher data availability as well as better intrusion tolerance at lower redundancy.
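The availability-versus-redundancy trade-off described above can be illustrated with a toy coded-storage scheme: a simple random-linear (XOR) code over GF(2) standing in for the paper's raptor codes. The block values, masks, and node counts below are invented for illustration.

```python
# Sketch: coded distributed storage in miniature (a toy linear code over
# GF(2) by XOR, standing in for the paper's raptor codes). k data blocks
# are XOR-combined into m > k coded blocks stored on different nodes;
# any k linearly independent surviving blocks recover the data.

def lowest_bit(x):
    return (x & -x).bit_length() - 1

def encode(data, masks):
    """Each mask selects the data blocks XORed into one coded block."""
    out = []
    for mask in masks:
        val = 0
        for i in range(len(data)):
            if (mask >> i) & 1:
                val ^= data[i]
        out.append((mask, val))
    return out

def decode(coded, k):
    """Gaussian elimination over GF(2); returns data, or None if rank < k."""
    pivots = {}
    for mask, val in coded:
        for i in sorted(pivots):            # reduce against known pivots
            if (mask >> i) & 1:
                pm, pv = pivots[i]
                mask ^= pm
                val ^= pv
        if mask:
            pivots[lowest_bit(mask)] = (mask, val)
    if len(pivots) < k:
        return None
    data = [0] * k
    for i in sorted(pivots, reverse=True):  # back-substitution
        mask, val = pivots[i]
        for j in range(i + 1, k):
            if (mask >> j) & 1:
                val ^= data[j]
        data[i] = val
    return data

data = [0xDE, 0xAD, 0xBE, 0xEF]                           # k = 4 data blocks
masks = [0b0011, 0b0110, 0b1100, 0b0001, 0b1010, 0b0101]  # 6 storage nodes
coded = encode(data, masks)
survivors = coded[:4]                                     # two nodes lost
print(decode(survivors, 4) == data)  # True: a rank-4 surviving set suffices
```

Replication at the same 1.5x redundancy would lose data whenever both copies of a block sit on the two failed nodes; the coded scheme tolerates any loss that leaves a full-rank set, which is the intrusion-tolerance argument the abstract makes for raptor codes.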
24 CFR 200.926c - Model code provisions for use in partially accepted code jurisdictions.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 24 Housing and Urban Development 2 2012-04-01 2012-04-01 false Model code provisions for use in partially accepted code jurisdictions. 200.926c Section 200.926c Housing and Urban Development Regulations Relating to Housing and Urban Development (Continued) OFFICE OF ASSISTANT SECRETARY FOR...
24 CFR 200.926c - Model code provisions for use in partially accepted code jurisdictions.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false Model code provisions for use in partially accepted code jurisdictions. 200.926c Section 200.926c Housing and Urban Development Regulations Relating to Housing and Urban Development (Continued) OFFICE OF ASSISTANT SECRETARY FOR HOUSING-FEDERAL HOUSING COMMISSIONER, DEPARTMENT...
24 CFR 200.926c - Model code provisions for use in partially accepted code jurisdictions.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 24 Housing and Urban Development 2 2011-04-01 2011-04-01 false Model code provisions for use in partially accepted code jurisdictions. 200.926c Section 200.926c Housing and Urban Development Regulations Relating to Housing and Urban Development (Continued) OFFICE OF ASSISTANT SECRETARY FOR HOUSING-FEDERAL HOUSING COMMISSIONER, DEPARTMENT...
Numerical MHD codes for modeling astrophysical flows
NASA Astrophysics Data System (ADS)
Koldoba, A. V.; Ustyugova, G. V.; Lii, P. S.; Comins, M. L.; Dyda, S.; Romanova, M. M.; Lovelace, R. V. E.
2016-05-01
We describe a Godunov-type magnetohydrodynamic (MHD) code based on the Miyoshi and Kusano (2005) solver which can be used to solve various astrophysical hydrodynamic and MHD problems. The energy equation is in the form of entropy conservation. The code has been implemented on several different coordinate systems: 2.5D axisymmetric cylindrical coordinates, 2D Cartesian coordinates, 2D plane polar coordinates, and fully 3D cylindrical coordinates. Viscosity and diffusivity are implemented in the code to control the accretion rate in the disk and the rate of penetration of the disk matter through the magnetic field lines. The code has been utilized for the numerical investigations of a number of different astrophysical problems, several examples of which are shown.
Joint Channel-Network Coding (JCNC) for Distributed Storage in Wireless Network
NASA Astrophysics Data System (ADS)
Wang, Ning; Lin, Jiaru
We propose to construct a joint channel-network coding scheme (known as Random Linear Coding) based on improved turbo codes for distributed storage in a wireless communication network with k data nodes and s storage nodes (k < s), extending prior work on distributed storage over erasure channels to AWGN and fading channel scenarios. We investigate the throughput performance that the Joint Channel-Network Coding (JCNC) system gains from network coding, compared with that of a system without network coding based only on a store-and-forward (S-F) approach. Another helpful parameter, the node degree (L), indicates how many storage nodes one data packet should fall onto; L characterizes the en/decoding complexity of the system. Moreover, the proposed framework can easily be extended to ad-hoc and sensor networks.
Energy standards and model codes development, adoption, implementation, and enforcement
Conover, D.R.
1994-08-01
This report provides an overview of the energy standards and model codes process for the voluntary sector within the United States. The report was prepared by Pacific Northwest Laboratory (PNL) for the Building Energy Standards Program and is intended to be used as a primer or reference on this process. Building standards and model codes that address energy have been developed by organizations in the voluntary sector since the early 1970s. These standards and model codes provide minimum energy-efficient design and construction requirements for new buildings and, in some instances, existing buildings. The first step in the process is developing new or revising existing standards or codes. There are two overall differences between standards and codes. Energy standards are developed by a consensus process and are revised as needed. Model codes are revised on a regular annual cycle through a public hearing process. In addition to these overall differences, the specific steps in developing/revising energy standards differ from model codes. These energy standards or model codes are then available for adoption by states and local governments. Typically, energy standards are adopted by or adopted into model codes. Model codes are in turn adopted by states through either legislation or regulation. Enforcement is essential to the implementation of energy standards and model codes. Low-rise residential construction is generally evaluated for compliance at the local level, whereas state agencies tend to be more involved with other types of buildings. Low-rise residential buildings also may be more easily evaluated for compliance because the governing requirements tend to be less complex than for commercial buildings.
24 CFR 200.925c - Model codes.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 24 Housing and Urban Development 2 2014-04-01 2014-04-01 false Model codes. 200.925c Section 200... DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Minimum Property Standards § 200.925c Model codes. (a... CFR part 51. The incorporation by reference of these publications has been approved by the Director...
24 CFR 200.925c - Model codes.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false Model codes. 200.925c Section 200... DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Minimum Property Standards § 200.925c Model codes. (a... CFR part 51. The incorporation by reference of these publications has been approved by the Director...
24 CFR 200.925c - Model codes.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 24 Housing and Urban Development 2 2011-04-01 2011-04-01 false Model codes. 200.925c Section 200... DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Minimum Property Standards § 200.925c Model codes. (a... CFR part 51. The incorporation by reference of these publications has been approved by the Director...
24 CFR 200.925c - Model codes.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 24 Housing and Urban Development 2 2012-04-01 2012-04-01 false Model codes. 200.925c Section 200... DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Minimum Property Standards § 200.925c Model codes. (a... CFR part 51. The incorporation by reference of these publications has been approved by the Director...
24 CFR 200.925c - Model codes.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 24 Housing and Urban Development 2 2013-04-01 2013-04-01 false Model codes. 200.925c Section 200... DEVELOPMENT GENERAL INTRODUCTION TO FHA PROGRAMS Minimum Property Standards § 200.925c Model codes. (a... CFR part 51. The incorporation by reference of these publications has been approved by the Director...
40 CFR 194.23 - Models and computer codes.
Code of Federal Regulations, 2012 CFR
2012-07-01
... COMPLIANCE WITH THE 40 CFR PART 191 DISPOSAL REGULATIONS Compliance Certification and Re-certification General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... 40 Protection of Environment 26 2012-07-01 2011-07-01 true Models and computer codes....
40 CFR 194.23 - Models and computer codes.
Code of Federal Regulations, 2011 CFR
2011-07-01
... COMPLIANCE WITH THE 40 CFR PART 191 DISPOSAL REGULATIONS Compliance Certification and Re-certification General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... 40 Protection of Environment 25 2011-07-01 2011-07-01 false Models and computer codes....
40 CFR 194.23 - Models and computer codes.
Code of Federal Regulations, 2013 CFR
2013-07-01
... COMPLIANCE WITH THE 40 CFR PART 191 DISPOSAL REGULATIONS Compliance Certification and Re-certification General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... 40 Protection of Environment 26 2013-07-01 2013-07-01 false Models and computer codes....
40 CFR 194.23 - Models and computer codes.
Code of Federal Regulations, 2010 CFR
2010-07-01
... COMPLIANCE WITH THE 40 CFR PART 191 DISPOSAL REGULATIONS Compliance Certification and Re-certification General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Models and computer codes....
40 CFR 194.23 - Models and computer codes.
Code of Federal Regulations, 2014 CFR
2014-07-01
... COMPLIANCE WITH THE 40 CFR PART 191 DISPOSAL REGULATIONS Compliance Certification and Re-certification General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1... 40 Protection of Environment 25 2014-07-01 2014-07-01 false Models and computer codes....
SAMICS marketing and distribution model
NASA Technical Reports Server (NTRS)
1978-01-01
SAMICS (Solar Array Manufacturing Industry Costing Standards) was formulated as a computer simulation model. Given a proper description of the manufacturing technology as input, this model computes the manufacturing price of solar arrays for a broad range of production levels. This report presents a model for computing the associated marketing and distribution costs, the end point of the model being the loading dock of the final manufacturer.
CFD code evaluation for internal flow modeling
NASA Technical Reports Server (NTRS)
Chung, T. J.
1990-01-01
Research on computational fluid dynamics (CFD) code evaluation, with emphasis on supercomputing in reacting flows, is discussed. Advantages of unstructured grids, multigrids, adaptive methods, improved flow solvers, vector processing, parallel processing, and reduction of memory requirements are discussed. As examples, applications of supercomputing to the reacting-flow Navier-Stokes equations, including shock waves and turbulence, and to combustion instability problems associated with solid and liquid propellants are included. Evaluation of codes developed by other organizations is not included. Instead, the basic criteria for accuracy and efficiency have been established, and some applications to rocket combustion have been made. Research toward an ultimate goal, the most accurate and efficient CFD code, is in progress and will continue for years to come.
Utilities for master source code distribution: MAX and Friends
NASA Technical Reports Server (NTRS)
Felippa, Carlos A.
1988-01-01
MAX is a program for the manipulation of FORTRAN master source code (MSC). This is a technique by which one maintains one and only one master copy of a FORTRAN program under a program development system, which for MAX is assumed to be VAX/VMS. The master copy is not intended to be compiled directly. Instead, it must be preprocessed by MAX to produce compilable instances. These instances may correspond to different code versions (for example, double precision versus single precision), different machines (for example, IBM, CDC, Cray), or different operating systems (for example, VAX/VMS versus VAX/UNIX). The advantage of using a master source is more pronounced in complex application programs that are developed and maintained over many years and are to be transported and executed in several computer environments. The version lag problem that plagues many such programs is avoided by this approach. MAX is complemented by several auxiliary programs that perform nonessential functions. The ensemble is collectively known as MAX and Friends. All of these programs, including MAX, are executed as foreign VAX/VMS commands and can easily be hidden in customized VMS command procedures.
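The master-source idea can be shown in miniature: one file holds all variants, and a preprocessor emits one compilable instance per target. The `*IF name` / `*END` directive syntax below is invented for illustration and is not MAX's actual directive language.

```python
# Sketch: a conditional-inclusion preprocessor in the spirit of a
# master-source tool. The "*IF name" / "*END" directives are an invented
# stand-in, not MAX's real syntax.
def instantiate(master_lines, flags):
    out, stack = [], [True]
    for line in master_lines:
        token = line.strip()
        if token.startswith("*IF "):
            # a block is active only if its flag is set and its parent is active
            stack.append(stack[-1] and token[4:].strip() in flags)
        elif token == "*END":
            stack.pop()
        elif all(stack):
            out.append(line)
    return out

master = [
    "      PROGRAM DEMO",
    "*IF DOUBLE",
    "      DOUBLE PRECISION X",
    "*END",
    "*IF SINGLE",
    "      REAL X",
    "*END",
    "      END",
]
print(instantiate(master, {"DOUBLE"}))
# ['      PROGRAM DEMO', '      DOUBLE PRECISION X', '      END']
```

Running the same master with `{"SINGLE"}` yields the single-precision instance, which is exactly how one master copy avoids the version lag problem described above.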
Source coding with escort distributions and Rényi entropy bounds
NASA Astrophysics Data System (ADS)
Bercher, J.-F.
2009-08-01
We discuss the interest of escort distributions and Rényi entropy in the context of source coding. We first recall a source coding theorem by Campbell relating a generalized measure of length to the Rényi-Tsallis entropy. We show that the associated optimal codes can be obtained using considerations on escort-distributions. We propose a new family of measure of length involving escort-distributions and we show that these generalized lengths are also bounded below by the Rényi entropy. Furthermore, we obtain that the standard Shannon codes lengths are optimum for the new generalized lengths measures, whatever the entropic index. Finally, we show that there exists in this setting an interplay between standard and escort distributions.
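The two central objects of the abstract, escort distributions and Rényi entropy, are easy to compute for a toy source. The sketch below illustrates those objects only; it does not reproduce the paper's generalized length measures or its coding theorems.

```python
# Sketch: escort distributions and Rényi entropy for a toy source
# (illustrative of the objects in the paper, not its coding theorems).
import math

def escort(p, q):
    """Escort distribution: P_i proportional to p_i**q."""
    w = [pi ** q for pi in p]
    s = sum(w)
    return [wi / s for wi in w]

def renyi(p, alpha):
    """Renyi entropy of order alpha (in bits); alpha != 1."""
    return math.log2(sum(pi ** alpha for pi in p)) / (1 - alpha)

def shannon(p):
    """Shannon entropy in bits (the alpha -> 1 limit of Renyi entropy)."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

p = [0.5, 0.25, 0.125, 0.125]
print(escort(p, 2))             # q > 1 sharpens mass toward likely symbols
print(renyi(p, 2), shannon(p))  # Renyi(2) <= Shannon entropy
print(abs(renyi(p, 1.000001) - shannon(p)) < 1e-4)  # alpha -> 1 limit
```

Code lengths built from the escort distribution rather than from p itself are what connect these quantities to Campbell's generalized length measure in the paper.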
Quantum circuit for optimal eavesdropping in quantum key distribution using phase-time coding
Kronberg, D. A.; Molotkov, S. N.
2010-07-15
A quantum circuit is constructed for optimal eavesdropping on quantum key distribution protocols using phase-time coding, and its physical implementation based on linear and nonlinear fiber-optic components is proposed.
The Monte Carlo SRNA-VOX code for 3D proton dose distribution in voxelized geometry using CT data
NASA Astrophysics Data System (ADS)
Ilic, Radovan D.; Spasic-Jokic, Vesna; Belicev, Petar; Dragovic, Milos
2005-03-01
This paper describes the application of the SRNA Monte Carlo package for proton transport simulations in complex geometry and different material compositions. The SRNA package was developed for 3D dose distribution calculation in proton therapy and dosimetry, and it is based on the theory of multiple scattering. The decay of proton-induced compound nuclei was simulated by the Russian MSDM model and by our own model using ICRU 63 data. The developed package consists of two codes: SRNA-2KG, which simulates proton transport in combinatorial geometry, and SRNA-VOX, which uses voxelized geometry based on CT data and conversion of Hounsfield numbers to tissue elemental composition. Transition probabilities for both codes are prepared by the SRNADAT code. The simulation of proton beam characterization by a multi-layer Faraday cup, the spatial distribution of positron emitters obtained by the SRNA-2KG code, and the intercomparison of computational codes in radiation dosimetry indicate the immediate applicability of Monte Carlo techniques in clinical practice. In this paper, we briefly present the physical model implemented in the SRNA package, the ISTAR proton dose planning software, as well as the results of numerical experiments with proton beams to obtain 3D dose distributions in eye and breast tumours.
The Monte Carlo SRNA-VOX code for 3D proton dose distribution in voxelized geometry using CT data.
Ilić, Radovan D; Spasić-Jokić, Vesna; Belicev, Petar; Dragović, Milos
2005-03-01
This paper describes the application of the SRNA Monte Carlo package for proton transport simulations in complex geometry and different material compositions. The SRNA package was developed for 3D dose distribution calculation in proton therapy and dosimetry, and it is based on the theory of multiple scattering. The decay of proton-induced compound nuclei was simulated by the Russian MSDM model and by our own model using ICRU 63 data. The developed package consists of two codes: SRNA-2KG, which simulates proton transport in combinatorial geometry, and SRNA-VOX, which uses voxelized geometry based on CT data and conversion of Hounsfield numbers to tissue elemental composition. Transition probabilities for both codes are prepared by the SRNADAT code. The simulation of proton beam characterization by a multi-layer Faraday cup, the spatial distribution of positron emitters obtained by the SRNA-2KG code, and the intercomparison of computational codes in radiation dosimetry indicate the immediate applicability of Monte Carlo techniques in clinical practice. In this paper, we briefly present the physical model implemented in the SRNA package, the ISTAR proton dose planning software, as well as the results of numerical experiments with proton beams to obtain 3D dose distributions in eye and breast tumours. PMID:15798273
Modeling Natural Variation through Distribution
ERIC Educational Resources Information Center
Lehrer, Richard; Schauble, Leona
2004-01-01
This design study tracks the development of student thinking about natural variation as late elementary grade students learned about distribution in the context of modeling plant growth at the population level. The data-modeling approach assisted children in coordinating their understanding of particular cases with an evolving notion of data as an…
Parallel Processing of Distributed Video Coding to Reduce Decoding Time
NASA Astrophysics Data System (ADS)
Tonomura, Yoshihide; Nakachi, Takayuki; Fujii, Tatsuya; Kiya, Hitoshi
This paper proposes a parallelized DVC framework that treats each bitplane independently to reduce the decoding time. Unfortunately, simple parallelization generates inaccurate bit probabilities because additional side information is not available for the decoding of subsequent bitplanes, which degrades encoding efficiency. Our solution is an effective estimation method that can calculate the bit probability as accurately as possible by index assignment without recourse to side information. Moreover, we improve the coding performance of Rate-Adaptive LDPC (RA-LDPC), which is used in the parallelized DVC framework. This proposal selects a fitting sparse matrix for each bitplane according to the syndrome rate estimation results at the encoder side. Simulations show that our parallelization method reduces the decoding time by up to 35% and achieves a bit rate reduction of about 10%.
The GNASH preequilibrium-statistical nuclear model code
Arthur, E. D.
1988-01-01
The following report is based on material presented in a series of lectures at the International Center for Theoretical Physics, Trieste, which were designed to describe the GNASH preequilibrium-statistical model code and its use. An overview of the code is provided, with emphasis upon the code's calculational capabilities and the theoretical models that have been implemented in it. Two sample problems are discussed: the first deals with neutron reactions on ⁵⁸Ni; the second illustrates the fission model capabilities implemented in the code and involves n + ²³⁵U reactions. Finally, a description is provided of current theoretical model and code development underway. Examples of calculated results using these new capabilities are also given. 19 refs., 17 figs., 3 tabs.
Aerosol kinetic code "AERFORM": Model, validation and simulation results
NASA Astrophysics Data System (ADS)
Gainullin, K. G.; Golubev, A. I.; Petrov, A. M.; Piskunov, V. N.
2016-06-01
The aerosol kinetic code "AERFORM" is modified to simulate droplet and ice particle formation in mixed clouds. The splitting method is used to calculate condensation and coagulation simultaneously. The method is calibrated with analytic solutions of kinetic equations. Condensation kinetic model is based on cloud particle growth equation, mass and heat balance equations. The coagulation kinetic model includes Brownian, turbulent and precipitation effects. The real values are used for condensation and coagulation growth of water droplets and ice particles. The model and the simulation results for two full-scale cloud experiments are presented. The simulation model and code may be used autonomously or as an element of another code.
A velocity-dependent anomalous radial transport model for (2-D, 2-V) kinetic transport codes
NASA Astrophysics Data System (ADS)
Bodi, Kowsik; Krasheninnikov, Sergei; Cohen, Ron; Rognlien, Tom
2008-11-01
Plasma turbulence constitutes a significant part of radial plasma transport in magnetically confined plasmas. This turbulent transport is modeled in the form of anomalous convection and diffusion coefficients in fluid transport codes. There is a need to model the same in continuum kinetic edge codes [such as the (2-D, 2-V) transport version of TEMPEST, NEO, and the code being developed by the Edge Simulation Laboratory] with non-Maxwellian distributions. We present an anomalous transport model with velocity-dependent convection and diffusion coefficients leading to a diagonal transport matrix similar to that used in contemporary fluid transport models (e.g., UEDGE). Also presented are results of simulations corresponding to radial transport due to long-wavelength ExB turbulence using a velocity-independent diffusion coefficient. A BGK collision model is used to enable comparison with fluid transport codes.
Modeling Nucleon Generalized Parton Distributions
Radyushkin, Anatoly V.
2013-05-01
We discuss building models for nucleon generalized parton distributions (GPDs) H and E that are based on the formalism of double distributions (DDs). We find that the usual "DD + D-term" construction should be amended by an extra term generated by the GPD E(x, ξ). Unlike the D-term, this function has support in the whole -1 < x < 1 region, and in general it does not vanish at the border points |x| = ξ.
Modeling Planet-Building Stellar Disks with Radiative Transfer Code
NASA Astrophysics Data System (ADS)
Swearingen, Jeremy R.; Sitko, Michael L.; Whitney, Barbara; Grady, Carol A.; Wagner, Kevin Robert; Champney, Elizabeth H.; Johnson, Alexa N.; Warren, Chelsea C.; Russell, Ray W.; Hammel, Heidi B.; Lisse, Casey M.; Cure, Michel; Kraus, Stefan; Fukagawa, Misato; Calvet, Nuria; Espaillat, Catherine; Monnier, John D.; Millan-Gabet, Rafael; Wilner, David J.
2015-01-01
Understanding the nature of the many planetary systems found outside of our own solar system cannot be complete without knowledge of the beginnings of these systems. By detecting planets in very young systems and modeling the disks of material around the stars from which they form, we can gain a better understanding of planetary origin and evolution. The efforts presented here have been in modeling two pre-transitional disk systems using a radiative transfer code. For the first of these systems, V1247 Ori, a model has been achieved that fits the spectral energy distribution (SED) well and whose parameters are consistent with existing interferometry data (Kraus et al. 2013). The second of these two systems, SAO 206462, has presented a different set of challenges, but encouraging SED agreement between the model and known data gives hope that the model can produce images that can be used in future interferometry work. This work was supported by NASA ADAP grant NNX09AC73G and the IR&D program at The Aerospace Corporation.
Computer code for the calculation of the temperature distribution of cooled turbine blades
NASA Astrophysics Data System (ADS)
Tietz, Thomas A.; Koschel, Wolfgang W.
A generalized computer code for the calculation of the temperature distribution in a cooled turbine blade is presented. Using an iterative procedure, this program especially allows the coupling of the aerothermodynamic values of the internal flow with the corresponding temperature distribution of the blade material. The temperature distribution of the turbine blade is calculated using a fully three-dimensional finite element computer code, so that the radial heat flux is taken into account. This code was extended to 4-node tetrahedral elements enabling an adaptive grid generation. To facilitate the mesh generation of the usually complex blade geometries, a computer program was developed, which performs the grid generation of blades having basically arbitrary shape on the basis of two-dimensional cuts. The performance of the code is demonstrated with reference to a typical cooling configuration of a modern turbine blade.
RELAP5/MOD3 code manual: Code structure, system models, and solution methods. Volume 1
1995-08-01
The RELAP5 code has been developed for best-estimate transient simulation of light water reactor coolant systems during postulated accidents. The code models the coupled behavior of the reactor coolant system and the core for loss-of-coolant accidents and operational transients, such as anticipated transient without scram, loss of offsite power, loss of feedwater, and loss of flow. A generic modeling approach is used that permits simulating a variety of thermal-hydraulic systems. Control system and secondary system components are included to permit modeling of plant controls, turbines, condensers, and secondary feedwater systems. RELAP5/MOD3 code documentation is divided into seven volumes: Volume I provides modeling theory and associated numerical schemes.
28 CFR 36.608 - Guidance concerning model codes.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 28 Judicial Administration 1 2010-07-01 2010-07-01 false Guidance concerning model codes. 36.608 Section 36.608 Judicial Administration DEPARTMENT OF JUSTICE NONDISCRIMINATION ON THE BASIS OF DISABILITY BY PUBLIC ACCOMMODATIONS AND IN COMMERCIAL FACILITIES Certification of State Laws or Local Building Codes § 36.608 Guidance concerning...
Diffusion approximation for modeling of 3-D radiation distributions
Zardecki, A.; Gerstl, S.A.W.; De Kinder, R.E. Jr.
1985-01-01
A three-dimensional transport code DIF3D, based on the diffusion approximation, is used to model the spatial distribution of radiation energy arising from volumetric isotropic sources. Future work will be concerned with the determination of irradiances and modeling of realistic scenarios, relevant to the battlefield conditions. 8 refs., 4 figs.
EM modeling for GPIR using 3D FDTD modeling codes
Nelson, S.D.
1994-10-01
An analysis of the one-, two-, and three-dimensional electrical characteristics of structural cement and concrete is presented. This work connects experimental efforts in characterizing cement and concrete in the frequency and time domains with Finite Difference Time Domain (FDTD) modeling of these substances. These efforts include electromagnetic (EM) modeling of simple lossless homogeneous materials with aggregate and targets, and modeling of dispersive and lossy materials with aggregate and complex target geometries for Ground Penetrating Imaging Radar (GPIR). Two- and three-dimensional FDTD codes (developed at LLNL) were used for the modeling efforts. The purpose of the experimental and modeling efforts is to gain knowledge about the electrical properties of concrete typically used in the construction industry for bridges and other load-bearing structures. The goal is to optimize the performance of a high-sample-rate impulse radar and data acquisition system and to design an antenna system matched to the characteristics of this material. Results show agreement to within 2 dB between the amplitudes of the experimental and modeled data, while the frequency peaks correlate to within 10%, the differences being due to the unknown exact nature of the aggregate placement.
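For orientation, a 1-D FDTD update loop for the simplest case described above (lossless, non-dispersive material) might look like the following sketch; the grid size, relative permittivity, and Gaussian source are assumed illustrative values, not parameters of the LLNL codes:

```python
import numpy as np

def fdtd_1d(nsteps=300, nz=200, eps_r=6.0):
    """Minimal 1-D FDTD (Yee) loop in normalized units.

    eps_r ~ 6 is a rough assumed value for dry concrete; the real
    material is dispersive and lossy, which this sketch ignores.
    """
    ez = np.zeros(nz)        # electric field on integer grid points
    hy = np.zeros(nz - 1)    # magnetic field, staggered half a cell
    for n in range(nsteps):
        hy += np.diff(ez)                          # H update
        ez[1:-1] += np.diff(hy) / eps_r            # E update inside the medium
        ez[20] += np.exp(-((n - 40) / 12.0) ** 2)  # soft Gaussian source
    return ez

fields = fdtd_1d()
```

Dividing the electric-field update by eps_r keeps the effective Courant number below one, so the scheme stays stable for any eps_r >= 1.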
Monte Carlo N Particle code - Dose distribution of clinical electron beams in inhomogeneous phantoms
Nedaie, H. A.; Mosleh-Shirazi, M. A.; Allahverdi, M.
2013-01-01
Electron dose distributions calculated using the currently available analytical methods can be associated with large uncertainties. The Monte Carlo method is the most accurate method for dose calculation in electron beams. Most of the clinical electron beam simulation studies have been performed using non-MCNP [Monte Carlo N Particle] codes. Given the differences between Monte Carlo codes, this work aims to evaluate the accuracy of MCNP4C-simulated electron dose distributions in a homogeneous phantom and around inhomogeneities. Different types of phantoms ranging in complexity were used; namely, a homogeneous water phantom and phantoms made of polymethyl methacrylate slabs containing different-sized, low- and high-density inserts of heterogeneous materials. Electron beams with 8 and 15 MeV nominal energy generated by an Elekta Synergy linear accelerator were investigated. Measurements were performed for a 10 cm × 10 cm applicator at a source-to-surface distance of 100 cm. Individual parts of the beam-defining system were introduced into the simulation one at a time in order to show their effect on depth doses. In contrast to the first scattering foil, the secondary scattering foil, X and Y jaws and applicator provide up to 5% of the dose. A 2%/2 mm agreement between MCNP and measurements was found in the homogeneous phantom, and in the presence of heterogeneities in the range of 1-3%, being generally within 2% of the measurements for both energies in a "complex" phantom. A full-component simulation is necessary in order to obtain a realistic model of the beam. The MCNP4C results agree well with the measured electron dose distributions. PMID:23533162
High-capacity quantum Fibonacci coding for key distribution
NASA Astrophysics Data System (ADS)
Simon, David S.; Lawrence, Nate; Trevino, Jacob; Dal Negro, Luca; Sergienko, Alexander V.
2013-03-01
Quantum cryptography and quantum key distribution (QKD) have been the most successful applications of quantum information processing, highlighting the unique capability of quantum mechanics, through the no-cloning theorem, to securely share encryption keys between two parties. Here, we present an approach to high-capacity, high-efficiency QKD by exploiting cross-disciplinary ideas from quantum information theory and the theory of light scattering of aperiodic photonic media. We propose a unique type of entangled-photon source, as well as a physical mechanism for efficiently sharing keys. The key-sharing protocol combines entanglement with the mathematical properties of a recursive sequence to allow a realization of the physical conditions necessary for implementation of the no-cloning principle for QKD, while the source produces entangled photons whose orbital angular momenta (OAM) are in a superposition of Fibonacci numbers. The source is used to implement a particular physical realization of the protocol by randomly encoding the Fibonacci sequence onto entangled OAM states, allowing secure generation of long keys from few photons. Unlike in polarization-based protocols, reference-frame alignment is unnecessary, while the required experimental setup is simpler than that of other OAM-based protocols capable of achieving the same capacity, and its complexity grows less rapidly with the increasing range of OAM used.
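The role of the recursive sequence can be illustrated with Zeckendorf's theorem, by which every positive integer has a unique representation as a sum of non-consecutive Fibonacci numbers; this hypothetical helper is for illustration only and is not the authors' key-generation procedure:

```python
def zeckendorf(n):
    """Return the unique non-consecutive Fibonacci numbers summing to n > 0."""
    fibs = [1, 2]
    while fibs[-1] < n:
        fibs.append(fibs[-1] + fibs[-2])
    parts = []
    for f in reversed(fibs):          # greedy choice yields the Zeckendorf form
        if f <= n:
            parts.append(f)
            n -= f
    return parts

# e.g. 100 decomposes as 89 + 8 + 3
```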
Modeling anomalous radial transport in kinetic transport codes
NASA Astrophysics Data System (ADS)
Bodi, K.; Krasheninnikov, S. I.; Cohen, R. H.; Rognlien, T. D.
2009-11-01
Anomalous transport is typically the dominant component of the radial transport in magnetically confined plasmas, where the physical origin of this transport is believed to be plasma turbulence. A model is presented for anomalous transport that can be used in continuum kinetic edge codes like TEMPEST, NEO and the next-generation code being developed by the Edge Simulation Laboratory. The model can also be adapted to particle-based codes. It is demonstrated that the model with velocity-dependent diffusion and convection terms can match a diagonal gradient-driven transport matrix as found in contemporary fluid codes, but can also include off-diagonal effects. The anomalous transport model is also combined with particle drifts and a particle/energy-conserving Krook collision operator to study possible synergistic effects with neoclassical transport. For the latter study, a velocity-independent anomalous diffusion coefficient is used to mimic the effect of long-wavelength E×B turbulence.
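The diagonal gradient-driven part of such a transport matrix reduces to a radial flux of the form Γ = −D ∂n/∂r + V n; a minimal sketch, with plain constants standing in for the velocity-dependent coefficients of the kinetic model:

```python
import numpy as np

def anomalous_flux(n, r, D, V):
    """Radial particle flux Gamma = -D dn/dr + V n (diagonal model only).

    D and V would be velocity-dependent in the kinetic code; here they
    are simple constants for illustration."""
    dndr = np.gradient(n, r)      # handles the nonuniform/uniform grid alike
    return -D * dndr + V * n

r = np.linspace(1.0, 2.0, 50)       # assumed radial grid
n = np.exp(-2.0 * (r - 1.0))        # assumed falling density profile
gamma = anomalous_flux(n, r, D=0.5, V=-0.1)
```

With a falling profile the diffusive term drives flux outward while the inward pinch (negative V) opposes it, the usual competition in such models.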
Video distribution system cost model
NASA Technical Reports Server (NTRS)
Gershkoff, I.; Haspert, J. K.; Morgenstern, B.
1980-01-01
A cost model that can be used to systematically identify the costs of procuring and operating satellite-linked communications systems is described. The user defines a network configuration by specifying the location of each participating site, the interconnection requirements, and the transmission paths available for the uplink (studio to satellite), downlink (satellite to audience), and voice talkback (between audience and studio) segments of the network. The model uses this information to calculate the least expensive signal distribution path for each participating site. Cost estimates are broken down by capital, installation, lease, operations and maintenance. The design of the model permits flexibility in specifying network and cost structure.
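The least-expensive-path selection can be sketched as a minimum over per-site options, each broken down into the model's cost categories; the option names and figures below are invented for illustration, not taken from the actual model:

```python
def least_cost_path(site_options):
    """Pick the cheapest transmission option for one site.

    Each option carries the model's cost breakdown: capital,
    installation, lease, and operations/maintenance."""
    def total(opt):
        return sum(opt[k] for k in ("capital", "install", "lease", "o_and_m"))
    return min(site_options, key=total)

options = [
    {"name": "uplink_A", "capital": 120, "install": 10, "lease": 30, "o_and_m": 15},
    {"name": "uplink_B", "capital": 90, "install": 20, "lease": 40, "o_and_m": 20},
]
best = least_cost_path(options)   # totals: 175 vs 170, so uplink_B wins
```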
Modeling Nucleon Generalized Parton Distributions
Radyushkin, Anatoly V.
2013-05-01
We discuss building models for nucleon generalized parton distributions (GPDs) H and E that are based on the formalism of double distributions (DDs). We found that the usual "DD+D-term" construction should be amended by an extra term, ξE¹₊(x, ξ), built from the α/β moment of the DD e(β, α) that generates GPD E(x, ξ). Unlike the D-term, this function has support on the whole −1 < x < 1 region, and in general does not vanish at the border points |x| = ξ.
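For context, the standard "DD+D-term" construction that the paper amends obtains a GPD by projecting the double distribution and adding the D-term; this is a sketch of the textbook relation, not the amended construction of this work:

```latex
H(x,\xi) \;=\; \int_{-1}^{1}\! d\beta \int_{-1+|\beta|}^{1-|\beta|}\! d\alpha\;
\delta(x-\beta-\xi\alpha)\, f(\beta,\alpha)
\;+\; \theta(\xi-|x|)\, D\!\left(\tfrac{x}{\xi}\right)
```

The D-term lives only on the central region |x| < ξ, which is why an extra term with support on all of −1 < x < 1 changes the construction qualitatively.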
Verification of thermal analysis codes for modeling solid rocket nozzles
NASA Astrophysics Data System (ADS)
Keyhani, M.
1993-05-01
One of the objectives of the Solid Propulsion Integrity Program (SPIP) at Marshall Space Flight Center (MSFC) is development of thermal analysis codes capable of accurately predicting the temperature field, pore pressure field and the surface recession experienced by decomposing polymers which are used as thermal barriers in solid rocket nozzles. The objective of this study is to provide a means of verifying thermal analysis codes developed for modeling flow and heat transfer in solid rocket nozzles. In order to meet the stated objective, a test facility was designed and constructed for measurement of the transient temperature field in a sample composite subjected to a constant heat flux boundary condition. The heating was provided via a thin steel foil with a thickness of 0.025 mm. The designed electrical circuit can provide a heating rate of 1800 W. The heater was sandwiched between two identical samples, thus ensuring equal power distribution between them. The samples were fitted with Type K thermocouples, and the exact locations of the thermocouples were determined via X-rays. The experiments were modeled via a one-dimensional code (UT1D) as a conduction and phase change heat transfer process. Since the pyrolysis gas flow was in the direction normal to the heat flow, the numerical model could not account for the convection cooling effect of the pyrolysis gas flow. Therefore, the predicted values in the decomposition zone are considered to be an upper estimate of the temperature. From the analysis of the experimental and the numerical results the following are concluded: (1) The virgin and char specific heat data for FM 5055 as reported by SoRI cannot be used to obtain any reasonable agreement between the measured temperatures and the predictions. However, use of virgin and char specific heat data given in the Acurex report produced good agreement for most of the measured temperatures. (2) Constant heat flux heating process can produce a much higher
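A minimal explicit sketch of the measured configuration (constant heat flux into a one-dimensional sample) is shown below; the material properties, flux, and geometry are placeholders rather than the FM 5055 data, and phase change and pyrolysis-gas convection (which the UT1D comparison discusses) are ignored:

```python
import numpy as np

def heat_flux_experiment(q=5e4, k=0.8, rho_c=1.5e6, L=0.01, nx=50,
                         dt=0.01, nsteps=1000):
    """Explicit 1-D conduction with a constant heat flux at x = 0.

    q: applied flux [W/m^2], k: conductivity [W/m/K],
    rho_c: volumetric heat capacity [J/m^3/K], L: thickness [m]."""
    dx = L / (nx - 1)
    alpha = k / rho_c
    assert alpha * dt / dx**2 < 0.5      # explicit stability limit
    T = np.full(nx, 300.0)               # initial temperature, K
    for _ in range(nsteps):
        Tn = T.copy()
        T[1:-1] = Tn[1:-1] + alpha * dt / dx**2 * (
            Tn[2:] - 2.0 * Tn[1:-1] + Tn[:-2])
        T[0] = T[1] + q * dx / k         # constant-flux (Neumann) boundary
        T[-1] = Tn[-1]                   # far side held at its initial value
    return T

temps = heat_flux_experiment()           # heated face ends up hottest
```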
Verification of thermal analysis codes for modeling solid rocket nozzles
NASA Technical Reports Server (NTRS)
Keyhani, M.
1993-01-01
Development of a Model and Computer Code to Describe Solar Grade Silicon Production Processes
NASA Technical Reports Server (NTRS)
Srivastava, R.; Gould, R. K.
1979-01-01
Mathematical models and computer codes based on these models, which allow prediction of the product distribution in chemical reactors for converting gaseous silicon compounds to condensed-phase silicon, were developed. The following tasks were accomplished: (1) formulation of a model for silicon vapor separation/collection from the developing turbulent flow stream within reactors of the Westinghouse reactor; (2) modification of an available general parabolic code to achieve solutions to the governing partial differential equations (boundary layer type) which describe migration of the vapor to the reactor walls, (3) a parametric study using the boundary layer code to optimize the performance characteristics of the Westinghouse reactor, (4) calculations relating to the collection efficiency of the new AeroChem reactor, and (5) final testing of the modified LAPP code for use as a method of predicting Si(l) droplet sizes in these reactors.
NASA Astrophysics Data System (ADS)
Muanenda, Yonas; Oton, Claudio J.; Faralli, Stefano; Di Pasquale, Fabrizio
2015-07-01
We propose and experimentally demonstrate a Distributed Acoustic Sensor exploiting cyclic Simplex coding in a phase-sensitive OTDR on standard single mode fibers based on direct detection. Suitable design of the source and use of cyclic coding is shown to improve the SNR of the coherent back-scattered signal by up to 9 dB, reducing fading due to modulation instability and enabling accurate long-distance measurement of vibrations with minimal post-processing.
ADVANCED ELECTRIC AND MAGNETIC MATERIAL MODELS FOR FDTD ELECTROMAGNETIC CODES
Poole, B R; Nelson, S D; Langdon, S
2005-05-05
The modeling of dielectric and magnetic materials in the time domain is required for pulse power applications, pulsed induction accelerators, and advanced transmission lines. For example, most induction accelerator modules require the use of magnetic materials to provide adequate volt-seconds during the acceleration pulse. These models require hysteresis and saturation to simulate the saturation wavefront in a multipulse environment. In high voltage transmission line applications such as shock or soliton lines, the dielectric operates in a highly nonlinear regime, which requires nonlinear models. Simple 1-D models are developed for fast parameterization of transmission line structures. In the case of nonlinear dielectrics, a simple analytic model describing the permittivity in terms of electric field is used in a 3-D finite difference time domain (FDTD) code. In the case of magnetic materials, both rate-independent and rate-dependent Hodgdon magnetic material models have been implemented into 3-D FDTD codes and 1-D codes.
Quantization and psychoacoustic model in audio coding in advanced audio coding
NASA Astrophysics Data System (ADS)
Brzuchalski, Grzegorz
2011-10-01
This paper presents a complete optimized architecture of Advanced Audio Coding quantization with Huffman coding. Then psychoacoustic model theory is presented and a few algorithms are described: the standard Two Loop Search, its modifications, Genetic, Just Noticeable Level Difference, Trellis-Based and its modification, the Cascaded Trellis-Based Algorithm.
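The Two Loop Search mentioned above can be caricatured by its outer rate loop: coarsen the quantizer step until the coded size fits the bit budget. This sketch uses a simplified linear quantizer and a stand-in bit-cost function rather than the AAC 3/4-power quantizer and real Huffman tables:

```python
def rate_loop(spectrum, bit_budget, bits_fn):
    """Outer (rate) loop of a Two Loop Search style quantizer.

    Coarsens the global step until bits_fn (a stand-in for Huffman
    coding of the quantized lines) fits the budget."""
    step = 1.0
    while True:
        q = [round(abs(x) / step) for x in spectrum]
        if bits_fn(q) <= bit_budget:
            return q, step
        step *= 1.25                 # coarser quantization, fewer bits

def bits(q):
    """Crude bit-cost proxy: one sign bit plus magnitude bits per line."""
    return sum(v.bit_length() + 1 for v in q)

q, step = rate_loop([8.0, 4.0, 2.0, 1.0], bit_budget=8, bits_fn=bits)
```

In the full algorithm an inner distortion loop would also adjust per-band scalefactors against the psychoacoustic thresholds; only the rate loop is shown here.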
LMFBR models for the ORIGEN2 computer code
Croff, A.G.; McAdoo, J.W.; Bjerke, M.A.
1981-10-01
Reactor physics calculations have led to the development of nine liquid-metal fast breeder reactor (LMFBR) models for the ORIGEN2 computer code. Four of the models are based on the U-Pu fuel cycle, two are based on the Th-U-Pu fuel cycle, and three are based on the Th-²³⁸U fuel cycle. The reactor models are based on cross sections taken directly from the reactor physics codes. Descriptions of the reactor models as well as values for the ORIGEN2 flux parameters THERM, RES, and FAST are given.
LMFBR models for the ORIGEN2 computer code
Croff, A.G.; McAdoo, J.W.; Bjerke, M.A.
1983-06-01
Reactor physics calculations have led to the development of nine liquid-metal fast breeder reactor (LMFBR) models for the ORIGEN2 computer code. Four of the models are based on the U-Pu fuel cycle, two are based on the Th-U-Pu fuel cycle, and three are based on the Th-²³³U fuel cycle. The reactor models are based on cross sections taken directly from the reactor physics codes. Descriptions of the reactor models as well as values for the ORIGEN2 flux parameters THERM, RES, and FAST are given.
The Overlap Model: A Model of Letter Position Coding
Ratcliff, Roger; Perea, Manuel
2008-01-01
Recent research has shown that letter identity and letter position are not integral perceptual dimensions (e.g., jugde primes judge in word-recognition experiments). Most comprehensive computational models of visual word recognition (e.g., the interactive activation model, J. L. McClelland & D. E. Rumelhart, 1981, and its successors) assume that the position of each letter within a word is perfectly encoded. Thus, these models are unable to explain the presence of effects of letter transposition (trial-trail), letter migration (beard-bread), repeated letters (moose-mouse), or subset/superset effects (faulty-faculty). The authors extend R. Ratcliff's (1981) theory of order relations for encoding of letter positions and show that the model can successfully deal with these effects. The basic assumption is that letters in the visual stimulus have distributions over positions so that the representation of one letter will extend into adjacent letter positions. To test the model, the authors conducted a series of forced-choice perceptual identification experiments. The overlap model produced very good fits to the empirical data, and even a simplified 2-parameter model was capable of producing fits for 104 observed data points with a correlation coefficient of .91. PMID:18729592
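The model's core assumption (letter positions are distributions that overlap into adjacent slots, so transposed letters still contribute match evidence) can be sketched as a toy similarity measure; the single shared standard deviation is a simplification of the model's per-letter parameters:

```python
import math

def position_overlap(word_a, word_b, sd=1.0):
    """Toy overlap-model similarity for equal-length strings.

    Each letter's position is treated as a Gaussian, so a letter in
    word_b contributes according to its distance from a matching
    letter in word_a."""
    score = 0.0
    for i, ch_a in enumerate(word_a):
        for j, ch_b in enumerate(word_b):
            if ch_a == ch_b:
                score += math.exp(-((i - j) ** 2) / (2.0 * sd ** 2))
    return score / len(word_a)
```

Under this measure a transposition neighbor ("trail" for "trial") scores well below identity but far above an unrelated word, which is the qualitative pattern the experiments test.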
The improved code TAC maker for modeling of planet transits
NASA Astrophysics Data System (ADS)
Kjurkchieva, D.; Dimitrov, D.; Vladev, A.
We present improvements of the code TAC-maker for modeling of planet transits. While the initial version of the code calculated synthetic transits for certain values of the input parameters, the new version TAC-maker 1.1.0 makes it possible to obtain numerous synthetic transits simultaneously, corresponding to chosen ranges of values for each fitted parameter. The most valuable property of the improved version of the code is the ability to obtain the global minimum of χ² in the multidimensional parametric space and to estimate the errors of the searched parameters.
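Scanning ranges of each fitted parameter for the global χ² minimum amounts to a brute-force grid search, which can be sketched as follows; the function names and the toy constant-flux model are illustrative, not TAC-maker's actual interface:

```python
import itertools
import numpy as np

def grid_chi2(model_fn, data, sigma, param_ranges):
    """Brute-force chi-squared over the Cartesian grid of parameter ranges."""
    best_chi2, best_params = np.inf, None
    for params in itertools.product(*param_ranges):
        model = model_fn(params)
        chi2 = np.sum(((data - model) / sigma) ** 2)
        if chi2 < best_chi2:
            best_chi2, best_params = chi2, params
    return best_chi2, best_params

best_chi2, best_params = grid_chi2(
    lambda p: p[0] * np.ones(5),      # toy "synthetic transit": flat curve
    2.0 * np.ones(5),                 # toy observed data
    1.0,                              # uniform uncertainty
    [np.linspace(0.0, 4.0, 9)],       # scan one parameter over its range
)
```

Error estimates then follow from the shape of the χ² surface around the minimum, which the grid already provides.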
Modeling Guidelines for Code Generation in the Railway Signaling Context
NASA Technical Reports Server (NTRS)
Ferrari, Alessio; Bacherini, Stefano; Fantechi, Alessandro; Zingoni, Niccolo
2009-01-01
Modeling guidelines constitute one of the fundamental cornerstones for Model Based Development. Their relevance is essential when dealing with code generation in the safety-critical domain. This article presents the experience of a railway signaling systems manufacturer on this issue. Introduction of Model-Based Development (MBD) and code generation in the industrial safety-critical sector created a crucial paradigm shift in the development process of dependable systems. While traditional software development focuses on the code, with MBD practices the focus shifts to model abstractions. The change has fundamental implications for safety-critical systems, which still need to guarantee a high degree of confidence also at code level. Usage of the Simulink/Stateflow platform for modeling, which is a de facto standard in control software development, does not ensure by itself production of high-quality dependable code. This issue has been addressed by companies through the definition of modeling rules imposing restrictions on the usage of design tool components, in order to enable production of qualified code. The MAAB Control Algorithm Modeling Guidelines (MathWorks Automotive Advisory Board) [3] is a well established set of publicly available rules for modeling with Simulink/Stateflow. This set of recommendations has been developed by a group of OEMs and suppliers of the automotive sector with the objective of enforcing and easing the usage of the MathWorks tools within the automotive industry. The guidelines were published in 2001 and afterwards revised in 2007 in order to integrate some additional rules developed by the Japanese division of MAAB [5]. The scope of the current edition of the guidelines ranges from model maintainability and readability to code generation issues. The rules are conceived as a reference baseline and therefore they need to be tailored to comply with the characteristics of each industrial context. Customization of these
2010-01-01
Background Intragenic tandem repeats occur throughout all domains of life and impart functional and structural variability to diverse translation products. Repeat proteins confer distinctive surface phenotypes to many unicellular organisms, including those with minimal genomes such as the wall-less bacterial monoderms, Mollicutes. One such repeat pattern in this clade is distributed in a manner suggesting its exchange by horizontal gene transfer (HGT). Expanding genome sequence databases reveal the pattern in a widening range of bacteria, and recently among eucaryotic microbes. We examined the genomic flux and consequences of the motif by determining its distribution, predicted structural features and association with membrane-targeted proteins. Results Using a refined hidden Markov model, we document a 25-residue protein sequence motif tandemly arrayed in variable-number repeats in ORFs lacking assigned functions. It appears sporadically in unicellular microbes from disparate bacterial and eucaryotic clades, representing diverse lifestyles and ecological niches that include host parasitic, marine and extreme environments. Tracts of the repeats predict a malleable configuration of recurring domains, with conserved hydrophobic residues forming an amphipathic secondary structure in which hydrophilic residues endow extensive sequence variation. Many ORFs with these domains also have membrane-targeting sequences that predict assorted topologies; others may comprise reservoirs of sequence variants. We demonstrate expressed variants among surface lipoproteins that distinguish closely related animal pathogens belonging to a subgroup of the Mollicutes. DNA sequences encoding the tandem domains display dyad symmetry. Moreover, in some taxa the domains occur in ORFs selectively associated with mobile elements. These features, a punctate phylogenetic distribution, and different patterns of dispersal in genomes of related taxa, suggest that the repeat may be disseminated by
Not Available
1988-03-01
HYDROCOIN is an international study for examining ground-water flow modeling strategies and their influence on safety assessments of geologic repositories for nuclear waste. This report summarizes only the combined NRC project teams' simulation efforts on the computer code benchmarking problems. The codes used to simulate these seven problems were SWIFT II, FEMWATER, UNSAT2M, USGS-3D, and TOUGH. In general, linear problems involving scalars such as hydraulic head were accurately simulated by both finite-difference and finite-element solution algorithms. Both types of codes produced accurate results even for complex geometries such as intersecting fractures. Difficulties were encountered in solving problems that involved nonlinear effects such as density-driven flow and unsaturated flow. In order to fully evaluate the accuracy of these codes, post-processing of results using particle tracking algorithms and calculating fluxes were examined. This proved very valuable by uncovering disagreements among code results even though the hydraulic-head solutions had been in agreement. 9 refs., 111 figs., 6 tabs.
Water Distribution and Removal Model
Y. Deng; N. Chipman; E.L. Hardin
2005-08-26
The design of the Yucca Mountain high level radioactive waste repository depends on the performance of the engineered barrier system (EBS). To support the total system performance assessment (TSPA), the Engineered Barrier System Degradation, Flow, and Transport Process Model Report (EBS PMR) is developed to describe the thermal, mechanical, chemical, hydrological, biological, and radionuclide transport processes within the emplacement drifts, which includes the following major analysis/model reports (AMRs): (1) EBS Water Distribution and Removal (WD&R) Model; (2) EBS Physical and Chemical Environment (P&CE) Model; (3) EBS Radionuclide Transport (EBS RNT) Model; and (4) EBS Multiscale Thermohydrologic (TH) Model. Technical information, including data, analyses, models, software, and supporting documents will be provided to defend the applicability of these models for their intended purpose of evaluating the postclosure performance of the Yucca Mountain repository system. The WD&R AMR is important to the site recommendation. Water distribution and removal represents one component of the overall EBS. Under some conditions, liquid water will seep into emplacement drifts through fractures in the host rock and move generally downward, potentially contacting waste packages. After waste packages are breached by corrosion, some of this seepage water will contact the waste, dissolve or suspend radionuclides, and ultimately carry radionuclides through the EBS to the near-field host rock. Lateral diversion of liquid water within the drift will occur at the inner drift surface, and more significantly from the operation of engineered structures such as drip shields and the outer surface of waste packages. If most of the seepage flux can be diverted laterally and removed from the drifts before contacting the wastes, the release of radionuclides from the EBS can be controlled, resulting in a proportional reduction in dose release at the accessible environment. The purposes
Fluid-Rock Interaction Models: Code Release and Results
NASA Astrophysics Data System (ADS)
Bolton, E. W.
2006-12-01
Numerical models our group has developed for understanding the role of kinetic processes during fluid-rock interaction will be released free to the public. We will also present results that highlight the importance of kinetic processes. The author is preparing manuals describing the numerical methods used, as well as "how-to" guides for using the models. The release will include input files, full in-line code documentation of the FORTRAN source code, and instructions for use of model output for visualization and analysis. The aqueous phase (weathering) and supercritical (mixed-volatile metamorphic) fluid flow and reaction models for porous media will be released separately. These codes will be useful as teaching and research tools. The codes may be run on current generation personal computers. Although other codes are available for attacking some of the problems we address, unique aspects of our codes include sub-grid-scale grain models to track grain size changes, as well as dynamic porosity and permeability. Also, as the flow field can change significantly over the course of the simulation, efficient solution methods have been developed for the repeated solution of Poisson-type equations that arise from Darcy's law. These include sparse-matrix methods as well as the even more efficient spectral-transform technique. Results will be presented for kinetic control of reaction pathways and for heterogeneous media. Codes and documentation for modeling intra-grain diffusion of trace elements and isotopes, and exchange of these between grains and moving fluids will also be released. The unique aspect of this model is that it includes concurrent diffusion and grain growth or dissolution for multiple mineral types (low-diffusion regridding has been developed to deal with the moving-boundary problem at the fluid/mineral interface). Results for finite diffusion rates will be compared to batch and fractional melting models. Additional code and documentation will be released
Williamson, Nathan H; Nydén, Magnus; Röding, Magnus
2016-06-01
We present comprehensive derivations for the statistical models and methods for the use of pulsed gradient spin echo (PGSE) NMR to characterize the molecular weight distribution of polymers via the well-known scaling law relating diffusion coefficients and molecular weights. We cover the lognormal and gamma distribution models and linear combinations of these distributions. Although the focus is on methodology, we illustrate the use experimentally with three polystyrene samples, comparing the NMR results to gel permeation chromatography (GPC) measurements, test the accuracy and noise-sensitivity on simulated data, and provide code for implementation. PMID:27116223
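The underlying statistical model (echo attenuation averaged over a molecular weight distribution, with weights mapped to diffusion coefficients by the scaling law D = K·M^(−ν)) can be sketched by Monte Carlo averaging; K, ν, and the lognormal parameters below are placeholders, not fitted values from the paper:

```python
import numpy as np

def pgse_attenuation(b, mu, sigma, K=1e-9, nu=0.6, nsamples=50_000, seed=0):
    """PGSE echo attenuation E(b) = < exp(-b D) > for a lognormal
    molecular weight distribution, via the scaling law D = K * M**(-nu).

    mu, sigma parameterize ln(M); b is an array of b-values [s/m^2]."""
    rng = np.random.default_rng(seed)
    M = rng.lognormal(mean=mu, sigma=sigma, size=nsamples)
    D = K * M ** (-nu)                       # diffusion coefficients [m^2/s]
    return np.exp(-np.outer(b, D)).mean(axis=1)

b = np.linspace(0.0, 2e12, 20)               # assumed b-value range
E = pgse_attenuation(b, mu=np.log(1e5), sigma=0.5)
```

The closed-form lognormal and gamma expressions derived in the paper replace this Monte Carlo average in an actual fit; the sketch only shows the forward model being inverted.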
Fast Coding Unit Encoding Mechanism for Low Complexity Video Coding
Wu, Yueying; Jia, Kebin; Gao, Guandong
2016-01-01
In high efficiency video coding (HEVC), the coding tree contributes to excellent compression performance. However, the coding tree brings extremely high computational complexity. This paper presents innovative work on improving the coding tree to further reduce encoding time. A novel low complexity coding tree mechanism is proposed for HEVC fast coding unit (CU) encoding. Firstly, this paper makes an in-depth study of the relationship among CU distribution, quantization parameter (QP) and content change (CC). Secondly, a CU coding tree probability model is proposed for modeling and predicting CU distribution. Finally, a CU coding tree probability update is proposed, aiming to address the probabilistic model distortion problem caused by CC. Experimental results show that the proposed low complexity CU coding tree mechanism significantly reduces encoding time, by 27% for lossy coding and 42% for visually lossless and lossless coding. The proposed mechanism improves coding performance under various application conditions. PMID:26999741
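A minimal sketch of the kind of CU split probability model and probability update described above (the threshold, pseudo-counts, and decay factor are hypothetical choices, not the paper's values):

```python
class CUSplitModel:
    """Illustrative probability model for early CU split termination.
    One context is kept here; a real encoder would keep one per depth/QP."""

    def __init__(self, threshold=0.1, decay=0.5):
        self.split = 1          # pseudo-count of observed splits (prior)
        self.total = 2          # total observations (prior)
        self.threshold = threshold
        self.decay = decay

    def p_split(self):
        return self.split / self.total

    def should_try_split(self):
        # Skip the costly rate-distortion search of sub-CUs when a split
        # is predicted to be unlikely.
        return self.p_split() >= self.threshold

    def observe(self, was_split):
        self.split += 1 if was_split else 0
        self.total += 1

    def content_change(self):
        # Probability update on content change: shrink the accumulated
        # counts back toward the prior to limit model distortion.
        self.split = 1 + self.decay * (self.split - 1)
        self.total = 2 + self.decay * (self.total - 2)
```

Skipping the recursive rate-distortion search when P(split) falls below the threshold is what saves encoding time; the decay step counteracts probability model distortion after a content change.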
A distributed code for color in natural scenes derived from center-surround filtered cone signals
Kellner, Christian J.; Wachtler, Thomas
2013-01-01
In the retina of trichromatic primates, chromatic information is encoded in an opponent fashion and transmitted to the lateral geniculate nucleus (LGN) and visual cortex via parallel pathways. Chromatic selectivities of neurons in the LGN form two separate clusters, corresponding to two classes of cone opponency. In the visual cortex, however, the chromatic selectivities are more distributed, which is in accordance with a population code for color. Previous studies of cone signals in natural scenes typically found opponent codes with chromatic selectivities corresponding to two directions in color space. Here we investigated how the non-linear spatio-chromatic filtering in the retina influences the encoding of color signals. Cone signals were derived from hyper-spectral images of natural scenes and preprocessed by center-surround filtering and rectification, resulting in parallel ON and OFF channels. Independent Component Analysis (ICA) on these signals yielded a highly sparse code with basis functions that showed spatio-chromatic selectivities. In contrast to previous analyses of linear transformations of cone signals, chromatic selectivities were not restricted to two main chromatic axes, but were more continuously distributed in color space, similar to the population code of color in the early visual cortex. Our results indicate that spatio-chromatic processing in the retina leads to a more distributed and more efficient code for natural scenes. PMID:24098289
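The preprocessing stage can be illustrated with a 1-D toy version (the study itself uses 2-D spatio-chromatic filters on hyperspectral image patches; the simple neighbour-average surround below is an illustrative stand-in):

```python
def center_surround(signal):
    """1-D analogue of retinal center-surround filtering: each response is
    the center value minus the average of its two neighbours (edges are
    padded by replication)."""
    n = len(signal)
    out = []
    for i in range(n):
        left = signal[i - 1] if i > 0 else signal[0]
        right = signal[i + 1] if i < n - 1 else signal[-1]
        out.append(signal[i] - 0.5 * (left + right))
    return out

def rectify(responses):
    """Half-wave rectification into parallel ON and OFF channels, the
    nonlinearity that precedes the ICA stage described above."""
    on = [max(r, 0.0) for r in responses]
    off = [max(-r, 0.0) for r in responses]
    return on, off
```

Uniform input produces no response, while an edge activates the ON channel on one side and the OFF channel on the other, so the ICA stage sees a sparse, nonnegative representation.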
SRVAL. Stock-Recruitment Model VALidation Code
Christensen, S.W.
1989-12-07
SRVAL is a computer simulation model of the Hudson River striped bass population. It was designed to aid in assessing the validity of curve-fits of the linearized Ricker stock-recruitment model, modified to incorporate multiple-age spawners and to include an environmental variable, to variously processed annual catch-per-unit-effort (CPUE) statistics for a fish population. It is sometimes asserted that curve-fits of this kind can be used to determine the sensitivity of fish populations to such man-induced stresses as entrainment and impingement at power plants. SRVAL was developed to test such assertions and was utilized in testimony written in connection with the Hudson River Power Case (U. S. Environmental Protection Agency, Region II).
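The linearized Ricker fit that SRVAL is built to scrutinize reduces, in its simplest single-age form, to ordinary least squares on ln(R/S) versus S. The sketch below omits SRVAL's multiple-age-spawner and environmental-variable extensions:

```python
import math

def fit_linearized_ricker(spawners, recruits):
    """Least-squares fit of the linearized Ricker model
    ln(R/S) = ln(a) - b*S. Returns the estimated (a, b)."""
    xs = list(spawners)
    ys = [math.log(r / s) for s, r in zip(spawners, recruits)]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = ybar - slope * xbar
    return math.exp(intercept), -slope
```

On noise-free synthetic data the fit recovers the generating parameters exactly; SRVAL's point is to test how such fits behave on variously processed CPUE statistics, where that is far from guaranteed.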
Code System to Model Aqueous Geochemical Equilibria.
2001-08-23
Version: 00 MINTEQ is a geochemical program to model aqueous solutions and the interactions of aqueous solutions with hypothesized assemblages of solid phases. It was developed for the Environmental Protection Agency to perform the calculations necessary to simulate the contact of waste solutions with heterogeneous sediments or the interaction of ground water with solidified wastes. MINTEQ can calculate ion speciation/solubility, adsorption, oxidation-reduction, gas phase equilibria, and precipitation/dissolution of solid phases. MINTEQ can accept a finite mass for any solid considered for dissolution and will dissolve the specified solid phase only until its initial mass is exhausted. This ability enables MINTEQ to model flow-through systems. In these systems the masses of solid phases that precipitate at earlier pore volumes can be dissolved at later pore volumes according to thermodynamic constraints imposed by the solution composition and solid phases present. The ability to model these systems permits evaluation of the geochemistry of dissolved trace metals in settings such as low-level waste shallow land burial sites. MINTEQ was designed to solve geochemical equilibria for systems composed of one kilogram of water, various amounts of material dissolved in solution, and any solid materials that are present. Systems modeled using MINTEQ can exchange energy and material (open systems) or just energy (closed systems) with the surrounding environment. Each system is composed of a number of phases. Every phase is a region with distinct composition and physically definable boundaries. All of the material in the aqueous solution forms one phase. The gas phase is composed of any gaseous material present, and each compositionally and structurally distinct solid forms a separate phase.
Processing of chemical sensor arrays with a biologically inspired model of olfactory coding.
Raman, Baranidharan; Sun, Ping A; Gutierrez-Galvez, Agustin; Gutierrez-Osuna, Ricardo
2006-07-01
This paper presents a computational model for chemical sensor arrays inspired by the first two stages in the olfactory pathway: distributed coding with olfactory receptor neurons and chemotopic convergence onto glomerular units. We propose a monotonic concentration-response model that maps conventional sensor-array inputs into a distributed activation pattern across a large population of neuroreceptors. Projection onto glomerular units in the olfactory bulb is then simulated with a self-organizing model of chemotopic convergence. The pattern recognition performance of the model is characterized using a database of odor patterns from an array of temperature modulated chemical sensors. The chemotopic code achieved by the proposed model is shown to improve the signal-to-noise ratio available at the sensor inputs while being consistent with results from neurobiology. PMID:16856663
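The monotonic concentration-response stage can be sketched as follows; the Hill-type form and the ranges for the receptor sensitivity parameters are illustrative assumptions, not the paper's calibrated model:

```python
import random

def receptor_population(concentration, n_receptors=100, seed=0):
    """Map a scalar sensor input onto a distributed activation pattern
    across a large population of model receptors, each with its own
    monotonic Hill-type concentration-response curve (parameter ranges
    are hypothetical)."""
    rng = random.Random(seed)
    pattern = []
    for _ in range(n_receptors):
        k = 10 ** rng.uniform(-2, 2)   # half-activation concentration
        h = rng.uniform(1.0, 3.0)      # Hill coefficient
        pattern.append(concentration ** h / (concentration ** h + k ** h))
    return pattern
```

Because every response curve is monotonic, increasing the concentration can only raise each receptor's activation, which is the property the distributed code relies on before chemotopic convergence.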
Code-to-Code Comparison, and Material Response Modeling of Stardust and MSL using PATO and FIAT
NASA Technical Reports Server (NTRS)
Omidy, Ali D.; Panerai, Francesco; Martin, Alexandre; Lachaud, Jean R.; Cozmuta, Ioana; Mansour, Nagi N.
2015-01-01
This report provides a code-to-code comparison between PATO, a recently developed high fidelity material response code, and FIAT, NASA's legacy code for ablation response modeling. The goal is to demonstrate that FIAT and PATO generate the same results when using the same models. Test cases of increasing complexity are used, from both arc-jet testing and flight experiments. When using the exact same physical models, material properties and boundary conditions, the two codes give results that agree within 2%. The minor discrepancy is attributed to the inclusion of the gas phase heat capacity (cp) in the energy equation in PATO, but not in FIAT.
Building a generalized distributed system model
NASA Technical Reports Server (NTRS)
Mukkamala, Ravi
1991-01-01
A number of topics related to building a generalized distributed system model are discussed. The effects of distributed database modeling on evaluation of transaction rollbacks, the measurement of effects of distributed database models on transaction availability measures, and a performance analysis of static locking in replicated distributed database systems are covered.
MATHEMATICAL MODEL OF ELECTROSTATIC PRECIPITATION (REVISION 3): SOURCE CODE
This tape contains the source code (FORTRAN) for Revision 3 of the Mathematical Model of Electrostatic Precipitation. Improvements found in Revision 3 of the model include a new method of calculating the solutions to the electric field equations, a dynamic method for calculating ...
Reduced Fast Ion Transport Model For The Tokamak Transport Code TRANSP
Podesta, Mario; Gorelenkova, Marina; White, Roscoe
2014-02-28
Fast ion transport models presently implemented in the tokamak transport code TRANSP [R. J. Hawryluk, in Physics of Plasmas Close to Thermonuclear Conditions, CEC Brussels, 1, 19 (1980)] are not capturing important aspects of the physics associated with resonant transport caused by instabilities such as Toroidal Alfvén Eigenmodes (TAEs). This work describes the implementation of a fast ion transport model consistent with the basic mechanisms of resonant mode-particle interaction. The model is formulated in terms of a probability distribution function for the particle's steps in phase space, which is consistent with the Monte Carlo approach used in TRANSP. The proposed model is based on the analysis of fast ion response to TAE modes through the ORBIT code [R. B. White et al., Phys. Fluids 27, 2455 (1984)], but it can be generalized to higher frequency modes (e.g. Compressional and Global Alfvén Eigenmodes) and to other numerical codes or theories.
Frequency-coded quantum key distribution using amplitude-phase modulation
NASA Astrophysics Data System (ADS)
Morozov, Oleg G.; Gabdulkhakov, Il'daris M.; Morozov, Gennady A.; Zagrieva, Aida R.; Sarvarova, Lutsia M.
2016-03-01
Design principles of a universal microwave photonics system for quantum key distribution with frequency coding are considered. Its concept is based on the possibility of creating multi-functional units to implement the most commonly used technologies of frequency coding: amplitude, phase and combined amplitude-phase modulation and re-modulation of the optical carrier. The characteristics of advanced systems based on classical approaches and the prospects of their development using a combination of amplitude modulation and phase commutation are discussed. We also evaluate how to build advanced frequency-coded quantum key distribution systems, in both symmetric and asymmetric configurations, using passive detection of photon polarization states based on wavelength division multiplexing filters for the side components of the modulated optical carrier.
Ising model for distribution networks
NASA Astrophysics Data System (ADS)
Hooyberghs, H.; Van Lombeek, S.; Giuraniuc, C.; Van Schaeybroeck, B.; Indekeu, J. O.
2012-01-01
An elementary Ising spin model is proposed for demonstrating cascading failures (breakdowns, blackouts, collapses, avalanches, etc.) that can occur in realistic networks for distribution and delivery by suppliers to consumers. A ferromagnetic Hamiltonian with quenched random fields results from policies that maximize the gap between demand and delivery. Such policies can arise in a competitive market where firms artificially create new demand, or in a solidarity environment where too high a demand cannot reasonably be met. Network failure in the context of a policy of solidarity is possible when an initially active state becomes metastable and decays to a stable inactive state. We explore the characteristics of the demand and delivery, as well as the topological properties, that make the distribution network susceptible to failure. An effective temperature is defined, which governs the strength of the activity fluctuations that can induce a collapse. Numerical results, obtained by Monte Carlo simulations of the model on (mainly) scale-free networks, are supplemented with analytic mean-field approximations to the geometrical random-field fluctuations and the thermal spin fluctuations. The role of hubs versus poorly connected nodes in initiating the breakdown of network activity is illustrated and related to model parameters.
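A minimal Metropolis simulation of this kind of model might look as follows; the graph construction, field strength, and temperature below are illustrative stand-ins for the paper's setup, not its actual parameters:

```python
import math
import random

def simulate_network_ising(n=60, m=3, field=0.2, temp=0.5, steps=20000, seed=1):
    """Metropolis dynamics for H = -sum_<ij> s_i s_j - sum_i h_i s_i (J = 1)
    on a small preferential-attachment graph. The quenched random fields h_i
    stand in for the demand/delivery gap; all parameter values here are
    illustrative. Returns the final fraction of 'active' (+1) nodes."""
    rng = random.Random(seed)
    edges = {i: set() for i in range(n)}
    stubs = []  # node list weighted by degree, for preferential attachment
    for v in range(1, n):
        pool = stubs if len(set(stubs)) >= m else list(range(v))
        chosen = set()
        while len(chosen) < min(m, v):
            chosen.add(rng.choice(pool))
        for t in chosen:
            edges[v].add(t)
            edges[t].add(v)
            stubs.extend([v, t])
    h = [rng.uniform(-field, field) for _ in range(n)]  # quenched random fields
    s = [1] * n                                         # initially active state
    for _ in range(steps):
        i = rng.randrange(n)
        d_e = 2.0 * s[i] * (sum(s[j] for j in edges[i]) + h[i])
        if d_e <= 0.0 or rng.random() < math.exp(-d_e / temp):
            s[i] = -s[i]
    return sum(1 for x in s if x == 1) / n
```

At low effective temperature and weak fields the active state persists; raising `temp` or `field` pushes it past metastability, reproducing the collapse scenario qualitatively.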
Offset Manchester coding for Rayleigh noise suppression in carrier-distributed WDM-PONs
NASA Astrophysics Data System (ADS)
Xu, Jing; Yu, Xiangyu; Lu, Weichao; Qu, Fengzhong; Deng, Ning
2015-07-01
We propose a novel offset Manchester coding in upstream to simultaneously realize Rayleigh noise suppression and differential detection in a carrier-distributed wavelength division multiplexed passive optical network. Error-free transmission of 2.5-Gb/s upstream signals over 50-km standard single mode fiber is experimentally demonstrated, with a 7-dB enhanced tolerance to Rayleigh noise.
Model-free distributed learning
NASA Technical Reports Server (NTRS)
Dembo, Amir; Kailath, Thomas
1990-01-01
Model-free learning for synchronous and asynchronous quasi-static networks is presented. The network weights are continuously perturbed, while the time-varying performance index is measured and correlated with the perturbation signals; the correlation output determines the changes in the weights. The perturbation may be either via noise sources or orthogonal signals. The invariance to detailed network structure mitigates large variability between supposedly identical networks as well as implementation defects. This local, regular, and completely distributed mechanism requires no central control and involves only a few global signals. Thus it allows for integrated on-chip learning in large analog and optical networks.
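The perturb-measure-correlate loop can be sketched in a few lines. The learning rate, perturbation size, and quadratic test loss below are illustrative; the sketch is in the spirit of simultaneous-perturbation methods, not the authors' exact hardware rule:

```python
import random

def perturbation_learn(loss, w0, lr=0.5, delta=0.1, iters=400, seed=0):
    """Model-free learning sketch: perturb every weight simultaneously with
    a random +/-delta, measure the resulting change in the performance
    index, and update each weight against its correlation with that change.
    No gradients or knowledge of the network structure are used."""
    rng = random.Random(seed)
    w = list(w0)
    for _ in range(iters):
        perturb = [delta if rng.random() < 0.5 else -delta for _ in w]
        change = loss([wi + p for wi, p in zip(w, perturb)]) - loss(w)
        # correlate the measured change with each local perturbation signal
        w = [wi - lr * change * p / delta for wi, p in zip(w, perturb)]
    return w
```

Because each weight only needs its own perturbation signal and the globally broadcast loss change, the rule is local and fully distributed, which is what makes it attractive for on-chip analog or optical implementation.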
Model-building codes for membrane proteins.
Shirley, David Noyes; Hunt, Thomas W.; Brown, W. Michael; Schoeniger, Joseph S.; Slepoy, Alexander; Sale, Kenneth L.; Young, Malin M.; Faulon, Jean-Loup Michel; Gray, Genetha Anne
2005-01-01
We have developed a novel approach to modeling the transmembrane spanning helical bundles of integral membrane proteins using only a sparse set of distance constraints, such as those derived from MS3-D, dipolar-EPR and FRET experiments. Algorithms have been written for searching the conformational space of membrane protein folds matching the set of distance constraints, which provides initial structures for local conformational searches. Local conformation search is achieved by optimizing these candidates against a custom penalty function that incorporates both measures derived from statistical analysis of solved membrane protein structures and distance constraints obtained from experiments. This results in refined helical bundles to which the interhelical loops and amino acid side-chains are added. Using a set of only 27 distance constraints extracted from the literature, our methods successfully recover the structure of dark-adapted rhodopsin to within 3.2 Å of the crystal structure.
Model-Driven Engineering of Machine Executable Code
NASA Astrophysics Data System (ADS)
Eichberg, Michael; Monperrus, Martin; Kloppenburg, Sven; Mezini, Mira
Implementing static analyses of machine-level executable code is labor intensive and complex. We show how to leverage model-driven engineering to facilitate the design and implementation of programs doing static analyses. Further, we report on important lessons learned on the benefits and drawbacks while using the following technologies: using the Scala programming language as target of code generation, using XML-Schema to express a metamodel, and using XSLT to implement (a) transformations and (b) a lint-like tool. Finally, we report on the use of Prolog for writing model transformations.
Data model description for the DESCARTES and CIDER codes
Miley, T.B.; Ouderkirk, S.J.; Nichols, W.E.; Eslinger, P.W.
1993-01-01
The primary objective of the Hanford Environmental Dose Reconstruction (HEDR) Project is to estimate the radiation dose that individuals could have received as a result of emissions since 1944 from the US Department of Energy's (DOE) Hanford Site near Richland, Washington. One of the major objectives of the HEDR Project is to develop several computer codes to model the airborne releases, transport and environmental accumulation of radionuclides resulting from Hanford operations from 1944 through 1972. In July 1992, the HEDR Project Manager determined that the computer codes being developed (DESCARTES, calculation of environmental accumulation from airborne releases, and CIDER, dose calculations from environmental accumulation) were not sufficient to create accurate models. A team of HEDR staff members developed a plan to assure that the computer codes would meet HEDR Project goals. The plan consists of five tasks: (1) code requirements definition, (2) scoping studies, (3) design specifications, (4) benchmarking, and (5) data modeling. This report defines the data requirements for the DESCARTES and CIDER codes.
Hierarchical model for distributed seismicity
Tejedor, Alejandro; Gomez, Javier B.; Pacheco, Amalio F.
2010-07-15
A cellular automata model for the interaction between seismic faults in an extended region is presented. Faults are represented by boxes formed by a different number of sites and located in the nodes of a fractal tree. Both the distribution of box sizes and the interaction between them is assumed to be hierarchical. Load particles are randomly added to the system, simulating the action of external tectonic forces. These particles fill the sites of the boxes progressively. When a box is full it topples, some of the particles are redistributed to other boxes and some of them are lost. A box relaxation simulates the occurrence of an earthquake in the region. The particle redistributions mostly occur upwards (to larger faults) and downwards (to smaller faults) in the hierarchy producing new relaxations. A simple and efficient bookkeeping of the information allows the running of systems with more than fifty million faults. This model is consistent with the definition of magnitude, i.e., earthquakes of magnitude m take place in boxes with a number of sites ten times bigger than those boxes responsible for earthquakes with a magnitude m-1 which are placed in the immediate lower level of the hierarchy. The three parameters of the model have a geometrical nature: the height or number of levels of the fractal tree, the coordination of the tree and the ratio of areas between boxes in two consecutive levels. Besides reproducing several seismicity properties and regularities, this model is used to test the performance of some precursory patterns.
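A toy version of the load-topple-redistribute dynamics can be sketched as follows; the branching, box sizes, and redistribution fractions are arbitrary illustrative choices, not the paper's calibrated parameters:

```python
import random

def simulate_hierarchy(levels=3, branching=2, base_size=10, steps=5000, seed=2):
    """Toy sketch of the hierarchical fault model: boxes sit on a tree, and
    a box one level up holds ten times the particles of the level below,
    mirroring the magnitude definition in the abstract. A full box topples
    (an 'earthquake'), passing half its load upwards and a quarter downwards
    (the rest is lost). Returns topplings per level, level 0 being the root
    (largest faults)."""
    rng = random.Random(seed)
    boxes = [[0] * (branching ** k) for k in range(levels)]
    cap = [base_size * 10 ** (levels - 1 - k) for k in range(levels)]
    quakes = [0] * levels

    def topple(k, i):
        quakes[k] += 1
        load, boxes[k][i] = boxes[k][i], 0
        if k > 0:                                  # half moves up the hierarchy
            j = i // branching
            boxes[k - 1][j] += load // 2
            if boxes[k - 1][j] >= cap[k - 1]:
                topple(k - 1, j)
        if k < levels - 1:                         # a quarter moves down
            j = branching * i + rng.randrange(branching)
            boxes[k + 1][j] += load // 4
            if boxes[k + 1][j] >= cap[k + 1]:
                topple(k + 1, j)

    for _ in range(steps):                         # tectonic loading at the leaves
        i = rng.randrange(branching ** (levels - 1))
        boxes[levels - 1][i] += 1
        if boxes[levels - 1][i] >= cap[levels - 1]:
            topple(levels - 1, i)
    return quakes
```

The toppling counts fall off sharply with level, echoing the frequency-magnitude falloff expected of distributed seismicity.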
Error control in the GCF: An information-theoretic model for error analysis and coding
NASA Technical Reports Server (NTRS)
Adeyemi, O.
1974-01-01
The structure of data-transmission errors within the Ground Communications Facility is analyzed in order to provide error control (both forward error correction and feedback retransmission) for improved communication. Emphasis is placed on constructing a theoretical model of errors and obtaining from it all the relevant statistics for error control. No specific coding strategy is analyzed, but references to the significance of certain error pattern distributions, as predicted by the model, to error correction are made.
Radiation transport phenomena and modeling - part A: Codes
Lorence, L.J.
1997-06-01
The need to understand how particle radiation (high-energy photons and electrons) from a variety of sources affects materials and electronics has motivated the development of sophisticated computer codes that describe how radiation with energies from 1.0 keV to 100.0 GeV propagates through matter. Predicting radiation transport is the necessary first step in predicting radiation effects. The radiation transport codes that are described here are general-purpose codes capable of analyzing a variety of radiation environments including those produced by nuclear weapons (x-rays, gamma rays, and neutrons), by sources in space (electrons and ions) and by accelerators (x-rays, gamma rays, and electrons). Applications of these codes include the study of radiation effects on electronics, nuclear medicine (imaging and cancer treatment), and industrial processes (food disinfestation, waste sterilization, manufacturing). The primary focus will be on coupled electron-photon transport codes, with some brief discussion of proton transport. These codes model a radiation cascade in which electrons produce photons and vice versa. This coupling between particles of different types is important for radiation effects. For instance, in an x-ray environment, electrons are produced that drive the response in electronics. In an electron environment, dose due to bremsstrahlung photons can be significant once the source electrons have been stopped.
Cost effectiveness of the 1995 model energy code in Massachusetts
Lucas, R.G.
1996-02-01
This report documents an analysis of the cost effectiveness of the Council of American Building Officials' 1995 Model Energy Code (MEC) building thermal-envelope requirements for single-family houses and multifamily housing units in Massachusetts. The goal was to compare the cost effectiveness of the 1995 MEC to the energy conservation requirements of the Massachusetts State Building Code, based on a comparison of the costs and benefits associated with complying with each. This comparison was performed for three cities representing three geographical regions of Massachusetts: Boston, Worcester, and Pittsfield. The analysis was done for two different scenarios: a "move-up" home buyer purchasing a single-family house and a "first-time" financially limited home buyer purchasing a multifamily condominium unit. Natural gas, oil, and electric resistance heating were examined. The Massachusetts state code has much more stringent requirements if electric resistance heating is used rather than other heating fuels and/or equipment types. The MEC requirements do not vary by fuel type. For single-family homes, the 1995 MEC has requirements that are more energy-efficient than the non-electric-resistance requirements of the current state code. For multifamily housing, the 1995 MEC has requirements that are approximately as energy-efficient as the non-electric-resistance requirements of the current state code. The 1995 MEC is generally not more stringent than the electric resistance requirements of the state code; in fact, for multifamily buildings the 1995 MEC is much less stringent.
Software Model Checking of ARINC-653 Flight Code with MCP
NASA Technical Reports Server (NTRS)
Thompson, Sarah J.; Brat, Guillaume; Venet, Arnaud
2010-01-01
The ARINC-653 standard defines a common interface for Integrated Modular Avionics (IMA) code. In particular, ARINC-653 Part 1 specifies a process- and partition-management API that is analogous to POSIX threads, but with certain extensions and restrictions intended to support the implementation of high reliability flight code. MCP is a software model checker, developed at NASA Ames, that provides capabilities for model checking C and C++ source code. In this paper, we present recent work aimed at implementing extensions to MCP that support ARINC-653, and we discuss the challenges and opportunities that consequently arise. Providing support for ARINC-653's time and space partitioning is nontrivial, though there are implicit benefits for partial order reduction possible as a consequence of the API's strict interprocess communication policy.
Performance and Application of Parallel OVERFLOW Codes on Distributed and Shared Memory Platforms
NASA Technical Reports Server (NTRS)
Djomehri, M. Jahed; Rizk, Yehia M.
1999-01-01
The presentation discusses recent studies on the performance of the two parallel versions of the aerodynamics CFD code, OVERFLOW_MPI and _MLP. Developed at NASA Ames, the serial version, OVERFLOW, is a multidimensional Navier-Stokes flow solver based on overset (Chimera) grid technology. The code has recently been parallelized in two ways. One is based on the explicit message-passing interface (MPI) across processors and uses the _MPI communication package. This approach is primarily suited for distributed memory systems and workstation clusters. The second, termed the multi-level parallel (MLP) method, is simple and uses shared memory for all communications. The _MLP code is suitable on distributed-shared memory systems. For both methods, the message passing takes place across the processors or processes at the advancement of each time step. This procedure is, in effect, the Chimera boundary conditions update, which is done in an explicit "Jacobi" style. In contrast, the update in the serial code is done in more of a "Gauss-Seidel" fashion. The programming effort for the _MPI code is more complicated than for the _MLP code; the former requires modification of the outer and some inner shells of the serial code, whereas the latter focuses only on the outer shell of the code. The _MPI version offers a great deal of flexibility in distributing grid zones across a specified number of processors in order to achieve load balancing. The approach is capable of partitioning zones across multiple processors or sending each zone and/or cluster of several zones into a single processor. The message passing across the processors consists of Chimera boundary and/or an overlap of "halo" boundary points for each partitioned zone. The MLP version is a new coarse-grain parallel concept at the zonal and intra-zonal levels. A grouping strategy is used to distribute zones into several groups forming sub-processes which will run in parallel. The total volume of grid points in each
General Description of Fission Observables: GEF Model Code
NASA Astrophysics Data System (ADS)
Schmidt, K.-H.; Jurado, B.; Amouroux, C.; Schmitt, C.
2016-01-01
The GEF ("GEneral description of Fission observables") model code is documented. It describes the observables for spontaneous fission, neutron-induced fission and, more generally, for fission of a compound nucleus from any other entrance channel, with given excitation energy and angular momentum. The GEF model is applicable for a wide range of isotopes from Z = 80 to Z = 112 and beyond, up to excitation energies of about 100 MeV. The results of the GEF model are compared with fission barriers, fission probabilities, fission-fragment mass- and nuclide distributions, isomeric ratios, total kinetic energies, and prompt-neutron and prompt-gamma yields and energy spectra from neutron-induced and spontaneous fission. Derived properties of delayed neutrons and decay heat are also considered. The GEF model is based on a general approach to nuclear fission that explains a great part of the complex appearance of fission observables on the basis of fundamental laws of physics and general properties of microscopic systems and mathematical objects. The topographic theorem is used to estimate the fission-barrier heights from theoretical macroscopic saddle-point and ground-state masses and experimental ground-state masses. Motivated by the theoretically predicted early localisation of nucleonic wave functions in a necked-in shape, the properties of the relevant fragment shells are extracted. These are used to determine the depths and the widths of the fission valleys corresponding to the different fission channels and to describe the fission-fragment distributions and deformations at scission by a statistical approach. A modified composite nuclear-level-density formula is proposed. It respects some features in the superfluid regime that are in accordance with new experimental findings and with theoretical expectations. These are a constant-temperature behaviour that is consistent with a considerably increased heat capacity and an increased pairing condensation energy that is
Differences between the 1992 and 1993 CABO Model Energy Codes
Conover, D.R.; Lucas, R.G.
1995-01-01
This report is one in a series of documents describing research activities in support of the US Department of Energy (DOE) Building Energy Standards Program. The Pacific Northwest Laboratory (PNL) leads the program for DOE. The goal of the program is to develop and encourage the implementation of performance standards to achieve the maximum practicable energy efficiency in the design of new buildings. The program approach to meeting the goal is to initiate and manage individual research and standards and guidelines development efforts that are planned and conducted in cooperation with representatives from throughout the buildings community. Projects under way involve practicing architects and engineers, professional societies and code organizations, industry representatives, and researchers from the private sector and national laboratories. Research results and technical justifications for standards criteria are provided to standards development and model code organizations and to Federal, State, and local jurisdictions as a basis to update their codes and standards. This effort helps to ensure that building standards incorporate the latest research results to achieve maximum energy savings in new buildings, yet remain responsive to the needs of the affected professions, organizations, and jurisdictions. Our efforts also support the implementation, deployment, and use of energy-efficient codes and standards. This report identifies the differences between the 1992 and 1993 editions of the Council of American Building Officials (CABO) Model Energy Code (MEC) and briefly highlights the technical and administrative impacts of these changes.
Testing geochemical modeling codes using New Zealand hydrothermal systems
Bruton, C.J.; Glassley, W.E.; Bourcier, W.L.
1993-12-01
Hydrothermal systems in the Taupo Volcanic Zone, North Island, New Zealand are being used as field-based modeling exercises for the EQ3/6 geochemical modeling code package. Comparisons of the observed state and evolution of selected portions of the hydrothermal systems with predictions of fluid-solid equilibria made using geochemical modeling codes will: (1) ensure that we are providing adequately for all significant processes occurring in natural systems; (2) determine the adequacy of the mathematical descriptions of the processes; (3) check the adequacy and completeness of thermodynamic data as a function of temperature for solids, aqueous species and gases; and (4) determine the sensitivity of model results to the manner in which the problem is conceptualized by the user and then translated into constraints in the code input. Preliminary predictions of mineral assemblages in equilibrium with fluids sampled from wells in the Wairakei geothermal field suggest that affinity-temperature diagrams must be used in conjunction with EQ6 to minimize the effect of uncertainties in thermodynamic and kinetic data on code predictions. The kinetics of silica precipitation in EQ6 will be tested using field data from silica-lined drain channels carrying hot water away from the Wairakei borefield.
A hybrid quantum key distribution protocol based on extended unitary operations and fountain codes
NASA Astrophysics Data System (ADS)
Lai, Hong; Xue, Liyin; Orgun, Mehmet A.; Xiao, Jinghua; Pieprzyk, Josef
2015-02-01
In 1984, Bennett and Brassard designed the first quantum key distribution protocol, whose security is based on quantum indeterminacy. Since then, there has been growing research activity aimed at designing new, more efficient and secure key distribution protocols. This work presents a novel hybrid quantum key distribution protocol. The key distribution is derived from both quantum and classical data, which is why it is called hybrid. The protocol applies extended unitary operations derived from four basic unitary operations and distributed fountain codes. Compared to other protocols published so far, the new one is more secure (it provides authentication of parties and detection of eavesdropping) and more efficient. Moreover, our protocol still works over noisy and lossy channels.
Development of a model and computer code to describe solar grade silicon production processes
NASA Technical Reports Server (NTRS)
Gould, R. K.; Srivastava, R.
1979-01-01
Two computer codes were developed for describing flow reactors in which high purity, solar grade silicon is produced via reduction of gaseous silicon halides. The first is the CHEMPART code, an axisymmetric, marching code which treats two-phase flows with models describing detailed gas-phase chemical kinetics, particle formation, and particle growth. It can be used to describe flow reactors in which reactants mix, react, and form a particulate phase. Detailed radial gas-phase composition, temperature, velocity, and particle size distribution profiles are computed. Also, deposition of heat, momentum, and mass (either particulate or vapor) on reactor walls is described. The second code is a modified version of the GENMIX boundary layer code which is used to compute rates of heat, momentum, and mass transfer to the reactor walls. This code lacks the detailed chemical kinetics and particle handling features of the CHEMPART code but has the virtue of running much more rapidly than CHEMPART, while treating the phenomena occurring in the boundary layer in more detail.
Development of a fan model for the CONTAIN code
Pevey, R.E.
1987-01-08
A fan model has been added to the CONTAIN code with a minimum of disruption of the standard CONTAIN calculation sequence. The user is required to supply a simple pressure vs. flow rate curve for each fan in his model configuration. Inclusion of the fan model required modification to two CONTAIN subroutines, IFLOW and EXEQNX. The two modified routines and the resulting executable module are located on the LANL mass storage system as /560007/iflow, /560007/exeqnx, and /560007/cont01, respectively. The model has been initially validated using a very simple sample problem and is ready for a more complete workout using the SRP reactor models from the RSRD probabilistic risk analysis.
Hierarchical model for distributed seismicity.
Tejedor, Alejandro; Gómez, Javier B; Pacheco, Amalio F
2010-07-01
A cellular automata model for the interaction between seismic faults in an extended region is presented. Faults are represented by boxes formed by a different number of sites and located in the nodes of a fractal tree. Both the distribution of box sizes and the interaction between them is assumed to be hierarchical. Load particles are randomly added to the system, simulating the action of external tectonic forces. These particles fill the sites of the boxes progressively. When a box is full it topples, some of the particles are redistributed to other boxes and some of them are lost. A box relaxation simulates the occurrence of an earthquake in the region. The particle redistributions mostly occur upwards (to larger faults) and downwards (to smaller faults) in the hierarchy producing new relaxations. A simple and efficient bookkeeping of the information allows the running of systems with more than fifty million faults. This model is consistent with the definition of magnitude, i.e., earthquakes of magnitude m take place in boxes with a number of sites ten times bigger than those boxes responsible for earthquakes with a magnitude m-1 which are placed in the immediate lower level of the hierarchy. The three parameters of the model have a geometrical nature: the height or number of levels of the fractal tree, the coordination of the tree and the ratio of areas between boxes in two consecutive levels. Besides reproducing several seismicity properties and regularities, this model is used to test the performance of some precursory patterns. PMID:20866700
Self-shielding models of MICROX-2 code
Hou, J.; Ivanov, K.; Choi, H.
2013-07-01
MICROX-2 is a transport theory code that solves the neutron slowing-down and thermalization equations for a two-region lattice cell. In a previous study, a new fine-group cross section library for MICROX-2 was generated and tested against reference calculations and measurement data. In this study, the existing physics models of MICROX-2 are reviewed and updated to improve the physics calculation performance of the code, including the resonance self-shielding model and the spatial self-shielding factor. The updated self-shielding models have been verified through a series of benchmark calculations against a Monte Carlo code, using homogeneous and pin cell models selected for this study. The results show that the updates to the self-shielding factor calculation model are correct and improve the physics calculation accuracy, even though the magnitude of the error reduction is relatively small. Compared to the existing models, the updates reduced the prediction error of the infinite multiplication factor by approximately 0.1% and 0.2% for the homogeneous and pin cell models, respectively, considered in this study. (authors)
Modeling of Anomalous Transport in Tokamaks with FACETS code
NASA Astrophysics Data System (ADS)
Pankin, A. Y.; Batemann, G.; Kritz, A.; Rafiq, T.; Vadlamani, S.; Hakim, A.; Kruger, S.; Miah, M.; Rognlien, T.
2009-05-01
The FACETS code, a whole-device integrated modeling code that self-consistently computes plasma profiles for the plasma core and edge in tokamaks, has been recently developed as a part of the SciDAC project for core-edge simulations. A choice of transport models is available in FACETS through the FMCFM interface [1]. Transport models included in FMCFM have specific ranges of applicability, which can limit their use to parts of the plasma. In particular, the GLF23 transport model does not include the resistive ballooning effects that can be important in the tokamak pedestal region and GLF23 typically under-predicts the anomalous fluxes near the magnetic axis [2]. The TGLF and GYRO transport models have similar limitations [3]. A combination of transport models that covers the entire discharge domain is studied using FACETS in a realistic tokamak geometry. Effective diffusivities computed with the FMCFM transport models are extended to the region near the separatrix to be used in the UEDGE code within FACETS. [1] S. Vadlamani et al. (2009) First time-dependent transport simulations using GYRO and NCLASS within FACETS (this meeting). [2] T. Rafiq et al. (2009) Simulation of electron thermal transport in H-mode discharges, submitted to Phys. Plasmas. [3] C. Holland et al. (2008) Validation of gyrokinetic transport simulations using DIII-D core turbulence measurements, Proc. of IAEA FEC (Switzerland, 2008).
Non-contact assessment of melanin distribution via multispectral temporal illumination coding
NASA Astrophysics Data System (ADS)
Amelard, Robert; Scharfenberger, Christian; Wong, Alexander; Clausi, David A.
2015-03-01
Melanin is a pigment that is highly absorptive in the UV and visible electromagnetic spectra. It is responsible for perceived skin tone, and protects against harmful UV effects. Abnormal melanin distribution is often an indicator of melanoma. We propose a novel approach for non-contact assessment of melanin distribution via multispectral temporal illumination coding, estimating the two-dimensional melanin distribution from its absorptive characteristics. In the proposed system, a novel multispectral, cross-polarized, temporally-coded illumination sequence is synchronized with a camera to measure reflectance under both multispectral and ambient illumination. This allows us to eliminate the ambient illumination contribution from the acquired reflectance measurements, and also to determine the melanin distribution in an observed region based on the spectral properties of melanin using the Beer-Lambert law. Using this information, melanin distribution maps can be generated for objective, quantitative assessment of the skin type of individuals. We show that the melanin distribution map correctly identifies areas with high melanin densities (e.g., nevi).
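The Beer-Lambert step described above can be sketched as follows. This is a minimal illustration, not the paper's calibration: the two wavelengths, the reflectance values, and the index formula are assumptions chosen only to show how absorbance differences track melanin density.

```python
import numpy as np

def absorbance(reflectance):
    """Beer-Lambert: apparent absorbance from measured reflectance."""
    return -np.log10(reflectance)

def melanin_index(r_red, r_nir):
    """Illustrative melanin index: melanin absorbs more strongly at the
    shorter (red) wavelength than in the near infrared, so the absorbance
    difference grows with melanin density (wavelength choice and the
    factor 100 are assumptions, not the paper's model)."""
    return 100.0 * (absorbance(r_red) - absorbance(r_nir))

# Higher melanin density -> lower red reflectance -> larger index.
low_density = melanin_index(0.60, 0.70)
high_density = melanin_index(0.30, 0.60)
```

Applied pixel-by-pixel to ambient-corrected reflectance maps, an index like this yields a two-dimensional melanin distribution map.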
Modeling of the EAST ICRF antenna with ICANT Code
Qin Chengming; Zhao Yanping; Colas, L.; Heuraux, S.
2007-09-28
A Resonant Double Loop (RDL) antenna for the ion-cyclotron range of frequencies (ICRF) on the Experimental Advanced Superconducting Tokamak (EAST) is under construction. The new antenna is analyzed using the antenna coupling code ICANT, which self-consistently determines the surface currents on all antenna parts. In this work, the new ICRF antenna is modeled with this code to assess the near fields in front of the antenna and to analyze its coupling capabilities. Moreover, the antenna reactive radiated power computed by ICANT shows good agreement with that deduced from Transmission Line (TL) theory.
A compressible Navier-Stokes code for turbulent flow modeling
NASA Technical Reports Server (NTRS)
Coakley, T. J.
1984-01-01
An implicit, finite-volume code for solving two-dimensional, compressible turbulent flows is described. Second-order upwind differencing of the inviscid terms of the equations is used to enhance stability and accuracy. A diagonal form of the implicit algorithm is used to improve efficiency. Several zero- and two-equation turbulence models are incorporated to study their impact on overall flow modeling accuracy. Applications to external and internal flows are discussed.
Thermohydraulic modeling of nuclear thermal rockets: The KLAXON code
Hall, M.L.; Rider, W.J.; Cappiello, M.W.
1992-07-01
The hydrogen flow from the storage tanks, through the reactor core, and out the nozzle of a Nuclear Thermal Rocket is an integral design consideration. To provide an analysis and design tool for this phenomenon, the KLAXON code is being developed. A shock-capturing numerical methodology is used to model the gas flow (the Harten, Lax, and van Leer method, as implemented by Einfeldt). Preliminary results of modeling the flow through the reactor core and nozzle are given in this paper.
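The shock-capturing scheme named above (Harten, Lax, and van Leer, as modified by Einfeldt, i.e. HLLE) can be sketched for the 1D Euler equations as below. This is the generic textbook form with simplified min/max wavespeed bounds and an ideal-gas closure, not the KLAXON implementation, which models hydrogen flow in multidimensional reactor geometry.

```python
import numpy as np

GAMMA = 1.4  # ideal-gas specific-heat ratio (an assumption; hot H2 differs)

def euler_flux(U):
    """Physical flux for the 1D Euler equations, U = [rho, rho*u, E]."""
    rho, mom, E = U
    u = mom / rho
    p = (GAMMA - 1.0) * (E - 0.5 * rho * u**2)
    return np.array([mom, mom * u + p, (E + p) * u])

def hlle_flux(UL, UR):
    """HLL interface flux with Einfeldt-style wavespeed bounds (here taken
    as direct min/max estimates rather than Roe averages)."""
    def speeds(U):
        rho, mom, E = U
        u = mom / rho
        p = (GAMMA - 1.0) * (E - 0.5 * rho * u**2)
        return u, np.sqrt(GAMMA * p / rho)
    uL, aL = speeds(UL)
    uR, aR = speeds(UR)
    SL = min(uL - aL, uR - aR)   # leftmost signal speed bound
    SR = max(uL + aL, uR + aR)   # rightmost signal speed bound
    FL, FR = euler_flux(UL), euler_flux(UR)
    if SL >= 0.0:                # supersonic to the right: pure upwind
        return FL
    if SR <= 0.0:                # supersonic to the left: pure upwind
        return FR
    return (SR * FL - SL * FR + SL * SR * (UR - UL)) / (SR - SL)
```

In a finite-volume update, this interface flux replaces the physical flux at each cell face, which is what gives the scheme its shock-capturing property.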
A semianalytic Monte Carlo code for modelling LIDAR measurements
NASA Astrophysics Data System (ADS)
Palazzi, Elisa; Kostadinov, Ivan; Petritoli, Andrea; Ravegnani, Fabrizio; Bortoli, Daniele; Masieri, Samuele; Premuda, Margherita; Giovanelli, Giorgio
2007-10-01
LIDAR (LIght Detection and Ranging) is an active optical remote sensing technology with many applications in atmospheric physics. Modelling of LIDAR measurements is a useful approach for evaluating the effects of various environmental variables and scenarios, as well as of different measurement geometries and instrumental characteristics. In this regard, a Monte Carlo simulation model can provide a reliable answer to these important requirements. A semianalytic Monte Carlo code for modelling LIDAR measurements has been developed at ISAC-CNR. The backscattered laser signal detected by the LIDAR system is calculated in the code taking into account the contributions of the main atmospheric molecular constituents and aerosol particles through processes of single and multiple scattering. The contributions of molecular absorption and of ground and cloud reflection are evaluated too. The code can perform simulations of both monostatic and bistatic LIDAR systems. To enhance the efficiency of the Monte Carlo simulation, analytical estimates and expected value calculations are performed. Variance-reduction devices (such as forced collision, local forced collision, splitting and Russian roulette) are also provided by the code, enabling the user to drastically reduce the variance of the calculation.
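One of the variance-reduction devices mentioned, Russian roulette, can be sketched as below. This is the generic Monte Carlo technique, not the ISAC-CNR code's implementation; the weight threshold and survival probability are illustrative values.

```python
import numpy as np

def russian_roulette(weight, w_min=1e-3, survival=0.5, rng=None):
    """Russian-roulette step for a photon of statistical weight `weight`.
    Low-weight photons are killed with probability 1 - survival; survivors
    have their weight divided by the survival probability, so the expected
    weight is unchanged and the estimator remains unbiased."""
    if rng is None:
        rng = np.random.default_rng()
    if weight >= w_min:
        return weight                 # weight still significant: keep photon
    if rng.random() < survival:
        return weight / survival      # survives with boosted weight
    return 0.0                        # photon history terminated
```

Terminating negligible-weight histories this way spends computing time on the photons that actually contribute to the backscattered-signal estimate.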
2007-07-09
Version 02 PRECO-2006 is a two-component exciton model code for the calculation of double differential cross sections of light particle nuclear reactions. PRECO calculates the emission of light particles (A = 1 to 4) from nuclear reactions induced by light particles on a wide variety of target nuclei. Their distribution in both energy and angle is calculated. Since it currently only considers the emission of up to two particles in any given reaction, it is most useful for incident energies of 14 to 30 MeV when used as a stand-alone code. However, the preequilibrium calculations are valid up to at least around 100 MeV, and these can be used as input for more complete evaporation calculations, such as are performed in a Hauser-Feshbach model code. Finally, the production cross sections for specific product nuclides can be obtained.
NASA Technical Reports Server (NTRS)
Artley, J. A. (Principal Investigator)
1981-01-01
The Hodges-Artley spring small grains planting date distribution model was coded in FORTRAN. The PLDRVR program, which implements the model, is described and a copy of the code is provided. The purpose, calling procedure, local variables, and input/output devices for each subroutine are explained to supplement the user's guide.
Further results on fault-tolerant distributed classification using error-correcting codes
NASA Astrophysics Data System (ADS)
Wang, Tsang-Yi; Han, Yunghsiang S.; Varshney, Pramod K.
2004-04-01
In this paper, we consider the distributed classification problem in wireless sensor networks. The DCFECC-SD approach employing a binary code matrix has recently been proposed to cope with the errors caused by both sensor faults and the effects of fading channels. The DCFECC-SD approach extends the DCFECC approach by using soft-decision decoding to combat channel fading. However, the performance of a system employing a binary code matrix can degrade if the distance between different hypotheses cannot be kept large. This situation can arise when the number of sensors is small or the number of hypotheses is large. In this paper, we design the DCFECC-SD approach employing a D-ary code matrix, where D>2. Simulation results show that the performance of the DCFECC-SD approach employing the D-ary code matrix is better than that of the DCFECC-SD approach employing the binary code matrix. A performance evaluation of DCFECC-SD using different numbers of bits of local decision information is also provided for the case where the total channel energy output from each sensor node is fixed.
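The code-matrix idea can be illustrated with a hard-decision sketch: each hypothesis is assigned a row of a D-ary matrix, and the fusion center picks the row closest in Hamming distance to the received local decisions. The matrix below is an arbitrary illustration (not a designed DCFECC matrix), and the paper's DCFECC-SD approach uses soft-decision decoding rather than this hard-decision rule.

```python
import numpy as np

# Hypothetical 4-hypothesis, 6-sensor, D-ary (D = 3) code matrix: row j is
# the pattern of local decisions expected under hypothesis j.
CODE_MATRIX = np.array([
    [0, 0, 1, 1, 2, 2],
    [1, 2, 0, 2, 0, 1],
    [2, 1, 2, 0, 1, 0],
    [0, 2, 2, 1, 0, 1],
])

def fuse(received):
    """Minimum-Hamming-distance fusion: choose the hypothesis whose codeword
    row disagrees with the received local decisions in the fewest sensors.
    Sensor faults and channel errors are tolerated as long as the received
    word remains closest to the true row."""
    dists = (CODE_MATRIX != np.asarray(received)).sum(axis=1)
    return int(np.argmin(dists))
```

Keeping the pairwise row distances large is exactly what gives fault tolerance; with few sensors or many hypotheses those distances shrink, which is the degradation the paper addresses with D-ary matrices.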
Examination of nanoparticle dispersion using a novel GPU based radial distribution function code
NASA Astrophysics Data System (ADS)
Rosch, Thomas; Wade, Matthew; Phelan, Frederick
We have developed a novel GPU-based code that rapidly calculates the radial distribution function (RDF) for an entire system, with no cutoff, ensuring accuracy. On top of this code, we have developed tools to calculate the second virial coefficient (B2) and the structure factor from the RDF, two properties that are directly related to the dispersion of nanoparticles in nanocomposite systems. We validate the RDF calculations by comparison with previously published results, and also show how our code, which takes into account bonding in polymeric systems, enables more accurate predictions of g(r) than the state-of-the-art GPU-based RDF codes currently available for these systems. In addition, our code reduces the computational time by approximately an order of magnitude compared to CPU-based calculations. We demonstrate the application of our toolset by examining a coarse-grained nanocomposite system and show how different surface energies between particle and polymer lead to different dispersion states, and affect properties such as viscosity, yield strength, elasticity, and thermal conductivity.
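The quantity being accelerated can be stated precisely with a brute-force reference implementation: histogram all pairwise minimum-image distances and normalize by the ideal-gas expectation. This CPU sketch (not the authors' GPU code) is the all-pairs, no-cutoff calculation the abstract describes.

```python
import numpy as np

def radial_distribution(positions, box, dr=0.1):
    """Brute-force g(r) for particles in a cubic periodic box of side `box`,
    computed over all unique pairs with no cutoff. Returns bin centers and
    g(r); valid out to half the box length under the minimum-image convention."""
    n = len(positions)
    # Minimum-image pairwise separation vectors.
    diff = positions[:, None, :] - positions[None, :, :]
    diff -= box * np.round(diff / box)
    dist = np.sqrt((diff**2).sum(axis=-1))
    dist = dist[np.triu_indices(n, k=1)]              # unique pairs only
    edges = np.arange(0.0, box / 2.0, dr)
    counts, edges = np.histogram(dist, bins=edges)
    centers = 0.5 * (edges[1:] + edges[:-1])
    shell = (4.0 / 3.0) * np.pi * (edges[1:]**3 - edges[:-1]**3)
    ideal = shell * n * (n - 1) / 2.0 / box**3        # ideal-gas pair count
    return centers, counts / ideal
```

For an uncorrelated (ideal-gas-like) configuration g(r) fluctuates around 1; clustering of nanoparticles shows up as excess short-range peaks, which is why g(r), B2, and the structure factor diagnose dispersion state.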
Enhancements to the SSME transfer function modeling code
NASA Technical Reports Server (NTRS)
Irwin, R. Dennis; Mitchell, Jerrel R.; Bartholomew, David L.; Glenn, Russell D.
1995-01-01
This report details the results of a one-year effort by Ohio University to apply the transfer function modeling and analysis tools developed under NASA Grant NAG8-167 (Irwin, 1992), (Bartholomew, 1992) to attempt the generation of Space Shuttle Main Engine High Pressure Turbopump transfer functions from time domain data. In addition, new enhancements that extend the functionality of the transfer function modeling codes are presented, along with some ideas for improved modeling methods and future work. Section 2 contains a review of the analytical background used to generate transfer functions with the SSME transfer function modeling software. Section 2.1 presents the 'ratio method' developed for obtaining models of systems that are subject to single unmeasured excitation sources and have two or more measured output signals. Since most of the models developed during the investigation use the Eigensystem Realization Algorithm (ERA) for model generation, Section 2.2 presents an introduction to ERA, and Section 2.3 describes how it can be used to model spectral quantities. Section 2.4 details the Residue Identification Algorithm (RID), including the use of Constrained Least Squares (CLS) and Total Least Squares (TLS). Most of this information can be found in the report (and is repeated for convenience). Section 3 chronicles the effort of applying the SSME transfer function modeling codes to the a51p394.dat and a51p1294.dat time data files to generate transfer functions from the unmeasured input to the 129.4 degree sensor output. Included are transfer function modeling attempts using five methods. The first method is a direct application of the SSME codes to the data files, and the second method uses the underlying trends in the spectral density estimates to form transfer function models with less clustering of poles and zeros than the models obtained by the direct method. In the third approach, the time data is low-pass filtered prior to the modeling process in an
The modelling of wall condensation with noncondensable gases for the containment codes
Leduc, C.; Coste, P.; Barthel, V.; Deslandes, H.
1995-09-01
This paper presents several approaches to the modelling of wall condensation in the presence of noncondensable gases for containment codes. Lumped-parameter modelling and local modelling by 3-D codes are discussed. Containment analysis codes should be able to predict the spatial distributions of steam, air, and hydrogen as well as the efficiency of cooling by wall condensation in both natural convection and forced convection situations. 3-D calculations with turbulent diffusion modelling are necessary since diffusion controls the local condensation, whereas the wall condensation may redistribute the air and hydrogen mass in the containment. A fine-mesh model of film condensation in forced convection has been developed, taking into account the influence of the suction velocity at the liquid-gas interface. It is associated with the 3-D model of the TRIO code for the gas mixture, where a k-ξ turbulence model is used. The predictions are compared to Huhtiniemi's experimental data. The modelling of condensation in natural convection or mixed convection is more complex. As no universal velocity and temperature profiles exist for such boundary layers, a very fine nodalization is necessary. Simpler models integrate the equations over the boundary layer thickness, using the heat and mass transfer analogy. The model predictions are compared with an MIT experiment. For the containment compartments a two-node model is proposed using the lumped-parameter approach. Heat and mass transfer coefficients are tested on separate effect tests and containment experiments. The CATHARE code has been adapted to perform such calculations and shows reasonable agreement with data.
Using cryptology models for protecting PHP source code
NASA Astrophysics Data System (ADS)
Jevremović, Aleksandar; Ristić, Nenad; Veinović, Mladen
2013-10-01
Protecting PHP scripts from unwanted use, copying and modification is a big issue today. Existing solutions at the source code level mostly work as obfuscators; they are free, but they do not provide any serious protection. Solutions that encode the opcode are more secure, but they are commercial and require a closed-source proprietary extension of the PHP interpreter. Additionally, encoded opcode is not compatible with future versions of interpreters, which implies re-buying encoders from the authors. Finally, if the extension source code is compromised, all scripts encoded with that solution are compromised too. In this paper, we present a new model for a free and open-source PHP script protection solution. The protection level provided by the proposed solution is equal to that of commercial solutions. The model is based on conclusions drawn from applying standard cryptology models to analyse the strengths and weaknesses of the existing solutions, when script protection is viewed as a secure communication channel in the cryptologic sense.
Kawamura, E.; Verboncoeur, J.P.; Birdsall, C.K.
1996-12-31
The goal is to obtain the ion angular and energy distributions at the wafer of inductive and capacitive discharges. Doing this on a standard uniform mesh with particle codes alone would be impractical because of the long time scale nature of the problem (i.e., 10^6 time steps). A solution is to use a fluid code to simulate the bulk source region, while using a particle-in-cell code to simulate the sheath region. Induct95 is a 2d fluid code which can simulate inductive and capacitive discharges. Though it does not resolve the sheath region near the wafer, it provides diagnostics for the collisional bulk plasma (i.e., potentials, temperatures, fluxes, etc.). Also, fluid codes converge to equilibrium much faster than particle codes in collisional regimes. PDP1 is a 1d3v particle-in-cell code which can simulate rf discharges. It can resolve the sheath region and obtain the ion angular and energy distributions at the wafer target. The overall running time is expected to be that of the fluid code.
Ralchenko, Yu.; Abdallah, J. Jr.; Colgan, J.; Fontes, C. J.; Foster, M.; Zhang, H. L.; Bar-Shalom, A.; Oreg, J.; Bauche, J.; Bauche-Arnoult, C.; Bowen, C.; Faussurier, G.; Chung, H.-K.; Hansen, S. B.; Lee, R. W.; Scott, H.; Gaufridy de Dortan, F. de; Poirier, M.; Golovkin, I.; Novikov, V.
2009-09-10
We present calculations of ionization balance and radiative power losses for tungsten in magnetic fusion plasmas. The simulations were performed within the framework of the Non-Local Thermodynamic Equilibrium (NLTE) Code Comparison Workshops utilizing several independent collisional-radiative models. The calculations generally agree with each other; however, a clear disagreement with experimental ionization distributions is found at low temperatures, below 2 keV.
Development of a model and computer code to describe solar grade silicon production processes
NASA Technical Reports Server (NTRS)
Srivastava, R.; Gould, R. K.
1979-01-01
Mathematical models, and computer codes based on these models, were developed which allow prediction of the product distribution in chemical reactors in which gaseous silicon compounds are converted to condensed-phase silicon. The reactors to be modeled are flow reactors in which silane or one of the halogenated silanes is thermally decomposed or reacted with an alkali metal, H2 or H atoms. Because the product of interest is particulate silicon, processes which must be modeled, in addition to mixing and reaction of gas-phase reactants, include the nucleation and growth of condensed Si via coagulation, condensation, and heterogeneous reaction.
New Mechanical Model for the Transmutation Fuel Performance Code
Gregory K. Miller
2008-04-01
A new mechanical model has been developed for implementation into the TRU fuel performance code. The new model differs from the existing FRAPCON-3 model, which it is intended to replace, in that it includes structural deformations (elasticity, plasticity, and creep) of the fuel. Also, the plasticity algorithm is based on the "plastic strain-total strain" approach, which should allow for more rapid and assured convergence. The model treats three situations relative to interaction between the fuel and cladding: (1) an open gap between the fuel and cladding, such that there is no contact, (2) contact between the fuel and cladding where the contact pressure is below a threshold value, such that axial slippage occurs at the interface, and (3) contact between the fuel and cladding where the contact pressure is above a threshold value, such that axial slippage is prevented at the interface. The first stage of development of the model included only the fuel. In this stage, results obtained from the model were compared with those obtained from finite element analysis using ABAQUS on a problem involving elastic, plastic, and thermal strains. Results from the two analyses showed essentially exact agreement through both loading and unloading of the fuel. After the cladding and fuel/clad contact were added, the model demonstrated the expected behavior through all potential phases of fuel/clad interaction, and convergence was achieved without difficulty in all plastic analyses performed. The code is currently in stand-alone form. Prior to implementation into the TRU fuel performance code, creep strains will have to be added to the model. The model will also have to be verified against an ABAQUS analysis that involves contact between the fuel and cladding.
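The three fuel/cladding interaction situations enumerated above amount to a simple state classification that a mechanical solver would branch on each iteration. The sketch below is only an illustration of that logic; the function name, threshold, and sign conventions are assumptions, not the TRU fuel performance code's.

```python
def contact_state(gap, contact_pressure, slip_threshold):
    """Classify fuel/cladding interaction into the three situations the
    model treats. `gap` > 0 means the fuel and cladding are separated;
    `slip_threshold` is the contact pressure above which friction locks
    the interface axially (a placeholder value in this sketch)."""
    if gap > 0.0:
        return "open_gap"            # (1) no contact, no interaction forces
    if contact_pressure < slip_threshold:
        return "sliding_contact"     # (2) contact with axial slippage
    return "locked_contact"          # (3) contact, axial slippage prevented
```

In practice the solver would impose different interface boundary conditions in each state (free surfaces, frictional sliding, or fully bonded displacements) and iterate until the assumed state is consistent with the computed contact pressure.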
Kim, Steve M.; Ganguli, Surya; Frank, Loren M.
2012-01-01
Hippocampal place cells convey spatial information through a combination of spatially-selective firing and theta phase precession. The way in which this information influences regions like the subiculum that receive input from the hippocampus remains unclear. The subiculum receives direct inputs from area CA1 of the hippocampus and sends divergent output projections to many other parts of the brain, so we examined the firing patterns of rat subicular neurons. We found a substantial transformation in the subicular code for space from sparse to dense firing rate representations along a proximal-distal anatomical gradient: neurons in the proximal subiculum are more similar to canonical, sparsely firing hippocampal place cells, whereas neurons in the distal subiculum have higher firing rates and more distributed spatial firing patterns. Using information theory, we found that the more distributed spatial representation in the subiculum carries, on average, more information about spatial location and context than the sparse spatial representation in CA1. Remarkably, despite the disparate firing rate properties of subicular neurons, we found that neurons at all proximal-distal locations exhibit robust theta phase precession, with similar spiking oscillation frequencies as neurons in area CA1. Our findings suggest that the subiculum is specialized to compress sparse hippocampal spatial codes into highly informative distributed codes suitable for efficient communication to other brain regions. Moreover, despite this substantial compression, the subiculum maintains finer scale temporal properties that may allow it to participate in oscillatory phase coding and spike timing-dependent plasticity in coordination with other regions of the hippocampal circuit. PMID:22915100
The WARP Code: Modeling High Intensity Ion Beams
Grote, D P; Friedman, A; Vay, J L; Haber, I
2004-12-09
The Warp code, developed for heavy-ion driven inertial fusion energy studies, is used to model high intensity ion (and electron) beams. Significant capability has been incorporated in Warp, allowing nearly all sections of an accelerator to be modeled, beginning with the source. Warp has as its core an explicit, three-dimensional, particle-in-cell model. Alongside this is a rich set of tools for describing the applied fields of the accelerator lattice, and embedded conducting surfaces (which are captured at sub-grid resolution). Also incorporated are models with reduced dimensionality: an axisymmetric model and a transverse ''slice'' model. The code takes advantage of modern programming techniques, including object orientation, parallelism, and scripting (via Python). It is at the forefront in the use of the computational technique of adaptive mesh refinement, which has been particularly successful in the area of diode and injector modeling, both steady-state and time-dependent. In the presentation, some of the major aspects of Warp will be overviewed, especially those that could be useful in modeling ECR sources. Warp has been benchmarked against both theory and experiment. Recent results will be presented showing good agreement of Warp with experimental results from the STS500 injector test stand. Additional information can be found on the web page http://hif.lbl.gov/theory/WARP_summary.html.
Development Of Sputtering Models For Fluids-Based Plasma Simulation Codes
NASA Astrophysics Data System (ADS)
Veitzer, Seth; Beckwith, Kristian; Stoltz, Peter
2015-09-01
Rf-driven plasma devices such as ion sources and plasma processing devices for many industrial and research applications benefit from detailed numerical modeling. Simulation of these devices using explicit PIC codes is difficult due to inherent separations of time and spatial scales. One alternative is fluid-based codes coupled with electromagnetics, which are applicable to modeling higher-density plasmas in the time domain while relaxing time step requirements. To accurately model plasma-surface processes, such as physical sputtering and secondary electron emission, kinetic particle models have been developed, where particles are emitted from a material surface due to plasma ion bombardment. In fluid models, plasma properties are defined on a cell-by-cell basis, and distributions for individual particle properties are assumed. This adds a complexity to surface process modeling, which we describe here. We describe the implementation of sputtering models into the hydrodynamic plasma simulation code USim, as well as methods to improve the accuracy of fluids-based simulation of plasma-surface interactions by better modeling of heat fluxes. This work was performed under the auspices of the Department of Energy, Office of Basic Energy Sciences Award #DE-SC0009585.
A cumulative entropy method for distribution recognition of model error
NASA Astrophysics Data System (ADS)
Liang, Yingjie; Chen, Wen
2015-02-01
This paper develops a cumulative entropy method (CEM) to recognize the most suitable distribution for model error. In the CEM, the Lévy stable distribution is employed to capture the statistical properties of model error. The strategies are tested on 250 experiments on axially loaded CFT steel stub columns in conjunction with the four building codes of Japan (AIJ, 1997), China (DL/T, 1999), Eurocode 4 (EU4, 2004), and the United States (AISC, 2005). The cumulative entropy method is validated as more computationally efficient than the Shannon entropy method. Compared with the Kolmogorov-Smirnov test and the root mean square deviation, the CEM provides an alternative and powerful model selection criterion for recognizing the most suitable distribution for the model error.
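The cumulative entropy of a random variable X with distribution function F is commonly defined as CE(X) = -∫ F(x) ln F(x) dx, which can be estimated directly from a sample via the empirical CDF. The sketch below is a generic estimator of that quantity under this standard definition; it is not necessarily the exact estimator or selection statistic used in the paper.

```python
import numpy as np

def cumulative_entropy(sample):
    """Empirical cumulative entropy  CE = -∫ F(x) ln F(x) dx.
    The empirical CDF is a step function equal to i/n on the interval
    (x_(i), x_(i+1)) of the sorted sample, so the integral reduces to a
    finite sum over the gaps between order statistics."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    F = np.arange(1, n) / n          # empirical CDF value on each gap
    gaps = np.diff(x)                # widths of the integration intervals
    return -np.sum(gaps * F * np.log(F))
```

For distribution recognition, one would compute this statistic (or a distance built on it) for each candidate error distribution and pick the best-matching one; CE scales linearly with the data, CE(aX) = a CE(X), which the test below also checks.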
Reliability of Calderbank Shor Steane codes and security of quantum key distribution
NASA Astrophysics Data System (ADS)
Hamada, Mitsuru
2004-08-01
After Mayers (1996 Advances in Cryptography: Proc. Crypto'96 pp 343-57; 2001 J. Assoc. Comput. Mach. 48 351-406) gave a proof of the security of the Bennett-Brassard (1984 Proc. IEEE Int. Conf. on Computers, Systems and Signal Processing (Bangalore, India) pp 175-9) (BB84) quantum key distribution protocol, Shor and Preskill (2000 Phys. Rev. Lett. 85 441-4) made the remarkable observation that a Calderbank-Shor-Steane (CSS) code had been implicitly used in the BB84 protocol, and suggested its security could be proved by bounding the fidelity, say F_n, of the incorporated CSS code of length n in the form 1 - F_n ≤ exp[-nE + o(n)] for some positive number E. This work presents such a number E = E(R) as a function of the rate of codes R, and a threshold R_0 such that E(R) > 0 whenever R < R_0, which is larger than the achievable rate based on the Gilbert-Varshamov bound that is essentially given by Shor and Preskill. The codes in the present work are robust against fluctuations of channel parameters, a fact that is needed to establish the security rigorously and was not proved for rates above the Gilbert-Varshamov rate before in the literature. As a byproduct, the security of a modified BB84 protocol against any joint (coherent) attacks is proved quantitatively.
Current Capabilities of the Fuel Performance Modeling Code PARFUME
G. K. Miller; D. A. Petti; J. T. Maki; D. L. Knudson
2004-09-01
The success of gas reactors depends upon the safety and quality of the coated particle fuel. A fuel performance modeling code (called PARFUME), which simulates the mechanical and physico-chemical behavior of fuel particles during irradiation, is under development at the Idaho National Engineering and Environmental Laboratory. Among current capabilities in the code are: 1) various options for calculating CO production and fission product gas release, 2) a thermal model that calculates a time-dependent temperature profile through a pebble bed sphere or a prismatic block core, as well as through the layers of each analyzed particle, 3) simulation of multi-dimensional particle behavior associated with cracking in the IPyC layer, partial debonding of the IPyC from the SiC, particle asphericity, kernel migration, and thinning of the SiC caused by interaction of fission products with the SiC, 4) two independent methods for determining particle failure probabilities, 5) a model for calculating release-to-birth (R/B) ratios of gaseous fission products, that accounts for particle failures and uranium contamination in the fuel matrix, and 6) the evaluation of an accident condition, where a particle experiences a sudden change in temperature following a period of normal irradiation. This paper presents an overview of the code.
Film grain noise modeling in advanced video coding
NASA Astrophysics Data System (ADS)
Oh, Byung Tae; Kuo, C.-C. Jay; Sun, Shijun; Lei, Shawmin
2007-01-01
A new technique for film grain noise extraction, modeling and synthesis is proposed and applied to the coding of high definition video in this work. The film grain noise is viewed as a part of artistic presentation by people in the movie industry. On one hand, since the film grain noise can boost the natural appearance of pictures in high definition video, it should be preserved in high-fidelity video processing systems. On the other hand, video coding with film grain noise is expensive. It is desirable to extract film grain noise from the input video as a pre-processing step at the encoder and re-synthesize the film grain noise and add it back to the decoded video as a post-processing step at the decoder. Under this framework, the coding gain of the denoised video is higher while the quality of the final reconstructed video can still be well preserved. Following this idea, we present a method to remove film grain noise from image/video without distorting its original content. Besides, we describe a parametric model containing a small set of parameters to represent the extracted film grain noise. The proposed model generates the film grain noise that is close to the real one in terms of power spectral density and cross-channel spectral correlation. Experimental results are shown to demonstrate the efficiency of the proposed scheme.
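A minimal sketch of the extract/model/synthesize pipeline described above, assuming a box filter as the denoiser and a lag-1 autoregressive model of the grain; both are stand-ins for the paper's actual denoiser and parametric PSD model:

```python
import numpy as np

def box_blur(img, k=3):
    """Separable box filter used here as a stand-in denoiser."""
    pad = k // 2
    out = np.pad(img, pad, mode="edge").astype(float)
    out = np.mean([np.roll(out, s, axis=0) for s in range(-pad, pad + 1)], axis=0)
    out = np.mean([np.roll(out, s, axis=1) for s in range(-pad, pad + 1)], axis=0)
    return out[pad:-pad, pad:-pad]

rng = np.random.default_rng(1)
clean = np.tile(np.linspace(0, 1, 64), (64, 1))   # synthetic "content"
frame = clean + rng.normal(0, 0.05, clean.shape)  # content plus film grain

# encoder pre-processing: extract the grain estimate
denoised = box_blur(frame)
extracted = frame - denoised

# parametric model: variance plus lag-1 horizontal correlation (AR(1) sketch)
var = extracted.var()
rho = np.corrcoef(extracted[:, :-1].ravel(), extracted[:, 1:].ravel())[0, 1]

# decoder post-processing: re-synthesize grain with matched parameters
w = rng.normal(0, 1, clean.shape)
synth = np.zeros_like(w)
synth[:, 0] = w[:, 0]
for j in range(1, w.shape[1]):
    synth[:, j] = rho * synth[:, j - 1] + np.sqrt(1 - rho**2) * w[:, j]
synth *= np.sqrt(var)

reconstructed = denoised + synth
```

Only the two model parameters (plus the denoised video) need to reach the decoder, which is the source of the coding gain claimed in the abstract.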
Time-dependent recycling modeling with edge plasma transport codes
NASA Astrophysics Data System (ADS)
Pigarov, A.; Krasheninnikov, S.; Rognlien, T.; Taverniers, S.; Hollmann, E.
2013-10-01
First, we discuss extensions to the Macroblob approach that allow more accurate simulation of the dynamics of ELMs, the pedestal, and edge transport with the UEDGE code. Second, we present UEDGE modeling results for an H-mode discharge on DIII-D with infrequent ELMs and large pedestal losses. In the modeled sequence of ELMs, this discharge attains a dynamic equilibrium. The temporal evolution of pedestal plasma profiles, spectral line emission, and surface temperature matching experimental data over the ELM cycle is discussed. Analysis of the dynamic gas balance highlights the important role of material surfaces: we quantified the wall outgassing between ELMs as 3X the NBI fueling and the recycling coefficient as 0.8 for wall pumping via macroblob-wall interactions. Third, we present results from a multiphysics version of UEDGE with built-in, reduced, 1-D wall models and analyze the role of various PMI processes. Progress on the framework-coupled UEDGE/WALLPSI code is discussed. Finally, implicit coupling schemes are an important feature of multiphysics codes; we report results of a parametric analysis of convergence and performance for Picard and Newton iterations in a system of coupled deterministic-stochastic ODEs and propose modifications that enhance convergence.
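The closing point, comparing Picard and Newton iterations for an implicitly coupled system, can be sketched with a single backward-Euler step of a stiff two-variable linear ODE; the system, step size, and tolerance below are illustrative choices, not those of the UEDGE study.

```python
import numpy as np

# stiff coupled system dy/dt = f(y); one implicit (backward-Euler) step
def f(y):
    return np.array([-50.0 * y[0] + 10.0 * y[1], y[0] - 2.0 * y[1]])

def jac(y):
    return np.array([[-50.0, 10.0], [1.0, -2.0]])  # constant here (linear f)

y0, dt, tol = np.array([1.0, 0.0]), 0.01, 1e-10

def picard_step(y0):
    """Fixed-point iteration y <- y0 + dt*f(y); contracts slowly when dt*L ~ 1."""
    y, iters = y0.copy(), 0
    while True:
        y_new = y0 + dt * f(y)
        iters += 1
        if np.linalg.norm(y_new - y) < tol or iters > 500:
            return y_new, iters
        y = y_new

def newton_step(y0):
    """Newton iteration on the implicit-step residual r(y) = y - y0 - dt*f(y)."""
    y, iters = y0.copy(), 0
    while True:
        r = y - y0 - dt * f(y)
        if np.linalg.norm(r) < tol or iters > 50:
            return y, iters
        y = y - np.linalg.solve(np.eye(2) - dt * jac(y), r)
        iters += 1

y_p, n_p = picard_step(y0)
y_n, n_n = newton_step(y0)
```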
Assessment of uncertainties of the models used in thermal-hydraulic computer codes
NASA Astrophysics Data System (ADS)
Gricay, A. S.; Migrov, Yu. A.
2015-09-01
The article deals with matters concerned with the problem of determining the statistical characteristics of variable parameters (the variation range and distribution law) in analyzing the uncertainty and sensitivity of calculation results to uncertainty in input data. A comparative analysis of modern approaches to uncertainty in input data is presented. The need to develop an alternative method for estimating the uncertainty of model parameters used in thermal-hydraulic computer codes, in particular, in the closing correlations of the loop thermal hydraulics block, is shown. Such a method shall feature the minimal degree of subjectivism and must be based on objective quantitative assessment criteria. The method includes three sequential stages: selecting experimental data satisfying the specified criteria, identifying the key closing correlation using a sensitivity analysis, and carrying out case calculations followed by statistical processing of the results. By using the method, one can estimate the uncertainty range of a variable parameter and establish its distribution law in the above-mentioned range provided that the experimental information is sufficiently representative. Practical application of the method is demonstrated taking as an example the problem of estimating the uncertainty of a parameter appearing in the model describing transition to post-burnout heat transfer that is used in the thermal-hydraulic computer code KORSAR. The performed study revealed the need to narrow the previously established uncertainty range of this parameter and to replace the uniform distribution law in the above-mentioned range by the Gaussian distribution law. The proposed method can be applied to different thermal-hydraulic computer codes. In some cases, application of the method can make it possible to achieve a smaller degree of conservatism in the expert estimates of uncertainties pertinent to the model parameters used in computer codes.
Toward a Probabilistic Automata Model of Some Aspects of Code-Switching.
ERIC Educational Resources Information Center
Dearholt, D. W.; Valdes-Fallis, G.
1978-01-01
The purpose of the model is to select either Spanish or English as the language to be used; its goals at this stage of development include modeling code-switching for lexical need, apparently random code-switching, dependency of code-switching upon sociolinguistic context, and code-switching within syntactic constraints. (EJS)
Partially Key Distribution with Public Key Cryptosystem Based on Error Control Codes
NASA Astrophysics Data System (ADS)
Tavallaei, Saeed Ebadi; Falahati, Abolfazl
Due to the low level of security in public key cryptosystems based on number theory, and fundamental difficulties such as "key escrow" in Public Key Infrastructure (PKI) and the need for a secure channel in ID-based cryptography, a new key distribution cryptosystem based on Error Control Codes (ECC) is proposed. The idea is realized through modifications of the McEliece cryptosystem. The security of the ECC cryptosystem derives from the NP-completeness of general block-code decoding. Using ECC also provides the capability of generating public keys with variable lengths, which is suitable for different applications. Given the decreasing security of cryptosystems based on number theory and the increasing lengths of their keys, the use of code-based cryptosystems seems unavoidable in the future.
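A toy sketch of the McEliece construction that the proposal modifies, with the Hamming(7,4) code standing in for a large block code; these parameters are far too small for any real security and are purely illustrative:

```python
import numpy as np

def gf2_inv(M):
    """Invert a square 0/1 matrix over GF(2) by Gauss-Jordan elimination."""
    n = len(M)
    A = np.concatenate([M % 2, np.eye(n, dtype=int)], axis=1)
    for col in range(n):
        piv = next((r for r in range(col, n) if A[r, col]), None)
        if piv is None:
            raise ValueError("matrix is singular over GF(2)")
        A[[col, piv]] = A[[piv, col]]
        for r in range(n):
            if r != col and A[r, col]:
                A[r] ^= A[col]
    return A[:, n:]

# Hamming(7,4) in systematic form; corrects any single-bit error
G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
H = np.array([[1,1,0,1,1,0,0],
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])

rng = np.random.default_rng(2)

# key generation: scramble G with a random invertible S and a permutation P
while True:
    S = rng.integers(0, 2, (4, 4))
    try:
        S_inv = gf2_inv(S)
        break
    except ValueError:
        pass
P = np.eye(7, dtype=int)[rng.permutation(7)]
G_pub = S @ G @ P % 2                   # the public key

# encryption: codeword of the scrambled code plus one deliberate bit error
m = np.array([1, 0, 1, 1])
e = np.zeros(7, dtype=int)
e[rng.integers(7)] = 1
c = (m @ G_pub + e) % 2

# decryption: undo P (its inverse is P.T), correct the error via the
# syndrome, read off the systematic part, then undo S
c_p = c @ P.T % 2
syn = H @ c_p % 2
if syn.any():
    pos = next(i for i in range(7) if np.array_equal(H[:, i], syn))
    c_p[pos] ^= 1
m_rec = c_p[:4] @ S_inv % 2
```

The attacker sees only G_pub and c; recovering m without the factorization (S, G, P) amounts to decoding an apparently random linear code, which is the hard problem the abstract invokes.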
Quinlan, D; Barany, G; Panas, T
2007-08-30
Many forms of security analysis on large scale applications can be substantially automated but the size and complexity can exceed the time and memory available on conventional desktop computers. Most commercial tools are understandably focused on such conventional desktop resources. This paper presents research work on the parallelization of security analysis of both source code and binaries within our Compass tool, which is implemented using the ROSE source-to-source open compiler infrastructure. We have focused on both shared and distributed memory parallelization of the evaluation of rules implemented as checkers for a wide range of secure programming rules, applicable to desktop machines, networks of workstations and dedicated clusters. While Compass as a tool focuses on source code analysis and reports violations of an extensible set of rules, the binary analysis work uses the exact same infrastructure but is less well developed into an equivalent final tool.
Direct containment heating models in the CONTAIN code
Washington, K.E.; Williams, D.C.
1995-08-01
The potential exists in a nuclear reactor core melt severe accident for molten core debris to be dispersed under high pressure into the containment building. If this occurs, the set of phenomena that result in the transfer of energy to the containment atmosphere and its surroundings is referred to as direct containment heating (DCH). Because of the potential for DCH to lead to early containment failure, the U.S. Nuclear Regulatory Commission (USNRC) has sponsored an extensive research program consisting of experimental, analytical, and risk integration components. An important element of the analytical research has been the development and assessment of direct containment heating models in the CONTAIN code. This report documents the DCH models in the CONTAIN code. DCH models in CONTAIN for representing debris transport, trapping, chemical reactions, and heat transfer from debris to the containment atmosphere and surroundings are described. The descriptions include the governing equations and input instructions in CONTAIN unique to performing DCH calculations. Modifications made to the combustion models in CONTAIN for representing the combustion of DCH-produced and pre-existing hydrogen under DCH conditions are also described. Input table options for representing the discharge of debris from the RPV and the entrainment phase of the DCH process are also described. A sample calculation is presented to demonstrate the functionality of the models. The results show that reasonable behavior is obtained when the models are used to predict the sixth Zion geometry integral effects test at 1/10th scale.
Finite element code development for modeling detonation of HMX composites
NASA Astrophysics Data System (ADS)
Duran, Adam; Sundararaghavan, Veera
2015-06-01
In this talk, we present a hydrodynamics code for modeling shock and detonation waves in HMX. A stable efficient solution strategy based on a Taylor-Galerkin finite element (FE) discretization was developed to solve the reactive Euler equations. In our code, well calibrated equations of state for the solid unreacted material and gaseous reaction products have been implemented, along with a chemical reaction scheme and a mixing rule to define the properties of partially reacted states. A linear Gruneisen equation of state was employed for the unreacted HMX calibrated from experiments. The JWL form was used to model the EOS of gaseous reaction products. It is assumed that the unreacted explosive and reaction products are in both pressure and temperature equilibrium. The overall specific volume and internal energy was computed using the rule of mixtures. Arrhenius kinetics scheme was integrated to model the chemical reactions. A locally controlled dissipation was introduced that induces a non-oscillatory stabilized scheme for the shock front. The FE model was validated using analytical solutions for sod shock and ZND strong detonation models and then used to perform 2D and 3D shock simulations. We will present benchmark problems for geometries in which a single HMX crystal is subjected to a shock condition. Our current progress towards developing microstructural models of HMX/binder composite will also be discussed.
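The two constitutive pieces named in the abstract, a JWL product equation of state and first-order Arrhenius kinetics, can be sketched as follows; the numerical constants are representative HMX-like values, not the paper's calibration:

```python
import numpy as np

def jwl_pressure(V, E, A=778.3e9, B=7.07e9, R1=4.2, R2=1.0, omega=0.30):
    """JWL equation of state for detonation products.
    V is the relative volume v/v0; E is internal energy per unit initial
    volume (Pa). Constants are representative, not a calibration."""
    return (A * (1 - omega / (R1 * V)) * np.exp(-R1 * V)
            + B * (1 - omega / (R2 * V)) * np.exp(-R2 * V)
            + omega * E / V)

def arrhenius_rate(T, lam, Z=5e19, Ea=220e3, Rgas=8.314):
    """First-order Arrhenius burn rate dlambda/dt (illustrative constants)."""
    return Z * (1.0 - lam) * np.exp(-Ea / (Rgas * T))

# integrate the burn fraction at a fixed post-shock temperature (forward Euler)
T, lam, dt = 2000.0, 0.0, 1e-9
for _ in range(2000):
    lam = min(1.0, lam + dt * arrhenius_rate(T, lam))

# product pressure at a compressed state, energy released proportional to lam
p = jwl_pressure(V=0.5, E=8.5e9 * lam)
```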
Plasma injection and atomic physics models for use in particle simulation codes
Procassini, R.J. (California Univ., Berkeley, CA. Electronics Research Lab.)
1991-06-12
Models of plasma injection (creation) and charged/neutral atomic physics which are suitable for incorporation into particle simulation codes are described. Both planar and distributed source injection models are considered. Results obtained from planar injection into a collisionless plasma-sheath region are presented. The atomic physics package simulates the charge exchange and impact ionization interactions which occur between charged particles and neutral atoms in a partially-ionized plasma. These models are applicable to a wide range of problems, from plasma processing of materials to transport in the edge region of a tokamak plasma. 18 refs., 6 figs.
LineCast: line-based distributed coding and transmission for broadcasting satellite images.
Wu, Feng; Peng, Xiulian; Xu, Jizheng
2014-03-01
In this paper, we propose a novel coding and transmission scheme, called LineCast, for broadcasting satellite images to a large number of receivers. The proposed LineCast matches perfectly with the line scanning cameras that are widely adopted in orbit satellites to capture high-resolution images. On the sender side, each captured line is immediately compressed by a transform-domain scalar modulo quantization. Without syndrome coding, the transmission power is directly allocated to quantized coefficients by scaling the coefficients according to their distributions. Finally, the scaled coefficients are transmitted over a dense constellation. This line-based distributed scheme features low delay, low memory cost, and low complexity. On the receiver side, our proposed line-based prediction is used to generate side information from previously decoded lines, which fully utilizes the correlation among lines. The quantized coefficients are decoded by the linear least square estimator from the received data. The image line is then reconstructed by the scalar modulo dequantization using the generated side information. Since there is neither syndrome coding nor channel coding, the proposed LineCast can make a large number of receivers reach the qualities matching their channel conditions. Our theoretical analysis shows that the proposed LineCast can achieve Shannon's optimum performance by using a high-dimensional modulo-lattice quantization. Experiments on satellite images demonstrate that it achieves up to 1.9-dB gain over the state-of-the-art 2D broadcasting scheme and a gain of more than 5 dB over JPEG 2000 with forward error correction. PMID:24474371
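A sketch of two ingredients of such a scheme, power allocation by coefficient statistics and per-band linear least-square (LLSE) decoding, under assumed per-band variances and an AWGN channel; the modulo-lattice quantization step is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(3)
n_bands, n_lines, P, sigma_n = 32, 200, 1.0, 0.1

# assumed per-band coefficient variances (decaying spectrum)
lam = 1.0 / np.arange(1, n_bands + 1) ** 2
x = rng.normal(0, np.sqrt(lam), (n_lines, n_bands))

# power allocation: g_i proportional to lam_i^(-1/4), normalized to power P
g = lam ** -0.25
g *= np.sqrt(P * n_bands / np.sum(g**2 * lam))

# dense-constellation (analog) transmission over AWGN
y = g * x + rng.normal(0, sigma_n, x.shape)

# per-band LLSE decoder versus naive inversion
x_hat = (g * lam) / (g**2 * lam + sigma_n**2) * y
mse_llse = np.mean((x_hat - x) ** 2)
mse_raw = np.mean((y / g - x) ** 2)
```

Because there is no channel code, the reconstruction quality degrades gracefully with sigma_n, which is the "each receiver matches its channel" property claimed for LineCast.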
Temporal perceptual coding using a visual acuity model
NASA Astrophysics Data System (ADS)
Adzic, Velibor; Cohen, Robert A.; Vetro, Anthony
2014-02-01
This paper describes research and results in which a visual acuity (VA) model of the human visual system (HVS) is used to reduce the bitrate of coded video sequences, by eliminating the need to signal transform coefficients when their corresponding frequencies will not be detected by the HVS. The VA model is integrated into the state of the art HEVC HM codec. Compared to the unmodified codec, up to 45% bitrate savings are achieved while maintaining the same subjective quality of the video sequences. Encoding times are reduced as well.
Systematic effects in CALOR simulation code to model experimental configurations
Job, P.K.; Proudfoot, J.; Handler, T. (Dept. of Physics and Astronomy); Gabriel, T.A.
1991-03-27
The CALOR89 code system is being used to simulate test beam results and the design parameters of several calorimeter configurations. It has been benchmarked against the ZEUS, D0, and HELIOS data. This study identifies the systematic effects in CALOR simulation of the experimental configurations. Five major systematic effects are identified: the choice of high-energy nuclear collision model, material composition, scintillator saturation, shower integration time, and shower containment. Quantitative estimates of these systematic effects are presented. 23 refs., 6 figs., 7 tabs.
Aydogan, Fatih; Hochreiter, Lawrence E.; Ivanov, Kostadin; Rhee, Gene; Sartori, Enrico
2006-07-01
Good quality experimental data is needed to refine the thermal hydraulic models for the prediction of rod bundle void distribution and critical heat flux (CHF) or dry-out. The Nuclear Power Engineering Corporation (NUPEC) has provided a valuable database to evaluate the thermal hydraulic codes [1]. Part of this database was selected for the NUPEC BWR Full-size Fine-Mesh Bundle Tests (BFBT) benchmark sponsored by US NRC, METI-Japan, NEA/OECD and the Nuclear Engineering Program of the Pennsylvania State University (PSU). Twenty-five organizations from ten countries have confirmed their intention to participate and will provide code predictions to be compared to the measured data for a series of defined exercises within the framework of the BFBT benchmark. This benchmark data includes both the fine-mesh high quality sub-channel void fraction and critical power data. Using a full BWR rod bundle test facility, the void distribution was measured at mesh sizes smaller than the sub-channel by using a state-of-the-art computer tomography (CT) technology [1]. Experiments were performed for different pressures, flow rates, exit qualities, inlet sub-cooling, power distributions, spacer types and assembly designs. There are microscopic and sub-channel averaged void fraction data from the CT scanner at the bundle exit as well as X-ray densitometer void distribution data at different elevation levels in the rod bundle. Each sub-channel's loss coefficient was calculated using the Rehme method [2,3], and a COBRA-TF sub-channel model was developed for the NUPEC facility. The BWR assembly that was modeled with COBRA-TF includes two water rods at the center. The predicted sub-channel void fraction values from COBRA-TF are compared with the bundle exit void fraction values measured using the CT-scanner void fraction from the BFBT benchmark data. Different plots are used to examine the code prediction of the void distribution at a sub-channel level for the different sub-channels within
Santos-Villalobos, Hector J; Gregor, Jens; Bingham, Philip R
2014-01-01
At present, neutron sources cannot be fabricated small and powerful enough to achieve high-resolution radiography while maintaining adequate flux. One solution is to employ computational imaging techniques such as a Magnified Coded Source Imaging (CSI) system. A coded-mask is placed between the neutron source and the object. The system resolution is increased by reducing the size of the mask holes, and the flux is increased by increasing the size of the coded-mask and/or the number of holes. One limitation of such a system is that the resolution of current state-of-the-art scintillator-based detectors caps at around 50 µm. To overcome this challenge, the coded-mask and object are magnified by making the distance from the coded-mask to the object much smaller than the distance from the object to the detector. In previous work, we have shown via synthetic experiments that our least squares method outperforms other methods in image quality and reconstruction precision because of the modeling of the CSI system components. However, the validation experiments were limited to simplistic neutron sources. In this work, we aim to model the flux distribution of a real neutron source and incorporate such a model in our least squares computational system. We provide a full description of the methodology used to characterize the neutron source and validate the method with synthetic experiments.
Guo, Fei; Li, Xin; Liu, Wanke
2016-01-01
The BeiDou satellite-induced code biases have been confirmed to be orbit type-, frequency-, and elevation-dependent. Such code-phase divergences (code bias variations) severely affect absolute precise applications which use code measurements. To reduce their adverse effects, an improved correction model is proposed in this paper. Different from the model proposed by Wanninger and Beer (2015), more datasets (a time span of almost two years) were used to produce the correction values. More importantly, the stochastic information, i.e., the precision indexes, is given together with the correction values in the improved model, whereas only correction values were given and the precision indexes were completely missing in the traditional model. With the improved correction model, users may have a better understanding of their corrections, especially the uncertainty of the corrections. Thus, it is helpful for refining the stochastic model of code observations. Validation tests in precise point positioning (PPP) reveal that a proper stochastic model is critical. The actual precision of the corrected code observations can be reflected in a more objective manner if the stochastic model of the corrections is taken into account. As a consequence, PPP solutions with the improved model outperform the traditional one in terms of positioning accuracy as well as convergence speed. In addition, the Melbourne-Wübbena (MW) combination, which is used for ambiguity fixing, was verified as well. The uncorrected MW values show strong systematic variations with an amplitude of half a wide-lane cycle, which prevents precise ambiguity determination and successful ambiguity resolution. After application of the code bias correction models, the systematic variations can be greatly removed, and the resulting wide-lane ambiguities are more likely to be fixed. Moreover, the code residuals show more reasonable distributions after code bias corrections with either the traditional or the improved model.
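The correction-plus-precision idea can be sketched as elevation interpolation of a two-column table; the node values and sigmas below are hypothetical illustrations, not the published corrections:

```python
import numpy as np

# hypothetical correction table for one BeiDou signal: elevation nodes (deg),
# code-bias corrections (m), and their 1-sigma precisions (m); the numbers
# are illustrative, not the Wanninger & Beer or improved-model values
elev_nodes = np.arange(0, 91, 10)
corr_vals  = np.array([-0.6, -0.5, -0.3, -0.1, 0.0, 0.1, 0.2, 0.3, 0.4, 0.45])
corr_sig   = np.array([0.10, 0.08, 0.06, 0.05, 0.04, 0.04, 0.04, 0.05, 0.06, 0.08])

def correct_code(P_obs, elev):
    """Apply the elevation-interpolated bias and return the corrected
    pseudorange together with the correction uncertainty."""
    dP = np.interp(elev, elev_nodes, corr_vals)
    sig = np.interp(elev, elev_nodes, corr_sig)
    return P_obs - dP, sig

P_corr, sig = correct_code(21_345_678.90, elev=35.0)
weight = 1.0 / sig**2   # the precision index feeds the stochastic model
```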
Complete Distributed Hyper-Entangled-Bell-State Analysis and Quantum Super Dense Coding
NASA Astrophysics Data System (ADS)
Zheng, Chunhong; Gu, Yongjian; Li, Wendong; Wang, Zhaoming; Zhang, Jiying
2016-02-01
We propose a protocol to implement the distributed hyper-entangled-Bell-state analysis (HBSA) for photonic qubits with weak cross-Kerr nonlinearities, QND photon-number-resolving detection, and some linear optical elements. The distinct feature of our scheme is that the BSA for two different degrees of freedom can be implemented deterministically and nondestructively. Based on the present HBSA, we achieve quantum super dense coding with double information capacity, which makes our scheme more significant for long-distance quantum communication.
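The doubled classical capacity mentioned above is the standard superdense-coding map, which a small state-vector sketch can verify (ideal and noiseless, with Bob's Bell-basis measurement taken as given):

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.diag([1, -1])

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)        # shared (|00> + |11>)/sqrt(2)

# Alice encodes two classical bits by a local operation on her qubit alone
ops = {"00": I, "01": X, "10": Z, "11": X @ Z}

# Bob's Bell-basis projectors: Phi+, Psi+, Phi-, Psi-
bell_basis = np.array([[1, 0, 0, 1],
                       [0, 1, 1, 0],
                       [1, 0, 0, -1],
                       [0, 1, -1, 0]]) / np.sqrt(2)

def send(bits):
    state = np.kron(ops[bits], np.eye(2)) @ bell   # act on Alice's qubit only
    probs = np.abs(bell_basis @ state) ** 2        # Bell measurement outcomes
    return ["00", "01", "10", "11"][int(np.argmax(probs))]
```

One transmitted qubit carries two classical bits, which is the "double information capacity" invoked in the abstract.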
MMA, A Computer Code for Multi-Model Analysis
Eileen P. Poeter and Mary C. Hill
2007-08-20
This report documents the Multi-Model Analysis (MMA) computer code. MMA can be used to evaluate results from alternative models of a single system using the same set of observations for all models. As long as the observations, the observation weighting, and system being represented are the same, the models can differ in nearly any way imaginable. For example, they may include different processes, different simulation software, different temporal definitions (for example, steady-state and transient models could be considered), and so on. The multiple models need to be calibrated by nonlinear regression. Calibration of the individual models needs to be completed before application of MMA. MMA can be used to rank models and calculate posterior model probabilities. These can be used to (1) determine the relative importance of the characteristics embodied in the alternative models, (2) calculate model-averaged parameter estimates and predictions, and (3) quantify the uncertainty of parameter estimates and predictions in a way that integrates the variations represented by the alternative models. There is a lack of consensus on what model analysis methods are best, so MMA provides four default methods. Two are based on Kullback-Leibler information, and use the AIC (Akaike Information Criterion) or AICc (second-order-bias-corrected AIC) model discrimination criteria. The other two default methods are the BIC (Bayesian Information Criterion) and the KIC (Kashyap Information Criterion) model discrimination criteria. Use of the KIC criterion is equivalent to using the maximum-likelihood Bayesian model averaging (MLBMA) method. AIC, AICc, and BIC can be derived from Frequentist or Bayesian arguments. The default methods based on Kullback-Leibler information have a number of theoretical advantages, including that they tend to favor more complicated models as more data become available than do the other methods, which makes sense in many situations.
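The information-criterion machinery described above reduces to a common weighting step; a sketch with made-up regression results (the KIC variant is omitted, since it needs additional Bayesian terms):

```python
import numpy as np

def model_probabilities(criteria):
    """Posterior model weights from any information criterion:
    w_j proportional to exp(-Delta_j / 2), Delta_j = C_j - min(C)."""
    c = np.asarray(criteria, dtype=float)
    w = np.exp(-0.5 * (c - c.min()))
    return w / w.sum()

def aic(rss, n, k):   # least-squares form of Akaike's criterion
    return n * np.log(rss / n) + 2 * k

def aicc(rss, n, k):  # second-order bias correction for small samples
    return aic(rss, n, k) + 2 * k * (k + 1) / (n - k - 1)

def bic(rss, n, k):
    return n * np.log(rss / n) + k * np.log(n)

# three hypothetical calibrated models of one system, same 30 observations:
# (residual sum of squares, number of estimated parameters)
n = 30
models = [(12.0, 3), (9.5, 5), (9.4, 9)]
weights = model_probabilities([aicc(rss, n, k) for rss, k in models])
```

With these made-up numbers the middle model wins: the third fits marginally better but is penalized for its extra parameters, which is exactly the trade-off the discrimination criteria encode.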
Review of the ALOHA code pool evaporation model
Kalinich, D.A.
1995-11-01
The ALOHA computer code determines the evaporative mass transfer rate from a liquid pool by solving the conservation of mass and energy equations associated with the pool. As part of the solution of the conservation of energy equation, the heat flux from the ground to the pool is calculated. The model used in the ALOHA code is based on the solution of the temperature profile for a one-dimensional semi-infinite slab. This model is only valid for cases in which the boundary condition (pool temperature) is held constant. Thus, when the pool material temperature is not constant, the ALOHA ground-to-pool heat flux calculation may result in a non-conservative evaporation rate. The analytical solution for the temperature profile of a one-dimensional semi-infinite slab with a time-dependent boundary condition requires a priori knowledge of the boundary condition. Lacking such knowledge, a time-dependent finite-difference solution for the ground temperature profile was developed. The temperature gradient, and thus the ground-to-pool heat flux, at the ground-pool interface is determined from the results of the finite-difference solution. The evaporation rates over the conditions sampled using the ALOHA ground-to-pool heat flux model were up to 15% lower than those generated when the finite-difference model was used to calculate the ground-to-pool heat flux. Overall ALOHA code estimates may compensate by judicious selection of input parameters and assumptions. Application to safety analyses thus must be performed cautiously to ensure that the estimated chemical source term and its attendant downwind concentrations are bounding.
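A minimal version of the finite-difference ground model described above, with illustrative soil properties and an assumed pool-cooling history:

```python
import numpy as np

# 1D explicit finite-difference ground model: node 0 is the ground-pool
# interface; the deepest node stays at the initial soil temperature.
alpha, k = 6.0e-7, 1.0          # diffusivity (m^2/s), conductivity (W/m/K); illustrative
dz, nz = 0.01, 100
dt = 0.4 * dz**2 / alpha        # satisfies the explicit limit dt <= dz^2/(2*alpha)
T = np.full(nz, 288.0)          # initial ground temperature (K)

def step(T, T_pool):
    T = T.copy()
    T[0] = T_pool                                      # time-dependent boundary
    T[1:-1] += alpha * dt / dz**2 * (T[2:] - 2*T[1:-1] + T[:-2])
    return T

fluxes = []
for n in range(500):
    T_pool = 288.0 - 40.0 * min(1.0, n * dt / 600)     # assumed cooling history
    T = step(T, T_pool)
    fluxes.append(k * (T[1] - T[0]) / dz)              # ground-to-pool flux (W/m^2)
```

The flux peaks while the pool is still cooling and then decays as the near-surface ground chills, which is the transient behavior the constant-boundary analytical model cannot capture.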
A model code for the radiative theta pinch
Lee, S.; Saw, S. H.; Lee, P. C. K.; Akel, M.; Damideh, V.; Khattak, N. A. D.; Mongkolnavin, R.; Paosawatyanyong, B.
2014-07-15
A model for the theta pinch is presented with three modelled phases: a radial inward shock phase, a reflected shock phase, and a final pinch phase. The governing equations for the phases are derived incorporating thermodynamics and radiation and radiation-coupled dynamics in the pinch phase. A code is written incorporating correction for the effects of transit delay of small disturbing speeds and the effects of plasma self-absorption on the radiation. Two model parameters are incorporated into the model: the coupling coefficient f between the primary loop current and the induced plasma current, and the mass swept-up factor f_m. These values are taken from experiments carried out in the Chulalongkorn theta pinch.
Distance distribution in configuration-model networks
NASA Astrophysics Data System (ADS)
Nitzan, Mor; Katzav, Eytan; Kühn, Reimer; Biham, Ofer
2016-06-01
We present analytical results for the distribution of shortest path lengths between random pairs of nodes in configuration model networks. The results, which are based on recursion equations, are shown to be in good agreement with numerical simulations for networks with degenerate, binomial, and power-law degree distributions. The mean, mode, and variance of the distribution of shortest path lengths are also evaluated. These results provide expressions for central measures and dispersion measures of the distribution of shortest path lengths in terms of moments of the degree distribution, illuminating the connection between the two distributions.
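A numerical check of the kind reported: build a configuration-model network by uniform stub matching and histogram shortest path lengths by BFS, here for a degenerate (3-regular) degree sequence; self-loops and repeated edges are simply dropped in this sketch:

```python
import random
from collections import deque, Counter

def configuration_model(degrees, rng):
    """Random graph from a degree sequence by uniform stub matching
    (self-loops and multi-edges are discarded for simplicity)."""
    stubs = [v for v, d in enumerate(degrees) for _ in range(d)]
    rng.shuffle(stubs)
    adj = [set() for _ in degrees]
    for a, b in zip(stubs[::2], stubs[1::2]):
        if a != b:
            adj[a].add(b)
            adj[b].add(a)
    return adj

def bfs_dists(adj, s):
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

rng = random.Random(4)
n = 2000
adj = configuration_model([3] * n, rng)    # degenerate degree distribution

# distribution of shortest path lengths over random source nodes
hist = Counter()
for s in rng.sample(range(n), 50):
    for v, d in bfs_dists(adj, s).items():
        if d > 0:
            hist[d] += 1

total = sum(hist.values())
mean_d = sum(d * c for d, c in hist.items()) / total
mode_d = max(hist, key=hist.get)
```

For a 3-regular sequence the mean distance should scale like log n / log 2, around 11 for n = 2000, giving a quick consistency check against the recursion-equation results.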
NASA Astrophysics Data System (ADS)
D. Simard, Alexandre; LaRochelle, Sophie
2009-06-01
As data traffic increases on telecommunication networks, optical communication systems must adapt to deal with this increasingly bursty traffic. Packet-switched networks are considered a good solution to provide efficient bandwidth management. We recently proposed the use of spectral amplitude codes (SAC) to implement all-optical label processing for packet switching and routing. The implementation of this approach requires agile photonic components including filters and lasers. In this paper, we propose a reconfigurable source able to generate the routing codes, which are composed of two wavelengths on a 25 GHz grid. Our solution is to use a cascade of two chirped fibre Bragg gratings (CFBG) in a semiconductor fibre ring laser. The wavelength selection process comes from distributed phase shifts applied on the CFBG that is used in transmission. Those phase shifts are obtained via local thermal perturbations created by resistive chrome lines deposited on a glass plate. The filter resonances are influenced by four parameters: the chrome line positions, the temperature profile along the fibre, the neighbouring heater state (ON/OFF) and the grating itself. Through numerical modeling, these parameters are optimized to design the appropriate chrome line pattern. With this device, we demonstrate successful generation of reconfigurable SAC codes.
Improved Flow Modeling in Transient Reactor Safety Analysis Computer Codes
Holowach, M.J.; Hochreiter, L.E.; Cheung, F.B.
2002-07-01
A method of accounting for fluid-to-fluid shear in between calculational cells over a wide range of flow conditions envisioned in reactor safety studies has been developed such that it may be easily implemented into a computer code such as COBRA-TF for more detailed subchannel analysis. At a given nodal height in the calculational model, equivalent hydraulic diameters are determined for each specific calculational cell using either laminar or turbulent velocity profiles. The velocity profile may be determined from a separate CFD (Computational Fluid Dynamics) analysis, experimental data, or existing semi-empirical relationships. The equivalent hydraulic diameter is then applied to the wall drag force calculation so as to determine the appropriate equivalent fluid-to-fluid shear caused by the wall for each cell based on the input velocity profile. This means of assigning the shear to a specific cell is independent of the actual wetted perimeter and flow area for the calculational cell. The use of this equivalent hydraulic diameter for each cell within a calculational subchannel results in a representative velocity profile which can further increase the accuracy and detail of heat transfer and fluid flow modeling within the subchannel when utilizing a thermal hydraulics systems analysis computer code such as COBRA-TF. Utilizing COBRA-TF with the flow modeling enhancement results in increased accuracy for a coarse-mesh model without the significantly greater computational and time requirements of a full-scale 3D (three-dimensional) transient CFD calculation. (authors)
Physics models in the toroidal transport code PROCTR
Howe, H.C.
1990-08-01
The physics models that are contained in the toroidal transport code PROCTR are described in detail. Time- and space-dependent models are included for the plasma hydrogenic-ion, helium, and impurity densities, the electron and ion temperatures, the toroidal rotation velocity, and the toroidal current profile. Time- and depth-dependent models for the trapped and mobile hydrogenic particle concentrations in the wall and a time-dependent point model for the number of particles in the limiter are also included. Time-dependent models for neutral particle transport, neutral beam deposition and thermalization, fusion heating, impurity radiation, pellet injection, and the radial electric potential are included and recalculated periodically as the time-dependent models evolve. The plasma solution is obtained either in simple flux coordinates, where the radial shift of each elliptical, toroidal flux surface is included to maintain an approximate pressure equilibrium, or in general three-dimensional torsatron coordinates represented by series of helical harmonics. The detailed coupling of the plasma, scrape-off layer, limiter, and wall models through the neutral transport model makes PROCTR especially suited for modeling of recycling and particle control in toroidal plasmas. The model may also be used in a steady-state profile analysis mode for studying energy and particle balances starting with measured plasma profiles.
MMA, A Computer Code for Multi-Model Analysis
Poeter, Eileen P.; Hill, Mary C.
2007-01-01
This report documents the Multi-Model Analysis (MMA) computer code. MMA can be used to evaluate results from alternative models of a single system using the same set of observations for all models. As long as the observations, the observation weighting, and system being represented are the same, the models can differ in nearly any way imaginable. For example, they may include different processes, different simulation software, different temporal definitions (for example, steady-state and transient models could be considered), and so on. The multiple models need to be calibrated by nonlinear regression. Calibration of the individual models needs to be completed before application of MMA. MMA can be used to rank models and calculate posterior model probabilities. These can be used to (1) determine the relative importance of the characteristics embodied in the alternative models, (2) calculate model-averaged parameter estimates and predictions, and (3) quantify the uncertainty of parameter estimates and predictions in a way that integrates the variations represented by the alternative models. There is a lack of consensus on what model analysis methods are best, so MMA provides four default methods. Two are based on Kullback-Leibler information, and use the AIC (Akaike Information Criterion) or AICc (second-order-bias-corrected AIC) model discrimination criteria. The other two default methods are the BIC (Bayesian Information Criterion) and the KIC (Kashyap Information Criterion) model discrimination criteria. Use of the KIC criterion is equivalent to using the maximum-likelihood Bayesian model averaging (MLBMA) method. AIC, AICc, and BIC can be derived from Frequentist or Bayesian arguments. The default methods based on Kullback-Leibler information have a number of theoretical advantages, including that they tend to favor more complicated models as more data become available than do the other methods, which makes sense in many situations. Many applications of MMA will
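The four default criteria named in the abstract have standard textbook forms for least-squares fits. A minimal Python sketch of AIC, AICc, and BIC, plus the exp(-Δ/2) rescaling used to form model weights; MMA's exact formulas, observation weighting, and the KIC criterion follow the report itself, so the functions and numbers below are purely illustrative:

```python
import math

def aic(n, rss, k):
    """Akaike Information Criterion for a least-squares fit
    (n observations, residual sum of squares rss, k parameters)."""
    return n * math.log(rss / n) + 2 * k

def aicc(n, rss, k):
    """Second-order bias-corrected AIC, preferred for small samples."""
    return aic(n, rss, k) + 2 * k * (k + 1) / (n - k - 1)

def bic(n, rss, k):
    """Bayesian Information Criterion; penalizes parameters more as n grows."""
    return n * math.log(rss / n) + k * math.log(n)

def model_weights(criteria):
    """Rescale criterion values into normalized model weights
    via exp(-delta/2), as in Akaike weights."""
    best = min(criteria)
    raw = [math.exp(-0.5 * (c - best)) for c in criteria]
    total = sum(raw)
    return [r / total for r in raw]

# Three hypothetical candidate models of the same observations:
crits = [aicc(40, 2.1, 3), aicc(40, 1.9, 5), aicc(40, 1.8, 9)]
print(model_weights(crits))
```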
The Overlap Model: A Model of Letter Position Coding
ERIC Educational Resources Information Center
Gomez, Pablo; Ratcliff, Roger; Perea, Manuel
2008-01-01
Recent research has shown that letter identity and letter position are not integral perceptual dimensions (e.g., jugde primes judge in word-recognition experiments). Most comprehensive computational models of visual word recognition (e.g., the interactive activation model, J. L. McClelland & D. E. Rumelhart, 1981, and its successors) assume that…
Code System for the Analysis of Component Failure Data with a Compound Statistical Model.
2000-08-22
Version 00 Two separate but similar Fortran computer codes have been developed for the analysis of component failure data with a compound statistical model: SAFE-D and SAFE-R. The SAFE-D code (Statistical Analysis for Failure Estimation-failure-on-Demand) analyzes data which give the observed number of failures (failure to respond properly) in a specified number of demands for several similar components that should change their condition upon demand. The second program, SAFE-R (Statistical Analysis for Failure Estimation-failure Rate), is to be used to analyze normally operating components for which the observed number of failures in a specified operating time is given. In both these codes the failure parameter (failure probability per demand for SAFE-D or failure rate for SAFE-R) may be assumed equal for all similar components (the homogeneous failure model) or may be assumed to be a random variable distributed among similar components according to a prior distribution (the heterogeneous or compound failure model). Related information can be found at the developer's web site: http://www.mne.ksu.edu/~jks/.
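A minimal Python sketch of the two failure-on-demand models described above; the function names and the Beta prior choice are illustrative assumptions, not SAFE-D's actual implementation:

```python
def pooled_failure_probability(failures, demands):
    """Homogeneous model: one failure probability shared by all
    similar components (maximum-likelihood pooled estimate)."""
    return sum(failures) / sum(demands)

def beta_posterior_mean(f, d, a=0.5, b=0.5):
    """Heterogeneous (compound) model sketch: per-component estimate when
    the failure probability has a Beta(a, b) prior (Jeffreys by default)."""
    return (f + a) / (d + a + b)

# Three similar components: observed failures over demand counts.
print(pooled_failure_probability([1, 0, 2], [100, 150, 250]))  # → 0.006
print(beta_posterior_mean(0, 150))  # component with zero observed failures
```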
Modeling Relativistic Jets Using the Athena Hydrodynamics Code
NASA Astrophysics Data System (ADS)
Pauls, David; Pollack, Maxwell; Wiita, Paul
2014-11-01
We used the Athena hydrodynamics code (Beckwith & Stone 2011) to model early-stage two-dimensional relativistic jets as approximations to the growth of radio-loud active galactic nuclei. We analyzed variability of the radio emission by calculating fluxes from a vertical strip of zones behind a standing shock, as discussed in the accompanying poster. We found the advance speed of the jet bow shock for various input jet velocities and jet-to-ambient density ratios. Faster jets and higher jet densities produce faster shock advances. We investigated the effects of parameters such as the Courant-Friedrichs-Lewy number, the input jet velocity, and the density ratio on the stability of the simulated jet, finding that numerical instabilities grow rapidly when the CFL number is above 0.1. We found that greater jet input velocities and higher density ratios lengthen the time the jet remains stable. We also examined the effects of the boundary conditions, the CFL number, the input jet velocity, the grid resolution, and the density ratio on the premature termination of the Athena code. We found that a grid of 1200 by 1000 zones allows the code to run with minimal errors, while still maintaining an adequate resolution. This work is supported by the Mentored Undergraduate Summer Experience program at TCNJ.
NASA Astrophysics Data System (ADS)
Ioan, M.-R.
2016-08-01
In experiments involving ionizing radiation, precise knowledge of the relevant parameters is a very important task. Some of these experiments use electromagnetic ionizing radiation such as gamma rays and X-rays; others use energetic charged or uncharged particles such as protons, electrons and neutrons, or, in other cases, larger accelerated particles such as helium or deuterium nuclei. In all these cases, the beam used to hit an exposed target must first be collimated and precisely characterized. In this paper, a novel method involving Matlab coding is proposed to determine the distribution of the collimated beam. The method was implemented by placing Pyrex glass test samples in the beam whose distribution and dimensions are to be determined, taking high-quality pictures of them, and then digitally processing the resulting images. This method also yields information about the doses absorbed in the volume of the exposed samples.
Validated modeling of distributed energy resources at distribution voltages : LDRD project 38672.
Ralph, Mark E.; Ginn, Jerry W.
2004-03-01
A significant barrier to the deployment of distributed energy resources (DER) onto the power grid is uncertainty on the part of utility engineers regarding impacts of DER on their distribution systems. Because of the many possible combinations of DER and local power system characteristics, these impacts can most effectively be studied by computer simulation. The goal of this LDRD project was to develop and experimentally validate models of transient and steady state source behavior for incorporation into utility distribution analysis tools. Development of these models had not been prioritized either by the distributed-generation industry or by the inverter industry. A functioning model of a selected inverter-based DER was developed in collaboration with both the manufacturer and industrial power systems analysts. The model was written in the PSCAD simulation language, a variant of the ElectroMagnetic Transients Program (EMTP), a code that is widely used and accepted by utilities. A stakeholder team was formed and a methodology was established to address the problem. A list of detailed DER/utility interaction concerns was developed and prioritized. The list indicated that the scope of the problem significantly exceeded resources available for this LDRD project. As this work progresses under separate funding, the model will be refined and experimentally validated. It will then be incorporated in utility distribution analysis tools and used to study a variety of DER issues. The key next step will be design of the validation experiments.
EMPIRE: Nuclear Reaction Model Code System for Data Evaluation
Herman, M.; Capote, R.; Carlson, B.V.; Oblozinsky, P.; Sin, M.; Trkov, A.; Wienke, H.; Zerkin, V.
2007-12-15
EMPIRE is a modular system of nuclear reaction codes, comprising various nuclear models, and designed for calculations over a broad range of energies and incident particles. A projectile can be a neutron, proton, any ion (including heavy-ions) or a photon. The energy range extends from the beginning of the unresolved resonance region for neutron-induced reactions (~keV) and goes up to several hundred MeV for heavy-ion induced reactions. The code accounts for the major nuclear reaction mechanisms, including direct, pre-equilibrium and compound nucleus ones. Direct reactions are described by a generalized optical model (ECIS03) or by the simplified coupled-channels approach (CCFUS). The pre-equilibrium mechanism can be treated by a deformation dependent multi-step direct (ORION + TRISTAN) model, by a NVWY multi-step compound one or by either a pre-equilibrium exciton model with cluster emission (PCROSS) or by another with full angular momentum coupling (DEGAS). Finally, the compound nucleus decay is described by the full featured Hauser-Feshbach model with γ-cascade and width-fluctuations. Advanced treatment of the fission channel takes into account transmission through a multiple-humped fission barrier with absorption in the wells. The fission probability is derived in the WKB approximation within the optical model of fission. Several options for nuclear level densities include the EMPIRE-specific approach, which accounts for the effects of the dynamic deformation of a fast rotating nucleus, the classical Gilbert-Cameron approach and pre-calculated tables obtained with a microscopic model based on HFB single-particle level schemes with collective enhancement. A comprehensive library of input parameters covers nuclear masses, optical model parameters, ground state deformations, discrete levels and decay schemes, level densities, fission barriers, moments of inertia and γ-ray strength functions. The results can be converted into ENDF-6 formatted
Models for the hotspot distribution
Jurdy, D.M.; Stefanick, M.
1990-10-01
Published hotspot catalogues all show a hemispheric concentration beyond what can be expected by chance. Cumulative distributions about the center of concentration are described by a power law with a fractal dimension closer to 1 than 2. Random sets of the corresponding sizes do not show this effect. A simple shift of the random sets away from a point would produce distributions similar to those of hotspot sets. The possible relation of the hotspots to the locations of ridges and subduction zones is tested using large sets of randomly-generated points to estimate areas within given distances of the plate boundaries. The probability of finding the observed number of hotspots within 10° of the ridges is about what is expected.
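The Monte Carlo area estimate described above can be sketched in Python for the one analytically tractable case of a single great circle, where the expected fraction of uniform random points within angular distance δ is sin δ; this is a toy stand-in for the paper's actual ridge geometry, and the function names are mine:

```python
import math
import random

def random_point_on_sphere(rng):
    """Uniform random point on the unit sphere, as (lat, lon) in radians."""
    lat = math.asin(2.0 * rng.random() - 1.0)
    lon = 2.0 * math.pi * rng.random()
    return lat, lon

def fraction_near_equator(n, delta_deg, seed=0):
    """Monte Carlo fraction of uniform random points whose angular distance
    from a great circle (here: the equator) is at most delta_deg."""
    rng = random.Random(seed)
    delta = math.radians(delta_deg)
    hits = sum(abs(random_point_on_sphere(rng)[0]) <= delta for _ in range(n))
    return hits / n

# Analytical expectation for a band of half-width delta about a great
# circle: sin(delta), roughly 0.174 for 10 degrees.
print(fraction_near_equator(100_000, 10.0))
```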
Overview of the Graphical User Interface for the GERM Code (GCR Event-Based Risk Model
NASA Technical Reports Server (NTRS)
Kim, Myung-Hee; Cucinotta, Francis A.
2010-01-01
The descriptions of biophysical events from heavy ions are of interest in radiobiology, cancer therapy, and space exploration. The biophysical description of the passage of heavy ions in tissue and shielding materials is best described by a stochastic approach that includes both ion track structure and nuclear interactions. A new computer model called the GCR Event-based Risk Model (GERM) code was developed for the description of biophysical events from heavy ion beams at the NASA Space Radiation Laboratory (NSRL). The GERM code calculates basic physical and biophysical quantities of high-energy protons and heavy ions that have been studied at NSRL for the purpose of simulating space radiobiological effects. For mono-energetic beams, the code evaluates the linear-energy transfer (LET), range (R), and absorption in tissue equivalent material for a given Charge (Z), Mass Number (A) and kinetic energy (E) of an ion. In addition, a set of biophysical properties are evaluated such as the Poisson distribution of ion or delta-ray hits for a specified cellular area, cell survival curves, and mutation and tumor probabilities. The GERM code also calculates the radiation transport of the beam line for either a fixed number of user-specified depths or at multiple positions along the Bragg curve of the particle. The contributions from primary ion and nuclear secondaries are evaluated. The GERM code accounts for the major nuclear interaction processes of importance for describing heavy ion beams, including nuclear fragmentation, elastic scattering, and knockout-cascade processes by using the quantum multiple scattering fragmentation (QMSFRG) model. The QMSFRG model has been shown to be in excellent agreement with available experimental data for nuclear fragmentation cross sections, and has been used by the GERM code for application to thick target experiments. The GERM code provides scientists participating in NSRL experiments with the data needed for the interpretation of their
Comprehensive Nuclear Model Code, Nucleons, Ions, Induced Cross-Sections
2002-09-27
EMPIRE-II is a flexible code for calculation of nuclear reactions in the frame of combined optical, Multistep Direct (TUL), Multistep Compound (NVWY) and statistical (Hauser-Feshbach) models. The incident particle can be a nucleon or any nucleus (Heavy Ion). Isomer ratios, residue production cross sections and emission spectra for neutrons, protons, alpha-particles, gamma-rays, and one type of Light Ion can be calculated. The energy range starts just above the resonance region for neutron induced reactions and extends up to several hundreds of MeV for the Heavy Ion induced reactions.
EMPIRE: A Reaction Model Code for Nuclear Astrophysics
NASA Astrophysics Data System (ADS)
Palumbo, A.; Herman, M.; Capote, R.
2014-06-01
The correct modeling of abundances requires knowledge of nuclear cross sections for a variety of neutron, charged particle and γ induced reactions. These involve targets far from stability and are therefore difficult (or currently impossible) to measure. Nuclear reaction theory provides the only way to estimate values of such cross sections. In this paper we present application of the EMPIRE reaction code to nuclear astrophysics. Recent measurements are compared to the calculated cross sections showing consistent agreement for n-, p- and α-induced reactions of astrophysical relevance.
Stimulus Coding and Synchrony in Stochastic Neuron Models
NASA Astrophysics Data System (ADS)
Cieniak, Jakub
A stochastic leaky integrate-and-fire neuron model was implemented in this study to simulate the spiking activity of the electrosensory "P-unit" receptor neurons of the weakly electric fish Apteronotus leptorhynchus. In the context of sensory coding, these cells have been previously shown to respond in experiment to natural random narrowband signals with either a linear or nonlinear coding scheme, depending on the intrinsic firing rate of the cell in the absence of external stimulation. It was hypothesised in this study that this duality is due to the relation of the stimulus to the neuron's excitation threshold. This hypothesis was validated with the model by lowering the threshold of the neuron or increasing its intrinsic noise, or randomness, either of which made the relation between firing rate and input strength more linear. Furthermore, synchronous P-unit firing to a common input also plays a role in decoding the stimulus at deeper levels of the neural pathways. Synchronisation and desynchronisation between multiple model responses for different types of natural communication signals were shown to agree with experimental observations. A novel result of resonance-induced synchrony enhancement of P-units to certain communication frequencies was also found.
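A minimal Python sketch of a stochastic leaky integrate-and-fire model of the kind described above; the parameter values are illustrative, not the fitted P-unit values from the study:

```python
import math
import random

def lif_spike_train(t_max=1.0, dt=1e-4, tau=0.01, v_th=1.0, v_reset=0.0,
                    drive=1.2, noise=0.5, seed=1):
    """Euler-Maruyama simulation of a noisy leaky integrate-and-fire neuron:
    dv = ((drive - v) / tau) dt + noise * sqrt(dt) * N(0, 1);
    a spike is recorded and v reset whenever v crosses the threshold v_th."""
    rng = random.Random(seed)
    v, spikes = v_reset, []
    for i in range(int(t_max / dt)):
        v += (drive - v) / tau * dt + noise * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        if v >= v_th:
            spikes.append(i * dt)
            v = v_reset
    return spikes

train = lif_spike_train()
print(len(train))  # number of spikes in one second of simulated time
```

Lowering `v_th` or raising `noise`, as hypothesised in the abstract, makes the firing rate respond more linearly to the drive.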
A numerical code for a three-dimensional magnetospheric MHD equilibrium model
NASA Technical Reports Server (NTRS)
Voigt, G.-H.
1992-01-01
Development of two-dimensional and three-dimensional MHD equilibrium models of Earth's magnetosphere was begun. The original proposal was motivated by the realization that global, purely data-based models of Earth's magnetosphere are inadequate for studying the underlying plasma-physical principles according to which the magnetosphere evolves on the quasi-static convection time scale. Complex numerical grid generation schemes were established for a 3-D Poisson solver, and a robust Grad-Shafranov solver was coded for high-beta MHD equilibria. The effects of both the magnetopause geometry and the boundary conditions on the magnetotail current distribution were then calculated.
Wan, Jan; Xiong, Naixue; Zhang, Wei; Zhang, Qinchao; Wan, Zheng
2012-01-01
The reliability of wireless sensor networks (WSNs) can be greatly affected by failures of sensor nodes due to energy exhaustion or the influence of brutal external environment conditions. Such failures seriously affect data persistence and collection efficiency. Strategies based on network coding technology for WSNs, such as LTCDS, can improve data persistence without mass redundancy. However, due to the poor intermediate performance of LTCDS, a serious ‘cliff effect’ may appear during the decoding period, and source data are hard to recover from sink nodes before sufficient encoded packets are collected. In this paper, the influence of the coding degree distribution strategy on the ‘cliff effect’ is observed, and the prioritized data storage and dissemination algorithm PLTD-ALPHA is presented to achieve better data persistence and recovery performance. With PLTD-ALPHA, the degree distribution of the data stored at sensor nodes increases along with the predefined degree levels, and persistent data packets can be delivered to the sink node in order of their degree. Finally, the performance of PLTD-ALPHA is evaluated, and experimental results show that PLTD-ALPHA can greatly improve data collection performance and decoding efficiency, while data persistence is not notably affected. PMID:23235451
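The abstract does not specify PLTD-ALPHA's prioritized degree distribution, but the robust soliton distribution used by standard LT-style fountain codes shows the general shape such a distribution takes. A Python sketch, with the conventional tuning parameters c and delta:

```python
import math

def robust_soliton(k, c=0.1, delta=0.5):
    """Robust soliton degree distribution over degrees 1..k, returned as a
    list p where p[d] is the probability of degree d (p[0] is unused)."""
    s = c * math.log(k / delta) * math.sqrt(k)
    # Ideal soliton component: mass at degree 1 plus 1/(d(d-1)) tail.
    rho = [0.0, 1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]
    # Robust correction: extra mass at low degrees plus a spike near k/s.
    spike = int(round(k / s))
    tau = [0.0] * (k + 1)
    for d in range(1, min(spike, k + 1)):
        tau[d] = s / (k * d)
    if 1 <= spike <= k:
        tau[spike] = s * math.log(s / delta) / k
    z = sum(rho) + sum(tau)
    return [(rho[d] + tau[d]) / z for d in range(k + 1)]

p = robust_soliton(100)
print(round(sum(p), 6))  # → 1.0
```

Low degrees dominate, which is what makes intermediate decoding (and hence the ‘cliff effect’ trade-off) sensitive to how the distribution is shaped.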
Modelling the Absorption Measurement Distribution (AMD) for Mrk 509
NASA Astrophysics Data System (ADS)
Adhikari, T.; Rozanska, A.; Sobolewska, M.; Czerny, B.
2015-07-01
Absorption Measurement Distribution (AMD) measures the distribution of absorbing column over a range of ionization parameters of the X-ray absorbers in Seyfert galaxies. In this work, we modeled the AMD in Mrk 509 using its recently published broad band Spectral Energy Distribution (SED). This SED is used as an input for radiative transfer computations with full photoionization treatment using the photoionization codes Titan and Cloudy. Assuming a photoionized medium with a uniform total pressure (gas+radiation), we reproduced the discontinuity in the observed AMD distribution, which is usually described as the region of thermal instability of the absorber. We also studied the structure and properties of the warm absorber in Mrk 509.
Rasanen, Okko J; Saarinen, Jukka P
2016-09-01
Modeling and prediction of temporal sequences is central to many signal processing and machine learning applications. Prediction based on sequence history is typically performed using parametric models, such as fixed-order Markov chains (n-grams), approximations of high-order Markov processes, such as mixed-order Markov models or mixtures of lagged bigram models, or with other machine learning techniques. This paper presents a method for sequence prediction based on sparse hyperdimensional coding of the sequence structure and describes how higher order temporal structures can be utilized in sparse coding in a balanced manner. The method is purely incremental, allowing real-time online learning and prediction with limited computational resources. Experiments with prediction of mobile phone use patterns, including the prediction of the next launched application, the next GPS location of the user, and the next artist played with the phone media player, reveal that the proposed method is able to capture the relevant variable-order structure from the sequences. In comparison with the n-grams and the mixed-order Markov models, the sparse hyperdimensional predictor clearly outperforms its peers in terms of unweighted average recall and achieves an equal level of weighted average recall as the mixed-order Markov chain but without the batch training of the mixed-order model. PMID:26285224
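For contrast with the paper's hyperdimensional approach, a minimal Python sketch of the fixed-order n-gram baseline it is compared against, with simple longest-context backoff; class and method names are illustrative:

```python
from collections import Counter, defaultdict

class NGramPredictor:
    """Fixed-order Markov (n-gram) predictor with longest-context backoff."""

    def __init__(self, n=3):
        self.n = n
        self.counts = defaultdict(Counter)  # context tuple -> next-symbol counts

    def train(self, seq):
        # Record next-symbol counts for every context of length 1..n.
        for order in range(1, self.n + 1):
            for i in range(len(seq) - order):
                context = tuple(seq[i:i + order])
                self.counts[context][seq[i + order]] += 1

    def predict(self, history):
        # Back off from the longest matching context to shorter ones.
        for order in range(min(self.n, len(history)), 0, -1):
            context = tuple(history[-order:])
            if context in self.counts:
                return self.counts[context].most_common(1)[0][0]
        return None  # context unseen at every order

model = NGramPredictor(n=2)
model.train("abcabcabc")
print(model.predict("ab"))  # → c
```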
Large eddy simulation of fine water sprays: comparative analysis of two models and computer codes
NASA Astrophysics Data System (ADS)
Tsoy, A. S.; Snegirev, A. Yu.
2015-09-01
The model and the computer code FDS, albeit widely used in engineering practice to predict fire development, are not sufficiently validated for fire suppression by fine water sprays. In this work, the effect of numerical resolution of the large scale turbulent pulsations on the accuracy of predicted time-averaged spray parameters is evaluated. Comparison of the simulation results obtained with the two versions of the model and code, as well as of the predicted and measured radial distributions of the liquid flow rate, revealed the need to apply monotonic and yet sufficiently accurate discrete approximations of the convective terms. Failure to do so delays jet break-up, otherwise induced by large turbulent eddies, thereby excessively focusing the predicted flow around its axis. The effect of the pressure drop in the spray nozzle is also examined; increasing it is shown to cause only a weak increase of the evaporated fraction and vapor concentration despite the significant increase of flow velocity.
Development of a Model and Computer Code to Describe Solar Grade Silicon Production Processes
NASA Technical Reports Server (NTRS)
Srivastava, R.; Gould, R. K.
1979-01-01
The program aims at developing mathematical models and computer codes based on these models, which allow prediction of the product distribution in chemical reactors for converting gaseous silicon compounds to condensed-phase silicon. The major interest is in collecting silicon as a liquid on the reactor walls and other collection surfaces. Two reactor systems are of major interest, a SiCl4/Na reactor in which Si(l) is collected on the flow tube reactor walls and a reactor in which Si(l) droplets formed by the SiCl4/Na reaction are collected by a jet impingement method. During this quarter the following tasks were accomplished: (1) particle deposition routines were added to the boundary layer code; and (2) Si droplet sizes in SiCl4/Na reactors at temperatures below the dew point of Si are being calculated.
7 CFR Exhibit E to Subpart A of... - Voluntary National Model Building Codes
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 12 2012-01-01 2012-01-01 false Voluntary National Model Building Codes E Exhibit E... National Model Building Codes The following documents address the health and safety aspects of buildings and related structures and are voluntary national model building codes as defined in § 1924.4(h)(2)...
7 CFR Exhibit E to Subpart A of... - Voluntary National Model Building Codes
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 12 2013-01-01 2013-01-01 false Voluntary National Model Building Codes E Exhibit E... National Model Building Codes The following documents address the health and safety aspects of buildings and related structures and are voluntary national model building codes as defined in § 1924.4(h)(2)...
7 CFR Exhibit E to Subpart A of... - Voluntary National Model Building Codes
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 12 2014-01-01 2013-01-01 true Voluntary National Model Building Codes E Exhibit E to... Model Building Codes The following documents address the health and safety aspects of buildings and related structures and are voluntary national model building codes as defined in § 1924.4(h)(2) of...
7 CFR Exhibit E to Subpart A of... - Voluntary National Model Building Codes
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 12 2011-01-01 2011-01-01 false Voluntary National Model Building Codes E Exhibit E... National Model Building Codes The following documents address the health and safety aspects of buildings and related structures and are voluntary national model building codes as defined in § 1924.4(h)(2)...
Graphical Models via Univariate Exponential Family Distributions
Yang, Eunho; Ravikumar, Pradeep; Allen, Genevera I.; Liu, Zhandong
2016-01-01
Undirected graphical models, or Markov networks, are a popular class of statistical models, used in a wide variety of applications. Popular instances of this class include Gaussian graphical models and Ising models. In many settings, however, it might not be clear which subclass of graphical models to use, particularly for non-Gaussian and non-categorical data. In this paper, we consider a general sub-class of graphical models where the node-wise conditional distributions arise from exponential families. This allows us to derive multivariate graphical model distributions from univariate exponential family distributions, such as the Poisson, negative binomial, and exponential distributions. Our key contributions include a class of M-estimators to fit these graphical model distributions; and rigorous statistical analysis showing that these M-estimators recover the true graphical model structure exactly, with high probability. We provide examples of genomic and proteomic networks learned via instances of our class of graphical models derived from Poisson and exponential distributions. PMID:27570498
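A minimal Python sketch of the node-wise M-estimation idea for the Poisson case: a two-node toy network where one node's conditional distribution is Poisson with log-rate linear in its neighbor, fitted by plain gradient ascent in place of the paper's regularized M-estimators. Names, parameters, and data are illustrative:

```python
import math
import random

def fit_poisson_regression(xs, ys, lr=0.05, steps=2500):
    """Node-wise sketch: fit y ~ Poisson(exp(b0 + b1 * x)) by gradient
    ascent on the Poisson log-likelihood (concave in b0, b1).
    A clearly nonzero b1 suggests an edge between the two nodes."""
    b0 = b1 = 0.0
    n = len(xs)
    for _ in range(steps):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            mu = math.exp(b0 + b1 * x)
            g0 += y - mu
            g1 += (y - mu) * x
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

def sample_poisson(lam, rng):
    """Knuth's multiplication method for Poisson sampling."""
    limit, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= limit:
            return k
        k += 1

# Two-node Poisson "network" with a positive edge weight of 0.4.
rng = random.Random(0)
x1 = [sample_poisson(1.0, rng) for _ in range(400)]
x2 = [sample_poisson(math.exp(-0.5 + 0.4 * x), rng) for x in x1]
b0, b1 = fit_poisson_regression(x1, x2)
print(round(b1, 2))  # should land near the true edge weight 0.4
```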
Karpievitch, Yuliya V; Almeida, Jonas S
2006-01-01
Background Matlab, a powerful and productive language that allows for rapid prototyping, modeling and simulation, is widely used in computational biology. Modeling and simulation of large biological systems often require more computational resources than are available on a single computer. Existing distributed computing environments like the Distributed Computing Toolbox, MatlabMPI, Matlab*G and others allow for the remote (and possibly parallel) execution of Matlab commands with varying support for features like an easy-to-use application programming interface, load-balanced utilization of resources, extensibility over the wide area network, and minimal system administration skill requirements. However, all of these environments require some level of access to participating machines to manually distribute the user-defined libraries that the remote call may invoke. Results mGrid augments the usual process distribution seen in other similar distributed systems by adding facilities for user code distribution. mGrid's client-side interface is an easy-to-use native Matlab toolbox that transparently executes user-defined code on remote machines (i.e. the user is unaware that the code is executing somewhere else). Run-time variables are automatically packed and distributed with the user-defined code and automated load-balancing of remote resources enables smooth concurrent execution. mGrid is an open source environment. Apart from the programming language itself, all other components are also open source, freely available tools: light-weight PHP scripts and the Apache web server. Conclusion Transparent, load-balanced distribution of user-defined Matlab toolboxes and rapid prototyping of many simple parallel applications can now be done with a single easy-to-use Matlab command. Because mGrid utilizes only Matlab, light-weight PHP scripts and the Apache web server, installation and configuration are very simple. Moreover, the web-based infrastructure of mGrid allows for it
New high burnup fuel models for NRC's licensing audit code, FRAPCON
Lanning, D.D.; Beyer, C.E.; Painter, C.L.
1996-03-01
Fuel behavior models have recently been updated within the U.S. Nuclear Regulatory Commission steady-state FRAPCON code used for auditing of fuel vendor/utility codes and analyses. These modeling updates have concentrated on providing a best estimate prediction of steady-state fuel behavior up to the maximum burnup levels of current data (60 to 65 GWd/MTU rod-average). A decade has passed since these models were last updated. Currently, some U.S. utilities and fuel vendors are requesting approval for rod-average burnups greater than 60 GWd/MTU; however, until these recent updates the NRC did not have valid fuel performance models at these higher burnup levels. Pacific Northwest Laboratory (PNL) has reviewed 15 separate effects models within the FRAPCON fuel performance code (References 1 and 2) and identified nine models that needed updating for improved prediction of fuel behavior at high burnup levels. The six separate effects models not updated were the cladding thermal properties, cladding thermal expansion, cladding creepdown, fuel specific heat, fuel thermal expansion and open gap conductance. Comparison of these models to the currently available data indicates that these models still adequately predict the data within data uncertainties. The nine models identified as needing improvement for predicting high-burnup behavior are fission gas release (FGR), fuel thermal conductivity (accounting for both high burnup effects and burnable poison additions), fuel swelling, fuel relocation, radial power distribution, fuel-cladding contact gap conductance, cladding corrosion, cladding mechanical properties and cladding axial growth. Each of the updated models will be described in the following sections and the model predictions will be compared to currently available high burnup data.
A Simple Model of Optimal Population Coding for Sensory Systems
Doi, Eizaburo; Lewicki, Michael S.
2014-01-01
A fundamental task of a sensory system is to infer information about the environment. It has long been suggested that an important goal of the first stage of this process is to encode the raw sensory signal efficiently by reducing its redundancy in the neural representation. Some redundancy, however, would be expected because it can provide robustness to noise inherent in the system. Encoding the raw sensory signal itself is also problematic, because it contains distortion and noise. The optimal solution would be constrained further by limited biological resources. Here, we analyze a simple theoretical model that incorporates these key aspects of sensory coding, and apply it to conditions in the retina. The model specifies the optimal way to incorporate redundancy in a population of noisy neurons, while also optimally compensating for sensory distortion and noise. Importantly, it allows an arbitrary input-to-output cell ratio between sensory units (photoreceptors) and encoding units (retinal ganglion cells), providing predictions of retinal codes at different eccentricities. Compared to earlier models based on redundancy reduction, the proposed model conveys more information about the original signal. Interestingly, redundancy reduction can be near-optimal when the number of encoding units is limited, such as in the peripheral retina. We show that there exist multiple, equally-optimal solutions whose receptive field structure and organization vary significantly. Among these, the one which maximizes the spatial locality of the computation, but not the sparsity of either synaptic weights or neural responses, is consistent with known basic properties of retinal receptive fields. The model further predicts that receptive field structure changes less with light adaptation at higher input-to-output cell ratios, such as in the periphery. PMID:25121492
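The information-theoretic trade-off the abstract describes can be illustrated with a toy linear-Gaussian population code (an assumed simplification for illustration, not the paper's retinal model): the mutual information between a Gaussian signal and its noisy linear encoding is 0.5 log det(I + W C W^T / sigma^2).

```python
import numpy as np

def gaussian_info(W, C_x, noise_var):
    """Mutual information (nats) between a Gaussian signal x ~ N(0, C_x)
    and its noisy linear population code r = W x + n, n ~ N(0, noise_var*I):
    I = 0.5 * log det(I + W C_x W^T / noise_var)."""
    k = W.shape[0]
    sign, logdet = np.linalg.slogdet(np.eye(k) + (W @ C_x @ W.T) / noise_var)
    return 0.5 * logdet

# Two correlated sensory inputs encoded by two noisy units.
C_x = np.array([[1.0, 0.9],
                [0.9, 1.0]])
print(gaussian_info(np.eye(2), C_x, noise_var=0.1))
```

Comparing different encoding matrices W under the same noise budget mimics, in miniature, comparing redundancy-reducing codes with codes that retain some redundancy.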
Kinetic models of gene expression including non-coding RNAs
NASA Astrophysics Data System (ADS)
Zhdanov, Vladimir P.
2011-03-01
In cells, genes are transcribed into mRNAs, and the latter are translated into proteins. Due to the feedbacks between these processes, the kinetics of gene expression may be complex even in the simplest genetic networks. The corresponding models have already been reviewed in the literature. A new avenue in this field is related to the recognition that the conventional scenario of gene expression is fully applicable only to prokaryotes whose genomes consist of tightly packed protein-coding sequences. In eukaryotic cells, in contrast, such sequences are relatively rare, and the rest of the genome includes numerous transcript units representing non-coding RNAs (ncRNAs). During the past decade, it has become clear that such RNAs play a crucial role in gene expression and accordingly influence a multitude of cellular processes both in the normal state and during diseases. The numerous biological functions of ncRNAs are based primarily on their abilities to silence genes via pairing with a target mRNA and subsequently preventing its translation or facilitating degradation of the mRNA-ncRNA complex. Many other abilities of ncRNAs have been discovered as well. Our review is focused on the available kinetic models describing the mRNA, ncRNA and protein interplay. In particular, we systematically present the simplest models without kinetic feedbacks, models containing feedbacks and predicting bistability and oscillations in simple genetic networks, and models describing the effect of ncRNAs on complex genetic networks. Mathematically, the presentation is based primarily on temporal mean-field kinetic equations. The stochastic and spatio-temporal effects are also briefly discussed.
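The mRNA-ncRNA-protein interplay described above is often modeled with temporal mean-field kinetic equations; a minimal sketch (rate constants are illustrative, not taken from the review) in which the ncRNA silences the mRNA by pairing and co-degradation:

```python
def simulate(k_m=1.0, k_s=0.8, k_p=2.0, d_m=0.1, d_s=0.1, d_p=0.05,
             k_pair=1.0, t_end=200.0, dt=0.01):
    """Mean-field kinetics of mRNA (m), ncRNA (s) and protein (p).
    The ncRNA pairs with the mRNA at rate k_pair*m*s and the duplex is
    degraded, so translation (k_p*m) is silenced when ncRNA is abundant."""
    m = s = p = 0.0
    for _ in range(int(t_end / dt)):     # forward-Euler integration
        dm = k_m - d_m * m - k_pair * m * s
        ds = k_s - d_s * s - k_pair * m * s
        dp = k_p * m - d_p * p
        m, s, p = m + dm * dt, s + ds * dt, p + dp * dt
    return m, s, p

m, s, p = simulate()
print(m, s, p)
```

With k_m > k_s the mRNA outcompetes the titrating ncRNA and protein is produced; swapping the transcription rates suppresses it, the threshold-like behavior characteristic of these models.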
Stimulus-dependent Maximum Entropy Models of Neural Population Codes
Segev, Ronen; Schneidman, Elad
2013-01-01
Neural populations encode information about their stimulus in a collective fashion, by joint activity patterns of spiking and silence. A full account of this mapping from stimulus to neural activity is given by the conditional probability distribution over neural codewords given the sensory input. For large populations, direct sampling of these distributions is impossible, and so we must rely on constructing appropriate models. We show here that in a population of 100 retinal ganglion cells in the salamander retina responding to temporal white-noise stimuli, dependencies between cells play an important encoding role. We introduce the stimulus-dependent maximum entropy (SDME) model—a minimal extension of the canonical linear-nonlinear model of a single neuron, to a pairwise-coupled neural population. We find that the SDME model gives a more accurate account of single cell responses and in particular significantly outperforms uncoupled models in reproducing the distributions of population codewords emitted in response to a stimulus. We show how the SDME model, in conjunction with static maximum entropy models of population vocabulary, can be used to estimate information-theoretic quantities like average surprise and information transmission in a neural population. PMID:23516339
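A toy stand-in for the pairwise structure of such models (not the authors' fitted SDME model, whose fields are stimulus-dependent): the codeword distribution of a small pairwise maximum entropy population, computed by exact enumeration:

```python
import itertools, math

def pairwise_maxent(h, J):
    """Exact codeword distribution of a pairwise maximum entropy model:
    P(s) proportional to exp(sum_i h_i*s_i + sum_{i<j} J_ij*s_i*s_j),
    s_i in {0, 1}. Exhaustive enumeration, feasible only for small n."""
    n = len(h)
    weights = {}
    for s in itertools.product((0, 1), repeat=n):
        e = sum(h[i] * s[i] for i in range(n))
        e += sum(J[i][j] * s[i] * s[j]
                 for i in range(n) for j in range(i + 1, n))
        weights[s] = math.exp(e)
    Z = sum(weights.values())   # partition function
    return {s: w / Z for s, w in weights.items()}

# Three units biased toward silence, with positive pairwise couplings.
P = pairwise_maxent(h=[-1.0, -1.0, -1.0],
                    J=[[0.0, 0.5, 0.0],
                       [0.0, 0.0, 0.5],
                       [0.0, 0.0, 0.0]])
print(max(P, key=P.get))   # most probable codeword
```

In the SDME model the fields h_i become functions of the stimulus via each cell's linear-nonlinear filter, while the couplings J capture the dependencies between cells that the abstract highlights.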
Physicochemical analog for modeling superimposed and coded memories
NASA Astrophysics Data System (ADS)
Ensanian, Minas
1992-07-01
The mammalian brain is distinguished by a life-time of memories being stored within the same general region of physicochemical space, and having two extraordinary features. First, memories to varying degrees are superimposed, as well as coded. Second, instantaneous recall of past events can often be triggered by relatively simple, and seemingly unrelated, sensory cues. For the purposes of attempting to mathematically model such complex behavior, and for gaining additional insights, it would be highly advantageous to be able to simulate or mimic similar behavior in a nonbiological entity where some analogical parameters of interest can reasonably be controlled. It has recently been discovered that in nonlinear accumulative metal fatigue, memories (related to mechanical deformation) can be superimposed and coded in the crystal lattice, and that memory, that is, the total number of stress cycles, can be recalled (determined) by scanning not the surfaces but the 'edges' of the objects. The new scanning technique known as electrotopography (ETG) now makes the state space modeling of metallic networks possible. The author provides an overview of the new field and outlines the areas that are of immediate interest to the science of artificial neural networks.
Barths, H.; Felsch, C.; Peters, N.
2008-11-15
The objective of this work is the development of a consistent mixing model for the two-way-coupling of a CFD code and a multi-zone code based on multiple zero-dimensional reactors. The two-way-coupling allows for a computationally efficient modeling of HCCI combustion. The physical domain in the CFD code is subdivided into multiple zones based on three phase variables (fuel mixture fraction, dilution, and total enthalpy). Those phase variables are sufficient for the description of the thermodynamic state of each zone, assuming that each zone is at the same pressure. Each zone in the CFD code is represented by a corresponding zone in the zero-dimensional code. The zero-dimensional code solves the chemistry for each zone, and the heat release is fed back into the CFD code. The difficulty with this kind of methodology lies in keeping the thermodynamic state of each zone consistent between the CFD code and the zero-dimensional code after the zones in the multi-zone code have been initialized. The thermodynamic state of each zone (and thereby the phase variables) will change in time due to mixing and source terms (e.g., vaporization of fuel, wall heat transfer). The focus of this work lies on a consistent description of the mixing between the zones in phase space in the zero-dimensional code, based on the solution of the CFD code. Two mixing models with different degrees of accuracy, complexity, and numerical effort are described. The most elaborate mixing model (and an appropriate treatment of the source terms) keeps the thermodynamic state of the zones in the CFD code and the zero-dimensional code identical. The models are applied to a test case of HCCI combustion in an engine. (author)
New trends in species distribution modelling
Zimmermann, Niklaus E.; Edwards, Thomas C., Jr.; Graham, Catherine H.; Pearman, Peter B.; Svenning, Jens-Christian
2010-01-01
Species distribution modelling has its origin in the late 1970s when computing capacity was limited. Early work in the field concentrated mostly on the development of methods to model effectively the shape of a species' response to environmental gradients (Austin 1987, Austin et al. 1990). The methodology and its framework were summarized in reviews 10–15 yr ago (Franklin 1995, Guisan and Zimmermann 2000), and these syntheses are still widely used as reference landmarks in the current distribution modelling literature. However, enormous advancements have occurred over the last decade, with hundreds – if not thousands – of publications on species distribution model (SDM) methodologies and their application to a broad set of conservation, ecological and evolutionary questions. With this special issue, originating from the third of a set of specialized SDM workshops (2008 Riederalp) entitled 'The Utility of Species Distribution Models as Tools for Conservation Ecology', we reflect on current trends and the progress achieved over the last decade.
Douglas Porter; Steve Hayes; Various
2014-06-01
The Advanced Fuels Campaign (AFC) metallic fuels currently being tested have higher zirconium and plutonium concentrations than those tested in the past in EBR reactors. Current metal fuel performance codes have limitations and deficiencies in predicting AFC fuel performance, particularly in the modeling of constituent distribution. No fully validated code exists due to sparse data and unknown modeling parameters. Our primary objective is to develop an initial analysis tool by incorporating state-of-the-art knowledge, constitutive models and properties of AFC metal fuels into the MOOSE/BISON (1) framework in order to analyze AFC metallic fuel tests.
Caveats for correlative species distribution modeling
Jarnevich, Catherine S.; Stohlgren, Thomas J.; Kumar, Sunil; Morisette, Jeffrey T.; Holcombe, Tracy R.
2015-01-01
Correlative species distribution models are becoming commonplace in the scientific literature and public outreach products, displaying locations, abundance, or suitable environmental conditions for harmful invasive species, threatened and endangered species, or species of special concern. Accurate species distribution models are useful for efficient and adaptive management and conservation, research, and ecological forecasting. Yet, these models are often presented without fully examining or explaining the caveats for their proper use and interpretation and are often implemented without understanding the limitations and assumptions of the model being used. We describe common pitfalls, assumptions, and caveats of correlative species distribution models to help novice users and end users better interpret these models. Four primary caveats corresponding to different phases of the modeling process, each with supporting documentation and examples, include: (1) all sampling data are incomplete and potentially biased; (2) predictor variables must capture distribution constraints; (3) no single model works best for all species, in all areas, at all spatial scales, and over time; and (4) the results of species distribution models should be treated like a hypothesis to be tested and validated with additional sampling and modeling in an iterative process.
A new computer code for discrete fracture network modelling
NASA Astrophysics Data System (ADS)
Xu, Chaoshui; Dowd, Peter
2010-03-01
The authors describe a comprehensive software package for two- and three-dimensional stochastic rock fracture simulation using marked point processes. Fracture locations can be modelled by a Poisson, a non-homogeneous, a cluster or a Cox point process; fracture geometries and properties are modelled by their respective probability distributions. Virtual sampling tools such as plane, window and scanline sampling are included in the software together with a comprehensive set of statistical tools including histogram analysis, probability plots, rose diagrams and hemispherical projections. The paper describes in detail the theoretical basis of the implementation and provides a case study in rock fracture modelling to demonstrate the application of the software.
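A minimal 2-D sketch of the marked-point-process idea (the mark distributions below are illustrative choices, not those of the authors' package): fracture centres from a homogeneous Poisson process, with orientation and trace-length marks attached to each point:

```python
import numpy as np

def simulate_dfn(intensity, region=(10.0, 10.0), mean_len=1.0, seed=0):
    """2-D discrete fracture network: the number of fractures is Poisson,
    centres are uniform over the region (a homogeneous Poisson point
    process), and each fracture carries orientation and trace-length
    marks drawn from their own probability distributions."""
    rng = np.random.default_rng(seed)
    n = rng.poisson(intensity * region[0] * region[1])
    cx = rng.uniform(0.0, region[0], n)
    cy = rng.uniform(0.0, region[1], n)
    theta = rng.uniform(0.0, np.pi, n)        # orientation marks
    length = rng.exponential(mean_len, n)     # trace-length marks
    dx = 0.5 * length * np.cos(theta)
    dy = 0.5 * length * np.sin(theta)
    # one row per fracture: (x1, y1, x2, y2) trace endpoints
    return np.column_stack([cx - dx, cy - dy, cx + dx, cy + dy])

segs = simulate_dfn(intensity=0.5)
print(len(segs), "fractures simulated")
```

The non-homogeneous, cluster and Cox processes mentioned in the abstract replace the constant intensity with a spatially varying or random one; the virtual scanline and window sampling tools then operate on the generated traces.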
Secondary neutron source modelling using MCNPX and ALEPH codes
NASA Astrophysics Data System (ADS)
Trakas, Christos; Kerkar, Nordine
2014-06-01
Monitoring the subcritical state and divergence of reactors requires the presence of neutron sources. It is mainly secondary neutrons from these sources that feed the ex-core detectors (SRD, Source Range Detector), whose counting rate is correlated with the level of subcriticality of the reactor. In cycle 1, primary neutrons are provided by sources activated outside of the reactor (e.g. Cf252); part of this source can be used for the divergence of cycle 2 (not systematically). A second family of neutron sources is used for the second cycle: the spontaneous neutrons of actinides produced after irradiation of fuel in the first cycle. In most reactors, both families of sources are insufficient to efficiently monitor the divergence of the second and subsequent cycles. Secondary source clusters (SSC) fulfil this role. In the present case, the SSC [Sb, Be], after activation in the first cycle (production of unstable Sb124), produces in subsequent cycles a photo-neutron source by the gamma (from Sb124)-neutron (on Be9) reaction. This paper presents a model of the process between irradiation in cycle 1 and the cycle 2 results for the SRD counting rate at the beginning of cycle 2, using the MCNPX code and the depletion chain ALEPH-V1 (a coupling of the MCNPX and ORIGEN codes). The results of this simulation are compared with two experimental results from the PWR 1450 MWe-N4 reactors. A good agreement is observed between these results and the simulations. The subcriticality of the reactors is at about -15,000 pcm. Discrepancies in the SRD counting rate between calculations and measurements are on the order of 10%, lower than the combined uncertainty of the measurements and the code simulation. This comparison validates the AREVA methodology, which provides a best-estimate SRD counting rate for cycle 2 and subsequent cycles and allows optimizing the position of the SSC depending on the geographic location of the sources, the main parameter for optimal monitoring of subcritical states.
Inter-bit prediction based on maximum likelihood estimate for distributed video coding
NASA Astrophysics Data System (ADS)
Klepko, Robert; Wang, Demin; Huchet, Grégory
2010-01-01
Distributed Video Coding (DVC) is an emerging video coding paradigm for the systems that require low complexity encoders supported by high complexity decoders. A typical real world application for a DVC system is mobile phones with video capture hardware that have a limited encoding capability supported by base-stations with a high decoding capability. Generally speaking, a DVC system operates by dividing a source image sequence into two streams, key frames and Wyner-Ziv (W) frames, with the key frames being used to represent the source plus an approximation to the W frames called S frames (where S stands for side information), while the W frames are used to correct the bit errors in the S frames. This paper presents an effective algorithm to reduce the bit errors in the side information of a DVC system. The algorithm is based on the maximum likelihood estimation to help predict future bits to be decoded. The reduction in bit errors in turn reduces the number of parity bits needed for error correction. Thus, a higher coding efficiency is achieved since fewer parity bits need to be transmitted from the encoder to the decoder. The algorithm is called inter-bit prediction because it predicts the bit-plane to be decoded from previously decoded bit-planes, one bit-plane at a time, starting from the most significant bit-plane. Results from experiments using real-world image sequences show that the inter-bit prediction algorithm does indeed reduce the bit rate by up to 13% for our test sequences. This bit rate reduction corresponds to a PSNR gain of about 1.6 dB for the W frames.
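A simplified sketch of the inter-bit prediction idea (the shapes and the way the context statistics are gathered are assumptions for illustration, not the paper's exact algorithm): each bit of the next bit-plane is predicted by maximum likelihood given the co-located bits of previously decoded planes, with the likelihoods estimated from the side information, where all planes are available:

```python
import numpy as np
from collections import Counter

def predict_plane(decoded_planes, side_planes, side_current):
    """Maximum-likelihood prediction of the next (less significant) bit-plane.
    decoded_planes: (k, n_pixels) planes already decoded for the W frame.
    side_planes:    (k, n_pixels) co-located planes of the side information.
    side_current:   (n_pixels,) the side information's next plane, used to
                    estimate P(bit | context of more significant bits)."""
    stats = {}
    for ctx, bit in zip(map(tuple, side_planes.T), side_current):
        stats.setdefault(ctx, Counter())[int(bit)] += 1
    pred = np.empty(decoded_planes.shape[1], dtype=np.uint8)
    for i, ctx in enumerate(map(tuple, decoded_planes.T)):
        c = stats.get(ctx)
        pred[i] = c.most_common(1)[0][0] if c else 0  # ML bit; 0 if unseen
    return pred

side_planes = np.array([[1, 1, 0, 0],    # most significant plane
                        [1, 0, 1, 0]])   # next plane down
side_current = np.array([1, 1, 0, 0])
decoded = np.array([[1, 0],
                    [1, 1]])
print(predict_plane(decoded, side_planes, side_current))
```

A better prediction means fewer bit errors relative to the true plane, which is what reduces the parity bits the decoder must request.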
Transversity distribution functions in the valon model
NASA Astrophysics Data System (ADS)
Alizadeh Yazdi, Z.; Taghavi-Shahri, F.; Arash, F.; Zomorrodian, M. E.
2014-05-01
We use the valon model to calculate the transversity distribution functions inside the nucleon. Transversity distributions indicate the probability to find partons with spin aligned (antialigned) to the transversely polarized nucleon. The results are in good agreement with all available experimental data and also global fits.
Indiana Distributive Education Competency Based Model.
ERIC Educational Resources Information Center
Davis, Rod; And Others
This Indiana distributive education competency-based curriculum model is designed to help teachers and local administrators plan and conduct a comprehensive marketing and distributive education program. It is divided into three levels--one level for each year of a three-year program. The competencies common to a variety of marketing and…
Ghodoosian, N.
1984-05-01
An analytical model leading to the pressure distribution on the cross section of a Darrieus Rotor Blade (airfoil) has been constructed. The model is based on the inviscid flow theory and the contribution of the nonsteady wake vortices was neglected. The analytical model was translated into a computer code in order to study a variety of boundary conditions encountered by the rotating blades of the Darrieus Rotor. Results indicate that, for a pitching airfoil, lift can be adequately approximated by the Kutta-Joukowski forces, despite notable deviations in the pressure distribution on the airfoil. These deviations are most significant at the upwind half of the Darrieus Rotor where higher lift is accompanied by increased adverse pressure gradients. The effect of pitching on lift can be approximated by a linear shift in the angle of attack proportional to the blade angular velocity. Tabulation of the fluid velocity about the pitching-only NACA 0015 allowed the principle of superposition to be used to determine the fluid velocity about a translating and pitching airfoil.
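The reported approximation can be sketched in a few lines (the proportionality constant below is hypothetical, not a value from the report): thin-airfoil Kutta-Joukowski lift with the pitching effect folded in as a linear shift of the angle of attack proportional to the blade angular velocity:

```python
import math

def lift_coefficient(alpha, omega=0.0, k_pitch=0.05):
    """Thin-airfoil lift coefficient C_l = 2*pi*alpha_eff, with the
    pitching effect modeled as a linear angle-of-attack shift
    alpha_eff = alpha + k_pitch*omega. alpha in radians, omega in rad/s;
    k_pitch is an assumed proportionality constant."""
    return 2.0 * math.pi * (alpha + k_pitch * omega)

# Static blade vs. the same blade pitching at omega = 2 rad/s.
print(lift_coefficient(0.1), lift_coefficient(0.1, omega=2.0))
```

This reproduces the report's qualitative conclusion: pitching changes lift as if the angle of attack were shifted, even though the detailed pressure distribution deviates from the static case.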
A Bayesian observer model constrained by efficient coding can explain 'anti-Bayesian' percepts.
Wei, Xue-Xin; Stocker, Alan A
2015-10-01
Bayesian observer models provide a principled account of the fact that our perception of the world rarely matches physical reality. The standard explanation is that our percepts are biased toward our prior beliefs. However, reported psychophysical data suggest that this view may be simplistic. We propose a new model formulation based on efficient coding that is fully specified for any given natural stimulus distribution. The model makes two new and seemingly anti-Bayesian predictions. First, it predicts that perception is often biased away from an observer's prior beliefs. Second, it predicts that stimulus uncertainty differentially affects perceptual bias depending on whether the uncertainty is induced by internal or external noise. We found that both model predictions match reported perceptual biases in perceived visual orientation and spatial frequency, and were able to explain data that have not been explained before. The model is general and should prove applicable to other perceptual variables and tasks. PMID:26343249
Torus mapper: a code for dynamical models of galaxies
NASA Astrophysics Data System (ADS)
Binney, James; McMillan, Paul J.
2016-02-01
We present a freely downloadable software package for modelling the dynamics of galaxies, which we call the Torus Mapper (TM). The package is based around `torus mapping', which is a non-perturbative technique for creating orbital tori for specified values of the action integrals. Given an orbital torus and a star's position at a reference time, one can compute its position at any other time, no matter how remote. One can also compute the velocities with which the star will pass through any given point and the contribution it will make to the time-averaged density there. A system of angle-action coordinates for the given potential can be created by foliating phase space with orbital tori. Such a foliation is facilitated by the ability of TM to create tori by interpolating on a grid of tori. We summarize the advantages of using TM rather than a standard time-stepper to create orbits, and give segments of code that illustrate applications of TM in several contexts, including setting up initial conditions for an N-body simulation. We examine the precision of the orbital tori created by TM and the behaviour of the code when orbits become trapped by a resonance.
ERIC Educational Resources Information Center
American Inst. of Architects, Washington, DC.
A model building code for fallout shelters was drawn up for inclusion in four national model building codes. Discussion is given of fallout shelters with respect to (1) nuclear radiation, (2) national policies, and (3) community planning. Fallout shelter requirements for shielding, space, ventilation, construction, and services such as electrical…
Modeling neural activity with cumulative damage distributions.
Leiva, Víctor; Tejo, Mauricio; Guiraud, Pierre; Schmachtenberg, Oliver; Orio, Patricio; Marmolejo-Ramos, Fernando
2015-10-01
Neurons transmit information as action potentials or spikes. Due to the inherent randomness of the inter-spike intervals (ISIs), probabilistic models are often used for their description. Cumulative damage (CD) distributions are a family of probabilistic models that has been widely considered for describing time-related cumulative processes. This family allows us to consider certain deterministic principles for modeling ISIs from a probabilistic viewpoint and to link its parameters to values with biological interpretation. The CD family includes the Birnbaum-Saunders and inverse Gaussian distributions, which possess distinctive properties and theoretical arguments useful for ISI description. We expand the use of CD distributions to the modeling of neural spiking behavior, mainly by testing the suitability of the Birnbaum-Saunders distribution, which has not been studied in the setting of neural activity. We validate this expansion with original experimental and simulated electrophysiological data. PMID:25998210
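A small sketch of simulating ISIs from a Birnbaum-Saunders (fatigue-life) distribution via its normal representation (the parameter values are illustrative, not fitted to the authors' electrophysiological data):

```python
import numpy as np

def birnbaum_saunders_isi(alpha, beta, n, seed=0):
    """Sample n inter-spike intervals from a Birnbaum-Saunders
    distribution using its normal representation:
    T = (beta/4) * (alpha*Z + sqrt(alpha^2*Z^2 + 4))^2, Z ~ N(0, 1).
    alpha is the shape parameter, beta the scale (median)."""
    z = np.random.default_rng(seed).standard_normal(n)
    return beta / 4.0 * (alpha * z + np.sqrt(alpha**2 * z**2 + 4.0)) ** 2

isi = birnbaum_saunders_isi(alpha=0.5, beta=0.02, n=100_000)
print(isi.mean())   # theoretical mean is beta * (1 + alpha**2 / 2)
```

The same representation underlies the distribution's cumulative-damage interpretation: the interval ends when an accumulated Gaussian-increment process first crosses a threshold.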
Liang, Wen-Ye; Wang, Shuang; Li, Hong-Wei; Yin, Zhen-Qiang; Chen, Wei; Yao, Yao; Huang, Jing-Zheng; Guo, Guang-Can; Han, Zheng-Fu
2014-01-01
We have demonstrated a proof-of-principle experiment of reference-frame-independent phase coding quantum key distribution (RFI-QKD) over an 80-km optical fiber. After considering the finite-key bound, we still achieve a distance of 50 km. In this scenario, the phases of the basis states are related by a slowly time-varying transformation. Furthermore, we developed and realized a new decoy state method for RFI-QKD systems with weak coherent sources to counteract the photon-number-splitting attack. With the help of a reference-frame-independent protocol and a Michelson interferometer with Faraday rotator mirrors, our system is rendered immune to the slow phase changes of the interferometer and the polarization disturbances of the channel, making the procedure very robust. PMID:24402550
Photoplus: auxiliary information for printed images based on distributed source coding
NASA Astrophysics Data System (ADS)
Samadani, Ramin; Mukherjee, Debargha
2008-01-01
A printed photograph is difficult to reuse because the digital information that generated the print may no longer be available. This paper describes a mechanism for approximating the original digital image by combining a scan of the printed photograph with small amounts of digital auxiliary information kept together with the print. The auxiliary information consists of a small amount of digital data to enable accurate registration and color-reproduction, followed by a larger amount of digital data to recover residual errors and lost frequencies by distributed Wyner-Ziv coding techniques. Approximating the original digital image enables many uses, including making good quality reprints from the original print, even when they are faded many years later. In essence, the print itself becomes the currency for archiving and repurposing digital images, without requiring computer infrastructure.
CODE's new solar radiation pressure model for GNSS orbit determination
NASA Astrophysics Data System (ADS)
Arnold, D.; Meindl, M.; Beutler, G.; Dach, R.; Schaer, S.; Lutz, S.; Prange, L.; Sośnica, K.; Mervart, L.; Jäggi, A.
2015-08-01
The Empirical CODE Orbit Model (ECOM) of the Center for Orbit Determination in Europe (CODE), which was developed in the early 1990s, is widely used in the International GNSS Service (IGS) community. For a rather long time, spurious spectral lines have been known to exist in geophysical parameters, in particular in the Earth Rotation Parameters (ERPs) and in the estimated geocenter coordinates, which could recently be attributed to the ECOM. These effects grew gradually with the increasing influence of the GLONASS system in recent years in the CODE analysis, which has been based on a rigorous combination of GPS and GLONASS since May 2003. In a first step we show that the problems associated with the ECOM are to the largest extent caused by GLONASS, which reached full deployment by the end of 2011. GPS-only, GLONASS-only, and combined GPS/GLONASS solutions using the observations in the years 2009-2011 of a global network of 92 combined GPS/GLONASS receivers were analyzed for this purpose. In a second step we review direct solar radiation pressure (SRP) models for GNSS satellites. We demonstrate that only even-order short-period harmonic perturbations occur for GPS and GLONASS satellites acting along the Sun-satellite direction, and only odd-order perturbations acting along the direction perpendicular to both the Sun-satellite vector and the spacecraft's solar panel axis. Based on this insight we assess in the third step the performance of four candidate orbit models for the future ECOM. The geocenter coordinates, the ERP differences w.r.t. the IERS 08 C04 series of ERPs, the misclosures for the midnight epochs of the daily orbital arcs, and scale parameters of Helmert transformations for station coordinates serve as quality criteria. The old and updated ECOM are validated in addition with satellite laser ranging (SLR) observations and by comparing the orbits to those of the IGS and other analysis centers. Based on all tests, we present a new extended ECOM which
Documentation of the GLAS fourth order general circulation model. Volume 2: Scalar code
NASA Technical Reports Server (NTRS)
Kalnay, E.; Balgovind, R.; Chao, W.; Edelmann, D.; Pfaendtner, J.; Takacs, L.; Takano, K.
1983-01-01
Volume 2 of a 3-volume technical memorandum contains a detailed documentation of the GLAS fourth order general circulation model. Volume 2 contains the CYBER 205 scalar and vector codes of the model, a list of variables, and cross references. A variable name dictionary for the scalar code and code listings are outlined.
Incorporating uncertainty in predictive species distribution modelling
Beale, Colin M.; Lennon, Jack J.
2012-01-01
Motivated by the need to solve ecological problems (climate change, habitat fragmentation and biological invasions), there has been increasing interest in species distribution models (SDMs). Predictions from these models inform conservation policy, invasive species management and disease-control measures. However, predictions are subject to uncertainty, the degree and source of which is often unrecognized. Here, we review the SDM literature in the context of uncertainty, focusing on three main classes of SDM: niche-based models, demographic models and process-based models. We identify sources of uncertainty for each class and discuss how uncertainty can be minimized or included in the modelling process to give realistic measures of confidence around predictions. Because this has typically not been performed, we conclude that uncertainty in SDMs has often been underestimated and a false precision assigned to predictions of geographical distribution. We identify areas where development of new statistical tools will improve predictions from distribution models, notably the development of hierarchical models that link different types of distribution model and their attendant uncertainties across spatial scales. Finally, we discuss the need to develop more defensible methods for assessing predictive performance, quantifying model goodness-of-fit and for assessing the significance of model covariates. PMID:22144387
Statistical Model Code System to Calculate Particle Spectra from HMS Precompound Nucleus Decay.
Blann, Marshall
2014-11-01
Version 05 The HMS-ALICE/ALICE codes address the question: what happens when photons, nucleons, or clusters/heavy ions of a few hundred keV to several hundred MeV interact with nuclei? The ALICE codes (as they have evolved over 50 years) use several nuclear reaction models to answer this question, predicting the energies and angles of particles emitted (n, p, 2H, 3H, 3He, 4He, 6Li) in the reaction, and the residues, the spallation and fission products. The models used are principally Monte Carlo formulations of the Hybrid/Geometry-Dependent Hybrid precompound model, Weisskopf-Ewing evaporation, Bohr-Wheeler fission, and, recently, a Fermi-statistics break-up model (for light nuclei). The angular distribution calculation relies on the Chadwick-Oblozinsky linear-momentum-conservation model. Output gives residual product yields, and single- and double-differential cross sections for ejectiles in the lab and CM frames. An option allows exclusive 1-3 particle-out cross sections (ENDF format) for all combinations of n, p, and alpha channels. Product yields include estimates of isomer yields where isomers exist. Earlier versions included the ability to compute coincident particle-emission correlations, and much of this coding is still in place. Recoil-product double-differential cross sections are computed, but not presently written to output files. Code execution begins with an on-screen interrogation for input, with defaults available for many aspects. A menu of model options is available within the input interrogation screen. The input is saved to the hard drive. Subsequent runs may use this file, use the file with line-editor changes, or begin again with the on-line interrogation.
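The Weisskopf-Ewing evaporation stage lends itself to a compact Monte Carlo sketch. The following is a minimal illustration, not the ALICE implementation: the first-chance neutron spectrum is approximated as P(ε) ∝ ε·exp(−ε/T) with a constant inverse cross section, which is a Gamma(2, T) density, so the nuclear temperature T (in MeV) is the only parameter; the level-density and cross-section detail of the real code is deliberately dropped.

```python
import math
import random

def evaporation_pdf(eps, T):
    """Simplified Weisskopf-Ewing first-chance neutron spectrum,
    P(eps) = (eps / T**2) * exp(-eps / T), i.e. a Gamma(2, T) density.
    (Constant inverse cross section assumed; eps and T in MeV.)"""
    return eps / T**2 * math.exp(-eps / T)

def sample_evaporation(T, n, seed=42):
    """Monte Carlo sampling of the same spectrum: since
    eps * exp(-eps/T) is a Gamma(shape=2, scale=T) density, the
    standard-library gamma sampler can be used directly."""
    rng = random.Random(seed)
    return [rng.gammavariate(2.0, T) for _ in range(n)]
```

In this approximation the spectrum peaks at ε = T and has mean 2T, which is the usual rule of thumb for evaporation spectra.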
Bosse, Stefan
2015-01-01
Multi-agent systems (MAS) can be used for decentralized and self-organizing data processing in a distributed system, like a resource-constrained sensor network, enabling distributed information extraction, for example, based on pattern recognition and self-organization, by decomposing complex tasks in simpler cooperative agents. Reliable MAS-based data processing approaches can aid the material-integration of structural-monitoring applications, with agent processing platforms scaled to the microchip level. The agent behavior, based on a dynamic activity-transition graph (ATG) model, is implemented with program code storing the control and the data state of an agent, which is novel. The program code can be modified by the agent itself using code morphing techniques and is capable of migrating in the network between nodes. The program code is a self-contained unit (a container) and embeds the agent data, the initialization instructions and the ATG behavior implementation. The microchip agent processing platform used for the execution of the agent code is a standalone multi-core stack machine with a zero-operand instruction format, leading to a small-sized agent program code, low system complexity and high system performance. The agent processing is token-queue-based, similar to Petri-nets. The agent platform can be implemented in software, too, offering compatibility at the operational and code level, supporting agent processing in strong heterogeneous networks. In this work, the agent platform embedded in a large-scale distributed sensor network is simulated at the architectural level by using agent-based simulation techniques. PMID:25690550
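The activity-transition-graph idea can be sketched in a few lines: an agent is a bundle of data state plus named activities, each of which updates the state and selects the next activity. This toy (activity names and sensor values are hypothetical, and it ignores code morphing and migration entirely) only illustrates the control model, not the platform's program-code format.

```python
class ATGAgent:
    """Toy agent whose behaviour is an activity-transition graph (ATG):
    each activity updates the agent's data state and returns the name of
    the next activity. Activity names and thresholds are illustrative."""

    def __init__(self):
        self.data = {"samples": [], "alarm": False}
        self.activity = "sense"

    def sense(self):
        self.data["samples"].append(7)  # stand-in sensor reading
        return "check"

    def check(self):
        # transition condition: raise an alarm if any sample exceeds 5
        self.data["alarm"] = max(self.data["samples"]) > 5
        return "report" if self.data["alarm"] else "sense"

    def report(self):
        return "halt"

    def run(self, max_steps=10):
        """Execute the ATG until the terminal activity is reached."""
        for _ in range(max_steps):
            if self.activity == "halt":
                break
            self.activity = getattr(self, self.activity)()
        return self.activity, self.data
```

On the real platform the ATG, its data, and the transition logic are packed into one migratable code container; here the class plays that role only conceptually.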
A simple way to model nebulae with distributed ionizing stars
NASA Astrophysics Data System (ADS)
Jamet, L.; Morisset, C.
2008-04-01
Aims: This work is a follow-up of a recent article by Ercolano et al. showing that, in some cases, the spatial dispersion of the ionizing stars in a given nebula may significantly affect its emission spectrum. The authors found that the dispersion of the ionizing stars is accompanied by a decrease in the ionization parameter, which at least partly explains the variations in the nebular spectrum. However, they did not investigate how other effects associated with the dispersion of the stars may contribute to those variations. Furthermore, they made use of a unique and simplified set of stellar populations. The scope of the present article is to assess whether the variation in the ionization parameter is the dominant effect in the dependence of the nebular spectrum on the distribution of its ionizing stars. We examined this possibility for various regimes of metallicity and age. We also investigated a way to model the distribution of the ionizing sources so as to bypass expensive calculations. Methods: We wrote a code able to generate random stellar populations and to compute the emission spectra of their associated nebulae with the widespread photoionization code Cloudy. This code can process two kinds of spatial distributions of the stars: one where all the stars are concentrated at one point, and one where their separation is such that their Strömgren spheres do not overlap. Results: We found that, in most regimes of stellar population ages and gas metallicities, the dependence of the ionization parameter on the distribution of the stars is the dominant factor in the variation of the main nebular diagnostics with this distribution. We derived a method to mimic those effects with a single calculation that makes use of the common assumptions of a central source and a spherical nebula, in the case of constant-density objects. This represents a computation-time saving by a factor of at least several dozen in the case of H II regions ionized by massive clusters.
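The non-overlapping-spheres limit has a simple quantitative core. From ionization-recombination balance, R_S = (3Q/(4πn²α_B))^(1/3), so the total ionized volume scales linearly with the ionizing photon rate Q: splitting a budget Q over N separate stars leaves the summed Strömgren volume unchanged, while the ionization parameter at each sphere's edge drops by N^(1/3). A sketch with illustrative numbers (not taken from the paper):

```python
import math

ALPHA_B = 2.6e-13   # case-B recombination coefficient [cm^3/s] at ~10^4 K
C_CM = 2.998e10     # speed of light [cm/s]

def stromgren_radius(Q, n):
    """Radius [cm] where ionizations balance recombinations:
    Q = (4/3) * pi * R^3 * n^2 * alpha_B."""
    return (3.0 * Q / (4.0 * math.pi * n * n * ALPHA_B)) ** (1.0 / 3.0)

def edge_ionization_parameter(Q, n):
    """Dimensionless ionization parameter U = Q / (4 pi R^2 n c),
    evaluated at the Stromgren radius."""
    R = stromgren_radius(Q, n)
    return Q / (4.0 * math.pi * R * R * n * C_CM)
```

Since V ∝ Q, concentrating or dispersing the stars does not change how much gas is ionized; it is U, and hence the ionization-sensitive diagnostics, that responds to the dispersion.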
Marked renewal model of smoothed VBR MPEG coded traffic
NASA Astrophysics Data System (ADS)
Hui, Xiaoshi; Li, Jiaoyang; Liu, Xiande
1998-08-01
In this paper, a method of smoothing variable bit-rate (VBR) MPEG traffic is proposed. A buffer whose capacity exceeds the peak bandwidth of the group-of-pictures (GOP) sequence of an MPEG traffic stream, and whose output rate is controlled by the distribution of the GOP sequence, is connected to a source. The burstiness of the output stream from the buffer is decreased, and the stream's autocorrelation function exhibits a non-increasing and non-convex property. For a smoothed MPEG traffic stream, the GOP sequence is the basic target source traffic used for modeling. We applied a marked renewal process to model GOP-smoothed VBR MPEG traffic. A numerical study simulating a target VBR MPEG video source with a marked renewal model shows that not only can the model's bandwidth distribution accurately match that of the target source sequence, but its leading autocorrelation can also approximate the long-range dependence of VBR MPEG traffic as well as the short-range dependence. In addition, the model's parameters are very easy to estimate. We conclude that GOP-smoothed VBR MPEG video traffic can not only be transferred more efficiently but also be analyzed well with a marked renewal traffic model.
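The smoothing step itself is easy to illustrate: replace each bursty per-frame size within a GOP by the GOP mean, so the buffer drains at a rate set by the GOP-level distribution while total bits are preserved. A minimal sketch with made-up I/B/P frame sizes (not the paper's traces):

```python
def smooth_to_gop(frame_sizes, gop_len):
    """GOP-level smoothing: every frame slot in a GOP carries the GOP's
    mean size instead of the bursty per-frame size. Total bits in each
    GOP (and hence in the whole stream) are preserved."""
    out = []
    for i in range(0, len(frame_sizes), gop_len):
        gop = frame_sizes[i:i + gop_len]
        out.extend([sum(gop) / len(gop)] * len(gop))
    return out

def coefficient_of_variation(x):
    """Simple burstiness measure: standard deviation over mean."""
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x) / len(x)
    return var ** 0.5 / mean
```

After smoothing, only GOP-to-GOP variation remains, which is exactly the element the marked renewal process then models.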
Zhao, L.; Cluggish, B.; Kim, J. S.; Pardo, R.; Vondrasek, R.
2010-02-15
A Monte Carlo charge breeding code (MCBC) is being developed by FAR-TECH, Inc. to model the capture and charge breeding of a 1+ ion beam in an electron cyclotron resonance ion source (ECRIS) device. The ECRIS plasma is simulated using the generalized ECRIS model, which has two choices of boundary settings: a free boundary condition and the Bohm condition. The charge state distribution of the extracted beam ions is calculated by solving the steady-state ion continuity equations, where the profiles of the captured ions are used as source terms. MCBC simulations of the charge breeding of Rb+ showed good agreement with recent charge breeding experiments at Argonne National Laboratory (ANL). MCBC correctly predicted the peak of the highly charged ion-state output under the free boundary condition, and a similar charge state distribution width but a lower peak charge state under the Bohm condition. The comparisons between the simulation results and the ANL experimental measurements are presented and discussed.
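The steady-state continuity step can be illustrated with a stripped-down stepwise-ionization chain (recombination and spatial transport neglected; the ionization rates and confinement time below are made up, not MCBC's physics): injected 1+ ions either ionize to the next charge state or are extracted with a mean confinement time τ.

```python
def steady_state_csd(source, ion_rates, tau):
    """Steady-state densities n_q of a stepwise ionization chain with
    confinement time tau (the last entry of ion_rates should be 0, i.e.
    no further ionization from the highest state):
        0 = inflow_q - n_q * (I_q + 1/tau),   inflow_{q+1} = n_q * I_q
    where inflow into the first state is the captured 1+ source rate."""
    n, inflow = [], source
    for I in ion_rates:
        n_q = inflow / (I + 1.0 / tau)
        n.append(n_q)
        inflow = n_q * I
    return n

def extracted_flux(n, tau):
    """Per-charge-state extracted beam current (particles/s): n_q / tau."""
    return [v / tau for v in n]
```

Longer confinement shifts the peak of the extracted charge-state distribution upward, which is the qualitative lever the boundary condition pulls in the full simulation.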
Applying various algorithms for species distribution modelling.
Li, Xinhai; Wang, Yuan
2013-06-01
Species distribution models have been used extensively in many fields, including climate change biology, landscape ecology and conservation biology. In the past 3 decades, a number of new models have been proposed, yet researchers still find it difficult to select appropriate models for data and objectives. In this review, we aim to provide insight into the prevailing species distribution models for newcomers in the field of modelling. We compared 11 popular models, including regression models (the generalized linear model, the generalized additive model, the multivariate adaptive regression splines model and hierarchical modelling), classification models (mixture discriminant analysis, the generalized boosting model, and classification and regression tree analysis) and complex models (artificial neural network, random forest, genetic algorithm for rule set production and maximum entropy approaches). Our objectives are: (i) to compare the strengths and weaknesses of the models, their characteristics and identify suitable situations for their use (in terms of data type and species-environment relationships) and (ii) to provide guidelines for model application, including 3 steps: model selection, model formulation and parameter estimation. PMID:23731809
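The simplest member of the regression family above, a binomial GLM (logistic regression), can be sketched end to end on synthetic presence/absence data. This is a bare-bones gradient-descent fit with one made-up standardized "temperature" covariate, meant only to show the model-formulation and parameter-estimation steps, not a production SDM workflow:

```python
import math
import random

def fit_logistic(x, y, lr=0.5, epochs=2000):
    """Binomial GLM with a logit link, fitted by plain gradient descent:
    P(presence) = 1 / (1 + exp(-(w*x + b)))."""
    w = b = 0.0
    n = len(x)
    for _ in range(epochs):
        gw = gb = 0.0
        for xi, yi in zip(x, y):
            p = 1.0 / (1.0 + math.exp(-(w * xi + b)))
            gw += (p - yi) * xi
            gb += (p - yi)
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

def predict_presence(w, b, xi):
    """Threshold the fitted occurrence probability at 0.5."""
    return 1.0 / (1.0 + math.exp(-(w * xi + b))) >= 0.5
```

In practice one would use an established package and follow the review's three steps (model selection, formulation, parameter estimation); the sketch only makes the last step concrete.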
Modelling RF sources using 2-D PIC codes
Eppley, K.R.
1993-03-01
In recent years, many types of RF sources have been successfully modelled using 2-D PIC codes. Both cross field devices (magnetrons, cross field amplifiers, etc.) and pencil beam devices (klystrons, gyrotrons, TWTs, lasertrons, etc.) have been simulated. All these devices involve the interaction of an electron beam with an RF circuit. For many applications, the RF structure may be approximated by an equivalent circuit, which appears in the simulation as a boundary condition on the electric field ("port approximation"). The drive term for the circuit is calculated from the energy transfer between beam and field in the drift space. For some applications it may be necessary to model the actual geometry of the structure, although this is more expensive. One problem not entirely solved is how to accurately model in 2-D the coupling to an external waveguide. Frequently this is approximated by a radial transmission line, but this sometimes yields incorrect results. We also discuss issues in modelling the cathode and injecting the beam into the PIC simulation.
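The port approximation's equivalent circuit reduces to a few lines in phasor form: the structure seen by the beam is a parallel RLC resonator, and the gap voltage is the beam's RF current times the circuit impedance. A minimal sketch with illustrative R/Q and loaded-Q values (not a real cavity, and not the PIC boundary-condition machinery itself):

```python
def cavity_impedance(f, f0, r_over_q, q_loaded):
    """Parallel-RLC equivalent circuit seen by the beam near resonance:
    Z(f) = R / (1 + j*Q*(f/f0 - f0/f)), with shunt impedance R = (R/Q)*Q."""
    r_shunt = r_over_q * q_loaded
    detune = f / f0 - f0 / f
    return r_shunt / (1.0 + 1j * q_loaded * detune)

def gap_voltage(i_rf, f, f0, r_over_q, q_loaded):
    """Gap voltage induced by the beam's RF current component (phasor)."""
    return i_rf * cavity_impedance(f, f0, r_over_q, q_loaded)
```

In a PIC code the drive current on the right-hand side would come from the simulated beam-field energy transfer in the drift space rather than being prescribed.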
Modeling Vortex Generators in a Navier-Stokes Code
NASA Technical Reports Server (NTRS)
Dudek, Julianne C.
2011-01-01
A source-term model that simulates the effects of vortex generators was implemented into the Wind-US Navier-Stokes code. The source term added to the Navier-Stokes equations simulates the lift force that would result from a vane-type vortex generator in the flowfield. The implementation is user-friendly, requiring the user to specify only three quantities for each desired vortex generator: the range of grid points over which the force is to be applied and the planform area and angle of incidence of the physical vane. The model behavior was evaluated for subsonic flow in a rectangular duct with a single vane vortex generator, subsonic flow in an S-duct with 22 corotating vortex generators, and supersonic flow in a rectangular duct with a counter-rotating vortex-generator pair. The model was also used to successfully simulate microramps in supersonic flow by treating each microramp as a pair of vanes with opposite angles of incidence. The validation results indicate that the source-term vortex-generator model provides a useful tool for screening vortex-generator configurations and gives comparable results to solutions computed using gridded vanes.
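The essence of the source-term approach is (i) estimate the lift one vane would generate and (ii) deposit that force as a momentum source over the user-selected grid cells. The sketch below uses the thin-aerofoil estimate Cl ≈ 2πα as a stand-in for the lift model; it is not the actual Wind-US formulation, and the numbers in the test are illustrative.

```python
import math

def vane_lift(rho, u, planform_area, alpha_deg):
    """Lift a single vane would produce in the local flow, using the
    thin-aerofoil estimate Cl ≈ 2*pi*alpha (alpha in radians):
    L = 0.5 * rho * u^2 * S * Cl."""
    alpha = math.radians(alpha_deg)
    return 0.5 * rho * u * u * planform_area * 2.0 * math.pi * alpha

def distribute_force(total_force, cell_volumes):
    """Spread the lift over the selected grid cells as a momentum source,
    weighted by cell volume so the integrated force is preserved."""
    v_tot = sum(cell_volumes)
    return [total_force * v / v_tot for v in cell_volumes]
```

Conserving the integrated force under the distribution step is what lets the model stand in for a gridded vane without resolving its geometry.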
Thermohydraulic modeling of the nuclear thermal rocket: The KLAXON code
Hall, M.L.; Rider, W.J.; Cappiello, M.W.
1992-01-01
Nuclear thermal rockets (NTRs) have been proposed as a means of propulsion for the Space Exploration Initiative (SEI, the manned mission to Mars). The NTR derives its thrust from the expulsion of hot supersonic hydrogen gas. A large tank on the rocket stores hydrogen in liquid or slush form, which is pumped by a turbopump through a nuclear reactor to provide the necessary heat. The path that the hydrogen takes is most circuitous, making several passes through the reactor and the nozzle itself (to provide cooling), as well as two passes through the turbopump (to transfer momentum). The proposed fuel elements for the reactor have two different configurations: solid prismatic fuel and particle-bed fuel. There are different design concerns for the two types of fuel, but there are also many fluid flow aspects that they share. The KLAXON code was used to model a generic NTR design from the inlet of the reactor core to the exit from the nozzle.
Modelling Radiative Stellar Winds with the SIMECA Code
NASA Astrophysics Data System (ADS)
Stee, Ph.
Using the SIMECA code developed by Stee & Araùjo ([CITE]), we report theoretical HI visible and near-IR line profiles, i.e. Hα (6562 Å), Hβ (4861 Å) and Brγ (21 656 Å), and intensity maps for a large set of parameters representative of early to late Be spectral types. We have computed the size of the emitting region in the Brγ line and its nearby continuum, which both originate from a very extended region, i.e. at least 40 stellar radii, which is twice the size of the Hα emitting region. We predict the relative fluxes from the central star and the envelope contribution in the given lines and in the continuum for a wide range of parameters characterizing the disk models. Finally, we have also studied the effect of changing the spectral type on our results, and we obtain a clear correlation between the luminosity in Hα and in the infrared.
2011-01-01
Background Predicting the geographic distribution of widespread species through modeling is problematic for several reasons including high rates of omission errors. One potential source of error for modeling widespread species is that subspecies and/or races of species are frequently pooled for analyses, which may mask biologically relevant spatial variation within the distribution of a single widespread species. We contrast a presence-only maximum entropy model for the widely distributed oldfield mouse (Peromyscus polionotus) that includes all available presence locations for this species, with two composite maximum entropy models. The composite models either subdivided the total species distribution into four geographic quadrants or by fifteen subspecies to capture spatially relevant variation in P. polionotus distributions. Results Despite high Area Under the ROC Curve (AUC) values for all models, the composite species distribution model of P. polionotus generated from individual subspecies models represented the known distribution of the species much better than did the models produced by partitioning data into geographic quadrants or modeling the whole species as a single unit. Conclusions Because the AUC values failed to describe the differences in the predictability of the three modeling strategies, we suggest using omission curves in addition to AUC values to assess model performance. Dividing the data of a widespread species into biologically relevant partitions greatly increased the performance of our distribution model; therefore, this approach may prove to be quite practical and informative for a wide range of modeling applications. PMID:21929792
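The point about AUC masking omission differences is easy to demonstrate: AUC only measures how well the model ranks presences against background points, so a model can rank perfectly (AUC = 1) and still omit known presences at any fixed decision threshold. A self-contained sketch of both metrics:

```python
def auc(pos_scores, neg_scores):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    probability that a random presence outscores a random absence."""
    wins = ties = 0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(pos_scores) * len(neg_scores))

def omission_rate(pos_scores, threshold):
    """Fraction of known presences predicted absent at a given threshold."""
    return sum(1 for s in pos_scores if s < threshold) / len(pos_scores)
```

The illustrative scores in the test rank every presence above every absence (AUC = 1), yet a 0.5 threshold still omits a third of the presences, which is why the authors recommend reporting omission curves alongside AUC.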
Statistical model with a standard Γ distribution
NASA Astrophysics Data System (ADS)
Patriarca, Marco; Chakraborti, Anirban; Kaski, Kimmo
2004-07-01
We study a statistical model consisting of N basic units which interact with each other by exchanging a physical entity, according to a given microscopic random law, depending on a parameter λ . We focus on the equilibrium or stationary distribution of the entity exchanged and verify through numerical fitting of the simulation data that the final form of the equilibrium distribution is that of a standard Gamma distribution. The model can be interpreted as a simple closed economy in which economic agents trade money and a saving criterion is fixed by the saving propensity λ . Alternatively, from the nature of the equilibrium distribution, we show that the model can also be interpreted as a perfect gas at an effective temperature T(λ) , where particles exchange energy in a space with an effective dimension D(λ) .
Improving Distributed Diagnosis Through Structural Model Decomposition
NASA Technical Reports Server (NTRS)
Bregon, Anibal; Daigle, Matthew John; Roychoudhury, Indranil; Biswas, Gautam; Koutsoukos, Xenofon; Pulido, Belarmino
2011-01-01
Complex engineering systems require efficient fault diagnosis methodologies, but centralized approaches do not scale well, and this motivates the development of distributed solutions. This work presents an event-based approach for distributed diagnosis of abrupt parametric faults in continuous systems, by using the structural model decomposition capabilities provided by Possible Conflicts. We develop a distributed diagnosis algorithm that uses residuals computed by extending Possible Conflicts to build local event-based diagnosers based on global diagnosability analysis. The proposed approach is applied to a multitank system, and results demonstrate an improvement in the design of local diagnosers. Since local diagnosers use only a subset of the residuals, and use subsystem models to compute residuals (instead of the global system model), the local diagnosers are more efficient than previously developed distributed approaches.
NASA Astrophysics Data System (ADS)
Parodi, K.; Ferrari, A.; Sommerer, F.; Paganetti, H.
2007-07-01
Clinical investigations on post-irradiation PET/CT (positron emission tomography/computed tomography) imaging for in vivo verification of treatment delivery and, in particular, beam range in proton therapy are underway at Massachusetts General Hospital (MGH). Within this project, we have developed a Monte Carlo framework for CT-based calculation of dose and irradiation-induced positron emitter distributions. Initial proton beam information is provided by a separate Geant4 Monte Carlo simulation modelling the treatment head. Particle transport in the patient is performed in the CT voxel geometry using the FLUKA Monte Carlo code. The implementation uses a discrete number of different tissue types with composition and mean density deduced from the CT scan. Scaling factors are introduced to account for the continuous Hounsfield unit dependence of the mass density and of the relative stopping power ratio to water used by the treatment planning system (XiO (Computerized Medical Systems Inc.)). Resulting Monte Carlo dose distributions are generally found in good correspondence with calculations of the treatment planning program, except a few cases (e.g. in the presence of air/tissue interfaces). Whereas dose is computed using standard FLUKA utilities, positron emitter distributions are calculated by internally combining proton fluence with experimental and evaluated cross-sections yielding 11C, 15O, 14O, 13N, 38K and 30P. Simulated positron emitter distributions yield PET images in good agreement with measurements. In this paper, we describe in detail the specific implementation of the FLUKA calculation framework, which may be easily adapted to handle arbitrary phase spaces of proton beams delivered by other facilities or include more reaction channels based on additional cross-section data. Further, we demonstrate the effects of different acquisition time regimes (e.g., PET imaging during or after irradiation) on the intensity and spatial distribution of the irradiation
Modeling steady sea water intrusion with single-density groundwater codes.
Bakker, Mark; Schaars, Frans
2013-01-01
Steady interface flow in heterogeneous aquifer systems is simulated with single-density groundwater codes by using transformed values for the hydraulic conductivity and thickness of the aquifers and aquitards. For example, unconfined interface flow may be simulated with a transformed model by setting the base of the aquifer to sea level and by multiplying the hydraulic conductivity by 41 (for a sea water density of 1025 kg/m³). Similar transformations are derived for unconfined interface flow with a finite aquifer base and for confined multi-aquifer interface flow. The head and flow distribution are identical in the transformed and original model domains. The location of the interface is obtained through application of the Ghyben-Herzberg formula. The transformed problem may be solved with a single-density code that is able to simulate unconfined flow where the saturated thickness is a linear function of the head and, depending on the boundary conditions, the code needs to be able to simulate dry cells where the saturated thickness is zero. For multi-aquifer interface flow, an additional requirement is that the code must be able to handle vertical leakage in situations where flow in an aquifer is unconfined while there is also flow in the aquifer directly above it. Specific examples and limitations are discussed for the application of the approach with MODFLOW. Comparisons between exact interface flow solutions and MODFLOW solutions of the transformed model domain show good agreement. The presented approach is an efficient alternative to running transient sea water intrusion models until steady state is reached. PMID:22716037
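The two factors quoted in the abstract (40 for the interface depth, 41 for the conductivity multiplier) follow directly from the fresh and sea water densities via the Ghyben-Herzberg relation. A minimal sketch:

```python
RHO_FRESH = 1000.0  # kg/m^3
RHO_SEA = 1025.0    # kg/m^3

def interface_depth(head):
    """Ghyben-Herzberg: depth of the fresh/salt interface below sea
    level, z = rho_f / (rho_s - rho_f) * h = 40 * h for 1025 kg/m^3
    sea water, where h is the freshwater head above sea level."""
    return RHO_FRESH / (RHO_SEA - RHO_FRESH) * head

def transformed_conductivity(k):
    """Multiplier applied to the hydraulic conductivity when mimicking
    unconfined interface flow with a single-density code whose aquifer
    base is set at sea level: k' = k * rho_s / (rho_s - rho_f) = 41*k."""
    return RHO_SEA / (RHO_SEA - RHO_FRESH) * k
```

The factor of 41 appears because the true saturated freshwater thickness is h above sea level plus 40h below it, i.e. 41h, while the transformed single-density model sees only the thickness h; scaling k by 41 restores the correct transmissivity.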
TRAC (Transient Reactor Analysis Code) model of reactor vent paths
Pevey, R.E.; Reece, J.W.
1987-12-18
The Safety Methods group of the Scientific Computations Division (SCD) is currently calculating assembly power limits based on reactor response to a double-ended guillotine pipe break loss-of-coolant accident (LOCA). SCD has implemented a two-level approach in which the Transient Reactor Analysis Code (TRAC) is used to calculate the system pressure response to the LOCA, and these pressures serve as the boundary conditions for a detailed assembly calculation using FLOWTRAN. As part of the TRAC calculation, a detailed TRAC model of the reactor vent paths has been developed that involves the hardware in the top portion of the reactor tank through which air flows as the moderator tank drains following the LOCA initiation. The hardware included in this model comprises the top shield (with its many penetrations), the gas space above the top shield, the vacuum breakers, the U-tube, the helium blanket-gas system, and the gas ports. This detailed model is necessary for an accurate calculation of the tank pressures in the first few seconds of the LOCA because the initial tank depressurization is relieved through these vent paths. The tank pressures for about 5 seconds into the transient are sensitive to water flow from the gas space through the top shield, the associated expansion pressure drop of the blanket gas, and the clearing of the vacuum breakers and gas ports. This model was added to a previously developed TRAC model of the rest of the system, and the resulting full-system model was used to calculate the pressure response during the first few seconds of the LOCA. 8 refs., 8 figs.
Challenges and perspectives for species distribution modelling in the neotropics
Kamino, Luciana H. Y.; Stehmann, João Renato; Amaral, Silvana; De Marco, Paulo; Rangel, Thiago F.; de Siqueira, Marinez F.; De Giovanni, Renato; Hortal, Joaquín
2012-01-01
The workshop ‘Species distribution models: applications, challenges and perspectives’ held at Belo Horizonte (Brazil), 29–30 August 2011, aimed to review the state-of-the-art in species distribution modelling (SDM) in the neotropical realm. It brought together researchers in ecology, evolution, biogeography and conservation, with different backgrounds and research interests. The application of SDM in the megadiverse neotropics—where data on species occurrences are scarce—presents several challenges, involving acknowledging the limitations imposed by data quality, including surveys as an integral part of SDM studies, and designing the analyses in accordance with the question investigated. Specific solutions were discussed, and a code of good practice in SDM studies and related field surveys was drafted. PMID:22031720
A dynamic p53-mdm2 model with distributed delay
NASA Astrophysics Data System (ADS)
Horhat, Raluca; Horhat, Raul Florin
2014-12-01
Specific activator and repressor transcription factors, which bind to specific regulatory DNA sequences, play an important role in the control of gene activity. Interactions between genes coding for such transcription factors should explain the different stable, or sometimes oscillatory, gene activities characteristic of different tissues. In this paper, the dynamic p53-Mdm2 interaction model with distributed delays is investigated. Both weak and Dirac kernels are taken into consideration. For the Dirac case, the Hopf bifurcation is investigated. Some numerical examples are finally given to illustrate the theoretical results.
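For the weak kernel a·e^(−at), the distributed-delay term z(t) = ∫ a·e^(−a(t−s)) p(s) ds satisfies z' = a(p − z), so the delay model collapses to an ordinary ODE system (the "linear chain trick"). The sketch below applies this to a toy negative-feedback pair standing in for p53 (p) and Mdm2 (m); the rate constants are made up for illustration, not the paper's fitted model, and the integration is plain forward Euler.

```python
def simulate_weak_kernel(a=2.0, dt=0.001, t_end=50.0):
    """Toy p53 (p) / Mdm2 (m) negative feedback with a weak-kernel
    distributed delay, reduced to ODEs via the memory variable z:
        p' = 1 - m*p       (p53 production, repressed by Mdm2)
        m' = z - 0.5*m     (Mdm2 driven by the delayed p53 signal)
        z' = a*(p - z)     (linear chain trick for the weak kernel)
    Rates are illustrative, not fitted to biology."""
    p, m, z = 1.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        dp = 1.0 - m * p
        dm = z - 0.5 * m
        dz = a * (p - z)
        p += dt * dp
        m += dt * dm
        z += dt * dz
    return p, m, z
```

For these rates the equilibrium (p*, m*, z*) = (1/√2, √2, 1/√2) is stable (its characteristic polynomial satisfies the Routh-Hurwitz conditions), consistent with the general result that weak kernels are less prone to oscillation than the Dirac kernel, for which the Hopf bifurcation is analysed in the paper.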
NASA Technical Reports Server (NTRS)
Cole, Gary L.; Richard, Jacques C.
1991-01-01
An approach to simulating the internal flows of supersonic propulsion systems is presented. The approach is based on a fairly simple modification of the Large Perturbation Inlet (LAPIN) computer code. LAPIN uses a quasi-one dimensional, inviscid, unsteady formulation of the continuity, momentum, and energy equations. The equations are solved using a shock capturing, finite difference algorithm. The original code, developed for simulating supersonic inlets, includes engineering models of unstart/restart, bleed, bypass, and variable duct geometry, by means of source terms in the equations. The source terms also provide a mechanism for incorporating, with the inlet, propulsion system components such as compressor stages, combustors, and turbine stages. This requires each component to be distributed axially over a number of grid points. Because of the distributed nature of such components, this representation should be more accurate than a lumped parameter model. Components can be modeled by performance map(s), which in turn are used to compute the source terms. The general approach is described. Then, simulation of a compressor/fan stage is discussed to show the approach in detail.
Modeling Vortex Generators in the Wind-US Code
NASA Technical Reports Server (NTRS)
Dudek, Julianne C.
2010-01-01
A source term model which simulates the effects of vortex generators was implemented into the Wind-US Navier Stokes code. The source term added to the Navier-Stokes equations simulates the lift force which would result from a vane-type vortex generator in the flowfield. The implementation is user-friendly, requiring the user to specify only three quantities for each desired vortex generator: the range of grid points over which the force is to be applied and the planform area and angle of incidence of the physical vane. The model behavior was evaluated for subsonic flow in a rectangular duct with a single vane vortex generator, supersonic flow in a rectangular duct with a counterrotating vortex generator pair, and subsonic flow in an S-duct with 22 co-rotating vortex generators. The validation results indicate that the source term vortex generator model provides a useful tool for screening vortex generator configurations and gives comparable results to solutions computed using a gridded vane.
Radionuclide sorption modeling using the MINTEQA2 speciation code
Turner, D.R.; Griffin, T.; Dietrich, T.B.
1993-12-31
The MINTEQA2 database has been updated and expanded to include radionuclide data from the most recent release of the EQ3/6 database. Comparison of U(VI) speciation predicted using the old and new MINTEQA2 databases indicates several significant differences, including the introduction of neutral and anionic species at neutral to alkaline pH. In contrast, comparison of results calculated by EQ3 and MINTEQA2, both using Nuclear Energy Agency (NEA) uranium data, reveals only small differences that are likely due to differences in calculated activity coefficients. With the new database, MINTEQA2 was used to model U(VI)-goethite sorption data from the literature with the Triple-Layer Model (TLM). Values were independently fixed for all but one of the model parameters. The parameter optimization code FITEQL was then used to determine binding constants for mononuclear uranium complexes (UO{sub 2}(OH){sub n}{sup 2-n}). The surface complex MOH{sub 2}-UO{sub 2}(OH){sub 4}{sup -} produced a very good fit of the sorption data, which was not significantly improved by the use of two or more surface complexes.
Modeling of MHD edge containment in strip casting with ELEKTRA and CaPS-EM codes
Chang, F. C.
2000-01-12
This paper presents modeling studies of magnetohydrodynamics analysis in twin-roll casting. Argonne National Laboratory (ANL) and ISPAT Inland Inc. (Inland), formerly Inland Steel Co., have worked together to develop a three-dimensional (3-D) computer model that can predict eddy currents, fluid flows, and liquid metal containment of an electromagnetic (EM) edge containment device. The model was verified by comparing predictions with experimental results of liquid metal containment and fluid flow in EM edge dams (EMDs) that were designed at Inland for twin-roll casting. This mathematical model can significantly shorten casting research on the use of EM fields for liquid metal containment and control. The model can optimize the EMD design so it is suitable for application, and minimize expensive time-consuming full-scale testing. Numerical simulation was performed by coupling a 3-D finite-element EM code (ELEKTRA) and a 3-D finite-difference fluids code (CaPS-EM) to solve heat transfer, fluid flow, and turbulence transport in a casting process that involves EM fields. ELEKTRA can predict the eddy-current distribution and the EM forces in complex geometries. CaPS-EM can model fluid flows with free surfaces. The computed 3-D magnetic fields and induced eddy currents in ELEKTRA are used as input to temperature- and flow-field computations in CaPS-EM. Results of the numerical simulation compared well with measurements obtained from both static and dynamic tests.
Rodrigue, Nicolas; Philippe, Hervé; Lartillot, Nicolas
2010-03-01
Modeling the interplay between mutation and selection at the molecular level is key to evolutionary studies. To this end, codon-based evolutionary models have been proposed as pertinent means of studying long-range evolutionary patterns and are widely used. However, these approaches have not yet consolidated results from amino acid level phylogenetic studies showing that selection acting on proteins displays strong site-specific effects, which translate into heterogeneous amino acid propensities across the columns of alignments; related codon-level studies have instead focused on either modeling a single selective context for all codon columns, or a separate selective context for each codon column, with the former strategy deemed too simplistic and the latter deemed overparameterized. Here, we integrate recent developments in nonparametric statistical approaches to propose a probabilistic model that accounts for the heterogeneity of amino acid fitness profiles across the coding positions of a gene. We apply the model to a dozen real protein-coding gene alignments and find it to produce biologically plausible inferences, for instance, as pertaining to site-specific amino acid constraints, as well as distributions of scaled selection coefficients. In their account of mutational features as well as the heterogeneous regimes of selection at the amino acid level, the modeling approaches studied here can form a backdrop for several extensions, accounting for other selective features, for variable population size, or for subtleties of mutational features, all with parameterizations couched within population-genetic theory. PMID:20176949
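The mutation-selection interplay at the core of such codon models can be sketched with the classical Halpern-Bruno form, in which a codon substitution rate is the mutation rate times a fixation factor driven by the difference in (scaled) amino acid fitness. This is a generic textbook form used for illustration, not the specific nonparametric model of the paper; the function names and values are invented.

```python
import math

def fixation_factor(s):
    """Relative fixation probability S / (1 - exp(-S)) for scaled
    selection coefficient S; tends to 1 as S -> 0 (neutrality)."""
    if abs(s) < 1e-12:
        return 1.0
    return s / (1.0 - math.exp(-s))

def substitution_rate(mu_ij, f_i, f_j):
    """Rate from codon i to codon j: mutation rate mu_ij times the
    fixation factor given site-specific fitnesses f_i, f_j."""
    return mu_ij * fixation_factor(f_j - f_i)
```

A characteristic property of this form is detailed balance in fitness: the ratio of forward to backward rates equals exp(S), so favorable changes fix proportionally more often.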
NASA Astrophysics Data System (ADS)
Swanekamp, S. B.; Oliver, B. V.; Grossmann, J. M.; Smithe, D.; Ludeking, L.
1996-11-01
The current understanding of plasma opening switch (POS) operation is as follows. During the conduction phase the switch plasma is redistributed by MHD forces. This redistribution of mass leads to the formation of a low density region in the switch where a 1-3 mm gap in the plasma is believed to form as the switch opens and magnetic energy is transferred between the primary storage inductor and the load. The processes of gap formation and power delivery are not very well understood. It is generally accepted that the assumptions of MHD theory are not valid during the gap formation and power delivery processes because electron inertia and the lack of space-charge neutrality are expected to play a key role. To study non-MHD processes during the gap formation process and power delivery phase of the POS, we have developed a technique for importing an arbitrary state of an MHD code into the PIC code MAGIC. At present the plasma kinetic pressure is ignored during the initialization of particles. Work supported by Defense Nuclear Agency. ^ JAYCOR, Vienna, VA 22102. ^ NRL-NRC Research Associate.
PHASE-OTI: A pre-equilibrium model code for nuclear reactions calculations
NASA Astrophysics Data System (ADS)
Elmaghraby, Elsayed K.
2009-09-01
The present work focuses on a pre-equilibrium nuclear reaction code (based on the one, two and infinity hypothesis of pre-equilibrium nuclear reactions). In the PHASE-OTI code, pre-equilibrium decays are assumed to be single nucleon emissions, and the statistical probabilities come from the independence of nuclei decay. The code has proved to be a good tool to provide predictions of energy-differential cross sections. The probability of emission was calculated statistically using the bases of the hybrid model and the exciton model; however, more precise depletion factors were used in the calculations. The present calculations were restricted to nucleon-nucleon interactions and one nucleon emission.
Program summary
Program title: PHASE-OTI
Catalogue identifier: AEDN_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDN_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 5858
No. of bytes in distributed program, including test data, etc.: 149 405
Distribution format: tar.gz
Programming language: Fortran 77
Computer: Pentium 4 and Centrino Duo
Operating system: MS Windows
RAM: 128 MB
Classification: 17.12
Nature of problem: Calculation of the differential cross section for nucleon-induced nuclear reactions in the framework of the pre-equilibrium emission model.
Solution method: Single neutron emission was treated by assuming occurrence of the reaction in successive steps. Each step is called a phase because of the phase-transition nature of the theory. The probability of emission was calculated statistically using the bases of the hybrid model [1] and the exciton model [2]; however, a more precise depletion factor was used in the calculations. The exciton configuration used in the code is that described in earlier work [3].
Restrictions: The program is restricted to single nucleon emission and nucleon
Automated Verification of Code Generated from Models: Comparing Specifications with Observations
NASA Astrophysics Data System (ADS)
Gerlich, R.; Sigg, D.; Gerlich, R.
2008-08-01
The interest in automatic code generation from models is increasing. A specification is expressed as a model, and verification and validation are performed in the application domain. Once the model is formally correct and complete, code can be generated automatically. The general belief is that this code should be correct as well. However, this may not be true: many parameters impact the generation of code and its correctness. Correctness depends on conditions that change from application to application, and the properties of the code depend on the environment in which it is executed. From the principles of ISVV (Independent Software Verification and Validation) it even must be doubted that the automatically generated code is correct. Therefore an additional activity is required to prove the correctness of the whole chain, from the modelling level down to execution on the target platform. Certification of a code generator is the state-of-the-art approach for dealing with such risks. Scade [1] was the first code generator certified according to DO-178B. The certification costs are a significant disadvantage of this approach: all code needs to be analysed manually, and this procedure has to be repeated for recertification after each maintenance step. Moreover, certification does not guarantee at all that the generated code complies with the model. Certification is based on compliance of the code of the code generator with given standards. Such compliance can never guarantee correctness of the whole chain through transformation down to the execution environment, though the belief is that certification implies well-formed code at a reduced fault rate. The approach presented here goes in a direction different from manual certification. It is guided by the idea of automated proof: each time code is generated from a model, the properties of the code when executed in its environment are compared with the properties specified in the model. This allows one to conclude on the correctness of
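The automated-proof idea, comparing the observed behavior of generated code against the model's specified properties on every generation run, can be sketched in miniature. The "model" and "generated code" below are toy stand-ins invented for the example, not the paper's tooling.

```python
# Sketch: verify generated code against its model specification by
# comparing observations with specified behavior over a set of inputs.

def model_spec(x):
    """Specified behavior: a counter saturating within [0, 10]."""
    return max(0, min(10, x))

def generated_code(x):
    """Pretend output of a code generator for the same model."""
    if x < 0:
        return 0
    return x if x <= 10 else 10

def verify(spec, impl, inputs):
    """Return every input where observation deviates from specification."""
    return [x for x in inputs if spec(x) != impl(x)]

failures = verify(model_spec, generated_code, range(-5, 16))
```

An empty failure list is evidence (over the exercised inputs and environment) that this particular generation run complies with the model, which is exactly what certification of the generator alone cannot establish.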
Evaluation of turbulence models in the PARC code for transonic diffuser flows
NASA Technical Reports Server (NTRS)
Georgiadis, N. J.; Drummond, J. E.; Leonard, B. P.
1994-01-01
Flows through a transonic diffuser were investigated with the PARC code using five turbulence models to determine the effects of turbulence model selection on flow prediction. Three of the turbulence models were algebraic models: Thomas (the standard algebraic turbulence model in PARC), Baldwin-Lomax, and Modified Mixing Length-Thomas (MMLT). The other two models were the low Reynolds number k-epsilon models of Chien and Speziale. Three diffuser flows, referred to as the no-shock, weak-shock, and strong-shock cases, were calculated with each model to conduct the evaluation. Pressure distributions, velocity profiles, locations of shocks, and maximum Mach numbers in the duct were the flow quantities compared. Overall, the Chien k-epsilon model was the most accurate of the five models when considering results obtained for all three cases. However, the MMLT model provided solutions as accurate as the Chien model for the no-shock and the weak-shock cases, at a substantially lower computational cost (measured in CPU time required to obtain converged solutions). The strong shock flow, which included a region of shock-induced flow separation, was only predicted well by the two k-epsilon models.
Modeling ion exchange in clinoptilolite using the EQ3/6 geochemical modeling code
Viani, B.E.; Bruton, C.J.
1992-06-01
Assessing the suitability of Yucca Mtn., NV as a potential repository for high-level nuclear waste requires the means to simulate ion-exchange behavior of zeolites. Vanselow and Gapon convention cation-exchange models have been added to geochemical modeling codes EQ3NR/EQ6, allowing exchange to be modeled for up to three exchangers or a single exchanger with three independent sites. Solid-solution models that are numerically equivalent to the ion-exchange models were derived and also implemented in the code. The Gapon model is inconsistent with experimental adsorption isotherms of trace components in clinoptilolite. A one-site Vanselow model can describe adsorption of Cs or Sr on clinoptilolite, but a two-site Vanselow exchange model is necessary to describe K contents of natural clinoptilolites.
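For a homovalent binary exchange under the Vanselow convention, the equilibrium exchanger composition follows directly from the selectivity coefficient written in exchanger-phase mole fractions. The sketch below is an illustrative one-site model, not the EQ3/6 implementation; the function name and values are invented.

```python
def vanselow_equilibrium(K_V, aA, aB):
    """Exchanger-phase mole fraction N_A at equilibrium for the
    homovalent exchange A+ + BX = AX + B+ under the Vanselow
    convention: K_V = (N_A * a_B) / (N_B * a_A), with N_A + N_B = 1.
    aA, aB are aqueous activities of A+ and B+."""
    r = K_V * aA / aB          # the ratio N_A / N_B
    return r / (1.0 + r)

# Example: equal aqueous activities; a selective exchanger (K_V = 100)
# is nearly saturated with A.
N_A = vanselow_equilibrium(100.0, 1.0, 1.0)
```

A one-site model like this corresponds to the single Vanselow site the abstract finds sufficient for Cs or Sr on clinoptilolite; describing K would require a second site with its own K_V.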
Source Term Model for Steady Micro Jets in a Navier-Stokes Computer Code
NASA Technical Reports Server (NTRS)
Waithe, Kenrick A.
2005-01-01
A source term model for steady micro jets was implemented into a non-proprietary Navier-Stokes computer code, OVERFLOW. The source term models the mass flow and momentum created by a steady blowing micro jet. The model is obtained by adding the momentum and mass flow created by the jet to the Navier-Stokes equations. The model was tested by comparing with data from numerical simulations of a single, steady micro jet on a flat plate in two and three dimensions. The source term model predicted the velocity distribution well compared to the two-dimensional plate using a steady mass flow boundary condition, which was used to simulate a steady micro jet. The model was also compared to two three-dimensional flat plate cases using a steady mass flow boundary condition to simulate a steady micro jet. The three-dimensional comparison included a case with a grid generated to capture the circular shape of the jet and a case without a grid generated for the micro jet. The case without the jet grid mimics the application of the source term. The source term model compared well with both of the three-dimensional cases. Comparisons of velocity distribution were made before and after the jet and Mach and vorticity contours were examined. The source term model allows a researcher to quickly investigate different locations of individual or several steady micro jets. The researcher is able to conduct a preliminary investigation with minimal grid generation and computational time.
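The core of such a model is adding the jet's mass flow and momentum flux to the equations in the cells the jet occupies. The sketch below lumps the jet into a single cell volume and is an illustration of the mechanism, not the OVERFLOW implementation; the function name and all values are invented.

```python
import math

def microjet_sources(mdot, v_jet, theta_deg, cell_volume):
    """Per-unit-volume mass and momentum source terms for a steady
    blowing micro jet: the jet adds mdot (kg/s) of mass and a momentum
    flux mdot*v_jet (N) at pitch angle theta into its cell(s)."""
    s_mass = mdot / cell_volume                        # kg/(m^3 s)
    mom = mdot * v_jet                                 # N
    s_mom_x = mom * math.cos(math.radians(theta_deg)) / cell_volume
    s_mom_y = mom * math.sin(math.radians(theta_deg)) / cell_volume
    return s_mass, s_mom_x, s_mom_y

# Example: 1 g/s jet at 200 m/s blowing normal to the wall (90 degrees)
sm, sx, sy = microjet_sources(0.001, 200.0, 90.0, 1e-6)
```

Because only the totals (mass flow, momentum, angle) enter, relocating a jet means changing which cells receive the source, which is what makes screening many jet locations cheap compared with gridding each jet.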
Subgrid Combustion Modeling for the Next Generation National Combustion Code
NASA Technical Reports Server (NTRS)
Menon, Suresh; Sankaran, Vaidyanathan; Stone, Christopher
2003-01-01
In the first year of this research, a subgrid turbulent mixing and combustion methodology developed earlier at Georgia Tech was provided to researchers at NASA/GRC for incorporation into the next-generation National Combustion Code (called NCCLES hereafter). A key feature of this approach is that scalar mixing and combustion processes are simulated within the LES grid using a stochastic 1D model. The subgrid simulation approach recovers locally molecular diffusion and reaction kinetics exactly without requiring closure and thus provides an attractive means to simulate complex, highly turbulent reacting flows of interest. Data acquisition algorithms and statistical analysis strategies and routines to analyze NCCLES results have also been provided to NASA/GRC. The overall goal of this research is to systematically develop and implement LES capability into the current NCC. For this purpose, issues regarding initializing and running LES are also addressed in the collaborative effort. In parallel to this ongoing technology transfer effort, research has also been underway at Georgia Tech to enhance the LES capability to tackle more complex flows. In particular, the subgrid scalar mixing and combustion method has been evaluated in three distinctly different flow fields in order to demonstrate its generality: (a) flame-turbulence interactions using premixed combustion, (b) spatially evolving supersonic mixing layers, and (c) temporal single- and two-phase mixing layers. The configurations chosen are such that they can be implemented in NCCLES and used to evaluate the ability of the new code. Future development and validation will address spray combustion in gas turbine engines and supersonic scalar mixing.
An Adaptive Code for Radial Stellar Model Pulsations
NASA Astrophysics Data System (ADS)
Buchler, J. Robert; Kolláth, Zoltán; Marom, Ariel
1997-09-01
We describe an implicit 1-D adaptive mesh hydrodynamics code that is specially tailored for radial stellar pulsations. In the Lagrangian limit the code reduces to the well-tested Fraley scheme. The code has the useful feature that unwanted, long-lasting transients can be avoided by smoothly switching on the adaptive mesh features starting from the Lagrangian code. Thus, a limit cycle pulsation that can readily be computed with the relaxation method of Stellingwerf will converge in a few tens of pulsation cycles when put into the adaptive mesh code. The code has been checked with two shock problems, viz. Noh and Sedov, for which analytical solutions are known, and it has been found to be both accurate and stable. Superior results were obtained through the solution of the total energy (gravitational + kinetic + internal) equation rather than that of the internal energy only.
Impact of the hard-coded parameters on the hydrologic fluxes of the land surface model Noah-MP
NASA Astrophysics Data System (ADS)
Cuntz, Matthias; Mai, Juliane; Samaniego, Luis; Clark, Martyn; Wulfmeyer, Volker; Attinger, Sabine; Thober, Stephan
2016-04-01
Land surface models incorporate a large number of processes, described by physical, chemical and empirical equations. The process descriptions contain a number of parameters that can be soil or plant type dependent and are typically read from tabulated input files. Land surface models may have, however, process descriptions that contain fixed, hard-coded numbers in the computer code, which are not identified as model parameters. Here we searched for hard-coded parameters in the computer code of the land surface model Noah with multiple process options (Noah-MP) to assess the importance of the fixed values on restricting the model's agility during parameter estimation. We found 139 hard-coded values in all Noah-MP process options, which are mostly spatially constant values. This is in addition to the 71 standard parameters of Noah-MP, which mostly get distributed spatially by given vegetation and soil input maps. We performed a Sobol' global sensitivity analysis of Noah-MP to variations of the standard and hard-coded parameters for a specific set of process options. 42 standard parameters and 75 hard-coded parameters were active with the chosen process options. The sensitivities of the hydrologic output fluxes latent heat and total runoff as well as their component fluxes were evaluated. These sensitivities were evaluated at twelve catchments of the Eastern United States with very different hydro-meteorological regimes. Noah-MP's hydrologic output fluxes are sensitive to two thirds of its standard parameters. The most sensitive parameter is, however, a hard-coded value in the formulation of soil surface resistance for evaporation, which proved to be oversensitive in other land surface models as well. Surface runoff is sensitive to almost all hard-coded parameters of the snow processes and the meteorological inputs. These parameter sensitivities diminish in total runoff. Assessing these parameters in model calibration would require detailed snow observations or the
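The Sobol'-type screening of standard versus hard-coded parameters can be illustrated on a toy surrogate: vary each input and estimate what fraction of output variance it explains (the first-order index). The linear model, coefficients, and sample counts below are invented for the sketch, not Noah-MP quantities.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x1 = rng.uniform(0, 1, n)        # a tabulated "standard" parameter
x2 = rng.uniform(0, 1, n)        # a fixed "hard-coded" value, now varied
y = 3.0 * x1 + 1.0 * x2          # surrogate model output

def first_order_index(x, y, bins=50):
    """Estimate S_i = Var(E[Y|X_i]) / Var(Y) by binning X_i."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    counts = np.array([(idx == b).sum() for b in range(bins)])
    var_cond = np.average((cond_means - y.mean()) ** 2, weights=counts)
    return var_cond / y.var()

s1 = first_order_index(x1, y)    # analytically 9/10 for this model
s2 = first_order_index(x2, y)    # analytically 1/10
```

The point of the exercise mirrors the paper's finding: a "hidden" input can carry a large first-order index, and it carries none at all if it is never varied, which is precisely the risk of leaving parameters hard-coded.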
Fanselau, R.W.; Thakkar, J.G.; Hiestand, J.W.; Cassell, D.
1981-03-01
The Comparative Thermal-Hydraulic Evaluation of Steam Generators program represents an analytical investigation of the thermal-hydraulic characteristics of four PWR steam generators. The analytical tool utilized in this investigation is the CALIPSOS code, a three-dimensional flow distribution code. This report presents the steady state thermal-hydraulic characteristics on the secondary side of a Westinghouse Model 51 steam generator. Details of the CALIPSOS model with accompanying assumptions, operating parameters, and transport correlations are identified. Comprehensive graphical and numerical results are presented to facilitate the desired comparison with other steam generators analyzed by the same flow distribution code.
Distributed Prognostics based on Structural Model Decomposition
NASA Technical Reports Server (NTRS)
Daigle, Matthew J.; Bregon, Anibal; Roychoudhury, I.
2014-01-01
Within systems health management, prognostics focuses on predicting the remaining useful life of a system. In the model-based prognostics paradigm, physics-based models are constructed that describe the operation of a system and how it fails. Such approaches consist of an estimation phase, in which the health state of the system is first identified, and a prediction phase, in which the health state is projected forward in time to determine the end of life. Centralized solutions to these problems are often computationally expensive, do not scale well as the size of the system grows, and introduce a single point of failure. In this paper, we propose a novel distributed model-based prognostics scheme that formally describes how to decompose both the estimation and prediction problems into independent local subproblems whose solutions may be easily composed into a global solution. The decomposition of the prognostics problem is achieved through structural decomposition of the underlying models. The decomposition algorithm creates from the global system model a set of local submodels suitable for prognostics. Independent local estimation and prediction problems are formed based on these local submodels, resulting in a scalable distributed prognostics approach that allows the local subproblems to be solved in parallel, thus offering increases in computational efficiency. Using a centrifugal pump as a case study, we perform a number of simulation-based experiments to demonstrate the distributed approach, compare the performance with a centralized approach, and establish its scalability. Index Terms: model-based prognostics, distributed prognostics, structural model decomposition.
Methodology Using MELCOR Code to Model Proposed Hazard Scenario
Gavin Hawkley
2010-07-01
This study demonstrates a methodology for using the MELCOR code to model a proposed hazard scenario within a building containing radioactive powder, and the subsequent evaluation of a leak path factor (LPF), the fraction of respirable material that escapes the facility into the outside environment, implicit in the scenario. The LPF evaluation analyzes the basis and applicability of an assumed standard multiplication of 0.5 × 0.5 (in which 0.5 represents the fraction of material assumed to leave one area and enter another) for calculating an LPF value. The outside release depends upon the ventilation/filtration system, both filtered and unfiltered, and on other pathways from the building, such as doorways, both open and closed. This study shows how the multiple LPFs from the building interior can be evaluated in a combinatory process in which a total LPF is calculated, thus addressing the assumed multiplication and allowing for the designation and assessment of a respirable source term (ST) for later consequence analysis, in which the propagation of material released into the atmosphere can be modeled, the dose received by a receptor placed downwind can be estimated, and the distance adjusted to maintain such exposures as low as reasonably achievable (ALARA). This study also briefly addresses particle characteristics that affect atmospheric particle dispersion, and compares this dispersion with the LPF methodology.
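In its simplest form, the combinatory evaluation reduces to multiplying per-stage leak path factors along a release pathway, and feeding the total into the standard five-factor source-term formula (ST = MAR × DR × ARF × RF × LPF, per DOE-HDBK-3010). The sketch below is illustrative; all numeric values are invented.

```python
from functools import reduce

def total_lpf(stage_factors):
    """Total leak path factor for material traversing rooms/pathways in
    series: the product of the per-stage factors. The 0.5 x 0.5
    assumption examined in the study is the two-stage special case."""
    return reduce(lambda a, b: a * b, stage_factors, 1.0)

def source_term(mar, dr, arf, rf, lpf):
    """Respirable source term ST = MAR * DR * ARF * RF * LPF
    (material at risk, damage ratio, airborne release fraction,
    respirable fraction, leak path factor)."""
    return mar * dr * arf * rf * lpf

assumed = total_lpf([0.5, 0.5])        # the assumed 0.5 x 0.5 = 0.25
st = source_term(1000.0, 0.1, 1e-3, 0.2, assumed)   # grams respirable
```

Evaluating the stage factors individually (filtered vs. unfiltered ventilation, open vs. closed doorways) and taking their product is the combinatory process the study uses to test whether 0.25 is conservative.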
Coding coarse grained polymer model for LAMMPS and its application to polymer crystallization
NASA Astrophysics Data System (ADS)
Luo, Chuanfu; Sommer, Jens-Uwe
2009-08-01
We present a patch code for LAMMPS to implement a coarse-grained (CG) model of poly(vinyl alcohol) (PVA). LAMMPS is a powerful molecular dynamics (MD) simulator developed at Sandia National Laboratories. Our patch code implements a tabulated angular potential and a Lennard-Jones 9-6 (LJ96) style interaction for PVA. Benefiting from the excellent parallel efficiency of LAMMPS, our patch code is suitable for large-scale simulations. This CG-PVA code is used to study polymer crystallization, which is a long-standing unsolved problem in polymer physics. By using parallel computing, cooling and heating processes for long chains are simulated. The results show that chain-folded structures resembling the lamellae of polymer crystals are formed during the cooling process. The evolution of the static structure factor during the crystallization transition indicates that long-range density order appears before local crystalline packing. This is consistent with some experimental observations by small/wide angle X-ray scattering (SAXS/WAXS). During the heating process, it is found that the crystalline regions keep growing until they are fully melted, which is confirmed by the evolution both of the static structure factor and of the average stem length formed by the chains. This two-stage behavior indicates that melting of polymer crystals is far from thermodynamic equilibrium. Our results concur with various experiments. It is the first time that such growth/reorganization behavior has been clearly observed in MD simulations. Our code can easily be used to model other types of polymers by providing a file containing the tabulated angle potential data and a set of appropriate parameters.
Program summary
Program title: lammps-cgpva
Catalogue identifier: AEDE_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDE_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU's GPL
No. of lines in distributed program
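A tabulated angle potential of the kind the code consumes can be generated programmatically. The sketch below writes rows of index, angle, energy, and force in the approximate layout of a LAMMPS `angle_style table` file; the exact column and keyword conventions should be checked against the LAMMPS documentation, and the simple harmonic form E = k(θ − θ0)² merely stands in for the real CG-PVA table.

```python
def write_angle_table(path, keyword="CG_PVA", n=181, k=5.0, theta0=120.0):
    """Write an illustrative tabulated angle potential: a section
    keyword, an `N <points>` line, then `index angle(deg) energy force`
    rows, with force = -dE/dtheta for the harmonic stand-in."""
    with open(path, "w") as f:
        f.write("# tabulated angle potential (illustrative)\n\n")
        f.write(keyword + "\n")
        f.write("N %d\n\n" % n)
        for i in range(n):
            theta = 180.0 * i / (n - 1)
            e = k * (theta - theta0) ** 2
            force = -2.0 * k * (theta - theta0)   # -dE/dtheta
            f.write("%d %.4f %.6f %.6f\n" % (i + 1, theta, e, force))

write_angle_table("cgpva_angle.table")
```

Supplying such a file plus a parameter set is, per the abstract, all that is needed to repurpose the patch for other polymers.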
ICRCCM Phase 2: Verification and calibration of radiation codes in climate models
Ellingson, R.G.; Wiscombe, W.J.; Murcray, D.; Smith, W.; Strauch, R.
1992-01-01
Following the finding by the InterComparison of Radiation Codes used in Climate Models (ICRCCM) of large differences among fluxes predicted by sophisticated radiation models, differences that could not be sorted out because of the lack of a set of accurate atmospheric spectral radiation data measured simultaneously with the important radiative properties of the atmosphere, our team of scientists proposed to remedy the situation by carrying out a comprehensive program of measurement and analysis called SPECTRE (Spectral Radiance Experiment). The data collected during SPECTRE form the test bed for the second phase of ICRCCM, namely verification and calibration of radiation codes used in climate models. This should lead to more accurate radiation models for use in parameterizing climate models, which in turn play a key role in the prediction of trace-gas greenhouse effects. This report summarizes the activities of our group during the project's third year to meet our stated objectives. The report is divided into three sections entitled SPECTRE Activities, ICRCCM Activities, and Summary Information. The section on SPECTRE activities summarizes the field portion of the project during 1991 and the data reduction/analysis performed by the various participants. The section on ICRCCM activities summarizes our initial attempts to select data for distribution to ICRCCM participants and to compare observations with calculations as will be done by the ICRCCM participants. The Summary Information section lists data concerning publications, presentations, graduate students supported, and post-doctoral appointments during the project.
Modeling the Pion Generalized Parton Distribution
NASA Astrophysics Data System (ADS)
Mezrag, C.
2016-02-01
We compute the pion Generalized Parton Distribution (GPD) in a valence dressed quarks approach. We model the Mellin moments of the GPD using Ansätze for Green functions inspired by the numerical solutions of the Dyson-Schwinger Equations (DSE) and the Bethe-Salpeter Equation (BSE). Then, the GPD is reconstructed from its Mellin moment using the Double Distribution (DD) formalism. The agreement with available experimental data is very good.
Analytic modeling of aerosol size distributions
NASA Technical Reports Server (NTRS)
Deepack, A.; Box, G. P.
1979-01-01
Mathematical functions commonly used for representing aerosol size distributions are studied parametrically. Methods for obtaining best fit estimates of the parameters are described. A catalog of graphical plots depicting the parametric behavior of the functions is presented along with procedures for obtaining analytical representations of size distribution data by visual matching of the data with one of the plots. Examples of fitting the same data with equal accuracy by more than one analytic model are also given.
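Among the functions commonly fitted to aerosol size data, the lognormal is the workhorse; its best-fit parameters can be obtained by the method of moments on log-diameters. The sketch below is a generic illustration (the paper's catalog covers this and other analytic forms); the sample data are invented.

```python
import math

def fit_lognormal(diameters):
    """Method-of-moments lognormal fit to a particle-size sample:
    returns (d_g, sigma_g), the geometric mean diameter and geometric
    standard deviation, from the mean and variance of log-diameters."""
    logs = [math.log(d) for d in diameters]
    mu = sum(logs) / len(logs)
    var = sum((x - mu) ** 2 for x in logs) / len(logs)
    return math.exp(mu), math.exp(math.sqrt(var))

# Example with an invented sample of diameters in micrometers
d_g, sigma_g = fit_lognormal([0.1, 0.2, 0.2, 0.4, 0.8])
```

That more than one analytic form can fit the same data equally well, as the abstract notes, is visible here too: a gamma or power-law form fitted to the same five points would match them comparably over this narrow size range.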
Evolutionary model of the personal income distribution
NASA Astrophysics Data System (ADS)
Kaldasch, Joachim
2012-11-01
The aim of this work is to develop a qualitative picture of the personal income distribution. Treating an economy as a self-organized system the key idea of the model is that the income distribution contains competitive and non-competitive contributions. The presented model distinguishes between three main income classes. 1. Capital income from private firms is shown to be the result of an evolutionary competition between products. A direct consequence of this competition is Gibrat’s law suggesting a lognormal income distribution for small private firms. Taking into account an additional preferential attachment mechanism for large private firms the income distribution is supplemented by a power law (Pareto) tail. 2. Due to the division of labor a diversified labor market is seen as a non-competitive market. In this case wage income exhibits an exponential distribution. 3. Also included is income from a social insurance system. It can be approximated by a Gaussian peak. A consequence of this theory is that for short time intervals a fixed ratio of total labor (total capital) to net income exists (Cobb-Douglas relation). A comparison with empirical high resolution income data confirms this pattern of the total income distribution. The theory suggests that competition is the ultimate origin of the uneven income distribution.
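The three-class structure can be illustrated by synthesizing a sample: exponential wage income, lognormal small-firm capital income with a Pareto tail for large firms, and a Gaussian peak for social-insurance income. All class shares and parameters below are invented for the sketch, not fitted values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
# class 0: wage, class 1: capital, class 2: social insurance
cls = rng.choice(3, size=n, p=[0.7, 0.2, 0.1])

income = np.empty(n)
income[cls == 0] = rng.exponential(30_000, (cls == 0).sum())
cap = rng.lognormal(mean=10.5, sigma=1.0, size=(cls == 1).sum())
# preferential-attachment tail: a few large firms get Pareto incomes
top = rng.random(cap.size) < 0.05
cap[top] = 200_000 * (1 + rng.pareto(1.5, top.sum()))
income[cls == 1] = cap
income[cls == 2] = np.clip(rng.normal(12_000, 2_000, (cls == 2).sum()),
                           0, None)
```

Plotting the pooled sample on log axes would show the qualitative signature the model predicts: an exponential bulk, a lognormal shoulder, and a Pareto power-law tail.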
Comparison of Ramsauer and Optical Model Neutron Angular Distributions
McNabb, D P; Anderson, J D; Bauer, R W; Dietrich, F S; Grimes, S M; Hagmann, C A
2004-09-30
The nuclear Ramsauer model is a semi-classical, analytic approximation to nucleon-nucleus scattering that reproduces total cross section data at the 1% level for A > 40, E{sub n} = 5-60 MeV with 7-10 parameters. A quick overview of the model is given, demonstrating the model's utility in nuclear data evaluation. The Ramsauer model predictions for reaction cross section, elastic cross section, and elastic scattering angular distributions are considered. In a recent paper it has been shown that the nuclear Ramsauer model does not do well in predicting details of the angular distribution of neutron elastic scattering for incident energies of less than 60 MeV for {sup 208}Pb. However, in this contribution it is demonstrated that the default angular bin dispersion most widely used in Monte Carlo transport codes is such that the observed differences in angular shapes are on too fine a scale to affect transport calculations. Simple studies indicate that 512-2048 bins are necessary to achieve the dispersion required for calculations to be sensitive to the observed discrepancies in angular distributions.
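The binning argument can be illustrated numerically: average a toy oscillatory angular distribution over wide versus narrow angle bins and compare how much structure survives. The cross-section form and bin counts below are invented, not the Ramsauer or optical-model distributions.

```python
import math

def sigma(theta):
    """Toy dsigma/dOmega: a smooth part plus a fine diffraction-like
    ripple (40 oscillations over 0..pi)."""
    return 1.0 + 0.5 * math.cos(40.0 * theta)

def bin_average(nbins, samples=64):
    """Average sigma over `nbins` equal theta bins on [0, pi]
    via midpoint sampling within each bin."""
    out = []
    width = math.pi / nbins
    for b in range(nbins):
        lo = b * width
        vals = [sigma(lo + width * (k + 0.5) / samples)
                for k in range(samples)]
        out.append(sum(vals) / samples)
    return out

def contrast(vals):
    return max(vals) - min(vals)

coarse = bin_average(16)    # wide bins: the ripple averages away
fine = bin_average(512)     # narrow bins: the ripple survives
```

With the wide bins the bin-to-bin contrast collapses, which is the sense in which default Monte Carlo angular binning is too coarse to feel the model discrepancies; only with hundreds to thousands of bins is the fine structure retained.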
Comparison of Ramsauer and Optical Model Neutron Angular Distributions
McNabb, D.P.; Anderson, J.D.; Bauer, R.W.; Dietrich, F.S.; Hagmann, C.A.; Grimes, S.M.
2005-05-24
The nuclear Ramsauer model is a semi-classical, analytic approximation to nucleon-nucleus scattering that reproduces total cross-section data at the 1% level for A > 40, En = 5-60 MeV with 7-10 parameters. A quick overview of the model is given, demonstrating the model's utility in nuclear data evaluation. The Ramsauer model predictions for reaction cross section, elastic cross section, and elastic scattering angular distributions are considered. In a recent paper it has been shown that the nuclear Ramsauer model does not do well in predicting details of the angular distribution of neutron elastic scattering for incident energies of less than 60 MeV for 208Pb. However, in this contribution it is demonstrated that the default angular bin dispersion most widely used in Monte Carlo transport codes is such that the observed differences in angular shapes are on too fine a scale to affect transport calculations. Simple studies indicate that 512-2048 bins are necessary to achieve the dispersion required for calculations to be sensitive to the observed discrepancies in angular distributions.
Distributed knowledge model for multiple intelligent agents
Li, Y.P.
1987-01-01
In the Distributed AI context, some general principles have been developed to manage the problem-solving activities of multiple agents, but there is not yet a domain-independent structure available for organizing multiple agents and managing the interactions among them. An organization metaphor is proposed that establishes the hierarchical organization as the preferred task environment for decision-oriented applications of Distributed AI. As such, distributed problem solving is modeled as organizational problem solving. A generic structure for multiple intelligent agents is then developed. The organization metaphor is a problem-solving method: it outlines the organizational principles for distributed problem solving. However, a problem-solving model does not specify how it is itself to be realized as a computational entity. Therefore, a distributed knowledge model (DKM) is proposed to define the computational constructs needed to realize a distributed problem-solving environment for multiple intelligent agents. A prototype was implemented to show the feasibility of building a multi-agent environment based on DKM.
Semantic-preload video model based on VOP coding
NASA Astrophysics Data System (ADS)
Yang, Jianping; Zhang, Jie; Chen, Xiangjun
2013-03-01
In recent years, in order to reduce the semantic gap that exists between high-level semantics and the low-level features of video when humans interpret images or video, most work has tried video annotation downstream of the signal, that is, attaching labels to content already stored in a video database. Few have pursued the idea explored here: use limited interaction and comprehensive segmentation (including optical technologies) at the front end of video information collection (i.e., the video camera), together with video semantics analysis technology, concept sets (i.e., ontologies) belonging to a given domain, the story shooting script, and the task description of scene shooting; then apply semantic descriptions at different levels to enrich the attributes of video objects and image regions, forming a new video model based on Video Object Plane (VOP) coding. This model has potentially intelligent features, carries a large amount of metadata, and embeds intermediate-level semantic concepts into every object. This paper focuses on the latter approach and presents a framework for the new video model, provisionally named the Semantic-Preload Video Model (SPVM, also written VMoSP). The model mainly investigates how to label video objects and image regions in real time, usually with intermediate-level semantic labels, placing this work upstream of the signal (i.e., at the video capture and production stage). To support this, the paper also analyzes the hierarchical structure of video, dividing it into nine semantic levels that apply only to the video production process, and points out that the semantic-level tagging (i.e., semantic preloading) discussed here refers only to the four middle levels. All in
Application distribution model and related security attacks in VANET
NASA Astrophysics Data System (ADS)
Nikaein, Navid; Kanti Datta, Soumya; Marecar, Irshad; Bonnet, Christian
2013-03-01
In this paper, we present a model for application distribution and related security attacks in dense vehicular ad hoc networks (VANETs) and in sparse VANETs, which form a delay tolerant network (DTN). We study the vulnerabilities of VANETs to evaluate attack scenarios and introduce a new attacker's model as an extension of the work done in [6]. A VANET model is then proposed that supports application distribution through proxy app stores on top of mobile platforms installed in vehicles. The steps of application distribution are studied in detail. We identify key attacks (e.g., malware, spamming and phishing, software attacks, and threats to location privacy) for dense VANETs and two attack scenarios for sparse VANETs. It is shown that attacks can be launched by distributing malicious applications and injecting malicious code into the On Board Unit (OBU) by exploiting OBU software security holes. Consequences of such security attacks are described. Finally, countermeasures, including the concept of a sandbox, are presented in depth.
Applications of Transport/Reaction Codes to Problems in Cell Modeling
MEANS, SHAWN A.; RINTOUL, MARK DANIEL; SHADID, JOHN N.
2001-11-01
We demonstrate two specific examples that show how our existing capabilities in solving large systems of partial differential equations associated with transport/reaction systems can be readily applied to outstanding problems in computational biology. First, we examine a three-dimensional model for calcium wave propagation in a Xenopus laevis frog egg and verify that a proposed model for the distribution of calcium release sites agrees with experimental results as a function of both space and time. Next, we create a model of the neuron's terminus based on experimental observations and show that the sodium-calcium exchanger is not the route of sodium's modulation of neurotransmitter release. These state-of-the-art simulations were performed on massively parallel platforms and required almost no modification of existing Sandia codes.
Distributed Wind Diffusion Model Overview (Presentation)
Preus, R.; Drury, E.; Sigrin, B.; Gleason, M.
2014-07-01
Distributed wind market demand is driven by current and future wind price and performance, along with several non-price market factors like financing terms, retail electricity rates and rate structures, future wind incentives, and others. We developed a new distributed wind technology diffusion model for the contiguous United States that combines hourly wind speed data at 200m resolution with high resolution electricity load data for various consumer segments (e.g., residential, commercial, industrial), electricity rates and rate structures for utility service territories, incentive data, and high resolution tree cover. The model first calculates the economics of distributed wind at high spatial resolution for each market segment, and then uses a Bass diffusion framework to estimate the evolution of market demand over time. The model provides a fundamental new tool for characterizing how distributed wind market potential could be impacted by a range of future conditions, such as electricity price escalations, improvements in wind generator performance and installed cost, and new financing structures. This paper describes model methodology and presents sample results for distributed wind market potential in the contiguous U.S. through 2050.
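The diffusion step described above uses a Bass framework; a minimal sketch of the standard Bass adoption curve follows. The `p` (innovation) and `q` (imitation) coefficients below are illustrative assumptions, not NREL-calibrated values.

```python
import math

def bass_cumulative(t, p, q):
    """Cumulative adoption fraction F(t) of the Bass diffusion model.

    p: coefficient of innovation (external influence, e.g. marketing)
    q: coefficient of imitation (internal influence, word of mouth)
    """
    e = math.exp(-(p + q) * t)
    return (1.0 - e) / (1.0 + (q / p) * e)

# Illustrative (not model-calibrated) coefficients and a 40-year horizon
p, q = 0.03, 0.38
adoption = [bass_cumulative(t, p, q) for t in range(0, 41)]
```

The economics calculation in the model would set the ceiling (market potential) that this fraction is multiplied by; the curve itself just shapes the timing of uptake.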
Monotonicity-constrained species distribution models.
Hofner, Benjamin; Müller, Jörg; Hothorn, Torsten
2011-10-01
Flexible modeling frameworks for species distribution models based on generalized additive models that allow for smooth, nonlinear effects and interactions are of increasing importance in ecology. Commonly, the flexibility of such smooth function estimates is controlled by means of penalized estimation procedures. However, the actual shape remains unspecified. In many applications, this is not desirable as researchers have a priori assumptions on the shape of the estimated effects, with monotonicity being the most important. Here we demonstrate how monotonicity constraints can be incorporated in a recently proposed flexible framework for species distribution models. Our proposal allows monotonicity constraints to be imposed on smooth effects and on ordinal, categorical variables using an additional asymmetric L2 penalty. Model estimation and variable selection for Red Kite (Milvus milvus) breeding was conducted using the flexible boosting framework implemented in R package mboost. PMID:22073780
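The asymmetric L2 penalty idea can be sketched directly: only differences between successive coefficients that violate the monotonicity constraint are penalized. The function below is a hypothetical illustration of the constraint, not the mboost implementation.

```python
import numpy as np

def asymmetric_l2_penalty(beta, lam=10.0, increasing=True):
    """Penalize only coefficient differences violating monotonicity.

    beta: coefficients of adjacent basis functions or ordinal levels
    lam:  penalty weight (illustrative value)
    """
    d = np.diff(beta)
    viol = np.minimum(d, 0.0) if increasing else np.maximum(d, 0.0)
    return lam * np.sum(viol ** 2)
```

A monotone-increasing coefficient vector incurs zero penalty, so the fit is unconstrained wherever the data already respect the assumed shape; only violations are shrunk away.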
Spectral and Structure Modeling of Low and High Mass Young Stars Using a Radiative Transfer Code
NASA Astrophysics Data System (ADS)
Robson Rocha, Will; Pilling, Sergio
Spectroscopic data from space telescopes (ISO, Spitzer, Herschel) show that, in addition to dust grains (e.g., silicates), frozen molecular species (astrophysical ices such as H2O, CO, CO2, CH3OH) are also present in circumstellar environments. In this work we present a modeling study of low- and high-mass young stellar objects (YSOs), highlighting the importance of using astrophysical ices processed by radiation (UV, cosmic rays) coming from stars in the formation process. This is important for characterizing the physicochemical evolution of the ices distributed through the protostellar disk and, in some situations, its envelope. To perform this analysis, we gathered (i) observational data from the Infrared Space Observatory (ISO) for the low-mass protostar Elias 29 and the high-mass protostar W33A, (ii) experimental absorbance data in the infrared spectral range used to determine the optical constants of the materials observed around these objects, and (iii) a powerful radiative transfer code, RADMC-3D (Dullemond et al. 2012), to simulate the astrophysical environment. Briefly, the radiative transfer calculation of the YSOs was done employing the RADMC-3D code. The model outputs were the spectral energy distribution and theoretical images of the studied objects at different wavelengths. The code is based on the Monte Carlo method combined with Mie theory for the interaction between radiation and matter. Observational data from different space telescopes were used as reference for comparison with the modeled data. The infrared optical constants used as input in the models were calculated directly from absorbance data obtained in the laboratory for both unprocessed and processed simulated interstellar samples, using the NKABS code (Rocha & Pilling 2014). We show from this study that some absorption bands in the infrared observed in the spectra of Elias 29 and W33A can arise after the ices
Code and Solution Verification of 3D Numerical Modeling of Flow in the Gust Erosion Chamber
NASA Astrophysics Data System (ADS)
Yuen, A.; Bombardelli, F. A.
2014-12-01
Erosion microcosms are devices commonly used to investigate the erosion and transport characteristics of sediments at the bed of rivers, lakes, or estuaries. In order to interpret the results these devices provide, the bed shear stress and flow field need to be accurately described. In this research, the UMCES Gust Erosion Microcosm System (U-GEMS) is numerically modeled using the Finite Volume Method. The primary aims are to simulate the bed shear stress distribution at the surface of the sediment core/bottom of the microcosm, and to validate that the U-GEMS produces uniform bed shear stress at the bottom of the microcosm. The mathematical model equations are solved on a Cartesian non-uniform grid. Multiple numerical runs were developed with different input conditions and configurations. Prior to developing the U-GEMS model, the General Moving Objects (GMO) model and different momentum algorithms in the code were verified. Code verification of these solvers was done by simulating the flow inside a top-wall-driven square cavity on different mesh sizes to obtain the order of convergence. The GMO model was used to simulate the moving top wall in the cavity as well as the rotating disk in the U-GEMS; components simulated with the GMO model are rigid bodies that can undergo any type of motion. In addition, cross-verification was conducted by comparing results with the numerical results of Ghia et al. (1982), and good agreement was found. Next, the CFD results were validated by simulating the flow within the conventional microcosm system without suction and injection; good agreement was found with the experimental results of Khalili et al. (2008). After the ability of the CFD solver was established through the above code verification steps, the model was used to simulate the U-GEMS. The solution was verified via a classic mesh convergence study on four consecutive mesh sizes; in addition, the Grid Convergence Index (GCI) was calculated and based on
NASA Astrophysics Data System (ADS)
Athanasopoulou, Labrini; Athanasopoulos, Stavros; Karamanos, Kostas; Almirantis, Yannis
2010-11-01
Statistical methods, including block entropy based approaches, have already been used in the study of long-range features of genomic sequences seen as symbol series, either considering the full alphabet of the four nucleotides or the binary purine/pyrimidine character set. Here we explore the alternation of short protein-coding segments with long noncoding spacers in entire chromosomes, focusing on the scaling properties of block entropy. In previous studies, it has been shown that the sizes of noncoding spacers follow power-law-like distributions in most chromosomes of eukaryotic organisms from distant taxa. We have developed a simple evolutionary model based on well-known molecular events (segmental duplications followed by elimination of most of the duplicated genes) which reproduces the observed linearity in log-log plots. The scaling properties of block entropy H(n) have been studied in several works. Their findings suggest that linearity in semilogarithmic scale characterizes symbol sequences which exhibit fractal properties and long-range order, and this linearity has been shown for the logistic map at the Feigenbaum accumulation point. The present work starts with the observation that the block entropy of the Cantor-like binary symbol series scales in a similar way. Then, we perform the same analysis for the full set of human chromosomes and for several chromosomes of other eukaryotes. A similar but less extended linearity in semilogarithmic scale, indicating fractality, is observed, while randomly formed surrogate sequences clearly lack this type of scaling. Genomic sequences always present entropy values much lower than their random surrogates. Symbol sequences produced by the aforementioned evolutionary model follow the scaling found in genomic sequences, thus corroborating the conjecture that “segmental duplication-gene elimination” dynamics may have contributed to the observed long-rangeness in the coding or noncoding alternation in
On distributed memory MPI-based parallelization of SPH codes in massive HPC context
NASA Astrophysics Data System (ADS)
Oger, G.; Le Touzé, D.; Guibert, D.; de Leffe, M.; Biddiscombe, J.; Soumagne, J.; Piccinali, J.-G.
2016-03-01
Most particle methods share the problem of high computational cost, and in order to satisfy the demands of solvers, currently available hardware technologies must be fully exploited. Two complementary technologies are now accessible. On the one hand, CPUs, which can be structured into a multi-node framework allowing massive data exchanges through a high-speed network; in this case, each node usually comprises several cores available for multithreaded computations. On the other hand, GPUs, derived from graphics computing technologies, able to perform highly multi-threaded calculations with hundreds of independent threads connected through a common shared memory. This paper is primarily dedicated to the distributed-memory parallelization of particle methods, targeting several thousands of CPU cores. The experience gained clearly shows that parallelizing a particle-based code on a moderate number of cores can easily lead to acceptable scalability, whilst a scalable speedup on thousands of cores is much more difficult to obtain. The discussion revolves around speeding up particle methods as a whole, in a massive HPC context, by making use of the MPI library. We focus on one particular particle method, Smoothed Particle Hydrodynamics (SPH), one of the most widespread today in the literature as well as in engineering.
Entanglement distribution over quantum code-division multiple-access networks
NASA Astrophysics Data System (ADS)
Zhu, Chang-long; Yang, Nan; Liu, Yu-xi; Nori, Franco; Zhang, Jing
2015-10-01
We present a method for quantum entanglement distribution over a so-called code-division multiple-access network, in which two pairs of users share the same quantum channel to transmit information. The main idea of this method is to use different broadband chaotic phase shifts, generated by electro-optic modulators and chaotic Colpitts circuits, to encode the information-bearing quantum signals coming from different users and then to recover the masked quantum signals at the receiver side by imposing opposite chaotic phase shifts. The chaotic phase shifts given to different pairs of users are almost uncorrelated, owing to the randomness of chaos, and thus the quantum signals from different pairs of users can be distinguished even when they are sent via the same quantum channel. It is shown that two maximally entangled states can be generated between two pairs of users by our method, mediated by bright coherent light, which can be implemented more easily in experiments than single-photon light. Our method is robust under channel noise, provided that the decay rates of the information-bearing fields induced by the channel noise are not too high. Our study opens up new perspectives for addressing and transmitting quantum information in future quantum networks.
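The masking idea, encoding with a phase shift and decoding with the opposite shift, can be sketched classically. The uniform random phases below merely stand in for the chaotic Colpitts waveform, and no quantum effects are modeled; this only illustrates why uncorrelated phase keys separate the two user pairs.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
# Stand-in for an information-bearing field: a unit-amplitude complex tone
signal = np.exp(1j * 2 * np.pi * 0.01 * np.arange(n))

# Stand-in for two uncorrelated chaotic phase keys (one per user pair)
phase_a = rng.uniform(0, 2 * np.pi, n)
phase_b = rng.uniform(0, 2 * np.pi, n)

masked = signal * np.exp(1j * phase_a)        # encode at sender of pair A
recovered = masked * np.exp(-1j * phase_a)    # decode with the opposite shift
wrong = masked * np.exp(-1j * phase_b)        # decode with pair B's key
```

Decoding with the matching key restores the signal exactly, while decoding with the other pair's key leaves a residual whose correlation with the original averages toward zero, which is the mechanism that lets both pairs share one channel.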
The spatial distribution of fixed mutations within genes coding for proteins
NASA Technical Reports Server (NTRS)
Holmquist, R.; Goodman, M.; Conroy, T.; Czelusniak, J.
1983-01-01
An examination has been conducted of the extensive amino acid sequence data now available for five protein families - the alpha crystallin A chain, myoglobin, alpha and beta hemoglobin, and the cytochromes c - with the goal of estimating the true spatial distribution of base substitutions within genes that code for proteins. In every case the commonly used Poisson density failed to even approximate the experimental pattern of base substitution. For the 87 species of beta hemoglobin examined, for example, the probability that the observed results were from a Poisson process was a minuscule 10^(-44). Analogous results were obtained for the other functional families. All the data were reasonably, but not perfectly, described by the negative binomial density. In particular, most of the data were described by one of the very simple limiting forms of this density, the geometric density. The implications of this for evolutionary inference are discussed. It is evident that most estimates of total base substitutions between genes are badly in need of revision.
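The Poisson-versus-geometric comparison can be sketched as a log-likelihood contest on per-site substitution counts. The counts below are synthetic and overdispersed by construction (variance well above the mean), not the paper's sequence data; they simply show why a geometric fit wins in that regime.

```python
import math

def poisson_loglik(counts):
    """Log-likelihood under Poisson with MLE rate lambda = mean."""
    lam = sum(counts) / len(counts)
    return sum(k * math.log(lam) - lam - math.lgamma(k + 1) for k in counts)

def geometric_loglik(counts):
    """Geometric on {0,1,2,...}: P(k) = (1-p) * p**k, MLE p = m/(1+m)."""
    m = sum(counts) / len(counts)
    p = m / (1.0 + m)
    return sum(math.log(1.0 - p) + k * math.log(p) for k in counts)

# Synthetic per-site substitution counts: many invariant sites plus a
# heavy tail of hot spots, i.e. variance >> mean (overdispersion)
counts = [0] * 50 + [1] * 20 + [2] * 10 + [5] * 10 + [12] * 10
```

Because the Poisson density forces variance to equal the mean, it cannot accommodate the heavy tail, and the geometric likelihood dominates; the full negative binomial would interpolate between the two.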
Cost effectiveness of the 1993 Model Energy Code in Colorado
Lucas, R.G.
1995-06-01
This report documents an analysis of the cost effectiveness of the Council of American Building Officials' 1993 Model Energy Code (MEC) building thermal-envelope requirements for single-family homes in Colorado. The goal of this analysis was to compare the cost effectiveness of the 1993 MEC to current construction practice in Colorado based on an objective methodology that determined the total life-cycle cost associated with complying with the 1993 MEC. This analysis was performed for the range of Colorado climates. The costs and benefits of complying with the 1993 MEC were estimated from the consumer's perspective. The time when the homeowner realizes net cash savings (net positive cash flow) for homes built in accordance with the 1993 MEC was estimated to vary from 0.9 year in Steamboat Springs to 2.4 years in Denver. Compliance with the 1993 MEC was estimated to increase first costs by $1190 to $2274, resulting in an incremental down payment increase of $119 to $227 (at 10% down). The net present value of all costs and benefits to the home buyer, accounting for the mortgage and taxes, varied from a savings of $1772 in Springfield to a savings of $6614 in Steamboat Springs. The ratio of benefits to costs ranged from 2.3 in Denver to 3.8 in Steamboat Springs.
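The cash-flow logic behind the payback estimates can be sketched as follows. All inputs are hypothetical round numbers loosely in the report's range, and the sketch ignores the tax effects and fuel-price escalation that the report's full methodology includes.

```python
def years_to_positive_cash_flow(first_cost, down_frac, rate, term_years,
                                annual_energy_savings):
    """First year in which cumulative consumer cash flow turns positive.

    Illustrative only: incremental code-compliance cost is financed in the
    mortgage; taxes and fuel escalation are omitted for brevity.
    """
    down = first_cost * down_frac
    principal = first_cost - down
    # Standard level annual payment on the incremental mortgage principal
    pay = principal * rate / (1.0 - (1.0 + rate) ** -term_years)
    cumulative = -down
    for year in range(1, term_years + 1):
        cumulative += annual_energy_savings - pay
        if cumulative > 0:
            return year
    return None

# Hypothetical inputs: ~$2000 incremental first cost, 10% down,
# 8% 30-year mortgage, ~$350/year energy savings
payback_year = years_to_positive_cash_flow(2000.0, 0.10, 0.08, 30, 350.0)
```

With these illustrative numbers the cumulative cash flow turns positive in the second year, the same order as the 0.9-2.4 year range the report cites.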
Fast-coding robust motion estimation model in a GPU
NASA Astrophysics Data System (ADS)
García, Carlos; Botella, Guillermo; de Sande, Francisco; Prieto-Matias, Manuel
2015-02-01
Nowadays vision systems are used for countless purposes. Motion estimation, in particular, is a discipline that allows the extraction of relevant information such as pattern segmentation, 3D structure, or object tracking. However, the real-time requirements of most applications have limited its adoption, given the high-performance systems needed to meet response times. With the emergence of so-called highly parallel devices known as accelerators, this gap has narrowed. Two extreme endpoints in the spectrum of the most common accelerators are the Field Programmable Gate Array (FPGA) and the Graphics Processing Unit (GPU), which usually offer higher performance rates than general-purpose processors. Moreover, the use of GPUs as accelerators involves the efficient exploitation of any parallelism in the target application, a task that is not easy because performance rates are affected by many aspects that programmers must overcome. In this paper, we evaluate the OpenACC standard, a directive-based programming model that facilitates porting code to a GPU, in the context of a motion estimation application. The results confirm that this programming paradigm is suitable for such image processing applications, achieving very satisfactory acceleration in convolution-based problems such as the well-known Lucas & Kanade method.
Comparison of Ramsauer and Optical Model Neutron Angular Distributions
McNabb, D P; Anderson, J D; Bauer, R W; Dietrich, F S; Grimes, S M; Hagmann, C A
2004-04-20
In a recent paper it has been shown that the nuclear Ramsauer model does not do well in representing details of the angular distribution of neutron elastic scattering for incident energies of less than 60 MeV for {sup 208}Pb. We show that the default angular bin dispersion most widely used in Monte Carlo transport codes is such that the observed differences in angular shapes are on too fine a scale to affect transport calculations. The effect of increasing the number of Monte Carlo angle bins is studied to determine the dispersion necessary for calculations to be sensitive to the observed discrepancies in angular distributions. We also show that transport calculations are sensitive to differences in the elastic scattering cross section given by recent fits of {sup 208}Pb data compared with older fits.
Hot Water Distribution System Model Enhancements
Hoeschele, M.; Weitzel, E.
2012-11-01
This project involves enhancement of the HWSIM distribution system model to more accurately model pipe heat transfer. Recent laboratory testing efforts have indicated that the modeling of radiant heat transfer effects is needed to accurately characterize piping heat loss. An analytical methodology for integrating radiant heat transfer was implemented with HWSIM. Laboratory test data collected in another project was then used to validate the model for a variety of uninsulated and insulated pipe cases (copper, PEX, and CPVC). Results appear favorable, with typical deviations from lab results less than 8%.
Brannon, R.M.; Wong, M.K.
1996-08-01
A set of model interface guidelines, called MIG, is presented as a means by which any compliant numerical material model can be rapidly installed into any parent code without having to modify the model subroutines. Here, "model" usually means a material model such as one that computes stress as a function of strain, though the term may be extended to any numerical operation. "Parent code" means a hydrocode, finite element code, etc., which uses the model and enforces, say, the fundamental laws of motion and thermodynamics. MIG requires the model developer (who creates the model package) to specify model needs in a standardized but flexible way. MIG includes a dictionary of technical terms that allows developers and parent code architects to share a common vocabulary when specifying field variables. For portability, database management is the responsibility of the parent code. Input/output occurs via structured calling arguments. As much model information as possible (such as the lists of required inputs, as well as lists of precharacterized material data and special needs) is supplied by the model developer in an ASCII text file. Every MIG-compliant model also has three required subroutines to check data, to request extra field variables, and to perform model physics. To date, the MIG scheme has proven flexible in beta installations of a simple yield model, plus a more complicated viscodamage yield model, three electromechanical models, and a complicated anisotropic microcrack constitutive model. The MIG yield model has been successfully installed using identical subroutines in three vectorized parent codes and one parallel C++ code, all predicting comparable results. By maintaining one model for many codes, MIG facilitates code-to-code comparisons and reduces duplication of effort, thereby reducing the cost of installing and sharing models in diverse new codes.
A Combinatorial Geometry Code System with Model Testing Routines.
1982-10-08
GIFT, the Geometric Information For Targets code system, is used to mathematically describe the geometry of a three-dimensional vehicle such as a tank, truck, or helicopter. The geometric data generated are merged in vulnerability computer codes with the energy effects data of a selected munition to simulate the probabilities of malfunction or destruction of components when the vehicle is attacked by the selected munition. GIFT options include those which graphically display the vehicle, those which check the correctness of the geometry data, those which compute physical characteristics of the vehicle, and those which generate the geometry data used by vulnerability codes.
Modeling utilization distributions in space and time
Keating, K.A.; Cherry, S.
2009-01-01
W. Van Winkle defined the utilization distribution (UD) as a probability density that gives an animal's relative frequency of occurrence in a two-dimensional (x, y) plane. We extend Van Winkle's work by redefining the UD as the relative frequency distribution of an animal's occurrence in all four dimensions of space and time. We then describe a product kernel model estimation method, devising a novel kernel from the wrapped Cauchy distribution to handle circularly distributed temporal covariates, such as day of year. Using Monte Carlo simulations of animal movements in space and time, we assess estimator performance. Although not unbiased, the product kernel method yields models highly correlated (Pearson's r = 0.975) with true probabilities of occurrence and successfully captures temporal variations in density of occurrence. In an empirical example, we estimate the expected UD in three dimensions (x, y, and t) for animals belonging to each of two distinct bighorn sheep (Ovis canadensis) social groups in Glacier National Park, Montana, USA. Results show the method can yield ecologically informative models that successfully depict temporal variations in density of occurrence for a seasonally migratory species. Some implications of this new approach to UD modeling are discussed. © 2009 by the Ecological Society of America.
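A minimal sketch of the wrapped Cauchy kernel and a circular kernel density estimate built from it follows. This is illustrative only: the paper's estimator is a product kernel over spatial and temporal dimensions, of which only the circular temporal factor is shown, and the concentration value `rho` here is an arbitrary assumption.

```python
import math

def wrapped_cauchy(theta, mu, rho):
    """Wrapped Cauchy density on the circle; larger rho -> tighter kernel."""
    return (1.0 - rho ** 2) / (
        2.0 * math.pi * (1.0 + rho ** 2 - 2.0 * rho * math.cos(theta - mu)))

def circular_kde(theta, samples, rho=0.9):
    """Kernel density estimate for a circular covariate such as day of year,
    with each observation mapped to an angle (e.g. 2*pi*day/365)."""
    return sum(wrapped_cauchy(theta, s, rho) for s in samples) / len(samples)
```

Because the kernel is periodic, observations near December 31 correctly lend density to early January, which is exactly the behavior a linear kernel on day-of-year lacks.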
A Distributive Model of Treatment Acceptability
ERIC Educational Resources Information Center
Carter, Stacy L.
2008-01-01
A model of treatment acceptability is proposed that distributes overall treatment acceptability into three separate categories of influence. The categories comprise societal influences, consultant influences, and influences associated with consumers of treatments. Each of these categories is defined and their inter-relationships within…
Modeling global lightning distributions in a general circulation model
NASA Technical Reports Server (NTRS)
Price, Colin; Rind, David
1994-01-01
A general circulation model (GCM) is used to model global lightning distributions and frequencies. Both total and cloud-to-ground lightning frequencies are modeled using parameterizations that relate the depth of convective clouds to lightning frequencies. The model's simulations of lightning distributions in time and space show good agreement with available observations. The model's annual mean climatology shows a global lightning frequency of 77 flashes per second, with cloud-to-ground lightning making up 25% of the total. The maximum lightning activity in the GCM occurs during the Northern Hemisphere summer, with approximately 91% of all lightning occurring over continental and coastal regions.
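Parameterizations of the kind described above are commonly quoted as power laws in convective cloud-top height. The sketch below uses the widely cited continental and marine forms attributed to Price and Rind (1992); the exponents and prefactors should be checked against the original paper before any quantitative reuse.

```python
def flash_rate(cloud_top_height_km, continental=True):
    """Total flash rate (flashes per minute) from convective cloud-top
    height, in the widely quoted Price-and-Rind power-law form.

    Coefficients are as commonly cited in the literature; treat them as
    assumptions to be verified against the source.
    """
    H = cloud_top_height_km
    if continental:
        return 3.44e-5 * H ** 4.9
    return 6.4e-4 * H ** 1.73
```

The strong (roughly fifth-power) continental dependence is what concentrates the simulated lightning over deep continental convection, consistent with the ~91% land-and-coast fraction quoted above.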
Recent Developments of the Nuclear Reaction Model Code EMPIRE
Herman, M.; Oblozinsky, P.; Capote, R.; Trkov, A.; Zerkin, V.; Sin, M.; Ventura, A.
2005-05-24
Recent extensions and improvements of the EMPIRE code system are outlined. They add to the code new capabilities such as fission of actinides, preequilibrium emission of clusters, photo-nuclear reactions, and reactions on excited targets. These features, along with improved ENDF formatting, exclusive spectra, and recoils make the forthcoming 2.19 release a complete tool for evaluation of nuclear data at incident energies above the resonance region.
A Robust Model-Based Coding Technique for Ultrasound Video
NASA Technical Reports Server (NTRS)
Docef, Alen; Smith, Mark J. T.
1995-01-01
This paper introduces a new approach to coding ultrasound video, the intended application being very low bit rate coding for transmission over low cost phone lines. The method exploits both the characteristic noise and the quasi-periodic nature of the signal. Data compression ratios between 250:1 and 1000:1 are shown to be possible, which is sufficient for transmission over ISDN and conventional phone lines. Preliminary results show this approach to be promising for remote ultrasound examinations.
Droplet distribution models for visibility calculation
NASA Astrophysics Data System (ADS)
Bernardin, F.; Colomb, M.; Egal, F.; Morange, P.; Boreux, J.-J.
2010-07-01
More efficient predictions of fog occurrence and visibility are required in order to improve both safety and traffic management in critical adverse weather situations. Observation and simulation of fog characteristics contribute to a better understanding of the phenomena and to the adaptation of technical solutions against visibility reduction. The simulation of visibility reduction under fog conditions using a light scattering model depends on the size and concentration of droplets. It is therefore necessary to include in the software functions for a droplet distribution model rather than data files of single measurements. The aim of the present work is to revisit droplet distribution models of fog (Shettle and Fenn 1979) in order to update them using recent experimental measurements. Indeed, the models mentioned above were established with experimental data obtained with sensors of the 1970s; current sensors are able to take into account droplets with radii of 0.2 μm, which was not the case with older sensors. A surface observation campaign was carried out at Palaiseau and Toulouse, France, between 2006 and 2008. These experiments allowed the collection of microphysical data on fog, and particularly droplet distributions, thanks to a Palas optical granulometer. Based on these data, an analysis is carried out in order to provide a droplet distribution model. The first approach consists of testing the four Gamma laws proposed by Shettle and Fenn (1979); adjusting the coefficients allows the characteristics to be changed from advection to radiation fog. These functions did not fit the new set of data collected with the Palas sensor. New algorithms based on Gamma and Lognormal laws are proposed and discussed in comparison with the previous models. For a road application, the coefficients of the proposed models are evaluated for different classes of visibility, ranging from 50 to 200 meters.
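The Gamma-law family referred to above can be sketched with a modified gamma drop-size distribution of the form n(r) = a * r^alpha * exp(-b * r), the gamma = 1 member of the family used in fog models. The coefficients below are illustrative assumptions; the fitted values are in the paper.

```python
import math

def modified_gamma(r, a, alpha, b):
    """Modified gamma drop-size distribution n(r) = a * r**alpha * exp(-b*r),
    with r the droplet radius (e.g. in micrometers)."""
    return a * r ** alpha * math.exp(-b * r)

def mode_radius(alpha, b):
    """Radius at which n(r) peaks: setting dn/dr = 0 gives r = alpha / b."""
    return alpha / b
```

Shifting from advection to radiation fog in such models amounts to changing alpha and b, which moves the mode radius and reshapes the tail that controls visibility.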
Aerosol Behavior Log-Normal Distribution Model.
2001-10-22
HAARM3, an acronym for Heterogeneous Aerosol Agglomeration Revised Model 3, is the third program in the HAARM series developed to predict the time-dependent behavior of radioactive aerosols under postulated LMFBR accident conditions. HAARM3 was developed to include mechanisms of aerosol growth and removal which had not been accounted for in the earlier models. In addition, experimental measurements obtained on sodium oxide aerosols have been incorporated in the code. As in HAARM2, containment gas temperature, pressure, and temperature gradients normal to interior surfaces are permitted to vary with time. The effects of reduced density on sodium oxide agglomerate behavior and of nonspherical shape of particles on aerosol behavior mechanisms are taken into account, and aerosol agglomeration due to turbulent air motion is considered. Also included is a capability to calculate aerosol concentration attenuation factors and to restart problems requiring long computing times.
Modelling 2001 lahars at Popocatépetl volcano using FLO2D numerical code
NASA Astrophysics Data System (ADS)
Caballero, L.; Capra, L.
2013-12-01
Popocatépetl volcano is located in the central part of the Trans-Mexican Volcanic Belt. It is one of the most active volcanoes in Mexico and endangers more than 25 million people who live in its surroundings. In recent months, the renewal of its volcanic activity has put the scientific community on alert. One of the possible scenarios is a repeat of the 2001 explosive activity, which was characterized by an 8 km eruptive column and the subsequent formation of pumice flows up to 4 km from the crater. Lahars were generated a few hours later, remobilizing the fresh deposits down the NE flank of the volcano along Huiloac Gorge, almost reaching the town of Santiago Xalitzintla (Capra et al., 2004). The possibility of a similar scenario makes it very important to reproduce this event in order to delimit lahar hazard zones accurately. In this work, the 2001 lahar deposit is modeled using the FLO2D numerical code. Geophone data are used to reconstruct the initial hydrograph and sediment concentration. A sensitivity study of the most important parameters used by this code, such as the Manning coefficient and the α and β coefficients, was conducted in order to achieve a good simulation. The results were compared with field data and show good agreement in thickness and flow distribution. A comparison with previously published LAHARZ results (Muñoz-Salinas, 2009) is also made. Additionally, lahars with varying sediment concentrations but similar volumes are simulated to assess the influence of rheological behavior on lahar distribution.
Multiple-source models for electron beams of a medical linear accelerator using BEAMDP computer code
Jabbari, Nasrollah; Barati, Amir Hoshang; Rahmatnezhad, Leili
2012-01-01
Aim The aim of this work was to develop multiple-source models for electron beams of the NEPTUN 10PC medical linear accelerator using the BEAMDP computer code. Background One of the most accurate techniques of radiotherapy dose calculation is Monte Carlo (MC) simulation of radiation transport, which requires detailed information about the beam in the form of a phase-space file. The computing time required to simulate the beam data and obtain phase-space files from a clinical accelerator is significant. Calculation of dose distributions using multiple-source models is an alternative to using phase-space data as direct input to the dose calculation system. Materials and methods Monte Carlo simulation of the accelerator head was performed, recording the particle phase space with the details of each particle history. Multiple-source models were built from the phase-space files of the Monte Carlo simulations. These simplified beam models were then used in Monte Carlo dose calculations, which were compared with calculations based on the phase-space data. Results Comparison of the measured and calculated dose distributions using the phase-space files and multiple-source models for three electron beam energies showed that the measured and calculated values match each other well throughout the curves. Conclusion Dose distributions calculated using the multiple-source models and the phase-space data agree within 1.3%, demonstrating that the models can be used for dosimetry research and dose calculations in radiotherapy. PMID:24377026
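The idea of a multiple-source model can be illustrated with a toy sampler. The sketch below uses a simplified two-source parameterization of our own devising (the weights, source planes, lateral widths and energy spectra are assumptions, not BEAMDP output); it shows how particles are drawn from weighted sub-sources instead of being replayed from a large phase-space file.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative two-source model: a "direct" source at the scattering foil
# and a broader "scattered" source near the applicator, each with its own
# relative weight, lateral spread and Gaussian energy spectrum.
sources = [
    {"weight": 0.8, "z_cm": 0.0,  "sigma_cm": 0.5, "e_mean": 9.0, "e_sd": 0.5},
    {"weight": 0.2, "z_cm": 95.0, "sigma_cm": 2.0, "e_mean": 6.0, "e_sd": 1.5},
]

def sample_particles(n):
    """Draw n particles (x, y, z, energy) from the multiple-source model."""
    weights = np.array([s["weight"] for s in sources])
    idx = rng.choice(len(sources), size=n, p=weights / weights.sum())
    out = np.empty((n, 4))
    for i, s in enumerate(sources):
        m = idx == i
        k = int(m.sum())
        out[m, 0:2] = rng.normal(0.0, s["sigma_cm"], size=(k, 2))  # lateral position
        out[m, 2] = s["z_cm"]                                      # source plane
        out[m, 3] = rng.normal(s["e_mean"], s["e_sd"], size=k)     # energy (MeV)
    return out

phase_space = sample_particles(100_000)
print("mean energy (MeV):", phase_space[:, 3].mean())
```

A dose engine would consume these sampled particles exactly as it would phase-space records, which is what makes the source-model route cheaper to store and regenerate.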
A void distribution model-flashing flow
Riznic, J.; Ishii, M.; Afgan, N.
1987-01-01
A new model for flashing flow based on wall nucleation is proposed, and its predictions are compared with experimental data. To calculate the bubble number density, a bubble number transport equation with a distributed source from wall nucleation sites was used, avoiding the usual assumption of a constant bubble number density. Comparison of the model with the data shows that the model, based on a nucleation site density correlation, adequately describes vapor generation in flashing flow. For the limited data examined, the comparisons show satisfactory agreement without the use of a floating parameter to tune the model. This result indicates that, at least for the experimental conditions considered here, mechanistic prediction of the flashing phenomenon is possible with the present wall-nucleation-based model.
Modeling depth distributions of overland flows
NASA Astrophysics Data System (ADS)
Smith, Mark W.; Cox, Nicholas J.; Bracken, Louise J.
2011-02-01
Hydrological and erosion models use water depth to estimate routing velocity and resultant erosion at each spatial element. Yet the shear stress distribution imposed on the soil surface, and any resulting flow detachment and rill incision, is controlled by the full probability distribution of overland flow depths. Terrestrial Laser Scanning (TLS) is used in conjunction with simple field-flume experiments to provide high-resolution measurements of overland flow depth distributions for three semi-arid hillslope transects with differing soil properties. A two-parameter gamma distribution is proposed as the optimum model for the depths of both interrill and rill flows. The shape and scale parameters are shown to vary consistently with distance downslope, reflecting the morphological signature of runoff processes. The scale parameter is related to the general increase of depths with discharge (P < 0.0001) as flows gradually concentrate; the shape parameter is more closely related to soil surface roughness and potentially controls the rate at which depth, and hence velocity, increases with discharge. Such interactions between surface roughness and overland flows are of crucial importance for flow hydraulics and modeling sediment transport.
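The two parameters of the gamma model can be estimated directly from sample moments. The snippet below is a minimal sketch using synthetic depth samples (the parameter values and the "upslope"/"downslope" contrast are illustrative assumptions, not the TLS measurements): the shape k follows from the mean and variance, and a larger scale parameter downslope mimics flow concentration.

```python
import numpy as np

def gamma_moments(depths):
    """Method-of-moments estimates for the two-parameter gamma distribution:
    shape k = mean^2 / var, scale theta = var / mean."""
    depths = np.asarray(depths, dtype=float)
    mean, var = depths.mean(), depths.var(ddof=1)
    return mean**2 / var, var / mean

rng = np.random.default_rng(2)
# Hypothetical depth samples (mm) at two downslope positions.
upslope = rng.gamma(2.0, 1.0, size=5000)     # shallow interrill flow
downslope = rng.gamma(2.0, 3.0, size=5000)   # deeper, concentrated flow

for name, sample in [("upslope", upslope), ("downslope", downslope)]:
    k, theta = gamma_moments(sample)
    print(f"{name}: shape={k:.2f}, scale={theta:.2f}")
```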
Modeling Mosquito Distribution. Impact of the Landscape
NASA Astrophysics Data System (ADS)
Dumont, Y.
2011-09-01
In order to use vector control tools, such as insecticides and mechanical control, efficiently, it is necessary to provide estimates of mosquito density and distribution, taking into account the environment and entomological knowledge. Modeling mosquito dispersal with a compartmental approach leads to a quasilinear parabolic system. Using a time-splitting approach and appropriate numerical methods for each operator, we construct a reliable numerical scheme. Considering various landscapes, we show that the environment can have a strong influence on mosquito distribution and, thus, on the efficiency of vector control.
Joint physical and numerical modeling of water distribution networks.
Zimmerman, Adam; O'Hern, Timothy John; Orear, Leslie Jr.; Kajder, Karen C.; Webb, Stephen Walter; Cappelle, Malynda A.; Khalsa, Siri Sahib; Wright, Jerome L.; Sun, Amy Cha-Tien; Chwirka, J. Benjamin; Hartenberger, Joel David; McKenna, Sean Andrew; van Bloemen Waanders, Bart Gustaaf; McGrath, Lucas K.; Ho, Clifford Kuofei
2009-01-01
This report summarizes the experimental and modeling effort undertaken to understand solute mixing in a water distribution network, conducted during the last year of a 3-year project. The experimental effort involves measurement of the extent of mixing within different configurations of pipe networks, measurement of dynamic mixing in a single mixing tank, and measurement of dynamic solute mixing in a combined network-tank configuration. High-resolution analysis of turbulent mixing is carried out via high-speed photography as well as 3D finite-volume-based Large Eddy Simulation turbulence models. Macroscopic mixing rules based on flow momentum balance are also explored and, in some cases, implemented in EPANET. A new version of the EPANET code was developed to yield better mixing predictions. The impact of a storage tank on pipe mixing in a combined pipe-tank network during diurnal fill-and-drain cycles is assessed. A preliminary comparison between dynamic pilot data and EPANET-BAM is also reported.
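The baseline junction rule that such macroscopic mixing studies refine is standard EPANET's complete-mixing assumption: every pipe leaving a junction carries the flow-weighted mean of the inflow concentrations. A minimal sketch:

```python
def junction_outflow_concentration(inflows):
    """Complete-mixing rule at a pipe junction: all outgoing pipes carry
    the flow-weighted mean of the inflow concentrations.
    `inflows` is a list of (flow_rate, concentration) pairs."""
    total_flow = sum(q for q, _ in inflows)
    if total_flow == 0:
        return 0.0
    return sum(q * c for q, c in inflows) / total_flow

# Two inflows: 10 L/s of clean water and 5 L/s at 30 mg/L.
c_out = junction_outflow_concentration([(10.0, 0.0), (5.0, 30.0)])
print(c_out)  # 10.0 mg/L
```

The experiments summarized above show that real cross junctions mix incompletely, which is precisely the behavior the improved mixing predictions aim to capture beyond this flow-weighted baseline.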
Building a generalized distributed system model
NASA Technical Reports Server (NTRS)
Mukkamala, R.
1992-01-01
The key elements in the second year (1991-92) of our project are: (1) implementation of the distributed system prototype; (2) successful passing of the candidacy examination and a PhD proposal acceptance by the funded student; (3) design of storage efficient schemes for replicated distributed systems; and (4) modeling of gracefully degrading reliable computing systems. In the third year of the project (1992-93), we propose to: (1) complete the testing of the prototype; (2) enhance the functionality of the modules by enabling the experimentation with more complex protocols; (3) use the prototype to verify the theoretically predicted performance of locking protocols, etc.; and (4) work on issues related to real-time distributed systems. This should result in efficient protocols for these systems.
Distributed earth model/orbiter simulation
NASA Technical Reports Server (NTRS)
Geisler, Erik; Mcclanahan, Scott; Smith, Gary
1989-01-01
Distributed Earth Model/Orbiter Simulation (DEMOS) is a network based application developed for the UNIX environment that visually monitors or simulates the Earth and any number of orbiting vehicles. Its purpose is to provide Mission Control Center (MCC) flight controllers with a visually accurate three dimensional (3D) model of the Earth, Sun, Moon and orbiters, driven by real time or simulated data. The project incorporates a graphical user interface, 3D modelling employing state-of-the-art hardware, and simulation of orbital mechanics in a networked/distributed environment. The user interface is based on the X Window System and the X Ray toolbox. The 3D modelling utilizes the Programmer's Hierarchical Interactive Graphics System (PHIGS) standard and Raster Technologies hardware for rendering/display performance. The simulation of orbiting vehicles uses two methods of vector propagation implemented with standard UNIX/C for portability. Each part is a distinct process that can run on separate nodes of a network, exploiting each node's unique hardware capabilities. The client/server communication architecture of the application can be reused for a variety of distributed applications.
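Vector propagation of an orbiting vehicle can be sketched with a generic two-body RK4 integrator. This is a standard textbook method, shown here for illustration only; the abstract does not specify which two propagation methods DEMOS actually implements.

```python
import numpy as np

MU_EARTH = 398600.4418  # km^3/s^2, Earth's standard gravitational parameter

def two_body_deriv(state):
    """Time derivative of state = [x, y, z, vx, vy, vz] under point-mass gravity."""
    r = state[:3]
    a = -MU_EARTH * r / np.linalg.norm(r) ** 3
    return np.concatenate([state[3:], a])

def rk4_step(state, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = two_body_deriv(state)
    k2 = two_body_deriv(state + 0.5 * dt * k1)
    k3 = two_body_deriv(state + 0.5 * dt * k2)
    k4 = two_body_deriv(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Circular low-Earth orbit at 6778 km radius (about 400 km altitude).
r0 = 6778.0
v0 = np.sqrt(MU_EARTH / r0)
state = np.array([r0, 0.0, 0.0, 0.0, v0, 0.0])
for _ in range(1000):
    state = rk4_step(state, 1.0)  # propagate in 1 s steps
print("radius after 1000 s (km):", np.linalg.norm(state[:3]))
```

For a circular orbit the radius should stay essentially constant, a quick self-check for any propagator feeding a display like DEMOS.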
The non-power model of the genetic code: a paradigm for interpreting genomic information.
Gonzalez, Diego Luis; Giannerini, Simone; Rosa, Rodolfo
2016-03-13
In this article, we present a mathematical framework based on redundant (non-power) representations of integer numbers as a paradigm for the interpretation of genomic information. The core of the approach relies on modelling the degeneracy of the genetic code. The model allows one to explain many features and symmetries of the genetic code and to uncover hidden symmetries. Also, it provides us with new tools for the analysis of genomic sequences. We briefly review three main areas: (i) the Euplotid nuclear code, (ii) the vertebrate mitochondrial code, and (iii) the main coding/decoding strategies used in the three domains of life. In every case, we show how the non-power model is a natural unified framework for describing degeneracy and deriving sound biological hypotheses on protein coding. The approach is rooted in number theory and group theory; nevertheless, we have kept the technical level to a minimum by focusing on key concepts and on the biological implications. PMID:26857679
A nonlocal electron conduction model for multidimensional radiation hydrodynamics codes
NASA Astrophysics Data System (ADS)
Schurtz, G. P.; Nicolaï, Ph. D.; Busquet, M.
2000-10-01
Numerical simulation of laser driven Inertial Confinement Fusion (ICF) related experiments requires the use of large multidimensional hydro codes. Though these codes include detailed physics for numerous phenomena, they deal poorly with electron conduction, which is the leading energy transport mechanism in these systems. Electron heat flow has been known, since the work of Luciani, Mora, and Virmont (LMV) [Phys. Rev. Lett. 51, 1664 (1983)], to be a nonlocal process, which the local Spitzer-Harm theory, even flux limited, is unable to account for. The present work aims at extending the original LMV formula to two or three dimensions of space. This multidimensional extension leads to an equivalent transport equation suitable for easy implementation in a two-dimensional radiation-hydrodynamics code. Simulations are presented and compared to Fokker-Planck simulations in one and two dimensions of space.
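In one dimension with a uniform delocalization length, an LMV-style nonlocal flux reduces to a convolution of the local Spitzer-Harm flux with a normalized exponential kernel. The sketch below uses illustrative units and an arbitrary flux profile (not the paper's multidimensional formulation) to show the characteristic effect: the nonlocal flux is smoothed and its peak reduced relative to the local estimate.

```python
import numpy as np

# 1D grid with a local (Spitzer-Harm) flux peaked near x = 0.
x = np.linspace(-50.0, 50.0, 1001)   # position, arbitrary units
q_sh = np.exp(-x**2 / 25.0)          # local flux profile, arbitrary units

# LMV-style nonlocal flux for a uniform delocalization length lam:
#   q_nl(x) = (1 / (2 lam)) * integral q_sh(x') exp(-|x - x'| / lam) dx'
lam = 10.0
dx = x[1] - x[0]
kernel = np.exp(-np.abs(x - x[:, None]) / lam) / (2.0 * lam)
q_nl = kernel @ q_sh * dx            # discrete convolution with the kernel

print("peak local flux:   ", q_sh.max())
print("peak nonlocal flux:", q_nl.max())
```

Because the kernel integrates to one, the total flux is approximately preserved while steep gradients are delocalized over a few lam, which is the qualitative behavior flux limiters try, imperfectly, to mimic.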
Physical Model for the Evolution of the Genetic Code
NASA Astrophysics Data System (ADS)
Yamashita, Tatsuro; Narikiyo, Osamu
2011-12-01
Using the shape space of codons and tRNAs, we give a physical description of genetic code evolution that treats the codon capture and ambiguous intermediate scenarios in a consistent manner. In the lowest-dimensional version of our description, a physical quantity, the codon level, is introduced; in terms of codon levels, the two scenarios correspond to two different routes of the evolutionary process. For the ambiguous intermediate scenario, we perform an evolutionary simulation implementing cost-based selection of amino acids and confirm a rapid transition of the code change. Such rapidity mitigates the discomfort of non-unique translation of the code at the intermediate state, which is the weakness of this scenario. For the codon capture scenario, survival against mutations under a mutational pressure minimizing GC content in genomes is simulated, and it is demonstrated that cells which experience only neutral mutations survive.
Modeling Emergent Macrophyte Distributions: Including Sub-dominant Species
Mixed stands of emergent vegetation are often present following drawdowns but models of wetland plant distributions fail to include subdominant species when predicting distributions. Three variations of a spatial plant distribution cellular automaton model were developed to explo...
Grid-Xinanjiang Distributed Hydrologic Model
NASA Astrophysics Data System (ADS)
Li, Z.; Yao, C.; Yu, Z.
2009-12-01
The grid-based distributed Xinanjiang (Grid-Xinanjiang) model, which combines the well-tested conceptual rainfall-runoff model with a physically based flow routing model, has been developed for hydrologic process simulation and flood forecasting. A DEM is used to derive flow directions, routing sequences, and hillslope and channel slopes. The model includes canopy interception, direct channel precipitation, evapotranspiration, and runoff generation via the saturation-excess mechanism. A diffusion wave formulation, accounting for the influence of upstream inflow, direct channel precipitation and flow partitioning to the channels, is used to route hillslope and channel flow on a cell basis. The Grid-Xinanjiang model is applied at a 1-km grid scale to a nested basin located in the Huaihe basin, China. The basin, with a drainage area of 2692.7 km2, contains five internal points where observed streamflow data are available, and is used to evaluate the developed model's ability to simulate hydrologic processes within the basin. Calibration and verification of the Grid-Xinanjiang model are carried out at both daily and hourly time steps. The model is assessed by comparing streamflow and water stage simulations with observations at the basin outlet and at gauging stations within the basin, and also with simulations from the original Xinanjiang model. The results indicate that the parameter estimation approach is efficient and that the developed model can forecast the streamflow and stage hydrographs well.
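Deriving flow directions from a DEM, as mentioned above, is commonly done with the D8 (steepest-descent) rule. The sketch below is a minimal illustration assuming the usual eight-neighbour convention; the abstract does not state which flow-direction algorithm Grid-Xinanjiang actually uses.

```python
import numpy as np

def d8_flow_direction(dem):
    """Steepest-descent (D8) flow direction for each interior DEM cell.
    Returns (di, dj) offsets toward the lowest of the 8 neighbours,
    or (0, 0) for pits. Diagonal drops are divided by sqrt(2)."""
    nrow, ncol = dem.shape
    dirs = np.zeros((nrow, ncol, 2), dtype=int)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    for i in range(1, nrow - 1):
        for j in range(1, ncol - 1):
            best_slope, best = 0.0, (0, 0)
            for di, dj in offsets:
                slope = (dem[i, j] - dem[i + di, j + dj]) / np.hypot(di, dj)
                if slope > best_slope:
                    best_slope, best = slope, (di, dj)
            dirs[i, j] = best
    return dirs

# Tilted plane sloping down toward increasing column index.
dem = np.tile(np.arange(5, 0, -1.0), (5, 1))
dirs = d8_flow_direction(dem)
print("direction at (2, 2):", dirs[2, 2])  # expect (0, 1): straight downslope
```

Routing sequences then follow by topologically ordering cells along these directions, so each cell is processed after all cells draining into it.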
Coding of odors by temporal binding within a model network of the locust antennal lobe
Patel, Mainak J.; Rangan, Aaditya V.; Cai, David
2013-01-01
The locust olfactory system interfaces with the external world through antennal receptor neurons (ORNs), which represent odors in a distributed, combinatorial manner. ORN axons bundle together to form the antennal nerve, which relays sensory information centrally to the antennal lobe (AL). Within the AL, an odor generates a dynamically evolving ensemble of active cells, leading to a stimulus-specific temporal progression of neuronal spiking. This experimental observation has led to the hypothesis that an odor is encoded within the AL by a dynamically evolving trajectory of projection neuron (PN) activity that can be decoded piecewise to ascertain odor identity. In order to study information coding within the locust AL, we developed a scaled-down model of the locust AL using Hodgkin–Huxley-type neurons and biologically realistic connectivity parameters and current components. Using our model, we examined correlations in the precise timing of spikes across multiple neurons, and our results suggest an alternative to the dynamic trajectory hypothesis. We propose that the dynamical interplay of fast and slow inhibition within the locust AL induces temporally stable correlations in the spiking activity of an odor-dependent neural subset, giving rise to a temporal binding code that allows rapid stimulus detection by downstream elements. PMID:23630495
EXTENSION OF THE NUCLEAR REACTION MODEL CODE EMPIRE TO ACTINIDES NUCLEAR DATA EVALUATION.
CAPOTE,R.; SIN, M.; TRKOV, A.; HERMAN, M.; CARLSON, B.V.; OBLOZINSKY, P.
2007-04-22
Recent extensions and improvements of the EMPIRE code system are outlined. They add new capabilities to the code, such as prompt fission neutron spectra calculations using Hauser-Feshbach plus pre-equilibrium pre-fission spectra, cross section covariance matrix calculations by the Monte Carlo method, fitting of optical model parameters, an extended set of optical model potentials including new dispersive coupled channel potentials, parity-dependent level densities and transmission through numerically defined fission barriers. These features, along with improved and validated ENDF formatting, exclusive/inclusive spectra, and recoils make the current EMPIRE release a complete and well-validated tool for evaluation of nuclear data at incident energies above the resonance region. The current EMPIRE release has been used in evaluations of neutron induced reaction files for {sup 232}Th and {sup 231,233}Pa nuclei in the fast neutron region at IAEA. Triple-humped fission barriers and exclusive pre-fission neutron spectra were considered for the fission data evaluation. Total, fission, capture and neutron emission cross sections, average resonance parameters and angular distributions of neutron scattering are in excellent agreement with the available experimental data.
Error-correcting code on a cactus: A solvable model
NASA Astrophysics Data System (ADS)
Vicente, R.; Saad, D.; Kabashima, Y.
2000-09-01
An exact solution to a family of parity check error-correcting codes is provided by mapping the problem onto a Husimi cactus. The solution obtained in the thermodynamic limit recovers the replica-symmetric theory results and provides a very good approximation to finite systems of moderate size. The probability propagation decoding algorithm emerges naturally from the analysis. A phase transition between decoding success and failure phases is found to coincide with an information-theoretic upper bound. The method is employed to compare Gallager and MN codes.
On the validation of a code and a turbulence model appropriate to circulation control airfoils
NASA Technical Reports Server (NTRS)
Viegas, J. R.; Rubesin, M. W.; Maccormack, R. W.
1988-01-01
A computer code for calculating flow about a circulation control airfoil within a wind tunnel test section has been developed. This code is being validated for eventual use as an aid to design such airfoils. The concept of code validation being used is explained. The initial stages of the process have been accomplished. The present code has been applied to a low-subsonic, 2-D flow about a circulation control airfoil for which extensive data exist. Two basic turbulence models and variants thereof have been successfully introduced into the algorithm, the Baldwin-Lomax algebraic and the Jones-Launder two-equation models of turbulence. The variants include adding a history of the jet development for the algebraic model and adding streamwise curvature effects for both models. Numerical difficulties and difficulties in the validation process are discussed. Turbulence model and code improvements to proceed with the validation process are also discussed.
Computer model of crossed-field devices using moving wavelength codes
McDowell, H.L.
1996-12-31
DECFA and DEMAG are moving wavelength, particle in cell codes for modeling crossed-field amplifiers (CFAs) and magnetrons. The codes model the interaction between a single traveling wave on a smooth anode surface and the space charge in crossed electric and magnetic fields. The detailed anode vane tip geometry is not included in the model. Periodic boundary conditions are imposed on the sides of the moving interaction wavelength thereby imposing the wave periodicity on the solution. In spite of the assumptions involved, the codes successfully model the performance of many existing CFAs and magnetrons. Correlation of computer model and experimental results will be presented for typical devices. The only failures of the codes to correlate with device performance have occurred for small gap anode vane tip geometries which degrade the efficiency of electron collection. To avoid such possibilities, the simulation codes need to be supplemented with trajectory tracing studies of electrons between anode vanes. Results of such studies will be presented.
Wangerin, K; Culbertson, C N; Jevremovic, T
2005-08-01
The goal of this study was to evaluate the COG Monte Carlo radiation transport code, developed and tested by Lawrence Livermore National Laboratory, for gadolinium neutron capture therapy (GdNCT) related modeling. The validity of the COG NCT model has been established previously; here the calculation was extended to analyze the effect of various gadolinium concentrations on the dose distribution and cell-kill effect of the GdNCT modality and to determine the optimum therapeutic conditions for treating brain cancers. The computational results were compared with the widely used MCNP code. The differences between the COG and MCNP predictions were generally small and suggest that the COG code can be applied to similar research problems in NCT. Results of this study also showed that a concentration of 100 ppm gadolinium in the tumor was most beneficial when using an epithermal neutron beam. PMID:16010124
Ramshaw, J D
2000-10-01
A simple model was recently described for predicting the time evolution of the width of the mixing layer at an unstable fluid interface [J. D. Ramshaw, Phys. Rev. E 58, 5834 (1998); ibid. 61, 5339 (2000)]. The ordinary differential equations of this model have been heuristically generalized into partial differential equations suitable for implementation in multicomponent hydrodynamics codes. The central ingredient in this generalization is a non-diffusional expression for the species mass fluxes. These fluxes describe the relative motion of the species, and thereby determine the local mixing rate and spatial distribution of mixed fluid as a function of time. The generalized model has been implemented in a two-dimensional hydrodynamics code. The model equations and implementation procedure are summarized, and comparisons with experimental mixing data are presented.
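Mixing-layer width models of this family are often written as buoyancy-drag ordinary differential equations. The sketch below integrates a generic equation of that type with illustrative coefficients (these are assumptions, not the values or the exact form from Ramshaw's papers) and recovers the familiar self-similar growth h ≈ α A g t² for constant acceleration.

```python
# Generic buoyancy-drag model for mixing-layer width h(t):
#   dh/dt = v,   dv/dt = beta * A * g - C_d * v**2 / h
# The self-similar solution is h = a t^2 with a = beta*A*g / (2 + 4*C_d).
A, g = 0.5, 9.81      # Atwood number, acceleration (m/s^2)
beta, C_d = 2.0, 1.0  # growth and drag coefficients (illustrative)

h, v, dt = 1e-4, 0.0, 1e-4
t = 0.0
for _ in range(200000):               # integrate to t = 20 s (forward Euler)
    dv = beta * A * g - C_d * v * v / h
    h += v * dt
    v += dv * dt
    t += dt

# Effective self-similar growth coefficient, h ~ alpha * A * g * t^2.
alpha = h / (A * g * t**2)
print(f"effective alpha = {alpha:.3f}")  # analytic value: beta/(2+4*C_d) = 1/3
```

After the initial transient, the numerical alpha settles near the analytic attractor value, so the ODE model reproduces quadratic Rayleigh-Taylor-type growth with a single dimensionless coefficient.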
Spatio-temporal Modeling of Mosquito Distribution
NASA Astrophysics Data System (ADS)
Dumont, Y.; Dufourd, C.
2011-11-01
We consider a quasilinear parabolic system to model mosquito displacement. In order to use vector control tools, such as insecticides and mechanical control, efficiently, it is necessary to provide density estimates of mosquito populations, taking into account the environment and entomological knowledge. After a brief introduction to mosquito dispersal modeling, we present some theoretical results. Then, using a compartmental approach, we obtain a quasilinear system of PDEs. Using a time-splitting approach and appropriate numerical methods for each operator, we construct a reliable numerical scheme. Considering vector control scenarios, we show that the environment can have a strong influence on mosquito distribution and on the efficiency of vector control tools.
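The time-splitting idea can be illustrated on a single compartment with diffusion and logistic growth, advancing the two operators separately each step (Strang splitting). All parameters, the grid, and the boundary handling below are illustrative assumptions, not the paper's scheme.

```python
import numpy as np

# One compartment of a dispersal model: du/dt = D u_xx + r u (1 - u/K),
# split as diffusion half-step, reaction full step, diffusion half-step.
D, r, K = 0.1, 0.5, 1.0
nx, dx, dt = 101, 0.1, 0.01
u = np.zeros(nx)
u[nx // 2] = 1.0  # initial point release at the domain centre

def diffuse(u, tau):
    """Explicit diffusion step; boundary cells held fixed (release is far away)."""
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    return u + tau * D * lap

def react(u, tau):
    """Explicit logistic growth step toward carrying capacity K."""
    return u + tau * r * u * (1 - u / K)

for _ in range(500):                 # integrate to t = 5
    u = diffuse(u, dt / 2)
    u = react(u, dt)
    u = diffuse(u, dt / 2)

print("total density:", u.sum() * dx, "peak:", u.max())
```

Splitting lets each operator use its own well-suited method (here both explicit, with the diffusion step satisfying the stability bound D dt / dx² ≤ 1/2), which is the practical appeal of the approach for the full quasilinear system.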
The Shell-Model Code NuShellX@MSU
Brown, B.A.; Rae, W.D.M.
2014-06-15
Use of the code NuShellX@MSU is outlined. It connects to the ENSDF data files for automatic comparisons to energy level data. Operator overlaps provide predictions for spectroscopic factors, two-nucleon transfer amplitudes, nuclear moments, gamma decay and beta decay.
The APS SASE FEL : modeling and code comparison.
Biedron, S. G.
1999-04-20
A self-amplified spontaneous emission (SASE) free-electron laser (FEL) is under construction at the Advanced Photon Source (APS). Five FEL simulation codes were used in the design phase: GENESIS, GINGER, MEDUSA, RON, and TDA3D. Initial comparisons between each of these independent formulations show good agreement for the parameters of the APS SASE FEL.
Oscillations in SIRS model with distributed delays
NASA Astrophysics Data System (ADS)
Gonçalves, S.; Abramson, G.; Gomes, M. F. C.
2011-06-01
The ubiquity of oscillations in epidemics presents a long-standing challenge for the formulation of epidemic models. Whether they are external and seasonally driven, or arise from the intrinsic dynamics, is an open problem. It is known that fixed time delays destabilize the steady-state solution of the standard SIRS model, giving rise to stable oscillations for certain parameter values. In this contribution, starting from the classical SIRS model, we give a general treatment of the recovery and loss-of-immunity terms. We present oscillation diagrams (amplitude and period) in terms of the parameters of the model, showing how oscillations can be destabilized by the shape of the distributions of the two characteristic (infectious and immune) times. The formulation is made in terms of delay equations which are both numerically integrated and linearized. Results from simulations are included, showing where they support the linear analysis and explaining the discrepancies where they do not. Considerations on, and comparisons with, real diseases are also presented.
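One standard way to integrate an SIRS model with a distributed immune period is the linear chain trick: a Gamma (Erlang) distributed delay with mean tau_r is exactly equivalent to passing R through n sequential exponential stages. The sketch below uses illustrative parameters of our own choosing, not the paper's; larger n makes the delay distribution sharper, which is the shape effect discussed above.

```python
# SIRS with an Erlang-distributed immune period via the linear chain trick.
beta, tau_i, tau_r, n = 0.5, 5.0, 50.0, 8  # illustrative parameters

def step(state, dt):
    """One forward-Euler step of S, I and the n immune stages R_1..R_n."""
    s, i, *r = state
    new_inf = beta * s * i          # incidence
    recov = i / tau_i               # recovery into first immune stage
    ds = -new_inf + r[-1] * n / tau_r   # last stage loses immunity back to S
    di = new_inf - recov
    dr = [0.0] * n
    dr[0] = recov - r[0] * n / tau_r
    for k in range(1, n):
        dr[k] = (r[k - 1] - r[k]) * n / tau_r
    return [x + dt * d for x, d in zip(state, [ds, di] + dr)]

state = [0.99, 0.01] + [0.0] * n    # fractions; total population = 1
traj = []
for _ in range(100000):             # integrate to t = 1000
    state = step(state, 0.01)
    traj.append(state[1])
print("final infectious fraction:", traj[-1])
```

Because the stage derivatives sum to zero, total population is conserved exactly, a useful invariant to check in any such integration.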
The modeling of core melting and in-vessel corium relocation in the APRIL code
Kim, S.W.; Podowski, M.Z.; Lahey, R.T.
1995-09-01
This paper is concerned with the modeling of severe accident phenomena in boiling water reactors (BWRs). New models of core melting and in-vessel corium debris relocation are presented, developed for implementation in the APRIL computer code. The results of model testing and validation are given, including comparisons against available experimental data and parametric/sensitivity studies. Also, the application of these models, as parts of the APRIL code, is presented to simulate accident progression in a typical BWR reactor.
Inverse distributed hydrological modelling of Alpine catchments
NASA Astrophysics Data System (ADS)
Kunstmann, H.; Krause, J.; Mayr, S.
2006-06-01
Even in physically based distributed hydrological models, various remaining parameters must be estimated for each sub-catchment. This can involve tremendous effort, especially when the number of sub-catchments is large and the applied hydrological model is computationally expensive. Automatic parameter estimation tools can significantly facilitate the calibration process. Hence, we combined the nonlinear parameter estimation tool PEST with the distributed hydrological model WaSiM. PEST is based on the Gauss-Marquardt-Levenberg method, a gradient-based nonlinear parameter estimation algorithm. WaSiM is a fully distributed hydrological model using physically based algorithms for most of the process descriptions. WaSiM was applied to the alpine/prealpine Ammer River catchment (southern Germany, 710 km2) at a 100×100 m2 horizontal resolution. The catchment is heterogeneous in terms of geology, pedology and land use and shows a complex orography (the difference in elevation is around 1600 m). Using the developed PEST-WaSiM interface, the hydrological model was calibrated by comparing simulated and observed runoff at eight gauges for the hydrologic year 1997 and validated for the hydrologic year 1993. For each sub-catchment four parameters had to be calibrated: the recession constants of direct runoff and interflow, the drainage density, and the hydraulic conductivity of the uppermost aquifer. Additionally, five snowmelt-specific parameters were adjusted for the entire catchment. Altogether, 37 parameters had to be calibrated. Additional a priori information (e.g. from flood hydrograph analysis) narrowed the parameter space of the solutions and reduced the non-uniqueness of the fitted values. A reasonable quality of fit was achieved. Discrepancies between modelled and observed runoff were also due to the small number of meteorological stations and corresponding interpolation artefacts in the orographically complex terrain. Application of a 2-dimensional numerical
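The Gauss-Marquardt-Levenberg iteration at the heart of PEST can be illustrated on a toy calibration problem. The sketch below fits a linear-reservoir recession constant and an amplitude to synthetic "observed" runoff; the model, the data, and the parameter values are our own illustrative choices, not WaSiM or the Ammer catchment.

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic recession observations: q(t) = a * exp(-k t) plus noise.
t = np.linspace(0, 30, 61)  # days
k_true, a_true = 0.15, 2.0
rng = np.random.default_rng(3)
q_obs = a_true * np.exp(-k_true * t) + rng.normal(0.0, 0.02, t.size)

def residuals(p):
    """Misfit between modelled and observed runoff for parameters p = (a, k)."""
    a, k = p
    return a * np.exp(-k * t) - q_obs

# method="lm" is SciPy's Levenberg-Marquardt implementation, the same
# family of algorithm PEST uses for its nonlinear estimation.
fit = least_squares(residuals, x0=[1.0, 0.5], method="lm")
print("calibrated a, k:", fit.x)
```

In a real PEST-WaSiM setup the residual function would launch a full model run per iteration, which is exactly why an efficient gradient-based scheme matters when 37 parameters must be calibrated.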
MODELING THE METALLICITY DISTRIBUTION OF GLOBULAR CLUSTERS
Muratov, Alexander L.; Gnedin, Oleg Y. E-mail: ognedin@umich.ed
2010-08-01
Observed metallicities of globular clusters reflect physical conditions in the interstellar medium of their high-redshift host galaxies. Globular cluster systems in most large galaxies display bimodal color and metallicity distributions, which are often interpreted as indicating two distinct modes of cluster formation. The metal-rich and metal-poor clusters have systematically different locations and kinematics in their host galaxies. However, the red and blue clusters have similar internal properties, such as their masses, sizes, and ages. It is therefore interesting to explore whether both metal-rich and metal-poor clusters could form by a common mechanism and still be consistent with the bimodal distribution. We present such a model, which prescribes the formation of globular clusters semi-analytically using galaxy assembly history from cosmological simulations coupled with observed scaling relations for the amount and metallicity of cold gas available for star formation. We assume that massive star clusters form only during mergers of massive gas-rich galaxies and tune the model parameters to reproduce the observed distribution in the Galaxy. A wide, but not the entire, range of model realizations produces metallicity distributions consistent with the data. We find that early mergers of smaller hosts create exclusively blue clusters, whereas subsequent mergers of more massive galaxies create both red and blue clusters. Thus, bimodality arises naturally as the result of a small number of late massive merger events. This conclusion is not significantly affected by the large uncertainties in our knowledge of the stellar mass and cold gas mass in high-redshift galaxies. The fraction of galactic stellar mass locked in globular clusters declines from over 10% at z > 3 to 0.1% at present.
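Bimodal metallicity distributions of the kind discussed above are commonly characterized with a two-component Gaussian mixture. The sketch below fits one by expectation-maximization to synthetic [Fe/H] values; the population parameters are invented for illustration and are not the Galactic data used in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
# Synthetic [Fe/H] values mimicking a bimodal globular-cluster system:
# a metal-poor ("blue") and a metal-rich ("red") population.
feh = np.concatenate([rng.normal(-1.5, 0.30, 400),
                      rng.normal(-0.5, 0.25, 300)])

def em_two_gaussians(x, iters=200):
    """Minimal EM fit of a two-component 1D Gaussian mixture."""
    mu = np.array([x.min(), x.max()])          # spread-out initial means
    sd = np.array([x.std(), x.std()])
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        pdf = w * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) \
              / (sd * np.sqrt(2 * np.pi))
        resp = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: update weights, means and widths.
        nk = resp.sum(axis=0)
        w = nk / x.size
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return w, mu, sd

w, mu, sd = em_two_gaussians(feh)
print("weights:", w, "means:", mu)
```

Comparing the fitted component means and weights against the observed red/blue fractions is one simple way model realizations like those described above can be tested for consistency with the data.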
Harvey, R. W.; Petrov, Yu. V.
2013-12-03
Within the US Department of Energy/Office of Fusion Energy magnetic fusion research program, there is an important whole-plasma-modeling need for a radio-frequency/neutral-beam-injection (RF/NBI) transport-oriented finite-difference Fokker-Planck (FP) code with combined capabilities for 4D (2R2V) geometry near the fusion plasma periphery, and computationally less demanding 3D (1R2V) bounce-averaged capabilities for plasma in the core of fusion devices. Proof-of-principle achievement of this goal has been demonstrated in research under Phase I of the SBIR award. Two DOE-sponsored codes, the CQL3D bounce-averaged Fokker-Planck code in which CompX has specialized, and the COGENT 4D, plasma-edge-oriented Fokker-Planck code constructed by Lawrence Livermore National Laboratory and Lawrence Berkeley Laboratory scientists, were coupled. Coupling was achieved by using CQL3D-calculated velocity distributions, including an energetic tail resulting from NBI, as boundary conditions for the COGENT code over the two-dimensional velocity space on a spatial interface (flux) surface at a given radius near the plasma periphery. The finite-orbit-width fast ions from the CQL3D distributions penetrated into the peripheral plasma modeled by the COGENT code. This combined code demonstrates the feasibility of the proposed 3D/4D code. By combining these codes, the greatest computational efficiency is achieved subject to present modeling needs in toroidally symmetric magnetic fusion devices. The more efficient 3D code can be used in its regions of applicability, coupled to the more computationally demanding 4D code in higher-collisionality edge plasma regions where that extended capability is necessary for an accurate representation of the plasma. A more efficient code leads to greater use and utility of the model. An ancillary aim of the project is to make the combined 3D/4D code user friendly. Achievement of full coupling of these two Fokker
Comparison between fully distributed model and semi-distributed model in urban hydrology modeling
NASA Astrophysics Data System (ADS)
Ichiba, Abdellah; Gires, Auguste; Giangola-Murzyn, Agathe; Tchiguirinskaia, Ioulia; Schertzer, Daniel; Bompard, Philippe
2013-04-01
Water management in urban areas is becoming more and more complex, especially because of a rapid increase of impervious areas. There may also be an increase of extreme precipitation due to climate change. The devices implemented to handle the large amount of water generated by urban areas, such as storm water retention basins, usually have two aims: ensuring pluvial flood protection and water depollution. These two aims imply opposite management strategies. To optimize the use of these devices there is a need to implement urban hydrological models and improve fine-scale rainfall estimation, which is the most significant input. In this paper we compare two models and their sensitivity to small-scale rainfall variability on a 2.15 km2 urban area located in the County of Val-de-Marne (South-East of Paris, France). The average impervious coefficient is approximately 34%. In this work two types of models are used. The first one is CANOE, which is semi-distributed. Such models are widely used by practitioners for urban hydrology modeling and urban water management. They are easily configurable and the computation time is reduced, but these models do not fully take into account either the variability of the physical properties or the variability of the precipitation. An alternative is to use distributed models, which are harder to configure and require a greater computation time, but they enable a deeper analysis (especially at small scales and upstream) of the processes at stake. We used the Multi-Hydro fully distributed model developed at the Ecole des Ponts ParisTech. It is an interacting core between open source software packages, each of them representing a portion of the water cycle in urban environments. Four heavy rainfall events that occurred between 2009 and 2011 are analyzed. The data come from the Météo-France radar mosaic and the resolution is 1 km in space and 5 min in time. The closest radar of the Météo-France network is
Pseudoabsence Generation Strategies for Species Distribution Models
Hanberry, Brice B.; He, Hong S.; Palik, Brian J.
2012-01-01
Background Species distribution models require selection of species, study extent and spatial unit, statistical methods, variables, and assessment metrics. If absence data are not available, another important consideration is pseudoabsence generation. Different strategies for pseudoabsence generation can produce varying spatial representations of species. Methodology We considered model outcomes from four different strategies for generating pseudoabsences. We generated pseudoabsences randomly by 1) selection from the entire study extent, 2) a two-step process of selection first from the entire study extent, followed by selection of pseudoabsences from areas with predicted probability <25%, 3) selection from plots surveyed without detection of species presence, and 4) a two-step process of selection first from plots surveyed without detection of species presence, followed by selection of pseudoabsences from areas with predicted probability <25%. We used Random Forests as our statistical method and sixteen predictor variables to model tree species with at least 150 records from Forest Inventory and Analysis surveys in the Laurentian Mixed Forest province of Minnesota. Conclusions Pseudoabsence generation strategy strongly affected the area predicted as present by species distribution models and may be one of the most influential determinants of such models. All the pseudoabsence strategies produced mean AUC values of at least 0.87. More important than the accuracy metrics, the two-step strategies over-predicted species presence, due to too much environmental distance between the pseudoabsences and recorded presences, whereas models based on random pseudoabsences under-predicted species presence, due to too little environmental distance between the pseudoabsences and recorded presences. Models using pseudoabsences from surveyed plots produced a balance between areas with high and low predicted probabilities and the strongest relationship between
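The first two pseudoabsence strategies described above lend themselves to a short sketch. The function names, the toy cell count, and the stand-in first-pass probabilities below are all hypothetical; only the selection logic (uniform draw from the extent vs. a two-step draw restricted to cells with predicted probability below 25%) follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_pseudoabsence_idx(n_cells, n, rng):
    """Strategy 1: draw pseudoabsence cells uniformly from the whole study extent."""
    return rng.choice(n_cells, size=n, replace=False)

def two_step_pseudoabsence_idx(predicted_prob, n, rng, cutoff=0.25):
    """Strategy 2 (two-step): restrict the draw to cells where a first-pass
    model predicted presence probability below the cutoff (<25% here)."""
    low = np.flatnonzero(predicted_prob < cutoff)
    return rng.choice(low, size=n, replace=False)

# Toy study extent: 1000 candidate cells; the first-pass probabilities stand
# in for the output of an initial Random Forests fit (hypothetical values).
n_cells = 1000
first_pass_prob = rng.uniform(size=n_cells)

idx1 = random_pseudoabsence_idx(n_cells, 150, rng)
idx2 = two_step_pseudoabsence_idx(first_pass_prob, 150, rng)
```

The two-step draw, by construction, pushes pseudoabsences toward environmentally distant cells, which is exactly the mechanism the abstract blames for over-prediction.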
SPEEDES for distributed information enterprise modeling
NASA Astrophysics Data System (ADS)
Hanna, James P.; Hillman, Robert G.
2002-07-01
The Air Force is developing a Distributed Information Enterprise Modeling and Simulation (DIEMS) framework under sponsorship of the High Performance Computer Modernization Office Common High Performance Computing Software Support Initiative (HPCMO/CHSSI). The DIEMS framework provides a design analysis environment for deployable distributed information management systems. DIEMS establishes the necessary analysis capability allowing developers to identify and mitigate programmatic risk early within the development cycle to allow successful deployment of the associated systems. The enterprise-modeling framework builds upon the Synchronous Parallel Environment for Emulation and Discrete-Event Simulation (SPEEDES) foundation. This simulation framework will utilize 'Challenge Problem' class resources to address more than five million information objects and hundreds of thousands of clients comprising the future information-based force structure. The simulation framework will be capable of assessing deployment aspects such as security, quality of service, and fault tolerance. SPEEDES provides an ideal foundation to support simulation of distributed information systems on a multiprocessor platform. SPEEDES allows the simulation builder to perform optimistic parallel processing on high performance computers, networks of workstations, or combinations of networked computers and HPC platforms.
A predictive transport modeling code for ICRF-heated tokamaks
Phillips, C.K.; Hwang, D.Q. . Plasma Physics Lab.); Houlberg, W.; Attenberger, S.; Tolliver, J.; Hively, L. )
1992-02-01
In this report, a detailed description of the physics included in the WHIST/RAZE package, as well as a few illustrative examples of the capabilities of the package, will be presented. An in-depth analysis of ICRF heating experiments using WHIST/RAZE will be discussed in a forthcoming report. A general overview of the philosophy behind the structure of the WHIST/RAZE package, a summary of the features of the WHIST code, and a description of the interface to the RAZE subroutines are presented in section 2 of this report. Details of the physics contained in the RAZE code are examined in section 3. Sample results from the package follow in section 4, with concluding remarks and a discussion of possible improvements to the package in section 5.
A conceptual, distributed snow redistribution model
NASA Astrophysics Data System (ADS)
Frey, S.; Holzmann, H.
2015-11-01
When conceptual hydrological models using a temperature-index approach for snowmelt are applied to high alpine areas, accumulation of snow over several years can often be observed. Some of the reasons why these "snow towers" do not exist in nature are vertical and lateral transport processes. While snow transport models have been developed using grid cell sizes of tens to hundreds of square metres and have been applied in several catchments, no model exists for coarser cell sizes of 1 km2, which is a common resolution for meso- and large-scale hydrological modelling (hundreds to thousands of square kilometres). In this paper we present an approach that uses only gravity, snow density as a proxy for the age of the snow cover, and land-use information to redistribute snow in alpine basins. The results are based on the hydrological modelling of the Austrian Inn Basin in Tyrol, Austria, more specifically the Ötztaler Ache catchment, but the findings hold for other tributaries of the river Inn. This transport model is implemented in the distributed rainfall-runoff model COSERO (Continuous Semi-distributed Runoff). The results of both model concepts, with and without consideration of lateral snow redistribution, are compared against observed discharge and snow-covered areas derived from MODIS (Moderate Resolution Imaging Spectroradiometer) satellite images. By means of the snow redistribution concept, snow accumulation over several years can be prevented and the snow depletion curve compared with MODIS data is improved as well. Over a 7-year period the standard model would lead to snow accumulation of approximately 2900 mm SWE (snow water equivalent) in high-elevation regions, whereas the updated version of the model shows no accumulation and also predicts discharge more accurately, leading to a Kling-Gupta efficiency of 0.93 instead of 0.9. A further improvement can be shown in the comparison of MODIS snow cover data and the calculated depletion curve, where
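The Kling-Gupta efficiency quoted above (0.93 vs. 0.9) has a standard closed form. A minimal sketch, using toy discharge series rather than the COSERO/Inn data, following the widely used Gupta et al. (2009) formulation:

```python
import numpy as np

def kling_gupta_efficiency(sim, obs):
    """KGE = 1 - sqrt((r-1)^2 + (alpha-1)^2 + (beta-1)^2), where r is the
    linear correlation, alpha the ratio of standard deviations, and beta the
    ratio of means between simulated and observed runoff. A perfect
    simulation gives KGE = 1."""
    r = np.corrcoef(sim, obs)[0, 1]
    alpha = np.std(sim) / np.std(obs)
    beta = np.mean(sim) / np.mean(obs)
    return 1.0 - np.sqrt((r - 1.0)**2 + (alpha - 1.0)**2 + (beta - 1.0)**2)

obs = np.array([3.0, 4.5, 6.0, 9.0, 7.5, 5.0])   # toy observed discharge
sim = np.array([3.2, 4.4, 6.3, 8.6, 7.7, 5.2])   # toy simulated discharge
kge = kling_gupta_efficiency(sim, obs)
```

Because the three components are penalized jointly, a model can only reach a high KGE by matching timing (r), variability (alpha), and water balance (beta) at once.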
Modeling distributed systems with logic programming languages
Lenders, P.M.
1985-01-01
This thesis proposes new concepts for an ideal integrated specification and simulation workstation. The transition model approach to distributed systems specification is improved by the introduction of communicating finite state automata (CFSA), and a Prolog implementation of CFSA. Liveness and safety properties are proved with Prolog. Bidirectional input-output (bi-io), a new input-output mechanism, is introduced, which eases distributed systems programming. It generalizes regular input-output mechanisms, replacing two concepts with one single concept. Moreover, it is concise and powerful, and for some applications suppresses deadlock problems. Bi-io is proposed as an extension of Communicating Sequential Processes (CSP). An axiomatic semantics of the extended CSP language is given, which follows the weakest precondition approach. The similarities between CFSA and CSP (with its weakest precondition semantics) suggest that the two descriptive methods should be used together within the ideal specification and simulation workstation.
New model for nucleon generalized parton distributions
Radyushkin, Anatoly V.
2014-01-01
We describe a new type of model for nucleon generalized parton distributions (GPDs) H and E. It is based on the fact that nucleon GPDs require the use of two forms of double distribution (DD) representations. The outcome of the new treatment is that the usual DD+D-term construction should be amended by an extra term, ξ E_+^1(x, ξ), which has the DD structure α/β e(β, α), with e(β, α) being the DD that generates the GPD E(x, ξ). We found that this function, unlike the D-term, has support in the whole -1 ≤ x ≤ 1 region. Furthermore, it does not vanish at the border points |x| = ξ.
Recent developments in DYNSUB: New models, code optimization and parallelization
Daeubler, M.; Trost, N.; Jimenez, J.; Sanchez, V.
2013-07-01
DYNSUB is a high-fidelity coupled code system consisting of the reactor simulator DYN3D and the sub-channel code SUBCHANFLOW. It describes nuclear reactor core behavior with pin-by-pin resolution for both steady-state and transient scenarios. In the course of the coupled code system's active development, super-homogenization (SPH) and generalized equivalence theory (GET) discontinuity factors may be computed with and employed in DYNSUB to compensate for pin-level homogenization errors. Because of the greatly increased numerical problem size for pin-by-pin simulations, DYNSUB has benefitted from HPC techniques to improve its numerical performance. DYNSUB's coupling scheme has been structurally revised. Computational bottlenecks have been identified and parallelized for shared-memory systems using OpenMP. Comparing the elapsed time for simulating a PWR core with one-eighth symmetry under hot zero power conditions with the original and the optimized DYNSUB using 8 cores, overall speed-up factors greater than 10 have been observed. The corresponding reduction in execution time enables a routine application of DYNSUB to study pin-level safety parameters for engineering-sized cases in a scientific environment. (authors)
Numerical modeling approach of sinkhole propagation using the eXtended FEM code 'roxol'
NASA Astrophysics Data System (ADS)
Schneider-Löbens, Christiane; Wuttke, Manfred W.; Backers, Tobias; Krawczyk, Charlotte
2015-04-01
Subrosion and underground cavities lead to instability of the earth's surface. To minimize sinkhole hazard, it is necessary to have a better understanding of the processes and collapse mechanisms. Recent cases of subrosion in Germany that result in collapse structures (sinkholes) are used as a basis for this study. The aim is to simulate the collapse mechanism in order to specify the conditions in which sinkholes form. Using the XFEM code `roxol` (geomecon GmbH), it is possible to localize zones, in which rock failure occurs. Initiation of fracture propagation and interaction within these zones can be simulated. As a first approximation, we use a 2D model with simplified excavation and fault geometry and assume linear elastic, impermeable and non-poroelastic material behavior for the overburden layers; local stress field parameters are supplied by boundary conditions. We estimate the distribution of stress and strain in areas with critical loads to simulate failure under the influence of the stress field, material properties, as well as fault and joint geometry. Varying these parameters allows the calculation of the critical loads in which fractures propagate and failure occurs. The XFEM code `roxol` is a suitable approach to simulate the development of sinkholes. In this study, fracture propagation, as well as the interaction between existing joints are the most important parameters. Therefore, our first approach will be extended by local input parameters to develop predictions of time-dependent rock failure.
VISA-II sensitivity study of code calculations: Input and analytical model parameters
Simonen, E.P.; Johnson, K.I.; Simonen, F.A.; Liebetrau, A.M.
1986-11-01
The Vessel Integrity Simulation Analysis (VISA-II) code was developed to allow calculations of the failure probability of a reactor pressure vessel subject to defined pressure/temperature transients. A version of the code, revised by Pacific Northwest Laboratory for the US Nuclear Regulatory Commission, was used to evaluate the sensitivities of the calculated through-wall flaw probability to material, flaw and calculational assumptions. Probabilities were more sensitive to flaw assumptions than to material or calculational assumptions. Alternative flaw assumptions changed the probabilities by one to two orders of magnitude, whereas alternative material assumptions typically changed the probabilities by a factor of two or less. The sensitivities examined included flaw shape, flaw through-wall position and flaw inspection. Material property sensitivities included the assumed distributions of copper content and fracture toughness. Methods of modeling flaw propagation that were evaluated included arrest/reinitiation toughness correlations, multiple toughness values along the length of a flaw, the flaw jump distance for each computer simulation, and the added error in estimating irradiated properties caused by the trend curve correlation error.
Stark effect modeling in the detailed opacity code SCO-RCG
NASA Astrophysics Data System (ADS)
Pain, J.-C.; Gilleron, F.; Gilles, D.
2016-05-01
The broadening of lines by the Stark effect is an important tool for inferring electron density and temperature in plasmas. Stark-effect calculations often rely on atomic data (transition rates, energy levels, ...) that are not always exhaustive and/or valid for isolated atoms. We present a recent development in the detailed opacity code SCO-RCG for K-shell spectroscopy (hydrogen- and helium-like ions). This approach is adapted from the work of Gilles and Peyrusse. Neglecting non-diagonal terms in the dipolar and collision operators, the line profile is expressed as a sum of Voigt functions associated with the Stark components. The formalism relies on the use of parabolic coordinates within SO(4) symmetry. The relativistic fine structure of Lyman lines is included by diagonalizing the Hamiltonian matrix associated with quantum states having the same principal quantum number n. The resulting code enables one to investigate plasma environment effects, the impact of the microfield distribution, the decoupling between electron and ion temperatures, and the role of satellite lines (such as Li-like 1snℓn'ℓ' - 1s²nℓ, Be-like, etc.). Comparisons with simpler and widely used semi-empirical models are presented.
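The diagonal approximation described above reduces the line shape to a weighted sum of Voigt functions, one per Stark component. A minimal numerical sketch using SciPy's `voigt_profile`; the component pattern, weights, and widths below are toy values, not SCO-RCG output:

```python
import numpy as np
from scipy.special import voigt_profile

def stark_line_profile(energy, components, sigma, gamma):
    """Total line shape as a weighted sum of Voigt functions, one per Stark
    component. Each component is a (center, weight) pair; sigma and gamma are
    the Gaussian and Lorentzian widths (illustrative values only)."""
    total = np.zeros_like(energy)
    for center, weight in components:
        total += weight * voigt_profile(energy - center, sigma, gamma)
    return total

energy = np.linspace(-5.0, 5.0, 2001)
components = [(-1.0, 0.25), (0.0, 0.5), (1.0, 0.25)]   # toy Stark pattern
profile = stark_line_profile(energy, components, sigma=0.3, gamma=0.1)
```

Since each Voigt function is area-normalized, the component weights (here summing to 1) directly set each component's share of the total line strength.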
Modeling of a-particle redistribution by sawteeth in TFTR using FPPT code
Gorelenkov, N.N.; Budny, R.V.; Duong, H.H.
1996-06-01
Results from recent DT experiments on TFTR to measure the radial density profiles of fast confined, well-trapped α-particles using the Pellet Charge eXchange (PCX) diagnostic [PETROV M. P., et al., Nucl. Fusion, 35 (1995) 1437] indicate that sawtooth oscillations produce a significant broadening of the trapped-alpha radial density profiles. Conventional models consistent with measured sawtooth effects on passing particles do not provide satisfactory simulations of the trapped-particle mixing measured by the PCX diagnostic. We propose a different mechanism for fast-particle mixing during the sawtooth crash to explain the trapped α-particle density profile broadening after the crash. The model is based on the fast-particle orbit-averaged toroidal drift in a perturbed helical electric field with an adjustable absolute value. Such a drift of the fast particles results in a change of their energy and a redistribution in phase space. The energy redistribution is shown to obey the diffusion equation, while the redistribution in toroidal momentum P_φ (or in minor radius) is assumed stochastic with a large diffusion coefficient and was taken flat. The distribution function in a pre-sawtooth plasma and its evolution in a post-sawtooth-crash plasma is simulated using the Fokker-Planck Post-TRANSP (FPPT) processor code. It is shown that FPPT-calculated α-particle distributions are consistent with TRANSP Monte Carlo calculations. Comparison of FPPT results with PCX measurements shows good agreement for both sawtooth-free and sawtoothing plasmas.
Joshua J. Cogliati; Abderrafi M. Ougouag
2006-10-01
A comprehensive, high fidelity model for pebble flow has been developed and embodied in the PEBBLES computer code. In this paper, a description of the physical artifacts included in the model is presented and some results from using the computer code for predicting the features of pebble flow and packing in a realistic pebble bed reactor design are shown. The sensitivity of models to various physical parameters is also discussed.
Stochastic model of homogeneous coding and latent periodicity in DNA sequences.
Chaley, Maria; Kutyrkin, Vladimir
2016-02-01
The concept of latent triplet periodicity in coding DNA sequences, which has been extensively discussed earlier, is confirmed by the analysis of a number of eukaryotic genomes, where latent periodicity of a new type, called profile periodicity, is recognized in the CDSs. An original model of Stochastic Homogeneous Organization of Coding (SHOC model) in a textual string is proposed. This model explains the existence of latent profile periodicity and regularity in DNA sequences. PMID:26656186
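The latent triplet (profile) periodicity discussed above can be illustrated by tabulating nucleotide frequencies at each codon phase. This is a toy sketch with a hypothetical helper name and a made-up sequence, not the authors' SHOC-model analysis:

```python
from collections import Counter

def triplet_phase_profile(seq):
    """Nucleotide counts at each of the three codon phases. A CDS with
    latent triplet periodicity shows phase-dependent columns; a random
    sequence gives three nearly identical ones."""
    profile = [Counter(), Counter(), Counter()]
    for i, base in enumerate(seq):
        profile[i % 3][base] += 1
    return profile

cds = "ATGGCTGCAGCTGCGGCAGCT"   # toy CDS: ATG followed by GCN (alanine) codons
profile = triplet_phase_profile(cds)
```

For this toy sequence, phase 0 is dominated by G and phase 1 by C (the first two positions of the repeated GCN codon), while phase 2 is mixed: a phase-dependent profile.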
Addressing Hate Speech and Hate Behaviors in Codes of Conduct: A Model for Public Institutions.
ERIC Educational Resources Information Center
Neiger, Jan Alan; Palmer, Carolyn; Penney, Sophie; Gehring, Donald D.
1998-01-01
As part of a larger study, researchers collected campus codes prohibiting hate crimes, which were then reviewed to determine whether the codes presented constitutional problems. Based on this review, the authors develop and present a model policy that is content neutral and does not use language that could be viewed as unconstitutionally vague or…
ERIC Educational Resources Information Center
Blozis, Shelley A.; Cho, Young Il
2008-01-01
The coding of time in latent curve models has been shown to have important implications in the interpretation of growth parameters. Centering time is often done to improve interpretation but may have consequences for estimated parameters. This article studies the effects of coding and centering time when there is interindividual heterogeneity in…
Code System for Calculating Ion Track Condensed Collision Model.
1997-05-21
Version 00 of ICOM calculates the transport characteristics of ion radiation for application to radiation protection, dosimetry and microdosimetry, and the radiation physics of solids. Ions in the range Z=1-92 are handled. The energy range for protons is 0.001-10,000 MeV; for other ions it is 0.001-100 MeV/nucleon. Computed quantities include stopping powers; ranges; spatial, angular and energy distributions of particle current and fluence; spatial distributions of the absorbed dose; and spatial distributions of thermalized ions.
Inverse distributed hydrological modelling of alpine catchments
NASA Astrophysics Data System (ADS)
Kunstmann, H.; Krause, J.; Mayr, S.
2005-12-01
Even in physically based distributed hydrological models, various remaining parameters must be estimated for each sub-catchment. This can involve tremendous effort, especially when the number of sub-catchments is large and the applied hydrological model is computationally expensive. Automatic parameter estimation tools can significantly facilitate the calibration process. Hence, we combined the nonlinear parameter estimation tool PEST with the distributed hydrological model WaSiM. PEST is based on the Gauss-Marquardt-Levenberg method, a gradient-based nonlinear parameter estimation algorithm. WaSiM is a fully distributed hydrological model using physically based algorithms for most of the process descriptions. WaSiM was applied to the alpine/prealpine Ammer River catchment (southern Germany, 710 km2) in a 100×100 m2 horizontal resolution. The catchment is heterogeneous in terms of geology, pedology and land use and shows a complex orography (the difference of elevation is around 1600 m). Using the developed PEST-WaSiM interface, the hydrological model was calibrated by comparing simulated and observed runoff at eight gauges for the hydrologic year 1997 and validated for the hydrologic year 1993. For each sub-catchment four parameters had to be calibrated: the recession constants of direct runoff and interflow, the drainage density, and the hydraulic conductivity of the uppermost aquifer. Additionally, five snowmelt specific parameters were adjusted for the entire catchment. Altogether, 37 parameters had to be calibrated. Additional a priori information (e.g. from flood hydrograph analysis) narrowed the parameter space of the solutions and improved the non-uniqueness of the fitted values. A reasonable quality of fit was achieved. Discrepancies between modelled and observed runoff were also due to the small number of meteorological stations and corresponding interpolation artefacts in the orographically complex terrain. A detailed covariance analysis was performed
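The Gauss-Marquardt-Levenberg method underlying PEST can be sketched on a toy calibration problem: fitting a linear-reservoir recession limb, whose decay constant plays the role of the recession constants calibrated above. The model, data, and starting values are all made up; SciPy's `method='lm'` selects its Levenberg-Marquardt (MINPACK) solver, the same algorithm family PEST uses, not PEST itself:

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic "observed" recession limb from a linear reservoir,
# Q(t) = Q0 * exp(-t / k), with a little observation noise added.
t = np.linspace(0.0, 10.0, 50)
true_q0, true_k = 12.0, 3.5
rng = np.random.default_rng(1)
q_obs = true_q0 * np.exp(-t / true_k) + rng.normal(0.0, 0.05, t.size)

def residuals(params):
    """Misfit between modelled and observed discharge, which the
    Levenberg-Marquardt iteration drives toward zero."""
    q0, k = params
    return q0 * np.exp(-t / k) - q_obs

fit = least_squares(residuals, x0=[5.0, 1.0], method='lm')
q0_est, k_est = fit.x
```

As in PEST, the algorithm only needs a routine that maps parameters to residuals against observations; the gradient-based update then recovers the parameters, subject to the non-uniqueness issues the abstract mitigates with a priori information.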
Implementation of a kappa-epsilon turbulence model to RPLUS3D code
NASA Technical Reports Server (NTRS)
Chitsomboon, Tawit
1992-01-01
The RPLUS3D code has been developed at the NASA Lewis Research Center to support the National Aerospace Plane (NASP) project. The code has the ability to solve three-dimensional flowfields with finite-rate combustion of hydrogen and air. The combustion process of the hydrogen-air system is simulated by an 18-reaction-path, 8-species chemical kinetic mechanism. The code uses a Lower-Upper (LU) decomposition numerical algorithm as its basis, making it a very efficient and robust code. Except for the Jacobian matrix for the implicit chemistry source terms, no matrix inversion is needed even though a fully implicit numerical algorithm is used. A k-epsilon turbulence model has recently been incorporated into the code. Initial validations have been conducted for a flow over a flat plate. Results of the validation studies are shown. Some difficulties in implementing the k-epsilon equations in the code are also discussed.
Subgroup A : nuclear model codes report to the Sixteenth Meeting of the WPEC
Talou, P.; Chadwick, M. B.; Dietrich, F. S.; Herman, M.; Kawano, T.; Konig, A.; Obložinský, P.
2004-01-01
The Subgroup A activities focus on the development of nuclear reaction models and codes, used in evaluation work for nuclear reactions from the unresolved energy region up to the pion production threshold, and for target nuclides from the low teens in mass and heavier. Much of the effort is devoted by each participant to the continuing development of their own institution's codes. Progress in this arena is reported in detail for each code in the present document. EMPIRE-II is publicly accessible. The release of the TALYS code has been announced for the ND2004 Conference in Santa Fe, NM, October 2004. McGNASH is still under development and is not expected to be released in the very near future. In addition, Subgroup A members have demonstrated a growing interest in working on common modeling and code capabilities, which would significantly reduce the amount of duplicated work, help manage efficiently the growing lines of existing codes, and render code inter-comparison much easier. A recent and important activity of Subgroup A has therefore been to develop the framework and the first bricks of the ModLib library, which is constituted of mostly independent pieces of code written in Fortran 90 (and above) to be used in existing and future nuclear reaction codes. Significant progress in the development of ModLib has been made during the past year. Several physics modules have been added to the library, and a few more have been planned in detail for the coming year.
Applications of species distribution modeling to paleobiology
NASA Astrophysics Data System (ADS)
Svenning, Jens-Christian; Fløjgaard, Camilla; Marske, Katharine A.; Nógues-Bravo, David; Normand, Signe
2011-10-01
Species distribution modeling (SDM: statistical and/or mechanistic approaches to the assessment of range determinants and prediction of species occurrence) offers new possibilities for estimating and studying past organism distributions. SDM complements fossil and genetic evidence by providing (i) quantitative and potentially high-resolution predictions of past organism distributions, (ii) statistically formulated, testable ecological hypotheses regarding past distributions and communities, and (iii) statistical assessment of range determinants. In this article, we provide an overview of applications of SDM to paleobiology, outlining the methodology, reviewing SDM-based studies in paleobiology or at the interface of paleo- and neobiology, discussing assumptions and uncertainties as well as how to handle them, and providing a synthesis and outlook. Key methodological issues for SDM applications to paleobiology include predictor variables (types and properties; special emphasis is given to paleoclimate), model validation (particularly important given the emphasis on cross-temporal predictions in paleobiological applications), and the integration of SDM and genetics approaches. Over the last few years the number of studies using SDM to address paleobiology-related questions has increased considerably. While some of these studies only use SDM (23%), most combine them with genetically inferred patterns (49%), paleoecological records (22%), or both (6%). A large number of SDM-based studies have addressed the role of Pleistocene glacial refugia in biogeography and evolution, especially in Europe, but also in many other regions. SDM-based approaches are also beginning to contribute to a suite of other research questions, such as historical constraints on current distributions and diversity patterns, the end-Pleistocene megafaunal extinctions, past community assembly, human paleobiogeography, Holocene paleoecology, and even deep-time biogeography (notably, providing
DANA: distributed numerical and adaptive modelling framework.
Rougier, Nicolas P; Fix, Jérémy
2012-01-01
DANA is a Python framework ( http://dana.loria.fr ) whose computational paradigm is grounded on the notion of a unit, essentially a set of time-dependent values varying under the influence of other units via adaptive weighted connections. The evolution of a unit's values is defined by a set of differential equations expressed in standard mathematical notation, which greatly eases their definition. The units are organized into groups that form a model. Each unit can be connected to any other unit (including itself) using a weighted connection. The DANA framework offers a set of core objects needed to design and run such models. The modeler only has to define the equations of a unit as well as the equations governing the training of the connections. The simulation is completely transparent to the modeler and is handled by DANA. This allows DANA to be used for a wide range of numerical and distributed models as long as they fit the proposed framework (e.g. cellular automata, reaction-diffusion systems, decentralized neural networks, recurrent neural networks, kernel-based image processing, etc.). PMID:22994650
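The unit-and-connection paradigm described above can be sketched without the DANA library itself: each unit's value evolves under a differential equation driven by weighted connections. The equation, weights, and inputs below are illustrative choices, not DANA's API:

```python
import numpy as np

def simulate(V0, W, I, tau=1.0, dt=0.1, steps=100):
    """Minimal sketch of the paradigm: unit values V evolve as
    dV/dt = (-V + W @ V + I) / tau, i.e. decay toward the weighted input
    from connected units plus an external drive, integrated with forward
    Euler (toy dynamics, not DANA's actual equations)."""
    V = V0.copy()
    for _ in range(steps):
        dV = (-V + W @ V + I) / tau
        V += dt * dV
    return V

# Two units: unit 1 is driven by unit 0 through a 0.5 connection weight.
W = np.array([[0.0, 0.0],
              [0.5, 0.0]])
I = np.array([1.0, 0.0])   # external input to unit 0 only
V = simulate(np.zeros(2), W, I)
```

At steady state unit 0 settles at its input (1.0) and unit 1 at half of that (0.5), showing how the weighted connection propagates activity between units.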
Purifying selection shapes the coincident SNP distribution of primate coding sequences.
Chen, Chia-Ying; Hung, Li-Yuan; Wu, Chan-Shuo; Chuang, Trees-Juen
2016-01-01
Genome-wide analysis has observed an excess of coincident single nucleotide polymorphisms (coSNPs) at human-chimpanzee orthologous positions, and suggested that this is due to cryptic variation in the mutation rate. While this phenomenon primarily corresponds with non-coding coSNPs, the situation in coding sequences remains unclear. Here we calculate the observed-to-expected ratio of coSNPs (coSNPO/E) to estimate the prevalence of human-chimpanzee coSNPs, and show that the excess of coSNPs is also present in coding regions. Intriguingly, coSNPO/E is much higher at zero-fold than at nonzero-fold degenerate sites; such a difference is due to an elevation of coSNPO/E at zero-fold degenerate sites, rather than a reduction at nonzero-fold degenerate ones. These trends are independent of chimpanzee subpopulation, population size, or sequencing techniques, and hold in broad generality across primates. We find that this discrepancy cannot be fully explained by sequence contexts, shared ancestral polymorphisms, SNP density, and recombination rate, and that coSNPO/E in coding sequences is significantly influenced by purifying selection. We also show that selection and mutation rate affect coSNPO/E independently, and that coSNPs tend to be less damaging and more correlated with human diseases than non-coSNPs. These results suggest that coSNPs may represent a "signature" during primate protein evolution. PMID:27255481
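The observed-to-expected coSNP ratio has a simple form once SNP positions in the two species are encoded as boolean masks over aligned sites: the expected coincidence count under independence is the product of the per-species SNP rates times the number of sites. This is a toy sketch with simulated independent masks, not the authors' pipeline:

```python
import numpy as np

def cosnp_oe(human_snp, chimp_snp):
    """Observed-to-expected ratio of coincident SNPs at orthologous sites.
    The expected count assumes the two species' SNPs fall independently:
    E = n * p_human * p_chimp over n aligned positions."""
    human_snp = np.asarray(human_snp, dtype=bool)
    chimp_snp = np.asarray(chimp_snp, dtype=bool)
    n = human_snp.size
    observed = np.count_nonzero(human_snp & chimp_snp)
    expected = n * human_snp.mean() * chimp_snp.mean()
    return observed / expected

# Simulated independent SNP masks: the ratio should hover near 1, whereas
# real human-chimpanzee data show an excess (ratio > 1).
rng = np.random.default_rng(2)
n = 1_000_000
human = rng.random(n) < 0.01
chimp = rng.random(n) < 0.01
ratio_indep = cosnp_oe(human, chimp)
```

The paper's finding is that the real ratio exceeds 1, and more so at zero-fold degenerate sites, which this independence baseline makes easy to quantify.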
Surveying Multidisciplinary Aspects in Real-Time Distributed Coding for Wireless Sensor Networks
Braccini, Carlo; Davoli, Franco; Marchese, Mario; Mongelli, Maurizio
2015-01-01
Wireless Sensor Networks (WSNs), where a multiplicity of sensors observe a physical phenomenon and transmit their measurements to one or more sinks, pertain to the class of multi-terminal source and channel coding problems of Information Theory. In this category, "real-time" coding is often encountered for WSNs, referring to the problem of finding the minimum distortion (according to a given measure), under transmission power constraints, attainable by encoding and decoding functions, with stringent limits on delay and complexity. On the other hand, the Decision Theory approach seeks to determine the optimal coding/decoding strategies or some of their structural properties. Since encoder(s) and decoder(s) possess different information, though sharing a common goal, the setting here is that of Team Decision Theory. A more pragmatic vision rooted in Signal Processing consists of fixing the form of the coding strategies (e.g., to linear functions) and, consequently, finding the corresponding optimal decoding strategies and the achievable distortion, generally by applying parametric optimization techniques. All approaches have a long history of past investigations and recent results. The goal of the present paper is to provide the taxonomy of the various formulations, a survey of the vast related literature, examples from the authors' own research, and some highlights on the interplay of the different theories. PMID:25633597
Sparse distributed memory and related models
NASA Technical Reports Server (NTRS)
Kanerva, Pentti
1992-01-01
Described here is sparse distributed memory (SDM) as a neural-net associative memory. It is characterized by two weight matrices and by a large internal dimension - the number of hidden units is much larger than the number of input or output units. The first matrix, A, is fixed and possibly random, and the second matrix, C, is modifiable. The SDM is compared and contrasted to (1) computer memory, (2) correlation-matrix memory, (3) feed-forward artificial neural network, (4) cortex of the cerebellum, (5) Marr and Albus models of the cerebellum, and (6) Albus' cerebellar model arithmetic computer (CMAC). Several variations of the basic SDM design are discussed: the selected-coordinate and hyperplane designs of Jaeckel, the pseudorandom associative neural memory of Hassoun, and SDM with real-valued input variables by Prager and Fallside. SDM research conducted mainly at the Research Institute for Advanced Computer Science (RIACS) in 1986-1991 is highlighted.
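Kanerva's basic design, a fixed random address matrix A and a modifiable content matrix C, is compact enough to sketch (a minimal toy with assumed parameters such as the activation radius, not the full SDM of the report):

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 256, 2000                       # address width, number of hidden locations
A = rng.integers(0, 2, size=(M, N))    # fixed, random address matrix
C = np.zeros((M, N), dtype=int)        # modifiable content (counter) matrix
radius = 111                           # Hamming activation radius (assumed value)

def active(addr):
    """Boolean mask of locations within the Hamming radius of addr."""
    return np.sum(A != addr, axis=1) <= radius

def write(addr, data):
    C[active(addr)] += 2 * data - 1    # store data bits as +/-1 counter updates

def read(addr):
    s = C[active(addr)].sum(axis=0)    # pool counters over active locations
    return (s > 0).astype(int)         # threshold back to bits

x = rng.integers(0, 2, size=N)
write(x, x)                            # autoassociative store
noisy = x.copy(); noisy[:20] ^= 1      # probe with 20 flipped bits
recalled = read(noisy)                 # pooling tends to clean up the noise
```

Because many locations activate for both the original and the noisy address, the pooled counters usually reconstruct the stored word exactly, which is the associative-memory behavior the abstract refers to.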
How can model comparison help improving species distribution models?
Gritti, Emmanuel Stephan; Gaucherel, Cédric; Crespo-Perez, Maria-Veronica; Chuine, Isabelle
2013-01-01
Today, more than ever, robust projections of potential species range shifts are needed to anticipate and mitigate the impacts of climate change on biodiversity and ecosystem services. Such projections are so far provided almost exclusively by correlative species distribution models (correlative SDMs). However, concerns regarding the reliability of their predictive power are growing and several authors call for the development of process-based SDMs. Still, each of these methods presents strengths and weaknesses which have to be estimated if they are to be reliably used by decision makers. In this study we compare projections of three different SDMs (STASH, LPJ and PHENOFIT) that lie in the continuum between correlative models and process-based models for the current distribution of three major European tree species, Fagus sylvatica L., Quercus robur L. and Pinus sylvestris L. We compare the consistency of the model simulations using an innovative comparison map profile method, integrating local and multi-scale comparisons. The three models simulate the current distributions of the three species relatively accurately. The process-based model performs almost as well as the correlative model, although parameters of the former are not fitted to the observed species distributions. According to our simulations, species range limits are triggered, at the European scale, by establishment and survival through processes primarily related to phenology and resistance to abiotic stress rather than to growth efficiency. The accuracy of projections of the hybrid and process-based models could however be improved by integrating a more realistic representation of the species' resistance to water stress, for instance, advocating for continued efforts to understand and formulate explicitly the impact of climatic conditions and variations on these processes. PMID:23874779
Modeling Distributed Electricity Generation in the NEMS Buildings Models
2011-01-01
This paper presents the modeling methodology, projected market penetration, and impact of distributed generation with respect to offsetting future electricity needs and carbon dioxide emissions in the residential and commercial buildings sector in the Annual Energy Outlook 2000 (AEO2000) reference case.
Xu, Jinhua; Yang, Zhiyong; Tsien, Joe Z.
2010-01-01
Visual saliency is the perceptual quality that makes some items in visual scenes stand out from their immediate contexts. Visual saliency plays important roles in natural vision in that saliency can direct eye movements, deploy attention, and facilitate tasks like object detection and scene understanding. A central unsolved issue is: What features should be encoded in the early visual cortex for detecting salient features in natural scenes? To explore this important issue, we propose a hypothesis that visual saliency is based on efficient encoding of the probability distributions (PDs) of visual variables in specific contexts in natural scenes, referred to as context-mediated PDs in natural scenes. In this concept, computational units in the model of the early visual system do not act as feature detectors but rather as estimators of the context-mediated PDs of a full range of visual variables in natural scenes, which directly give rise to a measure of visual saliency of any input stimulus. To test this hypothesis, we developed a model of the context-mediated PDs in natural scenes using a modified algorithm for independent component analysis (ICA) and derived a measure of visual saliency based on these PDs estimated from a set of natural scenes. We demonstrated that visual saliency based on the context-mediated PDs in natural scenes effectively predicts human gaze in free-viewing of both static and dynamic natural scenes. This study suggests that the computation based on the context-mediated PDs of visual variables in natural scenes may underlie the neural mechanism in the early visual cortex for detecting salient features in natural scenes. PMID:21209963
Higher-order ionosphere modeling for CODE's next reprocessing activities
NASA Astrophysics Data System (ADS)
Lutz, S.; Schaer, S.; Meindl, M.; Dach, R.; Steigenberger, P.
2009-12-01
CODE (the Center for Orbit Determination in Europe) is a joint venture between the Astronomical Institute of the University of Bern (AIUB, Bern, Switzerland), the Federal Office of Topography (swisstopo, Wabern, Switzerland), the Federal Agency for Cartography and Geodesy (BKG, Frankfurt am Main, Germany), and the Institut für Astronomische und Physikalische Geodäsie of the Technische Universität München (IAPG/TUM, Munich, Germany). It acts as one of the global analysis centers of the International GNSS Service (IGS) and participates in the first IGS reprocessing campaign, a full reanalysis of GPS data collected since 1994. For a future reanalysis of the IGS data it is planned to consider not only first-order but also higher-order ionosphere terms in the space geodetic observations. Several works (e.g. Fritsche et al. 2005) have shown a significant and systematic influence of these effects on the analysis results. The development version of the Bernese Software used at CODE is expanded by the ability to assign additional (scaling) parameters to each considered higher-order ionosphere term. By this, each correction term can be switched on and off on normal-equation level and, moreover, the significance of each correction term may be verified on observation level for different ionosphere conditions.
A model for non-monotonic intensity coding
Nehrkorn, Johannes; Tanimoto, Hiromu; Herz, Andreas V. M.; Yarali, Ayse
2015-01-01
Peripheral neurons of most sensory systems increase their response with increasing stimulus intensity. Behavioural responses, however, can be specific to some intermediate intensity level whose particular value might be innate or associatively learned. Learning such a preference requires an adjustable transformation from a monotonic stimulus representation at the sensory periphery to a non-monotonic representation for the motor command. How do neural systems accomplish this task? We tackle this general question focusing on odour-intensity learning in the fruit fly, whose first- and second-order olfactory neurons show monotonic stimulus–response curves. Nevertheless, flies form associative memories specific to particular trained odour intensities. Thus, downstream of the first two olfactory processing layers, odour intensity must be re-coded to enable intensity-specific associative learning. We present a minimal, feed-forward, three-layer circuit, which implements the required transformation by combining excitation, inhibition, and, as a decisive third element, homeostatic plasticity. Key features of this circuit motif are consistent with the known architecture and physiology of the fly olfactory system, whereas alternative mechanisms are either not composed of simple, scalable building blocks or not compatible with physiological observations. The simplicity of the circuit and the robustness of its function under parameter changes make this computational motif an attractive candidate for tuneable non-monotonic intensity coding. PMID:26064666
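The required monotonic-to-non-monotonic transformation can be illustrated with a toy circuit (not the authors' published model; thresholds and gain are invented for illustration): subtracting a higher-threshold inhibitory sigmoid from a lower-threshold excitatory one yields a response that peaks at an intermediate intensity.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def third_layer(intensity, exc_thresh=2.0, inh_thresh=4.0, gain=3.0):
    """Toy downstream unit: monotonic excitation minus delayed inhibition."""
    exc = sigmoid(gain * (intensity - exc_thresh))   # monotonic excitation
    inh = sigmoid(gain * (intensity - inh_thresh))   # inhibition with higher threshold
    return exc - inh                                 # non-monotonic net response

I = np.linspace(0, 8, 81)
r = third_layer(I)
peak = I[np.argmax(r)]    # the preferred, intermediate intensity (here near 3)
```

Shifting either threshold moves the preferred intensity, which is the kind of adjustability that associative intensity learning requires.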
Riordan, Brian; Jones, Michael N
2011-04-01
Since their inception, distributional models of semantics have been criticized as inadequate cognitive theories of human semantic learning and representation. A principal challenge is that the representations derived by distributional models are purely symbolic and are not grounded in perception and action; this challenge has led many to favor feature-based models of semantic representation. We argue that the amount of perceptual and other semantic information that can be learned from purely distributional statistics has been underappreciated. We compare the representations of three feature-based and nine distributional models using a semantic clustering task. Several distributional models demonstrated semantic clustering comparable with clustering based on feature-based representations. Furthermore, when trained on child-directed speech, the same distributional models perform as well as sensorimotor-based feature representations of children's lexical semantic knowledge. These results suggest that, to a large extent, information relevant for extracting semantic categories is redundantly coded in perceptual and linguistic experience. Detailed analyses of the semantic clusters of the feature-based and distributional models also reveal that the models make use of complementary cues to semantic organization from the two data streams. Rather than conceptualizing feature-based and distributional models as competing theories, we argue that future focus should be on understanding the cognitive mechanisms humans use to integrate the two sources. PMID:25164298
Field-based tests of geochemical modeling codes: New Zealand hydrothermal systems
Bruton, C.J.; Glassley, W.E.; Bourcier, W.L.
1993-12-01
Hydrothermal systems in the Taupo Volcanic Zone, North Island, New Zealand are being used as field-based modeling exercises for the EQ3/6 geochemical modeling code package. Comparisons of the observed state and evolution of the hydrothermal systems with predictions of fluid-solid equilibria made using geochemical modeling codes will determine how the codes can be used to predict the chemical and mineralogical response of the environment to nuclear waste emplacement. Field-based exercises allow us to test the models on time scales unattainable in the laboratory. Preliminary predictions of mineral assemblages in equilibrium with fluids sampled from wells in the Wairakei and Kawerau geothermal field suggest that affinity-temperature diagrams must be used in conjunction with EQ6 to minimize the effect of uncertainties in thermodynamic and kinetic data on code predictions.
Field-based tests of geochemical modeling codes using New Zealand hydrothermal systems
Bruton, C.J.; Glassley, W.E.; Bourcier, W.L.
1994-06-01
Hydrothermal systems in the Taupo Volcanic Zone, North Island, New Zealand are being used as field-based modeling exercises for the EQ3/6 geochemical modeling code package. Comparisons of the observed state and evolution of the hydrothermal systems with predictions of fluid-solid equilibria made using geochemical modeling codes will determine how the codes can be used to predict the chemical and mineralogical response of the environment to nuclear waste emplacement. Field-based exercises allow us to test the models on time scales unattainable in the laboratory. Preliminary predictions of mineral assemblages in equilibrium with fluids sampled from wells in the Wairakei and Kawerau geothermal field suggest that affinity-temperature diagrams must be used in conjunction with EQ6 to minimize the effect of uncertainties in thermodynamic and kinetic data on code predictions.
Lumpy - an interactive Lumped Parameter Modeling code based on MS Access and MS Excel.
NASA Astrophysics Data System (ADS)
Suckow, A.
2012-04-01
Several tracers for dating groundwater (18O/2H, 3H, CFCs, SF6, 85Kr) need lumped parameter modeling (LPM) to convert measured values into numbers with unit time. Other tracers (T/3He, 39Ar, 14C, 81Kr) allow the computation of apparent ages with a mathematical formula using radioactive decay, without defining the age mixture that any groundwater sample represents. Interpretation of the latter also profits significantly from LPM tools that allow forward modeling of input time series to measurable output values assuming different age distributions and mixtures in the sample. This talk presents a Lumped Parameter Modeling code, Lumpy, combining up to two LPMs in parallel. The code is standalone and freeware. It is based on MS Access and Access Basic (AB) and allows using any number of measurements for both input time series and output measurements, with any, not necessarily constant, time resolution. Several tracers, even those spanning very different timescales (e.g. the combination of 18O, CFCs and 14C), can be modeled, displayed and fitted simultaneously. Lumpy allows for each of the two parallel models the choice of the following age distributions: Exponential Piston flow Model (EPM), Linear Piston flow Model (LPM), Dispersion Model (DM), Piston flow Model (PM) and Gamma Model (GM). Concerning input functions, Lumpy allows delaying the input (passage through the unsaturated zone), shifting it by a constant value (converting 18O data from a GNIP station to a different altitude), multiplying it by a constant value (geochemical reduction of initial 14C), and defining a constant input value prior to the input time series (pre-bomb tritium). Lumpy also allows underground tracer production (4He or 39Ar) and the computation of a daughter product (tritiugenic 3He) as well as partial loss of the daughter product (partial re-equilibration of 3He). These additional parameters and the input functions can be defined independently for the two sub-LPMs to represent two different recharge
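The core LPM operation Lumpy performs, convolving an input time series with an age distribution while applying radioactive decay, can be sketched as follows (a minimal illustration with an exponential age distribution and a synthetic tritium input; not Lumpy's Access Basic code, and the mean residence time is an assumed value):

```python
import numpy as np

def exponential_model(c_in, mean_age, half_life=12.32):
    """Convolve an input concentration series (yearly steps) with an
    exponential transit-time distribution, decaying during transit.
    half_life defaults to tritium's 12.32 years."""
    lam = np.log(2) / half_life
    tau = np.arange(len(c_in))                  # transit time in years
    g = np.exp(-tau / mean_age) / mean_age      # exponential age distribution
    g /= g.sum()                                # normalize on the finite grid
    weights = g * np.exp(-lam * tau)            # radioactive decay during transit
    # c_out(t) = sum over tau of c_in(t - tau) * weights(tau)
    return np.convolve(c_in, weights)[: len(c_in)]

c_in = np.zeros(60); c_in[10] = 100.0           # synthetic tritium spike in year 10
c_out = exponential_model(c_in, mean_age=15.0)  # smeared, decayed output
```

Fitting in Lumpy amounts to adjusting parameters such as `mean_age` (and combining two such sub-models) until the modeled output matches the measured tracer concentrations.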
Modeling of Ionization Physics with the PIC Code OSIRIS
Deng, S.; Tsung, F.; Lee, S.; Lu, W.; Mori, W.B.; Katsouleas, T.; Muggli, P.; Blue, B.E.; Clayton, C.E.; O'Connell, C.; Dodd, E.; Decker, F.J.; Huang, C.; Hogan, M.J.; Hemker, R.; Iverson, R.H.; Joshi, C.; Ren, C.; Raimondi, P.; Wang, S.; Walz, D.; /Southern California U. /UCLA /SLAC
2005-09-27
When considering intense particle or laser beams propagating in dense plasma or gas, ionization plays an important role. Impact ionization and tunnel ionization may create new plasma electrons, altering the physics of wakefield accelerators, causing blue shifts in laser spectra, creating and modifying instabilities, etc. Here we describe the addition of an impact ionization package into the 3-D, object-oriented, fully parallel PIC code OSIRIS. We apply the simulation tool to simulate the parameters of the upcoming E164 Plasma Wakefield Accelerator experiment at the Stanford Linear Accelerator Center (SLAC). We find that impact ionization is dominated by the plasma electrons moving in the wake rather than the 30 GeV drive beam electrons. Impact ionization leads to a significant number of trapped electrons accelerated from rest in the wake.
An Advanced simulation Code for Modeling Inductive Output Tubes
Thuc Bui; R. Lawrence Ives
2012-04-27
During the Phase I program, CCR completed several major building blocks for a 3D large signal, inductive output tube (IOT) code using modern computer language and programming techniques. These included a 3D, Helmholtz, time-harmonic, field solver with a fully functional graphical user interface (GUI), automeshing and adaptivity. Other building blocks included the improved electrostatic Poisson solver with temporal boundary conditions to provide temporal fields for the time-stepping particle pusher as well as the self electric field caused by time-varying space charge. The magnetostatic field solver was also updated to solve for the self magnetic field caused by time changing current density in the output cavity gap. The goal function to optimize an IOT cavity was also formulated, and the optimization methodologies were investigated.
FREYA-a new Monte Carlo code for improved modeling of fission chains
Hagmann, C A; Randrup, J; Vogt, R L
2012-06-12
A new simulation capability for modeling of individual fission events and chains and the transport of fission products in materials is presented. FREYA (Fission Reaction Event Yield Algorithm) is a Monte Carlo code for generating fission events providing correlated kinematic information for prompt neutrons, gammas, and fragments. As a standalone code, FREYA calculates quantities such as multiplicity-energy, angular, and gamma-neutron energy sharing correlations. To study materials with multiplication, shielding effects, and detectors, we have integrated FREYA into the general-purpose Monte Carlo code MCNP. This new tool will allow more accurate modeling of detector responses, including correlations, and the development of SNM detectors with increased sensitivity.
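To illustrate the kind of multiplication statistics such a tool addresses, here is a deliberately crude toy chain sampler (not FREYA's physics: the multiplicity distribution and the induced-fission probability below are invented for illustration):

```python
import random

# Assumed prompt-neutron multiplicity distribution (toy values, not evaluated data)
NU_PMF = {0: 0.03, 1: 0.16, 2: 0.33, 3: 0.30, 4: 0.14, 5: 0.04}

def sample_nu(rng):
    """Draw a prompt-neutron multiplicity from the assumed distribution."""
    r, acc = rng.random(), 0.0
    for nu, p in NU_PMF.items():
        acc += p
        if r < acc:
            return nu
    return max(NU_PMF)

def chain_neutrons(p_induce, rng, max_events=10_000):
    """Total neutrons emitted by one chain started by a single fission.
    Each emitted neutron induces a further fission with probability p_induce
    (a crude stand-in for multiplication in a subcritical assembly)."""
    total, pending = 0, 1
    while pending and max_events:
        pending -= 1; max_events -= 1
        nu = sample_nu(rng)
        total += nu
        for _ in range(nu):
            if rng.random() < p_induce:
                pending += 1
    return total

rng = random.Random(42)
counts = [chain_neutrons(0.2, rng) for _ in range(2000)]
mean = sum(counts) / len(counts)   # enhanced above the bare nu-bar by the chain
```

FREYA's value is precisely that it replaces such toy distributions with correlated, physics-based event-by-event sampling.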
Thrust Chamber Modeling Using Navier-Stokes Equations: Code Documentation and Listings. Volume 2
NASA Technical Reports Server (NTRS)
Daley, P. L.; Owens, S. F.
1988-01-01
A copy of the PHOENICS input files and FORTRAN code developed for the modeling of thrust chambers is given. These copies are contained in the Appendices. The listings are contained in Appendices A through E. Appendix A describes the input statements relevant to thrust chamber modeling as well as the FORTRAN code developed for the Satellite program. Appendix B describes the FORTRAN code developed for the Ground program. Appendices C through E contain copies of the Q1 (input) file, the Satellite program, and the Ground program respectively.
Transfer function modeling of damping mechanisms in distributed parameter models
NASA Technical Reports Server (NTRS)
Slater, J. C.; Inman, D. J.
1994-01-01
This work formulates a method for the modeling of material damping characteristics in distributed parameter models which may be easily applied to models such as rod, plate, and beam equations. The general linear boundary value vibration equation is modified to incorporate hysteresis effects represented by complex stiffness using the transfer function approach proposed by Golla and Hughes. The governing characteristic equations are decoupled through separation of variables yielding solutions similar to those of undamped classical theory, allowing solution of the steady state as well as transient response. Example problems and solutions are provided demonstrating the similarity of the solutions to those of the classical theories and transient responses of nonviscous systems.
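The complex-stiffness idea underlying hysteretic damping can be illustrated for a single mode (a sketch of the general concept, not the Golla-Hughes transfer function formulation used in the paper; parameter values are arbitrary):

```python
import numpy as np

def receptance(omega, m=1.0, k=100.0, eta=0.05):
    """Steady-state displacement/force ratio for a single mode with
    hysteretic damping: replace k by k*(1 + i*eta), so the energy loss
    per cycle is independent of frequency."""
    return 1.0 / (k * (1 + 1j * eta) - m * omega**2)

w = np.linspace(0.1, 20.0, 400)
H = receptance(w)
w_res = w[np.argmax(np.abs(H))]   # peak near the undamped frequency sqrt(k/m) = 10
```

At resonance the magnitude reduces to 1/(k*eta), which is how the loss factor eta controls the response amplitude in this model.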
Jow, Hong-Nian; Murfin, W.B.; Johnson, J.D.
1993-11-01
This report describes the source term estimation codes, XSORs. The codes are written for three pressurized water reactors (Surry, Sequoyah, and Zion) and two boiling water reactors (Peach Bottom and Grand Gulf). The ensemble of codes has been named "XSOR". The purpose of the XSOR codes is to estimate the source terms which would be released to the atmosphere in severe accidents. A source term includes the release fractions of several radionuclide groups, the timing and duration of releases, the rates of energy release, and the elevation of releases. The codes have been developed by Sandia National Laboratories for the US Nuclear Regulatory Commission (NRC) in support of the NUREG-1150 program. The XSOR codes are fast-running parametric codes and are used as surrogates for detailed mechanistic codes. The XSOR codes also provide the capability to explore phenomena and their uncertainty which are not currently modeled by the mechanistic codes. The uncertainty distributions of input parameters may be used by an XSOR code to estimate the uncertainty of source terms.
A Distributed Snow Evolution Modeling System (SnowModel)
NASA Astrophysics Data System (ADS)
Liston, G. E.; Elder, K.
2004-12-01
A spatially distributed snow-evolution modeling system (SnowModel) has been specifically designed to be applicable over a wide range of snow landscapes, climates, and conditions. To reach this goal, SnowModel is composed of four sub-models: MicroMet defines the meteorological forcing conditions, EnBal calculates surface energy exchanges, SnowMass simulates snow depth and water-equivalent evolution, and SnowTran-3D accounts for snow redistribution by wind. While other distributed snow models exist, SnowModel is unique in that it includes a well-tested blowing-snow sub-model (SnowTran-3D) for application in windy arctic, alpine, and prairie environments where snowdrifts are common. These environments comprise 68% of the seasonally snow-covered Northern Hemisphere land surface. SnowModel also accounts for snow processes occurring in forested environments (e.g., canopy interception related processes). SnowModel is designed to simulate snow-related physical processes occurring at spatial scales of 5-m and greater, and temporal scales of 1-hour and greater. These include: accumulation from precipitation; wind redistribution and sublimation; loading, unloading, and sublimation within forest canopies; snow-density evolution; and snowpack ripening and melt. To enhance its wide applicability, SnowModel includes the physical calculations required to simulate snow evolution within each of the global snow classes defined by Sturm et al. (1995), e.g., tundra, taiga, alpine, prairie, maritime, and ephemeral snow covers. The three, 25-km by 25-km, Cold Land Processes Experiment (CLPX) mesoscale study areas (MSAs: Fraser, North Park, and Rabbit Ears) are used as SnowModel simulation examples to highlight model strengths, weaknesses, and features in forested, semi-forested, alpine, and shrubland environments.
NASA Technical Reports Server (NTRS)
Gouge, Michael F.
2011-01-01
Hypervelocity impact tests on test satellites are performed by members of the orbital debris scientific community in order to understand and typify the on-orbit collision breakup process. By analysis of these test satellite fragments, the fragment size and mass distributions are derived and incorporated into various orbital debris models. These same fragments are currently being put to new use with emerging technologies. Digital models of these fragments are created using a laser scanner. A group of computer programs referred to as the Fragment Rotation Analysis and Lightcurve code uses these digital representations in a multitude of ways that describe, measure, and model on-orbit fragments and fragment behavior. The Dynamic Rotation subroutine generates all of the possible reflected intensities from a scanned fragment as if it were observed to rotate dynamically while in orbit about the Earth. This calls an additional subroutine that graphically displays the intensities and the resulting frequency of those intensities over a range of solar phase angles in a Probability Density Function plot. This document reports the additions and modifications to the subset of the Fragment Rotation Analysis and Lightcurve code concerned with the Dynamic Rotation and Probability Density Function plotting subroutines.
NASA Astrophysics Data System (ADS)
Wang, Yannian; Jiang, Zhuangde
2006-03-01
A new distributed optical fiber sensor system for long-distance oil pipeline leakage and external damage detection is presented. A smart and sensitive optical fiber cable is buried beneath the soil running along the oil pipeline; it is sensitive to soakage of oil products and to mechanical deformation and vibration caused by leaking, tampering, and mechanical impacts. The region of additional attenuation can be located based on optical time domain reflectometry (OTDR), and the types of external disturbances can be identified according to the characteristics of the transmitted optical power. Golay codes are utilized to improve the range-resolution performance of the OTDR sub-system and offer a method to characterize the transmitted optical power over a wide range of the frequency spectrum. Theoretical analysis and simulation experiments have shown that the application of Golay codes can overcome the shortcomings of the prototype based on conventional single-pulse OTDR.
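The property of Golay complementary pairs that such coded OTDR exploits, autocorrelations that sum to a perfect delta, is easy to verify numerically (standard recursive construction; the sequence length here is an arbitrary choice):

```python
import numpy as np

def golay_pair(n_bits):
    """Build a complementary Golay pair of length 2**n_bits by the
    standard recursion: a' = [a, b], b' = [a, -b]."""
    a, b = np.array([1]), np.array([1])
    for _ in range(n_bits):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

a, b = golay_pair(6)   # length-64 complementary pair of +/-1 sequences
acorr = np.correlate(a, a, "full") + np.correlate(b, b, "full")
# acorr equals 2*len(a) at zero lag and exactly zero at every other lag,
# so probing with both sequences and summing the correlated returns gives
# single-pulse range resolution with the SNR gain of a long code.
```

This zero-sidelobe property is what lets the coded OTDR improve signal-to-noise without the range ambiguity that a single long pulse would introduce.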
Intercept Centering and Time Coding in Latent Difference Score Models
ERIC Educational Resources Information Center
Grimm, Kevin J.
2012-01-01
Latent difference score (LDS) models combine benefits derived from autoregressive and latent growth curve models allowing for time-dependent influences and systematic change. The specification and descriptions of LDS models include an initial level of ability or trait plus an accumulation of changes. A limitation of this specification is that the…
RELAP5/MOD3 code manual. Volume 4, Models and correlations
1995-08-01
The RELAP5 code has been developed for best-estimate transient simulation of light water reactor coolant systems during postulated accidents. The code models the coupled behavior of the reactor coolant system and the core for loss-of-coolant accidents and operational transients such as anticipated transient without scram, loss of offsite power, loss of feedwater, and loss of flow. A generic modeling approach is used that permits simulating a variety of thermal hydraulic systems. Control system and secondary system components are included to permit modeling of plant controls, turbines, condensers, and secondary feedwater systems. RELAP5/MOD3 code documentation is divided into seven volumes: Volume I presents modeling theory and associated numerical schemes; Volume II details instructions for code application and input data preparation; Volume III presents the results of developmental assessment cases that demonstrate and verify the models used in the code; Volume IV discusses in detail RELAP5 models and correlations; Volume V presents guidelines that have evolved over the past several years through the use of the RELAP5 code; Volume VI discusses the numerical scheme used in RELAP5; and Volume VII presents a collection of independent assessment calculations.
NASA Astrophysics Data System (ADS)
Medley, S. S.; Liu, D.; Gorelenkova, M. V.; Heidbrink, W. W.; Stagner, L.
2016-02-01
A 3D halo neutral code developed at the Princeton Plasma Physics Laboratory and implemented for analysis using the TRANSP code is applied to projected National Spherical Torus eXperiment-Upgrade (NSTX-U) plasmas. The legacy TRANSP code did not handle halo neutrals properly, since they were distributed over the plasma volume rather than remaining in the vicinity of the neutral beam footprint as is actually the case. The 3D halo neutral code uses a 'beam-in-a-box' model that encompasses both injected beam neutrals and resulting halo neutrals. Upon deposition by charge exchange, a subset of the full, one-half and one-third beam energy components produce first-generation halo neutrals that are tracked through successive generations until an ionization event occurs or the descendant halos exit the box. The 3D halo neutral model and neutral particle analyzer (NPA) simulator in the TRANSP code have been benchmarked with the Fast-Ion D-Alpha simulation (FIDAsim) code, which provides Monte Carlo simulations of beam neutral injection, attenuation, halo generation, halo spatial diffusion, and photoemission processes. When using the same atomic physics database, TRANSP and FIDAsim simulations achieve excellent agreement on the spatial profile and magnitude of beam and halo neutral densities and the NPA energy spectrum. The simulations show that the halo neutral density can be comparable to the beam neutral density. These halo neutrals can double the NPA flux, but they have minor effects on the NPA energy spectrum shape. The TRANSP and FIDAsim simulations also suggest that the magnitudes of beam and halo neutral densities are relatively sensitive to the choice of the atomic physics databases.
Anisotropic distributions in a multiphase transport model
NASA Astrophysics Data System (ADS)
Zhou, You; Xiao, Kai; Feng, Zhao; Liu, Feng; Snellings, Raimond
2016-03-01
Using a multiphase transport (AMPT) model, we investigate the relation between the magnitude, fluctuations, and correlations of the initial state spatial anisotropy ɛn and the final state anisotropic flow coefficients vn in Au+Au collisions at √{s NN}=200 GeV. It is found that the relative eccentricity fluctuations in AMPT account for the observed elliptic flow fluctuations; both are in agreement with the elliptic flow fluctuation measurements from the STAR collaboration. In addition, studies based on two- and multiparticle correlations and event-by-event distributions of the anisotropies suggest that the elliptic-power function is a promising candidate for the underlying probability density function of the event-by-event distributions of ɛn as well as vn. Furthermore, the correlations between different order symmetry planes and harmonics in the initial coordinate space and final state momentum space are presented. Nonzero values of these correlations have been observed. The comparison between our calculations and data will, in the future, shed new light on the nature of the fluctuations of the quark-gluon plasma produced in heavy ion collisions.
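The initial-state anisotropy ɛn is conventionally estimated from the transverse positions of the participants. A minimal sketch of that estimator (the centre-of-mass recentring and the r^n weighting are the standard convention, assumed here for illustration):

```python
import cmath
import math

def eccentricity(points, n):
    """eps_n = |<r**n * exp(i*n*phi)>| / <r**n> over participant
    (x, y) positions, evaluated in the centre-of-mass frame."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    num = 0j
    den = 0.0
    for x, y in points:
        dx, dy = x - cx, y - cy
        r = math.hypot(dx, dy)
        phi = math.atan2(dy, dx)
        num += r**n * cmath.exp(1j * n * phi)
        den += r**n
    return abs(num) / den if den else 0.0
```

An elongated participant distribution gives a large ɛ2, while a circularly symmetric one gives ɛ2 = 0; event-by-event scatter in these values is the eccentricity fluctuation discussed above.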
Distributed Energy Resources Market Diffusion Model
Maribu, Karl Magnus; Firestone, Ryan; Marnay, Chris; Siddiqui,Afzal S.
2006-06-16
Distributed generation (DG) technologies, such as gas-fired reciprocating engines and microturbines, have been found to be economically beneficial in meeting commercial-sector electrical, heating, and cooling loads. Even though the electric-only efficiency of DG is lower than that offered by traditional central stations, combined heat and power (CHP) applications using recovered heat can make the overall system energy efficiency of distributed energy resources (DER) greater. From a policy perspective, however, it would be useful to have good estimates of penetration rates of DER under various economic and regulatory scenarios. In order to examine the extent to which DER systems may be adopted at a national level, we model the diffusion of DER in the US commercial building sector under different technical research and technology outreach scenarios. In this context, technology market diffusion is assumed to depend on the system's economic attractiveness and the developer's knowledge about the technology. The latter can be spread both by word-of-mouth and by public outreach programs. To account for regional differences in energy markets and climates, as well as the economic potential for different building types, optimal DER systems are found for several building types and regions. Technology diffusion is then predicted via two scenarios: a baseline scenario and a program scenario, in which more research improves DER performance and stronger technology outreach programs increase DER knowledge. The results depict a large and diverse market where both optimal installed capacity and profitability vary significantly across regions and building types. According to the technology diffusion model, the West region will take the lead in DER installations mainly due to high electricity prices, followed by a later adoption in the Northeast and Midwest regions. Since the DER market is in an early stage, both technology research and outreach programs have the potential to increase
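A diffusion process driven by outreach plus word-of-mouth, as assumed above, can be sketched with a simple Bass-style recursion. The parameter values and the functional form are illustrative assumptions, not the study's actual DER market model:

```python
def diffuse(market, p=0.01, q=0.3, years=20):
    """Bass-style diffusion sketch: each year a fraction of the
    remaining market adopts, driven by outreach (p, external) and
    word-of-mouth (q, proportional to the installed base).
    Illustrative only."""
    adopters = 0.0
    path = []
    for _ in range(years):
        remaining = market - adopters
        adopters += (p + q * adopters / market) * remaining
        path.append(adopters)
    return path
```

Raising p mimics a stronger outreach program, while raising q mimics faster word-of-mouth spread; both pull the adoption curve forward, which is the qualitative effect the program scenario explores.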
LWR codes capability to address SFR BDBA scenarios: Modeling of the ABCOVE tests
Herranz, L. E.; Garcia, M.; Morandi, S.
2012-07-01
The sound background built up in LWR source term analysis of severe accidents makes it worthwhile to check the capability of LWR safety analysis codes to model SFR accident scenarios, at least in some areas. This paper gives a snapshot of such predictability in the area of aerosol behavior in containment. To do so, the AB-5 test of the ABCOVE program has been modeled with three LWR codes: ASTEC, ECART and MELCOR. Through the search for a best-estimate scenario and its comparison to data, it is concluded that even in the specific case of in-containment aerosol behavior, some enhancements would be needed in the LWR codes and/or their application, particularly with respect to the consideration of particle shape. Nonetheless, much of the modeling presently embodied in LWR codes might be applicable to SFR scenarios. These conclusions should be seen as preliminary as long as the comparisons are not extended to more experimental scenarios. (authors)
Relativistic modeling capabilities in PERSEUS extended MHD simulation code for HED plasmas
Hamlin, Nathaniel D.; Seyler, Charles E.
2014-12-15
We discuss the incorporation of relativistic modeling capabilities into the PERSEUS extended MHD simulation code for high-energy-density (HED) plasmas, and present the latest hybrid X-pinch simulation results. The use of fully relativistic equations enables the model to remain self-consistent in simulations of such relativistic phenomena as X-pinches and laser-plasma interactions. By suitable formulation of the relativistic generalized Ohm’s law as an evolution equation, we have reduced the recovery of primitive variables, a major technical challenge in relativistic codes, to a straightforward algebraic computation. Our code recovers expected results in the non-relativistic limit, and reveals new physics in the modeling of electron beam acceleration following an X-pinch. Through the use of a relaxation scheme, relativistic PERSEUS is able to handle nine orders of magnitude in density variation, making it the first fluid code, to our knowledge, that can simulate relativistic HED plasmas.
Welter, David E.; Doherty, John E.; Hunt, Randall J.; Muffels, Christopher T.; Tonkin, Matthew J.; Schreuder, Willem A.
2012-01-01
An object-oriented parameter estimation code was developed to incorporate benefits of object-oriented programming techniques for solving large parameter estimation modeling problems. The code is written in C++ and is a formulation and expansion of the algorithms included in PEST, a widely used parameter estimation code written in Fortran. The new code is called PEST++ and is designed to lower the barriers of entry for users and developers while providing efficient algorithms that can accommodate large, highly parameterized problems. This effort has focused on (1) implementing the most popular features of PEST in a fashion that is easy for novice or experienced modelers to use and (2) creating a software design that is easy to extend; that is, this effort provides a documented object-oriented framework designed from the ground up to be modular and extensible. In addition, all PEST++ source code and its associated libraries, as well as the general run manager source code, have been integrated in the Microsoft Visual Studio® 2010 integrated development environment. The PEST++ code is designed to provide a foundation for an open-source development environment capable of producing robust and efficient parameter estimation tools for the environmental modeling community into the future.
Parallel Spectral Transform Shallow Water Model: A runtime-tunable parallel benchmark code
Worley, P.H.; Foster, I.T.
1994-05-01
Fairness is an important issue when benchmarking parallel computers using application codes. The best parallel algorithm on one platform may not be the best on another. While it is not feasible to reevaluate parallel algorithms and reimplement large codes whenever new machines become available, it is possible to embed algorithmic options into codes that allow them to be "tuned" for a particular machine without requiring code modifications. In this paper, we describe a code in which such an approach was taken. PSTSWM was developed for evaluating parallel algorithms for the spectral transform method in atmospheric circulation models. Many levels of runtime-selectable algorithmic options are supported. We discuss these options and our evaluation methodology. We also provide empirical results from a number of parallel machines, indicating the importance of tuning for each platform before making a comparison.
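The runtime-tunable idea amounts to dispatching among interchangeable kernels via an input option instead of recompiling. A minimal sketch (the kernels here are stand-ins, using a matrix transpose in place of PSTSWM's parallel communication algorithms):

```python
def transpose_pairwise(data):
    """One interchangeable kernel variant (illustrative)."""
    return [list(row) for row in zip(*data)]

def transpose_blocked(data, block=2):
    """A second variant with the same contract but a different
    access pattern (illustrative)."""
    out = [[None] * len(data) for _ in data[0]]
    for i0 in range(0, len(data), block):
        for j0 in range(0, len(data[0]), block):
            for i in range(i0, min(i0 + block, len(data))):
                for j in range(j0, min(j0 + block, len(data[0]))):
                    out[j][i] = data[i][j]
    return out

ALGORITHMS = {"pairwise": transpose_pairwise, "blocked": transpose_blocked}

def run(data, algorithm="pairwise"):
    """Select the kernel at runtime, PSTSWM-style: the same code is
    'tuned' per machine through an option, not a re-implementation."""
    return ALGORITHMS[algorithm](data)
```

Every variant must produce identical results, so a benchmark harness can sweep the option space on each new machine and record the fastest setting.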
A computer code for calculations in the algebraic collective model of the atomic nucleus
NASA Astrophysics Data System (ADS)
Welsh, T. A.; Rowe, D. J.
2016-03-01
A Maple code is presented for algebraic collective model (ACM) calculations. The ACM is an algebraic version of the Bohr model of the atomic nucleus, in which all required matrix elements are derived by exploiting the model's SU(1,1) × SO(5) dynamical group. This paper reviews the mathematical formulation of the ACM, and serves as a manual for the code. The code enables a wide range of model Hamiltonians to be analysed. This range includes essentially all Hamiltonians that are rational functions of the model's quadrupole moments q̂_M and are at most quadratic in the corresponding conjugate momenta π̂_N (-2 ≤ M, N ≤ 2). The code makes use of expressions for matrix elements derived elsewhere and newly derived matrix elements of the operators [π̂ ⊗ q̂ ⊗ π̂]_0 and [π̂ ⊗ π̂]_LM. The code is made efficient by use of an analytical expression for the needed SO(5)-reduced matrix elements, and use of SO(5) ⊃ SO(3) Clebsch-Gordan coefficients obtained from precomputed data files provided with the code.
Electrical Circuit Simulation Code
2001-08-09
CHILESPICE is a massively-parallel, distributed-memory electrical circuit simulation tool that contains many enhanced radiation, time-based, and thermal features and models. It is intended for large-scale electronic circuit simulation, and supports shared-memory parallel processing, enhanced convergence, and Sandia-specific device models.
Performance Measurement, Visualization and Modeling of Parallel and Distributed Programs
NASA Technical Reports Server (NTRS)
Yan, Jerry C.; Sarukkai, Sekhar R.; Mehra, Pankaj; Lum, Henry, Jr. (Technical Monitor)
1994-01-01
This paper presents a methodology for debugging the performance of message-passing programs on both tightly coupled and loosely coupled distributed-memory machines. The AIMS (Automated Instrumentation and Monitoring System) toolkit, a suite of software tools for measurement and analysis of performance, is introduced and its application illustrated using several benchmark programs drawn from the field of computational fluid dynamics. AIMS includes (i) Xinstrument, a powerful source-code instrumentor, which supports both Fortran77 and C as well as a number of different message-passing libraries including Intel's NX, Thinking Machines' CMMD, and PVM; (ii) Monitor, a library of timestamping and trace-collection routines that run on supercomputers (such as Intel's iPSC/860, Delta, and Paragon and Thinking Machines' CM5) as well as on networks of workstations (including Convex Cluster and SparcStations connected by a LAN); (iii) Visualization Kernel, a trace-animation facility that supports source-code clickback, simultaneous visualization of computation and communication patterns, as well as analysis of data movements; (iv) Statistics Kernel, an advanced profiling facility that associates a variety of performance data with various syntactic components of a parallel program; (v) Index Kernel, a diagnostic tool that helps pinpoint performance bottlenecks through the use of abstract indices; (vi) Modeling Kernel, a facility for automated modeling of message-passing programs that supports both simulation-based and analytical approaches to performance prediction and scalability analysis; (vii) Intrusion Compensator, a utility for recovering true performance from observed performance by removing the overheads of monitoring and their effects on the communication pattern of the program; and (viii) Compatibility Tools, which convert AIMS-generated traces into formats used by other performance-visualization tools, such as ParaGraph, Pablo, and certain AVS/Explorer modules.
A distributed clients/distributed servers model for STARCAT
NASA Technical Reports Server (NTRS)
Pirenne, B.; Albrecht, M. A.; Durand, D.; Gaudet, S.
1992-01-01
STARCAT, the Space Telescope ARchive and CATalogue user interface, has been around for a number of years. During this time it has been enhanced and augmented in a number of different fields. Here we dwell on a new capability allowing geographically distributed user interfaces to connect to geographically distributed data servers. This new concept permits users anywhere on the Internet running STARCAT on their local hardware to access, e.g., whichever of the three existing HST archive sites is available, to get information on the CFHT archive through a transparent connection to the CADC in British Columbia, or to get the La Silla weather by connecting to the ESO database in Munich during the same session. Similarly, PreView (or quick-look) images and spectra will flow directly to the user from wherever they are available. Moving towards an 'X'-based STARCAT is another goal being pursued: a graphic/image server and a help/doc server are currently being added. They should further enhance user independence and access transparency.
Code modernization and modularization of APEX and SWAT watershed simulation models
Technology Transfer Automated Retrieval System (TEKTRAN)
SWAT (Soil and Water Assessment Tool) and APEX (Agricultural Policy / Environmental eXtender) are, respectively, large- and small-watershed simulation models derived from EPIC (Environmental Policy Integrated Climate), a field-scale agroecology simulation model. All three models are coded in FORTRAN an...
NASA Technical Reports Server (NTRS)
Bartels, Robert E.
2012-01-01
This paper presents the implementation of gust modeling capability in the CFD code FUN3D. The gust capability is verified by computing the response of an airfoil to a sharp-edged gust; this result is compared with the theoretical result. The present simulations are also compared with other CFD gust simulations. This paper additionally serves as a user's manual for FUN3D gust analyses using a variety of gust profiles. Finally, the development of an Auto-Regressive Moving-Average (ARMA) reduced order gust model using a gust with a Gaussian profile in the FUN3D code is presented. ARMA-simulated results for a sequence of one-minus-cosine gusts are shown to compare well with the same gust profile computed with FUN3D. Proper Orthogonal Decomposition (POD) is combined with the ARMA modeling technique to predict the time-varying pressure coefficient increment distribution due to a novel gust profile. The aeroelastic response of a pitch/plunge airfoil to a gust environment is computed with a reduced order model, and compared with a direct simulation of the system in the FUN3D code. The two results are found to agree very well.
1980-06-01
These recommended requirements include provisions for electrical, building, mechanical, and plumbing installations for active and passive solar energy systems used for space or process heating and cooling, and domestic water heating. The provisions in these recommended requirements are intended to be used in conjunction with the existing building codes in each jurisdiction. Where a solar relevant provision is adequately covered in an existing model code, the section is referenced in the Appendix. Where a provision has been drafted because there is no counterpart in the existing model code, it is found in the body of these recommended requirements. Commentaries are included in the text explaining the coverage and intent of present model code requirements and suggesting alternatives that may, at the discretion of the building official, be considered as providing reasonable protection to the public health and safety. Also included is an Appendix which is divided into a model code cross reference section and a reference standards section. The model code cross references are a compilation of the sections in the text and their equivalent requirements in the applicable model codes. (MHR)
Phonological coding during reading
Leinenger, Mallorie
2014-01-01
The exact role that phonological coding (the recoding of written, orthographic information into a sound based code) plays during silent reading has been extensively studied for more than a century. Despite the large body of research surrounding the topic, varying theories as to the time course and function of this recoding still exist. The present review synthesizes this body of research, addressing the topics of time course and function in tandem. The varying theories surrounding the function of phonological coding (e.g., that phonological codes aid lexical access, that phonological codes aid comprehension and bolster short-term memory, or that phonological codes are largely epiphenomenal in skilled readers) are first outlined, and the time courses that each maps onto (e.g., that phonological codes come online early (pre-lexical) or that phonological codes come online late (post-lexical)) are discussed. Next the research relevant to each of these proposed functions is reviewed, discussing the varying methodologies that have been used to investigate phonological coding (e.g., response time methods, reading while eyetracking or recording EEG and MEG, concurrent articulation) and highlighting the advantages and limitations of each with respect to the study of phonological coding. In response to the view that phonological coding is largely epiphenomenal in skilled readers, research on the use of phonological codes in prelingually, profoundly deaf readers is reviewed. Finally, implications for current models of word identification (activation-verification model (Van Orden, 1987), dual-route model (e.g., Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001), parallel distributed processing model (Seidenberg & McClelland, 1989)) are discussed. PMID:25150679
Phonological coding during reading.
Leinenger, Mallorie
2014-11-01
The exact role that phonological coding (the recoding of written, orthographic information into a sound based code) plays during silent reading has been extensively studied for more than a century. Despite the large body of research surrounding the topic, varying theories as to the time course and function of this recoding still exist. The present review synthesizes this body of research, addressing the topics of time course and function in tandem. The varying theories surrounding the function of phonological coding (e.g., that phonological codes aid lexical access, that phonological codes aid comprehension and bolster short-term memory, or that phonological codes are largely epiphenomenal in skilled readers) are first outlined, and the time courses that each maps onto (e.g., that phonological codes come online early [prelexical] or that phonological codes come online late [postlexical]) are discussed. Next the research relevant to each of these proposed functions is reviewed, discussing the varying methodologies that have been used to investigate phonological coding (e.g., response time methods, reading while eye-tracking or recording EEG and MEG, concurrent articulation) and highlighting the advantages and limitations of each with respect to the study of phonological coding. In response to the view that phonological coding is largely epiphenomenal in skilled readers, research on the use of phonological codes in prelingually, profoundly deaf readers is reviewed. Finally, implications for current models of word identification (activation-verification model, Van Orden, 1987; dual-route model, e.g., M. Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001; parallel distributed processing model, Seidenberg & McClelland, 1989) are discussed. PMID:25150679
Automated generation of uniform Group Technology part codes from solid model data
Ames, A.L.
1987-01-01
Group Technology is a manufacturing theory based on the identification of similar parts and the subsequent grouping of these parts to enhance the manufacturing process. Part classification and coding systems group parts into families based on design and manufacturing attributes. Traditionally, humans code parts by examining a blueprint of the part to find important features as defined in a set of part classification rules. This process can be difficult and time consuming due to the complexity of the classification system. Coding specifications can require considerable interpretation, making consistency a problem for organizations employing many (human) part coders. A solution to these problems is to automate the part coding process in software, using a CAD database as input. It is straightforward to translate the part classification rules into a rule-based expert system. A more difficult task is the recognition of part coding features from a CAD database. Previous research in feature recognition has concentrated on material removal features (depressions such as holes, pockets and slots). Part classification requires the ability to recognize such features, plus other features such as hole patterns, symmetries and overall part shape. This paper extends feature recognition to include part classification and coding features, and describes an expert system being developed for automated part classification and coding. This system accepts boundary-representation solid model data and generates a part code. Specific feature recognition problems (such as intersecting features) and the methods developed to solve them are presented.
Once-through CANDU reactor models for the ORIGEN2 computer code
Croff, A.G.; Bjerke, M.A.
1980-11-01
Reactor physics calculations have led to the development of two CANDU reactor models for the ORIGEN2 computer code. The model CANDUs are based on (1) the existing once-through fuel cycle with feed comprised of natural uranium and (2) a projected slightly enriched (1.2 wt % ²³⁵U) fuel cycle. The reactor models are based on cross sections taken directly from the reactor physics codes. Descriptions of the reactor models, as well as values for the ORIGEN2 flux parameters THERM, RES, and FAST, are given.
MUFITS Code for Modeling Geological Storage of Carbon Dioxide at Sub- and Supercritical Conditions
NASA Astrophysics Data System (ADS)
Afanasyev, A.
2012-12-01
Two-phase models are widely used for simulation of CO2 storage in saline aquifers. These models support a gaseous phase mainly saturated with CO2 and a liquid phase mainly saturated with H2O (e.g. the TOUGH2 code). Such models can be applied to analysis of CO2 storage only in relatively deeply buried reservoirs where pressure exceeds the CO2 critical pressure. At these supercritical reservoir conditions only one supercritical CO2-rich phase appears in the aquifer due to CO2 injection. In shallow aquifers where reservoir pressure is less than the critical pressure, CO2 can split into two different liquid-like and gas-like phases (e.g. Spycher et al., 2003). Thus a region of three-phase flow of water, liquid CO2 and gaseous CO2 can appear near the CO2 injection point. Today there is no widely used and generally accepted numerical model capable of handling three-phase flows with two CO2-rich phases. In this work we propose a new hydrodynamic simulator, MUFITS (Multiphase Filtration Transport Simulator), for multiphase compositional modeling of CO2-H2O mixture flows in porous media at conditions of interest for carbon sequestration. The simulator is effective both for supercritical flows in a wide range of pressure and temperature and for subcritical three-phase flows of water, liquid CO2 and gaseous CO2 in shallow reservoirs. The distinctive feature of the proposed code lies in the methodology for determining mixture properties. Transport equations and the Darcy correlation are solved together with calculation of the entropy maximum that is reached in thermodynamic equilibrium and determines the mixture composition. To define and solve the problem only one function, the mixture thermodynamic potential, is required. The potential is determined using a three-parametric generalization of the Peng-Robinson equation of state fitted to experimental data (Todheide, Takenouchi, Altunin etc.). We apply MUFITS to simple 1D and 2D test problems of CO2 injection in shallow reservoirs subjected to phase changes between
On models of the genetic code generated by binary dichotomic algorithms.
Gumbel, Markus; Fimmel, Elena; Danielli, Alberto; Strüngmann, Lutz
2015-02-01
In this paper we introduce the concept of a BDA-generated model of the genetic code, which is based on binary dichotomic algorithms (BDAs). Such a BDA partitions the set of 64 codons into two disjoint classes of size 32 each and provides a generalization of known partitions like the Rumer dichotomy. We investigate what partitions can be generated when a set of different BDAs is applied sequentially to the set of codons. The search revealed that these models are able to generate code tables with very different numbers of classes, ranging from 2 to 64. We have analyzed whether there are models that map the codons to their amino acids. A perfect matching is not possible. However, we present models that describe the standard genetic code with only a few errors. There are also models that map all 64 codons uniquely to 64 classes, showing that BDAs can be used to identify codons precisely. This could serve as a basis for further mathematical analysis using coding theory, for example. The hypothesis that BDAs might reflect a molecular mechanism taking place in the decoding center of the ribosome is discussed. The scan demonstrated that binary dichotomic partitions are able to model different aspects of the genetic code very well. The search was performed with our tool Beady-A. This software is freely available at http://mi.informatik.hs-mannheim.de/beady-a. It requires a JVM version 6 or higher. PMID:25530514
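A 32/32 codon dichotomy of the kind discussed above can be sketched in a few lines. The rule below (purine vs. pyrimidine at the first position) is a deliberately simple illustrative dichotomy, not one of the paper's actual BDAs:

```python
from itertools import product

BASES = "ACGU"

def dichotomy(codon, pos=0, purines=("A", "G")):
    """A simple binary dichotomic rule (illustrative): class 0 if
    the base at position `pos` is a purine, else class 1."""
    return 0 if codon[pos] in purines else 1

# All 64 RNA codons and their class assignments.
codons = ["".join(c) for c in product(BASES, repeat=3)]
classes = {codon: dichotomy(codon) for codon in codons}
```

Because the rule inspects a single position against a two-base set, it necessarily splits the 64 codons into two classes of 32; applying several different dichotomies in sequence refines the partition, which is how the paper's models reach up to 64 classes.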
A Dual Coding Theoretical Model of Decoding in Reading: Subsuming the LaBerge and Samuels Model
ERIC Educational Resources Information Center
Sadoski, Mark; McTigue, Erin M.; Paivio, Allan
2012-01-01
In this article we present a detailed Dual Coding Theory (DCT) model of decoding. The DCT model reinterprets and subsumes The LaBerge and Samuels (1974) model of the reading process which has served well to account for decoding behaviors and the processes that underlie them. However, the LaBerge and Samuels model has had little to say about…
Implementation of a Monte Carlo based inverse planning model for clinical IMRT with MCNP code
NASA Astrophysics Data System (ADS)
He, Tongming Tony
In IMRT inverse planning, inaccurate dose calculations and limitations in optimization algorithms introduce both systematic and convergence errors into treatment plans. The goal of this work is to practically implement a Monte Carlo based inverse planning model for clinical IMRT. The intention is to minimize both types of error in inverse planning and obtain treatment plans with better clinical accuracy than non-Monte Carlo based systems. The strategy is to calculate the dose matrices of small beamlets using a Monte Carlo based method. Optimization of beamlet intensities follows, based on the calculated dose data, using an optimization algorithm capable of escaping local minima and preventing premature convergence. The MCNP 4B Monte Carlo code is improved to perform fast particle transport and dose tallying in lattice cells by adopting a selective transport and tallying algorithm. Efficient dose matrix calculation for small beamlets is made possible by adopting a scheme that allows concurrent calculation of multiple beamlets of a single port. A finite-sized point source (FSPS) beam model is introduced for easy and accurate beam modeling. A DVH-based objective function and a parallel-platform-based algorithm are developed for the optimization of intensities. The calculation accuracy of the improved MCNP code and FSPS beam model is validated by dose measurements in phantoms. Agreement better than 1.5% or 0.2 cm has been achieved. Applications of the implemented model to clinical cases of brain, head/neck, lung, spine, pancreas and prostate have demonstrated the feasibility and capability of Monte Carlo based inverse planning for clinical IMRT. Dose distributions of selected treatment plans from a commercial non-Monte Carlo based system are evaluated in comparison with Monte Carlo based calculations. Systematic errors of up to 12% in tumor doses and up to 17% in critical structure doses have been observed. The clinical importance of Monte Carlo based
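The core optimization step (adjusting nonnegative beamlet intensities so that the dose D·w matches a prescription) can be sketched with projected gradient descent on a least-squares objective. This is a toy stand-in for the paper's DVH-based objective and parallel optimizer, with illustrative names throughout:

```python
def optimize_weights(D, target, iters=500, lr=0.1):
    """Projected-gradient sketch of beamlet-weight optimization:
    minimize ||D w - target||^2 subject to w >= 0, where D[i][j] is
    the dose to voxel i per unit intensity of beamlet j."""
    n = len(D[0])
    w = [0.0] * n
    for _ in range(iters):
        resid = [sum(D[i][j] * w[j] for j in range(n)) - target[i]
                 for i in range(len(D))]
        grad = [2 * sum(D[i][j] * resid[i] for i in range(len(D)))
                for j in range(n)]
        # Gradient step, then project back onto w >= 0.
        w = [max(0.0, wj - lr * g) for wj, g in zip(w, grad)]
    return w
```

The projection enforces physically realizable (nonnegative) intensities; a clinical system replaces the quadratic objective with DVH constraints and a stochastic search to avoid the local minima mentioned above.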
Engine structures modeling software system: Computer code. User's manual
NASA Technical Reports Server (NTRS)
1992-01-01
ESMOSS is a specialized software system for the construction of geometric descriptive and discrete analytical models of engine parts, components and substructures which can be transferred to finite element analysis programs such as NASTRAN. The software architecture of ESMOSS is designed in modular form with a central executive module through which the user controls and directs the development of the analytical model. Modules consist of a geometric shape generator, a library of discretization procedures, interfacing modules to join both geometric and discrete models, a deck generator to produce input for NASTRAN and a 'recipe' processor which generates geometric models from parametric definitions. ESMOSS can be executed both in interactive and batch modes. Interactive mode is considered to be the default mode and that mode will be assumed in the discussion in this document unless stated otherwise.
Comparing the line broadened quasilinear model to Vlasov code
Ghantous, K.; Berk, H. L.; Gorelenkov, N. N.
2014-03-15
The Line Broadened Quasilinear (LBQ) model is revisited to study its predicted saturation level as compared with predictions of the Vlasov solver BOT [Lilley et al., Phys. Rev. Lett. 102, 195003 (2009); M. Lilley, BOT Manual]. The parametric dependencies of the model are modified to achieve more accuracy compared to the results of the Vlasov solver, both with regard to a mode amplitude's time evolution to a saturated state and its final steady state amplitude within the parameter space of the model's applicability. However, the regions of stability as predicted by the LBQ model and BOT are found to differ significantly from each other. The solutions of the BOT simulations are found to have a larger region of instability than the LBQ simulations.
Comparing the line broadened quasilinear model to Vlasov code
NASA Astrophysics Data System (ADS)
Ghantous, K.; Berk, H. L.; Gorelenkov, N. N.
2014-03-01
The Line Broadened Quasilinear (LBQ) model is revisited to study its predicted saturation level as compared with predictions of the Vlasov solver BOT [Lilley et al., Phys. Rev. Lett. 102, 195003 (2009); M. Lilley, BOT Manual]. The parametric dependencies of the model are modified to achieve more accuracy compared to the results of the Vlasov solver, both with regard to a mode amplitude's time evolution to a saturated state and its final steady state amplitude within the parameter space of the model's applicability. However, the regions of stability as predicted by the LBQ model and BOT are found to differ significantly from each other. The solutions of the BOT simulations are found to have a larger region of instability than the LBQ simulations.
New higher-order Godunov code for modelling performance of two-stage light gas guns
NASA Technical Reports Server (NTRS)
Bogdanoff, D. W.; Miller, R. J.
1995-01-01
A new quasi-one-dimensional Godunov code for modeling two-stage light gas guns is described. The code is third-order accurate in space and second-order accurate in time. A very accurate Riemann solver is used. Friction and heat transfer to the tube wall for gases and dense media are modeled and a simple nonequilibrium turbulence model is used for gas flows. The code also models gunpowder burn in the first-stage breech. Realistic equations of state (EOS) are used for all media. The code was validated against exact solutions of Riemann's shock-tube problem, impact of dense media slabs at velocities up to 20 km/sec, flow through a supersonic convergent-divergent nozzle and burning of gunpowder in a closed bomb. Excellent validation results were obtained. The code was then used to predict the performance of two light gas guns (1.5 in. and 0.28 in.) in service at the Ames Research Center. The code predictions were compared with measured pressure histories in the powder chamber and pump tube and with measured piston and projectile velocities. Very good agreement between computational fluid dynamics (CFD) predictions and measurements was obtained. Actual powder-burn rates in the gun were found to be considerably higher (60-90 percent) than predicted by the manufacturer and the behavior of the piston upon yielding appears to differ greatly from that suggested by low-strain rate tests.
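The Godunov update at the heart of such a code advances each cell with interface fluxes taken from local Riemann solutions. A minimal first-order sketch for linear advection (far simpler than the higher-order quasi-1D gun code, but the same flux-differencing structure):

```python
def godunov_advection(u, a, dx, dt, steps):
    """First-order Godunov scheme for u_t + a u_x = 0 with a > 0.
    The exact interface Riemann solution is the upwind state, so the
    flux at the left face of cell i is a * u[i-1]; periodic boundaries
    come free from Python's negative indexing."""
    c = a * dt / dx  # CFL number; stable for c <= 1
    for _ in range(steps):
        u = [u[i] - c * (u[i] - u[i - 1]) for i in range(len(u))]
    return u
```

At c = 1 the scheme transports the profile exactly one cell per step; the gun code adds third-order spatial reconstruction, a general-EOS Riemann solver, and source terms for friction, heat transfer, and powder burn.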
Time domain analysis of the weighted distributed order rheological model
NASA Astrophysics Data System (ADS)
Cao, Lili; Pu, Hai; Li, Yan; Li, Ming
2016-05-01
This paper presents the fundamental solution and relevant properties of the weighted distributed order rheological model in the time domain. Based on the construction of a distributed order damper and the idea of distributed order element networks, the paper studies the weighted distributed order operator of the rheological model, a generalization of the distributed order linear rheological model. The inverse Laplace transform of the weighted distributed order operators is obtained by cutting the complex plane and computing the complex path integral along the Hankel path, which leads to discussions of asymptotic properties and boundary behavior. The relaxation response of the weighted distributed order rheological model is analyzed; it is closely related to many physical phenomena. A number of novel characteristics of the model, such as power-law decay and an intermediate phenomenon, are uncovered, and several illustrative examples validate these results.
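For orientation, an illustrative general form of a weighted distributed order derivative (a Caputo form is assumed here; the authors' exact operator may differ) averages fractional derivatives over the order:

```latex
% Illustrative weighted distributed-order operator (Caputo form assumed):
\mathbb{D}^{w}_{t} f(t) \;=\; \int_{0}^{1} w(\alpha)\, {}^{C}\!D^{\alpha}_{t} f(t)\, \mathrm{d}\alpha ,
\qquad w(\alpha)\ge 0, \quad \int_{0}^{1} w(\alpha)\,\mathrm{d}\alpha = 1 .
% With f(0)=0, the Laplace transform multiplies F(s) by
% B(s) = \int_{0}^{1} w(\alpha)\, s^{\alpha}\, \mathrm{d}\alpha ,
% and inverting 1/B(s) requires a branch cut of the complex plane,
% with the integral evaluated along the Hankel path as in the paper.
```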
Ho, C.K.; Altman, S.J.; Arnold, B.W.
1995-09-01
Groundwater travel time (GWTT) calculations will play an important role in addressing site-suitability criteria for the potential high-level nuclear waste repository at Yucca Mountain, Nevada. In support of these calculations, preliminary assessments of the candidate codes and models are presented in this report. A series of benchmark studies has been designed to address important aspects of modeling flow through fractured media representative of flow at Yucca Mountain. Three codes (DUAL, FEHMN, and TOUGH2) are compared in these benchmark studies. DUAL is a single-phase, isothermal, two-dimensional flow simulator based on the dual mixed finite element method. FEHMN is a nonisothermal, multiphase, multidimensional simulator based primarily on the finite element method. TOUGH2 is a nonisothermal, multiphase, multidimensional simulator based on the integral finite difference method. Alternative conceptual models of fracture flow, consisting of the equivalent continuum model (ECM) and the dual permeability (DK) model, are used in the different codes.
Recommendations for computer modeling codes to support the UMTRA groundwater restoration project
Tucker, M.D.; Khan, M.A.
1996-04-01
The Uranium Mill Tailings Remedial Action (UMTRA) Project is responsible for the assessment and remedial action at the 24 former uranium mill tailings sites located in the US. The surface restoration phase, which includes containment and stabilization of the abandoned uranium mill tailings piles, has a specific termination date and is nearing completion. Therefore, attention has now turned to the groundwater restoration phase, which began in 1991. Regulated constituents in groundwater whose concentrations or activities exceed maximum contaminant levels (MCLs) or background levels at one or more sites include, but are not limited to, uranium, selenium, arsenic, molybdenum, nitrate, gross alpha, radium-226 and radium-228. The purpose of this report is to recommend computer codes that can be used to assist the UMTRA groundwater restoration effort. The report includes a survey of applicable codes in each of the following areas: (1) groundwater flow and contaminant transport modeling codes, (2) hydrogeochemical modeling codes, (3) pump and treat optimization codes, and (4) decision support tools. Following the survey of the applicable codes, specific codes that can best meet the needs of the UMTRA groundwater restoration program in each of the four areas are recommended.
A smooth particle hydrodynamics code to model collisions between solid, self-gravitating objects
NASA Astrophysics Data System (ADS)
Schäfer, C.; Riecker, S.; Maindl, T. I.; Speith, R.; Scherrer, S.; Kley, W.
2016-05-01
Context. Modern graphics processing units (GPUs) lead to a major increase in the performance of the computation of astrophysical simulations. Owing to the different nature of GPU architecture compared to traditional central processing units (CPUs) such as x86 architecture, existing numerical codes cannot be easily migrated to run on GPU. Here, we present a new implementation of the numerical method smooth particle hydrodynamics (SPH) using CUDA and the first astrophysical application of the new code: the collision between Ceres-sized objects. Aims: The new code allows for a tremendous increase in speed of astrophysical simulations with SPH and self-gravity at low costs for new hardware. Methods: We have implemented the SPH equations to model gas, liquids and elastic, and plastic solid bodies and added a fragmentation model for brittle materials. Self-gravity may be optionally included in the simulations and is treated by the use of a Barnes-Hut tree. Results: We find an impressive performance gain using NVIDIA consumer devices compared to our existing OpenMP code. The new code is freely available to the community upon request. If you are interested in our CUDA SPH code miluphCUDA, please write an email to Christoph Schäfer. miluphCUDA is the CUDA port of miluph. miluph is pronounced [maßl2v]. We do not support the use of the code for military purposes.
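For readers unfamiliar with SPH, all field quantities are built from a smoothing kernel. A common choice is the 3D cubic spline (illustrative only; this abstract does not state which kernel miluphCUDA actually uses):

```python
import numpy as np

def cubic_spline_kernel(r, h):
    """Standard 3D cubic-spline SPH kernel W(r, h) with support radius h.

    Illustrative textbook form (Monaghan-style); the kernel used in
    miluphCUDA may differ. Normalized so that the 3D volume integral is 1.
    """
    sigma = 8.0 / (np.pi * h**3)  # 3D normalization constant
    q = np.asarray(r, dtype=float) / h
    w = np.where(q <= 0.5,
                 6.0 * (q**3 - q**2) + 1.0,
                 np.where(q <= 1.0, 2.0 * (1.0 - q)**3, 0.0))
    return sigma * w
```

The kernel is C^1-continuous at q = 1/2 (both branches give 0.25 there) and vanishes outside r = h, which is what makes neighbor searches (and GPU parallelization over neighbor lists) efficient.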
ABAREX -- A neutron spherical optical-statistical-model code -- A user's manual
Smith, A.B.; Lawson, R.D.
1998-06-01
The contemporary version of the neutron spherical optical-statistical-model code ABAREX is summarized with the objective of providing detailed operational guidance for the user. The physical concepts involved are very briefly outlined. The code is described in some detail and a number of explicit examples are given. With this document one should very quickly become fluent with the use of ABAREX. While the code has operated on a number of computing systems, this version is specifically tailored for the VAX/VMS workstation and/or the IBM-compatible personal computer.
Slaughter, D.
1985-03-01
A computer code is described which estimates the energy spectrum or "line shape" of the charged particles and gamma rays produced by the fusion of low-Z ions in a hot plasma. The simulation has several built-in ion velocity distributions characteristic of heated plasmas, and it also accepts arbitrary speed and angular distributions, provided they are symmetric about the z-axis. An energy spectrum of one of the reaction products (ion, neutron, or gamma ray) is calculated at one angle with respect to the symmetry axis. The results are shown in tabular form and plotted graphically, and the moments of the spectrum up to order ten are calculated both about the origin and about the mean.
Lee, Joon H.; Siegel, Malcolm Dean; Arguello, Jose Guadalupe, Jr.; Webb, Stephen Walter; Dewers, Thomas A.; Mariner, Paul E.; Edwards, Harold Carter; Fuller, Timothy J.; Freeze, Geoffrey A.; Jove-Colon, Carlos F.; Wang, Yifeng
2011-03-01
This report describes a gap analysis performed in the process of developing the Waste Integrated Performance and Safety Codes (IPSC) in support of the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The goal of the Waste IPSC is to develop an integrated suite of computational modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with rigorous verification, validation, and software quality requirements. The gap analyses documented in this report were performed during an initial gap analysis to identify candidate codes and tools to support the development and integration of the Waste IPSC, and during follow-on activities that delved into more detailed assessments of the various codes that were acquired, studied, and tested. The current Waste IPSC strategy is to acquire and integrate the necessary Waste IPSC capabilities wherever feasible, and to develop only those capabilities that cannot be acquired or suitably integrated, verified, or validated. The gap analysis indicates that significant capabilities may already exist in existing THC codes, although no single code is able to fully account for all physical and chemical processes involved in a waste disposal system. Large gaps exist in modeling chemical processes and their couplings with other processes. The coupling of chemical processes with flow transport and mechanical deformation remains challenging. The data for extreme environments (e.g., for elevated temperature and high ionic strength media) that are
NASA Astrophysics Data System (ADS)
Segura, Christopher L.
Numerical simulation tools capable of modeling nonlinear material and geometric behavior are important to structural engineers concerned with approximating the strength and deformation capacity of a structure. While structures are typically designed to remain linear elastic when subjected to building code design loads, exceedance of the linear elastic range is often an important consideration, especially with regard to structural response during hazard level events (e.g., earthquakes, hurricanes, floods), where collapse prevention is the primary goal. This thesis addresses developments made to Mercury, a nonlinear finite element program developed in MATLAB for numerical simulation and in C++ for real-time hybrid simulation. Developments include the addition of three new constitutive models to extend Mercury's lumped plasticity modeling capabilities, a constitutive driver tool for testing and implementing Mercury constitutive models, and Mercury pre- and post-processing tools. Mercury has been developed as a tool for transient analysis of distributed plasticity models, offering accurate nonlinear results on the material, element, and structural levels. When only structural level response is desired (collapse prevention), obtaining material level results leads to unnecessarily lengthy computational time. To address this issue in Mercury, lumped plasticity capabilities are developed by implementing two lumped plasticity flexural response constitutive models and a column shear failure constitutive model. The models are chosen for implementation to address two critical issues evident in structural testing: column shear failure, and strength and stiffness degradation under reverse cyclic loading. These tools make it possible to model post-peak behavior, capture strength and stiffness degradation, and predict global collapse. During the implementation process, a need was identified to create a simple program, separate from Mercury, to simplify the process of
Improvements of the Radiation Code "MstrnX" in AORI/NIES/JAMSTEC Models
NASA Astrophysics Data System (ADS)
Sekiguchi, M.; Suzuki, K.; Takemura, T.; Watanabe, M.; Ogura, T.
2015-12-01
There is a large demand for an accurate yet rapid radiative transfer scheme for general climate models. The broadband radiative transfer code "mstrnX" was developed by the Atmosphere and Ocean Research Institute (AORI) and is implemented in several global and regional climate models developed cooperatively in the Japanese research community, for example MIROC (the Model for Interdisciplinary Research on Climate) [Watanabe et al., 2010], NICAM (Non-hydrostatic Icosahedral Atmospheric Model) [Satoh et al., 2008], and CReSS (Cloud Resolving Storm Simulator) [Tsuboki and Sakakibara, 2002]. In this study, we improve the gas absorption process and the scattering process of ice particles. For the gas absorption update, the absorption line database is replaced by the latest version from the Harvard-Smithsonian Center, HITRAN2012. An optimization method is adopted in mstrnX to decrease the number of integration points for the wavenumber integration using the correlated k-distribution method and to increase the computational efficiency in each band. The integration points and weights of the correlated k-distribution are optimized for accurate calculation of the heating rate up to an altitude of 70 km. For this purpose we adopted a new nonlinear optimization method for the correlated k-distribution and studied an optimal initial condition and cost function for the nonlinear optimization. It is known that mstrnX has a considerable bias in the case of quadrupled carbon dioxide concentrations [Pincus et al., 2015]; this bias is decreased by the improvement. For the ice-scattering update, we adopt a solid column as the ice crystal habit [Yang et al., 2013]. The single scattering properties are calculated and tabulated in advance. The size parameter in this table originally ranged from 0.1 to 1000 in mstrnX; we expand the maximum to 50000 in order to cover large particles such as fog and rain drops. These updates will be introduced to
NASA Astrophysics Data System (ADS)
Yoshida, Hiroyuki; Misawa, Takeharu; Takase, Kazuyuki
The two-fluid model can simulate two-phase flow at lower computational cost than detailed two-phase flow simulation methods such as interface tracking or particle interaction methods. The two-fluid model is therefore useful for thermal hydraulic analysis in large-scale domains such as a rod bundle. The Japan Atomic Energy Agency (JAEA) has developed the three-dimensional two-fluid model analysis code ACE-3D, which adopts a boundary-fitted coordinate system in order to simulate flow channels of complex shape. In this paper, a boiling two-phase flow analysis in a tight-lattice rod bundle was performed with the ACE-3D code, using parallel computation on 126 CPUs. In the results, the void fraction in the outermost region of the rod bundle is lower than that in the center region. The tendency of the void fraction distribution agrees qualitatively with measurements by neutron radiography. To evaluate the effects of the two-phase flow models used in ACE-3D, a simulation of boiling two-phase flow in the tight-lattice rod bundle without the lift force model was also performed. From the comparison of the calculated results, it was concluded that the effect of the lift force model is not large for the overall void fraction distribution of the tight-lattice rod bundle; however, the lift force model is important for the local void fraction distribution of the fuel bundles.
Modeling of tungsten transport in the linear plasma device PSI-2 with the 3D Monte-Carlo code ERO
NASA Astrophysics Data System (ADS)
Marenkov, E.; Eksaeva, A.; Borodin, D.; Kirschner, A.; Laengner, M.; Kurnaev, V.; Kreter, A.; Coenen, J. W.; Rasinski, M.
2015-08-01
The ERO code was modified for modeling of plasma-surface interactions and impurities transport in the PSI-2 installation. Results of experiments on tungsten target irradiation with argon plasma were taken as a benchmark for the new version of the code. Spectroscopy data modeled with the code are in good agreement with experimental ones. Main factors contributing to observed discrepancies are discussed.
A simple model for induction core voltage distributions
Briggs, Richard J.; Fawley, William M.
2004-07-01
In fall 2003 T. Hughes of MRC used a full EM simulation code (LSP) to show that the electric field stress distribution near the outer radius of the longitudinal gaps between the four Metglas induction cores is very nonuniform in the original design of the DARHT-2 accelerator cells. In this note we derive a simple model of the electric field distribution in the induction core region to provide physical insights into this result. The starting point in formulating our model is to recognize that the electromagnetic fields in the induction core region of the DARHT-2 accelerator cells should be accurately represented within a quasi-static approximation, because the timescale for the fields to change is much longer than the EM wave propagation time. The difficulty one faces is the fact that the electric field is a mixture of both a "quasi-magnetostatic field" (having a nonzero curl, with ∂B/∂t as the source) and a "quasi-electrostatic field" (the source being electric charges on the various metal surfaces). We first discuss the EM field structure on the "micro-scale" of individual tape windings in Section 2. The insights from that discussion are then used to formulate a "macroscopic" description of the fields inside an "equivalent homogeneous tape wound core region" in Section 3. This formulation explicitly separates the nonlinear core magnetics from the quasi-electrostatic components of the electric field. In Section 4 a physical interpretation of the radial dependence of the electrostatic component of the electric field derived from this model is presented in terms of distributed capacitances, and the voltage distribution from gap to gap is related to various "equivalent" lumped capacitances. Analytic solutions of several simple multi-core cases are presented in Sections 5 and 6 to help provide physical insight into the effect of various proposed changes in the geometrical parameters of the DARHT-2 accelerator cell. Our results show that over most of the gap
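The lumped-capacitance picture of the gap-to-gap voltage division can be illustrated with a toy ladder network: gap nodes chained by series capacitances, each shunted to ground. All values below are hypothetical, not DARHT-2 parameters. For a purely capacitive network the factor jω cancels from the nodal equations, so capacitances enter exactly like conductances:

```python
import numpy as np

def gap_voltages(n, c_series, c_shunt, v_drive):
    """Node voltages of a toy capacitive ladder (hypothetical values).

    n nodes are chained by series capacitances c_series; node 0 couples
    to the driven electrode (at v_drive) and node n-1 to ground, each
    through c_series, and every node is shunted to ground by c_shunt.
    jw cancels in a purely capacitive network, so C acts as conductance.
    """
    G = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n):
        G[i, i] = 2.0 * c_series + c_shunt
        if i > 0:
            G[i, i - 1] = -c_series
        if i < n - 1:
            G[i, i + 1] = -c_series
    b[0] = c_series * v_drive  # injection from the driven electrode
    return np.linalg.solve(G, b)
```

With c_shunt = 0 the voltage divides uniformly along the chain; any nonzero shunt capacitance makes the division nonuniform, sagging toward the grounded end, which is the qualitative effect the note analyzes.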
Carbajo, Juan; Jeong, Hae-Yong; Wigeland, Roald; Corradini, Michael; Schmidt, Rodney Cannon; Thomas, Justin; Wei, Tom; Sofu, Tanju; Ludewig, Hans; Tobita, Yoshiharu; Ohshima, Hiroyuki; Serre, Frederic
2011-06-01
This report summarizes the results of an expert-opinion elicitation activity designed to qualitatively assess the status and capabilities of currently available computer codes and models for accident analysis and reactor safety calculations of advanced sodium fast reactors, and identify important gaps. The twelve-member panel consisted of representatives from five U.S. National Laboratories (SNL, ANL, INL, ORNL, and BNL), the University of Wisconsin, the KAERI, the JAEA, and the CEA. The major portion of this elicitation activity occurred during a two-day meeting held on Aug. 10-11, 2010 at Argonne National Laboratory. There were two primary objectives of this work: (1) Identify computer codes currently available for SFR accident analysis and reactor safety calculations; and (2) Assess the status and capability of current US computer codes to adequately model the required accident scenarios and associated phenomena, and identify important gaps. During the review, panel members identified over 60 computer codes that are currently available in the international community to perform different aspects of SFR safety analysis for various event scenarios and accident categories. A brief description of each of these codes together with references (when available) is provided. An adaptation of the Predictive Capability Maturity Model (PCMM) for computational modeling and simulation is described for use in this work. The panel's assessment of the available US codes is presented in the form of nine tables, organized into groups of three for each of three risk categories considered: anticipated operational occurrences (AOOs), design basis accidents (DBA), and beyond design basis accidents (BDBA). A set of summary conclusions are drawn from the results obtained. At the highest level, the panel judged that current US code capabilities are adequate for licensing given reasonable margins, but expressed concern that US code development activities had stagnated and that the
Radiation transport phenomena and modeling. Part A: Codes; Part B: Applications with examples
Lorence, L.J. Jr.; Beutler, D.E.
1997-09-01
This report contains the notes from the second session of the 1997 IEEE Nuclear and Space Radiation Effects Conference Short Course on Applying Computer Simulation Tools to Radiation Effects Problems. Part A discusses the physical phenomena modeled in radiation transport codes and various types of algorithmic implementations. Part B gives examples of how these codes can be used to design experiments whose results can be easily analyzed and describes how to calculate quantities of interest for electronic devices.
Nichols, B.D.; Mueller, C.; Necker, G.A.; Travis, J.R.; Spore, J.W.; Lam, K.L.; Royl, P.; Redlinger, R.; Wilson, T.L.
1998-10-01
Los Alamos National Laboratory (LANL) and Forschungszentrum Karlsruhe (FzK) are developing GASFLOW, a three-dimensional (3D) fluid dynamics field code as a best-estimate tool to characterize local phenomena within a flow field. Examples of 3D phenomena include circulation patterns; flow stratification; hydrogen distribution mixing and stratification; combustion and flame propagation; effects of noncondensable gas distribution on local condensation and evaporation; and aerosol entrainment, transport, and deposition. An analysis with GASFLOW will result in a prediction of the gas composition and discrete particle distribution in space and time throughout the facility and the resulting pressure and temperature loadings on the walls and internal structures with or without combustion. A major application of GASFLOW is for predicting the transport, mixing, and combustion of hydrogen and other gases in nuclear reactor containments and other facilities. It has been applied to situations involving transporting and distributing combustible gas mixtures. It has been used to study gas dynamic behavior (1) in low-speed, buoyancy-driven flows, as well as sonic flows or diffusion dominated flows; and (2) during chemically reacting flows, including deflagrations. The effects of controlling such mixtures by safety systems can be analyzed. The code version described in this manual is designated GASFLOW 2.1, which combines previous versions of the United States Nuclear Regulatory Commission code HMS (for Hydrogen Mixing Studies) and the Department of Energy and FzK versions of GASFLOW. The code was written in standard Fortran 90. This manual comprises three volumes. Volume I describes the governing physical equations and computational model. Volume II describes how to use the code to set up a model geometry, specify gas species and material properties, define initial and boundary conditions, and specify different outputs, especially graphical displays. Sample problems are included
ERIC Educational Resources Information Center
Evans, Michael A.; Feenstra, Eliot; Ryon, Emily; McNeill, David
2011-01-01
Our research aims to identify children's communicative strategies when faced with the task of solving a geometric puzzle in CSCL contexts. We investigated how to identify and trace "distributed cognition" in problem-solving interactions based on discursive cohesion to objects, participants, and prior discursive content, and geometric and…
1991-01-25
Version 00 TPHEX calculates the multigroup neutron flux distribution in an assembly of hexagonal cells using a transmission probability (interface current) method. It is primarily intended for calculations on hexagonal LWR fuel assemblies but can be used for other purposes subject to the qualifications mentioned in Restrictions/Limitations.
Sharing phenotypic data: a coding system and a developmental model
Technology Transfer Automated Retrieval System (TEKTRAN)
Medicago truncatula is used worldwide as a model legume plant. A striking number of papers from numerous laboratories have been published on M. truncatula genomics. Topics range from whole genome transcript profiling to molecular mapping of traits. However, a detailed growth analysis has not been pe...
A distribution model for the aerial application of granular agricultural particles
NASA Technical Reports Server (NTRS)
Fernandes, S. T.; Ormsbee, A. I.
1978-01-01
A model is developed to predict the shape of the distribution of granular agricultural particles applied by aircraft. The particle is assumed to have a random size and shape, and the model includes the effects of air resistance, distributor geometry, and aircraft wake. General requirements for maintaining similarity of the distribution in scale model tests are derived, with particular attention to the problem of a nongeneral drag law. It is shown that if the mean and variance of the particle diameter and density are scaled according to the scaling laws governing the system, the shape of the distribution will be preserved. Distributions are calculated numerically and show the effect of a random initial lateral position, particle size, and drag coefficient. A listing of the computer code is included.
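As a minimal illustration of the underlying ballistics (not the paper's model, which treats size, density, and drag as random variables and adds distributor and wake effects), a single spherical granule with quadratic drag can be integrated with explicit Euler steps; all parameter values here are assumed for illustration:

```python
import math

def drop_distance(v0, h0, diameter, density, cd=0.44, dt=1e-3,
                  rho_air=1.225, g=9.81):
    """Horizontal distance (m) travelled by one spherical granule
    released at height h0 (m) from an aircraft moving at v0 (m/s).

    Quadratic drag with constant drag coefficient cd; explicit Euler
    integration. Illustrative sketch only, not the paper's model.
    """
    area = math.pi * (diameter / 2.0) ** 2        # frontal area
    mass = density * (math.pi / 6.0) * diameter ** 3
    x, y, vx, vy = 0.0, h0, v0, 0.0
    while y > 0.0:
        speed = math.hypot(vx, vy)
        drag = 0.5 * rho_air * cd * area * speed  # |F_drag| / |v|
        vx += (-drag * vx / mass) * dt
        vy += (-g - drag * vy / mass) * dt
        x += vx * dt
        y += vy * dt
    return x
```

Repeating this with diameter and density drawn from random distributions is what produces the deposition pattern whose shape the paper's scaling laws preserve.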
Stochastic Models for the Distribution of Index Terms.
ERIC Educational Resources Information Center
Nelson, Michael J.
1989-01-01
Presents a probability model of the occurrence of index terms used to derive discrete distributions which are mixtures of Poisson and negative binomial distributions. These distributions give better fits than the simpler Zipf distribution, have the advantage of being more explanatory, and can incorporate a time parameter if necessary. (25…
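The mixture construction in the abstract is the standard route to the negative binomial: a Poisson count whose rate is itself Gamma-distributed is marginally negative binomial. A small simulation sketch (parameter values are illustrative, not from the article):

```python
import numpy as np

rng = np.random.default_rng(42)

# NB(r, p) arises as a Poisson whose rate is Gamma(r, scale=(1-p)/p).
r_shape, p = 2.0, 0.4
n = 200_000

lam = rng.gamma(shape=r_shape, scale=(1 - p) / p, size=n)
mixed = rng.poisson(lam)                             # Gamma-mixed Poisson
direct = rng.negative_binomial(r_shape, p, size=n)   # NB drawn directly

# Both samples share mean r*(1-p)/p = 3.0 and variance r*(1-p)/p**2 = 7.5,
# i.e. variance > mean: the overdispersion a plain Poisson cannot fit.
print(mixed.mean(), direct.mean())
```

The overdispersion (variance exceeding the mean) is exactly why such mixtures fit index-term frequencies better than a single Poisson or a Zipf law.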
Large Scale Earth's Bow Shock with Northern IMF as Simulated by PIC Code in Parallel with MHD Model
NASA Astrophysics Data System (ADS)
Baraka, Suleiman
2016-06-01
In this paper, we propose a 3D kinetic (particle-in-cell, PIC) model for the description of the large-scale Earth's bow shock. The proposed version is stable and does not require extensive computer resources. Because PIC simulations work with scaled plasma and field parameters, we also propose to validate our code by comparing its results with available MHD simulations under the same scaled solar wind (SW) and interplanetary magnetic field (IMF) conditions. We report new results from the two models. In both codes the Earth's bow shock position is found to be ≈14.8 R_E along the Sun-Earth line, and ≈29 R_E on the dusk side; these findings are consistent with past in situ observations. Both simulations reproduce the theoretical jump conditions at the shock. However, the PIC code density and temperature distributions are inflated and slightly shifted sunward when compared to the MHD results; kinetic electron motions and reflected ions upstream may cause this sunward shift. Species distributions in the foreshock region are depicted within the shock transition (measured ≈2 c/ω_pi for Θ_Bn = 90° and M_MS = 4.7) and in the downstream. The size of the foot jump in the magnetic field at the shock is measured to be 1.7 c/ω_pi. In the foreshock region, the thermal velocity is found to be 213 km s^-1 at 15 R_E, and 63 km s^-1 at 12 R_E (magnetosheath region). Despite the large cell size of the current version of the PIC code, it retains the macrostructure of planetary magnetospheres in a very short computation time, so it can be used for pedagogical test purposes. It is also complementary with MHD for deepening our understanding of the large-scale magnetosphere.
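The shock-thickness scale quoted above, c/ω_pi, is the ion inertial length. A quick sketch of its evaluation for a typical solar wind proton density (the density value is an assumption, not taken from the paper):

```python
import math

# Physical constants (SI)
C = 2.998e8      # speed of light, m/s
E = 1.602e-19    # elementary charge, C
M_P = 1.673e-27  # proton mass, kg
EPS0 = 8.854e-12 # vacuum permittivity, F/m

def ion_inertial_length(n_per_cc):
    """Ion inertial length c/omega_pi (m) for proton density in cm^-3.

    omega_pi = sqrt(n e^2 / (eps0 m_p)). For a typical solar wind
    density of 5 cm^-3 this is roughly 100 km, so the ~2 c/omega_pi
    shock transition quoted above corresponds to a few hundred km.
    """
    n = n_per_cc * 1e6  # convert cm^-3 to m^-3
    omega_pi = math.sqrt(n * E**2 / (EPS0 * M_P))
    return C / omega_pi
```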
Modeling of BWR core meltdown accidents - for application in the MELRPI.MOD2 computer code
Koh, B R; Kim, S H; Taleyarkhan, R P; Podowski, M Z; Lahey, Jr, R T
1985-04-01
This report summarizes improvements and modifications made in the MELRPI computer code. A major difference between this new, updated version of the code, called MELRPI.MOD2, and the one reported previously, concerns the inclusion of a model for the BWR emergency core cooling systems (ECCS). This model and its computer implementation, the ECCRPI subroutine, account for various emergency injection modes, for both intact and rubblized geometries. Other changes to MELRPI deal with an improved model for canister wall oxidation, rubble bed modeling, and numerical integration of system equations. A complete documentation of the entire MELRPI.MOD2 code is also given, including an input guide, list of subroutines, sample input/output and program listing.
Pre-engineering Spaceflight Validation of Environmental Models and the 2005 HZETRN Simulation Code
NASA Technical Reports Server (NTRS)
Nealy, John E.; Cucinotta, Francis A.; Wilson, John W.; Badavi, Francis F.; Dachev, Ts. P.; Tomov, B. T.; Walker, Steven A.; DeAngelis, Giovanni; Blattnig, Steve R.; Atwell, William
2006-01-01
The HZETRN code has been identified by NASA for engineering design in the next phase of space exploration, highlighting a return to the Moon in preparation for a Mars mission. In response, a new series of algorithms, beginning with 2005 HZETRN, will be issued, correcting some prior limitations and improving control of propagated errors, along with established code verification processes. Code validation will use new and improved low Earth orbit (LEO) environmental models together with a recently improved International Space Station (ISS) shield model to validate computational models and procedures against measured data aboard the ISS. These validated models will provide a basis for flight-testing the designs of future space vehicles and systems of the Constellation program in the LEO environment.
The Nuremberg Code subverts human health and safety by requiring animal modeling
2012-01-01
Background The requirement that animals be used in research and testing in order to protect humans was formalized in the Nuremberg Code and subsequent national and international laws, codes, and declarations. Discussion We review the history of these requirements and contrast what was known via science about animal models then with what is known now. We further analyze the predictive value of animal models when used as test subjects for human response to drugs and disease. We explore the use of animals for models in toxicity testing as an example of the problem with using animal models. Summary We conclude that the requirements for animal testing found in the Nuremberg Code were based on scientifically outdated principles, compromised by people with a vested interest in animal experimentation, serve no useful function, increase the cost of drug development, and prevent otherwise safe and efficacious drugs and therapies from being implemented. PMID:22769234
Potential capabilities of Reynolds stress turbulence model in the COMMIX-RSM code
NASA Technical Reports Server (NTRS)
Chang, F. C.; Bottoni, M.
1994-01-01
A Reynolds stress turbulence model has been implemented in the COMMIX code, together with transport equations describing turbulent heat fluxes, variance of temperature fluctuations, and dissipation of turbulence kinetic energy. The model has been verified partially by simulating homogeneous turbulent shear flow, and stable and unstable stratified shear flows with strong buoyancy-suppressing or enhancing turbulence. This article outlines the model, explains the verifications performed thus far, and discusses potential applications of the COMMIX-RSM code in several domains, including, but not limited to, analysis of thermal striping in engineering systems, simulation of turbulence in combustors, and predictions of bubbly and particulate flows.
Modeling Code Is Helping Cleveland Develop New Products
NASA Technical Reports Server (NTRS)
1998-01-01
Master Builders, Inc., is a 350-person company in Cleveland, Ohio, that develops and markets specialty chemicals for the construction industry. Developing new products involves creating many potential samples and running numerous tests to characterize the samples' performance. Company engineers enlisted NASA's help to replace cumbersome physical testing with computer modeling of the samples' behavior. Since the NASA Lewis Research Center's Structures Division develops mathematical models and associated computation tools to analyze the deformation and failure of composite materials, its researchers began a two-phase effort to modify Lewis' Integrated Composite Analyzer (ICAN) software for Master Builders' use. Phase I has been completed, and Master Builders is pleased with the results. The company is now working to begin implementation of Phase II.
The values distribution in a competing shares financial market model
NASA Astrophysics Data System (ADS)
Ponzi, A.; Aizawa, Y.
2000-06-01
We present our competing shares financial market model and describe its behavior by numerical simulation. We show that in the critical region the distribution of avalanches of the market value, as defined in this model, has a power-law distribution with exponent around 2.3. In this region the price returns distribution is a truncated Levy stable distribution.
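A power-law exponent like the ~2.3 quoted above can be estimated from avalanche sizes with a standard maximum-likelihood (Hill) estimator; a minimal sketch, using synthetic Pareto-distributed sizes rather than the model's own output:

```python
import numpy as np

def powerlaw_exponent(sizes, s_min=1.0):
    """Maximum-likelihood (Hill) estimate of alpha for P(s) ~ s^-alpha,
    using only avalanches with s >= s_min."""
    s = np.asarray(sizes, dtype=float)
    s = s[s >= s_min]
    return 1.0 + s.size / np.log(s / s_min).sum()

# Synthetic avalanche sizes drawn from P(s) ~ s^-2.3 (s >= 1)
# via inverse-transform sampling: s = (1 - u)^(-1/(alpha - 1))
rng = np.random.default_rng(0)
u = rng.random(100_000)
alpha_true = 2.3
sizes = (1.0 - u) ** (-1.0 / (alpha_true - 1.0))

print(powerlaw_exponent(sizes))  # close to 2.3
```

The estimator's standard error scales as (alpha - 1)/sqrt(n), so with 10^5 samples the estimate is tight to a few parts per thousand.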
Atomic hydrogen distribution. [in Titan atmospheric model
NASA Technical Reports Server (NTRS)
Tabarie, N.
1974-01-01
Several possible H2 vertical distributions in Titan's atmosphere are considered under the constraint of a total quantity of 5 km-A. Approximate calculations show that the hydrogen distribution is quite sensitive to two other parameters of Titan's atmosphere: the temperature and the presence of other constituents. The escape fluxes of H and H2 are also estimated, as well as the consequent distributions trapped in the Saturnian system.
Metabolic Free Energy and Biological Codes: A 'Data Rate Theorem' Aging Model.
Wallace, Rodrick
2015-06-01
A famous argument by Maturana and Varela (Autopoiesis and cognition. Reidel, Dordrecht, 1980) holds that the living state is cognitive at every scale and level of organization. Since it is possible to associate many cognitive processes with 'dual' information sources, pathologies can sometimes be addressed using statistical models based on the Shannon Coding, the Shannon-McMillan Source Coding, the Rate Distortion, and the Data Rate Theorems, which impose necessary conditions on information transmission and system control. Deterministic-but-for-error biological codes do not directly invoke cognition, but may be essential subcomponents within larger cognitive processes. A formal argument, however, places such codes within a similar framework, with metabolic free energy serving as a 'control signal' stabilizing biochemical code-and-translator dynamics in the presence of noise. Demand beyond available energy supply triggers punctuated destabilization of the coding channel, affecting essential biological functions. Aging, normal or prematurely driven by psychosocial or environmental stressors, must interfere with the routine operation of such mechanisms, initiating the chronic diseases associated with senescence. Amyloid fibril formation, intrinsically disordered protein logic gates, and cell surface glycan/lectin 'kelp bed' logic gates are reviewed from this perspective. The results generalize beyond coding machineries having easily recognizable symmetry modes, and strip a layer of mathematical complication from the study of phase transitions in nonequilibrium biological systems. PMID:25185747
The Role of Coding Time in Estimating and Interpreting Growth Curve Models.
ERIC Educational Resources Information Center
Biesanz, Jeremy C.; Deeb-Sossa, Natalia; Papadakis, Alison A.; Bollen, Kenneth A.; Curran, Patrick J.
2004-01-01
The coding of time in growth curve models has important implications for the interpretation of the resulting model that are sometimes not transparent. The authors develop a general framework that includes predictors of growth curve components to illustrate how parameter estimates and their standard errors are exactly determined as a function of…
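The dependence of growth-curve parameter estimates on the coding of time can be seen with plain least squares; a minimal sketch with simulated data (the five-wave design and values are illustrative, not from the article):

```python
import numpy as np

# Simulated linear growth: y_t = 10 + 2*t + small noise, waves t = 0..4
rng = np.random.default_rng(1)
t = np.arange(5)
y = 10 + 2 * t + rng.normal(0, 0.01, size=5)

def fit_line(time, y):
    """OLS fit of y = b0 + b1*time; returns (intercept, slope)."""
    X = np.column_stack([np.ones_like(time, dtype=float), time])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return b

b_first = fit_line(t, y)       # time coded 0..4: intercept = initial status
b_center = fit_line(t - 2, y)  # time centered: intercept = status at wave 2
print(b_first[0], b_center[0])  # ~10 vs ~14; the slope is identical in both codings
```

Recoding time shifts which occasion the intercept (and its standard error) refers to, while leaving the slope untouched, which is exactly the kind of interpretation issue the article formalizes.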
A new distributed computing model of mobile spatial information service grid based on mobile agent
NASA Astrophysics Data System (ADS)
Tian, Gen; Liu, Miao-long
2009-10-01
A new distributed computing model of mobile spatial information service is studied based on a grid computing environment. Key technologies in the model are presented, including mobile agent (MA) distributed computing, grid computing, the spatial data model, location based service (LBS), the global positioning system (GPS), code division multiple access (CDMA), transfer control protocol/internet protocol (TCP/IP), and the user datagram protocol (UDP). To deal with the narrow bandwidth and instability of the wireless internet, the distributed organization of tremendous volumes of spatial data, and the limited processing speed and low memory of mobile devices, a new mobile agent based mobile spatial information service grid (MSISG) architecture is further proposed. It has good load balancing, high processing efficiency, and low network communication overhead, and is thus suitable for a mobile distributed computing environment. It can support applications of spatial information distributed computing and mobile service. The theoretical and technological architecture of MSISG is built from the ground up, including a spatial information mobile agent model, a distributed grid geographic information system (GIS) server model, a mobile agent server model, and a mobile GIS client model. An application system for MSISG was developed using Visual C++ and Embedded Visual C++. A field test was carried out with this system in Shanghai, and the results show that the proposed model and methods are feasible and adaptable for mobile spatial information service.
Assessment of Turbulence-Chemistry Interaction Models in the National Combustion Code (NCC) - Part I
NASA Technical Reports Server (NTRS)
Wey, Thomas Changju; Liu, Nan-suey
2011-01-01
This paper describes the implementations of the linear-eddy model (LEM) and an Eulerian FDF/PDF model in the National Combustion Code (NCC) for the simulation of turbulent combustion. The impacts of these two models, along with the so called laminar chemistry model, are then illustrated via the preliminary results from two combustion systems: a nine-element gas fueled combustor and a single-element liquid fueled combustor.
Diverse and pervasive subcellular distributions for both coding and long noncoding RNAs.
Wilk, Ronit; Hu, Jack; Blotsky, Dmitry; Krause, Henry M
2016-03-01
In a previous analysis of 2300 mRNAs via whole-mount fluorescent in situ hybridization in cellularizing Drosophila embryos, we found that 70% of the transcripts exhibited some form of subcellular localization. To see whether this prevalence is unique to early Drosophila embryos, we examined ∼8000 transcripts over the full course of embryogenesis and ∼800 transcripts in late third instar larval tissues. The numbers and varieties of new subcellular localization patterns are both striking and revealing. In the much larger cells of the third instar larva, virtually all transcripts observed showed subcellular localization in at least one tissue. We also examined the prevalence and variety of localization mechanisms for >100 long noncoding RNAs. All of these were also found to be expressed and subcellularly localized. Thus, subcellular RNA localization appears to be the norm rather than the exception for both coding and noncoding RNAs. These results, which have been annotated and made available on a recompiled database, provide a rich and unique resource for functional gene analyses, some examples of which are provided. PMID:26944682
Wohlin, Åsa
2015-03-21
The distribution of codons in the nearly universal genetic code is a long discussed issue. At the atomic level, the numeral series 2x² (x = 5-0) lies behind electron shells and orbitals. Numeral series appear in formulas for spectral lines of hydrogen. The question here was if some similar scheme could be found in the genetic code. A table of 24 codons was constructed (synonyms counted as one) for 20 amino acids, four of which have two different codons. An atomic mass analysis was performed, built on common isotopes. It was found that a numeral series 5 to 0 with exponent 2/3 times 10² revealed detailed congruency with codon-grouped amino acid side-chains, simultaneously with the division on atom kinds, further with main 3rd base groups, backbone chains and with codon-grouped amino acids in relation to their origin from glycolysis or the citrate cycle. Hence, it is proposed that this series in a dynamic way may have guided the selection of amino acids into codon domains. Series with simpler exponents also showed noteworthy correlations with the atomic mass distribution on main codon domains; especially the 2x²-series times a factor 16 appeared as a conceivable underlying level, both for the atomic mass and charge distribution. Furthermore, it was found that atomic mass transformations between numeral systems, possibly interpretable as dimension degree steps, connected the atomic mass of codon bases with codon-grouped amino acids and with the exponent 2/3-series in several astonishing ways. Thus, it is suggested that they may be part of a deeper reference system. PMID:25623487
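The two numeral series named in the abstract are easy to tabulate; a short sketch that only reproduces the arithmetic (it takes no position on the proposed biological interpretation):

```python
# The series 2x^2 (x = 5..1), familiar as electron shell capacities,
# and the series x^(2/3) * 10^2 (x = 5..1) referred to in the abstract.
series_2x2 = [2 * x**2 for x in range(5, 0, -1)]
series_23 = [round(x ** (2 / 3) * 100, 1) for x in range(5, 0, -1)]
print(series_2x2)  # [50, 32, 18, 8, 2]
print(series_23)   # [292.4, 252.0, 208.0, 158.7, 100.0]
```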
Modeling Soil Moisture Fields Using the Distributed Hydrologic Model MOBIDIC
NASA Astrophysics Data System (ADS)
Castillo, A. E.; Entekhabi, D.; Castelli, F.
2011-12-01
The Modello Bilancio Idrologico DIstributo e Continuo (MOBIDIC) is a fully-distributed, physically-based basin hydrologic model [Castelli et al., 2009]. MOBIDIC represents watersheds using a system of reservoirs that interact through both mass and energy fluxes. The model uses a single-layered soil on a grid. For each grid element, soil moisture is conceptually partitioned into gravitational (free) and capillary-bound water. For computational parsimony, a linear parameterization is used for infiltration rather than solving the nonlinear Richards equation. Previous applications of MOBIDIC assessed model performance based on streamflow, which is a flux. In this study, the MOBIDIC-simulated soil moisture, a state variable, is compared against observed values as well as values simulated by the legacy Simultaneous Heat and Water (SHAW) model [Flerchinger, 2000], which was chosen as the benchmark. Results of initial simulations with the original version of MOBIDIC prompted several model modifications, such as changing the parameterization of evapotranspiration and adding capillary rise, to make the model more robust in simulating the dynamics of soil moisture. In order to test the performance of the modified MOBIDIC, both short-term (a few weeks) and extended (multi-year) simulations were performed for 3 well-studied sites in the US: two sites are mountainous with a deep groundwater table and semiarid climate, while the third site is fluvial with a shallow groundwater table and temperate climate. For the multi-year simulations, both MOBIDIC and SHAW performed well in modeling the daily observed soil moisture. The simulations also illustrated the benefits of adding the capillary rise module and the other modifications introduced. Moreover, it was successfully demonstrated that MOBIDIC, with some conceptual approaches and some simplified parameterizations, can perform as well as, if not better than, the more sophisticated SHAW model. References Castelli, F., G. Menduni, and B
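The conceptual partition into gravitational and capillary stores, with a linear infiltration parameterization, can be caricatured per grid cell in a few lines. This is a hedged sketch with invented parameter names and values, not MOBIDIC's actual equations:

```python
def step_soil(Wg, Wc, rain, dt, ks=5.0, Wg_max=50.0, Wc_max=30.0,
              k_abs=0.1, k_perc=0.05, et=0.2):
    """One time step of a two-reservoir soil column (units: mm, hours).
    Wg: gravitational (free) water; Wc: capillary-bound water.
    Infiltration is linear in the available gravitational storage."""
    infil = min(rain * dt, ks * dt * (1 - Wg / Wg_max))  # linear parameterization
    runoff = rain * dt - infil          # excess rain becomes surface runoff
    Wg += infil
    absorb = min(k_abs * Wg * dt, Wc_max - Wc)  # free -> capillary absorption
    Wg -= absorb
    Wc = max(Wc + absorb - et * dt, 0.0)        # ET drawn from the capillary store
    perc = k_perc * Wg * dt                     # percolation out of the cell
    Wg -= perc
    return Wg, Wc, runoff

Wg, Wc, runoff = step_soil(0.0, 0.0, rain=2.0, dt=1.0)
print(Wg, Wc, runoff)
```

Iterating the step drives both stores toward a bounded equilibrium, which is the soil-moisture dynamic the study evaluates against observations.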
A Hierarchical Model for Distributed Seismicity
NASA Astrophysics Data System (ADS)
Tejedor, A.; Gomez, J. B.; Pacheco, A. F.
2009-04-01
maximum earthquake magnitude expected in the simulated zone. The model has two parameters, c and u. Parameter c, called the coordination number, is a geometric parameter. It represents the number of boxes in a level m connected to a box in level m + 1; parameter u is the fraction of load that rises in the hierarchy due to a relaxation process. Therefore, the fraction 1 - u corresponds to the load that descends in the same process. The only two parameters of the model are fixed taking into account three characteristics of natural seismicity: (i) the power-law relationship between the size of an earthquake and the area of the displaced fault; (ii) the fact, observed in geology, that the recurrence time of large faults is shorter than that of small faults; and (iii) the percentages of aftershocks and mainshocks observed in earthquake catalogs. The model shows self-organized critical behavior. This is manifest both in the observation of a steady state around which the load fluctuates, and in the power-law behavior of some of the properties of the system, like the size-frequency distribution of relaxations (earthquakes). The exponent of this power law is around -1 for values of the parameters consistent with the three previous phenomenological observations. Two different strategies for the forecasting of the largest earthquakes in the model have been analyzed. The first one only takes into account the average recurrence time of the target earthquakes, whereas the second utilizes a known precursory pattern, the burst of aftershocks, which has been used for real earthquake prediction. The application of the latter strategy significantly improves the results obtained with the former.
In summary, a conceptually simple model of the cellular automaton type with only two parameters can reproduce simultaneously several characteristics of real seismicity, like the Gutenberg-Richter law, shorter recurrence times for big faults compared to small ones, and percentages of aftershocks
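The load-transfer rule described above (a fraction u of the relaxed load ascends a hierarchy with coordination number c) can be sketched schematically. The toy below is illustrative only; the authors' cellular automaton and its calibration are not reproduced here:

```python
import random

def relax(levels, m, i, u=0.3, c=2, threshold=1.0, sizes=None):
    """Relax box i at level m: a fraction u of its load moves to its
    parent box; the remaining 1 - u descends/dissipates. Cascades upward
    when a parent in turn exceeds the threshold."""
    if sizes is None:
        sizes = []
    load = levels[m][i]
    levels[m][i] = 0.0
    sizes.append(c ** m)            # event size grows with hierarchy level
    if m + 1 < len(levels):
        parent = i // c
        levels[m + 1][parent] += u * load
        if levels[m + 1][parent] >= threshold:
            relax(levels, m + 1, parent, u, c, threshold, sizes)
    return sizes

# Three-level hierarchy with coordination number c = 2: 4, 2, 1 boxes
c = 2
levels = [[0.0] * c ** (2 - m) for m in range(3)]
random.seed(0)
events = []
for _ in range(10_000):
    i = random.randrange(len(levels[0]))
    levels[0][i] += 0.1             # slow "tectonic" loading at the base level
    if levels[0][i] >= 1.0:
        events.append(sum(relax(levels, 0, i, u=0.3, c=c)))
print(len(events), min(events), max(events))
```

Even this toy produces a mix of small base-level events and rarer multi-level cascades, the qualitative ingredient behind the size-frequency power law mentioned above.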
Flash flood modeling with the MARINE hydrological distributed model
NASA Astrophysics Data System (ADS)
Estupina-Borrell, V.; Dartus, D.; Ababou, R.
2006-11-01
Flash floods are characterized by their violence and the rapidity of their occurrence. Because these events are rare and unpredictable, but also fast and intense, their anticipation with sufficient lead time for warning and broadcasting is a primary subject of research. Because of the heterogeneities of the rain and of the behavior of the surface, spatially distributed hydrological models can lead to a better understanding of the processes and can thus contribute to better forecasting of flash floods. Our main goal here is to develop an operational and robust methodology for flash flood forecasting. This methodology should provide relevant data (information) about flood evolution on short time scales, and should be applicable even in locations where direct observations are sparse (e.g. absence of historical and modern rainfall and streamflow records in small mountainous watersheds). The flash flood forecast is obtained by the physically based, space-time distributed hydrological model "MARINE'' (Model of Anticipation of Runoff and INondations for Extreme events). This model is presented and tested in this paper for a real flash flood event. The model consists of two components: the first is a "basin'' flood module which generates flood runoff in the upstream part of the watershed, and the second is the "stream network'' module, which propagates the flood in the main river and its tributaries. The basin flash flood generation model is a rainfall-runoff model that can integrate remotely sensed data. Surface hydraulics equations are solved with enough simplifying hypotheses to allow real time exploitation. The minimum data required by the model are: (i) the Digital Elevation Model, used to calculate the slopes that generate runoff, which can be obtained from satellite imagery (SPOT) or from the French Geographical Institute (IGN); (ii) the rainfall data from meteorological radar, observed or anticipated by the French Meteorological Service (M
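Item (i), deriving runoff-generating slopes from a DEM, can be sketched with a simple D8 steepest-descent scan. This is a generic scheme, not necessarily the one MARINE implements:

```python
import numpy as np

def d8_slope(dem, cell=30.0):
    """Steepest downhill slope for each interior cell of a DEM grid
    (D8 scheme: eight neighbours; diagonal distance is sqrt(2)*cell)."""
    slope = np.zeros_like(dem)
    for r in range(1, dem.shape[0] - 1):
        for c in range(1, dem.shape[1] - 1):
            best = 0.0
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    if dr == 0 and dc == 0:
                        continue
                    dist = cell * (2 ** 0.5 if dr and dc else 1.0)
                    drop = (dem[r, c] - dem[r + dr, c + dc]) / dist
                    best = max(best, drop)
            slope[r, c] = best
    return slope

dem = np.array([[9, 8, 7], [8, 6, 5], [7, 5, 3]], dtype=float)
slopes = d8_slope(dem, cell=1.0)
print(slopes)  # only the centre cell is interior in this 3x3 grid
```

In an operational chain, the slope map would feed the runoff-generation module; border cells would need their own treatment (here they are simply left at zero).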
Modeling non-local thermodynamic equilibrium plasma using the Flexible Atomic Code data
NASA Astrophysics Data System (ADS)
Han, Bo; Wang, Feilu; Salzmann, David; Zhao, Gang
2015-04-01
We present a new code, RCF ("Radiative-Collisional code based on FAC"), which is used to simulate steady-state plasmas under non-local thermodynamic equilibrium conditions, especially photoionization-dominated plasmas. RCF takes almost all of the radiative and collisional atomic processes into a rate equation to treat the plasma systematically. The Flexible Atomic Code (FAC) supplies all the atomic data needed by RCF, which ensures the completeness and consistency of the atomic data. With four input parameters relating to the radiation source and target plasma, RCF calculates the populations of levels and charge states, as well as the potential emission spectrum. In a preliminary application, RCF successfully reproduced the results of a photoionization experiment with reliable atomic data. The effects of the most important atomic processes on the charge state distribution are also discussed.
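The steady-state rate-equation approach can be illustrated for a toy three-level system: collect all transition rates into a matrix, set dn/dt = 0, and close the singular system with a normalization row. The rates below are invented for illustration and are not FAC data:

```python
import numpy as np

# Rate matrix R[i, j] = rate of transitions j -> i (s^-1) for a toy
# three-level ion; the off-diagonal entries are hypothetical rates.
R = np.array([[0.0, 5.0, 1.0],
              [2.0, 0.0, 4.0],
              [0.5, 1.0, 0.0]])

# Steady state: dn_i/dt = sum_j R[i,j] n_j - n_i sum_j R[j,i] = 0,
# i.e. (R - diag(column sums)) n = 0, closed with sum(n) = 1.
A = R - np.diag(R.sum(axis=0))
A[-1, :] = 1.0                      # replace last row by the normalization
b = np.zeros(3)
b[-1] = 1.0
n = np.linalg.solve(A, b)
print(n)  # fractional level populations, summing to 1
```

A production code like RCF does the same linear-algebraic closure, but over thousands of levels and with rates for every radiative and collisional process FAC supplies.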
Procedural Code Generation vs Static Expansion in Modelling Languages for Constraint Programming
NASA Astrophysics Data System (ADS)
Martin, Julien; Martinez, Thierry; Fages, François
To make constraint programming easier to use for non-programmers, a lot of work has been devoted to the design of front-end modelling languages using logical and algebraic notations instead of programming constructs. The transformation to an executable constraint program can be performed by two fundamentally different compilation schemas: either by a static expansion of the model into a flat constraint satisfaction problem (e.g. Zinc, Rules2CP, Essence) or by generation of procedural code (e.g. OPL, Comet). In this paper, we compare both compilation schemas. For this, we consider the rule-based modelling language Rules2CP with its static expansion mechanism and describe with a formal system a new compilation schema which proceeds by generation of procedural code. We analyze the complexity of both compilation schemas, and present performance figures for both the compilation process and the generated code on a benchmark of scheduling and bin packing problems.
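Static expansion can be illustrated by unrolling a parametric model into a flat constraint list; the toy n-queens model below is illustrative and uses no Rules2CP syntax:

```python
from itertools import product

# A tiny parametric model: n-queens as "forall i < j: no attack(i, j)".
# Static expansion unrolls the quantifier into a flat list of ground
# constraints; a procedural compiler would instead emit loops that
# post the same constraints at runtime in the target language.
def expand_nqueens(n):
    flat = []
    for i, j in ((i, j) for i, j in product(range(n), range(n)) if i < j):
        flat.append(f"q[{i}] != q[{j}]")              # distinct columns
        flat.append(f"q[{i}] - q[{j}] != {i - j}")    # distinct / diagonals
        flat.append(f"q[{i}] - q[{j}] != {j - i}")    # distinct \ diagonals
    return flat

model = expand_nqueens(4)
print(len(model))  # 3 constraints per pair: C(4,2) * 3 = 18
```

The trade-off the paper studies is visible even here: the flat form grows quadratically with n, while procedural code stays constant-size and defers the expansion to solve time.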
Implementation of an anomalous radial transport model for continuum kinetic edge codes
NASA Astrophysics Data System (ADS)
Bodi, K.; Krasheninnikov, S. I.; Cohen, R. H.; Rognlien, T. D.
2007-11-01
Radial plasma transport in magnetic fusion devices is often dominated by plasma turbulence rather than neoclassical collisional transport. Continuum kinetic edge codes [such as the (2d,2v) transport version of TEMPEST and also EGK] compute the collisional transport directly, but there is a need to model the anomalous transport from turbulence for long-time transport simulations. Such a model is presented, and results are shown for its implementation in the TEMPEST gyrokinetic edge code. The model includes velocity-dependent convection and diffusion coefficients expressed as Hermite polynomials in velocity. The Hermite coefficients can be set, e.g., by specifying the ratio of particle and energy transport, as in fluid transport codes. The anomalous transport terms preserve the property of no particle flux into unphysical regions of velocity space. TEMPEST simulations are presented showing the separate control of particle and energy anomalous transport, and comparisons are made with neoclassical transport also included.
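A velocity-dependent coefficient expressed as a Hermite series can be evaluated directly with NumPy's physicists' Hermite routines; the coefficients below are hypothetical, chosen only to illustrate the expansion:

```python
import numpy as np
from numpy.polynomial.hermite import hermval

# A velocity-dependent anomalous diffusion coefficient written as a
# (physicists') Hermite series: D(v) = sum_k c_k H_k(v).
coeffs = [1.0, 0.0, 0.1]          # D(v) = c0 + c2*H2(v), H2(v) = 4v^2 - 2
v = np.linspace(-3, 3, 7)
D = hermval(v, coeffs)
print(D)  # 1 + 0.1*(4v^2 - 2) at each velocity node
```

Choosing the relative weights of the even Hermite coefficients is one way to tune the split between particle-like and energy-like transport that the abstract describes.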
MCNP(TM) Release 6.1.1 beta: Creating and Testing the Code Distribution
Cox, Lawrence J.; Casswell, Laura
2014-06-12
This report documents the preparations for and testing of the production release of MCNP6™1.1 beta through RSICC at ORNL. It addresses tests on supported operating systems (Linux, MacOSX, Windows) with the supported compilers (Intel, Portland Group and gfortran). Verification and Validation test results are documented elsewhere. This report does not address in detail the overall packaging of the distribution. Specifically, it does not address the nuclear and atomic data collection, the other included software packages (MCNP5, MCNPX and MCNP6) and the collection of reference documents.
NASA Astrophysics Data System (ADS)
Das, Debottam; Ellwanger, Ulrich; Teixeira, Ana M.
2012-03-01
The code NMSDECAY computes widths and branching ratios of sparticle decays in the Next-to-Minimal Supersymmetric Standard Model. It is based on a generalization of SDECAY to include the extended Higgs and neutralino sectors of the NMSSM. Slepton 3-body decays, possibly relevant in the case of a singlino-like lightest supersymmetric particle, have been added. NMSDECAY will be part of the NMSSMTools package, which computes Higgs and sparticle masses and Higgs decays in the NMSSM. Program summary Program title: NMSDECAY Catalogue identifier: AELC_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AELC_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 188 177 No. of bytes in distributed program, including test data, etc.: 1 896 478 Distribution format: tar.gz Programming language: FORTRAN77 Computer: All supporting g77, gfortran, ifort Operating system: All supporting g77, gfortran, ifort Classification: 11.1 External routines: Routines in the NMSSMTools package: At least one of the routines in the directory main (e.g. nmhdecay.f), all routines in the directory sources. (All software is included in the distribution package.) Nature of problem: Calculation of all decay widths and decay branching fractions of all particles in the Next-to-Minimal Supersymmetric Standard Model. Solution method: Suitable generalization of the code SDECAY [1], including the extended Higgs and neutralino sectors of the Next-to-Minimal Supersymmetric Standard Model, and slepton 3-body decays. Additional comments: NMSDECAY is interfaced with NMSSMTools, available on the web page http://www.th.u-psud.fr/NMHDECAY/nmssmtools.html. Running time: On an Intel Core i7 at 2.8 GHz: about 2 seconds per point in parameter space, if all flags flagqcd, flagmulti and flagloop are switched on.